---
id: quickstart_s3_connector
title: "S3Connector"
---

Setup
-----

```
libraryDependencies += "dev.zio" %% "zio-connect-s3" % "<version>"
```

How to use it?
-----

All available S3Connector combinators and operations live in the package object `zio.connect.s3`; you only need to import `zio.connect.s3._`.

First, you must configure the underlying S3 connection provided by `zio-aws`. You can read more about how to configure it [here][zio-aws].
If you have default credentials in the system environment, typically at `~/.aws/credentials` or as environment variables,
the following configuration will likely work.

[zio-aws]: https://zio.github.io/zio-aws/docs/overview/overview_config

```scala
import zio._
import zio.connect.s3._
import zio.stream._
import zio.aws.core.config.AwsConfig
import zio.aws.netty.NettyHttpClient

lazy val zioAwsConfig = NettyHttpClient.default >>> AwsConfig.default
```

Now let's create a bucket:

```scala
val bucketName = BucketName("this-very-charming-bucket-name") // BucketName is a zio-prelude newtype of String

val program1: ZIO[S3Connector, S3Exception, Unit] =
  for {
    _ <- ZStream(bucketName) >>> createBucket
  } yield ()
```

The way to understand this is to recognize that `createBucket` is a `ZSink` that expects elements of type `BucketName` as its streamed input.
In this case we have a `ZStream` with a single element of type `BucketName`, but we could have an arbitrary number of buckets and the code
would look and work virtually the same.
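
For instance, creating several buckets is just a matter of streaming more elements into the same sink. A minimal sketch, with hypothetical bucket names:

```scala
import zio._
import zio.connect.s3._
import zio.stream._

// Hypothetical bucket names, for illustration only
val manyBuckets = Chunk("bucket-a", "bucket-b", "bucket-c").map(BucketName(_))

// Same shape as program1, just with more elements flowing into the sink
val createMany: ZIO[S3Connector, S3Exception, Unit] =
  ZStream.fromChunk(manyBuckets) >>> createBucket
```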

Okay, let's put some readable bytes into that bucket:

```scala
val objectKey = ObjectKey("my-object") // ObjectKey is a zio-prelude newtype of String

val program2: ZIO[S3Connector, S3Exception, Unit] =
  for {
    content <- Random.nextString(100).map(_.getBytes).map(Chunk.fromArray)
    _       <- ZStream.fromChunk(content) >>> putObject(bucketName, objectKey)
  } yield ()
```

Here a chunk of bytes is streamed into the `putObject` sink. The sink takes two arguments: the bucket name and the object key to associate with the data
being streamed in.

Let's list the objects in the bucket:

```scala
val program3: ZIO[S3Connector, S3Exception, Chunk[ObjectKey]] =
  for {
    keys <- listObjects(bucketName).runCollect
  } yield keys
```

`listObjects` is a `ZStream` that emits elements of type `ObjectKey`, and we can use the `runCollect` operator to collect
all the elements into a `Chunk`.
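
If you don't need all the keys in memory at once, the same stream can be consumed incrementally. A sketch, using `bucketName` as defined earlier:

```scala
import zio._
import zio.connect.s3._

// Print each key as it arrives instead of collecting everything into a Chunk
val printKeys: ZIO[S3Connector, Object, Unit] =
  listObjects(bucketName).runForeach(key => Console.printLine(s"found: $key"))
```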

Here's what it looks like to get an object put earlier:

```scala
val program4: ZIO[S3Connector, Object, String] =
  for {
    content <- getObject(bucketName, objectKey) >>> ZPipeline.utf8Decode >>> ZSink.mkString
  } yield content
```

Finally, let's look at how to run one of these programs:

```scala
def run = program1.provide(zioAwsConfig, S3.live, s3ConnectorLiveLayer)
```

You need to provide the configuration layer for `zio-aws`, the `S3` layer from `zio-aws`, and the `s3ConnectorLiveLayer`,
which is the live implementation of the `S3Connector` interface.

Test / Stub
-----------

A stub implementation of `S3Connector` is provided for testing purposes via `TestS3Connector.layer`. Internally it uses
a `TRef[Map[BucketName, S3Bucket]]` instead of talking to S3. You can create the test harness as follows:

```scala
import zio.connect.s3._

object MyTestSpec extends ZIOSpecDefault {

  override def spec =
    suite("MyTestSpec")(???)
      .provide(s3ConnectorTestLayer)

}
```
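
Because the stub keeps everything in memory, a test can exercise a full round trip without AWS credentials. A minimal sketch (the spec name and the bucket/object values are made up for illustration):

```scala
import zio._
import zio.connect.s3._
import zio.stream._
import zio.test._

import java.nio.charset.StandardCharsets

object RoundTripSpec extends ZIOSpecDefault {

  override def spec =
    suite("RoundTripSpec")(
      test("putObject followed by getObject returns the same bytes") {
        val bucket = BucketName("test-bucket")   // hypothetical values
        val key    = ObjectKey("greeting.txt")
        val bytes  = Chunk.fromArray("hello".getBytes(StandardCharsets.UTF_8))
        for {
          _      <- ZStream(bucket) >>> createBucket
          _      <- ZStream.fromChunk(bytes) >>> putObject(bucket, key)
          actual <- getObject(bucket, key).runCollect
        } yield assertTrue(actual == bytes)
      }
    ).provide(s3ConnectorTestLayer)

}
```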

Operators & Examples
----

The following operators are available:

## `copyObject`

Copies an object from one bucket to another.

```scala
ZStream(CopyObject(bucket1, objectKey, bucket2)) >>> copyObject
```

## `createBucket`

Creates S3 buckets.

```scala
ZStream(bucketName1, bucketName2) >>> createBucket
```

The buckets must be empty; if they are not, you will get a `BucketsNotEmptyException` from S3.

## `deleteEmptyBucket`

Deletes empty S3 buckets.

```scala
ZStream(bucketName1, bucketName2) >>> deleteEmptyBucket
```

The buckets must be empty; if they are not, you will get a `BucketsNotEmptyException` from S3.

## `deleteObjects`

Deletes objects from an S3 bucket.

```scala
ZStream(objectKey1, objectKey2) >>> deleteObjects(bucketName)
```

Does not result in an error if the object keys do not exist.

## `existsBucket`

Checks if a bucket exists.

```scala
ZStream(bucketName1, bucketName2) >>> existsBucket
```

## `existsObject`

Checks if an object exists in an S3 bucket.

```scala
ZStream(objectKey1, objectKey2) >>> existsObject(bucketName)
```

It expects the bucket to exist and will fail with a `NoSuchBucketException` if the _bucket_ does not.

## `getObject`

Gets an object from an S3 bucket.

```scala
getObject(bucket2, objectKey) >>> ZPipeline.utf8Decode >>> ZSink.mkString
```

You will receive the object as a stream of bytes; parsing/decoding of course depends on the object's contents.
The example here assumes you have a stream of UTF-8 encoded bytes and you want to decode them into a string.
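
If the object contains UTF-8 text with one record per line, decoding composes naturally with line splitting. A sketch; the bucket and object names are placeholders:

```scala
import zio._
import zio.connect.s3._
import zio.stream._

val textBucket = BucketName("some-bucket")  // placeholder
val textKey    = ObjectKey("some-object")   // placeholder

// Decode the bytes as UTF-8, then split the text into individual lines
val lines: ZIO[S3Connector, Object, Chunk[String]] =
  getObject(textBucket, textKey) >>> ZPipeline.utf8Decode >>> ZPipeline.splitLines >>> ZSink.collectAll
```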

## `listBuckets`

Lists all buckets in the account.

```scala
listBuckets >>> ZSink.collectAll
```

Currently this gets ALL buckets; there is no pagination support yet. You may want to use other ZStream combinators
to filter the stream prior to collecting bucket names.
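
Since `listBuckets` is an ordinary `ZStream`, the usual combinators apply. A sketch keeping only buckets with a given (hypothetical) prefix, assuming `BucketName.unwrap` is the zio-prelude newtype accessor:

```scala
import zio._
import zio.connect.s3._
import zio.stream._

// Keep only buckets whose name starts with a hypothetical project prefix
val projectBuckets: ZIO[S3Connector, S3Exception, Chunk[BucketName]] =
  listBuckets
    .filter(name => BucketName.unwrap(name).startsWith("zio-connect-"))
    .run(ZSink.collectAll)
```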

## `listObjects`

Lists all object keys in a bucket; takes a `BucketName` as an argument.

```scala
listObjects(bucketName) >>> ZSink.collectAll
```

Currently this gets ALL objects in the bucket; there is no pagination support yet. You may want to use other ZStream combinators
to filter the stream prior to collecting object keys.

## `moveObject`

Moves an object from one bucket to another.

```scala
ZStream(MoveObject(sourceBucket, sourceKey, targetBucket, targetKey)) >>> moveObject
```

The `sourceBucket`, `sourceKey`, and `targetBucket` must exist. If the `targetKey` exists, it will be overwritten.

## `putObject`

Puts an object into an S3 bucket.

```scala
ZStream.fromChunk(content) >>> putObject(bucketName, objectKey)
```

Expects a stream of bytes and returns `Unit` if successful.

Example
-------

A full example, from `examples/s3-connector-examples/src/main/scala/Example1.scala`:

```scala
import zio._
import zio.aws.core.config.AwsConfig
import zio.aws.netty.NettyHttpClient
import zio.aws.s3.S3
import zio.connect.s3.S3Connector._
import zio.connect.s3._
import zio.stream._

import java.nio.charset.StandardCharsets

object Example1 extends ZIOAppDefault {

  // Please read https://zio.github.io/zio-aws/docs/overview/overview_config to learn more about configuring/authenticating zio-aws.
  // This configuration will work provided you have default AWS credentials, i.e. an access key and secret key in your `.aws` directory.
  lazy val zioAwsConfig = NettyHttpClient.default >>> AwsConfig.default

  // The program does the following:
  // 1. Creates two random buckets
  // 2. Puts a quote as a text file into one bucket
  // 3. Copies it to the other
  // 4. Lists the objects in both buckets
  // 5. Gets the quote back from the second bucket
  // 6. Deletes the objects in both buckets
  // 7. Checks for the existence of the objects in both buckets
  // 8. Deletes the buckets provided they are empty
  val program: ZIO[S3Connector, Object, String] = {
    for {
      bucket1          <- Random.nextUUID.map(_.toString).map(uuid => BucketName(s"zio-connect-s3-bucket-$uuid"))
      bucket2          <- Random.nextUUID.map(_.toString).map(uuid => BucketName(s"zio-connect-s3-bucket-$uuid"))
      _                <- ZStream(bucket1, bucket2).run(createBucket)
      buckets          <- listBuckets.runCollect
      objectKey         = ObjectKey("quote.txt")
      _                <- ZStream.fromIterable(quote.getBytes(StandardCharsets.UTF_8)).run(putObject(bucket1, objectKey))
      _                <- ZStream(CopyObject(bucket1, objectKey, bucket2)).run(copyObject)
      objectsPerBucket <- ZIO.foreach(buckets)(bucket => listObjects(bucket).runCollect.map((bucket, _)))
      _                <- ZIO.foreach(objectsPerBucket) { case (bucket, objects) =>
                            Console.printLine(s"Objects in bucket $bucket: ${objects.mkString}")
                          }
      text             <- getObject(bucket2, objectKey) >>> ZPipeline.utf8Decode >>> ZSink.mkString
      _                <- ZIO.foreachPar(buckets)(bucket => ZStream(objectKey).run(deleteObjects(bucket)))
      bucketsNonEmpty  <- ZIO.foreachPar(buckets)(bucket => ZStream(objectKey).run(existsObject(bucket)))
      _                <- ZStream
                            .fromChunk(buckets)
                            .run(deleteEmptyBucket)
                            .when(bucketsNonEmpty.forall(_ == false))
                            .orElseFail(new RuntimeException("Could not delete non-empty buckets"))
    } yield text
  }

  override def run: ZIO[Any with ZIOAppArgs with Scope, Object, String] =
    program
      .provide(zioAwsConfig, S3.live, s3ConnectorLiveLayer)
      .tapBoth(
        error => Console.printLine(s"error: ${error}"),
        text => Console.printLine(s"${text} ==\n ${quote}\nis ${text == quote}")
      )

  private def quote =
    "You should give up looking for lost cats and start searching for the other half of your shadow"

}
```