A Pulumi component that synchronizes a local folder to Amazon S3, Azure Blob Storage, or Google Cloud Storage.
The component is available in these Pulumi-supported languages:

- JavaScript/TypeScript: `@pulumi/synced-folder`
- Python: `pulumi_synced_folder`
- Go: `github.com/pulumi/pulumi-synced-folder/sdk/go/synced-folder`
- .NET: `Pulumi.SyncedFolder`
- YAML
Given a cloud-storage bucket and the path to a local folder, the component synchronizes files from the folder to the bucket, deleting any files in the destination bucket that don't exist locally. It does this in one of two ways:
- By managing each file as an individual Pulumi resource (`aws.s3.BucketObject`, `azure.storage.Blob`, or `gcp.storage.BucketObject`). This is the component's default behavior.
- By delegating sync responsibility to a cloud provider CLI (e.g., `aws`, `az`, or `gcloud`/`gsutil`). This behavior is enabled by setting the `managedObjects` input property to `false` and ensuring the relevant CLI tool is installed alongside `pulumi`.

The former approach (having Pulumi manage your resources for you) is generally preferable, but in some cases, such as a website consisting of thousands of files, it may not be the best fit. (When using this approach with Pulumi Cloud as your backend, for instance, it will increase the number of resources under management and could affect your pricing.) This component lets you choose the approach that works best for you, without having to break out of your Pulumi program or workflow.
Below are a few examples in Pulumi YAML, each of which assumes the existence of a `site` folder containing one or more files to be uploaded. See the examples folder for additional languages and scenarios.

Here, a local folder, `./site`, is pushed to Amazon S3, its contents managed as individual `s3.BucketObject` resources:
```yaml
name: synced-folder-examples-aws-yaml
runtime: yaml
description: An example of using the synced-folder component.
resources:
  s3-bucket:
    type: aws:s3:Bucket
    properties:
      acl: public-read
      website:
        indexDocument: index.html
        errorDocument: error.html
  # 👇
  synced-bucket-folder:
    type: synced-folder:index:S3BucketFolder
    properties:
      path: ./site
      bucketName: ${s3-bucket.bucket}
      acl: public-read
outputs:
  url: http://${s3-bucket.websiteEndpoint}
```
Here, the folder's contents are synced to an Azure Blob Storage container, but instead of managing each file as an `azure.storage.Blob`, the component invokes the Azure CLI (specifically the `az storage blob sync` command) with Pulumi Command. The optional `managedObjects` property lets you configure this behavior on a folder-by-folder basis.
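For a sense of what this delegation amounts to, the CLI call is roughly the following. This is an illustrative sketch only: the account and container names are placeholders, and the exact flags the component passes may differ.

```shell
# Roughly the command the component delegates to when managedObjects is false.
# "mystorageaccount" and "$web" are placeholder names, not values from this example.
az storage blob sync \
  --account-name mystorageaccount \
  -c '$web' \
  -s ./site
```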
```yaml
name: synced-folder-examples-azure-yaml
runtime: yaml
description: An example of using the synced-folder component in YAML.
resources:
  resource-group:
    type: azure-native:resources:ResourceGroup
  storage:
    type: azure-native:storage:StorageAccount
    properties:
      resourceGroupName: ${resource-group.name}
      kind: StorageV2
      sku:
        name: Standard_LRS
  website:
    type: azure-native:storage:StorageAccountStaticWebsite
    properties:
      resourceGroupName: ${resource-group.name}
      accountName: ${storage.name}
      indexDocument: index.html
      error404Document: error.html
  # 👇
  synced-azure-blob-folder:
    type: synced-folder:index:AzureBlobFolder
    properties:
      path: ./site
      resourceGroupName: ${resource-group.name}
      storageAccountName: ${storage.name}
      containerName: ${website.containerName}
      managedObjects: false # Sync files with the Azure CLI.
outputs:
  url: ${storage.primaryEndpoints.web}
```
Here, `./site` is synced to a Google Cloud Storage bucket.
```yaml
name: synced-folder-examples-google-cloud-yaml
runtime: yaml
description: An example of using the synced-folder component in YAML.
resources:
  gcp-bucket:
    type: gcp:storage:Bucket
    properties:
      location: US
      website:
        mainPageSuffix: index.html
        notFoundPage: error.html
  gcp-bucket-iam-binding:
    type: gcp:storage:BucketIAMBinding
    properties:
      bucket: ${gcp-bucket.name}
      role: roles/storage.objectViewer
      members:
        - allUsers
  # 👇
  synced-google-cloud-folder:
    type: synced-folder:index:GoogleCloudFolder
    properties:
      path: ./site
      bucketName: ${gcp-bucket.name}
outputs:
  url: https://storage.googleapis.com/${gcp-bucket.name}/index.html
```
The following input properties are common to all three resource types:

| Property | Type | Description |
| --- | --- | --- |
| `path` | string | The path (relative or fully-qualified) to the folder containing the files to be synced. Required. |
| `managedObjects` | boolean | Whether to have Pulumi manage files as individual cloud resources. Defaults to `true`. See below for details. |
Additional resource-specific properties are listed below.

For `S3BucketFolder`:

| Property | Type | Description |
| --- | --- | --- |
| `bucketName` | string | The name of the S3 bucket to sync to (e.g., `my-bucket` in `s3://my-bucket`). Required. |
| `acl` | string | The AWS Canned ACL to apply to each file (e.g., `public-read`). Required. |
For `AzureBlobFolder`:

| Property | Type | Description |
| --- | --- | --- |
| `containerName` | string | The name of the Azure storage container to sync to. Required. |
| `storageAccountName` | string | The name of the Azure storage account that the container belongs to. Required. |
| `resourceGroupName` | string | The name of the Azure resource group that the storage account belongs to. Required. |
For `GoogleCloudFolder`:

| Property | Type | Description |
| --- | --- | --- |
| `bucketName` | string | The name of the Google Cloud Storage bucket to sync to (e.g., `my-bucket` in `gs://my-bucket`). Required. |
By default, the component manages your files as individual Pulumi cloud resources, but you can opt out of this behavior by setting the component's `managedObjects` property to `false`. When you do this, the component assumes you've installed the appropriate CLI tool (`aws`, `az`, or `gcloud`/`gsutil`, depending on the cloud) and uses the Command provider to issue commands on that tool directly. Files are one-way synchronized only (local to remote), and files that exist remotely but not locally are deleted. All files are deleted from remote storage on `pulumi destroy`.
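The one-way sync semantics above can be sketched as a simple set computation. This is a hypothetical illustration of the behavior described, not the component's actual implementation; `plan_sync` is a made-up helper name.

```python
from pathlib import Path

def plan_sync(local_dir: str, remote_keys: set) -> tuple:
    """Return (keys_to_upload, keys_to_delete) for a one-way local-to-remote sync.

    Every local file is pushed to the remote; any remote key with no
    corresponding local file is deleted.
    """
    root = Path(local_dir)
    local_keys = {
        p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file()
    }
    uploads = local_keys                  # all local files are (re)uploaded
    deletions = remote_keys - local_keys  # remote-only files are removed
    return uploads, deletions
```

For example, a local folder containing only `index.html` synced against a remote holding `index.html` and a stale `old.html` would upload `index.html` and delete `old.html`.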
The component does not yet support switching seamlessly between `managedObjects: true` and `managedObjects: false`, however, so if you find after deploying a given folder with managed objects that you'd prefer to use unmanaged objects instead (or vice versa), we recommend creating a second bucket/storage container and folder and removing the first. You can generally do this within the scope of a single program update. For example:
```yaml
# ...
resources:
  # The original bucket and synced-folder resources, using managed file objects.
  #
  # my-first-bucket:
  #   type: aws:s3:Bucket
  #   properties:
  #     acl: public-read
  #     website:
  #       indexDocument: index.html
  #       errorDocument: error.html
  #
  # my-first-synced-folder:
  #   type: synced-folder:index:S3BucketFolder
  #   properties:
  #     path: ./stuff
  #     bucketName: ${my-first-bucket.bucket}
  #     acl: public-read

  # A new bucket and synced-folder using unmanaged file objects.
  changed-my-mind-bucket:
    type: aws:s3:Bucket
    properties:
      acl: public-read
      website:
        indexDocument: index.html
        errorDocument: error.html
  changed-my-mind-synced-folder:
    type: synced-folder:index:S3BucketFolder
    properties:
      path: ./stuff
      bucketName: ${changed-my-mind-bucket.bucket}
      acl: public-read
      managedObjects: false
outputs:
  # An updated program reference pointing to the new bucket.
  url: http://${changed-my-mind-bucket.websiteEndpoint}
```