New Pipelines Destination: AWS S3
A new Pipelines data destination for AWS S3 is now generally available to Golioth users. It is the first object storage destination for Golioth Pipelines, and it opens up a new set of data streaming use-cases, such as uploading images and audio. It can also be useful in scenarios where large volumes of data are collected, then batch processed at a later time.
How It Works
The `aws-s3` data destination uploads events routed to it as objects in the specified bucket. The name of each object corresponds to its event ID, and objects are organized into directories, one per device ID.
```
/
├─ 664b9e889a9590ccfcf822b3/
│  ├─ 28ebd981-80ae-467f-b700-ba00e7c1e3ee
│  ├─ e47e5b46-d4e3-4bf1-a413-9fc71ec9f6b0
│  ├─ ...
├─ 66632a45658c93af0895a70e/
├─ .../
```
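This layout makes later batch processing straightforward: a job can retrieve everything a device has streamed by listing objects with the device ID as a key prefix. Here is a minimal sketch using Python and boto3, assuming the `my-bucket` name from the pipeline example below; the bucket and device ID are purely illustrative.

```python
import boto3

# Illustrative values; substitute your own bucket and device ID.
BUCKET = "my-bucket"
DEVICE_ID = "664b9e889a9590ccfcf822b3"

s3 = boto3.client("s3")

# Each object key is "<device-id>/<event-id>", so listing with the device ID
# as a prefix returns every event uploaded for that device.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{DEVICE_ID}/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        print(f"{key}: {len(body)} bytes")
```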
Data of any content type, including the aforementioned media use-cases as well as more traditional structured sensor data, can be routed to the `aws-s3` data destination. To authenticate, an IAM access key and secret key must be created as secrets, then referenced in the pipeline configuration. It is recommended to limit the permissions of the IAM user to `PutObject` for the specified bucket.
```yaml
filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_SECRET_KEY
        region: us-east-1
```
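To follow that least-privilege recommendation, the IAM user's policy can be scoped to `s3:PutObject` on the target bucket alone. Below is a minimal sketch using Python and boto3; the user name `golioth-pipelines` and the policy name are hypothetical, and `my-bucket` matches the example above.

```python
import json

import boto3

iam = boto3.client("iam")

# Inline policy granting only s3:PutObject on objects in the target bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}

# "golioth-pipelines" is a hypothetical IAM user created for this destination.
iam.put_user_policy(
    UserName="golioth-pipelines",
    PolicyName="golioth-s3-putobject",
    PolicyDocument=json.dumps(policy),
)
```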
You can use this pipeline directly in your Golioth project.
For more details on the `aws-s3` data destination, see the documentation.
What’s Next
While any existing uses of the Golioth Firmware SDK that leverage data streaming will work with the `aws-s3` data destination, we'll be introducing examples that demonstrate new use-cases in the coming weeks. Also, stay tuned for more object storage data destinations, and reach out on the forum if you have a use-case that is not currently well supported by Pipelines!