A new Pipelines data destination for Amazon Kinesis is now generally available for Golioth users. Amazon Kinesis Data Streams is a massively scalable, highly durable data ingestion and processing service optimized for streaming data.
How It Works
The aws-kinesis data destination is similar to the aws-sqs destination for Amazon Simple Queue Service (SQS). However, in addition to letting you take advantage of the unique capabilities of Kinesis Data Streams, the aws-kinesis data destination passes along the exact data delivered to it, regardless of format. This gives you more control over the data that ends up in your data stream, but it also means that you’ll need to use a transformer like inject-metadata if you want device ID, project ID, and other metadata included in each message. The device ID is also used as the data stream partition key.
Like other AWS destinations, the aws-kinesis data destination accepts access_key, access_secret, and region parameters. It also accepts a stream_arn parameter formatted as an Amazon Resource Name (ARN).
filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: aws-kinesis
      version: v1
      parameters:
        stream_arn: arn:aws:kinesis:us-east-1:123456789:stream/pipelines
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_ACCESS_SECRET
        region: us-east-1
Click here to use this pipeline in your Golioth project!
For more details on the aws-kinesis data destination, go to the documentation.
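Because the aws-kinesis destination forwards payloads exactly as received, you may want to pair it with the inject-metadata transformer mentioned above so that device ID, project ID, and timestamp information arrives with each record. Below is a minimal sketch of such a pipeline; it assumes JSON payloads and that the inject-metadata transformer is available at version v1 in your project, and reuses the same stream and credentials as the example above.

filter:
  path: "*"
steps:
  - name: step0
    # assumption: inject-metadata adds device ID, project ID, and timestamp to JSON payloads
    transformer:
      type: inject-metadata
      version: v1
    destination:
      type: aws-kinesis
      version: v1
      parameters:
        stream_arn: arn:aws:kinesis:us-east-1:123456789:stream/pipelines
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_ACCESS_SECRET
        region: us-east-1

With metadata injected, downstream consumers can identify the originating device and project from the record body itself rather than relying solely on the partition key.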
What’s Next
Amazon Kinesis is especially well-suited for high volume data and applications in which there are multiple consumers of the streamed data. It also offers a wide range of integrations with other AWS services, making it a foundational component for customers already leveraging AWS. Reach out to us on the forum if you have any questions on the aws-kinesis data destination or are using an alternative service that does not currently have a native integration in Golioth Pipelines!

