New Pipelines Transformer: Webhook

A new Pipelines transformer for calling an external webhook for data transformation is now generally available for Golioth users. The webhook transformer, not to be confused with the webhook data destination, dramatically expands the capabilities of Pipelines by enabling users to target any existing public API or perform arbitrary data transformations by writing their own code.

Note: Data egress when using the webhook transformer incurs usage. See Golioth’s pricing options for more information on costs and our generous free tier.

How It Works

The webhook transformer accepts data of any content type and sends it to the specified URL as an HTTP POST request. The response replaces the outgoing data, allowing it to be passed through further transformers before being delivered to one or more data destinations.

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: convert-json
    transformer:
      type: cbor-to-json
  - name: external-transform
    transformer:
      type: webhook
      parameters:
        url: https://temp-converter.golioth.workers.dev/
  - name: send-lightdb-stream
    destination:
      type: lightdb-stream
      version: v1

Click here to use this pipeline in your Golioth project!

In the pipeline above, the webhook transformer is used to target a Cloudflare Worker that converts temperature data from Celsius to Fahrenheit. CBOR-encoded device data arrives at the pipeline, is converted to JSON, and is then delivered to the webhook transformer. The worker is a minimal JavaScript function that checks whether a temp field is present in the JSON data and, if so, converts it.

export default {
    async fetch(request) {
        const body = await request.json();
        // Convert Celsius to Fahrenheit when a numeric temp field is present.
        // An explicit type check avoids skipping a legitimate 0 °C reading.
        if (typeof body.temp === 'number') {
            body.temp = body.temp * 9 / 5 + 32;
        }
        return Response.json(body, { status: 200 });
    },
};

Finally, data is delivered to LightDB Stream, where it can be observed in the Golioth console. In the following example, CBOR payloads that included a temp field with values 34.5 and 32, respectively, arrived at the pipeline. Each had its temp value converted from Celsius to Fahrenheit (94.1 and 89.6).

Temperature converted from Celsius to Fahrenheit in LightDB Stream

For more information on the webhook transformer, go to the documentation.

What’s Next

This post details one of the simplest cases of using the webhook transformer, but the possibilities it enables are endless. Keep an eye out for more examples on the blog next week, and let us know how you are using Golioth Pipelines on the forum!

New Pipelines Transformer: Embed in JSON

A new Pipelines transformer for embedding data in JSON is now generally available for Golioth users. The embed-in-json transformer enables data streamed from devices to be embedded as a string value for a key in a JSON object, where the object can then be further augmented before being delivered to a destination that expects a JSON payload.

How It Works

The embed-in-json transformer accepts data of any content type, escapes it if necessary, then embeds it in a JSON object under the key specified in the transformer parameters. For example, in the pipeline shown below, the data payload will be embedded as a UTF-8 string value for the key text.

filter:
  path: "*"
steps:
  - name: embed
    transformer:
      type: embed-in-json
      parameters:
        key: text
  - name: send-webhook
    destination:
      type: webhook
      version: v1
      parameters:
        url: $MY_WEBHOOK

Click here to use this pipeline in your Golioth project!

Therefore, if a device sent a payload containing hello, world, the POST request delivered to the webhook would be:

{"text": "hello, world"}

However, it is common to combine the embed-in-json transformer with other transformers. For example, if devices are sending binary data, it may be useful to encode that data as text before embedding it in the JSON object. One way to accomplish this is by utilizing the recently announced base64 transformer.

filter:
  path: "*"
steps:
  - name: encode
    transformer:
      type: base64
  - name: embed
    transformer:
      type: embed-in-json
      parameters:
        key: text
  - name: send-webhook
    destination:
      type: webhook
      version: v1
      parameters:
        url: $SLACK_WEBHOOK

Click here to use this pipeline in your Golioth project!

This specific pipeline is one we use in many of our internal Golioth projects when we want to temporarily inspect unknown binary data being sent from devices. Consider the following binary data (displayed as hex encoded for readability).

A1 64 74 65 6D 70 18 20

When presented to the pipeline, the data will first be Base64 encoded, yielding the following result.

oWR0ZW1wGCA=

Then, the Base64 encoded data will be embedded in a JSON object.

{"text": "oWR0ZW1wGCA="}

This payload is finally delivered to the Slack webhook, which results in the following message being delivered to a channel in our workspace.

Slack message showing base64 encoded data.

From there, we are able to inspect the data sent by the device, in this case determining that it is CBOR data with the following content.

A1             # map(1)
   64          # text(4)
      74656D70 # "temp"
   18 20       # unsigned(32)

Sometimes we even like to provide a little extra information to our messages. For example, it would be helpful if the message in the Slack channel also told us which device sent the payload. This can be accomplished by incorporating the inject-metadata transformer, then using the json-patch transformer to craft a payload that adheres to Slack’s rich message layout formatting.

filter:
  path: "*"
steps:
  - name: embed
    transformer:
      type: embed-in-json
      parameters:
        key: text
  - name: metadata
    transformer:
      type: inject-metadata
  - name: patch
    transformer:
      type: json-patch
      parameters:
        patch: |
          [
            {
              "op": "add",
              "path": "/blocks",
              "value": [
                {
                  "type": "rich_text",
                  "elements": [
                    {
                      "type": "rich_text_section",
                      "elements": [
                        {
                          "type": "text",
                          "text": "Device ID: ",
                          "style": {
                            "bold": true
                          }
                        },
                        {
                          "type": "text",
                          "text": "REPLACE"
                        }
                      ]
                    },
                    {
                      "type": "rich_text_section",
                      "elements": [
                        {
                          "type": "text",
                          "text": "Message: ",
                          "style": {
                            "bold": true
                          }
                        },
                        {
                          "type": "text",
                          "text": "REPLACE"
                        }
                      ]
                    }
                  ]
                }
              ]
            },
            {
              "op": "move",
              "from": "/device_id",
              "path": "/blocks/0/elements/0/elements/1/text"
            },
            {
              "op": "move",
              "from": "/data/text",
              "path": "/blocks/0/elements/1/elements/1/text"
            },
            {
              "op": "remove",
              "path": "/data"
            },
            {
              "op": "remove",
              "path": "/timestamp"
            },
            {
              "op": "remove",
              "path": "/device_id"
            },
            {
              "op": "remove",
              "path": "/project_id"
            }
          ]
  - name: send-webhook
    destination:
      type: webhook
      version: v1
      parameters:
        url: $SLACK_WEBHOOK

Click here to use this pipeline in your Golioth project!

The same payload through this pipeline now produces the following formatted message.

Slack message showing device ID and data message.

For more information on the embed-in-json transformer, go to the documentation.

What’s Next

Because of the broad set of services with APIs that accept JSON requests, the ability to embed data payloads using the embed-in-json transformer enables targeting many more destinations. We’ll be sharing more examples, and we look forward to hearing more about how users are leveraging Golioth Pipelines on the forum.

New Pipelines Transformer: Base64

A new Pipelines transformer for Base64 encoding and decoding is now generally available for Golioth users. The base64 transformer is useful when working with sources and destinations where handling binary data is difficult or unsupported.

How It Works

By default, the base64 transformer encodes the message payload as Base64 data. In the following example, the data is delivered to the recently announced aws-s3 data destination after being encoded. The content type following encoding will be text/plain.

filter:
  path: "*"
steps:
  - name: step0
    transformer:
      type: base64
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_ACCESS_SECRET
        region: us-east-1

Click here to use this pipeline in your Golioth project!

Supplying the decode: true parameter will result in the base64 transformer decoding data rather than encoding. In the following example, Base64 data is decoded before being delivered to the recently announced kafka data destination. The content type following decoding will be application/octet-stream.

filter:
  path: "*"
steps:
  - name: step0
    transformer:
      type: base64
      parameters:
        decode: true
    destination:
      type: kafka
      version: v1
      parameters:
        brokers:
          - my.kafka.broker.com:9092
        topic: my-topic
        username: pipelines-user
        password: $KAFKA_PASSWORD
        sasl_mechanism: PLAIN

Click here to use this pipeline in your Golioth project!

For more details on the base64 transformer, go to the documentation.

What’s Next

The base64 transformer has already proved useful for presenting binary data in a human-readable format. However, it becomes even more useful when paired with other transformers, some of which we’ll be announcing in the coming days. In the meantime, share how you are using Pipelines on the forum and let us know if you have a use-case that is not currently well-supported!

New Pipelines Data Destination: Kafka

A new Pipelines data destination for Kafka is now generally available for Golioth users. Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. There are many cloud-hosted services offering Kafka or Kafka-compatible APIs.

How It Works

Similar to the existing gcp-pubsub and azure-event-hubs destinations, the kafka data destination publishes events to the specified topic. Multiple brokers can be configured, and the PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512 SASL mechanisms are supported for authentication. All data in transit is encrypted with Transport Layer Security (TLS).

Data of any content type can be delivered to the kafka destination. Metadata, such as Golioth device ID and project ID, will be included in the event metadata, and the event timestamp will match that of the data message.

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: kafka
      version: v1
      parameters:
        brokers:
          - my-kafka-cluster.com:9092
        username: kafka-user
        password: $KAFKA_PASSWORD
        topic: my-topic
        sasl_mechanism: SCRAM-SHA-256

Click here to use this pipeline in your Golioth project!

For more details on the kafka data destination, go to the documentation.

What’s Next

Kafka has been one of the most requested data destinations by Golioth users, and we are excited to see all of the new platforms that can be leveraged as part of the integration. Keep an eye on the Golioth blog for more upcoming Pipelines features, and reach out on the forum if you have a use-case that is not currently well-supported!

New Pipelines Transformer: Struct to JSON

A new Struct-to-JSON Pipelines transformer is now generally available to Golioth users. This transformer takes structured binary data, such as that created using packed C structs, and converts it into JSON according to a user-provided schema. To see this transformer in action, check out the example in our documentation or watch our recent Hackday recap stream.

Compact and Low Overhead

Historically, many IoT devices have sent structured binary data to cloud applications, which then needed to unpack that data and serialize it into a format that downstream services can understand. It’s easier and faster to populate a struct than it is to serialize data into a common format in C, and the data is often significantly more compact, which is especially important for devices using constrained transports such as cellular, Bluetooth, or LoRa. As embedded devices have gotten more powerful and efficient, many of these devices have started using serialization formats like CBOR, JSON, or Protocol Buffers, which are easier for cloud applications to work with. These serialization formats have significant advantages over packed C structs in terms of flexibility and interoperability, and for most cases that’s what we recommend our customers use.

But there are still cases where sending raw binary data could make sense. If the structure of your data is well-defined, a widely accepted standard, and/or unlikely to change, the overhead of a flexible serialization format may not yield any practical advantages. If it’s important to squeeze every single byte out of a low bandwidth link, it’s hard to beat a packed C struct. If you’re working with legacy systems where updating them to use a new serialization format involves all kinds of dependencies, it may simply be easier to stick with the status quo. Golioth now supports these use cases through the new Struct-to-JSON transformer.

To use the transformer, you need to describe the structure of the data in your Pipeline YAML. The transformer currently supports standard integer sizes from 8 to 64 bits (signed and unsigned), single and double precision floating point numbers, and fixed- and variable-length strings. Here’s an example of setting up the transformer for a float and a couple strings:

transformer:
  type: struct-to-json
  version: v1
  parameters:
    members:
      - name: temperature
        type: float
      - name: string1
        type: string
        length: 5
      - name: string2_len
        type: u8
      - name: string2
        type: string
        length: string2_len

We can create a packed struct in C that matches the schema:

struct my_struct {
    float temperature;
    char string1[5];
    uint8_t string2_len;
    char string2[];
} __attribute__((packed));

struct my_struct *s = malloc(sizeof(struct my_struct) + strlen("Golioth!"));

s->temperature = 23.5f;
s->string2_len = strlen("Golioth!");
memcpy(s->string1, "hello", strlen("hello"));
memcpy(s->string2, "Golioth!", strlen("Golioth!"));

Sending that struct through the transformer gives us the following JSON:

{
  "temperature": 23.5,
  "string1": "Hello",
  "string2_len": 8,
  "string2": "Golioth!"
}

Monitoring Heap Usage with mallinfo()

Monitoring heap usage is a great way to gain insight into the performance of your embedded device. Many C standard library implementations provide a mallinfo() (or its newer analogue mallinfo2()) API that returns information about the current state of the heap. As this data is in a well-defined, standard format and unlikely to change, it’s a good candidate for sending directly to Golioth without any additional serialization. We can use the following pipeline to transform the struct returned by a call to mallinfo2() on a 64-bit Linux system to JSON and send it to LightDB Stream:

filter:
  path: "/mallinfo"
  content_type: application/octet-stream
steps:
  - name: step0
    transformer:
      type: struct-to-json
      version: v1
      parameters:
        members:
          - name: arena
            type: u64
          - name: ordblks
            type: u64
          - name: smblks
            type: u64
          - name: hblks
            type: u64
          - name: hblkhd
            type: u64
          - name: usmblks
            type: u64
          - name: fsmblks
            type: u64
          - name: uordblks
            type: u64
          - name: fordblks
            type: u64
          - name: keepcost
            type: u64
    destination:
      type: lightdb-stream
      version: v1

Click here to use this Pipeline.
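
On the device side, the raw struct can be streamed to the /mallinfo path with the Golioth Firmware SDK. The following is a minimal sketch, assuming the SDK’s golioth_stream_set_sync() API and glibc’s mallinfo2(); the function name and error handling are illustrative:

static void report_heap_usage(struct golioth_client *client)
{
    /* Snapshot the heap state and stream the raw struct to the "mallinfo"
     * path, where the pipeline above unpacks it into JSON. Requires
     * <malloc.h>, <golioth/client.h>, and <golioth/stream.h>. */
    struct mallinfo2 info = mallinfo2();

    enum golioth_status status = golioth_stream_set_sync(client,
                                                         "mallinfo",
                                                         GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                                         (const uint8_t *) &info,
                                                         sizeof(info),
                                                         5 /* timeout in seconds */);
    if (status != GOLIOTH_OK)
    {
        /* Log or retry as appropriate for your application. */
    }
}

Because the struct is sent as application/octet-stream on the /mallinfo path, it matches the filter in the pipeline above and is unpacked into JSON before reaching LightDB Stream.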

See it in action

What’s Next

Stay tuned for additional Pipelines transformers and destinations to be released in the coming weeks. If you have a use-case that is not currently well supported by Pipelines, or an idea for a new transformer or destination, please reach out on the forum!

New Pipelines Data Destination: AWS S3

A new Pipelines data destination for AWS S3 is now generally available for Golioth users. It represents the first object storage destination for Golioth Pipelines, and opens up a new set of data streaming use-cases, such as images and audio. It can also be useful for scenarios in which large volumes of data are collected and then batch processed at a later time.

How It Works

The aws-s3 data destination uploads events routed to it as objects in the specified bucket. The name of an object corresponds to its event ID, and objects are organized in directories for each device ID.

/
├─ 664b9e889a9590ccfcf822b3/
│  ├─ 28ebd981-80ae-467f-b700-ba00e7c1e3ee
│  ├─ e47e5b46-d4e3-4bf1-a413-9fc71ec9f6b0
│  ├─ ...
├─ 66632a45658c93af0895a70e/
├─ .../

Data of any content type, including the aforementioned media use-cases and more traditional structured sensor data, can be routed to the aws-s3 data destination. To authenticate, an IAM access key and secret key must be created as secrets, then referenced in the pipeline configuration. It is recommended to limit the permissions of the IAM user to PutObject for the specified bucket.
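
For example, a minimal IAM policy granting only PutObject on the bucket used in the pipeline below might look like the following (the bucket name is illustrative):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}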

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_SECRET_KEY
        region: us-east-1

Click here to use this pipeline in your Golioth project!

For more details on the aws-s3 data destination, go to the documentation.

What’s Next

While any existing uses of the Golioth Firmware SDK that leverage data streaming can be used with the aws-s3 data destination, we’ll be introducing examples to demonstrate new use-cases in the coming weeks. Also, stay tuned for more object storage data destinations, and reach out on the forum if you have a use-case that is not currently well-supported by Pipelines!

New Pipelines Data Destination: Memfault

A new Pipelines data destination for Memfault is now generally available for Golioth users. It enables devices to leverage their existing secure connection to Golioth to deliver data containing coredumps, heartbeats, events, and more to Memfault. To see this integration in action, check out the example firmware application, read the documentation, and tune into this week’s Friday Afternoon Stream, where we will be joined by a special guest from Memfault.

Golioth + Memfault

We have long been impressed with the functionality offered by Memfault’s platform, as well as the embedded developer community they have cultivated with the Interrupt blog. In our mission to build a platform that makes connecting constrained devices to the cloud simple, secure, and efficient, we have continuously expanded the set of cloud services that devices can target. This goal has been furthered by the recent launch of Golioth Pipelines.

Memfault’s observability features are highly desired by embedded developers, but leveraging them has typically required establishing a separate HTTP connection from a device to Memfault’s cloud, building custom functionality to relay data from an existing device service to Memfault, or utilizing an intermediate gateway device to provide connectivity. With Golioth, devices already have a secure connection to the cloud for table-stakes device management services and flexible data routing. By adding a Memfault data destination to Golioth Pipelines, that same connection can be used to route a subset of streaming data to Memfault. Leveraging this existing connection saves power and bandwidth on the device, and removes the need to store fleet-wide secrets on deployed devices.

How It Works

The Memfault Firmware SDK provides observability data to an application serialized in the form of chunks. An application can periodically query the packetizer to see if there are new chunks available.

bool data_available = memfault_packetizer_begin(&cfg, &metadata);

When data is available, it can be obtained from the packetizer by either obtaining a single chunk via memfault_packetizer_get_chunk, or by setting enable_multi_packet_chunk to true in the configuration and repeatedly invoking memfault_packetizer_get_next until a kMemfaultPacketizerStatus_EndOfChunk status is returned. The latter strategy allows for obtaining all of the data in a single chunk that would otherwise exceed the default size limitations. Golioth leverages this functionality to upload both large and small chunks using CoAP blockwise transfers, a feature that was enabled in our recent v0.14.0 Golioth Firmware SDK release.
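
The cfg and metadata arguments shown above can be set up as follows; this is a minimal sketch, assuming the sPacketizerConfig and sPacketizerMetadata types from the Memfault SDK’s data packetizer API (memfault/core/data_packetizer.h):

/* Enable multi-packet chunks so an entire chunk can be drained across
 * multiple calls to memfault_packetizer_get_next(). */
sPacketizerConfig cfg = {
    .enable_multi_packet_chunk = true,
};
sPacketizerMetadata metadata;

bool data_available = memfault_packetizer_begin(&cfg, &metadata);

With multi-packet chunks enabled, Golioth’s blockwise stream upload, shown below, drains one chunk at a time.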

golioth_stream_set_blockwise_sync(client,
                                  "mflt",
                                  GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                  read_memfault_chunk,
                                  NULL);

The read_memfault_chunk callback will be called repeatedly to populate blocks for upload until the entire chunk has been obtained from the packetizer.

static enum golioth_status read_memfault_chunk(uint32_t block_idx,
                                               uint8_t *block_buffer,
                                               size_t *block_size,
                                               bool *is_last,
                                               void *arg)
{
    eMemfaultPacketizerStatus mflt_status;
    mflt_status = memfault_packetizer_get_next(block_buffer, block_size);
    if (kMemfaultPacketizerStatus_NoMoreData == mflt_status)
    {
        LOG_WRN("Unexpected end of Memfault data");
        *block_size = 0;
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_EndOfChunk == mflt_status)
    {
        /* Last block */
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_MoreDataForChunk == mflt_status)
    {
        *is_last = false;
    }

    return GOLIOTH_OK;
}

Golioth treats this data like any other stream data, which can be delivered to a path of the user’s choosing. In this case, the data is being streamed to the /mflt path, which can be used as a filter in a pipeline.

filter:
  path: "/mflt"
  content_type: application/octet-stream
steps:
  - name: step0
    destination:
      type: memfault
      version: v1
      parameters:
        project_key: $MEMFAULT_PROJECT_KEY

Click here to use this pipeline in your Golioth project!

Because the Memfault Firmware SDK is producing this data, it does not need to be transformed prior to delivery to Memfault’s cloud. Creating the pipeline shown above, as well as a secret named MEMFAULT_PROJECT_KEY that contains a project key for the desired Memfault project, will result in all streaming data on the /mflt path being delivered to the Memfault platform.

Livestream demo with Memfault

Dan from Golioth did a livestream with Noah from Memfault showcasing how this interaction works. Check it out below:

What’s Next

We will be continuing to roll out more Pipelines data destinations and transformers in the coming weeks. If you have a use-case in mind, feel free to reach out to us on the forum!

New Pipelines Transformer: JSON Patch

A new JSON Patch Pipelines transformer is now generally available for Golioth users. It allows for modifying JSON payloads, or payloads that have been transformed into JSON in a previous pipeline step, in order to fit a specific structure or schema. To see this transformer in action, check out the documentation example or last week’s Friday Afternoon Stream.

Shaping Data for Different Destinations

Golioth frequently sits at the intersection of firmware and cloud platforms. One of our goals when launching Pipelines was to enable those two worlds to seamlessly interact, avoiding the need for one side to compromise to suit the other. For example, Pipelines can allow for devices to send data in a compact binary format, such as CBOR, then have it translated to a text-based representation, such as JSON, for delivery to a cloud service.

The json-patch transformer enhances this capability by not only changing the format, but also the structure and content. Furthermore, it allows for the structure required by the end destination to change over time, without requiring firmware updates. In the following example, fields are re-arranged to meet the requirements of the custom webhook data destination. Critically, if this destination changed, or a new one was added in the future, the pipeline could be updated, and the device could continue sending the same payloads.

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: change-format
    transformer:
      type: cbor-to-json
      version: v1
  - name: transform-and-deliver
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "add", "path": "/environment", "value": {}},
            {"op": "add", "path": "/location", "value": {}},
            {"op": "move", "from": "/temp", "path": "/environment/temp"},
            {"op": "move", "from": "/lat", "path": "/location/lat"},
            {"op": "move", "from": "/long", "path": "/location/long"}
          ]
    destination:
      type: webhook
      version: v1
      parameters:
        url: https://my-backend.example.com/data
        headers:
          x-api-key: $BACKEND_API_KEY

Click here to use this pipeline in your project.

Conditional Data Manipulation

In some cases it may be desirable to conditionally patch a JSON object payload based on the contents of the payload, or the metadata associated with the device that sent it. Combining the json-patch transformer with other transformers demonstrates the power of Pipelines. The test operation in a JSON Patch document conditionally applies a patch only if the specified criteria are met.

For example, in the following pipeline, the key-value pair demo: true is injected into the payload if the device ID matches 649998262fecb43eb2d39859. The device ID is made available when applying the patch using the inject-metadata transformer. The metadata is subsequently stripped from the payload to ensure extraneous information is not delivered to the final destination.

filter:
  path: "*"
  content_type: application/json
steps:
  - name: get-metadata
    transformer:
      type: inject-metadata
      version: v1
  - name: conditional-patch
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "test", "path": "/device_id", "value": "649998262fecb43eb2d39859"},
            {"op": "add", "path": "/demo", "value": true}
          ]
  - name: remove-metadata
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "remove", "path": "/device_id"},
            {"op": "remove", "path": "/project_id"},
            {"op": "remove", "path": "/timestamp"}
          ]
  - name: send-lightdb
    destination:
      type: lightdb-stream
      version: v1

Click here to use this pipeline in your project.

See it in action

What’s Next

The json-patch transformer is the first of many new transformers and data destinations we’ll be adding over the next few weeks. If you have a use-case in mind, feel free to reach out to us on the forum!

Introducing Pipelines

Today, we are thrilled to announce the launch of Pipelines, a powerful new set of features that redefines how you manage and route your IoT data on Golioth. Pipelines is the successor to our now-deprecated Output Streams and represents a significant upgrade in functionality, scalability, and user control.

Two years ago, we introduced Output Streams to seamlessly connect IoT data to various cloud services like AWS SQS, Azure Event Hubs, and GCP PubSub. This enabled Golioth users to efficiently stream sensor data for real-time processing, analytics, and storage, integrating easily with their existing cloud infrastructure.

Since then, we’ve gathered extensive feedback, and now we’re excited to introduce a more versatile solution for data routing: Pipelines. Previously, all stream data had to flow into LightDB Stream and conform to its JSON formatting requirements, which was restrictive for those with regulatory and data residency needs. With Pipelines, you can direct your data to LightDB Stream, your own database, or any other destination, in any format you choose.

Pipelines also introduces filtering and transformation features that simplify or even eliminate backend services through low-code configuration. Configurations are stored and managed as simple YAML files, making them easily deployable across multiple projects without manual recreation. This approach allows you to version your data routing configurations alongside your application code.

Pipelines Example Screenshot

Internally, the Pipelines architecture is designed to support our future growth, enabling us to scale our data routing capabilities to billions of devices. This robust foundation allows us to iterate quickly and add new features, ensuring that our users always have access to the most powerful and flexible data management tools available.

All Golioth users can start taking advantage of Pipelines today. Projects that were previously only using LightDB Stream and did not have any Output Streams configured have been automatically migrated to Pipelines. Users in those projects will see two pipelines present, which together replicate the previous behavior of streaming to LightDB Stream. These pipelines can be modified or deleted, and new pipelines can be added to support additional data routing use-cases.

Projects with Output Streams configured will continue using the legacy system, but can be seamlessly migrated to Pipelines with no interruptions to data streaming. To do so, users in those projects must opt in to migration.

New projects created on Golioth will have a minimal default pipeline created that transforms CBOR data to JSON and delivers it to LightDB Stream. This pipeline is compatible with Golioth firmware examples and training, but may be modified or removed by a user if alternative routing behavior is desired.
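
For reference, that default pipeline looks roughly like the following (shown here as a sketch; check the Pipelines page in your project for the exact configuration):

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: step-0
    transformer:
      type: cbor-to-json
    destination:
      type: lightdb-stream
      version: v1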

Pipelines are especially advantageous for users with specific data compliance requirements and those transmitting sensitive information, such as medical device products. Removing the requirement of routing data through LightDB Stream, where it is persisted on the Golioth platform, provides two main benefits:

  1. Regulatory Compliance: Users can route data to their own compliant storage solutions, making it suitable for many sensitive applications that require data not be persisted on other third-party platforms.
  2. Cost Savings: For users who do not need data persistence, routing data directly to other destinations can avoid the costs associated with streaming data to LightDB Stream. This flexibility allows for more efficient and cost-effective data management.

Getting Started with Pipelines

Alongside the launch of Pipelines, we have also released a new version of the Golioth Firmware SDK, v0.13.0, which introduces new functionality to support streaming arbitrary binary data to destinations that support it. Previously, only CBOR and JSON data could be streamed to Golioth, as everything flowed through LightDB Stream, which only accepts JSON data. Now, rather than streaming data to LightDB Stream, data is sent to the Golioth platform and routed to its ultimate destination via the pipelines configured in a project. Devices using previous versions of the Golioth Firmware SDK will continue working as expected.

Pipelines can be configured in the Golioth Console using YAML, which defines filters and steps within your pipeline. Here’s an example:

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: step-0
    destination:
      type: gcp-pubsub
      version: v1
      parameters:
        topic: projects/my-project/topics/my-topic
        service_account: $GCP_SERVICE_ACCOUNT
  - name: step-1
    transformer:
      type: cbor-to-json
      version: v1
  - name: step-2
    transformer:
      type: inject-path
      version: v1
    destination:
      type: lightdb-stream
      version: v1
  - name: step-3
    destination:
      type: influxdb
      version: v1
      parameters:
        url: https://us-east-1-1.aws.cloud2.influxdata.com
        token: $INFLUXDB_TOKEN
        bucket: device_data
        measurement: sensor_data

This pipeline accepts CBOR data and delivers it to GCP PubSub before transforming it to JSON and delivering it to both LightDB Stream (with the path injected) and InfluxDB. This is accomplished via three core components of Pipelines.

Filters

Filters route all or a subset of data to a pipeline. Currently, data may be filtered based on path and content_type. If either is not supplied, data with any value for the attribute will be matched. In this example, CBOR data sent on any path will be matched to the pipeline.

filter:
  path: "*"
  content_type: application/cbor

Transformers

Transformers modify the structure of a data message payload as it passes through a pipeline. A single transformer may be specified per step, but multiple steps can be chained to perform a sequence of transformations. This transformer will convert data from CBOR to JSON, then pass it along to the next step.

- name: step-1
  transformer:
    type: cbor-to-json

Destinations

Destinations define where the transformed data should be sent. Each step in a pipeline can have its own destination, allowing for complex routing configurations. When a step includes a transformer and a destination, the transformed data is only delivered to the destination in that step. This destination sends JSON data to LightDB Stream after nesting the object using the message path. The next step receives the data as it was prior to the path injection.

- name: step-2
  transformer:
    type: inject-path
  destination:
    type: lightdb-stream
    version: v1

The full list of Output Stream destinations is now available as Pipelines destinations (with more to come):

  • Azure Event Hub
  • AWS SQS
  • Datacake
  • GCP PubSub
  • InfluxDB
  • MongoDB Time Series
  • Ubidots
  • Webhooks

For detailed documentation, visit our Pipelines Documentation.

Updated Pricing Model

We’re keeping the same usage-based pricing as Output Streams, but are also introducing volume discounts. We want to emphasize transparent pricing optimized for MCUs, so we are revising the pricing structure for Pipelines to accommodate a wider range of data usage patterns. This ensures affordability and predictability in billing for both low and high bandwidth use cases, while allowing customers with large fleets of devices to enjoy the discounts that come with scale.

Data routed to External Pipelines Destinations

  Data Volume (per Month)     Price per MB
  0 – 1 GB                    $0.40
  1 – 10 GB                   $0.34
  10 – 50 GB                  $0.28
  50 – 150 GB                 $0.22
  150 – 300 GB                $0.16
  300 – 600 GB                $0.10
  600 GB – 1 TB               $0.04
  1 TB+                       $0.01

Data routed to LightDB Stream

  Data Volume (per Month)     Price per MB
  0 – 1 TB+                   $0.001

The first 3 MB of Pipelines usage is free, allowing users who are prototyping to do so without needing to provide a credit card. This includes usage from routing data to LightDB Stream through Pipelines.

For full details, visit Golioth Pricing.


Pipelines marks a significant step forward in Golioth’s IoT data routing capability, offering a new level of flexibility and control. We’re excited to see how you’ll use Pipelines to enhance your IoT projects. For more details and to get started, visit our Pipelines Documentation.

With our new infrastructure, we can rapidly add new destinations and transformations, so please let us know any you might use. Additionally, we’d love to hear about any logic you’re currently performing in backend services that we can help you streamline or even delete. If you have any questions or need assistance, don’t hesitate to reach out. Contact us at [email protected] or post in our community forum. We’re here to help!

Migration from Output Streams to Pipelines

As previously mentioned, projects currently using Output Streams will continue to leverage the legacy infrastructure until users opt in to migration to Pipelines. We encourage users to try out Pipelines in a new project and opt in existing projects when ready. Output Streams does not currently have an end-of-life date, but we will be announcing one soon.

Stay tuned for more updates and happy streaming!

Golioth works with Qualcomm

We’re excited to announce the latest update to the Golioth Firmware SDK, release 0.12.0, which now includes support for Zephyr’s newly introduced Modem Subsystem. This enhancement significantly increases the flexibility of our platform, enabling support for a broader array of cellular modem technologies, starting with Qualcomm. Release 0.12.0 adds support for the Quectel BG95, joining the Nordic Semiconductor nRF9160 (our go-to modem around here!) as a first-class cellular modem. We also introduced additional ways to securely store credentials.

Zephyr’s New Modem Subsystem

Introduced in Zephyr 3.5.0, the Modem Subsystem is a unified interface for modem drivers. This addition simplifies the integration of cellular modems (and others) into Zephyr-based projects, greatly expanding the range of devices and technologies that developers can utilize effectively. For a detailed overview of the modem subsystem, check out this summary from Zephyr’s Developer Advocate, Benjamin Cabé.

Integration in Golioth Firmware SDK

With the integration of this modem subsystem in the Golioth Firmware SDK, Golioth users can now more flexibly incorporate a wider array of modem technologies into their IoT projects. There are a lot of great modems and module vendors in the market and providing choice is at the heart of what we do at Golioth.

First Supported Modem and Board

The first modem we are supporting with this updated SDK is the BG95 from Quectel, based on Qualcomm technology. The BG95 is paired with the nRF52840 on the RAK5010 development board from RAKwireless. This combination highlights the flexibility of Qualcomm’s technology integrated into Quectel’s hardware, offering developers robust tools for deploying cellular IoT solutions efficiently.

Why Qualcomm?

We chose to support Qualcomm modems because our community asked for it! Developers have different design needs and want maximum flexibility. They need more options that accommodate diverse business needs. Qualcomm chipsets offer the latest in connectivity protocols and radio technology at competitive prices. Qualcomm provides silicon and support for a wide ecosystem of module vendors, such as Quectel, U-Blox, Telit, and more. Golioth customers have used Qualcomm modems in their products in the past, but needed to do much of the integration engineering themselves. Zephyr’s Modem Subsystem makes it easier to develop applications that integrate Qualcomm modems. Connecting this wider range of modems to Golioth is more hands-off for the user, reducing complexity. Developers can focus more on innovation and less on technical hurdles.

Also in This Release

In addition to new modem support, this release introduces another feature: DTLS socket offloading for Zephyr. This includes an example for the long-supported Nordic Semiconductor nRF9160.

DTLS socket offloading leverages a modem’s secure credential store, which allows for the use of secure, offloaded sockets. This means no private key material is held in application RAM, which can be a significant advantage: it helps reduce RAM usage, code size, CPU load, and power consumption. Actual benefits will vary depending on the application and how the code is utilized.

This new feature enhances device security and efficiency, contributing further to the versatility and robustness of the Golioth Firmware SDK. Mike previously wrote how to store credentials on the nRF9160 using TLS tags.
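
As a rough sketch of that approach, PSK credentials can be written to the nRF9160 modem’s secure storage ahead of time. This assumes the nRF Connect SDK’s modem_key_mgmt API (<modem/modem_key_mgmt.h>); the security tag and credentials below are placeholders:

/* Placeholder security tag; it must match the sec_tag used when opening
 * the offloaded DTLS socket. */
#define APP_SEC_TAG 515765868

static int store_credentials(void)
{
    /* Write PSK credentials into the modem's secure storage so no private
     * key material needs to be held in application RAM. The modem typically
     * must be offline (e.g. CFUN=0) when credentials are written, and the
     * PSK itself is supplied as a hex-encoded string. */
    static const char psk_id[] = "my-device-psk-id";
    static const char psk_hex[] = "6d792d70736b"; /* "my-psk", hex-encoded */
    int err;

    err = modem_key_mgmt_write(APP_SEC_TAG,
                               MODEM_KEY_MGMT_CRED_TYPE_IDENTITY,
                               psk_id, strlen(psk_id));
    if (err)
    {
        return err;
    }

    return modem_key_mgmt_write(APP_SEC_TAG,
                                MODEM_KEY_MGMT_CRED_TYPE_PSK,
                                psk_hex, strlen(psk_hex));
}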

Getting Started

To get started with the latest SDK:

  1. Update to the newest release, 0.12.0, from the Golioth Firmware SDK repository.
  2. Explore the documentation and examples provided for integrating the RAK5010 board or try DTLS socket offloading with the nRF9160.
  3. Visit our community forums or support channels if you need help or want to discuss your projects.

Focused on Your Success

At Golioth, we’re committed to providing you with the tools and flexibility needed to succeed in the fast-evolving world of IoT. By adding support for new modems and enhancing the ways you can manage credentials, we aim to streamline your development process and empower your innovative projects. Whether you’re integrating the latest modem technology or implementing secure credential management, Golioth is here to support every step of your journey towards building smarter, more connected solutions.