New Pipelines Transformer: Base64

A new Pipelines transformer for Base64 encoding and decoding is now generally available for Golioth users. The base64 transformer is useful when working with sources and destinations where handling binary data is difficult or unsupported.

How It Works

By default, the base64 transformer encodes the message payload as Base64 data; for example, a 4-byte binary payload 0x01 0x02 0x03 0xFF becomes the text AQID/w==. In the following example, the data is delivered to the recently announced aws-s3 data destination after being encoded. The content type following encoding will be text/plain.

filter:
  path: "*"
steps:
  - name: step0
    transformer:
      type: base64
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_ACCESS_SECRET
        region: us-east-1

Click here to use this pipeline in your Golioth project!
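On the device side nothing changes: the binary payload is streamed as usual, and the encoding happens in the pipeline. A minimal sketch using the Golioth Firmware SDK stream API is shown below (the header paths, payload contents, and callback name are illustrative assumptions, not part of the pipeline above):

#include <stdint.h>

#include <golioth/client.h>
#include <golioth/stream.h>

/* Hypothetical binary payload; the pipeline above would base64-encode it
   before writing it to the S3 bucket as text/plain (i.e. as AQID/w==). */
static const uint8_t payload[] = {0x01, 0x02, 0x03, 0xFF};

int err = golioth_stream_set_async(client,
                                   "",
                                   GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                   payload,
                                   sizeof(payload),
                                   async_push_handler,
                                   NULL);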

Supplying the decode: true parameter will result in the base64 transformer decoding data rather than encoding it. In the following example, Base64 data is decoded before being delivered to the recently announced kafka data destination. The content type following decoding will be application/octet-stream.

filter:
  path: "*"
steps:
  - name: step0
    transformer:
      type: base64
      parameters:
        decode: true
    destination:
      type: kafka
      version: v1
      parameters:
        brokers:
          - my.kafka.broker.com:9092
        topic: my-topic
        username: pipelines-user
        password: $KAFKA_PASSWORD
        sasl_mechanism: PLAIN

Click here to use this pipeline in your Golioth project!

For more details on the base64 transformer, go to the documentation.

What’s Next

The base64 transformer has already proved useful for presenting binary data in a human-readable format. However, it becomes even more useful when paired with other transformers, some of which we’ll be announcing in the coming days. In the meantime, share how you are using Pipelines on the forum and let us know if you have a use-case that is not currently well-supported!

New Pipelines Data Destination: Kafka

A new Pipelines data destination for Kafka is now generally available for Golioth users. Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. There are many cloud-hosted services offering Kafka or Kafka-compatible APIs.

How It Works

Similar to the existing gcp-pubsub and azure-event-hubs destinations, the kafka data destination publishes events to the specified topic. Multiple brokers can be configured, and the PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512 SASL mechanisms are supported for authentication. All data in transit is encrypted with Transport Layer Security (TLS).

Data of any content type can be delivered to the kafka destination. Metadata, such as Golioth device ID and project ID, will be included in the event metadata, and the event timestamp will match that of the data message.

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: kafka
      version: v1
      parameters:
        brokers:
          - my-kafka-cluster.com:9092
        username: kafka-user
        password: $KAFKA_PASSWORD
        topic: my-topic
        sasl_mechanism: SCRAM-SHA-256

Click here to use this pipeline in your Golioth project!

For more details on the kafka data destination, go to the documentation.

What’s Next

Kafka has been one of the most requested data destinations by Golioth users, and we are excited to see all of the new platforms that can be leveraged as part of this integration. Keep an eye on the Golioth blog for more upcoming Pipelines features, and reach out on the forum if you have a use-case that is not currently well-supported!

New Pipelines Transformer: Struct to JSON

A new Struct-to-JSON Pipelines transformer is now generally available to Golioth users. This transformer takes structured binary data, such as that created using packed C structs, and converts it into JSON according to a user-provided schema. To see this transformer in action, check out the example in our documentation or watch our recent Hackday recap stream.

Compact and Low Overhead

Historically, many IoT devices have sent structured binary data to cloud applications, which then needed to unpack that data and serialize it into a format that downstream services can understand. It’s easier and faster to populate a struct than it is to serialize data into a common format in C, and the data is often significantly more compact, which is especially important for devices using constrained transports such as cellular, Bluetooth, or LoRa. As embedded devices have gotten more powerful and efficient, many of these devices have started using serialization formats like CBOR, JSON, or Protocol Buffers, which are easier for cloud applications to work with. These serialization formats have significant advantages over packed C structs in terms of flexibility and interoperability, and for most cases that’s what we recommend our customers use.

But there are still cases where sending raw binary data could make sense. If the structure of your data is well-defined, a widely accepted standard, and/or unlikely to change, the overhead of a flexible serialization format may not yield any practical advantages. If it’s important to squeeze every single byte out of a low bandwidth link, it’s hard to beat a packed C struct. If you’re working with legacy systems where updating them to use a new serialization format involves all kinds of dependencies, it may simply be easier to stick with the status quo. Golioth now supports these use cases through the new Struct-to-JSON transformer.

To use the transformer, you need to describe the structure of the data in your Pipeline YAML. The transformer currently supports standard integer sizes from 8 to 64 bits (signed and unsigned), single and double precision floating point numbers, and fixed- and variable-length strings. Here’s an example of setting up the transformer for a float and a couple strings:

transformer:
  type: struct-to-json
  version: v1
  parameters:
    members:
      - name: temperature
        type: float
      - name: string1
        type: string
        length: 5
      - name: string2_len
        type: u8
      - name: string2
        type: string
        length: string2_len

We can create a packed struct in C that matches the schema:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct my_struct {
    float temperature;
    char string1[5];       /* fixed-length string, not NUL-terminated */
    uint8_t string2_len;
    char string2[];        /* variable-length string, length given by string2_len */
} __attribute__((packed));

/* Allocate room for the flexible array member holding "Golioth!" */
struct my_struct *s = malloc(sizeof(struct my_struct) + strlen("Golioth!"));

s->temperature = 23.5f;
s->string2_len = strlen("Golioth!");
memcpy(s->string1, "Hello", strlen("Hello"));
memcpy(s->string2, "Golioth!", strlen("Golioth!"));

Sending that struct through the transformer gives us the following JSON:

{
  "temperature": 23.5,
  "string1": "Hello",
  "string2_len": 8,
  "string2": "Golioth!"
}

Monitoring Heap Usage with mallinfo()

Monitoring heap usage is a great way to gain insight into the performance of your embedded device. Many C standard library implementations provide a mallinfo() (or its newer analogue mallinfo2()) API that returns information about the current state of the heap. As this data is in a well-defined, standard format and unlikely to change, it’s a good candidate for sending directly to Golioth without any additional serialization. We can use the following pipeline to transform the struct returned by a call to mallinfo2() on a 64-bit Linux system to JSON and send it to LightDB Stream:

filter:
  path: "/mallinfo"
  content_type: application/octet-stream
steps:
  - name: step0
    transformer:
      type: struct-to-json
      version: v1
      parameters:
        members:
          - name: arena
            type: u64
          - name: ordblks
            type: u64
          - name: smblks
            type: u64
          - name: hblks
            type: u64
          - name: hblkhd
            type: u64
          - name: usmblks
            type: u64
          - name: fsmblks
            type: u64
          - name: uordblks
            type: u64
          - name: fordblks
            type: u64
          - name: keepcost
            type: u64
    destination:
      type: lightdb-stream
      version: v1

Click here to use this Pipeline.
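On the device side, a minimal sketch of streaming that struct (the header paths and NULL callback are assumptions here, and error handling is omitted) might look like:

#include <stdint.h>
#include <malloc.h>

#include <golioth/client.h>
#include <golioth/stream.h>

/* Snapshot the allocator state and stream the raw struct to the /mallinfo path.
   struct mallinfo2 is ten size_t fields on 64-bit Linux, matching the u64
   members declared in the pipeline above. */
static void stream_mallinfo(struct golioth_client *client)
{
    struct mallinfo2 mi = mallinfo2();

    golioth_stream_set_async(client,
                             "mallinfo",
                             GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                             (const uint8_t *) &mi,
                             sizeof(mi),
                             NULL,
                             NULL);
}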

See it in action

What’s Next

Stay tuned for additional Pipelines transformers and destinations to be released in the coming weeks. If you have a use-case that is not currently well supported by Pipelines, or an idea for a new transformer or destination, please reach out on the forum!

New Pipelines Data Destination: AWS S3

A new Pipelines data destination for AWS S3 is now generally available for Golioth users. It represents the first object storage destination for Golioth Pipelines, and opens up a new set of data streaming use-cases, such as images and audio. It can also be useful for scenarios in which large volumes of data are collected and then batch processed at a later time.

How It Works

The aws-s3 data destination uploads events routed to it as objects in the specified bucket. The name of an object corresponds to its event ID, and objects are organized in directories for each device ID.

/
├─ 664b9e889a9590ccfcf822b3/
│  ├─ 28ebd981-80ae-467f-b700-ba00e7c1e3ee
│  ├─ e47e5b46-d4e3-4bf1-a413-9fc71ec9f6b0
│  ├─ ...
├─ 66632a45658c93af0895a70e/
├─ .../

Data of any content type, including the aforementioned media use-cases and more traditional structured sensor data, can be routed to the aws-s3 data destination. To authenticate, an IAM access key and secret key must be created as secrets, then referenced in the pipeline configuration. It is recommended to limit the permissions of the IAM user to PutObject for the specified bucket.

filter:
  path: "*"
steps:
  - name: step0
    destination:
      type: aws-s3
      version: v1
      parameters:
        name: my-bucket
        access_key: $AWS_ACCESS_KEY
        access_secret: $AWS_SECRET_KEY
        region: us-east-1

Click here to use this pipeline in your Golioth project!

For more details on the aws-s3 data destination, go to the documentation.

What’s Next

While any existing uses of the Golioth Firmware SDK that leverage data streaming can be used with the aws-s3 data destination, we’ll be introducing examples to demonstrate new use-cases in the coming weeks. Also, stay tuned for more object storage data destinations, and reach out on the forum if you have a use-case that is not currently well-supported by Pipelines!

IoT is all about data. How you choose to handle sending that data over the network can have a large impact on your bandwidth and power budgets. Golioth includes the ability to batch upload streaming data, which is great for cached readings and allows your device to stay in low-power mode more of the time. Today I’ll detail how to send IoT data in batches.

What is Batch Data?

Batch data simply means one payload that encompasses multiple sensor readings.

[
    {
        "ts": 1719592181,
        "counter": 330
    },
    {
        "ts": 1719592186,
        "counter": 331
    },
    {
        "ts": 1719592191,
        "counter": 332
    }
]

The example above shows three readings, each passing a counter value that represents a sensor reading, along with a timestamp for when that reading was taken.

Sending Batch Data from an IoT Device

The sample firmware can be found at the end of the post, but generally speaking, the device doesn’t need to do anything different to send batch data. The key is to format the data as a list of readings, whether you’re sending JSON or CBOR.
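For illustration, here’s a minimal sketch (the struct layout and helper name are hypothetical) of how such a buffer could be assembled from cached readings before handing it to the stream API:

#include <stdint.h>
#include <stdio.h>

struct reading {
    int64_t ts;       /* Unix timestamp of the cached reading */
    uint32_t counter; /* stand-in for a real sensor value */
};

/* Format cached readings as a JSON array; returns bytes written or -1 if
   the buffer is too small. */
static int build_batch_payload(char *buf, size_t buf_len,
                               const struct reading *readings, size_t count)
{
    size_t offset = 0;

    for (size_t i = 0; i < count; i++) {
        int written = snprintf(buf + offset, buf_len - offset,
                               "%s{\"ts\": %lld, \"counter\": %u}%s",
                               (i == 0) ? "[" : ",",
                               (long long) readings[i].ts,
                               (unsigned int) readings[i].counter,
                               (i == count - 1) ? "]" : "");
        if (written < 0 || (size_t) written >= buf_len - offset) {
            return -1;
        }
        offset += written;
    }

    return (int) offset;
}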

int err = golioth_stream_set_async(client,
                                   "",
                                   GOLIOTH_CONTENT_TYPE_JSON,
                                   buf,
                                   strlen(buf),
                                   async_push_handler,
                                   NULL);

We call the Stream data API above. The client, the stream path (empty here), and the content type are passed as the first three arguments, then the buffer holding the data and the buffer length are supplied. The last two parameters are a callback function and an optional user data pointer.

Routing Batch Data using a Golioth Pipeline

Batch data will be automatically sorted out by the Golioth servers based on the pipeline you use.

filter:
  path: "*"
  content_type: application/json
steps:
  - name: step0
    destination:
      type: batch
      version: v1
  - name: step1
    destination:
      type: lightdb-stream
      version: v1

This example pipeline listens for JSON data coming in on any stream path. In step0 it “unpacks” the batch data into individual readings. In step1 the individual readings are routed to Golioth’s LightDB stream. Here’s what that looks like:

Note that all three readings are coming in with the same server-side timestamp. The device timestamp is preserved in the data, but you can also use Pipelines to tell Golioth to use the embedded timestamps.

Batch Data with Timestamp Extract

For this example we’re using a very similar pipeline, with one additional transformer to extract the timestamp from the readings and use it as the LightDB Stream timestamp.

filter:
  path: "*"
  content_type: application/json
steps:
  - name: step0
    destination:
      type: batch
      version: v1
  - name: step1
    transformer:
      type: extract-timestamp
      version: v1
    destination:
      type: lightdb-stream
      version: v1

Note that we didn’t even need an additional step, but simply added the transformer to the step that already set lightdb-stream as the destination.

You can see that the Unix epoch timestamp has been popped out of the data and assigned to the LightDB timestamp. Extracting the timestamp is not unique to Golioth’s LightDB Stream service.

Streaming data may be routed anywhere you want it. For instance, if you wanted to send your data to a webhook, just use the webhook destination. If you included the extract-timestamp transformer, your data will arrive at the webhook with the timestamps from your device as part of the metadata instead of nested in the JSON object.

Using a Special Path for Batch Data

What happens if your app wants to send other types of streaming data beyond batch data? The batch destination will automatically drop data that isn’t a list of data objects. But you might like to be more explicit about where you send data and for that you can easily create a path to receive batch data.

filter:
  path: "/batch/"
  content_type: application/json
steps:
  - name: step0
    destination:
      type: batch
      version: v1
  - name: step1
    transformer:
      type: extract-timestamp
      version: v1
    destination:
      type: lightdb-stream
      version: v1

This pipeline is nearly the same as before with the only change on line 2 where the * wildcard was removed from path and replaced with "/batch/". Now we can update the API call in the device firmware to target that path:

int err = golioth_stream_set_async(client,
                                   "batch",
                                   GOLIOTH_CONTENT_TYPE_JSON,
                                   buf,
                                   strlen(buf),
                                   async_push_handler,
                                   NULL);

Although the result hasn’t changed, this does make the intent of the firmware clearer, and it differentiates this pipeline from others.

Sample Firmware

This is a quick sample firmware I made to use while writing this post. It targets the nrf9160dk. One major caveat is that the function that pulls time from the cellular network is quite rudimentary and should be replaced in anything you plan to use in production.

To try it out, start from the Golioth Hello sample and replace the main.c file. This post was written using v0.14.0 of the Golioth Firmware SDK.

Wrapping Up

Batch data upload is a common request in the IoT realm. Golioth has not only the ability to sort out your batch data uploads, but to route them where you want and even to transform that data as needed. If you want to know more about what Pipelines brings to the party, check out the Pipelines announcement post.

New Pipelines Data Destination: Memfault

A new Pipelines data destination for Memfault is now generally available for Golioth users. It enables devices to leverage their existing secure connection to Golioth to deliver data containing coredumps, heartbeats, events, and more to Memfault. To see this integration in action, check out the example firmware application, read the documentation, and tune into this week’s Friday Afternoon Stream, where we will be joined by a special guest from Memfault.

Golioth + Memfault

We have long been impressed with the functionality offered by Memfault’s platform, as well as the embedded developer community they have cultivated with the Interrupt blog. In our mission to build a platform that makes connecting constrained devices to the cloud simple, secure, and efficient, we have continuously expanded the set of cloud services that devices can target. This goal has been furthered by the recent launch of Golioth Pipelines.

Memfault’s observability features are highly desired by embedded developers, but leveraging them has typically required establishing a separate HTTP connection from a device to Memfault’s cloud, building custom functionality to relay data from an existing device service to Memfault, or utilizing an intermediate gateway device to provide connectivity. With Golioth, devices already have a secure connection to the cloud for table-stakes device management services and flexible data routing. By adding a Memfault data destination to Golioth Pipelines, that same connection can be used to route a subset of streaming data to Memfault. Leveraging this existing connection saves power and bandwidth on the device, and removes the need to store fleet-wide secrets on deployed devices.

How It Works

The Memfault Firmware SDK provides observability data to an application serialized in the form of chunks. An application can periodically query the packetizer to see if there are new chunks available.

bool data_available = memfault_packetizer_begin(&cfg, &metadata);

When data is available, it can be obtained from the packetizer either by fetching a single chunk via memfault_packetizer_get_chunk, or by setting enable_multi_packet_chunk to true in the configuration and repeatedly invoking memfault_packetizer_get_next until a kMemfaultPacketizerStatus_EndOfChunk status is returned. The latter strategy allows data that would exceed the default size limitations to be obtained as a single chunk. Golioth leverages this functionality to upload both large and small chunks using CoAP blockwise transfers, a feature that was enabled in our recent v0.14.0 Golioth Firmware SDK release.
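Setting up the packetizer for the multi-packet strategy looks roughly like the following sketch, based on the Memfault Firmware SDK’s packetizer API (usage here is illustrative):

#include "memfault/core/data_packetizer.h"

/* Allow a single chunk to span multiple memfault_packetizer_get_next() calls */
sPacketizerConfig cfg = {
    .enable_multi_packet_chunk = true,
};
sPacketizerMetadata metadata;

bool data_available = memfault_packetizer_begin(&cfg, &metadata);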

golioth_stream_set_blockwise_sync(client,
                                  "mflt",
                                  GOLIOTH_CONTENT_TYPE_OCTET_STREAM,
                                  read_memfault_chunk,
                                  NULL);

The read_memfault_chunk callback will be called repeatedly to populate blocks for upload until the entire chunk has been obtained from the packetizer.

static enum golioth_status read_memfault_chunk(uint32_t block_idx,
                                               uint8_t *block_buffer,
                                               size_t *block_size,
                                               bool *is_last,
                                               void *arg)
{
    eMemfaultPacketizerStatus mflt_status;
    mflt_status = memfault_packetizer_get_next(block_buffer, block_size);
    if (kMemfaultPacketizerStatus_NoMoreData == mflt_status)
    {
        LOG_WRN("Unexpected end of Memfault data");
        *block_size = 0;
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_EndOfChunk == mflt_status)
    {
        /* Last block */
        *is_last = true;
    }
    else if (kMemfaultPacketizerStatus_MoreDataForChunk == mflt_status)
    {
        *is_last = false;
    }

    return GOLIOTH_OK;
}

Golioth views this data as it would any other stream data, which can be delivered to a path of the user’s choosing. In this case, the data is being streamed to the /mflt path, which can be used as a filter in a pipeline.

filter:
  path: "/mflt"
  content_type: application/octet-stream
steps:
  - name: step0
    destination:
      type: memfault
      version: v1
      parameters:
        project_key: $MEMFAULT_PROJECT_KEY

Click here to use this pipeline in your Golioth project!

Because the Memfault Firmware SDK is producing this data, it does not need to be transformed prior to delivery to Memfault’s cloud. Creating the pipeline shown above, as well as a secret with name MEMFAULT_PROJECT_KEY that contains a project key for the desired Memfault project, will result in all streaming data on the /mflt path being delivered to the Memfault platform.

Livestream demo with Memfault

Dan from Golioth did a livestream with Noah from Memfault showcasing how this interaction works; check it out below:

What’s Next

We will be continuing to roll out more Pipelines data destinations and transformers in the coming weeks. If you have a use-case in mind, feel free to reach out to us on the forum!

Piecing together different technologies can have a multiplicative effect. I think that’s what happened with this demo: we paired Wi-Fi locationing, low cost hardware, Golioth Pipelines, and n8n (an API workflow tool) to create a “geofence”.

A geofence is a virtual perimeter used to set up alerts or take actions once a device moves outside that virtual perimeter. The example we gave in the video is if you had a tracker on your cat and you wanted to take an action once the device was outside a particular area.

Hardware

The reason we’re calling this a “$2 geofence” is that it’s enabled by the ESP32-C3, a low cost module from Espressif. We put this on the Aludel Elixir as a backup connectivity method in case we were again at a conference with no LTE-M coverage.

The ESP-AT firmware does what it sounds like it should do: it responds to AT commands from other microcontrollers talking to it over serial (as many cellular modules also do). One key enhancement is that ESP-AT already works as a connectivity method; in fact, we utilize the ESP-AT firmware as an offloaded Wi-Fi modem when we build and test for the nRF52840 in our Continuously Verified Hardware. Zephyr has an option for using an ESP-AT modem as the main offloaded Wi-Fi modem, which makes it ‘invisible’ to the Zephyr application: because it is built on top of Zephyr’s Wi-Fi subsystem, it acts like any other network interface.

One change that was required: we had to rewrite how we pull information off the ESP-AT modem. Normally the wifi scan shell command returns the (human-readable) names and signal strengths of all the access points (APs) visible to the modem. Instead, we want the MAC address and signal strength of each AP, as that’s what’s expected by the API service we’ll describe below.
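As a rough sketch (assuming Zephyr’s wifi_mgmt scan API; names here are illustrative), each scan result can be reduced to just its MAC address and RSSI inside the scan result event handler:

#include <zephyr/net/net_mgmt.h>
#include <zephyr/net/wifi_mgmt.h>
#include <zephyr/sys/printk.h>

/* Helper invoked from the net_mgmt handler for each NET_EVENT_WIFI_SCAN_RESULT
   event; keeps only the MAC address and RSSI of the access point. */
static void handle_wifi_scan_result(struct net_mgmt_event_callback *cb)
{
    const struct wifi_scan_result *entry =
        (const struct wifi_scan_result *) cb->info;

    printk("%02x:%02x:%02x:%02x:%02x:%02x,%d\n",
           entry->mac[0], entry->mac[1], entry->mac[2],
           entry->mac[3], entry->mac[4], entry->mac[5],
           entry->rssi);
}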

Golioth Pipeline

We start by scanning Wi-Fi APs and the tower that the cell modem is connected to. Then we publish that data to the Golioth cloud using the Stream service. Because we’re publishing to a specific path (instead of my normal, generic default of “sensor”), we can start to peel off that data and send it somewhere interesting. How? With Pipelines, of course!

I set up the pipeline to watch the path wifi_lte_loc_req (a name of my own making; this could be any arbitrary name). That data gets sent out to a webhook going to n8n. Webhooks more broadly are a generic way to interface between cloud services, but here we use one to send data into the n8n platform.

n8n

Now that the data is being sent into n8n (a self-hosted instance, no less!), we can start doing interesting things with it. This is an area that is full of similar offerings, sometimes specifically targeted at IoT and other times targeted at business workflows.

If you’re newer to working with APIs and tying stuff together, it might take a bit of time to figure out how queries should be structured and how your setup should respond when there are errors.

API service

We send data from the device to Golioth already formatted for what the location API service expects. This is not required in the slightest, as Golioth’s Pipelines can morph and transform data to meet the needs of the endpoint. But…why not? It kind of makes sense to have the device publish data in a format that matches the target API service. Then later if we decide to re-target an alternative service, we can use transformations to mold the incoming data to what that new service expects.

For this demo, I’m using the here.com API service. I like that it combines LTE tower + Wi-Fi AP data for its API, which means it will lean on whichever provides a more accurate reading (normally Wi-Fi). Again, this service is one of many! There is a range of such API services because this is something phones often use to determine location for apps.

Once we receive the lat, lon, and accuracy, we actually pass the data back to the device using LightDB State. This two-sided database is a good de facto way to send arbitrary data from the cloud to the device. In the case of n8n, we’re pulling through the original project name and device identifier, and then publishing to the Golioth REST API. This makes it a data “round trip” from device to cloud and back down to device.

Logic and alerts

Since the data is already on the cloud in an API marketplace like n8n…why not use that data to do some cloud side processing? In this case, I wanted to set up a geofence to show that we can trigger logic and alerts on the cloud and even call 3rd party APIs like Slack and Twilio.

Geofence alert messages being sent into Slack

I asked ChatGPT to help me out with some JavaScript that calculates a true/false output so that I could use it to trigger downstream logic. We insert the lat/lon data that was returned from here.com into this algorithm and it pops out whether or not we are inside the “fence”. As of this writing, I am still using a fixed location for the center of the “fence”, as well as a fixed radius. I’m certain it’s possible to make these configurable in n8n or other tools, perhaps via another webhook or a configurable variable.
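The JavaScript itself isn’t reproduced here, but the underlying check is simply a great-circle (haversine) distance comparison against a fixed center point and radius. The same logic, sketched in C with hypothetical center and radius values, looks like this:

#include <math.h>
#include <stdbool.h>

#define PI 3.14159265358979323846
#define EARTH_RADIUS_M 6371000.0

/* Hypothetical fence center and radius */
static const double fence_lat = 37.7749;
static const double fence_lon = -122.4194;
static const double fence_radius_m = 500.0;

static double deg2rad(double deg) { return deg * PI / 180.0; }

/* Haversine great-circle distance between two lat/lon points, in meters */
static double distance_m(double lat1, double lon1, double lat2, double lon2)
{
    double dlat = deg2rad(lat2 - lat1);
    double dlon = deg2rad(lon2 - lon1);
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(deg2rad(lat1)) * cos(deg2rad(lat2)) * sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1.0 - a));
}

/* true if the reported position is inside the fence */
static bool inside_fence(double lat, double lon)
{
    return distance_m(lat, lon, fence_lat, fence_lon) <= fence_radius_m;
}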

Future demos

Hopefully one thing you noticed from this demo is just how much can be enabled with Golioth’s pipelines. Since Golioth takes care of reliably delivering your data to the cloud, the rest is really a matter of configuration. It’s also difficult to know all the different APIs that could be utilized out in the world. Pulling these elements together shows how a hardware or firmware engineer could enact complex device and business logic to create interesting applications out in the real world. If you need any help getting your next project off the ground, stop by our forum!

New Pipelines Transformer: JSON Patch

A new JSON Patch Pipelines transformer is now generally available for Golioth users. It allows for modifying JSON payloads, or payloads that have been transformed into JSON in a previous pipeline step, in order to fit a specific structure or schema. To see this transformer in action, check out the documentation example or last week’s Friday Afternoon Stream.

Shaping Data for Different Destinations

Golioth frequently sits at the intersection of firmware and cloud platforms. One of our goals when launching Pipelines was to enable those two worlds to interact seamlessly, avoiding the need for one side to compromise to suit the other. For example, Pipelines can allow for devices to send data in a compact binary format, such as CBOR, then have it translated to a text-based representation, such as JSON, for delivery to a cloud service.

The json-patch transformer enhances this capability by not only changing the format, but also the structure and content. Furthermore, it allows for the structure required by the end destination to change over time, without requiring firmware updates. In the following example, fields are re-arranged to meet the requirements of the custom webhook data destination. Critically, if this destination changed, or a new one was added in the future, the pipeline could be updated, and the device could continue sending the same payloads.

filter:
  path: "*"
  content_type: application/cbor
steps:
  - name: change-format
    transformer:
      type: cbor-to-json
      version: v1
  - name: transform-and-deliver
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "add", "path": "/environment", "value": {}},
            {"op": "add", "path": "/location", "value": {}},
            {"op": "move", "from": "/temp", "path": "/environment/temp"},
            {"op": "move", "from": "/lat", "path": "/location/lat"},
            {"op": "move", "from": "/long", "path": "/location/long"}
          ]
    destination:
      type: webhook
      version: v1
      parameters:
        url: https://my-backend.example.com/data
        headers:
          x-api-key: $BACKEND_API_KEY

Click here to use this pipeline in your project.

Conditional Data Manipulation

In some cases it may be desirable to conditionally patch a JSON object payload based on the contents of the payload, or the metadata associated with the device that sent it. Combining the json-patch transformer with other transformers demonstrates the power of Pipelines. The test operation in a JSON Patch document conditionally applies a patch if the criteria are met.

For example, in the following pipeline, the key-value pair demo: true is injected into the payload if the device ID matches 649998262fecb43eb2d39859. The device ID is made available when applying the patch using the inject-metadata transformer. The metadata is subsequently stripped from the payload to ensure extraneous information is not delivered to the final destination.

filter:
  path: "*"
  content_type: application/json
steps:
  - name: get-metadata
    transformer:
      type: inject-metadata
      version: v1
  - name: conditional-patch
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "test", "path": "/device_id", "value": "649998262fecb43eb2d39859"},
            {"op": "add", "path": "/demo", "value": true}
          ]
  - name: remove-metadata
    transformer:
      type: json-patch
      version: v1
      parameters:
        patch: |
          [
            {"op": "remove", "path": "/device_id"},
            {"op": "remove", "path": "/project_id"},
            {"op": "remove", "path": "/timestamp"}
          ]
  - name: send-lightdb
    destination:
      type: lightdb-stream
      version: v1

Click here to use this pipeline in your project.

See it in action

What’s Next

The json-patch transformer is the first of many new transformers and data destinations we’ll be adding over the next few weeks. If you have a use-case in mind, feel free to reach out to us on the forum!