I have been building lots of new Bluetooth applications around Golioth Connectivity and Pouch. This is what our users (me included!) use to send data over a Bluetooth link, up through a gateway, and out to the Cloud. As someone who regularly works with the Golioth Firmware SDK and Zephyr, the data handling in the Pouch SDK feels very familiar, despite the link to the cloud being over Bluetooth. I get to treat the connection like any other network connection I normally have when working with Golioth.
But I recently found a design pattern that I thought was interesting: it simplifies sending data up to the cloud once device activity has died down. This fits in well with Golioth’s Pouch protocol and the applications we are targeting, so let’s take a look at how to put it all together.
Sending data when the device is idle
Most of the demos I have been working on for the past few months are based on the Pouch SDK, which enables Bluetooth-to-Cloud bidirectional data flow via Golioth Connectivity. The BLE-GATT demo in the Pouch SDK repository is my starting point for most of the things I am building. Out of the gate, I get data uplink (sending time series data up to Golioth), settings downlink (configuration data synced to each device), and Over-The-Air firmware updates. I also get to roam between different gateways thanks to how Pouch is architected.
I have also been working on wake-on-movement using the LIS2DH12 accelerometer on the Tikk board. When movement wakes the board, I want to take a reading and quickly go back to sleep. That’s relatively straightforward. But when should we send that data up to the cloud? Immediately? On a set schedule?
For the asset tracking applications I am targeting, I see a couple of themes:
- Efficient battery operation
- Low number of overall events
- Large time gaps between events (irregular intervals)
- Small data payloads
I think it makes the most sense for my battery and for data bandwidth to send data when the devices are no longer moving.
Design patterns for send-when-idle
This situation could be a good use case for an event-driven state machine, which you can build using built-in Zephyr features. The State Machine Framework (SMF) gives you all the tools you need to build a state machine that covers a variety of scenarios. But given how this sample requires relatively few states, a state machine is likely overkill.
Instead, I will lean on the idea of “delayable work” (insert joke here about me procrastinating writing this article). Delayable work items are part of the Workqueue feature of Zephyr that we have written about before. I think of them as “thread-like behavior without all the dedicated memory”. Honestly, setting up a thread is pretty easy in Zephyr, but workqueues make it even easier.
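As a quick refresher, here is the bare pattern in isolation. This is a minimal sketch with placeholder names; the real definitions for this project appear below:

#include <zephyr/kernel.h>

static void my_work_handler(struct k_work *work)
{
    /* Runs on the system workqueue thread once the delay expires */
}

/* One macro gives you the work item, its timer, and its handler binding */
K_WORK_DELAYABLE_DEFINE(my_work, my_work_handler);

void example(void)
{
    /* Start a 20 second countdown; calling again while pending is a no-op */
    k_work_schedule(&my_work, K_SECONDS(20));

    /* Restart the countdown from now, even if the item is already pending */
    k_work_reschedule(&my_work, K_SECONDS(20));
}

That difference between k_work_schedule and k_work_reschedule is the whole trick, as we’ll see in a moment.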
Using k_work_schedule to schedule Bluetooth syncs
Pouch initiates a connection to the cloud by advertising to the gateway that there is data available; when a gateway sees that flag, a connection starts. Normally in our sample, the device starts out by requesting a sync on a delay before going into the main loop. When a Pouch session has ended, we schedule another sync request for 20 seconds later:
static void pouch_event_handler(enum pouch_event event, void *ctx)
{
    if (POUCH_EVENT_SESSION_START == event)
    {
        /* A session is open: queue a JSON sensor entry and sync it */
        pouch_uplink_entry_write(".s/sensor",
                                 POUCH_CONTENT_TYPE_JSON,
                                 "{\"temp\":22}",
                                 sizeof("{\"temp\":22}") - 1,
                                 K_FOREVER);
        golioth_sync_to_cloud();
    }
    if (POUCH_EVENT_SESSION_END == event)
    {
        /* Session over: stop advertising the sync request flag and
         * schedule the next request 20 seconds from now
         */
        service_data.data.flags &= ~POUCH_GATT_ADV_FLAG_SYNC_REQUEST;
        bt_le_adv_update_data(ad, ARRAY_SIZE(ad), NULL, 0);
        k_work_schedule(&sync_request_work, K_SECONDS(20));
    }
}
The sync_request_work item is set up with the K_WORK_DELAYABLE_DEFINE macro, which declares the delayable work item and associates it with its handler. In this case, all the handler does is turn on sync request advertising for the device (which is, of course, configurable for your specific project).
void sync_request_work_handler(struct k_work *work)
{
    /* Raise the sync request flag and refresh the advertising payload */
    service_data.data.flags |= POUCH_GATT_ADV_FLAG_SYNC_REQUEST;
    bt_le_adv_update_data(ad, ARRAY_SIZE(ad), NULL, 0);
}

K_WORK_DELAYABLE_DEFINE(sync_request_work, sync_request_work_handler);
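For completeness, the initial sync request mentioned above is just a one-time schedule before the main loop starts. A minimal sketch, where the 5 second initial delay is an assumption for illustration:

int main(void)
{
    /* ... Bluetooth and Pouch initialization ... */

    /* Request the first sync shortly after boot */
    k_work_schedule(&sync_request_work, K_SECONDS(5));

    while (1) {
        k_sleep(K_FOREVER);
    }
    return 0;
}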
Waiting for downtime using k_work_reschedule
In my case, I want to trigger that same delayable sync_request_work item. This time, not on a fixed schedule, but when there is no more data coming in from the accelerometer on my board. Calling k_work_schedule each time the accelerometer fired (in an ISR) wouldn’t get me there: the first call starts the countdown (say 20 seconds, like in the sample shown above), and subsequent calls while the work item is still pending are ignored. The device would end up advertising for a sync every 20 seconds for as long as motion continues, spending radio time and battery right when the device is at its busiest.
Instead, I want to bundle all the readings and send them when the device goes quiet. Each time I get a hardware interrupt from the sensor and service the ISR, I reset the 20 second timer (or whatever your delay is). I can do this by calling k_work_reschedule instead of k_work_schedule. That “starts the timer over” each time it is called, effectively kicking the proverbial can down the road. The device only advertises that data is ready 20 seconds after the last reading. If an accelerometer event happens 21 seconds after the previous reading…no problem! The sync has already happened, and the new event starts a fresh 20 second timer that waits for the sensor to go quiet again.
static int wake_event_count;
static int64_t current_time_ms;

void lis2dh_int_callback(const struct device *dev, struct gpio_callback *cb, uint32_t pins)
{
    wake_event_count++;
    current_time_ms = k_uptime_get();
    LOG_DBG("wake = %d, time = %" PRId64 " ms", wake_event_count, current_time_ms);

    /* Restart the 20 second countdown; every new event pushes the sync out again */
    k_work_reschedule(&sync_request_work, K_SECONDS(20));
}
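For context, that callback is an ordinary Zephyr GPIO callback. A minimal sketch of wiring it up might look like the following; the lis2dh_int devicetree alias and the interrupt pin configuration are assumptions for illustration, and your board’s devicetree will differ:

#include <zephyr/drivers/gpio.h>

/* Hypothetical devicetree alias for the LIS2DH12 INT1 pin */
static const struct gpio_dt_spec accel_int =
    GPIO_DT_SPEC_GET(DT_ALIAS(lis2dh_int), gpios);

static struct gpio_callback accel_int_cb;

static int accel_int_init(void)
{
    if (!gpio_is_ready_dt(&accel_int)) {
        return -ENODEV;
    }

    gpio_pin_configure_dt(&accel_int, GPIO_INPUT);
    gpio_pin_interrupt_configure_dt(&accel_int, GPIO_INT_EDGE_TO_ACTIVE);

    /* Route the interrupt to the handler shown above */
    gpio_init_callback(&accel_int_cb, lis2dh_int_callback, BIT(accel_int.pin));
    gpio_add_callback(accel_int.port, &accel_int_cb);

    return 0;
}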
Future work
Already, I can hear the howls about all of the corner cases this presents, such as a hyperactive sensor that is taking tons of readings and never has a chance to synchronize with the cloud and offload its data. The howling is in my head as well! But there are a few things we can do.
First, I didn’t even mention how the data is being stored while we wait for a Bluetooth sync to occur. That can take many forms, including a message queue, a FIFO, a ZBus listener, or just stashing readings in memory somewhere. That is likely the next blog post in this journey. But wherever we put the data, we need to reliably pull it back out. If I really wanted to stir the pot, I might be so bold as to even suggest we offload data to local storage like an SD card (gasp/shock/horror).
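As a minimal sketch of one of those options (the reading struct and function names here are placeholders, not part of the Pouch SDK), a Zephyr message queue lets the ISR drop readings in and lets the Pouch session handler drain them when the sync finally happens:

#include <zephyr/kernel.h>

struct accel_reading {
    int64_t timestamp_ms;
    int16_t x, y, z;
};

/* Room for 64 readings, aligned for the 8-byte timestamp */
K_MSGQ_DEFINE(accel_msgq, sizeof(struct accel_reading), 64, 8);

/* Called from the ISR: store the reading, K_NO_WAIT so we never block */
static void store_reading(const struct accel_reading *reading)
{
    if (k_msgq_put(&accel_msgq, reading, K_NO_WAIT) != 0) {
        /* Queue full; decide whether to drop the oldest or the newest */
    }
}

/* Called at POUCH_EVENT_SESSION_START: drain everything into the uplink */
static void drain_readings(void)
{
    struct accel_reading reading;

    while (k_msgq_get(&accel_msgq, &reading, K_NO_WAIT) == 0) {
        /* Format and pouch_uplink_entry_write() each reading here */
    }
}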
If we want to set up for a higher data rate scenario, we could also put soft and hard limits on the number of data points that are allowed on the device before we initiate a sync (regardless of motion). Say we get to 100 data points (or some other definable limit) before we actually start advertising. Then the RTOS scheduler would be balancing between handling interrupts from the accelerometer and sending data over Bluetooth. RTOSes are made for such a task!
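A rough sketch of that hard limit is to check a counter in the ISR and force the work item to run immediately; the threshold here is hypothetical, and the counter would get reset in the session-end handler:

/* Hypothetical limit; see the settings discussion below */
#define MAX_BUFFERED_READINGS 100

static uint32_t buffered_readings;

void lis2dh_int_callback(const struct device *dev, struct gpio_callback *cb, uint32_t pins)
{
    buffered_readings++;

    if (buffered_readings >= MAX_BUFFERED_READINGS) {
        /* Hard limit hit: request a sync right now, motion or not */
        k_work_reschedule(&sync_request_work, K_NO_WAIT);
    } else {
        /* Otherwise keep pushing the sync out past the latest event */
        k_work_reschedule(&sync_request_work, K_SECONDS(20));
    }
}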
Speaking of definable limits, I also plan to include individual settings that can be configured on each device using Pouch downlink and the Golioth Settings service. These can be tweaked on a per project, per Blueprint, or per device basis. But more importantly, the configurability means that I am not stuck tweaking values on my bench; I can live update things like the following (see the sketch after this list):
- Delay after the last datapoint
- Upper limit on the number of datapoints before we advertise to the gateway
- Whether we display activity to the user with LEDs
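As a sketch of how those might land on the device (the field names and the apply function are assumptions for illustration; the actual keys would be registered with the Golioth Settings service and delivered over Pouch downlink):

/* Hypothetical device configuration, updated from the cloud */
struct app_settings {
    uint32_t idle_delay_s;  /* Delay after the last datapoint */
    uint32_t max_readings;  /* Upper limit before we advertise anyway */
    bool activity_leds;     /* Whether to show activity on the LEDs */
};

static struct app_settings settings = {
    .idle_delay_s = 20,
    .max_readings = 100,
    .activity_leds = true,
};

/* Called when a settings downlink delivers new values */
static void apply_settings(const struct app_settings *new_settings)
{
    settings = *new_settings;
}

With something like this in place, the ISR would use K_SECONDS(settings.idle_delay_s) instead of a hard-coded 20 seconds, and the hard limit above would read settings.max_readings.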
Finally, we will be reconstituting this data cloud-side using Golioth Pipelines. There are a lot of interesting things we can do to ensure the data looks like any other data coming from a Golioth device.
Did we pique your interest? Are you ready to start taking data and sending it at a convenient time? Head over to our forum to ask more about Bluetooth and share your next project idea.