This guest post is contributed by Vojislav Milivojević, an embedded software lead at IRNAS.

As embedded software engineers we usually have some associated hardware sitting next to our computer that is used for developing and testing our code. In most cases, this piece of hardware is connected to our computer with a so-called “programmer”, an additional tool that allows us to access the processors and controllers for which we are developing code. Here we will explore the relationship between the devices we are developing and a computer, but it won’t be a standard one: it will be a long-distance relationship.

I lead the firmware development at IRNAS, where we push the limits of efficient solution development on IoT devices, but since I live in a different country than the rest of my team, there are usually a lot of packages with PCBs going back and forth. While that is not a big problem, there are times when some pieces just cannot get to me quickly enough to meet customer demands. There have also been times where a specific LTE network is not available in my region. Overcoming this issue is usually done with remote desktop solutions that are not so efficient, or with some special equipment that in a nutshell is again a computer with some additional hardware. Since I needed such a solution, and none of the existing ones were able to give me a nice out-of-the-box experience, I decided to design and document a process that works for me and the complete IRNAS engineering team.

Using Segger tools

There are many solutions, commercial and open-source, that provide embedded development tools such as programmers, IDEs, logging features, etc. One of these solution providers is Segger, and their hardware sometimes comes as part of development boards, which is really nice. At IRNAS we tend to favor Segger J-Link tools as our ‘go-to’ solution for target flashing and debugging while building connected products. Besides that, Segger offers a range of very useful features for embedded developers, and one of these tools is Segger Tunnel mode. This is a remote server that allows users to connect to a J-Link programmer (and thus its connected target device) over the internet. This means a device located anywhere in the world can be debugged or brought up.

Mixing with Zephyr (west tool)

Since most of the projects I am working on use Zephyr RTOS, the west tool is used for flashing, debugging, and many other things. West is a meta-tool that abstracts away the software for different programmers, giving us the ability to easily flash multiple targets without needing to remember long command lines. West supports Segger J-Link for specific targets, and it can be selected as one of the offered runners. The good thing about west is that it lets us pass commands to the selected runner, which gives us the ability to fully utilize all the functions of the J-Link software.

Set up the hardware and software

In December of 2020 there was great news from Segger: the complete J-Link software became available for Linux on the ARM architecture, which means the Raspberry Pi is now supported as a host machine! The idea is to connect a J-Link programmer to a Raspberry Pi, add in some software, and we have ourselves a remote programming jig.

Components needed for this demo:

    • Raspberry Pi
    • J-Link programmer
    • Board with the target MCU

For the purposes of this demo, we will be using the Nordic Semiconductor nRF9160DK development kit since it already contains both a J-Link and the target MCU hardware. The board connects via USB to the Raspberry Pi which connects to power and Ethernet (WiFi is also an option).

nRF9160DK connected to Raspberry Pi

Now the J-Link software needs to be installed on the Raspberry Pi so it can work as a remote J-Link server. In the Raspberry Pi user home directory, download and un-tar the Segger utilities for the Raspberry Pi (choose the Linux ARM 32-bit TGZ archive). Then configure the udev rules as per the README.txt file in the JLink_Linux_Vxxx_arm directory.

$ wget --post-data 'accept_license_agreement=accepted&non_emb_ctr=confirmed&submit=Download+software'
$ tar xvf JLink_Linux_arm.tgz
$ cd JLink_Linux_V646g_arm/
$ cat README.txt
$ sudo cp 99-jlink.rules /etc/udev/rules.d/
$ sudo reboot

Next, it is time to start the remote server. On a GUI-based system, this can be done with a small application from Segger, but the good thing is that the CLI tool is also provided. I recommend checking all available options for this tool by starting it and then typing ? at the prompt.

pi@raspberrypi:~ $ JLinkRemoteServer
SEGGER J-Link Remote Server V7.22b
Compiled Jun 17 2021 17:32:35

'q' to quit '?' for help

Connected to J-Link with S/N 960012010

Waiting for client connections...
?
Command line options:
? - Prints the list of available command line options
-Port - Specifies listening port of J-Link Remote Server
-UseTunnel - Specifies if tunneled connection shall be used
-SelectEmuBySN - Specifies to connect to a J-Link with a specific S/N
-TunnelServer - Specify a tunnel server to connect to (default:
-TunnelBySN - Specifies to identify at tunnel server via J-Link S/N
-TunnelByName - Specifies to identify at tunnel server via custom name
-TunnelPW - Specifies to protect the connection with a password
-TunnelEncrypt - Specifies to encrypt any transferred data of a tunneled connection
-TunnelPort - Specifies to connect to a tunnel server listening on a specific port
-select - <USB/IP>[=<SN/Hostname>] Specify how to connect to J-Link

Before entering the command we need to think of a name for our tunnel and a password. For me, this will be tunnel name: remote_nrf91 and password: 19frn. Then start the remote server with the command:

JLinkRemoteServer -UseTunnel -TunnelByName remote_nrf91 -TunnelPW 19frn

Demo time

To test this remote flashing we will build a demo application on our host computer. The nRF Connect SDK (NCS), which is based on Zephyr RTOS, contains some sample applications, and we will use shell_module, which enables us to use shell commands over UART with the nRF9160. The selected application is located in the ncs/zephyr/samples/subsys/shell/shell_module folder of NCS. To build it for the nRF9160DK we will use the command:

west build -b nrf9160dk_nrf9160_ns -p

After that let’s flash the board that is connected to our remote Raspberry Pi. The default runner for flashing the nRF9160DK is nrfjprog, but instead of that, we want to use the J-Link supported runner. Since the west tool does not natively support remote flashing, parameters will be sent directly to the J-Link software using the --tool-opt flag.

west flash -r jlink --tool-opt="ip tunnel:remote_nrf91:19frn"

This will flash our target MCU that is connected to J-Link and Raspberry Pi. To validate the result, open the serial terminal on Raspberry Pi and see if shell commands are working.

minicom -D /dev/ttyACM1 -b 115200


While Segger provides very interesting tools for embedded developers, there is still some work that needs to be done so they are properly integrated into our development workflow. Remote flashing is just one part of all capabilities, so this can be a starting point for a great remote development setup!

Storing and retrieving data from the cloud is the foundational concept of the Internet of Things. When two machines talk to one another they need to settle on a data format. Here at Golioth that means your microcontroller of choice is going to be sending and receiving JSON.

Wait a minute. JSON and microcontrollers? Take a breath, dry those sweaty palms, and keep reading! When it comes to working with the C language, parsing strings for punctuation delimiters doesn’t sound like much fun. That’s why it’s really nice that Zephyr has a built-in JSON library which does most of the work for you. The only thing you really need to do is to make a struct that tells Zephyr how the incoming data will be organized. At first blush this seems tedious, but what you get out of it is validation of both the key and the received value type.

The nice thing is that once you get the hang of it, parsing data and accessing those values becomes very easy. So today I’ll walk you through the process.

Why do we need to parse JSON?

It’s very easy to get data to and from the Golioth Cloud. One of the first examples you should try out is the lightdb/get sample that simply asks for an endpoint called /counter. The string you’ll receive is a key-value pair that looks like this:

{"counter":3}
For data this simple it would be trivial to iterate the string, look for the colon, and then test the following characters to make sure they are numbers. But that’s clunky, and it breaks down when you start adding more key-value pairs and nested values. The JSON module will make sure the value you receive matches the key you were expecting, and that the variable type (string, int, or bool) is validated. You’ll want to use this library for sending complex data back to the cloud too: it includes an encoder that will build the JSON packet for you.

Configuring the JSON library

Enabling the library in your project is very simple. First, turn on the library in your prj.conf file:

CONFIG_JSON_LIBRARY=y
Next you include the header file in main.c of your app:

#include <data/json.h>

Setting up the struct

This part is a bit hard to wrap your head around because the helper code looks like a foreign language compared to the readability of a JSON object. Here is an overview of what we need to accomplish in this section:

  1. Build a set of all key-value pairs. This includes the name of the key and the data type of the value.
  2. Package up all of the key-value pairs into a struct (which might itself include other structs) to match the way the expected JSON package will be structured.
  3. Tell the library about our struct, which it will use as a map to encode or parse the JSON string.
  4. Give the library a pointer to store the data. For encoding, this is a string pointer, for decoding this is a struct pointer.

Here’s a simple packet that might be received by a temperature controller listening to the Golioth Cloud for user settings:

{
  "unit": "c",
  "value": 37
}

In our C code we begin by describing all of the key-value pairs we expect to find in the JSON packet. Notice that we’re declaring variables. The type of each variable must match the expected data type of the JSON value, and the name of each variable must match the expected name of the JSON key. Note that this isn’t something inherent about C; it’s how the library matches up the incoming data and rejects things that don’t fit that mold.

struct temperature {
  const char *unit;
  int value;
};

Next we declare a descriptor that will match the expected structure of the JSON packet. We invoke the JSON_OBJ_DESCR_PRIM() macro for each key-value pair. Here you can see we feed it struct temperature, the name of the field from the struct, and a token that indicates the expected data type.

static const struct json_obj_descr temperature_descr[] = {
  JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
  JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

Parsing JSON

Now we have a struct that contains all of our expected keys, and a descriptor struct that maps out the expected structure of the JSON packet. We’re ready to test it out!

Here is a concise bit of sample code to test out our setup. Note that I’m using Zephyr’s logging system to display values.

/* decode a single object */
uint8_t json_msg[] = "{\"unit\":\"c\",\"value\":30}";
struct temperature temp_results;

int ret = json_obj_parse(json_msg, sizeof(json_msg),
			 temperature_descr, ARRAY_SIZE(temperature_descr),
			 &temp_results);

if (ret < 0) {
	LOG_ERR("JSON Parse Error: %d", ret);
} else {
	LOG_INF("json_obj_parse return code: %d", ret);
	LOG_INF("Unit: %s", temp_results.unit);
	LOG_INF("Value: %d", temp_results.value);
}

On line 5 of the example we call json_obj_parse(), passing it our JSON string, the length of that string, the descriptor we previously set up, the length of that descriptor (number of elements), and a pointer to the struct where the results will be stored. The rest of the code tests whether an error code was returned and prints out the values if the parsing was successful.

This works if the data is just right. However, checking for a negative error code isn’t enough. For instance, if the value key of the JSON packet is received as a string ("30") instead of an int (30), the return code will not be negative, but that member of the results struct will never be written, and using it can crash the program at runtime. This is a feature, not a bug: it lets us parse JSON even when it’s not quite right. But we need to do more to test that the data is valid. Let’s do that, as well as looking at an example of parsing nested data.

Nested JSON

Our heater control example probably needs more than a temperature setting, it needs an on/off setting. Here’s what that JSON might look like:

{
  "heater_on": true,
  "heater_temp": {
    "unit": "c",
    "value": 30
  }
}

We build the struct in much the same way, except there are two steps here. First we set up the struct (and the descriptor) for the inner “heater_temp” object, then we set up another struct and descriptor that map the “heater_on” key-value pair and the “heater_temp” object:

struct temperature {
	const char *unit;
	int value;
};

struct heater_ctl {
	bool heater_on;
	struct temperature heater_temp;
};

static const struct json_obj_descr temperature_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
	JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

static const struct json_obj_descr heater_unit_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct heater_ctl, heater_on, JSON_TOK_TRUE),
	JSON_OBJ_DESCR_OBJECT(struct heater_ctl, heater_temp, temperature_descr),
};

Note that we’re using the JSON_OBJ_DESCR_OBJECT() macro, which maps the temperature descriptor onto the nested data. Now we can parse our data:

uint8_t str[] = "{\"heater_on\":true,\"heater_temp\":{\"unit\":\"c\",\"value\":30}}";
struct heater_ctl heater_settings;
int expected_return_code = (1 << ARRAY_SIZE(heater_unit_descr)) - 1;
int ret = json_obj_parse(str, sizeof(str),
			 heater_unit_descr, ARRAY_SIZE(heater_unit_descr),
			 &heater_settings);

if (ret < 0) {
	LOG_ERR("JSON Parse Error: %d", ret);
} else if (ret != expected_return_code) {
	LOG_ERR("Not all values decoded; Expected return code %d but got %d", expected_return_code, ret);
} else {
	LOG_INF("json_obj_parse return code: %d", ret);
	LOG_INF("calculated return code: %d", expected_return_code);
	if (heater_settings.heater_on) {
		LOG_INF("Heater On: True");
	} else {
		LOG_INF("Heater On: False");
	}
	LOG_INF("Unit: %s", heater_settings.heater_temp.unit);
	LOG_INF("Value: %d", heater_settings.heater_temp.value);
}

You must use JSON parse return codes to validate your data!

The json_obj_parse() function returns a positive value that indicates which tokens of the JSON object were successfully found and validated. Each token is represented by one bit in the return code.

The nested JSON example above presents a gotcha. We expect the parser to report back on the three tokens that are important to us (heater_on, unit, and value; in that order). What it actually does is report back on the tokens found in the top-level struct. So in this case a return code of 0b11 indicates that the parser found heater_on and heater_temp, the key for the nested data. It might have also found unit and value, or they may not have been present. We just don’t know for sure.

The solution when decoding JSON is to pretend the key to the nested struct doesn’t exist. Instead, we can just declare our important values in a single flat struct and descriptor:

struct temperature {
	bool heater_on;
	const char *unit;
	int value;
};

static const struct json_obj_descr temperature_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct temperature, heater_on, JSON_TOK_TRUE),
	JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
	JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

How Zephyr JSON library return codes work

We will receive a return code indicating whether the keys in the descriptor were successfully decoded: the parser sets the corresponding bit when each value is validated. So we want to see bits 0, 1, and 2 set in the return code (0b111). If heater_on is not validated, we would receive 0b110. Here is an illustration of the gotcha (top return code) and the fix (bottom return code).

It’s really important to check these bits before using the value. If we fail to do so, we’ll be using uninitialized values (bad data) or reading from unallocated memory (runtime crash).

So why did I even show you how to build structs for nested JSON? You need it when encoding data. The json_obj_encode() function will take the nested descriptor and encode a JSON string exactly as we expect to see it. It doesn’t matter as much when you’re in control of the data scheme used on the cloud side, but if you need to match an existing standard, this makes the encoding a snap.

Things to keep in mind, and further reading

When working with this library, remember that the variables you declare in structs must match the keys in the JSON string. The values must also match up. The JSON library currently supports three data types: string, int, and bool. In our example, you would probably want to use a float for Celsius temperature settings, but it’s not possible to parse that data type with this library.

If you’re ambitious, adding this support would be a great way to contribute to the Zephyr open source project! But we can work around the problem. When accessing lightDB state values on Golioth, it’s possible to directly request the value just by using the specific endpoint–in our example: .d/heat_control/heater_temp/value.

The Zephyr JSON library includes other very helpful features like array handling. I haven’t been able to find additional documentation on these features beyond the JSON API reference, but the automated tests are a great place to see all of the functions and macros at play. There is a ton of utility built in, and it’s worth getting to know the library by building a few examples. Once you get the hang of it, this is a very accessible way to make sure you can use incoming JSON data and know that you have dependable values.

See it in action

Check out our recent video reviewing some of the JSON basics described above!

Zephyr has a number of tools to aid in debugging during your development process. Today we’re focusing on the most available and widely useful of these: printing messages to a terminal and enabling logging messages.

New to Golioth? Sign up for our newsletter to keep learning more about IoT development or create your free Golioth account to start building now.

printk() is the printf() of the Zephyr world

Printing useful messages using printf() is a time-tested practice for getting programs up and running. Some frown upon using this as the main debugging approach, but don’t discount how incredibly useful it is as a first step.

printk("String: %s Length: %zd Pointer: %p\n", my_str, sizeof(my_str), my_str);

Zephyr builds this functionality right in with the `printk()` command so that you can have immediate feedback. These messages print out over a serial connection using the same style of conversion syntax as printf(), which automatically converts data types into a printable representation. In my example I’m debugging a string in C by printing out the string itself, its length, and its pointer address. The Linux docs are handy for those looking to drill down into the specifics of printk formatting.

Tip: Pay attention to Zephyr return codes!

Throughout the Zephyr samples you’ll see that it’s standard practice to test return codes and print them out when they are non-zero. I have found these return codes to be indispensable when troubleshooting subsystems like I2C. The paradigm most often used is:

int ret = gpio_pin_configure(dev, PIN, GPIO_OUTPUT_ACTIVE | FLAGS);
if (ret < 0) {
	printk("Pin config failed: %d", ret);
}

You can look up error codes in the Zephyr docs. I was getting a -88, which is ENOSYS (function not implemented).

Use the Logging Subsystem as a Powerful Debugging Tool

Once you’ve seen the Zephyr logging subsystem, there is no replacement. Log messages automatically include a timestamp and report on what part of the application they come from. Data can be included in a few different ways, and these messages are queued so that they don’t interfere with time-dependent parts of your program.

Perhaps the best part is that you specify the importance of each message, allowing you to choose at compile time which logging messages will be included in the binary. This means you can pepper your program with debug-level messages and choose to leave them out of the production builds.

How to Enable Logging in Zephyr

To turn on logging, we need to do three things: tell our Kconfig that we want to use the subsystem, include the logging header file, and declare the module name.

Step 1: Add CONFIG_LOG=y to your project’s prj.conf file.

Like all subsystems in Zephyr, we need to tell the build system that we want to use it. The easiest way to do this is by adding CONFIG_LOG=y to the prj.conf file in the project directory.

Step 2: Add the header file to the top of your C file: #include <logging/log.h>

This one is straightforward. We want to use a C library so we have to include it in the main.c (and any other C files in the project).

Step 3: Declare the module

We need to tell the logging module where the message is coming from using a macro: LOG_MODULE_REGISTER(logging_blog);

There are a couple of important things to understand here. First, you can use any unique token you want in this macro, but make sure you don’t surround it in quotes. Second, as I just mentioned, this needs to be unique (in this example it is logging_blog, but it could be any arbitrary phrase). If you have additional C files in your project, you either need to register different tokens, or, more commonly, just declare the file as part of the original module: LOG_MODULE_DECLARE(logging_blog);

There is an optional second argument and this is where you choose which logging events will be compiled into the application. By default, debug messages will not be shown, so you can declare your module to enable them: LOG_MODULE_REGISTER(logging_blog, LOG_LEVEL_DBG);. Logging levels run from 0 to 4 using the following suffixes: _NONE, _ERR, _WRN, _INF, _DBG.

How to use the Logging subsystem in Zephyr

Using the logging subsystem is just as easy as using printf: LOG_INF("Count: %d", count);. The log outputs for this example look like this:

[00:01:52.439,000] <inf> logging_blog: Count: 112
[00:01:53.439,000] <inf> logging_blog: Count: 113
[00:01:54.439,000] <inf> logging_blog: Count: 114
[00:01:55.439,000] <inf> logging_blog: Count: 115
[00:01:56.439,000] <inf> logging_blog: Count: 116
[00:01:57.439,000] <inf> logging_blog: Count: 117
[00:01:58.439,000] <inf> logging_blog: Count: 118
[00:01:59.439,000] <inf> logging_blog: Count: 119

We begin each line with a timestamp down to microseconds, the severity level (inf for INFO), followed by our printf-style message output. Look at the timestamps in this example–they are exactly one second apart. This drives home the power of the queuing system: the messages arrive at the terminal slightly delayed, but they don’t alter the k_msleep() timing that was used for this example.

You can use four different built-in severity levels for your logs by choosing a different macro: LOG_ERR(), LOG_WRN(), LOG_INF(), LOG_DBG(). Setting these different levels allows you to choose what gets included at compile time. If you made all your debugging messages using printk(), they will always compile into your code until you remove them from the C file. If you use LOG_DBG(), you can choose not to include that level of logging when you compile the production version of your code.

By default, debug-level messages will not be shown. As mentioned earlier, you have the option of specifying the maximum severity level to show when you register your modules.

Hex dumping via logs

Logging lets you dump data arrays without the need to turn that data into something digestible first.

LOG_HEXDUMP_INF(my_data, sizeof(my_data), "Non-printable:");

I’ve given it an array of values, the length of that array, and a string to use as the label for the log message. The logging subsystem will automatically show the hexadecimal representation of that data, as well as a string representation to the right, much as you’d expect from any hexdump program.

[00:00:00.427,000] <inf> logging_blog: Non-printable:
                                       01 fe 12 00 27                                   |....'

Other debugging tools for next time

Using logging can give you enough feedback to solve the majority of your development issues in Zephyr, but of course there are other tools available. In future posts we’ll discuss using virtual devices via QEMU to speed up debugging sessions because you won’t have to flash newly compiled binaries to hardware. And we plan to dive into on-chip debugging that lets you set break points and step through your code. Stay tuned!

See it in action: Zephyr Debugging demo video

Zephyr wrapped up in a box

If you’re like me, you installed Zephyr and began making your own changes to the sample applications that came with the toolchain. But at some point–either for personal project repository tracking or building out a professional project–your program starts to take shape. You want to move it to its own standalone directory. It’s not immediately clear how to do that, so today we’ll dive into the nuts and bolts. (Spoiler alert: it’s pretty easy.)

There be dragons in developing inside the Zephyr directory

For those new to Zephyr, getting-started examples like the “Blinky” program are located inside the ~/zephyrproject/zephyr/samples directory. I knew I shouldn’t just be making (and forgetting about) my own folders inside. It took losing a few programs before I did anything about it. I wanted to reinstall the toolchain and I removed my zephyrproject directory without a thought for my poor, non-revision-controlled, early experiments with the RTOS. Don’t be me.

One option is to run git init in your own subdir within the Zephyr tree. But I feel like the work I’m doing should be separate from the toolchain I’m using. So I changed the formula: I set up my own tree and told it where Zephyr is installed.

Step by step

By far the easiest thing to do is to copy one of the sample directories. For my fellow Linux enthusiasts this looks something like cp -r ~/zephyrproject/zephyr/samples/basic/blinky ~/.

Here are the more verbose steps:

  1. Create a directory for your app
  2. Add CMakeLists.txt
  3. Create a src subdirectory and add main.c file to it

That sets up the directory. You now have a folder in your home directory called blinky. Like many Zephyr samples, this includes the source files (src folder), a project file (prj.conf), CMake directives (CMakeLists.txt), a readme file (README.rst), and a yaml file (sample.yaml). More complex examples might have a boards directory (boards) that holds hardware-specific configurations.

Notably absent is any reference to the Zephyr SDK and toolchain to build your project. So we need to tell it where to find Zephyr in order to build your app inside of that directory.

Each time you begin a new terminal session, source this helper file:

source ~/zephyrproject/zephyr/

If you are working with the Nordic fork of Zephyr, simply source the same file from that tree:

source ~/zephyr-nrf/zephyr/

Don’t forget that each time you begin a new terminal session (no matter what tree you’re in) you need to enable your Python virtual environment and set up the build environment. Here’s an example of how I start an ESP32 development session:

mike@golioth ~ $ cd ~/blinky
mike@golioth ~/blinky $ source ~/zephyrproject/.venv/bin/activate
(.venv) mike@golioth ~/blinky $ source ~/zephyrproject/zephyr/
(.venv) mike@golioth ~/blinky $ export ESPRESSIF_TOOLCHAIN_PATH="/home/mike/.espressif/tools/zephyr"
(.venv) mike@golioth ~/blinky $ export ZEPHYR_TOOLCHAIN_VARIANT="espressif"
(.venv) mike@golioth ~/blinky $ west build -b esp32 . -p

Here’s the same process, but for a Nordic-based board. Note two things: I didn’t need to set up the toolchain variables like I did with the ESP32, and the binaries for different hardware can be built in the same tree using different toolchains, a powerful perk of using Zephyr.

mike@golioth ~ $ cd ~/blinky
mike@golioth ~/blinky $ source ~/zephyrproject/.venv/bin/activate
(.venv) mike@golioth ~/blinky $ source ~/zephyr-nrf/zephyr/
(.venv) mike@golioth ~/blinky $ west build -b thingy91_nrf9160_ns . -p

TIP: The ESP32 example above will not build the blinky app unless you add an esp32.overlay file to configure the &led0 alias that main.c needs to attach to an LED. The Thingy:91 doesn’t have this limitation: that particular dev board has an LED included, so &led0 is already specified in the dts file within the Zephyr toolchain.

Before we wrap up, let’s spend a beat discussing those CMake and Kconfig files.

Configuring a Zephyr project

I’ve moved the files of my app to a different location, but the build process remains the same. Every project needs a CMakeLists.txt file to specify the CMake version, designate this as a Zephyr project, and list the C files to include in the build. Beginners do not want to make this file themselves, so copy it from a known-working sample. You also name your project in this file using the project() designator, so go change that name now.
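For reference, the minimal CMakeLists.txt you copy from a sample typically looks something like this (a sketch based on the standard Zephyr app skeleton; swap blinky for your own project name):

```cmake
cmake_minimum_required(VERSION 3.20.0)

# Pulls in the Zephyr build system; ZEPHYR_BASE is set when you
# source the environment helper from your Zephyr tree.
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})

project(blinky)

# List every C file in your app here.
target_sources(app PRIVATE src/main.c)
```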

Projects usually also need to include a Kconfig file. This is the prj.conf that you see in a lot of projects, and for blinky that simply turns on the gpio subsystem:

CONFIG_GPIO=y
You may choose to add a boards subdirectory and store .conf files with specific board names. These files configure subsystems, while overlay files in the same directory designate hardware pins and peripherals. These two file types are key to making your Zephyr app portable across different hardware.

Further reading

Golioth has a “clean application” guide to get you started and it walks through the prj.conf file settings necessary for your app to connect to the Golioth Cloud. Of course the deeper dive into this issue is the Zephyr docs page on application development that drills down into every part of out-of-tree coding. But what I’ve covered today should be enough to get you started.

If you get stuck, the Golioth forum is a great place to ask for help on this topic. You’re also invited to join us for Golioth Office Hours, every Wednesday at 10 AM Pacific time on our Discord channel. We love chatting about the work you’re doing, and everyone wants an early look at your hardware demos during development!

Getting your ESP32 GPIO pins working with Zephyr is easy, and using a devicetree overlay file to do so makes it painless to change pins (or even boards/architectures) in the future. Today we’re looking at a simple overlay file for the ESP32 architecture and talking about the syntax used to choose input and output pins.

Overlay File Example

Zephyr uses a description language called devicetree to abstract hardware so your code is extremely portable. Since the ESP32 is a common architecture, this abstraction is already done for us. But we need to tell Zephyr what we plan to do with the GPIO pins by writing a devicetree overlay file.

An overlay file assigns an alias to a physical pin on a microcontroller and configures hardware options for that pin. Here’s an example of the overlay file for the two buttons on a TTGO T-Display board which are connected to GPIO0 and GPIO35.

/ {
    aliases {
        sw0 = &button0;
        sw1 = &button1;
    };

    gpio_keys {
        compatible = "gpio-keys";
        button0: button_0 {
            gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;
            label = "Button 0";
        };
        button1: button_1 {
            gpios = <&gpio1 3 GPIO_ACTIVE_LOW>;
            label = "Button 1";
        };
    };
};

Let’s focus on assigning the alias to a specific pin. You can see at the top where sw0 and sw1 are set. These names are commonly used in Zephyr samples; for instance, assigning sw0 here will make our board compatible with the basic/button sample. For us, the important parts are the two gpios properties, where the actual port, pin, and pin properties are assigned. One thing you should notice: what happened to GPIO35? Let’s get into that in the next section.

Please note that explaining every part of this overlay file is beyond the scope of this article, but you can see a functional overview in our overlay file video series and dig deeper with Zephyr’s Introduction to devicetree documentation.

ESP32 Pin Numbering

ESP32 module pinout


In this diagram you can see how the ESP32-WROOM module (PDF) pins are named using an IO# format. The ESP32 splits these GPIO pins between two different GPIO ports. In Zephyr, the port for the first 32 GPIO pins is called &gpio0, and the port for the remaining eight GPIO pins is called &gpio1 (both zero-indexed):

  • GPIO0..GPIO31 → port &gpio0, pins 0..31
  • GPIO32..GPIO39 → port &gpio1, pins 0..7

Following that scheme, it’s easy to translate the pins I need from the TTGO T-Display board (GPIO0 and GPIO35) into the numbers the devicetree understands. Because I’m using one of the pins numbered higher than GPIO31, I must switch to port 1 and adjust the pin number to begin counting again from zero – just subtract 32 from GPIO35 to get pin number 3.

  • &gpio0 0
  • &gpio1 3

Pin Functions

The overlay file is also where you set up the expected behavior of the pin. Most importantly, this tells Zephyr whether the pin is active high or active low. This setting applies whether the pin is an input or an output.

In the case of my TTGO T-Display board, there are pull-up resistors on the circuit board and pressing the button pulls the pin low, so these are “active low” buttons. Other boards might have pull-down resistors where pressing the button pulls the pin high. Zephyr lets you use the same application code for both boards, and the actual state is translated correctly by the overlay file.

There are other options available, including the ability to turn the ESP32’s internal pull-up/down resistors on. These flags can be added using logical OR:

gpios = <&gpio0 25 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;

Zephyr also makes interrupt-driven GPIO a snap. For more on this, study the basic/button sample, which turns on the interrupts and adds a callback to the sw0 pin.

Initializing and Calling the Pins in Your Program

Before we finish up, let’s talk about accessing the pins you have set up in your overlay file. In main.c we need to get the alias and set up a struct that understands devicetree. From there, it’s the normal procedure of initializing the GPIO, and then acting on it: polling an input pin, or changing the state of an output.

First, we can define a node from the overlay file alias (esp32.overlay). That node is used to populate a struct that holds the port, pin number, and configuration of the GPIO pin.

#include <drivers/gpio.h>
#define SW0_NODE DT_ALIAS(sw0)
static const struct gpio_dt_spec button0 = GPIO_DT_SPEC_GET_OR(SW0_NODE, gpios, {0});

Next, we initialize the pin in main.c. This function uses the port, pin number, and flags (e.g. GPIO_ACTIVE_LOW) stored in the struct. Note that we’ve added GPIO_INPUT here as an extra flag that sets the direction of the pin.

gpio_pin_configure_dt(&button0, GPIO_INPUT);

To act on changes to the pin, we can poll its value and do something if it is in the active state.

if (gpio_pin_get_dt(&button0) > 0)

If we had set this pin as an output, the state changes happen in much the same way, but utilize a function to set the state: gpio_pin_set_dt(spec, value). Notice that the functions and macros that Zephyr uses here all include “dt” in them. This indicates that they are specialized to operate with the devicetree. There are functions to directly set up and manipulate GPIO without using an overlay file, but then you also lose out on the benefits of software portability. To fill in your knowledge around this, take a look at the GPIO section of the Zephyr docs.

ESP32 Overlays are a Snap

Pulling off an ESP32 demo is a snap now that you know how to address the pins. This is a great first adventure into how overlay files work, but of course they are far more powerful than buttons and LEDs. Sensors, displays, UARTs and just about everything else can be plumbed into Zephyr using the overlay file in order to take advantage of the libraries and drivers already present in the RTOS.

Troubleshooting high complexity systems like Zephyr requires more thorough tools. Menuconfig allows users to see the layers of their system and adjust settings without requiring a complete system recompilation.

The troubleshoot loop

Modify, compile, test.

Modify, compile, test.

Modify, compile, test.

Modify, compile, test.

How do we break out of this loop of trying to change different settings in a program, recompiling the entire thing, and then waiting for a build to finish? Sure, there are some tools to modify things if you’re step debugging, such as changing parameters in memory. But you can’t go and allocate new memory after compiling normally. So what happens when you need to change things? You find the #define in the code, change the parameter, and recompile. What a slow process!

Moving up the complexity stack

We move up the “complexity stack” from a bare-metal device to running a Real Time Operating System (RTOS) in order to get access to higher level functions. Not only does this allow us to abstract things like network interfaces and target different types of hardware, but it also allows us to add layers of software that would be untenable when running bare-metal firmware. The downside, of course, is that it’s more complex.

When you’re trying to figure out what is going wrong in a complex system like Zephyr, it can mean chasing problems through many layers of functions and threads. It’s hard to keep track of where things are and what is “in charge” when it comes time to change things.

Enter Menuconfig

Menuconfig is a tool borrowed from Linux development that works in a similar way: a high complexity system that needs some level of organization. Obviously, in full Linux systems, the complexity often will be even higher than in an RTOS. In the video below, Marcin shows how he uses Menuconfig to turn features on and off during debugging, including with the Golioth “hello” example. As recommended in the video, new Zephyr users can also utilize Menuconfig to explore the system and which characteristics are enabled and available.
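Settings changed interactively in menuconfig don’t survive a pristine build, so the usual workflow is to explore with `west build -t menuconfig` and then persist the symbols you settle on in prj.conf. A minimal sketch (these are standard Zephyr Kconfig symbols, but which ones you actually need depends on your application):

```
# prj.conf — persist the settings you found in menuconfig
CONFIG_GPIO=y
CONFIG_LOG=y
CONFIG_LOG_DEFAULT_LEVEL=4
```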



A possible solution

Let’s pretend you’re in the middle of a global chip shortage.

Surprise! There’s no need to pretend, as we are all currently in the middle of a global chip shortage. Right now it’s very difficult to source certain components.

“Why don’t hardware makers just switch out the components when they can’t source them during the chip shortage? In fact, why don’t people switch chips on a regular basis?”

As a generalization, switching costs for embedded devices are very high. If we were able to magically solve all of the switching costs for the hardware, you’d still need to deal with the switching costs of firmware and cloud platforms. These are often even more dire than the hardware switching costs. It’s significant at both an individual level (rewriting firmware to target different architectures and board setups) and at an institutional level (maintaining different platforms and interoperability).

Operating systems and Real Time Operating Systems (RTOS) help by abstracting away a lot of the individual hardware details. When a new device is added to an RTOS, it needs to fit within the constraints of the system. If your board has an i2c sensor on it, you need to ensure your supporting firmware for that board or chipset is capable of working with the elements of the RTOS. Then you can take advantage of the drivers already written for other boards/chipsets on the platform. Assuming you are willing to work within that system, you can start to supercharge your development. It’s possible to switch out components quickly and confidently, helping to alleviate the woes of the chip shortage currently underway.
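As a sketch of what that looks like in practice, here is an overlay fragment that hangs a Bosch BME280 sensor (a binding Zephyr already ships) off an I2C bus. The bus label and address are assumptions that depend on your board:

```
&i2c0 {
    status = "okay";

    bme280@76 {
        compatible = "bosch,bme280";
        reg = <0x76>;
    };
};
```

Once the node exists, the in-tree driver handles the sensor, and application code talks to it through the generic sensor API rather than chip-specific registers.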

Making the switch

Let’s say you have a board using an ESP32 module. Due to sourcing problems, you can no longer source a particular LED on your board, but you don’t want to change your PCB. Instead, you ask a technician to wire an extra LED to a spare pin on your production board that has a larger landing area. You need to build a firmware image to drive a different pin on the microcontroller than you previously were using. Now “LED2” (as it’s called in your program) is not driving pin 18, but is instead driving pin 22. With Zephyr overlays, the switch will take 5 minutes. As Marcin shows in the video below, the device tree overlay is where we map the signals internal to the firmware to the physical pins being used.
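That five-minute fix might look something like this overlay fragment (the node and label names here are hypothetical; yours come from your board’s devicetree):

```
/ {
    leds {
        compatible = "gpio-leds";
        led2: led_2 {
            /* was <&gpio0 18 ...>; now driving the bodge-wired pin 22 */
            gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>;
            label = "LED 2";
        };
    };
};
```

Because the application refers to the led2 alias rather than a pin number, nothing else in the firmware changes; rebuild with the overlay in place and flash.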

Now let’s say you cannot source the ESP32 at all, for some reason. You could create a new overlay file for a different target that works with Zephyr, assign the pins to target the functions you need on a new PCB containing a different chip, and then target that device. The time consuming aspect would be checking all of the functions are performing the same as your previous platform. But once you have decided on a new platform, assigning pins and functions to your new device would occur through overlay files.

How to use Zephyr Overlays

In the video below, we walk through the location and function of overlays in Zephyr. Marcin explains that customization of firmware images for particular hardware targets can be as simple as a different flag on the command line. In this particular example, we are showing how to change the pins for the ESP32 demo. Previously the ESP32 overlay was shown as part of our LightDB Sample code (docs), which targeted an ESP32-DevKitC in that video.

About The Zephyr Project

The Zephyr Project is a popular and open source Real Time Operating System (RTOS) that enables complex features and easy connectivity to embedded devices.  The project is focused on vendor participation, long-term support, and in-depth security development life cycles for products.

About Golioth

Golioth helps users to speed up development and increase the chances that pilots will be put into production with a commercial IoT development platform built for scale. We offer standardized interfaces for connecting embedded devices to the cloud and build out software ecosystems that allow your projects to get to market faster. Golioth uses Zephyr as part of the Golioth SDK to bootstrap application examples and show how to utilize the range of networking features Golioth enables via APIs.