The Zephyr Developer Summit (ZDS) is coming up June 7-9, 2022, at the Computer History Museum in Mountain View, California. Golioth will be there, and we’re very excited to interact with fellow users, developers, and stakeholders in the open source real-time operating system (RTOS) known as Zephyr!

We love Zephyr

People reading this blog will not be surprised to know that we love Zephyr. We write about it quite often, and it is the basis of our device SDK. As a result, many of our samples and demos are built using Zephyr. We often talk about Zephyr being an indicator that a hardware device will work with Golioth: all you need is a network connection, a board running Zephyr, and a little bit of storage overhead to hold the Golioth code. It’s the hardware interoperability of Zephyr that allows Golioth users to target a wide range of platforms, including microcontrollers from Espressif, Infineon, Intel, Microchip, NXP, Nordic Semiconductor…and more being added every day!

Our plans at ZDS

We’re excited to be returning to ZDS. Last year we officially announced Golioth to the world at ZDS, and talked about how our platform works within the Zephyr ecosystem. We hope to have another year of connection, this time in person and online. Let’s look at how we’ll be participating.

Sponsoring/Showcase

We are helping to sponsor ZDS this year. We believe in the mission of the project and the conference and wanted to be part of it. We will also be showcasing Golioth at a vendor table at the conference. If you would like to see Golioth in action, you can stop by at any time to ask questions and see demos. You can, of course, also try out Golioth at any time using our Dev Tier plan, which gives anyone up to 50 free devices on the platform.

Giving Talks

We will be presenting a range of talks at ZDS:

  • What chip shortage? How we use Zephyr for truly modular hardware
    • Chris and Mike from Developer Relations will highlight the Aludel, an internal hardware platform we’ve built as a customizable solution that can switch out hardware pieces without major redesign. This modular hardware showcases a path for hardware and firmware teams to unify their codebase using Zephyr while targeting a wide range of hardware. Being able to swap out a sensor, microcontroller, or radio but keep the main board, or go from outdoor air monitoring to indoor monitoring is really powerful. Zephyr makes it much easier to create alternate builds and manage firmware pipelines to hardware variants.
  • Connecting Zephyr Logging to the Cloud over Constrained Channels
    • Our resident Zephyr expert Marcin will cover an approach to preparing Zephyr logging messages for transmission through a constrained networking layer, such as a cellular connection. This includes CBOR compression on all logging messages, including special handling around binary payloads. There is also an interface to a CoAP library to take advantage of smaller payloads and standardized format to a cloud backend. Additional tooling is included for selectable acknowledgement of messages, to handle high priority and high traffic scenarios.
  • Zephyr <3 Internet: How Zephyr speeds implementation for new IoT devices
    • I (Jonathan, CEO) will make a case to people outside of the Zephyr ecosystem for why they should adopt the platform, contrasting it with the difficulties of other RTOS solutions. Networking concepts are so baked in that they fundamentally change the cost for anyone buying into the ecosystem. From vendors adding modems to developers building apps, the underlying framework saves time and engineering complexity.
  • End-to-end IoT development with Zephyr
    • Founding engineer Alvaro will cover the options for getting a Zephyr app connected (WiFi, Ethernet, Cellular), selecting the right data encoding (JSON/CBOR), securing the data transfer (DTLS/TLS), and choosing a protocol (HTTP/MQTT/CoAP). But that’s not the end of the story: the cloud needs to manage which devices are allowed to connect, consume the data being received, open up options for using that data, and stay aware of the continued state of the hardware. And once you have the data, you need to build a user-facing application on top of it.

Giving a workshop

Hands-on demos are a critical part of understanding a new system. This is true of both Zephyr and of Golioth. We wanted to showcase how Golioth works to Zephyr users, while also helping people get a real piece of hardware talking to the cloud. We’re giving a workshop called “Hands-on with Zephyr-based IoT Hardware – Data Goes in, Data Comes Out, Data Goes Up”. This is a hands-on developer training showing how to get a finished piece of hardware utilizing the various features that Zephyr has to offer. The main thrust of the training is getting up and running with the Zephyr toolchain, implementing examples on a piece of hardware (provided), and interacting with cloud services. The user will learn about various abstraction layers around things like CoAP and CBOR, and experience a real-world example of a smart device talking back to the Golioth Cloud. This will also expose the user to web-side technologies and how they can export data to external commercial services like AWS, Azure, and GCP.

Meeting with users and partners

We love our community and are always looking to meet new people within it. Interested in setting up a time to discuss something? Email [email protected]

Should you attend?

If you’re someone already developing for Zephyr and pushing code upstream, this is the best opportunity to meet with others from the community and continue to build your skills. We think this is a perfect event for you!

If you’re new to Zephyr, the content can seem a bit intimidating…but fear not! The first half day of the conference (June 7th starting at 1 pm) is the “Intro to Zephyr” day, and this is a great introduction to the platform and how you can build your skills using Zephyr. There are also reduced cost tickets for students, if you’re still learning. We think if you’re looking to build a product with Zephyr in the future, or already are building with Zephyr, it’s a worthwhile experience to be there.

See you there!

We’re excited to meet more people and hear the other great talks that will be happening at the 2022 Zephyr Developer Summit. While we definitely plan to share the talks after the fact, and you can also participate in the virtual conference, we still hope to see you there!

Embedded firmware development almost always involves an interaction between an MCU and sensors used for detecting environmental factors. The Zephyr OS has a very particular way of interacting with sensors that can be challenging to learn and duplicate.

I needed to incorporate a sensor driver without it being part of the Zephyr tree. I won’t cover all of the details necessary to build a driver from scratch, but I certainly learned a lot about how the driver model works in Zephyr. I’ll be sharing that with you today.

New to Golioth? Sign up for our newsletter to keep learning more about IoT development or create your free Golioth account to start building now.

In the tree, or out of the tree?

There are two methods of adding a driver to Zephyr. The first is to add the relevant files to the OS directory internally, such as in the sensors folder you see on the main Zephyr repo. The second is to add the driver into a directory structure outside of Zephyr, known as Out-of-Tree.

Working “In-Tree” (IT) is the most straightforward: the sensor driver lives in the official Zephyr repository as if it had been native to Zephyr since the day the project started. Any hardware vendor hoping to get their device driver In-Tree needs to submit a Pull Request (PR) to the Zephyr project so that it is included on every computer compiling Zephyr in the future. This is also a benefit for learning: the main repository contains many examples of exactly this kind of PR. I can go to GitHub and track down the change that incorporated any particular sensor, which serves as a guide, with the relevant file additions and existing-file modifications.

Working in an “Out-of-Tree” (OOT) context means we will develop driver code independent of the central Zephyr repository, so no upstream changes are required. I think this helps to clarify the driver binding process and hierarchy. There are use cases for retaining the driver code alongside application code and not incorporating the driver within the OS, especially for projects that have customization or need to keep some aspect of driver code out of the public repositories.

We’re going to move an IT driver to an OOT context. The benefit of this exercise is that we will see how Zephyr locates and binds the driver to the application target device. All of the relevant files and file changes are contained to a single driver folder rather than spread out over the OS in various include, src, and specialized header files.

Understanding how Zephyr drivers are structured

There are three categorical topics to understand when adding and interacting with a driver. These are:

  1. Get the build system to find the driver
  2. Have the target-specific overlay file in place that aligns with the sensor yaml file
  3. Understand the Zephyr generalized sensor interaction functions

Getting the build system to find your driver and incorporate it involves one manifest file and a series of CMakeLists.txt and Kconfig files. Configuration of the driver could be accomplished with a single CMakeLists.txt file and a single Kconfig file, but using many files allows for the content of the files to be brief. It also establishes a directory build hierarchy that becomes intuitive after some study. This hierarchy will be explained in more detail later.

Some of the most cryptic errors will be generated when the target-specific overlay file that matches the sensor yaml file is either missing or contains errors. This file pair should be studied carefully within existing sensor examples.

Overlay and yaml files

To start, let’s find a pair of overlay and yaml files that we can study. Navigate to zephyr/samples/sensor and open any of the sensor folders. There will be a boards folder containing overlay files. Open one of them. Next, navigate to zephyr/dts/bindings/sensor and open the sensor yaml file corresponding to the sensor example.

The board overlay file will typically declare and describe the communication method (I2C, SPI, etc.). It will declare the relevant pins used and the communication bus speeds. The sensor yaml file will declare properties which may or may not need to be initialized by the target-specific overlay file for correct driver operation. Each property declared within the yaml file has a key called required, which may be true or false. If required is false, the property does not need to be specified in the overlay file. If it is true, it must be specified in the overlay file or the project will not build.
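As an illustration, a binding and overlay pair might look like this (the vendor, sensor name, and gain property here are made up; real examples live in zephyr/dts/bindings/sensor):

```yaml
# dts/bindings/sensor/vendor,mysensor.yaml (hypothetical)
description: Hypothetical I2C sensor
compatible: "vendor,mysensor"
include: i2c-device.yaml
properties:
  gain:
    type: int
    required: true
    description: Amplifier gain setting
```

Because gain is required: true, the board overlay must set it or the build fails:

```
/* boards/<board>.overlay (hypothetical) */
&i2c0 {
	status = "okay";
	mysensor@48 {
		compatible = "vendor,mysensor";
		reg = <0x48>;
		label = "MYSENSOR";
		gain = <128>;
	};
};
```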

The sensor_driver_api struct

The most generalized functions for interacting with a sensor in Zephyr are defined by the sensor_driver_api struct. You can find it by searching within sensor.h, located in the /zephyr/include/drivers/ folder. These generalized functions include a getter and setter for sensor ‘attributes’, a sample fetching function, a ‘channel’ getter, and a ‘trigger’ setter. They make interacting with the sensor straightforward from the perspective of application code.

The most important methods are the sample_fetch and channel_get functions. The sample_fetch function can be executed from the application simply by passing the sensor device object. Sensor data for all defined channels will be obtained and stored in the ‘data’ portion of the ‘device’ struct. This struct can be found in /zephyr/include/device.h. Sensor data specific to a channel can then be obtained with the channel_get function by passing it the device object, the relevant channel #define, and the address of a sensor_value struct into which the integer and decimal components of the sensor value will be stored.
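The fetch-then-get flow described above looks roughly like this in application code (a sketch assuming a sensor whose devicetree label is "MYSENSOR" and the 2.x-era Zephyr header paths; it only compiles inside a Zephyr project):

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/sensor.h>

void read_my_sensor(void)
{
	/* Look up the device object by its devicetree label */
	const struct device *dev = device_get_binding("MYSENSOR");
	struct sensor_value val;

	if (dev == NULL) {
		printk("Sensor not found\n");
		return;
	}

	/* Read all channels into the driver's private data struct */
	if (sensor_sample_fetch(dev) != 0) {
		printk("Fetch failed\n");
		return;
	}

	/* Pull one channel back out; val1 holds the integer part,
	 * val2 the fractional part in millionths.
	 */
	sensor_channel_get(dev, SENSOR_CHAN_AMBIENT_TEMP, &val);
	printk("Reading: %d.%06d\n", val.val1, val.val2);
}
```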

Adding the out-of-tree sensor driver

Adding a sensor driver to a project in the OOT context is most easily achieved by modifying the Example Application provided by the Zephyr Project. Let’s add the HX711 load cell sensor driver to this project.

I mentioned before that it’s helpful to review the pull request that added the driver to Zephyr in the first place. Here’s the source PR for the HX711. This driver adds source files and configuration files, and also modifies internal Zephyr files. Fortunately, there is a way to avoid modifying the internal Zephyr files.

  1. Add a folder called hx711 inside of example-application/drivers/sensor/
  2. Add the hx711.c and hx711.h files from the PR inside the hx711 folder
  3. Add the drivers/sensor/hx711/CMakeLists.txt and drivers/sensor/hx711/Kconfig files from the PR to this folder
  4. Add the avia,hx711.yaml file from the PR to the dts/bindings/sensor directory in the example-application project
  5. Add the nrf52dk_nrf52832.overlay file to example-application/app/boards directory
  6. Add samples/sensor/hx711/prj.conf from the PR to example-application/app/
  7. Overwrite the main.c file in the example-application/app/src folder with the main.c source code from the PR
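After these steps, the relevant parts of the example-application tree look roughly like this:

```
example-application/
├── app/
│   ├── boards/
│   │   └── nrf52dk_nrf52832.overlay
│   ├── src/
│   │   └── main.c        (replaced with the PR's sample main.c)
│   └── prj.conf          (from the PR's samples/sensor/hx711/)
├── drivers/
│   └── sensor/
│       └── hx711/
│           ├── CMakeLists.txt
│           ├── Kconfig
│           ├── hx711.c
│           └── hx711.h
└── dts/
    └── bindings/
        └── sensor/
            └── avia,hx711.yaml
```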

That’s it for file additions. The remaining steps are to modify the configuration files so the build system locates and builds the driver, and finally to modify the driver header file so that the changes the PR made to internal Zephyr files are incorporated directly into the driver instead.

West manifest, CMakeList, and Kconfig (oh my!)

Look at the config file in the .west folder in the project root directory. It specifies the path to, and name of the manifest file (west.yml) used by this project. For the Example Application the west manifest is found in the example-application directory. It is also in this directory that the outer-most CMakeLists.txt and Kconfig files are found.

Multiple CMakeLists.txt and Kconfig files exist as pairs beginning at the example-application/ directory level. In this example, only the innermost CMakelists.txt and Kconfig file pairs contain actual build or configuration instructions. Working from the innermost directories back to the example-application/ directory, these file pairs act as sign posts, directing the build tools to find the innermost build and configuration instructions.

Open the Kconfig file in example-application. It has only one line: rsource "drivers/Kconfig". This directs the build system to look in the drivers folder for more Kconfig information. The Kconfig file in the drivers folder further directs to the sensor folder, and the Kconfig file in the sensor folder adds rsource "hx711/Kconfig". This completes the Kconfig navigation to the hx711 folder, where you earlier placed the Kconfig file containing the actual Kconfig build instructions.
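Spelled out, the chain of sign-post files looks roughly like this (paths assumed from the example-application layout):

```
# example-application/Kconfig
rsource "drivers/Kconfig"

# example-application/drivers/Kconfig
rsource "sensor/Kconfig"

# example-application/drivers/sensor/Kconfig
rsource "hx711/Kconfig"
```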

The CMakeLists.txt files work in a similar manner, directing the build system to inner directories. In the Example Application project, the first CMakeLists.txt file that needs modification is in the ‘sensor’ folder. Here you will add add_subdirectory_ifdef(CONFIG_HX711 hx711), following the existing entry for the example sensor. This is what pulls the driver into the build when CONFIG_HX711=y is set in the prj.conf file. The innermost CMakeLists.txt file that you already added defines the source files for the driver.
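As a sketch, the two CMake pieces involved look something like this (the innermost file content is typical of Zephyr sensor drivers; check the PR for the exact version):

```cmake
# example-application/drivers/sensor/CMakeLists.txt
add_subdirectory_ifdef(CONFIG_HX711 hx711)

# example-application/drivers/sensor/hx711/CMakeLists.txt
zephyr_library()
zephyr_library_sources(hx711.c)
```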

Adding attributes and channels

The last item to discuss is the addition of attributes and channels to the driver header file. In the PR, notice that changes were added to two enumerated lists in include/drivers/sensor.h. The second-to-last item in each of these lists is a ‘private start’ item. We can use this to create custom extensions of the enumerated lists for attributes and channels.

Add the following code to the top of the hx711.h file in your project to accommodate the attribute and channel additions in the driver file instead of the internal sensor.h file:

/** @brief Sensor specific attributes of hx711. */
enum hx711_attribute {

	/**
	 * The sensor value returned will be altered by the amount indicated by
	 * slope: final_value = sensor_value * slope.
	 */
	SENSOR_ATTR_SLOPE = SENSOR_ATTR_PRIV_START,

	/**
	 * The sensor gain.
	 */
	SENSOR_ATTR_GAIN,
};

/** @brief Sensor specific channels of hx711. */
enum hx711_channel {

	/** Weight in grams */
	SENSOR_CHAN_WEIGHT = SENSOR_CHAN_PRIV_START,
};

The project can now be built with the following line:

west build -p -b nrf52dk_nrf52832 app/.

I hope this article helped sort out some of the confusion surrounding the Zephyr methodology of adding and interacting with a sensor driver. All drivers must satisfy the three components of build direction/inclusion files, the sensor yaml and target overlay file pair, and the incorporation of the generalized sensor access functions. Seeing all of these components come together in the Out-of-Tree driver context should provide valuable insight into how they function within the drivers that are internal to Zephyr.

At Golioth, we talk about three things that make for likely hardware/firmware compatibility with our Cloud:

  • It runs Zephyr RTOS
  • It has sufficient overhead to run the Golioth code in conjunction with other Zephyr code (about 2K of extra code space)
  • It has a network interface in Zephyr

(this is not the only way to connect, just a good formula for getting started)

It’s that last point that disqualifies a bunch of boards in Zephyr. Maybe you love the STM32 feature set, but your board doesn’t have a modem to get to the internet. What then?

The great thing about Zephyr is that network interfaces are often abstracted to the point that you can add one after the fact to your board, say with a wire harness to a different PCB. If you’re at the design phase, you could also add the ESP32 as a co-processor to add connectivity. We have shown this in the past with Ethernet and with WiFi, and we’re working on a sample that adds a non-native cellular modem.

This article will show how to add WiFi to your Zephyr project in a cheap and efficient manner, using a $5 ESP32 board put into ESP-AT mode. Your project instantly has network connectivity (and a few other tricks too!).

AT commands? Like on my brick phone?

We’ll talk about the hardware in a bit, but the software part of this hinges on communication between processors using the ESP-AT command set.

AT Commands?? Like from the 80s?

Actually, exactly like that. And not just on your brick phone: the Hayes Command Set was created in 1981 for a 300 baud modem. It has survived more than 40 years thanks to the simplicity of connecting over a serial interface (UART), which makes board-to-board or chip-to-chip connectivity well understood and almost universally available. In fact, many of the cellular modems on the market, if not using AT command sets directly (there is an ETSI standard), at least have an “AT mode” for setting up communications with cellular towers and troubleshooting.

The benefit of the ESP32 acting as a secondary processor is that a wide range of parts can talk to it over the UART interface. Though we’re talking about Zephyr in this post, a previous example showed a Cortex-M0+ running our Arduino SDK in conjunction with the ESP32 modem. On the Zephyr side of things, you can view the wide range of boards that are supported in our hardware catalog, including boards as powerful as the Altera Max 10 FPGA board and as small as the Seeeduino XIAO.

Set up the modem

The ESP32 AT command firmware is just a binary. If you find the proper module and chipset, you should be able to download it directly onto your board. The board the ESP32 module is mounted on doesn’t really matter, as long as you have access to the pins and can tell which pin on the PCB routes back to which pin on the module.

In this example, we are working with the ESP32-WROOM-32. This is one of the most common modules on the market today. You can find which module you have by looking at the laser etching on the metal can on the module itself.

I downloaded the latest binaries (V2.2.0.0 as of this writing) from the Espressif site. I will show the command below using that version number, though you should use the newest version available. There is also a page that lists the different types of binaries and the associated pin numbers you’ll need to connect to when testing below.

esptool.py write_flash --verify 0x0 ~/Downloads/ESP32-WROOM-32-V2.2.0.0/factory/factory_WROOM-32.bin

Testing the modem

Once you have successfully programmed the modem, you’ll want to test it. This will involve manually typing in AT commands to a serial interface / terminal. While that might seem like an inefficient way to work with a modem, it’s a good skill set to have if you need to troubleshoot your setup at a later time.

You will need a USB to serial converter, or some other way to communicate with a UART. These are available on Amazon for $5 or less. You do not need any fancy features on this device.

If you’re using the ESP32-WROOM-32 like me, you’ll have a setup like the one above. Hook your USB-to-serial converter’s TX pin to pin 16 (ESP32 RX) and its RX pin to pin 17 (ESP32 TX). Note that there are pins labeled TX and RX on the dev kit, but those are the console output for the processor. An easy way to test: if you hit the reset button (labeled “EN” on this board) while hooked into TX/RX, you will see the entire boot sequence scrolling across the screen. If you are connected to the proper output (16/17), you will only see a ready prompt when the board boots. Remember to check the pin numbers if you’re using a different module than the one above.

Regarding the program that connects you to the USB-to-serial adapter and lets you communicate with the ESP32, a small warning about line endings. After initially using screen on Linux, I found that its line endings were not compatible with the ESP-AT firmware. I could see the ready prompt, but I could not enter any data. After some digging, I found that you need to be able to send both a Carriage Return / CR (\r) and a Line Feed / LF (\n). I followed this advice, installed picocom, and used the following command to launch a more interactive terminal: picocom /dev/ttyUSB0 --baud 115200 --omap crcrlf
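With the terminal connected, a quick sanity check might look like this (a couple of basic commands from the ESP-AT set; responses abbreviated):

```
AT
OK
AT+GMR
<AT version, SDK version, compile time lines>
OK
```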

This enabled me to try out various commands in the ESP-AT Command Set. Two in particular stood out to me as interesting, even though they are not implemented below:

  • AT+SLEEPWKCFG – Allows you to set the “light sleep” command for the modem and tell the modem which pin will be used for waking the modem.
  • AT+BLEGATTSSETATTR – This sets a GATT attribute for the modem in Bluetooth LE mode. It’s just one of many Bluetooth LE commands…I didn’t realize that it was possible to use the modem as a Bluetooth LE gateway as well!

Use the modem with samples

One hardware combination that is well supported in Golioth samples is the nRF52840 and the ESP32. Our “Hello” sample shows how you can configure the device and compile firmware for the nRF52840 while still taking advantage of the ESP-AT modem connected to it.
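On the Zephyr side, pointing the build at the ESP-AT modem is largely a matter of Kconfig; a rough sketch (symbol names can vary between Zephyr versions, so treat this as a starting point and compare against the sample’s actual prj.conf):

```
# prj.conf (sketch)
CONFIG_NETWORKING=y
CONFIG_WIFI=y
CONFIG_WIFI_ESP_AT=y
```

The modem itself is described in the devicetree as attached to the UART it is wired to; the board overlay files in the sample show the exact form.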

If you don’t have the nRF52840DK (Developer Kit), there are a range of other boards that will work. When you start actually running the demo, it will be very similar to our getting started using the ESP32 (natively), or the nRF9160. Our goal is to make a seamless experience once you have a network connection. We always love discussing projects in our forum, our Discord, and on Twitter.

Pin mapping has long been a headache for embedded development. Those cold, hard pins on the microcontroller are connected to actual hardware, so a number of chip makers have figured out how to reroute them internally using code. Historically that code can be complex, and it has spawned a rash of GUI-based configuration tools with varying success. But Zephyr does a pretty good job of making pin assignments painless using devicetree overlay files. Let’s take a look!

Real life portability

The aha moment for me came when I switched from one architecture to another during a project. We’ve been developing some demo hardware here at Golioth, and as the hardware engineer on the project, Chris Gammell chose the Feather footprint for swappability. I started working with the Circuit Dojo nRF9160 Feather during development, but later wanted to test out the Adafruit ESP32 Huzzah Feather. Same PCB footprint, same features enabled, should be fine, right? Not so fast.

Zephyr sells itself on portability, and I was able to compile the code for both devices without any hiccups. The problem is that the Feather board specification designates where the i2c pins will be on the PCB layout. When I switched from compiling the project for the nRF91 to the ESP32, those signals ended up on different physical pins of the board I was using. This kind of ruined my day… but only for about five minutes, until I figured out how easy pin remapping is. I literally just created an overlay file that says: “hey, put these signals on these other pins (please).”

&i2c0 {
	sda-pin = < 23 >;
	scl-pin = < 22 >;
};

This devicetree file is pretty easy to read: I’ve told the build system that I need the i2c0 signals on GPIO23 (SDA) and GPIO22 (SCL). Note that these are GPIO numbers and not package pin numbers. ESP32 can be a bit confusing; just stick with the GPIO number, and check out my other post on the quirks of specifying ports for those pins when you need them.

Check for pin collision

The good news is that moving pins just works, at least on devices that have a pinmux. The bad news is that sometimes it can cause other problems, and you don’t get a warning about it. In Zephyr it is up to each of us to make sure we don’t assign multiple signals to the same pin.

When working with devicetree, you should get used to reviewing the generated build/zephyr/zephyr.dts file. It is created by the build tools, which combine the board-specific .dts from the Zephyr tree with any overlay files in the boards directory of your project tree. Looking at the output lets you verify that the pins and features are what you expected.

[Screenshot: zephyr.dts devicetree entries for ESP32 i2c0. Left: remapped pins; right: stock pin assignments]

Here you can see the pin remapping in the final two lines of the i2c0 peripheral entry. On the right are the original pin assignments, and on the left are the updated assignments thanks to my overlay file (note that the decimal values from the overlay file have been converted to hexadecimal).

To avoid pin collisions, search this zephyr.dts file for 0x16 and 0x17. In my case, spi3 is also using one of these pins: mosi-pin = < 0x17 >;. If you’re not going to enable that peripheral in your build, this is no big deal. But if one of your active peripherals already maps a pin you need, the solution is pretty simple: just remap that peripheral’s pin in the same way you did the i2c pins.
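For example, if spi3 were enabled, moving its MOSI signal off of GPIO23 looks just like the i2c remap (the replacement GPIO number here is arbitrary; pick one that is free on your board):

```
&spi3 {
	mosi-pin = < 13 >;
};
```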

I have been searching for a way to map unused pins to a null value but haven’t yet found a way. If you have the answer, we’d love to hear about it over on the Golioth Discord channel.

Using variant overlay files

If you’re going to build your code base for several different hardware variations, it’s a good idea to keep a set of specialized overlay files. In my case I made one called esp32-feather.overlay. I still have an overlay file for the ESP32, but when I build for the Feather variant I specify the specialized overlay in the build commands:

west build -b esp32 . -D DTC_OVERLAY_FILE=boards/esp32-feather.overlay -p

Other architectures

What’s that you say? You’re not using an ESP32? Well, it’s really no different. For example, when building for the nrf52dk_nrf52832, you can move the mosi-pin of spi2 to P0.20 pretty easily.

&spi2 {
	mosi-pin = < 20 >;
};

But beware if you do. The nRF52 Development Kit already has an LED mapped to that pin.

The devicetree makes moving peripheral pins quite painless. There’s even more power under the hood, as Zephyr provides access to use the pinmux, making it possible to move pins at runtime. We’ll discuss that and other Zephyr tips in future posts.

This guest post is contributed by Asgeir Stavik Hustad, a Golioth community member who is active on the Golioth Discord. Reach him on Twitter at @AsgeirSH.

This tutorial was inspired by and a response to the tutorial about how to build your Zephyr application in a standalone folder. I have done exactly that before, but I also wanted to include all my dependencies in that separate folder.

Background and motivation

I need to maintain different firmware with different Zephyr versions and trees. For example, I maintain the following directories:

  • Nordic’s Zephyr-variant (NCS) for the nRF9160
  • Base Zephyr for Atmel-MCUs
  • Base Zephyr, but locked to a particular version (e.g. v2.7.0)

We also have several custom boards. These are currently maintained in each project, but could be moved to a separate dependency if we want to use the same board overlay files in multiple projects.

Instead of trying to swap a single Zephyr-installation between all of these, I did some research into using west and its manifest file to automatically set up my project folders to include all dependencies. I also wanted to ensure our build server didn’t require any manual work to build different projects. The Zephyr docs present this topic in depth, and are recommended reading if you want to set this up.

Let’s look at how we can set up a project to fit a wider range of needs.

Project structure

Most of my projects are kept in my “Dev” folder, so for this example we’ll be using ~/Dev/app_zephyr as the root directory of the project.

I put my application source in application, which is further split into at least boards and src (you can add any folder you like here). You’ll note this is the same structure as any of the Zephyr or Golioth samples you see; in fact, you can copy a sample as the starting point (such as <Zephyr SDK Install location>/zephyr/samples/basic/blinky). The other folders include deps for dependencies and build for the build output folder.

Inside the root folder, add .west/config. This is a plain text file describing to west where it should look for the manifest file and where Zephyr should be placed.

[manifest]
path = application
[zephyr]
base = deps/zephyr

Drawbacks

  • The initial clone and west update of a project set up like this take some time.
  • This method uses quite a bit of disk space because each project carries around the Zephyr dependencies, as opposed to having your application live within the Zephyr SDK.
  • Ensuring you get updates to all your projects means you need to update the projects in your manifest file to a new revision manually (not really a drawback in my eyes – I want control!)

Let’s go through the manifest-file itself step by step. It’s found in application/west.yml:

manifest:
  version: 0.7

  defaults:
    remote: zephyrproject
  remotes:
    - name: zephyrproject
      url-base: https://github.com/zephyrproject-rtos
    - name: mcutools
      url-base: https://github.com/mcu-tools
  projects:
    - name: zephyr
      repo-path: zephyr
      revision: v2.7.0
      import:
        path-prefix: deps
      path: zephyr
    - name: mcuboot
      remote: mcutools
      repo-path: mcuboot
      revision: v1.7.2
      path: deps/mcuboot

  self:
    path: application

  • The manifest version being set to 0.7 simply means west (the meta-tool) must be at version 0.7 or higher to parse this file correctly.
  • Default attributes for the project are not required, but in this case they list the main remote.
  • Remotes lists where west should look for project repos.
  • Projects lists the full range of repositories we’ll pull in as dependencies. This includes the revision, so we have control over upgrades when they are available. I want to prevent breaking changes from entering my project without my knowledge.
  • The self: path: application entry tells west where the west.yml file lives relative to the root of the project.

I feel that the projects key is the true turning point of this manifest. By adding to it we can make west pull any git repository we want and put it in our dependencies folder. We specify the project name, a remote (if different from the specified default), a repo-path on the remote (defaults to name), a revision (defaults to master), and a local path (with a slight footnote for this one).

There is a special key here that makes this work: import. It makes west import the projects from the manifest file of that project as well. This means that when running west update on this manifest, west will first clone all projects in this manifest, then run west update on the manifest file in the specified project and clone all projects from that, with the specified path-prefix applied to all of those. I’ve used this for the Zephyr include, but not for the mcutools.

Build

In practice, this means that my project structure for the manifest file above after running west update will look like this:

- app_zephyr/
    - .west/
        - config
    - application/
        - boards/
            - arm/
                - ah_1202a/
        - src/
        - CMakeLists.txt
        - prj.conf
        - west.yml
    - deps/
        - mcuboot/
        - modules/
        - tools/
        - zephyr/
    - .gitignore

Your custom board *.dts files can include all the root overlays from the Zephyr dependency or any other projects. (I’ve also set this up so VS Code can do IntelliSense on these DTS files; that’s just a matter of setting the correct includePaths.)

From here, you can run west build and have it use your custom board files, source, and everything. In my case:

cd application
west build -p -b ah_1202a -d ../build

Revision Control

One of the benefits of a method like this is the reduced amount of files going into revision control. You don’t need to index all of the Zephyr directory files in your project repo. This is a bad idea anyway, given the size of the project and the almost certain guarantee they will be out of date the next time you pull your project. Locking the Zephyr version in west.yml will ensure that your project is always pulling from the expected version of an SDK or Zephyr repo. Adding a .gitignore file as shown below to your main repository will reduce your total footprint and only capture the unique elements of your project–your application code.

deps/
build/
.vscode/

Build it your way

The first step to building an optimal workflow for your company or personal development process is understanding how your build system works. The above method is far from the only way of doing things, but helps to give more precise control over what is tracked and what is pulled in from external sources.

This guest post is contributed by Vojislav Milivojević, embedded software lead at IRNAS.

As embedded software engineers we usually have some associated hardware sitting next to our computer that is used for developing and testing our code. In most cases, this piece of hardware is connected to our computer with a so-called “programmer”, an additional tool that allows us to access processors and controllers for which we are developing code. Here we will explore the relationship between devices we are developing and a computer, but it won’t be a standard one, it will be a long-distance relationship.

I lead the firmware development at IRNAS, where we push the limits of efficient solution development on IoT devices, but since I live in a different country than the rest of my team, there are usually a lot of packages with PCBs going back and forth. While that is not a big problem, there are times when some pieces just cannot get to me quickly enough to meet customer demands. There have also been times where a specific LTE network is not available in my region. Overcoming this issue is usually done with remote desktop solutions that are not so efficient, or with some special equipment that in a nutshell is again a computer with some additional hardware. Since I needed such a solution, and none of the existing ones were able to give me a nice out-of-the-box experience, I decided to design and document a process that works for me and the complete IRNAS engineering team.

Using Segger tools

There are many solutions, commercial and open source, that provide embedded development tools such as programmers, IDEs, logging features, etc. One of these solution providers is Segger, and their hardware sometimes comes as part of development boards, which is really nice. At IRNAS we tend to favor Segger J-Link tools as our ‘go-to’ solution for target flashing and debugging while building connected products. Besides that, Segger has a range of very useful features for embedded developers, and one of these tools is Segger Tunnel mode. This is a remote server that allows users to connect to a J-Link programmer (and thus its connected target device) over the internet. This means a device located anywhere in the world can be debugged or brought up.

Mixing with Zephyr (west tool)

Since most of the projects that I am working on are using Zephyr RTOS, this means that the west tool is used for flashing, debugging, and many other things. West is a meta-tool that abstracts software for all different programmers and gives us the ability to easily flash for multiple targets while not needing to remember long command lines. West does support Segger J-Link for specific targets and it can be selected as one of the offered runners. The good thing about west is that it will let us pass commands to the selected runner which gives us the ability to fully utilize all the functions of the selected J-Link software.

Set up the hardware and software

In December of 2020 there was great news from Segger: the complete J-Link software became available for Linux on the ARM architecture, which means the Raspberry Pi is now supported as a host machine! The idea was to connect a J-Link programmer to a Raspberry Pi, add in some software, and we have ourselves a remote programming jig.

Components needed for this demo:

    • Raspberry Pi
    • J-Link programmer
    • Board with the target MCU

For the purposes of this demo, we will be using the Nordic Semiconductor nRF9160DK development kit since it already contains both a J-Link and the target MCU hardware. The board connects via USB to the Raspberry Pi which connects to power and Ethernet (WiFi is also an option).

nRF9160DK connected to Raspberry Pi

Now J-Link software needs to be installed on Raspberry Pi so it can work as a remote J-Link Server. In the Raspberry Pi user home directory, download and un-tar the Segger utilities for the Raspberry Pi (choose the Linux ARM 32-bit TGZ archive). Then configure the udev rules as per the README.txt file in the JLink_Linux_Vxxx_arm directory.

$ wget --post-data 'accept_license_agreement=accepted&non_emb_ctr=confirmed&submit=Download+software' https://www.segger.com/downloads/jlink/JLink_Linux_arm.tgz
$ tar xvf JLink_Linux_arm.tgz
$ cd JLink_Linux_V646g_arm/
$ cat README.txt
$ sudo cp 99-jlink.rules /etc/udev/rules.d/
$ sudo reboot

Next, it is time to start the remote server. On a GUI-based system, this can be done with a small application from Segger, but the good thing is that the CLI tool is also provided. I recommend checking all available options for this tool by starting it and then typing ? at the prompt.

pi@raspberrypi:~ $ JLinkRemoteServer
SEGGER J-Link Remote Server V7.22b
Compiled Jun 17 2021 17:32:35

'q' to quit '?' for help

Connected to J-Link with S/N 960012010

Waiting for client connections...
?Command line options:
? - Prints the list of available command line options
-Port - Specifies listening port of J-Link Remote Server
-UseTunnel - Specifies if tunneled connection shall be used
-SelectEmuBySN - Specifies to connect to a J-Link with a specific S/N
-TunnelServer - Specify a tunnel server to connect to (default: jlink.segger.com:19020)
-TunnelBySN - Specifies to identify at tunnel server via J-Link S/N
-TunnelByName - Specifies to identify at tunnel server via custom name
-TunnelPW - Specifies to protect the connection with a password
-TunnelEncrypt - Specifies to encrypt any transferred data of a tunneled connection
-TunnelPort - Specifies to connect to a tunnel server listening on a specific port
-select - <USB/IP>[=<SN/Hostname>] Specify how to connect to J-Link

Before entering the command we need to think of a name for our tunnel and a password. For me, this will be tunnel name: remote_nrf91 and password: 19frn. Then start the remote server with the command:

JLinkRemoteServer -UseTunnel -TunnelByName remote_nrf91 -TunnelPW 19frn

Demo time

To test this remote flashing we will build a demo application on our host computer. nRF Connect SDK (NCS), which is based on Zephyr RTOS, contains some sample applications; we will use shell_module, which enables us to use shell commands over UART with the nRF9160. The selected application is located in the ncs/zephyr/samples/subsys/shell/shell_module folder of NCS. To build it for the nRF9160DK we will use the command:

west build -b nrf9160dk_nrf9160_ns -p

After that let’s flash the board that is connected to our remote Raspberry Pi. The default runner for flashing the nRF9160DK is nrfjprog, but instead of that, we want to use the J-Link supported runner. Since the west tool does not natively support remote flashing, parameters will be sent directly to the J-Link software using the --tool-opt flag.

west flash -r jlink --tool-opt="ip tunnel:remote_nrf91:19frn"

This will flash our target MCU that is connected to J-Link and Raspberry Pi. To validate the result, open the serial terminal on Raspberry Pi and see if shell commands are working.

minicom -D /dev/ttyACM1 -b 115200

Summary

While Segger provides very interesting tools for embedded developers, there is still some work that needs to be done so they are properly integrated into our development workflow. Remote flashing is just one part of all capabilities, so this can be a starting point for a great remote development setup!

Storing and retrieving data from the cloud is the foundational concept of the Internet of Things. When two machines talk to one another they need to settle on a data format. Here at Golioth that means your microcontroller of choice is going to be sending and receiving JSON.

Wait a minute. JSON and microcontrollers? Take a breath, dry those sweaty palms, and keep reading! When it comes to working with the C language, parsing strings for punctuation delimiters doesn’t sound like much fun. That’s why it’s really nice that Zephyr has a built-in JSON library which does most of the work for you. The only thing you really need to do is to make a struct that tells Zephyr how the incoming data will be organized. At first blush this seems tedious, but what you get out of it is validation of both the key and the received value type.

The nice thing is that once you get the hang of it, parsing data and accessing those values becomes very easy. So today I’ll walk you through the process.

Why do we need to parse JSON?

It’s very easy to get data to and from the Golioth Cloud. One of the first examples you should try out is the lightdb/get sample that simply asks for an endpoint called /counter. The string you’ll receive is a key-value pair that looks like this:

{"counter":18}

For data this simple it would be trivial to iterate the string, look for the colon, and then test the following characters to make sure they are numbers. But that’s clunky, and it breaks down when you start adding more key-value pairs and nested values. The JSON module will make sure the value you receive matches the key you were expecting, and that the variable type (string, int, or bool) is validated. You’ll want to use this library for sending complex data back to the cloud too. It includes an encoder that will build the JSON packet for you.

Configuring the JSON library

Enabling the library in your project is very simple. First, turn on the library in your prj.conf file:

CONFIG_JSON_LIBRARY=y

Next you include the header file in main.c of your app:

#include <data/json.h>

Setting up the struct

This part is a bit hard to wrap your head around because the helper code looks like a foreign language compared to the readability of a JSON object. Here is an overview of what we need to accomplish in this section:

  1. Build a set of all key-value pairs. This includes the name of the key and the data type of the value.
  2. Package up all of the key-value pairs into a struct (which might itself include other structs) to match the way the expected JSON package will be structured.
  3. Tell the library about our struct, which it will use as a map to encode or parse the JSON string.
  4. Give the library a pointer to store the data. For encoding, this is a string pointer, for decoding this is a struct pointer.

Here’s a simple packet that might be received by a temperature controller listening to the Golioth Cloud for user settings:

{
  "unit": "c",
  "value": 37
}

In our C code we begin by describing all of the key-value pairs we expect to find in the JSON packet. Notice that we’re declaring variables. The type of each variable must match the expected data type of the JSON value. The name of each variable must match the expected name of the JSON key. Note that this isn’t something inherent about C; it’s how the library matches up the incoming data and rejects things that don’t fit that mold.

struct temperature {
  const char *unit;
  int value;
};

Next we declare a descriptor array that maps out the expected structure of the JSON packet. We invoke the JSON_OBJ_DESCR_PRIM() macro for each key-value pair. Here you can see we feed it the struct type (struct temperature), the field name from the struct, and a token that indicates the data type.

static const struct json_obj_descr temperature_descr[] = {
  JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
  JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

Parsing JSON

Now we have a struct that contains all of our expected keys, and a descriptor struct that maps out the expected structure of the JSON packet. We’re ready to test it out!

Here is a concise bit of sample code to test out our setup. Note that I’m using Zephyr’s logging system to display values.

/* decode a single object */
char json_msg[] = "{\"unit\":\"c\",\"value\":30}";
struct temperature temp_results;

int ret = json_obj_parse(json_msg, sizeof(json_msg),
			 temperature_descr,
			 ARRAY_SIZE(temperature_descr),
			 &temp_results);

if (ret < 0)
{
	LOG_ERR("JSON Parse Error: %d", ret);
}
else
{
	LOG_INF("json_obj_parse return code: %d", ret);
	LOG_INF("Unit: %s", temp_results.unit);
	LOG_INF("Value: %d", temp_results.value);
}

In the call to json_obj_parse() we pass our JSON string, the length of that string, the descriptor we previously set up, the number of elements in that descriptor, and a pointer to the struct where the results will be stored. The rest of the code tests whether an error code is being returned and prints out the values if the parsing was successful.

This works if the data is just right. However, checking for a negative error code isn’t enough. For instance, if the value key of the JSON packet is received as a string ("30") instead of an int (30), the return code will not be negative, but using the undecoded value can crash the program at runtime. This is a feature, not a bug: it lets us parse JSON even when it’s not quite right. But we need to do more to test that the data is valid. Let’s do that, as well as looking at an example of parsing nested data.

Nested JSON

Our heater control example probably needs more than a temperature setting, it needs an on/off setting. Here’s what that JSON might look like:

{
  "heater_on": true,
  "heater_temp": {
    "unit": "c",
    "value": 30
  }
}

We build the struct in much the same way, except there are two steps here. First we set up the struct (and the descriptor) for the inner “heater_temp” object, then set up another struct and descriptor that map the “heater_on” key/value and the “heater_temp” object:

struct temperature {
	const char *unit;
	int value;
};

struct heater_ctl {
	bool heater_on;
	struct temperature heater_temp;
};

static const struct json_obj_descr temperature_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
	JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

static const struct json_obj_descr heater_unit_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct heater_ctl, heater_on, JSON_TOK_TRUE),
	JSON_OBJ_DESCR_OBJECT(struct heater_ctl, heater_temp, temperature_descr),
};

Note that we’re using the JSON_OBJ_DESCR_OBJECT() macro, which maps the temperature descriptor onto the nested data. Now we can parse our data:

char str[] = "{\"heater_on\":true,\"heater_temp\":{\"unit\":\"c\",\"value\":30}}";
struct heater_ctl heater_settings;
int expected_return_code = (1 << ARRAY_SIZE(heater_unit_descr)) - 1;
int ret = json_obj_parse(str, sizeof(str),
				heater_unit_descr,
				ARRAY_SIZE(heater_unit_descr),
				&heater_settings);

if (ret < 0)
{
	LOG_ERR("JSON Parse Error: %d", ret);
}
else if (ret != expected_return_code)
{
	LOG_ERR("Not all values decoded; Expected return code %d but got %d", expected_return_code, ret);
}
else
{
	LOG_INF("json_obj_parse return code: %d", ret);
	LOG_INF("calculated return code: %d", expected_return_code);
	if (heater_settings.heater_on)
	{
		LOG_INF("Heater On: True");
	}
	else
	{
		LOG_INF("Heater On: False");
	}
	LOG_INF("Unit: %s", heater_settings.heater_temp.unit);
	LOG_INF("Value: %d", heater_settings.heater_temp.value);
}

You must use JSON parse return codes to validate your data!

The json_obj_parse() function is going to return a positive value that indicates which tokens of the JSON object were successfully found and validated. Each token is represented by one bit in the return code.

The nested JSON example above presents a gotcha. We expect the parser to report back on the three tokens that are important to us (heater_on, unit, and value; in that order). What it actually does is report back on the tokens found in the top-level struct. So in this case a return code of 3 (0b11) indicates that the parser found heater_on and heater_temp, the key for the nested data. It might have also found unit and value, or they may not have been present. We just don’t know for sure.

The solution when decoding JSON is to pretend the key to the nested struct doesn’t exist. Instead, we can just declare our important values:

struct temperature {
	bool heater_on;
	const char *unit;
	int value;
};

static const struct json_obj_descr temperature_descr[] = {
	JSON_OBJ_DESCR_PRIM(struct temperature, heater_on, JSON_TOK_TRUE),
	JSON_OBJ_DESCR_PRIM(struct temperature, unit, JSON_TOK_STRING),
	JSON_OBJ_DESCR_PRIM(struct temperature, value, JSON_TOK_NUMBER),
};

How Zephyr JSON library return codes work

We will receive a return code indicating whether the keys in the descriptor were successfully decoded–the parser will set the corresponding bit when each value is validated. So we want to see bits 0, 1, and 2 set in the return code (0b111). If heater_on is not validated, we would receive 0b110. Here is an illustration of the gotcha (top return code) and the fix (bottom return code).

It’s really important to check these bits before using the value. If we fail to do so, we’ll be using uninitialized values (bad data) or reading from unallocated memory (runtime crash).

So why did I even show you how to build structs for nested JSON? You need it when encoding data. The json_obj_encode() function will take the nested descriptor and encode a JSON string exactly as we expect to see it. It doesn’t matter as much when you’re in control of the data scheme used on the cloud side, but if you need to match an existing standard, this makes the encoding a snap.
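The post doesn’t show the encoding call itself, so here is a minimal sketch using json_obj_encode_buf(), the variant of the Zephyr encoder that writes into a fixed-size buffer. It assumes the heater_ctl struct and heater_unit_descr descriptor from the listing above; the buffer size is an arbitrary choice for this example.

```c
/* Sketch only: reuses struct heater_ctl and heater_unit_descr
 * from the nested-JSON listing above. */
struct heater_ctl settings = {
	.heater_on = true,
	.heater_temp = { .unit = "c", .value = 30 },
};
char buf[128]; /* arbitrary size for this example */

int err = json_obj_encode_buf(heater_unit_descr,
			      ARRAY_SIZE(heater_unit_descr),
			      &settings, buf, sizeof(buf));
if (err == 0) {
	LOG_INF("Encoded JSON: %s", buf);
}
```

The encoded string should mirror the nested JSON we parsed earlier, which makes this handy for round-trip testing against whatever schema the cloud side expects.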

Things to keep in mind, and further reading

When working with this library, remember that the variables you declare in structs must match the keys in the JSON string. The values must also match up. The JSON library currently supports three data types: string, int, and bool. In our example, you would probably want to use a float for Celsius temperature settings, but it’s not possible to parse that data type with this library.

If you’re ambitious, adding this support would be a great way to contribute to the Zephyr open source project! But we can work around the problem. When accessing lightDB state values on Golioth, it’s possible to directly request the value just by using the specific endpoint–in our example: .d/heat_control/heater_temp/value.

The Zephyr JSON library includes other very helpful features like array handling. I haven’t been able to find additional documentation on these features beyond the JSON API reference, but the automated tests are a great place to see all functions/macros at play. There is a ton of utility built into this library and it’s worth getting to know it by building a few examples. Once you get the hang of it, this is a very accessible way to make sure you can use incoming JSON data and know that you have dependable values.

See it in action

Check out our recent video reviewing some of the JSON basics described above!

Zephyr has a number of tools to aid in debugging during your development process. Today we’re focusing on the most available and widely useful of these: printing messages to a terminal and enabling logging messages.

New to Golioth? Sign up for our newsletter to keep learning more about IoT development or create your free Golioth account to start building now.

printk() is the printf() of the Zephyr world

Printing useful messages using printf() is a time-tested practice for getting programs up and running. Some frown upon using this as the main debugging approach, but don’t discount how incredibly useful it is as a first step.

printk("String: %s Length: %zd Pointer: %p\n", my_str, sizeof(my_str), my_str);

Zephyr builds this functionality right in with the printk() command so that you can have immediate feedback. These messages print out over a serial connection using the same style of conversion syntax as printf(), which automatically converts data types into a printable representation. In my example I’m debugging a string in C by printing out the string itself, its length, and its pointer address. The Linux docs are handy for those looking to drill down into the specifics of printk formatting.

Tip: Pay attention to Zephyr return codes!

Throughout the Zephyr samples you’ll see that it’s standard practice to test return codes and print them out when they are non-zero. I have found these return codes to be indispensable when troubleshooting subsystems like I2C. The paradigm most often used is:

int ret = gpio_pin_configure(dev, PIN, GPIO_OUTPUT_ACTIVE | FLAGS);
if (ret < 0) {
	printk("Pin config failed: %d", ret);
}

You can look up error codes in the Zephyr docs. I was getting a -88, which is ENOSYS: function not implemented.

Use the Logging Subsystem as a Powerful Debugging Tool

Once you’ve seen the Zephyr logging subsystem, there is no replacement. Log messages automatically include a timestamp and report on what part of the application they come from. Data can be included in a few different ways, and these messages are queued so that they don’t interfere with time-dependent parts of your program.

Perhaps the best part is that you specify the importance of each message, allowing you to choose at compile time which logging messages will be included in the binary. This means you can pepper your program with debug-level messages and choose to leave them out of the production builds.

How to Enable Logging in Zephyr

To turn on logging, we need to do three things: tell our Kconfig that we want to use the subsystem, include the logging header file, and declare the module name.

Step 1: Add CONFIG_LOG=y to your project’s prj.conf file.

Like all subsystems in Zephyr, we need to tell CMake that we want to use it. The easiest way to do this is by adding CONFIG_LOG=y to the prj.conf file in the project directory.

Step 2: Add the header file to the top of your c file: #include <logging/log.h>

This one is straightforward. We want to use a C library so we have to include it in the main.c (and any other C files in the project).

Step 3: Declare the module

We need to tell the logging module where the message is coming from using a macro: LOG_MODULE_REGISTER(logging_blog);

There are a couple of important things to understand here. First, you can use any unique token you want in this macro, but make sure you don’t surround it in quotes. Second, as I just mentioned, it needs to be unique (in this example it’s logging_blog, but it could be any arbitrary phrase). If you have additional C files in your project, you either need to register different tokens, or more commonly just declare the file as part of the original module: LOG_MODULE_DECLARE(logging_blog);

There is an optional second argument and this is where you choose which logging events will be compiled into the application. By default, debug messages will not be shown, so you can declare your module to enable them: LOG_MODULE_REGISTER(logging_blog, LOG_LEVEL_DBG);. Logging levels run from 0 to 4 using the following suffixes: _NONE, _ERR, _WRN, _INF, _DBG.

How to use the Logging subsystem in Zephyr

Using the logging subsystem is just as easy as using printf: LOG_INF("Count: %d", count);. The log outputs for this example look like this:

[00:01:52.439,000] <inf> logging_blog: Count: 112
[00:01:53.439,000] <inf> logging_blog: Count: 113
[00:01:54.439,000] <inf> logging_blog: Count: 114
[00:01:55.439,000] <inf> logging_blog: Count: 115
[00:01:56.439,000] <inf> logging_blog: Count: 116
[00:01:57.439,000] <inf> logging_blog: Count: 117
[00:01:58.439,000] <inf> logging_blog: Count: 118
[00:01:59.439,000] <inf> logging_blog: Count: 119

Each line begins with a timestamp (down to microseconds) and the severity level (inf for INFO), followed by our printf-style message output. Look at the timestamps in this example–they are exactly one second apart. This drives home the power of the queuing system: the messages arrive at the terminal slightly delayed, but they don’t alter the k_msleep() timing that was used for this example.

You can use four different built-in severity levels for your logs by choosing a different macro: LOG_ERR(), LOG_WRN(), LOG_INF(), LOG_DBG(). Setting these different levels allows you to choose what gets included at compile time. If you made all your debugging messages using printk(), they will always compile into your code until you remove them from the C file. If you use LOG_DBG(), you can choose not to include that level of logging when you compile the production version of your code.

By default, debug-level messages will not be shown. As mentioned earlier, you have the option of specifying the maximum severity level to show when you register your modules.

Hex dumping via logs

Logging lets you dump data arrays without the need to turn that data into something digestible first.

LOG_HEXDUMP_INF(my_data, sizeof(my_data), "Non-printable:");

I’ve given it an array of values, the length of that array, and a string to use as the label for the log message. The logging subsystem will automatically show the hexadecimal representation of that data, as well as a string representation to the right, much as you’d expect from any hexdump program.

[00:00:00.427,000]  logging_blog: Non-printable:
                                       01 fe 12 00 27                                   |....'            

Other debugging tools for next time

Using logging can give you enough feedback to solve the majority of your development issues in Zephyr, but of course there are other tools available. In future posts we’ll discuss using virtual devices via QEMU to speed up debugging sessions because you won’t have to flash newly compiled binaries to hardware. And we plan to dive into on-chip debugging that lets you set break points and step through your code. Stay tuned!

See it in action: Zephyr Debugging demo video

Zephyr wrapped up in a box

If you’re like me, you installed Zephyr and began making your own changes to the sample applications that came with the toolchain. But at some point–either for personal project repository tracking or building out a professional project–your program starts to take shape. You want to move it to its own standalone directory. It’s not immediately clear how to do that, so today we’ll dive into the nuts and bolts. (Spoiler alert: it’s pretty easy.)

There be dragons in developing inside the Zephyr directory

For those new to Zephyr, getting-started examples like the “Blinky” programs are located inside the ~/zephyrproject/zephyr/samples directory. I knew I shouldn’t just be making (and forgetting about) my own folders inside. It took losing a few programs before I did anything about it. I wanted to reinstall the toolchain and I removed my zephyrproject directory without a thought for my poor, non-revision-controlled, early experiments with the RTOS. Don’t be me.

One option is to run git init in your own subdir within the Zephyr tree. But I feel like the work I’m doing should be separate from the toolchain I’m using. So I changed the formula: I set up my own tree and told it where Zephyr is installed.

Step by step

By far the easiest thing to do is to copy one of the sample directories. For my fellow Linux enthusiasts this looks something like cp -r ~/zephyrproject/zephyr/samples/basic/blinky ~/.

Here are the more verbose steps:

  1. Create a directory for your app
  2. Add CMakeLists.txt
  3. Create a src subdirectory and add main.c file to it

That sets up the directory. You now have a folder in your home directory called blinky. Like many Zephyr samples, this includes the source files (src folder), a project configuration file (prj.conf), CMake directives (CMakeLists.txt), a readme file (README.rst), and a YAML file (sample.yaml). More complex examples might have a boards directory (boards) that holds hardware-specific configurations.

Notably absent is any reference to the Zephyr SDK and toolchain needed to build your project. We need to tell the build system where to find Zephyr in order to build the app inside this directory.

Each time you begin a new terminal session, source this helper file:

source ~/zephyrproject/zephyr/zephyr-env.sh

If you are working with the Nordic fork of Zephyr, simply source the same file from that tree:

source ~/zephyr-nrf/zephyr/zephyr-env.sh

Don’t forget that each time you begin a new terminal session (no matter what tree you’re in) you need to enable your Python virtual environment and set up the build environment. Here’s an example of how I start an ESP32 development session:

mike@golioth ~ $ cd ~/blinky
mike@golioth ~/blinky $ source ~/zephyrproject/.venv/bin/activate
(.venv) mike@golioth ~/blinky $ source ~/zephyrproject/zephyr/zephyr-env.sh
(.venv) mike@golioth ~/blinky $ export ESPRESSIF_TOOLCHAIN_PATH="/home/mike/.espressif/tools/zephyr"
(.venv) mike@golioth ~/blinky $ export ZEPHYR_TOOLCHAIN_VARIANT="espressif"
(.venv) mike@golioth ~/blinky $ west build -b esp32 . -p

Here’s the same process, but for a Nordic-based board. Note two things: I didn’t need to set up the toolchain variables like I did with ESP32, and the binaries for different hardware can be built in the same tree using different toolchains–a powerful perk of using Zephyr.

mike@golioth ~ $ cd ~/blinky
mike@golioth ~/blinky $ source ~/zephyrproject/.venv/bin/activate
(.venv) mike@golioth ~/blinky $ source ~/zephyr-nrf/zephyr/zephyr-env.sh
(.venv) mike@golioth ~/blinky $ west build -b thingy91_nrf9160_ns . -p
TIP: The ESP32 example above will not build the blinky app unless you add an esp32.overlay file to configure the &led0 alias that main.c needs to attach to an LED. The Thingy91 doesn’t have this limitation: that particular dev board has an LED included, so &led0 is already specified in the dts file within the Zephyr toolchain.
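For reference, an esp32.overlay along these lines can provide the missing alias. Treat this as a hedged sketch rather than a canonical file: the node names are made up, and GPIO2 is just a common choice for ESP32 boards with an onboard LED, so substitute the pin your hardware actually uses.

```
/ {
	aliases {
		led0 = &blinky_led;
	};

	leds {
		compatible = "gpio-leds";
		/* GPIO2 is an assumption; change it to match your board */
		blinky_led: led_0 {
			gpios = <&gpio0 2 GPIO_ACTIVE_HIGH>;
			label = "Blinky LED";
		};
	};
};
```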

Before we wrap up, let’s spend a beat discussing those CMake and Kconfig files.

Configuring a Zephyr project

I’ve moved the files of my app to a different location, but the build process remains the same. Every project needs a CMakeLists.txt file to specify the minimum CMake version, designate this as a Zephyr project, and list the C files to include in the build. If you’re just starting out, don’t write this file from scratch; copy it from a known-working sample. You also name your project in this file using the project() directive, so change that name now.
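As a reference point, the CMakeLists.txt for a sample like blinky is only a few lines; it looks roughly like this (the minimum CMake version varies between Zephyr releases):

```
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(blinky)

target_sources(app PRIVATE src/main.c)
```

The find_package() call is what pulls in the Zephyr build system, using the ZEPHYR_BASE environment variable that the zephyr-env.sh script exported earlier.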

Projects usually also need a Kconfig configuration file. This is the prj.conf that you see in most projects; for blinky it simply turns on the GPIO subsystem:

CONFIG_GPIO=y

You may choose to add a boards subdirectory and store .conf files with specific board names. These files configure subsystems, while overlay files in the same directory designate hardware pins and peripherals. These two file types are key to making your Zephyr app portable across different hardware.
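Putting it all together, a portable out-of-tree app might be laid out like this (a hypothetical layout; the build system picks up boards/<board>.conf and boards/<board>.overlay automatically when you build for that board):

```
blinky/
├── CMakeLists.txt
├── prj.conf              # Kconfig settings shared by all boards
├── boards/
│   ├── esp32.conf        # extra Kconfig settings for the ESP32 build
│   └── esp32.overlay     # ESP32 pin and peripheral assignments
└── src/
    └── main.c
```

The shared application logic lives in src/, while everything hardware-specific is pushed into the boards directory.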

Further reading

Golioth has a “clean application” guide that walks through the prj.conf settings necessary for your app to connect to the Golioth Cloud. For a deeper dive, the Zephyr docs page on application development drills down into every part of out-of-tree coding. But what I’ve covered today should be enough to get you started.

If you get stuck, the Golioth forum is a great place to ask for help on this topic. You’re also invited to join us for Golioth Office Hours, every Wednesday at 10 AM Pacific time on our Discord channel. We love chatting about the work you’re doing, and everyone wants an early look at your hardware demos during development!

Getting your ESP32 GPIO pins working with Zephyr is easy, and using a devicetree overlay file to do so makes it painless to change pins (or even boards/architectures) in the future. Today we’re looking at a simple overlay file for the ESP32 architecture and talking about the syntax used to choose input and output pins.

Overlay File Example

Zephyr uses a description language called devicetree to abstract hardware so your code is extremely portable. Since the ESP32 is a common architecture, this abstraction is already done for us. But we need to tell Zephyr what we plan to do with the GPIO pins by writing a devicetree overlay file.

An overlay file assigns an alias to a physical pin on a microcontroller and configures hardware options for that pin. Here’s an example of the overlay file for the two buttons on a TTGO T-Display board which are connected to GPIO0 and GPIO35.

/ {
    aliases {
        sw0 = &button0;
        sw1 = &button1;
    };
    gpio_keys {
        compatible = "gpio-keys";
        button0: button_0 {
            gpios = <&gpio0 0 GPIO_ACTIVE_LOW>;
            label = "Button 0";
        };
        button1: button_1 {
            gpios = <&gpio1 3 GPIO_ACTIVE_LOW>;
            label = "Button 1";
        };
    };
};

Let’s focus on assigning the alias to a specific pin. You can see at the top where sw0 and sw1 are set. These names are commonly used in Zephyr samples; for instance, assigning sw0 here makes our board compatible with the basic/button sample. For us, the important parts are the two gpios properties, where the actual port, pin number, and pin behavior are assigned. One thing you should notice: what happened to GPIO35? Let’s get into that in the next section.

Please note that explaining every part of this overlay file is beyond the scope of this article, but you can see a functional overview in our overlay file video series and dig deeper with Zephyr’s Introduction to devicetree documentation.

ESP32 Pin Numbering

ESP32 module pinout

ESP32-WROOM32E

In this diagram you can see how the ESP32-WROOM module (PDF) pins are named using an IO# format. The ESP32 numbers its GPIO pins from GPIO0 through GPIO39 and splits them between two different GPIO ports. In Zephyr, the port for the first 32 GPIO pins is called &gpio0 (zero indexed), and the port for the remaining higher-numbered pins is called &gpio1 (once again, zero indexed):

  • GPIO0..GPIO31 → port &gpio0, pins 0..31
  • GPIO32..GPIO39 → port &gpio1, pins 0..7

Following that scheme, it’s easy to translate the pins I need from the TTGO T-Display board (GPIO0 and GPIO35) into the numbers the devicetree understands. Because I’m using one of the pins numbered higher than GPIO31, I must switch to port 1 and adjust the pin number to begin counting again from zero – just subtract 32 from GPIO35 to get pin number 3.

  • &gpio0 0
  • &gpio1 3

Pin Functions

The overlay file is also where you set up the expected behavior of the pin. Most importantly, this tells Zephyr whether the pin is active high or active low. This active-level flag applies whether the pin is an input or an output.

In the case of my TTGO T-Display board, there are pull-up resistors on the circuit board and pressing the button pulls the pin low, so these are “active low” buttons. Other boards might have pull-down resistors where pressing the button pulls the pin high. Zephyr lets you use the same application code for both boards, and the actual state is translated correctly by the overlay file.

There are other options available, including the ability to turn the ESP32’s internal pull-up/down resistors on. These flags can be added using logical OR:

gpios = <&gpio0 25 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;

Zephyr also makes interrupt-driven GPIO a snap. For more on this, study the basic/button sample, which turns on interrupts and adds a callback to the sw0 pin.
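As a rough sketch of the pattern that sample uses (Zephyr API calls only, not a complete application; button0 is a struct gpio_dt_spec populated from the sw0 alias, as shown later in this article):

```c
/* Sketch: interrupt-driven input on a pin described by devicetree. */
static struct gpio_callback button_cb_data;

void button_pressed(const struct device *dev, struct gpio_callback *cb,
                    uint32_t pins)
{
    /* Runs when the button transitions to its active state. */
}

/* ...in main(), after gpio_pin_configure_dt(&button0, GPIO_INPUT): */
gpio_pin_interrupt_configure_dt(&button0, GPIO_INT_EDGE_TO_ACTIVE);
gpio_init_callback(&button_cb_data, button_pressed, BIT(button0.pin));
gpio_add_callback(button0.port, &button_cb_data);
```

Because the interrupt is configured with GPIO_INT_EDGE_TO_ACTIVE, the same code fires on the correct edge whether the overlay declared the button active high or active low.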

Initializing and Calling the Pins in Your Program

Before we finish up, let’s talk about accessing the pins you have set up in your overlay file. In main.c we need to get the alias and set up a struct that understands devicetree. From there, it’s the normal procedure of initializing the GPIO, and then acting on it: polling an input pin, or changing the state of an output.

First, we can define a node from the overlay file alias (esp32.overlay). That node is used to populate a struct that holds the port, pin number, and configuration of the GPIO pin.

#include <drivers/gpio.h>
#define SW0_NODE DT_ALIAS(sw0)
static const struct gpio_dt_spec button0 = GPIO_DT_SPEC_GET_OR(SW0_NODE, gpios, {0});

Second, we initialize the pin in main.c. This function uses the port, pin number, and flags (e.g. GPIO_ACTIVE_LOW) stored in the struct. Note that we’ve added GPIO_INPUT here as an extra flag that sets the direction of the pin.

gpio_pin_configure_dt(&button0, GPIO_INPUT);

To act on changes to the pin, we can poll its value and do something when it is in the active state.

if (gpio_pin_get_dt(&button0) > 0)
{
    LOG_INF("Button0");
}

If we had set this pin as an output, the state changes happen in much the same way, but utilize a function to set the state: gpio_pin_set_dt(spec, value). Notice that the functions and macros that Zephyr uses here all include “dt” in them. This indicates that they are specialized to operate with the devicetree. There are functions to directly set up and manipulate GPIO without using an overlay file, but then you also lose out on the benefits of software portability. To fill in your knowledge around this, take a look at the GPIO section of the Zephyr docs.
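For completeness, here is a hedged sketch of the output direction, reusing the same pattern with a hypothetical led0 alias assumed to exist in the overlay file:

```c
#define LED0_NODE DT_ALIAS(led0)
static const struct gpio_dt_spec led0 = GPIO_DT_SPEC_GET_OR(LED0_NODE, gpios, {0});

/* ...in main(): configure as output, then drive the logical state. */
gpio_pin_configure_dt(&led0, GPIO_OUTPUT);
gpio_pin_set_dt(&led0, 1); /* logical "on"; polarity comes from devicetree */
```

Passing a logical 1 turns the LED on regardless of whether the overlay declared the pin GPIO_ACTIVE_HIGH or GPIO_ACTIVE_LOW, which is exactly the portability win described above.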

ESP32 Overlays are a Snap

Pulling off an ESP32 demo is a snap now that you know how to address the pins. This is a great first adventure into how overlay files work, but of course they are far more powerful than buttons and LEDs. Sensors, displays, UARTs and just about everything else can be plumbed into Zephyr using the overlay file in order to take advantage of the libraries and drivers already present in the RTOS.