Zephyr: lack of a performant sensor API

Created on 7 Sep 2017 · 5 Comments · Source: zephyrproject-rtos/zephyr

While Zephyr has a sensor data API, it only supports very simple use cases. The API has no FIFO model, which is critical for achieving power savings on the main MCU. Users who decide on Zephyr would have to reimplement their own FIFO- and timestamping-capable API, and this is non-trivial. Additionally, the API has limited means to define the context in which a measurement is run. Running each measurement in its own thread, or circulating measurements through a global thread, is neither convenient nor efficient.

Existing sensor APIs:

Labels: Enhancement, API, Sensors

Most helpful comment

What I don't like about the current API

  • Sensor value conversion and the two-32-bit-value sensor reading representation.

    • Should be raw values, with perhaps a function callable by application code to convert them to floats or two ints.

    • The two-int representation has a significant memory footprint for values you may wish to have thousands of in memory.

  • Triggers require you to read the sensor values in a thread or work queue task.

    • You are highly likely to miss sensor values.

    • Doing the SPI/I2C transfer (a blocking call) directly in the IRQ handler has other poor consequences.

    • It should be done with an async SPI/I2C call, started in the IRQ handler so you leave the handler as soon as possible.

  • A trigger callback is not nearly as convenient as providing, say, a k_fifo-like handle you can use kernel polling on for sensor readings.

    • There is currently no way to attach context to the trigger callback.

    • It does not allow a single k_thread to poll on many sensors, which would save memory.

  • Device-specific functionality is hidden.

    • The MPU9250 is a perfect example. It has gesture interrupts you may wish to enable/disable at runtime, a built-in FIFO, built-in sensor fusion, and high/low/no-power modes. You may wish to use all of these in the same application, changing the state of the device while your application is running, e.g. wake up on a tap/shake gesture, then start collecting higher-quality, higher-power motion readings.

All 5 comments

CC @pabigot

I'd be interested in taking this on, but it'd start with some architectural design. A generic ability to have device-specific functions and state exposed to the user seems to be lacking in Zephyr and I believe that lack makes things difficult (for both sensors and for devices that support a generic API but have extra functionality that isn't obviously generalizable).

I'm currently working on an RFC for the sensor API to address some of the limitations highlighted in this issue, which I've run into myself writing some drivers to contribute, and wanted to contribute to this issue as I work through writing up the initial RFC.

Problems

Problem 1: Lack of meta-data with sensor samples

Today, the sensor API has a workable abstraction layer where all sensors have:

  • Attributes
  • Triggers
  • Channels

Sensor drivers respond to requests for channels like SENSOR_CHAN_RED or SENSOR_CHAN_ALTITUDE, and they all return the same data type (https://github.com/zephyrproject-rtos/zephyr/blob/master/include/sensor.h#L30).

This works fine for many basic use cases, but you hit a wall when things don't fall into the narrow constraints of the current channel definition(s).

For example, we have channels for IR, RED, GREEN, BLUE ... and those are useful, but what about other scientifically important ranges like UV or something specific like a narrow wavelength from a laser being measured on a narrow band photodiode?

There is currently no way to add meta-data to channel responses, and if we add a new channel like SENSOR_CHAN_SPECTRAL_COMPONENT to address the current overly-narrow range of light values, you can return a single value, but not the 'range' that value represents (ex. 645nm or 635..645nm).

Problem 2: Support for multiple instances of the same channel

How can we best handle situations where it's perfectly valid to have multiple distinct samples from the same channel on a sensor in one fetch operation? A spectrometer, for example, might read spectral data from 380-780nm (visible light) in 10nm chunks, meaning there are 40 data samples on the same 'channel', all read in one fetch operation, but this isn't something that naturally fits into the current API. The same issue potentially applies to other classes of sensors, where we might have multiple instances of one type and we need some sort of meta-data to distinguish between them.

Possible Solution(s)

As a naive solution, we could add two more values to the sensor_value struct for additional meta-data, but then this doubles the amount of memory used for each data sample which is unfortunate.

There are a number of possibilities here, but they all have tradeoffs, and I'm hoping to get a discussion started on what some of the pros/cons of different solutions to the above problems might be.

As I mentioned, I'm working on an RFC addressing the issues related to light in particular, detailing the current problems and trying to suggest some solutions and the tradeoffs involved, but this is something that touches on the whole sensor API, so I wanted to add my initial thoughts here while I work on the RFC. I'll address some issues in other sensor families like motion-related devices, but light has the most obvious problems to me at present so I wanted to start there.

Any suggestions on what might make sense to handle a situation like the spectrometer above would be useful, so that a meaningful proposal can be made for improvements.

A sensor_fetch_raw() or something would be much appreciated. Forcing this weird double int format upon developers feels pretty... forced.

What we've done so far is set periodic k_timers whose ISRs push poll jobs to a work queue. I can imagine an alternative implementation where a single thread does k_poll(&sensor_event, K_FOREVER) in a loop and GPIO/timer ISRs do k_poll_signal_raise(&sensor_event, sensor_id). Maybe this isn't optimal - starting an async transaction in the ISR itself sounds like the perfect solution, but the I2C API doesn't support that yet, right?

@microbuilder How about adding an int *info argument to sensor_sample_fetch() and/or sensor_channel_get(), and if more data is available, the driver flips the MORE_DATA_AVAILABLE bit?
