Mbed-os: EventQueue Background Example

Created on 26 Apr 2018 · 16 Comments · Source: ARMmbed/mbed-os

Description

- Type: Question

Question

How does one effectively use the EventQueue.background method? What does the passed-in update function have to handle, and what is the significance of the int argument?

An official example of this functionality would be helpful!

CLOSED events mirrored


All 16 comments

Coincidentally I was looking for an example of 'EventQueue.background' yesterday. I too would like to see an official example.

@geky @SenRamakri Please review

Unfortunately I don't have the time to update docs right now, but I can try to explain it here and schedule time with @AnotherButler to get it into the online docs.

Let me know if this helps, or if you guys are left with questions!


So, event queue backgrounding!

Well, first, what does an event queue do? It moves tasks from one execution context to another execution context.

In the embedded space, we usually use the event queue to move work triggered by IRQs from a high-priority interrupt context to a low-priority threaded context:

#include "mbed.h"
#include "mbed_events.h"

EventQueue equeue;

// example function
void print_callback() {
    printf("hi!\n");
}

// high priority interrupt context
void irq() {
    // we use the "call" method to pass print_callback to a different execution context
    equeue.call(print_callback);
}

// low priority threaded context
int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // fire irq every 100 ms

    // dispatch events in our current context
    equeue.dispatch();
}

In this example, we assign the execution context for events by calling the dispatch function. dispatch runs the event queue in the current execution context, in this case the low priority thread.

But dispatch has a downside. It consumes the whole thread! While we're running dispatch we can't execute anything else.
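One brute-force alternative (a quick sketch, assuming the usual Thread + EventQueue::dispatch_forever pattern) is to hand the queue a dedicated thread. It works, but it spends an entire extra thread and stack just to run events:

Thread dispatch_thread;

int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // every 100 ms

    // the event queue consumes this new thread instead of main
    dispatch_thread.start(callback(&equeue, &EventQueue::dispatch_forever));

    // main is free to do other work (or just sleep)
    while (true) {
        wait_ms(1000);
    }
}

If we'd rather stay on a single thread, though, we have some better options.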

We can pass a timeout to dispatch; this lets us periodically get our execution context back to do other work:

// status LED (the LED1 pin name is an assumption, use whatever your board has)
DigitalOut led(LED1);

// low priority threaded context
int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // every 100 ms

    while (true) {
        // dispatch events in our current context for 100 ms
        equeue.dispatch(100);

        // super fancy status update system
        led = !led;
    }
}

And we could even pass 0 ms to execute any pending events and return immediately, letting us spend all of our time blinking our LED.
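For example, a sketch of that zero-timeout version (same globals as above):

// low priority threaded context
int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // every 100 ms

    while (true) {
        // run any pending events, then return immediately
        equeue.dispatch(0);

        // all of the remaining time goes to the LED
        led = !led;
    }
}

Of course, now we're busy-polling: we burn every spare cycle checking the queue instead of sleeping.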

But that's still rather limiting. What if we want to do stuff in our thread, execute events, _and_ sleep?

What would this look like? We would need to be able to get how long to sleep from the event queue, but since the event queue can be updated from any IRQ, the time to sleep can change at any moment. So we also need a way to be notified when a new sleep time has been decided...

This is where the event queue background function comes in!

/** Background an event queue onto a single-shot timer-interrupt
 *
 *  When updated, the event queue will call the provided update function
 *  with a timeout indicating when the queue should be dispatched. A
 *  negative timeout will be passed to the update function when the
 *  timer-interrupt is no longer needed.
 *
 *  Passing a null function disables the existing update function.
 *
 *  The background function allows an event queue to take advantage of
 *  hardware timers or other event loops, allowing an event queue to be
 *  run in the background without consuming the foreground thread.
 *
 *  @param update   Function called to indicate when the queue should be
 *                  dispatched
 */
void background(mbed::Callback<void(int)> update);

The background function is a way of allowing the event queue to tell you when it can sleep. background and dispatch can be used together to run an event queue in the "background" of an execution context, without completely consuming the context.

So! What _does_ this look like?

// sleeping stuff so we're not burning fuel all the time
int sleep_time = -1;
Semaphore sleep_sema;
void sleep_callback(int new_sleep_time) {
    // update sleep_time
    sleep_time = new_sleep_time;

    // go ahead and wake up thread so it can sleep for new sleep_time
    sleep_sema.release();
}

// low priority threaded context
int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // every 100 ms

    // attach our sleep callback so we get sleep updates
    equeue.background(sleep_callback);

    while (true) {
        // dispatch any pending events
        equeue.dispatch(0);

        // super duper fancy status update system
        led = !led;

        // go to sleep until next event
        sleep_sema.wait(sleep_time);        
    }
}

So now we will execute events, toggle our led, and sleep all using just one execution context!

The background function is designed to handle all of the weird corner cases for you, such as cancelling events, nested events, and things like that, so as long as you call dispatch after the timeout, everything should work out fine.

Note! If you use this function, one tricky part is when the event queue _doesn't_ have a timeout. In this case, the callback will be passed the value -1 (which is the same as osWaitForever). In this example, passing -1 on to the semaphore wait gets the behaviour we want, but sometimes this needs to be treated as a special case. Also, the event queue will pass -1 at destruction time.
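If you'd rather not rely on -1 converting to osWaitForever, here's a minimal sketch of the special-case handling (only sleep_callback and the type of sleep_time change from the example above):

// sleep_time now holds the raw RTOS timeout instead of the signed value
uint32_t sleep_time = osWaitForever;
Semaphore sleep_sema;

void sleep_callback(int new_sleep_time) {
    if (new_sleep_time < 0) {
        // no timed events pending (or the queue is being destroyed):
        // sleep until the next update arrives
        sleep_time = osWaitForever;
    } else {
        sleep_time = new_sleep_time;
    }

    // wake up the thread so it picks up the new sleep_time
    sleep_sema.release();
}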

Now I know what you're thinking. That while loop is starting to look a lot like an event queue by itself! Could we use the background function to run one event queue in another event queue's context?

The answer is yes:
https://github.com/ARMmbed/mbed-os/blob/master/events/equeue/equeue.c#L540-L575

But it's complicated enough that we wrapped it up in background's sister function: chain:

/** Chain an event queue onto another event queue
 *
 *  After chaining a queue to a target, calling dispatch on the target
 *  queue will also dispatch events from this queue. The queues use
 *  their own buffers and events must be handled independently.
 *
 *  A null queue as the target will unchain the existing queue.
 *
 *  The chain function allows multiple event queues to be composed,
 *  sharing the context of a dispatch loop while still being managed
 *  independently
 *
 *  @param target   Queue that will dispatch this queue's events as a
 *                  part of its dispatch loop
 */
void chain(EventQueue *target);

With chain, we can rewrite our blinky to use its own event queue:

// our own event queue
EventQueue blinky_equeue(2*EVENTS_EVENT_SIZE);

// status update thingy
void blink() {
    led = !led;
}

// low priority threaded context
int main() {
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 100000); // every 100 ms

    // chain our event queue onto this new event queue for the blinky stuff
    equeue.chain(&blinky_equeue);

    // blink our led every 100 ms
    blinky_equeue.call_every(100, blink);

    // dispatch our root event queue
    blinky_equeue.dispatch();
}

Now why would you want to do this? When you want to tightly control the memory consumption of each event loop.

Consider an example Sonar class driven by a complicated FSM that I'm too lazy to write:

class Sonar {
public:
    Sonar(EventQueue *parent_queue = mbed_event_queue()) {
        // attach our sonar FSM to the provided event queue, defaulting to the global mbed event queue
        _equeue.chain(parent_queue);
    }

private:
    EventQueue _equeue{SONAR_EVENTS * EVENTS_EVENT_SIZE};
};

Normally, when you pass around an event queue, you need to track how many events could be allocated at once. But if event queues are chained together, the parent no longer needs to care about the child's quantity of events, as long as it gives the child the hook to dispatch its own event queue (1 event for the child's chain call).

In this way we can pass the decision of execution context entirely up to our caller:

// create three sonars using our event queue
EventQueue queue(3 * EVENTS_EVENT_SIZE);
Sonar s1(&queue);
Sonar s2(&queue);
Sonar s3(&queue);

int main() {
    // dispatch all of our sonars at once
    queue.dispatch();
}

Note: This is a form of controlling execution context _explicitly_. In mbed-os, you can also control execution context _implicitly_ by creating different threads for each sonar. The decision for which one to use is left up to you.
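For comparison, a rough sketch of that implicit, thread-per-sonar version (assuming EventQueue::dispatch_forever as each thread's entry point, and the same Sonar class as above):

// one queue and one thread per sonar; each queue only needs room for the
// child's single chain hook
EventQueue q1(1 * EVENTS_EVENT_SIZE);
EventQueue q2(1 * EVENTS_EVENT_SIZE);
Sonar s1(&q1);
Sonar s2(&q2);
Thread t1, t2;

int main() {
    // each sonar's queue gets its own execution context
    t1.start(callback(&q1, &EventQueue::dispatch_forever));
    t2.start(callback(&q2, &EventQueue::dispatch_forever));

    // main is free for other work; the sonars run on their own threads
}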

Thanks. That's certainly given me plenty to be going on with! :-)

Thanks! That's a very good explanation. I'd be interested in seeing more material like that from mbed's engineers. You guys should consider a weekly blog post on _good_ software design practices when using mbed :)

Glad I could help! It would be interesting to see more technical notes on the blog; I'll mention that to @janjongboom / @BlackstoneEngineering.

@geky, want to do an office hours on Event Queue?

I've never looked too hard at this - would it make sense to encourage this "chain" rather than "use directly" model for users of the shared event queues - putting it into example docs there?

It would allow us to maybe reduce the default event queue size for those, and it would also let people who want to queue significant data (e.g. packet payloads) use them.
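Something like this sketch, presumably (mbed_event_queue() being the shared queue, the size made up):

// a module sizes its own queue, but still runs on the shared dispatch thread
EventQueue module_queue(4 * EVENTS_EVENT_SIZE);

void module_init() {
    // events posted to module_queue are dispatched by the shared queue's
    // loop, but are allocated from module_queue's own buffer
    module_queue.chain(mbed_event_queue());
}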

I'm not sure this approach is for everyone. It works if you know roughly how much memory the event queue for your module needs, and it's a good way to keep memory exhaustion contained within individual modules. If you don't know, having multiple event queues may do more harm than good.

It'd be interesting to try it out for a code module and see if it has a net benefit.

For significant data, I think user allocated events is the correct solution, but I haven't been able to add support for user allocated events. It's probably the biggest missing feature of the equeue right now. (Currently the equeue relies on event addresses being in the range of the internal memory arena to fit ids into ints).


The chaining makes it more "lumpy", right? The thing dispatches everything from the chained queue as one top-level event?

User-allocated events are the biggest omission from my point of view. They're vital for robustness - being able to queue pre-allocated events with no risk of memory exhaustion. (Added this to Nanostack's event queue a couple of years ago to make it survive fuzz testing.)

If this hasn't been integrated into the official handbook already, it should be put _somewhere_ public. The explanation above was too good to be lost to the depths of closed GitHub issues.

Thanks for the feedback! We're working on integrating it into our documentation; completely agree that Geky did a great write-up on this.

Here's the current pr for it:
https://github.com/ARMmbed/mbed-os-5-docs/pull/507

@geky I tried the background example, but moved the dispatching of the queue into another thread, and found that it doesn't quite have the expected behaviour. The sleep_callback always gets 0 as its parameter after the first execution, so the controller never goes to sleep, since sleep_time is always 0, and the LED just flickers instead of blinking. The "hi" is printed out correctly every second.

I used the following code:

EventQueue equeue;
DigitalOut led(LED0);
Thread thread1;

int sleep_time = -1;
Semaphore sleep_sema;

void sleep_callback(int new_sleep_time)
{
    // update sleep_time
    sleep_time = new_sleep_time;

    // go ahead and wake up thread so it can sleep for new sleep_time
    sleep_sema.release();
}

// example function
void print_callback()
{
    printf("hi!\n");
}

// high priority interrupt context
void irq()
{
    // we use the "call" method to pass the callback_function to a different execution context
    equeue.call(print_callback);
}

void threadContext()
{
    while (true)
    {
        // dispatch any pending events
        equeue.dispatch(0);

        // super duper fancy status update system
        led = !led;

        // go to sleep until next event
        sleep_sema.wait(sleep_time);
    }
}

int main(void)
{
    // hardware timer, can only execute code in interrupt context
    Ticker timer;
    timer.attach_us(irq, 1000000);

    // attach our sleep callback so we get sleep updates
    equeue.background(sleep_callback);

    thread1.start(threadContext);

    while (true)
    {
        sleep();
    }
}

I tried this code on an EFM32 Pearl Gecko DevBoard from Silicon Labs.
So my question is, am I doing something wrong here?

Internal Jira reference: https://jira.arm.com/browse/IOTCORE-340

Thank you for raising this issue. Please note we have updated our policies and now only defects should be raised directly in GitHub. Going forward, questions and enhancements will be considered in our forums, https://forums.mbed.com/. If this issue is still relevant, please re-raise it there. This GitHub issue will now be closed.
