I seem to be experiencing loss of characters sent/received through UART when communicating with the BG96 modem and when I have TICKLESS mode enabled. I have noticed this by viewing traces when connecting to a network. When I disable TICKLESS mode, the modem connects and works perfectly.
I am using a custom target based on the STM32L476VG and the GCC_ARM toolchain. I have copied the DISCO_L476VG system_clock.c and am using mostly the same PeripheralPins/PinNames files. I am using the QUECTEL BG96 modem with the integrated mbed driver. The modem is connected to UART pins, and I have also defined and connected the flow control pins. The baud rate is left at the default 115200. Modem pins:
MDMRTS = PA_12,
MDMCTS = PA_11,
MDMTXD = PA_9,
MDMRXD = PA_10,
custom_targets.json
{
    "MY_TARGET": {
        "inherits": ["FAMILY_STM32"],
        "core": "Cortex-M4F",
        "extra_labels_add": ["STM32L4", "STM32L476xG", "STM32L476VG"],
        "config": {
            "clock_source": {
                "help": "Mask value : USE_PLL_HSE_EXTC (need HW patch) | USE_PLL_HSE_XTAL | USE_PLL_HSI | USE_PLL_MSI",
                "value": "USE_PLL_MSI",
                "macro_name": "CLOCK_SOURCE"
            },
            "lpticker_lptim": {
                "help": "This target supports LPTIM. Set value 1 to use LPTIM for LPTICKER, or 0 to use RTC wakeup timer",
                "value": 1
            }
        },
        "OUTPUT_EXT": "hex",
        "detect_code": ["0820"],
        "macros_add": [
            "USBHOST_OTHER",
            "TWO_RAM_REGIONS",
            "MBED_TICKLESS"
        ],
        "device_has_add": [
            "ANALOGOUT",
            "CAN",
            "LPTICKER",
            "SERIAL_FC",
            "SERIAL_ASYNCH",
            "TRNG",
            "FLASH"
        ],
        "release_versions": ["2", "5"],
        "device_name": "STM32L476VG",
        "bootloader_supported": true
    }
}
mbed_app.json
{
    "target_overrides": {
        "*": {
            "events.use-lowpower-timer-ticker": 1,
            "platform.default-serial-baud-rate": 115200,
            "platform.stdio-baud-rate": 115200,
            "platform.stdio-convert-newlines": true,
            "platform.stdio-buffered-serial": true,
            "drivers.uart-serial-rxbuf-size": 256,
            "drivers.uart-serial-txbuf-size": 256,
            "rtos.idle-thread-stack-size": 1024,
            "platform.cpu-stats-enabled": 1,
            "platform.thread-stats-enabled": 1,
            "platform.error-filename-capture-enabled": true,
            "cellular.use-apn-lookup": 0,
            "ppp-cell-iface.apn-lookup": 0,
            "mbed-trace.enable": true
        }
    },
    "macros": [
        "DEFAULT_APN=\"internet\"",
        "CELLULAR_DEVICE=QUECTEL_BG96"
    ],
    "config": {
        "trace-level": {
            "help": "Options are TRACE_LEVEL_ERROR,TRACE_LEVEL_WARN,TRACE_LEVEL_INFO,TRACE_LEVEL_DEBUG",
            "macro_name": "MBED_TRACE_MAX_LEVEL",
            "value": "TRACE_LEVEL_INFO"
        }
    }
}
I have tried:
setting lpticker_lptim to 0
changing lpticker_delay_ticks, testing with values from 0 to 4
setting a custom idle callback function
Nothing changed. Nothing seems to work except disabling TICKLESS mode, which is not an option for my project. This is a blocker because the project requires low-power mode, with the modem only rarely turned on to send data to the backend.
[ ] Question
[ ] Enhancement
[x] Bug
Internal Jira reference: https://jira.arm.com/browse/MBOCUSTRIA-161
Hi
Maybe you can try adding a sleep_manager_lock_deep_sleep() call somewhere in the cellular init procedure, in order to avoid going into deep sleep?
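For illustration, the call pair looks roughly like this (a minimal sketch - sleep_manager_lock_deep_sleep()/sleep_manager_unlock_deep_sleep() are declared in mbed_power_mgmt.h; where exactly to place them in the init path is an open choice):

#include "mbed.h"

// Sketch: hold a deep sleep lock for the duration of the modem bring-up,
// so only shallow sleep can occur while the UART is in active use.
void cellular_bring_up(void)
{
    sleep_manager_lock_deep_sleep();    // deep sleep disallowed from here on
    // ... power on the modem, open the UART, run the AT init sequence ...
    sleep_manager_unlock_deep_sleep();  // deep sleep permitted again
}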
Hi!
Did that already, no difference. Also tried to set a different idle hook (empty) at boot, also no difference.
I would not expect deep sleep to be entered at all - both the console and the modem are using UARTSerial - ie buffered serial with an IRQ handler attached for reception. Both of those should be holding a deep sleep lock.
Deep sleep would only be possible if the modem is closed (possible) and the console input was somehow deactivated (not currently possible).
For test purposes if you're not using console input, you can of course set "platform.stdio-buffered-serial" to false, which would get rid of that lock. Downside is no output buffering.
We do need some mechanism to tell UARTSerial "output only" so it doesn't attach the Rx IRQ handler - then it would only hold the deep sleep lock while the TX buffer was occupied.
I don't have any problems with UARTSerial locking deep sleep, the modem is turned on only when sending data, and after that I disconnect, delete the instance of the NetworkInterface object, and turn off the modem power.
My problem is that the modem does not want to connect (UART is losing data) when TICKLESS mode is enabled.
I think the console will be disabling the deep sleep, and that sounds like it will ultimately be an issue for you.
But your character loss must be happening when deep sleep is disabled for the modem regardless - I was just bringing it up to note that Jerome's "try locking deep sleep" shouldn't affect anything.
The AT handler stuff in the modems doesn't have proper wake-up mechanisms anyway - they basically handle blocking by entering a polling loop for readability with 1 millisecond sleeps. (See mbed_poll.cpp)
As such they're possible worst-case for "tickless" latency behaviour - they're going to be asking for a wakeup every millisecond, thus every idle call will be trying to set up a 1 millisecond timer manually - harder work than just having a continuous millisecond ticker.
If there was an IRQ latency issue with this, you could get character loss.
Should be easy to reproduce though - same basic behaviour is used by any blocking serial read on console input with "platform.stdio-buffered-serial" true (UARTSerial::read), so you would see it on any console test reading stdin.
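For reference, the blocking path boils down to roughly this shape (a paraphrase of the mbed_poll.cpp loop, not the verbatim source):

#include "mbed.h"

// Simplified sketch: poll the file handle for readability; if nothing is
// buffered yet, sleep 1 ms and try again until the timeout expires.
static bool wait_readable(mbed::FileHandle *fh, int timeout_ms)
{
    mbed::Timer timer;
    timer.start();
    for (;;) {
        if (fh->poll(POLLIN) & POLLIN) {
            return true;                  // data is waiting in the Rx buffer
        }
        if (timeout_ms >= 0 && timer.read_ms() >= timeout_ms) {
            return false;                 // timed out
        }
        rtos::ThisThread::sleep_for(1);   // the 1 ms polling interval at issue
    }
}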
Now, just noticed you actually replaced the idle hook (with what - null function?). If that's the case - then I'm not sure what's left. We won't be making sleep calls at all to the HAL. So a flaw in the SysTimer / OS_Tick implementation, or the lpticker it uses?
@kjbracey-arm yeah, I'm at a loss too here.
I have tried to replace the idle function with:
Thread::yield()
wait(1.0f)
I also tried setting lpticker_delay_ticks to 0 to eliminate the lpticker wrapper, but it didn't make any difference.
I also suspect that the problem is somewhere in the ticker(s), but I don't have enough knowledge or experience to find out what and where exactly is the problem.
You also have CTS/RTS. If that's functional (has a command been issued to the modem to enable its flow control?) then IRQ latency issues should basically be ruled out too. It shouldn't matter if we're sluggish to wake up, the modem should be waiting.
I have tried with and without configuring the RTS and CTS pins, and it's the same. I don't see flow control being enabled on the modem - the integrated mbed driver doesn't appear to configure it. I think that should be updated so the driver automatically enables flow control on the modem when RTS and CTS pins are configured on a target.
But the problem is even detecting when the modem has turned on so that any commands can be issued to it - the received data is missing characters, so it is not reliable.
I would expect a wait in the idle function to kill the system - I believe RTX requires that there always be a "ready" thread, so idle has to always be "ready" - it can never wait for something.
Empty and yield would be fine. Just __DSB(); __WFE(); would be a viable minimal "pause the CPU" implementation. Any of those should have the same basic effect - CPU active or frozen waiting for the next millisecond tick, and no real delay in processing it when it happens. When not stopped via the idle hook, the "tickless" OS does still have regular millisecond ticks. And the blocking serial reception is checked on those ticks via polling, not the serial Rx IRQ.
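As a sketch, such a minimal hook could be installed like this (assuming the mbed OS 5 rtos_attach_idle_hook() API from rtos/rtos_idle.h):

#include "mbed.h"
#include "rtos/rtos_idle.h"

// Minimal "pause the CPU" idle hook: no sleep HAL calls, the core just
// stalls until the next interrupt/event (e.g. the millisecond OS tick).
static void minimal_idle_hook(void)
{
    __DSB();    // complete outstanding memory accesses
    __WFE();    // wait for the next event/interrupt
}

int main()
{
    rtos_attach_idle_hook(minimal_idle_hook);
    // ... rest of the application ...
}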
It smells as if the ThisThread::sleep_for(1) calls in the serial/poll blocks are taking significantly longer than 1 millisecond. Maybe use a (micro-second resolution) Timer to check how long they're actually sleeping for.
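Something along these lines (a sketch of the measurement only, not part of the driver):

#include "mbed.h"

// Measure how long ThisThread::sleep_for(1) actually sleeps, with
// microsecond resolution, and report the worst case seen.
int main()
{
    mbed::Timer t;
    int max_us = 0;
    for (int i = 0; i < 1000; i++) {
        t.reset();
        t.start();
        rtos::ThisThread::sleep_for(1);   // nominally 1 ms
        t.stop();
        if (t.read_us() > max_us) {
            max_us = t.read_us();
        }
    }
    printf("worst-case sleep_for(1): %d us\n", max_us);
}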
@kjbracey-arm I will try and measure how long the UARTSerial::wait_ms function is actually sleeping for.
@kjbracey-arm It seems that UARTSerial::wait_ms is not called at all, because ATHandler calls _fileHandle->set_blocking(false); in its constructor.
Ah, good point. The wait is then occurring in poll() in mbed_poll.cpp. It's fundamentally the same though.
@kjbracey-arm I did some tests and measured the average time of execution of the mbed_poll.cpp poll function:
Values are in milliseconds, where avgExec is the average time of execution and avgTimeout is the average value of the timeout parameter:
avgExec=51.134, avgTimeout=973.595
The important thing for character loss is how long it spends in the rtos::ThisThread::sleep_for(1) call inside poll. Effectively that determines how often it polls the serial port for readability. If that goes significantly above 1ms, you'll be at the mercy of whatever hardware FIFOs the device has.
Or, indeed, I guess what ultimately matters is indeed how far spaced out the calls to the underlying fh->poll() are - the loop period.
I'll add more measurement and report.
The "proper wakeup mechanism" is very big TODO here. It is unfortunately non-trivial, at least if you want to be solid for the case of multiple threads all waiting on a single device. Polling every millisecond is pretty poor, but should be just about tolerable for a serial port at 115200, if we have a little bit of FIFO.
To do it properly we need to be able to broadcast a wakeup to all threads blocked on poll or a blocking read/write on that device. ConditionVariable would do the job, except we need to be able to raise it from interrupt, which that doesn't support.
This prototype branch experimented with a new ConditionVariableCS, and I see it actually does incorporate a poll wake implementation based on it.
As I recall, I think this fully worked - the only issue was the contentiousness of ConditionVariableCS - some felt it to be a bad thing due to "encouraging" critical section use.
At some point we'll have to revisit this.
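To illustrate the gap: rtos::EventFlags can be set from interrupt context, so waking a single blocked thread is easy - it's the multi-waiter broadcast that's hard, which is what the ConditionVariableCS prototype was for. A hypothetical single-waiter sketch (names invented for illustration, not the prototype branch's implementation):

#include "mbed.h"

static rtos::EventFlags rx_flags;

// Called from the UART Rx interrupt handler.
static void rx_irq_wake(void)
{
    rx_flags.set(1);    // EventFlags::set() is safe from interrupt context
}

// Called by a thread blocked waiting for data. Note wait_any() auto-clears
// the flag, so only ONE waiter consumes each wake - a true broadcast to
// several waiters needs more machinery, hence the difficulty noted above.
static bool wait_for_rx(uint32_t timeout_ms)
{
    uint32_t result = rx_flags.wait_any(1, timeout_ms);
    return !(result & osFlagsError);    // false on timeout or error
}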
@kjbracey-arm seems complicated... How would I go about solving the problem for my project? It is a critical issue and I need to find a solution... I did some measurement of the loops - values are in microseconds, where avgOuter is the average time of execution of the outer loop body and avgInner is the average time of execution of the inner loop body:
avgOuter=991, avgInner=10
The wakeup stuff would be a potential solution if the problem was somehow due to the "1ms wait" massively overshooting. Maybe a proper RX IRQ would wake you up in time. (It's possible you could just merge that branch in and it would work, but wouldn't surprise me if it needed some rebasing).
But 991us seems good, so maybe it's not really anything to do that. Maybe you can check for maximum, see how long it can get - any spikes? And do those test results change notably with tickless turned on and off?
TICKLESS ON:
avgOuter=992, avgInner=10, maxOuter=1205, maxInner=159

TICKLESS OFF:
avgOuter=981, avgInner=10, maxOuter=1229, maxInner=29
avgOuter=1002, avgInner=10, maxOuter=5993, maxInner=29
avgOuter=1007, avgInner=10, maxOuter=6004, maxInner=47
avgOuter=1023, avgInner=10, maxOuter=6004, maxInner=1005

Well, no significant difference there. Seems all the timing stuff is a red herring. Is it possible that the TICKLESS switch is somehow changing the behaviour of something else? How many bits of code look at the flag?
Or is it something daft like an uninitialised variable, so it's just code moving that is breaking/fixing it? Have you tried the different compilation profiles - release/develop/debug?
I have tried using different compilation profiles, no difference.
The MBED_TICKLESS define is used only in mbed_rtx_idle.cpp and mbed_power_mgmt.h. I don't see any obvious problems there...
The only logical problem is with the OS tick, but I don't understand where or why. Because nothing else changes when enabling TICKLESS...
The OS tick ultimately has the job of waking up those sleep_for(1) calls (assuming idle has been nerfed so the CPU keeps running), so I would expect any problems with it to be visible there.
One possible thought, which I haven't really followed through - interrupt priority of LPTicker (used if tickless) versus SysTick. Do they have different priorities configured in the NVIC?
@kjbracey-arm where should I check that?
NVIC_GetPriority(SysTick_IRQn) versus NVIC_GetPriority(<whatever interrupt the lpticker uses>). You'd have to figure out the latter.
Seems that SysTick would probably be set very low priority, but lpticker might be high? Not sure why that would matter though.
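A quick way to dump both (LPTIM1_IRQn is a guess for this target at this point; it's confirmed a few comments down):

#include "mbed.h"

// Print the configured NVIC priorities of the two tick sources.
int main()
{
    printf("SysTick priority: %lu\n", (unsigned long)NVIC_GetPriority(SysTick_IRQn));
    printf("LPTIM1 priority:  %lu\n", (unsigned long)NVIC_GetPriority(LPTIM1_IRQn));
}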
void SysTimer::setup_irq()
{
#if (defined(NO_SYSTICK) && !defined (TARGET_CORTEX_A))
    NVIC_SetVector(mbed_get_m0_tick_irqn(), (uint32_t)SysTick_Handler);
    NVIC_SetPriority(mbed_get_m0_tick_irqn(), 0xFF); /* RTOS requires lowest priority */
    NVIC_EnableIRQ(mbed_get_m0_tick_irqn());
#else
    // Ensure SysTick has the correct priority as it is still used
    // to trigger software interrupts on each tick. The period does
    // not matter since it will never start counting.
    OS_Tick_Setup(osRtxConfig.tick_freq, OS_TICK_HANDLER);
#endif
}
I tried to find where the lpticker interrupt is set but I don't know enough about the structure of mbed to be able to do that. I don't understand how or where the lpticker is implemented for STM...
I've realised I was talking nonsense to some extent above - the 1ms periodic poll is to check the UARTSerial receive buffer, not the serial port itself. The data is transferred from the actual serial port into the buffer via UART RX interrupts.
So what I'm now thinking is that the serial interrupt is being starved in the tickless system. Presumably the lpticker interrupt is the same priority or higher than the serial interrupt, and I'm guessing whatever work is being done in the lpticker OS_Tick interrupt is causing too much serial IRQ latency. In the tick system, the SysTimer OS_Tick interrupt is very low priority, and can be pre-empted by the serial interrupt.
It's not actually clear to me why the tickless system uses lptimer to generate its ticks when running. It certainly needs to use it for the wakeup from idle, but that doesn't mean you have to switch from SysTick to lptimer for OS_Tick, does it?
If we have a priority issue, to avoid this sort of compatibility grief, maybe switching back to using SysTick OS_Tick would help? Lowering lpticker's interrupt priority might cause other problems, although it seems like it should naturally be a low-ish priority interrupt.
I wonder how much work is actually occurring in the lpticker ticks to the OS though. I wouldn't have thought it should be enough to starve the serial - shouldn't it just be triggering the low-priority SVCHandler? I guess I don't understand /why/ SysTick should have been set to low priority in the first place. Maybe there's a flaw in the lpticker code or HAL making it take a long time to process an IRQ.
Time to hand over to RTOS porting folks I think - @ARMmbed/mbed-os-core , @bulislaw
For your own experiments, you could try raising the priority of the serial interrupt or lowering that of the lpticker.
LPTicker seems to be LPTIM1_IRQn - set its priority to 0xFF
@kjbracey-arm Thank you very much for all your suggestions and effort! 👍 I hope we will be able to fix this, the deadline for my project is near, so I have to solve this issue somehow.
I will try to test your suggestion as soon as I get time and will post results, I have some other stuff to deal with on Android right now :-)
I've looked at it a bit more - the LPTicker interrupt handler does just generate the SysTick interrupt in software using ICSR.PENDSTSET, so there's no real RTOS work being done at the higher LPTicker interrupt level. And it appears that the code is attempting to make that SysTick be low priority as usual.
But maybe there is still enough time spent in the HAL and platform layers processing the lptimer interrupt to disrupt serial?
I have looked at the stm32L4 HAL LPTIM source code and understand very little. I don't know where to begin or what I should look at. I compared some parts with the info from the datasheet and it seems to be correct...
What are my options? How would I debug it and see if the serial interrupts are being ignored?
@c1728p9
Add NVIC_SetPriority(LPTIM1_IRQn, 0xFF) somewhere in its init code, when it makes the other NVIC calls on that interrupt. That will lower the priority to the same as SysTick, and maybe make the symptoms go away, if my theory is correct.
I added NVIC_SetPriority(LPTIM1_IRQn, 0xFF); to mbed-os\targets\TARGET_STM\TARGET_STM32L4\lp_ticker.c, just above the NVIC_SetVector(LPTIM1_IRQn, (uint32_t)LPTIM1_IRQHandler); line in lp_ticker_init(), inside the #if MBED_CONF_TARGET_LPTICKER_LPTIM block.
It didn't help, still losing characters.
I tried to use NVIC_GetPriority(LPTIM1_IRQn); in my main() function to check if the value is set and it returns 0x0F as the priority instead of 0xFF.
I replaced my NVIC_SetPriority(LPTIM1_IRQn, (uint32_t)0xFF) line with:
NVIC_SetPriority(LPTIM1_IRQn, (uint32_t)0xFF);
uint32_t prio = NVIC_GetPriority(LPTIM1_IRQn);
if (prio != 0xFF) {
    error("HAL_LPTIM_Init set LPTIM1 priority failed!\n");
    return;
}
And it crashes - should that happen? Shouldn't NVIC_GetPriority return the same value that was set on the line before with NVIC_SetPriority?
The NVIC_Get/SetPriority API is a bit bonkers. The registers are designed so that the most significant bits matter if the chip only has 16 priority levels.
The API shifts it so you specify priorities 0x0-0xF on a 4-bit priority system, rather than 0x00-0xF0, totally defeating the point of the hardware register design.
So that's what I'd expect. It's fine as long as you're trying to specify min or max priority, but anything intermediate is a pain.
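In concrete terms, on this part (STM32L4 implements 4 priority bits, so __NVIC_PRIO_BITS is 4):

// CMSIS stores priorities in the top __NVIC_PRIO_BITS bits of an 8-bit field:
//   NVIC_SetPriority(irq, 0xFF) writes (0xFF << 4) & 0xFF = 0xF0 to the register
//   NVIC_GetPriority(irq)       reads it back as 0xF0 >> 4 = 0x0F
// So the earlier check must compare against 0x0F, or portably:
NVIC_SetPriority(LPTIM1_IRQn, 0xFF);
if (NVIC_GetPriority(LPTIM1_IRQn) != ((1UL << __NVIC_PRIO_BITS) - 1)) {
    error("LPTIM1 priority is not at the minimum!\n");
}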
@kjbracey-arm Oh, ok, understood.
@kjbracey-arm Is there any option to switch between TICKLESS and non-TICKLESS mode at runtime? Should that even be done? I'm running out of options and don't know what else to do.
I took a look at this and was able to reproduce dropped characters when tickless is turned on when using a NUCLEO_L476RG at 115200 baud. On this board the failures were due to overrun errors caused by lp_ticker_set_interrupt (used by tickless) taking too long to execute.
@mfatiga a quick test you can do to confirm that you are seeing this same problem is to run tickless off the microsecond ticker, as us_ticker_set_interrupt does not busy-loop like lp_ticker_set_interrupt does. If the dropped characters go away for you then you likely have the same problem I'm seeing. To run tickless off of the microsecond ticker, replace the get_lp_ticker_data() here with get_us_ticker_data() and add #include "hal/us_ticker_api.h" at the top of the file.
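The change is roughly this (a sketch from memory - the exact spot is the line linked above, where the SysTimer's ticker is chosen):

// At the top of the file:
#include "hal/us_ticker_api.h"

// Where the SysTimer is constructed:
//   before:  get_lp_ticker_data()   // low power ticker drives the OS tick
//   after:   get_us_ticker_data()   // microsecond ticker drives the OS tick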
As for profiling of the NUCLEO_L476RG, the function lp_ticker_set_interrupt is in some cases taking >90us to execute. The screenshot below shows a logic capture when a ~90us call to set_interrupt is followed by a uart overrun. The logic capture this screenshot was taken from is available here.
Thanks, Russ. I was going to suggest instrumenting to see if SysTimer::handler() was somehow taking too long, but looks like you've already found it does. And because it claims a critical section, lowering its priority wouldn't have helped.
Presumably if we do switch to us_ticker for the test, the idle function will still need to be replaced with null, or a deep sleep lock will need to be held, assuming us_ticker doesn't run during deep sleep.
So outstanding questions:
Why is lp_ticker being used as the OS tick source at all while running, rather than SysTick?
Does SysTimer::handler need to claim a critical section? I would suspect it doesn't, as long as we can assume SVCHandler etc are lower or equal priority, and I believe that's an RTX requirement.
@c1728p9 Yes! I have tested your suggestion about replacing lp_ticker with us_ticker in SysTimer and I have no more character loss.
@kjbracey-arm @c1728p9 Should there exist some kind of construct similar to deep_sleep_lock that would switch to the SysTick for code that deals with fast interrupt-based logic (fast serial, etc.). I wouldn't add calls to it anywhere in the mbed API, just provide the ability to use it when required. I don't think deep_sleep_lock should enable this, because in many cases this is not necessary.
Can you try the alternative of leaving it using lpticker, lowering the priority as previously tried, but removing the critical section from SysTimer::handler?
If the lpticker setup is taking that long, it's going to potentially cause all sorts of problems if ever used from normal interrupts or critical sections - and I bet it is in other places. I would say the implementation should be improved if at all possible.
Anyone who needs fast interrupts must already be disabling the deep sleep via the lock, as deep sleep wake could be as slow as 10ms. The serial drivers do hold the lock while interrupt handlers are attached. (Nominally because UART IRQs are not specified to wake from deep sleep, but even if they did, 10ms would be too slow).
The general expectation is that no-one should ever be disabling interrupts long enough to break serial. Serial RX is effectively the canary for interrupt latency - it's the first to die. A "keep IRQ disables below 100us" rule has always been my guideline, but it appears the STM platform is particularly vulnerable - no FIFO? Conventional PC UARTs have historically had a 16-byte FIFO to achieve 115200, but some of these embedded devices have omitted it, making them more sensitive to IRQ latency.
I'm currently not seeing any reason for the RTOS code to ever use lp_timer while running - SysTick should be fine. lp_timer is only really needed for deep sleep, when you know no-one needs fast IRQ response anyway; wake from shallow sleep could also use us_ticker (or maybe even SysTick)?
So I don't see a need for a new runtime lock - either lp_timer should be expected to be fast enough to use generally as it currently is, or it should always be avoided until going to deep sleep.
@kjbracey-arm I have tested your suggestion and it doesn't work.
I agree with you, lp_ticker should only be used when in deep sleep, otherwise the SysTick should be used.
I don't see a mention of a hardware FIFO for STM UART anywhere. Would maybe using DMA help?
Ah, there are more critical sections on the path - SysTimer::schedule_tick, and ticker_insert_event_us. It's critical sections all the way down. Looks like lp_ticker_set_interrupt will always be called in a critical section, no matter who schedules the timer, so it's going to cause problems even if called from thread context.
Sadly, UART DMA also often has its own set of problems. Many devices have problems doing clean buffer handover, or avoiding races between timeouts and character arrival. I think that's why most HALs use PIO.
Googling "STM DMA UART" or "Freescale DMA UART", and probably many other vendors, will show a number of discussions with people struggling to write a solid driver, given the way the hardware works. I think for many devices it just isn't possible.
Thank you for the info about DMA, didn't know that.
So, the solution here would be to use SysTick when not in deep sleep. How would one go about implementing that? What implications would switching the tick source at runtime have on the internals? Would that break anything?
I believe lpticker for this platform needs to be improved regardless. Quite a few bits of code do use it if available, and if it causes enough IRQ latency to cause UART character loss every time they do, that's not acceptable.
So I'd say the most important thing to do first is speed up STM's lp_ticker_set_interrupt.
For SysTick, conceptually it would mean dropping the OS_Tick overrides in mbed_rtx_idle.cpp, or at least making them conditional on NO_SYSTICK rather than MBED_TICKLESS. But there would be various plumbing issues: making sure os_timer is still initialised in its own right to do its job for MBED_TICKLESS's default_idle_hook, and making sure SysTick is shut down before entering idle. (But maybe osKernelSuspend calls OS_Tick_Disable itself? - yes it does)
But at this point we need to check with the OS porters as to why SysTimer is acting as OS_Tick for tickless. There may be something deeper here I'm missing.
I suppose the while (__HAL_LPTIM_GET_FLAG(&LptimHandle, LPTIM_FLAG_CMPOK) == RESET) { } inside lp_ticker_set_interrupt is what's making it slow. It's probably there for a reason; I don't know if it can be optimized.
Do you know who would be able to find a way of using lpticker only in deep-sleep mode? I don't think I would or should do this because I don't want to break anything, and also don't have enough experience to be able to do that...
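(For illustration, one conceivable way to avoid the busy-wait - a hypothetical sketch of the idea only, not the actual fix that follows:)

// Hypothetical: only spin on CMPOK if an earlier compare write is still in
// flight, and let the LPTIM interrupt clear the pending state.
static volatile bool cmp_write_pending = false;

void lp_ticker_set_interrupt(timestamp_t timestamp)
{
    // In the common case no write is pending and this loop is skipped.
    while (cmp_write_pending) { }
    cmp_write_pending = true;
    __HAL_LPTIM_COMPARE_SET(&LptimHandle, timestamp & 0xFFFF);
}

// ...and in the LPTIM IRQ handler, when LPTIM_FLAG_CMPOK is seen:
//     cmp_write_pending = false;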
Hi
Can you try https://github.com/ARMmbed/mbed-os/commit/6c6dd7b2e2b4899a57844b37cd429c09ffecce60 ?
Regards,
@jeromecoutant It seems to be working 👍 Can't believe it. Thank you!
@c1728p9 Can you please confirm it using the same test? Thank you!
I measured the current consumption and it seems that everything is OK.
I tried to get sleep stats using mbed_stats_cpu_get(&stats); and the deep_sleep_time is 0, even though looking at the current consumption it seems that deep sleep is working, but that is not important for this issue.
Even with 6c6dd7b2e2b4899a57844b37cd429c09ffecce60 I still get uart overruns, though they are much less frequent. I can confirm that the function lp_ticker_set_interrupt is no longer blocking.
I'm still seeing interrupt latency as high as 80us causing these overruns. @mfatiga would it be possible to run at a lower baud rate until this latency is addressed? If you could run the uart at 57600 you shouldn't experience any overruns. At 115200 you may see infrequent overruns.
The worst case latency I'm seeing in the various configurations:
*When the low power ticker wrapper is turned off scheduling does not work, so this time could be wrong.
With IO tracing I'm seeing ~36us of the worst case ~80us being directly due to the low power ticker wrapper.
@c1728p9 Thank you! The QUECTEL BG96 modem I'm using comes from the factory with the default baud rate set to 115200, and the mbed cellular stack doesn't currently support configuring the device baud rate, but I could hack something up :-)
If messing with the cellular stack, I'd go for enabling the flow control first. Without it, I'd be wary of occasional loss even at 57600.
Unless you're using PPP, which is robust to occasional loss, you really need the modem comms to be 100% solid, and if you've no UART FIFO that definitely requires hardware flow control.
The lpticker fix may have eliminated the constantly-occurring latency problem, but there will likely still be occasional spikes.
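As a sketch of what enabling it involves on both ends (assuming your mbed version exposes UARTSerial::set_flow_control behind DEVICE_SERIAL_FC; AT+IFC=2,2 is from the BG96 AT manual - verify against your firmware):

#include "mbed.h"

// Pin names are the MDM* definitions from the custom target above.
mbed::UARTSerial modem(MDMTXD, MDMRXD, 115200);

void enable_hw_flow_control(void)
{
    // MCU side: drive RTS, honour CTS.
    modem.set_flow_control(mbed::SerialBase::RTSCTS, MDMRTS, MDMCTS);
    // Modem side: enable RTS/CTS on the BG96.
    modem.write("AT+IFC=2,2\r\n", 12);
}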
Disregard that, was using develop build which meant that debug print calls were executed.
Deep sleep is working properly, so are the stats.
So your "release" build is shutting off the platform.stdio-buffered-serial such that the console doesn't hold the deep sleep lock?
@kjbracey-arm Ok, I'll try to do that.
I am using debug and debug_if so my printf's are disabled when building using the release profile https://os.mbed.com/docs/v5.9/reference/debug.html.
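For anyone following along, the pattern looks like this (debug() and debug_if() come from mbed_debug.h and compile to no-ops when NDEBUG is defined, which the release profile does):

#include "mbed.h"

int main()
{
    int rssi = -70;    // hypothetical value, for illustration only
    debug("modem power-on\n");                             // develop/debug builds only
    debug_if(rssi < -90, "weak signal: %d dBm\n", rssi);   // conditional variant
}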
But the console is still effectively enabled for input and output by the platform.stdio-buffered-serial, which means there is an Rx IRQ handler attached to that serial port, which means deep sleep is locked. As far as I'm aware. So you shouldn't be deep sleeping at all. Trying to figure this out...
I have read in the documentation that STDIO Rx IRQ is enabled on first printf, and that is the same behavior I am seeing when measuring power consumption. That's why I'm using debug() instead of printf() so I can see my logs when working with development or debug builds and they are disabled when using release builds.
Would love a pointer to that documentation. That's a genuine surprise to me. And I've engineered a lot of the relevant code!
It presumably means the C library doesn't actually open stdin or stdout until they're used. Is that true for all toolchains?
I can't find it in the documentation, but I'm sure I've read it somewhere. I found this https://os.mbed.com/docs/v5.8/tutorials/debugging-using-printf-statements.html
printf() is not free:
It uses an additional 5-10K of flash memory. However, this is the cost of the first use of printf() in a program; further uses cost almost no additional memory.
Maybe I assumed that the deep sleep lock is acquired only after the first printf because of this, but I can see from the power consumption that this is what is happening. That's why I'm using debug().
@c1728p9 Will 6c6dd7b2e2b4899a57844b37cd429c09ffecce60 get merged or is this only a temporary solution?
@kjbracey-arm Should a new issue be opened related to extending the cellular stack with configurable baud rate and flow control on the cellular targets which support it?
Hi
PR for https://github.com/ARMmbed/mbed-os/commit/6c6dd7b2e2b4899a57844b37cd429c09ffecce60 is ongoing
Yes, I'd suggest a separate issue on baud rate + flow enabling for modems. That's a separate team.