The header features/lwipstack/lwipopts.h enables configuration of the LWIP mailbox sizes, which are defined there as follows:
#define TCPIP_MBOX_SIZE MBED_CONF_LWIP_TCPIP_MBOX_SIZE
#define DEFAULT_TCP_RECVMBOX_SIZE MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE
#define DEFAULT_UDP_RECVMBOX_SIZE 8
#define DEFAULT_RAW_RECVMBOX_SIZE 8
#define DEFAULT_ACCEPTMBOX_SIZE 8
Moreover, the TCP/IP thread mailbox size (MBED_CONF_LWIP_TCPIP_MBOX_SIZE) and the TCP receive mailbox size (MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE) may be defined via features/lwipstack/mbed_lib.json, so they are part of the mbed-os user configuration interface. (Why only these and not all?)
The issue is that any configuration here is completely ignored (see the handling of the queue_sz argument in the sys_mbox_new() routine in features/lwipstack/lwip-sys/arch/lwip_sys_arch.c). The mailbox size is always set to MB_SIZE (configurable via MBED_CONF_LWIP_MBOX_SIZE), regardless of the mailbox type.
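To make this concrete, here is a minimal sketch of the pattern being described. It is not the literal lwip_sys_arch.c code; the sys_mbox_t layout, the CMSIS-RTOS2 message queue and the fallback default of 8 are assumptions used only for illustration:
#include "cmsis_os2.h"   /* CMSIS-RTOS2 message queue API (assumed RTOS backend) */
#include "lwip/err.h"    /* err_t, ERR_OK, ERR_MEM */
#ifndef MBED_CONF_LWIP_MBOX_SIZE
#define MBED_CONF_LWIP_MBOX_SIZE 8          /* default mentioned later in the thread */
#endif
#define MB_SIZE MBED_CONF_LWIP_MBOX_SIZE
typedef struct {
    osMessageQueueId_t id;                  /* illustrative port-specific mailbox type */
} sys_mbox_t;
err_t sys_mbox_new(sys_mbox_t *mbox, int queue_sz)
{
    (void)queue_sz;                         /* the per-mailbox size request is ignored... */
    mbox->id = osMessageQueueNew(MB_SIZE, sizeof(void *), NULL);   /* ...every mailbox gets MB_SIZE slots */
    return (mbox->id != NULL) ? ERR_OK : ERR_MEM;
}
So whatever TCPIP_MBOX_SIZE or DEFAULT_TCP_RECVMBOX_SIZE are set to, the value never reaches the queue that is actually created.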
Target(s) affected by this defect: All
Toolchain(s) (name and version) displaying this defect: n/a
What version of Mbed OS are you using: mbed-os-5.14.2
What version(s) of tools are you using: n/a
How is this defect reproduced: n/a
Internal Jira reference: https://jira.arm.com/browse/MBOTRIAGE-2453
Configurability was added recently by a third party: https://github.com/ARMmbed/mbed-os/pull/11517
I would have thought it did at least do something to work around his issue...
Looking at it, the allocation is indeed fixed at MBED_CONF_LWIP_MBOX_SIZE (MB_SIZE) for every mailbox, so #11517 should have just added that config option.
Do you have a need for different per-mbox size?
Configurability was added recently by a third party: #11517
@kjbracey-arm the issue described by #11517 should be solved by my fix, which we recently talked about. The symptoms resemble problems I had. @balajicyp could you check the #11976 patch? It should solve your problems with corrupted TCP packets. If you had problems with TCP transmission slowing down, try increasing the mailbox & netbuf mem-pool sizes (MBED_CONF_LWIP_MBOX_SIZE, MBED_CONF_LWIP_NUM_NETBUF) from 8 to 16, for example.
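For illustration, the suggested tuning boils down to the generated configuration macros ending up as follows (a sketch only; the MEMP_NUM_NETBUF mapping is per the lwipopts.h wiring discussed later in the thread):
#define MBED_CONF_LWIP_MBOX_SIZE   16   /* mailbox depth used by the porting layer (MB_SIZE) */
#define MBED_CONF_LWIP_NUM_NETBUF  16   /* netbuf mem-pool entries (MEMP_NUM_NETBUF) */
In a project these would normally be raised through the lwip config options in mbed_app.json rather than defined by hand.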
Do you have a need for different per-mbox size?
At least the common mailbox size configuration via MBED_CONF_LWIP_MBOX_SIZE should be sufficient. I'd recommend hiding the other params as not providing any value for now. The problem with more fine-grained mailbox configuration (TCP/IP thread, UDP/TCP receive mailboxes, PCBs, etc.) is that it requires a proper configuration of the underlying mem-pools, which usually is not an easy task. I'd recommend hiding the complexity exposed by a large number of complex params behind a few properly chosen ones.
Hi @pstolarz @kjbracey-arm
I have verified with the #11976 patch (i.e. using the changes in https://github.com/ARMmbed/mbed-os/issues/12046 ) and it indeed fixes my problem of receiving out-of-order TCP packets while playing mp3 (using the chunked data format).
I did not face problems with TCP transmission, and with the patch alone, leaving MBOX_SIZE at 8, I did not see any problems.
I agree that the MBED_CONF_LWIP_MBOX_SIZE configuration should have been sufficient for #11517, along with making MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT configurable.
Other configuration changes such as MBED_CONF_LWIP_TCPIP_MBOX_SIZE and MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE are not needed.
Thanks
Balaji.
Glad to hear it solved your issue.
The MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT param configures the MEMP_TCPIP_MSG_INPKT pool, used internally by the call-synchronizing thread (tcpip_thread) as a pool for processing incoming TCP messages synchronously. Did you experience memory errors on this pool? I doubt it should be exposed as a config param.
Received TCP frames are provided by the net driver to the IP stack via the pbuf mem-pools (MEMP_PBUF, MEMP_PBUF_POOL) and UDP packets via the netbuf mem-pool (MEMP_NETBUF), and I believe appropriate configs for these pools should be exposed.
@pstolarz
The MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT param configures the MEMP_TCPIP_MSG_INPKT pool, used internally by the call-synchronizing thread (tcpip_thread) as a pool for processing incoming TCP messages synchronously. Did you experience memory errors on this pool? I doubt it should be exposed as a config param.
[Balaji] The MEMP_NUM_TCPIP_MSG_INPKT packet pool was running out of memory during peak bursts of received data; I saw this when I enabled some debug prints. MEMP_NUM_TCPIP_MSG_INPKT is assigned from MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT. Since the configured default value of 8 is too low, it could be made configurable so that it can be increased to 16, depending on other changes such as the TCP window size etc.
Received TCP frames are provided by the net driver to the IP stack via the pbuf mem-pools (MEMP_PBUF, MEMP_PBUF_POOL) and UDP packets via the netbuf mem-pool (MEMP_NETBUF), and I believe appropriate configs for these pools should be exposed.
[Balaji] I am ok with these parameters being configurable.
[Balaji] The MEMP_NUM_TCPIP_MSG_INPKT packet pool was running out of memory during peak bursts of received data; I saw this when I enabled some debug prints.
It was caused by the corruption of tcpip_thread's mailbox. As I understand it, no out-of-mem errors occurred after applying the #11976 fix.
Hello, thanks a lot for this valuable analysis, @pstolarz and @balajicyp ! I think it diverged into three slightly different topics...
1) The originally reported issue regarded MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE and MBED_CONF_LWIP_TCPIP_MBOX_SIZE. The corresponding LWIP internal definitions are indeed not being used (it looks like they are used in netconn_alloc(), but in fact the argument is ignored in sys_mbox_new, as @pstolarz noticed).
I believe, however, that these two values might have been made configurable because of this macro. If the user decreases MB_SIZE, they might also need to decrease TCPIP_MBOX_SIZE and DEFAULT_TCP_RECVMBOX_SIZE (and any other argument passed to sys_mbox_new, for that matter; however, I checked that at the moment only these two are in this situation).
What we can do is remove the check OR do something like this in lwipopts.h:
#define TCPIP_MBOX_SIZE MBED_CONF_LWIP_MBOX_SIZE
#define DEFAULT_TCP_RECVMBOX_SIZE MBED_CONF_LWIP_MBOX_SIZE
and then remove the two corresponding configs from mbed_lib.json. Does this make sense?
2) MEMP_NETBUF and MEMP_PBUF are in fact created with this macro, used here, with the MEMP_NUM_NETBUF/MEMP_NUM_PBUF defines. Those in turn are set in lwipopts.h based on MBED_CONF_LWIP_NUM_NETBUF/PBUF. It seems, then, that these two are already configurable and no work is needed here?
3) MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT <- I still don't have clarity here, but if @balajicyp confirms that his issue was resolved without the need to modify this setting, I would tend to agree with @pstolarz that unnecessary complexity is better hidden from users.
@kjbracey-arm , your feedback on this would be much appreciated. If I understood the scope of the task, I can prepare a PR with the agreed changes.
[Balaji] The MEMP_NUM_TCPIP_MSG_INPKT packet pool was running out of memory during peak bursts of received data; I saw this when I enabled some debug prints.
It was caused by the corruption of tcpip_thread's mailbox. As I understand it, no out-of-mem errors occurred after applying the #11976 fix.
Yes I have not seen any issues after applying the #11976 fix.
Hello, thanks a lot for this valuable analysis, @pstolarz and @balajicyp ! I think it diverged into three slightly different topics...
The originally reported issue regarded MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE and MBED_CONF_LWIP_TCPIP_MBOX_SIZE. The corresponding LWIP internal definitions are indeed not being used (it looks like they are used in netconn_alloc(), but in fact the argument is ignored in sys_mbox_new, as @pstolarz noticed).
I believe, however, that these two values might have been made configurable because of this macro. If the user decreases MB_SIZE, they might also need to decrease TCPIP_MBOX_SIZE and DEFAULT_TCP_RECVMBOX_SIZE (and any other argument passed to sys_mbox_new, for that matter; however, I checked that at the moment only these two are in this situation).
What we can do is remove the check OR do something like this in lwipopts.h:
#define TCPIP_MBOX_SIZE MBED_CONF_LWIP_MBOX_SIZE
#define DEFAULT_TCP_RECVMBOX_SIZE MBED_CONF_LWIP_MBOX_SIZE
and then remove the two corresponding configs from mbed_lib.json. Does this make sense?
MEMP_NETBUF and MEMP_PBUF are in fact created with this macro, used here, with the MEMP_NUM_NETBUF/MEMP_NUM_PBUF defines. Those in turn are set in lwipopts.h based on MBED_CONF_LWIP_NUM_NETBUF/PBUF. It seems, then, that these two are already configurable and no work is needed here?
MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT <- I still don't have clarity here, but if @balajicyp confirms that his issue was resolved without the need to modify this setting, I would tend to agree with @pstolarz that unnecessary complexity is better hidden from users.
[Balaji] I think we can remove this configuration (MBED_CONF_LWIP_MEMP_NUM_TCPIP_MSG_INPKT); I have only seen issues of running out of memory without the #11976 fix when streaming audio.
@kjbracey-arm , your feedback on this would be much appreciated. If I understood the scope of the task, I can prepare a PR with the agreed changes.
#define TCPIP_MBOX_SIZE MBED_CONF_LWIP_MBOX_SIZE
There's no need to mess with the MBOX config options if the porting layer doesn't use them. All they do is say what size parameter is passed to sys_mbox_new in each case, but if the sys_mbox_new implementation ignores the parameter, as we do and as is permitted, then they don't need to be defined at all. (And then lwIP will pass 0 by default, and that's fine).
Double-check, but the only place those macros are used, other than to produce the second parameter to sys_mbox_new, is in our own #if/#error test in sys_arch.h.
Our implementation can continue to be configurable only via the MB_SIZE=MBED_CONF_LWIP_MBOX_SIZE option, distinct from the lwIP core config, as it simplifies the code.
As to the other settings, I've no problem with exposing working tuning settings for people who need them. There could be all sorts of different performance/memory profiles. So certainly the buffer pool sizes should be configurable. The TCP_MSG_INPKT was a workaround, but it's a bit pointless to remove it.
Any removal is a breaking change, so we try to avoid it. But I'm fairly relaxed about removing things that never worked, like those more-specific MBOX config options.
So, specific proposal - remove all lwIP MBOX config. lwIP core does not need to be told what to pass to our sys_arch. Double-check that they're not used, except for that.
Leave only our internal MB_SIZE=MBED_CONF_LWIP_MBOX_SIZE option that "hard-codes" it (from lwIP core's point of view).
No need to do anything with the other settings for buffers and TCP input packets.
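In code terms, the proposal amounts to something like the following sketch (an assumed shape, not a patch): lwipopts.h stops defining the per-mailbox *_MBOX_SIZE macros, so lwIP passes its default of 0 to sys_mbox_new (which the port ignores anyway), and the porting layer keeps its single knob:
/* porting layer (lwip_sys_arch): the one remaining mailbox-depth setting */
#define MB_SIZE MBED_CONF_LWIP_MBOX_SIZE
/* lwipopts.h: TCPIP_MBOX_SIZE, DEFAULT_TCP_RECVMBOX_SIZE, DEFAULT_UDP_RECVMBOX_SIZE, */
/* DEFAULT_RAW_RECVMBOX_SIZE and DEFAULT_ACCEPTMBOX_SIZE are simply no longer defined */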
Just to emphasise that all this is "lwIP-approved" behaviour for the port - comments say:
sys_mbox_new: Creates an empty mailbox for maximum "size" elements. Elements stored in mailboxes are pointers. You have to define macros "_MBOX_SIZE" in your lwipopts.h, or ignore this parameter in your implementation and use a default size.
TCPIP_MBOX_SIZE: The mailbox size for the tcpip thread messages. The queue size value itself is platform-dependent, but is passed to sys_mbox_new() when tcpip_init is called.
So we go with the second option - ignore the parameter, and not define the _MBOX_SIZE macros.
As to the other settings, I've no problem with exposing working tuning settings for people who need them. There could be all sorts of different performance/memory profiles. So certainly the buffer pool sizes should be configurable. The TCP_MSG_INPKT was a workaround, but it's a bit pointless to remove it.
It's not as simple as it may look. If a mempool is associated with some mailbox, their sizes are closely related. If the mailbox size is smaller than the mempool size, the mempool will never reach its maximum capacity, since the mailbox will block on posting (and therefore on allocating from the pool). On the other hand, if the mempool is smaller than the mailbox, no-mem errors will occur on posts once the mempool is saturated. In both cases, any configuration other than mempool size == mailbox size doesn't provide any benefit.
If a mailbox is associated with more than one mempool (e.g. tcpip_thread's mailbox), it makes sense to have a larger mailbox than the mempools associated with it.
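A quick illustration of that coupling, with purely hypothetical numbers:
/* 1:1 pool/mailbox pairing: keep the sizes matched */
#define EXAMPLE_RECVMBOX_SLOTS   16
#define EXAMPLE_POOL_ENTRIES     EXAMPLE_RECVMBOX_SLOTS
/* pool 16 / mailbox 8: posting blocks after 8 queued entries, so 8 pool entries are never used */
/* pool 8 / mailbox 16: posts hit no-mem once the 8 pool entries are in flight, so 8 slots are never used */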
As an Mbed OS user I would like to have a simple config interface and not bother with all the internal config dependencies. For example, if I set a single MB_SIZE, the internal logic sets the remaining mempool sizes accordingly to tune the OS for the maximum possible performance.
For example, if I set a single MB_SIZE, the internal logic sets the remaining mempool sizes accordingly to tune the OS for the maximum possible performance.
If we had per-Mbox-use tuning, then sure, it would be logical to ensure pool size = mbox size for each individual case where they're coupled. If not, we'd either be wasting Mbox memory or pool memory.
But at the minute we are using the simplification of one MB_SIZE for every mbox case, which means it may be higher than necessary for any individual mbox. It's not logical to force every pool size up to the same number. "Wasting" some mbox entries is less of a waste than allocating memory pools bigger than we need.
Linking them would mean it may be impossible to get the MB_SIZE up to whatever number someone needed for tcpip_thread because you couldn't allocate enough memory pools to give every subsystem that many bufs.
It would be worth having a cross-check that no mbox-linked pool number is >MB_SIZE, if that extra pool memory would be ineffective.
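One possible shape for such a cross-check, sketched here with MEMP_NUM_NETBUF as the example (which pools count as mbox-linked, and where the test would live, are assumptions):
#if MBED_CONF_LWIP_NUM_NETBUF > MBED_CONF_LWIP_MBOX_SIZE
#error "lwip.num-netbuf exceeds lwip.mbox-size; the surplus pool entries could never be queued"
#endif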
Linking them would mean it may be impossible to get the MB_SIZE up to whatever number someone needed for tcpip_thread because you couldn't allocate enough memory pools to give every subsystem that many bufs.
But why not change the sys arch code to use queue_sz as the actual mbox size (with all the code that requires)? Then it would be possible to configure the mboxes for the most common use cases (UDP/TCP receive). The remaining mboxes (e.g. PCBs, tcpip thread) could keep the current default value (MB_SIZE == 8). In this scenario the UDP/TCP mempools could be set to match the configured mailboxes.
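For comparison, a sketch of what honouring queue_sz could look like, reusing the illustrative sys_mbox_t and MB_SIZE from the earlier sketch (again an assumption, not the current implementation); note that a per-mailbox depth implies a dynamically sized queue allocation, which is exactly the cost raised below:
err_t sys_mbox_new(sys_mbox_t *mbox, int queue_sz)
{
    if (queue_sz <= 0) {
        queue_sz = MB_SIZE;                              /* callers passing 0 keep today's default */
    }
    mbox->id = osMessageQueueNew((uint32_t)queue_sz, sizeof(void *), NULL);   /* honour the request */
    return (mbox->id != NULL) ? ERR_OK : ERR_MEM;
}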
Right now we are halfway between relying on LWIP's defaults and having our own configuration in place, so we'd better move either towards removing the config options from mbed_lib.json or towards consistently allowing all of them.
We lose nothing by allowing more configurability for those who would like to make use of it, as long as we get the defaults right (and keep an eye on them). I understand we would have to carefully change the code and test it, as it would get a bit more complex and fragile, but I think it is worth the effort if there are users who need this.
@kjbracey-arm , can you share your opinion?
But why not change the sys arch code to use queue_sz as the actual mbox size (with all the code that requires)?
Because we don't like increasing code size, and we don't like doing dynamic allocations. (Although I don't regard one-off init heap allocations as terribly dynamic, it still makes it a bit hard to judge memory use compared to static allocations).
This is why I specifically asked "Do you have a need for different per-mbox size?" above - we'd need a clear justification/demand beyond "it would be nice".
Right now we are halfway between relying on LWIP's defaults and having our own configuration in place, so we'd better move either towards removing the config options from mbed_lib.json or towards consistently allowing all of them.
Not quite following. We can configure mbox size centrally (for every mbox), but that's not a "lwip" configuration option. The single-size limitation means those particular lwip "config" settings do nothing, but I'm not convinced we need to make the porting layer more complex just to allow that configuration yet. lwIP itself does not seem to expect that such configuration is necessarily required - they explicitly allow for the per-mbox parameter to be ignored.
@kjbracey-arm , what I mean is that this issue report has a point and there is some work emerging from it. We shouldn't leave things as they are right now. If we don't want to introduce extra complexity, I agree we should at least remove MBED_CONF_LWIP_TCPIP_MBOX_SIZE and MBED_CONF_LWIP_DEFAULT_TCP_RECVMBOX_SIZE, as you suggested earlier.
@pstolarz , can you please provide us with the details of why you need the extra configuration options? Are you OK with the simplification of the configs of our porting layer and relying on LWIP in this matter, as proposed earlier in the course of the discussion?
Yes, I currently stand by my previous suggestion https://github.com/ARMmbed/mbed-os/issues/12046#issuecomment-565950785
I didn't quite understand the "halfway" thing, but I get it now. My view is that we're entirely written as ignoring the lwip options, so 100% on that side, but mistakenly exposing them. I was thinking of that as more of a contradiction, or a false representation, rather than being "halfway".
@pstolarz , can you please provide us with the details of why you need the extra configuration options? Are you OK with the simplification of the configs of our porting layer and relying on LWIP in this matter, as proposed earlier in the course of the discussion?
I didn't say I'm for extra configs. What I said is that I'm for simplifying the config to only those params which are actually valuable to an Mbed OS user and hiding the rest (which are in some way dependent on them). The OS itself should take care of tuning the dependent params in the most effective way (e.g. the mempool-mailbox pair).
I don't see an effective lwip-sys implementation with a central mbox size. Sooner or later it will end up with some sort of network problem. E.g. try changing MB_SIZE to some large value on K64F and see what happens to the heap while flooding the device with a large number of UDP frames. With a single mbox size it would be hard to tune a device working under diverse conditions (e.g. high-load TCP/UDP traffic).
I also propose looking at how it's implemented in other lwIP ports (e.g. FreeRTOS). But... do whatever you want; I'm not the architect of this system responsible for its performance/usefulness.
Thanks
@pstolarz , your remark was very valuable to us and we really appreciate it. Let's stick to Kevin's suggestion for now and in case we ever actually run into the trouble you described, we'll instantly know the possible reason and be able to address it quickly. Until then, let's just keep things simple.