Mbed-os: LPC546XX: hard faults in various tests

Created on 2 Oct 2018 · 7 comments · Source: ARMmbed/mbed-os

Description

LPC546XX-GCC_ARM.tests-netsocket-tcp.TCPSOCKET_ECHOTEST_NONBLOCK (from LPC546XX-GCC_ARM) test:

[1538422528.68][CONN][RXD] >>> Running case #2: 'TCPSOCKET_ECHOTEST_NONBLOCK'...
[1538422528.76][CONN][INF] found KV pair in stream: {{__testcase_start;TCPSOCKET_ECHOTEST_NONBLOCK}}, queued...
[1538422528.84][CONN][RXD] [Sender#00] bytes sent: 1
[1538422529.82][CONN][RXD] [Recevr#00] bytes received: 1
[1538422529.82][CONN][RXD] [Sender#01] bytes sent: 2
[1538422530.82][CONN][RXD] [Recevr#01] bytes received: 2
[1538422530.92][CONN][RXD] [Sender#02] bytes sent: 3
[1538422531.92][CONN][RXD] [Recevr#02] bytes received: 3
[1538422531.92][CONN][RXD] [Sender#03] bytes sent: 4
[1538422532.93][CONN][RXD] [Recevr#03] bytes received: 4
[1538422532.93][CONN][RXD] [Sender#04] bytes sent: 5
[1538422533.93][CONN][RXD] [Recevr#04] bytes received: 5
[1538422533.93][CONN][RXD] [Sender#05] bytes sent: 6
[1538422534.94][CONN][RXD] [Recevr#05] bytes received: 6
[1538422534.94][CONN][RXD] [Sender#06] bytes sent: 7
[1538422535.94][CONN][RXD] [Recevr#06] bytes received: 7
[1538422536.04][CONN][RXD] [Sender#07] bytes sent: 8
[1538422537.04][CONN][RXD] [Recevr#07] bytes received: 8
[1538422537.04][CONN][RXD] [Sender#08] bytes sent: 9
[1538422538.04][CONN][RXD] [Recevr#08] bytes received: 9
[1538422538.04][CONN][RXD] [Sender#09] bytes sent: 10
[1538422538.55][CONN][RXD] 
[1538422538.55][CONN][RXD] ++ MbedOS Fault Handler ++
[1538422538.55][CONN][RXD] 
[1538422538.55][CONN][RXD] FaultType: HardFault
[1538422538.55][CONN][RXD] 
[1538422538.55][CONN][RXD] Context:
[1538422538.64][CONN][RXD] R0   : 00000000
[1538422538.64][CONN][RXD] R1   : 20002DF4
[1538422538.64][CONN][RXD] R2   : 20003220
[1538422538.65][CONN][RXD] R3   : 000000E8
[1538422538.65][CONN][RXD] R4   : 2000D850
[1538422538.65][CONN][RXD] R5   : 20005568
[1538422538.74][CONN][RXD] R6   : 20010844
[1538422538.74][CONN][RXD] R7   : 00000008
[1538422538.74][CONN][RXD] R8   : 00000000
[1538422538.75][CONN][RXD] R9   : 00000000
[1538422538.75][CONN][RXD] R10  : 00000000
[1538422538.75][CONN][RXD] R11  : 00000000
[1538422538.75][CONN][RXD] R12  : 00014185
[1538422538.85][CONN][RXD] SP   : 200107B4
[1538422538.85][CONN][RXD] LR   : 000143B3
[1538422538.85][CONN][RXD] PC   : 00000000
[1538422538.85][CONN][RXD] xPSR : 40000200
[1538422538.85][CONN][RXD] PSP  : 20010790
[1538422538.95][CONN][RXD] MSP  : 20027FC0
[1538422538.95][CONN][RXD] CPUID: 410FC241
[1538422538.95][CONN][RXD] HFSR : 40000000
[1538422538.95][CONN][RXD] MMFSR: 00000000
[1538422538.96][CONN][RXD] BFSR : 00000000
[1538422538.96][CONN][RXD] UFSR : 00000002
[1538422538.96][CONN][RXD] DFSR : 00000008
[1538422539.05][CONN][RXD] AFSR : 00000000
[1538422539.05][CONN][RXD] Mode : Thread
[1538422539.05][CONN][RXD] Priv : Privileged
[1538422539.05][CONN][RXD] Stack: PSP
[1538422539.05][CONN][RXD] 
[1538422539.05][CONN][RXD] -- MbedOS Fault Handler --
[1538422539.05][CONN][RXD] 
[1538422539.05][CONN][RXD] 
[1538422539.05][CONN][RXD] 
[1538422539.15][CONN][RXD] ++ MbedOS Error Info ++
[1538422539.15][CONN][RXD] Error Status: 0x80FF013D Code: 317 Module: 255
[1538422539.15][CONN][RXD] Error Message: Fault exception
[1538422539.25][CONN][RXD] Location: 0x12D47
[1538422539.25][CONN][RXD] Error Value: 0x0
[1538422539.35][CONN][RXD] Current Thread: Id: 0x20002DF4 Entry: 0x84DD StackSize: 0x4B0 StackMem: 0x20010328 SP: 0x20027F58 
[1538422539.45][CONN][RXD] For more info, visit: https://armmbed.github.io/mbedos-error/?error=0x80FF013D
[1538422539.45][CONN][RXD] -- MbedOS Error Info --
[1538423155.58][CONN][RXD] mbedmbedmbedmbedmbedmbedmbedmbed
[1538423155.58][CONN][INF] found KV pair in stream: {{__sync;2629e17c-55f7-42a3-8601-cb0988f188ab}}, queued...
[1538423155.58][HTST][ERR] orphan event in main phase: {{__sync;2629e17c-55f7-42a3-8601-cb0988f188ab}}, timestamp=1538423155.577806
[1538423155.58][CONN][INF] found KV pair in stream: {{__version;1.3.0}}, queued...
[1538423155.58][HTST][ERR] orphan event in main phase: {{__version;1.3.0}}, timestamp=1538423155.577812
[1538423155.68][CONN][INF] found KV pair in stream: {{__timeout;120}}, queued...
[1538423155.68][HTST][ERR] orphan event in main phase: {{__timeout;120}}, timestamp=1538423155.676442
[1538423155.68][CONN][INF] found KV pair in stream: {{__host_test_name;default_auto}}, queued...
[1538423155.68][CONN][INF] found KV pair in stream: {{__testcase_count;3}}, queued...
[1538423155.68][HTST][ERR] orphan event in main phase: {{__host_test_name;default_auto}}, timestamp=1538423155.676450
[1538423155.78][CONN][RXD] >>> Running 3 test cases...
[1538423155.78][CONN][INF] found KV pair in stream: {{__testcase_name;NVStore: Basic functionality}}, queued...
[1538423155.78][CONN][INF] found KV pair in stream: {{__testcase_name;NVStore: Race test}}, queued...
[1538423155.93][CONN][RXD] 
[1538423155.93][CONN][INF] found KV pair in stream: {{__testcase_name;NVStore: Multiple thread test}}, queued...
[1538423156.01][CONN][RXD] >>> Running case #1: 'NVStore: Basic functionality'...

LPC546XX-ARM.tests-netsocket-tcp.TCPSOCKET_ECHOTEST (from LPC546XX-ARM) test:

[1538388720.36][CONN][RXD] MBED: TCPClient IP address is '10.118.12.34'
[1538388720.75][CONN][RXD] >>> Running 5 test cases...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_count;5}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_ECHOTEST}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_ECHOTEST_NONBLOCK}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_OPEN_CLOSE_REPEAT}}, queued...
[1538388720.75][CONN][RXD] 
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_OPEN_LIMIT}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_THREAD_PER_SOCKET_SAFETY}}, queued...
[1538388720.83][CONN][RXD] >>> Running case #1: 'TCPSOCKET_ECHOTEST'...
[1538388720.83][CONN][INF] found KV pair in stream: {{__testcase_start;TCPSOCKET_ECHOTEST}}, queued...
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] ++ MbedOS Fault Handler ++
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] FaultType: HardFault
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] Context:
[1538388721.47][CONN][RXD] R0   : BF234AE6
[1538388721.56][CONN][RXD] R1   : 2000FAA0
[1538388721.56][CONN][RXD] R2   : 2000FAA0
[1538388721.56][CONN][RXD] R3   : A3207D8C
[1538388721.56][CONN][RXD] R4   : 0000001C
[1538388721.56][CONN][RXD] R5   : 2FA0BE83
[1538388721.56][CONN][RXD] R6   : 000004B0
[1538388721.66][CONN][RXD] R7   : 200015D4
[1538388721.66][CONN][RXD] R8   : 00000000
[1538388721.66][CONN][RXD] R9   : 00000094
[1538388721.66][CONN][RXD] R10  : 0001C678
[1538388721.66][CONN][RXD] R11  : 0000BF79
[1538388721.66][CONN][RXD] R12  : 2000FA40
[1538388721.76][CONN][RXD] SP   : 2000EC80
[1538388721.76][CONN][RXD] LR   : 00007977
[1538388721.76][CONN][RXD] PC   : 00000D4C
[1538388721.76][CONN][RXD] xPSR : 61002800
[1538388721.76][CONN][RXD] PSP  : 2000EC18
[1538388721.76][CONN][RXD] MSP  : 20027FD8
[1538388721.88][CONN][RXD] CPUID: 410FC241
[1538388721.88][CONN][RXD] HFSR : 40000000
[1538388721.88][CONN][RXD] MMFSR: 00000000
[1538388721.88][CONN][RXD] BFSR : 00000000
[1538388721.88][CONN][RXD] UFSR : 00000001
[1538388721.88][CONN][RXD] DFSR : 00000008
[1538388721.98][CONN][RXD] AFSR : 00000000
[1538388721.98][CONN][RXD] Mode : Thread
[1538388721.98][CONN][RXD] Priv : Privileged
[1538388721.98][CONN][RXD] Stack: PSP
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] -- MbedOS Fault Handler --
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] 
[1538388722.35][CONN][RXD] ++ MbedOS Error Info ++
[1538388722.35][CONN][RXD] Error Status: 0x80FF013D Code: 317 Module: 255
[1538388722.35][CONN][RXD] Error Message: Fault exception
[1538388722.35][CONN][RXD] Location: 0x1281D
[1538388722.35][CONN][RXD] Error Value: 0xD4C
[1538388722.35][CONN][RXD] Current Thread: Id: 0x2000ED30 Entry: 0x12A71 StackSize: 0x1000 StackMem: 0x2000DD30 SP: 0x20027F90 
[1538388722.37][CONN][RXD] For more info, visit: https://armmbed.github.io/mbedos-error/?error=0x80FF013D
[1538388722.37][CONN][RXD] -- MbedOS Error Info --
[1538389345.46][CONN][RXD] mbedmbedmbedmbedmbedmbedmbedmbed
[1538389345.46][CONN][INF] found KV pair in stream: {{__sync;c37ad506-dcf7-4a6c-ba42-4012ea81e8ff}}, queued...
[1538389345.46][CONN][INF] found KV pair in stream: {{__version;1.3.0}}, queued...
[1538389345.46][HTST][ERR] orphan event in main phase: {{__sync;c37ad506-dcf7-4a6c-ba42-4012ea81e8ff}}, timestamp=1538389345.460268
[1538389345.46][HTST][ERR] orphan event in main phase: {{__version;1.3.0}}, timestamp=1538389345.460277
[1538389345.51][CONN][RXD] >>> Running 3 test cases...

These two failures occurred in the last builds (we counted the non-blocking echo test failing 5 times and the blocking one 4 times).
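The UFSR values in the dumps can be interpreted using the standard ARMv7-M UsageFault bit layout (field names per the ARMv7-M Architecture Reference Manual; this decoder is just a reading aid, not part of mbed-os):

```cpp
#include <cstdint>
#include <string>

// Decode the ARMv7-M UsageFault Status Register (UFSR) bits
// (bit layout per the ARMv7-M Architecture Reference Manual).
std::string decode_ufsr(uint32_t ufsr) {
    const struct { uint32_t bit; const char *name; } fields[] = {
        {1u << 0, "UNDEFINSTR"},  // undefined instruction
        {1u << 1, "INVSTATE"},    // invalid EPSR state, e.g. T bit lost on a branch
        {1u << 2, "INVPC"},       // invalid PC load on exception return
        {1u << 3, "NOCP"},        // coprocessor access denied
        {1u << 8, "UNALIGNED"},   // unaligned access with trapping enabled
        {1u << 9, "DIVBYZERO"},   // SDIV/UDIV by zero with trapping enabled
    };
    std::string out;
    for (const auto &f : fields) {
        if (ufsr & f.bit) {
            if (!out.empty()) out += "|";
            out += f.name;
        }
    }
    return out.empty() ? "none" : out;
}
```

Applied to the three dumps: 0x0002 decodes to INVSTATE (consistent with a branch to PC = 0 losing the Thumb bit), 0x0001 to UNDEFINSTR, and 0x0100 to UNALIGNED.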

Aside from the networking tests, the ticker test also hard faults.

LPC546XX-GCC_ARM.tests-mbed_drivers-ticker.Test timers: 1x ticker (from LPC546XX-GCC_ARM):

[1538418619.47][CONN][RXD] >>> Running case #10: 'Test timers: 1x ticker'...
[1538418619.57][CONN][INF] found KV pair in stream: {{__testcase_start;Test timers: 1x ticker}}, queued...
[1538418619.57][CONN][INF] found KV pair in stream: {{timing_drift_check_start;0}}, queued...
[1538418619.57][GLRM][TXD] {{base_time;0}}
[1538418619.67][CONN][INF] found KV pair in stream: {{base_time;74000}}, queued...
[1538418632.63][HTST][INF] Device base time 74000
[1538418632.63][HTST][INF] sleeping for 12.9469680786 to measure drift accurately
[1538418632.63][GLRM][TXD] {{final_time;0}}
[1538418632.70][CONN][INF] found KV pair in stream: {{final_time;13142000}}, queued...
[1538418632.70][HTST][INF] Device final time 13142000 
[1538418632.70][GLRM][TXD] {{pass;0}}
[1538418632.70][HTST][INF] Compute host events
[1538418632.70][HTST][INF] Transport delay 0: 0.0993371009827
[1538418632.70][HTST][INF] Transport delay 1: 0.0714609622955
[1538418632.70][HTST][INF] DUT base time : 74000.0
[1538418632.70][HTST][INF] DUT end time  : 13142000.0
[1538418632.70][HTST][INF] min_pass : 12.4747843504 , max_pass : 13.6085815787 for 5.0%%
[1538418632.70][HTST][INF] min_inconclusive : 12.3125261903 , max_inconclusive : 13.7879195452
[1538418632.70][HTST][INF] Time reported by device: 13.068
[1538418632.70][HTST][INF] Test passed !!!
[1538418632.80][CONN][RXD] ++ MbedOS Fault Handler ++
[1538418632.80][CONN][RXD] 
[1538418632.80][CONN][RXD] FaultType: HardFault
[1538418632.80][CONN][RXD] 
[1538418632.80][CONN][RXD] Context:
[1538418632.80][CONN][RXD] R0   : 000F4240
[1538418632.80][CONN][RXD] R1   : 200013E0
[1538418632.80][CONN][RXD] R2   : 01373413
[1538418632.90][CONN][RXD] R3   : 00000000
[1538418632.90][CONN][RXD] R4   : 0000F49C
[1538418632.90][CONN][RXD] R5   : 0000000D
[1538418632.90][CONN][RXD] R6   : ED000004
[1538418632.90][CONN][RXD] R7   : 00000000
[1538418632.90][CONN][RXD] R8   : 20001340
[1538418633.00][CONN][RXD] R9   : 00000010
[1538418633.00][CONN][RXD] R10  : 20001324
[1538418633.00][CONN][RXD] R11  : 00000000
[1538418633.00][CONN][RXD] R12  : 00000000
[1538418633.00][CONN][RXD] SP   : 20027FB0
[1538418633.00][CONN][RXD] LR   : 00004A75
[1538418633.10][CONN][RXD] PC   : 00004DD6
[1538418633.10][CONN][RXD] xPSR : 4101001B
[1538418633.10][CONN][RXD] PSP  : 200034C0
[1538418633.10][CONN][RXD] MSP  : 20027F90
[1538418633.10][CONN][RXD] CPUID: 410FC241
[1538418633.10][CONN][RXD] HFSR : 40000000
[1538418633.20][CONN][RXD] MMFSR: 00000000
[1538418633.20][CONN][RXD] BFSR : 00000000
[1538418633.20][CONN][RXD] UFSR : 00000100
[1538418633.20][CONN][RXD] DFSR : 00000008
[1538418633.20][CONN][RXD] AFSR : 00000000
[1538418633.20][CONN][RXD] Mode : Handler
[1538418633.30][CONN][RXD] Priv : Privileged
[1538418633.30][CONN][RXD] Stack: MSP
[1538418633.30][CONN][RXD] 
[1538418633.30][CONN][RXD] -- MbedOS Fault Handler --
[1538418633.30][CONN][RXD] 
[1538418633.30][CONN][RXD] 
[1538418633.30][CONN][RXD] 
[1538418633.30][CONN][RXD] ++ MbedOS Error Info ++
[1538418633.40][CONN][RXD] Error Status: 0x80FF013D Code: 317 Module: 255
[1538418633.40][CONN][RXD] Error Message: Fault exception
[1538418633.40][CONN][RXD] Location: 0x5B27
[1538418633.40][CONN][RXD] Error Value: 0x4DD6
[1538418633.50][CONN][RXD] Current Thread: Id: 0x200026A4 Entry: 0x5C73 StackSize: 0x1000 StackMem: 0x200026E8 SP: 0x20027F28 
[1538418633.60][CONN][RXD] For more info, visit: https://armmbed.github.io/mbedos-error/?error=0x80FF013D
[1538418633.70][CONN][RXD] -- MbedOS Error Info --
[1538418853.93][HTST][INF] test suite run finished after 240.37 sec...
[1538418853.93][CONN][INF] received special event '__host_test_finished' value='True', finishing

One thing that caught my eye is that the hard faults are reported before the reset banner ([1538389345.46][CONN][RXD] mbedmbedmbedmbedmbedmbedmbedmbed):

[1538388720.75][CONN][RXD] >>> Running 5 test cases...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_count;5}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_ECHOTEST}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_ECHOTEST_NONBLOCK}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_OPEN_CLOSE_REPEAT}}, queued...
[1538388720.75][CONN][RXD] 
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_OPEN_LIMIT}}, queued...
[1538388720.75][CONN][INF] found KV pair in stream: {{__testcase_name;TCPSOCKET_THREAD_PER_SOCKET_SAFETY}}, queued...
[1538388720.83][CONN][RXD] >>> Running case #1: 'TCPSOCKET_ECHOTEST'...
[1538388720.83][CONN][INF] found KV pair in stream: {{__testcase_start;TCPSOCKET_ECHOTEST}}, queued...
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] ++ MbedOS Fault Handler ++
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] FaultType: HardFault
[1538388721.47][CONN][RXD] 
[1538388721.47][CONN][RXD] Context:
[1538388721.47][CONN][RXD] R0   : BF234AE6
[1538388721.56][CONN][RXD] R1   : 2000FAA0
[1538388721.56][CONN][RXD] R2   : 2000FAA0
[1538388721.56][CONN][RXD] R3   : A3207D8C
[1538388721.56][CONN][RXD] R4   : 0000001C
[1538388721.56][CONN][RXD] R5   : 2FA0BE83
[1538388721.56][CONN][RXD] R6   : 000004B0
[1538388721.66][CONN][RXD] R7   : 200015D4
[1538388721.66][CONN][RXD] R8   : 00000000
[1538388721.66][CONN][RXD] R9   : 00000094
[1538388721.66][CONN][RXD] R10  : 0001C678
[1538388721.66][CONN][RXD] R11  : 0000BF79
[1538388721.66][CONN][RXD] R12  : 2000FA40
[1538388721.76][CONN][RXD] SP   : 2000EC80
[1538388721.76][CONN][RXD] LR   : 00007977
[1538388721.76][CONN][RXD] PC   : 00000D4C
[1538388721.76][CONN][RXD] xPSR : 61002800
[1538388721.76][CONN][RXD] PSP  : 2000EC18
[1538388721.76][CONN][RXD] MSP  : 20027FD8
[1538388721.88][CONN][RXD] CPUID: 410FC241
[1538388721.88][CONN][RXD] HFSR : 40000000
[1538388721.88][CONN][RXD] MMFSR: 00000000
[1538388721.88][CONN][RXD] BFSR : 00000000
[1538388721.88][CONN][RXD] UFSR : 00000001
[1538388721.88][CONN][RXD] DFSR : 00000008
[1538388721.98][CONN][RXD] AFSR : 00000000
[1538388721.98][CONN][RXD] Mode : Thread
[1538388721.98][CONN][RXD] Priv : Privileged
[1538388721.98][CONN][RXD] Stack: PSP
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] -- MbedOS Fault Handler --
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] 
[1538388721.98][CONN][RXD] 
[1538388722.35][CONN][RXD] ++ MbedOS Error Info ++
[1538388722.35][CONN][RXD] Error Status: 0x80FF013D Code: 317 Module: 255
[1538388722.35][CONN][RXD] Error Message: Fault exception
[1538388722.35][CONN][RXD] Location: 0x1281D
[1538388722.35][CONN][RXD] Error Value: 0xD4C
[1538388722.35][CONN][RXD] Current Thread: Id: 0x2000ED30 Entry: 0x12A71 StackSize: 0x1000 StackMem: 0x2000DD30 SP: 0x20027F90 
[1538388722.37][CONN][RXD] For more info, visit: https://armmbed.github.io/mbedos-error/?error=0x80FF013D
[1538388722.37][CONN][RXD] -- MbedOS Error Info --
[1538389345.46][CONN][RXD] mbedmbedmbedmbedmbedmbedmbedmbed

This is currently ongoing on master; it should be reproducible by fetching master.
I checked the target files (https://github.com/ARMmbed/mbed-os/tree/master/targets/TARGET_NXP/TARGET_MCUXpresso_MCUS/TARGET_LPC546XX): the last change was 3 months ago, and the targets definition has not changed either. Could the regression be in the RTOS or in networking?
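For reference, the Error Status words in the dumps are self-consistent with the printed Code/Module fields. The sketch below splits the status word using the layout I believe mbed_error.h uses (code in the low 16 bits, module id in bits 16..23); treat the layout as an assumption, quoted here only to show where "Code: 317 Module: 255" comes from:

```cpp
#include <cstdint>

// Split an mbed-os error status word into the fields the crash dump
// prints. Layout assumed from mbed_error.h: error code in the low
// 16 bits, module id in bits 16..23.
struct ErrorFields {
    uint32_t code;    // e.g. 317 = fault exception
    uint32_t module;  // e.g. 255 = unknown module
};

constexpr ErrorFields decode_error_status(uint32_t status) {
    return { status & 0xFFFFu, (status >> 16) & 0xFFu };
}

// 0x80FF013D -> code 317, module 255, matching the
// "Code: 317 Module: 255" lines in the dumps above.
```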

For internal reference: IOTTESTINF-4055

@ARMmbed/team-nxp @ARMmbed/mbed-os-ipcore @studavekar

Issue request type

[ ] Question
[ ] Enhancement
[x] Bug

CLOSED nxp mirrored bug


All 7 comments

Looked into this a bit. On the run I looked at (the first one above, I think), the crash is due to branching to address 0 from inside osMutexRelease, acting on mem_mutex in lwIP.

00014394 <osMutexRelease>:
   14394:   b508        push    {r3, lr}
   14396:   4602        mov r2, r0
   14398:   f3ef 8305   mrs r3, IPSR
   1439c:   b13b        cbz r3, 143ae <osMutexRelease+0x1a>
   1439e:   4610        mov r0, r2
   143a0:   f06f 0105   mvn.w   r1, #5
   143a4:   f7fe fe64   bl  13070 <EvrRtxMutexError>
   143a8:   f06f 0005   mvn.w   r0, #5
   143ac:   bd08        pop {r3, pc}
   143ae:   f7ff fe4b   bl  14048 <IsIrqMasked>
   143b2:   2800        cmp r0, #0       ; <-- lr seen in dump, so passed here
   143b4:   d1f3        bne.n   1439e <osMutexRelease+0xa>
   143b6:   4610        mov r0, r2           ; r2 = mem_mutex seen in dump
   143b8:   f8df c004   ldr.w   ip, [pc, #4]    ; 143c0 <osMutexRelease+0x2c>   ;  r12 = svcMutexRelease seen in dump
   143bc:   df00        svc 0
   143be:   bd08        pop {r3, pc}     ; process mode in dump, so presumably returned from svc here?

We've apparently returned from the SVC call, so it seems likely we've just popped 0 into PC from the stack. So the stack has been overwritten, or we overflowed, during this mutex release.

The mutex is lwIP's mem_mutex, so this is probably inside a pbuf_alloc(PBUF_RAM).

What's odd is that the corruption happens within the very short window of that mutex release. In software that should only be possible if a thread switch occurs due to the mutex release, which seems unlikely: all threads involved with lwIP and the driver run at equal priority, and it shouldn't be a preemption switch either, as we wouldn't have held the mutex for long.

So I'm wondering if this is hardware corruption: the Ethernet hardware overwriting the stack, either due to misconfiguration or because we overflowed into its buffers.

It is the case that the stack pointer is in the heap, and the LPC546XX Ethernet driver does allocate its receive buffers just before its thread stack, so they are likely adjacent in the heap.
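To illustrate the suspected failure mode (a hypothetical sketch, not the driver's actual allocation code): when the receive buffers and the thread stack come from adjacent heap allocations, even a small overrun of the buffer region lands directly in the stack, clobbering saved registers such as a return address.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical sketch of the suspected layout: an Ethernet RX buffer
// region sitting in the heap immediately before a thread stack.
// Returns true if an overrun of the RX region reached the stack.
bool overrun_reaches_stack(std::size_t overrun_bytes) {
    uint8_t heap[512];                     // stand-in for the allocator's arena
    uint8_t *rx_buffers   = heap;          // first allocation: RX buffers (256 B)
    uint8_t *thread_stack = heap + 256;    // next allocation: thread stack, adjacent

    std::memset(thread_stack, 0xAA, 256);  // pretend the stack holds live data
    // "Hardware" writes past the end of the RX region...
    std::memset(rx_buffers, 0x00, 256 + overrun_bytes);

    // ...and the bottom of the stack is now zero: exactly the kind of
    // corruption that pops 0 into PC on function return.
    return thread_stack[0] != 0xAA;
}
```

A 4-byte overrun is enough to zero a saved PC slot at the bottom of the adjacent stack, while a correctly bounded write leaves it intact.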

However, that's not going to remotely explain the ticker fault, so I'll look at that next.

@kjbracey-arm There is an updated DAPLink firmware available that fixes a bug where the entire binary was not getting programmed to flash.
https://github.com/ARMmbed/DAPLink/commit/3ae822fa850f2f075d9cffb4982de49f36527c76

The updated interface firmware for LPC546XX is available at the link below:
https://os.mbed.com/teams/NXP/wiki/Updating-LPCXpresso-firmware

Can you please update and check whether this is still an issue?

@mmahadevan108 Was the fix you referenced only for images larger than 256k, or might smaller images be affected too? The test images we use in the morph tests are not that big.

The ticker crash is a bit inconclusive. The microsecond ticker's event queue head (the events structure in mbed_us_ticker_api.c) is corrupt.

Can't see any flaw in the Ticker test or in Ticker itself. The tests are a bit aggressive at creating Tickers on the stack and expecting the destructor to clean up, but that looks like it should work. The only quirk I see is that TimerEvent::remove gets called twice via the inherited destructors, but the second call should be a no-op.
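The double call itself is benign as long as remove is idempotent. A minimal sketch of the pattern (hypothetical names, not the actual TimerEvent code) shows why the second call via the chained destructors is a no-op:

```cpp
// Hypothetical sketch of an event that detaches itself from a queue
// on destruction. remove() is idempotent, so being invoked twice via
// chained destructors is harmless: the second call sees no pending
// event and returns immediately.
struct QueuedEvent {
    bool queued = false;
    int detach_count = 0;       // how many real detaches happened

    void insert() { queued = true; }

    void remove() {
        if (!queued) return;    // second call: nothing pending, no-op
        queued = false;
        ++detach_count;         // first call: actually unlink from the queue
    }

    ~QueuedEvent() { remove(); }
};
```

Calling remove() twice on an inserted event performs exactly one real detach, so the duplicated destructor path can't by itself corrupt the queue.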

If I can find the board here I can try to reproduce locally under a debugger tomorrow.

I ran all the mbed tests. I am not able to reproduce this crash on the LPC54628 and LPC54608 boards.

@c1728p9 has ascertained that one or more boards in the CI has worn-out flash, which causes unreliable reads. So I think we suspend investigation of the crashes. The serial loss issue remains.

Internal Jira reference: https://jira.arm.com/browse/MBOCUSTRIA-9
