[BUG] RP2350 ostest stuck when SMP enabled
Description / Steps to reproduce the issue
A Raspberry Pi Pico 2 W board with an RP2350 MCU experiences hangs in ostest during the nested signals test when SMP is enabled. It loops forever inside the spin_lock_notrace function.
The traces I managed to collect:
info threads
Index Tid Pid Cpu Thread Info Frame
0 0 0 0 '\000' Thread 0x20003d08 (Name: CPU0 IDLE, State: Assigned, Priority: 0, Stack: 1008) 0x100105b2 up_idle() at chip/rp23xx_idle.c:94
1 1 0 1 '\001' Thread 0x20003dd8 (Name: CPU1 IDLE, State: Assigned, Priority: 0, Stack: 1008) 0x100105b2 up_idle() at chip/rp23xx_idle.c:94
2 2 2 0 '\000' Thread 0x20006698 (Name: nsh_main, State: Waiting,Semaphore, Priority: 100, Stack: 2008) 0x10005086 nxsem_wait_slow() at semaphore/sem_wait.c:207
12 12 12 0 '\000' Thread 0x20007880 (Name: ostest, State: Waiting,Semaphore, Priority: 100, Stack: 2016) 0x10005086 nxsem_wait_slow() at semaphore/sem_wait.c:207
13 13 13 1 '\001' Thread 0x200084e8 (Name: ostest, State: Assigned, Priority: 100, Stack: 8120) No symbol with pc
*14 62 13 1 '\001' Thread 0x2000a9d8 (Name: ostest, State: Running, Priority: 101, Stack: 8176) 0x1000290c enter_critical_section_wo_note() at include/nuttx/spinlock.h:199
*15 63 13 0 '\000' Thread 0x2000aab8 (Name: ostest, State: Running, Priority: 102, Stack: 8176) 0x1000290c enter_critical_section_wo_note() at include/nuttx/spinlock.h:199
bt
#0 0x1000290c in spin_lock_notrace (lock=0x200040c8 <g_cpu_irqlock> "\001") at include/nuttx/spinlock.h:199
#1 enter_critical_section_wo_note () at irq/irq_csection.c:183
#2 0x1000c754 in uart_xmitchars (dev=0x2000121c <g_uart0port>) at serial/serial_io.c:62
#3 0x10000e54 in up_interrupt (irq=49, context=0x0, arg=0x2000121c <g_uart0port>) at chip/rp23xx_serial.c:617
#4 0x10002836 in irq_dispatch (irq=49, context=0x0) at irq/irq_dispatch.c:144
#5 0x10001b64 in exception_direct () at armv8-m/arm_doirq.c:62
#6 <signal handler called>
#7 spin_lock_notrace (lock=0x200040c8 <g_cpu_irqlock> "\001") at include/nuttx/spinlock.h:199
#8 enter_critical_section_wo_note () at irq/irq_csection.c:234
#9 0x10005376 in nxsig_deliver (stcb=0x2000aab8) at signal/sig_deliver.c:178
#10 0x10001e9e in arm_sigdeliver () at armv8-m/arm_sigdeliver.c:107
#11 0x10005fb8 in nxsched_remove_self (tcb=0x40) at sched/sched_removereadytorun.c:280
#12 0x00000000 in ?? ()
list
194 {
195 #ifdef CONFIG_TICKET_SPINLOCK
196 int ticket = atomic_fetch_add(&lock->next, 1);
197 while (atomic_read(&lock->owner) != ticket)
198 #else /* CONFIG_TICKET_SPINLOCK */
199 while (up_testset(lock) == SP_LOCKED)
200 #endif
201 {
202 UP_DSB();
203 UP_WFE();
info args
lock = 0x200040c8 <g_cpu_irqlock> "\001"
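For readers less familiar with this primitive: the non-ticket path shown by `list` above is a plain test-and-set loop. A minimal, portable sketch of the same idea (using C11 `atomic_flag` as a stand-in for `spinlock_t`/`up_testset()`; this is an illustration, not the NuttX implementation):

```c
#include <stdatomic.h>

/* Sketch only: spinlock_sketch_t stands in for NuttX's spinlock_t. */
typedef atomic_flag spinlock_sketch_t;

static inline void spin_lock_sketch(spinlock_sketch_t *lock)
{
  /* up_testset() returns SP_LOCKED while another CPU holds the lock;
   * atomic_flag_test_and_set() likewise returns true while the flag is
   * already set.  The UP_DSB()/UP_WFE() pair in the real loop parks the
   * spinning ARM core until an event (the holder's SEV on unlock) instead
   * of burning cycles; it is omitted here to stay portable.
   */
  while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
    }
}

static inline void spin_unlock_sketch(spinlock_sketch_t *lock)
{
  atomic_flag_clear_explicit(lock, memory_order_release);
  /* The real spin_unlock() issues SEV here to wake cores waiting in WFE. */
}
```

Note what the traces already show: `info threads` has both ostest threads, one per CPU, parked in `enter_critical_section_wo_note`, and in the backtrace the UART interrupt handler (frames #0-#1) is spinning on the same `g_cpu_irqlock` that the interrupted thread (frame #7) was trying to take.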
Additional facts:
- console to the board is connected via UART0
- the issue reproduces 100% of the time when running the `ostest` utility
- the `smp` utility runs without problems, no issues found
- the issue does not reproduce on the older RP2040 MCU (different ARM cores)
- the issue does not reproduce when `CONFIG_SMP_NCPUS=1` but SMP is enabled
- the issue reproduces even when `RP23XX_TESTSET_SPINLOCK` is changed from `0` to `31` (see the RP2350-E2 erratum); a sketch of what this hardware spinlock does follows this list
- the issue reproduces with today's `master`
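For context on the `RP23XX_TESTSET_SPINLOCK` item above: as later comments in this thread suggest (the lock can be released by writing to an RP2350 SPINLOCK register from the debugger), `up_testset()` on this port appears to be backed by one of the SIO hardware spinlocks, and the option selects which one. A rough sketch of how such a hardware test-and-set typically works; the register offset below is an assumption copied from the RP2040 SIO map, so verify it against the RP2350 datasheet and the NuttX rp23xx headers before relying on it:

```c
#include <stdint.h>
#include <stdbool.h>

#define SIO_BASE  0xd0000000u
/* ASSUMPTION: 0x100 is the SPINLOCK0 offset in the RP2040 SIO map; check the
 * RP2350 datasheet / NuttX rp23xx headers for the real layout.
 */
#define SIO_SPINLOCK(n)  (*(volatile uint32_t *)(SIO_BASE + 0x100u + 4u * (n)))

/* Reading an SIO spinlock register attempts to claim it: the read returns
 * nonzero if the claim succeeded and 0 if the lock was already held.
 */
static inline bool hw_testset_sketch(unsigned n)
{
  return SIO_SPINLOCK(n) != 0;     /* true => this core now owns lock n */
}

/* Writing any value to the same register releases the lock. */
static inline void hw_spinlock_release_sketch(unsigned n)
{
  SIO_SPINLOCK(n) = 1;
}
```

Writing to the register is why poking the corresponding SPINLOCK register over SWD, as described later in this thread, unsticks the spinning core.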
The output of the ostest utility is often partially cut off:
...
user_main: nested signal handler test
signest_test: Starting signal waiter task at priority 101
signest_test: Started waiter_main pid=62
waiter_main: Waiter started
signest_test: Starting interfering task at priority 102
waiter_main: Setting signal mask
interfere_main: Waiting on semaphore
waiter_main: Registering signal handler
signest_test: Started interfere_main pid=63
waiter_main: Waiting on semaphore
signest_test: Simple case:
Total signalled
On which OS does this issue occur?
[OS: Linux]
What is the version of your OS?
ArchLinux, Debian
NuttX Version
master
Issue Architecture
[Arch: arm]
Issue Area
[Area: Kernel]
Host information
I use two build environments; the issue reproduces 100% of the time in both.
1: x86_64 PC with ArchLinux and the arm-none-eabi-* embedded toolchain
2: aarch64 VM with Debian and the arm-none-eabi-* embedded toolchain
Verification
- [x] I have verified before submitting the report.
Hey, is this problem in any way related to https://github.com/apache/nuttx/issues/16139 ? Maybe there are details in common. I've had these strange hangs for a while and I feel as if they have gotten worse, but all on the RP2040. I see that you have no problems with the RP2040, but maybe you have a specific setup where it does not trigger the hang.
Hey @keever50. I've rebuilt my debug SMP build for RP2040 with USB console and found no issue: ostest runs to the end and exits without any error. Maybe you need to try adding SMP to your configuration?
@keever50 @avgoor I got an ostest crash in the same signal test; could you try my commit https://github.com/apache/nuttx/commit/45478e5d5e71f647cb6ce0ee3f2b920e637410a7 ? It fixes ostest for me on the RP2350 with SMP mode disabled.
That is really nice, however, I never used SMP (atleast, not that i am aware of in the usbnsh) and i am using the RP2040. Is it possible that this also fixes RP2040 non SMP? I will test this out later.
It is possible, since the same compiler is used; if it's really a compiler optimization issue, it could help.
Thanks for the patch! I've tested it on my setup and unfortunately it didn't help. I forgot to mention that I've been building NuttX without optimizations (-O0) from the beginning, specifically to exclude any compiler-related issues.
Thanks, however, for me it's the same result: it did not help. I've made similar fixes that appeared to help, but they really just shift the issue elsewhere, such as the hangs and crashes happening at a different time. I am afraid there is memory corruption happening somehow.
@keever50 @avgoor Yes, it looks like it's just shifting the code around. I've already tried the RP2350 errata spinlock fixes and the sched lock/unlock updates; they did not help with ostest, but they at least let the SMP (2-core) config boot on my Pico 2 (without the sched lock/unlock modification it failed to boot for me). I think it's time to attach an SWD debugger and see what's really going on.
This seems to be related to https://github.com/apache/nuttx/issues/16193
Can you roll back to https://github.com/apache/nuttx/pull/16030 and check whether SMP + ostest's signest started failing after it was applied?
Hi @tmedicci, unfortunately rewinding back to a pre-#16030 commit doesn't help.
In many runs ostest hangs even sooner; the last output is:
user_main: pthread_rwlock test
pthread_rwlock: Initializing rwlock
pthread_exit_thread 30: Exiting
Once it got further, to the signest test, and even produced a dump there:
user_main: nested signal handler test
signest_test: Starting signal waiter task at priority 101
signest_test: Started waiter_main pid=53
waiter_main: Waiter started
signest_test: Starting interfering task at priority 102
waiter_main: Setting signal mask
interfere_main: Waiting on semaphore
waiter_main: Registering signal handler
signest_test: Started interfere_main pid=54
waiter_main: Waiting on semaphore
[CPU0] dump_assert_info: Current Version: NuttX 12.9.0 504f838577-dirty Apr 22 2025 17:01:39 arm
[CPU0] dump_assert_info: Assertion failed : at file: :0 task(CPU0): ostest process: ostest 0x10024ccd
[CPU0] up_dump_register: R0: 20001510 R1: 00000000 R2: 20001510 R3: 20001510
[CPU0] up_dump_register: R4: 20003790 R5: 20003770 R6: 00000000 FP: 00000000
[CPU0] up_dump_register: R8: 00000000 SB: 00000000 SL: 00000000 R11: 00000000
[CPU0] up_dump_register: IP: 00000000 SP: 200090c8 LR: 10003141 PC: 10003141
[CPU0] up_dump_register: xPSR: 40000000 BASEPRI: 00000040 CONTROL: 00000002
[CPU0] up_dump_register: EXC_RETURN: 00000000
[CPU0] dump_stackinfo: User Stack:
[CPU0] dump_stackinfo: base: 0x200072c0
[CPU0] dump_stackinfo: size: 00008120
[CPU0] dump_stackinfo: sp: 0x200090c8
[CPU0] stack_dump: 0x200090a8: 40006e10 00000000 40006e10 20006e10 20004ed8 20003790 20003770 100031bd
[CPU0] stack_dump: 0x200090c8: 20001510 2000444b 20001510 00000000 00000000 00000000 4000004d 20006e10
[CPU0] stack_dump: 0x200090e8: 20001510 00000000 00000000 00000000 00000000 20006e10 00006e10 00000040
[CPU0] stack_dump: 0x20009108: 00003790 00000000 20006e10 00000000 00000000 20004436 00000000 00000040
[CPU0] stack_dump: 0x20009128: 00007278 00000000 20004436 20004436 00000000 0000c3bd 20006e10 00007278
[CPU0] stack_dump: 0x20009148: 00000040 0002f697 00000000 00000000 20006e10 00013ca0 24f47300 1000c35f
[CPU0] stack_dump: 0x20009168: 00000013 00000000 00000000 00000000 00000000 100278c9 00000013 00000035
[CPU0] stack_dump: 0x20009188: 00000013 0000004d 24f47300 00000002 10027868 10027ab9 20006fe0 00000036
[CPU0] stack_dump: 0x200091a8: 00000035 00010066 00000000 00000000 00002000 00000066 00000027 10035528
[CPU0] stack_dump: 0x200091c8: 00006b54 00000000 00000013 00000006 00000188 00000188 20003770 10024f17
[CPU0] stack_dump: 0x200091e8: 0007d7ac 00000001 00000023 00078d80 00004a2c 00078d80 00004b98 deadbeef
[CPU0] stack_dump: 0x20009208: 20007288 00000005 deadbeef 00000005 deadbeef 00000000 00000000 1000c84f
[CPU0] stack_dump: 0x20009228: 20009278 20007288 00000005 10024ccd 00000000 100072db 00000000 00000000
[CPU0] stack_dump: 0x20009248: 20006e10 00000000 00000040 00fffffd 00000000 20007288 00000000 20006e10
[CPU0] stack_dump: 0x20009268: 00000005 00000001 1000722c 00000000 00000000 00000000 00000000 00000000
ostest_main: Exiting with status 256
stdio_test: Standard I/O Check: fprintf to stderr
Could this be connected to the IRQ handling differences between the M0+ and M33 ARM cores? I've seen that the NuttX port to the RP2350 is largely a copy-paste of the RP2040 port, but the ARM cores are very different between the two.
Hi @xiaoxiang781216 and @pussuw, I see you closed this issue as resolved by #16262. I've rebuilt everything from scratch from the fresh master branch, including this specific commit, and unfortunately it doesn't help; the issue persists. Since it's not fixed, please re-open this report.
Done.
@avgoor I have at least one successful full run of ostest after this commit c12aa5663d3046138eb975a8bc872d8348c73e21. I'm still very interested to hear how you're able to get the RP2350 with SMP (2 cores) running at all from master, because on my side it doesn't work fresh from master without the sched_unlock patch I posted on the mailing list.
PS: My defconfig looks like this: https://github.com/shtirlic/picocalc-nx/blob/main/configs/nsh/defconfig
ostest also still fails for me on the RP2350 (usbnsh).
It does not get any further than this:
I still believe the issue isn't even centered on SMP specifically. These failures are completely random; with every patch and every PR, the issue just gets moved around in memory. The RP2040 also still has this problem.
It is not just the RP2350 and RP2040; an ESP board (I forget which one) is affected too. I also notice that my STM32H743 is behaving very strangely now. I am stuck on an older NuttX 12.3 version where everything seems to be more stable. To be more specific about the STM32, the USB stack is completely broken there, and sometimes (hard to replicate) ostest also fails at random locations.
This is a widespread issue. I hope that developers/maintainers won't keep closing these issues too quickly without deep testing. Maybe it's time for a testing board farm.
@shtirlic I honestly just used the pimoroni-pico-2-plus:smp configuration and added debugging options; that's it, nothing more. I connect via UART0, so maybe that's why it works for me. But ostest still gets stuck randomly before signest, and it gets stuck on signest 100% of the time.
@keever50 I'm not sure it's SMP either; however, I don't see why it works OK when I set CONFIG_SMP_NCPUS=1 to limit the CPU count. When only one CPU core is running, everything works fine and doesn't get stuck anywhere. So at least in my head it looks like a race condition between CPU cores around spinlocks and IRQ locking. Unfortunately I'm new to NuttX and don't know how it is intended to behave in multi-CPU environments.
P.S. I've tried to get rid of the spinlocks in favor of atomic operations, but then the OS randomly crashes with a hardfault. I'm not sure that it wasn't my mistake, so take it with a grain of salt.
I must correct myself. I've rerun fresh master with CONFIG_SMP_NCPUS=1 and it's very unstable now. With extra debug output enabled, it gets stuck randomly here and there, as if the UART I/O procedure itself interferes with other lock operations.
>>> bt
#0 spin_lock_notrace (lock=0x20003b0d <g_cpu_irqlock> "\001") at nuttx/include/nuttx/spinlock.h:199
#1 enter_critical_section_wo_note () at irq/irq_csection.c:183
#2 0x1000d37e in uart_xmitchars (dev=0x20000fc8 <g_uart0port>) at serial/serial_io.c:62
#3 0x10000dd4 in up_interrupt (irq=49, context=0x0, arg=0x20000fc8 <g_uart0port>) at chip/rp23xx_serial.c:617
#4 0x10002bb6 in irq_dispatch (irq=49, context=0x0) at irq/irq_dispatch.c:144
#5 0x10001cf4 in exception_direct () at armv8-m/arm_doirq.c:62
#6 <signal handler called>
#7 0x10002cf0 in enter_critical_section_wo_note () at irq/irq_csection.c:184
#8 0x100226ac in nxsched_process_scheduler () at sched/sched_processtimer.c:132
#9 0x100226e6 in nxsched_process_timer () at sched/sched_processtimer.c:193
#10 0x100116e0 in rp23xx_timerisr (irq=15, regs=0x0, arg=0x0) at chip/rp23xx_timerisr.c:82
#11 0x10002bb6 in irq_dispatch (irq=15, context=0x0) at irq/irq_dispatch.c:144
#12 0x10001cf4 in exception_direct () at armv8-m/arm_doirq.c:62
#13 <signal handler called>
#14 up_idle () at chip/rp23xx_idle.c:94
#15 0x100028be in nx_start () at init/nx_start.c:782
#16 0x100001ec in __start () at chip/rp23xx_start.c:192
[2] id 2 name rp2350.dap.core1 from 0x000000ec
[1] id 1 name rp2350.dap.core0 from 0x10002cd0 in spin_lock_notrace+8 at nuttx/include/nuttx/spinlock.h:199
The console output looks unfinished:
[CPU0] nxsig_tcbdispatch: TCB=0x2000adb0 pid=46 signo=39 code=0 value=0 masked=NO
[CPU0] nxsig_tcbdispatch: TCB=0x2000adb0 pid=46 signo=41 code=0 value=0 masked=NO
[CPU0] nxsig_tcbdispatch: TCB=0x2000adb0 pid=46 signo=42 code=0 value=0 masked=NO
[CPU0] nxsig_tcbdispatch: TCB=0x2000adb0 pid=46
> as if the UART I/O procedure itself interferes with other lock operations.
I think that is likely, as the RP2040 suffers from a glitchy, unreliable UART, both on USB and the hardware UART. This issue also appeared on the STM32H7, where turning on SYSLOG debug causes crashes. Suspicious... It might be worth looking into that.
Perhaps we can run some serial stress tests to confirm? Is NuttX actually still running during these hangs? Maybe it's a thread-specific hang, or is everything stuck? Perhaps a classic blinky could confirm this.
@keever50 It is "running", meaning that the active thread is busy reading the SPINLOCK register trying to acquire the spinlock (which never succeeds for some reason). If you unlock the lock (g_cpu_irqlock in this case) by setting it to 0x0 via the debugger, the OS gets unstuck and continues running the thread. But then it gets stuck once again just a couple of function calls further down the program. In all my testing I was able to escape this "deadlock" by manually setting the lock to 0x0 or by writing to the corresponding SPINLOCK register of the RP2350. So at least in my case it's not a hardfault where the CPU is halted. On the other hand, I would not expect any other background thread to run at that moment; I doubt that spinning on the lock can be preempted by the scheduler (not sure here).
@avgoor There have been recent stack overflow issues (not the website), probably because NuttX is growing. If you have issues, please enable KASAN (MM_KASAN) and stack canaries in Kconfig.
If KASAN panics right away, there is a stack overflow somewhere; it also shows where and by how much.
Run "ps" to see the stack usage levels per thread.
As you can see, it is extremely high. Way too high. Try increasing the stack sizes and see if you get the same issues as before.
I did increase it, but we still get hangs if you run ps multiple times. This happened after the second time I flashed the Pico. What if this is flash related?
After enabling KASAN, everything appears to be "fixed", but I am sure it is not. Strange.
@keever50 I've enabled KASAN and bumped the stack sizes everywhere, but the issue is still there. Once I even got a partial assertion dump:
[CPU0] dump_assert_info: Current Version: NuttX 12.9.0 7bbc76f115 Apr 30 2025 09:23:31 arm
[CPU0] dump_assert_info: Assertion failed rtcb != ((void*)0) && rtcb->irqcount > 0: at file: irq/irq_csection.c:379 task(CPU0): ostest process: ostest 0x1002a649
[CPU0] up_dump_register: R0: 20001510 R1: 0000017b R2: 20001510 R3: 20001510
[CPU0] up_dump_regis
and that's it. Not sure what to do now. To me it still looks like broken interaction between the locking logic and IRQs.
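For anyone triaging this: the failed check `rtcb != NULL && rtcb->irqcount > 0` comes from the critical-section bookkeeping in irq_csection.c, whose invariant is roughly that every leave must be paired with a prior enter on the same task. A heavily simplified sketch of that pairing (stub helpers, not the actual NuttX code, which additionally handles interrupt context, CPU pausing, and so on):

```c
#include <assert.h>

/* Simplified stand-ins for NuttX types and helpers; a sketch of the idea,
 * not the actual irq/irq_csection.c implementation.
 */
struct tcb_sketch
{
  int irqcount;                           /* critical-section nesting depth */
};

extern struct tcb_sketch *this_task_sketch(void);
extern void cpu_irqlock_acquire_sketch(void);   /* take global g_cpu_irqlock */
extern void cpu_irqlock_release_sketch(void);   /* release g_cpu_irqlock     */
extern unsigned irq_save_sketch(void);
extern void irq_restore_sketch(unsigned flags);

unsigned enter_critical_sketch(void)
{
  unsigned flags = irq_save_sketch();       /* mask IRQs on this CPU        */
  struct tcb_sketch *rtcb = this_task_sketch();

  if (rtcb->irqcount++ == 0)                /* outermost entry only         */
    {
      cpu_irqlock_acquire_sketch();         /* contend for g_cpu_irqlock    */
    }

  return flags;
}

void leave_critical_sketch(unsigned flags)
{
  struct tcb_sketch *rtcb = this_task_sketch();

  /* The failing check: leaving without a matching enter, or with a
   * nesting count corrupted in between, trips this assertion.
   */
  assert(rtcb != 0 && rtcb->irqcount > 0);

  if (--rtcb->irqcount == 0)                /* outermost exit only          */
    {
      cpu_irqlock_release_sketch();
    }

  irq_restore_sketch(flags);
}
```

An unbalanced leave, or an irqcount clobbered between enter and leave (for example across signal delivery or an unexpected task switch), would trip exactly this kind of assertion, which at least seems consistent with the hang showing up only in the nested-signal test.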
Retested the scenario with a fresh build
NuttX 12.10.0 44b65cbafd Aug 17 2025 10:52:31 arm raspberrypi-pico-2
with debug assertions enabled. The test still fails, but now with an assertion. The output is now:
signest_test: Starting signal waiter task at priority 101
signest_test: Started waiter_main pid=67
waiter_main: Waiter started
signest_test: Starting interfering task at priority 102
waiter_main: Setting signal mask
interfere_main: Waiting on semaphore
waiter_main: Registering signal handler
signest_test: Started interfere_main pid=68
waiter_main: Waiting on semaphore
[CPU0] dump_assert_info: Current Version: NuttX 12.10.0 44b65cbafd Aug 17 2025 10:52:31 arm
[CPU0] dump_assert_info: Assertion failed (_Bool)0: at file: :0 task(CPU0): ostest process: ostest 0x100297d1
[CPU0] up_dump_register: R0: 20001310 R1: 00000000 R2: 20001310 R3: 20001310
[CPU0] up_dump_register: R4: 200039e4 R5: 200039c4 R6: 00000000 FP: 00000000
[CPU0] up_dump_register: R8: 00000000 SB: 00000000 SL: 00000000 R11: 00000000
[CPU0] up_dump_register: IP: 00000000 SP: 200099d0 LR: 1000366b PC: 1000366b
[CPU0] up_dump_register: xPSR: 20000000 BASEPRI: 00000080 CONTROL: 00000006
[CPU0] up_dump_register: EXC_RETURN: 00000000
[CPU0] dump_stackinfo: User Stack:
[CPU0] dump_stackinfo: base: 0x20007bc0
[CPU0] dump_stackinfo: size: 00008120
[CPU0] dump_stackinfo: sp: 0x200099d0
[CPU0] stack_dump: 0x200099b0: 80800080 00000000 80000080 20007770 200050d8 200039e4 200039c4 100036e3
[CPU0] stack_dump: 0x200099d0: 20001310 00004650 20001310 10059264 00000000 00000000 00000000 20007770
[CPU0] stack_dump: 0x200099f0: 20001310 10059264 00000000 00000000 00000000 20007770 00000000 00000080
[CPU0] stack_dump: 0x20009a10: 00002804 00000000 20007770 00000000 000077c8 2000463a 00000000 00000080
[CPU0] stack_dump: 0x20009a30: 00007b78 00000000 2000463a 2000463a 00000000 0000c935 20007770 00007b78
[CPU0] stack_dump: 0x20009a50: 00000080 00035051 00000000 00000000 20007770 00013eec 1a39de00 1000c8d7
[CPU0] stack_dump: 0x20009a70: 1003640d 10059264 00000000 00000000 010039e4 1002c4f1 0000002d 00000043
[CPU0] stack_dump: 0x20009a90: 0000002d 00000066 1a39de00 00000002 00000000 1002c6f5 20009acf 00010066
[CPU0] stack_dump: 0x20009ab0: 00000000 00000000 00002000 00000066 00000028 10056508 00006a58 00000000
[CPU0] stack_dump: 0x20009ad0: 0000002d 00000000 0000002e 0000002e 200039c4 10029a29 0007d5a8 00000002
[CPU0] stack_dump: 0x20009af0: 00000027 000772a0 00005a10 00077b98 00006d68 00000000 20007b88 00000005
[CPU0] stack_dump: 0x20009b10: 00000000 00000005 00000000 00000000 00000000 1000cc39 1000689c 20007b88
[CPU0] stack_dump: 0x20009b30: 00000005 100297d1 00000000 10006967 00000000 00000000 20007770 00000000
[CPU0] stack_dump: 0x20009b50: 00000080 00000000 00000000 20007b88 00000000 20007770 00000005 00000001
[CPU0] stack_dump: 0x20009b70: 00040000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ostest_main: Exiting with status 256
stdio_test: Standard I/O Check: fprintf to stderr