How to debug test failures [and: force_close_record_session issue]?
Given the following test result, how can I actually debug rr and check what is happening in force_close_record_session?
When running the replay mentioned at the end of the output, a plain continue runs through to the expected result (see the sketch after the log), so this seems to be only "test case related" (perhaps the breakpoint that is set doesn't have a running command behind it?). In this case:
#/home/rr_test/rr/_build> ctest -VV -R fd_cleanup-no-syscallbuf
UpdateCTestConfiguration from :/home/rr_test/rr/_build/DartConfiguration.tcl
UpdateCTestConfiguration from :/home/rr_test/rr/_build/DartConfiguration.tcl
Test project /home/rr_test/rr/_build
Constructing a list of tests
Done constructing a list of tests
Checking test dependency graph...
Checking test dependency graph end
test 171
Start 171: fd_cleanup-no-syscallbuf
171: Test command: /bin/bash "source_dir/src/test/basic_test.run" "fd_cleanup" "-n" "bin_dir" "120"
171: Test timeout computed to be: 9.99988e+06
171: source_dir/src/test/util.sh: line 245: 23964 Aborted _RR_TRACE_DIR="$workdir" test-monitor $TIMEOUT record.err $RR_EXE $GLOBAL_OPTIONS record $LIB_ARG $RECORD_ARGS "$exe" $exeargs > record.out 2> record.err
171: Test 'fd_cleanup' FAILED: : error during recording:
171: --------------------------------------------------
171: timeout 120 exceeded
171: ====== /proc/23965/status
171: Name: rr
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23965
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 1
171: Cpus_allowed_list: 0
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 372
171: nonvoluntary_ctxt_switches: 5
171: ====== /proc/23965/stack
171: ====== /proc/23966/status
171: Name: compress events
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23966
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 3
171: Cpus_allowed_list: 0-1
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 4
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23966/stack
171: ====== /proc/23967/status
171: Name: compress data
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23967
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 3
171: Cpus_allowed_list: 0-1
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 9
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23967/stack
171: ====== /proc/23968/status
171: Name: compress data
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23968
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 3
171: Cpus_allowed_list: 0-1
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 5
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23968/stack
171: ====== /proc/23969/status
171: Name: compress mmaps
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23969
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 3
171: Cpus_allowed_list: 0-1
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 5
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23969/stack
171: ====== /proc/23970/status
171: Name: compress tasks
171: Umask: 0002
171: State: S (sleeping)
171: Tgid: 23965
171: Ngid: 0
171: Pid: 23970
171: PPid: 23964
171: TracerPid: 0
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 64
171: Groups: 613 615 617 12430
171: VmPeak: 408404 kB
171: VmSize: 408260 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 18216 kB
171: VmRSS: 18216 kB
171: RssAnon: 11144 kB
171: RssFile: 7072 kB
171: RssShmem: 0 kB
171: VmData: 376540 kB
171: VmStk: 136 kB
171: VmExe: 7696 kB
171: VmLib: 5324 kB
171: VmPTE: 128 kB
171: VmSwap: 0 kB
171: Threads: 6
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000027
171: SigCgt: 0000000180006400
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 0
171: Seccomp: 0
171: Speculation_Store_Bypass: thread vulnerable
171: Cpus_allowed: 3
171: Cpus_allowed_list: 0-1
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 4
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23970/stack
171: ====== /proc/23971/status
171: Name: fd_cleanup-oasc
171: State: Z (zombie)
171: Tgid: 23971
171: Ngid: 0
171: Pid: 23971
171: PPid: 23965
171: TracerPid: 23965
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 0
171: Groups: 613 615 617 12430
171: Threads: 2
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000000
171: SigCgt: 0000000180000000
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 1
171: Seccomp: 2
171: Speculation_Store_Bypass: thread force mitigated
171: Cpus_allowed: 1
171: Cpus_allowed_list: 0
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 334
171: nonvoluntary_ctxt_switches: 1
171: ====== /proc/23971/stack
171: ====== /proc/23972/status
171: Name: fd_cleanup-oasc
171: Umask: 0002
171: State: t (tracing stop)
171: Tgid: 23971
171: Ngid: 0
171: Pid: 23972
171: PPid: 23965
171: TracerPid: 23965
171: Uid: 1110 1110 1110 1110
171: Gid: 613 613 613 613
171: FDSize: 1024
171: Groups: 613 615 617 12430
171: VmPeak: 25084 kB
171: VmSize: 25084 kB
171: VmLck: 0 kB
171: VmPin: 0 kB
171: VmHWM: 772 kB
171: VmRSS: 772 kB
171: RssAnon: 280 kB
171: RssFile: 492 kB
171: RssShmem: 0 kB
171: VmData: 12660 kB
171: VmStk: 0 kB
171: VmExe: 4 kB
171: VmLib: 2120 kB
171: VmPTE: 68 kB
171: VmSwap: 0 kB
171: Threads: 2
171: SigQ: 0/31194
171: SigPnd: 0000000000000000
171: ShdPnd: 0000000000000000
171: SigBlk: 0000000000000000
171: SigIgn: 0000000000000000
171: SigCgt: 0000000180000000
171: CapInh: 0000000000000000
171: CapPrm: 0000000000000000
171: CapEff: 0000000000000000
171: CapBnd: 0000001fffffffff
171: CapAmb: 0000000000000000
171: NoNewPrivs: 1
171: Seccomp: 2
171: Speculation_Store_Bypass: thread force mitigated
171: Cpus_allowed: 1
171: Cpus_allowed_list: 0
171: Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
171: Mems_allowed_list: 0
171: voluntary_ctxt_switches: 18
171: nonvoluntary_ctxt_switches: 0
171: ====== /proc/23972/stack
171: ====== gdb -p 23965 -ex 'set confirm off' -ex 'set height 0' -ex 'thread apply all bt' -ex q </dev/null 2>&1
171: GNU gdb (GDB) 11.1
171: Copyright (C) 2021 Free Software Foundation, Inc.
171: License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
171: This is free software: you are free to change and redistribute it.
171: There is NO WARRANTY, to the extent permitted by law.
171: Type "show copying" and "show warranty" for details.
171: This GDB was configured as "x86_64-pc-linux-gnu".
171: Type "show configuration" for configuration details.
171: For bug reporting instructions, please see:
171: <https://www.gnu.org/software/gdb/bugs/>.
171: Find the GDB manual and other documentation resources online at:
171: <http://www.gnu.org/software/gdb/documentation/>.
171:
171: For help, type "help".
171: Type "apropos word" to search for commands related to "word".
171: Attaching to process 23965
171: [New LWP 23966]
171: [New LWP 23967]
171: [New LWP 23968]
171: [New LWP 23969]
171: [New LWP 23970]
171: [Thread debugging using libthread_db enabled]
171: Using host libthread_db library "/lib64/libthread_db.so.1".
171: 0x00007f049b8365b0 in waitid () from /lib64/libc.so.6
171:
171: Thread 6 (Thread 0x7f049906a700 (LWP 23970) "compress tasks"):
171: #0 0x00007f049c3dda35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
171: #1 0x000000000076a64f in rr::CompressedWriter::compression_thread (this=0x25cf060) at /home/rr_test/rr/src/CompressedWriter.cc:212
171: #2 0x00000000007699b6 in rr::CompressedWriter::compression_thread_callback (p=0x25cf060) at /home/rr_test/rr/src/CompressedWriter.cc:28
171: #3 0x00007f049c3d9ea5 in start_thread () from /lib64/libpthread.so.0
171: #4 0x00007f049b86f96d in clone () from /lib64/libc.so.6
171:
171: Thread 5 (Thread 0x7f049986b700 (LWP 23969) "compress mmaps"):
171: #0 0x00007f049c3dda35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
171: #1 0x000000000076a64f in rr::CompressedWriter::compression_thread (this=0x25ced10) at /home/rr_test/rr/src/CompressedWriter.cc:212
171: #2 0x00000000007699b6 in rr::CompressedWriter::compression_thread_callback (p=0x25ced10) at /home/rr_test/rr/src/CompressedWriter.cc:28
171: #3 0x00007f049c3d9ea5 in start_thread () from /lib64/libpthread.so.0
171: #4 0x00007f049b86f96d in clone () from /lib64/libc.so.6
171:
171: Thread 4 (Thread 0x7f049a06c700 (LWP 23968) "compress data"):
171: #0 0x00007f049c3dda35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
171: #1 0x000000000076a64f in rr::CompressedWriter::compression_thread (this=0x25ce760) at /home/rr_test/rr/src/CompressedWriter.cc:212
171: #2 0x00000000007699b6 in rr::CompressedWriter::compression_thread_callback (p=0x25ce760) at /home/rr_test/rr/src/CompressedWriter.cc:28
171: #3 0x00007f049c3d9ea5 in start_thread () from /lib64/libpthread.so.0
171: #4 0x00007f049b86f96d in clone () from /lib64/libc.so.6
171:
171: Thread 3 (Thread 0x7f049a86d700 (LWP 23967) "compress data"):
171: #0 0x00007f049c3dda35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
171: #1 0x000000000076a64f in rr::CompressedWriter::compression_thread (this=0x25ce760) at /home/rr_test/rr/src/CompressedWriter.cc:212
171: #2 0x00000000007699b6 in rr::CompressedWriter::compression_thread_callback (p=0x25ce760) at /home/rr_test/rr/src/CompressedWriter.cc:28
171: #3 0x00007f049c3d9ea5 in start_thread () from /lib64/libpthread.so.0
171: #4 0x00007f049b86f96d in clone () from /lib64/libc.so.6
171:
171: Thread 2 (Thread 0x7f049b46f700 (LWP 23966) "compress events"):
171: #0 0x00007f049c3dda35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
171: #1 0x000000000076a64f in rr::CompressedWriter::compression_thread (this=0x25ce3c0) at /home/rr_test/rr/src/CompressedWriter.cc:212
171: #2 0x00000000007699b6 in rr::CompressedWriter::compression_thread_callback (p=0x25ce3c0) at /home/rr_test/rr/src/CompressedWriter.cc:28
171: #3 0x00007f049c3d9ea5 in start_thread () from /lib64/libpthread.so.0
171: #4 0x00007f049b86f96d in clone () from /lib64/libc.so.6
171:
171: Thread 1 (Thread 0x7f049d0b4780 (LWP 23965) "rr"):
171: #0 0x00007f049b8365b0 in waitid () from /lib64/libc.so.6
171: #1 0x0000000000938e82 in rr::Task::wait_exit (this=0x25cfe40) at /home/rr_test/rr/src/Task.cc:144
171: #2 0x00000000009392c1 in rr::Task::proceed_to_exit (this=0x25cfe40, wait=true) at /home/rr_test/rr/src/Task.cc:182
171: #3 0x0000000000833997 in rr::handle_ptrace_exit_event (t=0x25cfe40) at /home/rr_test/rr/src/RecordSession.cc:240
171: #4 0x000000000083ce96 in rr::RecordSession::record_step (this=0x25cde80) at /home/rr_test/rr/src/RecordSession.cc:2380
171: #5 0x00000000008305de in rr::record (args=std::vector of length 1, capacity 8 = {...}, flags=...) at /home/rr_test/rr/src/RecordCommand.cc:662
171: #6 0x000000000083114e in rr::RecordCommand::run (this=0xd87130 <rr::RecordCommand::singleton>, args=std::vector of length 1, capacity 8 = {...}) at /home/rr_test/rr/src/RecordCommand.cc:812
171: #7 0x000000000099515f in main (argc=8, argv=0x7ffd12a5df78) at /home/rr_test/rr/src/main.cc:268
171: Detaching from program: /home/rr_test/rr/_build/bin/rr, process 23965
171: [Inferior 1 (process 23965) detached]
171: rr itself crashed (SIGSEGV). This shouldn't happen!
171: eight 0' -ex 'b rr::force_close_record_session' -ex 'p rr::force_close_record_session()' -ex detach -ex q </dev/null 2>&1
171: GNU gdb (GDB) 11.1
171: Copyright (C) 2021 Free Software Foundation, Inc.
171: License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
171: This is free software: you are free to change and redistribute it.
171: There is NO WARRANTY, to the extent permitted by law.
171: Type "show copying" and "show warranty" for details.
171: This GDB was configured as "x86_64-pc-linux-gnu".
171: Type "show configuration" for configuration details.
171: For bug reporting instructions, please see:
171: <https://www.gnu.org/software/gdb/bugs/>.
171: Find the GDB manual and other documentation resources online at:
171: <http://www.gnu.org/software/gdb/documentation/>.
171:
171: For help, type "help".
171: Type "apropos word" to search for commands related to "word".
171: Attaching to process 23965
171: [New LWP 23966]
171: [New LWP 23967]
171: [New LWP 23968]
171: [New LWP 23969]
171: [New LWP 23970]
171: [Thread debugging using libthread_db enabled]
171: Using host libthread_db library "/lib64/libthread_db.so.1".
171: 0x00007f049b8365b0 in waitid () from /lib64/libc.so.6
171: Breakpoint 1 at 0x82feba: file /home/rr_test/rr/src/RecordCommand.cc, line 584.
171:
171: Thread 1 "rr" hit Breakpoint 1, rr::force_close_record_session () at /home/rr_test/rr/src/RecordCommand.cc:584
171: 584 if (static_session) {
171: The program being debugged stopped while in a function called from GDB.
171: Evaluation of the expression containing the function
171: (rr::force_close_record_session()) will be abandoned.
171: When the function is done executing, GDB will silently stop.
171: Detaching from program: /home/rr_test/rr/_build/bin/rr, process 23965
171: [Inferior 1 (process 23965) detached]
171: --------------------------------------------------
171: record.out:
171: --------------------------------------------------
171: EXIT-SUCCESS
171: --------------------------------------------------
171: Test fd_cleanup failed, leaving behind /tmp/rr-test-fd_cleanup-oascXRGlL
171: To replay the failed test, run
171: _RR_TRACE_DIR=/tmp/rr-test-fd_cleanup-oascXRGlL rr replay
1/1 Test #171: fd_cleanup-no-syscallbuf .........***Failed Error regular expression found in output. Regex=[FAILED]124.57 sec
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 124.62 sec
The following tests FAILED:
171 - fd_cleanup-no-syscallbuf (Failed)
Errors while running CTest
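For reference, this is the replay run mentioned above, exactly as suggested at the end of the log (the trace path is taken from the failure message); a plain continue at the (rr) prompt then runs through to the expected result:

    _RR_TRACE_DIR=/tmp/rr-test-fd_cleanup-oascXRGlL rr replay
    (rr) continue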
Hello, just saw this issue and maybe I can add some information.
This test fd_cleanup-no-syscallbuf finishes on my system in less
than a second, so I assume the hang itself is the original issue here.
Most commands in the test are executed through test-monitor, which has
its own timeout, and that is what got hit here: "171: timeout 120 exceeded".
When test-monitor detects this timeout, it prints all kinds of diagnostic information
and tries to stop rr from whatever it is doing by attaching a gdb session
and executing p rr::force_close_record_session().
This can happen while rr is in a state where such a call causes a crash.
For this reason the workaround of setting a breakpoint on rr::force_close_record_session,
followed by a detach, is in place: it at least gets test-monitor out of the gdb session;
otherwise the much higher ctest timeout would have to be waited out.
So rr::force_close_record_session is usually not part of the test itself; calling it is just the attempt to stop the recording.
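Pieced together from the output above, the two gdb invocations test-monitor runs look roughly like this; <rr-pid> stands for the recording rr process (23965 in the log), and the leading flags of the second invocation are an assumption, since its beginning is truncated in the log:

    # first gdb: dump backtraces of all rr threads, then quit
    gdb -p <rr-pid> -ex 'set confirm off' -ex 'set height 0' \
        -ex 'thread apply all bt' -ex q </dev/null 2>&1

    # second gdb: break on the shutdown function, call it, detach regardless
    gdb -p <rr-pid> ... -ex 'set height 0' \
        -ex 'b rr::force_close_record_session' \
        -ex 'p rr::force_close_record_session()' \
        -ex detach -ex q </dev/null 2>&1

Because the breakpoint fires the moment the injected call enters rr::force_close_record_session, gdb abandons the expression, and the detach that follows gets test-monitor out of the session even if the call itself would crash or hang rr.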
If you are just trying to debug why the test takes that much time, you could check
whether it also hangs when you run the test outside the ctest framework.
For example, the test log shows /bin/bash "source_dir/src/test/basic_test.run" "fd_cleanup" "-n" "bin_dir" "120", to
which I just add a -x, giving /bin/bash -x "source_dir/src/test/basic_test.run" "fd_cleanup" "-n" "bin_dir" "120".
Executed interactively, this shows each command as it runs, and it can often be simplified to something like bin/rr record -n bin/fd_cleanup.
If that hangs, attaching another gdb to it and posting a backtrace here might help get the issue diagnosed.
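Put together, a minimal by-hand reproduction could look like this (source_dir and bin_dir are the placeholders from the log; the pidof call is just a convenience and assumes a single rr process is running):

    # run the test script outside ctest, tracing each command
    /bin/bash -x "source_dir/src/test/basic_test.run" "fd_cleanup" "-n" "bin_dir" "120"

    # or skip the harness and record the test binary directly
    bin/rr record -n bin/fd_cleanup

    # if the recording hangs, grab backtraces from another terminal
    gdb -p "$(pidof rr)" -ex 'set height 0' -ex 'thread apply all bt' -ex detach -ex q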
Thanks @bernhardu, this is very useful information. Could you please take the time to copy-paste and summarize that information at https://github.com/rr-debugger/rr/wiki/Building-And-Installing#tests ? That would allow closing this question-issue (I'll create a new one with the actual debugging details).
Can't you edit the wiki yourself?
I didn't feel confident enough on this point; there is no PR step, so I'd be directly editing the "main documentation".
But I can try "soon" and either post a suggestion here, or adjust the wiki and let others fix possible issues with my change, if you want me to.
> adjust the wiki and let others fix possible issues with my change if you want me to.
Do that and just leave a note here that you did.
Thanks!
Sorry for my delayed response; I started adding a few words here.