[Bug] Core Dump in `mirror_replay` Test Suite During Execution
Apache Cloudberry version
main branch
What happened
The mirror_replay test suite is consistently generating a core dump during execution. This test is part of the greenplum_schedule running under the ic-good-opt-off (make -C src/test/regress installcheck-good) test matrix configuration. The core dump's stack trace shows that the failure occurs during append-only segment file handling in the startup process.
Environment
- Project: Apache Cloudberry
- Test Suite: mirror_replay
- Schedule: greenplum_schedule
- Test Matrix Config: ic-good-opt-off
- Build Type: Debug build with the following configuration:
--enable-debug
--enable-profiling
--enable-cassert
--enable-debug-extensions
Stack Trace
The core dump stack trace indicates the crash occurs during append-only segment file handling. The PANIC raised in log_invalid_page() reaches errfinish(), which calls abort(), so the startup process dies with SIGABRT (signal 6, matching the siginfo dump below):
Thread 1 (Thread 0x7f9cf7a5ed00 (LWP 8442)):
#0 0x00007f9cf8f11a6c in __pthread_kill_implementation () from /lib64/libc.so.6
#1 0x00007f9cf8ec4686 in raise () from /lib64/libc.so.6
#2 0x00007f9cf8eae833 in abort () from /lib64/libc.so.6
#3 0x00007f9cf9ca28bf in errfinish (filename=<optimized out>, filename@entry=0x7f9cfa27ef7a "xlogutils.c", lineno=lineno@entry=103, funcname=funcname@entry=0x7f9cfa27f060 <__func__.5> "log_invalid_page") at elog.c:819
#4 0x00007f9cf97272d6 in log_invalid_page (present=false, blkno=1, forkno=MAIN_FORKNUM, node=...) at xlogutils.c:103
#5 XLogAOSegmentFile (rnode=..., segmentFileNum=1) at xlogutils.c:567
#6 0x00007f9cf9d590a6 in ao_truncate_replay (record=<optimized out>, record=<optimized out>) at cdbappendonlyxlog.c:177
#7 0x00007f9cf971b7e5 in StartupXLOG () at xlog.c:7824
#8 0x00007f9cf9a6d124 in StartupProcessMain () at startup.c:267
#9 0x00007f9cf9767e52 in AuxiliaryProcessMain (argc=<optimized out>, argc@entry=2, argv=<optimized out>, argv@entry=0x7ffd0b1cc490) at bootstrap.c:483
#10 0x00007f9cf9a6cbd4 in StartChildProcess (type=StartupProcess) at postmaster.c:6139
#11 PostmasterMain (argc=argc@entry=7, argv=argv@entry=0x137aa30) at postmaster.c:1668
#12 0x000000000040282f in main (argc=7, argv=0x137aa30) at main/main.c:270
$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad = {8442, 1000, 0 <repeats 26 times>}, _kill = {si_pid = 8442, si_uid = 1000}, _timer = {si_tid = 8442, si_overrun = 1000, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 8442, si_uid = 1000, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 8442, si_uid = 1000, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x3e8000020fa, _addr_lsb = 0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band = 4294967304442, si_fd = 0}, _sigsys = {_call_addr = 0x3e8000020fa, _syscall = 0, _arch = 0}}}
Impact
- Blocks successful execution of mirror_replay test suite
- May indicate potential issues with append-only segment file handling during mirror synchronization
What you think should happen instead
Analysis
- The crash occurs in the startup process during XLOG replay
- Specifically fails in log_invalid_page() function in xlogutils.c
- The context suggests this is related to append-only segment file handling during mirror replay
- The immediate cause appears to be a reference to an invalid (missing) page during AO segment file processing; see the sketch below
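For orientation, here is a simplified sketch of the failing path, modeled on PostgreSQL's xlogutils.c and the frames above. The bodies are assumptions for illustration, not the actual Cloudberry source:

```c
/*
 * Sketch only -- modeled on PostgreSQL's log_invalid_page() and the
 * Greenplum/Cloudberry AO wrapper named in frames #4/#5; not copied
 * from the Cloudberry source.
 */
static void
log_invalid_page(RelFileNode node, ForkNumber forkno, BlockNumber blkno,
                 bool present)
{
    /*
     * Before consistency is reached, a missing page is merely remembered
     * in a hash table and re-checked later.  Once recovery has reached a
     * consistent state, a reference to an invalid page is treated as
     * corruption and escalated to PANIC -- errfinish() then calls
     * abort(), which produces exactly the SIGABRT core dump above.
     */
    if (reachedConsistency)
    {
        report_invalid_page(WARNING, node, forkno, blkno, present);
        elog(PANIC, "WAL contains references to invalid pages");
    }

    /* ... otherwise remember the reference for later validation ... */
}

/*
 * The AO wrapper reuses the invalid-page machinery, passing the segment
 * file number where heap code would pass a block number.  Frame #4's
 * arguments (present=false, blkno=1) therefore mean: segment file 1 of
 * the relation was expected to exist during replay but was not found.
 */
void
XLogAOSegmentFile(RelFileNode rnode, uint32 segmentFileNum)
{
    log_invalid_page(rnode, MAIN_FORKNUM, segmentFileNum, false);
}
```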
How to reproduce
Ensure your system can generate core files (e.g. `ulimit -c unlimited`), then run the dev test command:
make -C src/test/regress installcheck-good
The issue reproduces consistently with no additional steps.
Operating System
Rocky Linux 9 (should be platform independent)
Anything else
Additional Context
The error occurs during the append-only truncate replay operation (ao_truncate_replay), suggesting potential issues with one of the following (see the sketch after this list):
- Invalid segment file state during replay
- Corruption in the XLOG records
- Incorrect handling of append-only segment files during mirror synchronization
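A minimal sketch of what the truncate replay path likely looks like, inferred from frame #6 and standard PostgreSQL file APIs (GetDatabasePath, PathNameOpenFile, FileTruncate). The record layout and body are assumptions, not the actual cdbappendonlyxlog.c source:

```c
/*
 * Sketch only -- inferred from the stack trace and common
 * PostgreSQL/Greenplum replay patterns; not the Cloudberry source.
 */
static void
ao_truncate_replay(XLogReaderState *record)
{
    xl_ao_truncate *xlrec = (xl_ao_truncate *) XLogRecGetData(record);
    char       *dbPath;
    char        path[MAXPGPATH];
    File        file;

    /* Build the on-disk path of the AO segment file named in the record */
    dbPath = GetDatabasePath(xlrec->target.node.dbNode,
                             xlrec->target.node.spcNode);
    snprintf(path, MAXPGPATH, "%s/%u.%u", dbPath,
             xlrec->target.node.relNode, xlrec->target.segment_filenum);
    pfree(dbPath);

    file = PathNameOpenFile(path, O_RDWR | PG_BINARY);
    if (file < 0)
    {
        /*
         * The segment file referenced by the WAL record is missing.  This
         * is the branch the core dump comes from: once recovery is
         * consistent, XLogAOSegmentFile() -> log_invalid_page() PANICs.
         */
        XLogAOSegmentFile(xlrec->target.node, xlrec->target.segment_filenum);
        return;
    }

    /* Otherwise truncate the segment file to the offset in the record */
    if (FileTruncate(file, xlrec->target.offset,
                     WAIT_EVENT_DATA_FILE_TRUNCATE) < 0)
        ereport(WARNING,
                (errcode_for_file_access(),
                 errmsg("could not truncate file \"%s\": %m", path)));

    FileClose(file);
}
```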
Are you willing to submit PR?
- [ ] Yes, I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
FYI: Non-debug builds produce the following:
Thread 1 (Thread 0x7eff22bfad00 (LWP 95861)):
#0 0x00007eff240ada6c in __pthread_kill_implementation () from /lib64/libc.so.6
#1 0x00007eff24060686 in raise () from /lib64/libc.so.6
#2 0x00007eff2404a833 in abort () from /lib64/libc.so.6
#3 0x00007eff24d18c46 in errfinish () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#4 0x00007eff2484f836 in XLogAOSegmentFile () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#5 0x00007eff24db0e96 in ao_truncate_replay.isra () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#6 0x00007eff248452ff in StartupXLOG () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#7 0x00007eff24b29fb5 in StartupProcessMain () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#8 0x00007eff24882642 in AuxiliaryProcessMain () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#9 0x00007eff24b24db5 in StartChildProcess () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#10 0x00007eff24b298cf in PostmasterMain () from /usr/local/cloudberry-db-99.0.0/lib/libpostgres.so
#11 0x00000000004027db in main ()
$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad = {95861, 1000, 0 <repeats 26 times>}, _kill = {si_pid = 95861, si_uid = 1000}, _timer = {si_tid = 95861, si_overrun = 1000, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 95861, si_uid = 1000, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 95861, si_uid = 1000, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x3e800017675, _addr_lsb = 0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band = 4294967391861, si_fd = 0}, _sigsys = {_call_addr = 0x3e800017675, _syscall = 0, _arch = 0}}}
CI seems to have lost the isolation2 job that runs `make -C src/test/isolation2 installcheck`.
@yjhjstz & @avamingli I hope to bring it and others online soon. I am able to run two of the isolation2 tests and did notice there are failures (output differences). They can be seen here.
https://github.com/edespino/cloudberry/actions/runs/12364538041
Hi, at a glance that's a case we should fix. Please feel free to create the PR bringing isolation2 back if that was the only case that failed; I will help you fix the diffs there. (On vacation today; perhaps tomorrow I will be back.)
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /__w/cloudberry/cloudberry/src/test/isolation2/expected/parallel_retrieve_cursor/explain.out /__w/cloudberry/cloudberry/src/test/isolation2/results/parallel_retrieve_cursor/explain.out
--- /__w/cloudberry/cloudberry/src/test/isolation2/expected/parallel_retrieve_cursor/explain.out	2024-12-16 17:38:39.620082360 -0800
+++ /__w/cloudberry/cloudberry/src/test/isolation2/results/parallel_retrieve_cursor/explain.out	2024-12-16 17:38:39.628082370 -0800
@@ -113,40 +113,40 @@
 QUERY PLAN
 ___________
 Seq Scan on pg_catalog.pg_class
- Output: oid, relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, reltoastrelid, relhasindex, relisshared, relpersistence, relkind, relnatts, relchecks, relhasrules, relhastriggers, relhassubclass, relrowsecurity, relforcerowsecurity, relispopulated, relreplident, relispartition, relisivm, relrewrite, relfrozenxid, relminmxid, relacl, reloptions, relpartbound
-GP_IGNORE:(3 rows)
+ Output: oid, relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, reltoastrelid, relhasindex, relisshared, relpersistence, relkind, relnatts, relchecks, relhasrules, relhastriggers, relhassubclass, relrowsecurity, relforcerowsecurity, relispopulated, relreplident, relispartition, relisivm, relisdynamic, relrewrite, relfrozenxid, relminmxid, relacl, reloptions, relpartbound
+GP_IGNORE:(4 rows)
Help to add the `relisdynamic` field to fix the test. @avamingli
@edespino can you help to bring the `make installcheck-cbdb-parallel` parallel test back?
> @edespino can you help to bring the `make installcheck-cbdb-parallel` parallel test back?
Yes I will
> @edespino can you help to bring the `make installcheck-cbdb-parallel` parallel test back?
@yjhjstz If you could help with an approval for https://github.com/apache/cloudberry/pull/819 it would be appreciated.
@yjhjstz FYI: installcheck-cbdb-parallel is now live: https://github.com/apache/cloudberry/actions/runs/12502691175
@edespino please help to set `MAX_CONNECTIONS=5` in the installcheck-cbdb-parallel env?
> @edespino please help to set `MAX_CONNECTIONS=5` in the installcheck-cbdb-parallel env?
@yjhjstz This is already configured:
https://github.com/apache/cloudberry/blob/a03d2b857a9b240326ad47560db3191f47cf503c/src/test/regress/GNUmakefile#L217
fixed.