
PG18 Support - Regression tests sanity

Open m3hm3t opened this issue 7 months ago • 4 comments

Latest diffs with PG18.0 (Nov-17): https://github.com/citusdata/citus/actions/runs/19422731228

m3hm3t · May 21 '25 11:05

OLDER RESULTS:

Using the pg18_support branch (PR #7981), CI run https://github.com/citusdata/citus/actions/runs/15157798085 completed on May 21 2025.

Top recurring error messages (first scan of regression.diffs)

| Rank | Message excerpt | Hits |
|-----:|-----------------|-----:|
| 1 | `could not open relation with OID 0` | 556 |
| 2 | `current transaction is aborted, commands ignored until end of transaction block - Side Effect` | 129 |
| 3 | `could not determine which collation to use for LIKE` | 109 |
| 4 | `cannot push down this subquery` | 33 |
| 5 | `attempted columnar write on relation 17862 to invalid logical offset: 0` | 17 |
| 6 | `UPDATE and CTID scans not supported for ColumnarScan` | 15 |
| 7 | `relation "zero_col_columnar" does not exist` | 15 |
| 8 | `relation "pushdown_test" does not exist` | 15 |
| 9 | `attempted columnar write on relation 18893 to invalid logical offset: 0` | 14 |
| 10 | `Unrecognized range table id 2` | 13 |
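
A "first scan" tally like the table above can be reproduced with a short script over `regression.diffs`. This is a minimal sketch (the actual script used for the scan is not shown in this issue), assuming messages are normalized by replacing digits with `N` so that differing OIDs and row counts fall into the same bucket:

```python
import re
from collections import Counter

def top_errors(diff_path, limit=10):
    """Tally ERROR/WARNING messages on '+' (unexpected output) lines of a
    pg_regress regression.diffs file, with digits normalized to 'N'."""
    counts = Counter()
    pattern = re.compile(r"(?:ERROR|WARNING):\s*(.*)")
    with open(diff_path) as f:
        for line in f:
            if not line.startswith("+"):
                continue  # only count messages the new output introduced
            match = pattern.search(line)
            if match:
                counts[re.sub(r"\d+", "N", match.group(1).strip())] += 1
    return counts.most_common(limit)
```

Note that this only counts messages on `+` lines, i.e. errors the PG18 run produced that the expected output did not; `-` lines (expected messages that disappeared) would need a second pass.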

m3hm3t · Jul 22 '25 10:07

No | tests | test_group | example
1 @naisila columnar_create.out - WARNING: Metadata index chunk_pkey is not available, this might mean slower read/writes on columnar tables. This is expected during Postgres upgrades and not expected otherwise.
- WARNING: Metadata index chunk_group_pkey is not available, this might mean slower read/writes on columnar tables. This is expected during Postgres upgrades and not expected otherwise.
- N
- (N row)
+ (N rows)
- N
- (N row)
+ (N rows)
- N
- (N row)
+ (N rows)
- N
2 columnar_query.out - f1 | f2 | fs
- (N rows)
-
+ ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
3 columnar_first_row_number.out - row_count | first_row_number
- N | N
- N | N
- N | N
- N | N
- N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- row_count | first_row_number
- N | N
- N | N
4 columnar_drop.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
+ ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
5 @naisila columnar_indexes.out - ERROR: could not create unique index "columnar_table_a_idx"
- DETAIL: Key (a)=(N) is duplicated.
- t
+ f
- (N rows)
+ (N rows)
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
6 columnar_paths.out - t
+ f
- t
+ f
- t
+ f
- t
+ f
- Index Scan using uncorrelated_idx on uncorrelated (actual rows=N loops=N)
+ Index Scan using uncorrelated_idx on uncorrelated (actual rows=N.N loops=N)
- (N rows)
+ Index Searches: N
7 columnar_insert.out - relname | stripe_num | chunk_group_count | row_count
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- relname | stripe_num | value_count
- (N rows)
-
8 @naisila columnar_alter.out - (N rows)
+ (N rows)
- (N rows)
+ (N rows)
9 @naisila columnar_lz4.out - N
+ N
10 @naisila columnar_zstd.out - t
+ f
- t
+ f
- N
+ N
11 columnar_matview.out, columnar_rollback.out - count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
12 columnar_truncate.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
13 columnar_vacuum.out + ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- N | N
14 columnar_types_without_comparison.out - minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
15 columnar_chunk_filtering.out - -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Append (actual rows=N loops=N)
- -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N loops=N)
+ -> Append (actual rows=N.N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N.N loops=N)
- -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N loops=N)
+ -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N.N loops=N)
+ Storage: Memory Maximum Storage: NkB
- (N rows)
16 columnar_recursive.out - relname | count
- t1 | N
- t2 | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
17 @naisila columnar_write_concurrency_index.out + s2: WARNING: resource was not closed: snapshot reference 0x563c75ac37b0
18 @naisila columnar_vacuum_vs_insert.out - s2: INFO: "public.test_vacuum_vs_insert": found N removable, N nonremovable row versions in N pages
+ s2: INFO: "public.test_vacuum_vs_insert": found N removable, N nonremovable row versions in N pages
- N|N
- N|N
- N|N
- N|N
- N|N
- N|N
- (N rows)
+ (N rows)
19 columnar_temp_tables.out + ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
20 @naisila columnar_index_concurrency.out - t
+ f
21 shard_move_constraints.out - sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_parent_to_parent_8970000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_child_to_parent_8970008_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_2020_01_01_8970008_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_parent_to_parent_8970000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_child_to_parent_8970008_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_2020_01_01_8970008_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
22 shard_move_constraints_blocking.out + fkey_from_parent_to_parent_8970000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_child_to_parent_8970008_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_2020_01_01_8970008_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_parent_to_parent_8970000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
+ fkey_from_child_to_parent_8970008_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
- sensors_2020_01_01_8970008_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970024(eventdatetime, measureid)
23 follower_single_node.out - N | N
- N | N
- N | N
- N | N
- (N rows)
+ (N rows)
24 multi_insert_select.out + DEBUG: io N |op invalid|target invalid|state HANDED_OUT : adding cb #N, id N/aio_shared_buffer_readv_cb
+ DEBUG: io N |op invalid|target smgr|state HANDED_OUT : adding cb #N, id N/aio_md_readv_cb
+ DEBUG: io N |op readv|target smgr|state DEFINED : calling cb #N N/aio_shared_buffer_readv_cb->stage(N)
+ DEBUG: io N |op readv|target smgr|state STAGED : staged (synchronous: N, in_batch: N)
+ DEBUG: io N |op readv|target smgr|state COMPLETED_IO : after shared completion: distilled result: (status OK, id N, error_data: N, result N), raw_result: N
+ DEBUG: io N |op readv|target smgr|state COMPLETED_SHARED: after local completion: result: (status OK, id N, error_data N, result N), raw_result: N
25 window_functions.out + Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY ((N / (N + sum(users_table.value_2)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
26 multi_insert_select_conflict.out + -> Distributed Subplan 48_1
+ -> Custom Scan (Citus Adaptive)
- (N rows)
+ (N rows)
27 insert_select_into_local_table.out - NOTICE: l2={"(,,value,,,,col_7,)","(,,value2,,,,col_7,)"}
- CONTEXT: PL/pgSQL function query_results_equal(text,text,text) line XX at RAISE
- NOTICE: l2={"(,,value,,,,col_7,)","(,,value2,,,,col_7,)"}
- CONTEXT: PL/pgSQL function query_results_equal(text,text,text) line XX at RAISE
- query_results_equal
- t
- (N row)
-
+ ERROR: Unrecognized range table id N
+ CONTEXT: SQL statement "
+ INSERT INTO local_dest_table (col_3)
+ SELECT t1.text_col_1
28 ✅ multi_partitioning_utils.out + Not-null constraints:
+ "parent_table_id_not_null" NOT NULL "id"
+ Not-null constraints:
+ "parent_table_id_not_null" NOT NULL "id"
29 subquery_in_targetlist.out - ERROR: correlated subqueries are not supported when the FROM clause contains a reference table
+ max | ?column?
+ --------------------------------+----------
+ Thu Nov N N:N:N.N N | N
+ (N row)
+
- ERROR: correlated subqueries are not supported when the FROM clause contains a CTE or subquery
+ count | max
+ -------+---------------------------------
+ N | Thu Nov N N:N:N.N N
+ (N row)
+
30 subquery_in_where.out - N | N
+ N | N
31 ✅ pg17.out + DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus"
+ DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus.so"
+ Not-null constraints:
+ "partitioned_table_a_not_null" NOT NULL "a"
+ Not-null constraints:
+ "partitioned_table_a_not_null" NOT NULL "a"
+ Not-null constraints:
+ "partitioned_table_a_not_null" NOT NULL "a"
+ Not-null constraints:
+ "partitioned_table_a_not_null" NOT NULL "a"
+ Not-null constraints:
+ "partitioned_table_a_not_null" NOT NULL "a"
32 ✅ @naisila pg18.out - ERROR: cannot create statistics on the specified relation
- DETAIL: CREATE STATISTICS only supports tables, foreign tables and materialized views.
+ ERROR: CREATE STATISTICS only supports relation names in the FROM clause
33 multi_explain.out - "Alias": "lineitem"
+ "Alias": "lineitem",
+ Index Searches: N
+ Index Searches: N
- "Actual Loops": N +
+ "Actual Loops": N, +
- "Actual Loops": N
+ "Actual Loops": N,
- "Actual Loops": N
+ "Actual Loops": N,
+ Window: w1 AS (ROWS UNBOUNDED PRECEDING)
+ Storage: Memory Maximum Storage: NkB
34 multi_subquery_window_functions.out - Sort Key: (sum((sum(users_table.value_2) OVER (?)))) DESC, users_table.user_id DESC
+ Sort Key: (sum((sum(users_table.value_2) OVER w1))) DESC, users_table.user_id DESC
- Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER (?))))
+ Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER w1)))
- Group Key: users_table.user_id, (sum(users_table.value_2) OVER (?))
+ Group Key: users_table.user_id, (sum(users_table.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table.user_id)
+ Window: w1 AS (PARTITION BY events_table.user_id)
- Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER (?))
+ Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table_1.user_id)
+ Window: w1 AS (PARTITION BY events_table_1.user_id)
35 sql_procedure.out - CONTEXT: SQL function "test_procedure_commit" during startup
+ CONTEXT: SQL function "test_procedure_commit" statement N
- CONTEXT: SQL function "test_procedure_rollback" during startup
+ CONTEXT: SQL function "test_procedure_rollback" statement N
36 multi_subquery_misc.out - ERROR: could not create distributed plan
- DETAIL: Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
- HINT: Consider using PL/pgSQL functions instead.
- CONTEXT: SQL function "sql_subquery_test" statement N
+ sql_subquery_test
+ -------------------
+ N
+ (N row)
+
37 multi_outer_join_columns.out - Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, (max(remote_scan.max) OVER (?)), remote_scan.worker_column_3
- Group Key: remote_scan.id, max(remote_scan.max) OVER (?)
38 ✅ @naisila multi_array_agg.out - {}
+
39 ✅ @naisila multi_task_assignment_policy.out, recurring_join_pushdown.out, shard_rebalancer.out + DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus"
+ DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus.so"
40 ✅ alter_table_set_access_method.out + heap_'tbl_a_not_null
- (N rows)
+ (N rows)
+ heap_'tbl_2789465768_a_not_null
- (N rows)
+ (N rows)
41 merge.out + N | N | N | N
+ N | N | N | N
- (N rows)
+ (N rows)
42 ✅ generated_identity.out + Not-null constraints:
+ "color_color_id_not_null" NOT NULL "color_id"
+ "color_color_name_not_null" NOT NULL "color_name"
+ Not-null constraints:
+ "color_color_id_not_null" NOT NULL "color_id"
+ "color_color_name_not_null" NOT NULL "color_name"
43 ✅ @naisila multi_extension.out + SQL statement "DO LANGUAGE plpgsql
+ $$
+ BEGIN
+ IF EXISTS (SELECT N FROM pg_dist_shard where shardstorage = 'c') THEN
+ RAISE EXCEPTION 'cstore_fdw tables are deprecated as of Citus N.N'
+ USING HINT = 'Install Citus N.N and convert your cstore_fdw tables to the columnar access method before upgrading further';
+ END IF;
+ END;
+ $$"
+ extension script file "citus--N.N-N--N.N-N.sql", near line N
+ SQL statement "DO $$
+ BEGIN
44 ✅ @naisila alter_role_propagation.out + WARNING: setting an MD5-encrypted password
+ DETAIL: MD5 password support is deprecated and will be removed in a future release of PostgreSQL.
+ HINT: Refer to the PostgreSQL documentation for details about migrating to another password type.
45 ✅ @naisila citus_local_tables.out, single_shard_table_udfs.out - {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N}
+ {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N}
46 ✅ @naisila partitioning_issue_3970.out + part_table | part_table_my_seq_not_null | NOT NULL my_seq
+ part_table | part_table_seq_not_null | NOT NULL seq
+ part_table | part_table_work_ymdt_not_null | NOT NULL work_ymdt
+ part_table_p202008 | part_table_my_seq_not_null | NOT NULL my_seq
+ part_table_p202008 | part_table_seq_not_null | NOT NULL seq
+ part_table_p202008 | part_table_work_ymdt_not_null | NOT NULL work_ymdt
+ part_table_p202009 | part_table_my_seq_not_null | NOT NULL my_seq
+ part_table_p202009 | part_table_seq_not_null | NOT NULL seq
+ part_table_p202009 | part_table_work_ymdt_not_null | NOT NULL work_ymdt
- (N rows)
+ (N rows)
+ part_table_1690000 | part_table_1690000_my_seq_not_null | NOT NULL my_seq
47 ✅ @naisila multi_repartition_join_task_assignment.out + DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus"
+ DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus.so"
+ DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus"
+ DEBUG: find_in_path: trying "/usr/lib/postgresql/N/lib/citus.so"
48 ✅ @naisila multi_prune_shard_list.out - {OPEXPR :opno N :opfuncid N :opresulttype N :opretset false :opcollid N :inputcollid N :args ({VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} {CONST :consttype N :consttypmod -N :constcollid N :constlen -N :constbyval false :constisnull true :location -N :constvalue <>}) :location -N}
+ {OPEXPR :opno N :opfuncid N :opresulttype N :opretset false :opcollid N :inputcollid N :args ({VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} {CONST :consttype N :consttypmod -N :constcollid N :constlen -N :constbyval false :constisnull true :location -N :constvalue <>}) :location -N}
49 local_dist_join_mixed.out - DEBUG: generating subplan XXX_1 for subquery SELECT id FROM local_dist_join_mixed.local u1 WHERE true
+ DEBUG: generating subplan 82_1 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u1 WHERE true
- DEBUG: generating subplan XXX_2 for subquery SELECT id FROM local_dist_join_mixed.local u2 WHERE true
+ DEBUG: generating subplan 82_2 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u2 WHERE true
- DEBUG: generating subplan XXX_3 for subquery SELECT id FROM local_dist_join_mixed.local u3 WHERE true
+ DEBUG: generating subplan 82_3 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u3 WHERE true
- DEBUG: generating subplan XXX_4 for subquery SELECT id FROM local_dist_join_mixed.local u4 WHERE true
+ DEBUG: generating subplan 82_4 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u4 WHERE true
- DEBUG: generating subplan XXX_5 for subquery SELECT id FROM local_dist_join_mixed.local u5 WHERE true
+ DEBUG: generating subplan 82_5 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u5 WHERE true
- DEBUG: generating subplan XXX_6 for subquery SELECT id FROM local_dist_join_mixed.local u6 WHERE true
+ DEBUG: generating subplan 82_6 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u6 WHERE true
50 query_single_shard_table.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
51 multi_router_planner_fast_path.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
52 ✅ @naisila multi_metadata_sync.out + WARNING: setting an MD5-encrypted password
+ DETAIL: MD5 password support is deprecated and will be removed in a future release of PostgreSQL.
+ HINT: Refer to the PostgreSQL documentation for details about migrating to another password type.
- mx_testing_schema.mx_test_table | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | s | f
+ mx_testing_schema.mx_test_table | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | s | f
- mx_testing_schema.mx_test_table | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | s | f
+ mx_testing_schema.mx_test_table | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | s | f
53 mixed_relkind_tests.out - Sort Key: remote_scan.a, (count(*) OVER (?))
+ Sort Key: remote_scan.a, (count(*) OVER w1)
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_2)
- (N rows)
+ (N rows)
54 ssl_by_default.out - ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384
+ HIGH:MEDIUM:+3DES:!aNULL
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
55 sqlancer_failures.out - count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
- count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
56 ✅ @naisila start_stop_metadata_sync.out - events | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | s | f
- events_2021_feb | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | s | f
- events_2021_jan | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | s | f
- events_replicated | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | c | f
- events_replicated_2021_feb | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | c | f
- events_replicated_2021_jan | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varnosyn N :varattnosyn N :location -N} | N | c | f
+ events | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | s | f
+ events_2021_feb | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | s | f
+ events_2021_jan | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | s | f
+ events_replicated | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | c | f
+ events_replicated_2021_feb | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | c | f
+ events_replicated_2021_jan | h | {VAR :varno N :varattno N :vartype N :vartypmod -N :varcollid N :varnullingrels (b) :varlevelsup N :varreturningtype N :varnosyn N :varattnosyn N :location -N} | N | c | f
57 ✅ @naisila multi_mx_hide_shard_names.out - -> Index Scan using pg_class_relname_nsp_index on pg_class c
- Index Cond: (relname = 'test_table'::text)
+ -> Seq Scan on pg_class c
- (N rows)
+ (N rows)
- NOTICE: backend type switched to: background worker
+ NOTICE: backend type switched to: autovacuum worker
+ test_index_1130000
+ test_index_1130002
+ test_table_102008_1130004
+ test_table_102008_1130006
+ test_table_1130000
58 local_shard_execution_replicated.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
- (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ (N rows)
59 local_shard_execution.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
+ Index Searches: N
+ Buffers: shared hit=N
- (N rows)
+ (N rows)
60 citus_local_tables_mx.out + fkey_cas_test_3_1
- (N rows)
+ fkey_cas_test_3_1330013_1
+ fkey_cas_test_3_1330013_2
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
61 multi_mx_explain.out - "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "supplier_mx"
+ "Alias": "supplier_mx",
- "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "customer_mx"
+ "Alias": "customer_mx",
- "Alias": "orders_mx"
+ "Alias": "orders_mx",
62 locally_execute_intermediate_results.out - NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
63 local_shard_execution_dropped_column.out - NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
64 ✅ @naisila metadata_sync_helpers.out - ERROR: cannot colocate tables test_2 and test_3
+ ERROR: did not find '}' at end of input node
65 local_execution_local_plan.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
66 multi_move_mx.out - ERROR: could not connect to the publisher: root certificate file "/non/existing/certificate.crt" does not exist
+ ERROR: subscription "subs_01" could not connect to the publisher: connection to server at "localhost" (::N), port N failed: root certificate file "/non/existing/certificate.crt" does not exist
67 citus_non_blocking_split_columnar.out, citus_split_shard_columnar_partitioned.out + sensors_2020_01_01_8970002 | fkey_from_child_to_parent_8970002_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
- sensors_2020_01_01_8970002 | sensors_2020_01_01_8970002_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
+ sensors_8970000 | fkey_from_parent_to_parent_8970000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
- sensors_8970000 | sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
+ sensors_2020_01_01_8999004 | fkey_from_child_to_parent_8999004_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
- sensors_2020_01_01_8999004 | sensors_2020_01_01_8999004_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
+ sensors_8999000 | fkey_from_parent_to_parent_8999000_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
- sensors_8999000 | sensors_8999000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
+ sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
- sensors_2020_01_01_8999005 | sensors_2020_01_01_8999005_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
+ sensors_8999001 | fkey_from_parent_to_parent_8999001_1 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
- sensors_8999001 | sensors_8999001_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)

m3hm3t avatar Oct 03 '25 08:10 m3hm3t
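A per-test grouping like the lists in this thread can be produced mechanically from a `regression.diffs` file. Below is a minimal Python sketch; the `diff ... expected/<test>.out ...` header format is an assumption about how pg_regress writes the file, and the sample text is synthetic:

```python
import re
from collections import defaultdict

def split_by_test(diff_text):
    """Group regression.diffs lines by test name, keyed on the
    'diff ... expected/<test>.out ...' header lines."""
    hunks = defaultdict(list)
    current = None
    for line in diff_text.splitlines():
        m = re.search(r'expected/(\w+)\.out', line)
        if line.startswith('diff ') and m:
            # Start collecting lines for a new test's hunk.
            current = m.group(1)
        elif current is not None:
            hunks[current].append(line)
    return dict(hunks)

# Synthetic stand-in for a real regression.diffs.
sample = """diff -dU10 expected/columnar_create.out results/columnar_create.out
-old line
+new line
diff -dU10 expected/pg18.out results/pg18.out
+another line
"""
print(sorted(split_by_test(sample)))
```

Feeding it the real file would yield one hunk list per failing test, which is roughly the shape of the tables above.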

AI-generated list (13.10.25):

No tests test_group example
1 columnar_create.out - WARNING: Metadata index chunk_pkey is not available, this might mean slower read/writes on columnar tables. This is expected during Postgres upgrades and not expected otherwise.
- WARNING: Metadata index chunk_group_pkey is not available, this might mean slower read/writes on columnar tables. This is expected during Postgres upgrades and not expected otherwise.
- N
- (N row)
+ (N rows)
- N
- (N row)
+ (N rows)
- N
- (N row)
+ (N rows)
- N
2 columnar_query.out - f1 | f2 | fs
- (N rows)
-
+ ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
3 columnar_first_row_number.out - row_count | first_row_number
- N | N
- N | N
- N | N
- N | N
- N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- row_count | first_row_number
- N | N
- N | N
4 columnar_drop.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
+ ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
5 columnar_indexes.out - ERROR: could not create unique index "columnar_table_a_idx"
- DETAIL: Key (a)=(N) is duplicated.
- t
+ f
- (N rows)
+ (N rows)
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
- ERROR: duplicate key value violates unique constraint "columnar_table_pkey"
- DETAIL: Key (a)=(N) already exists.
6 columnar_paths.out - t
+ f
- t
+ f
- t
+ f
- t
+ f
- Index Scan using uncorrelated_idx on uncorrelated (actual rows=N loops=N)
+ Index Scan using uncorrelated_idx on uncorrelated (actual rows=N.N loops=N)
- (N rows)
+ Index Searches: N
7 columnar_insert.out - relname | stripe_num | chunk_group_count | row_count
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- relname | stripe_num | value_count
- (N rows)
-
8 columnar_alter.out - (N rows)
+ (N rows)
- (N rows)
+ (N rows)
9 columnar_lz4.out - N
+ N
10 columnar_zstd.out - t
+ f
- t
+ f
- N
+ N
11 columnar_matview.out, columnar_rollback.out - count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
12 columnar_truncate.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
13 columnar_vacuum.out + ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- N | N
14 columnar_types_without_comparison.out - minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
15 columnar_chunk_filtering.out - -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Append (actual rows=N loops=N)
- -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N loops=N)
+ -> Append (actual rows=N.N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N.N loops=N)
- -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N loops=N)
+ -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N.N loops=N)
+ Storage: Memory Maximum Storage: NkB
- (N rows)
16 columnar_recursive.out - relname | count
- t1 | N
- t2 | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
17 columnar_write_concurrency_index.out + s2: WARNING: resource was not closed: snapshot reference 0x55cdc8c337b0
18 columnar_vacuum_vs_insert.out - s2: INFO: "public.test_vacuum_vs_insert": found N removable, N nonremovable row versions in N pages
+ s2: INFO: "public.test_vacuum_vs_insert": found N removable, N nonremovable row versions in N pages
- N|N
- N|N
- N|N
- N|N
- N|N
- N|N
- (N rows)
+ (N rows)
19 columnar_temp_tables.out + ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
20 columnar_index_concurrency.out - t
+ f
21 follower_single_node.out - N | N
- N | N
- N | N
- N | N
- (N rows)
+ (N rows)
22 multi_insert_select.out + DEBUG: io N |op invalid|target invalid|state HANDED_OUT : adding cb #N, id N/aio_shared_buffer_readv_cb
+ DEBUG: io N |op invalid|target smgr|state HANDED_OUT : adding cb #N, id N/aio_md_readv_cb
+ DEBUG: io N |op readv|target smgr|state DEFINED : calling cb #N N/aio_shared_buffer_readv_cb->stage(N)
+ DEBUG: io N |op readv|target smgr|state STAGED : staged (synchronous: N, in_batch: N)
+ DEBUG: io N |op readv|target smgr|state COMPLETED_IO : after shared completion: distilled result: (status OK, id N, error_data: N, result N), raw_result: N
+ DEBUG: io N |op readv|target smgr|state COMPLETED_SHARED: after local completion: result: (status OK, id N, error_data N, result N), raw_result: N
23 window_functions.out + Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY ((N / (N + sum(users_table.value_2)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
24 multi_insert_select_conflict.out + -> Distributed Subplan 48_1
+ -> Custom Scan (Citus Adaptive)
- (N rows)
+ (N rows)
25 insert_select_into_local_table.out - NOTICE: l2={"(,,value,,,,col_7,)","(,,value2,,,,col_7,)"}
- CONTEXT: PL/pgSQL function query_results_equal(text,text,text) line XX at RAISE
- NOTICE: l2={"(,,value,,,,col_7,)","(,,value2,,,,col_7,)"}
- CONTEXT: PL/pgSQL function query_results_equal(text,text,text) line XX at RAISE
- query_results_equal
- t
- (N row)
-
+ ERROR: Unrecognized range table id N
+ CONTEXT: SQL statement "
+ INSERT INTO local_dest_table (col_3)
+ SELECT t1.text_col_1
26 subquery_in_targetlist.out - ERROR: correlated subqueries are not supported when the FROM clause contains a reference table
+ max | ?column?
+ --------------------------------+----------
+ Thu Nov N N:N:N.N N | N
+ (N row)
+
- ERROR: correlated subqueries are not supported when the FROM clause contains a CTE or subquery
+ count | max
+ -------+---------------------------------
+ N | Thu Nov N N:N:N.N N
+ (N row)
+
27 pg17.out - "Actual Loops": N +
+ "Actual Loops": N, +
28 multi_explain.out - "Alias": "lineitem"
+ "Alias": "lineitem",
+ Index Searches: N
+ Index Searches: N
- "Actual Loops": N +
+ "Actual Loops": N, +
- "Actual Loops": N
+ "Actual Loops": N,
- "Actual Loops": N
+ "Actual Loops": N,
+ Planning:
+ Buffers: shared hit=N
29 multi_subquery_window_functions.out - Sort Key: (sum((sum(users_table.value_2) OVER (?)))) DESC, users_table.user_id DESC
+ Sort Key: (sum((sum(users_table.value_2) OVER w1))) DESC, users_table.user_id DESC
- Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER (?))))
+ Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER w1)))
- Group Key: users_table.user_id, (sum(users_table.value_2) OVER (?))
+ Group Key: users_table.user_id, (sum(users_table.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table.user_id)
+ Window: w1 AS (PARTITION BY events_table.user_id)
- Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER (?))
+ Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table_1.user_id)
+ Window: w1 AS (PARTITION BY events_table_1.user_id)
30 multi_subquery_misc.out - ERROR: could not create distributed plan
- DETAIL: Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
- HINT: Consider using PL/pgSQL functions instead.
- CONTEXT: SQL function "sql_subquery_test" statement N
+ sql_subquery_test
+ -------------------
+ N
+ (N row)
+
31 multi_outer_join_columns.out - Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, (max(remote_scan.max) OVER (?)), remote_scan.worker_column_3
- Group Key: remote_scan.id, max(remote_scan.max) OVER (?)
32 merge.out + N | N | N | N
+ N | N | N | N
- (N rows)
+ (N rows)
33 local_dist_join_mixed.out - DEBUG: generating subplan XXX_1 for subquery SELECT id FROM local_dist_join_mixed.local u1 WHERE true
+ DEBUG: generating subplan 82_1 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u1 WHERE true
- DEBUG: generating subplan XXX_2 for subquery SELECT id FROM local_dist_join_mixed.local u2 WHERE true
+ DEBUG: generating subplan 82_2 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u2 WHERE true
- DEBUG: generating subplan XXX_3 for subquery SELECT id FROM local_dist_join_mixed.local u3 WHERE true
+ DEBUG: generating subplan 82_3 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u3 WHERE true
- DEBUG: generating subplan XXX_4 for subquery SELECT id FROM local_dist_join_mixed.local u4 WHERE true
+ DEBUG: generating subplan 82_4 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u4 WHERE true
- DEBUG: generating subplan XXX_5 for subquery SELECT id FROM local_dist_join_mixed.local u5 WHERE true
+ DEBUG: generating subplan 82_5 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u5 WHERE true
- DEBUG: generating subplan XXX_6 for subquery SELECT id FROM local_dist_join_mixed.local u6 WHERE true
+ DEBUG: generating subplan 82_6 for subquery SELECT NULL::integer AS "dummy-N" FROM local_dist_join_mixed.local u6 WHERE true
34 query_single_shard_table.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
35 multi_router_planner_fast_path.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
36 mixed_relkind_tests.out - Sort Key: remote_scan.a, (count(*) OVER (?))
+ Sort Key: remote_scan.a, (count(*) OVER w1)
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_2)
- (N rows)
+ (N rows)
37 ssl_by_default.out - ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384
+ HIGH:MEDIUM:+3DES:!aNULL
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
38 sqlancer_failures.out - count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
- count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
39 multi_mx_hide_shard_names.out - -> Index Scan using pg_class_relname_nsp_index on pg_class c
- Index Cond: (relname = 'test_table'::text)
+ -> Seq Scan on pg_class c
- (N rows)
+ (N rows)
40 local_shard_execution_replicated.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
- (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ (N rows)
41 local_shard_execution.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
+ Index Searches: N
+ Buffers: shared hit=N
- (N rows)
+ (N rows)
42 citus_local_tables_mx.out + fkey_cas_test_3_1
- (N rows)
+ fkey_cas_test_3_1330013_1
+ fkey_cas_test_3_1330013_2
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
43 multi_mx_explain.out - "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "supplier_mx"
+ "Alias": "supplier_mx",
- "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "customer_mx"
+ "Alias": "customer_mx",
- "Alias": "orders_mx"
+ "Alias": "orders_mx",
44 locally_execute_intermediate_results.out - NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
45 local_shard_execution_dropped_column.out - NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
46 local_execution_local_plan.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
47 multi_move_mx.out - ERROR: could not connect to the publisher: root certificate file "/non/existing/certificate.crt" does not exist
+ ERROR: subscription "subs_01" could not connect to the publisher: connection to server at "localhost" (::N), port N failed: root certificate file "/non/existing/certificate.crt" does not exist
48 numa.out pg_vanilla_outputs - ok
- t
- (N row)
-
+ ERROR: failed NUMA pages inquiry status: Operation not permitted

m3hm3t avatar Oct 13 '25 09:10 m3hm3t

17.10.25 results: combined_regression.txt

No tests test_group example
1 columnar_create.out + ERROR: could not open relation with OID N
+ ERROR: Timeout while waiting for temporary table to be dropped
+ CONTEXT: PL/pgSQL function inline_code_block line N at RAISE
+ ERROR: could not open relation with OID N
- count
- N
- (N row)
-
+ ERROR: could not open relation with OID N
+ ERROR: cannot access temporary tables of other sessions
+ ERROR: could not open relation with OID N
- columnar_metadata_has_storage_id
2 columnar_query.out - f1 | f2 | fs
- (N rows)
-
+ ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
3 columnar_first_row_number.out - row_count | first_row_number
- N | N
- N | N
- N | N
- N | N
- N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- row_count | first_row_number
- N | N
- N | N
4 columnar_drop.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
+ ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
5 columnar_indexes.out - (N rows)
+ (N rows)
- ?column?
- t
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- ?column?
- t
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
6 columnar_paths.out - t
+ f
- t
+ f
- (N rows)
+ Index Searches: N
+ (N rows)
7 columnar_insert.out - relname | stripe_num | chunk_group_count | row_count
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- zero_col | N | N | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
- relname | stripe_num | value_count
- (N rows)
-
8 columnar_matview.out, columnar_rollback.out - count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
9 columnar_truncate.out + ERROR: cannot access temporary tables of other sessions
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
- ?column?
- N
- (N row)
-
+ ERROR: syntax error at or near ":"
10 columnar_vacuum.out + ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
- N
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- count
11 columnar_types_without_comparison.out - minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
- (N row)
-
+ ERROR: cannot access temporary tables of other sessions
- minimum_value | maximum_value
- |
12 columnar_chunk_filtering.out - -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Custom Scan (ColumnarScan) on coltest (actual rows=N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest (actual rows=N.N loops=N)
- -> Append (actual rows=N loops=N)
- -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N loops=N)
+ -> Append (actual rows=N.N loops=N)
+ -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=N.N loops=N)
- -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N loops=N)
+ -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=N.N loops=N)
+ Storage: Memory Maximum Storage: NkB
- (N rows)
13 columnar_recursive.out - relname | count
- t1 | N
- t2 | N
- (N rows)
-
+ ERROR: cannot access temporary tables of other sessions
14 columnar_temp_tables.out + ERROR: could not open relation with OID N
+ ERROR: could not open relation with OID N
15 multi_insert_select.out + DEBUG: io N |op invalid|target invalid|state HANDED_OUT : adding cb #N, id N/aio_shared_buffer_readv_cb
+ DEBUG: io N |op invalid|target smgr|state HANDED_OUT : adding cb #N, id N/aio_md_readv_cb
+ DEBUG: io N |op readv|target smgr|state DEFINED : calling cb #N N/aio_shared_buffer_readv_cb->stage(N)
+ DEBUG: io N |op readv|target smgr|state STAGED : staged (synchronous: N, in_batch: N)
+ DEBUG: io N |op readv|target smgr|state COMPLETED_IO : after shared completion: distilled result: (status OK, id N, error_data: N, result N), raw_result: N
+ DEBUG: io N |op readv|target smgr|state COMPLETED_SHARED: after local completion: result: (status OK, id N, error_data N, result N), raw_result: N
16 window_functions.out + Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY (('N'::numeric / ('N'::numeric + avg(users_table.value_1)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
+ Window: w1 AS (PARTITION BY users_table.user_id ORDER BY ((N / (N + sum(users_table.value_2)))) ROWS UNBOUNDED PRECEDING)
- (N rows)
+ (N rows)
17 subquery_in_targetlist.out - ERROR: correlated subqueries are not supported when the FROM clause contains a reference table
+ max | ?column?
+ --------------------------------+----------
+ Thu Nov N N:N:N.N N | N
+ (N row)
+
- ERROR: correlated subqueries are not supported when the FROM clause contains a CTE or subquery
+ count | max
+ -------+---------------------------------
+ N | Thu Nov N N:N:N.N N
+ (N row)
+
18 pg17.out - "Actual Loops": N +
+ "Actual Loops": N, +
19 pg18.out - -> Seq Scan on sje_d1_102012 sje_d1
+ -> Seq Scan on sje_d1_361861 sje_d1
- -> Seq Scan on sje_d2_102016 u6
+ -> Seq Scan on sje_d2_361865 u6
- -> Seq Scan on sje_d1_102012 d1
+ -> Seq Scan on sje_d1_361861 d1
- -> Seq Scan on sje_d2_102016 u3
+ -> Seq Scan on sje_d2_361865 u3
- -> Seq Scan on sje_d1_102012 sje_d1
+ -> Seq Scan on sje_d1_361861 sje_d1
- -> Seq Scan on sje_d1_102012 d
+ -> Seq Scan on sje_d1_361861 d
20 multi_explain.out - "Alias": "lineitem"
+ "Alias": "lineitem",
+ Index Searches: N
+ Index Searches: N
- "Actual Loops": N +
+ "Actual Loops": N, +
- "Actual Loops": N
+ "Actual Loops": N,
- "Actual Loops": N
+ "Actual Loops": N,
+ Planning:
+ Buffers: shared hit=N
21 multi_subquery_window_functions.out - Sort Key: (sum((sum(users_table.value_2) OVER (?)))) DESC, users_table.user_id DESC
+ Sort Key: (sum((sum(users_table.value_2) OVER w1))) DESC, users_table.user_id DESC
- Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER (?))))
+ Group Key: users_table.user_id, (sum((sum(users_table.value_2) OVER w1)))
- Group Key: users_table.user_id, (sum(users_table.value_2) OVER (?))
+ Group Key: users_table.user_id, (sum(users_table.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table.user_id)
+ Window: w1 AS (PARTITION BY events_table.user_id)
- Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER (?))
+ Group Key: users_table_1.user_id, (sum(users_table_1.value_2) OVER w1)
+ Window: w1 AS (PARTITION BY users_table_1.user_id)
+ Window: w1 AS (PARTITION BY events_table_1.user_id)
22 multi_subquery_misc.out - ERROR: could not create distributed plan
- DETAIL: Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
- HINT: Consider using PL/pgSQL functions instead.
- CONTEXT: SQL function "sql_subquery_test" statement N
+ sql_subquery_test
+ -------------------
+ N
+ (N row)
+
23 multi_outer_join_columns.out - Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, max(remote_scan.max) OVER (?), remote_scan.worker_column_3
+ Output: remote_scan.id, max(remote_scan.max) OVER w1, remote_scan.worker_column_3
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_3)
- (N rows)
+ (N rows)
- Output: remote_scan.id, (max(remote_scan.max) OVER (?)), remote_scan.worker_column_3
- Group Key: remote_scan.id, max(remote_scan.max) OVER (?)
24 merge.out + N | N | N | N
+ N | N | N | N
- (N rows)
+ (N rows)
25 query_single_shard_table.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
- DETAIL: users_table and non_colocated_events_table are not colocated
+ DETAIL: non_colocated_events_table and users_table are not colocated
26 multi_router_planner_fast_path.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
+ DEBUG: query has a single distribution column value: N
- DEBUG: Deferred pruning for a fast-path router query
27 mixed_relkind_tests.out - Sort Key: remote_scan.a, (count(*) OVER (?))
+ Sort Key: remote_scan.a, (count(*) OVER w1)
+ Window: w1 AS (PARTITION BY remote_scan.worker_column_2)
- (N rows)
+ (N rows)
28 ssl_by_default.out - ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384
+ HIGH:MEDIUM:+3DES:!aNULL
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
- (localhost,N,t,ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
+ (localhost,N,t,HIGH:MEDIUM:+3DES:!aNULL)
29 sqlancer_failures.out - count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
- count
- N
- (N row)
-
+ ERROR: invalid reference to FROM-clause entry for table "a"
+ DETAIL: There is an entry for table "a", but it cannot be referenced from this part of the query.
30 multi_mx_hide_shard_names.out - -> Index Scan using pg_class_relname_nsp_index on pg_class c
- Index Cond: (relname = 'test_table'::text)
+ -> Seq Scan on pg_class c
- (N rows)
+ (N rows)
31 local_shard_execution_replicated.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
- (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ (N rows)
32 local_shard_execution.out - (N rows)
+ Index Searches: N
+ Buffers: shared hit=N
+ Buffers: shared hit=N
+ (N rows)
+ Buffers: shared hit=N
+ Index Searches: N
+ Buffers: shared hit=N
- (N rows)
+ (N rows)
33 citus_local_tables_mx.out + fkey_cas_test_3_1
- (N rows)
+ fkey_cas_test_3_1330013_1
+ fkey_cas_test_3_1330013_2
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
+ fkey_cas_test_3_2
+ (N rows)
- (N rows)
+ fkey_cas_test_3_1
34 multi_mx_explain.out - "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "supplier_mx"
+ "Alias": "supplier_mx",
- "Alias": "lineitem_mx"
+ "Alias": "lineitem_mx",
- "Alias": "customer_mx"
+ "Alias": "customer_mx",
- "Alias": "orders_mx"
+ "Alias": "orders_mx",
35 locally_execute_intermediate_results.out - NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580004 table_2) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_2.value AS worker_column_1, table_2.value AS worker_column_2 FROM locally_execute_intermediate_results.table_2_1580006 table_2) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
- NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_1) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580000 table_1) worker_subquery GROUP BY worker_column_1
+ NOTICE: executing the command locally: SELECT worker_column_1 AS key, max(worker_column_2) AS worker_column_2 FROM (SELECT table_1.value AS worker_column_1, table_1.value AS worker_column_2 FROM locally_execute_intermediate_results.table_1_1580002 table_1) worker_subquery GROUP BY worker_column_1
36 local_shard_execution_dropped_column.out - NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
- NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) N) GROUP BY c
+ NOTICE: executing the command locally: SELECT count(*) AS count FROM local_shard_execution_dropped_column.t1_2460000 t1 WHERE (c OPERATOR(pg_catalog.=) $N) GROUP BY c
37 local_execution_local_plan.out - DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Fast-path router query: created local execution plan to avoid deparse and compile of shard query
- DEBUG: Local executor: Using task's cached local plan for task N
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
- DEBUG: Distributed planning for a fast-path router query
38 multi_move_mx.out - ERROR: could not connect to the publisher: root certificate file "/non/existing/certificate.crt" does not exist
+ ERROR: subscription "subs_01" could not connect to the publisher: connection to server at "localhost" (::N), port N failed: root certificate file "/non/existing/certificate.crt" does not exist
39 numa.out pg_vanilla_outputs - ok
- t
- (N row)
-
+ ERROR: failed NUMA pages inquiry status: Operation not permitted
40 upgrade_columnar_after.out - (N rows)
+ Index Searches: N
+ (N rows)

m3hm3t avatar Oct 17 '25 12:10 m3hm3t