ERROR: could not resize shared memory segment "/PostgreSQL.3746429032" to 226938880 bytes: No space left on device
When I execute:
MATCH (n0 :L4)-[r0 :T4]->(n1)-[r1 :T0]->(n2 :L3)
MATCH (n3 :L4)-[r2 :T4]->(n4)
MATCH (n5)<-[r3 :T3]-(n6 :L4)
WITH (r1.k45) AS a0, (r1.k42) AS a1
WITH a1
RETURN DISTINCT a1;
it results in the following error:
ERROR: could not resize shared memory segment "/PostgreSQL.3746429032" to 226938880 bytes: No space left on device
But when I execute a much more complicated version of the same query, shown below:
MATCH (n0 :L4)-[r0 :T4]->(n1)-[r1 :T0]->(n2 :L3)
WHERE ((r0.id) > -1) AND ((r0.id) <> (r1.id))
MATCH (n3 :L4)-[r2 :T4]->(n4)
WHERE CASE WHEN TRUE THEN ((n2.k21) CONTAINS (n2.k21)) END
UNWIND [NULL] AS l0
MATCH (n5)<-[r3 :T3]-(n6 :L4)
WHERE ((r3.id) > -1)
WITH (r1.k45) AS a0, (r1.k42) AS a1
WHERE true
UNWIND ([] + [-485426356, 369543079]) AS a2
UNWIND [1] AS l1
WITH a1
WHERE true
UNWIND [1] AS l2
RETURN DISTINCT a1;
it returns:
a1
----
(0 rows)
This is surprising because the second query, derived from the first and laden with additional conditions and operations, runs smoothly while the simpler one fails.
Is there a workaround or configuration that would help in this situation, or is there a plan to fix this error? I am using my fuzzing tool to test AgensGraph, so an effective workaround would let me avoid this error and test the database more deeply. I would greatly appreciate any suggestions or guidance. Thank you very much for your support! I have also submitted issue #729, and I would like to request the same support for that as well. Thank you!
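For context, the /PostgreSQL.* files named in this error are PostgreSQL's dynamic shared memory segments, which on Linux are normally backed by the tmpfs at /dev/shm. A quick way to confirm that this limit is being exhausted (a minimal sketch, assuming a Linux host or container with standard coreutils; these commands are not from the original report):

# Show the total size and current usage of the tmpfs backing /dev/shm,
# which is where PostgreSQL allocates its dynamic shared memory segments.
df -h /dev/shm

# List any live PostgreSQL.* segments to see how much space they occupy
# while the failing query is running.
ls -lh /dev/shm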
@stupalpa Based on your previous issue reports, I'm assuming you're using Docker. If so, it's possible that the container's shared memory is limited. You can try increasing it with the --shm-size flag when running the container.
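For example (a sketch only; the image name and port mapping are placeholders, not taken from this thread). Docker's default /dev/shm size is 64 MB, which parallel hash joins can easily exhaust:

# Start the container with 512 MB of shared memory instead of Docker's 64 MB default.
docker run --shm-size=512m -p 5432:5432 <your-agensgraph-image>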
@stupalpa did you try running EXPLAIN on both queries? The "simpler" query might actually be consuming more resources (it depends on the optimizer, statistics, your data, etc.), and your instance may not have enough shared memory, as @MuhammadTahaNaveed pointed out.
@turicas Thanks for the advice! I just ran EXPLAIN on both queries. The first, simpler query returns:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=1252354231085.76..1252354231087.76 rows=200 width=32)
-> Sort (cost=1252354231085.76..1252354231086.76 rows=400 width=32)
Sort Key: (r1.properties.'k42'::text)
-> Gather (cost=1252354231026.47..1252354231068.47 rows=400 width=32)
Workers Planned: 2
-> HashAggregate (cost=1252354230026.47..1252354230028.47 rows=200 width=32)
Group Key: r1.properties.'k42'::text
-> Parallel Hash Join (cost=200438423829.84..897672868290.59 rows=141872544694350 width=32)
Hash Cond: (r3.start = n6.id)
-> Merge Join (cost=416.03..692.05 rows=17218 width=8)
Merge Cond: (n5.id = r3."end")
-> Sort (cost=348.20..357.08 rows=3550 width=8)
Sort Key: n5.id
-> Parallel Append (cost=0.00..138.87 rows=3550 width=8)
-> Parallel Seq Scan on l0 n5_2 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l1 n5_3 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l2 n5_4 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l3 n5_5 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l6 n5_6 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l5 n5_7 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l4 n5_8 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on ag_vertex n5_1 (cost=0.00..1.71 rows=71 width=8)
-> Sort (cost=67.82..70.25 rows=970 width=16)
Sort Key: r3."end"
-> Seq Scan on t3 r3 (cost=0.00..19.70 rows=970 width=16)
-> Parallel Hash (cost=116750720084.82..116750720084.82 rows=4120010010000 width=40)
-> Nested Loop (cost=925569.83..116750720084.82 rows=4120010010000 width=40)
-> Parallel Hash Join (cost=925569.83..17103134.82 rows=3433341675 width=32)
Hash Cond: (r1."end" = n2.id)
-> Merge Join (cost=455.73..1867.11 rows=83088 width=40)
Merge Cond: (n1.id = r0."end")
-> Sort (cost=348.20..357.08 rows=3550 width=8)
Sort Key: n1.id
-> Parallel Append (cost=0.00..138.87 rows=3550 width=8)
-> Parallel Seq Scan on l0 n1_2 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l1 n1_3 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l2 n1_4 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l3 n1_5 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l6 n1_6 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l5 n1_7 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l4 n1_8 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on ag_vertex n1_1 (cost=0.00..1.71 rows=71 width=8)
-> Materialize (cost=107.53..266.54 rows=4681 width=56)
-> Merge Join (cost=107.53..254.84 rows=4681 width=56)
Merge Cond: (r1.start = r0."end")
Join Filter: (r0.id <> r1.id)
-> Index Scan using t0_start_idx on t0 r1 (cost=0.15..62.70 rows=970 width=56)
-> Sort (cost=107.38..109.80 rows=970 width=16)
Sort Key: r0."end"
-> Hash Join (cost=37.00..59.26 rows=970 width=16)
Hash Cond: (r0.start = n0.id)
-> Seq Scan on t4 r0 (cost=0.00..19.70 rows=970 width=24)
-> Hash (cost=22.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n0 (cost=0.00..22.00 rows=1200 width=8)
-> Parallel Hash (cost=586143.60..586143.60 rows=20661000 width=8)
-> Nested Loop (cost=455.58..586143.60 rows=20661000 width=8)
-> Merge Join (cost=455.58..731.60 rows=17218 width=0)
Merge Cond: (n4.id = r2."end")
-> Sort (cost=348.20..357.08 rows=3550 width=8)
Sort Key: n4.id
-> Parallel Append (cost=0.00..138.87 rows=3550 width=8)
-> Parallel Seq Scan on l0 n4_2 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l1 n4_3 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l2 n4_4 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l3 n4_5 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l6 n4_6 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l5 n4_7 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l4 n4_8 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on ag_vertex n4_1 (cost=0.00..1.71 rows=71 width=8)
-> Sort (cost=107.38..109.80 rows=970 width=8)
Sort Key: r2."end"
-> Hash Join (cost=37.00..59.26 rows=970 width=8)
Hash Cond: (r2.start = n3.id)
-> Seq Scan on t4 r2 (cost=0.00..19.70 rows=970 width=16)
-> Hash (cost=22.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n3 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l3 n2 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n6 (cost=0.00..22.00 rows=1200 width=8)
(78 rows)
The second query returns:
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=221978950404634656.00..221978950404634656.00 rows=200 width=32)
Group Key: (((ROW(r1.id, r1.start, r1."end", r1.properties, r1.ctid)::edge)).properties.'k42'::text)
-> ProjectSet (cost=580764.34..65455307658647136.00 rows=12521891419679000576 width=64)
-> ProjectSet (cost=580764.34..654519561808302.00 rows=125218914196790000 width=128)
-> ProjectSet (cost=580764.34..6511680839913.75 rows=1252189141967900 width=96)
-> Nested Loop (cost=580764.34..156820944426.66 rows=12521891419679 width=32)
-> Gather (cost=580700.33..285925453.69 rows=910034406 width=288)
Workers Planned: 2
-> ProjectSet (cost=579700.33..194921013.09 rows=37918100200 width=288)
-> Parallel Hash Join (cost=579700.33..2486654.57 rows=379181002 width=62)
Hash Cond: (r2.start = n3.id)
-> Merge Join (cost=416.03..692.05 rows=17218 width=8)
Merge Cond: (n4.id = r2."end")
-> Sort (cost=348.20..357.08 rows=3550 width=8)
Sort Key: n4.id
-> Parallel Append (cost=0.00..138.87 rows=3550 width=8)
-> Parallel Seq Scan on l0 n4_2 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l1 n4_3 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l2 n4_4 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l3 n4_5 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l6 n4_6 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l5 n4_7 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l4 n4_8 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on ag_vertex n4_1 (cost=0.00..1.71 rows=71 width=8)
-> Sort (cost=67.82..70.25 rows=970 width=16)
Sort Key: r2."end"
-> Seq Scan on t4 r2 (cost=0.00..19.70 rows=970 width=16)
-> Parallel Hash (cost=312598.55..312598.55 rows=11011500 width=70)
-> Nested Loop (cost=144.29..312598.55 rows=11011500 width=70)
-> Hash Join (cost=144.29..614.55 rows=9176 width=62)
Hash Cond: (n1.id = r0."end")
-> Parallel Append (cost=0.00..138.87 rows=3550 width=8)
-> Parallel Seq Scan on l0 n1_2 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l1 n1_3 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l2 n1_4 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l3 n1_5 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l6 n1_6 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l5 n1_7 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on l4 n1_8 (cost=0.00..17.06 rows=706 width=8)
-> Parallel Seq Scan on ag_vertex n1_1 (cost=0.00..1.71 rows=71 width=8)
-> Hash (cost=137.83..137.83 rows=517 width=70)
-> Hash Join (cost=93.29..137.83 rows=517 width=70)
Hash Cond: (r0."end" = r1.start)
Join Filter: ((r0.id <> r1.id) AND (r0.properties.'id'::text <> r1.properties.'id'::text))
-> Hash Join (cost=37.00..59.97 rows=323 width=48)
Hash Cond: (r0.start = n0.id)
-> Seq Scan on t4 r0 (cost=0.00..22.12 rows=323 width=56)
Filter: (properties.'id'::text > '-1'::jsonb)
-> Hash (cost=22.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n0 (cost=0.00..22.00 rows=1200 width=8)
-> Hash (cost=52.26..52.26 rows=323 width=62)
-> Hash Join (cost=30.00..52.26 rows=323 width=62)
Hash Cond: (r1."end" = n2.id)
-> Seq Scan on t0 r1 (cost=0.00..19.70 rows=970 width=62)
-> Hash (cost=25.00..25.00 rows=400 width=8)
-> Seq Scan on l3 n2 (cost=0.00..25.00 rows=400 width=8)
Filter: string_contains(properties.'k21'::text, properties.'k21'::text)
-> Seq Scan on l4 n3 (cost=0.00..22.00 rows=1200 width=8)
-> Materialize (cost=64.01..831.31 rows=13760 width=0)
-> Hash Join (cost=64.01..762.51 rows=13760 width=0)
Hash Cond: (n5.id = r3."end")
-> Append (cost=0.00..198.80 rows=8520 width=8)
-> Seq Scan on ag_vertex n5_1 (cost=0.00..2.20 rows=120 width=8)
-> Seq Scan on l0 n5_2 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l1 n5_3 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l2 n5_4 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l3 n5_5 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l6 n5_6 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l5 n5_7 (cost=0.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n5_8 (cost=0.00..22.00 rows=1200 width=8)
-> Hash (cost=59.97..59.97 rows=323 width=8)
-> Hash Join (cost=37.00..59.97 rows=323 width=8)
Hash Cond: (r3.start = n6.id)
-> Seq Scan on t3 r3 (cost=0.00..22.12 rows=323 width=16)
Filter: (properties.'id'::text > '-1'::jsonb)
-> Hash (cost=22.00..22.00 rows=1200 width=8)
-> Seq Scan on l4 n6 (cost=0.00..22.00 rows=1200 width=8)
(77 rows)
It looks like both queries have a huge estimated cost. However, the estimated cost of the second query is still several orders of magnitude higher than that of the first one.
@MuhammadTahaNaveed Thanks for the advice! Setting --shm-size=512m resolved the issue perfectly! However, I’m still not sure whether this error is a bug or expected behavior. The query plan indicates that the first query should consume fewer resources than the second one. If this is normal behavior, I’ll go ahead and close the issue. Best regards,
@stupalpa Let's keep it open for a while; I will look into it more closely when I get enough time.