
[Bug] relcache reference leak related to AO unique index on repeated UPDATE

Open congxuebin opened this issue 1 year ago • 3 comments

Cloudberry Database version

PostgreSQL 14.4 (Cloudberry Database 1.6.0+dev.3.g48d76a3e build 80152 commit:48d76a3e)

What happened

UPDATE sales_partition_ao SET status = 'Closed', description = description || ' Audited';

psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386" not closed (seg1 127.0.0.1:7003 pid=552)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386_index" not closed (seg1 127.0.0.1:7003 pid=552)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395_index" not closed (seg1 127.0.0.1:7003 pid=552)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395" not closed (seg1 127.0.0.1:7003 pid=552)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395" not closed (seg2 127.0.0.1:7004 pid=551)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386_index" not closed (seg2 127.0.0.1:7004 pid=551)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395_index" not closed (seg2 127.0.0.1:7004 pid=551)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386" not closed (seg2 127.0.0.1:7004 pid=551)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386_index" not closed (seg0 127.0.0.1:7002 pid=550)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395_index" not closed (seg0 127.0.0.1:7002 pid=550)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60395" not closed (seg0 127.0.0.1:7002 pid=550)
psql:recreate0.sql:80: WARNING: relcache reference leak: relation "pg_aovisimap_60386" not closed (seg0 127.0.0.1:7002 pid=550)
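For triage, the pg_aovisimap_<oid> names above can be mapped back to their parent AO partitions. A minimal lookup sketch, assuming the GPDB-style pg_appendonly catalog (relid, visimaprelid) that Cloudberry inherits; note that the OIDs embedded in the warning messages are segment-local, so the coordinator may show different numbers:

-- Map each AO partition to its visimap auxiliary relation
-- (assumes the GPDB-style pg_appendonly catalog layout).
SELECT c.relname AS ao_partition,
       v.relname AS visimap_relation
FROM   pg_appendonly a
JOIN   pg_class c ON c.oid = a.relid
JOIN   pg_class v ON v.oid = a.visimaprelid
WHERE  c.relname LIKE 'sales_partition_ao_part%';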

What you think should happen instead

No response

How to reproduce

DROP TABLE IF EXISTS sales_partition_ao;

CREATE TABLE IF NOT EXISTS sales_partition_ao (
    product_id    INT,
    is_audited    BOOLEAN DEFAULT FALSE,
    quantity      SMALLINT,
    total_sales   BIGINT,
    unit_price    REAL,
    discount      DOUBLE PRECISION,
    description   TEXT,
    sale_date     TIMESTAMP,
    order_date    DATE,
    status        CHAR(10),
    customer_name VARCHAR(20),
    price         DECIMAL(20, 10)
)
DISTRIBUTED BY (product_id)
PARTITION BY HASH(description);

CREATE TABLE sales_partition_ao_part1 PARTITION OF sales_partition_ao FOR VALUES WITH (MODULUS 3, REMAINDER 0) WITH (appendonly=true);

CREATE TABLE sales_partition_ao_part2 PARTITION OF sales_partition_ao FOR VALUES WITH (MODULUS 3, REMAINDER 1) WITH (appendonly=true);

CREATE TABLE sales_partition_ao_part3 PARTITION OF sales_partition_ao FOR VALUES WITH (MODULUS 3, REMAINDER 2) WITH (appendonly=true);

-- Create Indexes

-- Unique
CREATE UNIQUE INDEX ON sales_partition_ao (product_id, description);
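An aside, not part of the original repro: on a partitioned table a unique index must include the hash partition key (description), which the definition above satisfies, and the index cascades to every partition. A quick check that each partition received its own unique index, using the standard pg_indexes view:

-- Each sales_partition_ao_part* partition should list a cascaded
-- unique index alongside the parent's partitioned index.
SELECT tablename, indexname
FROM   pg_indexes
WHERE  tablename LIKE 'sales_partition_ao%'
ORDER  BY tablename;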

INSERT INTO sales_partition_ao (product_id, is_audited, description, status)
SELECT x.id,                            -- product_id
       FALSE,
       'Product description ' || x.id, -- description
       'Closed'
FROM (SELECT * FROM generate_series(1, 20) AS id) AS x;

UPDATE sales_partition_ao SET status = 'Closed', description = description || ' Audited';

DELETE FROM sales_partition_ao;

INSERT INTO sales_partition_ao (product_id, is_audited, description, status)
SELECT x.id,                            -- product_id
       FALSE,
       'Product description ' || x.id, -- description
       'Closed'
FROM (SELECT * FROM generate_series(1, 20) AS id) AS x;

UPDATE sales_partition_ao SET status = 'Closed', description = description || ' Audited';

DELETE FROM sales_partition_ao;
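A hedged control experiment, not in the original report: since the leak is tied to the AO unique index, dropping that index before repeating the UPDATE should make the warnings disappear. The index name below is the PostgreSQL auto-generated default and is an assumption; verify the actual name with \d sales_partition_ao:

-- Hypothetical index name (auto-generated default); with the unique
-- index gone, the repeated UPDATE is expected to emit no relcache
-- reference leak warnings.
DROP INDEX IF EXISTS sales_partition_ao_product_id_description_idx;
UPDATE sales_partition_ao SET status = 'Closed', description = description || ' Audited';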

Operating System

CentOS 7

Anything else

No response

Are you willing to submit PR?

  • [ ] Yes, I am willing to submit a PR!

Code of Conduct

congxuebin avatar Aug 06 '24 10:08 congxuebin

Seems to be a bug inherited from GPDB: https://github.com/greenplum-db/gpdb-archive/commit/ed364586b99d431ee04a86d760cb58c17e068cbc

avamingli avatar Aug 07 '24 04:08 avamingli

I will take a look at this later; it may take several days.

avamingli avatar Aug 07 '24 04:08 avamingli

AOCS tables also have this issue; it is fixed along the way in #649.

avamingli avatar Sep 30 '24 05:09 avamingli