
Always writing table into a single node: Percona PostgreSQL Citus sharding not working

Open SHUFIL opened this issue 1 year ago • 1 comment

Report

I need to shard a single table across multiple nodes. I have used this Doc

More about the problem

I need to shard a single table across multiple nodes, but this is not happening: data is always written to only a single node.
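For reference, the usual way a table is distributed in Citus is with create_distributed_table, and shard placements can then be inspected through the citus_shards view. A minimal sketch (the table name events and distribution column event_id are assumptions for illustration, not taken from the report):

```sql
-- Sketch: distribute a table by hash on a chosen column.
CREATE TABLE events (event_id bigint, payload text);
SELECT create_distributed_table('events', 'event_id');

-- Verify how many shards were placed on each node:
SELECT nodename, nodeport, count(*) AS shard_count
FROM citus_shards
WHERE table_name = 'events'::regclass
GROUP BY nodename, nodeport;
```

If the shard_count here is nonzero on only one worker, the problem is shard placement; if shards exist on both workers but rows appear on one, the problem is data skew across shards.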

Steps to reproduce

  1. I tested using the commands below:
    SET citus.explain_all_tasks TO on;
    EXPLAIN ANALYZE SELECT * FROM events;
  2. When I analyze with EXPLAIN ANALYZE, the data is on only a single node, not both. You can see the output below.
 testdb=# EXPLAIN ANALYZE SELECT * FROM events;
                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=100000 width=52) (actual time=38.927..38.928 rows=5 loops=1)
   Task Count: 32
   Tuple data received from nodes: 290 bytes
   Tasks Shown: One of 32
   ->  Task
         Tuple data received from node: 0 bytes
         Node: host=192.168.1.100 port=5432 dbname=testdb
         ->  Seq Scan on events_102040 events  (cost=0.00..20.20 rows=1020 width=52) (actual time=0.005..0.006 rows=0 loops=1)
             Planning Time: 0.509 ms
             Execution Time: 0.032 ms
 Planning Time: 1.453 ms
 Execution Time: 38.974 ms
(12 rows)
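One thing worth noting about the output above: the plan shows 32 tasks but only 5 rows in total, and the one task shown returned 0 bytes. With hash distribution, a handful of rows can legitimately hash into shards that all sit on one node, which looks like "writing to a single node" even when sharding works. A simplified, hypothetical model of that effect (Python's crc32 stands in for Citus's hash function, and the even/odd shard-to-node mapping is an assumption, not real Citus placement logic):

```python
from zlib import crc32

SHARD_COUNT = 32  # matches the Task Count in the EXPLAIN output

def node_for(key: int) -> str:
    """Map a distribution-key value to a node (simplified model)."""
    shard = crc32(str(key).encode()) % SHARD_COUNT
    # Assumption for illustration: even shards on one worker, odd on the other.
    return "192.168.1.100" if shard % 2 == 0 else "192.168.1.150"

keys = range(1, 6)  # five sample rows, as in the EXPLAIN output
placement = {k: node_for(k) for k in keys}
nodes_used = set(placement.values())
print(placement)
print(nodes_used)
```

With only five rows, nodes_used can easily be a single node by chance; loading a few thousand rows before checking distribution gives a much more reliable picture.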
  3. Node details are returned correctly for both nodes:
postgres=# SELECT * FROM pg_dist_node;
 nodeid | groupid |   nodename    | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+---------------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
     18 |      17 | 192.168.1.100 |     5432 | default  | t           | t        | primary  | default     | t              | t
      1 |       0 | 192.168.1.101 |     5432 | default  | t           | t        | primary  | default     | t              | f
     20 |      19 | 192.168.1.150 |     5435 | default  | t           | t        | primary  | default     | t              | t
(3 rows)
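In the pg_dist_node output above, node 1 (192.168.1.101, group 0) has shouldhaveshards = f, so Citus will not place shards there; that is the normal setting for a coordinator. If a node were expected to hold shards, the property could be toggled and placements spread afterwards. A hedged sketch (whether this applies depends on which node is the coordinator in this cluster):

```sql
-- Allow shard placement on a node (only if it is meant to store data):
SELECT citus_set_node_property('192.168.1.101', 5432, 'shouldhaveshards', true);

-- Spread existing shard placements across eligible nodes (Citus 11.1+):
SELECT citus_rebalance_start();
```

Rebalancing only moves existing shard placements; it does not change which shard a given row hashes into.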

Worker node list, as seen from both servers:

postgres=# SELECT * FROM citus_get_active_worker_nodes();
   node_name   | node_port
---------------+-----------
 192.168.1.150 |      5435
 192.168.1.100 |      5432
(2 rows)
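Since both workers are active, a direct way to confirm where rows actually live is to count rows per shard with run_command_on_shards (the table name events is assumed here):

```sql
-- Per-shard row counts for the distributed table:
SELECT shardid, result AS row_count
FROM run_command_on_shards('events', 'SELECT count(*) FROM %s');
```

Joining the resulting shardid values against citus_shards shows which node each populated shard lives on.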

Versions

Database: Percona Distribution for PostgreSQL cluster

postgres=# SELECT VERSION();
                                                          version
------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 15.7 - Percona Distribution on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)

citus_version

postgres=# select citus_version();
                                           citus_version
----------------------------------------------------------------------------------------------------
 Citus 12.0.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)

Anything else?

No response

SHUFIL avatar Aug 28 '24 07:08 SHUFIL

Hi @SHUFIL, did you use the PG k8s operator? We do not include the Citus extension in our operator images, and we have not tested Citus with the operator.

hors avatar Jan 14 '25 10:01 hors

Since we did not get any answer for more than a year, we are closing this issue.

nmarukovich avatar Sep 26 '25 09:09 nmarukovich