Eduardo Breijo
I'm also having this issue when using the [backfill script](https://github.com/timescale/timescaledb-extras/blob/master/backfill.sql) on a large data set. The compress/decompress is locking read queries on the hypertable that contains the compressed chunks. Currently running...
Any updates on this issue, or any workaround for backfilling a batch of rows?
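For anyone else hitting this, the cycle the backfill script drives is essentially decompress → insert → recompress. A minimal sketch using the public chunk API (the staging table name and time window here are illustrative):

```sql
-- Decompress the chunks that overlap the backfill window so rows can be inserted.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('device_readings',
                 newer_than => '2023-01-01'::timestamptz,
                 older_than => '2023-01-02'::timestamptz) AS c;

-- Insert one batch of historical rows (staging table name is illustrative).
INSERT INTO device_readings
SELECT * FROM device_readings_staging
WHERE time >= '2023-01-01' AND time < '2023-01-02';

-- Recompress the affected chunks once the batch has landed; reads against these
-- chunks can block behind the locks taken by compress/decompress.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('device_readings',
                 newer_than => '2023-01-01'::timestamptz,
                 older_than => '2023-01-02'::timestamptz) AS c;
```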
After doing some digging, I believe this PR https://github.com/timescale/timescaledb/pull/4821 is the one causing the performance regression.

`SET timescaledb.enable_compression_indexscan = 'OFF';` - Compressing a 1-hour chunk took: 5 minutes
`SET...`
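For anyone trying to reproduce the comparison, this is roughly the shape of the test, run from psql (the chunk name matches the one in the stats further down; the standard `_timescaledb_internal` schema is assumed):

```sql
\timing on

-- Time compression with the index scan path disabled...
SET timescaledb.enable_compression_indexscan = 'OFF';
SELECT compress_chunk('_timescaledb_internal._hyper_1_40061_chunk');

-- ...then decompress and repeat with the default setting to compare timings.
SELECT decompress_chunk('_timescaledb_internal._hyper_1_40061_chunk');
RESET timescaledb.enable_compression_indexscan;
SELECT compress_chunk('_timescaledb_internal._hyper_1_40061_chunk');
```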
@nikkhils This is my hypertable schema and indexes:

```sql
CREATE TABLE device_readings (
    time       timestamp with time zone,
    device_id  integer,
    metric_id  integer,
    value      double precision NOT NULL,
    metadata   jsonb,
    PRIMARY...
```
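For completeness, compression on this table is enabled the usual way. A sketch only; the segmentby/orderby values below are placeholders, the real ones are whatever `timescaledb_information.compression_settings` reports further down:

```sql
-- Illustrative setup; actual segmentby/orderby come from compression_settings.
SELECT create_hypertable('device_readings', 'time');

ALTER TABLE device_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id, metric_id',
    timescaledb.compress_orderby   = 'time DESC'
);

SELECT add_compression_policy('device_readings', INTERVAL '7 days');
```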
@shhnwz - The issue showed up from the very first cycle of the compression job after the upgrade - I don't have any other hypertables, just the one I posted above...
@nikkhils From the `timescaledb_information.compression_settings` view
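(The query behind that output, for anyone following along:)

```sql
SELECT *
FROM timescaledb_information.compression_settings
WHERE hypertable_name = 'device_readings';
```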
@shhnwz - From the `postgres.conf` file:

```
shared_buffers = 15823MB
effective_cache_size = 47471MB
bgwriter_delay = 200ms (default value)
bgwriter_lru_maxpages = 100 (default value)
bgwriter_lru_multiplier = 2.0 (default value)
bgwriter_flush_after = 512kB...
```
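If it helps, the values the running server actually picked up can be double-checked against `pg_settings` rather than the file:

```sql
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers',
               'effective_cache_size',
               'bgwriter_delay',
               'bgwriter_lru_maxpages',
               'bgwriter_lru_multiplier',
               'bgwriter_flush_after');
```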
@shhnwz

### With `timescaledb.enable_compression_indexscan` disabled (`SET timescaledb.enable_compression_indexscan = 'OFF'`)

- Before compression

```
SELECT * FROM pg_statio_user_tables where relname = '_hyper_1_40061_chunk'
 relid | schemaname | relname | heap_blks_read | heap_blks_hit |...
```
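A condensed way to read those counters before/after each compression run (same chunk as above) is the heap cache-hit ratio:

```sql
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 4) AS heap_hit_ratio
FROM pg_statio_user_tables
WHERE relname = '_hyper_1_40061_chunk';
```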
@shhnwz I have followed your recommendation of setting shared_buffers to 40% of the RAM and I still don't see any improvement/benefit when compressing a chunk. It is still...
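For reference, bumping shared_buffers is done via `ALTER SYSTEM` plus a restart. A sketch; the 25GB figure is only an approximation of 40% of RAM on this box:

```sql
-- Requires a server restart to take effect; the value shown is an approximation.
ALTER SYSTEM SET shared_buffers = '25GB';
-- after restarting PostgreSQL:
SHOW shared_buffers;
```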
Looks like `timescaledb.enable_compression_indexscan` has been disabled by default in TimescaleDB 2.14.1: https://github.com/timescale/timescaledb/pull/6639
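To confirm the default on a given install:

```sql
SHOW timescaledb.enable_compression_indexscan;

-- or, including the extension's built-in default:
SELECT name, setting, boot_val
FROM pg_settings
WHERE name = 'timescaledb.enable_compression_indexscan';
```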