Bharathy

Results: 24 comments of Bharathy

Hi @felipenogueirajack, Please give details of the deleteRow() method too. Can you also log the exception you get when deleteRow() is called before decompression is executed? You iterate over the...

Hi @felipenogueirajack, Thanks for trying what I suggested; unfortunately it did not work. Also, I am not sure how, and in which order, the JDBC code submits the SQL statements to the database....

Hi @felipenogueirajack, Thanks for all the details. Currently I am working on a different issue. I will get back to this issue after a day or two and will update what I...

I tried as you said; please check below.
bharathysatish@Bharathys-MacBook-Pro-2 build % bin/createdb timescale_delete_error
createdb: error: database creation failed: ERROR: database "timescale_delete_error" already exists
bharathysatish@Bharathys-MacBook-Pro-2 build % java -jar /Volumes/Work/PR/PR4798\ -\...

The first error I am aware of; I only wanted to point out that I already have a database created on my setup. OK, now I downloaded the jar file and started the postgres server. bin/pg_ctl...

Hi @yinan8128, It would be of much help if we knew what errors are occurring when the compression policy is triggered in the background. When a compression policy fails, we...
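
If it helps, on recent TimescaleDB versions the background-worker failures are recorded in the informational views. A minimal sketch for pulling them out (the views below are the standard TimescaleDB 2.x catalogs; timescaledb_information.job_errors needs 2.9 or later, and the queries are illustrative, not taken from your setup):

-- Locate the compression policy job(s) and their configuration.
SELECT job_id, proc_name, hypertable_name, config
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';

-- See whether the job has been failing and when it last ran.
SELECT job_id, last_run_status, total_failures, last_run_started_at
FROM timescaledb_information.job_stats;

-- On TimescaleDB 2.9+ the error details from failed runs are kept here.
SELECT *
FROM timescaledb_information.job_errors
ORDER BY start_time DESC;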

Hi @mblsf, If the INSERT is going to affect all segments in a compressed chunk, then this INSERT operation will end up decompressing everything in the compressed chunk, which will be time... (see the sketch after the issue link below)

Here is the issue that should fix the performance of INSERT ... ON CONFLICT queries on compressed chunks: https://github.com/timescale/timescaledb/issues/6063
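
To make the segment point above concrete, here is a minimal sketch; the metrics table, the device_id segmentby column, and the unique index on (time, device_id) implied by the ON CONFLICT clause are assumptions for illustration, not taken from your schema:

-- Hypothetical hypertable compressed with device_id as the segmentby column.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);

-- Upsert against compressed data: conflict detection has to look into the
-- compressed chunk, so the fewer segments (device_ids) a batch touches,
-- the less data has to be decompressed. A batch spanning every device_id
-- is the slow case described above.
INSERT INTO metrics (time, device_id, value)
VALUES (now(), 42, 1.0)
ON CONFLICT (time, device_id) DO UPDATE SET value = EXCLUDED.value;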

Hi @mickael-choisnet, Thanks for the detailed report. Deleting directly from a compressed chunk is not recommended. As you mentioned, deleting from a hypertable is time-consuming because of several internal...
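
For what it's worth, a common pattern to speed this up is to decompress only the affected chunks, run the DELETE against the hypertable (not the chunk tables), and recompress afterwards. A minimal sketch, where the metrics hypertable, the device_id predicate, and the cut-off date are assumptions for illustration:

-- Decompress only the chunks that can contain the rows to be deleted.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('metrics', older_than => '2023-01-01'::timestamptz) AS c;

-- Delete through the hypertable, not directly from the chunk tables.
DELETE FROM metrics
WHERE device_id = 42 AND time < '2023-01-01';

-- Recompress the affected chunks once the delete is done.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('metrics', older_than => '2023-01-01'::timestamptz) AS c;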