Nikhil
@jflambert while doing major upgrades, we typically recommend upgrading to the required timescaledb version on the existing current PG version and then doing the subsequent PG version upgrade. It seems...
@lb-ronyeh as mentioned in the documentation, not supporting writes is a limitation of this API. However, compared to the PG `CLUSTER` command, we are already doing a little bit...
@Dzuzepppe there's no plan for a new 2.14.x release, so this PR will be closed. To get this fix you will need to upgrade to the 2.15.x release.
@RobAtticus the current `show_chunks` logic sorts the returned chunks using the `hypertable_id` and `table_id` numbering values. Typically, if we consider append-only data insertions, then that...
@RobAtticus yes, `show_chunks` is used in the compression policy logic. Agreed, maybe `dimension_slice`-based sorting is the way to go. We will also need documentation changes if we go this...
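To illustrate why id-based ordering only coincides with time ordering for append-only inserts, here is a toy Python sketch (the `Chunk` fields and the sample values are invented for illustration, not TimescaleDB internals): a backfilled chunk gets a later catalog id even though its dimension slice covers earlier data.

```python
from typing import NamedTuple

class Chunk(NamedTuple):
    chunk_id: int     # catalog-assigned id, monotonic in creation order
    range_start: int  # start of the chunk's time dimension slice

# Out-of-order (backfilled) inserts: chunk 1 was created first,
# but it covers the latest time range.
chunks = [
    Chunk(chunk_id=1, range_start=300),
    Chunk(chunk_id=2, range_start=100),
    Chunk(chunk_id=3, range_start=200),
]

by_id = sorted(chunks, key=lambda c: c.chunk_id)
by_slice = sorted(chunks, key=lambda c: c.range_start)

print([c.chunk_id for c in by_id])     # [1, 2, 3]
print([c.chunk_id for c in by_slice])  # [2, 3, 1]
```

For append-only data the two orderings are identical; only backfill makes them diverge, which is why slice-based sorting is the more robust choice.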
> add_compression_policy

@kevcenteno the `add_compression_policy` API takes only an interval as input. I believe the handling for that is good enough. Did you see any issues with it in...
> @nikkhils I did see the same issue with the `add_compression_policy` when using the `compress_created_before` argument; that is, chunks were immediately being compressed without satisfying the interval:
>
> _Given_...
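The expected behavior being discussed can be sketched as a toy Python model (the function and its arguments are invented for illustration, not TimescaleDB code): a chunk should qualify for compression only once its creation time falls before `now - compress_created_before`; the reported issue is chunks qualifying immediately regardless of the interval.

```python
from datetime import datetime, timedelta, timezone

def chunks_to_compress(chunk_creation_times, compress_created_before, now=None):
    """Toy model of a creation-time based compression policy:
    a chunk qualifies only when it was created before the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - compress_created_before
    return [t for t in chunk_creation_times if t < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
created = [now - timedelta(days=10), now - timedelta(hours=1)]

# With compress_created_before = 7 days, only the 10-day-old chunk
# should qualify; the 1-hour-old chunk must be left uncompressed.
print(chunks_to_compress(created, timedelta(days=7), now=now))
```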
@felipenogueirajack can you please confirm whether this issue is fixed for you in version 2.11.0 and above?
Closing this. A newer draft PR #6727 covers this same functionality now.
@cchengubnt can you also provide the definition of the cagg? Also, are you still seeing this error? Did you try with a newer timescaledb version?