Handle TRUNCATE TABLE on chunks
Make truncating an uncompressed chunk also drop the data when it resides in a corresponding compressed chunk.
Generate invalidations for continuous aggregates after TRUNCATE, so that refresh operations on the materialization hypertable remain consistent.
Fixes #4362
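The intended behavior can be illustrated with a hypothetical session (the table definition and the chunk name are made up for illustration; actual chunk names depend on your setup):

```sql
-- Illustrative setup: a hypertable with one compressed chunk.
CREATE TABLE conditions (time timestamptz NOT NULL, device int, temp float);
SELECT create_hypertable('conditions', 'time');
INSERT INTO conditions VALUES ('2022-06-01', 1, 20.0);
ALTER TABLE conditions SET (timescaledb.compress);
SELECT compress_chunk(c) FROM show_chunks('conditions') c;

-- With this change, truncating the uncompressed chunk also drops the
-- data stored in its corresponding compressed chunk, and continuous
-- aggregate invalidations are generated for the truncated range:
TRUNCATE TABLE _timescaledb_internal._hyper_1_1_chunk;
```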
Codecov Report
Merging #4416 (f1eb65a) into main (e97fa59) will decrease coverage by 0.01%. The diff coverage is 100.00%.
```diff
@@            Coverage Diff             @@
##             main    #4416      +/-   ##
==========================================
- Coverage   90.77%   90.76%   -0.02%
==========================================
  Files         224      224
  Lines       41849    41867      +18
==========================================
+ Hits        37987    37999      +12
- Misses       3862     3868       +6
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/ts_catalog/continuous_agg.h | 100.00% <ø> (ø) | |
| src/compat/compat.h | 95.12% <100.00%> (+0.38%) | :arrow_up: |
| src/process_utility.c | 94.65% <100.00%> (+0.03%) | :arrow_up: |
| src/ts_catalog/continuous_agg.c | 95.76% <100.00%> (+0.03%) | :arrow_up: |
| tsl/src/continuous_aggs/invalidation.c | 96.10% <100.00%> (ø) | |
| src/loader/bgw_message_queue.c | 85.52% <0.00%> (-2.64%) | :arrow_down: |
| tsl/src/reorder.c | 85.37% <0.00%> (-0.27%) | :arrow_down: |
| tsl/src/nodes/data_node_dispatch.c | 96.49% <0.00%> (-0.24%) | :arrow_down: |
| tsl/src/bgw_policy/job.c | 88.62% <0.00%> (-0.05%) | :arrow_down: |
Why are we blocking this instead of making this work?
Various reasons:
- What's the value of truncating a chunk instead of dropping it? Why keep an empty chunk around?
- We cannot truncate distributed chunks because they are foreign tables, so the experience would be inconsistent:

```
postgres=# TRUNCATE TABLE _timescaledb_internal._dist_hyper_8_10_chunk;
ERROR:  "_dist_hyper_8_10_chunk" is not a table
```
- Truncating a chunk does NOT generate continuous aggregate invalidation records today, so it is in a fundamentally broken state.
- There are currently no tests whatsoever for truncating chunks, so this probably slipped through.
Too much work would need to be done to do this properly (handle continuous aggregate invalidations, local/distributed chunks, compressed/uncompressed chunks, etc.) for little benefit. I don't think it is worth the effort.
I agree with @mfundul's considerations, and even worse, we would have to upgrade existing hypertables to also handle the TRUNCATE event.
Today the invalidation log entries are created by a trigger fired on INSERT/UPDATE/DELETE events on chunks. To also fire this invalidation trigger on TRUNCATE, we would need to re-create (drop/create) the invalidation trigger on all existing hypertables and chunks. That operation requires an AccessExclusiveLock on each table, so it can lead to a bad user experience (i.e., lock contention) during the extension upgrade.
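As a rough sketch of the upgrade step being discussed (the trigger and function names here are hypothetical, and the chunk name is illustrative):

```sql
-- Illustrative only. In PostgreSQL a TRUNCATE trigger must be
-- statement-level, so it would have to be a separate trigger alongside
-- the existing row-level INSERT/UPDATE/DELETE invalidation trigger.
-- Re-creating triggers like this on every existing hypertable and chunk
-- takes an AccessExclusiveLock per table during the extension upgrade.
CREATE TRIGGER cagg_truncate_invalidation_trigger
    AFTER TRUNCATE ON _timescaledb_internal._hyper_1_1_chunk
    FOR EACH STATEMENT
    EXECUTE FUNCTION _timescaledb_internal.continuous_agg_invalidation_trigger(1);
```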
IMHO the best we can do now is block TRUNCATE until we discuss it further and agree on a good way to implement it. The best is the enemy of the good.
I am missing a test where you drop a chunk and then try to insert into it again. If such a test already exists, could you add that information to the commit message?
This commit is about blocking TRUNCATE TABLE on chunks; I am not sure what dropping a chunk would do here. Can you elaborate?