[Docs RFC] Update the hypercore and compression docs to show how to write to a chunk that is being compressed or moved to the columnstore
Update https://docs.timescale.com/use-timescale/latest/compression/modify-compressed-data/ and https://docs-dev.timescale.com/docs-262-docs-rfccreate-a-page-automate-hypercore-using-jobs/use-timescale/262-docs-rfccreate-a-page-automate-hypercore-using-jobs/hypercore/modify-data-in-the-columnstore/ and the associated API pages to mention the following:
INSERTs into the chunk being compressed are blocked; INSERTs into other chunks are not. By default, an exclusive lock is taken at the end of compress_chunk, so it might block other reads on that chunk for a short moment. You can get rid of that lock by setting `SET timescaledb.enable_delete_after_compression TO on;` — compress_chunk then won't take an ACCESS EXCLUSIVE lock, but you will need to vacuum more often.
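The docs page could illustrate this with a minimal sketch; the hypertable and chunk names below are hypothetical placeholders:

```sql
-- Avoid the ACCESS EXCLUSIVE lock taken at the end of compress_chunk.
-- Trade-off: the uncompressed rows are deleted instead of truncated,
-- so the chunk needs more frequent vacuuming afterwards.
SET timescaledb.enable_delete_after_compression TO on;

-- Compress a single chunk. INSERTs into this chunk are blocked while it
-- is being compressed; INSERTs into other chunks are unaffected.
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');

-- Reclaim the space left by the deleted rows.
VACUUM _timescaledb_internal._hyper_1_1_chunk;
```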
These limitations only apply to the chunk being compressed; SELECT and INSERT on other chunks of the hypertable are not affected. Keep in mind, though, that a SELECT will try to access every chunk unless it can do plan-time exclusion.
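A hedged example of the plan-time-exclusion caveat could be useful on the page. The table and column names are hypothetical, and exactly which expressions qualify for plan-time exclusion depends on the TimescaleDB version:

```sql
-- Plan-time exclusion: the WHERE clause compares the partitioning
-- column against constants, so chunks outside the range are excluded
-- during planning and never accessed.
SELECT avg(temperature)
FROM conditions
WHERE time >= '2024-01-01' AND time < '2024-02-01';

-- Without a constant range on the partitioning column, the planner
-- cannot exclude chunks up front, so the query touches every chunk
-- of the hypertable, including the one currently being compressed.
SELECT avg(temperature)
FROM conditions
WHERE device_id = 42;
```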
At the moment, the compression policy processes only one chunk at a time, but you can manually compress multiple chunks in parallel sessions.
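A sketch of manual parallel compression that the page could include; the hypertable name and intervals are hypothetical, and the non-overlapping ranges ensure the two sessions never pick the same chunk:

```sql
-- Session 1: compress the oldest chunks.
SELECT compress_chunk(c)
FROM show_chunks('conditions', older_than => INTERVAL '60 days') AS c;

-- Session 2, run concurrently: compress a separate, newer range.
SELECT compress_chunk(c)
FROM show_chunks('conditions',
                 older_than => INTERVAL '30 days',
                 newer_than => INTERVAL '60 days') AS c;
```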
@svenklemm does this apply to hypercore as well?