
Cleaning up stale parts

Open ruslandoga opened this issue 1 year ago • 20 comments

👋

Just wanted to quickly check if chdb automatically cleans stale MergeTree parts after optimize table commands. And if so, how does it work?

ruslandoga avatar Sep 07 '23 13:09 ruslandoga

No, we haven't done this yet. To be fixed.

auxten avatar Sep 07 '23 13:09 auxten

In that case this chat with GPT that explains the naming might be useful: https://chat.openai.com/share/94a7fa2d-5f73-4f9f-b2ba-681ed324ae35

ruslandoga avatar Sep 07 '23 14:09 ruslandoga

Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.

ruslandoga avatar Sep 07 '23 14:09 ruslandoga
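For readers following along: the idea above relies on the MergeTree part naming scheme, in which a part name encodes `partition_minblock_maxblock_level`. A quick sketch of the "covered part" check (this is illustrative Python, not chdb's actual cleanup code; the names `PartInfo`, `parse_part_name`, and `covered_parts` are made up for this example):

```python
from typing import NamedTuple

class PartInfo(NamedTuple):
    partition: str
    min_block: int
    max_block: int
    level: int

def parse_part_name(name: str) -> PartInfo:
    # e.g. "all_1_5_2" -> partition "all", blocks 1..5, merge level 2
    partition, min_b, max_b, level = name.rsplit("_", 3)
    return PartInfo(partition, int(min_b), int(max_b), int(level))

def covered_parts(names: list[str]) -> list[str]:
    """Return parts whose block range is fully contained in a
    higher-level part of the same partition (i.e. stale parts)."""
    infos = [parse_part_name(n) for n in names]
    stale = []
    for name, p in zip(names, infos):
        for q in infos:
            if (q is not p and q.partition == p.partition
                    and q.min_block <= p.min_block
                    and p.max_block <= q.max_block
                    and q.level > p.level):
                stale.append(name)
                break
    return stale
```

Here `all_1_2_1` (a level-1 merge of blocks 1..2) covers both `all_1_1_0` and `all_2_2_0`, so those two are the stale candidates.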

There should be some code in ClickHouse doing that in a background thread pool. We just need to find it and run it immediately on OPTIMIZE. Any interest in making a patch? @ruslandoga

auxten avatar Sep 08 '23 07:09 auxten

Yes, I'm interested and will try doing it today! :)

ruslandoga avatar Sep 09 '23 03:09 ruslandoga

The logic seems to be in MergeTreeData::clearOldPartsFromFilesystem, but I'm a bit stuck compiling ClickHouse; it's already taken over four hours and I'm at [7671/8349] Building CXX object src/CMakeFiles/dbms.dir/Interpreters/DatabaseAndTableWithAlias.cpp.o 😅

I'm new to C++ so I wonder if there is a faster way to build ClickHouse / chdb just for tests?

ruslandoga avatar Sep 09 '23 11:09 ruslandoga

Hey @ruslandoga I'm afraid the first build is as painful as you're experiencing (our action takes > 5h to build) but if you add ccache to the mix the next compile and link rounds with minor modifications will be much faster. If you're on our discord feel free to ping on the dev channel and we'll try to assist.

lmangani avatar Sep 09 '23 11:09 lmangani

I tried to find your discord server but failed :)

ruslandoga avatar Sep 11 '23 06:09 ruslandoga

I tried to find your discord server but failed :)

lmangani == qxip

auxten avatar Sep 11 '23 06:09 auxten

Thank you @auxten, but google doesn't return anything for "qxip discord" either.

ruslandoga avatar Sep 11 '23 06:09 ruslandoga

Here we are: https://discord.com/channels/1098133460310294528/1125668654965604422

auxten avatar Sep 11 '23 06:09 auxten

It doesn't seem like this link leads me anywhere :) I think it might be specific to your user account. Or maybe I need to be invited first.

ruslandoga avatar Sep 11 '23 06:09 ruslandoga

Hello @ruslandoga the link and invite to our discord is on the chdb readme in the contact section. Once you join you'll find us all at once on the chdb channels! Looking forward to you joining!

lmangani avatar Sep 11 '23 07:09 lmangani

A small note on the suggested implementation:

We just need to find it out, run it immediately on Optimize.

It seems like ClickHouse cleans old parts about eight minutes after they become inactive (i.e. after they stop being referenced). Apparently, it's done this way to make sure the dirty pages have been fsynced (assuming dirty_writeback_centisecs = 5 minutes) so that in case of a crash before the fsync, the data could be restored. So I guess a custom implementation would need to call fsync on the new parts and only then clean the old parts.

I also found in the docs that the old parts can be deleted with SQL, so technically, what I initially wanted could be achieved like this:

INSERT INTO events FORMAT RowBinary <...rowbinary...>;

-- in the background process, every few minutes
SELECT * FROM system.parts WHERE table = 'events'; -- check how many parts there are, if too many, run OPTIMIZE
OPTIMIZE TABLE events;
-- also if there are inactive parts older than 8 minutes, drop them
ALTER TABLE events DROP PART '<part_id goes here>';

ruslandoga avatar Oct 11 '23 08:10 ruslandoga
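The background-process loop described above can be sketched as a pure decision function: given rows from `system.parts`, it returns the SQL statements to run. This is a hedged sketch, not chdb's API; the thresholds and the `events` table name are illustrative, and actually executing the statements (e.g. via `chdb.query`) is left out.

```python
import time

MAX_ACTIVE_PARTS = 10          # arbitrary threshold before running OPTIMIZE
STALE_AFTER_SECONDS = 8 * 60   # mirrors the ~8-minute grace period discussed above

def plan_maintenance(parts, now=None):
    """parts: rows of (name, active, modification_time), as selected from
    `SELECT name, active, modification_time FROM system.parts WHERE table = 'events'`.
    Returns the SQL statements the maintenance pass should run."""
    now = time.time() if now is None else now
    statements = []
    active = [p for p in parts if p[1]]
    if len(active) > MAX_ACTIVE_PARTS:
        statements.append("OPTIMIZE TABLE events")
    for name, is_active, mtime in parts:
        if not is_active and now - mtime > STALE_AFTER_SECONDS:
            statements.append(f"ALTER TABLE events DROP PART '{name}'")
    return statements
```

Keeping the decision logic separate from query execution makes the "too many parts?" and "older than 8 minutes?" rules easy to test without a running database.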

@ruslandoga Nice. I think the 8-minute delay is reasonable in ClickHouse, but not in an embedded database. I would patch chdb to run the cleanup automatically.

auxten avatar Oct 13 '23 03:10 auxten

There are MergeTree settings that control the use of fsync(). Please check min_rows_to_fsync_after_merge and/or min_compressed_bytes_to_fsync_after_merge.

The relevant code is here

devcrafter avatar Oct 26 '23 08:10 devcrafter
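To connect this to the fsync concern raised earlier: setting these merge-fsync thresholds very low forces a sync after (nearly) every merge, so freshly merged parts are durable before old ones are dropped. A hypothetical table definition (the `events` schema is made up for illustration; the two setting names are the ones from the comment above):

```python
# SQL kept as a string so it can be passed to any client (e.g. chdb.query).
create_sql = """
CREATE TABLE events (ts DateTime, name String)
ENGINE = MergeTree
ORDER BY ts
SETTINGS min_rows_to_fsync_after_merge = 1,
         min_compressed_bytes_to_fsync_after_merge = 1
"""
```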

Hey @ruslandoga I'm afraid the first build is as painful as you're experiencing (our action takes > 5h to build) but if you add ccache to the mix the next compile and link rounds with minor modifications will be much faster. If you're on our discord feel free to ping on the dev channel and we'll try to assist.

There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

devcrafter avatar Oct 26 '23 08:10 devcrafter

There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

I don't see any related issues with the build process, and we have the chdb-builder helper.

lmangani avatar Oct 26 '23 09:10 lmangani

There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

I don't see any related issues with the build process, and we have the chdb-builder helper.

I had no idea you had it. I just read your comment and thought this should be covered by the build documentation. Just trying to help.

devcrafter avatar Oct 26 '23 10:10 devcrafter

Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.

Would this be safe to do?

poundifdef avatar Apr 18 '24 19:04 poundifdef