
Reduce size of posting list of inverted index

Open · cangyin opened this pull request 10 months ago • 9 comments

While it is natural for an inverted index implementation to store a unique row ID for each row in its posting lists (which also brings some performance benefits compared to granule IDs, as the footnote in the docs explains), the resulting posting list files are more precise than many use cases need and therefore too large where storage and I/O resources have to be used carefully.

This PR adds a new setting, inverted_index_row_id_divisor (default 1), which assigns a single row ID to a batch of rows instead of keeping a unique row ID per row. This reduces the size of the posting list files at the cost of some precision loss or performance penalty.
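For intuition, with a divisor of 64 the posting list would store intDiv(row_id, 64) instead of row_id, so 64 consecutive rows collapse into a single entry. A minimal sketch of that mapping in plain SQL (illustration only, not the actual index code path):

-- Illustration only: how row IDs collapse under inverted_index_row_id_divisor = 64.
SELECT
    number AS row_id,
    intDiv(number, 64) AS shared_row_id
FROM numbers(130)
LIMIT 4 OFFSET 62;
-- row_id 62, 63 -> shared_row_id 0; row_id 64, 65 -> shared_row_id 1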

Closes #62627

See the latest test results in the comments below.

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Reduce the size of inverted index posting lists by allowing a row ID to be shared by a batch of rows (new setting inverted_index_row_id_divisor).

Documentation entry for user-facing changes

  • [ ] Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/


Modify your CI run:

NOTE: If you merge the PR with modified CI you MUST KNOW what you are doing. NOTE: Checked options will be applied if set before the CI RunConfig/PrepareRunConfig step.

Include tests (required builds will be added automatically):

  • [ ] Fast test
  • [ ] Integration Tests
  • [ ] Stateless tests
  • [ ] Stateful tests
  • [ ] Unit tests
  • [ ] Performance tests
  • [ ] All with ASAN
  • [ ] All with TSAN
  • [ ] All with Analyzer
  • [ ] Add your option here

Exclude tests:

  • [ ] Fast test
  • [ ] Integration Tests
  • [ ] Stateless tests
  • [ ] Stateful tests
  • [ ] Performance tests
  • [ ] All with ASAN
  • [ ] All with TSAN
  • [ ] All with MSAN
  • [ ] All with UBSAN
  • [ ] All with Coverage
  • [ ] All with Aarch64
  • [ ] Add your option here

Extra options:

  • [ ] do not test (only style check)
  • [ ] disable merge-commit (no merge from master before tests)
  • [ ] disable CI cache (job reuse)

Only specified batches in multi-batch jobs:

  • [ ] 1
  • [ ] 2
  • [ ] 3
  • [ ] 4

cangyin avatar Apr 16 '24 19:04 cangyin


-- prepare table hackernews with data as per https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/invertedindexes

CREATE TABLE hackernews_idx_row_ids AS hackernews
ENGINE = MergeTree ORDER BY (type, author);

CREATE TABLE hackernews_idx_granule_ids AS hackernews
ENGINE = MergeTree ORDER BY (type, author) SETTINGS inverted_index_map_to_granule_id=1;

ALTER TABLE hackernews_idx_row_ids ATTACH PARTITION tuple() FROM hackernews;
ALTER TABLE hackernews_idx_granule_ids ATTACH PARTITION tuple() FROM hackernews;

ALTER TABLE hackernews_idx_row_ids ADD INDEX comment_lowercase(lower(comment)) TYPE inverted;
ALTER TABLE hackernews_idx_granule_ids ADD INDEX comment_lowercase(lower(comment)) TYPE inverted;

ALTER TABLE hackernews_idx_row_ids MATERIALIZE INDEX comment_lowercase;
ALTER TABLE hackernews_idx_granule_ids MATERIALIZE INDEX comment_lowercase;
| Table | Size of *.gin_post |
|---|---|
| hackernews_idx_row_ids | 1.6 GB |
| hackernews_idx_granule_ids | 129 MB |
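
The sizes above are the on-disk sizes of the posting list files inside the part directories; one way to locate them is shown below (the skp_idx_*.gin_post file name is an assumption about the on-disk layout):

-- Find the active part directories, then measure the posting list files there,
-- e.g. `du -ch <path>skp_idx_comment_lowercase.gin_post` (file name assumed).
SELECT name, path
FROM system.parts
WHERE database = currentDatabase()
  AND table = 'hackernews_idx_row_ids'
  AND active;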

cangyin avatar Apr 16 '24 20:04 cangyin

With this optimization, what would be the size of the posting list if the inverted index type is declared as inverted(3)? In our case, the posting list of inverted(3) is 3x bigger than the column itself.

And by the way, the size of the posting list is not counted in the size of the inverted index. See #62681
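
For reference, the index sizes the server reports can be read from system.data_skipping_indices, and per #62681 the *.gin_post posting lists are not reflected in those numbers (query sketch, assuming the usual columns of that system table):

-- Reported index sizes; per #62681 the *.gin_post posting lists are not included here.
SELECT
    table,
    name,
    formatReadableSize(data_compressed_bytes)   AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed
FROM system.data_skipping_indices
WHERE database = currentDatabase() AND table LIKE 'hackernews%';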

FrankChen021 avatar Apr 18 '24 05:04 FrankChen021

This is an automated comment for commit 3d2c86ca98cf1cde2a61bafa47e169cf6953ed37 with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

| Check name | Description | Status |
|---|---|---|
| CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ⏳ pending |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ❌ failure |

Successful checks

| Check name | Description | Status |
|---|---|---|
| A Sync | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Docs check | Builds and tests the documentation | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test has failed at least once, or was too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment | ✅ success |
| Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| PR Check | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Style check | Runs a set of checks to keep the code style clean. If some of the checks failed, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on a server of the last released version and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success |

robot-clickhouse-ci-2 avatar Apr 20 '24 01:04 robot-clickhouse-ci-2

would be interesting to see a perf comparison if you have any at hand

nickitat avatar Apr 26 '24 12:04 nickitat

would be interesting to see a perf comparison if you have any at hand

How do I make a perf comparison? Is that a perf diff?

cangyin avatar Apr 26 '24 15:04 cangyin

would be interesting to see a perf comparison if you have any at hand

How do I make a perf comparison? Is that a perf diff?

I thought you might have already tried, for some dataset and set of queries, how CH performs with the per-row and per-granule index.

nickitat avatar Apr 26 '24 16:04 nickitat

I am thinking about throwing the current logic away, because mapping to granule IDs makes the inverted index behave like a bloom filter with a zero false-positive rate:

  1. With a bloom filter, terms are mapped to their existence (true or false) in all rows within a granule.
  2. With an inverted index storing row IDs, terms are mapped to their existence in a single row within a granule, which is too precise and can be expensive, at least on slow storage where I/O dominates query performance.

If we instead divide the row ID by a constant divisor, a single row ID is assigned to inverted_index_row_id_divisor rows instead of a whole granule, which can be a bit more useful.
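
As a rough way to put the three schemes side by side, the number of rows a single posting entry can address (assuming the default granule size of 8192 rows):

-- Rows addressed by a single posting entry under each scheme (8192-row granules assumed).
SELECT 'granule IDs (bloom-filter-like)' AS scheme, 8192 AS rows_per_entry
UNION ALL SELECT 'unique row IDs (divisor = 1)', 1
UNION ALL SELECT 'shared row IDs (divisor = 64)', 64
UNION ALL SELECT 'shared row IDs (divisor = 512)', 512;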

cangyin avatar Apr 26 '24 17:04 cangyin

| Table | Index | Divisor | Size of *.gin_post | Dropped Granules | Index Cold Run | Index Hot Run | Cold Run | Hot Run |
|---|---|---|---|---|---|---|---|---|
| hackernews_tokenbf (baseline for indexing on tokens) | tokenbf_v1(254935,2,0) | N/A | N/A | 2956 | 2.452s | 1.185s | 11.099s | 4.310s |
| hackernews_row_ids | inverted(0) | 1 | 1.6G | 2956 | 2.218s | 0.055s | 6.332s | 3.366s |
| hackernews_shared_row_ids2 | inverted(0) | 2 | 1.4G | 2956 | 2.292s | 0.058s | 6.794s | 3.431s |
| hackernews_shared_row_ids16 | inverted(0) | 16 | 878M | 2956 | 2.170s | 0.058s | 5.923s | 3.025s |
| hackernews_shared_row_ids64 | inverted(0) | 64 | 737M | 2955 | 2.196s | 0.056s | 6.944s | 3.908s |
| hackernews_shared_row_ids128 | inverted(0) | 128 | 619M | 2952 | 2.270s | 0.054s | 7.403s | 3.104s |
| hackernews_shared_row_ids512 | inverted(0) | 512 | 362M | 2933 | 2.445s | 0.055s | 6.749s | 3.761s |
| hackernews_ngrambf (baseline for indexing on ngrams) | ngrambf_v1(3,254935,2,0) | N/A | N/A | 521 | 2.595s | 1.674s | 44.267s | 17.082s |
| hackernews_row_ids_3grams | inverted(3) | 1 | 4.6G | 2788 | 0.849s | 0.086s | 5.673s | 4.496s |
| hackernews_shared_row_ids_3grams_2 | inverted(3) | 2 | 3.5G | 2731 | 0.804s | 0.099s | 5.037s | 4.557s |
| hackernews_shared_row_ids_3grams_4 | inverted(3) | 4 | 2.7G | 2568 | 0.750s | 0.092s | 6.329s | 5.240s |
| hackernews_shared_row_ids_3grams_8 | inverted(3) | 8 | 2.1G | 2321 | 0.822s | 0.094s | 9.455s | 6.604s |
| hackernews_shared_row_ids_3grams_16 | inverted(3) | 16 | 1.6G | 1664 | 0.869s | 0.089s | 12.836s | 9.847s |
| hackernews_shared_row_ids_3grams_64 | inverted(3) | 64 | 1.2G | 801 | 0.859s | 0.083s | 17.712s | 16.562s |
| hackernews_shared_row_ids_3grams_128 | inverted(3) | 128 | 845M | 699 | 0.853s | 0.088s | 17.798s | 15.854s |
| hackernews_shared_row_ids_3grams_512 | inverted(3) | 512 | 413M | 616 | 0.728s | 0.093s | 16.180s | 15.887s |

SQLs Used

  1. Create Tables
-- prepare table hackernews with data as per https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/invertedindexes

CREATE TABLE hackernews_row_ids AS hackernews
ENGINE = MergeTree ORDER BY (type, author)
SETTINGS max_bytes_to_merge_at_max_space_in_pool=1073741824;

ALTER TABLE hackernews_row_ids ATTACH PARTITION tuple() FROM hackernews;
ALTER TABLE hackernews_row_ids ADD INDEX comment_lowercase(lower(comment)) TYPE inverted;
ALTER TABLE hackernews_row_ids MATERIALIZE INDEX comment_lowercase;

CREATE TABLE hackernews_shared_row_ids512 AS hackernews
ENGINE = MergeTree ORDER BY (type, author)
SETTINGS max_bytes_to_merge_at_max_space_in_pool=1073741824,inverted_index_row_id_divisor=512;

ALTER TABLE hackernews_shared_row_ids512 ATTACH PARTITION tuple() FROM hackernews;
ALTER TABLE hackernews_shared_row_ids512 ADD INDEX comment_lowercase(lower(comment)) TYPE inverted;
ALTER TABLE hackernews_shared_row_ids512 MATERIALIZE INDEX comment_lowercase;
-- prepare table hackernews with data as per https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/invertedindexes

CREATE TABLE hackernews_row_ids_3grams AS hackernews
ENGINE = MergeTree ORDER BY (type, author)
SETTINGS max_bytes_to_merge_at_max_space_in_pool=1073741824;

ALTER TABLE hackernews_row_ids_3grams ATTACH PARTITION tuple() FROM hackernews;
ALTER TABLE hackernews_row_ids_3grams ADD INDEX comment_lowercase(lower(comment)) TYPE inverted(3);
ALTER TABLE hackernews_row_ids_3grams MATERIALIZE INDEX comment_lowercase;

CREATE TABLE hackernews_shared_row_ids_3grams_512 AS hackernews
ENGINE = MergeTree ORDER BY (type, author)
SETTINGS max_bytes_to_merge_at_max_space_in_pool=1073741824,inverted_index_row_id_divisor=512;

ALTER TABLE hackernews_shared_row_ids_3grams_512 ATTACH PARTITION tuple() FROM hackernews;
ALTER TABLE hackernews_shared_row_ids_3grams_512 ADD INDEX comment_lowercase(lower(comment)) TYPE inverted(3);
ALTER TABLE hackernews_shared_row_ids_3grams_512 MATERIALIZE INDEX comment_lowercase;
  2. Dropped Granules (Total 3513)
SELECT (granules[2]) - (granules[1]) AS dropped,granules[2] AS total
FROM
(
    SELECT arrayMap(x -> toUInt32(x), splitByChar('/', splitByString(': ', explain)[2])) AS granules
    FROM
    (
        EXPLAIN indexes=1 SELECT count() FROM hackernews_shared_row_ids_3grams_512 WHERE hasToken(lower(comment), 'clickhouse')
    )
    WHERE explain LIKE '%Granules: %'
    OFFSET 1
)
  3. Index Cold/Hot Run
-- Restart ClickHouse to drop the cache of inverted index
-- `echo 3 > /proc/sys/vm/drop_caches` to drop page caches
EXPLAIN indexes=1 SELECT count() FROM hackernews_shared_row_ids_3grams_512 WHERE hasToken(lower(comment), 'clickhouse')
  4. Cold Run / Hot Run
-- Restart ClickHouse to drop the cache of inverted index
-- `echo 3 > /proc/sys/vm/drop_caches` to drop page caches
SELECT count() FROM hackernews_shared_row_ids_3grams_512 WHERE hasToken(lower(comment), 'clickhouse')
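
One way (not necessarily how the numbers above were collected) to read the wall-clock timings back after the runs is system.query_log:

-- Read back the measured durations after running the queries above.
SELECT
    event_time,
    query_duration_ms,
    read_rows
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE '%hasToken%'
ORDER BY event_time DESC
LIMIT 10;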

cangyin avatar Apr 28 '24 15:04 cangyin

what would be the size of the posting list if the inverted index type is declared as inverted(3)?

@FrankChen021 Please see updated result above.

would be interesting to see a perf comparison if you have any at hand

@nickitat Please see updated result above.

cangyin avatar Apr 28 '24 15:04 cangyin

Dear @rschu1ze, this PR hasn't been updated for a while. You will be unassigned. Will you continue working on it? If so, please feel free to reassign yourself.

woolenwolfbot[bot] avatar Jun 25 '24 16:06 woolenwolfbot[bot]

@cangyin Are you still working on this?

FrankChen021 avatar Jul 18 '24 03:07 FrankChen021