How to use carbon-clickhouse with distributed tables?

Open · mcarbonneaux opened this issue 3 years ago • 10 comments

How do I configure carbon-clickhouse with ClickHouse distributed tables?

mcarbonneaux avatar Nov 18 '22 13:11 mcarbonneaux

I am not sure what you mean. I've used distributed tables for inserts there

Felixoid avatar Nov 18 '22 13:11 Felixoid

You use partitioned tables, not distributed tables, in the readme.

clickhouse distributed table: https://clickhouse.com/docs/en/sql-reference/distributed-ddl

mcarbonneaux avatar Nov 19 '22 20:11 mcarbonneaux

With distributed tables you can spread the data across ClickHouse shard nodes... and scale roughly linearly with the number of nodes (depending on the efficiency of the distribution key)...

mcarbonneaux avatar Nov 19 '22 20:11 mcarbonneaux

You should rather use a single Distributed table, not the "ON CLUSTER" clause.

See https://clickhouse.com/docs/en/engines/table-engines/special/distributed/
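
The pattern is one local table per node plus a Distributed table layered on top of it. A minimal sketch, where the cluster, database, and table names are placeholders and sharding by a hash of the metric path is just one possible choice:

CREATE TABLE graphite_dist AS graphite_local
ENGINE = Distributed(my_cluster, default, graphite_local, cityHash64(Path));
-- my_cluster, default and graphite_local are placeholders, adjust to your setup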

Felixoid avatar Nov 19 '22 20:11 Felixoid

That's the documentation I was searching for!

The idea is to store data not on a single node but in a cluster with multiple shards... to scale...

The CREATE TABLE instructions in the readme are for a single node... or have I missed something...

Would it be possible to have an example of CREATE TABLE in distributed mode in the readme?

mcarbonneaux avatar Nov 19 '22 22:11 mcarbonneaux

You should create regular tables on each node in the cluster. After that, you can write to any of the nodes (I use an L7 LB).

For reading from all nodes in one request, use a Distributed table.

When creating the Distributed table, you can set a sharding_key. That allows you to write "to the Distributed table": all incoming data will then be routed by the sharding_key.

Note: when you use rollup-conf = "auto" in graphite-clickhouse, you should set rollup-auto-table so it points to the regular table.

Here are example configs from my prod:

Tables:

CREATE TABLE IF NOT EXISTS graphite_repl ON CLUSTER datalayer (
    `Path`      String  CODEC(ZSTD(3)),
    `Value`     Float64 CODEC(Gorilla, LZ4),
    `Time`      UInt32  CODEC(DoubleDelta, LZ4),
    `Date`      Date    CODEC(DoubleDelta, LZ4),
    `Timestamp` UInt32  CODEC(DoubleDelta, LZ4)
)
ENGINE = ReplicatedGraphiteMergeTree('/clickhouse/tables/{shard}/graphite_repl', '{replica}', 'graphite_rollup')
PARTITION BY toYYYYMMDD(Date)
ORDER BY (Path, Time)
TTL
    Date + INTERVAL 1 WEEK TO VOLUME 'cold_volume',
    Date + INTERVAL 4 MONTH DELETE
SETTINGS
    index_granularity = 512;

CREATE TABLE IF NOT EXISTS graphite_dist ON CLUSTER datalayer AS graphite_repl
ENGINE = Distributed(datalayer, ..., graphite_repl);
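
If you want to write through the Distributed table as mentioned above, it also needs a sharding_key. A sketch of the same table with one, where the database name and the hash of Path are only example values:

CREATE TABLE IF NOT EXISTS graphite_dist ON CLUSTER datalayer AS graphite_repl
ENGINE = Distributed(datalayer, default, graphite_repl, cityHash64(Path));
-- the 'default' database and cityHash64(Path) are assumptions, replace with your own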

carbon-clickhouse:

...
[upload.graphite]
type = "points"
table = "graphite_repl"
...

graphite-clickhouse:

...
[[data-table]]
 table = "graphite_dist"
 rollup-conf = "auto"
 rollup-auto-table = "graphite_repl"
...
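
As a quick sanity check of the read path, a query against the Distributed table fans out to all shards. A sketch, where the metric prefix is only an example:

SELECT Path, max(Time) AS last_point
FROM graphite_dist
WHERE Date = today() AND Path LIKE 'carbon.%'
GROUP BY Path
LIMIT 10;
-- the Distributed table merges results from all shards into one response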

sheyt0 avatar Dec 12 '22 12:12 sheyt0

I will go test that!!

mcarbonneaux avatar Dec 17 '22 12:12 mcarbonneaux

Could it be useful to put chproxy in front to cache requests (https://www.chproxy.org/)?

mcarbonneaux avatar Dec 17 '22 12:12 mcarbonneaux

If you use carbonapi, it can also cache requests. So that depends on your use case.

Overall, I would suggest starting with a simple setup and adding extra pieces once you encounter a bottleneck.

Civil avatar Dec 17 '22 13:12 Civil

Could it be useful to put chproxy in front to cache requests (https://www.chproxy.org/)?

No. chproxy can't cache requests with external data (used in points table queries).

Graphite-clickhouse can cache finder queries (in render requests). Carbonapi can cache the rest up front, at the level of API requests (render, find, tags autocomplete).

So there is no reason to use chproxy for caching. But it is useful as a bouncer / connection pool limiter.

msaf1980 avatar Dec 17 '22 19:12 msaf1980