gfunc

13 comments by gfunc

Experienced the same error after upgrading to the 0.5.0 release, configured to use the gRPC port (9100) with Spark SQL, but things worked when switching to the HTTP port (8123).

#### jars used

-...
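For context, a minimal sketch of the connector settings involved. The catalog class and option names below (`protocol`, `grpc_port`, `http_port`) are assumptions about the 0.5.x-era configuration style, not taken from the thread:

```
# Assumed option names; sketch only, not an excerpt from the issue.
# Failing setup: gRPC on 9100
spark.sql.catalog.clickhouse=xenon.clickhouse.ClickHouseCatalog
spark.sql.catalog.clickhouse.host=clickhouse-host
spark.sql.catalog.clickhouse.protocol=grpc
spark.sql.catalog.clickhouse.grpc_port=9100

# Working setup: switch to HTTP on 8123
# spark.sql.catalog.clickhouse.protocol=http
# spark.sql.catalog.clickhouse.http_port=8123
```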

> > Please disable compression when you are using gRPC w/ old (before 22.3) ClickHouse and see what will happen.
>
> ```
> spark.clickhouse.write.compression.codec=none
> spark.clickhouse.read.compression.codec=none
> ```

The same...

Using a fork with support for the distributed engine as well; repo [here](https://github.com/gfunc/dbt-clickhouse). My solution to distributed tables was to create the on-cluster distributed table with the model name, in...
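A sketch of that pattern with hypothetical names (illustrating the shape of the DDL, not the fork's actual macros): a shard-local storage table, plus a Distributed table that carries the model name so downstream `ref`s query all shards:

```
-- Hypothetical names throughout. Shard-local storage table:
CREATE TABLE analytics.events_local ON CLUSTER my_cluster
(
    event_id   UInt64,
    event_time DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_local', '{replica}')
ORDER BY event_id;

-- The Distributed table takes the model name:
CREATE TABLE analytics.events ON CLUSTER my_cluster
AS analytics.events_local
ENGINE = Distributed('my_cluster', 'analytics', 'events_local', rand());
```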

I started to merge my approach toward the distributed table engine, and I want to start a discussion early on about the handling of `unique_keys`. My production env has...
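To make the concern concrete, a sketch with hypothetical names of the constraint a distributed incremental model runs into: deduplication by a unique key is shard-local, so the Distributed table's sharding key has to be derived from the unique key.

```
-- Hypothetical names. Sharding by the unique key keeps every version of a
-- given user_id on the same shard, so shard-local dedup stays correct;
-- sharding by rand() instead could leave duplicates split across shards.
CREATE TABLE analytics.dim_users ON CLUSTER my_cluster
AS analytics.dim_users_local
ENGINE = Distributed('my_cluster', 'analytics', 'dim_users_local', cityHash64(user_id));
```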

> Optimize is by nature an expensive operation, since in most cases it rewrites all of the data in the table. (You can also OPTIMIZE a ReplacingMergeTree table, which is...
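For reference, the statement under discussion (table name hypothetical):

```
-- Forces an unscheduled merge; on a ReplacingMergeTree this collapses rows
-- sharing the same sorting key, which is exactly why it is expensive.
OPTIMIZE TABLE analytics.dim_users_local ON CLUSTER my_cluster FINAL;
```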

Perhaps it would be nice to add a switch for this feature (snapshotting data on an S3 disk)? For my use case, I need to back up metadata, and metadata only.

Hi @genzgd, thanks for your comment. I think the answer is no. To my understanding, the table materialization (not incremental) is currently not much affected by the `full-refresh` flag, except...
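For context, the flag in question is the one passed on the dbt command line (model name hypothetical):

```
# Rebuilds the selected model from scratch instead of incrementally.
dbt run --full-refresh --select my_model
```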

I will take a look. I think it is a problem with the exchange macro: it should not emit the `on cluster` clause for table materialization. Hi @ikeniborn, are you expecting...
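For context, the exchange macro wraps ClickHouse's atomic table swap; a sketch with hypothetical table names, showing the `ON CLUSTER` form at issue:

```
-- Atomically swaps the freshly built table with the live one. The suspected
-- bug: emitting ON CLUSTER here for a plain table materialization.
EXCHANGE TABLES analytics.dim_users__dbt_new AND analytics.dim_users ON CLUSTER my_cluster;
```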

I can reproduce the problem now. It seems to be a compatibility issue: the model `dimension.dim_twitter_pinned_tweet` already exists on the cluster, but with the latest `dbt-clickhouse` ver. 1.4.9, during the creation...

In #206, my solution to this problem is to provide a detailed message reflecting the error and to make `full-refresh` workable in this situation. I am not sure this...