Geoff Genz

Results: 77 comments by Geoff Genz

That isn't on the short-term roadmap, and I don't have any sense of how complex it will be (since apparently there isn't any high-level documentation on Python models)...

From what I can see in the updated dbt documentation, Python models run on the "platform", which in this case would be ClickHouse. At the moment there is no way...

Reopening this so we don't get more duplicates.

We'll update the README, but we will need some code changes to make `delete+insert` the default strategy for current ClickHouse versions.

I could see handling this by checking specifically for the Go zero value (and switching it to the ClickHouse 1970-01-01 zero value), but otherwise leaving the overflow logic in place...
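The idea, sketched in Python purely for illustration (the driver code in question is Go): treat the Go zero time, 0001-01-01T00:00:00Z, as a special case that maps to the ClickHouse epoch, and let every other value fall through to the existing overflow handling. The function name and constants below are hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical illustration of the proposed handling; the real change would live
# in the Go driver, not in Python.
GO_ZERO_TIME = datetime(1, 1, 1, tzinfo=timezone.utc)         # Go's time.Time zero value
CLICKHOUSE_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)  # earliest DateTime value

def normalize_datetime(value: datetime) -> datetime:
    """Map the Go zero value to the ClickHouse epoch; pass everything else through
    unchanged so the existing overflow logic still applies."""
    if value == GO_ZERO_TIME:
        return CLICKHOUSE_EPOCH
    return value
```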

Thanks for looking into it! I understand the cache key concern and I'm not sure what the right balance is without adding a lot of complexity. I'll see about doing...

That seems like a very large number of blocks for a 30k-row response. Are they very wide rows? How large is the entire result set? Your solution basically "unstreams"...
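For context, a minimal sketch of the two paths being discussed, using clickhouse-connect's public API (the host, table name, and query are placeholders): the streaming path hands back one DataFrame per ClickHouse block, while the "unstreamed" path pulls the entire result into a single DataFrame, which is roughly what collecting and concatenating all of the blocks yourself amounts to.

```python
import clickhouse_connect

# Placeholder connection and query for illustration.
client = clickhouse_connect.get_client(host='localhost')
query = 'SELECT * FROM example_table LIMIT 30000'  # hypothetical table

# Streaming path: one DataFrame per block, so memory stays bounded by block size.
block_count = 0
with client.query_df_stream(query) as stream:
    for block_df in stream:
        block_count += 1
print(f'received {block_count} blocks')

# Non-streaming ("unstreamed") path: the full result set as a single DataFrame.
full_df = client.query_df(query)
print(len(full_df))
```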

Assuming you're using `df_result`, you could probably move the suggested fix to the `df_close` method (instead of having `df_close` call `_df_stream`), and that would be a good improvement without affecting...
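A loose sketch of the shape that suggestion might take: build the final DataFrame inside `df_close` itself rather than having `df_close` loop over `_df_stream` and concatenate its output. Everything here except the two method names is invented for illustration and is not the library's actual internals.

```python
import pandas as pd

# Hypothetical stand-in for the real result object; assume each block is a dict
# of column name -> list of values.
class HypotheticalQueryResult:
    def __init__(self, blocks):
        self._blocks = blocks

    def _df_stream(self):
        # Streaming path: yield one DataFrame per block as it is consumed.
        for block in self._blocks:
            yield pd.DataFrame(block)

    def df_close(self):
        # Non-streaming path: assemble the final DataFrame directly from the
        # collected blocks instead of concatenating per-block DataFrames.
        columns = {}
        for block in self._blocks:
            for name, values in block.items():
                columns.setdefault(name, []).extend(values)
        return pd.DataFrame(columns)
```

With blocks like `[{'x': [1, 2]}, {'x': [3]}]`, `df_close` would return a three-row DataFrame without ever materializing the intermediate per-block frames.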

Just to reiterate, I think your optimization makes perfect sense in the non-streaming case and I have no problem with implementing that in the next release. It's a fairly small...

@georgipeev - there's an attempted improvement in 0.7.2. In order to maintain the correct dtypes, I had to use a somewhat different approach, and I don't know how it compares...
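To illustrate the dtype concern (this is not the 0.7.2 implementation, just a demonstration of why naively concatenating per-block DataFrames can change column types): a block that contains NULLs gets upcast to float64, and the upcast then spreads to the whole column on concat, whereas building the column once with an explicit nullable dtype keeps integer semantics.

```python
import pandas as pd

# Per-block construction: the second block contains a NULL, so pandas upcasts it.
block1 = pd.DataFrame({'value': [1, 2]})     # inferred as int64
block2 = pd.DataFrame({'value': [None, 4]})  # NULL forces float64

concatenated = pd.concat([block1, block2], ignore_index=True)
print(concatenated['value'].dtype)           # float64 after upcasting

# Building the column in one pass with an explicit nullable integer dtype instead:
all_values = [1, 2, None, 4]
combined = pd.DataFrame({'value': pd.array(all_values, dtype='Int64')})
print(combined['value'].dtype)               # Int64 (nullable), integers preserved
```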