guillesd
Are you sure that the filter is not pushed down in 1.4.3? I made this PR https://github.com/duckdb/duckdb/pull/19911 which precisely disables this optimization when there is a filter. Maybe it is better...
Hey @gsueur! I did some checks on a Parquet file in remote S3, and indeed the filter + order by query with a limit is slightly slower. In my testing...
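For anyone wanting to reproduce: a minimal timing sketch of that query shape (the bucket, file, and column names below are hypothetical, and the S3 secret/credentials setup is omitted):

```python
import time

import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
# CREATE SECRET / credentials setup for S3 omitted here.

# Hypothetical remote Parquet file and columns; this is the
# filter + ORDER BY + LIMIT shape discussed above.
query = """
    SELECT *
    FROM read_parquet('s3://my-bucket/my-file.parquet')
    WHERE col_a = 42
    ORDER BY col_b
    LIMIT 10
"""

start = time.perf_counter()
con.sql(query).fetchall()
print(f"elapsed: {time.perf_counter() - start:.3f}s")
```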
Definitely reproduced! Thanks!
Hey! I don't get the same error as you do; actually, I get quite a nice error: `Not implemented Error: Only literals (e.g. 42 or 'hello world') are supported...`
DuckDB already has a native integration with Polars:

```python
duckdb.sql("SELECT ...").pl(lazy=True)
```

Not sure if this is what we want to use to build the DuckLake integration, but I'd be...
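For context, a small self-contained sketch of that integration (the toy table and columns are made up); `.pl()` gives an eager Polars DataFrame, while `lazy=True`, as used above, should give a LazyFrame that can be composed further before collecting:

```python
import duckdb
import polars as pl

# Toy data just for illustration.
duckdb.sql("CREATE TABLE t AS SELECT range AS i, range % 3 AS g FROM range(10)")

# Eager: materializes a polars.DataFrame.
df = duckdb.sql("SELECT g, sum(i) AS total FROM t GROUP BY g").pl()

# Lazy: a polars.LazyFrame that Polars can keep optimizing until .collect().
lf = duckdb.sql("SELECT * FROM t").pl(lazy=True)
result = lf.filter(pl.col("g") == 1).collect()
print(df, result)
```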
Yes, I see that for some operations they have a native client, @cmdlineluser. And yes, @rchui, we would need to be able to write to a database to make writing...
Thanks! We are on it
So the problem here is that when we `add_data_files`, because we want to let the user point at a file wherever it lives, we store absolute paths instead of relative...
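For illustration, a hedged sketch of the flow being described (the attach path, table, and Parquet file below are all hypothetical, and the exact `ducklake_add_data_files` signature may differ between DuckLake versions):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")

# Hypothetical lake: metadata in a local DuckDB file, data files under data/.
con.execute("ATTACH 'ducklake:metadata.ducklake' AS my_lake (DATA_PATH 'data/')")
con.execute("CREATE TABLE IF NOT EXISTS my_lake.t (i INTEGER)")

# Register a pre-existing Parquet file that lives outside DATA_PATH. Because the
# file can sit anywhere, its location ends up recorded as an absolute path in
# the metadata rather than a path relative to DATA_PATH.
con.execute("CALL ducklake_add_data_files('my_lake', 't', '/tmp/existing.parquet')")
```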
Hi @soumitsalman, can you check whether you are indeed using DuckLake 0.2? DuckLake 0.3 should be the correct version to run with DuckDB 1.4 (I would think 0.2 would be incompatible)...
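If it helps, a quick sketch to check which versions are actually loaded:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")

# DuckDB version and the loaded DuckLake extension version.
print(con.sql("SELECT version()").fetchone())
print(
    con.sql(
        "SELECT extension_name, extension_version, loaded "
        "FROM duckdb_extensions() WHERE extension_name = 'ducklake'"
    ).fetchall()
)
```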
It is actually not the filename but the `data_file_id`; here is an explanation of how it is generated: https://ducklake.select/docs/stable/specification/tables/ducklake_data_file. This key always needs to be taken from the `next_file_id`...
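For reference, a hedged sketch of where that counter can be inspected, assuming a lake attached as `my_lake` whose metadata catalog is reachable under the `__ducklake_metadata_` prefix (the attach path is hypothetical, and the `ducklake_snapshot` column layout is my reading of the spec):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")
con.execute("ATTACH 'ducklake:metadata.ducklake' AS my_lake (DATA_PATH 'data/')")

# The latest snapshot row carries the next_file_id counter that new
# data_file_id values are assigned from.
row = con.sql(
    """
    SELECT snapshot_id, next_file_id
    FROM __ducklake_metadata_my_lake.ducklake_snapshot
    ORDER BY snapshot_id DESC
    LIMIT 1
    """
).fetchone()
print(row)
```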