
dbt (http://getdbt.com) adapter for DuckDB (http://duckdb.org)

Results: 74 dbt-duckdb issues, sorted by recently updated

dbt-duckdb already has a ["how to connect" (`/duckdb-setup`) page](https://docs.getdbt.com/docs/core/connect-data-platform/duckdb-setup), but we recommend adding a second page: how to configure (`/duckdb-configs`). We have a doc that gives some guidance: [Build, test, document:...
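For context, the kind of content the proposed `/duckdb-configs` page would build on starts from a minimal connection profile like the sketch below; the profile name, path, and thread count are illustrative:

```yaml
# profiles.yml -- minimal dbt-duckdb profile (names and values illustrative)
my_project:
  target: dev
  outputs:
    dev:
      type: duckdb
      path: local.duckdb   # on-disk database file; use ':memory:' for in-memory
      threads: 4
```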

Bumps [pyarrow](https://github.com/apache/arrow) from 18.1.0 to 20.0.0. Release notes sourced from pyarrow's releases: Apache Arrow 20.0.0 Release Notes (https://arrow.apache.org/release/20.0.0.html); Apache Arrow 20.0.0 RC2 Release Notes, Release Candidate 20.0.0 RC2; Apache...

dependencies
python

It would be nice to leverage the new dbt engine in dbt-duckdb. Possibly related issue on their end: https://github.com/dbt-labs/dbt-fusion/issues/46

One of the most common use cases for DuckDB is running SQL tests locally. [SQLMesh](https://github.com/TobikoData/sqlmesh) supports this natively via [SQLGlot](https://github.com/tobymao/sqlglot), but no alternative exists for the dbt workflow. There's global...
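For comparison, the closest stock-dbt workflow today is running ordinary schema tests against a DuckDB target; a minimal sketch, with hypothetical model and column names:

```yaml
# models/schema.yml (hypothetical model and column names)
version: 2
models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - not_null
          - unique
```

With a `type: duckdb` profile, `dbt test` executes these assertions against the local DuckDB database, with no warehouse connection required.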

My pipeline was working fine for a month, until the data volume reached around 50 million rows. It now throws an error like this: `what(): {"exception_type":"INTERNAL","exception_message":"Attempted to access...

This still doesn't work, unfortunately, but at least it's much more up to date than my last attempt to fix/implement #161.

- resolved an issue where writing a new partition to Glue for a table with the same schema would result in multiple Glue schema versions (instead of just adding a partition)...

Using contract enforcement together with a sql_header can cause the DuckDB adapter to encounter errors when it attempts to describe the structure of a query, because the sql_header is included...
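A minimal sketch of the combination described there; the model name, column, and header statement are illustrative, not taken from the issue:

```yaml
# models/schema.yml (illustrative)
version: 2
models:
  - name: my_model
    config:
      contract:
        enforced: true
    columns:
      - name: id
        data_type: integer
```

```sql
-- models/my_model.sql (illustrative)
{% call set_sql_header(config) %}
SET memory_limit = '2GB';  -- any header statement exercises the same code path
{% endcall %}

select 1 as id
```

Because the sql_header is prepended when the adapter describes the query to validate the contract, the header statement ends up inside the describe call.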

bug

Hello, I am running into an issue when using the external materialization with the `per_thread_output` option. This option is supposed to create a number of files based on the number of...
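For reference, the configuration in question looks roughly like the sketch below; the location and upstream model are illustrative, and the `options` dict is assumed to be forwarded to DuckDB's `COPY` statement, where `PER_THREAD_OUTPUT` produces one file per writer thread:

```sql
-- models/my_external_model.sql (illustrative)
{{ config(
    materialized='external',
    location='output/my_table',
    options={'per_thread_output': true}
) }}

select * from {{ ref('upstream_model') }}
```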

I took the current example from the readme and used my large multi-gig model to test the chunking:

```python
import pyarrow as pa

def batcher(batch_reader: pa.RecordBatchReader):
    for batch in batch_reader:
        ...
```
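For reference, a complete version of that README pattern might look like the sketch below; the loop body is an assumption, since the quoted snippet is truncated right after the `for` line:

```python
import pyarrow as pa

def batcher(batch_reader: pa.RecordBatchReader):
    # Stream one RecordBatch at a time rather than materializing
    # the whole multi-gigabyte result set in memory.
    for batch in batch_reader:
        yield batch  # assumed body; the original snippet is cut off here
```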