Robbert Hofman
Thanks! Indeed, the cast is unnecessary. Changed it in the PR. Do you think it makes sense to have `current_timestamp(6)` there by default, so that not every Iceberg user has...
Ok, makes sense. Thanks a lot for following up so quickly! :)
I found another case where this `timestamp(3)` vs. `timestamp(6) with time zone` thing causes trouble. If you run `dbt snapshot` on a source table, then add a `timestamp(6) with time...
I will, thank you!
Ah indeed, it seems like https://github.com/starburstdata/dbt-trino/blob/fa901faf6f48f14d812b8cd5796b7a4964ea03d7/dbt/adapters/trino/column.py#L32-L36 is the culprit. I'm not sure why you're rebuilding the data type from the regex matches instead of passing on the raw one, @damian3031...
> The reason is that `from_description` should return `data_type` and `char_size` (or `scale` and `precision`) as separate tuple elements. So, if [raw_data_type](https://github.com/starburstdata/dbt-trino/blob/v1.4.0/dbt/adapters/trino/column.py#L23) argument is **_timestamp(6) with time zone_**, this method...
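To make the quoted behavior concrete, here is a minimal, hypothetical sketch (not the actual dbt-trino code) of what a `from_description`-style parser does: it splits a raw type such as `timestamp(6) with time zone` into a bare data type plus a separate precision, rather than passing the raw string through. The function name and regex are illustrative assumptions.

```python
import re

def split_raw_type(raw_data_type: str):
    # Hypothetical sketch: separate "timestamp(6) with time zone" into
    # the bare type and its precision, as from_description is meant to.
    match = re.match(r"([^(]+)(\([0-9,\s]+\))?(\swith time zone)?", raw_data_type)
    data_type = match.group(1).strip()
    size = match.group(2)
    if match.group(3):
        # Re-attach the "with time zone" suffix to the bare type name
        data_type += match.group(3)
    precision = int(size.strip("()").split(",")[0]) if size else None
    return data_type, precision

print(split_raw_type("timestamp(6) with time zone"))  # ('timestamp with time zone', 6)
print(split_raw_type("varchar(20)"))                  # ('varchar', 20)
```

This also shows why `varchar(20)` hits the same issue: the precision is stripped off into a separate tuple element, so anything that later reads only the bare type never sees the `(20)`.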
Btw I also have this problem with `varchar(20)`.
Hmmm, trino itself does indeed seem to separate the data type and precision: https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/base/column.py#L124-L160 So I guess dbt-expectations is wrong in only accessing `dtype`.
Apparently, the solution is to use `column.data_type`, not `column.dtype`. See https://docs.getdbt.com/reference/dbt-classes#column and https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/base/column.py#L41
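A simplified model of that distinction (this is an illustrative sketch, not the real dbt class): `dtype` holds only the bare type name, while the `data_type` property re-assembles the full type string including its size, which is why the two can disagree.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Column:
    # Simplified stand-in for dbt's base Column class
    name: str
    dtype: str                      # bare type name, size stripped off
    char_size: Optional[int] = None

    @property
    def data_type(self) -> str:
        # Re-attach the size, mirroring how dbt rebuilds the full type
        if self.char_size is not None:
            return f"{self.dtype}({self.char_size})"
        return self.dtype

col = Column(name="customer_name", dtype="varchar", char_size=20)
print(col.dtype)      # varchar     -> bare type, loses the size
print(col.data_type)  # varchar(20) -> full type including the size
```

So code that compares types (like the snapshot logic) should use `data_type`, while `dtype` is only safe when the size genuinely doesn't matter.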
^ this is me discovering the difference between `data_type` and `dtype` 🤦 In the snapshot logic, it already uses `data_type` so you can ignore my last 4 comments. Sorry for...