
[ADAP-432] [Feature] Snowflake Adapter supports INSERT OVERWRITE

Open markproctor1 opened this issue 1 year ago • 24 comments

Is this your first time submitting a feature request?

  • [X] I have read the expectations for open source contributors
  • [X] I have searched the existing issues, and I could not find an existing issue for this feature
  • [X] I am requesting a straightforward extension of existing dbt-snowflake functionality, rather than a Big Idea better suited to a discussion

Describe the feature

Snowflake's INSERT statement has an optional OVERWRITE parameter that truncates the target table before inserting. It runs as a single transaction and avoids dropping or recreating tables.

https://docs.snowflake.com/en/sql-reference/sql/insert#optional-parameters

INSERT [ OVERWRITE ] INTO <target_table>
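For illustration, a minimal sketch in plain Snowflake SQL (table names hypothetical):

```sql
-- Truncates the target and loads the new rows in a single transaction;
-- the table itself is never dropped, so grants and policies are untouched.
INSERT OVERWRITE INTO analytics.dim_customers
SELECT * FROM analytics.stg_customers;
```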

Support this feature in dbt.

Describe alternatives you've considered

There is no single-transaction alternative: TRUNCATE (or DELETE *) followed by INSERT spans multiple statements. This is documented by Snowflake.

Who will this benefit?

Anyone who needs to avoid DROP/RECREATE but still wants a full load will benefit, with the added bonus that the behavior stays single-transaction.

In our use case, we don't want tables to be dropped because they carry permissions and policies that need to stay in place regardless of which process does the load. At the same time, the rows of the table are completely replaced during load.

Are you interested in contributing this feature?

Yes. I need to understand materializations better, but in my naivete this seems like a simple and valuable feature.

Anything else?

Matt Winkler already built this: https://github.com/matt-winkler/dbt_workspace/blob/master/dbt/macros/materializations/snowflake_custom_incremental/insert_overwrite.sql (directory: https://github.com/matt-winkler/dbt_workspace/tree/master/dbt/macros/materializations/snowflake_custom_incremental)
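For context, a custom materialization along those lines might look roughly like the sketch below (illustrative only, not Matt Winkler's actual code; it assumes the target table already exists with the same columns as the model):

```sql
{% materialization insert_overwrite, adapter='snowflake' %}

  {%- set target_relation = this.incorporate(type='table') -%}

  {{ run_hooks(pre_hooks) }}

  {# Atomic truncate + insert: no DROP, so grants and policies survive the load #}
  {% call statement('main') -%}
    insert overwrite into {{ target_relation }}
    {{ sql }}
  {%- endcall %}

  {{ run_hooks(post_hooks) }}

  {{ return({'relations': [target_relation]}) }}

{% endmaterialization %}
```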

markproctor1 avatar Apr 04 '23 15:04 markproctor1

Hey there!

I'm still trying to understand the scope of work here.

Is the ask to add a model config, applicable to table and incremental only, indicating that CREATE OR REPLACE should be avoided in favor of INSERT OVERWRITE?

This would mean that normal runs for table, and full-refresh runs for both table and incremental, would be impacted. Is that a fair description of the requirements?

Fleid avatar Apr 05 '23 12:04 Fleid

My understanding is that this would be best as a new incremental strategy

incremental_strategy: insert_overwrite
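In a model, that would look something like this (hypothetical, since no such strategy exists in dbt-snowflake today):

```sql
{{ config(materialized='incremental', incremental_strategy='insert_overwrite') }}

select * from {{ ref('stg_events') }}
```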

+1 for this feature!

adammarples avatar Apr 06 '23 11:04 adammarples

Hi @adammarples - can you please tell me more?

On the adapters where we support an insert_overwrite strategy for incremental models, we do so because the overwrite happens at the partition level: we only drop and regenerate the partitions where data has actually changed. In Snowflake this is not possible, as INSERT OVERWRITE truncates the entire table. So to me it's not incremental anymore? Am I missing something?

Fleid avatar Apr 06 '23 13:04 Fleid

@Fleid It would be better named as something other than incremental, but some of the other behaviors of the table materialization, such as potentially dropping/replacing the table, are to be avoided for INSERT OVERWRITE.

markproctor1 avatar Apr 06 '23 13:04 markproctor1

@markproctor1 I think we agree here. There is value, but it's not limited to incremental models. So it's not an incremental strategy, but rather an overall switch that replaces all drop/replace operations with INSERT OVERWRITE.

Fleid avatar Apr 06 '23 14:04 Fleid

@Fleid What goes into the decision to make it a switch rather than an optional part of table materialization? Someone out there probably LOVES the fact that their tables get dropped :)

markproctor1 avatar Apr 06 '23 15:04 markproctor1

I think I covered the fact that it was not a strategy for incremental models.

Could it be a materialization by itself? Materializations are about the type of object being created in the database. We have views, tables, CTEs (ephemerals), and soon materialized views. The only exception is incremental models, which do generate tables, but needed to be clearly delineated from tables as the handling mechanism is widely different. This is me post-rationalizing, because I wasn't there for that decision, but I'm glad it's not a switch on tables.

Is using INSERT OVERWRITE deserving of its own materialization? It still results in a table, and the ergonomics of using it, from the dbt user's standpoint, are minimal: yes I want it, or no. That's why I see it as an option on table and incremental.

Fleid avatar Apr 06 '23 16:04 Fleid

This is intuitive to me: {{ config(materialized='table', insert_overwrite=true) }}

However, "insert overwrite" as a concept can be full load or incremental:

https://docs.getdbt.com/reference/resource-configs/glue-configs#the-insert_overwrite-strategy

If no partition_by is specified, then the insert_overwrite strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running truncate + insert on other databases. For atomic replacement of Delta-formatted tables, use the table materialization (which runs create or replace) instead.

So there's conceptual disagreement: outside of dbt, the term can mean either table or incremental behavior. In Snowflake, it means materialized='table'.

markproctor1 avatar Apr 06 '23 16:04 markproctor1

I agree! I don't think we should name it insert_overwrite, despite the DDL being INSERT OVERWRITE

Maybe something like:

{{ config(materialized='table', overwrite=true) }}

After more thinking, maybe we should narrow it down to only non --full-refresh runs for table:

  • It's not applicable to non --full-refresh runs for incremental models because it truncates the whole table, which defeats the purpose of incremental processing.
  • It's not applicable to --full-refresh runs of tables and incremental models, because the expectation of --full-refresh is that tables will be dropped and re-created. Wiping things clean is the point here.
  • It's not applicable to views and ephemeral models (CTE) for obvious reasons

Now what we need to figure out is what happens when the model has changed and is now different from the table in the database.

What happens if I dbt run with this model:

{{ config(materialized='table', overwrite=true) }}
select 1 as c1, 'two' as c2

When in database the table looks like:

c1 (number)   c2 (number)
1             2

Here the INSERT OVERWRITE will fail on c2 with a type mismatch. Should we just let that happen? Or should we be clever and somehow re-use on_schema_change from incremental?

{{ config(materialized='table', overwrite=true, on_schema_change='sync_all_columns') }}
select 1 as c1, 'two' as c2

I'm thinking we should not be clever. This is supposed to be a simple optimization, not a complicated piece of logic. I'm interested in your opinion.

Fleid avatar Apr 07 '23 13:04 Fleid

Totally agree, no need to be clever here.

In our case we'd actually prefer to know if the table schemas don't line up: fast failure rather than handling the schema change. We tested this in the custom materialization to make sure that dbt doesn't modify the schema on a mismatch; a rough sketch of that guard is below.
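A hypothetical version of such a fail-fast guard inside the materialization (names illustrative; it compares column names only, not types):

```sql
{# Hypothetical guard: abort before overwriting if model and table columns differ #}
{%- set existing_cols = adapter.get_columns_in_relation(target_relation)
      | map(attribute='name') | map('upper') | list | sort -%}
{%- set model_cols = get_columns_in_query(sql) | map('upper') | list | sort -%}
{%- if existing_cols != model_cols -%}
  {{ exceptions.raise_compiler_error(
      "Column mismatch on " ~ target_relation ~ "; rebuild with --full-refresh") }}
{%- endif -%}
```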

{{ config(materialized='table', overwrite=true) }}: your way looks better to me, same concept with fewer characters.

Overwrite being true is binary; there is no consideration of partitions, even though partitioning might be a factor on other platforms. So you get the idea of replacing ALL rows without changing the DDL.

It's an overwrite, not a replace. I like it.

markproctor1 avatar Apr 07 '23 13:04 markproctor1

Since you already have some code there, would you be interested in contributing a PR? I'll flag this issue as help_wanted if you do ;)

Fleid avatar Apr 07 '23 13:04 Fleid

@Fleid yes my mistake, it would be a table materialization, not an incremental strategy

adammarples avatar Apr 08 '23 17:04 adammarples

@Fleid I'm trying to decide when to fit this in, as sprint work or on the side. I am interested, will get back to you asap.

markproctor1 avatar Apr 10 '23 15:04 markproctor1

This is probably how seeds should work on Snowflake, instead of truncate + insert. It would be a way of resolving https://github.com/dbt-labs/dbt-snowflake/issues/112 / #135.

This makes less sense to me for incremental models (since it always replaces the full contents, unlike partition-based insert overwrite on other platforms) and for table models (since it wouldn't work if the columns of the table have changed between runs).

jtcohen6 avatar Apr 12 '23 08:04 jtcohen6

@jtcohen6 Your comment makes sense about seeds, but I don't know enough to say whether it should be in scope here.

Regarding table models: Do you expect materialized='table', overwrite=true to succeed if the columns have since changed? In our use case, if we want to INSERT OVERWRITE we don't expect column changes. If we expected column changes and wanted full load behavior, we'd aim for create/replace or something else.

What is the dbtonic way of resolving target-table changes not handled by the specified materialization? If it were incremental and the target object were no longer compatible, it would just error, right?

markproctor1 avatar Apr 13 '23 16:04 markproctor1

As discussed, I would not be clever. If we limit the scope of overwrite to simple runs for tables, then INSERT OVERWRITE is an optimization you should only use when the model is stable. If you get an error from Snowflake because your model changed, you need to run --full-refresh to resolve it. That sounds fair to me.

Fleid avatar Apr 17 '23 13:04 Fleid

@Fleid Our teams are discussing this again. We're finding the benefit of INSERT OVERWRITE in our custom materialization isn't worth the extra dev/maintenance time as table schemas change. So, as you say, it would be best handled with --full-refresh.

@dataders I'm not sure this is incremental, in the sense of incremental materialization, as we've discussed it.

markproctor1 avatar Aug 17 '23 14:08 markproctor1

We're finding the benefit of INSERT OVERWRITE in our custom materialization isn't worth the extra dev/maintenance time as table schemas change. So, as you say, it would be best handled with --full-refresh.

@markproctor1 my read here is that you and your team no longer think that inclusion of Snowflake's INSERT OVERWRITE within dbt would be a useful addition?

dataders avatar Aug 17 '23 17:08 dataders

@dataders No, I was unclear! We want it more than ever but think it should be handled as a table materialization instead of an incremental materialization.

markproctor1 avatar Aug 17 '23 17:08 markproctor1

@markproctor1 do you care to open a PR that modifies the existing table materialization? I think that'd make this discussion much more focused.

dataders avatar Aug 17 '23 17:08 dataders

Hello, is this still in progress, or are there any plans for it? The key feature here is the underlying Time Travel capability, which gets lost when we drop/replace tables.

umarh-ccy avatar Jun 27 '24 14:06 umarh-ccy

Checking on the status of this. Did the INSERT OVERWRITE concept get moved to a different PR?
We want to retain Time Travel on our mart-level objects.

james-mcp-cdi avatar Sep 05 '24 19:09 james-mcp-cdi

I don't know whether it moved to a different PR; I ran into some design decisions over the months.

Snowflake Insert Overwrite

  • It doesn't cleanly fit into either full load or incremental
    • The rows are full load, but there shouldn't be drop / replace table behavior
  • The term "insert overwrite" is not universal; Snowflake's meaning won't match other adapters' terminology in dbt
    • It doesn't match BigQuery, for example
  • When the column datatypes don't match between the dbt definition and the existing target, handling the change is complex (one reason we use INSERT OVERWRITE is so that columns and tables aren't dropped)

We still use Matt Winkler's custom materialization for all our most important work, but I don't know how to design this. @dataders @matt-winkler @james-mcp-cdi
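For anyone taking the custom-materialization route in the meantime, using it from a model is straightforward (model body hypothetical):

```sql
-- models/marts/dim_customers.sql: a hypothetical model using the custom materialization
{{ config(materialized='insert_overwrite') }}

select customer_id, customer_name, updated_at
from {{ ref('stg_customers') }}
```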

markproctor1 avatar Sep 05 '24 23:09 markproctor1

Could this be an incremental strategy within the incremental materialization, instead of an option on table? That might be a simple way to implement it. There's even schema-change logic already built into the incremental materialization.

Edit: sorry, I see this was already discussed somewhat, in terms of whether it needs to be table or incremental. What's the difference between a new incremental strategy and a table option with full refresh set to false?

ian-andriot-cdi avatar Sep 11 '24 14:09 ian-andriot-cdi