[CT-2809] Support `ref` in foreign key constraint expressions
Problem
```yaml
constraints:
  - type: FOREIGN_KEY # multi_column
    columns: [FIRST_COLUMN, SECOND_COLUMN, ...]
    expression: "OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_FIRST_COLUMN, OTHER_MODEL_SECOND_COLUMN, ...)"
```

```yaml
columns:
  - name: FIRST_COLUMN
    data_type: DATA_TYPE
    # column-level constraints
    constraints:
      - type: foreign_key
        expression: OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_COLUMN)
```
Because you must hard-code the database.schema.table name when setting a foreign key constraint:
- DAG dependencies are incorrect
- multi-environment setups are not supported (workarounds are very hacky)
This feature has become more important now that warehouses use foreign key constraints for better performance.
Instead, we should support `ref` in foreign key constraint expressions, at both the model and column level.
This is similar to how the relationships data test works.
```yaml
models:
  - name: orders
    columns:
      - name: customer_id
        tests:
          - relationships:
              to: ref('customers')
              field: id
```
Current workaround
You have to use Jinja to specify the expression based on the target:

```yaml
- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name!='dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
```
Acceptance criteria
- For a foreign key constraint, I can specify which table I want to reference using `ref` at the column level:
```yaml
columns:
  - name: my_column
    data_type: int
    constraints:
      - type: foreign_key
        to: ref('my_other_model')
        to_column: other_my_column
```
- or at the model level:
```yaml
constraints:
  - type: foreign_key
    columns: [first_column, second_column, ...]
    to: ref('my_other_model')
    to_columns: [other_first_column, other_second_column, ...]
```
Notes from technical refinement
Originally left as a comment in https://github.com/dbt-labs/dbt-core/issues/7417:
I'm opening this issue to track upvotes/comments that could inform eventual prioritization. Is this something people want/need in their production workflows? Are you happy to solve it by other means in the meantime (e.g. dbt_constraints)?
If we were to take FK constraints more seriously, we're missing a pretty important ingredient: the ability to include and template `ref` inside the `expression` field, or to provide more structure, i.e.
```yaml
constraints:
  - type: foreign_key
    ref_table_name: ref('other_table_name')
    ref_column_names: ['id'] # could be multiple
```
Per https://github.com/dbt-labs/dbt-core/issues/6754#issuecomment-1449200569, we kicked that out of scope for v1.5, and we're unlikely to prioritize it while this remains a metadata-only (nonfunctional & unenforceable) feature on the majority of data platforms.
An argument in favor of prioritizing this is that BigQuery now supports the use of foreign keys for optimizing joins.
https://cloud.google.com/blog/products/data-analytics/join-optimizations-with-bigquery-primary-and-foreign-keys?hl=en
I would also submit that, database enforcement implementation aside, forcing the usage of explicit <schema>.<table> hardcodings and not supporting ref() is a crack in dbt's abstraction model. On its own it's certainly not the end of the world, but these breaks in the overall architectural vision and product conceptualization tend to proliferate if left unaddressed.
Snowflake can also use foreign keys for optimizing joins: https://docs.snowflake.com/en/user-guide/join-elimination#setting-the-rely-constraint-property-to-eliminate-unnecessary-joins
I'd really be interested in referencing a FK constraint to a model that lives in a custom schema. The referred model lives in a custom schema that is dependent on an Environment Variable that is passed in at runtime, so I cannot hardcode a <schema>.<table> reference in my constraint as I do not know what it will be ahead of time.
Until dbt is enhanced to support ref() in a foreign key constraint, I cannot model my FKs in constraints.
Another reason to add this is to ensure that dbt builds DAG dependencies that reflect the foreign keys. Because there is no ref(), but instead a hard-coded <schema.table> specification, there's no way for dbt to understand the DAG dependency that a foreign key constraint creates.
For example, let’s say I have 3 models: A, B, and C
B depends on A.
So if I say dbt run -m +B it will first build A, then B.
So far so good. Now, suppose I specify a foreign key constraint on a column in B, referring to a column in C. For this to work, C has to exist. In other words, there’s now a DAG dependency between B and C, for that reason.
But with that constraint specified, dbt run -m +B still just builds A and then B. The constraint itself causes an error, because C does not exist.
In any non-trivial sized DAG, this will cause constant errors in builds, because there is no guarantee of a thread getting to C before B.
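Concretely, the failing setup described above might be sketched like this (the schema and column names here are illustrative, not from the original report):

```yaml
models:
  - name: B
    columns:
      - name: c_id
        constraints:
          - type: foreign_key
            # hard-coded target: dbt cannot infer a B -> C edge from this
            # string, so `dbt run -m +B` does not build C first
            expression: analytics.C (id)
```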
The workaround is to force the dependency by placing a SQL-commented ref() in the model .sql, as described here. In other words, something like:

```sql
-- {{ ref('C') }}
```

But this is just extra work, and it becomes difficult to maintain as it scales. So this is one more reason to support ref in foreign key constraint expressions in the .yml; i.e. all in the same place.
Like Snowflake and BigQuery, Redshift also uses foreign keys for optimizing joins: https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-defining-constraints.html
During development we build into developer dependent datasets (e.g. dev_developer_name.dataset_name__model_name instead of dataset_name.model_name in production), so hard coding foreign keys seems impossible.
@elyobo
The dependency issue raised by @noahjgreen295 will still be an issue and was a major issue for us in using this feature. Our pipelines were less reliable and there was essentially a race condition when running multiple models in parallel.
I use a similar naming convention to you and I used something like this in the model YAML
```yaml
- type: foreign_key
  expression: "{{ 'warehouse' if target.name!='dev' else target.dataset }}.tableA(tableB_ForeignKey)"
```
You can define simple if-else logic in the braces. This allows the FKs to be created in a dev_developer_name schema under a dev target. Hope this helps!
Thanks @Stochastic-Squirrel, I didn't realise you could do that; it ends up something like this for ours and does indeed work, though it leaves the logic duplication (this is already handled in the naming macros that ref calls) and the dependency issue.
```yaml
- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name!='dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
```
Another option might be post-hook alterations with ALTER TABLE statements, but that's also not ideal. ref support would be ideal, but I can appreciate that it's a pain to implement.
@jtcohen6 Given that Snowflake, Redshift and BigQuery use foreign keys to optimize joins, will this issue get re-prioritized? Also, I'll add that downstream tools can use PK/FK to infer table relationships, perhaps bumping the priority further.
Any updates on the priority for this? I feel like dbt focuses a lot on adding new features but pushes aside improving the great features already present...
Any updates on this? It defeats the purpose of foreign key constraints; we cannot use them because dbt is unable to build a correct DAG. I have to run the project a couple of times so that parent tables get built.
+1 for this functionality
I am currently using this workaround successfully, specifying this in the in-SQL config block, in the post_hook argument:

```sql
{# is_incremental is a macro and must be called; `is_incremental == false` never fires #}
{% if not is_incremental() and execute %}
ALTER TABLE {{ this }} ADD FOREIGN KEY (my_root_column) REFERENCES {{ ref('my_foreign_model') }} (my_foreign_column)
{% endif %}
```

This applies the constraint only when the model is materialized for the first time, avoiding unnecessary runs.
I would definitely appreciate a feature which fixes this open issue. I tried to implement the workaround below but found that my foreign table is built after the table I am defining the foreign key for. So, I had to go create the primary key for the foreign table manually in BigQuery in order for the production DAG to build properly.
```yaml
- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name!='dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
```
Hello Team,
I checked this pull request; it does not state that `ref` is now supported in the `expression` for constraints (foreign_key).
It does not work in the following version, so is it planned to be fixed?
Output of `dbt --version` (using legacy validation callback):

```
Core:
  - installed: 1.8.4
  - latest:    1.8.4 - Up to date!

Plugins:
  - databricks: 1.8.4 - Up to date!
  - spark:      1.8.0 - Up to date!
```
There has not been a release since before that PR was merged; did you check the code from that PR (which does show examples of ref() in a `to` option)? It looks like `expression` has been split into `to` (which takes the ref) and `to_columns` (the columns referenced in the `to` table). If so, did you use the new options, or are you using the older 1.8.4 release, which doesn't have this change yet?
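Assuming the split described above, a minimal sketch of the new options (model and column names are illustrative) would be:

```yaml
models:
  - name: orders
    columns:
      - name: customer_id
        data_type: int
        constraints:
          - type: foreign_key
            # `to` takes a ref() and creates a real DAG edge;
            # `to_columns` lists the referenced columns in that table
            to: ref('customers')
            to_columns: [id]
```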
@elyobo thanks, I tried using `to` and `to_columns` with the 1.8.4 release and got the following error message:

```
Compilation Error in model customer_interactions (models/silver/customer_interactions.sql)
  No parent table defined for foreign key:
```

I will try it later today with the 1.9 beta version.
Is this enabled for dbt Cloud as well?