Ben Cassell
Python models are in fact the only models that are currently supported for running on job clusters, though it's part of an ongoing internal debate about how much we want to...
So, are the errors you're reporting here from dbt-databricks or dbt-spark? dbt-databricks does support Python on job clusters, though apparently not in your desired approach of reusing the same...
This is expected behavior, as Python models are integrated into the rest of your dbt project using SQL (for example, on an incremental model, the merge behavior is conducted in...
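To illustrate (a hypothetical sketch, not from this thread — the source `raw.events`, the key `event_id`, and the column `event_ts` are all made up): dbt executes the Python model on the cluster, stages the returned DataFrame, and then performs the incremental merge itself in generated SQL.

```python
# models/events_incremental.py -- hypothetical incremental Python model.
# dbt runs model() on the cluster, writes the returned DataFrame to a
# staging relation, and then handles the MERGE into the target in SQL.
def model(dbt, session):
    dbt.config(
        materialized="incremental",
        unique_key="event_id",  # hypothetical key used by the SQL merge
    )

    df = dbt.source("raw", "events")  # hypothetical source table

    if dbt.is_incremental:
        # Only process rows newer than what's already in the target table.
        max_ts = session.sql(
            f"select max(event_ts) from {dbt.this}"
        ).collect()[0][0]
        if max_ts is not None:
            df = df.filter(df.event_ts > max_ts)

    return df
```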
I believe that as long as you return a DataFrame, the dbt adapter will handle it. If your Spark code works in a Databricks notebook, I'd expect it to work with...
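For example (a sketch with made-up names — `upstream_model`, `amount`, and `loaded_at` are placeholders): notebook-style PySpark is fine inside the model body, and dbt only cares about the DataFrame you return.

```python
from pyspark.sql import functions as F

def model(dbt, session):
    dbt.config(materialized="table")

    df = dbt.ref("upstream_model")  # hypothetical upstream model

    # Arbitrary notebook-style Spark transformations; the adapter just
    # materializes whatever DataFrame comes back.
    return (
        df.filter(F.col("amount") > 0)
          .withColumn("loaded_at", F.current_timestamp())
    )
```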
While this does look like a bug, I'm wondering why you are dynamically creating catalogs rather than schemas?
Would you mind submitting a PR with your solution? Given the limitations you described, it seems reasonable to me.
Interesting. Do you have a dbt.log that can provide the stack trace, so I know where the exception is raised?
Can you try 1.7.9 and let me know if you still have this issue? I removed one place that raises an exception, but I'm uncertain if it's the same issue you're...
@andrefurlan-db would appreciate your thoughts on this one.
@casperdamen123 did some research, and one option is to specify your metadata as table properties: https://docs.databricks.com/en/delta/custom-metadata.html#store-custom-tags-in-table-properties. They won't be column-scoped in your yml file, but you could do something...
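As a rough sketch (assuming dbt-databricks' `tblproperties` config; the property names below are invented), that might look like:

```python
def model(dbt, session):
    dbt.config(
        materialized="table",
        # Custom tags stored as Delta table properties rather than in yml;
        # note they apply to the whole table, not to individual columns.
        tblproperties={
            "department": "finance",   # hypothetical tag
            "contains_pii": "true",    # hypothetical tag
        },
    )
    return dbt.ref("upstream_model")  # hypothetical upstream model
```

You can then read the properties back with `SHOW TBLPROPERTIES` or `DESCRIBE EXTENDED` in Databricks SQL.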