Ravi Gawai
@junhonglau added support for custom functions for Event Hub/Kafka and delta tables in [issue_227](https://github.com/databrickslabs/dlt-meta/tree/issue_227). Can you please verify that it works?
Usually the bronze layer is the entry point where customers quarantine data and send it back to the source. We can introduce the quarantine feature in silver too. It might be for the v0.0.10 release since...
**New Silver Quarantine Table Attributes Introduced in [onboarding.json](https://github.com/databrickslabs/dlt-meta/blob/a57286d893a1af77c9d81f21fdada692b0aab65b/integration_tests/conf/cloudfiles-onboarding.template#L73):**

* `silver_catalog_quarantine`
* `silver_database_quarantine`
* `silver_quarantine_table`
* `silver_quarantine_table_properties`
* `silver_quarantine_cluster`

**Also Added:**

* `expect_or_quarantine` in [silver_data_quality_expectations.json](https://github.com/databrickslabs/dlt-meta/blob/Issue_104/integration_tests/conf/dqe/transactions/silver_data_quality_expectations.json)

✅ **To run tests:**

```bash
python...
```
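As an illustration only, an onboarding entry using the new quarantine attributes might look like the sketch below. All values are hypothetical placeholders, not taken from the linked template; check the template itself for the authoritative shape:

```json
{
  "silver_catalog_quarantine": "dev_catalog",
  "silver_database_quarantine": "silver_quarantine_db",
  "silver_quarantine_table": "transactions_quarantine",
  "silver_quarantine_table_properties": {
    "pipelines.autoOptimize.managed": "true"
  },
  "silver_quarantine_cluster": ["year", "month"]
}
```

Here `silver_quarantine_cluster` is assumed to take clustering columns, mirroring the existing bronze quarantine attributes.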
@DivyanshuSati007 only SQL expressions are supported; SQL queries with joins are not supported as of now! There is a feature request [here](https://github.com/databrickslabs/dlt-meta/issues/88) which might cover the above scenario in the coming v0.0.10 release.
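To make the distinction concrete, here is a hedged sketch of a silver transformation entry using column-level SQL expressions only. The layout is assumed from the repo's silver_transformations examples, and the table and column names are hypothetical:

```json
[
  {
    "target_table": "transactions_silver",
    "select_exp": [
      "id",
      "amount * exchange_rate AS amount_usd",
      "upper(currency) AS currency"
    ]
  }
]
```

Each `select_exp` item is a single-table expression; an expression that pulls in a second table via a join is the unsupported case tracked in issue 88.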
@PeerW the `source_path_schema` attribute is optional; check the demos and their onboarding config files. e.g. in the [silver fanout demo](https://github.com/databrickslabs/dlt-meta/tree/main/demo#silver-fanout-demo)'s [onboarding file](https://github.com/databrickslabs/dlt-meta/blob/main/demo/conf/onboarding_cars.template) there is no schema provided. Please check [pipeline_reader.py](https://github.com/databrickslabs/dlt-meta/blob/397980b3746f1cb496252b6103ed191150033d6d/src/pipeline_readers.py#L18) where we...
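For illustration, a cloudfiles source block can simply omit the schema attribute and let the reader infer the schema from the data. This is a hypothetical fragment; the attribute names are assumed from the cloudfiles onboarding template and the paths are placeholders:

```json
"source_details": {
  "source_database": "cars_db",
  "source_path_dev": "s3://my-bucket/landing/cars/"
}
```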
@PeerW for silver, the schema is derived from your silver_transformations and the transformation functions you provide. In dlt-meta, the bronze and silver layers are tied together (one tag in the onboarding JSON), so...
We developed dlt-meta based on working with many customers, which helped shape the core requirements. While the bronze layer is typically considered raw, most customers prefer to apply basic schema...
dlt-meta follows the medallion architecture, hence bronze and silver are streaming tables and gold can be materialized views (MVs). Once SQL support comes to dlt-meta, we can think of adding MVs.
@shishupalgeek DLT relies entirely on Structured Streaming internally (as you can see in the [readers implementation](https://github.com/databrickslabs/dlt-meta/blob/main/src/pipeline_readers.py)). Because of this, it doesn’t support the traditional `overwrite` behavior available in standard Spark...
@kosch34, we have added the process to the FAQ: https://databrickslabs.github.io/dlt-meta/faq/execution/index.html This failure happens because the pipeline was created using Legacy Publishing mode, which does not support saving tables with catalog or...