
[spike] Investigate suitability of Kedro for EL pipelines and incremental data loading

astrojuanlu opened this issue 1 year ago • 22 comments

Intro and context

Kedro describes itself in its README as a tool for data science and data engineering pipelines (emphasis mine):

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.

As per https://github.com/kedro-org/kedro-devrel/issues/94, these "data engineering and data science pipelines" actually reflect the two broad categories that people have in mind when talking about "pipelines": data pipelines and machine learning pipelines.

The focus of this issue is on data pipelines.

Data pipelines

Data pipelines are important because they are the beginning of any data project: you need to get your data from somewhere before you can start doing analysis, machine learning, and the like.

Data pipelines are tricky. In ETL architectures, the Transformation needs to be executed carefully, and it's coupled to both the source (Extraction) and the target (Loading). ELT is touted as the "modern" approach, but it creates a big overhead of often-denormalised tables in the data warehouse.

According to industry surveys https://github.com/kedro-org/kedro-devrel/issues/94, most teams use in-house tools, or no recognizable tools at all (a mess of Python scripts, Jupyter notebooks, and the like), which suggests that most teams are doing ETL as opposed to ELT. The most recognizable tools and vendors focus on ELT and are commercial (Fivetran, Azure Data Factory), whereas the existing open-source tools have mixed reviews (Airbyte, Meltano).

Kedro for data pipelines

We have evidence of users using Kedro for authoring data pipelines https://linen-slack.kedro.org/t/16312377/hi-everyone-here-luca-ds-from-italy-happy-kedro-user-for-3-y#2d666fee-5385-45d2-b2f8-4282ef22c2f9

However, there are also signs hinting that Kedro has room for improvement before it is well suited to creating data pipelines:

  • Some Kedro projects give up using Kedro for data ingestion and use Bash scripts instead https://github.com/deepyaman/inauditus/blob/develop/refresh-data
  • There is not a blessed way to extract data from unusual sources (see https://github.com/astrojuanlu/kedro-kaggle-dataset/ for my own attempt at writing a "Kaggle Dataset")
  • Some unstated Kedro principles seem to put a strong emphasis on reproducibility, whereas the key property for data pipelines is idempotency https://github.com/kedro-org/kedro-plugins/issues/471#issuecomment-1851014301
  • It's unclear how to write Kedro datasets that are amenable to UPSERT (aka MERGE aka "INSERT or UPDATE") operations https://github.com/kedro-org/kedro/pull/964

The fact that Kedro is not mentioned in any industry survey we have found https://github.com/kedro-org/kedro-devrel/issues/94 is probably a symptom, rather than a cause, of all the above.

There are two sides to this problem:

  • There might be some friction because of technical difficulties, and/or
  • There might be a lack of educational material or guidance on how to use Kedro for these tasks.

Next steps

Part of this intersects with https://github.com/kedro-org/kedro/issues/1778, https://github.com/kedro-org/kedro/issues/1936 cc @merelcht

From a product perspective it's worth asking whether we want to pursue making Kedro a suitable tool for ETL/ELT pipelines at all. Regarding ELT, Kedro will probably never be as convenient as the Singer ecosystem and its derivatives could theoretically be; however, the practical experience with Meltano and Airbyte leaves some gaps, and maybe Kedro could be a satisfying tool for some users. Regarding ETL, I think Kedro could be a perfect framework, provided that the datasets (the crucial bits that perform the I/O) are up to the task, or at least that we provide clear guidance on the "Kedronic" way of achieving idempotent data ingestion pipelines that can support cold starts, behave well under changes to the source schema, and have any other desirable properties of data pipelines.

From a technical perspective, on the other hand, we need to develop an understanding of how Kedro can be used nowadays for ELT/ETL pipelines following modern data architecture patterns, and evaluate to what extent the pains described above are real or are just a matter of having better docs.

On a related note, discussion in https://github.com/kedro-org/kedro-plugins/issues/471 surfaced that we might have to make some of the Kedro principles more explicit.

Finally, we should execute on messaging/value proposition updates based on the conclusions of our investigation, and probably generate appropriate educational material in the form of documentation, blog posts, and videos.

astrojuanlu avatar Jan 30 '24 18:01 astrojuanlu

Ibis has to be central to this

datajoely avatar Jan 30 '24 18:01 datajoely

To give a specific example of how this is posing a problem to users: https://linen-slack.kedro.org/t/16366189/tldr-is-there-a-suggested-pattern-for-converting-vanilla-par#23c36a9d-7bea-40f9-a21f-cc6def7e9ccf

A user tries to convert a Parquet file to a Delta table with a Kedro pipeline, only to be met with DatasetError: DeltaTableDataset is a read only dataset type. Supposedly there's a rationale for this in the original PR from 3 years ago https://github.com/kedro-org/kedro/pull/964 but (1) the conversation is extremely long, and I can't pinpoint the exact moment it was decided to remove the _save() functionality, and (2) this was never documented anywhere, so users are left in the dark.

Going through the PR again, I found a comment that spells out the problem in detail https://github.com/kedro-org/kedro/pull/964#issuecomment-947761059

Update, Upsert/Merge, Delete

These are not directly consistent with the Kedro Principles & DAG, as

  1. The filepath is intrinsic to the DeltaTable
  2. The update, merge and delete methods are methods on the DeltaTable and are immediately materialised (on call or on subsequent execute call on a merge builder)
  3. We still need to inform the Kedro pipeline and DAG that this node has succeeded in a meaningful way

This is the problem we're addressing.
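For reference, this is roughly the shape of the eager, in-place API the comment refers to, sketched here with the delta-spark package (the path, key column, and function name are made up for illustration):

from delta.tables import DeltaTable

def upsert_into_delta(spark, new_rows, path: str) -> None:
    # merge() is a method on the DeltaTable itself and the change is materialised
    # as soon as execute() is called, so there is nothing left for a downstream
    # Kedro node to "save" in the usual load/save sense.
    target = DeltaTable.forPath(spark, path)
    (
        target.alias("t")
        .merge(new_rows.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )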

astrojuanlu avatar Feb 04 '24 09:02 astrojuanlu

It's unclear how to write Kedro datasets that are amenable to UPSERT (aka MERGE aka "INSERT or UPDATE") operations https://github.com/kedro-org/kedro/pull/964

Upsert is mostly supported by database backends. You could simulate it in data frames using concat with indices, e.g. in pandas or spark, but it's not very clean.
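For instance, a rough pandas sketch of that simulation (assuming a unique key column, here called "id", which is an assumption on my part):

import pandas as pd

def upsert(existing: pd.DataFrame, incoming: pd.DataFrame, key: str = "id") -> pd.DataFrame:
    # Stack the new rows under the old ones, then keep only the last occurrence
    # of each key, so incoming rows overwrite existing ones.
    combined = pd.concat([existing, incoming], ignore_index=True)
    return combined.drop_duplicates(subset=[key], keep="last")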

For database backends, it is on the radar for Ibis support.

Regarding ELT, Kedro will probably never be as convenient as the Singer ecosystem and derivatives could theoretically be - however, the practical application of Meltano and Airbyte leaves some gaps, and maybe Kedro could be a satisfying tool for some users. Regarding ETL, I think Kedro could be a perfect framework for this

Why can Kedro not be at least as good at ELT as it is at ETL? As long as you can interact with databases natively using SQL under the hood, I think it can be a great option for the people who are going to use Python anyway (or prefer to).

Also, I'm not recent enough on this perhaps, but has there been any momentum back towards people wanting to do ETL? If ELT is still where "modern data engineering" is at, doing ETL well isn't that exciting.

Finally, we should execute on messaging/value proposition updates based on the conclusions of our investigation, and probably generate appropriate educational material in the form of documentation, blog posts, and videos.

💯

If Kedro is a tool that supports both data pipelines and ML pipelines, it makes sense that people are educated on how to write each, and don't use the same approach for the disparate problems.

deepyaman avatar Feb 05 '24 15:02 deepyaman

Also, I'm not recent enough on this perhaps, but has there been any momentum back towards people wanting to do ETL? If ELT is still where "modern data engineering" is at, doing ETL well isn't that exciting.

We could discuss whether the Modern Data Stack was a real industry trend or only happened on Data Twitter - but I'll only do so over beer 😄

astrojuanlu avatar Feb 05 '24 15:02 astrojuanlu

Why can Kedro not be at least as good at ELT as it is at ETL?

I'm not denying this. What I'm saying is that, in theory[^1],

$ pip install meltano
$ meltano add extractor tap-postgres
$ meltano add loader target-snowflake
$ meltano run

is the optimal open-source, CLI-based, EL experience, and I don't think Kedro can match this at the moment or in the near future (very happy to be proven wrong).

Edit: Meltano would be EL, then for example dbt would be T, or as Lauren Balik jokingly says, TTTTTTT

[^1]: In theory there is no difference between theory and practice, while in practice there is.

astrojuanlu avatar Feb 07 '24 10:02 astrojuanlu

We've implemented in-house upsert functionality in one of our Arrow datasets using a method @deepyaman alludes to:

You could simulate it in data frames using concat with indices, e.g. in pandas or spark, but it's not very clean.

The write_mode is just a save_arg for us. This definitely breaks "reproducibility" though and moves towards idempotency, as @astrojuanlu pointed out.
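A rough sketch of the idea as a custom Kedro dataset (hypothetical names, pandas-based rather than our actual Arrow implementation, and using the AbstractDataset base class from recent Kedro versions):

from pathlib import Path
from typing import Optional

import pandas as pd
from kedro.io import AbstractDataset

class UpsertParquetDataset(AbstractDataset[pd.DataFrame, pd.DataFrame]):
    """Hypothetical Parquet dataset where write_mode ("overwrite" or "upsert") is a save_arg."""

    def __init__(self, filepath: str, save_args: Optional[dict] = None):
        self._filepath = Path(filepath)
        self._save_args = save_args or {}

    def _load(self) -> pd.DataFrame:
        return pd.read_parquet(self._filepath)

    def _save(self, data: pd.DataFrame) -> None:
        write_mode = self._save_args.get("write_mode", "overwrite")
        key = self._save_args.get("key", "id")
        if write_mode == "upsert" and self._filepath.exists():
            # Incoming rows win when both sides share the same key.
            existing = pd.read_parquet(self._filepath)
            data = pd.concat([existing, data]).drop_duplicates(subset=[key], keep="last")
        data.to_parquet(self._filepath, index=False)

    def _describe(self) -> dict:
        return {"filepath": str(self._filepath), "save_args": self._save_args}

In the catalog, write_mode and key then sit under save_args like any other save argument.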

inigohidalgo avatar Feb 28 '24 14:02 inigohidalgo

Why can Kedro not be at least as good at ELT as it is at ETL?

I'm not denying this. What I'm saying that, in theory

$ pip install meltano
$ meltano add extractor tap-postgres
$ meltano add loader target-snowflake
$ meltano run

is the optimal open-source, CLI-based, ELT experience, and I don't think Kedro can match this at the moment or in the near future

I've never used Meltano, but this covers only the EL in ELT, right? Kedro + Ibis could slot in very nicely into the T, and also provide more-than-good-enough performance for the EL side, though it does seem hard to beat specialised tools like Meltano.
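To make the T part concrete, a node can just take an Ibis table expression and return another one, with execution pushed down to whatever backend the catalog entry points at; a minimal sketch (table and column names are made up):

import ibis.expr.types as ir

def summarise_orders(orders: ir.Table) -> ir.Table:
    # Pure, backend-agnostic transformation: Ibis compiles this lazily to SQL
    # for the engine behind the dataset (DuckDB, Snowflake, BigQuery, ...).
    return (
        orders.filter(orders.status == "completed")
        .group_by("customer_id")
        .aggregate(total_spent=orders.amount.sum())
    )

The load/save side would then be handled by an Ibis-backed dataset in the catalog.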

inigohidalgo avatar Feb 28 '24 15:02 inigohidalgo

Oh, correct. I meant "the optimal [...] EL experience".

astrojuanlu avatar Feb 28 '24 15:02 astrojuanlu

I would love to see Kedro fully support the "T", standing as an alternative to dbt for in-engine transformation but with a Python API. This could bring huge value to data teams that need to juggle two (or more) different technologies/frameworks and throw their work over the wall to other teams, depending on the stage of their data pipelines (DE, DS/ML).

This could significantly enlarge the Kedro user base, as there is a much larger volume of work in data & analytics engineering than in data science & ML.

As for the "E" and "L" parts, Kedro could be just good enough.

takikadiri avatar Mar 30 '24 15:03 takikadiri

To achieve this I really believe we should go all in on Ibis as a first-class citizen / preferred approach in Kedro: one syntax covering broadly the backends we care about, enabling the interdisciplinary collaboration @takikadiri mentions.

datajoely avatar Apr 02 '24 08:04 datajoely

(From phone) To clarify, I don't think T is the problem, but rather E & L. I suspect some changes in philosophy or even API might be required that go beyond adopting Ibis; the task here is to investigate.

astrojuanlu avatar Apr 02 '24 08:04 astrojuanlu

Although T might also require some improvements in how we approach upserts.

astrojuanlu avatar Apr 02 '24 08:04 astrojuanlu

At PyData London I spoke to two different users about how they were using Kedro for their ETL pipelines, and both of them face challenges:

  • One of them has a special arrangement and performs the I/O outside of Kedro because of the current limitations, so they don't really leverage the full power of Kedro, just use it as a micro-orchestration engine.
  • Another one had to write their own Delta Table dataset with upserts (#542), plus a layer of state management for checkpointing and keeping track of what parts of the data had been ingested already.

Also, while discussing this in person with @deepyaman, I realised that both EL and T data pipelines need upserts anyway, so probably my comments above were somewhat misguided.

astrojuanlu avatar Jun 15 '24 14:06 astrojuanlu

Inspiration: "Incremental loads should be replayable by design" (source)

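One simple way to read that advice in Kedro terms (a sketch with made-up names): each incremental run (over)writes a deterministic partition rather than appending, so replaying the same window produces the same end state:

import pandas as pd

def load_day_partition(day_df: pd.DataFrame, run_date: str, base_path: str) -> None:
    # Replayable by design: the same run_date always (over)writes the same
    # partition, so re-running the load does not append duplicates.
    day_df.to_parquet(f"{base_path}/event_date={run_date}.parquet", index=False)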

astrojuanlu avatar Jun 15 '24 14:06 astrojuanlu

I wonder if we could make start_date and end_date first-class CLI arguments and part of the session constructor?

datajoely avatar Jun 17 '24 09:06 datajoely

Sometimes it would be a date, sometimes it would be an id... I don't think we can anticipate all possible pagination options. But regardless, I think this is more or less achievable already thanks to runtime parameters, right? The difficult thing is the upsert logic, not from a technical perspective but from a product philosophy perspective: shifting from a focus on reproducibility to a focus on idempotency (data pipelines and machine learning pipelines might require different approaches).
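To illustrate the runtime-parameters half (the easy part), a node could take the window bounds from params and only extract that slice; the names here are illustrative:

import pandas as pd

def extract_window(source: pd.DataFrame, start_date: str, end_date: str) -> pd.DataFrame:
    # In the pipeline this would be wired as
    # inputs=["raw_events", "params:start_date", "params:end_date"],
    # with the bounds overridden per run, e.g.
    # kedro run --params start_date=2024-06-01,end_date=2024-06-02
    # (the exact --params syntax depends on the Kedro version).
    mask = (source["event_date"] >= start_date) & (source["event_date"] < end_date)
    return source.loc[mask]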

astrojuanlu avatar Jun 17 '24 09:06 astrojuanlu

I think you can generalise upserts into the need for a conditional node...

datajoely avatar Jun 17 '24 10:06 datajoely

BTW about Ibis and upserts https://github.com/ibis-project/ibis/issues/5391 cc @deepyaman

astrojuanlu avatar Jun 17 '24 10:06 astrojuanlu

I kept thinking and thinking about the reproducibility idea. From https://github.com/kedro-org/kedro/discussions/3979#discussioncomment-9947103 (by @datajoely) and some conversations I had during EuroPython (and the fact that I've been mulling over this almost since I joined the project):

We should acknowledge that the reproducibility principle has never been explicit. In fact, it's mentioned zero times in https://github.com/kedro-org/kedro/wiki/Kedro-Principles (established roughly 3 years ago https://github.com/kedro-org/kedro/discussions/824).

It was said in https://github.com/kedro-org/kedro-plugins/issues/471#issuecomment-1851014301 that "pure functions + DAG + static catalog = reproducible", but as I already hinted in that thread, that only holds if you assume that the catalog points to files that are tracked by version control alongside the code itself. The moment you refer to a remote location like s3://bucket/my-file.csv, that principle already breaks, because Kedro doesn't do any sort of hashing of the inputs and, by definition, whatever comes from that URL is outside Kedro's jurisdiction.

LLMs aren't hugely different from any other REST APIs in that regard. Even under my "1. Frozen inputs" scenario there, models and APIs aren't versioned, there's randomness without the possibility to set the seed, etc.

Hence remote data locations, database connection strings, REST APIs, LLMs... all of them can break reproducibility.

We should of course continue to try to keep functions as pure as possible (always a good thing to do) and push the I/O part to the datasets, but users are demanding a better answer for data pipelines and dynamic catalogs, so I think it's time to break free from the illusion that Kedro in and of itself can guarantee the reproducibility of the pipelines.

astrojuanlu avatar Jul 14 '24 21:07 astrojuanlu

Hence remote data locations, database connection strings, REST APIs, LLMs... all of them can break reproducibility.

This is true of any upstream data outside of your control; a SQL table/view that changes frequently is another example. In my mind, all of these roads lead back to some sort of conditional construct... you need it to do UPSERTs and a bunch of other things.

datajoely avatar Jul 15 '24 08:07 datajoely

FYI, we're considering giving dlt a try to see how it works alongside Kedro.

astrojuanlu avatar Jul 30 '24 15:07 astrojuanlu

Also renamed this issue to hopefully make it less confusing.

astrojuanlu avatar Jul 30 '24 15:07 astrojuanlu