
[Feature Request] Enable LOAD DATA for delta tables.

Open PadenZach opened this issue 3 years ago • 1 comments

Feature request

Overview

Currently, Delta Lake on Databricks has the COPY INTO DML statement, and vanilla Parquet datasets in Spark support the LOAD DATA DML statement. However, I can't find any ongoing work on LOAD DATA support in Delta Lake. There are also currently tests that make sure these statements raise "not supported" warnings for Delta tables.
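
For concreteness, here is a rough sketch (not from the issue itself) of what exists today versus what this request asks for, assuming an existing SparkSession named `spark` and made-up database/table/path names:

```scala
// Works today on Databricks: COPY INTO a Delta table from Parquet files.
spark.sql("""
  COPY INTO delta_db.events
  FROM '/staging/events_batch/'
  FILEFORMAT = PARQUET
""")

// Works today for non-Delta tables (e.g. Hive/Parquet) in Spark SQL.
spark.sql("LOAD DATA INPATH '/staging/part-00000.parquet' INTO TABLE parquet_db.events")

// The request: let the same LOAD DATA statement work when the target is a Delta table,
// instead of being rejected as unsupported.
spark.sql("LOAD DATA INPATH '/staging/part-00000.parquet' INTO TABLE delta_db.events")
```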

Motivation

Currently, Delta has great support for inserting and writing new data into Delta tables. However, patterns where we want to add an existing Parquet file to a Delta table currently require us to read the file into memory and write it into the table.

Ideally, for cases where the file already exists and can simply be 'copied' into the Delta table, we would support a DML statement that does this while also tracking the change in the delta log.

This would make some Delta use cases more efficient, for example writing staging partitions somewhere else for testing, then using LOAD DATA to load them into the final Delta table.

Further details

I'd need to look into this more to figure out exactly what would need to be done, but I'd imagine it would look roughly like the flow sketched after this list:

  1. Inspect the Parquet file for schema compatibility
  2. Check the partition spec
  3. Calculate the delta log changes
  4. Copy the file into its new location
  5. Commit to the transaction log
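
A very rough Scala sketch of that flow, for an unpartitioned table, using Delta's internal `DeltaLog`/`OptimisticTransaction`/`AddFile` APIs as a stand-in (exact signatures vary across Delta versions, and all paths and checks here are illustrative, not exhaustive):

```scala
// Illustrative sketch only: these are internal Delta APIs that change between releases,
// and the checks below are far from exhaustive (see the discussion further down).
import org.apache.hadoop.fs.{FileUtil, Path}
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.delta.{DeltaLog, DeltaOperations}
import org.apache.spark.sql.delta.actions.AddFile

def loadParquetFileIntoDelta(spark: SparkSession, sourceFile: String, tablePath: String): Unit = {
  val deltaLog = DeltaLog.forTable(spark, tablePath)
  val snapshot = deltaLog.update()

  // 1. Inspect the Parquet file for schema compatibility with the table schema.
  val sourceSchema = spark.read.parquet(sourceFile).schema
  require(sourceSchema == snapshot.metadata.schema,
    s"source schema $sourceSchema does not match table schema ${snapshot.metadata.schema}")

  // 2. Check the partition spec (this sketch only handles unpartitioned tables).
  require(snapshot.metadata.partitionColumns.isEmpty,
    "partitioned targets would need explicit partition values")

  // 3./4. Copy the file into the table directory.
  val conf = spark.sessionState.newHadoopConf()
  val src = new Path(sourceFile)
  val dest = new Path(tablePath, src.getName)
  val fs = dest.getFileSystem(conf)
  FileUtil.copy(src.getFileSystem(conf), src, fs, dest, /* deleteSource = */ false, conf)

  // 5. Record the new file as an AddFile action and commit it to the transaction log.
  val status = fs.getFileStatus(dest)
  val add = AddFile(
    path = dest.getName,
    partitionValues = Map.empty,
    size = status.getLen,
    modificationTime = status.getModificationTime,
    dataChange = true)
  deltaLog.startTransaction().commit(Seq(add), DeltaOperations.Write(SaveMode.Append))
}
```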

Willingness to contribute

The Delta Lake Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?

  • [ ] Yes. I can contribute this feature independently.
  • [X] Yes. I would be willing to contribute this feature with guidance from the Delta Lake community.
  • [X] No. I cannot contribute this feature at this time. (I'd be willing, but if this isn't a good 'first' issue, it may require more knowledge/expertise than I have.)

PadenZach avatar Aug 24 '22 17:08 PadenZach

At a high level, it makes sense to support LOAD DATA on Delta: after running a whole bunch of checks, the input file is copied into the Delta table directory and added to the log.

My concern is that the whole bunch of checks that need to pass is going to be pretty complicated, given that the Delta protocol has so many restrictions on schema, etc. Furthermore, as we add features like column mapping (where the column names in the Parquet file need to follow a special naming scheme to be valid), data constraints, and so on, this gets even more complex, because those features require actually processing the data (i.e., blindly adding the file to the delta log is going to corrupt the table). So while I agree that the absolutely simple case, where none of this additional stuff is enabled on a table, could take advantage of this, it is going to be a challenge to implement all the checks, and it will be very brittle from the user's point of view, since many advanced features, once enabled, will break it.
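
As a concrete illustration of those guards (again only a hypothetical sketch, using Delta's internal `Metadata` action and well-known table property keys), the "blind copy" path would at minimum have to bail out whenever a feature requires reading the data:

```scala
import org.apache.spark.sql.delta.actions.Metadata

// Hypothetical guard: refuse the fast "copy file + append to log" path whenever a
// table feature would require actually processing the data.
def canBlindlyAddFile(metadata: Metadata): Boolean = {
  val props = metadata.configuration
  // Column mapping: physical Parquet column names differ from the logical names.
  val columnMappingEnabled = props.getOrElse("delta.columnMapping.mode", "none") != "none"
  // CHECK constraints are stored as table properties and must be evaluated per row.
  val hasCheckConstraints = props.keys.exists(_.startsWith("delta.constraints."))
  !columnMappingEnabled && !hasCheckConstraints
  // Still not sufficient: generated columns, NOT NULL enforcement, statistics
  // collection, invariants, etc. would all need their own checks.
}
```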

tdas avatar Sep 08 '22 20:09 tdas

How about, as an initial scope for this feature, just aiming for basic parity with common RDBMS tools such as BCP? For our use case, the feature does not need to be very sophisticated to be of high value.

liquidaty avatar Sep 14 '22 19:09 liquidaty

This is definitely something worth looking at. We are not putting this on our roadmap (#1307) right now, as there are many items there already. But if anyone in the community would like to give it a try, feel free to discuss it in this issue.

zsxwing avatar Sep 14 '22 19:09 zsxwing

My company might consider contributing to this effort; the outcome likely depends on a few things, including others' support and various technicalities such as:

  1. Can this feature be structured as a standalone code module, dynamic library and/or CLI, that interfaces with a service API that operates on data passed from memory (and therefore source data file format (parquet, json, etc) is moot as the API only cares about cell data bytes that are sent to the API)?
  2. Can any existing code bases (such as freeTDS) be modified for this purpose?
  3. Are there any restrictions on what language or dev environment this would need to be built in (presumably, if the answer to question 1 is "Yes", then the answer to this is "No")?

liquidaty avatar Sep 16 '22 18:09 liquidaty

Can this feature be structured as a standalone code module, dynamic library and/or CLI, that interfaces with a service API that operates on data passed from memory (and therefore source data file format (parquet, json, etc) is moot as the API only cares about cell data bytes that are sent to the API)?

We would leverage Spark to read other source formats (Parquet, JSON, etc.). We don't want to rebuild them.
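
For context, and purely as an illustration with made-up paths, the existing Spark-based route that this feature would shortcut for already-valid Parquet files looks like:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("load-into-delta").getOrCreate()

// Read the source with Spark's built-in readers ("parquet", "json", "csv", ...),
// then append the rows to the Delta table.
val df = spark.read.format("parquet").load("/staging/events_batch/")
df.write.format("delta").mode("append").save("/warehouse/events")
```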

Can any existing code bases (such as freeTDS) be modified for this purpose?

There is probably no existing code you can use as an example.

Are there any restrictions on what language or dev environment this would need to be built in (presumably, if the answer to question 1 is "Yes", then the answer to this is "No")?

We prefer Scala, since the entire code path uses Scala heavily.

zsxwing avatar Sep 27 '22 17:09 zsxwing