Arrow Support

Open · Lundez opened this issue 3 years ago · 13 comments

Hi, I can't find any indication that dataframe supports Arrow as its internal serialization format / backend.

Is this something which you're working on?

Lundez · Dec 21 '21 14:12

Hi, Lundez!

Currently DataFrame doesn't use Arrow as a backend, but it's on the roadmap.

Until now we have mostly focused on the frontend part: the typesafe Kotlin API, code generation, schema inference and other tricks that provide a great experience when you work with data in Kotlin. But now that the API and overall model are getting stable, it's time to do more work on performance tuning and scalability, including Arrow support as a backend.

Currently the project has only two active contributors, so any help will be very much appreciated!

nikitinas · Dec 22 '21 22:12

Hi, do you have any pointers on how to start?

Do you think the Java Arrow API can work with your "typing" (or whatever the typing used in DataFrame should be called)? 😊

I think adding Arrow would give this project a big boost. A query optimizer as a follow-up would be a huge bonus, like pola.rs / Spark have. Optimizing columns and other things when using Arrow makes a lot of sense! 😄

Lundez · Dec 25 '21 01:12

I have some experience with Arrow (as an Arrow committer), so let me try to set this up.

The current plan is to split the work into two parts:

  1. Arrow schema reading
  2. Arrow file / data loading and off-heap memory management

Subsequent features can take a more tangible form once reading is done, e.g. Arrow file writing, streaming, predicate push-down, etc.
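For reference, here is a minimal sketch of both parts using the Arrow Java API (the file name is illustrative, and this is only an outline of the reading path, not the planned DataFrame integration):

    import org.apache.arrow.memory.RootAllocator
    import org.apache.arrow.vector.ipc.ArrowFileReader
    import java.io.File

    fun main() {
        RootAllocator().use { allocator ->
            File("data.arrow").inputStream().channel.use { channel ->
                ArrowFileReader(channel, allocator).use { reader ->
                    // Part 1: read the schema without touching the data
                    println(reader.vectorSchemaRoot.schema)
                    // Part 2: load the data batch by batch (memory is off-heap,
                    // owned by the allocator and released on close)
                    while (reader.loadNextBatch()) {
                        println("batch with ${reader.vectorSchemaRoot.rowCount} rows")
                    }
                }
            }
        }
    }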

jimexist · Mar 06 '22 05:03

@Jimexist incredibly excited to hear this!

Lundez · Mar 06 '22 10:03

Currently the project has only two active contributors, so any help will be very much appreciated!

Hello @nikitinas, what do you think about my last PRs?

Also, I have written some code for writing to Arrow, but it does not cover all the column types DataFrame supports (it was originally made for Krangl).

Kopilov · May 20 '22 07:05

Hello again. I am working on a more complex unit test for Arrow reading and will make a PR a little later. For now, you can look at the example data and the code it was generated with here

Kopilov · Jul 06 '22 07:07

@koperagen, @nikitinas, I would like your opinion on the following detail.

In the Arrow schema we have a nullable flag, but its value does not depend on the column content, so we may get a column that is marked as not nullable but actually contains null values. Here is an example.
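For illustration, a minimal sketch (with the Arrow Java API) of how such a mismatch can be constructed in memory; in Arrow Java, nullability is only schema metadata, so nothing stops a writer from putting nulls into a not nullable field:

    import org.apache.arrow.memory.RootAllocator
    import org.apache.arrow.vector.IntVector
    import org.apache.arrow.vector.VectorSchemaRoot
    import org.apache.arrow.vector.types.pojo.ArrowType
    import org.apache.arrow.vector.types.pojo.Field
    import org.apache.arrow.vector.types.pojo.FieldType
    import org.apache.arrow.vector.types.pojo.Schema

    fun main() {
        RootAllocator().use { allocator ->
            // The field is declared NOT nullable in the schema...
            val field = Field("value", FieldType(false, ArrowType.Int(32, true), null), null)
            VectorSchemaRoot.create(Schema(listOf(field)), allocator).use { root ->
                val vector = root.getVector("value") as IntVector
                vector.allocateNew(3)
                vector.set(0, 1)
                vector.setNull(1) // ...but the data still contains a null
                vector.set(2, 3)
                root.rowCount = 3
                println(vector.isNull(1)) // true, despite nullable = false
            }
        }
    }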

So, we can:

  • Ignore the nullable flag in the file, read all the data, and set the nullable flag in the DataFrame schema if and only if there are null values in the column;
  • Look at the nullable flag and always copy it to the DataFrame schema; reading data like the above would then produce an error;
  • Look at the nullable flag, copy it to the DataFrame schema by default, and then change not nullable to nullable if there are null values.

Which behavior is best, and should we support several of them, in your opinion?

Kopilov · Jul 11 '22 11:07

Which behavior is best, and should we support several of them, in your opinion?

Could we support different read modes? Defaulting to the first or the third makes sense, but a strict mode (the second) via a flag / read mode would be great IMO.

Lundez · Jul 11 '22 13:07

Which behavior is best, and should we support several of them, in your opinion?

Hm, I would prefer 1 as a default, because in the REPL it can help avoid unnecessary null handling when there are no nulls. But we also need 3 for the Gradle plugin, which generates a schema declaration from a data sample.

Do I understand the second option right? Would something like this be possible?

    val df = DataFrame.readArrow()
    df.notNullableColumn.map { it / 2 } // null pointer exception

I think we shouldn't have this mode unless there is very strong evidence that it is useful for someone :)

Or do you mean this?

    val df = DataFrame.readArrow() // Exception: notNullableColumn marked not nullable in schema, but has nulls

All that reminds me of Infer, which is used as a flag for some operations.

koperagen · Jul 11 '22 18:07

Thank you for highlighting the Infer enum. It can probably be used as a parameter.

Hm, I would prefer 1 as a default

OK, thanks for sharing. About 2, I expected something like

    val df = DataFrame.readArrow() // Exception: notNullableColumn marked not nullable in schema, but has nulls

when calling

    DataColumn.createValueColumn(field.name, listWithNulls, typeNotNullable, Infer.None)

but actually we have

    val df = DataFrame.readArrow()
    df.notNullableColumn.map { it / 2 } // null pointer exception

now. I will fix that.

Where can I read more about the Gradle plugin? How do you use it?

Kopilov · Jul 12 '22 07:07

I suggest the following mapping if we use Infer as a parameter (see the sketch after this list):

  • Infer.Nulls — set the nullable flag in the DataFrame schema if and only if there are null values in the column; make this the default;
  • Infer.None — copy the Arrow schema to the DataFrame and throw an exception like "notNullableColumn marked not nullable in schema, but has nulls";
  • Infer.Type — copy the Arrow schema to the DataFrame, changing not nullable to nullable if there are null values. Or it would actually be the same as Infer.Nulls (a single type is already guaranteed by Arrow).
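A self-contained sketch of this mapping; the stand-in enum mirrors the Infer values referenced above, and resolveNullable is a hypothetical helper, not an existing DataFrame function:

    // Stand-in that mirrors DataFrame's Infer values, to keep the sketch self-contained
    enum class Infer { None, Nulls, Type }

    fun resolveNullable(schemaNullable: Boolean, hasNulls: Boolean, infer: Infer): Boolean =
        when (infer) {
            // Default: derive nullability from the data alone
            Infer.Nulls -> hasNulls
            // Strict: trust the Arrow schema and fail on a mismatch
            Infer.None ->
                if (!schemaNullable && hasNulls)
                    error("Column marked not nullable in schema, but has nulls")
                else schemaNullable
            // Widen the schema flag if the data requires it
            Infer.Type -> schemaNullable || hasNulls
        }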

Kopilov · Jul 12 '22 09:07

Where can I read more about the Gradle plugin? How do you use it?

https://kotlin.github.io/dataframe/gradle.html

I suggest the following mapping if we use Infer as a parameter:

I'm not sure about it anymore, because Infer.Type does a different thing in other operations. Infer.Nulls means "actual data nullability" == "schema nullability", while in our case "set the nullable flag in the DataFrame schema if and only if there are null values in the column" is "narrow nullability if possible", and the third option is "widen nullability if needed".

What do you think about a new enum, say something like SchemaVerification? It describes the variants of this operation: actual nullability (from data) + schema nullability (from file) -> nullability | error. Maybe some other name, idk.

Edit: colleagues suggested NullabilityOptions, NullabilityTransformOptions, NullabilityOperatorOptions, NullabilityCompositionOptions. As for the enum variants, they could be WIDENING, NARROWING, CHECKING (see the sketch below).
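The same three behaviors under the names proposed here, for comparison with the Infer sketch above (these names follow this comment, not necessarily the final merged API):

    enum class NullabilityOptions {
        WIDENING,  // widen schema nullability if the data contains nulls
        NARROWING, // derive nullability from the data alone
        CHECKING,  // trust the schema and fail on a mismatch
    }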

koperagen · Jul 12 '22 11:07

Implemented in #129. Narrowing was renamed to Keeping because when the schema is ignored we can get no nulls in a nullable column as well as some nulls in a not nullable one.

Kopilov · Jul 15 '22 10:07