Not null invariant incorrectly fails on non-nullable field inside a nullable struct

Open Kimahriman opened this issue 4 years ago • 24 comments

An existing test actually tests for what I think is incorrect behavior: https://github.com/delta-io/delta/blob/master/core/src/test/scala/org/apache/spark/sql/delta/schema/InvariantEnforcementSuite.scala#L186

 testQuietly("reject non-nullable nested column") {
    val schema = new StructType()
      .add("top", new StructType()
        .add("key", StringType, nullable = false)
        .add("value", IntegerType))
    testBatchWriteRejection(
      NotNull(Seq("key")),
      schema,
      spark.createDataFrame(Seq(Row(Row("a", 1)), Row(Row(null, 2))).asJava, schema.asNullable),
      "top.key"
    )
    testBatchWriteRejection(
      NotNull(Seq("key")),
      schema,
      spark.createDataFrame(Seq(Row(Row("a", 1)), Row(null)).asJava, schema.asNullable),
      "top.key"
    )
  }

Here the top struct is nullable but the key field is not, and the current invariant check only cares about the fact that key is declared non-nullable, and therefore assumes that selecting that value (through a series of GetStructField's) will never be null. However, it is valid for top to be null, and I think it's more accurate to say that key is never null when top is not null.

So the first case in this test is valid: top is not null but key is. However, in the second case, top is null, which I believe should be valid behavior and not throw an exception.

After looking through the code I can see a few ways to simply skip checking key in this case, but the more ideal (and more complicated) fix would be to check top first, and only if top is not null, check key and fail only then.
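
In expression terms, the check I have in mind would only flag a row when the parent struct is present but its non-nullable child is missing. Roughly (just a sketch of the intended semantics using the column names from the test above, not the actual invariant code):

import pyspark.sql.functions as F

# A row only violates the constraint when the parent struct exists
# but its non-nullable child is null; a null parent passes.
violation = F.expr("top IS NOT NULL AND top.key IS NULL")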

Kimahriman avatar Dec 08 '21 12:12 Kimahriman

Hi @Kimahriman, thanks for bringing this to our attention. We will look into this.

scottsand-db avatar Dec 08 '21 16:12 scottsand-db

@brkyvz do you remember why we picked up this behavior?

zsxwing avatar Jan 13 '22 18:01 zsxwing

I believe this is the behavior that most databases follow. The idea was that if foo.bar is not null, then foo should always be not null. If you want foo to be nullable, but bar to be not null whenever foo is not null, then you should use a check constraint such as:

foo is null or foo.bar is not null
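
In Delta that kind of check constraint can be added via ALTER TABLE, roughly like this (table and constraint names here are just placeholders):

spark.sql("""
  ALTER TABLE my_table ADD CONSTRAINT foo_bar_not_null
  CHECK (foo IS NULL OR foo.bar IS NOT NULL)
""")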

brkyvz avatar Jan 13 '22 21:01 brkyvz

Spark seems to allow for that behavior of foo being nullable but foo.bar not

Kimahriman avatar Jan 13 '22 21:01 Kimahriman

Spark seems to allow for that behavior of foo being nullable but foo.bar not

@Kimahriman This is done by Delta. I think the current behavior makes sense in the SQL world. Even if foo.bar is not nullable, it can still be null if we allow foo to be nullable. For example, if foo is null, SELECT foo.bar will return null even if foo.bar is not nullable. The current behavior will ensure you never see null returned by foo.bar.
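
For example (just to illustrate, with a struct column that is always null):

import pyspark.sql.functions as F

# foo is always null here, while foo.id is declared non-nullable in its schema.
df = spark.range(1).select(F.when(F.col('id') < 0, F.struct('id')).alias('foo'))
df.select('foo.id').show()  # still returns NULL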

zsxwing avatar Jan 18 '22 17:01 zsxwing

Yeah I'm just saying this is valid Spark behavior, so you can hit an exception for valid Spark code. Simplest reproduction:

import pyspark.sql.functions as F
from delta.tables import DeltaTable
test = spark.range(1).withColumn('nested', F.when(F.col('id') < 5, F.struct('id')))
test.printSchema()

root
|-- id: long (nullable = false)
|-- nested: struct (nullable = true)
|    |-- id: long (nullable = false)

DeltaTable.createOrReplace().location('file:///tmp/test').addColumns(test.schema).execute()
test.write.format('delta').mode('append').save('file:///tmp/test')

Kimahriman avatar Jan 18 '22 19:01 Kimahriman

And Spark correctly updates the nullability when it needs to:

test.select('nested.*').printSchema()

root
|-- id: long (nullable = true)

Kimahriman avatar Jan 18 '22 19:01 Kimahriman

@Kimahriman Sorry, I missed your latest update. Thanks for the example. Yep, Spark will update the nullability automatically. However, for Delta, if users set a not-null constraint, they will likely expect never to see a null value when writing code against it (such as writing a UDF to handle a nested column value). Hence, we would like to keep the existing behavior.

I'm going to close this as this is the intended behavior. Feel free to reopen this if you have more feedback 😄

zsxwing avatar May 10 '22 16:05 zsxwing

if users set a not-null constraint, they will likely expect never to see a null value when writing code against it

That's the main issue: not-null constraints are automatically applied with no way to disable them. I have been slowly playing around with ways to update the not-null constraint checking to operate in line with how Spark works, though I haven't made much progress on it yet. At the very least there should be an opt-in way to get around this. I still consider it a bug 😅

Kimahriman avatar May 10 '22 17:05 Kimahriman

Hah, triggered a pipeline failure by commenting on a closed issue I guess

Kimahriman avatar May 10 '22 17:05 Kimahriman

We prefer to avoid creating non-standard behaviors. Could you clarify what use cases may require this non-standard behavior?

zsxwing avatar May 10 '22 18:05 zsxwing

I would argue that the current behavior is non-standard and this ticket is to make it (Spark) standard 😛

Basically what actually triggers this for us is a Scala UDF that takes a nullable string and returns a struct. One of the fields in the struct is non-nullable, because it always has a value iff the string input is not null. The end result in Spark is a nullable struct with a non-nullable struct field.

Kimahriman avatar May 10 '22 18:05 Kimahriman

it always has a value iff the string input is not null

Can you set the nested field nullable and add a check constraint such as nested is null or nested.id is not null to the table instead?
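
Applied to the earlier /tmp/test example, that would look roughly like this: declare nested.id as nullable in the table schema and then add the constraint (the constraint name is arbitrary; this is only a sketch):

spark.sql("""
  CREATE OR REPLACE TABLE delta.`file:///tmp/test` (
    id BIGINT NOT NULL,
    nested STRUCT<id: BIGINT>
  ) USING delta
""")
spark.sql("""
  ALTER TABLE delta.`file:///tmp/test` ADD CONSTRAINT nested_id_not_null
  CHECK (nested IS NULL OR nested.id IS NOT NULL)
""")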

zsxwing avatar May 24 '22 16:05 zsxwing

Yeah, there are workarounds, they're just less than ideal. You lose the 0.01% speedup you would get by avoiding null checks on the field, hah. I'm less concerned about enforcing the right not-null checks than about just being able to opt in to skipping the check altogether, because I know the nullability is correct.
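
To be concrete, the kind of workaround I mean is creating the table with a fully nullable schema so the NOT NULL invariants never get set up in the first place. Roughly, reusing my earlier example (as_nullable here is just a sketch, since PySpark has no built-in equivalent of Scala's StructType.asNullable):

from delta.tables import DeltaTable
from pyspark.sql.types import StructType, StructField, ArrayType, MapType

def as_nullable(dt):
    # Recursively mark every struct field, array element, and map value as nullable.
    if isinstance(dt, StructType):
        return StructType([
            StructField(f.name, as_nullable(f.dataType), True, f.metadata)
            for f in dt.fields
        ])
    if isinstance(dt, ArrayType):
        return ArrayType(as_nullable(dt.elementType), True)
    if isinstance(dt, MapType):
        return MapType(dt.keyType, as_nullable(dt.valueType), True)
    return dt

DeltaTable.createOrReplace().location('file:///tmp/test').addColumns(as_nullable(test.schema)).execute()
test.write.format('delta').mode('append').save('file:///tmp/test')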

Kimahriman avatar May 24 '22 16:05 Kimahriman

Adding a new behavior, even if it's opt-in, would also add extra work when we extend the not-null constraint behaviors to other engines. And I feel it's not worth it right now.

I will keep this open to hear more voices.

zsxwing avatar May 24 '22 17:05 zsxwing

I was bitten by this today. I'm using Scala and have this model

case class MyRow(id: Int, value: String)

case class Changes(before: Option[MyRow], after: MyRow)

modelling changes between MyRows of the same id. I expected this to work but get org.apache.spark.sql.delta.schema.DeltaInvariantViolationException: NOT NULL constraint violated for column: before.id.

I can work around it but it feels like incorrect behaviour.

megri avatar Oct 31 '22 12:10 megri

Do we know how other engines like Presto/Trino handle nullability of structs? Because Delta essentially doesn't support struct nullability. One option would be, when creating a table or adding a new field to the schema, to force everything under a nullable struct to be nullable as well. This seems like the behavior Delta expects.

Kimahriman avatar Nov 03 '22 11:11 Kimahriman

Yep. It would be great if anyone familiar with Presto/Trino can provide the feedback.

zsxwing avatar Dec 08 '22 19:12 zsxwing

My team keeps getting bitten by this, we have to find workarounds to force certain fields to be nullable even when they don't have to be, and I see more people bringing this up (someone just did on Slack).

Any reconsideration on respecting struct nullability? I still don't see any valid way to conclude that a non-nullable field inside a nullable struct must always be not null; that's an invalid interpretation of the struct being nullable. It isn't even consistent with how Parquet files treat nullable structs, and according to the user on Slack, Kafka at least interprets this correctly.

Kimahriman avatar Jun 21 '24 21:06 Kimahriman

Yeah, there are workarounds, they're just less than ideal. You lose the 0.01% speedup you would get by avoiding null checks on the field, hah.

You'd be surprised. Typically, code will check a single pointer for null, and that pointer might have been assigned to something that lives in the CPU's cache (i.e., access to it will be super fast). Checking for null across big data generally pulls large amounts from RAM (relatively slow) and evicts other data from the CPU cache. If you do this for a large amount of data, it starts to show up in terms of performance.

PhillHenry avatar Jun 24 '24 19:06 PhillHenry

This is a problem for my team as well and I think this example demonstrates it well.

We are using Delta to transport data in Debezium CDC payloads. In that format, the top-level before and after fields need to be nullable. A delete event is denoted by a null after field and a create event is denoted by a null before field.

However, both of those fields follow the same schema specification and, more importantly, the schema of the before and after fields is independent from the schema of the envelope that encloses them. We define the expected schema of the envelope by composing the schema of the entity (the before and after fields) with the schema of the envelope itself (the op, source, etc. fields). This behavior effectively ties the schema of the entity to the schema of the enclosing envelope, and for our use cases that coupling is problematic in several ways. The schema of the transported entities is the most important thing: it is used in Delta but also by several other systems, and preserving it when entities are transported via a Debezium CDC payload is what allows us to preserve their integrity.
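
In schema terms, the envelope we need looks roughly like this (abbreviated, with a made-up entity schema just for illustration):

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

entity = StructType([
    StructField('id', IntegerType(), nullable=False),
    StructField('name', StringType(), nullable=True),
])
envelope = StructType([
    StructField('before', entity, nullable=True),  # null for create events
    StructField('after', entity, nullable=True),   # null for delete events
    StructField('op', StringType(), nullable=False),
    StructField('source', StringType(), nullable=True),  # simplified; the real source is a struct
])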

jqmtor avatar Sep 30 '24 14:09 jqmtor

Ironically that's exactly how the Delta transaction log is stored as well. add will only be non-null if it's an add action, remove will only be non-null if it's a remove action, etc.

Based on the way Delta currently handles nullability, there is no way to actually express the equivalent of "if the action is add, then add.path must be non-null". Trying to do that makes Delta interpret it as add.path must be non-null even if the action is not add.
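
In schema terms, the thing you would want to express looks roughly like this (only illustrating the shape, not the actual Delta log schema):

from pyspark.sql.types import StructType, StructField, StringType

log_entry = StructType([
    # Intended meaning: path is required whenever the action is an add...
    StructField('add', StructType([
        StructField('path', StringType(), nullable=False),
    ]), nullable=True),  # ...but a given entry may not be an add at all
    StructField('remove', StructType([
        StructField('path', StringType(), nullable=False),
    ]), nullable=True),
])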

Kimahriman avatar Sep 30 '24 14:09 Kimahriman

Also, FWIW, this isn't an issue in delta-rs (they don't do any implicit null checking AFAIK). So you can write this "valid" data in other ways and then read it in Spark; you just can't write it from Spark right now.

Kimahriman avatar Oct 04 '24 17:10 Kimahriman

I forgot to mention before, but using flink-delta yields the same result as delta-rs then. The data can be written from Flink and then read by Spark. However, this prevents using the maintenance jobs, which is how we learned about this limitation. We tried to run the optimize job on the Delta table that we had written from Flink and the issue surfaced.

jqmtor avatar Oct 08 '24 09:10 jqmtor

This is an issue for my team as well, I agree that the current behavior feels incorrect.

jordanvoves avatar Apr 07 '25 13:04 jordanvoves