Shixiong Zhu

193 comments of Shixiong Zhu

@akhilesh2186 we don't plan to fix this. It was an oversight that we put `now()` into the allowlist. In general, we don't want to allow non-deterministic functions like this. The...
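For illustration (not from the original thread), the issue with `now()` is that it returns a different value on each query, so an expression built on it can never be re-evaluated to the same result later:

```python
from pyspark.sql import SparkSession
import time

spark = SparkSession.builder.getOrCreate()

# now() is fixed within a single query, but differs across queries,
# so an expression using it cannot be recomputed deterministically.
spark.sql("SELECT now() AS t").show(truncate=False)
time.sleep(1)
spark.sql("SELECT now() AS t").show(truncate=False)  # a later timestamp
```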

@harry19023 we don't have an ideal solution for this bug right now. Could you try using Scala/Java UDFs instead?
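For reference, here is a minimal sketch of calling a Scala/Java UDF from PySpark, assuming a compiled UDF class is packaged in a JAR on the classpath (the class and function names are hypothetical):

```python
from pyspark.sql.types import IntegerType

# Assumes the classpath contains a compiled UDF such as:
#   class PlusOne extends org.apache.spark.sql.api.java.UDF1[Int, Int] {
#     override def call(x: Int): Int = x + 1
#   }
spark.udf.registerJavaFunction("plus_one", "com.example.PlusOne", IntegerType())
spark.sql("SELECT plus_one(41)").show()  # 42
```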

@thonsinger-rseg Do you have a reproduction? I tried the following code and it worked for me:

```python
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")
from delta.tables import *
path = "/tmp/mergetest"
df = spark.range(20).selectExpr("id", "id as...
```
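The snippet above is cut off; a completed sketch along the same lines, with the second column name and merge condition filled in as guesses, might look like this:

```python
# Completed sketch of the truncated reproduction above; the "value"/"extra"
# column names and the merge condition are guesses, not from the original.
from delta.tables import DeltaTable

spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

path = "/tmp/mergetest"
# Seed a Delta table with two columns.
spark.range(20).selectExpr("id", "id AS value") \
    .write.format("delta").mode("overwrite").save(path)

# Source with an extra column so the merge exercises schema auto-merge.
source = spark.range(10).selectExpr("id", "id AS value", "id AS extra")

(DeltaTable.forPath(spark, path).alias("t")
    .merge(source.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```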

@AlexWRZ01 could you provide the table schema and the schema of the DataFrame being merged, if possible, so that we can try to create a reproduction? You can just call the `schema.json`...
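For reference, both schemas can be printed like this (the table path and DataFrame name below are hypothetical placeholders):

```python
# Hypothetical path and variable names, just to show where schema.json() lives.
target_schema = spark.read.format("delta").load("/tmp/the_table").schema
print(target_schema.json())     # schema of the target Delta table
print(merged_df.schema.json())  # schema of the DataFrame being merged in
```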

Are you able to try 1.1.0 or above? There is a fix for this kind of issue: https://github.com/delta-io/delta/commit/c424efad8b03c2dce6d988a927677a0e9c314a11 If you can provide the reproduction steps, we can help confirm if this is...

Thanks for reporting this. This was an oversight when we refactored delta-standalone to include the shaded Jackson libraries. We will investigate and see if it's possible to publish a correct...

> If I create a metastore view that will explicitly select named columns from an external table created through delta, could I then query that view using Spark SQL even...
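For context, the setup being asked about would look roughly like this (all table, view, and column names are hypothetical):

```python
# Hypothetical names; this sketches the setup described in the question:
# a metastore view that explicitly selects named columns from an external
# Delta table, then queried through Spark SQL.
spark.sql("CREATE TABLE events USING DELTA LOCATION '/mnt/delta/events'")
spark.sql("CREATE VIEW events_view AS SELECT id, ts FROM events")
spark.sql("SELECT * FROM events_view").show()
```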

We don't support reading files written using `spark.sql.parquet.writeLegacyFormat`. Do you have a legacy system that needs to use this format?

What Hive version are you using? I would be surprised if the latest Hive 2.x version still didn't work well with the new Parquet format. This legacy format is pretty old...

Note: we have tests for decimal values in our Hive tests. The test tables do not use `spark.sql.parquet.writeLegacyFormat`, and we haven't found any issues so far.