Scott Sandre
@alfonsorr - you can read the table metadata by doing `deltaLog.snapshot.metadata.configuration`. So, we expect the result (i.e. the table metadata) after this fix to look as is described in the...
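A minimal sketch of what that looks like (assuming Delta Lake on Spark, a SparkSession named `spark`, and a hypothetical table path `/tmp/delta-table`):

```scala
import org.apache.spark.sql.delta.DeltaLog

// Load the Delta log for the table and grab the latest snapshot's metadata.
val deltaLog = DeltaLog.forTable(spark, "/tmp/delta-table")

// `configuration` is a Map[String, String] of the table properties
// (e.g. "delta.appendOnly" -> "true" if that property has been set).
val tableConfig = deltaLog.snapshot.metadata.configuration
tableConfig.foreach { case (key, value) => println(s"$key = $value") }
```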
> ok i will create new pullreq to just expose isolation level and only allow Serializable. i will let this one remain WIP just in case it turns out current...
Thanks for the comments/feedback, everyone. For now, our H2 roadmap is quite full, so this is something we can consider next year. Please keep the comments and feedback coming...
Hi @GeekSheikh - can you a) update the issue description to clarify that the only way to do it for the table API is using `spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", "test")`, which is what breaks...
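For reference, a minimal sketch of that session-conf approach (the conf key is from the issue; the table path and example write are hypothetical):

```scala
// Setting userMetadata via the session conf applies to every subsequent commit
// made through this SparkSession, which is what the table API currently relies on.
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", "test")

// Any write after this point records "test" as the commit's userMetadata.
spark.range(10).write.format("delta").mode("append").save("/tmp/delta-table")
```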
I've submitted a PR here: https://github.com/delta-io/delta/pull/1328 @edmondo1984 want to take a look?
Hi @sonhmai - that SGTM. @tdas - I don't expect any memory issues here, unless the user has > hundreds of thousands of partitions.
@kristoffSC - take a look?
Linking the design doc for future visibility: https://docs.google.com/document/d/1GNujU7XpV2eG7OevFBSZeYKNRM-ZrUx7egb7ti5hAF4/edit
@kristoffSC - Delta on Spark computes the stats while writing out the Parquet file.
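If it helps, here is a rough sketch of how to inspect those per-file stats after a write (assuming Delta Lake on Spark, a SparkSession named `spark`, and a hypothetical table path `/tmp/delta-table`):

```scala
import org.apache.spark.sql.delta.DeltaLog

val deltaLog = DeltaLog.forTable(spark, "/tmp/delta-table")

// Each AddFile in the latest snapshot carries a JSON `stats` string
// (numRecords, min/max values, null counts) collected at write time.
deltaLog.snapshot.allFiles
  .select("path", "stats")
  .show(truncate = false)
```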
Hi @gopik - just following up on this. You still need to refactor it so that the Parquet footer reading is done inside `DeltaFileCommitter` instead of `DeltaGlobalCommitter`, correct?