Nick
Is it possible the missing records in question were the records that matched your merge condition and were updated? Otherwise it's difficult to track this down without a repro, is...
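To illustrate that hypothesis, here is a plain-Python sketch of upsert semantics (this is not Delta's MERGE implementation; the helper name and sample data are hypothetical): rows matching the merge condition are updated in place rather than inserted, so they can look "missing" if you only count newly added records.

```python
def merge_upsert(target, source, key="id"):
    """Toy upsert: source rows matching an existing key update that row;
    non-matching rows are inserted as new records."""
    by_key = {row[key]: dict(row) for row in target}
    for row in source:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # matched -> updated in place, no new record
        else:
            by_key[row[key]] = dict(row)   # not matched -> inserted
    return list(by_key.values())

target = [{"id": 1, "v": "old"}, {"id": 2, "v": "old"}]
source = [{"id": 2, "v": "new"}, {"id": 3, "v": "new"}]
result = merge_upsert(target, source)
# id=2 was updated rather than inserted; only id=3 shows up as a new row.
```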
I realize this doesn't apply in your HDFS to S3 case @ABRB554, but as for the suggested change in the FAQ, I think it is possible to retain the...
Thanks @PadenZach. Confirming the repro succeeds and this is a legitimate issue. The command should not swallow the specific error, in this case that `map` is not a comparable type...
Hi @KhASQ - only the S3+DynamoDB approach is supported today, but support for other methods of providing mutual exclusion is a great ask. This will likely require an additional implementation of [LogStore](https://github.com/delta-io/delta/blob/master/core/src/main/scala/org/apache/spark/sql/delta/storage/LogStore.scala)...
FWIW there is a public [SparkSession.version](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala#L126) field (exposed in pyspark too)
Hi @gokulyc could you share a little more about your use case? Specifically, what is the pattern you're trying to implement that requires you to programmatically get the Delta version?...
Thanks @gokulyc, are you open to submitting a PR for this?
Hi @Kiran-G1 can you share a more complete example with sample data? A full reproduction will help us confirm and track this down. Here is a good example, https://github.com/delta-io/delta/issues/1279
I don't *think* it's a bug. Since you're referencing the Standalone library, can you please open this issue [in the connectors repo](https://github.com/delta-io/connectors)? FWIW the same behavior [is here too](https://github.com/delta-io/delta/blob/master/core/src/main/scala/org/apache/spark/sql/delta/actions/InMemoryLogReplay.scala) so...
Hi @himanshujindal - I think based on your description that @JassAbidi has shared a good solution (this is similar to how you would consume and apply CDF changes in Delta...