Results: 24 comments of sherlockbeard

Hmm, is there any way we can set the value for the commit id before the commit? From what I am able to gather, it's failing because the id is not...

Delta Lake only works with JDK 1.8, right? From the README: ``` Under Project JDK specify a valid Java 1.8 JDK and opt to use SBT shell for project reload... ```

@allisonport-db, can you assign it to me?

Not able to reproduce it with the MySQL client. #7022 fixed this; I believe this is good to close.

@ArunPesari2 Try replacing the line ```.config('spark.jars.packages','io.delta:delta-core_2.12:2.2.0')``` with ```.config('spark.jars.packages', 'org.apache.hadoop:hadoop-azure:3.3.1,io.delta:delta-spark_2.12:3.1.0')```. Spark 3.5.* requires Delta Lake 3.1.*.
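For context, a minimal sketch of a session built with the corrected packages might look like the following (the app name and table path are hypothetical placeholders, not from the original comment):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-azure-example")
    # Pull in hadoop-azure plus the Delta package that matches Spark 3.5.x
    .config(
        "spark.jars.packages",
        "org.apache.hadoop:hadoop-azure:3.3.1,io.delta:delta-spark_2.12:3.1.0",
    )
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

# Hypothetical read of a Delta table once the session is up
df = spark.read.format("delta").load("/tmp/example_delta_table")
df.show()
```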

One workaround is forcing `large_dtypes=True`:

```python
import pandas as pd
from deltalake import Schema, write_deltalake

column_names = ['campaign', 'account']
json_schema = '{"type": "struct","fields": [{"name": "campaign", "type": "string", "nullable": true, "metadata": {}},{"name": "account", "type":...
```
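As a rough sketch of that workaround, assuming a deltalake version whose `write_deltalake` still accepts the `large_dtypes` flag (the data values and table path below are hypothetical, only the column names come from the comment):

```python
import pandas as pd
from deltalake import write_deltalake

df = pd.DataFrame({"campaign": ["c1", "c2"], "account": ["a1", "a2"]})

# Forcing large_dtypes=True makes the writer use the large Arrow string/binary
# types, which works around the dtype mismatch described above.
write_deltalake("/tmp/example_table", df, mode="append", large_dtypes=True)
```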

@kevinzwang I think the PR is ready for review now. The only problem is that I'm unable to test `test_deltalake_write_cloud`. Do I need to set up some infrastructure locally to run...

Maybe a possible reproduction would be:

```python
from pyspark.sql import SparkSession
from delta import *

builder = SparkSession.builder.appName("MyApp") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

spark = configure_spark_with_delta_pip(builder).getOrCreate()

spark.sql("""CREATE OR REPLACE...
```

Nope @antonsteenvoorden, `CREATE OR REPLACE TABLE` checks whether the table already exists; if it does, it overwrites all of the old data with the new data (from the SELECT command). A small sketch of this behaviour follows.
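A minimal sketch of that behaviour (the table name, column, and values are hypothetical, not from the original issue):

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("cor-table-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Create a Delta table and insert some rows.
spark.sql("CREATE OR REPLACE TABLE demo (id INT) USING DELTA")
spark.sql("INSERT INTO demo VALUES (1), (2)")

# Replacing the table: the old rows (1, 2) are gone afterwards and only the
# output of the new SELECT remains.
spark.sql("CREATE OR REPLACE TABLE demo USING DELTA AS SELECT 42 AS id")
spark.sql("SELECT * FROM demo").show()  # only the row with id = 42 remains
```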

You can delete and clear the cache for deltalake and download it again. Edit: I tried the code with the latest version and it's working fine.