Eduard Tudenhoefner
@rdblue we discussed removing `write.manifest-lists.enabled` and the relevant code paths. Should we go ahead and merge this PR and tackle the removal of `write.manifest-lists.enabled` in a separate PR, mainly because...
The `registerTable()` functionality from https://github.com/apache/iceberg/pull/5037 didn't make it into 0.14.0. However, we publish nightly snapshot versions off master (`0.15.0-SNAPSHOT`), so as a workaround you could try using...
You need to use the snapshot repository mentioned in https://infra.apache.org/repository-faq.html. The artifacts themselves are under https://repository.apache.org/content/groups/snapshots/org/apache/iceberg/
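To consume the nightly `0.15.0-SNAPSHOT` builds, the Apache snapshots repository has to be added to the build. A minimal Gradle sketch, assuming `iceberg-core` is the module you need (swap in whichever Iceberg modules apply):

```groovy
repositories {
    mavenCentral()
    // Apache snapshot repository, see https://repository.apache.org/content/groups/snapshots
    maven {
        url 'https://repository.apache.org/content/groups/snapshots'
    }
}

dependencies {
    // nightly build published off master; not a release artifact
    implementation 'org.apache.iceberg:iceberg-core:0.15.0-SNAPSHOT'
}
```

Keep in mind that snapshot artifacts are republished regularly, so builds against them are not reproducible the way release versions are.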
> Hi, I would like to do (kind of) the same. Catalogs are very hard to maintain in my use case.
>
> Is it possible to open Iceberg tables...
> @nastra How would I register an Iceberg dataset to the in-memory PySpark catalog?

@Hoeze you would have to create a catalog and then register the tables within that catalog...
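For context, creating such a catalog in PySpark comes down to a few Spark session properties. A hedged sketch, assuming a Hadoop-type catalog named `local` with a local warehouse path (the catalog name and path are placeholders; the `spark.sql.catalog.*` keys are Iceberg's documented Spark configuration):

```properties
# spark-defaults.conf (or equivalent --conf flags on spark-submit)
spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.local.type=hadoop
spark.sql.catalog.local.warehouse=/tmp/iceberg-warehouse
```

Tables referenced under that catalog (e.g. `local.db.table`) are then tracked by the Iceberg catalog rather than by Spark's in-memory session catalog.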
@jzhuge I think you wanted to link to #4657 here :)
CI should be failing with the failures below, since the issue described in https://github.com/apache/iceberg/issues/5791 is still present:

```
org.apache.iceberg.spark.SmokeTest > testAlterTable[catalogName = testhive, implementation = org.apache.iceberg.spark.SparkCatalog, config = {type=hive, default-namespace=default}]...
```
> Rather than running these along with unit tests, what about adding a configuration to run them separately? Would that be annoying to have so many checks?

It would hopefully...
Just FYI, we're tracking the same issue in https://github.com/apache/iceberg/issues/5676
I can confirm that those are actually failing when running `./gradlew :iceberg-spark:iceberg-spark-runtime-3.3_2.12:integrationTest`