iceberg-python
Support Nessie catalog
Feature Request / Improvement
PyIceberg has added support for the Glue catalog. We need support for the Nessie catalog too, just like the Hive, Glue, and REST catalogs.
Migrated from https://github.com/apache/iceberg/issues/6414
Looking forward to this feature so we can conduct testing.
Any update on supporting nessie catalog?
@jbonofre might take it up after the Java 1.5.0 release.
@ajantha-bhat Any rough idea about when this will be available? thanks!
I would also like to know if it is estimated to be worked on soon, I'd find it very useful. Thx!
Hi, we would like to contribute to this issue, is it possible?
It looks like Nessie has announced REST catalog support. This would make the native Nessie integration redundant.
ATM, Nessie has the Iceberg REST API on main, but it's not released yet.
Is there a release date?
It might be best to talk about Nessie releases in the project's Zulip chat (the join link is on projectnessie.org) :)
Nessie 0.90.2 and later support the Iceberg REST Catalog API.
I think this issue can be considered fixed thanks to the REST Catalog API support in Nessie.
@dimas-b Thanks for the update here, and I agree with @jbonofre, let's close this issue!
I want to create Iceberg tables using pyiceberg and store them in a MinIO store. For this I have created Docker containers for the following services: nessie, minio, dremio. Earlier I was using pyspark and was able to create tables with this code:

```python
import pyspark
from pyspark.sql import SparkSession
import os

## DEFINE SENSITIVE VARIABLES
NESSIE_URI = "http://nessie:19120/api/v1"
MINIO_ACCESS_KEY = "my_access_key"
MINIO_SECRET_KEY = "my_secret_access_key"

conf = (
    pyspark.SparkConf()
    .setAppName('app_name')
    # packages
    .set('spark.jars.packages',
         'org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.3.1,'
         'org.projectnessie.nessie-integrations:nessie-spark-extensions-3.3_2.12:0.67.0,'
         'software.amazon.awssdk:bundle:2.17.178,'
         'software.amazon.awssdk:url-connection-client:2.17.178')
    # SQL extensions
    .set('spark.sql.extensions',
         'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,'
         'org.projectnessie.spark.extensions.NessieSparkSessionExtensions')
    # Configuring the Nessie catalog
    .set('spark.sql.catalog.nessie', 'org.apache.iceberg.spark.SparkCatalog')
    .set('spark.sql.catalog.nessie.uri', NESSIE_URI)
    .set('spark.sql.catalog.nessie.ref', 'main')
    .set('spark.sql.catalog.nessie.authentication.type', 'NONE')
    .set('spark.sql.catalog.nessie.catalog-impl', 'org.apache.iceberg.nessie.NessieCatalog')
    .set('spark.sql.catalog.nessie.warehouse', 's3a://warehouse')
    .set('spark.sql.catalog.nessie.s3.endpoint', 'http://minio:9000')
    .set('spark.sql.catalog.nessie.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO')
    # MinIO credentials
    .set('spark.hadoop.fs.s3a.access.key', MINIO_ACCESS_KEY)
    .set('spark.hadoop.fs.s3a.secret.key', MINIO_SECRET_KEY)
)

# Start the Spark session
spark = SparkSession.builder.config(conf=conf).getOrCreate()
print("Spark Running")

# Load a CSV into a SQL view
csv_df = spark.read.format("csv").option("header", "true").load("../datasets/df_open_2023.csv")
csv_df.createOrReplaceTempView("csv_open_2023")

# Create an Iceberg table from the SQL view
spark.sql("CREATE TABLE IF NOT EXISTS nessie.df_open_2023 USING iceberg AS SELECT * FROM csv_open_2023").show()

# Query the Iceberg table
spark.sql("SELECT * FROM nessie.df_open_2023 LIMIT 10").show()
```

Please tell me how to do it with pyiceberg.
Generally speaking you use the REST catalog. These docs may help:
https://py.iceberg.apache.org/configuration/#rest-catalog
https://kevinjqliu.substack.com/i/147257480/connect-to-the-rest-catalog
Running the Nessie server: https://projectnessie.org/guides/iceberg-rest/
The RestCatalog class seems to live in pyiceberg.catalog.rest:
https://github.com/apache/iceberg-python/blob/c30e43adf94a82ec1a225d3a1bf69fface592cfd/pyiceberg/catalog/rest.py#L248
However, according to https://py.iceberg.apache.org/api/ one is now supposed to use something like:

```python
from pyiceberg.catalog import load_catalog

catalog = load_catalog("rest", <optional_config_dict>)
```
I encountered an issue while using the load_catalog() method, where it shows the following error: load_catalog() takes from 0 to 1 positional arguments but 2 were given.
To address this, I attempted to use load_rest("rest", <config_dict>), but I encountered a validation issue in the ConfigResponse model while working with the RestCatalog from PyIceberg. It seems that the defaults and overrides fields are required in the ConfigResponse model, but the Nessie REST API is not responding with these fields as expected.
Even after passing them explicitly in the response, I am still getting a validation error.
@cee-shubham I am having a similar issue. If someone has managed to load a Nessie catalog using pyiceberg's RestCatalog, that would be greatly appreciated.
@sean-pasabi I was able to get pyiceberg working with REST catalog exposed by Nessie, at least as a proof of concept: https://github.com/edgarrmondragon/-learn-iceberg-nessie
@edgarrmondragon I have a similar .pyiceberg.yaml, but without the token. I am using MinIO, which requires additional work to add some sort of OAuth2 flow, and I would be surprised if that was the issue. Can your example run without the token, or is it required?
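For reference, a hypothetical `~/.pyiceberg.yaml` for this kind of setup (the catalog name, endpoint, and credentials are placeholders; a token entry should only be needed if the Nessie deployment enforces authentication):

```yaml
catalog:
  nessie:
    type: rest
    uri: http://nessie:19120/iceberg/main
    s3.endpoint: http://minio:9000
    s3.access-key-id: my_access_key
    s3.secret-access-key: my_secret_access_key
```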
@edgarrmondragon I have followed your code, and while the namespace and table were successfully created and are visible in the MinIO bucket, I encountered an error when appending data to the table. The error is related to AWS access permissions, specifically an "ACCESS_DENIED" issue during a HeadObject operation. Below is the relevant error message:

```
OSError: When getting information for key 'demo2/taxi_dataset_f684e603-b914-4f6b-91db-b9f86a2846b3' in bucket 'demobucket': AWS Error ACCESS_DENIED during HeadObject operation: No response body.
```
Hey @cee-shubham, did you mean @edgarrmondragon, because I haven't given any code?