
[SUPPORT] show_clustering is throwing exception in 0.14.1 + Spark 3.4

Open · subash-metica opened this issue 7 months ago • 0 comments

Describe the problem you faced

I created a Hudi table with inline clustering enabled; running the show_clustering procedure against it throws an exception.

To Reproduce

Steps to reproduce the behavior:

from pyspark import SparkConf
from pyspark.sql import SparkSession

spark_conf = (
    SparkConf()
    .set("spark.sql.execution.arrow.pyspark.enabled", "true")
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.hudi.catalog.HoodieCatalog")
    .set("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
    .set("spark.jars.packages", "org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.1")
    .set("spark.sql.shuffle.partitions", "2")
    .set("spark.default.parallelism", "2")
)
spark = SparkSession.builder.appName("etl").config(conf=spark_conf).getOrCreate()

spark.createDataFrame(
    [(1, 2, 4, 4), (1, 2, 4, 5), (1, 2, 3, 6), (1, 2, 3, 7)] * 1000000, ["a", "b", "c", "d"]
).write.mode("append").format("hudi").options(
    **{
        "hoodie.table.name": "test_clustering",
        "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
        "hoodie.datasource.write.recordkey.field": "a,b,c,d",
        "hoodie.datasource.write.partitionpath.field": "a,b,c",
        "hoodie.datasource.write.table.name": "test_clustering",
        "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.ComplexKeyGenerator",
        "hoodie.datasource.hive_sync.mode": "hms",
        "hoodie.datasource.hive_sync.enable": "false",
        "hoodie.datasource.hive_sync.partition_extractor_class": "org.apache.hudi.hive.MultiPartKeysValueExtractor",
        "hoodie.datasource.write.hive_style_partitioning": "true",
        "hoodie.clean.automatic": "true",
        "hoodie.metadata.enable": "true",
        "hoodie.clustering.inline": "true",
        "hoodie.clustering.inline.max.commits": "1",
        "hoodie.cleaner.commits.retained": "2",
        "hoodie.clustering.plan.strategy.partition.regex.pattern": ".*c=(4|3).*",
        "hoodie.datasource.write.operation": "insert_overwrite",
    }
).save("local_path")

spark.sql("call show_clustering(path=>'local_path')").show()

Expected behavior

The procedure should show the clustering details; I can see them in the corresponding .replacecommit.requested file in the timeline.
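
To confirm the plan actually exists on disk, here is a minimal sketch (using the placeholder table path local_path from the repro above; adjust to your setup) that lists the requested replacecommit instants in the timeline:

import os

# "local_path" is the placeholder table path from the repro above.
table_path = "local_path"
timeline_dir = os.path.join(table_path, ".hoodie")

# The inline clustering plan is written as an <instant>.replacecommit.requested
# file in the timeline (serialized Avro, so list it rather than print its contents).
for name in sorted(os.listdir(timeline_dir)):
    if name.endswith(".replacecommit.requested"):
        print(name)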

Environment Description

  • Hudi version : 0.14.1

  • Spark version : 3.4.3

  • Storage (HDFS/S3/GCS..) : Local

  • Running on Docker? (yes/no) : no

Stacktrace

: java.util.NoSuchElementException: No value present in Option
	at org.apache.hudi.common.util.Option.get(Option.java:89)
	at org.apache.spark.sql.hudi.command.procedures.ShowClusteringProcedure.$anonfun$call$5(ShowClusteringProcedure.scala:79)
	at scala.collection.immutable.Stream.$anonfun$map$1(Stream.scala:418)
	at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1173)
	at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1163)
	at scala.collection.immutable.Stream.$anonfun$map$1(Stream.scala:418)
	at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1173)
	at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1163)
	at scala.collection.immutable.Stream.length(Stream.scala:312)
	at scala.collection.SeqLike.size(SeqLike.scala:108)
	at scala.collection.SeqLike.size$(SeqLike.scala:108)
	at scala.collection.AbstractSeq.size(Seq.scala:45)
	at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:341)
	at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
	at scala.collection.AbstractTraversable.toArray(Traversable.scala:108)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:218)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:95)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:662)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.base/java.lang.Thread.run(Thread.java:833)
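
Not a fix, but a guess at the cause based on the stack trace: the NoSuchElementException comes from an unguarded Option.get inside ShowClusteringProcedure while it walks replacecommit instants. Because this table is also written with insert_overwrite, the timeline contains replacecommit instants that carry no clustering plan, and the plan lookup would be empty for those. The snippet below only illustrates that pattern in plain Python (hypothetical names, not Hudi internals):

# Hypothetical illustration of the suspected failure mode, not Hudi code.
def clustering_plan(instant):
    # Returns the plan for a clustering replacecommit, or None for a
    # replacecommit produced by insert_overwrite (the empty-Option case).
    return instant.get("clustering_plan")

instants = [
    {"action": "replacecommit", "clustering_plan": {"input_groups": 2}},  # inline clustering
    {"action": "replacecommit", "clustering_plan": None},                 # insert_overwrite
]

# Unguarded access, analogous to Option.get in the stack trace:
# [clustering_plan(i)["input_groups"] for i in instants]  # fails on the second instant

# Guarded variant that simply skips non-clustering replacecommits:
plans = [p for p in map(clustering_plan, instants) if p is not None]
print(plans)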

subash-metica · Jun 27 '24 09:06