
[spark] Fix bug with a special character at the start of a single bucket_key field name

winfys opened this issue 9 months ago • 0 comments

Purpose

Linked issue: close #xxx

CREATE TABLE paimon.bdc_tmp.paimon_tbl (
  `#log_uuid` STRING NOT NULL,
  `#event_name` STRING,
  name STRING,
  external_json STRING,
  dt STRING NOT NULL)
USING paimon
TBLPROPERTIES (
  'bucket' = '2',
  'file-format' = 'parquet',
  'primary-key' = '#log_uuid')

Because no explicit 'bucket-key' is set, Paimon falls back to the primary key, so the bucket key is the single field `#log_uuid`. When the bucket key has only one field and its name begins with a special character like this, DML on the table fails.
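For example, a plain scan is enough to trigger the failure (illustrative query, not from the original report; per the stack trace below, any statement that makes Spark compute the scan's output partitioning takes the same code path):

SELECT * FROM paimon.bdc_tmp.paimon_tbl

This fails with: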

[PARSE_SYNTAX_ERROR] Syntax error at or near '#'.(line 1, pos 0)

== SQL ==
#log_uuid
^^^

        at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(parsers.scala:257)
        at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:98)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseMultipartIdentifier(AbstractSqlParser.scala:54)
        at org.apache.spark.sql.connector.expressions.LogicalExpressions$.parseReference(expressions.scala:42)
        at org.apache.spark.sql.connector.expressions.LogicalExpressions.parseReference(expressions.scala)
        at org.apache.spark.sql.connector.expressions.Expressions.column(Expressions.java:58)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:546)
        at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
        at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:505)
        at org.apache.spark.sql.connector.expressions.Expressions.bucket(Expressions.java:90)
        at org.apache.paimon.spark.PaimonScan.extractBucketTransform$lzycompute(PaimonScan.scala:66)
        at org.apache.paimon.spark.PaimonScan.extractBucketTransform(PaimonScan.scala:51)
        at org.apache.paimon.spark.PaimonScan.outputPartitioning(PaimonScan.scala:82)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:44)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:42)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
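The trace points at the root cause: PaimonScan.extractBucketTransform builds the bucket transform via Expressions.bucket(numBuckets, String...), and Spark parses each column string as a SQL multipart identifier (Expressions.column -> LogicalExpressions.parseReference -> parseMultipartIdentifier). An unquoted name such as #log_uuid is not a valid identifier, so parsing fails. A minimal sketch of one possible fix, assuming hypothetical names bucketKeys: Seq[String] and numBuckets: Int for the table metadata: quote each key with backticks (doubling any embedded backtick) before handing it to Spark, since the identifier parser accepts backquoted names.

import org.apache.spark.sql.connector.expressions.{Expressions, Transform}

// Hypothetical helper: wrap a field name in backticks, doubling any
// embedded backtick, so names like #log_uuid parse as quoted identifiers.
def quote(name: String): String = "`" + name.replace("`", "``") + "`"

// Sketch of how extractBucketTransform could build the transform.
def bucketTransform(numBuckets: Int, bucketKeys: Seq[String]): Transform =
  Expressions.bucket(numBuckets, bucketKeys.map(quote): _*)

With this, Spark receives `#log_uuid` in quoted form and parses it successfully instead of raising PARSE_SYNTAX_ERROR.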

Tests

API and Format

Documentation

winfys • Apr 10 '25 08:04