[Bug] [Sink] Hive insert error: org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs:/db/xx/dt_mon=xxxx/qwxsadas12321321.parquet. Column: [xxx], Expected: decimal(12,2), Found: FIXED_LEN_BYTE_ARRAY
Search before asking
- [X] I had searched in the issues and found no similar issues.
What happened
- Everything displays normally when inserting the data, and the partition information also looks correct. However, when I run a simple SELECT, the following error occurs:
- org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs://xxxx/user/hive/warehouse/xxxx.db/xxxx/dt_mon=2024-04/xxxx.parquet. Column: [xxx], Expected: decimal(12,2), Found: FIXED_LEN_BYTE_ARRAY
- This did not happen in the earlier version seatunnel-1.5.7.
- Can the behavior of an earlier version such as seatunnel-1.5.7 be restored to solve this parquet partition field problem?
Hive {
    sql = """
        insert overwrite table datadwd.xxx partition(dt_year)
        select * from xxxx
    """
}
SeaTunnel Version
2.3.4
SeaTunnel Config
env {
    execution.parallelism = 4
    job.mode = "BATCH"
    # job.name does not take effect with the Spark engine; it has to be set separately in the shell script, so it is left unused here
    spark.sql.catalogImplementation = "hive"
}

source {
    Jdbc {
        # In version 2.3.4 this option is named `query`
        url = "jdbc:oracle:thin:@xxxxxx:1521:xxxxDB"
        driver = "oracle.jdbc.OracleDriver"
        user = "xxxx"
        password = "xxxx"
        query = """
            select xxxx from dual
        """
        result_table_name = "dddd_mm"
    }
}

transform {
}

sink {
    Hive {
        # Standard usage: table_name and metastore_uri are required
        table_name = "data.dddd_mm"
        metastore_uri = "thrift://xxxx:9083"
    }
}
Running Command
sh /data/seatunnel/seatunnel-2.3.4/bin/start-seatunnel-spark-3-connector-v2.sh \
--master yarn \
--deploy-mode cluster \
--queue xxxx \
--executor-instances 2 \
--executor-cores 6 \
--executor-memory 6g \
--name "h010-xxxxx" \
--config /data/ghyworkbase/seatunnel/H02-01-ODS_CONF-2.3.4/h010-xxxxx.conf
Error Exception
org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: Error operating ExecuteStatement: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 847.0 failed 4 times, most recent failure: Lost task 3.3 in stage 847.0 (TID 14569) (datalake136.ghy.com.cn executor 1): org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs://xxxx/user/hive/warehouse/xxx.db/xxx/dt_mon=2024-04/xx.parquet. Column: [xx], Expected: decimal(12,2), Found: FIXED_LEN_BYTE_ARRAY
    at org.apache.spark.sql.errors.QueryExecutionErrors$.unsupportedSchemaColumnConvertError(QueryExecutionErrors.scala:706)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:278)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator. ...
Zeta or Flink or Spark Version
spark-3.3.0
Java or Scala Version
/jdk/jdk1.8.0_341
Screenshots
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
Please provide the schema of the Oracle table and the Hive table.
Hive table below, the Oracle table has the same columns:

drop table if exists ghydata.ods_webcsmsgl_ecsms_visit_info_mi;
CREATE TABLE IF NOT EXISTS ghydata.ods_webcsmsgl_ecsms_visit_info_mi (
    isvisit string COMMENT '',
    clwz string COMMENT '',
    content string COMMENT '',
    plan string COMMENT '',
    cuser string COMMENT 'created by',
    cdate timestamp COMMENT 'time',
    zhanbi decimal(12, 2) COMMENT '',
    entertime timestamp COMMENT 'enter',
    leavetime timestamp COMMENT 'leave'
) COMMENT 'SFA-'
PARTITIONED BY (dt_mon string COMMENT 'monthly snapshot incremental table')
STORED AS PARQUET;
It's better to provide the definition of the "zhanbi" field in the Oracle table. It seems like there might be an issue with the type conversion of the Oracle connector.
But it works well on seatunnel 1.5.7, so I don't know what is causing this.
Oracle below:

| Column Name | # | Type | Type Mod | Not Null | Default | Comment |
| --- | --- | --- | --- | --- | --- | --- |
| ZHANBI | 11 | NUMBER(12,2) | [NULL] | false | [NULL] | proportion |
https://github.com/apache/seatunnel/blob/cd4b30bbc3a1de9a0eb90be029fcc8feba87dde9/seatunnel-connectors-v2/connector-jdbc/src/main/java/org/apache/seatunnel/connectors/seatunnel/jdbc/internal/dialect/oracle/OracleTypeMapper.java#L85-L95
Because the scale of the ZHANBI field is 2, the field type will be set to Decimal(38, 18): in 2.3.4 the Oracle type mapper does not keep the declared precision and scale for this case and falls back to Decimal(38, 18).
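A minimal sketch of that fallback (a hypothetical simplification for illustration, not the linked file verbatim; the class and method names here are invented):

import org.apache.seatunnel.api.table.type.BasicType;
import org.apache.seatunnel.api.table.type.DecimalType;
import org.apache.seatunnel.api.table.type.SeaTunnelDataType;

// Hypothetical simplification of the NUMBER handling in OracleTypeMapper (2.3.4).
final class OracleNumberMappingSketch {

    static SeaTunnelDataType<?> mapOracleNumber(int precision, int scale) {
        if (scale == 0 && precision > 0 && precision <= 18) {
            // Integral NUMBER columns map to integer/long types (details omitted).
            return BasicType.LONG_TYPE;
        }
        // NUMBER(12,2) falls through to here: the declared precision/scale are
        // dropped and the SeaTunnel row type becomes decimal(38,18), which the
        // Hive sink then writes into the parquet footer as
        // FIXED_LEN_BYTE_ARRAY(16) (Decimal(precision=38, scale=18)).
        return new DecimalType(38, 18);
    }
}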
❯ parq ./T_836191201227964417_6d287c425d_0_1_0.parquet -s
# Schema
<pyarrow._parquet.ParquetSchema object at 0x7ff161507b40>
required group field_id=-1 SeaTunnelRecord {
    optional fixed_len_byte_array(16) field_id=-1 f (Decimal(precision=38, scale=18));
}
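If the parq CLI is not available on the cluster, the same footer schema can be printed with a short parquet-hadoop snippet (a sketch, assuming parquet-hadoop and the Hadoop client libraries are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.schema.MessageType;

public class PrintParquetSchema {
    public static void main(String[] args) throws Exception {
        // Pass the HDFS path of the file from the error message, e.g.
        // hdfs://xxxx/user/hive/warehouse/xxx.db/xxx/dt_mon=2024-04/xx.parquet
        Path file = new Path(args[0]);
        try (ParquetFileReader reader =
                ParquetFileReader.open(HadoopInputFile.fromPath(file, new Configuration()))) {
            MessageType schema = reader.getFooter().getFileMetaData().getSchema();
            // Prints the physical and logical types actually written, e.g.
            // fixed_len_byte_array(16) ... (DECIMAL(38,18))
            System.out.println(schema);
        }
    }
}

Running it against the file path from the error message shows whether the column was really written as Decimal(38, 18).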
You can use the sql transform to cast the field to decimal(12, 2), like this:
source {
    Jdbc {
        result_table_name = tbl
        driver = oracle.jdbc.driver.OracleDriver
        url = "jdbc:oracle:thin:@localhost:49161/xe"
        user = xxxxx
        password = xxx
        query = "select F from tbl"
        properties {
            database.oracle.jdbc.timezoneAsRegion = "false"
        }
    }
}

transform {
    sql {
        source_table_name = tbl
        result_table_name = t_tbl
        query = "select cast(F as decimal(12,2)) as F1 from tbl"
    }
}

sink {
    LocalFile {
        source_table_name = t_tbl
        path = "/tmp/hive/warehouse/test3"
        file_format_type = "parquet"
    }
}
This issue has been solved in #5872. If you are able to upgrade, you can also use the latest version 2.3.5, so you don't need the sql transform.
Great, the problem is solved, which means that I can directly use decimal(38,18) or decimal(38,0) to process numbers of any precision in the future.
@hailin0 please close this issue.
This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in the next 7 days if no further activity occurs.