[Bug] [seatunnel-connector-spark-jdbc] After the sink completes, a temporary table is created at the destination
Search before asking
- [X] I had searched in the issues and found no similar issues.
What happened
When running a jdbc-to-jdbc data sync with the Spark engine, after the sync completes successfully an empty table with exactly the same structure as the destination table is created on the destination side.
SeaTunnel Version
Latest dev branch; 2.2.3 also has this problem.
SeaTunnel Config
env {
  spark.app.name = "lyf_test_spark_0909"
  spark.executor.instances = 2
  spark.executor.cores = 1
  spark.executor.memory = "1g"
}

source {
  jdbc {
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://10.*.*.*:3306/davinci?useUnicode=true&characterEncoding=utf8&useSSL=false"
    table = "sea1"
    result_table_name = "sea1_data"
    user = "*****"
    password = "**********"
  }
}

transform {
  sql {
    sql = "select id,name from sea1_data"
  }
}

sink {
  jdbc {
    saveMode = "update"
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://10.*.*.*:3306/davinci?useUnicode=true&characterEncoding=utf8&useSSL=false"
    user = "*"
    password = "*"
    dbTable = "sea1_data"
    customUpdateStmt = "insert into sea3(id,name) values(?,?)"
  }
}
Running Command
/apache-seatunnel-incubating-2.1.3-SNAPSHOT/bin/start-seatunnel-spark.sh --master yarn --deploy-mode cluster --config /apache-seatunnel-incubating-2.1.3-SNAPSHOT/config/test.spark.jdbc.jdbc.conf
Error Exception
No error was reported; the job completed successfully.
Flink or Spark Version
Spark version: 2.3.2.3.1.5.0-152
Java or Scala Version
Java: 1.8, Scala: 2.11
Screenshots
1. The job ran successfully and the data was written to the destination table:

2. An empty table with exactly the same structure as the destination table is created on the destination side:
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
If you are not in a hurry, I will try this weekend, but it may be beyond my ability. If so, I will ask others on Monday to help fix it.
Thanks, hoping for good news!
Did you test with the Flink engine?
I have found the code that causes the bug, but the code is inconsistent with the description in the documentation. I need to ask a committer.
The Flink engine is OK.
Thanks.
The bug is in org.apache.spark.sql.execution.datasources.jdbc2.DefaultSource: it is triggered when the table names in dbTable and customUpdateStmt are inconsistent.
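In other words, the table named in dbTable should be the same table that customUpdateStmt writes to. A minimal sketch of a consistent pair (the table name my_table is a placeholder, not from this issue):

dbTable = "my_table"
customUpdateStmt = "insert into my_table(id,name) values(?,?)"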
You just need to keep the table names consistent: dbTable is not the source table name, it is the sink table name (the table in the sink data source). In your config the two differ:
dbTable = "sea1_data"
customUpdateStmt = "insert into sea3(id,name) values(?,?)"
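Applied to this issue's config, a corrected sink block might look like the following. This is a sketch assuming the intended destination table is sea3 (the table customUpdateStmt already writes to):

sink {
  jdbc {
    saveMode = "update"
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://10.*.*.*:3306/davinci?useUnicode=true&characterEncoding=utf8&useSSL=false"
    user = "*"
    password = "*"
    dbTable = "sea3"  # now matches the table named in customUpdateStmt
    customUpdateStmt = "insert into sea3(id,name) values(?,?)"
  }
}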
OK, thanks. I'll try it.
This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in the next 7 days if no further activity occurs.
This issue has been closed because it has not received a response for too long. You can reopen it if you encounter similar problems in the future.