[Bug] [flink-hive-connector] Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
Search before asking
- [X] I had searched in the issues and found no similar issues.
What happened
When I read data from a FakeSource and write it to Hive with SeaTunnel 2.2.0-beta on the Flink engine, it works. But when I switch to SeaTunnel 2.3.0-beta in the same environment, the job fails with: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
SeaTunnel Version
2.3.0-beta
SeaTunnel Config
env {
  # You can set flink configuration here
  # job.mode = "STREAMING"
  execution.parallelism = 1
  job.name = "test_hive_source_to_hive"
}

source {
  FakeSource {
    row.num = 1000
    schema = {
      fields {
        c_string = string
        c_boolean = boolean
        c_int = int
        c_bigint = bigint
      }
    }
  }
}

transform {
}

sink {
  # write the generated data to Hive
  Hive {
    table_name = "test.seatunnel_orc"
    metastore_uri = "thrift://1.1.1.1:9083"
    partition_by = ["c_int"]
    sink_columns = ["c_string", "c_boolean", "c_bigint", "c_int"]
  }
}
Running Command
bin/start-seatunnel-flink-connector-v2.sh -m yarn-cluster -ynm seatunnel --config config/fake_hive.conf
Error Exception
2022-11-04 15:57:38,457 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: SeaTunnel FakeSource -> Sink Writer: Hive -> Sink Global Committer: Hive (1/1) (4bd8ee3e4e8e1f17a8f0554e551f53be) switched from SCHEDULED to DEPLOYING.
2022-11-04 15:57:38,457 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Deploying Source: SeaTunnel FakeSource -> Sink Writer: Hive -> Sink Global Committer: Hive (1/1) (attempt #0) with attempt id 4bd8ee3e4e8e1f17a8f0554e551f53be to container_1665368595228_0373_01_000002 @ aqpt06 (dataPort=39175) with allocation id 656cab01954b724159346c0d933bde78
2022-11-04 15:57:38,549 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: SeaTunnel FakeSource -> Sink Writer: Hive -> Sink Global Committer: Hive (1/1) (4bd8ee3e4e8e1f17a8f0554e551f53be) switched from DEPLOYING to RUNNING.
2022-11-04 15:58:19,980 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: SeaTunnel FakeSource -> Sink Writer: Hive -> Sink Global Committer: Hive (1/1) (4bd8ee3e4e8e1f17a8f0554e551f53be) switched from RUNNING to FAILED on container_1665368595228_0373_01_000002 @ aqpt06 (dataPort=39175).
java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:3931) ~[hive-exec.jar:2.1.1-cdh6.0.1]
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:3889) ~[hive-exec.jar:2.1.1-cdh6.0.1]
at org.apache.seatunnel.connectors.seatunnel.hive.utils.HiveMetaStoreProxy.<init>(HiveMetaStoreProxy.java:39) ~[connector-hive-2.3.0-beta.jar:2.3.0-beta]
at org.apache.seatunnel.connectors.seatunnel.hive.utils.HiveMetaStoreProxy.getInstance(HiveMetaStoreProxy.java:53) ~[connector-hive-2.3.0-beta.jar:2.3.0-beta]
at org.apache.seatunnel.connectors.seatunnel.hive.commit.HiveSinkAggregatedCommitter.commit(HiveSinkAggregatedCommitter.java:48) ~[connector-hive-2.3.0-beta.jar:2.3.0-beta]
at org.apache.seatunnel.translation.flink.sink.FlinkGlobalCommitter.commit(FlinkGlobalCommitter.java:54) ~[seatunnel-flink-starter.jar:2.3.0-beta]
at org.apache.flink.streaming.runtime.operators.sink.BatchGlobalCommitterOperator.endInput(BatchGlobalCommitterOperator.java:68) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:91) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$close$0(StreamOperatorWrapper.java:128) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:128) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:444) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:629) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:591) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573) ~[flink-dist_2.11-1.12.7.jar:1.12.7]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_141]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_141]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_141]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) ~[?:1.8.0_141]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_141]
... 19 more
Flink or Spark Version
No response
Java or Scala Version
No response
Screenshots
No response
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
Please add the Hadoop dependencies to the classpath of the Flink cluster.
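On YARN this is usually done by exporting the Hadoop classpath before submitting the job, as the Flink documentation recommends. A minimal sketch, assuming the hadoop CLI is available on the submitting machine:

```bash
# Expose the cluster's Hadoop jars (including hadoop-mapreduce-client-core,
# which provides org.apache.hadoop.mapred.JobConf) to Flink on YARN.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Then submit the job as before.
bin/start-seatunnel-flink-connector-v2.sh -m yarn-cluster -ynm seatunnel --config config/fake_hive.conf
```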
In YARN mode I have already exported the Hadoop classpath and I use the same Hadoop dependencies; with 2.2.0-beta it works, but with 2.3.0-beta it does not.
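If it helps to narrow this down, a quick check of whether the exported Hadoop classpath actually provides the missing class (a diagnostic sketch only; run it on the node where the task failed):

```bash
# List the MapReduce entries on the Hadoop classpath.
hadoop classpath | tr ':' '\n' | grep -i mapreduce

# Try to resolve the missing class from that classpath. If it is present, the JVM
# complains about a missing main method; if it is not, it reports that the class
# could not be found or loaded.
java -cp "$(hadoop classpath)" org.apache.hadoop.mapred.JobConf
```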
This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in the next 7 days if no further activity occurs.
This issue has been closed because it has not received a response for too long. You can reopen it if you encounter similar problems in the future.