[Bug] Caused by: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset
Search before asking
- [x] I had searched in the issues and found no similar issues.
What happened
Does file synchronization require a Hadoop installation?
SeaTunnel Version
2.3.10
SeaTunnel Config
```yaml
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

seatunnel:
  engine:
    event-report-http:
      url: "http://127.0.0.1:8097/api/data/event/report"
      headers:
        Content-Type: application/json
    backup-count: 1
    queue-type: blockingqueue
    print-execution-info-interval: 60
    print-job-metrics-info-interval: 60
    slot-service:
      dynamic-slot: true
    checkpoint:
      interval: 300000
      timeout: 10000
      storage:
        type: localfile
        max-retained: 3
        plugin-config:
          namespace: /tmp/seatunnel/checkpoint_snapshot/
          storage.type: local
          fs.defaultFS: file:///tmp/
    jar-storage:
      enable: false
      connector-jar-storage-mode: SHARED
      connector-jar-storage-path: ""
      connector-jar-cleanup-task-interval: 3600
      connector-jar-expiry-time: 600
    telemetry:
      metric:
        enabled: false
    http:
      enable-http: true
      port: 9180
      context-path: /
```
Running Command
Local mode
Error Exception
```
Caused by: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
	at org.apache.hadoop.util.Shell.fileNotFoundException(Shell.java:549)
	at org.apache.hadoop.util.Shell.getHadoopHomeDir(Shell.java:570)
	at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:593)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:690)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:78)
	at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1665)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
	at org.apache.seatunnel.connectors.seatunnel.file.hadoop.HadoopFileSystemProxy.initialize(HadoopFileSystemProxy.java:253)
	at org.apache.seatunnel.connectors.seatunnel.file.hadoop.HadoopFileSystemProxy.<init>(HadoopFileSystemProxy.java:60)
	at org.apache.seatunnel.connectors.seatunnel.file.local.catalog.LocalFileCatalogFactory.createCatalog(LocalFileCatalogFactory.java:36)
	at org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.getSaveModeHandler(BaseMultipleTableFileSink.java:142)
	at org.apache.seatunnel.engine.server.master.JobMaster.handleSaveMode(JobMaster.java:574)
	at org.apache.seatunnel.engine.server.master.JobMaster.handleSaveMode(JobMaster.java:586)
	at org.apache.seatunnel.engine.server.master.JobMaster.lambda$init$1(JobMaster.java:280)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.apache.seatunnel.engine.server.master.JobMaster.init(JobMaster.java:273)
```
Zeta or Flink or Spark Version
No response
Java or Scala Version
No response
Screenshots
No response
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [x] I agree to follow this project's Code of Conduct
This is the same issue as https://github.com/apache/seatunnel/issues/8365.
Under Linux, there is no need to configure HADOOP_HOME; under Windows, HADOOP_HOME may be required (I haven't verified this on Windows). SeaTunnel uses the HDFS protocol to save checkpoints and other related state, and the same HDFS API is used to write local files, since Hadoop's FileSystem abstraction also supports the local file system.
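For reference, here is a minimal, self-contained sketch of what the connector's `HadoopFileSystemProxy` is effectively doing in the stack trace above: obtaining a Hadoop `FileSystem` with `fs.defaultFS` set to `file:///`, which writes to local disk without any HDFS cluster. This is illustrative code, not SeaTunnel's internals; the class name, file path, and the commented-out `C:\hadoop` directory are examples only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFsViaHadoopDemo {
    public static void main(String[] args) throws Exception {
        // On Windows, Hadoop's Shell static initializer resolves winutils.exe via
        // HADOOP_HOME or the hadoop.home.dir system property and raises the
        // "HADOOP_HOME and hadoop.home.dir are unset" error seen in this issue
        // when neither is set. Pointing hadoop.home.dir at a directory containing
        // bin\winutils.exe avoids it (the path below is only an example):
        // System.setProperty("hadoop.home.dir", "C:\\hadoop");

        Configuration conf = new Configuration();
        // file:/// selects Hadoop's local file system implementation;
        // no HDFS cluster is involved.
        conf.set("fs.defaultFS", "file:///");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/seatunnel-localfs-demo.txt"))) {
            out.writeUTF("written through the Hadoop FileSystem API");
        }
    }
}
```

When running SeaTunnel itself on Windows, the equivalent workaround is to set the `HADOOP_HOME` environment variable (or pass `-Dhadoop.home.dir=...`) to a directory containing `bin\winutils.exe`; a full Hadoop installation is not needed.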