Master branch: yarn-per-job mode fails with "File ... does not exist"
Command:
java -cp '/opt/chunjun/lib/*' com.dtstack.chunjun.client.Launcher -mode yarn-per-job -jobType sync -job /opt/chunjun/chunjun-examples/json/stream/stream.json -chunjunDistDir chunjun-dist -flinkConfDir /opt/chunjun/flinkconf -hadoopConfDir /opt/chunjun/hadoopconf -flinkLibDir /opt/chunjun/flinklib -confProp '{"flink.checkpoint.interval":60000,"yarn.application.queue":"default"}'
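Note on shell quoting for the -confProp argument: the JSON value needs single quotes (or escaped inner quotes) so the shell passes it through intact; with nested double quotes the shell strips them before Java ever sees the value, and the classpath glob needs quoting for a similar reason. A minimal demonstration of how the shell parses each form:

```shell
# With nested double quotes the shell consumes them, so the JSON keys
# lose their quoting; single quotes pass the value through unchanged.
broken="{"flink.checkpoint.interval":60000}"
fixed='{"flink.checkpoint.interval":60000}'
echo "$broken"   # prints {flink.checkpoint.interval:60000}
echo "$fixed"    # prints {"flink.checkpoint.interval":60000}
```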
Error summary:
File file:/home/flink/.flink/application_1653375620246_0033/chunjun-connector-stream-master.jar does not exist
Full error log:
Application application_1653375620246_0033 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1653375620246_0033_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2022-05-26 09:49:16.973]File file:/home/flink/.flink/application_1653375620246_0033/chunjun-connector-stream-master.jar does not exist
java.io.FileNotFoundException: File file:/home/flink/.flink/application_1653375620246_0033/chunjun-connector-stream-master.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://dchadoop01:8088/cluster/app/application_1653375620246_0033 Then click on links to logs of each attempt.
. Failing the application.
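The file:/ scheme in the failing path suggests the Flink staging directory was created on the client's local filesystem rather than on HDFS, so NodeManagers on other hosts cannot localize the jar. This is a hypothesis, not confirmed by the log above; it typically points at fs.defaultFS resolving to file:/// because the Hadoop configuration was not picked up. A quick check against the config dir passed via -hadoopConfDir (the path below is the one from the report):

```shell
# Print fs.defaultFS from the core-site.xml the Launcher was pointed at.
# If it resolves to file:/// (or the property is absent), the YARN staging
# dir will land on the client's local disk and remote NodeManagers will
# fail localization with "File ... does not exist".
HADOOP_CONF="${HADOOP_CONF:-/opt/chunjun/hadoopconf}"
if [ -f "$HADOOP_CONF/core-site.xml" ]; then
  grep -A1 'fs.defaultFS' "$HADOOP_CONF/core-site.xml"
else
  echo "core-site.xml not found under $HADOOP_CONF" >&2
fi
```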