
[Bug] Plugin PluginIdentifier{engineType='seatunnel', pluginType='source', pluginName='Jdbc'} not found

czlh opened this issue 1 year ago • 4 comments

Search before asking

  • [X] I had searched in the issues and found no similar issues.

What happened

The same config file runs successfully on 2.3.3 but fails on 2.3.4. install-plugin.sh has already been executed, and connector-jdbc-2.3.4.jar is already present in the connectors directory.
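
For reference, a minimal way to verify that state, assuming the command is run from the SeaTunnel home directory (the jar name is taken from the description above):

# The installer should have placed the JDBC connector here
ls connectors/ | grep -i jdbc
# expected per the report: connector-jdbc-2.3.4.jar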

SeaTunnel Version

V2.3.4

SeaTunnel Config

env {
  execution.parallelism = 1
  job.mode = "BATCH"
  job.name = "sync_orig_mqtt_doris_to_hive"
}

source {
  "Jdbc":{
            "result_table_name": "orig_mqtt_iot_device_property_post",
            "url":"jdbc:mysql://192.168.9.3:9030/suwen_platform?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8&nullCatalogMeansCurrent=true&allowMultiQueries=true&rewriteBatchedStatements=true",
            "driver":"com.mysql.cj.jdbc.Driver",
            "user":"root",
            "password":"",
            "query":"select * from orig_mqtt_iot_device_property_post"
        }
}

transform { 
}

sink {
#    Console {
#        source_table_name = "orig_mqtt_iot_device_property_post"
#    }
    
    Hive {
      source_table_name = "orig_mqtt_iot_device_property_post"
      table_name = "saas.orig_mqtt_iot_device_property_post"
      metastore_uri = "thrift://192.168.9.3:9083"
    }
}

Running Command

bin/start-seatunnel-flink-15-connector-v2.sh  --config config/sync_orig_mqtt_doris_to_hive.conf

Error Exception

-----------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Plugin PluginIdentifier{engineType='seatunnel', pluginType='source', pluginName='Jdbc'} not found.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095)
        at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
Caused by: java.lang.RuntimeException: Plugin PluginIdentifier{engineType='seatunnel', pluginType='source', pluginName='Jdbc'} not found.
        at org.apache.seatunnel.plugin.discovery.AbstractPluginDiscovery.createPluginInstance(AbstractPluginDiscovery.java:231)
        at org.apache.seatunnel.plugin.discovery.AbstractPluginDiscovery.createPluginInstance(AbstractPluginDiscovery.java:171)
        at org.apache.seatunnel.core.starter.execution.PluginUtil.fallbackCreate(PluginUtil.java:128)
        at org.apache.seatunnel.core.starter.execution.PluginUtil.createSource(PluginUtil.java:77)
        at org.apache.seatunnel.core.starter.flink.execution.SourceExecuteProcessor.initializePlugins(SourceExecuteProcessor.java:118)
        at org.apache.seatunnel.core.starter.flink.execution.FlinkAbstractPluginExecuteProcessor.<init>(FlinkAbstractPluginExecuteProcessor.java:76)
        at org.apache.seatunnel.core.starter.flink.execution.SourceExecuteProcessor.<init>(SourceExecuteProcessor.java:61)
        at org.apache.seatunnel.core.starter.flink.execution.FlinkExecution.<init>(FlinkExecution.java:91)
        at org.apache.seatunnel.core.starter.flink.command.FlinkTaskExecuteCommand.execute(FlinkTaskExecuteCommand.java:59)
        at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
        at org.apache.seatunnel.core.starter.flink.SeaTunnelFlink.main(SeaTunnelFlink.java:34)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
        ... 12 more

Zeta or Flink or Spark Version

No response

Java or Scala Version

1.8

Screenshots

No response

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

Code of Conduct

czlh · Feb 22 '24 02:02

Why did I get an error when executing bin/install-plugin.sh from apache-seatunnel-2.3.4-bin.tar.gz?

I ran the command on Ubuntu 22.04: bash bin/install-plugin.sh 2.3.4

The error is as follows:

Install SeaTunnel connectors plugins, usage version is 2.3.4
install connector : connector-cdc-mysql
bin/install-plugin.sh: .../apache-seatunnel-2.3.4/mvnw: /bin/sh^M: bad interpreter: No such file or directory

zhangm365 · Feb 22 '24 02:02

Why did I get an error when executing bin/install-plugin.sh from apache-seatunnel-2.3.4-bin.tar.gz?

I ran the command on Ubuntu 22.04: bash bin/install-plugin.sh 2.3.4

The error is as follows:

Install SeaTunnel connectors plugins, usage version is 2.3.4
install connector : connector-cdc-mysql
bin/install-plugin.sh: .../apache-seatunnel-2.3.4/mvnw: /bin/sh^M: bad interpreter: No such file or directory

Run find <seatunnel_home path> -type f -print0 | xargs -0 dos2unix -- first, then bin/install-plugin.sh 2.3.4 works.
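
The /bin/sh^M in the error indicates that the mvnw script in the release tarball carries Windows (CRLF) line endings, so the shell cannot resolve the interpreter path. A minimal sketch of the fix described above, assuming dos2unix is installed and that SEATUNNEL_HOME points at the unpacked 2.3.4 directory:

# Convert every file in the installation from CRLF to LF line endings
find "$SEATUNNEL_HOME" -type f -print0 | xargs -0 dos2unix --
# Re-run the connector installer
bash "$SEATUNNEL_HOME"/bin/install-plugin.sh 2.3.4

If dos2unix is unavailable, running sed -i 's/\r$//' on the affected scripts achieves the same conversion.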

czlh · Feb 22 '24 02:02

Why did I get an error when executing bin/install-plugin.sh from apache-seatunnel-2.3.4-bin.tar.gz?

I ran the command on Ubuntu 22.04: bash bin/install-plugin.sh 2.3.4

The error is as follows:

Install SeaTunnel connectors plugins, usage version is 2.3.4
install connector : connector-cdc-mysql
bin/install-plugin.sh: .../apache-seatunnel-2.3.4/mvnw: /bin/sh^M: bad interpreter: No such file or directory

There are some issues in the 2.3.4 release; it will be released again.

liunaijie · Feb 22 '24 07:02

Hi, for the error that the PluginIdentifier is not found, it seems you may have set the SEATUNNEL_HOME environment variable; please check it. If so, remove it and run the config file again. @czlh
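
A minimal sketch of that check, assuming a bash shell and the paths from the original report:

# See whether SEATUNNEL_HOME is set; a stale value (e.g. pointing at an
# older install) could steer plugin discovery to the wrong connectors dir
env | grep SEATUNNEL_HOME

# If it is set to a stale location, clear it for this shell and retry
unset SEATUNNEL_HOME
bin/start-seatunnel-flink-15-connector-v2.sh --config config/sync_orig_mqtt_doris_to_hive.conf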

zhangm365 · Feb 27 '24 02:02

I have a similar question. Version 2.3.4 has issues, so I am using version 2.3.3. My goal is to use the Spark engine to read from Iceberg and write to ClickHouse. So far, with Iceberg (source) and Console (sink), there is no result; it seems paused. Spark version: 3.3.0, SeaTunnel: 2.3.3, Iceberg: 1.0.0

Here are the logs:

24/02/28 19:24:01 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (hp09, executor driver, partition 0, PROCESS_LOCAL, 4585 bytes) taskResourceAssignments Map()
24/02/28 19:24:01 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
24/02/28 19:24:01 INFO ConsoleSinkWriter: output rowType: dbid<STRING>, etloilfield<STRING>, oilfieldname<STRING>, jhdm<STRING>, jh<STRING>, jhbm<STRING>, qssj<TIMESTAMP>, zzsj<TIMESTAMP>, zynr<STRING>, jsbzbdm<STRING>, jsbzb<STRING>, cjr<STRING>, cjsj<TIMESTAMP>, xgr<STRING>, xgsj<TIMESTAMP>, shra<STRING>, shrq<TIMESTAMP>, shzt<STRING>, org_code<STRING>, org_nm<STRING>, etltime<TIMESTAMP>, sourcetable<STRING>
24/02/28 19:24:01 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: s3a://syky-dev/user/hive/warehouse/fdp/minio_cluster.db/dwd_sygc_zwjgc_gcjk_test/metadata/00001-b74c68db-4971-4482-9b13-7864046dc8a8.gz.metadata.json
24/02/28 19:24:01 INFO BaseMetastoreCatalog: Table loaded by catalog: hive_prod.minio_cluster.dwd_sygc_zwjgc_gcjk_test
24/02/28 19:24:01 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: s3a://syky-dev/user/hive/warehouse/fdp/minio_cluster.db/dwd_sygc_zwjgc_gcjk_test/metadata/00001-b74c68db-4971-4482-9b13-7864046dc8a8.gz.metadata.json
24/02/28 19:24:01 INFO BaseMetastoreCatalog: Table loaded by catalog: hive_prod.minio_cluster.dwd_sygc_zwjgc_gcjk_test
24/02/28 19:24:01 INFO SnapshotScan: Scanning table hive_prod.minio_cluster.dwd_sygc_zwjgc_gcjk_test snapshot 4039299976393873727 created at 2024-02-20T09:42:31.936+00:00 with filter true
24/02/28 19:24:02 INFO LoggingMetricsReporter: Received metrics report: ScanReport{tableName=hive_prod.minio_cluster.dwd_sygc_zwjgc_gcjk_test, snapshotId=4039299976393873727, filter=true, schemaId=0, projectedFieldIds=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], projectedFieldNames=[dbid, etloilfield, oilfieldname, jhdm, jh, jhbm, qssj, zzsj, zynr, jsbzbdm, jsbzb, cjr, cjsj, xgr, xgsj, shra, shrq, shzt, org_code, org_nm, etltime, sourcetable], scanMetrics=ScanMetricsResult{totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.346052923S, count=1}, resultDataFiles=CounterResult{unit=COUNT, value=1}, resultDeleteFiles=CounterResult{unit=COUNT, value=0}, totalDataManifests=CounterResult{unit=COUNT, value=1}, totalDeleteManifests=CounterResult{unit=COUNT, value=0}, scannedDataManifests=CounterResult{unit=COUNT, value=1}, skippedDataManifests=CounterResult{unit=COUNT, value=0}, totalFileSizeInBytes=CounterResult{unit=BYTES, value=60729341}, totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0}, skippedDataFiles=CounterResult{unit=COUNT, value=0}, skippedDeleteFiles=CounterResult{unit=COUNT, value=0}, scannedDeleteManifests=CounterResult{unit=COUNT, value=0}, skippedDeleteManifests=CounterResult{unit=COUNT, value=0}, indexedDeleteFiles=CounterResult{unit=COUNT, value=0}, equalityDeleteFiles=CounterResult{unit=COUNT, value=0}, positionalDeleteFiles=CounterResult{unit=COUNT, value=0}}, metadata={iceberg-version=Apache Iceberg 1.4.3 (commit 9a5d24fee239352021a9a73f6a4cad8ecf464f01)}}
24/02/28 19:24:02 INFO AbstractSplitEnumerator: Assigning IcebergFileScanTaskSplit{task={deletes=[], file=s3a://syky-dev/user/hive/warehouse/fdp/minio_cluster.db/dwd_sygc_zwjgc_gcjk_test/data/00000-0-c3c26f8e-2914-4108-b76a-fb1bc4034663-00001.parquet, start=4, length=60729337}, recordOffset=0} to 0 reader.
24/02/28 19:24:02 INFO AbstractSplitEnumerator: Assign splits [IcebergFileScanTaskSplit{task={deletes=[], file=s3a://syky-dev/user/hive/warehouse/fdp/minio_cluster.db/dwd_sygc_zwjgc_gcjk_test/data/00000-0-c3c26f8e-2914-4108-b76a-fb1bc4034663-00001.parquet, start=4, length=60729337}, recordOffset=0}] to reader 0
24/02/28 19:24:02 INFO IcebergSourceReader: Add 1 splits to reader
24/02/28 19:24:02 INFO IcebergSourceReader: Reader received NoMoreSplits event.
24/02/28 19:29:01 INFO metastore: Closed a connection to metastore, current connections: 0

It paused here for a very long time.

blueridder · Feb 28 '24 12:02

This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in the next 7 days if no further activity occurs.

github-actions[bot] · Mar 30 '24 00:03