incubator-streampark

[Bug] hdfs: Cannot discover a connector using option: 'connector'='starrocks'

Open · JasonChen-ecnu opened this issue 3 years ago · 1 comment

Search before asking

  • [X] I had searched in the issues and found no similar issues.

What happened

The Development Mode in streamx is Flink SQL, the deployment mode is yarn per-job, and the job writes data to StarRocks. I uploaded the flink-connector-starrocks package on the page, clicked submit, and ran launch application with no error reported; but when I then ran start application, the application failed with the error below. I also tried not uploading the connector package on the page and instead putting the jar directly in the uploads directory, or in the flink lib directory at the same level as uploads; again launch application reported no error, and start application failed the same way. Versions: streamx 1.2.3, flink 1.15.1, flink-connector-starrocks 1.2.3_flink-1.15. As a cross-check, I wrote the same Flink SQL and submitted it directly to the flink cluster from the command line, and the data was written normally.
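For reference, the job defines a StarRocks sink roughly like the sketch below. This is a minimal, hypothetical reconstruction: the schema and the connection options (jdbc-url, load-url, database-name, table-name, credentials) are placeholders, not the exact values from my job; the only part that matters for this bug is 'connector' = 'starrocks'.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class StarRocksSinkSketch {
        public static void main(String[] args) {
            // Streaming TableEnvironment, matching the Flink SQL development mode.
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // Hypothetical sink table: schema and connection options are placeholders.
            tEnv.executeSql(
                    "CREATE TABLE starrocks_sink (\n"
                    + "  id   BIGINT,\n"
                    + "  name STRING\n"
                    + ") WITH (\n"
                    + "  'connector'     = 'starrocks',\n"   // the option the factory lookup fails on
                    + "  'jdbc-url'      = 'jdbc:mysql://fe-host:9030',\n"
                    + "  'load-url'      = 'fe-host:8030',\n"
                    + "  'database-name' = 'demo_db',\n"
                    + "  'table-name'    = 'demo_table',\n"
                    + "  'username'      = 'user',\n"
                    + "  'password'      = '***'\n"
                    + ")");

            // An INSERT INTO starrocks_sink ... statement then triggers
            // createDynamicTableSink(), where the ValidationException below is
            // thrown if the connector jar is not on the classpath.
        }
    }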

Note: the hdfs directory configured as workspace remote in application.yml was adjusted once during installation: the old directory in hdfs was deleted and a new one was created. After restarting the service, the new directory stayed empty, with no subdirectories or files in it. Because an uploaded udf package still worked normally during testing, I did not take this problem seriously at the time.
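To double-check the workspace, the configured directory can be listed programmatically; a minimal sketch, assuming a hypothetical workspace path (hdfs dfs -ls <path> from a shell shows the same thing):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListWorkspace {
        public static void main(String[] args) throws Exception {
            // Hypothetical path: substitute the actual 'workspace remote'
            // value from application.yml.
            Path workspace = new Path("hdfs:///streamx/workspace");
            try (FileSystem fs = FileSystem.get(new Configuration())) {
                // Prints every child of the workspace directory; in my case
                // this directory is empty after the reinstall.
                for (FileStatus status : fs.listStatus(workspace)) {
                    System.out.println(status.getPath());
                }
            }
        }
    }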

How can I solve these two problems: the hdfs workspace directory staying empty, and Cannot discover a connector using option: 'connector'='starrocks'?


StreamX Version

1.2.3

Java Version

No response

Flink Version

1.15.1

Scala Version of Flink

2.12

Error Exception

Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='starrocks'
	at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:728)
	at org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:702)
	at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:257)
	... 54 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'starrocks' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

Available factory identifiers are:

blackhole
datagen
filesystem
kafka
print
upsert-kafka
	at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:538)
	at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:724)
	... 56 more
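
For context, the "Available factory identifiers" list above is produced by Flink's Java SPI scan of the classpath. A small diagnostic sketch of my own (not part of the job) that prints roughly the same list for whatever classpath it runs under:

    import java.util.ServiceLoader;

    import org.apache.flink.table.factories.Factory;

    public class ListTableFactories {
        public static void main(String[] args) {
            // Flink resolves 'connector' = '...' by scanning the classpath for
            // implementations of org.apache.flink.table.factories.Factory via
            // Java SPI (META-INF/services). What this prints corresponds to the
            // "Available factory identifiers" list in the exception.
            for (Factory factory : ServiceLoader.load(Factory.class)) {
                System.out.println(factory.factoryIdentifier());
            }
        }
    }

If 'starrocks' is absent from that output inside the YARN containers, the flink-connector-starrocks jar never reached the application classpath, which would be consistent with the empty workspace directory in hdfs.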

Screenshots

No response

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

Code of Conduct

JasonChen-ecnu · Aug 29 '22 03:08

@MonsterChenzhuo please check

wolfboys · Aug 29 '22 17:08