[Bug] Paimon 0.8 catalog with Flink 1.18/1.19 and MinIO S3 can't create database
Search before asking
- [X] I searched in the issues and found nothing similar.
Paimon version
0.8.2, no Hadoop environment; using Flink's S3 configuration
Compute Engine
Flink 1.19.1 standalone session, 1 master, 3 workers
Flink 1.18.1 standalone session, 1 master, 3 workers
Minimal reproduce step
With the Flink sql-client, I create a Paimon catalog backed by S3, as below:
CREATE CATALOG ctl_paimon_s3 WITH (
'type'='paimon',
'warehouse' = 's3p://flink-tst/paimon/',
...
);
However, create database db_paimon; only works in the default catalog, not in the Paimon catalog on S3.
The sql-client prints "succeed", but the log shows no further output after the use catalog ctl_paimon_s3; line.
In the default database, following Paimon's guide, I successfully created a word_count table stored on S3; selecting from it also works, and the table holds nearly 1,000,000 records.
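For reference, the reproduce steps can also be driven non-interactively through sql-client (a sketch; the Flink path is taken from this report, the init-file name is hypothetical, and the elided catalog options must be filled in):

```shell
# Write the DDL to a script and run it through sql-client in one shot.
cat > /tmp/repro.sql <<'SQL'
CREATE CATALOG ctl_paimon_s3 WITH (
  'type' = 'paimon',
  'warehouse' = 's3p://flink-tst/paimon/'
  -- plus the s3.endpoint / s3.access-key / s3.secret-key /
  -- s3.path.style.access options for your MinIO setup
);
USE CATALOG ctl_paimon_s3;
CREATE DATABASE db_paimon;
-- Expected: db_paimon listed; observed: only "default".
SHOW DATABASES;
SQL
/opt/flink-1.19.1/bin/sql-client.sh -f /tmp/repro.sql
```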
What doesn't meet your expectations?
I need to create a database in a Paimon catalog whose S3 warehouse is stored on MinIO.
Flink SQL> CREATE CATALOG ctl_paimon_s3 WITH (
> 'type'='paimon',
> 'warehouse' = 's3p://flink/paimon/',
> 's3.endpoint' = 'http://ossapi-tst',
> 's3.access-key' = 'flink',
> 's3.secret-key' = 'flink',
> 's3.path.style.access' = 'true'
> );
[INFO] Execute statement succeed.
Flink SQL> use catalog ctl_paimon_s3;
[INFO] Execute statement succeed.
Flink SQL> create database ODS;
[INFO] Execute statement succeed.
Flink SQL> show databases;
+---------------+
| database name |
+---------------+
| default |
+---------------+
1 row in set
Flink SQL> show tables;
+------------------+
| table name |
+------------------+
| customer |
| word_count |
+------------------+
2 rows in set
Flink SQL> CREATE TABLE ODS.word_count (
> word STRING PRIMARY KEY NOT ENFORCED,
> cnt BIGINT
> );
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.catalog.exceptions.DatabaseNotExistException: Database ODS does not exist in Catalog ctl_paimon_s3.
Flink SQL> use catalog default_catalog;
[INFO] Execute statement succeed.
Flink SQL> create database ODS;
[INFO] Execute statement succeed.
Flink SQL> show databases;
+------------------+
| database name |
+------------------+
| ODS |
| default_database |
+------------------+
2 rows in set
Flink SQL>
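To see whether CREATE DATABASE reached the object store at all, one can list the warehouse root directly on MinIO (a sketch; bucket, endpoint, and credentials are taken from the catalog DDL above, and the AWS CLI is assumed to be installed). Paimon's filesystem catalog keeps each database as a "<name>.db/" directory under the warehouse, so a successful CREATE DATABASE ODS should produce an "ODS.db/" prefix:

```shell
# List the Paimon warehouse root on MinIO; look for an "ODS.db/" entry.
AWS_ACCESS_KEY_ID=flink AWS_SECRET_ACCESS_KEY=flink \
  aws s3 ls s3://flink/paimon/ --endpoint-url http://ossapi-tst
```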
Log
root@gp_mdw:/opt/flink-1.19.1# tail -n 200 -f log/flink-root-sql-client-gp_mdw.log
2024-08-19 14:45:29,308 INFO org.apache.flink.configuration.GlobalConfiguration [] - Using standard YAML parser to load flink configuration file from /opt/flink-1.19.1/conf/config.yaml.
2024-08-19 14:45:29,412 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.checkpoints.num-retained, 20
2024-08-19 14:45:29,412 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: fs.allowed-fallback-filesystems, s3
2024-08-19 14:45:29,412 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region
2024-08-19 14:45:29,412 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, 172.31.4.220
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.savepoints.dir, s3p://flink-tst/savepoints/
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.aligned-checkpoint-timeout, 30s
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.bind-host, 0.0.0.0
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: s3.secret-key, ******
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: s3.endpoint, https://ossapi.xxxx.com
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.storage, filesystem
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 2
2024-08-19 14:45:29,413 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 4
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: env.java.opts.all, --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend.type, hashmap
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 2048m
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.mode, EXACTLY_ONCE
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.bind-host, 0.0.0.0
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.tolerable-failed-checkpoints, 0
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend.incremental, true
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 1600m
2024-08-19 14:45:29,414 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.bind-address, 0.0.0.0
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.interval, 3min
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.timeout, 10min
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: s3.access-key, flink-tst
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.externalized-checkpoint-retention, RETAIN_ON_CANCELLATION
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: rest.address, 172.31.4.220
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.min-pause, 0
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: execution.checkpointing.max-concurrent-checkpoints, 1
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.checkpoints.dir, s3p://flink-tst/checkpoints/
2024-08-19 14:45:29,415 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: s3.path.style.access, true
2024-08-19 14:45:29,572 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-influx
2024-08-19 14:45:29,576 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-statsd
2024-08-19 14:45:29,577 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-slf4j
2024-08-19 14:45:29,577 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-datadog
2024-08-19 14:45:29,577 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-graphite
2024-08-19 14:45:29,577 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-jmx
2024-08-19 14:45:29,578 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: metrics-prometheus
2024-08-19 14:45:29,578 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: external-resource-gpu
2024-08-19 14:45:29,578 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: s3-fs-presto
2024-08-19 14:45:29,578 INFO org.apache.flink.core.plugin.DefaultPluginManager [] - Plugin loader with ID not found, creating it: s3-fs-hadoop
2024-08-19 14:45:29,729 INFO org.apache.flink.table.gateway.service.context.DefaultContext [] - Execution config: {execution.savepoint.ignore-unclaimed-state=false, execution.savepoint-restore-mode=NO_CLAIM, execution.attached=true, pipeline.jars=[file:/opt/flink-1.19.1/opt/flink-sql-client-1.19.1.jar], execution.shutdown-on-attached-exit=false, pipeline.classpaths=[], execution.target=remote}
2024-08-19 14:45:30,342 INFO org.apache.flink.configuration.Configuration [] - Config uses fallback configuration key 'rest.port' instead of key 'rest.bind-port'
2024-08-19 14:45:30,380 INFO org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint [] - Starting rest endpoint.
2024-08-19 14:45:30,850 INFO org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint [] - Rest endpoint listening at localhost:13675
2024-08-19 14:45:30,851 INFO org.apache.flink.table.client.SqlClient [] - Start embedded gateway on port 13675
2024-08-19 14:45:31,199 INFO org.apache.flink.table.client.gateway.ExecutorImpl [] - Open session to http://localhost:13675 with connection version: V2.
2024-08-19 14:45:31,599 INFO org.apache.flink.table.client.cli.CliClient [] - Command history file path: /root/.flink-sql-history
2024-08-19 14:48:47,074 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateCatalog does not contain a setter for field catalogName
2024-08-19 14:48:47,074 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateCatalog cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,076 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlCreateView does not contain a setter for field viewName
2024-08-19 14:48:47,076 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlCreateView cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,077 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a getter for field newViewIdentifier
2024-08-19 14:48:47,077 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewRename does not contain a setter for field newViewIdentifier
2024-08-19 14:48:47,077 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewRename cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,078 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties does not contain a setter for field propertyList
2024-08-19 14:48:47,078 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewProperties cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,078 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAlterViewAs does not contain a setter for field newQuery
2024-08-19 14:48:47,078 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAlterViewAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,079 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlAddPartitions does not contain a setter for field ifPartitionNotExists
2024-08-19 14:48:47,079 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlAddPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,080 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlDropPartitions does not contain a setter for field ifExists
2024-08-19 14:48:47,080 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlDropPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,081 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a getter for field tableIdentifier
2024-08-19 14:48:47,081 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowPartitions does not contain a setter for field tableIdentifier
2024-08-19 14:48:47,081 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowPartitions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,082 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a getter for field tableNameIdentifier
2024-08-19 14:48:47,082 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dml.SqlTruncateTable does not contain a setter for field tableNameIdentifier
2024-08-19 14:48:47,082 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dml.SqlTruncateTable cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,083 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowFunctions does not contain a setter for field requireUser
2024-08-19 14:48:47,083 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowFunctions cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,084 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a getter for field databaseName
2024-08-19 14:48:47,084 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowProcedures does not contain a setter for field databaseName
2024-08-19 14:48:47,084 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowProcedures cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,086 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs does not contain a setter for field tableName
2024-08-19 14:48:47,086 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.ddl.SqlReplaceTableAs cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,087 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.sql.parser.dql.SqlShowDatabases does not contain a setter for field preposition
2024-08-19 14:48:47,088 INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - Class class org.apache.flink.sql.parser.dql.SqlShowDatabases cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance and schema evolution.
2024-08-19 14:48:47,141 WARN org.apache.paimon.utils.HadoopUtils [] - Could not find Hadoop configuration via any of the supported methods
2024-08-19 14:48:47,324 INFO org.apache.flink.fs.s3.common.token.AbstractS3DelegationTokenReceiver [] - Updating Hadoop configuration
2024-08-19 14:48:47,325 INFO org.apache.flink.fs.s3.common.token.AbstractS3DelegationTokenReceiver [] - Updated Hadoop configuration successfully
2024-08-19 14:49:51,964 INFO org.apache.flink.table.catalog.CatalogManager [] - Set the current default catalog as [ctl_paimon_s3] and the current default database as [default].
2024-08-19 15:05:12,325 INFO org.apache.flink.client.program.rest.RestClusterClient [] - Submitting job 'collect' (e0b9aa9782e4a000a4f27da7cb1dfe0e).
2024-08-19 15:05:12,453 INFO org.apache.flink.client.program.rest.RestClusterClient [] - Successfully submitted job 'collect' (e0b9aa9782e4a000a4f27da7cb1dfe0e) to 'http://172.31.4.220:8081'.
2024-08-19 15:05:16,076 WARN org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Selector.select() returned prematurely 512 times in a row; rebuilding Selector org.apache.flink.shaded.netty4.io.netty.channel.nio.SelectedSelectionKeySetSelector@36304b46.
2024-08-19 15:05:16,078 INFO org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Migrated 0 channel(s) to the new Selector.
2024-08-19 15:05:20,116 WARN org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Selector.select() returned prematurely 512 times in a row; rebuilding Selector org.apache.flink.shaded.netty4.io.netty.channel.nio.SelectedSelectionKeySetSelector@4fc12afb.
2024-08-19 15:05:20,116 INFO org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Migrated 0 channel(s) to the new Selector.
2024-08-19 15:06:11,324 WARN org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Selector.select() returned prematurely 512 times in a row; rebuilding Selector org.apache.flink.shaded.netty4.io.netty.channel.nio.SelectedSelectionKeySetSelector@59c0aef.
2024-08-19 15:06:11,324 INFO org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop [] - Migrated 0 channel(s) to the new Selector.
2024-08-19 15:07:33,315 WARN org.apache.flink.streaming.api.operators.collect.CollectResultFetcher [] - Interrupted when sleeping before a retry
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method) ~[?:1.8.0_311]
at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:247) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:116) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:126) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:100) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:247) [flink-table-planner_ba5d4a74-f257-498a-8e5f-469556725a3e.jar:1.19.1]
at org.apache.flink.table.gateway.service.result.ResultStore$ResultRetrievalThread.run(ResultStore.java:155) [flink-sql-gateway-1.19.1.jar:1.19.1]
2024-08-19 15:07:33,315 ERROR org.apache.flink.table.gateway.service.SqlGatewayServiceImpl [] - Failed to cancelOperation.
org.apache.flink.table.gateway.api.utils.SqlGatewayException: Failed to convert the Operation Status from FINISHED to CANCELED for 18781ab8-348f-4c72-bbfe-87fc6634ed6a.
at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.updateState(OperationManager.java:385) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.cancel(OperationManager.java:311) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.service.operation.OperationManager.cancelOperation(OperationManager.java:130) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.cancelOperation(SqlGatewayServiceImpl.java:148) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.rest.handler.operation.CancelOperationHandler.execute(CancelOperationHandler.java:47) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.rest.handler.operation.AbstractOperationHandler.handleRequest(AbstractOperationHandler.java:74) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:84) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:52) ~[flink-sql-gateway-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:196) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:88) ~[flink-dist-1.19.1.jar:1.19.1]
at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_311]
at org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:85) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:50) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:233) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:70) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [flink-dist-1.19.1.jar:1.19.1]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]
2024-08-19 15:07:33,337 ERROR org.apache.flink.table.gateway.rest.handler.operation.CancelOperationHandler [] - Exception occurred in REST handler: Failed to cancelOperation.
Anything else?
No response
Are you willing to submit a PR?
- [ ] I'm willing to submit a PR!
I encountered the same issue.
I think this issue is caused by flink-s3-fs-presto-1.19.1.jar. The documentation recommends using it; we did and it failed. But when using flink-s3-fs-hadoop-1.19.1.jar, everything works.
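A sketch of that workaround (assumes a Flink 1.19.1 install at /opt/flink-1.19.1, as in this report; adjust paths to your setup). Flink loads each filesystem plugin from its own subdirectory under plugins/, and the s3:// scheme is served by whichever S3 plugin is installed, while s3p:// is pinned to the presto one:

```shell
FLINK_HOME=/opt/flink-1.19.1

# Install the hadoop-based S3 filesystem as a plugin.
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.19.1.jar" \
   "$FLINK_HOME/plugins/s3-fs-hadoop/"

# Restart the cluster so the plugin is picked up, then recreate the
# catalog with an s3:// (or s3a://) warehouse instead of s3p://:
#   CREATE CATALOG ctl_paimon_s3 WITH (
#     'type' = 'paimon',
#     'warehouse' = 's3://flink/paimon/',
#     ...
#   );
```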
Thanks, that worked.