Background: there are requirements for real-time data widening. Hive currently supports lookup join, but this solution is not production-ready, and the Hive table needs to be loaded...
Affected Flink versions: 1.12 and 1.14. `ArcticDynamicSource` should implement the `SupportsProjectionPushDown` interface to accelerate data querying.
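For illustration, a minimal query where projection pushdown matters, assuming a hypothetical wide table `arctic.db.wide_table`: without `SupportsProjectionPushDown`, the source reads every column and the planner discards the unused ones afterwards; with it, only the projected columns are read.

```
-- Hypothetical table and column names; illustration only.
-- With SupportsProjectionPushDown implemented by ArcticDynamicSource,
-- only `id` and `name` are read from the underlying files instead of
-- the full row.
SELECT id, name
FROM arctic.db.wide_table;
```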
This issue affects Flink versions: Flink 1.12 and Flink 1.14.
### Search before asking

- [x] I have searched in the [issues](https://github.com/NetEase/arctic/issues?q=is%3Aissue) and found no similar issues.

### What would you like to be improved?

Currently, the insert overwrite statement...
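The truncated request concerns Flink's `INSERT OVERWRITE`. For context, a minimal sketch of the statement (table names are hypothetical; in Flink, `INSERT OVERWRITE` is only supported in batch runtime mode):

```
-- Overwrite the whole target table (batch mode only).
INSERT OVERWRITE arctic.db.target_table
SELECT id, name FROM arctic.db.source_table;

-- Overwrite a single static partition.
INSERT OVERWRITE arctic.db.target_table PARTITION (dt = '2022-01-01')
SELECT id, name FROM arctic.db.source_table;
```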
`insert into arctic.db.table /*+ OPTIONS('arctic.emit.mode'='log') */ select id, name, LOCALTIMESTAMP from source;`

`arctic.db.table` table info:

```
Flink SQL> show create table log_table;
CREATE TABLE `arctic`.`db`.`log_table` (
  `id` INT NOT NULL,
  `name` VARCHAR(2147483647),
  ...
```
This is a demo showing how to perform a lookup join on an Arctic table via a temporal table join. The Flink syntax is: `SELECT [column_list] FROM table1 [AS <alias1>] [LEFT] JOIN table2 FOR SYSTEM_TIME AS OF...`
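A minimal end-to-end sketch under assumed names: `orders` is a streaming table that declares a processing-time attribute (`proc_time AS PROCTIME()`), and `arctic.db.dim_table` is the Arctic dimension table being looked up.

```
-- Enrich each order with the latest dimension row at processing time.
-- All table and column names here are hypothetical.
SELECT o.order_id, o.amount, d.name
FROM orders AS o
JOIN arctic.db.dim_table FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.user_id = d.id;
```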
In the guidance, we should declare which Kafka versions are supported by the log store.
Scenario: using Flink SQL, create an Iceberg table that has a timestamp-type column with a precision equal to 0. Use the DDL below to create a table with a watermark definition, like...
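Since the DDL itself is truncated above, here is a minimal sketch of what such a table might look like (hypothetical names; assumes the current catalog is an Iceberg catalog registered in Flink SQL):

```
CREATE TABLE ts0_table (
  id INT,
  -- timestamp column with precision 0
  op_time TIMESTAMP(0),
  -- watermark defined on the precision-0 timestamp column
  WATERMARK FOR op_time AS op_time - INTERVAL '5' SECOND
);
```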
[Improvement]: Integrate the Flink table properties when the Arctic table refresh method is invoked
### Search before asking

- [X] I have searched in the [issues](https://github.com/NetEase/arctic/issues?page=4&q=is%3Aissue) and found no similar issues.

### What would you like to be improved?

As of now, the Flink...
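For context, a hedged sketch of the kind of change this improvement would cover: a property is altered on the Arctic table, and the request is that a running Flink job pick up the new value on its next periodic table refresh rather than requiring a restart. The property key below is purely hypothetical.

```
-- Flink's ALTER TABLE ... SET syntax is standard, but
-- 'some.table.property' is an illustrative placeholder, not a real Arctic key.
ALTER TABLE arctic.db.log_table SET ('some.table.property' = 'new-value');
```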