xiaofan2012

Results: 14 comments by xiaofan2012

Support SQL field type mapping for the major connectors, such as flink-cdc, flink-jdbc, flink-hive, and flink-hudi.

Purpose: the SQL generation function should automatically generate synchronization SQL (with type mapping) based on the connector. Plan: first support field type mapping for JDBC-generated SQL.

The front end specifies the connector type (such as CDC, JDBC, Hive, Hudi) when generating SQL statements, and the back end maps the corresponding field types according to the official documentation...

If I want to add flink-cdc type mapping, should I add a new flink-cdc-meta module and put the corresponding logic into flink-cdc-meta?

Would it be possible to maintain separate conversion logic for each source (e.g. flink-cdc, jdbc, hive, hudi), with each connector owning its own conversion logic?
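The per-connector conversion idea above could be sketched as a small registry: one mapper per connector, looked up by connector name. This is a minimal illustration, not the project's actual API; all class names, method names, and the sample JDBC mappings are assumptions for demonstration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each connector registers its own TypeMapper, so
// flink-cdc, jdbc, hive, hudi can evolve their mappings independently.
interface TypeMapper {
    String toFlinkSqlType(String sourceType);
}

// Illustrative JDBC mapper; the entries below are sample mappings only.
class JdbcTypeMapper implements TypeMapper {
    private static final Map<String, String> MAPPING = new HashMap<>();
    static {
        MAPPING.put("VARCHAR", "STRING");
        MAPPING.put("DATETIME", "TIMESTAMP(3)");
        MAPPING.put("BIGINT", "BIGINT");
    }

    @Override
    public String toFlinkSqlType(String sourceType) {
        // Fall back to STRING for types not yet covered by the table.
        return MAPPING.getOrDefault(sourceType.toUpperCase(), "STRING");
    }
}

public class TypeMapperRegistry {
    private static final Map<String, TypeMapper> MAPPERS = new HashMap<>();
    static {
        MAPPERS.put("jdbc", new JdbcTypeMapper());
        // MAPPERS.put("cdc", new CdcTypeMapper()); ... one entry per connector
    }

    public static String map(String connector, String sourceType) {
        TypeMapper mapper = MAPPERS.get(connector);
        if (mapper == null) {
            throw new IllegalArgumentException("Unknown connector: " + connector);
        }
        return mapper.toFlinkSqlType(sourceType);
    }

    public static void main(String[] args) {
        System.out.println(map("jdbc", "datetime")); // TIMESTAMP(3)
    }
}
```

Adding flink-cdc support would then mean adding one new mapper class and one registry entry, without touching the other connectors' logic.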

I found that JsonDebeziumDeserializationSchema has a problem when deserializing the Oracle NUMBER type. What is a good way to deal with it?...
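One likely cause: with Debezium's default `decimal.handling.mode=precise`, an Oracle NUMBER is emitted as a VariableScaleDecimal struct (an int `scale` plus base64-encoded two's-complement big-endian unscaled `value` bytes) rather than a plain number. A minimal sketch of decoding that representation back into a BigDecimal, assuming that struct shape (the class and method names are illustrative):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Base64;

// Sketch: rebuild a BigDecimal from the scale/value pair that Debezium's
// VariableScaleDecimal encoding produces for Oracle NUMBER columns.
public class OracleNumberDecoder {
    public static BigDecimal decode(int scale, String base64Value) {
        // "value" carries the unscaled integer as big-endian
        // two's-complement bytes, base64-encoded in the JSON payload.
        byte[] unscaled = Base64.getDecoder().decode(base64Value);
        return new BigDecimal(new BigInteger(unscaled), scale);
    }

    public static void main(String[] args) {
        // bytes {0x04, 0xD2} = 1234; with scale 2 this is 12.34
        String encoded = Base64.getEncoder()
                .encodeToString(new byte[]{0x04, (byte) 0xD2});
        System.out.println(decode(2, encoded)); // prints 12.34
    }
}
```

Alternatively, setting the Debezium connector property `decimal.handling.mode` to `string` (or `double`) makes NUMBER values arrive as plain strings (or doubles), avoiding the struct entirely, at the cost of losing the exact decimal representation in the `double` case.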

"consistent_bucket_write: test.fin_ipr_inmaininfo_test (1/2)#0" Id=89 TIMED_WAITING on java.util.LinkedList@37d9fd7
    at java.lang.Object.wait(Native Method)
    - waiting on java.util.LinkedList@37d9fd7
    at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:924)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:692)
    at org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:587)
    at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:145)
    at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:145)
    at org.apache.hudi.common.table.log.HoodieLogFormatWriter.flush(HoodieLogFormatWriter.java:261)
    at org.apache.hudi.common.table.log.HoodieLogFormatWriter.appendBlocks(HoodieLogFormatWriter.java:194)
    at org.apache.hudi.io.HoodieAppendHandle.appendDataAndDeleteBlocks(HoodieAppendHandle.java:479)...