alibabacloud-hologres-connectors
Source file: [HoloClient.java](https://github.com/aliyun/alibabacloud-hologres-connectors/blob/master/holo-client/src/main/java/com/alibaba/hologres/client/HoloClient.java)

```java
/**
 * For reads and writes against a partitioned table, rewrite the operation's
 * schema to point at the child partition table.
 *
 * @param record the Record being operated on
 * @param createIfNotExists when dynamicPartition is true and this is a
 *        non-delete put, create the partition automatically
 * @param exceptionIfNotExists whether to throw when the partition table does
 *        not exist; get and delete operations do not throw when the child
 *        table is missing
 * @return whether this operation can be ignored, e.g. a delete (PUT) or a GET
 *         whose child partition table does not exist
 * @throws HoloClientException failure to fetch the partition, or to fetch the
 *         TableSchema from the partition info; the record then completes
 *         exceptionally
 */
private...
```
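For context, here is a minimal sketch of how this partition handling surfaces in the public holo-client API: with `setDynamicPartition(true)`, a put against a missing child partition creates it instead of failing. The endpoint, credentials, table, and column names below are placeholders, not from the source.

```java
import com.alibaba.hologres.client.HoloClient;
import com.alibaba.hologres.client.HoloConfig;
import com.alibaba.hologres.client.Put;
import com.alibaba.hologres.client.model.TableSchema;

public class DynamicPartitionPutExample {
    public static void main(String[] args) throws Exception {
        HoloConfig config = new HoloConfig();
        config.setJdbcUrl("jdbc:postgresql://host:port/db"); // placeholder endpoint
        config.setUsername("user");
        config.setPassword("password");
        // Corresponds to createIfNotExists in the Javadoc above: non-delete
        // puts auto-create the child partition when it does not exist yet.
        config.setDynamicPartition(true);

        HoloClient client = new HoloClient(config);
        try {
            TableSchema schema = client.getTableSchema("test_partition_table"); // hypothetical table
            Put put = new Put(schema);
            put.setObject("id", 1);
            put.setObject("ds", "20230101"); // partition key value
            client.put(put);
            client.flush(); // wait for the buffered write to land
        } finally {
            client.close();
        }
    }
}
```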
Bumps [com.google.guava:guava](https://github.com/google/guava) from 31.0.1-jre to 32.0.0-jre.

Release notes sourced from com.google.guava:guava's releases.

32.0.0

Maven:

```xml
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>32.0.0-jre</version>
  <!-- or, for Android: -->
  <version>32.0.0-android</version>
</dependency>
```

Jar files: 32.0.0-jre.jar, 32.0.0-android.jar

Guava...
```
error=ERROR: internal error: Capacity error: BinaryArray cannot contain more than 2147483646 bytes, have 2147484109
CONTEXT: [query_id:310004008398768452]
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2565)
org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1224)
org.postgresql.core.v3.QueryExecutorImpl.endCopy(QueryExecutorImpl.java:1029)
org.postgresql.core.v3.CopyInImpl.endCopy(CopyInImpl.java:49)
com.bigdata.etl.stream.CoordinatorWithTransaction$.$anonfun$main$4(CoordinatorWithTransaction.scala:157)
com.bigdata.etl.stream.CoordinatorWithTransaction$.$anonfun$main$4$adapted(CoordinatorWithTransaction.scala:119)
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2(RDD.scala:915)
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2$adapted(RDD.scala:915)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
org.apache.spark.scheduler.Task.run(Task.scala:131)
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:519)
...
```
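The stack shows a single COPY stream (`CopyInImpl.endCopy`) exceeding the ~2 GiB (2147483646-byte) BinaryArray capacity. One possible workaround, if the write path drives the PostgreSQL JDBC CopyManager directly, is to split the data across several smaller COPY streams. A minimal sketch under that assumption; the `copyInChunks` helper and the 1 GiB threshold are illustrative, not part of the connector:

```java
import java.sql.Connection;
import org.postgresql.copy.CopyIn;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class ChunkedCopyExample {
    // Stay well under the ~2 GiB per-stream limit by ending the current COPY
    // and opening a fresh one before the threshold is reached.
    private static final long MAX_COPY_BYTES = 1L << 30; // 1 GiB, illustrative

    public static void copyInChunks(Connection conn, String copySql,
                                    Iterable<byte[]> rows) throws Exception {
        CopyManager mgr = new CopyManager((BaseConnection) conn);
        CopyIn copy = mgr.copyIn(copySql);
        long written = 0;
        for (byte[] row : rows) {
            if (written + row.length > MAX_COPY_BYTES) {
                copy.endCopy();             // finish the current COPY stream
                copy = mgr.copyIn(copySql); // open a new one
                written = 0;
            }
            copy.writeToCopy(row, 0, row.length);
            written += row.length;
        }
        copy.endCopy();
    }
}
```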
In holo-client, `writeThreadSize` sets the concurrency of put operations. Does higher concurrency always mean faster writes? With one thread I reach 10000 records/s, but when I raise the concurrency to 2 the write rate drops to 3500/s. Why?
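For reference, a minimal sketch of the related HoloConfig knobs; throughput depends on how `writeThreadSize` interacts with batching, since each write thread only helps if its batches stay full. The values below are illustrative, not recommendations.

```java
import com.alibaba.hologres.client.HoloConfig;

public class WriteConfigExample {
    public static HoloConfig buildConfig() {
        HoloConfig config = new HoloConfig();
        config.setJdbcUrl("jdbc:postgresql://host:port/db"); // placeholder
        config.setUsername("user");
        config.setPassword("password");
        // Put concurrency: each write thread holds its own connection.
        config.setWriteThreadSize(2);        // illustrative
        // Batching controls how full each thread's flushes are.
        config.setWriteBatchSize(512);       // records per batch, illustrative
        config.setWriteMaxIntervalMs(10000); // max wait before a partial batch flushes
        return config;
    }
}
```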
With the Flink SQL demo you provided, the job only ever runs as a batch: the data is synced once and then the job finishes.
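That behavior is typical when the job runs in batch runtime mode or reads from a bounded source. A minimal sketch of forcing streaming mode via the Flink Table API, assuming an unbounded source (e.g. CDC/binlog) is available; the DDL is elided and only the runtime-mode setting is the point:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StreamingModeExample {
    public static void main(String[] args) {
        // Streaming mode keeps the job running and syncing continuously;
        // a bounded source would still terminate once it is exhausted.
        EnvironmentSettings settings =
                EnvironmentSettings.newInstance().inStreamingMode().build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // tEnv.executeSql("CREATE TABLE source (...) WITH (...)"); // unbounded source DDL elided
        // tEnv.executeSql("CREATE TABLE sink (...) WITH ('connector' = 'hologres', ...)");
        // tEnv.executeSql("INSERT INTO sink SELECT * FROM source");
    }
}
```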
- The official docs only publish read/write benchmark results for Hologres itself: https://help.aliyun.com/document_detail/605017.html
- For Flink/Spark connecting to Hologres, the docs describe writing through the Connector, i.e. LBS --> private api service --> backend (in the insert path, doesn't this imply two data copies?). Is there any benchmark of Flink write performance against Hologres? **How is the performance**, and can those two copies be optimized? https://developer.aliyun.com/article/778798
The official docs give a type mapping against Hive (ARRAY -> TEXT[]), but creating a foreign table with such complex types fails with an error.
```
[201]com.alibaba.hologres.client.exception.HoloClientException: [201]truncate table ["crm_data_cube"."bank_coop_account_check_for_spa"], but replay not finished yet: Failed to get table from StoreMaster, maybe still in replay after a truncate
```
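This error means the table is still replaying after a TRUNCATE, so writes issued immediately afterwards fail. One possible mitigation is to retry the write with backoff until the replay finishes. A minimal sketch; the retry count, delay, and message-substring check are illustrative, and whether a given HoloClientException is actually retryable should be verified against its error code:

```java
import com.alibaba.hologres.client.HoloClient;
import com.alibaba.hologres.client.Put;
import com.alibaba.hologres.client.exception.HoloClientException;

public class TruncateReplayRetryExample {
    // Retry a put that may hit "replay not finished yet" right after TRUNCATE.
    public static void putWithRetry(HoloClient client, Put put) throws Exception {
        int maxRetries = 5;    // illustrative
        long backoffMs = 2000; // illustrative
        for (int attempt = 0; ; attempt++) {
            try {
                client.put(put);
                client.flush();
                return;
            } catch (HoloClientException e) {
                boolean replayPending = e.getMessage() != null
                        && e.getMessage().contains("replay not finished");
                if (attempt >= maxRetries || !replayPending) {
                    throw e; // not retryable, or out of retries
                }
                Thread.sleep(backoffMs * (attempt + 1)); // linear backoff
            }
        }
    }
}
```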