liming.1018
> @liming30 Thanks for your PR. Would you like to share some improvements, if this PR is merged, via [flink-benchmarks](https://github.com/apache/flink-benchmarks)? Hi, @fredia, I wrote the following benchmark on my local Mac...
@fredia If we want to test the performance of RocksDB after `deleteRange`, the conclusion is already given in the last part of this [blog](https://rocksdb.org/blog/2018/11/21/delete-range.html). Do you mean that the...
@jhunters When you have time, could you please help review this? Thanks!
We have often encountered errors such as `NoSuchMethodError` and `ClassNotFoundException` when executing tests in IDEA. This is mainly because we override some format classes, shade packages, etc. Should we create...
> This is mainly because we override Parquet's class `ParquetFileReader` with our own version. If we have a paimon-shade project, we have to put paimon's `ParquetFileReader` in it...
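To avoid this kind of classpath conflict, a dedicated paimon-shade module could relocate the overridden classes so they no longer collide with upstream Parquet on the test classpath. Below is a minimal sketch of a `maven-shade-plugin` relocation; the shaded package name is illustrative, not the actual Paimon build configuration:

```xml
<!-- Illustrative sketch: relocate the bundled Parquet classes so the
     overridden ParquetFileReader does not clash with upstream Parquet. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.parquet</pattern>
            <shadedPattern>org.apache.paimon.shade.parquet</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With the relocation in place, IDEA would resolve the shaded copy under its own package, and the overridden class would no longer shadow the upstream one.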
@JingsongLi Hi, could you please help review this when you have time?
> What case do you want to solve? Lookup Join for partial-update table without changelog-producer? If this is your requirement, can we just modify Flink LookupJoin Function? @JingsongLi the dim...
As a follow-up issue to #3905: dim tables do not require streaming consumption in most cases, so there is no need to generate changelog files, which reduces write IO. When...
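For that case, the dim table could simply be created without a changelog producer; in Paimon, `'changelog-producer' = 'none'` (the default) means no separate changelog files are written. A minimal Flink SQL sketch, with illustrative table and column names:

```sql
-- Illustrative dim table: no changelog files are generated,
-- since lookup joins do not need streaming consumption of changes.
CREATE TABLE dim_user (
    user_id BIGINT,
    user_name STRING,
    PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
    'changelog-producer' = 'none'  -- default; skips changelog file generation
);
```

The trade-off is that downstream streaming readers would have to derive changes themselves, which is acceptable for tables consumed only via lookup join.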
@qingfei1994 Hi, I think there is no need to validate the existence of `aggregate-functions` during the DDL stage, because paimon already supports `user-defined aggregate-functions`. In our case, tables are...
I agree that `user-defined aggregate-functions` should be validated in advance, rather than during the actual write stage. Perhaps we could add this validation when creating the `Source / Sink`, so...
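Validating at `Source / Sink` creation time could be as simple as checking each configured function name against the set of registered functions (built-in plus user-defined ones discovered at startup) and failing fast. A minimal sketch, assuming a hypothetical registry and option shape; none of these names are Paimon's actual API:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: validate aggregate-function options when the
// Source/Sink is created, instead of failing during the actual write.
public class AggFunctionValidation {

    // Illustrative registry: built-in functions plus a user-defined one
    // that would normally be discovered via SPI at startup.
    static final Set<String> REGISTERED =
            Set.of("sum", "max", "min", "last_value", "my_udaf");

    // Maps field name -> configured aggregate function name.
    static void validate(Map<String, String> fieldAggOptions) {
        for (Map.Entry<String, String> e : fieldAggOptions.entrySet()) {
            if (!REGISTERED.contains(e.getValue())) {
                throw new IllegalArgumentException(
                        "Field '" + e.getKey() + "' uses unknown aggregate function '"
                                + e.getValue() + "'");
            }
        }
    }

    public static void main(String[] args) {
        validate(Map.of("price", "sum")); // passes silently
        try {
            validate(Map.of("price", "typo_func")); // rejected at creation time
        } catch (IllegalArgumentException ex) {
            System.out.println("rejected early: " + ex.getMessage());
        }
    }
}
```

This moves the misconfiguration error from the write path to job planning, where it is much cheaper to surface.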