### What changes were proposed in this pull request?

`ArrayIntersect` misjudges whether null is contained in the right expression's hash set.

```
>>> a = [1, 2, 3]
>>> b =...
```
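A minimal reproduction sketch of the null case, assuming a local PySpark session; the column names and values below are illustrative, not taken from the PR:

```
# Sketch: array_intersect where the right-hand array contains a null.
# The PR concerns how the right expression's hash set records that null.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.createDataFrame([([1, 2, None], [1, None, 3])], ["a", "b"])
df.select(F.array_intersect("a", "b").alias("res")).show()
# A correct result keeps the shared elements, including the shared null.
```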
### What changes were proposed in this pull request?

ArrayType's parameter `containsNull` means the array can contain null elements; it is related to, but distinct from, `nullable`, which makes the logic easy to misunderstand when reading. In...
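To illustrate the distinction, a small sketch using PySpark's public type API (the field name here is illustrative):

```
from pyspark.sql.types import ArrayType, IntegerType, StructField, StructType

# containsNull: may the array's *elements* be null?
# nullable:     may the column value itself (the whole array) be null?
schema = StructType([
    StructField("xs", ArrayType(IntegerType(), containsNull=True), nullable=False)
])
# Here a row must always carry an array, but that array may hold null elements.
```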
### What changes were proposed in this pull request?

After discussing https://github.com/apache/spark/pull/36207 and re-checking the whole logic, we should revert https://github.com/apache/spark/pull/36207 and make the following changes:

1. No matter whether...
## Bug Report

When I want to use TiSpark and Hive together, reading data from Hive and TiKV works, but creating a table in TiKV fails. I...
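A hypothetical sketch of the reported scenario, assuming a SparkSession configured with both the Hive catalog and a TiSpark catalog registered as `tidb_catalog`; all database and table names are made up for illustration:

```
# Reading from Hive and from TiKV both work:
spark.sql("SELECT * FROM hive_db.hive_table").show()
spark.sql("SELECT * FROM tidb_catalog.tidb_db.tidb_table").show()

# But creating a table in TiKV reportedly fails:
spark.sql("CREATE TABLE tidb_catalog.tidb_db.new_table (id INT)")
```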
### What changes were proposed in this pull request?

Cache Table with a CTE won't work; there are two reasons:

1. In the current code, a CTE in CacheTableAsSelect will be inlined...
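A minimal sketch of the failing shape, assuming an active SparkSession `spark`; the table and CTE names are illustrative:

```
# CACHE TABLE ... AS SELECT whose body defines a CTE.
spark.sql("""
    CACHE TABLE cached_cte AS
    WITH v AS (SELECT id FROM range(10))
    SELECT id FROM v WHERE id > 5
""")
# Per the description, the CTE gets inlined into CacheTableAsSelect,
# which is one of the two reasons the cache doesn't work as expected.
spark.sql("SELECT * FROM cached_cte").show()
```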
# :mag: Description

## Issue References 🔗

This pull request fixes #5995

## Describe Your Solution 🔧

The current injected rule already handles the whole plan, so we don't need to apply...
### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

### Search before asking

- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no...
### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

### Search before asking

- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues) and found no...
### _Why are the changes needed?_

To close #5594. For the case:

```
def filter_func(iterator):
    for pdf in iterator:
        yield pdf[pdf.id == 1]

df = spark.read.table("test_mapinpandas")
execute_result = df.mapInPandas(filter_func, df.schema).show()
```
...