Xi Lyu

Results: 6 comments by Xi Lyu

> Instead of adding (de)compression functions for different codecs, how about adding the `compression` and `decompression` directly, like,
>
> * https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_compress
> * https://learn.microsoft.com/en-us/sql/t-sql/functions/compress-transact-sql?view=sql-server-ver16

Hi @yaooqinn, yes, that can...
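For context, the MySQL `COMPRESS()` function linked above produces a documented byte layout: a four-byte little-endian length of the uncompressed input, followed by a zlib-compressed stream (empty input maps to an empty result). A minimal standalone sketch of that behavior, purely illustrative and not part of any Spark API, could look like this:

```python
import struct
import zlib


def mysql_compress(data: bytes) -> bytes:
    """Mimic MySQL COMPRESS(): 4-byte LE length prefix + zlib stream."""
    if not data:
        # MySQL returns an empty result for an empty input string
        return b""
    return struct.pack("<I", len(data)) + zlib.compress(data)


def mysql_uncompress(blob: bytes) -> bytes:
    """Mimic MySQL UNCOMPRESS(): read the length prefix, inflate the rest."""
    if not blob:
        return b""
    (expected_len,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    if len(out) != expected_len:
        raise ValueError("length prefix does not match decompressed size")
    return out


payload = b"hello" * 100
assert mysql_uncompress(mysql_compress(payload)) == payload
```

A single pair of generic functions with this kind of self-describing output is what the quoted suggestion favors over one function per codec.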

Hi @hvanhovell @vicennial , could you take a look at this PR? Thanks.

Hi @zhengruifeng, could you help with the CI failures from `pyspark-pandas-connect-part1`? This PR has no changes to any Scala code, but `sql/hive`, `connector/kafka`, and `connect/server` fail to compile due to...

Hi @zhengruifeng, I'm fixing the behaviour difference when referencing a non-existent column in Spark Connect Scala, based on PySpark's [\_\_getitem\_\_](https://github.com/apache/spark/blob/727167acc30c7a50566dad0c030763e34b450cca/python/pyspark/sql/connect/dataframe.py#L1745-L1748) and [verify_col_name](https://github.com/apache/spark/blob/e70f39e9a67184c2595d3a091ca716dccd70e41f/python/pyspark/sql/connect/types.py#L353-L380) methods. Could you please review this...
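The behaviour being mirrored here is an eager existence check: the column name is validated against the DataFrame's known schema at lookup time, rather than failing later during analysis. A minimal standalone sketch of that pattern (not Spark's actual implementation; the `schema` dict and `get_column` helper are hypothetical stand-ins):

```python
def get_column(df_schema: dict, name: str) -> str:
    """Eagerly validate a column reference against a known schema.

    PySpark Connect's DataFrame.__getitem__ performs a similar check
    via verify_col_name; this is an illustrative simplification.
    """
    if name not in df_schema:
        raise KeyError(
            f"Column '{name}' does not exist; "
            f"available columns: {sorted(df_schema)}"
        )
    return name


# Hypothetical schema: column name -> data type
schema = {"id": "bigint", "name": "string"}

assert get_column(schema, "id") == "id"
```

With an eager check like this, `df['non_existing_col']` fails immediately at the call site; without it, the bad reference only surfaces when the plan is analyzed on the server.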

@hvanhovell Thank you, that makes sense. In this case, we can close this PR. Do we want to remove the column name validation from [\_\_getitem\_\_](https://github.com/apache/spark/blob/727167acc30c7a50566dad0c030763e34b450cca/python/pyspark/sql/connect/dataframe.py#L1745-L1748) in PySpark, so `df['non_existing_col']` won't trigger...