Max
I was able to read a partitioned dataset without calling `pyarrow.parquet.read_table` directly, by passing the following keyword arguments, which polars forwards to `pyarrow.parquet.read_table` (using polars==0.10.27 and pyarrow==6.0.1): ``` df =...
@cottrell it is `pl`. Note that it only works if you have `pyarrow` installed; in that case polars calls `pyarrow.parquet.read_table` with those arguments and builds a `pl.DataFrame` from the resulting `pa.Table`. This...
I've tested this with my Spark Thrift HiveServer2 deployment behind a load balancer on Kubernetes and it works great. I'm looking forward to using this to connect Superset to my...
Notably, the reported location is 1:1097, where the bundle tries to require the aws-sdk, which is meant to be treated as an external module.
I tried your suggestion above, but still got the error when trying to import the AWS SDK. I patched the issue by updating the configuration to just include the modules....