Ashhar Hasan

136 comments by Ashhar Hasan

BTW, even if it's not expected, we should still make the code more defensive either way (for example, if there's some proxy sitting between Trino and the client which mangles...

Fixed in https://github.com/trinodb/trino-python-client/pull/263

This seems to be a side effect of the fact that we cache tokens for the entire host instead of per connection, i.e. once we stop sharing tokens across connections this problem...

Seems like this was fixed in 3.11.1. I can no longer reproduce the issue on newer versions. From https://github.com/snowflakedb/snowflake-jdbc/blob/master/CHANGELOG.rst

```
JDBC Driver 3.11.1 | SNOW-126957 | Add CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS for logging...
```

Does OSX have a `tput` command? If yes, you can use something like `tput sgr0` for resetting and `tput bold` for bold.

```bash
cb="$(tput bold)"
cc="$(tput sgr0)"
# Then use them...
```
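As a slightly fuller sketch (the warning message and the fallback are made up for illustration), it's worth guarding for environments where `tput` is unavailable or `TERM` is unset, so the script still works cleanly in pipes and cron:

```bash
# Sketch: use tput for styling, falling back to empty strings when no
# usable terminal is available, so non-interactive output stays clean.
if command -v tput >/dev/null 2>&1 && [ -n "${TERM:-}" ]; then
  cb="$(tput bold)"   # start bold
  cc="$(tput sgr0)"   # reset all attributes
else
  cb=""
  cc=""
fi

printf '%s%s%s\n' "$cb" "WARNING: disk almost full" "$cc"
```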

Thanks a lot @tangjiangling for kicking this off and seeing it to completion.

My proposed solution involves allowing pass-through properties for the RecordWriter classes. I'll try to take a stab at this over the next week - unless someone here has an idea...

I have a patch ready for this - will be polishing and submitting over the weekend.

You'll need to fork and pass the following config when creating the Parquet writer [here](https://github.com/confluentinc/kafka-connect-storage-cloud/blob/master/kafka-connect-s3/src/main/java/io/confluent/connect/s3/format/parquet/ParquetRecordWriterProvider.java#L78): `parquet.avro.write-old-list-structure=false`.
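For context, that key is a standard parquet-avro writer setting (exposed in code as `AvroWriteSupport.WRITE_OLD_LIST_STRUCTURE`); as a config fragment it reads:

```properties
# parquet-avro writer setting: write list columns using the modern
# three-level list structure instead of the legacy two-level one
parquet.avro.write-old-list-structure=false
```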

I think the Confluent/Kafka Connect version differs from the version you built the connector from, so there are classpath issues. Try making sure that you are using Confluent 6.1 (since that's what...