Wilber Hernandez
> Hi @weibingo, no plan currently but this would be a welcome PR. In the meantime, you would have to manually de/serialize the output of the raw `read` and `write`...
Any plans to add a timeout param to `hive.connect`?
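One possible stopgap in the meantime (a sketch, not part of the PyHive API): since the Thrift transport ultimately opens standard Python sockets, setting a process-wide default socket timeout before connecting makes a hung connection attempt fail fast. Host and port below are placeholders.

```python
import socket
from pyhive import hive

# hive.connect itself exposes no timeout parameter, but the underlying
# Thrift transport opens ordinary Python sockets, so a process-wide
# default timeout applies to them. Caveat: this affects every socket
# created afterwards in the process, not just the Hive connection.
socket.setdefaulttimeout(30)  # seconds; tune to taste

conn = hive.connect(host='localhost', port=10000)  # placeholder host/port
cursor = conn.cursor()
cursor.execute('SELECT 1')
print(cursor.fetchall())
conn.close()
```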
Any fix or workaround for inserting a batch of data into Hive, other than uploading to an HDFS or S3 bucket, or monkey-patching pyhive.hive.Cursor's `_fetch_more()` method?
Here's a workaround - chunking the CSV file with pandas' `read_csv(chunksize=...)` and inserting one chunk at a time (see the sketch below).
https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-chunking
https://medium.com/towards-artificial-intelligence/efficient-pandas-using-chunksize-for-large-data-sets-c66bf3037f93
https://github.com/dropbox/PyHive/issues/55
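A minimal sketch of that chunked-insert workaround, assuming Hive 0.14+ (which supports multi-row `INSERT ... VALUES`). The table name `my_table`, the file `data.csv`, the connection details, and the `sql_literal` helper are all assumptions for illustration; the quoting is deliberately naive.

```python
import pandas as pd
from pyhive import hive


def sql_literal(v):
    """Very naive SQL quoting for the sketch; real code needs robust escaping."""
    if isinstance(v, str):
        return "'{}'".format(v.replace("'", "''"))
    return str(v)


conn = hive.connect(host='localhost', port=10000)  # placeholder connection details
cursor = conn.cursor()

# Read the CSV in fixed-size pieces so the whole file never sits in memory,
# then issue one multi-row INSERT per chunk.
for chunk in pd.read_csv('data.csv', chunksize=1000):  # hypothetical file
    values = ', '.join(
        '({})'.format(', '.join(sql_literal(v) for v in row))
        for row in chunk.itertuples(index=False, name=None)
    )
    cursor.execute('INSERT INTO my_table VALUES {}'.format(values))  # hypothetical table

conn.close()
```

The `chunksize` keeps memory usage flat regardless of file size, and each chunk becomes one round trip to HiveServer2 instead of one per row.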