Zaubeerer
PS: I am new to Snowflake and to the Python connector, so I am looking forward to learning from you and improving the way we can use the Snowflake connector together. :)
PPS: I wanted to correct the title for clarification, but it seems like the GitHub bot does not allow me to do so...
Hey @sfc-gh-mkeller, if you need any further information, I am happy to provide it and contribute. I would like to solve this problem quickly, before ingesting large amounts of data :)
Hi @sfc-gh-mkeller, any updates on how soon the long-term fix will be available? I will be working with `snowflake-connector-python` in the next few days and wonder whether or not I...
Hey @sfc-gh-mkeller, after being able to work around the issue, we are finally coming back to it. 🙂 So, first question: is this issue still open? If so, I just tried...
OK, thanks for the instant feedback! I assume the issue is not resolved yet? Then I will try it with just `pip install .`. :)
So here is a quick summary of the small performance test so far, for a table of 100,000 rows that takes 15 s to read from our client's API into a...
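For context, a minimal sketch of how such a comparison can be timed; the connection URL, table name, and DataFrame below are placeholders, not the actual test setup:

```python
import time

import pandas as pd
from snowflake.connector.pandas_tools import pd_writer
from sqlalchemy import create_engine

# Placeholder credentials; requires snowflake-sqlalchemy to be installed.
engine = create_engine("snowflake://user:password@account/database/schema")

# Dummy table of 100,000 rows standing in for the data read from the API.
df = pd.DataFrame({
    "ID": range(100_000),
    "TS": pd.date_range("2021-01-01", periods=100_000, freq="min"),
})

start = time.perf_counter()
df.to_sql("perf_test", engine, index=False, if_exists="replace", method=pd_writer)
print(f"with pd_writer:    {time.perf_counter() - start:.1f}s")

start = time.perf_counter()
df.to_sql("perf_test", engine, index=False, if_exists="replace")  # "no pd_writer"
print(f"without pd_writer: {time.perf_counter() - start:.1f}s")
```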
If we could reproduce the performance of **no pd_writer**, that would indeed be a sufficient workaround for us. How did you implement, or rather call, **no pd_writer - no...
> @Zaubeerer we managed to find a temporary workaround by modifying the `write_pandas` method to write the temp files to CSV, not using parquet at all (based on the suggestion...
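For anyone needing the same escape hatch, here is a rough sketch of that kind of CSV-based path. The helper name, stage usage, and file format options are my own illustration, not the patched `write_pandas` from the comment above:

```python
import os
import tempfile

import pandas as pd
import snowflake.connector


def write_pandas_csv(conn: "snowflake.connector.SnowflakeConnection",
                     df: pd.DataFrame, table_name: str) -> None:
    """Bulk-load a DataFrame through a temporary CSV file instead of parquet."""
    tmp_dir = tempfile.mkdtemp()
    csv_path = os.path.join(tmp_dir, "chunk.csv")
    # No header row: COPY INTO maps CSV columns to table columns by position.
    df.to_csv(csv_path, index=False, header=False)
    with conn.cursor() as cur:
        # Stage the file in the table's internal stage, then bulk-load it.
        cur.execute(f"PUT file://{csv_path} @%{table_name} OVERWRITE = TRUE")
        cur.execute(
            f"COPY INTO {table_name} "
            "FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"')"
        )
```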
> I ran into the same issue. However, for us, it doesn't make too much of a difference and so we simply passed all dates and timestamps as strings. If...
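A minimal sketch of that string-based approach, assuming the affected columns are pandas datetime columns (the column name is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "EVENT_TS": pd.to_datetime(["2021-01-01 12:00:00", "2021-01-02 08:30:00"]),
})

# Render datetimes as ISO-formatted strings so the connector treats them as
# plain text; Snowflake can cast them back on load (e.g. via TO_TIMESTAMP).
for col in df.select_dtypes(include=["datetime64[ns]"]).columns:
    df[col] = df[col].dt.strftime("%Y-%m-%d %H:%M:%S")
```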