
Support pyarrow

Open dhirschfeld opened this issue 4 years ago • 14 comments

pyarrow is a high-performance (zero-copy) serialisation library designed for multi-language interop, with very fast conversion to/from pandas DataFrames.

It would be great if psycopg3 could also support returning pyarrow.Table objects as well as Python tuples.
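
For illustration only, the requested usage might look something like the sketch below; the psycopg3 module name and fetch_arrow_table are hypothetical, not existing psycopg API:

```python
# Hypothetical sketch of the requested feature. Neither the psycopg3 import
# nor fetch_arrow_table() is real API; this only illustrates the ask.
import psycopg3

with psycopg3.connect("dbname=test") as conn:   # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM my_table")   # hypothetical table
        table = cur.fetch_arrow_table()         # would return a pyarrow.Table
        df = table.to_pandas()                  # fast columnar conversion
```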

dhirschfeld avatar Mar 09 '20 03:03 dhirschfeld

turbodbc is an existing library which allows returning pyarrow.Table objects; however, it is built on top of the ODBC interface, and it particularly calls out the postgres implementation for not being performant:

[screenshot: turbodbc documentation on Postgres ODBC performance]

In my own testing I've seen nearly an order of magnitude performance improvement when reading large datasets from SQL Server / Oracle into pandas DataFrames and I'd love to see the same benefits from a postgres native driver.

dhirschfeld avatar Mar 09 '20 03:03 dhirschfeld

Hello, thank you for the use-case input.

I guess let's start with the pandas dataframes first. I'm no pandas expert: what do people want to store? A binary blob, or does the data frame become a table?

If you can provide some docs link that could be helpful.

In general, adding specific adapters or cursor types should be supported (as it already is for psycopg2). I think the biggest win here would be to pass binary data to/from the db, but I'd first like to get an idea of what that data looks like.

dvarrazzo avatar Mar 09 '20 07:03 dvarrazzo

what do people want to store? A binary blob, or does the data frame become a table?

The DataFrame (pyarrow.Table) would be inserted as rows into a postgres table (with a compatible schema) with the benefit being improved serialisation performance.

For me, a huge drawcard is the performance improvement when reading data from a database as you can construct the arrow.Table in C/C++ and pass it directly to Python without having to construct PyObjects for every value in the result set.

e.g. with turbodbc

[screenshot: turbodbc example]
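
The turbodbc side of that example looks roughly like the sketch below (assuming an ODBC data source named mydb and a table my_table):

```python
import turbodbc

connection = turbodbc.connect(dsn="mydb")   # assumed ODBC data source
cursor = connection.cursor()
cursor.execute("SELECT * FROM my_table")    # assumed table
table = cursor.fetchallarrow()              # whole result set as a pyarrow.Table
df = table.to_pandas()                      # cheap columnar conversion to pandas
```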

I guess ideally an arrow.Table would be constructed in C/C++ from the binary output of libpq and passed to Python. I'm not at all sure how postgres works internally though, so it might be necessary to re-implement some functionality from libpq? :thinking:

Some more background on the performance improvements:

  • https://uwekorn.com/2019/11/17/fast-jdbc-access-in-python-using-pyarrow-jvm.html
  • https://www.snowflake.com/blog/fetching-query-results-from-snowflake-just-got-a-lot-faster-with-apache-arrow/

Documentation on the arrow columnar format and its C-data interface:

  • https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc
  • https://github.com/apache/arrow/blob/master/docs/source/format/CDataInterface.rst

dhirschfeld avatar Mar 10 '20 12:03 dhirschfeld

As mentioned, I'm just a layperson (with an interest in database query performance!)

You'd get more informed opinions/advice if @wesm or @xhochy were inclined to comment...

dhirschfeld avatar Mar 10 '20 12:03 dhirschfeld

I have working code that uses COPY BINARY with psycopg2 to move data between Postgres and Arrow/Pandas. My plan is to open-source this in the next 1-3 months. Maybe you can get better performance by directly integrating this with libpq, but I wanted to have a working implementation which doesn't have a binary dependency on psycopg2.
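
In the meantime, here is a simplified sketch of the COPY-based pattern; it uses text CSV rather than COPY BINARY (parsing the binary COPY format is the faster but much more involved variant), and the connection parameters and table name are assumptions:

```python
import io

import psycopg2
import pyarrow.csv as pacsv

conn = psycopg2.connect("dbname=test")  # assumed connection parameters
buf = io.BytesIO()
with conn.cursor() as cur:
    # Stream the whole result set out in one server-side COPY.
    cur.copy_expert(
        "COPY (SELECT * FROM my_table) TO STDOUT WITH (FORMAT csv, HEADER)", buf
    )
buf.seek(0)
table = pacsv.read_csv(buf)  # parse straight into a pyarrow.Table
df = table.to_pandas()
```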

xhochy avatar Mar 10 '20 12:03 xhochy

That's very exciting news @xhochy! If @dvarrazzo does decide to support pyarrow in psycopg3, it should hopefully provide some inspiration as to what is required.

dhirschfeld avatar Mar 10 '20 12:03 dhirschfeld

Would love to see this! I did some experimentation over a year ago (https://github.com/mangecoeur/pgarrow - no guarantee that anything still works today, plus I'm rubbish at C/Cython) and there's at least a 3x performance improvement to be had. I think there is value in building this into psycopg3, since bulk load/read is a big use-case these days.

There is also https://github.com/heterodb/pg2arrow (in C), which converts between arrow and libpq (it was already proposed in this issue: https://issues.apache.org/jira/browse/ARROW-1106?src=confmacro). Perhaps that could be adapted into psycopg with a nice Python API?

mangecoeur avatar May 10 '20 17:05 mangecoeur

FYI: the Arrow C interface is a perfect match for a libpq bridge that doesn't require taking on any C++ library dependencies

http://arrow.apache.org/blog/2020/05/03/introducing-arrow-c-data-interface/

Some other database engines are starting to look at this as a way to pass simple datasets to pyarrow at C call sites
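
To make the idea concrete, here is a minimal round-trip through the C data interface driven from Python, using pyarrow's documented (underscore-prefixed) export/import helpers; the import step reconstructs the array without copying any data:

```python
import pyarrow as pa
from pyarrow.cffi import ffi  # cffi definitions of the ArrowArray/ArrowSchema structs

# Allocate the two C structs defined by the Arrow C data interface.
c_schema = ffi.new("struct ArrowSchema*")
c_array = ffi.new("struct ArrowArray*")
ptr_schema = int(ffi.cast("uintptr_t", c_schema))
ptr_array = int(ffi.cast("uintptr_t", c_array))

arr = pa.array([1, 2, 3])
# Export: fills the structs with pointers into the existing Arrow buffers.
arr._export_to_c(ptr_array, ptr_schema)
# Import: rebuilds an Array from those pointers, zero-copy.
roundtripped = pa.Array._import_from_c(ptr_array, ptr_schema)
assert roundtripped.equals(arr)
```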

wesm avatar May 10 '20 20:05 wesm

@xhochy did you end up open sourcing that code?

kylebarron avatar Aug 30 '20 20:08 kylebarron

I have working code that uses COPY BINARY with psycopg2 to move data between Postgres and Arrow/Pandas. My plan is to open-source this in the next 1-3 months. Maybe you can get better performance by directly integrating this with libpq, but I wanted to have a working implementation which doesn't have a binary dependency on psycopg2.

Hi @xhochy, I would be interested in your code as well. Any news on that?

bopindux avatar Sep 24 '21 10:09 bopindux

Current methods to get a pandas/polars/arrow dataframe from a PostgreSQL query result:

  1. conventional method: pandas read_sql/read_sql_query, or building a pandas dataframe from a dict constructed from the psycopg query result
  2. connectorx (see the example after this list)
  3. duckdb via postgres_scanner
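
A minimal example of method 2, assuming a reachable Postgres instance (the connection URI and table are placeholders):

```python
import connectorx as cx

table = cx.read_sql(
    "postgresql://user:pass@localhost:5432/db",  # assumed connection URI
    "SELECT * FROM my_table",                    # assumed table
    return_type="arrow",                         # or "pandas" / "polars"
)
```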

Problems with the typical conventional approach:

  1. row-oriented results which need to be stored into a column-oriented memory store
  2. conversion from C bytes to Python objects and back for every column in every row

What psycopg can greatly help with in this case is skipping the C-value-to-Python-object conversion. The basic approach (sketched after this list) would be:

  1. create a static buffer (by first querying the result length) or an expandable buffer for each column
  2. for each column value in each row, write the value into the buffer
  3. cast the buffers into a pandas dataframe/arrow table data structure.
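
A schematic pure-Python illustration of those three steps; a real implementation would fill C buffers inside the driver so that no per-value Python objects are ever created:

```python
import pyarrow as pa

def rows_to_arrow(cursor):
    """Build a pyarrow.Table column-wise from an executed DBAPI cursor."""
    names = [d.name for d in cursor.description]
    columns = [[] for _ in names]        # step 1: one growable buffer per column
    for row in cursor:                   # step 2: write each value column-wise
        for col, value in zip(columns, row):
            col.append(value)
    # step 3: cast the buffers into an arrow table
    return pa.table(dict(zip(names, columns)))
```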

char101 avatar Dec 14 '22 07:12 char101

FYI: I'm working on a Parquet -> SQLAlchemy tool that also uses pyarrow to batch-insert data with COPY FROM: https://github.com/offscale/cdd-python-gae
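
That general pattern (Arrow table in memory, COPY FROM over the wire) can be sketched like this; the linked tool's actual implementation may differ, and the CSV wire format, connection parameters, and table name are assumptions:

```python
import io

import psycopg2
import pyarrow as pa
import pyarrow.csv as pacsv

def copy_arrow_table(conn, table: pa.Table, target: str) -> None:
    """Bulk-insert an Arrow table via COPY FROM, using CSV as the wire format."""
    buf = io.BytesIO()
    pacsv.write_csv(table, buf)  # header row plus data rows
    buf.seek(0)
    with conn.cursor() as cur:
        cur.copy_expert(f"COPY {target} FROM STDIN WITH (FORMAT csv, HEADER)", buf)
    conn.commit()

conn = psycopg2.connect("dbname=test")  # assumed connection parameters
copy_arrow_table(conn, pa.table({"id": [1, 2], "value": ["a", "b"]}), "my_table")
```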

SamuelMarks avatar Feb 16 '23 06:02 SamuelMarks

The Postgres ADBC driver might also be useful in this space.
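
For example, the ADBC DBAPI layer can hand back Arrow directly (the connection URI and table below are placeholders):

```python
import adbc_driver_postgresql.dbapi as pgdbapi

with pgdbapi.connect("postgresql://localhost/test") as conn:  # assumed URI
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM my_table")  # assumed table
        table = cur.fetch_arrow_table()        # pyarrow.Table, no per-row tuples
```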

judahrand avatar Sep 05 '23 20:09 judahrand

Looks nice. I'll wait for it to have JSON, timezone-aware datetime, and array support before switching though. In the meantime I'm using my own repo, which under the hood uses PostgreSQL's binary protocol by way of https://github.com/altaurog/pgcopy
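
For reference, basic pgcopy usage looks like this (the connection parameters, table, and columns are assumed):

```python
import psycopg2
from pgcopy import CopyManager

conn = psycopg2.connect("dbname=test")  # assumed connection parameters
mgr = CopyManager(conn, "my_table", ("id", "value"))
mgr.copy([(1, "a"), (2, "b")])          # rows go over the binary COPY protocol
conn.commit()
```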

SamuelMarks avatar Sep 06 '23 02:09 SamuelMarks