try to replace parquet with parquet2?
Description
Replace parquet with parquet2.
The main differentiators in comparison with parquet are:
- it uses `#![forbid(unsafe_code)]`
- delegates parallelism downstream
- decouples reading (IO intensive) from computing (CPU intensive); see the sketch after this list
- it is faster (10-20x when reading to arrow format)
- supports `async` read and write
- it is integration-tested against pyarrow and (py)spark 3
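To make the IO/CPU decoupling point concrete, here is a minimal sketch of the usage pattern it enables. This is not the parquet2 API: `decode_to_batches` is a hypothetical placeholder for the CPU-heavy decode step, and tokio is assumed as the async runtime; the point is only that the caller decides where the blocking decode work runs.

```rust
// Hypothetical placeholder for the CPU-heavy decode step (page
// decompression + deserialization into in-memory batches).
fn decode_to_batches(bytes: &[u8]) -> Vec<Vec<u8>> {
    // ...decode pages here; returning raw chunks just for illustration...
    vec![bytes.to_vec()]
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // IO-intensive part: fetch the raw file bytes asynchronously
    // (a local read here, but an object-store GET works the same way).
    let bytes = tokio::fs::read("data.parquet").await?;

    // CPU-intensive part: run the decode on a blocking/CPU pool so it
    // does not stall the async runtime. How this is parallelised is up
    // to the caller ("delegates parallelism downstream"), not the library.
    let batches = tokio::task::spawn_blocking(move || decode_to_batches(&bytes))
        .await
        .expect("decode task panicked");

    println!("decoded {} chunk(s)", batches.len());
    Ok(())
}
```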
Proposal
Additional context
One more benefit: switching to parquet2 can decouple our parquet dependency from datafusion, so we can update them separately.
But migrating this kind of dependency is a lot of work...
One more reason to migrate to parquet2: `ArrowWriter` has no method to retrieve the inner writer, but parquet2 does have `into_inner`.
Report to upstream:
- https://github.com/apache/arrow-rs/issues/2491
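As an illustration of why `into_inner` matters, here is a minimal sketch of the pattern we want when building an SST file in memory and then handing the bytes off (for example, to upload them to object storage). The `SstWriter` type below is purely hypothetical and is not the parquet or parquet2 API; only the shape of `into_inner` is the point.

```rust
/// Hypothetical writer wrapper used only to illustrate the pattern:
/// the real types would be parquet's ArrowWriter or parquet2's
/// FileWriter; the names and fields below are not their actual APIs.
struct SstWriter<W> {
    inner: W,
}

impl<W: std::io::Write> SstWriter<W> {
    fn new(inner: W) -> Self {
        Self { inner }
    }

    fn write_row_group(&mut self, bytes: &[u8]) -> std::io::Result<()> {
        // Stand-in for encoding and flushing a row group.
        self.inner.write_all(bytes)
    }

    /// The method we are missing on ArrowWriter: finish the file and
    /// hand the underlying writer back to the caller.
    fn into_inner(self) -> W {
        self.inner
    }
}

fn main() -> std::io::Result<()> {
    // Encode into an in-memory buffer...
    let mut writer = SstWriter::new(Vec::new());
    writer.write_row_group(b"encoded pages would go here")?;

    // ...then take the buffer back, e.g. to hand it to object storage.
    let buffer: Vec<u8> = writer.into_inner();
    println!("sst file is {} bytes", buffer.len());
    Ok(())
}
```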
👏 parquet maintainer here. FWIW, there is little read-performance difference these days that I have been able to reproduce, there is mature support for decoupled IO (async), we integration-test against pyarrow, and recent work by myself and others to add page- and row-level filter pushdown should dramatically improve the performance of filtered scans.
There are definitely areas to improve, most notably the writer hasn't had the same degree of attention, but by working together we can pull the whole ecosystem along 😀
Anyway enough from me, just thought I'd provide an alternative narrative to the parquet2/arrow2 FUD...
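To show what the row-level filter pushdown mentioned above looks like from the reader side, here is a rough sketch against the parquet crate's `RowFilter` / `ArrowPredicateFn` API. The file name, leaf column index, and the `Int64 >= 100` predicate are illustrative assumptions, and exact signatures vary between arrow-rs versions.

```rust
use std::fs::File;

use arrow::array::{Array, BooleanArray, Int64Array};
use arrow::error::ArrowError;
use parquet::arrow::arrow_reader::{ArrowPredicateFn, ParquetRecordBatchReaderBuilder, RowFilter};
use parquet::arrow::ProjectionMask;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("data.parquet")?;
    let builder = ParquetRecordBatchReaderBuilder::try_new(file)?;

    // Only decode the column(s) the predicate needs (leaf index 0 here).
    let predicate_mask =
        ProjectionMask::leaves(builder.metadata().file_metadata().schema_descr(), [0]);

    // Keep rows whose first column is >= 100; rows failing the predicate
    // can be skipped before the remaining columns are decoded.
    let predicate = ArrowPredicateFn::new(predicate_mask, |batch| {
        let col = batch
            .column(0)
            .as_any()
            .downcast_ref::<Int64Array>()
            .ok_or_else(|| ArrowError::CastError("expected Int64 column".to_string()))?;
        Ok(BooleanArray::from_iter(
            col.iter().map(|v| v.map(|x| x >= 100)),
        ))
    });

    let reader = builder
        .with_row_filter(RowFilter::new(vec![Box::new(predicate)]))
        .build()?;

    for batch in reader {
        println!("filtered batch with {} rows", batch?.num_rows());
    }
    Ok(())
}
```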
@tustvold Awesome work. It seems we need to re-evaluate the performance of parquet.
> but by working together we can pull the whole ecosystem along 😀
We would love to share what we learn while building CeresDB, and keep communicating with the upstream ecosystem to make it better. 🍺