
Reading from raw bytes?

calebwin opened this issue 3 years ago · 17 comments

I'm downloading a Parquet file over the network using AWSS3.jl. Can I parse this into a DataFrame using Parquet.jl?

calebwin avatar Mar 27 '21 20:03 calebwin

@tanmaykm, I think this would be helpful to have; in Arrow.jl and CSV.jl, we ultimately always do all the file parsing/processing on a Vector{UInt8}, which makes it really convenient for cases like the one the OP mentioned.

quinnj avatar Mar 30 '21 17:03 quinnj
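
For reference, the byte-vector pattern quinnj describes already works in CSV.jl and Arrow.jl; a minimal sketch (the bucket and object names are placeholders):

using AWSS3, CSV, DataFrames

bytes = read(S3Path("s3://my-bucket/data.csv"))  # Vector{UInt8} fetched from S3
df = CSV.File(bytes) |> DataFrame                # CSV.jl parses the bytes directly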

Having the processing functions work on Vector{UInt8} will be useful for files that fit into memory. This would also work for files on disk that can be memory mapped.

But for files being loaded from blob stores like S3, will a File type abstraction on them be better? One that can fetch chunks from byte offsets as and when needed?

tanmaykm avatar Mar 31 '21 12:03 tanmaykm

@tanmaykm I don't have first-hand knowledge on this, but it seems like a File abstraction could easily incur network overhead on each fetch of bytes. Wouldn't that make it undesirable?

calebwin avatar Mar 31 '21 16:03 calebwin

Yes, the reads need to be buffered by the abstraction, of course. And most of the data access in this package is actually for reasonably large chunks of data, with byte-level access done from internal buffers, which I thought would suit this approach.

tanmaykm avatar Mar 31 '21 16:03 tanmaykm

I see. Looks like AWSS3.jl supports reading byte ranges from files in S3. But if this was behind an implementation of File (is there even such a thing as an AbstractFile?), does Parquet.jl support reading from a File or does it have to be a filename for some reason?

calebwin avatar Mar 31 '21 17:03 calebwin
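
As an illustration of why ranged reads fit Parquet well: a Parquet file ends with a 4-byte little-endian footer length followed by the magic bytes "PAR1", so the metadata can be located without downloading the whole object. A sketch, assuming s3_get/s3_get_meta accept the byte_range keyword and Content-Length header as in the AWSS3.jl docs, with placeholder bucket/key names:

using AWSS3

len = parse(Int, s3_get_meta("my-bucket", "data/file.parquet")["Content-Length"])
tail = Vector{UInt8}(s3_get("my-bucket", "data/file.parquet"; byte_range=(len - 7):len))
@assert String(tail[5:8]) == "PAR1"            # Parquet magic at the end of the file
footer_len = reinterpret(Int32, tail[1:4])[1]  # length of the footer metadata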

The filepath is not used apart from initial opening of file, and for filtering partitioned datasets. Those may work too with minor changes if we use URLs instead.

I have not come across AbstractFile. We should have one maybe, and we probably only need methods for filesize, seek and reading a range of bytes implemented for S3 access.

tanmaykm avatar Mar 31 '21 18:03 tanmaykm
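
A rough sketch of the minimal interface described above (filesize, seek, and buffered ranged reads). All names here are hypothetical, not Parquet.jl API; the s3_get byte_range keyword and s3_get_meta headers are assumptions from the AWSS3.jl docs:

using AWSS3

mutable struct S3RangeFile
    bucket::String
    key::String
    size::Int
    pos::Int               # 1-based current position
    bufstart::Int          # 1-based offset of first buffered byte (0 = empty)
    buf::Vector{UInt8}
end

function S3RangeFile(bucket, key)
    sz = parse(Int, s3_get_meta(bucket, key)["Content-Length"])
    return S3RangeFile(bucket, key, sz, 1, 0, UInt8[])
end

Base.filesize(f::S3RangeFile) = f.size
Base.seek(f::S3RangeFile, n::Integer) = (f.pos = n + 1; f)  # n is 0-based, like Base.seek

# Read `len` bytes from the current position, fetching at least `chunk`
# bytes at a time so repeated small reads don't each hit the network.
function readrange!(f::S3RangeFile, len::Integer; chunk=1 << 20)
    stop = min(f.pos + len - 1, f.size)
    if f.bufstart == 0 || f.pos < f.bufstart || stop > f.bufstart + length(f.buf) - 1
        fetchstop = min(f.pos + max(len, chunk) - 1, f.size)
        f.buf = Vector{UInt8}(s3_get(f.bucket, f.key; byte_range=f.pos:fetchstop))
        f.bufstart = f.pos
    end
    a = f.pos - f.bufstart + 1
    out = f.buf[a:a + (stop - f.pos)]
    f.pos = stop + 1
    return out
end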

> The filepath is not used apart from initial opening of file, and for filtering partitioned datasets. Those may work too with minor changes if we use URLs instead.

Got it

> I have not come across AbstractFile. We should have one maybe, and we probably only need methods for filesize, seek and reading a range of bytes implemented for S3 access.

I feel like the Julia I/O ecosystem is really great thanks to hard work by you and others. But there really needs to be a better unifying abstraction for reading datasets from files. I'm working on something like Dask for Julia and keenly feeling the need for something similar to Python's fsspec in Julia. FilePathsBase.jl and FileIO.jl are great but not sufficient for multi-file datasets.

calebwin avatar Mar 31 '21 18:03 calebwin

@tanmaykm @quinnj I unfortunately don't have the time to develop this at the moment - do you think this might be a valid case to just use S3FS via FUSE?

calebwin avatar Apr 05 '21 06:04 calebwin

Yes, I think S3FS via FUSE may work well in this case.

tanmaykm avatar Apr 05 '21 07:04 tanmaykm

@tanmaykm Okay, my only concern is - do you know whether S3FS will download files to disk when it isn't using the cache? I would hope that it would just download ranges of bytes into memory...

calebwin avatar Apr 05 '21 12:04 calebwin

It does seem that way from the s3fs documentation, and I was not able to see files being written when I tried it. But it claims that using the cache may make it faster, and it has an option to limit the cache size.

tanmaykm avatar Apr 12 '21 09:04 tanmaykm
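
For anyone trying the FUSE route, a rough sketch (the bucket name, mount point, cache directory, and s3fs options are placeholders; check the s3fs docs for your system):

run(`s3fs my-bucket /mnt/s3 -o use_cache=/tmp/s3fs-cache`)  # mount the bucket

using Parquet
pf = Parquet.File("/mnt/s3/some/key/data.parquet")          # read through the mount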

I am having the same issue: reading a Parquet file on S3 and hoping to benefit from reading only a specific column. I would think this is a very popular use case.

cwiese avatar Apr 12 '21 16:04 cwiese

using AWSS3, Arrow

root = "s3://$(bucket)/$(path)/$(run_date)"
fxpath = "$(root)/fx_states.parquet"
p = S3Path(fxpath, config=config)
f = read(p)                 # Vector{UInt8} with the object's contents
ar = Arrow.Stream(f, pos=1)

this gets me "ERROR: ArgumentError: no arrow ipc messages found in provided input"

I figured I'd go straight to Arrow, and I can create a DataFrame from it if needed. Perhaps @quinnj can correct me here?

cwiese avatar Apr 12 '21 16:04 cwiese

That definitely won't work - Arrow.Stream is expecting Arrow data, but you provided it Parquet data. Arrow and Parquet are different formats. And note that the format of Arrow data is the same (or almost the same) regardless of whether it is on disk, on the network, or in memory.

calebwin avatar Apr 12 '21 16:04 calebwin

Right! But I do not see a way to construct a Parquet.File with an S3Path. If I want to read Parquet files on AWS S3, it seems I will need to use Python for now. Then the challenge is how to avoid copying all this data multiple times while getting it into a Julia DataFrame.

cwiese avatar Apr 12 '21 21:04 cwiese
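
In the meantime, one workaround is to spill the downloaded bytes to a temporary file and hand Parquet.jl a real path; a sketch, assuming the object fits in memory and that read_parquet is available in your Parquet.jl version:

using AWSS3, Parquet, DataFrames

bytes = read(S3Path(fxpath, config=config))  # fxpath/config as in the snippet above
tmp = tempname() * ".parquet"
write(tmp, bytes)                            # spill to disk for Parquet.jl
df = DataFrame(read_parquet(tmp))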

Looking at the Python equivalent:

pyarrow.parquet.read_table accepts a "path... or file-like objects": https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html

which in turn enables

pandas.read_parquet(), which also accepts a "path... or file-like objects": https://pandas.pydata.org/docs/reference/api/pandas.read_parquet.html

layne-sadler avatar Aug 28 '21 14:08 layne-sadler