eccodes-python
Using memory-based file-like object (BytesIO)
Is there a way to load a file-like object from memory? I'm looking for a way to work around local storage limitations and improve performance by using memory for input/output operations. The following example using a BytesIO object returns an error from the library.
This is with Python 3.6 and ecCodes 2.17.0.
Thanks!
import io
from eccodes import codes_grib_new_from_file

f = open('file.grib', 'rb')
stream = io.BytesIO(f.read())
codes_grib_new_from_file(stream)
---------------------------------------------------------------------------
UnsupportedOperation                      Traceback (most recent call last)
<ipython-input-25-b3e3d2be12aa> in <module>
----> 1 codes_grib_new_from_file(stream)

~/.local/lib/python3.6/site-packages/gribapi/gribapi.py in grib_new_from_file(fileobj, headers_only)
    403     # err, h = err_last(lib.grib_new_from_file)(ffi.NULL, fileobj, headers_only)
    404     err, h = err_last(lib.codes_handle_new_from_file)(
--> 405         ffi.NULL, fileobj, CODES_PRODUCT_GRIB
    406     )
    407     if err:

~/.local/lib/python3.6/site-packages/gribapi/gribapi.py in wrapper(*args)
    150     err = ffi.new("int *")
    151     args += (err,)
--> 152     retval = func(*args)
    153     return err[0], retval
    154

UnsupportedOperation: fileno
Dear Christopher, Please have a look here: https://confluence.ecmwf.int/display/UDOC/How+do+I+decode+messages+from+a+byte+stream+-+ecCodes+FAQ
I hope this helps :)
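For context, the approach in that FAQ comes down to creating a handle directly from the bytes held in memory rather than from a file object. A minimal sketch along those lines, assuming the buffer starts with a complete GRIB message (the file name is only an example; codes_new_from_message, codes_get and codes_release are the relevant eccodes calls):

from eccodes import codes_new_from_message, codes_get, codes_release

# Bytes held in memory, e.g. read from disk once or received over the network
with open('file.grib', 'rb') as f:
    data = f.read()

# Create a handle directly from the in-memory bytes; no file object is needed
h = codes_new_from_message(data)
print(codes_get(h, 'shortName'), codes_get(h, 'dataDate'))
codes_release(h)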
Hi shahramn,
This is perfect, thank you for passing that along! I have two follow-up questions as a result:

- For some of the data I work with, I fetch the files from the server with multi-part HTTP range requests. Per the multipart spec, each range in the response is wrapped in a text boundary, and this extra text makes the method above fail because the data doesn't start with 'GRIB'. When loading these files from disk, ecCodes successfully skips past those text blobs. Aside from writing a separate function to clean out the extraneous pieces (a rough sketch of that kind of cleanup follows below), is there a way to accomplish this with the existing library?
- I also use the indexing features quite heavily to pick selected messages out of a file. From the documentation it appears that I would have to loop through each of the GRIB messages in the buffer and keep or discard them as needed. Is my understanding correct, or is there a way to integrate indexing with this approach?
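A rough sketch of the kind of cleanup step mentioned in the first question, assuming each message in the buffer can be located by scanning for the ASCII marker b'GRIB' and that its size can then be read back from the handle via the totalLength key (the helper name iter_grib_messages is made up for illustration):

from eccodes import codes_new_from_message, codes_get, codes_release

def iter_grib_messages(data):
    # Yield a handle for each GRIB message in the buffer, skipping any
    # non-GRIB text such as multipart range-response boundaries.
    offset = 0
    while True:
        start = data.find(b'GRIB', offset)
        if start < 0:
            return
        h = codes_new_from_message(data[start:])
        length = codes_get(h, 'totalLength')  # size of this message in bytes
        yield h
        offset = start + length

buffer = open('multipart_response.grib', 'rb').read()
for h in iter_grib_messages(buffer):
    print(codes_get(h, 'shortName'))
    codes_release(h)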
Thank you again for the response!
There is currently no support for the two cases you mentioned.
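Given that, the loop-and-discard approach described in the second question could look roughly like this, reusing the hypothetical iter_grib_messages helper sketched above and filtering on example keys (shortName and level) instead of a codes_index:

from eccodes import codes_get, codes_get_values, codes_release

wanted = {('t', 850), ('z', 500)}  # example (shortName, level) pairs to keep

fields = []
for h in iter_grib_messages(buffer):
    key = (codes_get(h, 'shortName'), codes_get(h, 'level'))
    if key in wanted:
        fields.append(codes_get_values(h))  # keep the decoded values
    codes_release(h)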