capnproto-java
Write many records into a file
Can we use the Java Cap'n Proto implementation to efficiently write millions of records into a large binary file and then read those records back?
To simplify, let's assume that only one struct is defined and that all the records we write are instances of that struct. Writes, and then reads, are sequential (streaming).
For efficiency, what we'd expect is the ability to reuse message builders and readers, or at least the byte buffers they use internally.
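For concreteness, here is a minimal sketch of the pattern in question: one framed message per record written with `Serialize.write`, read back one at a time with `Serialize.read`. The schema and generated class names are placeholders, assuming a hypothetical `records.capnp` with `struct Record { id @0 :UInt64; }` compiled with `$Java.outerClassname("Records")`; this version allocates a fresh `MessageBuilder` per record rather than reusing one.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

import org.capnproto.MessageBuilder;
import org.capnproto.MessageReader;
import org.capnproto.Serialize;

public class RecordFile {
    // Write `count` records, one framed Cap'n Proto message per record.
    public static void writeAll(String path, long count) throws java.io.IOException {
        try (FileOutputStream out = new FileOutputStream(path)) {
            FileChannel channel = out.getChannel();
            for (long i = 0; i < count; i++) {
                // A fresh MessageBuilder per record; reusing a builder's internal
                // buffers is not an advertised feature, so this doesn't rely on it.
                MessageBuilder message = new MessageBuilder();
                // `Records.Record` is the hypothetical generated class.
                Records.Record.Builder record = message.initRoot(Records.Record.factory);
                record.setId(i);
                Serialize.write(channel, message);
            }
        }
    }

    // Read the records back sequentially, one message at a time.
    public static void readAll(String path) throws java.io.IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            FileChannel channel = in.getChannel();
            while (channel.position() < channel.size()) {
                MessageReader message = Serialize.read(channel);
                Records.Record.Reader record = message.getRoot(Records.Record.factory);
                // ... use record.getId() ...
            }
        }
    }
}
```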
I have done this with the C++ Cap'n Proto SDK: write millions of records into a large binary file, then read them back using mmap.
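In Java, the closest analogue to the mmap approach is probably memory-mapping the file and reading messages straight from the mapped buffer. A rough sketch, reusing the hypothetical `Records.Record` schema from above, and assuming that `Serialize.read(ByteBuffer)` leaves the buffer positioned after the message it just read so that consecutive reads work (worth verifying against the version in use):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

import org.capnproto.MessageReader;
import org.capnproto.Serialize;

public class MappedRead {
    public static void readAll(String path) throws java.io.IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            FileChannel channel = file.getChannel();
            // Map the whole file read-only (files larger than 2 GiB would
            // need to be mapped in several regions).
            MappedByteBuffer mapped =
                channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            while (mapped.hasRemaining()) {
                // Assumption: this overload advances the buffer position to the
                // start of the next message, allowing a simple loop.
                MessageReader message = Serialize.read(mapped);
                Records.Record.Reader record = message.getRoot(Records.Record.factory);
                // ... use record ...
            }
        }
    }
}
```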
@dwrensha I have a similar use case with a bytes (Data) list field in a struct that is nested a couple of levels deep inside other structs. Each entry of the list can be a couple of MB, and there can potentially be thousands of records. Is there a way to read it as a stream instead of decoding the full message?
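A common workaround (not a built-in streaming API) is to split the large blob across many small messages and write them back-to-back with the same framing as above, so the reader only ever decodes one chunk at a time. A rough sketch, where the `chunks.capnp` schema (`struct Chunk { index @0 :UInt32; bytes @1 :Data; }`), the generated `Chunks.Chunk` names, and the `Data.Reader(byte[])` constructor are all assumptions to check against the actual generated code:

```java
import java.nio.channels.WritableByteChannel;
import java.util.Arrays;

import org.capnproto.Data;
import org.capnproto.MessageBuilder;
import org.capnproto.Serialize;

public class ChunkedBlob {
    static final int CHUNK_SIZE = 1 << 20; // 1 MiB per chunk (arbitrary choice)

    // Write one large blob as a sequence of small framed messages so a reader
    // can process it chunk by chunk instead of decoding one huge message.
    public static void writeChunks(WritableByteChannel channel, byte[] blob)
            throws java.io.IOException {
        int index = 0;
        for (int offset = 0; offset < blob.length; offset += CHUNK_SIZE, index++) {
            int end = Math.min(offset + CHUNK_SIZE, blob.length);
            MessageBuilder message = new MessageBuilder();
            Chunks.Chunk.Builder chunk = message.initRoot(Chunks.Chunk.factory);
            chunk.setIndex(index);
            // Copying the slice keeps the example simple; avoiding the copy
            // would mean writing directly into the message's Data field.
            chunk.setBytes(new Data.Reader(Arrays.copyOfRange(blob, offset, end)));
            Serialize.write(channel, message);
        }
    }
}
```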