Downloading big files can run out of memory
With large attachments, holding the file content in memory as a byte array can cause out-of-memory errors.
https://github.com/geonetwork/core-geonetwork/blob/b670c52486121a676241ad32d0230afcfe287a80/services/src/main/java/org/fao/geonet/api/records/attachments/AttachmentsApi.java#L251-L285
The file should be written directly to the servlet output stream instead of being loaded into memory.
I made some changes in the FME store implementation to stream the response (from FME to the client); maybe that can be used here too: https://github.com/geonetwork/core-geonetwork/pull/6688/files#diff-02af4158bb9acdbe0f63adb7a37d891a2d364afce173b894b5bdc302511fd930R276 — I'm not an expert on this.
Yes, that's exactly what is needed, although you don't have to call close()
on the response output stream: the servlet container will call it.
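For reference, here's a minimal sketch of that pattern. It's illustrative only: `serveAttachment` and the surrounding class are made-up names, not the actual AttachmentsApi code. The file is copied to the response in fixed-size chunks, so memory use stays constant regardless of file size.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import javax.servlet.http.HttpServletResponse;

public class AttachmentStreamingSketch {

    /**
     * Streams a resource to the client in 8 KB chunks instead of
     * materialising the whole file as a byte array.
     */
    public void serveAttachment(Path file, HttpServletResponse response) throws IOException {
        response.setContentType("application/octet-stream");
        response.setContentLengthLong(Files.size(file));

        // Only the input stream is closed here; the response output stream
        // is left open because the servlet container closes it itself.
        try (InputStream in = Files.newInputStream(file)) {
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        }
    }
}
```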
I also noticed that we create temporary files https://github.com/geonetwork/core-geonetwork/blob/b670c52486121a676241ad32d0230afcfe287a80/core/src/main/java/org/fao/geonet/api/records/attachments/S3Store.java#L263 — and with 10 GB files it can take a long time before anything is returned to the client...
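One way around the temporary file would be to pipe the S3 object content straight to the response. A rough sketch, assuming an AWS SDK v1 `AmazonS3` client (I haven't checked which client S3Store actually uses; `serveFromS3` is an illustrative name):

```java
import java.io.IOException;
import java.io.InputStream;

import javax.servlet.http.HttpServletResponse;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.S3Object;

public class S3StreamingSketch {

    private final AmazonS3 s3;

    public S3StreamingSketch(AmazonS3 s3) {
        this.s3 = s3;
    }

    /**
     * Streams an S3 object directly to the servlet response,
     * without writing it to a temporary file first.
     */
    public void serveFromS3(String bucket, String key, HttpServletResponse response) throws IOException {
        S3Object object = s3.getObject(bucket, key);
        response.setContentType(object.getObjectMetadata().getContentType());
        response.setContentLengthLong(object.getObjectMetadata().getContentLength());

        try (InputStream in = object.getObjectContent()) {
            // Java 9+; on Java 8, use a manual 8 KB copy loop instead.
            in.transferTo(response.getOutputStream());
        }
    }
}
```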
Also, the temporary resized image isn't deleted after it has been used.
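That could be handled with a try/finally so the cleanup always runs, even if streaming fails partway. Hedged sketch; `resizeImage` is a placeholder for the actual resizing logic, not a real GeoNetwork method:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TempFileCleanupSketch {

    /**
     * Resizes an image into a temporary file, streams it out,
     * and always deletes the temporary file afterwards.
     */
    public void serveResizedImage(Path original, OutputStream out, int size) throws IOException {
        Path resized = Files.createTempFile("resized-", ".png");
        try {
            resizeImage(original, resized, size);
            Files.copy(resized, out);
        } finally {
            // Guarantees the temporary file is removed even on failure.
            Files.deleteIfExists(resized);
        }
    }

    private void resizeImage(Path source, Path target, int size) throws IOException {
        // Placeholder for the real image-resizing logic.
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```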
Hello, I'm sorry, but this is very problematic with big files. The solution seems to have been found; would it be possible to fix it for the next release? Thank you very much!