memory consumption blowups in downstream projects using Lwt
Hi Lwt people,
I randomly ended up on the following cohttp issue, which points at some Lwt usage patterns that greatly increase memory consumption in unexpected ways. (Sounds like a bug to me, at least a usability bug.)
https://github.com/mirage/ocaml-cohttp/issues/545
The issue is unfortunately not so clear -- I don't understand if Lwt_io is the culprit or not. There is a pure-Lwt repro case at
https://github.com/mirage/ocaml-cohttp/issues/545#issuecomment-1163929378
The issue seems serious enough that some users are considering avoiding Lwt because of it, even though its exact cause is unclear. Probably worth investigating.
@rgrinberg In the cohttp issue discussion you mention:
The issue was successfully worked around in the new cohttp client and servers though.
Do you have pointers to the fix? Maybe a PR number?
I haven't really worked on the Lwt_bytes/Lwt_io part of Lwt. I'll try to have a look some time, but that will be a big context switch, so I can't do it right now.
As (hopefully) low-hanging fruit, I'll try to understand the patterns that can be problematic and update the documentation accordingly. An actual fix may come later.
Concerning Lwt_io, the "fix" is only in the new cohttp-server-lwt-unix, where we remove the dependency on conduit and move to Lwt_io.direct_access; see https://github.com/mirage/ocaml-cohttp/pull/898 and https://github.com/mirage/ocaml-cohttp/pull/907 (the package was introduced in https://github.com/mirage/ocaml-cohttp/pull/838).
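For context, Lwt_io.direct_access exposes the channel's internal buffer to the callback, so a parser can consume bytes in place instead of copying them into freshly allocated strings or Lwt_bytes. A rough sketch of that usage pattern (illustrative only, not the actual cohttp code; read_line_direct is a made-up helper):

let read_line_direct (ic : Lwt_io.input_channel) : string Lwt.t =
  Lwt_io.direct_access ic (fun da ->
    let buf = Buffer.create 64 in
    let rec loop () =
      if da.Lwt_io.da_ptr >= da.Lwt_io.da_max then
        (* Internal buffer exhausted: ask Lwt_io to refill it. *)
        Lwt.bind (da.Lwt_io.da_perform ()) (fun n ->
          if n = 0 then Lwt.return (Buffer.contents buf) (* end of input *)
          else loop ())
      else begin
        (* Consume one byte directly from the channel's own buffer. *)
        let c = Lwt_bytes.get da.Lwt_io.da_buffer da.Lwt_io.da_ptr in
        da.Lwt_io.da_ptr <- da.Lwt_io.da_ptr + 1;
        if c = '\n' then Lwt.return (Buffer.contents buf)
        else (Buffer.add_char buf c; loop ())
      end
    in
    loop ())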
This goes hand in hand with the new very fast parser introduced in https://github.com/mirage/ocaml-cohttp/pull/819 and later improved and moved to the general http package. The PR that introduces it also includes some
@rgrinberg @anuragsoni @kandu please correct me or add anything I may be missing or misremembering.
@mseri I agree with the assessment that the new cohttp-server-lwt-unix is where this issue is probably "resolved" for cohttp. It should still be tested with the example in the original issue to confirm this, though. For the cohttp-lwt packages there has been no change in their use of Lwt_io, so unless things have changed within Lwt, I'd think that the issue still persists for cohttp + lwt.
I have tried to experiment a little more to get a better understanding of memory consumption in the Ocsigen stack, but I'm getting more confused. I have a small test app that just sends notifications from server to client. When it sends notifications at a rate of about 50/s, the application slowly increases its memory consumption. If I increase the rate to 100/s, memory consumption also increases. However, if I increase the rate to 200/s (or more), then memory consumption stays low, at around 30 MB.
Similarly, if I have other applications using most of the available memory before starting my application, then memory usage stays low. I have also seen that in some cases the application will give memory back to the OS if other applications start using a lot of memory after the "Ocsigen" application has started consuming it.
From my observations, it does not look like a memory leak, but rather like a memory allocation/caching strategy that does not always work the way I would like. I guess this is more or less similar to what others have reported.
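One way to tell the two apart would be to compare the live words reported by the GC with the total size of the major heap: if live words stay flat while the heap keeps growing, the runtime is retaining free memory rather than leaking it. A minimal sketch of such a check (just illustrative):

let log_heap_stats () =
  (* Gc.stat walks the whole major heap, so only call it occasionally. *)
  let s = Gc.stat () in
  Printf.printf "live: %d words, heap: %d words, top: %d words\n%!"
    s.Gc.live_words s.Gc.heap_words s.Gc.top_heap_words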
Is there no way for application writers to change/control the memory consumption behavior?
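For what it's worth, the generic knobs that exist today are the OCaml GC parameters and Lwt_io's default buffer size; a rough sketch of setting them, with no claim that they actually address the behavior above:

let () =
  (* A lower space_overhead makes the major GC keep the heap closer to the live data. *)
  Gc.set { (Gc.get ()) with Gc.space_overhead = 40 };
  (* Smaller per-channel buffers can matter when many connections each hold an Lwt_io channel. *)
  Lwt_io.set_default_buffer_size 1024;
  (* Compaction defragments the major heap and can release free chunks back to the OS. *)
  Gc.compact ()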