Joakim Kolsjö
I'll keep this issue open in case anyone (including myself) feels like trying to fix it :) One good first step could be to create a script, somewhat like https://github.com/joakimk/toniq/blob/master/test/benchmark.exs...
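A rough sketch of what such a script could look like, just to reproduce the memory growth (this assumes Toniq's public `Toniq.enqueue/2` API as documented in the README; `NoopWorker` and the job count are placeholders, not anything from the repo):

```elixir
# Sketch of a benchmark script (run with `mix run`): enqueue a large number
# of no-op jobs, then print how long it took and how much memory the VM uses.
# NoopWorker is a placeholder worker, not part of the repository.
defmodule NoopWorker do
  use Toniq.Worker

  def perform(_args), do: :ok
end

job_count = 100_000

{time_us, _} =
  :timer.tc(fn ->
    Enum.each(1..job_count, fn i ->
      Toniq.enqueue(NoopWorker, number: i)
    end)
  end)

IO.puts("Enqueued #{job_count} jobs in #{div(time_us, 1000)} ms")
IO.puts("Total VM memory: #{:erlang.memory(:total)} bytes")
```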
Nice to see that people who use Toniq more actively than I do are working on this problem. I'll certainly accept a pull request that fixes this.
Can someone confirm whether the pull request above, merged in 1.0.6, fixes this issue?
Toniq is not meant to handle every kind of asynchronous task (after all, we do have OTP itself), but I expect this problem could be common enough that Toniq should...
This might be related: https://youtu.be/6yoJ8sWRiyg?list=PLE7tQUdRKcyYoiEKWny0Jj72iu564bVFD&t=1517 (Erlang's handling of big binaries).
If it turns out that holding many jobs in memory is impractical, maybe they can be flushed out to Redis when the list gets too long. Possibly as incoming jobs...
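To illustrate the idea only (this is a sketch, not Toniq's internals; it assumes the Redix client and a made-up `toniq:overflow` key):

```elixir
# Sketch of the overflow idea: below a limit, keep jobs in the in-memory
# list; above it, push serialized jobs to a Redis list to be imported later.
# Assumes a Redix connection process; key name and limit are hypothetical.
defmodule OverflowBuffer do
  @max_in_memory 500

  # Below the limit: keep the job in the in-memory list.
  def add(jobs, job, _redis) when length(jobs) < @max_in_memory do
    [job | jobs]
  end

  # At or above the limit: send the job to Redis instead of growing the list.
  def add(jobs, job, redis) do
    {:ok, _} = Redix.command(redis, ["RPUSH", "toniq:overflow", :erlang.term_to_binary(job)])
    jobs
  end
end
```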
We have to consider takeover when there are many jobs as well. The way it's written now, it assumes that moving jobs over inside Redis will be fast. That won't...
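One way to make takeover incremental would be to move jobs between Redis lists in small atomic steps with `RPOPLPUSH` instead of one big move. Again only a sketch, assuming Redix and hypothetical key names, not the current takeover code:

```elixir
# Sketch of incremental takeover: move jobs from the failed VM's list to ours
# in batches of small atomic RPOPLPUSH steps, so a takeover with many jobs
# doesn't hinge on one big, fast operation.
defmodule IncrementalTakeover do
  @batch_size 100

  def move_all(redis, from_key, to_key) do
    case move_batch(redis, from_key, to_key) do
      0 -> :done
      _moved -> move_all(redis, from_key, to_key)
    end
  end

  # Moves up to @batch_size jobs and returns how many were actually moved.
  defp move_batch(redis, from_key, to_key) do
    Enum.reduce_while(1..@batch_size, 0, fn _, moved ->
      case Redix.command!(redis, ["RPOPLPUSH", from_key, to_key]) do
        nil -> {:halt, moved}
        _job -> {:cont, moved + 1}
      end
    end)
  end
end
```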
@kpanic you could try the [import_limits branch](https://github.com/joakimk/toniq/tree/import_limits). It will ensure there are only 500 jobs in memory at a time. It's untested (other than simple manual verification in iex), but...
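If it helps, the dependency in `mix.exs` can point straight at the branch, something like:

```elixir
# mix.exs: use a git dependency on the branch instead of the Hex release.
defp deps do
  [
    {:toniq, github: "joakimk/toniq", branch: "import_limits"}
  ]
end
```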
Sorry I didn't reply earlier. I don't actively work on this project, but I do aim to incorporate all contributions. As you've seen, I've tried to keep the Redis usage...
Related issue: https://github.com/joakimk/toniq/issues/9