fastembed
[Bug]: The multiprocess handler does not exit cleanly, if the subprocess is killed by the oomkiller
What happened?
When using fastembed with parallel=0 (or a higher number) and a large batch_size, memory usage grows substantially because data is copied into each worker process. If usage exceeds the memory limit allocated to the process group (e.g. when running in containers), the OS OOM killer may kill a worker process. However, the multiprocessing pool does not handle this gracefully and hangs.
This is similar to the issue described here - https://stackoverflow.com/questions/61492362/multiprocessing-pool-hangs-if-child-process-killed
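The linked answer suggests that `concurrent.futures.ProcessPoolExecutor` handles this failure mode better than `multiprocessing.Pool`. A minimal sketch (not fastembed's actual code) that simulates an OOM-killed worker with SIGKILL and shows the executor raising `BrokenProcessPool` instead of hanging:

```python
import os
import signal
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool


def die(_):
    # Simulate the OOM killer: the worker is terminated with no
    # chance to clean up or notify the parent.
    os.kill(os.getpid(), signal.SIGKILL)


def main():
    with ProcessPoolExecutor(max_workers=1) as pool:
        try:
            # With multiprocessing.Pool this would hang forever;
            # ProcessPoolExecutor notices the dead worker and
            # fails the pending future instead.
            pool.submit(die, None).result(timeout=30)
        except BrokenProcessPool:
            return "worker died, pool detected it"
    return "no error"


if __name__ == "__main__":
    print(main())
```

This only demonstrates the general Python behavior; whether fastembed uses `multiprocessing.Pool` or `ProcessPoolExecutor` internally is not shown in this thread.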
What Python version are you on? e.g. python --version
python 3.9/10/11
Version
0.3.0 (Latest)
What os are you seeing the problem on?
No response
Relevant stack traces and/or logs
No response
@kumare3 have you been able to fix this issue?
Nope, I gave up on fastembed. I found it hard to containerize and use, so I switched to sentence-transformers and have been very happy with it.
@kumare3 sorry to hear that it was hard for you to containerize it. Could you provide any additional details?
@kumare3 @nauyan does the parent process just hang indefinitely after an unexpected exit of a child process?
The issue has been fixed in fastembed 0.4.1.