Sergey Shepelev
Thanks for the details.

- 100 MB over 20,000 occurrences is "only" 5 KB per `socket.connect`, which doesn't seem like much. It should be close to zero if socket buffers are counted as kernel...
Yeah, there is (was?) an issue with logging compatibility. It worked better with `monkey_patch(thread=False)`, but that's a bad workaround. The following benchmark clearly shows there is no memory leak in `socket.connect`...
Continuing the previous message.

- the server used `socat -u /dev/null tcp-listen:4083,reuseaddr,fork`
- the memory_profiler result contradicts tracemalloc, so I'm unsure how to interpret it

```
Line #    Mem usage    Increment  Occurrences   Line...
```
No server; `try/except: pass` ignores the connect error. The result is no memory used, and no leaks either. The second snapshot didn't catch any allocations from the program itself, so it shows stdlib internals. ```...
> looks like fluent logger is using thread.Lock to do synchronization, do you think it is still necessary to have it?

It is mandatory in that particular style of buffering...
> do you try top? for me, looks like memory profiler gives a closer value compared to top

It's because memory_profiler by default uses psutil to get exactly the same...
Yeah, that makes sense. GreenPool starts each call in a new greenthread, each having a separate `threading.local()` object. Basically, it's a recipe for collecting exceptions, together with their heavy tracebacks, from...
You can reduce it further to something like this; no sockets or logging required.

```python
def work():
    try:
        raise Exception()
    except Exception as e:
        threading.local().last_error = e

pool = GreenPool()
...
```
Cool, thanks, I'll add this minimal test to the eventlet test suite. After fixing it, of course.
@jshen28 please post `uname -a` and the result of this script. I can't reproduce it in a minimal version.

```python
import eventlet
eventlet.monkey_patch()

import gc
import resource
import threading

import psutil

N...
```