filprofiler
A Python memory profiler for data processing and scientific computing applications
1. All builds run.
2. Wheels get uploaded as artifacts.
3. A separate workflow then downloads all artifacts and uploads them to PyPI in one go.
And possibly even take that into account for peak calculations?

* [ ] File backed
* [x] Not file backed (#29)
* [ ] Not file backed, but using `/dev/zero`...
Without a `ulimit` on virtual memory, OOM deaths might not always be reported by Fil: if the process gets SIGKILLed by the Linux OOM killer, Fil never gets a chance to report anything.
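The workaround this implies can be done from Python on Linux with the standard-library `resource` module: cap the address space so a runaway allocation fails with a catchable `MemoryError` instead of triggering the OOM killer. A sketch (the 1 GiB cap is an arbitrary example value):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
# Cap virtual memory at 1 GiB (illustrative value) so oversized
# allocations fail in-process rather than the process being SIGKILLed.
resource.setrlimit(resource.RLIMIT_AS, (1 * 1024**3, hard))

caught = False
try:
    data = bytearray(8 * 1024**3)  # 8 GiB: exceeds the cap
except MemoryError:
    caught = True
    print("caught MemoryError before the OOM killer could act")
finally:
    # Restore the original limit.
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

With the limit in place, the failure surfaces as a Python exception, which a profiler can observe and report.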
Is PyPy support planned? Is it currently not feasible? If so, how can we help make it feasible?
Additional info to include:

* [ ] machine it ran on
* [ ] git commit, branch
* [ ] env variables
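All three items are available from the standard library. A hedged sketch (`run_metadata` and the `FIL_` env-variable prefix are hypothetical names, not Fil's API; the git fields are `None` when not run inside a checkout):

```python
import os
import platform
import subprocess

def run_metadata():
    """Collect hypothetical per-run metadata for a profiling report."""
    meta = {
        "machine": platform.node(),        # machine it ran on
        "platform": platform.platform(),
        # Recording only a known prefix avoids dumping secrets from the
        # whole environment; "FIL_" is an assumed example prefix.
        "env": {k: v for k, v in os.environ.items() if k.startswith("FIL_")},
    }
    try:
        meta["commit"] = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            text=True, stderr=subprocess.DEVNULL).strip()
        meta["branch"] = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            text=True, stderr=subprocess.DEVNULL).strip()
    except (OSError, subprocess.CalledProcessError):
        meta["commit"] = meta["branch"] = None
    return meta

meta = run_metadata()
print(sorted(meta.keys()))
```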
Probably most allocations are very ephemeral. Supposedly vector-based arrays (for...