Massive memory usage for large files
Running on Mac (I'm unable to test on Linux right now), sd doesn't seem to write any output until the whole file has been processed. I'm not sure if that's why it eats up so much memory. It's insanely fast compared to sed for my use case, but unusable at the same time.
I started with `cat file | sd '.*start' '' > out.file` and also tried `sd -p '.*start' '' file > out.file`.
My input file was 100GB+, and in both versions sd keeps consuming memory, pushing the system to use tens of GB of swap (32GB MacBook Pro) while no output is written to the file. Monitoring bytes read, I can see that sd works much faster than sed, but it uses roughly 2-3 times as much memory as the amount read.
Is sd not geared toward large files, am I using it wrong, or is this a bug?
@thmd This sounds like it may be solved with buffered writing. I pushed a possible solution to a branch; please try it out and let me know if it helps :)
https://github.com/chmln/sd/tree/buffered-write
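
For context, here is a minimal Rust sketch of the buffered/streaming idea (this is only an illustration of the approach, not sd's actual implementation or what is on that branch): read the input line by line, apply the replacement, and push results through a `BufWriter` so memory stays bounded by the line length rather than the file size. The pattern and empty replacement are taken from the commands in the report above.

```rust
// Sketch only: stream stdin -> stdout with buffered writes,
// instead of reading the whole input into memory first.
use regex::Regex;
use std::io::{self, BufRead, BufWriter, Write};

fn main() -> io::Result<()> {
    // Example pattern from the report above.
    let re = Regex::new(r".*start").expect("invalid regex");

    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = BufWriter::new(stdout.lock());

    for line in stdin.lock().lines() {
        let line = line?;
        // Replace matches in this line only; memory use is bounded by the line length.
        out.write_all(re.replace_all(&line, "").as_bytes())?;
        out.write_all(b"\n")?;
    }
    out.flush()
}
```

Note that this assumes line-oriented input; a pattern that needs to span lines would require a different (e.g. chunked) strategy.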