Jody Bruchon (MOVED TO JODYBRUCHON.COM)
Grab endian.h from here: https://github.com/lattera/glibc/blob/master/string/endian.h

Delete everything before the last section that defines the functions based on __BYTE_ORDER. Manually #define __BYTE_ORDER above the section to match big endian (my research...
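For anyone following along, here's a minimal sketch of what that trimmed header ends up looking like. The macro names (__BYTE_ORDER, __BIG_ENDIAN, htole32, etc.) match the glibc header, but using bswap_32 from <byteswap.h> in place of glibc's internal swap macros is my simplification, so treat this as an illustration rather than the exact file:

```c
/* Sketch of the trimmed endian.h: keep only the __BYTE_ORDER-based
 * section and pin the byte order by hand for a big-endian target. */
#include <byteswap.h>   /* bswap_16 / bswap_32 / bswap_64 */

#define __LITTLE_ENDIAN 1234
#define __BIG_ENDIAN    4321
/* The manual step: define this yourself instead of letting
 * bits/endian.h detect it. */
#define __BYTE_ORDER    __BIG_ENDIAN

#if __BYTE_ORDER == __BIG_ENDIAN
/* Host is big endian: big-endian conversions are no-ops,
 * little-endian conversions byte-swap. */
# define htobe32(x) (x)
# define be32toh(x) (x)
# define htole32(x) bswap_32(x)
# define le32toh(x) bswap_32(x)
#endif
```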
> What about buffered writes to `stdout`

I don't understand, because these things are all dumping data to files.
You're massively over-thinking this. Larger buffers have only one downside, and that's RAM usage measured in kilobytes. Unless you are leaving tens of thousands of files open simultaneously, that's not...
But it's a file buffer. It's not a logic change. The only thing that happens here is that more data is read or written at a time. Like I said,...
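To make the "it's just a buffer" point concrete, here is a minimal sketch of the kind of change under discussion, using plain stdio. The open_buffered name and the CHUNK_SIZE value are illustrative, not taken from the project:

```c
#include <stdio.h>

#define CHUNK_SIZE (1024 * 1024)  /* 1 MiB instead of the default (often 4-8 KiB) */

/* Open a file and enlarge its stdio buffer. No read/write logic
 * changes; stdio just moves more data per underlying syscall. */
FILE *open_buffered(const char *path, const char *mode)
{
    FILE *fp = fopen(path, mode);
    if (fp == NULL) return NULL;
    /* setvbuf() must be called before any I/O on the stream;
     * passing NULL lets stdio allocate the buffer itself. */
    if (setvbuf(fp, NULL, _IOFBF, CHUNK_SIZE) != 0) {
        /* Fall back silently: the stream still works with the
         * default buffer size. */
    }
    return fp;
}
```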
OK. Explain the problem that you have foreseen. All of that "any change can cause a problem" rhetoric has no actual value. What is the specific technical issue you are...
FYI I've been using this for 9 days now with no issues and significantly improved performance.

Edit: I've been watching it fly by for half an hour and the difference...
> it just needs to be configurable

Unnecessary complexity, in my view.

> I think the main concern is memory usage

The program doesn't hold enough files open at once...
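As a back-of-envelope check on the memory concern (both constants here are illustrative, not project values): the total buffer RAM is just the buffer size times the number of simultaneously open streams, so even generous numbers stay tiny:

```c
#include <stdio.h>

#define CHUNK_SIZE     (1024 * 1024)  /* illustrative 1 MiB per-stream buffer */
#define MAX_OPEN_FILES 4              /* illustrative worst case for open streams */

int main(void)
{
    /* 4 streams x 1 MiB = 4 MiB worst case; the default ~4 KiB
     * buffers would total ~16 KiB. Either way it's negligible. */
    printf("worst-case buffer RAM: %d KiB\n",
           (CHUNK_SIZE / 1024) * MAX_OPEN_FILES);
    return 0;
}
```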
That sounds reasonable to me. :-)
I have a 10GbE connection to a Linux RAID server over Samba. The Windows machine is pulling the data, then bouncing it to mapped drives over the private link.
If that's the case, then there may be an issue with how network failure is handled. A larger buffer shouldn't corrupt anything.