Jim Newsome
Another solution we talked about is to smear on the sender side; the sender shouldn't actually be able to send all 100 packets within a single nanosecond, should it?
> I think the sender-side approach breaks in case you have a large number of senders sending packets to the same receiver socket all at the same instant. For UDP...
> and maybe some "smoothing" happening where packets from multiple senders arrive at a single upstream router and are buffered before being retransmitted

Maybe just using a bigger "receive" buffer...
Could we have cmake run `cargo test` instead of trying to run the test binary directly?
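Something like the following could work (a sketch only; the target name, directory layout, and `CARGO_TARGET_DIR` handling are assumptions, not the repo's actual setup):

```cmake
# Run the whole cargo test suite via a custom target instead of
# invoking a single test binary directly.
add_custom_target(rust-tests
    COMMAND ${CMAKE_COMMAND} -E env
            CARGO_TARGET_DIR=${CMAKE_BINARY_DIR}/cargo
            cargo test
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/src
    USES_TERMINAL
)
```

This keeps cargo in charge of discovering and running the tests, so new test binaries wouldn't need to be wired into cmake individually.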
Seems like a good idea. The [latter](https://www.contributor-covenant.org/version/2/1/code_of_conduct/) looks a bit more amenable to customization and reuse.
I wondered if we could use `F_GETLK` to check whether a lock is already held over the requested range, but AFAICT there's no way to distinguish between "the lock isn't...
It looks like we *could* query the lock state via `/proc/locks`, but it's probably not worth investing the effort to parse it vs properly emulating and tracking the locks ourselves....
Ah, interesting. I think we can expand the scope of this issue beyond sched_yield; the principle is mostly the same. Having *all* syscalls (and rdtsc-based time checks) move forward by some non-zero...
@stevenengler IIUC to "fix" #1794 without changing curl, time would need to advance by > 3 ms? Would that have to be in a single iteration of the loop, or...
> This would require running the scheduler after every syscall right? Maybe instead we could only increase the cached time in the shim on every syscall, and then only if...