Nathaniel J. Smith
Something I just realized: at some point we may make `cancel_shielded_checkpoint()` skip the checkpoint if the task hasn't been running for very long. We've looked at this mostly for efficiency...
Yeah, ok, you're right :-). I've had some bad experiences with benchmark-driven development before, and it's going to be difficult to figure out what benchmarks are actually useful, but speed...
The discussion in [this discourse thread](https://trio.discourse.group/t/why-are-python-sockets-so-slow-and-what-can-be-done/121) convinced me that we have a fair amount of low-hanging fruit that even some simple microbenchmarks could help us flush out. (E.g., I'm pretty...
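To make "simple microbenchmarks" concrete: a minimal sketch of the kind of socket-level throughput measurement meant here, using only the stdlib (the function name, message count, and payload size are all illustrative choices, not anything from the thread):

```python
import socket
import time

def socket_throughput(n_msgs=10_000, size=1024):
    """Deliberately synthetic microbenchmark: no protocol parsing,
    just the raw send/recv overhead over a local socketpair."""
    a, b = socket.socketpair()
    payload = b"x" * size
    start = time.perf_counter()
    for _ in range(n_msgs):
        a.sendall(payload)
        received = 0
        # recv() may return short reads, so loop until the full
        # payload has arrived on the other end.
        while received < size:
            received += len(b.recv(size - received))
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return n_msgs * size / elapsed  # bytes/sec

print(f"{socket_throughput():.0f} bytes/sec")
```

The same loop can then be rewritten on top of `trio.socket` to compare the stdlib and Trio numbers side by side.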
> There are lots of lists of such tools; I'll post one for completeness: https://gist.github.com/denji/8333630 I've looked at some lists like this, but all the ones I've found are focused...
> What's the point of a benchmark without any protocol parsing? Higher numbers, but under very synthetic conditions that don't correlate to any real-life load I tend to group benchmarks into two...
IME most good benchmarking tools report results as a set of quantiles or a histogram, which tells you much more about the shape of the distribution than mean/stddev. The 68-95-99.7...
@belm0 Unfortunately, the only way to compute per-task CPU time (with or without child tasks) is to call `thread_time` at every task switch, which is probably too much overhead for...
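To illustrate both the approach and where the overhead comes from, here is a toy generator-based scheduler that charges CPU time to each task by calling `time.thread_time()` around every task step. This is a standalone sketch, not Trio's machinery; the names `run_tasks` and `busy` are made up for the example:

```python
import time
from collections import defaultdict

def run_tasks(tasks, steps=50):
    """Round-robin over generator 'tasks', reading thread_time()
    before and after every step -- two extra clock reads per task
    switch, which is exactly the per-switch overhead at issue."""
    cpu = defaultdict(float)
    for _ in range(steps):
        for name, gen in tasks.items():
            t0 = time.thread_time()
            try:
                next(gen)
            except StopIteration:
                continue
            cpu[name] += time.thread_time() - t0
    return dict(cpu)

def busy(n):
    # A task that burns CPU proportional to n on each step.
    while True:
        sum(range(n))
        yield

result = run_tasks({"light": busy(100), "heavy": busy(100_000)})
print(result)
```

The measurement itself is cheap per call, but in a real scheduler it happens on every one of possibly millions of task switches, which is why it is off by default rather than always on.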
@Tronic huh, neat! It's obviously a very narrow/specific microbenchmark, but those results are pretty promising as a start (the trio socket-level implementation beats the stdlib on throughput!), and it already...
@Tronic would you be okay with stashing those somewhere more permanent, so people reading this thread later will know what we're talking about?
It would be nice to see detailed profiles, so we could tell what part of the deadline code was actually taking up time :-). > I already removed separate CancelScopes...
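One generic way to get the kind of detailed profile asked for here is to run the benchmark under `cProfile` and sort by cumulative time; this sketch is not Trio-specific, and the `profile` helper name is made up for the example:

```python
import cProfile
import io
import pstats

def profile(fn, *args):
    """Run fn(*args) under cProfile and return the top 10 entries
    sorted by cumulative time -- enough to see which functions
    (e.g. deadline bookkeeping) dominate the run."""
    pr = cProfile.Profile()
    pr.enable()
    fn(*args)
    pr.disable()
    out = io.StringIO()
    pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(10)
    return out.getvalue()

# Example: profile sorting a reversed list.
print(profile(sorted, list(range(1000))[::-1]))
```

Running the socket benchmark under this wrapper would show directly whether the deadline/CancelScope code is a meaningful fraction of the total, instead of guessing from end-to-end numbers.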