fix: picoev fmt
Better use of bitwise OR.

Old generated C code:

```c
int read_events = (((((event.events & ((u32)(EPOLLIN))) != 0U ? (_const_picoev__picoev_read) : (0))) | (((event.events & ((u32)(EPOLLOUT))) != 0U ? (_const_picoev__picoev_write) : (0)))));
```
New generated C code:

```c
int read_events = 0;
if ((event.events & ((u32)(EPOLLIN))) != 0U) {
    read_events |= _const_picoev__picoev_read;
}
if ((event.events & ((u32)(EPOLLOUT))) != 0U) {
    read_events |= _const_picoev__picoev_write;
}
```
What is the performance impact?
> What is the performance impact?
I will verify ASAP
Any news?
Why not just use the POSIX poll function, which should be available on all (or most?) systems. Even Windows has a POSIX subsystem.
Any news?
Sorry. I ended up getting bogged down with a lot of work during that time. Over the weekend I'll try to do some tests.
> Why not just use the POSIX `poll` function, which should be available on all (or most?) systems. Even Windows has a POSIX subsystem.
TL;DR: performance and scalability. Platform-specific APIs like epoll and kqueue can better handle the case of many thousands of connections that are mostly idle. See https://stackoverflow.com/questions/5383959/why-exactly-does-epoll-scale-better-than-poll for more details, or https://en.wikipedia.org/wiki/C10k_problem .
Imho in another 5-10 years, POSIX can standardize a common API for it, but afaik it has not happened yet, and poll is not enough.
> What is the performance impact?
No performance impact.
After some performance tests on examples/pico/pico.v and examples/pico/raw_callback.v
using wrk with the command `wrk -d10s http://127.0.0.1:8080`, I noticed no change in results (margin of error around 1%).
```
Running 10s test @ http://127.0.0.1:8080
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.84us   21.37us  563.00us   77.52%
    Req/Sec    80.73k     8.49k  128.49k    67.16%
  1613112 requests in 10.10s, 193.84MB read
Requests/sec: 159724.51
Transfer/sec:     19.19MB
```
The command used to run the projects was `./v -prod crun <project path>`.