opensmalltalk-vm
High Performance AsyncIO - kqueue/epoll/IOCP
I happened across commit 3664 [1], which indicates epoll and kqueue support is planned. The core devs probably already know what they want to do, but I got curious about the background and wanted somewhere to hang the info I found. Perhaps it's useful for discussion or history.
Nginx summarises the per-platform options[2]:
select — standard method for platforms that lack more efficient methods.
poll — standard method on platforms that lack more efficient methods.
kqueue — efficient method used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and Mac OS X.
epoll — efficient method used on Linux 2.6+.
eventport — event ports, efficient method used on Solaris 10.
Code examples of epoll & kqueue provided by [3].
I/O Completion Ports seem to be the analogous mechanism on Windows[4], and [9] discusses the "practical difference between epoll and Windows IO Completion Ports"; in particular, it's "fairly easy to emulate IOCP with epoll while using a separate thread pool. However it is not that easy to do the reverse, and I don't know of any easy way to emulate epoll with IOCP, and it looks rather impossible to keep the same or close performance."
Should an existing third-party cross-platform library like libevent (BSD)[5] or libuv (MIT)[6][7] be used? Both are git repositories, so either could be maintained as a git subtree. Pros and cons are listed in the comments of [11] for: LibEvent, Libev, Libuv, Libae, and Boost asio.
Perhaps the HTTP & SSL support in libevent is too much of a kitchen sink, or perhaps it could be useful to provide out-of-band management of a VM running in the cloud (if that might ever be useful). Per [8]: "Having to provide access to the functionality from the dramatically different methods, libevent has a rather complex API which is much more difficult to use than poll or even epoll. It is however easier to use libevent than to write separate backends if you need to support FreeBSD (epoll and kqueue) [and Windows]".
libuv seems a lighter option than libevent. The difference between the *nix "Reactor" and Windows "Proactor" approaches to AIO is described at [12]. (Does libuv provide a cross-platform proactor interface?) [13] says "libuv offers considerable child process management, abstracting the platform differences." (So maybe uv_spawn() will help get OSSubprocess working on Windows.) [14] says "you can also embed libuv's event loop into another event loop based library."
[10] says "multiple SIGIO signals will not be queued and there is no way to detect if signals have been lost". [10] also says POSIX AIO is "totally screwed. The people who came up with it were on drugs or something. Really. I'll go through various issues, starting with the ones that aren't so bad and ending with the real doozies [...and...] is implemented at user level, using threads [which is a high overhead for large numbers of aio]".
[15] notes that Windows' other AIO mechanism, WSAWaitForMultipleEvents, won't let you wait on more than 64 events.
[1] http://forum.world.st/commit-3664-Format-aio-c-according-to-Eliot-s-predilections-before-adding-kqueue-and-epoll-td4887256.html
[2] http://nginx.org/en/docs/events.html
[3] http://austingwalters.com/io-multiplexing/
[4] http://tinyclouds.org/iocp-links.html
[5] https://en.wikipedia.org/wiki/Libevent
[6] https://en.wikipedia.org/wiki/Libuv
[7] http://docs.libuv.org/en/latest/design.html
[8] http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/
[9] http://www.ulduzsoft.com/2014/01/practical-difference-between-epoll-and-windows-io-completion-ports-iocp/
[10] http://davmac.org/davpage/linux/async-io.html#sigio
[11] https://www.reddit.com/r/linux/comments/1drwuw/why_doesnt_linux_implement_kqueue/
[12] http://somdoron.com/2014/11/netmq-iocp/
[13] https://nikhilm.github.io/uvbook/processes.html
[14] https://nikhilm.github.io/uvbook/eventloops.html
[15] http://blog.omega-prime.co.uk/?p=155
Some interesting broad discussion of Linux versus Windows async I/O. https://news.ycombinator.com/item?id=11864211 TL;DR just read comments by: trentnelson & wahern