nginx_upstream_module
nginx auto-batching feature for higher client & tarantool performance
Experiments have shown [1] that Tarantool handles batches of operations grouped into transactions much better than lots of individual operations... and that is before the network layer even becomes a factor.
I have very many PHP processes which each want to make many individual operations... which isn't the best for Tarantool performance.
How about a feature in the nginx_upstream_module which does the following for certain types of HTTP requests that only write to Tarantool and don't need to return any data:
- Reads HTTP request from client.
- Queues the request for later forwarding upstream to Tarantool.
- Immediately replies via HTTP saying "thank you, request received".
- Later, when a certain buffer size or elapsed-time threshold is reached, the batch of queued requests is forwarded upstream to Tarantool and processed more efficiently as a single transaction (a rough sketch of this flush logic is below).
Note: This feature is only for users who don't care about certain writes to Tarantool happening in absolutely real time...
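To make the flush condition concrete, here is a minimal Python sketch of the queue-then-flush idea (flush when either a batch-size or an age threshold is hit). Everything here is hypothetical illustration: the names `BatchQueue` and `forward_to_tarantool` and the `100` / `50 ms` defaults are not part of the module, and a real implementation would live inside the nginx event loop in C rather than in Python threads.

```python
# Hypothetical sketch of the proposed flush-on-size-or-timeout batching.
# The real feature would be implemented inside the nginx module's event
# loop; this only illustrates the queuing/flush conditions.

import threading
import time


class BatchQueue:
    def __init__(self, forward_to_tarantool, max_batch=100, max_delay=0.05):
        self.forward = forward_to_tarantool  # sends a list of requests as one upstream batch
        self.max_batch = max_batch           # flush when this many requests are queued...
        self.max_delay = max_delay           # ...or when the oldest queued request is this old (seconds)
        self.lock = threading.Lock()
        self.pending = []
        self.oldest = None
        threading.Thread(target=self._timer_loop, daemon=True).start()

    def enqueue(self, request):
        """Called once per HTTP request; the caller replies to the client immediately."""
        with self.lock:
            if not self.pending:
                self.oldest = time.monotonic()
            self.pending.append(request)
            if len(self.pending) >= self.max_batch:
                self._flush_locked()

    def _timer_loop(self):
        # Periodically flush batches that are too old but never filled up.
        while True:
            time.sleep(self.max_delay / 2)
            with self.lock:
                if self.pending and time.monotonic() - self.oldest >= self.max_delay:
                    self._flush_locked()

    def _flush_locked(self):
        batch, self.pending = self.pending, []
        # One upstream round trip carries the whole batch, which could then be
        # applied on the Tarantool side as a single transaction.
        self.forward(batch)
```

The point of the sketch is only that many queued write requests share one upstream round trip and one transaction, which is the effect the measurements in [1] suggest Tarantool benefits from.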
[1] https://gist.github.com/simonhf/e7c2f40d36f1a4bdedfffa40c575b63b
Yep, this is possible. Also, if I implement this, you'll see better performance, better latency, and less CPU usage (on the nginx side). I've moved the issue to the next milestone.
Thanks for the idea!
any progress?
We have no planned work on the upstream module in the near future. You can open a pull request or contact our commercial support.