nginx_upstream_module

nginx auto batching feature for higher client & tarantool performance

Open simonhf opened this issue 9 years ago • 3 comments

Experiments have shown [1] that Tarantool handles batches of operations grouped into a single transaction much better than the same number of individual operations — and that is before the network layer is even a factor.
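For context, the gain comes from committing many writes in one transaction instead of one transaction per write. Below is a minimal Tarantool-side sketch of that idea (not the gist's exact benchmark; the space name `bench` and the record shape are assumptions made for illustration):

```lua
-- Sketch only: space name 'bench' and tuple layout are assumptions.
box.schema.space.create('bench', { if_not_exists = true })
box.space.bench:create_index('pk', { if_not_exists = true })

local clock = require('clock')
local n = 100000

-- Individual operations: each replace is its own transaction.
local t0 = clock.monotonic()
for i = 1, n do
    box.space.bench:replace({ i, 'value-' .. i })
end
print('individual:', clock.monotonic() - t0)

-- Batched operations: the same replaces committed as one transaction.
local t1 = clock.monotonic()
box.begin()
for i = 1, n do
    box.space.bench:replace({ i, 'value-' .. i })
end
box.commit()
print('batched:', clock.monotonic() - t1)
```

The actual measurements are in the linked gist; this snippet only shows the shape of the comparison.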

I have very many PHP processes, each of which wants to make many individual operations, which is not the best case for Tarantool performance.

How about a feature in nginx_upstream_module that does the following for certain types of HTTP requests which only write to Tarantool and do not need to return any data:

  1. Reads HTTP request from client.
  2. Queues up the request for forwarding upstream to Tarantool.
  3. Immediately replies via HTTP saying "thank you, request received".
  4. Later, when a certain buffer size or an elapsed-time threshold is reached, the batch of queued requests is forwarded upstream to Tarantool and processed more efficiently as a single transaction (a Tarantool-side sketch follows this list).
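To illustrate step 4, the flush could be delivered as one upstream call that a Tarantool-side stored procedure applies in a single transaction. This is only a sketch of the idea, not existing module behavior; the function name `batch_insert`, the payload shape (an array of tuples), and the space name `bench` are assumptions:

```lua
-- Hypothetical procedure the module would call once per flushed batch.
function batch_insert(tuples)
    box.begin()
    for _, tuple in ipairs(tuples) do
        box.space.bench:replace(tuple)
    end
    box.commit()
    return #tuples
end
```

On the nginx side, the module would accumulate the per-request payloads and issue a single call to such a procedure when the buffer fills or the timer fires, instead of one upstream call per HTTP request.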

Note: This feature is only for users who don't care about certain writes to tarantool happening in absolutely real time...

[1] https://gist.github.com/simonhf/e7c2f40d36f1a4bdedfffa40c575b63b

simonhf avatar Oct 21 '16 16:10 simonhf

Yep, this is possible. If I implement this, you'll see better performance, better latency, and less CPU usage (on the nginx side). I've moved the issue to the next milestone.

Thanks for an idea!

dedok avatar Oct 24 '16 19:10 dedok

any progress?

jobs-git avatar Jun 22 '22 19:06 jobs-git

We have no planned work on the upstream module in the near future. You can open a pull request or contact our commercial support.

Totktonada avatar Jun 23 '22 22:06 Totktonada