tarpc
Best way to handle blocking service methods?
Some of my service's methods need to access the db, which means they would block the current thread. Does that mean I should use the SyncService instead of FutureService?
But then how can I scale to hundreds of simultaneous rpc calls without spawning equally many threads?
Or should I use FutureService, but then how could I process any other rpc calls in parallel with a blocking call (e.g. while one call waits on the db)?
What's the best way to handle this?
Is there a way to combine both approaches, like in actix, where some actors (here, rpc methods) are sync and some are async, and both can be part of the same service because the sync actors (methods) are executed on a thread pool?
If I manually spawn multiple FutureServices in different threads, a blocking call in one thread would still block the other calls being handled by that thread at the time. And they couldn't all listen on the same port, right? So what's the best way to do this? :)
The best approach I know of is to use tokio's blocking fn.
What would that look like with a tarpc service?
You just wrap a blocking call and it returns a future. So, you'd plug it into a FutureService method.
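For illustration, a minimal sketch of that pattern, assuming tokio-threadpool 0.1 and futures 0.1; load_user_from_db is a hypothetical blocking DB call, and get_user only shows the shape of a future a FutureService method could return, not the actual tarpc service trait:

```rust
extern crate futures;
extern crate tokio_threadpool;

use futures::{future::poll_fn, Future};
use tokio_threadpool::blocking;

// Hypothetical blocking DB call.
fn load_user_from_db(id: u64) -> String {
    // ...the actual blocking query would go here...
    format!("user-{}", id)
}

fn get_user(id: u64) -> impl Future<Item = String, Error = ()> {
    // `blocking` marks the closure as a blocking section: the worker thread
    // hands its other duties off while the closure runs, so other requests
    // keep making progress. This only works when the future is polled from
    // inside a tokio thread pool.
    poll_fn(move || blocking(|| load_user_from_db(id)).map_err(|_| ()))
}
```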
So I should just use the FutureService, run 1 instance of it, and wrap all blocking (e.g. db) calls in blocking? But then I'd only have 1 thread processing requests, right? Would it be possible to use a pool of as many threads as there are cpu cores and have an instance of FutureService running in each? If yes, how? :)
I think right now the easiest way to achieve parallelism is to run multiple cores, each with an instance of FutureService. That's pretty coarse-grained parallelism -- each client will be served by one core -- but it should scale pretty well across clients.
Once tarpc supports the latest version of tokio, this will become much easier. It will basically work out-of-the-box the way you expect it to. I don't expect to have that version available for some time, though...not before futures 0.3 is released, probably.
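As a rough sketch of that coarse-grained approach, where run_service_instance is a hypothetical placeholder for whatever builds the reactor and starts a FutureService (not the actual tarpc API):

```rust
use std::thread;

fn run_service_instance(addr: &'static str) {
    // ...create the event loop and start a FutureService listening on `addr`...
    let _ = addr;
}

fn main() {
    let addr = "127.0.0.1:9000";
    let cores = 4; // e.g. the number of CPU cores

    // One OS thread per core, each running its own service instance.
    let handles: Vec<_> = (0..cores)
        .map(|_| thread::spawn(move || run_service_instance(addr)))
        .collect();

    for handle in handles {
        let _ = handle.join();
    }
}
```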
But how can I make all these instances listen on the same port?
Btw, before I call blocking, do I have to create the global thread pool myself?
I'm pretty sure you can simply specify the same port. Did you run into a problem with that?
Yeah I think you probably have to create the thread pool first.
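Something along these lines, assuming tokio-threadpool 0.1 (the crate that provides blocking); the pool has to exist before any task calls blocking, because the call errors outside of a thread pool context:

```rust
extern crate futures;
extern crate tokio_threadpool;

use futures::{future::poll_fn, Future};
use tokio_threadpool::{blocking, ThreadPool};

fn main() {
    // Create the thread pool first; `blocking` returns an error when called
    // from outside a ThreadPool worker.
    let pool = ThreadPool::new();

    pool.spawn(poll_fn(|| {
        // The closure given to `blocking` is where the real blocking work
        // (a DB query, file IO, etc.) would go.
        blocking(|| println!("blocking work running on the pool")).map_err(|_| ())
    }));

    // Wait for spawned work to finish before exiting.
    pool.shutdown_on_idle().wait().unwrap();
}
```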
So every thread can only process one client request at a time? Let's say I only use 1 thread: does that mean other client requests get queued and processed serially, or will they be rejected while another client request is already being processed?
Are you using the futures API or the blocking API? With the blocking API, it uses a thread pool. With the futures API, it's just one thread, and you're expected not to make blocking calls. If you're using futures and making blocking requests, like DB queries, I'd use one of the futures thread pools to execute those queries. I don't think the blocking fn works yet, since I haven't updated tarpc to the latest tokio version.
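For example, a sketch using the futures-cpupool crate (one of those futures thread pools), where query_db is a hypothetical blocking DB call:

```rust
extern crate futures;
extern crate futures_cpupool;

use futures::Future;
use futures_cpupool::CpuPool;

// Hypothetical blocking DB call.
fn query_db(id: u64) -> Result<String, ()> {
    Ok(format!("row-{}", id))
}

fn main() {
    // One shared pool, sized to the number of CPUs.
    let pool = CpuPool::new_num_cpus();

    // `spawn_fn` runs the closure on the pool and hands back a future;
    // a FutureService method could return that future instead of blocking
    // the event loop thread.
    let result = pool.spawn_fn(move || query_db(42));

    println!("{:?}", result.wait());
}
```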
I'm sorry if this is all a bit unclear! The futures ecosystem is still in quite a bit of turmoil, which has made it difficult for me to chart out a clear path for future tarpc versions.
So with the blocking API it's already using a thread pool and I don't have to spawn multiple instances of the SyncServer listening on the same port?
That's correct.
Ok, and how many threads is it using in the pool?
Let's say all threads are busy processing rpc calls (e.g. doing blocking db stuff): will new rpc calls be queued up and processed asap, or be rejected?
Can it happen that new rpc calls will be cancelled due to timeout when all threads are busy for too long?
The default thread pool is configured here. It's entirely configurable, so you can specify the options that work best for your use case.
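As a sketch of the kind of knobs involved (using futures-cpupool's Builder here; the exact options tarpc accepts are whatever the linked configuration code exposes):

```rust
extern crate futures_cpupool;

use futures_cpupool::Builder;

fn main() {
    // Size and name the worker threads to fit the workload.
    let pool = Builder::new()
        .pool_size(16)              // number of worker threads
        .name_prefix("rpc-worker-") // thread name prefix, handy in logs
        .create();

    drop(pool);
}
```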
Is there anything else you feel is unresolved with this?