TorchSharp
Multithreading safety
Great library, but please provide some information on multithreading safety. In production, models are normally wrapped in a REST service, which is inherently multithreaded. Torch itself is safe to use for inference in that regard; I am not sure whether your library preserves that property. My major concern is the introduction of scopes for memory management: the stack of scopes appears to live at a global level, which could be a problem. Not sure what else.
Dispose scopes are thread local: https://github.com/dotnet/TorchSharp/blob/6bc4574cc94a244a318f22ee0244addd9d4d94f2/src/TorchSharp/DisposeScopeManager.cs#L17
The only thing to be careful about here is that thread-local state does not flow across async/await.
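For illustration, here is a minimal sketch of the pattern the thread-local design supports: each worker thread opens its own dispose scope, so tensors created on that thread are tracked and released independently of the other threads. The shapes and operations are placeholders, not anything from the library's docs.

```csharp
using System.Threading.Tasks;
using TorchSharp;

class ThreadScopeSketch
{
    static void Main()
    {
        // Each worker thread opens its own dispose scope; tensors created on that
        // thread are registered with that scope and released when it is disposed.
        Parallel.For(0, 4, _ =>
        {
            using var scope = torch.NewDisposeScope();
            var x = torch.randn(8, 8);
            var y = x.matmul(x);   // tracked by this thread's scope
            // x and y are freed when the scope is disposed at the end of the lambda,
            // without interfering with scopes on other threads.
        });
    }
}
```

Because the scope stack is per-thread, the four workers never see each other's tensors; the problem only appears once continuations start hopping between threads.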
For the latter, we might just need to switch to AsyncLocal<T>. I believe it does work with both threads and async flows.
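As a self-contained illustration of the difference (plain .NET, no TorchSharp involved): a value stored in a ThreadLocal<T> can be gone after an await, because the continuation may resume on a different thread-pool thread, while an AsyncLocal<T> value flows with the logical async context:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class AsyncLocalSketch
{
    static readonly ThreadLocal<string> threadLocal = new();
    static readonly AsyncLocal<string> asyncLocal = new();

    static async Task Main()
    {
        threadLocal.Value = "set before await";
        asyncLocal.Value = "set before await";

        // The continuation may resume on a different thread-pool thread.
        await Task.Delay(10).ConfigureAwait(false);

        // ThreadLocal may come back null here; AsyncLocal still has its value
        // because it flows with the ExecutionContext.
        Console.WriteLine($"ThreadLocal: {threadLocal.Value ?? "<null>"}");
        Console.WriteLine($"AsyncLocal:  {asyncLocal.Value}");
    }
}
```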
As always an excellent point, @lostmsu. The current solution accommodates thread parallelism, but not task-based concurrency.
Torch itself is safe to use for inference in that regard.
@gevorgter -- I have always wondered where, in all of the torch documentation, that safety is formalized into a guarantee. Do you have a reference?
Sorry, no reference. But my hunch is that training might not be thread safe, while inference is, since there are no gradients to keep and it's just calculations.
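For what it's worth, the pattern under discussion would look roughly like the sketch below: one shared, read-only model serving concurrent requests, with each forward pass run under no_grad inside its own dispose scope. Whether concurrent forward calls on a shared module are actually guaranteed safe is exactly the open question above, so treat this as an assumption rather than a statement of the library's contract; the Linear model and shapes are placeholders.

```csharp
using System.Threading.Tasks;
using TorchSharp;

class ConcurrentInferenceSketch
{
    static void Main()
    {
        // One shared model, put into eval mode and never mutated afterwards.
        var model = torch.nn.Linear(10, 5);
        model.eval();

        // Simulate concurrent requests, each on its own worker thread.
        Parallel.For(0, 8, _ =>
        {
            using var scope = torch.NewDisposeScope();
            using var noGrad = torch.no_grad();   // grad mode is per-thread, so each thread disables it
            var input = torch.randn(1, 10);
            var output = model.forward(input);
            // output would be consumed here; it is released with the scope.
        });
    }
}
```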
The PyTorch team appears to care about threading issues and to be fixing them: https://github.com/pytorch/pytorch/issues/16828