
Allow concurrent calls to Engine

Open mariusdv opened this issue 6 years ago • 8 comments

(First PR on this codebase) Rewrote the way the global information is stored to allow concurrent requests to both Execute and Invoke methods.

This mainly uses AsyncLocal<T>, which was introduced in .NET Framework 4.6 (this required a package upgrade).

Execution Context & Call Stacks Previously, these structures were kept using a simple stack. when running multiple invocations, this stack would be poluted by the different calls. To separate the data for every call while maintaining the ability of a common state, I use AsyncLocals. These locals will remain scoped to a single call. The structure is set up in a way to allow concurrent steps within a single invocation as well if we ever want to run some things in parallel.

StrictModeScope and EvalCodeScope
These used to be based on a disposable pattern that modified the same static state, which similarly gets polluted during concurrent requests. The pattern changed from disposables to delegates. The change in usage is minimal, while the scoping is now provided correctly.
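
A rough sketch of that pattern change, with hypothetical names (the real StrictModeScope/EvalCodeScope members differ): the scope value is carried through a delegate invocation and restored afterwards, instead of mutating shared static state through an IDisposable.

```csharp
using System;
using System.Threading;

// Sketch only: a delegate-based scope backed by AsyncLocal<bool>, replacing a
// disposable that flipped a shared static flag. Concurrent calls can no longer
// observe each other's strict-mode setting.
internal static class StrictScope
{
    private static readonly AsyncLocal<bool> _strict = new AsyncLocal<bool>();

    public static bool IsStrict => _strict.Value;

    public static T Run<T>(bool strict, Func<T> body)
    {
        var previous = _strict.Value;
        _strict.Value = strict;
        try { return body(); }
        finally { _strict.Value = previous; }
    }
}

// Usage (EvaluateFunctionBody is a stand-in for the scoped work):
// var result = StrictScope.Run(strict: true, () => EvaluateFunctionBody());
```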

Others
To make this scenario work, I also changed some dictionaries to concurrent dictionaries and altered the global cache objects so they use the ConcurrentObjectPool instead of the ObjectPool.
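
As an illustration of the dictionary side of this (the cache name below is invented, not an actual Jint type): a plain Dictionary with check-then-insert is unsafe under concurrent callers, whereas ConcurrentDictionary.GetOrAdd does the lookup-or-create in one thread-safe call.

```csharp
using System.Collections.Concurrent;

// Sketch: a shared cache made safe for concurrent readers and writers.
// GetOrAdd may run the factory more than once under contention,
// but only one value is ever stored for a given key.
internal static class ExampleNameCache
{
    private static readonly ConcurrentDictionary<string, string> _cache =
        new ConcurrentDictionary<string, string>();

    public static string Intern(string name) =>
        _cache.GetOrAdd(name, key => key /* stand-in for the real create logic */);
}
```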

mariusdv · Oct 16 '19

Awesome work here! I'm a bit concerned about the performance, though. Currently Jint is highly optimized to work as a single-threaded engine, and you should pool instances outside of Jint's logic. Where does the need to concurrently access the same engine and data structures come from? Historically JavaScript has been run using a single-threaded model, which has its benefits and shortcomings.

lahma · Oct 17 '19

For me the main use case would be users wanting to incorporate this lib in a web service.

Currently I see two options for this use case:

  • load the functions used (especially taxing when part of a sizable lib) each time you receive a request
  • create a pool of engines and manage the resources to prevent concurrent access.

The first one not only impacts the response times, it also uses a lot more memory than necessary because of the duplicated stored information, which grows directly with the number of requests.

Pooling would prevent the memory allocation, but it decreases performance because the number of concurrently processed calls is capped once the pool hits its limit.
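
For reference, the pooling option usually takes a shape like this (a sketch under my own assumptions; pool size, warm-up, and how the shared scripts get loaded are all up to the host): a bounded set of engines guarded by a semaphore, so callers block once every engine is busy, which is exactly the throughput ceiling mentioned above.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using Jint;

// Sketch of option 2: a fixed-size pool of single-threaded Engine instances.
public sealed class EnginePool
{
    private readonly ConcurrentBag<Engine> _engines = new ConcurrentBag<Engine>();
    private readonly SemaphoreSlim _available;

    public EnginePool(int size, Func<Engine> warmEngineFactory)
    {
        _available = new SemaphoreSlim(size, size);
        for (var i = 0; i < size; i++)
            _engines.Add(warmEngineFactory()); // e.g. an engine with the shared lib already loaded

    }

    public T Run<T>(Func<Engine, T> work)
    {
        _available.Wait();                 // blocks when the pool hits its limit
        _engines.TryTake(out var engine);  // non-empty while a permit is held
        try
        {
            return work(engine);
        }
        finally
        {
            _engines.Add(engine);
            _available.Release();
        }
    }
}
```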

As to the hit in performance, I would have to run the benchmark :)

mariusdv · Oct 17 '19

Given that a lot of that is related to the need to parse / keep state, can we do things differently? Have a readonly scope of some sort? So I can create an engine, load some functions / scripts, then get that immutable state and reuse it across engines?

ayende · Oct 17 '19

I quite like that approach as well 👍

mariusdv · Oct 17 '19

I think the problem is that even with concurrency support you would suffer from shared or dirty state. Pre-parsed programs (Esprima) should give you quite fast evaluation even if you build an engine per request. There was work done earlier to make engine creation faster, and of course we want all execution to be fast. Single-threaded should be quite fast, and locks would mean slower execution. It would be nice to see some benchmarks that show the use case for sharing these engine instances.
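
A sketch of that parse-once, engine-per-request idea (the parser entry point and the Execute/Evaluate overloads below are assumptions; the exact names differ between Jint 2.x, Jint 3.x and esprima-dotnet versions):

```csharp
using Esprima;
using Jint;

// Sketch: parse the shared library once, then build a fresh Engine per request
// so there is no shared or dirty state to worry about.
public static class PreParsedScripts
{
    // Parsed a single time at startup; the AST is reused for every request.
    // NOTE: assumed parser API, see the caveat above.
    private static readonly Esprima.Ast.Script SharedLibrary =
        new JavaScriptParser("function add(a, b) { return a + b; }").ParseScript();

    public static object HandleRequest(string expression)
    {
        var engine = new Engine();       // never shared between requests
        engine.Execute(SharedLibrary);   // assumed overload accepting a parsed program
        return engine.Evaluate(expression).ToObject();
    }
}
```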

lahma · Oct 17 '19

I also agree there must be a way to reuse "hot" engines that are somehow immutable, probably with some higher-level prototype, or at least an intermediate one. Maybe based on a dedicated mode where the default objects are immutable (global, Error, Math, ...) and only the user context would be released. So even if an engine is not thread-safe, it could at least be pooled and cleaned cheaply, e.g. with a "Freeze" method: from that point on any existing state can't be changed, and only the user code's state can be.

Then we could even imagine freezing after the standard objects are initialized, and providing an automatic engine pool, which would improve all the common scenarios without any custom configuration. We know that creating an Engine already runs a lot of C# code by default (every Constructor object, all the lambdas, ...).

sebastienros · Oct 17 '19

Just throwing out an alternative idea I've been toying around with. You could put the instance of the engine in a Grain using Microsoft Orleans. It should handle the pooling/concurrency issues for you and provide another layer of state sharing/management within the Grain if you need it.

https://github.com/dotnet/orleans https://dotnet.github.io/orleans/Documentation/grains/reentrancy.html
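
A minimal sketch of that idea (the Orleans grain plumbing is standard; the exact Jint evaluation call is an assumption and depends on the Jint version): each grain owns one Engine, and because grains process one request at a time by default, the engine is never touched concurrently.

```csharp
using System.Threading.Tasks;
using Jint;
using Orleans;

public interface IScriptGrain : IGrainWithStringKey
{
    Task<string> EvaluateAsync(string expression);
}

// One Engine per grain activation; Orleans serializes calls to the grain,
// so no locking is needed around the (single-threaded) engine.
public class ScriptGrain : Grain, IScriptGrain
{
    private readonly Engine _engine = new Engine();

    public Task<string> EvaluateAsync(string expression) =>
        Task.FromResult(_engine.Evaluate(expression).ToString()); // assumed Jint 3.x-style API
}
```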

Out of curiosity, does anyone have any metrics around the cost of creating an Engine vs reusing one?

rulehero · Dec 28 '19

Previously, these structures were kept in a simple stack. When running multiple invocations, this stack would be polluted by the different calls.

This would explain #704 that we've run into. @sebastienros I'd love to see this merged, because I think it would enable our use of deferred calls too. In our case we use Jint in a not-ultra-light way, and a small performance hit would be an acceptable trade-off in my opinion.

@mariusdv I assume it would be hard to make this an Engine option?

Ghostbird · Jan 28 '20