tractor
Actor-local / `globals()` variables could use a pkg level API?
There's a section on how this works right now but I'd really like to move to the way `trio` does it.
Namely,

- create per-actor `contextvars`-like variables, similar to the way `trio` does task-local storage; actor-local would be equivalent to `trio`'s run-local variables, for which the code is here
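As a stdlib-only illustration (no `trio` or `tractor` APIs involved), `contextvars.ContextVar` shows the per-context (task-local) scoping model that an actor-local scope would widen to the whole process:

```python
import contextvars

# A context-local variable: each copied context sees its own value.
var = contextvars.ContextVar("var", default="default")
var.set("outer")

ctx = contextvars.copy_context()
ctx.run(var.set, "inner")       # mutation confined to the copied context
outer_value = var.get()         # unchanged in the original context
inner_value = ctx.run(var.get)  # the copy sees its own value
```

An actor-local variable would instead be a single value or mapping visible to every task in the process, rather than isolated per context like this.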
In terms of passing down values to subactors as is done with the `statespace` arg right now, I think a better name for this is simply `vars`, to pair with `ContextVar`?
Re-thinking this after delving into `contextvars` a little more... what we're actually after is a per-process (aka actor) variable scope that can be accessed from all the tasks running in that process. What would also be handy is keeping some kind of history of which task did what, for debugging purposes (ex. logging, remote state tracking).
I'm thinking a `ContextVar`-inspired API but with a `MutableMapping` nod, like:

```python
vars = tractor.LocalVars('vars')
state = vars.setdefault('actor_local_state', 10)
# ... later some other task
state = vars.get('actor_local_state')
```
This is fine, and underneath we can use `contextvars.ContextVar` (if necessary) to keep tabs on the per-task interaction, but the actual data is just a global mapping type that can be modified and used to pass state around. A clear use case I've come across is locally cached network clients that are ideally shared between other actors.
There should in theory be no state clobbering which would require locking since each actor is single threaded.
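A minimal sketch of what such a `LocalVars` type could look like (a hypothetical class, not an existing `tractor` API): just a named, process-global `MutableMapping` around a plain dict:

```python
from collections.abc import MutableMapping

class LocalVars(MutableMapping):
    """Hypothetical: a named, actor-wide (process-global) mutable mapping.

    No locking needed: each actor is single threaded, so tasks can't
    clobber each other's writes mid-operation.
    """
    _registry: dict = {}  # name -> instance, shared process-wide

    def __init__(self, name: str):
        self.name = name
        self._data: dict = {}
        LocalVars._registry[name] = self

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

# mirrors the earlier snippet; MutableMapping supplies .get()/.setdefault()
lvars = LocalVars('vars')
state = lvars.setdefault('actor_local_state', 10)
# ... later, some other task in the same process
state = lvars.get('actor_local_state')
```

The `MutableMapping` mixins give us `get()`, `setdefault()`, `pop()` etc. for free once the five abstract methods are defined.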
I'm almost sold on:

- `blah = tractor.ActorVar('blah', default=10)`
- `stack = tractor.ContextStack('my_stack')`

as module-level style declarations that track actor-level state and context.
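For the `ActorVar` side, a tiny hypothetical sketch (nothing here is an existing `tractor` API) mirroring the `ContextVar` constructor shape, but with one value shared by all tasks in the process:

```python
_MISSING = object()  # sentinel so that None can be a real stored value

class ActorVar:
    """Hypothetical: a named, process-wide variable with an optional default."""

    def __init__(self, name: str, default=_MISSING):
        self.name = name
        self._value = default

    def get(self):
        if self._value is _MISSING:
            raise LookupError(f"ActorVar {self.name!r} has no value set")
        return self._value

    def set(self, value):
        self._value = value

blah = ActorVar('blah', default=10)
initial = blah.get()  # falls back to the default
blah.set(42)          # visible to every task in this actor
updated = blah.get()
```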
For the `ContextStack` I'm thinking of making this a small instance which contains both an `ExitStack` and an `AsyncExitStack`, exposing certain methods from both but allowing flags to defer teardown when actors are run with the new debug mode enabled. This would allow deferring teardown of resources that must be destroyed on process termination but that may need to be introspected from a debugger beforehand.
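A rough sketch of how that deferred-teardown flag could look (hypothetical class and runtime hook; only the sync `ExitStack` path is exercised here since the `AsyncExitStack` side needs a running async loop):

```python
import contextlib

class ContextStack:
    """Hypothetical: pairs an ExitStack with an AsyncExitStack and can
    defer teardown until the actor runtime terminates (e.g. in debug
    mode, so a crashed task can be introspected first)."""

    def __init__(self, name: str, defer_teardown: bool = False):
        self.name = name
        self.defer_teardown = defer_teardown
        self._stack = contextlib.ExitStack()
        self._astack = contextlib.AsyncExitStack()

    def enter_context(self, cm):
        return self._stack.enter_context(cm)

    async def enter_async_context(self, cm):
        return await self._astack.enter_async_context(cm)

    def callback(self, fn, *args):
        return self._stack.callback(fn, *args)

    def close(self):
        # normal close: a no-op while teardown is deferred
        if not self.defer_teardown:
            self._stack.close()

    def final_close(self):
        # hypothetical hook the actor runtime would call at termination
        self._stack.close()

# demo: teardown is skipped at close() and only runs at final_close()
log = []
stack = ContextStack('my_stack', defer_teardown=True)
stack.callback(log.append, 'torn down')
stack.close()
after_close = list(log)   # still empty: teardown was deferred
stack.final_close()       # now the teardown fires
```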
A `ContextStack` would actually be an example of an `ActorVar` which can be accessed by tasks process-wide.
I'm not entirely sure if allowing the user to define their own stack names is necessary.
In theory you probably only need one, unless the user wants some stacks in "don't teardown till after debug" mode while others tear down normally, versus all of them behaving the same all the time as in the singleton actor-wide-stack case.
In the singleton choice you could probably just have a single `(async) with tractor.context_stack() as stack:` style API where, on the `with` close, the teardowns are not invoked immediately but only once the actor runtime is terminated, thus allowing the debugger (or other crash handlers) to engage before killing inter-process resources.
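That singleton style could be sketched like this (hypothetical function and runtime hook, stdlib only), where exiting the `with` block deliberately does not run the registered teardowns:

```python
import contextlib

_actor_stack = contextlib.ExitStack()  # one stack per process (actor)

@contextlib.contextmanager
def context_stack():
    """Hypothetical: yield the actor-wide stack; teardowns registered on
    it are NOT run at `with` exit, only at actor termination."""
    yield _actor_stack

def _terminate_actor_runtime():
    # hypothetical hook the runtime would call on shutdown: run all
    # deferred teardowns now that debuggers have had their chance.
    _actor_stack.close()

log = []
with context_stack() as stack:
    stack.callback(log.append, 'resource released')
after_with = list(log)        # empty: teardown was deferred past the `with`
_terminate_actor_runtime()    # deferred teardowns fire now
```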
Interestingly enough, `trio.lowlevel.RunVar` is a public API, though a niche one according to the `trio` peeps. I'm wondering if just some slight wrapping around this is suitable?
- do we need to handle the case of multiple `trio` runs in a process having separate actor state?
- are there cases where ^ is not desired?
I'm also just noticing now that the stdlib's `contextvars` is actually implemented in C, so there may be slight speed considerations?
Hmm, all this use of "context" leads me to think we should rename our own `Context` to an `IPCContext` or something. `ChannelContext`, `TransportContext`?
Also pretty sure the `current_context()` in that module isn't going to work?
Thinking about this further, I don't think any of this api is really required from the outset :thinking:
We can probably get away with just encouraging the use of plain old module level variables / globals for state "sharing" in each actor (since globals are "global" per process).
The main use for such a system would be to track state changes made by tasks, which can already be done with `trio.RunVar` if needed. So I think we should maybe move toward removing `Actor.statespace` and encouraging use of both module variables and `RunVar`.
Going further along in time, I'm thinking `tractor.LocalVar` (or wtv) will have further value when looking at potentially moving the project towards support for repl-driven-programming.
For example if we want to make it possible to respawn the currently crashed task or whole actor from the debugger / shell, unwinding state changes may be necessary? I guess mostly in the crashed task case.
The new `TreeVar` from `tricycle` might be handy for this as well 🏄🏼
https://tricycle.readthedocs.io/en/latest/reference.html#tree-variables