Who are the competitors?
There are a few "fronts" where this project is seeking to discover new low level solutions to problems in distributed computing. The main distinction is adherence to structured concurrency at the task and process level, such that cancel-able CPU bound work is the new norm; of course we're also strongly tied to the development of trio and its paradigms.

There are quite a few modern projects in this space and there is likely lots to learn from each. I've broken down some categories I've thought of, but of course more are welcome.
actor model (in other languages or Python frameworks)
- pulsar
- spritely: another actor model for racket, but with a focus on object capabilities and something called datashards for decentralized storage (I think) - thanks @salotz
- syndicate: implemented in racket and javascript
- quasar: a lib for java and kotlin, though a lot of the links seem defunct
- dramatiq has actors but appears to be geared for replacing `celery` (thanks @salotz)
- I hadn't noticed before that ray actually has "actors" (thanks to @nmichaud for pointing this out) though it's pretty clear they're not really fitting the academic definition.
  - ray actually is a major competitor in terms of its api simplicity and integration with "data science" related frameworks.
- stateright is a small `rust` actor lib with an embedded model checker and GUI for it (thanks @guilledk)
distributed computing (CSP, RPC, MAS):
python specific
- aiomas: asyncio based MAS and RPC
- rpyc: mature rpc framework in python
- execnet: legacy inter-interpreter IPC system from the original creator of pytest
- purerpc: an anyio based implementation of gRPC
other langs, protocols, rpcs
- tonic: sweet `rust` gRPC implementation
- apache thrift: autogens rpc clients/servers (awful looking tho)
- DRPC: a gRPC replacement / alternative
- the ever glorious capnproto
worker pool / parallel job processing
- distex
- dask
  - there they seem to have chosen the same `forkserver` methodology as us - makes me wonder if they just don't care about the many forkserver issues I've run into? (a stdlib sketch of this spawn method follows this list)
  - they also seem to have an `Actor` system though it looks to be some kind of strange RPC proxy interface?
  - there's a supervisor type called a `Nanny`
- joblib which builds on loky - thanks @phaustin
- arq is job queues and RPC in python with asyncio and redis.
  - a simpler and clearer rewrite of `rq`
  - interestingly they use the old approach we had of calling remote funcs by name vs. what everyone wanted with #69 / #174, though we technically did keep this style through `Portal.run_from_ns()` (see the second sketch after this list)
  - almost identical first example to what i've been drafting in #53
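Since the `forkserver` point comes up for both dask and us, here's a minimal stdlib-only sketch of that spawn method for reference (nothing dask- or tractor-specific, just `multiprocessing`):

```python
# forkserver start method: one clean "server" process is spawned up
# front, and every worker is forked from it rather than from the
# (possibly thread- and state-heavy) parent process
import multiprocessing as mp

def cpu_task(n: int) -> int:
    # stand-in for real CPU bound work
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    ctx = mp.get_context('forkserver')
    with ctx.Pool(processes=2) as pool:
        print(pool.map(cpu_task, [10_000, 20_000]))
```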
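And a toy contrast of the two RPC invocation styles mentioned in the arq bullet - calling by name vs. passing the function object. These helpers are hypothetical illustrations, not arq's or tractor's actual API:

```python
# toy illustration of the two dispatch styles (hypothetical helpers)
import importlib
import math

def run_by_name(modname: str, funcname: str, *args):
    # the call-by-name style: caller ships strings and the "remote"
    # side resolves them to a function at call time
    mod = importlib.import_module(modname)
    return getattr(mod, funcname)(*args)

def run_func(func, *args):
    # the pass-the-function style: typos fail early at the call site
    # and tooling can follow the reference
    return func(*args)

print(run_by_name('math', 'sqrt', 16))  # -> 4.0
print(run_func(math.sqrt, 16))          # -> 4.0
```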
cancel-able cpu bound work
- thredo with its cancel-able threads seems to be getting some interest
data streaming / pipelining
- faust which has been forked by some do-gooders
- streamz
  - has dataframe streaming support which is super interesting (a toy pipeline follows this list)
- eventkit
- RxPy
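For flavour, the core streamz pipeline idea looks roughly like this (based on their documented hello-world; the dataframe support layers on top of these plain streams):

```python
# push-based pipeline: values emitted at the source flow through the
# mapped transform and into the sink
from streamz import Stream

source = Stream()
source.map(lambda x: x * 2).sink(print)

for i in range(3):
    source.emit(i)  # prints 0, 2, 4
```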
self replicating, take over the world, skynet type stuff:
alternative concurrency models
- monte: object capability language in pypy with python-like syntax but in the E-lang tradition. The concurrency story is interesting here with the idea of "vats" which is worth reading about. - thanks @salotz
Also worth mentioning: https://joblib.readthedocs.io/en/latest/ which uses https://loky.readthedocs.io/en/stable/
@phaustin slick, thanks for the links. Added them in.
Some more actor model options from different languages suggested by @salotz:

quasar isn't defunct, it's just that the main author went off to work on project Loom for the JVM so that quasar actually has a purpose (unlike python "threads"). Syndicate is interesting because of some of the design patterns over actors that it provides for distributed systems along the lines of tuple spaces. Let me add a few more things with interesting links to actor modelly things:
- https://gitlab.com/spritely : another actor model for racket, but with a focus on object capabilities and something called datashards for decentralized storage (I think)
- https://www.monte-language.org/ : object capability language in pypy with python-like syntax but in the E-lang tradition. The concurrency story is interesting here with the idea of "vats" which is worth reading about.
As usual I have no code but more projects to compare to :P
https://dotnet.github.io/orleans/Documentation/index.html
Also the Akka documentation seems thorough and is in general I think a success story (albeit super heavyweight):
https://doc.akka.io/docs/akka/current/general/actor-systems.html
@salotz hehe fine by me. I actually mention akka and orleans in #18, funny enough. I guess linking the discussions is of use to some degree.
I started digging through the core of locust and was interested in how they do their multi-processing stuff, only to find out that they kinda don't do anything, heh, despite having a pretty cool design what with gevent and zeromq :smile: for their "worker pool" thing.

I just so happened to discover (whilst looking for the non-existent process spawning code) that gevent has a coolio process spawning system, `gevent.os.fork()`, which seems to do some supervisory management stuff. At the very least I'd like to dig through the code in preparation for #117.
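As a note-to-self, a minimal sketch of what that looks like, assuming only the documented `gevent.os.fork()` / `gevent.os.waitpid()` pair:

```python
# cooperative fork + reap under gevent: the parent's waitpid yields to
# the event loop instead of blocking the whole process
import os
import gevent
import gevent.os

def supervise():
    pid = gevent.os.fork()  # sets up a child watcher in the parent
    if pid == 0:
        # child process: do some work then exit immediately
        print("child pid:", os.getpid())
        os._exit(0)
    # parent: gevent's cooperative waitpid; other greenlets keep running
    gevent.os.waitpid(pid, 0)
    print("child reaped")

gevent.spawn(supervise).join()
```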
This is a cool blog post by the ray peeps that goes through comparisons with `multiprocessing`. Obviously they're cheating to some degree since the core of ray is C++.

I think doing the same but with the `SharedArray` from piker and tractor would be a great exercise for both bench-marking and API scrutiny.
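A rough sketch of the kind of benchmark I mean, using only the stdlib's `multiprocessing.shared_memory` (not piker's `SharedArray`) so the array bytes cross the process boundary without being pickled:

```python
# share a numpy array with a worker process via a named shared memory
# block; only the block's name/shape/dtype are pickled, not the data
import numpy as np
from multiprocessing import Process, shared_memory

def worker(name: str, shape, dtype):
    shm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr *= 2   # mutate in place; the parent sees the result
    del arr    # drop the view before closing the mapping
    shm.close()

if __name__ == '__main__':
    a = np.arange(10, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
    buf = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
    buf[:] = a
    p = Process(target=worker, args=(shm.name, a.shape, a.dtype))
    p.start()
    p.join()
    print(buf)  # doubled values
    del buf
    shm.close()
    shm.unlink()
```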
An interesting alternative in the Scala language is zio [1]. It uses structured concurrency and makes this statement [2]:

> ZIO has support for structured concurrency. The way ZIO structured concurrency works is that the child fibers are scoped to their parent fibers which means when the parent effect is done running then its child's effects will be automatically interrupted. So when we fork, and we get back a fiber, the fiber's lifetime is bound to the parent fiber that forked it. It is very difficult to leak fibers because child fibers are guaranteed to complete before their parents.
>
> Structured concurrency gives us a way to reason about fiber lifespans. We can statically reason about the lifetimes of children fibers just by looking at our code. We don't need to insert complicated logic to keep track of all the child fibers and manually shut them down.
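For comparison, the trio analogue of that guarantee, which is exactly the model tractor extends to processes: child tasks are scoped to the nursery that spawned them, so when the parent scope ends the children cannot outlive it:

```python
# children are bound to the nursery; cancelling the enclosing scope
# cancels them, so no task can leak past its parent
import trio

async def child(n: int):
    await trio.sleep(n)
    print(f"child {n} done")

async def main():
    with trio.move_on_after(1):  # parent scope with a 1s deadline
        async with trio.open_nursery() as nursery:
            for n in (2, 3):
                nursery.start_soon(child, n)
    # both children were still sleeping at the deadline and got cancelled
    print("parent scope exited; no leaked tasks")

trio.run(main)
```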
See also: https://github.com/goodboy/tractor/issues/18 https://github.com/goodboy/tractor/issues/118
[1] https://github.com/zio/zio
[2] https://github.com/zio/zio/blob/master/docs/datatypes/fiber/index.md#structured-concurrency