Reasoning behind the push-then-pull model
I’d like to better understand the reasoning behind the push-then-pull model specified in this proposal. I’m looking for a strong justification for it beyond it being novel.
From my understanding, signals seem to be a push-based system much like observables, and the benefit of signals over observables would be their ergonomics and resource management. That is my understanding, but I am open to being corrected on it.
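For example, the kind of difference I have in mind (the observable side here uses RxJS purely for illustration, and the signal side assumes the `Signal.State`/`Signal.Computed` API from this proposal, e.g. via a polyfill):

```ts
import { BehaviorSubject, combineLatest, map } from "rxjs";

// Observable-style: the dependency graph is wired up explicitly and the
// subscription has to be disposed of by hand.
const first$ = new BehaviorSubject("Ada");
const last$ = new BehaviorSubject("Lovelace");
const full$ = combineLatest([first$, last$]).pipe(
  map(([first, last]) => `${first} ${last}`),
);
const sub = full$.subscribe((name) => console.log(name));
sub.unsubscribe(); // manual resource management

// Signal-style (this proposal): dependencies are tracked implicitly by
// calling .get() inside the computed, and an unwatched computed can simply
// be garbage collected.
const first = new Signal.State("Ada");
const last = new Signal.State("Lovelace");
const full = new Signal.Computed(() => `${first.get()} ${last.get()}`);
full.get(); // "Ada Lovelace"
```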
My understanding of the push-then-pull model is that dirty flags are first pushed through the graph, and then only the signals attached to some watcher endpoints (effects) pull from the dirty parts of the graph. The pull starts at the beginning of the graph, and state changes propagate down the dirty paths until they reach either the end of the graph or a node whose state hasn’t changed (a === comparison).
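To make sure I’m describing the same mechanism, here is a deliberately stripped-down sketch of how I picture it; the classes and names are made up for illustration, not taken from the proposal’s internals:

```ts
interface Readable<T> {
  readonly sinks: Set<ComputedCell<unknown>>;
  pull(): T;
}

class StateCell<T> implements Readable<T> {
  readonly sinks = new Set<ComputedCell<unknown>>();

  constructor(private value: T) {}

  set(next: T): void {
    if (next === this.value) return; // nothing changed, nothing to push
    this.value = next;
    // Push phase: only a cheap dirty flag travels through the graph.
    for (const sink of this.sinks) sink.invalidate();
  }

  pull(): T {
    return this.value;
  }
}

class ComputedCell<T> implements Readable<T> {
  readonly sinks = new Set<ComputedCell<unknown>>();
  private dirty = true;
  private cached!: T;
  private lastInputs: unknown[] | null = null;

  constructor(
    private fn: (...inputs: unknown[]) => T,
    private sources: Readable<unknown>[],
  ) {
    for (const s of sources) s.sinks.add(this);
  }

  invalidate(): void {
    if (this.dirty) return;
    this.dirty = true;
    for (const sink of this.sinks) sink.invalidate();
  }

  // Pull phase: entered only when a watched endpoint reads a value. The walk
  // recomputes along the dirty path, but if no input actually changed
  // (a === comparison), the cached value is kept and the change effectively
  // stops propagating at this node.
  pull(): T {
    if (this.dirty) {
      const inputs = this.sources.map((s) => s.pull());
      const unchanged =
        this.lastInputs !== null &&
        inputs.every((v, i) => v === this.lastInputs![i]);
      if (!unchanged) {
        this.lastInputs = inputs;
        this.cached = this.fn(...inputs);
      }
      this.dirty = false;
    }
    return this.cached;
  }
}
```

In this picture a write is cheap (flags only), and real work happens only along paths that are both pulled and actually changed.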
If this understanding is correct, it seems to assume that some optimization is to be had by eagerly propagating cache invalidation and then lazily propagating state changes. This would be an optimization under the assumption that portions of the graph may be frequently idle (not watched), only to become active when an effect is “mounted”, which activates those portions of the graph.
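Concretely, the scenario I have in mind looks something like this, using this proposal’s `Signal.subtle.Watcher` (my effect wiring here is simplified, so apologies if I’m misreading the scheduling details):

```ts
// Assuming the Signal.State / Signal.Computed / Signal.subtle.Watcher API
// from this proposal (or a polyfill of it).
const count = new Signal.State(0);
const doubled = new Signal.Computed(() => count.get() * 2);

// While nothing watches `doubled`, writes to `count` only flip dirty flags;
// the computed function never runs.
count.set(1);
count.set(2);

// "Mounting" an effect activates this portion of the graph.
const watcher = new Signal.subtle.Watcher(() => {
  // Reads are deferred to a microtask because the notify callback itself
  // is not supposed to read or write signals.
  queueMicrotask(() => {
    for (const dirty of watcher.getPending()) dirty.get(); // pull phase
    watcher.watch(); // re-arm notifications for the next change
  });
});
watcher.watch(doubled);

doubled.get(); // 4 — pulled now that something cares
count.set(3);  // push dirty flag -> watcher notified -> pull on the microtask
```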
From this, I conclude that this proposal suggests signals as a state management system where the application’s state control flow is held in memory at all times, but the propagation of state through that control flow is inactive until actively enabled by some watcher (effects). If this is a valid assessment, then my curiosity is: why not leverage the language for this design? Control flow can exist as function references in memory, and state only exists at the endpoints. Allow me to elaborate…
If a portion of the graph is not active, because no endpoints are watching it, then why hold references to this sub-graph in memory at all? If no in-memory references to the signal instances in this sub-graph are held, then the GC should clean them up. I could imagine that references to them are held in memory while waiting for some event to mount a watcher or effect that activates them, but then the question is: why track cache invalidation for them with a dirty flag? If they were inactive before the effect/watcher was mounted, then they are dirty simply because they haven’t been observed yet. So we only seem to need to track whether a signal is being observed/pulled by some endpoint in the graph. If it is not, then sources shouldn’t propagate changes (nor dirty flags) to it. If a signal is actively connected to some endpoint, then sources will push the changes. This avoids the two-step push-then-pull process and simplifies it to push-but-only-where-necessary.
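To sketch what I mean (the names and shape here are made up, not a proposed API): sources keep a set of active observers and push values only to them, and a derived node connects upstream only while something downstream observes it, so an inactive sub-graph receives nothing and is free to be collected:

```ts
type Listener<T> = (value: T) => void;

class PushSignal<T> {
  private listeners = new Set<Listener<T>>();

  constructor(private value: T) {}

  get(): T {
    return this.value;
  }

  set(next: T): void {
    if (next === this.value) return;
    this.value = next;
    // Push only into the active portion of the graph: nodes with no
    // listeners never hear about this write, and if nothing references
    // them they are free to be garbage collected.
    for (const listener of this.listeners) listener(next);
  }

  observe(listener: Listener<T>): () => void {
    this.listeners.add(listener);
    listener(this.value); // it was "dirty" only in the sense of never observed
    return () => this.listeners.delete(listener);
  }
}

// A derived node connects upstream only while something downstream observes
// it, so an unwatched sub-graph receives no pushes (and needs no dirty flags).
class DerivedSignal<A, B> {
  private listeners = new Set<Listener<B>>();
  private disconnect: (() => void) | null = null;
  private value: B;

  constructor(private source: PushSignal<A>, private fn: (a: A) => B) {
    this.value = fn(source.get());
  }

  observe(listener: Listener<B>): () => void {
    if (this.listeners.size === 0) {
      // First endpoint mounted: activate this portion of the graph.
      this.disconnect = this.source.observe((a) => {
        const next = this.fn(a);
        if (next === this.value) return; // unchanged, stop here
        this.value = next;
        for (const l of this.listeners) l(next);
      });
    }
    this.listeners.add(listener);
    listener(this.value);
    return () => {
      this.listeners.delete(listener);
      if (this.listeners.size === 0 && this.disconnect) {
        this.disconnect(); // detach: back to being idle and collectable
        this.disconnect = null;
      }
    };
  }
}
```

In this model, the question “is this sub-graph active?” is answered by whether anything holds a subscription to it, which the language already tracks for free through references and GC.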
If I’m missing something critical to understanding the advantages of a push-then-pull model over a push model that tracks the active portions of the graph for optimization, then please share that insight.