Add `EagerEffect` for convenience
Thanks for this proposal. As with everyone active in this space, I'm very excited about Signals and the prospect of adding them to the language.
I understand the reasoning for not including a `Signal.Effect`. As stated in the proposal:
Effects are implemented in terms of these reactions, plus framework-level scheduling.
However, use cases exist where an eager effect implementation is desirable. For example:
- You want to base framework-level effects on eager changes of signals.
- You're creating a console application that updates details about the progress of a background process based on signals.
- You're implementing a library that uses signals and want to unit test their effects.
By example
To illustrate the second use case, say you have this scenario:
```js
const TOTAL_ITEMS = 6942;
const itemsDone = new Signal.State(0);
const progressPercentage = new Signal.Computed(() => (itemsDone.get() / TOTAL_ITEMS) * 100);

// Just for the demo:
const interval = setInterval(() => {
  itemsDone.set(itemsDone.get() + 1);
  if (itemsDone.get() === TOTAL_ITEMS) {
    clearInterval(interval);
  }
}, 10);
```
You want to update the console's progress bar based on changes in the `itemsDone` signal.
Current situation
Currently, the only way to implement this is using a `Signal.subtle.Watcher`:
```js
// 1. Create the watcher
const watcher = new Signal.subtle.Watcher(() => {
  // 5. Schedule the effect to run in the next microtask, since currently, the signal state is dirty and may not be read
  queueMicrotask(() => {
    for (let signal of watcher.getPending()) {
      signal.get();
      // 6. Continue watching the signal in the future
      watcher.watch();
    }
  });
});

// 2. Create the 'effect signal' to watch
const computedToWatch = new Signal.Computed(() => {
  process.stdout.clearLine(0);
  process.stdout.cursorTo(0);
  process.stdout.write(`Progress: ${progressPercentage.get().toFixed(2)}%`);
});

// 3. Subscribe the watcher to the changes
watcher.watch(computedToWatch);

// 4. Activate the watcher
computedToWatch.get();
```
This way of working is a bit convoluted. As you can see, it depends on developers understanding the 6 steps and their nuances.
Proposed situation:
```js
const effect = new Signal.EagerEffect(() => {
  process.stdout.clearLine(0);
  process.stdout.cursorTo(0);
  process.stdout.write(`Progress: ${progressPercentage.get().toFixed(2)}%`);
});

// Whenever the console app is done with the progress reporting:
effect.dispose();
```
We could do better by creating the `EagerEffect` class. I'm calling it `Eager` to distinguish it from its framework brothers and because it eagerly pulls all signals (i.e. it is no longer 'lazy').
Alternatives
- One alternative would be to add a `subscribe` method to signals (a rough sketch follows after this list). This also exists in preact-signals (though not in their public docs?). There may be other implementations that use it. That would work great for the use cases listed here as well.
- A small library in userland could also implement this `EagerEffect` proposal. However, I hope the use cases described here are common enough to not have to rely on userland implementations.
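To make that first alternative concrete, here is a rough, hypothetical sketch of what a `subscribe` method could look like. The method does not exist in the current proposal; its shape here (callback receives the current value, a disposer function is returned) is loosely modeled on preact-signals' `subscribe`:
```js
// Hypothetical API sketch: `subscribe` is not part of the proposal.
const unsubscribe = progressPercentage.subscribe((value) => {
  process.stdout.clearLine(0);
  process.stdout.cursorTo(0);
  process.stdout.write(`Progress: ${value.toFixed(2)}%`);
});

// Whenever the console app is done with the progress reporting:
unsubscribe();
```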
Ah, I was just coming in to say something to this effect, and you gave a much more detailed suggestion!
I completely understand the desire to leave good scheduling to frameworks and just provide them the tools to integrate the signals into their own scheduling systems, but a trivial way to create simple effects without needing a framework is, imo, a requirement for this proposal.
For example, I'm currently making a simple tool for a game I'm playing, that lets users set the levels of several items they might have and returns an optimal loadout. Using signals for this seems nice and convenient: set up a `State` for each of the inputs, updated in `oninput` handlers, and then a `Computed` that queries all the states and calculates the loadout. I then want to re-render the output table whenever that changes.
Right now I'd have to, as you show, carefully thread together some signals with a watcher. But I'm not doing anything complicated, or using any framework at all; I'm just authoring some vanilla JS and want to clear out a `table` element and regen its contents whenever that would give a new result.
Even if I've got a handful of things across a page that might want to respond to data changes and regen themselves, if I'm not doing anything complex I probably don't care about the exact timing. I'd be fine with a nice simple scheduler that just made things Work, and by the point I'm doing something complicated, I probably want to use a framework anyway (or am willing to do the work to set up a scheduler "correctly" for myself).
A simple `effect` can be created with `Computed` + `Watcher`; see the example in the readme. Does that suffice or would an explicit API still be preferable?
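For reference, that `Computed` + `Watcher` pattern looks roughly like the following. This is a paraphrased sketch rather than the canonical readme code; see the readme for the authoritative version:
```js
import { Signal } from "signal-polyfill";

let needsEnqueue = true;

const watcher = new Signal.subtle.Watcher(() => {
  if (needsEnqueue) {
    needsEnqueue = false;
    queueMicrotask(processPending);
  }
});

function processPending() {
  needsEnqueue = true;
  for (const signal of watcher.getPending()) {
    signal.get(); // re-evaluate each dirty effect Computed
  }
  watcher.watch(); // re-arm notifications for future changes
}

export function effect(callback) {
  const computed = new Signal.Computed(callback);
  watcher.watch(computed);
  computed.get(); // run once to establish dependencies
  return () => watcher.unwatch(computed); // caller-managed disposal
}
```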
I don't think we want an explicit API, because there may be multiple ways to watch for changes, depending on renderer implementations.
In any fine-grained renderer, tho, I would expect that this implementation of a reactive function invocation would "just work": https://github.com/NullVoxPopuli/signal-utils/blob/main/src/async-function.ts#L18. This is an impure computed which would allow for this:
```js
import { Signal } from 'signal-polyfill';
import { signalFunction } from 'signal-utils/async-function';

const url = new Signal.State('...');
const signalResponse = signalFunction(async () => {
  const response = await fetch(url.get()); // entangles with `url`
  // after an await, you've detached from the signal-auto-tracking
  return response.json();
});

// output: true
// after the fetch finishes
// output: false
```
```
<template>
  <output>{{signalResponse.isLoading}}</output>
</template>
```
In this demo, I'd assume that the renderer knows to create a `Computed` for each `{{ }}` cell, and use a `Watcher` for changes (and then probably batch or allow them to settle in some way (maybe via a render-aware scheduler)).
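As a rough illustration (not how any particular renderer actually works), such a binding layer might look like this, with a shared `Watcher` batching re-renders per microtask:
```js
import { Signal } from "signal-polyfill";

let queued = false;
const rendererWatcher = new Signal.subtle.Watcher(() => {
  if (!queued) {
    queued = true;
    queueMicrotask(() => {
      queued = false;
      for (const binding of rendererWatcher.getPending()) binding.get();
      rendererWatcher.watch(); // re-arm for future changes
    });
  }
});

// One Computed per template cell; it re-runs when its signals change.
function bindText(node, getText) {
  const binding = new Signal.Computed(() => {
    node.textContent = getText(); // impure on purpose: this is the cell's render
  });
  rendererWatcher.watch(binding);
  binding.get(); // initial render + dependency tracking
  return () => rendererWatcher.unwatch(binding);
}

// e.g. for <output>{{signalResponse.isLoading}}</output>:
// bindText(outputNode, () => String(signalResponse.isLoading));
```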
I love this pattern, especially that it is a class, and allows subclassing and adding features, already including a dispose method.
Imo this should also allow the concept of a tree of effects, such that nested effects are automatically cleaned up when a parent effect is disposed.
I'm not sure why it is called "eager" though. To me "eager" means a synchronous effect that runs immediately after the set of a signal (causing potential glitches).
Maybe `MicrotaskEffect` is much clearer?
Here is what my ideal basic effect would be, renamed, and including child ownership. In the next example, assume that `c` updates much more quickly than `a` or `b`:
```js
const outer = new MicrotaskEffect(() => {
  console.log(a.get(), b.get())

  const inner = new MicrotaskEffect(() => {
    console.log(a.get() + c.get())
  })

  // ...Other child effects...

  // Later
  inner.dispose()

  // API to optionally provide cleanup logic that runs before the next run, or on dispose.
  cleanup(() => {
    // ...
  })
})

// Later
outer.dispose()
```
If `inner.dispose` gets called before `outer.dispose`, the inner effect is stopped (and eventually GC'd). Other child effects (not depicted) inside of `outer` will keep going if they weren't stopped.
If the outer effect re-runs, all previous inner effects in the tree are disposed, and the whole thing starts over; all child effects are created anew (all previous ones eventually get GC'd).
If `outer.dispose` is called before `inner.dispose`, the whole tree of effects is disposed, just the same as if the outer effect were going to rerun (but this time it won't rerun, because it just got disposed). All effects, including the outer effect, will eventually be GC'd.
This basically would be similar to Solid 1.0 trees of effects, except
- without needing a createOwner API for making a special type of root
- this new one has the concept of an effect being a reference and hence having methods like dispose(), whereas Solid's has no disposal (only a root containing a tree of effects can be disposed).
- Solid 1.0 effects were historically not microtask scheduled but synchronous (will be in 2.0)
With this concept of an effect being disposable, and the concept of a hierarchy of effects, a root effect can replace the concept of a special non-effect root node (a root owner is now simply the root effect in a tree of effects).
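To make the idea concrete, here is a minimal sketch of such a `MicrotaskEffect` with child ownership, built on the polyfill's public API. It is not the proposal: it assumes one `Watcher` per effect is acceptable, that `Watcher.prototype.watch` may be called while a parent effect's body is running, and it uses `Signal.subtle.untrack` so that creating a child effect does not entangle the parent's and child's computeds. The `cleanup()` registration from the example above is omitted:
```js
import { Signal } from "signal-polyfill";

let currentParent = null;

export class MicrotaskEffect {
  #children = new Set();
  #computed;
  #watcher;
  #disposed = false;

  constructor(fn) {
    // Register under the effect whose body is currently running, if any.
    if (currentParent) currentParent.#children.add(this);

    this.#computed = new Signal.Computed(() => {
      // A re-run disposes the children created by the previous run.
      this.#disposeChildren();
      const prev = currentParent;
      currentParent = this;
      try {
        fn();
      } finally {
        currentParent = prev;
      }
    });

    this.#watcher = new Signal.subtle.Watcher(() => {
      // Signals may not be read while the notify callback runs; defer to a microtask.
      queueMicrotask(() => {
        if (this.#disposed) return;
        this.#computed.get();
        this.#watcher.watch(); // keep watching future changes
      });
    });

    this.#watcher.watch(this.#computed);
    // Initial run. `untrack` keeps a parent effect (if any) from treating this
    // child computed as one of its own dependencies.
    Signal.subtle.untrack(() => this.#computed.get());
  }

  #disposeChildren() {
    for (const child of this.#children) child.dispose();
    this.#children.clear();
  }

  dispose() {
    if (this.#disposed) return;
    this.#disposed = true;
    this.#disposeChildren();
    this.#watcher.unwatch(this.#computed);
  }
}
```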
If in the future we add features using an options object,
```js
new Effect(() => {...}, {
  scheduler: optionalScheduler, // default to single run in next microtask
  pausable: trueOrFalse, // default to false (@fabiospampinato)
  parent: otherEffectOrNull, // manually set the parent effect (useful for breaking out of a tree, f.e. set to null for things like client-side routing)
})
```
could the set of particular behaviors and features that various frameworks need possibly be covered via different sets of options?
My preferences match those of @NullVoxPopuli and @sorvell above, but the good thing about this suggestion is that it forms a simple addition which layers/composes well with the rest of the proposal. Let’s try it out along with frameworks which integrate the polyfill and see if the downsides are overstated.
If anyone's curious -- I prototyped a simple effect here (based off the example in the README as well as tests for the polyfill) and left some comments about the tradeoffs / dangers: https://github.com/NullVoxPopuli/signal-utils/blob/main/src/subtle/microtask-effect.ts
I made a JSBin: https://jsbin.com/safoqap/6/edit?html,output for folks to test it out
It has been suggested to merge the conversation from #149 into this thread. So...
My primary criticism/issue with the proposed spec as-is is that it relies on external dependencies to really make it useful. When building an NPM package, it is reasonable to just provide an API that still needs other packages for specific usages/implementations of your package.
When establishing a new ECMAScript standard / browser API, it is not reasonable for that API to be dependent on libraries for actual use.
I do not think the pattern with `.subtle` is bad. I think it's fantastic to have access to "under the hood" for better integration with frameworks and the like. But having that low-level API available does not and should not mean that no basic effect should be available as part of the browser APIs. Similarly, a browser API being available does not preclude the solution from offering low-level integrations to frameworks.
JavaScript exists outside of the NPM ecosystem, and (new) browser APIs should allow for direct usage. Some kind of base effect should be available as part of the browser API, so that Signals are usable for prototyping in devtools/codepen/non-compiled JS.
(Yes, the `.subtle` APIs are available to create your own effect with a 'small amount of code', but those APIs are not user-friendly/reasonable for userland. Even the example in the readme that is clearly noted as "too basic to be useful" is not easy to read or intuitive to come up with.)
Frameworks do a bunch of stuff. I don't think it's feasible that we provide built-in mechanisms for all of their functionality in one proposal. I definitely don't want to claim that everything is being solved when we only solve part of it (as we might be tempted to do if we introduce a built-in naive `effect`).
Thanks for all the reactions 💐
@sorvell
A simple `effect` can be created with `Computed` + `Watcher`; see the example in the readme. Does that suffice or would an explicit API still be preferable?
This is true. As stated in my original post, an `effect` can absolutely be created in userland. However, I think the use cases are common enough to warrant a simple effect implementation.
@NullVoxPopuli
I don't think we want an explicit API, because there may be multiple ways to watch for changes, depending on renderer implementations.
I've provided two use cases where we don't have a renderer. A simple effect is helpful for non-rendering (and simple rendering...?) use cases.
If anyone's curious -- I prototyped a simple effect here (based off the example in the README as well as tests for the polyfill) and left some comments about the tradeoffs / dangers: https://github.com/NullVoxPopuli/signal-utils/blob/main/src/subtle/microtask-effect.ts
Indeed, there is a significant memory leak. With my example:
```js
effect(() => {
  process.stdout.clearLine(0);
  process.stdout.cursorTo(0);
  process.stdout.write(`Progress: ${progressPercentage.get().toFixed(2)}%`);
});
```
After a few iterations:
```
node effect.js
Progress: 0.36%
<--- Last few GCs --->
[6341:0x5c3b220] 8537 ms: Mark-Compact 2016.8 (2048.9) -> 1949.4 (1982.0) MB, 274.87 / 0.00 ms (average mu = 0.237, current mu = 0.167) allocation failure; scavenge might not succeed
[6341:0x5c3b220] 9015 ms: Mark-Compact 2075.3 (2107.9) -> 1991.3 (2023.7) MB, 412.38 / 0.00 ms (average mu = 0.185, current mu = 0.138) allocation failure; scavenge might not succeed
```
The memory leak seems to originate from your implementation of `flushPending`:
```diff
function flushPending() {
  for (const signal of watcher.getPending()) {
    signal.get();
    // Keep watching... we don't know when we're allowed to stop watching
-   watcher.watch(signal);
+   watcher.watch();
  }
}
```
Your example shows that we do need something like a simple `effect`, as it demonstrates how easy it is to mess it up. Without it, Signals are only helpful when combined with an `effect` in userland.
What about my first proposed alternative: adding a `subscribe` method to Signals? It might be easier to reason about/implement, and it would cover the described use cases as well.
Ow, 😅 I was writing a bunch of stuff and didn't see @littledan's reaction sneaking in:
I don't think it's feasible that we provide built-in mechanisms for all of their functionality in one proposal.
My issue mostly concerns the non-framework use cases, so use cases 2 and 3 in my original post.
The memory leak seems to originate from your implementation of flushPending:
Thanks! I've fixed that here, actually: https://github.com/NullVoxPopuli/signal-utils/pull/37
But there is still a memory leak. What is going to do the final unwatch of an effect? Each `effect()` call adds watchers, and they're never cleaned up.
@wimbarelds' concerns are quite understandable imho.
If the proposal serves mainly (if not only) frameworks, then it shouldn't be a core language thing; as I understand it, the language should remain accessible for all to use as is, not only to build upon.
Don't get me wrong, I come from 20 years of FE, I lived the days before this, so I welcome this with open hands. But I'm concerned that this would be added to the language when it should remain a library, since it's for other libraries to use.
I don't think TC39 proposals so far have included candidates which add framework-oriented APIs? (I may be wrong, I'm not that close to it.)
If the proposal has to land, it should be usable in its most basic form with effects too (and probably without the subtle namespace; I did read the #122 issue about babysitting so I won't extend further).
What, exactly, are the downsides of baking in the effect spec and driving it via the microtask queue?
As far as I can tell right now (please feel free to correct me), if that were specified, then we'd get the following.
Advantages:
- built-in, works everywhere
- makes async computeds first-class
- benefits from the robust work that's gone into scheduling promises
Neutral:
- framework and application developers will need to build a wrapper if they want to throttle output with some delay, like on the requestAnimationFrame timing or only when there are N effects to push or whatever
Downsides:
- no way to do blocking, synchronous signals
- extremely dense signal graphs may take too long and tie things up
If that's accurate:
- I'm not convinced that the downsides are really downsides. Those are both situations where signals may just not be the right tool for the job, or where the solution is to refactor on the application side.
- It appears the spec is focused on what I perceive as a neutral issue, and is conflating IO with subscribing to signal updates. The spec, rightfully, should not dictate when to render, but that has nothing at all to do with dictating when to schedule effects (in the sense of causing the pull mechanism to fire).
@dakom how do you tell the effect to stop watching?
(Like, when you're done with it, or going to a new page or something)
@NullVoxPopuli unsubscribe can be done with a callback. I've created a PR to illustrate: https://github.com/NullVoxPopuli/signal-utils/pull/39
Ye, I only meant that it's up to the user to manage unsubscribe, and without a universal lifetime implementation, it's gonna be tricky to manage for multi-framework or framework-agnostic libraries.
(Pushing unsubscribe into consumer space is common, but prone to memory leaks, as everyone makes mistakes.)
Yes, but there are other use cases for which this `effect`/`Signal.prototype.subscribe` API would be more beneficial. I'm primarily focused on use cases 2 and 3 from my original post.
I.e. it makes sense for console applications and library authors that want to test their signals to manage unsubscribing themselves.
Ye, I only meant that it's up to the user to manage unsubscribe, and without a universal lifetime implementation, it's gonna be tricky to manage for multi-framework or framework-agnostic libraries.
(Pushing unsubscribe into consumer space is common, but prone to memory leaks, as everyone makes mistakes.)
Unsubscribe always lives in userspace to some degree. In the case of component frameworks, it is usually instances of an app/render context, and in SPAs those never tend to die; but if/when they do, you need to dispose of them to avoid memory leaks.
Hell, even outside of signals, React effects explicitly allow a cleanup function, generally meant for unsubscribes as well. Similarly, setInterval also needs to be explicitly killed in userland.
Users having to do some amount of lifecycle management is inevitable with signals. That's fine so long as this has somewhat elegant APIs and good documentation.
…if/when they do, you need to dispose of them to avoid memory leaks.
In templating frameworks, disposal is automatic at the cell or block level, for example:
```
{{#if condition}}
  when condition becomes false
  cleanups and disposal run automatically,
  ie: this is not userland behavior
{{/if}}
```
…if/when they do, you need to dispose of them to avoid memory leaks.
In templating frameworks, disposal is automatic at the cell or block level, for example:
{{#if condition}} when condition becomes false cleanups and disposal run automatically, ie: this is not userland behavior {{/if}}
I'm not sure what you mean precisely by "templating frameworks". I think we're loosely talking about similar things but viewing them from different angles.
When you say disposal is automatic at the block level, I think what you're referring to is something like "when the component is disposed of" (when the template stops being rendered and stops being updated with new signal values).
Yes, when a block is disposed of, all effects inside of that block are also disposed of. But that block needs to be disposed of. Probably that block is a child of another block, and when that block is disposed of, this one will be too. However, ultimately there is a root block/context, and if you ever stop needing it (uncommon in SPAs), you do need to dispose of it in userland.
@dakom how do you tell the effect to stop watching?
I see it as very similar to event listeners. Nothing in the spec prevents users from registering anonymous functions or writing code that accumulates garbage indefinitely.
I think it's important that the spec provide a way to de-register effects, and it's on the users/framework authors to come up with elegant abstractions that handle this automatically.
For another comparison: the fetch API builds on the microtask queue and supports cancellation. I personally think it's a good idea for someone writing a fetch "component" to cancel the fetch when the component is unmounted, but the spec doesn't force that or dictate when it happens; it just makes it possible.
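For reference, that fetch analogy looks like this: cancellation is available via `AbortController`, but nothing forces the author to use it:
```js
// Cancellation is possible, but the platform does not mandate when (or whether) to use it.
const controller = new AbortController();

fetch("/data.json", { signal: controller.signal })
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => {
    if (error.name !== "AbortError") throw error; // aborts are expected here
  });

// When the "component" unmounts, the author may choose to cancel:
controller.abort();
```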
(Pushing unsubscribe into consumer space is common, but prone to memory leaks, as everyone makes mistakes.)
Just like setInterval requires clearInterval or else it leaks. This is not a good reason to not ship setInterval. I also think this underestimates how capable developers are.
Suppose we don't ship Custom Elements because they require devs to use disconnectedCallback or else they leak. Deallocation is just standard practice in software engineering and not something we can justify as a very strong reason to not ship something that is great and already established.
What will go a lot further is good docs. Make it clear how to clean effects up.
```js
// Make an effect
const effect = new Effect(() => {
  // ... read values, do stuff ...
})

// Later, when done with the effect
effect.dispose()
```
Also, effects must be garbage collectable without calling dispose if signals are no longer tracked (for example an early return that causes the effect to no longer have dependencies).
In Solid, this will garbage collect the effect:
```js
var [value, setValue] = createSignal(0)

createEffect(() => {
  console.log(value())
})

// later, release the signals to stop the effect
value = undefined
setValue = undefined
// Effect is GC'd
```
and so will this:
```js
const [value, setValue] = createSignal(0, {
  equals: false // trigger on same value (no equality check)
})

let stop = false

createEffect(() => {
  if (stop) return
  console.log(value())
})

// later, stop the effect
stop = true
setValue(value()) // trigger last time, effect returns early, stops
// Effect is GC'd
```
We can also make `stop` be a signal, then release it, so that setting a value is not required (useful if the rest of the effect body is in a function imported from a 3rd party and we cannot arbitrarily set the 3rd party's values).
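A hedged sketch of that variant, again in Solid-style APIs to match the examples above (the exact GC behavior depends on the implementation, and `thirdPartyBody` is a hypothetical imported function):
```js
// `stop` is its own signal, so we never have to set the 3rd party's values.
let [stop, setStop] = createSignal(false)

createEffect(() => {
  if (stop()) return // after this run, `stop` is the only dependency left
  thirdPartyBody() // hypothetical imported function that reads other signals
})

// later: stop the effect without touching the 3rd party's signals
setStop(true)

// release our references so the stop signal (and with it the effect) can be GC'd
stop = undefined
setStop = undefined
```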
A microtask-scheduled effect covers a wide variety of cases. A synchronous effect may be desired by some (not me, never will), and to me that means not embracing the pattern of "signals and effects" if we're relying on timing to make decisions.
The whole premise of signals and effects is that we care about what, not when: we write our code within effects and never rely on code execution order within that system. We shouldn't worry about when things fire, but about what state looks like based on other state. Anything that is procedural and timing-based can be abstracted to be represented as state (f.e. an effect reacting to a `time` signal instead of an animation frame callback, where a `time` signal is implemented as an abstraction over requestAnimationFrame).
There will always be procedural requirements because the CPU is procedural, but the purpose of signals and effects is to escape that as much as we can (impossible to escape it fully, but very possible to abstract a lot of it into signals (for example `pointerPosition` as a signal instead of using events)).
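As a hedged sketch of that idea, a `time` signal could be nothing more than a `State` that a requestAnimationFrame loop keeps up to date; effect code then depends on it like any other state (the `effect` helper mentioned in the comment is whichever basic effect implementation you have on hand):
```js
import { Signal } from "signal-polyfill";

// A `time` signal: state that is advanced by a requestAnimationFrame loop.
const time = new Signal.State(performance.now());

function tick(now) {
  time.set(now);
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);

// Any effect implementation can now treat time as plain state, e.g.:
// effect(() => console.log(`t = ${time.get().toFixed(1)} ms`));
```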
If we don't embrace this, then we're set up to not be using signals and effects very well, fighting against what they are. Not including an effect API means not giving users the very patterns that spawned this proposal (signals and effects from Solid.js, Preact, Vue, Svelte, Meteor, MobX, etc).
I get that we're hopeful that effects can be implemented by frameworks.
Can someone describe why it would be detrimental to at least provide a basic effect out of the box?
…if/when they do, you need to dispose of them to avoid memory leaks.
In templating frameworks, disposal is automatic at the cell or block level, for example:
Components and templating are out of scope (those are things you could build using effects). Sure those can have automatic ways to dispose effects (great if they do).
The conversation here is about creating and using effects directly, on their own, not only for rendering, but for any reactive data manipulation or side effects.
It is important, imo, that users have a standard effect API (I'm still wondering what would be so detrimental about a standard basic microtask effect being included), and important that documentation clearly covers how to dispose of effects.
Frameworks do a bunch of stuff. I don't think it's feasible that we provide built-in mechanisms for all of their functionality in one proposal.
@littledan We will never cover every single possible use case with one API. It is impossible. People with edge cases may even be using effects incorrectly (as in, fighting against the pattern).
But a basic effect would cover so many use cases already, that they're totally worth it.
In fact, we may be surprised to see frameworks adjust to the existence of a standard basic effect, maybe even nullifying the needs they thought they had because they were not working within a more specialized space, decoupling concepts they were previously entangling. (F.e. why does the concept of Suspense need to be coupled to the concept of an effect??? Standardizing a basic effect may cause people to realize they can do things in a better way.)
I've written 3D apps using only basic effects, even for derived values (with no memos/computeds!), and it has been absolutely great! (I'll add examples to a showcase page at https://lume.io in the near future).
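For what it's worth, a hedged sketch of that "derived values with effects only" pattern, written with Solid-style APIs since those already appear above (not the proposal's API):
```js
const [width, setWidth] = createSignal(2)
const [height, setHeight] = createSignal(3)
const [area, setArea] = createSignal(0)

// No memo/computed: a plain effect recomputes and pushes the derived value.
createEffect(() => {
  setArea(width() * height())
})

// area() now tracks width/height; e.g. setWidth(5) eventually yields area() === 15
```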
What is going to do the final unwatch of an effect? Each `effect()` call adds watchers, and they're never cleaned up.
@NullVoxPopuli I'm not sure if you missed the comments before yours, but `effect.dispose()` can clean them up, if needed. Top level effects don't always need to be cleaned up, disposal is optional (call it if you want, or clear the effect's dependencies and let it be GC'd, or rerun an outer effect and inner ones get disposed).
@NullVoxPopuli I'm not sure if you missed the comments before yours, but effect.dispose() can clean them up, if needed. Top level effects don't always need to be cleaned up, disposal is optional (call it if you want, or clear the effect's dependencies and let it be GC'd, or rerun an outer effect and inner ones get disposed).
Yeah, I think I can easily conflate the userland (appdev) purpose with effects, which may require high-level abstractions (automatic cleanup, etc.); in general, appdevs don't have (nor want) to deal with the low-level disposal concepts that library and framework authors (and gamedev, and other non-web verticals) do. So as I think about effects as a general utility, I need to keep in mind who the audience is for these effects and what happens if an appdev, expecting automatic disposal, uses the low-level effects. How do we balance utility for low-level behaviors with protecting appdevs from misusing sharp tools? Can we? Should we care? Should we defer "protection from themselves" to frameworks / ecosystems? Whatever the answers end up being, we should document them as our rationale, and probably include some recommendations or a list of potential sharp edges that some users could run into.
I'll add examples to a showcase page at https://lume.io/ in the near future
this is exciting! I look forward to more examples!
But a basic effect would cover so many use cases already, that they're totally worth it.
What is the best basic implementation for an effect?
in signal-utils, we have a microtask effect here: https://github.com/proposal-signals/signal-utils/blob/main/src/subtle/microtask-effect.ts
We could probably also have a `requestAnimationFrame` effect, or maybe some batching effect of some sort? idk! we need more implementations to play with! :tada:
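Here's a hedged sketch of what a `requestAnimationFrame`-batched variant could look like, analogous to the microtask effect linked above (not an official utility, just an illustration against the polyfill's `Watcher` API):
```js
import { Signal } from "signal-polyfill";

let scheduled = false;

const watcher = new Signal.subtle.Watcher(() => {
  if (!scheduled) {
    scheduled = true;
    requestAnimationFrame(flushPending);
  }
});

function flushPending() {
  scheduled = false;
  for (const signal of watcher.getPending()) {
    signal.get(); // re-evaluate each dirty effect Computed once per frame
  }
  watcher.watch(); // re-arm notifications for the next change
}

export function rafEffect(callback) {
  const computed = new Signal.Computed(callback);
  watcher.watch(computed);
  computed.get(); // run once to establish dependencies
  return () => watcher.unwatch(computed); // caller-managed disposal
}
```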