# Improving the implementation of signals
Hi, I have many years of experience implementing FRP systems. I quite like your implementation of signals, but I saw some improvements that could be made.
This is a new implementation of signals, written from scratch. The code is much simpler than the old implementation, and also ~15% faster.
In addition, it fixes a memory leak by using `WeakRef`:
```js
const s = signal(5);

{
  const c = computed(() => {
    return s.value;
  });

  c.value;

  // What happens if `c` isn't used anymore?
}
```
With this PR, `c` will be garbage collected if it's not used anymore.
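To illustrate the idea, here is a rough sketch of the `WeakRef` approach, not the actual code in this PR; `_targets` and `_invalidate` are invented names. Each signal holds only weak references to its dependents, so an unused `computed` can be collected, and its dead entry is dropped on the next notification:

```js
// Rough sketch of the WeakRef idea - NOT the actual PR internals;
// `_targets` and `_invalidate` are invented names.
class SketchSignal {
  constructor(value) {
    this._value = value;
    this._targets = new Set(); // Set<WeakRef<computed>>
  }

  get value() {
    return this._value;
  }

  set value(v) {
    this._value = v;
    for (const ref of this._targets) {
      const target = ref.deref();
      if (target === undefined) {
        // The dependent was garbage collected: drop the dead entry.
        this._targets.delete(ref);
      } else {
        target._invalidate();
      }
    }
  }
}
```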
However, this does cause a couple of minor changes in behavior:

- The behavior of `computed` is lazier: it now only updates when `.value` is called; it no longer updates automatically when its dependencies change. `effect`, however, continues to update automatically; it is not lazy. This improves performance, and it also creates a clean separation between `computed` (which should be pure / referentially transparent) and `effect` (which can have side effects).
- The error-handling behavior of `computed` / `effect` has changed slightly. With the old behavior, it would only throw the first time `.value` was called; afterwards it would return the old, stale value. With the new behavior, if it throws, it keeps throwing every time `.value` is called until one of its dependencies changes; at that point it re-runs the function. This behavior is faster, more intuitive, and more semantically correct.
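A sketch of the two behavior changes described above; the import path and the exact logged values are assumptions based on that description:

```js
// A sketch of the new behavior, based on the description above.
// The import path is an assumption.
import { signal, computed } from "@preact/signals-core";

const a = signal(1);

let runs = 0;
const double = computed(() => {
  runs++;
  return a.value * 2;
});

a.value = 2;
a.value = 3;
console.log(runs);         // 0 - lazy: nothing has run yet, however often `a` changed
console.log(double.value); // 6 - the callback runs exactly once, on read
console.log(runs);         // 1

const bad = computed(() => {
  if (a.value === 3) throw new Error("boom");
  return a.value;
});

try { bad.value; } catch (e) { /* throws */ }
try { bad.value; } catch (e) { /* throws again: no stale value is returned */ }

a.value = 4;            // a dependency changed...
console.log(bad.value); // 4 - ...so the function re-ran
```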
It seems that `@preact/signals` is relying very heavily on internal details of the signal implementation, so I need to fix that too.
Thanks for the PR! It's awesome to see that signals spark a lot of ideas. I plan to go through the suggestions in more detail over the weekend, if that's ok.
> The behavior of `computed` is lazier: it now only updates when `.value` is called; it no longer updates automatically when its dependencies change. `effect`, however, continues to update automatically; it is not lazy.
I'm not sure I understand the improvement, as this phrase describes the exact behaviour that is already in `main`. We only update a computed when it meets one of the following criteria:

- it's subscribed to by an `effect()`
- it's subscribed to by a component
- someone accessed `computed.value` in the global scope. Note that we don't establish a subscription in this case; we just refresh the value.
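A quick illustration of those cases, as a hedged sketch using the public API; the import path and logged values are assumptions:

```js
// Hedged sketch of the three cases (import path assumed).
import { signal, computed, effect } from "@preact/signals-core";

const count = signal(0);
const double = computed(() => count.value * 2);

// Case 1: subscribed via effect() - `double` is kept up to date.
const dispose = effect(() => {
  console.log("doubled:", double.value); // logs "doubled: 0" immediately
});
count.value = 1; // logs "doubled: 2"
dispose();

// Case 3: a plain `.value` read outside any effect/component -
// the value is refreshed once, but no subscription is established.
console.log(double.value); // 2

// Case 2 (subscription by a component) behaves like case 1,
// managed by the framework adapter.
```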
Either way, I'm looking forward to diving more deeply into the code soon! Thank you for making a PR 👍
> I'm not sure I understand the improvement, as this phrase describes the exact behaviour that is already in `main`.
Consider this test code, which is currently on the `main` branch:

https://github.com/preactjs/signals/blob/a4d659e8690246fb24d4af6d0779eb3ff96635d7/packages/core/test/signal.test.tsx#L211-L228
It uses `a.value = 4;`, which changes the signal. But even though `e.value` is not used, it still re-runs the computed.

With my code, it will not re-run the computed until `e.value` is used.
> Either way, I'm looking forward to diving more deeply into the code soon!
I am still working on fixing the Preact/React integration, because it also needs to be rewritten from scratch.
But the core signals implementation should be (mostly) complete.
> It uses `a.value = 4;`, which changes the signal. But even though `e.value` is not used, it still re-runs the computed.
That's not what's happening there. The call count of the spy function isn't reset, so it stays at 1.
> I am still working on fixing the Preact/React integration, because it also needs to be rewritten from scratch. But the core signals implementation should be (mostly) complete.
Went through most of the changes and I really like where this is going. I wonder if it's easier to land the changes in succession, rather than having to rewrite everything to completion before being able to merge them. I think some of the changes in the PR could be merged right now if extracted into a separate PR. I'm mainly worried that it might lead to way more merge conflicts than necessary, as there is continued development on `main`.

1. Dropping the batch counter in favor of a simple boolean flag - seems straightforward to PR (see the sketch after this list).
2. Replacing the current mark+sweep sorting algorithm with the bottom-up approach in your PR - personally, I like this change the most! It gets rid of the `._pending` counter, which always irked me. I like the new approach taken here a lot!
3. Splitting up the `Signal` class into `Signal`, `Computed` and `Effect` - with 1) + 2) checked off, this should be doable to merge. We had something like this planned too, because it avoids the need to allocate collections to track dependencies for source Signals, etc.
4. Making effect the sole initiator of subscriptions + "value refreshes".
5. All the other GC improvements.
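For point 1, a minimal sketch of what a boolean flag instead of a counter could look like; this is hypothetical, not the actual implementation, and `pending` and `_run` are invented names:

```js
// Hypothetical sketch of the boolean-flag idea - not the actual
// implementation; `pending` and `_run` are invented names.
let batching = false;
const pending = new Set(); // effects to flush when the batch ends

function batch(fn) {
  if (batching) return fn(); // nested batch: just run the callback
  batching = true;
  try {
    return fn();
  } finally {
    batching = false;
    for (const effect of pending) effect._run();
    pending.clear();
  }
}
```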
I'm a bit unsure about going with `WeakRef` right now because of the browser support level: I work in e-commerce, where older browsers still contribute a lot to the bottom line. @jridgewell sent me this link on how a similar thing is tackled in `lit`.
My assumption is that 1)-3) are changes that don't require discussion and are straightforward to merge. With 4) I could see that becoming a discussion point, which might take a while until everyone reaches a shared agreement. My hope is that 1)-3) also don't require any changes to the framework adapters, which should further mitigate the risk that changes take longer to land, or the risk of running into unforeseen things.
What do you think?
> That's not what's happening there. The call count of the spy function isn't reset, so it stays at 1.
But it was reset by `compute.resetHistory();`

With my code, that test fails, saying that the `compute` function was never called. That's why I had to update some of the tests to explicitly use `.value`.
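For context, this is roughly the shape of the test change being discussed; a sketch assuming sinon + chai and the public signals API, not the exact test from the repo:

```js
// Sketch of the kind of test change described, assuming sinon + chai
// and the public signals API (not the exact test from the repo).
import sinon from "sinon";
import { expect } from "chai";
import { signal, computed } from "@preact/signals-core";

const a = signal(3);
const compute = sinon.spy(() => a.value * 2);
const c = computed(compute);

c.value;                // prime the computed
compute.resetHistory(); // call count back to 0

a.value = 4;
// With the lazy behavior, `compute` has NOT re-run yet;
// it only re-runs on the explicit `.value` read below.
c.value;
expect(compute.callCount).to.equal(1);
```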
> I wonder if it's easier to land the changes in succession, rather than having to rewrite everything to completion before being able to merge them.
I think it would be hard to split the changes into smaller pieces. So many things in the implementation are intertwined. But I'll see if it's doable.
> I'm a bit unsure about going with `WeakRef` right now because of the browser support level: I work in e-commerce, where older browsers still contribute a lot to the bottom line. @jridgewell sent me this link on how a similar thing is tackled in `lit`.
I'm okay with removing the `WeakRef` support, but `WeakRef` is supported by ~91% of browsers, including all modern ones.
If needed, we can create a `WeakRef` polyfill which just maintains a strong reference without disconnection:
```ts
class WeakRef<T> {
  private _value: T;

  constructor(value: T) {
    this._value = value;
  }

  public deref(): T | undefined {
    return this._value;
  }
}
```
This will cause a memory leak on older browsers, but there's nothing we can do about that.
Or, alternatively, users can choose to use a `WeakRef` polyfill if they wish.
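For example, a project could conditionally install such a fallback itself before loading the library; this is a sketch, and the fallback keeps a strong reference, so it trades the leak for compatibility:

```js
// Sketch: a project installs the strong-reference fallback itself,
// before loading the library. (On very old browsers `globalThis`
// itself may need a shim first.)
if (typeof globalThis.WeakRef === "undefined") {
  globalThis.WeakRef = class {
    constructor(value) {
      this._value = value; // strong reference: never collected
    }
    deref() {
      return this._value;
    }
  };
}
```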
> > That's not what's happening there. The call count of the spy function isn't reset, so it stays at 1.
>
> But it was reset by `compute.resetHistory();`
>
> With my code, that test fails, saying that the `compute` function was never called. That's why I had to update some of the tests to explicitly use `.value`.
Apologies, I stand corrected. It's maybe a lesson for me not to do code reviews close to midnight 😬
> > I wonder if it's easier to land the changes in succession, rather than having to rewrite everything to completion before being able to merge them.
>
> I think it would be hard to split the changes into smaller pieces. So many things in the implementation are intertwined. But I'll see if it's doable.
Happy to lend a hand with that if you desire.
> I'm okay with removing the `WeakRef` support, but `WeakRef` is supported by ~91% of browsers, including all modern ones.
91% is a fantastic outlook, but dropping those 9 percent, especially older iOS browsers, would be a huge loss in revenue in the e-commerce world. It's not public, but we know that Preact is used for the store frontend of several big e-commerce companies. So everything in the Preact ecosystem has to be a little more conservative than other projects when it comes to browser support.
> So everything in the Preact ecosystem has to be a little more conservative than other projects when it comes to browser support.
That's an odd decision. Many, many libraries don't work in older browsers like IE.

When a project needs to support older browsers, it uses transpilation (like Babel) and polyfills. That's standard practice. It's the choice of the project, not the library.

By using a polyfill, the project is able to use modern libraries while still supporting older browsers. It's unreasonable to expect every library to support old browsers, because polyfills exist.
And because Preact uses ES6 modules, projects need to use some sort of transpilation for older browsers anyway. If your concern is with the pre-compiled `.js` files, it would be easy to include a `WeakRef` polyfill in the compiled `.js` files, so that they continue to work in old browsers.

That's what many libraries do: they provide clean ES6 modules without polyfills, and also pre-compiled ES5 code which includes polyfills.
Preact and the related modules do not require the use of ES Modules or transpilation. We ship ES5 in all but one of the compiled variants in each npm package. There are a few reasons for this:
- Most application bundler configurations do not correctly transpile or polyfill npm packages. While Vite does transpile dependencies based on a user-provided configuration, Next.js, create-react-app, preact-cli, default rollup/webpack installations, and numerous boilerplates do not. (This article has more information on the status quo.)
- A number of modern JS features remain poorly optimized compared to "loose" ES5 equivalents. This is one of the reasons we're likely to move from `Set` to plain Arrays (see this benchmark).
- While optimizing for the majority is a good idea in general, the value of that tradeoff is negatively impacted when there are observable functional differences in a library in a chunk of browsers. In this case, 90% of users would get proper GC, but 10% of users would be using an application that never frees memory.
I'd like to redirect discussion away from the polyfilling topic though, because we're not going to make a fundamental decision about Preact as part of a refactor PR.
Regarding the implementation (apologies, I was away last week) - one thing that needs to be considered is that we're aiming to provide a general-purpose reactivity model. The signals core is framework-agnostic, but it's not intended to be a competitor to something like Rx - it's agnostic to allow for additional framework bindings to be created similar to the current Preact one. In all of these cases, we have the advantage of knowing which signals/computeds are used by which components/elements in a tree structure with a well-defined lifecycle (VDOM or DOM). Because our state graph is attached to that existing lifecycle, we can already eagerly destruct the graph and allow the JS engine to do its garbage collection. That works today (assuming we land the bugfix in #117), and is verifiable in Chrome's memory inspector.
That said, there are lots of good things in this PR. I just wanted to provide some clarity on why we didn't use WeakRef in the first place, and why I would prefer to avoid it unless absolutely necessary.
@developit:

> We ship ES5 in all but one of the compiled variants in each npm package.

And as I said, it's very easy to just add a `WeakRef` polyfill to the pre-compiled ES5 code. The amount of extra code is very tiny, and the performance is very good.
> This is one of the reasons we're likely to move from `Set` to plain Arrays (see this benchmark).
I actually found the opposite: `Set` performs better than `Array` in most situations. Perhaps iteration on `Set` is slower, but `add`/`delete`/`has` are faster, and those are the operations that are actually used.
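As a rough illustration of the kind of comparison involved, here is a micro-benchmark sketch; the numbers vary by engine and collection size, so treat it as illustrative only:

```js
// Micro-benchmark sketch: membership tests, Set.has vs Array.includes.
// Numbers vary by engine and collection size.
const values = Array.from({ length: 100 }, (_, i) => i);
const set = new Set(values);
const N = 1e6;

console.time("Set.has");
let hits = 0;
for (let i = 0; i < N; i++) if (set.has(i % 200)) hits++;
console.timeEnd("Set.has");

console.time("Array.includes");
let hits2 = 0;
for (let i = 0; i < N; i++) if (values.includes(i % 200)) hits2++;
console.timeEnd("Array.includes");

console.log(hits === hits2); // sanity check: both saw the same memberships
```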
> I'd like to redirect discussion away from the polyfilling topic though, because we're not going to make a fundamental decision about Preact as part of a refactor PR.
I don't mind deferring it to another PR, but I disagree that it's a fundamental decision: the behavior for the end user is identical, it is not a breaking change, and it is a purely internal detail.
> Because our state graph is attached to that existing lifecycle, we can already eagerly destruct the graph and allow the JS engine to do its garbage collection.
That works well enough for components, which have explicit lifetimes, but not in general, as I showed in my first post.
Do you have a link to your findings about `Set`? We use repeatable benchmarks for everything, because it is extremely easy to accidentally measure the wrong thing (or nothing) in JavaScript.
I think you might have misread my point about behavior - your reply was specifically about developer-observability. In this case, garbage collection is an observable behavior, since failure to GC effectively could easily lead to scenarios where an app crashes or consumes an unacceptable amount of memory.
It's okay to disagree about whether something is fundamental. Preact isn't a new project, and has a core team of folks who have guided it for many years who are ultimately responsible for deciding what is fundamental and what is an implementation detail. These things change over time, but that involves a lot of discussion far beyond the scope of a single PR.
> Do you have a link to your findings about `Set`?
Sure: https://esbench.com/bench/631d30a36c89f600a5701bdc

On my machine, using Brave, I get this result: even for very small collections (5 values), `Set` outperforms `Array`, and for bigger collections `Set` is 2x as fast as `Array`.
> In this case, garbage collection is an observable behavior, since failure to GC effectively could easily lead to scenarios where an app crashes or consumes an unacceptable amount of memory.
To be clear, the current code in `main` fails to garbage collect and leaks memory.

Here is a simple test case showing the memory leak. The `computed`s will never be garbage collected as long as the `signal` is alive, even though the `computed`s are not being used:
```js
const s = signal(5);

for (let i = 0; i < 100000; ++i) {
  {
    const c = computed(() => {
      return s.value;
    });

    c.value;
  }
}
```
But my code (with `WeakRef`) does correctly garbage collect, and so it does not leak memory with the above program.

If you remove `WeakRef` from my code, then it will leak the same amount of memory as the `main` code, but it's not any worse than the current `main` code.

(Incidentally, the `main` code takes 25978 ms to run the above program, but my code only takes 50 ms, so my code is much faster and also leak-free.)
> It's okay to disagree about whether something is fundamental.
Also, to be clear, I was discussing the concern about my code causing Preact signals to break in older browsers (which is fixed by adding a `WeakRef` polyfill to the compiled ES5 code).

There isn't any concern with memory leaks, because my code is never worse at memory leaks than the current `main` code; it is either equal or better.
Any chance we can get this PR going?
@MrHBS Sorry, I've been busy, but I do plan to continue working on this.
It's been a while. While I have no stake in this, it was very interesting reading through the conversation and glancing over the code. Go get it integrated :)
The extra time that has passed also means that the caniuse percentage went up: the 9% that would get the polyfill treatment is now closer to ~5%. I'd find the work interesting even if it got spun off into an API-compatible separate package like usignal that you could swap in via an import map.