proposal-signals
Async
Lots of people here are interested in handling async/await in conjunction with signals. Some approaches that have been suggested:
- @modderme123 's Bubble reactivity uses something like React Suspense, where a pending promise state can be propagated through the dependency graph. This might require support from the core for efficiency of propagating that information and to fully eliminate the "function coloring problem".
- @pzuraq has proposed "Relays" to connect graphs together in a way that may be managed in a naturally async way. This may benefit from built-in support as well--it can mostly be implemented using effects, but ownership would be different.
- @alxhub has wondered whether it would be possible to make async/await in a computed "just work", but also doesn't consider this an MVP requirement.
For additional color here about bubble-reactivity:
- The big idea is ready state propagation: it has a bit field on all signals (both computed and state) for its loading/reloading(/error) status. Computations can choose (at construction time) whether they're opaque or transparent to each status bit of the signals they read; that is, whether they rerun when the status changes or simply inherit the union of their sources' statuses without rerunning.
- Transparent propagation is an optimization that really only matters for the reloading status; for other statuses, it's almost always the case that the value and the status change at the same time and you might as well rerun the computation whenever the status changed.
- Status propagation, plus the ability to observe the status as well as the value of a node, gets you out of the function coloring business; a totally non-async-aware computation can inherit a loading status from signals it reads, without needing to take any special care.
- If we weren't doing transparent propagation, then the remaining necessary core feature would be the ability to record the loading statuses of sources as a computation (re-)runs, and to include "did the status change?" as part of the equality check.
- It uses an opt-in React Suspense style "throw and retry" system, for when you just don't want to proceed if the signals you're reading aren't loaded yet.
- This was a legacy code compatibility thing for us. It's sometimes convenient, and sometimes a problem; it's easy to get waterfalls this way if you're not careful.
- I don't really think this convenience method belongs in the core.
- It doesn't have a primitive for "async computation"; instead this is defined in terms of other primitives. State nodes have the same set of capabilities as computations to unleash a loading status/throw an exception when read, so `createPromise` can pretty much directly turn a promise into a state node that'll be set once the promise completes; on top of this you can build a computation that makes a new write-once state for a new promise each time it reruns.
- We tried putting this in the core, and it's definitely possible, but it turns out to be quite tricky to reason about a system where a computation can change value without rerunning (based on promise resolution). Keeping it to "only state can be mutated; computations can only rerun" simplified things considerably.
- Unlike the throw/restart pieces, I do think we want to provide this sort of convenience, assuming we tackle the async problem at all -- the standard translation between standard promises/async functions and standard signals feels like it belongs in the standard.
- When you work with a system like this, you end up wanting a signals-land version of `Promise.all` to prevent a computation from rerunning multiple times as individual sources it depends on become ready. Whatever our approach to async is, we should probably make sure that we have something like that available, or at least that we have enough expressive power that frameworks can define an efficient implementation of it.
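As a sketch of the semantics such a combinator would need (this is not proposal API; the `allReady` name and the `{ isPending, value }` shape are made up for illustration), the key property is that the combined result stays pending until every source is ready, so a downstream computation observes a single "ready" transition:

```javascript
// Hypothetical combinator over async values of the shape { isPending, value }.
// It stays pending while any source is pending, so a consumer reruns once
// when everything becomes ready, instead of once per source.
function allReady(sources) {
  if (sources.some((s) => s.isPending)) {
    return { isPending: true, value: undefined };
  }
  return { isPending: false, value: sources.map((s) => s.value) };
}
```

A real signals version would wrap this in a computed so the pending bit is tracked reactively, but the collapsing-of-transitions behavior is the essential part.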
We discussed this today and agreed to not include it in the initial thing which we will all prototype. There are too many open questions, and it feels too early to try to answer them all. Same goes for forking/transactions, which are similarly important.
I know it has already been decided to exclude this from an initial version of the proposal, but I think it would be immensely valuable to have something forward-compatible with an async mechanism, because:
- Web page performance benefits greatly from loading as little JS as possible.
- Using dynamic imports is a well-established mechanism to load heavier JS incrementally as it's needed.
- Similarly, it is often necessary to load data as needed, because sending all the data ahead of time is prohibitively expensive or literally impossible.
- For any calculations that take over ≈10ms, it is critical to offload the calculations to web workers. This requires the usage of `async`/callback code.
- It is possible to make sync calls to WASM, but 1) WASM can run some code faster but it still blocks the thread, and 2) if the WASM code is not guaranteed to be loaded ahead of time then loading WASM is a similar issue to dynamic code import.
While there is a case to be made that app developers can solve this by manually implementing "loading" signals/states or carefully interleaving any async setup work, it can be much more straightforward to wait for the code/data to be fetched. In many cases, the user would expect to wait a brief while for an operation to finish, and it's still possible to add loading states to an async signal network.
I maintain a library named cubing.js for displaying puzzles. We need to:
- stick with compatible web technologies — no framework lock-in,
- support a wide variety of puzzles (too much to load all at once), and
- efficiently recompute puzzle state across millions of moves.
After spending a lot of time looking at options, I ended up writing a small mechanism that propagates signals across a DAG: https://github.com/cubing/cubing.js/blob/cdfab5cb35c3741ef80e0680d9b72c69263205d3/src/cubing/twisty/model/props/TwistyProp.ts
I know it can't be a universal solution, but it has done a great job of being performant, avoiding synchronization bugs, and getting most of the benefits of async code while supporting async nodes. In particular, each node receives a fully awaited set of inputs from its parents at computation time. Here is a simplified explanation of the algorithm:
There is a global `globalSourceGeneration` counter.
Each node has:
- The following construction-time inputs:
  - A `Record` of `parents` (the parent nodes, passed at construction time). Each parent has a `parentKey` that is used to index other `Record`s in the algorithm.
  - A synchronous function `canReuseValue(v1, v2)`, defaulting to a `===` comparison. (This could probably support async.)
  - An async function `derive(inputs)` that computes the node's value from its inputs.
- A set of `childNodes` (populated by children when they are constructed).
- A `Record` of `cachedInputs` (one field corresponding to each parent).
- A `lastSourceGeneration` number.
- A `lastComputedGeneration` number.
- A `cachedValue`.
  - This value is "up to date" when `lastSourceGeneration == lastComputedGeneration`.
When a node is updated:
- Increment `globalSourceGeneration`.
- Synchronously traverse all descendants and mark them as stale by setting their `lastSourceGeneration` fields to `globalSourceGeneration`.
When someone calls `node.get()`:
- If `lastSourceGeneration == lastComputedGeneration`, return `cachedValue`, else:
- Call `parents[parentKey].get()` for all parents in parallel (using `Promise.all`) and store the results in `newInputs` by `parentKey`.
  - If `parents[parentKey].canReuseValue(cachedInputs[parentKey], newInputs[parentKey]) == false` for any parent:
    - Set `cachedValue = await derive(newInputs)`[^1].
- Update `cachedInputs = newInputs` and `lastComputedGeneration = lastSourceGeneration`.
- Return `cachedValue`.
[^1]: Optimization: if `cachedValue` has not changed (based on `canReuseValue`), reuse the existing cached value so the new one can be GCed.
To receive updates of the freshest value for a given node as soon as possible: when a node is marked as stale, schedule a `node.get()` call using `queueMicrotask(…)` and dispatch updates with the results.[^2]
[^2]: Alternatively, perform this work in the same microtask by building a list of these scheduled calls while marking nodes as stale, and call them all once the graph has been completely traversed.
There are some details around mutability rules and deduplication optimizations I've skipped over, but I think this captures enough core details.
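The generation-counter scheme above can be sketched in plain JavaScript. This is a minimal illustration of the described algorithm, not the actual cubing.js implementation; the class names (`SourceNode`, `DerivedNode`) are made up for the sketch.

```javascript
// Minimal sketch of the generation-counter invalidation scheme.
let globalSourceGeneration = 0;

class SourceNode {
  constructor(value) {
    this.value = value;
    this.childNodes = new Set();
  }
  set(value) {
    this.value = value;
    globalSourceGeneration++;
    // Synchronously mark all descendants stale.
    for (const child of this.childNodes) child.markStale(globalSourceGeneration);
  }
  async get() {
    return this.value;
  }
}

class DerivedNode {
  constructor(parents, derive, canReuseValue = (a, b) => a === b) {
    this.parents = parents; // Record<parentKey, node>
    this.derive = derive; // async (inputs) => value
    this.canReuseValue = canReuseValue;
    this.childNodes = new Set();
    this.cachedInputs = {};
    this.cachedValue = undefined;
    this.lastSourceGeneration = 0;
    this.lastComputedGeneration = -1; // starts stale
    for (const parent of Object.values(parents)) parent.childNodes.add(this);
  }
  markStale(generation) {
    this.lastSourceGeneration = generation;
    for (const child of this.childNodes) child.markStale(generation);
  }
  async get() {
    // Up to date: return the cached value without recomputing.
    if (this.lastSourceGeneration === this.lastComputedGeneration) {
      return this.cachedValue;
    }
    // Await all parents in parallel, keyed by parentKey.
    const keys = Object.keys(this.parents);
    const values = await Promise.all(keys.map((k) => this.parents[k].get()));
    const newInputs = Object.fromEntries(keys.map((k, i) => [k, values[i]]));
    const changed = keys.some(
      (k) => !this.canReuseValue(this.cachedInputs[k], newInputs[k])
    );
    if (changed) {
      this.cachedValue = await this.derive(newInputs);
    }
    this.cachedInputs = newInputs;
    this.lastComputedGeneration = this.lastSourceGeneration;
    return this.cachedValue;
  }
}
```

Note how a node whose inputs pass `canReuseValue` skips `derive` entirely but still records itself as up to date, which is what prevents redundant recomputation across the DAG.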
@lgarron I'm making a utility library here: https://github.com/NullVoxPopuli/signal-utils/pull/1 which will include a reactive async implementation built upon the polyfill (and will eventually phase out the polyfill when real implementation occurs).
would that help with your concerns?
Hmm, the main appeal of the Signals proposal is that the code would be built-in and standardized.
I suppose a light async layer on top of the existing proposed spec could be handy, but using any library has costs and liabilities beyond using standard web platform code. In particular, if Signals becomes a standard for interoperable signaling code, then any async implementations on top of that would need to be careful to interoperate, probably to the point that… we would need a specification for how to interoperate. 😆
Like with other proposals, this one needs to be the minimal feature set possible to reduce bikeshedding / yak shaving / longer discussion in general. If something can't be implemented with the primitives, it makes sense to include it in the proposal -- but since a lot of things can be implemented with the currently described primitives, deferring decisions to future proposals via libraries feels like the best way to move the existing proposal forward.
pending promise state can be propagated through the dependency graph
Not only the promise state would be of interest. There will be a need to cancel all pending evaluations. For example if you switch the whole page and need to cancel all the requests that are still pending.
When you work with a system like this, you end up wanting a signals-land version of Promise.all to prevent a computation from rerunning multiple times
What signals need is Structured Concurrency. I am already looking at this with: https://github.com/timonkrebs/MemoizR
but since a lot of things can be implemented with the currently described primitives, deferring decisions to future proposals via libraries feels like the best way to move the existing proposal forward
I also don't see a way to make deduplication work with `Promise`s passed around a sync Signals API, since the definitions require `equals(...)` to be implemented as synchronous. I may be missing something, though.
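To make the concern concrete, here is a minimal illustration (not from the thread): with a synchronous `equals`, two promises for the same eventual payload still compare as distinct objects, so deduplication can't see through them.

```javascript
// The default equals() is a synchronous `===` comparison; it cannot await
// the promises to compare their eventual settled values.
const equals = (a, b) => a === b;

const p1 = Promise.resolve("same payload");
const p2 = Promise.resolve("same payload");

// Distinct promise objects for identical payloads look like a change.
equals(p1, p2); // false
equals(p1, p1); // true
```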
@lgarron the way that you could handle that is via what I've been calling the "Relay" pattern, mentioned by @littledan above. I'm going to write up a document describing this later this week if I can find some time, but here's an example:
// $lib/data-utils.ts
import { client } from '$lib/juno';
export const fetchJson = (url: Signal<string>) => {
let state: Signal.State = new Signal.State({ isLoading: true, value: undefined });
let controller: AbortController;
const loadData = new Signal.Computed(async () => {
controller?.abort();
// Update isLoading to true, but keep the rest of the state
state.set({ ...state.get(), isLoading: true });
controller = new AbortController();
const response = await fetch(url.get(), { signal: controller.signal });
const value = await response.json();
state.set({ ...state.get(), value, isLoading: false });
});
return new Signal.Computed(() => {
loadData.get();
return state;
}, {
onObserved() {
loadData.get();
},
onUnobserved() {
controller?.abort();
}
});
}
The basic idea is that Relays absorb the async of a subgraph and expose it to a parent graph that is consuming the subgraph in a synchronous way. So, you can intercept those sync changes and watch for equality at that point.
So, I figured out the minimal primitive needed to make async signals work without needing to deal with async functions directly: an "is updating" condition propagated similarly to the "is dirty" condition underlying `Watcher.prototype.getPending`.
- `Signal.State` would allow this to be set directly, as part of setting the value. It should be optional, defaulting ideally to `false`.
- This condition can be read directly from any signal. For `Signal.State`, it can return the value of itself. For `Signal.Computed`, it should return `true` iff all its source signals return `true` for the value.
- `Signal.State` needs a way to store exceptions like `Signal.Computed`.
- Reads of this condition must be auto-tracked.
Then, you can implement async `Computed`s in userland as follows:
// The `.set` options are purely hypothetical here
function asyncComputed(body, opts) {
const state = new Signal.State()
let token = {}
const invoker = new Signal.Computed(async () => {
const prev = Signal.subtle.untrack(() => state.get())
state.set(prev, {isPending: true})
const t = token = {}
let failed = false
let value
try {
value = await body()
} catch (e) {
value = e
failed = true
}
// Signal refreshed, drop the stale value
if (t !== token) {
if (failed) throw value
return
}
state.set(value, {
isException: failed,
isPending: false,
})
})
return new Signal.Computed(() => {
invoker.get()
return state.get()
}, opts)
}
When you use the above, it literally works like sync signals in every way, except that those who care can transparently know if a given value is ready. No inner function coloring involved.
// Define
function detailsInfo() {
let handler
const event = new Signal.State(undefined, {
[Signal.subtle.watched]() {
emitter.on("foo", handler = (v) => this.set(v))
},
[Signal.subtle.unwatched]() {
emitter.off("foo", handler)
},
})
const resource = asyncComputed(async () => {
const response = await fetch(`/thing/${Model.currentId.get()}/detail`)
if (!response.ok) throw new Error(`Response failed: ${await response.text()}`)
return response.json()
})
return new Signal.Computed(() => {
// combine `event` and `resource` somehow
return {
lastEvent: event.get(),
currentDetail: resource.get(),
}
})
}
// Use
const state = getStateSomehow()
effect(() => {
// Note how easily this could just be removed
if (state.isPending) showLoadingSymbol()
// Note how this is just the same as with sync signals
updateUI(state.get())
})
// And in your event listener
return <ThingCard
thing={thing}
onSelect={() => Model.currentId.set(thing.id)}
/>
@dead-claudia exactly! That's what i did here:
- https://github.com/NullVoxPopuli/signal-utils?tab=readme-ov-file#async-function
- https://github.com/NullVoxPopuli/signal-utils/blob/main/src/async-function.ts
(Tho, i don't think untrack is needed)
@NullVoxPopuli That doesn't seem to propagate readiness through computeds? Or am I missing something?
That propagation is pretty crucial to avoiding the function coloring problem in signals.
Edit: Also, exceptions need to be writable into `Signal.State`s. That's the other half, as it allows rejections to be propagated synchronously as needed.
What's the tldr on the coloring problem?
Here is a test that shows behavior, if that helps: https://github.com/NullVoxPopuli/signal-utils/blob/ef3d29f2dd2ff943714c5f7c1bfd0c89af529ad7/tests/async-function.test.ts#L111
State changes are propagated appropriately (and non duplicatively) when tracked states are consumed by the outer computed.
But maybe the coloring problem will inform me what needs fixed 🤔
Edit: explain a bit more
@NullVoxPopuli Okay, maybe "function coloring" was a bit imprecise. I'm also conflating it with "setting to a thrown exception" and so I'll drop that for now.
It's more so "value coloring" that's at play here.
Without an "is pending" flag, you have to model your signal's value as follows:
type AsyncValue<T> =
  | {thrown: false, value: T, isPending: boolean}
  | {thrown: true, value: Error, isPending: boolean}
That `isPending` boolean allows that declarative branching I mentioned above.
Now, suppose you want to filter the list by type. If you have a sync collection of records built by the user, and an async collection of records fetched from the network, you'll need two different functions to collect them:
Obviously this is academic, but not all signal combinators can just be turned into simple functions like that.
function filterSync(type, source) {
return new Signal.Computed(() => {
const t = type.get()
return source.get().filter((s) => s.type === t)
})
}
function filterAsync(type, source) {
return new Signal.Computed(() => {
const t = type.get()
const result = source.get()
if (result.thrown) return result
return {
...result,
value: result.value.filter((s) => s.type === t),
}
})
}
To get rid of the value coloring problem, you need to get all the fields other than the value itself (namely, `thrown` and `isPending`) to propagate implicitly.
Computed signals already have the ability to store exceptions and rethrow them on `.get()`, so those can be leveraged to make `thrown` propagate implicitly as an exception.
// Extends `Signal.Computed` so it can be directly `.watch`ed.
class StateWithException extends Signal.Computed {
#state
constructor(initial, opts) {
const state = new Signal.State({thrown: false, value: initial})
super(() => {
const {thrown, value} = state.get()
if (thrown) throw value
return value
}, opts)
this.#state = state
}
setException(e) {
this.#state.set({thrown: true, value: e})
}
set(v) {
this.#state.set({thrown: false, value: v})
}
}
Obviously, that `StateWithException` is inefficient. This is where a `signal.setException(...)` extension to `Signal.State` would come in handy.
We still have another field left, `isPending`, and it needs to be propagated for both thrown exceptions and returned values.
// Returned from `.get()`
type AsyncValue<T> = {value: T, isPending: boolean}
// Thrown from `.get()`
type AsyncError = {value: Error, isPending: boolean}
Adding a `signal.isPending` value that's propagated across the graph will solve this. And since values no longer require color in this, you can use the same combinators for both sync and async signals, treating them exactly the same across both types.
function filter(type, source) {
return new Signal.Computed(() => {
const t = type.get()
return source.get().filter((s) => s.type === t)
})
}
Likewise, you can switch from async to sync without code change, and it'll just work.
Suppose somewhere down the line, you make some changes to `getStateSomehow`, and `currentDetail` now gets all its info right away from a cache and is no longer async. You're also anticipating a new feature that'll mean `lastEvent` now depends on a network fetch, but your test endpoint isn't up yet.
Then, a month later, that test endpoint comes live and you need to make that change to `lastEvent` in preparation for that feature launch.
I've made a similar sequence of changes myself in promise-based code before, with a few months in between. And with promises, that's two pretty heavy changes in the view code.
The beauty of this approach is that your view caller in this case won't need to be rewritten or even changed at all for either case. You can even keep the `if (signal.isPending)` branch in anticipation of your `lastEvent` change. You get maximum freedom here to handle it how you want.
Ah ok, thanks for explaining! This is indeed a familiar concept that I've been trying to find ergonomics for, and one that @ef4 has been pushing back on @wycats and I to figure out with "Resources", which are like a computed, but with a lifetime and cleanup.
Also! This reminds me a lot of railway-oriented-programming, (https://fsharpforfunandprofit.com/rop/), or Result types in other languages, where, if you can force the whole system to adhere to the same interface at each interaction, there are far fewer surprises.
This is super important for composing multiple computeds representing async state... Without a shared interface at each layer, making sure the pending and error states are appropriately propagated out to the renderer can be cumbersome and error-prone 🤔
@NullVoxPopuli Admittedly, that style of programming was of some inspiration for my idea there, though I wasn't specifically thinking of it by name. (I have heard of the concept before, though.) Also, I was inspired by promises in that they also have this (implicit and internal) "pending" vs "settled" state, and I knew that exposing this data path would allow me to use that paradigm.
I also come from a huge virtual DOM background, both as a user and as a framework implementor, and that kind of branching-on-state reigns supreme in render functions/methods. I also have a little bit of recent experience in 2D rendering. So it was only natural for me to think about branching like that. 🙂
@dead-claudia Hah, yeah, this is almost exactly the core construction of the bubble-reactivity approach that @modderme123 and I were fleshing out a while back, and which I tried to describe (maybe poorly) earlier in this thread. The core idea is that loading-ness propagates contagiously forwards on `get`, plus states letting you set their error/loading flags in addition to their value; the rest of the stuff I was talking about earlier is kind of orthogonal to that core.
"Every signal can be loading" avoids the function coloring problem in exactly the same way JS exceptions avoid the coloring problem you'd get by having things return `Result`: by making the ability to be loading implicit everywhere, it doesn't have to be announced or managed explicitly. You're basically trading off between an explicit monadic approach (Promise, Result, Loadable) that shows up in the return type, and an implicit side-effects-based approach (`throw`, loading state propagation) where some effects happen directly, without being part of the returned value.
As far as side effects go, this is about as benign as you can get -- the side channel is just one bit that gets captured alongside the result, with no control flow implications (unlike `throw`). I think it's a neat trick, and that it's far and away more pleasant to use than having to make things explicitly a `Signal<Loadable<T>>` all over the place.
IIRC it's not a completely satisfying solution to async operations for all frameworks, because some frameworks want to let you write "effects that wait for things to load before running" without you having to manually check that things are loaded, or perhaps without you having to see the default unloaded values of signals show up in their types at all. OTOH I don't know if anyone has any alternatives that actually deliver on that experience for effects without introducing Weird Stuff™ (throw-and-restart based suspense, and/or spreading what should be one effect across multiple microtasks because you have to `await` something)... so it's not clear that not knowing what to do about async effects should keep us from having this rather well-behaved system for async computeds.
Related: https://github.com/tc39/proposal-signals/issues/178
I finally got around to my writeup on the Relay pattern, you can check it out here.
My current feeling is that Relays would provide a primitive that would work well with all forms of async, including:
- Request/response (and promises in general)
- Pub/sub
- Background threads
- Websockets
- etc.
Basically, they can handle 1-to-1 connections AND 1-to-many connections quite well in a variety of ways.
At the moment, I'm concerned about over-optimizing for 1-to-1/promise-oriented async with automatic propagation of `isPending` and error states. I do believe that is the majority of async that exists out there, so I can definitely see adding more features specifically for those cases to reduce boilerplate, but I think the two things we should be designing for are:
- We shouldn't actively prevent modeling of non-promise based async. If the default path is to use a promise and propagate automatically that is great, but it should still be possible to add signals that can handle other forms of async and incorporate them into output.
- We should either be very sure that the propagated values (`isPending`, `error`, etc.) are the only values that should ever be propagated, OR that the mechanism for propagating them is pluggable/extensible by future proposals. If there's a reasonable possibility of adding more types of values that are automatically propagated (e.g. some sort of automatic propagation for pub/sub state? Unsure what that would look like) I would lean toward the latter, but so far I can't think of anything.
Has anyone yet considered the async context proposal?
Where basically an async context could track dependencies of derived/computed signals, and an API set up along those routes would allow callers to do something like the following totally transparently, and it "just works": all the way down into any async function tapping any other observables, everything captured and tracked through the async context flowing alongside the userland code.
Signal.derivedAsync(async abortSignal => {
const result = await fetch(url.get(), { signal: abortSignal });
return await processAsync({ result, dependency: otherDependency.get(), signal: abortSignal });
})
@rjgotten Fortunately, @littledan is involved both with async context and signals. I think we would like to integrate them. Dan can probably provide some additional thoughts.
My thoughts:
- On Signal.derivedAsync: This is an important space that should have a solution someday, but I don't know how it should look. I don't think just leaving a signal in the "computing" state serves all the use cases--you usually want to get at the previous value or a placeholder, rather than an exception. In general, autotracking is currently a very synchronous mechanism and not suitable for modeling by AsyncContext.
- My initial intuition has been that AsyncContext should tie into Signal.Computed by treating this as a "registration time" case--usually, you should restore the AsyncContext variables when running the computed body, e.g. if you use one to track the current cancel token. However, as @shicks has pointed out, sometimes you actually do want to trace the other path, which is what actually caused the computed to be forced... I'm not sure what the overall solution is.
@rjgotten To add on to that, signal state is currently (at least in this repo) specified to use an internal async context variable for glue.
Not to imply this will continue to be the case, just that it currently is the case.
@littledan On Signal.derivedAsync: This is an important space that should have a solution someday, but I don't know how it should look. I don't think just leaving a signal in the "computing" state serves all the use cases--you usually want to get at the previous value or a placeholder, rather than an exception. In general, autotracking is currently a very synchronous mechanism and not suitable for modeling by AsyncContext.
This is resolvable if `derivedAsync` returns a promise as the result, rather than trying to hold on to the last computed value while it is internally 're-computing', and also tracks the state of said promise.
It wouldn't be problematic wrt auto-tracking either.
The AsyncContext flowing alongside the async delegate's execution would collect dependents in a collection that is added onto as the delegate advances, and `derivedAsync` would be observing for any changes to those dependents.
If an update were needed mid-execution (i.e. while the promise is not yet settled), then internally the delegate should re-execute, while externally what's observed could still remain the same pending promise. (Only when a promise has already settled would the `derivedAsync` itself need to change value to a new pending promise.)
When such a mid-execution update would be needed, the old monitoring/recording in the AsyncContext should be marked as dead - not to be responded to further - so that any dependents still being added or changing in the remainder of the old execution, will no longer cause changes. And then the delegate should be re-executed with a new AsyncContext.
An abort signal would probably also be passed as a parameter to the userland async delegate for cooperative cancellation of the old execution, which might be beneficial to avoid continuing expensive recomputing; to avoid no longer needed network requests; etc.
Then create additional helpers that take care of unwrapping the asynchronous promise into an observable signal's value change. E.g. if you'd want to just expose the promise's state in full:
Signal.fromPromise<T>(signal: Signal<Promise<T>>): { value? : T, error? : unknown, state : "resolved"|"rejected"|"pending" };
And from there, other variants such as holding onto the last known value or error are also possible.
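A userland sketch of such an unwrapping helper, written against a plain promise rather than the proposal API so the shape is easy to see (the result record mirrors the `{ value, error, state }` signature above; in a real signals integration these fields would be signal-backed rather than plain properties):

```javascript
// Tracks a promise's settlement into a plain { value, error, state } record.
// state transitions: "pending" -> "resolved" | "rejected".
function fromPromise(promise) {
  const result = { value: undefined, error: undefined, state: "pending" };
  promise.then(
    (value) => {
      result.value = value;
      result.state = "resolved";
    },
    (error) => {
      result.error = error;
      result.state = "rejected";
    },
  );
  return result;
}
```

The "holding onto the last known value" variant would simply not clear `value` on a new pending promise.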
Basically, this means `derivedAsync` actually returns a result synchronously -- namely: the promise. And the only thing actually asynchronous about the signal is that internal AsyncContext carrying the machinery for dependents tracking. That is: the dependents may still mutate even after the signal has already returned a value (a new pending promise). So any solutions like graph coloring that are needed to order the dependency graph of pending updates and perform them correctly and efficiently would need to be aware of that. However, they would basically be looking at a synchronous snapshot of the dependents anyway, afaict, and they would need to finish their work synchronously. So there's no problem with dependents changing while they're doing their work, anyway...
@littledan I still feel `AsyncVariable` is usable for autotracking, if you follow the asynchronous cases. (And they'd need to be handled even for the existing API, regardless of any suggested additions here.)
Consider this code as an example:
const foo = new Signal.State()
const bar = new Signal.State()
async function doAsyncAction() {
const thing = await db.fetchThing()
doSomething(foo.get(), thing)
}
const baz = new Signal.Computed(() => {
doAsyncAction()
return bar.get()
})
`bar` is immediately added to the dependency list of `baz`, but `foo` is added asynchronously. There are two cases to consider for this:
is added asynchronously. There's two cases to consider for this:
- `foo.get()` in `doAsyncAction` executes before `baz.get()` is called. In this case, `foo` and `bar` are both added as dependencies, and you can proceed as if it's synchronous.
- `foo.get()` in `doAsyncAction` executes after `baz.get()` is called. In this case, `foo` is added as a dependency, then `baz.get()` is called, then `bar` is added as a dependency.
For the second, throwing isn't an option, as that's `.get()` performing some pretty spooky action at a distance. Same with dirtying `baz` (and that could also create perf issues with repeated calls). The only viable option for that is to just silently add the signal as a dependency, but that's entirely fine IMHO as long as there's no interfering `.set()`.
If `foo.set()` runs after both, it's equivalent to the synchronous `bar.get(); foo.get()` case. But if `foo.set()` runs between `baz.get()` and `foo.get()`, that poses a problem. I could see two reasonable ways to handle this:
- Save the current generation in the async context and have `foo.get()` throw if in a future generation. In order to avoid wasting resource usage and running into difficult-to-debug problems with cancelled-but-still-running requests, cancellation (also a dependency of my #222) of the computed callback will need to be done.
- Save the current generation in the async context and have `foo.get()` return the current value always, but not add a subscription if in a future generation. This avoids needing cancellation, but sacrifices consistency of generations within a single async computed call.
Okay, through some recent experimentation I've come up with a slightly different model that IMHO is useful in its own right: `Signal.try(func, ...args)`.
This isn't meant to replace my
isPending
suggestion, but to augment it.
What this does is run a block, catching errors and indirectly capturing pending signal status. It returns the same kind of object that `Promise.allSettled` returns inside its result array, but with an extra `isPending` boolean that's set to `true` if any signal accessed inside is marked as pending, `false` otherwise.
This also delimits calls: signal accesses in child `Signal.try` calls are ignored by parent `Signal.try` calls.
Note that this shouldn't use async context, only global state, as that limits object leakage and it's a lot cheaper.
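A rough sketch of these semantics, using a plain global flag stack in place of real signal integration (`signalTry` and `markPending` are hypothetical stand-ins; `markPending` represents whatever an inner signal read would do when its source is pending):

```javascript
// Each signalTry call pushes a frame; pending status only marks the top
// frame, so child calls are delimited from their parents.
const pendingStack = [];
function markPending() {
  if (pendingStack.length) pendingStack[pendingStack.length - 1].pending = true;
}

function signalTry(func, ...args) {
  const frame = { pending: false };
  pendingStack.push(frame);
  try {
    const value = func(...args);
    return { status: 'fulfilled', value, isPending: frame.pending };
  } catch (reason) {
    return { status: 'rejected', reason, isPending: frame.pending };
  } finally {
    pendingStack.pop();
  }
}

// A nested call keeps its pending status to itself:
const outer = signalTry(() => {
  const inner = signalTry(() => { markPending(); return 1; });
  return inner.isPending; // true, but the outer frame was never marked
});
// outer → { status: 'fulfilled', value: true, isPending: false }
```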
Just coming to this discussion now, but I hit some of the needs described here when using signals to model data structures that have async edges, i.e. to lazy-loaded nodes in the graph or expensive computations.
What I've personally made for this is a class called `AsyncComputed`, which works like this:
- It takes a Promise-returning computation function with an `AbortSignal` argument: `(signal: AbortSignal) => Promise<T>`
- The function is called within an internal `Signal.Computed`, so all synchronous signal access is tracked
- When the internal computed runs, it re-runs the compute function, pre-empting any pending run and aborting its abort signal.
- It has signal-backed getters for its state:
  - `state`: one of `'initial'`, `'pending'`, `'complete'`, or `'error'`. `state` will synchronously change to `'pending'` when dependency signals are dirtied.
  - `value`: the last completion value if the last completion type was `'complete'`, otherwise `undefined`
  - `error`: the last error if the last completion type was `'error'`, otherwise `undefined`
- It has a `complete` getter that returns a Promise that settles when the AsyncComputed does. If a new run is triggered before the previous one completes, the same Promise instance is kept. If a new run is started after a completion, a new Promise is created.
- It has a `get()` method that returns the latest completion value or throws if the last completion was an error.
- It has a `run()` method to manually read from the internal computed and trigger a run. Otherwise the computed is read via the various getters.
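The pre-emption and `complete` bookkeeping described above can be sketched on their own, with the signal tracking omitted. This is a simplified illustration, not the actual AsyncComputed implementation; the class name `AsyncRunner` is hypothetical:

```javascript
// Sketch: latest-run-wins pre-emption with a shared completion Promise.
class AsyncRunner {
  constructor(compute) {
    this.compute = compute; // (abortSignal) => Promise<T>
    this.state = 'initial';
    this.value = undefined;
    this.error = undefined;
    this._controller = null;
    this._complete = null;
    this._resolve = null;
    this._reject = null;
  }
  get complete() {
    return this._complete ?? Promise.resolve(this.value);
  }
  run() {
    this._controller?.abort(); // pre-empt any pending run
    const controller = (this._controller = new AbortController());
    if (this.state !== 'pending') {
      // Start a fresh completion Promise; a pre-empted run keeps the old one.
      this._complete = new Promise((resolve, reject) => {
        this._resolve = resolve;
        this._reject = reject;
      });
      this.state = 'pending';
    }
    this.compute(controller.signal).then(
      (value) => {
        if (controller.signal.aborted) return; // a newer run won
        this.state = 'complete';
        this.value = value;
        this._resolve(value);
      },
      (error) => {
        if (controller.signal.aborted) return;
        this.state = 'error';
        this.error = error;
        this._reject(error);
      }
    );
  }
}
```

In the real class, `run()` would be triggered by reading the internal computed, so dirtying a dependency signal is what flips `state` back to `'pending'`.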
This is somewhat similar to `derivedAsync()` above, Relays, or `signalFunction` from signal-utils; I think the differences are relatively minor. It's more-or-less a subclass of Computed that has extra fields to represent the state, and where `get()` returns the previous completion value while pending.
It's working well enough for me so far that the only thing I think I'm missing from the proposal is tracking signal reads after the first await of the compute function. It would be really great if computeds over async functions could "just work", although that seems difficult to polyfill.
Propagating pending state through the `value` and `error` signals is interesting, but I'm unsure how necessary it is. Chaining AsyncComputeds can rely on `state` and `complete` to convey the async pending state and wait for completion.
Chaining a regular Computed off an AsyncComputed just works.
```js
function filterSync(type, source) {
  return new Signal.Computed(() => {
    const t = type.get();
    return source.get().filter((s) => s.type === t);
  });
}

const source = new AsyncComputed(() => getItems());
```
If `source` here is an AsyncComputed, then `source.get()` will return or throw as it updates, causing the computed to return or throw.
My test for chaining:
```js
test('chaining a computed signal propagates error state', async () => {
  const dep = new Signal.State('a');

  const task = new AsyncComputed(async () => {
    // Read dependencies before first await
    const value = dep.get();
    await 0;
    if (value === 'b') {
      throw new Error('b');
    }
    return value;
  });

  const computed = new Signal.Computed(() => task.get());

  assert.strictEqual(computed.get(), undefined);

  await task.complete;
  assert.strictEqual(computed.get(), 'a');

  dep.set('b');
  await task.complete.catch(() => {});
  assert.throws(() => computed.get());

  dep.set('c');
  await task.complete;
  assert.strictEqual(computed.get(), 'c');
});
```