
Automatic dependency tracking within asynchronous computed functions - proposal to simplify implementations

Open brandon942 opened this issue 4 months ago • 9 comments

I don't have the power to make TC39 proposals since I'm not a member, but there are people here who do. So I'm going to describe a new, useful API that can make asynchronous effect functions easier to implement and more ergonomic to use. In addition, it provides better control over asynchronous functions. An asynchronous function is a promise chain that can be visualized like so: Node_1 -> (f2: Node_A -> Node_B -> Node_C) -> Node_2

effect(async effectContext => {
	// Node_1
	await effectContext.resume(f2(effectContext))
	// Node_2
})
async function f2(effectContext){
	// Node_A
	await effectContext.resume(sleep(1))
	// Node_B
	await effectContext.resume(sleep(1))
	// Node_C
}

Each node represents the code of a new task that runs after the previous task has completed. Each task returns a promise and is chained via promise chaining or generator iteration (await / then() + resolve(), or yield + next()). effectContext.resume enables dependency tracking within each node. This code can be improved by the following API:

var f = async () => {
  await promise1
  await promise2
  return promise3.then(node4Cb).then(node5Cb)
}
f.onStart(cb)    // when the function starts running
f.onResume(cb)   // runs before each node
f.onUnresume(cb) // runs after each node
f.onEnd(cb)      // when the function ends
f.abort()        // aborts all running instances by simply not calling the next node
// then, called from within the asynchronous function:
Function.addCleanup(isAborted => {
  // we take advantage of JavaScript's native closure-context caching here
  if (var1) ; // clean up var1 if it is defined
  if (var2) ; // clean up var2 if it is defined - var2 is defined after NodeX within this function
})

// provide a higher level context accessible from any code in the function at any depth level
f.callWithContext(ctx) // This is also useful for synchronous functions btw
// this context can be retrieved via
Function.getCurrentContext() // gets the nearest context: {parent: parentCtx, value: ctxValue}
// to avoid searching the parent chain for a desired context, a context ID can be passed
f.callWithContext(ctx, "myId")
// then callable from anywhere, even in a library function
Function.getCurrentContext("myId")
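For what it's worth, the synchronous half of callWithContext/getCurrentContext can already be approximated in userland with a context stack. This is a rough sketch under assumed semantics - the function names mirror the proposal, but crossing await boundaries is exactly what userland code like this cannot do:

```javascript
// Userland sketch of callWithContext / getCurrentContext.
// Only works for synchronous call chains; the proposal's point is
// extending this across awaits, which this sketch cannot do.
const contextStack = [];

function callWithContext(fn, value, id) {
  contextStack.push({ value, id });
  try {
    return fn();
  } finally {
    contextStack.pop();
  }
}

function getCurrentContext(id) {
  // Walk from the nearest context outward; match by id if one is given.
  for (let i = contextStack.length - 1; i >= 0; i--) {
    if (id === undefined || contextStack[i].id === id) return contextStack[i].value;
  }
  return undefined;
}

// A library function at any depth can read the nearest matching context.
function someLibraryFunction() {
  return getCurrentContext("myId");
}

const seen = callWithContext(() => someLibraryFunction(), { user: 42 }, "myId");
// seen is { user: 42 }; outside the call, getCurrentContext("myId") is undefined
```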

That's basically it. Simple and straightforward.

brandon942 avatar Sep 07 '25 01:09 brandon942

I don't understand what you're describing up above, but Solid's upcoming signals update supports "async signals" by having signals that are not yet resolved throw a special error (React's new use() hook works very similarly). This error breaks the effect execution (or component execution in React) at that signal's location (other errors are re-thrown as actual errors). Later, when the signal resolves, the execution happens again.

Example in Solid:

createEffect(() => {
  console.log(valueFromNetwork())
  console.log(otherValueFromNetwork())
  console.log('both values have resolved')
})

When this first runs, if valueFromNetwork already has a value from the network, it will log the value. Then if otherValueFromNetwork does not yet have a value, a special error is thrown (and caught) to exit the effect's execution.

Later, when otherValueFromNetwork is resolved, the effect re-runs and logs both values as well as the final "both values have resolved" message.
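To make the mechanism concrete, here is a minimal sketch of the throw-to-suspend technique - not Solid's actual internals, and createAsyncValue/runEffect are made-up names for illustration:

```javascript
// Sentinel that distinguishes "not ready yet" from real errors.
const NOT_READY = Symbol("not-ready");

// Wrap a promise as a synchronous getter that throws until resolved.
function createAsyncValue(promise) {
  let value;
  let resolved = false;
  const ready = promise.then(v => { value = v; resolved = true; });
  return () => {
    if (!resolved) throw { sentinel: NOT_READY, ready }; // suspend the effect
    return value;
  };
}

// Run an effect; if it suspends, re-run it from the top once the value is ready.
function runEffect(fn) {
  try {
    fn();
  } catch (err) {
    if (err && err.sentinel === NOT_READY) {
      err.ready.then(() => runEffect(fn)); // execution happens again, as above
    } else {
      throw err; // real errors stay errors
    }
  }
}

const logs = [];
const value = createAsyncValue(Promise.resolve(42));
runEffect(() => logs.push(value()));
// once the promise settles and the effect re-runs, logs becomes [42]
```

Note the re-run starts from the top of the effect, which is why partial, repeated execution (and the glitch concern below) comes with this approach.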

There are other ways to do this also. In Meteor's signals and effects (named "reactive vars and computations or autoruns" in that framework), later execution of code can be associated with an effect (and "autorun", or a "computation") like so:

// a computation, i.e. an "effect", i.e. an "autorun"
const computation = Tracker.autorun(async () => {
  const response = await fetch(someUrl.get()) // someUrl is a Meteor `ReactiveVar`

  // later, ensure that execution happens with the same "effect":
  Tracker.withComputation(computation, () => {
    console.log(someOtherReactiveVar.get())
  })
})

I find this to be a bit confusing, and easy to get wrong.

I also find "async signals" to be a bit easy to get wrong, because they can execute things multiple times, partially, which technically means glitches if not done carefully. Solid has a new pattern coming out to avoid that, which I'm not sure that I like:

createEffect(() => [someAsyncValue(), otherAsyncValue()], (someValue, otherValue) => {
  // effect code goes here. This callback runs once all signals in the first function have resolved, and this second callback does not run multiple times, only one time after resolved dependencies.
  console.log('both values resolved')
})

It's dependency arrays again. The very thing React Compiler tries to get rid of, haha. 🥲

Purely synchronous effects are nice. We can make abstractions for async values out of those instead. It's a little more verbose, but it looks like this, using null or undefined for values that are not yet "resolved":

createEffect(() => {
  const valueFromNetwork = getValueFromNetwork(someUrl)
  const otherValueFromNetwork = getOtherFromNetwork(someOtherUrl)

  if (!valueFromNetwork() || !otherValueFromNetwork()) return // wait for both values.

  console.log('both values resolved')
})

If the second value depends on the first, patterns can be abstracted to make that possible:

createEffect(() => {
  const valueFromNetwork = getValueFromNetwork(someUrl)
  const otherValueFromNetwork = getOtherFromNetwork(someOtherUrl, valueFromNetwork)

  if (!otherValueFromNetwork()) return // wait for both values (sequentially, this time).

  console.log('otherValueFromNetwork resolved after valueFromNetwork')
})

where getOtherFromNetwork makes an effect that does its own early return for the dependency value.

You can see a pattern there. Maybe a shared abstraction for those would be:

  const valueFromNetwork = createFetch(someUrl) // no dependency
  const otherValueFromNetwork = createFetch(() => valueFromNetwork() ? someOtherUrl() : null) // if url is falsy, it waits (using an effect internally).

The really nice thing about this, which far outweighs plain async/await, is that functions like getValueFromNetwork and getOtherFromNetwork can have their own onCleanups for robustness and repeatability. For example, if someUrl() is a signal and it changes mid-request, the effect will run cleanups before re-running. A similar thing applies to React Hooks' cleanup functions returned from useEffect (though React effects are not nestable, greatly limiting their utility), and similarly in other frameworks like Svelte that have adopted signals (and nestable effects!).

All too often, I see leaky async/await code that is not robust to cancellation and repetition.

The above example is very simplistic. This pattern scales much better than loose async/await code everywhere, and the mess it becomes when you wrap async/await in try-catches to try to make it cancellable and repeatable.

Here's how effect nesting works in Solid (or Svelte, etc):

function createFetch(urlSignal) {
  const [response, setResponse] = createSignal() // initially undefined (no response yet)

  createEffect(() => {
    if (!urlSignal()) return

    const aborter = new AbortController()
    fetch(urlSignal(), {signal: aborter.signal}).then(setResponse)
    onCleanup(() => {
      // robust, repeatability-enabling, colocated cleanup code.
      aborter.abort()

      // For good measure, ensure that any effects in a different effect tree clean up too (a good practice to set values back to non-existing).
      setResponse(undefined) // This will make any effects in other effect trees clean up, and early return (or show loading state, etc) while waiting for the next value
    })
  })

  return response
}

import {otherValue, anotherValue} from 'somewhere'

createEffect(() => {
  const [someUrl, setSomeUrl] = createSignal()

  createEffect(() => setSomeUrl(otherValue() + anotherValue())) // nested effect!

  const valueFromNetwork = createFetch(someUrl) // nested effect!

  createEffect(() => { // nested effect!
    if (!valueFromNetwork()) return
    console.log('network value loaded:', valueFromNetwork())
  }) 
})

But this is not the best example. It is contrived. For derived values such as someUrl, always use a "memo" (or "computed" in other terminologies):

import {otherValue, anotherValue} from 'some-signals-library'

createEffect(() => {
  const someUrl = createMemo(() => otherValue() + anotherValue())
  const valueFromNetwork = createFetch(someUrl) // nested effect!

  createEffect(() => { // nested effect!
    if (!valueFromNetwork()) return
    console.log('network value loaded:', valueFromNetwork())
  }) 
})

I find this async approach to be more manageable than the other two options because:

  • it is more explicit than async signals; the early returns make things more obvious
  • complexity does not live in the effect system: it's just simple synchronous effects, and the abstractions that create signals are straightforward
  • having to remember to run code with something like Tracker.withComputation in Meteor or runWithOwner, and to do it the right way, is more error prone

trusktr avatar Sep 22 '25 06:09 trusktr

I'm describing language features that we really need for dealing with promises, which are a bit too opaque right now, and for non-synchronous flows that need to be accompanied by data. I've proposed a nice API for it, for what it's worth.

You've gone into great detail about how frameworks are trying to solve these issues; I appreciate that. I'm not sure how to feel about what they've come up with, though. The method that involves throwing errors in order to stop execution has got to be really bad performance-wise. Its only benefit is that the user doesn't have to write the logic for terminating the function himself - and he shouldn't have to. The other method relies on passing down a helper object that carries the library's data along the flow. That's a better approach, the only viable one right now imo, but the user is burdened with the obligation to strictly follow a set of rules the library imposes. This is what the proposed API can solve. The user should not be forced to help a library make its internal implementation work. It creates code clutter, as you have pointed out.

I have recently also created my own solution for enabling asynchronous effects, and I have integrated it into Vue in a fork. My method also involves passing a helper object to the effect function and burdening the user with it. But it has more features, like automatic caching, branch skipping/optional execution, cleanups both at the end of the effect and at the start of the next run, nestable branch cleanups, concurrency, abortion, and different tracking and scheduling modes. You can check out what it does here. There are more ideas, like self-contained branches, which is something like the nested effects you asked for, and some unresolved questions. But the shape of the API is not where I want it.

A better API can only be made possible with better language features like the one I have described above. I really like things to be as simple as possible but right now some things just can't be.

brandon942 avatar Sep 22 '25 18:09 brandon942

For what it's worth, I don't know that this has a ton of bearing on this specific issue, but I thought I'd add my perspective on the types of solutions that can exist in this space.

Tracking after await is one of those things where there is value to it, but like anything, people tend to misuse it right away.

Like this isn't great:

effect(async () => {
  const url = urlSignal();
  const data = await fetch(url);
  const other = otherSignal();
  doSomething(data, other);
})

If you update otherSignal, you are refetching - even with something like resume. It's almost always better to break it into two:

const dataSignal = computed(() => {
  const url = urlSignal();
  return fetch(url);
});
effect(async () => {
  const data = await dataSignal();
  const other = otherSignal();
  doSomething(data, other);
});

Two tradeoffs come to mind. First, the effect is always async, even if it could resolve the value synchronously. This is probably acceptable. Second, it has a rippling impact downstream of where you read: you are passing around Signal<Promise<T>> from that point downwards, and everything derived from it is also Signal<Promise<T>>. It has a pretty extreme coloring effect.

Historically most solutions would just:

const dataSignal = signal();
effect(async () => {
  const url = urlSignal();
  const data = await fetch(url);
  dataSignal.set(data);
});
effect(() => {
  const data = dataSignal();
  if (data) {
    const other = otherSignal();
    doSomething(data, other);
  }
});

There is never a need to track after await here. This of course breaks the dependency graph: if there were more derived things, you couldn't tell they are derived from dataSignal, and code requesting that data couldn't tell it is pending. Well, in a sense you maybe could tell if it is undefined. Making things nullable, though, is a similar coloration: everything now needs to pass a null check or bypass executing downstream. It also means the effect runs twice, but the first run, while it is waiting, is basically inconsequential because no work is done and you are waiting anyway. If there were more chained async things, you'd check them all at once, or you'd be better off breaking them into separate computations. This pattern holds, I think.

As @trusktr pointed out, you could nest these. But it becomes unnested once you have multiple consumers: you end up writing to a signal and continuing down the chain.

Solid (currently) and, I think, new Angular are taking a tack that basically marries these together - the nullability with the graph preservation:

const dataSignal = asyncComputed(() => {
  const url = urlSignal();
  return fetch(url);
});
effect(() => {
  const data = dataSignal();
  if (data) {
    const other = otherSignal();
    doSomething(data, other);
  }
});

It keeps the nullable coloration but preserves the dependency graph. The benefit of nullable coloration over promises is that the value can be read at any point synchronously. And while it impacts API interfaces, it can be resolved pretty easily inline without async functions, etc., which lends itself well to templating. This is also harder to get wrong: you won't forget to resume, and TS will shout at you about null checks.

Of course, nullability and Signals are kind of awkward when we use functions, since they can't be marked as idempotent. Recently, frameworks have been entertaining solutions to remove some of the coloration, where either they are responsible for breaking up the async zones themselves via a compiler (Svelte) or they use some sort of simulated continuation via throwing (React, Solid 2.0):

const dataSignal = asyncComputed(() => {
  const url = urlSignal();
  return fetch(url);
});
effect(() => {
  const data = dataSignal();
  const other = otherSignal();
  doSomething(data, other);
});

This of course leads the effect to run twice when no value is present and it throws. But so do the last couple of examples. Of course, if people disperse their computation across where they do their reads, this can lead to extra work. Again, the advice is to break it into multiple computations. A recurring theme.


Anyway, how to approach this from a standards perspective? Tricky. Some optimizations, especially around disposal, are based on whether something has sources. I do like that this proposal gets in before and after - it wraps the promise rather than just calling resume.

This means it can be arranged so that people don't get an arbitrary runWithObserver function; it has to start from the context that was tracking. Realistically, this context should be passed into every computation, especially when fanning out async in the graph - sometimes you do need to read early. I wonder if passing it through could be avoided, since that puts a burden on async API surfaces. Like, can it just be:

effect(async ()=> {
	// Node_1
	await resume(f2())
	// Node_2
})
async function f2(){
	// Node_A
	await resume(sleep(1))
	// Node_B
	await resume(sleep(1))
	// Node_C
}

I imagine resume would be hung off some global like watcher.subtle or something, but in this way it might be less abusable, since it can only track the context in which it originated, and we could presumably know when the effect is finally done. If someone missed a resume somewhere, it isn't going to break the chain; it is going to error if there is no initial context. Anyway, that's about all I have on this topic right now.
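For illustration, a rough userland approximation of these assumed semantics - `effect`, `resume`, and `track` here are illustrative stand-ins, not the proposal's actual API - shows both what works and where userland falls short:

```javascript
// Illustrative stand-ins, not the proposal's actual API.
let currentContext = null;

function effect(fn) {
  const ctx = { deps: new Set() };
  const prev = currentContext;
  currentContext = ctx;
  try {
    ctx.result = fn(); // an async fn runs synchronously up to its first await
  } finally {
    currentContext = prev;
  }
  return ctx;
}

// A signal read would call this to register itself with the active context.
function track(dep) {
  if (currentContext) currentContext.deps.add(dep);
}

// resume: re-enter the originating context when the awaited promise settles.
// Errors if there is no active context, as suggested above. Note the restored
// context is never cleared again, so it leaks into unrelated microtasks -
// the part userland cannot solve and a language feature could.
function resume(promise) {
  const ctx = currentContext;
  if (ctx === null) throw new Error("resume() called outside a tracking context");
  return Promise.resolve(promise).then(value => {
    currentContext = ctx;
    return value;
  });
}

const ctx = effect(async () => {
  track("dep-before-await"); // Node_1
  await resume(Promise.resolve());
  track("dep-after-await"); // Node_2
});
// after ctx.result settles, ctx.deps contains both dependencies
```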

ryansolid avatar Sep 25 '25 16:09 ryansolid

If AsyncContext is added to the language, do Signals need to do anything to support an AsyncComputed, or can it be a user-land construct?

DavidANeil avatar Sep 25 '25 17:09 DavidANeil

If AsyncContext is added to the language, do Signals need to do anything to support an AsyncComputed, or can it be a user-land construct?

It's an interesting question. Like the proposal at the top wouldn't be necessary to achieve that goal. You wouldn't need resume and it would work.

That all being said, AsyncComputed in the way I use it in my examples is a different thing. It is more of a promise-to-signal conversion. It maybe could benefit from AsyncContext, but it also doesn't need it. If you notice, nothing after the first two examples I wrote reads a signal after an await, because in those situations the unit of computation is generally synchronous conceptually (even if the value isn't available yet).

So it's a bit of a question of whether we should have two things, Signals and Promises - which, to be fair, are two different things that do exist and can work together - or whether there are benefits to making the API surface homogeneous by basically absorbing the promises as soon as possible. Frameworks can be incentivized to do the latter, since they are concerned about composition and reusability. It would be great if components didn't need to change their signatures at the possibility of async.

ryansolid avatar Sep 25 '25 17:09 ryansolid

Without using useEffect, you can create a helper function like this:

// null means the value is still being computed
const asyncComputed = <T, R>(
    value: Signal.Computed<T>,
    compute: (value: T) => Promise<R> | R
): Signal.Computed<R | null> => {

    const result = new Signal.State<R | null>(null);
    let requestFor: T | null = null;

    const process = async (value: T) => {
        if (requestFor === value) {
            return;
        }

        requestFor = value;

        const computeResult = await compute(value);

        if (requestFor === value) {
            result.set(computeResult);
        }
    };

    return new Signal.Computed(() => {
        const currentValue = value.get();
        process(currentValue).catch(console.error);
        return result.get();
    });
};

szagi3891 avatar Sep 25 '25 19:09 szagi3891

Because in those situations generally the unit of computation is always synchronous conceptually (even if the value isn't available yet).

It's clever that it looks sync, but I prefer nullable values (f.e.

createEffect(() => {
  if (!data) return // wait for data
})

) because it is more explicit. With the throwing trick, the code starts to get into the territory of not really understanding what is async anymore. Theoretically that shouldn't matter in a perfect world where the entire system is 100% declarative, but in reality we have lots and lots of non-declarative procedural APIs to contend with in JS and the DOM. When I map JS/DOM APIs to signals, I like that the null checking tells me something takes time before it is available.

I guess it's a tradeoff: we're not in a perfect declarative environment, so the explicitness is possibly nicer than if we were.

It would be great if components didn't need to change their signatures at the posibility of async.

This is an interesting idea, but I suppose some people will raise the point that async/await requires converting downstream functions to async. It's work, but it also makes things more obvious anywhere it permeates.

I remember back in the day when I was learning Java (not JavaScript), all the code was synchronous in style. I never knew what was async, f.e. on a separate thread. This made it easier to cause problems. When I learned async in JavaScript after Java (callback hell), it was brain gymnastics, but then I started to fully know what was happening in parallel, and async/await simply made that cleaner.

trusktr avatar Oct 19 '25 22:10 trusktr

Without using useEffect, you can create a helper function like this:

//null, means calculating the value
const asyncComputed = <T, R>(

That doesn't seem to be cleaning up previous async processes, so changing value a number of times before any async process finishes will pile up a number of parallel processes, although only the last one will be used.

How would you update this to handle that?

What I'm curious about is how vanilla TC39 Signals will compare to frameworks that have both effect and cleanup APIs (f.e. createEffect and onCleanup in Solid.js, and very similar in others).

I want to be convinced that shipping Signal.subtle.Watcher is actually a good idea. To that end, if you could show a complete end-user example (without importing libraries, fully self-contained), that'd be great.

trusktr avatar Oct 19 '25 22:10 trusktr

This is a really cool approach: treating a promise as a computed value.

https://signalium.dev/core/reactive-promises

If you combine this with a computed that can take arguments, the above problem becomes trivial to implement.
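To sketch what I mean - this is a hypothetical illustration, not Signalium's actual API - an async function can be wrapped so each argument tuple gets one cached record whose pending/value state is readable synchronously, like a computed that takes arguments:

```javascript
// Hypothetical sketch of the "reactive promise" idea: one cached record
// per distinct argument tuple, with synchronously readable state.
function reactiveAsync(fetcher) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      const record = { pending: true, value: undefined };
      // Kick off the async work once per distinct argument tuple.
      record.promise = Promise.resolve(fetcher(...args)).then(result => {
        record.pending = false;
        record.value = result;
        return result;
      });
      cache.set(key, record);
    }
    return cache.get(key); // repeated calls reuse the same record
  };
}

const double = reactiveAsync(async n => n * 2);
const record = double(21);
// record.pending is true until the promise settles; double(21) returns the same record
```

In a real signal system, reading `record` inside an effect would also subscribe it, so the effect re-runs when pending flips to false.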

szagi3891 avatar Oct 21 '25 17:10 szagi3891