Expose an `aborted` promise on AbortSignal?
Hey,
Node.js AbortController integration is working great - almost all Node.js APIs now take a signal parameter to allow cancellation.
The Problem
I've spoken to a bunch of library authors about AbortSignal ergonomics, and one piece of feedback that has come up repeatedly is that the following pattern is very common and rather easy to get wrong:
if (signal.aborted) {
// cleanup
} else {
const listener = () => resource.cleanup();
signal.addEventListener('abort', listener);
resource.on('close', () => signal.removeEventListener('abort', listener));
}
// Happy to point to 8+ places in the Node.js code base where we have this sort of code
The issue is that this is pretty easy to get wrong and cause memory leaks because:
- You can forget to remove the listener correctly when the resource is finished and leak memory.
- You can forget to check the signal is aborted ahead of time and deal with both cases.
One alternative suggested was to have a utility method that helps with this; the above code becomes:
// the second parameter is so the listener is removed when the resource gets GCd.
aborted(signal, resource).then(() => resource.cleanup());
// just returns a regular promise
aborted(signal);
It was also brought up that this has several other use cases, like Promise.race([somePromiseICantCancel, aborted(signal)]), which is very nifty because it integrates AbortSignal with promise combinators without requiring JS spec changes.
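As a rough sketch of that combinator use case (raceWithSignal is a made-up name and aborted() is the proposed helper, not an existing API):

async function raceWithSignal(signal, somePromiseICantCancel) {
  return Promise.race([
    somePromiseICantCancel,
    aborted(signal).then(() => {
      // surface the abort as a rejection so callers can catch it
      throw new DOMException('The operation was aborted', 'AbortError');
    }),
  ]);
}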
Ref: https://github.com/nodejs/node/issues/37220 and https://github.com/getify/CAF/issues/21
Idea
It would be very cool if instead of Node.js shipping this as a utility this could be part of the specification (a method on AbortSignal). That way we have a consistent cross-platform way to handle the use case.
Doubts
I am still not entirely convinced this is what we should do (either for Node or WHATWG). I think this is pretty nifty but I wanted to ask (the nice) people here if there are any alternatives (both for the Node.js utility and for the spec).
Additionally, I wanted to know if browsers think this is even an issue and find it interesting. I think this is important since it makes composing cancellation APIs without bugs easier. Regardless of whether browser interest is a prerequisite for spec inclusion, it would be very useful to me as information when deciding what route to pursue in Node.js.
Alternatives
I know I wrote this above as well but I am very interested in knowing if other people thought about this problem as well and have different ideas about how to solve it.
Implementation
I am happy to contribute the needed spec changes to the WHATWG spec if you are willing to put up with the low quality of my work. The feedback provided (by Domenic and Anne) last time was very helpful and I learned a bunch.
(I am starting a new job soon at a company that is a member of WHATWG, and I think I'll need to re-sign the license.)
cc helpful people who helped with the ideas last time I worked on related stuff @domenic @annevk @jakearchibald
cc library authors I talked to about cancellation and brought up use-cases that encouraged this pattern @bterlson @benlesh @getify
Note 1: I am putting on my Node.js hat because that's the one I put on when I talked to users - but to be fair/honest the majority of use cases people brought up were browser API based and not Node.js.
Note 2: As mentioned in the .follow issue I am still very much "on it" and in fact this utility could significantly simplify that problem (hopefully!).
Note 3: I am in no rush, I am hopeful that this will be slightly less mind-melting than that .follow stuff :D
You can forget to remove the listener correctly when the resource is finished and leak memory.
Just to clarify, the resource only stays in memory until signal is GC'd, at which point it's all collected.
One alternative suggested was to have a utility method that helps with this; the above code becomes:
// the second parameter is so the listener is removed when the resource gets GCd.
aborted(signal, resource).then(() => resource.cleanup());
What mechanism is it using for that? Weakrefs?
@jakearchibald in Node.js yes - but:
- In spec-land this would probably be just explained (like other things in the GC section of the spec).
- In browsers this would probably be done in C++ land.
I am already prototyping (internal only) weak listeners in Node.js :] https://github.com/nodejs/node/pull/36607
Just to clarify, the resource only stays in memory until signal is GC'd, at which point it's all collected.
The issue is that sometimes you have a (very) long-lived signal - a common pattern observed is:
const ac = new AbortController();
process.on('SIGINT', () => ac.abort());
// Use ac around in the project ...
I have seen this pattern in Node.js and in browsers (mostly around router navigation).
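To make the leak concrete, here is a hedged sketch of what tends to happen with such a signal (handleRequest, doWork and req are stand-ins):

async function handleRequest(req) {
  const onAbort = () => req.destroy();
  ac.signal.addEventListener('abort', onAbort);
  try {
    await doWork(req);
  } finally {
    // if this removal is forgotten, onAbort (and everything it closes over, like req)
    // stays reachable for as long as ac.signal is alive - potentially the whole process
    ac.signal.removeEventListener('abort', onAbort);
  }
}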
If you replace this with a promise you would no longer be able to synchronously act on it already being aborted, right?
I think we do have this pattern, e.g., https://fetch.spec.whatwg.org/#dom-global-fetch first does a synchronous check and then adds a listener. And https://dom.spec.whatwg.org/#aborting-ongoing-activities-spec-example recommends it. But it seems to me that changing it to await a promise might change the runtime semantics.
@benjamingr is there a real example you could point to that would benefit from this?
I find that I'm either using checkpoints, which is why I proposed https://github.com/whatwg/dom/issues/927, or I'm taking a thing that doesn't support signal and passing it to something like this:
async function abortable(signal, promise) {
if (signal.aborted) throw new DOMException('AbortError', 'AbortError');
let listener;
return Promise.race([
new Promise((_, reject) => {
listener = () => reject(new DOMException('AbortError', 'AbortError'));
signal.addEventListener('abort', listener);
}),
promise,
]).finally(() => signal.removeEventListener('abort', listener));
}
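For example (a hedged usage sketch; signal is assumed to be in scope, and waitForMessage/worker are stand-ins for something with no built-in signal support):

try {
  const result = await abortable(signal, waitForMessage(worker));
  // use result
} catch (err) {
  if (err.name === 'AbortError') {
    // the signal fired before waitForMessage(worker) settled
  }
}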
you would no longer be able to synchronously act on it already being aborted, right?
The signal.aborted boolean allows synchronous checking.
Yeah, I suppose you could use both, but that seems inelegant?
I suppose you could use both, but that seems inelegant?
I am regularly using both (via my CAF library). I consider the need to use both an artifact of the larger dichotomy between sync and async and the balancing act we play between lifting everything to promises (for nicer code) vs sometimes needing to optimize with a quicker sync path. IOW, that "cost" is already baked in as far as I'm concerned.
So I'd be happy to see the platform add the promise (rather than replace the boolean) so that these dual efforts are more canonical (and likely more efficient) than doing so manually in my own lib and userland code.
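A small sketch of that dual usage, assuming the proposed aborted() helper exists (bailOutNow and scheduleCleanup are stand-ins):

if (signal.aborted) {
  // quick synchronous path
  bailOutNow();
} else {
  // promise path for the code that is already async anyway
  aborted(signal).then(() => scheduleCleanup());
}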
It isn't clear to me how a promise that only ever resolves on 'abort' solves the problem. Doesn't that just become a different leak point?
@jakearchibald sure, that's a good question/point.
Here are two quick examples:
With timers (since they're the simplest API):
// before
async function setTimeout(ms, { signal } = {}) {
return await new Promise((resolve) => {
const timer = globalThis.setTimeout(() => {
signal?.removeEventListener("abort", listener);
resolve();
}, ms);
const listener = () => clearTimeout(timer) || resolve(Promise.reject(new AbortError()));
if (signal?.aborted) {
listener();
} else {
signal?.addEventListener("abort", listener);
}
});
}
After:
// after
async function setTimeout(ms, { signal } = {}) {
return await new Promise((resolve) => {
const timer = globalThis.setTimeout(() => {
resolve();
}, ms);
// works in Node.js since timers are objects
aborted(signal, timer).then(() => clearTimeout(timer) || || resolve(Promise.reject(new AbortError())));
});
}
With a database layer:
class DB {
// wraps a database with AbortSignal API support
async query(string, { signal } = {}) {
await this.open();
const query = this.createQuery(string);
if (signal?.aborted) {
query.cancel(new AbortError());
} else {
signal?.addEventListener('abort', () => {
query.cancel(new AbortError());
});
}
return await query.promise();
}
// rest of the implementation
}
After:
class DB {
// wraps a database with AbortSignal API support
async query(string, { signal } = {}) {
await this.open();
const query = this.createQuery(string);
aborted(signal, query).then(() => query.cancel(new AbortError()))
return await query.promise();
}
// rest of the implementation
}
Let me know if the examples are what you had in mind or you had something else in mind :]
I haven't been following much of the subsequent discussion. However, my main comments on the OP are around the listener removal.
In particular, my understanding is that the proposal is not equivalent to the OP's
resource.on('close', () => signal.removeEventListener('abort', listener));
example, but instead something more like
// This code probably leaks the signal, but I think a more complicated version could avoid that...
const registry = new FinalizationRegistry(() => {
signal.removeEventListener('abort', listener);
});
registry.register(resource);
because the web platform (and really Node as well) don't have a uniform 'close' event for indicating when a resource is disposed. Is this understanding correct? If so, is this divergence problematic?
This might be the first time the web platform uses GC observation in such a way? But, this seems like a reasonable use of GC observation: it's using GC observation to help other things get GCed, not to do important program logic. So I'm not too worried there.
Another angle is, how does this contrast with weak listeners? My impression (but not a very confident one) is that this proposal actually has two parts: weak listeners, plus sugar to bundle together the aborted check and the weak listener attachment. That is, it could be written as
function aborted(signal, resource) {
if (signal.aborted) {
return Promise.resolve();
}
else {
return new Promise(resolve => {
// hypothetical method
signal.addWeakEventListener('abort', resolve, { fireOnlyIfStillAlive: resource });
});
}
}
I think this addWeakListener(..., { fireOnlyIfStillAlive }) function was discussed in https://github.com/whatwg/dom/issues/243 some years ago. This is different from the notion of weak listeners discussed in https://v8.dev/features/weak-references, which is roughly that the listener itself should not be kept alive by the EventTarget. I'm not sure which category https://github.com/nodejs/node/pull/36607 falls into?
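For contrast, here is a rough userland approximation of the v8.dev-style weak listener, where the listener itself is held weakly (addWeakEventListener here is a made-up helper, not a platform API):

function addWeakEventListener(target, type, listener) {
  const ref = new WeakRef(listener);
  const wrapper = (event) => {
    const fn = ref.deref();
    if (fn) {
      fn(event);
    } else {
      // the real listener has been collected, so the wrapper is no longer useful
      target.removeEventListener(type, wrapper);
    }
  };
  target.addEventListener(type, wrapper);
  return wrapper;
}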
Anyway, my instinct is that if we want a feature like this, it'd be best to spec both the high-level helper, and the lower-level building block (like a weak listener).
it's using GC observation to help other things get GCed, not to do important program logic. So I'm not too worried there.
That's a good point.
Another angle is, how does this contrast with weak listeners? My impression (but not a very confident one) is that this proposal actually has two parts: weak listeners, plus sugar to bundle together the aborted check and the weak listener attachment. That is, it could be written as
That is exactly the sort of implementation I've had in mind - I initially investigated weak listeners for .follow but the (internal) weak listeners PR will have to land.
Note that having such an aborted provides significantly fewer capabilities to users than exposing weak event listeners in general. Exposing the lower-level building block is indeed more powerful but also more dangerous.
There is precedent for things holding other things weakly in the spec (for example, mutation observers have a weak reference to the node they are observing). This use case might just be a generalisation of that one.
I'm happy to follow up the (internal) weak listeners PR in Node.js with an addWeakListener util helper but:
- I'm not sure if it's a good idea. Intuitively it's very easy to misuse, but I have no evidence for that.
- I'd rather do this through the standards path, same reason this issue exists.
The idea is to provide a safe API for users to use to listen to the signal becoming aborted.
OK, I'm glad to confirm my understanding of the proposal as (a type of) weak listeners plus some sugar.
I take your point that it might be less scary to expose only the higher-level API. IMO the cat's out of the bag since WeakRefs and FinalizationRegistry exist, and it's scarier to make people use those directly than it is to give them higher-level "do what I mean" sugar. But I have no idea if that's a widely-shared opinion.
Let's see what other folks think!
Here's my understanding of the problem: An async task uses a signal to know when/if the task should be aborted. The listener the async task adds to the signal is no longer needed once the task is over, or once the listener is called (which should also end the task). However, the listener added by the async task ends up in reference as long as the signal is still in reference, which in some cases is much longer than necessary, since the signal's lifetime extends beyond the single async task.
So, a solution needs to have some knowledge of the length of the async task.
Here's the solution from @benjamingr:
// after
async function setTimeout(ms, { signal } = {}) {
return await new Promise((resolve) => {
const timer = globalThis.setTimeout(() => {
resolve();
}, ms);
// works in Node.js since timers are objects
aborted(signal, timer).then(() => clearTimeout(timer) || resolve(Promise.reject(new AbortError())));
});
}
(I removed a || || from the code above, sorry if that's part of an upcoming proposal I'm not aware of)
I'm really struggling to understand how the above works. I'm never the smartest developer in the room, but I think others would struggle with it too.
In the above, assuming the reference that the body of aborted has to timer is weak, timer is out of reference once the promise returned by aborted(signal, timer) does one of the following:
- Rejects
- Fulfills and its reaction callback is called
- The promise loses the ability to fulfill or reject due to other things going out of reference
It fulfills if the signal is aborted, but I don't understand when the other things can happen. From other comments in the thread, it seems like finalization of the timer object is used in some way, but it seems circular. What am I missing?
(I removed a || || from the code above, sorry if that's part of an upcoming proposal I'm not aware of)
No no that was just a typo ^^
I'm really struggling to understand how the above works. I'm never the smartest developer in the room, but I think others would struggle with it too.
Basically there are two problems we've encountered with AbortSignals:
- It is very easy to forget to remove event listeners or to do so incorrectly.
- The code that checks if a signal is aborted and then adds the listener is almost always repeated.
In the above, assuming the reference that the body of aborted has to timer is weak, timer is out of reference once the promise returned by aborted(signal, timer) rejects, fulfills and its reaction callback is called, or the promise loses the ability to fulfill or reject due to other things going out of reference.
Basically - in aborted(signal, timer) it adds a listener to the signal and binds its lifetime to the lifetime of the timer. There are three scenarios:
- If the signal was already aborted, the promise fulfils immediately - the timer gets removed and the timer promise is rejected with abort error.
- If abort() is called on the corresponding AbortController while the timer is still waiting to run, the promise fulfils and the timer gets cleared.
- If however timer finishes cleanly and goes out of scope, the signal does not retain the timer, preventing the memory leak and providing the ergonomic API.
From other comments in the thread, it seems like finalization of the timer object is used in some way, but it seems circular. What am I missing?
const retainerMap = new WeakMap();
function aborted(signal, retainer) {
if (signal.aborted) return Promise.resolve();
return new Promise(resolve => {
// only hold the listener as long as "retainer" is alive, this means if retainer is not alive no one has a reference
// to the resolve function and the promise can be GCd (and so can its then callbacks etc)
// In node we implement this with a FinalizationRegistry and WeakRefs
signal.addEventListener('abort', resolve, { magicInternalWeakOption: retainer });
// explicitly make the retainer hold a reference to the resolve function so we have a strong reference to it
// and the lifetime of `resolve` is bound to `retainer` since it's the only one that holds it strongly
// note that this is a weak map so `retainer` itself is held weakly here.
retainerMap.set(retainer, resolve);
});
}
I'd also like to emphasise that I'm not fixated on this particular solution. I just brought it up as an idea, and I'm not convinced myself that it's a good one, though I am tempted to implement it as experimental and see what people say.
I figured that it should be a promise because that's our async primitive for "timing independent multicast one-off things". I figured it needs a way to tie its lifetime to a resource since that's the bit everyone is doing manually in the meantime.
It's entirely possible there are much better ways to accomplish this I haven't thought of yet. I am in no hurry :]
Based on the idea that a solution needs to have some knowledge of the length of the async task, here's a pattern:
function abortableSetTimeout(ms, { signal } = {}) {
return abortableTask(signal, (setOnAbort) => new Promise((resolve) => {
const timerId = setTimeout(() => resolve(), ms);
setOnAbort(() => clearTimeout(timerId));
}));
}
Where this is the implementation of abortableTask:
/**
* @param {AbortSignal | undefined} signal
* @param {(setAbort: (abortCallback: (() => void) | undefined) => void) => Promise} taskCallback
*/
async function abortableTask(signal, taskCallback) {
if (signal?.aborted) throw new DOMException('', 'AbortError');
let onAbort, listener;
const setOnAbort = (callback) => { onAbort = callback };
const promise = taskCallback(setOnAbort);
return Promise.race([
new Promise((_, reject) => {
listener = () => {
onAbort?.();
reject(new DOMException('', 'AbortError'));
};
signal?.addEventListener('abort', listener);
}),
promise,
]).finally(() => signal?.removeEventListener('abort', listener));
}
Some benefits of this:
- The task doesn't run at all if the signal has already aborted.
- setOnAbort can be called multiple times, which handles cases where the steps needed to abort/cleanup change during different phases of the task.
- Works with primitives (timerId doesn't need to be an object).
- Works using today's API, so it can be polyfilled, or just shipped as a library.
I don't think it leaks, but I'd like a second opinion on that 😄.
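For completeness, calling the abortableSetTimeout wrapper above looks like this (just a usage sketch of the code already shown):

const controller = new AbortController();
abortableSetTimeout(1000, { signal: controller.signal })
  .then(() => console.log('timer fired'))
  .catch((err) => console.log(err.name)); // "AbortError" if aborted first
// aborting before the timeout fires clears the timer and rejects the promise
controller.abort();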
Here's the DB example:
class DB {
// wraps a database with AbortSignal API support
query(string, { signal } = {}) {
return abortableTask(signal, async (setOnAbort) => {
await this.open();
const query = this.createQuery(string);
setOnAbort(() => query.cancel());
return query.promise();
})
}
// rest of the implementation
}
@jakearchibald this is pretty similar to what James had in mind in https://github.com/nodejs/node/issues/37220#issuecomment-773370510
@getify what do you think? Would Jake's version of the idea (returning a promise) address your use case?
@jasnell any opinion on the above API compared to what you discussed in https://github.com/nodejs/node/issues/37220#issuecomment-773370510 ?
I gave it a spin on squoosh.app, which uses abort signals quite a bit https://github.com/GoogleChromeLabs/squoosh/pull/954/files - it helped in one place!
The other helpers I needed were assertNotAborted, and a way to make a promise appear to be abortable. The implementation of the latter becomes:
/**
* Take a signal and promise, and returns a promise that rejects with an AbortError if the abort is
* signalled, otherwise resolves with the promise.
*/
export function abortable(signal, promise) {
return abortableTask(signal, () => promise);
}
…which is kinda nice.
abortableTask is the wrong name though, since it clashes with tasks in HTML.
what do you think? Would Jake's version of the idea (returning a promise) address your use case?
I'm not sure, lemme try to clarify. Could it be used kinda like this?
var ac = new AbortController();
var pr = abortableTask(ac.signal,async (setOnAbort) => {
setOnAbort(() => "aborted!");
});
pr.then(msg => console.log(msg));
ac.abort();
// aborted!
Am I understanding it correctly?
Or would it be:
var ac = new AbortController();
var pr = abortableTask(ac.signal,async () => {
return "aborted!";
});
pr.then(msg => console.log(msg));
ac.abort();
// aborted!
The second one. The implementation works, so you can test it yourself.
The implementation works, so you can test it yourself.
Sorry, didn't see the implementation, missed that.
So, now that I've looked at it and played with it, I think I would end up using it like this:
var ac = new AbortController();
ac.signal.pr = abortableTask(ac.signal,() => new Promise(()=>{}));
// elsewhere
setTimeout(()=>ac.abort(),5000);
try {
await Promise.race([
someAsyncOp(),
ac.signal.pr,
]);
}
catch (err) {
// ..
}
So, IOW... a util like abortableTask(..) would save me doing the addEventListener(..) stuff, but it's a bit wonky that I essentially need to create a never-resolving promise to ensure that only the cancellation token ends up resolving (rejecting) the pr promise.
It's slightly beneficial for my purposes, but not as nice as if there was just a pr (of whatever name) as a lazy getter or method on the signal, or (as proposed earlier), a util that does only the addEventListener(..) part of the equation and not the task stuff.
On that last point, is there any chance that abortableTask(..) could have the second param as optional, so it skipped that part of the machinery if omitted (as would effectively be the case in my usage)?
I've been doing a similar thing since Node got support for AbortSignal, basically a wrapper that calls a function with a linked abort signal that gets disconnected once the promise returned from the function resolves:
// doAbortable.ts
export default async function doAbortable<R>(
signal: AbortSignal | undefined,
func: (abortSignal: AbortSignal) => R | Promise<R>,
): Promise<R> {
const innerController = new AbortController();
const abortInner = () => innerController.abort();
if (signal?.aborted) {
throw new AbortError();
} else {
signal?.addEventListener("abort", abortInner, { once: true });
}
try {
return await func(innerController.signal);
} finally {
// this allows innerController to be garbage collected
// and hence if nothing is holding a strong reference to the signal
// that too may be collected
signal?.removeEventListener("abort", abortInner);
}
}
export default function animationFrame(abortSignal) {
// the inner abortSignal can be garbage collected
// once the promise returned resolves as there'll be
// no references to it remaining once doAbortable
// calls removeEventListener("abort", abortInner);
return doAbortable(abortSignal, (abortSignal) => {
return new Promise((resolve, reject) => {
const frameRequest = requestAnimationFrame(time => resolve(time));
abortSignal.addEventListener("abort", () => cancelAnimationFrame(frameRequest));
});
});
}
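Usage then looks something like this (a hedged sketch; draw is a stand-in):

const controller = new AbortController();
animationFrame(controller.signal).then((time) => draw(time));
// later, if the frame is no longer wanted:
controller.abort(); // cancels the pending requestAnimationFrame via the inner signal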
I've found it works fairly cleanly, although the extra wrapper is slightly annoying. This might be able to be improved if function and parameter decorators become a thing, because then I could just annotate the abort param and decorate it, e.g.:
@abortable
export default function delay(time, @abortable.signal abortSignal) {
// ... create delay promise
}
I don't have strong feelings about my version vs @Jamesernator's. It avoids the setOnAbort callback pattern mine has, which is good if we think my callback pattern is a bit weird. Although, it might not be clear to devs that the inner signal behaves differently to the outer signal. In @Jamesernator's solution it's slightly harder to change the abort tasks mid-way through the task, but I don't know if that's common (it didn't come up in the examples so far).
Edit: oh wait, I've just noticed that @Jamesernator's solution depends on the inner task to manually reject its promise if the task aborts. That seems less ergonomic to me. Maybe that's unintentional, since the requestAnimationFrame example would never resolve if aborted.
@getify Your example looks pretty weird and I'm not sure what it's trying to achieve. I don't understand why it isn't:
const ac = new AbortController();
// elsewhere
setTimeout(()=>ac.abort(),5000);
try {
await asyncTask(ac.signal, someAsyncOp);
// or
// await asyncTask(ac.signal, () => someAsyncOp());
// depending on whether someAsyncOp supports reacting to abort.
}
catch (err) {
// ..
}
@jakearchibald
It's the same reason you see the "deferred" pattern still used in promises, like extracting the resolve/reject methods from inside a promise constructor and then using them elsewhere, external to the promise constructor: sometimes it's really inconvenient (or impractical) to stuff all the logic that determines resolution of the promise inside that single constructor.
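(For reference, a minimal sketch of that deferred pattern, purely illustrative:)

function deferred() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}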
In my case, I actually have many different places that need to observe the promise attached to the signal, some inside the internals of my library, and some in the userland functions that people write and pass into my library.
So, I need to be able to pass around both a signal and a pr promise that's attached to that signal, because in different places you need access to one or the other. For convenience, I store that pr as a property on the signal, so that I'm only passing around the single entity that has both forms of "observation" (signal or promise) available.
At any given moment, there might be quite a few places in the program that are observing the single pr (like via Promise.race(..)) and several others that are directly observing the signal (like fetch(..) calls). All those different places in the code cannot pragmatically be nested (or even invoked from) inside a single "asyncTask" callback, the way your util assumes.
So, similar to resolve/reject extraction, I am essentially needing to "extract" a promise for the signal aborting, and store that and use it around.
Right now I do this by making my own calls to addEventListener(..) and removeEventListener(..). It's fine but it has some ugliness and potential for leakage.
I was asked if this utility being contemplated (for Node and for the web) could be useful for my purposes. Hopefully this helps explain the ways it does and doesn't fit with what you're suggesting here.
Makes sense! Yeah, I guess the same as the promise design, it's nice that it still helps use-cases that depend on pulling the insides outsides (I've run into these cases too), but we shouldn't over-index on them.
but we shouldn't over-index on them.
Agreed, to an extent. I don't think we should design a util that only serves my use case, but I also hope we don't design a util that falls short of it either. Would be nice to find a flexible compromise.
If the util you suggested had an option to be called without the taskCallback(..) param, and in that case, could only listen to the signal's event... that affordance would make your suggested util much more friendly for my purposes.
For clarity, here's sorta what I'm suggesting:
async function abortableTask(signal, taskCallback) {
if (signal?.aborted) throw new DOMException('', 'AbortError');
let onAbort, listener;
const setOnAbort = (callback) => { onAbort = callback };
const taskPromise = taskCallback?.(setOnAbort);
const signalPromise = new Promise((_, reject) => {
listener = () => {
onAbort?.();
reject(new DOMException('', 'AbortError'));
};
signal?.addEventListener('abort', listener);
});
return (
taskPromise ?
Promise.race([ signalPromise, taskPromise ]) :
signalPromise
)
.finally(() => signal?.removeEventListener('abort', listener));
}
// elsewhere
const ac = new AbortController();
ac.signal.pr = abortableTask(ac.signal);
With just a few changes here, it makes the taskCallback(..) param optional. If it's omitted, it's not called and the Promise.race(..) is skipped. I think that's a compromise that serves both sets of use-cases well... it doesn't detract from your favored "insides-only" approach, but gives a non-hacky escape hatch for the "outsides" need.
If instead I have to pass a function for taskCallback(..) that returns a never-resolving no-op promise, I've shot myself in the foot. Ostensibly, the purpose for me using this util would be to hopefully avoid/reduce the potential memory leakage, but now I'm creating a do-nothing promise that just hangs around (maybe forever?).
Edit: oh wait, I've just noticed that @Jamesernator's solution depends on the inner task to manually reject its promise if the task aborts. That seems less ergonomic to me. Maybe that's unintentional, since the requestAnimationFrame example would never resolve if aborted.
It was intentional at the time, but thinking about it more, it might not be necessary. I think I was concerned about the fact that the promise would never resolve, but actually it should be fine: if it's simply raced against the abort, it should in theory be able to be collected, as there would be no remaining references to it once a cleanup step like cancelAnimationFrame runs and severs any references to resolve/reject, and by consequence the promise itself (Promise.race does not actually keep a reference, it just adds a .then() handler).
@Jamesernator I'm not sure I follow. Are you saying your proposal should change or should stay as it is?
@Jamesernator I'm not sure I follow. Are you saying your proposal should change or should stay as it is?
That it should change. Your abortableTask would work fine. I was thinking Promise.race would keep references to the promises (so if one was eternal it wouldn't be collected), but no, that's not the case; it's the other way around (the raced promise holds a reference to Promise.race's resolve/reject method).
It'd be quite cool as a method of AbortSignal so that we could just do:
function animationFrame(abortSignal) {
return abortSignal.task((setOnAbort) => {
return new Promise((resolve) => {
const frameRequest = requestAnimationFrame(time => resolve(time));
setOnAbort(() => cancelAnimationFrame(frameRequest));
});
});
}
I don't think it quite meets the OP use case. But I've generally found the doAbortable pattern to be sufficient for most cases. I can see a promise being useful on occasion, but I feel like asynchronous cleanup (via .then(...)) might introduce a hazard where people call controller.abort() and expect certain work to not continue, but work is still enqueued on the microtask queue and runs before the .then(...) handler.
@getify it's kinda nice that one function could do both, but I'm worried it muddies the intent of the method. abortableTask(signal, callback) is designed as a best practice way to create a unit of abortable work in a non-leaky way. Whereas abortableTask(signal) would create a promise that only resolves when signal is aborted. This means any reactions to the promise would leak until signal is aborted or goes out of reference, which takes us back to the problem we're trying to address.
Although these things could be the same method implementation-wise, would it make more sense in terms of intent and documentation to make them different methods?