
New Proposal: Fetch retry

Open rakina opened this issue 6 months ago • 33 comments

What problem are you trying to solve?

fetch() requests can fail due to transient network errors. Manual JavaScript retries are complex and cannot be performed after page unload (e.g. for keepalive fetches), causing data loss for critical beacons.

What solutions exist today?

Manual JavaScript Retry Logic: Developers write try...catch blocks, manage setTimeout for delays (often implementing exponential backoff), and track attempt counts.

Limitations: Requires boilerplate code, potentially complex to manage state correctly. Doesn't work for keepalive requests as the JavaScript context is unavailable to handle retries after page unload.
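
For reference, the manual approach typically looks something like the sketch below (names, limits, and error handling are illustrative only), and none of it can run once the page has unloaded:

// --- Sketch: manual retry with exponential backoff (illustrative) ---

async function fetchWithManualRetry(url, init = {}, maxAttempts = 4, initialDelayMs = 500) {
  let delay = initialDelayMs;
  for (let attempt = 1; ; attempt++) {
    try {
      // Only network-level failures reject; HTTP error statuses resolve
      // normally and are not retried here.
      return await fetch(url, init);
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= 2; // exponential backoff: 500ms, 1s, 2s, ...
    }
  }
}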

How would you solve it?

Add a new retry capability for fetches, as proposed in the explainer: https://github.com/explainers-by-googlers/fetch-retry/blob/main/README.md

This proposal introduces a configurable, browser-managed retry mechanism within fetch(). It allows web developers to indicate that a fetch() request should be retried, to have a greater guarantee on it being reliably sent, even if the network is flaky.

// --- Example Usage ---

fetch("/api/important-beacon?id=12345",  {
  method: "GET",
  keepalive: true, // Essential for retryAfterUnload: true
  retryOptions:  {
    maxAttempts: 3,        // Max 3 retries (4 total attempts)
    initialDelay: 500,    // Start with 500ms delay
    backoffFactor: 2.0, // Double delay each time (500ms, 1s, 2s)
    maxAge: 60000,        // Give up after 60 seconds total retry time
    retryAfterUnload: true  // Allow retries to continue even if page closes
  }
});

Anything else?

No response

rakina avatar Jun 27 '25 13:06 rakina

One of the concerns with Service Worker background sync was that it may allow leaking information about different networks a given device is using over time.

A site could presumably abuse this mechanism to achieve that as well. It could set up an endpoint that always terminates the connection after receiving the URL in such a way that it generates a fetch failure for the client. It could then observe the future retries, even if they were hours or days later.

To mitigate that risk, should there be a cap on how long (wall clock time) a retry will be attempted? And/or should it only attempt a retry if it looks like it's still on the same network interface, proxy configuration, and/or other networking configuration?

I do see the explainer notes that there should be some browser-enforced limit on maxAge, but it's coming from a resource exhaustion angle vs. an info leak. Is there a proposed recommended limit?

erik-anderson avatar Jun 27 '25 16:06 erik-anderson

What happens if fetch returns a network error due to a policy?

annevk avatar Jun 28 '25 20:06 annevk

Thanks all!

To mitigate that risk, should there be a cap on how long (wall clock time) a retry will be attempted? And/or should it only attempt a retry if it looks like it's still on the same network interface, proxy configuration, and/or other networking configuration? I do see the explainer notes that there should be some browser-enforced limit on maxAge, but it's coming from a resource exhaustion angle vs. an info leak. Is there a proposed recommended limit?

Having reasonable limits sounds good to me. I randomly picked 1 day in the prototype impl in Chromium, but that is likely too long. Perhaps a few hours is more reasonable? FWIW, the max attempt counts (per fetch, document, and network isolation key) will also be quite low so hopefully that helps?

As for not retrying when the network changes: one of the motivating use cases here is actually a user switching from a mobile network to Wi-Fi after being out of range for a bit. So we do want to retry in this case, which is quite common. Maybe limiting the number of network changes could be an option (since multiple changes are less likely to happen across valid retry cases).

What happens if fetch returns a network error due to a policy?

The plan is to only retry if the failure is likely caused by an "intermittent network error", which I guess is a little hard to spec. So e.g. we will retry on DNS failures, network changes like the one mentioned above, failure to establish a connection, etc. I think for policy errors we can tell that they're likely not intermittent? At least in Chromium we can differentiate them with error codes; e.g. here's the list we used for the prototype.

rakina avatar Jun 30 '25 14:06 rakina

It seems like that would leak information then. We make policy errors and network errors indistinguishable for a reason.

annevk avatar Jun 30 '25 14:06 annevk

It seems like that would leak information then. We make policy errors and network errors indistinguishable for a reason.

Can you clarify which part is the information leak, and to whom we're leaking the information?

  • From the viewpoint of the page initiating the fetch, fetch retry is practically invisible. It just gets the final result (after all retry attempts have been exhausted, and if the page initiating it is still alive). So it doesn't learn any new information about the error
  • From the viewpoint of the server, it can see that a retry is attempted. But the server is the one producing the policy error so it doesn't learn anything new?

rakina avatar Jun 30 '25 14:06 rakina

If some requests are retried and others end quickly because they will not be, you can observe the timing difference.

annevk avatar Jun 30 '25 14:06 annevk

OK, but the fact that a fetch is not retried doesn't imply a policy error. It could be caused by any of the other non-retried errors, e.g. improper settings on the server?

Other alternatives if that is still the concern:

  • retry on all errors (kinda wasteful but maybe ok)
  • add some random delay before resolving (either always or only sometimes), so a bit like retrying on all errors but without actually being as wasteful
  • never let the fetch resolve with errors at all if the retry option is set (which is fine for the main case here, sending beacons).

Do you think those mitigate the concerns there?

rakina avatar Jun 30 '25 14:06 rakina

Isn't https://github.com/whatwg/html/issues/10997 already addressing the use-case of "data loss for critical beacons"? I don't see that proposal (also by Google) or its explainer mentioned in https://github.com/explainers-by-googlers/fetch-retry/blob/main/README.md but I think it probably should be. Especially since the main difference between this proposal and that proposal is this proposal seems to enable the very concerning bad-actor tracking raised in https://github.com/whatwg/fetch/issues/1838#issuecomment-3013757470.

asutherland avatar Jun 30 '25 19:06 asutherland

Isn't https://github.com/whatwg/html/issues/10997 already addressing the use-case of "data loss for critical beacons"? I don't see that proposal (also by Google) or its explainer mentioned in https://github.com/explainers-by-googlers/fetch-retry/blob/main/README.md but I think it probably should be

Thanks for pointing that out! I think that while both are about minimizing "data loss of critical beacons", they are tackling different problems. @domenic's Extended Lifetime SharedWorkers proposal is more about "we need to run some arbitrary operations after unload, within stricter bounded time":

  • It can run arbitrary operations such as writing to storage, etc.
  • It will only run once, and needs to stop quite soon after the document unloads (compared to fetch retry)
  • Like mentioned in this section, when async steps are required

Meanwhile fetch retry is more about "Try to ensure that this fetch gets sent, even if it takes a while"

  • It's specific to fetches
  • It's meant to make fetches more resilient to potentially transient errors, which are actually quite common.
  • The retry can be triggered quite a bit after the document is unloaded, but in such a way that isn't a problem privacy wise (see below, about only retrying when a same-NetworkIsolationKey document is committed).
  • The retry can also be attempted when the document is still around

So the former is more about "making sure an operation is run, after potentially some async work", while the latter is more "fetches are more resilient to transient errors and have a higher chance of reaching the server". They can work together too; e.g. we could trigger a fetch with retry from the worker, which makes sure it's attempted and gives it a higher chance of actually getting through.

Especially since the main difference between this proposal and that proposal is this proposal seems to enable the very concerning bad-actor tracking raised in https://github.com/whatwg/fetch/issues/1838#issuecomment-3013757470.

Oh, actually I just realized the explainer doesn't mention an important detail that might mitigate the concern here (this question also came up internally, I just forgot to update the external explainer): the fetch is only retried if a same-Network-Isolation-Key document is alive. So, technically, those retries could already be done by that other document if the two communicate through, e.g., localStorage. Since the user has a document with the same origin / Network Isolation Key open, it shouldn't be a surprise privacy-wise that that document can trigger a fetch?

(I'll update the explainer with these points, and the options on not leaking the errors mentioned in https://github.com/whatwg/fetch/issues/1838#issuecomment-3019452794)

rakina avatar Jul 01 '25 07:07 rakina

For ref, a previous (mini) thread on the fetch + retry topic with links to popular NPM packages: https://github.com/whatwg/fetch/issues/1271

Delapouite avatar Jul 04 '25 09:07 Delapouite

FYI I've updated the explainer to address the feedback that came up in the thread around retrying on all network errors, and also clarifying that retry is only attempted when there's an active same-network-isolation-key document in the same browsing session: https://github.com/explainers-by-googlers/fetch-retry/blob/main/README.md.

It seems like there's real developer interest here. The prototype in Chromium is implemented (and updated according to feedback above), so we're going to propose an origin trial soon. Let us know if there are any concerns.

rakina avatar Jul 10 '25 08:07 rakina

Sending new headers across origins is concerning. I also have a concern about those headers revealing something about the end user's network environment.

Speaking of origins, it seems concerning to allow this for "no-cors" requests.

"Idempotent" needs some kind of definition that accounts for unknown HTTP methods.

I'll try to find out if others have more feedback.

annevk avatar Jul 10 '25 14:07 annevk

Thanks.

Sending new headers across origins is concerning. I also have a concern about those headers revealing something about the end user's network environment.

By "new-headers", do you mean the "Retry-Attempts" count and the "Retry-GUID" headers? Can you elaborate more on what is the concerning part? Do headers on fetch requests typically get stripped on cross-origin redirects? (Trying to understand what is the difference than e.g. the site manually setting this headers)

Speaking of origins, it seems concerning to allow this for "no-cors" requests.

Can you elaborate here as well? Is it not possible to know that there's a network error in this mode? Or is this because of the headers?

"Idempotent" needs some kind of definition that accounts for unknown HTTP methods.

Yeah, it's probably safer to not retry on unknown methods unless "retryNonIdempotent" is true as well?

rakina avatar Jul 10 '25 15:07 rakina

The comment above retryNonIdempotent says PUT and DELETE are non-idempotent, but the details below say they are idempotent.

Currently Safari retries safe (GET/HEAD/OPTIONS) methods automatically in some situations (we cannot retry once we process and deliver the response header). Wondering if it’s useful to differentiate among safe, idempotent, and non-idempotent/unknown methods.

guoye-zhang avatar Jul 11 '25 12:07 guoye-zhang

The comment above retryNonIdempotent says PUT and DELETE are non-idempotent, but the details below say they are idempotent.

Oh oops, sorry, that was a mistake. In the Chromium impl, at least, {PUT, DELETE} are idempotent and {GET, HEAD, OPTIONS, TRACE} are safe and idempotent. We can add a "safe" toggle as well, although I'm not sure how useful that is. In any case this retry feature is an explicit opt-in, and the caller knows what method it's using, so really it's just a double-checking mechanism to make sure users know the risk of non-idempotent methods? (But happy to add more safeguards if that seems like a good idea.)

Currently Safari retries safe (GET/HEAD/OPTIONS) methods automatically in some situations (we cannot retry once we process and deliver the response header).

This is interesting, is this just an automatic optimization? Does that do anything special related to the cases mentioned in this thread, e.g. redirects, no-cors, having same-origin/NIK documents around?

BTW, on https://github.com/whatwg/fetch/issues/1838#issuecomment-3057763297, I looked at no-cors a bit more, and it seems like even though responses are opaque in that mode, it's still possible to know that a network error happened, since it shows up as a promise rejection on the fetch. So retrying even with no-cors seems OK, since it's not really different from what scripts can already do manually (e.g. by catching the error and re-attempting the fetch).

I'm also still not sure if there's anything problematic with sending Retry-Attempts and Retry-GUID, even after cross-origin redirects. It seems like the motivation for stripping some headers on cross-origin redirects is that they're related to authorization, or are something a script can inject into the request (e.g. custom headers). But Retry-Attempts and Retry-GUID are added by the browser, the information revealed is solely for identifying the fetch across retries (the GUID) and differentiating the attempts (the count), and the script can't control any of the values.

rakina avatar Jul 11 '25 13:07 rakina

I don't think sending new request headers for no-CORS is okay. We generally decided not to add new no-CORS capabilities or endpoints. I don't think this feature is important enough to subvert that.

And even for CORS we have limits in place on the headers that can be transmitted without a preflight, and we generally don't add new headers, even if they are browser-controlled (except if they are prefixed with Sec-).

Some more feedback:

  • Regarding retryAfterUnload: true, it's not clear to me why this is needed. Can't it be derived from keepalive or fetchLater()? When do you want keepalive, but no retries?
  • I'm fairly convinced we need to move to the model suggested above where you either get a response (early) or you get a network error at maxAge, but you never get a network error early. This reveals the least amount of information and allows user agents to optimize the most when it comes to battery and such.
  • For maxAttempts, unsigned long long seems a bit big for this? Do we want the specification to enforce a maximum value and throw?
  • Should we give the delay and its multiplier default values?

Have you looked at how existing JavaScript libraries are solving this and what their APIs look like? And what APIs look like in other networking environments?

Would be interesting to hear from @jasnell and @lucacasonato if this would also work for Node and Deno.

annevk avatar Jul 14 '25 06:07 annevk

Hmmm, ok, so what is the ideal way for this to alleviate the CORS concerns?

  1. Add "Sec-" prefix to the header to identify that only the browser could've set it. Is this acceptable? Or is not encouraged?
  2. Only send Retry-Attempts and Retry-GUID cross-origin if the headers are in the Access-Control-Request-Headers. If they don't exist in the header, or the request is no-cors, should we stop retrying? Or should we retry but without the headers?
  3. Something else?

(I'm not very familiar with what is generally OK or not wrt CORS, so thanks in advance for guidance!)

retryAfterUnload: true It's not clear to me why this is needed. Can't this be derived from keepalive or fetchLater()? When do you want keepalive, but no retries?

Yeah, most use cases I know of would set it to true, but keepalive itself only keeps the request alive for 30s (in Chromium) after the initial request is sent. Meanwhile, a retry after unload can happen significantly later (minutes/hours, with the same-NIK-document constraint), so it's different enough in scale that one might not want this even with keepalive? I don't have a strong opinion here though.

I'm fairly convinced we need to move to the model suggested above where you either get a response (early) or you get a network error at maxAge, but you'll never get a network error early. This reveals the least amount of information and allows user agents to optimize the most when it comes to battery and such.

So in the current proposal, you get the final network error (the one that cannot be retried, e.g. when you've exhausted all retries) if the initiator document is still alive (if it's not, nothing can listen for the error). If you get a non-network-error result during a retry, then you get that result (and no further retries will be attempted). I'm not sure what the difference here is, except that the final network error is punted until kMaxAge? Is it better to delay even when we already know the final result?

maxAttempts unsigned long long seems a bit big for this? Do we want the specification to enforce a maximum value and throw? Should we give delay and its multiplier default values?

Oh yeah that's just human error. Maybe unsigned short or even unsigned char if those are typically used? The defaults in Chrome are here:

  • max_retry_count: 10 (with additional bounds of 20 per renderer process, 50 per Network Isolation Key)
  • min_retry_delta: 5s
  • min_retry_backoff: 1.0
  • max_retry_age: 1d

Those bounds were just picked arbitrarily, and we can adjust them if there are better options. I'm not sure if these need to be standardized or can just be left to each user agent's policy?
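
Purely to illustrate the kind of browser-enforced clamping being discussed (not spec text; the numbers are the Chromium prototype bounds above, and the option names follow the example at the top of this issue):

// --- Sketch: hypothetical UA-side clamping of retryOptions (illustrative) ---

const UA_BOUNDS = {
  maxAttempts: 10,       // max_retry_count
  minDelayMs: 5000,      // min_retry_delta (5s)
  minBackoffFactor: 1.0, // min_retry_backoff
  maxAgeMs: 86400000,    // max_retry_age (1 day)
};

function clampRetryOptions(opts) {
  return {
    ...opts,
    maxAttempts: Math.min(opts.maxAttempts, UA_BOUNDS.maxAttempts),
    initialDelay: Math.max(opts.initialDelay, UA_BOUNDS.minDelayMs),
    backoffFactor: Math.max(opts.backoffFactor, UA_BOUNDS.minBackoffFactor),
    maxAge: Math.min(opts.maxAge, UA_BOUNDS.maxAgeMs),
  };
}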

Have you looked at how existing JavaScript libraries are solving this and what their APIs look like? And what APIs look like in other networking environments?

From the developer request linked above, there's https://github.com/sindresorhus/p-retry and https://www.npmjs.com/package/fetch-retry.

Both of them seem quite similar to this proposal, but they don't have the capability to retry after unload (which requires tricky coordination between documents in the same browsing session). They also have a way to set callbacks that run on every error to determine whether to continue retrying or not (see the sketch below). I think that's a possible extension for this proposal, but for now I think we can start with the MVP that is opaque / only exposes the final result. Then, if there's significant interest, we can extend it to something like that too, although it's going to be a bit tricky if the initiator document has already unloaded.
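
For reference, the callback-based shape in those libraries looks roughly like this (based on the fetch-retry package's documented retries/retryDelay/retryOn options; treat the exact names as approximate, they may differ between versions):

// --- Sketch: userland retry with the fetch-retry npm package (approximate) ---

import fetchRetryFactory from "fetch-retry";

const fetchWithRetry = fetchRetryFactory(fetch);

fetchWithRetry("/api/important-beacon?id=12345", {
  retries: 3,
  retryDelay: (attempt) => 500 * Math.pow(2, attempt), // exponential backoff
  // Per-error callback deciding whether to keep retrying; this is the
  // extension point described above, which the browser proposal's MVP omits.
  retryOn: (attempt, error, response) =>
    error !== null || (response && response.status >= 500),
});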

rakina avatar Jul 14 '25 07:07 rakina

My understanding is that browsers already do extensive retrying of failed requests, including POSTs (etc.) in some circumstances (e.g., if the connection is dropped before the request body is complete).

Is this proposal attempting to capture all of those use cases and surface them as Fetch arguments, or is this purely for non-platform consumers of fetch?

Retry-Attempts and Retry-GUID seem like an attempt at building a reliability protocol with servers. I wonder if this is scope creep. If not, it'd be good to coordinate with other use cases; see eg Idempotency-Key.

mnot avatar Jul 14 '25 08:07 mnot

Here's how I would simplify some of this:

  • Throw for retry in combination with mode: "no-cors".
  • I would keep the new request headers out of v1. As Mark notes there's overlap with IETF activity and it also seems fairly high-level for a low-level API such as fetch(). I'm also not entirely comfortable sharing this additional information about the end user's network environment with the server.
  • Delivering the final network error upon max-age frees us from any worries about timing attacks and allows us to end retrying early and preserve end user resources when it's due to a policy, battery, or other relevant concern. Because of this we should probably require max-age to always be set.
  • I think we should have recommended default values for delay and delayFactor. If we don't and they end up varying significantly across implementations, web developers will feel required to always set them instead. Alternatively, we could require them to always be set in v1 and throw otherwise; that would also be reasonable.

annevk avatar Jul 14 '25 08:07 annevk

My understanding is that browsers already do extensive retrying of failed requests, including POSTs (etc.) in some circumstances (e.g., if the connection is dropped before the request body is complete). Is this proposal attempting to capture all of those use cases and surface them as Fetch arguments, or is this purely for non-platform consumers of fetch?

So this mostly comes from the observation that even with the internal retries, we still see some amount of what seem like transient network errors as the end result. I guess the internal retries need to be somewhat careful not to have unintended consequences, whereas this API is an explicit signal that it's OK to retry. So this is a stronger version of that (and given that there are existing libraries that attempt to do just this, it seems useful enough).

I would keep the new request headers out of v1. As Mark notes there's overlap with IETF activity and it also seems fairly high-level for a low-level API such as fetch(). I'm also not entirely comfortable sharing this additional information about the end user's network environment with the server.

Yeah, this is probably OK. I guess it's easy enough for devs to manually add Retry-GUID if they want to. Retry-Attempts isn't quite possible if we don't expose intermediate errors, but maybe it's OK to not have that, as it's not needed for deduplication. If we don't have these headers, is there still a problem with no-cors?

Delivering the final network error upon max-age frees us from any worries about timing attacks and allows us to end retrying early and preserve end user resources when it's due to a policy, battery, or other relevant concern. Because of this we should probably require max-age to always be set.

OK. Does this mean that if we only expose the result at maxAge, there's no problem with retrying only on certain errors (e.g. transient network errors, or only errors where we're sure the server hasn't been reached yet)? In all of those cases the caller will only hear back at kMaxAge.

I think we should have recommended default values for delay and delayFactor. If we don't and they end up significantly varying across implementations, web developers will feel required to always set them instead. Alternatively we could require them to be always be set in v1 and throw otherwise, that would also be reasonable.

Sure, I think I'll propose the Chromium defaults, or maybe we'll come up with better limits later.

rakina avatar Jul 14 '25 09:07 rakina

For reference, background downloads on iOS use exponential backoff with a few additional customizations:

  1. Delay is jittered by 10% to avoid repeated spikes
  2. Backoffs are capped at 1 hour
  3. In the case of network condition change (e.g. connected to a new Wi-Fi), a retry is immediately attempted and the backoff is reset
  4. If the user launches the app that initiated the download, a retry is also immediately attempted
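
(For concreteness, a minimal sketch of that kind of schedule, i.e. exponential backoff with a 1-hour cap and +/-10% jitter; the base delay and growth factor are illustrative assumptions, not what iOS actually uses:)

// --- Sketch: jittered, capped exponential backoff (illustrative numbers) ---

function nextRetryDelayMs(attempt, baseMs = 5000, factor = 2) {
  const capMs = 60 * 60 * 1000;                   // cap backoff at 1 hour
  const exponential = Math.min(baseMs * Math.pow(factor, attempt), capMs);
  const jitter = 1 + (Math.random() * 0.2 - 0.1); // +/-10% jitter
  return exponential * jitter;
}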

We received feature requests to retry on 429 and 5xx responses and to respect the Retry-After header field, but we haven't implemented that yet.

guoye-zhang avatar Jul 14 '25 09:07 guoye-zhang

I think without the HTTP headers and without early network errors, no-CORS should indeed be allowable. (Though this will make future additions harder and we'll have to be very careful to segment those. E.g., I don't think we can add the Retry-After support @guoye-zhang mentions in a vNext for no-CORS requests.)

And yes, without early network error support we only have to retry those requests where it makes sense. I think this is the best choice as it's most conservative with end user resources and also has the least potential for information leakage. And hopefully it incentivizes web developers to put reasonable limits in place.

annevk avatar Jul 14 '25 11:07 annevk

Thanks! We'll probably iterate on this feature to add some more of the customizations/optimizations mentioned in this thread. For now, I've updated the explainer and Chromium impl to address the concerns raised here around headers and the observability of differences between network errors, and I think we're ready to propose an origin trial. This is to get early insight into how much this helps with reliability in practice, and we'll see if that can be improved with the ideas here.

FYI, one of the use cases for this does involve no-cors, and since there's no immediate concern with allowing that given the mitigations (no headers and no early error), I haven't added the change to disallow it yet. We can revisit later; that's not set in stone, and this is just to try some early experimentation.

rakina avatar Jul 16 '25 07:07 rakina

Overall I think this is a fine addition. I'm not sure how much of a priority implementation would be in non-browser runtimes like Node.js, Cloudflare Workers, etc, but I can't see a reason why it would be problematic. I am concerned about options like retryAfterUnload which would only carry useful semantics in browsers. I'd much prefer if there were a way to define it such that it was not specific to browsers... but even then there's just simply no use for it in an environment like Workers (for instance). Not harmful but also not useful in any way.

One immediate comment: I really think there needs to at least be a mechanism for reporting that a retry is happening and when, particularly for debugging purposes. Perhaps some way of integrating with the ProgressEvent or something similar would be enough.

I'm not convinced that retryNonIdempotent: true is something that should be supported at all. It's just too much of a footgun. I'd suggest that only known safe/idempotent methods should ever be retriable and I'd even go so far as to throw if someone tried to use retryOptions with other methods. Of course, if something like http3 is used and the error returned by the server specifically indicates that retry is safe then that's a possible path to supporting this safely but even with that it makes me quite uneasy.

The only other comment I'd have here is that the name retryOptions is going to be confusing for some users, given that it is intended to be used only for transient network errors. Quite a few applications implement retry for application-level errors too, and I can foresee some confusion from users who would see retryOptions and think it's an approach for application-level errors as well. That confusion could be avoided with a bit of bikeshedding on the name. networkRetry, connectionRetry, etc. might be better to more specifically scope the retry to the intended case.

jasnell avatar Jul 16 '25 13:07 jasnell

I really think there needs to at least be a mechanism for reporting that a retry is happening and when, particularly for debugging purposes.

This would result in information leakage as discussed upthread. I could see something like this working in non-browser environments though.

... only for transient network errors.

I think this is mainly a v1 vs a vNext thing. As mentioned upthread supporting 429 and 5xx seem like reasonable (opt-in) extension points.

annevk avatar Jul 16 '25 13:07 annevk

When using the fetchLater API, would adding support to fetch also automatically make fetchLater support the retry mechanism?

kurtextrem avatar Jul 18 '25 13:07 kurtextrem

@annevk:

This would result in information leakage as discussed upthread. I could see something like this working in non-browser environments though.

Understood. As long as it's not forbidden by the spec in any way, we can live with that.

... As mentioned upthread supporting 429 and 5xx seem like reasonable (opt-in) extension points.

Yep. If that's the case, as mentioned, I really think we shouldn't have the retryNonIdempotent option.

jasnell avatar Jul 18 '25 14:07 jasnell

@kurtextrem

When using the fetchLater API, would adding support to fetch also automatically make fetchLater support the retry mechanism?

It will. DeferredRequestInit inherits from RequestInit where the proposed RetryOptions can be added to.

mingyc avatar Jul 22 '25 06:07 mingyc

The fetch retry Origin Trial has been running on Chrome for a while. Some takeaways based on partner feedback:

  • Overall the current shape of the API seems OK. We needed to add a function (Request.getRetryOptions()) so that partners can do feature detection and slice correctly in their A/B experiments (see the sketch after this list).
  • Some of the requests are using POST so retryNonIdempotent is useful.
  • Retry-Attempts is needed by some partners to identify whether a request they're receiving is a retry or not, to evaluate the effectiveness of fetch retry itself (and potentially tweak their parameters/retry strategy). From past comments, it sounds like this header might be OK if we don't send it in no-cors mode and only send it for CORS requests with a preflight. Is that correct?
  • Currently if the fetch encountered redirects, then failed, we always retry from the beginning. There's some interest to do the retry from the last successful hop instead (so if we go A -> B -> fail to get to C, retry from B instead). I wonder if there are downsides / problems of doing that (potentially as an option so users can pick what works best for their case)?
  • There are some servers that can't deduplicate at all (e.g. going to third party servers), so there's some interest in a retryOnlyIfServerUnreached option. Not sure if it's entirely possible to spec, but at least in Chrome impl, we can know that e.g. a connection wasn't established at all, or if there are DNS problems, where we're pretty sure that the fetch request never reached the server.
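
A rough sketch of that feature-detection pattern, assuming getRetryOptions() is exposed on Request as mentioned in the first bullet (the exact shape isn't finalized):

// --- Sketch: feature detection for fetch retry (hypothetical) ---

const supportsFetchRetry =
  typeof Request !== "undefined" && "getRetryOptions" in Request.prototype;

const init = { method: "GET", keepalive: true };
if (supportsFetchRetry) {
  // Experiment arm: let the browser retry transient network failures.
  init.retryOptions = { maxAttempts: 3, initialDelay: 500, backoffFactor: 2.0, maxAge: 60000 };
}
fetch("/api/important-beacon?id=12345", init);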

I'm wondering if people here have thoughts on the potential additions from this feedback. I've also requested to add fetch retry to the WHATWG TPAC agenda, where we can hopefully finalize the API shape and figure out technical details, etc.

rakina avatar Oct 22 '25 13:10 rakina

TPAC slides

mingyc avatar Nov 13 '25 05:11 mingyc