[React 19] Suspense throttling behavior (`FALLBACK_THROTTLE_MS`) kicks in too often
React version: 19.0.0
Related Issues: #30408, #31697
This may be the intended behavior, but I'm opening this issue anyway because, IMO, there hasn't been enough discussion in the related issues, and the current behavior still feels poor to me: it makes it too easy to slow down the actual user experience.
Steps To Reproduce
Code
```jsx
import { use, useEffect, useState } from "react";

const zero = Promise.resolve(0);

function getNumber(num) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(num), 100);
  });
}

function App() {
  const [count, setCount] = useState(zero);
  const countValue = use(count);
  useEffect(() => {
    console.log("countValue =", countValue, Date.now());
  }, [countValue]);
  return (
    <button
      onClick={() => {
        console.log("click", Date.now());
        setCount(getNumber(countValue + 1));
      }}
    >
      count is {countValue}
    </button>
  );
}
```
In short, when a re-render suspends, you always have to wait 300ms even if the underlying data fetching finishes sooner.
In the attached example, when the user presses the button, a new Promise is passed to use(), which triggers Suspense. Even though that Promise resolves after exactly 100ms, the UI is updated only after 300ms.
I experienced this issue when using Jotai, a Suspense-based state management library.
Given that the throttling behavior kicks in even in this simplest of situations, it seems impossible to implement a user experience that involves suspending and is quicker than 300ms, regardless of, say, the user's network speed.
Link to code example:
https://codesandbox.io/p/sandbox/4f4r94
The current behavior
You almost always need to wait 300ms.
The expected behavior
Maybe a better heuristic for deciding when the throttling behavior kicks in? It would also be very nice to make this configurable.
The alternative of quickly flashing the Suspense boundary fallback isn't better, in our experience. In real apps, that would usually mean unmounting a large chunk of the screen for a very short period, which doesn't make for a pleasant UX.
There isn't a single correct number here since this is just a heuristic. 300ms felt like a good middle ground between avoiding jank and feeling too sluggish.
A real-world example would help illustrate the issue.
Keep in mind that you can always wrap the update in startTransition and display a smaller fallback while isPending from useTransition is true.
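Roughly like this, reusing `zero` and `getNumber` from the repro above (a minimal sketch, not a drop-in fix):

```jsx
import { use, useState, useTransition } from "react";

function App() {
  const [count, setCount] = useState(zero);
  const countValue = use(count);
  const [isPending, startTransition] = useTransition();
  return (
    <button
      onClick={() => {
        // Marking the update as a transition keeps the previous content
        // mounted instead of swapping to the Suspense fallback.
        startTransition(() => {
          setCount(getNumber(countValue + 1));
        });
      }}
    >
      count is {countValue} {isPending && "…"}
    </button>
  );
}
```

Because the boundary never swaps to its fallback, the throttle never applies, and `isPending` can drive a lighter inline indicator instead.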
@eps1lon Thank you for the response. I have two questions now:
First, I understand that a flashing UI isn't a good user experience, but I still don't see any reason to make the user wait an extra few hundred milliseconds, especially in a situation as simple as this one, where there is only one ongoing suspension.
Up to 300ms isn't always a reasonable cost for making the UI look a bit less janky, IMO.
Secondly, I see that useTransition could work. However, if I understand correctly, transitions are for non-blocking updates; in other words, state updates performed within a transition are marked as non-urgent.
Using transitions to let the user see new data more quickly is quite counterintuitive to me. Am I getting anything wrong?
I don't have a truly real-world example, but I think I can prepare something that looks more real-world if wanted.
There are use cases where the Suspense-bound components are very small and can take less than 100ms to load (depending on the server as well). We used to show nothing as a fallback to avoid the jankiness. This is intentional, so that the initial bundle size stays low but the end user also doesn't have to see fallbacks for every little lazy-loaded component. Now, with this 300ms hold-up, there's no other way than showing a fallback, which in turn feels like a worse UX. Instead of making this behavior the default, it should be opt-in.
It appears that any time a lazy component is initialised, it also kicks in the 300ms suspense, even if there is nothing that needs to load. We have our bundle split, so we may have 10 or so lazy-wrapped components (views) in one chunk: the first time you hit one of the views the chunk is loaded, and after that, switching between the views is instant. Now it seems that you have to wait 300ms and show a spinner even though nothing is being loaded after the initial chunk load, which is a notable degradation from an instant page change. The artificial pause really isn't great.
Instead of making this behavior the default, it should be opt-in.
I'm fine with defaults, as long as there are escape hatches. This should 100% be configurable, with the option to disable it entirely. It's not really React's responsibility to make this kind of choice for every app.
It appears that any time a lazy component is initialised, it also kicks in the 300ms suspense, even if there is nothing that needs to load. We have our bundle split, so we may have 10 or so lazy-wrapped components (views) in one chunk: the first time you hit one of the views the chunk is loaded, and after that, switching between the views is instant. Now it seems that you have to wait 300ms and show a spinner even though nothing is being loaded after the initial chunk load, which is a notable degradation from an instant page change. The artificial pause really isn't great.
Seeing similar behavior, but only under webpack. Reproductions in https://github.com/mui/material-ui/issues/44920#issuecomment-2577355446.
So I can remove the 'always 300ms' fallback by using useDeferredValue in tandem with useSyncExternalStore, as noted here (which honestly makes no sense as a dev/lib user: sync updates cause an async fallback, but making them async avoids showing the async fallback, wat?). Similarly, you could wrap any state calls that cause lazy components to render in startTransition for a similar solution.
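For reference, the shape of that workaround applied to the repro at the top of the thread (a sketch; the `isStale` dimming is just one way to signal pending state):

```jsx
import { use, useDeferredValue, useState } from "react";

function App() {
  const [count, setCount] = useState(zero);
  // Render against a deferred copy of the promise: on updates, React keeps
  // showing the already-resolved previous promise instead of suspending,
  // so the fallback (and the 300ms throttle) never kicks in.
  const deferredCount = useDeferredValue(count);
  const countValue = use(deferredCount);
  const isStale = count !== deferredCount;
  return (
    <button
      style={{ opacity: isStale ? 0.5 : 1 }}
      onClick={() => setCount(getNumber(countValue + 1))}
    >
      count is {countValue}
    </button>
  );
}
```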
However, mixing the use of deferred values/transitions with any non-deferred/non-transition code can easily cause bugs. It's also sadly very easy to do. You need to be aware of what the internals of some hook or util may be doing.
So the current situation seems to be:
- use deferred values/transitions - lots of footguns and it's potentially easy to introduce bugs. Also, the whole team suddenly needs to be thinking about another concept that previously didn't exist and wasn't a problem (this really puts me off; we didn't have a problem before that this is solving).
- live with 300ms delays when there is no need for them
I'll take a bit of flicker sometimes to avoid either of these. So configurability would be greatly appreciated.
A Reddit discussion about this: https://www.reddit.com/r/reactjs/comments/1hjoplz/react_19_scheduler_does_something_silly/
Please add a config option here. 300ms sucks.
I'm also encountering this issue when using lazy routes with TanStack Router, and it's not clear to me how I could use startTransition there.
This behaviour should really be opt-in not opt-out.
300ms is still a significant amount of time. Many people have tried to pinpoint this exact problem, as their web applications slowed down significantly in terms of UX after migrating from React 18 to 19.
Wow, this is horrible. My UI, which was previously instant (because a suspense React Query request would resolve instantly when cached), is now always waiting 300ms.
This has to be reverted.
It happens when triggering suspense boundaries. Was yours doing that but just resolving super fast so no one noticed?
The canonical solution for now is to put the state change in a transition and it'll appear as before. But watch out if you're using useSyncExternalStore as the transitions don't work with it... which is kind of exacerbated by this change tbh 😅.
This is really, really, really frustrating. My "loaders" are skeleton components that perfectly match the dimensions of my content, e.g. squares that should have photos. "Flashing" the skeleton for 1ms and then fading into the loaded photo is totally fine and not at all janky.
Now, with React 19, even if all of my content is immediately available in a local cache, I still have to show loading skeletons for 300ms?!
Why?!
The alternative of quickly flashing the Suspense boundary fallback isn't better, in our experience.
Then make it configurable. This isn't a decision the React gods should be making for every React app.
I put together a hackish React.lazy alternative that solves the situation where the Suspense fallback is shown even when the component had already been imported before it was rendered.
Basically, I store the resolved import in the SWR cache, with the stringified importer function (via toString) serving as the cache key. This can also be adapted for use with caching solutions other than SWR.
I noticed that even if the import path used by preloadComponent is relative and the one used by lazyWithCache is absolute, or vice versa, it still works: the stringified function always includes the absolute path.
Does anyone see any issues with it? It seems to work fine. I tested dev and prod builds, different browsers, and local and deployed environments. I only tested Vite as a bundler.
Maybe we could have a preloadComponent function offered directly by React, automatically tied to React.lazy() usages behind the scenes.
```tsx
import type * as React from "react";
import useSWRImmutable from "swr/immutable";
import { mutate } from "swr";
import { useErrorBoundary } from "react-error-boundary";

type ImportComponent<TProps extends object> = () => Promise<{
  default: React.ComponentType<TProps>;
}>;

const getSWRKey = <TProps extends object>(
  importComponent: ImportComponent<TProps>,
) => importComponent.toString();

/**
 * Paired with `lazyWithCache`, this solves the issue where React briefly
 * shows a Suspense fallback for a component loaded with React.lazy() even
 * when the component had already been imported before it was rendered.
 *
 * Usage:
 *
 * `preloadComponent(() => import('path/to/component'));`
 */
export const preloadComponent = <TProps extends object>(
  importComponent: ImportComponent<TProps>,
) => {
  void mutate(getSWRKey(importComponent), importComponent);
};

/**
 * Usage is similar to `React.lazy`:
 *
 * `const LazyLoadedComponent = lazyWithCache(() => import('./path/to/LazyLoadedComponent'));`
 */
export const lazyWithCache = <TProps extends object>(
  importComponent: ImportComponent<TProps>,
) =>
  function Lazy(props: TProps) {
    const { showBoundary } = useErrorBoundary();
    const {
      data: { default: Component },
    } = useSWRImmutable(getSWRKey(importComponent), importComponent, {
      suspense: true,
      onError: showBoundary,
    });
    return <Component {...props} />;
  };
```
I also ended up reimplementing lazy without Suspense and removing all other Suspense boundaries. It was fairly simple and resolved the problems caused by this unfortunate decision. I just hope I can continue to avoid having to use these 'async' features.
Does anyone have an example that shows how to bypass this issue when doing simple data fetching?
I'm not sure of the best way to use useDeferredValue and useSyncExternalStore to get back the near-instant behavior from before.
Any news on this? Or maybe any workaround?
We are likely experiencing this problem (https://github.com/facebook/react/issues/31122 also seems related) during the initial SPA load: a lot of components wait until some common moment instead of resolving (and continuing the waterfall) as they become ready.
➡️ Observed behavior: lots of fallbacks are displayed, and they are unmounted roughly all at the same time (tracked via useEffect).
I wonder if we could follow up on https://github.com/facebook/react/issues/30408 and make FALLBACK_THROTTLE_MS configurable, so it might have different values during the initial load versus subsequent transitions. And/or at least allow some Suspense boundaries to bypass this feature?
I can stomach the 300ms delay for UX (although I agree, this should absolutely be configurable)
What is harder to swallow is the cumulative effect of many 300ms delays on my test suite. Does anyone have a workaround for those?
I can stomach the 300ms delay for UX (although I agree, this should absolutely be configurable)
What is harder to swallow is the cumulative effect of many 300ms delays on my test suite. Does anyone have a workaround for those?
Does it help to make the tests async and run them concurrently? Also, there's an issue for testing at https://github.com/facebook/react/issues/30408
sigh FINE, I'll do it myself.
Introducing vite-plugin-react-fallback-throttle
You can now use a Vite plugin to configure FALLBACK_THROTTLE_MS in Vite apps and Vitest unit tests.
Usage
```js
import { defineConfig } from 'vite';
import reactFallbackThrottlePlugin from 'vite-plugin-react-fallback-throttle';

export default defineConfig({
  plugins: [
    reactFallbackThrottlePlugin(), // Leave empty for 0, or provide your own value if you like
  ],
});
```
Source
https://github.com/wojtekmaj/vite-plugin-react-fallback-throttle
sigh FINE, I'll do it myself.
THANK YOU! Now that's a solution, compared to the radio silence we've all been getting from the React devs for months while they completely ignore this issue.
@wojtekmaj thank you for making this!
I had to change the filter to this to get it working:
```diff
include: [
-  '**/react-dom-client.development.js',
-  '**/react-dom-profiling.development.js',
-  '**/react-dom-client.production.js',
+  '**/react-dom*',
],
```
I also had to `rm -r node_modules/.vite*` before restarting Vite.
Not to sound harsh, but it's honestly baffling that, besides being non-configurable, 300ms is the default behavior at all; since when has React become this opinionated?
React has usually provided the building blocks and then let us choose how to use them. Why is it suddenly making decisions for devs and even library authors?
If I had to make the call, I'd say 0ms should be the default, but configurable.
That's just me, of course -- but making it non-configurable honestly has me scratching my head. I need granular control. With React.Suspense being special-cased in the reconciler (right?), there is no way for us to work around this behavior short of not using Suspense at all (right?). Which is a shame, because use() makes it incredibly nice to use.
One data point: I'm developing a Tauri app, and my fetches to my locally running backend usually resolve very quickly, within 15 milliseconds (less than one frame on a 60Hz screen).
I'd like to customize the fallback behavior so the fallback is only shown when the request takes longer than, say, 128ms. Instead, I'm forced into this behavior by React making the choice for me.
Another data point: I'm developing a Chrome extension. I'd like to build my own abstraction on top of use, e.g. useCached, which fetches something using chrome.storage.local.get. The promise returned by chrome.storage.local.get is very, very fast to resolve: 1-5ms. Yet there is now a 300ms wait imposed by React on anything that uses useCached.
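For illustration, here's roughly what I mean by useCached (a sketch; the module-level cache is just the simplest way to hand `use()` a stable promise across re-renders):

```tsx
import { use } from "react";

// Memoise the storage reads so use() sees the same promise on every render.
const promiseCache = new Map<string, Promise<unknown>>();

function useCached<T>(key: string): T | undefined {
  let promise = promiseCache.get(key);
  if (!promise) {
    promise = chrome.storage.local.get(key).then((items) => items[key]);
    promiseCache.set(key, promise);
  }
  // Resolves in ~1-5ms, yet React still holds the fallback for the full
  // 300ms throttle window on the first read.
  return use(promise) as T | undefined;
}
```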
This 300ms wait is doubly terrible for Chrome "Panel"-type extensions, because the app is re-created and re-mounted every time the panel is closed and reopened, so every time you click on the extension, you now have to wait 300ms for stuff to pop up. I also can't use React.Suspense here anymore.
So, ironically, this change adds jank, because it jarringly forces users to wait where they didn't expect to.
It doesn't matter a whole lot to me whether it's driven by a smarter heuristic or preset number of milliseconds. The most important thing by far is that it's configurable.
It is also very off-putting that this choice was seemingly made without community input. Even if the throttling behavior is changed to something that suits me, I wouldn't trust React.Suspense enough to use it at all, because... what if it were suddenly decided that 500ms is the ideal throttling time? Or 700ms? Or 100ms? Now I'm stuck on that version of React forever. If I had started using Suspense last year, I would be effectively barred from upgrading to React 19.
I hope all this doesn't sound too harsh. I'm giving unfiltered thoughts here, but I know this change was made in good conscience, trying to improve Suspense for everyone. I hugely appreciate all the effort and research that have gone into this, so don't take my comment to heart or anything. (But please make stuff configurable!)
I completely agree with the community on this. Having a non-configurable throttle with an arbitrary value feels like a really questionable design choice. What’s even more frustrating is that this behavior isn’t documented anywhere, or at least not in any obvious place. I only discovered it by stumbling upon this discussion; otherwise, I’d have had no idea React was doing this behind the scenes.
If developers want to make a janky, flickering UI, we should be free to do so. It’s similar to Chrome throttling requests simply because Google thinks it “looks better.”
I just don't understand this design choice. If I want to show a Suspense boundary for at least a specific amount of time, I can just create a promise with this behaviour built in (something like `Promise.all([myPromise, wait(300)])`).
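That is, something along these lines (a userland sketch; `wait` and `withMinDelay` are just illustrative names):

```ts
// Resolves with the data, but never sooner than `ms` milliseconds.
const wait = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

const withMinDelay = <T>(promise: Promise<T>, ms = 300): Promise<T> =>
  Promise.all([promise, wait(ms)]).then(([value]) => value);
```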
Some context: I use Suspense with "legacy throw" throughout a front-end, single-page application. It's such a convenient pattern, especially with TypeScript! Unfortunately, I'm facing many problems upgrading from React 18 to React 19.2, from this mind-boggling throttling behaviour, to suspense fallbacks getting stuck, to issues with StrictMode that I can't pinpoint.
I work with an asynchronous API that is near-instantaneous in some scenarios (e.g. when accessing the content of a local file via a web worker). This 300ms delay makes the app unusable in those scenarios.
I tried useDeferredValue but, unless I'm missing something, it only avoids showing the Suspense fallback again on subsequent updates, not on the initial mount, doesn't it? I also tried startTransition but couldn't get a satisfactory result in terms of user experience — I want parts of the UI to update right away and only the part wrapped in the boundary to be deferred, but this leads to prop drilling and other confusing code... Maybe I'm doing it wrong. 🤷
I'm all for switching to the new use hook and all the concurrent features to make my users' experience better, but:
- I feel like the experience I had was great as it was, and the code was clean and straightforward.
- `useDeferredValue`, `startTransition` and the like are awesome, but they don't seem to solve this "suspense throttling" problem, or at least not in a straightforward way. Using `startTransition` especially can get complicated quickly.
- It would have been nice to know about this new throttling behaviour in the changelog (maybe it can be updated to save others some time?)
- The new `use` hook looks quite promising and seems to solve the "stuck suspense fallback" issue I had with "legacy throw", but it definitely does not solve this "suspense throttling" issue. (It also does not feel mature enough to migrate to — there's no official client-side helper for caching/memoising promises, which turns out not to be trivial, and there don't seem to be any guidelines to help migrate codebases from "legacy throw" to the new paradigm. But I digress...)
I really, really don't want to resort to using a Vite plugin to patch React's code. Please, please consider reverting this change or providing more concrete explanations and solutions! 🙏
- there don't seem to be any guidelines to help migrate codebases from "legacy throw" to the new paradigm
That's because "legacy throw" was never officially supported, and it was explicitly stated more than a few times that if you decide to implement this behavior, you're on your own.