mpv with fifo presentation lags terminally when resizing a paused video or image
mpv Information
mpv v0.40.0-dirty Copyright © 2000-2025 mpv/MPlayer/mplayer2 projects
built on Sep 24 2025 17:56:15
libplacebo version: v7.351.0
FFmpeg version: n8.0 (runtime n8.0.1)
FFmpeg library versions:
libavcodec 62.11.100
libavdevice 62.1.100
libavfilter 11.4.100
libavformat 62.3.100
libavutil 60.8.100
libswresample 6.1.100
libswscale 9.1.100
Other Information
- Linux version: not relevant
- Kernel Version: not relevant
- GPU Model: AMD RX 6600
- Mesa/GPU Driver Version: mesa 25.2.7
- Window Manager and Version: Jay
- Source of mpv: Both current arch version and master are affected
- Latest known working version: none
- Issue started after the following happened: probably after mpv became aware of wp-fifo-v1 and started using fifo on wayland
Reproduction Steps
- Play any video on a compositor that supports wp-fifo-v1 with no particular command line arguments. I have tested KDE and Jay.
- Pause the video
- Resize the window
Expected Behavior
The window resizes smoothly
Actual Behavior
The resizing lags unbearably behind the mouse. On Jay, the connection is terminated soon afterwards with the following error:
```
An error occurred while processing a request: Could not process a
wl_surface#6.commit request: The client has too many pending commits
```
This is because Jay has an upper limit of 128 unapplied commits per client to prevent resource exhaustion.
This issue does not happen while the video is playing. Even so, wp-fifo-v1 is fundamentally broken for applications that want to support smooth resizing. See https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/495.
mpv seems to have perfectly functional mailbox support which it should always prefer on wayland unless the user explicitly opts into fifo. You can currently invoke it manually with --vulkan-swap-mode=mailbox and the issue goes away.
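For reference, here is the same workaround as a minimal mpv.conf sketch. Both option names are real mpv options; pinning gpu-api=vulkan is my assumption, since vulkan-swap-mode only takes effect with the Vulkan backend:

```
# Workaround sketch, not an official fix: prefer mailbox over fifo.
# vulkan-swap-mode only applies when the Vulkan GPU API is in use.
gpu-api=vulkan
vulkan-swap-mode=mailbox
```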
Log File
Sample Files
No response
I carefully read all instructions and confirm that I did the following:
- [x] I tested with the latest mpv version to validate that the issue is not already fixed.
- [x] I provided all required information including system and mpv version.
- [x] I produced the log file with the exact same set of files, parameters, and conditions used in "Reproduction Steps", with the addition of --log-file=output.txt.
- [x] I produced the log file while the behaviors described in "Actual Behavior" were actively observed.
- [x] I attached the full, untruncated log file.
- [x] I attached the backtrace in the case of a crash.
> mpv seems to have perfectly functional mailbox support which it should always prefer on wayland unless the user explicitly opts into fifo.
mailbox support is a bit of a hack, though. I thought fifo support would do something smarter than this https://github.com/mpv-player/mpv/blob/master/video/out/wayland_common.c#L4647-L4700
I don't think anything other than fixed-rate fullscreen applications like games was really considered during wp-fifo-v1 development. At least I don't remember anything like that.
Isn't this something to fix on the Wayland/Mesa side then? Of course mpv can do the Right Thing™ here, but you can't really expect every single application to do the same. Anything that picks FIFO via Vulkan is going to run into this problem.
Luckily most applications don't. Most desktop-style applications are better off with mailbox and that's what they seem to be using. But yes, shit's fucked and the only reason that it's not more visible is that most fifo users are games that are almost always fullscreen.
It might be worth investigating why paused videos behave so much worse than playing videos.
> It might be worth investigating why paused videos behave so much worse than playing videos.
May be a duplicate of https://github.com/mpv-player/mpv/issues/16744
Resizing seems to be sluggish when there's no video playing to force updates; will need to investigate why. Though this seems to be the case even with opengl, the only common denominator is the use of the wayland backend
Yes, that is the same issue. However, contrary to the analysis in there, I think the issue is caused by fifo.
> the only common denominator is the use of the wayland backend
I can replicate this exact issue on Windows with --d3d11-flip=yes, but not --d3d11-flip=no. But I never bothered reporting it because I wasn't sure if it was a compositing issue or an mpv issue.
The issue seems to be that, while playing a video, mpv is busy outside of the wayland code a lot of the time. This causes a single wl_display_dispatch_pending call to dispatch a great many resize events. This acts as a sort of debounce where things throttle themselves. So if you have 1000 resize events, mpv might only act on every 100th of them.
On the other hand, when the video is paused, mpv is much more responsive and dispatches events immediately. So if you have 1000 resize events, mpv actually resizes the swapchain 1000 times. This causes the compositor-side fifo queue to fill up immediately.
This does not seem intentional, so mpv only gets lucky while it's playing a video.
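To illustrate the mechanism, here is a hedged sketch of the two regimes; this is not mpv's actual event loop, and the state struct and helpers are hypothetical:

```c
#include <stdbool.h>
#include <wayland-client.h>

struct state {
    struct wl_display *display;
    int pending_w, pending_h;     // last size seen by the configure listener
    int swapchain_w, swapchain_h; // size the swapchain was created with
};

// Hypothetical stand-ins for the expensive parts:
static void recreate_swapchain(struct state *s, int w, int h)
{
    s->swapchain_w = w;
    s->swapchain_h = h;
}
static void render_frame(struct state *s) { (void)s; /* GPU work */ }

static void loop_iteration(struct state *s)
{
    // One call delivers every queued event. If the previous iteration took
    // long (video playing), hundreds of configure events are dispatched here
    // back to back, but pending_w/pending_h end up holding only the final
    // size, so the swapchain is recreated once per batch.
    wl_display_dispatch_pending(s->display);

    if (s->pending_w != s->swapchain_w || s->pending_h != s->swapchain_h)
        recreate_swapchain(s, s->pending_w, s->pending_h);

    // When paused, this loop is fast and runs once per configure event, so
    // every single resize event triggers a swapchain recreation.
    render_frame(s);
}
```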
I have looked at what vulkan specifies and it does not say anything about this situation. That is, it is both possible for the FIFO queue to grow indefinitely if mpv re-creates the swapchain an indefinite number of times, and it is possible for the FIFO queue to get flushed when the swapchain is recreated. So far I have not found any information about how Vulkan drivers behave on Windows in this situation.
For compatibility with all possible implementations, mpv must throttle itself after recreating the swapchain when using FIFO.
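A minimal sketch of what such throttling could look like; the helper is hypothetical (not mpv or libplacebo code) and the 8 ms floor is an arbitrary example value:

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

// Returns true if enough wall-clock time has passed since the last swapchain
// recreation; otherwise the caller should coalesce the resize and retry
// later. The 8 ms minimum (~120 Hz) is an arbitrary example.
static bool may_recreate_swapchain(uint64_t *last_recreate_ns)
{
    const uint64_t min_interval_ns = 8000000; // 8 ms
    uint64_t t = now_ns();
    if (t - *last_recreate_ns < min_interval_ns)
        return false;
    *last_recreate_ns = t;
    return true;
}
```

Pacing against presentation feedback instead of wall-clock time would likely be tighter, but a minimum interval is enough to keep the compositor-side queue bounded on any implementation.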
Doesn't the fifo protocol allow clearing the barrier early if needed?
> The compositor may clear the condition early if it must do so to ensure client forward progress assumptions.
I believe it would make sense for the WSI or compositor to clear fifo barriers during resize operations, no?
There is no way for the client to communicate this intent at the moment. That is what https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/495 is about. The WSI is unable to do anything about this.
The compositor might be able to get clever with it but I have not looked into this.
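For readers unfamiliar with the protocol, a rough sketch of per-frame wp-fifo-v1 usage as a WSI would issue it; the header name follows the usual wayland-scanner convention, and this is not mpv or Mesa code:

```c
#include <wayland-client.h>
#include "fifo-v1-client-protocol.h" // generated by wayland-scanner

// Assumes `fifo` was created earlier via wp_fifo_manager_v1_get_fifo().
void present_frame(struct wp_fifo_v1 *fifo, struct wl_surface *surface,
                   struct wl_buffer *buffer)
{
    wl_surface_attach(surface, buffer, 0, 0);
    // Defer applying this commit while the previous barrier still exists...
    wp_fifo_v1_wait_barrier(fifo);
    // ...and make this commit the new barrier for the following frame.
    wp_fifo_v1_set_barrier(fifo);
    wl_surface_commit(surface);
}
```

Since barriers only clear at roughly refresh rate, a client that keeps committing (e.g. a WSI recreating its swapchain on every resize event) piles up unapplied commits on the compositor side, which is exactly the failure mode reported above.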
> There is no way for the client to communicate this intent at the moment
Correct me if I'm wrong, but isn't there a way for the WSI to detect this? It already returns VK_SUBOPTIMAL_KHR if the window gets resized but the client doesn't commit an appropriately sized texture. The WSI knows when the swapchain is recreated with a different surface size, which could be used to clear fifo barriers.
> It already returns VK_SUBOPTIMAL_KHR if the window gets resized but the client doesn't commit an appropriately sized texture.
It doesn't. The WSI has no way to know if the window has been resized. The only time the wayland WSI returns suboptimal is if the window goes fullscreen and the buffers need to be recreated to enable direct scanout.
> The WSI knows when the swapchain is recreated with a different surface size, which could be used to clear fifo barriers.
The WSI cannot clear fifo barriers.
@mahkoh: does reverting d2f3b6643964ada78803e7ef2d50b60f5c275acf help?
No, like I said in my previous message, the use case that that commit fixes is the only use case in which suboptimal will be reported:
> With this, direct scanout with Vulkan on Wayland should just work.
The WSI will never report suboptimal when resizing a windowed mpv window.
Thanks for testing.
I did test it in case that wasn't clear. Could have ended up pretty embarrassing otherwise.
Reverting https://github.com/mpv-player/mpv/commit/39b53dd8936bd948097ad8024f6c9f5b0a6413a7 would help according to mahkoh, but I still see the resize issues with mailbox, so I don't think that's it. As I've said on the other issue, the resize issue is related to --keepaspect-window=yes and not the presentation mode.
Though I guess reverting that is still preferred if, as mahkoh says, fifo isn't meant to be used by any application that intends to not be fullscreened the whole time it's open
I cannot reproduce the issue with mpv --no-config --idle --force-window --keepaspect-window=yes --vulkan-swap-mode=mailbox. That's your reproducer from the other issue just with mailbox mode added at the end.
> Though I guess reverting that is still preferred if, as mahkoh says, fifo isn't meant to be used by any application that intends to not be fullscreened the whole time it's open
I would not go that far. I would say that the current fifo protocol is deficient, just as the situation before the fifo protocol was deficient. It's just deficient in a different way.
One of the authors of the vulkan swapchain extension has clarified in https://github.com/KhronosGroup/Vulkan-Docs/issues/2626 that back pressure is not guaranteed with the fifo present mode. Applications are required to pace rendering themselves even if they use fifo.
@mahkoh: Could you test if this patch resolves the issue?
```diff
diff --git a/src/vulkan/swapchain.c b/src/vulkan/swapchain.c
index 90a4d5e8..bb70e191 100644
--- a/src/vulkan/swapchain.c
+++ b/src/vulkan/swapchain.c
@@ -582,6 +582,15 @@ static void destroy_swapchain(struct vk_ctx *vk, void *swapchain)
 
 VK_CB_FUNC_DEF(destroy_swapchain);
 
+static void wait_for_frames_in_flight(struct priv *p, int max_remaining)
+{
+    while (pl_rc_count(&p->frames_in_flight) > max_remaining) {
+        pl_mutex_unlock(&p->lock); // don't hold mutex while blocking
+        vk_poll_commands(p->vk, UINT64_MAX);
+        pl_mutex_lock(&p->lock);
+    }
+}
+
 static bool vk_sw_recreate(pl_swapchain sw, int w, int h)
 {
     pl_gpu gpu = sw->gpu;
@@ -622,6 +631,12 @@ static bool vk_sw_recreate(pl_swapchain sw, int w, int h)
     }
 #endif
 
+    // Drain the present queue to ensure it doesn't grow unboundedly when
+    // recreating the swapchain multiple times in a row (e.g. resizing the
+    // window). Allow one frame to remain in flight; this should give enough
+    // time for the next present to be submitted.
+    wait_for_frames_in_flight(p, 1);
+
     // Calling `vkCreateSwapchainKHR` puts sinfo.oldSwapchain into a retired
     // state whether the call succeeds or not, so we always need to garbage
     // collect it afterwards - asynchronously as it may still be in use
@@ -880,11 +895,7 @@ static void vk_sw_swap_buffers(pl_swapchain sw)
     struct priv *p = PL_PRIV(sw);
 
     pl_mutex_lock(&p->lock);
-    while (pl_rc_count(&p->frames_in_flight) >= p->swapchain_depth) {
-        pl_mutex_unlock(&p->lock); // don't hold mutex while blocking
-        vk_poll_commands(p->vk, UINT64_MAX);
-        pl_mutex_lock(&p->lock);
-    }
+    wait_for_frames_in_flight(p, p->swapchain_depth - 1);
    pl_mutex_unlock(&p->lock);
 }
```
However, note that we already do this in vk_sw_swap_buffers, so there is little change here. And in fact, I don't know how you can manage to grow the queue beyond the configured depth.
> So in general, yes, if a naive application continually acquires more images and presents them, with or without an intermediate swapchain recreation, the queue is allowed by the spec to grow in an unbounded manner. In practice, not all implementations have unlimited resources such that this is allowed, but applications can't assume it won't happen.
To clarify my previous reply: this is not applicable to mpv. Presentation is indeed blocked on image acquire (in display sync mode), but it is not allowed to exceed the swapchain depth as configured by mpv. mpv will never submit another present if the in-flight counter is higher than the configured depth.
So, yes, the scenario described in the linked issue is not applicable here.
The patch makes no difference as far as I can tell.
> To clarify my previous reply: this is not applicable to mpv. Presentation is indeed blocked on image acquire (in display sync mode), but it is not allowed to exceed the swapchain depth as configured by mpv. mpv will never submit another present if the in-flight counter is higher than the configured depth.
That seems to be the case when the video is playing but not when it's paused or if I'm playing an image. I'm using the following command to gauge how often mpv presents:
```
WAYLAND_DEBUG=1 mpv ~/battlefield_1080p_120fps_8mbps.mp4 2>&1 | rg --line-buffered 'set_barrier' | sed --unbuffered -E 's/^(.....).*/\1/' | uniq -c
```
(The sed keeps only the first five characters of each line, which cover the seconds part of the WAYLAND_DEBUG timestamp, so uniq -c shows roughly how many set_barrier requests are sent per second.)
The video is 120 fps and plays on a 120 fps monitor. When I resize the window while the video is playing, the count is steady:
```
120 [ 564
120 [ 565
120 [ 566
120 [ 567
120 [ 568
120 [ 569
120 [ 570
120 [ 571
120 [ 572
```
Then I pause the video so the count goes to 0:
```
109 [ 573
 12 [ 574
```
Then I resize the window and the count goes very high:
```
610 [ 577
882 [ 578
898 [ 579
879 [ 580
831 [ 581
```
I presume the patch I posted before didn't help?
Could you try this and see if there is more than one pending swapchain to be destroyed?
```diff
diff --git a/src/vulkan/common.h b/src/vulkan/common.h
index 26b70749..3c90a40f 100644
--- a/src/vulkan/common.h
+++ b/src/vulkan/common.h
@@ -100,6 +100,8 @@ struct vk_ctx {
     const struct vk_callback *pending_callbacks;
     int num_pending_callbacks;
 
+    atomic_int_fast32_t retired_swapchains;
+
     // Instance-level function pointers
     PL_VK_FUN(CreateDevice);
     PL_VK_FUN(EnumerateDeviceExtensionProperties);
diff --git a/src/vulkan/swapchain.c b/src/vulkan/swapchain.c
index 90a4d5e8..ba962321 100644
--- a/src/vulkan/swapchain.c
+++ b/src/vulkan/swapchain.c
@@ -341,6 +341,7 @@ pl_swapchain pl_vulkan_create_swapchain(pl_vulkan plvk,
     p->swapchain_depth = PL_DEF(params->swapchain_depth, 3);
     pl_assert(p->swapchain_depth > 0);
     atomic_init(&p->frames_in_flight, 0);
+    vk->retired_swapchains = 0;
     p->last_imgidx = -1;
     p->protoInfo = (VkSwapchainCreateInfoKHR) {
         .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
@@ -577,6 +578,7 @@ error:
 
 static void destroy_swapchain(struct vk_ctx *vk, void *swapchain)
 {
+    PL_ERR(vk, "destroy_swapchain: %d", (int) vk->retired_swapchains--);
     vk->DestroySwapchainKHR(vk->dev, vk_unwrap_handle(swapchain), PL_VK_ALLOC);
 }
 
@@ -627,6 +629,7 @@ static bool vk_sw_recreate(pl_swapchain sw, int w, int h)
     // collect it afterwards - asynchronously as it may still be in use
     sinfo.oldSwapchain = p->swapchain;
     p->swapchain = VK_NULL_HANDLE;
+    PL_ERR(p->vk, "vk_sw_recreate: %d", (int) vk->retired_swapchains++);
     VkResult res = vk->CreateSwapchainKHR(vk->dev, &sinfo, PL_VK_ALLOC, &p->swapchain);
     vk_dev_callback(vk, VK_CB_FUNC(destroy_swapchain), vk, vk_wrap_handle(sinfo.oldSwapchain));
     PL_VK_ASSERT(res, "vk->CreateSwapchainKHR(...)");
```
It only ever prints:
```
[vo/gpu-next/libplacebo] vk_sw_recreate: 0
[vo/gpu-next/libplacebo] destroy_swapchain: 1
```
Oh, I see now why this happens. Annoying.
Do we really need to throttle this on wall-clock time? I guess on Wayland it could wait for presentation feedback, but that's also not that great.
@mahkoh: I think we can do a heavy-handed solution and throttle all manual redraw events, see #17133. mpv requests redraws pretty liberally in various parts of the code, so while it's probably only a real problem when the swapchain is recreated, we can just throttle all requests to a slower rate.
It's not perfect but it seems to fix the worst of it.
It's not perfect but it seems to fix the worst of it.
What if you make the frame interval something like 100 ms? I'm reluctant to make it too slow, to avoid a laggy UI in the paused state. But maybe we can meet in the middle. I find it a bit of a trade-off, since we don't really know how many frames are queued to present. Once we send them out, it's really out of mpv's hands.