Can't unlock after routing monitor with KVM switch

Open diogobaeder opened this issue 8 months ago • 19 comments

Regression?

No

Hyprlock Info and Version

Hyprlock version v0.8.0

Hyprlock config
source = $HOME/.config/hypr/mocha.conf

$accent = $mauve
$accentAlpha = $mauveAlpha
$font = JetBrainsMono Nerd Font

# GENERAL
general {
    disable_loading_bar = true
    hide_cursor = true
}

# BACKGROUND
background {
    monitor =
    path = ~/.config/backgrounds/shaded.png
    blur_passes = 2
    color = $base
}

# TIME
label {
    monitor =
    text = cmd[update:30000] echo "$(date +"%R")"
    color = $text
    font_size = 90
    font_family = $font
    position = -30, 0
    halign = right
    valign = top
}

# DATE 
label {
    monitor = 
    text = cmd[update:43200000] echo "$(date +"%A, %d %B %Y")"
    color = $text
    font_size = 25
    font_family = $font
    position = -30, -150
    halign = right
    valign = top
}

# USER AVATAR

image {
    monitor = 
    path = ~/.face
    size = 100
    border_color = $accent

    position = 0, 75
    halign = center
    valign = center
}

# INPUT FIELD
input-field {
    monitor =
    size = 300, 60
    outline_thickness = 4
    dots_size = 0.2
    dots_spacing = 0.2
    dots_center = true
    outer_color = $accent
    inner_color = $surface0
    font_color = $text
    fade_on_empty = false
    placeholder_text = <span foreground="##$textAlpha"><i>󰌾 Logged in as </i><span foreground="##$accentAlpha">$USER</span></span>
    hide_input = false
    check_color = $accent
    fail_color = $red
    fail_text = <i>$FAIL <b>($ATTEMPTS)</b></i>
    capslock_color = $yellow
    position = 0, -35
    halign = center
    valign = center
}

Compositor Info and Version

System/Version info
Hyprland 0.48.1 built from branch  at commit 29e2e59fdbab8ed2cc23a20e3c6043d5decb5cdc  (version: bump to v0.48.1).
Date: Fri Mar 28 16:16:07 2025
Tag: v0.48.1, commits: 5937
built against:
 aquamarine 0.8.0
 hyprlang 0.6.0
 hyprutils 0.6.0
 hyprcursor 0.1.12
 hyprgraphics 0.1.3


no flags were set


System Information:
System name: Linux
Node name: diogobaeder-desktop
Release: 6.14.2-arch1-1
Version: #1 SMP PREEMPT_DYNAMIC Thu, 10 Apr 2025 18:43:59 +0000


GPU information: 
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA107 [GeForce RTX 3050 8GB] [10de:2582] (rev a1) (prog-if 00 [VGA controller])
NVRM version: NVIDIA UNIX Open Kernel Module for x86_64  570.133.07  Release Build  (archlinux-builder@)  


os-release: NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://gitlab.archlinux.org/groups/archlinux/-/issues"
PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/"
LOGO=archlinux-logo


plugins:



Description

Not sure if this is a bug or if I misconfigured something. What happens is that if I:

  1. Lock my screen
  2. Switch my KVM to another computer (including the HDMI cable)
  3. Switch the KVM back to the previous computer

then I can't unlock my screen anymore - I get an empty Hyprland screen without my usual hyprlock overlay, and I can't do anything inside the window manager (can't click or open anything).

I tried switching to a different desktop manager, tried using uwsm, and tried both the nvidia and nvidia-open drivers, but nothing makes it work.

How to reproduce

  1. Lock screen
  2. Simulate the KVM switching to another computer
  3. Switch back to the previous computer
  4. See the window manager frozen without accepting any commands

Crash reports, logs, images, videos

I'll provide a video from my phone as soon as I post this issue.

diogobaeder avatar Apr 12 '25 01:04 diogobaeder

Here's a video showing the behavior: https://youtu.be/3lr4bM0kNv8?feature=shared

diogobaeder avatar Apr 12 '25 01:04 diogobaeder

Dupe of https://github.com/hyprwm/hyprlock/issues/695

PointerDilemma avatar Apr 12 '25 06:04 PointerDilemma

Dupe of https://github.com/hyprwm/hyprlock/issues/695

I just tested with the nouveau driver, and I get a somewhat similar issue, but the environment is not exactly locked; would you like me to describe it in the other ticket, or should I do it here?

diogobaeder avatar Apr 12 '25 10:04 diogobaeder

I just tested with the nouveau driver, and I get a somewhat similar issue, but the environment is not exactly locked; would you like me to describe it in the other ticket, or should I do it here?

Hmm, then it's not a dupe. Or maybe https://github.com/hyprwm/hyprlock/issues/695 is not a driver issue after all, but a bug that only shows with nvidia cards.

I don't have a KVM switch, but re-plugging my HDMI cable should effectively be exactly the same, right?

PointerDilemma avatar Apr 12 '25 10:04 PointerDilemma

I believe unplugging and plugging it back in might achieve the same effect, yes. It's probably something in the combination of Nvidia plus either Hyprland or hyprlock or both, because I've been testing a bunch of different desktop environments and window managers (GNOME+Wayland, GNOME+Xorg, Budgie, Sway) and the others handle locking and KVM-switching fine.

diogobaeder avatar Apr 12 '25 13:04 diogobaeder

Any change since last month? (Nvidia updates + hyprlock 0.8.2)

PointerDilemma avatar May 07 '25 07:05 PointerDilemma

I ended up switching to Xorg + i3 for now, since it's more stable at the moment, but I can give it a try soon. I'll let you know how it goes.

diogobaeder avatar May 07 '25 13:05 diogobaeder

Still the same behavior for me with the latest updates on Arch with the Linux 6.14.6 kernel.

Hyprland-related packages: hyprcursor 0.1.12-3, hyprgraphics 0.1.3-4, hypridle 0.1.6-4, hyprland 0.49.0-1, hyprland-qt-support 0.1.0-6, hyprland-qtutils 0.1.4-2, hyprlang 0.6.3-1, hyprlock 0.8.2-1, hyprpaper 0.7.5-1, hyprpicker 0.4.5-1, hyprpolkitagent 0.1.2-7, hyprutils 0.7.1-1

Nvidia packages: lib32-nvidia-utils 570.144-1, libva-nvidia-driver 0.0.13-1, nvidia-open 570.144-5, nvidia-utils 570.144-3

When this happens, I have to remote in and execute the following:

pkill -9 hyprlock
hyprctl --instance 0 'keyword misc:allow_session_lock_restore 1'
hyprctl --instance 0 'dispatch exec hyprlock'

Killing hyprlock with USR1 is not sufficient.

Gadroc avatar May 13 '25 14:05 Gadroc

I need more info.

Ideally, someone with this issue would build hyprlock in debug mode:

cmake -DCMAKE_BUILD_TYPE:STRING=Debug -S . -B ./build/
cmake --build ./build --config Debug -j$(nproc)

Then reproduce the issue, switch to another tty and:

# allow attaching to a running process (reset by rebooting)
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
# note the pid
pidof hyprlock

And then get a backtrace from all threads of hyprlock by launching this:

gdb $(which hyprlock) \
  -ex "set follow-fork-mode parent" \
  -ex "set follow-exec-mode same" \
  -ex "set logging file hyprlock_backtrace.txt" \
  -ex "set logging enabled on" \
  -ex "attach $(pidof hyprlock)" \
  -ex "t a a bt" \
  -ex "detach" -ex "exit"

Then check and send ./hyprlock_backtrace.txt. The goal of this is to know what state hyprlock is in when it does not draw anything on top of the workspace.

PointerDilemma avatar May 14 '25 05:05 PointerDilemma

I'll work on doing that this weekend. Thanks for the help.

Gadroc avatar May 14 '25 10:05 Gadroc

Here is the backtrace.

Thread 8 (Thread 0x73d3ec9a46c0 (LWP 408431) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=a3@entry=1086, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at cancellation.c:49
#2  0x000073d402baa64c in __futex_abstimed_wait_common64 (private=0, futex_word=0x569f351438c8, expected=1086, op=<optimized out>, abstime=0x73d3ec9a38c0, cancel=true) at futex-internal.c:57
#3  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x569f351438c8, expected=expected@entry=1086, clockid=clockid@entry=1, abstime=abstime@entry=0x73d3ec9a38c0, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#4  0x000073d402baa6af in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x569f351438c8, expected=expected@entry=1086, clockid=clockid@entry=1, abstime=abstime@entry=0x73d3ec9a38c0, private=private@entry=0) at futex-internal.c:139
#5  0x000073d402bad152 in __pthread_cond_wait_common (cond=0x569f351438a8, mutex=0x569f351438d8, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:426
#6  ___pthread_cond_clockwait64 (cond=0x569f351438a8, mutex=0x569f351438d8, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:522
#7  ___pthread_cond_clockwait64 (cond=0x569f351438a8, mutex=0x569f351438d8, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:510
#8  0x0000569f0b4ed7bd in std::__condvar::wait_until (this=0x569f351438a8, __m=..., __clock=1, __abs_time=...) at /usr/include/c++/15.1.1/bits/std_mutex.h:187
#9  0x0000569f0b4f5fc0 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f351438a8, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317048824089443ns }) at /usr/include/c++/15.1.1/condition_variable:205
#10 0x0000569f0b4f4cd9 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f351438a8, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317048824089443ns }) at /usr/include/c++/15.1.1/condition_variable:115
#11 0x0000569f0b5044bb in std::condition_variable::wait_until<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> >, CAsyncResourceGatherer::asyncAssetSpinLock()::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1, 1000000000> > > &, struct {...}) (this=0x569f351438a8, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317048824089443ns }, __p=...) at /usr/include/c++/15.1.1/condition_variable:156
#12 0x0000569f0b504139 in std::condition_variable::wait_for<long int, std::ratio<1>, CAsyncResourceGatherer::asyncAssetSpinLock()::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::duration<long, std::ratio<1, 1> > &, struct {...}) (this=0x569f351438a8, __lock=..., __rtime=std::chrono::duration = { 5s }, __p=...) at /usr/include/c++/15.1.1/condition_variable:179
#13 0x0000569f0b5034be in CAsyncResourceGatherer::asyncAssetSpinLock (this=0x569f35143890) at /home/ccourtne/Source/hyprlock/src/renderer/AsyncResourceGatherer.cpp:308
#14 0x0000569f0b50098b in operator() (__closure=0x569f353a7358) at /home/ccourtne/Source/hyprlock/src/renderer/AsyncResourceGatherer.cpp:20
#15 0x0000569f0b5050e5 in std::__invoke_impl<void, CAsyncResourceGatherer::CAsyncResourceGatherer()::<lambda()> >(std::__invoke_other, struct {...} &&) (__f=...) at /usr/include/c++/15.1.1/bits/invoke.h:63
#16 0x0000569f0b505063 in std::__invoke<CAsyncResourceGatherer::CAsyncResourceGatherer()::<lambda()> >(struct {...} &&) (__fn=...) at /usr/include/c++/15.1.1/bits/invoke.h:98
#17 0x0000569f0b504ff2 in std::thread::_Invoker<std::tuple<CAsyncResourceGatherer::CAsyncResourceGatherer()::<lambda()> > >::_M_invoke<0>(std::_Index_tuple<0>) (this=0x569f353a7358) at /usr/include/c++/15.1.1/bits/std_thread.h:303
#18 0x0000569f0b504faa in std::thread::_Invoker<std::tuple<CAsyncResourceGatherer::CAsyncResourceGatherer()::<lambda()> > >::operator()(void) (this=0x569f353a7358) at /usr/include/c++/15.1.1/bits/std_thread.h:310
#19 0x0000569f0b504f6e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<CAsyncResourceGatherer::CAsyncResourceGatherer()::<lambda()> > > >::_M_run(void) (this=0x569f353a7350) at /usr/include/c++/15.1.1/bits/std_thread.h:255
#20 0x000073d402ee51a4 in std::execute_native_thread_routine (__p=0x569f353a7350) at /usr/src/debug/gcc/gcc/libstdc++-v3/src/c++11/thread.cc:104
#21 0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#22 0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 7 (Thread 0x73d3e37ff6c0 (LWP 408432) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at cancellation.c:49
#2  0x000073d402baa64c in __futex_abstimed_wait_common64 (private=0, futex_word=0x569f3527fee8, expected=<optimized out>, op=<optimized out>, abstime=0x0, cancel=true) at futex-internal.c:57
#3  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x569f3527fee8, expected=<optimized out>, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#4  0x000073d402baa6af in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x569f3527fee8, expected=<optimized out>, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
#5  0x000073d402bacd1e in __pthread_cond_wait_common (cond=0x569f3527fec8, mutex=0x569f3527fea0, clockid=0, abstime=0x0) at pthread_cond_wait.c:426
#6  ___pthread_cond_wait (cond=0x569f3527fec8, mutex=0x569f3527fea0) at pthread_cond_wait.c:458
#7  0x000073d402edaaa1 in __gthread_cond_wait (__cond=<optimized out>, __mutex=<optimized out>) at /usr/src/debug/gcc/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:911
#8  std::__condvar::wait (this=<optimized out>, __m=...) at /usr/src/debug/gcc/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/std_mutex.h:173
#9  std::condition_variable::wait (this=<optimized out>, __lock=...) at /usr/src/debug/gcc/gcc/libstdc++-v3/src/c++11/condition_variable.cc:41
#10 0x0000569f0b4b0b3b in std::condition_variable::wait<CPam::waitForInput()::<lambda()> >(std::unique_lock<std::mutex> &, struct {...}) (this=0x569f3527fec8, __lock=..., __p=...) at /usr/include/c++/15.1.1/condition_variable:107
#11 0x0000569f0b4b0624 in CPam::waitForInput (this=0x569f3527fe30) at /home/ccourtne/Source/hyprlock/src/auth/Pam.cpp:146
#12 0x0000569f0b4afe3d in operator() (__closure=0x569f353a7398) at /home/ccourtne/Source/hyprlock/src/auth/Pam.cpp:86
#13 0x0000569f0b4b1296 in std::__invoke_impl<void, CPam::init()::<lambda()> >(std::__invoke_other, struct {...} &&) (__f=...) at /usr/include/c++/15.1.1/bits/invoke.h:63
#14 0x0000569f0b4b1259 in std::__invoke<CPam::init()::<lambda()> >(struct {...} &&) (__fn=...) at /usr/include/c++/15.1.1/bits/invoke.h:98
#15 0x0000569f0b4b1214 in std::thread::_Invoker<std::tuple<CPam::init()::<lambda()> > >::_M_invoke<0>(std::_Index_tuple<0>) (this=0x569f353a7398) at /usr/include/c++/15.1.1/bits/std_thread.h:303
#16 0x0000569f0b4b11e8 in std::thread::_Invoker<std::tuple<CPam::init()::<lambda()> > >::operator()(void) (this=0x569f353a7398) at /usr/include/c++/15.1.1/bits/std_thread.h:310
#17 0x0000569f0b4b11cc in std::thread::_State_impl<std::thread::_Invoker<std::tuple<CPam::init()::<lambda()> > > >::_M_run(void) (this=0x569f353a7390) at /usr/include/c++/15.1.1/bits/std_thread.h:255
#18 0x000073d402ee51a4 in std::execute_native_thread_routine (__p=0x569f353a7390) at /usr/src/debug/gcc/gcc/libstdc++-v3/src/c++11/thread.cc:104
#19 0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#20 0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 6 (Thread 0x73d3ea3ff6c0 (LWP 408433) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=a4@entry=0, a5=a5@entry=0, a6=a6@entry=0, nr=7) at cancellation.c:49
#2  0x000073d402baa024 in __syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=a4@entry=0, a5=a5@entry=0, a6=a6@entry=0, nr=7) at cancellation.c:75
#3  0x000073d402c2405e in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:29
#4  0x0000569f0b4e38f4 in operator() (__closure=0x569f3541ad18) at /home/ccourtne/Source/hyprlock/src/core/hyprlock.cpp:366
#5  0x0000569f0b4ed6f8 in std::__invoke_impl<void, CHyprlock::run()::<lambda()> >(std::__invoke_other, struct {...} &&) (__f=...) at /usr/include/c++/15.1.1/bits/invoke.h:63
#6  0x0000569f0b4ed67e in std::__invoke<CHyprlock::run()::<lambda()> >(struct {...} &&) (__fn=...) at /usr/include/c++/15.1.1/bits/invoke.h:98
#7  0x0000569f0b4ed5f4 in std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > >::_M_invoke<0>(std::_Index_tuple<0>) (this=0x569f3541ad18) at /usr/include/c++/15.1.1/bits/std_thread.h:303
#8  0x0000569f0b4ed59c in std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > >::operator()(void) (this=0x569f3541ad18) at /usr/include/c++/15.1.1/bits/std_thread.h:310
#9  0x0000569f0b4ed564 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > > >::_M_run(void) (this=0x569f3541ad10) at /usr/include/c++/15.1.1/bits/std_thread.h:255
#10 0x000073d402ee51a4 in std::execute_native_thread_routine (__p=0x569f3541ad10) at /usr/src/debug/gcc/gcc/libstdc++-v3/src/c++11/thread.cc:104
#11 0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#12 0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 5 (Thread 0x73d3e93fe6c0 (LWP 408434) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=a3@entry=10292, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at cancellation.c:49
#2  0x000073d402baa64c in __futex_abstimed_wait_common64 (private=0, futex_word=0x569f351440d4, expected=10292, op=<optimized out>, abstime=0x73d3e93fd940, cancel=true) at futex-internal.c:57
#3  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x569f351440d4, expected=expected@entry=10292, clockid=clockid@entry=1, abstime=abstime@entry=0x73d3e93fd940, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#4  0x000073d402baa6af in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x569f351440d4, expected=expected@entry=10292, clockid=clockid@entry=1, abstime=abstime@entry=0x73d3e93fd940, private=private@entry=0) at futex-internal.c:139
#5  0x000073d402bad152 in __pthread_cond_wait_common (cond=0x569f351440b0, mutex=0x569f351440e0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:426
#6  ___pthread_cond_clockwait64 (cond=0x569f351440b0, mutex=0x569f351440e0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:522
#7  ___pthread_cond_clockwait64 (cond=0x569f351440b0, mutex=0x569f351440e0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:510
#8  0x0000569f0b4ed7bd in std::__condvar::wait_until (this=0x569f351440b0, __m=..., __clock=1, __abs_time=...) at /usr/include/c++/15.1.1/bits/std_mutex.h:187
#9  0x0000569f0b4f5fc0 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f351440b0, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317047052148502ns }) at /usr/include/c++/15.1.1/condition_variable:205
#10 0x0000569f0b4f4cd9 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f351440b0, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317047052148502ns }) at /usr/include/c++/15.1.1/condition_variable:115
#11 0x0000569f0b4e9d65 in std::condition_variable::wait_until<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> >, CHyprlock::run()::<lambda()>::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1, 1000000000> > > &, struct {...}) (this=0x569f351440b0, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317047052148502ns }, __p=...) at /usr/include/c++/15.1.1/condition_variable:156
#12 0x0000569f0b4e8ab9 in std::condition_variable::wait_for<long int, std::ratio<1, 1000>, CHyprlock::run()::<lambda()>::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::duration<long, std::ratio<1, 1000> > &, struct {...}) (this=0x569f351440b0, __lock=..., __rtime=std::chrono::duration = { 1000ms }, __p=...) at /usr/include/c++/15.1.1/condition_variable:179
#13 0x0000569f0b4e4006 in operator() (__closure=0x569f353eada8) at /home/ccourtne/Source/hyprlock/src/core/hyprlock.cpp:407
#14 0x0000569f0b4ed6bb in std::__invoke_impl<void, CHyprlock::run()::<lambda()> >(std::__invoke_other, struct {...} &&) (__f=...) at /usr/include/c++/15.1.1/bits/invoke.h:63
#15 0x0000569f0b4ed639 in std::__invoke<CHyprlock::run()::<lambda()> >(struct {...} &&) (__fn=...) at /usr/include/c++/15.1.1/bits/invoke.h:98
#16 0x0000569f0b4ed5c8 in std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > >::_M_invoke<0>(std::_Index_tuple<0>) (this=0x569f353eada8) at /usr/include/c++/15.1.1/bits/std_thread.h:303
#17 0x0000569f0b4ed580 in std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > >::operator()(void) (this=0x569f353eada8) at /usr/include/c++/15.1.1/bits/std_thread.h:310
#18 0x0000569f0b4ed544 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<CHyprlock::run()::<lambda()> > > >::_M_run(void) (this=0x569f353eada0) at /usr/include/c++/15.1.1/bits/std_thread.h:255
#19 0x000073d402ee51a4 in std::execute_native_thread_routine (__p=0x569f353eada0) at /usr/src/debug/gcc/gcc/libstdc++-v3/src/c++11/thread.cc:104
#20 0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#21 0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 4 (Thread 0x73d3e17fd6c0 (LWP 408436) "[pango] fontcon"):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000073d4033b1450 in g_cond_wait_impl (cond=0x73d3dc002c38, mutex=0x73d3dc002c30) at ../glib/glib/gthread-posix.c:1026
#2  g_cond_wait (cond=0x73d3dc002c38, mutex=0x73d3dc002c30) at ../glib/glib/gthread.c:1686
#3  0x000073d403346dac in g_async_queue_pop_intern_unlocked (queue=0x73d3dc002c30, wait=1, end_time=-1) at ../glib/glib/gasyncqueue.c:375
#4  0x000073d403346e1d in g_async_queue_pop (queue=queue@entry=0x73d3dc002c30) at ../glib/glib/gasyncqueue.c:409
#5  0x000073d4030b292c in fc_thread_func (data=0x73d3dc002c30) at ../pango/pango/pangofc-fontmap.c:992
#6  0x000073d4033b6b3e in g_thread_proxy (data=0x73d3dc002c80) at ../glib/glib/gthread.c:893
#7  0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#8  0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 3 (Thread 0x73d3db3ff6c0 (LWP 408438) "CPMMListener"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=a4@entry=0, a5=a5@entry=0, a6=a6@entry=0, nr=7) at cancellation.c:49
#2  0x000073d402baa024 in __syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=a4@entry=0, a5=a5@entry=0, a6=a6@entry=0, nr=7) at cancellation.c:75
#3  0x000073d402c2405e in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:29
#4  0x000073d400013567 in ?? () from /usr/lib/libnvidia-eglcore.so.570.144
#5  0x000073d400011b2a in ?? () from /usr/lib/libnvidia-eglcore.so.570.144
#6  0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#7  0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 2 (Thread 0x73d3d9dfe6c0 (LWP 413195) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at cancellation.c:49
#2  0x000073d402baa64c in __futex_abstimed_wait_common64 (private=0, futex_word=0x569f354694b8, expected=<optimized out>, op=<optimized out>, abstime=0x0, cancel=true) at futex-internal.c:57
#3  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x569f354694b8, expected=<optimized out>, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#4  0x000073d402baa6af in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x569f354694b8, expected=<optimized out>, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
#5  0x000073d402bacd1e in __pthread_cond_wait_common (cond=0x569f35469498, mutex=0x569f35146d60, clockid=0, abstime=0x0) at pthread_cond_wait.c:426
#6  ___pthread_cond_wait (cond=0x569f35469498, mutex=0x569f35146d60) at pthread_cond_wait.c:458
#7  0x000073d4018bbff8 in ?? () from /usr/lib/libEGL_nvidia.so.0
#8  0x000073d40188afe1 in ?? () from /usr/lib/libEGL_nvidia.so.0
#9  0x000073d4018c1a1e in ?? () from /usr/lib/libEGL_nvidia.so.0
#10 0x000073d402bad7eb in start_thread (arg=<optimized out>) at pthread_create.c:448
#11 0x000073d402c3118c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78

Thread 1 (Thread 0x73d401c6ba00 (LWP 408427) "hyprlock"):
#0  __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#1  0x000073d402ba9fda in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=a3@entry=14271, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=4294967295, nr=202) at cancellation.c:49
#2  0x000073d402baa64c in __futex_abstimed_wait_common64 (private=0, futex_word=0x569f35144064, expected=14271, op=<optimized out>, abstime=0x7ffc00077be0, cancel=true) at futex-internal.c:57
#3  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x569f35144064, expected=expected@entry=14271, clockid=clockid@entry=1, abstime=abstime@entry=0x7ffc00077be0, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#4  0x000073d402baa6af in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x569f35144064, expected=expected@entry=14271, clockid=clockid@entry=1, abstime=abstime@entry=0x7ffc00077be0, private=private@entry=0) at futex-internal.c:139
#5  0x000073d402bad152 in __pthread_cond_wait_common (cond=0x569f35144040, mutex=0x569f35143ff0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:426
#6  ___pthread_cond_clockwait64 (cond=0x569f35144040, mutex=0x569f35143ff0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:522
#7  ___pthread_cond_clockwait64 (cond=0x569f35144040, mutex=0x569f35143ff0, clockid=1, abstime=<optimized out>) at pthread_cond_wait.c:510
#8  0x0000569f0b4ed7bd in std::__condvar::wait_until (this=0x569f35144040, __m=..., __clock=1, __abs_time=...) at /usr/include/c++/15.1.1/bits/std_mutex.h:187
#9  0x0000569f0b4f5fc0 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f35144040, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317051052151232ns }) at /usr/include/c++/15.1.1/condition_variable:205
#10 0x0000569f0b4f4cd9 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x569f35144040, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317051052151232ns }) at /usr/include/c++/15.1.1/condition_variable:115
#11 0x0000569f0b4e9e15 in std::condition_variable::wait_until<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> >, CHyprlock::run()::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1, 1000000000> > > &, struct {...}) (this=0x569f35144040, __lock=..., __atime=std::chrono::_V2::steady_clock time_point = { 317051052151232ns }, __p=...) at /usr/include/c++/15.1.1/condition_variable:156
#12 0x0000569f0b4e8c55 in std::condition_variable::wait_for<long int, std::ratio<1, 1000>, CHyprlock::run()::<lambda()> >(std::unique_lock<std::mutex> &, const std::chrono::duration<long, std::ratio<1, 1000> > &, struct {...}) (this=0x569f35144040, __lock=..., __rtime=std::chrono::duration = { 5000ms }, __p=...) at /usr/include/c++/15.1.1/condition_variable:179
#13 0x0000569f0b4e485d in CHyprlock::run (this=0x569f35143e70) at /home/ccourtne/Source/hyprlock/src/core/hyprlock.cpp:424
#14 0x0000569f0b4fed91 in main (argc=1, argv=0x7ffc00078148, envp=0x7ffc00078158) at /home/ccourtne/Source/hyprlock/src/main.cpp:115
Detaching from program: /home/ccourtne/hyprlock, process 408427
[Inferior 1 (process 408427) detached]

Gadroc avatar May 16 '25 13:05 Gadroc

I believe I also have this issue.

Though in my case, it occurs when I power cycle my Valve Index, or disable it with hyprctl monitors, prior to locking. Then hypridle starts hyprlock, and after a while toggles dpms on my main monitor. This is when I observe the behavior.

I've tried to manually unlock it by sending the USR1 signal or using loginctl unlock-sessions but I haven't had success.

My guess is that this is caused by monitors changing on an nvidia card? I'm willing to build with debug symbols and gather information on this; a sketch of my idle setup is below.
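For context, my idle setup is roughly along these lines (an illustrative sketch with placeholder timeouts, not my exact config):

general {
    lock_cmd = pidof hyprlock || hyprlock   # avoid starting a second hyprlock
}

listener {
    timeout = 300                           # lock after 5 minutes idle
    on-timeout = loginctl lock-session
}

listener {
    timeout = 360                           # blank the screen shortly after locking
    on-timeout = hyprctl dispatch dpms off
    on-resume = hyprctl dispatch dpms on
}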

xenia-foxtrot avatar May 18 '25 20:05 xenia-foxtrot

My guess is that this is caused by monitors changing on an nvidia card? I'm willing to build with debug symbols and gather information on this.

Yeah that is probably the failure condition.

@xenia-foxtrot build it and post the same backtrace that @Gadroc posted. Also please post which versions of hypr* you are running as well as your nvidia driver versions.

Summary of @Gadroc's backtrace: all the hyprlock threads do what they are expected to do:

  • asyncResourceGatherer is waiting.
  • hyprlock main thread is waiting.
  • timers thread is waiting.
  • pam thread is waiting.
  • poll thread is probably stuck at poll (same as in #741).

Now, the thing that's probably the problem is the CPMMListener thread, which comes from the nvidia drivers. That one is also currently polling (__GI___poll), and the file descriptor it is polling on is probably the wayland display. Because hyprlock is also polling on the wayland display at the same time, we probably have a deadlock here: two threads polling on the same wayland display.

The fun thing is that it's the same problem as #741, just with a different cause. It's also the same problem that the initial nvidia patch, which fixed smoothness on nvidia cards, addressed (https://github.com/hyprwm/hyprlock/pull/655).

Now why does that happen???

In wayland, you MUST successfully call wl_display_prepare_read before reading from the wayland display (and call wl_display_cancel_read if you end up not reading). What a lot of wayland event loops get wrong is that a call to poll on the display fd must also go through this synchronization mechanism.

In hyprlock I am like 99% sure that we guard with wl_display_prepare_read wherever we interact with the wayland display. My educated guess is that the nvidia driver doesn't, and just polls on the wayland display without preparing a read. This is probably also the behavior that causes #695.
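For reference, the read pattern libwayland requires looks roughly like this (a minimal sketch against the public wayland-client API, with error handling trimmed; this is not hyprlock's actual event loop):

#include <poll.h>
#include <wayland-client.h>

void waitAndDispatch(wl_display* display) {
    // Dispatch anything already queued before touching the fd.
    while (wl_display_prepare_read(display) != 0)
        wl_display_dispatch_pending(display);

    // Flush outgoing requests so the compositor can respond.
    wl_display_flush(display);

    pollfd pfd{};
    pfd.fd     = wl_display_get_fd(display);
    pfd.events = POLLIN;

    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
        wl_display_read_events(display); // safe: the read was prepared above
    else
        wl_display_cancel_read(display); // MUST cancel when not reading

    wl_display_dispatch_pending(display);
}

Any thread that polls the display fd without going through prepare_read/cancel_read can race another reader, which is exactly the kind of deadlock the backtrace suggests.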

PointerDilemma avatar May 21 '25 07:05 PointerDilemma

I am a developer, but I have no knowledge of the Wayland ecosystem; I'm just a happy user.

Would all apps read from the display, or is this a scenario unique to hyprlock? Theoretically, if the problem solely resided in the Nvidia code, those apps would also have problems. Currently none of my other applications have any kind of hiccup in this scenario, albeit it's a limited set (firefox, several electron apps, the jetbrains suite, steam, and a plethora of ghostty instances). Happy to install something else to see its behavior during this condition as well.

Gadroc avatar May 21 '25 11:05 Gadroc

I just tried to give a summary of what I think this bug is. Mostly for myself, honestly, but maybe also for other people.

Would all apps read from the display, or is this a scenario unique to hyprlock? Theoretically, if the problem solely resided in the Nvidia code, those apps would also have problems. Currently none of my other applications have any kind of hiccup in this scenario, albeit it's a limited set (firefox, several electron apps, the jetbrains suite, steam, and a plethora of ghostty instances). Happy to install something else to see its behavior during this condition as well.

Good question. I am not entirely sure it's their fault; maybe I was a bit too confident when writing the previous comment. Some reasons why other things work just fine:

  • Normal applications don't need to immediately submit a new frame for a new output.
  • Hyprlock directly uses some APIs that a normal application with a window wouldn't use.

However, hyprlock's event loop is a lot simpler and not as well tested as the backends that gtk/qt use, which have had considerable work put into them to run properly on nvidia. So it could very well still be fixable in hyprlock. Maybe it's some sort of opengl-related synchronization problem.

PointerDilemma avatar May 21 '25 17:05 PointerDilemma

I would try to reproduce it for another stack trace, but I seem to have fixed my Index issue by just unplugging it and plugging it back in. This persists across reboots.

I have encountered another issue though: now when hyprlock is active and hypridle activates DPMS (the mechanism that allows the monitor to turn off), the same thing happens.

The behavior is different enough that I think it might warrant a separate issue? I'll let you decide on that. I can try to reproduce with a debug stack trace in the meantime.

xenia-foxtrot avatar May 21 '25 21:05 xenia-foxtrot

@xenia-foxtrot yeah, please open another issue for that, although it may still be the same issue. Sometimes it's hard to tell.

dpms off + on has caused problems in the past; curious what it is this time.

PointerDilemma avatar May 22 '25 06:05 PointerDilemma

Please check if this changes anything (I think it should also work around this problem): https://github.com/hyprwm/hyprlock/pull/845

PointerDilemma avatar Sep 03 '25 07:09 PointerDilemma

A new hunch about the problem produced this: https://github.com/hyprwm/hyprlock/pull/877. If you can, please test it.

PointerDilemma avatar Sep 17 '25 07:09 PointerDilemma