wezterm
panic on window close with wayland in nightly
What Operating System(s) are you seeing this problem on?
Linux Wayland
Which Wayland compositor or X11 Window manager(s) are you using?
hyprland
WezTerm version
nightly
Did you try the latest nightly build to see if the issue is better (or worse!) than your current version?
Yes, and I updated the version box above to show the version of the nightly that I tried
Describe the bug
Between bed5141d6e07f2e82ad68cb73feb7b7949460aad and 5046fc225992db6ba2ef8812743fadfdfe4b184a, a regression was introduced: if any running wezterm instance is closed, all instances of wezterm exit. bed5141d6e07f2e82ad68cb73feb7b7949460aad is stable and working for me.
To Reproduce
- Launch a wayland environment (https://github.com/hyprwm/Hyprland/commit/ded174d6e5d14bc376919194cbc52c238a07f640 specifically in my instance)
- Start 2 instances of wezterm.
- Quit any instance (I use hyprland's killactive, which closes the window but does not kill the process)
- See that both instances are dead.
Configuration
-- Generated by Home Manager.
-- See https://wezfurlong.org/wezterm/
local wezterm = require 'wezterm';
return {
font = wezterm.font("FiraCode Nerd Font"),
font_size = 12.0,
color_scheme = "tokyo-city-terminal-dark",
hide_tab_bar_if_only_one_tab = true,
window_close_confirmation = "NeverPrompt",
check_for_updates = false,
}
Expected Behavior
Closing one instance should not cause all other instances of wezterm to die.
Logs
Reading through the logs, it seems to be a resizing error in the Wayland handler:
18:09:12.073 ERROR wezterm_client::client > Error while decoding response pdu: decoding a PDU: reading PDU length: Connection reset by peer (os error 104)
18:09:12.256 WARN wezterm_gui::termwindow::resize > cannot resize window to match Some(RowsAndCols { rows: 24, cols: 80 }) because window_state is MAXIMIZED
18:09:12.628 ERROR wezterm_mux_server_impl::local > writing pdu data buffer: Broken pipe (os error 32)
18:09:12.662 WARN wezterm_gui::termwindow::resize > cannot resize window to match Some(RowsAndCols { rows: 24, cols: 80 }) because window_state is MAXIMIZED
18:09:13.173 ERROR env_bootstrap > panic at window/src/os/wayland/window.rs:1034:14 - !?
0: env_bootstrap::register_panic_hook::{{closure}}
1: std::panicking::rust_panic_with_hook
2: std::panicking::begin_panic_handler::{{closure}}
3: std::sys_common::backtrace::__rust_end_short_backtrace
4: rust_begin_unwind
5: core::panicking::panic_fmt
6: core::option::expect_failed
7: window::os::wayland::window::WaylandWindowInner::do_paint
8: window::os::wayland::connection::WaylandConnection::with_window_inner::{{closure}}
9: async_task::raw::RawTask<F,T,S,M>::run
10: window::spawn::SpawnQueue::run
11: <window::os::wayland::connection::WaylandConnection as window::connection::ConnectionOps>::run_message_loop
12: wezterm_gui::run_terminal_gui
13: wezterm_gui::main
14: std::sys_common::backtrace::__rust_begin_short_backtrace
15: std::rt::lang_start::{{closure}}
16: std::rt::lang_start_internal
17: std::rt::lang_start
18: __libc_start_call_main
19: __libc_start_main@GLIBC_2.2.5
20: _start
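For what it's worth, the panic frames (core::option::expect_failed inside WaylandWindowInner::do_paint) look like an .expect() on window state that is already gone by the time a queued paint runs for the just-closed window. Below is a minimal, purely illustrative sketch of that failure pattern; the type and field names are hypothetical and this is not wezterm's actual code.

// Illustration only: an Option that the paint path assumes is Some is None
// once the window has been torn down, and .expect() aborts the whole process.
struct WindowInner {
    // Hypothetical field standing in for the Wayland surface/frame state.
    surface: Option<String>,
}

impl WindowInner {
    fn do_paint(&mut self) {
        // If the surface was dropped when the window closed, this expect()
        // hits core::option::expect_failed, matching frame 6 of the backtrace.
        let surface = self.surface.as_ref().expect("!?");
        println!("painting to {surface}");
    }
}

fn main() {
    let mut closed_window = WindowInner { surface: None };
    // A stale paint callback still runs after the window state is gone: panic.
    closed_window.do_paint();
}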
Note that this error only occurs after the most recent Wayland changes, which end with a435606975818e46794ccf5e2799ff3e82498363. That commit panics immediately for me, however; I believe that was fixed in 32f5d1ca08a2446131eed70051565bd00c5c9da9.
As a note, I also get an info message saying that the new window was spawned via the existing GUI instance, which would explain why all current windows close when one of them panics.
18:03:20.370 INFO wezterm_gui > Spawned your command via the existing GUI instance. Use wezterm start --always-new-process if you do not want this behavior. Result=SpawnResponse { tab_id: 2, pane_id: 5, window_id: 2, size: TerminalSize { rows: 24, cols: 80, pixel_width: 640, pixel_height: 384, dpi: 0 } }
When using --always-new-process, windows no longer close when another is closed.
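For context, here is a minimal sketch (illustrative names only, not wezterm's real types) of why one shared GUI process means a single panicking per-window task ends every window at once:

// One process runs a single message loop that services tasks for every
// window. A panic inside any one window's task unwinds out of the loop and
// the process exits, taking every window it was hosting with it.
type Task = Box<dyn FnOnce()>;

fn run_message_loop(queue: Vec<Task>) {
    for task in queue {
        task(); // no catch_unwind: one panicking task ends everything
    }
}

fn main() {
    let queue: Vec<Task> = vec![
        Box::new(|| println!("paint window 1")),
        Box::new(|| panic!("!?")), // the closed window's stale paint task
        Box::new(|| println!("paint window 2 (never runs)")),
    ];
    run_message_loop(queue);
}

With --always-new-process, each invocation gets its own process instead, which would explain why a panic then only takes down its own window.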
Anything else?
No response
same bug on sway
Awesome, glad that I'm not the only one!
Can confirm that 2fee694bccea2b383b6b09b58ee88a5225f722db fixes this issue for me.
also solved for me