TTY switch leads to FATAL error when unloading nvidia modules
Hi guys, I always get the same error when I switch between my "nvidia-xrun openbox-session" session and my standard Budgie session (tty7). Whenever I switch to Budgie with Ctrl+Alt+F7 while nvidia-xrun with openbox is running, and then switch back to the xrun/openbox session, logging out of openbox leads to `modprobe: FATAL: Module nvidia_drm is in use`.
The same happens for the modules nvidia_modeset and, of course, nvidia itself.
`lsmod | grep nvidia` gives:

```
nvidia_drm             57344  1
nvidia_modeset       1089536  1 nvidia_drm
nvidia              17637376  1 nvidia_modeset
ipmi_msghandler        65536  2 ipmi_devintf,nvidia
drm_kms_helper        208896  2 nvidia_drm,i915
drm                   499712 11 drm_kms_helper,nvidia_drm,i915
```
Manually unloading the nvidia modules leads to the same FATAL error.
tty6 is stuck.
Asus UX303LB here with a 940M/Intel.
God damn, I'll never ever buy a freakin Optimus laptop again. The extra battery life won't make up for the lifetime I needed to invest into this freakin horror machine.

Huuulp :(
Can you try to set modeset=0 in /etc/default/nvidia-xrun and see if it helps?
Hi michelesr, again, I'm sorry; I'm not sure what exactly you want me to do, or where to put it.
Edit the /etc/default/nvidia-xrun config file and replace modeset=1 with modeset=0 in this line: https://github.com/Witko/nvidia-xrun/blob/master/config/nvidia-xrun#L26
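If you prefer doing the edit from the shell, a `sed` one-liner does it; sketched here on a temporary copy, since the real file is `/etc/default/nvidia-xrun` and editing it requires root (`sudo sed -i ...`):

```shell
# Sketch: flip modeset=1 to modeset=0 in the nvidia-xrun config.
# Demonstrated on a temporary copy; on a real system the target is
# /etc/default/nvidia-xrun and needs root.
cfg=$(mktemp)
printf 'modeset=1\n' > "$cfg"
sed -i 's/^modeset=1$/modeset=0/' "$cfg"
cat "$cfg"   # -> modeset=0
rm -f "$cfg"
```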
So far it seems to work. I will test it over the next days. Thank you!
Hi, I can confirm after some basic testing that michelesr's solution works for me. I had this similar issue with modules being unable to unload. Should this be added to troubleshooting steps in the readme?
Can confirm. Had a very similar issue (save for the switching ttys bit) and setting modeset=0 for nvidia_drm solved it. Might be a good idea to place this in a "Troubleshooting" section in the readme.
The problem I was facing was that nvidia-xrun would oftentimes not start at all but exit immediately, leaving all the nvidia* modules loaded afterwards. There was no way to unload them, as nvidia_drm was permanently marked as being in use by something (not in the modules list). The only option was to reboot (which would also sometimes hang). Other times it would work just fine (as far as I could tell). Here are some kernel and stdout/stderr logs for the successful and failed sessions.
kernel.fail.log
kernel.ok.log
std.fail.log
std.ok.log
Note that in the kernel.ok.log I manually quit the session at 21:46:14
Seeing that nvidia-xrun isn't doing much here, I'd guess it's an upstream bug. Should it be reported somewhere else? Or is the modesetting implementation known to be buggy for the nvidia driver?
Run `nvidia-smi` to check whether any process is using the graphics card before unloading the modules. Kill those processes and try again.
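As a quick extra check before `modprobe -r`, you can also look at a module's reference count in sysfs; a small sketch (the helper name is mine, not part of nvidia-xrun):

```shell
# Sketch: a module whose refcnt is > 0 is still held by something,
# and modprobe -r will fail on it. Helper name is illustrative.
module_busy() {
    ref="/sys/module/$1/refcnt"
    [ -r "$ref" ] && [ "$(cat "$ref")" -gt 0 ]
}

if module_busy nvidia_drm; then
    echo "nvidia_drm is still in use; check nvidia-smi for leftover processes"
fi
```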
You can also modify the script so that, before trying to `sudo tee` etc., it checks that all the nvidia modules are unloaded, with something like:

```bash
if [[ -n "$(lsmod | grep nvidia)" ]]; then
    echo "error message"
    return 0
fi
```
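A slightly more robust form of that guard reads `/proc/modules` directly and anchors the match on the line start, so it doesn't trip over the "used by" column (e.g. the `ipmi_msghandler ... nvidia` line); the function name and the parameter are illustrative, chosen so it can be exercised against a file copy:

```shell
# Sketch: succeed (exit 0) if any nvidia* module is still loaded.
# Takes the modules listing as a parameter for testability; the real
# script would pass /proc/modules.
nvidia_still_loaded() {
    grep -q '^nvidia' "$1"
}

if nvidia_still_loaded /proc/modules; then
    echo "error: nvidia modules are still loaded" >&2
fi
```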
I think it would be better if the script were split up, so that the user can manually unload the graphics card. Currently these things are done automatically, which leaves space for a multitude of unexpected bugs.
I have also written a script which does nothing related to X; it simply loads and unloads the graphics card: https://github.com/noxultranonerit/SimpleScripts/blob/master/nvidia_gpu
Running a new X session is just another thing, and it should be easy for anyone who has successfully installed and configured an X window system on Arch.
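For reference, the core of such a manual load/unload flow is small. A sketch, where the `gpu` function name and the bbswitch path are assumptions for illustration (the linked script may differ):

```shell
# Sketch of a manual GPU on/off toggle. Assumes bbswitch exposes
# /proc/acpi/bbswitch for power control; adjust for your setup.
gpu() {
    case "$1" in
        on)
            echo ON | sudo tee /proc/acpi/bbswitch >/dev/null   # power card up
            sudo modprobe -a nvidia nvidia_modeset nvidia_drm   # load module stack
            ;;
        off)
            sudo modprobe -r nvidia_drm nvidia_modeset nvidia   # unload in reverse order
            echo OFF | sudo tee /proc/acpi/bbswitch >/dev/null  # power card down
            ;;
        *)
            echo "usage: gpu on|off" >&2
            return 1
            ;;
    esac
}
```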