webvirtcloud
How does NoVNC reach QEMU's VNC?
Hello, I have a small question regarding how NoVNC establishes the connection to the hypervisor when:
- the QEMU/KVM hypervisor is on another machine
- webvirtcloud connects to the hypervisor through SSH
Will the VNC connection between the hypervisor and NoVNC be tunneled through SSH, or will a direct connection on port 59xx be established?
In the case of a direct connection, would it be possible to tunnel it inside SSH?
Thanks for the help
I'm asking this question because I'm experiencing some weird mixed results.
I would like to run webvirtcloud from a Docker container, and I had great success using mplx/docker-webvirtcloud.
The issue is that this image is quite old and no longer seems to be updated.
I then tried to install webvirtcloud following the README on Debian 10 on a different machine, and I was not able to open the console when the VNC setting is set to listen on localhost (it works when set to 0.0.0.0). The weird thing is that NoVNC from mplx/docker-webvirtcloud works when the setting is 127.0.0.1, even though the Docker IP is 172.x.0.2 and it connects to the hypervisor using 172.x.0.1, so it should not work there either... If I set VNC to listen on 172.x.0.1, it works as well.
EDIT: Another weird thing: if I connect via SSH using my public IP in mplx/docker-webvirtcloud, it works as well. I'm losing my mind a bit.
Hi, from my experience with multiple hosts I can say it is trying to make a direct connection on port 59xx.
@W0rmsy hi, if you look at the Dockerfile you can see there are two extra patches. These patches are Wsproxy and forwardssl. I haven't looked at them, but I am sure the behaviour is modified by them.
@catborise Hi, I've looked at the two patches and, if I understood them well, their goal is to allow reverse proxying of NoVNC through a single port for the client and remove the need for port 6080. I've seen nothing in the patches that modifies the connection between NoVNC and the hypervisor.
Can you please use this Docker install guide? (It does not contain the wsproxy/forward-SSL patches; maybe we can add mplx-like Docker instructions later.) https://github.com/retspen/webvirtcloud/wiki/Docker-Installation
@catborise, thanks for your support, but the question here was more about how the VNC connection is handled between the webvirtcloud server and the hypervisor in the case of an SSH connection.
Nevertheless, I'll try the wiki guide and apply the patch. Maybe it's due to some weird NAT between Docker and the host that allows the VNC connection... but even in that case I cannot explain one situation.
- Hypervisor internal IP: 192.168.10.10
- Hypervisor public IP: <public_ip>
- Hypervisor gateway: 192.168.10.1
Docker is running on the hypervisor
- webvirtcloud-docker ip: 172.23.0.2
- webvirtcloud-docker gateway: 172.23.0.1
And the second webvirtcloud instance, installed on another machine:
- webvirtcloud-new ip: 192.168.10.20
- webvirtcloud-new gateway: 192.168.10.1
All the traffic arriving at the public_ip is forwarded to 192.168.10.10 (keeping the real source IP). All outgoing traffic (toward the internet) from 192.168.10.10 is NATed by the router, so the source IP = public_ip. (Nothing special here, default behavior of a home router/modem.)
On the docker webvirtcloud I've added 2 computes:
- Compute 1: SSH/172.23.0.1
- Compute 2: SSH/<public_ip>
On the new webvirtcloud I've added 2 computes:
- Compute 1: SSH/192.168.10.10
- Compute 2: SSH/<public_ip>
Finally, on the hypervisor I have 4 VMs with the following VNC settings:
- VM1: listen 0.0.0.0
- VM2: listen 127.0.0.1
- VM3: listen 172.23.0.1
- VM4: listen 192.168.10.10
And now the tests:
- VM1: NoVNC works from all four computes -- makes sense, but I don't want to expose VNC on the internet, so I don't like that solution.
- VM2: NoVNC works from docker webvirtcloud computes 1 and 2 -- weird: the source IP with compute 1 should be 172.23.0.2, and from compute 2 it should be <public_ip> (since source IP = private and destination = public, the router will do a double NAT and make the connection using its public IP). Not working at all from the new webvirtcloud.
- VM3: only works from docker webvirtcloud compute 1 -- makes sense, and makes the fact that VM2 works even weirder.
- VM4: works from new webvirtcloud compute 1 -- makes sense -- and works from compute 2 on both webvirtclouds. That's weird, but I can imagine that the router uses its internal IP instead of the public IP (need to check with tcpdump, because usually it uses its public IP...).
So in the end, it's as if the docker webvirtcloud is capable of both making direct connections to the hypervisor and tunneled connections through SSH (when the listen address is 127.0.0.1). This is exactly what I want, but I don't know if it's a feature that works in the Docker image and not in the new webvirtcloud, or if it's due to some black magic and weird routing/NATing between the host and Docker.
Thanks for the help :)
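To make the tests above less guesswork, a plain TCP probe run from the webvirtcloud host or from inside the container shows which VNC listen addresses are actually reachable from each vantage point. This is a standalone sketch, not part of webvirtcloud; the hosts and ports are the ones from my setup:

```python
# Standalone reachability probe: returns whether a TCP connection to a
# given VNC endpoint succeeds from wherever this script runs.
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_connect("172.23.0.1", 5900)` run inside the container tells you whether compute 1 could reach VM3's display directly, independently of anything novncd does.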
There is nothing special between VNC and SSH; there is no tunnel. novncd uses port 6080 for VNC. The VNC connection between noVNC and the VM instance is handled by the "websocket" proxy, which is https://github.com/novnc/websockify.
webvirtcloud only uses libvirt capabilities, nothing more: libvirt + noVNC. The pseudocode is very simple: connect with virsh over "qemu+ssh://" or "qemu+tcp://" to the host, get the VM's VNC IP address and port, open the connection with websockify, and draw with HTML5.
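The lookup step described above can be sketched like this: given a domain's libvirt XML (which webvirtcloud would obtain over the qemu+ssh:// or qemu+tcp:// connection via `dom.XMLDesc()`), extract the address and port the guest's VNC server listens on. The inlined sample domain is a hypothetical minimal example, just for illustration:

```python
# Sketch of the VNC-endpoint lookup: parse the <graphics type='vnc'> element
# out of a libvirt domain XML. In webvirtcloud the XML comes from libvirt;
# here a minimal hypothetical domain is inlined for illustration.
import xml.etree.ElementTree as ET

def vnc_endpoint(domain_xml):
    """Return (listen_address, port) of the domain's VNC graphics device."""
    root = ET.fromstring(domain_xml)
    gfx = root.find("./devices/graphics[@type='vnc']")
    if gfx is None:
        raise ValueError("domain has no VNC graphics device")
    return gfx.get("listen"), int(gfx.get("port"))

SAMPLE = """<domain type='kvm'>
  <name>vm2</name>
  <devices>
    <graphics type='vnc' port='5901' autoport='yes' listen='127.0.0.1'/>
  </devices>
</domain>"""

print(vnc_endpoint(SAMPLE))  # ('127.0.0.1', 5901)
```

The `listen` attribute is exactly the setting toggled between 0.0.0.0 and 127.0.0.1 in the tests above: it decides whether the returned endpoint is reachable directly or only from the hypervisor itself.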
I think it is Docker network and iptables rule magic. Docker and its host are effectively the same thing: if you set the VM to listen on 127.0.0.1, the container is already on 127.0.0.1, so it can access it. (If you install webvirtcloud on a VM, it cannot reach the host's local address, only its own.)
Can you reach a VNC console (listening on 127.0.0.1) from the other host?
Hi, sorry for the late reply, I did not have time to do it sooner.
Well, I'm losing my mind. I installed a new Docker image following the tutorial, manually applied the patches from mplx, and added the missing Docker instructions (the startinit.sh that sets the right ports and fixes a few permissions). The only difference is the supervisor package, which is apparently not part of the new image, so I left it out. I then added the reverse proxy in my Apache the same way I did for the other container.
I can successfully log in on the new container and access the VNC of the VMs that listen on 0.0.0.0, but it still does not work with the ones listening on 127.0.0.1, so I cannot explain why the old container can still access them...
How can I know which IP novncd is using to connect to the VMs? Are there logs somewhere?
@catborise you said it's calling virsh, but virsh is not installed inside the container, and if I look at the running processes I can see a lot of tunnels made with netcat O_o
```
ssh -l [username] -- [host] sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
```
Well, I've tried installing the latest version in the working container by backing up and erasing /srv/webvirtcloud and reinstalling everything with a git pull. I then patched the required files and restarted the container. NoVNC does not work with listen on 127.0.0.1. I then restored the backup by simply erasing the new content, copying the backed-up one to /srv/webvirtcloud, and restarting the container. And now it's working again...
So the mplx container does not work because of black magic done by Docker and iptables; something actually changed between commit a9a2e1167bfae652186e905d6b226c75022b45e9 and the current one.
@catborise, @retspen can you enlighten me as to what might have changed so I can redo the change on the current commit?
Thanks :)
Hi,
Any new ideas on this topic?
Thanks :)
@W0rmsy :) my English is not good enough to explain it in depth; that's why I did not answer. What I can say is that two components related to VNC were updated:
- Websockify
- Novnc
Websockify is responsible for getting the VNC details from libvirt and sending various details to noVNC.
At some point I could build a setup to understand your situation, but for now remote working makes that hard. We may look at it after the corona lockdown.
@catborise Thanks for the update :) OK, I'll wait for the end of the lockdown for help. In the meantime I'll try to set up a different system with the same issue and give you full access to it if that can help.
I have the same issue. After the update, the SSH tunnel for the VNC connection fails.
I have debugged the novnc script: the connection starts, but then comes the error message: "Target Closed".
I will do some research and update you. Starting to install the environment.
I found the problem; I will create a commit to solve it soon. @W0rmsy that will solve your problem as well.
I'm so happy to read that ❤️ Thanks @catborise I cannot wait to test all the new upgrades that were recently made to the project 😃
By the way, as you said, setting the listening IP to 0.0.0.0 can cause a security problem. Maybe the listening address could be the webvirtcloud IP; that would be more secure than 0.0.0.0.
The thing is, I would like to avoid relying on QEMU's VNC password security. If it follows the same policy as TightVNC (8-character password and no brute-force protection), it is not great to have that exposed on the internet. I could listen on 127.0.0.1 so only the hypervisor itself can connect and an SSH tunnel (or proxy) is needed; this is my preferred way, as it's the most secure.
I could listen on the private local IP, on the Docker interface, or on a VPN IP. This is an okay solution as it's not directly exposed to the internet, but if another device connected to my network or a container is compromised, this could give access to the VMs through VNC. It also requires another tool to access the VMs.
I like the fact that webvirtcloud is able to tunnel the VNC connection through SSH: I can allow VNC only from localhost, limit and monitor the SSH account, and have a secured connection all the way. 2FA + SSL certificate on the reverse proxy between the computer and the web server, then an SSH tunnel between the web server and the VMs. This only requires a browser, which is good when I'm not on my own computer :)
Hello, any news on when the fix will be committed? Thanks a lot :)
Remote access is not possible if libvirt's VNC listens only on localhost. I researched several resources; there is no way to do it other than tunneling. Securing VNC is required in some environments (personally, I am using a VPN to access webvirtcloud), but if you look at these examples you will see that they always use certs, passwords, and keys: https://www.freeipa.org/page/Libvirt_with_VNC_Consoles
@catborise the only problem is that the SSH tunnel (the function already exists) is broken after the update.
https://github.com/retspen/webvirtcloud/blob/master/console/novncd#L188
No new function is needed.
W0rmsy has the same problem; he just described it awkwardly.
Yeah, I made it harder to understand with too much blabla.
To summarize: I would just like webvirtcloud to create a tunnel when QEMU's VNC listens on 127.0.0.1. This was working in build https://github.com/retspen/webvirtcloud/commit/a9a2e1167bfae652186e905d6b226c75022b45e9
and is not working anymore. Thanks @QDaniel for pointing out where the issue is.
:) I understand the problem and the situation now :) As I said before, my English is not very good; long texts make me faint :) I like steps when problems are being described. For this situation, with Daniel's help, I understand it clearly.
I created a setup to see what is going on... There is an interesting situation: if we listen on 127.0.0.1, then novnc creates an SSH tunnel with this command:

```
ssh -p 22 -l username destination_host_ip sh -c 'nc -q 2>&1 | grep "requires an argument" >/dev/null;if [ $? -eq 0 ] ; then CMD="nc -q 0 127.0.0.1 5900";else CMD="nc 127.0.0.1 5900";fi;eval "$CMD";'
```

Then the tunnel closes with the message "Target closed Connection"; looking at the target, it says "Client closed connection".
It is absurd: no error, no problem, but the connection closes. I will continue to investigate...
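The tunnel established by that command can be sketched in a few lines: spawn a process whose stdin/stdout are bridged to one end of a local socket pair, so anything written to the local socket ends up at the remote nc (and thus at the VNC server). This is a simplified illustration of the approach, not webvirtcloud's actual tunnel.py:

```python
# Simplified stdio-bridge tunnel: spawn a command and return a local socket
# whose bytes are relayed through the command's stdin/stdout. With cmd set to
# ["ssh", "-l", "user", "host", "nc", "127.0.0.1", "5900"] this mirrors the
# tunnel command shown above (user/host are placeholders).
import socket
import subprocess

def open_stdio_tunnel(cmd):
    """Start cmd with its stdio bridged to one end of a socketpair."""
    local_sock, remote_sock = socket.socketpair()
    proc = subprocess.Popen(cmd, stdin=remote_sock, stdout=remote_sock,
                            close_fds=True)
    remote_sock.close()  # the parent keeps only the local end
    return local_sock, proc
```

Whatever reads and writes `local_sock` (websockify, in this case) is then transparently talking to the remote 127.0.0.1:5900; if the remote nc exits, the local socket just sees EOF, which is consistent with the "Target closed" symptom.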
@W0rmsy I tested many different approaches; the possibilities are narrowing down.
It is related to tunnel.py: that file was written for Python 2, and after the migration to Python 3 the tunnel stopped working properly.
It works up to build fd9465c7695d12e3335c2a7519e0c9c457f061a9.
The problem is not related to websockify, noVNC, or app changes, only to the tunnel.py file. I am working it out...
@W0rmsy I finally found the SSH tunnel problem. I will create a commit. It was a very absurd and well-hidden error; it took a real struggle.
Thank you so much 😘😘 I'll test all those amazing new features as soon as the commit is merged. Thanks again! this project is awesome 😊
Hello,
First of all, thanks, the new UI is sick, I love it :) I tried to install the latest commit in a new Docker container and enable the reverse proxy feature so everything is on the same port. I ran into a few issues that I'll try to explain step by step.
First the setup:
- I installed a Docker container following the wiki (https://github.com/retspen/webvirtcloud/wiki/Docker-Installation-&-Update)
- I ensured nginx was correctly configured to reverse-proxy the connection to novncd
- I ensured the file settings.py has the correct "WS_PUBLIC_PORT" (I put 443 here as the container will be reverse-proxied with SSL by Apache on the host machine)
- The nginx inside the container exposes everything on port 80 (not very important), and Docker maps the internal port 80 to 127.0.0.1:8888 (not very important either)
- The Apache outside the container reverse-proxies to 127.0.0.1:8888 and exposes that on SSL/443 (it also handles the websocket if the called URL is /novncd/)
I can access webvirtcloud on my public IP / HTTPS ✅
If I try to open the console, it doesn't work, with the following error:

```
[error] 47#47: *140 connect() failed (111: Connection refused) while connecting to upstream, client: 172.28.0.1, server: , request: "GET /novncd// HTTP/1.1", upstream: "http://127.0.0.1:6080/novncd//", host: "<hostname>"
```
I checked the log file /var/log/novncd.log and found:
```
Traceback (most recent call last):
  File "/srv/webvirtcloud/console/novncd", line 278, in <module>
    server.start_server()
  File "/srv/webvirtcloud/venv/lib/python3.6/site-packages/websockify/websockifyserver.py", line 745, in start_server
    tcp_keepintvl=self.tcp_keepintvl)
  File "/srv/webvirtcloud/venv/lib/python3.6/site-packages/websockify/websockifyserver.py", line 491, in socket
    sock.bind(addrs[0][4])
PermissionError: [Errno 13] Permission denied
```
Digging further, I can see that the variable addrs[0][4] contains ('0.0.0.0', 443).
- Since novncd is launched as www-data, it can't bind to a port lower than 1024.
- websockifyserver.py should bind port 6080, not 443.
I did a very ugly thing: I edited the file /srv/webvirtcloud/venv/lib/python3.6/site-packages/websockify/websockifyserver.py directly and forced the port variable to 6080.
- Now novncd works and I can open the light console ✅
- The light console asks for the VNC password ==> is this expected? Before, it was passed automatically.
- The full console is not working because it tries to open the path /websocketify/ instead of /novncd/. If I change the path to /novncd/ in the advanced settings, it works.
To summarize:
- By default it's not working because /srv/webvirtcloud/venv/lib/python3.6/site-packages/websockify/websockifyserver.py is called with the wrong port, maybe the value of WS_PUBLIC_PORT. => Is there a way to change that without hardcoding the port directly in /srv/webvirtcloud/venv/lib/python3.6/site-packages/websockify/websockifyserver.py?
- Is it expected that the light console asks for the VNC password instead of taking it automatically?
- Is it expected that the full console asks for the VNC password instead of taking it automatically?
- The full console uses the path /websocketify/ by default instead of /novncd/. I already had this issue on my first install, if I remember well; I need to find again where to change that.
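For context on the port mix-up: the PermissionError shows novncd trying to bind 0.0.0.0:443, i.e. the public-facing port rather than its own local one. If the settings split the two, the intent would be roughly the following sketch. Only WS_PUBLIC_PORT is confirmed in this thread; WS_HOST and WS_PORT are my assumed names for the companion settings, so check settings.py for the real ones:

```python
# Sketch of the intended split in webvirtcloud's settings.py.
# NOTE: only WS_PUBLIC_PORT appears in this thread; WS_HOST and WS_PORT
# are hypothetical names for the companion settings.

WS_HOST = "0.0.0.0"    # address novncd itself binds
WS_PORT = 6080         # port novncd itself binds (>1024, so www-data may bind it)

WS_PUBLIC_PORT = 443   # port the browser is told to connect to; the external
                       # reverse proxy terminates SSL here and forwards the
                       # websocket traffic to WS_HOST:WS_PORT
```

Under this split, novncd should bind WS_HOST:WS_PORT and only embed WS_PUBLIC_PORT in the URL handed to the noVNC client; binding WS_PUBLIC_PORT instead would reproduce exactly the PermissionError above.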
Thanks again for the excellent work and the help, @catborise; the SSH tunnel is now working fine :-)
I will make the console password prompt optional. Some users need it to be asked, some don't; it is better to make it optional.
I do not understand why you are using both Apache and nginx; couldn't you do the SSL activation with nginx? You should not modify websockifyserver.py. I will look at it; there must be a glitch.
Can you change the app theme? (Some users have problems with themes.)
I'm using Apache as the main reverse proxy for all my websites, services, and containers. As I have only a single public IP and want to use a signed SSL certificate (Let's Encrypt), I have no choice but to have a single front-facing web server. nginx is built into the Docker image to serve the static content, proxy the Django app, and proxy novncd. I could bypass nginx and do it directly from Apache, but I like the fact that the container exposes only a single port; it's easier to run multiple instances as I just have to handle one port per instance.
Regarding the theme, I tried a few of them without any issues; I just had to do a Ctrl+F5 to reload the new theme and it works fine. One complaint about the theme: it's super nice to be able to order by name, status, vCPU, etc. in the instance view, but the header should not move ;) https://eof.li/ouu
I am using noVNC to access the console of a KVM guest machine with SPICE graphics, but I can't send any keys to the guest. There is no problem with a KVM guest machine with VNC graphics. Could anyone tell me the cause?