microvm.nix
Enter running machine as systemd service
Is it possible to connect a terminal's stdin/stdout to a deployed machine, to inspect what's going on there?
Having the same issue. Maybe an sshd service would work, but is there an easier way (like a tty directly)?
If you look into the git history, this existed as bin/microvm-console before, using a pty instance, just for qemu and cloud-hypervisor. I dropped it because I wasn't too happy with it.
I am happy to have consoles/serials configurable, but with sensible defaults. I cannot give an ETA for when I'll have time for that.
Also, I am delighted that @Mic92 has updated https://github.com/Mic92/vmsh -- please play with that!
Thanks, but vmsh doesn't seem to work for me, not sure why :(
I think I would need more time to fix some issues with VMSH. But here are some thoughts about serial/console support in microvm.nix itself:
- Serial devices do not set TERM or the terminal size correctly. Here is a serial NixOS module that works around that: https://github.com/numtide/srvos/blob/main/nixos/common/serial.nix
- A virtio console would be ideal because it knows the terminal size and can also update it dynamically, at least at the protocol level; I don't know what the state-of-the-art hypervisors are doing.
- Another option is some vsock-based daemon that works like ssh but doesn't require any special network configuration.
- Or simply use ssh, this is what I am doing just now:
I allocate a tap interface called "management" for each VM (on the host I use mgt-$name) and allow ssh traffic from it:
{
  # Only allow ssh on internal tap devices
  networking.firewall.interfaces.management.allowedTCPPorts = [ 22 ];
  services.openssh.openFirewall = false;
}
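For reference, the per-VM tap allocation can be sketched with microvm.nix's microvm.interfaces option; the id and MAC below are made-up examples, not values from this thread:

{
  microvm.interfaces = [
    {
      # Host-side tap device name (example); the "mgt-" prefix matches the scheme above.
      type = "tap";
      id = "mgt-foo";
      mac = "02:00:00:00:00:01";
    }
  ];
}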
Then I set the link-local IPv6 address to "fe80::1" on the host and "fe80::2" in the VM.
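Sketched as plain NixOS options (the host interface name is an example; the guest interface is assumed to show up as "management", matching the firewall rule above):

# Host configuration:
{
  networking.interfaces."mgt-foo".ipv6.addresses = [
    { address = "fe80::1"; prefixLength = 64; }
  ];
}

# Guest configuration:
{
  networking.interfaces.management.ipv6.addresses = [
    { address = "fe80::2"; prefixLength = 64; }
  ];
}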
I can then use this ssh wrapper to access my machines:
{
  environment.systemPackages = [
    (pkgs.writeScriptBin "ssh-vm" ''
      #!/usr/bin/env bash
      if [[ "$#" -lt 1 ]]; then
        echo "Usage: $0 <vm-name> [ssh-args...]"
        exit 1
      fi
      vm=$1
      shift
      # We can disable host key checking because we use IPv6 link-local addresses
      # and no other VM can spoof them on this interface.
      ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "$@" "root@fe80::2%mgt-$vm"
    '')
  ];
}
This allows me to log in using the VM name:
$ ssh-vm foo
Systemd now also parses the terminal name and size from the kernel command line, but this is mainly useful for the initial terminal at boot time and not for ad-hoc ones: https://github.com/systemd/systemd/blob/6ac299e3cedb1d9eb8df88010c4994a90aa12a9a/NEWS#L144
A future version of systemd will make it easy to connect to a running VM over VSOCK: https://github.com/systemd/systemd/pull/30777 which this project can use!
I just came across this issue with my configuration. For me, @Mic92's solution with SSH over IPv6 did not work. Instead, I changed the QEMU parameters to forward /dev/ttyS0 to a Unix socket. This allows me to at least access my VMs using socat.
I have these changes in this branch: https://github.com/jim3692/microvm.nix/tree/console-in-unix-sock
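The underlying mechanism is a QEMU chardev that serves the guest's serial port on a Unix socket; roughly like this (the socket path is a made-up example, and the exact flag spelling varies between QEMU versions):

$ qemu-system-x86_64 ... \
    -chardev socket,id=console0,path=/tmp/foo-console.sock,server=on,wait=off \
    -serial chardev:console0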
I have also implemented a microvm -s <name> command, which runs socat in raw mode inside screen. I could not find an easier way to be able to leave the socat session.
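Connecting then looks roughly like this (socket path again a made-up example): raw mode with echo disabled gives a usable console, and screen provides a detach key for leaving the session:

$ screen -S foo-console socat stdio,raw,echo=0 UNIX-CONNECT:/tmp/foo-console.sock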
EDIT: My VM's IPv6 address is fe80::ff:fe00:1, not fe80::2. I am not sure how link-local addressing works, but the VM's MAC is 02:00:00:00:00:01 (fe80::ff:fe00:1 is the EUI-64 address derived from that MAC). I managed to SSH successfully using the correct IP.
I prefer waiting for ssh over vsock rather than bringing back what we had before with microvm-console for only a few hypervisors.
BTW, find your machine's link-local addresses by pinging ff02::1%$interface (that's my favourite IPv6 address).
> I prefer waiting for ssh over vsock
This is doable today.
In your host:
microvm.my-vm.vsock.cid = 1337;
In your guest:
services.openssh = {
  enable = true;
  startWhenNeeded = true;
};

systemd.sockets.sshd = {
  socketConfig = {
    ListenStream = [
      "vsock:1337:22"
    ];
  };
};
Then, to connect to your guest from your host:
$ ssh -o "ProxyCommand socat - VSOCK-CONNECT:1337:22" root@localhost
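To avoid retyping the ProxyCommand, a matching ~/.ssh/config entry works too (the host alias is arbitrary; the CID is the one configured above):

Host my-vm
  User root
  ProxyCommand socat - VSOCK-CONNECT:1337:22

After which a plain $ ssh my-vm connects through the vsock.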