flare-vm

Port virtualbox scripts to VBoxManage CLI

stevemk14ebr opened this issue 1 year ago · 1 comment

This ports the scripts to the VBoxManage CLI; the logic is otherwise identical, and errors are handled gracefully for the most part. Output:

stepheneckels@flarevm-build-2:~/source/repos/flare-vm$ python3 virtualbox/vbox-export-snapshots.py 
Starting operations on FLARE-VM
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is already shut down (state: poweroff).
Restored 'FLARE-VM'
Found existing hostonlyif vboxnet0
Verified hostonly nic configuration correct
Power cycling before export...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not running (state: poweroff). Starting VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} started.
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not powered off. Shutting down VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is shut down (status: poweroff).
Power cycling done.
Exporting /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.dynamic.ova (this will take some time, go for an 🍦!)
Exported /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.dynamic.ova! 🎉
All operations on FLARE-VM successful ✅
Starting operations on FLARE-VM.full
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is already shut down (state: poweroff).
Restored 'FLARE-VM.full'
Found existing hostonlyif vboxnet0
Changed nic1 to hostonly
Verified hostonly nic configuration correct
Power cycling before export...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not running (state: poweroff). Starting VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} started.
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not powered off. Shutting down VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is shut down (status: poweroff).
Power cycling done.
Exporting /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.full.dynamic.ova (this will take some time, go for an 🍦!)
Exported /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.full.dynamic.ova! 🎉
All operations on FLARE-VM.full successful ✅
Starting operations on FLARE-VM.EDU
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is already shut down (state: poweroff).
Restored 'FLARE-VM.EDU'
Found existing hostonlyif vboxnet0
Changed nic1 to hostonly
Verified hostonly nic configuration correct
Power cycling before export...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not running (state: poweroff). Starting VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} started.
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is not powered off. Shutting down VM...
VM {b76d628b-737f-40a3-9a16-c5f66ad2cfcc} is shut down (status: poweroff).
Power cycling done.
Exporting /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.EDU.ova (this will take some time, go for an 🍦!)
Exported /usr/local/google/home/stepheneckels/EXPORTED VMS/FLARE-VM.20241009.EDU.ova! 🎉
All operations on FLARE-VM.EDU successful ✅
Done. Exiting...

stevemk14ebr · Oct 09 '24 13:10

> For example, it seems it is not possible to access the maximum number of adapters, which would allow us to write simpler code as in the previous version.

We can: the `showvminfo` command lists all 8 adapters (the maximum), and any unset adapter has the value `none`. The code doesn't need to check the maximum number of adapters because the command lists all of them, even the unset ones, so we always loop over all 8 adapters.
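As a rough sketch of that approach (the `nic<N>="..."` key format follows `showvminfo --machinereadable` output; the sample excerpt and function name are illustrative, not the PR's actual code):

```python
import re

def parse_nics(machinereadable_output: str) -> dict:
    """Collect the nic1..nic8 values from `VBoxManage showvminfo --machinereadable` output.

    Unset adapters are reported with the value "none", so all 8 slots always
    appear and the code never needs to query a maximum adapter count.
    """
    nics = {}
    for line in machinereadable_output.splitlines():
        m = re.match(r'^nic(\d+)="([^"]*)"$', line)
        if m:
            nics[int(m.group(1))] = m.group(2)
    return nics

# Illustrative excerpt of the relevant lines, not a full showvminfo dump.
sample = "\n".join(['nic1="hostonly"'] + [f'nic{i}="none"' for i in range(2, 9)])
print(parse_nics(sample))
```

Looping `for i in range(1, 9)` over the resulting dict then covers every adapter, set or not.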

> What about keeping both versions, the one using the virtualbox library and the new one using VBoxManage, until we have tested and migrated everything else?

I have no issue with holding off on merging these PRs (I will send more for the other two scripts) until we are ready to drop the virtualbox package dependency entirely. I would not want to keep two versions around, though; that goes against the spirit of doing this work. While the code does appear more complex, the port was actually quite straightforward; there is just a lot of logic to parse the CLI's text output and handle errors nicely. Some things certainly differ from the virtualbox package, but nothing glaring is missing from the CLI. In the long term this should be very easy to maintain, as the CLI does not change often. More importantly, on some setups the Python .so that virtualbox uses is not built/included, and the package has been unmaintained for over a year at this point, so we should not rely on it anymore.
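A minimal sketch of the kind of CLI wrapper such a port relies on (the function names are illustrative, not the PR's actual code): build the argv list separately, run it, and turn a nonzero exit into a readable error instead of a raw `CalledProcessError`.

```python
import subprocess

def vboxmanage_cmd(subcommand: str, *args: str) -> list:
    """Build a VBoxManage invocation as an argv list (no shell quoting pitfalls)."""
    return ["VBoxManage", subcommand, *args]

def run_checked(cmd) -> str:
    """Run a command and surface a readable error on failure.

    Capturing stderr lets callers report *why* VBoxManage failed instead of
    crashing with an opaque subprocess exception.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            f"{' '.join(cmd)} failed (rc={proc.returncode}): {proc.stderr.strip()}"
        )
    return proc.stdout

# Example invocation (requires VBoxManage on PATH, so not executed here):
# info = run_checked(vboxmanage_cmd("showvminfo", "FLARE-VM", "--machinereadable"))
```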

stevemk14ebr · Oct 10 '24 15:10

I did some more testing. Exporting a VM after setting the hostonly adapter (with either the virtualbox API or the VBoxManage CLI) fails when the VM's hostonly adapter has never been used, because exporting does not set the adapter name. It does not appear possible to fix this via the API/VBoxManage CLI, and I still think it is a VirtualBox bug, as reported in https://www.virtualbox.org/ticket/22158. But in our case, where we always use the same VM to export several snapshots, we can ensure the hostonly adapter has a name before creating the snapshots: set the network to hostonly (save the settings) and then back to NAT (save the settings again). This ensures the hostonly adapter name is set, and exporting then works with both the virtualbox API and the VBoxManage CLI.

So the issue is not a blocker for this PR. Thanks @stevemk14ebr for working on this! This is a very unintuitive bug, and your work was very helpful in figuring out a fix. :bouquet:

Ana06 · Dec 11 '24 14:12

vbox-export-snapshots.py works perfectly, but I think there is a bug in vbox-clean-snapshots.py:

Failed to find root snapshot
Error getting snapshot children: Failed to find root snapshot EMPTY
Traceback (most recent call last):
  File "/usr/local/google/home/anamg/VM-building/vbox-clean-snapshots.py", line 36, in get_snapshot_children
    raise Exception(f"Failed to find root snapshot {snapshot_name}")
Exception: Failed to find root snapshot EMPTY

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/google/home/anamg/VM-building/vbox-clean-snapshots.py", line 126, in <module>
    main()
  File "/usr/local/google/home/anamg/VM-building/vbox-clean-snapshots.py", line 122, in main
    delete_snapshot_and_children(args.vm_name, args.root_snapshot, args.protected_snapshots)
  File "/usr/local/google/home/anamg/VM-building/vbox-clean-snapshots.py", line 57, in delete_snapshot_and_children
    TO_DELETE = get_snapshot_children(vm_name, snapshot_name, protected_snapshots)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/google/home/anamg/VM-building/vbox-clean-snapshots.py", line 54, in get_snapshot_children
    raise Exception(f"Could not get snapshot children for '{vm_name}'")
Exception: Could not get snapshot children for 'REMnux.testing'
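For what it's worth, the "During handling of the above exception, another exception occurred" chaining in that traceback comes from raising a new exception inside an `except` block. A minimal reproduction (the function bodies are illustrative stand-ins, not the script's actual code):

```python
def find_root_snapshot(snapshot_name):
    # Stand-in for the lookup that fails first in the traceback above.
    raise Exception(f"Failed to find root snapshot {snapshot_name}")

def get_snapshot_children(vm_name, snapshot_name):
    try:
        return find_root_snapshot(snapshot_name)
    except Exception:
        # Raising a new exception here is what produces Python's implicit
        # chaining line: "During handling of the above exception, another
        # exception occurred". The original error stays on e.__context__.
        raise Exception(f"Could not get snapshot children for '{vm_name}'")
```

So the second traceback wraps, rather than replaces, the "Failed to find root snapshot EMPTY" error.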

Ana06 · Dec 17 '24 19:12

@Ana06 can you give some more information on your snapshot layout and names? Are you running with protected snapshots set? `VBoxManage showvminfo <name> --machinereadable` should give enough information to debug.
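For debugging, a rough sketch of how the snapshot tree could be recovered from that output (the `SnapshotName`/`SnapshotName-1-...` key scheme follows the `--machinereadable` format, where nested suffixes encode the tree; the sample dump and helper names are illustrative):

```python
import re

def snapshot_nodes(machinereadable_output: str) -> dict:
    """Map --machinereadable snapshot node paths to snapshot names.

    The root snapshot is reported as SnapshotName="...", its children as
    SnapshotName-1, SnapshotName-2, ..., grandchildren as SnapshotName-1-1, etc.
    """
    nodes = {}
    for line in machinereadable_output.splitlines():
        m = re.match(r'^SnapshotName((?:-\d+)*)="([^"]*)"$', line)
        if m:
            nodes[m.group(1) or "root"] = m.group(2)
    return nodes

def descendants_of(nodes: dict, name: str) -> list:
    """Return the names of all snapshots below the one called `name`."""
    for path, node_name in nodes.items():
        if node_name == name:
            prefix = "" if path == "root" else path
            return [n for p, n in nodes.items()
                    if p not in ("root", path) and p.startswith(prefix + "-")]
    raise Exception(f"Failed to find root snapshot {name}")

# Illustrative dump excerpt for a tree EMPTY -> FLARE-VM -> FLARE-VM.full.
sample = "\n".join([
    'SnapshotName="EMPTY"',
    'SnapshotName-1="FLARE-VM"',
    'SnapshotName-1-1="FLARE-VM.full"',
])
```

Comparing such a parsed tree against the name passed as the root snapshot should show whether the "Failed to find root snapshot EMPTY" error is a parsing problem or a genuinely missing snapshot.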

stevemk14ebr · Jan 02 '25 15:01