kubevirtci
gocli: provision: tell qemu to avoid rebooting
For some reason, a VM shutdown sometimes leads to a reboot instead, breaking the provisioning process. That can happen if, for example, the VM looks like it crashed instead of shutting down cleanly. Telling qemu to never reboot should fix the problem and avoid many provisioning failures.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from jean-edouard. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
/cc @dhiller
/retest
This doesn't seem to work all that well on test lanes...
/retest-required
@jean-edouard I don't get what this PR wants to achieve? Can you elaborate?
Yes, hi! I noticed that, both on the check-provision lanes and on my box, the provisioning script was flaky, often hanging on "waiting for the node to stop".
So I investigated, and found out that while we issued a "shutdown now" command, the VM rebooted instead, causing the VM to never stop.
Since I couldn't figure out why the VM decided to reboot instead of shutting down, I thought about telling qemu to never reboot instead.
And that completely fixed the provisioning script on my box.
However, the check-provision lanes still fail on this PR, at the same place and maybe even more often than usual, which has me quite confused...
This PR might need some tweaking, I'm not sure, but I do believe we need this fix, which is why I left it open.
Edit: hmm, it looks happy now :sweat_smile:
Edit2: eh, 2 out of 3...
@jean-edouard: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name | Commit | Details | Required | Rerun command
---|---|---|---|---
check-provision-k8s-1.27 | 2347db41e8b021df8ab86e2eff89e34dc311670d | link | true | /test check-provision-k8s-1.27
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Please see https://github.com/kubevirt/kubevirtci/pull/1049
fwiw https://github.com/kubevirt/kubevirtci/pull/1049#issuecomment-1834306634
/cc @brianmcarey
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.