boa
Unexpected update from 4.0.1 to 4.1.4-rel triggered automatically
Hi,
Today at 00:05 UTC, we had 3 servers all begin to update from version 4.0.1 to version 4.1.4-rel.
We have never experienced an event like this before, which is more than a little concerning, as it took our service offline outside our maintenance windows.
Does anyone know why this update would have been triggered automatically? We have been running version 4.0.1 since 2019.
Same, except it took my server down!
All of our servers went down as well during this time.
Not knowing what was wrong, we triggered a server reboot because the web host was offline, which interrupted the update and left the server stuck in an unusable state.
Did you find any solution? I haven't yet; I'm close to restoring a backup of the entire server.
Hi,
Check this commit:
https://github.com/omega8cc/boa/commit/bb4a29b2fcc394f6c768280f2684ee7fba319f88
Tim
Hello,
Does this unattended upgrade happen ONLY if SKYNET_MODE=YES in /root/.barracuda.cnf ?
In other words does a setting of SKYNET_MODE=NO prevent this upgrade from happening?
Thanks for your response.
Best,
Ed
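For reference, the current mode on a given server can be checked directly; a minimal sketch, assuming the file path and key name exactly as quoted in the question above (the authoritative variable name may differ in the BOA docs):

    grep SKYNET_MODE /root/.barracuda.cnf    # prints the active setting, e.g. SKYNET_MODE=YES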
You shouldn't be running BOA older than 4.1.4-rel: all previous versions have been unsupported and deprecated for a long time, as we are gearing up for BOA 5.x. Furthermore, June 30, 2022 is the last day of LTS for the current default Debian version BOA depends on, and you won't be able to run reliable updates starting tomorrow; only a major system upgrade to a newer Debian version will be supported. If you had maintained your BOA system updates regularly, you wouldn't even have noticed this last forced upgrade, which is intended to guarantee a smooth major upgrade. That major upgrade won't be forced, though, because it always requires a system reboot.
Hello, does that mean that we should do a reboot right away?
@EdNett No, this forced upgrade doesn't require a reboot; only the upcoming major (but manually run) upgrade will.
Thanks for your response. I noticed that one server updated the Aegir Octopus hostmaster instances to 7.90, but the other did not; they are left at 7.89. Should I run a manual up-head, expecting all the Octopus hostmaster instances to be upgraded to 7.90? Would you like me to file a full support issue on this?
Thanks,
Ed
@EdNett If the barracuda upgrade took a very long time because there were many components to rebuild, it could miss the following octopus upgrade, since depending on the time zone there are only 1-2 hours set between the cron tasks. No need to report it, since it's nothing unusual. Just run the octopus upgrade again manually.
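For reference, the manual Octopus upgrade mentioned here uses the same command quoted near the end of this thread; a minimal sketch, assuming the instance system user is o1 and the command is run as root:

    octopus up-head o1 force    # re-runs the Octopus/hostmaster upgrade for instance user o1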
@omega8cc I know it isn't recommended, but is there any way to prevent this update temporarily? We're only weeks out from decommissioning our Aegir server but this update is failing for me and I had to restore from backup today.
@kevinob11 It was only a one-time update to prepare existing servers for the major upgrade, which must be run manually because it can't be automated, so this kind of preparation upgrade won't be forced again. BOA will switch from opt-in to opt-out automatic upgrades starting with the next release, though.
@omega8cc ah ok, so the 4.1.4 update won't attempt again tonight?
If we do end up keeping this server we'll certainly make sure to manually upgrade to the latest version in short-order.
Thanks for the fantastic support over the years!
" BOA will switch from opt-in to opt-out automatic upgrades starting with next release, though." ... what do we need to do, to opt-out of automatic upgrades, so that this doesn't happen again?
We are on an old 4.0.1 install that will be decommissioned within the next year. Yes, we know we should have upgraded, but it's a difficult upgrade in our case and we would rather just "tough it out".
Man, this is rough. I was trying to run this upgrade manually weeks ago and I kept getting stopped and having to revert, so I held off to try to keep figuring it out. But now the auto upgrade appears to have happened, and while the sites on the server are running, we can't reach the main BOA hosting site. We're getting the same 502 Bad Gateway nginx error.
Hello Omega! Sorry, but this really concerns me, and hopefully it's a simple answer, so I will ask this again:
" BOA will switch from opt-in to opt-out automatic upgrades starting with next release, though." ... what do we need to do, to opt-out of automatic upgrades, so that this doesn't happen again?
@EarthAngelConsulting How to opt out will be explained in the first 5.x release notes and the new docs. You don't need to do anything as long as you are on 4.x, and BOA will not upgrade any server to 5.x automatically, even if you have configured weekly auto-updates in 4.x.
ok that's great! thank you Omega! :-)
With this upgrade, we lost access to the hostmaster site. I ran an upgrade tonight to try to make sure all PHP versions were at 7.4. The sites hosted on the server are available and running fine, but we cannot reach the hostmaster site; we get a 502 Bad Gateway nginx error. When I check the nginx error log, I get this result:
2022/07/11 20:15:32 [crit] 28716#0: *72 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: XX.XX.X.X, server: 0.0.0.0:XXX
2022/07/11 20:25:34 [error] 21949#0: r3.o.lencr.org could not be resolved (110: Operation timed out) while requesting certificate status, responder: r3.o.lencr.org, certificate: "/data/disk/o1/tools/le/certs/o1.MYSERVER.ORG/fullchain.pem"
(IP address information replaced with XX.)
Anyone have ideas to fix this?
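One lead suggested by the second log line above, where the OCSP responder r3.o.lencr.org times out while being resolved, is to verify that the server can still resolve and reach that host. A sketch of generic checks, not specific to BOA:

    getent hosts r3.o.lencr.org                  # does DNS resolution work from this server?
    cat /etc/resolv.conf                         # which resolvers is the system configured to use?
    curl -sI http://r3.o.lencr.org | head -n 1   # is the responder reachable over plain HTTP?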
@dserrato Running octopus up-head o1 force should always help.
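In that command, o1 is presumably the Octopus instance's system user, matching the /data/disk/o1/... path in the log above; it would need to be replaced with the instance name used on your own server and run as root. The force argument appears intended to re-run the upgrade even when the version already looks current, which is why it can recover a half-finished upgrade.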