Sites do not work after machine reboot
After a host reboot, I am seeing this error in docker logs <container ID> for the nginx-proxy container (see separate issue #1250 about getting proxy error logs):
[error] 91#91: *14 no live upstreams while connecting to upstream, client
Not sure what else you will need to troubleshoot but please let me know.
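For reference, logs like this can be pulled with something like the following (the <container-ID> placeholder comes from the docker ps output; the exact container name may differ on your machine):
docker ps --filter name=nginx-proxy      # note the proxy container's ID
docker logs <container-ID> 2>&1 | tail -n 50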
System Information

lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic

ee cli version:
EE 4.0.0-beta.6

ee cli info:
OS: Linux 4.15.0-36-generic #39-Ubuntu SMP Mon Sep 24 16:19:09 UTC 2018 x86_64
Shell: /bin/bash
PHP binary: /usr/bin/php7.2
PHP version: 7.2.11-1+ubuntu18.04.1+deb.sury.org+1
php.ini used: /etc/php/7.2/cli/php.ini
EE root dir: phar://ee.phar
EE vendor dir: phar://ee.phar/vendor
EE phar path: /opt/easyengine/nginx/conf.d
EE packages dir:
EE global config: /opt/easyengine/config.yml
EE project config:
EE version: 4.0.0-beta.6

wp --allow-root --info:
wp: command not found
dockergen.1 | 2018/10/18 02:44:53 Running 'nginx -s reload'
dockergen.1 | 2018/10/18 02:44:53 Received event die for container 36a1ace0beef
dockergen.1 | 2018/10/18 02:44:53 Generated '/etc/nginx/conf.d/default.conf' from 5 containers
dockergen.1 | 2018/10/18 02:44:53 Running 'nginx -s reload'
nginx.1 | 2018/10/18 02:45:03 [error] 142#142: *14 no live upstreams while connecting to upstream, client: 69.255.22.171, server: ee1.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://ee1.mydomain.com-42099b4af021e53fd8fd4e056c2568d7c2e3ffa8/", host: "mydomain.com"
I had the same issue rebooting the host VM and found that the nginx docker container doesn't start properly. You will need to find the ID of that docker container, stop it manually, and restart it. Another method that worked is using the ee site down command to stop the website and then ee site up to bring it back up.
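For reference, a minimal sketch of that manual restart (the proxy container name below is taken from later comments in this thread and may differ on your machine):
docker ps -a --filter name=nginx-proxy          # find the proxy container and its status
docker restart services_global-nginx-proxy_1    # substitute the name or ID shown by docker ps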
@kirtangajjar @mbtamuli Is this on the radar to be fixed before v4 is released?
@devbhosale Can you try to reproduce this issue from v4 stable and let us know if it's still there?
Definitely still an issue - has happened to me on several reboots using 4.0.0 - 4.0.4.
https://community.easyengine.io/t/easyengine-services-not-running-on-server-reboot/11651
Can confirm this is still an issue as well, running Ubuntu 18 LTS and EE 4.0.10.
The easiest way to deal with it, instead of SSHing in every time, is a simple script I set up in /etc/init.d/ (until this is fixed) that runs the command:
ee service restart nginx-proxy
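A minimal sketch of such a script, assuming ee lives at /usr/local/bin/ee (an @reboot cron entry or a systemd unit would do the same job); it still needs to be made executable and registered with update-rc.d:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          ee-proxy-restart
# Required-Start:    $remote_fs $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Restart the EasyEngine nginx-proxy after boot
### END INIT INFO
# Hypothetical stop-gap: give Docker time to bring the containers up, then restart the proxy.
sleep 60
/usr/local/bin/ee service restart nginx-proxy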
That command didn't work either this time. Now on 4.0.10 and Ubuntu 18 LTS, and it is still not working.
Running
sudo ee site disable site.com
sudo ee site enable site.com
doesn't work.
Running
ee service restart nginx-proxy
doesn't work.
Restarting the server multiple times doesn't work.
Please somebody find a fix!
@michacassola Can you try ee service enable db --force
and check the output of docker ps -a
and check if any of the site or service containers are restarting or exited?
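For example, exited or restarting containers can be listed directly with docker's status filters:
docker ps -a --filter status=exited        # containers that have stopped
docker ps -a --filter status=restarting    # containers stuck in a restart loop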
Sorry, I already uninstalled and reinstalled. Now the main site runs fine; the others were just test sites anyway. If I run into the same problem again after a future update or reboot, I will do the steps you mentioned and post the output here.
@michacassola Can you try
ee service enable db --force
and check the output of docker ps -a
and check if any of the site or service containers are restarting or exited?
After disabling and enabling the sites, I got an "Error establishing a database connection" message. This command solved the problem. Thanks.
502 Bad Gateway - is back.
Running 'sudo ee site disable site.com' and 'sudo ee site enable site.com' doesn't work.
Running 'ee service restart nginx-proxy' doesn't work.
Same as @michacassola.
Don't know what to do now. Any help?
For me the following worked:
- ee site disable on each of the sites
- cd into /opt/easyengine/services and run docker-compose down, then docker-compose up -d
- reboot
- then ee site enable for each of the sites
Thank you! It worked!
The following steps worked to run the EasyEngine docker containers and the specific site container:
To run EE docker containers:
cd /opt/easyengine/services/ && sudo docker-compose down && sudo docker-compose up -d
To run EE site's containers:
cd /opt/easyengine/sites/EXAMPLE.COM/ && sudo docker-compose down && sudo docker-compose up -d
NOTE: of course, you need to repeat the second step for every site you want to run.
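If you have many sites, a small loop over the sites directory (same path as above) avoids repeating the second step by hand; a rough sketch:
for site in /opt/easyengine/sites/*/; do
  (cd "$site" && sudo docker-compose down && sudo docker-compose up -d)
done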
@rahul286 @kirtangajjar Is there any possibility you can prioritize this issue in the next 1-2 months? As long as this issue is there, it means downtime for all sites hosted with EE4...
None of the proposed solutions worked for me.
What did work was to make a copy of the config file and then make changes:
cd /opt/easyengine/services/nginx-proxy/conf.d
cp default.conf default.conf.orig
I removed all the upstream <domain-name>-<id> { ... } blocks. Then, within each server defined for a domain, I removed the -<id>, e.g. example.com-2666376731789139831 became example.com.
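For illustration, a hypothetical before/after of one vhost entry based on that description (the IP and id are placeholders; the generated default.conf on your host will differ):

upstream example.com-2666376731789139831 {
    # container IP:port filled in by docker-gen
    server 172.18.0.5:80;
}
server {
    server_name example.com;
    location / {
        proxy_pass http://example.com-2666376731789139831;
    }
}

after the edit becomes:

server {
    server_name example.com;
    location / {
        proxy_pass http://example.com;
    }
}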
Restarting with ee service restart nginx-proxy brought all my sites back online as I watched the log:
docker logs --tail=10 -f services_global-nginx-proxy_1
Restarting the service, watching the log for errors, and going to the reported line number helped tidy up the config.
When adding a new site, the config file regenerates to what it was originally, and it continues to work.
So it seems that either some timeout waiting for each domain's docker network/containers to start is giving up too soon, or the order in which services start up has changed.
I have a couple of clues about this issue. It seems to be resource related. With a DO droplet (Ubuntu 18.04, 1 GB RAM, the "$5" droplet), I get the problem after reboot with as few as three WP sites enabled. Clue #1: if I disable all sites, reboot, and enable all sites (10 total), all is fine. Clue #2: if I increase the DO droplet to 2 GB RAM, the problem never occurs. Hope this helps.
UPDATE: Unfortunately, the problem persists in Beta 6.
For me the following worked:
- ee site disable on each of the sites
- cd into /opt/easyengine/services and run docker-compose down, then docker-compose up -d
- reboot
- then ee site enable for each of the sites
Thanks, it works. Then run the command below if you get an "Error establishing a database connection" message:
ee service enable db
None of the proposed solutions worked for me.
What did work was to make a copy of the config file and then make changes:
cd /opt/easyengine/services/nginx-proxy/conf.d
cp default.conf default.conf.orig
I removed all the upstream <domain-name>-<id> { ... } blocks. Then, within each server defined for a domain, I removed the -<id>, e.g. example.com-2666376731789139831 became example.com.
Restarting with ee service restart nginx-proxy brought all my sites back online as I watched the log:
docker logs --tail=10 -f services_global-nginx-proxy_1
Restarting the service, watching the log for errors, and going to the reported line number helped tidy up the config. When adding a new site, the config file regenerates to what it was originally, and it continues to work. So it seems that either some timeout waiting for each domain's docker network/containers to start is giving up too soon, or the order in which services start up has changed.
forego | starting nginx.1 on port 5200
forego | sending SIGTERM to nginx.1
forego | sending SIGTERM to dockergen.1
Custom dhparam.pem file found, generation skipped
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
nginx.1 | nginx: [emerg] host not found in upstream "example.com-42099b4af021e53fd8fd4e056c2568d7c2e3ffa8" in /etc/nginx/conf.d/default.conf:363
forego | starting nginx.1 on port 5200
forego | sending SIGTERM to nginx.1
forego | sending SIGTERM to dockergen.1
Still got the problem :(
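To see what sits at the line the [emerg] message points to, the generated config can be inspected from the host (host path taken from the workaround above; inside the proxy container it appears as /etc/nginx/conf.d/default.conf):
sed -n '355,370p' /opt/easyengine/services/nginx-proxy/conf.d/default.conf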
Is this fixed yet? I'm getting "Error: Unable to find global EasyEngine service nginx" every time I try to enable or disable the nginx service to see if that would fix it…
Try rebooting your server again. If that doesn't help, you need to reinstall the website.
- Back up your wp-content folder and database.
- Remove the old site from EasyEngine (just "sudo rm -rf" the site folder).
- Create a new instance using ee site create yoursite.com.
- Remove the new wp-content folder and restore the backup in its place using rsync (see the sketch after this list).
- Delete the new database and replace it with the backup.
- Set up wp-config.php again and update the table prefix, database name, database username, etc. If you can't access the site after doing all the steps, it is usually the DB connection.
- Done.
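A rough sketch of the restore steps, assuming the usual EE4 layout where the docroot lives under app/htdocs (all paths below are illustrative):
rsync -a /path/to/backup/wp-content/ /opt/easyengine/sites/yoursite.com/app/htdocs/wp-content/
For the database, one option is ee shell yoursite.com and then wp db import backup.sql from inside the container (the dump file needs to be somewhere under the mounted docroot so the container can see it).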
For me the following worked:
- ee site disable on each of the sites
- cd into /opt/easyengine/services and run docker-compose down, then docker-compose up -d
- reboot
- then ee site enable for each of the sites
Thank you! It worked!
This worked for me with one tweak first
sudo /etc/init.d/docker stop
sudo /etc/init.d/docker start
ee site disable www.site.com
cd /opt/easyengine/services
sudo docker-compose down
sudo docker-compose up -d
ee site enable www.site.com
thanks a lot!
I got this error after upgrading EE from version 4.1.4 to 4.1.5:
Error: Ports of current running nginx-proxy and ports specified in EasyEngine config file don't match.
The solution by petebytes is the best for now... thank you!! But the error occurs again after a server reboot. Now I need to reinstall everything and re-upload 9 GB of WP files ( TT _ TT )
I hope there will be a fix for this in the next release and that the upgrade will be free of errors.
Conclusion: never upgrade the EE version on a production server.
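For the port-mismatch error above, a quick way to compare what the running proxy actually publishes with what EasyEngine expects is (config path taken from ee cli info earlier in this thread):
docker ps --filter name=nginx-proxy --format '{{.Names}}  {{.Ports}}'
cat /opt/easyengine/config.yml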
For me the following worked:
- ee site disable on each of the sites
- cd into /opt/easyengine/services and run docker-compose down, then docker-compose up -d
- reboot
- then ee site enable for each of the sites
I'm using Debian 10 and EE v4.1.5, and this works for me. I also tried it on other distros (Debian 9, Ubuntu 18.04, Ubuntu 20.04) and it works there too!
I have more than 100 blogs (27 per server), and I can't imagine disabling/enabling them one by one. I know I can write a bash script (see the sketch after this comment), but for one website it takes around 10-15 seconds to disable and enable.
EE is awesome; it is fast and simple. But a server restart is something that will happen in any case, and this problem has been here since 2018, wow. I hope EE gets better in the future, but for now I still can't use it for production servers.
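For what it's worth, the per-site disable/enable can be scripted over the sites directory; a rough sketch:
for site in /opt/easyengine/sites/*/; do
  sudo ee site disable "$(basename "$site")"
done
# ...bring the services back up, then:
for site in /opt/easyengine/sites/*/; do
  sudo ee site enable "$(basename "$site")"
done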
Thanks @imylomylo, your answer works for me with a small modification to make it simpler 😉
Instead of doing this
What did work was to make a copy of the config file and then make changes:
cd /opt/easyengine/services/nginx-proxy/conf.d
cp default.conf default.conf.orig
I removed all the upstream <domain-name>-<id> { ... } blocks. Then, within each server defined for a domain, I removed the -<id>, e.g. example.com-2666376731789139831 became example.com.
I just moved default.conf aside as a backup and let the global nginx-proxy redo the vhost creation:
cd /opt/easyengine/services/nginx-proxy/conf.d
mv default.conf default.conf.orig
ee service restart nginx-proxy
After all sites are working again, remove the backup file:
rm -f default.conf.orig