pgwatch2
pgwatch2 docker container error: 127.0.0.1:5432: bind: address already in use
I am trying to run the pgwatch2 docker container on my local PC running Ubuntu 22.04. I am running Postgres 14 in another container, exposing port 5432, with this command:
sudo docker run -d --name postgres14 -p 5432:5432 -e POSTGRES_HOST_AUTH_METHOD=trust \
-e PGDATA=/media/jay/postgres/pgdata \
-v pg14-data:/media/jay/postgres \
postgres:14
When I try running the pgwatch2 container with this:
sudo docker run -d --restart=unless-stopped --name pw2 \
-p 3000:3000 -p 8080:8080 -p 127.0.0.1:5432:5432 \
-e PW2_TESTDB=true \
-e NOTESTDB=1 \
cybertec/pgwatch2-postgres:latest
I get the following error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint pw2 (aa442a59266fce1c8084af2f94294c607c502f15fdc7ba2e16500e65884aaa2d): Error starting userland proxy: listen tcp4 127.0.0.1:5432: bind: address already in use.
It appears that the pgwatch2 container and Postgres are both trying to bind to host port 5432.
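If you want to confirm which container already publishes that port, docker ps's publish filter should show it (a standard filter; adjust the port value if your mapping differs):

sudo docker ps --filter "publish=5432"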
It looks like the Docker quick start docs use the command:
docker run -d -p 3000:3000 -p 8080:8080 -e PW2_TESTDB=true --name pw2 cybertec/pgwatch2
to start the container: https://pgwatch2.readthedocs.io/en/latest/README.html?highlight=docker#quick-start-with-docker
The run command I provided in the opening comment, which includes port 5432, was from the README in this repo.
You do have to think (at least) a bit before executing commands. Right above the line you posted, the README also says:
# and the internal configuration and metrics DB on localhost port 5432
This means that if your "DB to monitor" is also running on port 5432, you have two options: do not expose the pgwatch2 metrics DB (just delete the -p 127.0.0.1:5432:5432 mapping), or change the host port. This is not a bug.
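For example, either variant of the original run command should avoid the conflict. This is just a sketch: 5433 below is an arbitrary free host port, and only the PW2_TESTDB flag from the quick-start docs is kept; add back any other env vars you need.

# Option 1: do not expose the internal metrics DB at all
sudo docker run -d --restart=unless-stopped --name pw2 \
-p 3000:3000 -p 8080:8080 \
-e PW2_TESTDB=true \
cybertec/pgwatch2-postgres:latest

# Option 2: publish the internal metrics DB on a different host port
sudo docker run -d --restart=unless-stopped --name pw2 \
-p 3000:3000 -p 8080:8080 -p 127.0.0.1:5433:5432 \
-e PW2_TESTDB=true \
cybertec/pgwatch2-postgres:latest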
📅 This issue has been automatically marked as stale because of a lack of recent activity. It will be closed if no further activity occurs. ♻️ If you think there is new information allowing us to address the issue, please reopen it and provide us with updated details. 🤝 Thank you for your contributions.