IP reservations not being removed after Nanobox tries to use a network space that's already used by Docker
If you're using Docker Native and run into a network space conflict, then even after updating the native-network-space setting, app components still try to use the previous network space and throw an error:
Error response from daemon: Invalid address 172.18.0.2: It does not belong to any of this network's subnets
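For anyone puzzling over what this error means: Docker is rejecting a reserved address that falls outside the network's current subnets. The check can be sketched with Python's stdlib ipaddress module (the subnet values below are illustrative, not read from a real Docker network):

```python
# Sketch of the subnet check behind Docker's "does not belong to any of
# this network's subnets" error. Subnets here are illustrative only.
import ipaddress

def belongs_to_subnets(address, subnets):
    """Return True if the address falls inside any of the given subnets."""
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(s) for s in subnets)

# The app still holds a reservation from the old network space...
stale_reservation = "172.18.0.2"
# ...but the network was recreated with a different subnet.
new_subnets = ["172.20.0.0/16"]

print(belongs_to_subnets(stale_reservation, new_subnets))  # False: Docker rejects it
print(belongs_to_subnets("172.20.0.5", new_subnets))       # True: would be accepted
```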
The only way to fix it is to nanobox destroy and nanobox run again.
Would it be possible to clear the IP reservation for the app if Docker has already reserved the network space? Or not save the network space (in the database?) until after the network-space is confirmed to be available?
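The "confirm before saving" idea could look something like this sketch, assuming the in-use subnets have been gathered elsewhere (e.g. from docker network inspect); the helper name and subnet list are made up for illustration:

```python
# Hypothetical pre-save check: before persisting a new network space,
# verify it doesn't overlap any subnet Docker already has in use.
import ipaddress

def network_space_available(candidate, subnets_in_use):
    """Return True if the candidate subnet overlaps none of the existing ones."""
    cand = ipaddress.ip_network(candidate)
    return not any(cand.overlaps(ipaddress.ip_network(s)) for s in subnets_in_use)

in_use = ["172.17.0.0/16", "172.18.0.0/16"]  # illustrative Docker subnets
print(network_space_available("172.18.0.0/16", in_use))  # False: conflict, don't save
print(network_space_available("172.21.0.0/16", in_use))  # True: safe to save
```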
This problem was clarified in Slack. The VM actually has to be destroyed before the config change will be applied to containers. I know the startup process is being automated, but in the meantime, I wonder if it'd be worth updating the error message to include instructions on imploding as well.
It seems like any time you run nanobox configure set ..., those settings only take effect when creating a VM with nanobox new. Changing cpus or disk space isn't retroactive either.
Any idea how to fix it temporarily? :wrench:
I tried nanobox destroy and nanobox run again, but the problem persists:
! Starting docker container
Error : Error response from daemon: Invalid address 172.20.0.3: It does not belong to any of this network's subnets
Context : failed to sync components -> failed to provision components -> failed to setup component (data.db): failed to start docker container: Error response from daemon: Invalid address 172.20.0.3: It does not belong to any of this network's subnets -> failed to start docker container
The solution is to completely remove Docker by following https://stackoverflow.com/questions/31313497/how-to-remove-docker-installed-using-wget/31313851#31313851, then reinstall it, and you're good to go!
The only thing that helped me was to run nanobox implode and then nanobox start again, configuring from scratch. Simply removing stale Docker containers and images did not help 🤕