kubernetes-vagrant-coreos-cluster
vagrant status shows only first 2 nodes
When more than 2 nodes are started, vagrant still sees only node-01 and node-02. It is not possible to ssh into the higher-numbered nodes via vagrant, nor to halt them. If vagrant halt is issued, the master and the first 2 nodes are halted but the rest remain up.
This happens because when the NUM_INSTANCES environment variable is used to set the number of minions to spawn, the value is not persisted anywhere the Vagrantfile can read it later, so it falls back to the default of 2.
Please do note that if you were using non-default settings to start up your cluster, you must also use those exact settings when invoking
vagrant {up,ssh,destroy} to communicate with any of the nodes in the cluster, as otherwise things may not behave as you'd expect.
So, if you ran NUM_INSTANCES=4 vagrant up, you'll need NUM_INSTANCES=4 vagrant status in order to show all the nodes. This is not optimal, but right now it's what works.
This same issue applies to all environment variables that we set but do not reuse during the cluster lifecycle. We should find a way to write and read this info from persistent media, i.e. a file.
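The root cause can be sketched as follows (the snippet is illustrative, not the project's exact Vagrantfile code): every vagrant invocation is a fresh process, so a Vagrantfile that reads NUM_INSTANCES straight from the environment falls back to its default whenever the variable is not exported again.

```ruby
# Illustrative sketch, not the project's exact code: a later
# `vagrant status` run without NUM_INSTANCES exported sees the default.
num_instances = ENV.fetch("NUM_INSTANCES", "2").to_i

node_names = (1..num_instances).map do |i|
  # With the default, only node-01 and node-02 are ever defined,
  # which is why vagrant cannot see or halt the higher nodes.
  format("node-%02d", i)
end
```

Because nothing persists the value between runs, the same fallback applies to every command in the cluster lifecycle, not just up.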
To persist variables, I am using it like this:
echo 'NUM_INSTANCES=1 MASTER_MEM=1024 MASTER_CPUS=2 NODE_MEM=2048 NODE_CPUS=2 vagrant "$@"' > v
chmod +x v
./v up
Interesting @janroos. Thanks for sharing. But I would like something like a trigger in the Vagrantfile that would read values from a file. Such values would only be overridden by environment variables previously set.
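That trigger idea could look something like this (a minimal sketch; the file name .cluster-settings, the variable list, and the JSON format are all assumptions, not part of the project): saved values are reloaded on every run, live environment variables win over them, and the effective settings are written back so later vagrant status or halt invocations see the same cluster size.

```ruby
require "json"

# Assumed file name and variable list, purely for illustration.
SETTINGS_FILE = ".cluster-settings"
DEFAULTS = { "NUM_INSTANCES" => "2", "NODE_MEM" => "2048" }.freeze

# Start from defaults, overlay anything persisted from a previous run.
saved = File.exist?(SETTINGS_FILE) ? JSON.parse(File.read(SETTINGS_FILE)) : {}
settings = DEFAULTS.merge(saved)

# Environment variables previously set take precedence over saved values.
DEFAULTS.each_key { |k| settings[k] = ENV[k] if ENV.key?(k) }

# Persist the effective settings so later `vagrant` runs reuse them.
File.write(SETTINGS_FILE, JSON.pretty_generate(settings))

num_instances = settings["NUM_INSTANCES"].to_i
```

With something like this in place, NUM_INSTANCES=4 vagrant up would record 4, and a plain vagrant status afterwards would still show all four nodes.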