ansible-kubernetes-openshift-pi3

After clean Install: Port occupied

dklueh79 opened this issue on Oct 22, 2017 · 3 comments

During `ansible-playbook -i hosts kubernetes.yml`:

```
TASK [kubernetes : Run kubeadm init on master] ************************************************************************
fatal: [192.168.0.230]: FAILED! => {
    "changed": true,
    "cmd": ["kubeadm", "init", "--config", "/etc/kubernetes/kubeadm.yml"],
    "delta": "0:00:06.811351",
    "end": "2017-10-22 15:50:01.583502",
    "failed": true,
    "rc": 2,
    "start": "2017-10-22 15:49:54.772151",
    "stderr": "[preflight] Some fatal errors occurred:\n\tPort 10250 is in use\n\tPort 10251 is in use\n\tPort 10252 is in use\n\t/etc/kubernetes/manifests is not empty\n\tPort 2379 is in use\n\t/var/lib/etcd is not empty\n[preflight] If you know what you are doing, you can skip pre-flight checks with --skip-preflight-checks",
    "stdout": "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.\n[init] Using Kubernetes version: v1.8.2-beta.0\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks",
    "stdout_lines": [
        "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.",
        "[init] Using Kubernetes version: v1.8.2-beta.0",
        "[init] Using Authorization modes: [Node RBAC]",
        "[preflight] Running pre-flight checks"
    ],
    "warnings": []
}

to retry, use: --limit @/root/k8s-pi/kubernetes.retry
```
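The preflight failures mean an earlier `kubeadm` run already started the kubelet and etcd and left state behind. A quick way to see what is actually holding those ports and directories before retrying (a diagnostic sketch, not part of the playbook; on older Raspbian images `ss` may need to be replaced by `netstat -tlnp`):

```bash
# Show which processes are listening on the ports kubeadm complained about
ss -tlnp | grep -E ':(10250|10251|10252|2379)\b'

# Inspect the directories the preflight check found non-empty
ls -la /etc/kubernetes/manifests /var/lib/etcd
```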

dklueh79 · Oct 22, 2017

Sorry, the current check for whether Kubernetes is running is a bit limited. It runs `kubectl get nodes`, and if that fails with exit code 1 it is assumed that no cluster is running, so `kubeadm init` is called again.

I think this should be made more robust. Any ideas? (Maybe we should run `kubeadm upgrade plan` or something similar ...)
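For illustration, the detection described above boils down to roughly the following (a sketch of the idea, not the playbook's actual task; the config path is taken from the error output above):

```bash
# Current (limited) detection: a failing "kubectl get nodes" is taken
# to mean "no cluster here", so kubeadm init is run again, even on a
# half-initialized node where init cannot succeed either.
if ! kubectl get nodes >/dev/null 2>&1; then
    kubeadm init --config /etc/kubernetes/kubeadm.yml
fi
```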

rhuss · Oct 23, 2017

Any solution for completing the Kubernetes setup?

dklueh79 · Oct 27, 2017

@dklueh79 What do you mean? Actually, the current detection works when Kubernetes has been properly installed and the nodes are running; in that case the `kubeadm init` step is skipped. However, when the initial setup didn't complete and you are left in a half-baked state, `kubectl get nodes` fails, but `kubeadm init` fails as well. You should probably do a full reset then.

So when this error occurs, you should try a full reset before running the playbook again:

```bash
ansible-playbook -i hosts kubernetes-full-reset.yml
```
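If running the reset playbook is not an option, a rough manual equivalent on the master, assuming a standard kubeadm layout (a sketch only; the playbook may clean up more than this), would be:

```bash
# Tear down the half-initialized state that the preflight check flagged
kubeadm reset                                        # stops the kubelet and undoes what kubeadm init set up
rm -rf /etc/kubernetes/manifests/* /var/lib/etcd/*   # belt and braces: the dirs preflight found non-empty
```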

rhuss · Oct 30, 2017