kubeadm-ha
Errors when deploying a k8s cluster across cloud instances
- I'm using 7 Oracle Cloud machines in different regions, so naturally only the public IPs can reach each other. I can see the code accesses 127.0.0.1, which will never be reachable in this setup, so k8s fails at the lb_kube_apiserver_port stage. How can I solve this?
fatal: [152.67.127.61]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.033992", "end": "2023-10-13 00:08:31.099027", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:08:28.065035", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
FAILED - RETRYING: waiting (by polling) for nginx to be up and running (12 retries left).
… (the same RETRYING message repeats for every node as the retry counts run down to 1) …
fatal: [152.67.206.130]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.036649", "end": "2023-10-13 00:09:11.041576", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:09:08.004927", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [158.179.163.160]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.036905", "end": "2023-10-13 00:09:11.465386", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:09:08.428481", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [152.67.216.49]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.036661", "end": "2023-10-13 00:09:14.650546", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:09:11.613885", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [158.178.150.247]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.034773", "end": "2023-10-13 00:09:45.424047", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:09:42.389274", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [132.145.192.232]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.036745", "end": "2023-10-13 00:11:20.420574", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:11:17.383829", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [140.238.243.210]: FAILED! => {"attempts": 12, "changed": true, "cmd": "nc -z -w 3 127.0.0.1 8555", "delta": "0:00:03.038241", "end": "2023-10-13 00:11:37.138415", "msg": "non-zero return code", "rc": 1, "start": "2023-10-13 00:11:34.100174", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
Hi, this project can only be deployed over internal network IPs; deployment over public IPs is not supported. Please clone the scripts onto one of the servers and run the cluster installation from there.
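Roughly, for example (the repository URL and the inventory/playbook file names below are assumptions; check the project README for the exact commands):

```shell
# Clone the project onto a machine that can reach every node's internal IP
git clone https://github.com/TimeBye/kubeadm-ha.git
cd kubeadm-ha
# List only internal IPs in the inventory, then run the install playbook
ansible-playbook -i example/hosts.m-master.ip.ini 90-init-cluster.yml
```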
Hi, I have an idea. What if I use WireGuard to link the cross-cloud instances, so that the IPs across clouds all sit in the same subnet? Could the k8s deployment be completed that way? WireGuard would put every node's IP in the 10.222.1.x range; I would keep the public address for external access and use the 10.222.1.x range for internal traffic.
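A minimal sketch of what that overlay could look like, assuming wg-quick and the 10.222.1.x addressing described above (keys, endpoints, and the interface name are placeholders):

```shell
# On node 1: write a WireGuard config (all values are placeholders)
cat >/etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.222.1.1/24              # private overlay IP for this node
PrivateKey = <node1-private-key>
ListenPort = 51820

[Peer]                               # one [Peer] block per remote node
PublicKey = <node2-public-key>
Endpoint = <node2-public-ip>:51820   # tunnel rides on the public IP
AllowedIPs = 10.222.1.2/32
PersistentKeepalive = 25             # keep NAT mappings alive
EOF
wg-quick up wg0
```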
You can give it a try; I haven't set it up that way myself. When using this project, fill in the node IP with the IP assigned by WireGuard (the IP you can see with the ip a command).
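For instance (the interface name wg0 is an assumption):

```shell
# Show the WireGuard interface and extract its IPv4 address;
# this is the value to use as the node IP in the inventory
ip -4 a show wg0
ip -4 -o a show wg0 | awk '{print $4}' | cut -d/ -f1
```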
It can definitely work; I've been researching this for ages. I previously got it working with k3s + Kilo, but ran into one problem: CoreDNS broke, which made services unreachable inside the cluster.
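Not specific to this project, but a minimal sketch of the usual first checks when in-cluster DNS breaks like that (the label selector and busybox image are common defaults, not taken from this thread):

```shell
# Is CoreDNS running, and on which nodes?
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
# Any errors in its logs?
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
# Can a throwaway pod resolve an in-cluster name?
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup kubernetes.default
```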