K8s installer does not work for contiv
sudo ./install/k8s/install.sh -n 172.29.174.19
Installing Contiv for Kubernetes
Generating local certs for Contiv Proxy
Setting installation parameters
Applying contiv installation
To customize the installation press Ctrl+C and edit ./.contiv.yaml.
configmap "contiv-config" configured
daemonset "contiv-netplugin" configured
replicaset "contiv-netmaster" configured
replicaset "contiv-api-proxy" configured
daemonset "contiv-etcd" configured
ERRO[0000] Get http://netmaster:9999/api/v1/globals/global/: dial tcp 172.29.174.19:9999: getsockopt: connection refused
Port 9999 is open in the security group. Using the 1.0.3 release.
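One quick way to confirm whether netmaster is actually listening on the master node (just a sketch; it assumes netmaster serves an /info status endpoint, the same one the installer health checks later in this thread poll):
# on the master: is anything bound to port 9999?
sudo ss -lntp | grep ':9999'
# "connection refused" from curl means netmaster never came up, not a firewall problem
curl -v http://172.29.174.19:9999/info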
What's the environment?
We are seeing the exact same issue:
root@ritesh-test:~/contiv/contiv-1.0.3# ./install/k8s/install.sh -n 184.173.89.146
Installing Contiv for Kubernetes
secret "aci.key" created
Generating local certs for Contiv Proxy
Setting installation parameters
Applying contiv installation
To customize the installation press Ctrl+C and edit ./.contiv.yaml.
clusterrolebinding "contiv-netplugin" created
clusterrole "contiv-netplugin" created
serviceaccount "contiv-netplugin" created
clusterrolebinding "contiv-netmaster" created
clusterrole "contiv-netmaster" created
serviceaccount "contiv-netmaster" created
configmap "contiv-config" created
daemonset "contiv-netplugin" created
replicaset "contiv-netmaster" created
replicaset "contiv-api-proxy" created
daemonset "contiv-etcd" created
ERRO[0000] Get http://netmaster:9999/api/v1/globals/global/: dial tcp 184.173.89.146:9999: getsockopt: connection refused
Contiv: 1.0.3 K8s: v1.6.4
I am trying to bring this up in an internal environment as part of another task I am working on. Ubuntu platform.
Folks,
Please describe the platform: bare metal or VM (and which virtualization platform), on-prem or cloud, and the OS.
-Himanshu
@gaurav-dalvi, @patelrit, please provide the following information (for k8s 1.6 you may have to export KUBECONFIG=/etc/kubernetes/admin.conf or your corresponding config file):
kubectl version
kubectl get nodes
kubectl get pods -n kube-system
kubectl describe pods -n kube-system
cat /etc/hosts
systemctl status firewalld
Ports open on your firewall/security groups, specifically the state of ports 6443, 9999 and 6666.
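A minimal sketch that gathers all of the above into a single file (assuming kubectl is already configured and the kubeadm admin.conf path applies to your setup):
export KUBECONFIG=${KUBECONFIG:-/etc/kubernetes/admin.conf}
{
  kubectl version
  kubectl get nodes
  kubectl get pods -n kube-system
  kubectl describe pods -n kube-system
  cat /etc/hosts
  systemctl status firewalld
  # listening state of the ports contiv needs
  ss -lntp | grep -E ':(6443|9999|6666)'
} > contiv-diag.txt 2>&1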
1:
cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.1.1 vhost-ga661-0
172.29.54.227 corc
172.29.174.19 netmaster
2:
sudo systemctl status firewalld
● firewalld.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
ubuntu@vhost-ga661-0:~$
3:
kubectl get nodes
NAME STATUS AGE
vhost-ga661-0 Ready,master 2d
ubuntu@vhost-ga661-0:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-s4vnt 1/1 Running 0 2d
calico-node-jmph3 2/2 Running 1 2d
calico-policy-controller-807063459-gpjd1 1/1 Running 0 2d
contiv-api-proxy-rbzz9 1/1 Running 187 2d
contiv-etcd-9slfd 1/1 Running 233 2d
contiv-netmaster-f4jq2 1/1 Running 212 2d
contiv-netplugin-cxmgd 1/1 Running 210 2d
dummy-2088944543-jmw4g 1/1 Running 0 2d
etcd-vhost-ga661-0 1/1 Running 2 2d
kube-apiserver-vhost-ga661-0 1/1 Running 2 2d
kube-controller-manager-vhost-ga661-0 1/1 Running 9 2d
kube-discovery-1769846148-r6n6c 1/1 Running 0 2d
kube-dns-2924299975-01tmk 0/4 OutOfcpu 0 2d
4:
kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.6", GitCommit:"114f8911f9597be669a747ab72787e0bd74c9359", GitTreeState:"clean", BuildDate:"2017-03-28T13:36:31Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@vhost-ga661-0:~$
@gaurav-dalvi, you have Calico running on the cluster, and both etcd instances are trying to use the same host port. Please retry on a clean setup.
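For anyone hitting the same thing, a quick way to see the clash on the master (a sketch; it assumes both the Calico and Contiv etcd manifests bind the same host ports, typically 6666/6667):
# which process already owns the etcd client/peer ports?
sudo ss -lntp | grep -E ':666[67]'
# both etcd pods are scheduled on the master with host networking
kubectl get pods -n kube-system -o wide | grep etcd
# the high restart count on contiv-etcd above (233) points at the same conflict
kubectl logs -n kube-system contiv-etcd-9slfd --tail=20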
Here is the requested info:
Platform: Ubuntu 16.04.2 LTS VMs on IBM SoftLayer
root@cluster1:~/contiv/contiv-1.0.3# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
50.97.198.5 cluster1.nirmata.com cluster1
127.0.1.1 cluster1.nirmata.com cluster1.nirmata.com
50.97.198.5 netmaster
root@cluster1:~/contiv/contiv-1.0.3#
root@cluster1:~/contiv/contiv-1.0.3#
root@cluster1:~/contiv/contiv-1.0.3# systemctl status firewalld
firewalld.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
root@cluster1:~/contiv/contiv-1.0.3#
root@cluster1:~/contiv/contiv-1.0.3# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T15:48:59Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
root@cluster1:~/contiv/contiv-1.0.3#
root@cluster1:~/contiv/contiv-1.0.3# kubectl get nodes
NAME STATUS AGE VERSION
cluster1 Ready 17h v1.6.4
cluster2 Ready 17h v1.6.4
root@cluster1:~/contiv/contiv-1.0.3# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
contiv-api-proxy-g07zc 0/1 Pending 0 6m
contiv-netmaster-8hk0v 0/1 Pending 0 6m
contiv-netplugin-j1f7p 0/1 CrashLoopBackOff 6 6m
contiv-netplugin-nrprk 0/1 CrashLoopBackOff 6 6m
root@cluster1:~/contiv/contiv-1.0.3#
root@cluster1:~/contiv/contiv-1.0.3#
cd ~/kubernetes/
root@cluster1:~/kubernetes# KUBECTL_PATH=$(which kubectl) NUM_NODES=2 KUBERNETES_PROVIDER=local cluster/validate-cluster.sh
Found 2 node(s).
NAME STATUS AGE VERSION
cluster1 Ready 17h v1.6.4
cluster2 Ready 17h v1.6.4
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
Cluster validation succeeded
Pod details
root@cluster1:~/contiv/contiv-1.0.3# kubectl describe pods -n kube-system
Name: contiv-api-proxy-g07zc
Namespace: kube-system
Node: /
Labels: k8s-app=contiv-api-proxy
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"contiv-api-proxy","uid":"bd9cc25f-4fa4-11e7-9354-066bde8f6de...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Pending
IP:
Controllers: ReplicaSet/contiv-api-proxy
Containers:
contiv-api-proxy:
Image: contiv/auth_proxy:1.0.3
Port:
Args:
--tls-key-file=/var/contiv/auth_proxy_key.pem
--tls-certificate=/var/contiv/auth_proxy_cert.pem
--data-store-address=$(CONTIV_ETCD)
--netmaster-address=50.97.198.5:9999
Environment:
NO_NETMASTER_STARTUP_CHECK: 0
CONTIV_ETCD: <set to the key 'cluster_store' of config map 'contiv-config'> Optional: false
Mounts:
/var/contiv from var-contiv (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-k596z (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
contiv-netmaster-token-k596z:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netmaster-token-k596z
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master=:NoSchedule
node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 13s 27 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (2).
Name: contiv-netmaster-8hk0v
Namespace: kube-system
Node: /
Labels: k8s-app=contiv-netmaster
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"contiv-netmaster","uid":"bd91a0fd-4fa4-11e7-9354-066bde8f6de...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Pending
IP:
Controllers: ReplicaSet/contiv-netmaster
Containers:
contiv-netmaster:
Image: contiv/netplugin:1.0.3
Port:
Args:
-m
-pkubernetes
Environment:
CONTIV_ETCD: <set to the key 'cluster_store' of config map 'contiv-config'> Optional: false
CONTIV_CONFIG: <set to the key 'config' of config map 'contiv-config'> Optional: false
Mounts:
/etc/kubernetes/ssl from etc-kubernetes-ssl (rw)
/etc/openvswitch from etc-openvswitch (rw)
/lib/modules from lib-modules (rw)
/opt/cni/bin from cni-bin-dir (rw)
/var/contiv from var-contiv (rw)
/var/run from var-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-k596z (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
etc-openvswitch:
Type: HostPath (bare host directory volume)
Path: /etc/openvswitch
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
var-run:
Type: HostPath (bare host directory volume)
Path: /var/run
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
etc-kubernetes-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
contiv-netmaster-token-k596z:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netmaster-token-k596z
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master=:NoSchedule
node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 13s 27 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (2).
Name: contiv-netplugin-j1f7p
Namespace: kube-system
Node: cluster1/50.97.198.5
Start Time: Mon, 12 Jun 2017 19:24:23 +0000
Labels: k8s-app=contiv-netplugin
pod-template-generation=1
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"kube-system","name":"contiv-netplugin","uid":"bd846f21-4fa4-11e7-9354-066bde8f6de8...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 50.97.198.5
Controllers: DaemonSet/contiv-netplugin
Containers:
contiv-netplugin:
Container ID: docker://13259011ff436e7ffa9131c3fc81ec76d92b6d8708ec6f37a0fbdca688d6804d
Image: contiv/netplugin:1.0.3
Image ID: docker-pullable://contiv/netplugin@sha256:1d818453a71688d81072c25a149d814e4b2331fa526c3afd71aa895d79f6800a
Port:
Args:
-pkubernetes
-x
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 12 Jun 2017 19:30:34 +0000
Ready: False
Restart Count: 6
Environment:
VLAN_IF:
VTEP_IP: (v1:status.podIP)
CONTIV_ETCD: <set to the key 'cluster_store' of config map 'contiv-config'> Optional: false
CONTIV_CNI_CONFIG: <set to the key 'cni_config' of config map 'contiv-config'> Optional: false
CONTIV_CONFIG: <set to the key 'config' of config map 'contiv-config'> Optional: false
Mounts:
/etc/cni/net.d/ from etc-cni-dir (rw)
/etc/kubernetes/pki from etc-kubernetes-pki (rw)
/etc/kubernetes/ssl from etc-kubernetes-ssl (rw)
/etc/openvswitch from etc-openvswitch (rw)
/lib/modules from lib-modules (rw)
/opt/cni/bin from cni-bin-dir (rw)
/var/contiv from var-contiv (rw)
/var/run from var-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netplugin-token-gvdt9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
etc-openvswitch:
Type: HostPath (bare host directory volume)
Path: /etc/openvswitch
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
var-run:
Type: HostPath (bare host directory volume)
Path: /var/run
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
etc-kubernetes-pki:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
etc-kubernetes-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
etc-cni-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d/
contiv-netplugin-token-gvdt9:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netplugin-token-gvdt9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master=:NoSchedule
node.alpha.kubernetes.io/notReady=:Exists:NoExecute
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id d9b7552ff2e9792ebdda4c271abb1d10225931684c7d2121507ff16e3703af5b
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id d9b7552ff2e9792ebdda4c271abb1d10225931684c7d2121507ff16e3703af5b
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id f94d928b4b6b322c2dc4345f475b7a4c6dbad9438c526b5323b728c4c1065d61
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id f94d928b4b6b322c2dc4345f475b7a4c6dbad9438c526b5323b728c4c1065d61
6m 6m 1 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 10s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id 3318a45050e3f50a1883cb4242efd805d2c29acf74881f767b66781139a0586f
6m 6m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id 3318a45050e3f50a1883cb4242efd805d2c29acf74881f767b66781139a0586f
6m 6m 2 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 20s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
5m 5m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id bba908b948c43507caa30f2a86e8294f22d4a4cf0e541b4810fa08f0bdccbb09
5m 5m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id bba908b948c43507caa30f2a86e8294f22d4a4cf0e541b4810fa08f0bdccbb09
5m 5m 4 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 40s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
4m 4m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id 289ff6adb0ac3d668fa1b5f1a6789058c157483df99d805c412030c680b4d60c
4m 4m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id 289ff6adb0ac3d668fa1b5f1a6789058c157483df99d805c412030c680b4d60c
4m 3m 7 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
3m 3m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id 810300b9a6152344f1e4b4be11944f265c9d8a3927ea312c07b919a0dbe4a72e
3m 3m 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id 810300b9a6152344f1e4b4be11944f265c9d8a3927ea312c07b919a0dbe4a72e
3m 44s 13 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
6m 28s 7 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Pulled Container image "contiv/netplugin:1.0.3" already present on machine
28s 28s 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Started Started container with id 13259011ff436e7ffa9131c3fc81ec76d92b6d8708ec6f37a0fbdca688d6804d
28s 28s 1 kubelet, cluster1 spec.containers{contiv-netplugin} Normal Created Created container with id 13259011ff436e7ffa9131c3fc81ec76d92b6d8708ec6f37a0fbdca688d6804d
6m 0s 30 kubelet, cluster1 spec.containers{contiv-netplugin} Warning BackOff Back-off restarting failed container
25s 0s 3 kubelet, cluster1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=contiv-netplugin pod=contiv-netplugin-j1f7p_kube-system(bd870652-4fa4-11e7-9354-066bde8f6de8)"
Name: contiv-netplugin-nrprk
Namespace: kube-system
Node: cluster2/50.97.198.8
Start Time: Mon, 12 Jun 2017 19:24:23 +0000
Labels: k8s-app=contiv-netplugin
pod-template-generation=1
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"kube-system","name":"contiv-netplugin","uid":"bd846f21-4fa4-11e7-9354-066bde8f6de8...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 50.97.198.8
Controllers: DaemonSet/contiv-netplugin
Containers:
contiv-netplugin:
Container ID: docker://aad9e782923b382cd89debea3115c2318883732f000e6068c9924a95a81ac933
Image: contiv/netplugin:1.0.3
Image ID: docker-pullable://contiv/netplugin@sha256:1d818453a71688d81072c25a149d814e4b2331fa526c3afd71aa895d79f6800a
Port:
Args:
-pkubernetes
-x
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 12 Jun 2017 19:30:17 +0000
Ready: False
Restart Count: 6
Environment:
VLAN_IF:
VTEP_IP: (v1:status.podIP)
CONTIV_ETCD: <set to the key 'cluster_store' of config map 'contiv-config'> Optional: false
CONTIV_CNI_CONFIG: <set to the key 'cni_config' of config map 'contiv-config'> Optional: false
CONTIV_CONFIG: <set to the key 'config' of config map 'contiv-config'> Optional: false
Mounts:
/etc/cni/net.d/ from etc-cni-dir (rw)
/etc/kubernetes/pki from etc-kubernetes-pki (rw)
/etc/kubernetes/ssl from etc-kubernetes-ssl (rw)
/etc/openvswitch from etc-openvswitch (rw)
/lib/modules from lib-modules (rw)
/opt/cni/bin from cni-bin-dir (rw)
/var/contiv from var-contiv (rw)
/var/run from var-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netplugin-token-gvdt9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
etc-openvswitch:
Type: HostPath (bare host directory volume)
Path: /etc/openvswitch
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
var-run:
Type: HostPath (bare host directory volume)
Path: /var/run
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
etc-kubernetes-pki:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
etc-kubernetes-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
etc-cni-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d/
contiv-netplugin-token-gvdt9:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netplugin-token-gvdt9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master=:NoSchedule
node.alpha.kubernetes.io/notReady=:Exists:NoExecute
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id 3f91be3210ec3675a4481588d2063d1ba8b6d29976417b7e5250763ecd1beb55
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id 3f91be3210ec3675a4481588d2063d1ba8b6d29976417b7e5250763ecd1beb55
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id c09fa7f09ece695e1cdca3e6358dc56915731b1bff1e1d3b9fd62dbdb171cb58
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id c09fa7f09ece695e1cdca3e6358dc56915731b1bff1e1d3b9fd62dbdb171cb58
6m 6m 1 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 10s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id 17864cd42ac6f470a19267d13e604308471a2cb0d680c84e0141cb858775b92b
6m 6m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id 17864cd42ac6f470a19267d13e604308471a2cb0d680c84e0141cb858775b92b
6m 6m 2 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 20s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
5m 5m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id afc9eb071487fe6045e23fa54c072f49066f358b4d03591c9f080b49f5b20337
5m 5m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id afc9eb071487fe6045e23fa54c072f49066f358b4d03591c9f080b49f5b20337
5m 5m 3 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 40s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
5m 5m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id 1063e2668feae8284e616962bf2af07c43c78ffd93332108e2a7959ab1795419
5m 5m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id 1063e2668feae8284e616962bf2af07c43c78ffd93332108e2a7959ab1795419
5m 3m 6 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
3m 3m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id bb57a6ceb35c7538822b4598d095143b972591095e35cc6a0ec0596f0011860a
3m 3m 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id bb57a6ceb35c7538822b4598d095143b972591095e35cc6a0ec0596f0011860a
3m 57s 13 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
6m 46s 7 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Pulled Container image "contiv/netplugin:1.0.3" already present on machine
46s 46s 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Started Started container with id aad9e782923b382cd89debea3115c2318883732f000e6068c9924a95a81ac933
46s 46s 1 kubelet, cluster2 spec.containers{contiv-netplugin} Normal Created Created container with id aad9e782923b382cd89debea3115c2318883732f000e6068c9924a95a81ac933
6m 4s 29 kubelet, cluster2 spec.containers{contiv-netplugin} Warning BackOff Back-off restarting failed container
43s 4s 4 kubelet, cluster2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "contiv-netplugin" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=contiv-netplugin pod=contiv-netplugin-nrprk_kube-system(bd866ca7-4fa4-11e7-9354-066bde8f6de8)"
root@cluster1:~/contiv/contiv-1.0.3#
@patelrit, looks like your netplugin containers are in a crash loop. Can you post the contents of /var/contiv/log/netplugin.log here?
Here is the log:
root@cluster1:~# cat /var/contiv/log/netplugin.log
time="Jun 13 01:43:39.825962264" level=error msg="Failed to connect to etcd. Err: client: etcd cluster is unavailable or misconfigured"
time="Jun 13 01:43:39.826034103" level=error msg="Error creating client etcd to url 50.97.198.5:6666. Err: client: etcd cluster is unavailable or misconfigured"
time="Jun 13 01:43:39.826044294" level=fatal msg="Error initializing cluster. Err: client: etcd cluster is unavailable or misconfigured"
@patelrit, I will need to take a look at your setup. Please ping me on the Contiv Slack and we can set up a Webex to debug further. kubectl describe pod contiv-etcd-9slfd -n kube-system and kubectl logs -n kube-system contiv-etcd-9slfd will help to debug further.
Quick update: it appears that the installer assumes Kubernetes was deployed with kubeadm, so that all the node labels are already set up correctly. In my case I did not use kubeadm but a different tool to install k8s, so the node-role.kubernetes.io/master="" label was missing and the installation failed as a result.
It would be good to document this requirement (the label), and perhaps also have the install script check for it and provide a better error message.
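For reference, the hand-applied fix on a non-kubeadm cluster could look like this (a sketch; cluster1 is the master node name from the output above):
# add the label the installer expects on the master (kubeadm sets this automatically)
kubectl label node cluster1 node-role.kubernetes.io/master=""
# verify the selector now matches, then re-run ./install/k8s/install.sh
kubectl get nodes --show-labels | grep node-role.kubernetes.io/master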
Had the same problem, and it turned out it was because Open vSwitch was already running on the host from some previous work. Stopping OVS solved it. Maybe the install script could be improved to detect and report this case.
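For the record, checking for and stopping a host-level OVS before installing looks roughly like this (the service name differs by distro: openvswitch-switch on Ubuntu/Debian, openvswitch on CentOS/RHEL):
# is an ovsdb-server / ovs-vswitchd already running outside the contiv pods?
ps -ef | grep -E 'ovs-vswitchd|ovsdb-server' | grep -v grep
# stop and disable the host service so the netplugin container can manage OVS itself
sudo systemctl stop openvswitch-switch
sudo systemctl disable openvswitch-switch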
# kubectl logs contiv-netplugin-0d336 -n kube-system
ovs-vsctl: no bridge named contivVlanBridge
ovs-vsctl: no bridge named contivVxlanBridge
Initializing OVS
The Open vSwitch database exists
Starting ovsdb-server...
2017-07-06T09:02:11Z|00001|vlog|WARN|failed to open /var/contiv/log/ovs-db.log for logging: No such file or directory
ovsdb-server: /var/run/openvswitch/ovsdb-server.pid: already running as pid 2487, aborting
I'm not sure whether this is the same problem. I used an Ansible playbook (kubespray) to install k8s and chose contiv as the network plugin; an error occurred during the installation.
Error message:
TASK [kubernetes-apps/network_plugin/contiv : Contiv | Wait for netmaster] *****
Thursday 26 July 2018 03:31:48 -0400 (0:00:08.035) 0:04:00.036 *********
FAILED - RETRYING: Contiv | Wait for netmaster (10 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (9 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (8 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (7 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (6 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (5 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (4 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (3 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (2 retries left).
FAILED - RETRYING: Contiv | Wait for netmaster (1 retries left).
fatal: [huangch]: FAILED! => {"attempts": 10, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://127.0.0.1:9999/info"}
[root@huangch kubespray]# netctl network ls
ERRO[0000] Get http://netmaster:9999/api/v1/tenants/default/: dial tcp: lookup netmaster: no such host
[root@huangch kubespray]# kubectl get nodes
NAME      STATUS    ROLES         AGE   VERSION
huangch   Ready     master,node   23h   v1.10.4
[root@huangch kubespray]# kubectl get pods -n kube-system
NAME                              READY   STATUS             RESTARTS   AGE
contiv-api-proxy-mpl4x            0/1     CrashLoopBackOff   278        23h
contiv-etcd-proxy-4hfw8           1/1     Running            0          23h
contiv-etcd-vgdp9                 0/1     CrashLoopBackOff   278        23h
contiv-netmaster-gq75n            1/1     Running            0          23h
contiv-netplugin-t8z4p            1/1     Running            0          23h
kube-apiserver-huangch            1/1     Running            0          23h
kube-controller-manager-huangch   1/1     Running            0          23h
kube-proxy-huangch                1/1     Running            0          23h
kube-scheduler-huangch            1/1     Running            0          23h
[root@huangch kubespray]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I can't find the file /var/contiv/log/netplugin.log.
[root@huangch kubespray]# kubectl logs contiv-netplugin-t8z4p -n kube-system
time="Jul 27 06:58:02.316144721" level=fatal msg="Error initializing cluster. Err: client: etcd cluster is unavailable or misconfigured"
CRITICAL : Netplugin has exited. Trying to respawn in 5s
time="Jul 27 06:58:07.494037048" level=error msg="Failed to connect to etcd. Err: client: etcd cluster is unavailable or misconfigured"
time="Jul 27 06:58:07.494119229" level=error msg="Error creating client etcd to url 127.0.0.1:6666. Err: client: etcd cluster is unavailable or misconfigured"
time="Jul 27 06:58:07.494131730" level=fatal msg="Error initializing cluster. Err: client: etcd cluster is unavailable or misconfigured"
CRITICAL : Netplugin has exited. Trying to respawn in 5s
time="Jul 27 06:58:12.660610179" level=error msg="Failed to connect to etcd. Err: client: etcd cluster is unavailable or misconfigured"
time="Jul 27 06:58:12.660659695" level=error msg="Error creating client etcd to url 127.0.0.1:6666. Err: client: etcd cluster is unavailable or misconfigured"
time="Jul 27 06:58:12.660667782" level=fatal msg="Error initializing cluster. Err: client: etcd cluster is unavailable or misconfigured"
CRITICAL : Netplugin has exited. Trying to respawn in 5s
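Since netplugin cannot reach 127.0.0.1:6666 until the local etcd is up, the crash-looping contiv-etcd pod is the first thing to look at (a sketch, using the pod names from the listing above):
# why does contiv-etcd keep restarting?
kubectl logs -n kube-system contiv-etcd-vgdp9 --previous
kubectl describe pod -n kube-system contiv-etcd-vgdp9
# confirm nothing else on the host already owns the etcd client port
sudo ss -lntp | grep ':6666'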
Pods:
[root@huangch kubespray]# kubectl describe pods -n kube-system
Name: contiv-api-proxy-mpl4x
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:48 -0400
Labels: controller-revision-hash=2912678776
k8s-app=contiv-api-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.122.208
Controlled By: DaemonSet/contiv-api-proxy
Containers:
contiv-api-proxy:
Container ID: docker://b4dc111fcf17cb3f6f404afcd4839b47ccceccf1d64fca5f32de59abeeee041f
Image: contiv/auth_proxy:1.1.7
Image ID: docker-pullable://contiv/auth_proxy@sha256:53b58b7a0279d71da654f6687b4dd841efac5186037a724deb8d91654b2a50d0
Port:
contiv-netmaster-token-p98k7:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netmaster-token-p98k7
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
Warning BackOff 1m (x6357 over 23h) kubelet, huangch Back-off restarting failed container
Name: contiv-etcd-proxy-4hfw8
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:45 -0400
Labels: controller-revision-hash=3716054185
k8s-app=contiv-etcd-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.122.208
Controlled By: DaemonSet/contiv-etcd-proxy
Containers:
contiv-etcd-proxy:
Container ID: docker://e037fe55845b0a3fc57e28f05c1fc91a3c7319115977d12656ab496da082f25f
Image: quay.io/coreos/etcd:v3.2.4
Image ID: docker-pullable://quay.io/coreos/etcd@sha256:0a582c6ca6d32f1bed74c51bb1e33a215b301e0f28683777ec6af0c2e3925588
Port:
Name: contiv-etcd-vgdp9
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:45 -0400
Labels: controller-revision-hash=3920299055
k8s-app=contiv-etcd
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.122.208
Controlled By: DaemonSet/contiv-etcd
Init Containers:
contiv-etcd-init:
Container ID: docker://5c7dd3c679ab7b61dfa4c71e5f285921e2e59092854b1438e234917f06c0db31
Image: ferest/etcd-initer:latest
Image ID: docker-pullable://ferest/etcd-initer@sha256:120360e7d581eb15fd3fdb9d682e1b690d01653545a1df05d0acbcb66fec601c
Port:
contiv-etcd-conf-dir:
Type: HostPath (bare host directory volume)
Path: /etc/contiv/etcd
HostPathType:
default-token-b9xxv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b9xxv
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
Warning BackOff 1m (x6372 over 23h) kubelet, huangch Back-off restarting failed container
Name: contiv-netmaster-gq75n
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:48 -0400
Labels: controller-revision-hash=1775865577
k8s-app=contiv-netmaster
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.122.208
Controlled By: DaemonSet/contiv-netmaster
Containers:
contiv-netmaster:
Container ID: docker://bc1b299ac99b8357702feec116372c75bce42951cac24e5a039aab80a169723e
Image: contiv/netplugin:1.1.7
Image ID: docker-pullable://contiv/netplugin@sha256:419a6320e4aaba8185f0c4ffc0fe1df2ba3f3fe55dcda4852742fee026ef8faa
Port:
contiv-netmaster-token-p98k7:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netmaster-token-p98k7
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Name: contiv-netplugin-t8z4p
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:47 -0400
Labels: controller-revision-hash=2001464756
k8s-app=contiv-netplugin
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.122.208
Controlled By: DaemonSet/contiv-netplugin
Containers:
contiv-netplugin:
Container ID: docker://8ae62d26edb82e14ac1860b6165fa35a4eaddc990bde164b1b07e15d3a216123
Image: contiv/netplugin:1.1.7
Image ID: docker-pullable://contiv/netplugin@sha256:419a6320e4aaba8185f0c4ffc0fe1df2ba3f3fe55dcda4852742fee026ef8faa
Port:
VTEP_IP: (v1:status.podIP)
CONTIV_ETCD: <set to the key 'cluster_store' of config map 'contiv-config'> Optional: false
CONTIV_CNI_CONFIG: <set to the key 'cni_config' of config map 'contiv-config'> Optional: false
CONTIV_CONFIG: <set to the key 'config' of config map 'contiv-config'> Optional: false
Mounts:
/etc/cni/net.d/ from etc-cni-dir (rw)
/etc/openvswitch from etc-openvswitch (rw)
/lib/modules from lib-modules (rw)
/opt/cni/bin from cni-bin-dir (rw)
/var/contiv from var-contiv (rw)
/var/run from var-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netplugin-token-wv7gl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
etc-openvswitch:
Type: HostPath (bare host directory volume)
Path: /etc/openvswitch
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run:
Type: HostPath (bare host directory volume)
Path: /var/run
HostPathType:
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
etc-cni-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d/
HostPathType:
contiv-netplugin-token-wv7gl:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netplugin-token-wv7gl
Optional: false
QoS Class: BestEffort
Node-Selectors:
Name: kube-apiserver-huangch
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:30:40 -0400
Labels: k8s-app=kube-apiserver
kubespray=v2
Annotations: kubernetes.io/config.hash=2af32c3393fe907d07d56e4647a20dec
kubernetes.io/config.mirror=2af32c3393fe907d07d56e4647a20dec
kubernetes.io/config.seen=2018-07-26T03:30:40.305785326-04:00
kubernetes.io/config.source=file
kubespray.apiserver-cert/serial=DDFA3428A86C8A0B
kubespray.etcd-cert/serial=C472C1DC951C4C47
Status: Running
IP: 192.168.122.208
Containers:
kube-apiserver:
Container ID: docker://a6c4d0c7a53563614f6a8ddc2b277c41d57637cc3ed4106b0ae5ef8f523f5d1f
Image: anjia0532/google-containers.hyperkube:v1.10.4
Image ID: docker-pullable://anjia0532/google-containers.hyperkube@sha256:2b44d8a1bdb323a56f08ed803eb6cd4f9b7faef5d407c9ebd14d509b8d2f5276
Port:
ssl-certs-host:
Type: HostPath (bare host directory volume)
Path: /etc/ssl
HostPathType:
etc-pki-tls:
Type: HostPath (bare host directory volume)
Path: /etc/pki/tls
HostPathType:
etc-pki-ca-trust:
Type: HostPath (bare host directory volume)
Path: /etc/pki/ca-trust
HostPathType:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/etcd/ssl
HostPathType:
QoS Class: Burstable
Node-Selectors:
Name: kube-controller-manager-huangch
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:04 -0400
Labels: k8s-app=kube-controller-manager
Annotations: kubernetes.io/config.hash=c764daa7ad6cb4efb35fe43117f45870
kubernetes.io/config.mirror=c764daa7ad6cb4efb35fe43117f45870
kubernetes.io/config.seen=2018-07-26T03:31:04.233275144-04:00
kubernetes.io/config.source=file
kubespray.controller-manager-cert/serial=DDFA3428A86C8A0D
kubespray.etcd-cert/serial=C472C1DC951C4C47
Status: Running
IP: 192.168.122.208
Containers:
kube-controller-manager:
Container ID: docker://e729a4394f285e39e13eaea9f6f64543248b4400b29dff5143cc05e70cd0544a
Image: anjia0532/google-containers.hyperkube:v1.10.4
Image ID: docker-pullable://anjia0532/google-containers.hyperkube@sha256:2b44d8a1bdb323a56f08ed803eb6cd4f9b7faef5d407c9ebd14d509b8d2f5276
Port:
etc-pki-tls:
Type: HostPath (bare host directory volume)
Path: /etc/pki/tls
HostPathType:
etc-pki-ca-trust:
Type: HostPath (bare host directory volume)
Path: /etc/pki/ca-trust
HostPathType:
etc-kube-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
HostPathType:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/kube-controller-manager-kubeconfig.yaml
HostPathType:
QoS Class: Burstable
Node-Selectors:
Name: kube-proxy-huangch
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:30:28 -0400
Labels: k8s-app=kube-proxy
Annotations: kubernetes.io/config.hash=c9c9d7b48fef8331028dc08a3ac50424
kubernetes.io/config.mirror=c9c9d7b48fef8331028dc08a3ac50424
kubernetes.io/config.seen=2018-07-26T03:30:25.680916182-04:00
kubernetes.io/config.source=file
kubespray.kube-proxy-cert/serial=DDFA3428A86C8A10
Status: Running
IP: 192.168.122.208
Containers:
kube-proxy:
Container ID: docker://f1fb71eaff010a53498666b8682a62225d99334fc24ea4073ac3f522fa892563
Image: anjia0532/google-containers.hyperkube:v1.10.4
Image ID: docker-pullable://anjia0532/google-containers.hyperkube@sha256:2b44d8a1bdb323a56f08ed803eb6cd4f9b7faef5d407c9ebd14d509b8d2f5276
Port:
etc-kube-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
HostPathType:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/kube-proxy-kubeconfig.yaml
HostPathType:
var-run-dbus:
Type: HostPath (bare host directory volume)
Path: /var/run/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors:
Name: kube-scheduler-huangch
Namespace: kube-system
Node: huangch/192.168.122.208
Start Time: Thu, 26 Jul 2018 03:31:03 -0400
Labels: k8s-app=kube-scheduler
Annotations: kubernetes.io/config.hash=470bcf69687af15648a8730c430fc2a6
kubernetes.io/config.mirror=470bcf69687af15648a8730c430fc2a6
kubernetes.io/config.seen=2018-07-26T03:31:03.084471488-04:00
kubernetes.io/config.source=file
kubespray.scheduler-cert/serial=DDFA3428A86C8A0C
Status: Running
IP: 192.168.122.208
Containers:
kube-scheduler:
Container ID: docker://7447df3c188ddc1f90ac6ac7fd12cc1c8042327554d5970c5e1d29048f5ea918
Image: anjia0532/google-containers.hyperkube:v1.10.4
Image ID: docker-pullable://anjia0532/google-containers.hyperkube@sha256:2b44d8a1bdb323a56f08ed803eb6cd4f9b7faef5d407c9ebd14d509b8d2f5276
Port:
etc-pki-tls:
Type: HostPath (bare host directory volume)
Path: /etc/pki/tls
HostPathType:
etc-pki-ca-trust:
Type: HostPath (bare host directory volume)
Path: /etc/pki/ca-trust
HostPathType:
etc-kube-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/ssl
HostPathType:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/kube-scheduler-kubeconfig.yaml
HostPathType:
QoS Class: Burstable
Node-Selectors: