monocular-monocular-api CrashLoopBackOff
I installed nginx-ingress and Monocular using the commands below:
helm install --namespace=kube-system --name nginx-ingress stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true
helm install --namespace=kube-system --name monocular --set mongodb.persistence.enabled=false monocular/monocular
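For reference, Helm can also render a chart's templates without installing anything, which makes it easy to eyeball the generated manifests up front (the release name here is arbitrary):
helm install --dry-run --debug --namespace=kube-system --name monocular-dry --set mongodb.persistence.enabled=false monocular/monocular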
Helm repo list:
[root@k8s-master tmp]# helm repo list
NAME        URL
stable      https://kubernetes-charts.storage.googleapis.com
local       http://127.0.0.1:8879/charts
monocular   https://kubernetes-helm.github.io/monocular
Helm version:
[root@k8s-master tmp]# helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
[root@k8s-master tmp]#
But the API pods are not running normally:
[root@k8s-master tmp]# kubectl get pods -n kube-system
NAME                                             READY   STATUS             RESTARTS   AGE
calico-etcd-ndr9k                                1/1     Running            0          1h
calico-kube-controllers-5d74847676-bg2gn         1/1     Running            0          1h
calico-node-774j4                                2/2     Running            0          1h
calico-node-gtn8p                                2/2     Running            0          1h
etcd-k8s-master                                  1/1     Running            0          1h
kube-apiserver-k8s-master                        1/1     Running            0          1h
kube-controller-manager-k8s-master               1/1     Running            0          1h
kube-dns-86f4d74b45-62qqs                        3/3     Running            0          1h
kube-proxy-p4nc6                                 1/1     Running            0          1h
kube-proxy-qtr9l                                 1/1     Running            0          1h
kube-scheduler-k8s-master                        1/1     Running            0          1h
monocular-mongodb-6558fbbbd8-x7tcp               1/1     Running            0          26m
monocular-monocular-api-74577b849b-6tvrh         0/1     CrashLoopBackOff   9          26m
monocular-monocular-api-74577b849b-qjtws         0/1     CrashLoopBackOff   9          26m
monocular-monocular-prerender-7948c67fdf-8bj2f   1/1     Running            0          26m
monocular-monocular-ui-5767d4447f-dpgd7          1/1     Running            0          26m
monocular-monocular-ui-5767d4447f-mxhrl          1/1     Running            0          26m
nginx-ingress-controller-7cc6f879d6-5hh7j        1/1     Running            0          27m
nginx-ingress-default-backend-956f8bbff-d55xh    1/1     Running            0          27m
tiller-deploy-f5597467b-6pqn6                    1/1     Running            0          39m
[root@k8s-master tmp]#
Detailed info about one of the failing pods:
[root@k8s-master tmp]# kubectl describe pods monocular-monocular-api-74577b849b-6tvrh -n kube-system
Name: monocular-monocular-api-74577b849b-6tvrh
Namespace: kube-system
Node: k8s-master/172.20.151.2
Start Time: Tue, 08 May 2018 12:45:32 +0800
Labels: app=monocular-monocular-api
pod-template-hash=3013364056
Annotations: checksum/config=9553fcc7aff59375c5cced1d31ba671b2d4c7ae9117f0e61c20dd12a1b124f5a
Status: Running
IP: 192.168.235.201
Controlled By: ReplicaSet/monocular-monocular-api-74577b849b
Containers:
monocular:
Container ID: docker://9620d7f4adc64f0e1579aaa055f020422a7ca0d997429f3dd945055b078fda36
Image: bitnami/monocular-api:v0.6.2
Image ID: docker-pullable://docker.io/bitnami/monocular-api@sha256:c67707ca9113fa7f25fbeb37fa04afea3ebd767674858a6663f06e073caa3de7
Port: 8081/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 08 May 2018 13:12:25 +0800
Finished: Tue, 08 May 2018 13:12:27 +0800
Ready: False
Restart Count: 10
Limits:
cpu: 100m
memory: 128Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:8081/healthz delay=180s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
Environment:
MONOCULAR_HOME: /monocular
Mounts:
/monocular from cache (rw)
/monocular/config from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vgq57 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: monocular-monocular-api-config
Optional: false
cache:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-vgq57:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vgq57
Optional: false
QoS Class: Guaranteed
Node-Selectors:  <none>
Events:
  Type     Reason                 Age                 From                 Message
  ----     ------                 ----                ----                 -------
  Normal   Scheduled              28m                 default-scheduler    Successfully assigned monocular-monocular-api-74577b849b-6tvrh to k8s-master
  Normal   SuccessfulMountVolume  28m                 kubelet, k8s-master  MountVolume.SetUp succeeded for volume "cache"
  Normal   SuccessfulMountVolume  28m                 kubelet, k8s-master  MountVolume.SetUp succeeded for volume "config"
  Normal   SuccessfulMountVolume  28m                 kubelet, k8s-master  MountVolume.SetUp succeeded for volume "default-token-vgq57"
  Normal   Pulling                27m (x4 over 28m)   kubelet, k8s-master  pulling image "bitnami/monocular-api:v0.6.2"
  Normal   Pulled                 27m (x4 over 28m)   kubelet, k8s-master  Successfully pulled image "bitnami/monocular-api:v0.6.2"
  Normal   Created                27m (x4 over 28m)   kubelet, k8s-master  Created container
  Normal   Started                27m (x4 over 28m)   kubelet, k8s-master  Started container
  Warning  BackOff                3m (x116 over 28m)  kubelet, k8s-master  Back-off restarting failed container
[root@k8s-master tmp]#
The following are the logs of the monocular-api pod:
[root@k8s-master tmp]# kubectl logs -f monocular-monocular-api-74577b849b-6tvrh -n kube-system
time="2018-05-08T05:12:27Z" level=info msg="Configuration bootstrap init" configFile="/monocular/config/monocular.yaml"
time="2018-05-08T05:12:27Z" level=info msg="Configuration file found!"
time="2018-05-08T05:12:27Z" level=info msg="Loading CORS from config file"
time="2018-05-08T05:12:27Z" level=fatal msg="unable to load configuration" error="yaml: line 15: mapping values are not allowed in this context"
[root@k8s-master tmp]#
I am not sure whether I did something wrong or whether this is a version problem. The same steps have worked before, so I'm confused.
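Since the fatal error points at /monocular/config/monocular.yaml, one way to dig further is to dump the rendered config the pod is actually being fed. The ConfigMap name comes from the describe output above; either of these (the second is Helm 2 syntax) shows the generated YAML:
kubectl get configmap monocular-monocular-api-config -n kube-system -o yaml
helm get manifest monocular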
@LookingLMH it must be a Helm template problem. Please check with the command below:
kubectl edit configmap wizened-leopard-monocular-api-config
you will find that:
tillerNamespace: kube-systemmongodb:
url: wizened-leopard-mongodb:27017
database: monocular
which should be:
tillerNamespace: kube-system
mongodb:
url: wizened-leopard-mongodb:27017
database: monocular
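For anyone wondering how the two keys end up fused onto one line: in Go templates (which Helm uses), a trailing "-}}" trims the whitespace after the action, including the newline. A minimal sketch that reproduces the broken output (not the actual chart template):
tillerNamespace: {{ .Release.Namespace }}
{{- if .Values.mongodb -}}
mongodb:
  url: {{ .Release.Name }}-mongodb:27017
  database: monocular
{{- end }}
The "-}}" on the if line eats the newline before "mongodb:", rendering "tillerNamespace: kube-systemmongodb:"; writing it as "{{- if .Values.mongodb }}" keeps the newline and produces valid YAML.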
@somedayiamold strange, I'm not seeing that with the latest chart, my config is properly formatted.
@prydonius please check if below is the root cause: https://github.com/kubernetes-helm/monocular/issues/441 https://github.com/kubernetes-helm/monocular/pull/442
After helm install monocular/monocular, the monocular-api pods never run healthy. They loop through:
mn-monocular-api-97b6466d8-hwwqg 0/1 Running 6 7m
mn-monocular-api-97b6466d8-hwwqg 0/1 OOMKilled 6 7m
mn-monocular-api-97b6466d8-hwwqg 0/1 CrashLoopBackOff 6 7m
mn-monocular-api-97b6466d8-7k4qv 0/1 Running 6 8m
mn-monocular-api-97b6466d8-7k4qv 0/1 OOMKilled 6 8m
mn-monocular-api-97b6466d8-7k4qv 0/1 CrashLoopBackOff 6 8m
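OOMKilled means the container was killed for exceeding its memory limit, which is 128Mi by default per the Limits block in the describe output earlier in this thread. The effective limit can be double-checked with (add -n if the release is not in the default namespace):
kubectl get pod mn-monocular-api-97b6466d8-hwwqg -o jsonpath='{.spec.containers[0].resources.limits}'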
Container log contains errors like:
E goroutine 2908 [chan send]:
E github.com/kubernetes-helm/monocular/src/api/data/cache.processChartMetadata(0xc421e77140, 0xc4201b11a0, 0x30, 0xc420db8c60)
E /go/src/github.com/kubernetes-helm/monocular/src/api/data/cache/cache.go:288 +0x49
E created by github.com/kubernetes-helm/monocular/src/api/data/cache.(*cachedCharts).Refresh
E /go/src/github.com/kubernetes-helm/monocular/src/api/data/cache/cache.go:258 +0x570
and:
E goroutine 3061 [select]:
E net/http.(*persistConn).writeLoop(0xc420259200)
E /usr/local/go/src/net/http/transport.go:1759 +0x165
E created by net/http.(*Transport).dialConn
E /usr/local/go/src/net/http/transport.go:1187 +0xa53
E
E goroutine 3108 [select]:
E net.(*Resolver).LookupIPAddr(0x2384a60, 0x2317a20, 0xc42017ae40, 0xc42126e9a0, 0x19, 0x0, 0x0, 0x0, 0x0, 0x0)
E /usr/local/go/src/net/lookup.go:196 +0x52b
E net.(*Resolver).internetAddrList(0x2384a60, 0x2317a20, 0xc42017ae40, 0x183f89e, 0x3, 0xc42126e9a0, 0x1d, 0x0, 0x0, 0x0, ...)
E /usr/local/go/src/net/ipsock.go:293 +0x644
E net.(*Resolver).resolveAddrList(0x2384a60, 0x2317a20, 0xc42017ae40, 0x183ff52, 0x4, 0x183f89e, 0x3, 0xc42126e9a0, 0x1d, 0x0, ...)
E /usr/local/go/src/net/dial.go:193 +0x594
E net.(*Dialer).DialContext(0xc420046120, 0x23179e0, 0xc42001a038, 0x183f89e, 0x3, 0xc42126e9a0, 0x1d, 0x0, 0x0, 0x0, ...)
E /usr/local/go/src/net/dial.go:375 +0x248
E net.(*Dialer).DialContext-fm(0x23179e0, 0xc42001a038, 0x183f89e, 0x3, 0xc42126e9a0, 0x1d, 0x403859, 0x60, 0x619869, 0xc42001a038)
E /usr/local/go/src/net/http/transport.go:46 +0x73
E net/http.(*Transport).dial(0x2379d20, 0x23179e0, 0xc42001a038, 0x183f89e, 0x3, 0xc42126e9a0, 0x1d, 0xc4215dbe20, 0xc420db7d00, 0x80000000001, ...)
E /usr/local/go/src/net/http/transport.go:884 +0x223
E net/http.(*Transport).dialConn(0x2379d20, 0x23179e0, 0xc42001a038, 0x0, 0xc421afb960, 0x5, 0xc42126e9a0, 0x1d, 0x0, 0x0, ...)
E /usr/local/go/src/net/http/transport.go:1060 +0x1d62
E net/http.(*Transport).getConn.func4(0x2379d20, 0x23179e0, 0xc42001a038, 0xc421fac9c0, 0xc4206b8600)
E /usr/local/go/src/net/http/transport.go:943 +0x78
E created by net/http.(*Transport).getConn
E /usr/local/go/src/net/http/transport.go:942 +0x393
E
E goroutine 3123 [select]:
E net/http.setRequestCancel.func3(0x0, 0xc4202f2090, 0xc421fb3d80, 0xc42136475c, 0xc4206b9680)
E /usr/local/go/src/net/http/client.go:320 +0x118
E created by net/http.setRequestCancel
E /usr/local/go/src/net/http/client.go:319 +0x2bf
@biancajiang see https://github.com/kubernetes-helm/monocular/issues/473 for related issue and fix
Thanks a bunch @bzon! Increasing the API pod's CPU and memory limits solved my problem:
helm install monocular/monocular --set api.resources.limits.memory=526Mi,api.resources.limits.cpu=200m
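Equivalently, the same overrides can go in a values file using the chart's api.resources keys (a sketch assuming the same values as the --set flags above):
# values.yaml
api:
  resources:
    limits:
      memory: 526Mi
      cpu: 200m
Then install with:
helm install monocular/monocular -f values.yaml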