openwhisk-deploy-kube
Controller cannot become ready even though CouchDB is OK
Hello, I'm trying to deploy OpenWhisk on Kubernetes with the simple Docker-based option (Mac M1), but the deployment process has not been smooth. I'm not sure what is wrong with the controller, because it keeps looping between Running (not ready) and CrashLoopBackOff. When I use kubectl exec to go into the CouchDB container, CouchDB itself seems to be usable.
Thanks in advance for any hints.
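For reference, the deployment followed the repo's Docker Desktop instructions, roughly along these lines; the release name owdev and namespace wsk match the pod names below, but the mycluster.yaml shown here is only a minimal sketch of the documented Docker-based settings, not a verbatim copy of mine:
$ cat mycluster.yaml
whisk:
  ingress:
    type: NodePort
    apiHostName: localhost
    apiHostPort: 31001
invoker:
  containerFactory:
    impl: "docker"
$ helm install owdev ./helm/openwhisk -n wsk --create-namespace -f mycluster.yaml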
Here is some relevant output:
$ kubectl get po -n wsk
NAME                                   READY   STATUS       RESTARTS       AGE
owdev-alarmprovider-778878649b-ct5x4   0/1     Init:0/1     1              16h
owdev-apigateway-7c66484b88-gp7hw      1/1     Running      1 (115m ago)   16h
owdev-controller-0                     0/1     Running      41 (10m ago)   14h
owdev-couchdb-87c76548-xrmdm           1/1     Running      1 (115m ago)   16h
owdev-gen-certs--1-n5wnt               0/1     Completed    0              16h
owdev-init-couchdb--1-cs79r            0/1     Error        0              16h
owdev-init-couchdb--1-jncgj            0/1     Error        0              16h
owdev-init-couchdb--1-rb9rx            0/1     Error        0              16h
owdev-init-couchdb--1-vx865            0/1     Completed    0              15h
owdev-install-packages--1-b8m27        0/1     Init:Error   0              16h
owdev-install-packages--1-mrxq6        0/1     Init:0/1     0              114m
owdev-invoker-0                        0/1     Init:0/1     1              16h
owdev-kafka-0                          1/1     Running      1 (115m ago)   16h
owdev-kafkaprovider-7c4dd5884b-h492n   0/1     Init:0/1     1              16h
owdev-nginx-75c7895465-r6jvz           0/1     Init:0/1     1              16h
owdev-redis-59bf4984c-xqfp6            1/1     Running      1 (115m ago)   16h
owdev-wskadmin                         1/1     Running      1 (115m ago)   16h
owdev-zookeeper-0                      1/1     Running      1 (115m ago)   16h
$ kubectl logs -n wsk owdev-controller-0 -f
[2022-03-22T03:08:03.549Z] [INFO] Slf4jLogger started
[2022-03-22T03:08:06.151Z] [WARN] Failed to attach the instrumentation because the Kamon Bundle is not present on the classpath
[2022-03-22T03:08:06.842Z] [INFO] Started the Kamon StatsD reporter
[2022-03-22T03:08:09.244Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.triggers.fires.perMinute
[2022-03-22T03:08:09.273Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.actions.sequence.maxLength
[2022-03-22T03:08:09.274Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.actions.invokes.concurrent
[2022-03-22T03:08:09.274Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.actions.invokes.perMinute
[2022-03-22T03:08:09.275Z] [INFO] [#tid_sid_unknown] [Config] environment set value for runtimes.manifest
[2022-03-22T03:08:09.276Z] [INFO] [#tid_sid_unknown] [Config] environment set value for kafka.hosts
[2022-03-22T03:08:09.277Z] [INFO] [#tid_sid_unknown] [Config] environment set value for port
[2022-03-22T03:08:19.365Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] completed0 already exists and the user can see it, skipping creation.
[2022-03-22T03:08:21.930Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] health already exists and the user can see it, skipping creation.
[2022-03-22T03:08:23.424Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] cacheInvalidation already exists and the user can see it, skipping creation.
[2022-03-22T03:08:26.352Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] events already exists and the user can see it, skipping creation.
[2022-03-22T03:08:28.447Z] [INFO] [#tid_sid_controller] [Controller] starting controller instance 0 [marker:controller_startup0_counter:19346]
[2022-03-22T03:08:39.409Z] [INFO] [#tid_sid_dispatcher] [MessageFeed] handler capacity = 128, pipeline fill at = 128, pipeline depth = 256
[2022-03-22T03:08:44.526Z] [INFO] [#tid_sid_loadbalancer] [ShardingContainerPoolBalancerState] managedFraction = 0.9, blackboxFraction = 0.1
[2022-03-22T03:08:45.976Z] [INFO] [#tid_sid_dispatcher] [MessageFeed] handler capacity = 128, pipeline fill at = 128, pipeline depth = 256
[2022-03-22T03:08:52.383Z] [INFO] [#tid_sid_loadbalancer] [WhiskAction] [GET] serving from datastore: CacheKey(whisk.system/invokerHealthTestAction0) [marker:database_cacheMiss_counter:43277]
[2022-03-22T03:08:52.954Z] [INFO] [#tid_sid_loadbalancer] [CouchDbRestStore] [GET] 'test_whisks' finding document: 'id: whisk.system/invokerHealthTestAction0' [marker:database_getDocument_start:43856]
[2022-03-22T03:08:55.266Z] [ERROR] [#tid_sid_unknown] [KafkaConsumerConnector] poll returned with failure. Recreating the consumer. Exception: java.lang.ClassCastException: java.util.stream.ReduceOps$3ReducingSink incompatible with java.util.stream.Sink
[2022-03-22T03:08:55.295Z] [INFO] [#tid_sid_unknown] [KafkaConsumerConnector] recreating consumer for 'cacheInvalidation'
[2022-03-22T03:08:55.464Z] [INFO] [#tid_sid_unknown] [KafkaConsumerConnector] old consumer closed for 'cacheInvalidation'
[2022-03-22T03:08:56.355Z] [INFO] [#tid_sid_loadbalancer] [CouchDbRestStore] [marker:database_getDocument_finish:47252:3386]
[2022-03-22T03:08:57.348Z] [ERROR] [#tid_sid_dispatcher] [MessageFeed] exception while pulling new cacheInvalidation records: java.lang.ClassCastException: java.util.stream.ReduceOps$3ReducingSink incompatible with java.util.stream.Sink
[2022-03-22T03:08:57.465Z] [INFO] [#tid_sid_loadbalancer] [WhiskAction] write initiated on existing cache entry, invalidating CacheKey(whisk.system/invokerHealthTestAction0), tid sid_loadbalancer, state WriteInProgress
[2022-03-22T03:08:57.546Z] [INFO] [#tid_sid_loadbalancer] [CouchDbRestStore] [PUT] 'test_whisks' saving document: 'id: whisk.system/invokerHealthTestAction0, rev: 10-b15ea5e89647acb0b2a6286089606b06' [marker:database_saveDocument_start:48447]
[2022-03-22T03:08:58.102Z] [INFO] [#tid_sid_loadbalancer] [CouchDbRestStore] [marker:database_saveDocument_finish:49001:551]
[2022-03-22T03:08:58.157Z] [INFO] [#tid_sid_loadbalancer] [WhiskAction] write all done, caching CacheKey(whisk.system/invokerHealthTestAction0) Cached
[2022-03-22T03:08:58.170Z] [INFO] [#tid_sid_loadbalancer] [InvokerPool] test action for invoker health now exists
[2022-03-22T03:09:02.210Z] [INFO] [#tid_sid_controller] [Controller] loadbalancer initialized: ShardingContainerPoolBalancer
[2022-03-22T03:09:02.277Z] [INFO] [#tid_sid_dispatcher] [MessageFeed] handler capacity = 128, pipeline fill at = 128, pipeline depth = 256
[2022-03-22T03:09:04.465Z] [INFO] [#tid_sid_controller] [KindRestrictor] all kinds are allowed, the white-list is not specified
*** Invalid JIT return address 00000000C164E248 in 0000000001845A08
03:09:06.798 0x1845700 j9vm.249 * ** ASSERTION FAILED ** at swalk.c:1565: ((0 ))
JVMDUMP039I Processing dump event "traceassert", detail "" at 2022/03/22 03:09:06 - please wait.
JVMDUMP032I JVM requested System dump using '//core.20220322.030906.1.0001.dmp' in response to an event
*** Invalid JIT return address 00000000FFE6DEC8 in 0000000001B3D508
JVMDUMP010I System dump written to //core.20220322.030906.1.0001.dmp
0000000001845700: Object neither in heap nor stack-allocated in thread controller-actor-system-dispatchers.kafka-dispatcher-12
0000000001845700: O-Slot=0000000001866390
0000000001845700: O-Slot value=0000000010000000
0000000001845700: PC=0000004016FA02DF
0000000001845700: framesWalked=6
0000000001845700: arg0EA=0000000001866390
0000000001845700: walkSP=0000000001866280
0000000001845700: literals=0000000000000010
0000000001845700: jitInfo=000000407A91C3F8
0000000001845700: method=0000000001269B08 (org/apache/kafka/common/metrics/Sensor.record(DJZ)V) (JIT)
0000000001845700: stack=000000000185FDF8-0000000001866E80
0000000001845700: Object neither in heap nor stack-allocated in thread controller-actor-system-dispatchers.kafka-dispatcher-12
0000000001845700: O-Slot=0000000001866308
0000000001845700: O-Slot value=0000000100000001
0000000001845700: PC=0000004016FA02DF
0000000001845700: framesWalked=6
0000000001845700: arg0EA=0000000001866390
0000000001845700: walkSP=0000000001866280
0000000001845700: literals=0000000000000010
0000000001845700: jitInfo=000000407A91C3F8
0000000001845700: method=0000000001269B08 (org/apache/kafka/common/metrics/Sensor.record(DJZ)V) (JIT)
0000000001845700: stack=000000000185FDF8-0000000001866E80
JVMDUMP032I JVM requested Java dump using '//javacore.20220322.030906.1.0002.txt' in response to an event
[2022-03-22T03:09:12.379Z] [ERROR] [Consumer clientId=consumer-health, groupId=health0] Heartbeat thread failed due to unexpected error
java.lang.NullPointerException: null
at org.apache.kafka.common.metrics.stats.SampledStat$Sample.isComplete(SampledStat.java:129)
at org.apache.kafka.common.metrics.stats.SampledStat.record(SampledStat.java:49)
at org.apache.kafka.common.metrics.Sensor.record(Sensor.java:188)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:303)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1210)
[2022-03-22T03:09:12.503Z] [ERROR] [#tid_sid_unknown] [KafkaConsumerConnector] poll returned with failure. Recreating the consumer. Exception: java.lang.NullPointerException
[2022-03-22T03:09:12.524Z] [INFO] [#tid_sid_unknown] [KafkaConsumerConnector] recreating consumer for 'health'
[2022-03-22T03:09:12.781Z] [INFO] [#tid_sid_unknown] [KafkaConsumerConnector] old consumer closed for 'health'
JVMDUMP010I Java dump written to //javacore.20220322.030906.1.0002.txt
0000000001845700: Object neither in heap nor stack-allocated in thread controller-actor-system-dispatchers.kafka-dispatcher-12
0000000001845700: O-Slot=0000000001866390
0000000001845700: O-Slot value=0000000010000000
0000000001845700: PC=0000004016FA02DF
0000000001845700: framesWalked=6
0000000001845700: arg0EA=0000000001866390
0000000001845700: walkSP=0000000001866280
0000000001845700: literals=0000000000000010
0000000001845700: jitInfo=000000407A91C3F8
0000000001845700: method=0000000001269B08 (org/apache/kafka/common/metrics/Sensor.record(DJZ)V) (JIT)
0000000001845700: stack=000000000185FDF8-0000000001866E80
0000000001845700: Object neither in heap nor stack-allocated in thread controller-actor-system-dispatchers.kafka-dispatcher-12
0000000001845700: O-Slot=0000000001866308
0000000001845700: O-Slot value=0000000100000001
0000000001845700: PC=0000004016FA02DF
0000000001845700: framesWalked=6
0000000001845700: arg0EA=0000000001866390
0000000001845700: walkSP=0000000001866280
0000000001845700: literals=0000000000000010
0000000001845700: jitInfo=000000407A91C3F8
0000000001845700: method=0000000001269B08 (org/apache/kafka/common/metrics/Sensor.record(DJZ)V) (JIT)
0000000001845700: stack=000000000185FDF8-0000000001866E80
JVMDUMP032I JVM requested Snap dump using '//Snap.20220322.030906.1.0003.trc' in response to an event
JVMDUMP010I Snap dump written to //Snap.20220322.030906.1.0003.trc
JVMDUMP013I Processed dump event "traceassert", detail "".
To get more info about the controller, I increased the periodSeconds of its readiness and liveness probes; a sketch of that change follows.
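The probes belong to the controller StatefulSet, so a patch along these lines would set both probe periods to the 360s value visible in the describe output below (this is just one illustrative way to make the edit, not necessarily the exact command I ran):
$ kubectl -n wsk patch statefulset owdev-controller --type='json' -p='[
    {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/periodSeconds", "value": 360},
    {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/periodSeconds", "value": 360}
  ]'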
$ kubectl describe -n wsk po owdev-controller-0
Name: owdev-controller-0
Namespace: wsk
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 21 Mar 2022 21:28:21 +0800
Labels: app=owdev-openwhisk
chart=openwhisk-1.0.0
controller-revision-hash=owdev-controller-597cd54d75
heritage=Helm
name=owdev-controller
release=owdev
statefulset.kubernetes.io/pod-name=owdev-controller-0
Annotations: <none>
Status: Running
IP: 10.1.1.113
IPs:
IP: 10.1.1.113
Controlled By: StatefulSet/owdev-controller
Init Containers:
wait-for-kafka:
Container ID: docker://148badc9170f80265195fb99e9710b0a2519e7b7c2810d2d453f61f1396e00da
Image: busybox:latest
Image ID: docker-pullable://busybox@sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a
Port: <none>
Host Port: <none>
Command:
sh
-c
result=1; until [ $result -eq 0 ]; do OK=$(echo ruok | nc -w 1 owdev-zookeeper-0.owdev-zookeeper.wsk.svc.cluster.local 2181); if [ "$OK" == "imok" ]; then result=0; echo "zookeeper returned imok!"; else echo waiting for zookeeper to be ready; sleep 1; fi done; echo "Zookeeper is up; will wait for 10 seconds to give kafka time to initialize"; sleep 10;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 22 Mar 2022 10:15:56 +0800
Finished: Tue, 22 Mar 2022 10:16:36 +0800
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9p9sf (ro)
wait-for-couchdb:
Container ID: docker://134686e62def659941ae3f544c66f973831242139cae4aa8a9e564baaf758e4d
Image: busybox:latest
Image ID: docker-pullable://busybox@sha256:caa382c432891547782ce7140fb3b7304613d3b0438834dce1cad68896ab110a
Port: <none>
Host Port: <none>
Command:
sh
-c
while true; do echo 'checking CouchDB readiness'; wget -T 5 --spider $READINESS_URL --header="Authorization: Basic d2hpc2tfYWRtaW46c29tZV9wYXNzdzByZA=="; result=$?; if [ $result -eq 0 ]; then echo 'Success: CouchDB is ready!'; break; fi; echo '...not ready yet; sleeping 3 seconds before retry'; sleep 3; done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 22 Mar 2022 10:16:37 +0800
Finished: Tue, 22 Mar 2022 10:16:37 +0800
Ready: True
Restart Count: 0
Environment:
READINESS_URL: http://owdev-couchdb.wsk.svc.cluster.local:5984/ow_kube_couchdb_initialized_marker
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9p9sf (ro)
Containers:
controller:
Container ID: docker://a48cf6934b802b6e15d5b6d287cc2ca096760aede14cc79a47ad43cd26473272
Image: openwhisk/controller:1.0.0
Image ID: docker-pullable://openwhisk/controller@sha256:36e8d65dfc7a1a37075b22d33f082ac1917b41d06b0935b68de6e0ecf677827f
Ports: 8080/TCP, 2552/TCP, 19999/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/bash
-c
/init.sh `hostname | awk -F '-' '{print $NF}'`
State: Running
Started: Tue, 22 Mar 2022 12:05:16 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Tue, 22 Mar 2022 11:58:21 +0800
Finished: Tue, 22 Mar 2022 12:00:07 +0800
Ready: False
Restart Count: 41
Liveness: http-get http://:8080/ping delay=10s timeout=1s period=360s #success=1 #failure=3
Readiness: http-get http://:8080/ping delay=10s timeout=1s period=360s #success=1 #failure=3
Environment:
PORT: 8080
TZ: UTC
CONFIG_whisk_info_date: <set to the key 'whisk_info_date' of config map 'owdev-whisk.config'> Optional: false
CONFIG_whisk_info_buildNo: <set to the key 'whisk_info_buildNo' of config map 'owdev-whisk.config'> Optional: false
JAVA_OPTS: -Xmx1024M
CONTROLLER_OPTS:
RUNTIMES_MANIFEST: {
.... ...
},
"blackboxes": [
{
"prefix": "openwhisk",
"name": "dockerskeleton",
"tag": "1.14.0"
}
]
}
LIMITS_ACTIONS_INVOKES_PERMINUTE: 60
LIMITS_ACTIONS_INVOKES_CONCURRENT: 30
LIMITS_TRIGGERS_FIRES_PERMINUTE: 60
LIMITS_ACTIONS_SEQUENCE_MAXLENGTH: 50
CONFIG_whisk_timeLimit_min: 100ms
CONFIG_whisk_timeLimit_max: 5m
CONFIG_whisk_timeLimit_std: 1m
CONFIG_whisk_memory_min: 128m
CONFIG_whisk_memory_max: 512m
CONFIG_whisk_memory_std: 256m
CONFIG_whisk_concurrencyLimit_min: 1
CONFIG_whisk_concurrencyLimit_max: 1
CONFIG_whisk_concurrencyLimit_std: 1
CONFIG_whisk_logLimit_min: 0m
CONFIG_whisk_logLimit_max: 10m
CONFIG_whisk_logLimit_std: 10m
CONFIG_whisk_activation_payload_max: 1048576
CONFIG_whisk_loadbalancer_blackboxFraction: 10%
CONFIG_whisk_loadbalancer_timeoutFactor: 2
KAFKA_HOSTS: owdev-kafka-0.owdev-kafka.wsk.svc.cluster.local:9092
CONFIG_whisk_kafka_replicationFactor:
CONFIG_whisk_kafka_topics_cacheInvalidation_retentionBytes:
CONFIG_whisk_kafka_topics_cacheInvalidation_retentionMs:
CONFIG_whisk_kafka_topics_cacheInvalidation_segmentBytes:
CONFIG_whisk_kafka_topics_completed_retentionBytes:
CONFIG_whisk_kafka_topics_completed_retentionMs:
CONFIG_whisk_kafka_topics_completed_segmentBytes:
CONFIG_whisk_kafka_topics_events_retentionBytes:
CONFIG_whisk_kafka_topics_events_retentionMs:
CONFIG_whisk_kafka_topics_events_segmentBytes:
CONFIG_whisk_kafka_topics_health_retentionBytes:
CONFIG_whisk_kafka_topics_health_retentionMs:
CONFIG_whisk_kafka_topics_health_segmentBytes:
CONFIG_whisk_kafka_topics_invoker_retentionBytes:
CONFIG_whisk_kafka_topics_invoker_retentionMs:
CONFIG_whisk_kafka_topics_invoker_segmentBytes:
CONFIG_whisk_couchdb_username: <set to the key 'db_username' in secret 'owdev-db.auth'> Optional: false
CONFIG_whisk_couchdb_password: <set to the key 'db_password' in secret 'owdev-db.auth'> Optional: false
CONFIG_whisk_couchdb_port: <set to the key 'db_port' of config map 'owdev-db.config'> Optional: false
CONFIG_whisk_couchdb_protocol: <set to the key 'db_protocol' of config map 'owdev-db.config'> Optional: false
CONFIG_whisk_couchdb_host: owdev-couchdb.wsk.svc.cluster.local
CONFIG_whisk_couchdb_provider: <set to the key 'db_provider' of config map 'owdev-db.config'> Optional: false
CONFIG_whisk_couchdb_databases_WhiskActivation: <set to the key 'db_whisk_activations' of config map 'owdev-db.config'> Optional: false
CONFIG_whisk_couchdb_databases_WhiskEntity: <set to the key 'db_whisk_actions' of config map 'owdev-db.config'> Optional: false
CONFIG_whisk_couchdb_databases_WhiskAuth: <set to the key 'db_whisk_auths' of config map 'owdev-db.config'> Optional: false
CONTROLLER_INSTANCES: 1
CONFIG_logback_log_level: INFO
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9p9sf (ro)
Conditions:
Type              Status
Initialized       True
Ready             False
ContainersReady   False
PodScheduled      True
Volumes:
kube-api-access-9p9sf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From     Message
----     ------     ----                  ----     -------
Warning  BackOff    16m (x254 over 119m)  kubelet  Back-off restarting failed container
Warning  Unhealthy  61s (x36 over 117m)   kubelet  Readiness probe failed: Get "http://10.1.1.113:8080/ping": dial tcp 10.1.1.113:8080: connect: connection refused
$ kubectl describe pod -n wsk owdev-couchdb-87c76548-xrmdm
Name: owdev-couchdb-87c76548-xrmdm
Namespace: wsk
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 21 Mar 2022 19:27:42 +0800
Labels: app=owdev-openwhisk
chart=openwhisk-1.0.0
heritage=Helm
name=owdev-couchdb
pod-template-hash=87c76548
release=owdev
Annotations: <none>
Status: Running
IP: 10.1.1.107
IPs:
IP: 10.1.1.107
Controlled By: ReplicaSet/owdev-couchdb-87c76548
Containers:
couchdb:
Container ID: docker://9899a6aea484b6bcd376c52b0351194d7e07f64957b00e44ff402dbfe8c63a68
Image: apache/couchdb:2.3
Image ID: docker-pullable://apache/couchdb@sha256:9f895c8ae371cb895541e53100e039ac6ae5d30f6f0b199e8470d81d523537ad
Port: 5984/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 22 Mar 2022 10:15:54 +0800
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 21 Mar 2022 19:27:49 +0800
Finished: Tue, 22 Mar 2022 10:15:41 +0800
Ready: True
Restart Count: 1
Environment:
COUCHDB_USER: <set to the key 'db_username' in secret 'owdev-db.auth'> Optional: false
COUCHDB_PASSWORD: <set to the key 'db_password' in secret 'owdev-db.auth'> Optional: false
NODENAME: couchdb0
Mounts:
/opt/couchdb/data from database-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t9ghx (ro)
Conditions:
Type              Status
Initialized       True
Ready             True
ContainersReady   True
PodScheduled      True
Volumes:
database-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: owdev-couchdb-pvc
ReadOnly: false
kube-api-access-t9ghx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Here is some interactive output from using kubectl exec to go into the CouchDB pod.
$ kubectl exec -n wsk -it owdev-couchdb-87c76548-xrmdm /bin/bash
root@owdev-couchdb-87c76548-xrmdm:/# curl http://127.0.0.1:5984/
{"couchdb":"Welcome","version":"2.3.1","git_sha":"c298091a4","uuid":"d482341852664dea41f463b538e04b61","features":["pluggable-storage-engines","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
root@owdev-couchdb-87c76548-xrmdm:/# curl http://127.0.0.1:5984/_all_dbs
["_users","ow_kube_couchdb_initialized_marker","test_activations","test_subjects","test_whisks"]