Unable to connect via `ysqlsh`

emchristiansen opened this issue 2 years ago

After following the instructions here, I have what appears to be a functioning YugabyteDB cluster, but I'm not able to connect using the command `kubectl exec --namespace yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo`.

If I try it, I get this error:

*(screenshot: `ysqlsh` connection error output)*
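
To narrow it down, here are two variants that should behave differently depending on whether this is a DNS problem or a YSQL problem (sketches only, I haven't verified them on this cluster yet):

```sh
# Variant 1: bypass DNS entirely. YSQL is bound to 0.0.0.0:5433 per the
# tserver flags (see the pod description below), so connecting to localhost
# from inside the pod should succeed if the server itself is healthy.
kubectl exec --namespace yb-demo -it yb-tserver-0 -- \
  /home/yugabyte/bin/ysqlsh -h localhost -p 5433

# Variant 2: use the fully qualified service name, in case the short form
# isn't covered by the pod's DNS search path.
kubectl exec --namespace yb-demo -it yb-tserver-0 -- \
  /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo.svc.cluster.local
```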

Here are the running pods:

*(screenshot: running pods in the `yb-demo` namespace)*
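
For anyone who can't see the image, the listing is just the standard pod query; everything in the namespace shows as Running:

```sh
kubectl get pods --namespace yb-demo
```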

Here's the output of `kubectl describe pod yb-tserver-0 -n yb-demo`:

```text
Name:             yb-tserver-0
Namespace:        yb-demo
Priority:         0
Service Account:  default
Node:             b8615d15-a0af-4d0f-9c60-455880de8e76/192.168.222.167
Start Time:       Thu, 09 Feb 2023 22:14:16 +0000
Labels:           app=yb-tserver
                  chart=yugabyte
                  component=yugabytedb
                  controller-revision-hash=yb-tserver-69d95c5685
                  heritage=Helm
                  release=yb-demo
                  statefulset.kubernetes.io/pod-name=yb-tserver-0
Annotations:      cni.projectcalico.org/containerID: f893b830e319234281af0296b433c4af7e29f1ddb367d9cdd23971824d786243
                  cni.projectcalico.org/podIP: 10.244.38.135/32
                  cni.projectcalico.org/podIPs: 10.244.38.135/32
Status:           Running
IP:               10.244.38.135
IPs:
  IP:           10.244.38.135
Controlled By:  StatefulSet/yb-tserver
Containers:
  yb-tserver:
    Container ID:  containerd://d8694c5d623ba3539ad0ab0513e2ab8c643d580f36c7ae3b1f4c7e7ace5d7f8b
    Image:         yugabytedb/yugabyte:2.17.1.0-b439
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:ed09f14588cb8cda772ec344dc1e5beb19aeed710fca41600f3a79a2c19773c0
    Ports:         9000/TCP, 12000/TCP, 11000/TCP, 13000/TCP, 9100/TCP, 6379/TCP, 9042/TCP, 5433/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      touch "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local" \
          --port="9100"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local:9100" \
          --port="9100"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0" \
          --port="9000"
      fi && \
      if [[ -f /home/yugabyte/tools/k8s_parent.py ]]; then
        k8s_parent="/home/yugabyte/tools/k8s_parent.py"
      else
        k8s_parent=""
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local" \
          --port="9042"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0:5433" \
          --port="5433"
      fi && \
      exec ${k8s_parent} /home/yugabyte/bin/yb-tserver \
        --fs_data_dirs=/mnt/disk0,/mnt/disk1 \
        --tserver_master_addrs=yb-master-0.yb-masters.$(NAMESPACE).svc.cluster.local:7100,yb-master-1.yb-masters.$(NAMESPACE).svc.cluster.local:7100,yb-master-2.yb-masters.$(NAMESPACE).svc.cluster.local:7100 \
        --metric_node_name=$(HOSTNAME) \
        --memory_limit_hard_bytes=3649044480 \
        --stderrthreshold=0 \
        --num_cpus=2 \
        --undefok=num_cpus,enable_ysql \
        --use_node_hostname_for_local_tserver=true \
        --rpc_bind_addresses=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local \
        --server_broadcast_addresses=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local:9100 \
        --webserver_interface=0.0.0.0 \
        --enable_ysql=true \
        --pgsql_proxy_bind_address=0.0.0.0:5433 \
        --cql_proxy_bind_address=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local
      
    State:          Running
      Started:      Thu, 09 Feb 2023 22:14:45 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:     2
      memory:  4Gi
    Liveness:  exec [bash -c touch "/mnt/disk0/disk.check" "/mnt/disk1/disk.check"] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_IP:                  (v1:status.podIP)
      HOSTNAME:               yb-tserver-0 (v1:metadata.name)
      NAMESPACE:              yb-demo (v1:metadata.namespace)
      YBDEVOPS_CORECOPY_DIR:  /mnt/disk0/cores
    Mounts:
      /mnt/disk0 from datadir0 (rw)
      /mnt/disk1 from datadir1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6cmn (ro)
  yb-cleanup:
    Container ID:  containerd://c43c77eefdc845884aac8bb560bf1ea180ebf932b50ec6d57856e66c806642d3
    Image:         yugabytedb/yugabyte:2.17.1.0-b439
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:ed09f14588cb8cda772ec344dc1e5beb19aeed710fca41600f3a79a2c19773c0
    Port:          <none>
    Host Port:     <none>
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      while true; do
        sleep 3600;
        /home/yugabyte/scripts/log_cleanup.sh;
      done
      
    State:          Running
      Started:      Thu, 09 Feb 2023 22:14:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      USER:  yugabyte
    Mounts:
      /home/yugabyte/ from datadir0 (rw,path="yb-data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6cmn (ro)
      /var/yugabyte/cores from datadir0 (rw,path="cores")
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  datadir1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir1-yb-tserver-0
    ReadOnly:   false
  datadir0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir0-yb-tserver-0
    ReadOnly:   false
  kube-api-access-x6cmn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  23m   default-scheduler  Successfully assigned yb-demo/yb-tserver-0 to b8615d15-a0af-4d0f-9c60-455880de8e76
  Normal  Pulling    23m   kubelet            Pulling image "yugabytedb/yugabyte:2.17.1.0-b439"
  Normal  Pulled     22m   kubelet            Successfully pulled image "yugabytedb/yugabyte:2.17.1.0-b439" in 25.578555117s (25.578633289s including waiting)
  Normal  Created    22m   kubelet            Created container yb-tserver
  Normal  Started    22m   kubelet            Started container yb-tserver
  Normal  Pulled     22m   kubelet            Container image "yugabytedb/yugabyte:2.17.1.0-b439" already present on machine
  Normal  Created    22m   kubelet            Created container yb-cleanup
  Normal  Started    22m   kubelet            Started container yb-cleanup
```
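
One detail that stands out from the flags above: the tserver itself is configured with fully qualified names (`yb-tserver-0.yb-tservers.yb-demo.svc.cluster.local`), while my `ysqlsh` attempt used the shorter `yb-tserver-0.yb-tservers.yb-demo`. A quick way to check whether both forms resolve from inside the pod (assuming `getent` is available in the image, which I haven't confirmed):

```sh
# Short form, relying on the pod's DNS search domains:
kubectl exec --namespace yb-demo yb-tserver-0 -c yb-tserver -- \
  getent hosts yb-tserver-0.yb-tservers.yb-demo

# Fully qualified form, matching what the tserver flags use:
kubectl exec --namespace yb-demo yb-tserver-0 -c yb-tserver -- \
  getent hosts yb-tserver-0.yb-tservers.yb-demo.svc.cluster.local
```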

FYI, I installed the cluster using k0s with defaults, except I'm using Calico for networking.
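
Since Calico is the one nonstandard part of my setup, cluster DNS seems like the first thing to rule out. Two sanity checks (assuming the stock k0s CoreDNS deployment, which I believe carries the usual `k8s-app=kube-dns` label; adjust if yours differs):

```sh
# Is CoreDNS up?
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Which resolver and search domains does the tserver pod actually get?
kubectl exec --namespace yb-demo yb-tserver-0 -c yb-tserver -- cat /etc/resolv.conf
```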

emchristiansen · Feb 09 '23 22:02