
noobaa install stuck at "System Phase is "Connecting". Waiting for phase ready ..."

wudixrw opened this issue 4 years ago · 9 comments

OS: RHEL 7.6, Kubernetes: 1.16

I followed the steps from https://github.com/noobaa/noobaa-operator

The noobaa install is stuck as shown below:

noobaa install
INFO[0000] CLI version: 2.2.0
INFO[0000] noobaa-image: noobaa/noobaa-core:5.4.0
INFO[0000] operator-image: noobaa/noobaa-operator:2.2.0
INFO[0000] Namespace: default
INFO[0000]
INFO[0000] CRD Create:
INFO[0000] ✅ Already Exists: CustomResourceDefinition "noobaas.noobaa.io"
INFO[0000] ✅ Already Exists: CustomResourceDefinition "backingstores.noobaa.io"
INFO[0000] ✅ Already Exists: CustomResourceDefinition "bucketclasses.noobaa.io"
INFO[0000] ✅ Already Exists: CustomResourceDefinition "objectbucketclaims.objectbucket.io"
INFO[0000] ✅ Already Exists: CustomResourceDefinition "objectbuckets.objectbucket.io"
INFO[0000]
INFO[0000] Operator Install:
INFO[0000] ✅ Already Exists: Namespace "default"
INFO[0000] ✅ Created: ServiceAccount "noobaa"
INFO[0000] ✅ Created: Role "noobaa"
INFO[0000] ✅ Created: RoleBinding "noobaa"
INFO[0000] ✅ Created: ClusterRole "default.noobaa.io"
INFO[0000] ✅ Created: ClusterRoleBinding "default.noobaa.io"
INFO[0000] ✅ Created: Deployment "noobaa-operator"
INFO[0000]
INFO[0000] System Create:
INFO[0000] ✅ Already Exists: Namespace "default"
INFO[0000] ✅ Created: NooBaa "noobaa"
INFO[0000]
INFO[0000] NOTE:
INFO[0000]   - This command has finished applying changes to the cluster.
INFO[0000]   - From now on, it only loops and reads the status, to monitor the operator work.
INFO[0000]   - You may Ctrl-C at any time to stop the loop and watch it manually.
INFO[0000]
INFO[0000] System Wait Ready:
INFO[0000] ⏳ System Phase is "". Deployment "noobaa-operator" is not ready: ReadyReplicas 0/1
INFO[0003] ⏳ System Phase is "Connecting". Pod "noobaa-core-0" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [core]). ContainersNotReady (containers with unready status: [core]).
INFO[0006] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0009] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0012] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0015] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0018] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0021] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0024] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0027] ⏳ System Phase is "Connecting". Waiting for phase ready ...
INFO[0030] ⏳ System Phase is "Connecting". Waiting for phase ready ...

This is what noobaa status shows:

noobaa status
INFO[0000] CLI version: 2.2.0
INFO[0000] noobaa-image: noobaa/noobaa-core:5.4.0
INFO[0000] operator-image: noobaa/noobaa-operator:2.2.0
INFO[0000] Namespace: default
INFO[0000]
INFO[0000] CRD Status:
INFO[0000] ✅ Exists: CustomResourceDefinition "noobaas.noobaa.io"
INFO[0000] ✅ Exists: CustomResourceDefinition "backingstores.noobaa.io"
INFO[0000] ✅ Exists: CustomResourceDefinition "bucketclasses.noobaa.io"
INFO[0000] ✅ Exists: CustomResourceDefinition "objectbucketclaims.objectbucket.io"
INFO[0000] ✅ Exists: CustomResourceDefinition "objectbuckets.objectbucket.io"
INFO[0000]
INFO[0000] Operator Status:
INFO[0000] ✅ Exists: Namespace "default"
INFO[0000] ✅ Exists: ServiceAccount "noobaa"
INFO[0000] ✅ Exists: Role "noobaa"
INFO[0000] ✅ Exists: RoleBinding "noobaa"
INFO[0000] ✅ Exists: ClusterRole "default.noobaa.io"
INFO[0000] ✅ Exists: ClusterRoleBinding "default.noobaa.io"
INFO[0000] ✅ Exists: Deployment "noobaa-operator"
INFO[0000]
INFO[0000] System Status:
INFO[0000] ✅ Exists: NooBaa "noobaa"
INFO[0000] ✅ Exists: StatefulSet "noobaa-core"
INFO[0000] ✅ Exists: StatefulSet "noobaa-db"
INFO[0000] ✅ Exists: Service "noobaa-mgmt"
INFO[0000] ✅ Exists: Service "s3"
INFO[0000] ✅ Exists: Service "noobaa-db"
INFO[0000] ✅ Exists: Secret "noobaa-server"
INFO[0000] ❌ Not Found: Secret "noobaa-operator"
INFO[0000] ❌ Not Found: Secret "noobaa-endpoints"
INFO[0000] ❌ Not Found: Secret "noobaa-admin"
INFO[0000] ❌ Not Found: StorageClass "default.noobaa.io"
INFO[0000] ❌ Not Found: BucketClass "noobaa-default-bucket-class"
INFO[0000] ❌ Not Found: Deployment "noobaa-endpoint"
INFO[0000] ❌ Not Found: HorizontalPodAutoscaler "noobaa-endpoint"
INFO[0000] ⬛ (Optional) Not Found: BackingStore "noobaa-default-backing-store"
INFO[0000] ⬛ (Optional) CRD Unavailable: CredentialsRequest "noobaa-cloud-creds"
INFO[0000] ⬛ (Optional) CRD Unavailable: PrometheusRule "noobaa-prometheus-rules"
INFO[0000] ⬛ (Optional) CRD Unavailable: ServiceMonitor "noobaa-service-monitor"
INFO[0000] ⬛ (Optional) CRD Unavailable: Route "noobaa-mgmt"
INFO[0000] ⬛ (Optional) CRD Unavailable: Route "s3"
INFO[0000] ✅ Exists: PersistentVolumeClaim "db-noobaa-db-0"
INFO[0000] ❌ System Phase is "Connecting"
INFO[0000] ⏳ System Phase is "Connecting". Waiting for phase ready ...
#------------------#
#- Backing Stores -#
#------------------#

No backing stores found.

#------------------#
#- Bucket Classes -#
#------------------#

No bucket classes found.

#-----------------#
#- Bucket Claims -#
#-----------------#

No OBCs found.

-------------------------------------
I checked kubectl get noobaa; it shows:

NAME     MGMT-ENDPOINTS                  S3-ENDPOINTS   IMAGE                      PHASE        AGE
noobaa   [https://172.16.12.152:32517]                  noobaa/noobaa-core:5.4.0   Connecting   19m

And kubectl describe noobaa shows:

kubectl describe noobaa
Name:         noobaa
Namespace:    default
Labels:       app=noobaa
Annotations:  <none>
API Version:  noobaa.io/v1alpha1
Kind:         NooBaa
Metadata:
  Creation Timestamp:  2020-07-12T12:31:09Z
  Generation:          1
  Resource Version:    8201271
  Self Link:           /apis/noobaa.io/v1alpha1/namespaces/default/noobaas/noobaa
  UID:                 8700ccb5-fcf0-42a8-8bad-54d5c33ec7d8
Spec:
  Db Image:  centos/mongodb-36-centos7
  Image:     noobaa/noobaa-core:5.4.0
Status:
  Accounts:
    Admin:
      Secret Ref:
  Actual Image:  noobaa/noobaa-core:5.4.0
  Conditions:
    Last Heartbeat Time:   2020-07-12T12:31:11Z
    Last Transition Time:  2020-07-12T12:31:11Z
    Message:               RPC: connection (0xc001660140) already closed &{RPC:0xc000101040 Address:wss://noobaa-mgmt.default.svc.cluster.local:443/rpc/ State:closed WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:3s}
    Reason:                TemporaryError
    Status:                False
    Type:                  Available
    Last Heartbeat Time:   2020-07-12T12:31:11Z
    Last Transition Time:  2020-07-12T12:31:11Z
    Message:               RPC: connection (0xc001660140) already closed &{RPC:0xc000101040 Address:wss://noobaa-mgmt.default.svc.cluster.local:443/rpc/ State:closed WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:3s}
    Reason:                TemporaryError
    Status:                True
    Type:                  Progressing
    Last Heartbeat Time:   2020-07-12T12:31:11Z
    Last Transition Time:  2020-07-12T12:31:11Z
    Message:               RPC: connection (0xc001660140) already closed &{RPC:0xc000101040 Address:wss://noobaa-mgmt.default.svc.cluster.local:443/rpc/ State:closed WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:3s}
    Reason:                TemporaryError
    Status:                False
    Type:                  Degraded
    Last Heartbeat Time:   2020-07-12T12:31:11Z
    Last Transition Time:  2020-07-12T12:31:11Z
    Message:               RPC: connection (0xc001660140) already closed &{RPC:0xc000101040 Address:wss://noobaa-mgmt.default.svc.cluster.local:443/rpc/ State:closed WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:3s}
    Reason:                TemporaryError
    Status:                False
    Type:                  Upgradeable
  Observed Generation:     1
  Phase:                   Connecting
  Readme:

  NooBaa operator is still working to reconcile this system.
  Check out the system status.phase, status.conditions, and events with:

    kubectl -n default describe noobaa
    kubectl -n default get noobaa -o yaml
    kubectl -n default get events --sort-by=metadata.creationTimestamp

  You can wait for a specific condition with:

    kubectl -n default wait noobaa/noobaa --for condition=available --timeout -1s

  NooBaa Core Version:     5.4.0
  NooBaa Operator Version: 2.2.0

  Services:
    Service Mgmt:
      Internal DNS:
        https://noobaa-mgmt.default.svc:443
      Internal IP:
        https://10.96.129.199:443
      Node Ports:
        https://172.16.12.152:32517
      Pod Ports:
        https://10.100.0.12:8443
    serviceS3:
Events:
  Type    Reason       Age   From             Message

  Normal  NooBaaImage  20m   noobaa-operator  Using NooBaa image "noobaa/noobaa-core:5.4.0" for the creation of "noobaa"

At the same time, I can access the management endpoint and log in.

wudixrw · Jul 12 '20

Hi @wudixrw, can you try the noobaa diagnose command?
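For reference, a typical invocation looks like the following; this is a sketch assuming NooBaa is installed in the default namespace, and flags may differ across CLI versions:

```shell
# Collect operator/core logs and resource dumps into a local tarball
# (noobaa_diagnostics_<timestamp>.tar.gz) for offline analysis.
# -n selects the namespace where NooBaa is installed.
noobaa diagnose -n default
```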

Also if you can share this list of namespaced resources:

for name in $(kubectl api-resources --verbs=list --namespaced -o name); do \
  cmd="kubectl get $name"; [[ "$name" =~ "events" ]] && cmd="$cmd --sort-by=lastTimestamp"; \
  echo; echo "### $cmd"; $cmd; \
done

Thanks

guymguym · Jul 12 '20

Hi @guymguym

  1. Here is the noobaa diagnose output: noobaa_diagnostics_2020-07-12T23:59:20+08:00.tar.gz

  2. Here is the command output:

### kubectl get configmaps
NAME                   DATA   AGE
noobaa-operator-lock   0      9m54s

### kubectl get endpoints
NAME              ENDPOINTS                                                        AGE
kubernetes        172.16.22.103:6443                                               65d
noobaa-db         10.100.0.97:27017                                                9m48s
noobaa-mgmt       10.100.0.31:8443,10.100.0.31:8080,10.100.0.31:8445 + 1 more...   9m48s
s3                <none>                                                           9m48s
wordpress-mysql   <none>                                                           130m

### kubectl get events --sort-by=lastTimestamp
F0712 23:59:07.988412   29049 sorter.go:354] Field {.lastTimestamp} in [][][]reflect.Value is an unsortable type: interface, err: unsortable type: <nil>

### kubectl get limitranges
No resources found in default namespace.

### kubectl get persistentvolumeclaims
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
db-noobaa-db-0                    Bound    pvc-a82aa402-18ee-44e8-91f1-563be3d53b8d   50Gi       RWO            sc-svc         9m49s
noobaastorage-test-csc-noobaa-0   Bound    pvc-81f495eb-e09d-4285-ba23-8ce68a97fe31   100Gi      RWO            sc-svc         4h12m
noobaastorage-test-csc-noobaa-1   Bound    pvc-551d14ce-7e46-49eb-9352-f4e7acf2a68d   100Gi      RWO            sc-svc         4h12m
noobaastorage-test-csc-noobaa-2   Bound    pvc-7537dc03-a0b4-44fa-b584-0a689fbbd47a   100Gi      RWO            sc-svc         4h12m
pvc-blk-test                      Bound    pvc-2d42e18c-1543-42a7-bdb2-d2662d3c3025   1Gi        RWO            sc-svc         63d
pvc-blk-test2                     Bound    pvc-3956a5b9-a86d-4ecb-8cc7-66f5970cc7fd   1Gi        RWO            sc-svc         11h
pvc-fs-test                       Bound    pvc-dce550d3-34e3-4299-90ca-e5c11c061a11   1Gi        RWO            sc-svc         63d

### kubectl get pods
NAME                              READY   STATUS              RESTARTS   AGE
noobaa-core-0                     1/1     Running             0          9m49s
noobaa-db-0                       1/1     Running             0          9m49s
noobaa-operator-cf6c7cd85-6259z   1/1     Running             0          9m56s
statefulset-blk-test-0            1/1     Running             0          31m
statefulset-fs-test-0             0/1     ContainerCreating   0          27m

### kubectl get podtemplates
No resources found in default namespace.

### kubectl get replicationcontrollers
No resources found in default namespace.

### kubectl get resourcequotas
No resources found in default namespace.

### kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-v9wfh   kubernetes.io/service-account-token   3      65d
noobaa-server         Opaque                                2      9m49s
noobaa-token-pmgqx    kubernetes.io/service-account-token   3      9m56s

### kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         65d
noobaa    1         9m56s

### kubectl get services
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                    AGE
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP                                                    65d
noobaa-db         ClusterIP      10.96.177.32    <none>        27017/TCP                                                  9m49s
noobaa-mgmt       LoadBalancer   10.96.130.179   <pending>     80:32633/TCP,443:30106/TCP,8445:31662/TCP,8446:30367/TCP   9m49s
s3                LoadBalancer   10.96.252.209   <pending>     80:31959/TCP,443:32189/TCP,8444:30877/TCP                  9m49s
wordpress-mysql   ClusterIP      None            <none>        3306/TCP                                                   130m

### kubectl get controllerrevisions.apps
NAME                              CONTROLLER                              REVISION   AGE
noobaa-core-d764fc546             statefulset.apps/noobaa-core            1          9m49s
noobaa-db-857d8678d6              statefulset.apps/noobaa-db              1          9m49s
statefulset-blk-test-5d95644447   statefulset.apps/statefulset-blk-test   1          63d
statefulset-fs-test-646cf55486    statefulset.apps/statefulset-fs-test    1          63d

### kubectl get daemonsets.apps
No resources found in default namespace.

### kubectl get deployments.apps
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
noobaa-operator   1/1     1            1           9m56s

### kubectl get replicasets.apps
NAME                        DESIRED   CURRENT   READY   AGE
noobaa-operator-cf6c7cd85   1         1         1       9m57s

### kubectl get statefulsets.apps
NAME                   READY   AGE
noobaa-core            1/1     9m50s
noobaa-db              1/1     9m50s
statefulset-blk-test   1/1     63d
statefulset-fs-test    0/1     63d

### kubectl get horizontalpodautoscalers.autoscaling
No resources found in default namespace.

### kubectl get cronjobs.batch
No resources found in default namespace.

### kubectl get jobs.batch
No resources found in default namespace.

### kubectl get cephblockpools.ceph.rook.io
No resources found in default namespace.

### kubectl get cephclients.ceph.rook.io
No resources found in default namespace.

### kubectl get cephclusters.ceph.rook.io
No resources found in default namespace.

### kubectl get cephfilesystems.ceph.rook.io
No resources found in default namespace.

### kubectl get cephnfses.ceph.rook.io
No resources found in default namespace.

### kubectl get cephobjectstores.ceph.rook.io
No resources found in default namespace.

### kubectl get cephobjectstoreusers.ceph.rook.io
No resources found in default namespace.

### kubectl get leases.coordination.k8s.io
No resources found in default namespace.

### kubectl get networkpolicies.crd.projectcalico.org
No resources found in default namespace.

### kubectl get networksets.crd.projectcalico.org
No resources found in default namespace.

### kubectl get ibmblockcsis.csi.ibm.com
No resources found in default namespace.

### kubectl get events.events.k8s.io --sort-by=lastTimestamp
No resources found in default namespace.

### kubectl get ingresses.extensions
No resources found in default namespace.

### kubectl get ingresses.networking.k8s.io
No resources found in default namespace.

### kubectl get networkpolicies.networking.k8s.io
No resources found in default namespace.

### kubectl get backingstores.noobaa.io
No resources found in default namespace.

### kubectl get bucketclasses.noobaa.io
No resources found in default namespace.

### kubectl get noobaas.noobaa.io
NAME     MGMT-ENDPOINTS                  S3-ENDPOINTS   IMAGE                      PHASE        AGE
noobaa   [https://172.16.12.152:30106]                  noobaa/noobaa-core:5.4.0   Connecting   10m

### kubectl get objectbucketclaims.objectbucket.io
No resources found in default namespace.

### kubectl get poddisruptionbudgets.policy
No resources found in default namespace.

### kubectl get rolebindings.rbac.authorization.k8s.io
NAME     AGE
noobaa   10m

### kubectl get roles.rbac.authorization.k8s.io
NAME     AGE
noobaa   10m

### kubectl get volumes.rook.io
No resources found in default namespace.

### kubectl get volumesnapshots.snapshot.storage.k8s.io
No resources found in default namespace.

wudixrw · Jul 12 '20

Any update on it?

Daornit · Jul 30 '20

I am hitting the same issue; was there any resolution or debugging done on this issue?

bssrikanth · Nov 25 '20

kubectl get pod -n noobaa
NAME                               READY   STATUS    RESTARTS   AGE
noobaa-core-0                      1/1     Running   2          13m
noobaa-db-0                        0/1     Pending   0          13m
noobaa-operator-6c64f578b9-96pqx   1/1     Running   0          13m

# kubectl describe pod noobaa-db-0
Name:           noobaa-db-0
Namespace:      noobaa
Priority:       0
Node:           <none>
Labels:         app=noobaa
                controller-revision-hash=noobaa-db-78d8cf5898
                noobaa-db=noobaa
                statefulset.kubernetes.io/pod-name=noobaa-db-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/noobaa-db
Init Containers:
  init:
    Image:      noobaa/noobaa-core:5.5.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /noobaa_init_files/noobaa_init.sh
      init_mongo
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:        500m
      memory:     500Mi
    Environment:  <none>
    Mounts:
      /mongo_data from db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-hnz2q (ro)
Containers:
  db:
    Image:      centos/mongodb-36-centos7
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
      -c
      /opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:        2
      memory:     4Gi
    Environment:  <none>
    Mounts:
      /data from db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-hnz2q (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db-noobaa-db-0
    ReadOnly:   false
  noobaa-token-hnz2q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  noobaa-token-hnz2q
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  12m   default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  12m   default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.

bssrikanth · Nov 25 '20

@bssrikanth Hi. Your case seems to have a different cause than the original one: the installation is not advancing because the noobaa-db PVC cannot be provisioned. Check that noobaa.spec.dbStorageClass matches the storage class you use to provision PVs in your system. If left empty, the default storage class is used.
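To see which storage classes exist in your cluster and whether one is marked as the default, you can run:

```shell
# List available storage classes; the default (if any) is suffixed "(default)".
# If no class is marked default and spec.dbStorageClass is empty,
# the noobaa-db PVC will stay Pending, matching the FailedScheduling events above.
kubectl get storageclass
```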

See the CRD API definitions: https://github.com/noobaa/noobaa-operator/blob/13240fe180d79be9afa8597b039181009fdc7f3a/pkg/apis/noobaa/v1alpha1/noobaa_types.go#L92-L98

If you want to update this property, you can use kubectl edit noobaa, add the property and save, and then manually delete the db-noobaa-db-0 PVC so that it is recreated from the new storage class.
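The steps above can be sketched as follows; this assumes the default namespace and a placeholder class name my-storage-class, which you should replace with your own:

```shell
# 1. Open the NooBaa CR in an editor and add under spec:
#      dbStorageClass: my-storage-class   # placeholder, use your class name
kubectl edit noobaa noobaa

# 2. Delete the stale PVC so the operator re-provisions it
#    from the newly configured storage class.
kubectl delete pvc db-noobaa-db-0
```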

Let us know if this helps

guymguym · Nov 25 '20

Thanks for the update @guymguym. I am new to NooBaa; when I edit the noobaa CR I am not finding dbStorageClass. Currently I am using IBM Spectrum Scale CSI as the storage backend to provision PVCs in my k8s cluster. StorageClass details are below:

Name:            ibm-spectrum-scale-csi-fileset
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"ibm-spectrum-scale-csi-fileset"},"parameters":{"clusterId":"11585869269786559051","volBackendFs":"fs1"},"provisioner":"spectrumscale.csi.ibm.com","reclaimPolicy":"Delete"}

Provisioner:           spectrumscale.csi.ibm.com
Parameters:            clusterId=11585869269786559051,volBackendFs=fs1
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Can you please let me know what exactly I should point noobaa to? Is it just the storage class name, which is ibm-spectrum-scale-csi-fileset in my case?

bssrikanth · Nov 26 '20

Hi @bssrikanth

If you use the CLI to install noobaa, then you should use the following:

noobaa install --db-storage-class ibm-spectrum-scale-csi-fileset

Otherwise you can patch the NooBaa CR directly:

kubectl patch noobaa noobaa --type merge -p '{"spec":{"dbStorageClass":"ibm-spectrum-scale-csi-fileset"}}'

guymguym · Nov 29 '20

Sorry, I think I created a duplicate issue #1128. I have similar behavior with:

  • NooBaa Operator Version: 5.11.0
  • Platform: K8s v1.22.3 on minikube 1.24.0 5CPU 8GB

The default StorageClass is in place and the DB (postgres) is working, but the core API endpoints are not responding and there are no errors in the core logs. Are there any updates on this issue? Does it make sense to use an older k8s or operator version?

arttor · May 14 '23