
Bug: "Address is not allowed" error message upon initial Console login, changing password

Open pcgeek86 opened this issue 2 years ago • 11 comments

General remarks

This form is to report bugs. For general usage questions refer to our Slack channel KubeSphere-users

Describe the bug

  • I am being blocked from proceeding in the KubeSphere console by a webhook-related bug.
  • I created a Kubernetes cluster on Amazon EKS using eksctl.
  • I deployed KubeSphere using the Kubernetes manifest files from the documentation.
  • Then I changed the ks-console service type to LoadBalancer.
  • Then I tried to log in to the console through the load balancer's external DNS name.
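For reference, the service-type change described above can be done with a one-line patch; the namespace and service name match the `kubectl get all` output below:

```shell
# Expose the KubeSphere console via an AWS ELB (assumes the default
# kubesphere-system namespace and ks-console service name).
kubectl -n kubesphere-system patch svc ks-console \
  -p '{"spec":{"type":"LoadBalancer"}}'

# Print the external DNS name once the ELB is provisioned.
kubectl -n kubesphere-system get svc ks-console \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```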
[Screenshot (Feb 28, 2022): "Address is not allowed" error shown on the Console change-password screen]
ubuntu@vm01:~$ kubectl get all --namespace kubesphere-system
NAME                                         READY   STATUS    RESTARTS   AGE
pod/ks-apiserver-55d68bd7f-75nqm             1/1     Running   0          5m16s
pod/ks-console-65f4d44d88-hmhp8              1/1     Running   0          19m
pod/ks-controller-manager-68b54f7967-rg99p   1/1     Running   0          5m16s
pod/ks-installer-85dcfff87d-tt5cq            1/1     Running   0          9m51s

NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/ks-apiserver            ClusterIP      10.100.228.188   <none>                                                                    80/TCP         19m
service/ks-console              LoadBalancer   10.100.232.102   a94291894669b4059b67eaad7cc534be-1183942353.us-west-2.elb.amazonaws.com   80:30880/TCP   19m
service/ks-controller-manager   ClusterIP      10.100.12.182    <none>                                                                    443/TCP        19m

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ks-apiserver            1/1     1            1           19m
deployment.apps/ks-console              1/1     1            1           19m
deployment.apps/ks-controller-manager   1/1     1            1           19m
deployment.apps/ks-installer            1/1     1            1           68m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/ks-apiserver-55d68bd7f             1         1         1       5m16s
replicaset.apps/ks-apiserver-69b77996db            0         0         0       19m
replicaset.apps/ks-apiserver-6bcd4675f9            0         0         0       5m30s
replicaset.apps/ks-apiserver-6f48b8598             0         0         0       15m
replicaset.apps/ks-apiserver-dccc7546              0         0         0       16m
replicaset.apps/ks-console-65f4d44d88              1         1         1       19m
replicaset.apps/ks-controller-manager-5785c696df   0         0         0       16m
replicaset.apps/ks-controller-manager-5f89b68b7d   0         0         0       19m
replicaset.apps/ks-controller-manager-64b88f6dc8   0         0         0       15m
replicaset.apps/ks-controller-manager-66fd555c4    0         0         0       5m29s
replicaset.apps/ks-controller-manager-68b54f7967   1         1         1       5m16s
replicaset.apps/ks-installer-85dcfff87d            1         1         1       68m

Versions used (KubeSphere/Kubernetes): KubeSphere: see below. Kubernetes: 1.21.5 (EKS)

Name:                   ks-installer
Namespace:              kubesphere-system
CreationTimestamp:      Mon, 28 Feb 2022 16:27:34 +0000
Labels:                 app=ks-install
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=ks-install
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=ks-install
  Service Account:  ks-installer
  Containers:
   installer:
    Image:      kubesphere/ks-installer:v3.2.1
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        20m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /etc/localtime from host-time (ro)
  Volumes:
   host-time:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   ks-installer-85dcfff87d (1/1 replicas created)
Events:          <none>

Environment How many nodes and their hardware configuration:

For example: EKS master 2 nodes: 8cpu/16g

(and other info are welcomed to help us debugging)

To Reproduce

See above.

Expected behavior

I should be able to change the password, or skip changing it, so that I can log in to the KubeSphere Console.

pcgeek86 avatar Feb 28 '22 17:02 pcgeek86

Hi @wansir , could you please help to take a look at this issue?

FeynmanZhou avatar Mar 01 '22 04:03 FeynmanZhou

@pcgeek86 It seems related to this issue: https://medium.com/@denisstortisilva/kubernetes-eks-calico-and-custom-admission-webhooks-a2956b49bd0d
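For anyone else hitting this: the linked article describes the EKS control plane being unable to reach admission webhook servers running on the Calico pod network. A quick way to check whether a failing admission webhook is involved (the KubeSphere webhooks are served by ks-controller-manager):

```shell
# List admission webhooks registered in the cluster; KubeSphere
# registers webhooks backed by the ks-controller-manager service.
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

# Check whether the webhook service has reachable endpoints.
kubectl -n kubesphere-system get endpoints ks-controller-manager
```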

wansir avatar Mar 01 '22 05:03 wansir

Is the solution to delete Calico from cluster? @wansir

If not, what steps should I take to fix it?

pcgeek86 avatar Mar 01 '22 16:03 pcgeek86

So far I have:

  • Removed Calico by manifests
  • Removed KubeSphere by manifests
  • Re-deployed KubeSphere by manifests
  • Now the installer for KubeSphere will not start at all. The installer pod is hung.
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               7m44s                   default-scheduler  Successfully assigned kubesphere-system/ks-installer-85dcfff87d-hwqd4 to ip-192-168-80-59.us-west-2.compute.internal
  Warning  FailedCreatePodSandBox  7m43s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "1b0845610c98e66285fdacf7293c4f9382b30d730d3f43a0fb96ecaa30370209" network for pod "ks-installer-85dcfff87d-hwqd4": networkPlugin cni failed to set up pod "ks-installer-85dcfff87d-hwqd4_kubesphere-system" network: error getting ClusterInformation: connection is unauthorized: Unauthorized, failed to clean up sandbox container "1b0845610c98e66285fdacf7293c4f9382b30d730d3f43a0fb96ecaa30370209" network for pod "ks-installer-85dcfff87d-hwqd4": networkPlugin cni failed to teardown pod "ks-installer-85dcfff87d-hwqd4_kubesphere-system" network: error getting ClusterInformation: connection is unauthorized: Unauthorized]
  Normal   SandboxChanged          2m33s (x25 over 7m43s)  kubelet            Pod sandbox changed, it will be killed and re-created.
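The `connection is unauthorized: Unauthorized` CNI error above typically means stale Calico CNI configuration was left on the nodes after Calico was removed. A cleanup sketch, assuming the default Calico file paths (run on each affected node):

```shell
# Remove leftover Calico CNI config so the node falls back to the
# remaining CNI plugin (default Calico paths assumed; verify first
# with: ls /etc/cni/net.d/).
sudo rm -f /etc/cni/net.d/10-calico.conflist
sudo rm -f /etc/cni/net.d/calico-kubeconfig

# Restart kubelet so it picks up the CNI change.
sudo systemctl restart kubelet
```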

pcgeek86 avatar Mar 01 '22 16:03 pcgeek86

I will verify this issue later. BTW, I think it is limited by the cluster network. You can recreate an EKS cluster and use the AWS VPC CNI instead.

wansir avatar Mar 02 '22 10:03 wansir

Hitting the same issue here, @wansir, any thoughts?

kalavt avatar Sep 26 '22 16:09 kalavt

Is there any workaround to support EKS with the Calico network?

kalavt avatar Sep 27 '22 12:09 kalavt

Still not working after setting hostNetwork: true on the ks-controller-manager pod. Error log:

W0927 23:22:18.495218 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0927 23:22:18.496499 1 server.go:202] setting up manager
I0927 23:22:18.520026 1 deleg.go:130] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"=":8080"
E0927 23:22:18.520266 1 deleg.go:144] controller-runtime/metrics "msg"="metrics server failed to listen. You may want to disable the metrics server or use another port if it is due to conflicts" "error"="error listening on :8080: listen tcp :8080: bind: address already in use"
F0927 23:22:18.520279 1 server.go:207] unable to set up overall controller manager: error listening on :8080: listen tcp :8080: bind: address already in use

kalavt avatar Sep 27 '22 23:09 kalavt

Progress update:

Simply setting hostNetwork: true won't work, because node-local-dns has already occupied port 8080. Stuck here again...
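Since a hostNetwork pod shares the node's network namespace, the conflict can be confirmed on the node itself; something like:

```shell
# On the node running ks-controller-manager: see which process is
# listening on :8080 (expected to be node-local-dns's health port here).
sudo ss -ltnp | grep ':8080'
```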

kalavt avatar Sep 28 '22 14:09 kalavt

[Pull request](https://github.com/kubesphere/kubesphere/pull/5255) still requires manually setting hostNetwork: true to enable communication with ks-controller-manager.

kalavt avatar Sep 28 '22 16:09 kalavt

Issue fixed with:

  1. Set ks-controller-manager to hostNetwork: true
  2. Change the node-local-dns ConfigMap to move the health-monitoring port off 8080 (only needed if you have a conflict with node-local-dns's port 8080)
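The two fix steps above can be sketched as follows (the alternate health port 8081 is illustrative; adjust to your node-local-dns Corefile):

```shell
# Step 1: run ks-controller-manager on the host network so the EKS
# control plane can reach its webhook server directly. You may also
# need dnsPolicy: ClusterFirstWithHostNet for in-cluster DNS to work.
kubectl -n kubesphere-system patch deployment ks-controller-manager \
  --type merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'

# Step 2: move node-local-dns's health port off 8080 by editing the
# Corefile in its ConfigMap, e.g. change the "health" line to:
#     health 169.254.20.10:8081
kubectl -n kube-system edit configmap node-local-dns

# Restart node-local-dns so it reloads the config.
kubectl -n kube-system rollout restart daemonset node-local-dns
```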

kalavt avatar Sep 28 '22 19:09 kalavt