
How to change an IP range

Open niwashing opened this issue 6 years ago • 47 comments

The IP ranges 10.1.1.0/24 and 10.152.183.0/24 are used for the cluster and pods by default. How can I change the cluster IP range and the IP range that is assigned to nodes?

niwashing avatar Jan 10 '19 15:01 niwashing

Hi @niwashing

These two IP ranges are configured in a couple of places:

10.152.183.0/24:

  • /var/snap/microk8s/current/args/kube-apiserver: --service-cluster-ip-range=10.152.183.0/24
  • /var/snap/microk8s/current/args/kubelet: --non-masquerade-cidr=10.152.183.0/24
  • /var/snap/microk8s/current/args/kube-proxy: --cluster-cidr=10.152.183.0/24

10.1.1.0/24:

  • /var/snap/microk8s/current/args/cni-network/cni.conf: "subnet": "10.1.1.0/24"
  • /var/snap/microk8s/current/args/kubelet: --pod-cidr=10.1.1.0/24

I hope I am not missing anything. Remember to stop/start MicroK8s after you update any of those arguments.
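These args files are plain flag lists, so the substitutions can be scripted. A minimal sketch of the idea (it works on a scratch copy of two of the files so it is safe to run anywhere; on a real node you would point ARGS at /var/snap/microk8s/current/args, and the 10.200.183.0/24 range is a placeholder, not a recommendation):

```shell
#!/bin/sh
set -e
# Stand-in for /var/snap/microk8s/current/args on a real node.
ARGS=$(mktemp -d)
echo '--service-cluster-ip-range=10.152.183.0/24' > "$ARGS/kube-apiserver"
echo '--cluster-cidr=10.152.183.0/24' > "$ARGS/kube-proxy"

# Placeholder range -- pick one that is free on your network.
NEW_CIDR='10.200.183.0/24'
for f in kube-apiserver kube-proxy; do
  sed -i "s|10.152.183.0/24|$NEW_CIDR|" "$ARGS/$f"
done

cat "$ARGS/kube-apiserver"
# then: microk8s.stop && microk8s.start
```

The same substitution applies to the cni.conf subnet and the kubelet --pod-cidr lines listed above.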

Thank you for using MicroK8s

ktsakalozos avatar Jan 11 '19 09:01 ktsakalozos

@ktsakalozos

Thank you! I was able to change the IP range, but microk8s.enable dns still throws an error:

$ microk8s.enable dns
Enabling DNS
Applying manifest
serviceaccount/kube-dns unchanged
configmap/kube-dns unchanged
deployment.extensions/kube-dns configured
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.152.183.10": provided IP is not in the valid range. The range of valid IPs is 172.22.183.0/24
Failed to enable dns

I found that /snap/microk8s/354/actions/dns.yaml and /snap/microk8s/354/actions/enable.dns.sh still include the default IP range, but they cannot be changed due to snap permissions (/snap is mounted read-only).

Do I have to build microk8s from source by using snapcraft?

niwashing avatar Jan 11 '19 14:01 niwashing

You have the option to recompile MicroK8s and produce your own .snap file.

I suspect that after microk8s.enable dns you can also microk8s.kubectl edit the part of the dns manifest that is failing.

ktsakalozos avatar Jan 13 '19 08:01 ktsakalozos

I suspect that after microk8s.enable dns you can also microk8s.kubectl edit the part of the dns manifest that is failing.

Sorry, I'm not very familiar with Kubernetes, but isn't kubectl edit unavailable because kube-dns has not been deployed yet due to the IP range error?

I would appreciate it if you could show me the full commands.

niwashing avatar Jan 14 '19 12:01 niwashing

Sure, here is what I have:

> microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   pod/kube-dns-6ccd496668-qx5m4   3/3     Running   0          41s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP         72s
kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP   41s

NAMESPACE     NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/kube-dns   1/1     1            1           41s

NAMESPACE     NAME                                  DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/kube-dns-6ccd496668   1         1         1       42s

At this point I have kube-dns running; in your case it should be failing. I suspect you can go and edit the kube-dns service clusterIP with:

microk8s.kubectl edit -n kube-system service/kube-dns

If this does not work you will need to download and edit this file https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/dns.yaml and then run microk8s.kubectl apply -f ./dns.yaml
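Since spec.clusterIP is immutable on an existing Service, editing the downloaded manifest before applying is usually the safer route. A sketch of the substitution, using the 172.22.183.0/24 range from the error message above as the assumed target (it runs against a stand-in file here; on a real node you would edit the downloaded dns.yaml):

```shell
#!/bin/sh
set -e
# Stand-in for the downloaded dns.yaml.
MANIFEST=$(mktemp)
printf 'kind: Service\nspec:\n  clusterIP: 10.152.183.10\n' > "$MANIFEST"

# Move the kube-dns Service into the new service range.
sed -i 's|clusterIP: 10.152.183.10|clusterIP: 172.22.183.10|' "$MANIFEST"
grep clusterIP "$MANIFEST"
# then: microk8s.kubectl apply -f ./dns.yaml
```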

ktsakalozos avatar Jan 14 '19 13:01 ktsakalozos

I can confirm I just hit this too, because I didn't enable dns before adding pods, so there was an IP conflict. I can also confirm that installing the dns.yaml definition was required, since dns never started.

fquinner avatar Jan 31 '19 12:01 fquinner

What happens when the microk8s snap refreshes? Since the files to be edited are in /var/snap/microk8s/current/, I suspect the changes will revert to the defaults.

What about storing the subnet to use in /var/snap/microk8s/common, and modifying the configuration to get the value from that file?

AdamIsrael avatar Feb 04 '19 10:02 AdamIsrael

@AdamIsrael I do not think that is what happens during a refresh. ${SNAP_DATA} is backed up and could be reverted (contrary to ${SNAP_COMMON}), but its contents are preserved. If you configure your daemons in a specific way, we respect your configuration; we do not overwrite it with the defaults. Have a look at https://forum.snapcraft.io/t/proper-way-to-simulate-a-snap-refresh-release/5565 for how you can simulate a refresh and check this yourself.

ktsakalozos avatar Feb 04 '19 15:02 ktsakalozos

@ktsakalozos That's good to know, thanks!

I still think having the subnet defined in a single location would be better. Also, perhaps a command for changing it to a specific or random subnet?

The specific use-case I have is that I may install microk8s into lxd, and thus have multiple microk8s instances on a host. I can create new networks in lxd for each, but I then need each microk8s to use the appropriate network.

AdamIsrael avatar Feb 05 '19 08:02 AdamIsrael

I agree with @AdamIsrael. I started playing with K8s and decided that, since we are an Ubuntu shop, microk8s would be a great way to get started. However, we also happen to use the subnet 10.1.1.0/24, and so the cbr0 interface caused issues for me accessing portions of our network.

evilhamsterman avatar Sep 25 '19 19:09 evilhamsterman

Hello everyone, @ktsakalozos, I followed these steps to deploy microk8s on a different range (simply 10.152.182.0/24). However, I ran into a similar issue when trying to enable istio. It tries to re-deploy kube-dns (even if it is already deployed) and gets stuck on the clusterIP spec again.

$ microk8s.kubectl get all --all-namespaces
NAMESPACE            NAME                                        READY   STATUS             RESTARTS   AGE
container-registry   pod/registry-d7d7c8bc9-g86qw                0/1     Pending            0          15h
kube-system          pod/coredns-9b8997588-v2hs6                 0/1     Running            3          16h
kube-system          pod/hostpath-provisioner-7b9cb5cdb4-c2z2l   0/1     CrashLoopBackOff   22         15h
kube-system          pod/kube-dns-579bd8fb8d-gh2m6               0/3     InvalidImageName   0          15h

NAMESPACE            NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
container-registry   service/registry     NodePort    10.152.182.65   <none>        5000:32000/TCP   15h
default              service/kubernetes   ClusterIP   10.152.182.1    <none>        443/TCP          17h
kube-system          service/kube-dns     ClusterIP   10.152.182.10   <none>        53/UDP,53/TCP    15h

NAMESPACE            NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
container-registry   deployment.apps/registry               0/1     1            0           15h
kube-system          deployment.apps/coredns                0/1     1            0           16h
kube-system          deployment.apps/hostpath-provisioner   0/1     1            0           15h
kube-system          deployment.apps/kube-dns               0/1     1            0           15h

NAMESPACE            NAME                                              DESIRED   CURRENT   READY   AGE
container-registry   replicaset.apps/registry-d7d7c8bc9                1         1         0       15h
kube-system          replicaset.apps/coredns-9b8997588                 1         1         0       16h
kube-system          replicaset.apps/hostpath-provisioner-7b9cb5cdb4   1         1         0       15h
kube-system          replicaset.apps/kube-dns-579bd8fb8d               1         1         0       15h
 
$ microk8s.enable istio
Enabling Istio
Enabling DNS
Applying manifest
serviceaccount/coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns unchanged
clusterrole.rbac.authorization.k8s.io/coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/coredns unchanged
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.152.183.10": field is immutable
Failed to enable dns
Failed to enable istio

Could you recommend what I should do with this, or which files I should download and edit? Thank you!

camille-rodriguez avatar Oct 22 '19 13:10 camille-rodriguez

Hello, this problem is still relevant now with flannel. We do a lot of work from home (using a VPN) and many of our internal services run in 10.1.0.0/16.

Is there any way to change the addresses with flannel? I tried to change /var/snap/microk8s/current/args/flannel-network-mgr-config, and after a reset and reboot the new range is used. But sometimes it just stops working, as the flannel daemon cannot start with:

Jul 02 09:50:48 xxx microk8s.daemon-flanneld[1747]: error #0: dial tcp: lookup none on 127.0.0.53:53: server misbehaving
Jul 02 09:50:53 xxx microk8s.daemon-flanneld[2174]: Error: dial tcp: lookup none on 127.0.0.53:53: server misbehaving

We want to replace Minikube with MicroK8s for local development on Linux, so this is really important to us.

Thanks for any good tips

sadoMasupilami avatar Jul 02 '20 08:07 sadoMasupilami

Microk8s 1.16+

Modify:

/var/snap/microk8s/current/args/flannel-network-mgr-config

Change: "10.1.0.0/16" to: "10.8.0.0/16" (or any other range)

Then restart microk8s:

microk8s.stop

microk8s.start
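The flannel file is small enough to patch with sed. A sketch against a stand-in file; the JSON shape used here is an assumption based on flannel's usual network configuration, so check the real /var/snap/microk8s/current/args/flannel-network-mgr-config before editing it:

```shell
#!/bin/sh
set -e
# Stand-in for /var/snap/microk8s/current/args/flannel-network-mgr-config;
# the JSON below is an assumed example, not the file's guaranteed contents.
CFG=$(mktemp)
echo '{"Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}' > "$CFG"

sed -i 's|"10.1.0.0/16"|"10.8.0.0/16"|' "$CFG"
cat "$CFG"
# then: microk8s.stop && microk8s.start
```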

strigona-worksight avatar Jul 02 '20 15:07 strigona-worksight

Right now, what is the best option to achieve this local IP range change for the latest 1.19 MicroK8s without breaking any addons or functionality? And is it possible to use custom IP ranges outside of the standard private ranges, for example 1.1.1.1? Or would that conflict with the internet?

uGiFarukh avatar Sep 04 '20 10:09 uGiFarukh

@ktsakalozos any idea on how to achieve this properly without breaking anything?

uGiFarukh avatar Sep 05 '20 18:09 uGiFarukh

@uGiFarukh for 1.19 we have the following:

There are two main IP ranges you may want to change.

  1. The range where cluster IPs are from. By default this is set to 10.152.183.0/24. To change the cluster IP range you need to:

    • Stop all services with microk8s.stop
    • Clean the current datastore and CNI with:
    (cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
    echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
    rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig
    
    • Edit /var/snap/microk8s/current/args/kube-apiserver and update the --service-cluster-ip-range=10.152.183.0/24 argument of the API server.
    • Edit /var/snap/microk8s/current/certs/csr.conf.template and replace IP.2 = 10.152.183.1 with the new IP the kubernetes service will have in the new IP range.
    • If you are also setting up a proxy update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges
    • Start all services with microk8s.start
    • Reload the CNI with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
    • To enable dns you should not use the packaged addon. Instead you should:
      • make a copy of the dns manifest with cp /snap/microk8s/current/actions/coredns.yaml /tmp/.
      • In this manifest copy update the clusterIP: 10.152.183.10 to an IP in the new range and replace the $ALLOWESCALATION string with false.
      • Apply the manifest with microk8s kubectl apply -f /tmp/coredns.yaml
      • Add the following two arguments to the kubelet arguments in /var/snap/microk8s/current/args/kubelet:
      --cluster-domain cluster.local
      --cluster-dns <the cluster ip of the dns service you put in the coredns.yaml>
      
      • Restart MicroK8s with microk8s stop; microk8s start.
  2. The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

    • Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument.
    • If you are also setting up a proxy update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges
    • Restart MicroK8s with microk8s stop; microk8s start.
    • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and replace the IP range in:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.1.0.0/16"
      
    • Apply the above yaml with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml.
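After the restarts it is worth confirming that Services actually land in the new range. A small filter you can pipe `microk8s kubectl get svc -A --no-headers` into; the function name and the prefix are illustrative (not part of MicroK8s), and the demo below runs on captured output rather than a live cluster:

```shell
#!/bin/sh
set -e
# Exit non-zero if any ClusterIP (column 4 of `kubectl get svc -A`) does not
# start with the given prefix.
check_prefix() {
  awk -v p="$1" '$4 != "<none>" && index($4, p) != 1 { bad = 1; print "out of range:", $0 }
                 END { exit bad }'
}

# Demo on captured output instead of a live cluster:
printf 'default kubernetes ClusterIP 172.30.183.1 <none> 443/TCP 5m\n' \
  | check_prefix '172.30.183.'
echo 'all services in the new range'
```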

ktsakalozos avatar Sep 05 '20 21:09 ktsakalozos

@ktsakalozos I tried your solution for 1.19 mentioned above, but there are still some issues:

  1. There is a line in /var/snap/microk8s/current/args/cni-network/calico-kubeconfig that says "server: https://[10.152.183.1]:443", and even if I edit it with sudo, it would be restored to its original content automatically after I restart microk8s. Does that matter?
  2. I found reference to "10.1.0.0/16" and "10.152.183.0/24" in /var/snap/microk8s/current/args/containerd-env. Should I also update that?
  3. I found reference to "10.1.0.0/16" in /var/snap/microk8s/current/args/flannel-network-mgr-config. Should I also update that?

Thanks a lot!

baritono avatar Sep 10 '20 01:09 baritono

@baritono I revised and tested the instructions in the above comments for the service range. Please have another look. To your questions:

  • There is a line in /var/snap/microk8s/current/args/cni-network/calico-kubeconfig that says "server: https://[10.152.183.1]:443", and even if I edit it with sudo, it would be restored to its original content automatically after I restart microk8s. Does that matter?

This is now covered by the revised version of the instructions above.

  • I found reference to "10.1.0.0/16" and "10.152.183.0/24" in /var/snap/microk8s/current/args/containerd-env. Should I also update that?

If you are using a proxy you should update this file accordingly.

  • I found reference to "10.1.0.0/16" in /var/snap/microk8s/current/args/flannel-network-mgr-config. Should I also update that?

Flannel is not used in 1.19 anymore. It is here only for backwards compatibility with the non-HA setup.

ktsakalozos avatar Sep 10 '20 08:09 ktsakalozos

@ktsakalozos thank you so much! Now microk8s is up and running, and can happily co-exist with my Cisco VPN.

Some follow-up questions:

  1. Now when I microk8s enable dashboard and then microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443, I cannot access the dashboard at http://localhost:10443. I got the following error:
$ microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
Forwarding from 127.0.0.1:10443 -> 8443
Forwarding from [::1]:10443 -> 8443
Handling connection for 10443
Handling connection for 10443
E0910 12:32:18.674538   29339 portforward.go:400] an error occurred forwarding 10443 -> 8443: error forwarding port 8443 to pod c20523e09aa81f81d8448079efeb79369bc89d9d47b1ecaf29db9126d5544f67, uid : failed to execute portforward in network namespace "/var/run/netns/cni-371b6a26-e533-5db1-0c43-a6bdcaa84643": socat command returns error: exit status 1, stderr: "2020/09/10 12:32:18 socat[29527] E connect(5, AF=2 127.0.0.1:8443, 16): Connection refused\n"
E0910 12:32:18.675504   29339 portforward.go:400] an error occurred forwarding 10443 -> 8443: error forwarding port 8443 to pod c20523e09aa81f81d8448079efeb79369bc89d9d47b1ecaf29db9126d5544f67, uid : failed to execute portforward in network namespace "/var/run/netns/cni-371b6a26-e533-5db1-0c43-a6bdcaa84643": socat command returns error: exit status 1, stderr: "2020/09/10 12:32:18 socat[29528] E connect(5, AF=2 127.0.0.1:8443, 16): Connection refused\n"
  2. My main goal is to run kubeflow on microk8s. If I microk8s enable kubeflow, it seems to depend on the dns addon. Since I do not want to enable the packaged dns addon, what's the recommended way of enabling kubeflow in this setting? Should I follow the generic instructions here for deploying kubeflow on an existing kubernetes cluster?

Thank you again!

baritono avatar Sep 10 '20 19:09 baritono

@ktsakalozos since the dashboard was not working, I disabled it with microk8s disable dashboard and restarted MicroK8s with microk8s stop; microk8s start.

Now the pods are not healthy. For example, the log from pod calico-kube-controllers-847c8c99d-dg4rj in deployment calico-kube-controllers (namespace kube-system):

2020-09-10 20:43:02.341 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
W0910 20:43:02.343059       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-09-10 20:43:02.343 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-09-10 20:43:05.399 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://192.168.64.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 192.168.64.1:443: connect: no route to host
2020-09-10 20:43:05.399 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://192.168.64.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 192.168.64.1:443: connect: no route to host

baritono avatar Sep 10 '20 20:09 baritono

@baritono could you attach the microk8s inspect tarball?

BTW who is @ keshavdv ?

ktsakalozos avatar Sep 11 '20 05:09 ktsakalozos

Sorry, @ktsakalozos I misspelled your ID! Auto-completion somehow gave me @ keshavdv .

$ microk8s inspect
[sudo] password for haosong: 
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

WARNING:  Docker is installed. 
File "/etc/docker/daemon.json" does not exist. 
You should create it and add the following lines: 
{
    "insecure-registries" : ["localhost:32000"] 
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
  Report tarball is at /var/snap/microk8s/1668/inspection-report-20200911_124340.tar.gz

inspection-report-20200911_124340.tar.gz

baritono avatar Sep 11 '20 19:09 baritono

Wow, that's a lot of steps! Is there any movement to make this configurable (and working)? If it's not on the roadmap, then maybe better defaults could resolve most use cases? It seems like 10.1.1.0/24 often leads to a conflict, and I wonder whether /24 is enough, especially for "production-grade Kubernetes". Why not just use 10.152.0.0/16 and 10.153.0.0/16 as defaults? Or even /12. Or look at other projects' defaults, like Rancher's.

Don't forget to upvote the first post to reflect the need of an easy configuration.

Bessonov avatar Nov 16 '20 23:11 Bessonov

I followed the steps and tried to change the pod IP range only, but it is still not working: when I create a new pod, the IP is still 10.1.x.x.

debu99 avatar Jan 03 '21 06:01 debu99

These are the steps that work for me, using addresses from the 172.16.0.0/12 private address range.

My setup

sudo snap install microk8s --classic --channel=1.19
microk8s enable dns helm3 rbac

cat ~/.bash_aliases

alias kubectl='microk8s kubectl'
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"

Configuration:

  1. The range where cluster IPs are from. By default this is set to 10.152.183.0/24. To change the cluster IP range you need to:

    • Stop all services with microk8s stop

    • Clean the current datastore and CNI with:

        (cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
        echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
        rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig
      
    • Edit nano /var/snap/microk8s/current/args/kube-apiserver and update the argument of the API server --service-cluster-ip-range=10.152.183.0/24 to --service-cluster-ip-range=172.30.183.0/24 .

    • Edit nano /var/snap/microk8s/current/certs/csr.conf.template and replace IP.2 = 10.152.183.1 with the new IP, IP.2 = 172.30.183.1, that the kubernetes service will have in the new IP range.

    • If you are also setting up a proxy, update nano /var/snap/microk8s/current/args/containerd-env with the respective IP ranges from:

        # NO_PROXY=10.1.0.0/16,10.152.183.0/24
      
    • to:

        # NO_PROXY=172.17.0.0/16,172.30.183.0/24
      
    • Start all services with microk8s start

    • Reload the CNI with kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml

    • To enable dns you should make a copy of the dns manifest with cp /snap/microk8s/current/actions/coredns.yaml /tmp/

    • In this manifest copy nano /tmp/coredns.yaml update the clusterIP: 10.152.183.10 to this IP clusterIP: 172.30.183.10 in the new range and replace the $ALLOWESCALATION string with false.

    • Apply the manifest with kubectl apply -f /tmp/coredns.yaml

    • Add/Modify the following two arguments in the kubelet arguments at nano /var/snap/microk8s/current/args/kubelet:

        --cluster-domain cluster.local
        --cluster-dns 172.30.183.10
      
    • Restart MicroK8s with microk8s stop; microk8s start.

  2. The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

    • Edit nano /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument to --cluster-cidr=172.17.0.0/16.

    • If you are also setting up a proxy, update nano /var/snap/microk8s/current/args/containerd-env with the respective IP ranges from:

        # NO_PROXY=10.1.0.0/16,10.152.183.0/24
      
    • to:

        # NO_PROXY=172.17.0.0/16,172.30.183.0/24
      
    • Restart MicroK8s with microk8s stop; microk8s start.

    • Edit nano /var/snap/microk8s/current/args/cni-network/cni.yaml and replace the new IP range from:

        - name: CALICO_IPV4POOL_CIDR
          value: "10.1.0.0/16"
      
    • to:

        - name: CALICO_IPV4POOL_CIDR
          value: "172.17.0.0/16"
      
    • Apply the above yaml with kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml.

    • Restart MicroK8s with microk8s stop; microk8s start.

Calico CTL install

kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml

Configure Calico

calicoctl get ippool -o wide
calicoctl delete pool default-ipv4-ippool
sudo reboot

The default-ipv4-ippool is recreated on reboot with the settings from /var/snap/microk8s/current/args/cni-network/cni.yaml. Verify the pod IPs! They should use the new IP range:

kubectl get pod -o wide --all-namespaces

ldaroczi avatar Jan 05 '21 15:01 ldaroczi

It would be really helpful to be able to specify all of this during installation. If I have a setup with several nodes, do I have to repeat all these changes on all the nodes?

metabsd avatar Feb 12 '21 01:02 metabsd

I used the above comments to come up with this script:

alias kubectl="microk8s kubectl"
microk8s disable dns
microk8s.stop

(cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
sudo rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig

sed -ie 's|10.152.183.0/24|172.30.183.0/24|g' /var/snap/microk8s/current/args/kube-apiserver
sed -ie 's|10.152.183.1|172.30.183.1|g' /var/snap/microk8s/current/certs/csr.conf.template
sed -ie 's|10.1.0.0/16,10.152.183.0/24|172.17.0.0/16,172.30.183.0/24|g' /var/snap/microk8s/current/args/containerd-env
sed -i "/--cluster-domain .*/d" /var/snap/microk8s/current/args/kubelet
sed -i "/--cluster-dns .*/d" /var/snap/microk8s/current/args/kubelet
echo "--cluster-domain cluster.local" >> /var/snap/microk8s/current/args/kubelet
echo "--cluster-dns 172.30.183.10" >> /var/snap/microk8s/current/args/kubelet
sed -ie 's|10.1.0.0/16|172.17.0.0/16|g' /var/snap/microk8s/current/args/kube-proxy
sed -ie 's|10.1.0.0/16,10.152.183.0/24|172.17.0.0/16,172.30.183.0/24|g' /var/snap/microk8s/current/args/containerd-env
sed -ie 's|10.1.0.0/16|172.17.0.0/16|g' /var/snap/microk8s/current/args/cni-network/cni.yaml

reboot 

After Reboot

alias kubectl="microk8s kubectl"
microk8s start

kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
sleep 20

cp /snap/microk8s/current/actions/coredns.yaml /tmp/
sed -ie 's|$ALLOWESCALATION|false|g' /tmp/coredns.yaml
sed -ie 's|10.152.183.10|172.30.183.10|g' /tmp/coredns.yaml
kubectl apply -f /tmp/coredns.yaml

shauryagarg2006 avatar Apr 01 '21 21:04 shauryagarg2006

Referring to the 1.19 instructions from @ktsakalozos above:

In MicroK8s v1.21.0, the coredns configmap must also be updated manually, or the coredns pod keeps restarting:

Replace the forward . $NAMESERVERS string with forward . 8.8.8.8 8.8.4.4.
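The same kind of sed pass used for the clusterIP handles this too. A sketch against a stand-in for the copied manifest (the file contents below are reduced to just the forward directive for illustration):

```shell
#!/bin/sh
set -e
# Stand-in for /tmp/coredns.yaml (copied from /snap/microk8s/current/actions/).
CM=$(mktemp)
printf '    forward . $NAMESERVERS\n' > "$CM"

# Point CoreDNS at real upstream resolvers so the pod stops crash-looping.
sed -i 's|forward . $NAMESERVERS|forward . 8.8.8.8 8.8.4.4|' "$CM"
cat "$CM"
# then: microk8s kubectl apply -f /tmp/coredns.yaml
```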

nlnjnj avatar May 06 '21 08:05 nlnjnj

Hi, for some reason my coredns pod is not coming up while following the guide:

NAMESPACE     NAME                                           READY   STATUS             RESTARTS        AGE
kube-system   pod/calicoctl                                  1/1     Running            1 (2m10s ago)   14m
kube-system   pod/calico-kube-controllers-54c85446d4-4m97b   1/1     Running            4 (2m10s ago)   39m
kube-system   pod/calico-node-gv5mc                          1/1     Running            2 (2m10s ago)   16m
kube-system   pod/coredns-d489fb88-qwc8f                     0/1     CrashLoopBackOff   6 (26s ago)     2m54s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   192.168.81.97   <none>        443/TCP                  39m
kube-system   service/kube-dns     ClusterIP   192.168.81.99   <none>        53/UDP,53/TCP,9153/TCP   2m54s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   39m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           39m
kube-system   deployment.apps/coredns                   0/1     1            0           2m54s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-54c85446d4   1         1         1       39m
kube-system   replicaset.apps/coredns-d489fb88                     1         1         0       2m54s

The events are

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  17s                default-scheduler  Successfully assigned kube-system/coredns-d489fb88-t4md9 to ip-192-168-81-126
  Normal   Pulled     16s (x2 over 17s)  kubelet            Container image "coredns/coredns:1.9.3" already present on machine
  Normal   Created    16s (x2 over 17s)  kubelet            Created container coredns
  Normal   Started    15s (x2 over 17s)  kubelet            Started container coredns
  Warning  BackOff    7s (x4 over 15s)   kubelet            Back-off restarting failed container

What could be the reason?

uchiha-pain avatar Nov 15 '22 09:11 uchiha-pain

Now I am trying to change the IP CIDR for pods only, and all my pods crashed after following the guide below.

The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

  • Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument.
  • If you are also setting up a proxy update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges
  • Restart MicroK8s with microk8s stop; microk8s start.
  • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and replace the IP range in - name: CALICO_IPV4POOL_CIDR value: "10.1.0.0/16"
  • Apply the above yaml with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml

LAST SEEN   TYPE      REASON                   OBJECT                                         MESSAGE
5m9s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "39bd045ac4a60e09c5f9feec2c17d687197b1ca423af383a5e0e223f63b1d1a5": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
4m56s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "21fd79ec15b8075f38db1558c47e2fa56eb45230c55c5cb7392f6656d00ec830": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
3m32s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4777b33fa4e4c0b4a99f4d15d217981844e55a14e4fa485614fc5f8ea8c2c13a": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m56s       Normal    SandboxChanged           pod/calico-node-qmnmf                          Pod sandbox changed, it will be killed and re-created.
2m55s       Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/cni:v3.23.3" already present on machine
2m55s       Normal    Created                  pod/calico-node-qmnmf                          Created container upgrade-ipam
2m55s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b541e80d9567e646d2da8b157da907b8580891145f73209e749ea88f26a59940": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m55s       Normal    Started                  pod/calico-node-qmnmf                          Started container upgrade-ipam
2m55s       Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/cni:v3.23.3" already present on machine
2m55s       Normal    Created                  pod/calico-node-qmnmf                          Created container install-cni
2m54s       Normal    Started                  pod/calico-node-qmnmf                          Started container install-cni
2m43s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c0c2b090d72ea457967629a6f949564de57007febe0f08889d95946f4b76faf9": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m32s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1e45b4188db7e4bd433d5389f3c7220ec0600983c8c4226db593b6b29c1c6046": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m17s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "88029f9160b78507efc4e51b421e5459fbedc829234c1c8f239244e84b903944": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m3s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "17de4a98557f07a239e27ac8a27d332af81b1aaee6e50f5a525dff74b1fdfc6b": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m          Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/node:v3.23.3" already present on machine
2m          Normal    Created                  pod/calico-node-qmnmf                          Created container calico-node
2m          Normal    Started                  pod/calico-node-qmnmf                          Started container calico-node
118s        Warning   BackOff                  pod/calico-node-qmnmf                          Back-off restarting failed container
114s        Warning   BackOff                  pod/calico-node-6l8bp                          Back-off restarting failed container
111s        Warning   FailedCreatePodSandBox   pod/calico-kube-controllers-54c85446d4-7xss2   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ad0a77ebf57264879f7cb68860ab1c5b1f0c06df570a59c0228e52fa67d72e98": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
110s        Warning   BackOff                  pod/calico-node-577p5                          Back-off restarting failed container
109s        Warning   BackOff                  pod/calico-node-gxnxv                          Back-off restarting failed container
108s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "df2efd5a6f203de330ca2e9b6db4fc59687b1d4b7454d2b70788d133680af1a4": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
93s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "13b4202e8c0c7af66b2c8561a6ede40274b83b255fc567bf7a3d974944367edd": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
80s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "50b1b4bd74251e5d3b06b31d6ef1895350bbd6db4d92cabd4a129c0701e56b1e": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
66s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a27b18f04f20ff1fd70b31be73bcf2aa2de211e902750bf58fd7abc63f47a164": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
13s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "49fe3a56e2107fe2933eac802302fa6628f86355ea469006668c316cebecdbf1": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found

uchiha-pain avatar Nov 15 '22 15:11 uchiha-pain