
IPv6-only clusters not supported

Open Frankkkkk opened this issue 4 years ago • 5 comments

Hi, I wanted to give vcluster a try, but it seems that IPv6-only Kubernetes clusters are not yet fully supported:

kl -n host-namespace-1 logs -f vcluster-1-0 vcluster
time="2021-05-18T09:56:40.133915808Z" level=info msg="Starting k3s v1.18.16+k3s1 (8c7dd139)"
time="2021-05-18T09:56:40.161845141Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-05-18T09:56:40.162067426Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-05-18T09:56:40.193732078Z" level=info msg="Database tables and indexes are up to date"
time="2021-05-18T09:56:40.200156204Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-05-18T09:56:40.620849868Z" level=info msg="Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-2001:1600:caca:1002:1d:12e:ffff:4b6d:2001:1600:caca:1002:1d:12e:ffff:4b6d listener.cattle.io/cn-2001:1600:caca:50da:1d:fffe:0:1:2001:1600:caca:50da:1d:fffe:0:1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:58e35fc7fbc214c1d0e3c5f72f84a49a8ee8f2ec2eb4cfe34a546604d09d32f4]"
time="2021-05-18T09:56:40.644266915Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/data/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/data/server/tls/temporary-certs --client-ca-file=/data/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/data/server/tls/server-ca.crt --kubelet-client-certificate=/data/server/tls/client-kube-apiserver.crt --kubelet-client-key=/data/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/data/server/tls/client-auth-proxy.crt --proxy-client-key-file=/data/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/data/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/data/server/tls/service.key --service-account-signing-key-file=/data/server/tls/service.key --service-cluster-ip-range=2001:1600:caca:50da:1d:fffe::/110 --storage-backend=etcd3 --tls-cert-file=/data/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/data/server/tls/serving-kube-apiserver.key"
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I0518 09:56:40.650575       1 server.go:645] external host was not specified, using 2001:1600:caca:1002:1d:12e:ffff:4b6d
I0518 09:56:40.651761       1 server.go:162] Version: v1.18.16+k3s1
I0518 09:56:41.065909       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0518 09:56:41.066022       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0518 09:56:41.081038       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0518 09:56:41.081190       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0518 09:56:41.138573       1 master.go:270] Using reconciler: lease
I0518 09:56:41.323665       1 rest.go:113] the default service ipfamily for this cluster is: IPv6
W0518 09:56:42.195315       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0518 09:56:42.216467       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0518 09:56:42.245175       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0518 09:56:42.288855       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0518 09:56:42.297130       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0518 09:56:42.333489       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0518 09:56:42.380667       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0518 09:56:42.380700       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0518 09:56:42.403280       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0518 09:56:42.403322       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0518 09:56:46.561540       1 secure_serving.go:178] Serving securely on 127.0.0.1:6444
I0518 09:56:46.561672       1 crd_finalizer.go:266] Starting CRDFinalizer
I0518 09:56:46.561764       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I0518 09:56:46.561808       1 dynamic_serving_content.go:130] Starting serving-cert::/data/server/tls/serving-kube-apiserver.crt::/data/server/tls/serving-kube-apiserver.key
I0518 09:56:46.562047       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0518 09:56:46.563094       1 controller.go:86] Starting OpenAPI controller
I0518 09:56:46.563180       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0518 09:56:46.563205       1 naming_controller.go:291] Starting NamingConditionController
I0518 09:56:46.563224       1 establishing_controller.go:76] Starting EstablishingController
I0518 09:56:46.563242       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0518 09:56:46.563259       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0518 09:56:46.563286       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I0518 09:56:46.578483       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0518 09:56:46.578511       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0518 09:56:46.578563       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0518 09:56:46.578570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0518 09:56:46.578597       1 available_controller.go:404] Starting AvailableConditionController
I0518 09:56:46.578602       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0518 09:56:46.578657       1 autoregister_controller.go:141] Starting autoregister controller
I0518 09:56:46.578662       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0518 09:56:46.578694       1 controller.go:81] Starting OpenAPI AggregationController
I0518 09:56:46.581360       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I0518 09:56:46.581417       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I0518 09:56:46.581779       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0518 09:56:46.584247       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
E0518 09:56:46.658946       1 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "2001:1600:caca:50da:1d:fffe:0:1": cannot allocate resources of type serviceipallocations at this time
E0518 09:56:46.667561       1 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/2001:1600:caca:1002:1d:12e:ffff:4b6d, ResourceVersion: 0, AdditionalErrorMsg: 
I0518 09:56:46.679560       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0518 09:56:46.679613       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
I0518 09:56:46.681339       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0518 09:56:46.681383       1 cache.go:39] Caches are synced for autoregister controller
I0518 09:56:46.688337       1 shared_informer.go:230] Caches are synced for crd-autoregister 
I0518 09:56:47.558741       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0518 09:56:47.558788       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0518 09:56:47.589383       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0518 09:56:47.606356       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0518 09:56:47.606385       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0518 09:56:49.765228       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0518 09:56:49.889568       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0518 09:56:50.156090       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [2001:1600:caca:1002:1d:12e:ffff:4b6d]
I0518 09:56:50.157759       1 controller.go:609] quota admission added evaluator for: endpoints
I0518 09:56:50.174678       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2021-05-18T09:56:50.618681934Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/data/server/tls/server-ca.crt --cluster-signing-key-file=/data/server/tls/server-ca.key --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle --kubeconfig=/data/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/data/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/data/server/tls/service.key --use-service-account-credentials=true"
I0518 09:56:50.626095       1 controllermanager.go:161] Version: v1.18.16+k3s1
I0518 09:56:50.628217       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
time="2021-05-18T09:56:50.655583671Z" level=info msg="Creating CRD addons.k3s.cattle.io"
time="2021-05-18T09:56:50.681262455Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2021-05-18T09:56:50.711929954Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
time="2021-05-18T09:56:50.743876926Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
I0518 09:56:50.899874       1 plugins.go:100] No cloud provider specified.
I0518 09:56:50.903791       1 shared_informer.go:223] Waiting for caches to sync for tokens
I0518 09:56:50.935018       1 controller.go:609] quota admission added evaluator for: serviceaccounts
I0518 09:56:50.942311       1 controllermanager.go:533] Started "replicationcontroller"
W0518 09:56:50.942566       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
W0518 09:56:50.942626       1 controllermanager.go:512] "nodelifecycle" is disabled
I0518 09:56:50.942381       1 replica_set.go:182] Starting replicationcontroller controller
I0518 09:56:50.943366       1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
I0518 09:56:51.008386       1 shared_informer.go:230] Caches are synced for tokens 
I0518 09:56:51.032146       1 controllermanager.go:533] Started "disruption"
W0518 09:56:51.032173       1 controllermanager.go:512] "bootstrapsigner" is disabled
I0518 09:56:51.032480       1 disruption.go:331] Starting disruption controller
I0518 09:56:51.032498       1 shared_informer.go:223] Waiting for caches to sync for disruption
I0518 09:56:51.103384       1 controllermanager.go:533] Started "job"
I0518 09:56:51.103424       1 job_controller.go:145] Starting job controller
I0518 09:56:51.103449       1 shared_informer.go:223] Waiting for caches to sync for job
I0518 09:56:51.165042       1 controllermanager.go:533] Started "replicaset"
I0518 09:56:51.165297       1 replica_set.go:182] Starting replicaset controller
I0518 09:56:51.165340       1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
time="2021-05-18T09:56:51.267636533Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-05-18T09:56:51.267687711Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
I0518 09:56:51.286705       1 controllermanager.go:533] Started "horizontalpodautoscaling"
W0518 09:56:51.286739       1 controllermanager.go:512] "tokencleaner" is disabled
W0518 09:56:51.286747       1 controllermanager.go:512] "persistentvolume-binder" is disabled
W0518 09:56:51.286753       1 controllermanager.go:512] "attachdetach" is disabled
I0518 09:56:51.287081       1 horizontal.go:169] Starting HPA controller
I0518 09:56:51.287111       1 shared_informer.go:223] Waiting for caches to sync for HPA
I0518 09:56:51.750642       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0518 09:56:51.750755       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0518 09:56:51.750798       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0518 09:56:51.750881       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0518 09:56:51.750930       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0518 09:56:51.750994       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0518 09:56:51.751097       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0518 09:56:51.751197       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0518 09:56:51.751231       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0518 09:56:51.751259       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0518 09:56:51.751306       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0518 09:56:51.751342       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0518 09:56:51.751404       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0518 09:56:51.751463       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0518 09:56:51.751494       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0518 09:56:51.751521       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0518 09:56:51.751554       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0518 09:56:51.751584       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0518 09:56:51.751613       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0518 09:56:51.751651       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0518 09:56:51.751725       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0518 09:56:51.751758       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0518 09:56:51.751785       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0518 09:56:51.751839       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0518 09:56:51.751862       1 controllermanager.go:533] Started "resourcequota"
I0518 09:56:51.751903       1 resource_quota_controller.go:272] Starting resource quota controller
I0518 09:56:51.751933       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0518 09:56:51.751970       1 resource_quota_monitor.go:303] QuotaMonitor running
time="2021-05-18T09:56:51.779410685Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2021-05-18T09:56:51.779456015Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2021-05-18T09:56:52.289946016Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2021-05-18T09:56:52.324247973Z" level=info msg="Writing static file: /data/server/static/charts/traefik-1.81.0.tgz"
time="2021-05-18T09:56:52.345976569Z" level=info msg="Writing manifest: /data/server/manifests/rolebindings.yaml"
time="2021-05-18T09:56:52.350008423Z" level=info msg="Writing manifest: /data/server/manifests/ccm.yaml"
time="2021-05-18T09:56:52.359111781Z" level=info msg="Writing manifest: /data/server/manifests/coredns.yaml"
time="2021-05-18T09:56:52.461556884Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-05-18T09:56:52.463023570Z" level=info msg="Node token is available at /data/server/token"
time="2021-05-18T09:56:52.463609283Z" level=info msg="To join node to cluster: k3s agent -s https://2001:1600:caca:1002:1d:12e:ffff:4b6d:6443 -t ${NODE_TOKEN}"
I0518 09:56:52.529683       1 controllermanager.go:533] Started "garbagecollector"
I0518 09:56:52.530331       1 garbagecollector.go:133] Starting garbage collector controller
I0518 09:56:52.530353       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0518 09:56:52.530391       1 graph_builder.go:282] GraphBuilder running
2021-05-18 09:56:52.550448 I | http: TLS handshake error from 127.0.0.1:37086: remote error: tls: bad certificate
time="2021-05-18T09:56:52.563547895Z" level=info msg="Wrote kubeconfig /k3s-config/kube-config.yaml"
time="2021-05-18T09:56:52.563609983Z" level=info msg="Run: k3s kubectl"
time="2021-05-18T09:56:52.563627200Z" level=info msg="k3s is up and running"
I0518 09:56:52.597942       1 controllermanager.go:533] Started "statefulset"
I0518 09:56:52.598518       1 stateful_set.go:146] Starting stateful set controller
I0518 09:56:52.598548       1 shared_informer.go:223] Waiting for caches to sync for stateful set
I0518 09:56:52.779406       1 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io
I0518 09:56:52.781435       1 controllermanager.go:533] Started "csrapproving"
I0518 09:56:52.781684       1 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0518 09:56:52.781696       1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
time="2021-05-18T09:56:52.833523204Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2021-05-18T09:56:52.833897214Z" level=info msg="Starting /v1, Kind=Node controller"
time="2021-05-18T09:56:52.834071443Z" level=info msg="Starting /v1, Kind=Service controller"
time="2021-05-18T09:56:52.834142579Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2021-05-18T09:56:52.834232913Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2021-05-18T09:56:52.863677201Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2021-05-18T09:56:52.863887128Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2021-05-18T09:56:52.863967949Z" level=info msg="Starting batch/v1, Kind=Job controller"
I0518 09:56:52.882701       1 controllermanager.go:533] Started "ttl"
W0518 09:56:52.882733       1 controllermanager.go:512] "persistentvolume-expander" is disabled
W0518 09:56:52.882749       1 controllermanager.go:525] Skipping "ttl-after-finished"
I0518 09:56:52.882747       1 ttl_controller.go:118] Starting TTL controller
I0518 09:56:52.882877       1 shared_informer.go:223] Waiting for caches to sync for TTL
I0518 09:56:52.898746       1 request.go:621] Throttling request took 1.040939299s, request: GET:https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1?timeout=32s
I0518 09:56:52.940150       1 controllermanager.go:533] Started "endpointslice"
I0518 09:56:52.940367       1 endpointslice_controller.go:213] Starting endpoint slice controller
I0518 09:56:52.940456       1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0518 09:56:52.976228       1 controllermanager.go:533] Started "daemonset"
I0518 09:56:52.976453       1 daemon_controller.go:286] Starting daemon sets controller
I0518 09:56:52.976469       1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0518 09:56:53.099704       1 controllermanager.go:533] Started "pv-protection"
W0518 09:56:53.099900       1 controllermanager.go:512] "nodeipam" is disabled
I0518 09:56:53.099876       1 pv_protection_controller.go:83] Starting PV protection controller
I0518 09:56:53.100147       1 shared_informer.go:223] Waiting for caches to sync for PV protection
I0518 09:56:53.131029       1 controllermanager.go:533] Started "clusterrole-aggregation"
I0518 09:56:53.131293       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0518 09:56:53.131305       1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
I0518 09:56:53.168358       1 controllermanager.go:533] Started "csrcleaner"
I0518 09:56:53.168634       1 cleaner.go:82] Starting CSR cleaner controller
I0518 09:56:53.180249       1 controller.go:609] quota admission added evaluator for: deployments.apps
I0518 09:56:53.233142       1 controllermanager.go:533] Started "pvc-protection"
I0518 09:56:53.233470       1 pvc_protection_controller.go:101] Starting PVC protection controller
I0518 09:56:53.233490       1 shared_informer.go:223] Waiting for caches to sync for PVC protection
I0518 09:56:53.313134       1 gc_controller.go:89] Starting GC controller
I0518 09:56:53.313319       1 shared_informer.go:223] Waiting for caches to sync for GC
I0518 09:56:53.313843       1 controllermanager.go:533] Started "podgc"
time="2021-05-18T09:56:53.330485996Z" level=error msg="failed to process config: failed to process /data/server/manifests/coredns.yaml: failed to create kube-system/kube-dns /v1, Kind=Service for  kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"0.0.0.10\": provided IP is not in the valid range. The range of valid IPs is 2001:1600:caca:50da:1d:fffe::/110"
I0518 09:56:53.433840       1 controllermanager.go:533] Started "namespace"
I0518 09:56:53.433898       1 namespace_controller.go:200] Starting namespace controller
I0518 09:56:53.433915       1 shared_informer.go:223] Waiting for caches to sync for namespace
I0518 09:56:53.466497       1 controllermanager.go:533] Started "endpoint"
I0518 09:56:53.466677       1 endpoints_controller.go:181] Starting endpoint controller
I0518 09:56:53.466695       1 shared_informer.go:223] Waiting for caches to sync for endpoint
I0518 09:56:53.484052       1 controllermanager.go:533] Started "cronjob"
I0518 09:56:53.484301       1 cronjob_controller.go:97] Starting CronJob Manager
I0518 09:56:53.504434       1 controllermanager.go:533] Started "csrsigning"
I0518 09:56:53.504650       1 certificate_controller.go:119] Starting certificate controller "csrsigning"
I0518 09:56:53.504664       1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
I0518 09:56:53.504706       1 dynamic_serving_content.go:130] Starting csr-controller::/data/server/tls/server-ca.crt::/data/server/tls/server-ca.key
E0518 09:56:53.553865       1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0518 09:56:53.553891       1 controllermanager.go:525] Skipping "service"
W0518 09:56:53.553907       1 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0518 09:56:53.553915       1 controllermanager.go:525] Skipping "route"
W0518 09:56:53.553921       1 controllermanager.go:512] "cloud-node-lifecycle" is disabled
I0518 09:56:53.588354       1 controllermanager.go:533] Started "serviceaccount"
I0518 09:56:53.588647       1 serviceaccounts_controller.go:117] Starting service account controller
I0518 09:56:53.588660       1 shared_informer.go:223] Waiting for caches to sync for service account
I0518 09:56:53.639884       1 controllermanager.go:533] Started "deployment"
I0518 09:56:53.640326       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0518 09:56:53.640420       1 deployment_controller.go:153] Starting deployment controller
I0518 09:56:53.640435       1 shared_informer.go:223] Waiting for caches to sync for deployment
I0518 09:56:53.687679       1 shared_informer.go:230] Caches are synced for HPA 
I0518 09:56:53.690451       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0518 09:56:53.694704       1 shared_informer.go:230] Caches are synced for service account 
I0518 09:56:53.695090       1 shared_informer.go:230] Caches are synced for TTL 
I0518 09:56:53.700682       1 shared_informer.go:230] Caches are synced for PV protection 
I0518 09:56:53.710284       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0518 09:56:53.713943       1 shared_informer.go:230] Caches are synced for GC 
I0518 09:56:53.732358       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
I0518 09:56:53.733737       1 shared_informer.go:230] Caches are synced for PVC protection 
I0518 09:56:53.733857       1 shared_informer.go:230] Caches are synced for disruption 
I0518 09:56:53.733876       1 disruption.go:339] Sending events to api server.
I0518 09:56:53.734456       1 shared_informer.go:230] Caches are synced for namespace 
I0518 09:56:53.741283       1 shared_informer.go:230] Caches are synced for deployment 
I0518 09:56:53.744736       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0518 09:56:53.746240       1 shared_informer.go:230] Caches are synced for ReplicationController 
I0518 09:56:53.771405       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0518 09:56:53.905065       1 controller.go:609] quota admission added evaluator for: replicasets.apps
E0518 09:56:53.928396       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0518 09:56:53.937637       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1fb191d7-431c-487d-b2d3-199da1c21b9c", APIVersion:"apps/v1", ResourceVersion:"222", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7944c66d8d to 1
I0518 09:56:53.966966       1 shared_informer.go:230] Caches are synced for endpoint 
E0518 09:56:53.976003       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0518 09:56:53.976667       1 shared_informer.go:230] Caches are synced for daemon sets 
I0518 09:56:53.996759       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0518 09:56:53.998815       1 shared_informer.go:230] Caches are synced for stateful set 
I0518 09:56:54.028963       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7944c66d8d", UID:"82703c86-f233-40d3-acfb-1a681563aed9", APIVersion:"apps/v1", ResourceVersion:"261", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7944c66d8d-g5f4n
I0518 09:56:54.152214       1 shared_informer.go:230] Caches are synced for resource quota 
I0518 09:56:54.203926       1 shared_informer.go:230] Caches are synced for job 
I0518 09:56:54.240639       1 shared_informer.go:230] Caches are synced for resource quota 
I0518 09:56:54.297126       1 shared_informer.go:230] Caches are synced for garbage collector 
I0518 09:56:54.330647       1 shared_informer.go:230] Caches are synced for garbage collector 
I0518 09:56:54.330683       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0518 09:57:00.128646       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [2001:1600:caca:1002:1d:12e:ffff:4b6d]
time="2021-05-18T09:57:08.598544058Z" level=error msg="failed to process config: failed to process /data/server/manifests/coredns.yaml: failed to create kube-system/kube-dns /v1, Kind=Service for  kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"0.0.0.10\": provided IP is not in the valid range. The range of valid IPs is 2001:1600:caca:50da:1d:fffe::/110"
[... the same "kube-dns" Service error repeats roughly every 15 seconds ...]
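For context on the failing value: Kubernetes conventionally places the cluster DNS Service at the 10th address of the service CIDR (e.g. 10.43.0.10 on a default k3s install). The log suggests the offset was computed with IPv4-only arithmetic, yielding "0.0.0.10" instead of the corresponding address inside the IPv6 range. A quick sketch with Python's `ipaddress` module shows what the in-range value would be:

```python
import ipaddress

# The service CIDR from the log above.
service_cidr = ipaddress.ip_network("2001:1600:caca:50da:1d:fffe::/110")

# The 10th address of the range -- the IPv6 counterpart of the
# conventional x.x.0.10 cluster DNS ClusterIP.
kube_dns_ip = service_cidr[10]
print(kube_dns_ip)
```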

Don't hesitate if you need more information !

Cheers

Frankkkkk avatar May 18 '21 10:05 Frankkkkk

@Frankkkkk thanks for creating this issue! To be honest, we haven't had the time to test IPv6-only clusters yet, but I don't see a reason why it shouldn't work, as we don't have any IPv4-only code in vcluster. I suspect the problem is the k3s settings we use, as they might not be sufficient for IPv6-only clusters.

FabianKramm avatar May 18 '21 10:05 FabianKramm

K3s does not currently support IPv6-only. I tried to implement it in https://github.com/k3s-io/k3s/pull/4450, but since a normal k3s deployment needs Flannel, https://github.com/flannel-io/flannel/issues/1453 needs to be solved first. It is also worth mentioning that K8s 1.23 will promote dual-stack to stable, which is why there have been some changes to kube-proxy that will require changes to k3s as well.

olljanat avatar Nov 23 '21 10:11 olljanat

@olljanat Thank you for that context :)

richburroughs avatar Nov 23 '21 17:11 richburroughs

Btw, now that vcluster supports k0s and k8s, it should be possible to support IPv6-only too.

I think the best option would be to add a new `--use-ipv6-cidr` flag which would then set the control plane parameters as described in Calico's documentation https://docs.projectcalico.org/networking/ipv6-control-plane (especially `--node-ip` and `--bind-address` are critical), and set `.spec.ipFamilyPolicy: SingleStack` + `.spec.ipFamilies: ["IPv6"]` on the vcluster services.

Then it should be possible to run vclusters on top of an IPv6-only host cluster, or alternatively, if the host cluster is running a dual-stack configuration, it could run both IPv4 and IPv6 vclusters.
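A sketch of what that could look like on a vcluster Service (the name, namespace, selector, and ports below are placeholders for illustration, not values taken from the actual chart):

```yaml
# Hypothetical IPv6-only vcluster Service; only ipFamilyPolicy and
# ipFamilies are the point here, the rest is illustrative.
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1
  namespace: host-namespace-1
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: vcluster
  ports:
    - port: 443
      targetPort: 8443
```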

olljanat avatar Nov 26 '21 16:11 olljanat

@Frankkkkk have you tried this again? IPv6-only mode has been supported by k3s at some level since my PR https://github.com/k3s-io/k3s/pull/4450 was merged, and I see there have been further improvements in the latest version https://github.com/k3s-io/k3s/releases/tag/v1.23.6%2Bk3s1

So it would probably be a good idea to test with a values file like this:

```yaml
vcluster:
  image: rancher/k3s:v1.23.6+k3s1
```

olljanat avatar May 09 '22 20:05 olljanat

Sounds like this might already work. We will close this issue unless we hear that it still doesn't work with the latest vcluster version and a recent k3s.

matskiv avatar Nov 02 '22 19:11 matskiv