this cluster is not configured for dual-stack services

IgalSc opened this issue 2 years ago • 5 comments

/kind bug

1. What kops version are you running? The command kops version will display this information.

Client version: 1.24.1 (git-v1.24.1)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:47:37Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops create cluster --cloud aws \
                    --vpc $VPC_ID \
                    --node-count 2 \
                    --zones us-east-1a,us-east-1b \
                    --master-zones us-east-1a,us-east-1b,us-east-1c \
                    --node-size $NODE_SIZE  \
                    --master-count 3 \
                    --master-size $MASTER_SIZE  \
                    --networking calico \
                    --ssh-public-key ~/.ssh/id_rsa.pub \
                    --cloud-labels  "Cost=NewDevKubernetesCluster" \
                    --ipv6

After the cluster is created and validated, I'm trying to create an nginx Service with a dual-stack load balancer.
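
The Service manifest itself is not included in the report, but from the error below it evidently set ipFamilyPolicy: RequireDualStack. A minimal sketch of such a manifest, with the selector and ports assumed for illustration:

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
spec:
  type: LoadBalancer
  # Rejected at admission on this cluster: "not configured for dual-stack services"
  ipFamilyPolicy: RequireDualStack
  selector:
    app: nginx        # assumed label; not shown in the original report
  ports:
    - port: 80
      targetPort: 80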

5. What happened after the commands executed? The cluster is created and validated, but the nginx Service fails:

The Service "svc-nginx" is invalid: spec.ipFamilyPolicy: Invalid value: "RequireDualStack": this cluster is not configured for dual-stack services

6. What did you expect to happen? The cluster should have dual-stack support

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2022-08-30T17:07:55Z"
  generation: 1
  name: devcluster.dev.domain.name
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudControllerManager: {}
  cloudLabels:
    Cost: NewDevKubernetesCluster
  cloudProvider: aws
  configBase: s3://devcluster-kops-state-store/devcluster.dev.domain.name
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-us-east-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-us-east-1c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-us-east-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-us-east-1c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.24.4
  masterInternalName: api.internal.devcluster.dev.domain.name
  masterPublicName: api.devcluster.dev.domain.name
  networkCIDR: 172.30.0.0/16
  networkID: vpc-ID
  networking:
    calico: {}
  nonMasqueradeCIDR: ::/0
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.30.32.0/19
    ipv6CIDR: 2600:a:b:c::/64
    name: us-east-1a
    type: Public
    zone: us-east-1a
  - cidr: 172.30.64.0/19
    ipv6CIDR: 2600:a:b:d::/64
    name: us-east-1b
    type: Public
    zone: us-east-1b
  - cidr: 172.30.96.0/19
    ipv6CIDR: 2600:a:b:e::/64
    name: us-east-1c
    type: Public
    zone: us-east-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

Based on the kops documentation (https://kops.sigs.k8s.io/networking/ipv6/), kOps has experimental support for configuring clusters with IPv6-only pods and dual-stack nodes. IPv6 mode is specified by setting nonMasqueradeCIDR: "::/0" in the cluster spec. The --ipv6 flag of kops create cluster sets this field, among others.

IgalSc • Aug 30 '22 18:08

You cannot create a dual-stack load balancer like that, unfortunately. You have to use the load balancer controller addon and then an NLB with the dualstack annotation (not the ipFamily* fields).

olemarkus • Aug 31 '22 09:08
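
A sketch of what this suggests, assuming the AWS Load Balancer Controller addon (linked later in the thread) is installed. The annotations come from the controller's NLB documentation; the Service name, selector, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  annotations:
    # Hand the Service to the AWS Load Balancer Controller instead of the in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # The dualstack annotation referred to above
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: nginx        # assumed label
  ports:
    - port: 80
      targetPort: 80

Note that spec.ipFamilyPolicy is left unset here; the dual-stack behavior is requested through the annotations rather than the ipFamily* fields.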

@olemarkus By "load balancer controller addon" do you mean spec.api.LoadBalancer? And a network load balancer in the Service config?

IgalSc • Aug 31 '22 12:08

Sorry. Should have linked. See https://kops.sigs.k8s.io/addons/#aws-load-balancer-controller

olemarkus • Aug 31 '22 13:08
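
For reference, the linked kops page enables the controller through the cluster spec rather than spec.api; a sketch of the relevant excerpt, per that documentation (which also notes the addon requires cert-manager):

spec:
  certManager:
    enabled: true              # required by the load balancer controller addon
  awsLoadBalancerController:
    enabled: true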

@olemarkus thank you! Where do I use the NLB then?

IgalSc • Aug 31 '22 13:08

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/

olemarkus • Sep 01 '22 17:09

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot • Nov 30 '22 18:11

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot • Dec 30 '22 19:12

/close

johngmyers • Dec 30 '22 19:12

@johngmyers: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot • Dec 30 '22 19:12