Support IPv6 for ambient
Bug Description
I am trying to install the ambient mesh on an IPv6 single-stack kind cluster, but the installation fails.
My setup is a kind cluster with the three-node (one control plane, two workers) config below.
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster1-ipv6
nodes:
- role: control-plane
  image: kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae
- role: worker
  image: kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae
- role: worker
  image: kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae
networking:
  ipFamily: ipv6
```
The installation failed with the logs below.

```
$ bin/istioctl install --set profile=ambient
This will install the Istio 0.0.0 ambient profile with ["Istio core" "Istiod" "CNI" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✘ Istiod encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
✔ CNI installed
✔ Ingress gateways installed
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
```
Pod status in the istio-system namespace:

```
$ k get pods -n istio-system
NAME                                   READY   STATUS                  RESTARTS       AGE
istio-cni-node-8smkj                   1/1     Running                 0              6m44s
istio-cni-node-khrd2                   1/1     Running                 0              6m44s
istio-cni-node-kxrdh                   1/1     Running                 0              6m44s
istio-ingressgateway-f6d95c86b-jp6w4   1/1     Running                 0              6m44s
istiod-6c99d96db7-h9pr9                1/1     Running                 0              11m
ztunnel-5284f                          0/1     Init:CrashLoopBackOff   6 (5m5s ago)   11m
ztunnel-lr8wh                          0/1     Init:CrashLoopBackOff   6 (5m5s ago)   11m
ztunnel-mx9x7                          0/1     Init:Error              7 (5m15s ago)  11m
```
```
$ k logs ztunnel-5284f -n istio-system
Defaulted container "istio-proxy" out of: istio-proxy, istio-init (init)
Error from server (BadRequest): container "istio-proxy" in pod "ztunnel-5284f" is waiting to start: PodInitializing
```
```
$ k logs ztunnel-5284f -c istio-init -n istio-system
+ PROXY_OUTBOUND_MARK=0x401/0xfff
+ PROXY_INBOUND_MARK=0x402/0xfff
+ PROXY_ORG_SRC_MARK=0x4d2/0xfff
+ MARK=0x400/0xfff
+ ORG_SRC_RET_MARK=0x4d3/0xfff
+ POD_OUTBOUND=15001
+ POD_INBOUND=15008
+ POD_INBOUND_PLAINTEXT=15006
+ OUTBOUND_MASK=0x100
+ OUTBOUND_MARK=0x100/0x100
+ SKIP_MASK=0x200
+ SKIP_MARK=0x200/0x200
+ CONNSKIP_MASK=0x220
+ CONNSKIP_MARK=0x220/0x220
+ PROXY_MASK=0x210
+ PROXY_MARK=0x210/0x210
+ PROXY_RET_MASK=0x040
+ PROXY_RET_MARK=0x040/0x040
+ INBOUND_TUN=istioin
+ OUTBOUND_TUN=istioout
+ INBOUND_TUN_IP=192.168.126.1
+ ZTUNNEL_INBOUND_TUN_IP=192.168.126.2
+ OUTBOUND_TUN_IP=192.168.127.1
+ ZTUNNEL_OUTBOUND_TUN_IP=192.168.127.2
+ TUN_PREFIX=30
+ INBOUND_ROUTE_TABLE=100
+ INBOUND_ROUTE_TABLE2=103
+ OUTBOUND_ROUTE_TABLE=101
+ PROXY_ROUTE_TABLE=102
+ set +e
+ ip link del pistioin
Cannot find device "pistioin"
+ ip link del pistioout
Cannot find device "pistioout"
+ set -e
+ ip route
+ grep default
+ awk '{print $3}'
+ HOST_IP=
+ ip link add name pistioin type geneve id 1000 remote
Command line is not complete. Try option "help"
```
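The trace above shows the immediate failure: `HOST_IP` is derived from `ip route | grep default | awk '{print $3}'`, which inspects only the IPv4 routing table. On an IPv6-only node that pipeline prints nothing, so the `geneve ... remote` argument is empty and `ip link add` rejects the command. A minimal sketch of a family-aware fallback (the `parse_default_gw` helper and the sample route strings are illustrative, not from the Istio init script):

```shell
# Hypothetical sketch: extract the default gateway from route-table text,
# mimicking the script's `ip route | grep default | awk '{print $3}'`.
parse_default_gw() {
  printf '%s\n' "$1" | awk '/^default/ {print $3; exit}'
}

# IPv6-only node: `ip route` (the IPv4 table) has no default entry.
V4_ROUTES=""
# `ip -6 route` output on the same node (illustrative values).
V6_ROUTES="default via fe80::1 dev eth0 metric 1024"

HOST_IP=$(parse_default_gw "$V4_ROUTES")
if [ -z "$HOST_IP" ]; then
  # Fall back to the IPv6 default route instead of passing an empty remote.
  HOST_IP=$(parse_default_gw "$V6_ROUTES")
fi
echo "$HOST_IP"
```

With the original script, `HOST_IP` stays empty on this cluster and every subsequent tunnel command fails, which is what keeps the ztunnel init container in CrashLoopBackOff.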
Version
```
$ bin/istioctl version
client version: 0.0.0-ambient.191fe680b52c1754ee72a06b3e0d3f9d116f2e82
control plane version: 0.0.0
data plane version: 0.0.0-ambient.191fe680b52c1754ee72a06b3e0d3f9d116f2e82 (1 proxies)

$ k version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.23.6
```
Additional Information
No response
This is expected for now; there are quite a few places where we assume IPv4.
Okay, hoping to see more IPv6-friendly contributions.
@howardjohn John, we would love to see Ambient gain some IPv6 support. Do you mind sharing insight into where IPv4 is assumed in the current implementation? We may be able to collaborate and contribute on this issue. Thanks.
The main part is the redirection. The main issue is that I hope it will be mostly replaced by https://github.com/istio/istio/pull/42372 anyway, so it may be throwaway work... but that isn't guaranteed to merge (or to merge soon).
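Beyond gateway detection, the redirection setup hardcodes IPv4 literals for the geneve tunnel endpoints (`INBOUND_TUN_IP=192.168.126.1`, `TUN_PREFIX=30`, etc. in the init trace above). An IPv6 port would need family-specific equivalents. A hedged sketch of what that selection could look like; the `fd00:101::/126` ULA addresses and the `select_tun_addrs` helper are placeholders, not values used by Istio:

```shell
# Hypothetical sketch (not Istio's implementation): pick tunnel endpoint
# addresses by IP family. A /126 carries four addresses, the IPv6
# analogue of the IPv4 /30 used for the point-to-point tunnel links.
select_tun_addrs() {
  family=$1
  if [ "$family" = "ipv6" ]; then
    INBOUND_TUN_IP="fd00:101::1"          # placeholder ULA, node side
    ZTUNNEL_INBOUND_TUN_IP="fd00:101::2"  # placeholder ULA, ztunnel side
    TUN_PREFIX=126
  else
    INBOUND_TUN_IP="192.168.126.1"
    ZTUNNEL_INBOUND_TUN_IP="192.168.126.2"
    TUN_PREFIX=30
  fi
}

select_tun_addrs ipv6
echo "$INBOUND_TUN_IP/$TUN_PREFIX"   # the address an `ip -6 addr add` step would assign
```

The same family switch would have to carry through to the `ip rule`/`ip route` table setup and the iptables vs. ip6tables mark rules, which is presumably part of what the linked PR reworks.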
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2023-01-09. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.
Created by the issue and PR lifecycle manager.