
NetworkPolicy denied when ipBlock belongs to podCIDR

Open · onelapahead opened this issue 4 months ago · 1 comment

Identical to https://github.com/cilium/cilium/issues/9209.

When creating two clusters on Alibaba Cloud (K8s 1.33, Terway 1.15.0; ENI Trunking: No; Network Policies: Yes; IPv4; Forwarding Mode: IPVS), I found that if I created a NetworkPolicy like so:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: peering
    namespace: default
  spec:
    ingress:
    - from:
      - ipBlock:
          cidr: 172.28.6.254/32 # pod IP of a pod in a different cluster in a different region
      - ipBlock:
          cidr: 172.24.16.60/32 # pod IP of a pod in the same cluster
      ports:
      - port: 4001
        protocol: TCP
    podSelector:
      matchLabels:
        app.kubernetes.io/component: someruntime
    policyTypes:
    - Ingress

The targeted someruntime pod could successfully connect to 172.28.6.254/32 via a VPC peering connection to a cluster in another region, but it could not connect to 172.24.16.60/32, a pod IP in the same cluster (the cluster's VPC CIDR being 172.24.0.0/18).

So, similar to Cilium, it seems Terway's eBPF datapath explicitly denies ipBlock rules if they match the cluster CIDR, prescribing that the user should use podSelector or namespaceSelector instead.

After chatting with the Kubernetes Networking SIG on Slack, I was told that this is a bug and that implementers of the NetworkPolicy API should allow this: https://kubernetes.slack.com/archives/C09QYUH5W/p1757701723707039.

Below are the contents of the eni-config ConfigMap for Terway, with IDs obfuscated:

  10-terway.conf: |
    {
      "cniVersion": "0.4.0",
      "name": "terway",
      "capabilities": {"bandwidth": true},
      "network_policy_provider": "ebpf",
      "type": "terway"
    }
  disable_network_policy: "false"
  eni_conf: |
    {
      "version": "1",
      "max_pool_size": 5,
      "min_pool_size": 0,
      "credential_path": "/var/addon/token-config",
      "enable_eni_trunking": true,
      "ipam_type": "crd",
      "vswitches": {"ap-southeast-5a":["vsw-***"],"ap-southeast-5b":["vsw-***"],"ap-southeast-5c":["vsw-***"]},
      "eni_tags": {"ack.aliyun.com":"***"},
      "service_cidr": "10.200.0.0/18",
      "security_group": "sg-***",
      "ip_stack": "ipv4",
      "resource_group_id": "rg-***",
      "vswitch_selection_policy": "ordered"
    }
  in_cluster_loadbalance: "true"

onelapahead · Sep 15 '25 17:09

Hi, thanks for the detailed report.

This is indeed a known behavior. It's an inherent limitation of the eBPF-based policy enforcement that we integrate from Cilium. For performance reasons, the eBPF datapath prioritizes identity-based policies (podSelector/namespaceSelector) over ipBlock for in-cluster traffic.

Therefore, we strongly recommend using podSelector instead of ipBlock to define rules for traffic originating from within the cluster. This is not just a workaround; it is also the best practice for performance.
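For example, a minimal sketch (not taken from this thread) of what that could look like: the in-cluster peer is selected by identity while ipBlock is kept only for the out-of-cluster peer. The namespace name and pod labels below are hypothetical placeholders and would need to match the actual source pod:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: peering
    namespace: default
  spec:
    ingress:
    - from:
      # out-of-cluster peer reached over VPC peering; ipBlock still applies here
      - ipBlock:
          cidr: 172.28.6.254/32
      # in-cluster peer matched by identity instead of IP
      # (namespace name and pod labels are hypothetical placeholders)
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: peer-namespace
        podSelector:
          matchLabels:
            app.kubernetes.io/component: peerclient
      ports:
      - port: 4001
        protocol: TCP
    podSelector:
      matchLabels:
        app.kubernetes.io/component: someruntime
    policyTypes:
    - Ingress

Combining namespaceSelector and podSelector in a single from entry restricts the rule to pods matching both, while the cross-region peer, which has no in-cluster identity, still needs the ipBlock entry.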

BSWANG · Sep 16 '25 12:09