
Multicast Support in Antrea

Open luolanzone opened this issue 3 years ago • 11 comments

Describe what you are trying to solve Currently Antrea does not support multicast traffic: all multicast packets are dropped by the data path in OVS. Since some multicast applications, such as those in financial services, require multicast support from the CNI, it would be ideal to have multicast supported in Antrea.

Describe the solution you have in mind Users can enable or disable multicast support in Antrea via a feature gate (e.g. Multicast: true). When multicast is enabled, multicast traffic is forwarded correctly both within a node and across nodes.
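For illustration, enabling the feature gate in the antrea-agent configuration could look like the sketch below, following Antrea's feature-gate convention (the exact file layout shown here is an assumption, not quoted from the design doc):

```yaml
# antrea-agent.conf (excerpt, illustrative): feature gates are a map of
# feature name to boolean; Multicast defaults to disabled.
featureGates:
  Multicast: true
```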

Describe how your solution impacts user flows Users can run multicast applications with Antrea.

Describe the main design/architecture of your solution The details of the Antrea multicast support design are here: https://docs.google.com/document/d/1SriZ3Tjy9Su-cH0foegKjx2esTuPdWg0UvjgHH95PaE/edit?usp=sharing

The design for ANP on multicast traffic is here https://github.com/antrea-io/antrea/issues/3323

The design for multicast statistics is here https://github.com/antrea-io/antrea/issues/3294

Test plan TBD

Work Breakdown

  • [x] Use PacketIn/PacketOut to discover the Multicast group address #2652
  • [x] Configure IPv4 Multicast routing entries #2835
  • [x] Use OpenFlow group for multicast traffic forwarding #3508
  • [ ] Support IPv6 Multicast group discovery (Multicast Listener Discovery using ICMPv6)
  • [ ] Configure IPv6 Multicast routing entries
  • [x] Support Multicast in Encap mode #3947
  • [ ] Support source-specific group records in IGMPv3
  • [x] Integrate Multicast with Network Policy #3660
  • [x] Multicast e2e #2986 #3792 #3803
  • [x] Multicast Statistics
    • [x] #3354
    • [x] #3449

luolanzone avatar Jun 08 '21 08:06 luolanzone

There were a few topics raised during the design-sharing session in the community meeting; I'm adding them here for further discussion. Please correct me if I missed or misunderstood anything. @antoninbas @jianjuns @salv-orlando @tnqn

  1. Try IGMP Proxy instead of mrouted
  2. Higher priority for Encap mode; hybrid mode can be a lower priority.
  3. Support multicast traffic from/to outside the cluster
  4. Jianjun and Antonin are investigating attaching a BSD multicast socket to OVS?
  5. Possibility to use mrouted for both Encap and noEncap
  6. Possibility to use OVS only (maybe some simple implementation on OVS?) to support multicast
  7. Can the multicast source IP be the Pod IP when there is no NAT?

luolanzone avatar Jun 22 '21 07:06 luolanzone

Jianjun and Antonin are investigating attaching a BSD multicast socket to OVS?

I meant that technically it should be possible to call a socket API (setsockopt() with IP_ADD_MEMBERSHIP) to join a multicast group from a host NIC, and then the Linux TCP/IP stack handles IGMP for you; that achieves a similar effect to IGMP Proxy without depending on another solution. I feel it makes sense if: 1) we find some disadvantage of IGMP Proxy (in complexity, functionality, overhead, stability, etc.); 2) we take the control-plane approach for encap mode, and the Agent needs to manage the local groups anyway.

jianjuns avatar Jun 22 '21 16:06 jianjuns

@luolanzone @jianjuns Asking out of general interest in the multicast topic. In encap mode, can the nodes join the multicast group in addition to the pods/ports of the overlay network connected to OVS? I understand that we would have to rely on multicast functionality in the underlay switches/routers and on the capability of the nodes to send IGMP reports. By doing this, we could use the multicast group as the destination address when tunneling the traffic, instead of broadcasting to every node. Is this technically feasible, or do you see some issue? I wanted to raise this since I was not sure whether this scenario was discussed in Option-1 in the community meeting.

srikartati avatar Jun 22 '21 18:06 srikartati

Hi @srikartati, I don't think it will work to let nodes join the multicast group for the purpose you want to achieve. Per my understanding, each node is effectively the local multicast router for its pods; between nodes (router to router), we have to involve PIM/DVMRP etc. to do the job. You may check this article about OVS multicast: https://software.intel.com/content/www/us/en/develop/articles/ip-multicast-support-in-open-vswitch.html

luolanzone avatar Jun 23 '21 01:06 luolanzone

I tried a static multicast route tool, smcroute. It looks like we can configure static multicast routes to allow traffic between nodes in noEncap mode without a DVMRP/PIM dependency, which means we may take the approach @jianjuns suggested and write our own code, similar to smcroute, to manipulate the mroute table for cross-node multicast.
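For reference, a static route of the kind tested above looks roughly like this in smcroute's configuration (interface names and the group address are examples, not the actual test setup):

```text
# /etc/smcroute.conf (illustrative)
# Join the group on eth0 so the upstream switch keeps forwarding it to this node.
mgroup from eth0 group 239.1.2.3
# Statically forward traffic for 239.1.2.3 arriving on eth0 out antrea-gw0.
mroute from eth0 group 239.1.2.3 to antrea-gw0
```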

luolanzone avatar Jun 23 '21 09:06 luolanzone

I tried IGMP Proxy, and it doesn't satisfy our requirements. IGMP Proxy assigns the networking interfaces (e.g. antrea-gw0 and ens192) one of two roles: upstream or downstream. It assumes the multicast client is outside the host, so it only forwards multicast traffic from the upstream interface to the downstream interfaces, not on the reverse path. As a result, multicast traffic sent from a local Pod cannot be forwarded to an external receiver or a different Node. My test reached the same conclusion: cross-Node multicast traffic does not work in noEncap mode.
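The one-way upstream/downstream model described above corresponds to a configuration like this (interface names taken from the comment; thresholds and the altnet range are illustrative):

```text
# igmpproxy.conf (illustrative): traffic only flows upstream -> downstream.
phyint ens192 upstream ratelimit 0 threshold 1
    altnet 10.0.0.0/8
phyint antrea-gw0 downstream ratelimit 0 threshold 1
```

There is no role under which antrea-gw0 traffic would be proxied back out of ens192, which is why Pod-sourced multicast cannot leave the node with this tool.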

wenyingd avatar Jul 30 '21 01:07 wenyingd

For statically configuring multicast routing entries: it works if we leverage cgo to configure the Linux mroute table, and that is what we plan to do in the multicast implementation.

wenyingd avatar Jul 30 '21 01:07 wenyingd

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment, or this will be closed in 90 days

github-actions[bot] avatar Dec 18 '21 00:12 github-actions[bot]

@wenyingd Since this is supported now, could you check if we still need to keep this issue? If it can be closed, could you list the PRs that implement it here.

tnqn avatar Jan 21 '22 02:01 tnqn

@wenyingd Since this is supported now, could you check if we still need to keep this issue? If it can be closed, could you list the PRs that implement it here.

We can't close it yet, there are some other multicast relevant features required in next release, like NetworkPolicy and metrics. I have linked the merged PRs in the subtasks.

wenyingd avatar Jan 21 '22 02:01 wenyingd