WebRTC does not work over IPv6
Which version are you using?
v1.6.0
Which operating system are you using?
- [x] Linux amd64 standard
- [ ] Linux amd64 Docker
- [ ] Linux arm64 standard
- [ ] Linux arm64 Docker
- [ ] Linux arm7 standard
- [ ] Linux arm7 Docker
- [ ] Linux arm6 standard
- [ ] Linux arm6 Docker
- [ ] Windows amd64 standard
- [ ] Windows amd64 Docker (WSL backend)
- [ ] macOS amd64 standard
- [ ] macOS amd64 Docker
- [ ] Other (please describe)
Describe the issue
I deployed MediaMTX on a server with a dual-stack IP behind NAT (actually a homelab). I set up port forwarding to expose ports 8189 and 8889 over both TCP and UDP, and pointed DDNS records at the server, e.g. v4.example.com and dualstack.example.com. The former has only an A record, while the latter has both A and AAAA records.
When using the dual-stack host, clients prefer IPv6, which should be a good thing. However, I found that WHIP/WHEP connections over IPv6 do not work, while IPv4 connections do. It seems that only the handshake can be established; stream data is never transferred:
```
2024/04/11 16:39:48 INF [WebRTC] [session 556ebd06] created by [Client IPv6 Address]:4045
2024/04/11 16:39:59 INF [WebRTC] [session 556ebd06] closed: deadline exceeded while waiting connection
```
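The A-only vs. dual-stack distinction described above can be checked with a short Go program. The hostnames from the issue are placeholders; `localhost` is used below only so the sketch runs anywhere:

```go
// Quick check of which address families a hostname resolves to,
// mirroring the A-only vs A+AAAA setup described above.
// Substitute v4.example.com / dualstack.example.com to test a real setup.
package main

import (
	"fmt"
	"net"
)

// families reports whether host resolves to IPv4 (A) and/or IPv6 (AAAA) addresses.
func families(host string) (hasV4, hasV6 bool) {
	ips, err := net.LookupIP(host)
	if err != nil {
		return false, false
	}
	for _, ip := range ips {
		if ip.To4() != nil {
			hasV4 = true
		} else {
			hasV6 = true
		}
	}
	return hasV4, hasV6
}

func main() {
	v4, v6 := families("localhost")
	fmt.Printf("localhost: A(IPv4)=%v AAAA(IPv6)=%v\n", v4, v6)
}
```

A host that prints `A=true AAAA=false` behaves like v4.example.com; one that prints both `true` behaves like dualstack.example.com, and browsers will generally prefer its IPv6 address.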
An example of a working IPv4 config:

```yml
webrtcLocalUDPAddress: :8189
webrtcLocalTCPAddress: ''
webrtcIPsFromInterfaces: no
webrtcIPsFromInterfacesList: []
webrtcAdditionalHosts:
  - v4.example.com
webrtcICEServers2: []
```
IPv6 does not work no matter what I set in the config, as long as I use the dual-stack host in `webrtcAdditionalHosts`:

```yml
webrtcLocalUDPAddress: :8189 or ''
webrtcLocalTCPAddress: '' or :8189
webrtcIPsFromInterfaces: no or yes
webrtcIPsFromInterfacesList: [] or enp2s0
webrtcAdditionalHosts:
  - dualstack.example.com
webrtcICEServers2: [] or Google STUN Server
```
Describe how to replicate the issue
- Start the server
- Publish with OBS/Browser
- Read with Browser
By the way, I tried on multiple devices:
- Publish: OBS 30.1.2 on Windows and Linux; Chrome on Windows, Linux, and iPad
- Read: Chrome on Windows, Linux, iPad, and Android
Did you attach the server logs?
yes
Note that I use Caddy to mux the live and API endpoints, so the log may look weird, but it doesn't affect the bug. Here's the Caddyfile if you need it:
```
xxx.com:21935 {
	encode zstd gzip
	import tlsconfig
	handle_path /api/* {
		header {
			Access-Control-Allow-Origin: *
		}
		rewrite /api /
		reverse_proxy :29997
	}
	handle_path /live/* {
		rewrite /live /
		reverse_proxy :28889
	}
}
```
Did you attach a network dump?
yes
Sorry, I removed parts of the dump for security reasons. I filtered the packets by the server IP as source/destination, and the ClientHello packets were removed. Feel free to contact me if any important information was filtered out.
I can reproduce this on both Alpine Linux and Gentoo Linux, with and without nginx as a reverse proxy.
WebRTC over IPv6 is currently disabled in the server. The reason is that it causes strange packet loss that affects IPv4 connections too. Further investigation is needed.
If you want to test it, you can enable it by adding NetworkTypeUDP6 after this line:
https://github.com/bluenviron/mediamtx/blob/e86a7a8217e4da855175d44163285e32c3084214/internal/protocols/webrtc/peer_connection.go#L135
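In pion/webrtc terms (the WebRTC stack MediaMTX uses), this amounts to including `NetworkTypeUDP6` in the network types passed to the `SettingEngine`. A sketch only; the variable name `settingsEngine` and the exact surrounding code at the linked line are assumptions:

```go
// Sketch: add NetworkTypeUDP6 to the SettingEngine network types
// (exact code at the linked line may differ).
settingsEngine.SetNetworkTypes([]webrtc.NetworkType{
	webrtc.NetworkTypeUDP4,
	webrtc.NetworkTypeUDP6, // enables IPv6 candidate gathering
})
```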
@aler9, any updates here? IPv6 worked perfectly in 1.11.0, but has been broken since 1.11.1.
@Sunvas Do you have any detailed logs showing how it is broken in 1.11.1? I'm not currently using it, but I think I will return to this later.
@ailelix Unfortunately, no. From a user's perspective, I can only state that 1.11.0 is the last version where IPv6 worked.
I re-enabled IPv6 with #4816.
I was able to use IPv6 without issues or packet loss in the following scenarios:
- local UDP socket (`webrtcLocalUDPAddress` != 0) and automatic IPv6 IP (`webrtcIPsFromInterfaces: true`)
- local TCP socket (`webrtcLocalTCPAddress` != 0) and automatic IPv6 IP (`webrtcIPsFromInterfaces: true`)
- local UDP socket (`webrtcLocalUDPAddress` != 0) and manual IPv6 IP in `webrtcAdditionalHosts`
- local TCP socket (`webrtcLocalTCPAddress` != 0) and manual IPv6 IP in `webrtcAdditionalHosts`
I didn't test STUN because it requires an IPv6-capable internet connection (which I don't have).
The problem described by @ailelix is still present, but it is related to how Chrome handles domain names in SDPs. In particular, I think it requires IPv6 internet access to actually work (which is often not available), and the domains must be associated with an IPv4 address anyway. This is an additional problem of using domain names in `webrtcAdditionalHosts`, so I've created a feature request to resolve domains on the server side, convert them into IPs, and avoid all this hassle: #4817
This issue is mentioned in release v1.14.0 🚀 See the full changelog for details.