Wrong / confusing MSS clamping behaviour with PPPoE MTU 1492
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
- [x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
- [x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug
My ISP uses PPPoE, so my WAN MTU must be 1492 instead of 1500. With this setup, some websites (for example Azure CDN–hosted sites over IPv6) do not open or time out. A packet capture shows that this is caused by incorrect MSS clamping on OPNsense.
To Reproduce
- Configure WAN as PPPoE with MTU 1492.
- Open some IPv6 sites behind Azure CDN – the connection hangs / times out. Use http://pmtud.enslaves.us to check.
- Capture traffic on the WAN interface: you will see MSS values of 1460 (IPv4) and 1440 (IPv6), which do not fit into an MTU of 1492.
- Now enter 1492 in the interface MSS field. The resulting effective MSS becomes 1452 (IPv4) and 1432 (IPv6), and the problem disappears.
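The arithmetic behind these observations can be sketched as follows (illustration only, not OPNsense code): an MSS plus the fixed IP/TCP header overhead gives the size of the resulting IP packet, which must not exceed the interface MTU.

```python
# Illustration of the MSS/MTU arithmetic from the report above (not OPNsense code).
# Header overhead with no TCP options: IPv4 (20) + TCP (20) = 40; IPv6 (40) + TCP (20) = 60.

IPV4_OVERHEAD = 40  # IPv4 header + TCP header
IPV6_OVERHEAD = 60  # IPv6 header + TCP header

def packet_size(mss: int, overhead: int) -> int:
    """Size of the full IP packet produced by a maximum-size segment."""
    return mss + overhead

PPPOE_MTU = 1492

# The default MSS values (derived from an Ethernet MTU of 1500) overflow the PPPoE MTU:
assert packet_size(1460, IPV4_OVERHEAD) == 1500  # > 1492, does not fit
assert packet_size(1440, IPV6_OVERHEAD) == 1500  # > 1492, does not fit

# The values observed after entering 1492 in the MSS field fit exactly:
assert packet_size(1452, IPV4_OVERHEAD) == PPPOE_MTU
assert packet_size(1432, IPV6_OVERHEAD) == PPPOE_MTU
```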
Expected behavior
- OPNsense should automatically derive the correct MSS from the interface MTU (e.g. MTU – 40 for IPv4 and MTU – 60 for IPv6) and clamp accordingly, especially for PPPoE interfaces with MTU 1492; or
- The field should clearly be an MTU field, or there should be a checkbox like “Clamp MSS to interface MTU” (enabled by default) so the user does not have to do the header-size calculation manually.
Additional context
Check the original Reddit post for more details.
Environment
Software version used and hardware type if relevant, e.g.:
OPNsense 25.7.7 (amd64).
The MSS clamping setting on the interface is a remnant of legacy choices, I'm afraid; the last time we touched that code we already noted that it would be better if it disappeared entirely.
https://github.com/opnsense/core/blob/f31afb436d42de089e48a66098a3c0095a08fea8/src/etc/inc/filter.inc#L553-L567
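For context, this kind of clamp ultimately ends up as a pf scrub rule. A hedged, illustrative pf.conf fragment (not the exact rules OPNsense generates) for a PPPoE interface with MTU 1492 might look like:

```
# Illustrative pf.conf fragment, not OPNsense-generated output:
# clamp TCP MSS per address family so segments fit a 1492-byte MTU.
scrub on pppoe0 inet all max-mss 1452
scrub on pppoe0 inet6 all max-mss 1432
```

Note that pf takes a single max-mss value per scrub rule, so per-family values (IPv4 vs IPv6) require separate rules as shown.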
In reality, when MSS clamping is needed, there is almost always another MTU misconfiguration somewhere, leading to misaligned packets (or packets that cannot be fragmented properly).
With MSS clamping you can more or less fix TCP, but when other types of traffic are used (such as UDP) you are left with an unsolvable problem, as MSS does not apply there.
Thanks for your answer and the link to the code.
I fully agree that MSS clamping is a legacy bandaid and that most of the time it is hiding a broken MTU/PMTUD setup somewhere in the path.
However, in this particular case there are two separate problems, and one of them is on OPNsense itself:
- WAN MTU is correct, but OPNsense advertises an impossible MSS
  - WAN is plain PPPoE; ifconfig pppoe0 clearly shows mtu 1492.
  - With no MSS value configured on the interface, the SYNs leaving the PPPoE interface still carry
    - MSS 1460 for IPv4
    - MSS 1440 for IPv6

    which correspond to an MTU of 1500, not 1492.
  - That means OPNsense is happily sending TCP segments that cannot fit into its own WAN MTU (1500-byte IP packets over a 1492 interface). This is visible in the packet capture and confirmed by pmtud.enslaves.us: MSS 1460/1440 fails, MSS 1452/1432 works. So even if the rest of the path were perfectly configured, these MSS values are already inconsistent with the PPPoE MTU at the firewall edge.
- The MSS field behaves like an MTU field, but the user has to do the math. When I set the “MSS” field on the WAN interface to 1492, the resulting SYNs have:
  - IPv4 MSS = 1452 (1492 – 40)
  - IPv6 MSS = 1432 (1492 – 60)

  This matches the help text in the GUI: “If you enter a value in this field, then MSS clamping for TCP connections to the value entered above minus 40 (IPv4) or 60 (IPv6) will be in effect (TCP/IP header size).”
In other words: the field is actually treated as MTU input, not as a raw MSS value. Once I put 1492 there, Azure CDN over IPv6 and the PMTUD test page both work fine. From a UX and configuration perspective that is very confusing:
- The GUI calls it “MSS”, but internally it expects “MTU” and subtracts header sizes.
- Without touching it, OPNsense derives MSS values that ignore the PPPoE MTU.
- With PPPoE 1492 and LAN 1500 this leads to real-world breakage (Azure CDN over IPv6 timing out) unless the user manually enters the MTU into this field.
- Yes, there is likely also a PMTUD black hole on the Azure side. I agree there is probably an additional misconfiguration in the path (ICMPv6 “Packet Too Big” filtered somewhere). As an end user I cannot fix Azure or any intermediate ISP; MSS clamping is the only realistic mitigation on my side. But regardless of that, OPNsense should not generate MSS values that can’t even traverse its own PPPoE MTU. That part is entirely under OPNsense’s control.
If MSS clamping is considered legacy and you’d like to get rid of it, that’s fine in the long term. But as long as:
- PPPoE WAN MTU defaults to 1492, and
- LAN usually runs at 1500,
it would be great if OPNsense at least:
- Automatically derived MSS from the interface MTU (MTU–40 for IPv4, MTU–60 for IPv6) when the WAN MTU < 1500; or
- Clearly labelled this as an MTU-based clamp, e.g. “Clamp TCP MSS to interface MTU”, with an optional override field, so the user doesn’t have to do header-size calculations by hand.
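The first option above could be sketched like this (a hypothetical helper for illustration, not existing OPNsense code):

```python
# Sketch of the proposed auto-derivation (hypothetical helper, not OPNsense code):
# derive per-family MSS clamp values from the interface MTU instead of
# requiring the user to do the header-size subtraction manually.

def derive_mss(interface_mtu: int) -> dict:
    """Return per-address-family MSS clamp values for a given interface MTU."""
    return {
        "inet": interface_mtu - 40,   # IPv4 header (20) + TCP header (20)
        "inet6": interface_mtu - 60,  # IPv6 header (40) + TCP header (20)
    }

# For a PPPoE WAN with MTU 1492 this yields the values that are known to work:
assert derive_mss(1492) == {"inet": 1452, "inet6": 1432}
# For a plain Ethernet WAN with MTU 1500 it reproduces the usual defaults:
assert derive_mss(1500) == {"inet": 1460, "inet6": 1440}
```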
Right now the combination of:
- correct PPPoE MTU (1492),
- default MSS values (1460/1440), and
- a field called “MSS” that actually expects “MTU”
is what makes the behaviour questionable and confusing.
> However, in this particular case there are two separate problems, and one of them is on OPNsense itself:
Well, that depends on the rest of the configuration, e.g. the MTU sizes of upper interfaces. Whether it is an (upstream) bug is hard to assess with the data known here; our forum might be a better place to discuss setup challenges, as I believe quite a few people use PPPoE.
> In other words: the field is actually treated as MTU input, not as a raw MSS value.
It's a calculated field, as explained in the help text (not to be confused with MTU input). A strict MSS setting can easily be configured via normalization (Firewall: Settings: Normalization).
@DollarSign23 I've noticed today that we do not allow certain ICMPv6 types that may be required for PMTU to work properly. Maybe you can give 7824ce5 a try.