optimal VPN MTU higher than optimal WAN MTU


JTnola

Regular Contributor
I know that VPN MTU and WAN MTU settings are interface specific—but I’m looking to understand, is the actual optimal VPN MTU limited by the optimal WAN MTU?

I think my optimal VPN MTU is higher than my optimal WAN MTU. (How) is that possible? Why wouldn’t my optimal VPN MTU, in practice, automatically top out at my WAN MTU?
Thanks

(My WAN MTU is 1428 but my VPN MTU is 1500.)
 
The MTU sets the maximum size a transmission unit may have. That doesn't mean every packet will use that size. The "optimal" size depends not only on your interface characteristics but on all the other devices between you and the destination. Path MTU Discovery (PMTUD) is used to determine the actual MTU size that will be used.

OpenVPN has additional options that relate to MTU in cases where PMTUD doesn't work. This usually involves setting options like tun-mtu, tun-mtu-extra and mssfix.
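For reference, the path MTU can also be probed by hand with ping and the Don't Fragment bit. A sketch using Linux ping flags (the target host 8.8.8.8 is just an example, and the command is echoed rather than run):

```shell
# Manual path-MTU probe (Linux ping flags; BSD/macOS use different options).
# ping -s takes the ICMP payload size, so the on-the-wire IPv4 packet is
# payload + 28 bytes (20-byte IP header + 8-byte ICMP header).
WAN_MTU=1428
PAYLOAD=$((WAN_MTU - 28))            # 1400 for a 1428-byte MTU
# -M do sets the DF bit so any probe larger than the path MTU fails loudly:
echo "ping -M do -s ${PAYLOAD} -c 3 8.8.8.8"
```

If the probe succeeds at a 1400-byte payload but fails at 1401, the path MTU is 1428, matching the numbers in this thread.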
 
Right but let’s say that I KNOW (both from ping tests and from my ISP) that the WAN MTU is 1428. How is it possible for the VPN MTU to exceed that (I can ping up to 1472 without fragmentation over the VPN interface, which ultimately uses the same internet connection…?)

Sorry, I realize this is just me not having a solid foundation on the subject. I'd be happy to be pointed at a good resource where I could read up more …
 
I don't have the same type of WAN connection as you so I can't be 100% sure of your situation, but.... If you're trying to determine the max MTU through the tunnel using ping then you're likely seeing a false response. The pings are inside the tunnel. If you force frames through the tunnel that are larger than the MTU of the WAN interface they will be fragmented and reconstructed outside the tunnel. Your ping command will be unaware of this happening.
 
So if my maximum MTU on the WAN interface is 1428, then I should set that as the max MTU for the tunnel, even if my ping results for the tunnel had been higher?

Thank you!!
 
I’m not sure what you’re saying is correct … when I check ifconfig with the reduced tun-mtu, I see tons of dropped packets. When I leave it at 1500, zero dropped packets.
 
Sorry, I don't know anything about your VPN setup, how it's configured or how you're testing this. With my own testing I had to capture the VPN data on the router's WAN interface and load it into Wireshark to see the fragmentation happening. But as I said, my network is not the same as yours.
 
Don’t get me wrong, I appreciate your insights very much. Are you able to briefly describe how one goes about doing that?
 
Assuming the VPN in question is the only thing using port 1194:
Code:
tcpdump -i eth0 port 1194 -s0 -U -w /mnt/MyDisk/ASUS/test.pcap
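To then spot fragmentation in that capture, you can read it back with a BPF filter that matches IPv4 fragments. A sketch (the pcap path follows the capture command above; the command is echoed rather than run):

```shell
# IPv4 fragments have the More Fragments bit set or a nonzero fragment
# offset, i.e. any of the low 14 bits of bytes 6-7 in the IP header
# (the 0x3fff mask below).
FRAG_FILTER='ip[6:2] & 0x3fff != 0'
echo "tcpdump -r /mnt/MyDisk/ASUS/test.pcap \"${FRAG_FILTER}\""
```

If that filter prints packets, frames are being fragmented on the WAN side even though in-tunnel pings look clean.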
 
It is impossible to send unfragmented packets larger than the WAN MTU. Since your VPN has overhead, your VPN MTU/MSS needs to be smaller than your WAN MTU to avoid packet fragmentation.

It is possible that even though you're setting the DF bit when testing, the VPN is stripping it off since it sees that it would cause a failure.

So if your WAN MTU is 1428 you need to find the overhead that your VPN protocol is using and deduct that. Or you can use trial and error while sniffing packets and find out when they stop fragmenting, and that's the size you want.

If you really want to optimize it, make sure your WAN MTU and tunnel MTU are both sizes that require no padding.
 
It's entirely possible to have the VPN client exceed the MaxMTU size of the WAN link - performance is horrible, but it can happen.

:)

I had this very thing happen with the Palo Alto GlobalProtect client on my work laptop when attaching to a 5G Fixed Wireless gateway - MaxMTU on it was 1480 because of overhead on that link...
 

If my WAN MTU is 1428, and I’m using OVPN IPv4 (20 bytes) …. would my mssfix just be 1388?

[WAN MTU 1428] - [IPv4 20] - [TCP 20] = 1388

?


And leave the TUN-MTU as the default 1500?

Thank you!!
 

The only way it is possible is with fragmentation. Or have every packet that exceeds that MTU dropped. That's why I specified unfragmented packets.
 

You would need 20 for IP, 8 for UDP, and whatever the VPN and encryption protocol needs (e.g. are you using AES-256?). Sounds like OVPN is using 12 bytes if they say 20 total (UDP plus encryption). If you're using a more advanced protocol it might use more overhead; same if you tell it to use TCP transport instead of UDP (not really any reason to do that, though).

First, are you sure your WAN MTU is correct? That is an odd number; I'd expect 1392 or 1400 for some older ISPs and 1492 or 1500 for modern ones. I'm assuming you did the ping -f test and the largest that would go through was 1400? If so, then 1428 is correct. It just seems a bit odd, but there are outliers out there that are strange.

I'd set your VPN MTU to the same as your WAN MTU, and your MSS to at least 40 less, but probably 50 or a bit more. Looks like the default values for OVPN are MTU 1500 and mssfix (if enabled) of 1450 so they want you to subtract 50 from the MTU, in which case 1378 should work for you if you're just using standard IPSEC. Looks like they're including 10 bytes of "just in case" in that calculation so that's good.

So maybe try MTU 1428 and MSSfix 1378 and then test and make sure you are not fragmenting or dropping any packets.
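The arithmetic behind that suggestion can be sketched as follows (the overhead values are the thread's working assumptions, not measured figures):

```shell
# mssfix sizing per the advice above: WAN MTU minus IP and UDP headers,
# minus an allowance for encryption/padding headroom. The 22-byte headroom
# is an assumption that reproduces OpenVPN's suggested ~50-byte total margin.
WAN_MTU=1428
IP_HDR=20        # IPv4 header
UDP_HDR=8        # UDP header
HEADROOM=22      # encryption + padding allowance (assumption)
MSSFIX=$((WAN_MTU - IP_HDR - UDP_HDR - HEADROOM))
echo "mssfix ${MSSFIX}"              # 1378
```

With IPv4 only and no padding allowance, the same sum with HEADROOM=0 gives the 1400 upper bound discussed later in the thread.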
 

Thank you

Yes, as of a few months ago I get my internet through Verizon 5G home internet. The results of my ping tests (1400 + 28) and their support person's response to my inquiry (1428) led me to change my WAN MTU to 1428.

On the VPN side… using proto UDP … if I do nothing with the MTU, nothing works. So then I set mssfix to 1388, and most things seem to work … though there’s the occasional weirdness. (Setting mssfix to anything LESS than 1400, AKA 1399 and under, seems to work, at least for some/most cases. I don’t specify tun-mtu, so it remains at the default 1500 … a lot of sites I checked out advise against adjusting tun-mtu, though none seem to supply their rationale.) I’m going to try mssfix 1378, and maybe that’ll work out the kinks with the outliers.

———————

For my proto UDP OVPN tunnel, I’m using …

AES 128 GCM
IPv4

(And presumably TCP traffic is sent inside my UDP tunnel … in addition to UDP traffic … right?)



Y’all are AWESOME
 
Ah yes, I forgot about wireless internet; it has some nuances.

Yes, your TCP is encapsulated within the UDP stream. That makes the connection "reliable" (UDP is non-guaranteed delivery, but having TCP within it makes it guaranteed). UDP is used because it is more efficient for encryption, with less overhead. In cases of really bad connections you might have to switch to TCP mode, as the tunnel will drop if enough UDP packets are lost, but it doesn't sound like you're having that. In that case you'd need to remove another 12 bytes from your MSS size.

MSS clamping mechanisms like mssfix actually intercept the TCP handshake and tell both ends the largest TCP packet size to use, basically like NAT for MSS. So the TCP header is already factored in (TCP includes its header when setting its size); that 20 bytes is already included. It looks like OVPN also automatically calculates the encryption overhead and deducts that from whatever you specify. So in reality you only need to deduct 28 bytes (20 for IP and 8 for UDP). But they suggest 50, which I'm guessing is so that you can do both IPv4 (20-byte) and IPv6 (40-byte) plus 8-byte UDP and 2 bytes of headroom/padding. If you only plan to use IPv4, you can probably go 28 or 30 less than the MTU.

So if your WAN/LAN MTU is set to 1428 and your mssfix is correct, you don't necessarily need to set the tunnel MTU; OpenVPN even says it's best not to touch it. This is because with a properly sized MSS, the MTU of any packet won't exceed the maximum MTU you can support, so that setting is moot. It sort of sounds like OpenVPN automatically calculates the tunnel MTU for you based on MSS (not positive on that, but it doesn't matter; even if it is set to 1500, no packet should ever hit it). But the documentation suggests adding the --fragment option "just in case".

So you can try Asus GUI MTU 1428 (note this doesn't change the MTU on wireless interfaces, but it does change it on the bridges and the uplink to the CPU, which is enough, since PMTUD nearly always works within a LAN).
mssfix in theory should be able to be 1400 (which will actually tell the endpoints that the MSS is 1384, since I think AES-128-GCM is 16 bytes). Since that isn't working for you, my guess is there is some padding going on, so use 1398 (bad form to use odd numbers). But it sounds like you are having some issues even with 1388. My guess is those issues may be related to stuff using protocols other than TCP (mssfix can only intercept TCP, not UDP, ESP, etc.). That's what the --fragment option is for: those outliers. So try setting mssfix to 1398, add the --fragment option, and see how many packets are getting fragmented. If it is not many, then you should be good; if it is all of them, then you need to go lower. The fact that 1399 and lower works for "almost everything" suggests that 1398 with --fragment is probably going to be your solution; --fragment will fix the little bit that isn't working.

Or you can try doing their default "50 less" which would be 1378 and see if that resolves the outliers without the --fragment option. Then add it on anyway just to be safe.

Long story short
I guess you have two options (both using WAN MTU of 1428)
- mssfix of 1378 and see if everything works good with no fragmentation, then add the --fragment option as a "just in case". You'll reduce your throughput some for large files but it won't be a huge difference. You should also be able to use IPv6 over this size if you ever wanted to (who knows, some stuff may be using it and you don't even know it).
- mssfix 1398 with --fragment option and see how many packets are getting fragmented. If it is just a few percent, should be fine and you'll improve throughput and efficiency a bit.


Below is some info for OVPN in UDP tunnel mode (I believe 1473 below is a typo and should be 1478 for IPv4).

--mssfix max
Announce to TCP sessions running over the tunnel that they should limit their send packet sizes such that after OpenVPN has encapsulated them, the resulting UDP packet size that OpenVPN sends to its peer will not exceed max bytes. The default value is 1450. The max parameter is interpreted in the same way as the --link-mtu parameter, i.e. the UDP packet size after encapsulation overhead has been added in, but not including the UDP header itself. The resulting packet would be at most 28 bytes larger for IPv4 and 48 bytes for IPv6 (20/40 bytes for IP header and 8 bytes for UDP header). The default value of 1450 allows IPv4 packets to be transmitted over a link with MTU 1473 or higher without IP-level fragmentation.

The --mssfix option only makes sense when you are using the UDP protocol for OpenVPN peer-to-peer communication, i.e. --proto udp.

--mssfix
and --fragment can be ideally used together, where --mssfix will try to keep TCP from needing packet fragmentation in the first place, and if big packets come through anyhow (from protocols other than TCP), --fragment will internally fragment them.

Both --fragment and --mssfix are designed to work around cases where Path MTU discovery is broken on the network path between OpenVPN peers.

The usual symptom of such a breakdown is an OpenVPN connection which successfully starts, but then stalls during active usage.
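Putting the two options above into an OpenVPN client custom-config fragment might look like the sketch below (values are the ones discussed in this thread; whether --fragment actually takes effect depends on both tunnel peers supporting it):

```
# Option 1: conservative, OpenVPN's default 50-byte margin below WAN MTU 1428
mssfix 1378
fragment 1378

# Option 2: tighter; rely on fragment to catch non-TCP outliers
# mssfix 1398
# fragment 1398
```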
 
Thank you all for your help!!

(unfortunately none of the fragment options appear to function — I think it’s because Merlin uses OVPN 2.6 now, and PIA still uses 2.4 … but who knows. I have a support request in to them…)

Currently I’m rolling with
tun-mtu 1428
mssfix 1378

————

Side note:

So, if you can’t tell, I’ve only recently switched to using a UDP tunnel. Previously I was using a TCP tunnel, but bowed to others’ insistence that my implementation made no sense. Apart from a small loss of speed, is there any real reason you shouldn’t use a TCP tunnel? I don’t believe I ever experienced TCP-over-TCP meltdown … or if I had, it certainly was less frequent and proved to be less of a problem than what I’m running into on UDP with this reduced-WAN-MTU business. It’s not crippling, but it’s a lot of issues in a very short time. Not all of them total breakdowns, just more like unpredictable hiccups. I think this cancels out whatever modest gains UDP gives me speed-wise … if not in actual time, then certainly in frustration time.
 

TCP has some extra overhead (not a huge issue, you lose another 12 bytes from your payload size) and adds latency, in some cases a lot of latency depending where your server is located. That latency can significantly impact not only throughput but experience with latency sensitive things like gaming etc.

But on the other hand it may allow PMTUD to work negating the need for tweaks to MTU/MSS (may being the key word, PMTUD is very problematic and requires everything in the path to allow those ICMP packets in both directions). Having TCP on both ends of the tunnel only helps it work on one segment, in reality many of the remote servers are not going to support it so it is somewhat moot.

The issue is MSS only works for TCP. MTU works for everything. So if you send a UDP packet through the UDP tunnel and it is 1500 bytes, you need it to be able to be fragmented or it will just get dropped, unless path MTU Discovery (PMTUD) is working in which case the sending stations should both detect that they must limit themselves to the smallest MTU in the path.

Since PMTUD rarely works properly, the preferred approach is a UDP tunnel with MSS intercept (mssfix in OVPN) along with allowing fragmentation, so that UDP, ICMP, and non-standard IP protocols can get through.

Most likely your TCP setup was just fragmenting all larger packets (I believe TCP tunnels allow fragmentation by default unless something specifies DF). If this wasn't causing any noticeable issues for you, and the latency wasn't an issue, then there's nothing to say you can't go back to using that. Fragmentation will reduce your throughput as well as the load on your router, but how much and whether it is noticeable depends on your setup and perception.

OVPN 2.4 should support the --fragment switch but maybe the provider isn't doing it on their end.

You can try reducing the tun-mtu, but that only fixes your side, you need to find out what the VPN provider is set to.

Many VPN providers will give you a config file with recommended settings in it, does yours have that? Will give you an idea of what they are set to.
 
