WisdPi WP-UT5 (RTL8157) Slower with Jumbo Packets enabled (9000/9014) than with Default MTU (1500/1514)
Description of the problem
Running CrystalDiskMark against the NAS, performance is slower with Jumbo Packets enabled than with Jumbo disabled.
Jumbo Packets Disabled -
```
CrystalDiskMark 8.0.5 x64 (C) 2007-2024 hiyohiyo
Crystal Dew World: https://crystalmark.info/
- MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
- KB = 1000 bytes, KiB = 1024 bytes

[Read]
  SEQ    1MiB (Q=  8, T= 1):   459.488 MB/s [    438.2 IOPS] < 18210.81 us>
  SEQ  128KiB (Q= 32, T= 1):   462.352 MB/s [   3527.5 IOPS] <  8994.77 us>
  RND    4KiB (Q= 32, T=16):   137.983 MB/s [  33687.3 IOPS] < 15158.65 us>
  RND    4KiB (Q=  1, T= 1):    18.014 MB/s [   4397.9 IOPS] <   227.12 us>

[Write]
  SEQ    1MiB (Q=  8, T= 1):   417.910 MB/s [    398.6 IOPS] < 19988.51 us>
  SEQ  128KiB (Q= 32, T= 1):   417.798 MB/s [   3187.5 IOPS] <  9994.44 us>
  RND    4KiB (Q= 32, T=16):    88.239 MB/s [  21542.7 IOPS] < 23650.10 us>
  RND    4KiB (Q=  1, T= 1):    16.212 MB/s [   3958.0 IOPS] <   252.27 us>

Profile: Default
   Test: 4 GiB (x5) [N: 79% (22690/28603GiB)]
   Mode: [Admin]
   Time: Measure 5 sec / Interval 5 sec
   Date: 2025/08/08 9:21:58
     OS: Windows 11 Pro 24H2 [10.0 Build 26100] (x64)
```
Jumbo Packets Enabled (no other changes to config) - MTU 9000 on the NAS / 9014 on the PC. (The Windows driver's jumbo value includes the 14-byte Ethernet header, so 9014 on the PC corresponds to MTU 9000 on the NAS, just as 1514 corresponds to 1500.)
```
CrystalDiskMark 8.0.5 x64 (C) 2007-2024 hiyohiyo
Crystal Dew World: https://crystalmark.info/
- MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
- KB = 1000 bytes, KiB = 1024 bytes

[Read]
  SEQ    1MiB (Q=  8, T= 1):   435.727 MB/s [    415.5 IOPS] < 19187.21 us>
  SEQ  128KiB (Q= 32, T= 1):   426.672 MB/s [   3255.2 IOPS] <  9655.77 us>
  RND    4KiB (Q= 32, T=16):   133.179 MB/s [  32514.4 IOPS] < 15669.51 us>
  RND    4KiB (Q=  1, T= 1):    18.592 MB/s [   4539.1 IOPS] <   219.95 us>

[Write]
  SEQ    1MiB (Q=  8, T= 1):   442.393 MB/s [    421.9 IOPS] < 18877.86 us>
  SEQ  128KiB (Q= 32, T= 1):   442.394 MB/s [   3375.2 IOPS] <  9393.38 us>
  RND    4KiB (Q= 32, T=16):    92.093 MB/s [  22483.6 IOPS] < 22677.29 us>
  RND    4KiB (Q=  1, T= 1):    17.633 MB/s [   4304.9 IOPS] <   231.92 us>

Profile: Default
   Test: 4 GiB (x5) [N: 79% (22690/28603GiB)]
   Mode: [Admin]
   Time: Measure 5 sec / Interval 5 sec
   Date: 2025/08/08 9:28:30
     OS: Windows 11 Pro 24H2 [10.0 Build 26100] (x64)
```
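Side note: before comparing the two runs, it is worth confirming that 9000-byte frames actually pass end-to-end, since a mismatched MTU on one side can silently fall back to fragmentation or drops. A minimal check, assuming the direct-link addresses shown in the ifconfig output below (the PC address is a placeholder), is a don't-fragment ping whose payload plus 28 bytes of IP/ICMP headers exactly fills the 9000-byte MTU:

```sh
# From the Windows PC (-f = don't fragment, -l = ICMP payload in bytes);
# 8972 + 8 (ICMP) + 20 (IP) = 9000, so this succeeds only if jumbo works end-to-end.
ping -f -l 8972 192.168.5.1

# Equivalent check from the NAS shell toward the PC (192.168.5.2 is an assumed address):
ping -M do -s 8972 192.168.5.2
```

If the 8972-byte ping fails while a default-size ping succeeds, jumbo frames are not actually in effect on the path.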
Description of your products
Linux SYN1 4.4.302+ #72806 SMP Mon Jul 21 23:14:27 CST 2025 x86_64 GNU/Linux synology_geminilake_720+
WisdPi WP-UT5 (RTL8157) USB 5GbE adapter
NAS using 2x 16 TB Seagate IronWolf Pro drives
Description of your environment
AMD Ryzen 9 9950X3D on an ASUS ROG STRIX X870E-E (onboard Realtek 5GbE), connected directly to the NAS with Cat 7 cabling
Output of dmesg command
Attached.
Output of lsusb command
```
|__usb1          1d6b:0002:0404 09  2.00  480MBit/s 0mA 1IF  (Linux 4.4.302+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub
  |__1-2         0764:0501:0001 00  2.00   12MBit/s 2mA 1IF  (CPS CP1500PFCLCD 000000000000)
  |__1-4         f400:f400:0100 00  2.00  480MBit/s 200mA 1IF  (Synology DiskStation 7F00082D9081C689)
|__usb2          1d6b:0003:0404 09  3.00 5000MBit/s 0mA 1IF  (Linux 4.4.302+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub
  |__2-1         0bda:8157:3000 00  3.20 5000MBit/s 544mA 1IF  (WisdPi USB 5G Ethernet 000334C8D6B112AF)
```
Output of ifconfig -a command
```
eth0      Link encap:Ethernet  HWaddr 00:11:32:DB:51:7D
          inet addr:10.0.166.166  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:90529269 errors:0 dropped:0 overruns:0 frame:0
          TX packets:78349270 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:89134355746 (83.0 GiB)  TX bytes:25202804594 (23.4 GiB)
          Interrupt:97 base 0x2000

eth1      Link encap:Ethernet  HWaddr 00:11:32:DB:51:7E
          inet addr:169.254.30.100  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:96

eth2      Link encap:Ethernet  HWaddr 34:C8:D6:B1:12:AF
          inet addr:192.168.5.1  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::36c8:d6ff:feb1:12af/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:74312497 errors:0 dropped:0 overruns:0 frame:0
          TX packets:144830258 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:69434170808 (64.6 GiB)  TX bytes:196740346632 (183.2 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:14025318 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14025318 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:63454786858 (59.0 GiB)  TX bytes:63454786858 (59.0 GiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```
Your write performance is higher with Jumbo enabled, though :)
Maybe I misunderstand, but shouldn't both read and write speeds be higher with Jumbo?
And thank you for pointing that out. Write speed is more important to me, since this is an array I back up to, so I have kept Jumbo enabled.
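(For anyone reproducing this: on the Synology side the MTU is normally set in DSM's network settings, but it can also be changed from an SSH shell for quick testing. A minimal sketch, assuming the WP-UT5 shows up as eth2 as in the ifconfig output above:)

```sh
# Temporarily set a 9000-byte MTU on the adapter (reverts on reboot;
# the DSM UI setting is what persists)
ip link set dev eth2 mtu 9000

# Confirm it took effect
ip link show eth2 | grep -o 'mtu [0-9]*'
```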
Whether Jumbo Frames improve performance depends on the switch, the use case, and the environment; an improvement is not guaranteed.
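One way to tell whether the difference comes from the network itself rather than from SMB or the disks is a raw TCP throughput test at each MTU. A sketch assuming iperf3 is installed on both ends (on Synology it is available through third-party packages; official builds exist for Windows):

```sh
# On the NAS: start an iperf3 server
iperf3 -s

# On the PC: 4 parallel TCP streams for 10 seconds, then the reverse direction
iperf3 -c 192.168.5.1 -P 4 -t 10
iperf3 -c 192.168.5.1 -P 4 -t 10 -R   # -R: NAS sends, PC receives
```

If iperf3 reports the same throughput at MTU 1500 and 9000, the jumbo setting is not the bottleneck, and the CrystalDiskMark differences come from the SMB or disk side.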
Also, just to be sure, please check that SMB3 is enabled.
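A sketch of that check from the NAS shell (the smb.conf path is the usual Samba location on DSM, but treat it as an assumption; the protocol range can also be confirmed in DSM's SMB file-service settings):

```sh
# Show the configured SMB protocol range on the NAS
grep -i 'protocol' /etc/samba/smb.conf
```

From the Windows client, PowerShell's Get-SmbConnection shows the dialect actually negotiated per share; a Dialect of 3.x means SMB3 is in use.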