Add Raspberry Pi 5 5GbE PoE+ expansion board
Next-Gen 5GbE Expansion Board for Raspberry Pi 5 with PoE+ Support. The latest product from WisdPi, the WP-NH5000P, will soon be available for sale!
Features:
- Optimized for Raspberry Pi 5: The WP-NH5000P is specifically designed to complement the Raspberry Pi 5, ensuring seamless integration and optimal performance.
- 5GbE Connectivity: Equipped with the Realtek RTL8126, this board offers 5 Gigabit Ethernet connectivity. Experience blazing-fast network speeds for seamless data transfer and low latency.
- PoE+ Support: Simplify your setup with power and data delivered through a single cable.
- Shielding: Minimizes electromagnetic interference, enhancing stability and reliability with every connection.
- Dedicated MAC Address Range: Each WP-NH5000P comes with a hardware MAC address from WisdPi's own IEEE-assigned pool.
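As a quick check of that last point, the assigned MAC address (and its vendor OUI prefix) can be read straight off the interface once the board is up; a minimal sketch, assuming it enumerates as eth1:

ip link show eth1   # the first three octets of the MAC are the vendor's IEEE-assigned OUI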
Oh, very cool! Maybe I'll finally have an excuse to upgrade my studio PoE setup to 5/10 Gbps (right now I only have 1 Gbps PoE+... at home I have 2.5/10 Gbps!).
What do you think about a USB PD powered PoE PSE midspan product? A relatively cheap 5/10 Gbps PoE solution, 802.3at or even 802.3bt.
It's on the site: https://pipci.jeffgeerling.com/hats/wisdpi-5gbe-poe.html
lspci output:
0000:01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
Subsystem: Realtek Semiconductor Co., Ltd. Device 0123
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 170
Region 2: Memory at 1b80000000 (64-bit, non-prefetchable) [size=64K]
Region 4: Memory at 1b80010000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Address: 000000ffffffe000 Data: 0008
Masking: 00000000 Pending: 00000000
Capabilities: [70] Express (v2) Endpoint, MSI 01
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0W
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 512 bytes, MaxReadReq 2048 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 <64us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s, Width x1
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
10BitTagComp- 10BitTagReq- OBFF Via message/WAKE#, ExtFmt- EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- TPHComp+ ExtTPHComp-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR+ 10BitTagReq- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [b0] MSI-X: Enable- Count=32 Masked-
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00000800
Capabilities: [d0] Vital Product Data
pcilib: sysfs_read_vpd: read failed: No such device
Not readable
Capabilities: [100 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [148 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [170 v1] Device Serial Number 01-00-00-00-68-4c-e0-00
Capabilities: [180 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [190 v1] Transaction Processing Hints
No steering table available
Capabilities: [21c v1] Latency Tolerance Reporting
Max snoop latency: 0ns
Max no snoop latency: 0ns
Capabilities: [224 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
PortCommonModeRestoreTime=150us PortTPowerOnTime=150us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
T_CommonMode=0us LTR1.2_Threshold=306176ns
L1SubCtl2: T_PwrOn=150us
Capabilities: [234 v1] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
Kernel driver in use: r8126
Kernel modules: r8126
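For reference, a dump like the one above can be reproduced with a verbose, slot-filtered lspci query (assuming the NIC enumerates at 0000:01:00.0, as it does here):

sudo lspci -vvv -s 0000:01:00.0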
ethtool output:
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
5000baseT/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
5000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Link partner advertised link modes: 100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
5000baseT/Full
Link partner advertised pause frame use: No
Link partner advertised auto-negotiation: Yes
Link partner advertised FEC modes: Not reported
Speed: 5000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: on
netlink error: Operation not permitted
Current message level: 0x00000033 (51)
drv probe ifdown ifup
Link detected: yes
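A side note on the "netlink error: Operation not permitted" line above: some ethtool queries go over netlink and need root, so re-running the same query with sudo (assuming the interface is eth1) avoids that message:

sudo ethtool eth1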
Tested on a CM4 at 3.2 Gbps (using a Pineboards Modulo4 carrier), a CM5 at 4.7 Gbps, and a Pi 5 at 4.7 Gbps:
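For anyone reproducing these numbers, the remote end (10.0.2.15 in this transcript) only needs a listening iperf3 server; a minimal sketch:

iperf3 -s   # run on the remote host before starting the client below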
pi@cm5:~ $ iperf3 -c 10.0.2.15 --bidir
Connecting to host 10.0.2.15, port 5201
[ 5] local 10.0.2.227 port 45090 connected to 10.0.2.15 port 5201
[ 7] local 10.0.2.227 port 45100 connected to 10.0.2.15 port 5201
[ ID][Role] Interval Transfer Bitrate Retr Cwnd
[ 5][TX-C] 0.00-1.00 sec 548 MBytes 4.60 Gbits/sec 0 1.50 MBytes
[ 7][RX-C] 0.00-1.00 sec 207 MBytes 1.73 Gbits/sec
[ 5][TX-C] 1.00-2.00 sec 558 MBytes 4.68 Gbits/sec 0 1.76 MBytes
[ 7][RX-C] 1.00-2.00 sec 479 MBytes 4.02 Gbits/sec
[ 5][TX-C] 2.00-3.00 sec 555 MBytes 4.66 Gbits/sec 0 2.19 MBytes
[ 7][RX-C] 2.00-3.00 sec 393 MBytes 3.30 Gbits/sec
[ 5][TX-C] 3.00-4.00 sec 558 MBytes 4.68 Gbits/sec 0 3.18 MBytes
[ 7][RX-C] 3.00-4.00 sec 377 MBytes 3.16 Gbits/sec
[ 5][TX-C] 4.00-5.00 sec 559 MBytes 4.69 Gbits/sec 0 3.35 MBytes
[ 7][RX-C] 4.00-5.00 sec 374 MBytes 3.14 Gbits/sec
[ 5][TX-C] 5.00-6.00 sec 558 MBytes 4.68 Gbits/sec 0 3.52 MBytes
[ 7][RX-C] 5.00-6.00 sec 475 MBytes 3.98 Gbits/sec
[ 5][TX-C] 6.00-7.00 sec 559 MBytes 4.69 Gbits/sec 0 3.52 MBytes
[ 7][RX-C] 6.00-7.00 sec 540 MBytes 4.53 Gbits/sec
[ 5][TX-C] 7.00-8.00 sec 559 MBytes 4.69 Gbits/sec 0 3.52 MBytes
[ 7][RX-C] 7.00-8.00 sec 477 MBytes 4.00 Gbits/sec
[ 5][TX-C] 8.00-9.00 sec 558 MBytes 4.68 Gbits/sec 0 3.71 MBytes
[ 7][RX-C] 8.00-9.00 sec 243 MBytes 2.04 Gbits/sec
[ 5][TX-C] 9.00-10.00 sec 560 MBytes 4.70 Gbits/sec 0 3.71 MBytes
[ 7][RX-C] 9.00-10.00 sec 326 MBytes 2.74 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 5.44 GBytes 4.67 Gbits/sec 0 sender
[ 5][TX-C] 0.00-10.00 sec 5.44 GBytes 4.67 Gbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 3.80 GBytes 3.26 Gbits/sec sender
[ 7][RX-C] 0.00-10.00 sec 3.80 GBytes 3.26 Gbits/sec receiver
@wisdpi - I noticed—and this is not exclusive to your driver install—the Realtek chip is not giving full speed in reverse:
pi@cm5:~ $ iperf3 -c 10.0.2.15 --reverse
Connecting to host 10.0.2.15, port 5201
Reverse mode, remote host 10.0.2.15 is sending
[ 5] local 10.0.2.227 port 39272 connected to 10.0.2.15 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 45.1 MBytes 379 Mbits/sec
[ 5] 1.00-2.00 sec 42.7 MBytes 358 Mbits/sec
[ 5] 2.00-3.00 sec 36.1 MBytes 303 Mbits/sec
[ 5] 3.00-4.00 sec 37.5 MBytes 315 Mbits/sec
[ 5] 4.00-5.00 sec 38.1 MBytes 320 Mbits/sec
[ 5] 5.00-6.00 sec 38.2 MBytes 321 Mbits/sec
[ 5] 6.00-7.00 sec 35.6 MBytes 298 Mbits/sec
[ 5] 7.00-8.00 sec 36.8 MBytes 309 Mbits/sec
[ 5] 8.00-9.00 sec 36.9 MBytes 310 Mbits/sec
[ 5] 9.00-10.00 sec 38.2 MBytes 321 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 385 MBytes 323 Mbits/sec sender
[ 5] 0.00-10.00 sec 385 MBytes 323 Mbits/sec receiver
It's like it's hitting a sleep state that severely degrades speed when the traffic is all incoming...
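One way to test that power-saving hypothesis (a suggestion only, not something verified here) would be to turn off Energy-Efficient Ethernet on the port, if the driver exposes it, and/or boot with PCIe ASPM disabled, then rerun the reverse test:

sudo ethtool --show-eee eth1          # check current EEE state (assuming eth1)
sudo ethtool --set-eee eth1 eee off   # disable EEE if the driver supports it
# To rule out PCIe power states, add pcie_aspm=off to the kernel command line
# (/boot/firmware/cmdline.txt on Raspberry Pi OS) and reboot.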
I haven't tested with the in-kernel driver in 6.12, but I wonder if there's a driver update that would fix this and make it fast in both directions?
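For comparison across driver versions, the driver name and version actually bound to the interface can be checked with (assuming eth1):

ethtool -i eth1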
Also, I made sure to boost PCIe to Gen 3 to get the faster speeds. Otherwise I'm limited to 3.2 Gbps at Gen 2.
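For reference, the Gen 3 boost on the Pi 5 is set in /boot/firmware/config.txt, followed by a reboot; a minimal sketch:

# /boot/firmware/config.txt
dtparam=pciex1_gen=3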
Finally, I tested this with a Mokerlink 2.5G/5G/10G PoE+ switch and was able to power the Pi off a 2.5 Gbps PoE+ connection without issue.
There's a tiny bit of coil whine, but it's almost imperceptible.
We have just updated to the latest driver version provided by Realtek (version number: 10.014.01), and the test speeds have significantly improved.
Our GitHub repository has been updated.
Much better! Still, I'm guessing Realtek could figure out a way to boost those speeds, since I get a lot better results doing a bidirectional test!
Raspberry Pi 5 with Ubuntu 24.04 LTS
5GbE HAT benchmark on Pi 5 with Ubuntu 24.04 default driver (sudo apt install r8125-dkms)
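A minimal sketch of that setup, assuming the packaged DKMS module mentioned above is used and the interface comes up as eth1:

sudo apt update
sudo apt install r8125-dkms   # DKMS driver package referenced above
ethtool -i eth1               # confirm which driver and version the interface is using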