Optimizing my home and work network setup
Kai Wolf 01 December 2024
After analyzing my network setup in a previous post, I decided it was time to dig deeper and optimize my network setup, both at home and at my office. With data-intensive workflows becoming more demanding - especially in areas like deep learning and computer vision - every millisecond and megabit counts.
What follows is a detailed breakdown of the changes I made, the tools I used, and the performance gains I achieved.
Baseline Measurements
At home I am using a less powerful network setup for obvious reasons:
Home Network
- ISP: DSL 250
- Router: FritzBox 7590 AX
- Ethernet: Directly connected to the router, or
- Wi-Fi: FritzBox mesh network via a Fritz!Repeater 3000
- Speed test:
- Ethernet: 183 Mbps download, 39 Mbps upload, 6.5 ms latency
- Wi-Fi: 181 Mbps download, 37 Mbps upload, 9.8 ms latency
Work Network
At work I am running a somewhat more sophisticated setup:
- ISP: Vodafone Cable 1000
- Setup: 2.5 GBit Ethernet via CalDigit TS4 hub connected to my MacBook
- Server Rack: Includes a 1 GBit switch and a Proxmox application server
- Baseline Results (NetworkQuality):
- Uplink: 889 Mbps
- Downlink: 746 Mbps
- Responsiveness (RPM): Medium, 487 RPM
- Idle Latency: ~6 ms
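For reference, a sketch of how such numbers can be collected; the server address below is an example, not my actual setup:

```shell
# macOS built-in throughput/bufferbloat test (Monterey and later);
# reports uplink/downlink capacity plus responsiveness in RPM
# (round-trips per minute under working conditions):
networkQuality -v

# Raw TCP throughput across the LAN with iperf3: start a server on one
# end (here the Proxmox host, example address), then connect from the client.
iperf3 -s                    # on the Proxmox host
iperf3 -c 192.168.178.10     # on the MacBook
```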
Improvements and Their Impact
I upgraded my server rack with a second-hand 10G Ethernet switch from Netgear that I bought cheaply online, but before installing it I ran some experiments with the old (managed) switch.
Leveraging Link Aggregation (LAG) at Work
I configured two Ethernet ports to share the network load (LAG) with the following results:
- Uplink: 1.152 Gbps (+29.6%)
- Downlink: 752 Mbps (+0.8%)
- Responsiveness: Medium, 387 RPM (slight reduction)
- Idle Latency: ~6 ms (unchanged)
While the uplink showed significant improvement, the downlink remained largely unchanged; LACP hashes each flow onto a single physical link, so only traffic with multiple parallel flows can exceed the speed of one port. Responsiveness dipped slightly, likely due to the added protocol overhead.
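For the curious, a bond of this kind looks roughly like the following on the Proxmox side (Debian ifupdown syntax); interface names and addresses are examples, and the managed switch needs a matching LACP group on the two ports:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.178.10/24
    gateway 192.168.178.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```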
Upgrading to 10G Ethernet for Synology and Proxmox
At work, I added:
- A 10G network expansion module (E10G22-T1-Mini) to my Synology RS422+
- A cheap 10G PCIe card for the Proxmox hypervisor
With these changes I got the following results:
Results (iperf3, Proxmox)
- Transfer: 2.74 GBytes
- Bitrate: 2.35 Gbits/sec
Results (NetworkQuality, Proxmox)
- Uplink: 2 Gbps (+125%)
- Downlink: 1 Gbps (+33.9%)
- Responsiveness: Medium, 942 RPM (+93.4%)
- Idle Latency: 6 ms (unchanged)
Optimizing MTU for Jumbo Frames
Adjusting the MTU from 1500 to 9000 allowed jumbo frames, increasing efficiency.
Impact on Responsiveness
- Before: 942 RPM
- After: 1778 RPM (+88.8%)
While throughput stayed constant, responsiveness doubled, indicating a significant reduction in network packet overhead.
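A sketch of the change and why it helps; the interface name and address are placeholders, the first two commands need root, and every hop on the path must support jumbo frames:

```shell
# Switch the NIC to jumbo frames (example interface; needs root):
#   ip link set dev enp1s0 mtu 9000
# Verify 9000-byte frames pass end-to-end without fragmentation
# (8972 = 9000 - 20 IP header - 8 ICMP header; -M do forbids fragmenting):
#   ping -M do -s 8972 192.168.178.10

# Why responsiveness improves: the same transfer needs ~6x fewer packets.
# Packet counts for 1 GiB at MTU 1500 (1460 B TCP payload) vs 9000 (8960 B):
awk 'BEGIN { gib = 1024 * 1024 * 1024;
             printf "MTU 1500: %d packets\nMTU 9000: %d packets\n",
                    gib / 1460 + 1, gib / 8960 + 1 }'
```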
Introducing the OWC 10G Thunderbolt 3 Ethernet Adapter
Upgrading my connection to the Proxmox server and Synology rack with the OWC adapter yielded dramatic results.
Results (iperf3)
- Proxmox:
- Transfer: 9.64 GBytes
- Bitrate: 8.288 Gbits/sec
- Synology:
- Transfer: 9.54 GBytes
- Bitrate: 8.19 Gbits/sec
Results (NetworkQuality)
- Proxmox:
- Uplink: 1.3 Gbps
- Downlink: 4.6 Gbps (+360%)
- Responsiveness: Medium, 421 RPM
- Idle Latency: 5.2 ms
- Synology:
- Uplink: 4.2 Gbps (+373%)
- Downlink: 1.7 Gbps (+70%)
- Responsiveness: Medium, 360 RPM
- Idle Latency: 4.8 ms
Home Network Upgrades with 2.5G Dongles
For my home setup, I made two changes:
- Added a 2.5G USB Ethernet dongle to the Synology DS923+ and the CalDigit TS3 hub
- Adjusted internal PCI sharing for stable performance
Results (iperf3)
- Transfer: 2.28 GBytes
- Bitrate: 1.96 Gbits/sec
Results (NetworkQuality)
- Uplink: 641 Mbps
- Downlink: 1.433 Gbps
- Responsiveness: High, 3905 RPM (more than 7× baseline)
- Idle Latency: 4.9 ms
The dongle upgrade transformed my basement rack’s performance, especially responsiveness and latency, despite hitting the throughput ceiling of the 2.5G connection.
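A quick way to confirm what a USB dongle actually negotiated (the interface name is an example; on the Synology this would run over SSH):

```shell
# Negotiated link speed; a healthy 2.5G link should report 2500Mb/s:
ethtool eth2 | grep -i 'speed'
# Current MTU on the interface:
ip link show eth2 | grep -o 'mtu [0-9]*'
```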
Overall Performance Gains
Metric | Baseline (Work) | Final (Work) | Improvement |
---|---|---|---|
Uplink (Gbps) | 0.889 | 4.2 | +372.4% |
Downlink (Gbps) | 0.746 | 4.6 | +516.6% |
Responsiveness (RPM) | 487 | 1778 | +265% |
Metric | Baseline (Home) | Final (Home) | Improvement |
---|---|---|---|
Uplink (Mbps) | 39 | 641 | +1544% |
Downlink (Mbps) | 183 | 1433 | +683% |
Responsiveness (RPM) | 454 | 3905 | +760% |
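For transparency, the improvement column is simply (final / baseline − 1) × 100, e.g. for the work responsiveness jump:

```shell
# Responsiveness at work went from 487 RPM to 1778 RPM:
awk 'BEGIN { printf "+%.1f%%\n", (1778 / 487 - 1) * 100 }'
```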
Key Learnings and Next Steps
- Jumbo Frames: Enabling an MTU of 9000 dramatically improves responsiveness without impacting throughput.
- 10G Ethernet: Investments in 10G hardware pay off, especially for workloads like backups and the Proxmox hypervisor.
- PCI Optimization: Understanding internal bus sharing is crucial for stable performance.
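As a concrete check (the device address is an example): a 10G NIC sitting in a slot that trains at fewer or slower PCIe lanes than it needs will silently cap throughput.

```shell
# Compare the card's maximum PCIe link (LnkCap) with what it actually
# negotiated (LnkSta); a mismatch in speed or width explains a capped NIC.
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```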
Next, I plan to investigate further fine-tuning options, including QoS settings to prioritize critical traffic and a potential upgrade to fiber internet at home.