
Unable to saturate 10Gbps Ethernet


gustavo1spbr

Occasional Visitor

Hi there!

Just put together the following setup:

NAS:
- Synology DS-923+ with 32GB RAM and 4x 10TB WD Red Plus drives in RAID 0
- Synology 10Gbps ethernet adapter

Switch:
- TRENDnet 10Gbps Switch

Desktop:
- TRENDnet 10Gbps ethernet card for my desktop, using the Marvell AQtion chipset
- Brand new Cat6A cabling connecting everything

- Jumbo frames enabled (MTU 9000) on both the Synology and the desktop (switch is unmanaged)

The WD Red Plus drives are rated to sustain 215MB/s each. Therefore, four in RAID 0 should be able to reach roughly 860MB/s, right?

But when I transfer a very large 128GB file from the desktop (NVMe SSD, rated 3000MB/s+) to this RAID 0 volume, I'm only getting 400MB/s, about half of what I'd expect.

The Ethernet ports on the NAS, switch, and desktop are all lit green. They only light green when the link is at 10Gbps; 5Gbps and below light orange.
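
For reference, the link LEDs only tell part of the story. A quick sanity check from the NAS's SSH shell (or any Linux box on the same network) can confirm the negotiated speed and that 9000-byte frames really pass end to end. This is just a sketch, assuming the tools are present; the interface name (eth2) and the desktop's IP (192.168.1.20) are placeholders.

# Negotiated link speed and MTU of the 10GbE interface:
ethtool eth2 | grep -i speed              # expect "Speed: 10000Mb/s"
ip link show eth2 | grep -o 'mtu [0-9]*'  # expect "mtu 9000"

# Confirm 9000-byte frames pass without fragmentation
# (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header):
ping -M do -s 8972 -c 4 192.168.1.20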

Any ideas what could be hindering the performance?

Thank you very much in advance.
 
I can get that with RAID 10 on WD Reds.

It seems a bit low for an R0 configuration, though. Are they synced and built?

I just rebuilt my setup using NVMe drives and came across a setting that bumps the speed up with a couple of commands. I'm not sure you can do it on an actual NAS, though.

dev.raid.speed_limit_min=500000
dev.raid.speed_limit_max=2000000

I saw some background activity hit 2-3GB/s after the changes. It didn't do much for the spinners, though.
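
For context, dev.raid.speed_limit_min and dev.raid.speed_limit_max are the Linux md sysctls that throttle RAID resync/rebuild traffic (values in KiB/s), which is why they mostly help background rebuilds rather than regular SMB transfers. Here is a minimal sketch of applying them on a generic Linux box with a root shell; whether DSM lets you set or persist them is another matter.

# Check the current md resync throttle:
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# Raise it for the current session:
sysctl -w dev.raid.speed_limit_min=500000
sysctl -w dev.raid.speed_limit_max=2000000

# Watch an active rebuild/resync pick up speed:
cat /proc/mdstat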

I would turn off jumbo frames and test again, since the drives don't use them; their blocks are usually 512/4096 bytes in size.
Also, the RAM isn't going to make much difference here. I have 16GB in my setup, but the system rarely goes beyond 2-3GB for a full Linux OS. RAM comes into play more with file systems like ZFS.
 
How full is your NAS? Are you over 20% of maximum capacity? If so, those rated speeds are not realistic.

Do you often delete and copy new files to your NAS? That can slow the transfers too.

Have you tried disabling Jumbo Frames and rebooting all?

Writing to the NAS is expected to be slower than reading (particularly if you've previously deleted many small files).
 
Not full at all. In fact, these are brand-new drives with nothing on them.

Disabled jumbo frames. Speed dropped from 380-400MB/s to 350-360MB/s.
 
Some important additional tests, to try to shed some light on the situation.

Common setup
- DS923+ with 32GB RAM, stock DSM 7.2, nothing installed or running beyond the plain vanilla first-boot setup
- Synology proprietary 10GbE card installed
- Switch: TRENDnet S750 5-port 10GbE
- Desktop: Ryzen 5800X, 32GB RAM, 2TB NVMe
- NIC on desktop: TRENDnet TEG-10GECTX (Marvell AQtion chipset)
- Jumbo frames enabled, MTU 9000 bytes

A) TESTING WITH 4 HDDS IN RAID0

iSCSI
4 HDDs in RAID0
Result: 300MB/s
--> Very weird that iSCSI performance is worse than SMB.

SMB
4 HDDs in RAID0
Btrfs
Result: 400MB/s
--> This seems too low, as each HDD is capable of ~200MB/s

SMB
4 HDDs in RAID0
Ext4
Result: 400MB/s
--> File system choice does not affect performance in this test


B) TESTING WITH 2 HDDS IN RAID 0

SMB
2 HDDs in RAID0
Btrfs
Result: 400MB/s
--> This proves that these drives can sustain at least 200MB/s each. Four should reach 800MB/s as far as the HDDs are concerned.

SMB
2 HDDs in RAID0
Ext4
Result: 400MB/s
--> Again, file system choice does not affect performance on large file transfers

C) TESTING WITH 4 HDDS IN 2 RAID 0 POOLS

SMB
2 HDDs in RAID0 and Ext4
+
2 HDDs in RAID0 and Btrfs
Simultaneous data transfer from different SSDs on desktop
Result: 200MB/s on each transfer
--> Clearly, there's a cap at 400MB/s...

Where is this cap coming from?
- The tests show the HDDs are not the bottleneck
- Maybe the Synology DS-923+ isn't really 10GbE capable???
- Maybe the TRENDnet switch or NICs aren't really 10GbE capable???
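
One way to narrow this down is to take the network out of the picture and write to the array locally. A rough sketch, assuming SSH is enabled on DSM, the volume is mounted at /volume1 (the usual DSM path, but an assumption here), and GNU dd options are available:

# Write ~10GB straight to the array, bypassing the page cache, and report throughput:
dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=10000 oflag=direct status=progress
rm /volume1/ddtest.bin

# If this local write comfortably exceeds 400MB/s, the array is fine and the cap sits on
# the network path (NIC, cable, switch, or the client side).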
 

Thanks a lot again. Since it's a Synology NAS, I don't think I can enter those raid speed-limit commands anywhere. But I should be able to get 800MB/s transfer speeds using 4 HDDs in RAID0; I haven't seen any indication that there would be a limitation. I just posted a series of tests that should help narrow down the situation, and I'd be grateful if you could have a look at them.
 
@gustavo1spbr

Yeah, those numbers are disappointing, to say the least. This is why I DIY my setup. There's something funky in the NAS preventing full bandwidth.

I've seen DAS units vary from 200-400MB/s in their specs/reviews, and it comes down to the controllers being used. There's a chance you're hitting a bottleneck there.

If you could slap an NVMe drive into it and test that, it could become apparent. Even a SATA SSD should be able to hit above the 400MB/s you're seeing, up to around 550MB/s depending on the SSD.

The NAS could be using a switch between the drives, causing the deficiency.
 
3 HDDs in RAID0 won't hit 800 MB/s (even theoretically).

I haven't seen you answer whether you let the drives/controller sync first after setting them up (this takes 24 hours or more, depending on the options set).

Right now, I'm suspecting the NAS Ethernet adaptor or the switch. Can you remove the latter from the equation (for testing)?
 
3 HDDs in RAID0 won't hit 800 MB/s (even theoretically).
No, but they should be hitting higher than 400MB/s at least. Assuming new drives, even 5,400rpm drives hit 200MB/s+ per drive.

I would still just slap an SSD inside and see if it breaks 400MB/s, as it should with a simple volume and no RAID involved.
 
This is a 4-drive setup (sorry, I said 3; just corrected). Each drive can sustain 200MB/s, so under ideal conditions, which I have just prepared, I expected to see 800MB/s in RAID0 instead of the 400MB/s I'm actually getting.

Pardon my ignorance, I don't know what syncing is in this context. I just set up the RAID0 pool, created a Btrfs volume, and the NAS was ready to go (in a different test I also created an Ext4 volume on the same pool, to see whether the file system choice would affect performance, which it did not).

I don't have any alternative 10GbE devices to replace the NAS ethernet adaptor or the switch. But both are brand new, being used out of the box for the first time. The NAS adaptor is a proprietary model from Synology, and the only possible model that works for this device. The switch is a TRENDnet S750 5-port 10GbE, unmanaged. And the NIC on the desktop is also a TRENDnet TEG-10GECTX (Marvell AQtion chipset), which sells as a bundle with the switch.
 
I don't have SSDs available for testing, but the fact that a 2-drive RAID0 yielded the exact same 400MB/s write performance as a 4-drive RAID0 clearly means the drives (even being HDDs, not SSDs) are not the bottleneck.

It must be a limitation elsewhere, on the NAS or on the switch. But the switch is unmanaged and rated for 10GbE, so I think it must be the NAS. What puzzles me is that I've seen reviews where this NAS was able to push the full performance of a 4-drive array through the network, and that is clearly not the case here....
 
@gustavo1spbr

I just set up an R0; ignore the syncing comment, as it doesn't apply.

I don't think it's the NIC or the switch; I think it's the NAS. Without a SATA SSD it's hard to pinpoint the bottleneck, though, whether it's a controller, the backplane, or something else. I'd go to Walmart and pick up an SSD for testing to rule out the NAS before diving into other potential causes.

It sounds like the NAS sees all of the drives and configured them correctly, so it's either software or hardware in the NAS that's the issue.

If you have a PC you could put the adapter in, you could test that as well with iperf to confirm the network isn't the issue. I suppose it could be the cabling causing it to link at 5GbE instead of 10GbE, but I doubt the switch is NBASE-T and links at speeds other than 1G/10G. Try another cable and see if it makes any difference.
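
For anyone wanting to run that network check, here is a rough iperf3 sketch. It assumes iperf3 is available on both ends (it is not part of stock DSM, so use a second PC or a community package), and the IP address is a placeholder:

# On one end, start a server:
iperf3 -s

# On the other end, run a 30-second, 4-stream test toward it:
iperf3 -c 192.168.1.50 -P 4 -t 30

# A clean 10GbE path shows roughly 9.4 Gbit/s of TCP payload; much less points at the
# NIC, the cable, or the slot it sits in rather than the disks.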
 
@Tech Junky @L&LD

Finally, figured out what was happening!

It turns out it was a super stupid newbie mistake! 🙈

I plugged the 10GbE card into a PCIe 2.0 x1 slot on my desktop, limiting it to about 500MB/s!!! 😵

Once I moved the card to a PCIe 3.0 x4 slot, I immediately got 800MB/s on the 4xRAID0 array.
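
For the record, the arithmetic behind that cap, plus a way to read the negotiated link on a Linux host (the device address is a placeholder; on Windows, a hardware-info tool shows the same link width):

# Per-lane PCIe bandwidth, before packet overhead:
#   PCIe 2.0: 5 GT/s, 8b/10b encoding    ->  500 MB/s per lane; an x1 slot tops out at 500 MB/s
#   PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~985 MB/s per lane; an x4 slot gives ~3.9 GB/s
# 10GbE needs ~1.25 GB/s, so a 2.0 x1 slot caps real-world throughput near the ~400 MB/s observed.

# Check what the card actually negotiated (01:00.0 is a placeholder; find yours with `lspci | grep -i ethernet`):
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'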

Unbelievably stupid on my end. Apologies for the time consumed.

At least, some lessons learned, which I hope can be useful to all:

1) 4x RAID 0 with modern HDDs can indeed achieve 800MB/s. That's beyond SATA SSD speed territory, while also providing major storage capacity (40TB in my case)
2) Btrfs and Ext4 are equivalent in terms of performance for large file transfers on Synology (my personal preference goes to Btrfs, due to its quick, space-saving snapshots and being well supported by Synology)
3) Jumbo frames (MTU 9000) do provide a meaningful speed boost: around 10-15% in my testing vs MTU 1500.
4) A Synology DS923+ coupled with 4x 10TB WD Red Plus drives is a nearly silent, low-power server that can sit on top of your work desk without making distracting noises. Fans and HDDs are very quiet.
5) Start by checking the PCIe slot! Just because the card is small doesn't mean it should go into the smallest slot!

I hope this is helpful. Thank you so much for the great support in this community, especially to you @Tech Junky. 🙏
 
Yes, I mentioned it was either the 10GbE card or the switch.

Glad you got your expected speeds. 🙂
 
