Transfer speed

Apes

I have an Asus RT-AC86U running the latest Merlin firmware, and to one of its 1 Gigabit LAN ports I attached a 2.5G switch.
A 2.5G NAS and a 2.5G laptop are attached to that switch.
When I read data from the NAS to the laptop, the speed is around 300 MB/s, which is a good 2.5G speed. So far so good.
When I write data from the laptop to the NAS, the speed is only around 120 MB/s, which is 1G speed.
I am using an FTP connection and running Linux.

Can someone explain why my write speed is so slow? How can it be improved? (The CPU usage seems fine on both machines.)
Thanks
 
When I write data from the laptop to the NAS, the speed is only around 120 MB/s, which is 1G speed.
1GbE speed would be more like 113 MB/s. 120 MB/s sounds more like the maximum write speed the NAS is capable of. You don't specify what kind of storage is in the NAS or how it's configured.
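As a rough sanity check, here is a back-of-the-envelope conversion from line rate to file-transfer throughput (the ~94% payload efficiency for TCP over Ethernet at a 1500-byte MTU is an approximation, and many tools report MiB/s while labelling it MB/s):

    # Rough usable file-transfer throughput for common Ethernet line rates.
    # The 0.94 efficiency factor (Ethernet framing + TCP/IP headers at a
    # 1500-byte MTU) is an approximation, not a measured value.
    def usable_throughput(line_rate_gbps, efficiency=0.94):
        bytes_per_s = line_rate_gbps * 1e9 * efficiency / 8
        return bytes_per_s / 1e6, bytes_per_s / 2**20   # MB/s, MiB/s

    for rate in (1.0, 2.5):
        mb, mib = usable_throughput(rate)
        print(f"{rate} GbE -> ~{mb:.0f} MB/s (~{mib:.0f} MiB/s)")
    # 1.0 GbE -> ~118 MB/s (~112 MiB/s)
    # 2.5 GbE -> ~294 MB/s (~280 MiB/s)

So ~113 MB/s is essentially a saturated 1GbE link, while ~280-300 MB/s is roughly what a saturated 2.5GbE link looks like.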
 
It is an Asus AS6604T in a RAID 5 configuration.
The file system is EXT4.
I did another transfer test just now: writing was at 109 MB/s, reading at 280 MB/s.
 
What I have seen in the past with RAID in servers is that unless writes are cached, they are slower than reads, which are cached. The big problem is you never want to cache writes unless you have a very good backup battery system; a power loss mid-write can kill your RAID. If you have a good battery backup, then see if there is a write cache setting that can be turned on.

You can get a battery-backed cache controller, but that is more of a server-world feature.
 
It is the transfer time from line power to battery feed that matters in the UPS. If you can get a battery backup card for the RAID box, that is the first choice. Otherwise, you will need to talk with tech support at both the RAID box supplier and the UPS supplier to ensure the UPS switches over quickly enough.
 
RAID5 has a bit more write overhead because parity needs to be calculated and spread across the drives - which is one benefit of going with RAID10. Depending on the use case, RAID10 has less available space across the array, but writes and reads are faster (rough sketch below).

Most consumer NAS boxes run LVM over mdadm, which is all software, so CPU performance does have an impact here - older small-core Intel chips could handle 1GbE with little effort, but 2.5GbE might become CPU-bound on writes.
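To put a number on the parity overhead, here is a minimal rule-of-thumb sketch (illustrative only, not a benchmark of this NAS): a RAID5 small write costs four disk I/Os (read old data, read old parity, write new data, write new parity), while a RAID10 write costs two (one per mirror side).

    # Rule-of-thumb random-write capacity for an array of identical disks.
    # RAID5 small writes cost 4 disk I/Os; RAID10 writes cost 2 (mirrors).
    # The ~180 IOPS per 7200 RPM drive is an assumed typical figure.
    def effective_write_iops(n_drives, iops_per_drive, raid_level):
        write_penalty = {"raid5": 4, "raid10": 2}[raid_level]
        return n_drives * iops_per_drive / write_penalty

    for level in ("raid5", "raid10"):
        print(level, round(effective_write_iops(4, 180, level)), "random write IOPS")
    # raid5 -> 180, raid10 -> 360

Large sequential writes behave better than this (parity can be computed per full stripe), but it shows why RAID5 writes tend to lag behind reads, especially when a small NAS CPU is doing the parity math in software.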
 
Using RAID 10 as @sfx2000 mentioned, I can hit 400 MB/s using 4 x WD Red drives over a 5GbE connection. I also have a 5th drive on the array that will immediately take over if one fails. These are normal 8TB 5400 RPM drives with 256MB cache.

You should probably validate your speeds with a RAID calculator.
 
So what you are saying is that, in general, with a RAID 5 configuration the write speed is maxed out at around 125 MB/s?
Would installing a dual M.2 NVMe SSD read/write cache roughly double the write speed (e.g. to 300 MB/s)?
Since I already have RAID 5 set up, I can't switch to RAID 10 easily, can I?
 
If you add NVMe drives to the equation you'll see the max network speed of the port they're connected to.


Now, if you converted to R10 you could see roughly a 1X write performance boost per pair of disks you add to the array. If you go with the minimum of 4 drives you get 2X the write speed, 3 pairs gets you 3X, and so on.

So, with my 4 disks / 2 pairs I get 400 MB/s+ because it's combining 2 controllers for writes and reads, bundling the bandwidth to hit those speeds.
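As a very rough illustration of that pair scaling (the ~200 MB/s sustained sequential rate per spinning drive is an assumption, not a spec):

    # Rough sequential-write scaling for RAID10: writes stripe across mirror
    # pairs, so throughput is roughly pairs * single-drive sequential rate.
    def raid10_seq_write_mb_s(pairs, per_drive_mb_s=200):   # 200 MB/s assumed
        return pairs * per_drive_mb_s

    for pairs in (2, 3, 4):
        print(f"{pairs} mirror pairs -> ~{raid10_seq_write_mb_s(pairs)} MB/s sequential write")
    # 2 pairs -> ~400 MB/s, 3 pairs -> ~600 MB/s, 4 pairs -> ~800 MB/s

That lines up with the ~400 MB/s figure above for 4 disks / 2 pairs.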

 
I think I can't change to RAID 10 now since I already have too much data on the NAS and it would be deleted in the process.
Interesting to know though that RAID 10 improves the write speed so much!
If you add NVMe drives to the equation you'll see the max network speed of the port they're connected to.
Are you saying that if I add e.g. 2x 256GB M.2 NVMe drives and put them in read/write SSD cache mode I would see the maximum bandwidth speeds?
 
If you got away from using R5.

I would get a drive capable of holding all of your data as a backup and then convert the array to R10, or add the NVMe drives as an R0 array and then set up an rsync script to move the data to the spinners (a rough sketch of such a script is at the end of this post).

You're going to be limited by what the NAS can do. I built my own using a PC and can manipulate everything individually. Using Linux / mdadm you can play around with moving data without breaking things by using the "missing" option to create a new, degraded array while leaving the existing one in place, copy the data over, then kill the existing array, move its disks into the new one, and let them sync.

Off-the-shelf NAS boxes are okay until you want performance or need to do something more advanced. Adding the NVMe drives in cache mode should boost the speeds, though; what sort of speed you'll see depends on how the NAS handles them.
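For the rsync idea above, this is the kind of script meant - a minimal Python sketch, where the paths are hypothetical placeholders and rsync is assumed to be installed on the box:

    #!/usr/bin/env python3
    # Move finished files from a fast NVMe landing volume to the slower
    # spinner array. Both paths are hypothetical examples.
    import subprocess

    NVME_LANDING = "/volume2/fast_landing/"   # hypothetical NVMe R0 volume
    HDD_ARCHIVE = "/volume1/archive/"         # hypothetical HDD array

    def flush_landing_zone():
        # --remove-source-files deletes each file from the NVMe volume only
        # after rsync has copied it successfully to the archive.
        subprocess.run(
            ["rsync", "-a", "--remove-source-files", NVME_LANDING, HDD_ARCHIVE],
            check=True,
        )

    if __name__ == "__main__":
        flush_landing_zone()   # e.g. run from cron every 15 minutes

Run it from cron (or the NAS scheduler) so new uploads land on the NVMe volume at full network speed and trickle down to the spinners in the background.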
 
So what you are saying is that, in general, with a RAID 5 configuration the write speed is maxed out at around 125 MB/s?
The speed is dictated by your specific hardware and not fixed to a particular value like 125 MB/s. As always there are caveats like I/O configuration, CPU, interface speed, etc.

Unfortunately the specs for your NAS don't explicitly state the expected speeds in MB/s for your particular setup. They do strongly suggest that both the read and write speeds should be about 295 MB/s.
 
The specification of the HDD drives says 7200 RPM, 512e, 512MB buffer size, SATA 6Gb/s, with a sustained data rate of 281 MB/s.
So that would explain my 280 MB/s read speed.

To reach that sort of write speed I could try the Asus SSD Cache (link) option with 2x 512GB M.2 NVMe drives in read/write SSD cache mode. Does anyone know how much that would increase the write speed (currently around 120 MB/s)?
Thank you for your help
 
The specification of the HDD drives says 7200 RPM, 512e, 512MB buffer size, SATA 6Gb/s, with a sustained data rate of 281 MB/s.
So that would explain my 280 MB/s read speed.
What is the model number of the drives you're using (e.g. ST1000LM048)? You should be able to find some benchmarks for their write speed on the internet. That could be useful in determining whether that's where the problem lies.
 
Well, if you did 4 x 18TB in R10 you would get 36TB of usable space but ~400 MB/s+.
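The capacity side of that trade-off is simple arithmetic (RAID5 keeps n-1 drives' worth of space, RAID10 keeps half):

    # Usable capacity for n identical drives (illustrative arithmetic only).
    def usable_tb(n_drives, drive_tb, raid_level):
        if raid_level == "raid5":
            return (n_drives - 1) * drive_tb   # one drive's worth of parity
        if raid_level == "raid10":
            return (n_drives // 2) * drive_tb  # half the drives hold mirrors
        raise ValueError(raid_level)

    for level in ("raid5", "raid10"):
        print(f"{level}: {usable_tb(4, 18, level)} TB usable from 4 x 18TB")
    # raid5: 54 TB usable (slower writes)
    # raid10: 36 TB usable (faster writes)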

Flash caching might get the speeds up on R5 and it's a ~$60 experiment to see what it does.

Taking one 18TB drive out of the array, converting it to a backup drive, copying your data off the NAS, and reconfiguring it to R10 might make more sense than playing with the NVMe drives as a cache.

It's all up to you, though, how you want to get it working the way you want. Of course pushing the limits to get 2.5GbE of throughput is tempting, but ~200 MB/s is more than enough speed for anything you're using it for directly. If you have a use case like live editing 4K video then it's worth pursuing faster speeds. If you're just doing backups and downloading to the NAS then you don't need to hit those higher speeds.

If you were using a PC as a NAS, though, there could be a simple solution: use an HBA in a PCIe slot connected directly to the CPU to bypass DMI bottlenecks.

Like anything tech-related, there are about 1000 different ways you can do this sort of thing.
 
I think I can't change to RAID 10 now since I already have too much data on the NAS and it would be deleted in the process.
Interesting to know though that RAID 10 improves the write speed so much!

Are you saying that if I add e.g. 2x 256GB M.2 NVMe drives and put them in read/write SSD cache mode I would see the maximum bandwidth speeds?
With RAID10 you trade off giving up more space compared to RAID5.

So, I assume these are all write-through speeds, with no write caching for better performance. If you can cache all activity then the data can be written out based on the hard drive head position rather than stopping everything to perform a write, which makes the system faster. I believe hard drives nowadays still only cache reads in their internal cache.
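If you have shell access to the box (not every off-the-shelf NAS makes this easy), you can at least check what a given drive reports for its volatile write cache - a minimal sketch, assuming a Linux host with hdparm installed and root access; the device path is just an example:

    #!/usr/bin/env python3
    # Report whether a drive says its volatile write cache is enabled.
    # Assumes hdparm is installed; /dev/sda is a hypothetical device path.
    import subprocess

    def write_cache_status(device="/dev/sda"):
        # `hdparm -W <device>` with no value only reports the current setting.
        result = subprocess.run(["hdparm", "-W", device],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(write_cache_status())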
 
So what you are saying is that, in general, with a RAID 5 configuration the write speed is maxed out at around 125 MB/s?

Writes to a RAID5 array can only happen at the speed of the slowest drive in the group.

SSD caching won't help on the writes, but it will on the reads if there are multiple clients fetching the same data.
 
