Terrible write performance


kadajawi

New Around Here
I've got a QNAP TS-431X connected via 10 GbE and filled with 3x 18 TB drives in a RAID 5 array.

The problem: At best I get 60 MB/s write speeds, usually it is around 30 MB/s though. Read speeds are ok, with 300+ MB/s. The CPU is rather bored in all of this. I noticed that a ton of reads happen when writing files, and the drives are making lots of seeking noises.

Is this normal or is something going terribly wrong? And if so, does anyone have a clue what it could be or what I could do to figure it out?
 
You should check out the QNAP community forum.

Many things to check including antivirus.

What are you using to determine the actual write speed vs a network, client issue?
 
There are slower USB drives attached to the NAS, and write speeds to them are much faster (they can max out at 160 MB/s). Also, before I went to RAID 5 I had to start with a single drive for space reasons, and that could be filled at much higher speeds. As I did not pay much attention I'm not sure, but I believe I saw up to 300 MB/s. Since then the only things that changed were adding a drive and switching to RAID 1, then adding another drive and switching to RAID 5. I did not install any more packages.

iperf3 gives me around 600 MB/s between my desktop and the NAS. Also, the client reads the data to be sent much faster than the NAS can write it, so it fills up the cache and then slows down. The client shouldn't be the issue, IMHO, especially since sending data to other drives connected to the NAS works much faster. The integrated benchmark feature of QNAP claims 210 MB/s (WD) or 260 MB/s (Toshiba) read speed per drive.
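The elimination reasoning above can be summarized with the figures quoted in the post (all numbers are the poster's own measurements, not independent benchmarks):

```python
# Figures quoted in the post above (MB/s).
network = 600            # iperf3, desktop <-> NAS
usb_write = 160          # external USB drive on the same NAS
single_disk_read = 210   # QNAP's built-in benchmark, slower (WD) drive
raid5_write_best = 60    # best observed write speed to the array

# Every other path in the chain is faster than the observed array
# write, so the array itself is the most plausible bottleneck.
bottleneck_is_array = raid5_write_best < min(network, usb_write, single_disk_read)
print(bottleneck_is_array)  # True
```

Since the network, the client, and even a single drive all outpace the observed write speed, the problem most likely sits in the array layer itself.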

I tried to deactivate the malware remover. Apart from that, not much is installed. There is no Multimedia Console for example, which would index photos etc. Qsirch isn't installed either. The NAS has 4 GB of RAM.

Perhaps the mistake was to leave one of the old drives in the NAS to keep the settings while creating a new array, so something was messed up (the NAS did not migrate all the packages I had installed previously to the new array).

I did however do a factory reset when I noticed the performance issues and couldn't figure out what was causing trouble, alas with no effect.

If I knew that it would perform better if I started fresh, I'd buy another 18 TB drive, as I do not have the space at the moment.
 
3x 18 TB drives in a RAID 5 array

There's your problem right there - RAID5 can read fast, but writes carry parity overhead and are often no faster than a single drive. RAID10 will get you much better write performance - but there you'll need one more disk in the array (four disks in RAID10), and you'll have half of that available for storage (the other half is redundancy).

Put another disk in, nuke and rebuild the array from scratch with current QTS (yes, that matters)... and again, strongly consider RAID10 over RAID5 (RAID5 is the second-riskiest RAID level after RAID0, mostly due to failures during the rebuild after replacing a failed disk).
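To make the RAID5-vs-RAID10 trade-off concrete, here is a back-of-envelope comparison. The per-disk sequential write speed is an assumption for illustration only (it is not a figure from this thread), and real arrays fall short of these theoretical ceilings:

```python
# Back-of-envelope comparison of the two layouts discussed above.
DISK_TB = 18
DISK_WRITE_MB_S = 200  # assumed single-drive sequential write speed

# RAID5 with n disks: (n-1) disks of usable capacity. Full-stripe
# sequential writes can in theory approach (n-1)x one drive, but
# partial-stripe writes pay a read-modify-write penalty.
n5 = 3
raid5_capacity_tb = (n5 - 1) * DISK_TB
raid5_seq_write_ceiling = (n5 - 1) * DISK_WRITE_MB_S

# RAID10 with n disks: n/2 disks of usable capacity. Writes stripe
# across n/2 mirror pairs with no parity math at all.
n10 = 4
raid10_capacity_tb = (n10 // 2) * DISK_TB
raid10_seq_write_ceiling = (n10 // 2) * DISK_WRITE_MB_S

print(raid5_capacity_tb, raid5_seq_write_ceiling)    # 36 400
print(raid10_capacity_tb, raid10_seq_write_ceiling)  # 36 400
```

With these disk counts the usable capacity and the theoretical sequential ceilings actually come out the same; the practical difference is that RAID10 avoids the parity read-modify-write on small writes, and a rebuild only copies one mirror instead of reading every surviving disk.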

The TS-431X is a dual-core Annapurna Labs AL-212 @ 1.7 GHz - 32-bit ARM, and its Cortex-A15 cores are comparable to an Intel Atom from 10 years ago...

Other things to look at...

Enable SMB3 with a minimum version of SMB2 - IIRC, SMB3 is not enabled by default on the ARM variants. Also look at async I/O, but that comes with some risk of data loss if power is cut (async I/O defers writes).

I don't recommend using external drives via USB as part of your storage set - they're good for backups, but not as primary storage.

good luck...
 
If you start with unevenly filled drives, this could decrease performance if the RAID algorithm tries to fill the drives equally. Normally RAID5 performance is far better than a single disk, as the I/Os are distributed across the 3 drives.
I ran some performance tests on my NAS and there is an improvement with QTS 5.0.1 (for disks and SSDs): write speeds for big files are about 1.7x the nominal speed of a single disk (for 3 disks on QTS 5.0.1), maybe due to better use of the RAM cache.
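The "ton of reads" and constant seeking the OP describes while writing is consistent with partial-stripe (read-modify-write) behavior. A minimal sketch of the per-write I/O cost on a 3-disk RAID5 - this is textbook RAID5 behavior, not anything QNAP-specific:

```python
def raid5_ios_per_logical_write(n_disks: int, full_stripe: bool) -> int:
    """Physical disk I/Os needed to service one logical write on RAID5."""
    if full_stripe:
        # New data for every data chunk plus the new parity chunk:
        # n_disks writes, zero reads.
        return n_disks
    # Partial-stripe write: read old data chunk + old parity, then
    # write new data chunk + new parity (2 reads + 2 writes).
    return 4

print(raid5_ios_per_logical_write(3, full_stripe=True))   # 3 I/Os, all writes
print(raid5_ios_per_logical_write(3, full_stripe=False))  # 4 I/Os, half of them reads
```

If most writes land as partial stripes (small files, misaligned I/O, a fragmented or unevenly filled filesystem), the extra reads compete with the writes and the heads seek constantly - which matches the symptoms in the first post.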
 
I set up a PC as a NAS running RAID10 and get 400 MB/s from 8 TB WD Reds, with a fifth drive configured as a hot spare. I used to have a QNAP, but NAS boxes in general don't perform at their rated speeds. Migrating from one RAID level to another can be daunting and scary when you don't know what the embedded system will do when adding disks. Not to mention you're confined to the physical space, versus a PC case that has more options. I simply use Linux/mdadm to provide hosted space on the network.

For the price they charge for a NAS, you expect more. They skimp on the brains of the box to keep their profits high, using outdated tech like DDR3 that slows things down with their gimmicky apps and costs more to upgrade as legacy RAM. Being restricted to sometimes-proprietary NICs doesn't help either. There are a lot of drawbacks to off-the-shelf solutions for the sake of mass marketing and simplicity for the non-technical consumer.
 
There's your problem right there - RAID5 can read fast, but writes carry parity overhead and are often no faster than a single drive. RAID10 will get you much better write performance - but there you'll need one more disk in the array (four disks in RAID10), and you'll have half of that available for storage (the other half is redundancy).

Put another disk in, nuke and rebuild the array from scratch with current QTS (yes, that matters)... and again, strongly consider RAID10 over RAID5 (RAID5 is the second-riskiest RAID level after RAID0, mostly due to failures during the rebuild after replacing a failed disk).

Just to add to the recommendation from @sfx2000, the risk factor he refers to has been increasing as disk sizes grow. The fact that you're running 18 TB disks technically means you have a greater chance of a second disk failure during a rebuild (i.e. catastrophic) than I do with a RAID5 array running 3x 6 TB disks. (And yes - once a disk fails, if not before, that array is being changed to RAID10 rather than recovered to RAID5. The only reason it's RAID5 at the moment is cost constraints at the time it was set up, which don't apply now.)
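A rough way to quantify that rebuild risk, assuming the commonly quoted consumer-drive unrecoverable-read-error (URE) rate of one error per 1e14 bits read (an assumption - enterprise drives are typically rated an order of magnitude better):

```python
import math

def rebuild_ure_probability(disk_tb: float, surviving_disks: int,
                            ure_per_bit: float = 1e-14) -> float:
    """P(at least one URE) while reading every surviving disk in full,
    as a degraded RAID5 rebuild must. Assumes independent bit errors."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    # 1 - (1 - p)^n, computed stably via expm1/log1p for tiny p
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

print(round(rebuild_ure_probability(18, 2), 2))  # ~0.94 for 2x 18 TB survivors
print(round(rebuild_ure_probability(6, 2), 2))   # ~0.62 for 2x 6 TB survivors
```

The model is crude (real UREs are not independent, and the vendor ratings are worst-case figures), but it illustrates why the jump from 6 TB to 18 TB disks materially changes the odds of a clean RAID5 rebuild.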
 
