
NAS - drive sleep/spindown or not?


sfx2000

As a general question - what are most folks doing with the spinning-rust disks in their NAS boxen: let the drives spin down, or let them keep spinning?

The option is there, and for the moment, with RAID10 configured, I'm letting the drives spin 24/7 with spin-down disabled.

Any concerns about reliability and potential for either outright failure or RAID sync issues?
 
With zero spin down, you are putting a lot of extra hours on the motors. At most I'd set a very long spin down time, like 4hrs or something. Depends on access pattern and how much latency you are willing or able to tolerate for them to spin up.

My desktop and server both are RAID0 in a 2-drive config. The newer 7200rpm Seagate 3TB drives I have in them spin up pretty fast, maybe 8-9s total latency. With my old Samsung 5400rpm and 7200rpm drives it was closer to 13-14s, which did get a little ridiculous.

On my desktop I had drive spin-down set to 15 minutes and on my server it is set to 10 minutes. My and my family's usage patterns* are generally such that the drives will either stay spun up from access or else it isn't likely anyone is going to hit them again for a few hours. I am sure it probably doesn't save more than $4-5 over a somewhat more relaxed policy, but never spinning a drive down does use a fair amount of extra power.

If you figure 5-6W at idle per drive compared to less than a watt per drive with it spun down... I assume you've got at least 4 drives to have a RAID10 config, so that adds up. At my electric prices that's probably $20 a year if the NAS also never sleeps, just to keep the drives idle instead of spun down.
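
A quick back-of-the-envelope check of that figure (a rough sketch: the wattages come from above, but the 4-drive count and $0.12/kWh rate are just example assumptions - plug in your own):

```
# Rough annual cost of leaving drives spinning vs. letting them spin down.
# Wattages are from the post above; the 4-drive count and $0.12/kWh rate are
# example assumptions.
IDLE_W = 5.5        # per-drive draw while spinning idle
STANDBY_W = 0.7     # per-drive draw while spun down
DRIVES = 4          # minimum for RAID10
RATE = 0.12         # electricity price in $/kWh

hours = 24 * 365
kwh_saved = DRIVES * (IDLE_W - STANDBY_W) * hours / 1000
print(f"~{kwh_saved:.0f} kWh/yr, roughly ${kwh_saved * RATE:.0f}/yr if the drives stayed spun down")
# -> ~168 kWh/yr, roughly $20/yr, in line with the estimate above
```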

Or is it an issue that you have limited options for the spin-down interval (i.e. only choices like 5, 10, 15 min, etc., up to an hour)?

*Checking the SMART data over the course of ~38,000 power-on hours for the old 2TB drives that used to be in my server, I racked up a total of ~5000 loads (i.e. drive spin-ups). No way to check total idle time, as that isn't broken out, but if I were to take a guess, of those 31,000 hrs the total idle time was probably around 6000-8000 hrs. A lot lower motor wear as well as power consumption, even if head/platter wear was somewhat higher than it would have been otherwise.
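
If anyone wants to pull the same counters from their own drives, here's a minimal sketch using smartmontools (assumes smartctl is installed and run with enough privileges, and that /dev/sda is the right device; the exact attribute names vary a bit by vendor):

```
# Read the spin-up-related SMART counters with smartmontools.
import subprocess

ATTRS = {"Power_On_Hours", "Start_Stop_Count", "Load_Cycle_Count"}

def smart_counters(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    counters = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in ATTRS:
            counters[fields[1]] = fields[9]   # RAW_VALUE column
    return counters

print(smart_counters())  # e.g. {'Power_On_Hours': '38000', 'Start_Stop_Count': '5000', ...}
```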
 
Reason for asking: most drive failures I've seen were on spin-up, especially as they age - the wear on the bearings is negligible in my experience in the data center world. And there, I've got drives that have been spinning for over 5 years (except for a bad batch of Seagates in one data center that got very hot)... We had a major power event last March in my lab, and there I had a few failures as the drives cooled off after spinning for many years (my lab gear typically is a smaller version of production gear, and is normally a single slice of a much larger platform).

From a power perspective - power is going to be consumed whether the disk is spinning or not, and it's pretty low whether the disk is idling or spun down; it's the read/write activity that gets the current up...

I'll have to check and see what QNAP publishes for power consumption numbers, and what contribution the drives make...

Would appreciate any other comments/observations.
 
I have my SOHO NAS sleep from 1AM to 7AM each day. During the work day I don't want drives to spin up/down/up...
 
Reason for asking: most drive failures I've seen were on spin-up, especially as they age - the wear on the bearings is negligible in my experience in the data center world...


Same here, especially in cases where the system has been up for years without shutting down. If you have drives spinning 24/7 the bearings can get really bad while everything still works. I remember moving datacenters with lots of old RISC servers that had been running for up to 10 years without powering down; the number of drives that did not start back up was quite crazy!

But on my own NAS I let the drives spin down, since actual usage has been at most a few hours a day.
 
I have my SOHO NAS sleep from 1AM to 7AM each day. During the work day I don't want drives to spin up/down/up...

Ditto, when I sleep they sleep too.
 
I have to agree with most here: forcing the drives to spin continuously is not optimal if long-term reliability is the goal. On my QNAP, I run a quick test on the drives every 24 hours and a thorough test each week. And the NAS never sleeps (access times can be quite random throughout the day and depend on the day of the week too).

What the drives do in between the scheduled tests and actual user activity is hopefully spin down and not create heat and wear on the motors, bearings and heads.
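
Outside the QNAP UI, a similar schedule can be approximated with smartmontools - a rough sketch, assuming smartctl is installed and the device path is right; call it with "short" daily and "long" weekly (e.g. from cron), then check results later with `smartctl -a`:

```
# Kick off a SMART self-test on one drive.
import subprocess
import sys

def self_test(device, kind="short"):
    # "short" takes a couple of minutes; "long" is a full-surface test (hours)
    subprocess.run(["smartctl", "-t", kind, device], check=True)

if __name__ == "__main__":
    self_test(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda",
              sys.argv[2] if len(sys.argv) > 2 else "short")
```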
 
I have a pair of 500GB drives in my 24/7 server in the garage. I think they've been spinning for 4 years or so. Some argue that spin-up/down too often stresses the drive motor/bearings, as does the change in temperature.
 
I have a pair of 500GB drives in my 24/7 server in the garage. I think they've been spinning for 4 years or so. Some argue that spin-up/down too often stresses the drive motor/bearings, as does the change in temperature.

The stress of starting to spin is surely higher than while spinning (just think of how much more torque acceleration requires vs. cruising, for anything mechanical). But I think even in a datacenter, if there was really a strict schedule where machines were only used a few hours a day, it could start to be worth it to shut things down for the day. In reality somebody is always accessing some part of a shared system, so you just need to leave it running (it's not about drive life, just practicality or requirements). Actually, even from a business point of view there is a lot more risk in keeping things running for ages vs. periodic shutdowns, as you increase the risk of having multiple disks fail to start simultaneously (honestly I have not seen this as much with modern hardware as with old Sun / HP-UX / AIX servers, so maybe there have been some improvements).
 
Yeah. In the last 5 years or so, consumer-grade drives (other than bleeding edge like 4 or 5TB) have become much more reliable. Indeed, in my NAS, I rate the risk of data loss from drive failure lower on the list of risks than human error, theft, and file system corruption. That's why I don't use RAID but rather two volumes and an external USB3 drive.
 
But I think even in a datacenter, if there was really a strict schedule where machines were only used a few hours a day, it could start to be worth it to shut things down for the day. In reality somebody is always accessing some part of a shared system, so you just need to leave it running (it's not about drive life, just practicality or requirements)

Most of the failures we see are power up and spin up - so once they're up, we usually let them run...

We're 24/7 for signalling and application servers, so they basically stay up all the time, and we rarely even reboot them - mostly for security and application updates only.

The "hotel" servers, e.g. OAM/Monitoring/etc... I've been moving them slowly over to a VMWare ESX instance when possible - mostly to save thermal and power costs, and the upside is increased reliability there...

In the enterprise/carrier-grade space, the newer HW is much more reliable - I'm running mostly current HP 1U/2U boxes, and slowly migrating over to IBM BladeCenters... As far as drives, what I've found is that the typical lifespan in the DC is around 3 years - HGST 2.5" 10K RPM SAS drives, with the 3.5" drives usually Seagates (again SAS), though I've got some HGST still in inventory there. Folks might be surprised, but generally we don't run very large drives - the 2.5" drives, for example, are usually 300GB, and the 3.5" are typically 1TB in our current storage inventory...
 
And to give some sense of scale - one of my clusters is 6 IBM BladeCenter chassis, each with 8 dual-socket blades (Westmere Xeons, 6C/12T per socket), each blade with 32GB of RAM and two drives in RAID1 - so multiply 6 chassis * 8 blades * 2 sockets * 12 threads/socket, which works out to 1,152 hardware threads - let's just say it's pretty big.

DB servers on that cluster are 6 IBM M300 2U boxes, two-socket Xeons (same as the blades) but with 64GB RAM, running Oracle 10g. The DB servers talk to a 24-disk SAN locally over Fibre Channel - this is all geo-redundant, active/active with full failover - so each site is 3 chassis plus 3 M300s, i.e. two racks per site...

4 10GigE interfaces (bonded) on the blade center chassis to the outside world and to the DB servers - site to site is 40GigE with MPLS over dedicated links between Site 1 and Site 2.

Fun stuff... I've got a dedicated Operations Engineer that is on-call 24/7 to support it ;)
 
Can this data help you make a decision?
https://www.backblaze.com/hard-drive-test-data.html

It is my understanding that continuously spinning a drive is more reliable than not. I would like to see evidence to the contrary if my assumption is wrong.
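
For reference, the Backblaze reports express reliability as an annualized failure rate rather than MTTF; here's a small sketch of that calculation (the failure and drive-day numbers below are made up, not Backblaze's):

```
# Annualized failure rate (AFR) as used in the Backblaze reports:
# AFR = failures / drive-years, with drive-years = drive-days / 365.
def afr(failures, drive_days):
    drive_years = drive_days / 365
    return 100.0 * failures / drive_years   # percent per drive-year

print(f"{afr(failures=12, drive_days=1_500_000):.2f}% AFR")  # hypothetical model/quarter -> 0.29%
```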

A continuously spinning drive will most certainly live longer than one that is shut down frequently. A side effect is that if you have loads of drives running 24/7 and then need to shut them down after 2 years, you run a very high risk of multiple simultaneous failures.

However, is this really something people at home should consider? We are generally talking about quite a small number of drives, so even if you shut down / spin up many times a day you are fairly unlikely to have a failure during the normal life span of the drives. On the other hand, the probability is high enough that you do need to have proper backups to ensure you can recover even if the whole NAS is nuked.

When comparing home vs. enterprise setups, one thing to take into consideration is what happens if your device is unavailable. For an enterprise it can mean angry customers and lost revenue; for a home user it should never mean more than inconvenience (you should organize your backups in a way that lets you get individual files even if the NAS is not online). There are just too many differences to make any direct comparison. For us, we run close to 10,000 VMs over multiple continents with petabytes of data on different SAN devices. Disks fail weekly but it's rarely an issue (we can lose a lot of drives before risking data loss); sometimes multiple disk losses can affect performance, though, and that can already be an issue (less so now that all the databases run on Fusion-io).
 
The question is, does a drive last longer spinning continuously, or with periodic spin-downs when it will be idle for a moderate period of time? I don't disagree that if you are spinning the drive up and down a dozen or more times a day, that is going to cause a lot more wear than never spinning it down.

What if you are spinning the drive up and down only 2-4 times per day? That is potentially a lot less time on the motors, and probably not significant wear from parking and spin-up either.

As for power consumption, looking at most tests, a typical 5400/5900rpm drive uses around 0.5-1W spun down and 4.5-5.5W at idle. 7200rpm drives are about the same spun down, but about 5.5-7.5W when idle.

Sequential reads and writes generally don't increase the idle figures by much, maybe just 1 or 2W. Random I/O is more in the range of 7-10W, depending on whether it's a 5400rpm or 7200rpm drive.

I will certainly grant that a spin-down time of 10 minutes is probably a bad thing if you have moderate access patterns. It might not be a bad thing if you have very infrequent or very frequent access patterns. In the former case, you just aren't going to be hitting the drive(s) often enough to cause many spin-ups, so spinning down quickly is a good way to save power. In the latter, frequent access is generally going to keep the drives spinning constantly until whatever "work day" access is done.

If you have moderate access patterns, then a 1-2 hr spin-down time would probably work.
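
To make the trade-off concrete, here's a toy model (the access times are invented, and time spent actually reading/writing is ignored): for a day's worth of accesses and a given timeout, it counts spin-ups and roughly how many hours the drive spends spinning.

```
# Toy model of the spin-down trade-off.
def spin_model(access_hours, timeout_h):
    spin_ups, spinning_h, parked_at = 0, 0.0, None
    for t in sorted(access_hours):
        if parked_at is None or t > parked_at:
            spin_ups += 1                               # drive was parked; this access wakes it
            spinning_h += timeout_h                     # it then idles for the full timeout
        else:
            spinning_h += (t + timeout_h) - parked_at   # timeout window gets extended
        parked_at = t + timeout_h
    return spin_ups, spinning_h

for timeout in (0.25, 1, 2, 4):                         # 15 min, 1 h, 2 h, 4 h
    ups, hrs = spin_model([8, 8.5, 12, 18, 19, 22], timeout)
    print(f"timeout {timeout:>4} h: {ups} spin-ups, ~{hrs:.1f} h spinning")
```

Longer timeouts mean fewer spin-ups but more hours on the motor; shorter ones flip that around.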

Or at the very least, spin the drives down during the hours they're not reasonably likely to be used.

For example, I basically never use my server between 12:30am and 7am, so I have the server go to S3, which also means the drives are parked, from 12:45am to 6:45am.

Whether you have a lot of drives or just one, if you are going to keep them for a long time you might save a fair amount over their life. $2-3 a year from letting a drive be parked half the time certainly doesn't seem like a lot, but it can add up.

Anyway, frequent parking is certainly bad; periodic parking, I think, is possibly actually beneficial.
 
Interesting... I assume the answer to your question is very complex and well beyond my math skills.

Are there many data sets on HDD stop-start Mean Time To Failure? The nature of stop-start being so much more chaotic than continuous spinning says to me that the MTTF fluctuates wildly, giving no reliable numbers. Perhaps if you were very strict about the stop-start cycle you could get reliable numbers, but probably nowhere near as reliable as the stats for continuously spinning HDDs.

The thread was started with a focus on reliability, and continuous spinning is the answer. Though, I am unsure what his ultimate goal was with a focus on reliability... price savings? convenience? longevity? bragging rights? maybe his data center smells awful and he prefers to be there as rarely as possible?
 
Nullity, I don't think continuous spinning is the general consensus on this thread for long-term reliability. Usage patterns matter. And spinning for a few years continuously is not the same thing as being able to depend on the drives to spin up once more if/when they need to be shut down for whatever reason after all that time.
 
The thread was started with a focus on reliability, and continuous spinning is the answer. Though, I am unsure what his ultimate goal was with a focus on reliability... price savings? convenience? longevity?

It was mostly focused on reliability, with a secondary emphasis on availability...

Remember, a drive that never spins up can never fail - until it does. And a drive that won't spin up when you need it, well, that exacts a price of its own perhaps...

Some good points for spinning down - noise, heat, power consumption to some degree, and constant spinning does put some long-term wear on a mechanical drive.

Use cases - some folks use a NAS for backup, others incorporate it into workflows - so storage can be online, near-line, or cold storage...

So a lot of good points here...
 
I guess it depends very heavily. At the very least, in a home or small business setting, unless you have off-hour/overnight workloads, I would guess you are best off with the NAS/drives sleeping during the hours when they aren't likely to be needed. If business hours are 8am-6pm and there is no use after hours, have the NAS sleep during the off hours. If at home you pretty reliably don't use it from midnight to 8am, sleep it during those hours. Most NAS units are pretty easy to wake up on the rare occasions you might need them during the regularly scheduled sleep times.

I know with my server, beyond having WOL configured - with a desktop icon for a batch program to wake it from my desktop, laptop, and tablet, and a quick iOS app to do the same from my phone - it is at most a 10-second walk into my basement storage room to hit the physical power button if I need it between 12:45am and 6:45am.
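
For anyone without a ready-made batch file, sending the WOL magic packet is only a few lines - a minimal sketch, with a made-up MAC address you'd replace with your server NIC's:

```
# Wake-on-LAN: a magic packet is 6 x 0xFF followed by the target MAC repeated
# 16 times, sent as a UDP broadcast.
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + payload * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("00:11:22:33:44:55")   # placeholder MAC - substitute your own
```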

In the last several years, I think my server going to sleep has "broken" something maybe 5-6 times: either a download that didn't finish before it went down for the night, or I was starting to watch a movie on my tablet or AppleTV and had to wake the server and resume what I was doing. I've maybe had 15-20 times over the last few years where I've needed to wake the server because I was going to start doing something after it had already gone to sleep (i.e. I want to start watching a movie at 1am because I can't sleep, or the kids are up super early and I want to put a movie on for them at 6am and Netflix isn't going to cut it).
 
Spin-down/sleep for 6 hours a day. That's what made sense to me, rather than frequent start/stop.

I wish the consumer/SOHO NASes' sleepy-time scheduler would postpone the shutdown if a backup is in progress.
 
