tranceaddict84
Occasional Visitor
Hi folks. I have a DS415play with 4x 6TB WD Red drives in RAID 5. A couple of weeks ago my A/C unit tripped my circuit breaker, and when I flipped the breaker back on my NAS started beeping. The volume was shown as degraded, with disk 2 not initialised. I reinitialised the disk, ran a quick SMART test (which came back fine), and rebuilt the RAID.
This week I had my A/C replaced and I shut down the NAS cleanly so the installers could cut the power. Again when I brought the NAS back up the volume was degraded and disk 2 was uninitialised.
I had something similar a while back when I updated DSM, and checking back through my previous posts on the Synology forum I see it was the same disk in that instance, too:
https://forum.synology.com/enu/viewtopi ... 64#p419564
I have replaced the disk as a precaution and my RAID is rebuilding again. However, I have put the old disk in my PC and run a WD Data Lifeguard (DLGDIAG) extended test, which also came back fine. When the rebuild is done I will cleanly shut down the NAS, unplug it, and leave it for 10 minutes before bringing it back up. I will repeat this process several times until I'm satisfied the volume stays healthy across a clean shutdown / startup.
Has anyone seen anything similar before? Is the Synology just far more sensitive to something that even an extended SMART test and full surface scan doesn't detect? I am concerned this is a false positive or a firmware bug causing a perfectly healthy drive to drop out of the array, and there's nothing to say it won't recur with a new drive. I can't very well do a 48-hour RAID rebuild every time I power-cycle my NAS.
I would be greatly reassured if I could confirm the disk is indeed faulty and RMA it. Can anyone recommend a more thorough HDD test utility that might pick up something DLGDIAG misses? (I used to use MHDD from a DOS boot USB, so I might dig that out later.)
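In the meantime, the sort of thing I'm tempted to try from a Linux live USB is a full destructive badblocks pass followed by a SMART attribute check, since a sustained write workload sometimes shakes out weak sectors that a read-only surface scan misses. Rough sketch of what I have in mind (`surface_test` is just my own wrapper name, and obviously this erases everything on the drive):

```shell
# surface_test DEV -- destructive write-mode surface test, then SMART check.
# DEV is the whole disk, e.g. /dev/sdb. ALL DATA ON IT WILL BE DESTROYED.
surface_test() {
  dev="$1"
  if [ -z "$dev" ]; then
    echo "usage: surface_test /dev/sdX" >&2
    return 1
  fi
  # -w: destructive write test (four patterns), -s: show progress, -v: verbose
  badblocks -wsv "$dev" || return 1
  # After the stress, look for sectors the drive has remapped or flagged
  smartctl -A "$dev" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
}
```

Any non-zero raw values on those three attributes afterwards would be enough for me to RMA the drive with a clear conscience.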
Thanks!