Commercial or DIY NAS

It's an interesting contradiction between minimising the initial build cost and a total build cost of potentially £2,500 (at UK prices) in disks alone over the years (not counting replacements) for easier access to Blu-ray content (you mentioned being less concerned about recovery because the BDs act as a backup).

I do hope that disks will get a little cheaper over time, but the total cost is the reason I haven't fully committed to this already. I can't justify spending well over €3,000 for convenience of use, but I can justify the cost of buying a NAS and disks that store data I already have. If adding space later is just the cost of a disk or two, it's easier to accept, and you can pretty much deceive yourself about the total cost of the system: buying a NAS and eight disks costs more than your monthly budget, but you barely notice the price of one disk.
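That "deceive yourself" framing can be made concrete with a quick sketch. The chassis and per-disk prices below are hypothetical round numbers, not quotes:

```python
# Up-front vs. incremental purchase of the same system.
# nas_chassis and disk_price are assumed illustrative figures, not real prices.
nas_chassis = 600          # assumed price of an 8-bay NAS
disk_price = 300           # assumed price of one large disk
disks_needed = 8

upfront = nas_chassis + disks_needed * disk_price  # one painful invoice
start_small = nas_chassis + 2 * disk_price         # NAS + two disks to begin
later_per_disk = disk_price                        # barely noticed, one at a time

print(upfront)      # 3000 - the "well over 3000" that is hard to commit to
print(start_small)  # 1200 - much easier to accept up front
```

The total spent is identical either way; only the size of each individual purchase changes.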

I get not wanting to redo the storage build, but given that a trigger for this work is the slowing VMs, I think focusing on big storage (when by your own admission you're not sure you'll get around to the work) is the wrong way around.

The VMs will not run from the same disk space in any scenario. They will have a dedicated mirror set on the same computer that runs them. At this point I don't see a need for even five VMs, but I'll try to find a couple of cheap 1TB SSDs for them, which should stay well over half empty.

A mirror is faster than the parity methods because no parity calculations take place, and the extra speed comes from the stripe within the mirror. Four 5400rpm drives in RAID 10 are still north of 400MB/s, which exceeds your network speed by roughly 4x if you're running 1GbE.
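As a rough sanity check of those numbers (a sketch; the ~100MB/s per-drive sequential rate for a 5400rpm disk is an assumed ballpark figure, not a measurement):

```python
# Back-of-envelope RAID 10 read throughput for 4 disks.
per_drive_mb_s = 100   # assumed sequential rate of one 5400rpm drive

# RAID 10 on 4 disks = a stripe across 2 mirrored pairs; for sequential
# reads both halves of each mirror can serve data, so in the best case
# all 4 spindles contribute.
raid10_read_mb_s = 4 * per_drive_mb_s

# 1GbE is 1 Gbit/s = 125 MB/s before protocol overhead.
gbe_mb_s = 1000 / 8

print(raid10_read_mb_s)             # 400 MB/s, matching the post's figure
print(raid10_read_mb_s / gbe_mb_s)  # ~3.2x raw wire speed; ~4x after overhead
```

In other words, even slow disks in RAID 10 saturate a gigabit link several times over, so the array is not the bottleneck for network file serving.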

I don't work with big files over the network, I only store them there. And when I copy a large amount of data to the NAS I don't need to watch it progress, so it doesn't matter whether it takes an hour or two. So while speed is nice, it's not essential.

Uplifting the core of the system leaves everything else intact and gives you more CPU power for the VMs. Splitting the storage from the compute also leaves things intact; you just move the VMs to the new system.

While I don't think it matters now, it may well be a valid point at the end of the system's life cycle and something I will consider.

If you're running a 4-disk RAID 6 array, extending it to 5 disks and beyond is easy.

Can you give an example of a (non-commercial) system where such easy expansion is possible? If, for example, it's a Linux server, what file system does it require?

I have to say that at this point it seems the possibility of storing Blu-rays brings too big an initial cost compared with the benefits. The most likely scenario for this to happen would be finding a server similar to the one I have now that can house at least eight 3.5" disks, but it would still require a SATA controller; SAS disks cost too much more. I would start with four 16TB disks (which seem to cost the same as 10TB disks) in RAID 6, which would be in the (far corner of the) same ballpark as a NAS plus two disks.
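For reference, the usable space of that starting array, and of the chassis once full, works out like this (standard RAID 6 arithmetic; the 8-disk endpoint is the chassis size mentioned above):

```python
# RAID 6 usable capacity: two disks' worth of space go to parity,
# so usable = (n - 2) * disk_size.
def raid6_usable_tb(disks: int, size_tb: int) -> int:
    assert disks >= 4, "RAID 6 needs at least 4 disks"
    return (disks - 2) * size_tb

print(raid6_usable_tb(4, 16))  # 32 TB usable from four 16TB disks
print(raid6_usable_tb(8, 16))  # 96 TB with the chassis full - near the ~100TB mark
```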
Can you give an example of a (non-commercial) system where such easy expansion is possible? If, for example, it's a Linux server, what file system does it require?

It depends a bit on your setup. In my case I'm running a DIY NAS: ext4 on LVM on RAID5. (A bit of extra complexity, but it can be helpful if you want to swap the disks out for bigger ones in the future, say going from 10TB disks to 20TB disks.)

High-level steps:
1. Plug in the new disk.
2. Run two mdadm commands: the first (mdadm --add) attaches the new disk to the array, and the second (mdadm --grow) makes it an active member and redistributes/copies data onto it.
3. Sit and wait for the resync. (Watching is optional, but that last command is the part that puts the most stress on the disks.)
4. Run the LVM commands (pvresize/lvextend, depending on your setup) to make the new space available. This takes about as long as it takes to type the commands.
5. Extend the filesystem with resize2fs.
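Spelled out, the steps above look roughly like this. The device and volume names (/dev/md0, /dev/sde1, vg0/data) are placeholders for my setup; substitute your own:

```shell
# 1-2. Attach the new disk, then grow the array from 4 to 5 active members.
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=5

# 3. Watch the reshape/resync progress (this is the slow, disk-stressing part).
cat /proc/mdstat

# 4. Tell LVM the physical volume grew, then extend the logical volume
#    into the new free space.
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/data

# 5. Grow the ext4 filesystem online to fill the logical volume.
resize2fs /dev/vg0/data
```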

That said, while I've given you an example of how simple the commands are, I wouldn't take advice from me on overall storage architecture; let's just say there are a couple of parts of my config that have added complexity for no real-world benefit in the long run...
a DIY NAS: ext4 on LVM on RAID5.

The important thing for me is that it's Linux. While I do have a Linux VM running, I won't make it the OS for my server; that will be either Windows or VMware. And since I don't trust Windows enough to build and expand RAID with it, the number of disk slots in a future server will be irrelevant unless somebody confirms that the expansion can be done under VMware. It also tells me that if I end up going the DIY NAS route, I need to use Linux instead of, for example, TrueNAS. What would the memory and processor specs be if it may scale to around 100TB? I would suspect not that much if it does nothing other than serve a (1Gb) network share.

And that Linux server reminds me of a more general question. The Linux machine has two network cards, one on the internal network and the other on the IoT network. I've mapped the music share from the NAS as read-only on the Linux box, set up a DLNA server on it, and share music to the smart speakers on the IoT network through it. As this will expand to video for streaming devices (I currently use a PC connected to the internal network when I need to watch something), I'd like to know if anyone has other suggestions for sharing part of the NAS securely to the IoT network, other than through a Linux proxy, so to speak.
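For what it's worth, the proxy approach you describe can at least be tightened up on the Samba side. A sketch of an smb.conf share on the Linux box; the path, share name, and IoT subnet are assumptions for illustration:

```
# Hypothetical smb.conf fragment on the Linux proxy box.
# Exposes the media tree read-only, and only to the assumed IoT subnet
# (192.168.50.0/24) on the second NIC.
[media]
    path = /mnt/nas/media
    read only = yes
    guest ok = yes
    browseable = yes
    hosts allow = 192.168.50.
    hosts deny = ALL
```

Since the share is already mounted read-only from the NAS, a compromised IoT device behind this can at worst read the media, not touch the rest of the NAS.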
