
Recommendations for a DIY


I had one years ago in the form of an HP 6305 PC. It runs hot even compared to CPUs of the same generation and age. It jumps to 4.2GHz at the slightest load, straining the motherboard's power VRMs. When you build a DIY NAS, you want it to be as reliable as possible and with minimum maintenance. Maintenance equals downtime. This old hardware with a relatively rare chipset is not the best choice in the long run.
 
You are aware that it's possible to underclock a CPU as well, right?
So it's possible to lock it down to a slower peak speed, which would prevent what you're describing from happening.
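For what it's worth, on a Linux-based NAS you can usually cap the peak clock from the OS without BIOS support. A rough sketch using the generic cpufreq sysfs interface, assuming the platform exposes cpufreq at all (the 2.5GHz cap is just an example value):

Code:
# cap the maximum frequency of every core to ~2.5GHz (value is in kHz)
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq; do
    echo 2500000 | sudo tee "$f"
done
# confirm the new ceiling on core 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq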
 

Yes, the MB in question can do that; it wasn't possible on the business-line HP product. That will solve the heat/power consumption issue. In addition to underclocking/undervolting, I would use an Intel NIC instead of the built-in Realtek. Future NAS software updates remain a concern: I don't know if the developers will keep supporting this niche product. It wasn't very popular when it was new 10 years ago.

My home storage solution is a mini PC running Windows 10 with standard NTFS drives. The drives are readable on every Windows PC - no special file systems. Our work PCs do incremental hourly backups to one 4TB drive. The backups are copied to another 8TB drive - the media storage drive - once a day, and that drive is mirrored to another 8TB drive once a day. This way I have 4 copies of what is important and 2 copies of what is easier to restore - the movie/music collection. This simple approach is hardware independent and the drives are easily upgradeable one by one. I replace/upgrade the storage drives in pairs about every 4 years. They run 24/7 with no spin-down. The read/write performance is >100MB/s - good enough for home use.
 

Huh? OMV runs on top of Debian, so as long as Debian supports the hardware platform, it'll work.
A lot of older hardware is supported by Linux, so I doubt it's even an issue.

Well, SnapRAID can run on Windows, so that's also an option. That said, it's likely that Windows 10 will stop getting updates long before Debian does. I have a NAS because I don't want spinning rust in my PC.
 
Hope for the best. My simple solution is hardware independent and can be modified at any time with no data loss.
And so is mine; I can just remove the drives and plug them into a new Linux machine.
SnapRAID isn't traditional RAID, as all the drives are independent. I also back up everything on an external USB drive that's NTFS formatted.

What I use is a dedicated PC, the same as @aNASforXmas intends. HDDs are still the better per-MB storage solution.
Well, I have a mini-ITX build with four hot-swappable drive trays, so in that sense it's a dedicated PC as well, just purpose-built.
A 10Gbps connection makes drive access as fast as having the drives connected locally to my desktop.
 
And it also requires ECC memory, which doesn't work with modern consumer CPUs.
SnapRAID also prevents bit rot, without the need of expensive hardware.
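For anyone curious how SnapRAID does that in practice: it stores parity plus per-file checksums, and a periodic scrub re-reads the data against them. A minimal routine, assuming SnapRAID is already configured for your drives, might be:

Code:
# update parity and checksums after files have changed
snapraid sync
# re-read a portion of the array and verify it against the stored checksums
snapraid scrub
# show any blocks that were flagged as bad
snapraid status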
Ryzen CPUs aren't modern consumer CPUs?
 
Sure, but as for using ECC memory with a Ryzen CPU: not all motherboards support ECC memory, and even if a board maker claims to support it, that doesn't mean the ECC function itself is supported. Everyone has a disclaimer along the lines of "ECC memory (ECC mode) support varies by CPU" and then asks you to check the QVL, which in most cases doesn't list any ECC memory.
So if you know of an instance where this actually works, you might want to share that information.
 
Well, that depends. Do you mean actually logging a correctable bit error? That I don't have proof of; I'm not sure how to trigger it in a test scenario. I do know that people using Ryzen systems have shown that ECC mode is "on". For example, this output from a UDOO Bolt:

Code:
$ dmesg | grep EDAC
[    0.568499] EDAC MC: Ver: 3.0.0
[   12.064922] EDAC amd64: Node 0: DRAM ECC enabled.
[   12.064924] EDAC amd64: F17h_M10h detected (node 0).
[   12.065034] EDAC MC: UMC0 chip selects:
[   12.065035] EDAC amd64: MC: 0:  8192MB 1:     0MB
[   12.065036] EDAC amd64: MC: 2:     0MB 3:     0MB
[   12.065037] EDAC amd64: MC: 4:     0MB 5:     0MB
[   12.065038] EDAC amd64: MC: 6:     0MB 7:     0MB
[   12.065043] EDAC MC: UMC1 chip selects:
[   12.065044] EDAC amd64: MC: 0:  8192MB 1:     0MB
[   12.065044] EDAC amd64: MC: 2:     0MB 3:     0MB
[   12.065045] EDAC amd64: MC: 4:     0MB 5:     0MB
[   12.065046] EDAC amd64: MC: 6:     0MB 7:     0MB
[   12.065047] EDAC amd64: using x8 syndromes.
[   12.065047] EDAC amd64: MCT channel count: 2
[   12.065159] EDAC MC0: Giving out device to module amd64_edac controller F17h_M10h: DEV 0000:00:18.3 (INTERRUPT)
[   12.065169] EDAC PCI0: Giving out device to module amd64_edac controller EDAC PCI controller: DEV 0000:00:18.0 (POLLED)
[   12.065170] AMD64 EDAC driver v3.5.0
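If you want to go a step beyond "ECC mode is on" and watch for actual corrected errors, the same EDAC subsystem exposes per-controller counters in sysfs; something along these lines, assuming the amd64_edac module is loaded as above:

Code:
# corrected (single-bit) error count for memory controller 0
cat /sys/devices/system/edac/mc/mc0/ce_count
# uncorrected error count
cat /sys/devices/system/edac/mc/mc0/ue_count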
 
The UDOO Bolt uses an industrial, embedded SoC though, not your typical consumer Ryzen CPU.

I checked with someone I know at one of the motherboard makers and apparently only the Pro CPUs, which aren't available directly to consumers, have ECC support. So as I said, no modern consumer CPUs have ECC support.

In addition to this, if the motherboard uses a T-topology layout for the memory traces, ECC isn't supported, as it only works with a daisy-chain layout. This means a lot of boards can't support ECC memory.

Also, see
 
It's only the APUs where the Pro CPUs are required for ECC support.


 


Well, that's what Asus says, but apparently others provide different information. This is somewhat outdated, as it doesn't cover the 5000-series, but as you can see, the 2000-series doesn't support ECC unless you have a Pro-series CPU, unlike what Asus claims. So if two board makers can't agree on which CPUs support ECC or not, how are we as consumers going to be able to work it out?

[attached chart: ECC memory support by Ryzen CPU series]


And again, this doesn't mean the motherboards have ECC support in the UEFI, so even if you install ECC memory, it might do nothing.

Quote from AMD (in other words, no guarantees that it will work):
ECC is not disabled. It works, but not validated for our consumer client platform.

Validated means run it through server/workstation grade testing. For the first Ryzen processors, focused on the prosumer/gaming market, this feature is enabled and working but not validated by AMD. You should not have issues creating a whitebox homelab or NAS with ECC memory enabled.

Yes, if you enable ECC support in the BIOS, so check the MB feature list before you buy.

Anyhow, we're way off topic now.
 
All Ryzen CPUs support ECC, but the motherboard manufacturer has to implement it. That's why different manufacturers have different levels of support. The Asus motherboards can be had for as low as $70 with full ECC support. I don't know about other manufacturers because I don't own their boards.

This isn't off-topic, as it's directly related to the use of ECC RAM in a self-built NAS. If you recall, we came down this particular avenue because there was a question of whether ECC capability was available to the typical consumer on commodity-grade hardware. It is.
 
And they're block-level snapshots? If I snapshot a 100GB VeraCrypt container, then add a 1 kB file to the container and take another snapshot, will the two snapshots only take up about 1 kB of extra space on disk?
I haven't seen anyone address this, so I thought I would jump in. I have a Raspberry Pi "file server" and use LVM for snapshots of an ext4 file system. If you set up LVM with thin provisioning, the snapshots are block-level and work just the way you mention. You can also set up Samba so that Windows clients see the snapshots as file history, which is very handy. I had to write a bunch of scripts to manage my snapshots, so it's very much DIY, but it does work.
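To give a flavour of what those scripts end up doing, each snapshot run is roughly the following (the volume group, LV and path names here are made up):

Code:
# take a thin snapshot of the data LV (thin pool, so no size needs to be reserved)
sudo lvcreate -s -n data_snap vg0/data
# thin snapshots get the activation-skip flag, so activate explicitly
sudo lvchange -ay -K vg0/data_snap
# mount it read-only under a directory using the @GMT naming Samba expects
sudo mount -o ro /dev/vg0/data_snap /srv/snapshots/@GMT-2024.01.01-00.00.00

On the Samba side it's the vfs_shadow_copy2 module ("vfs objects = shadow_copy2" plus the shadow:snapdir and shadow:format settings) that makes those mounts show up as Previous Versions on Windows clients.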
 
I have owned two Netgear NAS units which served me well, and I still own a Synology which is 8+ years old and running perfectly fine, not a glitch, but it is outdated now and cannot cope with Plex or other multimedia apps, let alone 4K streaming, and honestly, replacing it with a new Synology that would support this is quite costly. For the reasons mentioned earlier in this thread, I decided to move to TrueNAS Core (ZFS) some years ago and I am extremely happy with it, also for home use.

Since the initial installation I have changed my hardware configuration several times (two Supermicro mobo upgrades, two CPU upgrades, ECC memory upgrades, a new enclosure, a new backplane, a new HBA), and now the great part: I never had to do a reinstall for a hardware upgrade. Just hook up the SSD boot drive to the new stuff and off you go. As I run my NAS on decent (Supermicro, Intel, LSI) but refurbished hardware, I have never had any issues so far and the upgrades are cheap. The only time I did a reinstall is when I decided that 15 x 2TB was maybe way too much for my needs and the MD1000 SAN was sucking power like crazy, so I decided to reduce to 4 x 2TB. As you may or may not know, ZFS isn't very flexible about changing the number of HDDs in a pool, so I decided to start from scratch. On the other hand, expanding the size of a pool by increasing the disk size of all disks in a zpool is a walk in the park.
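For anyone wondering what that in-place expansion looks like, it is essentially this, one disk at a time (pool and device names below are just placeholders):

Code:
# allow the pool to grow once every device in the vdev has been enlarged
zpool set autoexpand=on tank
# swap a disk for a larger one, wait for the resilver to finish, then repeat for the rest
zpool replace tank sda sdb
zpool status tank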

I am sure that QNAP, Synology and even Netgear make very reliable NAS solutions, and TrueNAS does require a bit more work to set up, but I feel that with this setup I am more future-proof, with the same or better data security. If I decide tomorrow that I want to upgrade the Supermicro mobo and CPU to support newer, more hardware-demanding versions of TrueNAS, Plex or Nextcloud, out goes the old and in goes the new with only a little money to spend, as I would once again be looking for 2nd-hand stuff.
 
Any insights into the question that may have gotten lost in the shuffle above?


I have read a bunch on the ZFS topic. From a data safety and security standpoint, it is my understanding that its ability to monitor, identify and correct errors is unmatched. On the other hand, it is very inflexible with regard to pool setup and storage expansion, which is probably related to the aforementioned advantage. I guess it all depends on what the system demands are in terms of storage space versus data safety and redundancy.
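To make the "monitor, identify and correct" part concrete: it largely comes down to checksums on every block plus regular scrubs, e.g. (the pool name is a placeholder):

Code:
# walk the pool, verify every block against its checksum and repair from redundancy
zpool scrub tank
# report checksum errors and list any files that could not be repaired
zpool status -v tank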
 
Every modern NAS includes snapshots. ZFS isn't recommended by the people in the know.

Don't Use ZFS on Linux: Linus Torvalds - It's FOSS (itsfoss.com)
I understand that, but by promoting QNAP, ZFS is excluded.

And I do not understand why ZFS is not recommended.
I am using a self-built TrueNAS, which is probably the most robust NAS, software-wise.
And it does provide snapshots.
I also have a Synology, but I cannot understand why I should use it when TrueNAS is supreme.

QNAP is also more susceptible to intrusion, so why not recommend Synology if you do not like ZFS?
 
I am not 'promoting' QNAP, other than it is simply the product that I feel gives the best overall value between it and Synology.

Read the posts above to see why you may not want to use ZFS.

QNAP is 'more susceptible', today. That doesn't mean anything else is more secure in the long run.

Use the hardware/firmware that you feel is better. Nobody will judge.

When asked about NAS in general, I suggest QNAP or Synology (only), with the nod to QNAP because it offers more hardware/performance for less cost. In almost every other area, these two top picks are effectively the same.
 
NAS - Network Attached Storage

How you get there is up to you and your budget.

The principle, though, is KISS: when you complicate things, they tend not to work as well.

Buying a system + drives = easy
Building + drives = easy (takes more thought)

Picking the FS, though, outside of buying something preconfigured to use x / y / z, locks you into their system. The drawbacks can be as noted earlier: difficulty expanding, or data integrity.

Most off-the-shelf boxes, though, use Linux as their method of making things work and stay stable. "Features" - or, as I think of them, vulnerabilities - aren't as apparent if you didn't build it or if it isn't open source for inspection. You've seen the news with different vendors and different problems. One in particular that comes to mind is WD and their remote wipe flaw.

If you build it, you know what's going on, and looking at many different how-to pages will give you a clearer picture of security/integrity.
 
