NVME NAS


Tech Junky

Part of the Furniture
So, I've been thinking about this more lately and finally scratched the itch.

I have my DIY setup running as a router / AP / etc. + NAS / HTPC / OTA with 5x 8TB drives in a RAID 10 configuration.

I've been thinking about converting it over to some sort of SSD configuration, though, to be more efficient. Not that I really need the speed most of the time, even though the HDDs push 400MB/s+ as they spin right now. However, dropping the system weight from 15# of drives to less than 1# using M.2s is appealing for being able to move it around more easily. Maybe find a lighter weight case now that I don't need an HDD rack inside for the drives.

Anyway, I spotted an MSI MPG Z790 Edge WiFi DDR4 on Amazon "used" for $193 shipped. Snagged it since it will be more cost-effective than dealing with a high-end PLX card offering full drive speed @ $600. I figure for less than the price of a single drive I can handle a new MOBO. The number of M.2 sockets going up under the Z790 banner makes this easier to do with M.2s. Running through a list of 5x M.2 boards, there's a limited selection, and most disable sockets and don't allow for full use of all of them at the same time. I would have preferred to stick with ASRock but they fall under this issue of disablement. I don't trust the other options that popped up as having the number of sockets I'm looking for. I've used MSI in the past and they were rock solid until I needed something a bit more niche that they didn't offer.

Next up was picking drives. I looked through quite a few options, but the price point I was trying to stick to was $200/drive @ 4TB. Next was speed, since it doesn't make sense to go with a slower drive if you can get a better one for the same price. Some of the drives had some gotchas when looking through reviews, though. One mentioned speed dropped under 1GB/s after filling it up x%. Another was iffy on durability, and several were showing failures prior to a year of use. I narrowed it down to:

Lexar NM790 SSD 4TB / 7400MB/s ($824 shipped)​

and

Kingston NV2 4TB M.2 2280 / 3500MB/s ($779 shipped)​


Kingston had more reviews but half the speed. Lexar is a bit vague on which controller they're using, but some Google-fu brings up a reasonable idea of it being an SM-based option.

I've been using WD primarily for all things data, though, and have tried other options over the years that work but just didn't give that performance feel. Switching from spinning rust to SSD should provide plenty of snappiness even w/o the bulk data hoarding ability for the time being. If I really need the space I could switch from R10 to R5, or cough up the cash for higher capacity options. But 4x 4TB drives for the price of a single 8TB makes more sense. There's always the idiot idea of running all 4 drives in R0 for 16TB of space, but I'm a little gun-shy on the potential failure of a drive. I can't say I've had it happen yet, but then again I swap drives for testing purposes quite often before they really see heavy use degradation.
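For anyone weighing the same tradeoff: with four 4TB drives, usable space works out to roughly 8TB in R10, 12TB in R5, and 16TB in R0. A minimal mdadm sketch of the R10 layout is below; the device names are assumptions, so confirm yours with lsblk first.

Code:
# Minimal sketch: 4-drive NVMe RAID 10 with mdadm
# Device names are assumptions -- verify with lsblk before running
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
sudo mkfs.ext4 /dev/md0
# Persist the array so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u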
 
My $0.02.

I wouldn't trust a fully/only flash NAS yet. Many failure points, with no recovery possible in a lot of scenarios.

The drives you mention specifically, I wouldn't trust at all with important data.

What has worked flawlessly in many QNAP setups is an SSD for caching the 6x or 8x 14TB HDDs. Noticeable speed increase particularly for smaller files (less than 1 MB) and all original setups still working (~3 years later).

The HDDs are WD Red Plus and the SSD is exclusively Samsung 980 Pro 1TB and 2TB models.

I don't know if you can do a similar thing on your DIY setup. But if you can, that is definitely the way to go.
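If the DIY box is on Linux, the closest equivalent is probably lvmcache (or bcache), assuming the HDDs already sit in an LVM volume group. A rough sketch, where the VG/LV names and the SSD device are hypothetical:

Code:
# Rough lvmcache sketch -- VG "nas", LV "data", and /dev/nvme0n1 are
# hypothetical names; adjust to your own layout
sudo vgextend nas /dev/nvme0n1                  # add the SSD to the VG
sudo lvcreate -L 900G -n cache0 nas /dev/nvme0n1
sudo lvconvert --type cache --cachevol cache0 nas/data
# Writethrough caching is the safer default; writeback is faster but riskier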

NAS weight isn't usually a factor. Even so, ~15 lbs. isn't really heavy either.
 
NAS weight isn't usually a factor. Even so, ~15 lbs. isn't really heavy either.
When it's over half of the weight in a complete build, it's significant. With the drives at 15# and the case at 22#, the complete system comes in at about a bag of cement. All flash has some worries, but I'm taking that into account.

I'm looking for slimmed-down cases and found a couple that trim the weight by 5+ pounds. Adding in a larger HDD as a backup to the raid might be an option to look into. Obviously, testing during the return period is in order to make sure the setup is stable. I can throw a new Exos 18TB in for the backup destination for ~$200.

This setup has been an evolving organism for the past several years as tech changes. It's already lived in several cases with different mobos and functions.

Figure taking off 20# is a decent start to minimizing things a bit and being more efficient.
 
I guess you don't need the capacity then. I haven't run into a situation where a NAS' weight (DIY or pre-built) was ever a consideration - even if it was ~40 lbs or more (although I agree, 3.5" HDDs do weigh a lot).

I'd be buying 3x Exos (RAID5) and a single SSD. Or 2 SSDs (RAID1) and 2x or more Exos HDDs (RAID1, 5, or 10, depending on how many drives).

The speed will be effectively as fast as the all-flash setup (after everything is copied/configured, of course) and your total storage (and reliability/consistency) will be vastly improved (at least, when writing files with (total) sizes smaller than the SSD's cache size).

However, you haven't stated if the DIY method will allow for those kinds of setups. And you're really stuck on a tangential aspect of a NAS that I haven't thought much about before either (lbs).

Looking forward to what you do here, and to any answers you have to the questions I've raised in this post too.
 
Speed is one consideration, but not the end-all here. I want to streamline things a bit and condense it into something a bit different. Capacity isn't the goal, and purging old files doesn't hurt too much either. Hoarding is easy to do when you have tons of TBs to play with. I use the system for quite a bit more than just data, though, as the primary role is as a router and aggregating things through it. When I started this journey it was primarily to collapse several devices into one.

I collapsed 5-6 devices and functions into a single box, using Linux to perform all of the functions from the simple router aspect to rolling in HTPC duties alongside that. I was running an AP off the box using a wifi card and using quad-port NICs for the LAN / WAN, bonding interfaces for higher than single-port BW on the WAN side to get that extra over-provisioned BW (a sketch of the bonding approach is below). It started out simple with an NVME / HDD and an 8700K CPU years ago. It's been rebuilt / repurposed several times since that initial build, expanding in functions and tinkering with different tech. It was a mining machine at one point as well.
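For reference, the bonding side can be done with plain iproute2. A minimal sketch, with placeholder interface names and balance-rr as just one mode choice (a persistent config would live in netplan or systemd-networkd instead):

Code:
# Illustrative bond setup with iproute2 -- interface names are placeholders
sudo ip link add bo0 type bond mode balance-rr
sudo ip link set eth0 down; sudo ip link set eth0 master bo0
sudo ip link set eth1 down; sudo ip link set eth1 master bo0
sudo ip link set bo0 up
# Confirm both slaves joined
cat /proc/net/bonding/bo0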

As for data / NAS, though, I've been running RAID 10 on it with the original disks for many years w/o issue or need for replacement. I have backups of the important stuff across different local drives in the laptop and in different enclosures, as well as USB drives. Drives aren't an issue for storing copies of stuff and hoarding. That's well beyond where it should be, and it's probably time to sync them and ditch what's not being used and just lying around.

I could go with the multi-drive / hybrid tiering option, but that's not what I want to accomplish with this rebuild. It's not really a worry at this point of losing key data that's of high value. It's more about being able to do it w/o spending several grand on switching over to AMD, or spending a massive amount on a PLX card because that was the only option w/ Intel prior to Z790 and high socket density M.2 options. Even with the higher density options, the limitations by some companies make it difficult to proceed, due to limiting the configuration of the sockets so that one gets disabled for another to work. Lack of lanes and not putting the additional socket behind the DMI looks to be the prevailing reason for some companies' designs. Now, if I were looking to run a Gen5 drive and have 128Gb/s available, those might make sense. Or if I wanted to boot from the raid instead of isolating it from the OS drive, it would work as well. Currently I run an OS drive, and it rsyncs to the raid every 6 hours to capture changes w/o having to try to recall what was changed when copying data back to a replacement drive.
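A cron entry along those lines might look like the sketch below. The paths and exclude list are hypothetical, and --delete mirrors removals, so a --dry-run pass first is worth it:

Code:
# /etc/cron.d/os-sync -- hypothetical paths; test with rsync --dry-run first
0 */6 * * * root rsync -aAXH --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt/raid / /mnt/raid/os-backup/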

There are many different ways to approach data, as you well know. My approach isn't your approach or someone else's. There are 1000 ways to do things, and that's what keeps it interesting. Having the flexibility to do it your way offers a sense of ownership when you're done. For the weight / portability aspect, I'm planning on moving around a bit in the future, and lugging around a 50# tower isn't appealing. I could use my laptop and ditch the whole setup, but I like having my setup when possible, as it works better than just relying on someone else's idea of secure networking with the bare minimum and shoddy firmware. Ideally there would be a way to cram all of this into an ITX format that's even more portable, but due to constraints in current tech and the expense associated with higher density flash, it's just a bit more than I'm willing to put into the idea. If the expense made sense and there was a need, I would just build out a whole rack and make my own DC of things to play with.
 
Been poking around before hitting the buy button for a case and stumbled across something different.

https://www.ceptagon.com/home - $100 on Amazon - weighs in @ 12# and takes a totally different approach to the I/O being placed on top / front to make things easier to access.

 
Yes, 1000 ways to skin a cat, and I appreciate the additional info you've provided.

I still don't see the need for any of this though, even with the additional info.

That case on Amazon isn't what I would consider worth buying either. I've done many 'one-off' case solutions and always regretted it eventually (even if it wasn't regret, it was always a sigh of relief to get back to a known/working quantity). Standards are standards for a reason.

Keep us posted on what you eventually commit to. I'd love to see how it improved your current setup.
 
how it improved your current setup.
Well, the obvious changes are...
5*HDD >> NVME = losing 15#
Case - losing another 9#

More drive speed, but that's not really the goal at this point due to the network speed limits of today's tech. Even upgrading the network wouldn't really push the drives hard enough to hit thermals. As a local workstation, though, it might become a benefit if I convert it to a different function later.

The case has options for adding a backup volume to the mix or stuffing it with additional gear in the open voids.

This isn't going to be your typical use case, or probably even for those with a GPU inside the case, due to the confined area / lower airflow options w/ only 4 fan ports. It's just going to be nice not being forced to use two hands to move the thing, and to have significant structure to rest it on to operate inside to make changes. Switching to high M.2 density restricts my options for PCI-based toys to play with, since there are 2 x16 / 1 x1 slots, and if I port over my NIC and pick up a TB4 AIC that only leaves the x1 slot for misc devices. I'm kind of putting myself into a corner on making future changes, which is the idea, since I've rebuilt / updated / revised my setup several times more often than your average user or enthusiast.

I don't foresee any further need to change this up for a while moving forward. Tech doesn't appear to be changing w/o switching platforms completely, either to AMD or SMB/enterprise options. This is just a foundational change to M.2-based density, w/ the option to do the same on HDDs as well w/ the same board. I did briefly consider switching to DDR5 w/ the same board option, but I don't see the need right now to spend another $100-$150 only to have the same performance.

The next platform I'm keeping an eye on, though, is ARL, as it's a significant uplift from ADL/RPL to a further reduced node size and a chiplet structure on the die.
 
Losing 24 lbs. doesn't seem worth it for the small benefits on offer.

Platform changes are generally more important with 24/7 used desktops (run at or near 100% for a significant portion of each day), and even more important with PC laptops because (older) mobile devices are never fast enough, efficient enough, or light enough.

I guess I still don't understand your obsession with weight reduction for a system that is nominally tethered to a network for weeks/months/years at a time.
 
Well, the case showed up and went back about as soon as it showed up. It turned out to be flimsy, with no real way to attach the cover for the "front". It had some quick-connect snap-in stubby pegs, and it just didn't cut the mustard for what I was looking for. So, you were right.

@L&LD

Trying to decide on which one to go with again. I have a short list from before of a few FD models and a couple of others.

GD09B-C - https://www.amazon.com/dp/B08DTHNPPD/?tag=snbforums-20

Fractal Design Pop Air - https://www.amazon.com/dp/B08SRBTYGK/?tag=snbforums-20

Fractal Design Meshify C - https://www.amazon.com/dp/B079FPKPN3/?tag=snbforums-20

Fractal Design Torrent - https://www.amazon.com/dp/B08KT8DGQL/?tag=snbforums-20

Fractal Design Meshify 2 Compact - https://www.amazon.com/dp/B0822W433F/?tag=snbforums-20

 

Fractal Design Torrent​


Went with this one for the front/bottom fans instead of all 4 sides in the Mesh2. One less surface to dust / clean. Intrigued by the 180mm fans on the Torrent, though they can be swapped out for 120/140s easily. Having the top panel closed off might make the airflow more directed as well compared to the Mesh2.
 
Sorry to see you have to buy/return before finding something with a good fit, but happy you decided on the one you did buy.

Be sure you test the case in different orientations (laying down on both sides and even upside down) to see which one gives you the best access and lowest temps/fan speeds.
 
I like laying them on their side. I have them in a sideways bookshelf, laying on its side as well. Keeps the dust away, since they're three or so feet off the floor on the shelf. Since the shelves are random cube sizes, it's also compartmentalized from other devices, with adequate clearance on all sides for airflow. The bookshelf was needed for supporting a flat screen and sound bar before switching things up to a projector setup. I've been testing UST projectors, looking for the right balance of brightness and crispness of the image. I've taken a liking to the Hisense PX1 series and am waiting for them to release the PX2, which adds 200 lumens to make it more daylight friendly. It's released overseas already and should drop soon for US sales.
 
Sorry to see you have to buy/return before finding something
Happened with the initial MSI MOBO as well. Bad RAM slots and a busted WIFI card.

Anyway...
Torrent Compact
Aero G
DDR5 upgrade
NM790 drives X 4

Switching mobos, though, enabled keeping the TB4 card instead of having to buy yet another piece of gear.

Took a bit to copy over the data from the spinners even w/ 350MB/s avg speed during the transfer.

The AG board needed a UEFI update while getting things booting the first couple of times. It did some funny stuff because of the changes in HW / adding back in the TB4 card and so on. The first boot was delayed because the NIC names changed due to the PCIE counts changing, but it eventually booted and I got that fixed. Then it rebooted and acted funny probing the primary drive and the MD/RAID. Then it booted up and the primary drive number had changed for some reason, which caused the delay and the raid not coming up. Rebooted again and everything came up 100%.
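One way to stop renames like that is to pin each NIC's name to its MAC with a systemd .link file, so PCIe re-enumeration can't shuffle them. The MAC and name below are placeholders:

Code:
# Pin a NIC name by MAC (placeholder values -- take yours from "ip link")
sudo tee /etc/systemd/network/10-wan0.link >/dev/null <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=wan0
EOF
# Takes effect at the next boot (or after udevadm trigger) for that NIC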

Even though the OS SN850 hasn't changed across the 3 MOBOs, it does seem to boot faster on Z790 vs Z690 for some reason. Really, there weren't many issues moving from one board to another, other than the physical issue of the MSI board not allowing use of both sticks of RAM. Oh, and the quick-release tabs for the drives didn't let them sit 100% flat for the heat sinks to lock in 100%. The GB board has them as well, but they fit just fine. I was starting to wonder if I should just get some standoffs / screws, but didn't need to. With the MSI board I had a couple of 140s on the bottom for more air, but with the TB cards in there now it's a tighter fit across the bottom and would need a "slim" fan to wedge between the card / mounting rail.

The new case seems to have dropped the temps a little bit, but it might more likely be the spinners not being in there. Or the more streamlined airflow w/ the lack of top venting.

Got into it w/ Amazon over the Torrent case, though, because they sold it "NEW" and shipped a return w/ the return slips inside. They issued a full refund and didn't want it back, so that's a perk of this update project. They missed delivery on the mobo, so I got another $10 off toward something else. Either way, rolling the dice on some items pays off nicely. Kept the total cost under $1K w/ the case credit, even with the $50 bump in MOBO/RAM.

Working w/ the GB board seemed to be a bit quicker than MSI. For one thing, they didn't hide the WIFI M.2 under the heat shield; it's just on the board in a convenient place, and an NVME stacks on top of it in the same spot. Between the teardown / rebuild / rebuild it took about 8 hours, not including any of the copying of data from the spinners to the new NVME setup. While waiting, I found some old sludge data left over from some other projects, and clearing it reclaimed some space. Still need to go digging through the backup and see what else needs to get wiped that's a duplicate of something else.

It might take a bit of time to see if there's a performance boost, but benchmarks should be impressive @ roughly 15GB/s w/ the RAID setup.
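For a quick sanity check of that number, a sequential-read fio pass against the array would be something like this (the mount point and sizes are assumptions):

Code:
# Rough sequential-read benchmark of the md array -- mount point assumed
sudo fio --name=seqread --filename=/mnt/raid/fio.test --size=16G \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1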
 
I guess an output showing the final config might be of value as well.

Code:
lspci
00:00.0 Host bridge: Intel Corporation 12th Gen Core Processor Host Bridge/DRAM Registers (rev 02)
00:01.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 (rev 02)
00:01.1 PCI bridge: Intel Corporation Device 462d (rev 02)
00:02.0 VGA compatible controller: Intel Corporation AlderLake-S GT1 (rev 0c)
00:06.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x4 Controller #0 (rev 02)
00:14.0 USB controller: Intel Corporation Device 7a60 (rev 11)
00:14.2 RAM memory: Intel Corporation Device 7a27 (rev 11)
00:14.3 Network controller: Intel Corporation Device 7a70 (rev 11)
00:15.0 Serial bus controller: Intel Corporation Device 7a4c (rev 11)
00:15.1 Serial bus controller: Intel Corporation Device 7a4d (rev 11)
00:15.2 Serial bus controller: Intel Corporation Device 7a4e (rev 11)
00:15.3 Serial bus controller: Intel Corporation Device 7a4f (rev 11)
00:16.0 Communication controller: Intel Corporation Device 7a68 (rev 11)
00:17.0 SATA controller: Intel Corporation Device 7a62 (rev 11)
00:19.0 Serial bus controller: Intel Corporation Device 7a7c (rev 11)
00:19.1 Serial bus controller: Intel Corporation Device 7a7d (rev 11)
00:1a.0 PCI bridge: Intel Corporation Device 7a48 (rev 11)
00:1b.0 PCI bridge: Intel Corporation Device 7a40 (rev 11)
00:1b.4 PCI bridge: Intel Corporation Device 7a44 (rev 11)
00:1c.0 PCI bridge: Intel Corporation Device 7a38 (rev 11)
00:1c.2 PCI bridge: Intel Corporation Device 7a3a (rev 11)
00:1c.4 PCI bridge: Intel Corporation Device 7a3c (rev 11)
00:1d.0 PCI bridge: Intel Corporation Device 7a30 (rev 11)
00:1d.4 PCI bridge: Intel Corporation Device 7a34 (rev 11)
00:1f.0 ISA bridge: Intel Corporation Device 7a04 (rev 11)
00:1f.3 Audio device: Intel Corporation Device 7a50 (rev 11)
00:1f.4 SMBus: Intel Corporation Device 7a23 (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Device 7a24 (rev 11)
02:00.0 Non-Volatile memory controller: Sandisk Corp WD PC SN810 / Black SN850 NVMe SSD (rev 01)
03:00.0 Non-Volatile memory controller: Shenzhen Longsys Electronics Co., Ltd. Device 1602 (rev 01)
04:00.0 Non-Volatile memory controller: Shenzhen Longsys Electronics Co., Ltd. Device 1602 (rev 01)
06:00.0 Non-Volatile memory controller: Shenzhen Longsys Electronics Co., Ltd. Device 1602 (rev 01)
08:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
09:00.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:00.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:02.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:03.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:08.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:0a.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0a:0b.0 PCI bridge: ASMedia Technology Inc. Device 2812 (rev 01)
0c:00.0 Ethernet controller: Aquantia Corp. AQC111 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
0d:00.0 Ethernet controller: Aquantia Corp. AQC111 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
0f:00.0 Ethernet controller: Aquantia Corp. AQC111 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
10:00.0 Ethernet controller: Aquantia Corp. AQC111 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
11:00.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
12:00.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
12:01.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
12:02.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
12:03.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
13:00.0 USB controller: Intel Corporation Thunderbolt 4 NHI [Maple Ridge 4C 2020]
4a:00.0 USB controller: Intel Corporation Thunderbolt 4 USB Controller [Maple Ridge 4C 2020]
81:00.0 Non-Volatile memory controller: Shenzhen Longsys Electronics Co., Ltd. Device 1602 (rev 01)


Code:
sudo inxi -F
System:
  Host: server Kernel: 6.5.0-060500rc6-generic arch: x86_64 bits: 64 Console: pty pts/15
    Distro: Ubuntu 23.04 (Lunar Lobster)
Machine:
  Type: Desktop System: Gigabyte product: Z790 AERO G v: -CF serial: N/A
  Mobo: Gigabyte model: Z790 AERO G v: x.x serial: N/A UEFI: American Megatrends LLC. v: F7
    date: 07/24/2023
CPU:
  Info: 12-core (8-mt/4-st) model: 12th Gen Intel Core i7-12700K bits: 64 type: MST AMCP cache:
    L2: 12 MiB
  Speed (MHz): avg: 2340 min/max: 800/4900:5000:3800 cores: 1: 800 2: 3600 3: 3600 4: 800 5: 800
    6: 3600 7: 3600 8: 3600 9: 800 10: 800 11: 800 12: 3600 13: 800 14: 3600 15: 800 16: 3600
    17: 800 18: 3600 19: 3600 20: 3600
Graphics:
  Device-1: Intel AlderLake-S GT1 driver: i915 v: kernel
  Display: server: X.org v: 1.21.1.7 with: Xwayland v: 22.1.8 driver: X: loaded: modesetting
    unloaded: fbdev,vesa dri: iris gpu: i915 tty: 177x59 resolution: 3840x2160
  API: OpenGL Message: GL data unavailable in console for root.
Audio:
  Device-1: Intel driver: snd_hda_intel
  Sound API: ALSA v: k6.5.0-060500rc6-generic running: yes
Network:
  Device-1: Intel driver: iwlwifi
  IF: wlo1 state: down mac: 60:a5:e2:e8:25:33
  Device-2: Intel Ethernet I225-V driver: igc
  IF: enp8s0 state: down mac: 74:56:3c:47:d8:60
  Device-3: Aquantia AQC111 NBase-T/IEEE 802.3bz Ethernet [AQtion] driver: atlantic
  IF: enp12s0 state: up speed: 100 Mbps duplex: full mac: 24:5e:be:4d:c4:53
  Device-4: Aquantia AQC111 NBase-T/IEEE 802.3bz Ethernet [AQtion] driver: atlantic
  IF: enp13s0 state: up speed: 2500 Mbps duplex: full mac: 24:5e:be:4d:c4:54
  Device-5: Aquantia AQC111 NBase-T/IEEE 802.3bz Ethernet [AQtion] driver: atlantic
  IF: enp15s0 state: down mac: be:09:a4:c4:49:a1
  Device-6: Aquantia AQC111 NBase-T/IEEE 802.3bz Ethernet [AQtion] driver: atlantic
  IF: enp16s0 state: up speed: 1000 Mbps duplex: full mac: be:09:a4:c4:49:a1
  IF-ID-1: bo0 state: up speed: 1000 Mbps duplex: full mac: be:09:a4:c4:49:a1
  IF-ID-2: bonding_masters state: N/A speed: N/A duplex: N/A mac: N/A
  IF-ID-3: br0 state: up speed: 2500 Mbps duplex: unknown mac: d2:9c:a6:15:1d:b0
  IF-ID-4: nordlynx state: unknown speed: N/A duplex: N/A mac: N/A
Bluetooth:
  Device-1: Intel type: USB driver: btusb
  Report: hciconfig ID: hci0 state: up address: 60:A5:E2:E8:25:37
RAID:
  Device-1: md0 type: mdraid level: raid-10 status: active size: 7.45 TiB report: 4/4 UUUU
  Components: Online: 0: nvme2n1 1: nvme4n1 2: nvme1n1 3: nvme3n1
Drives:
  Local Storage: total: raw: 15.81 TiB usable: 8.36 TiB used: 6.14 TiB (73.4%)
  ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS100T1X0E-00AFY0 size: 931.51 GiB
  ID-2: /dev/nvme1n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
  ID-3: /dev/nvme2n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
  ID-4: /dev/nvme3n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
  ID-5: /dev/nvme4n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
Partition:
  ID-1: / size: 915.32 GiB used: 59.14 GiB (6.5%) fs: ext4 dev: /dev/nvme0n1p2
  ID-2: /boot/efi size: 47.5 MiB used: 6 MiB (12.7%) fs: vfat dev: /dev/nvme0n1p1
Swap:
  Alert: No swap data was found.
Sensors:
  System Temperatures: cpu: 25.0 C mobo: N/A
  Fan Speeds (RPM): N/A
Info:
  Processes: 443 Uptime: 18h 30m Memory: 15.37 GiB used: 2.13 GiB (13.9%) Init: systemd
  target: graphical (5) Shell: Sudo inxi: 3.3.25
 
Kind of off-topic-esque, but what about going all M.2? As in:
Asustor Flashstor 12 Pro FS6712X with 12 M.2 SSD slots, plus a 10GbE port, running 6x 2TB WD Blue SN570s.
It can run with 24GB DDR RAM max. I tested 64GB and 32GB with no joy, but 16+8 (or 2x 8GB) runs fine. A 2nd 10GbE port would be perfect tbh, and 16GB imho as standard. It's smaller than a PS5 and weighs a lot less than an AS6604T fully populated with 4x 16TB IronWolf Pros (in RAID 10) and 2x 2TB WD Blue SN570s as R/W cache NVMEs.
 
@majika

I went in a different direction and switched to U.x drives instead. Slapped a 16TB in for $1,000; it gets 6.5GB/s and runs cooler. I also switched away from Intel to an AMD 7900X because it allows bifurcation when I'm ready to expand.
 
