NVME / U.2 vs U.3 / Cheap NVME high capacity


Tech Junky

So, I found something interesting when it comes to higher-capacity NVMe / PCIe drives that has me scratching my head as to why technophiles keep giving their money to M.2-based options when, in some cases, they could get double the capacity for the same price.

It doesn't seem like a difficult proposition depending on how you want to do things. There are PCIe card adapters, plus cages that drop into 5.25" bays in the case.

There are some issues, though, that have me wondering about compatibility between the two, since both use the same SFF-8639 connector. In theory the backplane / controller should dictate which drives work where.

Some cards/cages say they work with both versions while others say they don't.
A U.3 NVMe drive is backward compatible to U.2 drive bays, but a U.2 NVMe drive can't be used in a U.3 drive bay

So, this is similar to the backwards compatibility of, say, a Gen4 drive dropped into a Gen3 slot, which just reduces the drive's bandwidth. However, a U.2 drive won't work in a U.3 position even though the connector is the same. So, if someone wants to take advantage of this storage option, it would be best to get a U.3 card / cage and drive. The price difference is significant depending on capacity, though.

8TB U.2 Gen3 - $400 - https://www.amazon.com/dp/B0C6B2PGMC/?tag=snbforums-20
8TB U.2 Gen4 - $500 - https://www.amazon.com/dp/B0B7Y4TFJR/?tag=snbforums-20
16TB U.2 Gen4 - $1000 - https://www.amazon.com/dp/B0C6BR89C2/?tag=snbforums-20

8TB U.3 Gen4 - $600 - https://www.amazon.com/dp/B0BHCMYH9Z/?tag=snbforums-20
16TB U.3 Gen4 - $1000 - https://www.amazon.com/dp/B0BHGSXB4T

U.2 PCIE adapters
$21 - https://www.amazon.com/dp/B099185SSV/?tag=snbforums-20
$40 - https://www.amazon.com/dp/B01D2PXUAQ/?tag=snbforums-20

U.3 PCIE adapters
$15 - https://www.amazon.com/dp/B0C3C6MHH3/?tag=snbforums-20
$16 - https://www.amazon.com/dp/B0C3HGSBGD/?tag=snbforums-20
$27 - https://www.amazon.com/dp/B0C8L6DPZX/?tag=snbforums-20
$42 - https://www.amazon.com/dp/B09RTZSN9X/?tag=snbforums-20

Cages
U.2/3 2X $230 - https://www.amazon.com/dp/B0BHLTDZZ8/?tag=snbforums-20
U.2/3 4X $400 - https://www.amazon.com/dp/B07B9HK4QG/?tag=snbforums-20

With how newer mobos are killing off slots to make room for more M.2 sockets, it's a toss-up which direction to go, but with a 4TB NVMe Gen4 running ~$200 it's compelling to look at these U options. An 8TB Gen4 M.2 is ~$800, and there are no 16TB options in the M.2 format.

4 x 4TB M.2 runs about $800 shipped.
2 x 8TB U.2 is the same price but, in my instance, would mean switching from RAID 10 to RAID 1.
Bumping back up to a 16TB array, though, would be a significant change in affordability at ~$2000 for RAID 1.

But, considering the higher capacity comes in at the same price as 8TB NVMe M.2 drives, it's considerably more affordable at roughly half the cost per TB.
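Quick sanity check on those numbers (mirrored setups halve the raw capacity; prices are the street prices linked above):

Code:
# usable capacity for each option
echo "4*4/2" | bc    # 4 x 4TB M.2 in RAID 10 -> 8TB usable, ~$800
echo "2*8/2" | bc    # 2 x 8TB U.2 in RAID 1  -> 8TB usable, ~$800
echo "2*16/2" | bc   # 2 x 16TB in RAID 1     -> 16TB usable, ~$2000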


https://www.amazon.com/s?k=Oculink+...1:qNKm69wi0NEUsBx0HFbdRxwkiYTmnI/9WpZ94SJ7bnA
Then there are OCuLink SFF-8611 cards / cables / adapters as well, where you can use either a PCIe slot or an M.2 socket to convert over to the U.2/U.3 format.

I could swap the M2 drives for these @ $15/ea - https://www.amazon.com/dp/B0BDYKG9RQ/?tag=snbforums-20
Grab the cables - $29/ea https://www.amazon.com/dp/B07ZBKCFND/?tag=snbforums-20

$45/drive to convert over might be a budget issue, though, for the ability to up your storage game. High-density mobo options with Gen4 slots might also be hard to find without going the M.2 conversion route. I've had some 5-6 slot boards in the past, but the slot speeds / sizes / electrical wiring vary. Moving to something like X299 would give tons of slots, all at Gen3 speeds.
 
https://www.superwarehouse.com/micr...3-15mm-non-sed-mtfdkcc15t3tfr-1bc1zabyyr.html
$1,069.43 shipped / no tax

Seems like a bit of savings, considering Amazon with tax comes out to $1,135.54 for me, about $65 more.

In reality, if you want a lot of capacity, you probably aren't using it for bleeding edge speed, so just stick with SATA. Much lower heat and higher capacity for the price. Your OS drive can be super high speed 1TB or whatever.

On a side note, enterprise SSDs are up to like 64TB. They look like a 2.5" SATA SSD but fit in an M.2 slot. They have a huge (standard, not super) capacitor in them for power-loss protection.
 
In reality, if you want a lot of capacity, you probably aren't using it for bleeding edge speed, so just stick with SATA.
SATA SSD options are out there, but they top out at 8TB / 550MB/s and cost $400.
NVMe M.2 tops out at 8TB / Gen3 3.5GB/s and costs $800.
U.2/U.3 options can head over 30TB, but in this comparison it's 16TB / $1050 with Gen4 speed ~7GB/s.



Looking at the thermals, though, they idle at the same temp as the generic consumer M.2 drives but, thanks to the 2.5" chassis they're enclosed in, they run cooler under load.

Code:
nvme-pci-0100
Adapter: PCI adapter
Composite:    +36.9°C  (low  = -20.1°C, high = +76.8°C)
                       (crit = +84.8°C)
Sensor 1:     +43.9°C  (low  = -273.1°C, high = +65261.8°C)
Sensor 2:     +38.9°C  (low  = -273.1°C, high = +65261.8°C)
Sensor 3:     +36.9°C  (low  = -273.1°C, high = +65261.8°C)

nvme-pci-0200
Adapter: PCI adapter
Composite:    +36.9°C  (low  =  -5.2°C, high = +83.8°C)
                       (crit = +87.8°C)
Code:
 nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme1n1          /dev/ng1n1                         WDS100T1X0E-00AFY0                       1           1.00  TB /   1.00  TB    512   B +  0 B   614900WD
/dev/nvme0n1          /dev/ng0n1                         Micron_7450_MTFDKCC15T3TFR               1           1.00  TB /  15.36  TB    512   B +  0 B   E2MU200
Code:
nvme smart-log /dev/nvme0n1 | grep Temperature
Temperature Sensor 1           : 44°C (317 Kelvin)
Temperature Sensor 2           : 39°C (312 Kelvin)
Temperature Sensor 3           : 37°C (310 Kelvin)

nvme smart-log /dev/nvme1n1
temperature                             : 37°C (310 Kelvin)

I find it odd, though, that I get more temperature output from the Micron in the PCIe slot than from the native M.2 WD drive.
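For anyone wanting to watch these under load, something like this simple loop over nvme-cli works (the device names are just examples from my box; needs root):

Code:
# log composite and per-sensor temps for both drives every 30 seconds
while true; do
    for d in /dev/nvme0n1 /dev/nvme1n1; do
        echo "=== $(date '+%H:%M:%S') $d"
        nvme smart-log "$d" | grep -i '^temperature'
    done
    sleep 30
done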

Anyway, let's talk $ per TB then.
Sure, SATA is going to be cheaper @ $50/TB
M.2 NVMe @ $100/TB
U.3 @ $68/TB
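That's just price divided by capacity using the numbers above (taking the U.3 as the 15.36TB drive at ~$1,050):

Code:
echo "scale=0; 400/8" | bc        # SATA 8TB           -> $50/TB
echo "scale=0; 800/8" | bc        # M.2 NVMe 8TB       -> $100/TB
echo "scale=0; 1050/15.36" | bc   # U.3 15.36TB Gen4   -> ~$68/TB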

Using some logic here, paying a slight premium over SATA to get double the capacity in a 2.5" format that runs cooler at over 10X the speed makes a bit more sense. If you want a direct 8TB comparison, it comes into price parity with the M.2 options but at double the speed; an 8TB U.3 drive would run ~$600, which is cheaper than the M.2 and only $200 more than the SATA.
 
Using some logic here, paying a slight premium over SATA

FWIW - SATA is losing that battle now that we have a number of different NVME offerings out there.

Most of the cost around the controller is similar, and NAND pricing is more of a factor these days...
 
Ok, 2nd Micron drive borked out and lost data. This one lasted almost a week compared to the other, which did it the same day. I thought I was out of the danger zone, but I went to add some data and got an error. I was able to READ data, or at least list what was on the drive, but it was acting funny, so I rebooted the system and bam, it was all toast like the last drive.

I'm not even going to mess with it at this point and looking at some different brands instead. 2 failures is enough for me.

Looking at the Samsung PM1733 or Intel D7-P5520; both come in the same 15.36TB configuration as the Micron but aren't Micron... I even bothered to update the firmware on this one, which I couldn't on the last one since it freaked out before I was aware there was an update, and I only found it once I went troubleshooting the damned thing. I also swapped the adapter card the drive sits in when I installed the new drive, to rule out cheap CN crap being the issue at the same time. I don't get why they keep failing though. What are the odds of two drives from different batches doing the same thing? Maybe there's a reason they're showing up for sale? Maybe the 9-series version is more stable?
 
SSDs are not more reliable than HDDs. Not even close.

At least, not the ones we normal humans have access to.

Even if the next test you do lasts a year or more with good results, I'm sure you're going to get hit with data loss again, eventually, using SSDs as NAS storage.

I suggest using a quality SSD (Samsung 990 Pro or better) as a cache for a few WD Red Plus 14TB (or larger) drives instead.
 
I suggest using a quality SSD (Samsung 990 Pro or better) as a cache for a few WD Red Plus 14TB (or larger) drives instead.
Not the direction I want to go, as stated before. I have spinners sitting off to the side and M.2s that have worked fine for years.

There's no reason these should be dying or losing their config though. If a cheap M.2 version can last for years, then WTF is the issue with the same thing cased in a 2.5" format? It's not like I hammered them with transfers and pushed them into thermal failure. With this one in particular I just copied my data over and let it sit doing normal daily stuff.
 
There is a reason; they're cheap.

Unless you want to use commercial-level nand (with the associated controllers and firmware), you're only going to get yourself deeper into this pile.

Good luck.
 
There is a reason; they're cheap.
Price is all over the map for the same drive. Price is relative to supply / demand though as well.

Intel / Solidigm / Samsung offer similar drives in the size I'm after, and they're in the same ~$1000 price tier. So, is it just a Micron issue, or is it going to be an issue across brands as well?

The appeal of the Micron, though, was price / capacity / performance compared to the other two. Having been through two of them now, I'm apprehensive about trying Samsung / Intel. These aren't anything new on the enterprise side in terms of use, either. U.2 has been a staple for quite a while, and U.3 just changes the pinout to allow SATA/NVMe on the same connector moving forward.

Unless you want to use commercial-level nand (with the associated controllers and firmware), you're only going to get yourself deeper into this pile.
These are commercial drives w/ commercial firmware... They don't require a special controller, just a special connector. From the standpoint of how they operate, they're like any other NVMe-based drive with a different connector. Otherwise you just hook them up and partition them to use them for data.
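Bringing one online really is just the usual routine, something like this (device name, filesystem, and mount point here are only examples):

Code:
parted --script /dev/nvme0n1 mklabel gpt mkpart primary 0% 100%
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /mnt/Storage
mount /dev/nvme0n1p1 /mnt/Storage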

The issue is that after some period of time the drive dies: drive A lost its partition table within hours of going online, and drive B lasted almost a week before doing the same thing, with the same symptoms of still appearing in the lspci output but not being able to push/pull data or partition/format the drive to make it usable again.
 
So, is it just a Micron issue, or is it going to be an issue across brands as well?

Micron SSDs are usually rock solid, and other vendors use their memory, and sometimes their controllers, in their own drives. Is it possible something on the host (hardware or software) is causing problems? Are the drives totally dead, or do they work again after a secure erase?
 
host (hardware or software) causing problems? Are the drives totally dead or do they work again after a secure erase?
Using Linux, which is what most of these drives would be hosted on in a DC environment.

Tested the 1st failure in 2 systems and even w/ a LiveCD image to rule out configuration conflicts.

Electrically they show up, but logically they don't after a failure.

They're impossible to reconfigure or load a partition table onto once they spazz out. At least, I can't find a way to get them working again the way they should. With the 1st one I thought, ok, maybe it's a fluke and just a DOA, so I sent it back and ordered another one from a different supplier; it lasted longer but, same story, works fine and then there's an I/O error when trying to move data.

So, I put in my spare M.2 and that gave me some boot issues, as the system swaps the NVMe device names around when drives are added or removed, which was never an issue when running RAID / mdadm, but those were spinners. I switched the fstab over to use UUIDs instead of /dev/nvmeX and that resolved it. Seeing the nvmeX swaps, though, explains the issues I saw when using a set of 4 NVMe drives before switching to U.2/U.3 NVMe.
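For anyone who runs into the same thing, the fix is just grabbing the UUID and referencing it in fstab instead of the device node (the UUID below is obviously a placeholder; the mount point and filesystem are just what I use):

Code:
blkid /dev/nvme1n1p1
# /dev/nvme1n1p1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"

# /etc/fstab entry by UUID instead of /dev/nvmeX
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/Storage  ext4  defaults,nofail  0  2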

Now I have some other crap to figure out, like what's taking up space on the primary M.2 when it was ~40GB used prior to things going haywire and now says it's 100% in use.

Besides making this work, I'm thinking about swapping to an AMD 7900X setup to allow bifurcation of the PCIe slot, so I can spread some drives across it using a different adapter, whether that's mini/slim SAS or OCuLink. OCuLink is appealing due to the higher bandwidth for the future, but I'm leery since the cards I'm seeing are dual drive per port, which could get a bit hairy in terms of the system seeing / splitting things up without issue. Or, if this is all destined to fail, at least the new setup would allow for M.2s at up to Gen5 speeds once prices come down, if the speed is ever a real temptation.
 
Ok, 2nd Micron drive borked out and lost data.

That's weird, but at the same time it suggests some kind of incompatibility on the platform - might be SW, could be power, might be HW in general...
 
SW, could be power, might be HW in general...
Just vanilla Linux, an 850W PSU, and no other issues with the HW. The 850 is overkill, but it's peace of mind since it's a 12700K with no GPU to tax the system and relatively nothing else inside the case that draws power or could cause a conflict. That's kind of the point: draw down the number of devices inside the case and simplify things a bit. I haven't had any issues with the other drives I've taken out / put in / repeated while messing with this. I did have a stick of RAM take a crap, most likely with the M.2 swap to the Z790 DDR4 board, but I've since swapped that through Amazon as well.

The fstab switch from /dev/xyz to UUID shouldn't have anything to do with the partition dropping out. It's odd, though, that the drive assignments move around when there's more than one NVMe detected, since it's set in the UEFI which one to use for OS / boot. It's Linux that's swapping the numbers around, but something deeper is causing them to drop out completely. It's not hard to fix the issue on boot, it's just time consuming waiting for it to time out.

With the adapter cards there seems to be no mention of using Micron drives, mostly just the Intel / Samsung drives. I can get Samsung/WD 8TB options for $400, which is only $50 more than an 8TB M.2 and $200 less than a 16TB, but I would need to swap from the Intel build to AMD to get bifurcation without paying the PLX premium on Intel. The AMD option is more palatable now with prices coming down, and then I'd just sell the Intel build to make up the cost.

I did testing with the 1st drive in two systems, so it's not the system build causing the issue. I also swapped carrier cards when I installed the new drive, in case the cheap CN card from before was what caused the issue. The carrier cards, though, are basically just dumb adapters that connect the drive through the PCIe slot, versus a direct connection or using a cable to mount the drive somewhere else.

If I go AMD, though, I'd jump to a quad-port card and cable setup, or something along those lines, to enable using two 8TB drives and isolate some of the potential issues, or use rsync to copy one to the other periodically as a backup mechanism. Should be a fairly quick task in the background at Gen4 speeds.
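The periodic copy would just be an rsync on a schedule, something like this dropped into cron (the paths are placeholders for wherever the two drives end up mounted):

Code:
#!/bin/sh
# /etc/cron.daily/mirror-storage : one-way copy from the primary drive to the second
rsync -aHAX --delete /mnt/Storage/ /mnt/Storage2/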

It's just frustrating at this point, as there's no apparent reason why this is happening. It's not 100% dead, but it's still not usable or recoverable. It's baffling; I've done some weird storage things in the past, even forensic recovery off drives, and these things don't even show up in those tools either. I did find a "dock" for these things, though, if I want to really put the time into figuring them out: https://www.amazon.com/dp/B09DKLM82S/?tag=snbforums-20 Only USB 10Gbps speeds, but easier to work with than digging inside the case or running remote commands.
 
What's the price of these drives? What is the warranty?

Is the AMD platform you're using contributing to these failures?
 
What Intel platform are you using then? Is the RAM fully tested? You're giving us bits and pieces of your setup/testing...
 
12700K / 8700K

Replaced the RAM on the 12700K system once I figured out the no-boot issue was a bad stick. At first I figured the other mobo had a bad slot, since it had worked fine for years. The replacement RAM is working fine.

Both systems have proper PCIe slots, though the 8700K is Gen3 and the other is Gen4.
 
Sounds like a current Intel Xeon platform may be required for stability here.
 
UPDATE....

Switched to a PCIe adapter and cable setup since both of the slotted drive adapters failed. So...

Started with the card + 100cm cable and nada.
Switched to a 50cm cable and got signs of life in the form of the system renumbering the NIC interfaces on boot, which is normal to see when it picks up or drops a PCIe device.
Went looking for the new disk... NADA! WTF!?!? Double-checked all the cabling, rebooted again, and NADA.
Decided to move the card down to another slot, which happens to still be Gen4 instead of the Gen5 top slot both of the other cards (with the Microns) were in, and moved my NIC into that top slot. Watched for lights at power-on and they started dancing like they should, so the slot isn't dead and works fine with anything other than these drives. Go figure.
Booted to the desktop, opened "Disks", and sure as S*&^ it shows up in the list.
Changed my mount points so the M.2 is on backup and the U.3 is on Storage, by UUID, since this avoids the renaming complex Linux has when booting. Rebooted and verified.
Everything's up, mounted, etc.
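For anyone following along, these are the standard checks at each of those steps for whether the kernel actually sees the drive (nvme list needs nvme-cli):

Code:
lspci | grep -i 'non-volatile'   # does the controller enumerate on the bus?
dmesg | grep -i nvme             # any link or controller errors at boot?
nvme list                        # does it show up as an NVMe namespace?
lsblk                            # is there a block device to partition/mount?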

Code:
lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme2n1     259:0    0 931.5G  0 disk
└─nvme2n1p1 259:1    0 931.5G  0 part /mnt/backup
nvme1n1     259:2    0    14T  0 disk /mnt/Storage
nvme0n1     259:3    0 931.5G  0 disk
├─nvme0n1p1 259:4    0     1G  0 part /boot/efi
└─nvme0n1p2 259:5    0 930.5G  0 part /

OS - 09:00.0 Non-Volatile memory controller: Sandisk Corp WD PC SN810 / Black SN850 NVMe SSD (rev 01)
U3 - 0a:00.0 Non-Volatile memory controller: KIOXIA Corporation NVMe SSD Controller CD8 (rev 01)
M2 / backup - 0c:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

Now, to see how this one does as far as lasting more than a week. Monitoring temps a bit closer as well, due to mentions of high temps on the Microns someone else added to their system. Right now, though, it's sitting at 38°C, which is in line with the 2 x M.2s currently in the system.

Right now it's just sitting in the case, not physically mounted, and I'll need to figure that out once it lives longer than the return period and proves itself. The other thing that's going to make my OCD go haywire is the x16 Gen5 slot being flaky; there's no real reason it shouldn't work with the adapter / cable / drive. The other option is to switch the drive adapter to an M.2-format one and push the NIC back down to the original slot. Either way, both options result in the drive running at Gen4 x4 speeds.
 