Which NAS file system can be read by Windows?


newb

Occasional Visitor
My biggest problem with a NAS (no RAID) is that when the NAS hardware fails, I cannot remove the drive, connect it directly to any Windows-based machine, and read the files.
I am talking about having access to the data using different hardware than the one that recorded it.
A similar problem exists with any RAID system: if the hardware fails, I am stuck until I buy new identical hardware.
Please do not advise backups. I have terabytes of stuff, and I cannot afford terabytes of backup storage, nor the electric bill it takes to run it. Which does not mean I should not be allowed to keep my terabytes of stuff in case hardware fails. Does it?
I see there is Thecus, which is Windows Server based, and I assume its disks can be read by any Windows machine. But I do not know.
Thank you
 
Out of the box, Windows can only read its own filesystems (NTFS, FAT32, exFAT). No Linux-based NAS that I know of uses NTFS as its data filesystem: it's a proprietary filesystem, so Linux support relies on reverse-engineered drivers whose performance will always be less than optimal. And personally, I cannot recommend any Windows-based file server solution for dealing with large amounts of data.

My recommendation is to go with ext4 and a Linux-based NAS; then you can use any Linux boot CD to access the disk's contents if you plug it into a PC. If your NAS is BSD-based, there may be ways to access its filesystems as well through a boot CD, or even a virtual machine running on a Windows PC.
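As a sketch of that recovery path (the device name /dev/sdb and the paths below are assumptions for illustration; the pulled disk may enumerate differently on your machine):

```shell
# From any Linux live environment (e.g. an Ubuntu boot USB):
# identify the pulled disk first -- /dev/sdb is an assumption here.
lsblk -f                        # list disks and their filesystems

# Mount the data partition read-only so nothing gets modified.
sudo mkdir -p /mnt/nasdisk
sudo mount -o ro /dev/sdb1 /mnt/nasdisk

# Copy off what you need, then unmount cleanly.
cp -a /mnt/nasdisk/important /home/user/rescued/
sudo umount /mnt/nasdisk
```

Mounting read-only is the safe default when the disk came out of a failed system.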

And while you don't want to hear it, you STILL need a backup of some sort. A power supply shorting out can take out every disk in a system, leaving you with nothing recoverable unless you go through a data recovery company. I've seen a number of dead HDDs over the years caused by power supply issues, where the short killed not only the PC motherboard but also every disk plugged in. I could often see the actual burned traces on the back of the HDD when that happened...

Get an entry-level NAS, fill it with the cheapest high-capacity disks you can find, and use that for backups. Personally, I use a QNAP TS-228A with a pair of 4 TB drives in RAID 0 (for capacity) to back up my main NAS, which runs a RAID 1 of 10 TB drives. The 4 TB drives were previously used in that main NAS, so when I upgraded it, I reused the old disks for the backup solution. And I can live with losing the backup (the chance of it failing at the same time as my main NAS is very low), hence I didn't go with RAID 1 there.
 
The problem with your plan of trying to read a non-native Windows filesystem is that it doesn't protect you against a corrupted filesystem or fire/theft.

One copy of data on a single physical device is asking for trouble. Anyone who has suffered through a data loss knows this. I know I do.
 
Out of the box, Windows can only read its own filesystems (NTFS, FAT32, exFAT). No Linux-based NAS that I know of uses NTFS as its data filesystem: it's a proprietary filesystem, so Linux support relies on reverse-engineered drivers whose performance will always be less than optimal. And personally, I cannot recommend any Windows-based file server solution for dealing with large amounts of data.

A tidbit from an old geezer. I was at the solar observatory in New Mexico before the storm took it out, and they captured huge amounts of solar data. They would set up a calculation that would run for maybe a year or more, and the support guy told me the only OS that would stand up to calculations running that long was Solaris on the Intel platform. Windows or Linux would not do it.
 
There are only a couple of ways to avoid being tied to a specific system. As already stated, recovering via Windows is not really an option at all, but via Linux it should be.

The primary way is to not use anything proprietary: build a Linux box using standard hardware and use software RAID. Dedicated hardware RAID controllers are not required for consumer use these days; CPUs generally have plenty of power for it.

The other way is to just do drive mirroring. There shouldn't be anything special there at all: a device fails, you just yank the drive and put it in something else.

On my previous Windows file server, my primary storage drives were mirrored via the chipset. It worked well enough for home use, and I was able to recover from drive failures fairly easily, since those drives could be read on any other Windows system. It didn't scale well and wasn't the fastest, but it worked.
 
In addition to booting a Linux distro, which can often recover data from a NAS disk, you can also install Windows filesystem drivers that can read (read-only) some Linux filesystems. I've used this method to read data off a drive pulled from a NAS using a $2 USB-to-SATA cable.
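A more recent option along the same lines is WSL 2's disk mounting (this sketch assumes Windows 11 with WSL 2 installed, and that the pulled disk shows up as PHYSICALDRIVE2; both are assumptions):

```shell
# From an elevated PowerShell prompt on Windows 11 with WSL 2:
# list physical drives to find the pulled NAS disk.
wmic diskdrive list brief

# Attach the disk's first partition to WSL as ext4; the files
# become readable inside the WSL distro under /mnt/wsl/.
wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4

# When done, detach the disk.
wsl --unmount \\.\PHYSICALDRIVE2
```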

When I was worried about a similar situation a number of years ago, my solution was to buy a spare NAS (same model) at the same time and keep it in the box for emergencies. NAS manufacturers like to change things up, and you can't otherwise guarantee that drives pulled from an older model will be readable in a newer one.

Another, cheaper option is an external USB drive, as they are pretty inexpensive (comparatively speaking) as a backup device (even though you aren't really considering this). Many NAS units can even perform an automated copy when you plug the USB drive into the NAS.

+1 vote for NOT using Windows as a NAS operating system.
 
A tidbit from an old geezer. I was at the solar observatory in New Mexico before the storm took it out, and they captured huge amounts of solar data. They would set up a calculation that would run for maybe a year or more, and the support guy told me the only OS that would stand up to calculations running that long was Solaris on the Intel platform. Windows or Linux would not do it.

Not sure what Linux distro they tried to use, because I've personally seen Linux server uptimes of close to two years. The longest I've seen was over 700 days (running CentOS), and I've broken 420 days myself on a Mandrake Linux server in my workshop, which was only reset when I had to move the server to a different location.

Not disputing Solaris' track record, though; it had a reputation for being rock solid.
 
My biggest problem with a NAS (no RAID) is that when the NAS hardware fails, I cannot remove the drive, connect it directly to any Windows-based machine, and read the files.
I am talking about having access to the data using different hardware than the one that recorded it.
A similar problem exists with any RAID system: if the hardware fails, I am stuck until I buy new identical hardware.
Please do not advise backups. I have terabytes of stuff, and I cannot afford terabytes of backup storage, nor the electric bill it takes to run it. Which does not mean I should not be allowed to keep my terabytes of stuff in case hardware fails. Does it?
I see there is Thecus, which is Windows Server based, and I assume its disks can be read by any Windows machine. But I do not know.
Thank you

I had a similar problem in the past. I had a Seagate BlackArmor NAS with 2 x 1 TB HDDs in a RAID 1 (mirror) configuration. One of the HDDs failed, and the other one was close to failing. I was unable to read the surviving (still good) drive on any Windows or Linux machine. I didn't dare operate the NAS with only one disk, and I was already tired of this old NAS, so I decided to buy/build a new NAS and copy all the files over the network. But the lesson learned from operating this NAS was that I should no longer use a proprietary NAS if I wish to be able to read the data disks on different hardware.

So I moved to a DIY NAS and decided to use an embedded NAS OS that boots from a USB stick on any PC. I chose XigmaNAS (formerly NAS4Free), which is a fork of the original FreeNAS. It is based on FreeBSD and ZFS. Now I sleep well. :) In case of a NAS failure (disk controller, mainboard, CPU, etc.) the only thing to do is move the disks to ANY PC, plug in the USB stick with the OS, boot the embedded NAS OS, import the ZFS pool, and read my valuable data. It is easy, and you can read your data anytime and anywhere. The only requirement is to have a spare PC, even if it is not NAS-grade.
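The import step described above boils down to a couple of ZFS commands (the pool name `tank` is an assumption here; use whatever the pool was actually called):

```shell
# After booting the embedded OS (or any FreeBSD/Linux system with
# ZFS support) on the machine the disks were moved to:
zpool import              # scan attached disks for importable pools
zpool import -f tank      # import it; -f if the old host never exported it
zpool status tank         # verify all member disks were found
ls /tank                  # datasets are now mounted and readable
```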
 
Not sure what Linux distro they tried to use, because I've personally seen Linux server uptimes of close to two years. The longest I've seen was over 700 days (running CentOS), and I've broken 420 days myself on a Mandrake Linux server in my workshop, which was only reset when I had to move the server to a different location.

Not disputing Solaris' track record, though; it had a reputation for being rock solid.

Not uptime. A year-long calculation. He said they tried them all. Even Solaris on their own hardware could not do it; it had to be Intel hardware with Solaris.
 
But the lesson learned from operating this NAS was that I should no longer use a proprietary NAS if I wish to be able to read the data disks on different hardware.

It does not matter whose hardware you use: if you use high-end RAID, you will not be able to read the disks individually. RAID 5 and 6 are written as stripes across the disks, not readable without the RAID. Only a mirror is going to be readable. The proprietary NAS had nothing to do with it, other than the RAID level you used.
 
It does not matter whose hardware you use: if you use high-end RAID, you will not be able to read the disks individually. RAID 5 and 6 are written as stripes across the disks, not readable without the RAID. Only a mirror is going to be readable. The proprietary NAS had nothing to do with it, other than the RAID level you used.

I think I said exactly the same. All commercial NASes use RAID. Failure of the RAID controller means the disks are no longer readable, unless you replace the failed controller with exactly the same model. So the solution is: don't use hardware RAID! Use ZFS pools; they are readable everywhere.
 
So you are teaching me something. Never mind.
Sorry, I don't want to be offensive, but my feeling is exactly the same: you are teaching me something. ;)
 
Commercial NAS doesn't mean (unreadable) RAID, necessarily. :)

You can use a 6-bay NAS and have three sets of mirrored drives. Back up the main set to the second at weekly intervals, and the second set to the third at monthly intervals. Up to three copies of your data, readable on any system that is not a NAS. :)
 
Commercial NAS doesn't mean (unreadable) RAID, necessarily. :)

You can use a 6-bay NAS and have three sets of mirrored drives. Back up the main set to the second at weekly intervals, and the second set to the third at monthly intervals. Up to three copies of your data, readable on any system that is not a NAS. :)

I appreciate your idea, but unfortunately it didn't work in my case. As I said above, I had mirrored drives in a commercial NAS and one of them failed. The "alive" one was totally unreadable on any Linux or Windows machine, while it was still readable by the NAS itself. So at least one real case exists where the idea does not work. That is the reason I am so aggressively against hardware RAID controllers. Before that happened, I shared your opinion that mirrored drives are "world"-readable. :) I really don't know if my case is the rule or the exception, but it happened.
 
In your case, it sounds like both drives failed then? This is not normal. :)
 
Not uptime. A year-long calculation. He said they tried them all. Even Solaris on their own hardware could not do it; it had to be Intel hardware with Solaris.

So it was something about a process running for over a year possibly. Still odd.
 
I appreciate your idea, but unfortunately it didn't work in my case. As I said above, I had mirrored drives in a commercial NAS and one of them failed. The "alive" one was totally unreadable on any Linux or Windows machine, while it was still readable by the NAS itself. So at least one real case exists where the idea does not work. That is the reason I am so aggressively against hardware RAID controllers. Before that happened, I shared your opinion that mirrored drives are "world"-readable. :) I really don't know if my case is the rule or the exception, but it happened.

Hardware RAID controllers will always be tricky indeed, and one must plan on having a second, similar controller available for recovery purposes. They might use a proprietary partition table.

However, a lot of today's SOHO NAS products from Synology and QNAP use Linux md RAID and/or LVM, so their disks are readable on a regular Linux system. That might be different if you went for one of their SMB/enterprise models, however; I can't say, since I've never had one in my hands.
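For those md-based units, recovery on a plain Linux box is roughly the following (the array and volume names are assumptions; yours will differ):

```shell
# With the NAS disks attached to any Linux PC:
sudo mdadm --assemble --scan           # detect and assemble md arrays
cat /proc/mdstat                       # confirm e.g. /dev/md0 is up

# If the NAS layered LVM on top of the md array:
sudo vgscan                            # find volume groups
sudo vgchange -ay                      # activate them
sudo mount -o ro /dev/vg1/lv /mnt/nas  # mount read-only (names vary)
```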
 
In your case, it sounds like both drives failed then? This is not normal. :)
No, the second drive was fully readable by the NAS itself. It had some SMART indications that it would fail soon, but it was in good enough condition for the NAS to read it and network-copy the whole drive to the new NAS. And I did that and saved my data. It was just not readable on any Linux/Windows machine. By the way, this whole story happened after 7 years of continuous operation :), so I have no complaints about the drives; they performed very well. :)
 
Hardware RAID controllers will always be tricky indeed, and one must plan on having a second, similar controller available for recovery purposes. They might use a proprietary partition table.

However, a lot of today's SOHO NAS products from Synology and QNAP use Linux md RAID and/or LVM, so their disks are readable on a regular Linux system. That might be different if you went for one of their SMB/enterprise models, however; I can't say, since I've never had one in my hands.

I am not so familiar with recent developments in SOHO NAS products; I decided to stop using them 4 years ago when all this happened, so I may be speaking about old technology. But at the time it was a typical SOHO NAS, a Seagate BlackArmor 220 namely. Now I am using the XigmaNAS OS (embedded) with ZFS on dedicated server hardware (an HP MicroServer Gen8), and I am very satisfied.
 
