
but it might break some of the QTS special sauce with their virtual network management..

Good point! That is probably the best part about QTS!

However, my head starts to spin when I think about the XOR LAG groups, and how to get full wire speed out of VMs... when used with the virtual switch QTS shows me.
 
Good point! That is probably the best part about QTS!

However, my head starts to spin when I think about the XOR LAG groups, and how to get full wire speed out of VMs... when used with the virtual switch QTS shows me.

Yeah, it's a bit odd, and with patience, one can see why they had to do it the way they did...

It's easier when working with the guest VMs for networking between them (and containers, if used) - and much better than in the early days of Virtualization Station.
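On the XOR LAG and wire-speed question - as I understand the balance-xor policy, each source/destination MAC pair hashes to a single member link, so one VM talking to one client tops out at a single NIC's speed, and only many concurrent flows spread across the group. A simplified sketch of that hashing (the real Linux bonding layer2 policy also mixes in the packet type, and QTS's virtual switch may add its own logic on top):

```python
# Simplified sketch of how a balance-xor LAG picks a member link.
# Not the exact bonding-driver algorithm - just the idea.

def xor_lag_member(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Index of the LAG member a given source/destination MAC pair hashes to."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links

# One VM talking to one client: every frame hashes to the same link,
# so that single flow can never exceed one NIC's wire speed.
print(xor_lag_member("52:54:00:12:34:56", "00:11:22:33:44:55", 2))
print(xor_lag_member("52:54:00:12:34:56", "00:11:22:33:44:55", 2))  # same link

# A different client can land on a different link - aggregate traffic
# spreads across the group, individual flows don't.
print(xor_lag_member("52:54:00:12:34:56", "00:11:22:33:44:66", 2))
```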
 
When it comes to my data, I prefer tried-and-true over bleeding edge. I've been burned by ReiserFS back in the day (back then every so-called expert pointed out how much better it was than ext2). So, ext3 or ext4 is what I prefer to use.

EXT4 is only one year older than BTRFS: EXT4 development started in 2006 and BTRFS development started in 2007. They're pretty much both equally mature.

That's not an entirely legal solution however (if it's the same thing I've read about a few years ago).

No, it's not. However, the way Synology is installed, you can simply swap the drives into a legitimate Synology NAS if the project disappears one day (say, if Synology shuts them down), and the configuration plus data all migrate over automatically.
 
EXT4 is only one year older than BTRFS: EXT4 development started in 2006 and BTRFS development started in 2007. They're pretty much both equally mature.

EXT4 was an evolution of an existing thing - BTRFS was pretty ambitious in its targets... even recently, BTRFS had issues in certain configs where data corruption and loss were hidden...

One could always do ReiserFS, LOL...
 
EXT4 was an evolution of an existing thing - BTRFS was pretty ambitious in its targets... even recently, BTRFS had issues in certain configs where data corruption and loss were hidden...

One could always do ReiserFS, LOL...

I'm not sure you fully understand or have even done any research.

BTRFS data loss is caused by the BTRFS RAID/volume management; it's still not 100% stable and not recommended for anything but a test setup.

Synology uses mdadm for the RAID and BTRFS as the filesystem. There are plenty of large companies, Facebook for instance, that use it as a filesystem only, which is what Synology also does.
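If anyone wants to see that split for themselves on a Linux box (or a Synology over SSH), it's visible straight from procfs. A rough, read-only sketch - standard Linux paths, though DSM's exact device naming may differ:

```python
# Read-only sketch: confirm the "mdadm for RAID, btrfs on top" layout
# on a running Linux box. Illustrative only.

def show_storage_stack() -> None:
    # md arrays assembled by mdadm are listed in /proc/mdstat.
    with open("/proc/mdstat") as f:
        print("--- mdadm arrays (/proc/mdstat) ---")
        print(f.read())

    # Mounted filesystems: look for btrfs sitting on an md or LVM device.
    print("--- btrfs mounts (/proc/mounts) ---")
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype == "btrfs":
                print(f"{device} on {mountpoint}")

if __name__ == "__main__":
    show_storage_stack()
```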
 
EXT4 was an evolution of an existing thing

Exactly. Not starting from a new code base or architecture, so far less likely to exhibit esoteric bugs than when you're building an entirely new filesystem from the ground up.
 
I'm not sure you fully understand or have even done any research.

Please now...

I have nothing against BTRFS, they've made a lot of progress, and it does address issues that we have with earlier file systems.

Informed people can make a choice...
 
The filesystem is rock solid. It's been tested, tried and true for years.

Except that last year, they did find an issue related to BTRFS and RAID.
 
BTRFS data loss is caused by the BTRFS RAID/LVM; it's still not 100% stable and not recommended for anything but a test setup.

I think the term is "fatally flawed" if BTRFS is managing the RAID 5/6 by itself - the parity was not calculated correctly - and it had nothing to do with LVM, as that works fine (by itself or with mdadm).
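For context, RAID 5 parity is just the XOR of the data blocks in a stripe, and the whole point is that any single lost block can be rebuilt from the rest. A toy sketch of the invariant the btrfs RAID 5/6 code was supposed to uphold (concept only - real RAID 5 stripes fixed-size chunks across disks, and RAID 6 adds a second, Reed-Solomon style parity):

```python
# Toy illustration of the RAID 5 parity invariant: parity is the XOR of
# the data blocks in a stripe, so any single missing block can be rebuilt
# by XOR-ing the parity with the survivors.
from functools import reduce

def xor_parity(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in one stripe
parity = xor_parity(data)               # what the RAID layer stores

# Lose block 1, then rebuild it from the parity plus the surviving blocks.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```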

A rock solid filesystem should not have bugs like that, period.
 
Except that last year, they did find an issue related to BTRFS and RAID.

That was the RAID/volume management part of BTRFS. You don't have to use BTRFS to manage RAID (you lose some features that way, but that part isn't stable anyway).

I think the term is "fatally flawed" if BTRFS is managing the RAID 5/6 by itself - the parity was not calculated correctly - and it had nothing to do with LVM, as that works fine (by itself or with mdadm).

BTRFS doesn't use an LVM (I know I used the wrong terminology before; I should have said BTRFS volume management) - it manages the volumes itself. However, you have the option to use it over an LVM or mdadm, in which case BTRFS wouldn't be calculating parity or resilvering.

The thread you linked above obviously mentions these issues and proves he was using BTRFS to manage the volumes, which we already agree is not stable.

That thread is actually very popular in the BTRFS community, and in Linux 4.12 they improved RAID 5/6 a lot; however, it's still not ready for prime time.

https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-Linux-4.12-RAID-5-6-Fixes

A rock solid filesystem should not have bugs like that, period.

I know I'm repeating myself, but once again: it only has issues if you use BTRFS to manage the volumes.

Not an issue using BTRFS on top of mdadm.

I do agree with you guys about not using BTRFS if you are letting it manage the volumes. However, I and many other people realise that it's stable and reliable as a filesystem when you use it over an LVM/mdadm, like Synology does.
 
I do agree with you guys about not using BTRFS if you are letting it manage the volumes. However, I and many other people realise that it's stable and reliable as a filesystem when you use it over an LVM/mdadm, like Synology does.

I think you're confusing some issues here, and there's a general lack of knowledge about how SW RAID can be done...

BTRFS, much like ZFS, can cross layer boundaries, and yes, that's what bit the BTRFS team with the parity issue noted above - crossing layers - as one can build a RAID straight up with it, just like one can build a RAID straight up with LVM and not mess with mdadm at all...

Nothing wrong with running BTRFS over an MD device running on mdadm, and nothing wrong with running over LVM - actually, one could bind multiple devices as an md device, and layer it with logical volume manager, and then format the device with btrfs...

If I recall, that's exactly what Synology does...

BTRFS - data
LVM - volume groups/storage pools
MDADM - physical disks...

That works... no problem here, and it gives some flexibility, as one can have multiple MD groups under LVM, and each LVM-managed volume can have a FS running on it, optimized for the task at hand - for example, XFS on one volume, EXT4 on another, and BTRFS on another...
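To make that layering concrete, here's a rough sketch of the commands involved, wrapped in Python only to keep the ordering obvious. The device names (/dev/sdb through /dev/sde), the pool/volume names and the sizes are made-up examples - mdadm and pvcreate will happily wipe disks, so treat this as illustration only:

```python
# Rough sketch of building the mdadm -> LVM -> filesystem stack described above.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) mdadm: bind the physical disks into one RAID 5 md device.
run(["mdadm", "--create", "/dev/md0", "--level=5", "--raid-devices=4",
     "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])

# 2) LVM: put a volume group (the storage pool) on top of the md device.
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "pool0", "/dev/md0"])

# 3) Carve out logical volumes and format each for the task at hand -
#    different filesystems can live side by side on the same pool.
run(["lvcreate", "-L", "100G", "-n", "shares", "pool0"])
run(["lvcreate", "-L", "50G", "-n", "vmstore", "pool0"])
run(["mkfs.btrfs", "/dev/pool0/shares"])   # snapshots/checksums for data
run(["mkfs.ext4", "/dev/pool0/vmstore"])   # plain and proven for VM images
```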

Like I mentioned earlier, I have nothing against btrfs in general, other than it's not as proven as some of the other choices out there - and yes, it has some serious issues that the btrfs team knows need to be addressed to meet the requirements they set forth.

A plus for btrfs is that it's not encumbered like zfs is, so it's more GPL-friendly while offering ZFS-like benefits that ext4 can't.
 
BTRFS doesn't use an LVM (I know I used the wrong terminology before; I should have said BTRFS volume management) - it manages the volumes itself. However, you have the option to use it over an LVM or mdadm, in which case BTRFS wouldn't be calculating parity or resilvering.

Just as a refresher - here's an article I wrote some time back...
 

Attachments

  • logical_volume_management.pdf (96.4 KB)
I think you're confusing some issues here, and there's a general lack of knowledge about how SW RAID can be done...

BTRFS, much like ZFS, can cross layer boundaries, and yes, that's what bit the BTRFS team with the parity issue noted above - crossing layers - as one can build a RAID straight up with it, just like one can build a RAID straight up with LVM and not mess with mdadm at all...

Nothing wrong with running BTRFS over an MD device running on mdadm, and nothing wrong with running over LVM - actually, one could bind multiple devices as an md device, and layer it with logical volume manager, and then format the device with btrfs...

If I recall, that's exactly what Synology does...

BTRFS - data
LVM - volume groups/storage pools
MDADM - physical disks...

That works... no problem here, and it gives some flexibility, as one can have multiple MD groups under LVM, and each LVM-managed volume can have a FS running on it, optimized for the task at hand - for example, XFS on one volume, EXT4 on another, and BTRFS on another...

Like I mentioned earlier, I have nothing against btrfs in general, other than it's not as proven as some of the other choices out there - and yes, it has some serious issues that the btrfs team knows need to be addressed to meet the requirements they set forth.

A plus for btrfs is that it's not encumbered like zfs is, so it's more GPL-friendly while offering ZFS-like benefits that ext4 can't.

Just as a refresher - here's an article I wrote some time back...

Thanks for the info, and yes, I was a little confused about exactly how mdadm and LVM work - I thought they were mutually exclusive. Now it's a bit clearer. I will read the PDF you attached.

I was simply trying to point out that BTRFS is stable and reliable if you are using it over LVM and mdadm (I've got 5 systems that I've set up and manage, with no problems). There seem to be a lot of people who think otherwise because they don't know how it can be used in different ways.

I think people have googled BTRFS and read about the problems with failed volumes, but didn't realise it was caused by BTRFS RAID and didn't know you don't have to use it, and that other systems don't use it.

It's been interesting talking with you, and sorry I was a bit rude earlier :)
 
That was the RAID/volume management part of BTRFS. You don't have to use BTRFS to manage RAID (you lose some features that way, but that part isn't stable anyway).

But my point stands - newer code isn't as mature, and when dealing with something as complex and critical as a filesystem, I trust my data to something that's more mature (and ideally simpler - the more complicated the plumbing gets, the more likely you are to run into a flaw at some point in the future). Been burned once by reiserfs, never again.
 