Storage build advice
Tal Bar-Or

New Around Here
Hi all,

I would like to build a NAS/SAN (iSCSI) for XEN virtualization and maybe some CIFS file sharing as well.
The spec I'm aiming for is below, running FreeNAS with ZFS. Has anyone had experience with such a config, and how much throughput and IO should I expect from it? If someone has advice for a better config, it would be appreciated.
Thanks

Chassis: Supermicro SC846XE1C-R1K23B
CPU: Intel Xeon E5-1620 v3 quad-core, 3.50 GHz
Motherboard: Supermicro X10SRL-F
2x IBM ServeRAID M5016 SAS/SATA RAID controller (LSI 9265-8i, PCIe, 6Gb/s, 1GB cache)
RAM: 128GB MEM-DR432L-SL01-ER21
12x Seagate ST6000NM0004
3x SanDisk SDSSDXPS-240G
Intel X520-SR2 (E10G42BFSR) dual-port 10Gbps server adapter
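For a build like this, one common FreeNAS layout for 12 data drives plus a few SSDs is two raidz2 vdevs with the SSDs split between SLOG and L2ARC. The sketch below is only one possibility, not a recommendation from this thread; the pool name `tank`, the device names (`da0`..`da11`, `ada0`..`ada2`), and the zvol size are all illustrative assumptions, and FreeNAS would normally do this through its GUI rather than the command line.

```shell
# Hypothetical pool layout for 12x 6TB drives plus 3 SSDs.
# Device names are placeholders - substitute what your system assigns.

# Two 6-disk raidz2 vdevs: survives two drive failures per vdev and
# spreads I/O across two vdevs, which helps iSCSI workloads.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# One SSD as a SLOG (sync-write log, useful for iSCSI/NFS sync writes),
# the other two as L2ARC read cache.
zpool add tank log ada0
zpool add tank cache ada1 ada2

# A zvol to export over iSCSI to the XEN hosts (size is an example).
zfs create -V 2T -o volblocksize=16K tank/xen-lun0
```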
 
Sounds like a decent setup HW wise - I would suggest checking out OpenFiler, as their iSCSI performance is very good...

I'm not a big fan of ZFS - on Solaris, it's ok... it's just another filesystem... on the various Linux/BSDs, it really comes down to the implementation.

If one is really bent on doing ZFS, check out SmartOS
 
I'm not a big fan of ZFS - on Solaris, it's ok... it's just another filesystem... on the various Linux/BSDs, it really comes down to the implementation.
On any version of FreeBSD that is still supported, ZFS is fine (I have pools up to about 150TB). Fixes and improvements that come from upstream (illumos) are integrated pretty rapidly. The closer you are to the latest FreeBSD version, the more likely enhancements will be applied - older but still-supported releases tend to get fixes only.

I don't know about the other BSD's. On Linux, my understanding is that incompatible licenses restrict how ZFS can be integrated into the system. I'm not sure if that limits performance and/or features.

Oracle decided they didn't like open source, so they took their ball (ZFS) and went home. Therefore, Oracle ZFS has diverged a bit from the open source versions. That is generally not a problem unless you need to share storage pools between Oracle and non-Oracle implementations.
 
ZFS has a lot of IP protection around it - e.g. patents and licensing... that's why a lot of people step back away from it - it's not technical excellence... the *BSD's continue to support it, as they generally do, as anything is better than UFS when scaling...

For the most part, folks here need to deal with EXT4, NTFS, and to a lesser degree HFS+.
 
ZFS has a lot of IP protection around it - e.g. patents and licensing... that's why a lot of people step back away from it - it's not technical excellence...
Say what? :confused:

Sun open-sourced ZFS under the CDDL. Oracle later took their ball and went home, but that does not change the rights and responsibilities granted under the CDDL - it just means that later changes to ZFS by Oracle are not available to non-Oracle customers. Independent development of the CDDL ZFS is ongoing and ZFS pools are generally importable on a variety of operating systems, as long as the operating system doing the import has (at least) the features the pool was created with.

You may be thinking of the incompatibility of CDDL with the GPL. The GPL people don't like it, and say "This is a free software license. It has a weak per-file copyleft (like version 1 of the Mozilla Public License) which makes it incompatible with the GNU GPL. This means a module covered by the GPL and a module covered by the CDDL cannot legally be linked together. We urge you not to use the CDDL for this reason." That would seem to be the GPL's problem. There are a lot of open source licenses they don't like (click on the link above).
 
ZFS has always been a nice filesystem for storage if you expect multiple users and multiple applications to use it simultaneously. It also uses RAM as cache, so you don't need those pricey RAID cards with onboard RAM/cache, or a cache drive. Backup is also quick and easy with it, and it has self-healing. RAIDZ variants are complicated since it's not easy to upgrade your storage.

ZFS over a standard RAID array is usually the easiest from a management standpoint, and even Linux distributions have been improving their implementations, so you could use Ubuntu with ZFS, for example. When it comes to running software, it is generally better to use a different filesystem.
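The "RAM as cache" above refers to the ZFS ARC. On FreeBSD/FreeNAS it can be inspected and capped via sysctl; the 32 GiB cap below is just an example value, and the sysctl names are the FreeBSD ones:

```shell
# Inspect current ARC size and its configured ceiling.
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# Cap the ARC at 32 GiB (example value) so other services keep RAM;
# set in /boot/loader.conf so it persists across reboots.
echo 'vfs.zfs.arc_max="34359738368"' >> /boot/loader.conf
```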
 
Sun open-sourced ZFS under the CDDL. Oracle later took their ball and went home, but that does not change the rights and responsibilities granted under the CDDL - it just means that later changes to ZFS by Oracle are not available to non-Oracle customers. Independent development of the CDDL ZFS is ongoing and ZFS pools are generally importable on a variety of operating systems, as long as the operating system doing the import has (at least) the features the pool was created with.

Well, it is the license that many folks are concerned about, and that Oracle has, as you've stated, taken their ball off the field and gone home. So we basically have a source tree that is stuck in time, and a license that Oracle can update/modify...

The incompatibility with the GPL (LGPL/GPLv2/v3) limits its practical use in Linux. The various BSDs don't have that problem, and they're pretty open about supporting OSI-compliant licenses, if not directly, then in their ports tree. Apple initially explored it as a supported filesystem for OS X (in 10.6, if I recall): it could mount ZFS as a supported FS type, but not format or create the filesystem directly. They eventually walked away because of the terms of the CDDL, which was perhaps a little too copyleft for their taste...

Personally I have nothing against ZFS - on Solaris, it's pretty awesome, especially if one has the hardware resources that really make it shine - and while it doesn't necessarily need a high-end RAID card, I'll tell you first hand, it can and will use whatever is available. It's fantastic in an enterprise/carrier-grade application platform.

It's probably overkill for most folks here - performance wise it doesn't compare well on lower end hardware, but the file check-summing, snapshot capability, and storage pool functionality are benefits if one can tolerate the performance hit.
 
RAIDZ variants are complicated since it's not easy to upgrade your storage.
I'm not sure what you mean here - you can replace 1 drive at a time (or 1 in each raidz member if you like being adventurous) with larger drives and once all the drives have been replaced, you have a larger pool with more storage. You could also add additional raidz's to the pool and get more storage that way, although existing data will not migrate to the new raidz by itself, so I/O will not be perfectly balanced. Either of these is better than the traditional "back it all up, nuke it from orbit, add/replace drives and restore everything" method.
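The one-drive-at-a-time growth procedure described above looks roughly like this in practice (pool name `tank` and device names are illustrative assumptions):

```shell
# Grow a raidz vdev by replacing each member with a larger drive.
# The extra capacity appears once the last member is replaced.
zpool set autoexpand=on tank

zpool offline tank da3     # take one old drive out of service
# ...physically swap in the larger drive...
zpool replace tank da3     # resilver onto the new drive
zpool status tank          # wait for resilver to finish before the next swap

# Alternatively, add a whole new raidz vdev to the pool; existing
# data stays where it is, so I/O won't be perfectly balanced at first.
zpool add tank raidz2 da12 da13 da14 da15 da16 da17
```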
ZFS over a standard RAID array is usually the easiest from a management standpoint, and even Linux distributions have been improving their implementations, so you could use Ubuntu with ZFS, for example.
ZFS expects that each of its vdevs is a separate disk device. If you create a ZFS pool with vdevs that are actually multi-disk logical units on a RAID controller, you lose a lot of the self-healing abilities of ZFS and introduce a second level of management by performing the disk replacement on the RAID controller. I run my pools (currently 32TB each, moving to 128TB shortly) on a RAID controller, but I export each drive on the RAID controller as a logical unit and build my pool out of those. I get the benefit of a nice web interface on the RAID controller, lots of status LEDs for each drive bay instead of a simple activity LED, and so on, but I still have the benefits of ZFS.
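With one logical unit per physical drive as described above, ZFS can still detect and repair bad blocks itself. A periodic scrub exercises that self-healing (pool name `tank` is an assumption):

```shell
# Read and verify every block in the pool, repairing from redundancy
# where checksums fail.
zpool scrub tank

# The CKSUM column shows errors ZFS found (and, with redundancy, fixed).
zpool status -v tank
```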
 
Well, it is the license that many folks are concerned about, and that Oracle has, as you've stated, taken their ball off the field and gone home. So we basically have a source tree that is stuck in time, and a license that Oracle can update/modify...
As I mentioned above, there is coordinated open source development of ZFS and pools are importable on different operating systems as long as the system doing the importing supports all of the features that the pool was created with. A feature flags mechanism was added quite some time ago to support this - before that, there were simple ZFS and zpool version numbers and a pool would not be importable if it was created with a higher version than the one provided by the operating system doing the import.

Oracle stopped providing updates under the CDDL sometime after zpool 28 / ZFS 5. They are currently at zpool 33 / ZFS 5. Looking at the list of changes in zpools 29-33, a number of those have also been independently implemented in open source ZFS (along with other features).
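The feature-flags mechanism mentioned above can be inspected from the command line; a quick sketch (pool name `tank` is an assumption):

```shell
# Open-source ZFS replaced plain version numbers with feature flags.
# List which features a pool has enabled/active:
zpool get all tank | grep feature@

# Enable every feature this OS supports. Note this is one-way: an OS
# lacking a feature that is "active" may then refuse to import the pool.
zpool upgrade tank

# Importing on another system works only if it supports all features
# active on the pool:
zpool import tank
```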
The incompatibility with the GPL (LGPL/GPLv2/v3) limits its practical use in Linux. The various BSDs don't have that problem, and they're pretty open about supporting OSI-compliant licenses, if not directly, then in their ports tree.
Yup. But I'd argue that that is a Linux/GPL issue.
Apple initially explored it as a supported filesystem for OS X (in 10.6, if I recall): it could mount ZFS as a supported FS type, but not format or create the filesystem directly. They eventually walked away because of the terms of the CDDL, which was perhaps a little too copyleft for their taste...
My understanding was that it was a technology preview and there wasn't sufficient justification to do further development. Once Apple discontinued their Xserve product line, they were left with desktop and notebook products. There isn't much need for double-digit terabytes of storage on desktops, and none at all on notebooks. Perhaps coincidentally, OS X 10.6 was the last version to ship a separate server edition.

Apple isn't opposed to open source, as this shows. To quote a little from that link:

"As the first major computer company to make Open Source development a key part of its ongoing software strategy, Apple remains committed to the Open Source development model. Major components of Mac OS X, including the UNIX core, are made available under Apple’s Open Source license, allowing developers and students to view source code, learn from it and submit suggestions and modifications. In addition, Apple uses software created by the Open Source community, such as the HTML rendering engine for Safari, and returns its enhancements to the community.

Apple believes that using Open Source methodology makes Mac OS X a more robust, secure operating system, as its core components have been subjected to the crucible of peer review for decades. Any problems found with this software can be immediately identified and fixed by Apple and the Open Source community."

It's probably overkill for most folks here - performance wise it doesn't compare well on lower end hardware, but the file check-summing, snapshot capability, and storage pool functionality are benefits if one can tolerate the performance hit.
I know of systems with large (100TB+) pools that provide several GB/sec of I/O to the pool. You definitely need to design for that situation - you can't just throw parts at it and hope. For systems with lower capacity and/or performance requirements, ZFS can still provide a good solution. It probably isn't a good fit for systems with fewer than 5 disk devices, which does eliminate a good sized chunk of the audience here.
 
At the end of the day, I really don't care what color we paint this bike shed...

I run a carrier-grade network - 7 out of 8 are Linux-based - one is Solaris, and yes, it runs ZFS... and it's well beyond your 100TB pool...

That ZFS pool runs just fine... so do my other filesystems on EXT4...
 
I run a carrier grade network - 7 out of 8 are linux based - one is Solaris, and yes, it runs ZFS... and it's well beyond your 100TB pool...

That ZFS pool runs just fine... so do my other filesystems on EXT4...
They aren't my pools, just on systems where I've done consulting. And yes, they're bigger than 100TB, but the customer doesn't want the exact size, etc. mentioned. I will say that those servers are in the Alexa Top 50.

So you're using ZFS. I'm just surprised at what I took to be misunderstandings about ZFS in that case.
 