Confessions of a 10GbE Network Newbie

L&LD

Dennis,

I just finished reading your latest article (Part 3) on the main page. Great stuff!

Just a question: is the QTS firmware v4.1 official yet or are you still running the Beta?
 
Very good write-up, Dennis, and much appreciated. What has been your experience with the SSD cache on the QNAP unit? I have read that it seems to provide great enhancement for 10GbE network purposes and VM implementations, but really does very little in other use cases. Intuitively it seems as if having an SSD cache on a QNAP used to store video and photo files for editing with Adobe CC would be a great benefit even on a standard gigabit network.
 
Thanks for the kind words gents :)

L, yes, I'm using QTS 4.1, which QNAP has listed as beta. Zero issues there in my testing.

Ht, here are a few SSD cache results using some older (slower) drives but a fairly new SSD: http://forums.smallnetbuilder.com/showthread.php?t=15921
My thoughts on gigabit (even with an SSD in the NAS) are that the data rate and latency would still be slower than a local SATA 6Gb/s spinner. I did a few large file copy-and-paste tests (30GB) to the NAS where speeds oscillated between 300 MB/s and 100 MB/s every 3GB or so.

For home use, over gigabit, I can see SSD caching being an improvement for apps like Lightroom. Keep in mind that the tests above were using 5-year-old SATA 3Gb/s drives. The Hitachi 4TB 7200rpm NAS drives (SATA 6Gb/s) I'm using for our 10GbE network are at least 50% faster...so combine those with SSD caching and the results would be even better. My take is that the usual trade-off on "green" NAS drives between power savings and slow performance can be improved significantly by adding an SSD cache. I should mention that QNAP has several options for the cache algorithm, plus an option to bypass the cache for larger sequential writes, which I did not test...I used the defaults.
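
If anyone wants to repeat that copy test, here's a rough Python sketch of how I'd time it. The share path and sizes are placeholders, so adjust them for your own NAS; the per-chunk numbers will still include some client-side caching, but the oscillation shows up clearly.

import os, time

PATH = r"Z:\bench.bin"      # placeholder: a file on your mapped NAS share
CHUNK = 64 * 1024 * 1024    # 64 MB write buffer
TOTAL_GB = 30               # roughly the size of my copy test

buf = os.urandom(CHUNK)
with open(PATH, "wb", buffering=0) as f:
    for gb in range(TOTAL_GB):
        start = time.time()
        for _ in range((1024 * 1024 * 1024) // CHUNK):
            f.write(buf)
        os.fsync(f.fileno())   # flush so the number reflects the NAS, not local RAM
        elapsed = time.time() - start
        print(f"GB {gb + 1}: {1024 / elapsed:.0f} MB/s")
os.remove(PATH)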
 
It is way overdue for home networks to move to 10 Gigabit Ethernet.

Consider how everything else has moved on, leaving the decrepit 1 Gigabit standard in place.

If networking had kept pace, some home networks would already be getting ready to leave 10 Gigabit behind and move to 16 Gigabit Fibre Channel or some other modern technology.

With the ability to build your own NAS for cheap that can easily saturate a Gigabit connection, and even a 10 Gigabit connection (if they would stop price gouging on 10 Gigabit adapters and switches), for far less than $1000, home networking needs to move on.

Doing so will also allow prices to drop. There are things in product marketing which cause prices to have sky-high markups. For example, a cheap $20 parabolic reflector becomes a $200 reflector of similar construction when you add the word "photography" to it.
 
It is way overdue for home networks to move to 10 Gigabit Ethernet.
Don't hold your breath on seeing 10GbE growth in the consumer market.

Consumer networking product makers are seeing a stronger trend away from wired to all-wireless networks in the home.

The trends you see toward more powerful NASes that saturate a Gigabit connection are real. But they are not home market trends.
 
Even uptake on the commercial side has been slow, likely due to very expensive switches. Netgear is the first to jump in, and I'm sure more will follow in 2014. We're now seeing 10GbE-onboard NAS offerings at the $2K mark...that's less than a high-end gaming machine. If you consider how much data an 8-port 10GbE switch needs to move, I can understand why they are so expensive today. We all know that processing power marches forward at an impressive rate, so another year or two and we will see the effect on hardware.

At least for now, at $400 for a NIC, $800 for an 8-port switch, and $2000 for a 10GbE diskless NAS, manufacturers have their focus on business. The article series really is targeted at the small studio or business like Cinevate. Many photographers and filmmakers are one- or two-person operations based from their home, but with significant storage challenges.

In the "build your own server" section of this series I will detail a RAID 5, 20TB server/NAS with dual 10GbE capable of over 1000 MB/s for under $2500 (disks included!).
 
In the "build your own server" section of this series I will detail a RAID 5, 20TB server/NAS with dual 10GbE capable of over 1000 MB/s for under $2500 (disks included!).
Oooh! Teaser! :)
 

I am a little nervous about the ability of the SSD cache to sustain its performance over time, as QNAP does not yet support TRIM. Do you believe that one can rely on the drive's internal garbage collection to keep its performance up to speed?

I like the idea that QNAP has several algorithms available for cache usage. Another of my concerns about using an SSD for cache in a NAS is the life span of the SSD. The best consumer-grade SSDs are rated at 80-100k IOPS. It seems as if one could burn through the SSD pretty quickly given the large volume of reads/writes to the NAS.
 
I posed the question to QNAP. QNAP currently supports SSD TRIM for RAID 0/1/10, but does not support TRIM for the SSD caching drives. TRIM for the SSD write cache is likely on their engineers' radar. A recent SSD should be fine in terms of garbage collection, though. I did see that SSD torture test earlier...an excellent read.

I've been running a few tests with the TS-870 Pro in RAID 5 with eight 4TB Hitachi drives. Quite impressive, with reads in the 850 MB/s area and writes in the 600 MB/s range. At those speeds, it would not make sense to use an SSD cache.
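
For anyone wondering why, the arithmetic is simple; the SSD figure below is a typical SATA 6Gb/s number rather than a measurement of any particular drive:

# Why an SSD cache stops paying off once the array itself is this fast.
array_read_mbs = 850     # measured above: RAID 5, eight 4TB Hitachis
array_write_mbs = 600
sata_ssd_mbs = 520       # typical sequential rate of a single SATA 6Gb/s SSD

print("cache would help reads: ", sata_ssd_mbs > array_read_mbs)    # False
print("cache would help writes:", sata_ssd_mbs > array_write_mbs)   # False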
 
Nope...Gigabyte is not the first single-socket board with 10GbE onboard. Supermicro has had this board for some time now: http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRH-7TF.cfm

The SM motherboard with dual Intel 10GbE and an LSI 2308 RAID controller at $500 is perhaps the biggest "bargain" out there. I'll be writing about it very soon, as it is the basis of my affordable 1000MB/s server.

It's really good to see Gigabyte jumping in.
 
Dennis,

Thanks. Guess we can't believe everything we read on the 'net. :)
 
Yeah, but now we need something with a 10GbE onboard NIC that'll take socket 1155, isn't a server board, and preferably comes in under $300.

C'est la vie. It is exciting that 10GbE seems to be starting to push toward much lower prices, and hopefully the trend will continue.

I am still salivating at the possibility of getting 10GbE for "cheap" at some point in the next couple of years...though I don't know what I'd do with it right now. For now I am probably better off just trying my onboard NICs mixed with my pair of Intel Gigabit CT NICs for 3Gbps of throughput with SMB Multichannel. I'd still need to upgrade my disk array, but the disks I am looking at in RAID0 should be able to push close to what 3Gbps can handle (a pair of Seagate 3TB Barracudas for both machines).
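
Rough math for why I think the disks would keep up; the per-drive rate is just my guess for a 3TB Barracuda, not a benchmark:

# Will a pair of 3TB Barracudas in RAID0 keep a 3x GbE SMB Multichannel link busy?
nics = 3
gbe_mbs = 118            # ~usable MB/s per gigabit link after protocol overhead
drive_mbs = 180          # guessed sequential rate per drive
raid0_drives = 2

link_mbs = nics * gbe_mbs              # ~354 MB/s aggregate over SMB Multichannel
array_mbs = raid0_drives * drive_mbs   # ~360 MB/s from the RAID0 pair
print(f"link {link_mbs} MB/s vs array {array_mbs} MB/s")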

What I am really hoping for soon are "affordable" switches with mixed 1GbE and 10GbE ports on them.

Something like a 16-port gigabit switch with a pair of 10GbE ports shared with a pair of SFP+ ports would be really nice, especially if you were talking $5 per gigabit port and $80-100 per 10GbE port. Or, maybe better, a 24-port gigabit switch with a similar pricing structure and 4x 10GbE ports. Maybe/eventually.
 
The problem I'm seeing even with socket 1155 is that 16 lanes of PCIe 2.0 really isn't enough. With the SATA 3.2 spec eating up two PCIe 3.0 lanes to get 2GB/s (only a spec at this point), there will be further pressure on the bus to keep all devices happy. Once PCIe devices themselves all reach the PCIe 3.0 spec and SATA 3.2 is mainstream, then you will be OK with a lower-end socket...as far as saturating a single 10GbE connection goes, anyway.

If you see yourself ever needing a PCIe SSD, a fast PCIe graphics card (or two, which benefits Adobe CC), and 10GbE, the reality today is that your only choice is socket 2011. This ensures enough PCIe bandwidth to the CPU. Anything less is using a PCIe switching (PLX) chip to fake higher bandwidth.

The Supermicro board I linked above has dual 10GbE onboard for less than the cost of a single dual-port 10GbE card. Add in the eight-port SATA 6Gb/s LSI 2308 and the $500 price tag doesn't seem so bad.
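
To put some rough numbers on the lane problem (the usable-per-lane figures below are approximations after encoding and protocol overhead, not exact spec values):

# Approximate usable bandwidth per PCIe lane versus what 10GbE needs, full duplex.
lane_mbs = {"1.0": 200, "2.0": 400, "3.0": 800}   # rough usable MB/s per lane
single_port_fd = 2 * 1250                         # one 10GbE port, both directions

for gen, per_lane in lane_mbs.items():
    for lanes in (4, 8):
        usable = lanes * per_lane
        print(f"PCIe {gen} x{lanes}: {usable} MB/s -> "
              f"single-port OK: {usable >= single_port_fd}, "
              f"dual-port OK: {usable >= 2 * single_port_fd}")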
 
No, it's not bad at all. I am just looking at it from the standpoint of a "low cost" server. Granted, 10GbE isn't low cost yet, so what is the point of the rest of the server costing only $300 if it can't even keep up?

It's at least simpler from a lot of angles to go 10GbE instead of multiple 1GbE links (even with SMB Multichannel) once you exceed 1GbE speeds. Most newer single hard drives can easily push beyond Gigabit Ethernet speeds, let alone an SSD or a RAID array. My much older RAID0 array can push over 200MB/sec, and when it was somewhat less full it could saturate my 2x 1GbE link. With a couple of new 3TB drives I expect it could do justice to a 3x 1GbE link.

It would, however, be much simpler if I could just do 10GbE, even if the drives couldn't keep up with the network link.

At least for Z77 (not sure about Z87), there are 16 lanes of PCIe 3.0, but also 4 lanes of PCIe 2.0. A 4x PCIe slot isn't enough for any 10GbE board I have seen, but that doesn't mean it couldn't be. Four lanes at 2.0 speeds are theoretically 2000MB/sec; even with overhead it's something like 1700-1800MB/sec. That's not 10GbE full duplex concurrently, but it could easily handle half duplex at 10GbE and still leave room for a fair amount of opposite-direction traffic.

Heck, there are plenty of PCIe 1.0a cards that are single-lane gigabit Ethernet. With overhead, that is only around 220MB/sec of bandwidth, which is a bit less than GbE full duplex, and at least so far I haven't noticed any real-world impact (granted, with my use case I am rarely doing more than maxing Rx or Tx, with only limited traffic in the other direction for ACKs and the occasional "slow" link back, such as streaming from my server while I am shoving a bunch of files to it).

Or, as a dedicated server like mine: I have nothing populating my x16 slot, just a pair of x1 NICs and nothing else. You could easily do a pair of x8 slots with a RAID card and a 10GbE card. Or, if it is an onboard NIC, connect it through the CPU's PCIe lanes instead of the PCH (which I think has 8 lanes for everything, at least as of the 7-series chipsets, not sure about earlier, and is usually what the PCIe x1 slots and SATA/SAS hang off of) and dedicate 8 of those lanes to it.

Or if it was PCIe 3.0 capable, just dedicate 4 lanes to it.

Actually...that is my biggest beef with networking vendors right now. EVERYONE seems to be stuck in years past. I still see most gigabit NICs shipping with only PCIe 1.0a support. I see a small handful of newer cards, mostly server dual/quad-port cards, that have 2.0 support. Very few, though. Nothing with PCIe 3.0, at least not on the GbE front. I don't really see anything on the 10GbE front either, though I haven't been looking too closely at that stuff lately.

It just doesn't make a lot of sense to me. GbE is still common/important in a lot of SOHO/SMB servers and networking gear. You could make a quad-port card on a single PCIe 3.0 lane and handle 80% of the bandwidth the thing could generate with all ports at full duplex, or even a dual-port card on a single PCIe 2.0 lane. None of this taking up 4 lanes of PCIe 1.0a for a dual/quad-port card.
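
That 80% figure is just math along these lines; the usable-per-lane numbers are rough approximations, not spec values:

# Could a quad-port GbE card live on a single PCIe lane?
gbe_fd_mbs = 250                  # one gigabit port, both directions maxed
quad_worst_case = 4 * gbe_fd_mbs  # 1000 MB/s with all four ports at full duplex
pcie3_x1_mbs = 800                # rough usable MB/s on one PCIe 3.0 lane
pcie2_x1_mbs = 400                # rough usable MB/s on one PCIe 2.0 lane

print(f"quad-port on PCIe 3.0 x1 covers {pcie3_x1_mbs / quad_worst_case:.0%} of worst case")
print(f"dual-port on PCIe 2.0 x1 covers {pcie2_x1_mbs / (2 * gbe_fd_mbs):.0%} of worst case")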

It just doesn't seem to make any sense to me at all, though I'll be the first to admit I know little about enterprise networking and only a bit about SMB networking (but a lot about SOHO networking). I guess it's a personal whim. I can't help looking at things like the new Bay Trail mITX boards, especially as cheap as they are, and thinking: if there were just a single-lane, dual-port PCIe 2.0 NIC out there, put one on the board, and IF the board supported RAID (which I don't think any of them do), then with a couple of drives in RAID0 plus a small mSATA SSD as the boot drive I'd have an AWESOME micro file server for cheap.

Alas, my whims are not to be catered to.
 
You can run a single-port X540 NIC on PCIe 2.0 x4. I tested that and NTTTCP reported 1180 MB/s. With a dual-port card, though, you're out of bandwidth.

You are absolutely correct about PCIe 3.0. The only cards in my test systems that use it are the Nvidia video cards. A fully PCIe 3.0 capable system, including the cards, would solve the 16-lane issue. SATA 3.2 also solves the onboard RAID limitations I've seen in testing...so ZFS and similar RAID schemes would see much better performance, at least on reads.

I suspect you'll have the lower-cost solution in 12-18 months :) For now, it's pretty hard to beat a dual/quad-port Intel NIC and some extra LAN cables in terms of bang for your buck.
 
You are definitely right.

I guess I just want today what I'll probably be able to buy in 3-5 years (I am less optimistic than you...though my budget is also fairly constrained).

I guess my glasses are rose-tinted about the past, but I just feel like over the last couple of years, at least in the networking world, companies have been rather slow on the uptake of new technologies. It at least seemed to me that when PCIe 2.0 rolled around, a lot of companies jumped on board REAL quick (i.e., all of the add-on board makers)...except network manufacturers. PCIe 3.0 rolled around and EVERYONE seems to be slow on the uptake this time. AMD jumped on the bandwagon right away...and then NVIDIA took what, in the video card world, seemed to be forever.

And other than a couple of RAID cards out there, nothing seems to be PCIe 3.0 other than video cards (maybe some PCIe SSD cards? I haven't looked at those too much lately).

Till then, I'll be thankful to MS for rolling out SMB 3.0 and SMB Multichannel (one of the few things I am legitimately thankful to Microsoft for having done in the last few years) and just rely on ganging GbE ports. It costs me almost nothing, since my brother donated some older Intel quad-port cards to me...at least if I really need to scale up beyond the 3 ports I have available in my machines now (I'd yank the two single-port NICs in my machines, drop in the quad-port cards, and keep the onboard NICs disabled). Still a lot of wires to run, though I have the ports now.

I got a used Trendnet TEG-160WS for super cheap that is sitting on my doorstep waiting for me, so port limits shouldn't be an issue for me now (I have 15 of 16 ports used on my TP-Link SG2216 right now, and in a renovation I am doing this spring I am adding at least 2 if not 3 more LAN drops). So if I really wanted to, I could run the wires and stick in the quad-port cards.

Sadly, my stinking old RAID array is having problems pushing my 2x GbE now. Sigh. Time to invest in new drives. It's always something.
 
