How to build a spectacularly fast network and server for small engineering dept.


infotime

Occasional Visitor
Trying to figure out the best way to configure a small network for blistering speed.

A new client of mine is a tiny business with three engineers running AutoDesk Inventor in the office. The shop has a couple of CNC machines and they do various specialty metalworking and machinery building. Right now the three engineers are using sneaker-net. They have Internet and use WiFi. There are no hard wires installed at this point, so we're starting from scratch.

The project has two goals:
1) Share data files
2) Data backup

The undertone of all this is PERFORMANCE. They'd like the network to be fast enough that the users can't tell a difference between loading a project file off their local hard drive and loading it off the server. And for various reasons they don't want to be copying files to local storage and then copying them back. They also don't want to use a project management system like AutoDesk Vault at this point. Just straight-up sharing files from the server.

Typical project folders can be up to 1GB in size. That's not one single file; usually it's multiple smaller files in nested folders. One project was maybe 5GB. Most day-to-day stuff is probably less than 1GB in practice, maybe 800MB. So far they have about 200GB of saved files.

The two areas are the physical network and the server.

Physical Network

I started out thinking maybe I could install some ultra-high-speed network for only three systems, maybe using fiber. But I quickly found that getting past 1Gb Ethernet increases the cost by about 10x for this project. Example: an 8-port Gb switch is about $60 while an 8-port 10GbE switch is $850. Three ports at 10GbE would be enough; I didn't look long enough to see if there are smaller, less expensive switches.

The other potential problem with 10GbE would be the client PCs. They're all laptops, but workstation-class laptops. I didn't look too hard at their specs, but I guess the only shot at a fast connection there would be if they had PCI Express slots or some other high-speed slot for an add-on card, if there's even such a beast.

I would love to find a way to build a network at that speed. If anyone can point me to where I could look for a solution that's less than $1,000 or so, I'd have fun checking it out.

So, will probably just go with a simple 8 port Gb switch and CAT6 wiring. Boring. :D

Server

Having trouble conceptualizing how fast the disks can or should be in relation to the network speeds. I'm thinking all the shared project files would be kept on a fast SSD on the server. There's only three users right now, and I don't expect that to grow much, nor do I expect them all to hit the server at once. Wondering whether to go with a SATA-interface SSD or a PCIe-type SSD, and whether it would matter.

I think I've got the data backup piece figured out: the PCs will do image backups to the server. The server will have rotating disks with the entire server imaged regularly. Maybe an off-site service like Carbonite, or roll-your-own.

But my bigger question is how to optimize the server hardware and network hardware for spectacular speed.

I'm not asking for a lot of hand holding, just a few ideas to help me clarify my thinking. Thanks!
 
Hi,
Funny, my SIL is running a specialty CNC/moulding shop, the best one in Western Canada, with some U.S. contract work too. I've never heard him complain about his office/shop network speed, which I put in a couple of years ago. CNC machines don't need lightning-fast system support.
 
You're right. They connect their laptop to the CNC machine with a USB to Serial adapter. Can't get much slower than that :D

But my comment about them having CNC machines might have thrown off the discussion. Connecting to the shop equipment isn't what this is about. It's about file sharing for large files among the three engineers. I mentioned the other equipment to give an idea about what their business was about.

This is just about the office network. Speed is king.
 
Hi,
Of course, when big number crunching is involved it is never enough.
 
OK, it looks like I've been over-thinking this. Seems that even the most mundane SATA interfaces today outpace 1 Gb Ethernet: SATA 2 is 300MB/s vs 1Gb Ethernet at 125MB/s.

And those are theoretical limits. I think I saw somewhere that 1 Gb Ethernet sustains 60MB/s. I need to look into Jumbo Frames and maybe Link Aggregation or Trunking.

http://en.wikipedia.org/wiki/List_of_device_bit_rates
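Here's the unit arithmetic I'm using as a quick sanity check; just a sketch of the theoretical maxima, before any protocol overhead:

```python
# Rough line-rate arithmetic for comparing interfaces.
# Figures are theoretical maxima, before protocol overhead.

def gbit_to_mbyte(gbit_per_s):
    """Convert a link rate in gigabits/s to megabytes/s (8 bits per byte)."""
    return gbit_per_s * 1000 / 8

links = {
    "1 Gb Ethernet":      gbit_to_mbyte(1.0),  # 125 MB/s theoretical
    "SATA 2":             300.0,               # 3 Gb/s line rate, 300 MB/s after 8b/10b
    "SATA 3":             600.0,
    "ExpressCard (PCIe)": gbit_to_mbyte(2.5),  # 312.5 MB/s raw, less after encoding
}

for name, mb_s in links.items():
    print(f"{name:20s} ~{mb_s:6.1f} MB/s theoretical")

# Real-world gigabit Ethernet with SMB lands well below 125 MB/s;
# sustained figures from ~60 MB/s up past 100 MB/s are commonly reported,
# depending on file sizes and tuning (jumbo frames, etc.).
```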
 
Not difficult to beat local hard drive speed with a dual-gigabit laptop adapter

1) Share data files 2) Data backup

They'd like the network to be fast enough so that the users can't tell a difference whether they're loading a project file off of their local hard drive or off of the server...

the client PCs. They're all laptops...

I think I've got the data backup piece of it figured out: the PCs will image backup to server.

First of all, make sure all clients are running Windows 8.1. There is no downside to this. Automatic SMB3 NIC teaming in Win 8.1 is a game-changer for LAN throughput.

Secondly, commit to Cat6e or similar decent copper cables to all high-speed-access machines (Cat6e is possibly capable of future 10-gigabit Ethernet).

Thirdly, use this $112 PCI ExpressCard in each laptop:
http://www.newegg.com/Product/Product.aspx?Item=N82E16839158043

Use any old dual- or quad-port gigabit ethernet card in any future desktop or server machine (should you decide to build one instead of using a laptop or dual-port NAS, see below).

That gives you a potential 300 megabytes per second transfer rate over 3 gigabit Ethernet cables to a central 8- or 16-port switch (400 MB/s if a desktop server is quad-cabled to the switch), as long as all computers on the network are running Windows 8.1. The key point here is that Win 8.1 effortlessly and automatically supports network adapter "teaming". Don't take my word for it; see http://www.smallnetbuilder.com/lanwan/lanwan-features/32373-confessions-of-a-10-gbe-network-newbie-part-4-smb3

A secondary point: if your goal is to have the network be "as fast as local hard drives", you only need TWO gigabit network cables to each laptop to get over 200 megabytes per second transfer rate. Try to think of a local physical, spinning hard drive that can transfer at rates higher than 150 megabytes per second. And I doubt that any individual file your engineers work with is larger than a couple of hundred megabytes (i.e., a one-second delay in load time over a 200 MB/s network).
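To put rough numbers on that, here is the back-of-envelope arithmetic, assuming ~100 MB/s usable per gigabit cable and near-linear teaming (which SMB3 multichannel approaches but doesn't guarantee):

```python
# Load-time estimates for teamed gigabit links.
# Assumption: ~100 MB/s usable per GbE cable, near-linear scaling.

USABLE_MB_S_PER_CABLE = 100

def load_time_seconds(file_mb, cables):
    """Seconds to pull one file of file_mb megabytes over N teamed cables."""
    return file_mb / (USABLE_MB_S_PER_CABLE * cables)

for cables in (1, 2, 3):
    t = load_time_seconds(200, cables)  # a 200 MB project file
    print(f"{cables} cable(s): 200 MB file loads in ~{t:.1f} s "
          f"(~{USABLE_MB_S_PER_CABLE * cables} MB/s)")

# 1 cable: ~2.0 s; 2 cables: ~1.0 s; 3 cables: ~0.7 s.
# Two cables already match or beat a ~150 MB/s local spinning drive.
```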

My vote would be for you to have all three laptops connect to an 8-port switch (incredibly inexpensive), using the two identical gigabit Ethernet ports on the above-referenced dual-port laptop adapters. I guess you could try to use all three Ethernet ports at once on a couple of the laptops, but cable-combining will probably go best if the ports you are "teaming" are hardware-identical.

For the server, my first inclination would be to nominate one of the three laptops as "the server". Make sure the "server"/lucky user has a modern SSD of appropriate capacity, such as a $360 1-terabyte Mushkin Enhanced Reactor MKNSSDRE1TB. Nominate the laptop that gets used the least as the server, naturally.

Then see if all the sharing works OK with no separate server at all. For goodness' sake, the one laptop will only be serving at most two remote users, and I believe the Windows 8.1 concurrent-connection limit is 20 anyway.

Anyway, if you get the slightest idea that there is some kind of slowdown because the two other laptops are competing with the server laptop for file accesses (unlikely, because in this scenario you'll have a 500 MB/s SSD in the server laptop that isn't even using the network!), pull the terabyte SSD out of the server laptop and stick it in a teaming-capable NAS with dual Ethernet ports, such as a Synology DS214+ for all of $365. You'll get 200 MB/s out of a DS214+. I wonder if the not-quite-shipping-yet QNAP TS-231+ will be any cheaper for roughly comparable performance (noting that the QNAP has no eSATA port).

There is no need for RAID anymore to get high performance in the SSD era. So if you get a dual-bay DS214+ or the like, just use the second bay for some kind of 6-terabyte multi-image backup target, or an archive drive, or something like that.

As for backup, USB 3 or eSATA to another SSD in a tiny enclosure, plugged now and then into any of the laptops (or even into the NAS if you end up that way), will work great. Make sure the portable enclosure has UASP (auto-supported by Win 8.1, by the way) and eSATA, such as a StarTech.com eSATAp / eSATA / USB 3.0 enclosure for $35.

You can communicate with me privately if you want more details. However, there are not a lot of hidden gotchas in this simple set of suggestions.
 
I'll chip in some advice too. If it is a small network you can just get a 48-port gigabit switch that is managed, for the purpose of combining ports. Older Intel server NICs with at least 4 gigabit Ethernet ports are cheap; you can get some of those and combine the ports. With 2 adapters in a single machine you can get 8 Gb/s much cheaper than 10 Gb/s. Some server boards have 10Gb/s Ethernet. For laptops you can use USB 3 to Ethernet. Only Intel NICs support teaming on Windows 7 and above.

The issue with external ExpressCard adapters is that they're USB-only, so they don't support PCIe-interface ExpressCards. You could, however, go PCIe to mini HDMI and from mini HDMI to mini PCIe if you want to use the Intel server NICs on laptops. For more information on this, check out the eGPU communities.

You can use a NAS, or build your own using a decent x86 CPU, RAM, a small enclosure, and an Intel server quad-port NIC.
 
Thanks both of you for those posts. I know at least one of the three laptops is running Windows 8.1. One is on Windows 7. Not sure about the third. I'm pretty sure they won't want to make any laptop the server since any of the three could leave the building at any time. Definitely would want a dedicated box.

I'm trying to understand the difference between RussellInCincinnati's and System Error Message's approaches. The first approach seems to use off-the-shelf network hardware and relies on Windows 8.1 / Server 2012 or another SMB3 implementation to handle the combining in software. The second approach, suggested by System Error Message, uses a managed switch to combine ports. That sounds a little fuzzy to me. How do the client PCs or servers know how to combine the traffic?

Back to Russell's approach. It seems easy to build. Even start out with a basic 1Gb Ethernet network, just off the shelf: put in a simple 16-port Netgear Gb unmanaged switch, run CAT6e (or CAT5e?) to each of the three workstation laptops and to the server, and start with a server or NAS that supports teaming of its NICs. Then make it all go with just one cable to each single NIC in each of the 4 machines (3 laptops, 1 server). That would give them roughly 100 MB/s throughput. They might be perfectly happy with that speed.

If they're not, then just add more NICs! Use two NIC ports on the server and get the dual-NIC cards for the laptops, using just that card's identical NICs. Whammo! You've doubled your throughput. Or triple it by combining the new card's two NICs with the laptop's built-in NIC. Plus you'd have to activate a 3rd or 4th NIC on the server.

Would it be helpful to have one more NIC on the server than whatever the laptops are running? I.e., in the base config, with each laptop only using its built-in NIC, should you have two NICs teamed on the server? Or when you have 3 NICs going in the laptops, 4 NICs on the server?
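Here's the rough model I'm using to think about that question; pure arithmetic, assuming ~100 MB/s per gigabit link and a server disk fast enough to feed the network:

```python
# When does the server's NIC team become the bottleneck?
# Model: each client caps at client_nics * 100 MB/s, and the server's
# total of server_nics * 100 MB/s is split among active clients.
# Assumes the server's SSD can feed the network (a ~500 MB/s SATA SSD
# covers four teamed gigabit ports).

MB_S_PER_NIC = 100

def per_client_rate(active_clients, client_nics, server_nics):
    client_cap = client_nics * MB_S_PER_NIC
    server_share = server_nics * MB_S_PER_NIC / active_clients
    return min(client_cap, server_share)

for clients in (1, 2, 3):
    for srv in (2, 3, 4):
        rate = per_client_rate(clients, client_nics=2, server_nics=srv)
        print(f"{clients} active client(s), {srv} server NICs: ~{rate:.0f} MB/s each")

# With dual-NIC laptops, a 3rd or 4th server NIC only pays off when
# two or three users hit the server at the same moment.
```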

Hell, this system could end up being FASTER than their local disks! None of them are using SSD now. But at least one of them has a hybrid drive.
 
Even start out with a basic 1Gb Ethernet network just off the shelf...Start with a server or NAS that supports teaming of its NICs. Then make it all go with just one cable to each single NIC in each of the 4 machines (3 laptops, 1 server). That would give them roughly 100 MB/s throughput. They might be perfectly happy with that speed...If they're not then just add more NICs!

Would it be helpful to have one more NIC on the server than whatever your laptops are running? i.e. in the base config with each laptop only using its built-in NIC should you have two NICs teamed on the server? Or when you have 3 NICs going in laptops have 4 NICs on the server?

...system could end up being FASTER than their local disks! None of them are using SSD now. But at least one of them has a hybrid drive.

Three general principles that we both appear to be embracing are "better is the enemy of good enough"; as my combinatorics professor used to say, "there is a difference between what is true and what is important"; and "optimize what you have before deciding you must upgrade at any cost".

Of course it would be "better" in many ways to have 10 gigabit everywhere. While that is true, it might not be important.

It also makes sense, as you describe, to first make a perfectly tuned server and gigabit network and check on user satisfaction, before paying for a 10 gigabit backbone. Especially in the case where your clients already have gigabit hardware.

A hint as to why a gigabit network is not necessarily "the" important bottleneck in a given server/network is the slowness of complex directory copies in SmallNetBuilder NAS benchmarks. How often do we see great large-file transfer rates but 17 MB/s directory-copy benchmarks? Just imagine how fast a network would feel if all file requests were served up at a solid 100 megabytes per second. You could open 100 one-megabyte Microsoft Word documents in one second, say. You could copy a CD's worth of stuff in 7 seconds, etc.
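Spelling out the arithmetic behind those examples (sizes are illustrative; rates are the ones quoted above):

```python
# Transfer-time arithmetic for a network serving a solid 100 MB/s,
# contrasted with a 17 MB/s directory-copy rate.

FAST_MB_S = 100   # "truly full" gigabit throughput
SLOW_MB_S = 17    # typical directory-copy benchmark figure cited above

jobs = {
    "100 x 1 MB Word documents": 100,
    "a CD's worth (~700 MB)":    700,
    "a 1 GB project folder":     1000,
}

for label, size_mb in jobs.items():
    print(f"{label}: ~{size_mb / FAST_MB_S:.0f} s at {FAST_MB_S} MB/s, "
          f"~{size_mb / SLOW_MB_S:.0f} s at {SLOW_MB_S} MB/s")

# Same "gigabit" hardware, ~6x difference in feel: the bottleneck for
# many small files is per-file overhead, not the wire.
```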

Thus it is not a waste of time to see what can be done by making truly full use of a gigabit backbone before spending ten times as much to put in 10GbE.

The problem you have posed has helped me see another point clearly. How could more-than-1-gigabit be essential to making users happy, if they are already satisfied with (at most) 100 megabyte per second spinning hard drives?
 
If you want to go even faster, try 40Gb InfiniBand. Used switches run $800-$1,000, and Mellanox ConnectX-2 cards are $55-$100 each. The nice thing about these NICs is that they can do 10GbE or 40Gb InfiniBand. They work very nicely on Server 2012 and 2012 R2, using IPoIB with RDMA. On Windows 8, IPoIB will still be faster than 10GbE, but you can't use the RDMA portion unless it's Server 2012 to Server 2012. What all that means is that you can pretty much saturate the PCI Express lanes on any computer, as long as the storage system is up to it. I am currently hitting 12-14Gb per second and about 1.2 million IOPS on an all-SSD array.

Voltaire 40Gb switch:

http://pages.ebay.com/link/?nav=item.view&id=191486128471&alt=web

If you want a much cheaper route, try one of these switches: 48 gigabit ports and, depending on the model, 4x CX4 10Gb ports or 2x SFP+ 10GbE ports.

Quanta LB4M 48-Port Gigabit Ethernet Switch Dual 10 GB w/ DUAL Power Supply

URL: http://pages.ebay.com/link/?nav=item.view&id=111567208097&alt=web
 
The only difference between a dedicated NAS box and an x86 NAS server is the ease of setting it up and expansion. You can't add more NICs to a dedicated NAS box, while an x86 NAS server requires more effort but lets you add more RAM, CPU, and cards.

A standard hard drive runs at 150MB/s, which is already more than 1 gigabit. By putting drives in RAID and serving multiple clients at the same time, multiple links on the server prevent a network bottleneck.

So if you want to have 10Gb/s in the future, it might be better to build a NAS system so you can add the cards later on.
 
Looks like I've found a hard limit, the expansion card in the Dell Precision M4800 laptop workstations:

From http://en.wikipedia.org/wiki/ExpressCard
The ExpressCard has a maximum throughput of 2.5 Gbit/s through PCI Express

At this point it sounds like I can focus my design on Russell's ideas, with teaming two NICs for a 2Gb network. If I were to try for 3 NICs, then only half of that third NIC's added bandwidth would be available. And anything beyond that would be wasted, so 10GbE or anything in those higher realms wouldn't help.
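The arithmetic I'm using, treating the 2.5 Gbit/s ExpressCard figure as a hard ceiling:

```python
# How much of N teamed gigabit ports survives the ExpressCard bottleneck?
# ExpressCard 1.x carries one PCIe lane at 2.5 Gbit/s (raw; usable
# payload is lower still after 8b/10b encoding).

EXPRESSCARD_GBIT = 2.5

for nics in (1, 2, 3, 4):
    usable = min(nics * 1.0, EXPRESSCARD_GBIT)
    stranded = nics - usable
    print(f"{nics} x 1 GbE behind the slot: ~{usable} Gb/s usable, "
          f"~{stranded} Gb/s stranded")

# 3 ports -> only 2.5 Gb/s usable (half of the 3rd port stranded);
# anything faster, 10GbE included, is wasted behind this slot.
```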

Would there be any point in having 3 or 4 NICs teamed on the server? The only time I think that might help is when two or three users are loading a big file at the same moment. Is that correct?
 
Looks like I've found a hard limit, the expansion card in the Dell Precision M4800 laptop workstations... Would there be any point in having 3 or 4 NICs teamed on the server? The only time I think that might help is when two or three users are loading a big file at the same moment. Is that correct?

What exact type of server are you describing? I'm still thinking it makes sense to take the Win 8.1 laptop user with the most file I/O, declare that machine the "server", and leave it on all the time. After all, it is an inherently "green" laptop, probably capable of being easily configured to wake on LAN activity. See if that's good enough for everyone before spending money on a dedicated server. By the way, it sounds like your apps are going to be simple document sharing, not running SQL Server or anything like that.
 
My experience with NAS and gigE LAN and PCs...

to get close to the speed of gigE for a single file transfer, I need a fast PC and BIG files (hundreds or thousands of MB). Transferring small files has a lot of file system and SMB overhead and rarely gets anywhere close to the LAN's capacity.

I have not tried the case for lots of PCs doing big or small transfers. I did once try a single PC with 8 or so simultaneous transfers on different TCP connections. This does fill the LAN's capacity a lot more than a single flow.
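If anyone wants to reproduce that test, here is a rough sketch; the share path and test files are placeholders you'd point at your own mapped share and an existing local scratch folder:

```python
# Crude SMB throughput probe: time one large copy, then several in parallel.
# Z:\bench and C:\scratch are placeholders for a mapped share holding
# large test files and an existing local scratch directory.

import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC_FILES = [Path(r"Z:\bench") / f"big{i}.bin" for i in range(8)]  # ~500 MB each
DST = Path(r"C:\scratch")

def timed_copy(files, workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for f in files:
            pool.submit(shutil.copy, f, DST / f.name)
    # the with-block waits for all copies to finish before this line runs
    elapsed = time.perf_counter() - start
    total_mb = sum(f.stat().st_size for f in files) / 1e6
    print(f"{workers} stream(s): {total_mb:.0f} MB in {elapsed:.1f} s "
          f"= {total_mb / elapsed:.0f} MB/s aggregate")

timed_copy(SRC_FILES[:1], workers=1)  # single flow
timed_copy(SRC_FILES, workers=8)      # eight concurrent flows
```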

But I'm not convinced that there's LAN hardware/NIC magic that can offset the overhead in the file systems and SMB.

Whatever happened to fiber to the desktop? Did it die? Did it fail to get to multi-Gb/s due to bus speeds inside PCs?
 
What exact type of server are you describing? Am still thinking it makes sense to take the Win 8.1 laptop user with the most file I/O, and declare that machine the "server"...
Can't make a laptop the "server" because any one of the three could be out of the office at any moment.

The type of server I'm thinking about would be a small Windows Server 2012 R2 Essentials box, running on an HP MicroServer or Lenovo TS-140 style machine. Neither is more than $500 plus the OS license. For now it will just be serving files, true; not running any databases or anything, but there may be applications they want to run on the server in the future. A box like that could give me a lot of flexibility with LAN connections and other upgrades and changes. For me the biggest reason to go with Windows Server 2012 R2 Essentials is that I like the backup piece. It effortlessly backs up all the client PCs automatically every night, and can restore single files or an entire image from any of those nightlies. On the server I put USB backup disks, back up the entire server, and rotate those disks off site. I have several customers set up like this; it's a system I know well and find works well.

A NAS like the Synology you mentioned sounds like a nice box, and I'm giving that serious consideration too.

My experience with NAS and gigE LAN and PCs...

to get close to the speed of gigE for a single file transfer, I need a fast PC and BIG files (100's or 1,000's of MB). Transferring small files has a lot of file system and SMB overhead and rarely gets anywhere close to the LAN's capacity.
Are you saying that you don't think we'd see any difference between 1 Gb Ethernet vs 2 Gb or 3Gb or even 10 Gb because of this?

And a follow-up to my earlier thought that the max interface speed of the laptops would be capped at 2.5 Gb/s because of the ExpressCard PCIe interface... I forgot that the built-in NIC on the laptop would most likely be running on a separate bus, so that's not totally off the table.
 
Win 2012 R2 Server should support adapter teaming

Can't make a laptop the "server"...thinking about would be a small Windows Server 2012 R2 Essentials ...I like the backup piece. Are you saying that you don't think we'd see any difference between 1 Gb Ethernet vs 2 Gb or 3Gb or even 10 Gb because of this?
Win Server 2012 R2 or a dual-gigabit-port Synology NAS are good choices if you want the potential for low-hassle adapter teaming with Windows 8.1 clients, as well as their backup features.

Any i3-or-better Intel box with 8 gigabytes of RAM should make a plenty fast file server.

Just have a 500 MB/s-class SSD storage device in whatever server type you choose (such as we have already discussed), and see if the performance is fine with a single cable to a laptop. My guess is you will be satisfied, and you'll also know that running a single Cat6e cable to your workstation areas will be adequate.

However, if the Win server or NAS can deliver 100 MB/s over a single cable, but after testing you think you could indeed use more cable bandwidth, then go ahead and run 2 Cat6e cables to each workstation site. Then, at your leisure, you can buy a dual-port adapter for any laptop whose peak network transfer speed you want to boost.

If you are using Windows 2012 R2 as your server and it only comes with one Ethernet port, the safest way to make sure teaming is going to work is to add a dual-port Intel gigabit adapter; they are around $140. That way you will be teaming 2 identical ports using Intel drivers that are known to work with Windows teaming. Of course you could add a second Intel dual-port card to your server down the road (or swap the 2-port card out for a $260 4-port one) if you decide some day to connect your server to the network switch via 4 cables(!).
 
I should add that laptops can use NIC teaming, but it is complicated; as I said, see the eGPU examples. This involves a PCIe adapter and either 2 mini PCIe ports or an ExpressCard slot plus a mini PCIe port in the laptop. The case has to be opened to use this, but if your laptop is well made it would only mean keeping a small panel on the underside removed. You can then support 10GbE if you use 2 PCIe v3 lanes, or a 4-port gigabit NIC using 2 PCIe v2 lanes.

I managed to get low-profile quad-port Intel server NICs for £25 each; you just have to look around based on what you need, if you don't need the latest of Intel's networking line. File sharing doesn't require low latency, so going with older Intel server NICs will work fine, not to mention that Windows can do teaming with them because it's built into the Intel drivers. The Intel server NIC line also offloads a lot of work from the CPU, even the older ones.

Very important: if you wish to use PCIe cards via an adapter and ExpressCard, the laptop must have a native ExpressCard slot. It will not work with external USB ExpressCard readers, because those only support USB-based ExpressCards, not PCIe-based ones, which is what PCIe cards require. They cannot work in USB mode.

While not mentioned much, you can use TILE PCIe-based cards for 10Gb/s networks, but they require a lot of skill to set up, because the PCIe card has its own CPU and RAM and runs its own Linux at the same time. If a 9-core TILE PCIe card has 2 SFP+ or 10GbE ports and fits your budget, it will work well, because the teaming can be done entirely by the card itself and the OS sees just one connection. It's what Facebook uses, though Facebook stuffs the 100-core PCIe card variants into its servers.
 
Can't make a laptop the "server" because any one of the three could be out of the office at any moment.

The type of server I'm thinking about would be a small Windows Server 2012 R2 Essentials running on an HP MicroServer or Lenovo TS-140 style box. Neither is more than $500 plus the OS license... A NAS like the Synology you mentioned sounds like a nice box and am giving that serious consideration too.
Remember, a Synology DS214+ is $365. It has 1000 MB of memory, and Synology wants 4 MB of RAM for each 1 GB of "SSD caching" of a hard drive. I.e., you could make a pretty clever system out of a $50 120 GB SSD,
http://www.newegg.com/Product/Product.aspx?Item=N82E16820178761
which Synology will use as the "cache" for a $70 2-terabyte hard drive,
http://www.newegg.com/Product/Product.aspx?Item=9SIA5AD1S80659
all fitting in the one box with USB 3 and external eSATA port(s) still free. Total cost for a 2-terabyte low-power server with SSD caching: $485.
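Spelling out the cache-RAM math with the 4 MB-per-GB figure above:

```python
# RAM budget for Synology SSD caching, per the 4 MB RAM : 1 GB cache
# rule of thumb quoted above. The DS214+ has roughly 1000 MB of RAM.

RAM_MB_TOTAL = 1000
RAM_MB_PER_GB_CACHE = 4

cache_gb = 120  # the $50 120 GB SSD used as the cache
ram_needed = cache_gb * RAM_MB_PER_GB_CACHE
print(f"{cache_gb} GB cache needs ~{ram_needed} MB RAM "
      f"({ram_needed / RAM_MB_TOTAL:.0%} of the box's {RAM_MB_TOTAL} MB)")

# -> 480 MB, roughly half the DS214+'s memory: tight but workable,
# which is why a 120 GB cache (rather than a much larger one) suits this box.
```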

Or, for $250 more, go with that $360 Mushkin Enhanced Reactor 1-terabyte SSD and skip the spinning hard drive and cache drive; total cost for an all-SSD 1-terabyte Synology server is $730 or so.

All the nice-software NASes have tons of backup options.

I can imagine a faster-peak-transfer system that you build yourself out of Win 2012 R2, etc. But consider also: the value of your time in putting together such a Win server; whether your laptop users will even notice the difference, since they're going to be on at most dual-gigabit cables; that you can't beat the low heat and power of the little NASes; and that you should never buy too speculatively far ahead in computers, because when the future comes it will be cheaper and too many other variables will have shifted.

It's fun and instructive making arguments for so many different paths; there is no "right" solution, of course, that is better in every way than all the others. In your case, the fact that the users are on laptops now actually trims down your decision tree, insofar as it makes 10GbE look like overkill for the ExpressCard laptop interfaces.
 
It should be noted that some laptops have ExpressCard slots connected to 2 PCIe lanes, and perhaps some may even have PCIe 3. Thunderbolt has 4 lanes.

The problem is that laptops are coming with fewer expansion slots: no Thunderbolt, no ExpressCard, fewer mini PCIe.
 
