ESXi: one NIC or multiple NICs?


Ois

Occasional Visitor
I am planning to purchase a small low power home server system.
I would like to run my home automation software (FHEM) and some other applications - such as a download client and maybe a few other things.
I'd like to use ESXi to keep everything managed (and I'd like to play around with a few VMs).

I was wondering what the advantage of more than one 1Gb NIC would be.
Obviously link aggregation is an advantage with 2 or more NICs; however, I do not need LAG. Are there advantages (other than bandwidth) to assigning VMs a physical NIC rather than using NAT and sharing?

Thank you!

Oisín
 
We always assigned a physical NIC to every VM in the old days: each VM had its own MAC and ended up with its own unique IP address, which made it easy to identify VM traffic. I would do it today too, but I don't run many VMs any more. Old habits are hard to break.

LAG connections are really not needed for home systems. They add complexity that isn't needed on smaller networking gear. If I had a true gig internet connection - not 300 meg, but true gig - I would consider it.
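As a rough illustration of why LAG helps aggregate bandwidth but not a single transfer: typical LACP setups hash each flow (by MAC/IP pair, depending on the switch) onto one member link, so any individual copy still tops out at 1Gb. A minimal toy sketch of that idea in Python - the two-link bundle, MAC addresses and hash inputs are made up for illustration, not any vendor's actual algorithm:

```python
# Toy model of LACP-style flow hashing: each flow is pinned to one member
# link, so a single flow never exceeds one link's bandwidth.
# The MACs and the two-link bundle below are illustrative only.
import hashlib

LINKS = ["nic0", "nic1"]  # two 1Gb members in the LAG

def pick_link(src_mac: str, dst_mac: str) -> str:
    """Pick the member link for a flow by hashing its endpoints."""
    digest = hashlib.md5(f"{src_mac}-{dst_mac}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

flows = [
    ("00:0c:29:aa:bb:01", "00:11:22:33:44:55"),  # VM1 -> NAS
    ("00:0c:29:aa:bb:02", "00:11:22:33:44:55"),  # VM2 -> NAS
    ("00:0c:29:aa:bb:01", "00:11:22:33:44:66"),  # VM1 -> desktop
]

for src, dst in flows:
    print(f"{src} -> {dst} uses {pick_link(src, dst)}")

# Every packet of a given flow takes the same link, so one big copy
# still runs at 1Gb even with two links in the bundle.
```

Several VMs talking to different hosts do spread across the member links, which is why LAG tends to pay off on busy servers rather than a small home box.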
 
Okay, thank you.
So what I am hearing is that a single NIC is fine for a home hypervisor - no real advantages with current software.

I am thinking of buying an Intel NUC instead of building a traditional system. From what I have read, the power/performance ratio offered by a NUC is not easily matched with any other system.
The obvious disadvantage, however, is that a NUC cannot easily be upgraded with a second NIC.
 
I would not worry about the Intel NUC's NIC as it looks like the Intel NUC does not have enough memory to run serious VMs. Say that fast three times... Intel NUC's NIC...Intel NUC's NIC...Intel NUC's NIC.

Sure would be nice if there were an easy way to add a second NIC to run a software router/firewall. This might make a great router/firewall box.

If these boxes had been out when Microsoft Home Server was big, I think they would have been a big hit. I guess the hardware always lags the software.
 
An i5 NUC with 16GB RAM would have the horsepower to do a lot... more than sufficient for my needs.

You mean something like pfSense? Is there any chance that a USB 3.0 NIC would work?
 
I think a pfSense box with 2 NICs would find a great market. The low power and small form factor would be great. I would buy one.
 
We always assigned a physical NIC to every VM in the old days: each VM had its own MAC and ended up with its own unique IP address, which made it easy to identify VM traffic. I would do it today too, but I don't run many VMs any more. Old habits are hard to break.

Used to do that until we put many machines into an ESX cloud - at the end of the day it was easier to use the virtual switch. Had to spend some time with a VMware FAE to make a believer out of me... but going into an 8-blade/16-socket IBM Blade Center with more than 20 VMs running... when we ramped it up to over 100, we had no choice - it was a slow rollover going from physical connections to the virtual switch...

And performance actually improved - since much of it was backplane traffic, it ran better than physical connections would have...
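Worth noting for the earlier point about identifying traffic: on a virtual switch each VM still has its own vNIC MAC and (normally) its own IP, so per-VM identification doesn't go away. A rough pyVmomi sketch of pulling the VM-to-MAC/port-group/IP mapping from a host - the hostname and credentials are placeholders, guest IPs assume VMware Tools is running, and the exact connect arguments vary by pyVmomi version:

```python
# Sketch: list each VM's vNIC MAC, the port group it is attached to, and the
# guest IPs reported by VMware Tools. Host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # IPs reported by the guest (empty if VMware Tools is not running)
        ips = [ip for nic in (vm.guest.net or []) for ip in (nic.ipAddress or [])]
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                portgroup = getattr(dev.backing, "deviceName", "unknown")
                print(f"{vm.name}: {dev.macAddress} on '{portgroup}' -> {ips}")
finally:
    Disconnect(si)
```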
 
I would not worry about the Intel NUC's NIC as it looks like the Intel NUC does not have enough memory to run serious VMs. Say that fast three times... Intel NUC's NIC...Intel NUC's NIC...Intel NUC's NIC.

There are a couple of NUCs that are fairly powerful...

NUC5I7RYH comes to mind...
 
I could see where, if you had a lot of VMs talking to each other in the same physical box, the virtual switch could be faster. A hundred NICs would be an awful lot for one box.

I think on a small scale like a NUC I would rather have an IP address for each VM server.

With a second NIC it would make a great router, one that would beat all the current consumer models.
 
Everything we do in the office and at client sites with hypervisors uses a virtual switch.
Usually with 10GbE or 40GbE, depending on the site/budget.
A few sites also use 1GbE to the client network with 8Gb Fibre Channel to the SAN, but we are moving away from that model in favor of iSCSI.
 
I like iSCSI. You separate your processors and your data, and you have a private network or path between the processors and the data, so you can end up with redundant links. I should add that it runs on different topologies. This allows you to scale, both up and down, more than a Fibre Channel SAN. On the small side you can use a simple backend switch to run a small iSCSI network.

At work in the old days, when a Fibre Channel SAN was upgraded you pulled out the old gear and replaced it with the new; the old Fibre Channel equipment was then worthless. iSCSI runs on standard networking equipment, which can be recycled.
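A purely conceptual toy of the redundant-link idea (not a real initiator - the path names and addresses are made up): I/O round-robins across two paths on the private storage network, and if one path dies the survivor carries everything.

```python
# Conceptual toy of iSCSI multipathing: round-robin I/O over two paths on a
# private storage network, falling back to whichever path is still alive.
# Path names and addresses are invented for illustration.
from itertools import cycle

class Path:
    def __init__(self, name, address):
        self.name, self.address, self.alive = name, address, True

paths = [Path("vmnic1", "10.10.10.11:3260"), Path("vmnic2", "10.10.20.11:3260")]
rr = cycle(paths)

def send_io(block: int) -> str:
    """Send one I/O down the next healthy path (round-robin with failover)."""
    for _ in range(len(paths)):
        path = next(rr)
        if path.alive:
            return f"block {block} -> target via {path.name} ({path.address})"
    raise RuntimeError("all paths down")

for blk in range(3):
    print(send_io(blk))

paths[0].alive = False          # simulate losing one link
print(send_io(3))               # traffic continues on the surviving path
```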
 
Don't go for the cheapest NUC box. In fact, don't go for a NUC/Zotac machine at all. And no USB NICs.

Get a low-power, server-grade mainboard with 4 Gigabit NICs + 1 IPMI port and you'll be more than happy with it. Supermicro has great offerings in this regard. I don't think it will break your wallet to do so.
The reason is that ESXi may or may not be fully compatible with those desktop-class machines, and you could end up installing just one OS on it.
 
I could see where, if you had a lot of VMs talking to each other in the same physical box, the virtual switch could be faster. A hundred NICs would be an awful lot for one box.

I think on a small scale like a NUC I would rather have an IP address for each VM server.

With a second NIC it would make a great router, one that would beat all the current consumer models.

IBM Blade Centers - there is a switch on the backplane, so we don't have to leave the chassis for inter-blade networking. The chassis also has four 10Gb NICs for an outside switch, along with ten 1Gb NICs and a 16-port switch...

So previously everything would go to the edge of the blade and back - with virtual switches and the NUMA architecture, everything on the virtual switch runs at RAM speed - then it's down to where the memory is...

Made me a real believer in the flexibility and speed of virtual switches...
 
Going to the switch and back is much slower than talking at memory speed across a virtual switch. How much memory was in these boxes? I bet a lot.
 
Going to the switch and back is much slower than talking at memory speed across a virtual switch. How much memory was in these boxes? I bet a lot.

64GB per blade... each blade had 2x 6-core Xeons (with hyperthreading active, so double that - 12 threads per socket), 8 blades per chassis... we had 6 chassis in production, plus one development lab...
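Just tallying the figures quoted above (2 sockets x 6 cores x 2 threads per core, 64GB and 8 blades per chassis, 6 production chassis):

```python
# Back-of-the-envelope tally of the production cluster described above.
sockets_per_blade, cores_per_socket, threads_per_core = 2, 6, 2
blades_per_chassis, chassis = 8, 6
ram_per_blade_gb = 64

threads_per_blade = sockets_per_blade * cores_per_socket * threads_per_core   # 24
total_threads = threads_per_blade * blades_per_chassis * chassis              # 1152
total_ram_gb = ram_per_blade_gb * blades_per_chassis * chassis                # 3072

print(f"{threads_per_blade} threads/blade, {total_threads} threads and "
      f"{total_ram_gb}GB RAM across {chassis} production chassis")
```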

Let's not talk about the filers, lol, the NAS guys will be sad....

(DevLab was fun to do video transcodes as a science project..)
 
