Trying to port trunk a Dell PowerConnect 2716 to a HP ProCurve 1810-24G V2


HotFix

Greetings!

I first want to say I am not a network engineer. I am a server/system engineer, so all of my networking knowledge (what little I have) has come from working with networks as a port consumer. So if I use the wrong term or am doing something backwards, please be patient with me. :)

I am trying to port trunk ports 23 & 24 on the HP to ports 15 and 16 on the Dell.

I have configured ports 15 & 16 as members of LAG Group 1 on the Dell.

I have configured ports 23 & 24 as members of Trunk 1 on the HP. I have configured the trunk as a Static trunk since the Dell doesn't support LACP.

I have connected Port 15 to port 23 with a Cat5e cable, and port 16 to 24 with a Cat5e cable.

I have a DLINK Router plugged into the Dell with one cable because that is my Verizon Fios connection.

I have a NetGear wireless router (configured as a bridge only) plugged into my HP with one cable as that is the wireless in my environment.

I only have VLAN 1 defined on the Dell switch as that is my "production" network. VLAN 1 is also defined on the HP switch as is another VLAN 100 which is what I am going to use for iSCSI. The HP has VLAN 1 set up as the "U"ntagged VLAN on all ports, and right now VLAN 100 is "E"xcluded on all ports since I don't have any iSCSI devices today. I hope I said that right.

I have not enabled Spanning Tree protection on my HP because there is no corresponding setting on my Dell, and I have not created any loops as everything is one big daisy chain, i.e. NetGear <--> HP <--> Dell <--> DLINK.

I have turned Flow Control on globally on the HP switch because I intend to hook up some iSCSI servers and a NAS very soon, and from my experiences in the past Flow Control is a best practice for iSCSI. There does not appear to be any per-port settings for Flow Control on the HP.

I have not turned Flow Control on for any ports on the Dell (which, unlike the HP, are configured per-port). Originally I tried to turn it on for the two trunk ports, but I was getting sub-par performance connecting to the Internet from a system connected to the HP, so I turned it off.

I had also tried to statically set all 4 ports to 1000/Full, because that was always a best practice in the server world to hard set it on both sides, but I set them all back to Auto/Auto when I was troubleshooting the Internet performance from the HP. The Dell has a way to only advertise 1000/Full in an Auto negotiation, so I think that might be ok long term; otherwise I prefer to hard set ports where I know their connections won't be changing any time soon (like servers and trunks).

Since the changes I made above, I am getting decent download speed from the Internet on the HP, but things aren't so hot for systems connected directly to the Dell (which is one hop closer to the DLINK Internet connection). On my desktop that is connected directly to the Dell (with everything set to Auto), I am getting some weird results pinging the Dell switch which don't make any sense to me:
PowerConnect_Ping.jpg
When I ping the HP switch from my desktop, which is 1 hop away over the trunk, the pings are all less than 1 ms. This is really odd to me because I would think the local/no-hop switch would ALWAYS be under 1 ms (unless it was busy) and the trunked/one-hop switch might see some unusual ping times (which it does not in this case).

So I have some questions I am hoping someone more knowledgeable than myself can answer.
  1. Does it look like I configured the Trunk correctly, or should I have done something else?
  2. Is leaving Flow Control enabled switch-wide on the HP but disabled on the Dell switch ports ok? I.e., will the HP switch understand that there is no Flow Control on the trunk ports to the Dell and be ok with that?
  3. Is there any Spanning Tree or other protection mechanism I need to enable, even though none of the switches are cross-connected? I see things like Storm Control, Auto DoS protection, Looping Control, and other things out there that I am trying to read up on and understand ASAP, but as of right now it isn't clear I need them in my configuration.
  4. What could be causing a computer connected directly to a switch to have those ping times above, but "better" ping times to a trunked switch? That one has me stumped.
  5. Is there anything else I should know or do to get this to work correctly?

FYI - I am trunking the ports because I am going to connect a NAS with multiple ports in an LACP trunk to my HP, and I want to make sure two 1Gb systems connected to the Dell can get relatively "full" 1Gb performance against the NAS, versus both fighting for bandwidth over a single non-trunked 1Gb connection.

If you read this far, thank you. If you have any thoughts/suggestions then thank you x2!
 
A small update.

If I unplug one of the trunk ports, the PING issues from my desktop to the directly connected Dell switch don't go away.

If I unplug both of the trunk ports, the PING issues from my desktop to the directly connected Dell switch DO go away completely.

So something about the inter-switch connection is causing the Dell switch to return sub-par PING times to a directly connected computer and this has me worried.

I also forgot to mention I turned on Jumbo Frame support on my HP switch, as that is a global setting, but I did not make any change like that to the Dell switch. My understanding is this is ok because Path MTU Discovery will work fine in this configuration (without needing to go through a router in the middle).
 
I am trying to port trunk ports 23 & 24 on the HP to ports 15 and 16 on the Dell.

I have configured ports 15 & 16 as members of LAG Group 1 on the Dell.
I'm not familiar with the Dell 2716. Here's how I have a LACP group set up on a Dell 8024:

Code:
interface Te1/0/23
channel-group 1 mode active
description "Trunk to Catalyst 4948-10GE 1/2"
mtu 9216
switchport mode trunk
switchport general acceptable-frame-type tagged-only
exit
!
interface Te1/0/24
channel-group 1 mode active
description "Trunk to Catalyst 4948-10GE 2/2"
mtu 9216
switchport mode trunk
switchport general acceptable-frame-type tagged-only
exit
!
interface port-channel 1
description "Trunk to Catalyst 4948-10GE"
hashing-mode 7
switchport mode trunk
switchport general acceptable-frame-type tagged-only
mtu 9216
exit

And, for completeness, the same thing on the Cisco 4948-10GE:

Code:
interface TenGigabitEthernet1/49
 description Trunk to PowerConnect 8024 1/2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 mtu 9198
 channel-protocol lacp
 channel-group 1 mode active
!
interface TenGigabitEthernet1/50
 description Trunk to PowerConnect 8024 2/2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 mtu 9198 
 channel-protocol lacp
 channel-group 1 mode active
!
interface Port-channel1
 description Trunk to PowerConnect 8024
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 mtu 9198

I have configured ports 23 & 24 as members of Trunk 1 on the HP. I have configured the trunk as a Static trunk since the Dell doesn't support LACP.
Let's check terminology - a "trunk" is something that carries one or more VLANs with tagging, over one or more physical links. A "port channel", "LACP", etc. is a way to connect devices via one or more physical links, all of which will be active, using some method for load balancing. As opposed to using spanning tree, where all but one of the links are in an inactive state (blocked).

I only have VLAN 1 defined on the Dell switch as that is my "production" network. VLAN 1 is also defined on the HP switch as is another VLAN 100 which is what I am going to use for iSCSI. The HP has VLAN 1 set up as the "U"ntagged VLAN on all ports, and right now VLAN 100 is "E"xcluded on all ports since I don't have any iSCSI devices today. I hope I said that right.
I would recommend not using an untagged / native VLAN. It complicates things. I set up ports to either be access (not tagged) configured to a single VLAN, or trunked (tagged) possibly with limitations on which VLANs will be carried on that port.

I have not enabled Spanning Tree protection on my HP because there is no corresponding setting on my Dell, and I have not created any loops as everything is one big daisy chain, i.e. NetGear <--> HP <--> Dell <--> DLINK.
That's fine for now. But what happens at some point in the future when someone creates an accidental loop? Also, if you enable spanning tree now and something stops working, that is a clue that you're doing something wrong.

I have turned Flow Control on globally on the HP switch because I intend to hook up some iSCSI servers and a NAS very soon, and from my experiences in the past Flow Control is a best practice for iSCSI. There does not appear to be any per-port settings for Flow Control on the HP.

I have not turned Flow Control on for any ports on the Dell (which, unlike the HP, are configured per-port). Originally I tried to turn it on for the two trunk ports, but I was getting sub-par performance connecting to the Internet from a system connected to the HP, so I turned it off.
I'd leave the flow control settings at their defaults everywhere until / if you notice something strange that might be related. Flow control is one of the optional pieces of the spec that may or may not be implemented. The only thing more annoying than flow control is what Cisco calls "speed nonegotiate". Things don't get sane until 10GbE, where all of these "options" became mandatory.

I had also tried to statically set all 4 ports to 1000/Full, because that was always a best practice in the server world to hard set it on both sides, but I set them all back to Auto/Auto when I was troubleshooting the Internet performance from the HP. The Dell has a way to only advertise 1000/Full in an Auto negotiation, so I think that might be ok long term; otherwise I prefer to hard set ports where I know their connections won't be changing any time soon (like servers and trunks).
This is one of those pieces of Ancient Wisdom that is no longer correct, and can actually be harmful. Back in the day, there were loads of chipsets out there that couldn't properly negotiate (DEC 2114x, I'm looking at you). The solution was to hard-code speed and duplex.

These days, anything that can't properly negotiate either needs a firmware update or a closer association with the garbage recycling bin.

On my desktop that is connected directly to the Dell (with everything set to Auto), I am getting some weird results pinging the Dell switch which don't make any sense to me:
View attachment 2296
When I ping the HP switch from my desktop, which is 1 hop away over the trunk, the pings are all less than 1 ms. This is really odd to me because I would think the local/no-hop switch would ALWAYS be under 1 ms (unless it was busy) and the trunked/one-hop switch might see some unusual ping times (which it does not in this case).
It is always best to ping to another system through the switch, and not to the switch itself. Managed switches have a forwarding engine which runs mostly in hardware, and a management engine that runs on a (generally) slow CPU. As an extreme example, the Cisco 2900XL switches had the CPU spending > 80% of its time just blinking the port LEDs. When you ping the switch, you're talking to the possibly-overworked management engine. Pinging through the switch to a system on the far end shows you the true throughput of the switch.

FYI - I am trunking the ports because I am going to connect a NAS with multiple ports in an LACP trunk to my HP, and I want to make sure two 1Gb systems connected to the Dell can get relatively "full" 1Gb performance against the NAS, versus both fighting for bandwidth over a single non-trunked 1Gb connection.
This may or may not be necessary (or useful), assuming you mean "trunking" to mean "using a port channel" (see my comments above). If the NAS can't handle more than 100MByte/sec, there's no need to give it additional links for speed.

There is also the issue of different pieces of hardware implementing different methods of allocating traffic to component links. For reasons too technical to get into here, simply putting the next packet on the lesser-loaded link doesn't work. There are various algorithms to allocate traffic - balancing based on source or destination MAC address, IP address, etc. You may find that even if you get the port channel set up and the other problems fixed, that you still have a very unbalanced utilization of your port channel links.

I also forgot to mention I turned on Jumbo Frame support on my HP switch, as that is a global setting, but I did not make any change like that to the Dell switch. My understanding is this is ok because Path MTU Discovery will work fine in this configuration (without needing to go through a router in the middle).
Jumbos are something you should postpone until you have the basic configuration working. From the point of view of the end systems, a jumbo problem will just look like some packets never getting to their destination. If you're using NFS (for example) over UDP, it just won't work.
 
First let me say THANK YOU for such a detailed and thorough response! :)

I'm not familiar with the Dell 2716. Here's how I have a LACP group set up on a Dell 8024: [...]

Unfortunately neither of my switches provides a command line interface, just web management. If it helps, here is the Dell Switch Port Config:
DellSwitchPortConfig.jpg
And here is the HP Switch Port Config (and "trunk" config):
HPSwitchPortandTrunkConfig.jpg

Let's check terminology - a "trunk" is something that carries one or more VLANs with tagging, over one or more physical links. A "port channel", "LACP", etc. is a way to connect devices via one or more physical links, all of which will be active, using some method for load balancing. As opposed to using spanning tree, where all but one of the links are in an inactive state (blocked).
Understood. It looks like HP calls it a "Port Trunk" versus a "VLAN Trunk", but I will refer to it as a Port Channel from this point forward as there is no way to confuse that with a VLAN Trunk.

I would recommend not using an untagged / native VLAN. It complicates things. I set up ports to either be access (not tagged) configured to a single VLAN, or trunked (tagged) possibly with limitations on which VLANs will be carried on that port.
The ports are all untagged at the moment as I wasn't sure how to implement VLAN tagging on the NetGear or DLINK switches (if you even can) and also other network devices I have like a PS3. I remember someone once telling me it was a best practice to never use VLAN 1, and instead implement your stuff on other VLANs, so I was never going to specify VLAN 1 tagging but rather just leave it the default untagged which I believe happens to be VLAN 1. Again I hope I said that right. :)
The only exception to this I plan on making is to default some ports to the iSCSI VLAN in an untagged fashion (I hope I said that right) so there will be no chance of cross VLAN chatter.

That's fine for now. But what happens at some point in the future when someone creates an accidental loop? Also, if you enable spanning tree now and something stops working, that is a clue that you're doing something wrong.
That sounds like sage advice. To your point, in theory Spanning Tree should just do nothing since there are no loops, so it should be harmless to turn on and it will just be there to protect me.
I just need to figure out which is the best way and to make sure it won't be harmful to my DLINK and NetGear switches as I don't think they have any of those smarts.

I'd leave the flow control settings at their defaults everywhere until / if you notice something strange that might be related. Flow control is one of the optional pieces of the spec that may or may not be implemented. The only thing more annoying than flow control is what Cisco calls "speed nonegotiate". Things don't get sane until 10GbE, where all of these "options" became mandatory.
Understood and that sounds reasonable.

This is one of those pieces of Ancient Wisdom that is no longer correct, and can actually be harmful. Back in the day, there were loads of chipsets out there that couldn't properly negotiate (DEC 2114x, I'm looking at you). The solution was to hard-code speed and duplex.

These days, anything that can't properly negotiate either needs a firmware update or a closer association with the garbage recycling bin.
Regarding hard setting NIC ports and speed, I hear you, but I have routinely seen in datacenters where once in a blue moon a NIC port won't auto-negotiate correctly after a server reboot. We server guys end up beating our heads against the wall trying to figure out what's going on with system performance/connectivity until we figure out it's a NIC port auto-negotiation issue. Even as recently as 2 years ago, when I was implementing a brand new multi-system iSCSI SAN, I had two instances where NIC ports didn't negotiate correctly, so I tend to stick to the ancient wisdom so that I don't get bit in the rear.

I would like to get clarification on what you have seen with setting static port configs that is harmful, though, as I am not aware of any issues to date (and am eager to learn new things).

It is always best to ping to another system through the switch, and not to the switch itself. Managed switches have a forwarding engine which runs mostly in hardware, and a management engine that runs on a (generally) slow CPU. As an extreme example, the Cisco 2900XL switches had the CPU spending > 80% of its time just blinking the port LEDs. When you ping the switch, you're talking to the possibly-overworked management engine. Pinging through the switch to a system on the far end shows you the true throughput of the switch.
Well, to your point, I can ping the default gateway on the DLINK router and the HP switch IP with perfect latency through my workstation's connection to the Dell switch. I am concerned that if establishing the port channel has overloaded the Dell switch to the point where I can't ping it with expected latency, it might be time to replace it.

This may or may not be necessary (or useful), assuming you mean "trunking" to mean "using a port channel" (see my comments above). If the NAS can't handle more than 100MByte/sec, there's no need to give it additional links for speed.
Regarding setting up a port channel, I am planning on getting a QNAP TS-870 and putting 8 drives in it, and it comes with 4 NICs, so it can certainly crank the throughput. Regardless, though, I really want to do this just to get experience with it in my lab, as that's how I learn, since I can't do stuff like this in my customers' production environments.

There is also the issue of different pieces of hardware implementing different methods of allocating traffic to component links. For reasons too technical to get into here, simply putting the next packet on the lesser-loaded link doesn't work. There are various algorithms to allocate traffic - balancing based on source or destination MAC address, IP address, etc. You may find that even if you get the port channel set up and the other problems fixed, that you still have a very unbalanced utilization of your port channel links.
Understood. Hopefully lab testing will show this one way or another.

Jumbos are something you should postpone until you have the basic configuration working. From the point of view of the end systems, a jumbo problem will just look like some packets never getting to their destination. If you're using NFS (for example) over UDP, it just won't work.

Also sage advice. I will turn off Jumbo Frames and Flow Control on the HP switch until such time as I get my iSCSI infrastructure up and running, so that I am not troubleshooting multiple issues at the same time.

Thank you again for your detailed and thoughtful responses.

I am still concerned about the Dell switch ping times as they seem to indicate to me something isn't right, even if that something is the Dell switch CPU is now overloaded.
 
First let me say THANK YOU for such a detailed and thorough response! :)
No problem! Here's a few answers inline...

Unfortunately neither of my switches provides a command line interface, just web management. If it helps, here is the Dell Switch Port Config:
View attachment 2322
And here is the HP Switch Port Config (and "trunk" config):
View attachment 2323
Might be worthwhile to see if there's a firmware update for either of them. That won't get you a command line interface, but may fix some of the things you're seeing.

The ports are all untagged at the moment as I wasn't sure how to implement VLAN tagging on the NetGear or DLINK switches (if you even can) and also other network devices I have like a PS3. I remember someone once telling me it was a best practice to never use VLAN 1, and instead implement your stuff on other VLANs, so I was never going to specify VLAN 1 tagging but rather just leave it the default untagged which I believe happens to be VLAN 1. Again I hope I said that right. :)
The only exception to this I plan on making is to default some ports to the iSCSI VLAN in an untagged fashion (I hope I said that right) so there will be no chance of cross VLAN chatter.
The end devices don't need to be VLAN-aware. You set the VLAN membership on the switch port(s) in question. A trunk (also known as dot1Q) will carry all of those VLANs (unless you exclude some) to the second switch. If you set all of your unconfigured ports to (usually the default of) VLAN 1, then anything plugged into an unconfigured port will have "nobody to talk to".

Some end devices (such as most modern Unix-like systems) are trunk-aware and can be configured to "talk" on different VLANs. But that's an advanced topic and we don't need to confuse things more at this time.

Regarding hard setting NIC ports and speed, I hear you, but I have routinely seen in datacenters where once in a blue moon a NIC port won't auto-negotiate correctly after a server reboot. We server guys end up beating our heads against the wall trying to figure out what's going on with system performance/connectivity until we figure out it's a NIC port auto-negotiation issue. Even as recently as 2 years ago, when I was implementing a brand new multi-system iSCSI SAN, I had two instances where NIC ports didn't negotiate correctly, so I tend to stick to the ancient wisdom so that I don't get bit in the rear.

It would be interesting to find out what brand/model the NIC was, along with the brand/model of the switch, so the guilty parties can be subjected to a public flogging.

Other than 10GbE on copper, plain old 10/100/1000 autonegotiation should be flawless at this point. Some 10/100-only implementations may have trouble with 1000, but the switch can generally be configured to only offer 10/100 to those ports. I note that in your screenshot, you have auto-negotiation enabled on the Dell switch, but are only offering 1000/full.

I would like to get clarification on what you have seen with setting static port configs that is harmful, though, as I am not aware of any issues to date (and am eager to learn new things).

The problem with disabling autonegotiation on a port is that if some other device gets plugged into that port and tries to autonegotiate, it will fail. The standard says that it should assume a half-duplex connection in that case. That will cause performance to go down the tubes, since it will see any incoming traffic while it is transmitting as a [false] collision and re-transmit the packet.
 
No problem! Here's a few answers inline...
Might be worthwhile to see if there's a firmware update for either of them. That won't get you a command line interface, but may fix some of the things you're seeing.
The very first thing I did when I got the HP switch was check to make sure it and the existing Dell switch were both up to date before I tried to port channel them. :) The Dell switch is old and the firmware hasn't been updated since 2005 so I think that might be part of the problem (a switch that hasn't been kept up to date by the manufacturer).

The end devices don't need to be VLAN-aware. You set the VLAN membership on the switch port(s) in question. A trunk (also known as dot1Q) will carry all of those VLANs (unless you exclude some) to the second switch. If you set all of your unconfigured ports to (usually the default of) VLAN 1, then anything plugged into an unconfigured port will have "nobody to talk to".

Some end devices (such as most modern Unix-like systems) are trunk-aware and can be configured to "talk" on different VLANs. But that's an advanced topic and we don't need to confuse things more at this time.
Gotcha.
The HP switch requires that the management be performed from VLAN 1 it seems, and I think the Dell is the same way, so I may just keep that one as my "main" VLAN versus try to figure out a work around and still keep switch management.

It would be interesting to find out what brand/model the NIC was, along with the brand/model of the switch, so the guilty parties can be subjected to a public flogging.

Other than 10GbE on copper, plain old 10/100/1000 autonegotiation should be flawless at this point. Some 10/100-only implementations may have trouble with 1000, but the switch can generally be configured to only offer 10/100 to those ports. I note that in your screenshot, you have auto-negotiation enabled on the Dell switch, but are only offering 1000/full.
The NICs were HP NC365Ts, which are Intel NIC based. The switches were Cisco 3750-Xs. Honestly, the only reason we left the NICs on Auto was because HP hadn't updated their drivers to a version that would allow us to statically set 1000/Full. Sure enough, we saw 100/Full negotiated 2 or 3 times during the various reboots, which only reaffirmed my long-held belief that you should statically set server NIC/switch ports since they don't change often.

I only turned off the non-1000/Full options under Auto-negotiate on the two Dell switch ports that were connected to the HP switch, so I could have some reasonable amount of assurance that only 1000/Full would be negotiated even though I had Auto turned on. I felt more comfortable with this configuration than the other way, where any speed/duplex could be negotiated, and felt like it was a happy middle ground.

All other ports on the Dell and all of the ones on the HP are just Auto (because the HP doesn't give you a way to limit which speeds are available in an auto-negotiation).

The problem with disabling autonegotiation on a port is that if some other device gets plugged into that port and tries to autonegotiate, it will fail. The standard says that it should assume a half-duplex connection in that case. That will cause performance to go down the tubes, since it will see any incoming traffic while it is transmitting as a [false] collision and re-transmit the packet.

Understood - I would only ever set something static when I plugged something in that I knew wasn't going to change any time in the near future such as a server or another switch. Previously I did this consistently so we never had to guess what the port was configured as, and it didn't make sense to auto-negotiate something (and run the risk it made a bad choice one day) when you could statically set it and not guess.
All other ports like those for desktops or devices stay Auto.

I'm starting to think I should just replace the Dell switch with another HP switch since the HP switch is newer and a 24 port fanless unit ran me ~$200 on Newegg.
 
I disabled Jumbo Frames and Flow Control on my HP switch with no change in the wacky ping response of my Dell switch.

I was about to turn on Spanning Tree but I discovered the Dell PowerConnect 27XX series does not support it and I wasn't sure what turning it on the HP would do. Is there any point to turning it on if none of the other switches support it?

I think I have exhausted things I can try to make the Dell switch happy. Unless someone has any ideas of tweaks for a switch whose firmware is 8-9 years old, I am going to replace it with another HP ProCurve 1810-24G V2. It would be a welcome change from an administration POV, because the Dell can only be managed from Firefox (up-to-date versions of both IE and Opera can't log into the device), not to mention I have to turn off AVG for Firefox to be able to log in.

I did look at the difference between the 1810-24G V2 and the 1910-24G, and it appears the major differences are that the 1910 series is a layer 3 device with some minor routing capabilities, and all models have fans. Since I don't have any layer 3 routing needs, if I do buy another switch I am just going to pass on the additional expense and noise and stick with a layer 2 device. If I ever want to route between VLANs in the future, I will create a VM or something like that to do it on the cheap since I will have plenty of room in the realm of Hyper-V. :)
 
The HP 1810-24G can be connected with fiber. That's how I get all of my rack-mounted server switches back into my HP 4208 network rack.
 
Since the HP ProCurve 1810-24G V2 was only $209 on Newegg, I decided to pull the trigger and just get a second one to replace my aging Dell PowerConnect 2716.

It's overkill for the number of ports I need to connect (~12) in my central LAN closet but... I am hopeful that the issues I am seeing now go away.
 
Out of curiosity, if you have so few ports, why the two switches? I had to expand as I ran out of ports on my 16-port switch, so I added in a second 16-port switch and set up a 2-port LAG between them for a fatter uplink. I tend to treat the original switch (TP-Link SG-2216) as the core, and the new switch (actually older, since I got it used; a TRENDnet TEG-160sw) as a peripheral switch with more of the "slow speed" devices like my MoCA bridge and the lesser-used LAN drops off it, with my router, access points, media streamers, desktop, and server off the core switch.
 
Out of curiosity, if you have so few ports, why the two switches?

Fair question.

My office/lab switch will eat ~18 ports with the 2 Hyper-V servers I want to build and the various other items in my lab. The "whole house" switch only needs about ~12 ports to connect everything else (wife's computer, PS3, my computer, etc...) so I do need 2 switches. I would only need a 16 port whole house switch, but unfortunately HP goes from 8 to 24.
 
Ahhhh I see. Sorry, your "only need 12 ports" had me confused.

I know little of HP/Dell, but what about someone like Trendnet/TP-Link/DLink? Not the awesomest/bestest products ever, but they're low cost and generally seem to work well. Their semi-managed switches should be able to do what you want (at least on my Trendnet and TP-Link semi-managed 16-port units I've done everything you have asked about and they worked fine; I am not currently using VLANs with them, but the VLAN functionality worked fine when I tested it).
 
Problem remediated.

I got a second HP ProCurve 1810-24G v2 and replaced the Dell PowerConnect 2716. Now my set up looks like this:
DLINK <--> HP <--> HP <--> NetGear

I created an LACP "trunk" (HP's term for a Port Channel) between the two switches, one switch set as Active and the other set as Passive. I found documentation that said I could have gone LACP Active/Active but I couldn't find a reason to do that so I went with Active/Passive because that seemed to be the reason for having Passive in the first place.

After establishing the 2x 1Gb Port Channel, I no longer see the issue I had with the Dell, where pinging the directly connected switch would report random response times. Also, several download tests showed no discernible difference based on which HP switch I had my workstation plugged into. As a test, I made the following changes on the HP switches to see if there was any performance degradation:
  1. Disabled auto-negotiation on the two Port Channel ports on each switch. Yes, I know this shouldn't be necessary, but I just don't see the point in leaving the connection speeds to chance if you know what they should be.
  2. Enabled Flow Control on both switches.
  3. Enabled Rapid Spanning Tree detection on both switches.
  4. Enabled Loop Prevention on both switches.
  5. Enabled Jumbo Frames on just the office/lab HP switch.

Even after all of those changes I saw no discernible difference in Internet download speeds from either HP switch. I did monitor the ports and saw a couple of Pause/Wait frames being exchanged over the trunk/Port Channel, so I disabled Flow Control on the whole-house switch since I won't be using it for iSCSI. The rationale there is that, just like Jumbo Frames, I can use Flow Control on the office/lab HP switch between the iSCSI endpoints where it is recommended, but not have to worry about it being used between/across the two switches.

I have configured the ports not connected to switches as Spanning Tree Edge ports, because I believe this is like Cisco's PortFast, which I know I have needed to enable in the past for server ports. I still need to read up on Spanning Tree protection and make sure I have everything else configured correctly, though, because there are more settings in the Spanning Tree menu that I don't understand.

The only remaining question I have is: what is the difference between Spanning Tree protection and Loop Prevention? I think I read somewhere that Loop Prevention steps in when a connected device doesn't support/utilize Spanning Tree, but I didn't read that in any official documentation.

Thank you everyone for your assistance and comments!
 
I've never heard of loop prevention. It sounds like STP. My guess is it is an HP proprietary implementation of Spanning Tree Protocol, or maybe it is what they call a later or earlier version of STP? It could be what they call Rapid Spanning Tree Protocol or something.

I personally leave all flavors of STP off on my switches. My topology is pretty basic, and I really would have to be pretty dunderheaded to set up an accidental loop. I don't need any redundant links sitting idle, as I just set up active links through port trunking, so no need for STP there.
 
I've never heard of loop prevention. It sounds like STP. My guess is it is an HP proprietary implementation of Spanning Tree Protocol, or maybe it is what they call a later or earlier version of STP? It could be what they call Rapid Spanning Tree Protocol or something.

I personally leave all flavors of STP off on my switches. My topology is pretty basic, and I really would have to be pretty dunderheaded to set up an accidental loop. I don't need any redundant links sitting idle, as I just set up active links through port trunking, so no need for STP there.

These HP switches have Rapid Spanning Tree Protocol (and regular Spanning Tree Protocol) support in addition to the Loop Prevention. Their explanation of the two technologies from their User Manual is as follows:

Spanning Tree:
The Rapid Spanning Tree Protocol (RSTP, IEEE 802.1w) reduces the convergence time for network topology changes to about 3-5 seconds, from the 30 seconds or more for the IEEE 802.1D STP standard.

Loop Protection:
Loops in a network can consume switch resources and degrade performance. Detecting loops manually can be very cumbersome and time consuming. The HP 1810 series switch software provides an automatic Loop Protection feature.

Loop Protection may be enabled or disabled globally and on a port-by-port basis. When enabled globally, the software sends loop protection packets to a reserved layer 2 multicast destination address on all the ports on which the feature is enabled. Transmission of the packet can be disabled selectively on certain ports, even when Loop Protection is enabled.

If this multicast packet comes back to the switch with any of the ports' MAC addresses as the source, the switch determines that a loop has occurred. The port that received the loop protection packet from the switch can be shut down for a configured period, or a log entry can be made. Ports on which Loop Protection is disabled drop the loop protection packets silently.


So RSTP just seems to get the network back in order when there are changes, but doesn't appear to prevent loops, which the Loop Protection does. At least that is how this non-network-savvy systems engineer reads it. :)

I probably don't need either of them myself, but since I am trying to mirror customer environments in my lab, where they will likely have both turned on, it's probably a good idea for me to leave them on so I can "feel their pain". :)
 
STP is all about preventing loops, though. What it sounds like is that Loop Protection is HP's proprietary RSTP-like implementation that might possibly act even faster than RSTP can.
 
