HP ProCurve 1800-24G and throughput...


nry

New Around Here
Hey

I'm building a small network, but one that needs higher throughput than usual.

I generally work with either hundreds of files ranging from 1 KB to a few MB, or files which are 10-100 GB each.

I have one file server with multiple RAID 6 arrays, capable of reading at 300-400 MB/s.

I'm also setting up a desktop soon which will have a few SSDs in RAID, with the hope of matching that 300-400 MB/s so I can work with these files a little quicker.

This machine will also host a few virtual machines at a time.

Now, gigabit networking maxes out at about 110 MB/s (the best I get, anyway).
I'm looking to get at least 300 MB/s between the two machines (and possibly a dedicated VM server at a later date).

I looked at NIC teaming, but this wouldn't let me sustain 300 MB/s between two machines, which isn't very helpful.
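For anyone wondering why teaming doesn't help here: link aggregation hashes each flow (one source/destination pair) onto a single physical link, so one big file copy never goes faster than one link no matter how many ports are in the trunk. A rough back-of-the-envelope sketch (the 0.94 efficiency factor is my own ballpark for Ethernet/IP/TCP overhead, not a measured figure):

```python
# Illustrative sketch, not any vendor's exact algorithm: a trunk hashes
# each flow to ONE member link, so a single transfer is capped at one
# link's throughput even on a 4-link trunk.

GIGABIT_LINK_MBPS = 1000   # line rate per link, in megabits/s
ETH_EFFICIENCY = 0.94      # assumed payload efficiency after protocol overhead

def per_link_mb_per_s(link_mbps=GIGABIT_LINK_MBPS, efficiency=ETH_EFFICIENCY):
    """Approximate usable MB/s on one link (bits -> bytes, minus overhead)."""
    return link_mbps * efficiency / 8

def flow_throughput(num_links, num_flows):
    """Best case: flows spread evenly; any single flow still rides one link."""
    busy_links = min(num_links, num_flows)
    aggregate = busy_links * per_link_mb_per_s()
    single_flow = per_link_mb_per_s()  # capped regardless of trunk size
    return aggregate, single_flow

agg, single = flow_throughput(num_links=4, num_flows=1)
print(f"4-link trunk, 1 flow: aggregate ~{agg:.0f} MB/s, per-flow ~{single:.0f} MB/s")
```

With many parallel flows the aggregate climbs toward 4x, but a single large file transfer stays around the ~110-118 MB/s the OP is already seeing.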

Secondly, I am looking at fibre connections; you can pick up a 4Gb PCIe fibre card on eBay for about £180.
So would I be able to get 4Gb mini-GBIC modules for the ProCurve switch, and would it support them?

I know that if I expand to a VM server at a later date I would need to invest in a fibre switch, but I'll cross that bridge when I get there.

Anyone got any input on this?

I'm guessing this could get expensive, and I might just have to learn to wait for transfers and things to load :(
 
Well, connecting the two computers directly with 4Gb fibre would probably work just fine and would be the easiest solution. As for adding 4Gb mini-GBIC modules to the ProCurve switch... I am not sure the switch supports anything higher than 1Gbps in those ports. Scroll down to the bottom of this link. Also, when I looked on the HP website, it did not show any accessories rated over 1000 Mbps.

Not sure if this is an option for you, but you might be able to just do 10 Gigabit Ethernet instead of fibre. http://cgi.ebay.com/Intel-10-Gigabi...147?pt=LH_DefaultDomain_0&hash=item20b63cdf73

00Roush
 
I have the HP 1810-24G, but I've yet to do trunking with it. However, I do use it with my home NAS, whose array is capable of over 400 MB/s, but I'm limited by my current 1Gb line speed. I'm easily able to saturate the line while transferring large files to my NAS.

If you want a sustained 300 MB/s between two machines, I would also suggest considering a 10Gb connection or InfiniBand, like 00Roush suggested.

You could also try configuring link aggregation (trunking) across four of the ports on both ends, but you would need a quad-port NIC (or worse, four single-port NICs). I don't know whether this would actually reach the speeds you want, though, and it would also consume 1/3 of your switch for just two machines. That HP switch supports up to 8 ports in a single trunk, with a maximum of 12 trunks. After you define a trunk, you need to configure the algorithm by which traffic is distributed across it:

To ensure that the switch traffic load is distributed evenly across all links in a trunk, the source or destination addresses used in the load-balance calculation can be selected to provide the best result for trunk connections. The switch provides six load-balancing modes that apply for both static and dynamic trunks:

■ SMAC: All traffic with the same source MAC address is output on the same link in a trunk. This mode works best for switch-to-switch trunk links where traffic through the switch is received from many different hosts.

■ DMAC: All traffic with the same destination MAC address is output on the same link in a trunk. This mode works best for switch-to-switch trunk links where traffic through the switch is destined for many different hosts. Do not use this mode for switch-to-router trunk links where the destination MAC address is the same for all traffic.

■ SMAC XOR DMAC: The four least significant bits of the source and destination MAC addresses in a frame are used to determine the output link in the trunk. This mode works best for switch-to-switch trunk links where traffic through the switch is received from and destined for many different hosts. (This mode is the default setting.)

■ SMAC XOR DMAC XOR IP-Info: The source and destination MAC addresses and IP information in a frame are used to determine the output link in the trunk. This mode works best for switch-to-switch trunk links where traffic through the switch is received from and destined for many different hosts.

■ Pseudo-randomized: The output link in the trunk is randomly selected. This mode is useful for networks where frame ordering within conversations can be neglected and where a high utilization of aggregated links is important.

■ IP-Info: The source and destination IP addresses and TCP/UDP ports in a frame are used to determine the output link in the trunk. This mode works best for switch-to-router trunk links where traffic through the switch is received from and destined for many different hosts.
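The default SMAC XOR DMAC mode above also shows why a trunk won't speed up a single pair of machines: the same two MAC addresses always hash to the same link. A small sketch of that mode as the manual describes it (the final modulo mapping onto member links is my assumption; the manual only says the four bits "determine the output link"):

```python
# Sketch of the SMAC XOR DMAC trunk load-balancing mode: XOR the four
# least significant bits of the source and destination MAC addresses,
# then map the result onto the trunk's member links (modulo is assumed).

def trunk_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Return the index of the trunk member link a frame would use."""
    s = int(src_mac.replace(":", ""), 16) & 0xF  # 4 LSBs of source MAC
    d = int(dst_mac.replace(":", ""), 16) & 0xF  # 4 LSBs of destination MAC
    return (s ^ d) % num_links

# A fixed host pair always hashes to the same link, so a single transfer
# between two machines only ever uses one member of the trunk:
a, b = "00:1c:2e:aa:bb:01", "00:1c:2e:aa:bb:0e"
print(trunk_link(a, b, 4) == trunk_link(a, b, 4))  # True: deterministic
```

That's why the modes that mix in IP addresses and TCP/UDP ports (IP-Info) can spread traffic better when many flows exist, but still can't split one flow across links.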

With regard to your idea of using a 4Gb GBIC: they are not supported by this switch, according to the manual. Ports 23 and 24 are dual-personality ports, shared between an RJ-45 connector and a mini-GBIC slot.

The switch has 24 auto-sensing 10/100/1000Base-T RJ-45 ports and two dual-personality ports (ports 23 and 24). Because the RJ-45 ports support automatic MDI/MDI-X operation, you can use straight-through cables for all network connections to PCs or servers, or to other switches or hubs.
Dual-personality ports use either the 10/100/1000Base-T RJ-45 connector, or a supported ProCurve mini-GBIC for fiber-optic connections. By default, the RJ-45 connectors are enabled.
This switch is designed to be used primarily as a high-density wiring closet or desktop switch. With this switch you can directly connect computers, printers, and servers to provide dedicated bandwidth to those devices, and you can build a switched network infrastructure by connecting the switch to hubs, other switches, or routers. In addition, the switch offers network management capabilities.

You may not need to expand to a Fibre Channel switch (SAN) or a fibre Ethernet switch down the road if you build a VM farm. You can manage quite well with a copper switch setup, even for storage: you can provide the shared storage needed for VM vMotion operations using iSCSI.
 
