H81 SATA Port Multiplier


jhon

New Around Here
Hi, I'm looking to build a basic fileserver with 10 hard drives. I want it to use as little power as possible at idle with the drives off.

Was thinking of a G1820 + some H81 mobo, like the ASRock H81M-VG4, 2GB RAM, Windows 7.

The issue is the SATA ports. With only 4 ports, one needs more for lots of hard drives. I have this nice toy that works like a dream on AMD boards with the SB850 - http://eshop.sintech.cn/sata-1-to-5-port-multiplier-card-p-759.html

Why the hell is it so hard to find info on port multiplier support on various new chipsets/mobos etc?

There is a nice wiki page showing Intel had issues with this feature in the past, but H67 seems to be compatible: https://ata.wiki.kernel.org/index.php/SATA_hardware_features

I'm sure there are people here who use port multipliers in modern, green builds. Could we share what was tested and what works/doesn't work?
 
You'll have to realize that even if a chipset claims compatibility, it may not work. Support varies from HBA to HBA with different chipsets; it isn't simply that a chipset does or doesn't work with HBAs.

You may be better off with a nice H87 board as you'll typically find more SATA ports to start with.

Go with Windows 8/8.1 if you can; it idles at lower power than Windows 7 on the same hardware. If you are using it for more than super basic file serving, get more RAM. Yes, it consumes more power: moving from a single 4GB DIMM (@1.4v) to a pair of 4GB DIMMs (@1.3v) on my server increased my idle power consumption by 1 watt and load power consumption by 2 watts under some workloads, but Windows WILL leverage the extra RAM to improve things like network file transfers and so forth.

Considering everything, you are MUCH better off just getting an inexpensive HBA PCIe card, which should almost certainly work fine; a 4-port card plus a board with 6 native ports gets you there.

Then disable any features on the motherboard you won't use.

Get an Intel NIC if that motherboard doesn't have one integrated (or find a board that does). It consumes a bit more power (1w for the newer Intel single-port GbE NIC, 2w for the older one), but from long experience, Intel is VASTLY superior to ANYONE else on NICs: reliability, features and performance.

Unless you have a power-constrained use case, or are just extremely principled about being green, it generally isn't worth it to pursue power savings at a cost when it comes to computing. Inexpensive components that also happen to use less power? GREAT. But paying for power savings generally isn't a good way to go, and you also don't want to underspec a machine to get lower power consumption, because that is a good way to either dislike the performance or actually consume MORE power, since your workloads run a lot longer (looking at 10 drives, I assume you'll be moving files around a fair amount... hence the suggestion for more RAM as well as Intel network adapter(s)).

At typical US electric prices, 1 watt of power consumption 24/7/365 costs around 80 cents to $1.50 per year. So if you spend $15 to save 1 watt, it is going to take 10-20 years to pay that back... likely well beyond the life of whatever machine you build. Figure: don't spend more than what you'd earn back in 4 years, unless it is a component that might keep on giving, like a quality, very-high-efficiency PSU. Just don't over-spec the output; with the build you are looking at, a 400w PSU should be fine since Windows will stagger spin-up of the drives (you just might need some SATA power splitters to power all of them), so look at a gold-rated unit around that size.

So if low-voltage RAM costs you $10 extra but only saves 1w, don't do it. If it costs you $5 extra but might save you 2w, it is probably worth it. A PSU that saves you 6w but costs $30 more than a lower-efficiency PSU? Possibly worth it even for the life of one machine... and since a PSU can be used for a VERY long time, supposing standards don't change (they don't often) and it doesn't die (odds are good it'll live a long time), you might reuse it in 2-3 builds... maybe 10+ years of life. So $30 extra to save 6 watts might pay you back $40-80 over time, well worth the extra money.
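The payback math above can be sketched as a tiny calculator; a minimal sketch using the post's own numbers (roughly $0.80-$1.50 per watt-year of 24/7 operation at US rates):

```python
# Rough payback estimate for paying extra for a power-saving component.
# Rates are the ballpark figures from the post, not a definitive model.

def payback_years(extra_cost, watts_saved, dollars_per_watt_year=1.0):
    """Years of 24/7 operation needed to recoup the extra purchase price."""
    return extra_cost / (watts_saved * dollars_per_watt_year)

# $15 extra to save 1 W: 10-20 years depending on your rates
print(payback_years(15, 1, 0.80))  # 18.75 years at cheap power
print(payback_years(15, 1, 1.50))  # 10.0 years at expensive power

# Gold PSU: $30 extra, 6 W saved, possibly reused across several builds
print(payback_years(30, 6))        # 5.0 years to break even
```

By the "earn it back in 4 years" rule of thumb, only the PSU case comes close, and only because a PSU can outlive the build it started in.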

Just some thoughts. My file server is an H67 uATX board (BTW, also go for the smallest motherboard you can afford that has the features/ports/PCIe slots you need, as smaller boards usually consume less power too) with a Celeron G1610, 2x4GB of low-voltage RAM, a 60GB SSD, 2x2TB hard drives (technically a single 3TB drive right now as I had to replace my array, but it'll be 2x3TB soon) and a pair of Intel Gigabit CT NICs. It consumes 21w at idle on Windows 8. It was actually 20w at idle with a single Intel NIC and 1x4GB of RAM previously; I transitioned to Windows 8 before changing the hardware, at which point it consumed 18w, going to 2x4GB of RAM upped it to 19w, and adding the 2nd Intel NIC increased it to 21w. Under streaming load it was 32w with the 2x2TB array, and under heavy CPU load with the drives spun up it was/is 53w.

I have it set up so that it powers down to S3 at night from 12:45am till 6:45am, during which it consumes 2.3w, then resumes from S3 at 6:45am. I run backup jobs weekly from my desktop at 12:20am before the server drops to S3 at 12:45am, with the server powering on my desktop from S5 and the desktop set to shut down at 12:46am. I've never seen a backup job, even with a HUGE chunk of data, take more than about 5-6 minutes (dual NICs, SMB Multichannel; I generally push >200MB/sec between the machines, even with SyncToy doing the backup, which runs a little slower than a straight SMB file transfer).
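A quick sanity check on that backup window: at the quoted >200 MB/sec Multichannel rate, a 5-6 minute job moves a surprisingly large chunk of data. The 60 GB job size below is an assumption for illustration; real jobs also pay small-file overhead.

```python
# How long a transfer of a given size takes at a given sustained rate.
# Illustrative only -- the 60 GB figure is assumed, not from the post.

def transfer_minutes(gigabytes, mb_per_sec):
    return (gigabytes * 1024) / mb_per_sec / 60

# A 60 GB backup at 200 MB/sec finishes in about 5 minutes...
print(round(transfer_minutes(60, 200), 1))   # 5.1 minutes
# ...while the single-NIC 118 MB/sec figure nearly doubles that.
print(round(transfer_minutes(60, 118), 1))   # 8.7 minutes
```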

I could decrease this a little by swapping the older NICs for either a pair of newer Intel single-port NICs (drops consumption by around 2w, but at a cost of ~$160 for the two since they are expensive right now) or a newer dual-port NIC (might only save 1w or so) for close to the same price. Best bang for the buck would likely be a new PSU: I have a bronze-rated 380w unit in there now, and a good gold-rated 300-350w unit would be slightly more appropriately sized, would likely handle the extremely low power draw better, and could push idle down closer to 15w.

Total cost is only about $20 a year (at around $1 per watt per year at my current electric rates) to run the server 24/7/365, between idle power consumption, S3 power consumption and the bit of streaming videos, transferring files, etc. that I do. (One nice thing about my Apple TV: it'll buffer the whole movie, or as much of it as it can, and let the HDD spin down until needed again, unlike some media players which only buffer a few seconds to a couple of minutes and need to constantly stream off the HDD, keeping it spun up the entire time.)
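That ~$20/year figure checks out if you weight the power states by hours per day; a minimal sketch using the numbers from the posts above (21w idle, 2.3w in S3 for 6 hours, 32w streaming), where the 2 hours/day of streaming is my assumption:

```python
# Duty-cycle-weighted annual running cost at ~$1 per watt-year.
# The streaming hours are assumed for illustration; the wattages are
# the figures quoted in the posts above.

def annual_cost(profile, dollars_per_watt_year=1.0):
    """profile: list of (watts, hours_per_day) entries summing to 24 h."""
    avg_watts = sum(w * h for w, h in profile) / 24
    return avg_watts * dollars_per_watt_year

profile = [
    (2.3, 6),    # S3 sleep, 12:45am-6:45am
    (32.0, 2),   # assumed: ~2 h/day streaming with drives spun up
    (21.0, 16),  # idle the rest of the time
]
print(round(annual_cost(profile), 2))  # ~17 -- in line with "about $20 a year"
```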
 
That was an awesome read with lots of info! Thanks for taking the time!

I have a basic fileserver built with old parts that does absolutely nothing but store files and serve them via SMB over the Gb LAN.

It looks like this:

- GA-870A-UD3
- Phenom II X4 @ 3000MHz (1.15v load, 0.75v idle @ 1100MHz, 45nm, 6MB L3 cache)
- 1x2GB DDR3 (1066, 1.5V)
- AMD HD5450 video card
- integrated Realtek NIC
- 10x Seagate 2TB 7200 RPM hard drives (1TB platters)
- Old Chieftec GPS-500AB-A
- 2x 1-to-5 SATA power cables
- 2x 1-to-5 SATA 2.0 port multipliers
- 2x something like this for the hard drives, but simpler: just two pieces of plastic with holes for the screws, and the hard drives sit one on top of another with a few millimeters of space between them
- 1x 12cm fan in front of each of the 5-HDD sandwiches, to keep them cool (~450 RPM)
- 1x 12cm fan inside the PSU, hooked to a mobo header (same ~450 RPM)
- 1x 12cm fan on an old Cooler Master Hyper 212 CPU radiator (again ~450 RPM)
- Some old Antec case to keep everything in place
- headless: no monitor, no USB. USB 3.0 disabled in BIOS, along with everything else that could be turned off, like the JMicron ports etc.

The drives spin down after 10 minutes of idle, and the SATA port multiplier toys do an amazing job of waking just the drive that is needed and spinning it down after.

With the drives idle, the UPS says this thing pulls 55-60W, and a little over 100W when the drives are all awake. The PSU efficiency is ~70% at ~50W, so that blows.
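To put that ~70% low-load efficiency in perspective: a chunk of the wall reading is just heat lost inside the PSU. A minimal sketch, where the 85% figure is an assumed efficiency for a better unit, not a measured one:

```python
# At ~70% efficiency, a 55 W wall reading means the components only
# draw about 38-39 W DC; the rest is lost as heat in the PSU.

def dc_load(wall_watts, efficiency):
    """DC power actually delivered to the components."""
    return wall_watts * efficiency

def wall_draw(dc_watts, efficiency):
    """Wall draw needed to deliver a given DC load."""
    return dc_watts / efficiency

print(round(dc_load(55, 0.70), 1))       # 38.5 W reaching the hardware
# The same DC load through an assumed 85%-efficient unit:
print(round(wall_draw(38.5, 0.85), 1))   # ~45.3 W at the wall instead of 55
```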


In the same case, I wanted to build something modern and simple: 22nm, with integrated graphics, that would pull ~20W at idle.

- G1820, G3220, or whatever cheap Haswell (at idle it will be similar)
- small mobo to help with power efficiency
- whatever RAM
- same fans and PSU for now

---------------


As SATA port multiplier support IS a major issue with Intel, I was thinking of a two-SATA-port PCIe 1x card, which has to offer:
-- FIS-based switching for the port multiplier
-- support for 3TB/4TB/6TB GPT drives (future)
-- at least 250-300 MB/sec real-world, so it runs at full HDD speed when files move between hard drives on the same multiplier.

I would need something from China, via eBay, as I'm in Europe.

Candidates:

1. http://www.ebay.com/itm/New-2-eSATA...dapter-Converter-6-0Gbps-ASM1061/141006206664
2. http://www.ebay.com/itm/PCI-E-Expre...tender-Card-with-sata-3-0-cable-/160887199214
3. http://www.ebay.com/itm/PCI-E-To-SA...Expansion-Card-PCI-E-Adapter-R2-/281514050617

3 is cheap (ASM1061), but I can't tell whether it has PM or over-2TB support...

1 is cool, as it has over 200 reviews and has all the specs I want/need, as far as I could tell from searching through the reviews (support for 3-4TB drives and FIS).
2 is based on a Marvell 9170. Again it has all the right specs, FIS and GPT support, but might be slower than the ASMedia (?!)

SOME cards on eBay look like generic versions of those, with ASM1061 or Marvell 9170, but the descriptions are lackluster. Some say "up to 2TB", with no mention of FIS or even the crappy command-based switching.

JMicron cards are to be avoided, as they suuuck. I've tried the JMicron ports on the motherboard: even when PM works, real-world speeds are 70-90MB/sec at best.

So this could be another useful thing for builders: a nice, cheap 2-port SATA3 PCIe 1x card, with FIS and 2TB+ GPT support.

-----------------------


Now, I wanted to find out about dual NICs. Does it work with the integrated Realtek + another cheap Realtek? Integrated Realtek + Intel?

Win8 has the new SMB speed (SMB Multichannel). Does this only work Win8 <-> Win8, or does it work Win8 <-> Win7?

My N56U with Padavan firmware for sure doesn't support this. How much of an investment would it be to have a fileserver working at full dual-NIC speed?
 
Just Win8 <-> Win8 (or newer versions), it will not work with Win7.

At least in my limited playing, you may need the same adapter for it to work. I haven't done extensive testing, but when I've paired one of my Intel NICs with an onboard NIC (Realtek and QCA), it has only worked at 118MB/sec. Go back to both Intel NICs and I get 230+MB/sec.

I don't have two of anyone else's NICs, so I can't try a pair of Realteks or a pair of QCAs or whatever. That said, they SHOULD work. I just don't know if dissimilar ones will.

The thing that occurred to me after testing is that I should have tried a reboot, as sometimes SMB Multichannel doesn't like working after I change something and I need to reboot.

So I'll give that a try at some point.

Personally, I have a pair of Intel Gigabit CT single-port NICs in my server and my desktop: less than $100 for all 4. You could get dual- or quad-port NICs if you wanted.

A dual-port motherboard should in theory work too. I would THINK that anything dual-port or better should work fine with SMB Multichannel, but I can't say that for certain. Mixing adapters from different manufacturers seems to break it, but I have also only done the MOST cursory testing. It might work fine (and at some point soon, I plan to test some more).
 
I've started a new build...

For just under $80 I got 3 parts:

- G3220
- GA-H81M-DS2
- 2GB DDR3 1333, 1.5V


Using the same Chieftec GPS-500AB-A and just a laptop hard drive for the OS, I see 25W at idle in Win 8.1 64-bit.

Retested the AMD build with all the hard drives and fans disconnected, but with a 7200 RPM desktop hard drive for Win7 32-bit. It idles at 48.5W. A laptop hard drive or an SSD could bring it down to 45.

So we see a ~20W difference between the old 45nm AMD and the new 22nm Haswell.

Waiting on some ASMedia and Marvell PCIe 1x cards, hoping to find one that works with the port multiplier.

H81 does not support SATA port multipliers. Had to test this just in case... :D
 
Look at the Antec EarthWatts Green 380w PSU if you want something cheap and efficient. That is what I am running; I got mine on sale for probably $32 a few years ago. 70-71% efficient at that load according to the chart.

The only other one I'd consider might be the Seasonic 380w gold-rated unit, as it hits around 75% efficiency at that draw. That works out to maybe 1-2w of saved power over the EarthWatts 380w, but for $25-30 more.

The Seasonic 380w is what I am considering getting at some point, though I don't think I can really justify the extra cost for only saving 1 or maybe 2w of power total.

That said, a really crappy PSU could EASILY be drawing 2-5w more at those power draw figures... which might make it worth getting an Antec EarthWatts 380w if you can find one on sale at some point.
 
You can add an additional SATA card for more ports, but it will use power. You only mention 10 hard drives, not their size; running fewer, higher-density drives will save power. The other thing is that laptop hard drives pull very little power at idle vs. desktop or server drives.
 
Now it's down to fewer than 10 drives: ~6-7 4TB 5900 RPM.

The port problem was fixed with cheap ASM1061 cards. I bought all of them (posted earlier) and all worked with the port multiplier toy. Kept one, sold the rest...
 
Before I bought my pre-built NAS box, I was looking into the ASRock Rack C2750D4I - there's a nice review of it over on AnandTech...

Look at all the ports on that board, and it has a management processor along with two GigE ports and a PCIe slot if one wants to drop in a 10GigE card...

Almost perfect for a home-built NAS box..


