Windows 10 Pro Video Editing Server - Used Supermicro SuperServer

DIYVideoEditServe

New Around Here
So I dove off the deep end on this... But I needed to.

We are a small video production company that is experiencing growing pains with our editing storage. We need to work in Adobe Premiere Productions to share our work, and shared storage is what makes that feasible. Anyhow, to that end I bought a used Supermicro SuperServer chassis from UNIXSurplus on eBay and have been setting it up for a week or so now, learning a crap-ton as could be expected. The specs for that box are listed at the bottom of this post.

The workstations are:
- Windows 10 Home gaming-style PC (Gigabyte Aorus Gaming i7 with GTek 10G NIC)
- i7 MacBook Pro (OWC Thunderbolt NIC)
- i9 MacBook Pro (OWC Thunderbolt NIC)

So it's a happy mix of OSes... no Linux or FreeNAS stuff. I want the server to run Windows 10 Pro since there are applications we use that will take advantage of the hardware, Vienna Ensemble being the first one to try. Plus, I have zero experience with Linux etc. and would rather not go down that rabbit hole if possible.

PROGRESS THUS FAR:
Since we have 36 total bays, I wanted to leave room to grow. Starting with
- 12 HGST 12TB SAS drives to build the first RAID

First issue was figuring out how best to do this, since there are no real options for high-speed software RAID in Windows. I looked at Storage Spaces, but that only gave me parity and not striping, so that's out. I eventually flashed the HBA back to IR mode to use the hardware RAID it was designed for. After much thinking about RAID 6, I went with RAID 10 and a lower capacity, since the card only does 0, 1, or 10. Plus, after hearing about RAID 6 rebuild nightmares, I think this is the best option for speed/availability for us. The card limits you to 10 disks per group, so I have a 54TB RAID 10 and an 11TB RAID 1. I had no idea that the background initialization did not stop me from using the array and thought it was going to take a week before I could test it!! LOL... anyhow, many things are being learned on the job, so to speak.
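
In case anyone wonders where the 54TB and 11TB numbers come from, here is my rough math (just a sketch; it assumes the drives are 12 decimal TB and that Windows is showing binary TiB):

# Rough capacity check for the two arrays (assumes 12TB = 12e12 bytes per drive,
# and that Windows reports capacity in binary TiB)
raid10_drives = 10                                # card limit: max 10 disks per RAID 10 group
drive_bytes = 12e12                               # one 12TB HGST drive
raid10_usable = raid10_drives * drive_bytes / 2   # RAID 10 mirrors everything, so half is usable
raid1_usable = 2 * drive_bytes / 2                # the two-disk RAID 1
print(round(raid10_usable / 2**40, 1))            # ~54.6 -> the "54TB" volume
print(round(raid1_usable / 2**40, 1))             # ~10.9 -> the "11TB" volume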

ON TO THE NETWORK:
I went with a QNAP QSW-M1208-8C managed switch as it mixes SFP+/DAC with regular RJ45. OK so far... But wow, the info on how to set that switch up is minimal, and I had to reset it to get the default IP to work before I found the Qfinder app, which will search it out on the network in case it grabbed an address from a DHCP server first. That took an afternoon for me. :( It seems to be working just fine now and has the server, the PC workstation, and a 1G link to the rest of the network and router.
https://www.amazon.com/gp/product/B08JTZ79KT/?tag=snbforums-20

OK, first the PC workstation: I originally had an ASUS 10G NIC that arrived DOA and would not even power up in the PCIe slot. Not sure what was up with that. I ordered a GTek-rebranded Intel NIC with a single SFP+ port and have been messing with that since. The PC was full up on PCIe cards and the GTek really did not like the last slot. Lost video, couldn't even see the BIOS post. After moving cards around it's now in an x16 slot and everything seems to work, but not at good speed. And that is what brings me to you fine folks.
https://www.amazon.com/gp/product/B01LZRSQM9/?tag=snbforums-20

TESTING:
I am using LAN Speed Test Lite and am getting horrible results: Write - 0.19Gbps and Read - 1.06Gbps. Just barely using up gigabit. OK, so I've been reading a bunch of the suggestions here and have tried a couple of things (including moving the NIC to an x16 slot) but no improvement. I also tried moving a 100+GB video file over and would get around 200MB/sec for about 30 seconds, then it would slow to 0 for another 30 seconds, then go back up to 200, repeating this cycle until the file transfer completed.
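
To put those numbers in the same units (just a quick conversion, nothing fancy):

# Converting between the Gbps that LAN Speed Test reports and the MB/s Explorer shows
def gbps_to_mb_per_s(gbps):
    return gbps * 1000 / 8           # decimal megabytes per second

print(gbps_to_mb_per_s(0.19))        # ~24 MB/s  - the write result
print(gbps_to_mb_per_s(1.06))        # ~133 MB/s - the read result
print(200 * 8 / 1000)                # 1.6 Gbps  - the 200MB/sec bursts, so the link can clearly do more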

- Jumbo frames are set to 9014 on both machines.
- The SuperServer has only one of its four NICs connected right now (RJ45, CAT6, 15').
- The file is coming from a normal 7200rpm SATA drive, so the speed makes sense but the dropouts don't.
- The PC is connected by a 1m SFP+ DAC cable.

I have not done anything more radical than setting the jumbo frame size and thought now is a good time to get some advice. I hope it's something stupid that my inexperience has overlooked, but who knows...
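
For what it's worth, here's the little check I plan to run from the PC to confirm the jumbo frames actually make it to the server without fragmenting (just a sketch - it assumes the 9014 setting corresponds to a 9000-byte MTU, and SERVER_IP is a placeholder for the server's real address):

# Windows "ping -f" sets the don't-fragment bit; if jumbo frames work end to end,
# an 8972-byte payload (9000 MTU - 20 IP header - 8 ICMP header) should get replies.
import subprocess

SERVER_IP = "192.168.1.50"    # placeholder - substitute the server's actual IP

for payload in (1472, 8972):  # standard-frame payload vs jumbo-frame payload
    result = subprocess.run(
        ["ping", "-n", "2", "-f", "-l", str(payload), SERVER_IP],
        capture_output=True, text=True)
    ok = "TTL=" in result.stdout
    print(payload, "OK" if ok else "failed (fragmentation needed or no reply)")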

Thanks in advance for any help on this and hopefully I can share good real world experiences with this type of setup as we use it more.

-Ashley


VIDEO SERVER/NAS SPECS:

SKU: 4U-X10DRI-T4+-36BLS3
Performance Specs:
Processor: 2x Intel Xeon E5-2620 v3 six-core 2.4GHz (12 cores total)
Memory: 128GB DDR4 (16 x 8GB - DDR4 - REG 2133)
Hard Drives: None
Controller: 1x AOC-S3008L-L8e 12Gb/s HBA (listed as UNRAID/FreeNAS-ready)
NIC: * Integrated Intel X540 Quad Port 10GBase-T

Supermicro Model: SSG-6048R-E1CR36N
Secondary Chassis/ Motherboard specs:
Supermicro 4U 36x 3.5" Drive Bays
Server Chassis/ Case: CSE-847BE1C-R1K28LPB
Motherboard: X10DRi-T4+
* Integrated IPMI 2.0 Management
Backplanes: 2x
*BPN-SAS3-846EL1 24-port 4U SAS3 12Gbps single-expander backplane, supports up to 24x 3.5-inch SAS3/SATA3 HDD/SSD
*BPN-SAS3-826EL1 12-port 2U SAS3 12Gbps single-expander backplane, supports up to 12x 3.5-inch SAS3/SATA3 HDD/SSD
PCI-E expansion slots (low profile): 2x PCI-E 3.0 x16, 3x PCI-E 3.0 x8, 1x PCI-E 2.0 x4 (in x8)
HD Caddies: 36x 3.5" Supermicro caddy
Power: 2x 1280Watt Power Supply PWS-1K28P-SQ
Rail Kit: Generic Supermicro 3rd party
 
That start/stop behavior may be cache fill/empty.
How much SSD cache do you have on the NAS?

Direct-connect TB3 or faster will always have higher performance than LAN.
 
No caching involved. The transfer speed while copying the big file makes sense as the source HD can only put out around 200MB/s. The start/stop is befuddling...

I did some ATTO disk tests on the RAID and was getting 450+MB/s writes and upwards of 950MB/s reads! That seems to be working...

-Ashley

 
Random writes and reads, or sequential?

It sounds like something is buffering - either the source disks, internal buses, the NIC (1Gbit/sec = 128MB/sec), or the receiving system. I assume the two are connected through a switch running at line speed.

I would run iperf to see if there are any network bottlenecks between the two over the LAN. That should give you a RAM-to-RAM value across the backplane bus and network.
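
If getting iperf going is a hassle, even a bare-bones Python socket test will give a rough RAM-to-RAM number (a sketch only; iperf is still the better tool - run it with "server" on the SuperServer, then "client <server-ip>" on the PC):

# Crude RAM-to-RAM TCP throughput test (rough stand-in for iperf)
import socket, sys, time

PORT = 5201
CHUNK = b"\0" * (1024 * 1024)            # 1 MiB of zeros, sent over and over
SECONDS = 10

if sys.argv[1] == "server":
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while (data := conn.recv(1024 * 1024)):
            total += len(data)
        elapsed = time.time() - start
        print(f"{total * 8 / elapsed / 1e9:.2f} Gbps over {elapsed:.1f}s")
else:                                     # client <server-ip>
    with socket.create_connection((sys.argv[2], PORT)) as conn:
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(CHUNK)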

What happens if you direct-connect the two - server to PC over 10Gb or 1Gb LAN? That eliminates the router or switch from the equation.
 
One other thought - are all devices, including the switch and/or router, set up with the same frame size?
They have to be, otherwise frames have to be translated.
Frame size may not make any difference to the application, so maybe reset to the default across all devices on the LAN?
 
You will need 10Gb Ethernet NICs to get anywhere near the ATTO speeds you documented. And a 10Gb switch to connect through. And probably quality Cat6 cabling, although for very short distances you might get away with high-quality Cat5e.

Do the server's x16 slots slow down to x8 if two cards are installed, or do they maintain x16 independently on the PCIe 3.0 bus? What other ports are they sharing interrupts with?
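
For raw bandwidth, even a narrower slot should be plenty for one 10GbE port; the question is more about what shares the slot. Rough per-lane figures, so treat these as ballpark:

# Approximate PCIe 3.0 bandwidth per slot width vs what one 10GbE port needs
lane_gb_per_s = 0.985            # ~985 MB/s usable per PCIe 3.0 lane (128b/130b encoding)
nic_gb_per_s = 10 / 8            # 10GbE line rate = 1.25 GB/s

for lanes in (4, 8, 16):
    print(f"x{lanes}: {lanes * lane_gb_per_s:.1f} GB/s available vs {nic_gb_per_s:.2f} GB/s needed")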
 
All NICs and the switch are 10G as per the specs listed. Looking at the switch, they are registering at 10G. I'll have to double-check the frame size; I've never set that on a switch before. Not sure if the ATTO test was sequential or random.

Can't directly connect the two, as the server is RJ45 and the workstation is SFP+.

The PCIe slots I'm not sure about; it's hard to get clear info on that from the docs. The specs state that the NIC only requires an x4 slot. The BIOS doesn't let me see interrupts for the slots. I wish I were more tech-savvy at that level.

Cable distance is minimal: 15' for RJ45, 1m for SFP+.

I have the iperf tutorial up and waiting for me, but I am on a gig at the moment so I'll have to wait a day or so to try stuff again. A stupid Windows update interrupted the RAID initialization so it's starting all over again... oy, Windows, you horrible mistress!

Updates when the time opens up...

-ashley
 
Check the motherboard manual for the interrupt-sharing arrangement.
The switch frame size would need to be consistent with all devices on the network. I would put the two devices back to the default unless you can change it for all devices, not just these two.
 
