Intel SS4200-E Lives Again

Well, after initializing, there were no shared folders. I was thinking that maybe there was still some existing data on the drives causing issues, so I ran the option to 'erase disks'. Apparently this takes as long as initializing the RAID, as it's only at 63% after about 2 hours, lol.

Wondering if there was something else going on, I checked dmesg and found a lot of I/O error messages like these:
Code:
end_request: I/O error, dev sdd, sector 225865999
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226286439
printk: 930 messages suppressed.
Buffer I/O error on device dm-3, logical block 113143188
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143189
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143190
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143191
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143192
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143193
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143194
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143195
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143196
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 113143197
lost page write due to I/O error on dm-3
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226286695
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226286951
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226287207
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226287463
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226287719
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226287975
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226288231
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226288487
sd 3:0:0:0: SCSI error: return code = 0x06000000
end_request: I/O error, dev sdd, sector 226288743
Sounds like I might have a bad SSD in the mix on sdd. Luckily, I have another 128GB drive that I could use, but it might be tomorrow before I can run the tests since all this has to get sorted out first. :(
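
For anyone wanting to check a suspect drive before swapping it, smartctl is the usual tool. A rough sketch, assuming the drive is temporarily attached to a regular Linux box as /dev/sdd (the stock firmware may not ship smartctl):
Code:
# overall health verdict and the raw attribute table
smartctl -H /dev/sdd
smartctl -A /dev/sdd    # watch Reallocated_Sector_Ct and pending sectors
# kick off a short self-test, then read the results a few minutes later
smartctl -t short /dev/sdd
smartctl -l selftest /dev/sdd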
 
I should be getting another set of larger drives this week to try large drive support again.

In the meantime, I purchased a lot of smaller SSDs for system upgrades and decided to take 4 of the 128GB variety and try them in the ss4200 to see if the unit will exceed 100MB/s with 4x 500MB/s SSDs installed as a striped mirror.

It's at 6% of initializing the disks, so it will probably be 12+ hours before I can begin testing, but this will be interesting to see and should tell conclusively whether the unit's speed limits are due to its software design or its hardware. Stay tuned...

While striping the array - if they're mounted, you can write/read from them if using MDADM...
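
For anyone following along, this is roughly how you'd check that from the command line, assuming the array shows up as /dev/md0 (the stock firmware may name things differently):
Code:
# shows resync progress; the array stays readable/writable while this runs
cat /proc/mdstat
# more detail, including rebuild percentage and member disks
mdadm --detail /dev/md0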
 

Hint - sort the disks individually before building the array...

I'm recommending MDADM and LVM to build the array itself.
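
As a rough sketch of what that could look like, assuming four blank disks sdb through sde (device names are placeholders, not the unit's actual layout):
Code:
# striped mirror (RAID10) across the four disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# layer LVM on top so volumes can be resized later
pvcreate /dev/md0
vgcreate vg_nas /dev/md0
lvcreate -l 100%FREE -n lv_data vg_nas
mkfs.ext3 /dev/vg_nas/lv_data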

Attached might be of some assistance...
 

Attachments

  • LVM_Arrays.pdf
    124.5 KB
While striping the array - if they're mounted, you can write/read from them if using MDADM...
I'm not exactly sure how I would do that, but there's nothing on them anyway since they're only for testing purposes.

I took the disks, deleted all partitions, and set each to offline in Windows 7 before I tried them in the ss4200-e. All had worked fine prior to testing in the ss4200-e, but they were never extensively tested. The one that went bad doesn't even show up in Win 7 anymore, so I think 'it's dead, Jim'.

Even though it takes time, I don't want to use anything outside of the stock setup, as the speed test should prove whether or not there is a software limitation.
 
Swapped out the drive, it got a bit confused, I confused it some more by telling it to do stuff, I rebooted it, and it finally knows what's going on. It's reconstructing the data protection again and is at 9%. This time the shares are already showing up so I think it will complete successfully. So I'll hopefully be able to run some benchmark testing tomorrow. :) Stay tuned!
 
So everything was rebuilt and ready to go by this morning. Running 4x 128GB SSDs in a mirrored RAID didn't yield any faster top speeds, except for one spike to 94MB/s on reading. Otherwise, the results were the same as before, just with much more consistency, probably due to the lack of any seek latency.

I also checked dmesg to see if there were any more errors, or 'spurious' errors related to NCQ, and everything was clear.

So this conclusively confirms that this unit cannot saturate gigabit, even with drives capable of it. :( Perhaps with a CPU upgrade it might, but I highly doubt it, as it seems to barely use the CPU it already has. The 'top' command showed that even with two systems hammering it, the load only went to 2-something. The highest I've ever seen top show the load is 7, and that was when the unit was confused yesterday, probably from a bunch of overlapping commands/processes.
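
For anyone who wants to reproduce the observation, something like this on the unit's console samples the load while a client hammers it (a sketch only; the stock firmware's busybox top may want slightly different flags):
Code:
# print load averages and the top CPU consumers every 5 seconds
while true; do
    uptime
    top -b -n 1 | head -12
    sleep 5
done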

I recently acquired another one of these units that's essentially a shell without a DOM or a power supply, so I may experiment with putting a faster processor in that one, since I could just use a regular ATX power supply with more headroom. Then I could re-run the tests to see if it will hit the gigabit limit. But I still doubt it will, so for now the focus will be on capacity testing.
 

Too bad it only has the single DIMM slot - the chipset supports dual-channel memory, and with those old cores, using matched DIMMs would increase performance a fair bit...
 
That's probably true, but from what I've seen, dual-channel operation doesn't always translate to a doubling of memory bandwidth. Besides, I don't really see this unit coming close to topping out what it's got at the moment. Not really sure why, either.

Even this person, who did some extensive testing on the hardware and a CPU upgrade to an E5800 using Ubuntu, came to the conclusion that a CPU upgrade was useless:
http://networkingathome.blogspot.com/2015/11/intel-ss4200-processor-upgrade-and.html

And this person's test was significant, as the single-thread performance more than doubled over the stock CPU:
https://www.cpubenchmark.net/compar...um-E2220-vs-Intel-Celeron-420/1102vs1137vs650
 
So I finally have in hand 2x 14TB WDC enterprise grade drives and got a chance to start some testing.

cat /sys/block/sda/device/model and cat /sys/block/sdb/device/model say:
Code:
# cat /sys/block/sda/device/model
WDC  WUH721414AL
# cat /sys/block/sdb/device/model
WDC  WUH721414AL
Unfortunately, it looks like these also don't have recognized sector sizes, despite also being 512e, per dmesg:
Code:
sda : very big device. try to use READ CAPACITY(16).
sda : unsupported sector size -65536.
SCSI device sda: 0 512-byte hdwr sectors (0 MB)
sda: Write Protect is off
sda: Mode Sense: 00 3a 00 00
SCSI device sda: drive cache: write back
sd 0:0:0:0: Attached scsi disk sda
sdb : very big device. try to use READ CAPACITY(16).
sdb : unsupported sector size -65536.
SCSI device sdb: 0 512-byte hdwr sectors (0 MB)
sdb: Write Protect is off
sdb: Mode Sense: 00 3a 00 00
SCSI device sdb: drive cache: write back
sd 1:0:0:0: Attached scsi disk sdb
This time the unsupported sector size is different from the 6TB drives', but it is still a negative number--likely a sign that the old kernel is overflowing a signed 32-bit value somewhere while working out the drive's geometry.
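
As a side note, on a machine with a newer kernel, the reported sector sizes can be read straight from sysfs, which is a quick way to confirm a drive really is 512e (these attributes may not exist on a kernel as old as this unit's):
Code:
cat /sys/block/sda/queue/logical_block_size    # 512 on a 512e drive
cat /sys/block/sda/queue/physical_block_size   # 4096 on a 512e drive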

Hdparm does show them as being recognized as the full 14TB though:
Code:
# hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
        Model Number:       WDC  WUH721414ALE6L4
        Serial Number:      9JGAMZXT
        Firmware Revision:  LDGNW07G
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors:27344764928
        device size with M = 1024*1024:    13351936 MBytes
        device size with M = 1000*1000:    14000519 MBytes (14000 GB)
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 16
        Advanced power management level: unknown setting (0x00fe)
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    NOP cmd
           *    DOWNLOAD_MICROCODE
                Advanced Power Management feature set
                Power-Up In Standby feature set
           *    SET_FEATURES required to spinup after power up
                SET_MAX security extension
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
           *    Media Card Pass-Through
           *    General Purpose Logging feature set
           *    WRITE_{DMA|MULTIPLE}_FUA_EXT
           *    64-bit World wide name
           *    URG for READ_STREAM[_DMA]_EXT
           *    URG for WRITE_STREAM[_DMA]_EXT
           *    WRITE_UNCORRECTABLE command
           *    {READ,WRITE}_DMA_EXT_GPL commands
           *    Segmented DOWNLOAD_MICROCODE
                unknown 119[6]
           *    unknown 119[7]
           *    SATA-I signaling speed (1.5Gb/s)
           *    SATA-II signaling speed (3.0Gb/s)
           *    unknown 76[3]
           *    Native Command Queueing (NCQ)
           *    Host-initiated interface power management
           *    Phy event counters
           *    unknown 76[12]
           *    unknown 76[15]
                Non-Zero buffer offsets in DMA Setup FIS
                DMA Setup Auto-Activate optimization
                Device-initiated interface power management
                In-order data delivery
           *    Software settings preservation
                unknown 78[7]
                unknown 78[10]
                unknown 78[11]
           *    SMART Command Transport (SCT) feature set
           *    SCT LBA Segment Access (AC2)
           *    SCT Error Recovery Control (AC3)
           *    SCT Features Control (AC4)
           *    SCT Data Tables (AC5)
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
                frozen
        not     expired: security count
        not     supported: enhanced erase
        228min for SECURITY ERASE UNIT.
Checksum: correct
#
The drives do not show up at all in the web interface. Even when selecting the options to erase the disks or change the data protection type (mirror/parity), nothing changes (and neither data protection type is selected, because the drives aren't recognized). The log says:
Code:
Date User Event
08/15/2019 4:13 pm  The main storage area could not be initialized.
08/15/2019 4:13 pm  Storage protection change requested.
08/15/2019 4:13 pm  Drive erase requested.
08/15/2019 4:13 pm  The main storage area could not be initialized.
08/15/2019 4:13 pm  Storage protection change requested.
08/15/2019 3:50 pm admin  Successfully logged in.
08/15/2019 3:10 pm  The main storage area could not be initialized.
08/15/2019 3:10 pm  Drive added to system.
08/15/2019 3:10 pm  The main storage area could not be initialized.
08/15/2019 3:10 pm  The network interface is now available.
08/15/2019 3:10 pm  System powered off.
08/15/2019 3:10 pm  System powered on.
That's it for now. I think the next step is to put my original drives back in and see if the 14TB drives are recognized as USB and eSATA external storage.
 
Okay, so I've done some more testing.

Made a 14TB GUID NTFS volume via an eSATA dock, then checked its operation in the same StarTech USB 3.0 cloning dock I used earlier, attached to an HP 8760w running Win7, and everything checked out fine. (One wicked fast hard drive too, transferring 100MB/s write and 179MB/s read with a 500MB test size in lan_speedtest.)

I attached this drive to the ss4200-e via the StarTech dock over USB, and dmesg had this to say:
Code:
usb 1-5: new high speed USB device using ehci_hcd and address 2
usb 1-5: configuration #1 chosen from 1 choice
scsi6 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 2
usb-storage: waiting for device to settle before scanning
scsi 6:0:0:0: Direct-Access     WDC  WUH 721414ALE6L4     LDGN PQ: 0 ANSI: 0
sde : very big device. try to use READ CAPACITY(16).
sde : READ CAPACITY(16) failed.
sde : status=0, message=00, host=5, driver=00
sde : use 0xffffffff as device size
SCSI device sde: 4294967296 512-byte hdwr sectors (2199023 MB)
sde: Write Protect is off
sde: Mode Sense: 03 00 00 00
sde: assuming drive cache: write through
sde : very big device. try to use READ CAPACITY(16).
sde : READ CAPACITY(16) failed.
sde : status=0, message=00, host=5, driver=00
sde : use 0xffffffff as device size
SCSI device sde: 4294967296 512-byte hdwr sectors (2199023 MB)
sde: Write Protect is off
sde: Mode Sense: 03 00 00 00
sde: assuming drive cache: write through
 sde: sde1
sd 6:0:0:0: Attached scsi disk sde
usb-storage: device scan complete
usb 1-5: USB disconnect, address 2
It again has issues reading the capacity, though it knows it's a big drive. The problem seems to be that it never figures out how big. Even hdparm wouldn't read the drive information, even though the drive was assigned to sde.

Next, I formatted the drive with a single 2TB MBR NTFS volume. It worked immediately and dmesg interestingly had this to say:
Code:
usb 1-5: new high speed USB device using ehci_hcd and address 3
usb 1-5: configuration #1 chosen from 1 choice
scsi7 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 3
usb-storage: waiting for device to settle before scanning
scsi 7:0:0:0: Direct-Access     WDC  WUH 721414ALE6L4     LDGN PQ: 0 ANSI: 0
sde : very big device. try to use READ CAPACITY(16).
sde : READ CAPACITY(16) failed.
sde : status=0, message=00, host=5, driver=00
sde : use 0xffffffff as device size
SCSI device sde: 4294967296 512-byte hdwr sectors (2199023 MB)
sde: Write Protect is off
sde: Mode Sense: 03 00 00 00
sde: assuming drive cache: write through
sde : very big device. try to use READ CAPACITY(16).
sde : READ CAPACITY(16) failed.
sde : status=0, message=00, host=5, driver=00
sde : use 0xffffffff as device size
SCSI device sde: 4294967296 512-byte hdwr sectors (2199023 MB)
sde: Write Protect is off
sde: Mode Sense: 03 00 00 00
sde: assuming drive cache: write through
 sde: sde1
sd 7:0:0:0: Attached scsi disk sde
usb-storage: device scan complete
I'm not sure if it worked because it just happened to be 2TB or because of something else.

Next, I plan to see if GUID partitions are the problem, and if not, how large they can be before they become one.
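
For the curious, laying down test partitions like that is quick with parted from any Linux live environment. A sketch only--/dev/sdX is a placeholder, and this destroys whatever is on the drive:
Code:
parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart primary ntfs 0% 2TB
parted /dev/sdX mkpart primary ntfs 2TB 6TB
parted /dev/sdX mkpart primary ntfs 6TB 12TB
parted /dev/sdX mkpart primary ntfs 12TB 100%
parted /dev/sdX print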
 
So I formatted the drive with the following GUID partitions--2TB, 4TB, 6TB, and 1TB+ (the remainder of the drive). This format didn't show up either, and dmesg said the exact same thing as above.

I think the conclusion I have come to on large drive support is this: the stock software simply can't handle drives over 2TB. Internally they aren't recognized at all, and externally only MBR partitions of 2TB or less are usable--GUID partitions never show up regardless of size. I think this concludes my testing with large capacity drives.

The next goal will be to install 4x 4TB drives, which is the largest configuration that I've ever seen for sale. I hope this proves as easy as previously installing smaller drives. Stay tuned.
 
One thing I just thought of trying for kicks is to create an ext2/3 partition and see if that is seen on any external connection. One more thing to try!
 
Well, ext2 didn't work via USB using the StarTech. The drive did not show as attached in the web interface, and dmesg gave the same responses as it did on the previous connection.

I'm going to try ext3, since that is supposed to be the native file system for the unit, and after that I'll have exhausted everything I can think of.
 
I've been formatting the drive to ext2 and ext3 using a Parted Magic live CD from the Ultimate Boot CD and a Kingwin eSATA dock for the drive.
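
For anyone repeating this, the formatting itself is a single command from any Linux environment. A sketch, assuming the test partition is /dev/sdX1 (a placeholder):
Code:
mke2fs /dev/sdX1       # plain ext2
mke2fs -j /dev/sdX1    # -j adds the journal, which is what makes it ext3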
 
So no dice with the 14TB attached via the StarTech USB 3.0 cloning dock formatted ext3.

And it was just brought to my attention why--ext3 has size limits that depend on the block size in the kernel. And since this system is older, there's no way its kernel supports a large enough block size for bigger drives. Thus ends the quest for a larger drive, and it also definitively answers the question. :)
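
If I have the details right, the arithmetic behind the 2TB wall works out roughly like this--old 32-bit kernels track sectors in a 32-bit value, and ext2/3 track blocks in a 32-bit value:
Code:
32-bit sector count: 2^32 sectors x 512 bytes = 2 TiB   (max block device on old kernels)
ext3, 1 KiB blocks:  2^32 blocks  x 1 KiB     = 4 TiB   (max filesystem)
ext3, 4 KiB blocks:  2^32 blocks  x 4 KiB     = 16 TiB  (max filesystem)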
 
I just found a post on the xigmanas forum where someone has successfully used 8TB drives with this unit and xigmanas:
https://www.xigmanas.com/forums/viewtopic.php?p=88069#p88069

So this confirms that the unit's hardware can take 8TB drives, and probably much larger too. I have another ss4200-e that is just a shell missing the power supply and DOM, and I'll see if I can't replicate the xigmanas setup. Because 64TB (16TB x4) on this old guy would be a hoot. :)
 
So I recently acquired a WD My Cloud EX2 with 2x 3TB WD Red drives in it. I fully expected this to spank the ss4200-e in terms of raw performance, but was very surprised to find that in head-to-head tests on the same Win7 client system, there was sometimes almost no difference at all. In fact, in some of the 200MB tests using lan_speedtest, the Intel actually beat the WD.

Given that the ss4200-e I was testing has only some older HGST 1TB drives vs. the newer 3TB WD Reds in the EX2, it was quite impressive to see what the ss4200-e could do at its age.

The advantage the EX2 has, of course, is that it can take drives bigger than 4TB, and that's the plan for this unit--as a backup of the 16TB (raw) ss4200-e that I'm working on putting together once enterprise drive prices cooperate with me.

At some point I'll acquire a Synology to see how it stacks up against the ss4200-e. The ss4200-e lives on. :)
 
Hi,

I just stumbled across your post:
- I own an ss4200 too, and I upgraded the CPU to a "Core 2 Duo?"; it was worth it. Next time I log into it I will post the correct CPU type.
According to [3], an "Intel E4700" should work.

EDIT: I have checked the CPU, and according to dmesg it is a Core2 E4300:

https://ark.intel.com/content/www/d...ssor-e4300-2m-cache-1-80-ghz-800-mhz-fsb.html


- running it on 1GB of RAM
- using FreeBSD
- used full disk encryption in the past, entering the passphrase via serial console
- using 4x 4TB drives
- ZFS is out of the question for this small machine
- it is still in use, though now it only provides iSCSI encapsulated in SSH, exposing the raw drives; the decryption is done by the accessing machine. I use it to rsync my main server to this backup server, and it's only powered on when I sync it (rough sketch after this list).
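
The SSH-encapsulated iSCSI setup is a neat trick worth spelling out. A rough sketch of the client side, assuming a Linux initiator with open-iscsi and a target listening on loopback port 3260 on the NAS (host name and port are placeholders):
Code:
# forward the local iSCSI port through SSH to the NAS
ssh -N -L 3260:127.0.0.1:3260 user@backup-nas &
# discover and log in to the target through the tunnel
iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260
iscsiadm -m node -p 127.0.0.1:3260 --login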

In 2014 I posted this guide [1] (which got a bit botched by a forum software update) on how to set up FreeBSD headless over a serial console, with (not-so-)full disk encryption. It's tricky, to say the least. However, in my experience, even when BIOSes can't identify 4TB drives correctly, I have no problem accessing them from FreeBSD, and they have never wrapped around to sector 0 when hitting 2TB (I test for this first, before putting important data on a drive).
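
That wrap-around check can be done with plain dd. A destructive sketch, assuming the drive is /dev/ada0 on FreeBSD and holds nothing important: write a marker at sector 2^32 and see whether it landed on sector 0 instead:
Code:
# zero sector 0, then write a random 512-byte marker at the 2 TiB boundary
dd if=/dev/zero of=/dev/ada0 bs=512 count=1
dd if=/dev/random of=/tmp/marker bs=512 count=1
dd if=/tmp/marker of=/dev/ada0 bs=512 seek=4294967296 count=1
# if addressing wrapped, the marker ended up at sector 0
dd if=/dev/ada0 bs=512 count=1 2>/dev/null | cmp -s - /tmp/marker && echo "WRAPPED!"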

EDIT:
My current server setup is based on an old Core i5-660 on a Zotac H55N-ITX mainboard with 8GB of RAM, inside a Chenbro ES34169 ITX NAS case (I added better control for the fans), with 2x 8TB drives and the system on a 480GB SSD. I use smartctl to power down the drives when not needed and powerd with SpeedStep to reduce power consumption. The i5 is ideal because it's still reasonably fast for VLC, office, web browsing, and development, and in combination with the onboard graphics the power consumption is okay (~25 watts with all HDDs asleep). It's also smaller than the ss4200.

I'm also experimenting with running Win7 on this machine and having FreeBSD inside VirtualBox with raw-disk access to the 2x 8TB drives, which works well: I can boot Windows or FreeBSD, and the virtualization gives me full Windows access with a FreeBSD NAS server in the background.
Next I want to reverse that setup, virtualizing Win7 inside FreeBSD.

Since you mentioned Synology: I also have an ARM Marvell Kirkwood/Linux-based DS411j. It works with 2TB drives, but when trying 4TB drives everything gets terribly slow; I don't know why.


[1] https://www.snbforums.com/threads/g...adless-install-notso-full-disc-encrypt.20747/
[2] http://albertech.net/2012/06/upgradecpu-on-the-intel-ss4200-from-celeron-420-to-the-pentium-2160/
[3] https://forum.home-server-blog.de/viewtopic.php?t=5915&start=90
 
What a coincidence! I was just re-reading the whole post to remember what I had done and not done. :D

For FreeBSD I have no doubt that a CPU upgrade would help. In fact, for anything other than the stock software on this machine, I have no doubt a CPU upgrade would help.

The fastest processor I found with the same TDP was the Celeron 450. It is just a faster processor in the same family, with significantly faster single-thread performance. I actually have one of these now, so I may try it in my ss4200-e that is in pieces. This compares the speed and power ratings of the Celeron 450 to the E2220 and E4700, both mentioned in the thread you indicated (even though I can't read German, I figured out the processors :D).

If you are only running 1GB of RAM, I highly recommend upgrading to 2GB, as it will give a nice boost for sure. (The stock software, by contrast, can't benefit from more than 1GB.)

Your FreeBSD guide is interesting. And I completely believe you on the drive recognition, as the chipset specs are fully SATA compliant and have no drive size limitations that I can find.

Have you considered upgrading from 4x 4TB to 4x 10/12/14TB? It would be interesting to see how it would handle larger and faster drives. I'm still looking for 4x 4TB enterprise drives at a nice discount. It seems like all the <8TB drive prices have increased across the board. :(
 
So while looking at my firewall logs, I noticed something odd--it seems both of my units (one a Fujitsu Scaleo and the other an Intel version) are trying to contact 70.148.54.17 multiple times via port 80, about every 2 minutes. From the whois, this IP is owned by AT&T as part of an ADSL DHCP pool based out of Birmingham, AL. Ironically, that is only 100 miles from where the units are, which is why I started looking into this.

There also seems to be an attempt from 70.148.54.17 to broadcast on the LAN to 239.255.255.250 via UDP port 1900, but from inside the LAN. Luckily, my router identifies this as spoofing and blocks these packets. And still the units reach out to this IP via port 80, which will route to the public Internet if not blocked.

I think a feature explained in an Intel presentation on the ss4200-e provides a clue as to what is going on here:
HTML:
Storage systems are discovered by the Storage System Manager installed on a client PC by multicast User Datagram Protocol (UDP) broadcasts
UDP queries are sent out every 4 to 7 seconds to discover newly connected devices
The discovery UDP broadcasts are not “routable”
The discovery packets are dropped by routers and may be blocked by firewalls or managed switches
For the storage system to be discovered by a client, it must be located within the same IP subnet as the client system
The proprietary discovery process will not see other device types, routers printers...
The UDP queries sound a lot like SSDP (Simple Service Discovery Protocol), which according to https://github.com/universAAL/middleware/wiki/Discovery-protocol-SSDP
would send 'three discovery messages for the root device'. This matches the groups of 3 blocked requests to UDP 1900 that I see in my router logs.
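
Anyone curious can watch for this traffic themselves with tcpdump from a machine on the same subnet (the interface name is a placeholder):
Code:
# watch the SSDP discovery multicasts on UDP 1900
tcpdump -n -i eth0 udp port 1900
# and the unit's outbound port-80 attempts to that mystery address
tcpdump -n -i eth0 host 70.148.54.17 and tcp port 80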

And while this makes sense, the IP address 70.148.54.17 doesn't. It is being used by two units that are as different as two ss4200 units can be--one is a Fujitsu running the EMC version, while the other is the Intel version running the Intel-branded EMC software. Slight differences, but enough that I would have expected each to behave differently. Moreover, the 70.148.54.17 address doesn't change--even after reboots, they pick up where they left off trying to contact it.

While I'm not too concerned about what this means on my own network, since these requests are blocked, it does make me wonder what other ss4200-e units out there are doing and what IP address they are trying to contact. And the bigger question is: why this IP? If anyone has any ideas or theories, I'd really like to hear them. :)
 
