Today's delightful new problem (10gbit/s connection incoming)


Was contacted yesterday by my ISP, who now has an ETA of 1-4 weeks.
The first step is simply sending out the new media converter and confirming I've received it, then booking a technician to make the changes in the local station and syncing that with a time when I'm available to do the change on my end of the fiber as well.

It currently seems they may simply go with a pure media converter for the initial step, even though they've mentioned (see previous post) equipment they're evaluating for the routing. I'm eager to start testing things out anyhow, and will gladly take on the challenge of building a software setup that achieves as good throughput as possible. :)
 
Please let us know what type of router/switch you end up using for your 10GE service.

BTW: This is what Utopia suggests regarding 10GE equipment under "What kind of equipment do I need?" over at https://www.utopiafiber.com/10g-info/
Code:
10 Gbps consumer routers aren’t quite widely available yet. Though there are some small business options or you can build your own dedicated router and use open source router software.
As for an off the shelf router option, Mikrotik offers a few options for a router that can accept 10G connections.

[LIST]
[*]Part Number: CCR1009-7G-1C-1S+PC – Offers 1x10G SFP+ port as well as 7xGigabit Ethernet ports for about $500
[*]Part Number: CCR1036-8G-2S+ – Offers 2x10G SFP+ ports as well as 8xGigabit Ethernet ports for about $1,100
[/LIST]
As for building your own router, there are open source router software options such as VyOS and pfSense. We tested VyOS using the following hardware set up and were able to have reliable speed test results in the 3-5 Gbps range.

[LIST]
[*]Intel 240GB SATA3 SSD
[*]G.SKILL TridentZ RGB Series 16GB DDR4 Memory
[*]MSI Z270 Motherboard
[*]Intel 7th Gen i7-7700K
[*]StarTech.com SFP+ Server Network Card 4 port NIC
[*]Corsair RMx Series RM850x 850 W power supply
[/LIST]
All consumer grade routers available only support 1 Gbps connections. You will not be able to achieve multiple gigabit per second throughput over a wireless network.


Since the beginning of this year we are now being offered options from multiple ISPs for residential 10Gbps up/down active Ethernet FTTH, and I think I may consider upgrading if the proper equipment can be sourced and is not too ridiculously expensive. TIA! :D
 
Alright, delivery of the media converter is done.
Planet XT-705A fitted with a suitable SFP+ module was the equipment I was provided with.

The additional CPE the ISP is testing out will be optional equipment, just like the wireless routers they provide today for the gigabit connections.

Confirmed with the ISP that the equipment was delivered, so status right now is waiting for a technician to be scheduled to perform the changes in the station, which leaves some time to consider the next step.
Also awaiting delivery of a fiber patch cable so I can connect the new SFP+ module and keep it in the media cabinet as well... Unless I redo my entire plans and start setting up a half-size rack for enough oomph. :D
 
My new setup will be Comcast through an SB8200 into pfSense into an EdgeSwitch 8 (see the EdgeSwitch 8 review), and for APs either the UAP-AC-PRO or the UAP-AC-HD.

I don't like the CPU in the SG-4860 right now, but I hope for a new one soon. I may have to build one myself, something like this.

As for pfSense and a switch, this may help:

Or search for "Layer 3 switch + pfSense"; you will see 9 comments, click on that.
You can also Google "layer 3 switch + pfsense" or "layer 3 switch and pfsense"; there is a lot of info out there, just look.

Processor : AMD Athlon II X4 610e Propus 2.4GHz 45watt
CPU Cooler : Zalman 9500A-LED 92mm 2 Ball CPU Cooler (fan off)
Motherboard : Asus M4A89GTD Pro/USB3 AM3 AMD 890GX
Memory : Kingston 4GB DDR3 KVR1333D3N9K2/4G
Hard Drive : Western Digital Caviar Green WD30EZRX
Power Supply : Antec Green 380 watts EA-380D
Case : Antec LanBoy Air (completely fan-less)

Network Card : Intel PRO/1000 GT PCI PWLA8391GT PCI
-OR-
Intel I350-T2 Server Adapter (PCIe x4)

What do you think?

I think I should comment on the right kind of hardware. If all you need is basic NAT then router choice isn't a problem, but you'll need at least 2x10Gb/s ports.
For a better one, at the moment I would pick between Ryzen, Threadripper, Epyc, or Intel Extreme (except 6th and 7th gen). The reason is that a lot of Intel CPUs have flaws and usually lack PCIe lanes. When doing 10Gb/s you can't just grab a quad-port SFP+ NIC and expect it to work well; you need to check the lane configuration as well. Regardless of whether it is PCIe 1.1, 2.0 or 3.0, all chipsets, CPUs and so on have a fixed number of lanes, and the lowest supported generation and width is what gets used. 10Gb/s NICs typically use PCIe 2.0 x4 for a single port and x8 for two ports, so you have to take this into consideration with regard to the platform you use (you may find issues when cards run with fewer lanes).

The other thing is that the CPU needs to be fast enough for the things you want to run on it, for instance however complicated your QoS is, as even the CCR will not do wire speed if your QoS config is set up poorly. The Athlon from AMD does not have what is needed for 10Gb/s. Finally, your RAM bandwidth needs to be at least 4x the bandwidth you need, so 10Gb/s needs 40Gb/s of RAM bandwidth (that's 5GB/s); use all the channels and use fast RAM.

The CCR1009 will do at best 5Gb/s of software NAT and QoS, while the CCR1036 will do 10Gb/s of software NAT and QoS without having to spread the work as thinly as on the CCR1009 to reach the highest throughput. The other catch with the CCR1009 is that despite the 10Gb/s port, 5 of its ports are switched and connected to the CPU via 1Gb/s, so you cannot total up the ports and expect more bandwidth.

The Intel i-series is flawed in the core, and Intel has also been stingy with the PCIe lanes needed for you to run 10Gb/s. You'll tend to find that Intel Extreme, Threadripper and the server platforms come with 10Gb/s NICs, a lot of PCIe lanes and more memory channels, making them far more suitable for high-bandwidth network processing.
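To put rough numbers on the PCIe-lane and RAM-bandwidth points above, here is a small back-of-the-envelope sketch (Python). It's only an illustration: the 4x rule is the rule of thumb from this post, and the per-lane PCIe figures are approximate usable rates after encoding overhead.
Code:
# Back-of-the-envelope check for a 10Gb/s symmetric router build.
# Assumptions: the "RAM bandwidth >= 4x wire bandwidth" rule of thumb from the
# post above, and approximate usable PCIe throughput per lane per direction
# after encoding overhead (PCIe 2.0 ~4 Gb/s/lane, PCIe 3.0 ~7.9 Gb/s/lane).

PCIE_LANE_GBPS = {"2.0": 4.0, "3.0": 7.9}

def nic_slot_ok(pcie_gen, lanes, ports, port_gbps=10.0):
    """Is the PCIe slot wide/fast enough to feed all NIC ports at line rate?"""
    return PCIE_LANE_GBPS[pcie_gen] * lanes >= ports * port_gbps

def ram_bandwidth_needed_gbps(wan_gbps=10.0, symmetric=True):
    """Rule-of-thumb RAM bandwidth (Gb/s): 4x the traffic you want to move."""
    traffic = wan_gbps * (2 if symmetric else 1)
    return 4 * traffic

print(nic_slot_ok("2.0", lanes=4, ports=1))      # True: ~16 Gb/s for one 10G port
print(nic_slot_ok("2.0", lanes=4, ports=2))      # False: two 10G ports want x8
print(ram_bandwidth_needed_gbps(10.0))           # 80 Gb/s, i.e. 10 GB/s of RAM bandwidth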
 
Today my soon-to-be ISP (Bahnhof, Swedish) announced they've just finished their new round of infrastructure upgrades, which enables them to offer symmetric 10 gigabit connections to their customers.
This at a price point of roughly $60 USD a month regular price, with an initial offer of $36 a month for the first 6 months.

It makes me sad to see those prices. I pay 90 dollars (a promo! the normal price is 120 dollars) for 25 Mbps download and 5 Mbps upload; that's what you get for being born in a third-world country, where you even have to pay for the air... To give you an idea, I pay 75 to 90 dollars a month for electricity, 25 dollars for water service, etc.
 
It's similar in the US, though the US has caught up to the 1st world.
 
I think at those speeds (> 1Gbit/s) you start to really see the limits of kernel based networking. The limitation comes down to how many packets per second (PPS) the hardware you use can push across the firewall, rather than just the raw bandwidth it is capable of. It is not all too difficult to push 10Gbit/s across the firewall using some modest hardware, as long as the frame (or packet) sizes are large enough (and hence the PPS lower). As such, using regular Ethernet sized frames (1500 bytes), 10Gbit/s will seem quite achievable (e.g. via a simple iperf3 test), but that is not representative of what you see on the broader internet. Thus, what really matters is packets per second when determining the expected bandwidth, given the traffic profile.

I have been running a pfSense based firewall/router for a little while and recently performed some throughput tests at 10Gbit across the firewall, with two Linux based 10Gbit hosts sitting on either side. If you are interested, please check out my findings:

https://forum.netgate.com/topic/132394/10gbit-performance-testing

As you can see from the thread, pushing 10Gbit/s with a standard internet packet size distribution requires very beefy hardware when using kernel based networking. If you are not planning on consistently maxing out the 10Gbit connection across hundreds of devices, regular x86 hardware and kernel networking will still work (just make sure to size it appropriately). Otherwise, I would recommend looking in more detail at enterprise level solutions.
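To put numbers on the packets per second point, here is a quick sketch (Python) of line-rate packet rates at a few frame sizes, counting only the standard Ethernet preamble and inter-frame gap as per-frame wire overhead:
Code:
# Rough packets-per-second math for a 10 Gbit/s link.
# On the wire every Ethernet frame also carries 8 bytes of preamble/SFD and a
# 12-byte inter-frame gap, so a 64-byte frame occupies 84 bytes of line time.

LINE_RATE_BPS = 10e9
WIRE_OVERHEAD_BYTES = 8 + 12  # preamble/SFD + inter-frame gap

def line_rate_pps(frame_bytes, rate_bps=LINE_RATE_BPS):
    """Maximum frames per second the link can carry at a given frame size."""
    return rate_bps / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

for size in (64, 512, 1518):  # min, mid and max standard Ethernet frame sizes
    print(f"{size:>5} B frames: {line_rate_pps(size) / 1e6:5.2f} Mpps")

# ~14.88 Mpps at 64 B vs ~0.81 Mpps at 1518 B: the firewall has to look up and
# forward ~18x more packets for the same 10 Gbit/s when the packets are small,
# which is why a large-frame iperf3 run is not representative of internet traffic.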

Hope this helps.
 
Hi @tman222, and thank you for chiming in!

I did read your thread on the Netgate forums and found it extremely insightful!

Same with @System Error Message, very useful insights you're sharing.

As for me, I've very much realized what you're saying: there are no cheap/easy solutions available on the market at the moment, so currently the best way to get some utilization out of the higher speed connection is to build a software router. With the Qualcomm IPQ807x family already starting to be announced and used in products, I would anticipate that consumer level hardware may be available within a year or two, so I also have the backup plan of repurposing the build as a general server or NAS.

I'm trying to select between a few different Supermicro configurations (which I have a good local vendor for), mainly Atom C3958/C3955, Xeon D-1541 or Xeon D-2123/D-2141.

Comparing them, the D-2100 has a heavy advantage in memory bandwidth, as it has gone up to quad channel memory and peaks out just below 80GiB/s, compared to ~35GiB/s for the other two.
The D-1500s, and even more so the D-2100, have more punch per core, while the C3000s outdo the other two in core count.
Two other aspects are QAT and, of course, energy consumption.
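As a sanity check on those figures, peak DRAM bandwidth is just channels x transfer rate x 8 bytes per transfer. A tiny sketch (Python); the DIMM speeds used here (DDR4-2400 dual channel vs DDR4-2666 quad channel) are my assumption for these platforms, and the real numbers depend on the RAM actually fitted:
Code:
# Theoretical peak DRAM bandwidth = channels * transfer rate (MT/s) * 8 bytes.
# DIMM speeds are assumptions for illustration; actual bandwidth depends on
# the modules fitted and their timings.

def ddr_peak_gib(channels, mt_per_s):
    """Peak bandwidth in GiB/s for a given channel count and DDR speed."""
    return channels * mt_per_s * 1e6 * 8 / 2**30

print(f"2 x DDR4-2400: {ddr_peak_gib(2, 2400):.1f} GiB/s")  # ~35.8 GiB/s
print(f"4 x DDR4-2666: {ddr_peak_gib(4, 2666):.1f} GiB/s")  # ~79.5 GiB/s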

Any general comments/thoughts on how their strengths/weaknesses would play out for the purpose of routing?

My current understanding is that when using something like pfSense/OPNsense it is very important to have high single-core performance, as there's a lot of non-parallel work going on in the routing/network stack, especially when working in the kernel network stack. As such, the D-2100 would currently be the best bet.

If moving things into user space, with something like TNSR or any DPDK based solution, the multi-core performance would become more useful. I have an idea of experimenting with an FD.io/VPP based setup myself to see how hard it would be...

As such, the optimal hardware would probably start out as a D-2100 and move towards the C3000 in case a VPP solution is finalized. The "safe bet" would then be to go for the Xeon D-2141 so as not to miss out on the memory bandwidth or the single-core performance; the main drawback of that is the higher energy consumption compared to the D-1500 series, and even more so compared to the C3000. Also, in case I'm integrating VPN into the solution, the QAT support, which can be had almost "for free" on the C3000 but at a much higher price increase for the Xeon models, would be of some use.

I can round off this philosophical post with the news that the switchover to 10gbit/s should be happening tomorrow unless any last minute changes happen.
 
Most important is memory bandwidth, and the bus between the NIC and CPU. If it uses PCIe (or any other bus) it must have enough to fully use the link, something that is lacking in the lower power CPUs you are looking at.

Memory bandwidth is very important; as I said, you need at least 4x the bandwidth. 10Gb/s is actually 20Gb/s symmetrical, requiring at least 80Gb/s of memory bandwidth to fully use it. I would pick the Xeon over the Atom, especially since it's a full core, because you get faster buses or more PCIe lanes, which matters for 10Gb/s or more; even better if the chipset (especially onboard) comes with a NIC interface in the SoC/CPU itself. One example of this is Ryzen's 2nd gen quad core APU: Udoo has a board based on an SoC with a couple of 10Gb/s NIC interfaces directly connected to the CPU, but Udoo only exposes a single GbE NIC instead. There are 4 variants of the SoC used that will be plenty fast, capable of giving you full 10Gb/s internet use, especially with dual channel DDR4, which at 20GB/s would mean 160Gb/s (this will need to be checked somewhere).
 
Yes, and here I quickly realize I deserve a facepalm for reading things over too quickly... I just edited my post above, as it originally said the C3000/D-1500 had ~35Gbit/s, where it really should have been ~35GiB/s, which already gives 14x the bandwidth of the full symmetrical 10 gigabit/s connection. Still, the D-2100 pushes it further to 28x.

The Atom C3000 as well as both the Xeon D-1500 and D-2100 do come with up to 4 integrated 10GbE NICs in the SoC, and the QAT module is also integrated on some models. But as you say, better make sure the motherboard/server is wired up to use them.

I will make sure to take a dive into Udoo and look into potential setups using Ryzen as well, but I have to admit they are a lot rarer to find packaged and ready to go at the vendors I'm used to scouting. Not impossible that I will have to build the machine myself; it is indeed a DIY project anyhow. :D
 
Check for the SoC on WikiDevi; there are 4 variants and Udoo only uses 2 in their products. There are other products with the SoC though. It's not the memory bandwidth listed on Intel ARK that you have to look at but the actual bandwidth of your RAM config; that max figure assumes the maximum frequency, the best timings and so on. As I said, some actually report the real memory bandwidth, and I think it's going to be enough most of the time. Dual channel DDR4 for sure has enough bandwidth if using decent RAM; DDR3 dual channel would depend. The RAM type is important, as cheap RAM usually has high timings, reducing the total bandwidth. While latency isn't particularly important (making Ryzen a good candidate), bandwidth is.

The thing is, that Ryzen SoC has an integrated Vega GPU, so if your router can make use of GPU processing, like with PacketShader (which was only a research test), throughput won't be a problem and you could have all the network features you like, to an extent.
 
@Magebarf - did your 10Gbit circuit get installed? If so, how is the performance? Have you made a decision regarding hardware yet?

Nope, sorry. Monday came and went, and no switchover happened. After talking with tech support I was informed they were going to have to change a bit more equipment in the station than they had initially planned, which means they had to delay again. No new ETA either, which I was hoping to get; that's why I haven't given any status update yet...
 
The Udoos are completely outclassed, as is the Ryzen embedded board with 10GbE - even with flow offload and user space drivers, the clocks are not fast enough to process incoming/outgoing packets at 10GbE - with or without NAT.

There are certain Xeon-D SKU's that are interesting but even then, it's a challenge... having been there/done that with user space stuff - startup things...

Comes down to clocks - which are incredibly important in SW based routing/switching under Linux and BSD.

From a switching perspective - the Broadcom Tridents are pretty good, along with the proprietary fastpath stuff they have - similar to Mellanox (Mellanox is interesting right now, as they've incorporated much of the IP they acquired from Tilera into the BlueField series of switching/routing SoCs, making a working 10GbE SoC)

Good 10Gbe is non-trivial actually...
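A rough way to see why "clocks are packets per second": the per-packet CPU budget is just the core clock divided by the packet rate. A tiny sketch (Python); the clock speeds and the single-core assumption are only illustrative:
Code:
# Per-packet cycle budget on one core = clock rate / packets per second.
# Illustrative only: assumes a single core handling the whole packet stream
# at 10GbE line rate with minimum-size (64-byte) frames, ~14.88 Mpps.

def cycles_per_packet(clock_hz, pps):
    """CPU cycles one core can spend on each packet at a given packet rate."""
    return clock_hz / pps

for ghz in (2.0, 3.0, 4.0):
    budget = cycles_per_packet(ghz * 1e9, 14.88e6)
    print(f"{ghz:.1f} GHz core: ~{budget:.0f} cycles per 64-byte packet")

# ~134 / ~202 / ~269 cycles leave little room for firewall rules, NAT and QoS,
# which is why small-packet routing in software leans so heavily on clock speed.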
 
Actually, looking at the Ryzen SoCs (not Udoo's boards), the 10GbE ports are connected to the CPU, though maybe not as directly as Tilera does theirs. However, clocks are not equal, and Ryzen has way higher clocks than Tilera does, the other thing being that it has more cores.
CPU IPCs are not equal either: recent x86 far outclasses MIPS in routing and NAT, which in turn outclasses ARM.

Multithreading is an aspect too, and if one can use the GPU on those Ryzen SoCs, that's a lot of performance, not to mention the DDR4 RAM that you can upgrade :p . As I mentioned above, memory speeds do play a role as well. Older CPUs achieve 10Gb/s easily; the test rig for PacketShader did 40Gb/s via CPU using two 4- or 6-core CPUs together. Theirs weren't overclocked, unlike the one I use, which is overclocked by 50% but is busy running games. That Ryzen at stock has a lot more IPC than the 1st gen i-series Intel CPUs, plus DDR4, a lot more internal bandwidth, faster caches, etc.
 
Routing/switching in SW is dependent on clocks, not IPC... So once one gets into 10GbE, clocks rule - clocks are packets per second.

I like MIPS for routing/switching, but ARM on the low end won the clock race - that's why most consumer routers are ARM based these days.

@System Error Message Focus - packets per second is the main benchmark.

Getting into treatment of the packets - then IPC helps with QoS and other layer 3 stuff... and there, clocks still rule.

Getting back to the Packet Shader stuff - it was interesting years back, but it also means a jump across the PCIe bus to a GPU (or GPUs) and back. That adds latency that Intel has basically covered with AVX/AVX2 and very large L3 caches - there are a couple of Xeon SKUs that are relatively narrow (fewer cores, but more clocks), and those SKUs also have QAT in the chipset...

I should add - lots of interesting stuff these days also in the Linux kernel related to packet offload - the netfilter team has done good work with flow-offload, both HW and SW... check it out.
 
Alright then @tman222 now I'm game!

With an ETA of about minus 4 hours, I arrived home and established that it was not my equipment that needed a reset; it was actually the LNK LED on the media converter that had given up!

As I had no heads-up that this was going to happen today, I called the ISP's tech support a quarter before closing time to either, in the best case, get confirmation that the switchover had happened, or, in the worst case, log a support case so a technician could have a look at it during the night.

So, this is the first post with the new wavelengths going through the fiber! The end node, after the media converter that is, is still 1000BASE-T, so I won't really see that much of a difference yet.
In other words, it is soon time to start exploring one or some of the options discussed in this thread in practice!
 
If clocks were all that mattered, my CCR would be slower than the current line of Asus routers. Also, PCIe latency isn't as high as you think, especially now that the controller is located on the CPU rather than the chipset. IPC is king, and performance comes down to total instruction throughput (not the architecture); this is where the MIPS CPUs have the advantage, but more development done on ARM, and cheaper ARM chips that provide more total clocks than MIPS, is what makes them get chosen, along with other SoC features like WiFi, PCIe, USB3 and many more. You also have marketing: phones use ARM and people are hooked on the CPU, so using it in your router sells.

I mean, ARM runs software quite decently; it is only slower than MIPS in IPC where routing, NAT and QoS are concerned, but at the other end of the spectrum, math and floating point, ARM rules. ARM is actually more efficient or faster than x86 - this is actually where x86 does lose out if you compare the server ARM variants - but router manufacturers want to provide all-in-ones, and MIPS performs those other functions poorly. This is why ARM was chosen: it had more development behind it than MIPS, and it ran the other stuff consumer manufacturers wanted to implement much better. If you run a file server on OpenWrt with a 400MHz single-core MIPS, you'd get 2MB/s. If you ran it on the AC68U (stock 800MHz dual-core ARM), you'd get around 40MB/s, with either the FAT32 file system or the more efficient ext2 Linux file system (which is more than 4x faster). So ARM was picked not because it was faster at networking than MIPS; it's only about 50% slower in that regard (IPC), compensated by the fact that you can get much higher clocks and core counts than MIPS on the low end (4x more total clocks combined).

There's going to be a mega amount of barf to clean up :p
 

That's great to hear! How are you liking the new service so far? I actually started looking around at solutions a little bit again, and it is challenging to find a server with high enough CPU frequencies to process enough packets while at the same time being at least somewhat power efficient. Honestly, if I had a multi-gigabit connection in the 5 - 10Gbit/s range right now, I'd probably opt for one of these boards (the 12 core is my favorite out of the bunch) and mount it in a 1U chassis:

https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-8C-TP8F.cfm
https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-12C-TP8F.cfm
https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm
https://www.supermicro.com/products/chassis/1u/505/SC505-203B

The other option would be to get a multi-core latest gen i7 with very fast cores that push closer to 4GHz, but of course that will have a much higher TDP. It all depends on your traffic requirements/profile and whether any additional network services you plan on using (e.g. IDS/IPS, VPN, etc.) are single- or multithreaded. One nice thing about having multiple cores is that the system can dedicate at least one core per NIC queue for packet processing.
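As a rough illustration of the one-core-per-queue idea, here is a toy sketch (Python) that only computes the CPU affinity masks you would pin each NIC queue's IRQ to; on Linux the masks are written to /proc/irq/<n>/smp_affinity, and the simple queue-to-core layout here is just an example:
Code:
# Toy helper for the "one core per NIC queue" idea: compute the hex CPU
# affinity mask for each RX/TX queue's IRQ. This sketch only prints the masks;
# actually applying them (e.g. to /proc/irq/<n>/smp_affinity) is left out.

def queue_affinity_masks(num_queues, first_core=0):
    """One hex CPU mask per queue, assigning queue i to core first_core + i."""
    return [format(1 << (first_core + q), "x") for q in range(num_queues)]

for q, mask in enumerate(queue_affinity_masks(num_queues=4)):
    print(f"queue {q} -> core {q}, smp_affinity mask {mask}")
# masks 1, 2, 4, 8: queues 0-3 pinned to cores 0-3, one queue per core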
 
