pfSense - Netgate RCC-VE 2440 thread

FWIW - some reports suggest the 2440 might not be up to a full gigabit connection; the next step up would be the RCC-VE 4860, which has two more cores, more RAM, and a slightly higher clock speed...

If you're a dedicated pfSense user, the SG-2440/SG-4860 units come preinstalled, and the margin goes to support the pfSense project...
 
I wonder how much benefit more cores really bring. I think clock speed and more RAM are probably the bigger wins, with clock speed first; RAM matters if you run a VPN. My thinking is based on a home or small user count: 4 GB is enough RAM. I had 8 GB, but it made no difference, so I pulled it to save watts. As you add users, RAM will probably make a difference, but not in a small setting.

My CPU stays around 0% on the pfSense system information page. Once in a while it jumps to 4%. I am running an old dual-core Xeon. I have a quad-core, but it does not seem to be needed, and it uses a few more watts, maybe 40 vs. 35. It has been a while since I looked at it. My CPU stays around 27 °C. I have a TWC 300 Mbit/s connection.
 
I am not saying pfSense is not SMP friendly. I am saying that in a small environment, I bet clock speed makes the biggest difference. I would also think packages would be the perfect thing to spin off to a different core. Router software is all about I/O, not the CPU. The faster the clock ticks, the less latency in I/O-bound software.
 
You know what would be nice to see: a couple of pfSense routers tested here on this site, maybe one with lots of cores and one with a higher clock speed. It would give us a feel for the hardware. I would like to see how well pfSense does on latency. It could have some code bloat, since it is written for generic hardware. Faster x86 hardware will probably overcome any code bloat, but a test would be interesting.

PS
I am talking about the base system, no packages.
 
FWIW - some reports suggest the 2440 might not be up to a full gigabit connection; the next step up would be the RCC-VE 4860, which has two more cores, more RAM, and a slightly higher clock speed...

pfSense doesn't advertise the 2440 as gigabit-WAN capable. They do mention that for the 4860. That's the fun part if we read between the lines.

I think the two extra cores contribute little to achieving gigabit WAN. Higher clock speed, perhaps. Maybe you can prove the 2440 capable of gigabit WAN in a lab setup when you have time...the reports claiming it's not possible may have had sub-optimal configs.
 
pfSense doesn't advertise the 2440 as gigabit-WAN capable. They do mention that for the 4860. That's the fun part if we read between the lines.

I think the two extra cores contribute little to achieving gigabit WAN. Higher clock speed, perhaps. Maybe you can prove the 2440 capable of gigabit WAN in a lab setup when you have time...the reports claiming it's not possible may have had sub-optimal configs.

It's probably more about clock speed, and there are ways to configure pfSense, if one isn't careful, that can really impact performance, just like any firewall/router.
 
I would like to see how well pfSense does on latency.

What kinds of latency are you looking for? Define them first. We can ask sfx to test and report back...
 
What kinds of latency are you looking for? Define them first. We can ask sfx to test and report back...

Just note that my device isn't in the "lab"; it's in production, so if a test is service-impacting, it likely won't be done...
 
The long latency you see on part of the graph below is a client talking to a media server in Japan...

[Attachment: status_rrd_graph_img.php (RRD quality graph)]
 
The long latency you see on part of the graph below is a client talking to a media server in Japan...

[Attachment 5936: RRD quality graph]

Be watchful for long-term fluctuations in that graph, like the 10 ms decreases or 30 ms increases I've experienced. The ping monitor (apinger?) is known to be a bit buggy; a restart of the service usually fixes the glitch.
 
What kinds of latency are you looking for? Define them first. We can ask sfx to test and report back...

I was using latency in the general sense. It would be nice to have a pfSense box compared on the router charts here, since they are our standard. The faster the clock, the less latency between clock ticks. This is small, but there are lots of things that impact latency; I guess you can add instruction set, quality of programming, etc. By instruction set I mean that some CPUs can perform an operation in one clock tick, whereas other CPUs may take several. Overall it makes a difference. The end result is what they can do on the router charts, which is the sum of all things for that device.
 
I was using latency in the general sense. It would be nice to have a pfSense box compared on the router charts here, since they are our standard. The faster the clock, the less latency between clock ticks. This is small, but there are lots of things that impact latency; I guess you can add instruction set, quality of programming, etc. By instruction set I mean that some CPUs can perform an operation in one clock tick, whereas other CPUs may take several. Overall it makes a difference. The end result is what they can do on the router charts, which is the sum of all things for that device.

Agree in general..

I recall seeing a Linux patch that significantly reduces the latency to service softirq requests in the kernel. Overall throughput isn't improved, but individual requests are serviced much faster.

I run a custom kernel on my RT-AC56U. My ping time between a wireless client and the router is 0.8-1.6 ms. I haven't spent time figuring out whether any improvement comes from the softirq patch or others.
 
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.9 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.2 ms
64 bytes from 192.168.1.1: icmp_seq=6 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=7 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=8 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=9 ttl=64 time=0.3 ms
--- 192.168.1.1 ping statistics ---

10 packets transmitted, 10 packets received, 0% packet loss

round-trip min/avg/max = 0.2/0.4/1.9 ms
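As a side note, summary lines like the one above can be recomputed from the raw replies. A minimal sketch in Python (the regex assumes the `time=X ms` reply format shown above; this is an illustration, not anyone's actual tooling):

```python
import re

def ping_stats(output: str):
    """Extract per-reply RTTs from captured ping output and summarize them."""
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", output)]
    if not times:
        return None  # no replies found
    return {
        "count": len(times),
        "min": min(times),
        "avg": sum(times) / len(times),
        "max": max(times),
    }
```

Feeding it the capture above should reproduce ping's own min/avg/max summary.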
 
I was using latency in the general sense. It would be nice to have a pfSense box compared on the router charts here, since they are our standard. The faster the clock, the less latency between clock ticks. This is small, but there are lots of things that impact latency; I guess you can add instruction set, quality of programming, etc. By instruction set I mean that some CPUs can perform an operation in one clock tick, whereas other CPUs may take several. Overall it makes a difference. The end result is what they can do on the router charts, which is the sum of all things for that device.

Aren't modern high-resolution timer interrupts much more granular than CPU clock speed or OS ticks?

Also, most modern OS kernels are "tickless" (dynamic ticks).
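For what it's worth, the advertised resolution of the monotonic clock can be queried from userspace. A quick sketch in Python (assumes a POSIX system that exposes `CLOCK_MONOTONIC`):

```python
import time

# Reported resolution of the monotonic clock, in seconds.
# On a modern tickless kernel with high-resolution timers this is
# typically 1 ns, far finer than a legacy 100-1000 Hz scheduler tick.
res = time.clock_getres(time.CLOCK_MONOTONIC)
print(f"CLOCK_MONOTONIC resolution: {res * 1e9:.0f} ns")
```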



My pfSense box usually has the CPUs down-clocked to 349 MHz (from 2.8 GHz). When I used powerd to force the CPU to its max clock, the ping was virtually unchanged. The respective ping results follow.

Code:
10 packets transmitted, 10 packets received, 0% packet loss, time 8992ms
rtt min/avg/max/mdev = 0.082/0.138/0.192/0.045 ms
Code:
10 packets transmitted, 10 packets received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.088/0.137/0.201/0.043 ms

I could run more tests, but if changing the CPU clock from 349 MHz to 2.8 GHz makes no obvious latency change, I suspect latency is dominated by other factors.
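That result is consistent with some back-of-envelope arithmetic. Assuming a purely hypothetical 10,000 CPU cycles spent forwarding a packet (the real figure isn't in this thread), the clock-speed contribution to latency is small at either frequency:

```python
CYCLES_PER_PACKET = 10_000  # hypothetical figure, for illustration only

for hz in (349e6, 2.8e9):
    # time the CPU itself spends on the packet, in microseconds
    us = CYCLES_PER_PACKET / hz * 1e6
    print(f"{hz / 1e6:>6.0f} MHz: {us:5.1f} us per packet")
```

Even at 349 MHz the CPU cost works out to tens of microseconds, a fraction of the ~140 µs round trips measured above, so NIC interrupts, bus transfers, and wire time plausibly dominate.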
 
My pfSense box usually has the CPUs down-clocked to 349 MHz (from 2.8 GHz). When I used powerd to force the CPU to its max clock, the ping was virtually unchanged.

On my eight-year-old ThinkPad with an 802.11n card (pulled from a Mac), ping time to the same router is 0.6 ms...faster than a Sandy Bridge machine with an 802.11ac card.

You guys have fast pings...about as fast as my ping over wire, if not faster. Does pfSense really make such a difference? Hmm..
 
Latency is funny: you can improve it only so much before you hit another bottleneck in the chain of devices.
Nullity, I see the same thing with my pfSense box, where the CPU runs at 749 MHz with a max of 2331 MHz. Running at max does not improve my ping time either.
 
On my eight-year-old ThinkPad with an 802.11n card (pulled from a Mac), ping time to the same router is 0.6 ms...faster than a Sandy Bridge machine with an 802.11ac card.

You guys have fast pings...about as fast as my ping over wire, if not faster. Does pfSense really make such a difference? Hmm..

My pings were wired: from an Arch Linux box's integrated GbE NIC, through my RT-N66U, to the Intel GbE NIC in my pfSense box.
 
