[Talk] - pfSense 2.3 release/upgrades


Check your voice mail - 2007 called - they want their computer back :D

At least it keeps the computer room warm ;) ;)

Shut up... :)

I mainly use it because it has an integrated CompactFlash reader.


The old pfSense displayed the current & max freq, so I doubt that my motherboard/BIOS is the problem. I doubt the amd64 version of pfSense would work if the PC did not support it.
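
If you want to double-check what the box actually supports, the stock FreeBSD cpufreq sysctls show the same data the old dashboard did. A minimal sketch from a shell, assuming the standard dev.cpu names:

    # current CPU frequency in MHz
    sysctl dev.cpu.0.freq
    # every frequency/power level the cpufreq driver is willing to offer
    sysctl dev.cpu.0.freq_levels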
 
I've also been following the pfSense forum after the 2.3 release. There seem to be a lot of early issues. I'm not using pfSense yet (hopefully soon), but if I were, I would also stick to 2.2.6 until the noise has calmed down. :)

Ole
IMO people didn't plan for, or didn't realize, that some packages had been removed. A fresh install should not be an issue.
 
Well, I received an answer to the power throttling issue on the pfSense forum. There was a decision to remove the lower speeds.
https://svnweb.freebsd.org/base?view=revision&revision=276986

From what I can tell there isn't any detailed behind-the-scenes reasoning for this. Maybe with gigabit internet connections more processing power is needed: a slow-clocked CPU takes time to throttle up, and a gigabit traffic spike can be a big hit for a CPU running at a low speed.

There is a patch posted on the thread I started on the pfSense forum if you want to add the code to regain the lower throttling speeds.
https://forum.pfsense.org/index.php?topic=109704.0

I think I am going to run it the way it is configured and not patch the code.
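
Reading that FreeBSD revision, the lower levels look like they come from the relative-throttling drivers (acpi_throttle and p4tcc) being disabled by default rather than deleted outright. If that is right, a pair of loader tunables should bring them back without patching any code - a sketch, with the tunable names taken from my reading of that commit, so verify them against your FreeBSD version:

    # /boot/loader.conf.local on pfSense (or /boot/loader.conf on stock FreeBSD)
    # re-enable the relative throttling drivers that r276986 turned off by default
    hw.acpi.throttle.control=1
    hw.p4tcc.control=1

After a reboot, the extra levels should show up again in dev.cpu.0.freq_levels.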
 

I had a problem which I saw mirrored on the forums, where the HTTP GUI would be slow to respond to every request. Normal routing was not a problem, but GUI interaction was. I solved it by either maxing out powerd or disabling it.

My experience could be unrelated, but I do actively follow the pfSense forum (though, mostly the traffic-shaping sub-forum).
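
For anyone hitting the same thing, here is roughly what those two workarounds look like on plain FreeBSD (a sketch; pfSense exposes the same powerd modes in its own GUI, so the rc.conf route below is really for a stock box):

    # option 1: keep powerd, but pin the CPU at its top frequency on AC power
    sysrc powerd_enable="YES"
    sysrc powerd_flags="-a maximum"
    service powerd restart

    # option 2: disable powerd entirely (the CPU then stays at whatever
    # frequency it booted at, usually full speed)
    sysrc powerd_enable="NO"
    service powerd onestop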
 
Sounds like a good reason to me to keep the CPU running on the high side. I don't see any of this with my $10 Xeon. I do have a quad Xeon I can pop in there if I outgrow what I have.
 
Some of the older CPU scalers were pretty slow at changing up/down - which is probably why FreeBSD (pfSense upstream) made the decision to do what they did..
 
With gigabit internet connections, it looks to me like they are targeting around 2 GHz for real gigabit throughput. This could have been meant per processor class, since different processors have different processing power, but I did not read it that way.
 
Everything up to 2.2.x was a crazy "diff" of FreeBSD's code.

2.3.x is a much simpler pkg overlay/addition to the FreeBSD code.

As a wanna-be dev, I think this change is huge & awesome.

@Nullity - I think most of the issues have been with upgrades, not fresh installs... and perhaps with older packages that have not been maintained in some time - OK on 2.2, but the application-layer changes in 2.3 can break a lot of legacy things...

Moving forward, where the pfSense team is headed makes sense, as it brings it more in line with how FreeBSD rolls their packages, rather than a monolithic release..
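
You can see the overlay directly on a 2.3 box, since the pfSense pieces are ordinary pkg(8) packages sitting in their own repository next to the FreeBSD ones. A sketch (the package-name patterns are a guess, and pfSense still wants you to upgrade through its own updater rather than raw pkg):

    # list the pfSense-provided packages
    pkg info -x pfSense
    # the plain FreeBSD packages live right alongside them
    pkg info -x php
    # dry-run an upgrade the same way you would on stock FreeBSD
    pkg upgrade -n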
 
Check your voice mail - 2007 called - they want their computer back :D

At least it keeps the computer room warm ;) ;)

Pentium_D - old school Prescott processor?

My Shorewall box at work is a Pentium 4 with Rambus memory. It used to be a customer's web server, before he moved everything to rackmounted servers, and told me to keep those old parts as he had no use for them...
 

Hehe... that's a space-heater ;)

At some point, I would consider replacing that one... it's running now, but the concern would be if it ever lost power, would it come back up?

;)
 

Although my perspective is purely consumer, I have literally never had any of the dozens of computers I have used in server roles fail like that (which includes a few P4s, even one with the overly expensive RDRAM...).

I have had HD failures, but not much else.

My Pentium D has the largest fan I have ever seen in a consumer PC (120mm x 38mm). It's kinda funny & useful because if the CPU ever gets off idle (it should always be idle), you can hear the fan spool up like a jet. Much louder than all my other PCs. :)
 
Although my perspective is purely consumer, I have literally never had any of the dozens of computers I have used in server roles fail like that (which includes a few P4s, even one with the overly expensive RDRAM...).

All depends on use cases - many 1U servers are ridden pretty hard - in a data center they might have cool air and good power, and that'll keep 'em running until either the AC takes a dump (been there, with ambient getting above 100°F and alarms ringing all over the place) or the power goes out - the pre-RoHS devices are a bit more robust, but even then, old servers do die, and the longer they run, the higher the risk...

This is why old 1Us are so cheap on Craigslist, BTW... sold at almost scrap value...
 
Hehe... that's a space-heater ;)

At some point, I would consider replacing that one... it's running now, but the concern would be if it ever lost power, would it come back up?

;)

It gets rebooted semi-regularly (due to routing issues involving our Asterisk server and the dual WAN) and it always comes back up without any issue. The hard disks are much newer than that; I replaced them a few years ago. I'm waiting for a planned WAN change (ditching our second WAN, and upgrading the DSL link we'll be keeping) before replacing it with, most likely, another Shorewall-based PC.

I used to have four NICs in that thing... Dual WAN + LAN + DMZ for our routed IP block. Asus P4T motherboard if I recall correctly...
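
For the curious, a Shorewall layout like that is mostly just zone and interface definitions. A hypothetical sketch of the two relevant files (interface names and options invented for illustration; a real dual-WAN setup also needs /etc/shorewall/providers for the routing side):

    # /etc/shorewall/zones
    #ZONE   TYPE
    fw      firewall
    net     ipv4        # WAN 1
    net2    ipv4        # WAN 2
    loc     ipv4        # LAN
    dmz     ipv4        # routed IP block

    # /etc/shorewall/interfaces
    #ZONE   INTERFACE   OPTIONS
    net     eth0        tcpflags,routefilter
    net2    eth1        tcpflags,routefilter
    loc     eth2
    dmz     eth3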
 
Although my perspective is purely consumer, I have literally never had any of the dozens of computers I have used in server roles fail like that (which includes a few P4s, even one with the overly expensive RDRAM...).

Quality hardware makes a huge difference. An Asus motherboard with an Enermax power supply will last for years, compared to a PC-Chips/Matsonic motherboard coupled with a $20 500W PSU that's so light that you can lift it with your little finger...
 

Those old boards pre-date RoHS requirements, which helps out a lot, and that machine is old enough to be from before the great capacitor disaster...
 

No, it was after. The capacitor debacle was during the P3 days. I had tons of dead Abit and IBM mobos (Aptivas) come my way back then...

I remember the time one of our junior techs came to me puzzled by an unstable PC they were trying to troubleshoot. I went over, asked for a flashlight, checked the inside of the open PC for 10 seconds, handed the flashlight back, told him the motherboard had to be replaced, and left without saying anything more.

One of them finally came back to my desk a few minutes later, asking me "Uh, I'm sorry but, how could you tell?". I went back, and showed them the leaking capacitors on the board, and explained the issue to them :)
 
Finally the dev team could start coding 3.0 after a break for beers..

Never choose release 3.0 for anything...

Release 1.0 - gotta ship something
Release 1.1 - fix bugs in the shipping release
Release 1.1.1 - fix the fixies...
Release 2.0 - finally, all the stuff we wanted to put in Release 1.0, but busy fixing bugs and gotta ship pressure..
Release 2.1 - fix the things we broke in release 2.0

After the death march... as marketing has spun the 2.0 release as something really cool...

Release 3.0 - new team - let's do it another way, refactor all the code...

Gah... in real life, 3.0 releases are horrible, as many of the original devs/coders have left due to the death march of the rel 1, rel 2 cadence...
 
Yeah, we had Dells all over the place that Dell had to replace the motherboards in.

Asus at the time took a major hit, as they produced many boards for other OEMs - and their supply chain got hammered by the bad/counterfeit formula in the capacitors...

FWIW - a friend of mine made serious money re-cap'ing boards at well below replacement cost - it's about an hour's work to remove and reinstall the board, and the re-cap work itself took about 10-15 minutes at most...

$100 USD/hour ain't bad wages, even for today...
 
