pixelserv - A Better One-pixel Webserver for Adblock

@thelonelycoder thank you

servstats festival :)

[screenshot: servstats page]


400 HTTPS threads, each manipulated to last ~1s on the client side. On the RT-AC56U, RAM usage for the pixelserv-tls process surges to ~30MB, about 60kB per thread (an HTTP thread could use much less).

@jrmwvu04

The servstats page is served by a thread of the same priority as any other service thread. In the above test, the page shows up in a few seconds. And 'kcc' drops back to 1 when clients wane.

15s is not normal.. How can I reproduce your test?
 
400 HTTPS threads, each manipulated to last ~1s on the client side. On the RT-AC56U, RAM usage for the pixelserv-tls process surges to ~30MB, about 60kB per thread (an HTTP thread could be much less).

To put these numbers into perspective for less tech-savvy users:

Assume all 400 HTTPS requests come from one LAN client and you don't have any adblock installed. You're hitting 400 ad/tracking servers by loading one or more webpages.

That's a LOT from one LAN client. Typically I've seen one client loading a few ad-intensive pages hit ~40 ad/tracking servers.

So this workload translates into 10 LAN clients doing ad-intensive browsing at the same time, which should be more than any home/small office will see.
 
15s is not normal.. How can I reproduce your test?
Generally, a reliable way to do it is to open the /servstats page in one tab, go to dailymail in a separate tab and load an article, then click a link in that article to load a new one, and so on. Usually by the 5th or 6th article, going back to the /servstats tab and reloading it will hang for a decent chunk of time.

The changes from an instance where I hung the /servstats interface:

[screenshot: servstats counters]
 
And 'kcc' drops back to 1 when clients wane.
That never happens for me. It appears to stay about as high as it ever gets, as if the threads never fall off.

Edit: this is after about 20 minutes of browsing cnn.com, a known ad-heavy site. I'll check it in the morning to see if it has died down at all.

[screenshot: servstats counters]
 
I can't reproduce any of what @jrmwvu04 is seeing with KL3 on a 56U.
 
Morning update - still seeing this. Any ideas what’s causing my threads to just keep climbing?

[screenshot: servstats counters]
 
@elorimer thank you

@jrmwvu04 If you fire up htop (or top), do you actually see that many threads? What's the RES value for the process?

My current htop output. NLWP is the thread count.
[screenshot: htop output]
 
[screenshot: htop output]


It jumps out at me that you're using swap - I might set that up later and see if it makes a difference.

The swap partition has rescued me multiple times from a crash or a forced manual reboot. Since it's a home system, I usually take risks, and that often leads to disasters..

The good news is that the service threads work as expected. The bad news is I'm unsure why the counters weren't updated..
 
I abused it (cnn.com, loading articles in quick succession) and changed the user back to nobody to see if that mattered. I kept htop up.

[screenshot: htop output]
 
Well, a 256MB swap file did the trick. Same torture tests as before, very different result.

[screenshot: servstats counters]
 
Loaded test4. I'll be out most of the day so it'll just run with normal use, but to this point I have had zero problems since enabling swap. I had been waffling over setting that up, thinking it was a good idea from my linux days long since passed.
 
Loaded test4. I'll be out most of the day so it'll just run with normal use, but to this point I have had zero problems since enabling swap. I had been waffling over setting that up, thinking it was a good idea from my linux days long since passed.

KL-test4 fixed a hard-to-spot crash bug.

The author before me set the message buffer to 8kB; anything in the message beyond 8kB was ignored. A client request was serviced in a newly fork()'ed process, and when that process finished handling the request, any remaining data in the socket was drained and discarded before it exited.

This was okay for a one-time-use process but becomes a problem for a reusable service thread: two successive requests of >8kB would corrupt the message buffer and crash pixelserv-tls.

With the new socket-read logic, robustness is much improved. We start with a 4kB message buffer and grow it in 4kB increments only when needed, up to 128kB.

To put it into perspective, URL length on the client side isn't limited by the HTTP standard, but most clients today commonly cap it at around 16kB. 128kB should leave enough room for many years.

If people observe an average request size (counter 'avg') bigger than 4kB, we can probably adjust the initial size for more efficiency in a later version.
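
For illustration, here is a minimal sketch of the growth scheme described above. It is not the actual pixelserv-tls source; read_request(), BUF_STEP and BUF_LIMIT are made-up names. It just shows a buffer that starts at 4kB, grows in 4kB steps only when a read fills it, and refuses to grow past 128kB.

Code:
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define BUF_STEP   (4 * 1024)    /* initial size and growth increment */
#define BUF_LIMIT  (128 * 1024)  /* hard cap on the message buffer */

/* Read one request into a dynamically grown buffer.
 * Returns bytes read, or -1 on error; the caller free()'s *bufp. */
static ssize_t read_request(int fd, char **bufp)
{
    size_t cap = BUF_STEP, len = 0;
    char *buf = malloc(cap);
    if (!buf)
        return -1;

    for (;;) {
        ssize_t n = read(fd, buf + len, cap - len);
        if (n <= 0)            /* EOF or error: stop reading */
            break;
        len += (size_t)n;
        if (len < cap)         /* short read: treat the request as complete */
            break;
        if (cap >= BUF_LIMIT)  /* refuse to grow past 128kB */
            break;
        cap += BUF_STEP;       /* grow by another 4kB and keep reading */
        char *tmp = realloc(buf, cap);
        if (!tmp) {
            free(buf);
            return -1;
        }
        buf = tmp;
    }
    *bufp = buf;
    return (ssize_t)len;
}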
 
Test4 up 24 hours, no issues. This is the highest tmx I've seen though.

Code:
uts 0d 23:47 process uptime
log 1 critical (0) error (1) warning (2) notice (3) info (4) debug (5)
kcc 1 number of active service threads
kmx 28 maximum number of service threads
kvg 1.28 average number of requests per service thread
krq 25 max number of requests by one service thread
req 6632 total # of requests (HTTP, HTTPS, success, failure etc)
avg 845 bytes average size of requests
rmx 22220 bytes largest size of request(s)
tav 109 ms average processing time (per request)
tmx 30231 ms longest processing time (per request)
 
My test4 ran perfectly for a day and a half with swap. I disabled swap, restarted, and tried dailymail. Pixelserv started to hang and then fully crashed within 60 seconds. Something I noticed: because I clear the generated certs in /opt/var/cache/pixelserv between tests, I saw that when I test it aggressively with no swap, pixelserv doesn't (successfully) generate any certs. There's no clear reason why in terms of permissions, logged errors, etc. I don't know what it means, but it goes hand in hand with the hanging and crashing.
 
Test4 up 24 hours, no issues. This is the highest tmx I've seen though.

Code:
uts 0d 23:47 process uptime
log 1 critical (0) error (1) warning (2) notice (3) info (4) debug (5)
kcc 1 number of active service threads
kmx 28 maximum number of service threads
kvg 1.28 average number of requests per service thread
krq 25 max number of requests by one service thread
req 6632 total # of requests (HTTP, HTTPS, success, failure etc)
avg 845 bytes average size of requests
rmx 22220 bytes largest size of request(s)
tav 109 ms average processing time (per request)
tmx 30231 ms longest processing time (per request)

For an HTTP POST, if the content is short, it comes in together with the request. Most likely it didn't here because of its large size, and it arrived in subsequent message(s).

Most likely what happened is this: pixelserv entered the mode of waiting to receive the subsequent messages of a POST, each wait lasting 10s with the default select_timeout. At the end of the 3rd of the designed 5 attempts in total, the client disconnected. That gives a processing time of ~30s for an incomplete POST request, which matches the ~30s tmx above.

The default 10s select_timeout should perhaps be changed; it's way too long by today's standards. Shall we change it to 1s?
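
To make the timing concrete, here is a rough sketch of that wait loop. It is not the actual pixelserv-tls source; wait_for_more_data() and MAX_WAIT_ATTEMPTS are made-up names. It shows up to 5 select() calls, each bounded by select_timeout seconds, so a client that hangs up after the 3rd wait accounts for roughly 3 x 10s = 30s with the default timeout.

Code:
#include <sys/select.h>
#include <sys/time.h>

#define MAX_WAIT_ATTEMPTS 5    /* assumed name for the 5-attempt limit */

/* Wait for more POST data: 1 = readable, 0 = gave up, -1 = error. */
static int wait_for_more_data(int fd, int select_timeout)
{
    for (int attempt = 0; attempt < MAX_WAIT_ATTEMPTS; attempt++) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = select_timeout, .tv_usec = 0 };

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (ready > 0)
            return 1;   /* more data (or EOF) is ready to read */
        if (ready < 0)
            return -1;  /* select() error */
        /* ready == 0: timed out after select_timeout seconds; try again */
    }
    return 0;           /* all attempts timed out */
}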
 
My test4 ran perfectly for a day and a half with swap. I disabled swap, restarted, and tried dailymail. Pixelserv started to hang and then fully crashed within 60 seconds. Something I noticed: because I clear the generated certs in /opt/var/cache/pixelserv between tests, I saw that when I test it aggressively with no swap, pixelserv doesn't (successfully) generate any certs. There's no clear reason why in terms of permissions, logged errors, etc. I don't know what it means, but it goes hand in hand with the hanging and crashing.

Did you happen to remove ca.crt/ca.key? If so, the result makes sense, and I think that's where the bug behind the eventual crash is. I'll fix that in the next test version.

If ca.crt/ca.key are still there, I can't think of where the source of the crash is atm..
 
Did you happen to remove ca.crt/ca.key? If so, the result makes sense, and I think that's where the bug behind the eventual crash is. I'll fix that in the next test version.

If ca.crt/ca.key are still there, I can't think of where the source of the crash is atm..
Not a bad guess, but I always leave the ca.crt and ca.key there.
The default 10s select_timeout should perhaps be changed; it's way too long by today's standards. Shall we change it to 1s?
I have been running with 1s for a long time with good results.
 
A long time ago the timeout was 2 seconds, but someone had a device that needed 3 seconds to avoid a browser error message - possibly because it tried to negotiate a secure connection back when those were not handled by the old version - so the default was raised and made configurable.

Re buf size, my V31 had this

#define CHAR_BUF_SIZE 1023 //surprising how big requests can be with cookies etc

By V34 (HunterZ version)

#define CHAR_BUF_SIZE 4095 // surprising how big requests can be with cookies and lengthy yahoo url!

IIRC the buffer only had to hold the first line for URL parsing, and it would be reused to flush the incoming buffer to avoid error messages when closing the socket.

I am concerned the latest version can grow the msg buffer until the router is out of RAM. When is msg ever freed? I don't understand realloc ... need to RTFM!
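
On the realloc question, a generic illustration of its semantics (nothing to do with pixelserv-tls's own code): realloc() either grows the block in place or allocates a new block, copies the old contents and frees the old block, and whatever pointer it returns still needs a single matching free() from the caller.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *msg = malloc(4096);
    if (!msg)
        return 1;
    strcpy(msg, "GET / HTTP/1.1");

    char *bigger = realloc(msg, 8192);  /* old contents are preserved */
    if (!bigger) {
        free(msg);                      /* realloc failed: msg is still valid */
        return 1;
    }
    msg = bigger;                       /* never reuse the old pointer */

    printf("%s\n", msg);
    free(msg);                          /* the single matching free() */
    return 0;
}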
 
