AC86U SMB Tweaking

Which firmware version are you using? Can you give the output of
Code:
cat /proc/sys/net/ipv4/tcp_wmem

My mistake, I read the output too fast; the only value that needed changing was the default.

Code:
admin@RT-AC86U-2EE8:/jffs/scripts# cat /proc/sys/net/ipv4/tcp_wmem
4096    16384    3521536

Doing so gave a slight peak write speed increase of about 5 MB/s. Admittedly my tests aren't too scientific and are just simple wireless file transfers from a Windows machine on an 866.5 Mbps link rate, but overall there has definitely been a significant improvement over the default values.
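For anyone wanting to reproduce this, the change being described (raising only the middle "default" field while keeping the min and max reported above; 65536 here is an illustrative 64K value, not a quote from the original post) would be:
Code:
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem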
 
With my config, I only get low write speeds of around 10 MB/s (dunno if it's coming from my cheap NIC or Windows 7 itself). With no buffer specified I gain 5 MB/s, so a whopping 50% increase, thanks Adamm! Question: are there any downsides to keeping it without a specified buffer? And apart from disabling USB interference, are there other things I should consider on the router side (AC1900P/380.69_2)?
 
Here are my test results, default Samba config versus without the SO_RCVBUF and SO_SNDBUF options:
1) on a wireless 5 GHz connection (data rate: 1300 Mbps):
write speed improved from 21 MB/sec to 48 MB/sec (+129%)
read speed improved from 26 MB/sec to 39 MB/sec (+50%)

2) on a wired 1 Gbps connection, my results remained the same:
write speed 53 MB/sec, read speed 76 MB/sec
my wired results seem to be limited by CPU/USB2/disk.

How I tested:
tool: NAS performance tester 1.7 http://www.808.dk/?code-csharp-nas-performance
tool settings: file size 100MB, 5 loops
disk: LaCie Rugged FW/USB (320 GB, 7200 rpm, USB 2.0)
router: RT-AC86U
firmware: 384.4_alpha1-gc4fc92d
firmware settings: Reducing USB 3.0 interference: Disable, Enable SMB2 protocol: No, Multi-User MIMO: Disable, QoS: Off
client PC: Windows 7 Professional SP1 64-bit, Intel Core i5, 8 GB RAM
client wireless adapter: PCE-AC68 (situated in room 7m away from router with brick wall in between)
 
How I tested:
tool: NAS performance tester 1.7 http://www.808.dk/?code-csharp-nas-performance

Using the same tool I get slightly different numbers than real-world results, but the level of improvement is still in the ballpark of 2x to 3x.

Default settings:
Code:
Running warmup...
Running a 100MB file write on \\RT-AC86U\Backup (at Elements) 5 times...
Iteration 1:     17.98 MB/sec
Iteration 2:     14.72 MB/sec
Iteration 3:     14.86 MB/sec
Iteration 4:     14.93 MB/sec
Iteration 5:     14.88 MB/sec
-----------------------------
Average (W):     15.47 MB/sec
-----------------------------
Running a 100MB file read on \\RT-AC86U\Backup (at Elements) 5 times...
Iteration 1:     29.60 MB/sec
Iteration 2:     29.95 MB/sec
Iteration 3:     29.67 MB/sec
Iteration 4:     29.97 MB/sec
Iteration 5:     29.67 MB/sec
-----------------------------
Average (R):     29.77 MB/sec
-----------------------------

No buffers:
Code:
Running warmup...
Running a 100MB file write on \\RT-AC86U\Backup (at Elements) 5 times...
Iteration 1:     42.34 MB/sec
Iteration 2:     45.76 MB/sec
Iteration 3:     45.26 MB/sec
Iteration 4:     47.06 MB/sec
Iteration 5:     46.73 MB/sec
-----------------------------
Average (W):     45.43 MB/sec
-----------------------------
Running a 100MB file read on \\RT-AC86U\Backup (at Elements) 5 times...
Iteration 1:     60.51 MB/sec
Iteration 2:     65.46 MB/sec
Iteration 3:     67.85 MB/sec
Iteration 4:     70.61 MB/sec
Iteration 5:     67.01 MB/sec
-----------------------------
Average (R):     66.29 MB/sec
-----------------------------
 
As RMerlin pointed out, testers should compare results for both a wired and a wireless connection (as these have different bandwidth and latency). And it would also be interesting to test both read speeds and write speeds.

Kernel settings are global, so what works ok for Samba, might not work for other things...

Step lightly there - one can do the buffer plays in the Samba config directly, since that is localized to Samba.
 
Kernel settings are global, so what works ok for Samba, might not work for other things...
No kernel settings need to be changed.

I wrote that you could use this on the AC86U:
Code:
echo 524288 > /proc/sys/net/core/rmem_max
echo 524288 > /proc/sys/net/core/wmem_max
just because that tweak is already in 384.4, so as to compare apples to apples when testing from 384.3 or earlier.
Step lightly there - one can do the buffer plays in the Samba config directly, since that is localized to Samba.
The issue is that giving Samba fixed buffers with SO_RCVBUF and SO_SNDBUF in the config will disable Linux TCP buffer auto-tuning for the socket. Linux has learned new tricks in this area since kernels 2.4.27 and 2.6.7 (in 2004), and blocking those apparently hinders performance. So the suggestion here is to not do any 'buffer plays' in the Samba config, and this suggestion will not affect other things. No changes to kernel settings are necessary.
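To illustrate, the whole difference is one line in smb.conf (a sketch: the first line mirrors the SO_RCVBUF=65536/SO_SNDBUF=65536 values discussed here, the second is the no-buffer variant used later in this thread; your firmware's exact default line may differ slightly):
Code:
# fixed buffers: TCP auto-tuning is disabled for Samba's sockets
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
# no fixed buffers: the kernel grows the buffers as needed
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE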

You mentioned testing with SO_RCVBUF=65536 SO_SNDBUF=65536 in your previous post; it would be interesting to learn your results over wireless with example 3 in Adamm's first post.

To other testers: you need to set 'Enable JFFS custom scripts and configs' to Yes on the Administration/System page to follow Adamm's example. Also, use the fastest disk you have available for both the router and your client, to avoid those being the bottleneck.
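For reference, a minimal /jffs/scripts/smb.postconf along those lines might look like this (a sketch built around the sed command quoted later in this thread; Merlin passes the path of the generated smb.conf as the first argument):
Code:
#!/bin/sh
CONFIG="$1"  # path to the generated smb.conf
# strip the fixed buffers from the socket options line
sed -i "s~socket options.*~socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE~g" "$CONFIG"
Make it executable with chmod a+rx /jffs/scripts/smb.postconf, then run service restart_samba.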
 
Tested (no buffers) on the AC87U with the NAS performance tester.

While wireless speeds improved by almost 100%, wired speeds decreased by around 15% on average and became erratic (i.e. 30/22/31/23 vs. 29/30/31/28).

Reverted back to defaults since I use wired transfers mostly.


Regards and thanks for the tip!
 
Tested (no buffers) on the AC87U with the NAS performance tester.

While wireless speeds improved by almost 100%, wired speeds decreased by around 15% on average and became erratic (i.e. 30/22/31/23 vs. 29/30/31/28).
Does it make a difference if you enter this command:
Code:
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem
and then restart Samba and try again? Could you paste the output of the NAS performance tester here?

What this does is set the default send buffer for all TCP sockets (at the router) to 64K (which it also is in quite a few Linux distributions) instead of the current 16K. Effectively, the socket will have a 64K send buffer from the start, just like when it is fixed at 64K by the current default Samba config (because of SO_SNDBUF=65536).
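For completeness, a quick way to apply, verify and restart in one go (a sketch, using the same values as above):
Code:
cat /proc/sys/net/ipv4/tcp_wmem    # before: 4096 16384 3521536
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem
cat /proc/sys/net/ipv4/tcp_wmem    # after:  4096 65536 3521536
service restart_samba              # new Samba sockets pick up the new default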
 
Does it make a difference if you enter this command:
Code:
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem
and then restart Samba and try again? Could you paste the output of the NAS performance tester here?

What this does is set the default send buffer for all TCP sockets (at the router) to 64K (which it also is in quite a few Linux distributions) instead of the current 16K. Effectively, the socket will have a 64K send buffer from the start, just like when it is fixed at 64K by the current default Samba config (because of SO_SNDBUF=65536).
Ok, will test this weekend and post results!

Regards!
 
The issue is that giving Samba fixed buffers with SO_RCVBUF and SO_SNDBUF in the config will disable Linux TCP buffer auto-tuning for the socket. Linux has learned new tricks in this area since kernels 2.4.27 and 2.6.7 (in 2004), and blocking those apparently hinders performance. So the suggestion here is to not do any 'buffer plays' in the Samba config, and this suggestion will not affect other things. No changes to kernel settings are necessary.

You mentioned testing with SO_RCVBUF=65536 SO_SNDBUF=65536 in your previous post; it would be interesting to learn your results over wireless with example 3 in Adamm's first post.

Play around with it - I tend to look at things with Samba from a BSD perspective - and even with Linux, the app layer (smbd) should be ready to feed the kernel and keep its buffers full.

Some other fun stuff, and nicely written up.

https://www.arm-blog.com/samba-finetuning-for-better-transfer-speeds/
 
Play around with it - I tend to look at things with Samba from a BSD perspective - and even with Linux, the app layer (smbd) should be ready to feed the kernel and keep its buffers full.
But the buffer needs to be (or grow) large enough to keep the network "full". A fixed TCP buffer is a cap on the amount of data that can be "in flight" and thus on the throughput, even if the application were infinitely fast.
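To put a rough number on that cap (back-of-the-envelope only; the 5 ms wireless and 0.5 ms wired round-trip times are assumptions for illustration), throughput can never exceed the buffer size divided by the round-trip time:
Code:
# throughput ceiling = buffer / RTT
awk 'BEGIN { printf "64K buffer, 5 ms RTT:   %.1f MB/s max\n", 65536/0.005/1048576 }'
awk 'BEGIN { printf "64K buffer, 0.5 ms RTT: %.0f MB/s max\n", 65536/0.0005/1048576 }'
That asymmetry matches the tests above: the higher-latency wireless path suffers far more from a fixed 64K buffer than the wired path does.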
Unfortunately some advice that was fine in the days of Windows XP, 100 Mbit ethernet and USB 2 keeps being repeated on a lot of websites, while these days all modern OSes do a good job of auto-tuning the TCP buffers and we have gigabit networking and USB 3 speeds. The original text for that blog page dates back at least to 2007:
https://web.archive.org/web/20071031231951/https://calomel.org/samba_optimize.html
 
Ok, will test this weekend and post results!

Regards!
Well, made some time to test it:

WIRED TESTS ONLY
1. Stock => Well... stock
2. No Buffers => smb.postconf => sed -i "s~socket options.*~socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE~g" "$CONFIG" + service restart_samba
3. No Buffers + Echo => Item 2 + Putty: echo 524288 > /proc/sys/net/core/rmem_max:524288 /// echo 524288 > /proc/sys/net/core/wmem_max:524288 /// echo 4096 65636 3521536 > /proc/sys/net/ipv4/tcp_wmem

[screenshot: wired test results for the three configurations]


The second option seems the fastest overall.

What do you think?

Regards!
 
Well, made some time to test it:

Thanks for sharing. I'd say your CPU was the limiting factor here as you are using an AC87U, but even so there was still a 20% increase in read speeds. I assume the improvement would be more significant via wireless, but any increase is a plus :p
 
The second option seems the fastest overall.

What do you think?
The config without fixed buffers indeed seems an improvement for your read speed!

Please note that Adamm and I ran the tests with the 100 MB file option, as I wanted to avoid creating memory pressure that could influence the results (the AC87U has 256 MB RAM). So, if you have some time, it would be interesting to see that comparison. Which firmware and what kind of drive did you test with?
3. No Buffers + Echo => Item 2 + Putty: echo 524288 > /proc/sys/net/core/rmem_max:524288 /// echo 524288 > /proc/sys/net/core/wmem_max:524288 /// echo 4096 65636 3521536 > /proc/sys/net/ipv4/tcp_wmem
Please note that the first two commands should have been without that ":524288" at the end:
Code:
echo 524288 > /proc/sys/net/core/rmem_max
echo 524288 > /proc/sys/net/core/wmem_max

I will do some more tests tomorrow with a portable USB3 SSD, to rule out the disk as a bottleneck.
 
Thanks for sharing. I'd say your CPU was the limiting factor here as you are using an AC87U, but even so there was still a 20% increase in read speeds. I assume the improvement would be more significant via wireless, but any increase is a plus :p
Just for the record, I'm running a 40% overclock to 1400 MHz.
The config without fixed buffers indeed seems an improvement for your read speed!
Please note that Adamm and I ran the tests with the 100 MB file option, as I wanted to avoid creating memory pressure that could influence the results (the AC87U has 256 MB RAM). So, if you have some time, it would be interesting to see that comparison. Which firmware and what kind of drive did you test with?
Please note that the first two commands should have been without that ":524288" at the end:
Code:
echo 524288 > /proc/sys/net/core/rmem_max
echo 524288 > /proc/sys/net/core/wmem_max
I will do some more tests tomorrow with a portable USB3 SSD, to rule out the disk as a bottleneck.
I will run tests with 100 MB and share the results.
I'm testing with a Samsung USB 3.0 128 GB flash drive.
I'm running Merlin 380.69.
I did indeed enter those echo commands without that final number; I had made a mistake while copying the lines here.

Will report back with 100 MB results.

Cheers to you both!
 
But the buffer needs to be (or grow) large enough to keep the network "full". A fixed TCP buffer is a cap on the amount of data that can be "in flight" and thus on the throughput, even if the application were infinitely fast.
Unfortunately some advice that was fine in the days of Windows XP, 100 Mbit ethernet and USB 2 keeps being repeated on a lot of websites, while these days all modern OSes do a good job of auto-tuning the TCP buffers and we have gigabit networking and USB 3 speeds.

Keep in mind that AsusWRT is running a very old kernel, and the bundled Samba isn't very fresh... so the approach is sound.

Newer code - sure - no problems there...
 
Here are my full test results with the three different socket options from Adamm's top post. To avoid disk bottlenecks, this time I tested with a Samsung T5 SSD (USB 3.1, transfer speeds up to 540 MB/s) on the router and a RAM disk on the client side. No kernel tweaks were used.

My results now also show a 3x improvement in wireless write speed without the SO_RCVBUF/SO_SNDBUF fixed TCP buffers. And with the disk/USB 2.0 bottleneck removed, the wired write speed now also benefits, reaching a maximum of 116 MB/s (+29%):
  • SMB1 protocol on a wireless 5 GHz connection (data rate: 1300 Mbps):
(1) 64K fixed buffers: write 21 MB/s, read 27 MB/s
(2) 128K fixed buffers: write 35 MB/s (+65%), read 35 MB/s (+30%)
(3) no fixed buffers: write 74 MB/s (+244%), read 42 MB/s (+55%)
  • SMB1 protocol on a wired 1 Gbps connection:
(1) 64K fixed buffers: write 104 MB/s, read 74 MB/s
(2) 128K fixed buffers: write 112 MB/s (+8%), read 75 MB/s (+2%)
(3) no fixed buffers: write 114 MB/s (+10%), read 76 MB/s (+3%)

  • SMB2 protocol on a wireless 5 GHz connection (data rate: 1300 Mbps):
(1) 64K fixed buffers: write 21 MB/s, read 26 MB/s
(2) 128K fixed buffers: write 35 MB/s (+67%), read 32 MB/s (+24%)
(3) no fixed buffers: write 68 MB/s (+229%), read 44 MB/s (+71%)
  • SMB2 protocol on a wired 1 Gbps connection:
(1) 64K fixed buf: write 90 MB/s, read 72 MB/s
(2) 128K fixed buf: write 111 MB/s (+22%), read 73 MB/s (+1%)
(3) no fixed buf: write 116 MB/s (+29%), read 73 MB/s (+1%)


A final note on this subject from the Samba website:
https://wiki.samba.org/index.php/Performance_Tuning#The_socket_options_Parameter
Modern UNIX operating systems are tuned for high network performance by default. For example, Linux has an auto-tuning mechanism for buffer sizes. When you set the socket options parameter in the smb.conf file, you are overriding these settings. In most cases, setting this parameter decreases the performance.


How I tested:
tool: NAS performance tester 1.7 http://www.808.dk/?code-csharp-nas-performance
tool settings: file size 100MB, 5 loops
disk: Samsung Portable SSD T5 (250 GB, USB 3.1, ext2)
router: RT-AC86U
firmware: 384.4_alpha2-g508109e
firmware settings: Reducing USB 3.0 interference: Disable, Multi-User MIMO: Disable, QoS: Off
client PC: Windows 7 Professional SP1 64-bit, Intel Core i5, 8 GB RAM
client wireless adapter: PCE-AC68 (situated in room 7m away from router with brick wall in between)
 
And here are my results for the same tests on my old RT-AC68U (at 800 MHz). No kernel tweaks were used.

The write speeds and the SMB2 results benefit the most (up to 88%) from a Samba config without the SO_RCVBUF/SO_SNDBUF fixed TCP buffers:
  • SMB1 protocol on a wireless 5 GHz connection (data rate: 1300 Mbps):
(1) 64K fixed buffers: write 16 MB/s, read 21 MB/s
(2) 128K fixed buffers: write 19 MB/s (+19%), read 21 MB/s
(3) no fixed buffers: write 23 MB/s (+42%), read 21 MB/s (+1%)
  • SMB1 protocol on a wired 1 Gbps connection:
(1) 64K fixed buffers: write 33 MB/s, read 52 MB/s
(2) 128K fixed buffers: write 39 MB/s (+19%), read 53 MB/s (+2%)
(3) no fixed buffers: write 42 MB/s (+29%), read 54 MB/s (+3%)

  • SMB2 protocol on a wireless 5 GHz connection (data rate: 1300 Mbps):
(1) 64K fixed buffers: write 15 MB/s, read 16 MB/s
(2) 128K fixed buffers: write 18 MB/s (+24%), read 17 MB/s (+8%)
(3) no fixed buffers: write 20 MB/s (+34%), read 17 MB/s (+8%)
  • SMB2 protocol on a wired 1 Gbps connection:
(1) 64K fixed buf: write 20 MB/s, read 30 MB/s
(2) 128K fixed buf: write 35 MB/s (+74%), read 34 MB/s (+12%)
(3) no fixed buf: write 37 MB/s (+88%), read 40 MB/s (+31%)


How I tested:
tool: NAS performance tester 1.7 http://www.808.dk/?code-csharp-nas-performance
tool settings: file size 100MB, 5 loops
disk: Samsung Portable SSD T5 (250 GB, USB 3.1, ext2)
router: RT-AC68U (800 MHz)
firmware: 384.4_alpha2-g508109e
firmware settings: Reducing USB 3.0 interference: Disable, QoS: Off
client PC: Windows 7 Professional SP1 64-bit, Intel Core i5, 8 GB RAM
client wireless adapter: PCE-AC68 (situated 1m from router)
 
I am not using SMB-attached disks, but those lines almost doubled Wi-Fi throughput on my AC68U:
Code:
echo 524288 > /proc/sys/net/core/rmem_max
echo 524288 > /proc/sys/net/core/wmem_max

echo 4096 87380 3521536 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem
If I want them executed after every reboot, which /jffs/scripts/ script is the most appropriate one?
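One option would be something like this (a sketch, assuming AsusWRT-Merlin's services-start user script, which runs once at boot when 'Enable JFFS custom scripts and configs' is set to Yes):
Code:
#!/bin/sh
# /jffs/scripts/services-start - run once at boot
echo 524288 > /proc/sys/net/core/rmem_max
echo 524288 > /proc/sys/net/core/wmem_max
echo 4096 87380 3521536 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 65536 3521536 > /proc/sys/net/ipv4/tcp_wmem
Remember to make the script executable (chmod a+rx).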
 
