MOCA asymmetric speeds


sugatam

New Around Here
This has me really puzzled. I have FiOS with an old Verizon Actiontec MI424WR Rev I router, with ethernet WAN on the ONT and the router. I was purely interested in MoCA to get wired LAN into a couple of rooms in my house that I couldn't easily fish cat5 to. Since all I care about is MoCA 1.1 speeds (I need to stream 4K UHD rips on my NAS), I picked up a pair of additional Rev I routers and set them up as pure MoCA-to-ethernet bridges per instructions on the net (basically disabled broadband and wireless).

Anyhow, I'm getting 165-170 Mbps iperf rates from these "bridges" to my main router, but only half that rate in the opposite direction. The slower rate is associated with many more retries than when I test in the "good" direction. I tried many combinations of interchanging the physical devices, putting in a better splitter, etc., but to no avail. Frustrated, I decided to do an elementary test: I put one bridge right next to the router and connected the two with a short piece of RG6. Still the same thing!

Ironically, given my desire to stream 4K and the fact that my NAS is connected via ethernet to the main router, if the slow direction were reversed I could live with it. Any ideas on how to solve this asymmetric speed issue?
 
Are these recertified or used units?

Are you observing stuttering on the streaming in the desired direction?
What iperf command line are you using for each direction's test?
What devices are being used to test?
What do you get for test results if you avoid using any of the MoCA gear between the same two devices?

Is there a diagnostic page on those modems that gives you the sync rate and power budget/used for each of the nodes? If so, what does the table show?
 
The primary router was purchased new a decade ago; the two bridge Actiontecs are used units. The problem doesn't change shape when I move the Actiontecs around.

The devices are various single-board computers with gigabit ethernet. Some are running Armbian and others CoreELEC. No change if I swap the devices.

I'm using iperf3 -s on one end and iperf3 -c <host> on the other. No difference if I use -R to test the opposite direction or swap the server and client commands; it's always slow in the same direction.
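For reference, a minimal sketch of the per-direction iperf3 testing described above (192.168.1.50 is a placeholder for the device on the far side of the MoCA link, not an address from this thread):

```shell
# On the device acting as the server:
iperf3 -s

# On the client: forward test (client sends to server)
iperf3 -c 192.168.1.50 -t 30

# Reverse test: -R makes the server send to the client,
# so both directions are measured without swapping roles
iperf3 -c 192.168.1.50 -t 30 -R
```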

The diagnostic page shows a 250M PHY rate in one direction and 243M in the other. Close enough, no?

Without MOCA gear the speeds are very fast, mostly in the 500Mbps range, and symmetric. I guess the bottleneck in that case is the CPU on the SBCs.

So here's another finding: for kicks I tried a Raspberry Pi 2 running LibreELEC on one end. The Pi 2 has only 100 Mb ethernet, and lo and behold, I get ~94 Mbps in the good direction and again roughly half (50-55 Mbps) in the bad direction. However, if I use a 64 KB TCP window instead of the default, the retries drop to zero and the Pi gets 85-90 Mbps in the bad direction too. Unfortunately, with the MoCA latency a 64 KB window maxes out at 85-90 Mbps anyway, so it doesn't help me in my quest for 150 Mbps-plus symmetric speeds to stream 4K on the gigabit SBCs. Experimenting with window sizes on the gigabit SBCs hasn't resulted in any improvement.
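That 85-90 Mbps ceiling is consistent with the classic window-size limit: a fixed TCP window caps throughput at window / round-trip time. A quick sanity check, assuming a ~6 ms RTT across the MoCA hop (an assumed figure for illustration, not one measured in this thread):

```shell
# Throughput ceiling for a fixed TCP window: window_bytes * 8 / rtt_seconds.
# 64 KB window, assumed 6 ms round trip over the MoCA link:
awk 'BEGIN { printf "%.1f Mbps\n", 65536 * 8 / 0.006 / 1e6 }'
# prints "87.4 Mbps"
```

With a lower RTT the same window would allow more; measuring the actual RTT with ping across the bridge would pin the number down.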
 
MTU = 1500 across all ?
Correct, no jumbo frames anywhere.

New find: enabling BBR TCP congestion control on the sending side fixes the issue (the Armbian default is "cubic"). Now iperf3 gets up to 150-155 Mbps in the slow direction, and file transfers over netcat are correspondingly quick. However, this change is correlated with CIFS read speeds slowing to a crawl (< 1 MBps, as opposed to the 18-19 MBps netcat transfers consistent with the improved iperf speed), so for my 4K streaming use case I have more work to do...
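For anyone wanting to try the same change, a sketch on an Armbian/Debian-style system (these are the standard Linux module and sysctl names; run as root, and check that your kernel ships tcp_bbr first):

```shell
# Load the BBR module and list the algorithms the kernel offers
modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control

# Switch the sender's congestion control to BBR (cubic is the usual default)
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist across reboots
echo 'net.ipv4.tcp_congestion_control=bbr' >> /etc/sysctl.d/99-bbr.conf
```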

I noticed that with default settings, the first couple of iperf transfer windows would give better speeds and then clamp down as the Cwnd got smaller. That got me thinking about congestion control. I'm still puzzled as to what's special about my setup - this is clearly not a common problem - unless Windows defaults to BBR or some otherwise more MoCA-friendly congestion control algorithm.
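One way to watch that Cwnd collapse directly while a test runs (ss is part of iproute2 on most distros; the address is a placeholder):

```shell
# During an iperf3 run, dump per-socket TCP internals for the far end:
# -t = TCP sockets, -i = internal info (congestion algorithm, cwnd,
# rtt, and retransmit counts appear in the output)
ss -ti dst 192.168.1.50
```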
 
