
AXE58BT / AX86U slow speeds


Salko

Occasional Visitor
I just recently bought the AXE58BT wireless adapter and I am having some trouble with speeds being around 300 Mbps. I am currently on a gigabit plan. My router is in the same room as my desktop computer. I have the router set to channel 36 / 160 MHz / AX mode only. Does anyone have any suggestions on what to try next? I downloaded the latest driver directly from Intel's website. Thank you, guys.
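For context, ~300 Mbps of real throughput lines up with a 40 MHz, 2-stream link rather than a 160 MHz one. The peak 802.11ax link rate can be estimated from the channel width and stream count; this is a rough sketch using the standard 802.11ax subcarrier counts and symbol timing, with MCS 11 (1024-QAM, 5/6 coding) and the short guard interval assumed for illustration:

```python
# Rough 802.11ax (Wi-Fi 6) peak PHY rate estimate.
# Assumes MCS 11 (1024-QAM, 5/6 coding) and a 0.8 us guard interval.
DATA_SUBCARRIERS = {20: 234, 40: 468, 80: 980, 160: 1960}
SYMBOL_US = 12.8 + 0.8  # OFDM symbol duration + short guard interval, in us

def phy_rate_mbps(width_mhz: int, streams: int,
                  bits_per_symbol: int = 10, coding: float = 5 / 6) -> float:
    """Peak link rate in Mbps for a given channel width and spatial streams."""
    bits = DATA_SUBCARRIERS[width_mhz] * bits_per_symbol * coding * streams
    return bits / SYMBOL_US  # bits per microsecond == Mbps

for width in (40, 80, 160):
    print(f"{width} MHz, 2 streams: {phy_rate_mbps(width, 2):.0f} Mbps")
# 40 MHz -> ~574 Mbps, 80 MHz -> ~1201 Mbps, 160 MHz -> ~2402 Mbps
```

Real TCP throughput is typically only 50-60% of the PHY rate, so ~300 Mbps of goodput suggests a ~574 Mbps link (2 streams at 40 MHz), not the 2402 Mbps a 2x2 adapter should negotiate at 160 MHz.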
 
Set the 5 GHz band to Auto channel, 20/40/80/160 MHz width, enable DFS, use WPA2/WPA3-Personal, and use dual-band Smart Connect. Do not change any of the Professional settings. You may set 2.4 GHz to 20 MHz, but leave it on Auto channel, too.
This is what I use and it works well. And I recommend the Asus beta firmware.
 
I just recently bought the AXE58BT wireless adapter and I am having some trouble with speeds being around 300 Mbps. I am currently on a gigabit plan. My router is in the same room as my desktop computer. I have the router set to channel 36 / 160 MHz / AX mode only. Does anyone have any suggestions on what to try next? I downloaded the latest driver directly from Intel's website. Thank you, guys.

If your router is in the same room, why not just hardwire it? Wired will always be better than wireless. But as the other poster said, hardcoding channels and channel widths is not what you want to be doing; it's better to let the router and card negotiate their best settings.
 
Actually, using set Control Channels and bandwidths is what solves almost all issues, IME.
 
Set the 5GHz to Auto channel, 20, 40, 80, 160 MHz, enable DFS, WPA2/ WPA3-Personal, use Dual band SmartConnect.

The worst possible advice. You're basically taking your chances.

This is what I use and it works well.

Yes, because with your 100 Mbps ISP it simply doesn't matter. We've covered this already in other threads. You won't notice a big difference between 2.4 GHz @ 20 MHz and 5 GHz @ 160 MHz. Just stop giving this advice, @bbunge. It ranges from non-optimal settings to devices constantly re-connecting and breaking time-sensitive applications like VoIP and video calls. You perhaps don't use any of that and don't know. Please... @Salko is looking for performance, not Smart/Auto convenience. And no, Asus beta firmware is not the best choice either; it was released for testing purposes only.
 
Actually, using set Control Channels and bandwidths is what solves almost all issues, IME.
On 2.4 GHz I always use hardcoded channels and limit the width to 20 MHz, due to the limitations of that band.

On 5 GHz I always leave the channel width at 20/40/80 (excluding 160 if possible); I have always found the best compatibility that way. Unless all your devices are capable of 80 MHz, hardcoding is either going to limit you by setting it low for compatibility, or prevent some devices from connecting by setting it high for performance. I guess most AC gear supports 80 MHz now, but I've seen older devices that can't connect when hardcoded to 80, and N devices won't connect either. You can try hardcoding to 40 and see if that works, but I haven't seen issues with leaving it at Auto.

On 5 GHz channels, I've hardcoded before, but right now I'm on Auto (and have been for a couple of years). It changes from time to time, but I never see any difference in performance. I suppose it may be more useful in an apartment/condo complex, but 5 GHz is much better at coexisting, and even if you find the perfect channel today, tomorrow three others are on it...
 
And N devices won't connect either.

This is incorrect. When set to 80 MHz, older N devices capable of up to 40 MHz connect with no issues. 80 MHz set in the GUI doesn't mean only 80 MHz-capable devices are allowed; it means the radio is set to use an 80 MHz-wide channel. 20/40/80 means the router will determine what channel width the radio is set to based on the Wi-Fi environment.

tomorrow 3 others are on it

This is also incorrect. It doesn't matter how many APs use the same channel. What matters is how much available bandwidth the channel has. This is a common mistake people make when using Wi-Fi analyzer apps. The best channel may have 10 idle APs, and the worst channel may have only 2 taking up the entire bandwidth.
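The utilization-vs-AP-count point can be illustrated with a small sketch. The scan data here is made up; in practice, the per-channel airtime figures would come from something like the QBSS channel utilization field that many analyzer apps read:

```python
# Pick a channel by total airtime utilization, not by counting APs.
# Keys are channels; values are the airtime fraction each AP uses on it.
scan = {
    36: [0.02] * 10,     # ten mostly idle APs (beacons only)
    149: [0.45, 0.40],   # two APs nearly saturating the channel
}

def channel_load(aps: list[float]) -> float:
    """Crude load metric: summed airtime share of all APs on the channel."""
    return sum(aps)

best = min(scan, key=lambda ch: channel_load(scan[ch]))
print(best)  # channel 36 wins despite having five times as many APs
```

An analyzer that simply counts networks would pick channel 149 here and get it exactly backwards.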
 
On 5 GHz I always leave the channel width at 20/40/80 (excluding 160 if possible); I have always found the best compatibility that way. Unless all your devices are capable of 80 MHz, hardcoding is either going to limit you by setting it low for compatibility, or prevent some devices from connecting by setting it high for performance.

By my observation, it works like this:

In general, a client will connect at its max bandwidth, subject to the router's current bandwidth (the upper limit on connection bandwidth).

If the router's current bandwidth is 80MHz, clients may connect at 20, 40, and 80MHz.

If the router's current bandwidth is 40MHz, clients may connect at 20 and 40MHz.

If you fix the router's bandwidth at 20, 40, or 80 MHz, this upper limit on connection bandwidth will not vary. An exception to this is DFS... you can fix 160 MHz, but the router will vary/lower this upper limit, as required by law, to avoid radar.

If you unfix the router's bandwidth at 20/40/80/160 MHz, the router may vary/lower its bandwidth (vary/lower the upper limit) to avoid interference/noise, similar to how channel Auto varies the router's control channel to avoid the same.

I believe the router's default settings permit it to vary both bandwidth and channel, assuming worst-case interference/noise and no capable admin user and/or client intervention.

I believe it is worthwhile to fix the router's bandwidth and channel for best client/node connection performance and stability, IF you can observe and determine this for your clients/nodes in your radio space.

Otherwise, use the default/variable settings and live with what the router decides... sometimes it's happy but the connection performance is not as good as it could be. And I suspect that as the extent of interference/noise increases, the default/variable settings may not afford the best client/user experience.
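The upper-limit rule described above can be sketched as a toy model. This is only a model of the behavior observed on this poster's Asus router; whether a given router/chipset actually works this way is exactly what the thread goes on to dispute:

```python
# Toy model of the observed negotiation rule: a client links at its own
# maximum channel width, capped by the router's current operating width.
def negotiated_width(router_mhz: int, client_max_mhz: int) -> int:
    """Channel width (MHz) a client ends up connecting at."""
    return min(router_mhz, client_max_mhz)

assert negotiated_width(80, 160) == 80  # 160 MHz client on an 80 MHz radio
assert negotiated_width(80, 40) == 40   # 40 MHz 802.11n client still connects
assert negotiated_width(40, 80) == 40   # router fixed at 40 caps everyone
```

Under this model, fixing the router at 80 MHz does not lock out 40 MHz clients; it only sets the ceiling.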

Some users swear by this way or that way, but they are most likely only speaking for their clients in their radio space.

OE
 
This is incorrect. When set to 80MHz older up to 40MHz capable N devices connect with no issues. 80MHz set in GUI doesn't mean allow devices 80MHz capable only, but set radio using 80MHz wide channel. 20/40/80 means the router will determine what channel width the radio will be set to based on Wi-Fi environment.

80 MHz set in the GUI should disable 40 MHz-only clients (i.e., 802.11n). If it doesn't, then the GUI is not properly representing what it is doing, and is in fact doing 20/40/80 (or 40/80). The whole point of hardcoding to 80 is to prevent devices that don't support a four-channel bond from connecting, thus improving performance for those that do support it.

This is also incorrect. It doesn't matter how many APs use the same channel. What matters is what available bandwidth this channel has. This is a common mistake people do when using Wi-Fi Analyzer apps. The best channel may have 10 idle APs and the worse channel may have only 2 taking the entire bandwidth.

Idle APs still interfere and cause noise. I believe it was N that introduced some better coexistence features, but that does not mean you can have 100 APs on a band with no detriment. Yes, a channel with 5 APs running at full throughput will be worse than one with 5 APs doing nothing, but that wasn't the point. The point was that hardcoding a channel at one point in time, even using a Wi-Fi analyzer with a good channel-utilization function, is only good for that one point in time.
 
80mhz set in the GUI should disable 40mhz only clients

No, it doesn't work this way.

Idle APs still interfere and cause noise.

No, beacons only, not an issue.

The point was, hard coding a channel at one point in time, even if using a wifi analyzer with a good channel utilization function, is only good for that one point in time.

Most routers around you usually work on Auto channel. If you hold your ground, they'll move away over time. If your router is on Auto channel as well, the game of chasing the better channel continues forever.
 
By my observation, it works like this:

In general, a client will connect at its max bandwidth, subject to the router's current bandwidth (the upper limit on connection bandwidth).

If the router's current bandwidth is 80MHz, clients may connect at 20, 40, and 80MHz.

If the router's current bandwidth is 40MHz, clients may connect at 20 and 40MHz.

If you fix the router's bandwidth at 20, 40, or 80MHz; this upper limit on connection bandwidth will not vary. An exception to this is DFS... you can fix 160MHz, but the router will vary/lower this upper limit by law to avoid RADAR.

If you unfix the router's bandwidth at 20/40/80/160MHz; the router may vary/lower its bandwidth (vary/lower the upper limit) to avoid interference/noise, similar to how channel Auto varies the router's control channel to avoid same.

I believe the router's default settings permit it to vary both bandwidth and channel, assuming worst-case interference/noise and no capable admin user and/or client intervention.

I believe it is worthwhile to fix the router's bandwidth and channel for best client/node connection performance and stability, IF you can observe and determine this for your clients/nodes in your radio space.

Otherwise, use the default/variable settings and live with what the router decides... sometimes it's happy but the connection performance is not as good as it could be. And I suspect that as the extent of interference/noise increases, the default/variable settings may not afford the best client/user experience.

Some users swear by this way or that way, but they are most likely only speaking for their clients in their radio space.

OE

If that's how your router is functioning, that was not the intention of the standard and it is not doing what it is supposed to.

If you hardcode to 80, it should only allow 80 MHz clients to connect, improving performance but reducing coverage distance, since a four-channel bond requires a stronger signal.

If you set it to 20/40/80, it should allow all three types of clients to connect at the same time; however, there will be somewhat reduced performance, as the slower clients will drag down the faster ones, much like when you have the lower data rates enabled or a very old b/g client connected (not exactly the same reason, but similar).

Nobody can force Asus or another vendor to adhere exactly to a standard, and often two different companies will interpret a standard differently; a Broadcom chipset may behave totally differently than a Qualcomm one in the same brand of router. So there will always be some variation, but this one would seem to be a pretty stark contrast to the original intention.

I know that with the enterprise gear I've dealt with, hardcoding 80 MHz channels will not allow 802.11n 5 GHz clients to connect. If Asus is treating it as a "best case" setting, they should update the GUI to reflect what is actually happening. If setting it to 80 and to 20/40/80 has the same effect, why have the two options to begin with?
 
No, it doesn't work this way.

No, beacons only, not an issue.

Most routers around you usually work on Auto channel. If you hold your ground, they'll move away over time. If your router is on Auto channel as well, chasing the better channel game continues forever.

#1 - It is supposed to; maybe Asus doesn't.

#2 - If you're being completely literal, in that you for some reason have a bunch of APs around you that never have clients connecting and are just sending a beacon every few seconds, that seems unlikely, but I guess you live in a weird area. Even in that case, however, there is still radio noise and still interference. The transmitter does not turn on for only a microsecond to send the beacon and then turn off. A TV station that is live but sending no picture will still interfere with another station on the same frequency; it's no different here. RF is RF, no matter how much intelligence you try to put behind it. In reality, the "coexistence" features added as the standards have improved function better when less data is being sent on that channel, but a busy channel still won't perform as well as a channel with no other APs on it.

#3 - Most routers only scan and change their channel when rebooted or when the radios reset; it isn't a constant thing. Many of them do not do a very good job of selecting the channel, or will often just keep the one they previously had.
 
Again, this is not how it works.

Again, it's how it is supposed to work. What you're describing is that the 80 and 20/40/80 modes are identical, which makes no sense; why even give the two options?

All home routers I've seen and Unifi, Cisco, Ruckus, Netgear, TP-Link APs behave the same way.

I can (and have) set both Unifi and Cisco to 80 MHz only, and no N clients can connect. Even some AC clients have trouble with this setting.
 
If that's how your router is functioning, that was not the intention of the standard and it is not doing what it is supposed to.

If you hardcode to 80, it should only allow 80mhz clients to connect, improving performance, but reducing coverage distance since a 4-channel bond requires stronger signal.

If you set to 20/40/80 it should allow all 3 types of clients to connect at the same time however there will be somewhat reduced performance as the slower clients will drag down the faster ones, much like when you have the lower data rates enabled or a very old b/g client connected (not exactly the same reason, but similar).

Nobody can force Asus or another vendor to adhere exactly to a standard, and often two different companies will interpret a standard differently, a broadcom chipset may behave totally differently than a qualcomm when in the same brand router. So there will always be some variation, but this one would seem to be a pretty stark contrast to the original intention.

I know with the enterprise stuff I've dealt with, hardcoding 80mhz channels will not allow 802.11N 5ghz clients to connect. If Asus is treating it as a "best case" setting they should update the GUI to reflect what is actually happening. If setting it to 80 and 20/40/80 has the same effect, why have the two options to begin with?

I only observed how my Asus router appears to work... settings and connections. If you have one, have a look yourself.

If I set/fix (hardcode) 160 MHz bandwidth, the Wireless log will list 40, 80, and 160 MHz client connections, and those clients will list corresponding connection/link rate details. This much is easily observed.

OE
 
If you set 2.4 GHz to a 40 MHz-wide channel, 20 MHz-capable devices won't connect? Are you sure?

Well, that's not observed here on any of my 6 networks, business and home, using different equipment.

Honestly, I've never once tried setting 2.4 GHz to 40 MHz as a permanent setting, just for brief testing, and the clients were all 40 MHz capable. Even if I were in the middle of nowhere (and thus not concerned with being a jerk to the neighbors), it is just asking for as much interference as possible. By the time 40 MHz on 2.4 GHz came around, 5 GHz was plenty common. Given the decreased range of 40 MHz vs. 20 MHz on 2.4 GHz, I really see no reason it was ever released; 5 GHz is better in pretty much every respect. Once you get beyond the usable range of 5 GHz, 2.4 GHz at 40 MHz is having issues as well, likely running slower than 20 MHz would be.

Again, what the GUI (or CLI) shows and what it translates to in the hardware may be two totally different things. I've got an old Ubiquiti N outdoor AP that claims to have CCK data rates disabled, but the radio in that particular model ignores that command. So I have to manually set the minimum data rate and beacons to 12M if I want to totally disable CCK (which also disables the 6M N rate, but that is fine with me; I don't want a client with that poor a connection to be able to connect anyway).

As far as this case goes, the intention as it was always presented to me was that if you hardcoded a channel width, only clients supporting that same channel width should be allowed to connect. I've seen that happen in practice, but I have not tested it with any home routers, which I've always left set to 20/40/80 without any issues. My laptops both support a max of 866M (dual antenna, 80 MHz), and I can consistently push 450+ over that, which is about the most you can expect, so I saw no reason to toy with it at home. There is very good reason to disable 40 MHz on 2.4 GHz and 160 MHz (or even 80+80) on 5 GHz, and I always do that, but beyond that, I haven't seen any need.

Now, are hardware vendors required to adhere perfectly to every aspect of specs and standards? No. Cisco, for one, is known for constantly ignoring (or creating their own) standards. In their APs you can either hardcode the width or select "best," an algorithm that picks a width based on the number of clean adjacent channels available. In that mode, from what I recall, it also allows clients with a lower width than what it has selected to connect, a sort of "compatibility mode." I'm sure some other vendors (possibly Asus/Broadcom, it sounds like) have followed this model. When hardcoded, my colleague and I did notice it behaving as expected, only allowing clients supporting that exact width to connect, so in that case it appears they actually were adhering to the standard. Maybe they've changed it in newer code; this was back when AC was much newer.

All I can say is I have tried, and when hardcoded to 80, N clients (two different Intel chipsets; possibly their implementation is the one that doesn't play nicely with it) could not connect. Change to 20/40/80 and they connected right up at 40. During that same session, my phone, which is AC, saw much decreased range, and while I didn't specifically check what channel width was in use, my assumption was that it was not able to fall back to 40 and 20. This was on a Ubiquiti AP. As mentioned, similar results were seen on a Cisco AP. No MCS rates or other settings were changed other than the width for that test. Not really surprisingly, 160 MHz outperformed, but only very close to the AP; at further distances, hardcoded 80 was best, while "best" (20/40/80), which selected an 80 MHz channel but allowed for 20 and 40, was slightly worse, but not by much. This was in an environment set up to mimic an office with various client types and noise.

Regardless of all the above: since the post I replied to was about a client that could not connect, where the user had hardcoded their channel width and was looking to increase compatibility/interoperability, the suggestion made by another user not to hardcode the width is a valid one.
 
I just recently bought the AXE58BT wireless adapter and I am having some trouble with speeds being around 300 Mbps. I am currently on a gigabit plan. My router is in the same room as my desktop computer. I have the router set to channel 36 / 160 MHz / AX mode only. Does anyone have any suggestions on what to try next? I downloaded the latest driver directly from Intel's website. Thank you, guys.
Sounds like you're connecting on 2.4 GHz or only getting 40 MHz of bandwidth.

Try a different channel up to 52 and see if it improves. For me, 36 isn't the best, but 40 or 44 provides 1.5 Gbps.

Grab a Wi-Fi analyzer app for your phone and you can see how the different channels perform in your environment, to pick the best option to stick with.
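Before changing channels, it's also worth confirming what the adapter actually negotiated. On Windows, `netsh wlan show interfaces` reports the current channel and receive/transmit PHY rates; a small sketch that parses that style of output (the sample text below is illustrative, not captured from the OP's machine):

```python
import re

# Illustrative excerpt of `netsh wlan show interfaces` output on Windows.
sample = """\
    Radio type             : 802.11ax
    Channel                : 36
    Receive rate (Mbps)    : 573.5
    Transmit rate (Mbps)   : 573.5
"""

def link_rates(text: str) -> dict:
    """Extract channel and receive PHY rate from netsh-style output."""
    fields = dict(re.findall(r"^\s*([A-Za-z ()]+?)\s*:\s*(\S+)\s*$", text, re.M))
    return {
        "channel": int(fields["Channel"]),
        "rx_mbps": float(fields["Receive rate (Mbps)"]),
    }

print(link_rates(sample))  # a ~573 Mbps rate means a 40 MHz, 2-stream link
```

If the reported rate is stuck around 574 Mbps instead of 2402 Mbps, the router and adapter never agreed on a 160 MHz channel, which is the first thing to fix before worrying about throughput.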
 
