Monday, December 31, 2012

Wi-Fi Roaming Analysis Part 3 - Measuring Roam Times

This article picks up where we last left off in our discussion on Wi-Fi roaming. In part 1, I covered how connection control occurs in Wi-Fi, the importance of roaming, and what conditions are involved in triggering a client roam. Then in part 2, I dived into the many variations of Wi-Fi roaming and how they each work. Now that we know the background on how Wi-Fi roaming occurs in multiple different scenarios, it's almost time to dig in and get our hands dirty by actually capturing packets and measuring client roaming performance.

But before we can do so, we have one more topic to cover, namely: how to actually measure the roam. This may seem trivial, and it really isn't a difficult subject. However, it is important to establish the methodology we will use to provide consistent, repeatable, and comparable results. This will enable us to accurately compare roaming performance between different types of clients as well as across firmware, driver, and configuration updates on the same client or WLAN infrastructure.

Wi-Fi Roaming Analysis Series:
  1. Part 1 - Connection Control and Importance of Roaming Analysis
  2. Part 2 - The Many Variations of Wi-Fi Roaming
  3. Part 3 - Methods of Measuring Roam Times
  4. Part 4 - Analysis with Wireshark and AirPcap
  5. Part 5 - Analysis with Wildpackets Omnipeek (coming)
  6. Part 6 - Tips for Roaming Performance Improvement (coming)
Measuring Roam Times
There are several different methods by which we can actually measure the roaming event. Variations exist because organizations, wireless professionals, and software tools have different views on what constitutes a completed roam (from beginning to end). Different methods of measurement may be applicable in different scenarios, but what is important is to maintain consistency in approach in order to establish a baseline in performance and be able to accurately compare results to one another.

The following are common methods used to calculate the duration required for a client to roam from one AP to another AP:
  1. Between 802.11 Encrypted Data Packets to/from the client on the old and new AP
    This method focuses on analyzing the impact of roaming delay on the application(s) running on the client device. By analyzing the amount of time between the last data frame transmitted on the "old" AP and the first data frame transmitted on the "new" AP, we can get an idea of the latency experienced by applications. This can be useful for understanding how roaming latency might impact software development and internal application timers that may result in timeout errors or otherwise disrupt network applications.

    However, one drawback to this method is that it reflects not only the actual wireless roaming latency, but also any idle time between frame transmissions if the application does not have data buffered for transmission. This can inflate perceived roam times based on application behavior. For example, many applications are bursty in nature, and measuring roam times between data frames could include a large amount of time that is simply client idle time, since the application has submitted no frames for transmission. Even in the "best-case" scenario where application behavior is consistent, such as a VoIP G.711 call that sends frames every 20ms, this can still cause imprecise measurement. Historically, this has not been a problem because a large percentage of clients did not support fast roaming methods and roaming times were comparatively large; an inaccuracy of 20ms or less is not a substantial factor in a 500ms - 1sec roam time. But as newer clients support 802.11r and Fast BSS Transition, roam times are likely to be 50-100ms, and a measurement error of 20ms could represent 20-40% of the calculated roam time.

  2. Between 802.11 Probe Request through EAPoL-Key (or Association Response)
    This method focuses solely on the wireless roaming latency, removing application behavior from the calculation. The calculation begins with client probing to discover candidate APs to which it can roam, and typically completes with the final EAPoL-Key frame (but may vary based on which type of roam was completed; for instance with 802.11r Fast BSS Transition the final frame required for a successful roam is the Association Response. See Part 2 of this series).

    However, the drawback with this method is that the calculation may be inflated by including the time required for client probing. Since client probing does not always reflect an actual roam event (many clients probe periodically to maintain a list of APs to minimize discovery time when they actually need to roam) and probing behavior varies between manufacturers and even driver versions, this can result in the inability to accurately compare roaming performance between clients or software upgrades.

  3. Between 802.11 Authentication Request through EAPoL-Key (or Association Response)
    This method is nearly identical to the previous method, but it omits client probing from the actual roam time calculation. Many times, when this method is used, the client probing is still listed for informational purposes in order to better understand the client behavior. Therefore, the roam time calculation is limited to the time it took the client to move to the new AP once it decided that a roam was required. Measurement is typically performed from the client 802.11 Authentication Request through the final EAPoL-Key frame (or Association Response, depending on the type of roam. See Part 2 of this series).

    It should also be noted that 802.11 Authentication does not always indicate a roaming event; Wi-Fi clients are allowed to perform 802.11 authentication with multiple APs at once, but can only be "associated" to a single AP at a time. Therefore, look for 802.11 Association Request frames to positively identify a roaming event, then look backwards from that point to identify when the client both probed to discover the AP and performed 802.11 authentication to the AP (different than 802.1X/EAP authentication).
None of these methods is inherently better than the others; they are simply different ways of measuring the time required for a roam to complete.
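To make the difference between these measurement methods concrete, here is a rough Python sketch over a hypothetical list of decoded frame records. The timestamps, BSSIDs, and frame kinds are invented purely for illustration; in practice you would extract them from a packet capture filtered on the client's MAC address.

```python
# Hypothetical frame records for one roaming client: (timestamp_sec, bssid, kind).
frames = [
    (10.000, "aa:aa:aa:aa:aa:aa", "data"),       # last data frame on the "old" AP
    (10.050, "bb:bb:bb:bb:bb:bb", "auth_req"),   # 802.11 Authentication Request
    (10.052, "bb:bb:bb:bb:bb:bb", "auth_resp"),
    (10.055, "bb:bb:bb:bb:bb:bb", "assoc_req"),  # positively identifies the roam
    (10.058, "bb:bb:bb:bb:bb:bb", "assoc_resp"),
    (10.090, "bb:bb:bb:bb:bb:bb", "eapol_key"),  # final EAPoL-Key of the handshake
    (10.140, "bb:bb:bb:bb:bb:bb", "data"),       # first data frame on the "new" AP
]

def roam_time_data_frames(frames, old_bssid, new_bssid):
    """Method 1: last data frame on the old AP to first data frame on the
    new AP. Includes any application idle time between transmissions."""
    last_old = max(t for t, b, k in frames if b == old_bssid and k == "data")
    first_new = min(t for t, b, k in frames if b == new_bssid and k == "data")
    return first_new - last_old

def roam_time_auth_to_eapol(frames, new_bssid):
    """Method 3: 802.11 Authentication Request through the final EAPoL-Key
    frame on the new AP, omitting probing and application behavior."""
    auth = min(t for t, b, k in frames if b == new_bssid and k == "auth_req")
    eapol = max(t for t, b, k in frames if b == new_bssid and k == "eapol_key")
    return eapol - auth

old, new = "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"
print(f"Method 1: {roam_time_data_frames(frames, old, new) * 1000:.0f} ms")  # 140 ms
print(f"Method 3: {roam_time_auth_to_eapol(frames, new) * 1000:.0f} ms")     # 40 ms
```

Note how the same capture yields very different numbers: method 1 includes 50 ms of application idle time after the handshake completed, while method 3 isolates the over-the-air transition itself.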

It may be helpful for you to reference a simple Wi-Fi connection ladder diagram. At what point do we start measuring the roam, and at what point has the roam completed? Which method of measurement is most useful for your analysis?

Measuring Wi-Fi Roam Times

Also, consider which EAP type you have implemented in your network. Different EAP methods require different application flows to complete authentication and can impact the roaming performance for clients that do not support fast roaming. For reference, here is a PEAP authentication packet flow for a client performing a full 802.1X/EAP authentication on a Wi-Fi network.

Personally, I typically measure Wi-Fi roaming times between the 802.11 Authentication Request and the final EAPoL-Key frame (or Association Response). However, this is simply my preference because I often like to compare roaming times between different clients that may be running different applications. By eliminating the Data frames from my calculation, it is easier to directly compare the Wi-Fi driver stacks and radio performance of the clients.

In the next article on this topic, we'll dig into actual packet captures. Stay with me, you packet analysis junkies!

Cheers,
Andrew

Thursday, December 27, 2012

The Impact of 802.11ac Gigabit Wi-Fi on Enterprise Networks

802.11ac is being hyped as a dramatic improvement in performance over existing 802.11n equipment. While this will be technically true once all of the capabilities of 802.11ac are available, this is going to take time. Realistically, within the next year, first generation 802.11ac equipment will only offer marginal improvement over 802.11n equipment for enterprise networks. Separating the hype from real-world impact to enterprise WLANs has received little attention.

I recently had the opportunity to provide commentary on the subject for an article written by Lee Badman in Network Computing. Lee did a superb job with the article, as he always does, and provides great insight for corporate WLAN network managers looking for advice.

I'd like to expand a little bit deeper on my viewpoints expressed in his article with this post, and to provide an assessment of the "real-world" impact that most enterprises should be considering.

802.11ac - Wi-Fi Just Keeps Getting Better!
The 802.11ac standard includes complex technology that will eventually allow multi-Gigabit data transfer, but not all aspects of the specification will be available on day one. Similar to 802.11n, which began with two spatial stream devices capable of 300 Mbps and eventually saw maturation to three spatial streams capable of 450 Mbps, 802.11ac will see an initial first wave of products that are capable of 1.3 Gbps with future maturation possibly up to 6.9 Gbps. Whether or not we will actually see 802.11ac products capable of 6.9 Gbps is dependent on hardware enhancements on both the access point and client that are not certain.

First generation 802.11ac products will achieve 1.3 Gbps through the use of three spatial streams, 80 MHz wide channels (double the largest 40 MHz channel width with 802.11n), and use of better hardware components that allow higher levels of modulation and encoding (256-QAM). The 802.11ac amendment also simplifies implementation of standards-based beamforming for manufacturers by focusing on a single form of explicit beamforming, and eliminating the complex number of beamforming methods detailed in 802.11n. This should allow AP and client manufacturers to align on a single interoperable method.
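As a back-of-the-envelope check on the 1.3 Gbps figure, the VHT PHY rate can be derived from the subcarrier count, modulation, and coding. The constants below (234 data subcarriers for an 80 MHz channel, 8 coded bits per subcarrier for 256-QAM, rate-5/6 coding, 3.6 µs short-guard-interval symbol) come from the 802.11ac VHT MCS tables; treat this as an approximation, not a full rate calculator.

```python
def vht_phy_rate_mbps(spatial_streams, data_subcarriers, bits_per_subcarrier,
                      coding_rate, symbol_us=3.6):
    """Approximate 802.11ac PHY rate: coded bits per OFDM symbol times the
    coding rate, divided by the symbol duration (3.6 us with short GI),
    multiplied by the number of spatial streams. Result is in Mbps."""
    per_stream = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_us
    return spatial_streams * per_stream

# MCS 9 on an 80 MHz channel: 256-QAM (8 bits), rate-5/6 coding.
rate = vht_phy_rate_mbps(3, 234, 8, 5 / 6)
print(f"{rate:.0f} Mbps")  # 1300 Mbps -- the first-wave 802.11ac headline rate
```

The same arithmetic with one spatial stream gives the familiar 433 Mbps per-stream figure quoted for first-generation 802.11ac clients.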

Future releases of 802.11ac will enable even higher bandwidth by allowing up to eight spatial streams, 160 MHz wide channels, and simultaneous transmission to multiple clients by an access point, called Multi-User MIMO (MU-MIMO). In a few years, the realistic benefits for enterprises will likely be four and five spatial stream products with network designs still primarily based around 20 MHz and 40 MHz wide channels. MU-MIMO deserves special attention because it will mark a significant milestone in wireless technology that will allow greater performance through the use of parallel transmissions to multiple receivers from the same transmitter. For example, an AP that is capable of 3 spatial streams (3X3:3) could transmit 1 spatial stream to three different clients that are only capable of 1 spatial stream each, concurrently. This will allow enterprises to better-serve large client populations in high-density environments. As described in the Aerohive High-Density Wi-Fi Design and Configuration Guide, the increasing reliance on WLANs as the primary method of network connectivity and for mission-critical services is shifting the focus from designing Wi-Fi networks for coverage to designing them for capacity. MU-MIMO will be a critical enhancement that will allow Wi-Fi networks to scale larger and provide greater capacity to support growing demand.

Real-World Benefits
Most of the current discussion on 802.11ac focuses on large bandwidth improvements that will not be available for several years to come. The short-term improvements are of bigger benefit in small WLAN deployments, such as SMB and consumer homes, where only a single Wi-Fi AP will be able to take advantage of the much wider channels.

First-generation enterprise 802.11ac products that will be available in 2013 represent an incremental evolution of Wi-Fi above current 802.11n products on the market. These first-generation products will outperform existing 11n products, but only marginally, especially in multi-AP enterprise deployments. This is because the majority of bandwidth gain with first generation 802.11ac products will rely on the use of wider channels, up to 80 MHz. Enterprises need to be cognizant of the limitations in spectral capacity when designing and deploying an enterprise WLAN network. Enterprises must be careful to deploy access points on non-overlapping channels, with sufficient signal attenuation between adjacent access points to prevent co-channel interference.

Here is a look at the data rates available with first-generation 802.11ac products capable of 1, 2, and 3 spatial streams. Note that 160 MHz channels are shown only for reference, and will not be available with the first wave of 11ac products.

First-Generation 802.11ac Data Rates
(Note - 160 MHz channels will not be available in first wave of products)
You can find a complete listing of 802.11ac data rates, including 4-8 spatial streams, in the appendix of the Aerohive High-Density Wi-Fi Design and Configuration Guide.

Arguably, the biggest benefit of first generation 802.11ac will be the adoption of 5 GHz bands by mobile devices. This will enable enterprise WLANs to serve mobile devices in greater quantity with better performance due to the mandatory support of 5 GHz frequency bands by all 802.11ac compliant equipment. Today, enterprise WLANs struggle to provide the capacity required to support the large influx of mobile devices like smartphones and tablets. Once mobile device manufacturers begin deploying 802.11ac capable devices, existing 802.11n and new 802.11ac WLAN deployments will be able to provide significantly better services to mobile devices. Due to the enormous popularity and market growth of mobile devices, they now represent a significant portion of the user-base in many enterprise WLAN networks. However, most mobile devices today are limited to the 2.4 GHz band, which is cluttered with interference and offers minimal capacity. This has resulted in under-performing WLANs and an often poor user experience. With the adoption of 802.11ac, mobile devices will be able to take advantage of the cleaner spectrum and the additional capacity available in the 5 GHz bands. In addition, chipset enhancements should allow mobile devices to operate at higher bandwidth and performance levels with better battery life.

The Massive Shift to 5 GHz and the Impact to Enterprise WLANs
802.11ac operates exclusively in the 5 GHz unlicensed bands. In the U.S. there are a total of 25 non-overlapping 20 MHz channels in the 5 GHz bands. While this appears sufficiently large to handle channel re-use in large enterprise deployments, it can be deceiving. When channel size is increased, the number of available channels decreases, constraining channel re-use and raising the risk of co-channel interference and WLAN performance degradation. Only 10 non-overlapping channels will be available with 40 MHz channel width (similar to the 9 channels available with 802.11n; the difference is due to the addition of channel 144 with 802.11ac), and only 5 channels will be available with 80 MHz channel width. When DFS channels are avoided, which are not supported by a large percentage of client devices and are at higher risk of causing network stability issues, the number of remaining channels dwindles down to only 2. Therefore, it will not be practical for most enterprises to use 80 MHz wide channels because doing so significantly constrains the channel re-use that is critical to a high-performance WLAN. This will limit practical performance of first-generation 802.11ac in enterprise environments to 600 Mbps using 40 MHz channels, a far cry from the 1.3 Gbps advertised.
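The channel arithmetic above can be tabulated. The non-DFS counts below assume only the UNII-1 (channels 36-48) and UNII-3 (channels 149-165) bands are usable without DFS support:

```python
# U.S. 5 GHz non-overlapping channel counts by channel width, including
# channel 144 per 802.11ac. "non_dfs" assumes only UNII-1 (36-48) and
# UNII-3 (149-165) are used; all other channels require DFS support.
channels = {
    20: {"total": 25, "non_dfs": 9},
    40: {"total": 10, "non_dfs": 4},
    80: {"total": 5,  "non_dfs": 2},
}

for width in sorted(channels):
    c = channels[width]
    print(f"{width:3} MHz: {c['total']:2} channels total, "
          f"{c['non_dfs']} usable without DFS")
```

Two non-DFS 80 MHz channels is simply not enough to build a channel re-use plan around, which is why 40 MHz (and even 20 MHz in high-density areas) remains the practical enterprise design point.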

Spectral Capacity versus Channel Width

Enterprises in multi-tenant buildings or in dense urban areas will likely see increased utilization of the 5 GHz spectrum bands, which could cause greater levels of interference and degrade WLAN performance. This is of significant concern if enterprises deploy 802.11ac equipment with 80 MHz wide channels, without recognizing the impact to neighboring businesses.

802.11ac also threatens to accelerate the utilization of 5 GHz spectrum bands by a large majority of enterprises. This could be a double-edged sword, providing the promise of increased performance for individual organizations, while simultaneously congesting the once interference-free 5 GHz bands. This may expose the need for more unlicensed spectrum sooner than anticipated. The timing is fortunate, as the FCC and Congress are currently considering allowing unlicensed use of two additional 5 GHz bands, devising rules for spectrum auctions in 2013-2014 of the 600 MHz TV white spaces, and spectrum-sharing plans in the federal 3550-3650 MHz band that would provide additional unlicensed spectrum for general use. While there may not be an "unlicensed spectrum crunch" today, there very well could be one in the not-too-distant future. More information on the current spectrum policy discussions can be found in my previous blog post about the need for a balanced spectrum policy.

When does it make sense to deploy 802.11ac?
Enterprises that have deployed the latest generation 802.11n equipment pervasively throughout their network can be confident in the investment they have made; first generation 802.11ac only offers incremental benefits over 3 spatial stream 802.11n.

First generation 802.11ac products will be of greater interest to enterprises that are purchasing a new "greenfield" WLAN deployment, growing an existing WLAN deployment with additional APs, or are running on older legacy WLAN equipment. 802.11ac is backwards-compatible with all previous versions of Wi-Fi, so it will be able to supplement existing WLAN deployments seamlessly while providing higher performance and investment protection versus 802.11n equipment. Enterprises that were early adopters of 802.11n may see greater appeal in moving to first generation 802.11ac because their existing 802.11n equipment has already been depreciated over a number of years and they have received their return on investment. In addition, 802.11ac can offer a substantial upgrade in performance to 600 Mbps over two spatial stream 802.11n (300 Mbps), allowing the enterprise to increase performance, capacity, and services offered over the WLAN.

Revolution or Evolution? - Andrew's Take
I'm bullish on 802.11ac. The technology holds a lot of promise to improve enterprise WLAN performance and capacity, especially for mobile devices which are accounting for a larger and larger percentage of our client base with BYOD and Consumerization of IT.

However, we need to temper our short-term expectations. The first wave of 802.11ac equipment will likely not prompt upgrades to existing WLAN deployments unless it is replacing older gear; it just won't provide enough value to justify the replacement of the latest generation of 802.11n equipment. Enterprises will also need to be careful to deploy 802.11ac equipment correctly to avoid harmful interference to their own network and neighboring networks - most notably limiting channel width to 20 MHz in high-density areas and 40 MHz maximum in other common-use areas.

Just as with 802.11n, we will see subsequent releases of 802.11ac that implement additional technology improvements detailed in the standard. This will include 4+ spatial streams (possibly up to 8 eventually), 160 MHz wide channels, and MU-MIMO.

Lastly, we need to ensure the value of unlicensed spectrum is recognized by regulatory agencies throughout the world. The 2.4 GHz unlicensed band is often joked about as being the "junk" band today due to rampant interference and over-crowding. The massive shift to 5 GHz is already underway which will only be accelerated by 802.11ac adoption. We run the risk of 5 GHz over-crowding very soon if more unlicensed spectrum isn't made available.

Cheers,
Andrew



Sunday, December 23, 2012

Cellular MVNOs Increasingly Rely on Wi-Fi Offload to Net Subscribers

Are MVNOs pioneering the next "innovation" wave in cellular? The answer may be 'Yes' based on unique business models that appear to be working.

Cellular MVNOs (Mobile Virtual Network Operators) are gaining attention and subscribers in the U.S. due in part to cheaper pricing models built around BYOD and Wi-Fi offload:

many new MVNOs are adopting Bring Your Own Device (BYOD) strategies, with SIM-only MVNOs like Simple Mobile, Red Pocket and Ultra on the GSM side, and a new BYO Sprint Device solution for MVNOs like Kajeet, Ting and others on the CDMA network. Sprint’s BYOSD program has the added benefit that no SIM kit or installation is required; the handset is activated simply via its serial number.

With BYOSD, for example, Kajeet offers network-based parental controls, web filtering and location services on recycled handsets. BYOD solves two problems for the MVNO – eliminating handset subsidies and reducing logistics cost (kitting, shipping, warranty repairs and returns). Even where customized handsets are used, MVNOs sell them above cost, eliminating costly subsidies.
[...]

Service – airtime and data – costs can also be reduced. With increasing data usage, many MVNOs utilize dual-mode phones (cellular and Wi-Fi) to offload voice and data traffic to Wi-Fi networks, which is increasingly available in homes, offices and businesses. And an added benefit for providers: offloading to Wi-Fi turns off the carrier’s meter.
[...]

When Republic Wireless introduced its $19/month unlimited plan as a beta trial, everyone asked how they planned to do it. Republic relies heavily on Wi-Fi networks at home and work, using “hybrid calling” or cellular offload where traffic only rolls to Sprint’s cellular network when Wi-Fi is unavailable. Republic is now shipping a Motorola Android smartphone, running proprietary Republic software (for $259), which completes the no-contract package. And apparently the beta trial worked just fine: the same $19 plan is now available to all.

FreedomPop guarantees 500MB of free 4G mobile broadband data every month, with no data caps or throttling, and attractive plans ($17.99/month for 2GB of data, a cent per MB additional). Customers can earn additional data for each friend referred or unlimited data by engaging in partner promotions. The Freedom Hub Burst, a 4G Wi-Fi router that offloads cellular to wireline and supports up to 10 devices, is free with security deposit. They also offer the Freedom Sleeve Rocket, an iPod Touch case that turns it into an iPhone. Plans include trading bandwidth with other FreedomPop users, and creating bandwidth-sharing communities. Launched on Clearwire, FreedomPop will add Sprint’s LTE network next year.
For those unacquainted with MVNOs: they purchase wholesale access to the larger carriers' networks and repackage services with different pricing plans for subscribers.

What I find interesting is the unique business approaches being offered by MVNOs through a combination of cellular data coupled with prioritized Wi-Fi offload when Wi-Fi is available. BYOD is also a new approach in the cellular world, where historically strict control over subscriber client device options helped ensure network performance.

MVNOs are also leading solutions to ease international cellular data roaming.
If you travel overseas and want to maintain your mobile data connection, you’re either going to pay criminal roaming rates or endure tremendous hassles avoiding them. But a new breed of virtual operators like Voiamo are looking to create the first truly international plans.
[...]

To put it simply, international data roaming is broken, and no U.S. carrier seems to be lifting a finger to fix it. They seem to prefer the miserable status quo to the headaches required to repair the system. But where the network operators are falling down, mobile virtual network operators (MVNOs) are picking up the slack.
[...]

While most MVNOs tend to work in a single country with a single carrier, there’s nothing preventing them from buying capacity on multiple networks in multiple countries and then selling international access to a customer in a single pricing plan.

London-based Voiamo, however, is thinking bigger with a new service called GlobalGig. Instead of just renting you a hotspot and selling you a temporary plan when you travel, it proposes to replace your current country-limited 3G or 4G modem plan with a service that will work in multiple countries with a single pricing plan. Its rates are comparable to the prices most of the major carriers charge for hotspot plans — starting at $25 for 1 GB a month and up to $50 for 5 GB — but those rates are good for the U.S., the U.K. and Australia.

In the U.S., GlobalGig uses Sprint’s network, while in the U.K. and Australia it uses the 3 and Optus networks, but it is negotiating deals with carriers in other countries and plans to expand its global footprint soon, Voiamo CEO and founder Nigel Bramwell said. Its $120 hotspot — which can connect up to five devices through Wi-Fi — can support networks in 100 different countries, Bramwell said. As GlobalGig adds more carriers to its roster it will periodically send out new SIM cards to its customers, expanding their coverage to new countries.

MVNOs are typically thought of as services that cater to more price-conscious consumers; those who can't afford the more expensive contracts from the larger carriers that lock subscribers into 2-year commitments.

But perhaps it's time to re-assess their offerings and take a serious look at MVNOs for better service offerings than their larger competitors.

Cheers,
Andrew

Saturday, December 22, 2012

Weird Wi-Fi Sightings - "milk-Fi"

This is a new series on weird Wi-Fi sightings from around the world.

Title: "milk-Fi"
Business: Meson Cinco Jotas
Location: Calle Arenal, Madrid, Spain
Date: 2012-Nov-05
Explanation: None given. The business was closed when I passed by.



Here is a better picture (source courtesy of Instagram user edans):


Friday, December 21, 2012

Understanding Fourier Transform

Check out this Interactive Guide to the Fourier Transform, which is essential for digital modulation onto radio waves:
The Fourier Transform is one of the deepest insights ever made. Unfortunately, the meaning is buried within dense equations.
[...]
The Fourier Transform takes a specific viewpoint: What if any signal could be made from circular motion (repeating cycles)?
[...]

I knew I should have remained a math major in college :)

Andrew

Thursday, December 20, 2012

Breathing Wi-Fi

I often get asked "What's with the velcro on your laptop?"

They are referring to this velcro:

Why the velcro, dude?

The answer is simple: I'm a Wi-Fi engineer.

You see, Wi-Fi engineers are not your typical run-of-the-mill network engineer. Sure, we carry the requisite rollover console cable, USB-to-Serial adapter, know most vendors' CLI interfaces through reflex alone, can whip up a discontiguous wildcard mask for an ACL without blinking an eye, and can diagnose spanning-tree problems in our sleep.

But unlike most other network engineers, we have a very intimate relationship with Layer 1. Whereas route/switch engineers have learned and long since forgotten about Ethernet's physical properties, largely because they're not the source of many issues, Wi-Fi engineers eat, sleep, and breathe air (you see what I did right there!). Wi-Fi engineers must know the ins-and-outs of RF propagation, signal strength, free space path loss, signal attenuation through various objects, channel planning, and the intricacies of co-channel interference in order to design a network that works.

This requires tools... Wi-Fi tools. Ask any Wi-Fi engineer and they'll show a plethora, nay a smorgasbord, of various WLAN adapters, antennas, and spectrum analyzers. We've all got too many to count; some Cardbus, some USB, some with internal antennas, some with external, some from companies that no longer exist, and some that have been obsolete so long that nobody else remembers them... except for us! Because there's always a chance we'll run into that one customer that still uses legacy equipment that we need to support, test, and emulate.

So when we roll into action, we light the place up!

Light it up!


So when you ask: "What's with the velcro, dude?"
I'll kindly reply: "I breathe Wi-Fi, man!"


Breathing Wi-Fi

Until we Associate again!

Wi-Fi Clients Follow the Data Rates Advertised by the AP

As a follow-up to my previous post on Wi-Fi probing behavior and configured data rates, I thought I'd round out the discussion by showing how the client ultimately follows the AP's lead when it comes to communicating its Basic and supported data rates when it connects to the WLAN.

It's very important for an AP to ensure that the clients attempting to connect to the WLAN support the mandatory (termed 'Basic') data rates configured as part of the network policy. Doing so ensures that all clients can properly decode and receive management frames like beacons as well as broadcast and multicast traffic that is essential to successful network connectivity, which are all sent out using one of the Basic data rates. Beacons and broadcasts are typically sent at the lowest Basic data rate, while multicast may be sent out at the highest Basic data rate depending on the AP vendor.

The communication of the Basic and supported data rates must occur in both directions, so both parties can understand the capabilities of the other.

Clients that are actively scanning send out a list of supported data rates, but don't mark any as Basic since that is the responsibility of the AP. This is just a raw list of data rates the client is capable of transmitting and receiving.

Probe Request Supported Data Rates


APs advertise their Basic and supported data rates in beacons so that clients that are passively scanning can decide whether or not they can connect to that AP. APs will also include their list of supported rates in the probe response. Both frame types contain the same list of Basic and supported rate information, distinguishing which rates are mandatory for operation in this BSS (basic service set). Notice that 802.11n rates are carried in the HT Capability Information Element, listed as supported MCS sets for various numbers of spatial streams. Clients will include this element in the association request as well.

Probe Response Basic and Supported Data Rates


Finally, once it comes time to connect, the client will actually adjust its supported data rate list contained in the Association Request frame in order to align with the policy advertised by the AP; this time listing which data rates are Basic (mandatory) and which ones are supported. Differences between the client and AP may still exist for the supported data rates, but the Basic data rates must match. Using the list found in the association request, the AP determines whether or not the client is allowed to connect to the WLAN.

Association Request Basic and Supported Data Rates;
"Data Rate Follow-Back" behavior of Wi-Fi Clients

I call this the "Data Rate Follow-Back" behavior of Wi-Fi clients :) You won't find that in any book as a standard term, it's just my own terminology.

Cheers,
Andrew

Wi-Fi Probing Behavior and Configured Data Rates

Today, I'd like to highlight what happens under-the-covers when a client tries to connect to an AP after data rates have been manipulated. Specifically, I'm talking about the probing behavior of both the client and the AP. If you've ever wondered how this works, I'm here to tell you about it today!

A standard WLAN best practice to improve performance is to disable low data rates on the APs in an enterprise network. This helps to increase overall network capacity by eliminating overhead caused by management frames sent out at the lowest configured Basic data rate, and by ensuring data frames aren't sent at low data rates, which consume large amounts of airtime for application traffic.
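To see why low rates are so costly in airtime, a quick payload-only calculation helps. This deliberately ignores preambles, MAC headers, interframe spacing, and acknowledgements, all of which add even more fixed overhead at low rates.

```python
def payload_airtime_us(payload_bytes, rate_mbps):
    """Airtime for the payload alone: bits divided by Mbit/s gives
    microseconds. Preamble, MAC header, IFS, and ACK are excluded."""
    return payload_bytes * 8 / rate_mbps

for rate in (1, 6, 54):
    us = payload_airtime_us(1500, rate)
    print(f"1500-byte frame at {rate:2} Mbps: {us:7.0f} us of airtime")
```

A single 1500-byte frame at 1 Mbps occupies the channel for 12,000 µs, roughly 54 times longer than the same frame at 54 Mbps. One slow client can consume the airtime of dozens of fast ones.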

First, let's briefly describe how probing works in a legacy WLAN deployment, when the AP supports all 802.11b data rates (1, 2, 5.5, 11 Mbps). In this scenario, the client sends out a Probe Request at the lowest data rate it supports, typically 1 Mbps. The AP will issue a Probe Response back to the client, again at 1 Mbps. They both understand each other, life is happy!

Now let's examine what happens when data rates have been manipulated on an AP, for instance all 802.11b data rates have been disabled and 6 Mbps is now the lowest Basic data rate.

Probe Request and Response Behavior
After Data Rate Manipulation

The client probes at 1 Mbps (as usual) since it doesn't know "what's out there" yet. It's trying to discover all the nearby APs that it could connect to and what their capabilities are. So, it sends out the Probe Request at the lowest data rate it supports to achieve the maximum visibility and trigger all nearby APs to respond.

Probe Request at 1 Mbps

Now, what our AP with manipulated data rates does is the interesting part. Even though it can properly receive and decode the client's Probe Request at 1 Mbps, it is only configured to support 802.11g/n data rates (6 Mbps and higher). So what does it do? It issues a Probe Response at its lowest configured Basic data rate. After all, the configuration policy now states that all 802.11b data rates are disabled, so it responds to the client at its lowest Basic data rate of 6 Mbps. If the client is 802.11g capable, it will still be able to decode the frame and process the response. The Probe Response also informs the client of the supported rates and which ones are mandatory (the meaning of a 'Basic' rate).

Probe Response at 6 Mbps

Therefore, the probe response behavior of APs is directly tied to the configured Basic data rates; APs always respond at the lowest Basic data rate, NEVER a disabled data rate. Think about the implications this has on WLAN network operation:

  • Clients that don't support the lowest Basic data rate can't connect to the WLAN. In this example, 802.11b-only clients that don't support 802.11g/n would not be able to connect.
  • Clients that have too low an SNR to decode the lowest Basic data rate won't be able to connect to the WLAN; they won't be able to decode the response frame from the AP. Data rate manipulation can effectively be used to limit the size and "association range" of a Wi-Fi access point (but not the "coverage range" since the signal still travels just as far and can cause interference even though it can't be decoded). Now think about the implications on association range if we made 18 Mbps the lowest Basic data rate.
It's worth noting that 802.11n requires at least one legacy 802.11a/b/g data rate configured as a Basic data rate. Therefore, WLAN vendors that can allow only 802.11n clients and reject legacy 802.11a/b/g clients do so through other methods unrelated to data rate manipulation, typically by rejecting association requests from clients that don't advertise support for 802.11n MCS rates.
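The admission rule that drives these implications can be sketched in a few lines of Python. This is a minimal illustration of my own, assuming the AP simply checks that the client supports every Basic rate of the BSS; `can_associate` is a hypothetical helper, not a real API.

```python
def can_associate(ap_basic_rates, client_supported_rates):
    """An AP rejects association if the client does not support every
    Basic (mandatory) rate of the BSS. Rates are in Mbps."""
    return set(ap_basic_rates).issubset(client_supported_rates)

# AP with all 802.11b rates disabled: lowest Basic rate is now 6 Mbps
ap_basic = {6.0, 12.0, 24.0}
g_client = {6.0, 9.0, 12.0, 18.0, 24.0, 36.0, 48.0, 54.0}  # 802.11g capable
b_client = {1.0, 2.0, 5.5, 11.0}                           # 802.11b-only

print(can_associate(ap_basic, g_client))  # the 802.11g client can join
print(can_associate(ap_basic, b_client))  # the 802.11b-only client cannot
```

Raising the lowest Basic rate to 18 Mbps would shrink the set of clients (and the SNR range) that pass this check, which is exactly the "association range" effect described above.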


Cheers,
Andrew

Tuesday, December 18, 2012

Does Airport Security Actually Prove Useful? Possibly (in a twisted way).

I read Bruce Schneier's work a lot. I especially savor his insights on the psychology surrounding security systems: how humans interpret and react to threats both big and small and respond by building security systems to protect us, and how we misjudge and overreact to rare and extreme threats while ignoring the more common yet mundane ones.

One commonly cited example is the post-9/11 response to terrorism and the implementation of airport security that is not actually making us safer: TSA security procedures, "backscatter" X-ray machines, and knee-jerk reactions to specific threats (such as a single instance of a weapon hidden in a shoe) all create new hassles for legitimate passengers.

But I've also been reading Bruce's latest book, Liars and Outliers, where he explains how the human species grew intelligent primarily to compete against other humans, and learned to set self-interest aside (for the most part) in order to cooperate in the interest of the group. This cooperation requires trust in one another, and at varying group sizes (starting small, and eventually growing larger and larger) it requires social pressures to keep the defectors (those favoring self-interest over the group interest) to a tolerable level.

These social pressures come in various forms and work at various group sizes. As our society and group-sizes have grown larger and larger over the centuries, we have developed different methods of influencing human behavior to cooperate with one another and forgo the self-interest. And each time our group-size gets larger, our definition of "trust" takes on new meaning. In small group sizes, we trust the intentions of the individual; we know them at a more personal level. These are the Moral and Reputational pressures. As group sizes get larger, we become increasingly disconnected from the others whom we must place trust in; we are forced to trust their actions will be cooperative in the group-interest because we don't know their intentions. This is where Institutional pressures and Security Systems come into play.

To quote from 'Liars and Outliers':

The most important difference among these four categories is the scale at which they operate.

  • Moral pressure works best in small groups. Yes, our morals can affect our interactions with strangers on the other side of the planet, but in general, they work best with people we know well.
  • Reputational pressure works well in small- and medium-sized groups. If we're not at least somewhat familiar with other people, we're not going to be able to know their reputations. And the better we know them, the more accurately we will know their reputations.
  • Institutional pressure works best in larger-sized groups. It often makes no sense in small groups; you're unlikely to call the police if your kid sister steals your bicycle, for example. It can scale to very large groups—even globally—but with difficulty.
  • Security systems can act as societal pressures at a variety of scales. They can be up close and personal, like a suit of armor. They can be global, like the systems to detect international money laundering. They can be anything in between.

Therefore, going back to airport security: we cannot trust the large majority of society on a global scale using moral or reputational pressures. Institutional pressures become ineffective since they are enforced on a largely national basis, and terrorists are not deterred by them. We are left reliant on security systems alone as the basis for our "trust" of the aviation system.

But... could this "security theater" at airports actually be preventing the complete breakdown in trust in the aviation system, even if it proves woefully ineffective as an actual security mechanism? If the majority of society lost trust in aviation safety, would there be a mass-exodus of passengers? Would the airline industry completely crumble?

Perhaps our government's post-9/11 anti-terrorism actions, including the ineffective security systems and procedures instituted by the TSA, actually DO serve a purpose after all. They keep passengers reassured that they are reasonably safe while flying; they pacify our human minds, which over-exaggerate the true risk of a terrorist attack on airplanes. I'm not justifying these measures as making us safer, just as keeping the industry afloat by managing public "perceptions" of airline security.

This is just something I've been pondering...

Cheers,
Andrew

Monday, December 17, 2012

The Need for a Balanced U.S. Spectrum Policy between Licensed and Unlicensed Uses

The future of U.S. competition in mobile broadband innovation hangs in the balance with upcoming rulemakings on spectrum auctions that were authorized as part of the Middle Class Tax Relief and Job Creation Act of 2012 to help fund tax cuts, as well as a proposal to open a portion of federal spectrum to broader use through spectrum-sharing arrangements.

The spectrum policy debate centers on the balanced allocation of licensed versus unlicensed spectrum. Unlicensed spectrum already generates between $16 and $37 billion in annual economic value. According to the FCC, it also increases the value of wired broadband services by $25 billion per year and reduces the cost of delivering wireless broadband service by $25 billion per year through carriers' Wi-Fi offload. Unlicensed spectrum is also helping spur innovation in other markets. For example, Wi-Fi plays a critical role in fostering the mobile apps economy, currently a $10 billion market that is projected to reach $46 billion by 2016. Unlicensed spectrum is also the key enabler for machine-to-machine (M2M) networks that could prove to be a $1.4 trillion market by 2020. Add all of this up, couple it with the huge growth in the Wi-Fi market and the impending congestion in the 5 GHz unlicensed bands as 802.11ac adoption brings 80 MHz and 160 MHz wide channels, and a compelling case for more unlicensed spectrum can be made.

Three proposals exist to increase unlicensed spectrum in the U.S.:
  1. Two additional 5 GHz bands for unlicensed use: 5350-5470 MHz and 5850-5925 MHz. (195 MHz total).
  2. Spectrum-sharing in the 3550-3650 MHz federal use band. (100 MHz total).
  3. Guard bands in the 600 MHz TV block under auction. (12-20 MHz total).
For comparison, the existing 2.4 GHz ISM, 5 GHz U-NII and 5 GHz ISM bands amount to a total of 663.5 MHz available for unlicensed use.
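A quick back-of-the-envelope tally of the three proposals against the existing unlicensed allocation, using the upper-bound 20 MHz guard band figure (the labels below are my own shorthand for the proposals listed above):

```python
# Proposed unlicensed additions (MHz), from the three proposals above
proposals = {
    "5350-5470 MHz + 5850-5925 MHz": 120 + 75,   # 195 MHz total
    "3550-3650 MHz shared federal band": 100,
    "600 MHz guard bands (upper bound)": 20,
}
new_mhz = sum(proposals.values())   # 315 MHz of potential new spectrum
existing_mhz = 663.5                # 2.4 GHz ISM + 5 GHz U-NII/ISM today

print(f"New unlicensed spectrum: {new_mhz} MHz "
      f"({new_mhz / existing_mhz:.0%} increase over existing)")
```

Taken together, the three proposals would grow the unlicensed allocation by nearly half, which is why the rulemakings matter so much.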

First, the Middle Class Tax Relief and Job Creation Act of 2012 requires modification of part 15 of title 47 in the Code of Federal Regulations (CFR) to explore unlicensed use in the 5350-5470 MHz band for indoor use. This band is currently used by the federal government for radiolocation services (RLS), and unlicensed use would be subject to non-interference requirements such as Dynamic Frequency Selection (DFS). This would add 120 MHz of unlicensed spectrum between existing UNII-2 and UNII-2 Extended bands. The proceedings to modify the CFR must begin by Feb. 22, 2013 (1 year after the bill was passed into law).

This legislation also stipulates that a study should be conducted to evaluate spectrum-sharing technologies and the risk to Federal users if unlicensed U-NII devices were allowed to operate in the aforementioned 5350-5470 MHz band and also in the 5850-5925 MHz band. The 5850-5925 MHz band is currently used for Intelligent Transportation Systems (ITS) by the transportation industry, which envisioned using dedicated short-range communication protocols to communicate between vehicles and roadway infrastructure. These networks have not been deployed. Opening this second 5 GHz band could add another 75 MHz of spectrum for unlicensed use.

Second, the FCC is proposing a shared spectrum plan for the 3550-3650 MHz band that would enable innovative small-cell use of this 100 MHz block of spectrum. This band is currently used for military and satellite operations, and the NTIA Fast Track report has identified "exclusion zones" to prevent interference with offshore shipborne radar systems. The proposal would "spur significant innovation in wireless technologies and applications throughout the economy, while protecting incumbent users in the band" through a tiered-use structure: 1. Incumbent Access (federal), 2. Priority Access (critical services in select defined locations), and 3. General Authorized Access (public use). The FCC proposal follows the recommendations of the President’s Council of Advisors on Science and Technology (PCAST) to spur economic growth through government-held spectrum. You can read the full FCC Notice of Proposed Rulemaking on 'Enabling Innovative Small Cell Use in 3.5GHz Band NPRM and Order' on their website.

Comments by Chairman Genachowski highlight the importance of this proposal:
"We have moved forward on a next generation of unlicensed spectrum use, building on the concept that gave us Wi-Fi.  And we are on track to meet our target of freeing up 300 MHz of spectrum by 2015.
Today's action is a major step toward unleashing an additional 100 megahertz of spectrum for broadband use.  
It is also progress on major innovations in spectrum policy and technology.  This is important because, to achieve our ambitious spectrum goals, we must continue look beyond traditional approaches and supplement them with new ways to unleash the airwaves for broadband.  Spectrum is a scarce asset with transformative power – power to drive private investment, innovation, and economic growth; strengthen our global competitiveness; and provide broad opportunity to all Americans."

3550-3650 MHz NTIA Exclusion Zones

Third, guard bands in the 600 MHz TV incentive auction could create an additional 12-20 MHz of unlicensed spectrum for 'Super Wi-Fi'. The auction of spectrum in the 600 MHz TV band is under debate in the House of Representatives and will bring in revenue for the federal government to help pay for the approved tax cuts. However, the FCC's auction details propose the use of two 6-10 MHz guard bands (12-20 MHz total) for unlicensed use to stimulate economic growth through competition and innovation (this is in addition to the previous work on unlicensed use of TV white spaces in the UHF band occupied in various markets by DTV channels). I have not seen an estimate of the revenue that would be lost by allocating these guard bands for unlicensed use, but the estimated revenue from an auction is around $19 billion. Up to $7 billion of this auction revenue is earmarked to help build a nationwide public safety network for first-responders. You can read the full FCC Notice of Proposed Rulemaking on 'Broadcast Television Spectrum Incentive Auction NPRM' on their website.

FCC 600 MHz Band Plan Proposal

On December 12th, the U.S. House Energy and Commerce Subcommittee on Communications and Technology held a hearing on Keeping the New Broadband Spectrum Policy on Track.

Here are a few of my thoughts on the discussion that took place during the hearing:
  1. These U.S. incentive spectrum auctions are historic. I applaud our government in recognizing the need for re-purposing spectrum to maximize efficient use and help our nation remain competitive in this global economy. At the same time, our nation is looking at new ways of using spectrum through shared-spectrum approaches and striking a balance between licensed and unlicensed uses.
  2. It is not clear if the auctions will actually yield net revenues to fund a national public safety broadband network. The auctions involve three components: 1) a reverse auction to pay current spectrum owners in the 600 MHz TV band to vacate, 2) a re-packing of the band to facilitate more efficient spectrum use and enhance the value, and 3) a forward auction to sell the re-packed spectrum for licensed use for mobile broadband networks. The Spectrum Act only stipulates that the forward auction must cover costs of the reverse auction. The structure of the auction proceedings will influence these revenues. The value of the 600 MHz spectrum itself could be worth $1 - $2 per MHzPop (or roughly $7-16 billion) based on the previous 700 MHz auctions conducted in 2008.
  3. "Unlicensed spectrum is an extraordinary platform for innovation" - Chairman Genachowski
  4. Spectrum guard bands would serve two purposes: 1) protecting the adjacent licensed use frequency bands from interference, and 2) enhancing the spectrum band value by allowing the use of unlicensed services for new innovations, such as 'Super Wi-Fi'. This use of guard bands for unlicensed use has never been done before, and was already authorized under the Spectrum Act.
  5. Positions are split down party lines, with the Republicans advocating for smaller guard bands providing slightly more licensed spectrum available for auction while the Democrats are backing slightly larger guard bands that provide for unlicensed uses.
  6. There is concern about "maximizing proceeds" from the auction to help fund the public safety broadband network. I agree, a public safety network is important. But as Congresswoman Eshoo points out, the FCC is prohibited from basing auctions predominantly on revenue generated. Furthermore, it is unclear just how much auction revenue would be lost due to larger unlicensed guard bands. I urge Congressman Walden and other subcommittee members to complete a thorough analysis of the value of these guard bands with respect to the remaining licensed frequency, provide specific details on the expected revenue that would be lost, and assess the impact of this lost revenue on the funding for a public safety broadband network. This must be balanced against the proposed economic value that larger unlicensed guard bands could provide.
  7. The current legislation language focuses on minimizing guard bands "no larger than is technically reasonable to prevent harmful interference between licensed services outside the guard bands." Given the traditional nature of guard bands this makes sense. However, this new proposal for unlicensed use of guard bands represents a completely different use for that spectrum which can provide additional economic value. I would urge the FCC and subcommittee to re-evaluate the language of this legislation to properly focus on adequate guard band size given these new use-cases.
  8. If we don't move forward on growing unlicensed spectrum, we run the risk of losing our competitive advantage and seeing innovations in mobile platforms happen overseas rather than in the U.S.
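The MHzPop arithmetic behind thought 2 above can be sketched as follows. The 25 MHz bandwidth and ~310 million population inputs are illustrative assumptions of my own, chosen only to show how a $7-16 billion range falls out of $1-$2 per MHzPop pricing:

```python
def auction_value(bandwidth_mhz, population, price_per_mhzpop):
    """Spectrum valuation in the MHz-Pop metric: bandwidth times
    population covered times the per-MHz-Pop price (in dollars)."""
    return bandwidth_mhz * population * price_per_mhzpop

# Hypothetical inputs: ~25 MHz of repacked 600 MHz spectrum,
# U.S. population of roughly 310 million
for price in (1.0, 2.0):   # $/MHzPop range cited from the 2008 700 MHz auctions
    value = auction_value(25, 310e6, price)
    print(f"${price:.2f}/MHzPop -> ${value / 1e9:.1f} billion")
```

The point of the metric is that auction proceeds scale with both the bandwidth sold and the population it covers, which is why guard band sizing feeds directly into the revenue debate.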
Here is the full video of the hearing (2:45 in length):



In addition, Public Knowledge and over 370 companies wrote the following letter to the subcommittee chair Greg Walden (R-OR), urging the subcommittee to allow the "FCC [to] pursue policies that strike a productive balance between the need for more spectrum that accommodates both exclusive-use licensed and non-exclusive unlicensed technologies."

Undoubtedly, the myth of the cellular spectrum crunch is also a factor in this debate, especially when cellular carriers seem more intent on squatting on existing unused spectrum while playing land-grab for more in an effort to reduce competition in the market, all while pursuing a fairly aggressive plan for small-cell deployment coupled with Wi-Fi offload (ahem... unlicensed). Throw in actual data showing that mobile broadband data growth is strong but nowhere near the numbers thrown around by lobbyists, and it's clear that cellular providers have plenty of tools in their arsenal to meet growing demand without additional licensed spectrum.

Update Feb-2013: Several new reports have been released that continue to show that cellular data traffic is growing significantly slower than the industry would lead you to believe. This highlights the industry's deeply held desire to secure more spectrum for licensed use rather than freeing it up for unlicensed use. See these articles on the Cisco VNI Forecast for 2012 by GigaOm, BGR, and the Informa Whitepaper on smartphone data usage trends, and a study by Plum Consulting on Future-Proofing Wi-Fi with more spectrum (commissioned, ironically, by Cisco).

I urge you to support unlicensed spectrum as a more formal part of our nation's spectrum policy in the following ways:
  1. Leave comments on the FCC Notice of Proposed Rulemaking on 'Enabling Innovative Small Cell Use in 3.5GHz Band NPRM and Order' (Proceeding #12-354; deadline Feb. 20, 2013) and on 'Broadcast Television Spectrum Incentive Auction NPRM' (Proceeding #12-268; deadline Dec. 21, 2012). Comments can be left at the FCC Electronic Comment Filing System (ECFS). Simply search for the proceeding numbers that I have listed above.
  2. Write to both chairman Walden (R-OR) and your local Congressman and urge them to support a spectrum policy that more formally recognizes and includes unlicensed spectrum to support our economic growth, innovation, and global competitiveness as a nation. Include a copy of the Public Knowledge letter to voice your opinion.
Cheers,
Andrew von Nagy

2012-12-18 - Updated the 5 GHz band provisions based on reader comments and the subsequent research.

Thursday, December 13, 2012

IT for Oppression

We usually think of IT and the Internet in general as an open platform for freedom. But Bruce Schneier explains how it can be used as a tool to oppress:

I've been thinking a lot about how information technology, and the Internet in particular, is becoming a tool for oppressive governments. As Evgeny Morozov describes in his great book The Net Delusion: The Dark Side of Internet Freedom, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, and propaganda. And they're getting really good at it.

For a lot of us who imagined that the Internet would spark an inevitable wave of Internet freedom, this has come as a bit of a surprise. But it turns out that information technology is not just a tool for freedom-fighting rebels under oppressive governments, it's also a tool for those oppressive governments. Basically, IT magnifies power; the more power you have, the more it can be magnified in IT.

I think we got this wrong -- anyone remember John Perry Barlow's 1996 manifesto? -- because, like most technologies, IT technologies are first used by the more agile individuals and groups outside the formal power structures. In the same way criminals can make use of a technological innovation faster than the police can, dissidents in countries all over the world were able to make use of Internet technologies faster than governments could. Unfortunately, and inevitably, governments have caught up.

IT's new role: driving business growth (not just efficiency)

A survey by The Economist reveals shifting attitudes on the role that IT plays inside organizations:
The survey showed the highest-performing companies - those who reported their financial performance was stronger than their industry peers - identified a different role for IT in key areas of their business. [...]

While most IT departments are not yet widely seen driving business growth, the companies surveyed predict the role of IT will begin shifting "from tools of efficiency to engines of growth".

In the survey 60 percent of respondents reported that IT will be "very closely or somewhat closely involved" in helping to develop new products or services for the company over the next three years.

But as of today, very few business respondents successfully collaborated with IT on strategic business imperatives such as identifying new market opportunities (nine percent), identifying new innovations (six percent) or developing a competitive strategy (five percent).
Coming from a background in the retail industry, I can definitely agree: retailers are looking to develop new business models based on omni-channel integration and digital interaction with consumers due to strong growth and competition from e-commerce merchants.

I think this underscores a major shift we will see over the coming years: acceptance of internal IT organizations as business partners rather than service providers, with a more formal "seat at the decision-making table" directly influencing business decisions. This may result in a complete restructuring of IT organizations to better align with business units.

Cheers,
Andrew

Spectrum Policy Battle: Licensed vs Unlicensed

An interesting analysis by Dave Wright about the data tsunami hitting China Mobile:
After my 4th cup of coffee for the day, it finally dawned on me that I could calculate the 2nd half numbers for the last few years as a derivative from the interim reports. That provides a bit more granularity in terms of how cellular (mobile) and Wi-Fi (WLAN) traffic has been trending over the last two and a half years. Below are 2 charts that tell an interesting story.
It appears that Wi-Fi will bear the load of data traffic moving forward. Not really a big surprise.

But it does underscore the need for thoughtful spectrum policy moving forward. Two recent papers on spectrum allocation policy highlight the need to re-evaluate the current U.S. approach to spectrum allocation between licensed and unlicensed frequencies:

First, by Yochai Benkler:
[…] we may need to reverse our orientation from one that assumes that licensed and auctioned spectrum is the core, and open wireless a peripheral complement, to one that sees open strategies as the core, with important residual roles for licensed services.

Auctions designed purely to maximize revenue operate as a tax on American consumers who subsidize carriers' deployment of mobile broadband
Dave Wright also provides a good analysis of the Benkler paper here.

Second, by Richard Thanki:
innovation has been at least, if not more, marked in applications that, while aiming at mass consumer markets, were nevertheless able to use the freedom of unlicensed spectrum to develop, test and refine uses that have enriched the communications experience of individuals and the commercial performance of businesses.

Our contention is that regulators and legislators have the option now to reinforce the level of innovation and advancement in communications by recognising and supporting the use of unlicensed applications in areas of spectrum that are being liberated by technological advance [...]

First, the economic analysis needs to establish the potential benefit that can stand alongside the benefits that could accrue from a more conventional licensing approach. Second, regulators and governments need to be convinced of the power of these arguments, such that refraining from licensing becomes an important tool in the maximisation of value in spectrum rather than an approach for spectrum for which no other use can be found. Third, international cooperation will be needed if we are to secure the highest value from this approach.
So, is the trade-off one of short-term auction revenues from licensing spectrum to the highest bidder versus long-term economic growth through innovation across multiple industries, enabled by open unlicensed spectrum? It appears so.

What's better for our economy: a one-time revenue of ~$19 billion from spectrum auctions, or multiples of that in economic value created through unlicensed spectrum? Estimates range from $16 to $37 billion per year, and that only includes Wi-Fi in homes, Wi-Fi in hospitals, and RFID.

Unfortunately, it appears that GOP members of Congress are unwilling to acknowledge the longer term benefit of unlicensed spectrum and simply want an immediate payday from licensed auctions.

Monday, December 10, 2012

Design your WLAN for High Capacity

High-Density Wi-Fi Design Series:

The demand for high-capacity Wi-Fi networks continues to grow at an astonishing rate. The migration to 802.11n has taken enterprise Wi-Fi from an overlay to the existing wired network to the primary network connectivity method. And the upcoming 802.11ac standard promises to boost demand for Wi-Fi even higher. Users are increasingly adopting mobile devices that rely solely on Wi-Fi for network connectivity (when was the last time you left your desk without your laptop, tablet, or smartphone?). They also carry an average of 2-3 devices each, to work in the manner that suits them best depending on the situation.

This has spawned initiatives for consumerization of IT (corporate issued mobile devices) and BYOD (personally owned laptop and mobile devices) in many organizations. While the focus has shifted largely to supporting these devices on enterprise networks and securing the network and corporate data, organizations must also be aware of the need to re-assess Wi-Fi network performance.

But there’s a problem. Many Wi-Fi networks were never designed to handle the number of clients or the traffic load that we see on our networks today. Instead, they were designed in an era not so long ago when simply providing adequate signal strength and coverage was sufficient. Many organizations are quickly realizing that their existing WLAN deployments designed for basic coverage are no longer adequate to meet these growing demands, and that simply adding more access points is usually ineffective, often necessitating new network planning and design. These increasing demands have brought with them new requirements to effectively design and deploy high-capacity wireless networks. But where do you start?

Aerohive’s new High-Density Wi-Fi Design and Configuration Guide provides resources for engineers working with any vendor’s equipment to understand the factors that influence WLAN deployment success, and to begin designing WLAN networks that meet the demands placed upon them.

Aerohive High-Density Wi-Fi Design and Configuration Guide
(Click to download the PDF)

The design guide covers the following topics:

  • Requirements Gathering – These steps are critical to understanding the load and demand that will be placed on the network. We must know what our goal is before we can design to meet and exceed it! This includes requirements for the infrastructure, clients, applications, and forecasting the number of APs required to service the client population.
  • Network Planning and Design – This section details the factors that influence Wi-Fi network capacity, including spectrum capacity, channel planning, minimizing co-channel interference, working with unique facility characteristics, collocating APs to achieve higher capacity, and site surveying. A discussion of critical wired network design variables such as switch port bandwidth, PoE, subnet allocation, DHCP, and Internet bandwidth are also included.
  • Aerohive Network Configuration – Provides detailed recommendations for configuring an Aerohive Wi-Fi network for high capacity, including SSIDs, RADIUS integration, QoS, security, and radio settings.
  • Network Monitoring and Optimization – Managing a high performing wireless network does not stop once it is deployed. Ongoing maintenance and network optimization will ensure that the network continues to exceed performance expectations. This section details monitoring an Aerohive Wi-Fi network through the tools provided within HiveManager to tune network performance as needs change.
  • Appendix – The appendix contains useful worksheets to aid in the process of requirements gathering and forecasting capacity demands, as well as a configuration checklist for deploying the network.
One of the heavily stressed points in the document is the need for proper planning. Wi-Fi can be deceiving, because signal strength no longer guarantees a successful network. Proper Wi-Fi network design must take into account both the client and the infrastructure because airtime is a shared resource. The capabilities of your client population will directly impact the capacity and performance of your wireless network. Only by understanding your client population (or at minimum, making some educated assumptions) can your network be successful.

I put in some long hours and gave my blood, sweat, and tears to this document. I hope it proves valuable for anyone reading it, and translates into successful WLAN deployments.


Cheers,
Andrew

This post originally appeared on the Aerohive Blogs website.

Friday, December 7, 2012

Wi-Fi Fast Roaming is Here! (Finally)

It's been a long time coming, but industry support for standards-based fast roaming is finally starting to emerge! See my previous posts:

http://revolutionwifi.blogspot.com/2010/06/its-time-for-80211r.html

http://revolutionwifi.blogspot.com/2012/02/wi-fi-roaming-analysis-part-2-roaming.html

Apple iOS now officially supports IEEE 802.11r and 802.11k, which form the underpinnings of the Wi-Fi Alliance Voice-Enterprise certification for fast roaming. (Don't be confused by the word voice in the certification name. It is applicable to any Wi-Fi client that needs fast roaming capability at the driver level, for any application.)

This will help Apple mobile devices perform better in enterprise environments, especially when multimedia and collaboration tools are used with BYOD.

From Apple's website:
http://support.apple.com/kb/HT5535

"iOS 6 introduces support for optimized client roaming on enterprise Wi-Fi networks. The 802.11 Working Group standards k and r were conceived to give wireless clients the ability to more seamlessly roam from access point (AP) to access point within the same network.

802.11k
802.11k allows an iOS 6 device to quickly identify nearby APs that are available for roaming. When the signal strength of the current AP weakens and the iOS device needs to roam to a new AP, it will already know the best candidate AP with which to connect.

802.11r
When an iOS 6 device roams from one AP to another on the same network, 802.11r streamlines the authentication process using a feature called Fast Basic Service Set Transition (FT). FT allows iOS 6 devices to associate with APs more quickly. Depending on your Wi-Fi hardware vendor, FT can work with both preshared key (PSK) and 802.1X authentication methods.

Coupled with 802.11k's ability to quickly identify the target AP, FT's faster association method may enhance application performance and aims to provide a better Wi-Fi experience in iOS.

Additional Information
Not every Wi-Fi network hardware vendor currently supports 802.11k and 802.11r. Check with the manufacturer of your Wi-Fi hardware (controllers and APs) to determine if support is available. Once support for both standards is verified, 802.11k and FT functionality must be enabled. Setup methods vary; please consult the current configuration documentation for your Wi-Fi hardware for details."
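The neighbor-report idea behind 802.11k can be sketched as a simple candidate-selection step. This is a toy illustration only: the field names and RSSI threshold are invented for the example, and real client implementations weigh many more factors (channel load, band preference, PHY rates) than signal strength alone.

```python
# Toy sketch of 802.11k-style roaming: instead of scanning every channel
# at roam time, the client keeps a neighbor list learned from its current
# AP and picks the strongest usable candidate from that list.
# (Field names and the -70 dBm floor are illustrative assumptions.)

def pick_roam_candidate(neighbor_report, min_rssi_dbm=-70):
    """Return the best-known neighbor AP above a usable RSSI floor,
    or None if no neighbor is strong enough."""
    usable = [ap for ap in neighbor_report if ap["rssi"] >= min_rssi_dbm]
    return max(usable, key=lambda ap: ap["rssi"], default=None)

neighbors = [
    {"bssid": "aa:bb:cc:00:00:01", "channel": 36, "rssi": -62},
    {"bssid": "aa:bb:cc:00:00:02", "channel": 44, "rssi": -58},
    {"bssid": "aa:bb:cc:00:00:03", "channel": 1,  "rssi": -80},
]
print(pick_roam_candidate(neighbors)["bssid"])  # prints aa:bb:cc:00:00:02
```

Because the candidate is already known, the client skips most of the off-channel scanning that otherwise dominates roam time.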

Cheers,
Andrew

Tuesday, December 4, 2012

Satellite Provider Petitions FCC for Additional 2.4 GHz Wi-Fi Channel and Exclusive Use

Globalstar, a U.S. company that operates low-Earth orbit (LEO) satellites providing voice and low-speed data communications, petitioned the FCC for approval for terrestrial use of their assigned Big LEO spectrum. Globalstar seeks to develop, market, sell, and deploy a terrestrial FDD LTE service by combining their Lower (1610-1618.725 MHz, uplink transmissions) and Upper (2483.5-2495 MHz, downlink transmissions) Big LEO spectrum. However, coexistence hurdles make terrestrial FDD LTE a long-term goal.

In the short term, Globalstar requests authority for a Terrestrial Low Power Service (TLPS) in its Upper Big LEO spectrum, with the promise of more efficient spectrum usage. Part of its spectrum holdings (granted in 1995) is directly adjacent to the 2.4 GHz ISM (Industrial, Scientific, and Medical) band in which Wi-Fi operates. The TLPS service would effectively enable use of Wi-Fi channel 14 in the U.S., which would increase aggregate Wi-Fi capacity since this channel does not overlap with existing Wi-Fi channels.
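The non-overlap claim falls out of simple 2.4 GHz channel arithmetic, sketched below (a 22 MHz DSSS occupied bandwidth is assumed for the overlap check):

```python
# 2.4 GHz Wi-Fi channel math: channels 1-13 sit on a 5 MHz raster,
# while channel 14 (2484 MHz) is offset 12 MHz above channel 13.
# With a 22 MHz occupied bandwidth, channel 14 does not overlap
# channel 11, the highest channel permitted in the U.S. today.

def center_freq_mhz(channel):
    """Center frequency for 2.4 GHz channels 1-14."""
    if channel == 14:
        return 2484  # channel 14 is off the 5 MHz raster
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, width_mhz=22):
    """True if the two channels' occupied bandwidths overlap."""
    return abs(center_freq_mhz(ch_a) - center_freq_mhz(ch_b)) < width_mhz

for ch in (1, 6, 11):
    print(ch, overlaps(ch, 14))  # prints False for all three
```

Channel 14's lower edge (2484 - 11 = 2473 MHz) sits exactly at channel 11's upper edge (2462 + 11 = 2473 MHz), which is why the petition describes TLPS as occupying the 2473-2495 MHz segment.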

However, before you get too excited, realize that the use of this additional Wi-Fi channel would be exclusive to Globalstar.
Terrestrial Low-Power Service.  While the Commission and interested parties work through any technical issues described above, the Commission has an extraordinary opportunity to leverage Globalstar’s spectrum location in the 2.4 GHz band, the public’s prior investment in devices, and existing IEEE 802.11 technology to advance the rapid deployment of an innovative, pro-consumer TLPS.  Applying a unique, hybrid spectrum approach, Globalstar will provide this low-power broadband service over 22 megahertz of spectrum that includes both Globalstar’s exclusive terrestrial use spectrum at 2483.5-2495 MHz and adjacent unlicensed industrial, scientific and medical (“ISM”) spectrum at 2473-2483.5 MHz.  TLPS operations on this 2473-2495 MHz band segment are consistent with the 802.11 channelization scheme
This "hybrid" spectrum approach could have serious implications for spectrum policy in the U.S. The existing ISM band is an unlicensed band, open for use by anyone who meets certification criteria and testing. Globalstar effectively wants to retain exclusive use of its licensed spectrum (and rightly so) but couple it with unlicensed ISM spectrum to form this new service offering. The result would be the creation of a "licensed" Wi-Fi channel that could only be developed and deployed by Globalstar.

The marketing used by Globalstar attempts to spin this approach as increasing the nation's broadband spectrum inventory, increasing capacity in congested metropolitan areas, and providing critical public-safety benefits during natural disasters.
If the Commission implements the necessary regulatory reforms to enable this hybrid approach, it will add 22 megahertz to the nation’s wireless broadband spectrum inventory.  In addition, this spectrum will be put quickly to use as American consumers utilize their current 802.11-enabled devices to receive Globalstar-managed TLPS, which will help alleviate the increasing congestion that is impeding use of existing 802.11 ISM channels in dense metropolitan areas... In addition, TLPS deployments will deliver meaningful public safety benefits.  During disasters such as Hurricane Sandy, still-operating 802.11-based hotspots can provide broadband and voice communications to citizens in affected areas who otherwise lack access to communications services.  The addition of TLPS facilities will augment this important post-disaster resource.
However, I fail to see how this hybrid spectrum policy will accomplish these goals. Since Globalstar will retain exclusive use of its licensed spectrum (2483.5-2495 MHz), no additional spectrum will be added to the nation's broadband spectrum inventory. Despite Globalstar's plans for 20,000 TLPS hotspots, it currently has only 550,000 subscribers worldwide and would likely need to partner with other Wi-Fi hotspot providers such as telecom carriers and wireline broadband providers (AT&T, Time Warner, Cablevision, and Comcast, for example). Such business partnerships will prove difficult, as these existing providers will be reluctant to license and pay Globalstar for use of this additional spectrum. Finally, Globalstar's attempt to tie the TLPS service to public safety is erroneous, since it conflates Wi-Fi hotspot access with the company's existing Mobile Satellite Service (MSS). Public safety has benefited from MSS service when terrestrial wireline operator facilities have been knocked out, but the key there is the MSS backhaul facility, not Wi-Fi access on the ground through the TLPS service. Similar public-safety services can already be delivered using existing Wi-Fi over unlicensed spectrum.

I'm not a spectrum policy expert. But my opinion is that Globalstar wants everything and gives little in return. It wants hybrid use of unlicensed spectrum, coupled with its adjoining licensed spectrum holdings, to create an exclusive-use Wi-Fi channel. This goes against the provisions established for unlicensed spectrum and allows a single entity to profit from use of the spectrum while taking advantage of the open nature and existing deployment base of the Wi-Fi ecosystem. Moreover, this is simply a short-term plan until a long-term FDD LTE service can be deployed. Ultimately, no additional spectrum will be added for broadband use.

I can only see how this benefits Globalstar, harms unlicensed spectrum policy, and allows Globalstar to take advantage of the benefits of unlicensed spectrum by building an exclusive service on top of the Wi-Fi ecosystem.

I'm against it.

Andrew

Tuesday, July 31, 2012

Is WPA2 Security Broken Due to Defcon MS-CHAPv2 Cracking?

Quick answer - No. Read on to hear why.

A lot of press has been released this week surrounding the cracking of the MS-CHAPv2 authentication protocol at Defcon. For example, see these articles from Ars Technica and CloudCracker. All of these articles contain ambiguous and vague references to this hack affecting Wi-Fi networks running WPA2 security. Some articles even call for an end to the use of WPA2 authentication protocols such as PEAP that leverage MS-CHAPv2.

But they fail to paint a true and accurate picture of the situation and the impact to Wi-Fi networks. I think this is misleading, and that any recommendations to stop using PEAP are flat-out wrong!

So let's clarify things.

Is MS-CHAPv2 authentication broken? 
Answer - Based on what I've read, let's assume it is TOTALLY broken. You can read about the details in those other articles. But for the topic of this post, applicability to Wi-Fi networks, it really doesn't matter if it is broken or not.

What is the Impact to Wi-Fi Network Security?
Specifically, does this make much of an impact on Wi-Fi networks that employ 802.1X authentication methods built on MS-CHAPv2 (namely EAP-PEAPv0 and EAP-TTLS)?
Answer - No, it really does NOT. The impact is essentially zero.

* Update - Microsoft has released a security advisory which recommends the use of PEAP encapsulation to mitigate this attack against un-encapsulated MS-CHAPv2 authentication. 

Let me explain why.

EAP Tunneled Authentication Protocols
MS-CHAPv2 is only used in what we call "tunneled authentication protocols," which includes EAP-PEAPv0 and EAP-TTLS. These EAP protocol specifications acknowledge that many insecure and legacy authentication methods need protection and should not be used on their own. They deal with that by wrapping the insecure protocol inside of another, much more secure, TLS tunnel. Hence, these protocols are called "tunneled authentication protocols."

This tunneling relies on asymmetric cryptography through the use of X.509 certificates installed on the RADIUS server, which are sent to the client device to begin connection setup. The client verifies the certificate is valid (more on that in a second), establishes a TLS tunnel with the server, and begins using symmetric key cryptography for data encryption. Once the TLS tunnel is fully formed, the client and server use the less secure protocol such as MS-CHAPv2 to authenticate the client. This exchange is fully encrypted using the symmetric keys established during tunnel setup. The session switches from asymmetric to symmetric key cryptography because symmetric ciphers are far less computationally expensive. This is fundamentally the same method used for HTTPS sessions in a web browser.
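The two phases can be sketched from the client's point of view using Python's standard `ssl` module. This is a minimal illustration of the policy, not any vendor's supplicant code; the `send_inner_auth` helper and its arguments are invented for the example.

```python
import ssl

# Sketch of the client-side policy in a tunneled EAP method such as PEAP.
# Phase 1 builds the outer TLS tunnel; phase 2 runs the inner (weaker)
# authentication exchange entirely inside that tunnel.

# Phase 1: outer TLS tunnel. The RADIUS server's X.509 certificate is
# validated against the trust store (asymmetric cryptography), after
# which a negotiated symmetric session key encrypts all further traffic.
context = ssl.create_default_context()
context.check_hostname = True            # the server's name must match
context.verify_mode = ssl.CERT_REQUIRED  # untrusted certificates are rejected

def send_inner_auth(tls_socket, credential_bytes):
    # Phase 2: the inner MS-CHAPv2 exchange travels only inside the
    # established tunnel, so an eavesdropper captures ciphertext rather
    # than the raw challenge/response handshake.
    tls_socket.sendall(credential_bytes)
```

The important design point is that the weak protocol is never exposed on the air: everything an attacker can capture from phase 2 is already encrypted under the phase 1 session keys.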

Here is a reference ladder diagram of PEAP authentication which highlights the different phases of the connection process (outer TLS tunnel setup and inner MS-CHAPv2 authentication).

PEAP Ladder Diagram
So, MS-CHAPv2 is not used natively for Wi-Fi authentication. We're safe, right? Only if it is implemented properly.

Importance of Mutual Authentication
The key link in this chain, then, is the mutual authentication between the RADIUS server and the wireless client. The client must properly validate the RADIUS server certificate first, prior to sending its credentials to the server. If the client fails to properly validate the server, then it may establish an MS-CHAPv2 session with a fake RADIUS server and send its credentials along, which could then be cracked using the exploit that was shown at Defcon. This is commonly referred to as a Man-in-the-Middle attack, because the attacker is inserting their RADIUS server in the middle of a conversation between the client and the user database store (typically a directory server).
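The validation decision the client must make can be reduced to a small sketch. This is a toy model, not a real supplicant: the function name, fields, and wildcard matching are illustrative assumptions, and it presumes the certificate chain check has already been done separately.

```python
import fnmatch

# Toy sketch of the server-validation decision a Wi-Fi supplicant makes
# before releasing credentials. Both conditions must hold: the chain
# must be trusted AND the certificate's Common Name must match one of
# the server names configured in the client's profile.

def radius_server_trusted(cert_common_name, allowed_names, chain_trusted):
    """Allow the inner authentication to proceed only for a trusted,
    correctly named RADIUS server."""
    if not chain_trusted:
        return False  # certificate does not chain to a trusted root
    return any(fnmatch.fnmatch(cert_common_name, pattern)
               for pattern in allowed_names)

# A legitimate corporate server passes; an attacker's server fails,
# even if its certificate chains to some trusted public CA.
print(radius_server_trusted("radius1.corp.example.com",
                            ["*.corp.example.com"], True))   # prints True
print(radius_server_trusted("evil.attacker.net",
                            ["*.corp.example.com"], True))   # prints False
```

The second case is exactly the Man-in-the-Middle scenario: chain trust alone is not enough, because anyone can buy a valid certificate for a name they control, which is why the profile must also pin the expected server names.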

RADIUS Server Validation and Exposure to Attack
The RADIUS server is validated as long as the certificate it sends is trusted. On most client platforms, the manufacturer pre-installs trusted certificates for public Certificate Authorities and PKI systems (such as Verisign, Thawte, and Entrust) in the device's certificate store or keychain. In corporate environments, administrators can also deploy certificates to managed devices in a number of ways to establish trust for private Certificate Authorities and PKI systems. The most common methods are Group Policy Objects (GPOs) for Microsoft clients and Lion Server Profile Manager or the iPhone Configuration Utility (iPCU) for Apple clients (including OS X and iOS devices).

Enabling Server Certificate Validation on Clients
In Windows, RADIUS server validation is defined within each SSID profile. On a Windows 7 workstation, view the SSID properties, select the Security tab, and open the PEAP settings. Enable server validation, specify valid server names (which are checked against the Common Name, or CN, in the server certificate presented to the client), restrict which Root CAs the server certificate may be issued from, and prevent the system from prompting users to accept untrusted certificates (important, because otherwise users could unknowingly accept a bad certificate and connect to an attacker's RADIUS server).

Windows RADIUS Server Validation

On Apple devices, including OS X Lion, Mountain Lion, and iOS, use Lion Server Profile Manager or the iPCU to define a configuration profile which includes credentials and a Wi-Fi policy. I'll show the iPCU in this example. First, add the Root CA certificate into the "Credentials" section. Next, define a Wi-Fi policy which specifies the trusted certificates and certificate names allowed.

Apple RADIUS Server Validation

Client Behavior for Server Validation
In both vendor implementations, the behavior of the client device is dictated by what policy has been defined on the system.

If no policy for the SSID has been defined or pushed to the client device by an administrator, the default behavior is to prompt the user to validate the certificate. This is not ideal, since users typically have a hard time understanding what a certificate means and whether or not they should proceed. For example, when an Apple iPhone attempts to join a network for which no profile has been deployed, the user receives a prompt to accept the connection and proceed:

iPhone Certificate Prompt

Therefore, for all corporate 802.1X environments, it is recommended to push profiles for all 802.1X SSIDs that end users need on their systems. This goes for both production access and BYOD scenarios. When a profile has been installed for a specific SSID, Windows 7, OS X Lion / Mountain Lion, and iOS devices check the local certificate store or keychain to validate the RADIUS server certificate, which must also match the Root CAs and server names specified in the deployed profile. In the event that an untrusted certificate is presented, none of these systems will prompt the user and the connection is rejected. For example, here is a rejected connection on an Apple iPhone for an SSID whose profile was deployed by an administrator:

iPhone Certificate Rejection

Outstanding Vulnerabilities
You should still be aware of a few indirectly related vulnerabilities that have not yet been resolved relating to Wi-Fi authentication with 802.1X.

First, the default behavior of all systems (especially personal devices) is to prompt users to validate the RADIUS server certificate. This is often confusing and can lead to bad actions being taken by users and attempted authentication through an attacker's RADIUS server. This can be mitigated by having corporate environments deploy configuration profiles for all SSIDs in their network, both production and BYOD. Don't fall into the trap for BYOD of letting users connect on their own and try to decipher the certificate prompt. Establish a sound personal device on-boarding process which deploys a configuration profile to the device upon successful enrollment and policy acceptance. There are numerous ways to do this, ranging from simple solutions such as sending them a profile in an email or providing a web URL where users can download the profile, to more complex solutions such as MDM integration that allow self-registration and zero IT involvement.

Second, certificate binding to the SSID is still a manual process on wireless networks. It must be defined within the configuration profile. This is in contrast to the SSL and TLS protocols used for secure web access, where the end-user system can automatically verify that the FQDN within the URL matches the Common Name presented in the certificate. The manual binding process in Wi-Fi networks stems from a lack of extensibility within the PKI system to handle network access scenarios such as this. A better solution is needed.

Finally, Wi-Fi clients cannot perform certificate revocation checking, since they do not yet have a network connection with which to query a CRL distribution point or use OCSP. This means that client devices cannot check the status of the presented server certificate to see if it has been revoked, as can happen when valid certificates are subsequently compromised or when certificates are invalidly issued by a CA. However, there is hope that the forthcoming 802.11u extensions to Wi-Fi can provide the means for this to occur through message exchanges prior to full network connection (thanks to Christopher Byrd for pointing this out to me during a Twitter conversation).


Revolution or Evolution? - Andrew's Take
We've known that MS-CHAPv2 is an insecure protocol for a long time. The recent Defcon exploit has just taken that one step further. The designers of modern Wi-Fi security recognized the value in reusing legacy protocols such as these, so the EAP methods that employ them were designed to tunnel the insecure protocol within a much more robust protocol such as TLS. These "tunneled authentication protocols," such as PEAP, protect the insecure protocols through the use of certificates.

The onus for proper security then falls on RADIUS server validation to ensure the other end of the connection is trusted before allowing the client authentication to proceed. In a properly implemented wireless network, this MS-CHAPv2 exploit is a non-issue.

There is no need for Wi-Fi network administrators to abandon PEAP. Period.

Security is a complex field. It may be hard to distinguish the FUD from fact. If you're interested in learning more about Wi-Fi security, then I highly recommend engineers take training provided in the CWSP (Certified Wireless Security Professional) course offered by CWNP, Inc. or the SEC-617 (Wireless Ethical Hacking, Penetration Testing, and Defenses) course offered by the SANS Institute.

Cheers,
Andrew vonNagy