Tuesday, July 31, 2012

Is WPA2 Security Broken Due to Defcon MS-CHAPv2 Cracking?

Quick answer - No. Read on to hear why.

A lot of press has been released this week surrounding the cracking of the MS-CHAPv2 authentication protocol at Defcon. For example, see these articles from Ars Technica and CloudCracker. All of these articles contain ambiguous and vague references to this hack affecting Wi-Fi networks running WPA2 security. Some articles even call for an end to the use of WPA2 authentication protocols, such as PEAP, that leverage MS-CHAPv2.

But they fail to paint a true and accurate picture of the situation and the impact to Wi-Fi networks. I think this is misleading, and that any recommendations to stop using PEAP are flat-out wrong!

So let's clarify things.

Is MS-CHAPv2 authentication broken? 
Answer - Based on what I've read, let's assume it is TOTALLY broken. You can read about the details in those other articles. But for the topic of this post, applicability to Wi-Fi networks, it really doesn't matter if it is broken or not.

What is the Impact to Wi-Fi Network Security?
Specifically, does this make much of an impact for Wi-Fi networks where 802.1X authentication is employed and MS-CHAPv2 is used as the inner method (namely EAP-PEAPv0 and EAP-TTLS)?
Answer - No, it really does NOT. The impact is essentially zero.

* Update - Microsoft has released a security advisory which recommends the use of PEAP encapsulation to mitigate this attack against un-encapsulated MS-CHAPv2 authentication. 

Let me explain why.

EAP Tunneled Authentication Protocols
MS-CHAPv2 is only used in what we call "tunneled authentication protocols," which includes EAP-PEAPv0 and EAP-TTLS. These EAP protocol specifications acknowledge that many insecure and legacy authentication methods need protection and should not be used on their own. They deal with that by wrapping the insecure protocol inside of another, much more secure, TLS tunnel. Hence, these protocols are called "tunneled authentication protocols."

This tunneling relies on asymmetric cryptography through the use of X.509 certificates installed on the RADIUS server, which are sent to the client device to begin connection setup. The client verifies the certificate is valid (more on that in a second), establishes a TLS tunnel with the server, and begins using symmetric key cryptography for data encryption. Once the TLS tunnel is fully formed, the client and server use the less secure protocol, such as MS-CHAPv2, to authenticate the client. This exchange is fully encrypted using the symmetric keys established during tunnel setup. The encryption switches from asymmetric to symmetric key cryptography because symmetric ciphers are much faster, easing processing load and improving performance. This is fundamentally the same method used for HTTPS sessions in a web browser.
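As a rough illustration of the two phases (a toy sketch in Python, not real TLS or MS-CHAPv2 — the key derivation and cipher here are stand-ins for the actual TLS key schedule), phase 1 yields a shared session key, and phase 2 sends the inner authentication exchange only as ciphertext derived from that key:

```python
import hashlib
import hmac
import os

def derive_session_key(master_secret: bytes, label: bytes) -> bytes:
    # Toy stand-in for the key derivation performed during TLS tunnel setup.
    return hmac.new(master_secret, label, hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR against a SHA-256 keystream (illustration only).
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Phase 1: the TLS handshake (asymmetric crypto, elided here) produces a
# master secret known only to the client and the RADIUS server.
master_secret = os.urandom(48)
session_key = derive_session_key(master_secret, b"peap inner auth")

# Phase 2: the MS-CHAPv2 exchange crosses the air only as ciphertext.
inner_auth = b"MS-CHAPv2 challenge/response"
on_the_wire = xor_stream(session_key, inner_auth)
assert on_the_wire != inner_auth                           # eavesdropper sees ciphertext
assert xor_stream(session_key, on_the_wire) == inner_auth  # the server recovers it
```

The point is simply that the exchange the Defcon attack targets never appears in the clear to an over-the-air observer.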

Here is a reference ladder diagram of PEAP authentication which highlights the different phases of the connection process (outer TLS tunnel setup and inner MS-CHAPv2 authentication).

PEAP Ladder Diagram
So, MS-CHAPv2 is not used natively for Wi-Fi authentication. We're safe, right? Only if it is implemented properly.

Importance of Mutual Authentication
The key link in this chain is the mutual authentication between the RADIUS server and the wireless client. The client must properly validate the RADIUS server certificate before sending its credentials to the server. If the client fails to properly validate the server, it may establish an MS-CHAPv2 session with a fake RADIUS server and send its credentials along, which could then be cracked using the exploit shown at Defcon. This is commonly referred to as a Man-in-the-Middle attack, because the attacker inserts their RADIUS server into the middle of the conversation between the client and the user database store (typically a directory server).
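The same "validate before you trust" posture shows up in everyday HTTPS clients. As a sketch, Python's standard-library `ssl` module defaults to exactly the checks a wireless supplicant should apply to a RADIUS server: require a certificate, verify its chain against trusted roots, and check the server name.

```python
import ssl

# A strict client policy: require the server certificate, verify its chain
# against trusted roots, and match the server name -- conceptually the same
# checks a supplicant should apply to the RADIUS server certificate.
context = ssl.create_default_context()   # loads the system trust store
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# For a private PKI, trust only the corporate root CA instead
# (the file path here is hypothetical):
# context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# context.load_verify_locations("corp-root-ca.pem")
```

A client that weakens either setting is in the same position as a supplicant that skips RADIUS server validation.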

RADIUS Server Validation and Exposure to Attack
The RADIUS server is validated as long as the certificate it sends is trusted. For most client platforms, trusted certificates for public Certificate Authorities and PKI systems (such as Verisign, Thawte, Entrust, etc.) are provided by the manufacturer and held in the certificate store or keychain on the device. In addition, for corporate environments, administrators can deploy certificates to managed devices in a number of different ways to enable trust for private Certificate Authorities and PKI systems. The most common methods are Group Policy Objects (GPO) for Microsoft clients and Lion Server Profile Manager or the iPhone Configuration Utility (iPCU) for Apple clients (including OS X and iOS devices).

Enabling Server Certificate Validation on Clients
In Windows, RADIUS server validation is defined within each SSID profile. On a Windows 7 workstation, view the SSID properties, select the Security tab, and go into the PEAP settings. Enable server validation, specify valid server names (which are checked against the Common Name (CN) in the server certificate presented to the client), restrict which Root CAs the server certificate can be issued from, and prevent the system from prompting users to accept untrusted certificates (important, because otherwise users could unknowingly accept a bad certificate and connect to an attacker's RADIUS server).
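Conceptually, those settings reduce to a pure accept-or-reject decision with no user prompt. Here's a minimal sketch of that logic in Python (all server and CA names are hypothetical, for illustration only):

```python
def radius_cert_acceptable(cert_cn: str, issuer_root: str, policy: dict) -> bool:
    """Mimic the Windows PEAP checks: trusted Root CA, allowed server
    name, and silent rejection (no user prompt) on any failure."""
    if issuer_root not in policy["trusted_roots"]:
        return False                      # issued by an unapproved Root CA
    if cert_cn not in policy["allowed_server_names"]:
        return False                      # CN doesn't match a valid server name
    return True

# Hypothetical deployed policy
policy = {
    "trusted_roots": {"Corp Root CA"},
    "allowed_server_names": {"radius1.corp.example", "radius2.corp.example"},
}

assert radius_cert_acceptable("radius1.corp.example", "Corp Root CA", policy)
assert not radius_cert_acceptable("radius1.corp.example", "Evil CA", policy)   # wrong root
assert not radius_cert_acceptable("attacker.example", "Corp Root CA", policy)  # wrong name
```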

Windows RADIUS Server Validation

In Apple devices, including OS X Lion, Mountain Lion, and iOS, use the Lion Server Profile Manager or iPCU to define a configuration profile which includes credentials and a Wi-Fi policy. I'll use the iPCU in this example. First, add the Root CA certificate into the "Credentials" section. Next, define a Wi-Fi policy which specifies the trusted certificates and certificate names allowed.

Apple RADIUS Server Validation

Client Behavior for Server Validation
In both vendor implementations, the behavior of the client device is dictated by what policy has been defined on the system.

If no policy for the SSID has been defined or pushed to the client device by an administrator, the default behavior is to prompt the user to validate the certificate. This is far from ideal, since users typically have a hard time judging what a certificate prompt means and whether or not they should proceed. For example, when an Apple iPhone attempts to join a network for which no profile has been deployed, the user receives a prompt to accept the connection and proceed:

iPhone Certificate Prompt

Therefore, for all corporate 802.1X environments, it is recommended to push profiles for all 802.1X SSIDs that end users need on their systems. This goes for both production access and BYOD scenarios. When a profile has been installed for a specific SSID, Windows 7, OS X Lion / Mountain Lion, and iOS devices check the local certificate store or keychain to validate the RADIUS server certificate, which must also match the Root CAs and server names specified in the deployed profile. In the event that an untrusted certificate is presented, all of these systems will NOT prompt the user, and the connection is rejected. For example, here is a rejected connection on an Apple iPhone for an SSID that has had a profile deployed by an administrator:

iPhone Certificate Rejection

Outstanding Vulnerabilities
You should still be aware of a few indirectly related vulnerabilities in 802.1X Wi-Fi authentication that have not yet been resolved.

First, the default behavior of all systems (especially personal devices) is to prompt users to validate the RADIUS server certificate. This is often confusing and can lead users to make poor decisions and attempt authentication through an attacker's RADIUS server. It can be mitigated by having corporate environments deploy configuration profiles for all SSIDs in their network, both production and BYOD. For BYOD, don't fall into the trap of letting users connect on their own and try to decipher the certificate prompt. Establish a sound personal-device on-boarding process which deploys a configuration profile to the device upon successful enrollment and policy acceptance. There are numerous ways to do this, ranging from simple solutions, such as sending users a profile in an email or providing a web URL where they can download it, to more complex solutions, such as MDM integration that allows self-registration and zero IT involvement.

Second, certificate binding to the SSID is still a manual process on wireless networks. It must be defined within the configuration profile. This is in contrast to SSL and TLS protocols that are used for secure web access where the end-user system can automatically verify if the FQDN within the URL matches the Common Name presented in the certificate. The manual binding process in Wi-Fi networks is born out of a lack of extensibility within the PKI system to handle network access scenarios such as this. A better solution is needed.
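For contrast, here is a simplified sketch of the automatic name check a browser performs for HTTPS, matching the URL's host against the certificate name (real implementations follow RFC 2818 / RFC 6125 and are stricter). Wi-Fi has no URL to match against, which is why the binding must be configured by hand in the profile:

```python
def name_matches(cert_name: str, requested_host: str) -> bool:
    """Simplified browser-style certificate name check: exact match,
    or a wildcard covering exactly one leftmost label."""
    cert_name = cert_name.lower()
    requested_host = requested_host.lower()
    if cert_name.startswith("*."):
        parts = requested_host.split(".", 1)
        return len(parts) == 2 and parts[1] == cert_name[2:]
    return cert_name == requested_host

assert name_matches("www.example.com", "www.example.com")
assert name_matches("*.example.com", "radius.example.com")
assert not name_matches("*.example.com", "example.com")   # wildcard needs a label
```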

Finally, certificate revocation checking cannot be performed by Wi-Fi clients, since they do not yet have a network connection with which to query a CRL distribution point or use OCSP. This means that client devices cannot check whether the presented server certificate has been revoked, as happens when a valid certificate is subsequently compromised or a CA issues a certificate improperly. However, there is hope that the forthcoming 802.11u extensions to Wi-Fi can provide the means for this to occur through message exchanges prior to full network connection (thanks to Christopher Byrd for pointing this out to me during a Twitter conversation).

Revolution or Evolution? - Andrew's Take
We've known that MS-CHAPv2 is an insecure protocol for a long time. The recent Defcon exploit has just taken that one step further. The designers of modern Wi-Fi security recognized the value in reusing legacy protocols such as these, so the EAP methods that employ them were designed to tunnel the insecure protocol within a much more robust one such as TLS. These "tunneled authentication protocols," such as PEAP, protect the insecure inner protocols through the use of certificates.

The onus for proper security then falls on RADIUS server validation to ensure the other end of the connection is trusted before allowing the client authentication to proceed. In a properly implemented wireless network, this MS-CHAPv2 exploit is a non-issue.

There is no need for Wi-Fi network administrators to abandon PEAP. Period.

Security is a complex field. It may be hard to distinguish the FUD from fact. If you're interested in learning more about Wi-Fi security, then I highly recommend engineers take training provided in the CWSP (Certified Wireless Security Professional) course offered by CWNP, Inc. or the SEC-617 (Wireless Ethical Hacking, Penetration Testing, and Defenses) course offered by the SANS Institute.

Andrew vonNagy

"Hotspot Zones" Born from Muni-Fi Failures and Hotspot Successes

Remember all those failed Muni-Fi (or Metro Wi-Fi) deployments in the mid-2000's? Most of them failed due to a combination of factors: poor business planning with no anchor tenancy established, advertising revenue that couldn't foot the bill, overly optimistic Wi-Fi designs that called for far too few wireless APs (and subsequently delivered poor performance), and difficulty obtaining access rights to utility poles for AP power and backhaul, which stalled deployments. (Read about all that over here; Minneapolis seems to be one bright spot in all that.)

Now, Muni-Fi appears to be making a bit of a comeback, although everyone has ditched that terminology, likely to avoid the negative connotation that now comes with it :) And not without just cause, as the focus has shifted from providing pervasive coverage over entire cities to focused coverage in specific public areas where it makes sense. In some respects, I consider these new deployments a hybrid between Muni-Fi (pervasive public-area coverage) and hotspots (coverage in a specific location, typically tied to an establishment such as a coffee shop). I'd call them "Hotspot Zones".

AT&T and other carriers have installed hotspot zones in a few major urban areas, such as Manhattan and San Francisco. And Wi-Fi is even being installed in old phone booths in NYC to ease rights-of-way issues.

LightReading describes this initiative:

New York City's plan to house hot spots in pay phones to provide free Wi-Fi service in the Big Apple illustrates just how much wireless LAN has become part of everyday life even as the public telephone system has become a thing of the past. 
The city is working with pay-phone companies Titan and Van Wagner to install the hot spots, according to NY1 TV.

And the folks at Smart Wi-Fi provide insight on the economics of deployment in old phone booths:
Obviously phone booths are connected via the two wires needed for a POTS line, which could easily be augmented with a DSL session. Along with power, the locations can easily be re-born to a ‘smartphone’ world. Maybe we can dub these “smartphone booths”.
Putting Wi-Fi access points into un-used phone banks makes some sense, particularly with the new Next Generation Hotspot (NGH) and Hotspot 2.0 initiatives. A company can use their right-of-way to install and maintain the access points while changing back-end service providers (Boingo, iPass, AT&T, …) to enable their subscribers’ access.
What's the difference this time around?

First, and most importantly, the deployment model has shifted from "large-scale" Wi-Fi installations trying to light up entire cities and communities, to much more focused "hotspot-zones" where Wi-Fi is deployed in areas that make sense from a public and consumer point-of-view.

Second, Wi-Fi is much more accessible and mobile than it was 7-10 years ago. Users no longer need to carry around laptops to access Wi-Fi hotspots; they have much more convenient access from smartphones and tablets. That should help increase adoption, improve the advertising-funded free Wi-Fi business model, and help these hotspot zones succeed.

Finally, Wi-Fi technology has dramatically improved with 802.11n, offering better speeds and rate-over-range for customer devices, translating into better service quality and perception.

It's all about the business model folks :) Okay - and some technology too.


Monday, July 30, 2012

Vendors Need To Focus on Creating Value for Customers and Making Great Products

Have you heard of ‘Maslow’s Hammer’? This phrase originated with Abraham Maslow in his book, The Psychology of Science, in 1966. The saying goes something like this:

If the only tool you have is a hammer, everything looks like a nail.

Sometimes, actually quite often, technology vendors seem to fall into this trap: they get so enamored with what they’ve already built, which filled a previous need, that they forget customer needs are always changing and their solutions need to continue to adapt. (BTW, I hate to throw around the term ‘innovation’ in such scenarios. I believe true innovation is rare, and the term gets used way too often. So, I’ve dropped innovation from my vocabulary… mostly.)

Oftentimes this adaptation requires critical thinking and a fundamentally new approach in order to solve a new problem. But change is hard. Vendors who get stuck in this trap take the path of least resistance, which often means incremental improvement of existing products that only half-solve the problem. There could be any number of reasons for this: too few development resources, constraints of the existing product (e.g. having painted themselves into a corner), time to market, lack of staff expertise to truly understand and solve the problem (which is ultimately just a failure to invest and hire additional resources), or possibly the worst one of all – market dominance, where short-term sales likely won’t suffer. However, long-term sales will definitely suffer if this problem becomes systemic and vendors continue to lag behind competitors. For reference, look at what happened to DEC, Xerox, and IBM (twice), and what is currently happening to both HP and Microsoft if they don’t pull it together.

This quote from Steve Jobs – The Lost Interview touched on precisely this:

If you were a product person at IBM or Xerox, so you make a better copy or a better computer, so what? When you have a monopoly market share the company is not any more successful. So the people that can make the company more successful are sales and marketing people and they end up running the companies and the product people get driven out of decision making forums. And the companies forget what it means to make great products.

So how does this relate to Wi-Fi? The rapid market evolution and change is hard to deny in the network industry these days, and the rate of change is especially staggering in Wi-Fi. Demands for new solutions and capabilities to enable both businesses and consumers abound.

Unfortunately, some vendors shoe-horn solutions into existing products in ways that just don’t make sense for customers. Essentially, they view the problem as a nail so they can use their existing hammer (instead of realizing the problem is really a bolt and they need to develop a bolt-driver). What do I mean? Here are some examples of late:
  • Meraki Layer 3 Roaming – using their MX firewall security appliances to facilitate Wi-Fi layer 3 roaming. Why would I want to hitch my WLAN architecture and design to a firewall appliance? They have completely different objectives and scopes of deployment in the network. What this really does is allow Meraki to list layer 3 roaming on a feature sheet and try to sell more firewalls.
  • Cisco Bonjour Support – relies on existing multicast support and VLAN Select to facilitate Apple Bonjour service discovery across subnet boundaries. There are many reasons why I don’t like this implementation approach: it requires multicast expertise to enable Bonjour in the network; it limits Bonjour sources to a single VLAN rather than allowing any-to-any connectivity across subnets (tell me, what organization can aggregate all Bonjour devices that should be accessed into a single VLAN? Very few outside small networks); it lacks Bonjour protocol intelligence, which means it can’t filter services or tie access to defined policy; and it requires all Bonjour subnets to flow through the controller, which prevents edge wired subnets throughout the network from accessing Bonjour services (and edge wireless subnets if using Cisco’s FlexConnect / H-REAP architecture).

Do these implementations add features that they can list in their product marketing material? Yes.
Do these implementations really address customer needs or create value for customers? Only partially, in my opinion.

Hopefully this post will enlighten customers on this issue and make clear the need to thoroughly examine vendor offerings and their approach to solving your needs. Don’t simply take vendor marketing material at face value. Sure, a vendor may have a check box next to a feature buried in a laundry list of other features, but that abstracts away how they implement the feature, how robust the capability actually is, and what caveats and drawbacks exist. This is NOT a land of black and white, folks. Lines are blurred, shades of gray exist, and deciphering all this to make an informed decision is tough.

As a customer, you will not be able to go into the depth required on every product feature to make a 100% informed decision. But you SHOULD be able to identify the critical features for your specific environment. Go into depth on those features, find out the nitty-gritty details, and make a decision that makes sense for your business. Vendors have various strengths and weaknesses, areas where they are leading the market and areas where they provide only the basics. Often a customer decision comes down to a few key features that really make a difference and create value for their business. Focus on those items.

My point here is NOT to call out any one vendor on this topic; I think every vendor is guilty of it to some degree. Rather, it is to call on vendors to step up their game; to focus on creating value for customers rather than selling them a bill of goods. I don’t think this is always happening, and quite frankly it’s frustrating to see poorly implemented solutions being sold to customers. It’s likely that this will never be eliminated – market competition and dynamics dictate that some level of this will continue. But customer influence and buying pressure can minimize the frequency.

In short:
Vendors –  let’s build great products!
Customers – let’s be informed to ensure the solutions purchased create value relative to the business needs!

Andrew vonNagy

Full Disclosure
Most of my readers should know that I work for a vendor now, Aerohive Networks. If you didn’t, now you do. Are we perfect? By no means. But I do think that Aerohive has a greater focus on creating value for customers and solving their business needs than most other vendors (e.g. Bonjour Gateway for BYOD, MDM integration for mobile device support whether BYOD or corporate issued, TeacherView for education, SIP2 integration for libraries).

Am I biased? Yes, partially. But I also believe that everyone is biased for or against certain products or companies based on their experiences.

Am I independent? Yes, partially. But I form my opinions based on my experiences as an engineer, not as a vendor drone who recites marketing material verbatim.

I only speak or write what I truly believe to be true based on my own personal analysis. I try to be as objective as possible, with sound evidence and engineering behind my positions. And I’m open to discussion, which I think is key. I will change my opinion if sufficiently convinced by evidence (not conjecture). I will admit when I’m wrong. Heck, I’m wrong all the time. But that realization and openness to change on a personal level is why I still consider myself of independent thought. I hope you value my opinion, whether you agree or disagree with my positions. And if you disagree, let’s have a meaningful discourse on the topic.

Monday, July 16, 2012

Education Institutions want Apple Bonjour Improvements

Members of the Educause community are drafting a petition to Apple for improvements in Bonjour network services for enterprise environments with larger and more complex networking needs than consumer home networks. Specifically, they want the applications to work across routed layer 3 networks natively instead of being limited to the local subnet.

A well-respected blogger, Lee H. Badman, who works for Syracuse University, writes about the petition over on Network Computing:

BYOD hype tends to center on the likes of smartphones and tablets. These self-owned devices usually do just fine on the network from the technical perspective, while causing a policy ruckus that has spawned new product markets. But Apple has a family of popular devices and protocols that are decidedly sub-par enterprise network clients, and higher education network administrators want Apple to provide some relief. 
Apple has built this market niche on the extremely limited Bonjour protocol, which is non-routable and extremely difficult to scale and administer on large wireless networks. Users want to make use of these very slick living-room-oriented devices at work, as they have a lot of potential cool uses. Network admins want to help, but not at the expense of wholesale network redesign. 
So what's Apple's answer thus far to individual pleas for a change in paradigm? Find a workaround.

Solutions exist out in the market, but only from 3rd-party Wi-Fi vendors. Aerohive Networks is leading the charge with the only currently available solution, Bonjour Gateway. Aruba Networks has announced a solution with AirGroup, but it will not be available for a while yet (rumored for September 2012, revised from an earlier Q1 2013 estimate).

Apple has turned into a computing behemoth with the rapid adoption of their iOS and OS X platforms in enterprise environments. Whether it's BYOD or corporate issued mobile devices, enabling services like Bonjour across the enterprise network in a simple and scalable fashion will only become of greater importance. 

It will be interesting to see how Apple's growth in the enterprise, coupled with the greater demands placed on devices in such environments, will be reflected in Apple's product development moving forward. Will they listen to the Educause community and the larger enterprise demands? Or will they rely on 3rd-party vendor partners to continue to pick up the slack?


P.S. - Updated the AirGroup release timing information based on feedback from Aruba.

Monday, July 9, 2012

IPv6 Unicast Address Allocation

Continuing in Chapter 2 of “Deploying IPv6 Networks” (Cisco Press, 2005)

In my last post, I provided an overview of IPv6 unicast addressing, which includes link-local addresses (LLAs), unique-local addresses (ULAs), and global unicast addresses (GUAs). As previously described, GUAs must be globally unique and globally routable. GUAs are identified by the 3 high-order bits set to ‘001’ (2000::/3), a range spanning the 2000::/4 and 3000::/4 blocks.
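A quick way to sanity-check that range is with Python's standard-library `ipaddress` module: an address falls in the GUA space exactly when its top three bits are 001, i.e. when it lies within 2000::/3:

```python
import ipaddress

GUA_BLOCK = ipaddress.ip_network("2000::/3")

def in_gua_range(addr: str) -> bool:
    # Top three bits 001 <=> the address is within 2000::/3.
    return ipaddress.ip_address(addr) in GUA_BLOCK

assert in_gua_range("2001:db8::1")     # documentation prefix, inside 2000::/3
assert not in_gua_range("fe80::1")     # link-local address (LLA)
assert not in_gua_range("fd00::1")     # unique-local address (ULA)
```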

In this post, we will dive a bit deeper into GUA address allocation policies. Strong IPv6 address allocation and prefix aggregation policies must be implemented and enforced to avoid slowing the routing process with longer lookups (remember that modern 64-bit processors require 4 cycles to process IPv6 source and destination addresses, compared to 1 cycle for IPv4 addresses), to limit global routing table size (which is already becoming a problem with IPv4), and to keep routing updates between peers manageable, because many more networks are possible with the increased address space.

Therefore, strong IPv6 allocation and prefix aggregation policies are meant to provide efficiencies by reducing the global routing table size and keeping routing updates to a manageable size. Cost of routing hardware also likely plays a large part in this equation, since routers with enough horsepower and memory to handle IPv6 routing updates and lookups without strong aggregation policies would come at such a huge cost that it would prohibit IPv6 adoption.

The IANA is responsible for managing and allocating the global IPv6 address space. Address allocation is strictly hierarchical, from the IANA to Regional Internet Registries (RIRs), and subsequently to National Internet Registries (NIRs), Internet Service Providers (ISPs) or Local Internet Registries (LIRs), and finally individual organizations. At each step, the larger address space is divided into successively smaller pieces assigned to customers. This allows hierarchical routing and prefix aggregation, which significantly reduces the size of the global routing table.

IPv6 Prefix Allocation Process

In Figure 1, you can see an example of this allocation process for the 2000::/3 prefix. The IANA allocates pieces of this prefix to RIRs and NIRs anywhere in size from /3 to /32 (typically /16 to /22). These blocks are then subdivided and allocated to ISP and LIR customers, in sizes ranging from /32 to /35. Finally, individual organizations are allocated one or more /48 prefixes. This leaves 16 bits for internal subnetting and network design within the organization.
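The carve-down is easy to model with Python's `ipaddress` module. Here the documentation prefix 2001:db8::/32 stands in for a hypothetical ISP allocation being split into /48 customer prefixes:

```python
import ipaddress

# Hypothetical ISP allocation (2001:db8::/32 is the documentation prefix).
isp_block = ipaddress.ip_network("2001:db8::/32")

# Carve the /32 into /48 prefixes for individual organizations.
org_prefixes = isp_block.subnets(new_prefix=48)
first_org = next(org_prefixes)
assert first_org == ipaddress.ip_network("2001:db8::/48")

# A /48 leaves 16 bits of subnet ID, i.e. 65,536 possible /64 subnets.
assert 2 ** (64 - first_org.prefixlen) == 65536
```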

Using this allocation policy provides prefix aggregation capability at each organizational boundary (organization, ISP, LIR, NIR, RIR, IANA). ISPs and LIRs must also allocate addresses to customers in a manner that prevents fragmentation and allows for optimal aggregation (using the Host-Density ratio defined in RFC 3194).
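The Host-Density ratio itself is just a logarithmic utilization measure, defined in RFC 3194 as log(allocated objects) / log(maximum allocatable objects). A minimal sketch:

```python
import math

def hd_ratio(allocated: int, capacity: int) -> float:
    """Host-Density ratio per RFC 3194: log(allocated) / log(capacity)."""
    return math.log(allocated) / math.log(capacity)

# Example: an ISP /32 can hold 2**16 = 65,536 customer /48 prefixes.
capacity = 2 ** 16
assert abs(hd_ratio(capacity, capacity) - 1.0) < 1e-9   # fully packed space
assert abs(hd_ratio(2 ** 12, capacity) - 0.75) < 1e-9   # 4,096 of 65,536 assigned
```

Because the measure is logarithmic, utilization looks "dense" well before the space is literally full, which is why registries use it as the trigger for further allocations.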

A complete list of IPv6 allocations by the IANA is available at the following link:

Such strict allocation and aggregation practices don’t come without drawbacks, however. Individual organizations no longer obtain their own address space directly from the RIRs (as they do with IPv4 prefixes today). Instead, IPv6 prefixes are acquired from specific ISPs or LIRs and represent a subset of their larger address allocation. Therefore, organizations will need to renumber their entire network when switching between ISPs. IPv6 does include features that make renumbering easier, but this still results in an operational impact to organizations.

Renumbering can be made simpler by assigning multiple unique unicast addresses to each host interface during the transition period between ISPs, where both address spaces coexist within the organization. Additionally, the use of unique-local addressing (ULAs) within the organization can ensure internal communication is not disrupted during an ISP transition. However, IPv6 renumbering is still a major concern for medium-to-large size organizations.


P.S. – Please follow or get involved in the discussion on IPv6 architecture, design, and implementation on Twitter with the #IPv6Mission hashtag.