Archive for the ‘networks’ Category

As predicted, 802.11 is really done

Friday, September 11th, 2009

Back in July, I wrote that the IEEE Standards Board would consider 802.11n for approval on September 11. That meeting has occurred, the votes have been taken, and the standard has been approved. I received notice by e-mail this morning at 11 am Pacific. I didn’t pick it up immediately, since I was in Australia for the Wireless World conference, where the e-mail arrived just after 4 am local time.

TKIP security is not “Gone in 60 Seconds”

Friday, August 28th, 2009

On the train home last night, I read the paper by Ohigashi and Morii that made the news and resulted in a good number of electrons being spilled yesterday afternoon.

Before I get started, the key point here is:

If you have concerns about wireless security, JUST USE CCMP.

(CCMP is often referred to as WPA2, but that’s a nomenclature point that I’d rather not get into here.)

I enjoyed reading the paper because the attack is clever, and it builds nicely on work from a year ago by Erik Tews and Martin Beck. Both the Ohigashi/Morii paper and the Tews/Beck paper describe attacks against the TKIP integrity check. Notably, neither attack is able to recover the keys used by TKIP to encrypt frames.

The most important thing to understand about TKIP is that it was intended to be an interim measure. When design work on TKIP started in 2001, there was a two-pronged approach to developing wireless security protocols. The first prong was updating the much-maligned WEP to improve security, an effort circumscribed by the need to remain hardware-compatible with the millions of devices that had already been sold with WEP support. (In technical terms, that restricted TKIP to the RC4 cipher and ruled out a message integrity check with significant computational requirements. The second prong became the AES-based protocol discussed below.)
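That integrity-check constraint is easiest to appreciate in code. Below is my own minimal, from-memory Python sketch of Michael, the message integrity check TKIP ended up with: each round uses nothing but rotates, XORs, byte swaps, and 32-bit additions, cheap enough for WEP-era hardware to compute in software. The round constants and the (omitted) padding rules are from memory and should be checked against the 802.11i specification; treat this as illustrative, not normative.

import struct

M32 = 0xFFFFFFFF

def rol32(x, n):
    return ((x << n) | (x >> (32 - n))) & M32

def xswap(x):
    # Swap adjacent bytes: 0xAABBCCDD -> 0xBBAADDCC.
    return ((x & 0xFF00FF00) >> 8) | ((x & 0x00FF00FF) << 8)

def michael_block(l, r):
    # One Michael round: rotates, XORs, byte swaps, and 32-bit adds only.
    r ^= rol32(l, 17)
    l = (l + r) & M32
    r ^= xswap(l)
    l = (l + r) & M32
    r ^= rol32(l, 3)
    l = (l + r) & M32
    r ^= rol32(l, 30)  # right-rotate by 2
    l = (l + r) & M32
    return l, r

def michael(key, msg):
    # key: 8 bytes; msg: already padded to a multiple of 4 bytes
    # (the spec pads with 0x5a plus zero bytes, omitted here).
    l, r = struct.unpack("<II", key)
    for (word,) in struct.iter_unpack("<I", msg):
        l ^= word
        l, r = michael_block(l, r)
    return struct.pack("<II", l, r)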

In essence, TKIP is a set of “seat belts” that keep the most vulnerable parts of WEP from being thrown through the windshield or impaled on the steering column. (One of my favorite papers about the weaknesses in WEP is IEEE 802.11 document 11-00/0362, titled Unsafe at Any Key Length, which is where the metaphor comes from.)

I’m not terribly surprised by the increasing number of papers about flaws in TKIP. Given the severe design constraints, TKIP was a stopgap intended to give wireless LAN users breathing space to move to the eventual AES-based protocol then in development. TKIP had a “design lifetime” of five years, meaning that the intent was for it to resist cryptanalytic attacks for that length of time. The TKIP specification had matured by 2003, so it is no surprise that flaws began to be identified last year.

The Tews/Beck paper exposed a subtle flaw in TKIP’s integrity checking, but the attack it described required that a network have the WMM quality-of-service extensions enabled. Ohigashi and Morii did away with that constraint by showing that an attacker clever enough to insinuate himself into the conversation between the network and a client device can perform a similar attack.

The technical impact of this attack is small. Tews and Beck showed that a network with WMM could be subject to attacks against the TKIP integrity check. Ohigashi and Morii have generalized that work to networks without WMM, but the trade-off is that the attacker must position himself as a relay between a client and an AP that cannot hear each other directly. Many vendors developed workarounds a year ago which continue to provide protection against this attack.

What this perceptive paper should do is heighten the disquiet regarding continued use of TKIP. Initial papers on WEP showed flaws that were not fatal, but the accretion of cryptanalytic expertise over time resulted in a complete break of the protocol which enables attackers to swiftly recover encryption keys. TKIP has not suffered this fate yet, but it is difficult to know how far off that day is. The best advice is to start using CCMP today, and make plans to move away from TKIP.

TKIP was intended as a stopgap, and it was optimized for the protocol features that existed at the time of its design. It has not been extended to protect the extended headers defined by 802.11n, which is why the Wi-Fi Alliance has defined tests to prevent the use of TKIP with its 11n certification. It will never provide protection for 802.11 management frames. (To learn more about management frame protection, see the summary video for a talk I’ve submitted to the RSA conference next year.)

The future of wireless LAN security is CCMP. Let’s bury TKIP, and move away from it before doing so becomes absolutely necessary.

Stick a fork in 802.11n, it’s almost done!

Monday, July 20th, 2009

Last week, the IEEE 802.11 working group met in San Francisco. The long-awaited 802.11n standard has been moving slowly through the approval process for several meetings now. On Friday, we took what is likely to be the final step for the 802.11 working group: we held our final approval vote, requesting that higher layers of the IEEE 802 organization approve 11n for publication.

The vote felt somewhat anti-climactic. In a lightly discussed and debated motion to send the 802.11n draft onward, 53 members (including your correspondent) voted in favor, 1 voted against, and 6 abstained.

Following the working group’s approval, the IEEE 802 executive committee voted unanimously (14 for, none against or abstaining) to send 802.11n to “RevCom,” the IEEE Standards Board Review Committee. The IEEE Standards Board next meets on September 11, 2009.

In an interesting twist, September 11 is a date relevant to the history of 802.11n. Bruce Kraemer, the long-time chair of Task Group N and the current chair of the 802.11 working group, noted that the first meeting of the “High Throughput Study Group,” the precursor to TGn, was September 11, 2002.

If approved, the 802.11n effort will have taken exactly seven years, at least by one measure. We are a long way from the first time the 802.11n draft passed the 75% approval threshold required in working group ballots.

The 802.11 working group is already working on the next step. Two task groups (TGac and TGad) are researching and debating methods to create gigabit-capable physical layers.

The difficulties in building a conference wireless network, TERENA TNC edition

Monday, May 19th, 2008

It’s hard to build a conference wireless network. I’ve built a few over the past five years, and it is always a big engineering challenge. As you build the network, you refine your plans. When users arrive and start sending traffic, you refine your plans. As loads ebb and flow, you refine your plans. I won’t say it’s easy, but it is a well-traveled path. Every major gathering of networkers requires wireless connectivity.

I’m accustomed to the user experience on wireless LANs built for industry groups like the Wi-Fi Alliance, the IEEE 802.11 working group, and the IETF. The Wi-Fi Alliance uses Michael Hijdra and his team at 2Fast4Wireless, and Verilan does the work for both IEEE 802.11 and the IETF. This week, I’m speaking at the TERENA Networking Conference (TNC). It’s only the first day, but I’ve had lots of trouble with the network.

First of all, the network uses web “authentication.” All of the conference attendees have been given unique accounts, but their use is enforced by a captive web portal, not WPA. The Wi-Fi Alliance, IEEE 802, and the IETF all run 802.1X networks, though they also offer an option for unauthenticated access. It seems unfortunate not to use 802.1X at TNC, because TERENA’s Eduroam project has done a great deal to drive adoption of 802.1X, and many of the attendees are therefore already familiar with configuring it.

When the plenary hall filled up, the performance went down very quickly. In the first eight minutes, I was disconnected four times. At eight minutes, the network connection gave up the ghost and quit working altogether. Before that point, I was seeing round-trip times that I hadn’t seen since the great AT&T frame relay outage of 1998, when round trips from my then-office to, well, anywhere were measured in seconds. Round-trip times here were also in the second-plus range, substantially higher than even the GPRS/EDGE network I use when commuting to work:

Reply from 4.2.2.1: bytes=32 time=2964ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1050ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1513ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1464ms TTL=246
Reply from 4.2.2.1: bytes=32 time=3253ms TTL=246
Reply from 4.2.2.1: bytes=32 time=3448ms TTL=246
Reply from 4.2.2.1: bytes=32 time=753ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1575ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1469ms TTL=246
Reply from 4.2.2.1: bytes=32 time=228ms TTL=246
Reply from 4.2.2.1: bytes=32 time=1538ms TTL=246

(4.2.2.1 is one of my favorite test IP addresses. It’s short, quick, easy to type, and it belongs to a highly redundant DNS server so it is almost always there.)
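To put a number on “second-plus,” here is a quick Python summary of those samples, transcribed from the ping output above:

# RTT samples (ms) transcribed from the ping output above.
rtts = [2964, 1050, 1513, 1464, 3253, 3448, 753, 1575, 1469, 228, 1538]
print(f"min {min(rtts)} ms, mean {sum(rtts) / len(rtts):.0f} ms, max {max(rtts)} ms")
# -> min 228 ms, mean 1750 ms, max 3448 ms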

In the plenary, I was sitting towards the back of the room. As it became clear that the network was failing, people closed up their laptops in frustration. In the afternoon session, an attempted demonstration was aborted due to network performance problems. In all of the rooms, Windows reported low signal strength, so some of the performance problems could be due to AP placement constraints.

Last but definitely not least, there are two network names in use. A sign posted at the plenary room indicates that the split is used for load balancing, and instructs us to pick the appropriate network based on user name.

I have connected to both networks, and they appear to use the same DHCP server. This is probably a misguided attempt at broadcast containment and/or load balancing. The Wi-Fi/IEEE 802/IETF networks use a single SSID and let the infrastructure figure out load balancing in a way that is transparent to the users.

A dispatch from the home network: Powerline communications actually work!

Monday, June 18th, 2007

Recently, my parents moved out of a two-story house into a condominium. The condo is smaller, but it’s more spread out. The building is also a great deal sturdier, since it’s a 13-story high-rise, instead of a two-story detached home. In the old house, a single AP in the middle of the home on the second floor covered everything adequately. In the new condo, the wireless signal was not so great around the edges of the unit, and that expressed itself most dramatically in the speed of transferring recorded programs between two TiVos.

Obviously, a second AP was needed to cover the extra horizontal distance, but I needed to link them to a single VLAN in order to get communications between clients of the two APs. The condo’s walls were already finished, and the last thing I wanted to learn was how to fish cable through ceilings and walls. (“It’s easy,” says a colleague with the right tools, as I tell him this after the fact. “There’s a really long drill bit, and then you just snake a pull cable over the ceiling. It works even better when you’re not on the top floor of the building!”)

It turns out the problem was pretty easy to solve with powerline networking. One of the APs acts as a firewall/router for the whole condo, and feeds the second AP over powerline. A third unit connects a computer that has only a wired Ethernet port. I used the new HomePlug AV equipment, which boasts speeds of up to 200 Mbps. According to Netgear’s test tool, raw link speeds are 150-175 Mbps. (I wound up selecting the Netgear equipment because it had the best industrial design in the group and the least garish display.)

I have one big worry about the equipment. It works by sending data over the electrical wiring as a high-frequency signal. Most power strips and surge suppressors filter out high-frequency noise, so the units need to be plugged directly into the wall. Without surge protection, I wonder how they’ll fare when the power flickers in a storm. Naturally, I worry about security as well, since no obvious security configuration took place during my installation.

At this point, everything seems to be working. The question is whether I am brave enough to upgrade the APs to new firmware. I am tempted, because the third-party firmware offers multi-SSID support with a different security configuration per SSID, which would help contain some of the potential damage from the non-WPA TiVos.

Bad Advertising: Our services have the “speed” and “reliability” of the London Underground!

Sunday, June 17th, 2007

In a recent issue of a tech magazine that I receive, I saw the following advertisement, which is good for a laugh. I’ve deliberately blurred the company’s name, location, and sales telephone number.

Speed and reliability don't make me think of this

The photo in the background is nice, and gives you the impression of a fast-moving train. That is, until you take a closer look at it, and realize that it is unmistakably a picture from the London Underground.

Trust me, Unnamed Company, you don’t want to associate your services with the Tube, especially its speed and reliability.

I started riding the Tube about 15 years ago. Back then, the novelty of an underground railway that went everywhere made me think it was cool beyond belief. As far as I can tell, the government has barely invested in upkeep since that time. In January, I was in London for an IEEE meeting, and I loathed taking the thing. Most of the stations are only a half-step above decrepit, deferred maintenance kept good chunks of the system from running, even during weekdays, and it takes more than an hour to cross central London if you need to do something stupid, like transfer. One day, the network was even completely shut down due to “high winds.” (Bonus points for anybody who can tell me why high winds can almost completely shut down a train system, even the underground parts.)

All that said, it could have been worse. A few days after I left London, I was on a Belfast-Amsterdam flight delayed by snow over London; even then, I could have been trying to ride the Tube instead. Here are two precious photos: TFL’s delay apology and the service update.

Interop 2007 in photos

Sunday, June 3rd, 2007

Interop ended the week before last, but Las Vegas is so good at being angry-making that it took me a week to sort through all the pictures that I took. During Interop, my major activities were related to the OpenSEA Alliance, an organization that I helped found, and the Interop Labs, the legacy of Interop’s conference and research focus.

My favorite photo of the week illustrates why the Interop Labs is so valuable for attendees. Those of us who put it on have a staging event a month before the show, and then we arrive several days early to set up demonstrations. It’s common to be troubleshooting all the way up to the opening curtain, and sometimes even well into the three-day show. To show off open-source admission control technologies using the Trusted Network Connect architecture, it was necessary for Mike McCauley, the CTO of Open System Consultants (maker of my favorite RADIUS server, Radiator), and Chris Hessing, the Open1X project lead, to work out some bugs before the show.

Mike McCauley and Chris Hessing troubleshooting

(Meanwhile, Ted Fornoles and Tim McCarthy, both of Trapeze Networks, are in the background working on another demonstration.)

Chris and Mike are both individual members of the OpenSEA Alliance, and attended a lunch meeting the group had on Tuesday. We’re all excited about the possibilities of where we might go, but there’s a lot of hard work ahead of us. Fortunately, the group has a wide cross-section of industry representation; here’s a shot of the Extreme Networks access control demonstration area, which makes use of the Open1X project software:

Extreme demonstration of Open1X project supplicant

There are several more pictures of people involved in the group in my OpenSEA gallery.

In the Interop Labs, I was a member of the VoIP team. Unfortunately, I missed the staging event because my presence was required at a meeting in Singapore. Of the demonstrations on the floor, our scalability demonstration seemed to attract the eye of most passers-by. Here, Jerry Perser of VeriWave (on the left) is explaining the demonstration to Sue Hares of Nexthop (on the right), and one of Sue’s colleagues whose name I forget:

VoIP scalability demonstration

If you’re interested, feel free to look at the full photo gallery here.

Secure VoIP demonstration at Interop

Wednesday, May 16th, 2007

Last month, the Administrative Office of the United States Courts released the 2006 wiretap report (main report in PDF format). There are two extremely interesting points.

First, the third paragraph of the introductory page, which reads:

Public Law 106-197 amended 18 U.S.C. 2519(2)(b) to require that reporting should reflect the number of wiretap applications granted for which encryption was encountered and whether such encryption prevented law enforcement officials from obtaining the plain text of communications intercepted pursuant to the court orders. In 2006, no instances were reported of encryption encountered during any federal or state wiretap.

(Steve Bellovin, via Eric Rescorla.)

Second, on page 11 of the PDF (under the section “Summary and Analysis of Reports by Prosecuting Officials”), we learn that the federal government doesn’t encounter computers all that often:

The electronic wiretap, which includes devices such as digital display pagers, voice pagers, fax machines, and transmissions via computer such as electronic mail accounted for less than 1 percent (13 cases) of intercepts installed in 2006; 6 of these involved electronic pagers, and 7 involved computers.

For comparative purposes, the report notes that 1,839 wiretaps concluded in 2006.

Most voice communications are not encrypted. The exception is mobile telephones, which are encrypted only on the radio link. (Mobile phone wiretaps, however, generally take place at the switching office, where the voice traffic is not encrypted.) The lack of encryption extends to most VoIP calls today, and it is a reason often cited for the slow uptake of SIP-based services on corporate networks. Most VoIP data is transmitted using the Real-time Transport Protocol (RTP), which does not encrypt payload data. The Secure Real-time Transport Protocol (SRTP) offers a potential solution, and implementations are now available.
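To make that exposure concrete, here is a minimal Python sketch (my own illustration, not anything from the report) that pulls apart the fixed RTP header defined in RFC 3550. In plain RTP, everything after the header and CSRC list, the voice samples themselves, travels in the clear; that payload is precisely what SRTP encrypts and authenticates.

import struct

def parse_rtp(packet: bytes):
    # Parse the fixed 12-byte RTP header (RFC 3550, section 5.1).
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    version = b0 >> 6          # always 2 for RTP
    csrc_count = b0 & 0x0F     # number of contributing-source entries
    payload_type = b1 & 0x7F   # codec, e.g. 0 = PCMU
    # In plain RTP the media payload is unencrypted; SRTP encrypts it
    # and appends an authentication tag.
    payload = packet[12 + 4 * csrc_count:]
    return version, payload_type, seq, timestamp, ssrc, payload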

After that very long introduction, I’d like to point out that the Interop Labs next week will have an interoperability demonstration featuring SRTP. It’s open to the public, so if you happen to be on the show floor, stop on by!

As a completely shameless plug, you can also see the Open1X supplicant in the iLabs, which is now supported by the newly-formed OpenSEA Alliance.

Off to Abu Dhabi!

Friday, February 23rd, 2007

I’m writing this from the Red Carpet Club at the San Francisco airport, waiting for the first flight of a trip to Abu Dhabi. I’ve been invited to speak at the Education Without Borders conference on networking and access to technology. Here’s the abstract for my talk, which was inspired by Stewart Brand’s book How Buildings Learn:

Pervasive communication networks greatly increase our ability to share information, but this can sometimes come at a great cost. In physical architecture, the imperceptibly slow change of site and structure dominates the faster-moving, familiar processes that are easily visible to a building’s users. The same is true in communication networks, where the social environment of a network’s users and its basic technology steer its future development path. Network engineering is mainly a technical discipline that attempts to optimize for the technology of today while remaining open to tomorrow’s developments. Defining what is optimal and should be maximized, however, is a social question that drives those engineering decisions.

We are in the midst of one of the most fundamental changes in the physical architecture of the Internet, with potentially profound changes in its social effects. Most of the technology that underlies traditional Internet access is based on “fixed” networks built from expensive and inflexible cables. The physical architecture constrains network services to areas where the infrastructure is easy to build, keeping access concentrated in densely populated areas. Newer wireless technologies free network builders from technological dependence on cables, and from the financial constraints of needing to pay for costly infrastructure. These technologies enable the network to reach into remote locations that would have previously been considered “off-limits” to traditional technologies.

The resulting flexibility of wireless technologies is shifting the focus of innovation and use away from the traditional markets for new technology and towards the emerging markets that benefit most from its advantages. Instead of importing designs and their implicit social models from existing networks, network builders need to ensure that these new technologies are used within the correct social context and are made broadly available.

The conference is recording audio and video. I’ll definitely post my slides and, with luck, the audio and video.

A look at 802.11a, b, and g throughput with short preambles

Saturday, January 27th, 2007

Three and a half years ago, I used a simple model to calculate the maximum throughput of 802.11 networks. Recently, a reader wrote to me and asked me how the use of the short preamble would affect the throughput numbers. The answer is that using the short preamble increases throughput by more than 20%.

The re-calculations aren’t particularly complicated. The short preamble lasts only 96 microseconds instead of 192: the initial slow component of the preamble is shorter, and the second and final component is transmitted at a faster data rate. In the model, this makes any frame transmitted at 802.11b data rates take 96 fewer microseconds in the air.

In the 802.11b model, that saving applies to four frames (the initial data frame with the TCP payload, its 802.11 ACK, the second data frame with the TCP ACK, and the final 802.11 ACK). Therefore, the simple TCP+ACK transaction the model is based on is 384 microseconds shorter.

In the 802.11g model with CTS-to-self protection, the short preamble saves time on just two frames, because the only frames transmitted at 802.11b data rates are the two CTS frames used to lock up access to the medium. Therefore, the TCP+ACK transaction unit in the model is 192 microseconds shorter.

Finally, in the 802.11g model with RTS-CTS protection, there are once again four frames with a preamble saving. Each of the two 802.11 data frames carrying a TCP payload is protected by an RTS and a CTS frame, so the TCP+ACK transaction unit is once again 384 microseconds shorter.

Throw it into a spreadsheet, and you get the following table, where the short preamble rows are new.

Technology | Transaction time (μs) | Transactions/second | TCP payload throughput (Mbps) | Improvement over long preamble
802.11b, long preamble | 2084 | 479 | 5.6 | -
802.11b, short preamble | 1700 | 588 | 6.9 | +23%
802.11g, CTS-to-self, long preamble | 898 | 1113 | 13.0 | -
802.11g, CTS-to-self, short preamble | 706 | 1416 | 16.5 | +27%
802.11g, RTS-CTS, long preamble | 1285 | 778 | 9.1 | -
802.11g, RTS-CTS, short preamble | 948 | 1054 | 12.3 | +35%
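If you would rather check the arithmetic in code than in a spreadsheet, the short Python sketch below reproduces the table from the per-transaction times. It assumes each transaction carries a single 1460-byte TCP segment (the standard Ethernet MSS), which is the payload size that makes the published numbers work out, and it discards partial transactions when computing transactions per second, as the table appears to do.

# Reproduce the table above from the per-transaction times.
PAYLOAD_BITS = 1460 * 8  # one TCP segment per transaction (assumed MSS)

cases = [
    # (technology, long-preamble time in us, short-preamble time in us)
    ("802.11b", 2084, 1700),
    ("802.11g, CTS-to-self", 898, 706),
    ("802.11g, RTS-CTS", 1285, 948),
]

for tech, long_us, short_us in cases:
    mbps = {}
    for preamble, t_us in (("long", long_us), ("short", short_us)):
        tps = 1_000_000 // t_us              # whole transactions per second
        mbps[preamble] = tps * PAYLOAD_BITS / 1e6
        print(f"{tech}, {preamble} preamble: {tps} txn/s, "
              f"{mbps[preamble]:.1f} Mbps")
    gain = mbps["short"] / mbps["long"] - 1
    print(f"  improvement from short preamble: +{gain:.0%}")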

It’s also important to note that equipment on the market rarely uses full RTS-CTS protection; it is an option on most devices, but it is seldom enabled.