Blog
July 31, 2007

IETF highlights: HTTP Bis and BIFF BoF

Event Driven Architectures, IETF, Security By: ams

The 69th IETF was last week in Chicago (windy and good pizza, who knew?). The two highlights for me were the HTTP Bis BoF and the BIFF BoF. A BoF is a “Birds of a Feather” meeting used to gauge interest in, and the feasibility of, forming an IETF working group.


May 7, 2007

Attacks on Vidoop Authentication

Security By: ams

A new authentication scheme, Vidoop (http://www.vidoop.com), was announced recently at the Web 2.0 Expo.

Vidoop describes itself as a web single sign-on solution that is
resistant to “all prevalent forms of hacking”. Specifically, they
claim to resist “phishing, keystroke logging, brute force, and many
man-in-the-middle attacks” and to resist automated attacks by
“requiring human cognition” on the part of the attacker. This
language is misleading. In reality, the scheme only resists simple
phishing attacks — it does not prevent man-in-the-middle attacks, is vulnerable to brute
force attacks, and it is resistant to keyboard loggers only when screen loggers are
not present.

We were able to construct a man-in-the-middle (MITM) attack that allows us to capture users’ credentials and to login to their accounts. We recorded a video that demonstrates a MITM attack in progress at myvidoop.com. Ian Fischer, a Harvard University student and research intern at CommerceNet, created the attack in a few hours, by modifying freely available proxy software on the Internet. We describe the attacks in more detail below.

How Vidoop works: Vidoop is essentially a combination of a graphical
password scheme and client-side cookie. During setup, a user must
choose their secret, which is a set of three “image categories” out of
25 categories (e.g., the user might choose cats, dogs, and birds).

To login, the user has to enter their username (or OpenID URI). The
server presents a grid of 12 images from different image categories.
Each picture has a random character superimposed on it, and three of
the images are from the user’s pre-selected categories. The user
derives his one-time PIN by entering the three letters corresponding
to his image categories.
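For concreteness, the login flow can be simulated in a few lines of Python. This is a hypothetical sketch: the category names and helper functions are invented, while the 25 categories, 12-image grid, and 3-category secret follow the description above.

```python
import random
import string

CATEGORIES = [f"category_{i}" for i in range(25)]  # stand-ins for cats, dogs, birds, ...

def build_grid(user_categories, grid_size=12):
    """Build a login grid: one image per category, each with a random
    superimposed character; three cells come from the user's secret."""
    decoys = random.sample(
        [c for c in CATEGORIES if c not in user_categories],
        grid_size - len(user_categories))
    letters = random.sample(string.ascii_uppercase, grid_size)
    cells = list(zip(user_categories + decoys, letters))
    random.shuffle(cells)
    return cells  # list of (category, character) pairs

def derive_pin(grid, user_categories):
    """The one-time PIN: the characters shown on the user's category images."""
    return "".join(ch for cat, ch in grid if cat in user_categories)

secret = random.sample(CATEGORIES, 3)   # chosen once, at setup
grid = build_grid(secret)               # a fresh grid for each login
pin = derive_pin(grid, secret)          # what the user types
```

Note that the PIN changes on every login because the characters are re-randomized, which is the source of the scheme’s keyboard-logger resistance.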

Attacks: We recently conducted a study that analyzed attacks on Bank
of America’s SiteKey scheme [1]. Vidoop bears some similarities and
shares many of the same vulnerabilities. In particular, Vidoop is
vulnerable to a man-in-the-middle attack in which the attacker
simulates the enrollment process. This is a well-known attack on
SiteKey, which was first published in 2005 [2], and has been well
analyzed by Jim Youll [3] and more recently demonstrated in this
video by Indiana University researchers [4].

As with SiteKey, users must have a Flash cookie and/or HTTP cookie on
their machine in order to log in (this cookie acts as a “second factor”
that ties the machine to the user’s account). If this cookie is
erased, or if the user logs in from a new machine, the user needs to
“enroll” the machine. The SiteKey enrollment process requires the
user to answer a challenge question before receiving their cookie.
This opens up a MITM attack, where the phisher lures the user to his
website and presents the enrollment message “You are logging in from a
computer that we don’t recognize”. The phisher proceeds to relay the
challenge question from the bank to the user, and then relays the user’s answer back to the
bank. This allows the phisher to ultimately capture the user’s
SiteKey image and password. Because the user has probably seen
the re-enrollment message several times in legitimate circumstances, he is likely to answer the challenge
question and might not even know he was the victim of a phishing attack.

In Vidoop’s enrollment process, the user has to request an activation
code, instead of answering a challenge question (the activation code
is delivered via email, a phone call or SMS text message). Once the
user enters the activation code, the server will place a cookie on the
machine, and allow the user to log in as usual. This opens up the same
MITM described above — now, instead of relaying the challenge
question to the bank, the phisher simply relays the activation code:

1. The phisher directs the user to phishingsite.com, which looks just like
the Bank site, and the user enters his username.

2. The phisher relays the username to the real Bank and is presented
with the message “We don’t recognize your computer. Please select how
you would like to receive your activation code”. The phisher relays
this message to the user.

3. The user selects the method of delivery, and the phisher relays
this choice to the Bank. The user receives the activation code and
enters it into the phishing website.

4. The phisher relays the activation code to the Bank, receives the
cookie, and the user’s authentication grid image.

5. The phisher displays the user’s image grid to the user in order to
obtain his PIN and secret “image categories”. He relays the PIN back
to the bank in order to log in.
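The five relay steps can be sketched as a toy simulation (purely illustrative; `Endpoint` and the scripted messages are invented stand-ins for the two live connections, not real proxy code):

```python
from collections import deque

class Endpoint:
    """Toy stand-in for one live connection: recv() pops scripted
    inbound messages, send() records outbound ones."""
    def __init__(self, inbound):
        self.inbound = deque(inbound)
        self.sent = []
    def recv(self):
        return self.inbound.popleft()
    def send(self, msg):
        self.sent.append(msg)

def mitm_relay(victim, bank):
    # Step 1: the user enters a username on the phishing site.
    bank.send(victim.recv())
    # Step 2: relay the bank's "choose a delivery method" prompt to the user.
    victim.send(bank.recv())
    # Step 3: relay the user's delivery choice to the bank.
    bank.send(victim.recv())
    # Step 4: relay the activation code; the bank hands back a cookie
    # and the user's authentication grid.
    bank.send(victim.recv())
    cookie, grid = bank.recv()
    # Step 5: show the real grid to the victim, capture the PIN, log in.
    victim.send(grid)
    pin = victim.recv()
    bank.send(pin)
    return cookie, pin

victim = Endpoint(["alice", "sms", "123456", "XQZ"])
bank = Endpoint(["choose delivery method", ("cookie-abc", "<image grid>")])
cookie, pin = mitm_relay(victim, bank)
```

The point of the exercise: at the end of the run the phisher holds both a valid enrollment cookie and a usable PIN, without ever breaking the out-of-band channel.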

Vidoop’s requirement for out-of-band communication does not increase
the cost of launching an automated MITM attack. In the SiteKey
attack, the MITM phisher obtains the SiteKey image and password and a
secure cookie, which allows him to log in indefinitely. In Vidoop, the
MITM attacker obtains the user’s PIN, which can only be used
immediately to log in to the account one time. He also receives the
user’s image categories and a cookie that allows him to log in in the
future. To make use of the cookie, the attacker has to do a little
more work.

Vidoop claims that subsequent logins require a human to determine the
image categories and to look at the image grid to obtain the user’s
PIN. The necessity for a human in the loop increases the cost of an
attack, and most phishers won’t bother to go through the effort. They
don’t need to! The password space is so small that, once you have a
cookie, a brute force attack is trivial. The myvidoop.com PIN is 3
characters chosen from the 26 letters of the alphabet, is order-independent, and is case-insensitive, so the attacker only has to
search 2,600 combinations (26 choose 3). With four login
attempts available, the chances of success are 1 in 650. If the phisher uses automated character-recognition programs, he can reduce the number of combinations to 220
(12 choose 3), or a 1 in 55 chance of success with 4 login attempts. Note that brute force attacks are also easy to mount by anyone who shares the machine with the user.
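The arithmetic is easy to check (a quick sketch; `math.comb` requires Python 3.8+):

```python
from math import comb

attempts = 4

full_space = comb(26, 3)     # unordered, case-insensitive 3-letter PINs
reduced_space = comb(12, 3)  # only the 12 characters actually displayed

print(full_space, attempts / full_space)        # 2600, i.e. 1 chance in 650
print(reduced_space, attempts / reduced_space)  # 220, i.e. 1 chance in 55
```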

Vidoop could increase the attacker workload by increasing the size of
the PIN (the number of image categories), increasing the image grid,
increasing the character set (e.g., adding digits and symbols), requiring
order dependence and non-repeatability, or by reducing the number of
attempts that are allowed. To defeat character recognition, they could eventually employ captcha-type characters. All of these options will significantly reduce the usability of the system.
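A rough sketch of how two of those knobs change the attacker’s search space (hypothetical parameter choices; only the combinatorics from the text are assumed, and `math.comb`/`math.perm` require Python 3.8+):

```python
from math import comb, perm

def pin_space(grid=12, pin_len=3, order_dependent=False):
    """Number of guesses when the attacker can read the displayed characters."""
    return perm(grid, pin_len) if order_dependent else comb(grid, pin_len)

print(pin_space())                              # today's parameters: 220
print(pin_space(order_dependent=True))          # require order: 1320
print(pin_space(grid=20, pin_len=4,
                order_dependent=True))          # bigger grid, longer PIN: 116280
```

Even the largest of these spaces is small by password standards, which is why reducing the allowed attempts matters as much as growing the space.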

Vidoop does improve upon SiteKey in its resistance to keyboard logging
attacks. If a keyboard logger obtains the PIN, it is only useful for
one login and only within the timeout period. Vidoop is not
resistant to malware that contains both keyboard loggers and screen
loggers, which are becoming increasingly common [5].

Graphical passwords do have other weaknesses. For example, an
attacker can predict the type of image categories that are chosen,
even with very limited information about the target user [6, 7].
However, targeted attacks are expensive to mount — we’ve only focused
on the attacks that are easy to automate here.

Privacy: There is a gaping privacy hole in their system. Vidoop makes
it easy to search for registered usernames, and they openly publish
these on their website. An attacker can enter usernames and request
that activation codes be sent to them via text message, cell phone or
email, depending on the user’s preferences (this can be very costly
and annoying for both Vidoop and its users). Initially, Vidoop had no
time-out or restriction on the number of messages that could be sent
by an unknown party. It appears that I can now only send 3 messages to
any one person, after which there is a 9-minute timeout before
requests can be sent again. By signing up for Vidoop, users
essentially give anyone the right to send them Vidoop messages,
without requesting their permission and without needing any contact
information.

Usability: The cognitive overhead of selecting the Vidoop PIN is
higher than recognizing the previously seen SiteKey image (the user
must remember their semantic image categories, select images from the appropriate
categories, find the associated characters and input them into a text
box). However, Vidoop eliminates the need to recall a password,
which is still a requirement with SiteKey. Vidoop eliminates the need
to answer a challenge question during enrollment, but requires the
user to check their email or phone and then input the activation code.

Summary: Before publishing our analysis, we communicated with Vidoop’s CTO, Scott Blomquist. He acknowledged that he is aware of these weaknesses and that the scheme is vulnerable to man-in-the-middle attacks. In comparison to simple password authentication, Vidoop does raise the bar for phishers. However, we find their advertising, and in particular their claims that they resist man-in-the-middle attacks and “all prevalent forms of hacking”, to be disingenuous.

[1] The Emperor’s New Security Indicators, Stuart Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer, to appear in the Proceedings of the IEEE Symposium on Security and Privacy, May 2007.

[2] The Battle Against Phishing: Dynamic Security Skins, Rachna
Dhamija and J. D. Tygar, in Proceedings of the Symposium on Usable
Privacy and Security (SOUPS), July 2005.

[3] Fraud Vulnerabilities in SiteKey Security at Bank of America, Jim
Youll, July 2006.

[4] Deceit-Augmented Man-in-the-Middle Attack Against Bank of America SiteKey Service, blog post and video, Christopher Soghoian, April 10, 2007.

[5] Anti-Phishing Working Group, http://www.apwg.org/

[6] Déjà Vu: A User Study Using Images for Authentication, Rachna Dhamija and Adrian Perrig, in Proceedings of the 9th USENIX Security Symposium, August 2000.

[7] On User Choice in Graphical Password Schemes, Darren Davis, Fabian
Monrose, and Michael K. Reiter, in Proceedings of the 13th USENIX
Security Symposium, August 2004.

April 3, 2007

Needed: Web 2.0 hackers

Uncategorized By: ams

A longer, though not necessarily more accurate, job description can be found here.

January 31, 2007

Healthcare 3.0 Workshop Program

Uncategorized By: ams


Healthcare 3.0 Workshop Program

Evening Arrival – October 15, 2007
  • 8pm – Arrival and check-in, Monterey, California
  • 9-10:30pm – Networking Cocktail and Evening Speaker: TBD

Workshop Day 1 – October 16, 2007
  • 8-9am – Registration and Continental Breakfast
  • 9-9:50am – Keynote
    • Speaker: Dr. Marty Tenenbaum
  • 10-11:50am – Panel 1: Collective Intelligence for EBM
    • Speaker: Leslie Michaelson
    • Panelists: 3-5 TBD
  • 12-1pm – Lunch
    • Featured Speaker: TBD Celebrity
  • 1-2:45pm – Panel 2: Empowering Patients via the Net
    • Speaker: TBD
    • Panelists: Ryan Phelan of DNA Direct, Adam Bosworth of Google, Paula Kim + ?
  • 3-4:45pm – Panel 3: Internet for Citizen Science
    • Speaker: James Heywood
    • Panelists: Zak Kohane + ?
  • 5-6:30pm – Web-Med Mixer and Paper & Poster Presentations
  • 7:30-9:00pm – Dinner
    • Featured Speaker: TBD Celebrity

Workshop Day 2 – October 17, 2007
  • 8-9am – Continental Breakfast
  • 9-9:50am – Keynote
    • Speaker: Andy Grove or TBD Celebrity
  • 10-11:50am – Panel 4: Kaizen Trials
    • Speaker: TBD
    • Panelists: TBD
  • 12-3pm – Lunch and Event

December 8, 2006

Open Source, Open Standards

IETF By: ams

I’ve been told in the past that Open Source and Open Standards are practically the same thing, they go so well together. While they’re both good things, unfortunately, they’re not quite as naturally reinforcing as you’d like. There’s cost and style of participation, IPR concerns, and proliferation of standards (“The good thing about standards is, there’s so many to choose from” — ref).


November 30, 2006

Email Standards Waves

IETF, Security By: ams

A month ago at the last IETF meeting, I talked to a bunch of email standards experts about the current wave of Internet email standards work. In these conversations I also built a mental picture of the previous waves.

Wave 1 was what really made email work over the Internet. The Simple Mail Transfer Protocol and the basic email message format were defined in 1982 with the major innovation of using domain names to find out where to deliver an email. This allowed email from one organization to reach an individual in another company. In 1989, this was somewhat updated with RFC1123, which made email addresses look the way they do today: mailbox@domain.example. Once mail got to the right server, POP (first described in RFC918 in 1984) allowed any mail client to look through the server’s queue of new mail and decide what to do with each message. Although that first POP was described so early, many did not use it in those years and it didn’t get much attention until POP3. Instead one would typically log into the SMTP server and look at its mailboxes there.

Wave 2 was POP3, IMAP and MIME, 1988 to 1994 or so. POP3 gained far more adoption than POP. IMAP defined a way to access the server-side repository for all one’s mail: not just the queue of new messages, but a hierarchy of mailboxes (called “folders” in many clients) which can be used to store mail for access by several clients. MIME brought Media to electronic mail: the ability to include image file formats, to use HTML instead of text, to attach Word documents and executables, and other variations necessary to business and eventually much beloved by spammers. MIME also introduced the very first non-ASCII characters in the body of email. MIME turned out to be big for other purposes too, like the Web.

There’s arguably a wave 2.5 or 3, adding security features from 1994 to 1999, including S/MIME, TLS support, and authentication features for IMAP and SMTP. SASL was added to SMTP in 1999, although it didn’t get put into IMAP until 2003. This mini-wave didn’t change people’s lives much, except for those whose companies rolled out complicated and hard-to-use S/MIME infrastructures, but the continued deployment of IMAP and MIME over this period did change the email habits of many.

Today’s wave is starting to get complicated (oh, just starting? heh). It’s adding internationalization capability, step by painful step (to various IMAP functions, to various mail headers like an email Subject line, and most painfully, to email addresses themselves). It’s making IMAP and other mail infrastructure more usable by mobile clients (all the work of the Lemonade WG). It’s addressing security and spam with, among other things, new ways to sign messages (DKIM). There is also some refactoring and architectural work going on which may be very interesting in the long run — for example, features to assign URLs and attach metadata to IMAP messages. This kind of work already allows increasing innovation in how email clients can deal with mail (particularly mail overload and spam).

The people I work with today include:

  • Dave Crocker, who edited RFC822 (mail message format) in Wave 1
  • Joyce Reynolds, author of the first experimental version of POP, RFC918 in Wave 1
  • Mark Crispin, author of the first version of IMAP, RFC1064 in Wave 2, and other revisions of IMAP
  • Nathaniel Borenstein and Ned Freed, who did the first three versions of MIME, starting with RFC1341 in 1992
  • Marshall Rose, who updated POP many times (POP3 in RFC1081, RFC1225, RFC1460, RFC1725 and RFC1939) in Wave 2
  • Randy Gellens and Chris Newman, who have contributed significant updates to POP and IMAP in Wave 2
  • Paul Hoffman, who defined SMTP over TLS in RFC2487 in 1999, and who ran the Internet Mail Consortium
  • John Klensin and Pete Resnick, who edited the modern versions of SMTP and the Internet Message Format (RFC2821 and RFC2822, respectively).
  • The same and many more participating in today’s wave, all of whom I greatly enjoy working with.

Of course, although talking to some of these guys helped me put together this picture of email standardization waves, any errors here are mine (and please let me know of errors so I can update this).

November 17, 2006

What compliance means to an engineer

IETF, Uncategorized By: ams

I’ve been seeing the word “compliance” tossed around a lot for HTTP and other standards lately, with much ambiguity. Let’s say you read RFC2068 and implemented a client very carefully. Does it make your client implementation “uncompliant” if a new standard updates or obsoletes RFC2068 and adds requirements, as RFC2616 did?

My answer is “that’s not even a meaningful question”. Compliance can be a very loose concept.

  • If your software claims compliance to HTTP, there can be a lot of variation in how that actually works because different versions of HTTP have significant differences.
  • If your software claims HTTP/1.1 compliance, we have a somewhat better idea what that means. A client advertising HTTP/1.1 support in its requests can be assumed to understand the “Cache-Control” response header from the server, because all the specs that ever defined HTTP/1.1 (RFC2068 and RFC2616) define that header. However, we can’t tell if such a client supports the “s-maxage” directive on the Cache-Control header (the maximum age allowed for a shared cache entry) because that was only defined in RFC2616.
  • If your software claims RFC2068 compliance we don’t know whether it understands “s-maxage”, but we can assume that it supports “max-age”.
  • If your software claims RFC2616 compliance we can assume that it understands “s-maxage” as well as “max-age”. But support for RFC2616 isn’t advertised over the wire to servers, so we can’t tell the difference from clients that only implement RFC2068.

With this knowledge, you can ask if the new caching features in RFC2616 made existing clients uncompliant with RFC2068. Of course not. RFC2068 didn’t change — there’s a reason the IETF doesn’t replace its standards in-place but defines new RFC numbers. Do the new caching features make the client uncompliant with RFC2616? Well it never claimed to be compliant with a spec that was probably published after the client was written.

The important question to ask is whether a new feature or requirement is backwards-compatible (and if it’s not, whether the feature is important enough to break backwards-compatibility). Let’s consider the Cache-Control header a little further: a response with “Cache-Control: no-store” can be sent to any client that advertised HTTP/1.1 support, because that directive works the same way in both specs. If the response has “Cache-Control: s-maxage=1600”, then we’re not so sure that all HTTP/1.1 clients support it, but that might be OK — only shared caches can possibly do the wrong thing if they don’t implement RFC2616 yet, and the server could limit the out-of-date cache entries of pre-2616 shared caches by having a backup plan, e.g. “Cache-Control: s-maxage=1600, max-age=36000”.
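The fallback plan can be made concrete with a toy parser (an illustrative sketch, not a full RFC2616 implementation; real Cache-Control parsing must also handle quoted values and extension directives, and note that the on-the-wire spelling of the directive is “max-age”):

```python
def parse_cache_control(header):
    """Split a Cache-Control header into a dict of directives."""
    directives = {}
    for part in header.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value or None
    return directives

def shared_cache_ttl(directives, implements_2616=True):
    """Freshness lifetime a shared cache would use: an RFC2616 cache
    honors s-maxage first; a pre-2616 cache only knows max-age."""
    if implements_2616 and "s-maxage" in directives:
        return int(directives["s-maxage"])
    if "max-age" in directives:
        return int(directives["max-age"])
    return None

cc = parse_cache_control("s-maxage=1600, max-age=36000")
print(shared_cache_ttl(cc))                           # RFC2616 cache: 1600
print(shared_cache_ttl(cc, implements_2616=False))    # pre-2616 cache: 36000
```

The two print lines show the graceful degradation: the newer directive wins where it is understood, and the older one bounds the damage everywhere else.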

This new feature was a reasonable choice in the standardization of RFC2616. If the writers of RFC2616 had been prevented from making any requirements that weren’t already met in deployed clients, they would not have been able to add features like “maximum age limits for shared cache entries”. The limitation would have unduly restricted their ability to improve RFC2616. Instead, the authors considered whether each feature was worth the new code and other design/test effort, and the backwards-compatibility considerations, and whether there were reasonable work-arounds or fall-backs.

It’s a very engineering approach but that’s what we do at the IETF. We don’t do scientific theories of protocol compliance that must be true for all instances of protocol X. We do engineering.

October 17, 2006

A Skeptic’s View of Identity 2.0

Security By: ams

I signed up to do a talk called “Beyond Passwords” at ApacheCon US 2006, which took place in Austin last week. I had originally intended to talk rather blandly about current standards efforts. But in the end I took a much more contrarian approach and examined the promises of Identity 2.0, how policies and implementation progress are likely to affect the real benefits, and the risks or threats. It is a skeptical guide for a potential relying party — a Web service that is considering relying on some 3rd-party to authenticate and identify its users — on how to evaluate the benefits and the costs.

October 10, 2006

CalDAV to Proposed Standard

IETF By: ams

I am very pleased to announce that an effort I’ve spent nearly three years on is becoming an IETF Proposed Standard. CalDAV will have its own RFC number shortly, and the approval announcement was just last week.


October 3, 2006

[Lisa Dusseault] Introducing myself

CommerceNet, IETF By: ams

As my first The Now Economy post, I thought I’d introduce myself and what I do.

I just joined CommerceNet as a Fellow a couple weeks ago. Just before that I was working at OSAF as a development manager and standards architect. I’d been doing that job for about two years, and simultaneously chairing the IMAPEXT and CALSIFY working groups at the IETF, when I was chosen by the IETF’s Nominations Committee to serve as the Applications Area Director for a two-year period. I’m interested in all the work going on in the Applications Area and I enjoy doing standards work, which has so much leverage (even though it has a distant success horizon of deployed and useful implementations of new standards), so I was very happy to accept this position and have enjoyed it so far.

