Security Items
June 14, 2013

AMS talks Privacy at PST2013 in Spain

Conferences, Events, Security By: ams

Allan will be attending the UNESCO 11th International Conference on Privacy, Security and Trust (PST2013) in Tarragona, Catalonia, July 10-12. He will be presenting the paper authored with Fellow Jeff Shrager and 2012 Intern Alegria Baquero titled Blend me in: Privacy-Preserving Input Generalization for Personalized Online Services (slides, paper).

September 7, 2008

Usable Security Systems to Make Its Debut at DEMOfall08

Security By: ams

Usable Security Systems, a CommerceNet portfolio company founded by Rachna Dhamija and Allan M. Schiffman, is launching its product, UsableLogin, at DEMOfall08 in San Diego today.

If you’re at DEMO, make sure to tell the Usable team that we’re proud of them!

July 31, 2007

IETF highlights: HTTP Bis and BIFF BoF

Event Driven Architectures, IETF, Security By: ams

The 69th IETF was last week in Chicago (windy and good pizza, who knew?). The two highlights for me were the HTTP Bis BoF and the BIFF BoF. A BoF is a “Birds of a Feather” meeting used to gauge interest and feasibility towards forming an IETF working group.


May 7, 2007

Attacks on Vidoop Authentication

Security By: ams

A new authentication scheme was announced recently at the Web 2.0 Expo: Vidoop (http://www.vidoop.com).

Vidoop describes itself as a web single sign-on solution that is
resistant to “all prevalent forms of hacking”. Specifically, they
claim to resist “phishing, keystroke logging, brute force, and many
man-in-the-middle attacks” and to resist automated attacks by
“requiring human cognition” on the part of the attacker. This
language is misleading. In reality, the scheme only resists simple
phishing attacks — it does not prevent man-in-the-middle attacks, is vulnerable to brute
force attacks, and is resistant to keyboard loggers only when screen loggers are
not present.

We were able to construct a man-in-the-middle (MITM) attack that allows us to capture users’ credentials and to log in to their accounts. We recorded a video that demonstrates a MITM attack in progress at myvidoop.com. Ian Fischer, a Harvard University student and research intern at CommerceNet, created the attack in a few hours by modifying freely available proxy software on the Internet. We describe the attacks in more detail below.

How Vidoop works: Vidoop is essentially a combination of a graphical
password scheme and client-side cookie. During setup, a user must
choose their secret, which is a set of three “image categories” out of
25 categories (e.g., the user might choose cats, dogs, and birds).

To login, the user has to enter their username (or OpenID URI). The
server presents a grid of 12 images from different image categories.
Each picture has a random character superimposed on it, and three of
the images are from the user’s pre-selected categories. The user
derives his one-time PIN by entering the three letters corresponding
to his image categories.
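The login flow described above can be sketched in a few lines. This is a hypothetical reconstruction based only on the description in this post: the category names, data structures, and character choices are my own illustration, not Vidoop's actual implementation.

```python
import random
import string

# Hypothetical pool of 25 image categories (names are invented).
CATEGORIES = ["cats", "dogs", "birds", "cars", "flowers", "boats",
              "planes", "fish", "trees", "houses", "bridges", "mountains",
              "clocks", "chairs", "shoes", "books", "lamps", "trains",
              "hats", "cups", "keys", "doors", "bells", "kites", "maps"]

def present_grid(user_categories, grid_size=12):
    """Server side: build a 12-image grid of (category, random character)
    pairs; three cells come from the user's secret categories."""
    others = [c for c in CATEGORIES if c not in user_categories]
    cells = list(user_categories) + random.sample(others, grid_size - len(user_categories))
    random.shuffle(cells)
    chars = random.sample(string.ascii_lowercase, grid_size)
    return list(zip(cells, chars))

def derive_pin(grid, user_categories):
    """User side: read off the characters superimposed on the three images
    from the secret categories (order-independent, per the post)."""
    return "".join(sorted(ch for cat, ch in grid if cat in user_categories))

secret = {"cats", "dogs", "birds"}
grid = present_grid(secret)
pin = derive_pin(grid, secret)
assert len(pin) == 3
```

Note that the one-time PIN is entirely determined by the grid plus the three secret categories, which is why anyone who can show the user their grid can learn the secret.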

Attacks: We recently conducted a study that analyzed attacks on Bank
of America’s SiteKey scheme [1]. Vidoop bears some similarities and
shares many of the same vulnerabilities. In particular, Vidoop is
vulnerable to a man-in-the-middle attack in which the attacker
simulates the enrollment process. This is a well-known attack on
SiteKey, which was first published in 2005 [2], and has been well
analyzed by Jim Youll [3] and more recently demonstrated in this
video by Indiana University researchers [4].

As with SiteKey, users must have a Flash cookie and/or HTTP cookie on
their machine in order to log in (this cookie acts as a “second factor”
that ties the machine to the user’s account). If this cookie is
erased, or if the user logs in from a new machine, the user needs to
“enroll” the machine. The SiteKey enrollment process requires the
user to answer a challenge question before receiving their cookie.
This opens up a MITM attack, where the phisher lures the user to his
website and presents the enrollment message “You are logging in from a
computer that we don’t recognize”. The phisher proceeds to relay the
challenge question from the bank to the user, and then relays the user’s answer back to the
bank. This allows the phisher to ultimately capture the user’s
SiteKey image and password. Because the user has probably seen
the re-enrollment message several times in legitimate circumstances, he is likely to answer the challenge
question and might not even know he was the victim of a phishing attack.

In Vidoop’s enrollment process, the user has to request an activation
code, instead of answering a challenge question (the activation code
is delivered via email, a phone call or SMS text message). Once the
user enters the activation code, the server will place a cookie on the
machine, and allow the user to log in as usual. This opens up the same
MITM attack described above — now, instead of relaying the challenge
question to the bank, the phisher simply relays the activation code:

1. The phisher directs the user to phishingsite.com, which looks just like
the Bank site, and the user enters his username.

2. The phisher relays the username to the real Bank and is presented
with the message “We don’t recognize your computer. Please select how
you would like to receive your activation code”. The phisher relays
this message to the user.

3. The user selects the method of delivery, and the phisher relays
this choice to the Bank. The user receives the activation code and
enters it into the phishing website.

4. The phisher relays the activation code to the Bank, receives the
cookie, and the user’s authentication grid image.

5. The phisher displays the user’s image grid to the user in order to
obtain his PIN and secret “image categories”. He relays the PIN back
to the bank in order to log in.
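To make the relay mechanics concrete, here is a runnable toy simulation of the five steps. The Bank and Victim classes are stand-ins I invented for illustration; no real Vidoop or bank interface is modeled, only the shape of the relay.

```python
class Bank:
    """Stand-in for the real site; issues an activation code and, once it is
    redeemed, a machine cookie plus the user's image grid."""
    def __init__(self):
        self.activation_code = "123456"
        self.grid = "<user's image grid>"
    def start_login(self, username):
        return "We don't recognize your computer. Choose a delivery method."
    def send_activation_code(self, choice):
        pass                                  # code goes out of band to the user
    def redeem_activation_code(self, code):
        assert code == self.activation_code
        return "machine-cookie", self.grid    # step 4: phisher receives both
    def login(self, pin, cookie):
        return "session"

class Victim:
    """Stand-in for the user, who believes phishingsite.com is the bank."""
    def __init__(self, bank):
        self.bank = bank
    def submit_username(self):
        return "alice"
    def choose_delivery(self, prompt):
        return "sms"
    def submit_activation_code(self):
        return self.bank.activation_code      # user reads the code off the phone
    def enter_pin(self, grid):
        return "abc"                          # user derives the PIN from the relayed grid

def mitm_relay(victim, bank):
    username = victim.submit_username()               # step 1
    prompt = bank.start_login(username)               # step 2
    choice = victim.choose_delivery(prompt)           # step 3
    bank.send_activation_code(choice)
    code = victim.submit_activation_code()
    cookie, grid = bank.redeem_activation_code(code)  # step 4
    pin = victim.enter_pin(grid)                      # step 5
    session = bank.login(pin, cookie)
    return cookie, session                            # reusable cookie + live session

bank = Bank()
cookie, session = mitm_relay(Victim(bank), bank)
assert cookie == "machine-cookie" and session == "session"
```

Every step is a mechanical relay, which is why the out-of-band activation code adds no real cost for the attacker.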

Vidoop’s requirement for out-of-band communication does not increase
the cost of launching an automated MITM attack. In the SiteKey
attack, the MITM phisher obtains the SiteKey image and password and a
secure cookie, which allows him to log in indefinitely. In Vidoop, the
MITM attacker obtains the user’s PIN, which can only be used
immediately to log in to the account once. He also receives the
user’s image categories and a cookie that allows him to log in in the
future. To make use of the cookie, the attacker has to do a little
more work.

Vidoop claims that subsequent logins require a human to determine the
image categories and to look at the image grid to obtain the user’s
PIN. The necessity for a human in the loop increases the cost of an
attack, and most phishers won’t bother to go through the effort. They
don’t need to! The password space is so small that, once you have a
cookie, a brute force attack is trivial. The myvidoop.com PIN is 3
characters chosen from the 26 letters of the alphabet, is order-independent, and is case-insensitive, so the attacker only has to
search 2,600 combinations (26 choose 3). With four login
attempts available, the chances of success are 1 in 650. If the phisher uses automated character-recognition programs, he can reduce the number of combinations to 220
(12 choose 3), or a 1 in 55 chance of success with 4 login attempts. Note that brute force attacks are also easy to mount for anyone who shares the machine with the user.
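The arithmetic behind these estimates, using the figures given in the text, can be checked with a few lines of Python:

```python
from math import comb

attempts = 4

# Cookie in hand but no category information: 3 letters of 26, unordered.
full_space = comb(26, 3)
print(full_space, full_space // attempts)   # 2600 650

# With character recognition, only the 12 characters on the grid matter.
grid_space = comb(12, 3)
print(grid_space, grid_space // attempts)   # 220 55
```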

Vidoop could increase the attacker’s workload by increasing the size of
the PIN (the number of image categories), enlarging the image grid,
increasing the character set (e.g., adding digits and symbols), requiring
order dependence and non-repeatability, or by reducing the number of
login attempts that are allowed. To defeat character recognition, they could eventually employ captcha-style characters. All of these options would significantly reduce the usability of the system.
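For a rough sense of how much each option helps, here are the PIN search spaces under some hypothetical parameter choices (my own examples, not parameters Vidoop has proposed):

```python
from math import comb, perm

print(comb(26, 3))   # current scheme: 3 of 26, unordered -> 2600
print(comb(26, 4))   # longer PIN: 4 of 26 -> 14950
print(comb(36, 3))   # letters plus digits: 3 of 36 -> 7140
print(perm(26, 3))   # order-dependent, no repeats: 26*25*24 -> 15600
```

Even the largest of these spaces is tiny by password standards, which is why reducing allowed attempts matters as much as enlarging the space.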

Vidoop does improve upon SiteKey in its resistance to keyboard logging
attacks. If a keyboard logger obtains the PIN, it is only useful for
one login and only within the timeout period. Vidoop is not
resistant to malware that contains both keyboard loggers and screen
loggers, which are becoming increasingly common [5].

Graphical passwords do have other weaknesses. For example, an
attacker can predict the type of image categories that are chosen,
even with very limited information about the target user [6, 7].
However, targeted attacks are expensive to mount — we’ve only focused
on the attacks that are easy to automate here.

Privacy: There is a gaping privacy hole in their system. Vidoop makes
it easy to search for registered usernames, and they openly publish
these on their website. An attacker can enter usernames and request
that activation codes be sent to them via text message, cell phone or
email, depending on the user’s preferences (this can be very costly
and annoying for both Vidoop and its users). Initially, Vidoop had no
time-out or restriction on the number of messages that could be sent
by an unknown party. It appears that I can now only send 3 messages to
any one person, after which time there is 9 minute timeout before
requests can be sent again. By signing up for Vidoop, users
essentially give anyone the right to send them Vidoop messages,
without requesting their permission and without needing any contact
information.

Usability: The cognitive overhead of selecting the Vidoop PIN is
higher than recognizing the previously seen SiteKey image (the user
must remember their semantic image categories, select images from the appropriate
categories, find the associated characters and input them into a text
box). However, Vidoop eliminates the need to recall a password,
which is still a requirement with SiteKey. Vidoop eliminates the need
to answer a challenge question during enrollment, but requires the
user to check their email or phone and then input the activation code.

Summary: Before publishing our analysis, we communicated with Vidoop’s CTO, Scott Blomquist. He acknowledged that he was aware of these weaknesses and that the scheme is vulnerable to man-in-the-middle attacks. In comparison to simple password authentication, Vidoop does raise the bar for phishers. However, we find their advertising, and in particular their claims that they resist man-in-the-middle attacks and “all prevalent forms of hacking”, to be disingenuous.

[1] The Emperor’s New Security Indicators, Stuart Schechter, Rachna Dhamija, Andy
Ozment, Ian Fischer, to appear in the Proceedings of the IEEE Symposium on
Security and Privacy, May 2007.

[2] The Battle Against Phishing: Dynamic Security Skins, Rachna
Dhamija and J. D. Tygar, in Proceedings of the Symposium on Usable
Privacy and Security (SOUPS), July 2005.

[3] Fraud Vulnerabilities in SiteKey Security at Bank of America, Jim
Youll, July 2006.

[4] Deceit-Augmented Man-in-the-Middle Attack against Bank of America SiteKey
Service, blog post and video, Christopher Soghoian, April 10, 2007.

[5] Anti-phishing Working Group, http://www.apwg.org/

[6] Déjà Vu: A User Study Using Images for Authentication, Rachna Dhamija
and Adrian Perrig, in Proceedings of the 9th USENIX Security
Symposium, August 2000.

[7] On User Choice in Graphical Password Schemes, Darren Davis, Fabian
Monrose, and Michael K. Reiter, in Proceedings of the 13th USENIX
Security Symposium, August 2004.

November 30, 2006

EMail Standards Waves

IETF, Security By: ams

A month ago at the last IETF meeting, I talked to a bunch of email standards experts about the current wave of Internet email standards work. In these conversations I also built a mental picture of the previous waves.

Wave 1 was what really made email work over the Internet. The Simple Mail Transfer Protocol and the basic email message format were defined in 1982, with the major innovation of using domain names to find out where to deliver an email. This allowed email from one organization to reach an individual in another company. In 1989, this was somewhat updated with RFC1123, which made email addresses look the way they do today: mailbox@domain.example. Once mail got to the right server, POP (first described in RFC918 in 1984) allowed any mail client to look through the server’s queue of new mail and decide what to do with each message. Although that first POP was described early on, few used it in those years, and it didn’t get much attention until POP3. Instead one would typically log into the SMTP server and look at its mailboxes there.

Wave 2 was POP3, IMAP and MIME, 1988 to 1994 or so. POP3 gained far more adoption than POP. IMAP defined a way to access the server-side repository for all one’s mail: not just the queue of new messages, but a hierarchy of mailboxes (called “folders” in many clients) which can be used to store mail for access by several clients. MIME brought Media to electronic mail: the ability to include image file formats, to use HTML instead of text, to attach Word documents and executables, and other variations necessary to business and eventually much beloved by spammers. MIME also introduced the very first non-ASCII characters in the body of email. MIME turned out to be big for other purposes too, like the Web.

There’s arguably a wave 2.5 or 3, adding security features from 1994 to 1999, including S/MIME, TLS support and authentication features for IMAP and SMTP. SASL was added to SMTP in 1999, although it didn’t get put into IMAP until 2003. This mini-wave didn’t change people’s lives much except for those whose companies rolled out complicated and hard-to-use S/MIME infrastructures, but the continued deployment of IMAP and MIME over this period did change the email habits of many.

Today’s wave is starting to get complicated (oh, just starting? heh). It’s adding internationalization capability, step by painful step (to various IMAP functions, to various mail headers like an email Subject line, and most painfully, to email addresses themselves). It’s making IMAP and other mail infrastructure more usable by mobile clients (all the work of the Lemonade WG). It’s addressing security and spam with, among other things, new ways to sign messages (DKIM). There is also some refactoring and architectural work going on which may be very interesting in the long run — for example, features to assign URLs and attach metadata to IMAP messages. This kind of work already allows increasing innovation in how email clients can deal with mail (particularly mail overload and spam).

The people I work with today include:

  • Dave Crocker, who edited RFC822 (mail message format) in Wave 1
  • Joyce Reynolds, author of the first experimental version of POP, RFC918 in Wave 1
  • Mark Crispin, author of the first version of IMAP, RFC1064 in Wave 2, and other revisions of IMAP
  • Nathaniel Borenstein and Ned Freed, who did the first three versions of MIME, starting with RFC1341 in 1992
  • Marshall Rose, who updated POP many times (POP3 in RFC1081, RFC1225, RFC1460, RFC1725 and RFC1939) in Wave 2
  • Randy Gellens and Chris Newman, who have contributed significant updates to POP and IMAP in Wave 2
  • Paul Hoffman, who defined SMTP over TLS in RFC2487 in 1999, and who ran the Internet Mail Consortium
  • John Klensin and Pete Resnick, who edited the modern versions of SMTP and the Internet Message format (RFC2821 and RFC2822 respectively).
  • The same and many more participating in today’s wave, all of whom I greatly enjoy working with.

Of course, although talking to some of these guys helped me put together this picture of email standardization waves, any errors here are mine (and please let me know of errors so I can update this).

October 17, 2006

A Skeptic’s View of Identity 2.0

Security By: ams

I signed up to do a talk called “Beyond Passwords” at ApacheCon US 2006, which took place in Austin last week. I had originally intended to talk rather blandly about current standards efforts. But in the end I took a much more contrarian approach and examined the promises of Identity 2.0, how policies and implementation progress are likely to affect the real benefits, and the risks or threats. It is a skeptical guide to a potential relying party — a Web service that is considering relying on some 3rd-party to authenticate and identify its users — on how to evaluate the benefits and the costs.

November 11, 2005

Wireless Carriers Announce Rating System

Security By: ams

… there’s money to be made in adult content, so eventually handsets will become yet another commercial outlet. Rather than adopting a Web-standard content rating platform such as PICS, the wireless industry association (CTIA) has come out with some as-yet-unspecified new technology. So far, it appears to only be a press event about how nice it will be once it’s all working.

See the Washington Post or the New York Times for largely similar content-free announcement stories. From the NYT piece:

The nation’s major cellular phone carriers said yesterday that they had adopted a content rating system for video, music, pictures and games that they sell to cellphone users – a development that could pave the way for them to begin selling pornography and sex-oriented content on mobile devices.

The carriers said the ratings, meant to mimic content classifications for movies and video games, are voluntary.

Initially, the carriers would classify content in two categories: general interest and restricted content deemed appropriate only for people over the age of 18.

The carriers said they had agreed not to begin making restricted content available until they had developed filters and other technological tools that would enable parents to prevent children from getting access to inappropriate material.

The carriers, including Cingular Wireless and Verizon Wireless, the largest and second-largest mobile companies, said they were developing filtering technology and that it should be available soon.

July 8, 2005

More from SOUPS

Security By: ams

Excellent paper on phishing from Dhamija and Tygar of UCB, The Battle Against Phishing: Dynamic Security Skins. Doug Tygar, you may know, was co-author of the security+HCI paper Why Johnny Can’t Encrypt. They describe the problem of phishing, make a systematic analysis of the technical challenges, survey current phishing countermeasures, and describe countermeasures of their own.


July 7, 2005

Symposium on Usable Privacy and Security

Security By: ams

Blogging from SOUPS 2005 at CMU.

Ches just gave the keynote talk titled My Dad’s Computer, Microsoft, and the Future of Internet Security, which like all good talks, has been evolving for some time. Money quotes:

  • “Dad, your computer is blowing blue smoke all over the Internet!”
  • “These virus-building tools have GUIs, *nice* GUIs.”
  • On 0wn3rs: “They try not to be too disruptive. They’ve got uses for your computer. It’s called time-sharing. They install patches for you to keep (other) attackers out, they work very hard to get bugs out of their software.”
  • “You have to get out of the game. Or, as the Karate Kid’s Mr. Miyagi says: ‘Best block is not to be there.’”

Ches quoted spot prices for botnet cycles — 3 cents per week on the low end for spam forwarding, $40 each for machines on targeted networks. Also interesting, the Phatbot command list.

Ping and others are blogging the conference at Usable Security.

February 15, 2005

SHA-1 found vulnerable to collisions

Security By: ams

Bruce Schneier reports that SHA-1 is broken, citing a preprint paper from the same Chinese research group that broke MD5 and SHA-0 last year, as noted in our blog post at the time. Watch my delicious linklog for more details as they roll in over the next few days.

Like the earlier attacks, this is a collision attack, not a preimage attack, so it isn’t likely to actually break very many systems. But it’s a big warning sign that we should switch to new algorithms.

This is also definitive evidence that our government’s policies discouraging domestic cryptographic research have backfired, since now some Chinese university researchers are ahead of our own NSA. (*)

(*) Footnote: I believe this to be true because if this were an attack the NSA were aware of, they’d have released a SHA-2, the same way they replaced SHA with SHA-1.

(Change notes: previous version of this post said “they’d be working towards” rather than “they’d have released”, which is clearly an absurd thing to say. Also, it said “would be aware” rather than “were aware”, and “it aided the creation of a SHA-1” rather than “they replaced SHA with SHA-1”; these changes were made for clarity.)