The 69th IETF was last week in Chicago (windy and good pizza, who knew?). The two highlights for me were the HTTP Bis BoF and the BIFF BoF. A BoF is a “Birds of a Feather” meeting, used to gauge interest in, and the feasibility of, forming an IETF working group.

Read more

I’ve been told in the past that Open Source and Open Standards are practically the same thing, they go so well together. While they’re both good things, unfortunately, they’re not quite as naturally reinforcing as you’d like. There’s cost and style of participation, IPR concerns, and proliferation of standards (“The good thing about standards is, there’s so many to choose from” — ref).

Read more

A month ago at the last IETF meeting, I talked to a bunch of email standards experts about the current wave of Internet email standards work. In these conversations I also built a mental picture of the previous waves.

Wave 1 was what really made email work over the Internet. The Simple Mail Transfer Protocol and the basic email message format were defined in 1982, with the major innovation of using domain names to find out where to deliver an email. This allowed email from one organization to reach an individual in another company. In 1989 this was updated somewhat by RFC1123, which made email addresses look the way they do today: mailbox@domain.example. Once mail got to the right server, POP (first described in RFC918 in 1984) allowed any mail client to look through the server’s queue of new mail and decide what to do with each message. Although that first POP was described so early, few people used it in those years, and it didn’t get much attention until POP3; instead, one would typically log into the SMTP server itself and read the mailboxes there.
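
To make the POP model concrete, here’s a rough sketch in Python. The protocol details have changed since RFC918 (Python’s standard poplib speaks today’s POP3), and the server name and credentials below are made up, but the model is the same: look through the queue of waiting messages and decide what to do with each one.

    import poplib

    pop = poplib.POP3("mail.example.org")    # placeholder server
    pop.user("mailbox")                      # placeholder credentials
    pop.pass_("secret")

    count, total_size = pop.stat()           # how many messages are waiting
    print(f"{count} messages, {total_size} bytes in the queue")

    for msg_num in range(1, count + 1):
        response, lines, octets = pop.retr(msg_num)   # download one message
        # ...store or display it locally, then optionally:
        pop.dele(msg_num)                             # remove it from the server queue

    pop.quit()   # deletions only take effect after a clean QUIT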

Wave 2 was POP3, IMAP and MIME, 1988 to 1994 or so. POP3 gained far more adoption than POP. IMAP defined a way to access the server-side repository for all one’s mail: not just the queue of new messages, but a hierarchy of mailboxes (called “folders” in many clients) which can be used to store mail for access by several clients. MIME brought Media to electronic mail: the ability to include image file formats, to use HTML instead of plain text, to attach Word documents and executables, and other variations necessary to business and eventually much beloved by spammers. MIME also brought the first support for non-ASCII characters in the body of an email message. MIME turned out to be big for other purposes too, like the Web.
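
Here’s a similarly rough Python sketch of what Wave 2 enabled, using the standard email and imaplib modules. The server name, credentials, mailbox name and file name are all placeholders; the point is just the shape of it: a MIME message with an HTML alternative, non-ASCII text and an attachment, filed into a server-side mailbox over IMAP.

    import imaplib
    from email.message import EmailMessage

    # MIME: plain-text body, HTML alternative, non-ASCII text, attached document
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Quarterly report"
    msg.set_content("Quarterly report attached (naïve plain-text version).")
    msg.add_alternative("<p>Quarterly report attached (<i>naïve</i> HTML version).</p>",
                        subtype="html")
    with open("report.doc", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="msword", filename="report.doc")

    # IMAP: the server-side store is a hierarchy of mailboxes, not just a queue
    imap = imaplib.IMAP4_SSL("imap.example.org")
    imap.login("bob", "secret")
    status, mailboxes = imap.list()                      # enumerate the hierarchy
    imap.append("Reports", None, None, msg.as_bytes())   # file the message server-side
    imap.logout()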

There’s arguably a wave 2.5 or 3, adding security features from 1994 to 1999, including S/MIME, TLS support and authentication features for IMAP and SMTP. SASL was added to SMTP in 1999, although it didn’t get put into IMAP until 2003. This mini-wave didn’t change people’s lives much, except for those whose companies rolled out complicated and hard-to-use S/MIME infrastructures, but the continued deployment of IMAP and MIME over this period did change the email habits of many.
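
For a rough idea of what that mini-wave added to message submission, here’s a Python sketch using the standard smtplib: upgrade the connection with STARTTLS (the RFC2487 approach), then authenticate before sending. The host, port and credentials are, again, placeholders.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Hello over TLS"
    msg.set_content("Sent over an encrypted, authenticated connection.")

    with smtplib.SMTP("smtp.example.org", 587) as smtp:   # placeholder submission server
        smtp.starttls()                  # upgrade to TLS, per the RFC2487 approach
        smtp.login("alice", "secret")    # SMTP AUTH (SASL) before sending
        smtp.send_message(msg)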

Today’s wave is starting to get complicated (oh, just starting? heh). It’s adding internationalization capability, step by painful step (to various IMAP functions, to various mail headers like an email’s Subject line, and most painfully, to email addresses themselves). It’s making IMAP and other mail infrastructure more usable by mobile clients (all the work of the Lemonade WG). It’s addressing security and spam, including new ways to sign messages (DKIM). There is also some refactoring and architectural work going on which may be very interesting in the long run — for example, features to assign URLs and attach metadata to IMAP messages. This kind of work already allows increasing innovation in how email clients can deal with mail (particularly mail overload and spam).

The people I work with today include:

  • Dave Crocker, who edited RFC822 (mail message format) in Wave 1
  • Joyce Reynolds, author of the first experimental version of POP, RFC918 in Wave 1
  • Mark Crispin, author of the first version of IMAP, RFC1064 in Wave 2, and other revisions of IMAP
  • Nathaniel Borenstein and Ned Freed, who did the first three versions of MIME, starting with RFC1341 in 1992
  • Marshall Rose, who updated POP many times (POP3 in RFC1081, RFC1225, RFC1460, RFC1725 and RFC1939) in Wave 2
  • Randy Gellens and Chris Newman, who have contributed significant updates to POP and IMAP in Wave 2
  • Paul Hoffman, who defined SMTP over TLS in RFC2487 in 1999, and who ran the Internet Mail Consortium
  • John Klensin and Pete Resnick, who edited the modern versions of SMTP and the Internet Message format (RFC2821 and RFC2822 respectively).
  • The same people and many more participating in today’s wave, all of whom I greatly enjoy working with.

Of course, although talking to some of these guys helped me put together this picture of email standardization waves, any errors here are mine (and please let me know of errors so I can update this).

I’ve been seeing the word “compliance” tossed around a lot for HTTP and other standards lately, with much ambiguity. Let’s say you read RFC2068 and implemented a client very carefully. Does it make your client implementation “uncompliant” if a new standard updates or obsoletes RFC2068 and adds requirements, as RFC2616 did?

My answer is “that’s not even a meaningful question”. Compliance can be a very loose concept.

  • If your software claims compliance with HTTP in general, that tells us very little, because different versions of HTTP have significant differences.
  • If your software claims HTTP/1.1 compliance, we have a somewhat better idea what that means. A client advertising HTTP/1.1 support in its requests can be assumed to understand the “Cache-Control” response header from the server, because all the specs that ever defined HTTP/1.1 (RFC2068 and RFC2616) define that header. However, we can’t tell if such a client supports the “s-maxage” directive on the Cache-Control header (the maximum age allowed for a shared cache entry) because that was only defined in RFC2616.
  • If your software claims RFC2068 compliance, we don’t know whether it understands “s-maxage”, but we can assume that it supports “max-age”.
  • If your software claims RFC2616 compliance, we can assume that it understands “s-maxage” as well as “max-age”. But support for RFC2616 isn’t advertised over the wire to servers, so we can’t tell such clients apart from clients that only implement RFC2068.

With this knowledge, you can ask whether the new caching features in RFC2616 made existing clients uncompliant with RFC2068. Of course not. RFC2068 didn’t change — there’s a reason the IETF doesn’t replace its standards in-place but defines new RFC numbers. Do the new caching features make the client uncompliant with RFC2616? Well, it never claimed to be compliant with a spec that was probably published after the client was written.

The important question to ask is whether a new feature or requirement is backwards-compatible (and if it’s not, whether the feature is important enough to break backwards-compatibility). Let’s consider the Cache-Control header a little further: a response with “Cache-Control: no-store” can be sent to any client that advertised HTTP/1.1 support, because that directive works the same way in both specs. If the response has “Cache-Control: s-maxage=1600”, then we’re not so sure all HTTP/1.1 clients support it, but that might be OK — only shared caches can possibly do the wrong thing if they don’t implement RFC2616 yet, and the server can limit how stale the cache entries of pre-2616 shared caches get by having a backup plan, e.g. “Cache-Control: s-maxage=1600, max-age=36000”.
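
Here’s a small Python sketch of that fallback logic. The directive values come from the example above; the parsing is deliberately simplified, and the function names are mine rather than anything from an HTTP library.

    def parse_cache_control(header_value):
        """Split a value like 's-maxage=1600, max-age=36000' into a dict."""
        directives = {}
        for part in header_value.split(","):
            name, _, value = part.strip().partition("=")
            directives[name.lower()] = value or None
        return directives

    def shared_cache_lifetime(directives, implements_rfc2616):
        """Freshness lifetime (seconds) a shared cache would use for this response."""
        if implements_rfc2616 and "s-maxage" in directives:
            return int(directives["s-maxage"])    # an RFC2616 cache honors s-maxage
        if "max-age" in directives:
            return int(directives["max-age"])     # a pre-2616 cache ignores the unknown
                                                  # directive and falls back to max-age
        return None                               # no explicit lifetime in the header

    cc = parse_cache_control("s-maxage=1600, max-age=36000")
    print(shared_cache_lifetime(cc, implements_rfc2616=True))    # -> 1600
    print(shared_cache_lifetime(cc, implements_rfc2616=False))   # -> 36000

An RFC2068-only shared cache isn’t broken by the new directive; it just caches the response for longer than the server would prefer.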

This new feature was a reasonable choice in the standardization of RFC2616. If the writers of RFC2616 had been prevented from making any requirements that weren’t already met in deployed clients, they would not have been able to add features like “maximum age limits for shared cache entries”, and that limitation would have unduly restricted their ability to improve HTTP. Instead, the authors considered, for each feature, whether it was worth the new code and other design/test effort, what the backwards-compatibility consequences were, and whether there were reasonable work-arounds or fall-backs.

It’s a very engineering-minded approach, but that’s what we do at the IETF. We don’t do scientific theories of protocol compliance that must be true for all instances of protocol X. We do engineering.

I am very pleased to announce that an effort I’ve spent nearly three years on is becoming an IETF Proposed Standard. The approval announcement went out just last week, and CalDAV will have its own RFC number shortly.

Read more

As my first The Now Economy post, I thought I’d introduce myself and what I do.

I just joined CommerceNet as a Fellow a couple of weeks ago. Just before that I was working at OSAF as a development manager and standards architect. I’d been doing that job for about two years, and simultaneously chairing the IMAPEXT and CALSIFY working groups at the IETF, when I was chosen by the IETF’s Nominations Committee to serve as the Applications Area Director for a two-year term. I’m interested in all the work going on in the Applications Area, and I enjoy doing standards work, which has so much leverage (even though it has a distant success horizon: deployed and useful implementations of new standards), so I was very happy to accept the position and have enjoyed it so far.

Read more