I’ve been seeing the word “compliance” tossed around a lot for HTTP and other standards lately, and often ambiguously. Let’s say you read RFC2068 and implemented a client very carefully. Does your client implementation become “uncompliant” if a new standard updates or obsoletes RFC2068 and adds requirements, as RFC2616 did?
My answer is “that’s not even a meaningful question”. Compliance can be a very loose concept.
- If your software claims compliance with HTTP, that claim can mean many different things, because different versions of HTTP differ significantly.
- If your software claims HTTP/1.1 compliance, we have a somewhat better idea what that means. A client advertising HTTP/1.1 support in its requests can be assumed to understand the “Cache-Control” response header from the server, because all the specs that ever defined HTTP/1.1 (RFC2068 and RFC2616) define that header. However, we can’t tell if such a client supports the “s-maxage” directive on the Cache-Control header (the maximum age allowed for a shared cache entry) because that was only defined in RFC2616.
- If your software claims RFC2068 compliance, we don’t know whether it understands “s-maxage”, but we can assume that it supports “max-age”.
- If your software claims RFC2616 compliance, we can assume that it understands “s-maxage” as well as “max-age”. But support for RFC2616 isn’t advertised over the wire to servers, so we can’t tell such clients apart from clients that only implement RFC2068 (see the sketch after this list).
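To make the wire-level point concrete, here’s a minimal Python sketch of what a server can and can’t infer from a request line. It assumes nothing beyond what the bullets above say; the helper name advertised_version is mine, not from any spec or library.

```python
# The request line carries only the protocol version, never the RFC the
# client was built against. advertised_version is an illustrative helper.

def advertised_version(request_line):
    """Extract the HTTP version token from a request line."""
    # e.g. "GET /index.html HTTP/1.1" -> "HTTP/1.1"
    return request_line.rsplit(" ", 1)[-1]

version = advertised_version("GET /index.html HTTP/1.1")
assert version == "HTTP/1.1"

# From "HTTP/1.1" alone, a server may assume Cache-Control is understood
# (both RFC2068 and RFC2616 define it), but it cannot tell whether the
# client knows "s-maxage", which only RFC2616 defines.
```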
With this knowledge, you can ask whether the new caching features in RFC2616 made existing clients uncompliant with RFC2068. Of course not. RFC2068 didn’t change; there’s a reason the IETF doesn’t replace its standards in place but assigns new RFC numbers. Do the new caching features make the client uncompliant with RFC2616? Well, it never claimed compliance with a spec that was probably published after the client was written.
The important question to ask is whether a new feature or requirement is backwards-compatible (and if it’s not, whether the feature is important enough to break backwards-compatibility). Let’s consider the Cache-Control header a little further: a response with “Cache-Control: no-store” can be sent to any client that advertised HTTP/1.1 support, because that directive works the same way in both specs. If the response has “Cache-Control: s-maxage=1600”, we’re less sure that all HTTP/1.1 clients support it, but that might be OK: only shared caches can possibly do the wrong thing if they don’t implement RFC2616 yet, and the server can bound how stale a pre-2616 shared cache’s entries get by having a backup plan, e.g. “Cache-Control: s-maxage=1600, max-age=36000” (sketched below).
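Here’s a hedged Python sketch of that fall-back, assuming the Cache-Control directives have already been parsed into a dict. The function freshness_lifetime and its flags are illustrative names, not a real API; the selection rule (s-maxage overrides max-age for shared caches) is RFC2616’s.

```python
# An RFC2616 shared cache prefers "s-maxage"; a pre-2616 cache has never
# heard of "s-maxage", so it ignores the unrecognized directive and falls
# back to "max-age". freshness_lifetime is an illustrative name.

def freshness_lifetime(directives, shared_cache, knows_rfc2616):
    """Return the freshness lifetime in seconds, or None if none applies."""
    if shared_cache and knows_rfc2616 and "s-maxage" in directives:
        return int(directives["s-maxage"])   # RFC2616 shared-cache rule
    if "max-age" in directives:
        return int(directives["max-age"])    # understood since RFC2068
    return None

d = {"s-maxage": "1600", "max-age": "36000"}
assert freshness_lifetime(d, shared_cache=True, knows_rfc2616=True) == 1600
# The backup plan: a pre-2616 shared cache serves the entry for at most
# 36000 seconds, which bounds how out-of-date its copy can get.
assert freshness_lifetime(d, shared_cache=True, knows_rfc2616=False) == 36000
```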
This new feature was a reasonable choice in the standardization of RFC2616. If the writers of RFC2616 had been prevented from making any requirements that weren’t already met by deployed clients, they would not have been able to add features like “maximum age limits for shared cache entries”. That limitation would have unduly restricted their ability to improve the protocol. Instead, the authors weighed, for each feature, whether it was worth the new code and other design/test effort, what the backwards-compatibility implications were, and whether there were reasonable work-arounds or fall-backs.
It’s a very engineering-minded approach, but that’s what we do at the IETF. We don’t do scientific theories of protocol compliance that must hold for all instances of protocol X. We do engineering.