Nice analysis by Joyce Park on Web Services Commons:

Flickr is a company that has gotten into open web services in exactly the right way. It costs them a lot of money to host all that bandwidth, but feeds of photos are a cool new feature and a value-add to their users — and ultimately it all loops back to their core business model of getting people to host photos on Flickr. Flickr’s API also gives users some confidence that the company isn’t going to try to screw them by making it hard to get their content back out if they want to take it and go elsewhere. It’s a win-win-win situation for everyone, and another example of Flickr’s leadership by clarity.

On the other side of the spectrum are companies that treat RSS feeds and public web services just as free content, without adding any new transformative value or giving anything back to the community. There’s nothing necessarily wrong with this if the license allows it, but it’s not interestingly different from screenscrapers who present your content as theirs — who have long been considered sleazy parasites by most of the legitimate web. The whole idea of opening content via web services is that growth for all can be enhanced by sharing — and in the long run people don’t want to share with those who are openly contemptuous of the whole idea.

Opening content via web services used to be the Holy Grail, but slowly the world is coming to reward the owners of services that do just that, as the Flickr example illustrates.

ACM News Service

“W3C Workshop on Constraints and Capabilities to Explore Next Web Services Layer”
XMLMania.com (10/12/04)

World Wide Web Consortium (W3C) members are working on a Web services constraints and capabilities framework that will allow organizations to communicate the terms of their service. Requirements for using HTTP or the ability to support GZIP compression, for example, need to be communicated in a standard manner using the Web Services Description Language (WSDL 2.0) specification, SOAP, or HTTP. W3C director Tim Berners-Lee said more standards were needed to support automated Web services. Participants at a two-day W3C workshop on Web services constraints and capabilities were required to write a position paper stating how they would preferably communicate constraints and capabilities with regard to reliable messaging protocol requirements, encryption using WS-Security or other security mechanisms, and an attached P3P privacy policy. Besides discussing how best to implement such constraints and capabilities requests and what vocabularies to use, the workshop participants also discussed their framework’s impact on and relation to other W3C protocols and Web technologies. Upcoming W3C recommendations include WSDL 2.0, which is currently in the “last call” stage and will be extended by the constraints and capabilities framework. SOAP 1.2 is also nearing completion, having advanced to candidate recommendation status in August 2004.
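The GZIP case is the easiest one to picture. At the wire level, HTTP already negotiates that capability per request with headers; what the workshop is after is a way for a service to declare such requirements up front in its description. A minimal Python sketch of the existing per-request negotiation (the endpoint URL is made up for illustration):

    import gzip
    import urllib.request

    # The client advertises the capability it can handle...
    req = urllib.request.Request(
        "http://example.org/service",
        headers={"Accept-Encoding": "gzip"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        # ...and the server's response headers say whether it was actually used.
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)

The constraints-and-capabilities framework would move that kind of agreement out of individual requests and into the WSDL 2.0 description, where a toolkit could discover it before any message is sent.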

One immediately wonders if NAP is well-designed for RESTful applications… :-)

Looks like Bob Pasker has resurfaced happily after Kenamea:

“Although J2EE has become the de facto standard for business application development, companies still face issues with reliability, predictability, and compute power…Azul has created a groundbreaking new class of compute infrastructure. Network attached processing seamlessly delivers on the dream of mainframe-class capabilities at new world economics.” — Bob Pasker
Co-Founder; Chief Architect – WebLogic; Java Luminary

Azul Systems: Products & Solutions

The key to network attached processing is Azul virtual machine proxy technology. This patent-pending technology, initially targeted at Java and J2EE platform-based applications, transparently redirects application workload to the compute pool. No changes are required to applications, or the existing infrastructure configuration. The Azul technology works with J2EE platform products including BEA WebLogic and IBM WebSphere application servers. Compute pool appliances are simply connected to the network and Azul software is installed on the application hosts. Suddenly every application has access to a virtually unlimited set of compute resources.

Each compute pool consists of two or more redundant compute appliances—devices designed solely to run massive amounts of virtual machine-based workloads. Each appliance has up to 384 coherent processor cores and 256 gigabytes of memory packed in a purpose-built design that delivers the benefits of symmetric multiprocessing with tremendous economic benefits. The massive SMP capacity of these appliances enables applications to dynamically scale, responding to varying workload and spikes without the pain of having to reconfigure or provision application tier servers. The targeted design provides small unit size, high rack density, low environmental costs, and simple administration.

Mark Nottingham wrote:

From the standpoint of interface semantics, the difference here is really just one between saying “POST machineMgmtFormat” and “MANAGEMACHINE.” In the uniform approach, the service-specific semantics are pushed down into the type (media type) and content (entity body) of the data. In the specific approach, they’re surfaced in the interface itself.

This isn’t a big difference… While there isn’t much technical difference between these approaches, there’s a big gap in how people can use them. In HTTP, creating new methods is expensive, while creating new URIs is very, very cheap. OTOH, Web services makes creating new methods cheap, while making the creation of new URIs expensive. This is not because either approach is technically limited; it’s due to the design of both the specifications — like WSDL — and the toolkits that people use.
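To make Nottingham’s contrast concrete, here is a minimal Python sketch of the two requests he describes; the host, resource path, media type, and payload are all invented for illustration:

    import http.client

    body = '<shutdown machine="web-42"/>'
    conn = http.client.HTTPConnection("example.org")

    # Uniform approach: the generic POST verb, with the service-specific
    # semantics pushed down into the media type and the entity body.
    conn.request(
        "POST",
        "/machines/web-42",
        body=body,
        headers={"Content-Type": "application/x-machine-mgmt+xml"},
    )
    conn.getresponse().read()

    # Specific approach: the semantics surface in the interface itself,
    # here as a new method name (roughly what a WSDL-generated stub would
    # expose as a manageMachine() call).
    conn.request("MANAGEMACHINE", "/machines/web-42", body=body)
    conn.getresponse().read()

As he notes, the wire-level difference is tiny; the practical difference is that the first style spends its creativity on new URIs and media types, which HTTP makes cheap, while the second mints new methods, which HTTP makes expensive.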

Bill de hÓra adds:

While HTTP does allow for addition, practically speaking, the verb set is fixed. It has taken years for WebDAV additions to HTTP to penetrate more than a fraction of the Web. Other efforts, such as HTTPR, an extension for reliable messaging, have gone nowhere. Even within the mandated verb set of HTTP itself, we find the availability of verbs varies widely (notably PUT and DELETE), with entire ecosystems (such as mobile device clients) having only a subset. One can argue that the active verb set of HTTP comprises a subset of 3 verbs – HEAD, GET, POST – anything else is dead tongue.

The problem with HTTP POST, and what makes it special, is that it is a semantic catchall. What makes POST a uniform speech act is ironically the absence of interesting semantics and lack of specificity. Although it has specifications that are helpful to people when dealing with caches and state management, there’s no controlled means of defining what one is actually saying with it, without some further and prior agreement between client and server. The reality is that POST has been overloaded and abused to get systems talking even where such systems would have done better with an alternate verb – and the result is that in many systems the POST speech act is close to meaningless. WS-Transfer aims to throw some light into this void by providing a means to add consistent meaning to operations that would often be drilled through POST. In particular this may prove valuable for use with web services toolkits which are often designed to hide the networking aspect of communications from the developer.
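De hÓra’s point about overloading is easy to see in miniature. A hedged sketch, with an invented resource and form parameter, of the same deletion expressed twice: once drilled through POST, where the meaning exists only by prior agreement between the two ends, and once as a plain DELETE, whose meaning every HTTP client, cache, and intermediary already shares:

    import http.client

    conn = http.client.HTTPConnection("example.org")

    # Overloaded POST: the actual operation is buried in the payload, so it
    # says nothing to anyone who wasn't party to the out-of-band agreement.
    conn.request(
        "POST",
        "/orders/1234",
        body="action=delete",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    conn.getresponse().read()

    # Uniform verb: the operation *is* the method.
    conn.request("DELETE", "/orders/1234")
    conn.getresponse().read()

WS-Transfer, as described above, is an attempt to recover that kind of consistent meaning for operations that would otherwise all be drilled through POST.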

Systems are being built, week in, week out, that cross the Web/Middleware boundary without being informed by both approaches and where they are appropriate. This implies projects with excess risks and costs, wasted effort, and re-learning of best practices or what is already in the state of the art. This is all the more important now that systems that incorporate web and middleware aspects are increasingly the norm (the size of the industry sector affected is significant).

At CommerceNet Labs we have a team of people who jump back and forth between the Web and middleware worlds regularly, and none of us find ourselves enthusiastic about WS-Transfer. I’m trying to figure out why that is. Perhaps it’s Mark Nottingham’s “Web services has no architecture” observation.

Perhaps it’s the potential for verb proliferation getting out of hand. Perhaps it’s just general WS-* malaise. Or perhaps it’s the feeling that this protocol feels more complicated than HTTP for most applications that come to mind. Services shouldn’t have to be complicated, right?

Web Service Grids: an Evolutionary Approach defines “a web service specification profile WS-I+ that builds upon the recognised WS-I Basic & Security profiles with the additional specifications: WS-Addressing, WS-ReliableMessaging and the Business Process Execution Language (BPEL).”

It might be the right approach for that community, but it’s still astoundingly complex.

Phil Windley writes:

Dave Sifry gives some details about the Technorati outage this past weekend. Seems an electrical fire in the data center their co-lo is at was the culprit. Running a 24/7 Web application reliably isn’t easy and it isn’t cheap. It took us several years of problems and study to hit on a solution at iMALL. We finally did figure it out and that was a real lightening of my load. One of the answers is the product engineer: an engineer on the operations side whose job it is to make the product (not just the server) work. Properly incented, a product engineer will drive all of the emergency and contingency planning, along with ensuring that engineering delivers a system that can be reliably operated.

This just serves as a reminder that it’s still hard (and costly) to run a web service reliably.

Slashdot writes:

Michael S. Mimoso writes “A Yankee Group survey of 473 enterprise decision makers reveals that companies have put aside money for service-oriented architectures for 2005.” This is a bigger deal than it sounds – if companies keep moving this way, it will mean a sea change in corporate technology usage – and change the way/why development is done. We’re talking everything from SOAP stuff to wholesale ASP adoption like Salesforce.com.

Yankee’s survey results suggest that the biggest investments in SOA will come from the wireless telecom and manufacturing markets (78%), financial services (77%), and health care (71%).

Via Mike Dierken we found this wonderful tidbit from Adam Bosworth stating:

I have a posted comment about just using XML over HTTP. Yes. I’m trying, right now, to figure out if there is any real justification for the WS-* standards and even SOAP in the face of the complexity when XML over HTTP works so well. Reliable messaging would be such a justification, but it isn’t there. Eventing might be such a justification, but it isn’t there either, and both specs are tied up in others in a sort of spec spaghetti. So, I’m kind of a skeptic of the value apart from the toolkits. They do deliver some value (get a WSDL, instant code to talk to service), but what I’m really thinking about is whether there can’t be a much simpler, kinder way to do this.

Amen. Stick a fork in WS-*, because as Simon says, Web Services are receding:

Web Services are on their way to a CORBA-like market: sort of interoperable, vendor-ridden, and critically important to a small number of people. If that’s the case, then maybe the rest of us can return to vanilla XML HTTP, sometimes known as REST.
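For anyone who has forgotten how little “vanilla XML HTTP” actually involves, a minimal sketch; the feed URL and element names are stand-ins for any XML-over-HTTP endpoint:

    import urllib.request
    import xml.etree.ElementTree as ET

    # One GET, one parse: no WSDL, no SOAP envelope, no toolkit.
    with urllib.request.urlopen("http://example.org/photos.xml") as resp:
        doc = ET.parse(resp)

    for photo in doc.iter("photo"):
        print(photo.get("title"))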

Ah, how we all pine for a simpler time, before WS-* made everything feel so much more complicated than Web applications should feel. Since I don’t have anything more constructive to say, we’ll pile on with some beautiful words from Sean McGrath:

The whole WS standards thing has more moving parts than a 747. Much of it recently invented, untested and unproven in the real world.

Given that there are no exceptions to Gall’s Law:

    A complex system that works is invariably found to have evolved from a simple system that worked.

I believe WS-YouMustBeJoking is doomed to collapse under its own weight. Good riddance to it.

Why has this situation come about? Because smart people had neural spasms? No. Because smart people realise that this stuff is *real* important and commercial agendas are at work all over the map.

The most important document to read if you want to understand the WS-IfThisIsProgressImAMonkeysUncle cacophony is How to wage and win a standards war by Carl Shapiro and Hal Varian.

That felt so satisfying to read, I’m going out back for a cigarette… ;)

For more wonderful backlash, see also Tim Bray’s The Loyal WS-Opposition:

No matter how hard I try, I still think the WS-* stack is bloated, opaque, and insanely complex. I think it’s going to be hard to understand, hard to implement, hard to interoperate, and hard to secure.

I look at Google and Amazon and eBay and Salesforce and see them doing tens of millions of transactions a day involving pumping XML back and forth over HTTP, and I can’t help noticing that they don’t seem to need much WS-apparatus.

I’m deeply suspicious of “standards” built by committees in advance of industry experience, and I’m deeply suspicious of Microsoft and IBM, and I’m deeply suspicious of multiple layers of abstraction that try to get between me and the messages full of angle-bracketed text that I push around to get work done.

This led to WS-PageCount, with a followup and a reference to WS-Halloween.

For more backlash, see also Mike Gunderloy’s WS-JustSayNo (which quoth, “One of the powerful concepts in Extreme Programming is YAGNI, which stands for You Aren’t Gonna Need It. The idea is simple: implement things when you need them, not when you think you might need them in the future. As far as I’m concerned, this applies to most Web services experiments today.”).

For a more even-tempered approach, see Phil Wainewright’s WS-LooseCoupling.

And for those optimistic folks who still believe in the Web Services stack and/or want to know how all the pieces fit together and lead to Nirvana, see Microsoft’s just-released An Introduction to the Web Services Architecture.

David Longworth wrote an excellent piece, Grid Lock-in On Route To SOA, declaring that “vendor strategies to promote grid computing as the IT backbone for service oriented architectures are missing a vital element: standards.” Among his findings:

Immature and incomplete standards for sharing grid computing resources could leave enterprises locked into vendors’ proprietary technology stacks:

  • IBM, CA, HP, Sun, Microsoft and Oracle each have grid strategies
  • All aim to dynamically offer IT capacity to meet business needs
  • But vendors’ proprietary grid environments aren’t interoperable
  • Standards for resource sharing and management are emerging
  • For now, implementing grid means accepting vendor lock-in

Concludes Longworth: “Research group IDC has estimated that the market for grid computing will grow to $12 billion across both technical markets and commercial enterprise… But until standards like WSRF and DCML gain substance and momentum, the reality today is that the majority of commercial grid initiatives will be tied to individual vendors’ proprietary grid environments, without the ability to share or manage resources across separate grid architectures.”