Fascinating gambit; I usually associate it with more mature markets. Perhaps this is a sign that SOAP skills are going to be more profitable for enterprise developers to add to their portfolios soon…

It’s also an interesting judging panel — several of these folks have definitely been around for several hype cycles, so it says something that they’re on board with this bet on WS’ maturity.

Grand Central Communications Unveils New Developer Program

“The Golden Spike” is Grand Central’s first annual contest for developers. The contest starts October 18th, in conjunction with the Early Access Program, and continues to December 10, 2004, with the winners to be announced in January 2005.

The Golden Spike contest provides developers with an opportunity to showcase their innovative work around reusable business processes and Web services development. Using industry-leading tools and resources provided by Grand Central, participants can submit one or more entries in the following categories:

* Best Business Process
* Best Use of SOAP APIs
* Best Use of Rich Client

The grand prize winner will receive a dream workstation of his or her own creation worth up to $10,000, and there will be three first-place winners, one for each category, each taking home a $1,000 prize. Contest entries will be judged by a panel of Web services and SOA experts including Tim O’Reilly, founder and chief executive officer of O’Reilly Media; Jason Bloomberg and Ron Schmelzer, senior analysts with industry research firm ZapThink; Bill Appleton, founder, president and chief scientist of DreamFactory; Phil Windley, contributing editor for InfoWorld Test Center; Tony Hong, co-founder of XMethods; Phil Wainewright, chief executive officer, Procullux Ventures, and publisher of Loosely Coupled; and Halsey Minor, chief executive officer, chairman and founder of Grand Central Communications.

Courtesy of Mike Dierken we found Bill de hÓra’s “WWW cubed: syndication and scale”, in which he writes:

The most advanced thinking that doesn’t involve throwing out the Web is probably Rohit Khare’s PhD thesis, which suggests an ‘eventing’, or push style, extension to the Web model. An early example of this approach, where the server calls back to the connected client instead of the client initiating each time, called mod_pubsub, is available as open source. One of HTTP’s designers, Roy Fielding, is rumoured to be working on a new protocol that could feature support for easing the load on servers.

The question of responsibility – especially in the event of operational issues arising – becomes complex. With a pull delivery model, on the other hand, organisational boundaries are crisp and clear. This may not matter for consumer applications, but a surprising number of important business systems and services are now based on HTTP data transfers. And many people believe that syndication technology like RSS and Atom will also be used for commercially consequential exchanges in the b2b, or “business to business,” arena. Switching from a polling to a pushing mode also confers a switching of responsibilities, and this might in time have far-reaching consequences where cost-efficiency is traded for legal and financial risks. One day, your online bank might be morally and technically culpable for getting your bank statements to your computer. In that case, expect to sign even more of your rights away in the fine print.

The Now Economy will be guided by all kinds of Service Level Agreements. Empowerment comes when people understand the benefits, and more importantly the limitations and risks, inherent in moving to a world where information travels to where it needs to go instantly and across trust boundaries.

Jeremy Zawodny sees a tipping point for feeds coming soon:

Real-time pings mean that we don’t have to wait for a full polling or crawling cycle before getting the latest content… Once this feed stuff hits the tipping point (I think we’re close), things will get really, really interesting. Suddenly these feed sources will be the thing people care about. The model of “search and find” or “browse and read” will turn into “search, find, and subscribe” for a growing segment of Internet users and it will really change how they deal with information on the web. What’s that gonna be like? Will the “web search” folks be ready? What about the browser folks?

The ability for a person or program to subscribe (and then get told when things happen, and take action as needed) is the foundation of The Now Economy. RSS and Atom can provide semi-structured data on which to take action — for example, for use in catablog-style commerce interactions.
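To make the polling-versus-ping difference concrete, here’s a toy Python sketch; the feed URL and function names are ours, invented purely for illustration:

```python
# Toy sketch of ping vs. poll; the feed URL and function names are
# hypothetical, invented for illustration.
import time

FEEDS = {"http://example.org/feed.xml": 0.0}  # feed URL -> last-fetch time

def fetch(url):
    print(f"{time.strftime('%X')} fetched {url}")
    FEEDS[url] = time.time()

def polling_cycle(interval_s=3600):
    """Pull model: fresh content waits up to a full crawl interval."""
    for url, last in FEEDS.items():
        if time.time() - last >= interval_s:
            fetch(url)

def on_ping(url):
    """Push model: the publisher pings us, and we fetch immediately."""
    if url in FEEDS:
        fetch(url)

on_ping("http://example.org/feed.xml")  # no waiting for the next crawl
```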

Through John Battelle we discovered Rick Skrenta’s post on the subject of information feeds. Says Skrenta, “The proliferation of incremental content sources, all pumping out new material on a regular basis, is what the mainstream Internet user will consume.”

This in turn reminds us of Phil Windley’s recent observation that subscription-based information routing (such as that of mod_pubsub and KnowNow) allows applications to receive such streams of semi-structured information and then do something with them (such as filtering, aggregating, displaying, further routing, or taking action based on rules).
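Here is a minimal Python sketch of what such a subscription-based router could look like. To be clear, none of these names come from mod_pubsub or KnowNow; this is just one plausible shape for filter-and-forward routing:

```python
# Minimal sketch of subscription-based information routing in the spirit
# of mod_pubsub/KnowNow; every name here is hypothetical, not taken from
# either project.
from collections import defaultdict
from typing import Callable

Event = dict  # a semi-structured item, e.g. a parsed RSS/Atom entry
Handler = Callable[[Event], None]

class Router:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Handler) -> None:
        """Register a handler to be told when events arrive on a topic."""
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: Event) -> None:
        """Push an event to every subscriber; no polling cycle involved."""
        for handler in self._subs[topic]:
            handler(event)

router = Router()

def alert_on_big_moves(event):
    # Filtering: only act on items matching a rule ...
    if event.get("delta", 0) > 5:
        print("alert:", event["title"])
        # ... and further routing: re-publish onto a narrower topic.
        router.publish("feeds/nyse/big-movers", event)

router.subscribe("feeds/nyse", alert_on_big_moves)
router.publish("feeds/nyse", {"title": "IBM up 6", "delta": 6})
```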

Such programming models will be ripe for exploration in the coming years as the applications of The Now Economy are discovered, developed, and deployed.

There’s a lot of great work going on with the continuing evolution of Interplanetary IP into the IETF’s Delay-Tolerant Networking group, and now into a DARPA BAA on Disruption-Tolerant Networking (already closed on 9 July, unfortunately).

Proposers Day Announcement – Disruption Tolerant Networking (DTN)

The Disruption Tolerant Networking (DTN) Program will develop and field technology that will provide network services when no end-to-end path exists through the network. The primary goal is to provide disruption tolerance by organizing information flow into bundles. These bundles will be routed through an “intelligent” DTN network that can manage the delivery of the bundles. This method will allow messages to pass through the network with successive responsibilities, rather than the traditional end-to-end acknowledgement scheme. DTN will result in opportunistically leveraged connectivity and the use of multiple routes, while relieving the delivery node of final acknowledgement.

The second goal of DTN is to provide dynamic network naming and routing. This method late-binds bundles (or packets) to specific nodes or delivery paths, which avoids forcing all parts of the network to be aware of all other parts of the network. The result will be a network that matches tactical units’ deployment needs for mobility and stealth.

The Proposers’ Day Workshop will be held on 21 January 2004 at George Mason University.
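Reading between the lines, the “successive responsibilities” idea is custody transfer: each hop takes responsibility for a bundle and acknowledges only to the previous custodian. A conjectural Python sketch, with all names invented rather than taken from the DTN specs:

```python
# Conjectural sketch of DTN-style custody transfer: each hop takes
# responsibility for a bundle and acknowledges only to the previous
# custodian, so no end-to-end path (or end-to-end ACK) is required.
# All names are invented, not taken from the DTN specifications.
from dataclasses import dataclass, field

@dataclass
class Bundle:
    destination: str                # a late-bound name, not a fixed address
    payload: bytes
    custodians: list = field(default_factory=list)  # audit trail of hops

class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}   # neighbor name -> Node; changes as links come and go
        self.stored = []  # bundles held while no onward link exists

    def accept_custody(self, bundle):
        """Returning True is the hop-by-hop acknowledgement to the sender."""
        bundle.custodians.append(self.name)
        if bundle.destination == self.name:
            print(f"{self.name}: delivered via {bundle.custodians}")
        elif self.links:
            # Late binding: choose the next hop from whatever links exist now.
            next_hop = next(iter(self.links.values()))
            next_hop.accept_custody(bundle)
        else:
            self.stored.append(bundle)  # store, and forward when a link appears
        return True

node_a, node_b, node_c = Node("A"), Node("B"), Node("C")
node_a.links["B"] = node_b
node_b.links["C"] = node_c
node_a.accept_custody(Bundle(destination="C", payload=b"situation report"))
```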

The 14th International World Wide Web Conference (WWW2005), May ’05; paper deadline 11/28

SIGSOFT 2004 / FSE-12 Workshop on Interdisciplinary Software Engineering Research (WISER)

SIGSOFT 2004/FSE-12 Home Page Oct 31 – Nov 5

2004 Workshop on Self-Managing Systems (WOSS04) Home Page

An increasingly important requirement for software-based systems is the ability to adapt themselves at run time to handle such things as resource variability, changing user needs, and system faults. In the past, systems that supported self-management were rare, confined mostly to domains like telecommunications switches or deep space control software, where taking a system down for upgrades was not an option, and where human intervention was not always feasible. However, today more and more systems have this requirement, including e-commerce systems and mobile embedded systems.

Latency matters — and embedded within this little trick forced by Reg NMS is an agency conflict to boot: who’s really motivated to give me, the little guy, the best possible price?

NYSE To Further Upgrade Automated Execution System

The New York Stock Exchange next week will unveil a plan for further upgrading its automated execution system, Direct Plus, to enable more trades to be processed electronically. Earlier this year, the NYSE upgraded the system to handle orders of any size and to let investors use the system as often and as rapidly as they like; previously, Direct Plus was restricted to order sizes of less than 1,100 shares and investors had to wait 30 seconds before entering another order.

Thain proposes new electronic trading for NYSE

The NYSE plan is an effort to meet an SEC definition of “fast market” under trading rules proposed in February. That SEC definition would allow the NYSE to keep a large portion of its order flow. But the move to electronic trading will move business away from the floor. Friday was the deadline for comments on the watchdog agency’s Regulation NMS, or national market system.

Today’s Trade-Through Rule Must Die

The trade-through rule, which was first instituted in 1975, was designed to make sure investors got the best available price for their stock trade. A market system would not allow one customer to “trade through” an existing order without first matching that order. A customer’s order has to be routed to the destination with the best price at the moment the order is entered.

That sounds like a good idea on the surface, but the rule was enacted before electronic markets existed. Though it’s moving in the direction of automation, the NYSE is still at heart a manual system, with trades handled by specialists in particular stocks.

Nasdaq, however, is fully automated, so while a quote on a Nasdaq stock is currently executable, a quote on an NYSE stock is considered an indication and not a firm quote.

The trade-through rule as it stands means that if you place an order and the best possible quote is with a particular specialist on the floor of the NYSE, then your broker is required to route your order there. But an NYSE quote is not immediately executable; it’s more analogous to an advertised price than an actual price.

Specialists are allowed to hold an order for 30 seconds before either executing it or handing it off to another specialist—and during that time, the price may change.
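A toy simulation makes the complaint concrete. The venues, prices, and delays below are invented, but they show how routing to the best advertised quote can still fill worse than a firm electronic quote:

```python
# Toy simulation of the trade-through problem: the rule forces the order
# to the venue advertising the best price, even when that quote is slow
# and can move during the specialist's hold. All numbers are invented.
import random

venues = [
    # (name, advertised bid for a sell order, seconds the venue may hold it)
    ("Nasdaq-ECN", 100.00, 0),   # firm, immediately executable quote
    ("NYSE-floor", 100.02, 30),  # better *advertised* price, 30-second hold
]

def route_sell_order(venues):
    # Trade-through rule: route to the best advertised price, full stop.
    name, bid, hold = max(venues, key=lambda v: v[1])
    if hold:
        # During the hold, the quote is only an indication, and it can drift.
        bid += random.uniform(-0.05, 0.0)
    print(f"routed to {name}, executed at {bid:.2f}")
    return bid

random.seed(1)
route_sell_order(venues)  # the "best" venue can fill below the firm 100.00
```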

Mark Baker responds to Rohit’s Teepee post:

Depending upon your POV, TP could either be PEP/SOAP (or similar), or HTTP itself. mod-pubsub probably best reflects the latter position, but I could see value in the former.

I’m not sure that there’s much in the way of a significant architectural difference between the two, but the latter would be a more useful self-contained package; a personal server, I reckon…

Oh, and parameterized topics! Woot! I think that’s just an optimization, but an important one…

Note how routing messages to you via email doesn’t work, but routing blog comments to an initial message of yours does? There’s a lesson there, somewhere.

TP will likely have both PEP/SOAP and HTTP gateways that canonicalize the messages being sent so they can be output in either format as well. SMTP is likely another important gateway interface.

The key architectural question we will likely pursue is: how to make the TP protocol itself as simple as possible, but no simpler.

Parameterized topics? Hmmm… topics seem so gossamer sometimes that it’s hard not to see them as just another name/value pair attached to a message.

Matthias Nicola – Homepage

Matthias Nicola, Jasmi John: “XML Parsing: A Threat to Database Performance”, 12th Intl. Conference on Information and Knowledge Management, CIKM’2003, New Orleans, November 2003.

This paper, referred to me by Rick Ross, is a fascinating indictment of the inadequacy of XML for large-scale, real-world applications today. It fits in with other work on this at IBM that I saw at WWW2004 in NYC. I’m concerned that there aren’t any companies besides IBM, and perhaps DataPower and Sarvega, that even hint at working on hardware-assisted XML processing…
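The paper’s core claim, that parsing rather than storage dominates ingest cost, is easy to glimpse even with a toy measurement (the record shape below is invented for illustration):

```python
# Toy measurement in the spirit of the paper's claim: parsing, not
# storage, dominates XML ingest cost. The record shape is invented;
# this only shows how one might observe the overhead.
import timeit
import xml.etree.ElementTree as ET

record_xml = "<trade><sym>IBM</sym><px>84.5</px><qty>100</qty></trade>"
record_csv = "IBM,84.5,100"

xml_time = timeit.timeit(lambda: ET.fromstring(record_xml), number=100_000)
csv_time = timeit.timeit(lambda: record_csv.split(","), number=100_000)
print(f"XML parse: {xml_time:.2f}s  CSV split: {csv_time:.2f}s  "
      f"(~{xml_time / csv_time:.0f}x slower)")
```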

One of the primary thrusts of our work at CN Labs will be a new kind of internet-scale event notification service: an application-layer router. Just like there’s an IP packet format at the network layer, there ought to be a new standard that unifies the welter of application-layer protocols: smTP, htTP, fTP, nnTP, and more.

TP, a Transfer Protocol, merely provides a best-effort delivery service for named, MIME-typed bags of bits. Rather than using IP addresses, those names are the endpoints that identify multiple services.

If I want a $5 increase in IBM stock price to pop up an alert in my browser, I ought to be able to request something like

“send all messages about http://nyse.com/IBM?delta>5
to javascript://rohits-laptop/window1/alert(‘sell!’)”
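One way to read that request is as a routing rule: match a named, MIME-typed message against a parameterized topic, then forward it to an endpoint URI. A conjectural Python sketch follows; the topic syntax and field names are ours, not from any published TP spec:

```python
# Conjectural sketch of a TP-style routing rule: a named, MIME-typed
# message is matched against a parameterized topic, then forwarded to an
# endpoint URI. Topic syntax and field names are invented, not from any
# published TP specification.

def matches(topic: str, message: dict) -> bool:
    """Match a message against a topic like 'http://nyse.com/IBM?delta>5'."""
    name, _, predicate = topic.partition("?")
    if message.get("name") != name:
        return False
    if predicate:  # parameterized topic: a field>threshold test
        field, _, threshold = predicate.partition(">")
        return float(message.get(field, 0)) > float(threshold)
    return True

def route(message: dict, rules: list) -> None:
    """Best-effort delivery of a named, MIME-typed bag of bits."""
    for topic, endpoint in rules:
        if matches(topic, message):
            print(f"deliver {message['name']} ({message['type']}) -> {endpoint}")

rules = [("http://nyse.com/IBM?delta>5",
          "javascript://rohits-laptop/window1/alert('sell!')")]
route({"name": "http://nyse.com/IBM", "type": "text/plain",
       "delta": 6, "body": b"IBM +6"}, rules)
```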

There’s a lot more to this idea, whether you call it bringing pub/sub to the web, or bringing programmable agents to mail, or some other unification of those messaging middleware modes. Watch this space to see what we can pull together…