We showed off a re-skinned version of our Ångströ project in San Diego that I hope makes a bit more sense as microsearch. :)

We pointed at a live instance at http://tpd.angstro.net:19988/ — go search, grab the miffy bookmarklet, and start adding microformats to our big shared pile of bits!

Congratulations on an applause-winning demo to Ben Sittler, the mad JavaScript genius behind the whole system, and to Elias Sinderson, who added semi-structured XQuery to the system!

Herewith, some notes from our slides…


What is “Atomic-Scale”?

Web pages contain chunks of information

A natural consequence of growing adoption of template languages & content management tools

Feeds create the illusion of immediacy

As chunks of information change, we can expect notification (in the form of updated feed files); a feed-polling sketch follows these notes

Microformats create the illusion of structure

Even if it’s HTML all the way down, we can read it

… so maybe REST will make more sense for atoms than for pages
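
To make the “illusion of immediacy” point concrete, here is a minimal feed-polling sketch using Mark Pilgrim’s feedparser (one of the dependencies listed below). This is an illustration under assumptions, not Ångströ’s actual code: the feed URL, polling interval, and change heuristic are placeholders.

    # Minimal sketch (not Ångströ's code): poll a feed and report entries
    # whose 'updated' stamp has changed since the last poll.
    import time
    import feedparser

    FEED_URL = "http://example.com/index.atom"   # hypothetical feed
    seen = {}                                    # entry id -> last-seen update stamp

    def poll_once():
        d = feedparser.parse(FEED_URL)
        for entry in d.entries:
            key = entry.get("id", entry.get("link"))
            stamp = entry.get("updated", entry.get("published", ""))
            if seen.get(key) != stamp:
                seen[key] = stamp
                print("changed chunk:", entry.get("title", key))

    while True:
        poll_once()
        time.sleep(300)                          # re-poll every five minutes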

How miffy works

walks through the document looking for the ‘root classes’ of the µfs it knows about

places green anchor boxes in front of them
using css — no graphics, since we want it to work offline

‘capturing’ clones those DOM nodes, then walks the tree to “reformulate” it

the only data structure that can represent all future µfs is the DOM itself
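
miffy does this scan client-side in JavaScript against the live DOM. Purely as an illustration (not the project’s code), the same root-class check can be sketched server-side with BeautifulSoup, one of the dependencies listed below; the root-class set and sample markup here are assumptions.

    # Illustrative sketch, not miffy's actual code: find elements whose class
    # attribute contains a known microformat root class.
    from bs4 import BeautifulSoup

    ROOT_CLASSES = {"vcard", "vevent", "hreview", "hresume"}   # assumed subset

    def find_microformat_roots(html):
        soup = BeautifulSoup(html, "html.parser")
        roots = []
        for tag in soup.find_all(True):                        # every element
            if ROOT_CLASSES & set(tag.get("class", [])):
                roots.append(tag)
        return roots

    sample = '<div class="vcard"><span class="fn">Ben Sittler</span></div>'
    for tag in find_microformat_roots(sample):
        print(tag.name, tag.get("class"))                      # div ['vcard']

In the browser, miffy prepends a CSS-only anchor box to each matched node, and “capturing” clones the matched subtree rather than flattening it into a fixed schema.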

For More Information

Is Open Source —
but not yet an open repository

grab snapshots of the code and our Subversion archive from our wiki: https://commerce.net/wiki/tpd

Uses Open Source

  • Depends on some other open-source projects you’ll need
  • DBXml from Sleepycat Software
  • BeautifulSoup by Leonard Richardson
  • Feedparser by Mark Pilgrim
  • … and (not least!) Twisted by TwistedMatrix

Test Service

Running at http://tpd.angstro.net:19988

This afternoon, PubSub and Broadband Mechanics are announcing a “structured blogging initiative” at the Syndicate conference. The press release even includes a quote in support from us here at CommerceNet:

CommerceNet believes strongly in the vision of bootstrapping a more intelligent Web by embedding semi-structured information with easy-to-author techniques like microformats. Through our own research in developing tools for finding, sharing, indexing, and broadcasting microformatted data, we appreciate the challenges these companies have overcome to offer tools that will interoperate as widely as possible. We applaud their recent decision to support the microformats.org community in all of the core areas where commonly accepted schemas already exist, such as calendar entries, contact information, and reviews.

Given that we’re strong supporters of microformats.org, why did we take this stand? First and foremost, for the reasons stated above: because they’re committing to shipping tools that make it easier to produce microcontent using microformats. Even if they were supporting any number of other formats, we’d be glad to welcome any new implementations to the fold.

Of course, we’d prefer to minimize any confusion, too. Many other implementations exist for microformats and are copiously documented and discussed in public forums at microformats.org. Clearly, the (re-)launch of a public .org site titled StructuredBlogging with aspirations to non-profit status of its own could lead to perceptions that there’s some sort of “vs.” battle going on.

That might even have been true, a few months ago when the idea-of-structured-blogging was still conflated with a debatable proposal for structured-blogging-the-format that hid chunks of isolated XML within otherwise readable documents using a <SCRIPT> tag. The major news here today that we’d like to celebrate is that they’re in favor of using microformats for all of their core, commonly-used schemas like reviews, events, and lists.

Now, is the old format still in their code tree when you grab their alpha plugin? Sure, and there will always be room for developers who really, really want to cons up their own schema out of thin air. The microformats-rest mailing list is grappling with the same problem, focusing on XOXO as a solution for now.

The more intriguing implication of their work at StructuredBlogging.org is their microcontent description (MCD) format — even if it’s all hReview at the bottom, there’s room for custom UIs for reviewing movies that are different from reviewing restaurants, and we’ll see whether that’s where these explorations lead…

I wonder if Google will become more involved, given Vint Cerf’s recent decision to join them…

ACM News Service

“The Interplanetary Internet”
IEEE Spectrum (08/05) Vol. 42, No. 8, P. 30; Jackson, Joab
Ambitious plans for future space exploration cannot be realized without an effective communications network to link Earth with its far-flung explorers, and all of NASA is in agreement that the ideal scheme would be an Internet that spans between planets. But the space agency is split over how this can be achieved: One research group supports the use of existing Internet software and Internet protocols, while the other says wirelessly communicating across vast distances with such tools is a practical impossibility. Both groups looked for ways to address the two biggest obstacles of interplanetary communications–delays caused by distance and the handing-off problems associated with the need to go through multiple ground stations. The first group engineered a demo of the space IP network concept on the ill-fated Columbia’s last flight, in which a file was transferred between the Goddard Space Flight Center and the shuttle across a distance of about 600 kilometers. But a team of scientists at the Jet Propulsion Laboratory (JPL) also worked on the problem, only to find that TCP could not be successfully modified for space travel. Their alternative solution is Delay Tolerant Networking (DTN), an architecture that moves data across networks by using routers that retain a copy of every packet of data sent at least until the next node in the network acknowledges receipt, thus guaranteeing that no data is lost even if a node is offline. This scheme not only ensures that data reaches its destination, but it can improve robot explorers’ efficiency by requiring them to hang onto data only until it is received by the first node. The Goddard group’s concern is that a DTN model would be more costly and less capable because it eschews reusable, commercially developed Internet hardware and software.
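
As a toy illustration of the custody-transfer idea described above (my own sketch, not anything from Goddard or JPL), each node keeps a copy of a bundle until the next hop acknowledges it, so an offline hop delays delivery without losing data:

    # Toy DTN-style custody transfer (illustration only, not flight software).
    import collections

    class Node:
        def __init__(self, name):
            self.name = name
            self.online = True
            self.custody = collections.deque()   # bundles we still hold a copy of
            self.delivered = []

        def send(self, bundle, next_hop):
            self.custody.append(bundle)          # keep custody until acknowledged
            self.retry(next_hop)

        def retry(self, next_hop):
            for bundle in list(self.custody):    # re-attempt everything still held
                if next_hop.online:
                    next_hop.receive(bundle)     # "ack": next hop takes custody
                    self.custody.remove(bundle)

        def receive(self, bundle):
            self.delivered.append(bundle)

    earth, relay = Node("earth"), Node("relay")
    relay.online = False
    earth.send("image-001", relay)               # relay offline: earth keeps custody
    relay.online = True
    earth.retry(relay)                           # later contact: bundle moves on
    print(relay.delivered)                       # ['image-001']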

Several people from CommerceNet attended CodeCon last weekend for varying amounts of time. The Wheat project, which we’ve contributed to, presented on Sunday and got a very good audience reception; Walter Landry of ArX, a variant of the GNU arch revision control system, presented a fascinating table of comparisons between the different free-software decentralized revision-control systems.

Decentralized source-code revision-control systems are at an interesting intersection; they allow any person to modify a piece of software with total independence, while facilitating their cooperation with others as much as possible. Historically, distributed systems have often achieved cooperation at the expense of independence, and this has limited their size or resulted in unfortunate social problems. Rohit Khare, the Director of CommerceNet zLab, wrote his doctoral thesis on architectural styles that support this intersection of cooperation and independence, and that area has been the focus of zLab.

ArX, along with several other projects presented at CodeCon, is at the forefront of real-world advances in this area. The others include ApacheCA, a CA that makes trust decisions based on the GnuPG/PGP web of trust; OTR, an extension for instant messaging clients that provides cryptographic privacy for conversations without depending on third-party privacy infrastructure; and i-brokers, an Identity Commons/2idi project to develop a decentralized naming system for people on the internet.

I really appreciated the opportunity to spend a weekend with the people who are already doing the things that CommerceNet, so far, only dreams about.

Yaron Goland’s thoughts on Tor suggest an interesting decentralized system…

You don’t know what you have to hide and by the time you figure it out, it will likely be too late. This is where Tor comes in. It makes it much easier to hide. The reason to use Tor isn’t so much because you have something to hide, the reason to use Tor is so when you find out you had something to hide you can rest a little easier knowing that your secret may be protected.

He has some interesting design notes about constructing a network of routers to serve this purpose.

Ben Sittler noticed that Kenosis got Slashdotted yesterday:

UnderScan writes “Eric Ries, writer/programmer/CTO, authored an
article ‘Kenosis and the World Free Web’ at Freshmeat [Owned by
Slashdot’s Parent OSTG]. Kenosis is described as a ‘fully-distributed
peer-to-peer RPC system built on top of XMLRPC.’ He has combined his
Kenosis with BitTorrent & removed the need for a centralized tracker.
He states: ‘To demonstrate Kenosis’s suitability for these new
applications, we have used it to improve upon another peer-to-peer
filesharing application that Just Works: BitTorrent. BitTorrent does
one thing incredibly well. Using a centralized “tracker,” BitTorrent
manages efficient distribution of data that is in high demand. We have
extended BitTorrent, using Kenosis, to eliminate this dependence on a
centralized tracker.’ See also the Kenosis README for details on using
Kenosis-enabled BitTorrent.”

From http://kenosis.sourceforge.net/
Kenosis is:

  • a fully-distributed p2p RPC system built on top of XMLRPC.
  • zero-defect software.
  • highly compatible.

The inventor recently quoth:

Four years ago, I wrote an article for freshmeat called “The World
Free Web” in which I described a way to make Web content available in
a distributed and anonymous way via Freenet. Back then, I expected, as
did many others, that Freenet was on the verge of completion, and all
that remained was to think of interesting new applications to write on
this new platform.

Now, for the record, I still have high hopes for Freenet and am still
a contributor to the Freenet Foundation. But as it stands, Freenet
simply does not work, and it is not a suitable platform for the
development of new applications.

Two years ago, Malcolm Handley and I started the Dasein Software
Partnership in order to create new peer-to-peer tools and applications
for the Free Software world. We started writing applications for
Freenet, but grew frustrated with Freenet’s lack of stability. Next,
we switched to The Circle, a distributed hashtable based on Chord.
Despite its maturity, it too is not stable or reliable enough to form
a suitable platform.

So we decided that we would need to create a new system, designed from
the ground up for simplicity, stability, and scalability. We call that
system Kenosis.

Kenosis is a fully-distributed peer-to-peer RPC system built on top of
XMLRPC. Nodes are automatically connected to each other via a
Kademlia-style network and can route RPC requests efficiently to any
online node. Kenosis does not rely on a central server; any Kenosis
node can effectively join the network (“bootstrap”) from any connected
node.
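
Kenosis’s own node API isn’t reproduced here; but since it sits on top of XML-RPC, the underlying plumbing looks like Python’s standard xmlrpc modules. The address, port, and method name below are hypothetical placeholders, not Kenosis calls:

    # Sketch of the XML-RPC substrate (hypothetical service, not Kenosis's API).
    from xmlrpc.server import SimpleXMLRPCServer
    import xmlrpc.client

    def serve():
        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_function(lambda name: "hello, " + name, "greet")
        server.serve_forever()

    # From another process, calling that endpoint:
    #   proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    #   print(proxy.greet("world"))              # -> 'hello, world'

Kenosis’s contribution is what sits above this layer: Kademlia-style routing so that a call can reach any online node without a central server.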

Experimental Political Betting Markets and the 2004 Election (PDF)

Justin Wolfers, Wharton, U. Penn
Eric Zitzewitz, Stanford GSB

Abstract: Betting on elections has been of interest to economists and political scientists for some time. We recently persuaded TradeSports to run experimental contingent betting markets, in which one bets on whether President Bush will be re-elected, conditional on other specified events occurring. Early results suggest that market participants strongly believe that Osama bin Laden’s capture would have a substantial effect on President Bush’s electoral fortunes, and interestingly that the chance of his capture peaks just before the election. More generally, these markets suggest that issues outside the campaign, like the state of the economy and progress on the war on terror, are the key factors in the forthcoming election.

Our idea is a twist on the usual election betting markets. In standard election betting markets, traders buy and sell securities whose payoffs are tied to the performance of a particular candidate. For example, since 1988 the Iowa Electronic Markets have traded securities that pay a penny for each percentage point of the two-party popular vote garnered by each of the major candidates, and these have proven to be accurate indicators. And as far back as Lincoln’s election, there were organized betting markets on the Presidency, and again, these markets were quite accurate.

Our contingent securities are linked to both the election outcome and to specific events that could influence the election. The three new contracts that we listed pay $100 if President Bush is re-elected, AND (respectively):


  • Osama bin Laden is caught prior to the election;
  • The unemployment rate falls to 5 percent or below;
  • The terror alert level is at its highest level (red).

Bettors call this sort of combination bet a “parlay,” and they have long recognized that if the two events in a parlay are not independent, then the pricing of the bet needs to take their interrelationship into account. We can exploit this to learn about what market participants think about the correlation between any two events.

Osama’s Capture Would Hand the Election to Bush

The data on the first contract are the easiest to interpret. At the time of writing, the Osama-and-Bush contract is at $9, suggesting a 9 percent probability that both events will occur. (We are sampling the mid-point of the bid-ask spreads). By comparison, a contract paying $100 if Osama is captured by October 31 (two days before the election) is trading for $9.90.

Comparing these prices suggests that if Osama is captured, the markets believe the likelihood of a Bush victory to be 91 percent. (Why so high? The difference between the two prices, which is small in this case, corresponds to the likelihood that Kerry would win despite Osama’s capture.)

By comparison, a contract simply tied to whether Bush is re-elected is trading at $66.60, suggesting that overall, Bush is a two-in-three chance to be reelected. We can also use the Bush and Bush-and-Osama securities to learn about Bush’s chances if Osama is not captured. By purchasing a Bush contract and selling a Bush-and-Osama contract, we get a portfolio that pays $100 only if Bush is re-elected and Osama is not captured. Paying the $66.60, and receiving the $9, yields a total cost of $57.60.

Comparing this with the probability of 90.1 percent that Osama is not captured, this suggests that the markets believe that if Osama is not captured by the end of October, the likelihood of a Bush victory falls to 64 percent. These odds are still good for Bush, but nothing like his 91 percent chance if Osama is captured. It seems to be no exaggeration to state that the election depends more on whether Osama is captured than on any amount of campaign strategy.
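
The arithmetic above is just the parlay relation: the price of the combined contract divided by the price of the conditioning event’s contract gives the conditional probability. Restating the quoted prices in a few lines:

    # Re-deriving the conditional probabilities from the quoted contract prices
    # (a $100-payoff contract priced at $X is read as an X percent probability).
    bush           = 66.6    # Bush re-elected
    osama          = 9.9     # Osama captured by Oct 31
    bush_and_osama = 9.0     # parlay: both happen

    p_bush_given_osama    = bush_and_osama / osama              # ~0.91
    bush_and_no_osama     = bush - bush_and_osama               # 57.6
    p_bush_given_no_osama = bush_and_no_osama / (100 - osama)   # ~0.64

    print(round(p_bush_given_osama, 2), round(p_bush_given_no_osama, 2))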

A Better Economy Would Help Bush, But Not As Much

…A similar exercise can be performed to compute the effects of a better performing economy on the election.

Unfortunately, the likelihood of an unemployment rate below 5 percent (the contingency our contract depends on) is currently so low that the relevant contract has generated little recent trade. However, if we look to earlier trading in June 2004, it is clear that markets believe that a stronger economy would have given President Bush at least an 80 percent chance to win re-election, well above his overall odds at the time.

Other issues arise if we want to use what we learn from these markets to inform decision making. While other prediction markets have proved surprisingly immune to attempted manipulation, once the stakes become more than academic, the incentive for manipulators will rise dramatically. Once security prices are used for decision making, other traders’ beliefs start affecting their payoffs, and so a trader’s goal can move from predicting what will happen toward predicting what others think will happen. Careful market design can help on both issues, but the issues are fairly complex and remain the focus of current study.

We should also emphasize that these results are only experimental. They come from fairly thin, and quite novel, markets. The prices represent the beliefs of a small and possibly quite unrepresentative population of traders, and the financial incentives here are small. They also involve asking people to evaluate small probabilities, which studies suggest many have a hard time doing.

One thing we know about the participants in this market is that unlike political pundits on Sunday television, these participants are putting their money where their mouths are. That feature is what makes market opinions well worth watching.

The Ninth Circuit had found Napster liable because the company itself maintained and controlled the servers that searched for the digital files its users wanted to download. Grokster and StreamCast, by contrast, operate decentralized systems that allow users to find each other over the Internet and then exchange files directly. Consequently, the appeals court said, the two services did not exercise the kind of control that could lead to legal liability for infringing uses.

Read more

There are some screenshots from their demo site.

“The main asset of the Caltech Laboratory for Experimental Finance (CLEF) is its markets software, called jMarkets. It allows us to run large-scale financial markets experiments reliably and flexibly over the web. jMarkets is pure-Java and J2EE compliant. It was developed from the beginning to become open-source, and a first release to the academic community is planned for 15 November 2004. We decided to make jMarkets open source, in order to promote experimental research on financial markets. Our research to date has demonstrated the potential of experiments, paving the way to investigating longstanding questions. But many more exciting questions exist than we can address on our own. jMarkets’ features will make it accessible to other research groups, usable in a variety of locations and populations. It is to become a tool to which many research groups will have easy access and to which they will be able to contribute.”

Welcome to jMarkets

jMarkets is meant to provide the infrastructure for running large-scale experiments. It is built around a specific theoretical framework, namely, General Equilibrium Theory (GE). This is the branch of Economics that studies large, competitive, interdependent systems.

Peter Bossaerts and William Zame are the scientific supervisors of the jMarkets project; Walter Yuan is the technical supervisor; Raj Advani is the lead programmer.

the breve simulation environment

breve is a free, open-source software package which makes it easy to build 3D simulations of decentralized systems and artificial life. Users define the behaviors of agents in a 3D world and observe how they interact. breve includes physical simulation and collision detection so you can simulate realistic creatures, and an OpenGL display engine so you can visualize your simulated worlds.

breve simulations are written in an easy-to-use language called steve. The language is object-oriented and borrows many features from languages such as C, Perl, and Objective C, but even users without previous programming experience will find it easy to jump in. More information on the steve language can be found in the documentation section.

breve features an extensible plugin architecture which allows you to write your own plugins and interact with your own code. Writing plugins is simple and allows you to expand breve to work with existing projects. Plugins have been written in breve to generate MIDI music, download web pages, interact with a Lisp environment and interact with the “push” language. To develop your own plugins, you’ll need to download the plugin SDK from the download section.

Klein, J. 2002. breve: a 3D simulation environment for the simulation of decentralized systems and artificial life. Proceedings of Artificial Life VIII, the 8th International Conference on the Simulation and Synthesis of Living Systems. The MIT Press.

[from a recommendation by Kai Mildenberger]