Congratulations to Stanford,
who won DARPA’s Autonomous
Vehicle race on Saturday, and to
Carnegie Mellon, who had
two vehicles in close contention all day. DARPA also deserves kudos
for running a website that was tracking the vehicles as the race
progressed, showing where each vehicle was on the course and which
ones were still moving. It would be nice if they had enough funding
left to turn the site into something of continuing value: visitors
ought to be able to replay the race, given all the statistics DARPA
collected.
But the reason I’m able to categorize this post under the Prediction
Markets label is that the markets on the claim didn’t do anywhere near
as well as the robotic cars. There wasn’t much participation in the
markets (14 coupons in circulation and 57 total trades in the yes/no
market; 7 coupons traded 19 times in the scaled claim). More
importantly, there was a 30 point spread in the prices offered. The
morning of the race, the highest bid anyone was offering on the yes/no
claim was 25, and the lowest ask was at 50.
Observers must have learned last year’s lessons too well: a year ago,
the best vehicle traveled only 7 miles. This year, all but one of
the 23 finalists beat that mark, with 7 traveling more than 50 miles
through tough, visually confusing terrain, and 5 finishing the course
(4 within the allotted 10 hours).
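For readers unfamiliar with how these markets quote, here is a minimal sketch of reading that bid/ask pair as a probability estimate. It assumes claim prices run on a 0–100 scale interpretable as percent likelihood; the helper function is purely illustrative, not part of any market’s software.

```python
# Illustrative only: treat a claim price on a 0-100 scale as an
# implied probability. A wide bid/ask spread means the market's
# collective estimate is correspondingly fuzzy.

def implied_probability(bid: float, ask: float) -> tuple[float, float]:
    """Return (midpoint probability estimate, spread), both as fractions."""
    midpoint = (bid + ask) / 2 / 100
    spread = (ask - bid) / 100
    return (midpoint, spread)

# The race-morning quotes above: best bid 25, best ask 50.
mid, spread = implied_probability(25, 50)
print(mid, spread)  # 0.375 0.25
```

A 37.5 percent midpoint with a 25-point spread is less a prediction than a shrug.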
Before the race, the rhetoric was very positive. It seemed that most
of the teams were bragging that they had run their own practice
courses, and saying that the real course shouldn’t pose any new
problems. I would have expected that people close to one of the
teams could have gauged its practice successes and failures and
reasoned that if the team they knew had a 50 percent chance of
finishing, and there were 25 teams, then someone ought to be able
to finish. The only conclusions that make sense to me are that
no one active on the FX market knew someone who knew someone on one of
the teams, or that everyone was worried that the DARPA course would
hold some new challenge that none of the teams would be prepared for.
But they all already knew that tunnels (cutting off GPS reception),
cattle guards (fooling depth perception), narrow passes and mud
puddles were on the route.
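The back-of-the-envelope argument above can be made concrete. Assuming, as the informal reasoning does, that each team’s chance of finishing is independent (in reality the outcomes are correlated, since every team faced the same course and conditions), the chance that at least one of 25 teams finishes is overwhelming:

```python
# Sketch of the argument: with n independent teams, each finishing
# with probability p, the chance that at least one finishes is
# 1 - P(all fail). Independence is an assumption; shared terrain and
# weather correlate real outcomes, so read this as an upper bound.

def p_at_least_one(p: float, n: int) -> float:
    """P(at least one of n independent successes with per-trial prob p)."""
    return 1 - (1 - p) ** n

print(p_at_least_one(0.5, 25))  # 0.9999999701976776
print(p_at_least_one(0.1, 25))  # even at 10% per team, still ~0.93
```

Even granting each team only a 10 percent chance, a finish was better than nine-to-one, which makes the 25-bid look badly miscalibrated.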
Oh, well. We try not to claim that Prediction Markets can reliably
predict everything; just that they do a better job of coordinating
predictions than any other mechanism we know of. Here’s another
example of a weak prediction. Did any other mechanism do better?