Call for Papers:

All papers selected for this conference are peer-reviewed and will be published in the regular conference proceedings by the IEEE Computer Society Press. The best papers presented at the conference will be selected for publication in journals such as the Journal on Information Systems and E-Business (ISeB) or the Electronic Commerce Research Journal (ECRJ).

· Submission deadline: January 20, 2005
· Notification of authors: March 25, 2005
· Camera-ready papers: April 26, 2005
· Conference start: July 19, 2005

NetLab Workshop Report, chaired by Charlie Plott:

The time for collaboratories for experimental research in the social sciences has come. It is encouraging to note that, even with very limited funding, individual researchers are already struggling to develop collaboratories. We assert that larger group efforts will have substantially greater payoffs in knowledge development. There is now an opportunity to set the conditions that will speed the development of social science knowledge and revolutionize social science education for the foreseeable future. To do so will require a substantial infrastructure investment in collaboratories. The time has come for that investment to be made.

III.2 Long-term Support

There has been no tradition of providing long-term support to highly technical fields in the social sciences. As researchers make greater use of complex networked systems in their research, the need grows for technicians to conduct experiments. As in any large-scale laboratory in engineering or the natural sciences, technical support is necessary. Traditionally such support has been rare in the social sciences (and only somewhat common in the behavioral sciences). In order to integrate current computational and networked tools into social science experimentation, technical support must be forthcoming.

III.3 Hardware/Software Support

As experiments are scaled up to incorporate many more subjects, or as experiments are distributed across a number of sites, hardware and software innovations are needed. The needs of NetLab researchers are quite different from those of other engineers and scientists. As a consequence, hardware and software development will have to be directed toward those special needs, rather than relying on what has been developed for other sciences.

One of the barriers to current NetLab work is the relatively slow speed of the Internet. Experiments involving “real-time” interactions among hundreds of subjects, scattered across a variety of sites, are nearly impossible. Many of these experiments require that all subjects be brought up to date within 500 milliseconds of any action, even as many different actions take place nearly simultaneously. If all subjects are tied to the same server, this is a relatively trivial problem. However, if subjects are widely distributed, then “real-time” interaction becomes difficult. Moreover, server crashes, backlogs, bottlenecks, and other threats to subject connectivity must be addressed. These constitute fundamental challenges to our capacity to scale up experiments.
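To see why the 500-millisecond bound is so punishing, note that it is set by the slowest subject, not the average one. The following toy simulation (all latency figures are invented assumptions, not measurements) samples a delivery time for each of 200 distributed subjects, with a small chance that any one of them hits a backlog:

```python
import random

# Toy simulation of the fan-out problem described above: one action must
# reach every subject within a 500 ms deadline. All latency figures are
# illustrative assumptions, not measurements.

DEADLINE_MS = 500
N_SUBJECTS = 200
STALL_PROB = 0.02  # chance a given subject hits a backlog or bottleneck

def deliver_once(n_subjects):
    """Sample a delivery latency per subject; return worst case and late count."""
    latencies = []
    for _ in range(n_subjects):
        latency = random.uniform(20, 120)     # typical one-way WAN latency
        if random.random() < STALL_PROB:      # occasional server backlog
            latency += random.uniform(200, 2000)
        latencies.append(latency)
    worst = max(latencies)
    late = sum(1 for lat in latencies if lat > DEADLINE_MS)
    return worst, late

worst, late = deliver_once(N_SUBJECTS)
print(f"worst delivery: {worst:.0f} ms; subjects past the deadline: {late}")
```

With 200 subjects, even a 2 percent per-subject chance of a stall makes at least one straggler nearly certain (roughly 1 − 0.98^200 ≈ 0.98), which is why a single shared server is easy and a widely distributed subject pool is hard.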

A second barrier concerns massive data storage, handling, and retrieval for large-scale experiments. Many experiments require that linkable, heterogeneous data be transmitted from individual sites and merged. However, there are enormous problems with linking data that may include behavioral actions, physiological measurements, and visual images. Moreover, if such data are collected for each subject and the number of subjects is very large, then the resulting data set will be extremely large, and transmitting it will be difficult. For instance, consider 100 subjects engaged in a 60-minute experiment in which information is collected on: the mouse location in 10-millisecond slices; all mouse clicks; physiological measures such as respiration and galvanic skin conductance; EEG measures; and the complete video of each individual’s facial expressions throughout the experiment. Such data, digitally linked, will be extremely valuable, but their size alone will produce major difficulties for researchers.
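A back-of-the-envelope calculation makes the scale concrete. The per-channel rates below (bytes per mouse sample, physiological sampling rates, EEG channel count, video bitrate) are illustrative assumptions, not figures from the report:

```python
# Rough estimate of data volume for the experiment described above:
# 100 subjects, 60 minutes, mouse position every 10 ms, physiological
# channels, EEG, and continuous facial video. All per-channel rates
# are illustrative assumptions.

SUBJECTS = 100
SECONDS = 60 * 60  # a 60-minute experiment

mouse = (1000 // 10) * 8        # 100 samples/s, ~8 bytes (x, y, t) each
physio = 2 * 64 * 4             # 2 channels at 64 Hz, 4-byte floats
eeg = 32 * 256 * 4              # assume 32 electrodes at 256 Hz, 4-byte floats
video = 500_000 // 8            # assume ~500 kbit/s compressed video

bytes_per_subject_second = mouse + physio + eeg + video
total_bytes = bytes_per_subject_second * SECONDS * SUBJECTS

print(f"per subject-second: {bytes_per_subject_second / 1e3:.1f} kB")
print(f"whole experiment:   {total_bytes / 1e9:.1f} GB")
```

Even with these conservative rates, and with the video stream dominating, a single one-hour session comes to roughly 35 GB, which by 2004 standards is a serious transmission and storage problem on its own.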

Yahoo! Research Labs Spot Workshop on Recommender Systems:

On August 26, 2004, Yahoo! Research Labs in Pasadena held the fourth in a series of Spot Workshops. Spot Workshops are informal one-day gatherings of academics and Yahoo! folks centered on a common theme. This workshop’s theme was “Recommender Systems”.

A recommender system is an automated algorithm for providing personalized recommendations (for movies, music, or restaurants, for example) to a user, often by looking for relationships between that user and a large base of other users. In a sense, a recommender system automates the social process of obtaining referrals or recommendations from like-minded friends.
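As a concrete illustration of the "relationships between that user and a large base of other users" idea, here is a minimal user-based collaborative filtering sketch; the tiny ratings table and the cosine-similarity weighting are illustrative choices, not a description of any particular production system:

```python
from math import sqrt

# Toy ratings data: user -> {item: rating}. Purely illustrative.
ratings = {
    "alice": {"movie_a": 5, "movie_b": 3, "movie_c": 4},
    "bob":   {"movie_a": 4, "movie_b": 2, "movie_d": 5},
    "carol": {"movie_b": 5, "movie_c": 1, "movie_d": 2},
}

def cosine_similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in common))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(user, n=2):
    """Score items the user hasn't rated by similarity-weighted ratings
    from like-minded users, and return the top n."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = cosine_similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("alice"))  # ['movie_d']
```

The structure mirrors the social process described above: find users whose past ratings resemble yours, then weight their opinions on things you haven't yet seen.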

There were two invited academic speakers, Professor John Riedl and Professor Jon Herlocker, who are both active at the forefront of recommender systems research (and who played a large role in the field’s creation, including founding NetPerceptions, one of the first startup companies in this area). Todd Beaupre spoke about recommendations on Launch Music — one of the Y! properties most successful at creating personalization that works, providing significant and measurable user value. Donna Boyer and Nilesh Gohel from Y! Network Products discussed the benefits of (and obstacles to) deploying recommendation services across dozens of Y! properties, including Movies, TV, Shopping, Personals, Autos, and others. There was a technical session on algorithmic tools from machine learning and linear algebra useful for recommender systems, with several topics presented by scientists from Yahoo! Research Labs. The workshop ended with a roundtable discussion.

Turnout was excellent, including a number of people from Sunnyvale and Santa Monica. Many attendees felt that the workshop was productive and valuable, successfully bringing together people from throughout the company with similar goals and interests, and ending with concrete plans for continued interaction and collaboration. The invited academic speakers served as a bridge to the academic research community, helping us to assess the current state of the art, as well as make connections for future collaborative projects, student internships, and new hires.

I kept meaning to post this but kept forgetting to do so. Allan Schiffman: “This is why the Internet was invented. Unbelievably cool: check out eMachineshop. Link courtesy of Survival Arts.”

If you enjoyed that link, check out my favorite Allan Schiffman lines from recondite thus far.

Jeff Ubois in Release 1.0 (October 2003) wrote an issue on Online Reputation Systems:

Some like to think of the Net as a digital village, but in fact it’s closer to a digital city. The ability to interact with a billion people on the Net comes with its own costs: Dealing with strangers is risky, and verifying their trustworthiness is expensive – especially on a case-by-case basis.

Companies can use reputation systems to enhance customer support while reducing its costs, and to establish trust, thereby increasing the number and quality of transactions. eBay’s feedback forum, which is used by millions of people for millions of transactions every day, is a good example. According to a study of eBay’s reputation system by Paul Resnick, an associate professor at the University of Michigan’s School of Information, highly ranked sellers can charge about 8 percent more for identical items than sellers with no reputation.

Commerce is all about reputation. Online reputation transcends any single reputation system, but no online reputation system reflects that fact. Somewhere out there someone’s designing a whuffie system — the one ring to bind them all.

Update, November 29. Dick Hardt reminds us that Sxip enables online reputation systems. One Network to bind them all!

IBM Research – Distributed and Fault Tolerant Computing – Current Seminars at Watson

Speaker: Matt Welsh, Harvard University
Title: Market-Based Programming Paradigms for Sensor Networks
Date: 10/20/2004
Time: 2:00-3:30 PM
Location: Hawthorne 1S-F40
Host: Fred Douglis

Abstract:
Sensor networks present a novel programming challenge: that of achieving robust global behavior despite limited resources, varying node locations and capabilities, and changing network conditions. Current programming models typically require that global behavior be specified in terms of the low-level actions of individual nodes. This approach makes it extremely difficult to tune the operation of the sensor network as a whole. Ideally, sensor nodes should self-schedule to determine the set of operations that maximizes each node’s contribution to the network-wide task. In this talk, we present market-based macroprogramming (MBM), a new approach for achieving efficient resource allocation in sensor networks. Rather than programming individual sensor nodes, MBM defines a virtual market in which nodes sell goods (such as sensor readings or data aggregates) in response to prices that are established by the programmer. Nodes take actions to maximize their profit, subject to energy budget constraints. The behavior of the network is determined by adjusting the price vectors for each good, rather than by directly specifying local node programs. Nodes individually specialize their operation in response to feedback from payments. Market-based macroprogramming provides a useful set of primitives for controlling the aggregate behavior of sensor networks despite variance of individual nodes. We present the MBM paradigm and a sensor network vehicle tracking application based on this design, as well as a number of experiments demonstrating that MBM allows nodes to operate efficiently under limited energy budgets, while adapting to changing network conditions. This project is in collaboration with Geoff Mainland and David Parkes.
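The abstract above defines MBM in the authors' terms; purely as a toy illustration of the price-vector control knob, here is a sketch in which a node greedily spends its energy budget on whichever invented "goods" pay best per unit of energy (all names, prices, and costs are made up for the example, and this is not the authors' implementation):

```python
# Toy illustration of the market-based macroprogramming idea: the
# programmer publishes a price per "good", and each node independently
# picks the action mix that maximizes its payment subject to an energy
# budget. All names and numbers below are invented for the example.

prices = {"sample": 1.0, "aggregate": 4.0, "listen": 0.5}   # payment per unit
energy_cost = {"sample": 1, "aggregate": 5, "listen": 1}    # energy per unit

def choose_actions(budget):
    """Greedily spend the energy budget on actions ranked by
    payment per unit of energy (a simple knapsack heuristic)."""
    ranked = sorted(prices, key=lambda a: prices[a] / energy_cost[a],
                    reverse=True)
    plan, remaining = {}, budget
    for action in ranked:
        count = int(remaining // energy_cost[action])
        if count > 0:
            plan[action] = count
            remaining -= count * energy_cost[action]
    return plan

print(choose_actions(budget=20))   # {'sample': 20}

# Raising the price of aggregates redirects the node's behavior
# without reprogramming it:
prices["aggregate"] = 20.0
print(choose_actions(budget=20))   # {'aggregate': 4}
```

The point of the sketch is the one the abstract makes: network behavior is steered by adjusting the price vector, not by rewriting local node programs.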

Biography:
Matt Welsh is an assistant professor of Computer Science at Harvard University. Prior to joining Harvard, he received his Ph.D. from UC Berkeley, and spent one year as a visiting researcher at Intel Research Berkeley. His research interests span many aspects of complex systems, including Internet services, distributed systems, and sensor networks.

Tony Gentile pointed to eBay’s move to release pricing data as a Web service:

eBay announced the availability of Pulse, a tool that aggregates bids to provide up-to-the-minute (and historical) values for goods (and some services).

Yes, that’s right… eBay, who, unlike your local newspaper, has transactions that clear (i.e., you know that the item was sold and how much it was sold for), knows the fair market price, globally, of millions of different goods… and they’ve just opened it up for mining!
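eBay's actual API is not reproduced here, but the core reduction is easy to sketch: take a list of cleared transactions for one good and boil it down to a current and historical fair-market-price series. The data, field names, and monthly bucketing below are invented for the example:

```python
from statistics import median

# Sketch of a Pulse-style aggregate: reduce cleared transactions for
# one good to a median-price history. Data and field names are invented;
# this is not eBay's API.

closed_auctions = [
    {"item": "ipod_20gb", "price": 212.50, "month": "2004-09"},
    {"item": "ipod_20gb", "price": 198.00, "month": "2004-09"},
    {"item": "ipod_20gb", "price": 205.00, "month": "2004-10"},
    {"item": "ipod_20gb", "price": 189.99, "month": "2004-10"},
    {"item": "ipod_20gb", "price": 192.00, "month": "2004-10"},
]

def price_history(sales, item):
    """Median cleared price per month: a simple fair-market-value series."""
    by_month = {}
    for sale in sales:
        if sale["item"] == item:
            by_month.setdefault(sale["month"], []).append(sale["price"])
    return {month: median(prices) for month, prices in sorted(by_month.items())}

print(price_history(closed_auctions, "ipod_20gb"))
# {'2004-09': 205.25, '2004-10': 192.0}
```

The reason the median is meaningful here, and not on a classifieds site, is exactly the point above: these are prices at which transactions actually cleared, not asking prices.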

Many implications:

1) We’ve seen the high-water mark for The Kelley Blue Book’s value (and similar companies). Through Pulse, eBay Motors can be mined to provide near real-time and historical pricing information, on any make or model, in a narrow geographic region (i.e., local search), with car photos documenting the condition of the car, etc. Nice.

2) Data availability will impact marketplace participant behavior, likely resulting in meta-marketplaces, ‘day-sellers,’ and increased competition. For example, much as NexTag.com built a meta-marketplace by arbitraging SEM marketplaces (Google, Overture, etc.) until finally finding a profitable niche in home mortgages, the ability to monitor demand on eBay for a particular good or service may invite speculative and opportunistic seller behavior, resulting in more (and more immediate) competition in eBay’s marketplace.

Interestingly, just as with Overture/Google, tools that make the marketplace more efficient, as described above, may have a negative short-term impact on revenue (as anything that decreases the average selling price of an item on eBay would), but will likely produce significantly greater long-term gains, as the increased transparency leads to greater trust, allowing more transactions to move online.

3) eBay continues to ensure its Web 2.0 relevance by adding its first data-product service, pricing, to its existing services: (1) classified listings and transaction clearing (core marketplace), (2) payments (via PayPal), and (3) reputation (core marketplace).

John Battelle writes in his piece, “The Transparent (Shopping) Society”:

First, the entire UPC system, which I must admit I do not fully grok, must be made open and available as a web service. Second, merchants must be compelled to make their inventory open and available to web services. Third, mobile device makers must install readers in their phones, essentially turning phones into magic gateways between the physical world and the virtual world of web-based information. And fourth, providers like Google must create applications that tie it all together.
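As an aside on the first of Battelle's points: the UPC numbering scheme itself is simple, even if the surrounding registry is not. A UPC-A code is 12 digits whose last digit is a check digit, and the standard validation rule fits in a few lines (this sketch shows the public check-digit algorithm only, nothing about the web service Battelle is asking for):

```python
def valid_upc_a(code: str) -> bool:
    """Validate a 12-digit UPC-A code: three times the sum of the digits
    in odd positions (1st, 3rd, ...) plus the sum of the digits in even
    positions must be divisible by 10."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    return (3 * sum(digits[0::2]) + sum(digits[1::2])) % 10 == 0

print(valid_upc_a("036000291452"))  # True: a common textbook example code
print(valid_upc_a("036000291453"))  # False: corrupted check digit
```

A phone-based reader of the kind Battelle describes would validate the scan this way before ever hitting the network.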

Ross Stapleton-Gray addresses these points in the comments section, and a lively discussion ensues, including Sergei Burkov of Dulance, which just announced an RSS comparison shopping application.

New York Times has two interesting statistics:

An estimated half-million people make a full- or part-time living by auctioning everything from macrame to Maseratis on the Internet. In the online auction world, they are called power sellers, and they have succeeded by researching consumer trends, finding reliable sources for goods and not sparing the bubble wrap.

eBay, of course, is not the only game in town, though it is clearly the largest and most popular Internet auction site. [eBay] has 114 million users, far more than competitors like Ubid.com, Bidville.com and ePier, as well as the auction sections of Amazon.com, Yahoo and Overstock.com.

Note that not all of the half million people making a living with Internet auctions use eBay, though it’s likely that most of them do. The long tail of ecommerce, though in some ways decentralized, still heavily relies on eBay when it comes to Internet auctions.

CNET refers to an online trend — “the blurring of e-commerce and personal media such as Web logs and social networking sites”:

Amazon.com has quietly introduced a new feature on its Web store that lets customers post photos alongside product reviews, its latest effort to build a sense of community among customers.

The e-tailer introduced the feature, called Customer Images, last month for certain product categories including electronics, apparel, sporting goods and musical instruments. It added kitchen items, tools and hardware on Tuesday. The feature is in beta, meaning the company is still testing and tuning it.

“This feature allows customers to really showcase how they are using the product,” Amazon spokesman Craig Berman said. “It’s a great addition to our customer experience.”

The idea is to let customers highlight specific attributes of a product, such as size, and show the product in action, he said.

The Web and Web-based services are finally discovering how to personalize their content for their customers, and it’s thrilling to watch this trend unfold.