November 26, 2004

NSF Workshop Calling for Shared Infrastructure for Ec Experiments

By: ams (filed under: Commerce)

NetLab Workshop Report, chaired by Charlie Plott:

The time for collaboratories for experimental research in the social sciences has come. It is encouraging to note that, even with very limited funding, individual researchers are already struggling to develop collaboratories. We assert that larger group efforts will have substantially greater payoffs in knowledge development. There is now an opportunity to set the conditions that will speed the development of social science knowledge and revolutionize social science education for the foreseeable future. Doing so will require a substantial infrastructure investment in collaboratories. The time has come for that investment to be made.

III.2 Long-term Support

There has been no tradition of providing long-term support to highly technical fields in the social sciences. As researchers make greater use of complex networked systems in their research, the need grows for technicians to conduct experiments. As in any large-scale laboratory in engineering or the natural sciences, technical support is necessary; traditionally, such support has been rare in the social sciences (and only somewhat common in the behavioral sciences). In order to integrate current computational and networked tools into social science experimentation, technical support must be forthcoming.

III.3 Hardware/Software Support

As experiments are scaled up to incorporate many more subjects, or as experiments are distributed across a number of sites, hardware and software innovations are needed. The needs of NetLab researchers are quite different from those of other engineers and scientists. As a consequence, hardware and software development will have to be directed toward those special needs, rather than relying on what has been developed for other sciences.

One of the barriers to current NetLab work is the relatively slow speed of the Internet. Experiments involving “real-time” interactions among hundreds of subjects, scattered across a variety of sites, are nearly impossible. Many of these experiments require that all subjects be brought up to date within 500 milliseconds of any action, even as many different actions take place nearly simultaneously. If subjects are all tied to the same server, this is a relatively trivial problem. However, if subjects are widely distributed, then “real-time” interaction becomes difficult. Moreover, server “crashes,” backlogs, bottlenecks and other threats to subject connectivity must be addressed. These constitute fundamental challenges to our capacity to scale up experiments.
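The 500-millisecond deadline can be made concrete with a back-of-the-envelope latency budget. The sketch below is illustrative only: the hop latencies and server processing time are assumed values, not measurements from any NetLab system, and the fan-out model (one subject's action relayed through a central server to all peers) is a simplifying assumption.

```python
# Back-of-the-envelope check: can one subject's action reach every
# other subject within a 500 ms deadline?  All latency figures below
# are illustrative assumptions, not measurements.

DEADLINE_MS = 500  # update deadline stated in the workshop report


def worst_case_update_ms(uplink_ms, server_processing_ms, downlink_ms):
    """Worst-case time for one subject's action to reach the slowest peer,
    assuming a single central server relays every update."""
    return uplink_ms + server_processing_ms + max(downlink_ms)


# Subjects all tied to the same server on a campus LAN: every hop is fast.
lan = worst_case_update_ms(uplink_ms=5, server_processing_ms=10,
                           downlink_ms=[5] * 100)

# Subjects widely distributed across distant sites: the slowest link dominates.
wan = worst_case_update_ms(uplink_ms=80, server_processing_ms=10,
                           downlink_ms=[40, 120, 450])

print(f"LAN worst case: {lan} ms (deadline met: {lan <= DEADLINE_MS})")
print(f"WAN worst case: {wan} ms (deadline met: {wan <= DEADLINE_MS})")
```

Under these assumed numbers the single-server LAN case meets the deadline with room to spare, while one slow wide-area link pushes the distributed case past 500 ms, which is exactly why scaling up across sites is hard.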

A second barrier concerns massive data storage, handling and retrieval for large-scale experiments. Many experiments require that linkable, heterogeneous data be transmitted from individual sites and merged. However, there are enormous problems with linking data that may include behavioral actions, physiological measurements and visual images. Moreover, if such data are collected for each subject and the number of subjects is very large, then the resulting data set will be extremely large, and transmitting those data will be difficult. For instance, consider 100 subjects engaged in a 60-minute experiment in which information is collected on: the mouse location in 10-millisecond slices; all mouse clicks; physiological measures such as respiration and galvanic skin conductance; EEG measures; and the complete video of the individual’s facial expressions throughout the experiment. Such data, digitally linked, will be extremely valuable, but their size alone will produce major difficulties for researchers.
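A rough sizing of the example just described gives a feel for the scale. The per-sample sizes, channel counts and video bitrate below are assumptions chosen for illustration; the report itself does not specify them.

```python
# Rough data-volume estimate for the report's example: 100 subjects,
# a 60-minute experiment, mouse position sampled every 10 ms, several
# physiological channels, and continuous facial video.  Per-sample
# sizes, channel counts and the video bitrate are illustrative
# assumptions, not figures from the report.

SUBJECTS = 100
DURATION_S = 60 * 60            # 60-minute experiment

# Mouse location in 10 ms slices: 100 samples per second.
mouse_samples = DURATION_S * 100
mouse_bytes = mouse_samples * 12          # assume timestamp + x + y = 12 bytes

# Physiological channels (respiration, skin conductance, EEG):
# assume 40 channels sampled at 250 Hz, 4 bytes per sample.
physio_bytes = DURATION_S * 250 * 40 * 4

# Facial video: assume a modest 1 Mbit/s compressed stream.
video_bytes = DURATION_S * 1_000_000 // 8

per_subject = mouse_bytes + physio_bytes + video_bytes
total_gb = SUBJECTS * per_subject / 1e9

print(f"Per subject: {per_subject / 1e6:.0f} MB")
print(f"All {SUBJECTS} subjects: {total_gb:.1f} GB")
```

Even with these conservative assumptions the video stream dominates, and a single one-hour session across 100 subjects approaches 60 GB, a serious burden for the networks and storage of the era the report describes.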
