
Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research

Crump, Matthew J C ; McDonnell, John V ; Gureckis, Todd M

PloS one, 2013-03, Vol.8 (3), p.e57410-e57410 [Peer-reviewed journal]

United States: Public Library of Science

Full text available

  • Title:
    Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research
  • Author: Crump, Matthew J C ; McDonnell, John V ; Gureckis, Todd M
  • Editor: Gilbert, Sam
  • Subjects: Attention ; Behavioral Research ; Cognitive ability ; Computer Science ; Cues ; Data collection ; Electronic commerce ; Experiments ; Humans ; Internet ; Learning ; Learning Curve ; Mechanical properties ; Medicine ; Priming ; Privacy ; Psychology ; Reaction Time ; Researchers ; Science Policy ; Social and Behavioral Sciences ; Studies ; Trends ; Web browsers ; Web sites ; Workers ; Workers' compensation
  • Is part of: PloS one, 2013-03, Vol.8 (3), p.e57410-e57410
  • Notes: Competing Interests: The authors have declared that no competing interests exist.
    Conceived and designed the experiments: MC JM TG. Performed the experiments: MC JM TG. Analyzed the data: MC JM TG. Contributed reagents/materials/analysis tools: MC JM TG. Wrote the paper: MC JM TG.
  • Description: Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiments differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, using participants recruited via AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory results and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers.
  • Publisher: United States: Public Library of Science
  • Language: English
