
Assessing Open-Ended Human-Computer Collaboration Systems: Applying a Hallmarks Approach

  • Title: Assessing Open-Ended Human-Computer Collaboration Systems: Applying a Hallmarks Approach
  • Author: Kozierok, Robyn; Aberdeen, John; Clark, Cheryl; Garay, Christopher; Goodman, Bradley; Korves, Tonia; Hirschman, Lynette; McDermott, Patricia L.; Peterson, Matthew W.
  • Subjects: Artificial Intelligence; assessment; collaborative assistants; dialogue; evaluation; human-machine teaming; multimodal
  • Is part of: Frontiers in Artificial Intelligence, 2021-10, Vol. 4, p. 670009 [Peer-reviewed journal]
  • Notes: Edited by: Imed Zitouni, Google Zurich, Switzerland
    Reviewed by: Ronald Böck, Otto von Guericke University Magdeburg, Germany; Nadia Mana, Bruno Kessler Foundation (FBK), Italy; Ajey Kumar, Symbiosis International (Deemed University), India
    This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Artificial Intelligence
  • Description: There is a growing desire to create computer systems that can collaborate with humans on complex, open-ended activities. These activities typically have no set completion criteria and frequently involve multimodal communication, extensive world knowledge, creativity, and building structures or compositions through multiple steps. Because these systems differ from question-and-answer (Q&A) systems, chatbots, and simple task-oriented assistants, new methods for evaluating such collaborative computer systems are needed. Here, we present a set of criteria for evaluating these systems, called Hallmarks of Human-Machine Collaboration. The Hallmarks build on the success of heuristic evaluation used by the user interface community and past evaluation techniques used in the spoken language and chatbot communities. They consist of observable characteristics indicative of successful collaborative communication, grouped into eight high-level properties: robustness; habitability; mutual contribution of meaningful content; context-awareness; consistent human engagement; provision of rationale; use of elementary concepts to teach and learn new concepts; and successful collaboration. We present examples of how we used these Hallmarks in the DARPA Communicating with Computers (CwC) program to evaluate diverse activities, including story and music generation, interactive building with blocks, and exploration of molecular mechanisms in cancer. We used the Hallmarks as guides for developers and as diagnostics, assessing systems against the Hallmarks to identify strengths and opportunities for improvement using logs from user studies, surveys of the human partner, third-party review of creative products, and direct tests. Informal feedback from CwC technology developers indicates that the use of the Hallmarks for program evaluation helped guide development. The Hallmarks also made it possible to identify areas of progress and major gaps in developing systems where the machine is an equal, creative partner. (An illustrative encoding of the Hallmarks rubric is sketched after this record.)
  • Publisher: Frontiers Media S.A.
  • Language: English
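
The Description above names the eight Hallmark properties and the diagnostic uses to which they were put (logs from user studies, surveys, third-party review, direct tests). As a minimal sketch of how such a rubric could be encoded, assuming hypothetical names and a 0-2 scoring scale that do not come from the paper:

from dataclasses import dataclass, field
from enum import Enum

class Hallmark(Enum):
    # The eight high-level properties named in the Description above.
    ROBUSTNESS = "robustness"
    HABITABILITY = "habitability"
    MUTUAL_CONTRIBUTION = "mutual contribution of meaningful content"
    CONTEXT_AWARENESS = "context-awareness"
    HUMAN_ENGAGEMENT = "consistent human engagement"
    RATIONALE = "provision of rationale"
    CONCEPT_TEACHING = "use of elementary concepts to teach and learn new concepts"
    COLLABORATION = "successful collaboration"

@dataclass
class HallmarkAssessment:
    # Scores one collaborative session against each Hallmark
    # (0 = not observed, 1 = partial, 2 = strong); the scale is an assumption.
    system: str
    scores: dict = field(default_factory=dict)

    def rate(self, hallmark: Hallmark, score: int) -> None:
        if score not in (0, 1, 2):
            raise ValueError("score must be 0, 1, or 2")
        self.scores[hallmark] = score

    def gaps(self) -> list:
        # Hallmarks rated 0: the "opportunities for improvement".
        return [h for h in Hallmark if self.scores.get(h, 0) == 0]

# Hypothetical usage: rate a blocks-building session and list its gaps.
session = HallmarkAssessment(system="blocks-world collaborator")
session.rate(Hallmark.ROBUSTNESS, 2)
session.rate(Hallmark.CONTEXT_AWARENESS, 1)
print([h.value for h in session.gaps()])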
