Causability and explainability of artificial intelligence in medicine

Holzinger, Andreas ; Langs, Georg ; Denk, Helmut ; Zatloukal, Kurt ; Müller, Heimo

Wiley interdisciplinary reviews. Data mining and knowledge discovery, 2019-07, Vol.9 (4), p.e1312-n/a [Peer-reviewed journal]

Hoboken, USA: Wiley Periodicals, Inc

Full text available

  • Title:
    Causability and explainability of artificial intelligence in medicine
  • Author: Holzinger, Andreas ; Langs, Georg ; Denk, Helmut ; Zatloukal, Kurt ; Müller, Heimo
  • Subjects: Advanced Review ; Advanced Reviews ; Artificial intelligence ; causability ; explainability ; explainable AI ; Explainable artificial intelligence ; histopathology ; Human Centricity and User Interaction ; Machine learning ; Medicine ; Statistical analysis
  • Is part of: Wiley interdisciplinary reviews. Data mining and knowledge discovery, 2019-07, Vol.9 (4), p.e1312-n/a
  • Notes: Funding information
    FeatureCloud, Grant/Award Number: 826078 H2020 EU Project; Hochschulraum‐Infrastrukturmittelfonds; MEFO, Grant/Award Number: MEFO‐Graz; This work was partially supported by the Austrian Science Fund FWF (I2714‐B31) and the EU under H2020 (765148)
    Correction added on 11 June 2019, after first online publication: “explainabilty” has been corrected to “explainability” in the article title.
  • Description: Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction > Explainable AI.
  • Publisher: Hoboken, USA: Wiley Periodicals, Inc
  • Language: English