Fault Localization with Code Coverage Representation Learning

Li, Yi ; Wang, Shaohua ; Nguyen, Tien

2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021, p.661-673

IEEE

Full text available

  • Title:
    Fault Localization with Code Coverage Representation Learning
  • Author: Li, Yi ; Wang, Shaohua ; Nguyen, Tien
  • Subjects: Code Coverage ; Deep learning ; Fault Localization ; Image recognition ; Location awareness ; Machine Learning ; Neural networks ; Pattern recognition ; Representation Learning ; Software engineering ; Training
  • Is part of: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021, p.661-673
  • Description: In this paper, we propose DeepRL4FL, a deep learning fault localization (FL) approach that locates buggy code at the statement and method levels by treating FL as an image pattern recognition problem. DeepRL4FL does so via novel code coverage representation learning (RL) and data dependency RL for program statements. These two types of RL on the dynamic information in a code coverage matrix are combined with code representation learning on the static information of the usual suspicious source code. This combination is inspired by crime scene investigation, in which investigators analyze the crime scene (failed test cases and statements) and related persons (statements with dependencies), and at the same time examine the usual suspects who have committed a similar crime in the past (similar buggy code in the training data). For the code coverage information, DeepRL4FL first orders the test cases and marks error-exhibiting code statements, expecting that a model can recognize the patterns discriminating between faulty and non-faulty statements/methods. For dependencies among statements, the suspiciousness of a statement is assessed by taking into account its data dependencies on other statements in execution and data flows, in addition to the statement itself. Finally, the vector representations for the code coverage matrix, the data dependencies among statements, and the source code are combined and used as the input of a classifier built from a Convolutional Neural Network to detect buggy statements/methods. Our empirical evaluation shows that DeepRL4FL improves the top-1 results over the state-of-the-art statement-level FL baselines from 173.1% to 491.7%. It also improves the top-1 results over the existing method-level FL baselines from 15.0% to 206.3%. (An illustrative coverage-matrix sketch appears after this record.)
  • Publisher: IEEE
  • Language: English
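
The description above treats fault localization as image pattern recognition over a code coverage matrix: test cases form the rows (ordered so that failing tests come first), statements form the columns, and error-exhibiting statements are marked before a CNN classifier scores each statement. The following is a minimal sketch of that idea, not the authors' implementation; the function names, channel layout, and the tiny CNN architecture are all assumptions made only for illustration.

```python
# Minimal sketch (assumed names and shapes, not DeepRL4FL itself):
# build a 2-channel coverage "image" and score statements with a toy CNN.
import numpy as np
import torch
import torch.nn as nn

def build_coverage_matrix(coverage, failed):
    """coverage: dict test_id -> set of covered statement indices.
    failed: set of test_ids that failed.
    Returns a (2, n_tests, n_stmts) tensor: channel 0 is the coverage matrix
    with failing tests ordered first, channel 1 marks error-exhibiting
    statements (those covered by at least one failing test)."""
    tests = sorted(coverage, key=lambda t: (t not in failed, t))  # failing tests first
    n_stmts = 1 + max(s for covered in coverage.values() for s in covered)
    cov = np.zeros((len(tests), n_stmts), dtype=np.float32)
    for i, t in enumerate(tests):
        cov[i, list(coverage[t])] = 1.0
    error_mark = np.zeros(n_stmts, dtype=np.float32)
    for t in failed:
        error_mark[list(coverage[t])] = 1.0
    marks = np.tile(error_mark, (len(tests), 1))
    return torch.from_numpy(np.stack([cov, marks]))

class TinyCoverageCNN(nn.Module):
    """Toy CNN over the coverage image; emits one suspiciousness score per statement."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 8, kernel_size=3, padding=1)
        self.score = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, x):              # x: (batch, 2, n_tests, n_stmts)
        h = torch.relu(self.conv(x))
        h = self.score(h)              # (batch, 1, n_tests, n_stmts)
        return h.mean(dim=2).squeeze(1)  # pool over tests -> (batch, n_stmts)

# Example: 3 tests (t2 fails) over 5 statements; scores are from an untrained model.
coverage = {"t0": {0, 1, 2}, "t1": {0, 3}, "t2": {0, 2, 4}}
x = build_coverage_matrix(coverage, failed={"t2"}).unsqueeze(0)
print(TinyCoverageCNN()(x))
```

The sketch omits the paper's data dependency and static code representations, which the description says are concatenated with the coverage representation before classification.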
