Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models

  • Title:
    Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models
  • Authors: OYAMA, Tatsuya; OKURA, Shunsuke; YOSHIDA, Kota; FUJINO, Takeshi
  • Subjects: Artificial neural networks; backdoor attack; deep neural networks; Handwriting; Image classification; Image manipulation; image sensor interface; image tampering; Microprocessors; MIPI; Object recognition; Performance degradation; Sensors; Signal processing
  • Is part of: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2022/03/01, Vol. E105.A(3), pp. 336-343 [Peer-reviewed journal]
  • Description: A backdoor attack is a type of attack that induces deep neural network (DNN) misclassification. An adversary mixes poison data, consisting of images tampered with adversarial marks at specific locations and labeled with an adversarial target class, into the training dataset. The backdoored model classifies only images bearing the adversarial mark into the adversarial target class and classifies all other images correctly. However, the attack performance degrades sharply when the location of the adversarial mark is slightly shifted. Because the adversarial mark is usually applied when a picture is taken, a backdoor attack has difficulty succeeding in the physical world, where the mark position fluctuates. This paper proposes a new approach in which the adversarial mark is applied using fault injection on the Mobile Industry Processor Interface (MIPI) between an image sensor and the image recognition processor. Two independent attack drivers are electrically connected to the MIPI data lane in the attack system. Almost all image signals are transferred from the sensor to the processor without tampering, because the attack signals from the two drivers cancel each other out; the adversarial mark is injected into a given location of the image signal by activating the attack signal generated by the two drivers. In an experiment, a DNN was implemented on a Raspberry Pi 4 to classify MNIST handwritten images transferred from the image sensor over the MIPI. Using this attack system, the adversarial mark successfully appeared in a specific small part of the MNIST images. The success rate of the backdoor attack using this adversarial mark was 91%, much higher than the 18% achieved by conventional input image tampering. (A minimal sketch of the trigger-stamping step appears after this record.)
  • Publisher: Tokyo: The Institute of Electronics, Information and Communication Engineers
  • Language: English
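
The poison-data mechanism described in the record above is simple enough to illustrate in code. The following is a minimal sketch, assuming MNIST images as a NumPy uint8 array of shape (N, 28, 28); the function name stamp_trigger, the mark size and position, the poisoning fraction, and the target class are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def stamp_trigger(images, labels, target_class=7, mark_size=4,
                  row=0, col=0, poison_frac=0.1, seed=0):
    """Poison part of an MNIST-style dataset with a fixed-location mark.

    images: uint8 array of shape (N, 28, 28); labels: int array of shape (N,).
    All names and parameter values here are illustrative assumptions,
    not the exact settings used in the paper.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Pick a random subset of samples to poison.
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Overwrite a fixed small patch with the maximum pixel value:
    # this is the "adversarial mark" at a specific location.
    images[idx, row:row + mark_size, col:col + mark_size] = 255

    # Relabel the poisoned samples with the adversarial target class.
    labels[idx] = target_class
    return images, labels
```

A model trained on this poisoned mixture learns to associate the fixed-location mark with target_class. At inference time, stamping the same mark on any clean image (in software, or physically on the MIPI signal as the paper does) triggers the misclassification; the attack success rate is then the fraction of marked test images classified as target_class, which also makes clear why shifting the mark's location degrades the attack.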
