
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

Cao, Yulong ; Xiao, Chaowei ; Cyr, Benjamin ; Zhou, Yimeng ; Park, Won ; Rampazzi, Sara ; Chen, Qi Alfred ; Fu, Kevin ; Mao, Z. Morley

Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019, p.2267-2281

New York, NY, USA: ACM

Full text available

  • Title:
    Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
  • Author: Cao, Yulong ; Xiao, Chaowei ; Cyr, Benjamin ; Zhou, Yimeng ; Park, Won ; Rampazzi, Sara ; Chen, Qi Alfred ; Fu, Kevin ; Mao, Z. Morley
  • Subjects: Computing methodologies -- Machine learning -- Machine learning approaches -- Neural networks ; Security and privacy -- Software and application security -- Domain-specific security and privacy architectures
  • Is part of: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019, p.2267-2281
  • Description: In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
  • Publisher: New York, NY, USA: ACM
  • Language: English
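The description above mentions an algorithm that combines local optimization with global sampling to escape poor local optima when perturbing the spoofed LiDAR points. The sketch below is only an illustration of that general sample-then-optimize pattern on a toy objective; the function names, the hill-climbing local optimizer, and the quadratic objective are all assumptions for illustration and are not the paper's actual perturbation model or detector objective.

```python
import random

def attack_objective(perturbation):
    # Hypothetical stand-in for the paper's objective: higher means the
    # spoofed points are more likely to be detected as an obstacle.
    # A toy concave quadratic with its optimum at 0.7 in every dimension.
    return -sum((x - 0.7) ** 2 for x in perturbation)

def local_optimize(candidate, objective, steps=200, lr=0.05):
    # Simple random hill climbing as a stand-in for the local optimization
    # step: perturb one coordinate at a time, keep only improvements.
    best = list(candidate)
    best_score = objective(best)
    for _ in range(steps):
        i = random.randrange(len(best))
        trial = list(best)
        trial[i] += random.uniform(-lr, lr)
        score = objective(trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score

def sample_then_optimize(objective, dim=3, n_samples=20, seed=0):
    # Global sampling: draw many random starting perturbations, run local
    # optimization from each, and keep the overall best result. Restarting
    # from diverse points is what mitigates getting stuck locally.
    random.seed(seed)
    overall_best, overall_score = None, float("-inf")
    for _ in range(n_samples):
        start = [random.uniform(0.0, 1.0) for _ in range(dim)]
        cand, score = local_optimize(start, objective)
        if score > overall_score:
            overall_best, overall_score = cand, score
    return overall_best, overall_score

best, score = sample_then_optimize(attack_objective)
```

In the paper's setting the objective would instead score how confidently the object detector reports a fake obstacle, and the local step would operate on the spoofed point-cloud perturbation; this toy version only shows why combining restarts (global sampling) with local search can raise success rates over optimization alone.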
