Automatic Extraction of Damaged Houses by Earthquake Based on Improved YOLOv5: A Case Study in Yangbi

Jing, Yafei ; Ren, Yuhuan ; Liu, Yalan ; Wang, Dacheng ; Yu, Linjun

Remote Sensing (Basel, Switzerland), 2022-01, Vol. 14 (2), p. 382 [Peer-reviewed journal]

Basel: MDPI AG

  • Subjects: Accuracy ; Algorithms ; Classification ; Damage detection ; damaged houses ; Deep learning ; detection ; Disasters ; Earthquake damage ; Earthquakes ; Efficiency ; Feature extraction ; Feature maps ; Houses ; Intelligence ; Interpreters ; Machine learning ; Model testing ; Morphology ; Neural networks ; orthophotos of UAV ; Remote sensing ; Residential areas ; Seismic activity ; Seismic engineering ; Surface waves ; Target detection ; Unmanned aerial vehicles ; Yangbi Ms6.4 earthquake ; YOLOv5s-ViT-BiFPN
  • Description: Efficiently and automatically acquiring information on earthquake damage through remote sensing has posed great challenges, because the classical methods of detecting houses damaged by destructive earthquakes are often both time-consuming and low in accuracy. A series of deep-learning-based techniques have been developed, and recent studies have demonstrated their high intelligence for automatic target extraction from natural and remote sensing images. For the detection of small artificial targets, current studies show that You Only Look Once (YOLO) performs well on aerial and Unmanned Aerial Vehicle (UAV) images. However, little work has been conducted on the extraction of damaged houses. In this study, we propose a YOLOv5s-ViT-BiFPN-based neural network for the detection of rural houses. Specifically, to enhance the feature information of damaged houses using the global information of the feature map, we introduce the Vision Transformer into the feature extraction network. Furthermore, to address the scale differences of damaged houses in UAV images caused by changes in flying height, we apply the Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale feature fusion, aggregating features with different resolutions, and test the model. We took the 2021 Yangbi earthquake with a surface wave magnitude (Ms) of 6.4 in Yunnan, China, as an example; the results show that the proposed model performs better, with the average precision (AP) increased by 9.31% and 1.23% compared to YOLOv3 and YOLOv5s, respectively, and a detection speed of 80 FPS, which is 2.96 times faster than YOLOv3. In addition, a transferability test on five other areas showed an average accuracy of 91.23% and a total processing time of 4 min, whereas professional visual interpreters needed 100 min. The experimental results demonstrate that the YOLOv5s-ViT-BiFPN model can automatically detect rural houses damaged by destructive earthquakes in UAV images with good accuracy and timeliness, as well as robustness and transferability. (A minimal code sketch of the ViT and BiFPN components mentioned here follows this record.)
  • Language: English
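
The abstract describes two architectural additions to YOLOv5s: a Vision Transformer block in the feature extraction network, so that each position attends to the global context of the feature map, and BiFPN-style weighted fusion of feature maps at different resolutions. The sketch below, in PyTorch, illustrates those two components only; it is not the authors' released implementation, and the module names, channel sizes, number of heads, and the fusion of exactly two feature levels are illustrative assumptions.

```python
# Hedged sketch of the two named techniques, not the paper's code:
# (1) a ViT-style transformer encoder block over flattened CNN features,
# (2) a BiFPN-style normalized weighted fusion of two feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViTBlock(nn.Module):
    """Transformer encoder block applied to a CNN feature map (global attention)."""
    def __init__(self, channels, num_heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * mlp_ratio),
            nn.GELU(),
            nn.Linear(channels * mlp_ratio, channels),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        y = self.norm1(tokens)
        y, _ = self.attn(y, y, y)                # self-attention over all positions
        tokens = tokens + y
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class BiFPNFuse(nn.Module):
    """Fast normalized weighted fusion of two same-channel feature maps (one BiFPN node)."""
    def __init__(self, channels):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))     # learnable per-input weights
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, hi_res, lo_res):
        lo_up = F.interpolate(lo_res, size=hi_res.shape[-2:], mode="nearest")
        w = F.relu(self.w)
        w = w / (w.sum() + 1e-4)                 # weights normalized to sum to 1
        return self.conv(w[0] * hi_res + w[1] * lo_up)


if __name__ == "__main__":
    p4 = torch.randn(1, 256, 40, 40)             # assumed mid-level feature map
    p5 = torch.randn(1, 256, 20, 20)             # assumed deepest feature map
    p5 = ViTBlock(256)(p5)                       # inject global context
    fused = BiFPNFuse(256)(p4, p5)               # multi-scale weighted fusion
    print(fused.shape)                           # torch.Size([1, 256, 40, 40])
```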
