Mapping of land cover with open‐source software and ultra‐high‐resolution imagery acquired with unmanned aerial vehicles

Horning, Ned ; Fleishman, Erica ; Ersts, Peter J. ; Fogarty, Frank A. ; Wohlfeil Zillig, Martha ; Pettorelli, Nathalie ; Disney, Mat

Remote sensing in ecology and conservation, 2020-12, Vol.6 (4), p.487-497 [Peer-reviewed journal]

Oxford: John Wiley & Sons, Inc

Full text available

  • Title:
    Mapping of land cover with open‐source software and ultra‐high‐resolution imagery acquired with unmanned aerial vehicles
  • Author: Horning, Ned ; Fleishman, Erica ; Ersts, Peter J. ; Fogarty, Frank A. ; Wohlfeil Zillig, Martha ; Pettorelli, Nathalie ; Disney, Mat
  • Subjects: Aerial photography ; Algorithms ; Altitude ; Artificial neural networks ; Automation ; Bromus tectorum ; Classification ; Computer programs ; Digital cameras ; Federal regulation ; Ground level ; Image acquisition ; Image classification ; Image contrast ; Image processing ; Image resolution ; Land cover ; Land use ; Learning algorithms ; machine learning ; Neural networks ; open source ; Pixels ; Software ; UAV ; ultra‐high resolution ; Unmanned aerial vehicles ; Vegetation ; Workflow
  • Is part of: Remote sensing in ecology and conservation, 2020-12, Vol.6 (4), p.487-497
  • Description: The use of unmanned aerial vehicles (UAVs) to map and monitor the environment has increased sharply in the last few years. Many individuals and organizations have purchased consumer‐grade UAVs, and commonly acquire aerial photographs to map land cover. The resulting ultra‐high‐resolution (sub‐decimeter‐resolution) imagery has high information content, but automating the extraction of this information to create accurate, wall‐to‐wall land‐cover maps is quite difficult. We introduce image‐processing workflows that are based on open‐source software and can be used to create land‐cover maps from ultra‐high‐resolution aerial imagery. We compared four machine‐learning workflows for classifying images. Two workflows were based on random forest algorithms. Of these, one used a pixel‐by‐pixel approach available in ilastik, and the other used image segments and was implemented with R and the Orfeo ToolBox. The other two workflows used fully connected neural networks and convolutional neural networks implemented with Nenetic. We applied the four workflows to aerial photographs acquired in the Great Basin (western USA) at flying heights of 10 m, 45 m and 90 m above ground level. Our focal cover type was cheatgrass (Bromus tectorum), a non‐native invasive grass that changes regional fire dynamics. The most accurate workflow for classifying ultra‐high‐resolution imagery depends on diverse factors that are influenced by image resolution and land‐cover characteristics, such as contrast, landscape patterns and the spectral texture of the land‐cover types being classified. For our application, the ilastik workflow yielded the highest overall accuracy (0.82–0.89) on the basis of pixel‐based assessment. We compared four machine‐learning workflows for classifying ultra‐high‐resolution images, such as those acquired from unmanned aerial vehicles. Our results and descriptions of land‐cover classification workflows have not been published previously, and are a strong contribution to the growing knowledge base necessary to produce highly accurate land‐cover maps from extremely detailed aerial imagery. (A minimal illustrative sketch of a pixel‐based classification appears after this record.)
  • Publisher: Oxford: John Wiley & Sons, Inc
  • Language: English
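
The description above outlines four workflows: a pixel‐by‐pixel random forest in ilastik, a segment‐based random forest with R and the Orfeo ToolBox, and fully connected and convolutional neural networks in Nenetic. As a rough illustration of the pixel‐based random‐forest idea only, the sketch below trains a classifier on labeled pixels of an orthomosaic and predicts a wall‐to‐wall land‐cover map. It is not the authors' code: the file names (orthomosaic.tif, training_labels.tif), band count, class labels, and the use of Python with scikit-learn and rasterio are assumptions for illustration; the paper's workflows used ilastik, R with the Orfeo ToolBox, and Nenetic, and ilastik additionally derives texture and edge features before classification.

```python
# Minimal pixel-based random-forest land-cover classification sketch.
# Illustrative only: file names, band count and class codes are hypothetical,
# and this is not the ilastik / R + Orfeo ToolBox / Nenetic workflow from the paper.
import rasterio
from sklearn.ensemble import RandomForestClassifier

# Load a multi-band orthomosaic and a raster of training labels
# (0 = unlabeled, 1..k = land-cover classes, e.g. cheatgrass, shrub, bare ground).
with rasterio.open("orthomosaic.tif") as src:
    image = src.read()          # shape: (bands, rows, cols)
    profile = src.profile
with rasterio.open("training_labels.tif") as src:
    labels = src.read(1)        # shape: (rows, cols)

bands, rows, cols = image.shape
pixels = image.reshape(bands, -1).T   # one row of band values per pixel

# Train the random forest on labeled pixels only.
mask = labels.ravel() > 0
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(pixels[mask], labels.ravel()[mask])

# Predict a wall-to-wall land-cover map and write it as a single-band raster.
predicted = rf.predict(pixels).reshape(rows, cols).astype("uint8")
profile.update(count=1, dtype="uint8", nodata=0)
with rasterio.open("landcover_map.tif", "w", **profile) as dst:
    dst.write(predicted, 1)
```

Because the classifier sees only the spectral values of each pixel, its accuracy on ultra‐high‐resolution imagery would hinge on the contrast and spectral texture factors the description mentions; the workflows compared in the paper address this with engineered features, image segments, or convolutional layers.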