
BigNAS: Scaling up Neural Architecture Search with Big Single-Stage Models

Vedaldi, Andrea ; Bischof, Horst ; Brox, Thomas ; Frahm, Jan-Michael

Computer Vision - ECCV 2020, 2020, Vol.12352, p.702-717 [Peer-reviewed]

Switzerland: Springer International Publishing AG

No full text available

  • Title:
    BigNAS: Scaling up Neural Architecture Search with Big Single-Stage Models
  • Author: Vedaldi, Andrea ; Bischof, Horst ; Brox, Thomas ; Frahm, Jan-Michael
  • Subjects: AutoML ; Efficient neural architecture search
  • Is part of: Computer Vision - ECCV 2020, 2020, Vol.12352, p.702-717
  • Notes: Electronic supplementary material: The online version of this chapter (https://doi.org/10.1007/978-3-030-58571-6_41) contains supplementary material, which is available to authorized users.
  • Description: Neural architecture search (NAS) has shown promising results in discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy for ranking the relative quality of different architectures (child models) using a single set of shared weights. However, while one-shot model weights can effectively rank different network architectures, the absolute accuracies obtained from these shared weights are typically far below those obtained from stand-alone training. To compensate, existing methods assume that the weights must be retrained, fine-tuned, or otherwise post-processed after the search is completed. These steps significantly increase the compute requirements and complexity of architecture search and model deployment. In this work, we propose BigNAS, an approach that challenges the conventional wisdom that post-processing of the weights is necessary to obtain good prediction accuracies. Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. Our discovered model family, BigNASModels, achieves top-1 accuracies ranging from 76.5% to 80.9%, surpassing state-of-the-art models in this range, including EfficientNets and Once-for-All networks, without extra retraining or post-processing. We present an ablation study and analysis to further understand the proposed BigNASModels. (An illustrative sketch of the weight-sharing mechanism described here follows this record.)
  • Related titles: Lecture Notes in Computer Science
  • Publisher: Switzerland: Springer International Publishing AG
  • Language: English
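
The Description above refers to obtaining child models directly from a single set of shared weights, without retraining. The minimal Python sketch below is not from the paper; the class SharedConv, the method child_weight, and all sizes are hypothetical. It only illustrates one common form of such weight sharing, in which children of different widths and kernel sizes are read out as slices of the largest model's weight tensor.

    # Minimal sketch of one-shot weight sharing (illustrative only, not the authors' code).
    import numpy as np

    class SharedConv:
        """One shared weight tensor; child models take channel/kernel slices of it."""
        def __init__(self, max_out=64, max_in=32, max_kernel=5):
            rng = np.random.default_rng(0)
            # Weights of the largest ("big single-stage") model; children get no separate copies.
            self.weight = rng.standard_normal((max_out, max_in, max_kernel, max_kernel))

        def child_weight(self, out_ch, in_ch, kernel):
            # Smaller kernels are taken from the center of the largest kernel,
            # smaller widths from the leading channels -- one common slicing scheme.
            start = (self.weight.shape[-1] - kernel) // 2
            return self.weight[:out_ch, :in_ch, start:start + kernel, start:start + kernel]

    layer = SharedConv()
    small = layer.child_weight(out_ch=16, in_ch=16, kernel=3)  # a lightweight child model
    big = layer.child_weight(out_ch=64, in_ch=32, kernel=5)    # the full single-stage model
    print(small.shape, big.shape)  # (16, 16, 3, 3) (64, 32, 5, 5)

Because every child reuses slices of the same tensor, selecting a child architecture after training amounts to slicing, which is why, under this scheme, no per-child retraining is needed.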
