Towards Accurate and Compact Architectures via Neural Architecture Transformer

Guo, Yong ; Zheng, Yin ; Tan, Mingkui ; Chen, Qi ; Li, Zhipeng ; Chen, Jian ; Zhao, Peilin ; Huang, Junzhou

IEEE transactions on pattern analysis and machine intelligence, 2022-10, Vol.44 (10), p.6501-6516 [Peer-reviewed journal]

United States: IEEE

  • Title:
    Towards Accurate and Compact Architectures via Neural Architecture Transformer
  • Author: Guo, Yong ; Zheng, Yin ; Tan, Mingkui ; Chen, Qi ; Li, Zhipeng ; Chen, Jian ; Zhao, Peilin ; Huang, Junzhou
  • Subjects: Algorithms ; Architecture optimization ; compact architecture design ; Computational efficiency ; Computational modeling ; Computer architecture ; Convolution ; Kernel ; Microprocessors ; neural architecture search ; Neural Networks, Computer ; operation transition ; Optimization
  • Is part of: IEEE transactions on pattern analysis and machine intelligence, 2022-10, Vol.44 (10), p.6501-6516
  • Description: Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing deep architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-designed/searched architecture may still contain many nonsignificant or redundant modules/operations (e.g., some intermediate convolution or pooling layers). Such redundancy may not only incur substantial memory consumption and computational cost but also deteriorate the performance. Thus, it is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost. To this end, we have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem into a Markov Decision Process (MDP) and seeks to replace the redundant operations with more efficient operations, such as skip or null connection. Note that NAT only considers a small number of possible replacements/transitions and thus comes with a limited search space. As a result, such a small search space may hamper the performance of architecture optimization. To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization. Specifically, we present a two-level transition rule to obtain valid transitions, i.e., allowing operations to have more efficient types (e.g., convolution → separable convolution) or smaller kernel sizes (e.g., 5×5 → 3×3). Note that different operations may have different valid transitions. We further propose a Binary-Masked Softmax (BMSoftmax) layer to omit the possible invalid transitions (see the sketch after this record). Last, based on the MDP formulation, we apply policy gradient to learn an optimal policy, which will be used to infer the optimized architectures. Extensive experiments show that the transformed architectures significantly outperform both their original counterparts and the architectures optimized by existing methods.
  • Publisher: United States: IEEE
  • Language: English
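
The abstract above describes a Binary-Masked Softmax (BMSoftmax) layer that assigns zero probability to invalid operation transitions before the policy samples a replacement. Below is a minimal sketch of such a masking layer in PyTorch, not the authors' implementation: the candidate-operation list, the tensor shapes, and the binary_masked_softmax helper are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def binary_masked_softmax(logits, mask):
        # Fill invalid transitions with the most negative finite value so that
        # their probability after the softmax is (numerically) zero.
        neg_inf = torch.finfo(logits.dtype).min
        masked_logits = logits.masked_fill(mask == 0, neg_inf)
        return F.softmax(masked_logits, dim=-1)

    # Hypothetical candidate transitions for one edge whose current operation is a
    # 5x5 separable convolution: [keep, sep_conv_3x3, skip_connect, none, conv_7x7].
    # Under a rule like the paper's two-level transition rule, enlarging the kernel
    # (conv_7x7) would be invalid, so its mask entry is 0.
    logits = torch.randn(1, 5)              # transition scores from a policy network
    mask = torch.tensor([[1, 1, 1, 1, 0]])  # 1 = valid transition, 0 = invalid
    probs = binary_masked_softmax(logits, mask)
    print(probs)                            # the invalid transition gets ~0 probability

In this sketch, valid transitions keep a proper probability distribution over which a policy-gradient method could sample, while invalid ones are excluded by the mask rather than by shrinking the candidate set.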
