
Cyclic Differentiable Architecture Search

Yu, Hongyuan ; Peng, Houwen ; Huang, Yan ; Fu, Jianlong ; Du, Hao ; Wang, Liang ; Ling, Haibin

IEEE transactions on pattern analysis and machine intelligence, 2023-01, Vol.45 (1), p.211-228 [Peer-reviewed journal]

United States: IEEE

  • Title:
    Cyclic Differentiable Architecture Search
  • Author: Yu, Hongyuan ; Peng, Houwen ; Huang, Yan ; Fu, Jianlong ; Du, Hao ; Wang, Liang ; Ling, Haibin
  • Subjects: Computer architecture ; Cyclic ; differentiable architecture search ; introspective distillation ; Microprocessors ; Object detection ; Optimization ; Search problems ; Task analysis ; Training ; unified framework
  • Is part of: IEEE transactions on pattern analysis and machine intelligence, 2023-01, Vol.45 (1), p.211-228
  • Description: Differentiable ARchiTecture Search, i.e., DARTS, has drawn great attention in neural architecture search. It tries to find the optimal architecture in a shallow search network and then measures its performance in a deep evaluation network. The independent optimization of the search and evaluation networks, however, leaves room for potential improvement by allowing interaction between the two networks. To address this optimization issue, we propose new joint optimization objectives and a novel Cyclic Differentiable ARchiTecture Search framework, dubbed CDARTS. To account for the structural difference between the two networks, CDARTS builds a cyclic feedback mechanism between the search and evaluation networks with introspective distillation. First, the search network generates an initial architecture for evaluation, and the weights of the evaluation network are optimized. Second, the architecture weights in the search network are further optimized by the label supervision in classification, as well as the regularization from the evaluation network through feature distillation. Repeating the above cycle results in a joint optimization of the search and evaluation networks and thus enables the architecture to evolve to fit the final evaluation network. The experiments and analysis on CIFAR, ImageNet and NATS-Bench [95] demonstrate the effectiveness of the proposed approach over the state-of-the-art ones. Specifically, in the DARTS search space, we achieve 97.52% top-1 accuracy on CIFAR10 and 76.3% top-1 accuracy on ImageNet. In the chain-structured search space, we achieve 78.2% top-1 accuracy on ImageNet, which is 1.1% higher than EfficientNet-B0. Our code and models are publicly available at https://github.com/microsoft/Cream .
  • Publisher: United States: IEEE
  • Language: English
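The cyclic optimization described in the abstract alternates between two phases: fitting the evaluation network's weights to the current architecture, then updating the architecture under label supervision plus a distillation term from the evaluation network. A minimal toy sketch of that loop, with scalars standing in for the networks and simple quadratic losses (all names, the losses, and the hyperparameters are illustrative assumptions, not the paper's implementation):

```python
# Toy sketch of a CDARTS-style cyclic optimization loop (illustrative only).
# Scalars stand in for networks: `alpha` is the architecture parameter,
# `w_eval` the evaluation-network weights; losses are simple quadratics.

def cdarts_toy(target=2.0, lam=0.5, lr=0.1, cycles=200):
    alpha, w_eval = 0.0, 0.0
    for _ in range(cycles):
        # Phase 1: optimize evaluation-network weights for the current
        # architecture, i.e., gradient step on (w_eval - alpha)^2.
        w_eval -= lr * 2.0 * (w_eval - alpha)
        # Phase 2: optimize the architecture with "label supervision"
        # ((alpha - target)^2) plus a distillation-style regularizer
        # pulling the search network toward the evaluation network.
        grad = 2.0 * (alpha - target) + 2.0 * lam * (alpha - w_eval)
        alpha -= lr * grad
    return alpha, w_eval

arch, weights = cdarts_toy()
```

Repeating the two phases lets the architecture parameter and the evaluation weights converge jointly, mirroring the cyclic feedback mechanism the abstract describes, though the real method operates on deep networks with feature-level distillation.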
