Efficient Architecture Search for Continual Learning

Gao, Qiang ; Luo, Zhipeng ; Klabjan, Diego ; Zhang, Fengli

IEEE Transactions on Neural Networks and Learning Systems, 2023-11, Vol.34 (11), p.8555-8565

United States: IEEE

Full text available

  • Title:
    Efficient Architecture Search for Continual Learning
  • Author: Gao, Qiang ; Luo, Zhipeng ; Klabjan, Diego ; Zhang, Fengli
  • Subjects: Computer architecture ; Continual learning ; Deep learning ; deep neural network ; Knowledge engineering ; Network architecture ; neural architecture search (NAS) ; Neural networks ; Neurons ; Task analysis
  • Is part of: IEEE Transactions on Neural Networks and Learning Systems, 2023-11, Vol.34 (11), p.8555-8565
  • Description: Continual learning with neural networks, which aims to learn a sequence of tasks, is an important learning framework in artificial intelligence (AI). However, it often confronts three challenges: 1) overcoming the catastrophic forgetting problem; 2) adapting the current network to new tasks; and 3) controlling its model complexity. To reach these goals, we propose a novel approach named continual learning with efficient architecture search (CLEAS). CLEAS works closely with neural architecture search (NAS), which leverages reinforcement learning techniques to search for the best neural architecture that fits a new task. In particular, we design a neuron-level NAS controller that decides which old neurons from previous tasks should be reused (knowledge transfer) and which new neurons should be added (to learn new knowledge). Such a fine-grained controller allows finding a very concise architecture that fits each new task well. Meanwhile, since we do not alter the weights of the reused neurons, we perfectly memorize the knowledge learned from the previous tasks. We evaluate CLEAS on numerous sequential classification tasks, and the results demonstrate that CLEAS outperforms other state-of-the-art alternative methods, achieving higher classification accuracy while using simpler neural architectures.
  • Publisher: United States: IEEE
  • Language: English
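The mechanism the abstract describes (a controller picks which old, frozen neurons to reuse and how many fresh neurons to add per task) can be illustrated with a toy sketch. This is not the authors' code: the class name, the fixed reuse mask, and the expansion sizes are all illustrative assumptions; in CLEAS itself these per-neuron decisions are produced by a reinforcement-learning NAS controller.

```python
import numpy as np

class ExpandableLayer:
    """Toy layer mimicking CLEAS-style neuron reuse: old weights stay
    frozen, and new trainable neurons are appended for each new task."""

    def __init__(self, n_in, n_old, rng):
        self.W_old = rng.standard_normal((n_in, n_old))  # frozen: never updated
        self.W_new = np.zeros((n_in, 0))                 # new units added per task

    def expand(self, n_add, rng):
        # Add n_add fresh neurons for the new task; W_old is untouched,
        # which is what preserves knowledge from earlier tasks.
        extra = 0.1 * rng.standard_normal((self.W_new.shape[0], n_add))
        self.W_new = np.hstack([self.W_new, extra])

    def forward(self, x, reuse_mask):
        # reuse_mask (bool array of length n_old) stands in for the
        # controller's per-neuron decision of which old neurons to reuse.
        h_old = np.maximum(x @ self.W_old[:, reuse_mask], 0.0)
        h_new = np.maximum(x @ self.W_new, 0.0)
        return np.concatenate([h_old, h_new], axis=-1)

rng = np.random.default_rng(0)
layer = ExpandableLayer(n_in=4, n_old=3, rng=rng)
layer.expand(n_add=2, rng=rng)           # "controller" adds 2 new neurons
mask = np.array([True, False, True])     # "controller" reuses old neurons 0 and 2
out = layer.forward(np.ones((1, 4)), mask)
print(out.shape)  # (1, 4): 2 reused old neurons + 2 new ones
```

The key design point mirrored here is that only `W_new` would receive gradient updates during training on the new task, so reused neurons keep exactly the function they learned before.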
