
Structured Pruning of Deep Convolutional Neural Networks

Anwar, Sajid ; Hwang, Kyuyeon ; Sung, Wonyong

ACM journal on emerging technologies in computing systems, 2017-07, Vol.13 (3), p.1-18 [Peer-reviewed journal]


  • Title:
    Structured Pruning of Deep Convolutional Neural Networks
  • Author: Anwar, Sajid ; Hwang, Kyuyeon ; Sung, Wonyong
  • Is part of: ACM journal on emerging technologies in computing systems, 2017-07, Vol.13 (3), p.1-18
  • Description: Real-time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks: feature map-wise, kernel-wise, and intra-kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, in parallel computing environments, and in hardware-based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by assessing the misclassification rate with a corresponding connectivity pattern. The pruned network is retrained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we show in particular that intra-kernel strided sparsity with a simple constraint can significantly reduce the size of the kernel and feature map tensors. The proposed work shows that when pruning granularities are applied in combination, we can prune the CIFAR-10 network by more than 70% with less than a 1% loss in accuracy.
  • Language: English
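
The three pruning granularities named in the abstract can be illustrated with a short sketch. This is an assumption-laden toy example, not the paper's method: it scores connections by L1 norm rather than the particle filtering approach the authors propose, and the tensor shape and prune counts are arbitrary.

```python
import numpy as np

# Toy conv weight tensor of shape (out_channels, in_channels, k, k).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4, 3, 3))

# 1) Feature map-wise: prune entire output channels
#    (here scored by L1 norm; the paper uses particle filtering instead).
def prune_feature_maps(W, n_prune):
    W = W.copy()
    norms = np.abs(W).sum(axis=(1, 2, 3))       # one score per output channel
    W[np.argsort(norms)[:n_prune]] = 0.0
    return W

# 2) Kernel-wise: prune whole k x k kernels, one per (out, in) channel pair.
def prune_kernels(W, n_prune):
    W = W.copy()
    norms = np.abs(W).sum(axis=(2, 3)).ravel()  # one score per kernel
    o, i = np.unravel_index(np.argsort(norms)[:n_prune], W.shape[:2])
    W[o, i] = 0.0
    return W

# 3) Intra-kernel strided: keep only weights on a fixed stride/offset grid,
#    shared across kernels, so the im2col matrix-product layout stays regular.
def prune_intra_kernel_strided(W, stride=2, offset=0):
    W = W.copy()
    mask = np.zeros(W.shape[2] * W.shape[3], dtype=bool)
    mask[offset::stride] = True                 # same pattern for every kernel
    return W * mask.reshape(1, 1, *W.shape[2:])

Wf = prune_feature_maps(W, n_prune=2)
Wk = prune_kernels(W, n_prune=6)
Ws = prune_intra_kernel_strided(W, stride=2, offset=0)
print((np.abs(Wf).sum(axis=(1, 2, 3)) == 0).sum())  # → 2 pruned feature maps
```

Because each pattern zeroes a whole structural unit (channel, kernel, or a stride grid common to all kernels), the surviving weights can be packed into a smaller dense tensor, which is the regularity the abstract credits for direct savings on embedded and parallel hardware.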
