
Dropout vs. batch normalization: an empirical study of their impact to deep learning

Garbin, Christian ; Zhu, Xingquan ; Marques, Oge

Multimedia Tools and Applications, 2020-05, Vol. 79 (19-20), p. 12777-12815 [Peer-reviewed journal]

New York: Springer US

Full text available

  • Title:
    Dropout vs. batch normalization: an empirical study of their impact to deep learning
  • Author: Garbin, Christian ; Zhu, Xingquan ; Marques, Oge
  • Subjects: Artificial neural networks ; Computer Communication Networks ; Computer programming ; Computer Science ; Data Structures and Information Theory ; Deep learning ; Experimentation ; Machine learning ; Model accuracy ; Multimedia Information Systems ; Neural networks ; Smartphones ; Special Purpose and Application-Based Systems ; Training ; Tuning
  • Is part of: Multimedia Tools and Applications, 2020-05, Vol. 79 (19-20), p. 12777-12815
  • Description: Overfitting and long training time are two fundamental challenges in multilayered neural network learning, and deep learning in particular. Dropout and batch normalization are two well-recognized approaches to tackle these challenges. While both approaches share overlapping design principles, numerous research results have shown that they have unique strengths to improve deep learning. Many tools expose these two approaches as a simple function call, allowing flexible stacking to form deep learning architectures (see the illustrative sketch after this record). Although usage guidelines for both are available, there is unfortunately no well-defined set of rules, nor any comprehensive study, investigating them with respect to data input, network configurations, learning efficiency, and accuracy. It is not clear when users should consider using dropout and/or batch normalization, and how they should be combined (or used as alternatives) to achieve optimized deep learning outcomes. In this paper we conduct an empirical study to investigate the effect of dropout and batch normalization on training deep learning models. We use multilayered dense neural networks and convolutional neural networks (CNNs) as the deep learning models, and mix dropout and batch normalization to design different architectures and subsequently observe their performance in terms of training and test CPU time, number of parameters in the model (as a proxy for model size), and classification accuracy. The interplay between network structures, dropout, and batch normalization allows us to conclude when and how dropout and batch normalization should be considered in deep learning. The empirical study quantified the increase in training time when dropout and batch normalization are used, as well as the increase in prediction time (important for constrained environments, such as smartphones and low-powered IoT devices). It showed that a non-adaptive optimizer (e.g. SGD) can outperform adaptive optimizers, but only at the cost of a significant amount of training time spent on hyperparameter tuning, while an adaptive optimizer (e.g. RMSProp) performs well without much tuning. Finally, it showed that dropout and batch normalization should be used in CNNs only with caution and experimentation (when in doubt and short on time to experiment, use only batch normalization).
  • Publisher: New York: Springer US
  • Language: English
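
The abstract notes that dropout and batch normalization are each available as a single function call that can be stacked freely into an architecture. As a minimal illustrative sketch only, the following Python/Keras snippet shows one such stack in a small CNN; the input shape, layer sizes, and dropout rate are arbitrary assumptions for illustration, not the configurations evaluated in the paper.

```python
from tensorflow.keras import layers, models

# Assumed toy setup: MNIST-sized grayscale input, 10 classes.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.BatchNormalization(),   # batch normalization: one layer call
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),           # dropout: one layer call (rate is an assumption)
    layers.Dense(10, activation="softmax"),
])

# The study reports that an adaptive optimizer (e.g. RMSProp) performs well
# without much tuning, while a non-adaptive one (e.g. SGD) can outperform it
# only after costly hyperparameter search.
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Adding, removing, or reordering the `Dropout` and `BatchNormalization` layers in a stack like this is the kind of architectural variation the study measures, comparing training and prediction time, parameter count, and classification accuracy.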
