Godiva: green on-chip interconnection for DNNs

Asad, Arghavan ; Mohammadi, Farah

The Journal of supercomputing, 2023-02, Vol.79 (3), p.2404-2430 [Peer-reviewed journal]

New York: Springer US

Full text available

  • Title:
    Godiva: green on-chip interconnection for DNNs
  • Author: Asad, Arghavan ; Mohammadi, Farah
  • Subjects: Access time ; Algorithms ; Artificial neural networks ; Central processing units ; Compilers ; Computer Science ; Computer vision ; CPUs ; Graphics processing units ; Hardware ; Image classification ; Interpreters ; Machine learning ; Object recognition ; Parallel processing ; Processor Architectures ; Programming Languages ; Speech recognition ; Workload ; Workloads
  • Is part of: The Journal of supercomputing, 2023-02, Vol.79 (3), p.2404-2430
  • Description: The benefits of deep neural networks (DNNs) and other big-data algorithms have led to their use in almost every modern application. The rising use of DNNs in diverse domains including computer vision, speech recognition, image classification, and prediction has increased the demand for energy-efficient hardware architectures. The massive amount of parallel processing in large-scale DNN algorithms has made communication and storage a major barrier to a DNN’s power and performance. Nowadays, DNNs have gained a great deal of success by utilizing the inherent parallelism of GPU architectures. However, recent research shows that the integration of CPUs and GPUs presents a more efficient solution for running the next generation of machine learning (ML) chips. Designing an interconnection network for a heterogeneous CPU-GPU platform is a challenge (especially for the execution of DNN workloads), as it must be scalable and efficient. A study in this work shows that the majority of traffic in DNN workloads is associated with the last-level caches (LLCs). Therefore, there is a need to design a low-overhead interconnect fabric that minimizes the energy and access time of the LLC banks. To address this issue, a low-overhead on-chip interconnection, named Godiva, for running DNNs energy-efficiently has been proposed. The Godiva interconnect affords low LLC access delay using low-overhead, low-cost hardware in a heterogeneous CPU-GPU platform. An experimental evaluation targeting a 16CPU-48GPU system and a set of popular DNN workloads reveals that the proposed heterogeneous architecture improves system energy by about 21.7 × and reduces interconnection network area by about 51% when compared to a mesh-based CPU design.
  • Publisher: New York: Springer US
  • Language: English
