Algorithms for Efficient Inference in Convolutional Neural Networks

Research output: Thesis › Dissertation (TU Delft)


Abstract

In recent years, the accuracy of Deep Neural Networks (DNNs) has improved significantly due to three main factors: the availability of massive amounts of training data, the introduction of powerful low-cost computational resources, and the development of complex deep learning models. The cloud can provide the computational power needed to run DNNs, but cloud deployment is limited by data communication overhead and privacy concerns. Computing DNNs at the edge is therefore becoming an important alternative to running these models as a centralized service. However, there is a mismatch between the resource-constrained devices at the edge and the growing computational complexity of the models. To alleviate this mismatch, both algorithms and hardware need to be explored to improve the efficiency of training various feedforward and recurrent neural networks and of performing inference with a DNN.
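To make the mismatch between edge devices and model complexity concrete, the back-of-the-envelope sketch below (an illustration, not code from the thesis; all layer shapes are hypothetical) estimates the parameter count and multiply-accumulate (MAC) operations of a single convolutional layer:

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Return (parameters, MAC ops) for one k x k convolutional layer.

    c_in/c_out: input/output channels; h_out/w_out: output feature-map size.
    """
    params = c_out * (c_in * k * k + 1)          # weights plus one bias per filter
    macs = c_in * k * k * c_out * h_out * w_out  # one MAC per weight per output pixel
    return params, macs

# Hypothetical example: a 3x3 layer mapping 64 -> 128 channels on a 56x56 map.
params, macs = conv2d_cost(64, 128, 3, 56, 56)
print(params)  # 73856 parameters
print(macs)    # 231211008 MACs (~231 million) for a single input
```

Even this one mid-network layer requires hundreds of millions of MACs per inference, which illustrates why full models strain resource-constrained edge hardware and why efficiency-oriented algorithms are needed.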
Original language: English
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • Al-Ars, Z., Supervisor
  • Hofstee, H.P., Supervisor
Award date: 16 Sep 2021
DOIs
Publication status: Published - 2021

Keywords

  • Convolutional neural network
  • Efficiency
  • Approximation
  • Architecture design
  • Reconstruction
  • Feature reuse
  • Attention
  • Search
