Architecture-based heat-dissipation analyses reveal fundamental sources of inefficiency in a given processor and thereby provide road maps for designing less dissipative computing schemes, independent of the technology base used to implement them. In this work, we study architecture-level contributions to energy dissipation in an Artificial Neural Network (ANN)-based processor trained to perform an edge-detection task. We compare the training and information-processing cost of the ANN to that of conventional architectures and algorithms using a 64-pixel binary image. Our results reveal the inherent efficiency advantages of an ANN trained for a specific task over general-purpose processors based on the von Neumann architecture. We also compare the proposed performance improvements to those of Cellular Array Processors (CAPs) and illustrate the reduction in dissipation for special-purpose processors. Lastly, we calculate the change in dissipation as a function of input data structure and show the effect of randomness on the energetic cost of information processing. These results provide a basis of comparison for task-based fundamental energy-efficiency analyses across a range of processors, and thereby contribute to the study of architecture-level descriptions of processors and of thermodynamic cost calculations grounded in the physics of computation.
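For concreteness, the edge-detection task on a 64-pixel (8×8) binary image can be sketched as follows. This is a hypothetical rule-based stand-in for illustration only, not the paper's trained ANN: the function name `detect_edges` and the 4-neighbour difference rule are assumptions, chosen as one common definition of an edge in a binary image.

```python
def detect_edges(image):
    """Return an 8x8 binary edge map for an 8x8 binary image.

    A pixel is marked as an edge if any of its 4-connected
    neighbours has a different value. (Illustrative rule only;
    the paper's ANN learns its own input-output mapping.)
    """
    n = 8
    edges = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and image[nr][nc] != image[r][c]:
                    edges[r][c] = 1
                    break
    return edges

# Example input: a solid 4x4 square in the top-left corner of the 8x8 frame.
img = [[1 if r < 4 and c < 4 else 0 for c in range(8)] for r in range(8)]
edge_map = detect_edges(img)
```

The regularity of such an input (a single contiguous block) versus a random scattering of pixels is exactly the kind of input-data structure whose effect on dissipation the abstract discusses.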
Number of pages: 13
Journal: International Journal of Parallel, Emergent and Distributed Systems
Publication status: Published - 2020
Keywords:
- Artificial neural networks
- Energy efficiency
- Processor thermodynamics