Efficient, Geometry-Based Convolution
Journal article
Authors: Khongprasongsiri, Chanon; Suwansantisuk, Watcharapan; Kumhom, Pinit
Publication Details
Publisher: Institute of Electrical and Electronics Engineers
Publication year: 2022
Volume number: 10
Start page: 33421
End page: 33439
Number of pages: 19
ISSN: 2169-3536
eISSN: 2169-3536
Language: English (EN-GB)
Abstract
Several computationally intensive applications in machine learning, signal processing, and computer vision call for convolution between a fixed vector and each of the incoming vectors. Often, the convolution need not be exact because a subsequent processing unit, such as an activation function in a neural network or a visual unit in image processing, can tolerate a computational error, which allows the convolution algorithm to be optimized. This paper develops a method of approximate convolution and quantifies its performance in software and hardware. The key idea is to take advantage of the known fixed vector, view a convolution as a dot product, and geometrically approximate the angle between the fixed vector and each incoming vector. We evaluate the proposed method in terms of accuracy, running-time complexity, and power consumption on field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware platforms. In a benchmark test, the accuracy of the approximate convolution is 3.7% lower than that of the exact convolution, a tolerable loss for machine learning and signal processing. The proposed method reduces the number of hardware operations, cutting the power consumption of conventional convolution by approximately 20% and that of an existing approximate convolution by approximately 10%, while maintaining the same throughput and latency. We also test the proposed method on 2D convolution and a convolutional neural network (CNN): relative to the conventional method, it reduces complexity by approximately 22%, 2D-convolution power consumption by approximately 25%, and CNN power consumption by approximately 13%. The proposed method of approximate convolution trades off accuracy against running-time complexity and hardware power consumption, and it has practical utility in computationally intensive tasks that tolerate a margin of convolutional error. © 2013 IEEE.
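The abstract's key idea rests on two observations: a 1-D convolution with a fixed kernel is a sequence of dot products against sliding windows, and a dot product can be recovered from the vector norms and the angle between the vectors via w · v = |w||v|cos θ. The sketch below illustrates both observations in plain Python; the angle-quantization step in `approx_dot` is only a hypothetical stand-in for the paper's geometric approximation, whose exact scheme is not given in the abstract.

```python
import math

def conv_as_dot_products(w, x):
    """Valid 1-D convolution of a fixed kernel w with a signal x,
    written explicitly as one dot product per sliding window."""
    wr = w[::-1]  # convolution flips the kernel
    m = len(w)
    return [sum(a * b for a, b in zip(wr, x[i:i + m]))
            for i in range(len(x) - m + 1)]

def approx_dot(w, v, levels=64):
    """Illustrative sketch (not the paper's exact scheme): recover a
    dot product from the vector norms and a coarsely quantized angle,
    using w . v = |w| |v| cos(theta)."""
    nw = math.sqrt(sum(a * a for a in w))
    nv = math.sqrt(sum(b * b for b in v))
    if nw == 0.0 or nv == 0.0:
        return 0.0
    cos_t = max(-1.0, min(1.0, sum(a * b for a, b in zip(w, v)) / (nw * nv)))
    theta = math.acos(cos_t)                                   # exact angle
    theta_q = round(theta / math.pi * levels) / levels * math.pi  # quantized angle
    return nw * nv * math.cos(theta_q)
```

Because the kernel w is fixed, its norm (and any per-kernel precomputation the geometric scheme needs) can be computed once and reused across all incoming windows, which is the source of the savings the abstract describes.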
Keywords
Convolution, dot product