Efficient, Geometry-Based Convolution

Journal article


Publication Details

Author list: Khongprasongsiri, Chanon; Suwansantisuk, Watcharapan; Kumhom, Pinit

Publisher: Institute of Electrical and Electronics Engineers

Publication year: 2022

Volume number: 10

Start page: 33421

End page: 33439

Number of pages: 19

ISSN: 2169-3536

eISSN: 2169-3536

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85126288281&doi=10.1109%2fACCESS.2022.3157325&partnerID=40&md5=f9ba7ac914c833c9a4beeb0c52bb76b6

Languages: English-Great Britain (EN-GB)




Abstract

Several computationally intensive applications in machine learning, signal processing, and computer vision call for convolution between a fixed vector and each of the incoming vectors. Often, the convolution need not be exact, because a subsequent processing unit, such as an activation function in a neural network or a visual unit in image processing, can tolerate a computational error, allowing the convolution algorithm to be optimized. This paper develops a method of approximate convolution and quantifies its performance in software and hardware. The key idea is to take advantage of the known fixed vector, view a convolution as a dot product, and approximate the angle between the fixed vector and each incoming vector geometrically. We evaluate the proposed method in terms of accuracy, running-time complexity, and power consumption on field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware platforms. In a benchmark test, the accuracy of the approximate convolution is 3.7% lower than that of the exact convolution, a tolerable loss for machine learning and signal processing. The proposed method reduces the number of operations in the hardware, cutting power consumption by approximately 20% relative to conventional convolution and by approximately 10% relative to an existing approximate convolution, while maintaining the same throughput and latency. We also test the proposed method on 2D convolution and on a convolutional neural network (CNN). Relative to the conventional method, the proposed method reduces complexity by approximately 22%, the power consumption of 2D convolution by approximately 25%, and the power consumption of the CNN by approximately 13%. The proposed method of approximate convolution trades accuracy for lower running-time complexity and hardware power consumption, and it has practical utility in computationally intensive tasks that tolerate a margin of convolutional error. © 2013 IEEE.
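For intuition, the sketch below illustrates only the general idea stated in the abstract, not the paper's actual estimator: each output of a sliding ("valid") correlation is a dot product w . s = ||w|| ||s|| cos(theta), so the norm of the fixed kernel w can be precomputed once and the exact cosine can be swapped for a cheap angle estimate. The sign-agreement (SimHash-style) cosine estimate used here is an assumed stand-in for the geometric angle approximation developed in the paper, and the names approx_conv and sign_cosine_estimate are hypothetical.

    import numpy as np

    def sign_cosine_estimate(w_signs, s):
        """Cheap cosine estimate from sign agreement (a stand-in, not the paper's method)."""
        # Fraction of positions where the two vectors agree in sign.
        p = np.mean(w_signs == np.sign(s))
        # For roughly zero-mean data, p ~ 1 - theta/pi, hence cos(theta) ~ cos(pi * (1 - p)).
        return np.cos(np.pi * (1.0 - p))

    def approx_conv(w, x):
        """Approximate 'valid' correlation of a fixed kernel w with a signal x.

        Each output sample is w . s over a window s of x; convolution proper
        would first flip the kernel. Writing w . s = ||w|| ||s|| cos(theta),
        the kernel's norm and signs are hoisted out of the loop because w is
        fixed, and cos(theta) is replaced by the cheap estimate above.
        """
        w = np.asarray(w, dtype=float)
        x = np.asarray(x, dtype=float)
        n = len(w)
        w_norm = np.linalg.norm(w)   # precomputed once: the kernel is fixed
        w_signs = np.sign(w)         # also precomputed once
        out = np.empty(len(x) - n + 1)
        for i in range(len(out)):
            s = x[i:i + n]
            out[i] = w_norm * np.linalg.norm(s) * sign_cosine_estimate(w_signs, s)
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.standard_normal(64)      # fixed kernel
        x = rng.standard_normal(4096)    # incoming signal
        exact = np.correlate(x, w, mode="valid")
        approx = approx_conv(w, x)
        err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
        print(f"relative error of the approximation: {err:.3f}")

Running the script compares the approximation against NumPy's exact correlation and prints the relative error, illustrating the accuracy-for-operations trade-off the abstract describes.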


Keywords

Convolution; dot product


Last updated on 2023-09-29 at 07:36