Quantized convolutional neural networks through the lens of partial differential equations

Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
eISSN
2197-9847
DOI
10.1007/s40687-022-00354-y

Abstract

Quantization of convolutional neural networks (CNNs) is a common approach to ease the computational burden involved in the deployment of CNNs, especially on low-resource edge devices. However, fixed-point arithmetic is not natural to the type of computations involved in neural networks. In this work, we explore ways to improve quantized CNNs using a PDE-based perspective and analysis. First, we harness the total variation (TV) approach to apply edge-aware smoothing to the feature maps throughout the network. This aims to reduce outliers in the distribution of values and promote piecewise constant maps, which are more suitable for quantization. Second, we consider symmetric and stable variants of common CNNs for image classification, and of graph convolutional networks for graph node classification. We demonstrate through several experiments that the property of forward stability preserves the action of a network under different quantization rates. As a result, stable quantized networks behave similarly to their non-quantized counterparts even though they rely on fewer parameters. We also find that, at times, stability even aids in improving accuracy. These properties are of particular interest for sensitive, resource-constrained, low-power or real-time applications such as autonomous driving.
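The abstract outlines two ingredients: TV-based, edge-aware smoothing of the feature maps, and symmetric, forward-stable layer updates, both intended to interact well with quantization. The sketch below illustrates these ideas in PyTorch under stated assumptions; the function names (tv_smooth, stable_symmetric_layer, fake_quantize), the step sizes, and the 4-bit uniform quantizer are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def tv_smooth(x, lam=0.1, iters=3, eps=1e-6):
    """A few explicit descent steps on an (anisotropic) total-variation energy.
    Pushes the feature map x of shape (B, C, H, W) toward piecewise-constant
    values while preserving edges, so its value distribution has fewer outliers."""
    for _ in range(iters):
        dx = x[..., :, 1:] - x[..., :, :-1]      # horizontal differences, width W-1
        dy = x[..., 1:, :] - x[..., :-1, :]      # vertical differences, height H-1
        nx = dx / (dx.abs() + eps)               # ~ sign(grad_x), subgradient of |.|
        ny = dy / (dy.abs() + eps)
        # divergence of the normalized gradient field (adjoint of the differences)
        div = (F.pad(nx, (0, 1)) - F.pad(nx, (1, 0))
               + F.pad(ny, (0, 0, 0, 1)) - F.pad(ny, (0, 0, 1, 0)))
        x = x + lam * div                        # x <- x - lam * dTV/dx
    return x

def fake_quantize(t, bits=4):
    """Uniform fake quantization to 2**bits levels over the tensor's range;
    a simple stand-in for fixed-point arithmetic, not the paper's exact scheme."""
    qmax = 2 ** bits - 1
    lo, hi = t.min(), t.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    return torch.round((t - lo) / scale) * scale + lo

def stable_symmetric_layer(x, K, h=0.1):
    """Forward-Euler step x <- x - h * K^T sigma(K x) of a symmetric residual layer.
    The Jacobian -K^T diag(sigma') K is negative semi-definite, which is the source
    of the forward stability discussed in the abstract."""
    z = F.conv2d(x, K, padding=1)                # K x
    z = torch.relu(z)                            # sigma(K x)
    z = F.conv_transpose2d(z, K, padding=1)      # K^T sigma(K x), same weights
    return x - h * z

# Illustrative usage on random data (shapes and values are placeholders).
x = torch.randn(1, 8, 32, 32)
K = 0.1 * torch.randn(8, 8, 3, 3)
y = stable_symmetric_layer(fake_quantize(tv_smooth(x)), fake_quantize(K))
```

Reusing the same kernel K for the convolution and its transpose makes the layer update dissipative, so perturbations introduced by quantizing weights and activations are not amplified as they propagate through the network.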

Journal

Research in the Mathematical Sciences (Springer Journals)

Published: Dec 1, 2022

Keywords: Quantization; Convolutional neural networks; Total variation; Neural ODEs; Stable symmetric architectures; 68T07; 68T10; 68T45
