
FALCON: Lightweight and Accurate Convolution

How can we efficiently compress Convolutional Neural Networks (CNNs) while retaining their accuracy on classification tasks? Depthwise Separable Convolution (DSConv), which replaces a standard convolution with a depthwise convolution followed by a pointwise convolution, has been used to build lightweight architectures. However, previous works based on depthwise separable convolution are limited when compressing a trained CNN model since 1) they are mostly heuristic approaches without a precise understanding of their relation to standard convolution, and 2) their accuracy does not match that of standard convolution. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNNs. FALCON uses GEP, our proposed mathematical formulation for approximating the standard convolution kernel, to interpret existing convolution methods based on depthwise separable convolution. By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolution via GEP, FALCON achieves accuracy close to that of the trained standard model. Furthermore, this interpretation leads to a generalized version, rank-k FALCON, which performs k independent FALCON operations and sums up the results. Experiments show that FALCON 1) provides higher accuracy than existing methods based on depthwise separable convolution and tensor decomposition, and 2) reduces the number of parameters and FLOPs of standard convolution by up to a factor of 8 while ensuring similar accuracy. We also demonstrate that rank-k FALCON further improves accuracy while sacrificing a bit of the compression and computation reduction rates.
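To make the depthwise separable convolution building block referenced above concrete, the sketch below compares a standard convolution with its depthwise + pointwise counterpart in PyTorch. It is only an illustration of DSConv, not of FALCON itself (which additionally fits the factors to a trained kernel via GEP); the channel and kernel sizes (64 -> 128 channels, 3x3 kernels) are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

# Standard convolution: C_in=64 -> C_out=128 with a 3x3 kernel.
standard = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)

# Depthwise separable convolution: a 3x3 depthwise convolution
# (one filter per input channel, groups=C_in) followed by a 1x1
# pointwise convolution that mixes channels.
depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)
pointwise = nn.Conv2d(64, 128, kernel_size=1)

x = torch.randn(1, 64, 32, 32)
assert standard(x).shape == pointwise(depthwise(x)).shape  # same output shape

# Parameter counts: standard convolution uses C_in*C_out*k*k weights,
# while DSConv uses C_in*k*k + C_in*C_out, roughly k*k times fewer.
n_std = sum(p.numel() for p in standard.parameters())
n_dsc = sum(p.numel() for p in depthwise.parameters()) + \
        sum(p.numel() for p in pointwise.parameters())
print(n_std, n_dsc)  # 73856 vs. 8960 with these (assumed) sizes
```

With these example sizes the parameter reduction is roughly 8x, which is in line with the compression factor reported in the abstract; the actual FALCON method also determines how to order and fit the depthwise and pointwise factors from a trained model.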