Adaptive Signal Variances: CNN Initialization Through Modern Architectures

Deep convolutional neural networks (CNNs) have earned strong confidence in their performance on image-processing tasks. A CNN architecture comprises several types of layers, including convolution and max-pooling layers. Practitioners widely understand that the stability of learning depends on how the model parameters in each layer are initialized. Today, the de facto standard initialization scheme is the so-called Kaiming initialization developed by He et al. However, the Kaiming scheme was derived from a much simpler model than the CNN structures in use today, which have evolved since the scheme was introduced: the Kaiming model consists only of convolution and fully connected layers, ignoring the max-pooling layer and the global average pooling layer. In this study, we derive the initialization scheme anew, not from the simplified Kaiming model but directly from modern CNN architectures, and empirically investigate how the new initialization method performs compared with the de facto standard ones widely used today.
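For context, the standard Kaiming scheme referenced above scales each layer's weight variance by the layer's fan-in (for ReLU networks, variance 2/fan_in). The sketch below illustrates that baseline scheme only, not the re-derived adaptive scheme this study proposes; the function and parameter names are illustrative.

```python
import math
import random

def kaiming_normal_std(fan_in, gain=math.sqrt(2.0)):
    """Standard deviation for Kaiming (He) normal initialization.

    For a convolution layer, fan_in = in_channels * kernel_h * kernel_w.
    gain = sqrt(2) corresponds to ReLU activations, so the weight
    variance is gain^2 / fan_in = 2 / fan_in.
    """
    return gain / math.sqrt(fan_in)

def init_conv_weight(out_channels, in_channels, kh, kw, rng=random):
    """Draw a conv weight tensor (nested lists) from N(0, std^2)."""
    std = kaiming_normal_std(in_channels * kh * kw)
    return [[[[rng.gauss(0.0, std) for _ in range(kw)]
              for _ in range(kh)]
             for _ in range(in_channels)]
            for _ in range(out_channels)]
```

Note that the fan-in here counts only the convolution kernel; as the abstract points out, this derivation ignores max-pooling and global average pooling layers, which is precisely the gap the study addresses.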
