Residual networks classify inputs based on their neural transient dynamics


In this study, we analyze the input-output behavior of residual networks from a dynamical-systems point of view by disentangling the residual dynamics from the output activities before the classification stage. For a network with simple skip connections between successive layers, a logistic activation function, and weights shared across layers, we show analytically that there are cooperation and competition dynamics between the residuals corresponding to each input dimension. Interpreting such networks as nonlinear filters, the steady-state values of the residuals in the case of attractor networks are indicative of the common features between different input dimensions that the network has observed during training and has encoded in those components. In cases where the residuals do not converge to an attractor state, their internal dynamics are separable for each input class, and the network can reliably approximate the output. We provide analytical and empirical evidence that residual networks classify inputs based on the integration of the transient dynamics of the residuals, and we show how the network responds to input perturbations. We compare the network dynamics of a ResNet and a Multi-Layer Perceptron and show that the internal dynamics and the noise evolution are fundamentally different in these networks, and that ResNets are more robust to noisy inputs. Based on these findings, we also develop a new method to adjust the depth of residual networks during training. As it turns out, after pruning the depth of a ResNet with this algorithm, the network is still capable of classifying inputs with high accuracy.
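
As a concrete illustration of the dynamical-systems view described in the abstract, the sketch below iterates a weight-tied residual map x_{t+1} = x_t + sigma(W x_t + b) with a logistic nonlinearity and reads out a class label from the accumulated (integrated) residual activity along the trajectory. This is a minimal sketch under stated assumptions; the weight matrix, depth, and linear readout are hypothetical placeholders, not the authors' trained model or pruning algorithm.

```python
import numpy as np

def logistic(z):
    """Logistic (sigmoid) activation."""
    return 1.0 / (1.0 + np.exp(-z))

def residual_trajectory(x0, W, b, depth):
    """Iterate the weight-tied residual map x_{t+1} = x_t + logistic(W x_t + b)."""
    xs = [x0]
    for _ in range(depth):
        xs.append(xs[-1] + logistic(W @ xs[-1] + b))
    return np.stack(xs)  # shape: (depth + 1, dim)

# Hypothetical parameters for illustration only.
rng = np.random.default_rng(0)
dim, depth, n_classes = 8, 20, 3
W = rng.normal(scale=0.3, size=(dim, dim))   # weights shared across all layers
b = rng.normal(scale=0.1, size=dim)
readout = rng.normal(size=(n_classes, dim))  # placeholder classification head

x0 = rng.normal(size=dim)                    # an input sample
traj = residual_trajectory(x0, W, b, depth)

# Read out the class from the integrated transient dynamics of the residuals,
# here approximated by averaging the residual states along the trajectory.
integrated = traj[1:].mean(axis=0)
predicted_class = int(np.argmax(readout @ integrated))
print(predicted_class)
```

In this picture, each layer corresponds to one time step of the same map, so depth plays the role of integration time; whether the trajectory settles into an attractor or remains transient over that horizon determines which of the two regimes discussed in the abstract applies.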
