ResNet or DenseNet? Introducing Dense Shortcuts to ResNet

ResNet or DenseNet? Nowadays, most deep-learning-based approaches are built on seminal backbone networks, the two arguably most famous of which are ResNet and DenseNet. Despite their competitive performance and overwhelming popularity, both have inherent drawbacks. For ResNet, the identity shortcut that stabilizes training also limits its representation capacity, while DenseNet achieves higher capacity through multi-layer feature concatenation. However, the dense concatenation introduces a new problem: it requires more GPU memory and longer training time. Partly because of this, choosing between ResNet and DenseNet is not trivial. This paper provides a unified perspective of dense summation to analyze both networks, which facilitates a better understanding of their core difference. We further propose dense weighted normalized shortcuts as a solution to the dilemma between them. The proposed dense shortcut inherits the simple design philosophy of ResNet and DenseNet. Experimental results on several benchmark datasets show that the proposed DSNet achieves significantly better results than ResNet, and comparable performance to DenseNet while requiring fewer computational resources.
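The abstract does not spell out the exact formulation, but the idea of replacing ResNet's single identity shortcut with weighted, normalized shortcuts from all earlier blocks can be sketched in a few lines of PyTorch. The sketch below is a hypothetical illustration, not the paper's verbatim design: the name DenseShortcutBlock, the num_prev parameter, and the choice of BatchNorm as the normalization are all illustrative assumptions.

import torch
import torch.nn as nn


class DenseShortcutBlock(nn.Module):
    """Sketch of a residual block whose output is summed with learnable,
    normalized weighted shortcuts from ALL earlier blocks, rather than a
    single identity shortcut (ResNet) or channel concatenation (DenseNet).
    Illustrative only; the paper's exact block design may differ."""

    def __init__(self, channels, num_prev):
        super().__init__()
        # Ordinary residual branch F(x), as in a basic ResNet block.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )
        # One normalization layer and one scalar weight per incoming shortcut.
        self.norms = nn.ModuleList(
            [nn.BatchNorm2d(channels) for _ in range(num_prev)]
        )
        self.weights = nn.Parameter(torch.ones(num_prev))

    def forward(self, prev_features):
        # prev_features holds the outputs of all earlier blocks; the last
        # one is also used as the main input to the residual branch.
        out = self.body(prev_features[-1])
        # Dense shortcut: a weighted sum of normalized earlier features.
        # Summation keeps the channel count constant, unlike DenseNet's
        # concatenation, so memory does not grow with depth.
        for w, norm, feat in zip(self.weights, self.norms, prev_features):
            out = out + w * norm(feat)
        return out


# Usage: each block receives the list of all earlier feature maps.
blocks = [DenseShortcutBlock(channels=64, num_prev=i + 1) for i in range(3)]
feats = [torch.randn(2, 64, 32, 32)]  # stem output
for block in blocks:
    feats.append(block(feats))

The key design point this sketch captures is that the shortcuts are aggregated by summation rather than concatenation, which is how such a network could keep ResNet-like memory cost while still drawing on features from every preceding block.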
