Facial Expression Recognition in the Wild via Deep Attentive Center Loss

Learning discriminative features for Facial Expression Recognition (FER) in the wild using Convolutional Neural Networks (CNNs) is a non-trivial task due to the significant intra-class variations and inter-class similarities. Deep Metric Learning (DML) approaches such as center loss and its variants, jointly optimized with softmax loss, have been adopted in many FER methods to enhance the discriminative power of learned features in the embedding space. However, equally supervising all feature elements with the metric learning objective may emphasize irrelevant features and ultimately degrade the generalization ability of the learning algorithm. We propose a Deep Attentive Center Loss (DACL) method to adaptively select a subset of significant feature elements for enhanced discrimination. The proposed DACL integrates an attention mechanism to estimate attention weights correlated with feature importance, using the intermediate spatial feature maps of the CNN as context. The estimated weights accommodate the sparse formulation of center loss to selectively achieve intra-class compactness and inter-class separation for the relevant information in the embedding space. An extensive study on two widely used in-the-wild FER datasets demonstrates the superiority of the proposed DACL method compared to state-of-the-art methods.
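
To make the core idea concrete, a minimal sketch of an attention-weighted (sparse) center loss is given below; the notation (m training samples, d-dimensional embeddings x_i, class centers c_{y_i}, attention weights a_{ij}, balance term \lambda) is assumed for illustration and may differ from the paper's own formulation.

    % Vanilla center loss pulls every dimension of the embedding x_i toward its class center:
    %   L_C = \frac{1}{2} \sum_{i=1}^{m} \lVert x_i - c_{y_i} \rVert_2^2
    % The attentive (sparse) variant weights each feature dimension j by an attention
    % score a_{ij} estimated from intermediate CNN feature maps, so that only relevant
    % dimensions are driven toward the class center:
    L_{SC} = \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{d} a_{ij} \left( x_{ij} - c_{y_i, j} \right)^{2},
    \qquad L = L_{\mathrm{softmax}} + \lambda \, L_{SC}

In this sketch, a_{ij} \in [0, 1] acts as a soft feature selector: dimensions with small weights contribute little to the intra-class compactness objective, while the joint loss L retains the usual softmax term balanced by the hyperparameter \lambda.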
