
DacFER: Dual Attention Correction Learning for Efficient Facial Expression Recognition

EasyChair Preprint no. 11287

5 pages · Date: November 13, 2023


Facial expression recognition (FER) is an important task in computer vision; its applications continue to grow across many fields, and research on it is receiving increasing attention. However, sample noise and label noise remain challenges that cannot be ignored in FER. We propose a dual attention correction approach that aims to raise the accuracy of local attention and to emphasize the importance of global attention. Specifically, local attention is corrected by reweighting the importance of each channel through channel attention, which suppresses features that are useless for the FER task, enhances useful ones, and gives the classification loss a more accurate basis for classification. Global attention is corrected by drawing attention to more global information with the help of spatial attention shift consistency, thereby avoiding classification errors caused by "errors" in local attention. Under the joint influence of the classification loss and the spatial-shift attention consistency loss, the DacFER method addresses input and label corruption and achieves recognition performance comparable to state-of-the-art methods on the large-scale in-the-wild datasets RAF-DB and AffectNet. Our code will be made publicly available.
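The two correction mechanisms described above can be sketched in code. This is a minimal numpy illustration under our reading of the abstract, not the authors' implementation: `channel_attention` is a generic squeeze-and-excitation-style channel reweighting (weights are random placeholders standing in for learned layers), and `shift_consistency_loss` is an assumed MSE form of the spatial attention shift consistency term.

```python
import numpy as np

def channel_attention(features, reduction=4):
    """Generic squeeze-and-excitation-style channel attention (a sketch,
    not the paper's exact module): reweight each channel so features
    useful for the FER task are enhanced and useless ones suppressed."""
    c = features.shape[0]                       # features: (C, H, W)
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP; random weights stand in for learned ones
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0)       # ReLU
    weights = 1 / (1 + np.exp(-(w2 @ hidden)))  # sigmoid -> per-channel (0, 1)
    # Reweight: broadcast channel weights over the spatial dimensions
    return features * weights[:, None, None]

def shift_consistency_loss(attn_orig, attn_shifted, shift=2):
    """Assumed form of spatial attention shift consistency: the attention
    map the network produces on an image shifted `shift` pixels should
    match the correspondingly shifted attention map of the original."""
    return float(np.mean((np.roll(attn_orig, shift, axis=-1) - attn_shifted) ** 2))
```

In training, `attn_shifted` would come from a second forward pass on the shifted input; the consistency term then penalizes attention that latches onto fixed local positions rather than the expression-relevant regions that move with the face.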

Keyphrases: dual attention, facial expression recognition, noisy labels

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:11287,
  author = {Rui Sun and Zhaoli Zhang and Hai Liu},
  title = {DacFER: Dual Attention Correction Learning for Efficient Facial Expression Recognition},
  howpublished = {EasyChair Preprint no. 11287},
  year = {EasyChair, 2023}}