SelfME: Self-Supervised Motion Learning for Micro-Expression Recognition

Xinqi Fan, Xueli Chen, Mingjie Jiang, Ali Raza Shahid, Hong Yan

Research output: Contribution to conference › Paper › peer-review

Abstract

Facial micro-expression (ME) refers to a brief, spontaneous facial movement that can convey a person's genuine emotion. It has numerous applications, including lie detection and criminal analysis. Although deep learning-based ME recognition (MER) methods have achieved considerable success, they still require sophisticated pre-processing with conventional optical flow-based methods to extract facial motions as inputs. To overcome this limitation, we propose a novel MER framework that uses self-supervised learning to extract facial motion for ME (SelfME). To the best of our knowledge, this is the first work to apply an automatically self-learned motion technique to MER. However, the self-supervised motion learning method may ignore symmetrical facial actions on the left and right sides of the face when extracting fine features. To tackle this problem, we developed a symmetric contrastive vision transformer (SCViT) that constrains the learning of similar facial action features for the left and right halves of the face. Experiments on two benchmark datasets show that our method achieves state-of-the-art performance, and ablation studies demonstrate the effectiveness of each component.
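The symmetric constraint described in the abstract can be viewed as a contrastive objective: features extracted from the left half of a face should agree with features from the (mirrored) right half of the same face, while differing from those of other samples. The sketch below is a minimal, hypothetical illustration of such an InfoNCE-style loss, not the authors' actual SCViT implementation; the feature tensors and temperature value are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize feature vectors to unit length for cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def symmetric_contrastive_loss(left_feats, right_feats, temperature=0.1):
    """Hypothetical symmetric contrastive loss (InfoNCE-style sketch).

    left_feats, right_feats: (B, D) arrays of features from the left
    half of each face and the mirrored right half of the same face.
    The positive pair for sample i is (left_i, right_i); all other
    samples in the batch act as negatives.
    """
    z_l = l2_normalize(left_feats)
    z_r = l2_normalize(right_feats)
    logits = z_l @ z_r.T / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal
```

Under this formulation, the loss is minimized when each sample's left-half and right-half features align with each other more strongly than with any other sample in the batch, which matches the stated goal of encouraging similar facial action features on both sides of the face.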
Original language: English
Number of pages: 10
DOIs
Publication status: Published - 2023

