Author

Jie Cai

Date of Award

Fall 2019

Document Type

Open Access Dissertation

Department

Computer Science and Engineering

First Advisor

Yan Tong

Second Advisor

Song Wang

Abstract

Over the past few years, deep learning, e.g., Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), has shown promise on facial expression recognition. However, performance degrades dramatically in close-to-real-world settings due to high intra-class variations and high inter-class similarities introduced by subtle facial appearance changes, head pose variations, illumination changes, occlusions, and identity-related attributes, e.g., age, race, and gender. In this work, we developed two novel CNN frameworks and one novel GAN approach to learn discriminative features for facial expression recognition.

First, a novel island loss is proposed to enhance the discriminative power of learned deep features. Specifically, the island loss is designed to reduce the intra-class variations while simultaneously enlarging the inter-class differences. Experimental results on three posed facial expression datasets and, more importantly, two spontaneous facial expression datasets have shown that the proposed island loss outperforms baseline CNNs trained with the traditional softmax loss or the center loss, and achieves performance better than or comparable to the state-of-the-art methods.
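To make the two terms concrete, the following is a minimal NumPy sketch of an island-style loss: a center-loss term pulls each feature toward its class center (reducing intra-class variation), and a pairwise term penalizes cosine similarity between distinct class centers (pushing the "islands" apart). The function name, the balancing weight `lam`, and the exact form are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def island_loss(features, labels, centers, lam=0.5):
    """Toy sketch of an island-style loss (illustrative, not the
    dissertation's exact formulation).

    features: (N, D) deep features for a mini-batch
    labels:   (N,)   class index of each feature
    centers:  (K, D) one learnable center per expression class
    lam:      assumed weight balancing the two terms
    """
    # Intra-class term (center loss): squared distance of each feature
    # to its own class center.
    intra = 0.5 * np.sum((features - centers[labels]) ** 2)

    # Inter-class term: sum of (cosine similarity + 1) over all ordered
    # pairs of distinct centers; it is smallest when centers point away
    # from each other, enlarging inter-class differences.
    unit = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = unit @ unit.T
    inter = np.sum(cos + 1.0) - np.trace(cos + 1.0)  # drop j == k terms

    return intra + lam * inter
```

With two opposite centers and features sitting exactly on them, both terms vanish; moving a feature off its center or making centers similar raises the loss.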

Second, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to identity-related attributes, so that the final features are less affected by those attributes. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples, making the best use of available data. Experimental results on three posed facial expression datasets as well as four spontaneous facial expression datasets have demonstrated that, by explicitly modeling attributes, the proposed PAT-CNN achieves the best performance compared with state-of-the-art methods. Impressively, the PAT-CNN using a single model achieves the best performance on the SFEW test dataset, compared with state-of-the-art methods that use ensembles of hundreds of CNNs.
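The probabilistic routing idea behind an attribute tree can be sketched as marginalizing the attribute out: each attribute-specific branch produces an expression posterior, and the branches are mixed according to the attribute posterior for the input. This one-level NumPy sketch is an assumption about the mechanism, not the PAT module itself.

```python
import numpy as np

def pat_predict(attr_probs, branch_probs):
    """Toy one-level probabilistic attribute routing (illustrative).

    attr_probs:   (A,)   posterior over attribute values for the input,
                         e.g. p(gender | x)
    branch_probs: (A, E) expression posterior from each attribute-specific
                         branch, i.e. p(expr | x, a)

    Returns p(expr | x) = sum_a p(a | x) * p(expr | x, a), so each branch
    contributes in proportion to how likely its attribute value is.
    """
    return attr_probs @ branch_probs
```

Because the mixture is a convex combination of valid distributions, the output is itself a valid expression distribution.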

Last, we present a novel Identity-Free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce the high inter-subject variations caused by identity-related attributes, e.g., age, race, and gender, for facial expression recognition. Specifically, for any given input facial expression image, a conditional generative model was developed to transform it into an expressive face with a synthetic "average" identity and the same expression as the input face image. Since the generated images share the same synthetic "average" identity, they differ from each other only in the displayed expressions and thus can be used for identity-free facial expression classification. In this work, an end-to-end system was developed to perform facial expression generation and facial expression recognition in the IF-GAN framework. Experimental results on four well-known facial expression datasets, including a spontaneous facial expression dataset, have demonstrated that the proposed IF-GAN outperforms the baseline CNN model and achieves the best performance compared with the state-of-the-art methods for facial expression recognition.
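The identity-free classification idea can be illustrated with a toy factorized face representation: a stand-in "generator" replaces the identity component with a fixed synthetic average identity while keeping the expression component, so any downstream classifier sees faces that differ only by expression. Everything here (the vector factorization, `AVERAGE_IDENTITY`, the nearest-prototype classifier) is a hypothetical stand-in for the actual GAN pipeline.

```python
import numpy as np

AVERAGE_IDENTITY = np.zeros(4)  # hypothetical fixed "average" identity code

def to_identity_free(identity, expression):
    """Toy stand-in for the IF-GAN generator: swap the input face's
    identity component for the shared synthetic "average" identity,
    preserving only the expression component."""
    del identity  # the generated face no longer depends on the subject
    return np.concatenate([AVERAGE_IDENTITY, expression])

def classify_expression(face, prototypes):
    """Nearest-prototype expression classifier on the generated face."""
    dists = np.linalg.norm(prototypes - face, axis=1)
    return int(np.argmin(dists))
```

In this toy setting, two different subjects showing the same expression map to identical identity-free faces, so the classifier's decision depends only on expression, which is the point of the IF-GAN design.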

Rights

© 2019, Jie Cai
