INVITATION

Agenda:
11:00 – 11:05 Moderator David Meredith welcomes the guests
11:05 – 11:50 Presentation by PhD student Chaudhary Muhammad Aqdus Ilyas
11:50 – 12:30 Lunch and Coffee Break
12:30 – 14:30 Questions
14:30 – 15:00 Assessment
15:30 Announcement from the committee
Sign Up & Questions:
The defense will be held via Zoom. To sign up or to pose questions regarding the PhD defense, please contact secretary Kristina Wagner Røjen.
Phone: +4599409926
Email: kwro@create.aau.dk
Assessment Committee:
Associate Professor David Meredith (Chairman)
Department of Architecture, Design & Media Technology, Aalborg University, Denmark
Professor Britta Wrede
Bielefeld University, Germany
Professor Dirk Heylen
University of Twente, the Netherlands
Supervisors:
Professor with Specific Responsibilities Matthias Rehm
Department of Architecture, Design & Media Technology, Aalborg University, Denmark
Professor Kamal Nasrollahi
Department of Architecture, Design & Media Technology, Aalborg University, Denmark
Abstract:
Humans are capable of producing a large number of facial expressions through the activation of facial muscles. Facial expression recognition (FER) is an active research area in which human emotions are determined by classifying facial muscle movements. Automatic recognition of facial emotions is of prime importance for a wide range of applications, such as biometric security and surveillance, human-machine and human-robot interaction, and the identification of pain, depression, and neurological disorders. This dissertation investigates methods for facial expression analysis of people suffering from traumatic brain injury (TBI) and develops a system based on artificial emotional intelligence (AEI) for practical applications.
The dissertation focuses on extracting emotional signals from patients with TBI using computer vision techniques and on using a social robot, Pepper, to assist in rehabilitation. The work is organized into three themes: first, multimodal data collection from patients with brain injury; second, extraction and recognition of facial emotions; and third, the application of the extracted emotional signals in effective human-robot interaction for rehabilitation and social interaction.
Extracting emotional signals from patients with brain injury is a complex procedure due to unique and diverse psychological, physiological, and behavioral issues, such as non-cooperation; facial and body muscle paralysis; upper- or lower-limb disabilities; and impaired cognitive, motor, and hearing capabilities. Interpreting subtle changes in the emotional signals of people with brain injury is necessary for successful communication and for implementing affect-based strategies.
For data collection from subjects with brain injury, three camera sensors are used: RGB, thermal, and depth. New methods are introduced to gather good-quality data for FER. The thesis also presents a face quality assessment method to ensure a high-quality database in the face-log system.
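As a rough illustration of the kind of quality screening a face-log system might perform, the Python sketch below scores face crops by sharpness and exposure and retains only the best-scoring frames. The scoring heuristics, thresholds, and function names are illustrative assumptions, not the method developed in the thesis.

```python
import cv2

def face_quality(face_bgr, min_side=64):
    """Heuristic quality score for a face crop (illustrative, not the thesis method)."""
    h, w = face_bgr.shape[:2]
    if min(h, w) < min_side:  # too small to be useful for expression analysis
        return 0.0
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry
    brightness = gray.mean() / 255.0                   # 0 = black, 1 = white
    exposure = 1.0 - abs(brightness - 0.5) * 2.0       # penalize over/under-exposure
    return float(min(sharpness / 100.0, 1.0) * exposure)

def update_face_log(face_log, face_bgr, max_size=10):
    """Keep only the highest-quality crops seen so far in the face-log."""
    face_log.append((face_quality(face_bgr), face_bgr))
    face_log.sort(key=lambda item: item[0], reverse=True)
    del face_log[max_size:]
    return face_log
```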
In facial expression recognition, the dissertation makes three main contributions: (i) development of a state-of-the-art deep learning architecture for the extraction and analysis of emotional signals, exploiting visual and temporal networks; (ii) exploration of techniques for fusing facial features from different visual modalities to improve the predictive power of the final model; and (iii) application of deep transfer learning to overcome the challenges associated with the database acquired from the subjects.
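As a hedged sketch of what feature-level fusion across visual modalities can look like, the PyTorch example below encodes RGB, thermal, and depth inputs with separate small CNNs and concatenates their embeddings before classification. The encoder design, layer sizes, and the assumed seven emotion classes are illustrative choices, not the architecture from the dissertation.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small CNN encoder for one modality (RGB, thermal, or depth)."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # -> (B, 64)

class FusionFER(nn.Module):
    """Feature-level fusion: concatenate per-modality embeddings, then classify."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.rgb = ModalityEncoder(3)
        self.thermal = ModalityEncoder(1)
        self.depth = ModalityEncoder(1)
        self.classifier = nn.Linear(64 * 3, num_classes)

    def forward(self, rgb, thermal, depth):
        fused = torch.cat(
            [self.rgb(rgb), self.thermal(thermal), self.depth(depth)], dim=1
        )
        return self.classifier(fused)

# Usage with dummy batches of 64x64 frames:
# logits = FusionFER()(torch.randn(2, 3, 64, 64),
#                      torch.randn(2, 1, 64, 64),
#                      torch.randn(2, 1, 64, 64))
```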
For human-robot interaction, the Pepper robot is introduced, equipped with a deep-learning model for emotion recognition. The study emphasizes the real therapeutic value of such a system for stroke rehabilitation, supported by tools that provide assessment and feedback in neuro-rehabilitation centers.