Emotional states are expressed through facial expressions. Happiness, surprise, sadness, anger, fear, and disgust are the primary emotions and the most studied ones. Combinations of primary states, i.e. compound emotions, are also well recognised (e.g. happily surprised vs. angrily surprised). Twelve such compound emotions are described. Appall (feeling disgust and anger, with the emphasis on disgust), hate (feeling disgust and anger, with the emphasis on anger), and awe (feeling fear and wonder, with the emphasis on wonder) are three additional compound emotions.
Could computers use learning algorithms to correctly identify these emotional states? Shichuan Du, Yong Tao, and Aleix M. Martinez from Ohio State University report an interesting development in this field. They tested whether images of the 21 facial expressions of emotion described above (primary plus compound) are visually discriminable by a computer.
Distinct movement patterns of muscle groups make the different emotions distinguishable from one another. All emotional states on the images in the study database (the 21 categories plus a neutral expression) were coded using the Facial Action Coding System (FACS) of Ekman and Friesen, which provides a clear, compact representation of the muscle activations of a facial expression. Each Action Unit codes the fundamental actions of individual muscles or muscle groups typically seen while producing facial expressions of emotion. The computer was trained on independent databases to detect 94 landmark points that define the shape of the face. The algorithm uses 8,742 features (dimensions) defining the shape of the face, while pixel information is used to define the appearance of the face.
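The FACS coding described above can be pictured as sets of numbered Action Units. A minimal sketch follows, using the standard FACS prototypes for happiness (AU 6 + 12) and surprise (AU 1 + 2 + 5 + 26); representing a compound emotion as the union of its components' Action Units is a simplification of the paper's actual findings, shown here only to illustrate the idea of a compact AU representation.

```python
# Prototypical FACS Action Units for two basic emotions
# (standard prototypes: happiness = AU 6 + 12,
#  surprise = AU 1 + 2 + 5 + 26).
happiness = {6, 12}
surprise = {1, 2, 5, 26}

# The compound category "happily surprised" largely combines the
# Action Units of its component emotions (a simplification; the
# study reports which AUs are actually observed for each compound).
happily_surprised = happiness | surprise
print(sorted(happily_surprised))
```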
Basic emotions: The successful classification rates were 90% when using shape features, 92% when using appearance features, and 97% when using both shape and appearance.
Compound emotions: The classification accuracy for the 5,060 images covering the 21 categories of basic and compound emotions plus neutral, from the 230 identities in the study database, was 74% when using shape features only, 70% when using appearance features only, and 77% when shape and appearance were combined.
Conclusion: A larger number of emotions than the six basic ones is reliably recognisable. Machines can make accurate assessments of emotional state as revealed by facial expression. This opens a new area of research in face recognition that will take human–computer interfaces to a new level of complexity.