The Sabancı University Dynamic Face Database (SUDFace): development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions


Şentürk, Yağmur Damla and Tavacıoğlu, Ebru Ecem and Duymaz, M. İlker and Sayim, Bilge and Alp, Nihan (2022) The Sabancı University Dynamic Face Database (SUDFace): development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behavior Research Methods. ISSN 1554-351X (Print), 1554-3528 (Online). Published Online First. https://dx.doi.org/10.3758/s13428-022-01951-z

There is a more recent version of this item available.
Full text not available from this repository.

Abstract

Faces convey a wide range of information, including one's identity and emotional and mental states. Face perception is a major research topic in many fields, such as cognitive science, social psychology, and neuroscience. Stimuli are frequently selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored at a resolution of 1920 × 1080 pixels and a frame rate of 60 Hz. The multimodal database comprises three videos of each human model in frontal view, recorded in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main aim of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speech characteristics (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Item Type: Article
Uncontrolled Keywords: Dynamic face; Face database; Face recognition; Natural face; Neutral face; Speech recognition
Divisions: Faculty of Arts and Social Sciences
Depositing User: Nihan Alp
Date Deposited: 21 Sep 2022 11:26
Last Modified: 21 Sep 2022 11:26
URI: https://research.sabanciuniv.edu/id/eprint/44501
