Description
We present DiffuseFace, a database of AI-generated face portraits designed to address limitations of traditional databases and enhance diversity in emotion studies. Traditional databases, such as CEED (Benda & Scherf, 2020) and FACES (Ebner et al., 2010), typically include photographs of real actors captured in controlled environments. While invaluable, their creation demands substantial time and financial resources and raises privacy concerns around sharing the material. These constraints may limit the representation of ethnicities and facial expressions, reducing diversity and hampering generalizability (Barrett et al., 2019). Recent research highlights the potential of generative AI to advance research methodology (Demszky et al., 2023). We extend this approach to emotion research by using generative AI to create a large, diverse face database at lower cost and with fewer constraints than traditional methods. DiffuseFace comprises 600 portraits of women and men from 20 nationalities, displaying 14 distinct emotional expressions (e.g., amusement, shame) and a neutral pose, generated with the open-source Stable Diffusion model. Building on prior research (Holland et al., 2019), we will collect data on attitudes toward generative AI, perceived realism, and emotion recognition ratings from 500 U.S. participants. We will also evaluate whether AI-generated stimuli are comparable to real-actor portraits on features critical to emotion research. Preliminary data from 260 individuals indicate that AI-generated faces are perceived as highly realistic and that their emotional expressions are well recognized. These findings underscore the potential of generative AI to efficiently produce diverse, high-quality stimuli and improve research generalizability.
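To illustrate how a stimulus set of this kind could be produced, below is a minimal sketch of a Stable Diffusion generation loop using the Hugging Face diffusers library. The checkpoint, prompt template, and attribute lists are illustrative assumptions, not the actual DiffuseFace pipeline; note that 20 nationalities × 2 genders × (14 emotions + 1 neutral pose) = 600, which matches the database size if one portrait is generated per combination.

```python
import itertools

import torch
from diffusers import StableDiffusionPipeline

# Load an open-source Stable Diffusion checkpoint (hypothetical choice;
# the abstract does not specify which version DiffuseFace used).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Illustrative subsets of the attributes; the full database spans
# 20 nationalities, 2 genders, and 14 emotions plus a neutral pose.
nationalities = ["Japanese", "Nigerian", "Mexican"]
genders = ["woman", "man"]
expressions = ["amusement", "shame", "neutral"]

# One portrait per nationality-gender-expression combination.
for nat, gender, expr in itertools.product(nationalities, genders, expressions):
    # Hypothetical prompt template for a standardized studio portrait.
    prompt = (
        f"frontal studio portrait photograph of a {nat} {gender} "
        f"showing a facial expression of {expr}, plain background, even lighting"
    )
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"{nat}_{gender}_{expr}.png")
```

Generating every combination from a fixed prompt template is what keeps cost low relative to photographing real actors, though in practice outputs would still need human screening for artifacts and expression accuracy.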