Sociophonetic Variation and Human Interaction With “AI”-Based Systems
Rockefeller Hall Room 300
Nicole Holliday, Acting Associate Professor of Linguistics at UC Berkeley, will present her latest research on how tone-detection systems and digital voice assistants like Siri and Alexa reinforce linguistic and racial bias.
As technology that relies on speech is increasingly integrated into modern American society, voice assistants and “AI”-based speech recognition systems are becoming a more significant part of our everyday lives. This talk will present the results of three studies that focus on social perception of voice assistants, voice quality variation among the assistants themselves, and how “AI” systems that evaluate speech and “tone of voice” can reinforce systematic linguistic bias. Results of the first study demonstrate how listeners engage in racialized judgments of digital voice assistants and how these judgments interact with perceptions of the personality of such assistants, providing evidence that listeners personify these voices. Results of the second study shed light on the voice quality features that may trigger judgments of speaker race and personal characteristics, even when the speaker is non-human. Finally, results of the third study show the ways in which speech recognition technology can reinforce and perpetuate bias against already marginalized groups of speakers. A more comprehensive understanding of how sociolinguistic variation interacts with the design of such systems may help us to understand how listeners process variation and make judgments of voices, both digital and human. Additionally, a thorough analysis of how computational systems police speaker behavior can help us address systematic inequality as the linguistic line between humans and computers becomes increasingly porous.
This event is open to the public.
Sponsored by the Anthropology Department; Data Science & Society; Science, Technology, and Society; Media Studies; Africana Studies; and the Dean of Faculty Office.