Joe Toscano
On Monday, February 4th, at 2:00 p.m., Joe Toscano will give a colloquium in the UCSD Linguistics Department, in AP&M 4301.
Speech as information: Computational principles and neural measures of speech processing
Research on speech perception has long sought to identify the acoustic-phonetic cues that listeners use to distinguish phonological categories. Some approaches to understanding this process have focused on the low-level properties of the sound signal, while others have emphasized more abstract representations. Here, I argue instead that thinking about speech perception as an information processing problem allows us to build a more complete model of how listeners map continuous acoustic cues onto more abstract linguistic categories. Crucially, this type of model can be implemented using domain-general learning principles and relatively simple combinations of phonetic cues. I present evidence for this using several approaches, including (1) event-related potential (ERP) experiments that examine cortical responses to differences in speech sounds, (2) computational work that examines how statistical learning can be used to acquire speech sound categories over development and to adapt those categories in adulthood, and (3) acoustic-phonetic analyses that allow us to determine which phonetic cues are most informative for a given phonological distinction and how those cues are weighted by listeners. Together, the results of these studies suggest that general mechanisms of statistical learning and cue integration can provide useful models for understanding how listeners recognize speech in a variety of contexts.
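The statistical-learning idea in point (2) is often illustrated with unsupervised clustering of a continuous acoustic cue. Below is a minimal, hypothetical sketch (not from the talk): a two-component Gaussian mixture fit by expectation-maximization to unlabeled voice-onset-time (VOT) values, recovering a voiced/voiceless contrast from the distribution alone. The VOT values (roughly 5 ms vs. 50 ms) are invented English-like numbers for demonstration.

```python
import math
import random

random.seed(0)

# Simulated unlabeled VOT tokens: a /b/-like cluster near 5 ms and a
# /p/-like cluster near 50 ms (illustrative values only).
vot = ([random.gauss(5, 5) for _ in range(500)] +
       [random.gauss(50, 10) for _ in range(500)])

# Two candidate categories, initialized at the extremes of the data.
mu = [min(vot), max(vot)]
sigma = [10.0, 10.0]
pi = [0.5, 0.5]

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(100):
    # E-step: soft-assign each token to the two categories.
    resp = []
    for x in vot:
        d = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        total = sum(d)
        resp.append([dk / total for dk in d])
    # M-step: update each category's mean, spread, and weight.
    for k in range(2):
        n_k = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, vot)) / n_k
        sigma[k] = math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, vot)) / n_k)
        pi[k] = n_k / len(vot)

# The learner has recovered two categories; means should land near
# the 5 ms and 50 ms clusters it was never told about.
print(sorted(round(m, 1) for m in mu))
```

The same logic scales to multiple cues by replacing the one-dimensional Gaussians with multivariate ones, which is one way to think about how cue integration and category learning can share a single domain-general mechanism.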