Prosodic information in audiovisual spoken-word recognition

Publication Type: Presentation
Year of Publication: 2007
Conference Name: Summer Meeting on Prosody
Authors: Jesse, Alexandra
Publisher: Nederlandse Vereniging voor Fonetische Wetenschappen
Conference Location: Nijmegen, The Netherlands
Abstract

Prosodic information influences the spoken-word recognition process. For example, auditory lexical stress information contributes to the activation of word candidates during spoken-word recognition (Cooper, Cutler, & Wales, 2002; van Donselaar, Koster, & Cutler, 2005; Soto-Faraco, Sebastián-Gallés, & Cutler, 2001). However, in conversation we typically not only hear but also see speakers. Visual speech (i.e., information from seeing the face of a speaker) is known to contribute to the robust recognition of speech segments (for an overview, see Massaro & Jesse, in press): segments are better recognized when presented as audiovisual than as auditory-only speech. Little is known, however, about visual speech's ability to provide prosodic information. The project reported here addresses whether visual speech informs about lexical stress and whether this information can alter lexical competition during the audiovisual spoken-word recognition process.

Dutch word pairs that overlap segmentally in their first two syllables but differ in lexical stress were selected (e.g., OCtopus vs. okTOber; capital letters mark primary stress). In an audiovisual version of a cross-modal repetition priming task, the first two syllables of these pairs were presented sentence-finally as auditory-only, visual-only, or audiovisual speech (e.g., 'The password was OCto-'). On these critical trials, the primes were followed by printed presentations of either matching ('octopus') or stress-mismatching ('oktober') target words. Filler trials included nonword targets. Response times needed to indicate whether the printed items were words or nonwords were analyzed. Replicating previous results for auditory-only conditions (e.g., van Donselaar et al., 2005), matching primes should speed up and mismatching primes slow down correct target recognition, compared to when unrelated primes precede the targets (e.g., 'The password was machi-', where 'machi-' was taken from 'machine'). If visual speech also conveys lexical stress information and this information indeed influences lexical activation, then target response times following audiovisual primes should be similarly modulated by overlap in lexical stress. Results are discussed within the framework of current models of auditory and audiovisual spoken-word recognition.

  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45, 207-228.
  • Donselaar, W. van, Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. The Quarterly Journal of Experimental Psychology, 58A, 251-273.
  • Massaro, D.W., & Jesse, A. (in press). Audiovisual speech perception and word recognition. In G. Gaskell (Ed.), The Oxford handbook of psycholinguistics. Oxford, U.K.: Oxford University Press.
  • Soto-Faraco, S., Sebastián-Gallés, N., & Cutler, A. (2001). Segmental and suprasegmental mismatch in lexical access. Journal of Memory and Language, 45, 412-432.