By Zhenguang (Garry) Cai and Jenni Rodd (@jennirodd)
Have you ever worried about being misunderstood when speaking to someone of a different dialect? For instance, if you are an American tourist travelling in London, do you worry that you might confuse Londoners when you say “gas” for car fuel or “bonnet” for a woman’s hat, knowing that these words have different interpretations in the UK (a gaseous substance and a car part, respectively)?
Well, a recent study in Cognitive Psychology suggests that you might be worrying too much. Cai et al. (2017) found that although Brits usually had British interpretations for words such as “bonnet” when these words were spoken by another Brit, they became more likely to access the American interpretations (e.g., hat meaning for “bonnet”) when these words were spoken by someone with an American accent.
For instance, when asked to provide the first thing that came to mind when hearing strongly accented versions of ambiguous words such as “gas” and “bonnet”, Brits were more likely to provide responses relating to the American meaning (e.g., “car” and “hat”) when these words were spoken by an American than by a Brit.
Importantly, as shown in the following figure, this effect was present even when individual speech tokens were modified to have a neutral accent by morphing together British and American speech tokens: the same ambiguous speech tokens were interpreted differently depending on the accent context in which they were embedded. This ‘accent effect’ was similar in size to the effect found for the strongly accented stimuli that were also included in the experiment.
Here is one of our example morphed speech tokens, which is a balanced combination of tokens produced by an American speaker and a British speaker:
This finding strongly suggests that listeners are NOT accessing a word’s meaning based on the accent cues in the specific word itself. If this were the case, then we’d expect to see no accent effect for the neutral morphs. Instead, listeners seem to accumulate information about the speaker’s accent from the incoming stream of words and use this to infer their dialect identity. Listeners then use their knowledge of how words are used in that dialect (e.g., that “bonnet” is more likely to mean a hat in American English) to interpret all subsequent words coming from that speaker.
Participants in the study had little awareness of the accent manipulation: they always heard only a British or an American accent (never both), and they were given no information about the speaker, merely picking up on the accent in the audio recordings. In follow-up experiments, we found that participants were influenced by the speaker’s accent even when they had to respond so quickly that they were unlikely to deliberately consider the speaker’s identity.
On the basis of these findings, we have proposed that spoken word recognition involves at least two pathways. The lexical-semantic pathway allows listeners to identify words and access their meanings. This pathway does not seem to be influenced by the accent of the word. In addition, a separate (but interacting) indexical pathway provides information about who the speaker is (their gender, age, accent, etc.; see also this study and this study). Such a speaker model then nudges meaning access by, for example, boosting the hat meaning of “bonnet” when that word is spoken in an American accent.