What can lexical decision tasks tell us about bilingual language processing?

Cognates are words which share their form and meaning in multiple languages, like “winter” in Dutch and English. A wealth of research on bilingual language processing suggests that bilinguals process cognates more quickly than words that exist in only one language. Researchers take this finding as evidence that all the words a bilingual knows are stored in a single mental lexicon. Much of this research has made use of lexical decision tasks, but such tasks are quite artificial. In recent experiments we found that the presence or absence of this cognate facilitation effect depends on the other stimuli included in the task. In this blog we argue, on the basis of this finding, that evidence for a cognate facilitation effect in lexical decision tasks alone should not be taken as proof for this central assumption in bilingualism research.

By Eva Poort (@EvaDPoort) and Jenni Rodd (@jennirodd)

 

One of the most researched phenomena in the field of bilingualism is the cognate facilitation effect. Cognates are words that exist in both of the languages a bilingual speaks. An example of such a word is “winter”, which exists in both Dutch and English. Many studies have shown that bilinguals process these words more quickly than words that exist in only one of the languages they speak, like the word “ant”, which exists in English but not in Dutch (e.g. Cristoffanini, Kirsner, & Milech, 1986; De Groot & Nas, 1991; Dijkstra, Grainger, & Van Heuven, 1999; Dijkstra, Miwa, Brummelhuis, Sappelli, & Baayen, 2010; Lemhöfer & Dijkstra, 2004; Peeters, Dijkstra, & Grainger, 2013; Sánchez-Casas, García-Albea, & Davis, 1992; Van Hell & Dijkstra, 2002).

This cognate facilitation effect is taken as evidence for one of the central tenets of bilingualism research: all of the words a bilingual knows are stored in a single mental dictionary, regardless of which language they belong to. The reasoning behind this assumption goes as follows: whenever a bilingual (or monolingual) tries to find a word in his or her mental dictionary, all items in that dictionary compete with each other for selection. Because cognates are recognised more quickly than words that exist in only one of the languages the bilingual speaks, the argument goes, the two languages must be stored in a single mental dictionary, words from both languages must be activated in parallel regardless of which language the reader or listener is currently using, and cognates must have some sort of special status in this shared dictionary. Most of the evidence in favour of these claims, though, comes from lexical decision tasks.

Lexical decision tasks are very commonly used in research on language processing. During a lexical decision task, a participant is asked to decide if a stimulus they see on a screen is a word or not. In a standard version of this task, participants will see cognates like “winter” and English control words like “ant”, as well as a set of non-words like “vasui”. In such a task, bilinguals respond more quickly to the cognates than to the control words. Almost all of the experiments that we are aware of that used lexical decision tasks to examine the cognate effect included only those three types of stimuli—cognates, English controls and non-words. Some of our previous research, however, suggested that the effect may be influenced by the types of other stimuli that we included in the task.
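For readers who have never taken part in one of these experiments, here is a bare-bones sketch of a lexical decision trial loop in plain Python. It is only an illustration: the mock presentation function, the simulated reaction times and everything except the three example items above are invented, and a real experiment would use dedicated presentation software rather than print statements.

```python
import random

# Illustrative stimulus list: each entry is (letter string, condition, correct response).
# Only the three example items from the text are real; the rest is a simplified mock-up.
STIMULI = [
    ("winter", "cognate",         "yes"),  # a word in both Dutch and English
    ("ant",    "english_control", "yes"),  # a word in English only
    ("vasui",  "nonword",         "no"),   # not a word in either language
]

def present_and_collect(letter_string, correct_response):
    """Placeholder for stimulus presentation and keypress collection.

    In a real experiment this would be handled by presentation software;
    here we simply simulate a response and a reaction time (in seconds).
    """
    print("+")                       # fixation cross
    print(letter_string)             # the letter string the participant judges
    rt = random.gauss(0.600, 0.080)  # simulated reaction time
    return correct_response, rt      # pretend the participant always responds correctly

def run_block(stimuli):
    trials = list(stimuli)
    random.shuffle(trials)           # randomise presentation order
    results = []
    for letter_string, condition, correct_response in trials:
        response, rt = present_and_collect(letter_string, correct_response)
        results.append({"item": letter_string, "condition": condition,
                        "response": response, "rt": round(rt, 3)})
    return results

if __name__ == "__main__":
    for trial in run_block(STIMULI):
        print(trial)
```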

To determine whether this was indeed the case, we conducted two experiments that consisted of several versions of an English lexical decision task, each including different stimuli. The design of these experiments was quite complex, so for the sake of clarity, in this blog we will only discuss the findings for two of the versions of Experiment 2: the standard version, which was similar to the experiments we had read about, and a version that also included some Dutch words.

[Figure: The reaction time data for Experiment 2. A significant cognate facilitation effect was found in the standard version, but not in the version that included some Dutch words.]

The results of that second experiment are displayed in the figure above. In our standard version of the experiment—which included only cognates like “winter”, English controls like “ant” and a set of non-words like “vasui”—we found a cognate facilitation effect of approximately 50 ms. This finding was fully consistent with previous findings. In the version of our experiment that also included some Dutch words like “vijand”—which the participants had to respond ‘no’ to, because these items are not words in English—we did not find evidence of a cognate facilitation effect. (Converging evidence that the presence of the Dutch words affected the processing of the cognates comes from an exploratory analysis, which showed that participants were slower to respond to cognates that immediately followed a Dutch word than to English controls that immediately followed a Dutch word.)
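To make the logic of that exploratory analysis concrete, here is a minimal sketch in plain Python. The trial sequence, reaction times and field names are invented for illustration; this is not our stimulus list or our actual analysis code.

```python
from statistics import mean

# Invented trial sequence for illustration only (RTs in ms for correct responses).
trials = [
    {"type": "dutch_word",      "rt": 710},
    {"type": "cognate",         "rt": 640},  # cognate immediately after a Dutch word
    {"type": "english_control", "rt": 600},
    {"type": "dutch_word",      "rt": 695},
    {"type": "english_control", "rt": 605},  # control immediately after a Dutch word
    {"type": "cognate",         "rt": 590},
]

def mean_rt_after_dutch(trials, target_type):
    """Mean RT for trials of `target_type` that immediately follow a Dutch word."""
    rts = [trial["rt"] for prev, trial in zip(trials, trials[1:])
           if prev["type"] == "dutch_word" and trial["type"] == target_type]
    return mean(rts)

print("Cognates after a Dutch word:        ",
      mean_rt_after_dutch(trials, "cognate"), "ms")
print("English controls after a Dutch word:",
      mean_rt_after_dutch(trials, "english_control"), "ms")
```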

Our findings show that the types of other stimuli we included in the lexical decision task determined whether we found a cognate facilitation effect or not. Specifically, if we included stimuli in our English lexical decision task that were words in the bilinguals’ other language, the cognates were not any easier to process than the control words. This could be evidence that, contrary to most current theories, a bilingual has a separate, language-specific dictionary for each language they speak and that they only search one of these dictionaries at any one time. Under this view, our participants in the version with the Dutch words, who should only be saying ‘yes’ to English words, would only have accessed the English representation of the cognate. In contrast, participants in the standard version, with no Dutch words, could have accessed both representations and responded ‘yes’ to any highly familiar letter strings. (Because bilinguals will have seen the cognates in two language contexts, these words are likely to be more familiar to them and therefore easier to recognise as words.) In other words, our research could call into question the central assumption in bilingualism research.

Now, we think there is no reason to panic yet. There is a second interpretation of these effects. It is possible that for participants in both versions there was a benefit for the cognates that is a result of both languages being stored in a single, integrated lexicon. For participants in the version with the Dutch words this effect may have been cancelled out by a process called response competition. Response competition occurs when multiple responses to a single stimulus are valid. In the version of our experiment that included Dutch words, we told our participants to respond ‘no’ to those words. The cognates, of course, were also words in Dutch. We think this resulted in an internal struggle in our participants: they wanted to say ‘yes’ to “winter” because it is a word in English, but at the same time they wanted to say ‘no’ to it because it is also a word in Dutch. The extra time it took our participants to resolve this inner conflict then cancelled out the ‘traditional’ cognate facilitation effect.
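One way to see how these two opposing forces could produce a null result is with a toy back-of-the-envelope calculation. All of the numbers below are invented for illustration and are not estimates from our data.

```python
# Toy illustration of how response competition could cancel out a
# cognate facilitation effect. All values are invented for the example.
BASE_RT = 650               # hypothetical mean 'yes' RT to an English control word (ms)
COGNATE_FACILITATION = 50   # hypothetical benefit from a shared, integrated lexicon (ms)
RESPONSE_COMPETITION = 50   # hypothetical cost of the 'yes'/'no' conflict that arises
                            # when Dutch words in the task require a 'no' response (ms)

# Standard version: no Dutch words in the task, so no response competition.
rt_cognate_standard = BASE_RT - COGNATE_FACILITATION                       # 600 ms
# Mixed version: the cognate's Dutch reading pulls towards the wrong response
# and slows the 'yes' decision, masking the facilitation.
rt_cognate_mixed = BASE_RT - COGNATE_FACILITATION + RESPONSE_COMPETITION   # 650 ms

print("Standard version effect:", BASE_RT - rt_cognate_standard, "ms")  # 50 ms
print("Mixed version effect:   ", BASE_RT - rt_cognate_mixed, "ms")     # 0 ms
```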

Our findings, therefore, highlight the difficulty that researchers face in trying to determine which effects originate in the lexicon and which are artefacts of the—often highly artificial—tasks we use. Does the cognate facilitation effect necessarily provide evidence for the assumption of the integrated lexicon? Or could it reflect something that is specific to lexical decision? Lexical decision tasks are perhaps the most artificial and least ecologically valid of all the tasks that language researchers use. This means they are probably not the tasks we should be using to study natural language processing.

I’d like to end this blog by pointing out that there are other lines of research, using different kinds of stimuli, that suggest that the two languages a bilingual knows are stored in a single mental lexicon (e.g. Van Heuven, Dijkstra, & Grainger, 1998). Furthermore, the cognate facilitation effect has been found using other paradigms as well, for example in eye-tracking (Cop, Dirix, Van Assche, Drieghe, & Duyck, 2016; Duyck, Van Assche, Drieghe, & Hartsuiker, 2007; Libben & Titone, 2009; Titone, Libben, Mercier, Whitford, & Pivneva, 2011; Van Assche, Drieghe, Duyck, Welvaert, & Hartsuiker, 2011). In such experiments, participants read sentences while an eye-tracker records their eye movements. This is a much more natural way to study language processing, so any effects found using such a paradigm are less likely to be a result of using a particular task and more likely to be a reflection of how bilinguals process language and how cognates are represented in a bilingual’s mental dictionary. We believe more researchers should start using these more natural tasks. Aside from minimising the influence of task-based effects, such experiments will also allow us to learn more about bilingual language processing in real-life, everyday situations.

 

On a slightly different note, Experiment 2 provided us with our first experience with preregistration, through the Center for Open Science’s Preregistration Challenge. If you’d like to know more, you can read an interview with Eva here. And if you’re interested in reading an article about task demands in the monolingual domain, we’d suggest reading Ambiguity and relatedness effects in semantic tasks: Are they due to semantic coding? by Yasushi Hino, Penny M. Pexman and Stephen J. Lupker.

 

What do accent effects tell us about how word meanings are accessed?

By Zhenguang (Garry) Cai and Jenni Rodd (@jennirodd)

Have you ever worried about being misunderstood when speaking to someone with a different dialect? For instance, if you are an American tourist travelling in London, might you worry about confusing Londoners when you say “gas” for car fuel or “bonnet” for a woman’s hat, given that these words have different interpretations in the UK (a gaseous substance and a car part, respectively)?

Well, a recent study in Cognitive Psychology suggests that you might be worrying too much. Cai et al. (2017) found that although Brits usually had British interpretations for words such as “bonnet” when these words were spoken by another Brit, they became more likely to access the American interpretations (e.g., hat meaning for “bonnet”) when these words were spoken by someone with an American accent.

For instance, when asked to provide the first thing that came to mind when hearing strongly accented versions of ambiguous words such as “gas” and “bonnet”, Brits were more likely to provide responses related to the American meaning (e.g., “car” and “hat”) when these words were spoken by an American than by a Brit.

Importantly, as shown in the following figure, this effect was present even when individual speech tokens were modified to have a neutral accent by morphing together British and American speech tokens: the same ambiguous speech tokens were interpreted differently depending on the accent context in which they were embedded. This ‘accent effect’ was similar in size to the effect found for the strongly accented stimuli that were also included in the experiment.

 

[Figure: accent effect for strongly accented and accent-neutral (morphed) speech tokens]

Here is one of our example morphed speech tokens, which is a balanced combination of tokens produced by an American and a British speaker:

[Audio: example morphed speech token]

This finding strongly suggests that listeners are NOT accessing a word’s meaning based on the accent cues in the specific word itself. If listeners relied only on accent cues within the word itself, we’d expect to see no accent effect for the neutral morphs. Instead, listeners seem to accumulate information about the speaker’s accent from the incoming stream of words and use this to infer the speaker’s dialect. Listeners then use their knowledge of how words are used in that dialect (e.g., that “bonnet” is more likely to mean a hat in American English) to interpret all subsequent words coming from that speaker.

Participants in the study had little awareness of the accent manipulation: they always listened to only a British or an American accent (never both) and were given no information about the speaker, but merely picked up on the accent in the audio recordings. In follow-up experiments, we found that participants were influenced by the speaker’s accent even when they had to respond so quickly that they were unlikely to deliberately consider the speaker’s identity.

On the basis of these findings, we have proposed that spoken word recognition involves at least two pathways. The lexical-semantic pathway allows listeners to identify words and access their meanings. This pathway does not seem to be influenced by the accent of the word. In addition, a separate (but interacting) indexical pathway provides information about who the speaker is (their gender, age, accent, etc.; see also this study and this study). Such a speaker model would then nudge meaning access by, for example, boosting the hat meaning of “bonnet” when that word is spoken in an American accent.
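As a purely illustrative sketch of that idea, the snippet below treats the output of the indexical pathway as a belief about the speaker’s dialect and uses it to re-weight the senses of an ambiguous word. The lexicon, sense labels and probabilities are invented; this is not the computational model reported in Cai et al. (2017).

```python
# Illustrative sketch of how an inferred speaker dialect might re-weight
# meaning access for an ambiguous word. All values are invented examples.

# Lexical-semantic knowledge: how likely each sense of "bonnet" is, per dialect.
SENSE_GIVEN_DIALECT = {
    "british":  {"car part": 0.85, "hat": 0.15},
    "american": {"car part": 0.10, "hat": 0.90},
}

def meaning_probabilities(dialect_belief):
    """Mix the per-dialect sense preferences by the listener's current belief
    about the speaker's dialect (the output of the indexical pathway)."""
    senses = {"car part": 0.0, "hat": 0.0}
    for dialect, p_dialect in dialect_belief.items():
        for sense, p_sense in SENSE_GIVEN_DIALECT[dialect].items():
            senses[sense] += p_dialect * p_sense
    return senses

# After a run of American-accented words, the speaker model leans American,
# so the hat meaning of "bonnet" is boosted even for an accent-neutral token.
print(meaning_probabilities({"british": 0.2, "american": 0.8}))
# After a British-accented context, the car-part meaning dominates instead.
print(meaning_probabilities({"british": 0.9, "american": 0.1}))
```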

[Figure: speaker model]