What can lexical decision tasks tell us about bilingual language processing?

Cognates are words which share their form and meaning in multiple languages, like “winter” in Dutch and English. A wealth of research on bilingual language processing suggests that bilinguals process cognates more quickly than words that exist in only one language. Researchers take this finding as evidence that all the words a bilingual knows are stored in a single mental lexicon. Much of this research has made use of lexical decision tasks, but such tasks are quite artificial. In recent experiments we found that the presence or absence of this cognate facilitation effect depends on the other stimuli included in the task. In this blog we argue, on the basis of this finding, that evidence for a cognate facilitation effect in lexical decision tasks alone should not be taken as proof for this central assumption in bilingualism research.

By Eva Poort (@EvaDPoort) and Jenni Rodd (@jennirodd)

 

One of the most researched phenomena in the field of bilingualism is the cognate facilitation effect. Cognates are words that exist in both of the languages a bilingual speaks. An example of such a word is “winter”, which exists in both Dutch and English. Many studies have shown that bilinguals process these words more quickly than words that exist in only one of the languages they speak, like the word “ant”, which exists in English but not in Dutch (e.g. Cristoffanini, Kirsner, & Milech, 1986; De Groot & Nas, 1991; Dijkstra, Grainger, & Van Heuven, 1999; Dijkstra, Miwa, Brummelhuis, Sappelli, & Baayen, 2010; Lemhöfer & Dijkstra, 2004; Peeters, Dijkstra, & Grainger, 2013; Sánchez-Casas, García-Albea, & Davis, 1992; Van Hell & Dijkstra, 2002).

This cognate facilitation effect is taken as evidence for one of the central tenets of bilingualism research: all of the words a bilingual knows are stored in a single mental dictionary, regardless of which language they belong to. The reasoning behind this assumption goes as follows. Whenever a bilingual (or monolingual) tries to find a word in his or her mental dictionary, all items in that dictionary compete with each other for selection. If the two languages share a single dictionary and words from both languages are activated in parallel, then a cognate like “winter” receives support from both of the languages it belongs to, whereas a word like “ant” is supported by English alone. The fact that cognates are recognised more quickly than words that exist in only one of the languages the bilingual speaks therefore suggests that the two languages are stored in a single mental dictionary, that words from both languages are activated in parallel regardless of which language the reader or listener is currently using, and that cognates have some sort of special status in this shared dictionary. Most of the evidence in favour of these claims, though, comes from lexical decision tasks.

Lexical decision tasks are very commonly used in research on language processing. During a lexical decision task, a participant is asked to decide if a stimulus they see on a screen is a word or not. In a standard version of this task, participants will see cognates like “winter” and English control words like “ant”, as well as a set of non-words like “vasui”. In such a task, bilinguals respond more quickly to the cognates than to the control words. Almost all of the experiments that we are aware of that used lexical decision tasks to examine the cognate effect included only those three types of stimuli—cognates, English controls and non-words. Some of our previous research, however, suggested that the effect may be influenced by the types of other stimuli that we included in the task.
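To make the paradigm concrete, here is a minimal, purely illustrative sketch of a lexical decision trial loop in Python. The three stimulus categories mirror the ones described above, but the tiny word lists, the console-based timing and the response coding are hypothetical additions for illustration only, not the software used in the actual experiments.

```python
# Purely illustrative sketch of a (console-based) English lexical decision task.
# The stimulus categories mirror those described in the post; the word lists,
# timing method and response coding are hypothetical.
import random
import time

stimuli = (
    [(w, "word") for w in ["winter", "film", "water"]]       # Dutch-English cognates
    + [(w, "word") for w in ["ant", "carrot", "shoulder"]]   # English-only controls
    + [(w, "nonword") for w in ["vasui", "plimp", "drelk"]]  # non-words
)
random.shuffle(stimuli)

results = []
for letter_string, stim_type in stimuli:
    print(f"\n{letter_string}")
    start = time.perf_counter()
    response = input("Is this an English word? (y/n): ").strip().lower()
    rt_ms = (time.perf_counter() - start) * 1000
    correct = (response == "y") == (stim_type == "word")
    results.append({"item": letter_string, "type": stim_type,
                    "rt_ms": round(rt_ms), "correct": correct})

for trial in results:
    print(trial)
```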

To determine whether this was indeed the case, we conducted two experiments that consisted of several versions of an English lexical decision task that each included different stimuli. The design of these experiments was quite complex, so for the sake of clarity, in this blog we will only discuss the findings for two of the versions of Experiment 2: the standard version of the experiment that was similar to the experiments we had read about and a version that also included some Dutch words.

[Figure: reaction time data for Experiment 2. A significant cognate facilitation effect was found in the standard version, but not in the version that included some Dutch words.]

The results of that second experiment are displayed in the figure above. In our standard version of the experiment—which included only cognates like “winter”, English controls like “ant” and a set of non-words like “vasui”—we found a cognate facilitation effect of approximately 50 ms. This finding was fully consistent with previous findings. In the version of our experiment that also included some Dutch words like “vijand”—which the participants had to respond ‘no’ to because these items were not words in English—we did not find evidence of a cognate facilitation effect. (We have converging evidence that the presence of the Dutch words affected the processing of the cognates from an exploratory analysis that showed that participants were slower to respond to cognates that immediately followed a Dutch word than to English controls that followed a Dutch word.)
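To make the size of the effect concrete, the sketch below shows how a facilitation effect of this kind is typically quantified: the difference between the mean correct ‘yes’ reaction times in the two word conditions. The numbers are invented purely to illustrate a difference of roughly 50 ms; they are not the study’s data, and this simple comparison of means stands in for the full analyses reported in the paper.

```python
# Hypothetical illustration of how a cognate facilitation effect is quantified:
# mean correct "yes" RT for English controls minus mean RT for cognates.
# The RTs below are invented to show a difference of roughly 50 ms.
from statistics import mean

rt_ms = {
    "cognate":         [612, 598, 640, 585, 603, 571],
    "english_control": [655, 649, 682, 631, 668, 644],
}

cognate_mean = mean(rt_ms["cognate"])           # ~601 ms
control_mean = mean(rt_ms["english_control"])   # ~655 ms
facilitation = control_mean - cognate_mean      # positive => cognates faster

print(f"Cognate mean RT:      {cognate_mean:.0f} ms")
print(f"Control mean RT:      {control_mean:.0f} ms")
print(f"Cognate facilitation: {facilitation:.0f} ms")
```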

Our findings show that the types of other stimuli we included in the lexical decision task determined whether we found a cognate facilitation effect or not. Specifically, if we included stimuli in our English lexical decision task that were words in the bilinguals’ other language, the cognates were not any easier to process than the control words. This could be evidence that, contrary to most current theories, a bilingual has a separate, language-specific dictionary for each language they speak and that they only search one of these dictionaries at any one time. Under this view, our participants in the version with the Dutch words, who should only be saying ‘yes’ to English words, would only have accessed the English representation of the cognate. In contrast, participants in the standard version, with no Dutch words, could have accessed both representations and responded ‘yes’ to any highly familiar letter strings. (Because bilinguals will have seen the cognates in two language contexts, these words are likely to be more familiar to them and therefore easier to recognise as words.) In other words, our research could call into question the central assumption in bilingualism research.

Now, we think there is no reason to panic yet. There is a second interpretation of these effects. It is possible that for participants in both versions there was a benefit for the cognates that is a result of both languages being stored in a single, integrated lexicon. For participants in the version with the Dutch words this effect may have been cancelled out by a process called response competition. Response competition occurs when multiple responses to a single stimulus are valid. In the version of our experiment that included Dutch words, we told our participants to respond ‘no’ to those words. The cognates, of course, were also words in Dutch. We think this resulted in an internal struggle in our participants: they wanted to say ‘yes’ to “winter” because it is a word in English, but at the same time they wanted to say ‘no’ to it because it is also a word in Dutch. The extra time it took our participants to resolve this inner conflict then cancelled out the ‘traditional’ cognate facilitation effect.

Our findings, therefore, highlight the difficulty that researchers face in trying to determine which effects originate in the lexicon and which are artefacts of the—often highly artificial—tasks we use. Does the cognate facilitation effect necessarily provide evidence for the assumption of the integrated lexicon? Or could it reflect something that is specific to lexical decision? Lexical decision tasks are perhaps the most artificial and least ecologically valid of all the tasks that language researchers use. This means they are probably not the tasks we should be using to study natural language processing.

We’d like to end this blog by pointing out that there are other lines of research, using different kinds of stimuli, that suggest that the two languages a bilingual knows are stored in a single mental lexicon (e.g. Van Heuven, Dijkstra, & Grainger, 1998). Furthermore, the cognate facilitation effect has been found using other paradigms as well, for example in eye-tracking (Cop, Dirix, Van Assche, Drieghe, & Duyck, 2016; Duyck, Van Assche, Drieghe, & Hartsuiker, 2007; Libben & Titone, 2009; Titone, Libben, Mercier, Whitford, & Pivneva, 2011; Van Assche, Drieghe, Duyck, Welvaert, & Hartsuiker, 2011). In such experiments, participants read sentences while an eye-tracker records their eye movements. This is a much more natural way to study language processing, so any effects found using such a paradigm are less likely to be a result of using a particular task and more likely to reflect how bilinguals process language and how cognates are represented in a bilingual’s mental dictionary. We believe more researchers should start using these more natural tasks. Aside from minimising the influence of task-based effects, such experiments will also allow us to learn more about bilingual language processing in real-life, everyday situations.

 

On a slightly different note, Experiment 2 provided us with our first experience with preregistration, through the Center for Open Science’s Preregistration Challenge. If you’d like to know more, you can read an interview with Eva here. And if you’re interested in reading an article about task demands in the monolingual domain, we’d suggest reading Ambiguity and relatedness effects in semantic tasks: Are they due to semantic coding? by Yasushi Hino, Penny M. Pexman and Stephen J. Lupker.

 

What do accent effects tell us about how word meanings are accessed?

By Zhenguang (Garry) Cai and Jenni Rodd (@jennirodd)

Have you ever worried about being misunderstood when speaking to someone who speaks a different dialect? For instance, if you are an American tourist travelling in London, would you worry about confusing Londoners when you say “gas” for car fuel or “bonnet” for a woman’s hat, once you realise that these words have different interpretations in the UK (a gaseous substance and a car part, respectively)?

Well, a recent study in Cognitive Psychology suggests that you might be worrying too much. Cai et al. (2017) found that although Brits usually had British interpretations for words such as “bonnet” when these words were spoken by another Brit, they became more likely to access the American interpretations (e.g., hat meaning for “bonnet”) when these words were spoken by someone with an American accent.

For instance, when asked to report the first thing that came to mind on hearing strongly accented versions of ambiguous words such as “gas” and “bonnet”, Brits were more likely to provide responses related to the American meaning (e.g., “car” and “hat”) when these words were spoken by an American than by a Brit.

Importantly, as shown in the following figure, this effect was present even when individual speech tokens were modified to have a neutral accent by morphing together British and American speech tokens: the same ambiguous speech tokens were interpreted differently depending on the accent context in which they were embedded. This ‘accent effect’ was similar in size to the effect found for the strongly accented stimuli that were also included in this experiment.

 

[Figure: the accent effect for strongly accented and morphed (accent-neutral) speech tokens]

Here is one of our example morphed speech tokens, a balanced combination of tokens produced by an American and a British speaker:

 

This finding strongly suggests that listeners are NOT accessing a word’s meaning based on the accent cues in the specific word itself; if they were, we’d expect to see no accent effect for the neutral morphs. Instead, listeners seem to accumulate information about the speaker’s accent from the incoming stream of words and use this to infer the speaker’s dialect. Listeners then use their knowledge of how words are used in that dialect (e.g., that “bonnet” is more likely to mean a hat in American English) to interpret all subsequent words coming from that speaker.

Participants in the study had little awareness of the accent manipulation: they always listened to only a British or an American accent (but never both), they were given no information about the speaker, and they merely picked up on the accent in the audio recordings. In follow-up experiments, we found that participants were influenced by the speaker’s accent even when they had to respond so quickly that they were unlikely to have deliberately considered the speaker’s identity.

On the basis of these findings, we have proposed that spoken word recognition involves at least two pathways. The lexical-semantic pathway allows listeners to identify words and access their meanings; this pathway does not seem to be influenced by the accent of the word itself. In addition, a separate (but interacting) indexical pathway provides information about who the speaker is (their gender, age, accent, etc.; see also this study and this study). Such a speaker model would then nudge meaning access by, for example, boosting the hat meaning of “bonnet” when that word is spoken in an American accent.
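As a toy illustration of this idea (this is not the model from the paper, and every number below is invented), one can think of meaning access as a set of sense preferences that the indexical pathway nudges once a dialect has been inferred:

```python
# Toy sketch only: a speaker model nudging meaning access for the ambiguous
# word "bonnet". All numbers are invented for illustration; this is not the
# model proposed in the paper.

# Hypothetical baseline sense preferences for a British listener
base_support = {"car part": 0.8, "hat": 0.2}

# Hypothetical knowledge of how each dialect tends to use "bonnet"
usage_by_dialect = {
    "British":  {"car part": 0.9, "hat": 0.1},
    "American": {"car part": 0.2, "hat": 0.8},
}

def meaning_preferences(inferred_dialect, nudge=0.5):
    """Blend the listener's baseline preferences with dialect-specific usage.

    `nudge` controls how strongly the indexical pathway (the speaker model)
    influences the lexical-semantic pathway: 0 = no influence, 1 = dialect
    knowledge only.
    """
    usage = usage_by_dialect[inferred_dialect]
    blended = {sense: (1 - nudge) * base_support[sense] + nudge * usage[sense]
               for sense in base_support}
    total = sum(blended.values())
    return {sense: value / total for sense, value in blended.items()}

print("Inferred British speaker: ", meaning_preferences("British"))
print("Inferred American speaker:", meaning_preferences("American"))
```

In this toy version, hearing an American accent does not force the hat reading of “bonnet”; it simply makes that interpretation more available, which is the pattern the experiments show.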

[Figure: the speaker model]

 

 

 

How to study spoken language understanding

by Jenni Rodd (@jennirodd) and Matt Davis (@DrMattDavis)

An overview of our recent special issue in Language, Cognition and Neuroscience.

The aim of our research is to figure out how people perceive and understand speech. Specifically, we want to specify the computational mechanisms that support this remarkable human skill. Speech comprehension feels so easy and automatic that we rarely stop to think about the incredible achievement involved in hearing sounds that contain ideas transmitted from another person’s mind!

Historically, these questions have been answered using methods from cognitive psychology and psycholinguistics. As PhD students, we were trained by our supervisors (William Marslen-Wilson and Gareth Gaskell) to use experimental techniques in which measuring how quickly and accurately people respond to words allows us to infer how they hear sounds, recognise words and access meanings (e.g. Davis et al, 2002; Rodd et al, 2002).

When we were learning to use these methods, we were both greatly helped by a 1996 special issue of the journal Language and Cognitive Processes. This special issue catalogued the different tasks that had been used by researchers (lexical decision, gating, priming, etc.). It provided highly structured reviews of each method and a summary of how they can be used to answer questions about the computations that allow people to understand speech.

But in the 20 years since this special issue was published, we, like many of our colleagues, have broadened the scope of our research. We now use neuroscientific methods such as functional neuroimaging to observe brain activity directly during speech comprehension. We, and many others, label ourselves not as ‘cognitive psychologists’ but as ‘cognitive neuroscientists’ – our goal is not only to understand the computations that are involved in comprehension, but also to specify where and how these processes are implemented in the brain.

This trend towards using neuroscientific methods to answer questions about computational mechanisms has not been without criticism. Neuroscience research is many times more expensive than behavioural research: an MRI scanner costs millions of pounds, whereas behavioural experiments use a standard personal computer costing less than a thousand pounds. We need to take care to justify the use of expensive ‘toys’ like MRI scanners, given the argument that data from brain imaging are simply not relevant to understanding cognitive mechanisms (see, for example, Mike Page’s article “What can’t functional neuroimaging tell the cognitive psychologist?”). Several other senior authors have forcefully argued that neuroscientific measures tell us “where” and “when” the brain performs its computations but cannot answer the most critical “how” questions.

However, we’ve always been optimistic about the merits of Cognitive Neuroscience. Even a decade ago it was apparent that using brain activity to answer where and when questions had practical uses. One example is that, working with Adrian Owen and Martin Coleman, we used fMRI to find out whether people who are anaesthetised (Davis et al, 2007) or patients in a vegetative state (Coleman, Rodd et al, 2007) can still understand speech (you can read Adrian’s book “Into the Gray Zone” for a popular account of this work).

Moreover, we believe that neuroscience can contribute to answering important “how” questions. One example comes from research by Gareth Gaskell that shows a role for sleep in word learning. Gareth studied how people add new words like “blog” to their vocabulary. He reasoned that once a new word has been added to the brain’s dictionary (or “mental lexicon”) it should slow down recognition of similar-sounding words like “block” (through “lexical competition”). Gareth showed that new words do cause competition, but that this is most apparent a few days after learning. With Nicolas Dumay, Gareth showed that the emergence of competition coincides with sleep. This suggests “consolidation”: an overnight change in how new words are stored in the brain. Gareth and I (Matt Davis) later showed exactly this sort of overnight change with functional MRI. Initial learning activates one brain system (the hippocampus), and overnight consolidation of learned words changes the way that another system, the superior temporal cortex, responds to new words. This shows the operation of two complementary learning systems in different brain areas. Jakke Tamminen and Gareth then used polysomnography – measuring brain activity while people are asleep – to show that neural activity during sleep (sleep spindles) is involved in consolidating new words. This work nicely illustrates how neuroscience methods can provide a direct test of cognitive theories, and this in turn leads to a deeper understanding of the underlying computations.

We therefore argue that neuroscientific methods make an important contribution to answering questions about the computational mechanisms supporting speech comprehension. We now have a wealth of different tools that we can use to study the brain – as well as brain imaging, there are brain stimulation methods and methods for linking measures of brain anatomy to behaviour (neuropsychology). Each of these approaches has its own strengths and limitations. A skilled researcher must understand the characteristics of each method so as to design experiments that can answer the most interesting questions. Importantly, these methods also raise different specific challenges when studying speech. For example, listening to speech can be difficult if brain activity is being measured with a noisy neuroimaging method. Speech is a continuous and highly variable sound, which raises problems when using methods like EEG or TMS that require millisecond-precise coordination of brain activity and speech signals. With the added pressure that comes from running experiments that cost several thousand pounds using equipment that might cost millions, today’s students have much to learn and little freedom to make mistakes along the way.

For these reasons, we decided that the time was right to produce a new special issue of the journal Language, Cognition and Neuroscience (the current name of the journal in which the 1996 special issue was published). We decided that what the field needed was a catalogue of the main neuroscientific methods currently available, aimed specifically at researchers interested in studying spoken language. We approached potential authors with relevant expertise and gave them a very specific remit: to write a tutorial review of the specific method that they use (or, in some cases, a family of related methods).


The resulting review articles are all structured to include the following five sections:

  1. Historical background: Overview of early/influential studies using spoken language
  2. Overview of how the method provides insights into brain function
  3. Specific challenges and solutions for studying spoken language
  4. Key empirical contributions of the method for the field
  5. Speculations about likely future directions

We chose authors with up-to-date specialist knowledge to write about each of the methods. We also chose authors who are actively using these methods to answer what we believe to be the most interesting “how” questions about underlying mechanisms. We were delighted by the enthusiastic response we received from our authors, and the result is (we think) a wonderful special issue. All the papers are now available online and are linked to below. We hope that you enjoy reading these papers and that they help inspire tomorrow’s researchers in the same way that we were inspired by learning about behavioural methods for studying speech understanding back when we were PhD students in 1996.

Enjoy!

 

How to study spoken language understanding: a survey of neuroscientific methods by Jennifer M. Rodd & Matthew Davis

Optical neuroimaging of spoken language by Jonathan E. Peelle

What can functional Transcranial Doppler Ultrasonography tell us about spoken language understanding? by Nicholas A. Badcock & Margriet A. Groen

Comprehending auditory speech: previous and potential contributions of functional MRI by Samuel Evans & Carolyn McGettigan

Tracking the signal, cracking the code: speech and speech comprehension in non-invasive human electrophysiology by Malte Wöstmann, Lorenz Fiedler & Jonas Obleser

Transcranial magnetic stimulation and motor evoked potentials in speech perception research by Patti Adank, Helen E. Nuttall & Dan Kennedy-Higgins

Brain structural imaging of receptive speech and beyond: a review of current methods by Damien Marie & Narly Golestani

Transcranial electric stimulation for the investigation of speech perception and comprehension by Benedikt Zoefel & Matthew H. Davis

Lesion-symptom mapping in the study of spoken language understanding by Stephen M. Wilson