How to study spoken language understanding

by Jenni Rodd (@jennirodd) and Matt Davis (@DrMattDavis)

An overview of our recent special issue in Language, Cognition and Neuroscience.

The aim of our research is to figure out how people perceive and understand speech. In particular, we want to specify the computational mechanisms that support this remarkable human skill. Speech comprehension feels so easy and automatic that we rarely stop to think about the incredible achievement involved in hearing sounds that contain ideas transmitted from another person’s mind!

Historically, these questions have been answered using methods from cognitive psychology and psycholinguistics. As PhD students we were trained by our supervisors (William Marslen-Wilson and Gareth Gaskell) to use experimental techniques in which measuring how quickly and accurately people respond to words allows us to infer how people hear sounds, recognise words and access meanings (e.g. Davis et al., 2002; Rodd et al., 2002).
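
As a toy illustration of this logic (everything here is invented for this post, not taken from those studies): in a semantic priming experiment, “word” decisions are typically faster after a related prime (e.g. nurse → DOCTOR) than after an unrelated one (e.g. bread → DOCTOR). The sketch below simulates response times with a hypothetical 30 ms priming effect and shows the kind of analysis that licenses inferences about lexical access.

```python
# Toy sketch: inferring lexical processing from response times.
# All data are simulated; the 30 ms priming effect is invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_trials = 40

# Simulated lexical decision times (ms) for the same targets after
# related vs. unrelated primes.
rt_related = rng.normal(loc=570, scale=60, size=n_trials)
rt_unrelated = rng.normal(loc=600, scale=60, size=n_trials)

# Faster responses after related primes are taken as evidence that
# accessing one word's meaning pre-activates related lexical entries.
effect = rt_unrelated.mean() - rt_related.mean()
t, p = stats.ttest_rel(rt_unrelated, rt_related)
print(f"priming effect: {effect:.0f} ms, t({n_trials - 1}) = {t:.2f}, p = {p:.3f}")
```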

When we were learning to use these methods, we were both greatly helped by a 1996 special issue of the journal Language and Cognitive Processes. This special issue catalogued the different tests that had been used by researchers (lexical decision, gating, priming, etc.). It provided highly structured reviews of each method and summarised how each could be used to answer questions about the computations that allow people to understand speech.

But in the 20 years since this special issue was published, we, like many of our colleagues, have broadened the scope of our research. We now use neuroscientific methods such as functional neuroimaging to observe brain activity directly during speech comprehension. We, and many others, label ourselves not as ‘cognitive psychologists’ but as ‘cognitive neuroscientists’ – our goal is not only to understand the computations that are involved in comprehension, but also to specify where and how these processes are implemented in the brain.

This trend towards using neuroscientific methods to answer questions about computational mechanisms has not been without its critics. Neuroscience research is many times more expensive than behavioural research: an MRI scanner costs millions of pounds, whereas behavioural experiments can run on a standard personal computer costing less than a thousand pounds. We need to take care to justify the use of expensive ‘toys’ like MRI scanners, given the argument that data from brain imaging are simply not relevant to understanding cognitive mechanisms (see, for example, Mike Page’s article “What can’t functional neuroimaging tell the cognitive psychologist?”). Several other senior authors have forcefully argued that neuroscientific measures tell us “where” and “when” the brain performs its computations but cannot answer the most critical “how” questions.

However, we’ve always been optimistic about the merits of cognitive neuroscience. Even a decade ago it was apparent that using brain activity to answer “where” and “when” questions had practical uses. For example, working with Adrian Owen and Martin Coleman, we used fMRI to find out whether people who are anaesthetised (Davis et al., 2007) or patients in a vegetative state (Coleman, Rodd et al., 2007) can still understand speech (you can read Adrian’s book “Into the Gray Zone” for a popular account of this work).

Moreover, we believe that neuroscience can contribute to answering important “how” questions. One example comes from research by Gareth Gaskell that shows a role for sleep in word learning. Gareth studied how people add new words like “blog” to their vocabulary. He reasoned that once a new word has been added to the brain’s dictionary (or “mental lexicon”) it should slow down recognition of similar-sounding words like “block” (through “lexical competition”). Gareth showed that new words do cause competition, but that this is most apparent a few days after learning. With Nicolas Dumay, Gareth showed that the emergence of competition coincides with sleep. This suggests “consolidation”: an overnight change in how new words are stored in the brain. Gareth and I (Matt Davis) later showed exactly this sort of overnight change with functional MRI: initial learning activates one brain system (the hippocampus), and overnight consolidation changes the way that another system, the superior temporal cortex, responds to new words. This shows the operation of two complementary learning systems in different brain areas. Jakke Tamminen and Gareth then used polysomnography – measuring brain activity while people are asleep – to show that specific neural activity (sleep spindles) is involved in consolidating new words. This work nicely illustrates how neuroscience methods can provide a direct test of cognitive theories, and how this in turn leads to a deeper understanding of the underlying computations.
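
To make the idea of lexical competition concrete, here is a toy simulation (ours, written for this post – it is not Gareth’s model, and the overlap measure and choice rule are deliberately crude). Adding “blog” to a small lexicon reduces the relative activation of the similar-sounding “block”, which is the sense in which a newly consolidated word is predicted to slow recognition of its neighbours:

```python
# Toy sketch of "lexical competition": recognising a word is harder when
# similar-sounding neighbours compete for activation. The overlap measure
# and all numbers are invented for illustration.

def overlap(a, b):
    """Crude phonological overlap: shared prefix length / mean word length."""
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return shared / ((len(a) + len(b)) / 2)

def recognition_strength(target, lexicon):
    """Luce-style choice rule: target activation relative to all competitors."""
    activations = {w: overlap(target, w) for w in lexicon}
    return activations[target] / sum(activations.values())

old_lexicon = ["block", "blot", "board", "cat"]
new_lexicon = old_lexicon + ["blog"]  # the newly learned word joins the lexicon

print("before learning 'blog':", recognition_strength("block", old_lexicon))
print("after learning 'blog': ", recognition_strength("block", new_lexicon))
# The target's relative activation drops once "blog" enters the competition,
# predicting slower recognition of "block" -- but only after the new word has
# been consolidated into the lexicon (e.g. after sleep).
```

The same relative-activation logic underlies serious models of spoken word recognition, where competitors are weighted by genuine phonological similarity and word frequency rather than this crude prefix measure.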

We therefore argue that neuroscientific methods make an important contribution to answering questions about the computational mechanisms supporting speech comprehension. We now have a wealth of different tools that we can use to study the brain – as well as brain imaging, there are brain stimulation methods, and methods for linking measures of brain anatomy to behaviour (neuropsychology). Each of these approaches has its own strengths and limitations, and a skilled researcher must understand the characteristics of each method so as to design experiments that can answer the most interesting questions. Importantly, these methods also raise specific challenges when studying speech. For example, listening to speech can be difficult if brain activity is being measured with an acoustically noisy neuroimaging method. Speech is a continuous and highly variable sound, which raises problems for methods like EEG or TMS that require millisecond-precise coordination between brain activity and the speech signal. With the added pressure that comes from running experiments that cost several thousand pounds using equipment that might cost millions, today’s students have much to learn and little freedom to make mistakes along the way.
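
As a sketch of how this timing problem can be tackled (a hypothetical analysis made up for illustration, not a pipeline from any of the papers below): rather than time-locking responses to discrete word onsets, one can extract the slow amplitude envelope of continuous speech and ask at what lag it best predicts the ongoing EEG signal:

```python
# Toy sketch: relating continuous speech to EEG via the amplitude envelope.
# All signals are simulated; the sample rate and the 100 ms "neural" delay
# are invented for illustration.
import numpy as np
from scipy.signal import hilbert

fs = 200                       # sample rate (Hz)
t = np.arange(0, 60, 1 / fs)   # one minute of signal
rng = np.random.default_rng(seed=0)

# Fake speech: a 40 Hz carrier modulated at a syllable-like rate (~4 Hz).
envelope = 2 + np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)
speech = envelope * np.sin(2 * np.pi * 40 * t)

# Fake EEG: the envelope delayed by 100 ms, buried in noise.
delay = int(0.1 * fs)
eeg = np.roll(envelope, delay) + 2 * rng.standard_normal(t.size)

# Recover the envelope from the "audio" (Hilbert transform), then find the
# lag at which it best predicts the EEG.
recovered = np.abs(hilbert(speech))
lags = np.arange(0, 60)        # 0-295 ms in 5 ms steps
corrs = [np.corrcoef(recovered[: t.size - lag], eeg[lag:])[0, 1]
         for lag in lags]
print(f"best speech-to-EEG lag: {lags[int(np.argmax(corrs))] * 1000 / fs:.0f} ms")
```

Real studies use much richer versions of this idea (for instance, regularised regression relating many acoustic features to many EEG channels), but the basic move – correlating continuous signals rather than averaging over discrete events – is the same.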

For these reasons, we decided that the time was right to produce a new special issue of the journal Language, Cognition and Neuroscience (the current name of the journal in which the 1996 special issue was published). We decided that what the field needed was a catalogue of the main neuroscientific methods currently available, aimed specifically at researchers interested in studying spoken language. We approached potential authors with relevant expertise and gave them a very specific remit: to write a tutorial review of the specific method that they use (or, in some cases, a family of related methods). The resulting review articles are all structured to include the following five sections:

  1. Historical background: Overview of early/influential studies using spoken language
  2. Overview of how the method provides insights into brain function
  3. Specific challenges and solutions for studying spoken language
  4. Key empirical contributions of the method for the field
  5. Speculations about likely future directions

We chose authors with up-to-date specialist knowledge to write about each of the methods. We also chose authors who are actively using these methods to answer what we believe to be the most interesting “how” questions about underlying mechanisms. We were delighted by the enthusiastic response we received from our authors, and the result is (we think) a wonderful special issue. All the papers are now available online and linked below. We hope that you enjoy reading these papers, and that they help inspire tomorrow’s researchers in the same way that we were inspired by learning about behavioural methods for studying speech understanding back when we were PhD students in 1996.

Enjoy!

How to study spoken language understanding: a survey of neuroscientific methods by Jennifer M. Rodd & Matthew Davis

Optical neuroimaging of spoken language by Jonathan E. Peelle

What can functional Transcranial Doppler Ultrasonography tell us about spoken language understanding? by Nicholas A. Badcock & Margriet A. Groen

Comprehending auditory speech: previous and potential contributions of functional MRI by Samuel Evans & Carolyn McGettigan

Tracking the signal, cracking the code: speech and speech comprehension in non-invasive human electrophysiology by Malte Wöstmann, Lorenz Fiedler & Jonas Obleser

Transcranial magnetic stimulation and motor evoked potentials in speech perception research by Patti Adank, Helen E. Nuttall & Dan Kennedy-Higgins

Brain structural imaging of receptive speech and beyond: a review of current methods by Damien Marie & Narly Golestani

Transcranial electric stimulation for the investigation of speech perception and comprehension by Benedikt Zoefel & Matthew H. Davis

Lesion-symptom mapping in the study of spoken language understanding by Stephen M. Wilson