As your eyes scan these words, your brain seems to derive their meaning instantaneously. How are we able to recognise and interpret marks on a page so rapidly? A new study confirms that a specialised brain area recognises printed words as pictures rather than by their meaning, reports scientificamerican.com.
Researchers led by neuroscientist Maximilian Riesenhuber of Georgetown University Medical Center scanned the brains of 12 subjects with functional MRI. They focused on a tiny area of the brain known to be involved in recognising words, the Visual Word Form Area (VWFA), found on the surface of the brain, behind the left ear.
From face recognition to words
The VWFA’s right hemisphere analogue is the fusiform face area, which allows us to recognise faces. In young children and people who are illiterate, the VWFA region and the fusiform face area both respond to faces. As people learn to read, the VWFA region is co-opted for word recognition.
The researchers presented the subjects with a series of real words and made-up words. The nonsense words elicited responses from a wide pool of neurons in the VWFA, whereas distinct subsets of neurons responded to real words. After subjects were trained to recognise the pseudo words, however, neurons responded to them as they did to real words, according to the paper published in March in the Journal of Neuroscience. Because the nonsense words had no meaning, Riesenhuber deduced that our neurons must respond to words’ orthography — how they look — rather than their meaning. As we become more proficient at reading, we build up a visual dictionary in the VWFA — much as we accumulate a catalogue of familiar faces on the opposite side of our brain.
We “hear” written words in our head
Sound may have been the original vehicle for language, but writing allows us to create and understand words without it. Yet new research shows that sound remains a critical element of reading.
When people listen to speech, neural activity is correlated with each word’s “sound envelope” — the fluctuation of the audio signal over time corresponds to the fluctuation of neural activity over time. In the new study, Lorenzo Magrassi, a neurosurgeon at the University of Pavia in Italy, and his colleagues made electrocorticographic (ECoG) recordings from 16 individuals.
The researchers measured neural activity directly from the surface of the language-generating structure known as Broca’s area as subjects read text silently or aloud. (This measurement was made possible by the fact that participants were undergoing brain surgery while awake.)
Their neural activity was correlated with the sound envelope of the text they read; this activity emerged well before they spoke, and even when they were not planning to speak, according to the report published in the Proceedings of the National Academy of Sciences USA.
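For readers curious what an envelope-correlation analysis looks like in practice, here is a minimal Python sketch. It assumes a generic low-pass Hilbert envelope, toy signals, and a simple Pearson correlation; it is only an illustration of the idea, not the ECoG pipeline used by Magrassi and colleagues.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(audio, fs, cutoff_hz=10.0):
    """Estimate the slow 'sound envelope' of a signal (illustrative choice of method)."""
    env = np.abs(hilbert(audio))              # instantaneous amplitude via the Hilbert transform
    b, a = butter(4, cutoff_hz / (fs / 2))    # 4th-order low-pass keeps only slow fluctuations
    return filtfilt(b, a, env)

# Toy data (illustrative only): amplitude-modulated noise stands in for speech,
# and a noisy copy of its envelope stands in for a neural time series.
fs = 1000                                     # Hz, assumed common sampling rate
t = np.arange(0, 5, 1 / fs)
speech = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))
neural = amplitude_envelope(speech, fs) + 0.5 * np.random.randn(t.size)

env = amplitude_envelope(speech, fs)
r = np.corrcoef(env, neural)[0, 1]            # Pearson correlation between envelope and "neural" trace
print(f"envelope-neural correlation: r = {r:.2f}")
```

In the study itself, a correlation of this kind between the speech envelope and activity recorded over Broca’s area is what indicated that the sound of the text was being represented even during silent reading.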
In other words, Broca’s area responded to silent reading in much the same way auditory neurons respond to text spoken aloud — as if Broca’s area were generating the sound of the words so that the readers heard them internally.
The finding speaks to a debate about whether words are encoded in the brain by a neural pattern symbolic of their meaning or by simpler attributes, such as how they sound. The results add to mounting evidence that words are fundamentally processed and catalogued by their basic sounds and shapes.
Source | Asian Age | 30 July 2015