Monday, December 26, 2011

My ideal Romance language

This is what it would look like:


El vent fugiva tras la planura
Rodava las folias per la rua
Morivan las flors e la terna verdura
Las finestras resplendevan de lum

Sopra los árbols corvins volavan
Murmurant el pessat cántic d’autón
E núvols de plomb pel cel glissavan
Portant nel son alent la fredura


It's just a mixture of my favourite traits found in different Romance languages. 

Thursday, December 15, 2011

Sweet, sweet phrenology

While he is an insufferable know-it-all, Sheldon Cooper, The Big Bang Theory's breakout character and (arguably) main driving force, does appear conversant with fields of science far removed from his own area, theoretical physics. He seems to be familiar not only with the basics, but also with endless arcane details belonging to fields of knowledge as diverse as anthropology, history, philosophy, neuroanatomy, and microbiology.






When it comes to Sheldon, it appears that the series' screenwriters often do their homework and ensure that the character is able to rattle off long strings of scientific-sounding gobbledygook at a second’s notice. Since my own domain of in-depth expertise is limited to a small number of sub-areas of cognitive science, I can’t really estimate how much sense Sheldon makes when he pontificates on string theory, molecular biology, or even philosophy.

What I do know is that his grasp of linguistics is the equivalent of trying to pass off, say, phrenology or homeopathy as good science. Sheldon’s attitude to language appears to be hard-core prescriptivism. Thus, in one episode he corrects Raj by asserting that saying, “You are the guy we are trying to get away from” is bad grammar and that “the correct syntax” is, “You are the guy from whom we are trying to get away.”

In other words, Sheldon believes that “ending your sentence with a preposition” is ungrammatical, as pseudo-authorities on English grammar have forever asserted. In more technical terms, Sheldon does not approve of preposition stranding, a perfectly natural phenomenon in English and some other Germanic languages and, therefore, correct syntax. Just compare this to saying something like, "You are the from guy we away are trying get to." Now that, my friends, is incorrect syntax.

Another hint of Sheldon's linguistic ignorance is his insistence on using nauseated and not nauseous to describe how one feels when vomiting is imminent. Similar to preposition stranding, People vs. Nauseous has long been a case off which language pedants simply will not get (tee hee). The simple fact of the matter is that nauseous means both "something which causes nausea" and "someone who feels nausea", and that it has been present in English in both these senses for approximately the same amount of time.

So, why is Sheldon Cooper a prescriptivist? The face-saving answer for the screenwriters would be that this is because prescriptivism seems to go neatly with another affliction of Sheldon's, OCD. In other words, it is not impossible that Sheldon's prescriptivism is deliberate, i.e. an example of shrewd screenwriting. 

However, I'd be inclined to bet that this is not it. I think that the much more likely answer is that Sheldon is a language pedant simply because the show's writers, just like the public at large, do not really have a clear idea of the existence of linguistics as a field of concerted human inquiry. If they were aware of it, they would surely know that language pedantry is an indication of linguistic ignorance rather than sophistication (cf. Steven Pinker's well-known dissection of language mavens).

Whose fault is it that the public aren't aware of even the most basic findings of linguistics and that very few members of the general public even have a clear idea of what it is that a linguist actually does?

While it must largely be linguists' own fault (but this is a topic for a different post altogether), it is also the case that people tend to think of language as something "everybody knows about" (since everybody can "do it"), from which it supposedly follows that linguistics must be a time-wasting pursuit of commonsense knowledge. Of course, this could not be farther from the truth.

However, since language will always be something that everybody believes themselves to be an expert on (another stranded preposition!), linguistics will continue having a hard time making its existence matter in the real world. But may I note with resignation that nobody goes around believing they are a pulmonologist just because they know how to breathe?


Oh and just so there's no confusion: I heart Sheldon Cooper. 

Wednesday, June 8, 2011

New post coming up

I have been very busy with revisions to two articles and some pesky experimental items. I'm writing a lengthy post on how studies of the bilingual brain can illuminate the old debate about the effect of age on language learning.

I haven't given up on this blog; it just seems I won't be able to dedicate as much time to it as I might like. This means that new posts will come sporadically, but they will come.

Saturday, May 21, 2011

Monkey see, monkey do: Mirror neurons and language

The fact that speech is rather messy makes perceiving and making sense of it quite a formidable task. In fact, the human brain is the only object in the known universe that can both perceive speech and fully understand it. Computers may be getting a bit better at recognizing speech than they used to be, but even the best of them are still unacceptably bad at making sense of it, and will remain so for a long time. Some of our closest relatives don't do that badly considering that they didn't evolve to use language, but they still can't compete with us. Not by a long shot.


The Motor Theory of Speech Perception

The aforesaid messiness of speech, particularly the problem of lack of invariance, led American psycholinguist Alvin Liberman and his colleagues to propose a Motor Theory of Speech Perception more than half a century ago. While the theory has been revised over the years, at its core has always lain the rather bold (and at that time visionary!) proposal that humans perceive speech by reference to speech production. In other words, the theory states that when we listen to somebody speak, we are not only passive recipients of the speech signal. Rather, we imagine the articulatory gestures (e.g. lip and tongue movements) that the speaker is performing, and that is when comprehension of what we are hearing really starts happening.

The Motor Theory has had both its advocates and critics ever since its original publication. Some of the criticisms have revolved around Janet Werker's finding (replicated and expanded numerous times, but also refined considerably) that infants are able to discriminate a large number of the sound contrasts found in human languages (even ones they have never heard their caregivers utter) before they produce their first real word, but that they stop being sensitive to contrasts not present in their native language(s) as they inch closer to starting to speak (around age one). The thrust of the criticism is that if babies can tell speech sounds apart long before they can produce them, speech perception can hardly depend on the listener's own experience with articulation.

Also, as other critics pointed out, proposing that we mentally model others' articulatory gestures as we listen to them speak does not really solve the problem of lack of invariance, as the lip, tongue, etc. movements required to produce the first sound of cat and the first sound of cut are not exactly the same. In response to this, the theory was revised to refer to "intended phonetic gestures" (something like the movements we plan our speech organs to perform rather than the actual movements they do perform). The response to this revision has been that the concept of intended gesture is probably too vague to be testable with the kind of experimental techniques we have at our disposal right now.

Overall, it could be said that the Motor Theory has fared better in areas such as speech-language pathology or theoretical linguistics than in the field of speech perception itself, as it does not seem to be sufficient to fully account for our ability to perceive speech. However, the fact that mental modelling of speech gestures may not be the sole mechanism which enables us to figure out what sounds we are hearing when we're talking to somebody does not mean that it's not one of the mechanisms we rely on. The relatively recent discovery of mirror neurons, brain cells which fire both when an animal performs an action and when it sees another animal do the same thing, may lend new credence to the Motor Theory of Speech Perception.

Mirror neurons in monkeys

Mirror neurons were first discovered in the premotor cortex of macaques. The premotor cortex (in both macaques and humans) is involved in the planning of actions (such as moving your legs, taking a sip of water, or grasping an object) and is located in front of the primary motor cortex, which is crucial for executing actions. Both can be seen in this illustration of the human brain.


Thus, macaques have neurons in their premotor cortex which fire both when the monkey performs an action and when it observes another monkey (or even a human!) perform that same action. [1] Particularly interesting are the mouth mirror neurons of macaques.

Most of these neurons fire both when the monkey performs a feeding-related action with its mouth and when it observes such an action. In other words, the "active" and "mirror" functions of such neurons are related to the same action. However, in a smaller proportion of these neurons, there is a discrepancy between their apparent "active" and "mirror" functions. While such neurons still fire when the monkey performs a feeding-related mouth action, in "mirror mode" they respond best to communicative mouth actions by other monkeys! [2] This may point to an evolutionary connection between ingestive and communicative mouth movements. Also, there is a group of macaque mirror neurons which fire both when the monkey performs a hand action and when it hears the sound of that action. [3] Very intriguingly, these neurons are located in the macaque analog of Broca's area in the human brain, which is indispensable in language functioning. Broca's area can be seen in this picture (alongside another patch of cortex crucial for language use, Wernicke's area, and the arcuate fasciculus).


Before moving on to a discussion of mirror neurons in humans, I'll just say one more thing about macaques. There are mirror neurons in the inferior parietal cortex of macaques (see the illustration in the linked article) which respond differently to the same action depending on what action follows it (e.g. grasping an object in order to eat it or to move it). This is true both in "active" and "mirror" modes. [4] This is a truly important discovery, as it points to a potential neural mechanism for understanding others' intentions. Understanding others' intentions, or theory of mind, is, of course, critical for social functioning, including the acquisition of language by human infants.

Mirror neurons in humans

Two areas of the brain's surface largely involved in motor functioning are also implicated in the observation of actions: one is located in the lower part of the parietal lobe, and the other in the region of the frontal lobe bordering the temporal lobe and close to the parietal lobe. The latter is roughly equivalent to Broca's area, except that it is found on both the left and right sides of the brain (Broca's area is located on the left side of the brain for an overwhelming majority of right-handers as well as for most left-handers). This illustration shows the lobes of the human brain. (Note that this brain is oriented in the opposite direction from those in the previous illustrations; this is a view of the right hemisphere.)


Not surprisingly, these two areas involved in movement and the observation of actions are precisely where most human mirror neurons are located [5]. Note that Broca's area, which is heavily involved in speech production (but, as multiple recent studies show, is also activated during comprehension), is practically brimming with mirror neurons! This, then, invites questions about a possible connection between movement, perception of action, and language. Kind of sounds like the Motor Theory of Speech Perception, doesn't it?

Before I start talking about some neat experimental findings that speak to this connection, I need to say a bit about transcranial magnetic stimulation (TMS) and motor evoked potentials (MEPs). Since the brain relies on electricity to transmit information within neurons, it is possible to use electrodes to stimulate the brain during surgery and evoke various types of responses in the patient (who, incidentally, is awake). For instance, you might get a certain muscle to twitch. While the use of this technique has led to some very important discoveries, it is, obviously, not possible to do this type of research with healthy research subjects. Enter TMS! Relying on the principle of electromagnetic induction, TMS basically uses a powerful electromagnet, such as the one in the picture below, to induce electrical activity in the brain.



TMS can be used to stimulate the brain and produce some type of motor response as well as to temporarily (and reversibly!) disable small areas of the cortex (and observe the effect this has on behaviour). Finally, it appears that TMS can be used to treat depression, but this is not our topic here.

OK, so if you can use TMS to zap certain brain regions and to get certain muscles to twitch, you can easily place sensors on a person's skin right on top of the muscle you expect to control in this way and measure the strength of the response (called a motor evoked potential, or MEP) caused by the zapping.
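To make "measure the strength of the response" concrete, here is a toy Python sketch of how one might quantify an MEP from a stretch of surface EMG recorded around a TMS pulse. This is not any particular lab's pipeline: the sampling rate, the response window, and the simulated signal are all invented for illustration. The usual summary measure is simply the peak-to-peak amplitude of the EMG in a short window after the pulse.

import numpy as np

def mep_peak_to_peak(emg, fs, pulse_index, window=(0.015, 0.050)):
    # Peak-to-peak MEP amplitude (same units as emg) in a window after the TMS pulse.
    # emg: 1-D array of surface EMG samples; fs: sampling rate in Hz;
    # pulse_index: sample at which the pulse was delivered; window: (start, end) in seconds.
    start = pulse_index + int(window[0] * fs)
    end = pulse_index + int(window[1] * fs)
    segment = emg[start:end]
    return segment.max() - segment.min()

# Fake data: 1 s of background EMG noise with a brief "response" 25 ms after a pulse at 0.5 s.
fs = 5000
rng = np.random.default_rng(0)
emg = 0.02 * rng.standard_normal(fs)          # made-up baseline muscle activity, in mV
pulse_index = fs // 2
resp = slice(pulse_index + int(0.025 * fs), pulse_index + int(0.030 * fs))
emg[resp] += 0.6 * np.sin(np.linspace(0, 2 * np.pi, resp.stop - resp.start))

print(f"MEP amplitude: {mep_peak_to_peak(emg, fs, pulse_index):.2f} mV")

In a real experiment you would of course average such amplitudes over many pulses per condition before comparing conditions, which is exactly what the studies below do.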

With this in mind, let's turn to some interesting experimental results. For instance, one group of researchers [6] measured MEPs in the right-hand muscles of healthy participants, elicited by using TMS to stimulate the primary motor cortex in the left hemisphere, while the participants did different things (such as observing actions and gestures, looking at objects, etc.). The muscular responses evoked by magnetic stimulation of the motor cortex were stronger when an action was being observed, regardless of whether the action was exerted upon an object or was mere arm movement. Also, the evoked potentials were larger only in those muscles which the subject would need to use to perform the action that he or she was watching.

Another experiment [7] used functional magnetic resonance imaging (fMRI) to investigate whether human mirror neurons are only activated by observing other humans do stuff or whether observing a monkey or a dog perform an action would also result in mirror neurons firing (we saw above that monkeys' mirror neurons do indeed fire in response to actions performed by humans). It turns out our mirror neurons fire when we see a human, monkey, or dog bite something, as well as when we see a human speak or a monkey smack its lips (a communicative gesture). However, human mirror neurons do not respond when a person watches a dog bark (only visual areas get activated in this case). It appears, then, that our mirror neurons only respond when we observe actions that are part of our own repertoire, probably resulting in a much more personalized understanding of such actions [5]. Recall the importance of understanding others' motivations (theory of mind) for members of a highly social species such as Homo sapiens.

So what about language?

The main thing to note here is that many of the human mirror neurons that respond to hand and mouth actions are located smack dab in the middle of Broca's area! This hints at the intriguing possibility of an evolutionary connection between gesturing and language; a gestural origin is, in fact, one of a number of currently competing theories of how language might have evolved in humans. (See [5] for more on this as well as for some interesting arguments for why language might be less likely to have evolved from involuntary animal calls.)

There is also experimental evidence that manual gesturing and language directly interact through the mirror neuron system. For instance, TMS/MEP experiments show that the area of the motor cortex which controls the right hand (located in the left cerebral hemisphere) becomes more excitable while participants are reading aloud, but the areas which control the left hand and either leg do not. This increase in excitability can't be due to speech articulation, as articulatory movements are controlled by both hemispheres. Rather, it seems to be specifically related to language processing! [8] Convergent evidence comes from studies in which people with aphasia (a spectrum of language disorders caused by brain damage) are asked to name objects (which is generally hard for people with aphasia). Naming is facilitated when accompanied by right-hand pointing gestures, but only for patients suffering from types of aphasia resulting predominantly from damage to the frontal lobes (the location of Broca's area). [9]

 Interestingly, humans appear to have evolved mirror neurons responsive to speech sounds. In one experiment, MEPs were recorded from the tongue muscles of subjects who received TMS bursts to the left motor cortex while listening to words containing either a double [f] sound or a double [r] sound. The difference between these sounds is that the former requires very little tongue movement, while the latter is primarily produced with the tongue. The recorded MEPs were larger while the subjects were listening to words containing the double [r] sound. [10] Similarly, the excitability of lip muscles following a TMS burst to the left motor cortex is higher when people are listening to speech or viewing speech-related mouth movements than when they're viewing eye and brow movements. Also, there is no increase in MEPs when the motor cortex in the right hemisphere is stimulated. [11]

It would seem, then, that there is something to the Motor Theory of Speech Perception after all. Mirror neurons present us with a plausible brain mechanism which might enable speech perception to proceed with reference to articulation. What is not clear at present is to what extent speech perception crucially depends on the listener's brain modelling the speaker's articulatory gestures. Apart from the criticisms of the Motor Theory mentioned above, another reason why we might want to allow for the possibility that speech perception may not critically depend on creating a mental model of the speaker's articulatory gestures is the fact that aphasic individuals with severe damage to the left frontal lobe, which often includes damage to Broca's area (and, presumably, in many cases, to a large part of the mirror neuron system), are often able to understand individual words as well as most connected speech. If forced to make an educated guess, I'd say that, if anything, speech perception may be enhanced by the mirror neuron system rather than crucially hinging on it. But even this is just a guess. Much research remains to be done.

Another distinct possibility is that the mirror neuron system is particularly important for imitation, and therefore for language learning, particularly if it is also true that it is important for understanding others' intentions. This too merits intensive investigation.

At any rate, the link between mirror neurons and language, whatever it ultimately turns out to be, is a tantalizing and fascinating research topic, and it will continue to inspire and intrigue cognitive scientists of all stripes for a long time to come.


References


[7] Buccino, G., Lui, F., Canessa, N., Patteri, I., Lagravinese, G., Benuzzi, F., et al. (2004). Neural circuits involved in the recognition of actions performed by nonconspecifics: An fMRI study. Journal of Cognitive Neuroscience, 16, 114-126.

[10] Fadiga, L., Craighero, L., Buccino, G., & Rizzolatti, G. (2002). Speech listening specifically modulates the excitability of tongue muscles: A TMS study. European Journal of Neuroscience, 15, 399-402. 

[6] Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation during action observation: A magnetic stimulation study. Journal of Neurophysiology, 73, 2608-2611. 

[2] Ferrari, P. F., Gallese, V., Rizzolatti, G., & Fogassi, L. (2003). Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. European Journal of Neuroscience, 17, 1703-1714. 

[4] Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: From action organization to intention understanding. Science, 308, 662-667.


[1] Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119, 593-609. 

[9] Hanlon, R. E., Brown, J. W., & Gerstman, L. J. (1990). Enhancement of naming in nonfluent aphasia through gesture. Brain and Language, 38, 298-314. 


[3] Kohler, E., Keysers, C., Umiltà, M. A., Fogassi, L., Gallese, V., & Rizzolatti, G. (2002). Hearing sounds, understanding actions: Action representation in mirror neurons. Science, 297, 846-848.


[8] Meister, I. G., Boroojerdi, B., Foltys, H., Sparing, R., Huber, W., & Topper, R. (2003). Motor cortex hand area and speech: Implications for the development of language. Neuropsychologia, 41, 401-406.

[5] Rizzolatti, G., & Craighero, L. (2007). Language and mirror neurons. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 771-785). Oxford: Oxford University Press.


[11] Watkins, K. E., Strafella, A. P., & Paus, T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia, 41, 989-994. 

Liar brains

Because most human languages use sound as the primary medium for expressing meaning (= getting another person's brain to roughly and partially replicate our own brain's patterns of electrochemical activity; putting thoughts in other people's heads), we tend to think of the task of understanding other people's words as primarily an auditory one. People speak, and we listen. True enough (although by no means as simple as it may appear at first glance). However, there's more to speech perception than mere listening. You may be familiar with the McGurk effect:


As this video demonstrates, we rely on more than just listening when comprehending language. We integrate auditory and visual information. When shown a video in which the visual component shows a person saying "ga, ga, ga" (the sound [g] is produced in the back of the mouth, with the back of the tongue touching the soft palate), but the audio component consists of the person saying "ba, ba, ba" (the sound [b] is produced in the front of the mouth, with the lips touching), the information from the two channels gets conflated, and our brain ends up telling us that we're hearing the intermediate sound [d] ("da, da, da"), produced by the tip of the tongue touching the ridge behind our upper front teeth.

OK, so we don't just listen to speech, we also watch it. But, as I mentioned above, listening is not "just listening". One problem that any system trying to comprehend human speech faces, whether wetware or software plus hardware, is that speech doesn't come in distinct sound units. While we may think of words such as cat as consisting of separate sounds (in this case a consonant, a vowel, and another consonant), the reality is quite different. If you made a recording of a person saying the word cat and then tried to cut it up into three distinct parts, similarly to the way you could cut up the printed version of this word:

c | a | t

you'd be in for a surprise. Wherever you decided to cut, you wouldn't be able to isolate individual sounds. This is because of a phenomenon called coarticulation: when you talk, sounds largely tend to overlap or run into each other, so that you get an overall "smudged" effect, referred to as parallel transmission (meaning that, while we may think we're only saying one sound at a time, we're actually saying multiple sounds simultaneously).

It gets more complicated. We tend to think of the first sound in cut and cat as essentially the same (let's use the symbol /k/ to write down this sound). However, when you use some not-too-complicated equipment, you realize that the actual physical waves that our brains perceive as /k/ are rather different between these two words. Similarly, the first sound of kestrel, cot, and kit, as well as the second sound of skin and the last sound of disk, all have distinct physical signatures. They're different sounds! What's more, the /k/ in cat pronounced by a toddler, a female adult, and a male adult are all physically different sounds, too! Somewhat convolutedly, linguists dub this lack of invariance. (Yes, we like to complicate things unnecessarily. Wouldn't variance suffice? Well, perhaps it's not as emphatic...) Anyway, the fact remains that our brains are capable of hearing a bunch of acoustically very different signals, deciding that these differences are unimportant and writing them off as such, and then making us believe that we are in fact hearing the same sound, /k/, in all these words.
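If you'd like to see the lack of invariance for yourself and happen to have recordings of cat and cut lying around, a few lines of Python stand in nicely for the not-too-complicated equipment. The sketch below is only an illustration: the file names are hypothetical, it assumes mono WAV files trimmed to start right at the release of the /k/, and the analysis settings are just reasonable defaults. It averages the spectrum of the first 80 milliseconds of each recording (roughly the stretch occupied by the /k/ and the transition into the vowel) and summarizes it with a single number, the spectral centroid, which will generally come out different for the two words.

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def k_spectrum(path, dur=0.08):
    # Average spectrum of the first `dur` seconds of a (mono) recording,
    # assumed to start right at the release of the /k/.
    fs, sig = wavfile.read(path)
    sig = sig.astype(float)[: int(dur * fs)]
    freqs, _, sxx = spectrogram(sig, fs=fs, nperseg=256)
    return freqs, sxx.mean(axis=1)            # mean power per frequency bin

def centroid(freqs, spec):
    # Spectral centroid: the "centre of mass" of the energy along the frequency axis.
    return (freqs * spec).sum() / spec.sum()

f_cat, s_cat = k_spectrum("cat.wav")          # hypothetical recordings
f_cut, s_cut = k_spectrum("cut.wav")
print(f"/k/ in 'cat': centroid ~ {centroid(f_cat, s_cat):.0f} Hz")
print(f"/k/ in 'cut': centroid ~ {centroid(f_cut, s_cut):.0f} Hz")

The point is not the particular numbers but the fact that two stretches of sound your brain insists are "the same /k/" leave measurably different acoustic footprints.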

Not convinced? Well, you don't need a spectrograph (or anything of the sort) to become a believer. The first sound of kit and the second sound of skit are really not the same. A classic way of demonstrating this to beginning students of linguistics is to have them pronounce these words in front of a burning candle or while holding a handkerchief in front of their mouth. The /k/ at the beginning of kit is followed by a little puff of air (what phoneticians call aspiration), while the /k/ after the /s/ of skit is not, so the flame flickers (and the handkerchief flutters) for one word but not the other. Even though your brain tells you that you're hearing /k/ in both cases, the candle and the handkerchief would beg to differ! You can try this and see for yourself.
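If you trust a microphone more than a candle, the same demonstration can be roughed out in Python. Again, everything specific here is invented for illustration: the file names are hypothetical, and the recordings are assumed to be mono WAV files trimmed to begin at the release of the /k/. The idea is that aspiration is hiss-like noise preceding the onset of voicing, and noise has a much higher zero-crossing rate than a vowel, so we can estimate how long the puff of air lasts by walking over short frames until the zero-crossing rate drops. The kit recording should yield a considerably longer stretch than the skit one.

import numpy as np
from scipy.io import wavfile

def aspiration_ms(path, frame_ms=5, zcr_threshold=0.25):
    # Rough duration (in ms) of the noisy, aspirated stretch at the start of a recording
    # assumed to begin at the release of the stop. Walks over short frames and stops at
    # the first frame whose zero-crossing rate falls below the threshold (i.e. where
    # voicing plausibly begins).
    fs, sig = wavfile.read(path)
    sig = sig.astype(float)
    frame_len = int(fs * frame_ms / 1000)
    for i, start in enumerate(range(0, len(sig) - frame_len, frame_len)):
        frame = sig[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # sign changes per sample
        if zcr < zcr_threshold:
            return i * frame_ms
    return None   # no voiced-looking frame found

print("kit :", aspiration_ms("kit.wav"), "ms of aspiration")   # hypothetical files
print("skit:", aspiration_ms("skit.wav"), "ms of aspiration")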

Interestingly, if you sought out monolingual speakers of Thai, Danish, or Hindi and asked them to listen to you saying the English words kit and skit, they would most certainly tell you that (Why, of course!) the first sound of the former and the second sound of the latter are different sounds! This is because (unlike English) Thai, Danish, and Hindi have words with different meanings that differ only in that one of them has the /k/ of kit and the other the /k/ of skit. However, your Thai, Danish, and Hindi speakers would still hear the first sound of cat and the first sound of cut as the same sound, just like you do.

In other words, your brain lies to you all the time. And mine lies to me. They deceive us into believing that we're hearing sounds we're not really hearing. They also make us believe that physically different sounds are identical and that words consist of individual, separately articulated sound units. What's more, they trick us into believing that speech is a string of individual words, separated by pauses, but the reality is actually quite different. We may mean, "What. Did. You. Say.", but we say, "Whadijasay." We run words together just like we mush sounds together (which becomes patently obvious when you try to make out individual words while listening to a language you don't speak). Try Xhosa, for instance:


Jaseethat?

The brain is a liar. And it lies to us Con. Stan. Tly. Not just when speaking or comprehending speech.

Of course, there are very good reasons that this is so. After all, we need to make sense of the world, respond to it quickly and efficiently, and survive long enough to procreate. (Or eat more chocolate cake. Whatever motivates you!) Imagine how bogged down our brains would get if they weren't able to group phenomena into categories. Imagine barely surviving an encounter with Leopard A only to be completely baffled by the sight of your next leopard, Leopard B, a couple of days later just because Leopard B had longer whiskers, a broken claw,  a different pattern of spots on its coat, or even no spots at all! Clearly, your brain needs to be able to tell you, "This is a leopard. Run!" rather than "Hey, let's check out this kitty cat."

Similarly, your brain needs to deceive you every now and then (or all the time, really) in order to be able to use language efficiently (or indeed at all).

And language is probably the most powerful evolutionary adaptation humans have ever undergone. Language gets you abundant progeny. And a lot of cake.

About this blog

Welcome to The Arcuate Fasciculus.

I am a psycholinguist, but am fascinated by most areas of linguistics. In this blog, I hope to write about all kinds of language-related topics that I find gripping, from neurolinguistics to language policy and planning. I will try to make it informative and not too technical (meaning intelligible to the non-specialist reader), but I will also try to avoid gross oversimplification of the issues wherever possible.

The name of this blog comes from an anatomical structure in the brain whose Latin name translates into English as "the arched bundle". You can see it in blue in this picture.



The arcuate fasciculus is a tract of neural fibers which links two areas important for language use (there are still disagreements about its precise role). The name is meant to reflect my primary professional interest - how the brain manages to learn, represent, and use language. By extension, it should also be understood to stand for "all things linguistic".

Thanks for stopping by, and I hope to see you again.