Tuesday, August 09, 2011

Interactions Between Hearing And Vision

I've been reading The Tell-Tale Brain by Vilayanur S. Ramachandran. I'm on a chapter where he describes "cross-modal" interactions:
"A modality is a sensory faculty, such as smell, touch, or hearing. "Cross-modal" refers to sharing information between senses, as when your vision and hearing together tell you that you're watching a badly dubbed foreign film."
He gives this example. In the illustration below, which shape would you name "bouba" and which "kiki"?

He says that 98% of English-speaking students called the jagged shape on the left "kiki" and the rounded shape "bouba." As well, "if you try the experiment on non-English-speaking people in India or China, where the writing systems are completely different, you find exactly the same thing." Children as young as 2.5, too young to read, also show this effect.

Interestingly, people with autism don't show as strong a preference, agreeing only 60% of the time.

Ramachandran says this happens because:
"The gentle curves and undulations of contour on the amoeba-like figure mimic the gentle undulations of the sound "bouba," as represented in the hearing centers in the brain and in the smooth rounding and relaxing of the lips for producing the curved booo-baaa sound. On the other hand, the sharp wave forms of the sound kee-kee and the sharp inflection of the tongue on the palate mimic the sudden changes in the jagged visual shape."
Thus, our brain associates a shape with a sound, and vice-versa, setting the stage for the development of language.

Here's another example I saw on Google+ this morning, the McGurk effect. The video demonstrates how our speech perception is cross-modal: it is strongly affected by what we see. Even when we know what the sound is, we don't hear it the same way if we can't reconcile the visual with it. See if you agree: