Undoing Babel

There are some 6,000 languages in the world. This confusing state of affairs has been documented throughout history, going back to Sumerian civilization more than 5,000 years before the birth of Christ. And for as long as civilization has existed, translating one language into another has been a tedious and difficult task.

Of course, many people learn more than one language and can translate fluently, but learning a language takes years for most people, and there are always other languages that even the multilingual among us won't understand. As a result, scientists and engineers have worked for many years, largely without success, to build a machine that could translate one language into another. Anyone who has tried an online translator such as WorldLingo.com knows that, when confronted with anything but the simplest sentences, the system produces nonsense.

But there have been some promising developments of late.  For example, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and the Wellcome Trust Centre for Neuroimaging in London have been studying how the brain itself processes and understands language, giving hints as to how a machine translator might be designed. 

According to their report in the journal PLoS Computational Biology, spoken language is built from sequences of different lengths that unfold on different timescales. There is a hierarchy of time that the brain pays attention to.

For example, sounds of individual phonemes, such as the sound of a vowel, form a rapidly changing sequence.  But the subject that the person is addressing typically changes at a much slower rate — people tend to stay on the same topic. 

The brain is always trying to anticipate what is going to happen next.  This is a basic survival instinct that allows animals to avoid predators and find food, among other things.  In listening to speech, it is also trying to guess what's next, and it uses these temporal hierarchies to help do that.  For example, if the topic is how hot the weather is, then the sound made by the letters "s" and "u" is much more likely to lead to the word "summer" or "sun" than to the word "supper" or "sumptuous."  The brain employs this type of rapid filtering to understand language in a fluid manner.
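The filtering described above can be pictured as a tiny probabilistic model. The sketch below is purely illustrative and not from the study: the vocabulary and the topic-conditioned frequencies are hand-assigned assumptions, chosen only to show how a topic like "weather" reshapes the likelihood of words beginning with the sounds "s" and "u".

```python
# Toy illustration of topic-conditioned prefix filtering.
# The topics, words, and frequency numbers below are hypothetical,
# hand-assigned values for demonstration only.
topic_freq = {
    "weather": {"summer": 30, "sun": 40, "supper": 2, "sumptuous": 1},
    "food":    {"summer": 3, "sun": 2, "supper": 35, "sumptuous": 20},
}

def predict(prefix, topic):
    """Rank words starting with `prefix` by topic-conditioned probability."""
    # Keep only candidates consistent with the sounds heard so far.
    candidates = {w: f for w, f in topic_freq[topic].items()
                  if w.startswith(prefix)}
    total = sum(candidates.values())
    # Normalize frequencies into probabilities and sort, best first.
    return sorted(((w, f / total) for w, f in candidates.items()),
                  key=lambda wf: -wf[1])

print(predict("su", "weather"))  # "sun" and "summer" dominate
print(predict("su", "food"))     # "supper" and "sumptuous" dominate
```

With the topic fixed, the same two sounds lead to very different best guesses, which is the essence of the rapid filtering the article describes.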

So the brain can often know what's going to be said before the ear hears it.  To see if this process could be mechanized, the researchers devised a mathematical algorithm that mimicked the neurological process that takes place when people listen to spoken language...
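The researchers' actual algorithm is not detailed here, but the general idea of prediction across a temporal hierarchy can be sketched as follows. This is an assumption-laden illustration, not the published method: a slowly-changing topic belief (the slow level) is updated by Bayes' rule as each word arrives (the fast level), and that belief then shapes the prediction of the next word.

```python
from collections import defaultdict

# Hypothetical topic-conditioned word probabilities (illustrative only;
# not taken from the study).
word_prob = {
    "weather": {"hot": 0.4, "sun": 0.4, "eat": 0.1, "supper": 0.1},
    "food":    {"hot": 0.2, "sun": 0.1, "eat": 0.4, "supper": 0.3},
}

def update_belief(belief, word):
    """Slow level: revise the topic belief after hearing one word (Bayes rule)."""
    new = {t: belief[t] * word_prob[t].get(word, 1e-6) for t in belief}
    z = sum(new.values())
    return {t: p / z for t, p in new.items()}

def predict_next(belief):
    """Fast level: mix each topic's word probabilities by the current belief."""
    scores = defaultdict(float)
    for t, p in belief.items():
        for w, q in word_prob[t].items():
            scores[w] += p * q
    return max(scores, key=scores.get)

# Start undecided between topics, then hear two weather-related words.
belief = {"weather": 0.5, "food": 0.5}
for word in ["hot", "sun"]:
    belief = update_belief(belief, word)

print(belief)               # belief has shifted strongly toward "weather"
print(predict_next(belief)) # next-word guess now favors weather vocabulary
```

The two update rates mirror the hierarchy in the article: the topic belief drifts slowly across many words, while word predictions change with every sound.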
