At the moment, I am taking a (temporary) break from my furious critiquing of peer review, and have begun working busily on a new series about the workings of human languages. Writing about this for a general audience is hard, particularly because I suspect that many people have unexamined intuitive views about language that might be very different from the view I am trying to put forth. Additionally, if you're a linguist, an analytic philosopher, or a psychologist studying language, you will likely have a long-held world view that my writings may challenge. (It's all rather intimidating, really...)
But in any case, one of the serious puzzles that I'll be piecing together in upcoming posts is how on earth we are able to communicate about the wonderful complexity of the world through a noisy, lo-fi channel (speech). Some of the most important questions I'll be asking are: How do we understand what someone means through words? How do we communicate meaningfully through words when we speak? What is the relationship between words and the world?
To ease us all through the process, I've decided to begin with passages from a range of well-known voices on this topic. The questions and the ideas these thinkers pose motivate much of the work that we do in our lab (and they may motivate you to spend some time puzzling on your own). Today's passage is written by George Miller (1951) and raises two intriguing questions:
First: We think of words as having meanings. But is the meaning of a word concrete and static, or is it dynamic and contextual?
Second: Are words even the right linguistic unit to be looking at when it comes to measuring 'meaning'? What about phonemes? Syllables? Chunks? Sentences? Whole conversations?
Let's take a look at what Miller has to say on this:
"It is widely recognized and usually forgotten that modern dictionaries of English are based on usage rather than fiat. The lexicographer collects many sentences by writers of literary or historic importance who have used a certain word. He then proceeds to classify these sentences into groups that seem to use the word similarly. For each group he writes a short phrase that can be substituted into the sentences in place of the word itself. A dictionary definition is, therefore, an alternative verbalization derived from the contexts in which the word has been used previously. A given word may have a variety of definitions inferred from the variety of contexts in which it appears.
Consider the word ‘take.’ In the Oxford English Dictionary there are 317 definitions of this word… This is a confusing state of affairs. To find an analogy, suppose that ‘da-di-dit’ in the International Morse Code usually represented ‘D’ but under certain conditions might represent almost any other letter in the alphabet. How could one tell which letter was intended? A set of rules saying that ‘da-di-dit’ is ‘D’ unless used before one thing or after another, etc., would be required. The decoding would get much more complicated.
Why do people tolerate such ambiguity? The answer is that they don’t. There is nothing ambiguous about ‘take’ as it is used in everyday speech. The ambiguity appears only when we, quite arbitrarily, call isolated words the unit of meaning. It is possible to draw up a dictionary using some other verbal unit – the syllable, say, instead of the word. …Most people [would] object to such a syllable dictionary. It is useless, they feel, because syllables standing alone have no meaning. Or if they do have meanings, the meanings are so many and so ambiguous that it is impossible to know what any syllable means unless you know what other syllables precede it and follow it. Words standing alone have no meaning either or, more precisely, have no single meaning. The dictionary definitions are derived from the contexts in which the words occur.
There are no meanings in the dictionary. There are only equivalent verbalizations, other ways for saying almost the same thing. There is a common belief that to define a word is to give its meaning. It is healthier to say that by defining the word we substitute one verbal pattern for another." (Miller, 1951, pp. 111-112)
Fascinating, yes? There's developmental evidence, by the way, that young children don't initially distinguish between individual words the way we do as adults. The psychologist Elena Lieven has found, for example, that fully half of the first 400 identifiable multi-word utterances that 2-year-olds produce are 'frozen' -- meaning the children show no working knowledge of the meanings of the individual words. Lieven has also found that even with apparently 'non-frozen' multi-word expressions, children tend to be pretty conservative in their productions -- fully 75% of the 'non-frozen' utterances that 2-year-olds produce differ from something the child has said before only by the addition, subtraction, or substitution of a single word. These findings suggest that children aren't automatically segmenting speech into individual 'words' as we might expect -- which drives home Miller's point that words aren't necessarily the right units to be inspecting in the first place when we're looking at how we arrive at 'meaning.'
Even more unsettling: there is some thought that without a writing system to emphasize the distinctions, adults might not think of words as distinct 'entities' either. (And even with a writing system in place, we need only think about how often people misspell "a lot" as "alot" to realize that the question of where we should draw the boundaries between words isn't always clear.) There's way more where that came from -- this, I think, is just a taste of what's to come!
One statement I do take issue with is Miller's claim that people don't tolerate ambiguity. I come from an 'information theoretic' perspective on language and think that a certain level of ambiguity -- called 'uncertainty' in the literature -- is a standard part of the linguistic 'signal.' It's also simply not the case that we understand each other with 100% certainty all the time. Just think: if you're sitting in a lecture on a subject you know little about, even if you know all (or even most) of the words, there's still a good chance you won't be as sure about what's going on as the expert sitting next to you. You're both hearing the same words, but you're not processing them in the same way.
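For readers curious what 'uncertainty' means formally, here is a minimal sketch in Python using Shannon entropy. The sense labels and probabilities for 'take' are invented for illustration (they are not corpus estimates); the point is only that a listener's uncertainty about which sense is intended can be quantified, and that context collapses it.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution,
    given as a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Hypothetical sense distributions for the word 'take' -- illustrative
# numbers only, not estimates from any real corpus.
take_isolated = {"grasp": 0.25, "accept": 0.25, "require": 0.25, "transport": 0.25}
take_in_context = {"grasp": 0.97, "accept": 0.01, "require": 0.01, "transport": 0.01}

# Heard in isolation, the four senses are equally likely:
# maximal uncertainty for four outcomes (2.0 bits).
print(entropy(take_isolated))

# Heard in a disambiguating context, one sense dominates
# and the uncertainty is far lower.
print(entropy(take_in_context))
```

On this view, the 317 dictionary senses of 'take' aren't a communicative disaster; the surrounding words carry enough information to drive the listener's uncertainty down toward zero, which is one way of cashing out Miller's observation that the ambiguity only appears when words are considered in isolation.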
Miller, G. A. (1951). Language and Communication. New York: McGraw-Hill. pp. 111-112. [Please note: this book is available for free online at Questia.]
Lieven, E., Pine, J., & Baldwin, G. (1997). Lexically-based learning and the development of grammar in early multi-word speech. Journal of Child Language, 24(1), 187-219.
Lieven, E., Behrens, H., Speares, J., & Tomasello, M. (2003). Early syntactic creativity: a usage-based approach. Journal of Child Language, 30(2), 333-370. PMID: 12846301.
Lieven, E., Salomo, D., & Tomasello, M. (2009). Two-year-old children's production of multiword utterances: A usage-based analysis. Cognitive Linguistics, 20(3), 481-508.