
So, halfway through the last post I forgot my original reason for writing it, as you can tell from the somewhat aimless jabbering. (Could someone at least tell me when I have wandered off the reservation? LOL.) But now I remember. To frame this discussion of paradox within the context of the first few parts, and this blog’s focus on economics, consider the following: We do not need to expand the set of words in natural language to cover every possible bit of information, though we know that we could attempt it forever without success. The endeavor is useless, however, because the law of diminishing returns applies to words just as it does to everything else. In microeconomics, the law of diminishing returns says (from Wikipedia):

…the marginal production of a factor of production, in contrast to the increase that would otherwise normally be expected, actually starts to progressively decrease the more of the factor is added. According to this relationship, in a production system with fixed and variable inputs (say, factory size and labor), beyond some point each additional unit of the variable input (i.e., man-hours) yields smaller and smaller increases in output, also reducing the mean productivity of each worker. Conversely, producing one more unit of output costs more and more (due to the large amount of variable inputs being used, to little effect).

Humans demand words, and we use them, like our own capital, to produce and transmit information, which gives great utility to each person capable of it. But we really only need so many descriptive words. At the point where the benefit from adding another word is less than the cost of memorizing it and transmitting it to enough people to be useful, the word or set of words will not be added. And this point surely exists: take, for example, the extraordinarily minimal benefit of adding a word that means “soggy paper that could have been wet by any of many sources / ambiguously wet paper” versus its comparatively major cost. Language is dynamic, meaning that new demand arises, and therefore so do new words, so this state need not last forever. (In practice, languages are always changing, and a prescriptivist book is archaic two seconds after it is published.)
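
To make the economics concrete, here is a toy sketch in Python of where vocabulary growth stops (every number is invented for illustration): the benefit of each new word falls because the most pressing needs get words first, while the cost of memorizing and spreading a word stays roughly fixed.

```python
# Toy model: diminishing marginal benefit of each new word versus a
# roughly constant cost of adopting it. All figures are made up.

def marginal_benefit(nth_word: int) -> float:
    """Benefit of the nth word; frequent, pressing needs get words first."""
    return 1000.0 / nth_word

COST_PER_WORD = 0.5  # memorizing it + transmitting it to enough speakers

n = 1
while marginal_benefit(n) > COST_PER_WORD:
    n += 1

# Beyond this point, a new word ("ambiguously wet paper") costs more
# than it is worth, so it never enters the lexicon.
print(f"Vocabulary growth stops around word #{n}")  # word #2000 here
```

New demand shifts the benefit curve upward, which is exactly why living languages keep adding words anyway.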

With most linguistic needs met, the human spirit still needs more. Humans get more utility from moving outside the scope of natural language, giving heed to faith and developing paradox as a method for coping with all the dark corners, nooks, and gaps of natural language. It is not difficult to create a new color word for a particular undifferentiated shade of green, but the need may not be strong enough to do so. The need to describe concepts and ideas that do not fit into one tidy shape, however, requires entirely new words. Languages all over the world have long struggled with these ideas, certainly with paradox. When the cost of storing information went down, our vocabulary commensurately increased in all kinds of fields where it was previously more costly than beneficial (think: color vocabulary) to store such information in the human lexicon. Freed from the onerous costs of information storage, the vocabulary for faith and paradox, that which becomes the bright and ineffable in the human experience, zealously blooms in the art of the race.

Herbert Muller realized this long before I did. In his incredible book The Uses of the Past, whose aim is to examine our relationship with history, he writes of the majestic Hagia Sophia:

Only, my reflections failed to produce a neat theory of history, or any simple, wholesome moral. Hagia Sophia, or the ‘Holy Wisdom,’ gave me instead a fuller sense of the complexities, ambiguities, and paradoxes of human history. Nevertheless, I propose to dwell on these messy meanings. They may be, after all, the most wholesome meanings for us today; or so I finally concluded.

Any interesting and useful theory of economics, linguistics, or art is doomed to immediate obsolescence without considering messy meanings.


At first blush, faith seems like a quality of knowledge that could fall under the “personal knowledge” category of data source. Faith is often a deeply personal thing, though it is just as likely not to be. Some skeptics think knowledge of God comes from the iron fist of parents and Republicans, so that would actually fit under the “knowledge-through-language” category of data sourcing. And many beliefs that derive from faith are considered myths in some speech communities, so those might fall under the “non-personal knowledge” category. Still, faith, whether religious or otherwise, may also sometimes be the domain of something completely different. It may not be a type of knowledge at all, but rather a conclusion of the will alone, with almost, if not entirely, zero basis from other sources to back it up. (In this sense, Christianity for many may not be the purest faith, since it involves reliance on the Bible and other sources generally.)

Since it seems to me that every bit of information transmitted in natural language has an implied data-source element tied to it, I think natural language may have a difficult time touching the areas of faith. We may not all be entirely sure of our faith, in people, ideas, or outcomes. Precisely because there is no backing for its objects, the entire realm of imagination can come to the fore, leading to ever more components both inside natural language and outside it, grasping at equally unlikely and impossible ideas. (It reminds me of E Space from David Brin’s Heaven’s Reach.) Perhaps any world imagined requires a little faith (see: “Far Beyond the Stars” below).

Faith, and its linked universes, are but one manifestation of “the set of all things that are possible and impossible.” The set of all things that are possible and impossible is a large, infinite set, larger than the set of things that are merely possible. The set of what has been, is, or will be imagined, which overlaps with the set of things possible and not, is smaller still. Since the capacity of natural language depends very much on imagination, as all texts, narratives, even biographies, are fictions (as Milosz said), language is limiting, though with its rules it gives us the capacity to explore. This leads us to faith’s brother in the set: ethereality. This, too, could lead us beyond natural language. By this I mean anything with one foot in our own tangible world and one foot in another, be it Heaven, Mt. Olympus, or a parallel universe running slightly slower than our own. (The distinction here between ethereality and faith is mostly false, used for illustrative purposes.) As Anne Carson showed, where the Christians have holy, the ancients have MOLY. We have the sounds and can express the word, but we have no idea what the expression really means, nor its etymology. The translator encounters a brilliant, not terrible, silence. It implies entire domains of knowledge outside our grasp: words, concepts, and rules for constructing them that are beyond natural language. Their utterance in our world is but the tip of the iceberg of their meaning. Translation is stopped, worthless.

One particular set of expressions, arising from faith and ethereality, is paradox. Paradoxes in conventional discourse can mean almost anything. According to Wikipedia:

A paradox is a statement or group of statements that leads to a contradiction or a situation which defies intuition. The term is also used for an apparent contradiction that actually expresses a non-dual truth (cf. kōan, Catuskoti). Typically, either the statements in question do not really imply the contradiction, the puzzling result is not really a contradiction, or the premises themselves are not all really true or cannot all be true together. The word paradox is often used interchangeably with contradiction. Often, mistakenly, it is used to describe situations that are ironic.

“Paradox” is probably most often used to describe situations defying intuition. Some paradoxes in logic, like Curry’s paradox, remain by virtue of logic itself, though my hunch is that they could probably be dissolved by a heavy dose of linguistics. If you enjoy those games, by the way, knock yourself out. The ethereal cases we discussed above do not necessarily entail paradox of any sort, even the ironic. Rather, much paradox depends on our perceptions and beliefs. For example, is it possible for someone to be both good and evil? This question relates to some deep questions of human nature that vex even those who do not think about them, and it leads to some profound art. Also: is it possible to be in the past and the future (and the present) at the same time?
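
For the curious, here is the standard sketch of how Curry’s paradox runs (the usual textbook presentation, nothing original to this post): take a sentence C that asserts “if C, then P” for any P you like, and P follows.

```latex
% Let C be the sentence asserting its own consequence, for arbitrary P:
C \;\leftrightarrow\; (C \rightarrow P)
% Assume C. Then C -> P holds, and combined with C itself, P follows.
% Discharging the assumption, C -> P is provable with no assumptions:
\vdash C \rightarrow P
% But C -> P is exactly C, so C is provable, and then so is P:
\vdash C, \qquad \vdash P
```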

Both questions were considered by Shakespeare, and one or both were considered by other greats, including Klimt, Milton, Spenser, and Dali. Klimt’s work pits the static, timeless, glittering gold medium against passionate, timely, tangible action. His Byzantine and Egyptian influences command awe, not respect, because the meanings are meant to be ambiguous yet beautiful. Milton rejoices in humanity’s freedom to choose, showing that this freedom can lead to the most sublime of existences or the most dastardly, the glorious or the tragic. It is for us to choose, for we possess the potential for both. Spenser grasped at similar themes, as Kermode described:

The discords of our experience– delight in change, fear in change; the death of the individual and the survival of the species, the pains and pleasures of love, the knowledge of light and dark, the extinction and the perpetuity of empires– these were Spenser’s subject; and they could not be treated without this third thing, a kind of time between time and eternity.

Not just discords, but paradoxes, perhaps. Dali brought old symbols into modern art, meticulously plotting old stories for the modern era, but his “The Persistence of Memory” summons our consideration of our relationship with time. Most believe we live a linear existence, moving from one moment to the next. Dali suggested this isn’t necessarily the case. Although Dali showed this idea at a glance in paint, I have never seen a better explication in any other visual medium than this one from, sigh, yes, Deep Space Nine:

Of course, Shakespeare may have endured as the paradox specialist nonpareil. It is probably no coincidence that his works stand above almost all others in their capacity to possess us. That’s because, unlike Twilight or Star Wars, they ask questions to which there are no clear answers. We can consider them anew each day. Kermode, a critic where I am not, had much to say of the Bard:

Now Macbeth is above all others a play of prophecy; it not only enacts prophecies, it is obsessed by them. It is concerned with the desire to feel the future in the instant, to be transported beyond the ignorant present. It is about failures to attend to the part of equivoque which lacks immediate interest (as if one should attend to hurly and not to burly). It is concerned, too, with equivocations inherent in language. Hebrew could manage with one word for ‘I am’ and ‘I shall be’; Macbeth is a man of a different temporal order. The world feeds his fictions of the future. When he asks the sisters ‘what are you?’ their answer is to tell him what he will be. […] …and the similarities of language and feeling remind us that Macbeth had also to examine the relation between what may be willed and what is predicted. The equivocating witches conflate past, present, and future; Glamis, Cawdor, Scotland. They are themselves, like the future, fantasies capable of objective shape. Fair and foul, they say; lost and won; lesser and greater, less happy and much happier. […] The act is not an end. Macbeth three times wishes it were: if the doing were an end, he says; if surcease cancelled success, if ‘be’ were ‘end.’ But only the angels make their choices in non-successive time, and ‘be’ and ‘end’ are only one in God. The choice is between time and eternity. There is, in life, no such third order as that Macbeth wishes for.

That’s a mouthful, but you get a sense of the conceptual foldings the reader must grapple with. Paradoxes may turn cause and effect on their head or involve contradictions. Whatever the case, they usually involve the existence of something that should not be, given the truth value of other parts of the situation or statement. When one thing can suddenly mean another thing that was thought to be mutually exclusive, all kinds of possibilities unfold. This, in turn, expands the scope of natural language and the landscapes of our human adventures. Paradoxes are a means by which we can surpass our limits, thereby giving us incentive to grow.

There are words that exist beyond the domain of natural language. Remember that language’s ultimate utility lies in its ability to transmit information. Yet I think we would all agree that there are types of information impossible to convey. In the movie Contact, based on the book by Carl Sagan, Palmer Joss demonstrates this to Dr. Arroway by asking, “Did you love your father?” Arroway responds affirmatively. Given the previous narrative of the movie, there is little doubt, and Arroway seems flummoxed by the need to answer such an obvious question. Joss responds, “Prove it.” (2:05 – 2:17 below)

An action may not prove it. Any person could feed an ill father. Any person could watch television with a father. None of these things, alone or together, suffices. Yet from Dr. Arroway’s perspective, she knows it beyond a doubt. The answer lies encoded within her consciousness, as do the language centers necessary to translate the neuronal patterns of her persona into information. This could be a gap in natural language. And maybe there just haven’t been words invented for this type of evidentiary matter.

But I think we can expand the example further to prove an important point. How do we prove that we can prove it? How do we prove that we can prove that we proved it? How can we check that? And then how can we be sure it’s reliable? The problem here is not an infinite regress. Let us assume instead that we can prove all these things and more. A mathematician named Kurt Gödel suggested with his Incompleteness Theorems that we can only come so close to perfect information without ever having it. Each attempt to obtain it pushes the goal one step further away.

In what has been compared in importance to Heisenberg’s Uncertainty Principle and Einstein’s General Relativity, Gödel’s Incompleteness Theorems, the First and the Second, say two things (from Wikipedia):

  1. Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.
  2. For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.
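
In standard notation, the two theorems look like the following sketch, which uses the usual provability predicate rather than the Wikipedia excerpt’s wording. (Strictly, refuting the Gödel sentence requires a slightly stronger hypothesis than bare consistency, a detail Rosser later removed.) For a consistent, effectively generated theory T containing arithmetic:

```latex
% First theorem: there is a "Gödel sentence" G_T asserting its own
% unprovability, and T can neither prove nor refute it.
G_T \;\leftrightarrow\; \lnot\,\mathrm{Prov}_T(\ulcorner G_T \urcorner),
\qquad T \nvdash G_T, \qquad T \nvdash \lnot G_T

% Second theorem: T cannot prove its own consistency statement,
% where Con(T) abbreviates "no contradiction is provable in T."
\mathrm{Con}(T) \;\equiv\; \lnot\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
\qquad T \nvdash \mathrm{Con}(T)
```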

The first theorem says that in a consistent system of arithmetic there will be true statements that are simply not provable. It derives somewhat from the so-called Liar’s Paradox (“This statement is false.”), whose analogue here is “This statement is not provable within this system.” The difference is significant, so let us focus on the latter version. Interestingly, as Rebecca Goldstein explains in Incompleteness: The Proof and Paradox of Kurt Gödel, any effort to expand the system and prove those unprovable statements will prove futile:

…the proof demonstrates that should we try to remedy the incompleteness by explicitly adding [the paradox] on as an axiom, thus creating a new, expanded formal system, then a counterpart to G can be constructed within that expanded system that is true but unprovable in the expanded system. The conclusion: There are provably unprovable, but nevertheless true, propositions in any formal system that contains elementary arithmetic, assuming that system to be consistent. A system rich enough to contain arithmetic cannot be both consistent and complete.

In truth, the expressions of arithmetic easily expand into a subset of natural language — the way in which we structure information, oftentimes even in our minds. Since arithmetic is a subset of natural language, and arithmetic will keep going and going without ever being complete, natural language is doomed to the same fate; and there are other subsets that redouble arithmetic’s efforts. An attempt to make natural language (a system very different from arithmetic) complete involves going to the very end of it, putting in all the sounds and morphemes possible, as we have already discussed. In a mathematical sense, we could try to have a description and expression for everything in natural language. Every nook and cranny of meaning possible, likely including every permutation and flavor of combined meanings (n-dimensional superfactorials like !sweet and !love come to mind), would have to be revealed. Once all information on tangible things in the multiverse is discovered, there remains perceptive, intangible information locked in the conscious mind. Then there’s the unconscious mind. Then there’s… and on and on and on. This is all to say that it would be very nice to have natural language incorporate expressions for everything, but a) it’s impossible and b) it’s not practical. The amount of energy it would take would be far better spent doing something else, like making pizza, curing cancer, or inventing warp drive.

What’s interesting about all this is that despite this limitation we have already transcended the limits of natural language. This is best illustrated by going back to the set of perceptive, intangible information that must be described. If all tangible information had a certainty of 99.9999%, then we could add a “data source morpheme” such as /-wa/ to indicate this. Or we could leave it unmarked and simply add morphemes for anything but 99.9999%-certain information. Intangible information gets graded differently by data morphemes. Jaqaru, a dying language with about 3,000 speakers found primarily in the Lima region of Peru, features such data morphemes. Dr. M.J. Hardman of the University of Florida has studied the language in detail. Her findings on data morphemes include the following (from Wikipedia):

Data-source marking is reflected in every sentence of the language. The three major grammatical categories of data source are:

1. personal knowledge (PK)–typically referring to sight
2. knowledge-through-language (KTW)–referring to all that is learned by hearing others speak and by reading
3. non-personal-knowledge (NPK)–used for all myths, histories from longer ago than any living memory, stories, and non-involvement of the speaker in the current situation
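
To see how pervasive such marking is, here is a minimal sketch in Python of a Jaqaru-style tagging scheme. The /-wa/ suffix echoes the hypothetical morpheme above; the other two suffixes are likewise invented placeholders, not real Jaqaru morphology.

```python
# Toy evidential marking: every statement must carry a data-source
# morpheme. Suffix forms are hypothetical, not actual Jaqaru.
EVIDENTIALS = {
    "PK": "-wa",     # personal knowledge: the speaker saw it
    "KTW": "-mna",   # knowledge through language: heard or read it
    "NPK": "-tayna", # non-personal knowledge: myth, distant history
}

def mark(statement: str, source: str) -> str:
    """Attach the required data-source morpheme to a statement."""
    return statement + EVIDENTIALS[source]

print(mark("it.rained", "PK"))   # it.rained-wa    (I saw the rain)
print(mark("it.rained", "KTW"))  # it.rained-mna   (someone told me)
print(mark("it.rained", "NPK"))  # it.rained-tayna (known only from stories)
```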

So where are we transcending? In the areas of natural language for which even data-source marking would be difficult, if not impossible, without another data-source morpheme referring to 0% certainty. I think of three areas primarily: faith, ethereality, and paradox.

At its root, language is a means by which humans transmit information to each other. Typically, its contours are determined by speech communities, and linguists advocate the spoken form as language’s most important: its richest, most varied, and most dynamic. While it is important to consider the differences between the spoken and written forms, it helps to remember that sign languages are also completely robust, rule-governed, grammatical languages; American Sign Language in fact has a different grammar from English. So the central point to remember about language is that the main demand for it comes from a virtually universal need and desire to communicate.

In the past few decades, linguists have been forced to dig a little deeper into language in order to investigate just what differentiates human language, or natural language, from other forms of information transmission, both by artificial human means and by other organic means. Some bee species possess elaborate instinctive dances that allow them to communicate distance and direction. Parrots have been known to go beyond mere mimicry in responding to some linguistic stimuli. Computers can perform extraordinarily difficult computations, and hundreds of engineers work day by day to make them think more like us. Chimpanzees and other primates can learn and transmit arbitrary signals for tangible objects as well as actions. At the highest end of the spectrum of non-human language we have dolphins (genus Tursiops). I trust you already know that they are the dirtiest, filthiest joke tellers anywhere in the universe, but did you know that they are able to grasp simulations much faster than chimps, implying a much higher level of awareness?

Setting aside dolphins for a moment (dolphin linguistics are complex and far beyond the scope of this post), chimpanzees seem to perform some linguistic tasks relatively well. They can learn dozens of words. But we have learned that they possess only the most rudimentary of grammars: words are strung together in no meaningful order, usually repetitively, for example, “give food eat me eat food give….” (Still, some chimpanzees have been able to perform more impressive tasks, as E. Sue Savage-Rumbaugh showed in 1990.) Natural language is different. We are able to construct sentences that are virtually infinitely long, yet not redundant. For example, “I learned that Annie bought the gun from Bob who said that Carolina borrowed money from David who…” could go on forever as a sentence. As a practical matter, this never happens. This is an important point, as we will come to see later, but natural language has the property of recursion, which means that it can continue to spiral into its embedded phrases and dependent clauses forever. Don’t take recursion too seriously: the central point here is that humans can create infinite expressions from a finite set of discrete units of meaning (morphemes).
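
Here is a minimal Python sketch (my own toy grammar, not drawn from any corpus) of what recursion buys: a handful of words plus one self-embedding rule yields sentences of any length, none of them redundant.

```python
import random

# Toy recursive grammar. The vocabulary is finite, but the rule
# S -> Name "said that" S can embed clauses to any depth.
NAMES = ["Annie", "Bob", "Carolina", "David"]
CLAUSES = ["bought the gun", "borrowed money", "sold the car"]

def sentence(depth: int) -> str:
    """Build a sentence with `depth` levels of embedded clauses."""
    if depth == 0:
        return f"{random.choice(NAMES)} {random.choice(CLAUSES)}"
    # Recursive step: a full clause nested inside another clause.
    return f"{random.choice(NAMES)} said that {sentence(depth - 1)}"

print(sentence(3))
# e.g. "Bob said that Annie said that David said that Carolina borrowed money"
```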

There have been a few attempts to demonstrate that the size of natural language is infinite, based on the facts that the grammar itself can spiral with new phrases forever and that the language can add words forever. Either property, independently, would lead to such a conclusion. Based on the spectrum of translation outlined in my last post, I think we can look at natural language as the set of all human languages. Fundamentally, then, natural language is a flexible system, consisting of (many) sets of rules, sometimes but not always dynamic, for combining a functionally infinite set of words in order to convey information. The reason rules are sometimes dynamic is that, while many rules persist in a language, they may occasionally do so out of inertia and therefore be broken once everyone in the speech community understands the expression despite its “illegal” form. One example of this is the double negative in English. Although absolutely and utterly common in the English of Chaucer’s day, the double negative fell from grace some time afterward. Still in use ever since, it is frowned upon by academicians, Strunk & White, and many other purveyors of proper English. This is why linguists frown upon prescriptivism. They do not dislike standardization; there are benefits to that. The problem is when people mistake standardization for right and wrong. Something may be standard, or it may be different, but it is not wrong, and it is likely just as rule-governed as the standard form. Witness the work of Walt Wolfram.

A language can add words in several ways, as shown partly by the translation spectrum. We can combine words from the language to mean something new, borrow words from another language, or create new words out of thin air. This last one can be done simply by assembling sounds currently within the language’s standard phonemic inventory into a new word. Typically, languages group various distinctive phonetic sounds into phonemes. For example, the ‘p’ in piranha is very different from the ‘p’ in stop. The first is aspirated, which you can verify by placing your hand in front of your mouth while saying the word naturally; the second is unaspirated. In English, these sounds are grouped into the same phoneme /p/. In Hindi, they lead to minimal pairs, where switching one for the other in a word creates a different meaning; therefore, they belong to different phonemes. Switching them in English does not affect meaning. Let’s say a language has 30 phonemes. It would be a very long time before you ran out of combinations of these sounds for making new words. And when you do run out of them at a given word size, all you have to do is increase the word size. That gives you another virtual infinity of new word possibilities. And then you could add signs, as seen in the many sign languages of the world.
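
The arithmetic backs this up. A quick Python sketch (ignoring phonotactics, which would rule out many strings in any real language):

```python
# With 30 phonemes, there are 30**n possible strings of length n.
PHONEMES = 30

total = 0
for n in range(1, 8):
    forms = PHONEMES ** n
    total += forms
    print(f"length {n}: {forms:,} possible forms")

print(f"all forms up to length 7: {total:,}")
# Length 7 alone yields 21,870,000,000 strings -- far more than
# any lexicon could ever exhaust.
```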

You might be thinking, “Well, aren’t there also infinite speech sounds? That is, phonetic sounds?” Yes. Just as the color spectrum represents an infinite array of colors, there is an infinite array of sounds that our vocal cords can create — not to mention an infinite array of signs our hands can make. In the case of vocalizations, however, as with colors, sounds are grouped into “best example” phonemes. The range of each may vary from language to language, but it will stay within a certain maximum deviation from the best example and always include the best example.
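
Here is one way to picture that grouping, as a toy Python sketch: each phoneme has a “best example” position in a made-up one-dimensional acoustic space, and an incoming sound snaps to the nearest prototype as long as it stays within the maximum deviation. (The numbers and the single dimension are inventions for illustration; real phonetic space has many dimensions.)

```python
# Toy prototype model of phoneme grouping. Positions and the
# deviation threshold are invented for illustration.
PROTOTYPES = {"p": 1.0, "b": 3.0, "m": 5.0}  # "best example" locations
MAX_DEVIATION = 0.8  # beyond this, a sound is no clear member of a category

def classify(sound: float) -> str:
    """Snap a sound to the nearest prototype, if it is close enough."""
    phoneme, best = min(PROTOTYPES.items(), key=lambda kv: abs(kv[1] - sound))
    if abs(best - sound) > MAX_DEVIATION:
        return f"ambiguous (nearest: /{phoneme}/)"
    return f"/{phoneme}/"

print(classify(1.2))  # /p/ -- well within range of the best example
print(classify(2.1))  # ambiguous (nearest: /b/) -- too far from any prototype
```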

Remember that my definition of natural language here includes a grouping of all languages, which implies that all phonemes are represented. Now let us add the phonemes we have lost, which may have existed in dead languages or simply been lost to current languages. We now have the entire array of speech sounds. By this method, all distinctive features should be represented. All these sounds can be brought into natural language and combined in every way morphologically and syntactically possible, and we wind up with an array of expressions that is boundless and infinite.

This is the scope of natural language. But there is more.