Translation
Ultimately, when we concern ourselves with language, we are always and without exception really talking about translation. That is, everything we say about language is really a statement about a subfield of translation, which is the truer subject of study. Translation in the usual sense means converting meaning from one arbitrary set of symbols to another. But the symbols portion of that definition is not a necessary condition, just a usual one. Rather, translation can be broadened into the art of focusing meaning from a more ambiguous source into a less ambiguous one. But we will get to that.

Translation in the usual sense is something natural language processors are very concerned with. How, after all, do we get a computer program to recognize language? Developing software that breaks sound waves down and identifies the phonetics of the wave is the easy part. Encoding the complexities of a system with recursion and, worse, sophisticated senses of humor would stymie even the greatest of programmers, and indeed does so on a regular basis.

Thinking Machines
Turing believed that we would know we had succeeded at the task when a machine could fool us into thinking it was human through conversation. As a matter of odds, that means that when we speak to a machine that can fool us, we believe there is roughly a 50% chance it is a machine: even odds, in other words. A particularly entertaining RadioLab episode for me, Season 10 Episode 1, entitled “Talking to Machines,” deals with different types of machines that seem like they’re communicating with us, and with the obvious question, “Are they aware?” This is a somewhat different concern than Turing’s, because Turing posited that a machine that could fool us would be aware of itself, but that does not follow as a matter of logic.

In the episode, there are programmers who make profiles on sites like Match.com, and many, many others, designed to fool regular humans. These bots respond to messages and keywords, oftentimes fairly realistically. So realistically, in fact, that many people are regularly fooled. Now, if one expects this type of ruse, one might not fall for it so easily, especially once the tell-tale signs of the deception are revealed in the RadioLab episode. But for the trusting and/or unsuspecting, it is a different story. For the programmers, the experiment is easy to explain. What is more difficult is showing the technical methods they employ to achieve their results.

One method is to store words as matrices. Why matrices?

The Structure of Artificial Thought
Because matrices are very simple and flexible: they are easy to manipulate. This means that we can translate information into matrices and play with that information by performing operations on it, any kind of operation, in however many dimensions.

Let’s look at an example. Assume that I can create a database of all the current words in the English language, a snapshot. It would of course only reflect the language at a given time, seeing as how English changes so quickly. (We need not quibble with the different forms of English, sociolinguistics, and so on at this point.) I might accomplish this by storing each word as a matrix. For the sake of simplicity, let us say that verbs have a certain m×n matrix structure, and nouns and adjectives have different m×n structures.

The present tense form of the verb ‘to run’ is ‘run’, and that present tense form can be stored with some arbitrary values as a matrix:

[ 0 ]
[ 1 ]
[ 2 ]

There are 3 rows and 1 column in this matrix: a 3×1 matrix. Let us say that the matrix form for the past tense ‘ran’ is the following:

[ 0 ]
[ 1 ]
[ 3 ]

In this example, the only difference is the last value. Assuming that the number of potential values that could slot in there is infinite, these two words mean largely the same thing by the arbitrary values stored in their matrices; the only difference is the last value, which determines tense in this type of structure. This is just a very, very brutally simple example of what the most sophisticated natural language processing models actually look like; the matrices are usually much larger, and potentially infinite in size.
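
To make the toy concrete, here is a minimal sketch in Python with NumPy. The values are the arbitrary ones from the example above; a real system would learn them from data rather than assign them by hand.

import numpy as np

# Toy lexicon: each verb is stored as a 3x1 matrix (a column vector).
# The values are the arbitrary ones chosen above, not learned from data.
run = np.array([[0], [1], [2]])   # present tense 'run'
ran = np.array([[0], [1], [3]])   # past tense 'ran'

# The two forms differ only in the final cell, which in this toy
# scheme is the slot that encodes tense.
print((ran - run).ravel())        # [0 0 1] -> only the tense slot differs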

The Structure of Artificial Meaning
These matrices have many special properties. One is that we must be able to structure them so that we can perform regular types of operations on them, operations which would be analogs for syntactic interactions. Again, as a simple example, let us say a noun is a 3×3 matrix. The word ‘I’ could be:

[ 10 11 12 ]
[ 13 14 15 ]
[ 16 17 18 ]

In our matrix representation of language, we could adopt a rule whereby a sentence is grammatical only if the matrix product of the noun phrase (NP) and the verb phrase (VP) is a 3×1 matrix. The matrix product of ‘I’ and ‘ran’, that is, an NP and a VP, would form a 3×1 matrix:

[47]
[59]
[71]

We are not as concerned with the values of the product as with its form at this point. Since language is so complex, the form must become more complex as well, without losing its flexibility. The reality is that while some core portions of the matrices for each word type would have to hold definite values, so that we can understand what the words mean, have a frame of reference, and perform meaningful operations on them, many values may be variables; that is to say, they may be ambiguous.
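
A sketch of the shape check described above, again in Python with NumPy. The rule that an NP × VP product must be 3×1 to count as grammatical is this post's invented convention, not an actual NLP algorithm.

import numpy as np

I_noun = np.array([[10, 11, 12],
                   [13, 14, 15],
                   [16, 17, 18]])     # NP: a 3x3 matrix
ran_verb = np.array([[0], [1], [3]])  # VP: a 3x1 matrix

product = I_noun @ ran_verb           # matrix product of NP and VP
print(product.ravel())                # [47 59 71]

# Our toy grammaticality test: the product must be a 3x1 matrix.
print("grammatical:", product.shape == (3, 1))   # grammatical: True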

The Structure of Ambiguous Meaning
While words like ‘love’ and ‘justice’ may be highly ambiguous and contextual in meaning, some words, like ‘neutron’ or ‘hydrangea’, are fairly specific. But even with these words, there is one way their meaning changes: the meanings of their spoken and written forms are different. They necessarily must be, always.

Let us consider a 100×100 matrix that stores the meaning of ‘neutron.’ The core of the word might be stored in a 100×98 portion, and the 100×2 fragment at the end could be the contextual meaning that comes from the form in which the word is expressed. For the spoken ‘neutron,’ the fragment would hold values reflecting the emotion of the voice, the tone, the pacing, the accent, the education, all kinds of things that come out through the pronunciation of a word. For the written ‘neutron,’ the 100×2 fragment means the most at time zero, when it is initially written. If the word is read at that time, a reader still has a very good proxy for what the author intended, but is not privy to as much information as the listener of the spoken ‘neutron’ is. This means two things. (1) The values in the last 100×2 fragment will be different, not necessarily entirely or even mostly so, but necessarily so in part. (2) The meaning of the written form is more ambiguous, due to the uncertainty about what the author meant to communicate. There is always a tone, even for a written word, but it is far more subject to fancy and therefore more ambiguous. Variables of a sort will be needed in the written 100×2 fragment.
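
In code, the split between core and contextual meaning is just a column slice. The values here are random placeholders; the point is only the structure.

import numpy as np

rng = np.random.default_rng(0)
neutron = rng.random((100, 100))   # hypothetical 100x100 meaning matrix

core = neutron[:, :98]             # 100x98: the stable core meaning
context = neutron[:, 98:]          # 100x2: tone, pacing, accent, etc.

# For a written token, part of the contextual fragment is unresolved:
# mark those cells as variables (NaN) rather than fixed values.
written_context = context.copy()
written_context[:, 1] = np.nan     # one column left ambiguous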

In my book Cultural Entropy, I devote some time to information theory, for the concept of entropy is impossible to explain without it. Likewise, attempting an explanation of cultural information, particularly the language subset of it, is impossible without entropy. In reading various sources about information and language, I am struck by how excellent and simple the older texts are, and how confusing or negligent the newer texts can be. Language Files, a standard text for introductory linguistics courses, offers nothing on the subject, though it does discuss pragmatics.

But before the field was called pragmatics, and when linguistics had a little more perspective, the most common linguistics textbook was An Introduction to Descriptive Linguistics by H.A. Gleason (1955, 1961). This book also forms an excellent foundation for a linguistics novice being introduced to Field Linguistics, which I often analogize to amphibious warfare: the process of starting with zero firepower ashore and proceeding to dominance of the field. Field Linguistics as a practice is quite similar. A linguist arrives at a place s/he has never been, perhaps a village in remote Papua New Guinea, beginning with close to zero knowledge of the language and necessarily proceeding to learn everything, discerning a grammar, a phonetic inventory, and all manner of other information. It is, in other words, a supremely practical art. Just so, Gleason’s textbook.

For the purposes of my discussion here, Descriptive Linguistics rises to the occasion as well. We begin with definitions:

The amount of information increases as the number of alternatives increases. [...] Information is measured in units called… bits.  By definition, a code with two alternative signals, both equally likely, has a capacity of one bit per use. A code with four alternatives is defined as having a capacity of two bits per use…. [...] The amount of information in any signal is the logarithm to the base two of the reciprocal of the probability of that signal.

This about sums up the useful parts for any schema of quantifying meaning that we might wish to undertake 50 years after the text was written. Focus on the point about alternatives. Imagine a world with two machines communicating with each other, only ever sending a single 1 or 0 before awaiting a response. Each machine has only two choices, and they are both equally likely. The capacity is one bit. A machine might send its transmission in the following form: [0] or [1]. A code with four alternatives between the machines might look something like this: [0 0], [0 1], [1 0], or [1 1]. In fact, these are all four of the alternatives, and the capacity used is two bits.

Most human communication doesn’t look like this at all. True, we do often communicate in ways that necessitate or at least allow for either/or answers that might look like [0] or [1]. But most human utterances and writing look more like what you’re reading, expressing ideas, narratives, and concepts, not just yes/no or either/or responses. An example of something slightly more complicated would be the set of alternatives to the question: which U.S. President from 1980 to 2011 has been the best? You have six choices: Carter, Reagan, Bush 41, Clinton, Bush 43, and Obama. The response, therefore, could be encoded as simply as [0], [1], [2], [3], [4], or [5], depending only on which number referred to which President. Another step up in complexity would be the set of alternatives to the question: which color is the best? As a technical matter, given the number of frequencies visible to the human eye, the answer is theoretically unlimited. There is, however, a practical limit: language. Every language only has so many recognized color words at any given moment. Some have as few as two, it is believed, while others have somewhere between three and eleven, and a good many others have considerably more. English certainly falls into the last category, and every 64- or 128-pack of crayons you see in the store proves it. There are many alternatives to choose from here.
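
Gleason's definition is easy to compute. A minimal sketch: with n equally likely alternatives, each signal carries log2(n) bits.

from math import log2

# Information of a signal with probability p: log2(1/p) bits.
# With n equally likely alternatives, p = 1/n, so each answer
# carries log2(n) bits.
for n in (2, 4, 6):
    print(f"{n} alternatives -> {log2(n):.3f} bits")

# 2 alternatives -> 1.000 bits  (the [0]/[1] machines)
# 4 alternatives -> 2.000 bits  (the [0 0]...[1 1] code)
# 6 alternatives -> 2.585 bits  (the presidential question)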

Something that has been avoided by many linguists and information theorists until recently is quantifying the amount of information actually transmitted, beyond the rote numerical answer suggested by Gleason in his textbook. If someone responds “Carter” to the presidential question, much, much more information is transmitted to a listener than just the claim that Carter is the best President. Any listener will assign a probability to that outcome, which reflexively means probabilities have been assigned to all the other outcomes, but the answer will also say something about who the respondent is and what they believe. Most of this other information could be called “peripheral information,” as opposed to the “core information” transmitted by the response. Peripheral information is highly contextual.

Obtaining a kind of precision in expression has previously been the purview of mathematicians, logicians, statisticians, and those who use symbols to express the barest minimum of relationships amongst the purest of concepts. Ambiguous words are (or should be, as a professional matter) as foreign to them, and as luxurious, as the sweetest dulce de leche ice cream served at a zero-depth pool on a hidden Bali mountainside is to us.

But precision of expression is more important than most people think and precision varies enormously by form. When someone says, “The Gators won the football game,” the meaning is different than when someone writes it on a piece of paper. These two prior forms are still different from when it is typed and sent over email and these three prior forms are yet different from those words when painted on a canvas, or spray painted on a wall. The meanings are not so different that we cannot fathom the gaps, so I don’t mean to belabor the point. Rather, I merely want to point out that form matters. I’ll say more about this later.

My goal in this short series of posts is to lay out a method for articulating differences in meaning, comparing meanings, and distinguishing levels of ambiguity in meaning, and to explain why all these things are important. Finally, I will summarize what all of this means for the always growing, always diminishing set of cultural information we use as humans.

Introduction

Lady Gaga, known as Stefani Joanne Angelina Germanotta until quite recently, is a phenomenon. Although only 24, she already has six number one songs from two albums, The Fame and The Fame Monster. She has toured the United States and world as a conquering pop hero, whose ascent occurred at exactly the right time with exactly the right trajectory to propel her into superstardom. Gaga is not the only vocal artist to meet this kind of success. She is preceded by the likes of Madonna, Mariah Carey, Beyonce Knowles, and, to a lesser extent, Whitney Houston and Janet Jackson. But as of the writing of this post, Gaga has something that none of her predecessors had: The Method.

Markets in Music

In the past 100 years, markets have developed widely differentiated products to satisfy an ever more sophisticated melange of human tastes. Before the 20th century, musical styles may have changed relatively slowly and catered to small groups of elites and particular localities. Through the 20th and 21st centuries, the decreasing cost of transmitting information, along with rising standards of living, fostered a vastly increased consumption of music. People consume music in the sense that they listen to it and know it, and they prefer more music to less, just as with any other good, all things being equal. Discarding old music is not a condition of further consumption; only the unsatiated appetite for more is.

In recent years, several notable artists have been able to maintain their positions atop the charts by providing music that people demand. In some cases, their styles have changed so as to please shifting preferences in the market. The most notable of this group of artists are Madonna and Mariah Carey. Mariah, whose traditional path to pop supremacy leaves little in common with Gaga’s, need not be addressed in this post. But in many ways, Madonna is Gaga’s most similar predecessor, and her example helps inform the path Gaga will take. Madonna burst onto the scene in the mid-1980s with music that was as catchy as it was interminable: I, for one, cannot get “Borderline,” “La Isla Bonita,” or “Like a Virgin” out of my head upon hearing them, try as I might. Within a matter of years, as she attempted to evolve artistically, she fell out of favor with the public. Madonna’s concerted effort to change her image into that of a sexually liberated dominatrix in her album Erotica and book Sex did significant damage to her brand. As her songs fell off the charts, Madonna’s increasingly desperate bid to remain in the public eye almost completely destroyed her. But Madonna was nothing if not resilient, and she learned from her mistakes. She remade herself for Ray of Light and never strayed too far from the cultural mainstream again.

By 2008, pop music in the United States had grown moribund with the same acts, replenished only by American Idol contestants who were successful not because they rocked the boat, but because they excelled at traditional artistic convention. The market was saturated with typical romantic ballads, rap self-aggrandizement, and both gritty and soft country songs. The pop music segment, which consists mostly of young people whose tastes have not yet fully hardened, had consumed enough of the old. The time was ripe for something new.

The Method

Lady Gaga burst onto the scene. With the market so ready for something different, all it would take was a little “going gaga” to light a real fire. Her sartorial splendor ratcheted it up a notch. Was it just a temporary act? Nope: her music videos doubled down on the schtick, and she rarely broke character. In fact, she told one source that she’s “Gaga 24/7.” She told another never to call her by her real name again: it’s Gaga from now on. From the impractical, yet somehow aesthetically interesting hairdos to the occasionally unflattering but always interestingly shaped dresses, Gaga’s method was simple and pure. She would push every superficial boundary right up to the breaking point while offering up pleasing, aggressive, strongly sexually suggestive music.

The most important element of The Method is not going past the breaking point. How would she know where it is? She doesn’t have a team of market analysts and economists looking for this mythical breaking point. She knows because we’ve already seen it: Madonna’s Erotica phase.

Madonna made the mistake of getting too personal. The market readily consumed her music, and even her behavior: she was young, after all. But it would not go for boundary-pushing content that seemed authentically representative of Madonna herself. Too personal, the material was perceived as revealing the genuine sexual deviant she always was. There’s no faster way to stigmatize oneself with the broader market to which, in reality, she wished to appeal. Indeed, she became a hero to various small groups of people only by unwittingly sacrificing broad appeal. She did not intend this. We know this because of how fast she dropped the routine. I suppose it is possible that Madonna converted to that phase out of commercial calculation, but whether she did or not is less relevant than how it was perceived by people.

Gaga has not made any mistake like this. When she became a national figure in the fall of 2008, she studiously avoided campaigning for her choice for President on national radio, saying that while her preference was well known, she would not say it to Ryan Seacrest on air, presumably because it would peel away a layer from the extraordinary artifice she had devised. Nothing personal of her previous identity, Stefani Germanotta, remains. What was Stefani Germanotta like? Check out her 2005 appearance on MTV’s Boiling Points.

Not too different from the rest of us, but a far cry from the Lady Gaga we know and… well… appreciate today. Our Lady Gaga has platinum blonde, dirty blonde, or bizarro-color hair depending on the theme of the evening. A day dress? Try trash-bag couture, the devil version of 1990s chic, or geometric glamo-sportswear on for size. Virgin Media did us a favor by putting her “worst outfits” on display here. My favorite is her red lace outfit from the MTV Video Music Awards, shown earlier, but focused on below.


There is something so revolting about this outfit. And yet, it is partly because of this, partly because of its newness, that I cannot turn away. The most compelling part is that it gets me using my imagination. Why did Gaga choose this? Can she see from under there? Whatever we say about the garment, it’s not uninteresting. Not only do we talk about it, we want to see more. It’s instructive to compare this to Madonna and, in more recent times, Adam Lambert. Madonna is sublime at what she does, and Adam Lambert is actually pretty good himself. But when Madonna gave us Erotica, it was hot, when it should have been cool. What I mean by that is that it seemed to represent her real personality. Adam Lambert’s 2009 appearance on Good Morning America was cancelled after a male performer simulated giving the singer fellatio at the American Music Awards. It’s not just the homosexuality that bothered people. It’s that he warmed up to us too soon. We don’t know him. Maybe now we don’t want to know him. America’s okay with certain types of sexuality as long as it’s genuine, distanced performance.

This is what Gaga has mastered in her method. None of her outfits represents who she really is. Not her name, not her hair, not her food preferences, not what kinds of animals she really likes, not her hopes or dreams: nothing. All barred from us. Here are some of her lyrics from “Bad Romance”:

I want your ugly
I want your disease
I want your everything
As long as it’s free
I want your love

Since we cannot possibly take these lyrics entirely literally, we are forced to take them figuratively, metaphorically. Again: thought-provoking, but this is not authentic personal expression. She may say otherwise, that this is who she really is, but it isn’t true. It’s a well-crafted commercial persona for our consumption. She’s smart enough to stay in character, too. Whereas Madonna broached controversial Catholic imagery in her “Like a Prayer” video, Gaga is light-years away from taking on religion or war. Madonna discovered The Method too, but at a later stage in her career. Gaga knew it going in. Gaga is Stefani Germanotta’s avatar, in every sense of the word. And there’s nothing threatening to people about it because it’s purely play. She’s in on the joke.

Evidence for The Method

Recently, Lady Gaga and Beyonce unveiled their long-awaited 9-minute music video for the song “Telephone,” which is currently dominating your airwaves. The video is a terrific piece of circumstantial evidence that The Method is not something intrinsic to Lady Gaga, but something that can be replicated. Here is the video:

The first 5 minutes of the video aren’t particularly interesting and seem intent merely on connecting it to its prequel, “Paparazzi,” in which Gaga gets arrested after poisoning her boyfriend. But the last 4.5 minutes are extraordinary. Beyonce begins wearing Gaga-esque outfits and behaving like a “monster,” as Gaga might say. Unfortunately for Gaga, Beyonce is the real star of the video and, as with their joint effort in “Videophone” (also a prequel, perhaps, judging by the similar 1940s-ish wardrobe and hair in particular scenes), she steals the show. But what I wish to draw your attention to here is the complete Gagafication of Beyonce. Beyonce adopts a similar personality artifice, actually out-Gagaing Gaga herself. I love it.

There is another subtext here. Beyonce’s last album was called “I Am… Sasha Fierce.” This album is her best by far (so far), featuring “Halo,” “Sweet Dreams,” “If I Were a Boy,” and “Single Ladies (Put a Ring On It).” The album title referred to Beyonce’s long-lived (though now deceased) alter ego, Sasha Fierce. According to Beyonce:

Sasha Fierce is the singer’s sensual, aggressive alter ego, but don’t expect her to surface anywhere but the stage. “Sasha Fierce was born when I did ‘Crazy in Love.’ People, when they meet me, expect that all the time, but that person is strictly for the stage.”

This implies that we don’t really know the real Beyonce very well either. As noted in my “Siren Paradox” post, we haven’t seen her real hair, or much of her real preferences and beliefs. She performed at the 2009 inauguration of Barack Obama, but she also performed with Destiny’s Child at the 2001 inauguration of George W. Bush. Yes, one more likely represents her genuine vote preference, but things may not always be as they seem. Beyonce is notoriously guarded about her private life. The artistic synthesis between Gaga and Beyonce, both using The Method through completely detached, methodical commercial targeting, is a beautiful thing to behold.

Predictions

But you know what they say: all good things must come to an end. I suspect that at some point Gaga’s exterior will begin to crack. She might have a marriage, a child, or a late-night hotel altercation. She might get drunk, express a serious political theory, or get into a public personal bout with a rival. I wouldn’t bet on it any time soon, but she, like everyone else, has multiple desires in life that may shift in priority depending on her income.

To make an analogy, in economics there is the backward-bending labor supply curve. Above the reservation wage, and for a good stretch beyond it, people tend to work more as they earn higher wages. For this phase of the curve, we say that the substitution effect is greater than the income effect; both effects operate within us continuously but shift in priority as our wealth changes. At some point, they switch in importance, with the income effect overwhelming the substitution effect in our minds. That is, we no longer wish to work more hours at these higher wages; we wish to use more of our time for leisure. We just bought that 200-foot yacht and by gosh we’re going to use it, even if it costs $40,000 to fill the tank!

Just so, at a certain income level, the odds are that Gaga will care less about appealing to the broad market and will risk revealing herself more. At this point, she will probably never make quite as much money as she used to, but she will be more personally satisfied, and the cost of her constant vigilance in obeying The Method will be relaxed. Let us hope that when the time comes, if it comes, she can still stay in on the joke and, even if pursuing some random social justice issue, not take herself too seriously, as so many others have.

Xenolinguistics, as broadly understood, though mostly as a matter of farce, is the study of non-human languages. In May 2009, the blockbuster Star Trek premiered around the world. In one of its funnier exchanges, James T. Kirk and Uhura bring xenolinguistics to our awareness:

KIRK: So you’re a cadet. You’re studying. What’s your focus?
UHURA: Xenolinguistics. You have no idea what that means.
KIRK: Study of alien languages. Morphology, phonology, syntax. It means you’ve got a talented tongue.

Yes, typically, xenolinguistics is the study of “alien” languages, but one must permit the possibility of other languages on planet Earth, whether from ocean-dwelling mammals as seen in Star Trek IV or Elvish from Lord of the Rings, so I choose to define it as the study of “non-human” languages. Perhaps unsurprisingly, Klingon arguably does not qualify, as its creator, Marc Okrand, developed the language within human language universals, though with admittedly rare syntactic and phonetic combinations. (Of course, one must concede that languages could have developed independently on other planets, as they apparently did in Star Trek, with exactly the same linguistic universals, tendencies, and constraints as ours.) The combinations are rare because they impede cognitive processing and pronunciation, respectively.

How so?

First, regarding cognitive processing, Klingon uses an “object first” sentence structure, whereby the sentence “I hit Charlie” becomes inverted in Klingon as, roughly, “Charlie hit I,” though they mean the same thing. Very few languages in the world have this type of sentence structure, and the few that do are locked away in the Amazon or similarly remote, or possibly even undiscovered, environments. The reason object-first languages, as opposed to subject-first languages, are so rare is, in summary, that we tend to think linearly. Starting with an effect, not a cause, increases uncertainty and ambiguity in the brain as it processes the sentence. Therefore, it seems likely that object-first sentences have either evaporated with time, other structures having a distinct competitive advantage, or never arose significantly in the first place due to their relative handicap. We would predict that such languages could only exist, all things being equal (this is a key phrase), in an environment of relative isolation, without trade and significant cultural exchange.

Second, regarding pronunciation, Klingon possesses a particularly odd phonetic inventory, yet its sounds, while not generally consistent with what occurs in human languages, can all be found in the inventory of human sounds. In other words, there are no sounds in Klingon that a human cannot make. The reason its sounds, alone and in combination, are relatively rare in English is that they cost a lot of energy to make. The presence of harsh fricatives and gutturals is accentuated by lax (meek, in Klingon terms) vowels.

This discussion of Klingon is all to say that we really have no idea what an alien language would be like, as we are bound by certain customs and universals as human speakers. Suzette Haden Elgin recognized this problem when she wrote the science fiction novel Native Tongue. In the novel, humans interact with aliens, but presumably because the plasticity of an adult brain is so low, only babies have the ability to learn alien languages; adult brains get overloaded by them. Elgin’s solution to the problem is to have humans place babies among aliens, so that they learn the alien languages and serve as bridges. Yet there are many very important reasons to believe that even babies would have difficulty learning alien languages. Our specific neural structures, as neuroscientists, linguists, and psychologists make clearer every day, strongly shape our relationship with language. An easy way to think about this is the difference between how chimps and humans deal with language. Yes, chimps are capable of rudimentary language, expressing words with consistent referents, but they are not capable of the complex grammars we are.

The same might be true of aliens. Whether humans or the aliens have the comparatively finite grammar is beside the point: the cost of information transmission seems likely to be relatively high. Whether the transmission occurs through telepathy or through the spoken or written word, obviating the impact of phonetics impossible for the human tongue, grammar and meaning would be the most difficult barriers to understanding. But this is not to say they would be insurmountable. Logic is a fine tool to use, so long as specificity is a quality aliens value.

This is why meaning could be a problem. The physicist-cum-Nebula and Hugo Award-winning author David Brin turned the tables in his incredible Startide Rising saga. In this universe, humans, derogatorily called “wolflings” by most aliens, speak with far more ambiguity than others. It is the humans that do not value specificity, littering the language with metaphors and words that have all kinds of double or triple meanings. Someone familiar with any Chinese language would scoff at merely three possible meanings for an isolated word, as it could have many more than that. Most alien languages, such as Galactic Six or Galactic Five, do not allow for ambiguous meanings, as each word corresponds to something very specific and could not mean anything else. Some languages on Earth accomplish this feat with elaborate case systems in which certain morphemes are attached to a word, whether grammatically or morphologically, denoting its relationship to a subject, object, or other grammatical role.

The practical import of xenolinguistics is not yet that we need to communicate with alien races, of course, though it would be nice if we could find a way to do so. We would be better able to negotiate on our own behalf in the event of calamity, or simply to establish beneficial trading relations. More immediately, and in light of the contributions of science fiction thinkers, consideration of xenolinguistics might help us assess the differences in meaning that need to be ironed out by natural language processors, for this is the difficulty with speech recognition programs and all manner of artificial intelligence. How will we store information in such a way that it conveys all denotations and connotations, which may change given the context, and how will we store the context information in the word? In the book, I have a section on how natural language processors do it today and how it might improve. Unfortunately, we still have precious little real xenolinguistics to build upon for these tasks, and therefore the absolute practical import is sadly very low for aspiring xenolinguists. My advice? Learn computer science.

So, halfway through the last post I forgot my original reason for writing it, as you can tell from the somewhat aimless jabbering. (Could someone at least tell me when I have wandered off the reservation? LOL.) But now I remember. To frame this discussion of paradox within the context of the first few parts, and this blog’s focus on economics, consider the following: we do not need to expand the set of words in natural language to cover every possible bit of information, though we know that we could attempt it forever without success. The reason the endeavor is useless, however, is that the law of diminishing returns functions as well with words as it does with everything else. In microeconomics, the law of diminishing returns says (from Wikipedia):

…the marginal production of a factor of production, in contrast to the increase that would otherwise be normally expected, actually starts to progressively decrease the more of the factor is added. According to this relationship, in a production system with fixed and variable inputs (say, factory size and labor), beyond some point, each additional unit of the variable input (i.e., man-hours) yields smaller and smaller increases in output, also reducing the mean productivity of each worker. Conversely, producing one more unit of output costs more and more (due to the greater amount of variable inputs being used, to little effect).

Humans demand words, and we use them, like our own capital, to produce and transmit information, which gives great utility to each person capable of it. But we really only need so many descriptive words. At the point where the benefit of adding another word is less than the cost (the costs of memorizing it and transmitting it to enough people to be useful), and this point surely exists (take, for example, the extraordinarily minimal benefit of adding a word that means “soggy paper that could have been wet by any of many sources / ambiguously wet paper” versus the comparatively major cost), the word or set of words will not be added. Language is dynamic, meaning that new demand arises, and therefore so do new words, so this state need not last forever. (In practice, languages are always changing, and a prescriptivist book is archaic two seconds after it is published.)
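
A toy version of that cost-benefit argument, with made-up curves: suppose total benefit grows logarithmically, so the marginal benefit of the nth word is roughly 1/n, while each new word carries a flat cost of memorization and transmission. The lexicon stops growing where marginal benefit falls below marginal cost.

# Toy model of diminishing returns in vocabulary growth.
# All numbers are invented for illustration.
COST_PER_WORD = 0.001   # flat cost of memorizing/spreading one new word

def marginal_benefit(n):
    # If total benefit ~ log(n), the nth word adds ~ 1/n.
    return 1.0 / n

n = 1
while marginal_benefit(n) > COST_PER_WORD:
    n += 1
print("the lexicon stops growing around word", n)   # ~1000 in this toy model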

With most linguistic needs met, the human spirit still needs more. Humans get more utility from moving outside the scope of natural language, giving heed to faith and developing paradox as a method for coping with all the dark corners, nooks, and gaps of natural language. It is not difficult to create a new color word for a particular undifferentiated shade of green, but the need may not be strong enough to do so. The need to describe concepts and ideas that do not fit into one tidy shape requires entirely new words. Languages all over the world have long struggled with these ideas, certainly with paradox. When the cost of storing information went down, our vocabulary commensurately increased in all kinds of fields where it was previously more costly than beneficial (think: color vocabulary) to store such information in the human lexicon. Freed from the onerous costs of information storage, the vocabulary for faith and paradox, that which becomes the bright and ineffable in the human experience, zealously blooms in the art of the race.

Herbert Muller realized this in a way long before I did. In his incredible book The Uses of the Past, whose aim is to explore our relationship with history, he writes of the majestic Hagia Sophia:

Only, my reflections failed to produce a neat theory of history, or any simple, wholesome moral. Hagia Sophia, or the ‘Holy Wisdom,’ gave me instead a fuller sense of the complexities, ambiguities, and paradoxes of human history. Nevertheless, I propose to dwell on these messy meanings. They may be, after all, the most wholesome meanings for us today; or so I finally concluded.

Any interesting and useful theory of economics, linguistics, or art is doomed to immediate obsolescence without considering messy meanings.

At first blush, faith seems like a quality of knowledge that could fall under the “personal knowledge” category of data source. Faith is often a deeply personal thing, though it is just as likely not to be so in evidence. Some skeptics think knowledge of God comes from the iron fist of parents and Republicans, so that would actually fit under the “knowledge-through-language” category of data sourcing. And many beliefs that derive from faith are considered myths in some speech communities, so those might fall under the “non-personal knowledge” category. Still, faith, whether religious or otherwise, may also sometimes be the domain of something completely different. It may not be a type of knowledge at all, but rather a conclusion of the will alone, with almost, if not entirely, zero basis from other sources to back it up. (In this sense, Christianity for many may not be the purest faith, since it involves reliance on the Bible and other sources generally.)

Since it seems to me that every bit of information transmitted in natural language has an implied data source element tied to it, I think natural language may have a difficult time touching the areas of faith. We may not all be entirely sure of our faith, in people, ideas, outcomes. Precisely because there is no backing for the objects of it, it is possible for the entire realm of imagination to come to the fore, leading to ever more components inside natural language and outside it as well, grasping equally unlikely and impossible ideas. (It reminds me of E Space from David Brin’s Heaven’s Reach.) Perhaps any world imagined requires a little faith (see: “Far Beyond the Stars” below).

Faith, and its linked universes, are but one manifestation of “the set of all things that are possible and impossible.” The set of all things that are possible and impossible is a large, infinite set, larger than the set of things that are merely possible. What has been, is, or will be imagined, which overlaps with the set of things possible and not, is also smaller. Since the capacity of natural language depends very much on imagination, as all texts, narratives, even biographies, are fictions (as Milosz said), language is limiting, though with its rules it gives us the capacity to explore. This leads us to faith’s brother in the set: ethereality. This, too, could lead us beyond natural language. By this I mean anything with one foot in our own tangible world and one foot in another, be it Heaven, Mt. Olympus, or a parallel universe running slightly slower than our own. (The distinction here between ethereality and faith is mostly false, used for illustrative purposes.) As Anne Carson showed, where the Christians have holy, the ancients have MOLY. We have the sounds and can express the word, but we have no idea what the expression really means, nor its etymology. The translator encounters a brilliant, not terrible, silence. It implies entire domains of knowledge outside our grasp: words, concepts, and rules for constructing them that are beyond natural language. Their utterance in our world is but the tip of the iceberg of their meaning. Translation is stopped, worthless.

One particular set of expressions, arising from faith and ethereality, is paradox. Paradoxes in conventional discourse could mean almost anything. According to Wikipedia:

A paradox is a statement or group of statements that leads to a contradiction or a situation which defies intuition. The term is also used for an apparent contradiction that actually expresses a non-dual truth (cf. kōan, Catuskoti). Typically, either the statements in question do not really imply the contradiction, the puzzling result is not really a contradiction, or the premises themselves are not all really true or cannot all be true together. The word paradox is often used interchangeably with contradiction. Often, mistakenly, it is used to describe situations that are ironic.

Paradox is probably most often used to describe situations defying intuition. Some paradoxes in logic, like Curry’s paradox, persist as a matter of logic itself, though my hunch is that it could probably be solved by a heavy dose of linguistics. If you enjoy those games, by the way, knock yourself out. The ethereal cases we discussed above do not necessarily entail paradox of any sort, even the ironic. Rather, much paradox depends on our perceptions and beliefs. For example, is it possible for someone to be both good and evil? This question relates to deep questions of human nature that vex even those who do not think about them, and it leads to some profound art. Also: is it possible to be in the past and the future (and the present) at the same time?

Both questions were considered by Shakespeare, and one or both were considered by other greats, including Klimt, Milton, Spenser, and Dali. Klimt’s work pits the static, timeless, glittering gold medium against passionate, timely, tangible action. His Byzantine and Egyptian influences command awe, not respect, because the meanings are meant to be ambiguous yet beautiful. Milton rejoices in the freedom to choose that humanity possesses, showing that this freedom can lead to the most sublime of existences or the most dastardly, the glorious or the tragic. It is for us to choose, for we possess the potential for both. Spenser grasped at similar themes, as Kermode described:

The discords of our experience– delight in change, fear in change; the death of the individual and the survival of the species, the pains and pleasures of love, the knowledge of light and dark, the extinction and the perpetuity of empires– these were Spenser’s subject; and they could not be treated without this third thing, a kind of time between time and eternity.

Not just discords, but paradoxes, perhaps. Dali brought old symbols into modern art, meticulously plotting old stories for the modern era, but his “The Persistence of Memory” summons our consideration of our relationship with time. Most believe we live a linear existence, moving from one moment to the next. Dali suggested this isn’t necessarily the case. Although Dali could show you this idea in an instant with paint, I have never seen a better explication in any other visual medium than this one from, sigh, yes, Deep Space Nine:

Of course, Shakespeare may have endured as the paradox specialist nonpareil. It is probably no coincidence that his works stand above almost all others in their capacity to possess us. That’s because, unlike Twilight or Star Wars, they ask questions to which there are no clear answers. We can consider them anew each day. Kermode, a critic where I am not, had much to say of the Bard:

Now Macbeth is above all others a play of prophecy; it not only enacts prophecies, it is obsessed by them. It is concerned with the desire to feel the future in the instant, to be transported beyond the ignorant present. It is about failures to attend to the part of equivoque which lacks immediate interest (as if one should attend to hurly and not to burly). It is concerned, too, with equivocations inherent in language. Hebrew could manage with one word for ‘I am’ and ‘I shall be’; Macbeth is a man of a different temporal order. The world feeds his fictions of the future. When he asks the sisters ‘what are you?’ their answer is to tell him what he will be. [...] …and the similarities of language and feeling remind us that Macbeth had also to examine the relation between what may be willed and what is predicted. The equivocating witches conflate past, present, and future; Glamis, Cawdor, Scotland. They are themselves, like the future, fantasies capable of objective shape. Fair and foul, they say; lost and won; lesser and greater, less happy and much happier. [...] The act is not an end. Macbeth three times wishes it were: if the doing were an end, he says; if surcease cancelled success, if ‘be’ were ‘end.’ But only the angels make their choices in non-successive time, and ‘be’ and ‘end’ are only one in God. The choice is between time and eternity. There is, in life, no such third order as that Macbeth wishes for.

That’s a mouthful, but you get a sense of the conceptual foldings the reader must grapple with. Paradoxes may turn cause and effect on their head, or involve contradictions. Whatever the case, they usually involve the existence of something that should not exist, given the truth values of the other parts of the situation or statement. When one thing can suddenly mean another thing that was thought to be mutually exclusive, all kinds of possibilities unfold. This, in turn, expands the scope of natural language and the landscapes of our human adventures. Paradoxes are a means by which we can surpass our limits, and thereby an incentive to grow.

There are words that exist beyond the domain of natural language. Remember that language’s ultimate utility lies in its ability to transmit information. Yet I think we would all agree that there are types of information impossible to convey. In the movie Contact, based on the book by Carl Sagan, Palmer Joss demonstrates this to Dr. Arroway by asking, “Did you love your father?” Arroway responds affirmatively. Given the previous narrative in the movie, there is little doubt, and Arroway seems flummoxed at having to answer such an obvious question. Joss responds, “Prove it.” (2:05 – 2:17 below)

An action may not prove it. Any person could feed an ill father. Any person could watch television with a father. None of these things alone or together suffice. Yet, from the perspective of Dr. Arroway, she knows it beyond a doubt. The answer lies encoded within her consciousness. So do the language centers necessary to translate the neuronal patterns of the persona into information. This could be a gap in natural language. And maybe there just haven’t been words invented for this type of evidentiary matter.

But I think that we can expand the example further to prove an important point. How do we prove that we can prove it? How do we prove that we can prove that we proved it? How can we check that? And then how can we be sure it’s reliable? The problem here is not an infinite regress. Let us assume instead that we can prove all these things and more. A mathematician named Kurt Godel suggested with his Incompleteness Theorems that we can only come so close to perfect information without ever having it. Each attempt to obtain it pushes the goal one step further away.

In what has been compared in importance to Heisenberg’s Uncertainty Principle and Einstein’s General Relativity, Godel’s Incompleteness Theorem, encompassing both the First and Second Theorems, says two things (from Wikipedia):

  1. Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.
  2. For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.

The first theorem listed says that in a consistent system of arithmetic there will be true statements that are simply not provable. It derives somewhat from the so-called Liar’s Paradox (“This statement is false.”), whose analogue here is “This statement is not provable within this system.” The difference is significant, so let us focus on the latter version. Interestingly, as Rebecca Goldstein explains in Incompleteness: The Proof and Paradox of Kurt Godel, efforts to expand the system and prove those unprovable statements will prove futile:

…the proof demonstrates that should we try to remedy the incompleteness by explicitly adding [the paradox] on as an axiom, thus creating a new, expanded formal system, then a counterpart to G can be constructed within that expanded system that is true but unprovable in the expanded system. The conclusion: There are provably unprovable, but nevertheless true, propositions in any formal system that contains elementary arithmetic, assuming that system to be consistent. A system rich enough to contain arithmetic cannot be both consistent and complete.
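
In standard notation (not in Goldstein’s text), the sentence G is constructed, via the diagonal lemma, to assert its own unprovability in the theory T:

$G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)$

If T is consistent, T proves neither G nor its negation, yet G is true; and adding G as a new axiom simply yields a new theory with its own such sentence, exactly as Goldstein describes.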

In truth, the expressions of arithmetic easily expand into a subset of natural language: the way in which we structure information, oftentimes even in our minds. Since arithmetic is a subset of natural language, and arithmetic can always keep going and going without ever being complete, natural language is doomed to the same fate; and there are other subsets that redouble arithmetic’s efforts. An attempt to make natural language (a system very different from arithmetic) complete involves going to the very end of it, putting in all the sounds and morphemes possible, as we have already discussed. In a mathematical sense, we could try to have a description and expression for everything in natural language. Every nook and cranny of meaning possible, likely including every permutation and flavor of combined meanings (n-dimensional superfactorials like !sweet and !love come to mind), would have to be revealed. Once all information on tangible things in the multiverse is discovered, there remains perceptive, intangible information locked in the conscious mind. Then there’s the unconscious mind. Then there’s… and on and on and on. This is all to say that it would be very nice to have natural language incorporate expressions for everything, but a) it’s impossible and b) it’s not practical. The amount of energy it would take would be far better used doing something else, like making pizza, curing cancer, or inventing warp drive.

What’s interesting about all this is that despite this limitation we have already transcended the limits of natural language. This is best illustrated by going back to the set of perceptive, intangible information that must be described. If all tangible information had a certainty of 99.9999%, then we could add a “data source morpheme” to it, such as /-wa/, to indicate this. Or we could leave it unmarked and simply add morphemes for anything but 99.9999% certain information. Intangible information gets graded differently by data morphemes. Jaqaru, a dying language of 3,000 speakers found primarily in Lima, Peru, features such data morphemes; a toy sketch follows the categories below. Dr. M.J. Hardman of the University of Florida has studied the language in detail. Her findings on data morphemes include the following (from Wikipedia):

Data-source marking is reflected in every sentence of the language. The three major grammatical categories of data source are:

1. personal knowledge (PK)–typically referring to sight
2. knowledge-through-language (KTW)–referring to all that is learned by hearing others speak and by reading
3. non-personal-knowledge (NPK)–used for all myths, histories from longer ago than any living memory, stories, and non-involvement of the speaker in the current situation
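
Here is the toy sketch of such marking in Python. The /-wa/ suffix is this post’s own hypothetical example, and the other suffixes are invented for illustration; none of this is actual Jaqaru morphology.

# Hypothetical data-source suffixes (NOT actual Jaqaru morphemes).
SUFFIX = {
    "PK": "-wa",    # personal knowledge: seen firsthand
    "KTW": "-mna",  # knowledge through language: heard or read
    "NPK": "-tya",  # non-personal knowledge: myth, distant history
}

def mark(clause, source):
    """Append the data-source morpheme for the given category."""
    return clause + SUFFIX[source]

print(mark("it.rained", "PK"))    # it.rained-wa  : "it rained (I saw it)"
print(mark("it.rained", "KTW"))   # it.rained-mna : "it rained (so I heard)"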

So where are we transcending? In the areas of natural language for which even data source marking would be difficult, if not impossible, without another data source morpheme referring to 0% certainty. I think of three areas primarily: faith, ethereality, and paradox.

At its roots, language is a means by which humans transmit information to each other. Typically, its contours are determined by speech communities, and linguists regard the spoken form as language’s most important form: its richest, most varied, and most dynamic. While it is important to consider the differences between the spoken and written forms, it helps to remember that sign languages are also completely robust, rule-governed, grammatical languages; American Sign Language in fact has a different grammar from English. So the central point to remember about language is that the main demand for it comes from a virtually universal need and desire to communicate.

In the past few decades, linguists have been forced to dig a little deeper into language in order to investigate just what differentiates human language, or natural language, from other forms of information transmission, both by artificial human means and by other organic means. Some bee species possess elaborate instinctive dances that allow them to communicate distance and direction. Parrots have been known to go beyond mere mimicry in response to some linguistic stimuli. Computers can perform extraordinarily difficult computations, and hundreds of engineers work day by day to make them think more like us. Chimpanzees and other primates can learn and transmit arbitrary signals for tangible objects as well as actions. At the highest end of the spectrum of non-human language we have dolphins of the genus Tursiops. I trust you already know that they are the dirtiest, filthiest joke tellers anywhere in the universe, but did you know that they are able to grasp simulations much faster than chimps, implying a much higher level of awareness?

Setting aside dolphins for a moment (dolphin linguistics are complex and far beyond the scope of this post), chimpanzees seem to perform some linguistic tasks relatively well. They can learn dozens of words. But we have learned that they possess only the most rudimentary of grammars. They place words next to each other in no meaningful order, usually repetitively: for example, “give food eat me eat food give….” (Still, some chimpanzees have been able to perform more impressive tasks, as Emily Sue Savage-Rumbaugh showed in 1990.) Natural language is different. We are able to construct sentences that are virtually infinitely long, but that are not redundant. For example, “I learned that Annie bought the gun from Bob who said that Carolina borrowed money from David who…” could go on forever as a sentence. As a practical matter, this never happens. This is an important point, as we will come to see later, but natural language has the property of recursion, which means that it can continue to spiral into its embedded phrases and dependent clauses forever. Don’t take recursion too seriously: the central point here is that humans can create infinite expressions from a finite set of discrete units of meaning (morphemes).
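
Recursion is easy to see in a sketch. The tiny grammar below (invented names and verbs, echoing the example above) embeds a whole sentence inside another sentence, so expressions of any length fall out of a finite rule set.

import random

NAMES = ["Annie", "Bob", "Carolina", "David"]
VERBS = ["said that", "learned that", "heard that"]

def sentence(depth):
    """Build a sentence; each level embeds a full sentence inside itself."""
    subject = random.choice(NAMES)
    if depth == 0:
        return f"{subject} bought the gun"
    return f"{subject} {random.choice(VERBS)} {sentence(depth - 1)}"

print(sentence(3))
# e.g. "Bob heard that Annie said that David learned that Carolina bought the gun"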

There have been a few attempts to demonstrate that the size of natural language is infinite, based on the fact that the grammar itself can spiral with new phrases forever and that the language can add words forever. Either property, independently, would lead to such a conclusion. Based on the spectrum of translation outlined in my last post, I think we can look at natural language as the set of all human languages. Fundamentally, then, natural language is a flexible system, consisting of (many) sets of sometimes, but not always, dynamic rules for combining a functionally infinite set of words in order to convey information. The reason rules are sometimes dynamic is that, while many rules persist in language, they may occasionally do so out of inertia, and therefore be broken when everyone in the speech community understands the expression despite its “illegal” form. One example of this is the double negative in English. Although absolutely and utterly common in the English of Chaucer’s day, the double negative fell from grace some time afterward. Still in use ever since, it is frowned upon by academicians, Strunk & White, and many other purveyors of proper English. This is why linguists frown upon prescriptivism. They do not dislike standardization; there are benefits to that. The problem is when people mistake standardization for right and wrong. Something may be standard, or it may be different, but it is not wrong, and it is likely just as rule-governed as the standard form. Witness the work of Walt Wolfram.

A language can add words in several ways, as shown partly by the translation spectrum. We can combine words from the language to mean something new, borrow words from another language, or create new words out of thin air. This last could be done simply by assembling sounds currently within the language’s standard phonemic inventory into a new word. Typically, languages group various distinctive phonetic sounds into phonemes. For example, the ‘p’ in piranha is very different from the ‘p’ in stop. The first is said to be aspirated, which can be proven by placing your hand in front of your mouth when saying the word naturally. The second is unaspirated. In English, these sounds are grouped into the same phoneme /p/. In Hindi, they lead to minimal pairs, where switching one for the other in a word creates different meanings; therefore, they are in different phonemes. Switching them in English does not affect meaning. Let’s say a language has 30 phonemes. It would be a very long time before you ran out of combinations of these sounds for making new words. And when you do run out of them for given word sizes, all you have to do is increase the word size. That gives you another virtual infinity of new word possibilities. And then you could add signs, as seen in the many sign languages of the world.
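
The arithmetic here is simple. Ignoring phonotactic restrictions (which real languages do impose), a 30-phoneme inventory yields 30^n candidate words of length n:

# Possible words of length n from a 30-phoneme inventory,
# ignoring phonotactic restrictions.
for n in range(1, 6):
    print(f"length {n}: {30**n:,} possible words")

# length 1: 30
# length 2: 900
# length 3: 27,000
# length 4: 810,000
# length 5: 24,300,000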

You could be thinking, “Well, aren’t there also infinite speech sounds? That is, phonetic sounds?” Yes. Just like the color spectrum, which represents an infinite array of colors, there is an infinite array of sounds that our vocal cords can create, not to mention an infinite array of signs our hands can make. In the case of vocalizations, however, as with colors, sounds are grouped around “best example” phonemes. The range of a grouping may vary from language to language, but it will stay within a certain maximum deviation from the best example and always include the best example.
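One crude way to picture this grouping: treat each phoneme as a prototype along some acoustic dimension and assign any incoming sound to the nearest prototype, rejecting sounds that stray too far. The numbers below (voice onset times, the tolerance) are invented purely for illustration; real phoneme categories live in many dimensions at once:

# Invented prototypes along one acoustic dimension (voice onset
# time, in ms), standing in for each phoneme's "best example."
PROTOTYPES = {"b": 0.0, "p": 60.0}
MAX_DEVIATION = 40.0  # invented tolerance around each best example

def categorize(vot_ms):
    """Assign a sound to the phoneme whose best example is nearest."""
    phoneme, prototype = min(PROTOTYPES.items(),
                             key=lambda kv: abs(kv[1] - vot_ms))
    if abs(prototype - vot_ms) > MAX_DEVIATION:
        return None  # too far from every best example
    return phoneme

print(categorize(10.0))   # 'b': close to the /b/ prototype
print(categorize(55.0))   # 'p'
print(categorize(150.0))  # None: beyond the maximum deviation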

Remember that my definition of natural language here includes a grouping of all languages, which implies that all phonemes are represented. Now let us add the phonemes we have lost, whether they existed in dead languages or have simply dropped out of current ones. We now have the entire array of speech sounds. By this method, all distinctive features should be represented. All these sounds can be brought into natural language, combined in every way morphologically and syntactically possible, and we wind up with an array of expressions that is boundless.

This is the scope of natural language. But there is more.

Some may remember my review of Anne Carson’s book If Not, Winter: Fragments of Sappho. Like everyone else, I adored her book and really took to her method of translation. Recently, I decided to investigate a little bit more about this talented artist and scholar. I found that If Not, Winter is hardly an anomaly; it is representative of her work.

In her essay “Variations on the Right to Remain Silent,” published in a 2008 edition of A Public Space, she confronts the boundary between linguistics and literary theory, hoping to develop a kind of theory of silence. She needs no more space than the essay itself to do so.

The motivation for the essay has its roots in the art of translation. According to Carson, there are two kinds of silence to be reckoned with by the translator. Physical silence occurs where something the author intended to be there is missing, as with many of Sappho’s poems, largely lost to posterity. Carson deals with this by using brackets where the author’s intended expressions are missing, but she says translators may be as justified in some cases to extrapolate expressions. The other kind of silence is “metaphysical” silence, wherein “a word… does not intend to be translatable. A word… stops itself.” Carson gives an example from the Odyssey:

In the fifth book of the Odyssey when Odysseus is about to confront a witch named Kirke whose practice is to turn men into pigs, he is given by the god Hermes a pharmaceutical plant to use against her magic:

So speaking Hermes gave him the drug
by pulling it out of the ground and he showed the nature of it:
at the root it was black but like milk was the flower.
MOLY is what the gods call it. And it is very hard to dig up
for mortal men. But gods can do such things.

MOLY is one of several occurrences in Homer’s poems of what he calls “the language of gods.” There are a handful of people or things in epics that have this sort of double name. Linguists like to see in these words traces of some older layer of Indo-European preserved in Homer’s Greek. However that may be, when he invokes the language of gods Homer usually tells you the mortal translation too. Here he does not. He wants this word to fall silent. Here are four letters of the alphabet, you can pronounce them but you cannot define, possess, or make use of them. You cannot search for this plant by the roadside or Google it and find out where to buy some. The plant is sacred, the knowledge belongs to gods, the word stops itself.

These silences occur with words forming a subset, of unknown size, of the words that must be borrowed from other languages rather than translated. Translators must make several difficult decisions in their work from both artistic and linguistic standpoints, but it is the latter that matters most here, because there is a “spectrum of translation” they must always employ. On one end are single words that translate with virtually 1:1 correspondence to words in the other language. ‘Book’ is ‘libro’ in Spanish without much confusion. Then there’re words like ‘nose’ in English that translate with but the slightest difference into 鼻 (hana). In Words in Context, Takao Suzuki shows that the area American English speakers consider the nose covers a different portion of the face than the Japanese word does, although both of course include the most important functional parts. Likewise, as discussed on this blog, Paul Kay (Berkeley) has shown that speakers of almost all languages pick out the same best example, or shade, of the word red, despite differing ranges of shades that could be considered red. Nevertheless, for all intents and purposes, a single-word translation will do. Next we have compound and composite word translations. The word ‘television’ seems like it translates quite cleanly to 電視 (dian4 shi4) in Mandarin (or Taiwanese if we’re being cute). But there are a few issues here: 電視 is actually a composite word, much like the original, made from two morphemes that indicate ‘electricity’ and ‘being looked at’ respectively.
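For what it’s worth, this near end of the spectrum is easy enough to caricature in code. This is only a sketch of the ordering, using the examples above; the numeric positions are arbitrary and only their relative order matters:

# A toy ordering of the spectrum's near end.
SPECTRUM = [
    (0, "book", "libro", "virtually 1:1 correspondence"),
    (1, "nose", "鼻 (hana)", "slightly different range, same core"),
    (2, "television", "電視 (dian4 shi4)", "composite mapped to composite"),
]

for position, source, target, note in SPECTRUM:
    print(f"{position}: {source!r} -> {target!r} ({note})")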

At this point, we can see that some languages possess words that others will struggle to translate with the same economy. From here until the middle of the spectrum, words are translated with progressively more morphemes in the destination language. But when a translator is faced with translating one word into a paragraph, that can defeat so much about the original: pacing, essence, and so on. And then, of course, there’s the Heisenberg Uncertainty Principle in Language, which suggests that the more words we use to approximate the original meaning of the word being translated, the more of its essential meaning, along with its other connotations, escapes us. Locking down the expression so rigidly pushes out meaning. Therefore, there comes a point on the spectrum where translators must seek methods of translation other than the complete and rigid expression.

Carson is a master of this, as I have pointed out before. In her book of Sappho poetry, If Not, Winter, she uses words such as ‘songdelighting’ and ‘radiant-shaking.’ Instead of writing out the complete expressions, she chooses innovation. She creates novel words using standard word formation rules in the destination language that may contain more of the original meaning than an attempt at complete expression might.

The second to last point on the spectrum of translation is when a word is just borrowed without further elaboration. Carson highlights the borrowing (outright theft, I’d think) of ‘cliché’ from French. She writes:

It has been assumed into English unchanged, partly because using French words makes English-speakers feel more intelligent and partly because the word has imitative origins (it is supposed to mimic the sound of the printer’s die striking the metal) that make it untranslatable.

The latter is a good reason for borrowing a word from another language. Another reason is that a speech community has significant demand for a word it does not yet possess. For example, French speakers started using the word ‘email’ because no French word concisely described the concept, and French word formation rules would likely not have produced so economical a word either. (The Académie Française has tried to stifle the use of this word in favor of ‘courriel,’ and I do not know the extent of its success.) A better example is the English borrowing of ‘schadenfreude’ from German, which means “taking delight in others’ misfortune.” Although I have only really heard Dorothy Rabinowitz, a Pulitzer Prize-winning writer at the Wall Street Journal, use the word aloud, I have read it on several occasions in other writers. Just beyond these words are similar words for which some meaning can never be discovered or reclaimed without being a native speaker of the language. Multilinguals know many such words. Some brag about them. Some keep their knowledge locked away. Some of these words also depend crucially on shared temporal experience: ‘truth’ and ‘authenticity’ mean so much more to many Czechs than most American English speakers can understand, though they can try if they read Havel, Seifert, Kundera, and maybe some Poles as well. This is a story worth telling in another post someday.

Finally, we arrive at the end of the spectrum, yet there is no guard rail or barrier, and we stand at a precipice beyond which we cannot see anything precisely: only the bright and ineffable, like MOLY. These words land in our language with a form bearing no relationship that we can trace back to any meaning. Morphological analysis stops because it can never start. Syntax? Phonology? Save yourself because the tracks have all been covered. Carson shows several examples of the bright, ineffable silences: they are all places that we cannot go. These silences may be uttered by our inner angels, the angels above, or from even more inexplicable origins. Our choice to explore them creates possibilities that we never before considered.
