On Labeling

I keep running into an issue with labels. It wasn’t long ago that I revised my own from “agnostic” to the more accurate and more useful “agnostic atheist” (in a nutshell, anyway–but this is a topic for a future post). The problem I have is that the relevant parts of my beliefs didn’t change, only what I called myself did. I didn’t have a belief in any gods when I called myself an agnostic, and I don’t have any belief in any gods now that I call myself an atheist. From any objective standpoint, I was an atheist the whole time.

And this is the substance of the problem: the dissonance between what a person calls himself or herself, and what categories a person objectively falls into. These labels are frequently different, and frequently result in various confusions and complications.

On one hand, I think we’re inclined to take people at their word with regard to what their personal labels are. It’s a consequence of having so many labels that center around traits that can only be assessed subjectively. I can’t look into another person’s mind to know what they believe or who they’re attracted to or what their political beliefs really are, or even how they define the labels that relate to those arenas. We can only rely on their self-reporting. So, we have little choice but to accept their terminology for themselves.

But…there are objective definitions for some of these terms, and we can, based on a person’s self-reporting of their beliefs, see that an objectively-defined label–which may or may not be the one they apply to themselves–applies to them.

I fear I’m being obtuse in my generality, so here’s an example: Carl Sagan described himself as an agnostic. He resisted the term “atheist,” and clearly gave quite a bit of thought to the problem of how you define “god”–obviously, the “god” of Spinoza and Einstein, which is simply a term applied to the laws of the universe, exists, but the interventionist god of the creationists is far less likely. So Sagan professed agnosticism apparently in order to underscore the point that he assessed the question of each god’s existence individually.

On the other hand, he also seemed to define “atheist” and “agnostic” in unconventional ways–or perhaps in those days before a decent atheist movement, the terms just had different connotations or less specific definitions. Sagan said “An agnostic is somebody who doesn’t believe in something until there is evidence for it, so I’m agnostic,” and “An atheist is someone who knows there is no God.”

Now, I love Carl, but it seems to me that he’s got the definitions of these terms inside-out. “Agnostic,” as the root implies, has to do with what one claims to know–specifically, it’s used to describe people who claim not to know if there are gods. Atheist, on the other hand, is a stance on belief–specifically the lack of belief in gods.

So, if we’re to go with the definitions of terms as generally agreed upon, as well as Carl’s own self-reported lack of belief in gods and adherence to the null hypothesis with regard to supernatural god claims, then it’s clear that Carl is an atheist. Certainly an agnostic atheist–one who lacks belief in gods but does not claim to know that there are no gods–but an atheist nonetheless.

The dilemma with regard to Sagan is relatively easy to resolve; “agnostic” and “atheist” are not mutually exclusive terms, and the term one chooses to emphasize is certainly a matter of personal discretion. In the case of any self-chosen label, the pigeon-holes we voluntarily enter into are almost certainly not all of the pigeon-holes into which we could be placed. I describe myself as an atheist and a skeptic, but it would not be incorrect to call me an agnostic, a pearlist, a secularist, an empiricist, and so forth. What I choose to call myself reflects my priorities and my understanding of the relevant terminology, but it doesn’t necessarily exclude other terms.

The more difficult problems come when people adopt labels that, by any objective measure, do not fit them, or exclude labels that do. We see Sagan doing the latter in the quote above, eschewing the term “atheist” based on what we’d recognize now as a mistaken definition. The former is perhaps even more common–consider how 9/11 Truthers, Global Warming and AIDS denialists, and Creationists have all attempted to usurp the word “skeptic,” even though none of their methods even approach skepticism.

A related danger arises when groups try to co-opt people who, due to a lack of consistent or unambiguous self-reporting (or of unambiguous reporting from reliable outside sources), can’t objectively be said to fit into those groups. We see this when Christians try to claim that the founding fathers were all devout Christian men, ignoring the reams of evidence that many of them were deists or otherwise unorthodox. It’s not just the fundies who do this, though; there was a poster at my college that cited Eleanor Roosevelt and Errol Flynn among its list of famous homosexual and bisexual people, despite there being inconsistent and inconclusive evidence to determine either of their sexualities. The same is true when my fellow atheists attempt to claim Abraham Lincoln and Thomas Paine (among others), despite ambiguity in their self-described beliefs. I think that we, especially those of us who pride ourselves on reason and evidence, must be careful with these labels, lest we become hypocrites or appear sloppy in our application and definition of terms. These terms have value only inasmuch as we use them consistently.

The matter of people adopting terms which clearly do not apply to them, however, presents a more familiar problem. It seems easy and safe enough to say something like “you call yourself an atheist, yet you say you believe in God. Those can’t both be true,” but situations rarely seem to be so cut-and-dry. Instead, what we end up with are ambiguities and apparent contradictions, and a need to be very accurate and very precise (and very conservative) in our definition of terms. Otherwise, it’s a very short slippery slope to No True Scotsman territory.

Case in point, the word “Christian.” It’s a term with an ambiguous definition, which (as far as I can tell) cannot be resolved without delving into doctrinal disputes. Even a definition as simple as “a Christian is someone who believes Jesus was the son of God” runs afoul of Trinitarian semantics, where Jesus is not the son, but God himself. A broader definition like, “One who follows the teachings of Jesus” ends up including people who don’t consider themselves Christians (for instance, Ben Franklin, who enumerated Jesus among other historical philosophers) and potentially excluding people who don’t meet the unclear standard of what constitutes “following,” and so forth.

Which is why there are so many denominations of Christianity that claim that none of the other denominations are “True Christians.” For many Protestants, the definition of “True Christian” excludes all Catholics, and vice versa; and for quite a lot of Christians, the definition of the term excludes Mormons, who are also Bible-believers who accept Jesus’s divinity.

When we start down the path of denying people the terms that they adopt for themselves, we must be very careful that we do not overstep the bounds of objectivity and strict definitions. Clear contradictions are easy enough to spot and call out; where terms are clearly defined and beliefs or traits are clearly expressed, we may indeed be able to say “you call yourself bisexual, but you say you’re only attracted to the opposite sex. Those can’t both be true.” But where definitions are less clear, or where the apparent contradictions are more circumstantially represented, objectivity can quickly be thrown out the window.

I don’t really have a solution for this problem, except that we should recognize that our ability to objectively label people is severely limited by the definitions we ascribe to our labels and the information that our subjects report themselves. So long as we are careful about respecting those boundaries, we should remain well within the guidelines determined by reason and evidence. Any judgments we make and labels we apply should be done as carefully and conservatively as possible.

My reasons for laying all this out should become clear with my next big post. In the meantime, feel free to add to this discussion in the comments.


The Bible is Not an Objective Moral Standard

Reading posts by Rhology has made me realize some of the problems involved in talking to people who believe their morals come from the Bible. There are several common refrains involved when arguing about this–“atheists have no basis for morality,” “without an objective morality/absolute moral code, you can’t judge other people’s morals,” “everyone has inborn morals from God, even if they don’t believe in him”–all of which are bound to pop up in any argument about secular morals. These all generally lead back to the point that God (and/or/through the Bible) provides a perfect and objective moral standard, without any of the problems that come from trying to define and justify a moral system in the absence of a deity. This idea is simply false: the Bible is emphatically not an objective moral standard; in fact, it fails on each of those counts–“objective,” “moral,” and “standard.”

We’ll tackle “standard” first, since it’s the easiest. What moral standard does the Bible provide? Do we take our morals only from the explicit commandments, or should we learn by example from the various heroes and virtuous people?

If we are to learn only from the explicit commandments, then we run into a problem right away: there are an awful lot of apparent moral quandaries that never get discussed in the Bible. Are there moral implications of genetic engineering? Cybernetics? Overpopulation? Pollution? Birth control? Phone sex? Organ transplants? Euthanasia? Where the Bible touches on these issues, it does so only in the broadest, vaguest, and most tangential fashion; there are no specific instructions on whether or not children should be given mood-altering drugs, no specific answers to questions about the introduction of novel organisms into foreign ecosystems. Are we to assume that the only moral issues are the ones that the Bible discusses directly? Is the choice to vaccinate your child morally neutral and equivalent to the choice to leave them unvaccinated? These are serious questions about real-life issues, on which the Bible is silent, preferring instead to tell us how best to combine goats and milk (Ex. 34:26) and the taxonomy of eunuchs (Mt. 19:12). Is there really no morally preferable choice in any of those situations?

So, perhaps we are meant to also learn from example. If that’s the case, then what lessons should we take away from the heroes’ stories? Take Jephthah, for instance. He makes a deal with God that if God helps him win in battle against the Ammonites, then he’ll sacrifice the first thing that comes through his doorway when he returns home. Naturally, after the successful battle, his daughter comes out to greet him. There’s no Abraham/Isaac cop-out in this story: Jephthah follows through with his promise to God. So do we read this story as a cautionary tale about the price of testing God, or do we read it as a positive example of what the faithful should be willing to do in the name of the Lord? There’s enough material outside the story to support both interpretations; which moral should we be receiving?

We could find similar quandaries with any number of Biblical characters–Joseph, Elisha, Solomon, Samson, etc.; maybe we shouldn’t be learning from all of their examples. So which characters should we be learning from? I suspect that Christians would say we ought not be following in the footsteps of Thomas, refusing to believe in the extraordinary until extraordinary evidence is provided to support the claims (despite the corroborating commandment of 1 Thessalonians 5:21). There is a litany of characters who are willing–even eager–to sacrifice their children based on God’s say-so, from Lot to Abraham to Jephthah to Yahweh, which suggests to me that according to Biblical morals, there’s nothing wrong with what Deanna Laney, Andrea Yates, or Dena Schlosser did*. Or perhaps we shouldn’t be learning from those particular examples. And what about the big guy himself? Should we be taking lessons from God’s actions, or is he a “do as I say, not as I do” sort of father figure? After all, God does some pretty nasty stuff over the course of the Bible, commanding and committing genocide and inflicting plagues and so forth. Even the “do as I say” bit is difficult, given all the places where God issues direct commands that conflict with earlier laws and commandments (such as the various exhortations to kill women and children, contradicting the whole “thou shalt not murder” bit). Do you do as he said before, or as he’s saying now–what was written in stone, or what was given in a vision? This would be a lot easier if each of the real commandments started with “Simon Says.”

Hitting on that point of contradictory commandments, we see quite a few such things throughout the Bible. There are places where some moral imperatives issued by the book contradict others, there are places where heroes’ explicit flouting of those imperatives is cast in a positive light, and then there are places where God issues edicts that directly conflict with previously-issued laws and edicts. How can we call this set of morals a “standard” if it is internally inconsistent, and if God can change it on a whim? Or is the only standard “what God says goes”? If it’s the latter, then how do we determine what God’s message is, given contradictory passages in the Bible and stories with ambiguous moral teachings? How do we distinguish between actual commands from God and paranoid delusions? After all, Dena Schlosser believed that God had told her to cut off her daughter’s arms, which isn’t exactly out of character for the God of the Bible (Mark 9:43, for instance); can we say with any degree of certainty whether or not she was actually receiving instructions from Yahweh?

This segues nicely into the issue of objectivity**. In short, there isn’t any. In long, we have to make some distinctions here. Let’s say, for the sake of argument, that there is an omnipotent universe-creating God who has some idea of morality in his big giant head, and cares whether or not we follow it. To this end, he communicates with some Middle Eastern nomads through bushes and tablets, plays some role in their writing of a bunch of books full of teachings and laws, then later comes down himself to tell stories and make pronouncements which also eventually get written down. At this point, we could conceivably have three distinct moral codes: What-God-Thinks, What-God-Said, and What-Got-Recorded. In any human communication, these three things would be different–perhaps only subtly, but certainly different. What one thinks might be more nuanced and detailed than what one says, which may lose some inflection or connotation in the transition to writing (or may gain additional ones through the addition of punctuation and other conventions), not to mention that the writers are filtering what-one-says through their own perceptions. But, for the sake of simplicity, we’ll assume that God is super-awesome and communicated everything pertinent about his thoughts on morality to his various followers, who recorded these thoughts accurately–to make things simple (too late), we’ll assume that the Bible (as it was written) accurately and completely represents God’s moral codes, that What-God-Thinks and What-Got-Recorded are the same.

That’s all well and good, but it’s certainly not the end of the story. Even assuming that God is perfect and infallible and a fantastic communicator, and assuming that his secretaries were all very thorough and accurate, the morals aren’t doing much good until they’re read. The process of reading is where any lingering objectivity goes right out the window. I’ll refer you to my post on communication for the lengthy discussion. Suffice it to say, each person who reads the Bible is going to read it in the particular context of their own knowledge, culture, and experiences. These contextual differences are going to have profound impacts on the message that the person receives***.

Take, for example, Exodus 20:13: “Thou shalt not murder.” On the face of it, that’s pretty straightforward. “Murder” is a more specific term than, say, “kill” (which some translations use instead); “murder” implies some degree of intent, ruling out accidental deaths, and is usually reserved for humans, ruling out killing animals and plants and the like. It would seem that the Sixth Commandment is pretty cut-and-dry.

It’s not. It doesn’t take more than a brief application of common sense to realize that, either. Even legally, “murder” is a broad term, and the difference between it and manslaughter is often a matter of prosecutorial discretion.

Consider this: is it murder to kill someone who is trying to kill you? Legally, it isn’t; it’s self-defense. What if you’re killing someone who is trying to kill someone else, some innocent? If you could demonstrate that that person was a clear and present danger, then it’d be a pretty clear case of justifiable homicide. Is it murder to kill someone who is not attacking you, but has threatened or promised to kill you? Is there such a thing as pre-emptive self-defense? What if you think they’ve threatened you, or you just feel threatened by them? Is there a hard-and-fast line where it isn’t self-defense anymore? What if someone’s mere existence threatens your life–if you’re trapped on a raft or in the wilderness with another person, with only enough resources for one of you to survive, is it murder to kill the other person? Is it murder to continue living, ensuring that person’s death?

This is, of course, ignoring other pertinent questions–is it murder to kill an enemy in war? What about the unborn? Is abortion murder? Is it murder to dispose of unused frozen embryos from in vitro fertilization? Is execution murder? Is it murder if you don’t act to prevent someone’s death when it’s in your power to do so? If someone who is already facing imminent-but-painful death begs you for a quick and painless one that you are able to provide, would it be murder to kill them? Would it be wrong? I guarantee, for nearly all of these questions, that one can easily find Bible-believing Christians on every conceivable side.

Some of this may seem like splitting hairs, but if there’s one thing I’ve learned about moral philosophy, it’s that it exists specifically to split those hairs. The whole point of moral philosophy is to provide answers–or at least reasoned arguments–regarding these tough hair-splitting moral questions. We don’t generally have much problem reasoning out the right thing to do in the obvious situations; it’s the ones that walk the lines, the no-win scenarios, and whatnot that cause moral anxiety.

Can the Bible be an objective moral standard if it doesn’t provide specific guidance on these questions? If it doesn’t provide a specific, detailed definition of murder (for instance), then how are we to determine what we shalt not do in these difficult situations? We started by assuming that God included his morals, completely and perfectly, in the Bible, but can any moral system be considered complete or perfect under any reasonable definition of either term if it leaves so much open to subjective interpretation?

It ends up being like the disagreement between Creationists regarding where to draw the line between “fully ape” and “fully human” when presented with the progression of transitional hominids. When a worldview that only admits binary options is presented with a continuum, dividing that spectrum up into those two absolute options is a subjective and arbitrary process. If the Bible had said “So God created man in his own image, which was upright and somewhat hairy and with a prominent sloping brow, and…,” those Creationists might have had more agreement. Similarly, if the Bible said “Thou shalt not murder, which includes but is not limited to…,” these questions might be answered more objectively within Biblical morality.

Or, rather than presenting us with the broad, general rules and expecting us to deduce the specifics, the more useful moral standard would provide us with a litany of specific situations and allow us to induce the generalizations. Sure, it would make the Bible exponentially longer, but after three hundred pages of various specific killing scenarios, it’d be pretty easy to reason “wow, God doesn’t much seem to like murder.” Instead, we have the general statement, which leaves us wondering “gee, what does God think about euthanasia?” and the like.

And this is where the Bible fails on the “moral” point. Even disregarding the bits of the Bible that no sane person would call “moral,” the Bible fails as a moral guide because it provides no clear guidance on any of these moral issues. Even if the Bible is a full and accurate description of God’s moral sense, it is not a complete guide to the morals that a human would need. We face moral issues that are apparently beneath God’s notice, and in these cases we must make our own decisions, we must determine the moral options for ourselves. And the fact that we are able to do this on an individual level (e.g., euthanasia) and on a social one (e.g., self-defense and justifiable homicide legal exceptions) completely invalidates the supposed need for an objective moral standard. The Christian’s claim that morality requires the Bible falls apart once one realizes that we routinely face moral quandaries for which the Bible offers no clear answer. The moral decisions we are required to make on our own are far more varied, nuanced, and difficult than the morals that are prescribed in the Bible; if we can make moral decisions in the vast gray areas and unpleasant scenarios of the real world, then I can’t see how the broad generalizations like “thou shalt not murder” would present any sort of problem. As I mentioned above, it would be much easier to induce the general rules from the specific situations than to deduce the moral options in specific situations from a general rule. The morals provided by the Bible are the simplest building blocks, the things we can all agree on and end up at independently (and, incidentally, things that most cultures have done independently), based on the much more complex situations we run across in the real world.

Where in the Bible we are meant to find morals is unclear; the stories are ambiguous, the commandments are overly general and often irrelevant, and there is little (if any) consistency. Most of the moral-making is ultimately left up to subjective interpretation, and the application of those morals is a matter for personal and social determination. The Bible does not provide the objective moral standard which so many of its adherents proclaim, and the notion that it is a necessary component for humans to have morals is self-refuting as a result. Moral philosophy, cultural anthropology, sociology, and biology have given us insights into how we make morals on the levels of the individual and as a society, and how moral codes and consciences developed in social animals. They have provided us with a way to develop our own systems of values, which then provide a way of distinguishing right from wrong in those situations where the division is indistinct. Finally, and perhaps most importantly, they have allowed us the freedom to do what people do (and indeed must do, regardless of their religious convictions) already–examine and evaluate their own values and come to their own conclusions–without the threat of damnation hanging over them should they make the wrong choice. Morals come not from above, but from within; they are a result of our individual instincts and our interactions with one another. Consequently, we are held responsible, made to account for our moral decisions, by ourselves and each other, not some external arbiter. The only “objective moral standard” is the one we set ourselves.


*Some theists would likely say that these people were not actually receiving instructions from God, even though they believed they were. I’d like to know how they make that distinction. After all, can’t the same be said for Jephthah or Abraham? If you accept those stories, then you certainly can’t claim that it’s not within God’s character to demand that a parent sacrifice his or her child–Abraham certainly believed that this was something that God would command, and the Jephthah story confirms Abraham’s conviction. On what grounds can we claim with any kind of certainty that Abraham and Jephthah were actually receiving instructions from God to violate the “thou shalt not murder” commandment, while Dena Schlosser and Andrea Yates were schizophrenic or otherwise mentally ill?

**There’s a further issue here with the definition of “objective,” which could probably warrant its own post. Generally, things that are “objective” are the things that can be verified through application of fact or reason. “Chocolate is brown” is an objective fact (admittedly with some definition-associated wiggle room), subject to verification or falsification; “chocolate is delicious” is a subjective opinion, which is not subject to proof or disproof. What, precisely, makes God’s opinion on morals objective? Why would his opinion be any less subjective than anyone else’s? Yes, God is more powerful, but what application of power can make subjective opinion into objective fact? God’s opinions are not subject to verification or falsification; they are as inaccessible to us as anyone else’s opinions. We can know them only by being told directly, by the subject, what the opinions are–and that runs us again into the problem of communication and interpretation.

Yeah, this is definitely fodder for another post.

***I’ve omitted here another pertinent issue: the matter of translation and copying. Long before anyone reading it today can get a chance to interpret the Bible, it has already been filtered through multiple interpreters. We know from the historical record that the Bible has been subject to multiple alterations (intentional and unintentional) through the copying process, many of which were due to various dogmas and ideologies of centuries past. The translators are working from copies that are many generations removed from any originals, and which have built into them many of the copying errors and alterations from the past. Those translators must then make their own interpretations when choosing the best words in one language to convey ideas expressed in another. There is rarely (if ever) a one-to-one correspondence between languages, especially ones as distantly related as modern English and ancient Greek. Each idea in the original could be phrased any number of ways in the translation, and each translated version will be different depending on what the translator decided to emphasize–was her intent to preserve the closest literal meaning of the text, or to convey the poetry, or to try to present the concepts as clearly as possible with less regard to the particular language, or did she have another motive for her choices? For an example of how much impact this kind of interpretive choice has on a text, try opening up any two versions of “The Iliad.”

On Interpretation

I thought I’d talked about this before on the blog, but apparently I’ve managed to go this long without really tackling the issue of interpretation. Consequently, you might notice some of the themes and points in this post getting repeated in my next big article, since writing that was what alerted me to my omission.

I don’t generally like absolute statements, since they so rarely are, but I think this one works: there is no reading without interpretation. In fact, I could go a step further and say there’s no communication without interpretation, but reading is the most obvious and pertinent example.

Each person is different, the product of a unique set of circumstances, experiences, knowledge, and so forth. Consequently, each person approaches each and every text with different baggage, and a different framework. When they read the text, it gets filtered through and informed by those experiences, that knowledge, and that framework. This process influences the way the reader understands the text.

Gah, that’s way too general. Let’s try this again: I saw the first couple of Harry Potter movies before I started reading the books; consequently, I came to the books with the knowledge of the movie cast, and I interpreted the books through that framework–not intentionally, mind you, it’s just that the images the text produced in my mind included Daniel Radcliffe as Harry and Alan Rickman as Professor Snape. However, I plowed through the series faster than the moviemakers have. The descriptions in the books (and the illustrations) informed my mental images of other characters, so when I saw “Order of the Phoenix,” I found the casting decision for Dolores Umbridge quite at odds with my interpretation of the character, who was less frou-frou and more frog-frog.

We’ve all faced this kind of thing: our prior experiences inform our future interpretations. I imagine most people picking up an Ian Fleming novel have a particular Bond playing the role in their mental movies. There was quite a tizzy over the character designs in “The Hitchhiker’s Guide to the Galaxy” movie, from Marvin’s stature and shape to the odd placement of Zaphod’s second head, to Ford Prefect’s skin color. I hear Kevin Conroy’s voice when I read Batman dialogue.

This process is a subset of the larger linguistic process of accumulating connotation. As King of Ferrets fairly recently noted, words are more than just their definitions; they gather additional meaning through the accumulation of connotations–auxiliary meaning attached to the word through the forces of history and experience. Often, these connotations are widespread. For example, check out how the word “Socialist” got thrown around during the election. There’s nothing in the definition of the word that makes it the damning insult it’s supposed to be, but thanks to the Cold War and the USSR, people interpret the word to mean more than just “someone who believes in collective ownership of the means of production.” Nothing about “natural” means “good and healthy,” yet that’s how it’s perceived; nothing about “atheist” means “immoral and selfish,” nor does it mean “rational and scientific,” but depending on who you say it around, it may carry either of those auxiliary meanings. Words are, when it comes right down to it, symbols of whatever objects or concepts they represent, and like any symbols (crosses, six-pointed stars, bright red ‘A’s, Confederate flags, swastikas, etc.), they take on meanings in the minds of the people beyond what they were intended to represent.

This process isn’t just a social one; it happens on a personal level, too. We all attach some connotations and additional meanings to words and other symbols based on our own personal experiences. I’m sure we all have this on some level; we’ve all had a private little chuckle when some otherwise innocuous word or phrase reminds us of some inside joke–and we’ve also all had that sinking feeling as we’ve tried to explain the joke to someone who isn’t familiar with our private connotations. I know one group of people who would likely snicker if I said “gravy pipe,” while others would just scratch their heads; I know another group of people who would find the phrase “I’ve got a boat” hilarious, but everyone else is going to be lost. I could explain, but even if you understood, you wouldn’t find it funny, and you almost certainly wouldn’t be reminded of my story next time you heard the word “gravy.” Words like “doppelganger” and “ubiquitous” are funny to me because of the significance I’ve attached to them through the personal process of connotation-building.

And this is where it’s key to be aware of your audience. If you’re going to communicate effectively, you need to have some understanding of this process. I need to recognize that not everyone will burst into laughter if I say “mass media” or “ice dragon,” because not everyone shares the significance that I’ve privately attached to those phrases. Communication is only effective where the speaker and listener share a common language; that simple fact requires the speaker to know which connotations he and his audience are likely to share.

Fortunately or unfortunately, we’re not telepathic. What this means is that we cannot know with certainty how any given audience will interpret what we say. We might guess to a high degree of accuracy, depending on how well we know our audience, but there’s always going to be some uncertainty involved. That ambiguity of meaning is present in nearly every word, no matter how simple, no matter how apparently direct, because of the way we naturally attach and interpret meaning.

Here’s the example I generally like to use: take the word “DOG.” It’s a very simple word with a fairly straightforward definition, yet it’s going to be interpreted slightly differently by everyone who reads or hears it. I imagine that everyone, reading the word, has formed a particular picture in their heads of some particular dog from their own experience. Some people are associating the word with smells, sounds, feelings, other words, sensations, and events in their lives. Some small number of people might be thinking of a certain TV bounty hunter. The point is that the word, while defined specifically, includes a large amount of ambiguity.

Let’s constrain the ambiguity, then. Take the phrase “BLACK DOG.” Now, I’ve closed off some possibilities: people’s mental pictures are no longer of golden retrievers and dalmatians. I’ve closed off some possibilities that the term “DOG” leaves open, moving to the included subset of black dogs. There’s still ambiguity, though: is it a little basket-dwelling dog like Toto, or a big German Shepherd? Long hair or short hair? What kind of collar?

But there’s an added wrinkle here. When I put the word “BLACK” in there, I brought in the ambiguity associated with that word as well. Is the dog all black, or mostly black with some other colors, like a doberman? What shade of black are we talking about? Is it matte or glossy?

Then there’s further ambiguity arising from the specific word combination. When I say “BLACK DOG,” I may mean a dark-colored canine, or I may mean that “I gotta roll, can’t stand still, got a flamin’ heart, can’t get my fill.”

And that’s just connotational ambiguity; there’s definitional ambiguity as well. The word “period” is a great example of this. Definitionally, it means something very different to a geologist, an astronomer, a physicist, a historian, a geneticist, a chemist, a musician, an editor, a hockey player, and Margaret Simon. Connotationally, it’s going to mean something very different to ten-year-old Margaret Simon lagging behind her classmates and 25-year-old Margaret Simon on the first day of her Hawaiian honeymoon.

People, I think, are aware of these ambiguities on some level; the vast majority of verbal humor relies on them to some degree. Our language has built-in mechanisms to alleviate them. In speaking, we augment the words with gestures, inflections, and expressions. If I say “BLACK DOG” while pointing at a black dog, or at the radio playing a distinctive guitar riff, my meaning is clearer. The tone of my voice as I say “BLACK DOG” will likely give some indication as to my general (or specific) feelings about black dogs, or that black dog in particular. Writing lacks these abilities, but punctuation, capitalization, and font modification (such as bold and italics) are able to accomplish some of the same goals, and other ones besides. Whether I’m talking about the canine or the song would be immediately apparent in print, as the difference between “black dog” and “‘Black Dog.’” In both venues, one of the most common ways to combat linguistic ambiguity is to add more words. Whether it’s writing “black dog, a Labrador Retriever, with floppy ears and a cold nose and the nicest temperament…” or saying “black dog, that black dog, the one over there by the flagpole…” we use words (generally in conjunction with the other tools of the communication medium) to clarify other words. None of these methods, however, can completely eliminate the ambiguity in communication, and they all have the potential to add further ambiguity to the communication by adding information as well.

To kind of summarize all that in a slightly more entertaining way, look at the phrase “JANE LOVES DICK.” It might be a sincere assessment of Jane’s affection for Richard, or it might be a crude explanation of Jane’s affinity for male genitals. Or, depending on how you define terms, it might be both. Textually, we can change it to “Jane loves Dick” or “Jane loves dick,” and that largely clarifies the point. Verbally, we’d probably use wildly different gestures and inflections to talk about Jane’s office crush and her organ preference. And in either case, we can say something like “Jane–Jane Sniegowski, from Accounting–loves Dick Travers, the executive assistant. Mostly, she loves his dick.”

The net result of all this is that in any communication, there is some loss of information, of specificity, between the speaker and the listener (or the writer and the reader). I have some specific interpretation of the ideas I want to communicate, I approximate that with words (and often the approximation is very close), and my audience interprets those words through their own individual framework. Hopefully, the resulting idea in my audience’s mind bears a close resemblance to the idea in mine; the closer they are, the more effective the communication. But perfect communication–loss-free transmission of ideas from one mind to another–is impossible given how language and our brains work.

I don’t really think any of this is controversial; in fact, I think it’s generally pretty obvious. Any good writer or speaker knows to anticipate their audience’s reactions and interpretations, specifically because what the audience hears might be wildly different from what the communicator says (or is trying to say). Part of why I’ve been perhaps overly explanatory and meticulous in this post is that I know talking about language can get very quickly confusing, and I’m hoping to make my points particularly clear.

There’s one other wrinkle here, which is a function of the timeless nature of things like written communication. What I’m writing here in the Midwestern United States in the early 21st Century might look as foreign to the readers of the 25th as the works of Shakespeare look to us. I can feel fairly confident that my current audience–especially the people who I know well who read this blog–will understand what I’ve said here, but I have no way of accurately anticipating the interpretive frameworks of future audiences. I can imagine the word “dick” losing its bawdy definition sometime in the next fifty years, so it’ll end up with a little definition footnote when this gets printed in the Norton Anthology of Blogging Literature. Meanwhile, “ambiguity” will take on an ancillary definition referring to the sex organs of virtual prostitutes, so those same students will be snickering throughout this passage.

I can’t know what words will lose their current definitions and take on other meanings or fall out of language entirely, so I can’t knowledgeably write for that audience. If those future audiences are to understand what I’m trying to communicate, then they’re going to have to read my writing in the context of my current definitions, connotations, idioms, and culture. Of course, even footnotes can only take you so far–in many cases, it’s going to be like reading an in-joke that’s been explained to you; you’ll kind of get the idea, but not the impact. The greater the difference between the culture of the communicator and the culture of the audience, the more difficulty the audience will have in accurately and completely interpreting the communicator’s ideas.

Great problems can arise when we forget about all these factors that go into communication and interpretation. We might mistakenly assume that everyone is familiar with the idioms we use, and thus open ourselves up to criticism (e.g., “lipstick on a pig” in the 2008 election); we might mistakenly assume that no one else is familiar with the terms we use, and again open ourselves up to criticism (e.g., “macaca” in the 2006 election). We might misjudge our audience’s knowledge and either baffle or condescend to them. We might forget the individuality of interpretation and presume that all audience members interpret things the same way, or that our interpretation is precisely what the speaker meant and all others have missed the point. We would all do well to remember that communication is a complicated thing, and that those complexities do have real-world consequences.

It’s not a big truck!

Hey, look at what I read about on the series of tubes today: Sen. Ted Stevens was indicted on seven felony counts for taking gifts and services (to the tune of a quarter million dollars) and subsequently covering it up.

In case you’ve forgotten, that’s this Ted Stevens: the “it’s not a big truck, it’s a series of tubes” Ted Stevens.

Couldn’t have happened to a more coherent guy.

Define “Success”

Apparently, Expelled was a success at the box office this weekend. At least, that’s what Randy Olson and Chris Mooney say. Ed Brayton tells a different story. It seems that no one has a clear idea of what “success” means.

On one hand, it opened at 9th place over the weekend, and that $3.5 million weekend makes it number 8 on the list of top grossing political documentaries of all time. Not too shabby for a film plagued by plagiarism and unlicensed music.

On the other hand, it opened far beneath films that have been out for multiple weeks, like “Horton Hears a Who” and “Nim’s Island.” Hell, even “Prom Night” did better. Take a quick look at the other films on that list of top grossing political documentaries; it’s just above a movie that opened in one theater, and just below one that opened in two. Granted, these numbers reflect the per-theater income, but when a movie opening in over a thousand theaters (a $3.5 million weekend spread across a thousand-plus screens averages out to no more than about $3,500 per screen) can’t do better per theater than one that opened in two, that seems to be saying something. Moreover, $3.5 million might cover the cost of the film itself, but certainly not the publicity and the “we’ll pay you to go” campaign they had with religious schools. Even the producers’ own gauge for success (apparently 2 million tickets sold) was missed by a wide margin.

Before Expelled came out, people were comparing it to “The Passion of the Christ” (and its $83.8 million opening weekend). The same marketing firm worked on both, and the marketing directly to churches and friendly audiences was certainly similar. When I first started hearing these comparisons, I immediately thought of another recent movie that was repeatedly compared to “The Passion”: “The Nativity Story.” “Nativity” couldn’t move the churchgoers into the seats, and is widely considered a flop.

“The Nativity Story” made $8 million in its opening weekend.

Now, why is a movie marketed toward much the same audience, in much the same way, which made over twice as much, considered a flop, while ScienceBloggers are conceding defeat to the success juggernaut that is Expelled? Is it just because it’s a documentary? Is that what sets the “incredible success” bar so low?

Expelled certainly did better than I’d hoped, but I’m more than a little disheartened to see folks like Olson and Mooney essentially conceding defeat at this point. Instead of calling for people to make responses, and lauding the creationists for their superior framing and marketing abilities, and criticizing the scientific community for not doing enough, why not fucking do something about it? What purpose does it serve for a scientist to say “Meet Ben Stein, the New Spokesman for the Field of Evolution”? What kind of framing is that?

And what is the expected scientific response supposed to be? An equally high-budget movie responding to their claims as if they’re claims that deserve a response? Yeah, that’s good framing, letting your opponents determine the terms of the debate. A direct-to-DVD release explaining all the problems? How well do the anti-Michael Moore direct-to-DVD flicks do compared to the Michael Moore films? Why is it that the people who claim to be trying to improve scientific communication are the ones falling over themselves to declare victory for the other side?

I’ll be curious to see how “successful” Expelled is in the coming weeks, as the initial church-rush dies down.

Framed

I haven’t before really taken sides in this whole “framing” debate, which crops up occasionally on the ScienceBlogs. On one side, you have folks like Chris Mooney and Matt Nisbet calling for more competent framing of the science debates, calling for more outreach and softer language so as to get moderate Christians on the side of science and reason, calling for people to stop connecting science with atheism so strongly. On the other side, you have folks like PZ Myers and Richard Dawkins, who are very successful at getting their message out to the public, and who do their best to promote science and atheism to the masses.

Up ’til this point, I thought I could see the value in the framers’ side of things, but that they woefully misunderstood what Myers and Dawkins were trying to accomplish. Myers and Dawkins are working not only to promote science, but to (to borrow Dawkins’ phrasing) raise consciousness about religion and promote positive atheism. They’re doing darn good jobs on all fronts, from my perspective, and I think each front is necessary and has its utility.

Science does need better promotion; it seems to me that we’ve been somewhat adrift since Carl Sagan died and Stephen Hawking left the media spotlight. I’m not sure why Neil DeGrasse Tyson hasn’t completely overtaken that role, since he certainly seems suited and qualified, but I suspect it has a great deal to do with the current climate in the United States and prevailing attitudes regarding science and religion. People aren’t as excited about astronomy and NASA as they ought to be, and the only science that seems to make it to the front pages is what’s on the front lines against religion and conservatism: environmental science and biology. The pendulum has swung precisely toward Richard Dawkins, and his recent releases have been expertly timed to take advantage of the current climate.

Religion does need to be booted out of its privileged place in our social discourse. It does absolutely need to be opened up to question and criticism, and that need is underscored by its current role in American politics. We have a President who consults far-right Christian leaders on a weekly basis with regard to national policy, we have a bevy of political programs designed to promote specific religious organizations, and we have a concerted effort on all fronts to legislate conservative religious morals over people who don’t agree. Religious groups are fighting tooth and nail against education, science, and progress in general, and in many places they’re winning. If religion were the personal thing that it ought to be, this wouldn’t be a problem. When it inserts itself into the public sphere, when it tries to create policies that affect the rest of us, then it can no longer enjoy the untouchable place it might retain as a private process. Religionists can’t have it both ways; they can’t have their personal, private, untouchable convictions and also try to impose those convictions over the rest of us. Something has to give, and since it doesn’t look like the religion-genie is going back into its bottle, then it must be opened up to question, critique, and ridicule.

And positive atheism does need to be promoted. How many of us have been or have known the person who says “I didn’t know there was anyone else who thought the same way”? The phrase is becoming less common, and that’s largely due to the easy availability of atheist thought through popular books and blogs. Atheism is moving from a shameful secret to an open movement, and that is a good thing for atheists, and for religious freedom in general.

So, it seemed to me that the framers either neglected to note that Myers’ and Dawkins’ goals were more widespread than their own, or that they did not see the value in the latter two goals, only that they seemed to undermine the first. They were talking past one another, because neither side seemed to realize what the other’s goals were. And so I more or less ignored the debate, having no particular stake in either side.

But things have somewhat exploded following the Expelled-from-Expelled debacle, and it’s become increasingly clear that there’s something wrong on the framing side of things.

First, we have a chorus of people claiming that this controversy helps Expelled‘s exposure, and “there’s no such thing as bad publicity,” or something. The existence of bad publicity is something of a matter for debate; both sides in this argument have brought up the “Swift Boat Veterans for Truth,” though I think that pretty much proves the old adage wrong. I can see how this increased exposure might be beneficial for Expelled, but I think the overwhelmingly bad reviews might counteract some of that.

And it’s worthwhile to note that this isn’t the first time the film has gotten bad press in the New York Times, though some seem to think it is. For the scientists involved to get a chance to rebut the movie’s claims and call out the producers for obvious dishonesty and hypocrisy even before the film’s limited opening seems like a good thing for our side.

Anyway, Matt Nisbet wrote a screed (quoted here) telling Dawkins and Myers to “Lay low and let others do the talking” as Expelled hits theaters, and to defer any questions or comments to scientists more congenial to religion. He explicitly compares them to Samantha Power and Geraldine Ferraro, as though either of them has specifically insulted someone on the other side (or worse, made explicitly racist comments) and should step down. He calls for other people to “play the role of communicator” of science, apparently unconscious of why Dawkins and Myers might be considered communicators of science (i.e., because they communicate effectively and people like their message enough to read it widely, not because of any top-down appointment), and apparently ignorant of the fact that Myers and Dawkins are speaking out because Myers and Dawkins specifically appear in the film. What message would it send if Myers and Dawkins sat out the movie’s release and subsequent commentary? I know just how the Creobots would frame it–that PZ and Dawkins were ashamed that they’d been exposed for the Big Science conspirators they were, that the claims in the film hit too close to home, that they were scared to admit that the Creationists were right. Silence from the participants would only help the message of Expelled.

PZ, understandably, replied, saying “fuck you very much.” I thought it was apropos. Short, terse, and dismissive, precisely what such a vapid sentiment warranted.

So the new chorus began, about how PZ was being impolite and uncivil, that he was acting like a spoiled child.

And then there’s this, which is fucking ridiculous. Somehow, Sheril Kirshenbaum, Chris Mooney’s blog partner, can say with a straight face that PZ should “mind his manners” and “That kind of language and reaction is simply unacceptable on and off the blogosphere,” and then go on to accuse him of not acting like an adult, of being an adolescent. Mooney echoes the sentiments in the comment thread.

Really? Really? You people are actually going to cry foul at PZ because he used a naughty word? And you call him the adolescent? Last I checked, adults were supposed to be mature enough to handle the use of swear words. I was under the impression that adults recognized that words are words, regardless of how many letters they contain, and that all words were useful in certain contexts. I thought that adults could recognize that whether or not one uses so-called “bad words,” it’s the substance of one’s statement that matters.

That the Framing proponents would attack PZ for breaking some kind of blogosphere no-profanity rule smacks of missing just about every possible point, and it sounds as if they’re blogging in a vacuum (where is this Internet etiquette rulebook?), which seriously calls into question their expertise on how people will react to things.

That’s the heart of framing, right? So far as I understand it, it’s one part tact, one part spin, and one part bending over backwards to win approval.

The first part is the one I can get behind entirely. The very basics of effective communication are knowing your audience, choosing your battles wisely, and using appropriate language for the situation. Here’s a brief example just from my experience tonight: I’m in a discussion-based class, and at one point we were supposed to discuss what some of the key problems are in society. I could have piped up with “religion;” about half the class (teacher included) knows I’m an atheist, so it wouldn’t be unexpected, but I decided to let it be. I didn’t want to have to get into why I was saying it, or into the twisty word games of “well, not all religious people, but certain organizations, and…” that would almost inevitably have to follow. I knew my audience (and moreover, didn’t see any reason to offend most of them unnecessarily) and chose not to fight that particular battle. Later in the class, we were discussing why women were marginalized by society. Now, this was a more worthwhile battle, in part because it was far easier to justify. But while I could have said “religion,” or “the Abrahamic faiths, which have throughout history characterized women in a negative, inferior, subservient light,” I didn’t. In part, this was because I (again) didn’t want to unnecessarily offend my class; in part, it was because I knew the problem went farther than just the Abrahamic religions (Greek mythology does it too, and there are some particularly odious doctrines of this sort in Buddhism). So what I said was “various patriarchal religions” (there may have been slightly more to it, but that’s the bulk of the comment). If I were blogging here about the question, I would’ve been a lot more long-winded and less diplomatic in my assessment.

So I get the call for being tactful, and I’m sure Myers and Dawkins do too. Both are clearly generally aware of their respective audiences; it’s a large part of why they’re so popular.

The spin aspect is something I understand, but I don’t support it quite as readily. It’s important, especially in politics, to be able to present information in a way that supports your position, that works to persuade and present your side of a given debate in a positive light. The problem is that spin doctoring often only works through subtle misrepresentation and lying by omission, neither of which is particularly in the scientific spirit. It’s fine to present scientific findings and the scientific method in a positive light, in order to win supporters, but the spin ought to be minimal, lest it come back to bite us in the collective ass. And there’s certainly a problem with the repeated exhortations that we tell religionists how there’s no conflict between science and religion: it places reality in the subservient role. Granted, there are plenty who would do that anyway, but when we say “no no, you can fit evolution into your religious beliefs!” we’re making a mistake. It’s the religion that needs to fit reality, and not the other way around. The process may be tough on religion (it always is; see also: Galileo), but eventually mainstream religion must adapt to our changing knowledge of reality. It happened with Galileo and heliocentrism, it happened with Ben Franklin and lightning, and it’ll happen with Darwin and evolution as well. Mainstream religion will fit its worldview to the scientific facts, and the conservative fundamentalists will be left behind to deny reality on their own, just like the flat-earthers and geocentrists. But for that to happen, science needs to stand its ground and say “look, here’s the evidence, e pur si evolve,” not “well, if you just look at it from this point of view, reality totally fits into your worldview.” Let the progressive religionists and theologians tell their flocks how religion and science mesh; it just looks like grasping at straws when our side does it.

It’s the “don’t ever offend anyone” attitude of the framers that I can’t stand. It’s at the heart of their calls for someone else (i.e., someone who isn’t an outspoken atheist) to be the “spokesperson” for science, it’s at the heart of their criticism of PZ for using naughty language, and it’s at the heart of their misunderstanding of effective communication, so far as I see it. There is a value in stopping the buck, in being blunt, in calling spades spades and bullshit bullshit. It’s why James Randi has been gainfully employed for the last several decades, it’s why Penn and Teller are getting a sixth season of their award-winning series. There’s a time for being diplomatic, for playing good cop and making friends with the other side and smoothing out the difficulties, and there’s a time for being terse, for playing bad cop and shocking people out of their complacent little bubbles. There’s a reason that “straight talker” is a compliment. The Framers seem to think that people never learn unless you slather the information in honey and sugar to help it go down. They don’t understand that sometimes it works to say “take your damn medicine.”

So until this point I haven’t put much thought into the whole “Framing” debate, but Sheril and Chris’s Puritan “Mommy Mommy, PZ made a swear!” outrage, their holier-than-thou “shame on you” attitude, really made me consider the issue. And it seems to me that the only things they bring to the debate are either common sense (being tactful) or misguided (spin, being totally unoffensive, not seeing the good in promoting atheism and attacking religion).

And the result of all that advice to increase successful science promotion? I can only speak for myself, but I’ve long been planning to pick up Chris Mooney’s book “The Republican War on Science,” though I hadn’t quite gotten around to it. Mooney was even at the top of the short list of people I wanted to invite to speak for Darwin Club a couple of years back, though that didn’t pan out. My opinion of him has plummeted; at this point, if I do ever read his book, I’ll just borrow it from a friend.

I can’t help but think that wasn’t the intended effect.