Bigotry, Satire, and the Left

[CW: Racism]

I used to be a big fan of “Family Guy.” I owned the first several seasons, and watched them repeatedly. I rejoiced when the show came back from its cancellation, even if the interim productions (a “live from Vegas” album and the direct-to-DVD Stewie movie) weren’t spectacular. I listened to the commentaries, which were often just as entertaining as the show itself. I loved how the show skewered right-wing religious fundamentalism, how frequently it crossed the line into bad taste for a laugh. Like, there was the bit where a JFK Pez dispenser got shot, or where Osama Bin Laden was trying to get past airport security by singing showtunes, and the whole “When You Wish Upon a Weinstein” episode. The latter of those never made it to air; the former segments were even cut from the DVD sets. Family Guy was edgy.

Seth MacFarlane, the creator and a significant part of the voice cast of the show, is decidedly liberal, and his politics have certainly informed the series. More and more as the show went on, we saw bits lampooning creationists and religion, promoting pot legalization and gay marriage and positive immigration reform.

Unfortunately, as the show went on, we saw more and more of the stuff that eventually soured me on the series. That same “edginess,” that same intentionally-offensive philosophy of “we make fun of everyone,” meant more characters who were stereotype caricatures. Brian’s flamboyantly gay relative, the Asian reporter (voiced by a white woman) who occasionally slips into a “me ruv you rong time” accent for a laugh, the creepy old pedophile. And of course Quagmire, whose ’50s-throwback ladies-man character is eventually just a vehicle for relentless rape jokes.

Seth MacFarlane would probably tell you that he’s not a racist or a misogynist or a homophobe. He would probably tell you that he’s very liberal, that the show constantly makes fun of right-wing ideologies and satirizes even his erstwhile employers at Fox. In satirical parlance, he’d probably argue that his show is “punching up.”

The problem is that, while doing all that punching, he’s not giving any thought to the splash damage to people who might not be his actual targets. What about satirizing right-wingers necessitates rape jokes and racial stereotypes? Would his satire be as effective without those elements? Might it be better? I don’t think Seth MacFarlane cares much. They get laughs, and when it comes down to it, laughs matter more to guys like Seth MacFarlane than the targets of those laughs do.

There are lots of people in similar boats, willing to throw anyone under the bus for a cheap laugh, then defend themselves by saying that they’re being satirical, that because they’re politically liberal, or because they satirize the powerful in addition to the powerless, they can’t be bigots. They’re just equal-opportunity offenders, treating everyone the same, and you don’t see their powerful targets complaining.

Which, of course, misses the point. It misses the point like a white person saying “well how come it’s okay to say ‘honky’ or ‘cracker’ but not the n-word?” It misses the point like a man saying “female comedians are always telling jokes about men, how come it’s only sexist when I tell jokes about chicks or rape?” It misses the point that when not all people are equal in society, mocking them equally does unequal harm. Author Saladin Ahmed put it best when he said “In an unequal world, satire that mocks everyone serves the powerful. It is worth asking what pre-existing injuries we add our insults to.”

It’s an important thing to remember when you’re a satirist. Who is your target? Who do you want to hurt, and who might get hurt in the crossfire? Is it necessary to your point for your target to have sex with an offensive transphobic caricature? Is it necessary to your point to dredge up stereotypical slurs against one minority to lampoon bigotry against another? Is it necessary in making fun of racists and homophobes to replicate racist and homophobic imagery?

“Satire” is not a shield that protects its creators from criticism. “Liberalism” is not an inoculation that prevents its bearers from committing bigoted acts. Punching down is a problem. Splash damage is a problem. Not all slights are covered by “but look at the larger context,” not when your “larger context” conveniently omits the context of centuries of caricatures with hook noses or big lips or fishnet stockings.

And, it should go without saying, “criticism” doesn’t come from the barrel of a gun.

The Future’s So Bright

I’ve been watching a lot of action movies lately, inspired in part by Don’s Manly Monday series. It started with “Team America: World Police” and “Die Hard with a Vengeance,” and so far I’ve worked my way through “Live Free or Die Hard,” “Demolition Man,” “Con Air,” and most of “Lethal Weapon” recently. For some of those, it’s not the first time I’ve seen them, but there are others that I missed for one reason or another. “Demolition Man” is one that I’d managed never to see before, despite the massive amounts of hype I remember surrounding its release, and while it’s not the best of my recent marathon, it certainly gave me a lot to think about.

See, I love dystopian stories. I love the semi-reasonable ones and the fantastic ones and the blatantly ridiculous ones. I love the way they turn the slippery slope argument into a world-building exercise. I love the way that they can provide a handy reference for actual social issues. I’ve read and watched a lot of dystopian stories, and while there’s a lot of quality variation, I can’t think of any that I didn’t enjoy to some degree.

So, because I don’t have enough to do, I’m going to start a series of posts discussing some of the features and commonalities of my favorite dystopias. Unlike most of my posts, I’ll probably go back and edit these periodically to add titles to each list. Like most of my posts, I’m not going to put any kind of schedule or restriction on this, because FSM knows I’ll never be able to keep to it. But it’ll give me an outlet for some percolating thoughts, and I think it could be interesting.

The Bible is Not an Objective Moral Standard

Reading posts by Rhology has made me realize some of the problems involved in talking to people who believe their morals come from the Bible. There are several common refrains involved when arguing about this–“atheists have no basis for morality,” “without an objective morality/absolute moral code, you can’t judge other people’s morals,” “everyone has inborn morals from God, even if they don’t believe in him”–all of which are bound to pop up in any argument about secular morals. These all generally lead back to the point that God (and/or/through the Bible) provides a perfect and objective moral standard, without any of the problems that come from trying to define and justify a moral system in the absence of a deity. This idea is simply false: the Bible is emphatically not an objective moral standard; in fact, it fails on each of those points.

We’ll tackle “standard” first, since it’s the easiest. What moral standard does the Bible provide? Do we take our morals only from the explicit commandments, or should we learn by example from the various heroes and virtuous people?

If we are to learn only from the explicit commandments, then we run into a problem right away: there are an awful lot of apparent moral quandaries that never get discussed in the Bible. Are there moral implications of genetic engineering? Cybernetics? Overpopulation? Pollution? Birth control? Phone sex? Organ transplants? Euthanasia? Where the Bible touches on these issues, it does so only in the most broad, vague, and tangential fashions; there are no specific instructions on whether or not children should be given mood-altering drugs, no specific answers to questions about the introduction of novel organisms into foreign ecosystems. Are we to assume that the only moral issues are the ones that the Bible discusses directly? Is the choice to vaccinate your child morally neutral and equivalent to the choice to leave them unvaccinated? These are serious questions of real-life issues, on which the Bible is silent, preferring instead to tell us how best to combine goats and milk (Ex. 34:26) and the taxonomy of eunuchs (Mt. 19:12). Is there really no morally preferable choice in any of those situations?

So, perhaps we are meant to also learn from example. If that’s the case, then what lessons should we take away from the heroes’ stories? Take Jephthah, for instance. He makes a deal with God that if God helps him win in battle against the Ammonites, then he’ll sacrifice the first thing that comes through his doorway when he returns home. Naturally, after the successful battle, his daughter comes out to greet him. There’s no Abraham/Isaac cop-out in this story: Jephthah follows through with his promise to God. So do we read this story as a cautionary tale about the price of testing God, or do we read it as a positive example of what the faithful should be willing to do in the name of the Lord? There’s enough material outside the story to support both interpretations; which moral should we be receiving?

We could find similar quandaries with any number of Biblical characters–Joseph, Elisha, Solomon, Samson, etc.–maybe we shouldn’t be learning from all of their examples. So which characters should we be learning from? I suspect that Christians would say we ought not be following in the footsteps of Thomas, refusing to believe in the extraordinary until extraordinary evidence is provided to support the claims (despite the corroborating commandment of 1 Thessalonians 5:21). There is a litany of characters who are willing–even eager–to sacrifice their children based on God’s say-so, from Lot to Abraham to Jephthah to Yahweh, which suggests to me that according to Biblical morals, there’s nothing wrong with what Deanna Laney, Andrea Yates, or Dena Schlosser did*. Or perhaps we shouldn’t be learning from those particular examples. And what about the big guy himself? Should we be taking lessons from God’s actions, or is he a “do as I say, not as I do” sort of father figure? After all, God does some pretty nasty stuff over the course of the Bible, commanding and committing genocide and inflicting plagues and so forth. Even the “do as I say” bit is difficult, given all the places where God issues direct commands that conflict with earlier laws and commandments (such as the various exhortations to kill women and children, contradicting the whole “thou shalt not murder” bit). Do you do as he said before, or as he’s saying now–what was written in stone, or what was given in a vision? This would be a lot easier if each of the real commandments started with “Simon Says.”

Hitting on that point of contradictory commandments, we see quite a few such things throughout the Bible. There are places where some moral imperatives issued by the book contradict others, there are places where heroes’ explicit flouting of those imperatives is cast in a positive light, and then there are places where God issues edicts that directly conflict with previously-issued laws and edicts. How can we call this set of morals a “standard” if it is internally inconsistent, and if God can change it on a whim? Or is the only standard “what God says goes”? If it’s the latter point, then how do we determine what God’s message is, given contradictory passages in the Bible and stories with ambiguous moral teachings? How do we distinguish between actual commands from God and paranoid delusions? After all, Dena Schlosser believed that God had told her to cut off her daughter’s arms, which isn’t exactly out of character for the God of the Bible (Mark 9:43, for instance); can we say with any degree of certainty whether or not she was actually receiving instructions from Yahweh?

This segues nicely into the issue of objectivity**. In short, there isn’t any. In long, we have to make some distinctions here. Let’s say, for the sake of argument, that there is an omnipotent universe-creating God who has some idea of morality in his big giant head, and cares whether or not we follow it. To this end, he communicates with some Middle Eastern nomads through bushes and tablets, plays some role in their writing of a bunch of books full of teachings and laws, then later comes down himself to tell stories and make pronouncements which also eventually get written down. At this point, we could conceivably have three distinct moral codes: What-God-Thinks, What-God-Said, and What-Got-Recorded. In any human communication, these three things would be different–perhaps only subtly, but certainly different. What one thinks might be more nuanced and detailed than what one says, which may lose some inflection or connotation in the transition to writing (or may gain additional ones through the addition of punctuation and other conventions), not to mention that the writers are filtering what-one-says through their own perceptions. But, for the sake of simplicity, we’ll assume that God is super-awesome and communicated everything pertinent about his thoughts on morality to his various followers, who recorded these thoughts accurately–to make things simple (too late), we’ll assume that the Bible (as it was written) accurately and completely represents God’s moral codes, that What-God-Thinks and What-Got-Recorded are the same.

That’s all well and good, but it’s certainly not the end of the story. Even assuming that God is perfect and infallible and a fantastic communicator, and assuming that his secretaries were all very thorough and accurate, the morals aren’t doing much good until they’re read. The process of reading is where any lingering objectivity goes right out the window. I’ll refer you to my post on communication for the lengthy discussion. Suffice it to say, each person who reads the Bible is going to read it in the particular context of their own knowledge, culture, and experiences. These contextual differences are going to have profound impacts on the message that the person receives***.

Take, for example, Exodus 20:13: “Thou shalt not murder.” On the face of it, that’s pretty straightforward. “Murder” is a more specific term than, say, “kill” (which some translations use instead); “murder” implies some degree of intent, ruling out accidental deaths, and is usually reserved for humans, ruling out killing animals and plants and the like. It would seem that the Sixth Commandment is pretty cut-and-dried.

It’s not. It doesn’t take more than a brief application of common sense to realize that, either. Even legally, “murder” is a broad term, and the difference between it and manslaughter is often a matter of prosecutorial discretion.

Consider this: is it murder to kill someone who is trying to kill you? Legally, it isn’t; it’s self-defense. What if you’re killing someone who is trying to kill someone else, some innocent? If you could demonstrate that that person was a clear and present danger, then it’d be a pretty clear case of justifiable homicide. Is it murder to kill someone who is not attacking you, but has threatened or promised to kill you? Is there such a thing as pre-emptive self-defense? What if you think they’ve threatened you, or you just feel threatened by them? Is there a hard-and-fast line where it isn’t self-defense anymore? What if someone’s mere existence threatens your life–if you’re trapped on a raft or in the wilderness with another person, with only enough resources for one of you to survive, is it murder to kill the other person? Is it murder to continue living, ensuring that person’s death?

This is, of course, ignoring other pertinent questions–is it murder to kill an enemy in war? What about the unborn? Is abortion murder? Is it murder to dispose of unused frozen zygotes from in vitro fertilization? Is execution murder? Is it murder if you don’t act to prevent someone’s death when it’s in your power to do so? If someone who is already facing imminent-but-painful death begs you for a quick and painless one that you are able to provide, would it be murder to kill them? Would it be wrong? I guarantee, for nearly all of these questions, that one can easily find Bible-believing Christians on every conceivable side.

Some of this may seem like splitting hairs, but if there’s one thing I’ve learned about moral philosophy, it’s that it exists specifically to split those hairs. The whole point of moral philosophy is to provide answers–or at least reasoned arguments–regarding these tough hair-splitting moral questions. We don’t generally have much problem reasoning out the right thing to do in the obvious situations; it’s the ones that walk the lines, the no-win scenarios, and whatnot that cause moral anxiety.

Can the Bible be an objective moral standard if it doesn’t provide specific guidance on these questions? If it doesn’t provide a specific, detailed definition of murder (for instance), then how are we to determine what we shalt not do in these difficult situations? We started by assuming that God included his morals, completely and perfectly, in the Bible, but can any moral system be considered complete or perfect under any reasonable definition of either term if it leaves so much open to subjective interpretation?

It ends up being like the disagreement between Creationists regarding where to draw the line between “fully ape” and “fully human” when presented with the progression of transitional hominids. When a worldview that only admits binary options is presented with a continuum, dividing that spectrum up into those two absolute options is a subjective and arbitrary process. If the Bible had said “So God created man in his own image, which was upright and somewhat hairy and with a prominent sloping brow, and…,” those Creationists might have had more agreement. Similarly, if the Bible said “Thou shalt not murder, which includes but is not limited to…,” these questions might be answered more objectively within Biblical morality.

Or, rather than presenting us with the broad, general rules and expecting us to deduce the specifics, the more useful moral standard would provide us with a litany of specific situations and allow us to induce the generalizations. Sure, it would make the Bible exponentially longer, but after three hundred pages of various specific killing scenarios, it’d be pretty easy to reason “wow, God doesn’t much seem to like murder.” Instead, we have the general statement, which leaves us wondering “gee, what does God think about euthanasia?” and the like.

And this is where the Bible fails on the “moral” point. Even disregarding the bits of the Bible that no sane person would call “moral,” the Bible fails as a moral guide because it provides no clear guidance on any of these moral issues. Even if the Bible is a full and accurate description of God’s moral sense, it is not a complete guide to the morals that a human would need. We face moral issues that are apparently beneath God’s notice, and in these cases we must make our own decisions, we must determine the moral options for ourselves. And the fact that we are able to do this on an individual level (e.g., euthanasia) and on a social one (e.g., self-defense and justifiable homicide legal exceptions) completely invalidates the supposed need for an objective moral standard. The Christian’s claim that morality requires the Bible falls apart once one realizes that we routinely face moral quandaries for which the Bible offers no clear answer. The moral decisions we are required to make on our own are far more varied, nuanced, and difficult than the morals that are prescribed in the Bible; if we can make moral decisions in the vast gray areas and unpleasant scenarios of the real world, then I can’t see how the broad generalizations like “thou shalt not murder” would present any sort of problem. As I mentioned above, it would be much easier to induce the general rules from the specific situations than to deduce the moral options in specific situations from a general rule. The morals provided by the Bible are the simplest building blocks, the things we can all agree on and end up at independently (and, incidentally, things that most cultures have done independently), based on the much more complex situations we run across in the real world.

Where in the Bible we are meant to find morals is unclear; the stories are ambiguous, the commandments are overly general and often irrelevant, and there is little (if any) consistency. Most of the moral-making is ultimately left up to subjective interpretation, and the application of those morals is a matter for personal and social determination. The Bible does not provide the objective moral standard which so many of its adherents proclaim, and the notion that it is a necessary component for humans to have morals is self-refuting as a result. Moral philosophy, cultural anthropology, sociology, and biology have given us insights into how we make morals on the levels of the individual and as a society, and how moral codes and consciences developed in social animals. They have provided us with a way to develop our own systems of values, which then provide a way of distinguishing right from wrong in those situations where the division is indistinct. Finally, and perhaps most importantly, they have allowed us the freedom to do what people do (and indeed must do, regardless of their religious convictions) already–examine and evaluate their own values and come to their own conclusions–without the threat of damnation hanging over them should they make the wrong choice. Morals come not from above, but from within; they are a result of our individual instincts and our interactions with one another. Consequently, we are held responsible, made to account for our moral decisions, by ourselves and each other, not some external arbiter. The only “objective moral standard” is the one we set ourselves.


*Some theists would likely say that these people were not actually receiving instructions from God, even though they believed they were. I’d like to know how they make that distinction. After all, can’t the same be said for Jephthah or Abraham? If you accept those stories, then you certainly can’t claim that it’s not within God’s character to demand that a parent sacrifice his or her child–Abraham certainly believed that this was something that God would command, and the Jephthah story confirms Abraham’s conviction. On what grounds can we claim with any kind of certainty that Abraham and Jephthah were actually receiving instructions from God to violate the “thou shalt not murder” commandment, while Dena Schlosser and Andrea Yates were schizophrenic or otherwise mentally ill?

**There’s a further issue here with the definition of “objective,” which could probably warrant its own post. Generally, things that are “objective” are the things that can be verified through application of fact or reason. “Chocolate is brown” is an objective fact (admittedly with some definition-associated wiggle room), subject to verification or falsification; “chocolate is delicious” is a subjective opinion, which is not subject to proof or disproof. What, precisely, makes God’s opinion on morals objective? Why would his opinion be any less subjective than anyone else’s? Yes, God is more powerful, but what application of power can make subjective opinion into objective fact? God’s opinions are not subject to verification or falsification; they are as inaccessible to us as anyone else’s opinions. We can know them only by being told directly, by the subject, what the opinions are–and that runs us again into the problem of communication and interpretation.

Yeah, this is definitely fodder for another post.

***I’ve omitted here another pertinent issue: the matter of translation and copying. Long before anyone reading it today can get a chance to interpret the Bible, it has already been filtered through multiple interpreters. We know from the historical record that the Bible has been subject to multiple alterations (intentional and unintentional) through the copying process, many of which were due to various dogmas and ideologies of centuries past. The translators are working from copies that are many generations removed from any originals, and which have built into them many of the copying errors and alterations from the past. Those translators must then make their own interpretations when choosing the best words in one language to convey ideas expressed in another. There is rarely (if ever) a 1:1 correspondence between languages, especially ones as distantly related as modern English and ancient Greek. Each idea in the original could be phrased any number of ways in the translation, and each translated version will be different depending on what the translator decided to emphasize–was her intent to preserve the closest literal meaning of the text, or to convey the poetry, or to try to present the concepts as clearly as possible with less regard to the particular language, or did she have another motive for her choices? For an example of how much impact this kind of interpretive choice has on a text, try opening up any two versions of “The Iliad.”

More suffering

I was rereading this post tonight, when a thought occurred to me. The thought’s not going to mean much unless you go read the old post, so I’m putting it below the fold.

Job suffered more than Jesus did. Going along the thought toward the end of that old post, wouldn’t suffering on the level of Job’s have been more the sort of thing that we’d expect for someone suffering for all of humanity, past, present and future? Wouldn’t it be more in line with scriptural precedent for Jesus to have suffered like Job did? Rather than having to torture some passage about “piercing” as though it were a prophecy of crucifixion, Christians trying to demonstrate prophecies about Jesus could point to the Book of Job and say “look!”

I can imagine it now, with Christ amassing a following, starting his church in defiance of the Pharisees, marrying and starting a family, and ultimately making it to the apex of his life when God starts taking things away from him–first his followers, then his children, then his wife, then his health (but not so much that he is actually close to dying, to joining his family in the afterlife). Finally, his former friends betray him, the Pharisees force him to recant his message and deny his teachings before the masses, then betray him again to the Romans. At last, as he rots, broken and bullied and impotent in a Roman dungeon, Jesus looks toward the sky through a barred window. Job had the patience of a saint, but Jesus has the patience of a man, and he cries out–“My God, my God, why have You forsaken me?” And unlike Job, he curses God, unable to remain loyal when he has lost so much. That’s the kind of suffering that I’d expect from someone who’s suffering for everyone. It seems like ending the Jesus story with him losing faith, committing the unforgivable sin–with God denying God–would be the more poignant and powerful resolution.

More and more, it seems like God just doesn’t understand good writing.

On Interpretation

I thought I’d talked about this before on the blog, but apparently I’ve managed to go this long without really tackling the issue of interpretation. Consequently, you might notice some of the themes and points in this post getting repeated in my next big article, since writing that was what alerted me to my omission.

I don’t generally like absolute statements, since they so rarely are, but I think this one works: there is no reading without interpretation. In fact, I could go a step further and say there’s no communication without interpretation, but reading is the most obvious and pertinent example.

Each person is different, the product of a unique set of circumstances, experiences, knowledge, and so forth. Consequently, each person approaches each and every text with different baggage, and a different framework. When they read the text, it gets filtered through and informed by those experiences, that knowledge, and that framework. This process influences the way the reader understands the text.

Gah, that’s way too general. Let’s try this again: I saw the first couple of Harry Potter movies before I started reading the books; consequently, I came to the books with the knowledge of the movie cast, and I interpreted the books through that framework–not intentionally, mind you, it’s just that the images the text produced in my mind included Daniel Radcliffe as Harry and Alan Rickman as Professor Snape. However, I plowed through the series faster than the moviemakers have. The descriptions in the books (and the illustrations) informed my mental images of other characters, so when I saw “Order of the Phoenix,” I found the casting decision for Dolores Umbridge quite at odds with my interpretation of the character, who was less frou-frou and more frog-frog.

We’ve all faced this kind of thing: our prior experiences inform our future interpretations. I imagine most people picking up an Ian Fleming novel have a particular Bond playing the role in their mental movies. There was quite a tizzy over the character designs in “The Hitchhiker’s Guide to the Galaxy” movie, from Marvin’s stature and shape to the odd placement of Zaphod’s second head, to Ford Prefect’s skin color. I hear Kevin Conroy’s voice when I read Batman dialogue.

This process is a subset of the larger linguistic process of accumulating connotation. As King of Ferrets fairly recently noted, words are more than just their definitions; they gather additional meaning through the accumulation of connotations–auxiliary meaning attached to the word through the forces of history and experience. Often, these connotations are widespread. For example, check out how the word “Socialist” got thrown around during the election. There’s nothing in the definition of the word that makes it the damning insult it’s supposed to be, but thanks to the Cold War and the USSR, people interpret the word to mean more than just “someone who believes in collective ownership of the means of production.” Nothing about “natural” means “good and healthy,” yet that’s how it’s perceived; nothing about “atheist” means “immoral and selfish,” nor does it mean “rational and scientific,” but depending on who you say it around, it may carry either of those auxiliary meanings. Words are, when it comes right down to it, symbols of whatever objects or concepts they represent, and like any symbols (crosses, six-pointed stars, bright red ‘A’s, Confederate flags, swastikas, etc.), they take on meanings in the minds of the people beyond what they were intended to represent.

This process isn’t just a social one; it happens on a personal level, too. We all attach some connotations and additional meanings to words and other symbols based on our own personal experiences. I’m sure we all have this on some level; we’ve all had a private little chuckle when some otherwise innocuous word or phrase reminds us of some inside joke–and we’ve also all had that sinking feeling as we’ve tried to explain the joke to someone who isn’t familiar with our private connotations. I know one group of people who would likely snicker if I said “gravy pipe,” while others would just scratch their heads; I know another group of people who would find the phrase “I’ve got a boat” hilarious, but everyone else is going to be lost. I could explain, but even if you understood, you wouldn’t find it funny, and you almost certainly wouldn’t be reminded of my story next time you heard the word “gravy.” Words like “doppelganger” and “ubiquitous” are funny to me because of the significance I’ve attached to them through the personal process of connotation-building.

And this is where it’s kind of key to be aware of your audience. If you’re going to communicate effectively with your audience, you need to have some understanding of this process. In order to communicate effectively, I need to recognize that not everyone will burst into laughter if I say “mass media” or “ice dragon,” because not everyone shares the significance that I’ve privately attached to those phrases. Communication is only effective where the speaker and listener share a common language; this simple fact requires the speaker to know what connotations he and his audience are likely to share.

Fortunately or unfortunately, we’re not telepathic. What this means is that we cannot know with certainty how any given audience will interpret what we say. We might guess to a high degree of accuracy, depending on how well we know our audience, but there’s always going to be some uncertainty involved. That ambiguity of meaning is present in nearly every word, no matter how simple, no matter how apparently direct, because of the way we naturally attach and interpret meaning.

Here’s the example I generally like to use: take the word “DOG.” It’s a very simple word with a fairly straightforward definition, yet it’s going to be interpreted slightly differently by everyone who reads or hears it. I imagine that everyone, reading the word, has formed a particular picture in their heads of some particular dog from their own experience. Some people are associating the word with smells, sounds, feelings, other words, sensations, and events in their lives. Some small number of people might be thinking of a certain TV bounty hunter. The point is that the word, while defined specifically, includes a large amount of ambiguity.

Let’s constrain the ambiguity, then. Take the phrase “BLACK DOG.” Now, I’ve closed off some possibilities: people’s mental pictures are no longer of golden retrievers and dalmatians. I’ve closed off some possibilities that the term “DOG” leaves open, moving to the included subset of black dogs. There’s still ambiguity, though: is it a little basket-dwelling dog like Toto, or a big German Shepherd? Long hair or short hair? What kind of collar?

But there’s an added wrinkle here. When I put the word “BLACK” in there, I brought in the ambiguity associated with that word as well. Is the dog all black, or mostly black with some other colors, like a doberman? What shade of black are we talking about? Is it matte or glossy?

Then there’s further ambiguity arising from the specific word combination. When I say “BLACK DOG,” I may mean a dark-colored canine, or I may mean that “I gotta roll, can’t stand still, got a flamin’ heart, can’t get my fill.”

And that’s just connotational ambiguity; there’s definitional ambiguity as well. The word “period” is a great example of this. Definitionally, it means something very different to a geologist, an astronomer, a physicist, a historian, a geneticist, a chemist, a musician, an editor, a hockey player, and Margaret Simon. Connotationally, it’s going to mean something very different to ten-year-old Margaret Simon lagging behind her classmates and 25-year-old Margaret Simon on the first day of her Hawaiian honeymoon.

People, I think, are aware of these ambiguities on some level; the vast majority of verbal humor relies on them to some degree. Our language has built-in mechanisms to alleviate them. In speaking, we augment the words with gestures, inflections, and expressions. If I say “BLACK DOG” while pointing at a black dog, or at the radio playing a distinctive guitar riff, my meaning is more clear. The tone of my voice as I say “BLACK DOG” will likely give some indication as to my general (or specific) feelings about black dogs, or that black dog in particular. Writing lacks these abilities, but punctuation, capitalization, and font modification (such as bold and italics) are able to accomplish some of the same goals, and other ones besides. Whether I’m talking about the canine or the song would be immediately apparent in print, as the difference between “black dog” and “‘Black Dog.’” In both venues, one of the most common ways to combat linguistic ambiguity is to add more words. Whether it’s writing “black dog, a Labrador Retriever, with floppy ears and a cold nose and the nicest temperament…” or saying “black dog, that black dog, the one over there by the flagpole…” we use words (generally in conjunction with the other tools of the communication medium) to clarify other words. None of these methods, however, can completely eliminate the ambiguity in communication, and they all have the potential to add further ambiguity to the communication by adding information as well.

To kind of summarize all that in a slightly more entertaining way, look at the phrase “JANE LOVES DICK.” It might be a sincere assessment of Jane’s affection for Richard, or it might be a crude explanation of Jane’s affinity for male genitals. Or, depending on how you define terms, it might be both. Textually, we can change it to “Jane loves Dick” or “Jane loves dick,” and that largely clarifies the point. Verbally, we’d probably use wildly different gestures and inflections to talk about Jane’s office crush and her organ preference. And in either case, we can say something like “Jane–Jane Sniegowski, from Accounting–loves Dick Travers, the executive assistant. Mostly, she loves his dick.”

The net result of all this is that in any communication, there is some loss of information, of specificity, between the speaker and the listener (or the writer and the reader). I have some specific interpretation of the ideas I want to communicate, I approximate that with words (and often the approximation is very close), and my audience interprets those words through their own individual framework. Hopefully, the resulting idea in my audience’s mind bears a close resemblance to the idea in mine; the closer they are, the more effective the communication. But perfect communication–loss-free transmission of ideas from one mind to another–is impossible given how language and our brains work.

I don’t really think any of this is controversial; in fact, I think it’s generally pretty obvious. Any good writer or speaker knows to anticipate their audience’s reactions and interpretations, specifically because what the audience hears might be wildly different from what the communicator says (or is trying to say). Part of why I’ve been perhaps overly explanatory and meticulous in this post is that I know talking about language can get very quickly confusing, and I’m hoping to make my points particularly clear.

There’s one other wrinkle here, which is a function of the timeless nature of things like written communication. What I’m writing here in the Midwestern United States in the early 21st Century might look as foreign to the readers of the 25th as the works of Shakespeare look to us. I can feel fairly confident that my current audience–especially the people who I know well who read this blog–will understand what I’ve said here, but I have no way of accurately anticipating the interpretive frameworks of future audiences. I can imagine the word “dick” losing its bawdy definition sometime in the next fifty years, so it’ll end up with a little definition footnote when this gets printed in the Norton Anthology of Blogging Literature. Meanwhile, “ambiguity” will take on an ancillary definition referring to the sex organs of virtual prostitutes, so those same students will be snickering throughout this passage.

I can’t know what words will lose their current definitions and take on other meanings or fall out of language entirely, so I can’t knowledgeably write for that audience. If those future audiences are to understand what I’m trying to communicate, then they’re going to have to read my writing in the context of my current definitions, connotations, idioms, and culture. Of course, even footnotes can only take you so far–in many cases, it’s going to be like reading an in-joke that’s been explained to you; you’ll kind of get the idea, but not the impact. The greater the difference between the culture of the communicator and the culture of the audience, the more difficulty the audience will have in accurately and completely interpreting the communicator’s ideas.

Great problems can arise when we forget about all these factors that go into communication and interpretation. We might mistakenly assume that everyone is familiar with the idioms we use, and thus open ourselves up to criticism (e.g., “lipstick on a pig” in the 2008 election); we might mistakenly assume that no one else is familiar with the terms we use, and again open ourselves up to criticism (e.g., “macaca” in the 2006 election). We might misjudge our audience’s knowledge and either baffle or condescend to them. We might forget the individuality of interpretation and presume that all audience members interpret things the same way, or that our interpretation is precisely what the speaker meant and all others have missed the point. We would all do well to remember that communication is a complicated thing, and that those complexities do have real-world consequences.

The crazy train keeps a-rollin’

PZ, bless his heart, posted a bunch of the angry e-mails that Bill Donohue’s clueless masses sent his way following Crackergate. I haven’t been able to read through all of them (bring a sandwich and find a comfortable chair if you plan to), but one of them got me thinking.

Well, actually, lots of them got me thinking. Most of the thoughts were “these people are utterly clueless if they think [PZ would hesitate to insult tenets of Islam and Judaism / Insulting the Eucharist is a “hate crime” / PZ is somehow using University time or resources to blog / PZ is a math professor]” and “these people have no idea what precipitated this post.” Also, “[any God who could be threatened in cracker form / any God who would get his followers so worked up over a snack food] is clearly sillier than either the “body mutilation” or “wear these clothes” gods.”

But, back to the point, one post got me thinking about something specific:

I know you are smarter than most people and probably even God himself, if you even believe in God.

Besides the obvious (hey, check the blog header or the big red A in the sidebar for Dr. Myers’ belief-in-God status), this got me wondering about being “smarter than God.”

See, my first inclination (and a couple of commenters in the original thread said it as well) would be to say that I’m smarter than God. After all, I don’t believe that God exists, and obviously I’d be smarter than something that doesn’t exist.

But then I thought, if someone asked me “do you think you’re smarter than Batman?” I’d probably say no. And yet, my position on the existence of Batman is exactly the same as my position on the existence of God.

Which brings me to the realization that while I don’t think God or Batman exist, the fictional characters of Batman and God absolutely do exist. And those fictional characters have defined traits–in these cases, exceptional intelligence.

So, how do you respond to such a question? Do you answer in terms of reality, and declare yourself smarter than everything that doesn’t exist? Or do you answer in terms of character traits, and respond that the fictional character possesses the greater intellect?

I guess the best response is the one that clarifies the answer. “Obviously, I’d consider myself smarter than any nonexistent person, but as the character is defined, I think he’s probably more intelligent.” Or something.

And now I’m going to spend the next day or so running over these weird “one hand clapping” questions in the back of my mind–“Am I taller than Superman? Am I more muscular than the Hulk? Am I as observant as Hercule Poirot?”

I Ought to be a Woo: My Brain

This is the first post in what will probably be a long and rambling introspective series on how it’s a miracle* that I ended up as skeptical as I am. First up: how my brain works.

Yesterday I was listening to a “Doctor Who” audio drama on my iPod and thinking a little about continuity–not “Doctor Who” continuity, even…I think I was considering something about Kryptonite for some reason. Anyway, my years in various sorts of fandom have taught me that I’m very good at rationalizing things. Give me any continuity error, quibbling (“Han was bluffing Obi-Wan; obviously a parsec is a unit of distance. As he showed with the Death Star communicator, he’s not always good at bluffing”) or monumental (“Due to the traumatic regeneration, which took place on Earth instead of in the TARDIS, the Doctor took on some terrestrial biological characteristics for his Eighth Incarnation; he’s ‘half-human’ on the side of his mother–Mother Earth”) and I can smooth it out with some post-hocking. I don’t even have to try particularly hard, except when I start applying this kind of thinking outside of fiction.

Moreover, I’m pretty good at drawing connections between otherwise disparate things. It makes compare/contrast essays really easy, and I imagine it’s a large part of why I’m so fascinated with Joseph Campbell. Unfortunately, it doesn’t turn off. I find myself sometimes assigning thematic significance to things that happen in my life. I often hear new bands or see movies and begin describing them in terms of other bands or films–for instance, when I was riding with a friend yesterday, I described the band he was listening to as “Wall of Voodoo meets Tom Waits.” I then promptly felt like an asshole hipster and wanted to shoot myself. But that kind of thing happens all the time; I look at Xander from “Buffy” and can’t help thinking he must be Bruce Campbell’s secret love child, or I watch a preview for “P.S. I Love You” and think that it’s “Saw” as a love story. My brain is forever drawing connections.

As anyone who’s had any experience in the Skeptosphere already knows, post-hoc rationalization and connection-drawing are foundational to a variety of different types of magical thinking and woodom.

Post-hoc rationalizations require two things: first, an assumption of the truth, and second, an inconsistency between that assumption and observation. In fandom, that might look something like this:
Assumption: The “Star Wars” series is coherent and without contradiction.
Inconsistency: Princess Leia says in “Return of the Jedi” that she remembered her birth mother, who was “beautiful, kind but sad.” But we see in “Revenge of the Sith” that Padme Amidala dies in childbirth; how could Leia possibly remember that?
Post-Hoc Rationalization: Leia is Force-sensitive, and so her memories are influenced by telepathic impressions she received of her mother pre- and immediately post-natal.

See how it works? You start with your pre-existing worldview, and then iron out any inconsistencies with easy hand-waving explanations, ignoring totally the simpler, more parsimonious explanation that your initial assumptions may be flawed. For instance:
Assumption: God exists and answers prayers from His followers.
Inconsistency: Not all believers’ prayers get answered.
Post-Hoc Rationalization: They weren’t praying/believing right.

Or how about:
Assumption: Sylvia Browne has psychic powers.
Inconsistency: She told this lady that “the reason why you didn’t find him [her late husband’s body] is because he’s in water.” But the woman’s husband was a firefighter who died in the World Trade Center, not “in water.”
Post-Hoc Rationalization: Well, Sylvia was getting the water impression from the water used by the firefighters to put out the fire. The spirits, you see, they’re hard to hear, and maybe he didn’t die in the tower at all, or…

Did someone say World Trade Center? Why, I do believe that brings us to “drawing connections” (see how I drew that one? Not yet? Oh, well, wait a minute). Without the tendency to draw connections between otherwise unrelated things, there would be no conspiracy theories (get it now?), and alternative medicine types would have a much harder time hawking their wares. Connection drawing requires, in most cases, a great deal of cherry-picking, an affinity for analogies, and a tendency to inflate “connection” into “causal relationship.” It’s a boon for English majors, because it allows us to do things like literary interpretation and analysis, and pretend to have some degree of certainty.

As an example, I recently had to write a research paper on Bram Stoker’s “Dracula.” One of the ideas I had was that the vampires in Dracula (especially the Count himself) are 19th-century anti-Catholic caricatures. There are the easy bits, like the fact that Stoker was an Anglican and the whole blood-drinking thing (since Catholics believe in real, not symbolic, transubstantiation). Our protagonists are largely Church of England, and are rather blasé about their faith; Jonathan Harker thinks that the Eastern Europeans he encounters are silly and superstitious, and he tries to refuse the Rosary one woman gives him. The vampires are all cowed and harmed by Catholic iconography–the Host, crucifixes, etc.–which are used by our protagonists like magical spells. Only the vampires (and the “superstitious” characters) recognize any power in the icons; for everyone else, they are meaningless. This is a reference to the common characterization of Catholicism as witchcraft (and perhaps to Medieval Catholicism, where the illiterate laity incorporated those same Catholic icons in their old pagan magic rituals).

See, I could have built a pretty decent paper around that thesis, even though I recognize that it’s probably utter bullshit. I doubt that Stoker wrote his book as an anti-Catholic polemic, and if he did, then I doubt many of his readers would have gotten it. And to make the case, I have to ignore the fact that the most lauded character in the book is the obviously Catholic Abraham Van Helsing, or the various other details that don’t support (or actively contradict) my thesis. But I can cherry-pick details all day long, maybe do some quote-mining, and get a good essay out of it.

The same kind of thing is necessary for alternative medicine, astrology, or any other woo that posits a cause-effect relationship between otherwise unconnected objects. And conspiracy theories thrive on this. The phrase “do you think that’s [the deaths of the Apollo 1 astronauts/the government’s reluctance to release details about purported UFOs/the crash of Flight 93/the ‘expulsion’ of these ID advocates from academia/etc.] a coincidence?” is testament to that. I could offer up an example here, to match my term paper paragraph, but I’m sure you get the picture.

These are natural human drives. We are built to make connections; our ability to infer causal relationships and plan accordingly is one of the biggest survival advantages we have–it just doesn’t have a great deal of precision. And we crave explanations for things, any explanations, even ones that are pure guesswork, because that’s still more satisfying than not knowing.

When we combine these tendencies, to draw connections and iron out inconsistencies, we end up with neat, emotionally-satisfying narratives. In narrative storytelling, events must be connected or significant somehow. Everything fits together in a neat package, usually with some kind of moral center. There’s a climax and a resolution, and all the loose ends are tied up in a way that provides fulfillment and closure. We understand that kind of story; what we have a hard time grasping is reality, where things aren’t all connected and symbolic and leading to some emotionally-gratifying conclusion.

Maybe it’s hubris or shame or something that causes me to think that I’m somehow abnormal in having these connection-building and rationalizing drives in overdrive. Maybe I’m not that much different from anyone else. But it still seems amazing that I could become skeptical–heck, that anyone could become skeptical, with these cards stacked against them.

I think the first step is becoming aware of the common faults of human thought. In order to overcome the tendency toward erroneous thinking, you have to know that there’s something to overcome. It always comes back to education, doesn’t it?

That seems like enough rambling for now, but I’ll come back to this topic periodically.

Wonders of the world they wrote

So, the other night on a whim I read Ayn Rand’s Anthem. I’ve looked into Objectivism as a philosophy a few times, though never with any particular understanding of its appeal to some people, but this was my first serious foray into the fiction of the often verbose Ms. Rand.

Okay, that’s not entirely true. A couple of years back, I saw a high school production of her play, “The Night of January 16th.” I was not particularly impressed with the script, though the actual performance was pretty good.

Anyway, Ayn has become something of a running joke among my peers. There was a period of time where a copy of The Fountainhead was rolling around the backseat of my friend’s car, and we’d occasionally read random passages of pretentious dialogue or florid descriptions out of it for a laugh.

Anthem, owing to its short length, avoids many of those problems; the story is so short that the printers have done everything possible to pad it out–the font size is enormous (and changes about twenty pages in), the columns are narrow, the leading is huge, and there’s an extra space after every paragraph. As if that weren’t enough, a second version of the novella is included, photocopied from an earlier manuscript, complete with the author’s handwritten editing. They really wanted to justify charging $7.99 for a story that weighs in at 90 pages, padded out.

About 15 pages in, my overwhelming impression was that I preferred this story when it was “Harrison Bergeron.” Halfway through, I realized that Rush’s “2112” concept album was a closer adaptation of the story than their song “Anthem.”

And before I knew it, the book had devolved into a chapter or two of soapbox lecturing, and then it was over. It wasn’t a bad story, mind you. And I’m always a sucker for a good dystopia story. But there were some significant problems with it, which cast harsh light on Objectivism as any kind of viable philosophy.

First, and perhaps least relevant to the philosophy, is just how blatantly anti-feminist the story is. The only female character follows the protagonist around like a lost puppy looking for guidance. I understand that Ms. Rand was a bit of a sub, but this is ridiculous. She barely had a personality; she was purely an object.

Our heroic protagonist, Prometheus (né Equality 7-2521), was the pure, unfiltered Randian hero: brilliant, willful, and instantly skillful at whatever he puts his hands to. In his post-apocalyptic quasi-medieval society, he and another lowly street-sweeper manage to find an abandoned subway tunnel. Through pure trial-and-error experimentation with the remnants of 20th century technology, he rediscovers steel and electricity, and he singlehandedly reinvents the lightbulb. Ultimately, he decides to share this discovery with the scholars, but is refused and threatened with death. He flees into the woods, where he proves to be a fantastic hunter (his first flung stone kills a bird, and he’s able to fashion a bow and arrows–and use them with great skill–with no apparent prior knowledge or training). At some point, the one girl he knew in the city shows up, having followed him. Eventually, they come across an abandoned centuries-old cottage in the mountains, with its own generator (which our hero is able to repair). He reads voraciously from the cabin’s apparently prodigious library, then comes up with names for himself (Prometheus) and his bride (Gaea). He decides that community and altruism are evils, designed to keep folks like him from achieving their potential, and leading to the stagnation of society and the stifling of independent thought. He determines that he needs no one else, and so he will return to town eventually to liberate the other free-minded ones like himself.

Yikes, where to begin? I’ll leave aside the evolutionary benefits of altruism at this point; they’re purely incidental to the problems with Prometheus’s reasoning. Isolation and independence are all well and good for him, but they don’t translate into a viable real-world option. See, Prometheus is only able to become self-sufficient because he is the luckiest man on the damn planet. He literally falls into advanced technology that, for whatever reason, still kind of works, and then fumbles his way through several centuries’ worth of scientific progress. He manages to leave town with little problem, despite his violation of various serious laws. He manages never to eat anything poisonous while living in the wilderness for a fairly extended period of time, bumps into his girlfriend in the vast woods, and finds a pristine, untouched, undamaged house from centuries earlier. This might make for decent fiction, but you can’t count on such wondrous luck in the real world, and that’s a nail in the coffin for Objectivism as a viable way of life. Sure, selfishness works when you’ve got everything else going for you.

You know, except when it doesn’t. Somehow, in his rant at the end of the book, Prometheus fails to recognize that it’s dependence on others and their altruism which got him to this point. He found the entrance to the subway with his friend, International 4-8818; he needed his friend’s help to open it, and had to trust him to put himself at risk in order to keep the finding (and their subsequent experimentations) secret. He needed Liberty 5-3000 (who he’d later rename Gaea) to keep their conversations and interactions secret. He needed the humans of the past to share their innovations with the rest of the world, so that he might rediscover them. And, you know, if Gaea’s going to fulfill her role in recreating an individualistic human society, he’s going to need her around too. Selfishness only works when you’re actually self-sufficient, which no one, not even our dear Prometheus, is.

Anthem, to an astute reader, is more an indictment of Objectivism than a promotion of it. Prometheus’s speech at the end reeks of sour grapes and undeserved feelings of superiority, and it utterly ignores how much he’s relied on other people to get anywhere near self-sufficiency. And then, after decrying altruism and denouncing society, he resolves to return to the city to liberate others like him and bring them to his mountain home. Last I checked, that sort of action qualified as altruistic, and a collection of people living together was the necessary component of a society.

It really demonstrates the problem with Objectivists: they claim that altruism is a general negative and that people ought to be able to get by on their own skills and merits. Then they run headlong into reality, in which people actually do need one another. They’re ultimately put into a position of perpetual complaining, that they’re better than society and they don’t need other people and altruism is bad, but not actually being able to do anything about it. Objectivism is a philosophy of inevitable cantankerousness.

This just in: Bush’s favorite song is “Rock the Casbah”

So, does anyone remember back in 2006, when one of the big news stories was that Bush was reading Albert Camus’s The Stranger while on vacation? At the time, I thought it was funny merely for the insinuation that our beloved Commander-in-Chief could read, let alone that he could read chapter books. After all, he reportedly eschewed his one-page daily briefings on national security, which led to missing that one way back which said “Bin Laden determined to strike in U.S.,” and as I recall led to some sort of tragic event.

But I realized recently that there was another layer of humor to the story, one which, I have to imagine, originates somewhere within Bush’s cabinet, whether it’s with whoever recommended he read the book, or whoever decided to publicize it. It’s someone with the same sick sense of humor that would cause a man to crawl under tables making fun of his inability to find WMDs, the same disregard for tact that would cause a man to jokingly sing about bombing Iran, or to suggest that he was going to give an IED as a present.

You see, The Stranger is about a man named Meursault (literally, “death-leap,” if I remember correctly) who kills an Arab for no real reason, and goes through his trial, imprisonment, and execution feeling absolutely no remorse, and really not caring about anything at all.

Someone played a joke, somewhere along the line, and I for one don’t particularly find it funny. It hits a little too close to home.

Especially if your home is in Baghdad.

Where “The Secret” Ends

The Little Blue Engine
By Shel Silverstein, from Where the Sidewalk Ends

The little blue engine looked up at the hill.
His light was weak, his whistle was shrill.
He was tired and small, and the hill was tall,
And his face blushed red as he softly said,
“I think I can, I think I can, I think I can.”

So he started up with a chug and a strain,
And he puffed and pulled with might and main.
And slowly he climbed, a foot at a time,
And his engine coughed as he whispered soft,
“I think I can, I think I can, I think I can.”

With a squeak and a creak and a toot and a sigh,
With an extra hope and an extra try,
He would not stop — now he neared the top —
And strong and proud he cried out loud,
“I think I can, I think I can, I think I can!”

He was almost there, when — CRASH! SMASH! BASH!
He slid down and mashed into engine hash
On the rocks below… which goes to show
If the track is tough and the hill is rough,
THINKING you can just ain’t enough!