Flush the Movement

Natalie Reed’s most recent post is a must-read. Please go read it.

I’m writing this here because it’d be derailing if I wrote it in the comments there. So, yeah.

You may recall that I’ve previously expressed some of my problems with movements, and even with the very notion of a “movement” inasmuch as it implies directed motion toward some single common goal. There are multiple goals within atheism and skepticism, and there are also multiple myopic people trying to claim that some of those goals are illegitimate.

But then, I look at the arguments I’ve had with asshats on Twitter, I look at my own beefs with the “movement,” I look at the concerns about being “outed” that led to my switch to WordPress and my attempt to build some kind of retroactive anonymity, and I read Natalie’s post and feel like a giant fucking idiot. I feel like the things I’ve seen as problems, the worries that have kept me up nights and sent me scrambling to lock down my blog or watch what I say in different venues, are problems that people without my tremendous level of privilege dream of having.

Being “outed” to me means worrying about the integrity and stability of my job for a whopping couple of years until increased job security sets in. It means worrying about discomfort in a close-knit community that I already have very little contact with outside of idle chit-chat. It means worrying about awkward conversations with some family members about matters that, ultimately, don’t affect anyone’s lives because they’re centered around entities that don’t exist. It doesn’t mean being attacked for my appearance, it doesn’t mean losing my house or possessions, it doesn’t mean being ostracized for an integral part of my identity.

I’m lucky. I’m incredibly lucky. I’m playing the game of life on Easy with the Konami Code.

And that’s a hard lesson to learn, that by virtue of luck, you have an easier time than others. It’s far easier to buy into the just-world fallacy and believe that, if people have it rough, then it’s because they deserve it, or because they’ve brought it on themselves, or because it’s just the way things are. It’s hard to realize that you’ve benefited from a system that inhibits others. It’s hard to realize that the world is more complicated than “people get what they earn/deserve.”

But it also seems like it’d be a basic lesson learned by anyone applying skepticism to reality. A lesson I’ve learned, time and time again, is that reality is generally more complicated than you think. Reality is fractal. Zoom out or in, and there’s always some new level of detail, some new perspective, some new complication, that you haven’t accounted for. It’s part of why a scientific understanding of the universe is so full of wonder. Anti-science types will criticize science for its “reductionist” stance, “reducing” everything to mere aggregations of particles. But that’s not it at all, because those aggregations of particles are anything but “mere.” At every level of magnification there is something new and amazing to be fascinated by, something grand and beautiful to admire. Whether examining the patterns of cells in a tissue sample or the patterns of whorls in a fingerprint or the pattern of mineral deposits on a continent or the pattern of stars in a galaxy, there is fascination to be had and wonder to be felt and beauty to be seen. By closing yourself off to those other perspectives, your worldview lacks detail and nuance, lacks those sources of beauty and awe and interest.

But it appears that not all skeptics, not all atheists, not all science enthusiasts learn this lesson. I’ve long suspected that some people arrive at atheism or skepticism out of some kind of contrarianism. They see the silly shit that some people believe and reject it. They reject religion and Bigfoot and UFOs because those are the beliefs of “The Man,” of the majority, of the establishment. Man, they reject the establishment. They’ve seen the light, man. Take that far enough, and they reject the “establishment” account of what happened on 9/11 or “the man”‘s opinion that you have to pay taxes, and you get the Zeitgeist crowd. Take that in a different direction, without the tempering influence of science enthusiasm, and they might reject the “establishment” notions of medicine like the germ theory, and become like Bill Maher. Sprinkle in a bit of that black-and-white overly-simplistic worldview, and you get libertarians, who reject the idea that the system might be unfair, that life and civilization might be more complex than what’s portrayed in an Ayn Rand novel. And focus that rejection of “the man” and the “establishment” on the notion of “political correctness,” and suddenly you have MRAs and every other bunch of “I’m so persecuted” bigots that roam these here Internets (and elsewhere).

And friend, I’m not sure that there’s anything that’s easier to believe than that you’re a brave hero fighting against a grand conspiracy that is behind all of your problems, and that everyone who disagrees is either in on the conspiracy, or duped by it. It’s the DeAngelis-Novella Postulates, the underlying egotist worldview behind all conspiracy theories. I am the enlightened hero, my enemies are powerful and legion, and everyone else is a dupe who just hasn’t seen the light like I have.

That’s what I don’t understand about the people ranting over how they’ve been “silenced” by the “FTBullies,” or that “feminists” are sowing “misandry,” or that the “atheist scientists” are “expelling” Christians, or that “the Illuminati” are doing whatever nefarious things they like to do. The worldview is ultimately so simplistic that it falls apart on comparison with the complexities of reality. And as skeptics, isn’t that precisely the sort of thing we train ourselves and pride ourselves on debunking?

I guess that’s one more privilege afforded the majority: the ability to believe a comforting, simplistic, ego-stroking version of reality, to perceive the world through the tinted glasses of a persecuted minority while being neither, and to claim heroism while tilting at nonexistent windmills.

I realize this is all armchair psychology, which I’m doing from an office chair without a background in psychology. It’s almost certainly true that the real situation isn’t nearly as simple as what I’ve laid out, and that the MRAs and libertarians and Zeitgeistians and so forth that infest the atheist and skeptical “movements” are the result of far more diverse factors.

But I realize that, because I realize that the world is more complicated than “us” and “them,” than “good” and “evil,” than “baboons” and “slimepitters,” than “FTBullies” and “the silenced,” than “the Conspiracy” and “the Army of Light” and “the Sheeple.”

I just wish that were a more generally-understood lesson.

On Labeling

I keep running into an issue with labels. It wasn’t long ago that I revised my own from “agnostic” to the more accurate and more useful “agnostic atheist” (in a nutshell, anyway–but this is a topic for a future post). The problem I have is that the relevant parts of my beliefs didn’t change, only what I called myself did. I didn’t have a belief in any gods when I called myself an agnostic, and I don’t have any belief in any gods now that I call myself an atheist. From any objective standpoint, I was an atheist the whole time.

And this is the substance of the problem: the dissonance between what a person calls himself or herself, and what categories a person objectively falls into. These labels are frequently different, and frequently result in various confusions and complications.

On one hand, I think we’re inclined to take people at their word with regard to what their personal labels are. It’s a consequence of having so many labels that center around traits that can only be assessed subjectively. I can’t look into another person’s mind to know what they believe or who they’re attracted to or what their political beliefs really are, or even how they define the labels that relate to those arenas. We can only rely on their self-reporting. So, we have little choice but to accept their terminology for themselves.

But…there are objective definitions for some of these terms, and we can, based on a person’s self-reporting of their beliefs, see that an objectively-defined label–which may or may not be the one they apply to themselves–applies to them.

I fear I’m being obtuse in my generality, so here’s an example: Carl Sagan described himself as an agnostic. He resisted the term “atheist,” and clearly gave quite a bit of thought to the problem of how you define “god”–obviously, the “god” of Spinoza and Einstein, which is simply a term applied to the laws of the universe, exists, but the interventionist god of the creationists is far less likely. So Sagan professed agnosticism apparently in order to underscore the point that he assessed the question of each god’s existence individually.

On the other hand, he also seemed to define “atheist” and “agnostic” in unconventional ways–or perhaps in those days before a decent atheist movement, the terms just had different connotations or less specific definitions. Sagan said “An agnostic is somebody who doesn’t believe in something until there is evidence for it, so I’m agnostic,” and “An atheist is someone who knows there is no God.”

Now, I love Carl, but it seems to me that he’s got the definitions of these terms inside-out. “Agnostic,” as the root implies, has to do with what one claims to know–specifically, it’s used to describe people who claim not to know if there are gods. Atheist, on the other hand, is a stance on belief–specifically the lack of belief in gods.

So, if we’re to go with the definitions of terms as generally agreed upon, as well as Carl’s own self-reported lack of belief in gods and adherence to the null hypothesis with regard to supernatural god claims, then it’s clear that Carl is an atheist. Certainly an agnostic atheist–one who lacks belief in gods but does not claim to know that there are no gods–but an atheist nonetheless.

The dilemma with regard to Sagan is relatively easy to resolve; “agnostic” and “atheist” are not mutually exclusive terms, and the term one chooses to emphasize is certainly a matter of personal discretion. In the case of any self-chosen label, the pigeon-holes we voluntarily enter into are almost certainly not all of the pigeon-holes into which we could be placed. I describe myself as an atheist and a skeptic, but it would not be incorrect to call me an agnostic, a pearlist, a secularist, an empiricist, and so forth. What I choose to call myself reflects my priorities and my understanding of the relevant terminology, but it doesn’t necessarily exclude other terms.

The more difficult problems come when people adopt labels that, by any objective measure, do not fit them, or exclude labels that do. We see Sagan doing the latter in the quote above, eschewing the term “atheist” based on what we’d recognize now as a mistaken definition. The former is perhaps even more common–consider how 9/11 Truthers, Global Warming and AIDS denialists, and Creationists have all attempted to usurp the word “skeptic,” even though none of their methods even approach skepticism.

A related danger arises when groups try to co-opt people who, due to a lack of consistent or unambiguous self-reporting (or of unambiguous reporting from reliable outside sources), can’t objectively be said to belong to those groups. We see this when Christians try to claim that the founding fathers were all devout Christian men, ignoring the reams of evidence that many of them were deists or otherwise unorthodox. It’s not just the fundies who do this, though; there was a poster at my college which cited Eleanor Roosevelt and Errol Flynn among its list of famous homosexual and bisexual people, despite there being inconsistent and inconclusive evidence to determine either of their sexualities. The same is true when my fellow atheists attempt to claim Abraham Lincoln and Thomas Paine (among others), despite the ambiguity in their self-described beliefs. I think that we, especially those of us who pride ourselves on reason and evidence, must be careful with these labels, lest we become hypocrites or appear sloppy in our application and definition of terms. These terms have value only inasmuch as we use them consistently.

The matter of people adopting terms which clearly do not apply to them, however, presents a more familiar problem. It seems easy and safe enough to say something like “you call yourself an atheist, yet you say you believe in God. Those can’t both be true,” but situations rarely seem to be so cut-and-dried. Instead, what we end up with are ambiguities and apparent contradictions, and a need to be very accurate and very precise (and very conservative) in our definition of terms. Otherwise, it’s a very short slippery slope to No True Scotsman territory.

Case in point, the word “Christian.” It’s a term with an ambiguous definition, which (as far as I can tell) cannot be resolved without delving into doctrinal disputes. Even a definition as simple as “a Christian is someone who believes Jesus was the son of God” runs afoul of Trinitarian semantics, where Jesus is not the son, but God himself. A broader definition like “one who follows the teachings of Jesus” ends up including people who don’t consider themselves Christians (for instance, Ben Franklin, who enumerated Jesus among other historical philosophers) and potentially excluding people who don’t meet the unclear standard of what constitutes “following,” and so forth.

Which is why there are so many denominations of Christianity who claim that none of the other denominations are “True Christians.” For many Protestants, the definition of “True Christian” excludes all Catholics, and vice versa; and for quite a lot of Christians, the definition of the term excludes Mormons, who are also Bible-believers that accept Jesus’s divinity.

When we start down the path of denying people the terms that they adopt for themselves, we must be very careful that we do not overstep the bounds of objectivity and strict definitions. Clear contradictions are easy enough to spot and call out; where terms are clearly defined and beliefs or traits are clearly expressed, we may indeed be able to say “you call yourself bisexual, but you say you’re only attracted to the opposite sex. Those can’t both be true.” But where definitions are less clear, or where the apparent contradictions are more circumstantially represented, objectivity can quickly be thrown out the window.

I don’t really have a solution for this problem, except that we should recognize that our ability to objectively label people is severely limited by the definitions we ascribe to our labels and the information that our subjects report themselves. So long as we are careful about respecting those boundaries, we should remain well within the guidelines determined by reason and evidence. Any judgments we make and labels we apply should be done as carefully and conservatively as possible.

My reasons for laying all this out should become clear with my next big post. In the meantime, feel free to add to this discussion in the comments.

On Interpretation

I thought I’d talked about this before on the blog, but apparently I’ve managed to go this long without really tackling the issue of interpretation. Consequently, you might notice some of the themes and points in this post getting repeated in my next big article, since writing that was what alerted me to my omission.

I don’t generally like absolute statements, since they so rarely are, but I think this one works: there is no reading without interpretation. In fact, I could go a step further and say there’s no communication without interpretation, but reading is the most obvious and pertinent example.

Each person is different, the product of a unique set of circumstances, experiences, knowledge, and so forth. Consequently, each person approaches each and every text with different baggage, and a different framework. When they read the text, it gets filtered through and informed by those experiences, that knowledge, and that framework. This process influences the way the reader understands the text.

Gah, that’s way too general. Let’s try this again: I saw the first couple of Harry Potter movies before I started reading the books; consequently, I came to the books with the knowledge of the movie cast, and I interpreted the books through that framework–not intentionally, mind you, it’s just that the images the text produced in my mind included Daniel Radcliffe as Harry and Alan Rickman as Professor Snape. However, I plowed through the series faster than the moviemakers could keep up. The descriptions in the books (and the illustrations) informed my mental images of other characters, so when I saw “Order of the Phoenix,” I found the casting decision for Dolores Umbridge quite at odds with my interpretation of the character, who was less frou-frou and more frog-frog.

We’ve all faced this kind of thing: our prior experiences inform our future interpretations. I imagine most people picking up an Ian Fleming novel have a particular Bond playing the role in their mental movies. There was quite a tizzy over the character designs in “The Hitchhiker’s Guide to the Galaxy” movie, from Marvin’s stature and shape to the odd placement of Zaphod’s second head, to Ford Prefect’s skin color. I hear Kevin Conroy’s voice when I read Batman dialogue.

This process is a subset of the larger linguistic process of accumulating connotation. As King of Ferrets fairly recently noted, words are more than just their definitions; they gather additional meaning through the accumulation of connotations–auxiliary meaning attached to the word through the forces of history and experience. Often, these connotations are widespread. For example, check out how the word “Socialist” got thrown around during the election. There’s nothing in the definition of the word that makes it the damning insult it’s supposed to be, but thanks to the Cold War and the USSR, people interpret the word to mean more than just “someone who believes in collective ownership of the means of production.” Nothing about “natural” means “good and healthy,” yet that’s how it’s perceived; nothing about “atheist” means “immoral and selfish,” nor does it mean “rational and scientific,” but depending on who you say it around, it may carry either of those auxiliary meanings. Words are, when it comes right down to it, symbols of whatever objects or concepts they represent, and like any symbols (crosses, six-pointed stars, bright red ‘A’s, Confederate flags, swastikas, etc.), they take on meanings in the minds of the people beyond what they were intended to represent.

This process isn’t just a social one; it happens on a personal level, too. We all attach some connotations and additional meanings to words and other symbols based on our own personal experiences. I’m sure we all have this on some level; we’ve all had a private little chuckle when some otherwise innocuous word or phrase reminds us of some inside joke–and we’ve also all had that sinking feeling as we’ve tried to explain the joke to someone who isn’t familiar with our private connotations. I know one group of people who would likely snicker if I said “gravy pipe,” while others would just scratch their heads; I know another group of people who would find the phrase “I’ve got a boat” hilarious, but everyone else is going to be lost. I could explain, but even if you understood, you wouldn’t find it funny, and you almost certainly wouldn’t be reminded of my story next time you heard the word “gravy.” Words like “doppelganger” and “ubiquitous” are funny to me because of the significance I’ve attached to them through the personal process of connotation-building.

And this is where it’s key to be aware of your audience. If you’re going to communicate effectively, you need some understanding of this process: I need to recognize that not everyone will burst into laughter if I say “mass media” or “ice dragon,” because not everyone shares the significance that I’ve privately attached to those phrases. Communication is only effective where the speaker and listener share a common language, and that simple fact requires the speaker to know which connotations he and his audience are likely to share.

Fortunately or unfortunately, we’re not telepathic. What this means is that we cannot know with certainty how any given audience will interpret what we say. We might guess to a high degree of accuracy, depending on how well we know our audience, but there’s always going to be some uncertainty involved. That ambiguity of meaning is present in nearly every word, no matter how simple, no matter how apparently direct, because of the way we naturally attach and interpret meaning.

Here’s the example I generally like to use: take the word “DOG.” It’s a very simple word with a fairly straightforward definition, yet it’s going to be interpreted slightly differently by everyone who reads or hears it. I imagine that everyone, reading the word, has formed a particular picture in their heads of some particular dog from their own experience. Some people are associating the word with smells, sounds, feelings, other words, sensations, and events in their lives. Some small number of people might be thinking of a certain TV bounty hunter. The point is that the word, while defined specifically, includes a large amount of ambiguity.

Let’s constrain the ambiguity, then. Take the phrase “BLACK DOG.” Now, I’ve closed off some possibilities: people’s mental pictures are no longer of golden retrievers and dalmatians. I’ve closed off some possibilities that the term “DOG” leaves open, moving to the included subset of black dogs. There’s still ambiguity, though: is it a little basket-dwelling dog like Toto, or a big German Shepherd? Long hair or short hair? What kind of collar?

But there’s an added wrinkle here. When I put the word “BLACK” in there, I brought in the ambiguity associated with that word as well. Is the dog all black, or mostly black with some other colors, like a doberman? What shade of black are we talking about? Is it matte or glossy?

Then there’s further ambiguity arising from the specific word combination. When I say “BLACK DOG,” I may mean a dark-colored canine, or I may mean that “I gotta roll, can’t stand still, got a flamin’ heart, can’t get my fill.”

And that’s just connotational ambiguity; there’s definitional ambiguity as well. The word “period” is a great example of this. Definitionally, it means something very different to a geologist, an astronomer, a physicist, a historian, a geneticist, a chemist, a musician, an editor, a hockey player, and Margaret Simon. Connotationally, it’s going to mean something very different to ten-year-old Margaret Simon lagging behind her classmates and 25-year-old Margaret Simon on the first day of her Hawaiian honeymoon.

People, I think, are aware of these ambiguities on some level; the vast majority of verbal humor relies on them to some degree. Our language has built-in mechanisms to alleviate this ambiguity. In speaking, we augment the words with gestures, inflections, and expressions. If I say “BLACK DOG” while pointing at a black dog, or at the radio playing a distinctive guitar riff, my meaning is clearer. The tone of my voice as I say “BLACK DOG” will likely give some indication as to my general (or specific) feelings about black dogs, or that black dog in particular. Writing lacks these abilities, but punctuation, capitalization, and font modification (such as bold and italics) are able to accomplish some of the same goals, and other ones besides. Whether I’m talking about the canine or the song would be immediately apparent in print, as the difference between “black dog” and “‘Black Dog.’” In both venues, one of the most common ways to combat linguistic ambiguity is to add more words. Whether it’s writing “black dog, a Labrador Retriever, with floppy ears and a cold nose and the nicest temperament…” or saying “black dog, that black dog, the one over there by the flagpole…” we use words (generally in conjunction with the other tools of the communication medium) to clarify other words. None of these methods, however, can completely eliminate the ambiguity in communication, and they all have the potential to add further ambiguity by adding information as well.

To kind of summarize all that in a slightly more entertaining way, look at the phrase “JANE LOVES DICK.” It might be a sincere assessment of Jane’s affection for Richard, or it might be a crude explanation of Jane’s affinity for male genitals. Or, depending on how you define terms, it might be both. Textually, we can change it to “Jane loves Dick” or “Jane loves dick,” and that largely clarifies the point. Verbally, we’d probably use wildly different gestures and inflections to talk about Jane’s office crush and her organ preference. And in either case, we can say something like “Jane–Jane Sniegowski, from Accounting–loves Dick Travers, the executive assistant. Mostly, she loves his dick.”

The net result of all this is that in any communication, there is some loss of information, of specificity, between the speaker and the listener (or the writer and the reader). I have some specific interpretation of the ideas I want to communicate, I approximate that with words (and often the approximation is very close), and my audience interprets those words through their own individual framework. Hopefully, the resulting idea in my audience’s mind bears a close resemblance to the idea in mine; the closer they are, the more effective the communication. But perfect communication–loss-free transmission of ideas from one mind to another–is impossible given how language and our brains work.

I don’t really think any of this is controversial; in fact, I think it’s generally pretty obvious. Any good writer or speaker knows to anticipate their audience’s reactions and interpretations, specifically because what the audience hears might be wildly different from what the communicator says (or is trying to say). Part of why I’ve been perhaps overly explanatory and meticulous in this post is that I know talking about language can get very quickly confusing, and I’m hoping to make my points particularly clear.

There’s one other wrinkle here, which is a function of the timeless nature of things like written communication. What I’m writing here in the Midwestern United States in the early 21st Century might look as foreign to the readers of the 25th as the works of Shakespeare look to us. I can feel fairly confident that my current audience–especially the people who I know well who read this blog–will understand what I’ve said here, but I have no way of accurately anticipating the interpretive frameworks of future audiences. I can imagine the word “dick” losing its bawdy definition sometime in the next fifty years, so it’ll end up with a little definition footnote when this gets printed in the Norton Anthology of Blogging Literature. Meanwhile, “ambiguity” will take on an ancillary definition referring to the sex organs of virtual prostitutes, so those same students will be snickering throughout this passage.

I can’t know what words will lose their current definitions and take on other meanings or fall out of language entirely, so I can’t knowledgeably write for that audience. If those future audiences are to understand what I’m trying to communicate, then they’re going to have to read my writing in the context of my current definitions, connotations, idioms, and culture. Of course, even footnotes can only take you so far–in many cases, it’s going to be like reading an in-joke that’s been explained to you; you’ll kind of get the idea, but not the impact. The greater the difference between the culture of the communicator and the culture of the audience, the more difficulty the audience will have in accurately and completely interpreting the communicator’s ideas.

Great problems can arise when we forget about all these factors that go into communication and interpretation. We might mistakenly assume that everyone is familiar with the idioms we use, and thus open ourselves up to criticism (e.g., “lipstick on a pig” in the 2008 election); we might mistakenly assume that no one else is familiar with the terms we use, and again open ourselves up to criticism (e.g., “macaca” in the 2006 election). We might misjudge our audience’s knowledge and either baffle or condescend to them. We might forget the individuality of interpretation and presume that all audience members interpret things the same way, or that our interpretation is precisely what the speaker meant and all others have missed the point. We would all do well to remember that communication is a complicated thing, and that those complexities do have real-world consequences.

The crazy train keeps a-rollin’

PZ, bless his heart, posted a bunch of the angry e-mails that Bill Donohue’s clueless masses sent his way following Crackergate. I haven’t been able to read through all of them (bring a sandwich and find a comfortable chair if you plan to), but one of them got me thinking.

Well, actually, lots of them got me thinking. Most of the thoughts were “these people are utterly clueless if they think [PZ would hesitate to insult tenets of Islam and Judaism / Insulting the Eucharist is a “hate crime” / PZ is somehow using University time or resources to blog / PZ is a math professor]” and “these people have no idea what precipitated this post.” Also, “[any God who could be threatened in cracker form / any God who would get his followers so worked up over a snack food] is clearly sillier than either the “body mutilation” or “wear these clothes” gods.”

But, back to the point, one post got me thinking about something specific:

I know you are smarter than most people and probably even God himself, if you even believe in God.

Besides the obvious (hey, check the blog header or the big red A in the sidebar for Dr. Myers’ belief-in-God status), this got me wondering about being “smarter than God.”

See, my first inclination (and a couple of commenters in the original thread said it as well) would be to say that I’m smarter than God. After all, I don’t believe that God exists, and obviously I’d be smarter than something that doesn’t exist.

But then I thought, if someone asked me “do you think you’re smarter than Batman?” I’d probably say no. And yet, my position on the existence of Batman is exactly the same as my position on the existence of God.

Which brings me to the realization that while I don’t think God or Batman exist, the fictional characters of Batman and God absolutely do exist. And those fictional characters have defined traits–in these cases, exceptional intelligence.

So, how do you respond to such a question? Do you answer in terms of reality, and declare yourself smarter than everything that doesn’t exist? Or do you answer in terms of character traits, and respond that the fictional character possesses the greater intellect?

I guess the best response is the one that clarifies the answer. “Obviously, I’d consider myself smarter than any nonexistent person, but as the character is defined, I think he’s probably more intelligent.” Or something.

And now I’m going to spend the next day or so running over these weird “one hand clapping” questions in the back of my mind–“Am I taller than Superman? Am I more muscular than the Hulk? Am I as observant as Hercule Poirot?”

I Ought to be a Woo: My Brain

This is the first post in what will probably be a long and rambling introspective series on how it’s a miracle* that I ended up as skeptical as I am. First up: how my brain works.

Yesterday I was listening to a “Doctor Who” audio drama on my iPod and thinking a little about continuity–not “Doctor Who” continuity, even…I think I was considering something about Kryptonite for some reason. Anyway, my years in various sorts of fandom have taught me that I’m very good at rationalizing things. Give me any continuity error, quibbling (“Han was bluffing Obi-Wan; obviously a parsec is a unit of distance. As he showed with the Death Star communicator, he’s not always good at bluffing”) or monumental (“Due to the traumatic regeneration, which took place on Earth instead of in the TARDIS, the Doctor took on some terrestrial biological characteristics for his Eighth Incarnation; he’s ‘half-human’ on the side of his mother–Mother Earth”) and I can smooth it out with some post-hocking. I don’t even have to try particularly hard, except when I start applying this kind of thinking outside of fiction.

Moreover, I’m pretty good at drawing connections between otherwise disparate things. It makes compare/contrast essays really easy, and I imagine it’s a large part of why I’m so fascinated with Joseph Campbell. Unfortunately, it doesn’t turn off. I find myself sometimes assigning thematic significance to things that happen in my life. I often hear new bands or see movies and begin describing them in terms of other bands or films–for instance, when I was riding with a friend yesterday, I described the band he was listening to as “Wall of Voodoo meets Tom Waits.” I then promptly felt like an asshole hipster and wanted to shoot myself. But that kind of thing happens all the time; I look at Xander from “Buffy” and can’t help thinking he must be Bruce Campbell’s secret love child, or I watch a preview for “P.S. I Love You” and think that it’s “Saw” as a love story. My brain is forever drawing connections.

As anyone who’s had any experience in the Skeptosphere already knows, post-hoc rationalization and connection-drawing are foundational to a variety of different types of magical thinking and woodom.

Post-hoc rationalizations require two things: first, an assumption of the truth, and second, an inconsistency between that assumption and observation. In fandom, that might look something like this:
Assumption: The “Star Wars” series is coherent and without contradiction.
Inconsistency: Princess Leia says in “Return of the Jedi” that she remembered her birth mother, who was “beautiful, kind but sad.” But we see in “Revenge of the Sith” that Padme Amidala dies in childbirth; how could Leia possibly remember that?
Post-Hoc Rationalization: Leia is Force-sensitive, and so her memories are influenced by telepathic impressions she received of her mother pre- and immediately post-natal.

See how it works? You start with your pre-existing worldview, and then iron out any inconsistencies with easy hand-waving explanations, ignoring totally the simpler, more parsimonious explanation that your initial assumptions may be flawed. For instance:
Assumption: God exists and answers prayers from His followers.
Inconsistency: Not all believers’ prayers get answered.
Post-Hoc Rationalization: They weren’t praying/believing right.

Or how about:
Assumption: Sylvia Browne has psychic powers.
Inconsistency: She told this lady that “the reason why you didn’t find him [her late husband’s body] is because he’s in water.” But the woman’s husband was a firefighter who died in the World Trade Center, not “in water.”
Post-Hoc Rationalization: Well, Sylvia was getting the water impression from the water used by the firefighters to put out the fire. The spirits, you see, they’re hard to hear, and maybe he didn’t die in the tower at all, or…

Did someone say World Trade Center? Why, I do believe that brings us to “drawing connections” (see how I drew that one? Not yet? Oh, well, wait a minute). Without the tendency to draw connections between otherwise unrelated things, there would be no conspiracy theories (get it now?), and alternative medicine types would have a much harder time hawking their wares. Connection drawing requires, in most cases, a great deal of cherry-picking, an affinity for analogies, and a tendency to inflate “connection” into “causal relationship.” It’s a boon for English majors, because it allows us to do things like literary interpretation and analysis, and pretend to have some degree of certainty.

As an example, I recently had to write a research paper on Bram Stoker’s “Dracula.” One of the ideas I had was that the vampires in Dracula (especially the Count himself) are 19th-century anti-Catholic caricatures. There’s the easy bits, like the fact that Stoker was an Anglican and the whole blood-drinking thing (since Catholics believe in real, not symbolic, transubstantiation). Our protagonists are largely Church of England, and are rather blasé about their faith; Jonathan Harker thinks that the Eastern Europeans he encounters are silly and superstitious, and he tries to refuse the Rosary one woman gives him. The vampires are all cowed and harmed by Catholic iconography–the Host, crucifixes, etc.–which are used by our protagonists like magical spells. Only the vampires (and the “superstitious” characters) recognize any power in the icons; for everyone else, they are meaningless. This is a reference to the common characterization of Catholicism as witchcraft (and perhaps to Medieval Catholicism, where the illiterate laity incorporated those same Catholic icons in their old pagan magic rituals).

See, I could have built a pretty decent paper around that thesis, even though I recognize that it’s probably utter bullshit. I doubt that Stoker wrote his book as an anti-Catholic polemic, and if he did, then I doubt many of his readers would have gotten it. And to make the case, I have to ignore the fact that the most lauded character in the book is the obviously Catholic Abraham Van Helsing, or the various other details that don’t support (or actively contradict) my thesis. But I can cherry-pick details all day long, maybe do some quote-mining, and get a good essay out of it.

The same kind of thing is necessary for alternative medicine, astrology, or any other woo that posits a cause-effect relationship between otherwise unconnected objects. And conspiracy theories thrive on this. The phrase “do you think that’s [the deaths of the Apollo 1 astronauts/the government’s reluctance to release details about purported UFOs/the crash of Flight 93/the ‘expulsion’ of these ID advocates from academia/etc.] a coincidence?” is testament to that. I could offer up an example here, to match my term paper paragraph, but I’m sure you get the picture.

These are natural human drives. We are built to make connections; our ability to infer causal relationships and plan accordingly is one of the biggest survival advantages we have–it just doesn’t have a great deal of precision. And we crave explanations for things, any explanations, even ones that are pure guesswork, because that’s still more satisfying than not knowing.

When we combine these tendencies, to draw connections and iron out inconsistencies, we end up with neat, emotionally-satisfying narratives. In narrative storytelling, events must be connected or significant somehow. Everything fits together in a neat package, usually with some kind of moral center. There’s a climax and a resolution, and all the loose ends are tied up in a way that provides fulfillment and closure. We understand that kind of story; what we have a hard time grasping is reality, where things aren’t all connected and symbolic and leading to some emotionally-gratifying conclusion.

Maybe it’s hubris or shame or something that causes me to think that I’m somehow abnormal in having these connection-building and rationalizing drives in overdrive. Maybe I’m not that much different from anyone else. But it still seems amazing that I could become skeptical–heck, that anyone could become skeptical, with these cards stacked against them.

I think the first step is becoming aware of the common faults of human thought. In order to overcome the tendency toward erroneous thinking, you have to know that there’s something to overcome. It always comes back to education, doesn’t it?

That seems like enough rambling for now, but I’ll come back to this topic periodically.