Bigotry, Satire, and the Left

[CW: Racism]

I used to be a big fan of “Family Guy.” I owned the first several seasons and watched them repeatedly. I rejoiced when the show came back from its cancellation, even if the interim productions (a “live from Vegas” album and the direct-to-DVD Stewie movie) weren’t spectacular. I listened to the commentaries, which were often just as entertaining as the show itself. I loved how the show skewered right-wing religious fundamentalism, how frequently it crossed the boundaries of bad taste for a laugh. Like the bit where a JFK Pez dispenser got shot, or where Osama Bin Laden tried to get past airport security by singing showtunes, or the whole “When You Wish Upon a Weinstein” episode. The last of those never made it to air; the first two bits were cut even from the DVD sets. Family Guy was edgy.

Seth MacFarlane, the creator of the show and a significant part of its voice cast, is decidedly liberal, and his politics have certainly informed the series. More and more as the show went on, we saw bits lampooning creationists and religion, and promoting pot legalization, gay marriage, and positive immigration reform.

Unfortunately, as the show went on, we saw more and more of the stuff that eventually soured me on the series. That same “edginess,” that same intentionally-offensive philosophy of “we make fun of everyone,” meant more characters who were stereotype caricatures. Brian’s flamboyantly gay relative, the Asian reporter (voiced by a white woman) who occasionally slips into a “me ruv you rong time” accent for a laugh, the creepy old pedophile. And of course Quagmire, whose ’50s-throwback ladies-man character is eventually just a vehicle for relentless rape jokes.

Seth MacFarlane would probably tell you that he’s not a racist or a misogynist or a homophobe. He would probably tell you that he’s very liberal, that the show constantly makes fun of right-wing ideologies and satirizes even his erstwhile employers at Fox. In satirical parlance, he’d probably argue that his show is “punching up.”

The problem is that, while doing all that punching, he’s not giving any thought to the splash damage to people who might not be his actual targets. What about satirizing right-wingers necessitates rape jokes and racial stereotypes? Would his satire be as effective without those elements? Might it be better? I don’t think Seth MacFarlane cares much. They get laughs, and when it comes down to it, laughs matter more to guys like Seth MacFarlane than the people those laughs target.

There are lots of people in similar boats, willing to throw anyone under the bus for a cheap laugh, then defend themselves by saying that they’re being satirical, that because they’re politically liberal, or because they satirize the powerful in addition to the powerless, they can’t be bigots. They’re just equal-opportunity offenders, treating everyone the same, and you don’t see their powerful targets complaining.

Which, of course, misses the point. It misses the point like a white person saying “well how come it’s okay to say ‘honky’ or ‘cracker’ but not the n-word?” It misses the point like a man saying “female comedians are always telling jokes about men, how come it’s only sexist when I tell jokes about chicks or rape?” It misses the point that when not all people are equal in society, mocking them equally does unequal harm. Author Saladin Ahmed put it best when he said, “In an unequal world, satire that mocks everyone serves the powerful. It is worth asking what pre-existing injuries we add our insults to.”

It’s an important thing to remember when you’re a satirist. Who is your target? Who do you want to hurt, and who might get hurt in the crossfire? Is it necessary to your point for your target to have sex with an offensive transphobic caricature? Is it necessary to your point to dredge up stereotypical slurs against one minority to lampoon bigotry against another? Is it necessary in making fun of racists and homophobes to replicate racist and homophobic imagery?

“Satire” is not a shield that protects its creators from criticism. “Liberalism” is not an inoculation that prevents its bearers from committing bigoted acts. Punching down is a problem. Splash damage is a problem. Not all slights are covered by “but look at the larger context,” not when your “larger context” conveniently omits the context of centuries of caricatures with hook noses or big lips or fishnet stockings.

And, it should go without saying, “criticism” doesn’t come from the barrel of a gun.

Bigots Ruined It For You

[Trigger Warning for rape, misogyny, racism, assorted bigotry]

Let’s say you’re an enthusiastic Jain or Hindu, wanting to express your desire to be good, wanting to evoke Shakti with a clear symbolic representation as a flag or tattoo or something. You find the perfect symbol, one that has been used for that purpose since ancient times, the swastika. But you can’t use it. Bigots ruined it for you. A whole army of racists and supremacists claimed that symbol as their own and flew it over a campaign of genocide. It’s tainted, possibly forever. Try to adopt that symbol, and you’ll be mistaken for one of them, for a racist, a white supremacist, a Nazi, a bigot.

Let’s say it’s Halloween. You’re hosting a party for your friends, and you want to put together a costume that’s kind of ironic, something that you put thought into but looks like you just kind of threw it together. You settle on a classic ghost costume–white sheet, head to toe, with eye-holes cut out, like all the kids in “It’s the Great Pumpkin, Charlie Brown.” You could add a bit of flair with a point toward the top, like the little tail on the heads of those ghosts from Casper. But you can’t wear that costume. Bigots ruined it for you. Dress like that, and you’ll be mistaken for one of them, for a racist, a white supremacist, a Klansman, a bigot.

Let’s say you’re an enthusiastic southerner in the United States. You want to represent your heritage with a symbol of the South, something that proclaims in bold, primary colors your love of the land of barbecue and hospitality and Molly Hatchet. There’s a flag you can fly to do just that, except…except it was designed in a bloody war over (among other things) the right to treat some humans as less than human. It’s a well-designed flag, but bigots ruined it for you. It’s been tainted, by the war that spawned it and the continuing centuries of racist policies that followed. Fly that flag over your house, in your garage, on your trailer hitch, and you’ll be mistaken for one of them, for a racist, a segregationist, a bigot.

Let’s say you heard a funny joke recently. It’s kind of offensive, because it plays on stereotypes that you know aren’t true, but it’s a well-constructed joke nonetheless. You know you could tell it with perfect timing and get a roomful of hearty belly-laughs. Besides, you don’t believe those stereotypes are true. They’re absurd! But you can’t tell that joke anyway. Bigots ruined it for you. You may recognize that those stereotypes are false exaggerations, and you may know that everyone who hears you tell the joke knows that, but there are people who still believe those things, and there are people who are still hurt by those stereotypes, still affected by their presence in our culture. Tell that joke, and you’ll be mistaken for one of those people, for a racist, a misogynist, a homophobe, a bigot.

Let’s say that you’re a Christian. You follow Christ’s teachings and recognize that the most important thing–like it says in First Corinthians, like Jesus said to the scribes–is love. You want to express your Christian love by standing up for family values, because families–of any shape or size or configuration–are the purest example in this world of God’s unconditional love, and you value that. You see love as the most fundamental part of the Christian message, the foundation of Christ’s teachings, and so you would call yourself a fundamentalist Christian to express its importance to you. You define sin as that which is opposed to love, acts of jealousy and hatred, and see such acts as the worst crimes that one can carry out against their fellow humans. Despite that, you recognize that no person is truly evil, that those sins of hatred and jealousy come mostly out of ignorance, and that they can be corrected and defeated with love. You would advise people not to get angry at the hateful, but to hate the sin and love the sinner. But you can’t use those phrases–“family values,” “fundamentalist Christian,” “love the sinner, hate the sin.” Bigots ruined it for you. Use those phrases, and you’ll be mistaken for one of those people, for a homophobe, a fanatic, a bigot.

Let’s say you want to talk to a stranger in an enclosed space, like an elevator or a bus or subway car. After all, people end up in those things together, and it’s really awkward to just sit around staring at the wall silently or pretending other people don’t exist. Besides, a stranger’s just a friend you haven’t met yet, and you’re a friendly person. So you’d like to seize the opportunity to make small talk. But you can’t. Bigots ruined it for you. Rapists and sadists committing what are, effectively, hate crimes against women, along with violent misogynists and a culture that ignores them and dismisses the concerns of women, that blames victims and makes rape and harassment costly to report, have ruined it for you. Try to strike up that conversation, and you’ll be mistaken for one of those people, for a rapist, a violent person, a misogynist, a bigot.

Bigots suck. They make life shitty for lots of people. They make life shitty for the people who are targeted by their bigotry. They make life shitty for people who have to endure the inequalities built into a culture that rose up on bigoted foundations, even as more enlightened people recognize the mistakes of the past and try to dismantle that bigotry. And they even make life shitty for people who, in their innocent cluelessness or lack of empathy, might be mistaken for bigots. The solution is not to lash out at the targets of that bigotry, at the people who’ve suffered the most because of it, for being unable to look into your heart and your past and see that you’re not a bigot at all. The solution is not to lash out at people for being unable to discern whether an action is motivated by bigotry or ignorance. The solution is to realize that bigotry makes things suck for everyone. If you’re going to lash out, make sure you’re lashing out at the bigots. They’re the ones who ruined things.

What kind of diversity?

Vjack has a post up on Atheist Revolution discussing his problems with Atheism+. I’m not going to go into a lot of detail about it; I think he’s wrong, I think his posts on this and related subjects have been full of telling elisions and bad arguments. I’m personally disappointed that someone I respected and agreed with in the past has devoted so much of his recent blogging to this apparent vendetta. I generally don’t understand the pushback and opposition to the various proposed and enacted social justice initiatives, but it’s more striking when it’s from people I like (see also my quarrel with Toxicpath). But that’s enough of the personal stuff. The point here is simply responding to a couple of statements from that long-ish post.

On Values

In suggesting that we share common goals, I am being descriptive rather than prescriptive. That is, I am suggesting that virtually all atheists do in fact have some common goals and not that we should adopt some set of goals that we do not currently share.

I get where Vjack is coming from here, but he’s arguing against two contradictory strawmen. The implication in this statement (made explicit in the subsequent paragraph) is that Atheism+ is a movement saying that atheists should adopt social justice values, which they currently have not adopted.

This is flatly wrong, and that’s pretty clear from the few prominent posts on the subject. The fact is that a lot of atheists already do share these social justice values, just as most atheists share the values that Vjack presumed in his first sentence, which I suspect would be similar to the incomplete list I compiled yesterday. The percentage of atheists who share social justice values is clearly not as large as the percentage who value science, for instance, but it’s still a preexisting category. “Atheism+” is the label that arose from a discussion among like-minded atheists who already valued social justice, and it took off as a way for them to describe themselves.

Imagine that the libertarian wing of atheism–something that’s already in existence and has been clearly visible for some time–wanted to set themselves apart, so they could discuss libertarian issues without having to deal with the constant harping of liberal atheists, and so they could work to enact policies that supported their libertarian ideals, which is not something that the entirety of the atheist movement would be for. Would we begrudge them the ability to label themselves with something catchier than “libertarian atheists” (hey libertarian atheists: “Athei$m.” You can have that one for free) and unite to work toward particular goals that align with both their libertarian and atheist viewpoints?

I imagine some would. I wouldn’t. The less I have to deal with libertarians, the happier I generally am. It’d be a win-win situation.

So Vjack is wrong in suggesting that “Atheism+” is somehow, by its nature, prescriptive. It’s describing a movement and a group that’s been forming for a good long time, even if that movement isn’t “all atheists.” But I think he’s also wrong in seeing prescriptiveness as a problem. There’s nothing wrong or problematic in arguing that a particular group should care about a particular issue, or take action in a particular instance. It’s something that the atheist movement is generally familiar with. We hardly need any prodding to be spurred to action to support a high school atheist in a free speech battle or to speak out against tyrannical theocratic regimes, because those things are obviously in line with our shared values. But, you know, take a look at the “Bullshit” episodes on secondhand smoke or the Americans with Disabilities Act or cheerleading. Granted, they’re not directed primarily and solely at atheists, but they’re clear examples of some skeptically-minded folks saying to others “hey, these are issues that are important, which you should care about (and adopt our position on).” They’re making an argument that people who are like-minded on one set of positions and values (existence of gods, importance of science, promotion of reality-based policy) should also be like-minded on other positions and values (corporate liberty, opposing government intrusion, libertarianism).

They’re making an argument, which others are free to accept or reject. There’s no magical barrier between one set of values that some atheists share and any other set of values that some atheists share. If I hold libertarian or liberal or feminist or vegetarian or Objectivist values for the same basic reasons that I hold skeptical and scientific values, then of course I’m going to argue that others who hold one set of values should hold the other. “Hey, we both care about [THING A], and I care about [THING B] for the same reason I care about [THING A]. Since you agree with me about [THING A], you should also agree with me about [THING B].” Making the argument is not a problem, because there’s always the opportunity for a counterargument. And if a movement can handle guys like Bill Maher promoting anti-medical quackery and Penn Jillette promoting anti-government ideology and the legions of AGW deniers promoting anti-climate science demagoguery, all under the heading of “I’m anti-medicine/anti-government/anti-AGW for the same reason I’m anti-religion, because I’m a skeptic,” then I don’t see why it can’t handle feminists and social justice folks doing the same, even if you believe that those people are wrong/irrational/unskeptical/whatever.

On Diversity

I have always thought our movement was strong because of our diversity and not in spite of it. I value big tent atheism, and what I mean by that is a large movement with great diversity in which people work together to accomplish the few goals we truly share.

Had I been drinking, I probably would have ruined my smartphone when I read that first sentence. I agree, movement atheism has a lot of diversity, even of the kind that Vjack cites. But the idea that the community somehow only or generally or mostly works together to accomplish the few goals we truly share, that “Atheism+” is somehow an outlier in working together on goals that are only shared by a subset of atheists, is ludicrous. Some atheists have the goal of building bridges with theists to work on shared goals, others see that as a waste of time or worse. Some atheists have the goal of making all discourse civil and professional and non-dickish, others value blunt and acerbic speech. These groups have existed, and have been trying to unite like-minded atheists toward one or another goal, creating DEEEEEP RIIIIIFTS in the movement/community for years. We generally work together on goals like fighting school prayer and supporting science, but there have always been factions of atheists pulling in different directions and sniping at their opponents.

But there’s a bigger thing going on here, and it’s one that was laid out pretty clearly by Greta Christina. The question is what kind of diversity do you want? Do you want diversity of opinion, or diversity of background?

To some degree, you can have both. You can have libertarians and liberals and authoritarians, just as you can have blacks and whites and browns and so forth. But there comes a point where you have to make various choices, because encouraging, supporting, defending, or being explicitly inclusive of some opinions will necessarily make people from certain backgrounds feel excluded or dismissed, and vice-versa. As Greta Christina said, you can’t include both women and people who think women are inherently irrational. You can’t include both trans* people and people who think that trans* people are just self-deluded or insane. One way or another, someone’s going to leave.

Again, we’ve seen this recently with organized skepticism. Various leaders in the organized skeptical community have wanted to preserve a diversity of opinions on the god hypothesis by welcoming (and coddling) believers, which has left atheists feeling snubbed and delegitimized. In trying to accommodate one group, they’ve alienated another. TAM made their choice, that they’d rather have the Hal Bidlacks and Pamela Gays than the Christopher Hitchenses. We’ve seen it go the other way as well, such as when Orac declared himself done with organized atheism after Richard Dawkins supported Bill Maher’s receipt of that science award. Dawkins said he found embracing a diverse group of atheists more important than promoting medicine, and so he lost the support of at least one medical practitioner.

Of course, it’s not quite that clear-cut, is it? It’s not like Hal Bidlack said at TAM “atheists aren’t welcome,” and it’s not like Vjack has said “feminists aren’t welcome.” What they’ve both said is that those groups are welcome under certain conditions. Atheists were welcome at TAM so long as they didn’t attack believers for their beliefs. Atheists are welcome to have their conferences about the god hypothesis, so long as they don’t do it under the heading of “skepticism.” Similarly, Vjack doesn’t have a problem with feminists, so long as they adhere to his standards of who should be considered a bigot. The rest of the social justice opponents seem to agree: so long as women are like Paula Kirby or Abbie Smith or Mallorie Nasrallah and don’t think harassment is that big a deal, or don’t ask people to change their practices, they can stick around. Heck, they’ll be celebrated. But man, suggest that it’s wrong to make rape jokes to a minor or hand an unsolicited nude photo to a speaker or that guys be more aware of appropriate times to ask women out, and then they’re unreasonable, irrational, unskeptical, shrill, militant, radical, feminazi, femistasi, c***s and t***s.

Diversity is okay–it’s great! it’s desirable! it makes us strong!–so long as it’s on our terms.

And you know what? That’s okay. If they want to prize diverse opinions over diverse backgrounds, that’s fine. But then they really can’t be surprised when the people who feel excluded by the side they’ve chosen (explicitly or through inaction) go off and do their own thing.

Personally, I prize diverse backgrounds. Somite argued that gender (and by extension, other background factors) didn’t determine ideas or facts. Would that that were the case. Societies around the world do not treat people of different backgrounds (gender, social class, skin color, neurology, disability status, etc.) the same way, and so those people develop different perspectives on the world. Those perspectives do not change what is objectively true or real, but they do affect which aspects of reality people are concerned about and focused on. Would an all-male group of skeptics and atheists ever consider the pseudoscience behind douching or various cosmetics? How highly would they prioritize those things? Would a group of non-parent skeptics and atheists consider the claims about the effects of breastfeeding or water birth or teaching about Santa Claus? How much effort would they expend on those topics as opposed to acupuncture and angels? White American ex-Christian atheists have certainly addressed the Muslim claim about the 72 heavenly virgins, but do they have the same depth of analysis on the subject as Heina Dadabhoy did? Would they provide the same emphases?

People from different backgrounds provide perspectives and priorities that a more homogenous group wouldn’t consider. And I think that’s important, I think that’s valuable. I think seeing problems or claims from different perspectives is an important tool in evaluating them, and an important tool in arguing about them. Just given the god hypothesis, some people might be more swayed by a moral argument (like the Euthyphro dilemma, or “Why Won’t God Heal Amputees”) than an evidentiary one, and vice versa. Having both those arguments in your toolset is more useful than only having one. But I also think that the perspectives of people who come from different backgrounds can also help shape and change what we find important. If all atheism were run by folks from mostly-godless European countries, then we’d probably see a lot more Alain de Bottons and a lot fewer Matt Dillahunties–and if the majority of atheists shared Alexander Aan’s perspective, then the movement would be different in a lot of other ways. Our backgrounds and experiences shape who we are, what we care about, and what we spend our time and effort on. Failing to consider the perspectives of others means we make those choices with less information, and may expend our efforts in less-than-worthwhile directions.

Moreover, there’s the P.R. angle. Like it or not, people are primed to listen to and agree with people who share their backgrounds, who come from the same place they do, who speak their language. Alain de Botton’s atheist-church arguments might play well in Europe, where churches are mostly toothless, but they were roundly dismissed and ridiculed in god-soaked America. And to give one example, I suspect that Reg Finley is going to play better at a black church in Tuskegee than a white doctor would. The more people of different backgrounds, different places, different perspectives, we have, the more “languages” we can speak, the more people we can speak to and reach. If the whole movement looks like an old white boys’ club, it’s going to speak less strongly to people who don’t fit into those categories. You can call it irrational, I call it ethos.

So I’d prize diversity of background, which provides different perspectives and opinions and priorities, over diversity of opinion, for the most part. Given the choice between an ex-Muslim atheist and a white supremacist atheist, I’m going to go for the former every time. I think we gain more than we lose by excluding the bigots. Is that divisive? Hell yes. But “divisiveness” is not, in and of itself, a bad thing. Movement atheism has divided itself from secular Intelligent Design proponents like the Raelians and largely-secular cults like Scientology, and I think it’s benefited as a result.

And if what it takes for the social-justice-concerned atheists to move forward and work on those topics without being weighed down by the rape-jokers and c***-kickers and “only on my terms” diversity enthusiasts is to relabel themselves and widen an already-extant rift, then so be it. We’ll be divisive, and you can do whatever. The rest of us will work together on the goals we truly share, and you can comfortably sit back and call us irrational nazis and baboons.

How Dare You?!

This is kind of a follow-up to my post on friendship, and is likely to hit some of the same notes and indict some of the same people.

I’ve noticed recently, though I’m sure the trend has been around for some time, this tendency in skeptic/atheist circles to suggest, explicitly or implicitly, that a person has done so much for the atheist/skeptic community that it is somehow out of line to criticize them. Here’s an example I saw today, in PZ’s post about Sam Harris:

The Harris bashing going on here is just ridiculous. The man is a hero of the skepticism movement. All you people rushing to judgement should be embarrassed.

Hes admitted countless times he phrased his ideas poorly on the profiling issue (even publicly apologized on TV).

PZ, you need to take note on how well Harris defends himself against this character assassination you’ve exacerbated once again. Compare that with how you usually respond to criticism.

Remember that next time you’re getting all upset over a comedian’s joke and crying all over your keyboard and empty donut cases.

I know that I’ve seen this same kind of sentiment expressed about DJ Grothe of late (there’s one buried in this comment), and I’m pretty sure it came up a bunch about Dawkins in the whole “Dear Muslima” flap.

To put it bluntly, this kind of thinking is wrong-headed, fallacious, dangerous, and dare I say it, religious.

I’m not saying you shouldn’t have role models. That would be absurd. There are always people who are better than us or more informed than us at certain things. It’s fine to look up to people; the problem comes when you begin thinking those people are somehow above you.

A further problem comes if they begin thinking the same.

Must we, scientific skeptics and rational atheists, keep learning this lesson? This is the lesson of Linus Pauling, the lesson of Ayn Rand, the lesson of Edgar Mitchell, the lesson of Bill Maher, and so on. Being brilliant, well-informed, or just right about one area or subject does not make one brilliant, well-informed, or right about everything. Expertise does not transfer.

We as skeptics and atheists spend a lot of our time arguing with people because they’re wrong about something. We argue with strangers, we argue with anonymous idiots, we argue with professional pseudoscientists and preachers who hate us, we even argue with acquaintances and coworkers.

Why would we avoid arguing with the people we care about?

Granted, James Randi and Richard Dawkins and the like are basically strangers to me. The same is true for most people and the famous role models they look up to. We feel a kinship with these people because they’ve said or written or done things that resonate with us, that we wish to live up to or emulate. That forges an emotional connection, even if it’s one-way, which boils down to (at the very least) the point that we care what they have to say. We value their thoughts and opinions enough to spend our money buying books filled with just that, or spend our time watching their videos or reading their words online.

And so when they, our heroes, say or do something that is clearly wrong, I think we have a responsibility to speak up about it. In part, it’s because there’s a cognitive dissonance in saying “I value what you have to say” and “what you have to say with regard to X is wrong/reprehensible.” In part, it’s because we recognize that there are other people who value what they have to say, but may not be informed enough to see that, on this topic, they’re dead wrong. In part, it’s because we hope that our heroes are reasonable and, when presented with evidence that contradicts their position, would change it, making them even more admirable for following the evidence. In part, it’s because we just don’t like people being wrong. In part, I think we realize that leaving the wrongness unchallenged could eventually lead to worse problems (like the ubiquity of vitamin megadosing or libertarians). And in part, I think, it’s our responsibility.

That responsibility has different degrees of strength. If it’s, say, an author you like who has said something stupid, then your purchase of his book, your recommending it to your friends, etc., means that you have contributed to his popularity. But if it’s, say, someone who is often chosen by the media to speak for a group that you’re part of, then they’re sometimes (de facto) speaking on your behalf. And you don’t want the general public to think that this thing they’re wrong about is generally representative of the group’s beliefs.

Because, one way or another, their wrongness makes you look wrong. You’re wrong by proxy.

And so we call out our heroes when they’re wrong because we care about them and their opinions, because we want to give them the opportunity to realize their mistake and correct it, and because we want to show clearly that we don’t share their wrongness. Phil Plait called out Carl Sagan in his first book, because Sagan was wrong about Velikovsky. Phil was also involved in correcting Randi when Randi spouted off about climate change. PZ called out Sam Harris about his unfounded views regarding racial profiling, and promoted the opinions of actual experts in response. Many spoke up when anti-medicine Bill Maher was nominated for a science award. And so on and so forth. Maybe if more people had spoken up more loudly and forcefully against Linus Pauling, it wouldn’t be a generally-accepted belief that vitamin C cures colds.

What we don’t do, what we shouldn’t do, what we must not do, is say “well, these people have done so much good that we can overlook this little bit of bad.” We don’t accept that from religious believers about the role of religion in history. We don’t accept that from the Catholic Church regarding its predator priests. We don’t accept that from science, dammit. We don’t say “well, these guys have published a bunch of good papers before, let’s just let this paper slide without peer review.” We don’t say “gee, Dr. Pauling’s been right about so many things, what’s the harm in just assuming he’s right about vitamin megadosing?” We don’t say “NASA’s got a pretty good track record, so we’re just going to overlook this error in the rover program. We wouldn’t want to hurt anyone’s feelings.”

No, dammit, we’re skeptics, we’re scientists and science enthusiasts. We pride ourselves on seeking the truth and fighting ignorance. When prominent scientists and skeptics go wrong, they’re the ones we should argue with most strongly, most fervently–because either they, prizing truth and knowledge as we do, will change their position, or we–prizing truth and knowledge–will realize that it was our own position that was in error.

Or they’ll go on believing and spouting wrong things. And then we’re free to question whether they really are committed to truth and knowledge, or if they are committed to their own sense of infallible rightness. That’s a bitter pill to swallow, to realize that even your heroes (maybe even especially your heroes) can be blinded by ego, but it’s a necessary lesson to learn. It’s necessary because no one is perfectly right or perfectly insightful or perfectly skeptical or perfectly reasonable. Pobody’s nerfect, as the hat says. And sometimes we become complacent in accepting a person’s thoughts or ideas as pure unvarnished truth, and need to be shaken out of it with a glimpse of their clay feet.

Being a luminary, being a role model, being a tireless advocate, being a hero, shouldn’t shield a person from criticism. It may mean that we give them a little more benefit of the doubt to explain or clarify, but even that isn’t inexhaustible.

What it does (and should) grant them is a group of people who care what they have to say enough to explain to them why they’re wrong.

More on Movement Problems (or, Definitions Matter)

I’ve noticed a disturbing trend lately, and while there may be a bit of “when you’re a hammer, every problem starts looking like a nail” going on, I can’t help but see it as a symptom of the apparently growing notion that “skepticism” is something you join rather than something you do. Specifically, I keep seeing this twofold trend of people venerating logic and reason while failing to actually understand them (or at least to understand them as well as they think they do), and using terms like “rational” or “fallacy” in value-laden ways that strip them of their actual meaning.

The first time I really took notice of this was when Don talked about his trip to a CFI meeting in Indianapolis. At the meeting, he encountered a number of CFI members who saw skepticism not as a set of cognitive tools, but as a set of dogmatic rules which should be taught to people. In addition, and perhaps most relevantly:

[A]lmost every member I interacted with afterward was like an automaton repeating poorly understood buzzwords: “critical thinking,” “skepticism,” “freethought,” etc. They said these words and seemed to believe that they understood them and that, through that understanding, were part of a greater whole.

The same trend was the subject of the recent kerfuffle with Skepdude. The ‘Dude clearly held logic in high esteem, and clearly understood that fallacies were bad things, but just as clearly didn’t understand what made fallacies fallacious, and was quick to throw out the term “ad hominem” where it did not apply.

More alarming, however, were the comments of the much more prominent skeptic Daniel Loxton, who claimed that most insults amount to fallacious poisoning of the well, despite that clearly not being the case under the fairly strict and clear definition of poisoning the well.

You can see the same thing in spectacular action in the comment thread here, where commenter Ara throws around terms like “rational” and “anti-rational” as part of an argument that echoes Skepdude’s attempts to say that a valid argument doesn’t make insults valid, when in fact the opposite is the case.

Despite what Mr. Spock would have you believe, saying that something is “rational” or “logical” is to say almost nothing about the thing you are trying to describe. Any position, any conclusion–true or false, virtuous or reprehensible, sensible or absurd–can be supported by a logically valid argument. For instance:

All pigs are green.
All ostriches are pigs.
Therefore, all ostriches are green.

That’s a logically valid argument. The conclusion follows inexorably from the premises. That the conclusion is false and absurd is only because the premises are equally false and absurd. The argument is unsound, but it is perfectly logical. “Logical” is not a value judgment; it is an objective description, and can only be accurately applied to arguments.[1]
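
To make the structural point concrete, here is a minimal sketch in Lean 4. The predicate names Pig, Green, and Ostrich are just hypothetical placeholders I’ve picked for illustration, not anything from elsewhere in this post. The proof checks no matter how absurd the premises are, because validity is about structure alone; whether the premises are actually true is the separate question of soundness.

-- A minimal Lean 4 sketch with hypothetical predicates Pig, Green, Ostrich.
-- The proof goes through regardless of whether any pig is actually green,
-- because validity is purely structural; soundness is a separate question.
variable {α : Type} (Pig Green Ostrich : α → Prop)

example
    (h1 : ∀ x, Pig x → Green x)    -- "All pigs are green."
    (h2 : ∀ x, Ostrich x → Pig x)  -- "All ostriches are pigs."
    : ∀ x, Ostrich x → Green x :=  -- "Therefore, all ostriches are green."
  fun x hOstrich => h1 x (h2 x hOstrich)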

“Rational” is similar. There’s a lot of equivocation possible with “rational,” because it can mean “sensible” as well as “based on reason” or “sane” or “having good sense.” Some of those meanings are value-laden. However, if we are describing a conclusion, an argument, or a course of action, and if we are hoping to have any kind of meaningful discussion, then it’s important to be clear on what we’re trying to say when using the word “rational.”

If, for instance, I’m using the term “rational” to call an idea or action or something “sane” or “possessing good sense,” I’m probably expressing an opinion. “Good sense” is a subjective quality, and the things I consider “sane” may not be the same particular things that are excluded from the DSM-IV.

If, however, I’m trying to say that a belief or course of action or idea is “sensible” or “based on reason,” then I must first know what the reasons or senses involved are. A “sensible” course of action depends on subjective judgment, which is largely driven by circumstance and context. If someone cuts me off at 80mph on the freeway, I may consider such an action to be insensible, but not knowing what caused the person to take that action–say, for instance, their passenger was threatening them, or going into labor, or something–I really have no way of judging the sensibility of the action.

Similarly, if I don’t know what reasons are driving a person to hold some belief or take some action, then I cannot know if that action is based on reason–i.e., if it’s “rational,” in this sense. For instance, if I believe that autism is caused by mercury toxicity and that there are toxic levels of mercury in childhood vaccinations, then it may be a reasonable course of action to refuse to immunize my child. That an action may be wrong, or may be based on false reasons or bad reasons, does not make it irrational or unreasonable.

The fact is that most people do not knowingly take actions or hold beliefs for no reason. Many people take actions or hold beliefs for bad reasons, or ill-considered reasons, but most people do think “logically” and “rationally.” The problem comes from incorrect premises, or from a failure to consider all relevant reasons or weigh those reasons appropriately.

What I’m seeing more of lately, though, is the word “rational” used to mean “something that follows from my reasons” or “something I agree with,” or more simply, “good.” None of these are useful connotations, and none of them accurately represent what the word actually means. Similarly, “fallacy” is coming to mean, in some circles or usages, “something I disagree with” or “bad,” which again fails to recognize the word’s actual meaning. This is fairly detrimental: we already have a word for “bad,” but we don’t have another good word for “fallacy,” and the two are not synonymous.

It seems like an awful lot of skeptics understand that logic and reason are good and important, but they don’t actually seem to understand what makes them work. They seem happy to understand the basics, to practice a slightly more in-depth sort of cargo cult argumentation, while missing the significant whys and wherefores. Sure, you might be able to avoid fallacious arguments by simply avoiding anything that looks like a fallacy, but if you actually understand what sorts of problems cause an argument to be fallacious, it makes your arguing much more effective.

Let me provide two examples. First, my car: I can get by just fine driving my car, even though I really know very little about what’s going on underneath the hood and throughout the machinery. It’s not that I’m not interested; I find the whole process fascinating, but I haven’t put the work in to actually understand what’s going on on a detailed level. Someone who knew my car more intimately would probably get better gas mileage, would recognize problems earlier than I do and have a better idea of what’s wrong than “it makes a grinding noise when I brake,” and would probably use D2 and D3, whatever those are. I don’t get the full experience and utility out of my car, and that’s okay for most everyday travel. But you’re not going to see me entering into a street race with it.

On the other hand, I love cooking, and I’ve found that understanding the science behind why and how various processes occur in the kitchen has made me a much more effective cook. Gone are the days when my grilling was mostly guesswork, and when my ribs would come out tough and stringy. Now that I understand how the textures of muscle and connective tissue differ, and how different kinds of cooking and heat can impact those textural factors, I’m a much better cook. Now that I understand how searing and browning work on a chemical level, I’m a much better cook. I can improvise more in the kitchen, now that I have a better understanding of how flavors work together, and how to control different aspects of taste. I’m no culinary expert, but I can whip up some good meals, and if something goes a way that I don’t like, I have a better idea of how to change or fix it than I did when I was just throwing things together by trial and error.

If you’re content with reading some skeptical books and countering the occasional claim of a co-worker, then yeah, you really don’t need to know the ins and outs of logic and fallacies and reasoning and so forth. But if you want to engage in the more varsity-level skeptical activities, like arguing with apologists or dissecting woo-woo claims in a public forum, then you’re going to need to bring a better game than a cursory understanding of logic and basic philosophy. You don’t need to be a philosophy major or anything, but you might need to do reading beyond learning this stuff by osmosis from hanging out on the skeptical forums. Mimicking the techniques and phrasing of people you’ve seen before only gets you so far; if you really want to improvise, then you have to know how to throw spices together in an effective way.

I’m generally against the faction who wants to frame skepticism as some new academic discipline. I think that’s silly, and I think (regardless of intent) that it smacks of elitism. I’m of the opinion that anyone can be a skeptic, and that most people are skeptics and do exercise skepticism about most things, most of the time. But that doesn’t mean that skepticism comes easily, or that the things we regularly talk about in skeptical forums are easily understood. You have to do some work, you have to put in some effort, and yeah, you have to learn the basics before you can expect to speak knowledgeably on the subject. But believe me, it takes a lot more to learn how to cook a decent steak than to learn how to cook up a good argument.


[1] I suppose one could describe the thinking or processing methods of an individual or machine as “logical” in a moderately descriptive way, but it still doesn’t give much in the way of detail. What would a non-logical thought process be? One unrelated non-sequitur after another?

Please feel free to dismiss the following

What should have been a relatively academic conversation has become a feud, and I’m already finding it rather tiresome. I’m Phil Plait’s proverbial “dick,” you see, because I referenced an obscure little movie from twelve whole years ago made by a pair of independent directors with only, like, two Academy Awards to their names, and starring a bunch of unknown Oscar-winning actors, which only ranks #135 in IMDB’s Top 250 films of all time. Maybe it would have been better if I’d referenced a series of porn videos of drunk young women.

Also, because I’m snarky and sarcastic. Well, okay, guilty as charged.

So I’m exactly what Phil Plait was referring to, even though Phil’s clarifications make me suspect that even he doesn’t know exactly what he was referring to, and his speech has become a Rorschach Test for whatever tactic(s) any particular skeptic wants to authoritatively decry. Sure, fine, whatever. I’ve been called worse. By myself, no less.

Anyway, Junior Skeptic’s Daniel Loxton weighed in on Skepdude’s tweet:

Now, I’m no great fan of Loxton. I was; I enjoy Junior Skeptic, and I like his Evolution book. But I disagree with nearly everything he writes on skepticism, I think he tends to adopt a very condescending tone and a very authoritarian attitude over the skeptical movement (such as it is), and I lose a great deal of respect for anyone–especially a skeptic–who blocks people for disagreeing with them. You can read through my Twitter feed, if you like; I defy you to find any abuse or insult which would justify blockage.

So that’s my stated bias out of the way. I address Loxton’s point here not out of bitterness, but out of genuine surprise that someone who is so vocal and respected in the skeptical movement could be so very wrong about basic logical fallacies like ad hominem and poisoning the well. I also can’t help but feel a little prophetic with that whole last post I wrote about sloppy thinking.

Edit: I also want to offer a brief point in defense of Daniel Loxton: he was writing on Twitter, and given the limitations of that medium, it’s possible that truncating his thoughts impeded what he was trying to say, and that the mistakes are due less to sloppy thinking or misunderstanding and more to trying to fit complex thoughts into ~140 characters. That being said, the proper place to make such a complex point without sacrificing clarity would have been here, at the linked post, in the comment section.

Loxton’s first claim, as I understand it, is that most insults belong to the “poisoning the well” subcategory of the ad hominem fallacy. This is wrong on a couple of levels. While poisoning the well is indeed a subcategory of ad hominem, neither category can be said, by any reasonable standard, to include “most insults.”

A little background: the ad hominem fallacy belongs to a category of fallacies of relevance, which are arguments whose premises offer insufficient support for their conclusions, and which are generally used to divert or obscure the topic of a debate. Ad hominem accomplishes this in one of two related ways: by attempting to draw a conclusion about someone’s argument or points or claims through an irrelevant personal attack, or by attempting to divert the topic of a debate from claims and arguments to the character of one of the debaters.

It becomes fairly easy, then, to see why “most insults” do not qualify as the ad hominem fallacy: most insults are not arguments. A logical fallacy, by definition, is an error in reasoning; in order for something to qualify as a fallacy, it must at least make an attempt at reasoning. If I say “Kevin Trudeau is a motherfucker,” I’m not making any actual argument. There are no premises, there is no conclusion, there is no attempt at reasoning, and so there can be no fallacy.

In order for there to be fallacious reasoning, there must first be some attempt at reasoning, which requires some semblance of premises and a conclusion. “Kevin Trudeau says colloidal silver is a useful remedy. But Kevin Trudeau is an idiot. So, yeah,” is more obviously fallacious (even though, as Skepdude would happily and correctly point out, the conclusion–“therefore Kevin Trudeau is wrong about colloidal silver”–is only implied). The implied conclusion is not sufficiently justified by the premises; that abusive second premise says nothing about the truth or falsehood of Kevin Trudeau’s claim. Even if it’s true, even an idiot is capable of valid arguments and true statements.

I could leave this here, I suppose; if poisoning the well is indeed a subcategory of ad hominem fallacies, and “most insults” are not in fact ad hominem fallacies, then “most insults” could not also be part of a subset of ad hominem fallacies. But poisoning the well is a tricky special case, and if there’s one thing I’m known for, it’s belaboring a point.

So what of poisoning the well? It’s a way of priming the audience, of turning a potential audience against your opponent before they even get a chance to present their argument. You present some information about your opponent–true or false–that you know your audience will perceive as negative, before your opponent gets a chance to state their case. The implication (and it’s almost always left implicit, as Loxton rightly notes) is that anything your opponent says thereafter is unreliable or incorrect.

Here’s where it gets tricky: it barely qualifies as a fallacy, because all the speaker is doing is offering an irrelevant fact about his opponent’s character. As we said, in order for something to be a logical fallacy, it has to contain an error in reasoning. The point of poisoning the well is not to actually commit a fallacy, but to make the audience commit a fallacy, specifically to commit an ad hominem fallacy, by dismissing your opponent’s claims and arguments based on the irrelevant information you provided at the beginning. So poisoning the well is a subset of ad hominem fallacies, where the fallacy is committed by an audience at the prompting of the well-poisoning speaker.

Here’s where Loxton gets it wrong (and only fairly slightly, I might add; I had to do a fairly large amount of research before I felt confident that this was a key point): the key feature of poisoning the well is that it’s done pre-emptively. Insults offered after your opponent has stated their case may be an attempt to manipulate the audience into the same ad hominem fallacy, but they do not qualify as poisoning the well.

An example: You open up a copy of “Natural Cures THEY Don’t Want You To Know About” by Kevin Trudeau, and someone has placed inside the front cover a description of Trudeau’s various fraud convictions. Consequently, everything you read in the book will be tainted by your knowledge that Trudeau is a convicted fraud. The well has been thus poisoned, and now you’re prompted to dismiss anything he says on the basis of his personal characteristics.

If someone places that same note halfway through the book, or at the end, and you don’t encounter it until you finish or partly finish, then you may still be inclined to commit an ad hominem fallacy based on the contents of that note. However, this is not poisoning the well, which requires preemption.

There’s an issue here, and it touches on all the talk I’ve been doing recently about using arguments based on ethos in various situations. See, the fact that Kevin Trudeau is a convicted fraud is relevant if the point is whether or not you should trust what he has to say, or bother spending time and effort listening to it. The truth or falsehood of his arguments absolutely stand on their own, but his past as a huckster is of great relevance to the consideration of whether or not to take his word on anything.

It is a sad fact of life that no one person can conduct all the relevant research necessary to establish or refute any given claim or argument. Consequently, we must often rely on trust to some degree in considering how to direct our efforts, which claims merit deep investigation, and which we can provisionally accept based on someone’s word. This draws a distinction between whether or not a claim is true and whether or not a claim warrants belief. While it’s a laudable ideal to make those two categories as close to one another as possible, that goal remains impractical.

What this means is that, when considering whether or not to believe a claim or accept an argument (again, not whether or not the claim or argument is true), we generally use a person’s credibility as a piece of evidence used to evaluate whether or not belief is warranted. It’s rarely the only piece of evidence, and it only really qualifies as sufficient evidence in particularly ordinary claims, but it’s a relevant piece of evidence to consider nonetheless.

But, and I want to make this abundantly clear, it has nothing to do with the truth of a claim or the validity of an argument, it has only to do with the credibility of the speaker making the claim and whether or not the claim warrants belief. We should be very clear and very careful about this point: Kevin Trudeau’s record as a fraudster has no bearing on whether or not his claims are true. It does, however, have a bearing on whether or not you or I or anyone else should trust him or believe what he has to say.

In other words, if most people told me it was sunny out, I’d take their word for it. If Kevin Trudeau told me it was sunny out, I’d look up. And I’d wonder if he had some way of profiting off people’s mistaken belief about the relative sunniness of a given day.

So, back to the issue of insults. There’s one more problem with saying that “most insults” are a subcategory of any fallacy, and that’s that, at least with fallacies of relevance, the fallacious nature of an argument is in the argument’s flawed structure, in its failure of logic, and not in the words which are used. An ad hominem fallacy is not fallacious because it contains an insult, but because the conclusion does not follow from the premises. Containing the insult is what makes it “ad hominem,” but it’s the flawed logic that makes it a fallacy.

For instance, take this argument:

If a person copulates with his or her mother, then that person is a motherfucker.
Oedipus copulated with his mother.
Therefore, Oedipus is a motherfucker.

The fact that this argument is vulgar and contains an insult has no bearing whatsoever on its validity. And it’s clearly valid; within the context of “Oedipus Rex,” it’s also sound. An insult alone does not make an argument into an ad hominem fallacy.

Take this argument, then:

All men are mortal.
Socrates is a man.
Socrates smells like day-old goat shit, on account of his not bathing.
Therefore, Socrates is mortal.

A valid argument is one in which the conclusion is logically implied by and supported by the premises. The conclusion here is, in fact, logically implied by the premises, and is justified by them. The insulting third premise does not support the conclusion, but the conclusion also does not rely on it. Its inclusion is unnecessary, but including it does nothing to invalidate the argument.
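
To see that structurally, here is another minimal Lean 4 sketch, again with hypothetical predicate names (Man, Mortal, and Smelly this time) chosen purely for illustration. The insulting premise h3 is simply never used in the proof: the conclusion follows from h1 and h2 alone, so including h3 does nothing to the argument’s validity.

-- A minimal Lean 4 sketch with hypothetical predicates Man, Mortal, Smelly.
-- The insulting premise h3 is never used; the conclusion follows from h1 and
-- h2 alone, so its inclusion neither helps nor hurts the argument's validity.
variable {α : Type} (Man Mortal Smelly : α → Prop) (socrates : α)

example
    (h1 : ∀ x, Man x → Mortal x)  -- "All men are mortal."
    (h2 : Man socrates)           -- "Socrates is a man."
    (h3 : Smelly socrates)        -- "Socrates smells like day-old goat shit."
    : Mortal socrates :=          -- "Therefore, Socrates is mortal."
  h1 socrates h2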

Finally, take this argument:

All men are mortal.
Plato is a really smart guy, and he says that Socrates is mortal.
Therefore, Socrates is mortal.

This is a fallacious argument–a pro hominem argument, sort of the opposite of ad hominem–because the conclusion is not sufficiently supported by the premises. The conclusion relies upon an irrelevant premise, which quite obviously renders the logic invalid, despite that premise not being insulting at all.

I hope I laid that all out in a way that is clear, because I really don’t think I could make it any clearer. It bothers me to see terms which have distinct, specific, clear meanings being applied inaccurately by people who ought to know better. It further bothers me to see skeptics, who of all people should relish being corrected and doing the research to correct prior misconceptions, digging in their heels, committing style over substance fallacies, and generally misunderstanding basic principles of logic and argumentation.

But because I like to belabor a point, and because it’s been several paragraphs since I’ve been sufficiently snarky, let me offer one more example–pulled from real life, this time!–to clarify poisoning the well.


Here, the speaker offers a link to an opponent’s argument, but primes the audience first by obliquely calling his opponent a dick, and moreover, suggesting that the opponent is using tactics specifically identified by an authority in the relevant field as unacceptable and ill-advised. The speaker’s audience, on clicking through to the opposing article, is thus primed to read the article through the lens of the author’s suggested dickishness, and to dismiss it as dirty tactics from a dick, rather than actually considering the merits of the argument. This is classic poisoning the well, which, you’ll recall, is intended to cause the audience to commit an ad hominem fallacy.

We skeptics take pride in our allegiance to logic and evidence; we are aware of our own shortcomings; we are aware that we are fallible and that we make mistakes. In my opinion the above comments about Jenny McCarthy are a mistake that we should own up to and make amends, and stop using it. If you really want to counter Jenny’s anti-vaccine views, choose one of the claims she makes, do some research, and write a nice blog entry showing where she goes wrong and what the evidence says, but do not resort to ad-hominem attacks. We are skeptics and we ought to be better than that.

–Skepdude, “Skeptics Gone Wild,” 8/23/10.



In which I piss on the ‘Dude’s rug

I’ve recently had a bit of a back-and-forth with the Skepdude that eventually spilled out onto Twitter. I started writing this post when it appeared that my last comment might languish in eternal moderation, but it has since shown up, so kudos to Skepdude for exceeding my pessimistic expectations. If this post hadn’t turned into a larger commentary before that bit posted, I might have deleted the whole thing. As it stands, I’ve used poor Skepdude as a springboard.

In any case, you can go ahead and read the relevant posts, then come back here and read my further commentary. It’s okay, I’ll wait.

Back? Great. Here’s the further commentary.

I think this conversation touches on a few key points relevant to skeptical activism. The first is this trepidation regarding basic rhetoric. We tend to throw around “rhetoric” in a disparaging fashion, often in the context of “baseless rhetoric” or “empty rhetoric.” And those charges are sometimes on point, but I think we run the risk of forgetting that rhetoric is the art of argumentation, the set of tools and strategies available for crafting convincing arguments.

We’ve heard a lot from skeptics and scientists in the past few years claiming to be communications experts and saying that skeptics and scientists need to communicate better; we’ve all seen and complained about debates and discussions where the rational types fail because they can’t argue or work a crowd as well as their irrational opponents. These are both, to some degree, failures of rhetoric. Scientists are trained to argue in arenas and fora where facts and evidence are the most important thing, and the only convincing thing. That’s great if you’re defending a dissertation or critiquing a journal article, but as we’ve seen time and time again, it doesn’t translate to success in debates outside the university. Kent Hovind and Ray Comfort and Deepak Chopra may be blinkered idiots without a fact between the three of them, which would mean death in a scientific arena, but in the arena of public discourse, that lack of facts becomes a strength. Because when you have no facts to work with, you have to make sure that the rest of your techniques have enough glitz and flash to distract the audience from your lack of substance. Scientists ignore the style, knowing they have substance, unaware of, or naïve about, audiences’ universal love for shiny things.

We in the skeptic community, such as it is, have spent a lot of time recently debating whether it’s better to use honey or vinegar; one lesson we should all take away from that, however, is that facts and logic are bland on their own. You need to dress them up with spices and sauces if you expect anyone to want to swallow them. If one of your goals is to convince human beings–not, say, robots or Vulcans–then you can’t rely on pure logic alone.

Moving back to Skepdude, he seems to be of two minds in this argument. On one hand, he seems to think that we can ignore ethos and pathos, and argue on logos alone. Depending on his purpose, this may be enough. I don’t know what his goals are, in particular, but if he is content with arguing in such a way as to make his points clear and valid to any philosopher, scientist, or skeptic who happens to be reading them, then arguing with pure logic might be all he needs. Heck, he could break everything down and put it into those crazy symbolic logic proofs, and save himself a lot of typing.

But if he’s hoping to make his arguments convincing to a broader swath of people–and the number of rhetorical questions and the righteous anger in some of his other posts suggest that he is, and that he already knows this–then he’s going to need to slather those bland syllogisms in tasty pathos and savory ethos.

But here’s where I have the problem, and nowhere was it more apparent than in our Twitter conversation: while he elevates and venerates logic, he doesn’t understand a pretty basic principle of it, which is how fallacies–in particular, the ad hominem fallacy–work.

The whole post revolves around skeptics saying that Jenny McCarthy claims to oppose toxins yet uses Botox. Skepdude calls this an ad hominem fallacy. And I can see where it could be. Where he makes his mistake–and where most people who mistakenly accuse ad hominem make the mistake–is in failing to understand that ad hominem fallacies are all about the specific context. It’s true; if my only response to Jenny McCarthy’s anti-toxin arguments were “Yeah, well you put botox in your face, so who cares what you think,” I’d be dismissing her arguments fallaciously, by attacking her character–specifically, by suggesting that her actions invalidate her arguments.

But that doesn’t mean that any mention of McCarthy’s botox use would be fallacious. Let’s say I said, for instance, “You claim to be anti-toxin, yet you use botox; that suggests you’re a hypocrite, or that you don’t understand what toxins are.” Now, if I left it at that, it would still be fallacious; saying just that in response to her anti-vaccine arguments would be fallaciously dismissing them on the basis of her character.

Now, let’s imagine I said: “In fact, all the evidence demonstrates that the ‘toxins’ you insinuate are in vaccines are, in fact, present in non-toxic doses. Furthermore, the evidence shows that there is no link between vaccines and any autism spectrum disorder.” This bit addresses the substance of her argument, and does so using facts and evidence. If I further added “Also, you claim to be anti-toxin, yet you use botox; either you’re a hypocrite, or you don’t understand what toxins are,” I would most definitely be attacking her character, but it would not be fallacious because I wouldn’t be using it to dismiss her arguments.

The ad hominem fallacy requires that last part: in order for it to be fallacious, in order for it to render your argument invalid, you must be using the personal attack to dismiss your opponent’s arguments. Otherwise, it’s just a personal attack.

Skepdude disagrees:

This is what he linked to, by the way.

I replied:

And these were my links: 1 2 3.

And then I walked away from Twitter for a few hours, because I’m getting better at knowing when to end things.

And then I started writing this post, because I’m still not very good at it. I’d respond to the ‘Dude on Twitter, but I feel bad dredging up topics after several hours, and I know what I’m going to say won’t fit well in Tweets.

Anyway, the ‘Dude responded some more:

Oh, I’m so glad to have your permission. I would have tossed and turned all night otherwise.


Yes, you can infer what someone’s saying from their speech. I can even see some situations where the implication is strong enough to qualify as a logical fallacy–of course, the implication has to be an argument before it can be a fallacious one, and that’s a lot to hang on an implied concept–but that is, after all, the whole point of the Unstated Major Premise. However, (as I said in tweets) there’s a razor-thin line between inferring what an argument left unstated and creating a straw man argument that’s easier to knock down (because it contains a fallacy).

Skepdude even found a quote–in one of my links, no less!–that he thought supported this view:

He’s right that the ad hominem fallacy there doesn’t end with “therefore he’s wrong;” most ad hominem fallacies don’t. His larger point, however, doesn’t hold up, as a look at the full quote will demonstrate:

Argumentum ad hominem literally means “argument directed at the man”; there are two varieties.

The first is the abusive form. If you refuse to accept a statement, and justify your refusal by criticizing the person who made the statement, then you are guilty of abusive argumentum ad hominem. For example:

“You claim that atheists can be moral–yet I happen to know that you abandoned your wife and children.”

This is a fallacy because the truth of an assertion doesn’t depend on the virtues of the person asserting it.

Did you catch it? Here’s the relevant bit again: “If you refuse to accept a statement, and justify your refusal by criticizing the person who made the statement, then you are guilty of abusive argumentum ad hominem.” The point isn’t merely that the anti-atheist arguer attacked the atheist speaker; it’s that he used that attack to justify rejecting the speaker’s argument.

So, once again, context is key. If, for instance, the atheist had argued “all atheists are moral,” the “you abandoned your wife and children” comment would be a totally valid counterargument. The key in the example given was that the anti-atheist respondent used his attack on the atheist arguer to dismiss their argument, in lieu of actually engaging that argument. A point which my other links, which went into greater detail, all made clear.

I’ll say it again: in order for it to be an ad hominem, the personal attack has to be directly used to dismiss the argument. Dismissing the argument on other grounds and employing a personal attack as an aside or to some other end is, by definition, not an ad hominem. You don’t have to take my word for it, either:

In reality, ad hominem is unrelated to sarcasm or personal abuse. Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker’s argument by attacking the speaker instead of addressing the argument. The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn’t there. It is not a logical fallacy to attack someone; the fallacy comes from assuming that a personal attack is also necessarily an attack on that person’s arguments. (Source)

For instance, ad hominem is one of the most frequently misidentified fallacies, probably because it is one of the best known ones. Many people seem to think that any personal criticism, attack, or insult counts as an ad hominem fallacy. Moreover, in some contexts the phrase “ad hominem” may refer to an ethical lapse, rather than a logical mistake, as it may be a violation of debate etiquette to engage in personalities. So, in addition to ignorance, there is also the possibility of equivocation on the meaning of “ad hominem”.

For instance, the charge of “ad hominem” is often raised during American political campaigns, but is seldom logically warranted. We vote for, elect, and are governed by politicians, not platforms; in fact, political platforms are primarily symbolic and seldom enacted. So, personal criticisms are logically relevant to deciding who to vote for. Of course, such criticisms may be logically relevant but factually mistaken, or wrong in some other non-logical way.
[…]
An Abusive Ad Hominem occurs when an attack on the character or other irrelevant personal qualities of the opposition—such as appearance—is offered as evidence against her position. Such attacks are often effective distractions (“red herrings”), because the opponent feels it necessary to defend herself, thus being distracted from the topic of the debate. (Source)

Gratuitous verbal abuse or “name-calling” itself is not an argumentum ad hominem or a logical fallacy. The fallacy only occurs if personal attacks are employed instead of an argument to devalue an argument by attacking the speaker, not personal insults in the middle of an otherwise sound argument or insults that stand alone. (Source)

And so on, ad infinitum.

To return to the original point, let’s say a skeptic has said “Jenny McCarthy speaks of dangerous ‘toxins’ in vaccines, yet she gets Botox shots, which include botulinum, one of the most toxic substances around, right on her face.” Removed from its context, that statement doesn’t tell us what the arguer intended. I can see three basic scenarios:

  1. The skeptic has used the phrase as evidence to dismiss Jenny McCarthy’s arguments about “dangerous ‘toxins’ in vaccines,” and has thus committed an ad hominem fallacy.
  2. The skeptic has used the phrase as an aside, in addition to a valid counter-argument against her anti-vaccine claims. This would not be an ad hominem fallacy.
  3. The skeptic has used the phrase as evidence for a separate but relevant argument, such as discussing Jenny McCarthy’s credibility as a scientific authority, in addition to dismissing her arguments with valid responses. This would not be an ad hominem fallacy.

There are other permutations, I’m sure, but I think these are the likeliest ones, and only one out of the three is fallacious. Moreover, trying to read such a fallacy into those latter two arguments would not be valid cause to dismiss them; it would more likely demonstrate a lack of reading comprehension or a predisposition to dismiss such arguments.

Let’s say I’ve just finished demolishing McCarthy’s usual anti-vax arguments, and then I say “She must not be very anti-toxin if she gets Botox treatments on a regular basis.” Would it be reasonable to infer that I meant to use that statement as fallacious evidence against her point? I think not. If I’ve already addressed her point with evidence and logic, how could you infer that my aside, which is evidence- and logic-free, was also meant to be used as evidence in the argument I’ve already finished debunking?

On the other hand, let’s say I’ve done the same, and then I say “plus, it’s clear that Jenny doesn’t actually understand how toxins work. Toxicity is all about the dose. She thinks that children are in danger from the minuscule doses of vaccine preservatives they receive in a typical vaccine regimen, and yet she gets botox treatments, which require far larger dosages of a far more potent toxin. If toxins worked the way she apparently thinks they do, she’d be dead several times over.” Same point used in service of a separate argument. Would it be reasonable to infer here that I meant the point to be used as evidence against her anti-vaccine claims? Obviously not.

The only case in which it would be reasonable to make that inference would be some variation of me using that claim specifically to dismiss her argument. Maybe I say it in isolation–“Obviously she’s wrong about toxins; after all, she uses botox”–maybe I say it along with other things–“Former Playboy Playmate Jenny McCarthy says she’s anti-toxin, but uses botox. Sounds like a bigger mistake than picking her nose on national TV”–but those are fallacies only because I’m using the irrelevant personal attack to dismiss her argument.

So why have I put aside everything else I need to do on Sunday night to belabor this point? Well, I think that it’s a fine point, but one worth taking the time to understand. Skepdude’s argument is sloppy; he doesn’t seem to understand the fine distinctions between fallacious ad hominem and stand-alone personal attacks or valid ethical arguments, and so he’s advocating that skeptics stop using arguments that could potentially be mistaken for ad hominem fallacies. That way he–and the rest of us–could keep on being sloppy in our understanding and accusations of fallacies and not have to worry about facing any consequences for that sloppiness.

I can’t help but be reminded of my brother. When he was a kid, he did a crappy job mowing the lawn, and would get chewed out for it. He could have taken a little more time and effort to learn how to do it right–heck, I offered to teach him–but he didn’t. Rather, by doing it sloppily, he ensured that he’d only be asked to do it as a last resort; either Dad or I would take care of it, because we’d rather see it done right. He didn’t have to learn how to do a good job because doing a crappy job meant he could avoid doing the job altogether. By avoiding the job altogether, he avoided the criticism and consequences as well.

The problem, of course, is that the people who actually knew what they were doing had to pick up the slack.

This is the issue with Skepdude’s argument here, and I think it’s a point worth making. I disagree with those people who want to make skepticism into some academic discipline where everything is SRS BZNS, but that doesn’t mean I think we shouldn’t have some reasonable standards. Argumentation is a discipline and an art. It takes work, it takes research and effort, and it requires you to understand some very subtle points. It’s often hard to distinguish a fallacious argument from a valid one, especially in some of the common skeptical topics, since some of the woo-woo crowd have become quite adept at obfuscating their fallacies. It’s not enough to get a general idea and move on; logic and science require clarity and specificity from both terms and arguments. “Ad hominem fallacy” means a certain, very particular thing, and figuring that your general idea is close enough won’t cut it. If you know what the fallacies actually are and you structure your arguments and your rhetoric in ways that are sound and effective, then you don’t need to worry about people mistaking some bit of your writing for some logical fallacy. You get to say, “no, in fact, that’s not a fallacy, but I could see where you might make that mistake. Here’s why…” When you do the job right, when your arguments are valid and stand on their own, then you don’t need to fear criticism and accusation. Isn’t that what we tell every psychic, homeopath, and theist who claims to have the truth on their side? “If your beliefs are true, then you have nothing to fear from scientific inquiry/the Million Dollar Challenge/reasonable questions”? Why wouldn’t we require the same standard of our own points and arguments?

Skepdude, I apologize for making this lengthy, snarky reply. I generally agree with you, and I obviously wouldn’t follow you on Twitter if I didn’t generally like what you have to say. But on this point, which I think is important, I think you’re clearly wrong, and I think it’s important to correct. Feel free to respond here or in the comments at your post; I obviously can’t carry out this kind of discussion on Twitter.

On Labeling

I keep running into an issue with labels. It wasn’t long ago that I revised my own from “agnostic” to the more accurate and more useful “agnostic atheist” (in a nutshell, anyway–but this is a topic for a future post). The problem I have is that the relevant parts of my beliefs didn’t change, only what I called myself did. I didn’t have a belief in any gods when I called myself an agnostic, and I don’t have any belief in any gods now that I call myself an atheist. From any objective standpoint, I was an atheist the whole time.

And this is the substance of the problem: the dissonance between what a person calls himself or herself, and what categories a person objectively falls into. These labels are frequently different, and frequently result in various confusions and complications.

On one hand, I think we’re inclined to take people at their word with regard to what their personal labels are. It’s a consequence of having so many labels that center around traits that can only be assessed subjectively. I can’t look into another person’s mind to know what they believe or who they’re attracted to or what their political beliefs really are, or even how they define the labels that relate to those arenas. We can only rely on their self-reporting. So, we have little choice but to accept their terminology for themselves.

But…there are objective definitions for some of these terms, and we can, based on a person’s self-reporting of their beliefs, see that an objectively-defined label–which may or may not be the one they apply to themselves–applies to them.

I fear I’m being obtuse in my generality, so here’s an example: Carl Sagan described himself as an agnostic. He resisted the term “atheist,” and clearly gave quite a bit of thought to the problem of how you define “god”–obviously, the “god” of Spinoza and Einstein, which is simply a term applied to the laws of the universe, exists, but the interventionist god of the creationists is far less likely. So Sagan professed agnosticism apparently in order to underscore the point that he assessed the question of each god’s existence individually.

On the other hand, he also seemed to define “atheist” and “agnostic” in unconventional ways–or perhaps in those days before a decent atheist movement, the terms just had different connotations or less specific definitions. Sagan said “An agnostic is somebody who doesn’t believe in something until there is evidence for it, so I’m agnostic,” and “An atheist is someone who knows there is no God.”

Now, I love Carl, but it seems to me that he’s got the definitions of these terms inside-out. “Agnostic,” as the root implies, has to do with what one claims to know–specifically, it’s used to describe people who claim not to know if there are gods. Atheist, on the other hand, is a stance on belief–specifically the lack of belief in gods.

So, if we’re to go with the definitions of terms as generally agreed upon, as well as Carl’s own self-reported lack of belief in gods and adherence to the null hypothesis with regard to supernatural god claims, then it’s clear that Carl is an atheist. Certainly an agnostic atheist–one who lacks belief in gods but does not claim to know that there are no gods–but an atheist nonetheless.

The dilemma with regard to Sagan is relatively easy to resolve; “agnostic” and “atheist” are not mutually exclusive terms, and the term one chooses to emphasize is certainly a matter of personal discretion. In the case of any self-chosen label, the pigeon-holes we voluntarily enter into are almost certainly not all of the pigeon-holes into which we could be placed. I describe myself as an atheist and a skeptic, but it would not be incorrect to call me an agnostic, a pearlist, a secularist, an empiricist, and so forth. What I choose to call myself reflects my priorities and my understanding of the relevant terminology, but it doesn’t necessarily exclude other terms.

The more difficult problems come when people adopt labels that, by any objective measure, do not fit them, or exclude labels that do. We see Sagan doing the latter in the quote above, eschewing the term “atheist” based on what we’d recognize now as a mistaken definition. The former is perhaps even more common–consider how 9/11 Truthers, Global Warming and AIDS denialists, and Creationists have all attempted to usurp the word “skeptic,” even though none of their methods even approach skepticism.

The danger with the latter is when groups try to co-opt people who, due to a lack of consistent or unambiguous self-reporting (or unambiguous reporting from reliable outside sources), can’t objectively be said to fit into those groups. We see this when Christians try to claim that the founding fathers were all devout Christian men, ignoring the reams of evidence that many of them were deists or otherwise unorthodox. It’s not just the fundies who do this, though; there was a poster at my college which cited Eleanor Roosevelt and Errol Flynn among its list of famous homosexual and bisexual people, despite there being inconsistent and inconclusive evidence to determine either of their sexualities. The same is true when my fellow atheists attempt to claim Abraham Lincoln and Thomas Paine (among others), despite ambiguity in their self-described beliefs. I think that we, especially those of us who pride ourselves on reason and evidence, must be careful with these labels, lest we become hypocrites or appear sloppy in our application and definition of terms. These terms have value only inasmuch as we use them consistently.

The matter of people adopting terms which clearly do not apply to them, however, presents a more familiar problem. It seems easy and safe enough to say something like “you call yourself an atheist, yet you say you believe in God. Those can’t both be true,” but situations rarely seem to be so cut-and-dry. Instead, what we end up with are ambiguities and apparent contradictions, and a need to be very accurate and very precise (and very conservative) in our definition of terms. Otherwise, it’s a very short slippery slope to No True Scotsman territory.

Case in point, the word “Christian.” It’s a term with an ambiguous definition, which (as far as I can tell) cannot be resolved without delving into doctrinal disputes. Even a definition as simple as “a Christian is someone who believes Jesus was the son of God” runs afoul of Trinitarian semantics, where Jesus is not just the son, but God himself. A broader definition, like “One who follows the teachings of Jesus,” ends up including people who don’t consider themselves Christians (for instance, Ben Franklin, who enumerated Jesus among other historical philosophers) and potentially excluding people who don’t meet the unclear standard of what constitutes “following,” and so forth.

Which is why there are so many denominations of Christianity who claim that none of the other denominations are “True Christians.” For many Protestants, the definition of “True Christian” excludes all Catholics, and vice versa; and for quite a lot of Christians, the definition of the term excludes Mormons, who are also Bible-believers that accept Jesus’s divinity.

When we start down the path of denying people the terms that they adopt for themselves, we must be very careful that we do not overstep the bounds of objectivity and strict definitions. Clear contradictions are easy enough to spot and call out; where terms are clearly defined and beliefs or traits are clearly expressed, we may indeed be able to say “you call yourself bisexual, but you say you’re only attracted to the opposite sex. Those can’t both be true.” But where definitions are less clear, or where the apparent contradictions are more circumstantially represented, objectivity can quickly be thrown out the window.

I don’t really have a solution for this problem, except that we should recognize that our ability to objectively label people is severely limited by the definitions we ascribe to our labels and the information that our subjects report themselves. So long as we are careful about respecting those boundaries, we should remain well within the guidelines determined by reason and evidence. Any judgments we make and labels we apply should be done as carefully and conservatively as possible.

My reasons for laying all this out should become clear with my next big post. In the meantime, feel free to add to this discussion in the comments.

On Interpretation

I thought I’d talked about this before on the blog, but apparently I’ve managed to go this long without really tackling the issue of interpretation. Consequently, you might notice some of the themes and points in this post getting repeated in my next big article, since writing that was what alerted me to my omission.

I don’t generally like absolute statements, since they so rarely are, but I think this one works: there is no reading without interpretation. In fact, I could go a step further and say there’s no communication without interpretation, but reading is the most obvious and pertinent example.

Each person is different, the product of a unique set of circumstances, experiences, knowledge, and so forth. Consequently, each person approaches each and every text with different baggage, and a different framework. When they read the text, it gets filtered through and informed by those experiences, that knowledge, and that framework. This process influences the way the reader understands the text.

Gah, that’s way too general. Let’s try this again: I saw the first couple of Harry Potter movies before I started reading the books; consequently, I came to the books with the knowledge of the movie cast, and I interpreted the books through that framework–not intentionally, mind you, it’s just that the images the text produced in my mind included Daniel Radcliffe as Harry and Alan Rickman as Professor Snape. However, I plowed through the series faster than the moviemakers have. The descriptions in the books (and the illustrations) informed my mental images of other characters, so when I saw “Order of the Phoenix,” I found the casting decision for Dolores Umbridge quite at odds with my interpretation of the character, who was less frou-frou and more frog-frog.

We’ve all faced this kind of thing: our prior experiences inform our future interpretations. I imagine most people picking up an Ian Fleming novel have a particular Bond playing the role in their mental movies. There was quite a tizzy over the character designs in “The Hitchhiker’s Guide to the Galaxy” movie, from Marvin’s stature and shape to the odd placement of Zaphod’s second head, to Ford Prefect’s skin color. I hear Kevin Conroy‘s voice when I read Batman dialogue.

This process is a subset of the larger linguistic process of accumulating connotation. As King of Ferrets fairly recently noted, words are more than just their definitions; they gather additional meaning through the accumulation of connotations–auxiliary meaning attached to the word through the forces of history and experience. Often, these connotations are widespread. For example, check out how the word “Socialist” got thrown around during the election. There’s nothing in the definition of the word that makes it the damning insult it’s supposed to be, but thanks to the Cold War and the USSR, people interpret the word to mean more than just “someone who believes in collective ownership of the means of production.” Nothing about “natural” means “good and healthy,” yet that’s how it’s perceived; nothing about “atheist” means “immoral and selfish,” nor does it mean “rational and scientific,” but depending on who you say it around, it may carry either of those auxiliary meanings. Words are, when it comes right down to it, symbols of whatever objects or concepts they represent, and like any symbols (crosses, six-pointed stars, bright red ‘A’s, Confederate flags, swastikas, etc.), they take on meanings in the minds of the people beyond what they were intended to represent.

This process isn’t just a social one; it happens on a personal level, too. We all attach some connotations and additional meanings to words and other symbols based on our own personal experiences. I’m sure we all have this on some level; we’ve all had a private little chuckle when some otherwise innocuous word or phrase reminds us of some inside joke–and we’ve also all had that sinking feeling as we’ve tried to explain the joke to someone who isn’t familiar with our private connotations. I know one group of people who would likely snicker if I said “gravy pipe,” while others would just scratch their heads; I know another group of people who would find the phrase “I’ve got a boat” hilarious, but everyone else is going to be lost. I could explain, but even if you understood, you wouldn’t find it funny, and you almost certainly wouldn’t be reminded of my story next time you heard the word “gravy.” Words like “doppelganger” and “ubiquitous” are funny to me because of the significance I’ve attached to them through the personal process of connotation-building.

And this is where it’s kind of key to be aware of your audience. If you’re going to communicate effectively with your audience, you need to have some understanding of this process. In order to communicate effectively, I need to recognize that not everyone will burst into laughter if I say “mass media” or “ice dragon,” because not everyone shares the significance that I’ve privately attached to those phrases. Communication is only effective where the speaker and listener share a common language; this simple fact requires the speaker to know what connotations he and his audience are likely to share.

Fortunately or unfortunately, we’re not telepathic. What this means is that we cannot know with certainty how any given audience will interpret what we say. We might guess to a high degree of accuracy, depending on how well we know our audience, but there’s always going to be some uncertainty involved. That ambiguity of meaning is present in nearly every word, no matter how simple, no matter how apparently direct, because of the way we naturally attach and interpret meaning.

Here’s the example I generally like to use: take the word “DOG.” It’s a very simple word with a fairly straightforward definition, yet it’s going to be interpreted slightly differently by everyone who reads or hears it. I imagine that everyone, reading the word, has formed a particular picture in their heads of some particular dog from their own experience. Some people are associating the word with smells, sounds, feelings, other words, sensations, and events in their lives. Some small number of people might be thinking of a certain TV bounty hunter. The point is that the word, while defined specifically, includes a large amount of ambiguity.

Let’s constrain the ambiguity, then. Take the phrase “BLACK DOG.” Now, I’ve closed off some possibilities: people’s mental pictures are no longer of golden retrievers and dalmatians. I’ve closed off some possibilities that the term “DOG” leaves open, moving to the included subset of black dogs. There’s still ambiguity, though: is it a little basket-dwelling dog like Toto, or a big German Shepherd? Long hair or short hair? What kind of collar?

But there’s an added wrinkle here. When I put the word “BLACK” in there, I brought in the ambiguity associated with that word as well. Is the dog all black, or mostly black with some other colors, like a doberman? What shade of black are we talking about? Is it matte or glossy?

Then there’s further ambiguity arising from the specific word combination. When I say “BLACK DOG,” I may mean a dark-colored canine, or I may mean that “I gotta roll, can’t stand still, got a flamin’ heart, can’t get my fill.”

And that’s just connotational ambiguity; there’s definitional ambiguity as well. The word “period” is a great example of this. Definitionally, it means something very different to a geologist, an astronomer, a physicist, a historian, a geneticist, a chemist, a musician, an editor, a hockey player, and Margaret Simon. Connotationally, it’s going to mean something very different to ten-year-old Margaret Simon lagging behind her classmates and 25-year-old Margaret Simon on the first day of her Hawaiian honeymoon.

People, I think, are aware of these ambiguities on some level; the vast majority of verbal humor relies on them to some degree. Our language has built-in mechanisms to alleviate them. In speaking, we augment the words with gestures, inflections, and expressions. If I say “BLACK DOG” while pointing at a black dog, or at the radio playing a distinctive guitar riff, my meaning is clearer. The tone of my voice as I say “BLACK DOG” will likely give some indication as to my general (or specific) feelings about black dogs, or that black dog in particular. Writing lacks these abilities, but punctuation, capitalization, and font modification (such as bold and italics) are able to accomplish some of the same goals, and other ones besides. Whether I’m talking about the canine or the song would be immediately apparent in print, as the difference between “black dog” and “‘Black Dog.'” In both venues, one of the most common ways to combat linguistic ambiguity is to add more words. Whether it’s writing “black dog, a Labrador Retriever, with floppy ears and a cold nose and the nicest temperament…” or saying “black dog, that black dog, the one over there by the flagpole…” we use words (generally in conjunction with the other tools of the communication medium) to clarify other words. None of these methods, however, can completely eliminate the ambiguity in communication, and they all have the potential to add further ambiguity to the communication by adding information as well.

To kind of summarize all that in a slightly more entertaining way, look at the phrase “JANE LOVES DICK.” It might be a sincere assessment of Jane’s affection for Richard, or it might be a crude explanation of Jane’s affinity for male genitals. Or, depending on how you define terms, it might be both. Textually, we can change it to “Jane loves Dick” or “Jane loves dick,” and that largely clarifies the point. Verbally, we’d probably use wildly different gestures and inflections to talk about Jane’s office crush and her organ preference. And in either case, we can say something like “Jane–Jane Sniegowski, from Accounting–loves Dick Travers, the executive assistant. Mostly, she loves his dick.”

The net result of all this is that in any communication, there is some loss of information, of specificity, between the speaker and the listener (or the writer and the reader). I have some specific interpretation of the ideas I want to communicate, I approximate that with words (and often the approximation is very close), and my audience interprets those words through their own individual framework. Hopefully, the resulting idea in my audience’s mind bears a close resemblance to the idea in mine; the closer they are, the more effective the communication. But perfect communication–loss-free transmission of ideas from one mind to another–is impossible given how language and our brains work.

I don’t really think any of this is controversial; in fact, I think it’s generally pretty obvious. Any good writer or speaker knows to anticipate their audience’s reactions and interpretations, specifically because what the audience hears might be wildly different from what the communicator says (or is trying to say). Part of why I’ve been perhaps overly explanatory and meticulous in this post is that I know talking about language can get very quickly confusing, and I’m hoping to make my points particularly clear.

There’s one other wrinkle here, which is a function of the timeless nature of things like written communication. What I’m writing here in the Midwestern United States in the early 21st Century might look as foreign to the readers of the 25th as the works of Shakespeare look to us. I can feel fairly confident that my current audience–especially the people who I know well who read this blog–will understand what I’ve said here, but I have no way of accurately anticipating the interpretive frameworks of future audiences. I can imagine the word “dick” losing its bawdy definition sometime in the next fifty years, so it’ll end up with a little definition footnote when this gets printed in the Norton Anthology of Blogging Literature. Meanwhile, “ambiguity” will take on an ancillary definition referring to the sex organs of virtual prostitutes, so those same students will be snickering throughout this passage.

I can’t know what words will lose their current definitions and take on other meanings or fall out of language entirely, so I can’t knowledgeably write for that audience. If those future audiences are to understand what I’m trying to communicate, then they’re going to have to read my writing in the context of my current definitions, connotations, idioms, and culture. Of course, even footnotes can only take you so far–in many cases, it’s going to be like reading an in-joke that’s been explained to you; you’ll kind of get the idea, but not the impact. The greater the difference between the culture of the communicator and the culture of the audience, the more difficulty the audience will have in accurately and completely interpreting the communicator’s ideas.

Great problems can arise when we forget about all these factors that go into communication and interpretation. We might mistakenly assume that everyone is familiar with the idioms we use, and thus open ourselves up to criticism (e.g., “lipstick on a pig” in the 2008 election); we might mistakenly assume that no one else is familiar with the terms we use, and again open ourselves up to criticism (e.g., “macaca” in the 2006 election). We might misjudge our audience’s knowledge and either baffle or condescend to them. We might forget the individuality of interpretation and presume that all audience members interpret things the same way, or that our interpretation is precisely what the speaker meant and all others have missed the point. We would all do well to remember that communication is a complicated thing, and that those complexities do have real-world consequences.

…And some have Grey-ness thrust upon ’em

So, Alan Grey provided some musings on the Evolution/Creation “debate” at his blog, at my request. I figured I ought to draft a response, since I’ve got a bit of time now, and since Ty seems to want to know what my perspective is. Let’s jump right in, shall we?

Thomas Kuhn, in his famous work ‘The structure of scientific revolutions’ brought the wider worldview concept of his day into understanding science. His (and Polanyi’s) concept of paradigmic science, where scientific investigation is done within a wider ‘paradigm’ moved the debate over what exactly science is towards real science requiring two things
1) An overarching paradigm which shapes how scientists view data (i.e. theory laden science)
2) Solving problems within that paradigm

I think I’ve talked about The Structure of Scientific Revolutions here or elsewhere in the skeptosphere before. I really need to give it another read, but at the time I read it (freshman year of undergrad) I found it to be one of the densest, most confusing jargon-laden texts I’ve ever slogged through for a class. Now that I have a better understanding of science and the underlying philosophies, I really ought to give it another try. I’d just rather read more interesting stuff first.

Reading the Wikipedia article on the book, just to get a better idea of Kuhn’s arguments, gives me a little feeling of validation about my initial impressions all those years ago. See, my biggest problem with Structure–and I think I wrote a short essay to this effect for the class–was that Kuhn never offered a clear definition of what a “paradigm” was. Apparently my criticism wasn’t unique:

Margaret Masterman, a computer scientist working in computational linguistics, produced a critique of Kuhn’s definition of “paradigm” in which she noted that Kuhn had used the word in at least 21 subtly different ways. While she said she generally agreed with Kuhn’s argument, she claimed that this ambiguity contributed to misunderstandings on the part of philosophically-inclined critics of his book, thereby undermining his argument’s effectiveness.

That makes me feel a bit less stupid.

Kuhn claimed that Karl Popper’s ‘falsification criteria’ for science was not accurate, as there were many historical cases where a result occurred that could be considered as falsifying the theory, yet the theory was not discarded as the scientists merely created additional ad hoc hypothesis to explain the problems.

It is through the view of Kuhnian paradigms that I view the evolution and creation debate.

And I think that’s the first problem. To suggest that only Kuhn or only Popper has all the answers when it comes to the philosophy of science–which may not be entirely what Grey is doing here, but is certainly suggested by this passage–is a vast oversimplification. Kuhn’s paradigmatic model of science ignores to a large degree the actual methods of science; arguably, Popper’s view presents an ideal situation that ignores the human element of science, and denies that there exists such a thing as confirmation in science–which, again, may be due to ignoring the human element. The paradigmatic view is useful; it reminds us that the human ability to develop conceptual models is partially influenced by cultural factors, and that scientists must be diligent about examining their preconceptions, biases, and tendencies toward human error (such as ad hoc justifications) if they are to conduct accurate science. Falsificationism is also useful; it provides a metric by which to judge scientific statements on the basis of testability, and demonstrates one ideal that the scientific method can approach asymptotically. But to try to view all of science through one lens or another is myopic at best. Just as science is neither purely deductive nor purely inductive, neither purely theoretical nor purely experimental, it is certainly neither purely paradigmatic nor purely falsificationist.

One thing to keep in mind, though, is Grey’s brief mention of ad hoc hypotheses used to smooth out potentially-falsifying anomalies. While I’m sure that has happened and continues to happen, it’d be a mistake to think that any time an anomaly is smoothed over, it’s the result of ad-hocking. The whole process of theory-making is designed to continually review the theory, examine the evidence, and alter the theory to fit the evidence if necessary. We’re seeing a time, for instance, when our concept of how old and large the universe is may be undergoing revision, as (if I recall correctly) new evidence suggests that there are objects beyond the veil affecting objects that we can see. That doesn’t necessarily represent an ad hoc hypothesis; it represents a known unknown in the current model of the universe. Ad-hocking would require positing some explanation without sufficient justification.

(Curiously, Karl Popper obliquely referred to Kuhn’s scientific paradigm concept when he said “Darwinism is not a testable scientific theory but a metaphysical research programme.” )

It’s been a while since my quote mine alarm went off. It never fails. The quote is misleading at best, especially the way you’ve used it here, and somewhat wrong-headed at worst, as even Popper himself later acknowledged.

Here I define evolution (Common Descent Evolution or CDE) as: The theory that all life on earth evolved from a common ancestor over billions of years via the unguided natural processes of mutation and selection (and ‘drift’) and creation (Young earth creation or YEC) as: The theory that various kinds of life were created under 10,000 years ago and variation within these kinds occurs within limits via mutation and select (and ‘drift’).

I can’t see anything in there to disagree with. Yet, anyway.

I believe CDE and YEC can both be properly and most accurately defined as being scientific paradigms.

This, though, seems problematic. CDE, certainly, may be a scientific paradigm (though, as usual, I’d like that term to be pinned down to a more specific definition). Why on Earth would YEC be a scientific paradigm? Going back to Wikipedia, that font of all knowledge:

Kuhn defines a scientific paradigm as:

  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • how the results of scientific investigations should be interpreted

Alternatively, the Oxford English Dictionary defines paradigm as “a pattern or model, an exemplar.” Thus an additional component of Kuhn’s definition of paradigm is:

  • how is an experiment to be conducted, and what equipment is available to conduct the experiment.

So I can see, under a Creationist paradigm, that one might have different priorities for observations (searching, for instance, for the Garden of Eden or examining evidence for a Global Flood). I certainly understand the matter of formulating questions–we see this in debates with Creationists all the time: “who created the universe,” “why does the universe seem so fine-tuned to our existence,” and so forth. These questions imply what form their answers will take: the first suggests that there must have been an agent involved in the creation of the universe, the second interprets the causal relationship in a human-centered, teleological fashion. If there’s one thing I’ve learned over years of experience with these debates, it’s the importance of asking the right questions in the right ways. And certainly the scientists who were laboring under a YEC paradigm centuries ago, like the Creationists and ID proponents looking at various lines of evidence today, interpreted those lines of evidence in particular ways: ID proponents see everything in terms of engineering–machines, codes, programs, and so forth. I’m not entirely sure how a YEC paradigm would affect the available scientific equipment, though.

So I can see how YEC is a paradigm; I’m just not sure how it’s a scientific one. I mean, I can adopt a Pastafarian paradigm of looking at the world, and it may influence how I interpret scientific findings, but that doesn’t give it any scientific value or credence. A scientific paradigm, it seems to me, ought to develop out of science; allowing any old paradigm to count as a justified scientific one is a little more postmodernist than is valid in science.

Whilst CDE proponents claim that CDE is falsifiable

And Popper, too.

(E.g. Haldane and Dawkins saying a fossil Rabbit in the Precambrian era would falsify CDE), it is easy to see how the theory laden-ness of science makes such a find unlikely.

Um…how? A find is a find, regardless of how theory-laden the scientists are. And it’s not as though evolution hasn’t had its share of moments of potential falsification. Darwin was unaware of genes; his theory was missing a mechanism of transmission. Were we to discover that genes were not prone to the sorts of mutations and variation and drift that Darwinian evolution predicts, the theory would have been worthless. But the study of genes validated Darwin. If we had discovered that DNA replication was not prone to errors and problems, that would have been a major nail in the coffin for Darwinian evolution, but instead the DNA replication process supported the theory. If our studies of the genome had revealed vast differences between apparently related species, with broken genes and junk DNA and retroviral DNA in wildly different places in otherwise-close species, that would be a serious problem for evolutionary theory. Instead, the presence and drift of such genetic bits are perhaps the best evidence available for evolution, and give us a sort of genetic clock stretching backwards along the timeline. It could have been that the genetic evidence wildly contradicted the fossil evidence, but instead we find confirmation and further explanation of the existing lines.

Classification of rock strata was initially (and still commonly) done via the presence of index fossils. (Note: The designation of these fossils as representing a certain historical period was done within the CDE paradigm)

Bzzt! Simply untrue. There do exist index fossils–fossils which only occur in one stratum–which can be used to verify the dates of some strata. However, those dates have already been determined through other methods–radiometric dating, superposition (which layers lie on top of which), and so forth.

Incidentally, if anyone ever gets a chance to look into the various dating methods we have, I highly recommend it. I taught a lesson on it last Spring, and it’s really interesting stuff. You’d never believe how important trees are.
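
To give a rough sense of the arithmetic behind the radiometric part, here's a minimal sketch in Python; the isotope and numbers are purely illustrative, and real radiometric dating involves corrections this toy calculation ignores, such as initial daughter content, branching decays, and checks that the sample stayed a closed system. The idea is just that a parent isotope decays into a daughter isotope at a known rate, so the daughter-to-parent ratio tells you how long the clock has been running.

  import math

  # Toy age estimate from exponential decay: with no initial daughter isotope
  # and a closed sample, D/P = e^(lambda*t) - 1, so t = ln(1 + D/P) / lambda.
  def age_in_years(daughter_parent_ratio, half_life_years):
      decay_constant = math.log(2) / half_life_years
      return math.log(1 + daughter_parent_ratio) / decay_constant

  # A hypothetical parent isotope with a 1.25-billion-year half-life and a
  # measured daughter/parent ratio of 0.5 gives an age of roughly 730 Myr.
  print(round(age_in_years(0.5, 1.25e9) / 1e6), "million years")  # -> 731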

The finding of a fossil Rabbit in a rock strata would almost certainly result in classification of the strata as something other than pre-cambrian, or the inclusion of other ad hoc explanations for the fossil (Overthrusts, reworking etc).

No, I’m afraid that’s simply not the case. If a fossil rabbit were found in a Precambrian stratum that lay below the Cambrian strata, and both the stratum and the fossil could reasonably be dated back to the Precambrian (through methods like radiometric dating), it would not simply force the redefinition of the stratum. Redefining it would mean explaining the presence of one geological stratum beneath several others that, chronologically, came earlier, and explaining why there are other Precambrian fossils in this supposedly post-Cambrian stratum. Either way, the result is an insurmountable anomaly.

Granted, there could be alternate hypotheses to explain how the rabbit got there. Maybe there was a hole in the ground, and some poor rabbit managed to fall in, die, and get fossilized. But then we wouldn’t have a Precambrian rabbit, we’d have a post-Cambrian rabbit in a hole, and there ought to be other signs which could demonstrate that (not the least of which being that the rabbit shouldn’t date back to the Precambrian radiometrically, and that the strata above it, closing off the hole, should be out of place with regard to the rest of the strata). In order to call the stratum the result of an overthrust or erosion or something, there would have to be other evidence for that. Geological folding and erosion, so far as I know, would not affect one fossilized rabbit without leaving other signs behind.

It is worth noting that many smaller (only 200 million year) similar type surprises are happily integrated within CDE. (A recent example is pushing back gecko’s 40 million years in time)

I’d like to see more examples and sources for this. I read the gecko article, and I don’t see where it’s at all what you’re suggesting. This is not an example of a clearly out-of-place animal in the wrong era; it’s an example of there being an earlier ancestor of a modern species than what we knew of before. The preserved gecko is a new genus and species–it’s not as though it’s a modern gecko running around at the time of the dinosaurs–and it’s from a time when lizards and reptiles were common. The point of the “rabbit in the Precambrian” example is that there were no mammals in the Precambrian era. Multicellular life was more or less limited to various soft-bodied things and small shelled creatures; most of the fossils we find from the Precambrian are tough to pin down to a kingdom, let alone a genus and species like Sylvilagus floridanus, for instance. There’s a world of difference between finding a near-modern mammal in a period 750 million years before anything resembling mammals existed, and finding a lizard during a lizard- and reptile-dominated time 40 million years before your earliest fossil in that line. There was nothing in the theory or the knowledge preventing a gecko from palling around with dinosaurs; there was just no evidence for it.

The main point here is that the claimed falsification is not a falsification of CDE, but merely falsifies the assumption that fossils are always buried in a chronological fashion. CDE can clearly survive as a theory even if only most fossils are buried in chronological fashion.

That may be closer to the case, as there is a wealth of other evidence for common descent and evolution to pull from. However, the Precambrian rabbit would call into question all fossil evidence, as well as the concept of geological stratification. It would require a serious reexamination of the lines of evidence for evolution.

Many other events and observations exist which could be said to falsify evolution (e.g. the origin of life, soft tissue remaining in dinosaur fossils), but are happily left as unsolved issues.

How would the origin of life falsify evolution? Currently, while there are several models, there’s no prevailing theory of how abiogenesis occurred on Earth. It’s not “happily left as an unsolved issue;” scientists in a variety of fields have spent decades examining that question. Heck, the Miller-Urey experiments, though based on an inaccurate model of the early Earth’s composition, were recently re-examined and found to be more fruitful and valid than originally thought. The matter of soft tissue in dinosaur fossils has been widely misunderstood, largely due to a scientifically-illiterate media (for instance, this article which glosses over the softening process). It’s not like we found intact Tyrannosaurus meat; scientists had to remove the minerals from the substance in order to soften it, and even then the tissue may not be original to the Tyrannosaurus.

It is because of these types of occurrences that I suggest CDE is properly assigned as a scientific paradigm. Which is to say that CDE is not viewed as falsified by these unexpected observations, but instead these problems within CDE are viewed as the grist for the mill for making hypothesis and evaluating hypothesis within the paradigm.

Except that nothing you’ve mentioned satisfies the criteria for falsifiability. For any scientific theory or hypothesis, we can state a number of findings that would constitute falsification. “Rabbits in the Precambrian” is one example, certainly, but origins of life? Softenable tissue in dino fossils? Earlier gecko ancestors? The only way any of those would falsify evolution would be if we found out that life began suddenly a few thousand years ago, or some such. So far, no such discovery has been made, while progress continues on formulating a model of how life began on the Earth four-odd billion years ago.

In other words, you’ve equated any surprise or unanswered question with falsification, when that’s not, nor has it ever been, the case.

YEC can also be properly identified as a scientific paradigm although significantly less well funded and so significantly less able to do research into the problems that existing observations create within the paradigm.

Yes, if only Creationists had more funding–say, tax-exempt funding from fundamentalist religious organizations, or $27 million that might otherwise be spent on a museum trumpeting their claims–they’d be able to do the research to explain away the geological, physical, and astronomical evidence for a billions-of-years-old universe; the biological, genetic, and paleontological evidence for common descent; the lack of any apparent barriers that would keep evolutionary changes confined to some small areas; and ultimately, the lack of evidence for the existence of an omnipotent, unparsimonious entity who created this whole shebang. It’s a lack of funding that’s the problem.

One such example of research done is the RATE project, specifically the helium diffusion study, which predicted levels of helium in zircons to be approximately 100,000 times higher than expected if CDE were true.

Further reading on RATE. I’m sure the shoddy data and the conclusions that don’t actually support YEC are due to lack of funding as well.

What placing YEC and CDE as scientific paradigms does is make sense of the argument. CDE proponents (properly) place significant problems within CDE as being something that will be solved in the future (e.g. the origin of life) within the CDE paradigm. YEC can also do the same (e.g. endogenous retroviral inserts).

Except that the origin of life isn’t a serious problem for evolution; evolution’s concerned with what happened afterward. That’s like saying that (hypothetical) evidence against the Big Bang theory would be a problem for the Doppler Effect. You’ve presented nothing here that would falsify evolution, while there are already oodles of existing observations to falsify the YEC model. Moreover, you’ve apparently ignored the differences in supporting evidence between the two paradigms; i.e., that evolution has lots of it, while YEC’s is paltry and sketchy at best, and nonexistent at worst. It can’t just be a matter of funding; the YEC paradigm reigned for centuries until Darwin, Lord Kelvin, and the like. Why isn’t there leftover evidence from those days, when they had all the funding? What evidence is there to support the YEC paradigm that would make it anything like the equal of the evolutionary one?

Comments
1) Ideas like Stephen Jay Gould’s non-overlapping magisteria (NOMA) are self-evidently false. If God did create the universe 7000 years ago, there will definitely be implications for science.

More or less agreed; the case can always be made for Last Thursdayism and the point that an omnipotent God could have created the universe in medias res, but such claims are unfalsifiable and unparsimonious.

2) Ruling out a supernatural God as a possible causative agent is not valid. As with (1) such an activity is detectable for significant events (like creation of the world/life) and so can be investigated by science.

I’m not entirely clear on what you’re saying here. I think you’re suggesting that if a supernatural God has observable effects on the universe, then it would be subject to scientific inquiry. If that’s the case, I again agree. And a supernatural God who has no observable effects on the universe is indistinguishable from a nonexistent one.

a. To argue otherwise is essentially to claim that science is not looking for truth, but merely the best naturalistic explanation. If this is the case, then science cannot disprove God, nor can science make a case that YEC is wrong.

Here’s where we part company. First, the idea that science is looking for “truth” really depends on what you mean by “truth.” In the sense of a 1:1 perfect correlation between our conceptual models and reality, truth may in fact be an asymptote, one which science continually strives for but recognizes as probably unattainable. There will never be a day when science “ends,” where we stop and declare that we have a perfect and complete understanding of the universe. Scientific knowledge, by definition, is tentative, and carries the assumption that new evidence may be discovered that will require the current knowledge to be revised or discarded. Until the end of time, there’s the possibility of receiving new evidence, so scientific knowledge will almost certainly never be complete.

As far as methodological naturalism goes, it doesn’t necessarily preclude the existence of supernatural agents, but anything that can cause observable effects in nature ought to be part of the naturalistic view. As soon as we discover something supernatural that has observable effects in nature, it can be studied, and thus can be included in the methodological naturalism of science.

Even if all this were not the case, science can certainly have a position on the truth or falsehood of YEC. YEC makes testable claims about the nature of reality; if those claims are contradicted by the evidence, then that suggests that YEC is not true. So far, many of YEC’s claims have been evaluated in precisely this fashion. While science is less equipped to determine whether or not there is a supernatural omnipotent god who lives outside the universe and is, by fiat, unknowable by human means, science is quite well equipped to determine the age of the Earth and the development of life, both areas where YEC makes testable, and incorrect, predictions.

b. Antony Flew, famous atheist turned deist, makes the point quite clearly when talking about his reasons for becoming a deist:

“It was empirical evidence, the evidence uncovered by the sciences. But it was a philosophical inference drawn from the evidence. Scientists as scientists cannot make these kinds of philosophical inferences. They have to speak as philosophers when they study the philosophical implications of empirical evidence.”

What? We have very different definitions of “quite clearly.” Not sure why you’re citing Flew here, since he’s not talking about any particular evidence, since he has no particular expertise with the scientific questions involved, and since he’s certainly not a Young Earth Creationist, nor is his First Cause god consistent with the claims of YEC. I’m curious, though, where this quotation comes from, because despite the claim here that his conversion to Deism was based on evidence, the history of Flew’s conversion story cites mostly a lack of empirical evidence–specifically with regard to the origins of life–as his reason for believing in a First Cause God.

Flew’s comments highlight another significant issue: the role of inference, especially in ‘historical’ (I prefer the term ‘non-experimental’) science.

You may prefer the term. It is not accurate. The nature of experimentation in historical sciences tends to be different from that in operational science, but it exists, is useful, and is valid nonetheless.

Much rhetorical use is given to the notion that YEC proponents discard the science that gave us planes and toasters and let us visit the moon (sometimes called ‘operational’…I prefer ‘experimental’ science). Yet CDE is not the same type of science that gave us these things.

No, CDE is the type of science that gives us more efficient breeding and genetic engineering techniques, a thorough understanding of how infectious entities adapt to medication (and strategies for ameliorating the problems that adaptation presents), genetic algorithms, and a framework for understanding how and why many of the things we already take for granted in biology are able to work. It just happens to be based on the same principles and methodologies as the science that gave us toasters and lunar landers.
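
(A quick aside on that genetic-algorithms point, since it can sound like hand-waving: below is a bare-bones cumulative-selection toy in the spirit of Dawkins’ “weasel” program–my own sketch, with an arbitrary target string and arbitrary parameters–showing how blind variation plus non-random selection homes in on a solution.)

```python
# Bare-bones cumulative selection, in the spirit of Dawkins' "weasel" program.
# Purely illustrative; the target string and parameters are arbitrary choices.
import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "
POP_SIZE = 200        # offspring produced per generation
MUTATION_RATE = 0.02  # chance that any given character mutates

def fitness(candidate):
    # Number of characters that match the target string.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(parent):
    # Copy the parent; each character has a small chance of changing at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

def evolve():
    # Start from a completely random string.
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while current != TARGET:
        offspring = [mutate(current) for _ in range(POP_SIZE)]
        # Selection: keep the fittest offspring (or the parent, if it's still best).
        current = max(offspring + [current], key=fitness)
        generation += 1
    return generation

if __name__ == "__main__":
    print(f"Matched the target after {evolve()} generations")
```

Nothing in the mutation step “knows” what the target is; only the selection step does, and that is enough–the program converges quickly, whereas blind random search over the same alphabet would effectively never finish.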

Incidentally, the determination of the age of the universe and the Earth is based on precisely the same science that allowed us to go to the moon and make airplanes. Or, more specifically, the science that allows us to power many of our space exploration devices and our homes, and to view very distant objects.

CDE is making claims about the distant past by using present observations, and there is a real disconnect when doing this.

It’s also making claims about the present by using present observations. Evolution is a continuous process.

One of the chief functions of experiment is to rule out other possible explanations (causes) for the occurrence being studied. Variables are carefully controlled in multiple experiments to do this. The ability to rule out competing explanations is severely degraded when dealing with historical science because you cannot repeat and control variables.

Fair enough. It’s similar to surgical medicine in that regard.

You may be able to repeat an observation, but there is no control over the variables for the historical event you are studying.

“No control” is another oversimplification. We can control what location we’re looking at, what time period and time frame we examine, and a variety of other factors. It’s certainly not as tight as operational science, but there are controls and experiments in the primarily-observational sciences.

Not that it matters, because experiments are not the be-all, end-all of science. Predictions, observations, and mathematical models are important too. Science in general has much more to do with repeated observation than with experimentation. And yes, repeated observation is enough (in fact, it’s the only thing) to determine cause and effect.

Scientists dealing with non-experimental science have to deal with this problem, and they generally do so by making assumptions (sometimes well founded, sometimes not).

Guh? You act like they just come up with these assumptions without any justification.

A couple of clear examples are uniformitarianism (geological processes happening today happened in the same way, and at the same rate, in the past) and the idea that similarity implies ancestry.

Okay, two problems. One: if we were to hypothesize that geological processes happened somehow differently in the past, we would have to provide some evidence to justify that hypothesis. Without evidence, it would be unparsimonious to assume that things functioned differently in the past. As far as all the evidence indicates, the laws of physics are generally constant in time and space, and those geological processes and whatnot operate according to those laws.

Two: the idea that similarity implies ancestry is not a scientific one. While that may have been a way of thinking about it early on in the evolutionary sciences, it does not actually represent science now. Similarity may imply relationship, but there are enough instances of convergent evolution to give the lie to the idea that scientists think similarity = ancestry.

A couple of quotes will make my point for me.

Doubtful.

Henry Gee, chief science writer for Nature, wrote “No fossil is buried with its birth certificate” … and “the intervals of time that separate fossils are so huge that we cannot say anything definite about their possible connection through ancestry and descent.”

Poor Henry Gee; first quote-mined in Jonathan Wells’ Icons of Evolution, now by you. What’s interesting here is that you’ve actually quote-mined Gee’s response to Wells and the DI for quote-mining him! (Which, I realize, you’re aware of, but I read this largely as I was writing the response.) Here’s the full context:

That it is impossible to trace direct lineages of ancestry and descent from the fossil record should be self-evident. Ancestors must exist, of course — but we can never attribute ancestry to any particular fossil we might find. Just try this thought experiment — let’s say you find a fossil of a hominid, an ancient member of the human family. You can recognize various attributes that suggest kinship to humanity, but you would never know whether this particular fossil represented your lineal ancestor – even if that were actually the case. The reason is that fossils are never buried with their birth certificates. Again, this is a logical constraint that must apply even if evolution were true — which is not in doubt, because if we didn’t have ancestors, then we wouldn’t be here. Neither does this mean that fossils exhibiting transitional structures do not exist, nor that it is impossible to reconstruct what happened in evolution. Unfortunately, many paleontologists believe that ancestor/descendent lineages can be traced from the fossil record, and my book is intended to debunk this view. However, this disagreement is hardly evidence of some great scientific coverup — religious fundamentalists such as the DI — who live by dictatorial fiat — fail to understand that scientific disagreement is a mark of health rather than decay. However, the point of IN SEARCH OF DEEP TIME, ironically, is that old-style, traditional evolutionary biology — the type that feels it must tell a story, and is therefore more appealing to news reporters and makers of documentaries — is unscientific.

What Gee is criticizing here and in his book, as his response and further information here (4.14, 4.16) make clear, is the tendency among some scientists and journalists to interpret the evidence in terms of narratives and to see life as a linear progression, when in fact it’s more of a branching tree with many limbs. It’s impossible from fossil evidence alone to determine whether two animals are ancestor and descendant, or cousins, or whatever.

See, the problem with letting quotes make your point for you is that they often do no such thing.

Gee’s response to this quote of him supports my point.

No, you’ve simply misunderstood it. The fact that you’ve read Icons, somehow find it valid, and somehow think it supports a YEC view, speaks volumes about your credibility.

Colin Patterson’s infamous quote about the lack of transitional fossils makes the same point. “The reason is that statements about ancestry and descent are not applicable in the fossil record. Is Archaeopteryx the ancestor of all birds? Perhaps yes, perhaps no: there is no way of answering the question.”

My quote mine alarm is getting quite a workout today, but I have a distinct suspicion that Patterson is talking about precisely what Gee was: that from the fossil evidence alone, we cannot determine whether Archaeopteryx is the ancestor of all birds, or an offshoot of the lineage that produced birds. And a very brief look reveals precisely what I suspected. This isn’t the problem for evolution that you seem to think it is.

A simple thought experiment highlights this concept. Assume that at some point in the future, scientists find some scientific knowledge that makes the naturalistic origin of life a more plausible possibility given the time constraints. (For instance…given completely arbitrary probabilities, say there is a 15% chance of OOL from unliving chemicals driven by natural processes in the lifetime of the earth to date) Does this mean that it must have happened that way in the past? Clearly the answer is no.

No, it doesn’t mean it must have happened that way in the past. However, we can show ways it may have happened, or ways that it was likely to have happened. Merely showing a likely way for the origin of life to have occurred given the conditions on Earth four-odd billion years ago puts abiogenesis far ahead of the creationist hypothesis, due to the latter’s lack of parsimony.

Incidentally, as Dawkins explained in The God Delusion, the actual life-generating event needn’t be particularly likely to occur. After all, it’s only happened once in the history of the planet Earth, so far as we’re aware; given the variety of conditions and the timespan involved, that suggests it’s something of a low-probability event–and it only needed to happen once.
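
(To put that “needn’t be particularly likely” point into plain arithmetic, with numbers invented purely for illustration: suppose the chance of life arising on any one suitable planet over its history is some tiny value p. Then the chance of it arising at least once among N such planets is

$$P(\text{at least once}) = 1 - (1 - p)^{N} \approx 1 - e^{-pN},$$

so with, say, p = 10⁻⁹ and N = 10²⁰ suitable planets, pN = 10¹¹ and the probability is effectively 1. The numbers are made up; the shape of the argument–that a one-off event can be vanishingly improbable on any given planet and still be near-certain somewhere–is what matters.)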

But even claims of certainty about experimental science are unjustified. The history of science contains many examples of widely held scientific beliefs being overturned. Phlogiston is probably the most famous, but geosynclinal theory (preceding plate tectonics) is a more non-experimental science example. So even claims about experimental science should be made with this in mind, evoking a more humble stance. Comments about CDE being a ‘fact’ or being on par with gravity are unfounded and display a profound ignorance of science and history. Such comments are not scientific, but faith-based.

Wrong, wrong, wrong. You’re conflating an awful lot of things here, particularly with regard to scientific terminology. First, as I said above, scientific knowledge is tentative and admittedly so. Scientists are human, and are certainly prone in some cases to overstating their certainty about one given theory or another, but in general we recognize that our knowledge is subject to revision as future evidence becomes available. There is no 100% certainty in science.

Here’s the point where definitions would be important. In science, a “fact” is something that can be observed–an object, a process, etc. A “law” is a (usually) mathematical description of some process or fact. A “theory” is a model that explains how facts and laws work, and makes predictions of future observations that can be used to validate or falsify it. Gravity is a fact, a law, and a theory. The fact of gravity is that things with mass can be observed to be attracted to one another; the law of gravity is F=G*[(m1*m2)/R^2]; the (relativistic) theory of gravity is that massive objects warp spacetime, causing changes in the motion of other massive objects. Evolution is similar: the fact of evolution is the process of mutation and selection that can be observed and has been observed under a variety of different levels of control; the theory of evolution by natural selection is that organisms are descended with modification from a common ancestor through an ongoing selection process consisting of various natural forces and occurrences.
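
(For concreteness, here’s the “law” part in action, using rounded textbook values for the Earth–Moon system–approximate figures only, just to show that a law is the sort of thing you plug numbers into:

$$F = \frac{G m_1 m_2}{r^2} = \frac{(6.67\times10^{-11})(5.97\times10^{24})(7.35\times10^{22})}{(3.84\times10^{8})^{2}} \approx 2\times10^{20}\ \text{N},$$

with G in N·m²/kg², the masses of the Earth and Moon in kilograms, and the average Earth–Moon distance in meters. Note that none of this tells you why masses attract; that’s the theory’s job.)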

The claims by Gould and others that evolution is a fact are referring to the observable process of evolution. Your argument here amounts to suggesting that since scientists were wrong about phlogiston, they cannot claim with any certainty that things burn.

So how to evaluate between the two paradigms?

Reason and evidence?

This is the question that matters… Controversially, Kuhn claimed that choosing between paradigms was not a rational process.

…?

Whilst not subscribing to complete relativism, I believe there is a real subjective element in choosing between paradigms. Objective problems play a part, but how much those problems are weighted seems to be a fairly subjective decision.

From my perspective, the cascading failure of many of the evidences used to infer CDE is a clear indication of the marginal superiority of the (admittedly immature) YEC paradigm.

False dichotomy. Try again. Evidence against evolution–which, I remind you, you have not provided–is not evidence for YEC. Nor is it evidence for OEC or ID or Hindu Creation Stories or Pastafarianism. Each of those things requires its own evidence if it is to stand as a viable scientific paradigm.

Incidentally, you might actually want to look at some of the evidence for evolution before declaring any kind of “cascading failure.” You might also want to look at the evidence for creationism.

Chief examples are things such as embryonic recapitulation (found to be a fraud),

Found by scientists to be a fraud; never central to evolutionary theory.

the fossil record (Found to exhibit mostly stasis and significant convergence),

Source? Experts disagree.

the genetic evidence (Found to exhibit massive homoplasy).

Source? Experts disagree.

Update: And the disagreement between molecular and morphological data.

Nothing in the article you’ve linked suggests any problems for evolution. It merely shows how useful the genetic and molecular analyses are in distinguishing species and discovering exactly how organisms are related; I think you’ll find that most biologists agree with that sentiment, which is part of why there’s so much more focus on genetic evidence than fossil evidence now. Heck, as long as we’re quoting, here’s Francis Collins:

“Yes, evolution by descent from a common ancestor is clearly true. If there was any lingering doubt about the evidence from the fossil record, the study of DNA provides the strongest possible proof of our relatedness to all other living things.”

It is curious, however, that even with the near monopoly of the CDE paradigm in science education in America, only a small fraction believe it. (CDE hovers around 10%, whilst 50+% accept YEC and the remainder theistic evolution.) This certainly indicates to me that perhaps it is CDE that is not as compelling an explanation as YEC.

So, an appeal to popularity? Yeah, that’s valid. Yes, evolution is believed by a fraction of the laity. Although your numbers suggest it’s about half–theistic evolution is still evolution, and evangelical Francis Collins agrees far more with Richard Dawkins than with Duane Gish. Strangely enough, among scientists–you know, the people who have actually examined the evidence, regardless of their religious beliefs–it’s believed by the vast majority. What does that suggest?

Whatever the decision, it is more appropriate to say that YEC is the “better inferred explanation” than CDE or vice versa. Such an understanding of the debate leads to a far more productive discourse and avoids the insults, derision and anger that seem to be so prevalent.

I’m afraid you’ve lost me, so I’ll sum up. Your position is based on an examination of the situation that ignores the complete lack of evidence for the “YEC paradigm” and inflates perceived flaws in the “CDE paradigm” in order to make them appear to be somewhat equal. From there, you ignore the basic lack of parsimony in the “YEC paradigm” and make appeals to logical fallacies in order to declare it the more likely explanation.

Alan, you’re clearly a fairly intelligent guy, but that more or less amounts to your argument having a larger proportion of big words than the average creationist’s. Your use of false dichotomy and argumentum ad populum as though they had any value to science, your quote-mining to make your point, your misinterpretation of popular science articles and assumption that they refute a century of peer-reviewed journals, your ignorance of the actual evidence for evolution, and your postmodernist take on the whole debate are all standard creationist tactics. You’re clearly intelligent enough and interested enough to correct your misconceptions and your errors in thinking, Alan, and I hope you take this chance to examine the evidence with an open mind and understand that scientific theories are based on positive evidence, not negative evidence against a competing theory. Thanks for the article!