More on Movement Problems (or, Definitions Matter)

I’ve noticed a disturbing trend lately, and while there may be a bit of “when you’re a hammer, every problem starts looking like a nail” going on, I can’t help but see it as a symptom of the apparently growing notion that “skepticism” is something you join rather than something you do. I keep seeing a twofold trend: people venerating logic and reason while failing to actually understand them (or at least to understand them as well as they think they do), and people using terms like “rational” or “fallacy” in value-laden ways that strip them of their actual meaning.

The first time I really took notice of this was when Don talked about his trip to a CFI meeting in Indianapolis. At the meeting, he encountered a number of CFI members who saw skepticism not as a set of cognitive tools, but as a set of dogmatic rules which should be taught to people. In addition, and perhaps most relevantly:

[A]lmost every member I interacted with afterward was like an automaton repeating poorly understood buzzwords: “critical thinking,” “skepticism,” “freethought,” etc. They said these words and seemed to believe that they understood them and that, through that understanding, were part of a greater whole.

The same trend was the subject of the recent kerfuffle with Skepdude. The ‘Dude clearly held logic in high esteem, and clearly understood that fallacies were bad things, but just as clearly didn’t understand what made fallacies fallacious, and was quick to throw out the term “ad hominem” where it did not apply.

More alarming, however, were the comments of the much more prominent skeptic Daniel Loxton, who claimed that most insults amount to fallacious poisoning of the well, despite that clearly not being the case under the fairly strict and clear definition of poisoning the well.

You can see the same thing in spectacular action in the comment thread here, where commenter Ara throws around terms like “rational” and “anti-rational” as part of an argument that echoes Skepdude’s attempts to say that a valid argument doesn’t make its insults valid, when in fact the relevant point runs the other way: insults don’t make a valid argument invalid.

Despite what Mr. Spock would have you believe, saying that something is “rational” or “logical” is to say almost nothing about the thing you are trying to describe. Any position, any conclusion–true or false, virtuous or reprehensible, sensible or absurd–can be supported by a logically valid argument. For instance:

All pigs are green.
All ostriches are pigs.
Therefore, all ostriches are green.

That’s a logically valid argument. The conclusion follows inexorably from the premises. That the conclusion is false and absurd is only because the premises are equally false and absurd. The argument is unsound, but it is perfectly logical. “Logical” is not a value judgment, it is an objective description, and can only be accurately applied to arguments.¹
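The validity/soundness distinction is mechanical enough to check by brute force. Here’s a minimal Python sketch (the individuals and the category assignments are hypothetical, invented purely for illustration): read “All A are B” as subset inclusion, enumerate every possible way of assigning the three categories over a tiny universe, and confirm that the conclusion holds in every assignment where both premises hold–that’s validity. Soundness additionally requires the premises to be true in the “actual world,” where no pigs are green, so the soundness check fails.

```python
from itertools import combinations

universe = {"wilbur", "ozzy", "kermit"}  # three hypothetical individuals

def powerset(s):
    """Every possible extension a category could have over the universe."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def all_are(a, b):
    """Read 'All A are B' as subset inclusion."""
    return a <= b

# Validity: in *every* assignment of the categories where both premises
# hold, the conclusion holds too. (8^3 = 512 assignments; brute force.)
valid = all(
    all_are(ostriches, green)
    for pigs in powerset(universe)
    for ostriches in powerset(universe)
    for green in powerset(universe)
    if all_are(pigs, green) and all_are(ostriches, pigs)
)
print(valid)  # True: the form alone guarantees the conclusion

# Soundness: validity plus true premises. In the actual world no pigs
# are green, so the first premise is false and the argument is unsound.
actual_pigs, actual_green = {"wilbur"}, set()
sound = valid and all_are(actual_pigs, actual_green)
print(sound)  # False: valid form, false premise
```

The point the code makes concrete is that validity is a property of the argument’s shape, checked without ever consulting the facts; soundness is the only place the real world enters.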

“Rational” is similar. There’s a lot of equivocation possible with “rational,” because it can mean “sensible” as well as “based on reason” or “sane” or “having good sense.” Some of those meanings are value-laden. However, if we are describing a conclusion, an argument, or a course of action, and if we are hoping to have any kind of meaningful discussion, then it’s important to be clear on what we’re trying to say when using the word “rational.”

If, for instance, I’m using the term “rational” to call an idea or action or something “sane” or “possessing good sense,” I’m probably expressing an opinion. “Good sense” is a subjective quality, and the things I consider “sane” may not be the same particular things that are excluded from the DSM-IV.

If, however, I’m trying to say that a belief or course of action or idea is “sensible” or “based on reason,” then I must first know what the reasons or senses involved are. A “sensible” course of action depends on subjective judgment, which is largely driven by circumstance and context. If someone cuts me off at 80mph on the freeway, I may consider such an action to be insensible, but not knowing what caused the person to take that action–say, for instance, their passenger was threatening them, or going into labor, or something–I really have no way of judging the sensibility of the action.

Similarly, if I don’t know what reasons are driving a person to hold some belief or take some action, then I cannot know if that action is based on reason–i.e., if it’s “rational,” in this sense. For instance, if I believe that autism is caused by mercury toxicity and that there are toxic levels of mercury in childhood vaccinations, then it may be a reasonable course of action to refuse to immunize my child. That an action may be wrong, or may be based on false reasons or bad reasons, does not make it irrational or unreasonable.

The fact is that most people do not knowingly take actions or hold beliefs for no reason. Many people take actions or hold beliefs for bad reasons, or ill-considered reasons, but most people do think “logically” and “rationally.” The problem comes from incorrect premises, or from a failure to consider all relevant reasons or weigh those reasons appropriately.

What I’m seeing more of lately, though, is the word “rational” used to mean “something that follows from my reasons” or “something I agree with,” or more simply, “good.” None of these are useful connotations, and none of them accurately represent what the word actually means. Similarly, “fallacy” is coming to mean, in some circles or usages, “something I disagree with” or “bad,” which again fails to recognize the word’s actual meaning. This is fairly detrimental: we already have a word for “bad,” but we don’t have another good word for “fallacy,” and the two are not synonyms.

It seems like an awful lot of skeptics understand that logic and reason are good and important, but they don’t actually seem to understand what makes them work. They seem happy to understand the basics, to practice a slightly more in-depth sort of cargo cult argumentation, while missing the significant whys and wherefores. Sure, you might be able to avoid fallacious arguments by simply avoiding anything that looks like a fallacy, but if you actually understand what sorts of problems cause an argument to be fallacious, it makes your arguing much more effective.

Let me provide two examples. First, my car: I can get by just fine driving my car, even though I really know very little about what’s going on underneath the hood and throughout the machinery. It’s not that I’m not interested; I find the whole process fascinating, but I haven’t put the work in to actually understand what’s going on on a detailed level. Someone who knew my car more intimately would probably get better gas mileage, would recognize problems earlier than I do and have a better idea of what’s wrong than “it makes a grinding noise when I brake,” and would probably use D2 and D3, whatever those are. I don’t get the full experience and utility out of my car, and that’s okay for most everyday travel. But you’re not going to see me entering into a street race with it.

On the other hand, I love cooking, and I’ve found that understanding the science behind why and how various processes occur in the kitchen has made me a much more effective cook. Gone are the days when my grilling was mostly guesswork, and when my ribs would come out tough and stringy. Now that I understand how the textures of muscle and connective tissue differ, and how different kinds of cooking and heat can impact those textural factors, I’m a much better cook. Now that I understand how searing and browning work on a chemical level, I’m a much better cook. I can improvise more in the kitchen, now that I have a better understanding of how flavors work together, and how to control different aspects of taste. I’m no culinary expert, but I can whip up some good meals, and if something goes a way that I don’t like, I have a better idea of how to change or fix it than I did when I was just throwing things together by trial and error.

If you’re content with reading some skeptical books and countering the occasional claim of a co-worker, then yeah, you really don’t need to know the ins and outs of logic and fallacies and reasoning and so forth. But if you want to engage in the more varsity-level skeptical activities, like arguing with apologists or dissecting woo-woo claims in a public forum, then you’re going to need to bring a better game than a cursory understanding of logic and basic philosophy. You don’t need to be a philosophy major or anything, but you might need to do reading beyond learning this stuff by osmosis from hanging out on the skeptical forums. Mimicking the techniques and phrasing of people you’ve seen before only gets you so far; if you really want to improvise, then you have to know how to throw spices together in an effective way.

I’m generally against the faction who wants to frame skepticism as some new academic discipline. I think that’s silly, and I think (regardless of intent) that it smacks of elitism. I’m of the opinion that anyone can be a skeptic, and that most people are skeptics and do exercise skepticism about most things, most of the time. But that doesn’t mean that skepticism comes easily, or that the things we regularly talk about in skeptical forums are easily understood. You have to do some work, you have to put in some effort, and yeah, you have to learn the basics before you can expect to speak knowledgeably on the subject. But believe me, it takes a lot more to learn how to cook a decent steak than to learn how to cook up a good argument.

1. I suppose one could describe the thinking or processing methods of an individual or machine as “logical” in a moderately descriptive way, but it still doesn’t give much in the way of detail. What would a non-logical thought process be? One unrelated non-sequitur after another?

Please feel free to dismiss the following

What should have been a relatively academic conversation has become a feud, and I’m already finding it rather tiresome. I’m Phil Plait’s proverbial “dick,” you see, because I referenced an obscure little movie from twelve whole years ago made by a pair of independent directors with only, like, two Academy Awards to their names, and starring a bunch of unknown Oscar-winning actors, which only ranks #135 in IMDb’s Top 250 films of all time. Maybe it would have been better if I’d referenced a series of porn videos of drunk young women.

Also, because I’m snarky and sarcastic. Well, okay, guilty as charged.

So I’m exactly what Phil Plait was referring to, even though Phil’s clarifications make me suspect that even he doesn’t know exactly what he was referring to, and his speech has become a Rorschach Test for whatever tactic(s) any particular skeptic wants to authoritatively decry. Sure, fine, whatever. I’ve been called worse. By myself, no less.

Anyway, Junior Skeptic’s Daniel Loxton weighed in on Skepdude’s tweet:

Now, I’m no great fan of Loxton. I was; I enjoy Junior Skeptic, and I like his Evolution book. But I disagree with nearly everything he writes on skepticism, I think he tends to adopt a very condescending tone and a very authoritarian attitude over the skeptical movement (such as it is), and I lose a great deal of respect for anyone–especially a skeptic–who blocks people for disagreeing with them. You can read through my Twitter feed, if you like; I defy you to find any abuse or insult which would justify blockage.

So that’s my stated bias out of the way. I address Loxton’s point here not out of bitterness, but out of genuine surprise that someone who is so vocal and respected in the skeptical movement could be so very wrong about basic logical fallacies like ad hominem and poisoning the well. I also can’t help but feel a little prophetic with that whole last post I wrote about sloppy thinking.

Edit: I also want to offer a brief point in defense of Daniel Loxton: being a Twitter user, and knowing the limitations of the medium, it’s possible that truncating his thoughts in that medium impeded what he was trying to say, and that the mistakes are due less to sloppy thinking or misunderstanding, and more to trying to fit complex thoughts into ~140 characters. That being said, the proper place to make such a complex point without sacrificing clarity would have been here, at the linked post, in the comment section.

Loxton’s first claim, as I understand it, is that most insults belong to the “poisoning the well” subcategory of the ad hominem fallacy. This is wrong on a couple of levels. While poisoning the well is indeed a subcategory of ad hominem, neither category can be said, by any reasonable standard, to include “most insults.”

A little background: the ad hominem fallacy belongs to the fallacies of relevance, arguments whose premises offer insufficient support for their conclusions, and which are generally used to divert or obscure the topic of a debate. Ad hominem accomplishes this in one of two related ways: by attempting to draw a conclusion about someone’s argument, points, or claims from an irrelevant personal attack, or by attempting to divert the debate from claims and arguments to the character of one of the debaters.

It becomes fairly easy, then, to see why “most insults” do not qualify as the ad hominem fallacy: most insults are not arguments. A logical fallacy, by definition, is an error in reasoning; in order for something to qualify as a fallacy, it must at least make an attempt at reasoning. If I say “Kevin Trudeau is a motherfucker,” I’m not making any actual argument. There are no premises, there is no conclusion, there is no attempt at reasoning, and so there can be no fallacy.

In order for there to be fallacious reasoning, there must first be some attempt at reasoning, which requires some semblance of premises and a conclusion. “Kevin Trudeau says colloidal silver is a useful remedy. But Kevin Trudeau is an idiot. So, yeah,” is more obviously fallacious (even though, as Skepdude would happily and correctly point out, the conclusion–“therefore Kevin Trudeau is wrong about colloidal silver”–is only implied). The implied conclusion is not sufficiently justified by the premises; that abusive second premise says nothing about the truth or falsehood of Kevin Trudeau’s claim. Even if the insult is true, an idiot is still capable of valid arguments and true statements.

I could leave this here, I suppose; if poisoning the well is indeed a subcategory of ad hominem fallacies, and “most insults” are not in fact ad hominem fallacies, then “most insults” could not also be part of a subset of ad hominem fallacies. But poisoning the well is a tricky special case, and if there’s one thing I’m known for, it’s belaboring a point.

So what of poisoning the well? It’s a way of loading the audience, of turning a potential audience against your opponent before they even get a chance to present their argument. You present some information about your opponent–true or false–that you know your audience will perceive as negative, before your opponent gets a chance to state their case. The implication (and it’s almost always implied, as Loxton rightly notes) is that anything your opponent says thereafter is unreliable or incorrect.

Here’s where it gets tricky: it barely qualifies as a fallacy, because all the speaker is doing is offering an irrelevant fact about his opponent’s character. As we said, in order for something to be a logical fallacy, it has to contain an error in reasoning. The point of poisoning the well is not to commit a fallacy yourself, but to make the audience commit one–specifically, an ad hominem fallacy–by dismissing your opponent’s claims and arguments based on the irrelevant information you provided at the beginning. So poisoning the well is a subset of ad hominem fallacies, one where the fallacy is committed by the audience at the prompting of the well-poisoning speaker.

Here’s where Loxton gets it wrong–and only slightly, I might add; I had to do a fair amount of research before I felt confident that this was a key point: the key feature of poisoning the well is that it’s done pre-emptively. Insults offered after your opponent has stated their case may be an attempt to manipulate the audience into the same ad hominem fallacy, but they do not qualify as poisoning the well.

An example: You open up a copy of “Natural Cures THEY Don’t Want You To Know About” by Kevin Trudeau, and someone has placed inside the front cover a description of Trudeau’s various fraud convictions. Consequently, everything you read in the book will be tainted by your knowledge that Trudeau is a convicted fraud. The well has been thus poisoned, and now you’re prompted to dismiss anything he says on the basis of his personal characteristics.

If someone places that same note halfway through the book, or at the end, and you don’t encounter it until you finish or partly finish, then you may still be inclined to commit an ad hominem fallacy based on the contents of that note. However, this is not poisoning the well, which requires preemption.

There’s an issue here, and it touches on all the talk I’ve been doing recently about using arguments based on ethos in various situations. See, the fact that Kevin Trudeau is a convicted fraud is relevant if the point is whether or not you should trust what he has to say, or bother spending time and effort listening to it. His arguments stand or fall on their own, but his past as a huckster is of great relevance when deciding whether or not to take his word on anything.

It is a sad fact of life that no one person can conduct all the research necessary to establish or refute any given claim or argument. Consequently, we must often rely on trust to some degree in considering how to direct our efforts, which claims merit deep investigation, and which we can provisionally accept on someone’s word. This drives a wedge between whether a claim is true and whether a claim warrants belief. While it’s a laudable ideal to bring those two categories as close together as possible, perfect overlap remains impractical.

What this means is that, when considering whether or not to believe a claim or accept an argument (again, not whether or not the claim or argument is true), we generally use a person’s credibility as one piece of evidence in evaluating whether belief is warranted. It’s rarely the only piece of evidence, and it only qualifies as sufficient on its own for particularly ordinary claims, but it’s a relevant piece of evidence nonetheless.

But, and I want to make this abundantly clear, it has nothing to do with the truth of a claim or the validity of an argument; it has only to do with the credibility of the speaker making the claim and whether or not the claim warrants belief. We should be very clear and very careful about this point: Kevin Trudeau’s record as a fraudster has no bearing on whether or not his claims are true. It does, however, have a bearing on whether or not you or I or anyone else should trust him or believe what he has to say.

In other words, if most people told me it was sunny out, I’d take their word for it. If Kevin Trudeau told me it was sunny out, I’d look up. And I’d wonder if he had some way of profiting off people’s mistaken belief about the relative sunniness of a given day.

So, back to the issue of insults. There’s one more problem with saying that “most insults” are a subcategory of any fallacy, and that’s that, at least with fallacies of relevance, the fallacious nature of an argument is in the argument’s flawed structure, in its failure of logic, and not in the words which are used. An ad hominem fallacy is not fallacious because it contains an insult, but because the conclusion does not follow from the premises. Containing the insult is what makes it “ad hominem,” but it’s the flawed logic that makes it a fallacy.

For instance, take this argument:

If a person copulates with his or her mother, then that person is a motherfucker.
Oedipus copulated with his mother.
Therefore, Oedipus is a motherfucker.

The fact that this argument is vulgar and contains an insult has no bearing whatsoever on its validity. It’s clearly valid, and within the context of “Oedipus Rex,” it’s also sound. An insult alone does not make an argument into an ad hominem fallacy.

Take this argument, then:

All men are mortal.
Socrates is a man.
Socrates smells like day-old goat shit, on account of his not bathing.
Therefore, Socrates is mortal.

A valid argument is one in which the conclusion is logically implied by and supported by the premises. The conclusion here is, in fact, logically implied by the premises, and is justified by them. The insulting third premise does not support the conclusion, but the conclusion also does not rely on it. Its inclusion is unnecessary, but including it does nothing to invalidate the argument.

Finally, take this argument:

All men are mortal.
Plato is a really smart guy, and he says that Socrates is mortal.
Therefore, Socrates is mortal.

This is a fallacious argument–a pro hominem argument, sort of the opposite of ad hominem–because the conclusion is not sufficiently supported by the premises. The conclusion relies upon an irrelevant premise, which renders the logic invalid, despite the argument not being insulting at all.
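These three syllogisms can be checked mechanically. Below is a toy propositional sketch in Python (the atom names and the encoding are my own illustrative choices, not anything standard): an argument counts as valid iff its conclusion holds in every truth assignment where all of its premises hold. Tacking the irrelevant insulting premise onto the Socrates argument leaves it valid, while replacing “Socrates is a man” with Plato’s say-so breaks validity.

```python
from itertools import product

# Atoms: hypothetical stand-ins for the claims in the syllogisms above.
atoms = ["man", "mortal", "smelly", "endorsed"]  # endorsed = "Plato says so"
worlds = [dict(zip(atoms, vals))
          for vals in product([True, False], repeat=len(atoms))]

def valid(premises, conclusion):
    """Valid iff the conclusion holds in every world where all premises hold."""
    return all(conclusion(w) for w in worlds if all(p(w) for p in premises))

implies_mortal = lambda w: (not w["man"]) or w["mortal"]  # "All men are mortal"
is_man    = lambda w: w["man"]
is_smelly = lambda w: w["smelly"]      # the insulting, irrelevant premise
endorsed  = lambda w: w["endorsed"]    # "Plato, a really smart guy, says so"
is_mortal = lambda w: w["mortal"]

print(valid([implies_mortal, is_man], is_mortal))             # True
print(valid([implies_mortal, is_man, is_smelly], is_mortal))  # True: insult is harmless
print(valid([implies_mortal, endorsed], is_mortal))           # False: say-so doesn't entail
```

The second result reflects the monotonicity of classical logic: adding a premise can only shrink the set of worlds that need checking, so it can never turn a valid argument invalid. The third fails because there is a world where Plato endorses the conclusion and it is still false.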

I hope I laid that all out in a way that is clear, because I really don’t think I could make it any clearer. It bothers me to see terms which have distinct, specific, clear meanings being applied inaccurately by people who ought to know better. It further bothers me to see skeptics, who of all people should relish being corrected and doing the research to correct prior misconceptions, digging in their heels, committing style over substance fallacies, and generally misunderstanding basic principles of logic and argumentation.

But because I like to belabor a point, and because it’s been several paragraphs since I’ve been sufficiently snarky, let me offer one more example–pulled from real life, this time!–to clarify poisoning the well.

Here, the speaker offers a link to an opponent’s argument, but primes the audience first by obliquely calling his opponent a dick, and moreover, suggesting that the opponent is using tactics specifically identified by an authority in the relevant field as unacceptable and ill-advised. The speaker’s audience, on clicking through to the opposing article, is thus primed to read the article through the lens of the author’s suggested dickishness, and to dismiss it as dirty tactics from a dick, rather than actually considering the merits of the argument. This is classic poisoning the well, which, you’ll recall, is intended to cause the audience to commit an ad hominem fallacy.

We skeptics take pride in our allegiance to logic and evidence; we are aware of our own shortcomings; we are aware that we are fallible and that we make mistakes. In my opinion the above comments about Jenny McCarthy are a mistake that we should own up to and make amends, and stop using it. If you really want to counter Jenny’s anti-vaccine views, choose one of the claims she makes, do some research, and write a nice blog entry showing where she goes wrong and what the evidence says, but do not resort to ad-hominem attacks. We are skeptics and we ought to be better than that.

–Skepdude, “Skeptics Gone Wild,” 8/23/10.

An incomplete list of sources used for this post:

In which I piss on the ‘Dude’s rug

I’ve recently had a bit of a back-and-forth with the Skepdude that eventually spilled out onto Twitter. I started writing this post when it appeared that my last comment might languish in eternal moderation, but it has since shown up, so kudos to Skepdude for exceeding my pessimistic expectations. If this post hadn’t turned into a larger commentary before that bit posted, I might have deleted the whole thing. As it stands, I’ve used poor Skepdude as a springboard.

In any case, you can go ahead and read the relevant posts, then come back here and read my further commentary. It’s okay, I’ll wait.

Back? Great. Here’s the further commentary.

I think this conversation touches on a few key points relevant to skeptical activism. The first is this trepidation regarding basic rhetoric. We tend to throw around “rhetoric” in a disparaging fashion, often in the context of “baseless rhetoric” or “empty rhetoric.” And those can be to the point, but I think we run the risk of forgetting that rhetoric is the art of argumentation, the set of tools and strategies available to craft convincing arguments.

We’ve heard a lot from skeptics and scientists in the past few years claiming to be communications experts and saying that skeptics and scientists need to communicate better; we’ve all seen and complained about debates and discussions where the rational types fail because they can’t argue or work a crowd as well as their irrational opponents. These are both, to some degree, failures of rhetoric. Scientists are trained to argue in arenas and fora where facts and evidence are the most important thing, and the only convincing thing. That’s great if you’re defending a dissertation or critiquing a journal article, but as we’ve seen time and time again, it doesn’t translate to success in debates outside the university. Kent Hovind and Ray Comfort and Deepak Chopra may be blinkered idiots without a fact between the three of them, which would mean death in a scientific arena; in the arena of public discourse, though, that very lack becomes a strength. Because when you have no facts to work with, you have to make sure that the rest of your techniques have enough glitz and flash to distract the audience from your lack of substance. Scientists ignore the style, knowing they have substance, unaware of, or naïve about, audiences’ universal love for shiny things.

We in the skeptic community, such as it is, have spent a lot of time recently debating whether it’s better to use honey or vinegar; one lesson we should all take away from that, however, is that facts and logic are bland on their own. You need to dress them up with spices and sauces if you expect anyone to want to swallow them. If one of your goals is to convince human beings–not, say, robots or Vulcans–then you can’t rely on pure logic alone.

Moving back to Skepdude, he seems to be in two places in this argument. On one hand, he seems to think that we can ignore ethos and pathos, and argue on logos alone. Depending on his purpose, this may be enough. I don’t know what his goals are, in particular, but if he is content with arguing in such a way as to make his points clear and valid to any philosopher, scientist, or skeptic who happens to be reading them, then arguing with pure logic might be all he needs. Heck, he could break everything down and put it into those crazy modal logic proofs, and save himself a lot of typing.

But if he’s hoping to make his arguments convincing to a broader swath of people–and the amount of rhetorical questions and righteous anger in some of his other posts suggests that he is, and that he already knows this–then he’s going to need to slather those bland syllogisms in tasty pathos and savory ethos.

But here’s where I have the problem, and nowhere was it more apparent than in our Twitter conversation: while he elevates and venerates logic, he doesn’t understand a pretty basic principle of it, namely how fallacies–in particular, the ad hominem fallacy–work.

The whole post revolves around skeptics saying that Jenny McCarthy claims to oppose toxins yet uses Botox. Skepdude calls this an ad hominem fallacy. And I can see where it could be. Where he makes his mistake–and where most people who mistakenly accuse ad hominem make the mistake–is in failing to understand that ad hominem fallacies are all about the specific context. It’s true; if my only response to Jenny McCarthy’s anti-toxin arguments were “Yeah, well you put botox in your face, so who cares what you think,” I’d be dismissing her arguments fallaciously, by attacking her character–specifically, by suggesting that her actions invalidate her arguments.

But that doesn’t mean that any time I were to bring up McCarthy’s botox use would be fallacious. Let’s say I said, for instance, “You claim to be anti-toxin, yet you use botox; that suggests you’re a hypocrite, or that you don’t understand what toxins are.” Now, if I left it at that, it would still be fallacious; saying just that in response to her anti-vaccine arguments would be fallaciously dismissing them on the basis of her character.

Now, let’s imagine I said: “In fact, all the evidence demonstrates that the ‘toxins’ you insinuate are in vaccines are, in fact, present in non-toxic doses. Furthermore, the evidence shows that there is no link between vaccines and any autism spectrum disorder.” This bit addresses the substance of her argument, and does so using facts and evidence. If I further added “Also, you claim to be anti-toxin, yet you use botox; either you’re a hypocrite, or you don’t understand what toxins are,” I would most definitely be attacking her character, but it would not be fallacious because I wouldn’t be using it to dismiss her arguments.

The ad hominem fallacy requires that last part: in order for it to be fallacious, in order for it to render your argument invalid, you must be using the personal attack to dismiss your opponent’s arguments. Otherwise, it’s just a personal attack.

Skepdude disagrees:

This is what he linked to, by the way.

I replied:

And these were my links: 1 2 3.

And then I walked away from Twitter for a few hours, because I’m getting better at knowing when to end things.

And then I started writing this post, because I’m still not very good at it. I’d respond to the ‘Dude on Twitter, but I feel bad dredging up topics after several hours, and I know what I’m going to say won’t fit well in Tweets.

Anyway, the ‘Dude responded some more:

Oh, I’m so glad to have your permission. I would have tossed and turned all night otherwise.

Yes, you can infer what someone’s saying from their speech. I can even see some situations where the implication is strong enough to qualify as a logical fallacy–of course, the implication has to be an argument before it can be a fallacious one, and that’s a lot to hang on an implied concept–but that is, after all, the whole point of the Unstated Major Premise. However, (as I said in tweets) there’s a razor-thin line between inferring what an argument left unstated and creating a straw man argument that’s easier to knock down (because it contains a fallacy).

Skepdude even found a quote–in one of my links, no less!–that he thought supported this view:

He’s right that the ad hominem fallacy there doesn’t end with “therefore he’s wrong”; most ad hominem fallacies don’t. His larger point, however, doesn’t hold up, as a look at the full quote will demonstrate:

Argumentum ad hominem literally means “argument directed at the man”; there are two varieties.

The first is the abusive form. If you refuse to accept a statement, and justify your refusal by criticizing the person who made the statement, then you are guilty of abusive argumentum ad hominem. For example:

“You claim that atheists can be moral–yet I happen to know that you abandoned your wife and children.”

This is a fallacy because the truth of an assertion doesn’t depend on the virtues of the person asserting it.

Did you catch it? Here’s the relevant bit again: “If you refuse to accept a statement, and justify your refusal by criticizing the person who made the statement, then you are guilty of abusive argumentum ad hominem.” The point isn’t merely that the anti-atheist arguer attacked the atheist speaker; it’s that he used that attack to justify rejecting the speaker’s argument.

So, once again, context is key. If, for instance, the atheist had argued “all atheists are moral,” the “you abandoned your wife and children” comment would be a totally valid counterargument. The key in the example given was that the anti-atheist respondent used his attack on the atheist arguer to dismiss their argument, in lieu of actually engaging that argument–a point which my other links, which went into greater detail, all made clear.

I’ll say it again: in order for it to be an ad hominem, the personal attack has to be directly used to dismiss the argument. Dismissing the argument on other grounds and employing a personal attack as an aside or to some other end is, by definition, not an ad hominem. You don’t have to take my word for it, either:

In reality, ad hominem is unrelated to sarcasm or personal abuse. Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker’s argument by attacking the speaker instead of addressing the argument. The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn’t there. It is not a logical fallacy to attack someone; the fallacy comes from assuming that a personal attack is also necessarily an attack on that person’s arguments. (Source)

For instance, ad hominem is one of the most frequently misidentified fallacies, probably because it is one of the best known ones. Many people seem to think that any personal criticism, attack, or insult counts as an ad hominem fallacy. Moreover, in some contexts the phrase “ad hominem” may refer to an ethical lapse, rather than a logical mistake, as it may be a violation of debate etiquette to engage in personalities. So, in addition to ignorance, there is also the possibility of equivocation on the meaning of “ad hominem”.

For instance, the charge of “ad hominem” is often raised during American political campaigns, but is seldom logically warranted. We vote for, elect, and are governed by politicians, not platforms; in fact, political platforms are primarily symbolic and seldom enacted. So, personal criticisms are logically relevant to deciding who to vote for. Of course, such criticisms may be logically relevant but factually mistaken, or wrong in some other non-logical way.

An Abusive Ad Hominem occurs when an attack on the character or other irrelevant personal qualities of the opposition—such as appearance—is offered as evidence against her position. Such attacks are often effective distractions (“red herrings”), because the opponent feels it necessary to defend herself, thus being distracted from the topic of the debate. (Source)

Gratuitous verbal abuse or “name-calling” itself is not an argumentum ad hominem or a logical fallacy. The fallacy only occurs if personal attacks are employed instead of an argument to devalue an argument by attacking the speaker, not personal insults in the middle of an otherwise sound argument or insults that stand alone. (Source)

And so on, ad infinitum.

To return to the original point, let’s say a skeptic has said “Jenny McCarthy speaks of dangerous ‘toxins’ in vaccines, yet she gets Botox shots, which include botulinum, one of the most toxic substances around, right on her face.” Removed from its context, we cannot infer what the arguer intended. I can see three basic scenarios:

  1. The skeptic has used the phrase as evidence to dismiss Jenny McCarthy’s arguments about “dangerous ‘toxins’ in vaccines,” and has thus committed an ad hominem fallacy.
  2. The skeptic has used the phrase as an aside, in addition to a valid counter-argument against her anti-vaccine claims. This would not be an ad hominem fallacy.
  3. The skeptic has used the phrase as evidence for a separate but relevant argument, such as discussing Jenny McCarthy’s credibility as a scientific authority, in addition to dismissing her arguments with valid responses. This would not be an ad hominem fallacy.

There are other permutations, I’m sure, but I think these are the likeliest ones, and only one of the three is fallacious. Moreover, trying to read such a fallacy into those latter two arguments would not be valid cause to dismiss them; it would more likely demonstrate a lack of reading comprehension or a predisposition to dismiss such arguments.

Let’s say I’ve just finished demolishing McCarthy’s usual anti-vax arguments, and then I say “She must not be very anti-toxin if she gets Botox treatments on a regular basis.” Would it be reasonable to infer that I meant to use that statement as fallacious evidence against her point? I think not. If I’ve already addressed her point with evidence and logic, how could you infer that my aside, which is evidence- and logic-free, was also meant to be used as evidence in the argument I’ve already finished debunking?

On the other hand, let’s say I’ve done the same, and then I say “plus, it’s clear that Jenny doesn’t actually understand how toxins work. Toxicity is all about the dose. She thinks that children are in danger from the miniscule doses of vaccine preservatives they receive in a typical vaccine regimen, and yet she gets botox treatments, which require far larger dosages of a far more potent toxin. If toxins worked the way she apparently thinks they do, she’d be dead several times over.” Same point used in service of a separate argument. Would it be reasonable to infer here that I meant the point to be used as evidence against her anti-vaccine claims? Obviously not.

The only case in which it would be reasonable to make that inference would be some variation of me using that claim specifically to dismiss her argument. Maybe I say it in isolation–“Obviously she’s wrong about toxins; after all, she uses botox”–maybe I say it along with other things–“Former Playboy Playmate Jenny McCarthy says she’s anti-toxin, but uses botox. Sounds like a bigger mistake than picking her nose on national TV”–but those are fallacies only because I’m using the irrelevant personal attack to dismiss her argument.

So why have I put aside everything else I need to do on Sunday night to belabor this point? Well, I think that it’s a fine point, but one worth taking the time to understand. Skepdude’s argument is sloppy; he doesn’t seem to understand the fine distinctions between fallacious ad hominem and stand-alone personal attacks or valid ethical arguments, and so he’s advocating that skeptics stop using arguments that could potentially be mistaken for ad hominem fallacies. That way he–and the rest of us–could keep on being sloppy in our understanding and accusations of fallacies and not have to worry about facing any consequences for that sloppiness.

I can’t help but be reminded of my brother. When he was a kid, he did a crappy job mowing the lawn, and would get chewed out for it. He could have taken a little more time and effort to learn how to do it right–heck, I offered to teach him–but he didn’t. Rather, by doing it sloppily, he ensured that he’d only be asked to do it as a last resort; either Dad or I would take care of it, because we’d rather see it done right. He didn’t have to learn how to do a good job because doing a crappy job meant he could avoid doing the job altogether. By avoiding the job altogether, he avoided the criticism and consequences as well.

The problem, of course, is that the people who actually knew what they were doing had to pick up the slack.

This is the issue with Skepdude’s argument here, and I think it’s a point worth making. I disagree with those people who want to make skepticism into some academic discipline where everything is SRS BZNS, but that doesn’t mean we shouldn’t have some reasonable standards. Argumentation is a discipline and an art. It takes work, it takes research and effort, and it requires you to understand some very subtle points. It’s often hard to distinguish a fallacious argument from a valid one, especially in some of the common skeptical topics, since some of the woo-woo crowd have become quite adept at obfuscating their fallacies. Logic and science require clarity and specificity from both terms and arguments. “Ad hominem fallacy” means a certain, very particular thing, and it’s not enough to get a general idea and figure that it’s close enough.

If you know what the fallacies actually are and you structure your arguments and your rhetoric in ways that are sound and effective, then you don’t need to worry about people mistaking some bit of your writing for a logical fallacy. You get to say, “no, in fact, that’s not a fallacy, but I could see where you might make that mistake. Here’s why…” When you do the job right, when your arguments are valid and stand on their own, you don’t need to fear criticism and accusation. Isn’t that what we tell every psychic, homeopath, and theist who claims to have the truth on their side? “If your beliefs are true, then you have nothing to fear from scientific inquiry/the Million Dollar Challenge/reasonable questions”? Why wouldn’t we hold our own points and arguments to the same standard?

Skepdude, I apologize for making this lengthy, snarky reply. I generally agree with you, and I obviously wouldn’t follow you on Twitter if I didn’t generally like what you have to say. But on this point, which I think is important, I think you’re clearly wrong, and I think it’s important to correct. Feel free to respond here or in the comments at your post; I obviously can’t carry out this kind of discussion on Twitter.

On Labeling

I keep running into an issue with labels. It wasn’t long ago that I revised my own from “agnostic” to the more accurate and more useful “agnostic atheist” (in a nutshell, anyway–but this is a topic for a future post). The problem I have is that the relevant parts of my beliefs didn’t change, only what I called myself did. I didn’t have a belief in any gods when I called myself an agnostic, and I don’t have any belief in any gods now that I call myself an atheist. From any objective standpoint, I was an atheist the whole time.

And this is the substance of the problem: the dissonance between what a person calls himself or herself, and what categories a person objectively falls into. These labels are frequently different, and frequently result in various confusions and complications.

On one hand, I think we’re inclined to take people at their word with regard to what their personal labels are. It’s a consequence of having so many labels that center around traits that can only be assessed subjectively. I can’t look into another person’s mind to know what they believe or who they’re attracted to or what their political beliefs really are, or even how they define the labels that relate to those arenas. We can only rely on their self-reporting. So, we have little choice but to accept their terminology for themselves.

But…there are objective definitions for some of these terms, and we can, based on a person’s self-reporting of their beliefs, see that an objectively-defined label–which may or may not be the one they apply to themselves–applies to them.

I fear I’m being obtuse in my generality, so here’s an example: Carl Sagan described himself as an agnostic. He resisted the term “atheist,” and clearly gave quite a bit of thought to the problem of how you define “god”–obviously, the “god” of Spinoza and Einstein, which is simply a term applied to the laws of the universe, exists, but the interventionist god of the creationists is far less likely. So Sagan professed agnosticism apparently in order to underscore the point that he assessed the question of each god’s existence individually.

On the other hand, he also seemed to define “atheist” and “agnostic” in unconventional ways–or perhaps in those days before a decent atheist movement, the terms just had different connotations or less specific definitions. Sagan said “An agnostic is somebody who doesn’t believe in something until there is evidence for it, so I’m agnostic,” and “An atheist is someone who knows there is no God.”

Now, I love Carl, but it seems to me that he’s got the definitions of these terms inside-out. “Agnostic,” as the root implies, has to do with what one claims to know–specifically, it’s used to describe people who claim not to know if there are gods. Atheist, on the other hand, is a stance on belief–specifically the lack of belief in gods.

So, if we’re to go with the definitions of terms as generally agreed upon, as well as Carl’s own self-reported lack of belief in gods and adherence to the null hypothesis with regard to supernatural god claims, then it’s clear that Carl is an atheist. Certainly an agnostic atheist–one who lacks belief in gods but does not claim to know that there are no gods–but an atheist nonetheless.

The dilemma with regard to Sagan is relatively easy to resolve; “agnostic” and “atheist” are not mutually exclusive terms, and the term one chooses to emphasize is certainly a matter of personal discretion. In the case of any self-chosen label, the pigeon-holes we voluntarily enter into are almost certainly not all of the pigeon-holes into which we could be placed. I describe myself as an atheist and a skeptic, but it would not be incorrect to call me an agnostic, a pearlist, a secularist, an empiricist, and so forth. What I choose to call myself reflects my priorities and my understanding of the relevant terminology, but it doesn’t necessarily exclude other terms.

The more difficult problems come when people adopt labels that, by any objective measure, do not fit them, or exclude labels that do. We see Sagan doing the latter in the quote above, eschewing the term “atheist” based on what we’d recognize now as a mistaken definition. The former is perhaps even more common–consider how 9/11 Truthers, Global Warming and AIDS denialists, and Creationists have all attempted to usurp the word “skeptic,” even though none of their methods even approach skepticism.

A related danger comes when groups try to claim people who, due to a lack of consistent or unambiguous self-reporting (or unambiguous reporting from reliable outside sources), can’t objectively be said to fit into them. We see this when Christians try to claim that the founding fathers were all devout Christian men, ignoring the reams of evidence that many of them were deists or otherwise unorthodox. It’s not just the fundies who do this, though; there was a poster at my college which cited Eleanor Roosevelt and Errol Flynn among its list of famous homosexual and bisexual people, despite there being inconsistent and inconclusive evidence to determine either of their sexualities. The same is true when my fellow atheists attempt to claim Abraham Lincoln and Thomas Paine (among others), despite ambiguity in their self-described beliefs. Those of us who pride ourselves on reason and evidence especially must be careful with these labels, lest we become hypocrites or appear sloppy in our application and definition of terms. These terms have value only inasmuch as we use them consistently.

The matter of people adopting terms which clearly do not apply to them, however, presents a more familiar problem. It seems easy and safe enough to say something like “you call yourself an atheist, yet you say you believe in God. Those can’t both be true,” but situations rarely seem to be so cut-and-dried. Instead, what we end up with are ambiguities and apparent contradictions, and a need to be very accurate and very precise (and very conservative) in our definition of terms. Otherwise, it’s a very short slippery slope to No True Scotsman territory.

Case in point, the word “Christian.” It’s a term with an ambiguous definition, which (as far as I can tell) cannot be resolved without delving into doctrinal disputes. Even a definition as simple as “a Christian is someone who believes Jesus was the son of God” runs afoul of Trinitarian semantics, where Jesus is not the son, but God himself. A broader definition like, “One who follows the teachings of Jesus” ends up including people who don’t consider themselves Christians (for instance, Ben Franklin, who enumerated Jesus among other historical philosophers) and potentially excluding people who don’t meet the unclear standard of what constitutes “following,” and so forth.

Which is why there are so many denominations of Christianity who claim that none of the other denominations are “True Christians.” For many Protestants, the definition of “True Christian” excludes all Catholics, and vice versa; and for quite a lot of Christians, the definition of the term excludes Mormons, who are also Bible-believers that accept Jesus’s divinity.

When we start down the path of denying people the terms that they adopt for themselves, we must be very careful that we do not overstep the bounds of objectivity and strict definitions. Clear contradictions are easy enough to spot and call out; where terms are clearly defined and beliefs or traits are clearly expressed, we may indeed be able to say “you call yourself bisexual, but you say you’re only attracted to the opposite sex. Those can’t both be true.” But where definitions are less clear, or where the apparent contradictions are more circumstantially represented, objectivity can quickly be thrown out the window.

I don’t really have a solution for this problem, except that we should recognize that our ability to objectively label people is severely limited by the definitions we ascribe to our labels and the information that our subjects report themselves. So long as we are careful about respecting those boundaries, we should remain well within the guidelines determined by reason and evidence. Any judgments we make and labels we apply should be done as carefully and conservatively as possible.

My reasons for laying all this out should become clear with my next big post. In the meantime, feel free to add to this discussion in the comments.

Reductio ad Shaquillum

I can conceive of a universe which is greater than all possible universes. If there is a maximally perfect being (God), then he would necessarily have made the greatest conceivable universe.

A universe which has Shaquille O’Neal movies in it is necessarily less great than a universe which does not have Shaquille O’Neal movies in it.

Since this universe has Shaquille O’Neal movies in it, I can conceive of a greater universe. Therefore, since this is not the greatest conceivable universe, it must not have been created by a maximally perfect god.

–The Kazaam Cosmological Argument*.

*Okay, all right, it’s an Ontological Argument, so the name’s not accurate, but the pun wouldn’t work otherwise. I tried the other way, but wrote myself into a corner somewhere after “Shaquille O’Neal’s acting career began to exist.” Now, given the quick annihilation of said career, it’s possible that it was just a virtual career, part of normal career vacuum fluctuations…

…And some have Grey-ness thrust upon ’em

So, Alan Grey provided some musings on the Evolution/Creation “debate” at his blog, at my request. I figured I ought to draft a response, since I’ve got a bit of time now, and since Ty seems to want to know what my perspective is. Let’s jump right in, shall we?

Thomas Kuhn, in his famous work ‘The structure of scientific revolutions’ brought the wider worldview concept of his day into understanding science. His (and Polanyi’s) concept of paradigmic science, where scientific investigation is done within a wider ‘paradigm’ moved the debate over what exactly science is towards real science requiring two things
1) An overarching paradigm which shapes how scientists view data (i.e. theory laden science)
2) Solving problems within that paradigm

I think I’ve talked about The Structure of Scientific Revolutions here or elsewhere in the skeptosphere before. At the time I read it (freshman year of undergrad), I found it to be one of the densest, most confusing, jargon-laden texts I’ve ever slogged through for a class. Now that I have a better understanding of science and the underlying philosophies, I really ought to give it another read. I’d just rather read more interesting stuff first.

Reading the Wikipedia article on the book, just to get a better idea of Kuhn’s arguments, gives me a little feeling of validation about my initial impressions all those years ago. See, my biggest problem with Structure–and I think I wrote a short essay to this effect for the class–was that Kuhn never offered a clear definition of what a “paradigm” was. Apparently my criticism wasn’t unique:

Margaret Masterman, a computer scientist working in computational linguistics, produced a critique of Kuhn’s definition of “paradigm” in which she noted that Kuhn had used the word in at least 21 subtly different ways. While she said she generally agreed with Kuhn’s argument, she claimed that this ambiguity contributed to misunderstandings on the part of philosophically-inclined critics of his book, thereby undermining his argument’s effectiveness.

That makes me feel a bit less stupid.

Kuhn claimed that Karl Popper’s ‘falsification criteria’ for science was not accurate, as there were many historical cases where a result occurred that could be considered as falsifying the theory, yet the theory was not discarded as the scientists merely created additional ad hoc hypothesis to explain the problems.

It is through the view of Kuhnian paradigms that I view the evolution and creation debate.

And I think that’s the first problem. To suggest that only Kuhn or only Popper has all the answers when it comes to the philosophy of science–which may not be entirely what Grey is doing here, but is certainly suggested by this passage–is a vast oversimplification. Kuhn’s paradigmatic model of science ignores, to a large degree, the actual methods of science; Popper’s view arguably presents an ideal situation that ignores the human element of science, and denies that there exists such a thing as confirmation in science–which, again, may be due to ignoring the human element. The paradigmatic view is useful; it reminds us that the human ability to develop conceptual models is partially influenced by cultural factors, and that scientists must be diligent about examining their preconceptions, biases, and tendencies toward human error (such as ad hoc justifications) if they are to conduct accurate science. Falsificationism is also useful; it provides a metric by which to judge scientific statements on the basis of testability, and demonstrates one ideal which the scientific method can asymptotically approach. But to try to view all of science through one lens or the other is myopic at best. Just as science is neither purely deductive nor purely inductive, neither purely theoretical nor purely experimental, it is neither purely paradigmatic nor purely falsificationist.

One thing to keep in mind, though, is Grey’s brief mention of ad hoc hypotheses used to smooth out potentially-falsifying anomalies. While I’m sure that has happened and continues to happen, it’d be a mistake to think that any time an anomaly is smoothed over, it’s the result of ad-hocking. The whole process of theory-making is designed to continually review the theory, examine the evidence, and alter the theory to fit the evidence if necessary. We’re living in a time, for instance, when our concept of how old and large the universe is may be undergoing revision, as (if I recall correctly) new evidence suggests that there are objects beyond the veil affecting objects that we can see. That doesn’t necessarily represent an ad hoc hypothesis; it represents a known unknown in the current model of the universe. Ad-hocking would require positing some explanation without sufficient justification.

(Curiously, Karl Popper obliquely referred to Kuhn’s scientific paradigm concept when he said “Darwinism is not a testable scientific theory but a metaphysical research programme.” )

It’s been a while since my quote-mine alarm went off, but it never fails. The quote is misleading at best, especially the way you’ve used it here, and somewhat wrong-headed at worst, as even Popper later acknowledged.

Here I define evolution (Common Descent Evolution or CDE) as: The theory that all life on earth evolved from a common ancestor over billions of years via the unguided natural processes of mutation and selection (and ‘drift’) and creation (Young earth creation or YEC) as: The theory that various kinds of life were created under 10,000 years ago and variation within these kinds occurs within limits via mutation and selection (and ‘drift’).

I can’t see anything in there to disagree with. Yet, anyway.

I believe CDE and YEC can both be properly and most accurately defined as being scientific paradigms.

This, however, seems problematic. CDE, certainly, may be a scientific paradigm (though as usual, I’d like that term to be pinned down to a more specific definition). Why on Earth would YEC be a scientific paradigm? Going back to Wikipedia, that font of all knowledge:

Kuhn defines a scientific paradigm as:
  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • how the results of scientific investigations should be interpreted

Alternatively, the Oxford English Dictionary defines paradigm as “a pattern or model, an exemplar.” Thus an additional component of Kuhn’s definition of paradigm is:

  • how is an experiment to be conducted, and what equipment is available to conduct the experiment.

So I can see, under a Creationist paradigm, that one might have different priorities for observations (searching, for instance, for the Garden of Eden or examining evidence for a Global Flood). I certainly understand the matter of formulating questions–we see this in debates with Creationists all the time: “who created the universe,” “why does the universe seem so fine-tuned to our existence,” and so forth. These questions imply what form their answers will take: the first suggests that there must have been an agent involved in the creation of the universe, the latter interprets the causal relationship in a human-centered, teleological fashion. If there’s one thing I’ve learned over years of experience with these debates, it’s the importance of asking the right questions in the right ways. Certainly scientists of past centuries labored largely under a YEC paradigm, and certainly Creationists and ID proponents looking at various lines of evidence interpret those lines of evidence in particular ways: ID proponents see everything in terms of engineering–machines, codes, programs, and so forth. I’m not entirely sure how a YEC paradigm would affect the available scientific equipment, though.

So I can see how YEC is a paradigm; I’m just not sure how it’s a scientific one. I mean, I can adopt a Pastafarian paradigm of looking at the world, and it may influence how I interpret scientific findings, but that doesn’t give it any scientific value or credence. A scientific paradigm, it seems to me, ought to develop out of science; allowing any paradigm to act as a justified scientific paradigm seems to me to be a little more postmodernist than is valid in science.

Whilst CDE proponents claim that CDE is falsifiable

And Popper, too.

(E.g. Haldane and Dawkins saying a fossil Rabbit in the Precambrian era would falsify CDE), it is easy to see how the theory laden-ness of science makes such a find unlikely.

Um…how? A find is a find, regardless of how theory-laden the scientists are. And it’s not as though evolution hasn’t had its share of moments of potential falsification. Darwin was unaware of genes; his theory was missing a mechanism of transmission. Were we to discover that genes were not prone to the sorts of mutations and variation and drift that Darwinian evolution predicts, the theory would have been worthless. But the study of genes validated Darwin. If we had discovered that DNA replication was not prone to errors and problems, that would have been a major nail in the coffin for Darwinian evolution, but instead the DNA replication process supported the theory. If our studies of the genome had revealed vast differences between apparently related species, with broken genes and junk DNA and retroviral DNA in wildly different places in otherwise-close species, that would be a serious problem for evolutionary theory. Instead, the presence and drift of such genetic bits are perhaps the best evidence available for evolution, and give us a sort of genetic clock stretching backwards along the timeline. It could have been that the genetic evidence wildly contradicted the fossil evidence, but instead we find confirmation and further explanation of the existing lines.
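The “genetic clock” mentioned above can be illustrated with back-of-the-envelope arithmetic. Here’s a minimal sketch of the basic molecular-clock idea; the substitution rate and the observed difference are made-up numbers for illustration only, and real estimates require careful fossil calibration:

```python
# Toy molecular-clock estimate: divergence time from neutral substitutions.
# Under a roughly constant substitution rate mu (per site, per year), two
# diverging lineages accumulate differences at a combined rate of 2*mu,
# so k = 2 * mu * t, which rearranges to t = k / (2 * mu).

def divergence_time(k, mu):
    """k: fraction of neutral sites that differ; mu: substitutions/site/year."""
    return k / (2.0 * mu)

# Hypothetical numbers for illustration only:
k = 0.016      # 1.6% of compared neutral sites differ between two species
mu = 1.0e-9    # one substitution per site per billion years
t = divergence_time(k, mu)
print(f"Estimated divergence: {t / 1e6:.0f} million years ago")
```

The point of the sketch is only that shared genetic differences accumulate in a roughly clock-like way, which is why the genetic evidence can be checked against the fossil record at all.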

Classification of rock strata was initially (and still commonly) done via the presence of index fossils. (Note: The designation of these fossils as representing a certain historical period was done within the CDE paradigm)

Bzzt! Simply untrue. There do exist index fossils–fossils which occur in only one stratum–which can be used to verify the dates of some strata. However, those dates have already been determined through other methods–radiometric dating, which strata lie on top of which, and so forth.

Incidentally, if anyone ever gets a chance to look into the various dating methods we have, I highly recommend it. I taught a lesson on it last Spring, and it’s really interesting stuff. You’d never believe how important trees are.
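For the curious, the core arithmetic behind radiometric dating is simple exponential decay. A minimal sketch: the 5,730-year half-life of carbon-14 is real, but the measured fraction below is a made-up sample value for illustration:

```python
import math

# Radiometric age from the fraction of a parent isotope remaining.
# N(t) = N0 * (1/2)**(t / half_life)  =>  t = half_life * log2(N0 / N)

def radiometric_age(fraction_remaining, half_life_years):
    """fraction_remaining: N/N0 measured in the sample (0 < value <= 1)."""
    return half_life_years * math.log2(1.0 / fraction_remaining)

# A sample retaining 25% of its original carbon-14 has sat through
# two half-lives:
age = radiometric_age(0.25, 5730.0)
print(f"Estimated age: {age:.0f} years")  # two half-lives = 11,460 years
```

Tree rings (dendrochronology) supply part of the independent calibration that keeps such estimates honest, which is why trees turn out to be so important.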

The finding of a fossil Rabbit in a rock strata would almost certainly result in classification of the strata as something other than pre-cambrian, or the inclusion of other ad hoc explanations for the fossil (Overthrusts, reworking etc).

No, I’m afraid that’s simply not the case. If a fossil rabbit were found in a Precambrian stratum that lay below the Cambrian strata, and both the stratum and the fossil could be reasonably dated to the Precambrian (through methods like radiometric dating), it would not simply force the redefinition of the stratum. One would then have to explain the presence of one geological stratum beneath several others that, chronologically, came earlier, and why there are other Precambrian fossils in this supposedly post-Cambrian stratum. Either way, the result is an insurmountable anomaly.

Granted, there could be alternate hypotheses to explain how the rabbit got there. Maybe there was a hole in the ground, and some poor rabbit managed to fall in, die, and get fossilized. But then we wouldn’t have a Precambrian rabbit, we’d have a post-Cambrian rabbit in a hole, and there ought to be other signs which could demonstrate that (not the least of which is that the rabbit shouldn’t date back to the Precambrian radiometrically, and the strata above it, closing off the hole, should be out of place with regard to the rest of the strata). In order to call the stratum the result of an overthrust or erosion or something, there would have to be other evidence for that. Geological folding and erosion, so far as I know, would not affect one fossilized rabbit without leaving other signs behind.

It is worth noting that many smaller (only 200 million year) similar type surprises are happily integrated within CDE. (A recent example is pushing back gecko’s 40 million years in time)

I’d like to see more examples and sources for this. I read the gecko article, and I don’t see where it’s at all what you’re suggesting. This is not an example of a clearly out-of-place animal in the wrong era, it’s an example of there being an earlier ancestor of a modern species than what we knew of before. The preserved gecko is a new genus and species–it’s not as though it’s a modern gecko running around at the time of the dinosaurs–and it’s from a time when lizards and reptiles were common. The point of the “rabbit in the Precambrian” example is that there were no mammals in the Precambrian era. Multicellular life was more or less limited to various soft-bodied things and small shelled creatures; most of the fossils we find from the Precambrian are tough to pin down to a kingdom, let alone a genus and species like Sylvilagus floridanus, for instance. There’s a world of difference between finding a near-modern mammal in a period 750 million years before anything resembling mammals existed, and finding a lizard during a lizard- and reptile-dominated time 40 million years before your earliest fossil in that line. There was nothing in the theory or the knowledge preventing a gecko from palling around with dinosaurs, there was just no evidence for it.

The main point here is that the claimed falsification is not a falsification of CDE, but merely falsifies the assumption that fossils are always buried in a chronological fashion. CDE can clearly survive as a theory even if only most fossils are buried in chronological fashion.

That may be closer to the case, as there is a wealth of other evidence for common descent and evolution to pull from. However, the Precambrian rabbit would call into question all fossil evidence, as well as the concept of geological stratification. It would require a serious reexamination of the evidence for evolution.

Many other events and observations exist which could be said to falsify evolution (e.g. the origin of life, soft tissue remaining in dinosaur fossils), but are happily left as unsolved issues.

How would the origin of life falsify evolution? Currently, while there are several models, there’s no prevailing theory of how abiogenesis occurred on Earth. It’s not “happily left as an unsolved issue;” scientists in a variety of fields have spent decades examining that question. Heck, the Miller-Urey experiments, though based on an inaccurate model of the early Earth’s composition, were recently re-examined and found to be more fruitful and valid than originally thought. The matter of soft tissue in dinosaur fossils has been widely misunderstood, largely due to a scientifically illiterate media (for instance, this article which glosses over the softening process). It’s not like we found intact Tyrannosaurus meat; scientists had to remove the minerals from the substance in order to soften it, and even then the tissue may not be original to the Tyrannosaurus.

It is because of these types of occurrences that I suggest CDE is properly assigned as a scientific paradigm. Which is to say that CDE is not viewed as falsified by these unexpected observations, but instead these problems within CDE are viewed as the grist for the mill for making hypothesis and evaluating hypothesis within the paradigm.

Except that nothing you’ve mentioned satisfies the criteria for falsifiability. For any scientific theory or hypothesis, we can state a number of findings that would constitute falsification. “Rabbits in the Precambrian” is one example, certainly, but origins of life? Softenable tissue in dino fossils? Previous gecko ancestors? The only way any of those would falsify evolution would be if we found out that life began suddenly a few thousand years ago, or some such. So far, no such discovery has been made, while progress continues on formulating a model of how life began on the Earth four-odd billion years ago.

In other words, you’ve equated any surprises or unanswered questions to falsification, when that’s not, nor has it ever been, the case.

YEC can also be properly identified as a scientific paradigm although significantly less well funded and so significantly less able to do research into the problems that existing observations create within the paradigm.

Yes, if only Creationists had more funding–say, tax-exempt funding from fundamentalist religious organizations, or $27 million that might otherwise be spent on a museum trumpeting their claims–they’d be able to do the research to explain away the geological, physical, and astronomical evidence for a billions-of-years-old universe; the biological, genetic, and paleontological evidence for common descent; the lack of any apparent barriers that would keep evolutionary changes confined to some small areas; and ultimately, the lack of evidence for the existence of an omnipotent, unparsimonious entity who created this whole shebang. It’s a lack of funding that’s the problem.

One such example of research done is the RATE project. Specifically the helium diffusion study which predicted levels of helium in zircons to be approximately 100,000 times higher than expected if CDE were true.

Further reading on RATE. I’m sure the shoddy data and the conclusions that don’t actually support YEC are due to lack of funding as well.

What placing YEC and CDE as scientific paradigms does is make sense of the argument. CDE proponents (properly) place significant problems within CDE as being something that will be solved in the future (E.g. origin of life) within the CDE paradigm. YEC can also do the same (E.g. Endogenous Retroviral Inserts).

Except that the origin of life isn’t a serious problem for evolution; evolution’s concerned with what happened afterward. That’s like saying that (hypothetical) evidence against the Big Bang theory would be a problem for the Doppler Effect. You’ve presented nothing presently that would falsify evolution, while there are already oodles of existing observations to falsify the YEC model. Moreover, you’ve apparently ignored the differences in supporting evidence between the two paradigms; i.e., that evolution has lots of it, while YEC’s is paltry and sketchy at best, and nonexistent at worst. It can’t just be a matter of funding; the YEC paradigm reigned for centuries until Darwin, Lord Kelvin, and the like. Why isn’t there leftover evidence from those days, when they had all the funding? What evidence is there to support the YEC paradigm, that would make it anything like the equal of the evolutionary one?

1) Ideas like Stephen Gould’s non-overlapping Magistra (NOMA) are self-evidently false. If God did create the universe 7000 years ago, there will definitely be implications for science.

More or less agreed; the case can always be made for Last Thursdayism and the point that an omnipotent God could have created the universe in medias res, but such claims are unfalsifiable and unparsimonious.

2) Ruling out a supernatural God as a possible causative agent is not valid. As with (1) such an activity is detectable for significant events (like creation of the world/life) and so can be investigated by science.

I’m not entirely clear on what you’re saying here. I think you’re suggesting that if a supernatural God has observable effects on the universe, then it would be subject to scientific inquiry. If that’s the case, I again agree. And a supernatural God who has no observable effects on the universe is indistinguishable from a nonexistent one.

a. To argue otherwise is essentially claim that science is not looking for truth, but merely the best naturalistic explanation. If this is the case, then science cannot disprove God, nor can science make a case that YEC is wrong.

Here’s where we part company. First, the idea that science is looking for “truth” really depends on what you mean by “truth.” In the sense of a 1:1 perfect correlation between our conceptual models and reality, truth may in fact be an asymptote, one which science continually strives for but recognizes as probably unattainable. There will never be a day when science “ends,” where we stop and declare that we have a perfect and complete understanding of the universe. Scientific knowledge, by definition, is tentative, and carries the assumption that new evidence may be discovered that will require the current knowledge to be revised or discarded. Until the end of time, there’s the possibility of receiving new evidence, so scientific knowledge will almost certainly never be complete.

As far as methodological naturalism goes, it doesn’t necessarily preclude the existence of supernatural agents, but anything that can cause observable effects in nature ought to be part of the naturalistic view. As soon as we discover something supernatural that has observable effects in nature, it can be studied, and thus can be included in the methodological naturalism of science.

Even if all this were not the case, science can certainly have a position on the truth or falsehood of YEC. YEC makes testable claims about the nature of reality; if those claims are contradicted by the evidence, then that suggests that YEC is not true. So far, many of YEC’s claims have been evaluated in precisely this fashion. While science is less equipped to determine whether or not there is a supernatural omnipotent god who lives outside the universe and is, by fiat, unknowable by human means, science is quite well equipped to determine the age of the Earth and the development of life, both areas where YEC makes testable, and incorrect, predictions.

b. Anthony Flew, famous atheist turned deist makes the point quite clearly when talking about his reasons for becoming a deist

“It was empirical evidence, the evidence uncovered by the sciences. But it was a philosophical inference drawn from the evidence. Scientists as scientists cannot make these kinds of philosophical inferences. They have to speak as philosophers when they study the philosophical implications of empirical evidence.”

What? We have very different definitions of “quite clearly.” Not sure why you’re citing Flew here, since he’s not talking about any particular evidence, since he has no particular expertise with the scientific questions involved, and since he’s certainly not a Young Earth Creationist, nor is his First Cause god consistent with the claims of YEC. I’m curious, though, where this quotation comes from, because despite the claim here that his conversion to Deism was based on evidence, the history of Flew’s conversion story cites mostly a lack of empirical evidence–specifically with regard to the origins of life–as his reason for believing in a First Cause God.

Flew’s comments highlight another significant issue. The role of inference. Especially in ‘historical’ (I prefer the term ‘non-experimental’) science.

You may prefer the term. It is not accurate. The nature of experimentation in historical sciences tends to be different from operational science, but it exists, is useful, and is valid nonetheless.

Much rhetorical use is given to the notion that YEC proponents discard the science that gave us planes, toasters and let us visit the moon (sometimes called ‘operational’…I prefer ‘experimental’ science). Yet CDE is not the same type of science that gave us these things.

No, CDE is the type of science that gives us more efficient breeding and genetic engineering techniques, a thorough understanding of how infectious entities adapt to medication and strategies for ameliorating the problems that presents, genetic algorithms, and a framework for understanding how and why many of the things we already take for granted in biology are able to work. It just happens to be based on the same principles and methodologies as the science that gave us toasters and lunar landers.
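Genetic algorithms, mentioned above, are a nice illustration of evolutionary principles doing practical work: the same mutation-and-selection loop that the theory describes can be used to solve optimization problems. Here is a deliberately minimal sketch (a toy "maximize the number of 1-bits" problem with made-up parameters, not any particular library's implementation):

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

GENOME_LEN = 20

def fitness(genome):
    # Fitness is simply the number of 1-bits; selection favors higher values.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit flips independently with a small probability, like a point mutation.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=100):
    # Start from a random population of bitstrings.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction with mutation refills the population.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # mutation plus selection reliably climbs toward the optimum of 20
```

No designer specifies the answer in advance; repeated variation and differential survival find it, which is precisely the point.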

Incidentally, the determination of the age of the universe and the Earth is based on precisely the same science that allowed us to go to the moon and make airplanes. Or, more specifically, the science that allows us to power many of our space exploration devices and homes and allows us to view very distant objects.

CDE is making claims about the distant past by using present observations and there is a real disconnect when doing this.

It’s also making claims about the present by using present observations. Evolution is a continuous process.

One of the chief functions of experiment is to rule out other possible explanations (causes) for the occurrence being studied. Variables are carefully controlled in multiple experiments to do this. The ability to rule out competing explanations is severally degraded when dealing with historical science because you cannot repeat and control variables.

Fair enough. It’s similar to surgical medicine in that regard.

You may be able to repeat an observation, but there is no control over the variables for the historical event you are studying.

“No control” is another oversimplification. We can control what location we’re looking at, time period and time frame, and a variety of other factors. It’s certainly not as tight as operational science, but there are controls and experiments in the primarily-observational sciences.

Not that it matters, because experiments are not the be-all, end-all of science. Predictions, observations, and mathematical models are important too. Science in general has much more to do with repeated observation than with experimentation. And yes, repeated observation is enough (in fact, it’s the only thing) to determine cause and effect.

Scientists dealing with non-experimental science have to deal with this problem, and they generally do so by making assumptions (sometimes well founded, sometimes not).

Guh? You act like they just come up with these assumptions without any justification.

A couple of clear examples are uniformitarianism (Geological processes happening today, happened the same way, the same rate in the past) and the idea that similarity implies ancestry.

Okay, two problems. One: if we were to hypothesize that geological processes happened somehow differently in the past, one would have to provide some evidence to justify that hypothesis. Without evidence, it would be unparsimonious to assume that things functioned differently in the past. As far as all the evidence indicates, the laws of physics are generally constant in time and space, and those geological processes and whatnot operate according to those laws.

Two: the idea that similarity implies ancestry is not a scientific one. While that may have been a way of thinking about it early on in evolutionary sciences, it does not actually represent science now. Similarity may imply relationship, but there are enough instances of analogous evolution to give the lie to the idea that scientists think similarity = ancestry.

A couple of quotes will make my point for me.


Henry Gee chief science writer for Nature wrote “No fossil is buried with its birth certificate” … and “the intervals of time that separate fossils are so huge that we cannot say anything definite about their possible connection through ancestry and descent.”

Poor Henry Gee; first quote-mined in Jonathan Wells’ Icons of Evolution, now by you. What’s interesting here is that you’ve actually quote-mined Gee’s response to Wells and the DI for quote-mining him! (Which, I realize, you’re aware of, but I read this largely as I was writing the response) Here’s the full context:

That it is impossible to trace direct lineages of ancestry and descent from the fossil record should be self-evident. Ancestors must exist, of course — but we can never attribute ancestry to any particular fossil we might find. Just try this thought experiment — let’s say you find a fossil of a hominid, an ancient member of the human family. You can recognize various attributes that suggest kinship to humanity, but you would never know whether this particular fossil represented your lineal ancestor – even if that were actually the case. The reason is that fossils are never buried with their birth certificates. Again, this is a logical constraint that must apply even if evolution were true — which is not in doubt, because if we didn’t have ancestors, then we wouldn’t be here. Neither does this mean that fossils exhibiting transitional structures do not exist, nor that it is impossible to reconstruct what happened in evolution. Unfortunately, many paleontologists believe that ancestor/descendent lineages can be traced from the fossil record, and my book is intended to debunk this view. However, this disagreement is hardly evidence of some great scientific coverup — religious fundamentalists such as the DI — who live by dictatorial fiat — fail to understand that scientific disagreement is a mark of health rather than decay. However, the point of IN SEARCH OF DEEP TIME, ironically, is that old-style, traditional evolutionary biology — the type that feels it must tell a story, and is therefore more appealing to news reporters and makers of documentaries — is unscientific.

What Gee is criticizing here and in his book, as his response and further information here (4.14, 4.16) make clear, is the tendency among some scientists and journalists to interpret the evidence in terms of narratives and to see life as a linear progression, when in fact it’s more of a branching tree with many limbs. It’s impossible from fossil evidence alone to determine whether two animals are ancestor and descendant, or cousins, or whatever.

See, the problem with letting quotes make your point for you is that they often do no such thing.

Gee’s response to this quote of him supports my point

No, you’ve simply misunderstood it. The fact that you’ve read Icons, somehow find it valid, and somehow think it supports a YEC view, speaks volumes about your credibility.

Colin Paterson’s infamous quote about the lack of transitional fossils makes the same point. “The reason is that statements about ancestry and descent are not applicable in the fossil record. Is Archaeopteryx the ancestor of all birds? Perhaps yes, perhaps no: there is no way of answering the question.”

My quote mine alarm is getting quite a workout today, but I have a distinct suspicion that Patterson is talking about precisely what Gee was: that from the fossil evidence alone, we cannot determine whether Archaeopteryx is the ancestor of all birds, or an offshoot of the lineage that produced birds. And a very brief look reveals precisely what I suspected. This isn’t the problem for evolution that you seem to think it is.

A simple thought experiment highlights this concept. Assuming at some point in the future, scientists find some scientific knowledge that makes the naturalistic origin of life a more plausible possibility given the time constraints. (For instance…given completely arbitrary probabilities, say there is a 15% chance of OOL from unliving chemicals driven by natural processes in the lifetime of the earth to date) Does this mean that it must of happened that way in the past? Clearly the answer is no.

No, it doesn’t mean it must have happened that way in the past. However, we can show ways it may have happened, or ways that it was likely to have happened. Merely showing a likely way for the origin of life to have occurred given the conditions on Earth four-odd billion years ago puts abiogenesis far ahead of the creationist hypothesis, due to its lack of parsimony.

Incidentally, as Dawkins explained in The God Delusion, the actual life-generating event needn’t be particularly likely to occur. After all, it’s only happened once in the history of the planet Earth, so far as we’re aware. Given the variety of conditions and the timespan involved, that suggests it’s a fairly low-probability event.
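Dawkins' point can be put in back-of-the-envelope terms: an event can be wildly improbable on any one planet and still be expected somewhere, once you multiply across enough planets. The numbers below are arbitrary placeholders chosen purely for illustration, not estimates of the actual probability:

```python
# Toy numbers only: suppose abiogenesis has a one-in-a-billion chance
# of occurring on any given suitable planet over its lifetime.
p_per_planet = 1e-9

# Treat the planet count as a placeholder too; rough astronomical guesses
# put the number of planets in the observable universe around 10^22.
n_planets = 1e22

# Expected number of independent origins of life across all those planets.
expected_occurrences = p_per_planet * n_planets
print(expected_occurrences)  # roughly 1e13: a "miraculously rare" event, happening often somewhere
```

So "it only happened once here" is entirely compatible with a naturalistic origin; rarity per planet is not an argument against it.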

But even claims of certainty about experimental science is unjustified. The history of science contains many examples of widely held scientific beliefs being overturned. Phlogiston is probably the most famous, but geosynclinal theory (preceding plate techtonics) is a more non-experimental science example. So even claims about experimental science should be made with this in mind, evoking a more humble stance. Comments about CDE being a ‘fact’ or being on par with gravity are unfounded and display a profound ignorance of science and history. Such comments are not scientific, but faith based.

Wrong, wrong, wrong. You’re conflating an awful lot of things here, particularly with regard to scientific terminology. First, as I said above, scientific knowledge is tentative and admittedly so. Scientists are human, and are certainly prone in some cases to overstating their certainty about one given theory or another, but in general we recognize that our knowledge is subject to revision as future evidence becomes available. There is no 100% certainty in science.

Here’s the point where definitions would be important. In science, a “fact” is something that can be observed–an object, a process, etc. A “law” is a (usually) mathematical description of some process or fact. A “theory” is a model that explains how facts and laws work, and makes predictions of future observations that can be used to validate or falsify it. Gravity is a fact, a law, and a theory. The fact of gravity is that things with mass can be observed to be attracted to one another; the law of gravity is F=G*[(m1*m2)/R^2]; the (relativistic) theory of gravity is that massive objects warp spacetime, causing changes in the motion of other massive objects. Evolution is similar: the fact of evolution is the process of mutation and selection that can be observed and has been observed under a variety of different levels of control; the theory of evolution by natural selection is that organisms are descended with modification from a common ancestor through an ongoing selection process consisting of various natural forces and occurrences.
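To make the "law" part of that distinction concrete, the Newtonian formula can be checked numerically. Plugging in standard textbook values for Earth's mass and radius (used here purely for illustration) recovers the familiar surface value of g:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / R^2
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_object = 1.0       # a 1 kg test mass at the surface
R = 6.371e6          # mean radius of the Earth, m

F = G * (m_earth * m_object) / R**2
print(round(F, 2))   # ~9.82 N per kilogram, i.e. the everyday value of g
```

The law describes the observed fact mathematically; it takes the theory (warped spacetime, in the relativistic picture) to explain why the law holds.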

The claims by Gould and others that evolution is a fact are referring to the observable process of evolution. Your argument here amounts to suggesting that since scientists were wrong about phlogiston, they cannot claim with any certainty that things burn.

So how to evaluate between the two paradigms?

Reason and evidence?

This is the question that matters… Controversially, Kuhn claimed that choosing between paradigms was not a rational process.


Whilst not subscribing to complete relativism, I believe there is a real subjective nature between paradigms. Objective problems play a part, but how much those problems are weighted seems to be a fairly subjective decision.

From my perspective, the cascading failure of many of the evidences used to infer CDE is a clear indication of the marginal superiority of the (admittedly immature) YEC paradigm.

False dichotomy. Try again. Evidence against evolution–which, I remind, you have not provided–is not evidence for YEC. Nor is it evidence for OEC or ID or Hindu Creation Stories or Pastafarianism. Each of those things requires its own evidence if it is to stand as a viable scientific paradigm.

Incidentally, you might actually want to look at some of the evidence for evolution before declaring any kind of “cascading failure.” You might also want to look at the evidence for creationism.

Chief examples are things such as embryonic recapitulation (found to be a fraud),

Found by scientists to be a fraud; never central to evolutionary theory.

the fossil record (Found to exhibit mostly stasis and significant convergence),

Source? Experts disagree.

the genetic evidence (Found to exhibit massive homoplasy).

Source? Experts disagree.

Update: And the disagreement between molecular and morphological data.

Nothing in the article you’ve linked suggests any problems for evolution. It merely shows how useful the genetic and molecular analyses are in distinguishing species and discovering exactly how organisms are related; I think you’ll find that most biologists agree with that sentiment, which is part of why there’s so much more focus on genetic evidence than fossil evidence now. Heck, as long as we’re quoting, here’s Francis Collins:

“Yes, evolution by descent from a common ancestor is clearly true. If there was any lingering doubt about the evidence from the fossil record, the study of DNA provides the strongest possible proof of our relatedness to all other living things.”

It is curious however, that even with the near monopoly of the CDE paradigm in science education in America, that only a small fraction believe it. (CDE hovers around 10%, whilst 50+% accept YEC and the remainder Theistic evolution) This certainly indicates to me, that perhaps it is CDE that is not as compelling an explanation than YEC.

So, an appeal to popularity? Yeah, that’s valid. Yes, evolution is believed by a fraction of the laity. Although your numbers suggest it’s about half–theistic evolution is still evolution, and evangelical Francis Collins agrees far more with Richard Dawkins than Duane Gish. Strangely enough, among scientists–you know, the people who have actually examined the evidence, regardless of their religious beliefs–it’s believed by the vast majority. What does that suggest?

Whatever the decision, it is more appropriate to say that YEC is the “better inferred explanation” than CDE or vice versa. Such an understanding of the debate leads to a far more productive discourse and avoids the insults, derision and anger that seems to be so prevalent.

I’m afraid you’ve lost me, so I’ll sum up. Your position is based on an examination of the situation that ignores the complete lack of evidence for the “YEC paradigm” and inflates perceived flaws in the “CDE paradigm” in order to make them appear to be somewhat equal. From there, you ignore the basic lack of parsimony in the “YEC paradigm” and make appeals to logical fallacies in order to declare it the more likely explanation.

Alan, you’re clearly a fairly intelligent guy, but that more or less amounts to your argument having a larger proportion of big words than the average creationist’s. Your use of false dichotomy and argumentum ad populum as though they had any value to science, your quote-mining to make your point, your misinterpretation of popular science articles and assumption that they refute a century of peer-reviewed journals, your ignorance of the actual evidence for evolution, and your postmodernist take on the whole debate, are all standard creationist tactics. You’re clearly intelligent enough and interested enough to correct your misconceptions and your errors in thinking, Alan, and I hope you take this chance to examine the evidence with an open mind and understand that scientific theories are based on positive evidence, not negative evidence against a competing theory. Thanks for the article!

Morality and such

The Atheist Experience posted about morality here. Rhology posted a nonsensical comment here. I responded to it here and here. Rhology responded to me here. I responded in his comments, but I’ve reproduced it below the fold. Go ahead and read the exchanges in their original locations; this is just here in case of deletion.
So one wonders why Tom would have a problem with my statement.

My problem is that the statement is nonsensical. What does it mean to be “worthy” of something that does not depend on one’s worthiness? If I say, “you are all worthy of feet,” I’m not making a moral statement, I’m making a Dadaist one.

If he were to be consistent, he’d neither disagree nor agree.

Consistent with what? With my determination that “worthy of death” is a meaningless judgment? As I said in the quoted portion, “worthy of death” and “worthy of being killed” are different judgments–one makes sense, the other does not. There is no inconsistency here, only your incoherency.

There’s no “should” in his worldview, no way to prescribe nor proscribe the ‘right’ behavior for anyone to follow.

That’s a blatant strawman. The “should” is determined by society, and at its core, by the necessary elements required for society to exist. I discuss this later in the post.

Further, putting someone to death is simply enabling a natural process to take place. It’s the same as giving someone a carrot to eat. Or a slab of steak. Or a live hamster (if one were so inclined). Or brain from a living person. It’s all-natural. It’s all the same.

I suspect there’s quite a bit of equivocation going on here, but in any case, you’re wrong. Killing someone is not merely allowing death to take place; killing someone necessarily implies that death would not have otherwise taken place at that moment. It is taking a process that would have come about eventually and making it happen immediately. You fail to recognize, in your meandering, that time exists and is significant.

1) There’s no necessity that society exist.

There is if the species is to continue. Granted, there are those individuals for whom that’s not a concern. For the rest of us, that society exists is a given.

On naturalism, it so happens that humans evolved in such a way that living together in community aids in survival, most of the time.

No, living together is necessary for prolonged survival, all of the time. Last I checked, humans couldn’t asexually reproduce.

But of course, praying mantises have evolved in such a way that they hang out alone all the time, except when they get together for sex and dinner (in that order). So what?

So what indeed. What’s your point?

2) I’ve heard this claim many times and always I have wondered whence this social consensus comes. When and where did “society” get together and establish this moral agreement?

It’s not a one-off thing, nor is it a universal thing. Surprisingly enough, Rhology, morals evolve as society progresses. It’s why, contrary to your favorite holy book, the general consensus in the industrialized west is that women are not property, slavery is not right, and unruly children should not, in fact, be stoned to death.

What % is a consensus, and what is the basis for pegging the % at that point?

The consensus is not a matter of percentages, and I’m sure you’re not stupid enough to think that it is. It’s represented in the ongoing conversations about rights, the progression of laws, and the overall changing social attitude.

3) What of those in society, such as anarchist protesters, murderers and other psychopaths, and M-16-toting, compound-dwelling Mountain Men, who have no and want no part in this societal moral consensus?

They’re generally free to band together and secede. In many cases, to some degree, they do just that, which is why there are such things as “compounds” and “enclaves” and “communes.” People seclude themselves from the larger social group in order to form their own small societies, based on their own consensus of morality. Hence why those of us in the urban world do not share the Amish belief that buttons and technology are morally forbidden, and why those at the YFZ compound do not share our moral outrage over raping children.

Whence comes the “should” in “these guys should have no say in our moral deliberations”? It’s arbitrary.

Not in the least. One, no one says they have no say in the moral deliberations. They have a say, so long as they’re participants in the society, but their voices may be drowned out by the general consensus. Two, we come again to the closest thing society has to moral absolutes: the conditions necessary for society to exist. A society as complex as ours is naturally going to have a lot of such necessary qualities, but the most basic is “killing people is morally wrong” (because society cannot exist if we cannot reasonably trust one another not to kill us when we stop watching them). There are others, naturally, but I’d rather keep this post as brief as possible.

The point, anyway, is that we can judge these variant viewpoints by comparing them to our society’s foundational moral principles. Those mountain men sure don’t seem to fall in line with the qualities we recognize are necessary for our society to continue, but hey, let’s give them a fair shake. We recognize that there are a lot of murderous mountain men out there; what might happen to society if we agreed with their point of view? Well, we can imagine that it might fall apart pretty quickly. But we needn’t be so quick to dismiss it even now; what if we make an exception to the rules? Well, then we have to roll up our sleeves, get together as a society, and decide what the parameters of the exception will be.

And that’s where it does get arbitrary, which is why we come to an explicit consensus and codify it in law. Much of law is arbitrary–arbitrary boundaries drawn in sand by democratic plurality or dictatorial edict. They vary from place to place, and that’s not generally a problem. It’s not morally significant whether the highest speed limit in the state is 65 or 70 mph; the difference is arbitrary.

That the tiny details are arbitrary does not mean that there are not practical absolutes. That reasonable people can reasonably disagree on moral principles is a demonstration of their malleability and flexibility. More disparate cultures may disagree on more basic points, but even the simplest social animals have codes against killing members of the society and other basic, foundational principles.

4) What of entire societies who have gone “astray”? The Yanomamo, the Auca, the 3rd Reich, Vichy France (who willingly exceeded the quotas for sending French Jews to Germany set by the Nazis)… when was their moral consensus created? And was it OK? Tom Foss would probably say no, but on what basis? He has to be inconsistent with his own stated views to avoid the awful (and embarrassing) conclusion.

Don’t presume to speak for me, Rhology. You don’t.

On a personal level, Rhology, I would say that these “astray” societies were obviously doing morally wrong things, since I, and the society of which I am a part, consider oppression, murder, pogroms, and so on to be morally reprehensible.

But what about those societies at the time? Certainly in 1945 we could have judged Nazi Germany to be in the wrong; their actions were–again–contrary to the moral values that we hold in the US. Moreover, they were contrary to the foundational values that are necessary for society: killing bad. Applying the same metric we used for the mountain men, we can imagine that a society where folks went around killing anyone they didn’t like would fall apart pretty quickly. So maybe they wanted to get together and make an arbitrary guideline about when an exception would be warranted–and they did, carving out an arbitrary exception to the “no killing” rule for anyone who wasn’t Aryan. And we, and others, were able to judge that arbitrary decision to be morally incorrect, based on our own values and some pretty basic applications of reason and logic.

I’m curious, though, how much the actions of Nazi Germany actually fell in line with the moral consensus. Just because a government does something or codifies a law doesn’t mean that those actions or codes are in line with the moral consensus of the people.

Said actions were, however, well in line with the moral views of many folks here in the US and abroad, based on judicious applications of anti-Semitism. And much of that anti-Semitism stems from some supposed book of morals which suggests that homosexuals ought to be put to death (which the Nazis did happily) and that Jews deserved to die based on their treatment of some magical God-man centuries before (which was a handy moral justification).

Wow, all that, and without invoking the principle of least suffering or the ethic of reciprocity, both of which are about as foundational as it gets to our society (and most others, for that matter) and would probably shortcut the whole “how do you judge the Nazis” question.

I’m sure most of this will fall on deaf ears, Rhology, but I post it anyway.