More on Movement Problems (or, Definitions Matter)
September 23, 2010
I’ve noticed a disturbing trend lately, and while there may be a bit of “when you’re a hammer, every problem starts looking like a nail” going on, I can’t help but see it as a symptom of the apparently growing notion that “skepticism” is something you join rather than something you do. I keep seeing the same twofold pattern: people venerating logic and reason while failing to actually understand them (or at least to understand them as well as they think they do), and people using terms like “rational” or “fallacy” in value-laden ways that strip them of their actual meaning.
The first time I really took notice of this was when Don talked about his trip to a CFI meeting in Indianapolis. At the meeting, he encountered a number of CFI members who saw skepticism not as a set of cognitive tools, but as a set of dogmatic rules which should be taught to people. In addition, and perhaps most relevantly:
[A]lmost every member I interacted with afterward was like an automaton repeating poorly understood buzzwords: “critical thinking,” “skepticism,” “freethought,” etc. They said these words and seemed to believe that they understood them and that, through that understanding, were part of a greater whole.
The same trend was the subject of the recent kerfuffle with Skepdude. The ‘Dude clearly held logic in high esteem, and clearly understood that fallacies were bad things, but just as clearly didn’t understand what made fallacies fallacious, and was quick to throw out the term “ad hominem” where it did not apply.
More alarming, however, were the comments of the much more prominent skeptic Daniel Loxton, who claimed that most insults amount to fallacious poisoning of the well, a claim that fails under the fairly strict and clear definition of poisoning the well.
You can see the same thing in spectacular action in the comment thread here, where commenter Ara throws around terms like “rational” and “anti-rational” as part of an argument that echoes Skepdude’s attempts to say that a valid argument doesn’t make insults valid, when in fact the opposite is the case.
Despite what Mr. Spock would have you believe, saying that something is “rational” or “logical” is to say almost nothing about the thing you are trying to describe. Any position, any conclusion–true or false, virtuous or reprehensible, sensible or absurd–can be supported by a logically valid argument. For instance:
All pigs are green.
All ostriches are pigs.
Therefore, all ostriches are green.
That’s a logically valid argument. The conclusion follows inexorably from the premises. That the conclusion is false and absurd is only because the premises are equally false and absurd. The argument is unsound, but it is perfectly logical. “Logical” is not a value judgment; it is an objective description, and can only be accurately applied to arguments¹.
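The point that validity is purely a matter of form can even be made machine-checkable. Here’s a minimal sketch in the Lean proof assistant (the predicate names are mine, chosen to match the syllogism): Lean happily verifies that the conclusion follows from the premises, without ever asking whether the premises are true of the actual world.

```lean
-- Validity is about form, not truth: if we *assume* the premises,
-- the conclusion follows, no matter how absurd the premises are.
example (Thing : Type) (Pig Green Ostrich : Thing → Prop)
    (h1 : ∀ x, Pig x → Green x)       -- "All pigs are green."
    (h2 : ∀ x, Ostrich x → Pig x) :   -- "All ostriches are pigs."
    ∀ x, Ostrich x → Green x :=       -- "Therefore, all ostriches are green."
  fun x hx => h1 x (h2 x hx)
```

Soundness is the separate, extra-logical question of whether `h1` and `h2` actually hold of real pigs and ostriches; nothing in the proof touches that.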
“Rational” is similar. There’s a lot of equivocation possible with “rational,” because it can mean “sensible” as well as “based on reason” or “sane” or “having good sense.” Some of those meanings are value-laden. However, if we are describing a conclusion, an argument, or a course of action, and if we are hoping to have any kind of meaningful discussion, then it’s important to be clear on what we’re trying to say when using the word “rational.”
If, for instance, I’m using the term “rational” to call an idea or action or something “sane” or “possessing good sense,” I’m probably expressing an opinion. “Good sense” is a subjective quality, and the things I consider “sane” may not be the same particular things that are excluded from the DSM-IV.
If, however, I’m trying to say that a belief or course of action or idea is “sensible” or “based on reason,” then I must first know what the reasons or senses involved are. A “sensible” course of action depends on subjective judgment, which is largely driven by circumstance and context. If someone cuts me off at 80mph on the freeway, I may consider such an action to be insensible, but not knowing what caused the person to take that action–say, for instance, their passenger was threatening them, or going into labor, or something–I really have no way of judging the sensibility of the action.
Similarly, if I don’t know what reasons are driving a person to hold some belief or take some action, then I cannot know if that action is based on reason–i.e., if it’s “rational,” in this sense. For instance, if I believe that autism is caused by mercury toxicity and that there are toxic levels of mercury in childhood vaccinations, then it may be a reasonable course of action to refuse to immunize my child. That an action may be wrong, or may be based on false reasons or bad reasons, does not make it irrational or unreasonable.
The fact is that most people do not knowingly take actions or hold beliefs for no reason. Many people take actions or hold beliefs for bad reasons, or ill-considered reasons, but most people do think “logically” and “rationally.” The problem comes from incorrect premises, or from a failure to consider all relevant reasons or weigh those reasons appropriately.
What I’m seeing more of lately, though, is the word “rational” used to mean “something that follows from my reasons” or “something I agree with,” or more simply, “good.” None of these are useful connotations, and none of them accurately represent what the word actually means. Similarly, “fallacy” is coming to mean, in some circles or usages, “something I disagree with” or “bad,” which again fails to recognize the word’s actual meaning. This is fairly detrimental: we already have a word for “bad,” and we don’t have another good word for “fallacy”; the two are not synonyms.
It seems like an awful lot of skeptics understand that logic and reason are good and important, but they don’t actually seem to understand what makes them work. They seem happy to understand the basics, to practice a slightly more in-depth sort of cargo cult argumentation, while missing the significant whys and wherefores. Sure, you might be able to avoid fallacious arguments by simply avoiding anything that looks like a fallacy, but if you actually understand what sorts of problems cause an argument to be fallacious, it makes your arguing much more effective.
Let me provide two examples. First, my car: I can get by just fine driving my car, even though I really know very little about what’s going on underneath the hood and throughout the machinery. It’s not that I’m not interested; I find the whole process fascinating, but I haven’t put the work in to actually understand what’s going on on a detailed level. Someone who knew my car more intimately would probably get better gas mileage, would recognize problems earlier than I do and have a better idea of what’s wrong than “it makes a grinding noise when I brake,” and would probably use D2 and D3, whatever those are. I don’t get the full experience and utility out of my car, and that’s okay for most everyday travel. But you’re not going to see me entering into a street race with it.
On the other hand, I love cooking, and I’ve found that understanding the science behind why and how various processes occur in the kitchen has made me a much more effective cook. Gone are the days when my grilling was mostly guesswork, and when my ribs would come out tough and stringy. Now that I understand how the textures of muscle and connective tissue differ, and how different kinds of cooking and heat can impact those textural factors, I’m a much better cook. Now that I understand how searing and browning work on a chemical level, I’m a much better cook. I can improvise more in the kitchen, now that I have a better understanding of how flavors work together, and how to control different aspects of taste. I’m no culinary expert, but I can whip up some good meals, and if something goes a way that I don’t like, I have a better idea of how to change or fix it than I did when I was just throwing things together by trial and error.
If you’re content with reading some skeptical books and countering the occasional claim of a co-worker, then yeah, you really don’t need to know the ins and outs of logic and fallacies and reasoning and so forth. But if you want to engage in the more varsity-level skeptical activities, like arguing with apologists or dissecting woo-woo claims in a public forum, then you’re going to need to bring a better game than a cursory understanding of logic and basic philosophy. You don’t need to be a philosophy major or anything, but you might need to do reading beyond learning this stuff by osmosis from hanging out on the skeptical forums. Mimicking the techniques and phrasing of people you’ve seen before only gets you so far; if you really want to improvise, then you have to know how to throw spices together in an effective way.
I’m generally against the faction who wants to frame skepticism as some new academic discipline. I think that’s silly, and I think (regardless of intent) that it smacks of elitism. I’m of the opinion that anyone can be a skeptic, and that most people are skeptics and do exercise skepticism about most things, most of the time. But that doesn’t mean that skepticism comes easily, or that the things we regularly talk about in skeptical forums are easily understood. You have to do some work, you have to put in some effort, and yeah, you have to learn the basics before you can expect to speak knowledgeably on the subject. But believe me, it takes a lot more to learn how to cook a decent steak than to learn how to cook up a good argument.
1. I suppose one could describe the thinking or processing methods of an individual or machine as “logical” in a moderately descriptive way, but it still doesn’t give much in the way of detail. What would a non-logical thought process be? One unrelated non sequitur after another?