On Labeling

I keep running into an issue with labels. It wasn’t long ago that I revised my own from “agnostic” to the more accurate and more useful “agnostic atheist” (in a nutshell, anyway–but this is a topic for a future post). The problem I have is that the relevant parts of my beliefs didn’t change, only what I called myself did. I didn’t have a belief in any gods when I called myself an agnostic, and I don’t have any belief in any gods now that I call myself an atheist. From any objective standpoint, I was an atheist the whole time.

And this is the substance of the problem: the dissonance between what a person calls himself or herself and what categories that person objectively falls into. The self-chosen label and the objective category are frequently different, and the mismatch frequently results in various confusions and complications.

On one hand, I think we’re inclined to take people at their word with regard to what their personal labels are. It’s a consequence of having so many labels that center around traits that can only be assessed subjectively. I can’t look into another person’s mind to know what they believe or who they’re attracted to or what their political beliefs really are, or even how they define the labels that relate to those arenas. We can only rely on their self-reporting. So, we have little choice but to accept their terminology for themselves.

But…there are objective definitions for some of these terms, and we can, based on a person’s self-reporting of their beliefs, see that an objectively-defined label–which may or may not be the one they apply to themselves–applies to them.

I fear I’m being obtuse in my generality, so here’s an example: Carl Sagan described himself as an agnostic. He resisted the term “atheist,” and clearly gave quite a bit of thought to the problem of how you define “god”–obviously, the “god” of Spinoza and Einstein, which is simply a term applied to the laws of the universe, exists, but the interventionist god of the creationists is far less likely. So Sagan professed agnosticism apparently in order to underscore the point that he assessed the question of each god’s existence individually.

On the other hand, he also seemed to define “atheist” and “agnostic” in unconventional ways–or perhaps in those days before a decent atheist movement, the terms just had different connotations or less specific definitions. Sagan said “An agnostic is somebody who doesn’t believe in something until there is evidence for it, so I’m agnostic,” and “An atheist is someone who knows there is no God.”

Now, I love Carl, but it seems to me that he’s got the definitions of these terms inside-out. “Agnostic,” as the root implies, has to do with what one claims to know–specifically, it’s used to describe people who claim not to know if there are gods. Atheist, on the other hand, is a stance on belief–specifically the lack of belief in gods.

So, if we’re to go with the definitions of terms as generally agreed upon, as well as Carl’s own self-reported lack of belief in gods and adherence to the null hypothesis with regard to supernatural god claims, then it’s clear that Carl is an atheist. Certainly an agnostic atheist–one who lacks belief in gods but does not claim to know that there are no gods–but an atheist nonetheless.

The dilemma with regard to Sagan is relatively easy to resolve; “agnostic” and “atheist” are not mutually exclusive terms, and the term one chooses to emphasize is certainly a matter of personal discretion. In the case of any self-chosen label, the pigeon-holes we voluntarily enter into are almost certainly not all of the pigeon-holes into which we could be placed. I describe myself as an atheist and a skeptic, but it would not be incorrect to call me an agnostic, a pearlist, a secularist, an empiricist, and so forth. What I choose to call myself reflects my priorities and my understanding of the relevant terminology, but it doesn’t necessarily exclude other terms.

The more difficult problems come when people adopt labels that, by any objective measure, do not fit them, or exclude labels that do. We see Sagan doing the latter in the quote above, eschewing the term “atheist” based on what we’d recognize now as a mistaken definition. The former is perhaps even more common–consider how 9/11 Truthers, Global Warming and AIDS denialists, and Creationists have all attempted to usurp the word “skeptic,” even though none of their methods even approach skepticism.

A related danger arises when groups try to claim people who, due to a lack of consistent or unambiguous self-reporting (or unambiguous reporting from reliable outside sources), can’t objectively be said to fit into them. We see this when Christians try to claim that the founding fathers were all devout Christian men, ignoring the reams of evidence that many of them were deists or otherwise unorthodox. It’s not just the fundies who do this, though; there was a poster at my college which cited Eleanor Roosevelt and Errol Flynn among its list of famous homosexual and bisexual people, despite there being inconsistent and inconclusive evidence to determine either of their sexualities. The same is true when my fellow atheists attempt to claim Abraham Lincoln and Thomas Paine (among others), despite ambiguity in their self-described beliefs. I think that we, especially those of us who pride ourselves on reason and evidence, must be careful with these labels, lest we become hypocrites or appear sloppy in our application and definition of terms. These terms have value only inasmuch as we use them consistently.

The matter of people adopting terms which clearly do not apply to them, however, presents a more familiar problem. It seems easy and safe enough to say something like “you call yourself an atheist, yet you say you believe in God. Those can’t both be true,” but situations rarely seem to be so cut-and-dried. Instead, what we end up with are ambiguities and apparent contradictions, and a need to be very accurate and very precise (and very conservative) in our definition of terms. Otherwise, it’s a very short slippery slope to No True Scotsman territory.

Case in point, the word “Christian.” It’s a term with an ambiguous definition, which (as far as I can tell) cannot be resolved without delving into doctrinal disputes. Even a definition as simple as “a Christian is someone who believes Jesus was the son of God” runs afoul of Trinitarian semantics, where Jesus is not the son, but God himself. A broader definition like, “One who follows the teachings of Jesus” ends up including people who don’t consider themselves Christians (for instance, Ben Franklin, who enumerated Jesus among other historical philosophers) and potentially excluding people who don’t meet the unclear standard of what constitutes “following,” and so forth.

This is why there are so many denominations of Christianity that claim none of the other denominations are “True Christians.” For many Protestants, the definition of “True Christian” excludes all Catholics, and vice versa; and for quite a lot of Christians, the definition of the term excludes Mormons, who are also Bible-believers that accept Jesus’s divinity.

When we start down the path of denying people the terms that they adopt for themselves, we must be very careful that we do not overstep the bounds of objectivity and strict definitions. Clear contradictions are easy enough to spot and call out; where terms are clearly defined and beliefs or traits are clearly expressed, we may indeed be able to say “you call yourself bisexual, but you say you’re only attracted to the opposite sex. Those can’t both be true.” But where definitions are less clear, or where the apparent contradictions are more circumstantially represented, objectivity can quickly be thrown out the window.

I don’t really have a solution for this problem, except that we should recognize that our ability to objectively label people is severely limited by the definitions we ascribe to our labels and the information that our subjects report themselves. So long as we are careful about respecting those boundaries, we should remain well within the guidelines determined by reason and evidence. Any judgments we make and labels we apply should be done as carefully and conservatively as possible.

My reasons for laying all this out should become clear with my next big post. In the meantime, feel free to add to this discussion in the comments.

On Interpretation

I thought I’d talked about this before on the blog, but apparently I’ve managed to go this long without really tackling the issue of interpretation. Consequently, you might notice some of the themes and points in this post getting repeated in my next big article, since writing that was what alerted me to my omission.

I don’t generally like absolute statements, since they so rarely are, but I think this one works: there is no reading without interpretation. In fact, I could go a step further and say there’s no communication without interpretation, but reading is the most obvious and pertinent example.

Each person is different, the product of a unique set of circumstances, experiences, knowledge, and so forth. Consequently, each person approaches each and every text with different baggage, and a different framework. When they read the text, it gets filtered through and informed by those experiences, that knowledge, and that framework. This process influences the way the reader understands the text.

Gah, that’s way too general. Let’s try this again: I saw the first couple of Harry Potter movies before I started reading the books; consequently, I came to the books with the knowledge of the movie cast, and I interpreted the books through that framework–not intentionally, mind you, it’s just that the images the text produced in my mind included Daniel Radcliffe as Harry and Alan Rickman as Professor Snape. However, I plowed through the series faster than the moviemakers have. The descriptions in the books (and the illustrations) informed my mental images of other characters, so when I saw “Order of the Phoenix,” I found the casting decision for Dolores Umbridge quite at odds with my interpretation of the character, who was less frou-frou and more frog-frog.

We’ve all faced this kind of thing: our prior experiences inform our future interpretations. I imagine most people picking up an Ian Fleming novel have a particular Bond playing the role in their mental movies. There was quite a tizzy over the character designs in “The Hitchhiker’s Guide to the Galaxy” movie, from Marvin’s stature and shape to the odd placement of Zaphod’s second head, to Ford Prefect’s skin color. I hear Kevin Conroy’s voice when I read Batman dialogue.

This process is a subset of the larger linguistic process of accumulating connotation. As King of Ferrets fairly recently noted, words are more than just their definitions; they gather additional meaning through the accumulation of connotations–auxiliary meaning attached to the word through the forces of history and experience. Often, these connotations are widespread. For example, check out how the word “Socialist” got thrown around during the election. There’s nothing in the definition of the word that makes it the damning insult it’s supposed to be, but thanks to the Cold War and the USSR, people interpret the word to mean more than just “someone who believes in collective ownership of the means of production.” Nothing about “natural” means “good and healthy,” yet that’s how it’s perceived; nothing about “atheist” means “immoral and selfish,” nor does it mean “rational and scientific,” but depending on who you say it around, it may carry either of those auxiliary meanings. Words are, when it comes right down to it, symbols of whatever objects or concepts they represent, and like any symbols (crosses, six-pointed stars, bright red ‘A’s, Confederate flags, swastikas, etc.), they take on meanings in the minds of people beyond what they were intended to represent.

This process isn’t just a social one; it happens on a personal level, too. We all attach some connotations and additional meanings to words and other symbols based on our own personal experiences. I’m sure we all have this on some level; we’ve all had a private little chuckle when some otherwise innocuous word or phrase reminds us of some inside joke–and we’ve also all had that sinking feeling as we’ve tried to explain the joke to someone who isn’t familiar with our private connotations. I know one group of people who would likely snicker if I said “gravy pipe,” while others would just scratch their heads; I know another group of people who would find the phrase “I’ve got a boat” hilarious, but everyone else is going to be lost. I could explain, but even if you understood, you wouldn’t find it funny, and you almost certainly wouldn’t be reminded of my story next time you heard the word “gravy.” Words like “doppelganger” and “ubiquitous” are funny to me because of the significance I’ve attached to them through the personal process of connotation-building.

And this is where it’s kind of key to be aware of your audience. If you’re going to communicate effectively, you need some understanding of this process: I have to recognize that not everyone will burst into laughter if I say “mass media” or “ice dragon,” because not everyone shares the significance that I’ve privately attached to those phrases. Communication is only effective where the speaker and listener share a common language, and this simple fact requires the speaker to know what connotations he and his audience are likely to share.

Fortunately or unfortunately, we’re not telepathic. What this means is that we cannot know with certainty how any given audience will interpret what we say. We might guess to a high degree of accuracy, depending on how well we know our audience, but there’s always going to be some uncertainty involved. That ambiguity of meaning is present in nearly every word, no matter how simple, no matter how apparently direct, because of the way we naturally attach and interpret meaning.

Here’s the example I generally like to use: take the word “DOG.” It’s a very simple word with a fairly straightforward definition, yet it’s going to be interpreted slightly differently by everyone who reads or hears it. I imagine that everyone, reading the word, has formed a particular picture in their heads of some particular dog from their own experience. Some people are associating the word with smells, sounds, feelings, other words, sensations, and events in their lives. Some small number of people might be thinking of a certain TV bounty hunter. The point is that the word, while defined specifically, includes a large amount of ambiguity.

Let’s constrain the ambiguity, then. Take the phrase “BLACK DOG.” Now I’ve closed off some possibilities that the term “DOG” leaves open: people’s mental pictures are no longer of golden retrievers and dalmatians; we’ve moved to the included subset of black dogs. There’s still ambiguity, though: is it a little basket-dwelling dog like Toto, or a big German Shepherd? Long hair or short hair? What kind of collar?

But there’s an added wrinkle here. When I put the word “BLACK” in there, I brought in the ambiguity associated with that word as well. Is the dog all black, or mostly black with some other colors, like a doberman? What shade of black are we talking about? Is it matte or glossy?

Then there’s further ambiguity arising from the specific word combination. When I say “BLACK DOG,” I may mean a dark-colored canine, or I may mean that “I gotta roll, can’t stand still, got a flamin’ heart, can’t get my fill.”

And that’s just connotational ambiguity; there’s definitional ambiguity as well. The word “period” is a great example of this. Definitionally, it means something very different to a geologist, an astronomer, a physicist, a historian, a geneticist, a chemist, a musician, an editor, a hockey player, and Margaret Simon. Connotationally, it’s going to mean something very different to ten-year-old Margaret Simon lagging behind her classmates and 25-year-old Margaret Simon on the first day of her Hawaiian honeymoon.

People, I think, are aware of these ambiguities on some level; the vast majority of verbal humor relies on them to some degree. Our language has built-in mechanisms to alleviate them. In speaking, we augment the words with gestures, inflections, and expressions. If I say “BLACK DOG” while pointing at a black dog, or at the radio playing a distinctive guitar riff, my meaning is more clear. The tone of my voice as I say “BLACK DOG” will likely give some indication as to my general (or specific) feelings about black dogs, or that black dog in particular. Writing lacks these abilities, but punctuation, capitalization, and font modification (such as bold and italics) are able to accomplish some of the same goals, and other ones besides. Whether I’m talking about the canine or the song would be immediately apparent in print, as the difference between “black dog” and “‘Black Dog.’” In both venues, one of the most common ways to combat linguistic ambiguity is to add more words. Whether it’s writing “black dog, a Labrador Retriever, with floppy ears and a cold nose and the nicest temperament…” or saying “black dog, that black dog, the one over there by the flagpole…” we use words (generally in conjunction with the other tools of the communication medium) to clarify other words. None of these methods, however, can completely eliminate the ambiguity in communication, and they all have the potential to add further ambiguity by adding information as well.

To kind of summarize all that in a slightly more entertaining way, look at the phrase “JANE LOVES DICK.” It might be a sincere assessment of Jane’s affection for Richard, or it might be a crude explanation of Jane’s affinity for male genitals. Or, depending on how you define terms, it might be both. Textually, we can change it to “Jane loves Dick” or “Jane loves dick,” and that largely clarifies the point. Verbally, we’d probably use wildly different gestures and inflections to talk about Jane’s office crush and her organ preference. And in either case, we can say something like “Jane–Jane Sniegowski, from Accounting–loves Dick Travers, the executive assistant. Mostly, she loves his dick.”

The net result of all this is that in any communication, there is some loss of information, of specificity, between the speaker and the listener (or the writer and the reader). I have some specific interpretation of the ideas I want to communicate, I approximate that with words (and often the approximation is very close), and my audience interprets those words through their own individual framework. Hopefully, the resulting idea in my audience’s mind bears a close resemblance to the idea in mine; the closer they are, the more effective the communication. But perfect communication–loss-free transmission of ideas from one mind to another–is impossible given how language and our brains work.

I don’t really think any of this is controversial; in fact, I think it’s generally pretty obvious. Any good writer or speaker knows to anticipate their audience’s reactions and interpretations, specifically because what the audience hears might be wildly different from what the communicator says (or is trying to say). Part of why I’ve been perhaps overly explanatory and meticulous in this post is that I know talking about language can get very quickly confusing, and I’m hoping to make my points particularly clear.

There’s one other wrinkle here, which is a function of the timeless nature of things like written communication. What I’m writing here in the Midwestern United States in the early 21st Century might look as foreign to the readers of the 25th as the works of Shakespeare look to us. I can feel fairly confident that my current audience–especially the people who I know well who read this blog–will understand what I’ve said here, but I have no way of accurately anticipating the interpretive frameworks of future audiences. I can imagine the word “dick” losing its bawdy definition sometime in the next fifty years, so it’ll end up with a little definition footnote when this gets printed in the Norton Anthology of Blogging Literature. Meanwhile, “ambiguity” will take on an ancillary definition referring to the sex organs of virtual prostitutes, so those same students will be snickering throughout this passage.

I can’t know what words will lose their current definitions and take on other meanings or fall out of language entirely, so I can’t knowledgeably write for that audience. If those future audiences are to understand what I’m trying to communicate, then they’re going to have to read my writing in the context of my current definitions, connotations, idioms, and culture. Of course, even footnotes can only take you so far–in many cases, it’s going to be like reading an in-joke that’s been explained to you; you’ll kind of get the idea, but not the impact. The greater the difference between the culture of the communicator and the culture of the audience, the more difficulty the audience will have in accurately and completely interpreting the communicator’s ideas.

Great problems can arise when we forget about all these factors that go into communication and interpretation. We might mistakenly assume that everyone is familiar with the idioms we use, and thus open ourselves up to criticism (e.g., “lipstick on a pig” in the 2008 election); we might mistakenly assume that no one else is familiar with the terms we use, and again open ourselves up to criticism (e.g., “macaca” in the 2006 election). We might misjudge our audience’s knowledge and either baffle or condescend to them. We might forget the individuality of interpretation and presume that all audience members interpret things the same way, or that our interpretation is precisely what the speaker meant and all others have missed the point. We would all do well to remember that communication is a complicated thing, and that those complexities do have real-world consequences.

…And some have Grey-ness thrust upon ’em

So, Alan Grey provided some musings on the Evolution/Creation “debate” at his blog, at my request. I figured I ought to draft a response, since I’ve got a bit of time now, and since Ty seems to want to know what my perspective is. Let’s jump right in, shall we?

Thomas Kuhn, in his famous work ‘The structure of scientific revolutions’ brought the wider worldview concept of his day into understanding science. His (and Polanyi’s) concept of paradigmic science, where scientific investigation is done within a wider ‘paradigm’ moved the debate over what exactly science is towards real science requiring two things
1) An overarching paradigm which shapes how scientists view data (i.e. theory laden science)
2) Solving problems within that paradigm

I think I’ve talked about The Structure of Scientific Revolutions here or elsewhere in the skeptosphere before. At the time I read it (freshman year of undergrad), I found it to be one of the densest, most confusing, jargon-laden texts I’ve ever slogged through for a class. Now that I have a better understanding of science and the underlying philosophies, I really ought to give it another try. I’d just rather read more interesting stuff first.

Reading the Wikipedia article on the book, just to get a better idea of Kuhn’s arguments, gives me a little feeling of validation about my initial impressions all those years ago. See, my biggest problem with Structure–and I think I wrote a short essay to this effect for the class–was that Kuhn never offered a clear definition of what a “paradigm” was. Apparently my criticism wasn’t unique:

Margaret Masterman, a computer scientist working in computational linguistics, produced a critique of Kuhn’s definition of “paradigm” in which she noted that Kuhn had used the word in at least 21 subtly different ways. While she said she generally agreed with Kuhn’s argument, she claimed that this ambiguity contributed to misunderstandings on the part of philosophically-inclined critics of his book, thereby undermining his argument’s effectiveness.

That makes me feel a bit less stupid.

Kuhn claimed that Karl Popper’s ‘falsification criteria’ for science was not accurate, as there were many historical cases where a result occurred that could be considered as falsifying the theory, yet the theory was not discarded as the scientists merely created additional ad hoc hypothesis to explain the problems.

It is through the view of Kuhnian paradigms that I view the evolution and creation debate.

And I think that’s the first problem. To suggest that only Kuhn or only Popper has all the answers when it comes to the philosophy of science–which may not be entirely what Grey is doing here, but is certainly suggested by this passage–is a vast oversimplification. Kuhn’s paradigmatic model of science ignores to a large degree the actual methods of science; arguably, Popper’s view presents an ideal situation that ignores the human element of science, and denies that there exists such a thing as confirmation in science–which, again, may be due to ignoring the human element. The paradigmatic view is useful; it reminds us that the human ability to develop conceptual models is partially influenced by cultural factors, and that scientists must be diligent about examining their preconceptions, biases, and tendencies toward human error (such as ad hoc justifications) if they are to conduct accurate science. Falsificationism is also useful; it provides a metric by which to judge scientific statements on the basis of testability, and demonstrates one ideal which the scientific method can asymptotically approach. But to try to view all of science through one lens or the other is myopic at best. Just as science is neither purely deductive nor purely inductive, neither purely theoretical nor purely experimental, it is neither purely paradigmatic nor purely falsificationist.

One thing to keep in mind, though, is Grey’s brief mention of ad hoc hypotheses used to smooth out potentially-falsifying anomalies. While I’m sure that has happened and continues to happen, it’d be a mistake to think that any time an anomaly is smoothed over, it’s the result of ad-hocking. The whole process of theory-making is designed to continually review the theory, examine the evidence, and alter the theory to fit the evidence if necessary. We’re seeing a time, for instance, when our concept of how old and large the universe is may be undergoing revision, as (if I recall correctly) new evidence suggests that there are objects beyond the veil affecting objects that we can see. That doesn’t necessarily represent an ad hoc hypothesis; it represents a known unknown in the current model of the universe. Ad-hocking would require positing some explanation without sufficient justification.

(Curiously, Karl Popper obliquely referred to Kuhn’s scientific paradigm concept when he said “Darwinism is not a testable scientific theory but a metaphysical research programme.” )

It’s been a while since my quote-mine alarm went off, but it never fails. The quote is misleading at best, especially the way you’ve used it here, and somewhat wrong-headed at worst, as even Popper later acknowledged.

Here I define evolution (Common Descent Evolution or CDE) as: The theory that all life on earth evolved from a common ancestor over billions of years via the unguided natural processes of mutation and selection (and ‘drift’) and creation (Young earth creation or YEC) as: The theory that various kinds of life were created under 10,000 years ago and variation within these kinds occurs within limits via mutation and selection (and ‘drift’).

I can’t see anything in there to disagree with. Yet, anyway.

I believe CDE and YEC can both be properly and most accurately defined as being scientific paradigms.

This, though, seems problematic. CDE, certainly, may be a scientific paradigm (though, as usual, I’d like that term to be pinned down to a more specific definition). Why on Earth would YEC be a scientific paradigm? Going back to Wikipedia, that font of all knowledge:

Kuhn defines a scientific paradigm as:

  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • how the results of scientific investigations should be interpreted

Alternatively, the Oxford English Dictionary defines paradigm as “a pattern or model, an exemplar.” Thus an additional component of Kuhn’s definition of paradigm is:

  • how is an experiment to be conducted, and what equipment is available to conduct the experiment.

So I can see, under a Creationist paradigm, that one might have different priorities for observations (searching, for instance, for the Garden of Eden or examining evidence for a Global Flood). I certainly understand the matter of formulating questions–we see this in debates with Creationists all the time: “who created the universe,” “why does the universe seem so fine-tuned to our existence,” and so forth. These questions imply what form their answers will take: the first suggests that there must have been an agent involved in the creation of the universe, the latter interprets the causal relationship in a human-centered, teleological fashion. If there’s one thing I’ve learned over years of experience with these debates, it’s the importance of asking the right questions in the right ways. And certainly the scientists who once labored under a YEC paradigm, like the Creationists and ID proponents of today, interpreted the various lines of evidence in particular ways: ID proponents see everything in terms of engineering–machines, codes, programs, and so forth. I’m not entirely sure how a YEC paradigm would affect the available scientific equipment, though.

So I can see how YEC is a paradigm; I’m just not sure how it’s a scientific one. I mean, I can adopt a Pastafarian paradigm of looking at the world, and it may influence how I interpret scientific findings, but that doesn’t give it any scientific value or credence. A scientific paradigm, it seems to me, ought to develop out of science; allowing any paradigm to act as a justified scientific paradigm seems to me to be a little more postmodernist than is valid in science.

Whilst CDE proponents claim that CDE is falsifiable

And Popper, too.

(E.g. Haldane and Dawkins saying a fossil Rabbit in the Precambrian era would falsify CDE), it is easy to see how the theory laden-ness of science makes such a find unlikely.

Um…how? A find is a find, regardless of how theory-laden the scientists are. And it’s not as though evolution hasn’t had its share of moments of potential falsification. Darwin was unaware of genes; his theory was missing a mechanism of transmission. Were we to discover that genes were not prone to the sorts of mutations and variation and drift that Darwinian evolution predicts, the theory would have been worthless. But the study of genes validated Darwin. If we had discovered that DNA replication was not prone to errors and problems, that would have been a major nail in the coffin for Darwinian evolution, but instead the DNA replication process supported the theory. If our studies of the genome had revealed vast differences between apparently related species, with broken genes and junk DNA and retroviral DNA in wildly different places in otherwise-close species, that would be a serious problem for evolutionary theory. Instead, the presence and drift of such genetic bits are perhaps the best evidence available for evolution, and give us a sort of genetic clock stretching backwards along the timeline. It could have been that the genetic evidence wildly contradicted the fossil evidence, but instead we find confirmation and further explanation of the existing lines.

Classification of rock strata was initially (and still commonly) done via the presence of index fossils. (Note: The designation of these fossils as representing a certain historical period was done within the CDE paradigm)

Bzzt! Simply untrue. There do exist index fossils–fossils which occur in only one stratum–which can be used to verify the dates of some strata. However, those dates have already been determined through other methods: radiometric dating, which strata lie on top of which others, and so forth.

Incidentally, if anyone ever gets a chance to look into the various dating methods we have, I highly recommend it. I taught a lesson on it last Spring, and it’s really interesting stuff. You’d never believe how important trees are.
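Since I brought it up, the arithmetic behind the simplest radiometric methods is worth a quick sketch. This is a toy illustration only (a hypothetical helper, assuming an idealized closed system with no daughter isotope present when the rock formed; real methods, like isochron dating, correct for initial daughter content):

```python
import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Estimate a sample's age from its parent/daughter isotope ratio.

    Idealized model: closed system, zero daughter isotope at formation.
    Every daughter atom was once a parent atom, so N0 = parent + daughter
    and N(t) = N0 * exp(-decay_constant * t).
    """
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_atoms / parent_atoms) / decay_constant

# A made-up sample with a 1:2 daughter-to-parent ratio and a
# 1.25-billion-year half-life (roughly potassium-40's):
print(radiometric_age(parent_atoms=2.0, daughter_atoms=1.0,
                      half_life_years=1.25e9))  # ~7.3e8 years
```

What makes the dates trustworthy isn’t any single calculation like this one, but the agreement among independent lines: different isotope pairs, tree rings, ice cores, and so on.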

The finding of a fossil Rabbit in a rock strata would almost certainly result in classification of the strata as something other than pre-cambrian, or the inclusion of other ad hoc explanations for the fossil (Overthrusts, reworking etc).

No, I’m afraid that’s simply not the case. If a fossil rabbit were found in a Precambrian stratum that lay below the Cambrian stratum, and both the stratum and the fossil could be reasonably dated back to the Precambrian (through methods like radiometric dating), it would not simply force the redefinition of the stratum, because one would then have to explain the presence of one geological stratum beneath several others that, chronologically, came earlier, and why there are other Precambrian fossils in this supposedly post-Cambrian stratum. Either way, the result is an insurmountable anomaly.

Granted, there could be alternate hypotheses to explain how the rabbit got there. Maybe there was a hole in the ground, and some poor rabbit managed to fall in, die, and get fossilized. But then we wouldn’t have a Precambrian rabbit, we’d have a post-Cambrian rabbit in a hole, and there ought to be other signs which could demonstrate that (not the least of which is that the rabbit shouldn’t date back to the Precambrian radiometrically, and the strata above it, closing off the hole, should be out of place with regard to the rest of the strata). In order to call the stratum the result of an overthrust or erosion or something, there would have to be other evidence for that. Geological folding and erosion, so far as I know, would not affect one fossilized rabbit without leaving other signs behind.

It is worth noting that many smaller (only 200 million year) similar type surprises are happily integrated within CDE. (A recent example is pushing back gecko’s 40 million years in time)

I’d like to see more examples and sources for this. I read the gecko article, and I don’t see where it’s at all what you’re suggesting. This is not an example of a clearly out-of-place animal in the wrong era; it’s an example of finding an earlier ancestor of a modern group than we previously knew of. The preserved gecko is a new genus and species–it’s not as though it’s a modern gecko running around at the time of the dinosaurs–and it’s from a time when lizards and reptiles were common. The point of the “rabbit in the Precambrian” example is that there were no mammals in the Precambrian era. Multicellular life was more or less limited to various soft-bodied things and small shelled creatures; most of the fossils we find from the Precambrian are tough to pin down to a kingdom, let alone a genus and species like Sylvilagus floridanus, for instance. There’s a world of difference between finding a near-modern mammal in a period 750 million years before anything resembling mammals existed, and finding a lizard during a lizard- and reptile-dominated time 40 million years before your earliest fossil in that line. There was nothing in the theory or the knowledge preventing a gecko from palling around with dinosaurs; there was just no evidence for it.

The main point here is that the claimed falsification is not a falsification of CDE, but merely falsifies the assumption that fossils are always buried in a chronological fashion. CDE can clearly survive as a theory even if only most fossils are buried in chronological fashion.

That may be closer to the case, as there is a wealth of other evidence for common descent and evolution to pull from. However, the Precambrian rabbit would call into question all fossil evidence, as well as the concept of geological stratification. It would require a serious reexamination of the evidence for evolution.

Many other events and observations exist which could be said to falsify evolution (e.g. the origin of life, soft tissue remaining in dinosaur fossils), but are happily left as unsolved issues.

How would the origin of life falsify evolution? Currently, while there are several models, there’s no prevailing theory of how abiogenesis occurred on Earth. It’s not “happily left as an unsolved issue”; scientists in a variety of fields have spent decades examining that question. Heck, the Miller-Urey experiments, though based on an inaccurate model of the early Earth’s composition, were recently re-examined and found to be more fruitful and valid than originally thought. The matter of soft tissue in dinosaur fossils has been widely misunderstood, largely due to a scientifically-illiterate media (for instance, this article, which glosses over the softening process). It’s not like we found intact Tyrannosaurus meat; scientists had to remove the minerals from the substance in order to soften it, and even then the tissue may not be original to the Tyrannosaurus.

It is because of these types of occurrences that I suggest CDE is properly assigned as a scientific paradigm. Which is to say that CDE is not viewed as falsified by these unexpected observations, but instead these problems within CDE are viewed as the grist for the mill for making hypothesis and evaluating hypothesis within the paradigm.

Except that nothing you’ve mentioned satisfies the criteria for falsifiability. For any scientific theory or hypothesis, we can state a number of findings that would constitute falsification. “Rabbits in the Precambrian” is one example, certainly, but origins of life? Softenable tissue in dino fossils? Previous gecko ancestors? The only way any of those would falsify evolution would be if we found out that life began suddenly a few thousand years ago, or some such. So far, no such discovery has been made, while progress continues on formulating a model of how life began on the Earth four-odd billion years ago.

In other words, you’ve equated any surprises or unanswered questions to falsification, when that’s not, nor has it ever been, the case.

YEC can also be properly identified as a scientific paradigm although significantly less well funded and so significantly less able to do research into the problems that existing observations create within the paradigm.

Yes, if only Creationists had more funding–say, tax-exempt funding from fundamentalist religious organizations, or the $27 million that might otherwise be spent on a museum trumpeting their claims–they’d be able to do the research to explain away the geological, physical, and astronomical evidence for a billions-of-years-old universe; the biological, genetic, and paleontological evidence for common descent; the lack of any apparent barriers that would keep evolutionary changes confined to some small areas; and ultimately, the lack of evidence for the existence of an omnipotent, unparsimonious entity who created this whole shebang. It’s a lack of funding that’s the problem.

One such example of research done is the RATE project. Specifically the helium diffusion study which predicted levels of helium in zircons to be approximately 100,000 times higher than expected if CDE were true.

Further reading on RATE. I’m sure the shoddy data and the conclusions that don’t actually support YEC are due to lack of funding as well.

What placing YEC and CDE as scientific paradigms does is make sense of the argument. CDE proponents (properly) place significant problems within CDE as being something that will be solved in the future (E.g. origin of life) within the CDE paradigm. YEC can also do the same (E.g. Endogenous Retroviral Inserts).

Except that the origin of life isn’t a serious problem for evolution; evolution’s concerned with what happened afterward. That’s like saying that (hypothetical) evidence against the Big Bang theory would be a problem for the Doppler Effect. You’ve presented nothing that would falsify evolution, while there are already oodles of existing observations to falsify the YEC model. Moreover, you’ve apparently ignored the differences in supporting evidence between the two paradigms; i.e., that evolution has lots of it, while YEC’s is paltry and sketchy at best, and nonexistent at worst. It can’t just be a matter of funding; the YEC paradigm reigned for centuries until Darwin, Lord Kelvin, and the like. Why isn’t there leftover evidence from those days, when they had all the funding? What evidence is there to support the YEC paradigm, that would make it anything like the equal of the evolutionary one?

Comments
1) Ideas like Stephen Gould’s non-overlapping magisteria (NOMA) are self-evidently false. If God did create the universe 7000 years ago, there will definitely be implications for science.

More or less agreed; the case can always be made for Last Thursdayism and the point that an omnipotent God could have created the universe in medias res, but such claims are unfalsifiable and unparsimonious.

2) Ruling out a supernatural God as a possible causative agent is not valid. As with (1) such an activity is detectable for significant events (like creation of the world/life) and so can be investigated by science.

I’m not entirely clear on what you’re saying here. I think you’re suggesting that if a supernatural God has observable effects on the universe, then it would be subject to scientific inquiry. If that’s the case, I again agree. And a supernatural God who has no observable effects on the universe is indistinguishable from a nonexistent one.

a. To argue otherwise is essentially claim that science is not looking for truth, but merely the best naturalistic explanation. If this is the case, then science cannot disprove God, nor can science make a case that YEC is wrong.

Here’s where we part company. First, the idea that science is looking for “truth” really depends on what you mean by “truth.” In the sense of a 1:1 perfect correlation between our conceptual models and reality, truth may in fact be an asymptote, one which science continually strives for but recognizes as probably unattainable. There will never be a day when science “ends,” where we stop and declare that we have a perfect and complete understanding of the universe. Scientific knowledge, by definition, is tentative, and carries the assumption that new evidence may be discovered that will require the current knowledge to be revised or discarded. Until the end of time, there’s the possibility of receiving new evidence, so scientific knowledge will almost certainly never be complete.

As far as methodological naturalism goes, it doesn’t necessarily preclude the existence of supernatural agents, but anything that can cause observable effects in nature ought to be part of the naturalistic view. As soon as we discover something supernatural that has observable effects in nature, it can be studied, and thus can be included in the methodological naturalism of science.

Even if all this were not the case, science can certainly have a position on the truth or falsehood of YEC. YEC makes testable claims about the nature of reality; if those claims are contradicted by the evidence, then that suggests that YEC is not true. So far, many of YEC’s claims have been evaluated in precisely this fashion. While science is less equipped to determine whether or not there is a supernatural omnipotent god who lives outside the universe and is, by fiat, unknowable by human means, science is quite well equipped to determine the age of the Earth and the development of life, both areas where YEC makes testable, and incorrect, predictions.

b. Antony Flew, famous atheist turned deist, makes the point quite clearly when talking about his reasons for becoming a deist

“It was empirical evidence, the evidence uncovered by the sciences. But it was a philosophical inference drawn from the evidence. Scientists as scientists cannot make these kinds of philosophical inferences. They have to speak as philosophers when they study the philosophical implications of empirical evidence.”

What? We have very different definitions of “quite clearly.” Not sure why you’re citing Flew here, since he’s not talking about any particular evidence, since he has no particular expertise with the scientific questions involved, and since he’s certainly not a Young Earth Creationist, nor is his First Cause god consistent with the claims of YEC. I’m curious, though, where this quotation comes from, because despite the claim here that his conversion to Deism was based on evidence, the history of Flew’s conversion story cites mostly a lack of empirical evidence–specifically with regard to the origins of life–as his reason for believing in a First Cause God.

Flew’s comments highlight another significant issue. The role of inference. Especially in ‘historical’ (I prefer the term ‘non-experimental’) science.

You may prefer the term. It is not accurate. The nature of experimentation in historical sciences tends to be different from operational science, but it exists, is useful, and is valid nonetheless.

Much rhetorical use is given to the notion that YEC proponents discard the science that gave us planes, toasters and let us visit the moon (sometimes called ‘operational’…I prefer ‘experimental’ science). Yet CDE is not the same type of science that gave us these things.

No, CDE is the type of science that gives us more efficient breeding and genetic engineering techniques, a thorough understanding of how infectious entities adapt to medication and strategies for ameliorating the problems that presents, genetic algorithms, and a framework for understanding how and why many of the things we already take for granted in biology are able to work. It just happens to be based on the same principles and methodologies as the science that gave us toasters and lunar landers.

Incidentally, the determination of the age of the universe and the Earth is based on precisely the same science that allowed us to go to the moon and make airplanes. Or, more specifically, the science that allows us to power many of our space exploration devices and homes and allows us to view very distant objects.

CDE is making claims about the distant past by using present observations and there is a real disconnect when doing this.

It’s also making claims about the present by using present observations. Evolution is a continuous process.

One of the chief functions of experiment is to rule out other possible explanations (causes) for the occurrence being studied. Variables are carefully controlled in multiple experiments to do this. The ability to rule out competing explanations is severely degraded when dealing with historical science because you cannot repeat and control variables.

Fair enough. It’s similar to surgical medicine in that regard.

You may be able to repeat an observation, but there is no control over the variables for the historical event you are studying.

“No control” is another oversimplification. We can control what location we’re looking at, time period and time frame, and a variety of other factors. It’s certainly not as tight as operational science, but there are controls and experiments in the primarily-observational sciences.

Not that it matters, because experiments are not the be-all, end-all of science. Predictions, observations, and mathematical models are important too. Science in general has much more to do with repeated observation than with experimentation. And yes, repeated observation is enough (in fact, it’s the only thing) to determine cause and effect.

Scientists dealing with non-experimental science have to deal with this problem, and they generally do so by making assumptions (sometimes well founded, sometimes not).

Guh? You act like they just come up with these assumptions without any justification.

A couple of clear examples are uniformitarianism (Geological processes happening today, happened the same way, the same rate in the past) and the idea that similarity implies ancestry.

Okay, two problems. One: if we were to hypothesize that geological processes happened somehow differently in the past, one would have to provide some evidence to justify that hypothesis. Without evidence, it would be unparsimonious to assume that things functioned differently in the past. As far as all the evidence indicates, the laws of physics are generally constant in time and space, and those geological processes and whatnot operate according to those laws.

Two: the idea that similarity implies ancestry is not a scientific one. While that may have been a way of thinking about it early on in evolutionary sciences, it does not actually represent science now. Similarity may imply relationship, but there are enough instances of analogous evolution to give the lie to the idea that scientists think similarity = ancestry.

A couple of quotes will make my point for me.

Doubtful.

Henry Gee chief science writer for Nature wrote “No fossil is buried with its birth certificate” … and “the intervals of time that separate fossils are so huge that we cannot say anything definite about their possible connection through ancestry and descent.”

Poor Henry Gee; first quote-mined in Jonathan Wells’ Icons of Evolution, now by you. What’s interesting here is that you’ve actually quote-mined Gee’s response to Wells and the DI for quote-mining him! (Which, I realize, you’re aware of, but I read this largely as I was writing the response.) Here’s the full context:

That it is impossible to trace direct lineages of ancestry and descent from the fossil record should be self-evident. Ancestors must exist, of course — but we can never attribute ancestry to any particular fossil we might find. Just try this thought experiment — let’s say you find a fossil of a hominid, an ancient member of the human family. You can recognize various attributes that suggest kinship to humanity, but you would never know whether this particular fossil represented your lineal ancestor – even if that were actually the case. The reason is that fossils are never buried with their birth certificates. Again, this is a logical constraint that must apply even if evolution were true — which is not in doubt, because if we didn’t have ancestors, then we wouldn’t be here. Neither does this mean that fossils exhibiting transitional structures do not exist, nor that it is impossible to reconstruct what happened in evolution. Unfortunately, many paleontologists believe that ancestor/descendent lineages can be traced from the fossil record, and my book is intended to debunk this view. However, this disagreement is hardly evidence of some great scientific coverup — religious fundamentalists such as the DI — who live by dictatorial fiat — fail to understand that scientific disagreement is a mark of health rather than decay. However, the point of IN SEARCH OF DEEP TIME, ironically, is that old-style, traditional evolutionary biology — the type that feels it must tell a story, and is therefore more appealing to news reporters and makers of documentaries — is unscientific.

What Gee is criticizing here and in his book, as his response and further information here (4.14, 4.16) make clear, is the tendency among some scientists and journalists to interpret the evidence in terms of narratives and to see life as a linear progression, when in fact it’s more of a branching tree with many limbs. It’s impossible from fossil evidence alone to determine whether two animals are ancestor and descendant, or cousins, or whatever.

See, the problem with letting quotes make your point for you is that they often do no such thing.

Gee’s response to this quote of him supports my point

No, you’ve simply misunderstood it. The fact that you’ve read Icons, somehow find it valid, and somehow think it supports a YEC view, speaks volumes about your credibility.

Colin Patterson’s infamous quote about the lack of transitional fossils makes the same point. “The reason is that statements about ancestry and descent are not applicable in the fossil record. Is Archaeopteryx the ancestor of all birds? Perhaps yes, perhaps no: there is no way of answering the question.”

My quote-mine alarm is getting quite a workout today, but I have a distinct suspicion that Patterson is talking about precisely what Gee was: that from the fossil evidence alone, we cannot determine whether Archaeopteryx is the ancestor of all birds, or an offshoot of the lineage that produced birds. And a very brief look reveals precisely what I suspected. This isn’t the problem for evolution that you seem to think it is.

A simple thought experiment highlights this concept. Assuming at some point in the future, scientists find some scientific knowledge that makes the naturalistic origin of life a more plausible possibility given the time constraints. (For instance…given completely arbitrary probabilities, say there is a 15% chance of OOL from unliving chemicals driven by natural processes in the lifetime of the earth to date) Does this mean that it must have happened that way in the past? Clearly the answer is no.

No, it doesn’t mean it must have happened that way in the past. However, we can show ways it may have happened, or ways that it was likely to have happened. Merely showing a likely way for the origin of life to have occurred, given the conditions on Earth four-odd billion years ago, puts abiogenesis far ahead of the creationist hypothesis, due to the latter’s lack of parsimony.

Incidentally, as Dawkins explained in The God Delusion, the actual life-generating event needn’t be particularly likely to occur. After all, it’s only happened once in the history of the planet Earth, so far as we’re aware. Given the variety of conditions and the timespan involved, a single occurrence suggests a rather low probability per opportunity.
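To put rough numbers on that point (and these figures are entirely made up for illustration, not taken from Dawkins), even a vanishingly unlikely per-planet event becomes a near-certainty when there are enough planets for it to happen on. A quick Python sketch:

```python
import math

# Hypothetical, made-up numbers: a one-in-a-billion chance of life
# arising on any given suitable planet, across a billion billion
# candidate planets.
p_per_planet = 1e-9
n_planets = 1e18

# Probability that the event happens at least once:
# 1 - (1 - p)^n, computed via log1p/expm1 so floating-point
# rounding doesn't swallow the tiny p.
p_at_least_once = -math.expm1(n_planets * math.log1p(-p_per_planet))
print(p_at_least_once)  # effectively 1.0
```

And the event only has to happen once; any observers who evolve afterward will necessarily find themselves on one of the planets where it did.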

But even claims of certainty about experimental science are unjustified. The history of science contains many examples of widely held scientific beliefs being overturned. Phlogiston is probably the most famous, but geosynclinal theory (preceding plate tectonics) is a more non-experimental science example. So even claims about experimental science should be made with this in mind, evoking a more humble stance. Comments about CDE being a ‘fact’ or being on par with gravity are unfounded and display a profound ignorance of science and history. Such comments are not scientific, but faith based.

Wrong, wrong, wrong. You’re conflating an awful lot of things here, particularly with regard to scientific terminology. First, as I said above, scientific knowledge is tentative and admittedly so. Scientists are human, and are certainly prone in some cases to overstating their certainty about one given theory or another, but in general we recognize that our knowledge is subject to revision as future evidence becomes available. There is no 100% certainty in science.

Here’s the point where definitions would be important. In science, a “fact” is something that can be observed–an object, a process, etc. A “law” is a (usually) mathematical description of some process or fact. A “theory” is a model that explains how facts and laws work, and makes predictions of future observations that can be used to validate or falsify it. Gravity is a fact, a law, and a theory. The fact of gravity is that things with mass can be observed to be attracted to one another; the law of gravity is F=G*[(m1*m2)/R^2]; the (relativistic) theory of gravity is that massive objects warp spacetime, causing changes in the motion of other massive objects. Evolution is similar: the fact of evolution is the process of mutation and selection that can be observed and has been observed under a variety of different levels of control; the theory of evolution by natural selection is that organisms are descended with modification from a common ancestor through an ongoing selection process consisting of various natural forces and occurrences.
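To make the “law” sense concrete, here is a quick sketch that plugs standard textbook values for the Earth and Moon into that formula (the numbers are approximate, and the snippet is purely illustrative):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2
print(f"{force:.2e} N")  # ~1.98e20 newtons
```

The law describes and quantifies the observed fact; it takes a theory (Newton’s, then Einstein’s) to explain why masses behave this way.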

The claims by Gould and others that evolution is a fact are referring to the observable process of evolution. Your argument here amounts to suggesting that since scientists were wrong about phlogiston, they cannot claim with any certainty that things burn.

So how to evaluate between the two paradigms?

Reason and evidence?

This is the question that matters… Controversially, Kuhn claimed that choosing between paradigms was not a rational process.

…?

Whilst not subscribing to complete relativism, I believe there is a real subjective element in choosing between paradigms. Objective problems play a part, but how heavily those problems are weighted seems to be a fairly subjective decision.

From my perspective, the cascading failure of many of the lines of evidence used to infer CDE is a clear indication of the marginal superiority of the (admittedly immature) YEC paradigm.

False dichotomy. Try again. Evidence against evolution–which, I remind you, you have not provided–is not evidence for YEC. Nor is it evidence for OEC or ID or Hindu Creation Stories or Pastafarianism. Each of those things requires its own evidence if it is to stand as a viable scientific paradigm.

Incidentally, you might actually want to look at some of the evidence for evolution before declaring any kind of “cascading failure.” You might also want to look at the evidence for creationism.

Chief examples are things such as embryonic recapitulation (found to be a fraud),

Found by scientists to be a fraud; never central to evolutionary theory.

the fossil record (Found to exhibit mostly stasis and significant convergence),

Source? Experts disagree.

the genetic evidence (Found to exhibit massive homoplasy).

Source? Experts disagree.

Update: And the disagreement between molecular and morphological data.

Nothing in the article you’ve linked suggests any problems for evolution. It merely shows how useful the genetic and molecular analyses are in distinguishing species and discovering exactly how organisms are related; I think you’ll find that most biologists agree with that sentiment, which is part of why there’s so much more focus on genetic evidence than fossil evidence now. Heck, as long as we’re quoting, here’s Francis Collins:

“Yes, evolution by descent from a common ancestor is clearly true. If there was any lingering doubt about the evidence from the fossil record, the study of DNA provides the strongest possible proof of our relatedness to all other living things.”

It is curious, however, that even with the near monopoly of the CDE paradigm in science education in America, only a small fraction believe it. (CDE hovers around 10%, whilst 50+% accept YEC and the remainder theistic evolution.) This certainly indicates to me that perhaps it is CDE that is not as compelling an explanation as YEC.

So, an appeal to popularity? Yeah, that’s valid. Yes, evolution is believed by only a fraction of the laity, though your own numbers suggest that fraction is about half: if 50+% accept YEC and around 10% accept “CDE,” then the remaining ~40% accept theistic evolution, and theistic evolution is still evolution. Evangelical Francis Collins agrees far more with Richard Dawkins than with Duane Gish. Strangely enough, among scientists–you know, the people who have actually examined the evidence, regardless of their religious beliefs–it’s believed by the vast majority. What does that suggest?

Whatever the decision, it is more appropriate to say that YEC is the “better inferred explanation” than CDE, or vice versa. Such an understanding of the debate leads to a far more productive discourse and avoids the insults, derision and anger that seem to be so prevalent.

I’m afraid you’ve lost me, so I’ll sum up. Your position is based on an examination of the situation that ignores the complete lack of evidence for the “YEC paradigm” and inflates perceived flaws in the “CDE paradigm” in order to make them appear to be somewhat equal. From there, you ignore the basic lack of parsimony in the “YEC paradigm” and make appeals to logical fallacies in order to declare it the more likely explanation.

Alan, you’re clearly a fairly intelligent guy, but that more or less amounts to your argument having a larger proportion of big words than the average creationist’s. Your use of false dichotomy and argumentum ad populum as though they had any value to science, your quote-mining to make your point, your misinterpretation of popular science articles and your assumption that they refute a century of peer-reviewed journals, your ignorance of the actual evidence for evolution, and your postmodernist take on the whole debate are all standard creationist tactics. You’re intelligent enough and interested enough to correct your misconceptions and your errors in thinking, Alan, and I hope you take this chance to examine the evidence with an open mind and to understand that scientific theories are based on positive evidence, not on negative evidence against a competing theory. Thanks for the article!

An incomplete list of things I don’t know

  • The musical term for the rising inflection/voice cracking thing that you often hear singers do in Irish music, such as at the end of nearly every line in the Cranberries’ “Zombie.”
  • Where to find honey-roasted walnuts, outside of Steak ‘n’ Shake salads.
  • Why anyone in their right minds would name a cemetery “Resurrection Cemetery.” That’s just begging for a zombie invasion.

Continuing that last thought

The newest JREF newsletter references a French-made spray which claims to protect your skin from the aging effects of cell phone radiation. Now, if you were advertising such a fallacy-ridden product, how might you start your promo? Something like this, perhaps?

If electromagnetic waves can penetrate walls, imagine what they can do to your skin.

Sounds like Dove’s got some competition in the “terrible logic” arena. At least EM waves can have damaging effects on the skin; the radiation produced by cell phones, however, not so much.

Interchangeable Arguments

So, there’s this new Dove commercial I keep seeing, wherein they make the claim that Dove isn’t soap. As if that’s not shocking enough, here’s the argument that precedes that claim:

If soap can dry itself, imagine what it can do to your skin.

Yeahbuhwha? That’s easily one of the dumbest arguments I’ve ever heard on an advertisement.

First, does this mean that Dove, not being soap, and not having the nasty effects that soap does, doesn’t dry itself? Is a Dove bar perpetually wet from the moment you lather it up the first time? Do you have to dry it off with a towel? Or does it dry itself sitting in the soapdish after you shut the faucet off?

And second, and more egregiously, how many things can you reasonably replace “soap” with in that argument? How about “if a towel can dry itself, imagine what it can do to your skin”? After all, my towel dries while hanging on the bar after my shower; maybe it can dry my skin out too. And what about “if your clothes can dry themselves, imagine what they can do to your skin”? I mean, most of us put our clothes in the dryer, but they can dry just as well on a clothesline or hanging on a drying rack. Or, better yet, “if your skin can dry itself, imagine what it can do to your skin!” I mean, I’ve drip-dried before, who hasn’t? What if my skin is making my skin dry and filmy?

I realize that commercials are designed to convince you to buy their products, but can’t they be convincing and logically valid?