Response? What Response?
How Dembski has avoided addressing my arguments
First posted on May 26, 2002. Updated May 27, 2002.
Copyright © 2002
Permission is given to copy and print
this page for nonprofit personal or educational use.
Contents
1. Preamble
2. Peer Review
3. Argument from Ignorance
4. Tornado in a Junkyard
5. Short Responses
6. Conclusion
TalkReason recently posted my critique of William Dembski's book No Free
Lunch, entitled Not a
Free Lunch But a Box of Chocolates, in which I thoroughly analyzed and
refuted his arguments. Dembski quickly responded with an article entitled Obsessively
Criticized But Scarcely Refuted: A Response To Richard Wein. Although that
article purports to be a response to my critique, it is in fact largely a
recapitulation of arguments from No Free Lunch which ignores my
refutations of them. In the few instances where Dembski has made a serious
attempt to address my arguments, he almost invariably misconstrues them. Not one
of my arguments has been satisfactorily addressed.
I invite any skeptical reader to crosscheck my original critique with
Dembski's response. To facilitate such a crosscheck, this article uses the same
section names and numbers as Dembski's and I provide links to the relevant
sections of my critique.
All quotations labelled "Wein:" are from my critique and those
labelled "Dembski:" are from his response, unless otherwise
indicated.
1. Preamble
I begin, like Dembski, with some peripheral issues, before getting down to
the technical issues in section 3 below.
Since he thinks it will work to his advantage here, Dembski makes some
appeals to authority. He correctly points out that I have little authority,
possessing only a humble bachelor's degree (in statistics). I am happy to let my
arguments be judged on their merit, and would not ask anyone to accept them on
my authority. Dembski should be wary of casting stones, however, since he
himself criticizes experts in fields where he has no qualifications at all, such
as biochemistry.
He also asks how the eminent endorsers of No Free Lunch can "think NFL
is the best thing since sliced bread" if it is as bad as I argue it is. Perhaps
he should ask himself how the far more eminent and numerous supporters of
evolutionary theory can be so convinced by it if it is as bad as
antievolutionists say. The antievolutionist is ill-placed to make appeals to
authority when the overwhelming weight of scientific authority is against
him.
Regrettably, advanced academic qualifications are no safeguard against
falling into the trap of pseudoscience, especially when powerful dogmatic
convictions are at stake.
2. Peer Review
For all the length of his discussion of peer review, Dembski fails to refute
any of the facts that I presented (critique section 8).
Most important, he is unable to name even one statistician or information
theorist who approves of his work in their fields, confirming my suspicion that
there are none.
While remaining completely silent on the subject of information theorists,
Dembski attempts to explain the lack of support from statisticians by suggesting
that his statistical claims are more appropriately judged by philosophers of
science. I do not dispute that some philosophers have expertise in the field of
statistics. But I note that he preferred to claim the support of statisticians
when he thought he could get away with it:
Interestingly, my most severe critics have been philosophers (for
instance Elliot Sober and Robin Collins...). Mathematicians and statisticians
have been far more receptive to my codification of design inferences. [NFL, p.
372n2, and "Intelligent
Design Coming Clean"]
Will he now retract his claim that statisticians have been receptive to his
work?
In his article, Dembski claims that his method of calculating perturbation
probabilities constitutes "a contribution to the applied probability
literature." Has he submitted it to a journal of applied probability? Or is this
yet another claimed contribution to a technical field that is best judged by
philosophers?
3. Argument from Ignorance
Dembski begins this section by repeating the claim that his eliminative
method of design inference is "inductively sound", based on the same absurd
argument which I have already refuted (critique section
3.6). He makes no attempt to address my argument.
Next, he repeats his claim that it is possible to rule out unknown material
mechanisms:
Dembski: Specified complexity can dispense with unknown
material mechanisms provided there are independent reasons for thinking that
explanations based on known material mechanisms will never be overturned by
yet-to-be-identified unknown mechanisms.
In No Free Lunch an attempt to dispense with unknown material
mechanisms was known as a "proscriptive generalization", but that term appears
nowhere in Dembski's current article. It is unclear whether this is just a
matter of inconsistency, or whether he now feels the term was an unwise one,
suggesting a degree of certainty that he cannot justify. Since he has not
acknowledged any change in his argument, I will continue to use his old term for
the sake of consistency. I will also assume that his "material" mechanisms are
synonymous with the "natural" mechanisms of No Free Lunch. Dembski's
frequent unacknowledged changes of terminology do nothing to enhance the clarity
of his arguments.
The proponent of an argument from ignorance always believes he has
sufficient reasons for thinking that his explanation will not be overturned by a
yet-to-be-identified unknown mechanism. So the fact that such reasons exist does
not save the argument from being one of ignorance. It is an argument from
ignorance because it is based not on evidence for the proposed
explanation but on the purported absence of any other explanation.
In No Free Lunch, Dembski defined arguments from ignorance as being
arguments of the form "not X therefore Y" (p. 111). But this is just the type of
purely eliminative argument that he is making: not material, therefore design.
Whether he eliminates material mechanisms by eliminating known mechanisms
individually or by trying to rule out unknown mechanisms through proscriptive
generalizations, this is still a purely eliminative argument.
Dembski continues by giving an example of a proscriptive generalization
against alchemy. But the point is made needlessly, since I have already agreed
with Dembski that proscriptive generalizations can be used to rule out some
categories of possibilities (critique section
3.2). The point is irrelevant, since these are arguments against
possibilities not arguments for a hypothesis. In terms of Dembski's "not
X therefore Y", they are a "not X"; they are not a "therefore Y". Scientists did
not argue "not alchemy, therefore modern chemistry".
Next, we have the example of a combination lock, which I did not discuss in
my critique, so I'll consider in some detail here. Dembski argues as
follows:
Dembski: The geometry and symmetry of the lock precludes
that material mechanisms can distinguish one combination from another; one is
as good as any other from the vantage of material mechanisms.
Not at all. There may be a flaw in the mechanism which causes it to favour
some combinations over others. Or the lock may be flawed in some other way. No
matter how carefully the lock has been inspected, we cannot completely rule out
that possibility.
Consider an imaginary scenario in which a safe's combination dial is randomly
rotated by natural forces. Let's say the safe is on a rowing boat at sea, and
the rolling of the boat is sufficient to make the dial rotate. Now suppose that,
while the sole occupant of the boat looks on, the safe springs open. For good
measure, suppose that the rower is a locksmith who has thoroughly inspected the
lock, found it to be flawless, closed the safe, and thoroughly randomized the
dial, all since he has been alone on the boat. How will he explain the
spontaneous opening of his safe? Let us say that he appreciates the sheer
improbability of the safe opening by pure chance if it was operating to
specification, and he rejects that explanation. Does he infer design? Or does he
infer that, despite his thorough check, there was a flaw in the mechanism that
caused it not to operate correctly? Even though it may seem implausible that the
safe sprang open spontaneously, he will surely consider it even more implausible
that someone boarded his boat and opened the safe while he was watching it,
without him noticing, and he will prefer the former explanation. This is an
"inference to the best explanation", or comparative inference. I discussed such
inferences at some length (critique section
3.5). Dembski has completely ignored that discussion, and continues to
insist that his eliminative inferences are the only option.
According to Dembski's logic, the rower should have inferred design, no
matter how certain he was that no human agent could have been responsible, even
if it required him to posit an unembodied designer. Dembski may respond that the
hypothesis of a flawed mechanism is another relevant chance hypothesis which
must be eliminated before inferring design. If he does so, he admits the fallacy
of his argument that it is possible to "guarantee the security of the lock."
I should add that this discussion of proscriptive generalizations has no
relevance to Dembski's argument for design in biology, since I have refuted his
claim of a proscriptive generalization against Darwinian evolution of
irreducibly complex systems (more on this below).
The remainder of Dembski's section is not a response to my critique at all,
but is a long passage cut and pasted from one of his articles, with trivial
changes. The article in question is the text of his recent talk at the American
Museum of Natural History, which he has posted to the Internet under the title
"Does
Evolution Even Have a Mechanism?" Most of this is a rehash of arguments from
No Free Lunch, and does not even belong in a section which purports to be
addressing the "Argument From Ignorance" issue. I will address only the parts
dealing with that issue.
Dembski: Isn't arguing for design on the basis of specified
complexity therefore merely an argument from ignorance? Two comments to this
objection: First, the great promise of Darwinian and other naturalistic
accounts of evolution was precisely to show how known material mechanisms
operating in known ways could produce all of biological complexity. So at the
very least, specified complexity is showing that problems claimed to be solved
by naturalistic means have not been solved.
Biologists have never claimed to know precisely how every biological
structure evolved. So Dembski is attacking a straw man.
Dembski: Second, the argument from ignorance objection
could in principle be raised for any design inference that employs specified
complexity, including those where humans are implicated in constructing
artifacts. An unknown material mechanism might explain the origin of the Mona
Lisa in the Louvre, or the Louvre itself, or Stonehenge, or how two students
wrote exactly the same essay. But no one is looking for such mechanisms. It
would be madness even to try. Intelligent design caused these objects to
exist, and we know that because of their specified complexity.
Once again, Dembski assumes that all our inferences of design use his
eliminative approach. I have already addressed this claim (critique section
3.5). He has not responded.
I note that this section of Dembski's article contained not one direct
response to any argument of mine. He made no attempt at all to refute my
argument that his method of design inference is a purely eliminative argument,
an argument from ignorance, and a god-of-the-gaps argument (critique section
3.3). He did not provide definitions of these terms and attempt to show that
his method does not match them. He did not even attempt to challenge my
definitions of these terms. Nor did he address my argument that the term
"specified complexity" is an unnecessary middleman. Let me make that argument
even simpler: why not replace
all relevant chance hypotheses eliminated -> specified complexity -> design
with
all relevant chance hypotheses eliminated -> design?
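The redundancy of the middleman can be made concrete in code. The sketch below is my own construction (Dembski gives no algorithmic form of his method); it renders both schemes as Python functions and shows that the "specified complexity" step adds nothing, since the design verdict depends only on whether every relevant chance hypothesis assigns the specified event a probability below the bound:

```python
ALPHA = 10 ** -150  # Dembski's universal probability bound

def infer_design_via_specified_complexity(probs):
    """probs: P(specified event | H) for each relevant chance hypothesis H."""
    specified_complexity = all(p < ALPHA for p in probs)  # the "middleman" step
    return specified_complexity  # infer design iff specified complexity obtains

def infer_design_directly(probs):
    """Same verdict with the middleman removed."""
    return all(p < ALPHA for p in probs)
```

For any list of probabilities, the two functions return the same verdict, which is exactly the point: the intermediate label does no inferential work.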
4. Tornado in a Junkyard
I pointed out (critique section
4.1) that Dembski's only probability calculation for a biological system is
based on a hypothesis of purely random combination, or what I called a "tornado
in a junkyard" scenario. Since such hypotheses are already universally rejected
by biologists, I argued that the calculation was addressing a straw man
hypothesis, and was therefore irrelevant. Dembski responds:
Dembski: Wein therefore does not dispute my calculation of
appearance by random combination, but the relevance of that calculation to
systems like the flagellum. And why does he think it irrelevant? Because
co-optation is supposed to be able to do it.
Dembski has not read my article carefully enough. I wrote:
Wein: No biologist proposes that the flagellum appeared by
purely random combination of proteins - they believe it evolved by natural
selection - and all would agree that the probability of appearance by random
combination is so minuscule that this is unsatisfying as a scientific
explanation. Therefore for Dembski to provide a probability calculation based
on this absurd scenario is a waste of time. There is no need to consider
whether Dembski's calculation is correct, because it is totally irrelevant to
the issue.
There is nothing here about co-optation. I did not mention co-optation until
the following section (where I called it "change of function"). So Dembski has
entirely failed to address my argument. Nevertheless, I thank him for confirming
that his calculation is based on a hypothesis of random combination.
At this point, let me interject a passage from Dembski's talk at the American
Museum of Natural History, which he included in his section 3:
Dembski: Convinced that the Darwinian mechanism must be
capable of doing such evolutionary design work, evolutionary biologists rarely
ask whether such a sequence of successful baby-steps even exists; much less do
they attempt to quantify the probabilities involved. I attempt that in chapter
5 of NFL (to which I'll return shortly). There I lay out techniques for
assessing the probabilistic hurdles that the Darwinian mechanism faces in
trying to account for complex biological structures like the bacterial
flagellum. The probabilities I calculate - and I try to be conservative - are
horrendous and render natural selection utterly implausible as a mechanism for
generating the flagellum and structures like it.
There is no mention here that the probabilities were calculated under a
hypothesis of random combination. On the contrary, there is a distinct
implication that they were calculated under a hypothesis involving natural
selection. We know this to be untrue, but listeners at the AMNH may have been
misled by it. A reader unfamiliar with the tactics of antievolutionists might
have thought that it did no harm to include an irrelevant calculation in No
Free Lunch. But those of us familiar with antievolutionist rhetoric foresaw
that it would be abused in the way we see here.
In my next section (critique section
4.2), I refuted Dembski's claim to have found a proscriptive generalization
against the Darwinian evolution of irreducible complexity. Since this is an
important point, I'll repeat the passage here:
Wein: Let us accept, for the sake of argument, that
Dembski's definition is tight enough to ensure that IC systems cannot evolve
by direct pathways. What has he said on the vital subject that Behe
failed to address - the subject of indirect pathways? The answer is
nothing. The crux of his argument is this:
Dembski [NFL]: To achieve an irreducibly complex system,
the Darwinian mechanism has but two options. First, it can try to achieve the
system in one fell swoop. But if an irreducibly complex system's core consists
of numerous and diverse parts, that option is decisively precluded. The only
other option for the Darwinian mechanism then is to try to achieve the system
gradually by exploiting functional intermediates. But this option can only
work so long as the system admits substantial simplifications. The second
condition [that the irreducible core of the system is at the minimal level of
complexity needed to perform its function] blocks this other option. Let me
stress that there is no false dilemma here - it is not as though there are
other options that I have conveniently ignored but that the Darwinian
mechanism has at its disposal.[p. 287]
Wein: But there is indeed an option that Dembski has
overlooked. The system could have evolved from a simpler system with a
different function. In that case there could be functional
intermediates after all. Dembski's mistake is to assume that the only possible
functional intermediates are intermediates having the same
function.
For once, Dembski appears to have read and understood my argument, but he
makes no attempt to refute it. His proscriptive generalization is therefore
dead. That leaves him with only appeals to ignorance and red herrings:
Dembski: For Wein to account for systems like the
flagellum, functions of precursor systems must coevolve. But that means the
space of possible functions from which these coevolving functions are drawn
is completely unconstrained. This provides yet another recipe for insulating
Darwinian theory from critique, for the space of all possible biological
functions is vast and there is no way to establish the universal negative that
no sequence of coevolving functions could under co-optation have led to a
given system.
Dembski is not required to establish a universal negative. He just needs to
show that a design hypothesis is better, given the available evidence, than the
hypothesis of purely natural evolution. But he rejects inferences to the best
explanation, insisting on a purely eliminative mode of inference, and that puts
him in the unenviable position of either establishing a "universal negative" or
admitting there is a category of possibilities he has not eliminated. Since he
cannot do the first and does not wish to do the second, he equivocates, first
claiming that he has ruled out all Darwinian possibilities (his proscriptive
generalization) and then, when it is shown he has not done so, complaining that
the expectation was unreasonable. In short, he wants to have his lunch and eat
it too!
Dembski: Let me suggest that there are further reasons to
be deeply skeptical of Wein's co-optation scenario. First, specified
complexity is used to nail down design in cases of circumstantial evidence, so
if there should happen to be design in nature, specified complexity is how we
would detect it. Thus, my probability calculation for the flagellum, in the
absence of a countercalculation by Wein, is prima facie evidence of
biological design. This may not provide sufficient reason for convinced
Darwinists to abandon their paradigm, but it gives evolution skeptics reason
to consider other options, including design.
This is the crude argument from ignorance: having eliminated the absurd
hypothesis of purely random assembly, we must infer design unless biologists can
give an alternative hypothesis detailed enough to allow a probability
calculation.
Dembski: Second, there is a whole field of study developed
by Russian scientists and engineers known under the acronym TRIZ (Theory of
Inventive Problem Solving) that details patterns of technological
evolution...
Dembski argues that, because engineers do not use Darwinian methods to solve
"inventive" problems, biological evolution cannot do so. The argument is an
absurd non sequitur. Biological evolution can make billions of trials,
thanks to large populations and unimaginable periods of time. Human engineers do
not have such vast resources available. Furthermore, the premise of Dembski's
argument is false. In recent years some engineering problems have indeed been
solved using Darwinian methods, namely computerized evolutionary algorithms.
Dembski himself gives an example: the "crooked wire genetic antennas" of
Altshuler and Linden (NFL, p. 221).
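For readers unfamiliar with such methods, the toy sketch below may help. It is my own illustration, not the Altshuler-Linden antenna algorithm: a minimal evolutionary algorithm (random variation plus selection over many trials) solving a trivial search problem that purely random combination would essentially never solve in the same number of evaluations.

```python
import random

random.seed(0)

TARGET_LEN = 40       # bits in each candidate solution
POP_SIZE = 30
MUTATION_RATE = 0.02  # per-bit chance of flipping

def fitness(bits):
    """Toy objective: count of 1-bits (the classic 'max-ones' problem)."""
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto a suffix of the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve(generations=200):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[:POP_SIZE // 2]  # truncation selection: keep the better half
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP_SIZE - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

A random 40-bit string has roughly a 1-in-10^12 chance of being all ones, yet this procedure approaches the optimum within a couple of hundred generations, because selection accumulates partial successes rather than demanding the whole result in one trial.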
Dembski: Third, and perhaps most telling, Wein needs
fitness to vary continuously with the topology of configuration space. Small
changes in configuration space need to correlate with small changes in
biological function, at least some of the time. If functions are extremely
isolated in the sense that small departures from a functional island in
configuration space lead to complete nonfunctionality, then there is no way to
evolve into or out of those islands of functionality by Darwinian
means.
The notion of "functional islands" is misleading, as I will show below.
But the essential point that Dembski seems to be making here is that there
might not be a viable evolutionary pathway to the bacterial flagellum.
This is just another appeal to ignorance. In his previous section, he made this
appeal more explicit: "But what guarantee is there that a sequence of baby-steps
connects any two points in configuration space?" Science is not in the business
of giving guarantees, but of making inferences to the best explanation. (Note:
"configuration space" is equivalent to the term "phase space" which Dembski used
in No Free Lunch.)
Dembski: To clear away this obstacle to the Darwinian
mechanism, Wein argues that the laws of nature guarantee the continuity that
permits the Darwinian mechanism to flourish. According to Wein, smooth fitness
landscapes are the norm because we live in a world with regular laws of nature
and these are supposed to ensure smoothness.
Here, Dembski cites my argument from the regularity of the laws of nature,
but that argument was only a response to Dembski's argument from the NFL
theorems (see section 5.2 below).
He is taking it out of context and treating it as a response to a different
argument. He is also misreading it, since I wrote nothing about a guarantee.
Having failed to defend his argument from irreducible complexity, Dembski
finishes this section by appealing to another argument. It is based on research
that may be published in the next two years, and which allegedly will show that
functional enzymes are located on isolated "islands of functionality" in
configuration space. Since the research is not yet in and Dembski has not
described the argument in any detail, it hardly deserves a response.
Nevertheless, I will make a few comments:

- Dembski claims that this research will provide an opportunity for his
method of "perturbation probabilities" to show its true merit. But his
perturbation probabilities are based on a hypothesis of purely random
combination of components, i.e. the "tornado in a junkyard" scenario. This
scenario is just as irrelevant to the evolution of individual enzymes as it is
to the evolution of the bacterial flagellum.

- He predicts that the research "will provide convincing evidence for
specified complexity as a principled way to detect design and not merely as a
cloak for ignorance." This is nonsense. The research cannot be used both to
establish a result using Dembski's method and also as confirmation of the
validity of that method. The conclusion that specified complexity is a cloak
for the argument from ignorance follows from its definition, as I have already
shown, and cannot be refuted by any amount of research.

- From the extremely brief description in No Free
Lunch (p. 301), it appears that the configuration space in question is one
in which the units of variation are individual amino acids. At best Dembski's
argument might show that it is not possible to evolve from one island to
another by multiple substitutions of individual amino acids. But biological
mutation is not limited to such a process. Whole sequences of amino acids may
be substituted by a single mutation (e.g. DNA shuffling). As with irreducible
complexity, Dembski is attempting to shoehorn a complex biological process
into an excessively simplistic model. He has overlooked a key element in that
process.
An interesting analogy may be noted with Dembski's example of attempting to
evolve a computer program by means of random changes to its source code. While
this cannot be achieved by changing single characters in a typical program, it
can be achieved by changing larger elements of a program which is suitably
structured. There is a field of study called "genetic programming" which is
devoted to just this subject. For a brief introduction, see "Genetic Programming with
C++".
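The contrast between character-level and larger-grained variation is easy to demonstrate. The sketch below is my own illustration (not taken from the article cited): random single-character edits to a small Python function frequently destroy its syntax, while "mutations" that swap whole operators, a structured larger-grained change, always leave a compilable program.

```python
import random

random.seed(1)
SOURCE = "def f(x):\n    return x * x + 2 * x + 1\n"

def valid(src):
    """True if src still compiles as Python."""
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def char_mutation(src):
    """Replace one random character with a random character: fine-grained change."""
    i = random.randrange(len(src))
    return src[:i] + random.choice("abcdefghijklmnopqrstuvwxyz0123456789+*-() :") + src[i + 1:]

def token_mutation(src):
    """Swap one whole arithmetic operator for another: coarse-grained, structured change."""
    ops = "+-*"
    i = random.choice([j for j, c in enumerate(src) if c in ops])
    return src[:i] + random.choice(ops) + src[i + 1:]

char_ok = sum(valid(char_mutation(SOURCE)) for _ in range(1000))
token_ok = sum(valid(token_mutation(SOURCE)) for _ in range(1000))
```

Every operator swap yields a valid program, while a large fraction of single-character edits do not; genetic programming exploits exactly this by varying whole subtrees of a suitably structured program rather than individual characters.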
5. Short Responses
5.1 Uniform Probabilities
One thing I was hoping to see in Dembski's response was a clarification
regarding his two apparently different methods of determining specified
complexity, the chance-elimination method and the uniform-probability method, as
I have named them. At first he seems to deny the uniform-probability method,
insisting that we must calculate probabilities with respect to all relevant
probability distributions (i.e. chance hypotheses):
Dembski: As a criterion for detecting or inferring design,
specified complexity must be assessed with respect to all the relevant
probability distributions. Consequently, for complexity to obtain, the
probability of the specified event in question must be less than the universal
probability bound with respect to all the probability distributions (i.e.,
relevant chance hypotheses) being considered. (Note that this means the
formula in the Law of Conservation of Information, I(A&B) = I(A) mod UCB,
needs to obtain for every relevant probability distribution P, which gets
converted to an information measure I by a logarithmic
transformation.)
It is now unclear what the Law of Conservation of Information (LCI) means. A law should tell us what does obtain, not what "needs to" obtain. It will help to sort out this confusion if I start by removing the disguise from the formula. The LCI's formula "I(A&B) = I(A) mod UCB" is a disguised form of "P(R|H) ≥ α", the complement of the formula "P(R|H) < α" from the chance-elimination method (NFL, p. 73). The disguise is implemented by means of the following transformations:

1. Refer to the rejection region R as an "item of information" B, consisting
of two parts T and E. The "target" T is just another name for the rejection
region R. The outcome (or elementary event) E plays no part in the LCI, which
only considers the probability of T, so can be ignored. In practice, B is just
R in disguise. Thus, P(R|H) ≥ α becomes:
P(B|H) ≥ α
2. Leave out the chance hypothesis H (as this can be taken for granted):
P(B) ≥ α
3. Forget about computing a probability bound (α), and just use the universal
probability bound (10^{-150}):
P(B) ≥ 10^{-150}
4. Make explicit some prior event or state A on which B depends:
P(B|A) ≥ 10^{-150}
5. Substitute the identity P(B|A) = P(A&B)/P(A) (see NFL, pp. 128-129):
P(A&B)/P(A) ≥ 10^{-150}
P(A&B) ≥ P(A) × 10^{-150}
6. Transform probability into "information" by applying the function
I = -log_{2}P (which reverses the direction of the inequality):
I(A&B) ≤ I(A) + 500
7. Call 500 the Universal Complexity Bound (UCB):
I(A&B) ≤ I(A) + UCB
8. This is the LCI as stated on p. 161. Now introduce Dembski's peculiar "mod"
notation:
I(A&B) = I(A) mod UCB
Remember that the LCI applies to events which "arose by natural causes" (NFL, p. 160). So, with the disguise removed, and taking Dembski's statement above at face value, the LCI tells us that, if an event arose by natural causes, "P(R|H) ≥ α" needs to obtain for each relevant chance hypothesis. (Remember that P(R|H) is the probability of any outcome occurring which matches a specification based on the observed outcome.) But "needs to" for what purpose? If he means it needs to obtain or else we should infer design, then he is now saying that we should infer design if the probability is small (P(R|H) < α) for any relevant chance hypothesis. But the chance-elimination method tells us only to infer design if the probability is small for all relevant chance hypotheses (NFL, p. 73). So this interpretation can't be right.
Perhaps the LCI is telling us that, if an event arose by natural causes, "P(R|H) ≥ α" does obtain for each relevant chance hypothesis. But a natural event may have very low probability under some "relevant" chance hypotheses (i.e. some chance hypotheses which we thought might have been operating before we did a probability calculation). For example, Dembski always seems to consider the hypothesis of purely random arrangement to be a relevant one. But many regular objects which result from natural processes (e.g. highly symmetrical snowflakes) would have very low probability under the hypothesis that they occurred by purely random arrangement of their molecules. So this interpretation can't be right either. It seems Dembski has not thought this through.
Voilà! The LCI. So we see that the LCI is just a disguised version of the
chance-elimination method.
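The chain of transformations above is easy to check numerically. The sketch below is my own (assuming, as in the derivation, that I = -log2 P and that the universal probability bound is 10^-150); it confirms that the probability form and the information form of the condition stand or fall together, and that -log2(10^-150) is about 498.3 bits, which Dembski rounds up to the Universal Complexity Bound of 500:

```python
import math

UPB = 10.0 ** -150     # universal probability bound
UCB = -math.log2(UPB)  # ~498.3 bits; NFL rounds this to 500

def info(p):
    """Dembski's 'information' measure: I = -log2(P)."""
    return -math.log2(p)

# Example: P(A) and P(A&B) chosen so that P(B|A) = 1e-140, above the bound.
p_A, p_AB = 1e-20, 1e-160
prob_form = p_AB >= p_A * UPB               # P(A&B) >= P(A) x 10^-150
info_form = info(p_AB) <= info(p_A) + UCB   # I(A&B) <= I(A) + UCB

# Counterexample: P(B|A) = 1e-160, below the bound, so both forms fail together.
q_AB = 1e-180
prob_fail = q_AB >= p_A * UPB
info_fail = info(q_AB) <= info(p_A) + UCB
```

The two forms agree in both cases, which is all the "Law" amounts to once the logarithmic disguise is removed.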
Let us move on from the LCI to specified complexity. Despite the clear statement above that for assessing specified complexity we must calculate the probability with respect to all relevant chance hypotheses (probability distributions), Dembski then proceeds to equivocate. In my critique, I listed several assertions by Dembski which support
a uniform-probability interpretation (critique section
6.3, exhibits #1 to #4). Dembski now makes feeble attempts to address my
exhibits #2 to #4, but only adds to the confusion.
To remind the reader, exhibits #2 and #3 were concerned with programs which
regularly produce specified results (solutions to the problem). Therefore, with
respect to the hypothesis that the results were produced by the program, they
have a high probability, and that hypothesis is therefore not rejected. This
hypothesis, which we know to be the true one, must surely be a "relevant" one.
According to the chance-elimination method and to Dembski's account above,
specified complexity is indicated only if the probability is low with respect to
each relevant chance hypothesis (probability distribution). In this case it is
not, so the results do not exhibit specified complexity. Yet Dembski claimed
they do. What does he have to say on the subject now?
Dembski: One clarification about uniform probabilities in
the context of evolutionary computation is worth making here. When I refer to
specified complexity in relation to evolutionary computation, I usually intend
the complexity to correspond to the probability of the target induced by the
natural topology on the configuration/search space in question. Often this is
a uniform probability, but it needn't be.
This "clarification" is exceptionally evasive, even by Dembski's standards.
He continues to insist that search spaces come with a probability distribution
attached, although I challenged this assertion in my critique and he has not
responded to my challenge. He does not tell us how to determine what this
probability distribution is, merely saying that it is "often" a uniform
distribution. He says that he "usually" bases specified complexity on this
distribution, but does not tell us why he does so, nor under what conditions he
would choose to use some other distribution instead. Most importantly, he fails
to explain why he is not calculating the probability with respect to all
relevant chance hypotheses, as he told us to above, or, alternatively, why the
true explanation is not a relevant chance hypothesis.
On exhibit #4, the SETI sequence, Dembski writes:
Dembski: In many contexts the uniform probability
distribution is a good way to see whether we're in a small probability ball
park. For instance, I regularly cite the Contact example in which a
long sequence of prime numbers represented as a bit string comes from outer
space and convinces SETI researchers of the reality of extraterrestrial
intelligence. What probabilities might reasonably be assigned to that
sequence? What are the relevant chance hypotheses that might assign these
probabilities? It's not simply going to be a uniform probability (1s vastly
outnumber 0s in representing that sequence). Yet the uniform probability is
much more in the right ball park than a probability that concentrates high
probability on this sequence.
Again, Dembski is extraordinarily evasive. He stated in No Free Lunch
that the SETI sequence exhibits specified complexity, so he needs to show that
the probability is low under all relevant chance hypotheses. He tells us that a
uniform probability distribution (conferring equal probabilities on the digits 1
and 0) is not quite the right one because "1s vastly outnumber 0s in
representing that sequence". Logically, then, we should consider a distribution
which corresponds to the actual proportions of 1s and 0s in the sequence, as I
suggested in my critique. This means assigning the probabilities 1102/1126 and
24/1126 to the digits 1 and 0 respectively, since we observed 1102 1s and 24 0s.
But, under this hypothesis, the probability was 3.78 × 10^{-51}, not low
enough to reject the chance hypothesis, at least with a universal probability
bound. As well as ignoring this relevant chance hypothesis, Dembski failed to
provide a proscriptive generalization to rule out any other chance hypotheses.
So it seems he was not even attempting to consider all relevant chance
hypotheses, but thought it sufficient to consider just one, though he doesn't
tell us just what that hypothesis is, what probability it conferred on the
sequence, or what probability bound he compared it to. Once again, he did not
respond to my argument.
Why does Dembski persistently equivocate about this? Does he think there is
some tactical advantage in keeping his critics guessing?
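The arithmetic behind this biased-coin hypothesis is easy to check directly. Here is a minimal sketch (my own, not from either book), using the digit counts quoted above:

```python
import math

# Counts of digits observed in the 1126-bit SETI sequence discussed above
n_ones, n_zeros = 1102, 24
n = n_ones + n_zeros

# Chance hypothesis: digits drawn independently with the observed frequencies
p1, p0 = n_ones / n, n_zeros / n

# Work in log space to avoid floating-point underflow
log10_prob = n_ones * math.log10(p1) + n_zeros * math.log10(p0)
prob = 10 ** log10_prob

# prob comes out around 3.78e-51: tiny, but still vastly larger than
# Dembski's universal probability bound of 1e-150.
```

So under this perfectly reasonable chance hypothesis the sequence misses the universal bound by a hundred orders of magnitude.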
5.2 The No Free Lunch Theorems
In my critique
section 5.4, I first showed that the NFL theorems do not apply to
coevolutionary systems such as biological evolution. Dembski has not even
attempted to address that argument.
I then argued that, even for those evolutionary algorithms to which they do
apply, the NFL theorems do not support Dembski's fine-tuning claim, because the
theorems assume that all mathematically possible fitness landscapes are equally
likely, but the existence of regular laws makes this assumption unrealistic. In
my previous section (critique section
5.3), I had provided a quotation which makes a similar point:
In spite of the correctness of this "no-free-lunch theorem"
(Wolpert and Macready 1997) the result is not too interesting. It is easy to
see, that averaging over all different fitness functions does not match the
situation of black-box optimization in practice. It can even be shown that in
more realistic optimization scenarios there can be no such thing as a
no-free-lunch theorem (Droste, Jansen, and Wegener 1999). [Thomas Jansen, "On
Classifications of Fitness Functions", 1999]
Dembski's only response to my argument is to point out that not all real
fitness landscapes are smooth, citing as examples landscapes based on small
changes to written language and computer source code. This may be true (though
even the cited landscapes have smooth gradients in places), but it is
irrelevant. My argument only requires that smooth gradients be significantly
more probable in the presence of regular laws than if drawn randomly from the
set of all mathematically possible fitness landscapes. I did not claim, as
Dembski writes, that "the laws of nature guarantee the continuity that permits
the Darwinian mechanism to flourish". In the hope of avoiding such
misunderstandings, I even added the following disclaimer, which Dembski seems
not to have read:
Wein: Although it undermines Dembski's argument from NFL,
the regularity of laws is not sufficient to ensure that real-world evolution
will produce functional complexity.
Nevertheless, I can see now that I made my argument more complicated than
necessary. There was no need to mention the subject of "smoothness" at all. So I
now offer the following improved text, to replace the second half of my critique section
5.4, from the paragraph beginning "Moreover...":
Moreover, NFL is hardly relevant to Dembski's argument even for
the simpler, non-interactive evolutionary algorithms to which it does apply
(those where the reproductive success of individuals is determined by a
comparison of their innate fitness). NFL tells us that, on average, a search
algorithm will perform no better than random search if the fitness
landscape is selected at random from the set of all mathematically possible
fitness landscapes. Since, in practice, we observe that evolutionary
algorithms tend to perform much better than random search, Dembski argues that
the selection of fitness landscape must have been biased in favour of good
performance by an intelligent designer. But this does not follow. The
alternative to design is not random selection from the set of all
mathematically possible fitness landscapes. Fitness landscapes are determined
at least in part by rules (whether imposed by humans or by the laws of
nature), not generated randomly. Rules inevitably give rise to patterns, so
that patterned fitness landscapes will be favoured over totally chaotic ones.
Thus, the assumption on which NFL is based does not obtain in practice, and
NFL therefore cannot be used to support Dembski's conclusion. The fact that
NFL does not apply to real situations has already been noted above (see the
quotation from Jansen).
Dembski might respond that some sets of rules would produce
patterned fitness landscapes but not landscapes conducive to evolution. This
is true, but it is no longer an appeal to NFL. It is an appeal to the rules
being the right sort of rules for evolution. We do not need NFL to tell us
that some sets of rules would not be conducive to evolution. Indeed, we can
see some obvious examples. The rules mapping English text to meaning are not
conducive to evolution of meaningful texts by substitution of single letters.
Where the rules are invented by intelligent agents, as in this case, they may
or may not be conducive to evolution. In neither case does this support the
claim that an intelligent agent is necessary. If, on the other hand, Dembski
considers situations where the relevant rules are the laws of nature (or
simulations of them), he is making a variant on the argument from cosmological
fine-tuning, arguing that natural laws are more conducive to evolution than
could be expected by chance. But what does this add to the existing
cosmological fine-tuning argument? At most it would show that natural laws are
even more fine-tuned than was previously claimed, and Dembski has not even
made that case. He would need to show that fitness landscapes are more
conducive to evolution than can be accounted for by the currently claimed
examples of fine-tuning.
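The NFL result itself can be verified on a toy scale. The sketch below (my own illustration, not drawn from either book) enumerates every possible fitness function on a four-point space and confirms that an adaptive search and a blind fixed-order search achieve exactly the same average performance. The averaging over all mathematically possible functions is precisely the unrealistic assumption discussed above:

```python
from itertools import product

X = [0, 1, 2, 3]   # a tiny search space
M = 3              # evaluations allowed; neither algorithm revisits a point

def alg_blind(f):
    """Evaluate points in a fixed order, ignoring the values seen."""
    return [f(x) for x in (0, 1, 2, 3)[:M]]

def alg_adaptive(f):
    """Let the value at the first point steer the rest of the search."""
    order = (0, 1, 2, 3) if f(0) else (0, 3, 2, 1)
    return [f(x) for x in order[:M]]

def mean_best(alg):
    """Average best-fitness-found over ALL functions f: X -> {0, 1}."""
    results = []
    for values in product([0, 1], repeat=len(X)):
        f = lambda x, v=values: v[x]
        results.append(max(alg(f)))
    return sum(results) / len(results)

# Averaged over every possible fitness landscape, the two searches tie
# exactly, as the NFL theorems predict.
```

As soon as some landscapes are more probable than others, of course, the tie can be broken, which is the point of the argument above.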
Next, Dembski appeals to a quote from Geoffrey Miller on how carefully
programmers must select the fitness function of an evolutionary algorithm. Since
he had already used this quote in a previous article, I foresaw that he might
introduce it, and preempted him by giving the quotation myself and showing that
Dembski had misinterpreted it (critique section
5.2). He appears not to have noticed. And elsewhere I discussed an example
where the programmers did not carefully select the fitness function at all,
instead using a perfectly obvious one: Chellapilla and Fogel's checkers-playing
program (critique
section 6.6). Again, no response from Dembski.
Having failed to establish any good reason why the fitness landscapes for
biological organisms should not be conducive to evolution, Dembski switches to
demanding that biologists explain why they are so conducive:
Dembski: Given, for the sake of argument, that Darwinism is
the means by which biological complexity emerged, why is nature such that the
fitness landscapes operating in biology are smooth and allow the Darwinian
mechanism to be successful?
My critique only set out to refute Dembski's arguments, not to make the case
for evolution. Nevertheless, I did briefly address this question (critique section
5.2):
Wein: In the case of biological evolution, the situation is
somewhat different, because the evolutionary parameters themselves evolve over
the course of evolution. For example, according to evolutionary theory, the
genetic code has evolved by natural selection. It is therefore not just good
luck that the genetic code is so suited to evolution. It has evolved to be
that way.
This is referred to more generally in the literature as "the evolution of
evolvability", a subject which Dembski fails to consider at all.
5.3 Specification
Dembski: As is so often the case with Wein's critique, he
conveniently omits the key that unlocks the door. Nowhere in his review does he
mention the tractability condition, and yet it is precisely that condition
that circumvents his worry of artificially tailoring patterns to events. He's
right to stress this concern, but I stress it too in NFL and I'm at pains to
show how this concern can be met. The way I meet it is through what I call the
tractability condition (why no mention in Wein's critique?).
Dembski did not read my critique carefully enough. Let me first quote a
sentence from No Free Lunch:
Dembski [NFL]: The tractability condition employs a
complexity measure φ that characterizes the complexity of patterns relative to
S's background knowledge and abilities as a cognizer to perceive and generate
patterns. (NFL, p. 76).
In my critique
section A.2, I devoted several paragraphs to discussion of this complexity
measure and its use in determining specificational resources. I even quoted the
sentence above, less the first four words. In other words, I discussed the
tractability condition in all but name. I felt the introduction of yet another
term to the discussion was unhelpful, especially one whose name is a hangover
from its old use in The Design Inference, and does not reflect its new
use in No Free Lunch. I also stated very clearly that specificational
resources help to compensate for the tailoring of the specification to the
observed event.
Having erroneously accused me of not addressing his argument, Dembski
then proceeds to ignore all of my arguments about his statistical method.
The one thing he does do is introduce a new, fourth complexity measure, based on
"levels" to add to the three complexity measures he mentioned in No Free
Lunch (critique section
A.2). Which of these four measures should we use? Do we have a free
choice?
5.4 Algorithmic Information Theory
Dembski: Wein attributes confusion to me about algorithmic
information theory, but the confusion is his. The reason Wein is confused is
because within algorithmic information theory, the highly incompressible bit
strings are the ones with "high complexity." Thus when I attribute specified
complexity to highly compressible bit strings, he thinks I'm confused. But
that's because he is reading in his own preconceptions rather than applying
the framework I lay out in NFL.
Confusion abounds because of Dembski's conflation of improbability with
complexity, and his failure to state which chance hypotheses his
improbability-cum-complexity is relative to (critique sections 3.7 and 6.3). He now
introduces the terms "probabilistic complexity" and "patterned complexity". Why
not just call these "improbability" and "complexity" respectively, like everyone
else does? Because it would mean abandoning an important part of the disguise
for his argument from ignorance.
He writes nothing in this section to clarify the issue of chance hypotheses,
failing to mention them at all except for a passing reference to "coin tosses",
which once again suggests that he is considering only a uniform probability
distribution instead of considering all relevant chance hypotheses. In my
critique, I discussed the relationship of algorithmic information to
uniform-probability specified complexity (critique section
6.7), a discussion which Dembski has not addressed. I will now discuss its
relationship to eliminative specified complexity.
Consider, for example, the following sequence:
111111111111111111111111111111111111111111111111111111111...
This is a highly compressible sequence, i.e. it has very low algorithmic
information (or Kolmogorov complexity). But does it exhibit eliminative
specified complexity? That depends on which chance hypotheses we consider to be
"relevant". The sequence is "probabilistically complex" with respect to the
hypothesis that the digits were generated independently with equal probability
(1/2) of being 0 or 1 (a uniform probability distribution). But it is not
"probabilistically complex" with respect to a process which outputs a 1 every
time; or a process which outputs a 1 with probability 0.9999; or a process which
outputs a 0 or 1 (with equal probability) for the first digit, and then repeats
the same digit; etc. Thus, the sequence may or may not exhibit specified
complexity, depending on whether the processes just listed constitute "relevant"
chance hypotheses given our knowledge of the situation.
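The dependence on the chance hypothesis is easy to quantify. For a 500-digit all-1s sequence (an illustrative length of my own choosing), the hypotheses just listed assign wildly different probabilities:

```python
n = 500                     # length of the all-1s sequence (illustrative)

p_uniform = 0.5 ** n        # independent fair digits: about 3e-151
p_biased  = 0.9999 ** n     # each digit is 1 with probability 0.9999: about 0.95
p_repeat  = 0.5             # first digit random, the rest forced to copy it

# Only the uniform hypothesis puts the sequence below Dembski's
# universal probability bound of 1e-150.
```

Whether the sequence "exhibits specified complexity" therefore turns entirely on which of these hypotheses count as relevant.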
For an example where Dembski himself attributes specified complexity to such
a simple sequence, consider the Caputo case, where he inferred design in this
sequence:
DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD
Dembski does not explicitly state that this sequence exhibits specified
complexity, but he reaches step #8 of the General Chance Elimination Argument
(NFL, p. 82), and this step tells us that the observed outcome (E) exhibits
specified complexity (p. 73). Since Dembski inferred design when 40 out of 41
draws were Ds, he would have had even more reason to infer design if all 41
draws had been Ds. Thus, a sequence of 41 Ds would also have exhibited specified
complexity. This sequence is analogous to the sequence of 1s given above, so we
see that such repetitive sequences can exhibit specified complexity. (The Caputo
case was based on a local probability bound, but we can imagine extending the
sequence to 500 Ds, to reach the universal probability bound.)
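The parenthetical point about extending the sequence can be made precise. Under a fair 50/50 draw (a simplifying assumption; the actual drawing procedure's probabilities are what was in dispute), 41 straight Ds falls far short of the universal bound, and roughly 500 are needed to reach it:

```python
import math

p_D = 0.5                    # assumed chance that one draw favours the Democrats
universal_bound = 1e-150     # Dembski's universal probability bound

p_41_Ds = p_D ** 41          # about 4.5e-13: small, but nowhere near the bound
draws_needed = math.ceil(math.log(universal_bound) / math.log(p_D))
# draws_needed comes out at 499, i.e. roughly 500 straight Ds, as stated above
```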
We can now see that Dembski's concept of specified complexity is quite
different from that of Orgel and Davies, although he persistently conflates his
with theirs. As we have just seen, highly compressible patterns can exhibit
specified complexity in Dembski's sense. But, in the work of Orgel and Davies,
it is incompressible patterns which exhibit specified complexity. I made
this point in my critique (critique section
6.7), but Dembski has not responded.
Instead, he gives us a red herring. He reminds us that we must consider all
outcomes matching a specification, and not just the observed outcome, implying
that I had overlooked this point. I had not. I made the point frequently
throughout my critique. In the examples above we can consider the specifications
to be "a sequence of all 1s" and "a sequence of all Ds" (see the Caputo case, p.
81). In such cases the specification corresponds to only a single outcome, just
as in Dembski's Champernowne (pp. 16-18) and SETI (pp. 143-144) cases.
Alternatively, if 1s have no special significance over 0s, we might consider the
specification "all digits the same", which corresponds to two outcomes
("1111111111..." and "0000000000..."); this will not significantly alter the
probabilities. (Incidentally, Dembski refers here to the "patterned complexity"
of a specification, but he has still not unambiguously defined this concept; see
critique section
A.2.)
I wonder whether Dembski would like to reconsider his insistence that he is
not confused about this issue, because the alternative is that his conflation is
a deliberate ploy to mislead his readers.
5.5 Predictive Power of Darwinian Evolution
Dembski describes as nonsense my claim that there is a high degree of
congruence between phylogenetic trees derived from morphological studies and
from independent molecular studies, citing Simon Conway Morris, who writes:
"Constructing phylogenies is central to the evolutionary enterprise, yet rival
schemes are often strongly contradictory. Can we really recover the true history
of life?"
This is an example of the quote-mining tactic which plays such a central role
in anti-evolutionist rhetoric. Indeed, this quotation appears in the Discovery
Institute's compendium of mined quotes "Bibliography
of Supplementary Resources For Ohio Science Instruction". A quote is
provided which superficially appears to support one's position, but significant
context is omitted and contrary evidence is conveniently ignored. Here Dembski
simply ignores the evidence I cited ("The
One True Phylogenetic Tree") and omits to mention that Conway Morris was
referring only to the earliest organisms, in the Precambrian and Cambrian
periods, where phylogenies are tentative and complicated by lateral gene
transfer. The fact that these tentative phylogenies contain some contradictions
does not negate my example. I foresaw this sort of spurious objection, and wrote
in a footnote:
Wein: It will do Dembski no good to point out that there
are a few exceptions to this congruence. The methods for establishing
phylogenetic trees are fallible. The prediction is only that there will be a
high degree of congruence, not perfect congruence.
Incidentally, since Dembski considers Conway Morris such an authority, I
assume he will agree with the following passage from the same paper:
Co-option is, therefore, commonplace (e.g., Finkelstein and
Boncinelli, 1994; Holland and Holland, 1999), perhaps ubiquitous, and just
what we would expect in organic evolution. ["Evolution: Bringing Molecules
into the Fold", Simon Conway Morris, Cell, Vol. 100,
1-11]
5.6 Explanatory Power
Since I confessed myself unable to find a satisfactory definition of
"explanatory power", Dembski criticizes me for not consulting the literature. He
then proceeds to quote a definition from the literature which is both trivial
and question-begging: "Explanations are also sometimes taken to be more
plausible the more explanatory 'power' they have. This power is usually defined
in terms of the number of things or more likely, the number of kinds of things,
the theory can explain. Thus Newtonian mechanics was so attractive, the argument
goes, partly because of the range of phenomena the theory could explain."
It is hardly very enlightening to say that the theory with the most
explanatory power is the one which can explain the most. This definition simply
begs the question of what it means for a theory to explain something. The
simplest (but not fully satisfactory) definition of "explain" is that a theory
explains a phenomenon if the phenomenon can be deduced from the theory (together
with auxiliary theories and initial conditions). This is the "retrodictive"
meaning which I gave in my critique. With this meaning, as I pointed out, the
intelligent design hypothesis has zero explanatory power.
I note that, except for a little unsuccessful nitpicking, Dembski has made no
challenge to my arguments showing that evolutionary theory is superior to the
design hypothesis in terms of productivity as a research program,
falsifiability, parsimony, predictive power and explanatory power (critique section
7.2).
5.7 Wein's Acknowledgment
Dembski: Wein offers the following acknowledgment for help
on his critique of NFL: "I am grateful for the assistance of Wesley Elsberry,
Jeffrey Shallit, Erik Tellgren and others who have shared their ideas with
me." Am I to assume that Wein speaks for Elsberry, Shallit, Tellgren, and
others...?
Of course not.
6. Conclusion
My critique has received no serious challenge from Dembski, and its conclusions remain
well-founded.
This article first appeared in the Talk.Origins Archive.