
The scientific vacuity of ID: design inference versus "Design Inference"

By Pim van Meurs

Posted November 06, 2006

On Evolution News, Casey Luskin makes the following claim:

"North Korean Nuclear Test Forces Seismologists to Make a Design Inference".

Luskin is correct to point out that seismologists have made a design inference. What Luskin fails to tell you is that the design inference has little relevance to Intelligent Design's "Design Inference".

Let me explain why Luskin's claim shows that Intelligent Design has failed to address many of the criticisms raised against it, and why ID's concessions have rendered it scientifically vacuous.

See also SETI, archeology and other sciences at Skeptico's blog for why Luskin's arguments fail.

In the past, I and various others have pointed out how ID argues on the one hand that science excludes design inferences, and on the other hand that science has successfully applied design inferences in such areas as archeology and criminology. The solution to this apparent contradiction is simple: when ID refers to the design inference, it is actually talking about the "Design Inference" proposed by Dembski. This "Design Inference" differs in many important ways from how science applies its design inferences. The most important difference is that Dembski's "Design Inference" attempts, with limited success, to detect design through elimination, while science, with considerable success, eliminates hypotheses by matching observations against positive signatures.

The example proposed by Luskin is no different. Notice that the seismologists did not apply the "Design Inference", which would have required various steps to be taken: first, the signal has to be shown to be 'specified'; second, the signal has to be complex, which requires that the signal cannot be explained by known regularity or chance. One could argue that the specification criterion is met by a comparison with previously known nuclear tests, but that would be cheating.
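To make the contrast concrete, here is a minimal sketch in Python. It is entirely my own illustration, none of these functions come from Dembski or from any seismology package: the eliminative filter labels an event 'design' merely because nothing else explained it, while the positive approach asks which known cause best fits the data.

```python
# Toy contrast between the two inference styles. Illustrative only; the
# function names and models are invented for this example.

def eliminative_filter(explained_by_regularity, explained_by_chance):
    """Dembski-style 'Design Inference': design is whatever is left over."""
    if explained_by_regularity:
        return "regularity"
    if explained_by_chance:
        return "chance"
    return "design"  # inferred purely from the failure of other explanations

def positive_inference(observation, cause_models):
    """Scientific style: pick the known cause whose model best fits the data."""
    # cause_models maps a cause name to a likelihood function P(obs | cause)
    return max(cause_models, key=lambda cause: cause_models[cause](observation))

# Example: classify a seismic event by its P-to-S amplitude ratio.
causes = {
    "earthquake": lambda ps_ratio: 0.9 if ps_ratio < 1.0 else 0.1,
    "explosion":  lambda ps_ratio: 0.9 if ps_ratio >= 1.0 else 0.1,
}
print(positive_inference(1.4, causes))  # -> "explosion"
```

Note that the positive approach never needs a leftover 'design' category: every verdict is a comparison among models of known causes.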

Dembski wrote:

Specifications are the independently given patterns that are not simply read off information. By contrast, the "bad" patterns will be called fabrications. Fabrications are the post hoc patterns that are simply read off already existing information.

Source: William Dembski, Intelligent Design as a Theory of Information (sic)

So how about complexity?

What is it for a possibility to be identifiable by means of an independently given pattern? A full exposition of specification requires a detailed answer to this question. Unfortunately, such an exposition is beyond the scope of this paper. The key conceptual difficulty here is to characterize the independence condition between patterns and information. This independence condition breaks into two subsidiary conditions: (1) a condition of stochastic conditional independence between the information in question and certain relevant background knowledge; and (2) a tractability condition whereby the pattern in question can be constructed from the aforementioned background knowledge. Although these conditions make good intuitive sense, they are not easily formalized. For the details refer to my monograph The Design Inference.

But this raises a significant problem for ID: either it will have to accept both nuclear explosions and natural earthquakes as 'specified', or neither one will meet the specification requirement. That a design inference is not necessarily the result of intelligent agency is something most ID activists simply overlook, and yet Dembski was clear on this, as Del Ratzsch pointed out:

"I do not wish to play down or denigrate what Dembski has done. There is much of value in the Design Inference. But I think that some aspects of even the limited task Dembski set for himself still remains to be tamed." "That Dembski is not employing the robust, standard, agency-derived conception of design that most of his supporters and many of his critics have assumed seems clear."

Del Ratzsch, Nature, Design, and Science: The Status of Design in Natural Science, SUNY Press, 2001.

More recently, Ryan Nichols pointed out that Dembski has made a significant concession:

Before I proceed, however, I note that Dembski makes an important concession to his critics. He refuses to make the second assumption noted above. When the EF implies that certain systems are intelligently designed, Dembski does not think it follows that there is some intelligent designer or other. He says that, "even though in practice inferring design is the first step in identifying an intelligent agent, taken by itself design does not require that such an agent be posited. The notion of design that emerges from the design inference must not be confused with intelligent agency" (TDI, 227, my emphasis).

Source: Ryan Nichols, The Vacuity of Intelligent Design Theory

Dembski wrote:

What's more, the competing possibilities that were excluded must be live possibilities, sufficiently numerous so that specifying the possibility that was actualized cannot be attributed to chance. In terms of probability, this means that the possibility that was specified is highly improbable. In terms of complexity, this means that the possibility that was specified is highly complex.

Certainly, such a specification would also render the signal simple rather than complex, since it can be shown to match a known regularity. Intelligent Design, however, relies on an absence of known regularities to reach its 'Design Inference'.

So assume seismologists detect a particular event: whether the event was a natural earthquake or a nuclear explosion, a design inference can be made. In the former case, the 'designer' is the natural processes in the interior of the earth; in the latter, the natural processes of nuclear fission, set in motion by a nuclear bomb.

Luskin's example also shows why the paper by Wilkins and Elsberry titled The advantages of theft over toil: the design inference and arguing from ignorance is still very relevant, as the authors show that 'design' involves two different categories: "… the ordinary kind based on a knowledge of the behavior of designers, and a "rarefied" design, based on an inference from ignorance, both of the possible causes of regularities and of the nature of the designer".

Intelligent design theorist William Dembski has proposed an "explanatory filter" for distinguishing between events due to chance, lawful regularity or design. We show that if Dembski's filter were adopted as a scientific heuristic, some classical developments in science would not be rational, and that Dembski's assertion that the filter reliably identifies rarefied design requires ignoring the state of background knowledge. If background information changes even slightly, the filter's conclusion will vary wildly. Dembski fails to overcome Hume's objections to arguments from design.
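Their point about background knowledge is easy to demonstrate with a toy example (mine, not the authors'): run the same eliminative filter on the same event before and after a regularity becomes part of background knowledge, and the verdict flips.

```python
# Toy demo of the Wilkins/Elsberry objection: the eliminative filter's
# verdict tracks the current state of background knowledge, not the event.

def filter_verdict(event, known_regularities):
    if event in known_regularities:
        return "regularity"
    return "design"  # anything not yet explained gets labeled 'design'

event = "deep-focus earthquake"  # hypothetical example event
print(filter_verdict(event, set()))    # -> "design"
print(filter_verdict(event, {event}))  # -> "regularity"
```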

So let's compare how science infers design with how ID infers 'Design'.

Scientific design inference

Science: We know from extensive testing and validation that the seismic signature of natural earthquakes differs significantly from that of nuclear explosions. In fact, the ratio of P to S waves tends to be higher for nuclear explosions.

Rather than relying on ignorance, scientists take measurements and build models, and this case is no different:

Using compressional and shear wave data from known events, PNNL researchers have built statistical models that describe what energy waves look like for earthquakes and for explosions. "When we have a new event coming down the line and we don't know what it is, we can ask if its energy waves most closely match the earthquake model or the explosion model," said PNNL's Dale Anderson, principal investigator on this project.

In addition to creating statistical models for each discriminant, PNNL researchers are adding a new twist by mathematically combining the discriminants into a model to more accurately identify a seismic event.

These models account for uncertainty in measuring individual seismic discriminants. "The better we can estimate a discriminant, such as depth, the less uncertainty we will have in the final decision about the type of event," said Debbie Carlson, a PNNL mathematician working with Anderson.

Figure: Seismograms show that the amplitude of shear energy (Lg) is larger for an earthquake than for an explosion. Scientists at PNNL are using compressional (represented by Pg) and shear wave data from seismic events to build statistical models that will ultimately help distinguish earthquakes from explosions. (Source)
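PNNL's actual models are not reproduced here, but the general idea of combining discriminants can be sketched as follows. This is a toy illustration under invented parameters: each discriminant gets a Gaussian likelihood under each event type, the log-likelihoods are summed, and the better-scoring model wins.

```python
import math

# Toy classifier combining two seismic discriminants. All numbers are
# invented for illustration; PNNL's real models are fitted to compressional
# (Pg) and shear (Lg) measurements from known events.

def gaussian_loglik(x, mean, std):
    """Log-likelihood of x under a normal distribution."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

# Per-event-type parameters for each discriminant: (mean, std), invented.
MODELS = {
    "explosion":  {"log_p_to_s": (0.4, 0.15), "depth_km": (0.5, 0.5)},
    "earthquake": {"log_p_to_s": (-0.2, 0.2), "depth_km": (15.0, 10.0)},
}

def classify(measurements):
    """Sum log-likelihoods over the discriminants; the better model wins."""
    scores = {
        event_type: sum(gaussian_loglik(measurements[d], *params[d])
                        for d in measurements)
        for event_type, params in MODELS.items()
    }
    return max(scores, key=scores.get)

# A shallow, P-rich event scores far better under the explosion model.
print(classify({"log_p_to_s": 0.35, "depth_km": 0.8}))  # -> "explosion"
```

Again, the verdict is a comparison between two positively characterized causes, with measured uncertainty, not an appeal to what remains unexplained.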

Similarly, in Seismic Monitoring Techniques Put to a Test, scientist Bill Walter explains how science detects 'foul play'.

First of all, scientists have access to a large variety of data, but for the moment I shall limit the discussion to seismology. The first indication of a nuclear explosion is a seismogram that differs from the typical seismograms found in the region.

Scientists then look at P and S waves. P waves are compressional waves and S waves are transverse, or shear, waves. Based on scientific principles, one would expect explosions to show large P waves and weak S waves, while earthquakes show just the opposite. So when P-to-S amplitudes are measured, scientists can quickly determine whether the event is natural or 'designed'.

And finally scientists compare the seismogram with earlier seismograms of similar events.

It should be clear by now that seismologists do not conclude 'we cannot find any regularity or chance explanations, thus designed'; rather, they rely on comparisons with known events, both natural and 'designed', as the sketch below illustrates.
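The three steps above can be sketched in a few lines of Python. This is my own toy illustration, with invented thresholds and a crude waveform-similarity measure; real analysts use calibrated regional models. The point is the logic: positive comparison against known signatures, not elimination.

```python
# Sketch of the three-step screening described above. An event is reduced
# to peak P and S amplitudes plus a coarse waveform; the thresholds and
# the similarity measure are invented for illustration.

def similarity(a, b):
    """Crude normalized dot-product similarity between two waveforms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

def screen_event(event, typical_regional, known_explosions):
    # Step 1: flag the event only if it differs from typical regional seismograms.
    if max(similarity(event["waveform"], t) for t in typical_regional) > 0.9:
        return "consistent with ordinary regional seismicity"
    # Step 2: explosions are P-rich; earthquakes are S-rich.
    verdict = ("likely explosion" if event["p_amp"] / event["s_amp"] > 1.0
               else "likely earthquake")
    # Step 3: compare against seismograms of earlier known events.
    if any(similarity(event["waveform"], k) > 0.9 for k in known_explosions):
        verdict = "likely explosion (matches a known test signature)"
    return verdict

event = {"waveform": [0.9, 0.2, 0.1], "p_amp": 3.0, "s_amp": 1.2}
print(screen_event(event, [[0.1, 0.8, 0.9]], [[0.95, 0.25, 0.05]]))
# -> "likely explosion (matches a known test signature)"
```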

The ID Design Inference

Remind us again: what models does Intelligent Design present in support of its thesis? Nada… Niente… Nothing… Niets… Nichts…

Final Note:

As some have pointed out, and here I will quote scientists from the University of Leeds, the low yield of the event (0.5-2 kT) suggests one of the following hypotheses:

1. North Korea successfully detonated a low-yield device (which is harder to design than a typical "first-design" weapon, which would deliver in the 5-15 kT range).
2. It was a larger, decoupled explosion. By firing the weapon in a 'chamber', the amplitude of the seismic waves can be suppressed.
3. The test failed (a "fizzle").
4. The test was a chemical explosion only.

Figure: Seismic signal on vertical components of broadband stations operated by the IPE, band-pass filter 0.8-2.8 Hz. Seismic record (large version).