%\documentclass{article}
\documentclass{amsart}
\usepackage{amsmath}
\usepackage{amssymb,latexsym}
\newtheorem{theorem}{Theorem}
\newtheorem{cor}{Corollary}
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
% nice commands
\newcommand \rf[1] {Eq. (\ref{#1})}
\newcommand \arf[1] {Assumption \ref{#1}}
\newcommand \trf[1] {Theorem \ref{#1}}
\newcommand \eventmap[1] {\bar{#1}}
% Convenient defs
\def \ba {\begin{array}}
\def \ea {\end{array}}
\def \be {\begin{equation}}
\def \ee {\end{equation}}
% Special symbols
\def \hypset {\mathcal{H}}
\def \hypsetB {\mathcal{H}_{\text{new}}}
\def \knowset {\mathcal{K}}
\def \kid {\mathcal{K}\mathrm{ID}}
\def \detmap {\phi}
\def \specres {\mathcal{S}}
\def \probres {\mathcal{R}}
\begin{document}
\title{On Dembski's Law of Conservation of Information}
\author{Erik}
%\address{No Department At All\\
% Brandeis University\\
% Waltham}
%optional
%\email{erik\_12345@hotmail.com}
%\urladdr{http://www.angelfire.com/nv/dembski/thelci.html}
\date{\today}
\begin{abstract}
Dembski's ``Law of Conservation of Information'' is investigated using a more formal approach than is provided in his latest book ``No Free Lunch''. It is concluded that the Law of Conservation of Information is mathematically unsubstantiated.
\end{abstract}
\maketitle
\section{Introduction}
William Dembski is currently one of the leading figures of the so-called Intelligent Design (ID) movement. This movement sets out to replace or modify the current theories of abiogenesis and biological evolution to explicitly include the intervention of an intelligent designer. One of the characteristics of the movement is a lack of interest in providing descriptive explanations and models. Instead, the ID movement seeks to establish the involvement of an intelligent designer by disproving the currently available alternatives that do not require one. Dembski's ideas constitute the most general and sophisticated proposal to date for disproving these alternatives. Loosely speaking, the idea is that if an event is both very unlikely with respect to a particular hypothesis and fits a pattern that in a certain unformalized sense is ``independent'' of the event, then we should reject that particular hypothesis in favour of a general hypothesis that simply states that an unspecified designer was involved in the realization of the event. The hypothesis of the unspecified intelligent designer is considered established if all currently available alternatives can be rejected in this fashion. In his latest book, Dembski describes his ideas in more detail, provides a subjective quasi-mathematical ``formalization'', and applies this ``formalization'' to a biological feature \cite{DNFL}. Thorough refutations of Dembski's entire project have already been written \cite{ES_REFU,W_REFU}. Here I will therefore focus on a specific part of Dembski's total work; namely, his argument to establish the so-called ``Law of Conservation of Information''. My treatment will be as formal as possible. Dembski has been criticized for merely cloaking, in difficult-to-understand jargon, the simple idea that certain biological features are unlikely to be produced by anything except an intelligent designer. For instance, one reviewer of one of Dembski's previous books remarked
\begin{quote}
``Just as there are chess players who never resign a lost game but play on until they are checkmated, one can be oblivious to one's defeat in debate, if one is clever enough to invent argument after argument, however sophistical, scoring points on minor and reparable issues, and triumphally shooting down bad arguments that were never central to the debate anyway.'' \cite{FALK}
\end{quote}
Perhaps by treating Dembski's ideas as formally as possible, I unwittingly encourage the unnecessary jargon and peripheral arguments that Dembski has been criticized for. Nonetheless, I feel that a formal investigation of the ideas would be valuable. Formalization is a powerful tool to detect, and perhaps also repair, logical flaws. I stress that I will not be concerned with the way Dembski's ideas are being applied to real-world cases. I, like many other critics, consider Dembski's applications to the real world to fail to conform to even the rules he set up himself. But here I will be concerned with Dembski's ideas in theory, not in practice.
Finally, it should also be mentioned that Dembski's writings are notoriously difficult to interpret. To my knowledge, in the cases where Dembski has responded to criticism, the response has always been that the critic has misunderstood him in some fatal way. My own interpretations probably differ a little from those of all other critics, but I cannot guarantee that Dembski will agree with my presentation of his ideas. The reader is encouraged to critically compare my interpretation with Dembski's original writings.
\section{Notation and Definitions}
Dembski's work revolves around a number of concepts original to him. The main concepts are compactly described by him as follows:
\begin{quote}
``Given a reference class of possibilities $\Omega$, a chance hypothesis $\mathbf{H}$, a probability measure induced by $\mathbf{H}$ and defined on $\Omega$ (i.e., $\mathbf{P}(\cdot|\mathbf{H})$), and an event/sample $E$ from $\Omega$;
a rejection function $f$ is \emph{detachable} from $E$ if and only if a subject possesses background knowledge $\mathbf{K}$ that is conditionally independent of $E$ (i.e., $\mathbf{P}(E|\mathbf{H}\&\mathbf{K}) = \mathbf{P}(E|\mathbf{H})$) and such that $\mathbf{K}$ explicitly and univocally identifies the function $f$.
Any rejection region $R$ of the form $T^\gamma = \{\omega \in \Omega | f(\omega) \geq \gamma\}$ or $T_\delta = \{\omega \in \Omega | f(\omega) \leq \delta\}$ is then said to be \emph{detachable} from $E$ as well. Furthermore, $R$ is then called a \emph{specification} of $E$, and $E$ is said to be \emph{specified}.'' \cite[pp. 62--63]{DNFL}
\end{quote}
In the interest of clarity and completeness I will write out, one by one, all the relevant definitions of these concepts below. Some of them are the same as those in Dembski's book and some may differ slightly. Throughout this essay $\Omega$ will denote a sample space under consideration.
\begin{definition}[Rejection region]
An event of the form $T_{f,\delta} = \{\omega \in \Omega| f(\omega) \leq \delta\}$ or $T_f^{\gamma} = \{\omega \in \Omega| f(\omega) \geq \gamma\}$, where $f$ is a mapping $f:\Omega \to \mathbb{R}$ and $\delta,\gamma \in \mathbb{R}$ are constants, is called a \emph{rejection region}. The function $f$ is called a \emph{rejection function}.
\end{definition}
In this article there will be no need to consider both forms of the rejection region, so all rejection regions will be assumed to be of the form $T_{f,\delta}$ (a very small restriction since $T_{f,\delta} = T_{-f}^{-\delta}$). It will also be clear from the context which rejection function is used, so the indices will be left out. For a given event, Dembski hints that one should choose
\be
\delta = \min \{\delta' \in \mathbb{R} | E \subseteq T_{\delta'}\},
\ee
where $E$ is the event under study \cite[p. 72]{DNFL}. That is, a value that makes the rejection region as small as possible should be chosen.
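To make the choice of $\delta$ concrete: since $\Omega$ is assumed finite, the minimal $\delta$ is simply $\max_{\omega \in E} f(\omega)$. The following sketch is my own illustration; the toy sample space, rejection function, and event are invented and do not come from Dembski's text.

```python
# Invented illustration: choosing the smallest rejection region
# T_{f,delta} = {w in Omega : f(w) <= delta} that still contains the
# observed event E.

def minimal_delta(f, E):
    """The smallest delta with E a subset of T_{f,delta} is max f over E."""
    return max(f(w) for w in E)

def rejection_region(f, delta, Omega):
    """T_{f,delta} = {w in Omega | f(w) <= delta}."""
    return {w for w in Omega if f(w) <= delta}

Omega = set(range(10))
f = lambda w: w          # a toy rejection function
E = {2, 3}               # the observed event

delta = minimal_delta(f, E)
T = rejection_region(f, delta, Omega)
assert E <= T            # E is contained in its rejection region
print(delta, sorted(T))  # 3 [0, 1, 2, 3]
```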
The arguments in \cite{DNFL} depend heavily on the concept of background knowledge and its relations to the sample space. Unfortunately it is left unformalized, so here I will introduce some definitions that to my knowledge do not appear in Dembski's work. In particular, rejection functions that are ``explicitly and univocally'' identified by background knowledge play a special role. I will denote the set of all background knowledge by $\knowset$.
We also need a set of hypotheses $\hypset$.
\begin{assumption}
The set $\Omega \times \knowset$ is mathematically meaningful and finite, and every hypothesis $h \in \hypset$ determines a probability measure $P_h$ on it.
\end{assumption}
The assumption of finiteness is just made to avoid potential technical difficulties. When more than one sample space is considered, the corresponding assumptions are of course made for all the sample spaces under study. The assumption that we can evaluate the probability of knowledge may or may not be philosophically problematic, but I will simply note that Dembski's arguments and ideas depend on it and assume that it is true for the sake of the argument. For convenience, I will write $P_h(E,K)$ instead of $P_h(E \times K)$, where $E \subseteq \Omega$ and $K \subseteq \knowset$. Also, if $E = \Omega$, I shall simply omit it and write $P_h(K)$. The corresponding convention will be used if $K = \knowset$. I will also allow myself to sometimes use an outcome, rather than the elementary event containing the outcome, as argument in the probability measure.
Dembski also requires the user of his method to determine whether background knowledge ``explicitly and univocally'' identifies given rejection functions. For this purpose I define a relation as follows
\be
\kid_{\Omega} (f,K) = \begin{cases} \text{true} & \text{if $K$ explicitly and univocally identifies $f$}, \\
\text{false} & \text{otherwise}, \end{cases}
\ee
where $f$ is a mapping $f:\Omega \to \mathbb{R}$ and $K \subseteq \knowset$. From a philosophical point of view one could have considerable doubts as to whether the nature of knowledge is such that it is possible to determine whether it ``explicitly and univocally'' identifies a function, but again I will simply note that Dembski's arguments seem to require it and assume that a relation $\kid$ is given in a mathematical form.
\begin{assumption}
The relation $\kid_{\Omega}$ is meaningful and available in mathematical form.
\end{assumption}
This assumption is made for all sample spaces if more than one sample space is studied.
With the notation and assumptions introduced we are now ready to state the definition of detachability and specifications.\footnote{Curiously, the key concepts are defined in terms of \emph{events} rather than \emph{outcomes}, e.g. events---not outcomes---are specified. Perhaps this has the advantage of allowing incomplete knowledge, but it also adds considerable subtleties to the argument. For instance, if the outcome $\omega$ is observed, then all events containing this outcome must be said to have occurred. Thus, when investigating whether the outcome is such that the ID hypothesis of intelligent intervention should be inferred, one is free to consider any event that contains this outcome. If a rejection function is not $h$-detachable from the event $\{\omega\}$, perhaps it will be $h$-detachable from, say, $\{\omega,\omega'\}$.}
\begin{definition}[$h$-detachability]
Let $h \in \hypset$ be an hypothesis. A rejection function $f$ is said to be \emph{$h$-detachable} from an event $E \subseteq \Omega$ if there exists a $K \subseteq \knowset$ such that $\kid_{\Omega}(f,K)$ and $P_h(E) = P_h(E|K)$.
\end{definition}
\begin{definition}[$h$-specification]
A rejection region $T \subseteq \Omega$ defined by a rejection function that is $h$-detachable from the event $E \subseteq T$ is said to be an \emph{$h$-specification} of $E$, and $E$ is said to be \emph{$h$-specified}.
\end{definition}
\begin{definition}[$h$-specified information]
An ordered pair of events $(E,T)$ such that $E \subseteq T$, and where $T$ is a rejection region with a rejection function that is $h$-detachable from $E$, is called \emph{$h$-specified information}.
\end{definition}
Roughly speaking, it is $h$-specified information whose $h$-specification is sufficiently improbable that Dembski considers an infallible indication that the hypothesis $h$ should be rejected in favour of a hypothesis that an intelligent agent was involved in the realization of the event.
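Once a joint probability table on $\Omega \times \knowset$ is given, the conditional-independence half of $h$-detachability can be checked mechanically. The sketch below is my own illustration; the $\kid$ relation is assumed to be supplied separately, and the spaces and numbers are invented.

```python
# Invented sketch of the conditional-independence half of h-detachability:
# given a joint probability table P_h on Omega x K, test whether
# P_h(E) = P_h(E | K), which (for P_h(K) > 0) is equivalent to the
# product form P_h(E, K) = P_h(E) * P_h(K).

def prob(P, E, K):
    """P_h(E, K): total mass of pairs (w, k) with w in E and k in K."""
    return sum(p for (w, k), p in P.items() if w in E and k in K)

def detachable_wrt(P, E, K, Omega, Kset, tol=1e-12):
    """True iff P_h(E, K) = P_h(E) * P_h(K)."""
    return abs(prob(P, E, K) - prob(P, E, Kset) * prob(P, Omega, K)) < tol

Omega, Kset = {0, 1}, {'k0', 'k1'}
# A product measure, so every E is independent of every K.
P = {(w, k): 0.25 for w in Omega for k in Kset}

assert detachable_wrt(P, {0}, {'k0'}, Omega, Kset)
```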
The rest of this essay will investigate to what extent the mathematical relations defined above are sufficient to establish Dembski's Law of Conservation of Information.
\section{The Law of Conservation of Information: The Deterministic Case}
Here the Law of Conservation of Information as it is described on pp. 152--153 of \cite{DNFL} will be investigated in detail.
Consider two sample spaces $\Omega$ and $\Lambda$ and an arbitrary mapping $\detmap : \Omega \to \Lambda$ between them. For convenience, define two associated mappings on the corresponding event spaces by
\begin{align}
\eventmap{\detmap}(A) & = \{\detmap(\omega) | \omega \in A\}, \\
\eventmap{\detmap}^{-1}(B) & = \{\omega \in \Omega | \detmap(\omega) \in B\},
\end{align}
where $A \subseteq \Omega$ and $B \subseteq \Lambda$.
If the mapping $\detmap$ can be thought of as a representation of a deterministic process, Dembski's idea that a deterministic process cannot generate specified information may be expressed in terms of $\detmap$. There are two ways in which specified information can potentially be generated. Either an event $A \subseteq \Omega$, which is not specified, can be mapped onto an event $B \subseteq \Lambda$, which \emph{is} specified. Or a specified event $A$ could potentially be mapped onto an event $B$, such that the specification of $B$ has lower probability than the specification of $A$. It must also be made clear relative to which hypotheses (or probability distributions) the events $A$ and $B$ should be specified in Dembski's sense. To determine this, we first consider two elements $\omega \in \Omega$ and $\lambda \in \Lambda$, such that $\lambda = \detmap(\omega)$. Let $h$ be an hypothesis that fixes a particular, but arbitrary, probability measure $P_h$ on $\Omega\times \knowset$. Then the probability of the transformation $\omega \stackrel{\detmap}{\mapsto} \lambda$ is of course $P_h(\omega)$. The probability that \emph{any} randomly chosen element in $\Omega$ will be mapped onto $\lambda$ is given by $Q_h^\detmap(\lambda) = P_h(\eventmap{\detmap}^{-1}(\lambda))$. This is easily generalized to arbitrary events and we have
\be \label{EQ_QhphionOMEGA}
Q_h^\detmap(B) = P_h(\eventmap{\detmap}^{-1}(B)).
\ee
Thus, any hypothesis $h$ together with any mapping $\detmap : \Omega \to \Lambda$ will determine a probability measure $Q_h^\detmap(B)$ on $\Lambda$. We may therefore consider the original hypothesis $h$ and the mapping $\detmap$ to be a ``partial hypothesis''. However, to enable the use of Dembski's definitions, a probability measure on the larger set $\Lambda \times \knowset$ is required. There is no mathematical reason to choose any particular way to extend \rf{EQ_QhphionOMEGA}, but it seems natural to try the following assumption.
\begin{assumption}
The correct way to extend $Q_h^\detmap$ from a probability distribution on $\Lambda$ to the larger set $\Lambda \times \knowset$ is through
\be \label{EQ_Qhphi}
Q_h^\detmap(B,K) = P_h(\eventmap{\detmap}^{-1}(B),K).
\ee
\end{assumption}
This also seems like the extension that is most charitable to Dembski. It is not clear to me exactly which assumptions about $\Omega$, $\Lambda$, $\knowset$ and $\detmap$ are hidden in \rf{EQ_Qhphi}. Below I will assume that the extension is done in some unique way and index hypotheses conferring a probability distribution on $\Lambda \times \knowset$ by the original hypothesis $h$ and the mapping $\detmap$.
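The induced measure $Q_h^\detmap$ is just the pushforward of $P_h$ along $\detmap$, which is easy to illustrate on a finite toy space. All names and numbers below are my own inventions for illustration.

```python
# Invented sketch of the induced ("pushforward") measure: given a
# probability table P_h on a finite Omega and a deterministic map
# phi: Omega -> Lambda, we have Q_h^phi(B) = P_h(phibar^{-1}(B)).

def preimage(phi, B, Omega):
    """phibar^{-1}(B) = {w in Omega | phi(w) in B}."""
    return {w for w in Omega if phi(w) in B}

def pushforward(P, phi, B, Omega):
    """Q_h^phi(B) = P_h(phibar^{-1}(B))."""
    return sum(P[w] for w in preimage(phi, B, Omega))

Omega = {0, 1, 2, 3}
P = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
phi = lambda w: w % 2          # maps Omega onto Lambda = {0, 1}

assert abs(pushforward(P, phi, {0}, Omega) - 0.4) < 1e-12      # P({0, 2})
assert abs(pushforward(P, phi, {0, 1}, Omega) - 1.0) < 1e-12   # P(Omega)
```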
\subsection{Is it possible to map unspecified events to specified events?}
Let us now consider the first of the two conceivable ways for a deterministic process to generate specified information. Let $(B,T)$ be $h$-specified information. That is, $T = \{\lambda \in \Lambda | g(\lambda) \leq \delta\} \supseteq B$ is a rejection region, there exists a $K' \subseteq \knowset$ such that $\kid_{\Lambda}(g,K')$, and $Q_h^\detmap(B) = Q_h^\detmap(B|K')$. Is it possible to find an $A \subseteq \Omega$, for which no valid $h$-specification exists and such that $\eventmap{\detmap}(A) = B$? The answer to this question requires knowledge of the relation $\kid_{\Omega}$ and of $P_h(A,K)$. I suggest that the answer is \emph{no}, based on the following informal appeal to the reader's intuition: If the relation $\kid_{\Omega}(f,K)$ is meaningful at all, then surely it should be true that any mathematically minded person has knowledge $K$ that ``explicitly and univocally'' identifies the function $f(\omega) \equiv 0$, i.e. $\kid_{\Omega}(f,K)$. Whether it holds that $P_h(A) = P_h(A|K)$ will of course depend on $h$, but it seems that the case for this is at least as good as for any other function. If the reader accepts this appeal to his or her intuition, then $\Omega = \{\omega \in \Omega | f(\omega) \leq 0\}$ is a valid $h$-specification of $A$. \emph{It then follows that every event is $h$-specified!}
Dembski's answer is also \emph{no}, but he has a slightly more complicated argument in mind. He notes that the rejection function $f = g \circ \detmap$ will define a set $S \subseteq \Omega$, such that $\eventmap{\detmap}(S) = T$. Together with $\eventmap{\detmap}(A) = B \subseteq T$, this implies that $A \subseteq S$. Dembski then seems to suggest that it is self-evident that the $h$-detachability of $g$ also implies the $h$-detachability of $g \circ \detmap$ \cite[p. 153]{DNFL}. That is, $\kid_{\Omega}(g \circ \detmap,K)$ and $P_h(A) = P_h(A|K)$, for some $K$. It seems unlikely to me that Dembski intended that $g \circ \detmap$ is $h$-detachable for all possible mappings $\detmap:\Omega\to\Lambda$. Rather, I believe that his reasoning is most charitably interpreted as restricted to those mappings $\detmap$ that are \emph{known}. I speculate that part of the intended reasoning is the following informal argument: By definition, we must have knowledge that identifies $g$, i.e. $\kid_{\Lambda}(g,K')$ for some $K'$. If we also have knowledge that identifies the deterministic transformation $\detmap$, then this knowledge combined with $K'$ (denote the combination by $K$) should identify the composition $g\circ\detmap$, i.e. $\kid_{\Omega}(g\circ\detmap,K)$. To establish that $P_h(A) = P_h(A|K)$, Dembski appears to suggest that it follows from the fact that the rejection region can be described without reference to $A$; he writes that the rejection region can be ``identified independently'' of the event $A$. However, this is not necessarily the same kind of independence as the statistical independence that is required for $h$-detachability. We can, for instance, speak about the gasoline prices during the latest decade without ever mentioning the world's total population sizes during the same period, but that does not mean that gasoline prices are statistically independent of the total population sizes.
Dembski's attempted argument is of limited value due to its incompleteness and its many unclear aspects. Its applicability is also severely limited if I am correct that it should be interpreted as restricted to known $\detmap$.
\subsection{Can specifications be mapped to less probable specifications?}
The second way for a deterministic process to generate specified information would be to map a specified event to an event with a smaller specification. Dembski's argument here is in fact the same as that described in the preceding section.\footnote{Dembski intended to treat both cases at once, whereas I have split the analysis in two cases for conceptual clarity.} Let us again suppose that $(B,T)$, with $B \subseteq T \subseteq \Lambda$, is $h$-specified information with the rejection function $g$. If $\eventmap{\detmap}(A) = B$, then Dembski thinks that one can construct a valid $h$-specification of $A$ as $S = \eventmap{\detmap}^{-1}(T) = \{\omega \in \Omega | (g \circ \detmap)(\omega) \leq \delta\}$. Furthermore, by definition $Q_h^\detmap(T) = P_h(\eventmap{\detmap}^{-1}(T)) = P_h(S)$, so the probability of the $h$-specified information is unchanged by deterministic transformations (provided that $S$ really is a valid $h$-specification).
Of course, the argument suffers from the flaws noted in the preceding section. Dembski has not really established that there is background knowledge $K$ such that $\kid(g \circ \detmap,K)$ and $P_h(A) = P_h(A|K)$. If the mapping $\detmap$ is not injective, then the latter condition may also have to hold for several events, since $\eventmap{\detmap}(A) = B$ may not uniquely define $A$.
\subsection{A bizarre attempted remedy.}
After having stated what I take to be the arguments described above, Dembski remarks
\begin{quote}
``I have just argued that when a function acts to yield information, what the function acts upon has at least as much information as what the function yields. This argument, however, treats functions as mere conduits of information, and does not take seriously the possibility that functions might add information.'' \cite[p. 154]{DNFL}.
\end{quote}
This is a truly bizarre remark, and it is difficult to understand what it actually means. The only flaw in Dembski's argument is that he has not established the $h$-detachability of $g \circ \detmap$ from all events $A$ such that $\eventmap{\detmap}(A) = B$. Is this an admission of that flaw, or did Dembski have another flaw in mind (one that I must have missed)?
Dembski then tries to repair the real or perceived flaw by considering mappings from pairs of specified information and functions to specified information. More precisely, Dembski considers a ``universal composition function'' $U$, defined by $U(i,\detmap) = \eventmap{\detmap}(i)$, where $i = (A,S) \subseteq \Omega^2$ and we take $\eventmap{\detmap}(i) = (\eventmap{\detmap}(A),\eventmap{\detmap}(S))$, for notational convenience (the reader should be warned that I will change the notation slightly below). It is claimed that
\begin{quote}
``The form of the original argument is therefore left unchanged: the information $j$ arises by applying $U$ (cf. $f$ in the original argument) to the information $(i,f)$ (cf. $i$ in the original argument).'' \cite[p. 155]{DNFL}.
\end{quote}
(Note that Dembski uses $f$ to denote the same mapping that is denoted $\eventmap{\detmap}$ here.) It is not obvious how to interpret these statements, much less how to put them in a rigorous form. Aside from the general vagueness, it is not clear that Dembski's second argument actually improves anything. The one improvement that could be made, a big one, would be to remedy, if possible, the flaws pointed out above, but this new argument does not even seem to address them.
The comment that the original argument is left unchanged leads me to infer that one should primarily study how $U$ maps events in $\Omega \times \Phi$, where $\Phi = \Lambda^\Omega = \{\phi : \Omega \to \Lambda\}$, to events in $\Lambda$, but this violates the framework provided by Dembski. To use the definitions of Dembski's key concepts, e.g. detachability, we are required to first identify a set $\hypset$ of hypotheses, each of which confers a probability measure on the sample space $\Omega \times \knowset$. If a hypothesis $h \in \hypset$ and two events in the sample space $\Omega \times \knowset$ have certain properties, we may then speak of things like $h$-detachability, $h$-specified information, etc. There is no reason why hypotheses in $\hypset$ should also confer a probability measure on $\Omega \times \Phi \times \knowset$. To study this new sample space we need, in general, a new set of hypotheses, $\hypsetB$, conferring probability measures on \emph{this} sample space. For a given hypothesis $h_{\text{new}} \in \hypsetB$ one may then speak of $h_{\text{new}}$-specified information, but this must not be confused with $h$-specified information, $h \notin \hypsetB$ in general, in the original sample space.
Let's examine the situation more formally. Consider a new set of hypotheses $\hypsetB$, each conferring an arbitrary probability measure $P_{h_{\text{new}}}$ on the sample space $\Omega \times \Phi$, where $\Phi = \{\detmap : \Omega \to \Lambda\} = \Lambda^\Omega$. Define the mapping $U(\omega,\detmap) = \detmap(\omega)$ and associate two event mappings by defining
\be
\eventmap{U}(A) = \{\detmap(\omega) | (\omega,\detmap) \in A\}, \quad A \subseteq \Omega \times \Phi
\ee
\be
\eventmap{U}^{-1}(B) = \{(\omega,\detmap) \in \Omega \times \Phi | \detmap(\omega) \in B\}, \quad B \subseteq \Lambda.
\ee
If the outcomes in $\Lambda$ are formed by applying $U$ to outcomes in $\Omega \times \Phi$, a probability measure on $\Lambda$ is fixed. The probability of the transformation $(\omega,\detmap) \stackrel{U}{\mapsto} \detmap(\omega) = \lambda$ is of course $P_{h_{\text{new}}}(\omega,\detmap)$. The probability that $U$ will map \emph{any} element $(\omega,\detmap)$ into $B \subseteq \Lambda$ is then given by
\be
Q_{h_{\text{new}}}^U(B) = P_{h_{\text{new}}}(\eventmap{U}^{-1}(B)), \quad B \subseteq \Lambda
\ee
This probability measure must then be extended in some way to the sample space $\Lambda \times \knowset$, so that background knowledge can be handled.
Now suppose that $(B,T) \subseteq \Lambda^2$ is $h$-specified information, where $h \in \hypset$ is a hypothesis in the \emph{original} set of hypotheses, and that $\eventmap{\detmap}(A) = B$, for some $\detmap$ and $A$. The real issue is whether this can be used to construct a valid $h$-specification of $A$ with probability $\leq Q_h^\detmap(T)$, but Dembski seems to recognize that this is not always possible. Instead he seems to try to move the goalposts by considering the irrelevant question of whether one can always construct a valid $h_{\text{new}}$-specification of an event like $A \times \{\detmap\}$ (where $h_{\text{new}} \in \hypsetB$ belongs to the new set of hypotheses). Perhaps Dembski thinks that $S = \eventmap{U}^{-1}(T)$ is always a valid $h_{\text{new}}$-specification of $A \times \{\detmap\}$, but that would be to confuse two different kinds of specified information. If a deterministic transformation like $\omega \stackrel{\detmap}{\mapsto} \lambda$ generates $h$-specified information then that means just that. Such a fact would not be affected at all by whether or not the transformation $(\omega,\detmap) \stackrel{U}{\mapsto} \lambda$ generates $h_{\text{new}}$-specified information.
I challenge ID advocates, Dembski in particular, to explain in mathematical detail how the argument involving the universal composition function is supposed to work. As it is presented in \cite[pp. 154--155]{DNFL} it is nonsense.
\section{Specificational and Probabilistic Resources}
Before dealing with the stochastic case it is useful to know two additional concepts defined by Dembski. The first of these is \emph{specificational resources}. This concept is supposed to capture how many opportunities for specifications there are. The basic idea is that specified information is more remarkable if valid specifications are rare than if they are abundant. Dembski's (quasi-)formal way to deal with this idea is to introduce a ``complexity measure'', which measures how complicated it is for a particular user of Dembski's method to come up with a specification of an event. Let $\varphi(R)$ denote a quantitative measure of how complicated it is for a user to formulate the specification $R$, i.e. to find the corresponding rejection function. In \cite[p. 76]{DNFL}, Dembski asserted that the measure $\varphi$ is objectively given relative to the user and ``determined up to monotonic transformations''. If the user of his method for inferring ID may be thought of as a universal Turing machine, then Dembski would likely take $\varphi(R)$ to be the Kolmogorov complexity of $R$. It is unclear how one should choose $\varphi$ if the user may not be thought of as a universal Turing machine.
\begin{definition}[Specificational resources]
Given an event $A \subseteq \Omega$ and a rejection region $R$ that is a valid $h$-specification of $A$ for all $h \in \hypset$, the \emph{specificational resources} are defined as the set of events $T \subseteq \Omega$ that satisfy the following conditions:
\begin{enumerate}
\item a rejection function for $T$ is $h$-detachable from $A$ for all $h \in \hypset$,
\item $\max_{h \in \hypset} P_h(T) \leq \max_{h \in \hypset} P_h(R)$,
\item $\varphi(T) \leq \varphi(R)$, and
\item $T$ is not a proper subset of any other event that satisfies the previous criteria.
\end{enumerate}
The set of specificational resources will be denoted $\specres_{\hypset}(A,R)$. \cite[p. 77]{DNFL}
\end{definition}
The idea is that since
\be
P_h\left(\bigcup_{T \in \specres_{\{ h \}}(A,R)} T \right) \leq \sum_{T \in \specres_{\{ h \}}(A,R)} P_h(T) \leq P_h(R) |\specres_{\{ h \}}(A,R)|,
\ee
specifications with probability $\max_{h \in \hypset} P_h(R) \ll |\specres_{\hypset}(A,R)|^{-1}$ are rare, unlikely, and remarkable.
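The chain of inequalities above is the ordinary union bound, which can be checked on an invented toy family of rejection regions standing in for $\specres_{\{h\}}(A,R)$ (the sets and the equiprobable measure below are my own illustration):

```python
# Toy numeric check of P(union T) <= sum P(T) <= P(R) * |S| for a small
# invented family of regions under an equiprobable measure.
from fractions import Fraction

Omega = range(12)
P = lambda T: Fraction(len(T), len(Omega))   # equiprobable measure

R = {0, 1, 2}                                # the chosen specification
S = [{0, 1, 2}, {3, 4, 5}, {4, 5, 6}]        # resources, each P(T) <= P(R)

union = set().union(*S)
assert P(union) <= sum(P(T) for T in S) <= P(R) * len(S)
```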
However, even very unlikely events can become likely to occur at least once if a large number of trials is performed. To take this into account, one can multiply the specificational resources by the number of trials.
\begin{definition}[Probabilistic resources]
Given an event $A \subseteq \Omega$ and a rejection region $R$ that is a valid $h$-specification of $A$ for all $h \in \hypset$, the \emph{probabilistic resources} are given by the number $\probres_{\hypset}(A,R) = |\specres_{\hypset}(A,R)|\cdot n$, where $n$ is the number of trials.
\end{definition}
The probability that at least one event that fits a specification in $\specres_{\hypset}(A,R)$ occurs is less than $\max_{h \in \hypset} P_h(R)\probres_{\hypset}(A,R)$, provided that the trials are statistically independent (and that they occur according to one of the hypotheses in $\hypset$). Therefore, Dembski seems to think, an event of probability $\ll \probres_{\hypset}(A,R)^{-1}$ should definitely be rare, unlikely, and inexplicable in terms of the hypotheses in $\hypset$. Indeed, Dembski's central claim is that specified events with such a low probability are infallible indications that the hypotheses $\hypset$ should be rejected in favour of the general ID hypothesis that some intelligent agent was somehow involved in the realization of the event (or at least some hypothesis not in $\hypset$).
There are a few problems with this reasoning, though. First, the condition that elements in $\specres_{\hypset}(A,R)$ must be $h$-detachable from $A$ with respect to all $h \in \hypset$ can seriously underestimate the opportunities for choosing specifications. For instance, suppose we have $\hypset = \{h_1,h_2\}$ and rejection functions $f_1,f_2,\ldots,f_{100}$ that are all different and identified by background knowledge. In this case it is possible that $f_k$ is $h_1$-detachable from $A$ for $1 \leq k \leq 50$ and $h_2$-detachable from $A$ for $50 \leq k \leq 100$. This implies that $\specres_{\hypset}(A,R) = \{R\}$, where $R$ must be the rejection region defined by $f_{50}$, yet there is obviously more than one way to choose a specification. Dembski's error is that he considered only specifications that are valid for \emph{all} $h \in \hypset$, rather than those that are valid for \emph{at least one} $h \in \hypset$.
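The counting error can be illustrated directly; the detachability pattern below is the invented one from the example above, not anything taken from Dembski's text:

```python
# Invented illustration: with H = {h1, h2} and rejection functions
# f_1,...,f_100, suppose f_k is h1-detachable for k <= 50 and
# h2-detachable for k >= 50.  Counting specifications valid for *all*
# h gives 1; counting those valid for *at least one* h gives 100.

detachable = {('h1', k): k <= 50 for k in range(1, 101)}
detachable.update({('h2', k): k >= 50 for k in range(1, 101)})

valid_for_all = [k for k in range(1, 101)
                 if detachable[('h1', k)] and detachable[('h2', k)]]
valid_for_some = [k for k in range(1, 101)
                  if detachable[('h1', k)] or detachable[('h2', k)]]

assert valid_for_all == [50]       # Dembski's count: only f_50 survives
assert len(valid_for_some) == 100  # the actual opportunities
```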
Second, there is another way in which the specificational resources underestimate the number of available specifications. Consider, for example, a random number generator that is known to generate integers between 0 and 999 with the same probability for each number. The sample space is then $\Omega = \{k \in \mathbb{N} | 0 \leq k \leq 999\}$. Let $h_{\text{ep}}$ be the hypothesis that the outcomes are equiprobable and take $\hypset = \{ h_{\text{ep}} \}$. Suppose that all rejection functions of the form
\be
f_k(\omega) = \begin{cases}
0 & \text{if $\lfloor \omega / 5 \rfloor = k$}, \\
1 & \text{otherwise},
\end{cases}
\ee
$0 \leq k \leq 199$, are $h_{\text{ep}}$-detachable from all events. Suppose further that no other rejection function is $h_{\text{ep}}$-detachable from any event. Then the rejection regions\footnote{Note that $\{0,1,2,3,4\} = \{\omega \in \Omega | f_0(\omega) \leq 0\}$, etc. and $\Omega = \{\omega \in \Omega | f_{17}(\omega) \leq 1\}$, so these events are rejection regions.} $\{0,1,2,3,4\}, \{5,6,7,8,9\}, \ldots, \{995,996,997,998,999\}, \Omega$ are valid $h_{\text{ep}}$-specifications of any event that is a subset of them. If the event $A = \{0\}$ is observed, we then have the $h_{\text{ep}}$-specification $T = \{0,1,2,3,4\}$. Furthermore, $\specres_{\hypset}(A,T) = \{T\}$. Since
\be
P_{h_{\text{ep}}}(T) = \frac{1}{200} \ll 1 = |\specres_{\hypset}(A,T)|^{-1},
\ee
one might conclude that $A$ is a remarkable event that warrants the conclusion that $h_{\text{ep}}$ should be rejected in favour of the general ID hypothesis. However, this would clearly be an unjustified conclusion since the same argument could be made for \emph{any} outcome. This counter-example illustrates a deficiency in the concept of specificational resources.
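The deficiency can be made quantitative. The 200 rejection regions $\{0,\ldots,4\}, \ldots, \{995,\ldots,999\}$ partition the sample space, so
\be
\sum_{k=0}^{199} P_{h_{\text{ep}}}(\{5k, 5k+1, \ldots, 5k+4\}) = 200 \cdot \frac{1}{200} = 1.
\ee
Every single outcome in $\Omega$ is thus covered by a specification of probability $1/200$, which is precisely why no outcome can be deemed remarkable on this basis.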
While the strategy of dealing with many (statistically independent) trials by multiplying the specificational resources by the number of trials serves its purpose well, the concept of specificational resources does not adequately make up for the fact that the specification is chosen after the fact to fit, if possible, the observed event very tightly. The specificational resources are in a sense too tightly linked to the event that is actually observed. A more useful quantity to know would be the probability that an \emph{outcome} is part of an event that fits an $h$-specification $T$ satisfying $P_h(T) \leq p$ and $\varphi(T) \leq c$, for given $p$ and $c$. Let's denote this quantity by $s(p,c,h)$. That is, let
\be
s_\omega(p,c,h) = \begin{cases}
1 & \text{if $\omega$ is contained in some event with an $h$-specification $T$} \\
  & \text{satisfying $P_h(T) \leq p$ and $\varphi(T) \leq c$}, \\
0 & \text{otherwise}.
\end{cases}
\ee
Then we have
\be
\label{EQ_SPROB}
s(p,c,h) = \sum_{\omega \in \Omega} P_h(\omega)s_\omega(p,c,h).
\ee
Given an observed event $A$ with an $h$-specification $T$, quantities like
\begin{displaymath}
s(P_h(T),\varphi(T),h) \quad \text{and} \quad s(P_h(T),\infty,h)
\end{displaymath}
are far more interesting and useful than the specificational resources $\specres_\hypset(A,T)$ that Dembski wants to study. Contrary to Dembski's thinking, there is actually little basis for considering $h$-specifications of probability $\ll |\specres_\hypset(A,T)|^{-1}$ to be remarkable, because they may well be abundant and together cover large portions of the sample space.
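To see how the quantity defined by \rf{EQ_SPROB} repairs this, apply it to the random number generator example above. Every $\omega \in \Omega$ lies in some rejection region $T_k$ of probability $1/200$, so $s_\omega(1/200,\infty,h_{\text{ep}}) = 1$ for all $\omega$, and hence
\be
s(1/200,\infty,h_{\text{ep}}) = \sum_{\omega \in \Omega} P_{h_{\text{ep}}}(\omega) = 1,
\ee
which immediately exposes even the tight specification $T = \{0,1,2,3,4\}$ as completely unremarkable.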
\subsection{The universal probability bound.}
Let's ignore the deep conceptual flaws in the concepts of probabilistic and specificational resources for the moment and consider a related concept. Determining the number of specificational resources in practice is a formidable task (assuming that the concept is well-defined), so Dembski tries to make life easier by providing what is intended to be a safe, universal overestimate. To do this, he notes that the number of elementary particles in the \emph{directly observable} universe is estimated at $10^{80}$. Furthermore, he assumes that the age of the universe is less than $10^{25}$ seconds, or about $3 \cdot 10^{17}$ years. He reasons that no transition can happen faster than the Planck time and arrives at the probabilistic resources
\be
\alpha^{-1} = \frac{10^{80} \cdot 10^{25} \text{ sec}}{t_{\text{Planck}}} = 10^{150}.
\ee
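Note that the arithmetic comes out at exactly $10^{150}$ only if the Planck time (about $5.4 \cdot 10^{-44}$ seconds) is rounded down to $10^{-45}$ seconds, in which case the exponents simply add:
\be
\alpha^{-1} = 10^{80} \cdot \frac{10^{25}}{10^{-45}} = 10^{80+25+45} = 10^{150}.
\ee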
At least two things should be noted. First, the universe is much bigger than the part that is directly observable. Dembski is aware of some of the current views of cosmologists and astronomers regarding this, so it is not a direct oversight \cite[pp. 84--90]{DNFL}. However, he has seriously misunderstood the motivation behind cosmological and astronomical models. Cosmologists and astronomers are more than anything else trying to formulate descriptive models of the universe. In order to describe and explain features of our universe, a model of a universe that is bigger than the directly observable universe seems to be required (in the same sense that Lorentz invariance, at least at low energies, seems to be required of any reasonable model to describe and explain the Michelson-Morley experiments and other relativistic effects). To the best of my knowledge, no model that postulates that only the directly observable universe exists can satisfactorily account for as many astronomical and cosmological observations as our current models can account for. It appears, however, that Dembski considers cosmological models to be nothing more than desperate attempts to avoid the conclusion that God created the universe. This misunderstanding of scientists' motivation leads him to dismiss the idea that the universe is larger than the directly observable part as unfounded. In reality, it is a straightforward conclusion from the present cosmological models. For instance, it has been argued, based on the inflationary model, that the number of regions of about the size of our own directly observable region is infinite, while the number of distinct possible histories in such a region is finite \cite{OWORLDS}. Another example is the recently proposed competitor to the inflationary model that describes a cyclic universe that may be infinitely old \cite{CYCLIC}. 
Nonetheless, the primary motivation behind Dembski's ideas is to modify the present theories of abiogenesis and biological evolution. For this purpose it is reasonable to ignore cosmological considerations (at present there is no serious theory of abiogenesis or biological evolution that makes use of cosmological factors).
The second thing to note is that Dembski has not provided any convincing reason for interpreting $\alpha^{-1}$ as a safe overestimate of the number of specificational resources, even for specific events in the directly observable universe. A number of concerns can be raised: how are the probabilities and ``complexity measures'' of the potential specifications taken into account, why can't more than one specification per particle arise in a time step, and so on. The burden is on Dembski to show that $\alpha^{-1}$ really is an overestimate.\footnote{I also note an internal inconsistency in Dembski's view of intelligence. In his framework, intelligence is usually treated as something so powerful and mystical that it cannot be analyzed rationally. Hence the privileged position that hypotheses involving intelligence occupy in the framework (such hypotheses are not accepted or rejected based on their merits or lack thereof---they are default explanations to be invoked whenever we cannot think of a specific alternative). In attempting to count specifications, however, Dembski assumes that the ability of an intelligent mind to formulate specifications is reducible to interactions of elementary particles. While he is certainly free to make such a materialistic assumption, he should do so consistently. In response to criticism, Dembski wrote: ``We're dealing with intelligences and not with natural laws, regularities, or algorithms. It takes creativity, for instance, for a detective to see a pattern that incriminates a clever villain, and that creativity cannot be captured by `well-defined methods.' '' \cite[p. 3]{DEMREP2}. Yet when it comes to counting specifications, intelligence is suddenly reducible to particle physics.}
One can imagine a more sophisticated argument, though. It has been argued that our observable universe has performed about $10^{120}$ operations on at most $10^{90}$ bits \cite{COMPCAP}. Assuming that the user's knowledge $\knowset$ is encoded by these bits and that the same knowledge can identify at most one rejection function, it is reasonable to conclude that at most $10^{90}$ rejection functions can be identified.\footnote{Or should it be $2^{10^{90}}$ rejection functions?} If only the brain of the user of Dembski's method is considered, the number will be much lower. Thus, it seems that Dembski's argument, while flawed, actually yielded a good conclusion. It seems to me that $\alpha^{-1}$ really is a reasonable overestimate of the specificational resources for non-astronomical purposes.
It must not be forgotten, though, that the concept of specificational resources itself is \emph{not} an adequate tool for determining whether a specification is remarkable. Fortunately, Dembski's {\it non sequitur} can be fixed if the concept of specificational resources is replaced by the degree to which specifications cover the sample space. Note that if at most $N_{\text{rf}}$ rejection functions are ``explicitly and univocally'' identified by background knowledge, the degree to which the sample space is covered by $h$-specifications of probability $\leq p$ is bounded by
\be
s(p,\infty,h) \leq p N_{\text{rf}}.
\ee
If $N_{\text{rf}} = 10^{90}$ is used, it follows that $h$-specifications of probability $\ll 10^{-90}$ should be rare (assuming that events really occur with the frequencies dictated by $P_h$).
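The bound is simply the union bound. Every outcome with $s_\omega(p,\infty,h) = 1$ lies in some $h$-specification of probability at most $p$, each such specification is a rejection region of one of the at most $N_{\text{rf}}$ identified rejection functions, and the sub-$p$ rejection regions of a single rejection function are nested and hence jointly of probability at most $p$. Writing $T_i$ for the union of the rejection regions of the $i$th rejection function that have probability at most $p$ (notation introduced here for convenience),
\be
s(p,\infty,h) \leq P_h\Bigl(\bigcup_{i=1}^{N_{\text{rf}}} T_i\Bigr) \leq \sum_{i=1}^{N_{\text{rf}}} P_h(T_i) \leq p N_{\text{rf}}.
\ee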
In conclusion, the universal probability bound is an acceptable one for biological purposes, but Dembski's reasoning leading to this quantity suffers from serious conceptual errors.
\subsection{CSI}
One more concept remains before it is time to investigate Dembski's argument for stochastic transformations.
\begin{definition}[Complex $h$-Specified Information or $h$-CSI]
Any $h$-specified information $(A,S)$ with a probability $P_h(S) < \alpha$ constitutes \emph{complex $h$-specified information} or \emph{$h$-CSI}.
\end{definition}
For rhetorical reasons, Dembski prefers to transform probabilities of specifications by taking the negative base-two logarithm and to refer to the resulting number as the ``complexity'' (or ``information'') of the event.\footnote{Dembski mistakenly states that ``$-\log_2\mathbf{P}(A)$ measures the average number of bits required to specify an event with that probability'' \cite[p. 140]{DNFL}. The statement is in fact meaningless since there is nothing to average over. I believe part of the reason for this mistake is that Dembski doesn't adequately distinguish between outcomes and events. It is true that if outcomes are encoded by bit strings, then $-\log_2(p)$ is a good guess for how many bits are needed to encode an outcome of probability $p$ in the encoding that minimizes the average code word length. This, however, cannot be directly generalized to events.} The quantity $\alpha$ is transformed to $-\log_2(\alpha) \approx 500$ in this way, and therefore Dembski often refers to the 500-bit threshold for something to constitute $h$-CSI (the quantity ``500 bits'' must not be confused with 500 binary digits---in this case the unit serves no purpose except to indicate that logarithms were taken to base two).
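The arithmetic behind the threshold is simply
\begin{displaymath}
-\log_2(\alpha) = -\log_2(10^{-150}) = 150 \log_2(10) \approx 150 \cdot 3.32 \approx 498 \approx 500 \text{ bits}.
\end{displaymath}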
\section{The Law of Conservation of Information: The Stochastic Case}
When it comes to determining whether or not stochastic transformations can generate $h$-specified information, Dembski has little to add to his argument for the deterministic case. First, he admits that stochastic processes can generate some $h$-specified information, but asserts that $h$-specified information improbable enough to constitute $h$-CSI cannot be generated. To support this he notes that any stochastic transformation may be viewed as a two-step process without any loss of generality: first, a \emph{deterministic transformation} is chosen randomly according to a probability function $\rho$; then this mapping is applied to the outcome $\omega \in \Omega$ in question. But Dembski thinks that he has established that no $h$-specified information can be generated by \emph{any} deterministic transformation, and therefore simply refers back to his argument with the ``universal composition function'' for the deterministic case. The argument is, after all, that if no deterministic transformation can generate $h$-specified information, then it won't matter which deterministic mapping is chosen in the first step. This argument, however, contradicts Dembski's admission that stochastic transformations can generate a little $h$-specified information. If Dembski's argument were correct, then stochastic transformations would never generate any $h$-specified information at all.
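The two-step decomposition can be written out explicitly. Writing $\Phi$ for a set of deterministic transformations and $\rho$ for a probability function on $\Phi$ (notation introduced here; Dembski's own presentation is informal), a stochastic transformation $\psi$ sends the outcome $\omega$ into an event $B$ with probability
\be
P(\psi(\omega) \in B) = \sum_{\detmap \in \Phi} \rho(\detmap) \, \mathbf{1}_B(\detmap(\omega)),
\ee
where $\mathbf{1}_B$ denotes the indicator function of $B$. Dembski's argument amounts to claiming that the deterministic maps $\detmap$ individually cannot generate $h$-specified information, and hence that the mixture over $\rho$ cannot either.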
\section{Conclusion}
In the preceding sections the argument used by Dembski to establish the so-called Law of Conservation of Information has been investigated mathematically. I have intentionally granted Dembski's philosophical assumptions about the nature of knowledge and the privileged status of the general ID hypothesis in order to focus on the mathematical parts of the argument. From his own words it seems that he considers his argument to have a status close to that of a mathematical proof. Indeed, he wrote:
\begin{quote}
``In this section I will present an in-principle mathematical argument for why natural causes are incapable of generating complex specified information.'' \cite[p. 150]{DNFL}
\newline \newline
``Justifying the claim that natural causes cannot generate complex specified information is technically demanding. Before justifying this claim mathematically, let me therefore try to spell out in plain English why natural causes are not up to the task of generating CSI.'' \cite[p. 150]{DNFL}
\end{quote}
Such statements indicate that Dembski considers his argument to be, if not a mathematical proof, then at least a fairly rigorous mathematical argument. However, a mathematical investigation of Dembski's quasi-formalization reveals that the argument is far from rigorous. It suffers from several conceptual flaws, some of which are reparable and some of which seem fatal. The main flaws are:
\begin{enumerate}
\item In the original argument for the deterministic case, Dembski assumes that if the rejection function $f$ on $\Lambda$ is detachable, then so is the rejection function $f \circ \detmap$ on $\Omega$. But this is by no means established.
\item The argument involving the ``universal composition function'' confuses two different kinds of specified information.
\item It is possible for a number of improbable specifications to together cover a very large portion of the sample space. Given an event $A$ and an $h$-specification $T$, nothing guarantees that $s(P_h(T),\varphi(T),h)$ (as defined by \rf{EQ_SPROB}) is low just because $P_h(T) \ll |\specres_\hypset(A,T)|^{-1}$. The concept of specificational resources therefore does not adequately serve its purpose, and it should be replaced by the quantity defined by \rf{EQ_SPROB}.
\item Dembski admits that stochastic transformations can generate some specified information (but not enough to constitute CSI). However, if his own argument were correct, it would be impossible for stochastic transformations to generate any specified information at all.
\end{enumerate}
The Law of Conservation of Information must therefore be regarded as mathematically unsubstantiated. Regardless of the non-mathematical virtues, or lack thereof, of Dembski's argument, ambiguities and equivocation make it entirely inadequate as a mathematical argument.
\subsection{Acknowledgement}
I thank Mark Perakh for useful comments on earlier drafts.
\begin{thebibliography}{9}
\bibitem{DNFL}
Dembski, William A.
\newblock (2002)
\newblock ``No Free Lunch'',
\newblock {\it Rowman \& Littlefield}
\bibitem{DEMREP2}
Dembski W.
\newblock (2002)
\newblock ``The Fantasy Life of Richard Wein: A Response to a Response'', \\
\newblock http://www.iscid.org/papers/Dembski\_WeinsFantasy\_060702.pdf
\bibitem{ES_REFU}
Elsberry W. \& Shallit J.
\newblock (2002)
\newblock ``Information Theory, Evolutionary Computation, and Dembski's `Complex Specified Information'~'',
\newblock manuscript submitted for publication
\bibitem{FALK}
Falk A.
\newblock (2001)
\newblock ``Can We Be Darwinians and Religious?'',
\newblock {\it Quarterly Review of Biology},
\newblock {\bf 76}(1) : 47--52
\newblock (book review)
\bibitem{OWORLDS}
Garriga J. \& Vilenkin A.
\newblock (2001)
\newblock ``Many worlds in one'', \\
\newblock http://xxx.lanl.gov/abs/gr-qc/0102010
%\newblock {\it Physical Review D},
%\newblock {\bf 64}(?):043511
\bibitem{COMPCAP}
Lloyd S.
\newblock (2002)
\newblock ``Computational Capacity of the Universe'',
\newblock {\it Physical Review Letters},
\newblock {\bf 88}(23) : 237901
\bibitem{CYCLIC}
Steinhardt P. \& Turok N.
\newblock (2002)
\newblock ``A Cyclic Model of the Universe'',
\newblock {\it Science},
\newblock {\bf 296}(5572) : 1436--1439
\bibitem{W_REFU}
Wein R.
\newblock (2002)
\newblock ``Not a Free Lunch But a Box of Chocolates'', \\
\newblock http://www.talkorigins.org/design/faqs/nfl/
\end{thebibliography}
\end{document}