THE NO FREE LUNCH THEOREMS AND THEIR APPLICATIONS TO EVOLUTIONARY ALGORITHMS
By Mark Perakh
Posted on January 6, 2003
H. Allen Orr published a very eloquent critique (Orr 2002) of Dembski’s book No Free Lunch (Dembski 2002a). There are many fine points in Orr’s critique elucidating inconsistencies and unsubstantiated assertions by Dembski. Among his critical comments Orr emphasizes what is, in his view, the most serious flaw in Dembski’s position, namely Dembski’s use of the No Free Lunch theorems as allegedly proving the impossibility of Darwinian evolution. Orr writes about the NFL theorems: “The NFL theorems compare the efficiency of evolutionary algorithms; roughly speaking, they ask how often different search algorithms reach a target within some number of steps.” Later Orr explains why Dembski’s use of the NFL theorems is wrong: “The problem with all this is so simple that I hate to bring it up. But here goes: Darwinism isn't trying to reach a prespecified target.” He adds, “Evolution isn't searching for anything and Darwinism is not therefore a search algorithm.”
While Orr’s critical remarks are essentially well taken, their formulation was insufficiently careful from a mathematician’s viewpoint, so it may create a distorted impression of what Orr had in mind. Indeed, in a private communication Orr, who is a biologist, explained his actual views on the application of the NFL theorems to Darwinian evolution, and these views are essentially in accord with a proper interpretation of the theorems. Since, however, the rendition of that problem in Orr’s review of Dembski’s book may confuse unprepared readers, I will try to clarify it by first explaining, in as simple a way as possible, the NFL theorems, and second, briefly discussing their misapplication by Dembski.
Contrary to Dembski’s assertions, the NFL theorems do not at all prohibit Darwinian evolution (or make evolutionary algorithms in general incapable of outperforming a random sampling, which is Dembski’s position). Orr is correct in stating that, but his argument requires a slight clarification.
Dembski replied to Orr twice. In his first, relatively brief reply (Dembski 2002b) he did not even mention the remark that Orr offered as his main reason for rejecting Dembski’s position. In his second, very lengthy response (Dembski 2002c) Dembski discussed at length a myriad of points having little to do with Orr’s main critical comment, and only toward the end of the response did he briefly address that point. Dembski’s rebuttal of Orr’s comment essentially boils down to the assertion that Darwinian evolution is, after all, a targeted process. He writes, “Darwinism is at least in part about finding specified targets (albeit not prespecified targets)…”
Since the difference between targets merely “specified” and “prespecified” does not convert them into “non-targets,” this statement contradicts Dembski’s earlier assertions, such as the one found on page 193 of his book (2002a): “Not only does this algorithm introduce a teleology foreign to the natural world (and certainly to the Darwinian mechanism) but it also introduces a teleology foreign to the evolutionary algorithms actually used by working computer scientists.” (Italics are mine. MP). Either Darwinian evolution has targets (i.e. is teleological) or it does not. The very concept of a target entails its pre-selection. What a target that is not preselected might be remains one of Dembski’s secrets.
It looks as if, in his quest to rebut Orr’s critique, Dembski resorted to notions that, first, contradict his own previously stated position and, second, are fallacious, since Darwinian evolution, as Orr correctly states, is certainly a non-targeted process. On the other hand, Dembski (and all of his admirers and supporters) have so far missed the genuinely vulnerable point in Orr’s wording, which may be construed as misinterpreting certain features of the NFL theorems.
Dembski’s conclusions from the NFL theorems are indeed unsubstantiated since these theorems, contrary to Dembski’s assertion, in no way mean that evolutionary algorithms cannot outperform a random sampling. To see that this is indeed so, let us briefly discuss the NFL theorems (Wolpert and Macready 1997). I’ll do this without using mathematical symbolism.
Imagine two search algorithms conducting a search on a given fitness landscape. They move from point to point over the search space (choosing the search points either at random or in a certain order). Each algorithm conducts a certain number of iterations. After having performed, say, m measurements, an algorithm produces what Wolpert and Macready call a “sample.” A sample is in fact simply a table wherein the values of the fitness function at each of m search points are listed.
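To make the notion of a sample concrete, here is a minimal Python sketch (all names in it are mine, chosen for illustration only): an algorithm is modeled as a rule that picks the next point to probe, and its sample after m iterations is simply the table of measured fitness values.

    import random

    def run_search(next_point, fitness, m):
        """Run a search for m iterations and return the 'sample': a table
        listing the fitness value measured at each of the m probed points."""
        sample = []
        visited = set()
        for _ in range(m):
            point = next_point(sample, visited, len(fitness))
            visited.add(point)
            sample.append((point, fitness[point]))
        return sample

    def random_search(sample, visited, n):
        """One possible algorithm: probe a not-yet-visited point at random."""
        return random.choice([p for p in range(n) if p not in visited])

    # A toy fitness landscape: the fitness value at each of 8 search points.
    fitness = [3, 1, 4, 1, 5, 9, 2, 6]
    print(run_search(random_search, fitness, m=3))
    # e.g. [(6, 2), (0, 3), (4, 5)] -- a table three rows long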
Let us ask the question: if any two algorithms have searched the same number of points, will the two samples they produce be identical? The answer can only be given in probabilistic terms. Generally speaking, the samples cannot be expected to be identical for any two arbitrarily chosen algorithms. The probability of algorithm A producing a specific table which is m rows long is different from the probability of algorithm B producing the same table after the same number of iterations.
Now enter the first NFL theorem. It asserts that if the results of two algorithms’ searches are compared not for a specific fitness landscape but averaged over all possible landscapes, the probabilities of obtaining the same sample, i.e. the same table m rows long, are equal for any pair of algorithms. Remember that the quantity being averaged is the probability of an algorithm’s generating a given sample.
This is a literally exact translation into plain words of the statement of the first NFL theorem from its mathematically symbolic form.
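For readers who wish to see that symbolic form: in the notation of Wolpert and Macready (1997), where d_m^y denotes the sample (the table of m measured fitness values), f is a fitness function, and a_1 and a_2 are any two algorithms, the first NFL theorem states

    \sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right),

with the sum running over all possible fitness functions f. The averaging mentioned above is exactly this summation over all f.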
The NFL theorems do not restrict the value of m – the number of iterations – in any way. The size of a table which constitutes a sample may be as small or as large as one may choose. There is no condition that the search stops when a certain pre-selected number of iterations have been completed, or when a pre-selected value of the fitness function has been found. The NFL theorems are valid regardless of how many times the algorithms probe the search space, or what the measured values of the fitness function happen to be.
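In fact, the averaging claim can be checked exhaustively on a toy problem. The following sketch (mine, purely for illustration) enumerates every fitness function on a three-point search space with fitness values in {0, 1}, and every deterministic non-revisiting algorithm on that space, i.e. every fixed visiting order. For each algorithm it counts how often each possible sample arises across all landscapes; the counts come out identical for every algorithm, whatever m one picks.

    from collections import Counter
    from itertools import permutations, product

    points = [0, 1, 2]
    landscapes = list(product([0, 1], repeat=3))  # all 8 fitness functions
    orders = list(permutations(points))           # all 6 deterministic algorithms

    m = 2  # any m from 1 to 3 gives the same outcome
    for order in orders:
        # The sample this algorithm produces on landscape f is the tuple of
        # fitness values at the first m points it visits.
        counts = Counter(tuple(f[p] for p in order[:m]) for f in landscapes)
        print(order, sorted(counts.items()))
    # Every line prints the same distribution: each of the four possible
    # samples (0,0), (0,1), (1,0), (1,1) arises on exactly 2 of the 8
    # landscapes, no matter which visiting order the algorithm uses.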
As we see, the concept of a target is absent from the NFL theorems. However, the NFL theorems do not forbid algorithms from being target-oriented either. These theorems are indifferent to whether or not an algorithm has a target.
Since the NFL theorems are often discussed in terms of algorithms’ performance, what is the relation between the literal meaning of the NFL theorems and the algorithms’ “performance”? Here we enter the realm of consequences and interpretations of the NFL theorems.
One of the corollaries relates to what is called a performance measure. The concept of a performance measure is not part of the NFL theorems as such; it is introduced within the framework of consequences of the NFL theorems. The NFL theorems are usually (and reasonably) interpreted in a way allowing a wide latitude in the choice of performance measures, and they remain valid regardless of that choice. In particular, whereas the NFL theorems as such do not incorporate the concept of a target of a search, a performance measure may or may not incorporate that concept. Without contradicting the NFL theorems, the algorithms in question may be either target-oriented or not; the choice between the two alternatives enters the discourse only at the level of performance measures. The assertion of the NFL theorems, which is literally about the probability of an algorithm’s generating a given sample, is thus interpreted as a consequent assertion about the algorithm’s “performance.”
Here are examples of both targeted and non-targeted algorithms, both equally subject to the NFL theorems.
Assume that the fitness function is simply the height of peaks in a specific mountainous region. If we choose a target-oriented algorithm, the target of the search can be defined as a specific peak P whose height is, say, 6,000 meters above sea level. In this case the number of iterations required to reach the predefined height of 6,000 meters (or, perhaps more conveniently, its reciprocal) can serve as the performance measure. Then algorithm A performs better than algorithm B if A converges on the target in fewer steps than B. Obviously, if two algorithms generated the same sample after m iterations, this would mean that they both found the target, peak P, after the same number of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms. In other words, in probabilistic terms, the statement of the first NFL theorem translates into a statement of equal average performance of any two algorithms if the averaging is over all possible fitness landscapes (not all of which must necessarily exist materially). In the above example this means that the average number of iterations required to locate the target is the same for any two algorithms if the averaging is done over all possible mountainous landscapes.
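A sketch of this target-oriented performance measure (the numbers and names are mine, for illustration; a sample is a table of (point, measured height) rows, as above):

    def steps_to_target(sample, target_height=6000):
        """Target-oriented performance measure: the number of iterations
        needed before a height of at least target_height was first
        measured; None if the target was never reached."""
        for step, (point, height) in enumerate(sample, start=1):
            if height >= target_height:
                return step
        return None

    sample_A = [(2, 3100), (7, 4500), (4, 6000)]  # algorithm A's sample
    sample_B = [(1, 2200), (4, 6000)]             # algorithm B's sample
    print(steps_to_target(sample_A))  # 3 iterations to reach the target
    print(steps_to_target(sample_B))  # 2 iterations: B performed better here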
The NFL theorems do not say anything, however, about the relative performance of algorithms A and B on a specific landscape, where either A or B can happen to be much better than its competitor.
Algorithms can compete in a targetless setting as well, and the NFL theorems remain valid for this case too. For example, rather than defining a target as a certain peak P, or even as a peak of a certain height, we may compare the algorithms by finding out which of them, A or B, simply finds a higher peak after a certain (arbitrarily chosen) number of iterations. The performance measure in this case is the height of the peak reached after m iterations. No specific peak and no specific height is pre-selected as a target. Algorithm A performs better than algorithm B if, after m iterations, A has found a higher peak. The first NFL theorem tells us that, averaged over all possible mountainous reliefs (not all of them necessarily materially existing), the probabilities of A and B generating the same sample after m iterations are equal. Consequently, this may reasonably be interpreted as a statement that the height of the peak reached after m iterations, averaged over all possible landscapes, will in all likelihood be the same for any two algorithms.
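The corresponding targetless measure, sketched in the same toy terms (again, the names and numbers are mine):

    def best_height_after_m(sample):
        """Targetless performance measure: the highest peak found among
        the m measured values. No peak or height is pre-selected; a
        higher result simply means better performance."""
        return max(height for point, height in sample)

    sample_A = [(2, 3100), (7, 4500), (4, 6000)]  # m = 3 iterations
    sample_B = [(1, 2200), (4, 5100), (6, 5900)]  # m = 3 iterations
    print(best_height_after_m(sample_A))  # 6000
    print(best_height_after_m(sample_B))  # 5900: A performed better here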
The conclusions in both reviewed situations, while not directly asserted by the NFL theorems, are consequences of the first NFL theorem based on its reasonable interpretation. The first NFL theorem asserts the equal probability of any two algorithms generating a given sample (i.e. identical tables containing the values of the fitness function measured at m points). It is (reasonably) interpreted as asserting equal values of the performance measure for any two algorithms when averaged over all possible landscapes.
Whatever performance measure is chosen is secondary and does not invalidate the NFL theorems. These theorems are equally valid for targeted and non-targeted searches, and they are certainly valid for evolutionary algorithms. (As Wolpert and Macready have recently shown (see Wolpert 2002; Wolpert and Macready 2003), the NFL theorems may be invalid for co-evolutionary algorithms, but this is a different story.) Orr’s assertion, because of its formulation, could unfortunately be interpreted as saying that the NFL theorems do not apply to Darwinian algorithms because the latter are targetless and hence are “not search algorithms” in the NFL sense. Under such an unfortunate interpretation Orr's assertion would be incorrect, and this is why Orr’s statement requires the above clarification.
What is true, though, is that the NFL theorems, while perfectly applicable to all kinds of algorithms including Darwinian evolutionary algorithms (with a possible exception for co-evolution), do not, contrary to Dembski’s assertions, in any way prohibit Darwinian evolution. The NFL theorems do not at all prevent evolutionary algorithms from outperforming a random sampling (or blind search), because these theorems are about performance averaged over all possible fitness functions; they say nothing about the performance of different algorithms on specific fitness landscapes. In real-life situations it is the performance on a specific landscape that counts, and this is where evolutionary algorithms routinely outperform random searches, and do so very efficiently, both when the process is targeted (as in Dawkins’s algorithm; see Dawkins 1996 [1986]) and when it is non-targeted (as Darwinian evolution is).
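This point is easy to illustrate. On a specific, smooth landscape, where small changes can accumulate, even the crudest caricature of cumulative selection handily beats blind search given the same number of probes; averaged over all conceivable landscapes, including the wildly rugged ones, this advantage would wash out, exactly as the NFL theorems require. A minimal sketch (all details are mine, for illustration only):

    import random

    N = 100  # a "genome" of N bits; fitness = the number of 1-bits, a
             # specific landscape on which small mutations can accumulate

    def fitness(genome):
        return sum(genome)

    def evolutionary_search(probes):
        """A caricature of cumulative selection: flip one random bit at a
        time and keep the change only if fitness does not decrease."""
        genome = [random.randint(0, 1) for _ in range(N)]
        for _ in range(probes):
            i = random.randrange(N)
            mutant = genome.copy()
            mutant[i] ^= 1  # flip one bit
            if fitness(mutant) >= fitness(genome):
                genome = mutant
        return fitness(genome)

    def random_sampling(probes):
        """Blind search: draw random genomes, keep the best fitness seen."""
        return max(fitness([random.randint(0, 1) for _ in range(N)])
                   for _ in range(probes))

    random.seed(1)
    print(evolutionary_search(1000))  # typically at or near 100 out of 100
    print(random_sampling(1000))      # typically only in the mid-60s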
In writing this brief essay I had fruitful discussions with Brendan McKay, whose contribution is gratefully acknowledged. Helpful comments were also offered by Erik Tellgren. H. Allen Orr, in a series of email messages, kindly clarified his views on the subjects discussed.
REFERENCES
Orr, H. Allen. 2002. Review of William Dembski’s No Free Lunch. Boston Review, Summer issue. Also online at http://bostonreview.mit.edu/BR27.3/orr.html , accessed on December 22, 2002.
Dawkins, Richard. 1996 [1986]. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. New York: W.W. Norton.
Dembski, William A. 2002a. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Lanham, Maryland: Rowman and Littlefield Publishers.
Dembski, William A. 2002b. Sheer versus Real Possibilities. Boston Review, October-November. Also online at http://bostonreview.mit.edu/BR27.5/exchange.html , accessed on December 26, 2002.
Dembski, William A. 2002c. Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr. Online at www.designinference.com/documents/2002.12.Unfettered_Resp_to_Orr.htm , accessed on December 26, 2002.
Wolpert, David H. 2002. Online at www.talkreason.org/articles/jello.cfm , accessed on January 6, 2003.
Wolpert, David H. and William G. Macready. 1997. No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation 1(1): 67-82.
Wolpert, David H. and William G. Macready. 2003 (to be published).
Mark Perakh's home page http://members.cox.net/marperak
Comments: marperak@cox.net