Intelligent Design: The Design Inference

In this section we will take a look at the evidence for design. William Dembski is an associate research professor at Baylor University and a senior fellow of the Discovery Institute's Center for the Renewal of Science and Culture. In his book Intelligent Design, Dembski quotes Princeton theologian Charles Hodge: There are in the animal and vegetable worlds innumerable instances of at least apparent contrivance (evidence of the mental) which have excited the admiration of men in all ages. There are three ways of accounting for them. The first looks to an intelligent agent… In the external world there is always and everywhere indisputable evidence of the activity of two kinds of force: the one physical, the other mental. The physical belongs to matter and is due to the properties with which it has been endowed; the other is the… mind of God.

The second method of accounting for contrivances in nature admits that they were foreseen and purposed by God, and that He endowed matter with forces which He foresaw and intended should produce such results. But here His agency stops. He never interferes to guide the operation of physical causes…

The third method is that which refers them to the blind operation of natural causes. This is the doctrine of the Materialists. (Dembski 1999)

The problem with science is its commitment to empiricism: everything has to be measured, analyzed and accounted for. How do you measure God, or what God has done, or might do in the future? Because of this, science and an intelligent designer part ways.

There are very definite advantages to severing the world from God. Thomas Huxley, for instance, found great comfort in not having to account for his sins to a creator. Naturalism promises to free humanity from the weight of sin by dissolving the very concept of sin. (Dembski 1999)

The fact remains that intelligent causes have played, are playing and will continue to play an important role in science. Entire industries, economic and scientific, depend crucially on such notions as intelligence, intentionality and information. Included here are forensic science, intellectual property law, insurance claims investigation, cryptography, random number generation, archaeology and the search for extraterrestrial intelligence (SETI). (Dembski 1999)

Can distinctions be made between physical and intelligent causes? Are these distinctions reliable enough to serve as marks of intelligence that signal the activity of an intelligent cause? Finding a reliable criterion for detecting the activity of intelligent causes has to date constituted the key obstacle facing Hodge's first method: determining the mind of God. (Dembski 1999)

If we prescribe in advance that science must be limited to strictly natural causes, then science will necessarily be incapable of investigating God's interaction with the world. But if we permit science to investigate intelligent causes, as it already does in fields such as forensic science, then God's interaction with the world, insofar as it manifests the characteristic features of intelligent causation, becomes a legitimate domain for scientific investigation. (Dembski 1999)


Design as a scientific theory


Scientists are beginning to realize that design can be rigorously formulated as a scientific theory. What has kept design outside the scientific mainstream these last hundred and forty years is the absence of precise methods for distinguishing intelligently caused objects from unintelligently caused ones.

What has emerged is a new program for scientific research known as intelligent design. Within biology, intelligent design is a theory of biological origins and development. Its fundamental claim is that intelligent causes are necessary to explain the complex, information-rich structures of biology and that these causes are empirically detectable. There exist well-defined methods that, on the basis of observational features of the world, are capable of reliably distinguishing intelligent causes from undirected natural causes. Such methods are found in already existing sciences such as those mentioned earlier.

Whenever these methods detect intelligent causation, the underlying entity they uncover is information. Information becomes a reliable indicator of intelligent causation as well as a proper object for scientific investigation. Intelligent design is therefore not the study of intelligent causes per se but of informational pathways induced by intelligent causes. Intelligent design presupposes neither a creator nor miracles. Intelligent design is theologically minimalist. It detects intelligence without speculating about the nature of the intelligence. (Dembski 1999) Intelligent design does not try to get into the mind of a designer and figure out what a designer is thinking. The designer's thought processes lie outside the scope of intelligent design. As a scientific research program, intelligent design investigates the effects of intelligence and not intelligence as such. (Dembski 2004)

There's a joke that clarifies the difference between intelligent design and creation. Scientists come to God and claim they can do everything God can do. "Like what?" asks God. "Like creating human beings," say the scientists. "Show me," says God. The scientists say, "Well, we start with some dust and then" - God interrupts, "Wait a second. Get your own dust." Creation asks for the ultimate resting place of explanation: the source of being of the world. Intelligent design, by contrast, inquires not into the ultimate source of matter and energy but into the cause of their present arrangements. (Dembski 2004) Scientific creationism's reliance on narrowly held prior assumptions undercuts its status as a scientific theory. Intelligent design's reliance on widely accepted scientific principles, on the other hand, ensures its legitimacy as a scientific theory. (Dembski 2004)

What will science look like once intelligent causes are readmitted to full scientific status? The worry is that intelligent design will stultify scientific inquiry. Suppose Paley was right about the mammalian eye exhibiting sure marks of intelligent causation. How would this recognition help us understand the eye any better as scientists? Actually it would help quite a bit. It would put a stop to all those unsubstantiated just-so stories that evolutionists spin out in trying to account for the eye through a gradual succession of undirected natural causes. It would preclude certain types of scientific explanations. This is a contribution to science. Science then becomes a process whereby one intelligence determines what another intelligence has done. (Dembski 1999)


The Designer
The physical world of science is silent about the revelation of Christ in Scripture. Nothing prevents the physical world from independently testifying to the God revealed in the Scripture. Now intelligent design does just this - it puts our native intellect to work and thereby confirms that a designer of remarkable talents is responsible for the physical world. How this designer connects with the God of Scripture is then left for theology to determine. (Dembski 1999)

Why should anyone want to reinstate design into science? Chance and necessity have proven too thin an explanatory soup on which to nourish a robust science. In fact, by dogmatically excluding design from science, scientists are themselves stifling scientific inquiry. Richard Dawkins begins his book The Blind Watchmaker by stating, "Biology is the study of complicated things that give the appearance of having been designed for a purpose." In What Mad Pursuit, Francis Crick, Nobel laureate and co-discoverer of the structure of DNA, writes, "Biologists must constantly keep in mind that what they see was not designed, but rather evolved." (Dembski 1999)


The Complexity-Specification Criterion
Whenever design is inferred, three things must be established: contingency, complexity and specification. Contingency ensures that the object in question is not the result of an automatic and therefore unintelligent process that had no choice in its production. Complexity ensures that the object is not so simple that it can readily be explained by chance. Finally, specification ensures that the object exhibits the type of pattern characteristic of intelligence.

Contingency is further understood as an object, event or structure being irreducible to any underlying physical necessity. The sequencing of DNA bases, for example, is irreducible to the bonding affinities between the bases. (Dembski 1999)


The Explanatory Filter
William Dembski has devised what he calls the "explanatory filter" to determine whether design is present or not.

First to be assessed is whether the situation or object is contingent. If not, the situation is attributable to necessity. To say something is necessary is to say that it has to happen and that it can happen in one and only one way. Consider a biological structure which results from necessity: it would have to form as reliably as water freezes when its temperature is suitably lowered. (Dembski 2004) The opposite of necessity is contingency. To say something is contingent is to say that it can happen in more than one way. Contingency presupposes a range of possibilities, such as the possible results of spinning a roulette wheel. To get a handle on those possibilities, scientists typically assign them probabilities. (Dembski 2004) Either the contingency is a blind, purposeless contingency - which is chance (whether pure chance or chance constrained by necessity); or it is a guided, purposeful contingency - which is intelligent causation. (Dembski 2002)

Secondly, if something is determined to be contingent, the next question is: is it complex? If not, the situation is attributable to chance.

Third, if something is determined to be complex, is it specified? If it is not specified, the situation is attributable to chance. If it is specified, however, the situation is attributed to design. According to the complexity-specification criterion, once the improbabilities become too vast and the specifications too tight, chance is eliminated and design is implicated. (Dembski 1999)
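
As a rough sketch (the function and its inputs are hypothetical, not Dembski's), the filter can be written as a simple three-step decision procedure in Python; the three boolean inputs stand in for the judgments of contingency, complexity and specification described above, which are the substantive work in practice.

    def explanatory_filter(is_contingent, is_complex, is_specified):
        # A sketch of the Explanatory Filter as a three-step decision.
        # The boolean inputs are placeholders for the judgments described
        # in the text; making those judgments is the hard part.
        if not is_contingent:
            return "necessity"   # it had to happen, and in only one way
        if not is_complex:
            return "chance"      # probable enough to credit to chance
        if not is_specified:
            return "chance"      # improbable, but fits no independent pattern
        return "design"          # contingent, complex and specified

    # An event judged contingent, complex and specified is attributed to design.
    print(explanatory_filter(is_contingent=True, is_complex=True, is_specified=True))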

Whenever this criterion attributes design, it does so correctly. In every instance where the complexity-specification criterion attributes design and where the underlying causal story is known, it turns out design actually is present. It has the same logical status as concluding that all ravens are black given that all ravens observed to date have been found to be black. (Dembski 1999)

William Dembski in his book The Design Revolution provides us with an example from the movie Contact that illustrates how intelligent design can be detected.

After years of receiving apparently meaningless "random" signals, the Contact researchers discovered a pattern of beats and pauses that corresponds to the sequence of all the prime numbers between 2 and 101. That grabbed their attention, and they immediately detected intelligent design. When a sequence begins with two beats and then a pause, three beats and then a pause, and continues through each prime number all the way to 101 beats, the researchers must infer the presence of an extraterrestrial intelligence.

Here's why. Nothing in the laws of physics requires radio signals to take one form or another, so the prime sequence is contingent rather than necessary. Also, the prime sequence is a long sequence and therefore complex. Finally, it was not just complex, but it also exhibited an independently given pattern or specification. (It was not just any old sequence of numbers but a mathematically significant one - the prime numbers.) (Dembski 2004)
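
As an illustration only (the code is illustrative; the prime sequence comes from the example above), the Contact-style signal can be generated by writing out n beats and a pause for each prime n from 2 to 101:

    def is_prime(n):
        # Trial-division primality test; adequate for the small numbers used here.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    # n beats ('1') followed by a pause ('0') for every prime n from 2 to 101.
    primes = [n for n in range(2, 102) if is_prime(n)]
    signal = "0".join("1" * p for p in primes)

    print(primes)        # the 26 primes from 2 to 101 -- the independently given pattern
    print(len(signal))   # total length of the beat-and-pause sequence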

A second application of the Explanatory Filter is seen in the workings of a safe's combination lock. The safe's lock is marked with a hundred numbers ranging from 00 to 99, and five turns in alternating directions are required to open it. We assume that one and only one sequence of numbers opens the lock (e.g., 34-98-25-09-71). There are thus 10 billion possible combinations, of which precisely one opens the lock.

Feeding this situation into the Explanatory Filter, we note first that no regularity or law of nature requires the dial to turn to the combination that opens the lock; therefore the opening of the bank's safe is contingent. Secondly, random twirling of the combination lock's dial is exceedingly unlikely to open the lock. This makes the opening of the safe complex. Is the opening of the safe specified? If not, the opening of the safe could be attributed to chance. Since only one of the 10 billion possibilities opens the lock, the opening of the safe is also specified. This moves the problem to the area of design. Any sane bank worker would instantly recognize: somebody knew the combination and chose to open the lock using the prescribed numbers in the proper rotation. (Dembski 2004)
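
The counting behind the "10 billion possible combinations" figure is simple multiplication; a minimal sketch:

    # 100 possible numbers (00-99) at each of 5 turns gives 100**5 equally
    # likely settings, only one of which opens the lock.
    positions_per_turn = 100
    turns = 5

    total_combinations = positions_per_turn ** turns
    probability_of_random_opening = 1 / total_combinations

    print(f"{total_combinations:,}")       # 10,000,000,000 -- ten billion
    print(probability_of_random_opening)   # 1e-10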

Notice the word "chose" in the preceding sentence. With natural selection there is the concept of choice. To "select" is to choose. In ascribing the power to choose to unintelligent natural forces, Darwin perpetrated the greatest intellectual swindle in the history of ideas. Nature has no power to choose. All natural selection does is narrow the variability of incidental change by weeding out the less fit. It acts on the spur of the moment, based solely on what the environment at the present time deems fit and thus without any prevision of future possibilities. This blind process, when coupled with another blind process, namely incidental change, is supposed to produce designs that exceed the capacity of any designers in our experience. No wonder Daniel Dennett, in Darwin's Dangerous Ideas, credits Darwin with "the single best idea anyone has ever had." Getting design without a designer is a good trick indeed. Now with advances in technology as well as the information and life sciences, the Darwinian gig is now up. It's time to lay aside the tricks - the smokescreens and the hand-waving, the just-so-stories and the stonewalling, the bluster and the bluffing - and to explain scientifically what people have known all along, namely, why you can't get design without a designer. That's were intelligent design comes in. (Dembski 2004)


Why the Criterion Works
What makes intelligent agents detectable? The principal characteristic of intelligent agency is choice. Intelligence consists in choosing between. How do we recognize that an intelligent agent has made a choice? A random ink blot is unspecified; a message written with ink on paper is specified. We may not be able to state in advance the exact message recorded, but the characteristics of written language nonetheless specify it. This is how we detect an intelligent agency.

A psychologist who observes a rat making no erroneous turns and in short order exiting a maze will be convinced that the rat has indeed learned how to exit the maze and that this was not dumb luck. The more complex the maze and the more specific the required turns, the more evidence the psychologist has that the rat did not accomplish this feat by chance. This general scheme for recognizing intelligent agency is but a thinly disguised form of the complexity-specification criterion. In general, to recognize intelligent agency we must observe an actualization of one among several competing possibilities, note which possibilities were ruled out and then be able to specify the possibility that was actualized. (Dembski 1999)

Therefore there exists a reliable criterion for detecting design. This criterion detects design strictly from observational features of the world. Moreover it belongs to probability and complexity theory, not to metaphysics and theology. And although it cannot achieve logical demonstration, it does achieve statistical justification so compelling as to demand assent. When this criterion is applied to biology, it detects design. In particular it shows that Michael Behe's irreducibly complex biochemical systems are designed. (Dembski 1999)

Information can be both complex and specified. Information that is both complex and specified will be called complex specified information, or CSI. The sixteen-digit number on your VISA card is an example of CSI. The complexity of this number ensures that a would-be thief cannot randomly pick a number and have it turn out to be a valid VISA number.

Algorithms (mathematical procedures for solving problems) and natural laws are in principle incapable of explaining the origin of information. They can explain the flow of information. Indeed, algorithms and natural laws are ideally suited for transmitting already existing information. What they cannot do, however, is originate information. Instead of explaining the origin of CSI, algorithms and natural laws shift the problem elsewhere - in fact, to a place where the origin of CSI will be at least as difficult to explain as before. (Dembski 1999)

Take for example a computer algorithm that performs addition. The algorithm has a correctness proof, so it performs its additions correctly. Given the input data 2 + 2, can the algorithm output anything other than 4? Computer algorithms are wholly deterministic. They allow for no contingency (no other option) and thus cannot generate new information. Contingency (options) cannot be produced by them. Without contingency, laws cannot generate information, to say nothing of complex specified information. Time, chance and natural processes have limitations.
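
A trivial sketch of the point about determinism: an addition algorithm maps the same input to the same output every time, so there is no contingency for it to resolve.

    def add(a, b):
        # A deterministic algorithm: the output is fixed entirely by the input.
        return a + b

    # Running it repeatedly on the same input can yield only one outcome.
    outputs = {add(2, 2) for _ in range(1000)}
    print(outputs)   # {4} -- no other option, hence no new information generated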

If not by means of laws, how then does contingency - and hence information - arise? There are two possibilities. Either the contingency is a blind, purposeless contingency, which is chance; or it is a guided, purposeful contingency, which is intelligent causation.


Can chance generate Complex Specified Information (CSI)?
Chance can generate complex unspecified information, and chance can generate noncomplex specified information. What chance cannot generate is information that is both complex and specified.

A typist randomly typing a long sequence of letters will generate complex unspecified information: the precise sequence of letters typed will constitute a highly improbable but unspecified event. Even though a meaningful word might appear here and there, random typing cannot produce an extended meaningful text, which would constitute information that is both complex and specified.

Why can't this happen by chance? Once the improbabilities become too vast and the specifications too tight, chance is eliminated and design is implicated. Just where the probabilistic cutoff is can be debated, but that there is a probabilistic cutoff beyond which chance becomes an unacceptable explanation is clear. The universe will experience heat death before random typing at a keyboard produces a Shakespearean sonnet.
(Dembski, 1999)
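
A back-of-the-envelope estimate (the keyboard size and sonnet length below are illustrative assumptions, not Dembski's figures) shows how quickly the improbabilities outrun any available probabilistic resources:

    import math

    # Assume a 27-symbol keyboard (26 letters plus a space) and a sonnet of
    # roughly 600 characters; hitting one exact text by random typing then has
    # probability (1/27)**600.
    alphabet_size = 27
    sonnet_length = 600

    log10_probability = -sonnet_length * math.log10(alphabet_size)
    print(round(log10_probability))   # about -859: roughly 1 chance in 10^859

    # Far smaller than the universal probability bound of 1 in 10^150
    # discussed in the next section.
    print(log10_probability < -150)   # True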

Any output of specified complexity requires a prior input of specified complexity. In the case of evolutionary algorithms, they can yield specified complexity only if they themselves are carefully front-loaded with the right information and thus carefully adapted to the problem at hand. Evolutionary algorithms therefore do not generate or create specified complexity, but merely harness already existing specified complexity. There is only one known generator of specified complexity, and that is intelligence. (Dembski 2002)


The Probability Factor
The French mathematician Émile Borel proposed 1 in 10 to the 50th power as a universal probability bound below which chance could definitely be precluded. Borel's probability bound translates into 166 bits of information. William Dembski in his book The Design Inference describes a more stringent probability bound which takes into consideration the number of elementary particles in the observable universe, the duration of the observable universe until its heat death and the Planck time. A probability bound of 1 in 10 to the 150th power results, which translates into 500 bits of information. Dembski chooses this more stringent value. If we now define CSI as any specified information whose complexity exceeds 500 bits of information, it follows immediately that chance cannot generate CSI. (Dembski 1999) Any specified event of probability less than 1 in 10 to the 150th power will remain improbable even after all conceivable probabilistic resources from the observable universe have been factored in. It thus becomes a universal probability bound. (Dembski 2004)
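
The conversion between a probability bound and bits of information quoted above is a base-2 logarithm; a minimal sketch:

    import math

    def bits_of_information(probability):
        # Information content in bits: I = -log2(p).
        return -math.log2(probability)

    # Borel's bound of 1 in 10^50 and Dembski's bound of 1 in 10^150.
    print(round(bits_of_information(1e-50)))    # 166 bits
    print(round(bits_of_information(1e-150)))   # 498 bits, rounded to 500 in the text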

To say that the specific sequence of the nucleotides in the DNA molecule of the first organism came about by a purely random process in the early history of the earth will not do: CSI cries out for explanation, and pure chance cannot supply it. Richard Dawkins makes this point eloquently: "We can accept a certain amount of luck in our explanations, but not too much…this ration has, as its upper limit, the number of eligible planets in the universe … We therefore have at our disposal, if we want to use it, odds of 1 in 100 billion-billion as an upper limit to spend on our theory of the origin of life. Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion-billion to one." (Dembski 2004)


Unlimited Probabilistic Resources
Probabilistic resources refer to the number of opportunities an event has to occur. Unlimited probabilistic resources include not only the opportunities that can be counted and known in the present scientific context but also resources that go beyond what is presently known. Evolutionists will resort to this move when their backs are to the wall: they appeal to resources that are not within our purview at the present time and look to some future set of conditions that might help their position. It is important to deal with the here and now, the reality of the present. If present methods of applying probabilities to an occurrence such as the origin of life yield probabilities that are effectively zero, why proceed into the unknown unless out of sheer desperation?

William Dembski illustrates the following concept. What if the known universe is but one of many possible universes, each of which is as real as the known universe but causally inaccessible to it? If so, are not the probabilistic resources needed to eliminate chance vastly increased, and is not the validity of 1 in 10 to the 150th power as a universal probability bound thrown into question? This line of reasoning has gained widespread currency among scientists and philosophers in recent years. Is it not illegitimate to rescue chance by invoking probabilistic resources from outside the known universe? Should there not be independent evidence to invoke a resource? (Dembski 2002)

Was Arthur Rubinstein a great pianist or was it just that whenever he sat at the piano, he happened by chance to put his fingers on the right keys to produce beautiful music? It could happen by chance, and there is some possible world where everything is exactly as it is in this world except that the counterpart to Arthur Rubinstein cannot read music and happens to be incredibly lucky whenever he sits at the piano.

Perhaps Shakespeare was an imbecile who just by chance happened to string together a long sequence of apt phrases. Unlimited probabilistic resources ensure that we will never know. (Dembski 2002) Are not the probabilities on our side that Rubinstein was a consummate pianist and Shakespeare a consummate writer?

How can we know for sure that one is listening to Arthur Rubinstein the musical genius and not a lucky poseur? The mark of Rubinstein's musical skill (design) is that he was following a pre-specified concert program and, in this instance, playing a particular piece listed in the program note for note. His performance exhibited specified complexity. Specified complexity is how we eliminate bizarre possibilities in which chance is made to account for things that we would ordinarily attribute to design. (Dembski 2002)

There is an advantage for science in limiting probabilistic resources. Limited probabilistic resources open possibilities for knowledge and discovery that would otherwise be closed. Limits enable us to detect design where otherwise it would elude us. Limitations also protect us from the unwarranted confidence in natural causes that unlimited probabilistic resources invariably seem to engender. (Dembski 2002)


The Law of Conservation of Information
If chance has no chance of producing complex specified information, what about natural causes? Natural causes are incapable of generating CSI. Dembski calls this result the law of conservation of information or LCI.

Dembski lists several corollaries of this law (the name recalls Peter Medawar's book The Limits of Science, in which a similar conservation law was proposed):

(1) The CSI in a closed system of natural causes remains constant or decreases.

(2) CSI cannot be generated spontaneously, originate endogenously or organize itself.

(3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system, though now closed, was not always closed).

(4) In particular any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.

Richard Dawkins, Daniel Dennett and many other scientists and philosophers are convinced that proper scientific explanations must be reductive, moving from the complex to the simple: on this view, explaining the origin of information in a closed system requires a reductive explanation. The law of conservation of information (LCI) shows, however, that CSI cannot be explained reductively. To explain an instance of CSI requires at least as much CSI as we started with. A pencil-making machine is more complicated than the pencils it makes. (Dembski 1999)

The most interesting application of the law of conservation of information is the reproduction of organisms. In reproduction one organism transmits its CSI to the next generation. Most evolutionists would argue that the Darwinian mechanism of mutation and selection introduces novel CSI into an organism, supplementing the CSI of the parents with CSI from the environment. However, there is a feature of CSI that counts decisively against generating CSI from the environment via mutation and selection. That crucial feature is that CSI is holistic. To say that CSI is holistic means that individual items of information cannot simply be added together to form a new item of complex specified information. CSI requires not only having the right collection of parts but also having the parts in proper relation. Adding random information to an already present body of information will distort or reduce the information already present. Even if two coherent bodies of information are combined, the result, unless specified in some way, will not be useful to the organism. A sentence with its words scrambled is nonsensical and contains no information. Also, two sentences that have no relationship with one another do not add to the information already present. The specification that identifies METHINKS IT IS LIKE A WEASEL and the specification that identifies IN THE BEGINNING GOD CREATED do not, when juxtaposed, form a joint line of information. CSI is not obtained by merely aggregating component parts or by arbitrarily stitching items of information together. (Dembski 1999)

The best thing that can happen to a book on a library shelf is that it remains as it was when originally published and thus preserves the CSI inherent in its text. Over time, however, what usually happens is that a book gets old, pages fall apart, and the information on the pages disintegrates. The Law of Conservation of Information is therefore more like a law of thermodynamics governing entropy than a conservation law governing energy, with the focus on degradation rather than conservation. The Law of Conservation of Information says that natural causes can at best preserve CSI; they may degrade it, but they cannot generate it. Natural causes are ideally suited as conduits for CSI. It is in this sense, then, that natural causes can be said to "produce CSI." But natural causes never produce things de novo or ex nihilo. When natural causes produce things, they do so by reworking other things. (Dembski 2002)

A classic example of information being degraded over time is seen in an experiment by Spiegelman in 1967. The experiment allowed a molecular replicating system to proceed in a test tube without any cellular organization around it.

The replicating molecules (the nucleic acid templates) require an energy source, building blocks (i.e., nucleotide bases), and an enzyme to help the polymerization process that is involved in self-copying of the templates. Then away it goes, making more copies of the specific nucleotide sequences that define the initial templates. But the interesting result was that these initial templates did not stay the same; they were not accurately copied. They got shorter and shorter until they reached the minimal size compatible with the sequence retaining self-copying properties. And as they got shorter, the copying process went faster. So what happened was natural selection in a test tube: the shorter templates that copied themselves faster became more numerous, while the larger ones were gradually eliminated. This looks like Darwinian evolution in a test tube. But the interesting result was that this evolution went one way: toward greater simplicity. Actual evolution tends to go toward greater complexity, species becoming more elaborate in their structure and behavior, though the process can also go in reverse, toward simplicity. But DNA on its own can go nowhere but toward greater simplicity. In order for the evolution of complexity to occur, DNA has to be within a cellular context; the whole system evolves as a reproducing unit. (Dembski 2002)


Application to Evolutionary Biology
How does all this apply to evolutionary biology? Complex specified information (CSI) is abundant in the universe. Natural causes are able to shift it around and possibly express it in biological systems. What we wish to know, however, is how the CSI was first introduced into the organisms we see around us. In reference to the origin of life we want to know the informational pathway that takes the CSI inherent in a lifeless universe and translates it into the first organism. There are only so many options. The CSI in an organism consists of CSI acquired at birth together with whatever CSI is acquired during the course of its life. CSI acquired at birth derives from inheritance with modification (mutation), and modification occurs by chance. CSI acquired after birth involves selection along with infusion, the direct introduction of novel information from outside the organism. Inheritance with modification, selection and infusion - these three account for the CSI inherent in biological systems.

Modification includes - to name but a few - point mutations, base deletions, genetic crossover, transpositions, and recombination generally. Given the law of conservation of information, it follows that inheritance with modification by itself is incapable of explaining the increased complexity of CSI that organisms have exhibited in the course of natural history. Inheritance with modification therefore needs to be supplemented, and the candidate for this supplementation is selection, which is said to introduce new information into a population. Nonetheless this view places undue restrictions on the flow of biological information, restrictions that biological systems routinely violate.

As an example, consider Michael Behe's bacterial flagellum. How does a bacterium without a flagellum evolve a flagellum by the processes so far discussed? We have already outlined the complexity issue of the flagellum. How does selection account for it? Selection cannot accumulate proteins, holding them in reserve until, with the passing of many generations, they are finally available to form a complete flagellum. Neither the environment nor the bacterial cell contains a prescribed plan or blueprint of the flagellum. Selection can only build on partial function, gradually, generation after generation. But a flagellum without its full complement of protein parts doesn't function at all. Consequently, if selection and inheritance with modification are going to produce the flagellum, they have to do it in one generation. The CSI of a flagellum far exceeds 500 bits. Selection will only deselect any bacterium that does not have the flagellum, and a 500-bit novelty is far beyond any reasonable chance of occurring.

There remains only one source for the CSI in biological systems - infusion. Infusion becomes problematic once we start tracing backwards the informational pathways of infused information. Plasmid exchange is well known in bacteria, and it allows bacterial cells to acquire antibiotic resistance. Plasmids are small circular pieces of DNA that can be passed from one bacterial cell to another. Problems begin when we ask where the bacterium that released the plasmid in turn derived it. There is a regress here, and this regress always terminates in something non-organismal. If the plasmid is cumulatively complex, then the general evolutionary methods might apply. However, if the plasmid is irreducibly complex, whence could it have arisen? Because organisms have a finite trajectory back in time, biotic infusion must ultimately give way to abiotic infusion, and endogenous (intracellular) information must ultimately derive from exogenous (extracellular) information.

Two final questions arise: (1) How is abiotically infused CSI transmitted to an organism? And (2) where does this information reside prior to being transmitted? The obvious alternative is, and must be, a theological one: the information in biological systems can be traced back to the direct intervention of God. (Dembski 1999)

As Michael Behe's irreducibly complex biochemical systems readily yield to design, so too does the fine-tuning of the universe. The complexity-specification criterion demonstrates that design pervades cosmology and biology. Moreover it is transcendent design, not reducible to the physical world. Indeed no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life.

Just as physicists reject perpetual motion machines because of what they know about the inherent constraints on energy and matter, so too design theorists reject any naturalistic reduction of specified complexity because of what they know about the inherent constraints on natural causes. (Dembski 1999)

Evolutionary biologists assert that design theorists have failed to take into account indirect Darwinian pathways by which the bacterial flagellum might have evolved through a series of intermediate systems that changed function and structure over time in ways that we do not yet understand. There is no convincing evidence for such pathways. Can the debate end with evolutionary biologists chiding design theorists for not working hard enough to discover those (unknown) indirect Darwinian pathways that lead to the emergence of irreducibly and minimally complex biological structures like the bacterial flagellum? Science must form its conclusions on the basis of available evidence, not on the possibility of future evidence. (Dembski 1999)


The Darwinian Extrapolation
According to Darwinian theory, organisms possess unlimited plasticity to diversify across all boundaries; moreover, natural selection is said to have the capability of exploiting that plasticity and thereby delivering the spectacular diversity of living forms that we see.

Such a theory, however, necessarily commits an extrapolation. And as with all extrapolations, there is always the worry that what we are confident about in a limited domain may not hold more generally outside that domain. In the early days of Newtonian mechanics, physicists thought Newton's laws gave a total account of the constitution and dynamics of the universe. Maxwell, Einstein, and Heisenberg each showed that the proper domain of Newtonian mechanics was far more constricted. It is therefore fair to ask whether the Darwinian mechanism may not face similar limitations. With many extrapolations there is enough of a relationship between inputs and outputs that the extrapolation is experimentally accessible. It then becomes possible to confirm or disconfirm the extrapolation. This is not true for the Darwinian extrapolation. There are too many historical contingencies and too many missing data to form an accurate picture of precisely what happened. It is not possible presently to determine how the Darwinian mechanism actually transformed, say, a reptile into a mammal over the course of natural history. (Dembski 2002)


Testability
Let's now ask: Is intelligent design refutable? Is Darwinism refutable? Yes to the first question, no to the second. Intelligent design could in principle be readily refuted. Specified complexity in general, and irreducible complexity in biology, are, within the theory of intelligent design, key markers of intelligent agency. If it could be shown that biological systems that are wonderfully complex, elegant and integrated - such as the bacterial flagellum - could have been formed by a gradual Darwinian process, then intelligent design would be refuted on the general grounds that one does not invoke intelligent causes when undirected natural causes will do.

By contrast, Darwinism seems effectively irrefutable. The problem is that Darwinists raise the standard for refutability too high. It is certainly possible to show that no Darwinian pathway could reasonably be expected to lead to an irreducibly complex biological structure. But Darwinists want something stronger, namely, to show that no conceivable Darwinian pathway could have led to that structure. Such a demonstration requires an exhaustive search of all conceptual possibilities and is effectively impossible to carry out. (Dembski 1999) What an odd set of circumstances: the methodology which has the more convincing and overwhelming evidence is ignored, while the methodology that has little or no evidence is in vogue and treated as irrefutable.

Let us turn to another aspect of testability - explanatory power. Underlying explanatory power is a view of explanation known as inference to the best explanation, in which a "best explanation" always presupposes at least two competing explanations. Obviously, a "best explanation" is one that comes out on top in a competition with other explanations. Design theorists have the edge in explanatory power over natural selection. Darwinists, of course, see the matter differently.

What is the problem with adding a design-theoretic tool chest to the Darwinian tool chest? Just as some tools sit in a tool chest never to be used, design then has the option of simply sitting there and possibly becoming superfluous. What is the fear of having a broad tool chest? (Dembski 1999)

Is there any hope for the evolutionist in exploring, with an unlimited amount of time, indirect Darwinian pathways which have yet to be discovered? For the sake of clarification, an indirect Darwinian pathway is a route by which a complex specified biological structure might have arisen through Darwinian, naturalistic means, a route that has yet to present itself to science as anything measurable.

William Dembski provides us with an illustration. Johnny is certain that there are leprechauns hiding in his room. Imagine this child is so ardent and convincing that he sets all of Scotland Yard onto the task of searching meticulously, tirelessly, decade after decade, for these supposed leprechauns, for any solid evidence at all of their prior habitation of the bedroom. Driven by gold fever for the leprechauns' treasure, postulating new ways of catching a glimpse of a leprechaun (a hair, a fingerprint, any clue at all), the search continues. After many decades, what should one say to the aging parents of the now aging boy? Would it be logical to shake your finger at the parents and tell them, "Absence of evidence is not evidence of absence. Step aside and let the experts get back to work"? That would be absurd. And yet that, essentially, is what evolutionary biologists are telling us concerning the utterly fruitless search for credible indirect Darwinian pathways to account for irreducible complexity. (Dembski 1999)


Crossing the bridge - Meeting the Designer
What if the designing intelligence responsible for biological complexity cannot be confined to physical objects? Why should this burst the bounds of science? In answering this criticism, let us first of all be clear that intelligent design does not require miracles (as does scientific creationism) in the sense of violations of natural law. Just as humans do not perform miracles every time they act as intelligent agents, so too there is no reason to assume that for a designer to act as an intelligent agent requires a violation of natural laws. How much more effective could science be if it included intelligent causes? Intelligent causes can work with natural causes and help them to accomplish things that undirected natural causes cannot. Undirected natural causes can explain how ink gets applied to paper to form a random inkblot but cannot explain an arrangement of ink on paper that spells a meaningful message. Whether an intelligent cause is located within or outside nature is a separate question from whether an intelligent cause has acted within nature. Design has no prior commitment against naturalism or for supernaturalism unless one opens that door. Consequently science can offer no principled grounds for automatically excluding design or relegating it to the sphere of religion.

Decisions on this issue should be based upon which process has the greater explanatory power: undirected natural causes or intelligent causes. Does the designer need to be defined? Cannot the designing agent be a regulative principle - a conceptually useful device for making sense of certain facts of biology - without assigning the designer any weight in reality? The status of the designer can then be taken up by philosophy and theology. The fact that the designing intelligence responsible for life can't be put under the microscope poses no obstacle to science. We learn of this intelligence as we learn of any other intelligence - not by studying it directly but through its effects. (Dembski 2004)

All of us have identified the effects of embodied designers. Our fellow human beings constitute our best example of such designers. A designer's embodiment is of no evidential significance for determining whether something was designed in the first place. We don't get into the mind of designers and thereby attribute design. Rather, we look at effects in the physical world that exhibit clear marks of intelligence and from those marks infer a designing intelligence. (Dembski 2004)

There is no principled way to argue that the work of embodied designers is detectable whereas the work of un-embodied designers isn't. Even if an un-embodied intelligence is responsible for the design displayed in some phenomenon, a science committed to the Naturalized Explanatory Filter (a filter which excludes God and thus design) will never discover it. A science that on a priori grounds refuses to consider the possibility of un-embodied designers artificially limits what it can discover. (Dembski 2004) What happens when God is implicated in design? The Explanatory Filter doesn't consider design but becomes naturalized, taking the process back to square one, the decision between contingency and necessity. (Dembski 2002)


The Burden Of Proof
Dembski often lectures on university campuses about intelligent design. Often, he will say, a biologist in the audience will get up during the question-and-answer time to inform him that just because he doesn't know how complex biological systems might have formed by the Darwinian mechanism doesn't mean it didn't happen that way. He will then point out that the problem isn't that he personally doesn't know how such systems might have formed but that the biologist who raised the objection doesn't know how such systems might have formed - and that despite having a fabulous education in biology, a well-funded research laboratory, decades to put it all to use, security and prestige in the form of a tenured academic appointment, and the full backing of the biological community, which has also been desperately but unsuccessfully trying to discover how such systems are formed for more than one hundred years, still doesn't know. (Dembski 2004)

Many scientists have expressed their lack of knowledge of how any biochemical or cellular system could have evolved. Here are a few:

James Shapiro, a molecular biologist at the University of Chicago, conceded in National Review (September 16, 1996) that there are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations.

David Ray Griffin is a philosopher of religion with an interest in biological origins. He writes in his book Religion and Scientific Naturalism: There are, I am assured, evolutionists who have described how the transitions in question could have occurred. When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not in fact contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter someone who knows where they exist. It is up to the Darwinists to fill in the details. (Dembski 2004)

Let us look at the evolution of the eye as an example where we find a lack of information between evolutionary jumps. Darwinists, for instance, explain the human eye as having evolved from a light-sensitive spot that successively became more complicated as increasing visual acuity conferred increased reproductive capacity on an organism. In such a just-so story, all the historical and biological details of the eye's construction are lost. How did a spot become innervated and thereby light-sensitive? How did a lens form within a pinhole camera? What changes in embryological development are required to go from a light-sensitive sheet to a light-sensitive cup? None of these questions receives an answer in purely Darwinian terms. Darwinian just-so stories have no more scientific content than Rudyard Kipling's original just-so stories about how the elephant got its trunk or the giraffe its neck. (Dembski 2002)

Are not the Darwinists applying blind faith to their theory? Listen to the remark by Harvard biologist Richard Lewontin in The New York Review of Books:

We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism (i.e., naturalism). It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counterintuitive, no matter how mystifying to the uninitiated. (Dembski 2002)

This raises another question: what is the responsibility of teachers in their classrooms? Teachers who are persuaded of intelligent design and yet are directed by the system to teach evolution should teach Darwinian evolution and the evidence that supports it. At the same time, however, they should candidly report problems with the theory, notably that its mechanism of transformation cannot account for the specified complexity we observe in biology.


Dembski - Major Points
1. Materialists account for apparent contrivance through the activity of natural processes rather than through the mental activity of an intelligence.
2. God's interaction with the world could be a legitimate domain for scientific investigation.
3. Intelligently caused events can be distinguished from unintelligently caused events.
4. Detection involves discovering information.
5. Excluding intelligent design from science stifles scientific inquiry.
6. The Explanatory Filter identifies information and thus a designer.
7. Choice is an important feature of intelligence.
8. Natural selection has no power to choose. It has no eye on the past or the future; it is blind.
9. Information can be both complex and specified.
10. Complex specified information cannot arise by chance.
11. There exists a "probability bound" that chance cannot overcome.
12. Existing information cannot increase on its own, but can remain stable for a period of time or be lost, as understood by the Law of Conservation of Information.
13. The origin of original information is a mystery to modern science.
14. The existence of a molecular machine such as the bacterial flagellum is beyond explanation by natural processes.
15. Darwin uses an extrapolation to make his case, but an extrapolation that has limited or no data to confirm it.
16. Darwinism, unlike Intelligent Design, is not subject to refutation.
17. Decisions about origins should be based upon which proposal has the best explanatory power.

Prepared by

Ed Hopkins is a science educator with 32 years of experience teaching science. Ed has an undergraduate degree in chemistry and biology from George Peabody Teacher’s College, TN and a M.Ed. from Atlantic University, FL. Ed has been working closely with the Creation Studies Institute, Ft. Lauderdale, FL since its inception in 1988.