Oblogatory Reading
Saturday, July 15, 2006
Evolution and Me - George Gilder - National Review, July 17, 2006: “The Darwinian theory has become an all-purpose obstacle to thought rather than an enabler of scientific advance”

GEORGE GILDER

I first became conscious that something was awry in Darwinian science some 40 years ago as I was writing my early critique of sexual liberation, Sexual Suicide (revised and republished as Men and Marriage). At the time, the publishing world was awash with such titles as Desmond Morris’s The Naked Ape and The Human Zoo and Robert Ardrey’s African Genesis, which touted or pruriently probed the animality of human beings. Particularly impressive to me was The Imperial Animal, a Darwinian scholarly work by two anthropologists aptly named Lionel Tiger and Robin Fox that gave my theory of sex roles a panoply of primatological support, largely based on the behavior of patriarchal hamadryas baboons.

Darwinism seemed to offer me and its other male devotees a long-sought tool — resembling the x-ray glasses lamentably found elsewhere only in cartoons — for stripping away the distracting décor of clothing and the political underwear of ideology worn by feminists and other young women of the day. Using this swashbuckling scheme of fitness and survival, nature “red in tooth and claw,” we could reveal our ideological nemeses as naked mammals on the savannah to be ruled and protected by hunting parties of macho males, rather like us.

In actually writing and researching Sexual Suicide, however, I was alarmed to discover that both sides could play the game of telling just-so stories. In The Descent of Woman, Elaine Morgan showed humans undulating from the tides as amphibious apes mostly led by females. Jane Goodall croodled about the friendliness of “our closest relatives,” the chimpanzees, and movement feminists flogged research citing the bonobo and other apes as chiefly matriarchal and frequently homosexual.

These evolutionary sex wars were mostly unresolvable because, at its root, Darwinian theory is tautological. What survives is fit; what is fit survives. While such tautologies ensure the consistency of any arguments based on them, they could contribute little to an analysis of what patterns of behavior and what ideals and aspirations were conducive to a good and productive society. Almost by definition, Darwinism is a materialist theory that banishes aspirations and ideals from the picture. As an all-purpose tool of reductionism that said that whatever survives is, in some way, normative, Darwinism could inspire almost any modern movement, from the eugenic furies of Nazism to the feminist crusades of Margaret Sanger and Planned Parenthood.

So in the end, for better or for worse, my book dealt chiefly with sociological and anthropological arguments and left out Darwin.

Turning to economics in researching my 1981 book Wealth & Poverty, I incurred new disappointments in Darwin and materialism. Forget God — economic science largely denies intelligent design or creation even by human beings. Depicting the entrepreneur as a mere opportunity scout, arbitrageur, or assembler of available chemical elements, economic theory left no room for the invention of radically new goods and services, and little room for economic expansion except by material “capital accumulation” or population growth. Accepted widely were Darwinian visions of capitalism as a dog-eat-dog zero-sum struggle impelled by greed, where the winners consume the losers and the best that can be expected for the poor is some trickle down of crumbs from the jaws (or tax tables) of the rich.

In my view, the zero-sum caricature applied much more accurately to socialism, which stifles the creation of new wealth and thus fosters a dog-eat-dog struggle over existing material resources. (For examples, look anywhere in the socialist Third World.) I preferred Michael Novak’s vision of capitalism as the “mind-centered system,” with the word itself derived from the Latin caput, meaning head. Expressing the infinite realm of ideas and information, it is a domain of abundance rather than of scarcity. Flouting zero-sum ideas, supply-side economics sprang from this insight. By tapping the abundance of human creativity, lower tax rates can yield more revenues than higher rates do and low-tax countries can raise their government spending faster than the high-tax countries do. Thus free nations can afford to win wars without first seizing resources from others. Ultimately capitalism can transcend war by creating rather than capturing wealth — a concept entirely alien to the Darwinian model.

After Wealth & Poverty, my work focused on the subject of human creativity as epitomized by science and technology and embodied in computers and communications. At the forefront of this field is a discipline called information theory. Largely invented in 1948 by Claude Shannon, then of Bell Labs and later of MIT, it rigorously explained digital computation and transmission by zero-one, or off-on, codes called “bits.” Shannon defined information as unexpected bits, or “news,” and calculated its passage over a “channel” by elaborate logarithmic rules. That channel could be a wire or any other path across a distance of space, or it could be a transfer of information across a span of time, as in evolution.
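
To make Shannon’s “logarithmic rules” concrete, here is a minimal sketch of my own (not drawn from Shannon or from this essay): the information carried by a symbol is its surprisal, the negative base-2 logarithm of its probability, and the average surprisal of a source is its entropy in bits. An unexpected symbol carries more news than a common one.

    import math

    def surprisal_bits(p):
        # Information ("news") carried by an event of probability p, in bits.
        return -math.log2(p)

    def entropy_bits(probs):
        # Average information per symbol for a source with these probabilities.
        return sum(p * surprisal_bits(p) for p in probs if p > 0)

    print(surprisal_bits(0.5))         # a fair coin flip: 1.0 bit of news
    print(surprisal_bits(0.01))        # a rare event: about 6.6 bits
    print(entropy_bits([0.99, 0.01]))  # a heavily biased source averages about 0.08 bits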

Crucial in information theory was the separation of content from conduit — information from the vehicle that transports it. It takes a low-entropy (predictable) carrier to bear high-entropy (unpredictable) messages. A blank sheet of paper is a better vessel for a new message than one already covered with writing. In my book Telecosm (2000), I showed that the most predictable available information carriers were the regular waves of the electromagnetic spectrum and prophesied that all digital information would ultimately flow over it in some way. Whether across time (evolution) or across space (communication), information could not be borne by chemical processes alone, because these processes merged or blended the medium and the message, leaving the data illegible at the other end.

While studying computer science, I learned of the concept of a universal computing machine, an idealized computer envisioned by the tormented genius Alan Turing. (After contributing significantly to the breaking of the German Enigma ciphers during World War II, Turing committed suicide following court-ordered hormone injections — “treatment” for his homosexuality.) A so-called “Turing machine” is an idealized computer that can be created using any available material, from beach sand to Buckyballs, from microchips to matchsticks. Turing made clear that the essence of a computer is not its material substance but its architecture of ideas.
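
To make the point concrete, here is a toy sketch of my own (not anything of Turing’s): a Turing machine is completely specified by a finite table of rules over states and tape symbols. The table below, which simply inverts a binary string, is the entire “computer”; nothing in it refers to what the tape or the read head is made of.

    # A toy Turing machine: (state, symbol) -> (symbol to write, head move, next state).
    # This illustrative rule table inverts a binary string and then halts.
    TABLE = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),   # "_" marks blank tape
    }

    def run(tape, state="scan", head=0):
        cells = list(tape) + ["_"]
        while state != "halt":
            symbol, move, state = TABLE[(state, cells[head])]
            cells[head] = symbol
            head += move
        return "".join(cells).rstrip("_")

    print(run("10110"))   # -> "01001"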

IDEAS SUPREME
Based as it is on ideas, a computer is intrinsically an object of intelligent design. Every silicon chip holds as many as 700 layers of implanted chemicals in patterns defined with nanometer precision and then is integrated with scores of other chips by an elaborately patterned architecture of wires and switches all governed by layers of software programming written by human beings. Equally planned and programmed are all the computers running the models of evolution and “artificial life” that are central to neo-Darwinian research. Everywhere on the apparatus and in the “genetic algorithms” appear the scientist’s fingerprints: the “fitness functions” and “target sequences.” These algorithms prove what they aim to refute: the need for intelligence and teleology (targets) in any creative process.
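
The point about “fitness functions” and “target sequences” can be seen in miniature in a sketch along the lines of Dawkins’s well-known “weasel” demonstration (my own illustrative code, not taken from any of the research programs in question). Note that the target string and the fitness function are both supplied in advance by the programmer.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"            # chosen by the programmer
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(candidate):
        # The "fitness function": how closely a string matches the programmer's target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(parent, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    while parent != TARGET:
        # Keep the fittest of the parent and 100 mutated offspring each generation.
        parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    print(parent)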

I came to see that the computer offers an insuperable obstacle to Darwinian materialism. In a computer, as information theory shows, the content is manifestly independent of its material substrate. No possible knowledge of the computer’s materials can yield any information whatsoever about the actual content of its computations. In the usual hierarchy of causation, the computations reflect the software or “source code” used to program the device; and, like the design of the computer itself, the software is contrived by human intelligence.

The failure of purely physical theories to describe or explain information reflects Shannon’s concept of entropy and his measure of “news.” Information is defined by its independence from physical determination: If it is determined, it is predictable and thus by definition not information. Yet Darwinian science seemed to be reducing all nature to material causes.

As I pondered this materialist superstition, it became increasingly clear to me that in all the sciences I studied, information comes first, and regulates the flesh and the world, not the other way around. The pattern seemed to echo some familiar wisdom. Could it be, I asked myself one day in astonishment, that the opening of St. John’s Gospel, In the beginning was the Word, is a central dogma of modern science?

In raising this question I was not affirming a religious stance. At the time it first occurred to me, I was still a mostly secular intellectual. But after some 35 years of writing and study in science and technology, I can now affirm the principle empirically. Salient in virtually every technical field — from quantum theory and molecular biology to computer science and economics — is an increasing concern with the word. It passes by many names: logos, logic, bits, bytes, mathematics, software, knowledge, syntax, semantics, code, plan, program, design, algorithm, as well as the ubiquitous “information.” In every case, the information is independent of its physical embodiment or carrier.

Biologists commonly blur the information into the slippery synecdoche of DNA, a material molecule, and imply that life is biochemistry rather than information processing. But even here, the deoxyribonucleic acid that bears the word is not itself the word. Like a sheet of paper or a computer memory chip, DNA bears messages but its chemistry is irrelevant to its content. The alphabet’s nucleotide “bases” form “words” without help from their bonds with the helical sugar-phosphate backbone that frames them. The genetic words are no more dictated by the chemistry of their frame than the words in Scrabble are determined by the chemistry of their wooden racks or by the force of gravity that holds them.

This reality expresses a key insight of Francis Crick, the Nobel laureate co-discoverer of the double-helix structure of DNA. Crick expounded and enshrined what he called the “Central Dogma” of molecular biology. The Central Dogma holds that influence can flow from the arrangement of the nucleotides on the DNA molecule to the arrangement of amino acids in proteins, but not from proteins to DNA. Like a sheet of paper or a series of magnetic points on a computer’s hard disk or the electrical domains in a random-access memory — or indeed all the undulations of the electromagnetic spectrum that bear information through air or wires in telecommunications — DNA is a neutral carrier of information, independent of its chemistry and physics. By asserting that the DNA message precedes and regulates the form of the proteins, and that proteins cannot specify a DNA program, Crick’s Central Dogma unintentionally recapitulates St. John’s assertion of the primacy of the word over the flesh.

By assuming that inheritance is a chemical process, Darwin ran afoul of the Central Dogma. He believed that the process of inheritance “blended” together the chemical inputs of the parents. Seven years after Darwin published The Origin of Species, though, Gregor Mendel showed that genes do not blend together like chemicals mixing. As the Central Dogma ordains and information theory dictates, the DNA program is discrete and digital, and its information is transferred through chemical carriers — but it is not specified by chemical forces. Each unit of biological information is passed on according to a digital program — a biological code — that is transcribed and translated into amino acids.
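
The discrete, digital character of the code can be shown with a small sketch (my own illustration, using only a fragment of the standard codon table): translation is a symbolic lookup from three-letter codons to amino acids, and nothing in the lookup depends on the chemistry of the molecule that carries the letters.

    # A fragment of the standard genetic code: codon -> amino acid (one-letter symbol).
    CODON_TABLE = {
        "ATG": "M",                                        # methionine, the usual start
        "TTT": "F", "TTC": "F",                            # phenylalanine
        "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",    # glycine
        "TAA": "*", "TAG": "*", "TGA": "*",                # stop signals
    }

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):    # read discrete three-letter codons
            aa = CODON_TABLE.get(dna[i:i+3], "?")
            if aa == "*":
                break
            protein.append(aa)
        return "".join(protein)

    print(translate("ATGTTTGGCTAA"))   # -> "MFG"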

THE MEDIUM NOT THE MESSAGE
Throughout the 20th century and on into the 21st, many scientists and politicians have followed Darwin in missing the significance of the “Central Dogma.” They have assumed that life is dominated by local chemistry rather than by abstract informative codes. Upholding the inheritability of acquired characteristics, Jean-Baptiste Lamarck, Trofim Lysenko, Aleksandr Oparin, Friedrich Engels, and Josef Stalin all espoused the primacy of proteins and thus of the environment over the genetic endowment. By controlling the existing material of human beings through their environment, the Lamarckians believed that Communism could blend and breed a new Soviet man through chemistry. Dissenters were murdered or exiled. (The grim story is vividly told in Hubert Yockey’s definitive 2005 book, Information Theory, Evolution, and the Origin of Life.)

For some 45 years, Barry Commoner, the American Marxist biologist, refused to relinquish the Soviet mistake. He repeated it in an article in Harper’s in 2002, declaring that proteins must have come first because DNA cannot be created without protein-based enzymes. In fact, protein-based enzymes cannot be created without a DNA (or RNA) program; proteins have no structure without the information that defines them. As Yockey explains, “It is mathematically impossible, not just unlikely, for information to be transferred from the protein alphabet to the [DNA] alphabet. That is because no codes exist to transfer information from the 20-letter protein alphabet to the 64-letter [codon] alphabet of [DNA].” Twenty letters simply cannot directly specify the content of patterns of 64 codons.
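
The asymmetry Yockey describes is easy to exhibit in a sketch (again my own illustration from the standard code, not his calculation): because several codons spell the same amino acid, the forward translation is a well-defined function, but a protein sequence does not determine a unique codon sequence going back.

    from collections import defaultdict

    # Codons for two amino acids in the standard code (an illustrative subset).
    CODONS = {
        "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",   # glycine
        "TTT": "F", "TTC": "F",                           # phenylalanine
    }

    REVERSE = defaultdict(list)
    for codon, aa in CODONS.items():
        REVERSE[aa].append(codon)

    # The forward map is many-to-one; the reverse is one-to-many and thus not a code.
    print(REVERSE["G"])   # -> ['GGT', 'GGC', 'GGA', 'GGG']
    print(REVERSE["F"])   # -> ['TTT', 'TTC']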

But the beat goes on. By defrocking Lawrence Summers for implying the possible primacy of the genetic word over environmental conditions in the emergence of scientific aptitudes, the esteemed professoriat at Harvard expressed its continued faith in Lamarckian and Marxian biology.

Over at NASA, U.S. government scientists make an analogous mistake in constantly searching for traces of protein as evidence of life on distant planets. Without a hierarchy of informative programming, proteins are mere matter, impotent to produce life. The Central Dogma dooms the NASA pursuit of proteins on the planets to be what we might call a “wild goo chase.” As St. John implies, life is defined by the presence and precedence of the word: informative codes.

I began my 1989 book on microchips, Microcosm: The Quantum Era in Economics and Technology, by quoting physicist Max Planck, the discoverer of the quantum, on the resistance to his theory among the scientific establishment — the public scientists of any period whom I have dubbed the Panel of Peers. By any name they define the “consensus” of respectable science. At the beginning of the 20th century, said Planck, they balked at taking the “enormous step from the visible and directly controllable to the invisible sphere, from the macrocosm to the microcosm.”

But entry into the “microcosm” of the once-invisible world of atoms transformed all physical science. When it turned out early in the 20th century that the atom was not a “massy unbreakable particle,” as Isaac Newton had imagined, but a complex arena of quantum information, the classical physics of Newton began inexorably to break down. We are now at a similar point in the history of the sciences of life. The counterpoint to the atom in physics is the cell in biology. At the beginning of the 21st century it turns out that the biological cell is not a “simple lump of protoplasm” as long believed but a microcosmic processor of information and synthesizer of proteins at supercomputer speeds. As a result, the established biology of Darwinian materialism is breaking down as well.

No evolutionary theory can succeed without confronting the cell and the word. In each of the some 300 trillion cells in every human body, the words of life churn almost flawlessly through our flesh and nervous system at a speed that utterly dwarfs the data rates of all the world’s supercomputers. For example, just to assemble some 500 amino-acid units into each of the trillions of complex hemoglobin molecules that transfer oxygen from the lungs to bodily tissues takes a total of some 250 peta operations per second. (The word “peta” refers to the number ten to the 15th power, so this tiny process alone requires some 250 × 10^15 operations per second.)
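
A back-of-envelope version of that estimate can be reconstructed as follows (a sketch using rough physiological figures of my own choosing, not numbers taken from the essay): the body replaces red blood cells at roughly two million per second, each carries on the order of 270 million hemoglobin molecules, and each hemoglobin contains 574 amino-acid residues, so counting one “operation” per residue added lands in the same hundreds-of-petaoperations range.

    # Rough, illustrative inputs (order-of-magnitude physiology, not measured data).
    red_cells_per_second = 2e6       # new red blood cells made per second
    hemoglobin_per_cell  = 2.7e8     # hemoglobin molecules per red blood cell
    residues_per_hb      = 574       # amino-acid units in one hemoglobin tetramer

    ops_per_second = red_cells_per_second * hemoglobin_per_cell * residues_per_hb
    print(f"{ops_per_second:.1e} residue-assembly operations per second")
    # -> about 3.1e17, i.e. a few hundred peta operations per second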

Interpreting a DNA program and translating it through a code into a physical molecule, the cells collectively function at almost a thousand times the processing speed of IBM’s new state-of-the-art Blue Gene/L supercomputer. This information processing in one human body, for just one function, exceeds by some 25 percent the total computing power of the roughly 200 million personal computers produced worldwide every year.

Yet, confined as they are to informational functions, computer models stop after performing the initial steps of decoding the DNA and doing a digital-to-analog conversion of the information. The models do not begin to accomplish the other feats of the cell, beginning with the synthesis of protein molecules from a code, and then the exquisitely accurate folding of the proteins into the precise shape needed to fit them together in functional systems. This process of protein synthesis and “plectics” cannot even in principle be modeled on a computer. Yet it is essential to the translation of information into life.

WORRYING THE WORD
Within the Panel of Peers, the emergence of the cell as supercomputer precipitated a mostly unreported wave of consternation. Crick himself ultimately arrived at the theory of “directed panspermia,” in which he speculated that life was delivered to the earth from elsewhere in the cosmos, thus relegating the problems of creation to a realm beyond our reach. Sensing a crisis in his then exclusively materialist philosophy, neo-Darwinian Richard Dawkins of Oxford coined the word “meme” to incorporate information in biology, describing ideas as undergoing a Darwinian process of survival of the fittest. But in the end Dawkins’s memes are mere froth on the surface of a purely chemical tempest, fictive reflections of material reality rather than a governing level of information. The tongue still wags the mind.

These stratagems can be summed up as an effort to subdue the word by shrinking it into a physical function, whimsically reducing it to a contortion of the pharynx reflecting a firing of synapses following a mimetic emanation of matter from a random flux of quanta shaking physical atoms. Like the whirling tigers of the children’s fable, the recursive loops of names for the word chase their tails around the tree of life, until there is left at the bottom only a muddled pool of what C. S. Lewis called “nothing buttery.”

“Nothing buttery” was Lewis’s way of summing up the stance of public scientists who declared that “life” or the brain or the universe is “nothing but” matter in motion. As MIT’s Marvin Minsky famously asserted, “The brain is nothing but a ‘meat machine.’” In DNA (2003), Crick’s collaborator James Watson doggedly insisted that the discovery of DNA “proved” that life is nothing but, in his words, “merely chemistry and physics.” It is a flat-universe epistemology, restricted to what technologists call the “physical layer,” which is the lowest of seven layers of abstraction in information technology between silicon chips and silica fiber on the bottom and the programs and content at the top.

After 100 years or so of attempted philosophical leveling, however, it turns out that the universe is stubbornly hierarchical. It is a top-down “nested hierarchy,” in which the higher levels command more degrees of freedom than the levels below them, which they use and constrain. Thus, the higher levels can neither eclipse the lower levels nor be reduced to them. Resisted at every step across the range of reductive sciences, this realization is now inexorable. We know now that no accumulation of knowledge about chemistry and physics will yield the slightest insight into the origins of life or the processes of computation or the sources of consciousness or the nature of intelligence or the causes of economic growth. As the famed chemist Michael Polanyi pointed out in 1961, all these fields depend on chemical and physical processes, but are not defined by them. Operating farther up the hierarchy, biological macro-systems such as brains, minds, human beings, businesses, societies, and economies consist of intelligent agents that harness chemical and physical laws to higher purposes but are not reducible to lower entities or explicable by them.

Materialism generally and Darwinian reductionism, specifically, comprise thoughts that deny thought, and contradict themselves. As British biologist J. B. S. Haldane wrote in 1927, “If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose my beliefs are true . . . and hence I have no reason for supposing my brain to be composed of atoms.” Nobel-laureate biologist Max Delbrück (who was trained as a physicist) described the contradiction in an amusing epigram when he said that the neuroscientist’s effort to explain the brain as mere meat or matter “reminds me of nothing so much as Baron Munchausen’s attempt to extract himself from a swamp by pulling on his own hair.”

Analogous to such canonical self-denying sayings as “the Cretan says all Cretans are liars,” the paradox of the self-denying mind tends to stultify every field of knowledge and art that it touches, and it threatens to diminish this golden age of technology into a dark age of scientistic reductionism and, following in its trail, artistic and philosophical nihilism.

All right, have a tantrum. Hurl the magazine aside. Say that I am some insidious charlatan of “creation-lite,” or, God forfend, “intelligent design.” “In the beginning was the Word” is from a mystical passage in a verboten book, the Bible, which is not a scientific text. On your side in rebuffing such arguments is John E. Jones III of central Pennsylvania, the gullible federal judge who earlier this year made an obsequious play to the Panel of Peers with an attempted refutation of what has been termed “intelligent design.”

But intelligent design is merely a way of asserting a hierarchical cosmos. The writings of the leading exponents of the concept, such as the formidably learned Stephen Meyer and William Dembski (both of the Discovery Institute), steer clear of any assumption that the intelligence manifestly present in the universe is necessarily supernatural. The intelligence of human beings offers an “existence proof” of the possibility of intelligence and creativity fully within nature. The idea that there is no other intelligence in the universe in any other form is certainly less plausible than the idea that intelligence is part of the natural world and arises in many different ways. MIT physicist and quantum-computing pioneer Seth Lloyd has just published a scintillating book called Programming the Universe that sees intelligence everywhere emerging from quantum processes themselves — the universe as a quantum computer. Lloyd would vehemently shun any notion of intelligent design, but he posits the universe as pullulating with computed functions. It is not unfair to describe this ubiquitous intelligence as something of a Godlike force pervading the cosmos. God becomes psi, the “quantum wave function” of the universe.

All explorers on the frontiers of nature ultimately must confront the futility of banishing faith from science. From physics and neural science to psychology and sociology, from mathematics to economics, every scientific belief combines faith and facts in an inextricable weave. Climbing the epistemic hierarchy, all pursuers of truth necessarily reach a point where they cannot prove their most crucial assumptions.

IRREDUCIBLE
The hierarchical hypothesis itself, however, can be proven. Kurt Gödel, perhaps the preeminent mathematician of the 20th century and Einstein’s close colleague, accomplished the proof in 1931. He demonstrated in essence that every logical system rich enough to contain arithmetic, mathematics included, depends on premises that it cannot prove and that cannot be demonstrated within the system itself, or be reduced to it. Refuting the confident claims of Bertrand Russell, Alfred North Whitehead, and David Hilbert that it would be possible to subdue all mathematics to a mechanical unfolding of the rules of symbolic logic, Gödel’s proof was a climactic moment in modern thought.
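
Stated a little more precisely, in a standard textbook formulation rather than Gödel’s own words:

    \textbf{First incompleteness theorem.}\quad
    \text{If } T \text{ is a consistent, effectively axiomatized formal theory}
    \text{ containing elementary arithmetic, then there is a sentence } G_T
    \text{ in the language of } T \text{ such that}\quad
    T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .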

This saga of mathematical discovery has been beautifully expounded in a series of magisterial books and articles by David Berlinski, notably his intellectual autobiography Black Mischief (1986), The Advent of the Algorithm (2000), and Infinite Ascent: A Short History of Mathematics (2005). After contemplating the aporias of number theory in Black Mischief, he concluded, “It is the noble assumption of our own scientific culture that sooner or later everything might be explained: AIDS and the problems of astrophysics, the life cycle of the snail and the origins of the universe, the coming to be and the passing away. . . . Yet it is possible, too, that vast sections of our experience might be so very rich in information that they stay forever outside the scope of theory and remain simply what they are: unique, ineffable, insubsumable, irreducible.” And the irreducibility of mathematical axioms translates directly into a similar irreducibility of physics. As Caltech physicist and engineer Carver Mead, a guiding force in three generations of Silicon Valley technology, put it: “The simplest model of the galaxy is the galaxy.”

The irreducibility takes many forms and generates much confusion. Michael Behe, author of the classic Darwin’s Black Box (1996), shows that myriad phenomena in biology, such as the bacterial flagellum and the blood-clotting cascade, are “irreducibly complex” in the sense that they do not function unless all their components are present. It’s an all-or-nothing system incompatible with an evolutionary theory of slow, step-by-step incremental change. Behe’s claim of “irreducible complexity” is manifestly true, but it thrusts the debate into a morass of empirical biology, searching for transitional forms in the same way that paleontologists search for transitional fossils. Nothing definitive is found, but there are always enough molecules of smoke, or intriguing lumps of petrified stool or suggestive shards of bones or capsules of interesting gas, to persuade the gullible judge or professor that somewhere there was a flock of flying dragons or a whirling cellular rotaxane that fit the bill.

Mathematician Gregory Chaitin, however, has shown that biology is irreducibly complex in a more fundamental way: Physical and chemical laws contain far less information than biological phenomena. Chaitin’s algorithmic information theory demonstrates not that particular biological devices are irreducibly complex but that all biology as a field is irreducibly complex. It is above physics and chemistry on the epistemological ladder and cannot be subsumed under chemical and physical rules. It harnesses chemistry and physics to its own purposes. As chemist Arthur Robinson, for 15 years a Linus Pauling collaborator, puts it: “Using physics and chemistry to model biology is like using lego blocks to model the World Trade Center.” The instrument is simply too crude.
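
Chaitin’s measure can be stated compactly (a standard definition of algorithmic information, paraphrased rather than quoted from Chaitin): the information content of an object is the length of the shortest program that produces it on a fixed universal computer, and an object is irreducible when no program much shorter than the object itself will do.

    K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}, \qquad
    x \text{ is irreducible (incompressible) when } K_U(x) \,\ge\, |x| - c \text{ for small } c .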

Science gained its authority from the successes of technology. When Daniel Dennett of Tufts wants to offer unanswerable proof of the supremacy of science, he writes, “I have yet to meet a postmodern science critic who is afraid to fly in an airplane because he doesn’t trust the calculations of the thousands of aeronautical engineers and physicists that have demonstrated and exploited the principles of flight.” Dennett is right: Real science is practical and demonstrable, following the inspiration of Michael Faraday, Heinrich Hertz, Thomas Edison, William Shockley, Robert Noyce, Charles Townes, and Charles Kao — the people who built the machines of the modern age. If you can build something, you can understand it.

The Panel of Peers, however, is drifting away from these technological foundations, where you have to demonstrate what you invent — and now seeks to usurp the role of philosophers and theologians. When Oxford physicist David Deutsch, or Scientific American in a cover story, asserts the reality of infinite multiple parallel universes, it is a trespass far beyond the bounds of science into the realm of wildly speculative philosophy. The effort to explain the miracles of our incumbent universe by postulating an infinite array of other universes is perhaps the silliest stratagem in the history of science.

Darwin’s critics are sometimes accused of confusing methodological materialism with philosophical materialism, but this is in fact a characteristic error of Darwin’s advocates. Multiverse theory itself is based on a methodological device invented by Richard Feynman, one that “reifies” math and sees it as a physical reality. (It’s an instance of what Whitehead called “the fallacy of misplaced concreteness.”) Feynman proposed the mapping of electron paths by assuming the electron took all possible routes, and then calculating the interference patterns that result among their wave functions. This method was a great success. But despite some dabbling as a youth in many-worlds theory, Feynman in his prime was too shrewd to suggest that the electron actually took all the possible paths, let alone to accept the theory that these paths compounded into entire separate universes.
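
Feynman’s device, in its textbook form (standard notation, not anything quoted from the essay): the amplitude for a particle to travel from a to b is a sum over every conceivable path, each path weighted by a phase fixed by its classical action, and it is the interference of these weights that the method computes.

    \langle b \mid a \rangle \;=\; \int \mathcal{D}[x(t)]\, e^{\,i S[x(t)]/\hbar},
    \qquad S[x] \;=\; \int_{t_a}^{t_b} L\big(x(t), \dot{x}(t)\big)\, dt .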

Under the pressure of nothing buttery, though, scientists attempt to explain the exquisite hierarchies of life and knowledge through the flat workings of physics and chemistry alone. Information theory says this isn’t possible if there’s just one universe, and an earth that existed for only 400 million years before the emergence of cells. But if there are infinite numbers of universes all randomly tossing the dice, absolutely anything is possible. The Peers perform a prestidigitatory shuffle of the cosmoses and place themselves, by the “anthropic principle,” in a privileged universe where life prevails on Darwinian terms. The Peers save the random mutations of nothing buttery by rendering all science arbitrary and stochastic.

Science still falls far short of developing satisfactory explanations of many crucial phenomena, such as human consciousness, the Big Bang, the superluminal quantum entanglement of photons across huge distances, even the bioenergetics of the brain of a fly in eluding the swatter. The more we learn about the universe the more wide-open the horizons of mystery. The pretense that Darwinian evolution is a complete theory of life is a huge distraction from the limits and language, the rigor and grandeur, of real scientific discovery. Observes Nobel-laureate physicist Robert Laughlin of Stanford: “The Darwinian theory has become an all-purpose obstacle to thought rather than an enabler of scientific advance.”

In the 21st century, the word — by any name — is primary. Just as in Crick’s Central Dogma ordaining the precedence of DNA over proteins, however, the word itself is not the summit of the hierarchy. Everywhere we encounter information, it does not bubble up from a random flux or prebiotic soup. It comes from mind. Taking the hierarchy beyond the word, the central dogma of intelligent design ordains that word is subordinate to mind. Mind can generate and lend meaning to words, but words in themselves cannot generate mind or intelligence.

Retorts the molecular biologist: Surely the information in DNA generates mind all the time, when it gives the instructions to map the amino acids into the cells of the brain? Here, however, intercedes the central dogma of the theory of intelligent design, which bars all “magical” proteins that morph into data, all “uppity” atoms transfigured as bits, all “miracles” of upstream influence. DNA can inform the creation of a brain, but a brain as an aggregation of proteins cannot generate the information in DNA. Wherever there is information, there is a preceding intelligence.

At the dawn of information theory in 1948, MIT cybernetician and Shannon rival Norbert Wiener defined the new crisis of materialism: “The mechanical brain does not secrete thought ‘as the liver does bile,’ as the earlier materialists claimed, nor does it put it out in the form of energy as the muscle puts out its activity. Information is information, not matter or energy. No materialism that does not admit this can survive at the present day.”

This constraint on the Munchausen men of the materialist superstition is a hard truth, but it is a truth nonetheless. The hierarchies of life do not stop at the word, or at the brain. The universe of knowledge does not close down to a molecular point. It opens up infinitely in all directions. Superior even to the word are the mind and the meaning, the will and the way. Intelligent people bow their heads before this higher power, which still remains inexorably beyond the reach of science.

Throughout the history of human thought, it has been convenient and inspirational to designate the summit of the hierarchy as God. While it is not necessary for science to use this term, it is important for scientists to grasp the hierarchical reality it signifies. Transcending its materialist trap, science must look up from the ever dimmer reaches of its Darwinian pit and cast its imagination toward the word and its sources: idea and meaning, mind and mystery, the will and the way. It must eschew reductionism — except as a methodological tool — and adopt an aspirational imagination. Though this new aim may seem blinding at first, it is ultimately redemptive because it is the only way that science can ever hope to solve the grand challenge problems before it, such as gravity, entanglement, quantum computing, time, space, mass, and mind. Accepting hierarchy, the explorer embarks on an adventure that leads to an ever deeper understanding of life and consciousness, cosmos and creation.

Mr. Gilder is editor-in-chief of Gilder Technology Report and co-founder of the Discovery Institute. His most recent book, The Silicon Eye, was a finalist for the Royal Society’s Aventis Prize for science. 