There is a fundamental bifurcation in the human animal between its brain and the rest of its organism. The brain, unlike the rest of our being, is not tied to the real world. The brain can envision what the rest of our human organism can never experience. This ability to abstract has been both the source of immense human accomplishment and immense human destruction. Our brains have created myths which many of us believe with the same conviction that we believe the sun will rise tomorrow, if not more so, and we are willing to kill other human beings who do not share those beliefs. That is how tenacious our brain's constructs can be.
That same human brain, using that same capacity to abstract and by carefully observing the world around it, has constructed elaborate systems that allowed it to place our species on the moon. For most of my adult life this bifurcation has been fundamental to my understanding of human behavior and society. For example, it led me to view human life as essentially tragic, because we could conceive more than we could ever be.
However, Ray Kurzweil's latest book, The Singularity Is Near: When Humans Transcend Biology, argues that the brain is breaking its shackles, and that it will probably have done so within the first fifty years of this century, certainly by the end of it. (The term "Singularity" in the title refers to the point at which machine intelligence exceeds human intelligence. The term is borrowed from physics, where it refers to the point inside a black hole at which gravity is so intense that space-time itself breaks down. Nothing is the same afterwards.) The driving force behind this development is the relentless exponential increase in the reach and power of technology, especially in artificial intelligence, biotechnology and computing capacity. Kurzweil and other scientists, applying Moore's law (the power of computers doubles about every two years) to the whole of technology, see these technologies doubling their capacities every 24 to 30 months. For example, in 1997 IBM's Deep Blue computer, designed to play chess, defeated world champion Garry Kasparov, who five years earlier had scoffed at the idea that a computer could do this. In 2011 IBM's Watson computer defeated the two most successful contestants in the history of the quiz show Jeopardy!. Unlike Deep Blue, Watson was not designed for a single, narrowly defined game: in Jeopardy! any question can be asked. It won its match. In 14 years this form of computing had advanced from a single-game machine to one that had to deal with every possible question, decide whether or not to answer, and, if it did answer, be reasonably sure it was correct. (In this game there is a significant penalty for a wrong answer.) The real purpose of Watson is to quickly extract reliable information from very large, complex databases.
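To put rough numbers on that doubling claim (my own back-of-the-envelope arithmetic, not a figure taken from the book): a capacity that doubles every two years grows by a factor of 2 raised to the power of t/2, where t is the number of years. Over the fourteen years that separate Deep Blue from Watson, that compounds to about 2 to the 7th power, roughly 128 times the starting capacity; stretch the doubling period to 30 months and the factor is still about 2 to the 5.6th, or roughly fifty-fold.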
As the power of artificial intelligence increases at an exponential rate, so do global warming and human population growth. All three are expected to reach some sort of threshold in this century. Understanding the full implications of this convergence is, to put it mildly, mind-boggling, and may require Watson generation 3 or 4 to provide that understanding.
The joker in all this is, of course, that humans are designing these initial iterations. Will they build in their customary human aggression and fear reactions, and will this machine intelligence be devoted to military and other human-destructive uses? Kurzweil serves on the five-member Army Science Advisory Board. The military, in an effort to minimize battlefield casualties, is relying increasingly on robots. A major question is the degree to which those robots should be made autonomous, independent of human control. Technology, it should constantly be kept in mind, is a two-edged sword. Kurzweil, however, believes wars are becoming less destructive, especially of human life, as a result of advanced technology. He compares the war technology of World War II, e.g. carpet bombing, with its millions of casualties, to that of the wars in Iraq and Afghanistan. I find this dubious, especially when one looks ahead to the destructive potential of such biological-warfare weapons as nano-sized viruses.
Yet in spite of all this, and there is far more to it than I have mentioned, if Kurzweil and other technologically savvy academics and innovators are right about an imminent explosion of artificial intelligence, then the run-up to it, as well as its consequences, will mark one of the most dramatic shifts in human existence. Some compare it to the introduction of agriculture and language.
One of the useful things one can do is look for signs of the run-up to the Singularity. For example, there is a current controversy over whether astronomy or atomic clocks should be the basis for measuring time. The problem is that atomic time and time measured by the rotation of the Earth are drifting increasingly out of sync. The atomic source is more accurate, but all of human behavior has been molded, from the very beginning of evolution, by astronomical time. How much of what we are as humans, of our institutions and our accounting for ourselves, is tied to astronomical time? Is one more link between man and his environment being broken?
The same issue, without the apparent conflict, can be seen in a video clip in which owners of Sony's robot dog Aibo gathered to discuss their dogs' latest tricks and other behaviors. Their emotions and attachments to their dogs were obviously the same as if the dogs had been real. Simulation is one of the technologies that will smooth the passage to and through the Singularity, as humans identify ever more closely with their creations.
The conflict between human values and human technology is not new. A hundred years ago John Galsworthy published his story of a bootmaker, known for the high quality of his boots, losing out to factory-made boots of lesser quality. That quality, i.e. human care and competence, no longer mattered; this was incomprehensible and devastating to the bootmaker.
My stepfather was born in a covered wagon heading to Canada where his father had heard farming opportunities were better than in his native Iowa. He lived to see a man on the moon. I have often asked myself, given the increasing rate of technological change, what change I might see that could approach such a dramatic difference. I think I may be seeing the initial stages of that change in which the human brain creates an intelligence vastly superior to itself and must live with the consequences.
This rapid coalescing of major global-scale trends, i.e. global warming, population increase and the shift of intelligence to machines, portends a future of enormous complexity. Will human beings finally realize that intelligence is the only tool they have for dealing with this complexity and can this be done without surrendering their humanity? Will the bifurcation in man's brain become the bifurcation in his future?
Bob Newhard