Age of Spiritual Machines: ‘A Conversation About the Future of Computers with Ray Kurzweil’

In the year 2020 AD, computers will exceed the memory capacity and computational ability of the human brain. Mankind will foster relationships with automated personalities and ultimately, the distinction between man and machine will vanish altogether. This is the world of cyborgs, downloaded personalities and spiritual machines. It is the world of Ray Kurzweil, but he claims that in the near future it will be our world too. The author unfolds his prophetic blueprint in clear chronological order in his latest work, “The Age of Spiritual Machines: When Computers Exceed Human Intelligence”. Are you ready?

Introduction

Question: What kind of murderer has fiber?


Answer: A cereal killer.

All right, so it’s not the greatest joke you ever heard. It would probably get you hooted off the stage during open-mike night at the local comedy club. Still, it’s not too bad when you consider that the writer of this gag has a brain about a million times simpler than your own.

And I’m not talking about one of your in-laws. The author of the tasteless pun above is a computer program called JAPE (Joke Analysis and Production Engine), and it’s cited in a new book by computer whiz Ray Kurzweil as reassuring proof that computers are still far from surpassing human beings in the higher and subtler capacities of intelligence — like intuition, art, and humor. But in as little as thirty years we may face real competition in all aspects of consciousness from astonishingly clever and increasingly humanlike machines.

Or so says Kurzweil, who has a fairly impressive record of prognostication when it comes to the rapid evolution of computer intelligence. In his 1990 book The Age of Intelligent Machines, Kurzweil predicted that a computer would defeat a world chess master by 1998 (it happened in 1997). He also predicted the emergence of “a worldwide information network linking almost all organizations and tens of millions of individuals” within the decade. (The World Wide Web emerged in 1994 and was hot stuff by 1996). Kurzweil suggested that the majority of commercial music in the 90s would be produced by synthesizer, and that has come to pass as well.

Of course, the prognosticator had an inside track on that last development because he had more than a little to do with it; the name “Kurzweil Music Systems” is almost synonymous with computer-based music. An exceptionally productive inventor and entrepreneur, Kurzweil can claim many technical landmarks, including the first computer music keyboard capable of reproducing orchestral instruments, the first large-vocabulary speech-recognition system, the first text-to-speech synthesizer, and the first print-to-speech reading machine for the blind. The last invention gained him the lasting friendship and professional collaboration of musician Stevie Wonder as well as the 1998 Stevie Wonder Vision Award. A graduate of MIT and recipient of its 1988 award for Inventor of the Year, Kurzweil also holds nine honorary doctorates from leading colleges and universities. Kurzweil has started and sold successful companies at almost the same clip that he’s been inventing clever machines; his corporate portfolio includes Kurzweil Computer Products, Kurzweil Music Systems, Kurzweil Applied Intelligence, Kurzweil Educational Systems, and Kurzweil Technologies.

All of which adds credence to the astonishing predictions that Ray Kurzweil is making in his latest book, The Age of Spiritual Machines (Viking). First among them is that within two decades there will be computers whose ability to reason, make decisions, and hold intelligent conversations is equal to that of a human being. Within thirty years, many computers will convincingly claim to be conscious and self-aware, and not long after they will be reporting and discussing spiritual experiences that they have had on their own — not just as a result of programming by human engineers.

But how can all of this occur within a few decades when computers presently read, talk, and tell jokes at a level that could politely be described as imbecilic? The answer, says Kurzweil, lies in the exponential growth of computer intelligence. “Computers are about one hundred million times more powerful for the same unit cost than they were a half century ago,” he writes. “If the automobile industry had made as much progress in the past fifty years, a car today would cost a hundredth of a cent and go faster than the speed of light.”
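Kurzweil's hundred-million-fold figure implies a steady doubling rate, which a few lines of arithmetic can check (a sketch; the fifty-year window and growth factor are his round numbers):

```python
import math

# Kurzweil's figure: computers are ~100 million times more powerful
# per unit cost than fifty years ago. How often must computing power
# double for that to hold?
growth_factor = 1e8
years = 50

doublings = math.log2(growth_factor)        # ~26.6 doublings
years_per_doubling = years / doublings      # ~1.9 years

print(f"{doublings:.1f} doublings -> one every {years_per_doubling:.1f} years")
```

A doubling roughly every two years is consistent with the historical trend usually cited as Moore's law, which is the empirical engine behind these projections.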

Computers can already process information for specific purposes much faster than the human brain. But because their circuits and chips operate in only two dimensions, what our quasi-intelligent machines lack is a three-dimensional computing environment that can make as many interconnections as the brain’s 100 billion neurons. Research and development of three-dimensional computing “cubes” that will eventually supplant chips is well underway. As the cubes get smaller and more powerful, allowing more and more interconnections within and between them, the brain will eventually lose its processing edge.

“With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation. That’s a key to the strength of human thinking,” explains Kurzweil. But it’s not a strength that will remain superior to machine intelligence for very long. According to Kurzweil “it is reasonable to estimate that a $1,000 personal computer will match the computing speed and capacity of the human brain by around the year 2020.”

Whether we like it or not, that computer of the near future will usher in a “brave new world” requiring the resolution of philosophical and ethical quandaries that we have not even begun to ponder yet. Foremost among them will be the questions of whether machines that can tell us they are intelligent, self-aware, and fully conscious entities really are such beings — and if not, what are they exactly? In the following conversation with Ray Kurzweil, he takes on a few such questions about our increasingly intelligent machines. His answers suggest that the stuff of science fiction in the 20th century may become the everyday facts of life in the 21st.

The Conversation

Intuition has been described as “knowing more than you know” — or the capacity to make unexpected but useful connections between different fields of knowledge. In these terms, can we soon expect the development of truly intuitive computers?

Kurzweil: My field is pattern recognition, the part of artificial intelligence where we try to build computers that can recognize patterns. Pattern recognition is a large part of intuition. It’s now known that the human brain principally uses pattern recognition as its means of cognition. At least 90% of our brain is devoted to recognizing patterns, not to making logical decisions.

For instance, that’s how a chessmaster plays chess, in contrast to the primary computer method of playing chess. Chess computers exhaustively analyze every move-countermove situation, and build up a broad tree of all the move-countermove possibilities. A human being doesn’t have time to think through billions of possible sequences in the minute or so one has to make a chess move, so what the chessmaster does instead is use his or her pattern recognition capabilities to recognize a pattern that’s similar to one used previously. Top-ranked players have mastered an estimated fifty to one hundred thousand board positions, and have thought through them during a long career.

In our work in computers, a similar approach has proved the most fruitful. It’s not been productive to try to program explicitly all possible patterns that might be anticipated. For instance, in speech recognition, to try to define exactly what an e-phoneme or a p-plosive are, and how they might be put together in words, is too complicated. Instead we set up self-organizing methods to train the computer system the same way a human being would learn — by exposing it to millions of examples of the situations it will eventually have to deal with, and letting it evolve its own methods and rules based on the fundamental programming of self-organization. For example, we’ll have the machine listen to thousands of hours of recorded speech and try to evolve by itself an understanding of what the different sounds are, what they look like, and how they relate to each other.

Another example is the use of genetic or evolutionary algorithms for stock-market investing. Here we create a million little programs, each of which has a set of rules for making investments. These programs compete with each other, and the best ones survive into the next generation. They then spawn offspring by combining the “genetic code” or programming of different parents, just as organisms do through sexual reproduction. We run this process for thousands of generations of simulated evolution. Ultimately it evolves better and better strategies, including many that a human being would never anticipate. So it begins to feel like the machine has intuition, because it comes up with some subtle arbitrage opportunities and formulas that a human broker might never notice.

What’s required to do this is a very rich and extensive database of the right information. For the stock market, we have every second of information about trading for the last several decades online, so that very rich database already exists. It’s more difficult in fields like speech recognition, where we have to create such databases.

As we go into the next century these systems won’t be so narrowly constructed, doing only speech recognition or making stock market decisions. We’ll be developing systems that combine multiple fields of expertise the way human beings do. Thus they should be increasingly able to develop unexpected but useful connections between different fields of knowledge.

So the brain primarily works by a sophisticated process of pattern recognition, comparable to what you’re gradually developing in computers?

Human beings have an ability for logical, sequential thinking, but it seems to be a rather recent evolutionary development. We fool ourselves if we think logic comprises the bulk of human thinking — only an estimated hundred thousand neurons are devoted to logic, less than one percent of our brain. So most of our brain is indeed dedicated to self-organizing methods of pattern recognition.

Most people limit the kinds of connections they’re willing to make through a wide variety of self-imposed censors, and thus we prevent ourselves from having as much intuition as we’re capable of. The brain has the capacity to be constantly evolving and experimenting with different connections, the vast majority of which are nonsensical. But if we don’t censor all our connections beforehand, it is possible to come up with powerful new connections that do make sense.

From my own experience with dream study, I realized over time that total dream recall was not very useful to me, since at least 85% of what I remembered and wrote down was nonsensical or useless. But the highlights of the remaining dream material were very significant.

If you’re getting ten or fifteen percent useful material, that’s great. Most irrational connections are not useful. Most of the art that humans produce is not very good either, but that’s not important. It’s the successful experiments of the mind that matter, and most people are afraid to experiment enough to do something that succeeds.

I do think dreams are very useful because they represent a process where we are experimenting with new connections and allowing a review of internal and external experiences in a way that relates to other knowledge that we have. Certain censors are relaxed in dreams that would normally prevent us from making connections when we’re awake. I actually use dreams as a creative problem-solving discipline, whether I’m trying to figure out a business decision, or how to handle another person, or how to write something. I’ll try to frame the problem in my mind before I go to sleep. In the morning the answer is often just there.

Will computers ever develop a subconscious, or have any need for one?

This raises the nettlesome issue of consciousness in computers. Ultimately I feel that it’s not really possible to penetrate the subjective state of another entity. Each of us assumes that humans other than ourselves are conscious, although we don’t really have any direct proof of that beyond others’ claims to be conscious and behavior that suggests they are. It seems a philosophical absurdity to consider otherwise.

But it becomes a compelling concern if we talk about machines that will claim to be conscious, and will make a convincing case for their claim. We haven’t encountered machines like that yet, although we do have virtual personalities in games who will talk to you and could even tell you they’re conscious — that they’re feeling lonely or whatever. But these claims would not be compelling. Machine personalities are not yet sophisticated enough to be convincing, and that’s because even the most advanced computers are about a million times simpler than the human brain. That makes a real difference in the subtlety and depth of these machine personalities.

The primary projection I’m making in the book is that early in the 21st century we will see the emergence of non-organic entities, human-created technology, that will not only claim to be conscious and have feelings, but will do so convincingly enough that many people will believe them. That’s not the same as predicting the actual emergence of conscious machines, however. I’m saying only that there will be machines that seem as if they are conscious — and there the difficulty of penetrating the subjective experience of another entity becomes a very important issue.

We run into that issue today with animal consciousness. There’s genuine disagreement among humans whether or not animals — or which animals — may be conscious. Perhaps most people, including myself, believe that higher animals have some level of consciousness, a conclusion based on human empathetic qualities. When the animal acts in a way that reminds us of human emotions, we assume they’re experiencing those emotions and are therefore conscious. But others believe that even the highest animals other than man are acting purely on coded instinct, that is, having a machinelike response to stimuli.

In this view, there’s “nobody home” in these animals’ minds. This disagreement underlies the whole controversy of animal rights, and it’s really difficult to resolve. You can’t actually ask the animal and get a response in human language — at best you’ll get observable reactions in behavior that are still open to differing interpretations. We can examine animal brains and find structures or activities similar to human brains, but that evidence is only suggestive.

In the book I predict the inevitable development of the ability to copy the human brain into a computer. The brain is complex but it’s not infinitely complex, and eventually we’ll be able to scan and download it into a neural computer of sufficient capacity. I think that will be possible within thirty years. What’s likely to happen then is that a person will appear to emerge within a computer. Unlike today’s simulated personalities, this one will have the same complexities and depth of a human being because it will be a copy of a specific human being.

If you scanned and reinstantiated my brain, it would say “I grew up in Queens, I went to MIT, and one day I walked into a scanner, downloaded my brain, then woke up here in this machine. This technology really works because this is really me — Ray Kurzweil.” Since you’ve recreated my brain it’s going to have my memory and my experiences and will naturally lay claim to being me.

But is that really me, especially if the original me is still here? What if the experimenters say they’ve got me in a neural computer that’s more advanced than my own brain, so they can dispense with the old me? I wouldn’t be so comfortable with that. Are they murdering me if I’m still “alive” and conscious as a new entity in the computer? And is the new entity actually conscious, or just a machine acting very convincingly as if it’s conscious? What is the real difference?

Some people will say the difference is that the new entity has no biological basis — that without a body and biochemical processes the entity is not conscious at all, just a zombie who believes it’s humanly conscious. These are all considerations that may seem philosophical now, but that will very soon be of utmost practical significance to us all.

One difference between us and computers is that we don’t just sit and passively receive programming. We go out into the world and independently perceive what’s going on. How far are we from computers that will be able to perceive the world, and respond to it, on their own?

We’re doing that today in limited domains. We can use a genetic algorithm to tell a computer to go out onto the Web and examine all the data in a certain field, like stock trades. Now that’s a very narrow slice of human experience, but within that narrow slice the computer goes out and explores without being given any rules, being allowed to evolve its own insights. I think that by 2015 to 2020 computers will literally be going out into the world, with their own physical sense organs, and electronically onto the Web, which will be a much richer information source than it is today — really an arena of virtual realities. These computers will respond to the world and to virtual realities independently and come up with their own insights, based on genetic algorithms but drawing on a much broader range of experiences and data than computers can do today.

A key hurdle to this development is that computers must have a much broader mastery of human language. Today computers can master fairly simple language and respond to vocal commands, but they can’t read a short story and write a synopsis, or understand the subtleties of literature. These capacities will emerge within twenty years.

Psychology writer Dan Goleman has observed that much of the brain’s energy is involved in screening out information that our senses gather. Otherwise we’d be swamped with sensory input. The problem is that we often screen information through our biases and prejudices. When computers can perceive independently, will they be free of human-style prejudice?

Intelligence necessarily includes the destruction of information. We do get a torrent of information — a million bits per second through our ears, hundreds of millions of bits per second through our eyes, and so on. We can’t keep all that information around, so we have to abstract and understand it in a meaningful way.

Again, we do that through pattern recognition. When we see print on a page, we recognize it as print without having to decode every bit of visual information. We’re constantly boiling down a massive amount of data into a much smaller amount that’s meaningful, and to do that we destroy an enormous amount of data. That is, in fact, the process of intelligence.

Another form of data destruction is done by the elaborate censors mentioned earlier: internal censors that human beings use to prevent certain thoughts or insights from occurring. These censors can serve useful purposes, but they can also be a barrier to creativity. Specialists in any field learn a lot of rules that are necessary to their expertise, but the rules can restrict experimentation and innovation. That’s why it’s good for human beings to work outside their disciplines from time to time, to stretch our customary rules and boundaries.

Whether computers censor information according to bias will depend on how we build them. For example, there have been systems built to make real-estate loan approval decisions. There are several ways to approach that kind of programming. One is to assume that human beings know what they’re doing, and build a system that watches how real estate brokers and bankers make decisions. What was learned was that these systems would discover and repeat the prejudices of human brokers, coming up with rules like “Don’t lend money to people who live in certain areas” — areas characterized by low income or a high minority population, for example. Of course these weren’t rules written down for anyone to follow; they would be illegal. But by observation the computer would discover that such rules were in fact driving decision-making in this field.

Or you could set up a system that ignored the real decisions of human beings in a field, and only relied on objective data about payment records and so forth. Such a system would likely come up with a very different set of rules. Current stock-market programs are fairly objective in the sense that they’re not driven by fears, prejudices, or brokers’ hunches and so forth. These systems observe trading data only.

When we copy human brains to create intelligent systems, we are likely to copy the capacity for irrational prejudices, and then we’d have to program to correct those prejudices. On the other hand, some of our unspoken and inherent rules are valuable because they reflect the collective wisdom of human experience. We’re not always conscious of what such rules are, but they’re buried in our memories and decision-making patterns. We have to be careful not to throw out the baby with the bathwater when we refine programming.

Will computers ever decide that we’re just too prejudiced or dim-witted, and that they could get along perfectly well without us?

I tend to be more optimistic than that. My prediction is that we will increasingly merge with our technology. In the next century there’s not going to be a clear separation between humans and machines. We’re not looking at an invasion of intelligent machines, but rather an emergence of a human-machine civilization. We’re already integrating with our computers, of course. Without them civilization would grind to a halt, and that wasn’t true just twenty-five to thirty years ago.

So we’re already dependent on our machines, and our interconnections are going to grow immeasurably. We’ve begun to put neural implants into our brains to ease certain disabilities, including Parkinson’s disease and cerebral palsy. We have cochlear implants that enable deaf people to talk on the telephone, and there’s an experimental visual implant being tested right now. There was recently a report of an implant placed in a profoundly physically handicapped individual that enabled him to communicate with his computer and thus gain much more control over his environment.

We’re at a very early stage with neural implants, but within thirty years we’ll all be using them to improve memory, enhance perceptions, and increase creativity. That’s a real merger of the human and machine worlds. The reverse-engineering of human brains into computers will create machines that think they’re human, and conversely there will be many people doing a lot of thinking outside their own brains. There’s not going to be a clear distinction between humans and machines, so I don’t think there will be a question of machines deciding what to do with us.

Your book title suggests that some form of spirituality will arise in sufficiently sophisticated machines. But it could be argued that the signal elements of spirituality — such as compassion, forgiveness, and transcendent experience — do not arise simply from advanced intelligence. What exactly do you mean by spirituality, and how will we see its expression in machines?

The deeper emotions like compassion are not unrelated to cognitive processes. They are in fact very sophisticated and subtle methods of processing information. Our emotions serve a very useful purpose as a way of organizing our lives, and they are inextricable from the force that we call intelligence. All emotions are inevitable byproducts of the complexity that is human nature. So if we have machines that are of the same complexity, they will necessarily have attributes and qualities that are akin to human emotions. Our machines will be based on us, and we’ll want them to relate to us.

This brings us back to the issue of determining consciousness. Machines will begin to claim spiritual experiences of their own, but how will we know if they’re authentic? How do we know now if each other’s spiritual experiences are authentic, except as we choose to believe what people report? The point is that future machines will convincingly tell us about their transcendent and spiritual experiences — maybe even that they’ve become enlightened. We’ll have to decide what to make of those reports. Keep in mind that these machines will increasingly have humanlike bodies as well, so they’ll be even more convincing.

Is there an electronic or computer equivalent of meditation? Meditators go through changes in their electrical states, evidenced by the appearance of measurable alpha rhythms and so on. Will this happen to advanced computers as well?

The brain has a number of ways of organizing its behavior, and we’re only beginning to understand these methods. Neurons appear to operate asynchronously, that is, independently of each other. But they are influenced by certain cycles or rhythms that seem to play a role in helping neurons coordinate their activities. Different activities show characteristic rhythms of electrical activity.

Some of the computers we develop will be based on similar principles, particularly when we base computers on reverse-engineering of the brain. A key attribute of the brain that is markedly different from contemporary computers is that so many things happen simultaneously — alpha, beta, and delta rhythms all going on at once while a hundred trillion neuronal connections are potentially computing at the same time. Most of this is going on beyond our conscious awareness, of course. Computers are not built that way yet. They only do one thing at a time, although they can do one thing at a time so quickly that they can appear to be doing several things at once. At any rate the method of organization is quite different. A major trend over the next thirty years will be the development of computers that operate more like the brain, in terms of doing many things simultaneously.

What are you working on now, and what lies immediately ahead in computer intelligence?

I’m still working on speech recognition and natural language processing, as well as the creation of reading systems for people with learning disabilities and visual impairment. One area of new research is the application of evolutionary algorithms to stock-market decisions. About 5% of the market, or half a trillion dollars, is now controlled by neural nets and genetic algorithms, and that proportion is rapidly increasing. This is going to be a very hot area for the near future.

I’m also working with the American Board of Family Practice to develop a Patient Simulator that will be of use both to doctors and laypeople. In the near future you’ll be able to plug in a simulated patient with your physical characteristics, and experiment with different medical approaches to particular problems like diabetes. You’ll be able to “play doctor” in a very educational and useful way. The most sophisticated versions will be used in medical schools where students will be able to interview simulated patients in natural language, and treat them experimentally — even simulate the passage of time and see how the patient improves or gets worse with age.

– A Closing Synapse –

Ray Kurzweil was the principal developer of the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech-recognition system. Ray has successfully founded, developed, and sold four AI businesses in OCR, music synthesis, speech recognition, and reading technology. All of these technologies continue today as market leaders.

You can find out more about Ray Kurzweil’s work
at the Kurzweil Technologies website.

A slightly different version of this interview appeared as the cover story of the April 1999 issue of Intuition Magazine.

Author: D. PATRICK MILLER

News Service: Fearless Reader

URL: http://www.fearlessbooks.com/FeatureLine12.html
