About The Author

Ray Kurzweil

New York Times best-selling author of The Singularity Is Near

Ray Kurzweil is one of the world's leading inventors, thinkers, and futurists, with a thirty-year track record of accurate predictions. Called "the restless genius" by The Wall Street Journal and "the ultimate thinking machine" by Forbes magazine, Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the "rightful heir to Thomas Edison." PBS selected him as one of the "sixteen revolutionaries who made America."


Praise for How to Create a Mind

"It is rare to find a book that offers unique and inspiring content on every page. How to Create a Mind achieves that and more. Ray has a way of tackling seemingly overwhelming challenges with an army of reason, in the end convincing the reader that it is within our reach to create nonbiological intelligence that will soar past our own. This is a visionary work that is also accessible and entertaining."

– RAFAEL REIF,

President of MIT, MIT Maseeh Professor of Emerging Technology, former MIT provost, and former head of the Department of Electrical Engineering and Computer Science (EECS), MIT's largest academic department.

"Kurzweil’s new book on the mind is magnificent, timely, and solidly argued!! His best so far!"

– MARVIN MINSKY,

MIT Toshiba Professor of Media Arts and Sciences, Cofounder of the MIT Artificial Intelligence Lab, widely regarded as the "father of artificial intelligence."

"If you have ever wondered about how your mind works, read this book. Kurzweil’s insights reveal key secrets underlying human thought and our ability to recreate it. This is an eloquent and thought-provoking work."

– DEAN KAMEN,

Physicist and inventor of the first wearable insulin pump, the HomeChoice portable dialysis machine, and the IBOT mobility system, and founder of FIRST; recipient of the National Medal of Technology.

"One of the eminent AI pioneers, Ray Kurzweil, has created a new book to explain the true nature of intelligence, both biological and non-biological. The book describes the human brain as a machine that can understand hierarchical concepts ranging from the form of a chair to the nature of humor. His important insights emphasize the key role of learning, both in the brain and in AI. He provides a credible road map for achieving the goal of superhuman intelligence, which will be necessary to solve the grand challenges of humanity."

– RAJ REDDY,

Founding director, Robotics Institute, Carnegie Mellon University, recipient of the Turing Award from the Association for Computing Machinery.

"…Ray's new book is a clear and compelling overview of the progress, especially in learning, that is enabling this revolution in the technologies of intelligence. It also offers important insights into a future in which we will begin solving what I believe is the greatest problem in science and technology today: the problem of how the brain works and of how it generates intelligence."

– TOMASO POGGIO,

Eugene McDermott Professor in the MIT Department of Brain & Cognitive Sciences, director of the MIT Center for Biological & Computational Learning, former chair of the MIT McGovern Institute for Brain Research, and one of the most cited neuroscientists in the world.

"This book is a Rosetta Stone for the mystery of human thought. Even more remarkably, it is a blueprint for creating artificial consciousness that is as persuasive and emotional as our own. Kurzweil deals with the subject of consciousness better than anyone from Blackmore to Dennett. His persuasive thought experiment is of Einstein quality: it forces recognition of the truth."

– MARTINE ROTHBLATT,

Chairman & CEO of United Therapeutics and creator of Sirius XM Satellite Radio.

"Kurzweil’s book is a shining example of his prodigious ability to synthesize ideas from disparate domains and explain them to readers in simple, elegant language. Just as Chanute’s Progress in Flying Machines ushered in the era of aviation over a century ago, this book is the harbinger of the coming revolution in artificial intelligence that will fulfill Kurzweil’s own prophecies about it."

– DILEEP GEORGE,

AI scientist, pioneer of hierarchical models of the neocortex, and co-founder of Numenta and Vicarious Systems.

"Ray Kurzweil’s understanding of the brain and artificial intelligence will dramatically impact every aspect of our lives, every industry on Earth, and how we think about our future. If you care about any of these, READ this book!"

– PETER H. DIAMANDIS,

Chairman and CEO of the X PRIZE Foundation, executive chairman of Singularity University, and author of the New York Times best seller Abundance: The Future Is Better Than You Think.


Q&A - Author Self Interview

So how do you create a mind?

Although the human brain uses biochemical methods rather than electronic ones, it still processes information. If we can understand its algorithms, we will be able to recreate its techniques in a computer of sufficient capacity.

So the human brain and a computer can be equivalent?

Based on Turing's principle of computational equivalence, a computer can match the performance of a human brain if it has sufficient speed and memory as well as the right software.

How are we doing on capacity?

In terms of hardware capability, what we are concerned about is functional equivalence; in other words, what computational speed and memory are needed to match human performance? In The Singularity Is Near, I derived the speed figure to be approximately 10^14 (a hundred trillion) calculations per second, but used a range of 10^14 to 10^16 cps to be conservative. AI expert Hans Moravec's estimate at the time, based on extrapolating the amount of computation needed to emulate a region of visual neural processing that had already been successfully recreated, was 10^14 cps. In How to Create a Mind, I again derive this figure based on the latest neuroscience research and the model of human thinking that I present. The result is again 10^14 cps. The fastest supercomputer today—IBM's Sequoia Blue Gene/Q computer—provides about 10^16 cps. A routine desktop machine can reach about 10^10 cps, but this can be significantly increased by using specialized chips or cloud resources. Given the ongoing exponential growth inherent in my "law of accelerating returns," personal computers will routinely achieve 10^14 cps well before the end of this decade.
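
A minimal sketch of the arithmetic behind that projection: the 10^10 and 10^14 figures come from the discussion above, while the doubling times for price-performance are illustrative assumptions, not figures from the book.

```python
# Sketch: how long until a routine personal computer reaches 10^14 cps,
# under assumed doubling times for price-performance. The starting and
# target figures come from the text; the doubling times are illustrative.
import math

current_cps = 1e10  # routine desktop machine today, per the text
target_cps = 1e14   # functional-equivalence estimate, per the text

doublings_needed = math.log2(target_cps / current_cps)  # ~13.3 doublings

for doubling_time_years in (1.0, 1.5):
    years = doublings_needed * doubling_time_years
    print(f"doubling every {doubling_time_years:g} yr: "
          f"~{years:.0f} years to 10^14 cps")

# Specialized chips and cloud resources raise the effective starting
# point well above 1e10 cps, which is what pulls the date closer.
```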

We are even further along in meeting the memory requirement. In How to Create a Mind, I estimate the requirement at about 20 billion bytes, a number we can readily achieve in personal computers today.

Isn't Moore's law coming to an end?

Moore's law is not synonymous with the exponential growth of the price-performance of computing; it is one paradigm among many. The exponential growth of computing started decades before Gordon Moore was even born: we see continual exponential growth going back to the 1890 American census, the first to be automated. Moore's law, which refers to the continual shrinking of component sizes on a flat (that is, two-dimensional) integrated circuit, was the fifth paradigm, not the first, to bring exponential gains to computation. And it won't be the last. Examples of the sixth paradigm—self-organizing three-dimensional molecular circuits—are already working experimentally. Semiconductors being fabricated today for MEMS and CMOS image sensors are already 3D chips using vertical stacking, a first step into 3D electronics. This sixth paradigm will keep the exponential trajectory in computing going well into this century.

Okay, but what about software? Some observers say that it is stuck in the mud.

In The Singularity Is Near I addressed this issue at length, citing different methods of measuring complexity and capability in software that clearly demonstrate a similar exponential growth. One recent study ("Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology" by the President's Council of Advisors on Science and Technology) states the following:

"Even more remarkable—and even less widely understood—is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. . . Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science."

Note that the linear programming that Grötschel cites above as having benefited from an improvement in performance of 43 million to 1 is a mathematical technique that I present in the book as being actually used in the human brain.
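
For readers who have not seen the technique, here is a toy linear program in Python, solved with SciPy's linprog; the two products, their coefficients, and the resource limits are invented for illustration and are unrelated to Grötschel's benchmark model.

```python
# A toy production-planning linear program: choose quantities of two
# products to maximize profit subject to machine- and labor-hour limits.
# All numbers are invented for illustration.
from scipy.optimize import linprog

# Maximize 3*x0 + 5*x1, i.e. minimize -(3*x0 + 5*x1).
c = [-3.0, -5.0]

A_ub = [[1.0, 2.0],   # machine hours used per unit of each product
        [3.0, 2.0]]   # labor hours used per unit of each product
b_ub = [14.0, 18.0]   # available machine and labor hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal quantities [2, 6] and profit 36
```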

Aside from these quantitative analyses, we have viscerally impressive recent developments such as IBM's Watson computer, which got a higher score in a televised Jeopardy! contest than the best two human players combined. The Google self-driving cars have driven over a quarter million miles without human intervention in actual cities and towns.

Not everyone is so impressed with Watson. Microsoft cofounder Paul Allen writes that systems such as Watson "remain brittle, their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific areas."

First of all, we could make a similar observation about humans. I would also point out that Watson's "specific areas" include all of Wikipedia plus many other knowledge bases, which hardly constitutes a narrow focus. Watson deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes and metaphors in virtually all fields of human endeavor. It's not perfect, but neither are humans, and it was good enough to be victorious on Jeopardy! over the best human players. It did not obtain its knowledge by being programmed fact by fact, but rather by reading natural language documents such as Wikipedia and other encyclopedias.

Critics say that Watson works through statistical probabilities rather than "true understanding."

Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term "statistical information" in the case of Watson actually refers to distributed coefficients and symbolic connections in self-organizing methods. One could just as easily dismiss the distributed neurotransmitter concentrations and redundant connection patterns in the human cortex as "statistical information." Indeed we resolve ambiguities in much the same way that Watson does—by considering the likelihood of different interpretations of a phrase. If using statistical probabilities does not represent true understanding, then we would have to conclude that the human brain has no true understanding either.
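
To make that concrete, here is a minimal sketch of likelihood-based disambiguation; the word senses, topics, and probabilities are all invented, and this illustrates the general idea rather than Watson's actual method.

```python
# Sketch: pick the most likely sense of an ambiguous word by weighing
# each interpretation against contextual evidence. All probabilities
# are invented for illustration.
context_evidence = {"finance": 0.8, "geography": 0.2}  # from surrounding words

interpretations = {
    "bank = financial institution": {"finance": 0.90, "geography": 0.10},
    "bank = river edge":            {"finance": 0.05, "geography": 0.95},
}

def score(sense_probs):
    # How well this sense matches the contextual evidence.
    return sum(context_evidence[topic] * p for topic, p in sense_probs.items())

best = max(interpretations, key=lambda sense: score(interpretations[sense]))
print(best)  # "bank = financial institution"
```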

How are we going to obtain the algorithms of human intelligence?

By reverse-engineering the human brain, that is, by understanding its methods and recreating them in a computer of sufficient capacity.

How is that going?

Until just recently, we have not been able to see inside a living, thinking human brain with sufficient spatial and temporal resolution to assess its methods. That is now changing, thanks again to the law of accelerating returns. As I show in How to Create a Mind, the resolution of different types of brain scanning is improving at an exponential pace, just like every other information technology. We are now able to watch, in a thinking brain, new interneuronal connections being formed and firing in real time. We can see the brain create our thoughts, and we can see our thoughts create our brain, reflecting its ability to self-organize based on what we are thinking. Some of the best evidence for my thesis on how the brain works became available in the last few months that I was writing the book.

So just how does the brain work?

Let's talk first about where our thinking takes place. The region of the brain that we are most interested in is the neocortex, a thin structure about the thickness of a stack of a dozen sheets of paper. It is where we do our thinking. Unlike the "old brain" (the brain we had before we were mammals), the neocortex enables us to think in hierarchies, reflecting the natural hierarchical organization of the world. It enables us to learn complex new skills that are composed of structures of structures of ideas.

The salient survival advantage of the neocortex was that it could learn complex new skills in a matter of days. If a species encounters dramatically changed circumstances and one member invents or discovers or just stumbles upon (these three methods all being variations of innovation) a way to adapt to that change, other individuals will notice, learn, and copy that method, and it will quickly spread virally to the entire population. The cataclysmic Cretaceous-Paleogene extinction event about 65 million years ago led to the rapid demise of many non-neocortex-bearing species, which could not adapt quickly enough to a suddenly altered environment. This marked the turning point at which neocortex-capable mammals took over their ecological niche. Biological evolution found the hierarchical learning of the neocortex so valuable that this region of the brain continued to grow in size until it virtually took over the brain of Homo sapiens: eighty percent of the human brain's mass consists of the neocortex, which covers the old brain with elaborate folds and convolutions to increase its surface area.

The next observation we can make about the neocortex is its uniformity of structure, appearance, and method. One region of the neocortex can readily take over the functionality of another if necessitated by injury or disability. For example, in a congenitally blind individual, region V1, which usually performs very low-level recognition of basic visual phenomena such as edges and shadings, is reassigned to process high-level language concepts. There are many other examples of the interchangeability of different portions of the neocortex. The evolutionary innovation in Homo sapiens was a larger forehead to accommodate more neocortex in the form of the prefrontal cortex. This greater quantity resulted in a profound qualitative improvement in human thinking—it was the primary enabling factor in our invention of language, art, science, and technology.

Okay, so how does the neocortex work?

By drawing on the most recent neuroscience research, my own research and inventions in artificial intelligence, and thought experiments (which I present in the book), I describe my theory of how the neocortex works: as a self-organizing hierarchical system of pattern recognizers. We have about 300 million of these pattern recognizers. Some are responsible for recognizing simple patterns, such as the crossbar in a capital A. Others are responsible for high-level abstract qualities such as irony, beauty, and humor. They are organized in a grand hierarchy. These pattern recognizers are all uncertain—they communicate with one another through networks of probabilities. We are not born with this hierarchy; our neocortex builds it from the thoughts we are thinking. So you are what you think!
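
Here is a highly simplified sketch of such a hierarchy; the structure, names, weights, and probabilities are invented for illustration, not the model from the book.

```python
# Sketch: a tiny hierarchy of uncertain pattern recognizers. Each
# recognizer reports a probability that its pattern is present, computed
# from its children's probabilities. All numbers are illustrative.
class PatternRecognizer:
    def __init__(self, name, children=(), weights=()):
        self.name = name
        self.children = list(children)  # lower-level recognizers
        self.weights = list(weights)    # importance of each child

    def probability(self, inputs):
        if not self.children:
            # Leaf recognizers read raw input features directly.
            return inputs.get(self.name, 0.0)
        # Higher levels combine their children's (uncertain) outputs.
        total = sum(w * child.probability(inputs)
                    for w, child in zip(self.weights, self.children))
        return total / sum(self.weights)

# Low-level stroke recognizers feeding a recognizer for the letter "A"
# (a crossbar plus two diagonals, as in the example above).
crossbar = PatternRecognizer("crossbar")
left = PatternRecognizer("left_diagonal")
right = PatternRecognizer("right_diagonal")
letter_a = PatternRecognizer("A", [crossbar, left, right], [1.0, 1.0, 1.0])

print(letter_a.probability(
    {"crossbar": 0.9, "left_diagonal": 0.95, "right_diagonal": 0.85}))  # 0.9
```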

Have we tried emulating this technique in software?

It turns out that the mathematics of what goes on in the neocortex is very similar to a method I helped pioneer a couple of decades ago called hierarchical hidden Markov models (HHMMs).
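
A full HHMM is too long for an example here, but the flat hidden Markov machinery it builds on fits in a few lines; an HHMM nests models like the following inside the states of a parent model. The two states, the observations, and all probabilities are invented for illustration.

```python
# Sketch: the forward algorithm for a two-state hidden Markov model,
# the building block that HHMMs arrange hierarchically. All
# probabilities are invented for illustration.
import numpy as np

# Hidden states: 0 = "vowel", 1 = "consonant".
start = np.array([0.5, 0.5])        # initial state probabilities
trans = np.array([[0.3, 0.7],       # P(next state | current state)
                  [0.6, 0.4]])
emit = {"a": np.array([0.8, 0.1]),  # P(observation | state)
        "t": np.array([0.2, 0.9])}

def forward(observations):
    """Total probability of an observation sequence under the model."""
    alpha = start * emit[observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[obs]
    return alpha.sum()

print(forward(["a", "t"]))  # likelihood of hearing "a" then "t" (0.3)
```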

So why aren't artificial intelligence programs matching human performance?

For one thing, the hardware is still not as powerful, unless you use the most powerful supercomputers, which AI generally does not do. Second, the field of AI has not yet matched the human brain's ability to build the hierarchy itself; most HHMMs have relatively fixed patterns of connections. Third, we need to learn how to educate our AIs. Even if we did a perfect job of emulating the neocortex, including its scale of 300 million recognizers, it wouldn't do anything useful without an education. That's why a newborn human child has a long way to go before she can hold a conversation. Educating an AI does not need to take as long as it does with a human child: Watson's education consisted of reading 200 million pages of natural-language documents, but it did that in a matter of weeks. The speech recognition systems that I developed in the 1980s and 1990s listened to many thousands of hours of recorded speech but were able to process it all in an hour or two. Ultimately we will need to provide AI learning experiences that compete with the sophistication of human ones.

I have been consistent in predicting that AIs will match human intelligence in all of the ways in which humans are now superior by 2029. They will then be able to apply their enormous speed and scale and total recall to all of human knowledge. I believe that recent advances should give us a lot of confidence that we will meet or beat that goal.

However, we will see enormous gains in the intelligence of software over the next several years. Keep in mind that Watson's ability to understand a natural-language document is still substantially lower than a human's, but it was nonetheless able to defeat the two best human players at the complex game of Jeopardy! because it could apply its level of understanding to a vast amount of material (200 million pages)—and remember it all—something that humans are unable to do.

What will we see over the next three to five years?

We will see question-answering systems that really work. We will see search engines that are based on an actual understanding of what is being said on each page rather than just the inclusion of keywords. We will see systems that anticipate your needs and answer your questions before you even ask them because they are listening in on your conversations, both spoken and written.

And in the 2030s, when AIs routinely outperform human intelligence, what then?

We'll merge with the intelligence expanders we are creating. Another exponential trend is the shrinking of technology, which I've measured at a factor of about 100 per decade in 3D volume. At that rate, we'll have computerized devices the size of blood cells in the 2030s. Some of these will circulate in our bloodstream to keep us healthy and extend our longevity. Others will go into the brain and connect with our biological neurons. They will communicate wirelessly with the Internet, enabling our brains to tap directly into AI in the cloud. Keep in mind that we often use cloud-based AI when we do something interesting with our mobile devices: if you have your mobile device translate from one language to another, ask it a question, or just do a search, the action takes place in the cloud, not just in the device. The same thing will become true of our brains once we can noninvasively place computation and communication devices in them. So instead of being limited to around 300 million pattern recognizers in each neocortex, we will be able to have more—a billion, then tens of billions, then a trillion. Keep in mind that the evolutionary increase in the size of our human neocortex led to the qualitative advance of such inventions as language, art and science. Imagine the qualitative advances we will be able to make when we can expand the scope of our neocortex, based on the law of accelerating returns.
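
The arithmetic behind the blood-cell claim is straightforward. Here is a sketch, using the factor-of-100-per-decade rate cited above; the starting device sizes are illustrative assumptions, and the implied date depends entirely on them.

```python
# Sketch: what a ~100x-per-decade reduction in 3D volume (the rate cited
# above) means in linear dimensions, and how many decades separate an
# assumed starting size from blood-cell scale.
import math

volume_factor_per_decade = 100.0
linear_factor = volume_factor_per_decade ** (1 / 3)  # ~4.6x per decade
print(f"linear shrink per decade: ~{linear_factor:.1f}x")

blood_cell_mm3 = 1e-7  # a red blood cell is roughly 10^-7 mm^3
for start_mm3 in (1.0, 1e-3):  # a 1 mm cube vs. a 0.1 mm cube, assumed
    decades = math.log(start_mm3 / blood_cell_mm3, volume_factor_per_decade)
    print(f"from {start_mm3:g} mm^3: ~{decades:.1f} decades to blood-cell scale")
```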

So will these AIs circa 2030 be conscious?

First of all, there is no reason to suppose that we would stop being conscious just because we put computers in our brains. I see a primary application of future AI as expanding our own intelligence. Consider that we already connect computerized devices to the brains and nervous systems of Parkinson's patients and deaf people. No one suggests that people with today's neural implants are no longer conscious.

But how about the AIs themselves?

That's an issue that's actually been debated since Platonic times: does consciousness require a biological substrate? Is it possible for an entirely computerized system to be conscious? I make the case in the book that if an AI has the same subtle emotional responses as a human— if it gets the joke, can be funny or sexy, and is capable of expressing a convincing loving sentiment—then we will accept these entities as conscious persons.

Isn't that the essence of the Turing test?

Exactly—I make the case that the Turing test is indeed a valid test of consciousness.

No computer has ever passed a valid Turing test?

True—it's not 2029 yet. But they're getting better. In recent Turing test competitions, some AIs have come close.

Aren't there dangers to superintelligent AI?

Technology has always been a double-edged sword, going back to fire, which cooked our food and kept us warm but was used to burn down our villages. If an AI that is smarter than you has it in for you, well, that's not a good situation to get into.

What could you do about that?

Get an even smarter AI to protect you.

Okay, I'll have to remember to do that.
