Ray Kurzweil: How to Create a Mind


Ray Kurzweil claims that 'If understanding language and other phenomena through statistical analysis does not count as intelligence, then humans have no intelligence either.' That is a converse argument that should disqualify Kurzweil from ever being listened to again. While humans do use similar learning methods, they are mostly not statistical: humans can learn things ad hoc, through a single event. They can create a new neural connection immediately. The experience just has to be emotionally strong, and that turns it into immediate and dominant knowledge.

Even if we could reverse engineer all functions of the human brain, we would still not be able to emulate our human experience and its emotional context, which is the key to human drives, goal setting and decision making. While many see that as a flawed aspect of human capability, the truth is that, because of the uncertainty of knowledge and the inability to exert perfect control in the chaotic real world, human emotional intuition works a lot better than any machine-like logic.

A natural language assistant and a self-driving car are no proof whatsoever that human-like machine intelligence is possible. It is not the neocortex that defines our capacity for art and emotion but our limbic system, which Kurzweil covers briefly but dismisses as a historic artefact overpowered by the neocortex. Maybe Kurzweil is really a Vulcan?


Kurzweil often bases his conclusions on converse arguments like the one above. For example: 'Because we can't easily remember things in reverse order, memories must be stored in sequence.' That does not allow the conclusion that memory is stored as a stream of information, as in computer memory. If the brain were a computer-like construct, reversing the stored information should be easy. That we can't is rather proof that the brain is not a computer and has no computer-like function. Yes, memories seem to be linked together as a series of contents in sequence, but they are indexed by emotional events. If there is no emotional context we do not remember things, or, if the information is there, we cannot recall it out of sequence. It is well known that damage to the thalamus causes memory problems.

Kurzweil makes the same mistake again when he argues that because the apparent complexity of a Mandelbrot set is driven by a simple formula, 'Z equals Z squared plus C', it should be possible to use a similar, still-to-be-found formula to reverse engineer the complex human brain. He clearly does not consider the problem of chaotic structures, and that a natural environment does not consist of one formula but of any number of such Mandelbrot sets interacting in totally unpredictable ways. And even if a brain could be constructed the way he suggests, it would still develop into a unique structure and not be in any way human-like.
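For readers unfamiliar with the formula Kurzweil cites: the Mandelbrot set is generated by repeatedly applying z = z² + c and checking whether the result stays bounded. A minimal sketch (standard escape-time algorithm, nothing Kurzweil-specific) shows how little machinery is behind the famous images:

```python
# Minimal sketch of the Mandelbrot iteration Kurzweil cites:
# repeatedly apply z = z**2 + c and count how long |z| stays bounded.
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Return the iteration count at which |z| exceeds 2, or max_iter if it never does."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Points inside the set never escape; points outside escape quickly.
print(escape_time(0j))        # in the set -> 100
print(escape_time(1 + 1j))    # outside -> escapes almost immediately
```

The point stands either way: all the visual complexity comes from iterating one fixed rule, which is precisely why it is a poor analogy for a brain shaped by countless interacting, chaotic processes.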

Starting off with an unproven assumption

Kurzweil starts out with the wrong assumption that the brain stores and processes information. There is no scientific basis for this conclusion. While the function of a neuron, or of a network of neurons, can be simulated by a mathematical algorithm, that does not imply that we can simulate the complex, multi-dimensional biochemical interaction of body and brain that creates and recalls emotional experiences as it sees necessary to support life. The sequence of memories we can access bears no likeness to the information stored in a computer.

Pattern recognition, and the abstraction of patterns into features, has in itself nothing to do with intelligence, even when it is purely trained. Yes, we see what we can interpret and miss what is unknown, but cognition, even when hierarchical in its abstraction, is not yet intelligence.

We do not constantly predict the future, as Kurzweil claims; rather, as we map current experiences to stored ones in the neocortex, we travel the neural pathways to likely patterns. The hierarchy of patterns and perceptions does not exist as statistical analysis but as links created in a single moment. Only when we need to learn things for the sake of learning do we have to repeat them often, to create a memory access path through a variety of techniques. We then lack the immediate emotional access to that knowledge which is created by experience and the dopamine-induced moment of success. That is why we forget learned knowledge quickly while experience remains.

Kurzweil says that it is beyond the scope of his book to consider neurotransmitters such as dopamine and serotonin, which is surprising, as they are a key element of a working brain. He never mentions any of the research that links the amygdala and the limbic system with human decision making. It has been shown that managers who decide well and quickly have a very active limbic system while doing so. Kurzweil claims that the neocortex has taken over the functions of love controlled by oxytocin and vasopressin, and he could not be more wrong. Our neocortex has no other function than to interpret the feelings and desires our emotional system has already produced and to make sense of them. Brain scans show that we have taken a decision before we become consciously aware of it. It means that we never decide logically but always emotionally. If we try to apply reason and logic, we merely search for additional potential aspects of loss and reward and throw the associated emotions onto our decision scale. We have never disconnected from our deep emotional past, as Kurzweil claims. The reason why we humans have sex without wanting children is not that sex has become irrelevant, but that it is a successful means to strengthen the pair bond of a relationship.

I find in the book the repeated and unfounded assumption that the pattern recognition capability of the neocortex is the basis of intelligence. There is no proof of that. The neocortex is purely about automation: it is used to reduce our thinking effort and is not the basis of it. We train the neocortex, for example, for all motor functions, so that we can walk without thinking about which foot to put where. And yes, we also train it to recognise higher-level abstractions and concepts. But there is no search engine; the neocortex simply filters the abstractions without logical rules.

Are IBM’s Deep Blue and Watson actually AI?

Kurzweil also claims that Deep Blue won against Kasparov because of its stronger pattern recognition ability, when in truth it was the programming of chess grandmaster Joel Benjamin that defined its game strategies. Kurzweil assumes that the patterns stored in the neocortex are similar to the ones used by computers. The neocortex does not store patterns but contains networks, which carry orders of magnitude more contextual meaning without being any kind of storage device.

Pattern recognition has nothing to do with understanding, though we can debate what understanding means. I say it is about emotional relevance in a given context. An OCR module can identify a word and verify it against a dictionary. It can recognise multiple words in sequence, analyse them against a semantic rule engine, and ensure they form a valid sentence. It can in principle translate that sentence into another language with sufficient accuracy to make it work. All that is not intelligence, and such a program has no understanding of what it does. It cannot at all identify whether the sentence makes any sense. The same is true for the much-hyped Watson, which won Jeopardy by downloading the Internet and using a huge number of parallel processors with many different algorithms to quickly find probable answers and select the most likely one. Once again, no intelligence is used in Watson to achieve that. It does not even understand the rules of the game that are programmed into it. It has no desire to win, sees no benefit in winning, and feels neither joy nor disappointment either way. It is not intelligent and will not lead to intelligent functionality. I am not saying there won't be benefits achievable, and yes, some may surpass human achievements, but they are still not intelligent.
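The validation step can be made concrete with a toy sketch (the word list and the acceptance rule are hypothetical, not any real OCR or grammar product): a program can accept a string as a 'valid sentence' while grasping nothing of its meaning.

```python
# Toy illustration (hypothetical word list, not a real OCR system):
# a program can validate words and word order without any understanding.
DICTIONARY = {"colourless", "green", "ideas", "sleep", "furiously"}

def is_valid_sentence(sentence: str) -> bool:
    """Accept any non-empty sequence of known words -- a crude stand-in
    for the dictionary lookup and rule engine described above."""
    words = sentence.lower().rstrip(".").split()
    return len(words) > 0 and all(w in DICTIONARY for w in words)

# Chomsky's famous nonsense sentence passes every check, yet means nothing:
print(is_valid_sentence("Colourless green ideas sleep furiously."))  # True
```

The program reports True for a sentence no human would call meaningful, which is exactly the gap between recognition and understanding.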

The possible patterns that the neocortex has linked in its networks are not a prediction of the future, as Kurzweil claims, but simply a record of things that happened in the past. The past does not predict the future, and chaos renders all attempts at longer-term prediction futile. That includes the idiotic Black-Scholes formula for the future value of an investment. If anything, these are self-fulfilling prophecies, like all stock market gambles.

A really strange idea is the return to the long-gone approach of LISP, a language used in early AI systems that obviously produced nothing of the kind. Kurzweil believes that the AI will statistically infer and construct several hundred million lines of LISP rules that will represent its knowledge. There is nothing of the kind in the human brain, and thus all comparisons end right here.

Consciousness and the Turing Test

But then we come to the all-important question that Kurzweil tries to sweep under the table, much like politicians do with unwanted subjects: turn them into gibberish and cast them aside. The question is whether the artificial intelligence will be self-conscious. He quotes Searle, who says that we simply have to create a machine that emulates the brain and then we will also get consciousness. If only we knew how the brain does that. The limbic system, the thalamus and the medulla are key parts of that system, but how they would actually interact with a pattern matching engine and a LISP rule set is conceptually far-fetched, not to say ridiculous. Searle was, however, the one to state very plausibly that the Turing Test (interviewing an AI to see if it seems human) does not constitute proof of AI.

There is a small part of human and mammal brains called the medulla, sitting on top of the spinal cord. If it gets damaged you fall into a coma, so it has a key function in consciousness. Exactly what that function is we do not know, but most likely it links our current bodily experience to our memories. In terms of evolution it is one of the oldest parts of the brain. We had consciousness long before we had an advanced neocortex. It is rather ignorant to assume that we will automatically get consciousness if we add more processing power to a pattern matching engine.

Consciousness is not just about knowing about oneself. It is easy to build a machine that stores information about itself and can report its status, as many machines do today. So it is not about the performance of that task. A conscious mind is not about performance; it is about experience. It is about applying emotional context to everything. It does not matter how intelligent some machine will be by some scoring system; it will simply lack the human experience, particularly in regard to emotions, feelings, bodily existence and its desires. These are not drawbacks of being human; they are its essence. That cannot be replaced, as Kurzweil suggests, with LISP rules declaring some goals. AI will always lack the human experience. Knowing or communicating about sex is a very different thing from actually doing it. The same is true for all human aspects. You cannot describe the feeling of love (what oxytocin, dopamine and endorphins do to the body and mind) to a pattern matching engine with a LISP rule book. Being human has mostly to do with empathy, being able to feel with and for others. Clearly a machine will never do this. It won't have the emotional basis for human decision making either. Therefore its intelligence will be INHUMANE … and that, I am afraid, in the true meaning of the word.

Einstein: ‘The true measure of intelligence is not knowledge but imagination.’

Therefore Kurzweil turns, at the end of his book, to a religious principle: all you have to do is have faith. If you want to believe it's possible, we will have AI machines. From my perspective he diminishes the human mind to raise the artificial one and claims that it will be more powerful. I do not think the human mind is mystical or superior in some way, but I do think it is unfathomable. And the problems we face today exist not because our mind is inferior but because we do not consider well enough what it tells us. Flawed logic and pseudo-intellectual contemplation replace real-world intuition. Computers can already do many things a lot faster than a human, and while that is good in principle, many of those things are not necessarily to our benefit.

I will still go with Noam Chomsky: 'Watson is a bigger steamroller. It understands nothing.'

I am the founder and Chief Technology Officer of Papyrus Software, a medium-size software company offering solutions in communications and process management around the globe. I am also the owner and CEO of MJP Racing, a motorsports company focused on Rallycross, or RX, a form of circuit racing on mixed surfaces that has been around for 40 years. I hold 8 national and international championship titles in RX. My team participates in the World Championship alongside Petter Solberg, Sébastien Loeb and Ken Block.

Posted in Artificial Intelligence, Evolution, Machine Learning
4 comments on “Ray Kurzweil: How to Create a Mind”
  1. spieltmit says:

    I love to read your posts about AI. And yes, I fully agree with your view. Only I would never say never. Just because there is no way nowadays to make a computer “feel” something does not mean it will never be possible. Happy 2017 by the way!


  2. All I am saying is that a computer will never feel like a human and thus will never have human-like intelligence.


  3. Ala says:

    It’s nice to see someone giving emotion, intuition, experience and imagination some credit!

    I think there are so many philosophical and linguistic problems in defining intelligence in the first place. And even if we know what it is, is human style intelligence the only style of intelligence? And if so, would a machine have to simulate the way a human does it to achieve the same function?

    I think there aren’t good answers to a lot of these questions.

    Historically, intelligence was a word used to describe symbolic, ‘logical’ analytic syllogistic reasoning. This was the stuff that was hard for the average brain to do, especially as literacy and mathematical training were the preserve of an elite.

    In the twentieth century we started making machines in that image, and quickly found that symbolic reasoning is very easy for a dumb deterministic machine to do; it is much easier to work out the equations that describe the path of a thrown ball than it is to learn how to actually catch one… And so Artificial Intelligence is now about spending millions to work out how to do things that toddlers can do effortlessly, lol.

    The point I’m trying to get at is that the concept of ‘intelligence’ is a shifting one that reflects whatever cognition is at a premium.

    Two hundred years ago, multiplying two ten digits numbers in your head meant you were very intelligent – now, thanks to calculators, it shows a certain stupidity (or at least eccentricity) to waste your life learning to do that!

    To try to be a bit more succinct, the semantics of intelligence is a socio-technological construction in flux, IMHO, rather than an objective fixed definable thing?


    • Thanks for a very thoughtful comment. I do agree with you that there is a lot of room to define intelligence and how to rate or qualify it. I do say that my measuring stick for intelligence is the human capacity for it. Cognition is certainly part of becoming intelligent and of using our mental ability to ponder and judge.

      My main criticism of AI propositions is that, unlike most AI ‘experts’, I am also focused on human neurology, neurobiology and the resulting psychology, and their role in intelligence. That makes me understand that consciousness is a precursor to intelligence and not a consequence of more and more computing power with more powerful cognition. Our consciousness and brain evolution have given us the ability to live in two worlds at the same time: the real-time body-chemistry experience, and our mental existence in the past we remember and the future we contemplate. Our most important learning capability is from narrative and its emotional context in other humans, rather than from searching for patterns in huge amounts of data. We can learn from a single event emotional content so strong that it changes our lives. All our memory and judgement is emotionally driven and requires our real-time body chemistry to map these to our natural urges.

      No matter how powerful we make these machines, they will never come close to, let alone be better at, making these human decisions. There is no right or wrong in nature, and there are as many truths as there are perceptions. If there were a better intelligence, evolution would have produced it. One can propose that our only role is to create that higher intelligence, but there is again a substantial amount of arrogance at work, just like proposing a God who created this universe for us only. Being intelligent also requires being ‘wrong’ about something to find yourself or guide others to a ‘right’, these judgements being purely momentary in their value. AI is not dangerous. Humans are dangerous, especially those who propose that AI will be better humans …



© 2007-19 by Max J. Pucher. All rights reserved.
