Ray Kurzweil claims that 'If understanding language and other phenomena through statistical analysis does not count as intelligence, then humans have no intelligence either.' That is a converse argument that should disqualify Kurzweil from ever being listened to again. While humans do use similar learning methods, they are mostly not statistical: humans can learn things ad hoc from a single event. They can create a new neural connection immediately. The experience just has to be emotionally strong, and that turns it into immediate and dominant knowledge.
Even if we could reverse engineer all functions of the human brain, we would still not be able to emulate our human experience and its emotional context, which is the key to human drives, goal setting and decision making. While many see that as a flawed aspect of human capability, the truth is that because of the uncertainty of knowledge and the inability to exert perfect control in the chaotic real world, human emotional intuition works a lot better than any machine-like logic.
A natural language assistant and a self-driving car are no proof whatsoever that human-like machine intelligence is possible. It is not the neocortex that defines our ability for art and emotion but our limbic system, which Kurzweil covers briefly but dismisses as a historic artefact overpowered by the neocortex. Maybe Kurzweil is really a Vulcan?
Kurzweil often bases his conclusions on converse arguments like the one above. For example: 'Because we can't easily remember things in reverse order, memories must be stored in sequence.' That does not allow the conclusion that memory is stored as a stream of information like in computer memory. If the brain were a computer-like construct, then reversing the stored information should be easy. That we can't is rather proof that the brain is not a computer and has no computer-like function. Yes, memories seem to be linked together as a series of contents in sequence, but they are indexed by emotional events. If there is no emotional context we do not remember things, or, if the information is there, we cannot recall it out of sequence. It is well known that if the thalamus is impacted we experience memory problems.
Kurzweil makes the same mistake again when he argues that because the apparent complexity of a Mandelbrot set is driven by a simple formula, 'Z equals Z squared plus C', it must be possible to use a similar, still to be found, formula to reverse engineer a complex human brain. He clearly does not consider the problem of chaotic structures, and that any natural environment does not consist of one formula but of any number of such Mandelbrot sets interacting in totally unpredictable ways. And even if a brain could be constructed the way he suggests, it would still develop into a unique structure and not be in any way human-like.
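The formula Kurzweil refers to really is that simple. A minimal sketch (the iteration cap of 100 and the escape radius of 2 are the usual conventions, not anything from the book):

```python
# The Mandelbrot iteration: z -> z^2 + c, starting from z = 0.
# A point c belongs to the set if the sequence stays bounded; in practice
# we stop once |z| exceeds 2, since the sequence then provably diverges.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: c lies outside the set
    return True  # stayed bounded for max_iter steps: treat as inside

# c = 0 stays at 0 forever; c = 1 escapes quickly (0, 1, 2, 5, 26, ...).
print(in_mandelbrot(0j), in_mandelbrot(1 + 0j))  # True False
```

The point stands either way: that a one-line formula can produce unbounded visual complexity says nothing about the converse, namely that a complex system like the brain can be compressed into one such formula.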
Starting off with an unproven assumption
Kurzweil starts out with the wrong assumption that the brain stores and processes information. There is no scientific basis for this conclusion. While the function of a neuron and of a network of neurons can be simulated by a mathematical algorithm, that does not imply that we can simulate the complex multi-dimensional biochemical interaction of body and brain that creates and recalls emotional experiences as it sees necessary to support life. The sequence of memories accessible to us bears no likeness to the information stored in a computer.
Pattern recognition and pattern abstraction into features, even if purely trained, have in themselves nothing to do with intelligence. Yes, we see what we can interpret and we miss what is unknown, but cognition, even if hierarchical in abstraction, is not yet intelligence.
We do not constantly predict the future as Kurzweil claims; rather, as we map current experiences to stored ones in the neocortex, we travel the neural pathways to likely patterns. The hierarchy of patterns and perceptions does not exist as statistical analysis but as a link created in a single event. Only when we need to learn things for the sake of learning do we have to repeat them often, creating a memory access path through a variety of techniques. We then lack the immediate emotional access to that knowledge that is created by experience and by the dopamine-induced moment of success. Which is why we forget learned knowledge quickly while experience remains.
Kurzweil says that it is beyond the scope of his book to consider neurotransmitters such as dopamine and serotonin, which is surprising as they are a key element of a working brain. Kurzweil never mentions any of the research that links the amygdala and the limbic system with human decision making. It has been shown that managers who can decide well and quickly have a very active limbic system when doing so. Kurzweil claims that the neocortex has taken over the functions of love controlled by oxytocin and vasopressin, and he could not be more wrong. Our neocortex has no other function than to interpret our feelings and desires and make sense of them after our emotional system has already formed them. We can see in brain scans that decisions are taken before we actually become consciously aware of them. It means that we never decide logically but always emotionally. If we try to apply reason and logic, we search for additional potential aspects of loss and reward and their associated emotions and throw them onto our decision scale. We have never disconnected from our deep emotional past as Kurzweil claims. The reason why we humans have sex without wanting children is not because it has become irrelevant but because it is a successful means to strengthen the pair bond of a relationship.
I find in the book the repeated and unfounded assumption that the pattern recognition capability of the neocortex is the basis of intelligence. There is no proof of that. The neocortex is purely about automation. It is used to reduce our thinking effort and is not the basis of it. We train the neocortex, for example, for all motor functions so that we can walk without thinking about which foot to put where. And yes, we also train it to recognise higher-level abstractions and concepts. But there is no search engine; the neocortex simply filters the abstractions without logical rules.
Are IBM’s Deep Blue and Watson actually AI?
Kurzweil also claims that Deep Blue won against Kasparov because of its stronger pattern recognition ability when in truth it was the programming of chess grandmaster Joel Benjamin that defined its game strategies. Kurzweil assumes that the patterns stored in the neocortex are similar to the ones used by computers. The neocortex does not store patterns but contains networks, which have orders of magnitude more capacity for contextual meaning without being a kind of storage device.
Pattern recognition has nothing to do with understanding, though we can have a discussion about what understanding means. I say it is about emotional relevance in a given context. An OCR module can identify a word and verify it against a dictionary. It can recognise multiple words in sequence, analyse them against a semantic rule engine and ensure they form a valid sentence. It can in principle translate that into another language with sufficient accuracy to make it work. All that is not intelligence, and such a program has no understanding of what it does. It cannot determine at all whether the sentence makes any sense. The same is true for the much-hyped Watson, which won Jeopardy by downloading the Internet and using a huge number of parallel processors with many different algorithms to quickly find probable answers and select the most likely one. Once again, no intelligence is used in Watson to achieve that. It does not even understand the rules of the game that are programmed into it. It has no desire to win, sees no benefit in winning, and feels neither joy nor disappointment either way. It is not intelligent and will not lead to intelligent functionality. I am not saying there won't be benefits achievable, and yes, some may be better than human achievements, but they are still not intelligent.
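The point can be made with a deliberately crude sketch (the tiny dictionary and the sentences are my own invented examples, not from the book): a program will 'translate' a nonsensical sentence just as happily as a sensible one, because no step in the pipeline checks for meaning.

```python
# Word-for-word "translation" via dictionary lookup: every step is valid
# pattern matching, yet nothing in the pipeline notices that the input
# is nonsense. (Toy English->German dictionary, purely illustrative;
# it does not even attempt grammatical agreement.)

LEXICON = {
    "the": "der", "dog": "Hund", "moon": "Mond",
    "eats": "frisst", "shines": "scheint",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    unknown = [w for w in words if w not in LEXICON]
    if unknown:
        raise ValueError(f"unknown words: {unknown}")  # the only 'check' possible
    return " ".join(LEXICON[w] for w in words)

print(translate("the dog eats the moon"))  # "der Hund frisst der Mond"
# Perfectly translatable, semantically absurd -- and the program has
# no way to know the difference.
```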
The possible patterns that the neocortex has linked in its networks are not a prediction of the future as Kurzweil claims, but simply a record of things that happened in the past. The past does not predict the future, and chaos renders all attempts at longer-term prediction futile experiments. That includes the idiotic Black-Scholes formula for the future value of an investment. If anything, these are self-fulfilling prophecies, like all stock market gambles.
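For reference, this is the formula being dismissed: the standard Black-Scholes price of a European call option, sketched with only the standard library (the parameter values in the example are arbitrary):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call: spot S, strike K,
    time to expiry T (in years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call. The "prediction" rests entirely on the
# assumption that future volatility is a well-behaved constant.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

The model's well-known weakness is exactly the author's complaint: it assumes constant volatility and log-normally distributed price moves, assumptions that chaotic real markets routinely violate.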
A really strange idea is the return to the long-gone approach of LISP, a language used in early AI systems that obviously produced nothing of the kind. Kurzweil believes that the AI will statistically infer and construct several hundred million lines of LISP rules that will represent its knowledge. There is nothing of the kind in the human brain, and thus all comparisons end right here.
Consciousness and the Turing Test
But then we come to the all-important question that Kurzweil tries to sweep under the table, much like politicians do with unwanted subjects: turn them into gibberish and cast them aside. He considers that question to be whether the artificial intelligence will be self-conscious. He quotes Searle, who says that we simply have to create a machine that emulates the brain and then we will also get consciousness. If only we knew how the brain does that. The limbic system, the thalamus and the medulla are key parts of that system, but how they would actually interact with a pattern matching engine and a LISP rule set is conceptually far-fetched, not to say ridiculous. Searle was, however, the one to state very plausibly that the Turing Test (interviewing an AI to see if it seems human) does not constitute proof of AI.
Consciousness is not just about knowing about oneself. It is easy to have a machine that has information stored about itself and can report on its status, as many machines do today. So it is not about the performance of that task. A conscious mind is not about performance. It is about experience. It is about applying emotional context to everything. It does not matter how intelligent some machine will be by some scoring system; it will simply lack the human experience, particularly in regard to emotions, feelings, bodily existence and its desires. These are not drawbacks of being human but the essence of it. That cannot be replaced, as Kurzweil suggests, with LISP rules declaring some goals. AI will always lack the human experience. Knowing or communicating about sex is a very different thing from actually doing it. The same is true for all human aspects. You can't describe the feeling of love (what oxytocin, dopamine and endorphins do to the body and mind) to a pattern matching engine with a LISP rule book. Being human has mostly to do with empathy, being able to feel with and for others. Clearly a machine will never do this. It won't have the emotional basis for human decision making either. Therefore its intelligence will be INHUMANE … and that, I am afraid, in the true meaning of the word.
Einstein: ‘The true measure of intelligence is not knowledge but imagination.’
Therefore Kurzweil turns at the end of his book to a religious principle: all you have to do is have faith. If you want to believe it's possible, we will have AI machines. From my perspective he diminishes the human mind to raise up the artificial one and claims that it will be more powerful. I do not think the human mind is mystical or superior in some way, but I think it is unfathomable. And the problems we face today are not because our mind is inferior but because we do not consider well enough what it tells us. Flawed logic and pseudo-intellectual contemplation replace real-world intuition. Computers can already do many things a lot faster than a human, and while that is good in principle, many of those things are not necessarily to our benefit.
I will still go with Noam Chomsky: 'Watson is a bigger steamroller. It understands nothing.'