I am pleased to see that I am not the only one who believes in using human intelligence as a model for building better software and applications, as we did with the User-Trained Agent of the Papyrus Platform.
At its 2006 conference, the IBM Almaden Institute focused on the theme of “Cognitive Computing” and examined scientific and technological questions around how the human brain works. The discussions covered approaches to understanding cognition that would unify neurological, biological, psychological, mathematical, computational, and information-theoretic insights.
As a major next step, IBM disclosed its Cognitive Computing project a few days ago. “Today’s computers are unable to deal with ambiguities or consolidating information from multiple sources into a holistic picture of an event. Without the ability to monitor, analyze, and react to the continuous growth of information in real time, the majority of its value may be lost,” IBM said in the press release.
IBM hopes that such a cognitive machine could integrate data from consumers’ credit reports, tax returns, pay stubs, and mortgage statements in order to allow lending companies to make instant decisions on loan eligibility. IBM and its research partners will look closely at the brain’s internal structure and functions and attempt to emulate those patterns on silicon and transistors. IBM’s partners include Stanford University, University of Wisconsin-Madison, Columbia University Medical Center, Cornell, and University of California at Merced.
My take: THAT IS A FAIRLY LONG SHOT. Neural net computing has been around for some time, and little practical value has been achieved except in very specialized applications. IBM and its partners are fairly naive in the goals for this project. I don’t believe that a hardware approach is the right one, because it is too expensive and too slow to iterate on in development. How such a system will model and understand data will be interesting to see. Further, it will be a hard sell even once the solution actually works.
Our Papyrus User-Trained Agent is a simple cognitive REAL-TIME function that models basic human decision-making on trained patterns, and the model can still be expanded substantially in depth and capability. I have been telling our customers about the UTA for two years now, and most are simply not that interested. Those who are interested run into one major problem: cognitive computing that learns from large-scale data patterns does not follow easy-to-understand logic. Everyone interested in the UTA asks: “What has the UTA learned?” and “Why does it take one decision rather than another?” Our study work with Vienna University brought up the same questions.
Therefore our main work item for the UTA at this time is to visualize the knowledge it has acquired. That is only possible because the UTA works as a piece of software with access to the data models and real-time data while it processes. It can use that information to write historic process information, from which process decision graphs can be created that a human can understand.
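The idea of turning recorded decision history into a human-readable decision graph can be sketched as follows. This is a minimal, hypothetical illustration, not the actual UTA implementation: the class name, the states, and the decision labels are all invented for the example.

```python
from collections import Counter, defaultdict

class DecisionHistory:
    """Hypothetical sketch: record (state, decision) events while an agent
    processes, then aggregate them into a graph a human can inspect."""

    def __init__(self):
        # Edge counts: (state, decision) -> number of times observed.
        self.edges = Counter()

    def record(self, state, decision):
        self.edges[(state, decision)] += 1

    def decision_graph(self):
        """Per state, return the observed decisions with their relative
        frequencies -- the raw material for a visual decision graph."""
        totals = defaultdict(int)
        for (state, _), n in self.edges.items():
            totals[state] += n
        graph = defaultdict(dict)
        for (state, decision), n in self.edges.items():
            graph[state][decision] = n / totals[state]
        return dict(graph)

# Illustrative events: three processed cases in the same state.
history = DecisionHistory()
history.record("claim_received", "approve")
history.record("claim_received", "approve")
history.record("claim_received", "escalate")
print(history.decision_graph())
```

Even this trivial aggregation answers the customers’ question “Why does it take one decision rather than another?” in frequency terms: for the state, the graph shows which decisions were trained and how often each was taken.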
So IBM’s cognitive computing project will face the same hurdles. Should such a cognitive hardware device become commercially feasible at some stage (ten years down the road?), people will not want to use it because they will not understand its decisions and reasoning. People don’t trust business intelligence data right now, so why should they trust business intelligence produced by a computer whose thinking they don’t understand?
What is the solution? As always: ‘Keep it simple.’ Cognitive computing has to happen on a small scale and on limited, plausible data, as we do with the UTA. Rather than huge computing devices that digest huge amounts of data, the right approach is to narrow the data volume down by ensuring that the context of the information is the right one. Massive context-less data do not solve anything.
I have learned from human intelligence that we are only as fast in our thinking as we are because we always limit the processing we perform. We simply map the information and knowledge we EXPECT because of the context and compare it against the real-time input. The context of the information thus reduces the amount of data to be processed, and simple pattern matching is enough to recognize information and to make decisions based on previously trained patterns. In the UTA, the context is currently defined by the SCOPE of the state space that is monitored. Our current research tries to find ways to identify context more automatically and efficiently. So in effect we are quite a few years ahead of IBM and its research partners.
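The argument above can be made concrete with a small sketch: because the context pre-selects the patterns we EXPECT, matching the real-time input is cheap. Everything here is an assumption for illustration — the pattern store, the contexts, and the field names are invented, and real pattern matching would be richer than set overlap.

```python
# Hypothetical sketch: the context narrows the candidate patterns, so a
# cheap overlap score suffices instead of searching the full pattern store.

PATTERNS = {
    # context -> patterns previously trained in that context (assumed data)
    "invoice": [{"amount", "due_date", "vendor"}],
    "complaint": [{"customer", "text", "severity"}],
}

def match_in_context(context, observed_fields):
    """Compare real-time input only against patterns EXPECTED in this
    context; without the context, every trained pattern would be scored."""
    candidates = PATTERNS.get(context, [])  # context limits the processing
    best, best_score = None, 0.0
    for pattern in candidates:
        overlap = len(pattern & observed_fields) / len(pattern)
        if overlap > best_score:
            best, best_score = pattern, overlap
    return best, best_score

# Real-time input arriving in the "invoice" context: two of the three
# expected fields are present, so the invoice pattern is recognized.
pattern, score = match_in_context("invoice", {"amount", "vendor", "notes"})
```

The design point is the lookup by context before any matching happens: the data volume to be processed shrinks first, and only then does the simple comparison run.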