There is no such thing as AI, and in the true human sense there never will be. Skynet will not take over the world. But yes, Skynet will be used by humans to try to take over the world, not through AI but through machine-learning (ML) monitoring and its kin, big-data mining. Both technologies are perfect for Big Brother. AI is hype, while ML is real but not intelligent in any way.
Most of the people droning on about AI lack a basic understanding of what it actually does, and most of the stuff considered AI is not. Not even IBM Watson, which is a glorified full-text search with semantic weighting. The ‘AI-Meeting Coordination Service’ Clara uses a huge man-made library of text analysis to prioritize meetings. Not bad, but not AI. Even the coolest self-driving cars are today dangerous pieces of automated junk. They can’t distinguish between a rabbit and a baby on the road and can’t make a human judgement of such situations. But yes, they might stop earlier than a human at anything uncertain on the road, as long as visibility is good enough. That has nothing to do with AI. Superhuman does not mean super-intelligent, but just faster at some human-like pattern recognition tasks that then execute hard-coded processes. Hey, presto … why not BPM?
As you might know, my stance towards BPM is that it is utterly useless for improving what a business does on a human level. After rigorous analysis, BPM produces a frozen, dead version of a business-knowledge illusion that becomes instant legacy dead weight for the business. It can, however, automate and dumb down business interactions so they no longer require much intelligence or knowledge, and that is what sells BPM, nothing else. It is therefore odd to consider that BPM will be improved by ML, because the ML would primarily be used to replace the hordes of consultants doing the process analysis and optimization. When ML learns from knowledgeable staff, you can’t replace them, and you won’t need outside consultants.
ML would rather be a full replacement for the shortsighted concept of BPM, because it would not require any kind of analysis, monitoring, or improvement; all of that can happen through machine learning. But learning how, from whom, and with what accuracy? The idea of process mining from unstructured communication has so far not delivered anything realistic.
Companies like Assist.ai and Digital Genius claim to use AI with a human touch to serve customers better. It still remains a glorified answering machine. Supposedly some AI can identify that customers need support before they know it themselves and offer proactive help. That nonsense will go down the same way as all the automated help agents on PC screens did. Annoying! Chatbots that are coded to seem human in their responses come across as just plain stupid (as they really are) once the conversation moves on.
One of the main issues is that we do not even understand how the brain actually works; all we know is that it is not a computer in any sense. It won’t be emulated by some algorithm, even if we should happen to find one, because no algorithm will emulate the biochemistry that makes us human.
Researchers recently presented experimental evidence for a Theory of Connectivity, the idea that all of the brain’s processes are interconnected “and that mathematical logic underlies brain computation.” Surely true, but understanding how a neuron works and emulating it means nothing: neural nets have tried that for decades. The research paper describes groups of similar neurons forming “functional connectivity motifs” (FCMs) for every possible combination of ideas.
The assumption is that such an algorithm could lead to breakthroughs toward human-like future robot companions. It once again ignores the biochemical aspect: human memory and decision making are driven by hormones and neurotransmitters that can’t be emulated in software. Such a robot will still lack the human drives, and with them compassion and empathy, while strangely that is what most people consider necessary for ‘better’ decisions.
Just because a machine can recognise a face, and maybe its emotional expression, does not mean that it can understand and judge what that expression means. Intelligence requires emotional capability, which creates desires and forms compassion and morality. Logic is not reason, and it is most certainly not intelligence. Emotional weighting is the basis of decision making in all forms of intelligence we know. It is the key to the complexity of our interaction through language.
A key problem of that human interaction, be it in written form or speech, is ambiguity. Humans resolve it through context, which computers find really hard. Modern speech recognition only works as well as it does because it uses a dictionary and grammar library to turn gibberish into probably-approximately-correct words and sentence structures, similar to what we did 15 years ago for OCR. For a business transaction more is required, as much as I agree that speech is the computer interface of the future. A grammatically correct sentence can still make no business sense at all, and we do not gain great benefits if our inputs are single-word answers to questions.
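The dictionary-plus-context trick can be sketched in a few lines. This is a toy illustration, not any real recognizer: the vocabulary and bigram counts below are invented, standing in for the huge dictionaries and grammar libraries such systems actually ship with.

```python
# Toy sketch of dictionary-based correction with context disambiguation.
# VOCAB and BIGRAMS are fabricated stand-ins for a real lexicon and
# language model; a real recognizer uses vastly larger tables.
from difflib import get_close_matches

VOCAB = ["meet", "meat", "meeting", "street"]

# Invented bigram frequencies: how often a word follows another.
BIGRAMS = {("schedule", "meeting"): 9, ("eat", "meat"): 7, ("a", "meet"): 1}

def correct(prev_word: str, noisy: str) -> str:
    """Map a noisy token to the closest dictionary word, breaking ties
    by how often the candidate follows prev_word in the bigram table."""
    candidates = get_close_matches(noisy, VOCAB, n=3, cutoff=0.5)
    if not candidates:
        return noisy  # nothing plausible in the dictionary: keep the raw token
    return max(candidates, key=lambda w: BIGRAMS.get((prev_word, w), 0))

print(correct("schedule", "meting"))  # context favors "meeting"
print(correct("eat", "meta"))         # context favors "meat"
```

Note that the result is only probably approximately correct: the recognizer picks the most likely word, not a guaranteed one, which is exactly why such output still needs a business-level context check before anything is acted upon.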
Which is why we at ISISPapyrus focus our ML work on defining business ontologies that clearly define the terminology of a knowledge domain. It is similar to explaining to a child what a word means. Once user input can be matched to a domain knowledge model, ambiguity in design and Use Case Interactions is reduced, and text or speech becomes well-working input to an application. ML can learn to recognize input correctly in the given context of a capability map and connect the user to the right transactions, guided by user-defined boundary rules and regulatory constraints. Like the human brain: a mix of inherited and trained capabilities.
We have used ML for automated discovery of Next Best Action recommendations for about five years (the patent is a few years older) and found that no one had any interest in using it, mostly for odd reasons: general fear of the technology, and aloof rejection of the idea that software could actually do that. Well, it still does in our platform as the famous User-Trained Agent, but those who do BPM do not get it or do not want it. Those who do not like BPM do not like any ML functionality connected to it either.
We do not use ML to emulate human reasoning (which it can’t, because human reasoning is largely emotional, which is also why it works so well) but simply to observe human actions and interactions in the well-defined environment of our platform. Once the ML software sees repeated patterns of actions and data, it starts to recommend these actions, no longer requiring all the BPM mumbo jumbo. But still, there is little interest, given the hordes of BPM ‘experts’ who need a job.
But be aware that the best ML pattern matching won’t substitute for emotional interactions between humans and won’t replace human decision making. What ML should be doing is simply augmenting human interaction, especially where it cannot be person-to-person. ML can improve human learning by recommending the best actions of other people.
Humans also do not need more data or more logic rules for better decisions. We need more real empathy and compassion, not just technocrats making policy for political or financial gain. We have a problem today with human decision making that is not compassionate; when you see how many doctors treat patients, how executives treat the workforce or customers, and how politicians treat their constituencies, it is clear that the problem is rampant. A shared moral framework is the basis of every society, and AI won’t be relevant to that anytime soon.