Obviously a lot of the discussion and disagreement (which is good) has to do with what zero-code for BPM is supposed to mean in the marketing. The term is as misused as ‘intelligent’ or ‘smart’. Clearly most consider zero-code to mean zero programming in the classical sense. Fair enough. But make no mistake: drawing a flow diagram and writing rules are forms of programming, and so are object models, decision tables, and any other form of explicitly telling the computer what to do, because they all involve logic. There are many opinions as to what is acceptable to business users and what is not. I propose that the only thing acceptable to the business is their own business language and nothing else.
But there is one more issue, and that is the point in time at which the programming is needed. Programming might be needed during setup of the system, during setup of the data or user interfaces, or during creation of process templates. I propose that only the last is really relevant. But zero-code is a technical perspective that misses the point. Automated code generation from any form of model is also absolute nonsense, because such generated code is mostly not maintainable. Only a model-preserving technology that allows the models to be modified during execution can deliver a ‘less-code’ environment. And while you are doing this you need a versioning and deployment mechanism from a central repository. Many BPM solutions that claim zero-code today are a rag-tag technology stack of different functionality because they do not have the content and rules functionality embedded. Their zero-code ONLY refers to the ability to create a flow diagram; the content issue is widely ignored.
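To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative and not any vendor's actual API) of what model-preserving means: the process model remains a versioned data structure in a central repository and is interpreted at run time, so a change deployed to the repository takes effect even while cases are executing, and there is no generated code to maintain.

```python
# Illustrative sketch only: all class and function names are assumptions.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Step:
    action: Callable[[dict], None]     # a business action applied to the case
    next_step: Optional[str] = None    # name of the following step, if any

@dataclass
class ProcessModel:
    version: int
    steps: dict = field(default_factory=dict)

class Repository:
    """Central repository: deployment means adding a new model version."""
    def __init__(self):
        self._versions = []

    def deploy(self, model: ProcessModel) -> None:
        self._versions.append(model)

    def latest(self) -> ProcessModel:
        return self._versions[-1]

def run_case(repo: Repository, start: str, case: dict) -> None:
    step_name = start
    while step_name is not None:
        # The model is looked up on every step, so a newly deployed
        # version takes effect mid-execution -- nothing was compiled away.
        step = repo.latest().steps[step_name]
        step.action(case)
        step_name = step.next_step
```

Contrast this with code generation, where the diagram is translated into source once and the running code no longer knows the model it came from.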
The discussion should really be about how business knowledge can be inferred from what the business performers are actually doing and how such knowledge can be continuously improved. A flow diagram is nowhere near enough; it carries maybe 20% of the information needed. BPM does it today by interviewing the performers and by setting up a huge BPM bureaucracy for continuous improvement. The performers cannot improve the processes themselves because of the coding involved. Hence the zero-code discussion.
A zero-programming solution that learns from past cases, as suggested by Keith Swenson in his blog post, is the right direction. But unfortunately, if you analyze past data you will produce as many good predictions as bad ones, and they will therefore be statistically irrelevant; over a long enough time it all follows the Gauss curve. My main complaint is that process mining is not a zero-programming solution either, because data mining, process mining, and most forms of knowledge systems require very educated AND experienced AI experts alongside business experts to do anything sensible.
Therefore Keith’s suggested zero-programming solution will not work either. Two core things are missing. First, mining knowledge from past cases and processes is a fool’s errand. There is not enough information available. The main problem is that you cannot collect the relevant data from existing systems and from static logs. More data simply means more noise. You need less but relevant data about each single decision taken. You also cannot do continuous improvement this way, because you do not know how to map changes in recent cases over former ones: are these new variants, or has the process changed generally?
Data collection as well as process optimization and continuous improvement have to happen in real time, which means that current BPM and Case Management systems will never be able to implement it, as they lack real-time access to such information. Some of the most relevant information about process decisions is INSIDE THE CONTENT, and BPM systems (and process mining, for that matter) are totally blind on that front.
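As a rough illustration of what ‘less but relevant data about each single decision’ could look like, here is a hypothetical event record captured at the moment a performer acts, including attributes extracted from the content itself. The field names and the extraction stub are my assumptions for the sketch, not a real system’s schema.

```python
# Hypothetical decision-event capture; all names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionEvent:
    case_id: str
    performer: str
    goal: str                 # the business goal being pursued
    action: str               # what the performer actually did
    content_attributes: dict  # data taken from the content being worked on
    taken_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def extract_content_attributes(documents: list) -> dict:
    # Stub: a real system would classify the documents and extract the
    # business data inside them -- the information static logs never see.
    return {"doc_types": [d.get("type") for d in documents]}

def capture_decision(case: dict, performer: str, action: str) -> DecisionEvent:
    """One small, relevant record per decision -- not a bulk log dump."""
    return DecisionEvent(
        case_id=case["id"],
        performer=performer,
        goal=case["goal"],
        action=action,
        content_attributes=extract_content_attributes(case["documents"]),
    )
```

The point of the sketch is the timing and the scope: the record is created in real time, at the decision, with the content context attached, rather than being reconstructed later from a log.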
The second element is that the machine-learning component (I call ours an agent) has to be able to categorize, segment, and reuse knowledge. It can never learn the WHY, but it can learn the WHAT-FOR. This means that goals, outcomes, and handovers have to be defined by the business, not flows and rules. Goals are needed so that the performers can decide what to do. Over time the agent can learn to recommend actions that have been goal-achieving, by observing user actions in the context of time and case structure and using pattern matching. The only ones who know whether the actions to be taken are good ones are experienced users. These users actually have to be able to tell the agent when a recommendation is wrong. That is the only proper learning cycle, and it is actually how we teach other people. This is how you get continuous improvement into the system without going through a bureaucracy or process-mining experts.
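As a toy sketch of this learning cycle (my own simplification for illustration, not our actual implementation), an agent could score actions per goal and context pattern, reinforce the ones that achieved the goal, recommend the best-scoring one, and let an experienced user push a wrong recommendation back down:

```python
# Toy illustration of the recommend/correct cycle; all names are assumptions.
from collections import defaultdict

class RecommendationAgent:
    def __init__(self):
        # (goal, context pattern) -> {action: score}
        self._scores = defaultdict(lambda: defaultdict(float))

    def _key(self, goal: str, context: dict):
        # Crude "pattern matching": the goal plus the sorted case attributes.
        return (goal, tuple(sorted(context.items())))

    def observe(self, goal, context, action, goal_achieved: bool):
        """Learn the WHAT-FOR: reinforce actions that achieved the goal."""
        self._scores[self._key(goal, context)][action] += 1.0 if goal_achieved else -1.0

    def recommend(self, goal, context):
        """Suggest the historically most goal-achieving action, if any."""
        actions = self._scores[self._key(goal, context)]
        good = {a: s for a, s in actions.items() if s > 0}
        return max(good, key=good.get) if good else None

    def reject(self, goal, context, action):
        """An experienced user tells the agent the recommendation was wrong."""
        self._scores[self._key(goal, context)][action] -= 2.0
```

Rejection is deliberately weighted more heavily than a single observation: an explicit correction from an experienced user is worth more than one more observed case.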
But I am not theorizing about this. This is what we offer today, and it was a key element in our top technology rating in the ACM/DCM report by Forrester. http://isismjpucher.wordpress.com/2014/04/12/making-sense-of-analyst-reports/
So is what we offer zero programming? No, and it is not relevant and we do not call it that. There is some ‘coding’ needed in our script language PQL to set up the system and the data interfaces, and for the process templates one usually needs process fragments and business rules. The user interface is another story again, but it is just as relevant, because it is what the performers actually use. It has to present all information and possible actions in business language as defined in the ontology. No BPM or ACM functionality should be visible or need to be understood. Is that always possible? No, mostly because IT and BPM consultants make implementation decisions, and not just the business users.
The reality is that IT and BPM experts are even more sceptical than business people about a system making recommendations autonomously, even if the knowledge comes from the same people who actually perform the processes anyway. Could the agent make a wrong recommendation? Yes, and so do some performers. Both have to be corrected. Only Six Sigma and BPM pundits see this as a problem; in reality it is a learning event that improves the business knowledge inferred by the system.