The ‘Zero-Code BPM’ White Lie

Obviously a lot of the discussion and disagreement (which is good) has to do with what zero-code for BPM is supposed to mean in the marketing. The term is as misused as ‘intelligent’ or ‘smart’ when applied to a Business Process Management system. Clearly most people take zero-code to mean no need for programming in Java or C++. Fair enough. But drawing a flow diagram and writing rules are forms of programming, and so are object models, decision tables, and any other way of telling the computer explicitly what to do, because all of them involve logic. There are many opinions as to what is acceptable to business users and what is not. I propose that the only thing acceptable to the business is their own business language and nothing else.

But there is one more issue: the point in time at which the programming is needed. Programming might be required during setup of the system, during setup of the data or user interfaces, or during creation of process templates. I propose that only the last is relevant. Zero-code, however, is a technical perspective that misses the point. Automated code generation from models of any kind is also nonsense, because the generated code is mostly not maintainable. Only a model-preserving technology that allows the models to be modified during execution can deliver a ‘less-code’ environment, and while you are doing this you need a versioning and deployment mechanism driven from a central repository. Many BPM solutions that claim zero-code today are a rag-tag technology stack of different, distinct products, and they still do not have the content and rules functionality embedded. For them, zero-code refers ONLY to the ability to create a flow diagram. That there is no process without content is still widely ignored. Truly amazing …

Zero-code claims that non-technical people can define processes that have already been analysed. The discussion should really be about how business knowledge can be inferred from what the business performers are actually doing and how such knowledge can be continuously improved. A flow diagram is nowhere near enough; it is just 20% of all the information needed. BPM does this today by interviewing the performers and by setting up a huge BPM bureaucracy for continuous improvement. The performers cannot improve the processes themselves because of the bureaucracy needed to manage the design for the necessary coding. So what is needed is a zero-programming solution, regardless of whether the programming is code or graphical modeling.

A solution that learns from past cases, as suggested by Keith Swenson in his blog post, is a step in the right direction. But I propose that if you mine (statistically analyze) past data you will produce as many bad predictions as good ones, and they will therefore be irrelevant; over a long enough time it all follows the Gauss curve. My main complaint is that process mining is not a zero-programming solution either, because data mining, process mining, and most forms of knowledge systems require highly educated AND experienced AI experts alongside business experts to do anything sensible.

Therefore Keith’s suggested zero-programming approach will not work either. Two core things are missing. Mining knowledge from a past case/process is a fool’s errand: there is not enough information available. The main problem is that you cannot collect the relevant data from existing systems and from static logs. More data simply means more noise. You need less, but relevant, data about each single decision taken. You also cannot do continuous improvement this way, because you do not know how to map changes in recent cases against former ones. Are these new variants, or has the process changed in general?
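To make the distinction concrete, here is a minimal sketch of the difference between a typical event-log entry that process mining works with and a per-decision record captured at the moment the performer actually decides. It is purely illustrative; the record types and field names are my own and are not taken from any BPM product or log format.

```python
from dataclasses import dataclass
from datetime import datetime

# A typical event-log entry, as process mining sees it: it records
# THAT something happened, but not the context in which the performer
# decided to do it.
@dataclass
class LogEvent:
    case_id: str
    activity: str
    timestamp: datetime

# A per-decision record captured at the moment of choice: less data
# overall, but each record carries the context (goal pursued, relevant
# case data, chosen action, outcome) that a learning component needs.
@dataclass
class DecisionRecord:
    case_id: str
    performer: str
    goal: str                   # the business goal being pursued
    case_state: dict            # relevant case/content attributes at decision time
    chosen_action: str
    goal_reached: bool = False  # filled in later, once the outcome is known
```

The exact fields do not matter; the point is that the second record is written at decision time with the goal and the case context attached, rather than being reconstructed later from a static log.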

Process Discovery with the User-Trained Agent

Data collection, process optimization, and continuous improvement have to happen in real time, which means that current BPM and Case Management systems will never be able to implement them, as they lack real-time access to such information. Some of the most relevant information about process decisions is INSIDE THE CONTENT, and BPM systems (and process mining, for that matter) are totally blind on that front.

The second element is that the machine-learning component (I call ours an agent) has to be able to categorize, segment, and reuse knowledge. It can never learn the WHY, but it can learn the WHAT-FOR. This means that goals, outcomes, and handovers have to be defined by the business, not flows and rules. Goals are needed so that the performers can decide what to do. Over time the agent can learn to recommend actions that have been goal-achieving, by observing user actions in the context of time and case structure using pattern matching. The only ones who know whether the actions to be taken are good ones are experienced users, and these users have to be able to tell the agent when a recommendation is wrong. That is the only proper learning cycle, and it is actually how we teach other people. This is how you get continuous improvement into the system without going through a bureaucracy or process mining experts.
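As a rough illustration of that learning cycle, the sketch below pattern-matches case context against past goal-achieving actions and lets an experienced user confirm or reject a recommendation. It is a toy example under my own assumptions, not the actual User-Trained Agent or any part of the Papyrus Platform.

```python
from collections import defaultdict

class RecommendingAgent:
    """Toy sketch of the learning cycle: observe actions in context,
    recommend what has led to the goal before, and let experienced
    users correct recommendations that turn out to be wrong."""

    def __init__(self):
        # Score per (context pattern, action): raised when the action helped
        # reach the goal or was confirmed, lowered when a user rejects it.
        self.scores = defaultdict(float)

    def _pattern(self, goal, case_state):
        # Crude context pattern: the goal plus the case attributes present.
        return (goal, frozenset(case_state.items()))

    def observe(self, goal, case_state, action, goal_reached):
        # Learn from what a performer actually did and whether the goal was reached.
        key = (self._pattern(goal, case_state), action)
        self.scores[key] += 1.0 if goal_reached else -0.5

    def recommend(self, goal, case_state):
        # Suggest the best-scoring action seen in this context, if any.
        pattern = self._pattern(goal, case_state)
        candidates = {a: s for (p, a), s in self.scores.items() if p == pattern and s > 0}
        return max(candidates, key=candidates.get) if candidates else None

    def correct(self, goal, case_state, action, accepted):
        # The experienced performer telling the agent it was right or wrong.
        key = (self._pattern(goal, case_state), action)
        self.scores[key] += 1.0 if accepted else -2.0
```

A rejected recommendation is simply a call to the correction step, which is exactly the kind of learning event described above, and it needs neither a process mining expert nor a change request against a model.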

But I am not theorizing about this. This is what we offer today, and it was a key element in our top technology rating in the ACM/DCM report by Forrester. https://isismjpucher.wordpress.com/2014/04/12/making-sense-of-analyst-reports/

So is what we offer a zero-code solution? No, and it is not relevant, which is why we do not call it that. Some ‘coding’ in our script language PQL is needed to set up the system and the data interfaces, and for the process templates one usually needs process fragments and business rules. The user interface is another story again, but it is just as relevant, because it is what the performers actually use. It has to present all information and possible actions in business language as defined in the ontology; no BPM or ACM functionality should be visible or need to be understood. Is that always possible? No, mostly because IT and BPM consultants, and not just the business users, make the implementation decisions.

The reality is that IT and BPM experts are even more sceptical than business people about a system making recommendations autonomously, even though the knowledge comes from the very people who perform the processes anyway. Could the agent make a wrong recommendation? Yes, just as some performers make wrong decisions. Both have to be corrected. Only Six Sigma and BPM pundits see this as a problem; in reality it is a learning event that improves the business knowledge inferred by the system.

2 Comments on “The ‘Zero-Code BPM’ White Lie”

  1. Max,

    Thanks for the link to my post about zero code; however, you misrepresented it a bit here. The key aspect is the human, who is the real agent learning. The system does mining and analytics to support the person, not replace them. This is very important, since it takes expertise in the topic to decide what to do (e.g. a doctor would decide treatment based on recent research and external events, and not just the history of past action in that organization). The analytics helps the human judge the efficacy of their work, and does not replace their planning activity.

    However, readers should also realize that the main point of my post was to give a clear example to highlight the fact that drawing a diagram of the process is still programming. Programming something does not mean you are using a text-based language with obscure syntax and symbols. It is not the typing of the text that makes programming difficult, but instead the working out of exactly what has to be done. Programming with BPMN can be just as complex and difficult as programming with C#, and I want to dispel the myth that somehow programming with a diagram makes everything easier. The difficulty with programming comes from the mental activity of deciding in detail exactly what the machine has to do, and not on whether it is text or graphical notation.

    There were several talks at BPMNext similar to the ideas that you present here. These goal-driven approaches are VERY promising. The goals are not detailed “if this event happens then do that” but instead much higher level than that, and can be based upon statistical measures of the organization. For example, (1) keep costs low, sales high, (2) overtime labor is twice as expensive, and (3) delays in response to customer can mean losing the sale. The system then generates (infers) the detailed processes that are followed, and might tune the process to the changes in demand. It is not strictly a zero-code solution, because people still have to set goals that drive the process model generation. If your estimate that a process contains only 20% of the information holds, this might imply that specifying the goals is five times the amount of work. It still might be worth it, because the generated/inferred processes would be automatically tuned to the changing, evolving situation. Very promising indeed.

    • Hi Keith, thanks for the comment.

      I read your post again and I do not think that I am misrepresenting anything. I, too, say that the key is the performer and that he is not being replaced, but that experienced humans are necessary to continuously improve the recommendations. I just propose a different approach from process mining, which I know to be too limited and which is most certainly not a zero-code approach.

      I, too, say clearly that creating flow diagrams, models, and rules is a kind of programming, though admittedly it is not ‘code’. We are in absolute agreement on the complexity of BPMN and its limited use for the business.

      Let me point out again that I am not presenting ‘IDEAS’. What I am discussing here is what we developed in 2007 and what has been part of the Papyrus Platform since 2009. It has evolved a lot since then. When people present these as their own ideas they are plagiarizing. It was ISIS Papyrus and Whitestein who brought goals, in different ways, into process management.

      Goals in a process are not high-level business objectives, as those offer little benefit to the performer and the agent. The goals and subgoals should ideally be enumerable and verifiable, but they can rest on human judgment too. The system cannot infer anything without the human performer actually doing the right thing a few times. That trains the agent to map the patterns to a defined goal that has been successfully reached.
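      To illustrate what ‘enumerable and verifiable’ means here, a sub-goal can be expressed as a simple check over the case data that either holds or does not. This is a made-up example with invented field names, not PQL and not taken from any real template:

      ```python
      # Hypothetical sub-goal check for an insurance-claim case: it is
      # verifiable because it can be evaluated against the case data at
      # any time, unlike a high-level objective such as "keep costs low".
      def subgoal_reached(case_state: dict) -> bool:
          return (case_state.get("damage_assessed") is True
                  and case_state.get("approval_amount", 0) > 0
                  and case_state.get("customer_notified") is True)
      ```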

      Specifying goals must also be part of any process design effort, but they are simply not entered explicitly into the process execution, because there is no such thing in BPMN. When I say that the flow diagram is just 20% of the process, this is not because of the goals but because of the required data models, the data integration, the rules, the forms, the GUI, and the related inbound and outbound content. The flow itself does not do much in reality.

      I have never claimed that we target a zero-code solution, because I see that as irrelevant; I explain that in my post. Defining goals is a business activity and can be done by business people. But yes, some goals may be described by something that looks like a rule.

      Thanks again for the comment and I think that we are very much saying the same thing.
