I have several points to make and will cover them one by one. (Note: This merges content from various previous posts for clarity.)
Why is BPMN 2.0 as-is not usable as a design tool for business users?
Disclosure: I am not against BPMN; we use it with extensions for adaptive, goal-oriented processes. But BPMN with its 520 pages of specification does not enable an untrained business user to describe processes. Why? Just look at all the issues that make it hard for an expert. Do you think a business user can handle any of them? Despite the progress since 1.1, the enhanced and additional definitions are still prone to very ambiguous models. An ‘Activity’ can still represent any number of different functions, and the new event types lack detail on how they interact with the flow. There are now artifacts, but without the ability to describe methods, attributes and states. Not surprisingly, there are no business rules. The proposed UML-like data modeling has not made it into the spec. All inbound and outbound content, as well as GUI artifacts and rules, will still have to be handled outside the BPMN model. Hence: no model preservation, no roundtrip, no usability by the business, and thus a lot of project management bureaucracy.
Because of the above, BPMN can only represent a small part of a complete process, and it is neither usable as-is for very dynamic processes nor easy enough for business users to describe their processes with. The interaction of various processes in particular is extremely difficult to design and coordinate. Business users cannot create event-handling exceptions for an intersecting set of processes. All these issues need to be, and can be, solved as extensions before BPMN becomes usable, and that is the great thing about it. That does not, however, resolve the general problem with flowcharts.
The only real-world entities in a flowchart model are the acting agents (users), and their decisions to perform functions on artifacts cannot be modeled. Even BPMN 2.0 with its additional artifacts is STILL functionally blind to the inner functionality of the major elements of a business process (content and data in context with rules) and therefore to its decision logic. It remains basic routing. Flowchart modelers see the world (a business) as a ‘complicated’ system that can be decomposed into a sequential causal chain of functions ‘to be executed’, user actions to be assigned, and a limited set of logic switches that causally control the execution. The real-world process knowledge is, however, hidden in a) functions performed by different agents that change the states of business entities, b) patterns of entity states and attributes that cause different agents to perform certain functions, and c) a variety of complex business events that can change entity states at any time. Flowcharts are unable to represent that even if one could extract and analyze all the information from the agents and the entities! Why?
Flowcharts can’t model Complex Adaptive Systems (CAS)!
A classical model of physics (e.g. a watch) is complicated, but the economy or a large business that consists of individually acting agents is complex (Anderson, Arrow, Pines, 1988). The flowchart fallacy is to see a business as complicated rather than complex. Holland et al. (1986) proposed a method of real-world modeling in which the world consists of various states S, where a transition function F(S) changes a given state at time t into a new state at time t+1. The caveat is that in a complex system the modeller can neither accurately describe the state space with all its entities, nor can the function F (the causality enacted by the agents) be accurately known. So clearly, all models are wrong, but some models can still be useful. Is that the case for flows in CAS?
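Written compactly (my own notation as a restatement of that framing, not Holland's exact formulation):

```latex
% Holland-style state transition, restated for illustration: the real system evolves as
S_{t+1} = F(S_t)
% The modeller only has approximations \hat{S} and \hat{F}. In a complex adaptive system
% neither the state space S (all entities and their attributes) nor the agents' transition
% function F can be fully known, so in general \hat{F}(\hat{S}_t) \neq S_{t+1}.
```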
Only in a very limited sense. The Law of Large Numbers allows us to calculate a statistical representation of the real world across a large number of entities. While it enables us to see some data distribution at certain snapshots in time, this observation does not, however, describe causal rules that could be linked into a causal chain. It cannot be decomposed into why the various agents came to a particular decision. The individual agents have only a certain probability of interacting with certain data patterns in the observed way. It is not financially feasible to analyze all possible variants of all complex processes by questioning the agents, nor to ensure that they remain in line with needs over time. Therefore ‘standardized’ processes are designed that hopefully cover 80% of process variants over time and thus would produce the savings. This is not the ‘Pareto Principle’, by the way; that would instead suggest, for example, that 20% of processes produce 80% of profits.
The ‘getting 80% right’ proposition fails, however, because it does not take process interdependencies into account. Let’s assume that one process owner has only ten processes to achieve his goals and that he manages to get each one to cover 80% of all variants (as proposed by BPM methodology); the ten processes will intersect and also produce a combined set of possible variants. The question is how much of the total process space (as given by the joint graphs) is covered by these main variants. To determine the overall coverage ratio we use the cross-correlation matrix for all processes. If we assume that all variants are independent (the cross-correlation matrix is the identity matrix), we can calculate the answer: 1 / (1 + n * (1 - p)); with n = 10 and p = 0.8 we get ~33% (thanks to my colleague Dr. A. Adensamer). Only A THIRD of all possible variants for this process owner are covered! My own calculation using simple joint probability is even lower. And to achieve 80% variant coverage across all ten processes, each one would have to be 97% correct! If all process owners of a business are only getting a third of their processes right by automating them, then the cost of achieving quality at process handoffs will be much higher than the optimized 80%-process seems to suggest. Is anyone surprised that the flowcharts may not be satisfying the goals of customers and that short-term cost savings do not continue long term?
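To make the arithmetic explicit, here is a small sketch of my reading of the figures above (assuming independent processes; the numbers are illustrative, not taken from any cited source):

```python
# Rough sketch of the coverage arithmetic discussed above (illustrative only).
n = 10      # number of processes owned by one process owner
p = 0.8     # variant coverage achieved per individual process

# Estimate assuming independent variants (the formula credited to Dr. Adensamer):
coverage_independent = 1 / (1 + n * (1 - p))   # = 1/3, i.e. ~33%

# "Simple joint probability" view: all ten processes must hit a covered variant at once:
coverage_joint = p ** n                        # ~0.11, i.e. ~11%

# Per-process coverage needed so the joint coverage reaches 80%:
p_required = 0.8 ** (1 / n)                    # ~0.978, i.e. roughly 97-98%

print(f"independent-variant estimate:  {coverage_independent:.0%}")
print(f"simple joint probability:      {coverage_joint:.0%}")
print(f"required per-process coverage: {p_required:.1%}")
```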
Reductionist decomposition cannot model emergent properties!
The reductionist modeling hypothesis suggests that the further a system is decomposed into smaller elements, the more accurate the model becomes. Anderson proposed in 1972 that reductionist models are misleading for complex systems because they cannot map and predict bottom-up emergent properties, such as a group of people collaborating to combine their knowledge. Decomposition furthermore does not consider the outer context of a system. Russell Ackoff pointed out that taking a British car to pieces (analysing it) does not explain WHY it has the steering wheel on the right side. Thus reductionism cannot be used to reconstruct the business logic from the decomposed tasks of a process analysis.
Therefore BPMN 2.0 and flowchart models cannot represent real business processes, because a large business clearly is a complex adaptive system consisting of individually acting agents: its employees and customers. Trying to simplify a complex business into a ‘complicated’ one by decomposing it into small steps makes the model unable to adapt to the customer agents or to other environmental changes. Agent interaction is emergent, not designed. That is why it takes Centers of Excellence to redesign the blueprints from scratch. Because of the programmed data, content and rule functions, this requires long projects to implement, and thus flowcharts reduce the agility of a business rather than improving it.
How can processes in complex adaptive environments be supported?
In this post I do not even discuss the issue that human decision-making does not follow rational future utility but uses emotionally intuitive heuristics that are much more effective in uncertain environments than Boolean logic applied to more data. Model that with a flowchart …
I thus propose bottom-up process creation, in which knowledgeable or skilled actors assign real-world entities, with their state changes and rules, to process goals that are linked to top-down business targets. This produces a much more realistic and, most of all, adaptable model. To use BPMN for that it has to be extended. Then it can support human decision-making and make it transparent rather than replacing it. Users find it much easier to interact with real-world entities and their states than with purely abstract flows. Rather than forcing agents to perform in a certain way, the process definition only enforces basic rules of conduct, which creates substantial transparency and therefore flexibility and adaptability. Process management has to offer complex real-world models of people acting as a social group on business entities. Using SocialBPM to define flowcharts still suffers from the above limitations.
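To make the idea of entities with state changes and rules attached to process goals more concrete, here is a minimal sketch; the names and structure are hypothetical illustrations of the concept, not any product's API or a BPMN extension:

```python
# Hypothetical sketch of a goal-oriented, entity/state/rule process model (illustrative only).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Entity:                      # a real-world business entity, e.g. a claim or an order
    name: str
    state: str                     # current state, changed by the acting agents

@dataclass
class Rule:                        # a business rule evaluated against entity states
    description: str
    condition: Callable[[Entity], bool]

@dataclass
class Goal:                        # a process goal linked to a top-down business target
    description: str
    entities: list[Entity] = field(default_factory=list)
    rules: list[Rule] = field(default_factory=list)

    def achieved(self) -> bool:
        # the goal is reached when every rule holds for every attached entity
        return all(rule.condition(entity)
                   for rule in self.rules
                   for entity in self.entities)

# Usage: a knowledge worker attaches an entity and a completion rule to a goal.
claim = Entity("insurance claim", state="submitted")
goal = Goal("settle claim",
            entities=[claim],
            rules=[Rule("claim must be settled", lambda e: e.state == "settled")])

claim.state = "settled"            # an agent's action changes the entity state
print(goal.achieved())             # True
```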
Does not enforcing a flow, as in Adaptive Case Management, mean that there is less control, less efficiency, or less compliance? Absolutely not! Rather the opposite. Unlike ad-hoc or dynamic BPM, there is no intentional point of process leniency in the flow, because the focus is empowerment. Control is only applied where absolutely necessary, to allow in-process innovation and optimization by the people in the know. And they do not need to understand a BPMN Designer to do so.
Basic flowcharts are fine for very simple, repeatable processes if the design is done by experts who can also add all the other elements needed. In very rigid, bureaucratic environments, such as government agencies or hospitals, flowcharts can be usable, but they are still not necessary to exercise control. While BPMN is not a tool for business people, the representation can be helpful for understanding smaller process structures once the processes are organized around business goals and mostly controlled by rules rather than gateway flow-logic. The trick to making BPMN 2.0 usable for business users is to hide it during design, but to offer it during execution as an optional visualization of the activities performed to achieve the defined PROCESS GOALS.