An Essay on Reason, Intelligence, and Wisdom

My Subjective Perspective

To explain my perspective, let me take you back to December 27th, 2005. Fooling around with my (adult) kids at the pool, I jumped into it head-first and felt an explosion in my head and a burning pain all over my spine when I hit the bottom. I had a badly bleeding cut on my head and was able to climb out of the pool myself, but I couldn’t get up afterwards. I felt that I had injured my spine in at least two places. It took the ambulance over an hour to get me to hospital.

I had to wait in the hospital for the orthopedic surgeon to do the CT scans. I had four long hours to consider either dying from the injuries to my neck or, worse, ending up a paraplegic. I had researched spinal injuries for my novel ‘Deity’ and knew that I was lucky so far, because I never lost sensation or movement in my limbs. I also knew that problems could still arise later if the spinal cord became inflamed. The CT showed that I had cracked two vertebrae and herniated two disks in my neck, compression-fractured three vertebrae in my chest, and damaged a number of disks in my spine. I was lucky because all fractures were initially stable and they saw no need for immediate surgery. I had refused pain medication because I was worried about passing out or not being able to tell them when moving would hurt. Remembering my years of Buddhist study, I spent those four hours meditating the burning pain in my spine away. I had to stay in hospital and was given strong anti-inflammatory drugs. I was in a lot of pain, but I refused painkillers and thus couldn’t sleep lying down. I saw the pain as deserved for my idiocy. It made me focus on breathing the pain away. It worked, but my neck felt hard as a rock for months and I had lost sensation in my upper back.

Because the MRI machine was defective, they couldn’t do a scan of my spinal cord; it took them two weeks to get it fixed. Oh, I forgot to tell you that all this happened on the Caribbean island of Antigua, so surgery would have meant an emergency transfer to a Miami clinic. Once the MRI was done, the orthopedic surgeon – the leading authority in the Caribbean – told me quite bluntly that my life as I knew it was over, because my spinal cord had no room left to move. I would not be able to do sports and would soon develop painful arthritis that couldn’t be treated due to the damaged disks. That was quite a shock to me, but I comforted myself that it was better than being a paraplegic. Two months later – while already doing some physiotherapy – I was allowed to travel and flew back to Austria. I consulted a number of experts and was advised to have at least the most badly collapsed vertebrae in my chest stabilized. I knew that with a fused spine there would certainly be NO sports. I simply refused.

On my return I sold my motorcycles, as I didn’t want to see them. But six months later I borrowed my daughter’s BMW motorbike and rode it for over an hour, sometimes with tears in my eyes, mixed from pain and joy. When I told the orthopedic surgeon about my achievement, he replied that I had a death wish. I told him that I had the opposite – a wish for life. I increased my training and was pronounced completely healed two years later. I still need to do my exercises and will have to for the rest of my life, but I own two motorcycles again and go scuba diving and skiing. The only thing I gave up was paragliding.

Let me ask you this: How would you judge my decisions? I agree that I was careless. Unreasonable not to follow medical advice? Unwise to test my luck again? Maybe. But I never had to think about it. I (emotionally) knew exactly what I wanted. Today I see this accident as an important part of who I am. While obviously it would have been better had it never happened, it has turned me into a wiser person who – like much greater minds before me – no longer believes in the ultimate benefit of reason and logic.

A Philosophical Perspective

Considering something a stupid act seems to point to a lack of intelligence or reason. I can only say that I felt there was no danger in something I had done often before. My highest jump was from an 80 ft cliff; I admit that I was 20 when I did that. Intelligence is defined as the ability to acquire and apply knowledge and skills. I had the skill and knowledge for high jumping (taught by my mother, who was an Austrian champion). So there is no simple, single cause as to why this accident happened, and while it could have been avoided, that hindsight is completely useless. Overall it was my state of mind. So when are decisions wise? When we are fearful at all times? Feeling safe is a lot more dangerous than not feeling safe, but I certainly don’t want to live in constant fear. Do you think that we employ different methods of decision making when we play, shop, work in our job, or even run a business? Can we only be intelligent when we are reasonable and logical? I absolutely disagree, and so do many philosophers.

David Hume proposed that relationships of cause and effect cannot be deduced with certainty, and that therefore no knowledge is based on reasoning alone: “We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” That is in alignment with Antonio Damasio’s finding that human decision making is always emotional.

Douglas R. Hofstadter, author of ‘Gödel, Escher, Bach: An Eternal Golden Braid’, explains that logic is applied inside a system while reasoning is performed outside of it. He writes that reasoning works on models of the system, for example by drawing diagrams. “Reason” is a thought process, and “logic” describes the rules by which reasoning operates. But both are mental abstractions that exist only in our heads, even if we turn them over to be processed by machines. I therefore seriously question the argument that using computers to apply more complex logic to the same imaginary mental models makes any derived conclusions more intelligent or more accurate.

An Information Technology Perspective

All this clearly relates to business architecture models, predictive analytics, and business processes. How could a machine be intelligent or reasonable in a larger sense? The ability to apply experience and knowledge for an objectively positive outcome is considered wisdom. Can we or machines make good decisions without being wise? And is anyone suggesting that machines will do so?

Well, Gartner analyst Nigel Rayner will be speaking about ‘Judgment Day, or Why We Should Let Machines Automate Decision Making’ in Gartner’s Maverick Research series in Barcelona next week. If you are a Gartner client you can find the related research via the hyperlink.

Nigel uses research related to human psychology that I agree with. I have used the same arguments to a different end many times over the last ten years.  Thus I wonder about Nigel’s key findings:

  1. Humans are not good at making business decisions because they don’t act rationally, and the decision making environment in most corporations encourages cognitive and personal biases to flourish. However, humans are good at innovation and entrepreneurship where rational decision making is not the main activity.
  2. Machine-based simulation models (predictive analytics) are better predictors of future outcomes than human experts, and are already being used to improve decision making.
  3. Several other key technologies and disciplines (machine learning, intelligent business operations and extreme information management) are maturing, and will provide the technology foundation for the rise of machine-based business optimization systems that will make operational decisions to achieve corporate strategic goals.
  4. The role of humans in business will change to focus on setting strategic direction and providing oversight, which will be enabled by collaborative decision making technologies. Longer term, machines could take over strategy setting as well.

ad 1) Yes, humans don’t act rationally.

All human decision making is emotionally intuitive (Damasio 2003). Humans employ a number of decision-making techniques that are prone to short-term logical fallacies, because humans evolved in an environment where assumptions about the effects of long-term decisions were not necessary. Survival was mostly day to day. Humans are prone to the seven well-known biases:

  • Priming (exposure to a stimulus influences response to a later stimulus)
  • Anchoring (the tendency to rely too heavily on an initial or often-repeated piece of information)
  • Framing (mental filters defined by setting reference values)
  • Expectations (the mind sees signs of what it thinks will happen)
  • Overconfidence (humans overestimate their own skills and capabilities)
  • Inertia (innate bias toward maintaining the status quo)
  • Loss aversion (the tendency to prefer avoiding losses to acquiring gains)

I recommend reading Gerd Gigerenzer and Herbert Simon on the concept of ‘bounded rationality’, which shows very clearly that it is often better to find a satisfactory solution quickly and simply than to waste an exorbitant amount of time searching for the best one. The same is true for processes. While the effective rationality of an agent is determined by its computational intelligence (E. Tsang), testability, repeatability, and scalability are challenges for all kinds of mental and thus mathematical models. Everything else being equal, an agent with better algorithms and heuristics could theoretically make decisions closer to the optimum. But the world is not a university test lab.
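To make the satisficing idea concrete, here is a minimal Python sketch with invented numbers (the 1,000 random quotes and the aspiration level of 90 are purely illustrative, not from any real process): it accepts the first option that is good enough instead of evaluating every option to find the absolute best.

```python
import random

# A minimal sketch of Simon-style satisficing with invented data: supplier
# quotes arrive one by one, and every evaluation costs time.
random.seed(42)
quotes = [random.uniform(80, 120) for _ in range(1000)]  # hypothetical prices

def satisfice(quotes, aspiration=90.0):
    """Accept the first quote that meets the aspiration level."""
    for evaluated, price in enumerate(quotes, start=1):
        if price <= aspiration:
            return price, evaluated            # good enough: stop searching
    return min(quotes), len(quotes)            # fallback: full search

def optimize(quotes):
    """Evaluate every quote to find the single best one."""
    return min(quotes), len(quotes)

sat_price, sat_cost = satisfice(quotes)
opt_price, opt_cost = optimize(quotes)
print(f"satisficing: {sat_price:.2f} after {sat_cost} evaluations")
print(f"optimizing : {opt_price:.2f} after {opt_cost} evaluations")
```

The satisficer typically stops after a handful of evaluations at a perfectly workable price, while the optimizer pays for a thousand evaluations to improve the result only marginally, which is exactly Simon’s point.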

Life, the economy, and business do not follow cause-and-effect chains that can be calculated through planning and scheduling algorithms. Human thinking employs models with varying degrees of reliability to apply gained experience to the future, as explained well by Gigerenzer (2000) and Hawkins (2004). Planning models can be employed for manufacturing processes, power or communication networks, transportation logistics, robot control, autonomous spacecraft, or drones, where unexpected events can be minimized. For business and the economy, planning and scheduling become a problem of complexity, because real-world instances change continuously and unexpected events mean that the predefined execution does not ensure the intended outcomes. Planning and scheduling must become real-time and deal with the fact that other schedulers or humans may interfere with resources and/or dependent activities. Unexpected events during execution must be fed back into the planning to verify whether they are relevant or not. Humans do that pretty well, but no known algorithms do better in complex adaptive systems.
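As an illustration only (not any specific planning algorithm), here is a hedged sketch of the execute/monitor/replan loop described above; the task names and the random ‘unexpected event’ are invented stand-ins for real-world interference.

```python
import random

# A minimal sketch of reactive re-planning with invented task names; a random
# "unexpected event" stands in for real-world interference with resources.
random.seed(1)

def make_plan(goal, done):
    """Naive planner: all steps of the goal not yet completed, in order."""
    return [step for step in goal if step not in done]

goal = ["pick material", "assemble", "test", "ship"]
done = []
plan = make_plan(goal, done)

while plan:
    step = plan.pop(0)
    if random.random() < 0.3:                      # something interferes
        print(f"event during '{step}' - re-planning against current reality")
        plan = make_plan(goal, done)               # regenerate remaining steps
        continue
    done.append(step)
    print(f"completed '{step}'")

print("goal reached:", done == goal)
```

The point of the sketch is not the toy planner but the loop around it: execution and planning cannot be separated once the world is allowed to interfere.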

ad 2) No, predictive analytics aren’t better at predicting anything.

First, predictive analytics are too simplistic: they find possible correlations but cannot provide the complex algorithms that would be necessary for decision making that ensures better economic outcomes. Correlation is not causation, and causality assumptions can be tested in closed systems but not in the real world. I want to point to this article in Scientific American which says that the market crash of 2008 and the follow-on recession were caused by wrongly applied mathematics.
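A tiny, hedged illustration of the correlation-is-not-causation point, using nothing but invented data: two completely independent random walks frequently show a sizeable correlation purely by chance, which is exactly the kind of pattern naive predictive analytics will happily report.

```python
import random

# Invented data only: count how often two *independent* random walks show a
# "strong" correlation, i.e. a pattern with no causal content whatsoever.
random.seed(7)

def random_walk(n):
    x, walk = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        walk.append(x)
    return walk

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

trials = 200
strong = sum(
    abs(correlation(random_walk(200), random_walk(200))) > 0.5
    for _ in range(trials)
)
print(f"{strong} of {trials} unrelated walk pairs show |correlation| > 0.5")
```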

Benoit Mandelbrot wrote in Scientific American as early as 1999 that ‘established modeling techniques presume falsely that: a) radically large market shifts are unlikely; b) all price changes are statistically independent; c) today’s fluctuations have nothing to do with tomorrow’s; and d) one bank’s portfolio is unrelated to the next’s.’ Despite all the electronics that make it theoretically possible, no one would let an airplane fly without a pilot. Financial models, business models, and processes are no different.
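To illustrate Mandelbrot’s point (a) with a throwaway simulation (the Student-t with 3 degrees of freedom is an assumed stand-in for a fat-tailed market, not a calibrated model): a Gaussian model treats moves beyond five standard deviations as practically impossible, while a fat-tailed distribution of the same variance produces them regularly.

```python
import math, random

# A minimal sketch of Mandelbrot's point (a): under a Gaussian model, moves
# beyond 5 standard deviations are treated as practically impossible, while a
# fat-tailed model (an assumed Student-t with 3 degrees of freedom, scaled to
# unit variance, chosen purely for illustration) produces them routinely.
random.seed(3)

def student_t_unit_variance(df=3):
    """One Student-t draw, rescaled so its variance is 1 (requires df > 2)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    t = z / math.sqrt(chi2 / df)
    return t / math.sqrt(df / (df - 2))

n = 200_000
gauss_hits = sum(abs(random.gauss(0, 1)) > 5 for _ in range(n))
fat_hits = sum(abs(student_t_unit_variance()) > 5 for _ in range(n))

print(f"Gaussian draws beyond 5 sigma   : {gauss_hits} of {n}")
print(f"fat-tailed draws beyond 5 sigma : {fat_hits} of {n}")
```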

ad 3) Machine learning and pattern matching can’t define or optimize business strategy.

Having developed the User-Trained Agent for our Papyrus Platform, I should be glad about Nigel’s recommendation to employ machine intelligence. But I don’t agree with the suggested use case, which comes with completely false expectations. I wonder how anyone can still propose that simulations and algorithms work for the economy. We are standing at the edge of a financial abyss (and may be a step further tomorrow) because of the financial algorithms that have been used to predict the future value of debts and the risk associated with insuring them. Option-pricing theory (Black and Scholes, 1973; Merton, 1973) is one of the main culprits. Despite the fact that new theories of economic behavior have been proposed from time to time, most graduate programs in economics and finance teach only expected utility theory and rational expectations (Muth, 1961; Lucas, 1972). Portfolio optimization and the Capital Asset Pricing Model (Markowitz, 1952; Sharpe, 1964; Tobin, 1958) are just as limited. No one knows how using all these models influences each other’s outcomes. The problem is that economics is not like physics: it has to predict a complex adaptive system, which is by definition not possible. Only a complete moron of an executive would assume that these models make reasonably plausible predictions about the economic future. What about all the other theories that are hardly ever taught in economics programs?

  • Game theory (von Neumann and Morgenstern, 1944; Nash, 1951)
  • Dynamic stochastic general equilibrium models (Debreu, 1959)
  • Economics of uncertainty (Arrow, 1964)
  • Long-term economic growth (Solow, 1956)
  • Macro-econometric models (Tinbergen, 1956; Klein, 1970)
  • Computable general equilibrium models (Scarf, 1973)

We have no idea how using any of the above models, in combination and at scale, would change the economic landscape. We are not only uncertain; we are actively changing the very environment that we supposedly modeled.
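To make concrete what one of these models actually assumes, here is a minimal sketch of the Black-Scholes call-price formula named above; the formula is the standard one, but the input numbers are invented for illustration. Its elegance hides exactly the assumptions Mandelbrot criticized: log-normal returns, constant volatility, and continuous, frictionless trading.

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: (assumed constant!) volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Invented example inputs, purely for illustration.
price = black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.2)
print(f"call price: {price:.2f}")
```

Every term in the formula is precise; none of that precision tells you whether its assumptions will hold tomorrow.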

Uncertainty does not lie only in the long term or the unexpected: there aren’t even plausible algorithms that predict the prices or trends we observe in financial markets. Stock prices are mostly self-fulfilling prophecies of analysts. Even if there are periods over which a model is useful, we must accept that its limitations will be painfully revealed. The human, complex-adaptive element of uncertainty has been named “tail events”, “extreme events”, or “black swans” (Taleb, 2007), but even those aren’t considered under the make-or-break pressure in finance today. All of the models fail when put under serious scientific scrutiny. The simple fact is that the current IT of financial institutions does not allow management to obtain a holistic view of current risk, much less predict how their changes influence it. Not only do academics fail to realize this; practitioners too believe that these models work. I am stunned to bits by this …

ad 4) Yes on human strategy and oversight, and no, machines will never employ wise strategies.

So is it all a question of limited human intelligence and the limited data bandwidth of the human brain? I propose that even human intelligence is not a concept that can be simply measured. Richard Feynman, one of my quantum physics heroes, reportedly had an IQ of just 125. According to the American Psychological Association, IQ has only a 30-50% correlation with professional achievement; many other factors are more important. It follows that building a machine with a higher IQ than humans won’t make it more successful either.

But we are far from having machines with human-like IQ. Financial algorithms are simple sets of rules that respond instantly to market conditions, taking into account thousands or millions of data points, and in doing so the machines respond irrationally and stupidly to one another’s actions. A single stock can receive thousands of bids per second that have to be reacted to. This is not an efficient and intelligent capital-allocation machine ruled by precision and mathematics, but rather a completely uncontrollable feedback loop. “Our financial markets have become a largely automated adaptive dynamical system, with feedback,” says Michael Kearns, professor at the University of Pennsylvania. “There’s no science that can understand its potential implications.” And thus on May 6, 2010, the Dow Jones Industrial Average inexplicably experienced the flash crash, losing 573 points in five minutes and leaving everyone stumped to this day as to why it happened. All in all, this has turned into a huge casino. It doesn’t create jobs or hedge risks. It is a zero-sum game with the risk of imploding our financial system.
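A toy illustration of such a feedback loop (all rules and numbers are invented; this is nothing like a real trading system): automated stop-loss rules that each react to the price moves caused by the others can turn one modest dip into a cascade.

```python
# A toy sketch of a sell-off cascade (all numbers invented): each automated
# account has a stop-loss level; one small initial drop triggers the nearest
# stop, whose forced sale pushes the price down onto the next stop, and so on.
price = 100.0
stop_levels = [99.5 - 0.5 * i for i in range(40)]   # stops at 99.5, 99.0, ...
triggered = set()

price -= 0.6                                         # a single modest dip
changed = True
while changed:
    changed = False
    for level in stop_levels:
        if level not in triggered and price <= level:
            triggered.add(level)                     # stop-loss fires ...
            price -= 0.55                            # ... and its sale moves the price
            changed = True

print(f"one 0.6 point dip cascaded into {len(triggered)} forced sales, "
      f"price now {price:.2f}")
```

No single rule is irrational on its own; the instability comes entirely from the rules reacting to each other.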

All computers do is perform tasks according to rules, extremely quickly. Those rules correlate statistics to models created by humans, in the hope that the same patterns will reappear, and are thus prone to the same fallacies. Computers will just arrive at the wrong conclusions much faster. Yes, they might squeeze some momentary optimization from that, but it can be irrelevant or even detrimental within the complex interdependencies of reality and algorithms. Had my jump been less perfect I would not even have reached the bottom of the pool! A short-term perspective is dangerous.

Humans do, however, have an important advantage. Our slow – but billions-strong – organic processors run in parallel, allowing us to perform pattern recognition in an intuitive way that no computer is capable of. I propose they never will be, because they lack the emotional context. But I also propose that real-time pattern matching (like our User-Trained Agent) can VISUALIZE the links between data structures and human decisions in a complex strategic context, something human perception alone would not be capable of. Thus it can provide better transparency of complex scenarios for the human decision-maker. Predictive analytics, by contrast, are senseless statistical correlations for the kind of ignoramuses who are destroying our financial markets.

Above all: The Human Perspective

Financial algorithms are assumptions about the workings of the dynamic and ultimately human aspects of economic interactions. Economic theories are simplified distillations of emergent phenomena in complex adaptive systems. The virtue of gained knowledge has, however, become a vice, particularly when it arrogantly ignores the limitations imposed by uncertainty. The role of humans in business will have to be more important than ever before. Everyone is all hyped up about BIG DATA, which is no more than a pile of cute and sometimes interesting statistical observations about the past. Technology must reduce the ridiculous amount of data to a sensible and relevant minimum in the context that we require. We need to make the world less complex and provide decision makers with less information, not more.

Teddy Roosevelt said that we have to get our faces smeared with dust, sweat, and blood, strive wholeheartedly, and fail again and again. To learn, humans need to experience failure, fear, and pain. Our memory is not a database or a rule system. It is indexed and triggered by emotional experiences, and those who think that this is a drawback are simply ignorant. Emotional experiences trigger the release of hormones that build and strengthen new synaptic connections, resulting in what we call expertise. Human intelligence is infinitely more than data or algorithmic processing, and over time it turns experience into what we call wisdom. Wisdom is good judgment in applying experience and knowledge to decisions. Wisdom tells us that we know nothing for certain and stops us from being arrogant pricks. Building computer models is also a kind of overconfidence in our abilities. IT IS NOT WISE!

Some propose that wisdom requires control of human emotions – such as greed – so that universal principles and reason prevail to determine one’s actions. I do not think that this is true. Reason is not a counterweight to emotion; it is the balance of our emotions and empathy that helps us make wise decisions. It is human arrogance to apply pure reason and logic – computerized or not – to ‘facts’; in the long term that will always lead to poor outcomes. The best we can do in IT is not to substitute for human reasoning, but to provide better context (transparency) for emotional decision-making. What I am doing with the User-Trained Agent is learning from such human decisions and enabling people to share this knowledge in a process management environment.

Let me close with George Bernard Shaw: “All progress depends on the unreasonable man.”
