The future is Machine Learning, not programs or processes.

I was once again voted the most outspoken BPM blogger on earth. Hey, why not the universe? But thanks for the mention. I won’t, however, waste my time and yours on BPM predictions for next year; instead, let’s discuss the medium-term future of process management: machine learning.

Machine Learning used to be called Artificial Intelligence and became popular when IBM’s Deep Blue beat chess world champion Garry Kasparov in 1997. The victory wasn’t, however, a success of AI, but a result of Joel Benjamin’s game strategy and an ensuing Deep Blue bug that led to a final draw and thus a win for IBM. And IBM did it again in 2011 with Watson, which won the popular Jeopardy! TV game show in a similar publicity stunt that has little relevance for machine learning. Jeopardy! uses a single semantic structure for its questions, so finding and structuring answers from a huge Internet database was not that hard. Watson had the hardest time with short questions that yielded too many probable answers. It used no intelligence whatsoever in providing its answers.

1997: Kasparov is beaten by IBM’s Deep Blue.

Why, then, do I say that the future of BPM lies in this obscure AI arena? First, machine learning is not about beating humans at some task. And second, I see machine learning as ideal for work that can’t simply be automated. My well-documented opposition in this blog to orthodox BPM stems from the BPM experts’ illusion that a business and the economy are designed, rational structures of money and things that can be automated. In reality they are social interactions of people that can never be encoded in diagrams or algorithms. I have shown that machine learning can augment the human ability to understand complex situations and improve decision making and knowledge sharing.

To do so, I turned my attention away from writing programs and processes many years ago. My research is into how humans learn and how that understanding can be used to create software that improves how people do business without intending to replace them. For me the most important discovery was that humans do not use logic to make decisions but emotions (Damasio). Using logic requires correct data and perfect control, both of which are unattainable. Therefore humans developed the ability to come to decisions under uncertainty (Gigerenzer) using a wide variety of decision biases (Kahneman, Tversky). These aren’t, however, fallacies but very practical, simplified decision mechanisms. Many books on the subject focus on how wrong these biases can be in some situations. They ignore that logic-driven decisions are equally wrong in all other situations because the underlying data are wrong. Logic is only better in theory or in the closed shop of the laboratory.

While machine learning collects past information to learn from, the past never predicts the future. We can only identify general probabilities. Take a simple roulette wheel, for example: no record of past spins tells you the next number. But people are fascinated by predictions (see astrology), and managers want certainty, which is unattainable. Therefore they like buzzword-driven hypes suggesting that if we collect and analyze more data about more things, we will get better predictions to precode decisions for the future. This is mirrored in the ‘process mining’ approach, which assumes that collecting more data on more business processes makes them more predictable. That is in fact a misrepresentation of what ML can do.
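
The roulette point can be made concrete with a short simulation. This is an illustrative sketch of my own, not from any product: a fair wheel is spun many times, and the chance of red after a streak of three blacks is compared with the overall chance of red.

```python
import random

random.seed(42)

# A fair European roulette wheel: 18 red, 18 black, 1 green (zero).
# We ask: after a streak of three blacks, is red any more likely?
# The gambler's fallacy says yes; the independence of spins says no.
def spin():
    return random.choice(["red"] * 18 + ["black"] * 18 + ["green"])

spins = [spin() for _ in range(200_000)]

red_overall = spins.count("red") / len(spins)

# Collect the outcome following every run of three blacks.
after_streak = [
    spins[i + 3]
    for i in range(len(spins) - 3)
    if spins[i] == spins[i + 1] == spins[i + 2] == "black"
]
red_after_streak = after_streak.count("red") / len(after_streak)

print(f"P(red) overall:            {red_overall:.3f}")
print(f"P(red) after three blacks: {red_after_streak:.3f}")
# Both hover around 18/37: the past yields only the general
# probability, never the next outcome.
```

Collecting ten times more spins narrows the estimate of 18/37, but it brings you no closer to predicting the next spin, which is exactly the limit of data-driven prediction described above.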

Algorithms can be used to automate mechanical functionality in real time in a factory, airplane, or car. Take Google’s self-driving car, military drones, or robotic assembly lines. Let’s not forget that they don’t decide where to go and what to do when. They are surroundings-aware robots that follow a human-given directive. That is not intelligence. Thus one can’t automate a business process or any complex human interaction the same way.

True innovation in process management won’t be delivered by rigid-process-minded ignoramuses, who fall for the illusion that correct decisions can be encoded. It will arrive in the arena of machine learning that will help humans to understand information to make better decisions.

What is machine learning and what is it not? And how far are we?

Fifty years ago Marvin Minsky had a vision that computers would fairly soon be as intelligent as humans, or more so. He proposed the use of software called neural networks that mimicked human brains. However, a human brain does not work by itself; it is a complex construct of evolved brain matter, substantial inherited innate functionality, and learned experiences, many of which are only available through our bodily existence. Our experience of self is not a piece of code, but a biological function of our short-term memory and the connection to our body through the oldest part of the brain, the medulla, which sits atop our spinal cord. Without our hormonal drives, human intelligence and decision-making would not even develop. What makes us human are our biochemical emotions: the ability to feel fear, love, and compassion. That is accepted science. Therefore a purely spiritual entity (our soul?) or a logical function without a body and body chemistry can’t feel either, and thus won’t possess human-like intelligence. It won’t be able to take human-like decisions. But machine learning can provide benefits without the need for human-like intelligence. So in the last few years all large software companies have jumped on the machine learning bandwagon.

While IBM has no more to offer than publicity stunts, Facebook has no interest other than utilizing the private information of its 700 million users to make money. Facebook uses the buzzword ‘deep learning’ for identifying emotional content in text to improve its ad targeting through big-data mining and prediction. Supposedly some of this is already used to reduce the Facebook news feed to an acceptable amount. My take? It isn’t working, much like Netflix movie suggestions or Amazon’s product recommendations. Why? A statistical distribution can’t judge my current emotional state!

But ML isn’t a ruse. It is real. Minsky’s neural networks are still being explored and improved for voice and image recognition and for semantic, contextual mapping. These are important, though low-level, capabilities of the human brain’s pattern recognition. Japanese, Canadian, and Stanford University researchers, for example, developed software that classified the sounds it was hearing into a few vowel categories correctly more than 80 percent of the time. Face recognition, too, is already extremely accurate today. Image classification is successfully used to recognize malignant forms of tumors for cancer treatment. In fact, voice recognition in Apple dictation on both iOS and OS X is extremely good at understanding spoken sentences. The hidden lesson is that Apple uses both a dictionary and a grammar library to correct the voice recognition. I have written much of this post using Apple dictation. The important progress in this area is the recognition of NEW common image features at a much higher success rate than humans can achieve. But in all these approaches it is the human input that decides whether the patterns are relevant or not. Man cooperates with machine; machine does not replace human intelligence.

So what is Google up to in machine learning?

The most publicized and successful Google venture in this domain is the self-driving car: a great example of how real-time data sensors, in combination with a human-created world map (Google Maps, obviously), allow a machine to interact safely and practically with a complex environment. Don’t forget that the car is not controlled by a BPM flow diagram; it is totally event and context driven. So much for BPM and the ‘Internet of Things’ …

I dare to put Ray Kurzweil’s work at Google in the same category as IBM’s, with illusions similar to Minsky’s. I have known Ray Kurzweil since the days he created the K250 synthesizer/sampler in 1984. It was the first successful attempt to emulate the complex sound of a grand piano. I was the proud owner of one in my musician days. It was inspired by a bet between Ray Kurzweil and Stevie Wonder over whether a synthesizer could sound like a real piano. It was awe-inspiring technology at the time, and it too led to predictions that performing musicians would become obsolete. It is obvious that this did not happen.

Kurzweil joined Google in 2013 to lead a project aimed at creating software capable of understanding text questions as well as humans can. The goal is to ask a question just as you would ask another person and receive a fully reasoned answer, not just a list of links. Clearly this is reminiscent of Siri and Wolfram Alpha, and both have been at it for a while.

Kurzweil’s theory is that all functions in the neocortex, the plastic (meaning freely forming) six layers of neuron networks that are the seat of reasoning and abstract thought, are based on a hierarchy of pattern recognition. That is not a new theory at all but pretty well established. It has led to a technique known as “hierarchical hidden Markov models,” which has been in use in speech recognition and other areas for over ten years. Very useful, but its limitations are well known. Kurzweil, however, proposes that his approach will achieve human-like intelligence if the processor can provide 100 trillion operations per second. A human brain is, however, not just a neocortex! And more processing power is not going to solve that problem.
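
As a concrete illustration of the technique named above, here is a minimal, flat hidden Markov model decoded with the Viterbi algorithm; hierarchical HMMs stack such models, but the core mechanics are the same. All states and probabilities below are invented toy values, not taken from any real speech recognizer.

```python
# Toy HMM: hidden states are phoneme classes, observations are letters.
states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {
    "vowel":     {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.6, "consonant": 0.4},
}
emit_p = {
    "vowel":     {"a": 0.6, "t": 0.1, "n": 0.3},
    "consonant": {"a": 0.1, "t": 0.5, "n": 0.4},
}

def viterbi(observations):
    # trellis[t][state] = (best path probability, best previous state)
    trellis = [{s: (start_p[s] * emit_p[s][observations[0]], None)
                for s in states}]
    for obs in observations[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (trellis[-1][p][0] * trans_p[p][s] * emit_p[s][obs], p)
                for p in states)
            row[s] = (prob, prev)
        trellis.append(row)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: trellis[-1][s][0])
    path = [last]
    for row in reversed(trellis[1:]):
        path.append(row[path[-1]][1])
    return list(reversed(path))

print(viterbi(["t", "a", "n"]))  # most likely hidden state sequence
```

The well-known limitation is visible right in the tables: every probability must be estimated from past data, so the model can only replay the statistics it was trained on.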

In machine learning less is always more!

Google thus isn’t betting all its money on Ray Kurzweil but recently spent $400 million to acquire a company called DeepMind that attempts to mimic some properties of the human brain’s short-term memory. It too uses a neural network that identifies patterns as it stores memories and can later retrieve them to recognize texts that are analogies of the ones in memory. Here the less-is-more approach is used. DeepMind builds on the 1950s experiments of American cognitive psychologist George Miller, who concluded that human working memory stores information in the form of “chunks” and that it can hold approximately seven of them. Each chunk can represent anything from a simple number to an abstract concept pointing to a recognized pattern. In cognitive science, the ability to understand the components of a sentence and store them in working memory is called variable binding. The additional external memory enables the neural network to store recognized sentences and retrieve them for later expansion. This makes it possible to refer to the content of one sentence as a single term or chunk in another one.
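
A toy sketch may help picture variable binding with an external chunk memory. Everything here (the class name, the seven-chunk limit as a capacity, the eviction rule) is my own invention for illustration; it contains no neural network and is not DeepMind’s design.

```python
# A stored sentence becomes a named chunk; a later sentence can refer
# to it by name and be expanded back to the full content.
class ChunkMemory:
    CAPACITY = 7  # Miller's "magical number seven" working-memory limit

    def __init__(self):
        self.chunks = {}

    def store(self, name, content):
        if len(self.chunks) >= self.CAPACITY:
            # Evict the oldest chunk: a crude stand-in for forgetting.
            self.chunks.pop(next(iter(self.chunks)))
        self.chunks[name] = content

    def expand(self, sentence):
        # Replace any $name reference with the stored chunk's content.
        for name, content in self.chunks.items():
            sentence = sentence.replace(f"${name}", content)
        return sentence

memory = ChunkMemory()
memory.store("claim", "the past never predicts the future")
print(memory.expand("Max argued that $claim."))
# -> Max argued that the past never predicts the future.
```

The point of the external memory is exactly this indirection: one sentence can stand in for another as a single chunk, which is what lets a system build on earlier recognized structure.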

Alex Graves, Greg Wayne, and Ivo Danihelka at London-based DeepMind call their machine a ‘Neural Turing Machine’ because of the combination of neural networks with an additional short-term memory (as described by Turing). While this is a great approach, it lacks the ability for human interaction and training, which I see as the key aspect for practical use. But variable binding is a key functionality for intelligent reasoning.

Human collaboration and human-computer cooperation

The future is using computing as a tool to improve human capabilities, not to replace them; hence BPM is my pet peeve in large corporations. To illustrate the point I have been making for over a decade, I recommend watching this interesting TED Talk by Shyam Sankar on human-computer collaboration.

Sankar talks about J.C.R. Licklider’s human-computer symbiosis vision to enable man and machine to cooperate in making decisions without the dependence on predetermined programs. Like me, Licklider proposed that humans would be setting the goals, formulating the hypotheses, determining the criteria, and performing the evaluations, while computers would deal with all operations at scale, such as computation and volume processing. They do not replace human creativity, intuition and decision-making.

So what are the aspects of machine learning that are available and usable and do not suggest Orwellian scare scenarios? Well, machine learning technology is unfortunately perfectly suited to, and broadly used for, surveillance, but let’s focus on the positive for the moment. Image and voice recognition and follow-on classification are areas where we have reached the stage of everyday practical use. We have been using image- and text-based document classification in Papyrus for 15 years. Machine learning is used in Papyrus for character recognition, text extraction, document structure, sentiment analysis, and for case context patterns related to user actions – with the so-called UTA or User-Trained Agent. Pattern recognition for case management that uses the kind of human training and cooperation that Sankar suggests was patented by me in 2007.

What the UTA (User-Trained Agent) does, and what my patent describes, is that we do not look for patterns that predict that something will happen again. In human-computer collaboration it is the repeated human reaction to a particular process pattern that is interesting, and therefore one can make others aware of the likelihood that such an action is a good one. This ML function does not just find patterns; it analyzes how humans react to patterns. Users can also react to a recommended action by rejecting it. As I do not prescribe the process rigidly but require that the goals to be achieved are defined, it is now possible to automatically map a chain of user actions to goal achievement and let a user judge how fast or efficient that process is.
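
To make the idea tangible, here is a hypothetical sketch of such a recommender: it counts how humans reacted to a context pattern and surfaces the most common reaction, with rejection acting as negative training. The class, method names, and weighting are my own invention; this is not the patented UTA algorithm.

```python
from collections import Counter, defaultdict

# Records human reactions per context pattern and recommends the most
# common one -- it analyzes how people react to patterns rather than
# predicting that a pattern will recur.
class UserTrainedAgent:
    def __init__(self):
        self.reactions = defaultdict(Counter)

    def observe(self, context_pattern, user_action):
        self.reactions[context_pattern][user_action] += 1

    def reject(self, context_pattern, action):
        # A rejection is negative training: down-weight the action.
        self.reactions[context_pattern][action] -= 2

    def recommend(self, context_pattern):
        counts = self.reactions.get(context_pattern)
        if not counts:
            return None  # nothing learned for this pattern yet
        action, weight = counts.most_common(1)[0]
        return action if weight > 0 else None

agent = UserTrainedAgent()
for _ in range(3):
    agent.observe(("claim", "missing-signature"), "request_signature")
agent.observe(("claim", "missing-signature"), "escalate")
print(agent.recommend(("claim", "missing-signature")))
```

Because users can reject suggestions, the agent's knowledge stays under human control: the human trains the machine, and the machine merely surfaces what colleagues did in the same situation.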

But how practical is such machine learning for simplifying process management for the business user? Does it require AI experts or big-data scientists and huge machines? Absolutely not, as it too uses the less-is-more approach. Recognized patterns are automatically compacted into their simplest, smallest form, and irrelevant information is truncated. But in 2007 it still used IT data structures and not business terminology. Using an ontology to describe processes in business language enables human-to-human collaboration and run-time process creation, and simplifies human-computer cooperation.

Papyrus thus uses a simplified form of ‘variable binding’ for process descriptions by means of an ontology. Such an ontology entry always has a subject, predicate, and object, just like the DeepMind short-term memory. Now the UTA can identify process patterns using the ontology terms. The first neural-network-based User-Trained Agent version in 2007 could not explain why it suggested an action for a process pattern. Using the ontology to identify the pattern similarities in different work processes (really cases), one can tell in business terms why an action is recommended.
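
A subject-predicate-object entry can be sketched as a minimal triple store. The business terms below are invented examples of my own, not Papyrus’s actual ontology; the point is only that a recommendation becomes explainable by querying the triples in business language.

```python
# A minimal subject-predicate-object triple store.
triples = set()

def tell(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def ask(subject=None, predicate=None, obj=None):
    # None acts as a wildcard in any position.
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)]

tell("ClaimCase", "hasGoal", "ClaimSettled")
tell("RequestSignature", "advancesGoal", "ClaimSettled")
tell("Escalate", "advancesGoal", "ManagerReview")

# "Why is this action recommended?" -- answerable in business terms:
print(ask(predicate="advancesGoal", obj="ClaimSettled"))
```

Because both the patterns and the explanation live in the same business vocabulary, the answer to “why this action?” can be given to the user directly instead of pointing at opaque network weights.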

Business analysts create at design time a business ontology that the non-technical business users will use at run-time to create their processes and content. The technical effort is mostly related to creating interfaces to existing IT systems to read and write business data. At run-time users collaborate in business terminology and as they perform their work they create the process patterns that can be reused. These can both be stored explicitly as templates or the User-Trained Agent will pick up the most common actions performed by users in certain contexts.

Conclusion: We are just at the starting point of using machine learning for process management. IT and business management are mostly not ready for such advanced approaches because they lack the understanding of the underlying technology. We see it as our goal to dramatically reduce the friction, as Sankar calls it, between the human and the machine learning computer. Using these technologies has to become intuitive and natural. The ultimate benefit is an increase in the quality of the workforce and thus in customer service.

17 Comments on “The future is Machine Learning, not programs or processes.”

  1. IBM’s Watson is a cognitive computing system, one that behaves like our brain, learning through experiences, finding correlations, and remembering — and learning from — the outcomes.
    First hitting the spotlight when pitted against two of Jeopardy’s biggest all-time winners, Ken Jennings and Brad Rutter, IBM’s artificial intelligence machine named Watson threw the two off their throne in quick fashion – showing that artificial intelligence is a real thing and IBM has the technology.


    • Thanks for the comment.

      I have to certainly disagree on your superficial rewrite of IBM’s marketing material as there is nothing in Watson that works even close to our brain. It is a simple full text search engine that receives a list of hits from a search based on the question. It tries to fit the answers into a structure for the answer that is derived from the question and tries to find text in the list of hits that matches this structure. It does that in parallel to speed up the reply. Nothing to do with the three-dimensional, hierarchical pattern matching that we think goes on in our brain and develops over the course of our lifetime. Watson does not even build a knowledge pattern hierarchy like neural networks would. It does not ‘understand’ the question or the answer and has no knowledge of the subject at hand. Without the human-created Wikipedia database, Watson would be a dead duck, unable to learn or grow understanding or find information it needs.

      Yes, there may be some use of that functionality in a variety of fields but your suggestion that IBM or anyone else has today the basis of what will represent ‘self-aware computing algorithms’ is utter ignorance, I am sorry to say.


      • Dear Max, you’re right: nothing in Watson today corresponds to consciousness, as I said. We are only at artificial intelligence with rules and data, with data acquisition and rules acquisition. The main question of artificial awareness is the capability of the system to develop its own programs and compile them to target something … and for the moment we are nowhere … but trying to develop the basis of semantic rules connected to all possible data via the Internet could help reach this target. My feeling is that it will be possible before the end of the 21st century – but not now! Nevertheless, thank you for your comment. And you’re welcome.


  2. Dear Max, after re-checking your article, there is something from my experience I have to write. Having Business Process Management with an ontology is good but absolutely not enough for company usage. Generally, we are not able to know all the ontology we have in a company; the ontology keeps moving; and people do not use words exactly as defined in the ontology. And some items of the ontology are used with other meanings… In the projects I had the chance to work on, process management with an ontology produces a lot of noise, which requires too many people to manage in order to know whether the items highlighted are appropriate or not … So semantic management in addition to the ontology is a must-do in real BPM usage (and not theoretical usage). And here, concepts like Watson are very interesting … ;)


    • Oliver, thanks again. Yes, also an ontology leaves some room for ambiguity and misuse of terms. But it is much better to have a defined ontology rather than none. Orthodox BPM systems today do not use an ontology, but they require that an expert translates the process description of the user into process functionality by the use of a flow-diagram and programming. That makes it very rigid.

      Standards such as BPMN are not usable for a business user, and they only cover 20% of the needed functionality. With an ontology one can describe 100% of the needed business functionality, so that business users can describe their own processes and perform them very flexibly at runtime. Where sensible, they can save them as templates. Alternatively they can follow the recommendations of the machine learning UTA for next actions.

      To my understanding the semantic capability of Watson is very limited and does not lend itself to more than a search capability. I could not ask it what the next best step in a process is, because it has no capability to search previous processes and build patterns that are recurring. Semantics alone are not enough, just like a flow diagram is not enough.

      Watson is a publicity stunt and not technology that advances AI or machine learning.


      • Dear Max, I agree with you: having an ontology in companies is the basis, whatever the future will be. And for Watson, I will probably check it by myself :-) I know that IBM made a lot of advertising for Watson, but my understanding is that it is much more than search capabilities. I know they have spent 6 billion dollars over ten years on Watson… If it is only a search tool, they will be totally done within a year once people test it. But as I understood it, it is much more than a search tool, and it could probably be interesting for you to check it in regard to your BPM tool (ISIS Papyrus…). I encourage you to check it as I will do, and we could after that discuss or meet about it if you would like ;-) A semantic tool is probably the missing piece needed to dramatically increase the usage of BPM tools, which today require too much work in regard to what companies expect to do. If you are in an international company, as I am, managing an ontology in 50 languages is too expensive. Having a multilingual semantic tool (beyond the current expectations of Watson…) is a must-do for having acceptable costs … I hope I will have the pleasure to see you one day to discuss this … Have a nice day and beautiful end-of-the-year days ;-) Best Regards.


  3. Max, your depiction of BPM is vastly misleading. As the name suggests, it is about mining (or discovering) patterns of certain means-end combinations by which humans reach a goal (https://air.unimi.it/retrieve/handle/2434/174874/182838/PMM.pdf). What will actually follow from these insights is solely up to the judgment of informed people. In no way does it imply constraining actions given a designed ‘optimum’, as you do by saying “innovation in process management won’t be delivered by rigid-process-minded ignoramuses”.
    This also means, I agree on not replicating human capabilities but augmenting them is the way forward. As mentioned, this is to a large extent predetermined due to the differing sensory-motor pathways in humans compared to machines.


    • Alex, thanks for your comment. Your understanding of the real world in which BPM is being used must be limited or at least different to mine. I see BPM being implemented every day and I see the negative consequences too. BPM is always used to constrain the performer one way or the other and not to empower them.

      The assumption that repeating a process that worked once, will in future ensure a certain goal to be reached is only valid under controlled conditions such as a laboratory or a factory. It is not valid in the chaotic environment of a complex adaptive system such as the economy and its businesses and human actors. The judgement of informed people then tries to enforce a process to be executed by the performers without the option to use their own judgment to react to changing conditions.

      I use the Adaptive Case Management approach to allow performers to do the work they see as necessary to ensure a goal can be reached. Machine learning can help by offering the performer (and not only the process analyst) a real-world analysis of what other people did in a similar situation. If you try to implement what you propose to be the target of BPM (as a methodology), you will find that BPM systems are not capable of doing so. And thus BPM (the practice) tries to convince management that standardized processes are the biggest time and money savers. That is not so, and there is no proof in the long term.


      • Thanks for your very concise reply. I think we are on the same page regarding BPManaging as a practice and BPMining as a method. As a case in point, I’ve been working all my career in a highly knowledge driven business where goals were achieved in an ad-hoc and seemingly non-repeatable fashion. After a fresh graduate was hired, presumably endowed with the wisdom of textbook process management, he stepped into one fiasco after the other by trying to streamline and formalize processes. Much to the disdain of domain experts and veteran problem solvers.
        Coming from that trajectory, I naturally reject the practice that you are talking about, which unfortunately still exists today. However, there ought to be some patterns in all the projects we have wrestled through: high-risk projects followed through in times of resource constraints usually yielded high warranty costs. Likewise, fragmented on-site services usually correlated with the same. Not that these superficial insights are particularly worthy, but a rigorous BPMining approach would have allowed us to dig deeper and raise collective awareness. In essence, BPMining provides a unique tool for gaining an understanding of causalities, a feat necessarily lacking in other ML approaches. Even if these causalities are temporary and themselves change over time and context. This in turn warrants interest in BI algorithms that do online learning without a fixed and hierarchical control loop, which is, I guess, what you are suggesting with ACM.
        At least I would prefer more accuracy and not conflate BPMining (the discovery tool) with BPManagement (the legacy crutch stemming from the myth that standardization always yields efficiency). I guess it takes a lot to teach an old horse new tricks, which is what archaic Process Management, the accompanying software solutions and some of its connotations exemplify.


      • Alex, yes you are right. ACM is both a process discovery and execution tool.
        The machine learning is in fact a kind of process mining and it is able to map user actions to a particular process state. Process mining in use today cannot do that.
        And yes, changing ingrained BPM opinions will take time …


  4. Pingback: Machine Learning: The Future, Get It? : Stephen E. Arnold @ Beyond Search

  5. Oliver, I do not agree that an ontology is a problem in multiple languages. We do not create the ontology because the business users create it. But if we reuse an existing one then we can use for example Google translate to automatically create a list of synonyms that the users can start with and then override with their own preference. The meaning and structure of the ontology remains in place. We have been using that approach for a long time in our applications.

    Therefore nothing IBM does in Watson is of interest to what we do in ISIS proprietary ML technology. We are much more advanced than IBM and run on cheap Intel nodes and do not need ML experts to setup the system. Watson needs substantial HW and also substantial expert support to do ANYTHING.

    Here is an excerpt from the Wikipedia entry on Watson:

    ‘It would then parse the clues into different keywords and sentence fragments in order to find statistically related phrases. Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously to find the correct answer. The more algorithms that find the same answer independently the more likely Watson is to be correct. Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense.

    Watson’s basic working principle is to parse keywords in a clue while searching for related terms as responses. This gives Watson some advantages and disadvantages compared with human Jeopardy! players. Watson has deficiencies in understanding the contexts of the clues.’

    I am also an expert on how the human brain works, and Watson is not even remotely similar. Only that Watson and the brain both process in parallel is similar. That’s all. The brain’s analysis breaks the question down into a set of hierarchical patterns, and these patterns are linked to emotional experiences. The emotional triggers highlight stored memories because NOTHING ELSE DOES. Memory concepts are triggered and create a sea of responses that are compared to the current context and structure, and the one fitting best is used. But we UNDERSTAND what it is through the analysis. Watson cannot. We can weigh the memories in emotional value and judge which one is right even if we are not sure. Watson cannot. Probability is meaningless for contextual accuracy; that’s why we often follow intuition when we cannot be sure. Intuition simply means: logic, certainty, and accuracy do not work.

    ‘Deficiencies’ is IBM double-speak for: WATSON DOES NOT UNDERSTAND THE CLUE. Philosopher John Searle argues that Watson cannot actually think and is criticized for saying so. I do not argue … I KNOW IT DOESN’T, and I am an AI and ML expert.

    In relation to Watson’s use in Healthcare for cancer treatment I spent considerable time to get one of my friends who since died of lung cancer involved in the trial program that would help to select the combination of ‘poisons’ to use for a particular type of tumor. Wellpoint is one of our largest US customers. As it happens, the answers I got said clearly that there was no trial treatment program, but just the intent to develop the software into this direction in collaboration with Wellpoint. Here the IBM double-speak again.

    You certainly need to check your sources and ensure your own understanding before making suggestions and proposals as you do in regards to Watson. I nevertheless appreciate the discussion and the time you are taking.


    • Dear Max, thank you also for the time spent answering me. Very much appreciated :-) Concerning the ontology, I do not agree with you that automatic translation can be a solution for an ontology in several languages; yes, the ontology is to be made by the business; yes, an ontology in several languages can be made … but the cost of implementation is too high if we don’t have a semantic approach in addition to the ontology definition. Concerning ontologies in multilingual implementations with automatic translations, i.e. solutions such as Google Translate, do you already have concrete experience with them? I have never seen something that gives good results in processes triggered with an ontology defined in English and translated into other languages. I have never seen something usable at a realistic IT/business cost. If you have such a real sample, I’m interested :-) For semantic purposes, perhaps Watson is not the appropriate solution, as you suggested, and only a “big” search tool; I clearly need to test it by myself, because that is not my understanding. Also, as I said, Watson does not currently have the artificial awareness to “understand” the meaning; but it is able to make the appropriate associations that an ontology “alone” can’t. And we don’t know what Watson or a successor will be in the future: but the challenge is there. And my invitation for future discussions with you remains open if you accept it ;-) Whatever your answer is, I wish you a very good last day of 2014. Best regards. Olivier.


  6. The pingback mentions Google’s ‘RECORDED FUTURE’ investment and asks why I omitted it.

    http://en.wikipedia.org/wiki/Recorded_Future

    Recorded Future functionality is not related to Machine Learning. It is a predictive data mining app that is most likely quite useless as it might predict some trends but not what, why and how we can make use of the information. I think Astrology is more reliable!


  7. Hi Max

    For someone who spent the last decade arguing against predictive methodologies, you have now discovered that ML is the new big approach – when we know that a significant branch of ML is used particularly for … predicting outcomes …

    Cheers
    Alberto


  8. Hi Alberto, I spent the last decade developing and patenting ML applications, and therefore I have not just discovered that ML is the new big approach. The reason I am against some uses of ML is that I know what they can and cannot do. I also think that the suggestions of ML becoming self-aware and humanly intelligent are utter nonsense.

    Yes, predictive analytics on top of big-data mining also tries to use machine learning to predict what one particular person might do in a particular situation. My opinion is that this is a waste of time. These solutions might discover from traffic monitoring that a number of cars will travel similar routes. They won’t discover why and they won’t learn how to drive the car.

    I use machine learning in a mode where it is taught by experts and does not just try to discover statistical distribution of patterns. Less data for particular situations and not huge amounts of data for all situations. Goal orientation is also here a unique aspect of our work that links actions to business patterns that are related to goals and their achievement.

    We have to deal with situations that are far more complex than what big-data analytics covers. We use a real-time mode of ML that does not require ML experts or data scientists while in practical use. The system has to continuously learn new knowledge from experts, and that must not overwrite the old knowledge; it must learn to distinguish which to use when. We are also using a business-language ontology in combination with ML. That allows us to induce, from human expert knowledge, an understanding of WHY certain actions are the better ones in certain situations and for certain goals.

    Big data predictive analytics does nothing of the kind …

