If businesses were people, they would lumber about in a vaguely purposeful manner like zombies. That’s due to the top-down, military-style hierarchies of modern corporations that result in integration of information only at the top and only to a limited extent below. Imagine Gepetto the CEO (external puppet master) of Pinocchio. Pinocchio is managed through strings from the CEO manipulating each of Pinocchio’s parts. The movements are jerky and not very lifelike. When Pinocchio becomes a real person managed through his completely integrated brain, his movements are smooth and lifelike. He can control and grow his life more effectively this way than through the indirect, latent, imperfect information-driven command of Gepetto.
This isn’t a criticism of how businesses are run today. Business enterprises are well beyond the capability of a single person to control to the level that the enterprise appears “lifelike”. But taking Performance Management and Process Management to the next level supplements what is needed to achieve that “lifelike” movement in an enterprise. Businesses obviously have succeeded while executed top-down from a command center (Gepetto’s brain) as opposed to the distributed, networked intelligence of the parts (Pinocchio’s brain). Businesses have produced valuable goods for their customers, met targets, supported the livelihood of employees and investors, and innovated. But most businesses don’t make it, and for the ones that do, there was a lot of luck along the way; sometimes they made it in spite of themselves, and eventually they do die.
Business Intelligence is supposed to provide the information required for decision makers to make better decisions. Although BI has made significant impacts toward that goal, it still hasn’t quite made businesses look more like a world-class scientist or athlete than a lumbering zombie. So Big Data comes to the rescue … or does it?
At the time of this writing, one of the major reasons why a Big Data project may yield underwhelming results is that it’s simply more data. There is no question that the availability of more data is beneficial, but most businesses still don’t know how to effectively analyze the data they already have and to implement actions off of it. On the other hand, more data can lead to counterproductive information overload.
So what happened to the huge buzzword of about ten years ago (circa 2005), Semantic Web? It’s still around, but it’s taken a seat far in the back of the buzzword bus. Yes, dealing with the complexity and mess of webs of relationships is more difficult than dealing with the tidy matrices of data (tables/spreadsheets) or even objects (aggregates in NoSQL terms). But we need to have relationships between data at least keep pace with the growing amount of data, or we just end up with information overload. Sure, we can find a needle in a haystack with Big Data, but so can a magnet.
In this blog, I present some of my thoughts on the feasibility and value of purposefully pursuing and measuring the level of integration of intelligence within a business, even if such an effort doesn’t address a clear and present need.
This blog is perhaps too philosophical for the tastes of the primarily “technical” audience. But as I look through the zoo of Apache projects around Big Data and all the skills required of a data scientist, it innately seems much too complicated. So at least for me, I need to take a few steps back in an attempt to see the single big picture of Big Data which cannot be captured in merely a thousand words or a catchy marketing phrase such as “the three Vs”, but can be felt by my consciousness.
Measuring a Continuum of Consciousness
On my flight home for Christmas Vacation some thoughts reminiscent of Map Rock were triggered by an incredibly intriguing article in the Jan/Feb 2014 issue of Scientific American Mind titled, Ubiquitous Minds, by Christof Koch. The article discusses the question of what things have consciousness. Is consciousness strictly a human phenomenon (at least on Earth today) or do other animals and even things possess it albeit to lesser degrees? The article suggests that it’s more of a continuum for which we can measure the degree.
That article includes introductions to two concepts, one called panpsychism and the other called Integrated Information Theory. For the former, panpsychism, it would be too much of a stretch for this blog to place it in a business context. However, I can for the latter. In particular, a crucial part of Integrated Information Theory is the notion of a measurement of consciousness referred to as Φ (phi). From a philosophical point of view, the notion of measuring consciousness in non-humans, even in things that seem completely non-sentient, would drastically change the world view of many. Growing up Buddhist, especially in Hawaii, that notion isn’t so foreign to me. From a BI practitioner’s point of view, this is very compelling since I’ve always thought of each business as an individual organism competing in various levels of ecosystems.
The Scientific American Mind article is actually an excerpt from Christof Koch’s book, Consciousness: Confessions of a Romantic Reductionist. I downloaded it onto my Kindle as soon as I could and blasted through it over Christmas Eve and Christmas.
Whether or not something is “conscious” as people are conscious, the notion of measuring the factors of consciousness as a measure of an enterprise’s ability to proactively evolve and get itself out of messes could be extremely compelling for a business. This notion is an extension of the foundation of my approach to BI, that businesses are like organisms. This metaphor has always been helpful in guiding me through the progression of BI over the past 15 years or so, as well as my earlier days with expert systems 25 years or so ago.
I know that sounds like useless BS, that I had too much time and maybe too much “Christmas cheer” over Christmas vacation. We don’t really even know what consciousness is in people, much less in an entity such as a business. Further, it’s extremely difficult for most people to accept that anything but humans could possibly be conscious. Please bear with me a bit, as the question of whether a business is conscious is related to the question of whether improving the aspects of sentience can serve a business as well as it has served humans.
So throughout this blog, I’ll stop apologizing every other sentence and assume that consciousness is something by far most highly developed in humans and that it is in fact the primary factor for our success as a species. Take this all with a grain of salt. This is a reset of how to approach BI, stepping out of the weeds.
As the developer of Map Rock, this means almost everything to me. Map Rock is about the integration of rules across an enterprise (a complex system), just as the integration of the brain across its regions is what our symbolic-thinking consciousness is all about.
Businesses as Organisms
Like organic creatures, whether individuals or groups of creatures (hive, pride, species, or some higher level individuals such as dogs and great white sharks), businesses share major characteristics such as goals, strategies, complexity, intelligence, the ability to evolve, and maybe even “feel”. Those characteristics range widely in level; some businesses are more complex, some less, some more intelligent, some less.
The first four characteristics, goals, strategies, evolution, and complexity, are rather easy to understand and buy into. Businesses exist for a reason, therefore they have goals. This usually means to turn a profit for the owners or to improve the condition of a cause for non-profits. Reaching these goals means achieving some desired state in a complex system. It is accomplished through a strategy of taking in raw materials and converting them into something leading it towards its goals. Strategies involve a hodgepodge of many moving parts (usually complex, at least complicated) such as employees, software applications, machines, partners, and of course customers orchestrated in workflows.
Eventually strategies cease to work due to relentless changes in the world, and the business must adapt. Or sometimes the business is attacked at a vulnerable point, whereby it defends itself and makes adjustments to strengthen that point. It evolves. In the case of humans (highly conscious symbolic thinkers), we can proactively adapt in the face of predicted change.
The Magic of the Whole is Greater than the Sum of Its Parts
The intelligence of a business is mostly tied up in the heads of decision makers, mostly “bosses”. However, with the aid of Business Intelligence it’s increasingly feasible to de-centralize that intelligence, delegating more and more decision making to non-manager information workers. Additionally, certain aspects of BI-class applications such as predictive analytics models create non-human intelligence as well, encoded as objects such as regression formulas and sets of IF-THEN rules.
The sum of these individual intelligent components (the employees of the business and “smart” applications) does not equate to the intelligence at the business level. Even though this is an apples-and-oranges comparison of intelligence (like comparing the intelligence of your immune system to the intelligence of your brain), unfortunately the sum of the intelligence of the parts, inadequately integrated, is still greater than the whole (real intelligence doesn’t currently emerge in businesses today). In other words, businesses currently lacking adequate integration of intelligent parts are generally stupider than the collective intelligence of their parts. Genuine magic happens when the whole is greater than the sum of its parts.
To expand on the italicized term in the previous paragraph, inadequately integrated, the integration must be sensible as well. For example, a “bag o’ parts” is not the same as a properly assembled and tuned automobile that actually works. For a business, integration is the level of cross-entity reach within the collection and assemblage of the web of validated cause and effect throughout the business. “Cross entity” means not just cause and effect amongst team members or inter-department, but team to team across departments, role to role across departments, etc. “Validated cause and effect” refers to the data-driven proof of the validity of theories underlying the strategies that dictate our actions. I wrote more about this in my blog, The Effect Correlation Score for KPIs.
Unfortunately, I’ve experienced in the BI world an addiction to quantified values: the belief that values must be deterministic to be valid. It isn’t difficult to see why this is the case, as BI is based on computers, and the quantitative is what computers excel at. Quantitative values are appealing because they are easier to understand than ambiguous qualitative “stories”. “Gut decisions” (intuition) are the sworn enemy, inferior to data-driven decisions, because there is no elucidated process consisting of verifiable, deterministic values.
The thing is, neither quantitative nor qualitative (data-driven versus intuition) analysis is superior to the other. They are the crests and troughs of the iterative analytical process. They form a yin and yang cycle: Qualitative values address the fact that the world is complex and objects are constantly in flux. However, it’s difficult to make decisions based on such ambiguity. Therefore, we discretize objects into unambiguous entities so we’re able to execute our symbolic what-if experiments.
The problem with these discretized objects is that they are now a sterilized, hard definition of something, stripped of any “flair”, stripped of any fuzziness (reminiscent of particle-wave duality). These hard values are now carved into square pegs for square holes, poorly able to deal with the ambiguity of real life. Everything is its full set of relationships, not the subset that at some point seemed most salient. What we’ve done is to define objects as particular states of a finite set of relationships (snapshots). Long story short, eventually we begin trying to shove rhomboid pegs into square holes.
It’s also interesting to consider that the results of Predictive Analytics models come with a probability attached to them. This is because a PA model only sees things from a small number of angles. In the real world, anything we are attempting to predict can happen from an infinite number of angles. In this context, we can think of an “angle” as a point of view, or a fixed set of inputs. This is the result of quantifying the inputs, stripping what appear to be non-salient relationships, reducing it to an unqualified, non-ambiguous object. We can consider those supposedly non-salient relationships as “context”, that unique set of circumstances around the phenomenon (the object we’re quantifying) that makes an instance in time unique even if the same object otherwise appears identical some time later.
Quantification really is a double-edged sword. On one hand, it is the very thing that enables symbolic thinking, the ability for us to play what-if games in our heads before committing to a physically irreversible action. On the other hand, that quantification strips out the context, leading us to only probable answers. The real world in which businesses live isn’t a science lab where we can completely isolate an experiment from variance, thereby declaring that the definition of insanity is to keep doing the same thing expecting different results. In the real world, that is the definition of naïve.
Quantitative is the norm in the BI world. For example, I’ve become too used to the OLAP context of the word “aggregate”. These are sums, averages, maximum or minimum values, or counts over a large number of facts. These values still compare apples-to-apples values (e.g., the sum of all sales in CA to the sum of all sales in NY). Another example is the knee-jerk rejection of the use of neural networks in predictive analytics because their results cannot be readily understood.
So it’s not always the case that the whole is greater than the sum of its parts. It’s more that the whole is at least very different from the sum of its parts. It is more like the chemistry context of a compound – water has very different properties than hydrogen and oxygen. Think of the great rock bands such as the Eagles, Fleetwood Mac, or the Beatles where none of them individually are the greatest.
Addressing the development and management of goals, strategies, complexity, evolution, and intelligence are where we BI/Analytics folks make our contributions. These are aspects of business that we can wrap our human brains and arms around towards improvement. I still wouldn’t go so far as claiming a business is alive like an organic creature, even though it acts like one; albeit, again, maybe a little on the zombie side. Setting aside that “alive” issue for now, do businesses possess some concept of “feeling”? It’s that elusive thing that’s hardly even considered in the context of a business. And if it is, it is readily dismissed as just an irrelevant, indulgent exercise.
Our feelings and the feelings of others (people we deal with) do matter (well, for the most part) in the course of our daily grind. But the feelings of animals, if we were to consider they have some level of feelings, with the exception of our pets, don’t matter as much and often not at all. We’ve all anthropomorphized inanimate things (such as stuffed animals, cool cars, laptops) attributing them with feelings, but it’s really our feelings towards those things that matter, not the imagined feelings of those inanimate things.
At least, businesses can be in anthropomorphized states of “feeling”. For example, a business can be in a state of pain when it’s losing money, in an ill state when its equipment and/or employees are wearing down, in a state of happiness when goals are met, and in a state of confidence when it is secure and will reach goals in the near future. But being in a state isn’t the same thing as feeling it. A KPI on a scorecard can express a state of pain with a red icon, but is it felt?
Certainly the humans working in the business are conscious, but that doesn’t mean there is a consciousness at the business level. Even if there is one, that doesn’t mean we (the human parts) can converse with it. Similarly, whether my organs such as my liver and heart have a consciousness of their own, I seriously doubt they are aware of or could even comprehend my life as a complete human being.
I call the properties emerging from the interaction of things (the whole is greater than the sum of its parts) “magic” because it really is a gift that in some sense defies all the conservation of energy, zero-sum thinking permeating our lives. It’s like the “magic” of compounded interest. We’re all aware of this concept and hear someone mention it almost daily, but probably don’t appreciate it and strive for it as much as we should.
We also see it very often at the level of the teams we work on, whether it’s a rock band, a basketball team, or a BI development team. It’s fairly easy to see at the team level because we as individuals are directly a part of it. It becomes trickier at higher scales (teams of teams) because then we can’t observe it directly. The point of this blog is to explore whether we can measure this, and the value of measuring it, at scales way bigger than teams; teams of teams of teams …
The Value of Φ to a Business
Does it matter if businesses have consciousness? Admittedly, the question of consciousness, whether people or whatever, was to me just an academic exercise. I thought of it as a very tough problem that wasn’t standing in the way of making technical progress towards smarter software.
I don’t think any business today is conscious, but I do think businesses could be driven towards consciousness and that progress in that direction would be beneficial. It is just an academic exercise from the standpoint of how measuring a business’ level of consciousness can help the business resolve current problems. However, to continue the analogy of a business as an organism, I first ask whether consciousness has provided benefits to humans. For the sake of argument, let’s say that we humans are the most successful creatures and we are the most conscious. Is that directly linked?
Consciousness, self-awareness, enables us to imagine and perform symbolic what-if experiments in our heads before we commit to a physically irreversible action. That’s the game-changer because the world is complex and Black Swans, big and small, will constantly bombard businesses – like strokes and mini-strokes. Without the symbolic thinking afforded to us by consciousness and self-awareness, we would be at the mercy of the mathematical outcomes of the algorithms in our heads.
Would it be sensible to say that if a business were conscious, it would have the superior capability of generating strategies in a complex, only semi-predictable world as humans can? To me, that is the skill of skills. Consciousness is the secret sauce that makes humans resilient to the ambiguities of the complex world in which we live during the course of our relatively short-term existence. That is the ability to fish; even if you happen to lose a fish, you’re not dead, you can still fish. When (not “if”) a business’ strategy starts to fail, it need not worry too much if it has a superior capability to develop and implement a new strategy.
If a business were indeed conscious, even if we individual humans were not able to interact with that consciousness (the voice of the CEO is not the same thing), would it still be valuable to attempt to nurture that consciousness (i.e. maintain a good value for the Φ KPI)? If that answer is yes, how could we do that?
The Term, “Actionable”
Before moving on to what it takes to calculate Φ for a business, I need to clarify that this is not what is thought of as an “actionable” KPI. I still read very many articles, mostly Big Data articles, insisting that the only good information is actionable. That is, we can readily do something with that information, otherwise, it’s merely interesting. I don’t disagree. However, I just think we need to consider that life throws curve balls and that it’s the human ability to deal with the unknown and ambiguous that sets us apart from other species, where we can imagine something not there right in front of us. It’s survival of the most adaptable and lucky, not survival of the fittest (see Note 1). “Fittest” is a definition that is constantly in flux. Regarding the luck component, as they say, “we make our own luck” through our ingenuity and agility.
When I read articles describing “actionable information”, the context usually implies that it means we can apply it to a clear and present problem. What can you do for me now regarding the problems that kept me up all last night? This mentality is dangerous because it strips imagination, the most prized human intellectual asset, out of the workflow. Remember, it’s often the un-actionable but still interesting pieces of information that allow us to step outside of the box and say, “but what if …”. We then avoid the side-swipe from that nasty startup.
Being a stickler for actionable information is seductive because it’s easier to deal with than worrying about problems that could happen (see Note 2). It’s scary to get up in the morning needing to invent a new, un-Google-able (no one has yet figured it out) way to do something. Focusing completely on actionable information is smart as long as we’re in evolutionary, not revolutionary, mode (see Note 3). The length of these periods is unpredictable. Periods of evolution could last decades or days. But at any given moment, surprises big and small, good and bad, deadly or inconvenient, show up. The complexity of the world guarantees that.
Calculating Φ for a Business
Consciousness, at least in the context of this blog, is a complex system’s ability to have a big picture feeling of its state. What we’re measuring is the level of quality of that “feeling” from that complex system.
Calculating Φ, in all honesty and as an extreme understatement, isn’t easy. Our own individual consciousness is built up over a long period of time using input and processing equipment that in most ways greatly exceeds computers and sensors, immersed in an environment (society) vastly richer than sterile databases (even Big Data databases).
Unfortunately, there are no direct formulas already developed for calculating Φ for any consciousness; people, animals, or a business enterprise. There is, though, high-level guidance, even though it is tedious at this time. In an ideal world, the idea is to inventory the actors, the relationships between the actors, and every possible state of each actor in the enterprise, resulting in a very messy graph of cause and effect. To make it worse, actors don’t stop at the level of individual people or software applications. There are lower-level actors within those actors.
I know very well how difficult it is to build such a graph even if we were just talking about a family tree, because I’ve built a few. Mapping out such relationships in the SQL Server engine was extremely daunting. Documenting a heterogeneous mix of relationships among the large number of agents in even a small enterprise is unfeasible if not impossible. Just for the sake of argument, let’s say we did manage to do this. The Φ value would, roughly speaking, measure the level of integration across all aspects of the graph. To put it in terms of our own consciousness, we know that there is a difference between just knowing a list of historical facts and integrating those facts into a story serving as metaphoric scaffolding for a problem we currently face.
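To make “level of integration across the graph” slightly less abstract, here is a deliberately crude toy, not real IIT Φ (which requires perturbing the system and finding the minimum-information partition over its state space). This proxy only asks: across every way of splitting the actors into two groups, how much relationship traffic crosses the weakest split? Two fully siloed departments score zero.

```python
from itertools import combinations

def integration_score(actors, edges):
    """Toy integration proxy, NOT real phi.

    actors: list of hashable ids; edges: set of frozenset pairs.
    Returns the crossing-edge count of the weakest 2-way partition,
    normalized by total edges. Exhaustive, so toy sizes only.
    """
    n = len(actors)
    if n < 2 or not edges:
        return 0.0
    best = None
    for size in range(1, n // 2 + 1):
        for group in combinations(actors, size):
            g = set(group)
            # An edge "crosses" if exactly one endpoint is inside the group.
            crossing = sum(1 for e in edges if len(e & g) == 1)
            best = crossing if best is None else min(best, crossing)
    return best / len(edges)

# Two siloed departments: no edge crosses the cut between them.
silos = {frozenset(p) for p in [("a1", "a2"), ("b1", "b2")]}
print(integration_score(["a1", "a2", "b1", "b2"], silos))  # → 0.0
```

Adding even one cross-department edge (say `frozenset(("a1", "b1"))`) lifts the score above zero, which matches the intuition that a single bridge between silos is better than none but far from integration.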
A feasible but still very non-trivial process would involve specialized software and is dependent upon the maturity of the business’ IT, especially BI, Business Process Management, Collaboration (such as SharePoint), and Performance Management. Such systems have captured much of the essence of an enterprise and can even be thought of as the IT side of a business’ neocortex. However, these systems are not yet purposefully integrated as a mainstream practice, meaning the knowledge exists in silos. In fact, integrating knowledge is the purpose of Map Rock. But before diving in a little deeper, I’ll just mention for now that the third level is really a set (“set” implies un-integrated) of baby steps that I’ll describe soon.
For example, creating a graph of communication (email, IM, meetings on the calendar) between employees shows connections between people. Aggregating it on attributes of employees, we can determine the levels of inter-department communication. We can also identify groupings of people in the communications; these groups are “virtual” teams. Based on titles or roles, we can determine cross-functional communication. This acts as a reasonable facsimile to cataloging everything everyone knows. Mining collaboration software such as SharePoint, we could find relationships such as people working on the same document.
For that graph of communication, what we’re looking for are relationships. We may not even know the nature of the communication, but if we know the Inventory department is talking to the Sales department, chances are good that each department is making decisions with consideration to the other department’s needs. We would know a senior engineer from IT regularly talks to the CIO.
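The roll-up described above can be sketched in a few lines. The data shapes here are hypothetical stand-ins for whatever mail headers or calendar entries would actually be mined:

```python
from collections import Counter

# Hypothetical employee-to-department mapping and mined message pairs.
employee_dept = {"ann": "Sales", "bo": "Inventory", "cy": "Sales", "di": "IT"}

emails = [  # (sender, recipient) pairs pulled from mail headers
    ("ann", "bo"), ("bo", "ann"), ("ann", "cy"), ("di", "bo"),
]

def department_graph(messages, dept_of):
    """Roll employee-level messages up to a department-level graph.

    Intra-department traffic is excluded; what remains is exactly the
    inter-department communication we want to measure.
    """
    counts = Counter()
    for sender, recipient in messages:
        d1, d2 = dept_of[sender], dept_of[recipient]
        if d1 != d2:
            counts[tuple(sorted((d1, d2)))] += 1
    return counts

print(department_graph(emails, employee_dept))
# Sales<->Inventory appears twice, IT<->Inventory once;
# ann->cy is intra-Sales and is dropped.
```

The same pattern, aggregated on role or title instead of department, would yield the cross-functional view mentioned above.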
The idea is that we’re building a rich, extensive, highly-connected web of cause and effect. This “database” of cause and effect is the experiences of a business. Collecting these experiences is only one side of the coin. The equally important other side of the coin is the maintenance of the web; the expiration, updating, deletion, or consolidation of the rules and statistics.
I designed and developed Map Rock from 2010 through 2013. I consider Map Rock a child of the “Big Data” buzzword age, but I was shooting downstream along the analytics workflow from Hadoop and a couple years into the future. At the time, as a long-time BI practitioner, I well understood that most companies haven’t even come close to exhausting what they could do with the data already at their disposal.
Would simply more data (Big Data) drastically improve the quality of insights from analytics? Certainly the incorporation of very granular data from all sorts of sources and semi-structured or even unstructured data trapped in documents and pictures would greatly enhance analytical capabilities. But more data doesn’t mean greater understanding. It’s relationships, strings or webs of cause and effect, that are the molecules of intelligence.
Departments within an enterprise are usually headed by a director or VP. Because of the hierarchical nature of the organization, the tendency (not necessarily the intent) is for each department to operate as if they are an independent company serving their “customers” (other departments). Orders are given from above and that drives an information worker’s actions. Even an awareness of a pain in another department is placed down the priority list, as the worker has enough problems of their own to deal with.
The result in the end is that this dis-integration leads companies to behave like puppets, as I mentioned at the beginning, where the actions are loosely controlled from above. They work well enough to hobble down the street, but not quite like world-class athletes. As entities, businesses aren’t very smart and thus have a mortality rate along the lines of crabs, where for every fully grown crab, tens of thousands of larvae perished.
Intelligence exists at a relationship level, not at the data level. Data is simply the state of an agent living in a system of relationships. Whenever we reason our way through a problem or create a solution, it is all based on a web of cause and effect slowly charted into our heads over the course of our lives (our experiences).
Catalogs, lists, and tables of data in our OLTP systems consisting of unrelated rows are not relationships. They are simply repositories placed together for convenience; an alphabetically ordered list of names in a phone book doesn’t say much about the community. The row in a typical customer database table for Eugene Asahara may store relationships among the attributes of Eugene Asahara, but there are no relationships between it and another row for someone else. However, the fact that I am a customer may have relevance to the person who sold me something.
Relationships are rules. These rules are the experiences in your head associating all sorts of things, and the predictive analytics models that precipitated relationships out of a slurry of data. In a business, these rules are distributed amongst the brains of every employee and every model (such as predictive analytics models or relational schemas), and each is more island than supercontinent where species roam wherever they can (see Note 4).
Before we can catalog relationships, we need to catalog the things we are relating – agents in a system. Agents are the players, the tangible and intangible things in a complex system. They can be as elemental and tangible as a document to as complex and intangible as a supply chain or major event. It’s important to realize too that agents can be part of many different macro agents; for example, a person can be an individual, part of a team, part of a department. Conversely, agents may not be as elemental as you would think either.
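A minimal sketch of such an agent catalog follows. The key design point from the paragraph above is that membership is a set, not a single parent, because an agent belongs to many macro agents at once (the names and kinds here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One entry in the agent catalog.

    kind might be "person", "team", "department", "event", etc.
    member_of is a set because an agent can belong to many macro
    agents simultaneously.
    """
    name: str
    kind: str
    member_of: set = field(default_factory=set)

catalog = {}

def add_agent(name, kind, member_of=()):
    catalog[name] = Agent(name, kind, set(member_of))
    return catalog[name]

add_agent("Sales", "department")
add_agent("West Team", "team", member_of={"Sales"})
# A person is simultaneously an individual, a team member, and a
# department member.
add_agent("ann", "person", member_of={"West Team", "Sales"})

print(catalog["ann"].member_of)
```

Only once agents are cataloged this way can the relationships between them (the rules) be layered on top.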
The direction of the communication matters as well. Obviously, line employees receive communication from a chain spanning from their manager all the way to the CEO. But one-way communication doesn’t promote informed decisions.
Events are really objects in a sense as well. I think of an “event” as an object that is ephemeral and consists of a loose set of characteristics. The attributes of a car or building are tightly tied together in terms of time and space (my arm is definitely a part of me and barring unforeseen circumstances always a part of me).
As mentioned, the intelligence of a business doesn’t only lie in the heads of humans. At most enterprises, there are many BI/Analytics objects possessing a level of intelligence.
Predictive Analytics models are the prime example of units of machine intelligence – even though humans do have a hand in their creation to varying extents. PA models are machine- or human-“authored” clusters of IF-THEN rules and calculations taking inputs and spitting out best guesses either conveyed to a human or in many cases executed automatically. They can range from “white box” components where we can clearly see what goes on, to “dark gray boxes” (not “black box” where the internals are completely unknown) such as neural networks where it’s very difficult to figure out what actually goes on.
The many Predictive Analytics models scattered across enterprises form an un-integrated web of knowledge that is very reminiscent of the structure of our brains. Neurons are not simple gates but more complex mechanisms in themselves. Like neurons, PA models take many varied sets of inputs and spit out an answer. That answer, the resulting mash of many inputs, is in turn the input for a subsequent decision. Integration of these models, not just the storage of the metadata of all the models, is what the Prediction Performance Cube is about.
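That neuron-like chaining, where one model’s answer becomes another model’s input, can be illustrated with two deliberately tiny rule sets (the rules and thresholds here are invented stand-ins for real PA models):

```python
def churn_risk(days_since_order, complaints):
    """Model 1: a crude IF-THEN rule set standing in for a real PA model."""
    if days_since_order > 90 or complaints >= 3:
        return "high"
    return "low"

def next_action(risk, customer_value):
    """Model 2: consumes Model 1's answer as one of its own inputs."""
    if risk == "high" and customer_value == "gold":
        return "call personally"
    if risk == "high":
        return "send discount offer"
    return "no action"

# Chain the models: the first's output feeds the second.
risk = churn_risk(days_since_order=120, complaints=1)
print(next_action(risk, customer_value="gold"))  # → call personally
```

Integration, in the Prediction Performance Cube sense, would mean these hand-offs are declared and tracked rather than buried in application code where no one can see the web.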
OLAP cubes and departmental data marts hold some level of intelligence even though they are primarily just data. They store what has been vetted (during the requirements gathering phase of the project) as analytically important at a departmental level (a subset of a full enterprise). For example, OLAP attributes provide a clue as to how agents are categorized, and the hierarchies show taxonomies. In other words, OLAP cubes and data marts don’t store the entire enterprise universe, so we select data that is probably analytically useful. Browsing through OLAP cubes sheds light on what is important to the departments that built them.
In Business Problem Silos I describe how departments create department-level BI (cubes or data marts) that integrates data from two or more data silos to resolve departmental business problems, but we end up with a different kind of silo: a business problem silo.
Naturally, something with the word “rule” in its name, such as Business Rules Management, deals in rules. IBM’s ILOG is a great example of rules management software. Such business rules are usually intimately tied into a Business Process Management implementation as the decision mechanisms determining the next steps in a process flow.
Of course, the vast majority of the intelligence of a business lies in the heads of the employees. Machine-stored rules cannot begin to match the nuance, flexibility, richer relationships, and richer store of information in a human brain. The problem is that even though human intelligence is richer, no single human can capture everything in an enterprise. Not even the enterprise-wide scope of a CEO can match the collective intelligence of the narrowly-scoped “line workers”. Instead, the CEO sees a higher-level, more abstract, wider but shallower picture than the narrower but deeper intelligence of a line worker.
Not every employee can know everything and no one should need to know everything. But we must be cognizant of the side-effects of our actions. My discussion on KPIs below addresses that.
Human and Machine Interaction
In most enterprises, Performance Management, along with every worker’s assigned KPIs, is a very well-socialized part of our work lives. KPIs are assigned to people or machines to measure our efforts. They are like nerves, informing us of pleasure or pain and its direction. I’ve written a few other blogs on KPI relationships that share the common theme of integrating KPIs into a central nervous system:
- KPI Cause and Effect Graph Part 1 and Part 2. These old blogs take the notion of KPIs as nerves to the next level, attempting to integrate them to at least some degree as a nervous system, by decomposing the calculations of the target, value, and status and mapping KPIs into a web of cause and effect at that more granular level.
- Effect Correlation Score for KPIs. This blog proposes another component to the KPI (besides value, target, status, and trend) that measures whether meeting a KPI target still has its intended effect. For example, does increasing employee satisfaction still lead to an increase in customer satisfaction?
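The components listed above could be sketched as a single KPI structure. This is only an illustration of the idea; the field names and the status formula are my assumptions, not Map Rock’s actual schema:

```python
from dataclasses import dataclass

# Illustrative KPI carrying the components discussed above: value, target,
# status, trend, plus the proposed Effect Correlation Score. All field
# names and formulas are assumptions for the sketch.
@dataclass
class KPI:
    name: str
    value: float
    target: float
    trend: float               # e.g. period-over-period change
    effect_correlation: float  # does meeting this KPI still drive its intended effect?

    @property
    def status(self) -> float:
        """Ratio of value to target, capped at 1.0 (1.0 = target met)."""
        return min(self.value / self.target, 1.0)

sat = KPI("employee_satisfaction", value=72, target=80,
          trend=1.5, effect_correlation=0.4)
print(round(sat.status, 2))  # → 0.9
```

A low `effect_correlation` would flag a KPI whose target we keep hitting while its downstream effect (say, customer satisfaction) no longer follows.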
Realistically, the KPI is the best bet for even beginning to attempt a Φ calculation. Because KPIs are well socialized in the workplace and prevalent throughout an enterprise, they already define what is important to just about every breakdown of workers, from departments to teams to individuals.
Business Process Management workflows are another fairly well-socialized program of an enterprise, documenting the sequences of events that happen in business processes. Workflows include prescriptions for procurement, proposals, filing taxes, performance reviews, manufacturing, and distribution. These are usually inter-department processes, which will relate cause and effect across the departments. However, the workflows that are formalized probably account for just a small fraction of what actually goes on. The vast majority exist in the heads of the people involved in the workflow. This could be worse than it sounds, as many workflows don’t exist in any one person’s head; pieces of them are distributed among many people (meaning each person is aware only of their part of the workflow).
Map Rock is designed to integrate, both in automatic and manual fashion, such sources of relationships into a single repository. For example, a component of Map Rock known as the Prediction Performance Cube, automatically collects PMML metadata, training data, test data, and metadata of the training and test data sources. Another component of Map Rock, called the Correlation Grid, is a UI for exploring correlations among dimensions in different cubes.
Map Rock converts these rules, from whatever sources, into my SCL language, which is used as the lowest common denominator. SCL is based on the old Prolog language of the AI world. Beyond being a .NET implementation of Prolog, it is modified for the distributed world of today as opposed to the more monolithic world of the 1980s.
Putting it All Together
A suggested calculation of Φ starts with a graph of communication ties between decision makers, down the traditional hierarchical chains of command as well as across the departmental silos. These ties are the relationships described in the sections above, drawn from email, meetings, KPIs, workflows, etc.
Those are the ties that do exist, but we also need to figure out what should exist. In a nutshell, the Φ calculation is a comparison of what exists and what should exist. Note that what should exist is different from what could exist. What could exist is the Cartesian product of every agent in the system, a number so great it’s unusable. Determining what should exist is extremely difficult, akin to “we don’t know what we don’t know”.
There will be four categories of ties:
- Known ties that should exist. These are the ties we’ve discovered from KPIs, workflows, email, etc., that should exist because there is information that must be passed.
- Known ties that do not need to exist. These are ties for which there is no required formal communication. Examples could be email amongst employees who are friends but never work with each other, or someone subscribing to updates on something in which he is not involved. These are not necessarily useless ties. The interest shown by an employee subscribing to updates unrelated to him demonstrates a possible future resource.
- Ties that should exist, but don’t.
- Ties that could exist, but don’t. These make up the vast majority of conceivable ties. Categories #1, #2, and #3 account for such a minority that the graph is considered “sparse”.
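The four categories fall out of simple set operations over the ties. In this toy sketch, the agent names and tie sets are invented; in practice the “should” set would come from KPIs, workflows, and email, and the “exist” set from observed communication:

```python
# Toy sketch of the four tie categories using set algebra.
# Agents and ties are invented for illustration.
agents = {"alice", "bob", "carol", "dan"}
should = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}   # ties required by formal processes
exist  = {("alice", "bob"), ("bob", "carol"), ("alice", "dan")}   # ties actually observed

# What *could* exist: Cartesian product of agents, minus self-ties.
could = {(a, b) for a in agents for b in agents if a != b}

known_needed     = should & exist          # 1. known ties that should exist
known_not_needed = exist - should          # 2. known ties with no formal requirement
missing          = should - exist          # 3. should exist, but don't
untapped         = could - should - exist  # 4. could exist, but don't (the sparse majority)

print(len(could), len(known_needed), len(known_not_needed),
      len(missing), len(untapped))  # → 12 2 1 1 8
```

Even with four agents, category #4 dominates; with thousands of agents the Cartesian product becomes the unusably large number mentioned above.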
Ties that fall between should and could, whether or not they exist in our enterprise, are interesting, since these are the ties that can lead to innovation. The ties that should exist are in existence because they are part of formal processes already in place. But when we need to change processes to tackle a competitor, or simply to make an incremental improvement, we’ll need to explore these other ties. So ideally, our graph would include some ties that aren’t currently required. A good place to start would be ties involving agents from #2: those agents with ties that aren’t mapped to a formal process.
What would the Φ calculation look like? Imagine a graph of the agents of our enterprise and their relationships. Imagine that each relationship has a color along a spectrum from green to yellow to red, where the extreme of green represents excellent communication and the extreme of red represents no communication (where there should be communication).
The calculation of Φ would range from greenish (a score close to 1: generally good communication where there should be communication) to yellowish (a score around 0: merely OK communication) to reddish (a score close to -1: generally poor communication where there should be communication) across the entire graph. If your graph ends up greenish but that doesn’t correspond to your experience that the enterprise is dysfunctional, it means there is a deficiency in identifying the ties that should exist.
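As a minimal sketch of that graph-level score, each tie that should exist carries a communication quality in [-1, 1] and the overall score is simply their average. The tie names and numbers below are invented, and a real calculation would surely weight ties by importance:

```python
# Minimal sketch: each should-exist tie scored from -1 (red: should
# communicate, barely do) through 0 (yellow: OK) to 1 (green: excellent).
# Ties and scores are invented for illustration.
ties = {
    ("ceo", "vp_sales"):      0.9,   # green: strong, regular communication
    ("vp_sales", "analyst"):  0.1,   # yellow: communication is merely OK
    ("analyst", "line_mgr"): -0.8,   # red: tie should exist but is nearly dead
}

def phi_score(tie_scores: dict) -> float:
    """Average communication quality over the ties that should exist."""
    return sum(tie_scores.values()) / len(tie_scores)

print(round(phi_score(ties), 3))  # → 0.067
```

A score near zero like this one says the graph is yellowish overall, even though individual ties range from excellent to nearly dead.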
A good starting place for the ties that should exist is the KPI Cause and Effect graph mentioned above. Going further with the KPIs, Map Rock includes a feature whereby we can view a matrix of the KPIs where each cell displays the level of correlation between the two. This will not necessarily identify all ties that exist, but it may reveal some surprises.
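The idea of that matrix can be sketched with pairwise Pearson correlations over KPI histories. The KPI names and quarterly numbers below are made up for illustration; this is not Map Rock’s implementation:

```python
from math import sqrt

# Sketch of a KPI correlation matrix: each cell is the Pearson correlation
# between two KPIs' historical values. Data is invented for illustration.
kpis = {
    "employee_satisfaction": [70, 72, 75, 78, 80],
    "customer_satisfaction": [64, 66, 70, 74, 75],
    "overtime_hours":        [120, 115, 100, 90, 85],
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

names = list(kpis)
for a in names:
    row = [f"{pearson(kpis[a], kpis[b]):+.2f}" for b in names]
    print(a.ljust(24), " ".join(row))
```

In this toy data, employee and customer satisfaction correlate strongly while overtime correlates negatively with both: exactly the kind of cell that suggests a tie worth investigating.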
Back to the graph. We could start this graph of ties with the chain of command, the “org chart” tree, from the CEO at the top to the line workers with no one reporting to them. It’s a good starting point because every company has a chain of command, which forms a natural route of communication.
The next step is to identify the decision makers. Although both decision makers and non-decision makers require information to operate, the decision makers are the ones who initiate changes to the system. However, in reality everyone makes decisions to some extent, so this is more of a fuzzy score than a binary yes or no.
As a first pass, it’s safe enough to assume that anyone with direct reports makes decisions. But among non-managers, many primarily operate in temporary cross-functional project teams with members drawn from across departments. In these cases they may not have formal reporting relationships, but they do make decisions in the team. For example, a “BI Architect” may not be a manager but may decide which components are utilized. It’s tougher to automatically infer these decision makers.
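That first pass is easy to automate from the org chart alone, with manual overrides for the fuzzy cross-functional cases. The org chart below (employee → manager) is invented for illustration:

```python
# First-pass sketch: infer decision makers as anyone with direct reports,
# derived from an employee -> manager org chart. Names are invented.
reports_to = {
    "vp_sales":     "ceo",
    "sales_mgr":    "vp_sales",
    "rep_1":        "sales_mgr",
    "rep_2":        "sales_mgr",
    "bi_architect": "vp_sales",   # no direct reports, yet decides on components
}

# Anyone who appears as a manager has direct reports.
decision_makers = set(reports_to.values())

# Cross-functional roles can't be inferred from the chart; tag them by hand.
manual_overrides = {"bi_architect"}
decision_makers |= manual_overrides

print(sorted(decision_makers))
# → ['bi_architect', 'ceo', 'sales_mgr', 'vp_sales']
```

The manual override set is the honest admission of the point above: formal reporting lines miss the decision makers embedded in project teams.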
In addition to identifying decision makers, we need to measure the strength of communication between the decision makers and those affected by any changes. This is an important aspect: when we know a tie should exist but it is poor, that is in some ways worse than not realizing the tie should exist at all. If a decision maker issues direction to someone and believes it was received, she may go on assuming her instructions are being carried out.
In the end, we should have a graph illustrating strong connections between many parts, with little in the way of weak ties or ties that should exist but don’t. The thought processes behind such a graph are the bread and butter of companies like Facebook and LinkedIn. But the networks within enterprises are the soul of corporations.
For studying networks, particularly their social network aspects, there is no better place to start than the work of Marc Smith. Marc has also led the development of the Excel add-in known as NodeXL, a visualization tool for displaying graph structures.
Baby Steps Towards Φ Calculation
All of that is what Map Rock is about. Duplicating what I’ve developed in Map Rock would be daunting at the time of this writing. However, there are baby steps that should be very informative in helping to improve the agility of an enterprise. For the most part, these baby steps are the pieces I described above: studying social communication networks, developing a KPI Cause and Effect graph, implementing the Effect Correlation Score, and creating a Predictive Analytics metadata repository. Without Map Rock, though, these parts remain an un-integrated set of tools.
So there it is, my attempt to step far back and look at what it is we’re really trying to do with Big Data. From my point of view, we are really trying to apply to our businesses the same technique that made humans the dominant species of our time: we’ve built mechanisms for dealing with ambiguity. With a proper implementation of Big Data, we can much better deal with the roadblocks along the way.
The NoSQL world, particularly Neo4j, the NoSQL graph database offering, should finally break down some walls in better embracing the complexity of the world. Graphs, the truest representation of the world and the structure of workflows, semantics, taxonomies, etc., will finally take their place at the center of our “data processing”. Perhaps with Neo4j, the visions of the Semantic Web can finally help us make real progress.
Lastly, I apologize that this blog isn’t a step-by-step cookbook – that would indeed require a book. This blog ended up way larger than I had intended. My intent is to toss out this idea as a high-level description and drill down further in future blogs.
1. I asked my favorite judo instructor, “What is the most important skill, if you could pick one?” I expected him to tell me that it’s a useless question completely missing the point, that it’s a balance of many things, and that I’d failed to think in a systemic way. But surprisingly, without a blink, he said flexibility. Not technique, not strength, not size, not endurance, not balance, not tenacity. Certainly, they are all factors. He said, “With flexibility you can do all sorts of things your opponent had not thought of.”
2. I know it’s out of fashion to worry about problems that haven’t yet popped up; that we should focus on the now. However, there is a difference between being in the now because we’re:
- Incapable of predicting what will come next. So, like honey bees, we put faith in the math behind the rules, much as casinos put faith in the math behind blackjack. In the end, they will win.
- Deluded into thinking that if it hasn’t yet happened (i.e. the experience isn’t in my brain) it couldn’t possibly happen. A variation is, “No one is asking for that.”
- So busy with what’s already on our plate that anything not currently in our face drops down the task list.
and that we’re confident that we can get ourselves out of a jam if the time comes because we’ve packed our brains with an extensive library of experiences from which we can draw analogies to future problems, we are fully aware of our surroundings and at the same time fully engaged with the task at this instant in time, and we fully accept the outcome not dwelling in the past. In other words we’re satisfied that we’ve prepared as best as be could for whatever comes about.
3. I don’t like the phrase, “It’s evolution, not revolution”, uttered by the VP of the department I worked in a while back. The sentiment seemed to be the antithesis of high tech. Progress involves cycles of long periods of evolution and short bursts of revolution (even though I’m pretty sure that wasn’t the message of that VP). It’s a balancing act where progress requires some level of focus towards a goal and another where the evolution actually leads to some kind of stasis (leading to proactive revolution) or crisis (leading to reactive revolution). During evolution, distractions from elsewhere impede progress. However, when stasis or crisis arrives and there is a need to move on, integration of knowledge fertilizes the system.
4. Neither extreme, island nor supercontinent, is good; the sweet spot is somewhere in between. There needs to be some level of isolation to allow diversity to first build, and then for small, diverse populations to mature. Either a supercontinent separates into little islands, allowing diversity to take shape before converging to let the best win, or there is a natural barrier across which weak links can occasionally travel to other populations, infusing new ideas. Too much integration is really just chaos; too little eventually leads to stasis.