This is Part 3 of 5 of the Map Rock Problem Statement. We delve into a high-level description of the Map Rock software application, where it fits in the current BI framework, and how it differs from existing and emerging technologies. This is really the meat of the series. The previous installments can be viewed here:
- Part 1 – Preface to the 5-part series.
- Part 2 – I describe Map Rock’s target audience and the primary business scenario for Version 1. It is not just a tool for quants, wonks, and power-users.
What is Map Rock?
Map Rock is an application and platform that integrates terms across many repositories and uses those terms to develop, monitor, and evolve strategies toward the goal of improving competitive fitness. That statement contains two words, “terms” and “competitive”, that are fundamental to Map Rock, at a molecular and a philosophical level respectively. I carefully chose the word “term” to encompass the common contexts of the words rule, law, heuristic, and relationship, none of which alone suffices. It’s important that I discuss why I chose the word “term” – it’s more than a semantic, “po-ta-to/po-tah-to” situation.
“Term”, in the context of Map Rock, is borrowed from the Prolog AI language, which incidentally is what my SCL language is based upon. A term is capable of representing pretty much any statement of definition I can make, whether it’s unchanging (invariant), temporary, or inexact (non-deterministic).
I don’t really like the word “rule” as it suggests hard and fast notions around which life revolves, more like in the context of invariant laws – whether or not they make sense (see Note #1). While driving under good weather conditions, we follow the rule set by the speed limit sign. However, during icy conditions, we follow a less precise heuristic: drive at least 10 mph slower. Soccer Moms like mini-vans because Soccer Moms have kids. That statement is mostly a heuristic, but the latter part is a rule because “Moms” by definition (a rule) have kids.
“Relationship” isn’t as invariant as “rule” and does suggest a somewhat ambiguous quality. However, whether invariant rule or ambiguous relationship, the two are rather indistinguishable when we are devising a strategy. Relationships, such as the one between the price of oil and the disposable income of car drivers, play a central role in Version 1 of Map Rock. However, my main message is that relationships change. In 400 years, when our cars primarily run on electricity generated from clean coal, that relationship will cease to be valid. Relationships are also constrained by rules, such as the rule that the price of oil cannot go below $0.
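To make this concrete, here is a minimal sketch in Python, purely for illustration – Map Rock’s actual terms live in a Prolog-style representation, and every name, value, and context below is invented – of how a single “term” structure can cover an invariant rule, an inexact heuristic, and a relationship that can be retired when the world changes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Term:
    """One uniform unit covering rules, heuristics, and relationships."""
    name: str
    kind: str                      # "rule" (invariant), "heuristic" (inexact), "relationship" (changeable)
    holds: Callable[[dict], bool]  # does the term hold in this context?
    valid: bool = True             # relationships can be retired as the world changes

# An invariant rule: the price of oil cannot go below $0.
oil_floor = Term("oil_floor", "rule", lambda ctx: ctx["oil_price"] >= 0)

# An inexact heuristic: in icy conditions, drive at least 10 mph under the limit.
icy_speed = Term("icy_speed", "heuristic",
                 lambda ctx: not ctx["icy"] or ctx["speed"] <= ctx["limit"] - 10)

# A relationship that may cease to be valid (e.g., when cars run on electricity).
oil_income = Term("oil_income", "relationship",
                  lambda ctx: ctx["oil_price_up"] == ctx["disposable_income_down"])

context = {"oil_price": 80, "icy": True, "speed": 45, "limit": 60,
           "oil_price_up": True, "disposable_income_down": True}

for term in (oil_floor, icy_speed, oil_income):
    if term.valid:
        print(term.name, "holds" if term.holds(context) else "violated")
```

The point isn’t the code; it’s that once rules, heuristics, and relationships share one shape, an engine can evaluate, compare, and invalidate them uniformly.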
Wise strategies architected by people involve a combination of data, rules, heuristics, and relationships from a wide array of sources, both machine and human. Wise strategies also consider the current validity of rules and examine the context of the situation. Most importantly, wise strategies minimize the risks associated with being wrong and are built as though we expect to adjust the strategy as needed. That is Map Rock.
Figure 1 – Map Rock integrates terms from multiple, heterogeneous sources.
The second word, “competition”, is really the “law of the land” on Planet Earth. Think about it: every creature consumes another creature. Competition is the fuel for the evolution of all creatures, driving species to adapt to the improved tactics of their predators, prey, and competitors. Without this friction there would be no action, and life would be not much more than a static painting. Improving competitive fitness has always been the implied promise of Business Intelligence, but virtually all BI implementations fall very short of it. Following are two reasons, both directly related to the genesis of Map Rock.
Quick and Dirty Pidgin
First, most BI implementations provide reporting on data integrated across data silos in a versatile and very responsive manner, but leave a giant chasm between the products of our BI efforts (data marts, OLAP cubes, and predictive analytics models) and our brains. That is, those webs of cause and effect (the molecules of strategy) twinkling in the human brain – the “DNA molecules of our intelligence” that make up our strategies. I’ve described this as riding a bus for 30 miles, then being dropped off with a mile walk through the snow still to go. Figure 2 depicts this chasm and how Map Rock forms at least a pillar for a bridge.
Figure 2 – There is a big leap between a cube (or Data Mart) and the information worker. Predictive Analytics bridges that chasm.
Map Rock can be thought of as a pillar enabling a quick and dirty bridge (or road) between those two points, one that enables a partnership between human and machine intelligence. Just as the Lewis and Clark Trail is not the I-90, a quick and dirty bridge is orders of magnitude better than no bridge at all. This is as opposed to the strategy of making humans more machine-like, something that is incredibly unappealing to me, or machines more human-like, which, although proceeding nicely as of late, has been a relatively slow and elusive pursuit since the 1950s. That is like starting the Golden Gate Bridge from both Marin County and San Francisco, taking 50 years to reach the middle, all the while having no bridge.
The BI Depression Era
Second, since the dot-com bust it seems like anything that even hints of being out of the realm of status quo is quickly tagged as hype – although with good reason. IT is now under such scrutiny, after the sins of the dot-com era led to such hype (even though we’ve since grown into that hype), highly publicized security breaches, mistakes due to garbage data, and tremendous workload, that it must move very cautiously. Like a lawyer who automatically says no as an initial bargaining stance, IT must do the same. In my view, BI seems to have swung away from “decision support systems” towards just the optimization of existing business processes. That is, an over-emphasis on the mantra of “doing more with less”.
I know it’s unfair for me to refer to the post-dot-com-bust days as the “BI Depression Era” (very tongue-in-cheek) since BI as it’s usually implemented has actually provided a great deal of value over the past couple of decades. I just think we can do so much better. But doing better means risk, which I think is the problem. This reminds me of the many people I know now in their 80s who were children during the Depression Era of the 1930s. To this day they carry a fear of being poor and hungry again that, relative to their level of wealth, seems unjustified.
I think one of the major reasons BI has been restrained is that bridging that chasm is a very difficult and risky undertaking as a business project, so most BI implementations retreat to many smaller goals on both sides of the chasm with high probabilities of success. So instead of “Enterprise Data Warehouses”, BI implementations are still dominated by less ambitious departmental data marts. The collection of these smaller BI assets, each addressing departmental business problems, results in something I call “Business Problem Silos”. The compartmentalization of these business problems hinders an enterprise’s ability to align its efforts, since the left hand doesn’t know what the right hand is doing.
Figure 3 shows how these departmental BI projects (2), each representing a resolution to its respective department’s business problems, integrate a few data silos (1) which can span departments, and how Map Rock (3) integrates those BI projects into a view of wider perspective. Consider as well that if the departmental BI projects were truly built to address selected business problems (unfortunately, it is often the case that a BI project commences simply because someone said it’s a good idea), then what is in those cubes represents what is of concern to the department. Through Map Rock, each departmental manager can now have an understanding of what is important to other managers. That notion plays one of the key roles in Map Rock toward its goal of aligning enterprise-wide efforts.
Figure 3 – Business Problem Silos result from smaller departmental BI projects.
Please note that Figure 3 only depicts Map Rock’s connection to BI projects, particularly OLAP cubes. This is just to illustrate the Business Problem Silo. However, as a reminder, Map Rock does integrate “terms” from many sources other than just OLAP cubes.
The effort invested into the building of BI will correlate to the value provided by the BI information. The more compelling the BI information, the more BI projects there will be.
This is Where the “Space Odyssey” Theme Plays … Hahaha
If David lived today, instead of learning to use a slingshot, he would have prepared for his fight with Goliath by lifting weights and taking tae kwon do classes, tightening up on “what makes sense”, futilely trying to beat Goliath at his own game. It sounds safer and more sensible than trying some new, unproven method for battling a bully. But it would result in only incremental improvements that would never match Goliath at Goliath’s game. David would need to change the rules.
Currently, upwardly trending technologies such as Big Data, Complex Event Processing, and Self-Service BI are advancing decision support immensely. However, in the context of this series of blogs, changing the rules means we must look beyond simply procuring more data, retrieving it more quickly, processing it to the point where it loses much of its original meaning, and rendering it in sophisticated visualizations that can sometimes look like a Picasso or Jackson Pollock. It also means facing some tough realities: we must stop sweeping outliers and weak links under the rug, stop resisting the fact that the world is complex (not simply incredibly complicated – more on that in Part 4), stop insisting upon “just one number” as the only valid answer, and stop ignoring both that diversity of thought and action (currently being squeezed out through relentlessly growing compliance obligations) is essential to progress and that errors are the best way to expose problems.
As I said, procuring more data, retrieving and processing it at light speed, and innovative visualizations are fantastic. But they still leave weak spots that make us vulnerable to many mistakes. Version 1 of Map Rock attempts to address these issues from a higher-level layer that I call “Term Integration”. By integrating a heterogeneous array of terms from the many sources in which they exist throughout an enterprise into a common format, we gain a richer view of our situation. Those terms are immersed in an environment that performs tasks such as finding similarities between nodes and sets of nodes, noticing changes in relationships, and surfacing inadequate data (ex: cached values that haven’t been updated in a long time). Following are three high-level design choices for Version 1 of Map Rock:
- To ease “culture shock”, Map Rock is designed as a next-step software application for Performance Management, something already very familiar to most information workers. For the most part, this means the meat of Performance Management is on the “Strategy Map” (the brain) and less on the Scorecard (the nerves). Map Rock does sport a dashboard, but I’ve added several components beyond a user’s KPIs. Users are also made aware of how their actions may affect the KPIs of others across the enterprise.
- “Casting a Wide Net” – This is the ability to explore for correlations in an OLAP Browser-like environment. This Correlation OLAP browser displays a grid of correlations across dimensions from any two cubes. This ability to tie cubes together addresses the “Business Problem Silos”.
- Prediction Performance Cube – This is a platform I’ve developed as a repository of Predictive Analytics models, for which I store versions, track predictions, track outcomes (when I can), and package all of this information into an OLAP cube I call the Prediction Performance Cube. Map Rock leverages this cube to track which models are weakening and why.
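As a rough illustration of what the Prediction Performance Cube tracks, here is a hedged Python sketch – all names and numbers are invented, and the real platform packages this into an OLAP cube, not a dictionary – of logging predictions against outcomes and comparing recent accuracy to overall accuracy to spot a weakening model:

```python
from collections import defaultdict

# Hypothetical log of hit/miss results per (model, version).
log = defaultdict(list)

def record(model, version, predicted, actual):
    """Store whether a prediction matched its eventual outcome."""
    log[(model, version)].append(predicted == actual)

def accuracy(model, version, last_n=None):
    """Overall accuracy, or accuracy over only the most recent predictions."""
    hits = log[(model, version)]
    window = hits[-last_n:] if last_n else hits
    return sum(window) / len(window)

# Simulate a model whose recent predictions have drifted off target.
for predicted, actual in [(1, 1), (0, 0), (1, 1), (1, 0), (0, 1), (1, 0)]:
    record("demand", "v2", predicted, actual)

overall = accuracy("demand", "v2")             # 0.5 over its lifetime
recent = accuracy("demand", "v2", last_n=3)    # 0.0 recently: the model is weakening
print(f"overall={overall:.2f} recent={recent:.2f}")
```

Comparing the recent window against the lifetime baseline is the simplest possible version of the “which models are weakening” question; the cube lets the same comparison be sliced by model, version, and time.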
I need to be clear as well that Map Rock is not an AI (Artificial Intelligence) application. But I do think of it as “pragmatic AI”. One of the ways I’ve been describing Map Rock is as a pidgin. A pidgin is a lowest-common-denominator language that emerges when two or more cultures are suddenly thrown together. In order to get things done, these people must somehow communicate. It doesn’t need to be eloquent, just enough to get the message across. Hawaii is a great example. In the mid to late 1800s, many cultures from the East and West were tossed together to work on the plantations. They were able to communicate through a mixture of words from those cultures. Today, that mixture of words has matured into an actual syntax, a creole (even though it’s still referred to as pidgin). So even though I refer to Map Rock as a “quick and dirty bridge”, it is one that will improve over time, and even be replaced time and again by better bridges built while traffic continues along the Map Rock bridge.
Figure 4 depicts human intelligence on the far left and machine data on the far right. Traditionally, we humans have communicated with that machine data by taking a far leap towards the machine, utilizing very rigid user interfaces (solid yellow line from Einstein to the GUI). Since we humans are better at learning than computers, this is the more feasible direction. For computers to communicate with us, they’d need to take a similarly big leap of their own (yellow dashed line from C3PO to the computers) – real Artificial Intelligence, which has been five years away for the past fifty years.
Figure 4 – How the extremes of human and machine intelligence communicate.
The reason a pidgin managed to emerge, enabling the many cultures to work together, is that they had a compelling enough shared goal – an incentive to first learn enough about the other languages and cultures and to work around the inconvenience of a not-so-optimized way of communicating. That is, as opposed to just giving up, each camp sticking solely to itself. In this sense, Map Rock is a pidgin because it acts as an intermediary language between the human and machine intelligences suddenly (in historic timeframes) thrown together. Map Rock is designed to blur the delineation between the human brain’s ability to think and innovate and machines’ ability to do lots of things very accurately and quickly. Yes, machines can process magnitudes more data, with more accuracy and practically no whining, than humans. And yes, the human mind is much more robust and versatile than a machine, qualities of the utmost importance in a complex world. But rather than completely compartmentalizing these respective skills, blurring the lines opens the door to smoother “cooperation”.
I’ve read much BI literature lately about the folly of making gut, intuitive, instinctive decisions, as opposed to data-backed decisions, as if intuition were a bad thing. I think the people who say that really intend to say “decisions based solely on one’s experience”, but it comes across as meaning going with a wild, Captain Kirk-like hunch at best, and at worst, a Miss Cleo psychic reading.
Over ten years ago I read a very insightful book titled “Second Opinions”, by Jerome Groopman, M.D. One prime take-away is the dichotomy between older, experienced doctors who have mastered their craft and younger doctors armed with the latest and greatest information and techniques. The flip side is that older doctors may be “locked in their ways” whereas younger doctors may trip up on rookie errors. Wouldn’t it be great if older doctors could maintain the malleability of their brains, as well as the time to keep up, as their utilization rises? And wouldn’t it be great if young doctors could have the countless things that take 30 years to learn uploaded directly into their brains? This book is what led me down the path of building a pidgin to bridge human and machine intelligence, much as we would bridge the benefits of experience and of being brand new.
Basically, those gut decisions are based on our experiences running in massively parallel fashion off our “conscious thread”. When information in front of us triggers those experiences, they tap us on the shoulder. We just need the awareness to feel those taps and to think beyond the tyranny of that conscious thread of ours.
However, there is more to our decisions than personal experience. Our brains also come “factory-packaged” with software that served us very well out in the jungle, before we became the savvy, self-aware, symbolic-thinking beings we are today. Indeed, one of the primary design goals I have for Map Rock is to protect us against our brains’ “features” that can lead us toward bad decisions. A small sampling of examples:
- Bias. There are so many ways we can be fooled. We tend to see what we want to see and pretend what interferes with our purpose isn’t there. That is, we favor conditions that suit our needs, in many ways avoiding the tougher path. “Numbers People” know the many types of bias (mathematical and psychological) and build their campaigns around them.
- Misconceptions due to pigeon-holing. This happens when we encounter something new and our brains are trying their hardest to fit it into something we already know.
- The innate notion that if X is good, X+1 is better and X*X is great. For example, if Omega 3 is good for us, then a lot must be great. The appeal of such simple heuristics leads to the tendency to swing from extreme to extreme.
- Related to “if X is good then X+1 is better” is the deeply ingrained heuristic that bigger is stronger, and so we back off, looking for a path of less resistance. To me, this is the “anti-strategy”. How often in the animal world do competing members of a species actually end up fighting once one has proven it is bigger than the other? See Note #2.
- We become numb to things. For example, if I keep getting a warning about an event that seems far off in the future and not requiring immediate attention, I’ll eventually tune it out.
It’s not that these features are flaws at all. Of course, we’re biased towards what we know. How could we be biased towards what we don’t know? Our brains are incredibly gifted at filtering and twisting data to fit into what we know. That’s because if we recognize something, meaning we’ve managed to successfully pigeon-hole it even if it means shoving a rectangle into a square hole, we relieve the pain of needing to continue our investigation (in case what we’re seeing is a wild and hungry bear). Because the inability to recognize something is in a sense “painful” (again because we may be facing something dangerous), we’re more than happy to semi-consciously twist things around to our liking in order to put the issue to rest. Bias is a huge subject in the data mining world and way beyond the scope of this blog. But the phrase that suggests withholding judgment until “walking a mile in another man’s shoes” encapsulates the problems associated with bias.
Faulty analysis on our part, no matter how accurate the data, is a major problem. I recall one vivid instance involving a highly publicized finding related to the occurrence of a particular disease. When an associate and I presented on data mining topics to the Department of Health of this state, one of the folks got up and began a tirade like I’ve never seen in a work environment. Bad analysis by the media – jumping to conclusions because that premature conclusion was so juicy – had resulted in terrible and unwarranted publicity and embarrassment for that DoH. It turned out to be a common problem: a very small town had an outlier number of cases of the disease, due to a small sample that is easily prone to extreme numbers.
On the other hand, decisions based mostly on data (the unambiguous, cold, hard facts) are naïve. When I give talks on data mining, I often ask, “What is the best thing about computers?” The answer is: They do exactly what you tell them to do. “What is the worst thing about computers?” They do exactly what you tell them to do. Computers are indeed the stupidest, most brittle “intelligences”, meaning, software is typically not designed to work beyond a well-defined boundary.
It’s our human brain structure, going beyond pure logic, that raises us above the birds and reptiles, at least in our ability to manipulate our environment. Predictive analytics models offer one method of proof, but data is very open to interpretation, as we’ve seen lately. There is so much data out there that one could find sufficient data points to make up a story about anybody, about anything, that at first or even second glance seems plausible. In fact, one of my primary design goals was to protect us from the biases that can lead us to make really silly decisions. For example, sometimes we are so sure about something that we will never question it. In Note #3, I describe something I learned last year that rocked my world.
In the case of Map Rock, it wasn’t a matter of just developing a pidgin language. I wanted to go deeper, into the essence of human activity: striving to achieve goals by employing imagined or learned strategies. It’s important to mention here that just about all strategies are a combination of learned and novel components, or a novel mix of learned components, mixed and matched. Instead of automating a specific, well-defined process, I chose not to automate but to support a generic, vague process. The ability to handle the vagaries of life is the essence of human success. And that is strategy, the king of processes.
I never bought into the notion of a software application solely automating and optimizing specific, well-defined business processes. I never liked thinking of software as merely tools such as hammers and screwdrivers. Software can also be the interface between human brains and the well-defined, rigid machines.
As software and robots grow in their effectiveness, less and less is left for humans to do. I believe that will revive issues such as winning, competitiveness, risk-taking, and dealing with complexity instead of shying away from it, as vital to success.
How Is Map Rock Different?
A common comment I get when talking about Map Rock is along the lines of “Isn’t that how things already work?” or “… Map Rock is just another …”. It’s easy to pigeon-hole Map Rock into one thing or another (mostly highly publicized buzzwords) because Map Rock touches so many technologies. It is an “integration tool” that integrates quite a few things. Map Rock isn’t “different” from the technologies it touches. Rather, Map Rock intends to unite them.
Following is a list of technologies/concepts that are either built into Map Rock or that Map Rock ties into. Hopefully, for the set of technologies built into Map Rock, it is a case of the whole being greater than the sum of its parts. But keep in mind that all of these technologies are big subjects in themselves, beyond the scope of this blog. Additionally, some of the definitions are still up in the air (or I’m still resisting some of the more or less “official” definitions). Here, I’m simply offering a brief description of each, but more importantly attempting to place the pieces in the big puzzle of how to make better decisions.
I’d like to reiterate as well that it is absolutely not my intent to minimize the value of any of the items I discuss below. I consider everything listed below an essential piece of that big puzzle and created Map Rock as an attempt to tie them together. And Map Rock will have overlap with just about everything I mention below.
Figure 5 depicts four primary technologies around Map Rock that play the biggest roles. Roughly, Map Rock enhances Business Intelligence and Performance Management. I lumped Self-Service BI and Predictive Analytics into one bucket because self-service BI will enable data analysis on a more “personal” level. For example, data for a car company focused on the US will not apply in India. I also lumped Big Data and Complex Event Processing together, as there is some important overlap.
Figure 5 – Three Major Technologies Map Rock intends to integrate.
Business Intelligence, in my view, lumps together data warehouses/marts, ETL, OLAP, and the necessary front-ends. All of those pieces are certainly in the puzzle, but for the context of this discussion it is necessary to differentiate Map Rock from what is currently thought of as multi-dimensional front-ends: Map Rock is not a database engine, OLAP engine, or ETL tool.
BI systems manage to obtain and properly integrate data across a range of silos. But BI hasn’t really focused on the relationships between nuggets of data. From the end-user’s point of view, it still just reports on calculations (mostly sums) in a very responsive and easily accessible manner. I think a big part of the reason the end product of BI is still rather simplistic is that most BI projects are scoped around a small enough domain that the people involved can still wrap their brains around it to a sufficient level.
BI projects were notorious for failure. Many times it was because BI by nature has the nebulous goal of “help me make better decisions”. A project has a hard enough time even with a crystal-clear goal. Making better decisions means providing a decision maker with better data. Such an ambiguous goal is hard to achieve within a short period of time, on the order of a few months, and it is a moving target since business conditions are always evolving.
So BI projects were scoped down to resolve specific business problems, as opposed to being a centralized, all-encompassing repository from which many business problems could be addressed in a consistent manner. Often this is a departmental business problem resolved by accessing at least some data from another unit. For example, the marketing department incorporates data from the inventory department to resolve problems that may occur as a side effect of a wildly successful marketing campaign. The result is that BI cubes and data marts are scattered throughout an enterprise, for the most part disconnected from each other, in what I call Business Problem Silos.
A major Map Rock Version 1 UI is a multi-dimensional browser, like the old ProClarity or PowerPivot, but with a big twist: it focuses on relationships. So it would be fair to include Map Rock in the BI Visualization category. The differentiation is that Map Rock incorporates many other components beyond just the “BI” part.
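To give a feel for the “Casting a Wide Net” browser described earlier, here is a small Python sketch – the cube names, measures, and numbers are all made up – that builds the kind of correlation grid it would display, pairing measures sliced out of two different cubes along a shared time axis:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical measures from two departmental cubes, by month.
marketing_cube = {"ad_spend":   [10, 12, 15, 13, 18, 20],
                  "web_visits": [100, 130, 160, 140, 190, 210]}
inventory_cube = {"stockouts":  [1, 2, 4, 3, 6, 7]}

# The grid the Correlation OLAP browser would render: one cell per measure pair.
grid = {(a, b): round(pearson(marketing_cube[a], inventory_cube[b]), 2)
        for a in marketing_cube for b in inventory_cube}
for pair, r in grid.items():
    print(pair, r)
```

A grid like this is how two cubes that never shared a schema can still be tied together: any dimension they can both be sliced by (here, months) becomes the axis along which correlations are computed.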
Performance Management is really the science of execution. I consider it the “BI Killer App”. Map Rock Version 1 is submerged within a Performance Management framework, familiar to just about anyone in the corporate world. The current suite of tools focuses mostly on “dashboard graphics” (a scorecard and an array of supporting line graphs, pie and bar charts), but acts as not much more than a nervous system for the enterprise (Bridging Predictive Analytics and Performance Management). The strategies, illustrated in the pretty much dead component of the dashboard known as the Strategy Map, are the brains of the system. PM provides a framework for achieving goals, for aligning the efforts of people towards the same goals.
Map Rock “brings the strategy map to life” and enables a more natural, distributed system of strategic thinking. Meaning, although Performance Management is rolled out as a top-down process, I like to think of it as bottom-up in practice.
Theory of Constraints is a methodology for the continuous improvement of a system, and it underlies much of the BI world’s approach to optimizing systems. Roughly speaking, a system is only as strong as its weakest link, so we must identify that link, improve it, then find what is now the weakest link (usually some sort of bottleneck), improve it, and on and on. Map Rock certainly encompasses much of the Theory of Constraints, as do many applications and techniques ranging from Performance Management itself to Supply Chain Management.
Where I feel Map Rock is differentiated is that it targets beefing up the “improve it” part of the Theory of Constraints. The other part, identifying a constraint (bottleneck), has its difficulties, and as they say, identifying a problem is half the battle. But figuring out how to fix or improve the problem is a hairier challenge. For example, it’s comparatively easy for a doctor to diagnose cancer (see Note #4). A diagnosis is a recognition, a matching of a set of presented symptoms to something known, and isn’t in itself a physically irreversible action. But treating the diagnosed cancer is a much bigger challenge. In the real world, we’re constrained by time, technology, law, morals, and physics, resulting in what can be unique situations. Map Rock focuses on strategy, which is about finding novel ways to work around these constraints towards a goal of improvement.
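The “identify the constraint” half of the loop is mechanical enough to sketch in a few lines of Python (stage names and throughput numbers are invented). Note that the “improve it” step is faked here as a flat 1.5x bump, which is exactly the part that is hard in real life and the part Map Rock targets:

```python
# Hypothetical throughput (units/hour) of each stage in a fulfillment pipeline.
stages = {"order_entry": 120, "picking": 40, "packing": 90, "shipping": 60}

def bottleneck(stages):
    """The weakest link: the stage with the lowest throughput."""
    return min(stages, key=stages.get)

# The Theory of Constraints loop: find the weakest link, improve it, repeat.
for _ in range(3):
    weakest = bottleneck(stages)
    print(f"bottleneck: {weakest} ({stages[weakest]}/hr)")
    stages[weakest] = int(stages[weakest] * 1.5)  # placeholder for "improve it"
```

The loop keeps surfacing a new bottleneck after each fix; what it cannot tell you is *how* to improve the stage it found, given constraints of time, technology, law, morals, and physics.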
Predictive Analytics and Data Mining employ data mining models to take “educated” guesses at future events based on statistics derived from historical transactions. For example, using a customer’s purchasing history to guess what other products that customer may be interested in. Data mining is about finding relationships, as is Map Rock. In fact, Predictive Analytics and Data Mining are very important to Map Rock. However, Map Rock is not a data mining application upon which data mining models are developed. Rather, it integrates data mining models from other sources as just one of the types of relationship sources it integrates. Map Rock tries to pick up where predictive analytics leaves off.
Predictive Analytics and Data Mining models in themselves are still relatively “stupid”, meaning the predictions are often wrong, sometimes very wrong. As I write this, I can recall many instances over the past few months when MSN weather claimed a zero percent chance (not merely an unlikely chance) of precipitation and it did much more than drizzle. These wrong predictions stem from the limited data from which relationships are derived – you only know what you know.
The level of wrong predictions (false positives and false negatives) varies widely depending on the application of the model. For example, a false negative for the TSA (missing a real terrorist) is so expensive that the models take a better-safe-than-sorry approach, trading a terrible false negative for an incredible number of false positives that are each cheap to address. But false positives and false negatives are sufficient in number that either the cost of blindly taking action based on these potentially incorrect predictions must be really small (sending an ad for cat food to someone allergic to cats) or a human must triage each prediction (patting down grandma at the airport). So Predictive Analytics is restricted to the harvest of low-hanging fruit, or to cases where the cost of excessive false positives isn’t beyond a reasonable level. This reminds me of a dishwasher where 90% of the dishwashing is done manually anyway – yeah, it adds value, but we could do so much better.
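That cost asymmetry can be made concrete with a small Python sketch (the scores and costs are invented): pick the score cutoff that minimizes total cost, given how much a false negative hurts relative to a false positive. A TSA-like cost structure pushes the cutoff down; a cheap-miss structure pushes it up:

```python
def best_threshold(scored, fn_cost, fp_cost):
    """scored: list of (model_score, is_actually_positive).
    Returns the cutoff score that minimizes total misclassification cost."""
    candidates = sorted({s for s, _ in scored})

    def cost(t):
        fn = sum(1 for s, pos in scored if pos and s < t)       # missed positives
        fp = sum(1 for s, pos in scored if not pos and s >= t)  # false alarms
        return fn * fn_cost + fp * fp_cost

    return min(candidates, key=cost)

# Hypothetical model scores with their true labels.
scored = [(0.9, True), (0.7, True), (0.6, False), (0.4, True), (0.2, False)]

# When a miss is catastrophic (the TSA case), the cutoff drops: better safe
# than sorry, at the price of many cheap false positives.
print(best_threshold(scored, fn_cost=100, fp_cost=1))   # low cutoff: 0.4
print(best_threshold(scored, fn_cost=1, fp_cost=100))   # high cutoff: 0.7
```

The same model produces very different behavior depending only on the cost ratio, which is why “how accurate is the model?” is the wrong question without “what does each kind of mistake cost?”.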
Another problem is that models usually address a single problem, for example, predicting tomorrow’s weather or how many F-150s of each color to stock on a car lot. This is problematic because many contributing factors in a data mining model are not deterministic values; they are themselves the result of another prediction. These are “it depends” questions. For example, the ratio of high-end cars a dealer should stock depends on the mix of customers as well as on a prediction of the state of the economy, both of which are complicated data mining problems in themselves.
Additionally, since the world is in constant flux, models (relationships between things) also change, and little analysis is given to why the models changed. The models are refreshed with the latest data and probably will work better, but we miss a great opportunity to analyze what has changed.
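That missed opportunity can be made concrete: instead of silently refreshing a model, compare the strength of each relationship before and after the refresh and surface the ones that shifted. A minimal sketch, with a hand-rolled Pearson correlation and made-up factor names and data:

```python
# Hypothetical sketch: flag relationships whose strength changed between
# the old training data and the refreshed data. Factor names and numbers
# are illustrative.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def drift_report(old_data, new_data, threshold=0.3):
    """Report factors whose correlation moved by more than `threshold`.

    old_data/new_data: dict mapping factor name -> (xs, ys) observations.
    """
    report = {}
    for factor in old_data:
        before = round(pearson(*old_data[factor]), 2)
        after = round(pearson(*new_data[factor]), 2)
        if abs(before - after) > threshold:
            report[factor] = (before, after)
    return report

# Toy example: a promotion that used to lift sales now depresses them.
old = {"promo_vs_sales": ([1, 2, 3, 4], [2, 4, 6, 8])}
new = {"promo_vs_sales": ([1, 2, 3, 4], [8, 6, 4, 2])}
changed = drift_report(old, new)
```

A refresh that quietly absorbs a sign flip like this one is exactly the kind of change worth a human’s attention.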
Most importantly, Predictive Analytics doesn’t address the issue of “What if I’m wrong?” Without being cognizant of the ramifications of a bad decision (instead of pretending they aren’t there), the consequences of being wrong, and without contingency plans, we could create more problems than the Predictive Analytics models solve. This is, roughly speaking, Risk Management. It’s also easy to forget that a prediction could look correct precisely because that is what someone wants you to think. Remember, we’re able to catch fish with a worm on a hook because the fish doesn’t consider the agile cleverness of humans. I write more on this in Undermined Predictive Analytics.
I look at Predictive Analytics models more like “insect intelligence” than our human intelligence enabled by symbolic thinking and self-awareness. Insect intelligence executes relatively simple rules that are hopefully right more than they are wrong. Follow these rules and the species survives, but there is no concern for the individuals of the species. There is no expectation for each insect to survive. No one is going to take birds to court over each murdered insect. In most cases, a model is deemed successful as long as it selects better than a coin toss. This isn’t good enough when the consequence of being wrong is a checkmate in our human world. We would need to look further up the tree at the high-hanging fruit; otherwise we’re doomed never to move on from this state of Predictive Analytics.
Lastly, Predictive Analytics is often criticized for using historic (lagging) data to make future (leading) predictions. Although this is a very valid point, historic data is no different from the experiences we learn through our lives. We engage those memories for the sake of metaphor in the hope of getting pointers from what we learned in the past. Very often, those metaphors are worthless (or worse, they lead us right into the trap of a predator that counted on us doing just that), but apparently it works well enough. The fact that we are by far the most successful creatures on this planet is proof that applying outcomes from the past to present situations works more often than not. Of course, all brains capable of learning apply the past, so there must be some secret sauce to human success that is deficient or missing in other creatures. More on that in Parts 4 and 5.
Artificial Intelligence, at least of the Commander Data or C3PO sort, is too much of a stretch. It is the proverbial thing that has been “five years away” for the last fifty. I sometimes think of Map Rock as a sort of “pragmatic AI”, an application that incorporates AI techniques but isn’t in itself an “artificial intelligence”. Other examples of “pragmatic AI” are predictive analytics models that detect whether the swipe of your credit card is fraudulent or whether a customer is demonstrating behavior indicative of jumping to a competitor.
The major issue with AI as it stands today is that it only knows what it knows. I discuss these problems in Parts 4 and 5 in sections titled “The Limitations of Logic” and “Imagination, the Key to Human Success”. Both IBM’s Watson and Apple’s Siri seem very intelligent, but they are really just “book-smart”. That is, they are very good at assimilating information and drawing sophisticated conclusions, but deficient at coming up with original thought; at least for the moment.
Master Data Management. If we cannot create a centralized, enterprise-wide source of all data in an enterprise from which all reporting springs, the next best thing is to centralize what we can: the relatively few but major entities common throughout, such as products, customers, and employees. Additionally, that Master Data Management system would only store attributes of those entities that are used enterprise-wide (e.g., customer name and address, product description and code). At least it’s a step in the right direction.
It would be easy to think that this is what Map Rock does. However, there will still be many instances where departmental data has no common ground with data from other departments. Actually, in a sense I would be worried if an enterprise were able to consolidate all of its data into such simplicity, since that simplicity could only exist in a static system.
The growth of Master Data Management is great for Map Rock Version 1. It means that more disparate BI sources will share common dimensions through which we can correlate dimensions unique to different sources. Without such dimensions common to many BI sources, correlations would be limited to only time periods, which practically every BI source would include.
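The value of a shared dimension can be shown with a tiny sketch: two BI sources that have nothing else in common can still be lined up on time periods and then correlated. The source names, measures, and numbers below are all hypothetical, not Map Rock’s actual schema.

```python
# Hypothetical sketch: correlating two otherwise-disjoint BI sources via
# their one common dimension, the time period. All names and numbers are
# illustrative.

sales_by_month = {"2013-01": 120, "2013-02": 135, "2013-03": 150}    # e.g., an ERP cube
complaints_by_month = {"2013-01": 30, "2013-02": 24, "2013-03": 18}  # e.g., a CRM cube

# Align the two measures on the months they share.
common_months = sorted(set(sales_by_month) & set(complaints_by_month))
pairs = [(sales_by_month[m], complaints_by_month[m]) for m in common_months]
```

With the measures paired up like this, any correlation statistic can be run over `pairs`; with a richer common dimension such as a mastered customer list, the same join works at a much finer grain than time alone allows.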
Self-Service BI. It’s increasingly realized that there isn’t “one version of the truth” in an enterprise. Information workers see the world from the perspective of their role. Accommodating each perspective places an infeasible burden on IT. Additionally, IT is already overburdened by growing demand and having to maintain impeccable service quality. But being limited to what IT can currently provide can, at best, tie the hands of an information worker and, at worst, lead her to make naïve decisions based on a limited scope of data. Self-service BI opens the door for an information worker to other sources of data to incorporate into her analysis. One self-service BI application, PowerPivot, has been fairly accurately referred to as “Excel on steroids”.
Map Rock in itself isn’t self-service BI, but it does accentuate its value. In particular, Map Rock would serve to address the negative side-effect of allowing an uncontrolled proliferation of BI data (kind of like how Excel Services reins in “spreadmarts”), for example, a proliferation of PowerPivot workbooks. For Version 1 of Map Rock, the proliferation of self-service BI increases the need for Map Rock’s concept of tying together “Business Problem Silos”.
Big Data (and Complex Event Processing). The capture of huge volumes of sensor data, clickstreams, diagnostic data, etc. will certainly bang open the doors for more analytics opportunities. Unlike in the past, when data was stored specifically to support OLTP applications, we now store data purely for the purpose of analytics: studying customer behavior, predicting events. However, more data in itself isn’t the answer to deriving more value from BI and going beyond showing us what we already know. More data points eventually cloud the picture through sheer volume.
Big Data is more well-known than Complex Event Processing (CEP). I kind of think of CEP as our brain’s visual cortex, which starts with a mass of photons and processes it into something at least somewhat useful – vague shapes; then Map Rock picks it up and relates those pieces, which then form into objects.
Big Data, combined with Complex Event Processing and in-memory analysis tools, drastically lowers the bar for effective analysis of data of much higher volumes, which are a result of wider breadth (more data points), finer granularities, and greater lengths of historic time.
Metadata Management and Semantics. If I could only check one box from the list of technologies here, I’d say Map Rock is a metadata manager. It catalogues data sources, properties of data from those data sources, and captures relationships of many sorts between what could be a haystack of data points. This integration of relationships facilitates Map Rock’s “Term Integration” philosophy. Any type of software application that is associated with the word “intelligence” is based on a foundation of metadata. “Intelligence” cannot exist without a “knee-bone connected to the thigh-bone” capability.
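A minimal sketch of that “knee-bone connected to the thigh-bone” idea: a catalog of sources, their attributes, and typed relationships between them that can be walked in either direction. The source names, attributes, and relationship types below are made up for illustration; Map Rock’s actual catalog is far richer.

```python
# Hypothetical sketch of a metadata catalog: data sources, the attributes
# they expose, and typed relationships connecting them. All names and
# relationship types are illustrative.

catalog = {
    "sources": {
        "CRM": {"attributes": ["customer", "region"]},
        "ERP": {"attributes": ["product", "customer"]},
    },
    "relationships": [
        ("CRM.customer", "same_as", "ERP.customer"),   # the MDM-style link
        ("ERP.product", "sold_to", "CRM.customer"),    # a semantic link
    ],
}

def related(term, catalog):
    """Walk the relationship list for anything connected to `term`,
    in either direction."""
    hits = []
    for subj, rel, obj in catalog["relationships"]:
        if subj == term:
            hits.append((rel, obj))
        elif obj == term:
            hits.append((rel, subj))
    return hits
```

Even this toy version shows the payoff: asking what is connected to `CRM.customer` surfaces both the mastered identity link and the semantic sales link, which is the seed of the relationship-hopping that “intelligence” features depend on.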
Master Data Management, described above, does overlap with Metadata Management in that the former does involve some metadata. However, that metadata generally refers to the source of the data, such as server, table, and field names. In this case, I’m going further into semantics, which would include facts such as a customer being a person or a surgical procedure being a product.
Cognitive Maps, Mind Mapping Tools facilitate brainstorming activity by drawing graphs electronically. Theory of Constraints software packages all include such a UI, as does the semantic component of Map Rock (which I mention under Metadata Management and Semantics). The graphs drawn by these mind mapping tools are like the network diagrams that teams draw on whiteboards. Such tools resemble Visio, but aren’t as generic. A really great hybrid tool is the whiteboard that can be electronically saved.
Mind Mapping software superficially looks like it provides the same services as Map Rock. Because Map Rock has a large semantics component, it does include a UI that looks just like Mind Mapping software. But again, Map Rock extends beyond just drawing the graph of relationships.
Brainstorming is about freeing ourselves from the constraints of how things are (inside the box) to outside the box thinking – imagination. Imagination is not just a “secret sauce” of Map Rock, but underlies the method by which humans overcome the limitations of logic because we only know what we know.
Rules Engines is the area where Map Rock could most closely be categorized. My SCL language is essentially a rules engine, and not as unique as it was when I first built it in 2005. The UIs of software with “workflow” elements such as SQL Server Integration Services, BizTalk, and Windows Workflow will look something like parts of Map Rock as well.
Earlier, I discussed why the word “rule” isn’t quite adequate when describing Map Rock. The problems I see with rules engines at this time are that:
- Rules management is too brittle and will not work without innately dealing with the complexity of the world.
- The rules are too simplistic (at least as of this writing), limiting rules engines to decisions geared towards only the lower-hanging fruit.
A rules engine is a sort of mind-mapping software; in fact, its UI, if the rules engine has one, looks like a more quant-oriented version of Mind Mapping software, but more hooked into data and into resolving queries against the rules. Rules engines also don’t really address the difficulty of authoring rules. Map Rock integrates rule sources ranging from predictive analytics models to activity gleaned from brainstorming sessions (casting a wide net), which is similar in concept to click-stream analysis.
The problem is that it’s really hard to map rules, which translates to teaching a very stupid computer what it can take years to teach a very smart human. So recognizing that obtaining such a repository of rules is the key to effectively competing, how do we bring down the difficulty to a point where the cost/benefit is worthwhile? The strategy I took is to:
- Find the simplest paradigm possible.
- Figure out how to minimize the need to author rules – how to leverage and reuse rules.
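The leverage-and-reuse idea can be illustrated with a tiny forward-chaining sketch in the spirit of (but far simpler than) a Prolog-style rules engine such as SCL: author a rule once, and it fires wherever its antecedents come true. The facts and rules below are toy illustrations borrowed from the Soccer Mom example, not actual SCL syntax.

```python
# Minimal forward-chaining sketch, illustrative only. Each rule is a pair:
# (list of antecedent facts, consequent fact). If every antecedent is
# known, the consequent is added, and newly derived facts can in turn
# trigger other rules -- this chaining is the "reuse".

facts = {"soccer_mom(sue)"}

rules = [
    (["soccer_mom(sue)"], "has_kids(sue)"),        # by definition: a rule
    (["has_kids(sue)"], "likes_minivans(sue)"),    # a heuristic, not a hard rule
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if all(a in derived for a in antecedents) and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived
```

Note that the minivan heuristic never had to mention soccer moms at all; it was authored once against `has_kids`, and the chaining did the rest. That composability is what brings the cost of a rules repository down.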
Conclusion of Part 3
At this point, the reader of parts 1 through 3 should have a good idea of what Map Rock does and where it fits in the Decision Making space. Helping people make better decisions is a nebulous task and probably the toughest thing to be asked of a computer. The last two parts of this series focus more on the argument for why such an approach is necessary.
- Part 4 – We explore strategy, complexity, competition, and the limitations of logic.
- Part 5 – We close the Problem Statement with a discussion on imagination, which is how we overcome the limitations of logic, and how it is incorporated into Map Rock.
- Map Rock Proof of Concept – This blog, following the Problem Statement series will describe how to assess the need for Map Rock, readiness, a demo, and what a proof-of-concept could look like.
- The only speeding ticket I ever got was for going 74 in a 70 mph zone up the I-5 near Coalinga. Every car zoomed by me like I was standing still, and I was often alone on the freeway. In fact, it really was dangerous to be going only 74, with everyone else going at least 90. At 70 mph, that difference was ridiculous as packs of cars backed up around me with that 20 mph differential. So while I was all alone on the I-5, this cop pulls me over and gives me a ticket. I’m thinking, “Did you stop all those hundreds of cars zooming by me?” I decide I’m going to fight it. I go all the way to Coalinga and the judge says, “Yes or no, were you going over 70?” A $100 ticket wasn’t the hill I wanted him to “throw your ass in the city joint” over.
- When I was very young learning judo (around six), believe it or not, I was actually a skinny little kid. I recall the sensei telling me (very much paraphrasing) that the big kids are indeed stronger and an innate fear of size is natural. But put in the right position, their joints break as easily as anyone else’s. This is similar to how it’s easy to hold an alligator’s mouth shut.
- Last Fall Laurie took a picture of me at the edge of a corn field in Indiana. Up close, I was shocked to learn that corn (field corn – comprising about 99% of the corn crop) usually has just one ear! There is occasionally a second, but it is much smaller. I was so sure there was an ear at just about every leaf. I would have bet my life on it. It may sound silly, but this rocked my world. Similarly, my brother-in-law recently challenged me to name the color of a yield sign. Again, I would have bet my life it was yellow. As mundane as this issue is (or perhaps because this issue is so mundane), such experiences hammer into my head never to take something as a given.
- Diagnosis is much easier than treatment. As I mentioned, it’s easier to spot cancer than to treat it. A diagnosis is the answer to an equation. Fill in the variables and we get our answer. It is just information, a recognition of a face, or recognizing an approaching storm; it is not yet a physically irreversible action. The diagnosis needs to be correct, otherwise we will apply the wrong treatment (if there is a cookie-cutter treatment) or engineer an incorrect treatment strategy – which is physically irreversible. Currently, humans with 12 years of college and on-the-job training, vetted through all sorts of exams and regulations, make these diagnoses. A prediction of any sort from a Predictive Analytics model (which is not at the level of a highly educated doctor) is also a “diagnosis”. I wonder what will happen as predictive analytics models out in the world proliferate, making billions of low-hanging-fruit predictions (way more than humans can keep up with), and we slowly become complacent about just following what they predict.