Map Rock Problem Statement – Part 2 of 5

This is Part 2 of 5 of the Map Rock Problem Statement. After a discussion in Map Rock Problem Statement – Part 1 of 5 of why I decided to develop Map Rock, Part 2 describes the target audience as well as the primary scenario for Version 1.

Map Rock’s Target Audience

Map Rock's target audience is those held accountable for meeting objectives in an ambiguously defined manner. Their objectives are ambiguous in that there isn't a clear path to success, and thus there is an art to meeting them. This traditionally includes the managers: CEOs through middle managers down to team leaders. But it's crucial to point out that even if you have no direct reports, as a modern, commando-like information worker, you still have a growing level of responsibility for designing how you achieve your objectives. Map Rock is a software application intended to help such information workers through the strategy invention process and the execution of that strategy.

Map Rock isn't just a "high-end tool" limited to a niche audience of quants, wonks, and power-users, all with highly targeted needs. For example, I'm aware of some of the marketing problems associated with the abrupt dropping of the PerformancePoint Server Planning application back in 2009. It was a very complex tool for a very small audience of managers. Such a product may not be a good fit for a Microsoft that develops blockbuster software for a wide audience ("whatever for the masses"), but it can be a great opportunity for a startup willing and able to crack a few nuts. However, my goal isn't to develop and sell blockbuster software that goes "Gold" or "Platinum". While financial success is naturally one of my goals, more importantly, I have a vision for BI which includes – though is certainly not limited to – notions such as these:

  • Assist information workers with finding relationships they don’t already know.
  • Blur and smooth out the rough, abrupt hand-off points between human and machine intelligence.
  • Do better than keep on shoving square pegs into round holes because things need to be neatly categorized.
  • Help with asking, when making a decision: what if I'm wrong, and what can I do about it?
  • Help humans regain an appreciation for our rather unique quality of imagination and its artful application. (See Note #1 for my comment on "art".)

The answer isn't another high-end, power-user tool, nor, at the other extreme, a watered-down, lowest-common-denominator, single-task tool. I built an integration technology that ties together the efforts of managers, IT, wonks, and commando information workers. Towards that goal, I needed to dive deeper than the planning activity addressed by the PerformancePoint Planning application, down to the more abstract, lower level of strategy, an activity common to all people trying to make their way in the world.

I’ve pondered whether Map Rock is a “data scientist’s” tool. I think of a data scientist as someone with the wide-ranging skill required to procure and analyze data. My view of a data scientist is someone with an “A” or “B” (in the context of school grades A through F) level of skill with databases, analytics, and business (all broad categories in themselves). This is different from quants and wonks who generally have a narrower and more focused “A+” level of typically math-oriented skill, utilizing a narrow range of highly specialized tools such as SPSS.

I see the data scientist as the high-end, but not exclusive, audience of Map Rock. So I need to be clear that Map Rock is intended to work best as a collaborative application, not a single-user, stand-alone tool. As I mentioned above, Map Rock integrates the efforts of human managers, IT, wonks, and commando information workers. The data scientist plays a crucial role in a Map Rock ecosystem, but not like a DBA who manages the database system yet is not herself a consumer of the data. The data scientists using Map Rock are more in the spirit of the entrepreneurs of an economy, who keep the economy robust through their push for change and are also consumers themselves, and in some ways like the Six Sigma black belt who actively nurtures the system.

Additionally, and equally importantly, Map Rock integrates these human intelligences with the machine intelligence captured in our various BI implementations: Data Warehouses/Marts, OLAP Cubes, Metadata repositories, and Predictive Analytics Models. From this point of view, data scientists are also ambassadors of the human world to the machine world, as they speak machine almost as well as they speak human.

At a higher level (the organization level), the target audience also considers the plight of the corporate "Davids" (of David and Goliath fame) who have come to terms with the fact that they will not beat the Goliaths at the very top of their market (ex: McDonald's and Burger King, the Big Three automakers) by playing their game. Any entity at the top is at the top because current conditions favor it. Those conditions may have been imposed on the world a while ago through crafty engineering; they may have already existed and the entity saw a way to capitalize on them; or, most likely, a happy series of events led to dominance (somebody has to roll heads ten times in a row). In any case, it seemingly makes sense for the members of that "royalty" to do all they can to preserve those conditions, or at worst to control the relentless wave of change as best they can. They play a primarily defensive game. They have at their disposal the sheer force of their weight to exploit powers such as economies of scale and brand-name recognition, to lobby for laws, and to trip up promising startups. This isn't bad; it makes perfect sense. But eventually, controlling change and plugging one hole leads to two more, then others, until finally the system explodes or implodes.

Meanwhile, the scrappy "sub-royalty" corporations below the #1 and #2 royalty of their market keep exerting pressure by finding novel ways to topple them, playing a wiry offensive game that capitalizes on the virtues of being small. They find the Goliaths' weak spots and hit them there. This audience is the one with the incentive to take Performance Management to the next step, beyond a "nervous system" to a focus on strategy.

Just as compelling are the thousands of relatively small "conglomerate" corporations consisting of a number of franchises such as hotels, fast food restaurants, and medical practices. These mini-businesses, of which there could be a few to hundreds in a conglomerate, span a range of markets and localities, meaning each piece has unique characteristics that render what works in one place useless in another. For example, it would be very useful to be able to test the risk of applying an initiative that worked in a pilot project across the board to all medical practices.

The most common negative feedback I get on Map Rock is some form of, "People would not want to do that." That is, they will choose to rely on their gut instincts as opposed to turning to a software application to validate their beliefs, to find new levers of innovation, to explore for unintended consequences, or to clarify a mess of data points back into a coherent picture. Curiously, most of the people who tell me this are themselves the player/managers I will talk about more below. Remember, Map Rock is in great part about blurring the line between human gut instincts and the cold hard facts of our BI assets, meaning things don't need to be "this or that".

Yes, Map Rock goes beyond a tip calculator or the sort of "there is an app for that" software. While it is painful to learn anything new, enough incentive can get us to do all sorts of things. In other words, we learn when the pain of learning is less than (actually much less than) the pain the learning is attempting to resolve. The incentive here is to win. Corporate entities don't exist simply so we have somewhere to do things. Map Rock is about enhancing the abilities of information workers, which goes beyond just speeding up a process to get work done faster and make room for other work (or even play). So I ask:

  • Are you willing to limit your options to only what you know? Are you going to limit yourself to that closed box which you may have mined to death over the years, depending solely on the existence of some gem you have yet to find?
  • Even if you feel confident in your knowledge, say from being in that position for decades, are you really sure that you know everything about your job? There is nothing else to learn? Are you 100% confident you kept up with the changes? Your mastery at your position hasn’t blinded you from things going on outside? Are you confident you’ll always stay in that job, never needing to take a few steps back in the mastery of your subject area?
  • Your actions don’t exist in a vacuum. Do you know how your actions affect other departments? Inadvertently, or even knowingly – like when there is so much going on and a deadline is looming, we realize we can’t save everyone. On the other side of the coin, at the risk of sounding cynical, are you certain all of the other managers will not jump on an opportunity to excel, even at your expense?

One version of the "people aren't going to do that" argument I've heard a few times is the notion that people only want to do the least amount possible to accomplish their tasks. The reasoning is that our plates are already so full there is just no room for anything else; there isn't even enough time to take care of what is already on our plates. While this may make logical "business sense" from a common-sense point of view, I find it very sad and a bit insulting to the many who consistently go beyond the call of duty, not just at work, but in many facets of their lives. While it may be true that many people would rather not think more than they need to and do the least amount possible to satisfy most of their obligations, when it comes to one's livelihood, I would think that is incentive enough to buck up and fight.

Perhaps you feel you are in a job where you aren't paid to strategize (one of those "We don't pay you to think" jobs); you're paid to simply execute instructions as flawlessly as is humanly possible. I'm not sure how comfortable I'd feel if I considered the implications of what the consistent improvement of Artificial Intelligence and robots holds in store for the value of such roles, especially after conversations with an old auto worker comparing the assembly lines of the 1960s to the assembly lines of today. If outsourcing jobs to countries with low wages seems bad, think about outsourcing to a robot that doesn't complain and can do your job flawlessly. See Note #2 on Robots.

Business Problem Scenario

At most feisty corporations, meaning those that are playing to win (as opposed to "not lose"), a periodic ritual takes place called the Performance Review – at least annually, and to lesser intensities quarterly or even monthly. Every employee, both managers and worker bees, is evaluated on their performance during the previous period and assigned targets for the next period. Those targets originate from corporate goals and the high-level strategies designed by the corporate generals to meet those goals, and then trickle down the org chart. As the high-level goals and strategy trickle down to wider and wider breadths of people with increasingly specialized roles, those people receive increasingly specific goals and objectives.

That trickling down goes something like this. A CEO deems that we must focus on our ability to retain customers and devises a strategy to "delight customers". The VPs are given objectives specific to their departments. For example, the VP of Manufacturing works to improve quality to Six Sigma standards and the VP of Support works to improve the scores of support surveys. These assigned objectives are just the "what", not the "how". It's up to each VP to devise the how – a strategy – to meet their given objectives (as well as the key performance indicators to measure the progress of their efforts). Once they devise their strategies, they hand down objectives (components of their strategy) to the directors under them. The directors in turn devise strategies and hand objectives down to managers, and on and on until finally we reach those with no direct reports, the majority of the employees.

In this day of the information worker, decentralized control, and delegated responsibility, it's more up to each worker to devise how to accomplish their objectives. It's far from anarchy. We're still expected to stay within myriad government and corporate guidelines and are usually expected to prove that our "how" has been tried many times before with great success (meaning trying new things doesn't happen very often because no one is willing to be the first). But we still have much more wiggle room for creativity than if everything were a severely top-down, unambiguous, micro-managed directive.

At one corporation I've worked with, one with a highly competitive culture, there are about 3,000 managers out of a total of about 10,000 employees. That's not as bad as it sounds since most of those managers are player/managers, like a sergeant who shoots a gun as well as directs the actions of a squad. But that's 3,000 managers struggling quarterly with devising strategies to meet objectives that are usually tougher than the previous quarter's. That's 3,000 different personalities, skillsets, points of view, points of pain, environments, social networks, etc. These assigned objectives aren't tougher just because they involve a higher target for the same value, such as Sales. They could be tougher because they represent strategic shifts from above that require the manager to change as well.

So there are the 3,000 managers banging their heads against the wall figuring out how to meet these challenging assigned objectives, which are never as cut and dried as one would hope. After much booze and crying, they come up with a few ideas. Each of these ideas is a theory, a strategy. They consist of chains of cause and effect. For example: to achieve higher customer satisfaction ratings, I will improve the knowledge of the support personnel by placing them into a smaller room where they can overhear other conversations, increasing their knowledge of problems that pop up for our products and reducing the steps needed to find someone who may have already encountered that problem. For now, just note the negative side-effects this strategy could have.

Remember, this strategy is just a theory. Many funny things can happen during that trickling down of objectives from superior to subordinate. For example, as the trickling down gets wider and wider:

  • The spirit of the original strategy can get lost in translation. This is no different than how a story evolves as it’s told from person to person.
  • They can begin to conflict with or even undermine each other, hopefully always inadvertently. In a corporation with thousands of employees, the efforts are bound to conflict or have contention with others.
  • The strategies at whatever level could be wrong, whether too risky or too naïve. A strategy is just a theory.
  • The chosen KPIs could be the wrong things to be watching.

But it gets worse. During execution of the strategies, many things can happen as well that thwart the corporation's efforts:

  • If the “theories” of the strategies at any level are wrong, once a bad strategy is selected, subsequent strategies by subordinates will address things incorrectly.
  • Things fly out of left field throwing the game plan out of control. “Black Swan” (if it hasn’t yet happened, it couldn’t possibly happen) events are not as rare as we seem to think (see Note #3). As the effects cascade through the enterprise, execution degrades into a free for all, with employees retreating back to their comfort zone of old habits that don’t work anymore.
  • Employees could game the KPIs. Meaning, they are able to meet or even rocket past their goals in a way that is easy for them, but probably has some side-effect to someone else. It’s amazing how clever people can be. So it looks like they are doing well, but in reality are not.
  • Employees may leave or even give up on reaching their goals as a sports team would be discouraged once they know they are mathematically out of the playoffs.

The points I list above (there are many more) make me wonder how corporations in the end do succeed. For one thing, they often don't, and die a quiet death. Those that do survive do so because people are intelligent and somehow figure out how to make things work, even if the process was very painful.

The moving parts within the corporation, over which it has at least some control, are so numerous that corporations are indeed complex systems. Additionally, there are countless factors outside the corporation over which it has little or no control, which exponentially adds to the complexity. How things actually work, or at least appear to actually work, is beyond the scope of this blog. I'm attempting to make the case for the need to take the process I describe above, Performance Management, to the next level.

Performance Management is a framework for establishing at least some level of order in that process. The most widely recognized tool is currently the Dashboard, a set of relatively simple charts and graphs, accessible through collaborative software such as SharePoint, where information workers can monitor the state of affairs. The primary object on a Dashboard is the Scorecard, a list of key performance indicators. A Dashboard is tailored to each information worker, reflecting their KPIs and relevant charts.

F1 – Example of a Dashboard.

Another part of a typical dashboard is a still under-developed one called the "Strategy Map". A Strategy Map is a graphic of a network of initiatives pointing to what each initiative is intended to affect. For example: happy employees lead to higher product quality, which leads to satisfied customers, which leads to more purchases, which leads to higher profit. It's currently implemented as an almost static visual created in a tool such as Visio. Nodes on the Strategy Map may do things like match the color of the status of the KPIs, but other than that, it is just a picture. Like Dashboards, the Strategy Map is tailored to the role of each information worker.

F2 – Example of a Strategy Map. This is for a dental practice.

However, this strategy map is really the brain of the corporation whereas the Dashboard is simply the nerves indicating the state of being. It could be said that Map Rock is about bringing the Strategy Map to life. A brain is a much tougher thing to implement than a nervous system. This current decoupling of the Dashboard from the Strategy Map is dangerous since we will supposedly take actions based on what we see from the Dashboard. Just about every action has side-effects. Without a flexible connection to cause and effect, which as of now exists pretty much solely in our heads, at best, we jump in with the faith that those side-effects will be innocuous or we will cross that bridge when we get to it.

This “Business Problem Scenario” may be set in a corporate environment, but any endeavor is executed in a similar manner. The ultimate goal may be something other than financial profit though. This could be a non-profit devising strategies for implementing its cause or a government attempting to understand the effects of laws that it passes (yes, I do have a sense of humor – and understand I may have just flushed all my credibility down the toilet).

Coming up:

  • Part 3 – We delve into a high-level description of the Map Rock software application, where it fits in the current BI framework, and how it differentiates from existing and emerging technologies. This is really the meat of the series.
  • Part 4 – We explore strategy, complexity, competition, and the limitations of logic.
  • Part 5 – We close the Problem Statement with a discussion on imagination, which is how we overcome the limitations of logic, and how it is incorporated into Map Rock.
  • Map Rock Proof of Concept – This blog, following the Problem Statement series, will describe how to assess the need for Map Rock, readiness, a demo, and what a proof-of-concept could look like.

Notes:

  1. The term "art" has been used in a disparaging context. People snidely refer to an endeavor that is not just the repetition of a successful process as, "It is more art than science …". To me, art is a combination of imagination and a high degree of skill to manifest what is imagined. And imagination isn't something we outgrow after childhood. It's really what separates humans from the birds and reptiles. Art is not just fine art – which does require a great amount of skill and imagination. Everything that didn't exist at some time required imaginative and highly skilled people to navigate an unclear path.
  2. Rise of AI and Robots. This is probably the most relevant force to Map Rock. Many jobs seemingly outsourced to other countries or lost through mergers are never coming back. Why would anyone hire an army of people to dig a ditch when a backhoe can do the job faster, better, and more cheaply (by today's wage standards)? A friend of mine who worked in an auto assembly plant in Grand Rapids during the 1960s, but left to do other things, recently visited an auto assembly plant. He was shocked to see the seemingly endless line of people replaced by a line of robots and relatively few operators. People have been predicting this for decades and we aren't yet ruled by robots. But the trend is certain: every day, AI and robotics impinge further upon the work available to humans.
  3. I probably don't need to mention that "Black Swans" are a notion popularized by the book, The Black Swan. They are events that no one would have even known to attempt to predict, and they have a large impact on things. However, in a world as complex as ours, such events happen much more frequently than we'd like to think, at smaller, personal scales. I believe this underestimation of them is because we generally only think of the spectacular ones like 9/11 and the effects of Hurricane Katrina. However, a stroke or heart attack on our way to work, on what is otherwise a day like any other, is just as impactful to us individually and to those close to us. I recall my father in the hospital for his bone marrow transplant just before the first Gulf War turning away from the TV, saying he had his own battle to deal with.

Map Rock Problem Statement – Part 1 of 5

Preface

Map Rock addresses the need to manage competitive fitness in an increasingly complex world through superior development and management of versatile strategies. There, that is the 25-words-or-less, Twitter-able, sound-bite "Problem Statement" for Map Rock. I somewhat facetiously refer to this series of blogs as the "Map Rock Problem Statement" when it really is a "Problem Essay". So out of professional courtesy to everyone, before I begin, here is the Elevator Pitch:

The Problem Map Rock is trying to solve: Take Performance Management to the next level by "bringing the Strategy Map to life". Performance Management initiatives would benefit from Business Intelligence systems that focus on presenting relationships rather than primarily returning data, sums, and calculations via what are still just reports. Business Intelligence packages many data points into aggregated "information" (data to information), but eventually there are so many pieces of "information" that it again becomes data. Additionally, the data from which we can calculate relationships exists in a number of formats and in a great number of isolated sources. What is needed is a system that can integrate these sources, sort the information into a hierarchy, and maintain the validity of the information.


Current Solutions and Where Map Rock Fills Some Holes: Business Intelligence areas such as Data Warehouses, Performance Management, and Predictive Analytics have, as they stand today, added tremendous value to the decision-making capability of enterprises, but they haven't lived up to their full potential. The vision of a Centralized Data Warehouse is elusive due to factors such as the complexity of integrating semantics across dozens if not thousands of data sources. Performance Management falls short because it only tells us what is wrong with KPIs that we aren't even sure we should be measuring. Additionally, KPIs are disconnected, allowing workers ample room for gaming the system, which actually makes things worse. Predictive Analytics falls short in that its models make predictions based on historic patterns that are severely prone to skewing by one-off events. Simply removing a one-off event as an outlier could fail to detect what is really the birth of a new trend.

Map Rock's Added Value: New initiatives such as Self-Service BI, Master Data Management, Metadata Management, Semantic Webs, and of course Big Data are significant steps in the right direction. But at the same time, these initiatives can further complicate matters if they are not united. For example, Big Data in itself mostly adds more data points – and sometimes simply more isn't the answer. What is required is a way to integrate, from a higher level, the heterogeneous array of technologies attempting to help us make better decisions. Additionally, we need to smooth out the wide communication chasm between where BI leaves off (ex: the OLAP cubes and Predictive Analytics models) and the human brain.

Oh good, you’re still here.

If Map Rock sounds intriguing enough from the elevator pitch, I will be posting a marketing-oriented blog on February 8, 2013 on how to inquire about a demo. Please look out for it. Otherwise, I offer this essay on all that is behind Map Rock. It will take many more than 25 words or a one-minute speech to lay out the primary concepts underlying Map Rock, which I will do by discussing:

  • Embedding the concepts into the well-known Performance Management framework.
  • Building on top of the efforts of what is traditional BI and Predictive Analytics.
  • The strategy of building a “pidgin” to bridge human and machine intelligence versus a genuinely AI system.
  • The fundamental place of competitiveness, strategy, and imagination in a complex world.
  • Understanding the difference between complicated and complex systems for insight into why the results of current BI projects are still often only marginally helpful, or at worst, why we still make a lot of bad decisions.

This series of blogs is the "Why", the reasoning behind Map Rock, not the "How". This blog isn't intended to be the "marketing" blog. I look at this article more as Map Rock's "Federalist Papers", from which the more consumer-friendly and poignant United States Constitution is derived. Actually, for Map Rock there is a journal of about 500 pages (in Word) dating back to 2003 which I've condensed down to these approximately 20 pages. This theme of "why" is actually in itself very "Map Rock", as "why" is really a set of relationships, and relationships are what Map Rock is all about. We can be taught how to do something, but if we don't know why, we will be lost when (not if) the conditions for that "how" change.

The slogan driving the development of Map Rock is: the better we understand relationships, the more effective we can be at manipulating our surroundings. Humans have an enhanced ability to learn; that is, to assimilate and process relationships throughout our lives. When we understand why something happens, how things are related to each other, we can then engineer a solution to achieve a desired state in a system even if the starting points are different each time. A "solution" is a set of manipulations to pieces of a system. Over the last couple hundred thousand years, we've done very well in taking ourselves from a relatively weak, "jack of all trades" animal to the apex of the apex.

About ten years ago I read the great book, The Ingenuity Gap, by Thomas Homer-Dixon. In a nutshell, the thesis is that eventually the increasing complexity of the world will overtake humankind’s ability to engineer our way towards our dreams and out of the messes we individually and collectively get ourselves into. The world is becoming more complex by magnitudes, but the innate intellectual capacity of humans is rather constant, or at best improved incrementally through superior education techniques.  I thought then that the popularity of this book would open the door for my thoughts around what would eventually become SCL from a “solution looking for a problem” to a “solution to a recognized problem”. Ten years later, we’re somewhere in between, but I optimistically think leaning toward the latter side.

That means I still have a significant “solution looking for a problem” issue to overcome – which by the way isn’t necessarily a bad thing. The big obstacle I feel stems from society having grown too comfortable with the seductive simplicity of the sound-bite, non-competitive, tips and tricks, best practices, bullet point, PowerPoint, quick fix, instant gratification, elevator pitch, Tweet quips, risk averse, single-function, multi-tasking, lowest-common-denominator culture that we’ve made for ourselves.

Don't get me wrong. Believe me, I partake in and greatly appreciate all the ease and convenience the sound-bite culture provides. In fact, innovation in large part is about making the mundane of life as quick, effective, and painless as possible. But I feel the art of "American Ingenuity" (which can exist anywhere in the world where the conditions for innovation exist) and the appreciation for it are slipping through our fingers, and I don't believe they are easily re-learned or re-taken.

Innovation is about delayed gratification. It involves thinking deeply and widely, allowing for and learning from mistakes, and being allowed to be a little bit playful and crazy. It’s what differentiates humans from other creatures that do live solely by simple rules. When it comes to the chores of life, of which there are more imposed on us every day at home and work, I’d like them to be as simple and painless as possible. But when it comes to creating new things and competing, at the risk of sounding sadistic, we need to embrace the opposite. “Embracing” in this case means instead of rejecting complexity, we face it and tame it. Towards that goal, I think of my development of Map Rock over the past few years as having fought an epic battle with a grizzly bear that I’ve now tamed. Maybe we’re not yet BFFs, but at least we can have a working relationship, which is a start.

The complexity of life is growing at an accelerated rate at this time, for many reasons which I'll list later. Complexity means there is an unpredictable aspect to the outcomes of all the movement involved in a complex system. In the course of all this movement, things are naturally destroyed and new things are created. But we humans have attachments to things and a natural tendency to seek stability, valiantly resisting the relentless change.

No, the world hasn't come screeching to a halt due to the growing complexity of human activity. Life on Earth is still much too powerful to come to an end from our comparatively puny efforts. Life has endured all sorts of much bigger catastrophes over a few-billion-year span. Humans are innovative and resilient creatures. The question is, how can we mitigate the risks and capitalize on the constantly changing conditions? Maybe we think we are handling it just fine. But maybe there is a boiling frog problem. Maybe we just haven't yet reached a scalability tipping point where drastic change comes very abruptly. Any more clichés? Hahaha.

At the end of the day, my intent for Map Rock is to help answer these three powerful questions:

  • How could this have happened?
  • What could possibly happen?
  • How can I make this happen?

Coming up:

  • Part 2 –  I describe Map Rock’s target audience and the primary business scenario for Version 1. It is not just a tool for quants, wonks, and power-users.
  • Part 3 – We delve into a high-level description of the Map Rock software application, where it fits in the current BI framework, and how it differentiates from existing and emerging technologies. This is really the meat of the series.
  • Part 4 – We explore strategy, complexity, competition, and the limitations of logic.
  • Part 5 – We close the Problem Statement with a discussion on imagination, which is how we overcome the limitations of logic, and how it is incorporated into Map Rock.
  • Map Rock Proof of Concept – This blog, following the Problem Statement series, will describe how to assess the need for Map Rock, readiness, a demo, and what a proof-of-concept could look like.

Related Blogs

It may be beneficial to peruse the material I've posted over the years that is collectively the soul of Map Rock. In a sense, almost all of my posts have something to do with Map Rock, but these posts strike me as the most relevant at this point. Map Rock is the manifestation of all these concepts. However, I will write this "Problem Statement" with the assumption that none of the posts have been read.

Please keep in mind that these blogs were written over a few years (2005 through 2012) and may be a bit, or more than a bit, outdated at times, as things have moved on over the years and my thoughts on the subjects have evolved as well.

Find and Measure Relationships in Your OLAP Cubes The first two blogs listed here set the direction for my efforts leading to Map Rock. This one really represents the foundation of Map Rock: the ability to "cast a wide net" for correlations or even lack of correlations. The main idea is to look for relationship measures to begin with, as opposed to looking for aggregate measures as is normal when browsing an OLAP cube.

Bridging Predictive Analytics and Performance Management Performance Management usually centers around the Scorecard, a report on the Key Performance Indicators. It is just a report, the nerves reporting pain. But imagine if the pain in your nerves reported to a brain that had no awareness of pain from other parts of your body, no awareness of what is going on, no catalog of things it can do to alleviate the pain, etc.

Undermined Predictive Analytics This blog was meant to be a reminder that it's a jungle out there. There is a big difference between data mining people as they just go about their daily business and data mining them when there is actually an intelligence involved or when people know they are being watched. In business, a big problem with performance management is that workers are clever at gaming the system.

Cutting Edge BI is About Imperfect Information There is no "one number" answer, and practically all answers must be preceded by a series of "it depends" questions.

Why Does a Lt. General Outrank a Major General? This blog attempts to illustrate the role of strategy and tactics at different levels of jobs. But as companies trend towards the decentralization of responsibility, the delegation of coming up with the "how" to people at all levels, what emerges is the modern information worker, the commando. That commando, who is often a player/manager, must be strategically, tactically, and operationally proficient.

Data to Information to Data to Information One of the main notions of Map Rock is that it’s the relationship between data that provides the really juicy, meaty insights. More data, as in Big Data, isn’t in itself the answer. A focus on Big Data still sidesteps tackling the challenges of embracing complexity.

Why Isn't Predictive Analytics a Big Thing? At the time of the writing of this blog, Predictive Analytics was still frustratingly fringe. Since 2009, it is perhaps still not a household word but almost an "officehold" word. I positioned Predictive Analytics then similarly to how I'm positioning Map Rock: as a bridge across the chasm between where most BI implementations leave off and the human brain.

Predictive Analytics is Science for the Masses The first feedback I usually get on Map Rock is that it is a quant's tool. On the contrary, it is a tool intended to make non-quants a little bit "quantier". It amazes me how people casually tell me they have "non-thinker" roles, as if thinking is reserved for scientists and quants. Who has never strategized about something? Who as a kid hasn't schemed about something like getting a Red Ryder BB gun for Christmas? I've encountered so many people who say they know nothing about data mining yet provide fantastically profound arguments for why Barry Bonds may or may not be better than Babe Ruth.

Where Do Rules Come From? A major factor in the evolution of SCL was triggered by a comment made by a friend of mine way back when I first began developing it. He said that the really hard part was encoding the rules. He is absolutely correct. That led me down the path of finding sources of rules that already exist or are produced as naturally as possible (ex: clickstream analysis gleaning insight from something people already do) and integrating those rules.

Exponentially Growing Complexity. There are many powerful trends adding to the complexity of our lives. It’s important to recognize them.

Things Quickly Become Complex. A short true story of how complexity slapped me in the face.


Map Rock is Almost There!

Happy New Year!

I’m very happy to say that the development of Map Rock V1.0 is just about there. In fact, over the past month, my effort has shifted from primarily development of the actual product to primarily authoring rollout material.

Map Rock is the encapsulation of how I've always envisioned BI. In a nutshell, there is a big chasm between where BI ends and our human brains begin. Bridging that chasm is quite a challenge that I've tackled over the past few years. In some ways, it may have been easier to fight and then tame a grizzly bear.

Beginning in a week or so, I’ll start releasing a series of blogs stepping through Map Rock’s purpose, value, its differentiation from emerging technologies, and most importantly, how enterprises can engage that power. For now, here is a short blog on one of the slides in the Map Rock presentation that seems to resonate: Business Problem Silos

Please also see how Map Rock Got Its Name.


Securing a Dimension with Members Having the Same Name

Here's an SSAS security issue that doesn't come up very much at all. In fact, when this recently came up, I had forgotten about a solution I provided way back in 2005 (which was the only other time I've encountered this). The problem is that when securing an attribute member (in a role's Dimension Data tab) by the member's name rather than its key (using the "Allowed Member Set" text box of the "Advanced" tab), and there is more than one member with that same name, only the first one will be secured.

For example, if I have two customers named "John Smith" (but with different key IDs) and I place the MDX [Customers].[Customer].[John Smith] in the "Allowed Member Set" text box, only the first John Smith will appear. This is consistent with what happens in a SELECT statement when referring to members by name: if there is more than one, only the first will show. Please note that this is different from what will happen if your axis SET references the members' key IDs (which means using functions such as Members, Children, etc. – something like [Customers].[Customer].Members). In that case, "John Smith" will show up twice.
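To illustrate, here is a minimal, hypothetical sketch of the two behaviors. The cube and measure names ([Sales], [Sales Amount]) and the keys (101 and 205) are placeholders, not from a real database:

-- By name: only the first John Smith is returned.
SELECT { [Measures].[Sales Amount] } ON COLUMNS,
       { [Customers].[Customer].[John Smith] } ON ROWS
FROM [Sales];

-- By key: both John Smiths appear.
SELECT { [Measures].[Sales Amount] } ON COLUMNS,
       { [Customers].[Customer].&[101], [Customers].[Customer].&[205] } ON ROWS
FROM [Sales];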

Before continuing to the solution, I should explain why the names instead of the IDs were referenced (which also explains why this is such a rare occurrence). The underlying data source of the dimensions did not maintain static dimension keys. The underlying data source (data warehouse or data mart) was completely repopulated whenever it refreshed. This means that any MDX referring to those members by key (ex: [Customers].[Customer].&[1]) is no longer valid; or worse, that key now refers to another member to which security should not be applied. Therefore, keys must be manually changed wherever they are referenced (security, calculations, KPIs, etc.).

Additionally, this means that the SSAS dimensions must be fully processed, as opposed to incrementally processed with ProcessUpdate, since SSAS can no longer map those dimension keys to the internal keys it creates when the dimensions are processed.

It’s these terrible side-effects of not maintaining consistent dimension keys combined with the relative infrequency of two members having the same name (especially companies, products, etc) that make this situation I describe so rare.

The solution is to refer to the member as the result of a FILTER statement. For our John Smith example, instead of simply composing the MDX, [Customers].[Customer].[John Smith], we would compose something like:

FILTER([Customers].[Customer].Members, INSTR([Customers].[Customer].CurrentMember.Name, "John Smith") <> 0)

This will return a set consisting of the two John Smiths. If we wanted to secure John Smith and Eugene Asahara, even though there is unlikely to be more than one Eugene Asahara and we could just state [Customers].[Customer].[Eugene Asahara], just to be safe we could write:

FILTER([Customers].[Customer].Members, INSTR([Customers].[Customer].CurrentMember.Name, "John Smith") <> 0)
+
FILTER([Customers].[Customer].Members, INSTR([Customers].[Customer].CurrentMember.Name, "Eugene Asahara") <> 0)
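As a side note – this is just a sketch of an equivalent form, not part of the original solution – the same set can be expressed with a single FILTER by combining the two name tests with OR:

FILTER(
    [Customers].[Customer].Members,
    INSTR([Customers].[Customer].CurrentMember.Name, "John Smith") <> 0
    OR INSTR([Customers].[Customer].CurrentMember.Name, "Eugene Asahara") <> 0
)

Either way, the set resolves by name at query time, so it keeps working even when the underlying keys change on each refresh.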

I’d like to note as well that another reason I suspect I do not see some situations very often is not just because they don’t happen often for one reason or another, but because customers may give up on the technology when they run into a wall. That’s a terrible shame since there is hardly ever a perfect product and often a solution is just one little insight away.


Thoughts Around “Aggregate Fact Tables”

I've encountered many situations where some set of complications in a cube model was eased by using, as the fact table, an aggregation table derived from some base fact table. The complications leading to that choice usually include expensive on-the-fly calculations, distinct count measures, or even many to many relationships.

The main idea is that we can avoid the query-time calculations by performing them once, capturing the results in a database table, and using that table as the fact table. For example, a calculated measure could be something like a KPI status calculation which compares an employee against the average of the group. If we were to view a large number of employees, calculating those results could take several seconds under the best of conditions, and very many seconds if the SSAS server is under high utilization. Pre-calculating those KPIs for every conceivable way they could be sliced and diced would yield very quick responses.
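To make that concrete, here is a minimal MDX script sketch of the sort of query-time calculation I have in mind. All object names ([Sales Amount], [Employee], [KPI Status]) are placeholders, not from a real cube; it's exactly this kind of per-cell set scan that the pre-aggregated fact table would replace with a simple lookup:

// Compare each employee to the average across all employees.
CREATE MEMBER CURRENTCUBE.[Measures].[Sales vs Group Avg] AS
    [Measures].[Sales Amount]
    - Avg( [Employee].[Employee].[Employee].Members, [Measures].[Sales Amount] ),
    VISIBLE = 1;

// A simple status: 1 = at or above the group average, -1 = below it.
CREATE MEMBER CURRENTCUBE.[Measures].[KPI Status] AS
    IIF( [Measures].[Sales vs Group Avg] >= 0, 1, -1 ),
    VISIBLE = 1;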

Another good example of a situation for an aggregate table is where there is a many to many relationship associating dimensions with large numbers of members. This means that the intermediate fact table (the association table) could be very large. Under these conditions performance can be very sluggish. Applying the Many to Many Compression Algorithm can result in outstanding improvements (by a magnitude in some cases). However, a pre-aggregated fact table can usually still outperform it. Additionally, there are times that the algorithm cannot be effectively applied (see note #1 below).

There is one really big caveat to using this technique, along with some lesser drawbacks. The big caveat is data explosion. I've seen base fact tables of a few hundred million rows balloon into aggregation tables of tens of billions or even hundreds of billions of rows. At tens of billions of facts, cube processing can take an exceptionally long time; at hundreds of billions, sometimes days.

Generally, these calculations are not additive measures, which means there needs to be one row for each combination of attributes for which we may want to view data. Essentially, we take over from SSAS the burden of maintaining aggregations and all the complexities around them. It won't take many iterations of maintaining these tables to appreciate what SSAS does for us in regard to aggregations.

In fact, the main point of an OLAP engine is to encapsulate all the ugliness involved with maintaining and optimizing the usage of sets of aggregate tables (one for each granularity) so that analytic (SQL) queries can be tremendously sped up. Users opting for this old-school, home-brewed approach to OLAP will soon figure out (to paraphrase the great book, "If You Give a Mouse a Cookie"):

If you create an aggregation table on a fixed set of attributes, you may need more for other unique sets of attributes. Eventually you will have so many that you must set up a mechanism to determine which are most important. Then you will need to have a mechanism to create them on the fly if you happen to need one for which there is no table. You will then realize there are many optimizations you can implement such as bitmaps quickly determining what tuples do not exist, pre-caching more data even if you didn’t explicitly ask for it, etc, etc.

I could go on and on, but you get my point. An OLAP engine is the encapsulation of a ton of optimization and maintenance techniques and algorithms around the bread-and-butter analytical patterns (slicing and dicing data in countless ways). That's a ton of work and refinement, which will continue to progress. (See my counter-point in note #2 below.) There is a whole lot more going on in there than 90% of SSAS practitioners will ever need to know. SSAS is an ingenious engine, and the best developers I've ever known (and I know very many) are those who designed and developed it.

Usually, the customer can live with the consequences of an aggregation table of ten or twenty billion rows. But I always ask whether it's likely other attributes will be added, which would exponentially explode the number of rows. It's important to remember that BI customers usually think of new things to analyze by once the juices get flowing with whatever is currently in the cube. That's a very good sign your efforts are paying off for the customer. BI is a learning process; we usually figure out things we didn't think about as we perform analysis. So this usually ends up not being a scalable solution.

When deadlines and budget overruns are looming, factors such as scalability (people often say in very sarcastic tones, “those ‘abilities’”) can easily take a back seat. It’s easy to convince yourself that another dimension will never be added or there will never be a request to go from three to ten years of history. Future events, no matter how probable and significant, just cannot compete with “clear and present” issues.

Other drawbacks include:

  • Maintenance complexity – Each granularity by which a query is issued would need a SCOPE assignment to fetch the proper tuple (row) and a new SQL INSERT statement to populate that new granularity (see the sketch after this list).
  • Limited ability to leverage “block computing” – Every query will be fetched cell by cell.
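For the first point, here is a minimal sketch of what one such SCOPE assignment might look like, assuming a hypothetical pre-aggregated measure group that exposes a [Precalculated KPI Status] measure (all names are placeholders). Every additional granularity would need its own variant of this, plus the SQL to populate the corresponding rows:

// Redirect this one granularity to the pre-aggregated fact table's measure.
SCOPE( [Measures].[KPI Status],
       [Employee].[Employee].[Employee].Members,
       [Date].[Calendar].[Month].Members );
    THIS = [Measures].[Precalculated KPI Status];
END SCOPE;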

That really isn’t so bad considering the query performance increase. Again, the big issue is the data explosion potential of the aggregated fact table.

As with everything else in SSAS, it's all an "it depends" question and a complex web of competing factors to balance – a "web of trade-offs". In general, I try to stick with the "native" SSAS features unless I'm pushed outside of them, since they are optimized and could be further optimized in future releases (with little or no coding changes). In this case, what to do is more a matter of "Which risk are you more willing to live with?" than "Which is the better choice?"

Remember, the devil is in the details. No matter how much easier the coding for one solution may be, or how unlikely some catastrophic event may seem, making the wrong inference can come down to just one little bit of missing information. SSAS (and many other complex servers) is rife with "we don't know what we don't know" situations. It takes a lot of experience to draw a conclusion with a decent chance of not being ill-informed. See my note #3.

Sometimes we are forced into a corner and there is no choice. If we’re faced with a many to many situation where the compression algorithm doesn’t help, we have no choice but to use the aggregated fact table. If we’ve optimized our calculations and configurations as much as possible and it’s still too slow, we have no choice but to reduce scope, or if we’re lucky, simply scale-up or out the hardware.

Of course, it is worth exploring if the aggregation table’s measure group can be partitioned such that it can be processed incrementally. In fact, chances are fairly good that data will not change prior to some date (actually, that’s the case for any measure group). You can get kind of fancy as well. For example, I once worked through a solution that partitioned the measure group by products that were most likely to be affected by a price change (using a time series data mining algorithm), which would affect its forecasted value. It worked pretty well, but not great as about 60% of the partitions had to be processed each month.

I think a really neat feature for SSAS would be the ability to persist “warmed cache”. Or even better, to define a cache structure for which values calculated during query time could be added and persisted. This really means we can persist Formula Engine cache and load it directly into the Formula Engine (upon startup or as cache is evicted, then reloaded) similarly to how measure group aggregations are configured and loaded into the Storage Engine; the aggregation equivalent for the Formula Engine.

Semantically, that cache is very similar to those aggregate tables, so it really doesn't save memory usage or disk storage space for exploded tables (as compression would). But the ability to persist that cache would at least benefit us by avoiding the maintenance chores of modifying the tables and SCOPE statements I mentioned above, letting us just use "native" SSAS features.

Notes:

#1 The many to many compression algorithm will not be effective in cases where there is almost a one to one relationship between both sides. A great example is an insurance policy where there is a policy holder related to dependents. This generally means a parent as policy holder and the spouse and children as dependents. It is possible that someone may belong to more than one policy. Therefore, there is a many to many relationship: a policy can have many people and people can belong to many policies.

In these cases, there will hardly be any compression since each family is unique and usually will have their own policy – roughly a one to one match.

#2 It doesn't take much SSAS experience to figure out that OLAP doesn't handle complex calculations as well as it handles simple aggregations such as SUM, MAX, MIN, COUNT, etc. Even the native DISTINCT COUNT has been a pain since it first appeared. This can mean degrading from sub-second to a few seconds for a small cellset. Of course, that "few seconds" can also be minutes or many minutes if the calculations are bad enough (ex: they go down to leaf levels of big dimensions).

This could be a challenge for OLAP as the term "aggregation" in the OLAP world, driven by the likes of complex event processing, begins to take on a wider meaning encompassing complex scores of many types: predictions, weightings, and such. For example, rather than store a record for every heartbeat over a period of a minute, we simply store a salient fact such as "erratic" or "normal". We reduce 60-80 rows into a few "commentaries". Imagine how slow our OLAP query would be if we needed to calculate that at query time.

If that happens, these “aggregate tables” will win out more often, further relegating OLAP’s relevance into the background.

#3 I'm very careful about quickly dismissing the concerns of my colleagues or customers. The terminology in BI is so ambiguous that you can really think you understand a comment, when it turns out that two similar-sounding terms with different meanings can each result in a sensible statement. "Drill down" and "drill through" are a good example. "We need the ability to drill down to finer details" is, from an implementation standpoint (maybe not so much a semantic standpoint), very different from "We need the ability to drill through to finer details". One is a matter of setting up the proper attributes; the other could require writing reports or ASP.NET pages and impact another server.


Deactivated my Facebook Account

I deactivated my Fascbook account. I promised myself that I would do so if they ever censored one of my posts, as had been experienced by others. It appears that they did so today. In fairness, I don’t think we will ever know for sure what happened to my post. All I know is:

  1. I posted a commentary on an article that was recommended on my LinkedIn page.
  2. It disappeared a few minutes later.
  3. I searched for it; exited IE, then re-entered. Still gone. Did I inadvertently delete it? Possibly, but I can’t see how. Is there a bug with FB? As a software developer, I know that’s always a possibility, but why didn’t this happen with the other posts?
  4. But … there are numerous examples of this censorship online and among friends. Additionally, someone may have reported my post as “abusive” or something. If that was the case, that person, if they are a “friend”, should have brought it up to me directly.

It probably was that I referred to the MSFT founders as "nerds", but not in any form more harmful than the way we IT types, me very much included, call ourselves nerds. Good god, I thought "The Big Bang Theory" made it cool to be a nerd. My comment was something like:

 “.. a bunch of nerds, despite their great success, still feeling inferior to IBMers …”

My Fascbook post was in regard to a quote in a Vanity Fair article:

“They used to point their finger at IBM and laugh,” said Bill Hill, a former Microsoft manager. “Now they’ve become the thing they despised.”

Unfortunately, I don't have any record of the full post since I never thought I would need to refer to it elsewhere without access to it. It was actually meant to be sympathetic towards the MSFT founders: that they are every bit as good as the IBMers and never had a reason to feel inferior. I had the feeling while I worked at Microsoft that many people harbored inferiority complexes in relation to folks from companies such as IBM (better at delivery and execution), Oracle (better DBMS), and Apple (cooler products). I personally always thought the obsession with being like Oracle, Google, IBM, Apple, etc. was silly, and I was incredibly proud to work there. My feeling was that Microsoft had a ton going for it that the others were envious of, but it chose instead to strive to be like "X" instead of Microsoft.

To Facebook: If this is just a misunderstanding, let me know. I can’t seem to find a way to get a human response.

What purpose does this experience of mine serve on a blog site about Business Intelligence? A whole lot. Sites like Fascbook rely on algorithms and feedback from customers to flag posts they may find objectionable. This is a form of "predictive analytics" – predicting "offensive" posts. Then, I'm sure, that filtered set of posts is submitted either to a deeper analytics engine and/or a human to determine an action. Predictive analytics has to be more than a numbers game. Even one bad false positive (or false negative) can make a big impact. This sort of thing is precisely what I don't want the predictive analytics solutions I develop to toss onto innocent people already too busy with what is on their plates.

This thoughtless (literally) implementation of predictive analytics is extremely bad for our business. We will all fall victim some time to this attitude of following 80/20 rules – ex: if we can automatically (or with minimal human intervention) remove 99% of the offensive posts and only 1% of the legitimate ones are removed, that's great. Most people don't like the notion of machines "thinking" for us in the first place.

In any case, I can’t see how my post would have violated their “Code of Conduct”, which I 100% agree with as it stands today, BTW.

Although I do make posts on Facebook that express my opinions, which aren’t always sugary and Pollyanna, I would never post anything inflammatory or gratuitously mean-spirited there, especially since my FB friends are family as well as colleagues. Yes, I, like other entrepreneurs, have opinions. I consider that a good trait and what drives me to put my money where my mouth is and build products like Map Rock that hopefully fill voids and make life better. I’ve seen Facebook’s “code of conduct”, and my post doesn’t come close to violating it by any stretch. Some things I post because they are fun. Some are related to issues that I’m interested in that are sometimes not exactly fun and games. So I see this as censorship well beyond what is reasonable.

Besides, even if my post were an over-the-top, mean-spirited one, do the Microsoft founders need the Facebook Police to protect them against me? I believe there are times when some things should be censored, but those are the very rare occasions when someone’s safety could be at risk. My FB post certainly doesn’t qualify in the slightest.

This experience is very troubling and a serious issue. At least in the real world, you get a trial for an alleged “crime”. But I’ve had no explanation from Facebook. Apparently, it’s very difficult to get any sort of human response from them.

From the point of view of a citizen, it doesn’t feel like we’re headed in the right direction if your opinions are so easily censored. Surely we aren’t that fragile. From a BI or predictive analytics point of view, it makes me stop and think more about the danger of false positives.

None of us are going to agree on everything. Yes, there is a line we shouldn’t cross and I believe most of us know what that looks like. I personally don’t like the idea of having my values and opinions, and equally the values and opinions of others, dictated by anyone at Facebook or anywhere else. If people have disagreements, they can work it out amongst themselves. They don’t need the help of a bunch of strangers who like to play God.

What kind of arrogance does it take for a company to feel they can play judge and jury like that? I don’t want any part of that Facebook world. Who made them the Police of the World? Well, we did by immigrating to their virtual “country”. At least for now, they are only the Facebook Police.

It’s important to understand that there is a difference between me deleting a comment from my blog site and Facebook removing a post I write on my wall, especially since it is only seen by those in my group. A comment on my blog is between the commenter and me. Things on my Facebook wall are between my 59 friends and me, even though it is on “Facebook soil”. All 59 of my Facebook friends can de-friend me or express their disapproval themselves.

I imagine that WordPress (hosts of this blog site) would remove blogs that are truly inappropriate and viewable by the public. However, as a reminder, my post on Facebook was not of that nature. It was commentary, it was tongue-in-cheek, there was no profanity, it wasn’t targeted at puppies or baby seals, it made no threats, there was no slander, and as I said, it was meant to portray support for the MSFT founders who always seemed to have this inferiority complex in relation to IBM.

With that said, I’m deporting myself back to the world of relating to my friends the old way. Facebook re-connected me to many long-lost people. Some were blessings; a few were perhaps relationships that should have died a natural death – we all change over time and no longer have that unique chemistry that made us friends long ago. More so, though, I do believe some of my stronger friendships were drastically degraded into a lowest common denominator of shallowness because of Facebook. After all, why call or visit when we already know what’s up from seeing updates just about every day?

An over-reliance on Facebook to relate to people is part of the McGoogle mentality, an erosion of any sense of delayed gratification, and it troubles me deeply. The lowest-common-denominator friendships via Facebook, the shallow answers found through Google, and the mass-produced food of McDonalds all promote a mentality of instant gratification.

It’s not that I don’t partake: I LOVE McDonalds and I use Google at least 20 times a day. Sometimes I just need a lunch that I know I can obtain quickly and will be very tasty. Sometimes I just need an answer now. But I also believe that delayed gratification is the super-skill that is the foundation of peace of mind and of performing “miraculous” feats. Every day I take time to nurture that skill while living in a world that operates more and more on instant gratification.

This incident actually happened three days ago. The day after, I felt a void without Facebook. I automatically wanted to check whether I had notifications or messages or whether anyone had posted something good. But interestingly, by yesterday Facebook was already out of my daily routine. Email quickly filled the void that Facebook had taken from it five years ago. Life is complex enough in the real world, with what John Stossel refers to as the “rule orgy”, without adding the rules of the Teletubby world Facebook seems to want to forge.

Laurie mentioned to me that I was one of the few people left in our Facebook circles who still posted on a regular basis, and posted things beyond clever retorts to other posts, “… I had burgers for lunch …”, or, worst of all, game updates. I think Facebook is fading and will probably be replaced by something closer to home from the “infrastructure” companies.

One last thing, and really, this is the important point about censorship. Think about what would happen if Facebook cleansed everything about hating dogs and none of us ever saw anything about dog-hating on Facebook. Wouldn’t you wrongly conclude that there is no problem with dog-hating?

These days the term “censorship” is tossed around as though it were a benevolent act that protects the public from horrors such as nudity, violence, and profanity. In communist and fascist countries, however, censorship is a tactic to manipulate the general public. I’m not calling Facebook fascists, but ex parte judgement of what is appropriate is censorship. Remember that no matter what your politics may be.

So to all my Facebook friends, catch you on an old-fashioned lunch, phone call, email or … yes, the not-so-old-fashioned LinkedIn, where we will have richer conversations.


Problems with MDX’s STDEV Function

I spent quite a bit of time this morning wrestling with the quirkiness of MDX’s STDEV function (SQL Server Analysis Services 2008 R2). For the most part it works well, but there are a few things to know that will save you a ton of debugging time:

  • The STDEV function doesn’t like NULL values. NULL values aren’t converted to a zero.
  • If STDEV is being calculated across only one tuple, it will return -1.#ND. I had hoped it would return a 0. So I need to perform a check to ensure the set has more than one row.
  • Worst of all, it just doesn’t seem to like some numbers.

Try out this MDX on AdventureWorks (for SQL Server 2008 R2):

WITH
MEMBER StDevTest AS
    STDEV([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota])
SELECT
    StDevTest ON 0,
    [Date].[Calendar].[Calendar Quarter].Members ON 1
FROM [Adventure Works]

You’ll see that the value for [Q1 CY 2002] is correctly zero, whereas the value for [Q2 CY 2002] is -1.#ND. Both quarters consist of three children (months) with the same value (1583333.33 and 1689333.33, respectively), so the STDEV should be zero for both quarters. But SSAS doesn’t seem to like 1689333.33. I tried an STDEV in SQL on that value and it returned zero. I also created a very stripped-down cube using those values, but this time I received something like 2.xxxE-2 (a tiny value rather than an exact zero).

The reason I say the third point is the worst is that I can’t figure out how to get around it. The first two points have workarounds, although they are tedious: remove the null tuples and test for a set with only one tuple before applying the STDEV. There doesn’t seem to be a way to test for NaN values like C#’s double.IsNaN function – which, BTW, means an SSAS “stored procedure” would be an answer for this one.
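For what it’s worth, here is a minimal sketch of those first two workarounds applied to the AdventureWorks query above. The StDevSafe member name is just my illustration; the IIF guard skips sets with one or zero non-empty tuples and the inner NONEMPTY strips the null tuples, but none of this helps with the third quirk ([Q2 CY 2002] still comes back as -1.#ND):

WITH
MEMBER StDevSafe AS
    -- Workaround #2: only attempt STDEV when there is more than one non-empty child
    IIF (NONEMPTY([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]).Count > 1,
         -- Workaround #1: strip the null tuples before handing the set to STDEV
         STDEV(NONEMPTY([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]),
               [Measures].[Sales Amount Quota]),
         0)
SELECT
    StDevSafe ON 0,
    [Date].[Calendar].[Calendar Quarter].Members ON 1
FROM [Adventure Works]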

For my situation, this wasn’t just a matter of an ugly -1.#ND showing up in a cell. I was attempting to determine the granularity of a measure by testing at which level the values of the children stop being mere copies of one another. For example, if all of the months under each quarter carry the same value, it’s possible that the values are not really stored at the month level. The STDEV for the months of a quarter should be 0 if all of the months have the same value. Run this MDX, which does just that:

WITH
MEMBER StDevTest AS
    SUM([Date].[Calendar].[Calendar Quarter].Members,
        STDEV([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]))
SELECT
    StDevTest ON 0
FROM [Adventure Works]

You’ll see one value of -1.#ND. If you remove [Q2 CY 2002], a valid sum displays:

WITH
MEMBER StDevTest AS
    SUM([Date].[Calendar].[Calendar Quarter].Members - [Date].[Calendar].[Calendar Quarter].[Q2 CY 2002],
        STDEV([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]))
SELECT
    StDevTest ON 0
FROM [Adventure Works]

The actual MDX I used to find the granularity was much more complex and tougher to debug:

WITH
MEMBER [MEASURES].[Granularity Level] AS
    ([MEASURES].[Granularity Level Recursive],
     [Date].[Calendar].Levels([Date].[Calendar].Levels.Count-1).Members.Item(0))
MEMBER [MEASURES].[Granularity Level Recursive] AS
    IIF ([Date].[Calendar].CurrentMember.Level.Ordinal = 0,
         [Date].[Calendar].CurrentMember.Level.Name,
         IIF (SUM([Date].[Calendar].CurrentMember.Parent.Level.Members,
                  IIF (NONEMPTY([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]).Count > 1,
                       STDEV([Date].[Calendar].CurrentMember.Children, [Measures].[Sales Amount Quota]),
                       0)) = 0,
              ([MEASURES].[Granularity Level Recursive], [Date].[Calendar].CurrentMember.Parent),
              [Date].[Calendar].CurrentMember.Level.Name))
SELECT
    {[MEASURES].[Granularity Level]} ON 0
FROM [Adventure Works]

Actually, this method doesn’t work in AdventureWorks because although the [Sales Amount Quota] measure is quarterly, it is SCOPEd to allocate monthly values. It would work on the particular cube I’m working on, since values at different granularities are keyed to a day key (ex: values for Q1/2001 are keyed to 1/1/2001).

Why I was doing this and why I needed to resort to this method is well beyond the scope of this blog, but the result is that the -1.#ND value for [Q2 CY 2002] screwed up my entire calculation. I suppose I should be happy that the day has finally come when I need the STDEV function and engage it deeply enough to run into such quirks.

Note: I apologize for the bad formatting of the MDX. WordPress does a lot of weird things with the formatting cut directly out of SSMS, so it’s easier to strip out the formatting to plain text. Here is a cleaner version of that granularity MDX with some embedded comments.
