A Bit of a Scare (Hopefully a False Alarm) Last Thursday Over the SSAS Roadmap

Last Thursday (November 11, 2010), a couple of colleagues sent me a link to Chris Webb’s recent blog post on SSAS roadmap information just revealed at PASS. In a nutshell, there could be a shift of focus from MOLAP cubes to the BI Semantic Model (BISM). I won’t pretend to know any more about it than that, since unfortunately I wasn’t able to attend PASS this year and I’ve been more focused on predictive analytics than OLAP lately.
 
I was at first concerned, not because I thought it was a bad idea, but because I thought it would disrupt SSAS projects until things are ironed out. That may be a selfish thought, and it may be a matter of what is best for the customer, but "one step back to take two steps forward" strategies often end up giving competitors the window they are waiting for. Just as every candidate Alpha is waiting for any sign of weakness in the incumbent Alpha, any disruption to the rhythm of SSAS opens the door for all the other BI products lying in wait. Those other BI products could include BISM as well.
 
But after Amir Netz’s comment on Chris’ blog, I’m satisfied that SSAS MOLAP isn’t going away and that the SSAS dev team still very much knows what it’s doing as far as strategy is concerned. I’ve always thought the SSAS dev team is the best at Microsoft (the smartest and most passionate). BISM fills one of the holes in the MSFT BI stack: the self-service BI part.
 
Chris puts it really well: "BISM models are the UDM 2.0." The "UDM 1.0" (the Unified Dimensional Model term used circa SQL Server 2005) was intended to have all data throughout an enterprise transformed into a cube and accessed from that central cube. But an OLAP cube isn’t robust enough to be the lowest common denominator of all data. It’s merely a way to process and store data for a certain kind of access.
 
On a related but different train of thought: as I pondered all this yesterday, I realized that about a year ago I started referring to SSAS performance tuning as a "web of trade-offs, a zero-sum game". I’ve seen the limits of SSAS OLAP over the past year, seen the edges of its universe where the laws of best practices no longer apply. The only "free lunch" (or at least cheap lunch) in relatively recent memory is the many-to-many compression algorithm (developed by Erik Veerman and Dan Hardan).
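As I understand that technique (and this is my reading of the idea, not the authors’ actual implementation), the trick is that in a many-to-many design, many dimension members share exactly the same set of related members in the bridge table, so those identical sets can be collapsed into a single compressed "matrix" key, shrinking the intermediate measure group. Here is a toy Python sketch of just that grouping step; the names and data are made up for illustration:

```python
from collections import defaultdict

def compress_bridge(bridge_rows):
    """Collapse bridge-table keys that share an identical set of
    related members into one compressed 'matrix' key.

    bridge_rows: iterable of (original_key, related_member) pairs.
    Returns (key_to_matrix, matrix_to_members).
    """
    # Gather the full set of related members per original key.
    members_by_key = defaultdict(set)
    for key, member in bridge_rows:
        members_by_key[key].add(member)

    # Keys with identical member sets share one matrix key.
    signature_to_matrix = {}
    key_to_matrix = {}
    matrix_to_members = {}
    for key, members in members_by_key.items():
        signature = frozenset(members)
        if signature not in signature_to_matrix:
            matrix_id = len(signature_to_matrix)  # assign a new compressed key
            signature_to_matrix[signature] = matrix_id
            matrix_to_members[matrix_id] = signature
        key_to_matrix[key] = signature_to_matrix[signature]
    return key_to_matrix, matrix_to_members

# Keys 1 and 2 relate to the same members, so they compress to one
# matrix key; the deduplicated bridge shrinks from 5 rows to 3.
bridge = [(1, "A"), (1, "B"), (2, "A"), (2, "B"), (3, "C")]
key_to_matrix, matrix_to_members = compress_bridge(bridge)
print(key_to_matrix)      # {1: 0, 2: 0, 3: 1}
print(matrix_to_members)  # {0: frozenset({'A', 'B'}), 1: frozenset({'C'})}
```

The win is roughly proportional to how much duplication exists among the relationship sets, which is why it feels like a rare cheap lunch: it costs an extra layer of indirection but takes nothing away from any other query pattern.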
 
After 12 years of SSAS’s existence, "free lunch" optimization techniques for distinct count measures, parent-child relationships, many-to-many relationships, real-time data, and complex calculations are at a standstill; in fact, they’ve been at a standstill for a few years. When a community of really bright people starts going in circles trying to address one specific performance issue without creating another, the technology has reached its limits. But this doesn’t mean it’s obsolete. Quantum mechanics didn’t make Newtonian physics obsolete. Amir’s example is that C# didn’t make C++ obsolete.
 
Another thing I began saying a couple of years ago is that "SSAS 2005/2008 is on its best behavior when it looks most like SSAS 7.0/2000". That is, all hierarchies are strong and all measures are additive. This is the core concept of OLAP, and everything else is an attempt to expand that core concept. It works very well for query use cases that happen to be very widely used, but not for all types of queries. Distinct count measures are better served by columnar databases, long detail reports by relational databases, aggregated sums by OLAP, and so on. With all the different sorts of data out there, there are ways to address each most efficiently, and that means storing data redundantly in an array of different patterns. The brain inputs data redundantly and stores data redundantly.
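To make that concrete, here is a toy Python sketch (illustrative only, with made-up data) of the same fact rows stored redundantly in three patterns, each one cheap for a different query shape:

```python
from collections import defaultdict

facts = [  # (date, customer, amount)
    ("2010-11-01", "A", 10.0),
    ("2010-11-01", "B", 5.0),
    ("2010-11-02", "A", 7.5),
]

# 1. Row store: best for long detail reports (scan whole rows in order).
row_store = list(facts)

# 2. Column store: best for distinct counts (scan one narrow column).
column_store = {
    "date": [f[0] for f in facts],
    "customer": [f[1] for f in facts],
    "amount": [f[2] for f in facts],
}
distinct_customers = len(set(column_store["customer"]))  # -> 2

# 3. Pre-aggregated cube cells: best for additive sums by dimension.
cube = defaultdict(float)
for date, customer, amount in facts:
    cube[(date,)] += amount            # aggregate by date
    cube[(date, customer)] += amount   # leaf-level cell

print(row_store[0])           # one detail row for a report
print(distinct_customers)     # a distinct count from the column store
print(cube[("2010-11-01",)])  # 15.0, an aggregated sum
```

None of these layouts is "the" right one; each pays a storage and processing cost to make one query shape cheap, which is exactly the web of trade-offs I mean.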
 
Even the brain isn’t the ultimate in data processing, "the grand unified theory of data," as my wife teases. Computers do some things better than brains and vice versa. Yes, someday, in a future too distant to worry about, an artificial intelligence will make the human brain obsolete. But that artificial intelligence will probably include aspects of the brain’s structure as well as other data structures (hopefully mostly SCL … hahaha). Right now I refer to Predictive Analytics as the integration of human and machine intelligence, not the replacement of human intelligence with machine intelligence.
 
I used to think of the five senses as different data sources, with the brain as the integrator. But now I see that they are not only separate data sources that, when integrated, give an all-encompassing view; they are data readers specializing in different structures for the purpose of looking at a problem from multiple, redundant angles. For example, I suppose our eyes could "hear" if they were sensitive enough to detect vibrations (like those laser listeners that "hear" voices by picking up their vibrations off glass using light). However, if our eyes were so finely tuned that they could see and process an object vibrating, that would interfere with what our eyes normally see; picking up vibrations from the air with a separate mechanism is better.
 

