I created a monster!

2.0

With the 2.0 version I redesigned everything; the Ticket tracking is now a robust and complex workhorse. With it anyone (even senior managers – unfortunately) is able to crunch numbers like a pro. The truth about this, and of course the bit I’m really interested in, Problem Management and Continual Service Improvement, is that the insights you stumble upon are often the most critical.

I’m reminded of the story of the truck stuck in the Brooklyn Tunnel – I don’t know if it’s true, but never let that get in the way of a good story. This truck was wedged into the tunnel and the Fire Brigade were there, the police, highways and so on, all with different ideas about how to “fix” the problem. A little girl happened to be going past in her parents’ car and wondered why they hadn’t just let the tires down on the truck to unstick it. Right or wrong, fact or fiction, the sentiment is true: those left-field solutions often have merit, and having a tool that lets you test a hypothesis against historical data before making a change, then run the change for a limited period and assess whether the hypothesis is true or false, is incredibly powerful.

We now have the capability to answer any question: “The manager at the <insert name here> office says his site gets terrible service, is that true?” or “Does it take longer to get equipment in <insert country name>?” We can do all that and, most importantly, we can do it without having to engage BI resources. It was PowerPivot.com – Avi I think – who put it in a way that I really embrace now. In fact it’s even now part of my Personal Development at work. BI is something for everyone now; it’s not about me controlling and dictating access, it’s about me bringing people into the tooling and working with them to help them access as much as they need to understand everything that is happening within their Service, Site, Division, country, front room?
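To make that concrete, a question like “does it take longer to get equipment in <insert country name>?” boils down to a single aggregate over historical tickets. Here’s a rough sketch in SQL – the table and column names are invented for illustration, not the real schema:

-- Hypothetical: average days to fulfil equipment requests, by country.
-- EquipmentRequest and its columns are made-up names for illustration only.
SELECT   Country,
         AVG(DATEDIFF(day, CreatedDateTime, FulfilledDateTime)) AS AvgDaysToFulfil,
         COUNT(*) AS Requests
FROM     EquipmentRequest
WHERE    FulfilledDateTime IS NOT NULL
GROUP BY Country
ORDER BY AvgDaysToFulfil DESC;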

Three months in and so far I’m loving it; many of the old CMDB development rules hold true in PowerPivot (I may have to write them down again in another post). The journey continues….

2.0: A different animal

So I released the 2.0 tool yesterday after completely re-writing the whole data model from the ground up, including redeveloping the SQL. There were some red herrings that I came across, but in general things have gone well: a 25% reduction in file size while adding more data, plus some less tangible efficiencies. Look out for more posts shortly as I go through areas of the development.

To Pivot or not to Pivot

So, after the first proof of concept I am now zeroing in on the 2.0 release and I have to say it is looking pretty gosh darn good. Please remember that the nature of the PowerPivot journey means that while I think it’s great now, in a few posts no doubt I’ll be saying it looks like something a misinformed chimpanzee would produce.

The key thing to understand with PowerPivot is the storage model. This is a big deal and is central to how Microsoft have devised a tool that lets you crunch, on your desktop, the kind of data sets that previously needed hugely powerful servers to process. As a caveat to this I will remind you all that even a low-end desktop these days has more processing power than a high-end server of 10 years ago, but that’s honestly not the point!

Microsoft have turned the data storage component on its head to ultimately align the tool to usage. “Hang on, what do you mean align the tool to usage?”

Simple – when you write a query (and trust me, you basically do that when you set up a pivot table) you start with “give me the rows that match some column criteria from a big table of data”. Please don’t tell anyone, but that’s all SQL queries are! Practical example: “Select IncidentID from Incident where Status = ‘New’” will provide a list of all Incident IDs in your Cherwell system that have a current status of New. So the column I am asking for is just “IncidentID”, the rows are all that match the criteria, and the criteria is Status = ‘New’. Is everyone still with me? Excellent.
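Written out properly (same example, assuming a Cherwell-style Incident table with IncidentID and Status columns), the query is just:

-- Column I'm asking for: IncidentID. Rows: whatever matches the criteria.
-- Criteria: Status = 'New'. Table and column names assume a Cherwell-style schema.
SELECT IncidentID
FROM   Incident
WHERE  Status = 'New';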

So now we’ve got a basic query you can start to see the traditional way a database works: rows are everything. You have to ask for columns because rows are always there; I will always get every row that matches my column criteria, either for display (the select part) or filtering (the where part). A traditional database works like layers of cake, or if you’re a Garfield fan, layers of lasagne – please don’t overthink it, databases don’t have layers of fruit or mince; maybe I should have gone Shrek and said onions, databases have layers just like onions… mmmmm lasagne…. sorry, I’m getting off topic. The layers of a database table – the rows – are designed to be accessed together and stored together, so space is allocated on hard drives ahead of time and then used. This is great for “write it and occasionally read it”. The challenge now, though, is that even a basic laptop these days has more power than a database server of 15 years ago, and the internet has given everyone access to vast quantities of data. The result is that we want to be able to analyse it, and to quote one of my favourite lines from Ernest Borgnine, “People today have attention spans that can only be measured in nanoseconds.” – BASEketball, 1998.
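That row-at-a-time pattern is what a classic transactional lookup looks like – something along these lines (again a sketch, with an assumed Incident table and a made-up ticket number):

-- Fetch one complete ticket, every column in the row.
-- The whole row is stored together on disk, so this kind of lookup is cheap.
SELECT *
FROM   Incident
WHERE  IncidentID = 12345;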

We want instant gratification: you ask Google (other search engines are available) and instantly you get an answer (not necessarily The Answer). Business has the same challenge, so a change in tooling methodology was needed. A query where we need to understand or crunch large quantities of data is more like “show me [the arithmetic (count or total) of something] for [these areas]”. If that seems dumb let me spell it out: show me total sales by region per quarter. You’ve probably guessed art is not my strongest subject.
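“Total sales by region per quarter” in query terms is just an aggregate grouped by the areas you care about. A sketch, with an invented Sales table:

-- "Show me total sales by region per quarter" as an aggregate query.
-- The Sales table and its columns are invented for illustration.
SELECT   Region,
         DATEPART(quarter, SaleDate) AS SaleQuarter,
         SUM(SaleAmount)             AS TotalSales
FROM     Sales
GROUP BY Region, DATEPART(quarter, SaleDate)
ORDER BY Region, SaleQuarter;

Notice the query only ever touches a handful of columns – exactly the kind of usage the PowerPivot storage model (and the compression covered in the links below) is aligned to.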

Here are a couple of links from PowerPivotPro to further help if my explanation sucks (doesn’t work for you). I’ll continue below…

http://www.powerpivotpro.com/2010/02/surprising-example-of-powerpivot-compression/

http://www.powerpivotpro.com/2015/12/compression-with-power-pivot/