It is quite funny how we got used to charts, statistics, and discussions about different perspectives.
But, despite all our computers, technologies, and so on… we keep getting the usual black/white alternatives, or “dichotomies”.
This short article is about something that I have often done to convert complexity into something we are all used to: a 0 to 10, a 0 to 5, or an F to A.
The concept is simple: our brain is inherently multi-dimensional.
We are able to see, perceive, think in shades- and to switch from visual to verbal if and when needed.
But when we move from what is personal to what is abstract (our work, in most cases, or news), we seemingly accept at face value boundaries that do not exist- false dichotomies.
If you are lost- do not worry: the rest of this article will make it clear, including the “Devil’s Advocate” section (i.e. how to prepare for objections).
The beginning: concepts
Over the last few decades, thousands of pages have been written about the difference between qualitative and quantitative- basically, the values that you assign vs. the values that you collect.
A seemingly unrelated but parallel differentiation is between visual and verbal: instead of using one to reinforce the other, they are presented as alternatives.
But, in my experience, if you really understand your qualitative information, you can actually identify ways to set a numerical representation, i.e. turning qualitative information into something that you can more easily compute- and represent.
As for representation… the more complex the information, the easier it is to use a visual, multidimensional representation (i.e. color, “thickness” of lines, position inside a chart, and so on).
Easier? Yes- easier than building tables and lists of data.
I started designing interpretation models and visual representations in the 1980s (respectively called decision support systems – DSS – and executive information systems – EIS).
You do not need anything more complex than Microsoft Excel or OpenOffice.
In this article, I will go with the “free” option, i.e. OpenOffice- if you prefer, you can use Excel or one of the various Business Intelligence tools- but, frankly, it is overkill.
I will use a small example, realistic but not real, as it represents something that I did repeatedly for customers, partners, startups: assessing and comparing options.
In this case, there will be few parameters, or “measures”, both qualitative (or “intangible”- such as the level of timeliness) and quantitative (or “measurable”- such as the number of incidents).
The sample: raw data
Let’s have a look at this small table, containing the raw data from our analysis of three business units, in terms of what they deliver.
In this table, the columns represent:
- qualitative information: a “degree” of something, or the relative position between the three units
  - quality: how do you rate the quality of what is delivered?
  - staffing: is the staffing appropriate?
  - timeliness: are the deliverables produced according to schedule?
- quantitative information: a “measure” of something, or the absolute values
  - incidents: number of incidents
  - allocation: percentage of resources allocated
  - on budget: percentage of deliverables produced on budget
“Timeliness” is the qualitative counterpart of “On budget”, while “Staffing” is the same for “Allocation”.
The concept here is simple: each “hard” (i.e. number-based) parameter has a “soft” counterpart: you can be timely in delivering something, but if you go over budget, that is not necessarily as positive as being less timely but within budget.
The same applies to staffing: you can have appropriate staffing- but if you consistently under-allocate your staff, it means that you have people on your activity sitting idle- not really the best use of your resources.
In this example, a fourth line is added: “average”, i.e. the computed average, and I ignored the obvious shortcomings of these six parameters (more about this in the “Devil’s Advocate” section).
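Since the original table is not reproduced here, a minimal sketch of its structure might look like this- all the unit names and values below are illustrative placeholders, not the real figures from the analysis:

```python
# Hypothetical raw data for three business units (illustrative values only).
# The first three columns are qualitative rankings; the last three are measured.
raw_data = {
    #          quality  staffing  timeliness  incidents  allocation%  on_budget%
    "Unit A": [3,       2,        3,          12,        80,          70],
    "Unit B": [2,       3,        1,           5,        95,          90],
    "Unit C": [1,       1,        2,           8,        60,          85],
}

columns = ["quality", "staffing", "timeliness",
           "incidents", "allocation (%)", "on budget (%)"]

# The "average" fourth line mentioned in the text: the computed mean per column.
average = [sum(row[i] for row in raw_data.values()) / len(raw_data)
           for i in range(len(columns))]
print(dict(zip(columns, average)))
```

The point is only the shape of the data: three units, six parameters, plus a computed “average” row.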
When it comes to real cases, e.g. for change management initiatives, sometimes the desired target on each parameter is shown as an additional line, and the average represents the current average status.
And, of course, despite what I wrote above… you could start piling up so many measures, that using a specialized system (or, at least, comparing by categories, not by individual measure) makes sense.
But once you have the raw data, you can easily see that you need to concentrate on the numbers to understand how each unit fares- therefore, a first step is converting everything into a “hit parade”.
The sample: hit parade
Moving from the raw data to the hit parade requires selecting a rule.
This simple formula sets the “top” of a 1 to 10 range by assigning 10 to the highest value (found via the “max” function), and then expressing each actual value as a percentage of that maximum.
To summarize: it simply says- if the value that you are comparing is equal to the max value, it is 10; anything lower… goes down.
Again: it is a really simple formula- here, replace “number of events” with “value of the parameter for each business unit”, and you get the scaled value.
In a real world business case, probably some additional qualification will be needed: if you see a table like this one, check the formulas- it is easier to trick numbers (more about this in the “Devil’s Advocate” section) and then hide the “manipulation” behind a visual screen (the chart).
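As a sketch, the rule described above (10 to the maximum, every other value scaled as a percentage of it) can be written as:

```python
def hit_parade(values, top=10):
    """Scale a list of values so that the maximum becomes `top`
    and every other value is proportional to it."""
    m = max(values)
    return [top * v / m for v in values]

# e.g. hypothetical incident counts for three units:
# the highest (12) maps to 10, the others proportionally
print(hit_parade([12, 5, 8]))
```

Note that this maps the highest raw value to 10 regardless of whether “high” is good (on budget) or bad (incidents)- exactly the kind of ambiguity discussed later in the “Devil’s Advocate” section.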
The result from the example above?
But then, we went only through the first step: converting qualitative information into a quantity, so that we can compare values.
Still: just six parameters, just three units and the average: but can you, at a glance, see who is performing better or worse?
The “10” is scattered around.
So, in all my change management and audit activities, I usually adopted a chart- my favourite?
What some call the “radar chart”, and what OpenOffice calls a “Net chart”.
The sample: charting results
The aim here is not to replace the tables with numbers- just to show the original “raw data” table alongside something that allows comparison at a glance.
Usually, I do not present the “flattened” hit parade: it is an artificial by-product needed to produce a meaningful radar chart.
If you look at this chart, now you understand why I created the hit parade with a top value of 10.
It does not take a PhD to identify, visually, potential problem areas and compare the different units.
Practically: do not use more than 8-10 parameters- and no more than 5-10 units or entities to compare.
If you need more: think about categories, and do what is called a “drill-down”, i.e. go into details.
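The geometry behind a radar (or “Net”) chart is simple: each parameter gets its own axis, the axes are spaced evenly around a circle, and each scaled value becomes the distance from the centre along its axis. A minimal sketch of the vertex computation, with no plotting library assumed:

```python
import math

def radar_vertices(scores):
    """Return the (x, y) polygon vertices for a radar chart:
    one axis per score, axes evenly spaced around the circle,
    each score used as the distance from the centre."""
    n = len(scores)
    return [(s * math.cos(2 * math.pi * i / n),
             s * math.sin(2 * math.pi * i / n))
            for i, s in enumerate(scores)]

# six parameters already scaled to the 1-to-10 "hit parade"
print(radar_vertices([10, 4, 7, 9, 6, 8]))
```

This is why the “hit parade” conversion matters: with all parameters on the same 1 to 10 scale, the polygon’s shape is comparable across units.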
The approach shown here is just an example: the focus is on the process, not the formulas or tools.
But now comes the interesting part: interpretation.
The Devil’s Advocate
Any data transformation introduces an interpretation bias.
Sometimes it is transparent, sometimes both numbers and visual representations can deceive.
Look at the “raw data” section: I meant to leave it as an exercise for my readers, but let’s see it together.
When you see values such as 1, 2, 3 for “quality”, you assume that 1 is the best and 3 is the worst.
In this example, as the formula used is “max(numbers)”, it is the other way around.
But as readers will be used to the “traditional” way, this could create misunderstandings- sometimes, intentional.
While you can always (as I do in real projects) provide a legend or other documentation, few people will actually read it.
Therefore, when building values, it is better to use something that matches the “normal” assumptions of your audience, e.g. using A to F (or AAA etc, if you are in the financial sector).
Personally, I always feel a little uneasy about “qualitative” parameters (or indicators, if you prefer) that ask the authors of the study to make a subjective evaluation (e.g. assign a relative ranking between the available options).
I usually instead adopt a “qualitative range” that is based on objective values.
As an example, timeliness will not be “the best of the three, the second best, and the last”, but a scale such as: always on time, occasionally not on time, usually not on time, never on time.
Yes, I do not like the “average” value that is typical of 5-point scales, as it is a kind of “I was unable to decide, or it was too politically sensitive, so I bundled everything there”.
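As a sketch, such an objective qualitative range could be encoded directly as scores- the labels are from the timeliness example above, while the numeric values are illustrative assumptions:

```python
# Illustrative mapping from an objective timeliness scale to a score.
# Note the deliberate absence of a vague "average" middle value.
TIMELINESS_SCALE = {
    "always on time":           10,
    "occasionally not on time":  7,
    "usually not on time":       3,
    "never on time":             0,
}

def timeliness_score(label):
    """Look up the score for an observed, objective timeliness label."""
    return TIMELINESS_SCALE[label]

print(timeliness_score("occasionally not on time"))
```

The advantage: the analyst records an observable fact (how often deliverables were late), and the conversion to a number is fixed in advance rather than decided case by case.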
While the Executive Summary is an act of diplomacy, the numbers (and their visual representation) are not.
If you let external influences play a role, you are delivering a biased analysis to whoever you are supposed to support in decision making- i.e. you are imposing an agenda while hiding behind numbers, instead of delivering analysis.
This is an issue that I had quite a few times in my projects: analysts who thought that they knew better, and tried to force their solutions onto the decision makers by distorting their analysis.
In our example, if you consider “1” to be the best (in the qualitative side), then instead of the “max(numbers)”, you should adopt another formula, so that 10 goes to the option that receives the 1 value, and the others decline accordingly.
Moreover: instead of a linear value, you can adopt a different approach, to “weight”, or to mark also the “clustering” of options around a certain value (e.g. if just one has the top score, and the others are toward the bottom, you would like to show the distance).
The point being: while a linear representation is immediately intuitive, any other choice implies a “value” judgement from the authors, which is then conveyed visually to the readers- giving them a perception of reality that is not in the data provided, but in its interpretation.
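For the “1 is best” convention mentioned above, a min-based variant of the formula does the job- a sketch, assuming strictly positive rankings:

```python
def hit_parade_inverted(values, top=10):
    """Give `top` to the lowest raw value (rank 1 = best) and
    scale the others down in proportion: score = top * min / value."""
    m = min(values)
    return [top * m / v for v in values]

# rankings 1, 2, 3: the "1" gets 10, the others decline
print(hit_parade_inverted([1, 2, 3]))   # [10.0, 5.0, 3.33...]
```

Note that this particular choice is not linear: going from rank 1 to rank 2 halves the score, while going from rank 2 to rank 3 loses less- which is precisely the kind of non-linear “weighting” judgement the paragraph above warns about.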
Actually- more than once I have read nice, interesting visual representations and reports, resulting from assessment or audit projects, that were simply deceiving.
Such as: comparing business units with different markets or seasonality (e.g. what is the point in comparing the timeliness of a unit producing biscuits 24/7, with another one working only for Christmas?).
Or, more simply, adopting parameters that were clearly skewed to give a better result (this is a favourite of the software industry).
I remember some software selections that were simply hilarious: the parameters were so skewed toward one of the invited products, that attending was just shooting yourself in the foot, if you weren’t the one proposing the “selected” one.
My suggestion: ask for the evaluation model before attending an evaluation: if it is skewed, decline the invitation.
Why? Because then, “informally”, the “winner” would reuse on the market just the global score- without reporting the ludicrous parameters selected to present as a logical outcome what was really a hand-picked solution.
And software selection is often based on “if we choose the best, we cannot fail” instead of “let’s see what we really need”.
But you can see that in any advertisement comparing products and services.
In the small example above, the bias is blatant: each unit is treated like any other unit- and this is fine only if the business processes, size of the activities, market, etc. are shared by all the units involved in the assessment.
As an example, when, a few years ago, I had to do a similar analysis to restructure a customer’s portfolio, I first had to analyze the business of each unit- their market, target, and intra-company activities.
Visual representations are a powerful tool to help the decision makers in selecting the appropriate path of action- but the larger the quantity of information summarized by your graphical representation, the higher the risk that the decision maker will be deceived.
A typical error is scaling: while a radar chart based on the “hit parade” approach avoids distortions, I often saw pie charts, bar charts, histograms, and others (Boston Consulting Group chart, SWOT, etc) that were “fed” with information that was simply shaped to fit the chart or data representation- not the other way around.
Moving from assessment to action
If the radar chart allows visual comparison, something more is then needed to plan, execute, and monitor the changes.
And the radar chart also allows comparing, say, how similar projects are faring toward a set goal, represented by the parameters used to build the chart (or your “Key Performance Indicators”, KPIs, in industry parlance).
In project/programme/activity management, there are plenty of tools.
But when it comes to planning and comparing options before you enter in a “project mode”, the simplest but most complete tool that I found was actually free.
The story is funny: during the “re-inventing the government” initiative, the DoD had a software called TurboBPR, and dropped it inside a “virtual library on BPR”.
As at that time I was preparing something for customers, and I had no qualms about asking anybody for information instead of re-inventing the wheel, I wrote to the DoD.
And received a copy of the CD (including both the software and a library on business process re-engineering), along with a subscription to a magazine called CrossTalk (main theme: use off-the-shelf solutions, or COTS in alphabet soup, instead of re-inventing the wheel).
Along with best wishes for my career in change management 😀
With the same approach, I wrote to software companies when I moved from DSS/EIS to business intelligence- and received every year software licenses for free, to use in my prototyping and demo activities.
I think that my BPR CD has travelled a lot. As it is now offline, more than once I used it to help non-profit BPR initiatives.
TurboBPR used the Mission/Goals/Measures/etc approach- and produced full reports.
Or should I say- produces. When needed.
The only drawback? It is so old that I had to keep a tweaked Virtual Machine with Windows 2000 to keep using it when needed- it does not work under Windows XP 😀