SCENARIO-BASED MODELLING: Storytelling our way to success.

“The soft stuff is always the hard stuff.”

Unknown.

Whoever said ‘the soft stuff is the hard stuff’ was right.  In fact, Douglas R. Conant, co-author of TouchPoints: Creating Powerful Leadership Connections in the Smallest of Moments, notes the following when discussing an excerpt from The 3rd Alternative: Solving Life’s Most Difficult Problems, by Stephen R. Covey:

“In my 35-year corporate journey and my 60-year life journey, I have consistently found that the thorniest problems I face each day are soft stuff — problems of intention, understanding, communication, and interpersonal effectiveness — not hard stuff such as return on investment and other quantitative challenges. Inevitably, I have found myself needing to step back from the problem, listen more carefully, and frame the conflict more thoughtfully, while still finding a way to advance the corporate agenda empathetically. Most of the time, interestingly, this has led to a more promising path forward and a better relationship, which in turn has made the next conflict easier to deal with.”

Douglas R. Conant.

Conant is talking about the most pressing problem in modern organisations – making sense of stuff.

Sense Making

Companies today are awash with data.  Big data.  Small data.  Sharp data.  Fuzzy data.  Indeed, there are myriad software companies offering niche and bespoke software to help manage and analyse it.  Data, however, is only one-dimensional.  To make sense of information is, essentially, to turn it into knowledge, and to do this we need to contextualise it within the frameworks of our own understanding.  This is a phenomenally important point in sense-making – the notion of understanding something within the parameters of our own mental frameworks – and it is something most people can immediately recognise in their everyday work.

Contextualisation

Take, for instance, the building of a bridge.  The mental framework by which an accountant understands the risks in building the bridge is different from the way an engineer understands those risks, or indeed from how a lawyer sees those very same risks.  Each was educated differently, and the mental models they use to conceptualise the same risks lead to different understandings.  Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.

Take again, for instance, the same risk of a structural weakness within the new bridge.  The accountant will understand it as a financial problem, the engineer as a design issue, and the lawyer as some form of liability and warranty issue.  Ontologically, the ‘thing’ is the same but its context is different.  In order to make decisions based on their understanding, however, each person builds a ‘mental model’ to re-contextualise this new knowledge (with some additional information).

There is a problem.

Just as when we all learned to add fractions at the age of 8, we have to have a ‘common denominator’ when we add models together.  I call this calibration: the art and science of creating a common denominator among models in order to combine and make sense of them.

Calibration

Why do we need to calibrate?  Because trying to analyse vast amounts of the same type of information only increases information overload.  It is a key tenet of Knowledge Management that increasing variation decreases overload.

We know this to be intuitively correct.  We know that staring at reams and reams of data on a spreadsheet will not lead to an epiphany.  The clouds will not part, the trumpets will not blare, and no shepherd in the sky will point the right way.  Overload and confusion occur when one has too much of the same kind of information; making sense of something requires more variety.  In fact, overload only increases puzzlement, because of the amount of uncertainty and imprecision in the data.  This, in turn, leads to greater deliberation, which leads to increased emotional arousal.  The ensuing ‘management hysteria’ is all too easily recognisable.  It leads to cost growth as senior management spend time and energy trying to make sense of the problem, and to further strategic risk and lost opportunity as those same people neglect their own jobs while they do so.

De-Mystifying

In order to make sense, therefore, we need to aggregate and analyse disparate, calibrated models.  In other words, we need to look at the information from a variety of different perspectives, through a variety of lenses.  The notion IT companies would have us believe – that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle – is absurd.

Information still needs a lot of structural similarity if it’s to be calibrated and analysed by both technology and our own brains.

The diagram below gives an outline of how this is done, but it is only part of the equation.  Once the data is analysed and valid inferences are made, we are still only partially on our way to better understanding.  We still need those inferences to be contextualised and explained back to us in order for the answers to crystallise.  For example, in our model of a bridge, we may make valid inferences of engineering problems based on a detailed analysis of the schedule and the Earned Value, but we still don’t know if they are correct.
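
To make that example concrete, here is a minimal sketch of the standard Earned Value calculations such an inference might rest on; the figures and the diagnosis are invented for illustration:

```python
# Minimal Earned Value sketch (illustrative figures only).
# Standard EVM definitions: CPI = EV / AC, SPI = EV / PV.

def evm_indicators(pv: float, ev: float, ac: float) -> dict:
    """Return cost and schedule variances and performance indices."""
    return {
        "cost_variance": ev - ac,      # CV: negative = over budget
        "schedule_variance": ev - pv,  # SV: negative = behind schedule
        "cpi": ev / ac,                # < 1.0 = cost inefficiency
        "spi": ev / pv,                # < 1.0 = schedule slippage
    }

# Hypothetical bridge-programme figures (in $m):
indicators = evm_indicators(pv=120.0, ev=95.0, ac=140.0)
print(indicators)
# A CPI of ~0.68 and an SPI of ~0.79 support an inference that something
# is wrong, but not *what* -- that still needs the engineers' story.
```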

Storytelling

As accountants or lawyers, therefore, in order to make sense of the technical risks we need the engineers to play back our inferences in our own language.  The easiest way to do this is through storytelling.  Storytelling is a new take on an old phenomenon.  It is the rediscovery of possibly the oldest practice of knowledge management – a practice which has come to the fore out of necessity, and due to the abysmal failure of IT in this field.

[Diagram: Scenario-Based Model Development]

Using the diagram above in our fictitious example, we can see how the Legal and Finance teams, armed with new analysis-based information, seek to understand how the programme may be recovered.  They themselves have nowhere near enough contextual information or technical understanding of either the makeup or execution of such a complex programme, but they do know it isn’t going according to plan.

So, armed with the new analysis, they engage the Project Managers in a series of detailed conversations in which the technical experts tell their ‘stories’ of how they intend to turn the ailing project around.

Notice the key differentiator between a bedtime story and a business story – DETAIL!  Asking a broad generalised question typically elicits a stormy response.  Being non-specific is either adversarial or leaves too much room to evade the question altogether.  Engaging in specific narratives around particular scenarios (backed up by their S-curves) forces the managers to contextualise the right information in the right way.

From an organisational perspective, specific scenario-based storytelling forces managers into a positive, inquisitive and non-adversarial narrative about how they are going to make things work, without having to painfully translate technical data.  Done right, scenario-based modelling is an ideal way to squeeze the most out of human capital without massive IT spend.

Getting Rid of the Help Desk – a structured approach to KM

In a recent article in CIO magazine, Tom Kaneshige argues that the rise of BYOD spells the demise of the traditional Help Desk.  He intimates that BYOD has now been overtaken by BYOS – bring-your-own-support!  The network-enabled user, with access to huge volumes of information, requires a new kind of Help Desk.

He is right that, ultimately, power-users need better, faster support delivered to them in the right format, by people with a deeper understanding of the context, and with more intricate solutions.

BYOS is the exception and not the rule. 

Although the IT function is becoming more commoditised, the larger field of knowledge work isn’t, hasn’t and won’t be commoditised any time soon.  Otherwise, any 12-year-old with a laptop would be in with a chance.  Help Desks don’t need to be expanded, but they do need to become more mature, agile and integrated into the KM procedures of modern networked enterprises (i.e. those businesses with a heavy KM focus).  Expanding the remit of the Help Desk opens the door to colossal cost increases.  Internal knowledge management functions need to become more structured, moving beyond simplistic portals.

INTERNAL KNOWLEDGE MANAGEMENT

In a recent article in McKinsey Quarterly, Tom Davenport argues that organisations need to get a lot smarter in their approaches to supporting knowledge workers.  He says that greater use of social media and the internet will harm the business more than help it.  Lower-level knowledge workers need more structured support for their processes.  High-level knowledge workers, on the other hand, are better supported by an open platform of tools.  Getting the right balance is as much art as science.

BYOS is the wrong approach.  It is an abdication of KM responsibilities.  Organisations need to focus on an approach to KM with the following structures:

  1. A good Help Desk function for knowledge workers involved in highly structured processes.
  2. An IT function which supports a flexible arrangement of tools for advanced knowledge workers.
  3. Knowledge Managers:  people who provide the focal point for certain areas of knowledge.
  4. Portals:  A single entry point for people seeking access to communities of interest.

So, be careful when thinking about Tom Kaneshige’s advice and ‘blowing up’ your Help Desk.  IT can be a self-licking lollipop: more tools and more information won’t necessarily improve productivity.  At the lower levels of expertise it sometimes makes more economic sense to support the process; it is only at the upper levels that it is more profitable to support the person.

The Business End of Social

Integrating Emerging Social Software Platforms (ESSPs) into a business is fraught with danger, but the payoff can be substantial.  Not only does the company have the potential for positive brand messages to flood a series of trusted networks, it can also leverage the renewed engagement of staff for better knowledge management.  In the end, though, social is fun and sexy but utterly irrelevant to most employees until there is some link to their remuneration.  To rephrase the HBR article: I don’t think social needs to get more businesslike; rather, business needs to get more social.

The Cost of Capability: a better way to calculate IT chargebacks

THE VALUE OF SHARED SERVICES

Almost every C-suite executive will agree that shared services, done well, are a critical factor in moving the business forward.  The problem is that, implemented poorly, they can overload good processes and profitable service lines with villainous overhead allocations.

IT chargebacks are important because, used well, they can assist the business in the following ways:

  • help IT prioritise service delivery to the most profitable business units,
  • help the business understand which IT services are value-adding to the market verticals, and
  • reduce the overall vulnerability of IT-enabled business capability.

OVERHEAD ALLOCATIONS CAN RUIN GOOD PROCESSES

However, many shared-services implementations are poorly received by the business units because they add little or no value and are charged at higher than the market rate.  As Kaplan pointed out in his seminal work Relevance Lost: The Rise and Fall of Management Accounting, the result of poor overhead cost allocation is that perfectly profitable processes and services, burdened by excessive and misallocated overhead costs, seem to be unprofitable.  Kaplan goes further and points out that all overhead which cannot be directly incorporated into the cost of goods sold should be absorbed by the business and not charged back to the market verticals and service lines.  This is the fairest method, but most businesses avoid it because high SG&A costs have a negative impact on financial ratios and therefore on attractiveness to investors.
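
A minimal worked sketch of Kaplan’s point, with invented figures:

```python
# Illustrative only: how a misallocated overhead charge can make a
# genuinely profitable service line look unprofitable.

revenue = 1_000_000      # service line revenue
direct_costs = 750_000   # costs truly driven by the service line
true_margin = revenue - direct_costs             # +250,000: profitable

# A blanket shared-services allocation, unrelated to actual usage:
allocated_overhead = 300_000
reported_margin = true_margin - allocated_overhead  # -50,000: "unprofitable"

print(f"true margin:     {true_margin:+,}")
print(f"reported margin: {reported_margin:+,}")
```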

HIGGLEDY-PIGGLEDY 

In a recent article (shown below), McKinsey & Co pointed out a variety of methods which their client firms use to calculate IT chargebacks.  Even though they differentiated between new and mature models, it is worth noting that very few companies charged their business units for what they actually used (Activity-Based Costing).  Rather, they used some form of bespoke methodology: usually (i) a flat rate, (ii) a budget rate with penalties (to change behaviour), or (iii) a market rate (usually with additional penalties to recover IT R&D costs).
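
As an illustration of how differently those three methodologies can price the same consumption, here is a minimal sketch; the function names, rates and figures are assumptions, not taken from the McKinsey article:

```python
# Hypothetical comparison of three common chargeback calculations.

def flat_rate(users: int, rate_per_user: float) -> float:
    """Everyone pays the same per-seat rate, regardless of usage."""
    return users * rate_per_user

def budget_rate(budgeted_units: int, actual_units: int,
                unit_rate: float, penalty_rate: float) -> float:
    """Pay for the budgeted amount; over-consumption attracts a penalty."""
    overage = max(0, actual_units - budgeted_units)
    return budgeted_units * unit_rate + overage * penalty_rate

def market_rate(actual_units: int, external_price: float,
                rnd_levy: float = 0.10) -> float:
    """Pay the external market price plus a levy for internal IT R&D."""
    return actual_units * external_price * (1 + rnd_levy)

# One business unit, three very different bills:
print(flat_rate(users=200, rate_per_user=1_200))                   # 240,000
print(budget_rate(1_000, 1_300, unit_rate=180, penalty_rate=270))  # 261,000
print(market_rate(1_300, external_price=165))                      # 235,950
```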

[Figure: IT chargeback methods – McKinsey, IT metrics]

ALIGNMENT & ACCOUNTABILITY

Chargebacks are essential.  They are a critical means for companies to take charge of their IT costs; otherwise, a ballooning IT overhead can destroy perfectly good processes and service lines.  However, chargebacks can also obscure accountability.  If they are not calculated transparently, clearly and on the basis of value, then there will be no accountability of IT to the business whose capabilities it enables.  Without accountability there can never be alignment between IT and the business.

CHARGEBACK AS AN INDICATOR OF MANAGEMENT-VALUE-ADDED

Traditional methods of IT cost modelling, on which standard chargebacks are calculated, only account for the hard costs of ICT, namely infrastructure and applications.  It should be noted that chargebacks should only be applied to Management Information Systems (e.g. knowledge bases, team collaboration sites such as MS SharePoint, CRM systems, and company portals).  All other systems are either embedded (e.g. robotics) or operational (i.e. mission-critical to a business unit’s operations).  MIS are largely used by overhead personnel, whereas operational systems, and the financing of embedded systems, should be accounted for in the cost of goods sold.  The real question, therefore, is: what is the value of the management support to my business?  The question undercuts the myth that Use = Value.  It does not: good capability applied well = Value.
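
A minimal sketch of that routing rule, using hypothetical systems and costs:

```python
# Hypothetical cost-routing rule from the paragraph above: only MIS
# attracts a chargeback; embedded and operational systems go to COGS.

SYSTEMS = {
    "SharePoint collaboration": ("mis", 120_000),
    "CRM": ("mis", 200_000),
    "welding robot controllers": ("embedded", 90_000),
    "order-fulfilment platform": ("operational", 310_000),
}

chargeback_pool, cogs_pool = 0, 0
for category, cost in SYSTEMS.values():
    if category == "mis":
        chargeback_pool += cost  # recovered via chargeback to overhead users
    else:
        cogs_pool += cost        # absorbed into cost of goods sold

print(f"chargeback pool: {chargeback_pool:,}")  # 320,000
print(f"COGS pool:       {cogs_pool:,}")        # 400,000
```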

THE COST OF CAPABILITY

The cost model, therefore, needs to determine the cost of capability.  Metrics based on per-unit costs are inappropriate because the equipment amortises so rapidly that the cost largely represents a penalty rate.  Metrics based on per-user costs are unfair because each user is at a different level of ability.  In previous blogs we have outlined how low team capabilities – distributed locations, poor requirements, unaligned processes and so on – correlate directly and negatively with project value.  We have also written about how projects should realise benefits along a value ladder, delivering demonstrable financial and capability benefits – rung by rung – to business units.

It is reasonable to say, therefore, that managers should not have to pay the full chargeback rate for software which is misaligned to the business unit and implemented badly.

It is unfair for under-performing business units to be charged market rates for inappropriate software which the IT department mis-sold them.  If that business unit were a company in its own right, it would be offered customisation and consulting support.  In large firms the business often scrimps on these costs to save money.  Given the usual overruns in software implementations, business units are traditionally left with uncustomised, vanilla software which does not meet their needs.  The training budget is misallocated to pay for cost overruns, and little money is ever left for proper process change.

In order to create a fair and accurate chargeback model which accounts for the Cost of Capability, use the following criteria:

  • Incorporate the COSYSMO cost coefficients into software and service costings so that low-capability business units pay less (a sketch follows this list).
  • Only charge for professional services which the business doesn’t own.  Charging for professional/consulting services which are really just work substitution merely encourages greater vertical integration.  This is duplication, and duplication in information work creates friction and exponential cost overruns.
  • Watch out for category proliferation, especially where the cost of labour for some unique sub-categories is high.  Don’t let the overall cost model get skewed by running a few highly specialised services.  Remove all IT delivery personnel from the verticals.  Where there are ‘remoteness’ considerations, keep people embedded but allocate their costs as overhead.
  • Do not allow project cost misallocation.  Ensure that cost codes are limited.
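
Here is a minimal sketch of the first criterion, assuming a single illustrative capability multiplier in place of the actual COSYSMO driver tables:

```python
# Hypothetical capability-adjusted chargeback. The multiplier values are
# invented for illustration; real COSYSMO effort multipliers come from
# its published driver tables and local calibration.

CAPABILITY_MULTIPLIER = {
    "low": 0.70,      # low-capability units pay less...
    "nominal": 1.00,
    "high": 1.15,     # ...high-capability units pay full (or premium) rate
}

def capability_adjusted_charge(base_charge: float, capability: str) -> float:
    """Scale the standard chargeback by the unit's assessed capability."""
    return base_charge * CAPABILITY_MULTIPLIER[capability]

print(capability_adjusted_charge(240_000, "low"))      # 168,000.0
print(capability_adjusted_charge(240_000, "nominal"))  # 240,000.0
```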

So that businesses do not fall into the ‘build it and they will come’ trap, a clear and precise chargeback model should be created for all IT costings.  Businesses should start by charging simple unit costs, such as per user.  Everything else will initially be overhead; firms may then move to a more complex chargeback model later.

It is important that low capability business units do not pay full price for their software and services.  As Kaplan is at pains to point out, where businesses do this they are at risk of making perfectly good processes and service lines seem unprofitable.  The only way to properly broker for external services is to account for the cost of capability.

The Complexity of Cost: the core elements of an ICT cost model

There are two reasons why IT cost-reduction strategies are so difficult.  Firstly, many of the benefits of ICT are intangible and it is difficult to trace their origin: it is hard to determine the value of increased customer service, or of the productivity gained from better search and retrieval of information.  Secondly, many of the inputs which actually make IT systems work are left unaccounted for and unaccountable.  The management glue which implements the systems (often poorly, and contrary to the architecture), and the project tools, systems and methods which build and customise the system, are very difficult to cost – because IT, unlike standard capital goods, is often maintained as a going concern under constant development (upgrades, customisation, workflows and so on).

Standard IT cost models only account for the hard costs of the goods and services necessary to implement and maintain the infrastructure, applications and ancillary services.  Anything more is deemed a project cost to be funded from overhead.

This is unsatisfactory.

The value of technology systems – embedded systems excluded – is in the ability of information workers to apply their knowledge by communicating with the relevant experts (customers, suppliers etc) within a structured workflow (process) in order to achieve a corporate goal.

Capturing the dependencies of knowledge and process within the cost model, therefore, is critical.  Showing how the IT system enables the relevant capability is the key factor.  A system is more valuable when used by trained employees than by untrained ones.  A system is more valuable when workers can operate, with flexibility, from different locations.  A system is more valuable where workers can collaborate to solve problems and bring their knowledge to bear on the relevant problems.  So how much is knowledge management worth?

The full cost of a system – the way systems are traditionally modelled – assumes (at least!) 100% effectiveness.  Cost models such as COSYSMO and COSYSMOR account for internal capability with statistical coefficients.  Modelling soft costs, such as information effectiveness and technology performance, helps the business define the root causes of poor performance rather than rely on subjective self-analysis.  If a firm makes the wrong assessment of its capability scores in COSYSMO, the projected cost of an IT system could be out by tens of millions.
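
A minimal sketch of that sensitivity, using the general parametric form such models share (effort = A × size^E × a product of effort multipliers); the coefficients below are invented, not the published COSYSMO calibration:

```python
# Illustrative sensitivity of a parametric cost model to capability
# ratings. Coefficients are invented; real COSYSMO values come from
# its published calibration.

def estimated_effort(size: float, multipliers: list[float],
                     a: float = 2.5, e: float = 1.1) -> float:
    """Effort = A * size^E * product(effort multipliers)."""
    product = 1.0
    for m in multipliers:
        product *= m
    return a * size ** e * product

size = 2_000                      # e.g. weighted size units
optimistic = [0.85, 0.90, 0.95]   # team rated highly capable
pessimistic = [1.20, 1.30, 1.15]  # the same team rated poorly

low = estimated_effort(size, optimistic)
high = estimated_effort(size, pessimistic)
print(f"optimistic:  {low:,.0f} person-hours")
print(f"pessimistic: {high:,.0f} person-hours")
print(f"ratio: {high / low:.2f}x")  # a ~2.5x swing from ratings alone
```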

Financial models for IT should therefore focus less on the cost of technology and more on the cost of capability.  The answer lies in modelling the soft costs (management costs), indirect costs and project costs, as well as the hard costs of the system’s infrastructure, applications and services.