RISK: A Quick-Start Methodology

Enterprise Risk Management Systems (ERMS) are a powerful means to aggregate and assess risk data in large portfolios.  They enable project teams to collate, analyse and report on complex risk information, and they support senior decision making upwards and across the organisation.

However, deep insight and precise analysis are only possible when risk information is entered accurately.

The following methodology provides a detailed approach to structuring risk statements and to collecting and aggregating risk information.  The method is part of a wider approach to governance, risk and compliance that enables the fast, efficient and effective capture and management of risk.  It not only reduces the overall cost and burden of effort but also increases the value that risk management adds to the organisation, because more effective cross-functional collaboration over complex risks builds stakeholder buy-in.  In turn, organisations and teams derive far greater benefit from an ERMS’ analytical power with the additional support of the extended project community.

Together, an ERMS and accompanying methodology provide mutual support for a more streamlined and cost-effective means for managing complex portfolios of risk.

The 8A Methodology

Identifying what might go wrong on projects is often straightforward.  A good project manager will usually know most risks, when they might occur and what to do about them.  The value of a system, however, is in managing the sheer volume of risk in large projects and in understanding and mapping the interplay of subtle and complex risk factors across portfolios.  The 8A Risk Methodology gives users the guidance to collate and calibrate risk data across a wide range of projects so that collection is streamlined, analysis is precise and the necessary management action is accurate and effective.

  1. ABSTRACT.   Identify and isolate the information sources to be analysed and used.
  2. ASSEMBLE.   Extract and gather uncorrupted information at source.
  3. ARTICULATE.   Deconstruct complex problems and structure singular statements of atomic risk.
  4. AGGREGATE.   Demarcate and delineate risk data so that there is no duplication of risk across the portfolio.
  5. ANALYSE.   Run quantitative and qualitative analysis on risk information.
  6. ASSESS.   Increase stakeholder buy-in of quantitative analysis through subjective scoring and prioritisation.
  7. ADVISE.   Report risk across and upwards in the organisation.
  8. ALIGN.   Assign, delegate and track the progress of risk mitigation action.

Identifying Risk

A risk is not always a risk.  More importantly, real business risks must be filtered from the metaphysical ones.  The following definition enables users to separate the real and tangible business risks from the plethora of vague and tenuous risks which unnecessarily fill risk registers and sap management attention.

  1. HOT – the ‘heat’ of a risk only applies to in-flight risks and contract management.  The ‘heat’ of a risk/element/item is traditionally derived from Finance’s assessment of adverse variances.
  2. VALUABLE – the value of a risk depends on how sensitive the overall cost/schedule is to the particular element.  This can be derived from a sensitivity or statistical analysis.
  3. WEAK – the element must also have or be part of some weakness in the project.  This may be a physical or conceptual weakness, e.g. the weakness in steel or the weakness in a contract or the weakness in a claims management process etc.

All three of these criteria must be satisfied if something is to be classed as “at risk”.  For instance, an object may be running hot and it may be highly volatile in the cost model, but if the variance is within parameters and the item is traditionally managed well then it is not really at risk.
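As a sanity check, the three-way test above can be sketched as a simple filter.  This is an illustrative sketch only; the `Element` fields, thresholds and numbers are assumptions, not part of the methodology.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A project element under consideration (fields are illustrative)."""
    name: str
    adverse_variance: float      # Finance's adverse-variance assessment (HOT)
    variance_limit: float        # tolerated variance band
    sensitivity: float           # cost/schedule sensitivity from analysis (VALUABLE)
    sensitivity_threshold: float
    has_weakness: bool           # physical or conceptual weakness (WEAK)

def is_at_risk(e: Element) -> bool:
    """All three criteria must hold for an element to be 'at risk'."""
    hot = e.adverse_variance > e.variance_limit
    valuable = e.sensitivity > e.sensitivity_threshold
    weak = e.has_weakness
    return hot and valuable and weak

# An element that is volatile in the cost model and structurally weak, but
# whose variance is within parameters, is filtered out of the register.
beam = Element("support beam", adverse_variance=0.02, variance_limit=0.05,
               sensitivity=0.9, sensitivity_threshold=0.3, has_weakness=True)
print(is_at_risk(beam))  # False: not hot, so not at risk
```

Expressing the test as a conjunction makes the filtering effect explicit: failing any one criterion keeps the item off the register.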


The abstraction process involves the identification of the types of risks which exist in the projects, usually around some form of risk framework.  From there the teams can identify sources of risk information.  The Risk Manager will not have deep experience across all the functional areas of the project and so will need to direct departments and teams in a simple but effective manner.



Assembling the myriad risk data that the user has identified is not always simple.  Despite the supposed ease of technological interoperability, most data is designed to reside in and be used by a single system (and perhaps its proprietary add-ons).  Assembling risk data often involves exporting data from a variety of systems into Excel spreadsheets so it may be manipulated.

Extracting data, especially on a monthly basis, must be simple, repeatable and non-intrusive.  It should always be conducted by the Risk Manager for two reasons:  (i) for confidence in the provenance of the numbers, and (ii) because extraction (along with all other areas of risk management) must not increase management effort unless the task adds value.


Deconstructing risks into singular statements of atomic risk can be surprisingly difficult and often leads the most experienced managers to tie themselves in mental knots.  In simple risk registers it is often unnecessary, as risks may be clumped into rough blocks of obvious risk.  In larger portfolios, however, deconstruction is what allows organisations to allocate and manage contingency funds precisely.

When deconstructing ‘clumps’ of risks it is often best to work backwards:

  1. Impact. Focus on the financial impact, i.e. the potential loss.  This will help remove irrelevant risks from the analysis.  For instance, if I fall to the ground because I sat on a broken chair, the financial impact will be my loss of earnings whilst recuperating.
  2. Risk.  Identify the change in state which caused the financial impact.  Note:  Risks are best seen as an ‘adverse change in state’, e.g. changing from ‘employed’ to ‘unemployed’.  Focusing on risks as events can be too narrow.
  3. Cause.  Ask ‘why?’  In our case why did I stop earning money?  The risk will be realised by some weakness (either structural or operational).  The weakness may be in the schedule, costs or contract as well as part of any physical structure.  In our example it was the weak leg of a chair.  Note that it was not just the weak leg of a chair.  I also had to sit on the chair, there had to be an absence of any soft landing and I had to fall awkwardly.  All these additional conditions are necessary but in and of themselves they are not sufficient to cause my injuries.  The single causal statement is the weak leg which was both necessary and sufficient to cause my injuries.  The other conditions were also necessary and should figure somewhere in the risk equation but they will not form part of the causal statement.
  4. Outcome. Ask the question: “what happened when it broke?”  This refers to what happened when the chair broke, when the contract broke, when the management process broke etc.  This will be the effect statement.  In the example, when the chair broke the effect was that I fell to the ground, but the outcome was that I injured myself.  The distinction is small but important: as risk managers we need to be aware of how to reduce the probability that I sit on the chair as well as the outcome should it break.  Provided all the parts of the risk statement link up in a single, direct causal chain, however, the statement will still hold.
  5. Calibrate.  The whole statement needs to be a singular statement of risk in a direct causal chain.  This is to say that the cause > risk > effect > impact are all directly causally linked.
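The working-backwards steps above can be sketched as a small data structure that forces each risk into a single cause > risk > effect > impact chain.  The class and field names are illustrative assumptions, populated with the broken-chair example from the text.

```python
from dataclasses import dataclass, field

@dataclass
class RiskStatement:
    """A singular statement of atomic risk, built working backwards."""
    cause: str    # the necessary-and-sufficient weakness
    risk: str     # the adverse change in state
    effect: str   # what happened when it broke
    impact: str   # the financial loss
    # Conditions that were necessary but not sufficient sit outside the
    # causal statement, per the text, yet still figure in the risk equation.
    conditions: list = field(default_factory=list)

    def narrative(self) -> str:
        """Render the direct causal chain: cause > risk > effect > impact."""
        return " > ".join([self.cause, self.risk, self.effect, self.impact])

chair = RiskStatement(
    cause="weak leg of the chair",
    risk="adverse change of state from 'employed' to 'unemployed'",
    effect="fell to the ground",
    impact="loss of earnings whilst recuperating",
    conditions=["sat on the chair", "no soft landing", "fell awkwardly"],
)
print(chair.narrative())
```

Keeping the insufficient conditions in a separate field is the point of the calibration step: the narrative stays a single direct chain, while the supporting conditions remain recorded for mitigation planning.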


One of the greatest benefits of an ERMS is in its ability to aggregate risk data across large and complex portfolios.  Not only can the system manage numeric data visually – with BowTie diagrams – but it has the ability to analyse large volumes of data using project tree-structures.  In order to do this risks need to be singular statements of atomic risk.  In this way the user may demarcate risks and delineate between them.  This not only comes from tagging risk data with categories and classifications but also through the structured statements of deconstructed risk.

  • Singular.  Make sure that the risk statement only contains one risk.
  • Atomic.  Make sure that the risk statement only refers to one element of structural weakness.
  • Classify.  Classify risks.  Classifications are ‘objective’ groupings of types, e.g. a girl is a female human.  Project classifications typically include construction, IT or financial, and are best derived by asking “what sort of project is this?”
  • Categorise.  Categorise risks.  Categories are ‘subjective’ groupings of risk types, e.g. an “IT Girl” is a type of girl based on subjective factors and may change from time to time. Project categories may include: Projects in the Bowen Basin and are best derived by asking “what type of classification is this?”
  • Categories and Classifications are very important in large portfolios as they are an essential step in reducing duplication and redundancy.
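As a minimal sketch of how classification tags help demarcate risks and surface duplication across a portfolio (the records, tags and exact-match rule are all illustrative assumptions; a real ERMS would match far more loosely):

```python
# Demarcating risks by classification ('objective') and statement text so
# that duplicates surface across the portfolio. Records are illustrative.
risks = [
    {"id": 1, "statement": "weak contract clause causes claim",
     "classification": "construction", "categories": {"Bowen Basin"}},
    {"id": 2, "statement": "weak contract clause causes claim",
     "classification": "construction", "categories": {"Bowen Basin"}},
    {"id": 3, "statement": "schedule slip on commissioning",
     "classification": "IT", "categories": set()},
]

def find_duplicates(risks):
    """Group by (classification, statement); flag any group larger than one."""
    seen = {}
    for r in risks:
        key = (r["classification"], r["statement"])
        seen.setdefault(key, []).append(r["id"])
    return {k: ids for k, ids in seen.items() if len(ids) > 1}

print(find_duplicates(risks))
# {('construction', 'weak contract clause causes claim'): [1, 2]}
```

Without the tags, risks 1 and 2 would be double-counted and contingency over-allocated; with them, the duplication is mechanical to detect.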


Risk is about seeing around corners.  Any good risk system must be able to provide some sort of predictive analysis based on the data that is already known.  Stochastic simulation through Monte Carlo analysis provides a degree of objective, quantitative scoring.  Coupled with qualitative assessment, it helps users predict what might go wrong, when, where and what it might affect.  Importantly:

  • Inputs are the hard bits but the better the inputs the better the outputs.
  • Quantitative simulation and analysis provide irrefutable, objective results.
  • The provenance of risk data is critical.
  • Qualitative assessments increase stakeholder buy-in and executive confidence.
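A minimal sketch of the kind of stochastic simulation described above, assuming each risk can be reduced to a probability and a uniform impact range.  The portfolio figures are invented for illustration; a real Monte Carlo model would use richer distributions and correlations between risks.

```python
import random

def monte_carlo_exposure(risks, trials=10_000, seed=42):
    """Stochastic simulation of total portfolio impact.

    Each risk is (probability, low, high): in each trial the risk fires
    with its probability and draws an impact uniformly from [low, high].
    Returns the mean and an approximate P80 of total exposure.
    """
    rng = random.Random(seed)  # seeded for a repeatable, auditable run
    totals = []
    for _ in range(trials):
        total = 0.0
        for p, low, high in risks:
            if rng.random() < p:
                total += rng.uniform(low, high)
        totals.append(total)
    totals.sort()
    mean = sum(totals) / trials
    p80 = totals[int(0.8 * trials)]
    return mean, p80

# Illustrative portfolio: probabilities and impact ranges are assumptions.
portfolio = [(0.3, 50_000, 150_000), (0.1, 200_000, 500_000), (0.5, 10_000, 40_000)]
mean, p80 = monte_carlo_exposure(portfolio)
print(f"mean exposure: {mean:,.0f}; P80: {p80:,.0f}")
```

The P80 (or similar percentile) is what typically informs a contingency allocation; the spread between mean and P80 is itself useful reporting material.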


Subjective (qualitative) assessment of quantitative data drastically increases the quality of the analysis along with the level of overall stakeholder buy-in.  Subjective analysis may include scoring and prioritisation, but it may also include other factors such as ‘complexity’ or ‘proximity’.  All these factors help executives and project teams decide on and delegate the best management action.
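One possible shape for such a subjective scoring scheme, blending factors like ‘complexity’ and ‘proximity’ into a single priority number.  The factors, 1-5 scales and weights here are illustrative assumptions to be agreed with stakeholders, not a prescribed formula.

```python
def qualitative_score(risk, weights=None):
    """Blend subjective factor scores into one priority number.

    Factors are scored 1-5 by stakeholders; the default weights are
    assumptions for illustration only.
    """
    weights = weights or {"severity": 0.5, "complexity": 0.3, "proximity": 0.2}
    return sum(weights[f] * risk[f] for f in weights)

risk = {"severity": 4, "complexity": 3, "proximity": 5}
print(qualitative_score(risk))  # 0.5*4 + 0.3*3 + 0.2*5, i.e. ~3.9
```

Because the weights are explicit, stakeholders can argue about (and own) them, which is precisely where the buy-in comes from.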


The primary purpose of risk analysis is to assign management action.  This may be the allocation of contingency funds, insurances or the injection of liquidity from elsewhere.  Regardless, risk is not being managed unless it is reported in a useful and meaningful way across the organisation and upwards.  It is also worth noting:

  • People, generally, know how to manage their own risks.
  • Enterprises do not, generally, know how to manage complex webs of risk with multiple interdependencies.
  • Visual reporting creates better buy-in beyond risk/project personnel.
  • Resist the urge to misrepresent risk information in non-standard PowerPoint diagrams.  Infographics are excellent but PowerPoint engineering is an executive crime.


Alignment describes the process of calibrating risk data across projects and portfolios.  This is a vitally important activity which ensures that the analyses made in one area are carried through to their logical conclusions.  For instance, project risks need assigned management action in the project Risk Management Plan.  Likewise, an analysis showing that a £X contingency allocation is necessary needs to show up in the project funding.

Alignment is usually where the science of risk stops and the ‘art’ of risk management starts.  Equipped now with the relevant, detailed data, there will invariably be areas which simply do not make sense.  It is the job of the Risk Manager to muster all their soft skills to ask the difficult and uncomfortable questions.  In doing so they will inevitably align much of the data with the artefacts.


Deconstructing risks is often not self-explanatory.  Risk managers and project teams can get bogged down in what seems like a simple task.  Unable to move past what seem like trivial semantics, they clump risks into clunky statements.  The resulting duplication of risk and redundancy of mitigation strategies leads to the artificial inflation of sums at risk and the over-allocation of contingency funds.  The 8A Methodology provides a structured approach to deconstructing risk problems and building a comprehensive Risk Register which provides a dynamic means of analysing risks across broad portfolios.


  • ISOLATE data based on a structured methodical approach/framework.
  • EXTRACT data at source in a non-intrusive way.
  • DECONSTRUCT clumps of risk into singular statements of atomic risk.
  • ANALYSE risk with a range of sophisticated tools in order to develop unassailable objectivity in the results.
  • ASSESS quantitative data qualitatively.  Create better stakeholder buy-in through internal and statistical scoring.
  • REPORT risk information across and upwards.
  • CALIBRATE risk data across all business artefacts.


SCENARIO-BASED MODELLING: Storytelling our way to success

“The soft stuff is always the hard stuff.”


Whoever said ‘the soft stuff is the hard stuff’ was right.  In fact, Douglas R. Conant, coauthor of TouchPoints: Creating Powerful Leadership Connections in the Smallest of Moments, when talking about an excerpt from The 3rd Alternative: Solving Life’s Most Difficult Problems, by Stephen R. Covey, goes on to note:

“In my 35-year corporate journey and my 60-year life journey, I have consistently found that the thorniest problems I face each day are soft stuff — problems of intention, understanding, communication, and interpersonal effectiveness — not hard stuff such as return on investment and other quantitative challenges. Inevitably, I have found myself needing to step back from the problem, listen more carefully, and frame the conflict more thoughtfully, while still finding a way to advance the corporate agenda empathetically. Most of the time, interestingly, this has led to a more promising path forward and a better relationship, which in turn has made the next conflict easier to deal with.”

Douglas R. Conant.

Conant is talking about the most pressing problem in modern organisations – making sense of stuff.

Sense Making

Companies today are awash with data.  Big data.  Small data.  Sharp data.  Fuzzy data.  Indeed, there are myriad software companies offering niche and bespoke software to help manage and analyse data.  Data, however, is only one-dimensional.  To make sense of information is, essentially, to turn it into knowledge.  To do this we need to contextualise it within the frameworks of our own understanding.  This is a phenomenally important point in sense-making: the notion of understanding something within the parameters of our own mental frameworks, and it is something that most people can immediately recognise within their everyday work.


Take, for instance, the building of a bridge.  The mental framework by which an accountant understands risks in building the bridge is distinctly different from the way an engineer understands the risks, or indeed how a lawyer sees those very same risks.  Each was educated differently, and the mental models they use to conceptualise the same risks lead to different understandings.  Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.


For instance, take again the same risk of a structural weakness within the new bridge.  The accountant will understand it as a financial problem, the engineer will understand it as a design issue and the lawyer will see some form of liability and warranty issue.  Ontologically, the ‘thing’ is the same but its context is different.  However, in order to make decisions based on their understanding, each person builds a ‘mental model’ to re-contextualise this new knowledge (with some additional information).

There is a problem.

Just like when we all learned to add fractions when we were 8, we have to have a ‘common denominator’ when we add models together.  I call this calibration, i.e. the art and science of creating a common denominator among models in order to combine and make sense of them.


Why do we need to calibrate?  Because trying to analyse vast amounts of the same type of information only increases information overload.  It is a key tenet of Knowledge Management that increasing variation decreases overload.


We know this to be intuitively correct.  We know that staring at reams and reams of data on a spreadsheet will not lead to an epiphany.  The clouds will not part, the trumpets will not blare and no shepherd in the sky will point the right way.  Overload and confusion occur when one has too much of the same kind of information.  Making sense of something requires more variety.  In fact, overload only increases puzzlement due to the amount of uncertainty and imprecision in the data.  This, in turn, leads to greater deliberation, which leads to increased emotional arousal.  The ensuing ‘management hysteria’ is all too easily recognisable.  It leads to cost growth as senior management spend time and energy trying to make sense of a problem, and it leads to further strategic risk and lost opportunity as these same people neglect their own jobs whilst trying to make sense of it.


In order to make sense, therefore, we need to aggregate and analyse disparate, calibrated models.  In other words, we need to look at the information from a variety of different perspectives through a variety of lenses.  The notion IT companies would have us believe, that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle, is absurd.


Information still needs a lot of structural similarity if it’s to be calibrated and analysed by both technology and our own brains.

The diagram below gives an outline as to how this is done, but it is only part of the equation.  Once the data is analysed and valid inferences are made, we are still only partially on our way to better understanding.  We still need those inferences to be contextualised and explained back to us in order for the answers to crystallise.  For example, in our model of a bridge, we may make valid inferences of engineering problems based on a detailed analysis of the schedule and the Earned Value, but we still don’t know if that’s correct.


As an accountant or lawyer, therefore, in order to make sense of the technical risks we need the engineers to play back our inferences in our own language.  The easiest way to do this is through storytelling.  Storytelling is a new take on an old phenomenon.  It is the rediscovery of possibly the oldest practice of knowledge management – a practice which has come to the fore out of necessity and due to the abysmal failure of IT in this field.

Scenario-Based Model Development

Using our diagram above in our fictitious example, we can see how the Legal and Finance teams, armed with new analysis-based information, seek to understand how the programme may be recovered.  They themselves have nowhere near enough contextual information or technical understanding of either the makeup or execution of such a complex programme, but they do know it isn’t going according to plan.

So, with new analysis they engage the Project Managers in a series of detailed conversations whereby the technical experts tell their ‘stories’ of how they intend to right-side the ailing project.

Notice the key differentiator between a bedtime story and a business story – DETAIL!  Asking a broad, generalised question typically elicits a stormy response.  Being non-specific is either adversarial or leaves too much room to evade the question altogether.  Engaging in specific narratives around particular scenarios (backed up by their S-curves) forces the managers to contextualise the right information in the right way.

From an organisational perspective, specific scenario-based storytelling forces managers into a positive, inquisitive and non-adversarial narrative on how they are going to make things work, without having to painfully translate technical data.  Done right, scenario-based modelling is an ideal way to squeeze the most out of human capital without massive IT spends.






The True Cost of IT

The cost of hardware has been falling for years, and yet enterprise IT budgets continue to rise well beyond inflation.  The truth is that per-capita costs seem to be increasing faster than per-unit costs are decreasing.  Coupled with colossal cost overruns in implementations, most CIOs are under significant pressure for their technology budgets to deliver not only better systems but demonstrable business benefits as well.  Many companies are looking to cloud services to solve their technology cost problems, but few realise that the costs of ICT are hidden throughout the organisation – and so is the value.  Simply put, IT just should not be so expensive.  So, where does the ‘real’ cost of technology lie?

ICT managers traditionally have little accountability and there are few controls over their cost centre.  Yet, these days, ICT plays a pivotal role in supporting the execution of corporate strategy.  Despite this, the only time the business has the opportunity to analyse and correct failed projects is after they have delivered (or not, as the case may be).
In this paper we argue for a two-step method of ICT costing:  (i) a top-down approach to consolidating ICT costs, and (ii) the inclusion of ‘project costs’.  Together these increase the granularity of the ICT total cost of ownership (TCO).  In this way the CIO, not the CFO, becomes master of cost control.  Projects can be costed properly, right-sided accurately and deconflicted, and business capability – not just shiny new tech – can actually be delivered.


Whether in downturn or upturn, whether pursuing cost reduction or growth, the challenge for most CIOs is how to squeeze the most out of a limited budget.  To do this the business must gain an exact picture of its IT cost base.  Fundamentally, without knowledge of just how much IT costs the business, every project and every initiative is doomed, and IT will never deliver the capability to execute company strategy.

The cost of hardware may be decreasing, but the total cost of enterprise software is always rising, for two reasons:

  • The cost of labour to run enterprise software is higher because it is, seemingly, more specialised.
  • The cost of delivery is higher because deployment and integration takes longer and involves more people.

Few of these costs, if any at all, are known at the outset.  More problematically, there is no in-house ability to gain insight into total expected costs.  The statistical ability and IT awareness needed to build the necessary models simply do not exist, even in the most sophisticated organisations.  What is clear is that it is impossible to drive the development and approval of all ICT projects through the same capital budgeting system as any other large corporate purchase.  Quite simply, the mathematics needed to assess the purchase of new software is radically different from that for new plant or for acquiring another company.  The business may wish the approach to be the same, but to do so is incorrect.

For example, most projects are assessed on an NPV analysis.  Yet to say that a software program (typically a Management Information System) can actually increase cash flow to the business is complete nonsense – and all projects are nonetheless required to demonstrate measurable financial return (how much and when, in discounted cash flows) right from the outset.  So, how is IT to achieve this?  With so many broken promises, jaded executives could be excused for disbelieving the wild exaggerations of the technology prophets.

The challenge is to create a system of ICT cost analysis which allows both better capital allocation and a better understanding of the value added by IT systems.  If the business does not, projects are inevitably delivered late and under-spec.  The IT function then spends years cannibalising its budget in order to deliver the original project.  The result is huge write-downs in software in order to get the organisation back up to its benchmark.
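To make the NPV point concrete, here is a minimal discounted cash flow sketch.  The figures are invented for illustration: a plant investment with direct inflows clears a 10% hurdle, while a back-office MIS with no direct cash inflows can never pass the same test, whatever value it actually adds.

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows; cashflows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A plant investment with direct revenue clears the 10% hurdle...
plant = [-1_000_000, 400_000, 400_000, 400_000, 400_000]

# ...but an MIS has no direct cash inflows to discount, so a naive NPV
# assessment rejects it regardless of the managerial value it adds.
mis = [-1_000_000, 0, 0, 0, 0]

print(npv(0.10, plant) > 0, npv(0.10, mis) > 0)  # True False
```

The point is not that NPV is wrong, but that applying it to systems whose benefit is better decision making guarantees a negative answer before the analysis begins.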


True Cost of IT – Scatter Plot

The link between IT spend and commercial profitability has never been proven.  Increased technology budgets are largely a product of larger cash reserves.

Despite the recent economic downturns across the globe little has changed in ICT budgets.  Finance may be scarcer and the hurdle heights for budgetary approval may be higher but CIOs are still largely capitalised on a yearly spend level with little additional oversight.  Although IT departments have to fight harder for their dollars, once they have them there is still scant understanding or linkage between the money they are given and the value they deliver.

IT departments constantly bemoan the fact that evaluating the benefits of technology is difficult.  Some common misconceptions are that:

  • Most of the benefits of technology are intangible,
  • the benefits take a long time to materialise,
  • the real benefits of IT are strategic and in competitive advantage,
  • the benefits are indirect,
  • the benefits often come from dependencies, and that
  • there are insufficient mechanisms for capturing the value of these information systems.

All of this is nonsense.  The fact is that all of our traditional analysis is either too general or too narrow.  The cost of technology and the value it creates are already accounted for; they are just hard to find.


The core of the issue is: where are the IT costs?  In order to show IT executing corporate strategy, its costs have to be clearly aligned to the corporate chart of accounts.  Unfortunately, IFRS/GAAP accounting procedures do not take into consideration the indirect, non-capital ICT spend.  Even most cost tracking systems do not isolate IT-related training budgets, do not extricate executive management oversight of IT projects and do not account for meetings and interviews relating to ICT choice and procurement.  In fact, 10-15% of total ICT project costs are usually spent before delivery even starts.
Most IT project budgets are hugely understated.  Management time is unaccounted for, and even operating costs are hidden in other budgets in order to reduce the visibility of overspend.  Even then, the IT spend often does not incorporate systems-related training, nor the operational time of non-IT personnel assisting implementation and deployment.

APPROACH – The Top-Down Model

True Cost of IT

When creating a cost model for the total cost of IT ownership, most management accountants go about accumulating the line items and activities of the various financial centres relating to IT.  This goes nowhere, fast.  The secret is to create a top-down model such as the cost accounting model shown to the right.

Hard Costs v Soft Costs v Project Costs

By subtracting all non-ICT related costs from total revenues, the business naturally arrives at the total cost of its ‘Information’ spend.  What becomes clear is the staggering amount of time and energy which management spends on the development, purchase and integration of IT, for managers are the primary beneficiaries of information.  The role of management, after all, is largely to use information to allocate capital and resources better within the business, predominantly through the use of IT to make decisions.
The cost of management time relating to IT project work (i.e. non-transactional work) is called ‘Soft Costs’.  It is essential to arrive at soft costs from the top down because management rarely accounts for its time.  A bottom-up cost model, therefore, will always be woefully inadequate.

The Defence community has also quantified the relationship between certain qualitative indicators and project costs.  These are called ‘Project Costs’ and they take into account, for instance, the effect that a cohesive stakeholder community has on a project.  There are 14 factors in total, covering, amongst other things, project complexity and workforce maturity.

Traditionally, the business’ management accountants attempt to shoehorn a fraction of these costs – inaccurately – into a ‘bottom-up’ cost model of hard costs which has limited granularity.  Unable to account for the soft services of management and the effect of qualitative indicators, projects inevitably present the most optimistic, rose-tinted view of delivery.  Given that most ICT projects spend 10% of their budget before they start and misallocate another 20% of their time, even a project that modestly reports a 10% overspend has really overrun by 40%.  An ICT project which fails to deliver within cost parameters has two possible outcomes:  (i) people get sacked and budgets are written down, or (ii) the IT department cannibalises the operational budgets of all other projects trying to deliver the failed initiative.
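The arithmetic behind that 40% figure can be made explicit.  This simple additive model just restates the article’s rule of thumb; the default fractions are the figures quoted above, not measured constants.

```python
def real_overrun(reported_overspend, pre_delivery_spend=0.10, misallocated=0.20):
    """Approximate real overrun as the sum of hidden and reported fractions.

    Defaults follow the text's rule of thumb: 10% of budget spent before
    delivery starts, plus 20% of effort misallocated, on top of whatever
    overspend the project actually reports.
    """
    return reported_overspend + pre_delivery_spend + misallocated

# A 'modest' reported 10% overspend is really a 40% overrun.
print(f"{real_overrun(0.10):.0%}")
```

The model is crude by design: its value is in making the hidden fractions visible line items rather than leaving them buried in other budgets.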

In the post-GFC world CIOs cannot afford to keep over-promising and under-delivering through a failure to understand the true cost of their ICT.  Until they do they will still just be the heads of the IT department.


ICT projects are more complex than normal capital projects.  They often contain a large number of unknown variables and are often implemented to achieve vague business outcomes.  Normal capital projects, such as building a new factory or developing a new product, have clear, well-established objectives, outcomes and processes.  What is more, their success directly relates to increased revenue.  ICT projects, on the other hand, do not generally increase revenue directly.  They usually support back-office functions.  Budgeting for ICT projects therefore requires a different mindset and a different costing process.

The Defence community has long realised the expense of soft costs involved in projects.  Of the six phases of a project’s lifecycle, the first three are devoted entirely to systems engineering and managerial decision making.

Defence often calculates the effect that qualitative factors have on projects – these are what may be termed ‘Project Costs’.  These factors take into consideration the complexity of the project, the design factors, and the maturity and cohesion of the team, amongst others.
A CIO, therefore, who only costs their department on the basis of hard costs is only scratching the surface of real costs.  Without taking into account ‘Soft Costs’ and ‘Project Costs’, ICT projects are doomed to overrun on cost and schedule and to under-deliver on capability.

Benefits of Better IT Costings

Naturally, the new financial model of the ICT spend shows an increase due to the addition of previously hidden costs.  Although the new cost of management may be shocking it is also revealing given at how many of the systems are neither directed at better managerial decision making nor is management value-added included in any business case.   In fact, most business cases myopically rely on the traditional approach of reduced labour costs and increased business efficiency – both of which usually turn out to be fanciful.
This, in turn, also enables projects to show (i) the value of ICT to managerial decision making, and (ii) a clearer and more precise method of ICT chargebacks.
Ultimately, finer granularity in the cost accounting of the ICT spend goes beyond an increase in the alignment between the IT department and the Finance function.  Greater transparency and scrutiny will inevitably lead not only to better implementations but also better projects which reap hard dollar benefits to the business in cost savings, precise growth and increased management value-added.

When this is done the business can adjust and fine-tune the IT costs in the same way as any other operational spend.  More importantly, the business will be able to choose the best IT investments that are able to execute corporate strategy rather than relying on industry fads and vendor promises of wild growth and productivity increases.  In the end, only when the CIO gets to grips with the true costs of their technology spend can they take a seat at the top table.




How to Construct a Risk Statement

Risk is not a self-licking lollipop.  Nor is risk a beautiful and delicate flower to be tended and admired on its own.  Risk is part of a system (Assurance), and that system has a point: to reduce cost and time growth, both now and in the future.


It does this in two ways:  (i) operational pull-through, and (ii) commercial pull-through.  Where operational pull-through reduces cost and time overrun now, commercial pull-through reduces cost and time overrun in the future by breaking the causal chain of a potential claim and by addressing the specific areas of causation that give rise to future liability.  Traditional Risk Management addresses operational pull-through.  The goal is to assess what might go wrong on a project in the immediate future; the structure of the deal and any inherent soft-spots within it are rarely given thought.  Commercial pull-through is just as, if not more, important.  It will support or defend claims (either extension-of-time or cost) and help resolve further issues of liability.

“The financial risk is hardly ever worth taking. I have just seen a £5m job ending up with a £12m claim.  A lot of clients are still not prepared to spend the money up front to get the contract, the budget and the contingency right. I would say you can end up spending around 10 times as much resolving a dispute compared to what you would have needed to put in at the start to avoid it becoming an issue.”

Gary Kitt, head of contract solutions in the UK for EC Harris.

The linchpin for both is causation and the key to proving causation is through causal chains.

Causal Chains

Figure 1 below outlines a causal chain for a fictitious business problem.  In this case, low-quality steel is likely to affect the build of a steel frame on a construction site.  The Contractor, warranting the workmanship of the building, will be liable for defects in the future.  Additionally, they have operational liability now and will likely incur some cost growth through re-work.  ‘What is the cause?’ is an ontological question.  ‘How can the business reduce their liability and current costs?’ is a business question.  In the example below the Supplier’s liability is counterweighted by an equally damaging causal chain derived from the Contractor’s poor welding.  Without the ability to define and delineate these causal chains – and therefore limit liability – the Contractor will have, prima facie, extensive liability from poor workmanship.

Exploring the context of our example a little further, we begin to understand that although the lower capability of the workforce was compounded by a low supervision ratio, the welds themselves were of a sufficient standard to warrant their quality.  A poor understanding and examination of the causal chain would not have supported such analysis; it leaves the lawyers and commercial team clutching at straws rather than aiming to rebut or settle the claim.


Figure 1 – A causal chain from a Construction Company example.
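The two competing chains in this example can be sketched as a simple lookup; the node names below are hypothetical illustrations, not the exact labels of Figure 1.

```python
# A minimal, hypothetical sketch of the two competing causal chains in
# the steel-frame example; node names are illustrative only.
causal_links = {
    "low-quality steel": "frame defects",
    "low supervision ratio": "poor welding",
    "poor welding": "frame defects",
    "frame defects": "re-work and future defect liability",
}

def trace_chain(start):
    """Walk downstream from a structural weakness to its ultimate impact."""
    chain = [start]
    while chain[-1] in causal_links:
        chain.append(causal_links[chain[-1]])
    return chain

supplier_chain = trace_chain("low-quality steel")
contractor_chain = trace_chain("low supervision ratio")
# Two distinct chains converge on the same defect; being able to delineate
# them is what counterweights, and therefore limits, each party's liability.
```

Keeping the Supplier's and the Contractor's chains separate is what allows each party's liability to be argued on its own terms.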

Causation:  Assumptions v Weaknesses

The primary problem with causal chains is the poor understanding of causation itself.  In the example at Figure 1, many Risk Managers would misassociate the possibility of the framework buckling with the assumptions and technical trends further upstream.  None of them, however, on their own are either sufficient or necessary to cause the buckling of the frame.  Causation can have multiple factors, but only so many that are sufficient and necessary to give rise to the risk.  In this case, some of our causes can be demarcated into a different causal chain altogether.  Placed together, they would confuse the commercial negotiating leverage that such risk analysis is able to provide.

Counter-Factual Argument

Be careful of counter-factual arguments, i.e. the “but for” or sine qua non analysis of causation.  Causation theorists such as David Lewis built entire accounts of causation on counterfactuals, yet the test is easily abused.  For example, does a cat jumping out of a window after I had unlatched it prove that I caused its escape?  No, but under a sine qua non test my guilt would be implied.

Risks:  Events v Transitions

One should never think of a business risk as an event.  This may be heresy to some, but unless you can easily distinguish between causation events, risk events and impact events, you should seek to differentiate the language used to define them.  Trying to conceptually isolate individual events is like trying to spot the offending domino: they all played a part.  However, by having a language and methodology to differentiate causal elements and causal chains, one has the ability to reduce duplication and redundancy in quantitative risk simulations.  For instance, when deconstructing problems we talk about ‘structural weaknesses’ and not causal events.  Such language helps us delineate between causal events rather than redefine a taxonomy of risk.

Risk statements are often the hardest to differentiate.  One can understand causal events (although pinning them in time can be difficult without the right language) but structuring a risk statement itself can be conceptually difficult.  A risk, after all, is artificial.  Instinctively, we know cause-and-effect but risk is far less intuitive.

Conceptualising risk as yet another event in the causal chain invites too much confusion.  What is needed is a way to encapsulate, in a risk statement, the impact to the business of a causal chain.  One way to do this is through transition statements.  Viewing risk as a potentially harmful state transition is a useful way to construct effective risk statements.  For instance, the risk of falling off my chair at work is not in hurting my back or even breaking the chair.  It is that I will transition from earning to not earning.  Wording risk as such allows us to articulate the causal chain without contaminating it with an artificial risk statement, i.e. the risk statement is the conceptualisation of a discrete part of the causal chain.
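The three-part structure implied above (causation, transition, impact) can be sketched as a minimal data structure; the field names are an assumption for illustration, not a prescribed schema.

```python
from dataclasses import dataclass

# A sketch of a risk statement as a harmful state transition rather than
# an event; field names are illustrative, not a standard.
@dataclass
class RiskStatement:
    weakness: str    # CAUSATION: the structural weakness upstream
    transition: str  # RISK: the harmful state change, "from X to Y"
    impact: str      # IMPACT: the direct cost/time consequence

# The chair example from the text, expressed as a transition.
chair_risk = RiskStatement(
    weakness="unstable office chair",
    transition="from earning to not earning",
    impact="lost billable days",
)
```

The transition field carries the risk itself; the causal chain stays uncontaminated in the weakness and impact fields.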

Impacts:  Impacts v Effects

It is important to distinguish between types of impacts in causal chains; after all, not all impacts are relevant or useful, and irrelevant ones can be included by naysayers and contrarians to obfuscate the effect of a risk and thereby avoid being delegated work.  The impact that is relevant in risk is the direct causal impact.  Impacts felt further downstream are important and may be relevant, but not to an accountant or lawyer.  Downstream impacts may be distinguished as ‘effects’ and are therefore only of indirect causal significance.

In sum, there is no commercial pull-through without precise causal chains.  Claims can be developed and leverage gained without them, but they will always be imprecise and far more costly to generate.  If Risk is ever to be a sophisticated, value-adding practice in the business then it must show commercial pull-through, and to do that it must start with supporting the development and analysis of causal chains.

The Fallacy of Process

“Every block of stone has a statue inside it and it is the task of the sculptor to discover it.”


Michelangelo felt that defining a beautiful statue was a constant labour of chipping away at the stone that wasn’t ‘David’, for instance.  It wasn’t a process that started at the feet and ended at the head.  There was structure to his activity but none which could be codified.

Corporate Risk Management is far less artistic but the analogy is nonetheless useful.  Risk Management often defies process because it is not an architectural pillar of the business.  There is no design in Risk Management.  By its very nature it revolves around constant engagement with the business to define and refine the potential impact of business problems.  Risk Management, therefore, is highly iterative: just as Michelangelo chipped away at the stone, so too does the effective Risk Manager chip away at corporate problems in order to define their true reality.

This lack of process is upsetting to some and indeed unsettling to many Risk Managers.  Corporate Services, unnerved by their lack of operational necessity, always feel they have to sit within a process in order to find meaning in their labours, as opposed to defining the value they create in the overall outputs.  Risk Management – as part of the Assurance function – needs to justify its intrusion into the business by:

  • the costs and time it helps to save,
  • the risks it helps to mitigate, and
  • the workload it reduces within teams.

and ultimately by doing so in the least obtrusive manner.

Like the sculptor, however, there is process underneath it but this is often highly specific to the industry and level of the role.  What is key is that there is a common maturity level in Risk Management, namely:

  1. Structure.  Risks must achieve structure before anything else.  They need to be made up of singular statements of atomic risk, i.e. the risks are granular.  The statements themselves need to be clear and precise, broken into causation, risk and impact so that they articulate a complete and direct causal chain.  The key point to remember is that the structure and the granularity of the statement will refine over time as more information is borne out.  Risk is fractal: the closer one looks, the more detail becomes apparent.  Structural weaknesses giving rise to problems can be traced deeply, internally and externally.  Knowing when to stop is often half the art.
  2. Completeness.  The risk statement must also be complete.  Like structure, completeness will be a battle.  Mitigations will change and develop over the life of the project as will impacts as the resilience of the surrounding architecture develops.
  3. Quantification.  Risks should be quantified but not all of them can.  It is hard to quantify reputation and relationship risks.  However, at the operational, service and technical levels quantification in terms of time and cost impacts will be vital.
  4. Traceability.  Risks need to be traced.  It is self-evident that a risk must have provenance if it is even to be considered.  If a risk can’t be traced, that does not necessarily mean it won’t eventually be a risk, but rather that, currently, all it can be is the feeling of a problem; however legitimate a feeling nonetheless.
  5. Utility.  Lastly, but most importantly, a risk must be useful, either to the business or to the team.  Either way, if the risk does not add value then it serves no purpose.
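As a hedged sketch, the five maturity points can be checked mechanically against a risk record; the record format below is invented for illustration, not a standard.

```python
# A sketch: checking a risk record against the five maturity points.
# The dictionary keys are an assumed, illustrative format.
def maturity_gaps(risk):
    checks = {
        "structure": bool(risk.get("causation")) and bool(risk.get("impact")),
        "completeness": bool(risk.get("mitigations")),
        "quantification": (risk.get("cost_impact") is not None
                           or risk.get("time_impact") is not None
                           or risk.get("level") == "strategic"),  # e.g. reputation
        "traceability": bool(risk.get("provenance")),
        "utility": bool(risk.get("value_to")),
    }
    return [name for name, ok in checks.items() if not ok]

risk = {
    "causation": "low supervision ratio on night shifts",
    "impact": "re-work of welded joints",
    "mitigations": ["raise supervision ratio"],
    "cost_impact": 250_000,
    "provenance": "site inspection report",
    "value_to": "project team",
}
# maturity_gaps(risk) returns an empty list once all five criteria are met
```

A register loaded with records that fail these checks is a register of feelings, not risks.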


These five points go to the heart of effective Risk Management:  it is not process but methodology which drives its effectiveness.  Section 3(J) of ISO 31000 clearly sets out that Risk Management is iterative.  A pure process, on the other hand, would imply a sequential jump from one completed phase of Risk Management to the next.  According to Deloitte (Figure 1 below), the third biggest challenge to businesses is in achieving clear and effective risk data, a concern which underscores the need for iterative cycles of assurance right throughout complex projects.  Chipping away at risks using the aforementioned approach, therefore, is a useful and highly effective way to achieve assured projects without the need for over-prescriptive and intrusive processes.


Figure 1 – Deloitte Global Risk Management Survey, 2015.

“The financial crisis has underscored how insufficient attention to fundamental corporate governance concepts can have devastating effects on an institution and its continued viability. It is clear that many banks did not fully implement these fundamental concepts. The obvious lesson is that banks need to improve their corporate governance practices and supervisors must ensure that sound corporate governance principles are thoroughly and consistently implemented.”

Danièle Nouy,  President of the Supervisory Council at the European Central Bank.

Trying to add too much structure is doomed to failure and being over-prescriptive is counter-productive.  Risk does not create value in itself, and so the Risk Manager needs consent and co-operation to perform deep, effective risk management; it is all about gaining the trust and confidence to explore the deepest and darkest spaces of the business.  Indeed, the entire Assurance function needs to justify its interference and intervention to make sure that it is not holding up delivery or interfering with operations.  This can be extremely uncomfortable and even confrontational.

In large and complex programmes the burden of governance is onerous.  Lots of things can go wrong, statistically they do, and ultimately people are working with someone else’s money.  For that reason project teams need to structure for transparency and be prepared to report the detail and context of their variances.  Under such circumstances the process of sculpting risk becomes much easier, the image of David more obvious.

The organisation should be motivated to maximise their gains, tempered only by a mature, well-developed, empowered and independent assurance function which has the ability to monitor, measure and rein in projects in a precise and appropriate way.  This relies on the systems to achieve transparency and the expertise to achieve visibility.

It is impossible to say how effective this approach is, as the more successful Risk Management is, the more likely it is to be seen as unnecessary.  The approach, however, is certainly more sustainable.  The pendulum swings between over-regulation and laissez-faire operational focus, and rigid structures are always too brittle to survive.  The only way to achieve a sensible and sustainable balance is not to clip the wings of the business but to give them strict parameters within which to spread them and fly.  As the saying goes:

“You don’t know who’s swimming naked until the tide goes out.”

Warren Buffett.

Risk Management is a Team Sport

Good financial risk management requires a high-level team effort and cross-functional collaboration.  Risks, by their very nature, are highlighted precisely because the project team is unable to do anything about them.  If they could be effectively mitigated or avoided by a single function (e.g. Legal, Finance, Operations etc.) then they would, or should, not have been placed on a Risk Register.

If the project were able to mitigate the risk themselves, alone, then it wouldn’t be a risk.

Dealing with these risks, therefore, requires not only close collaboration between multiple functions but also delegation to, and intervention from, executives on pay-bands higher than the project’s, i.e. effective management of financial risk is expensive.  The corollary is that project teams should ensure that whatever risks they have identified are genuinely important and cannot be dealt with by the project team alone.


Traditionally, the risks that populate project risk registers will be well-known problem statements.  To be unkind, these will be statements of the blindingly obvious: a menagerie of opinion, Google hits, speculation and wild guesses.  In the absence of an external assurance function, risk managers work for Bid Managers or project teams.  They are not incentivised to look too hard or too deeply at risk.  The last thing that either of these roles wants is prying executive eyes or torches shone into dark and dusty corners of the business.

Most risk registers are populated with a menagerie of opinion, Google hits, speculation and wild guesses.

Future blogs will go into this area in greater depth.  However, in the absence of an external assurance function, curious risk managers can assuage their intellectual integrity, as well as support their boss, by deriving their risks statistically.  By going back through 6-10 similar projects they can analyse, categorise and classify risks by a variety of quantitative and qualitative measures.  In this way, the Risk Manager will bring cross-functional experience to the project team and, hopefully, become a catalyst for inexpensive, collaborative risk management.

Rory Cleland

riskHive Ltd

What is a Risk?

“When you combine leverage with ignorance you get some pretty interesting results.”

Warren Buffett.

To extend Warren Buffett’s sentiment, the greatest threat to effective risk management is poor risk identification.  Put simply, if we can’t discover and quantify risks then our exposure is neither visible nor manageable.  This may be acceptable to small, contained project/investment teams who have a greater inherent understanding of the risk context, but it should certainly send shivers up the spine of any senior executive.

Business v Metaphysical Risk

Firstly, let us be clear that we are trying to identify business risks, not metaphysical risks.  A business risk has a clear and direct impact in cost or time.  A metaphysical risk might still, theoretically, be a risk but, without any effect directly quantifiable in cost or time, it is not a threat to the business.

Business Artefacts

It stands to reason, therefore, that business risks can be uncovered from the Business Artefacts.  The 5 core business artefacts are the:  Contract, Cost Model, Schedule/Plan, Architecture and, of course, Risk Register.

Business Elements

Business artefacts are made up of business elements, which should encompass all the discoverable things that could potentially go into a Risk Register.  A business element is not necessarily a business driver, because business elements should be used to extend visibility in a project.

For instance, a construction company may be building a steel frame.  The steel is the core component: its price is a critical driver of cost and its arrival is a critical driver of the schedule.  However, what happens when, upon inspection of the welds and joins, the company discovers that the steel is of poor quality?  Insurances and liability clauses indemnify the company for loss, but they will never get all their money back, nor will they recover their schedule.  In this real-life example the business would never have had any visibility of their supply-chain risk.  The quality of the steel would not normally appear in the cost model because there is no cost of quality, i.e. it isn’t a cost driver, it’s a compliance factor.  It should, however, go into the cost model as a business element in order to give visibility to the project team and allow them to draw out risks.

It is, therefore, vital that teams have as much visibility as possible into business drivers if they are to develop effective mitigation strategies.  A business without full visibility into the elements of cost and quality will naturally use insurance to offset the risk of having to indemnify clients for faulty workmanship.  A business with visibility into the elements behind the cost of steel, or its delivery, or its place in the business architecture, will be able to put simple quality mechanisms in place to reduce their insurance costs.  Will this come from the cost model?  Possibly, but if not then such technical risk may be modelled or measured from the architecture (a topic we will look into later).

3 Criteria of a Risk

As not all business elements are business drivers, so too not all business drivers are risks.  For a business element to be a risk it needs to meet the following criteria, i.e. it needs to be:

  1. HOT.  Once the deal has started, the element needs to be running at a variance (negative or indeed positive).
  2. SENSITIVE.  The element needs to cause volatility within the cost model or the schedule.  It needs to cause exposure in the contract or significant problems within the architecture.
  3. WEAK.  The element needs to exhibit some structural weakness.  The structure may be a commercial structure, a financial structure, a technical structure or an organisational structure.  The key point is that it must not be something that the business or agency traditionally recovers well from.  For example, a business element might be running hot and it might be sensitive but if there is no inherent structural weakness in it then it’s highly likely that it is just following a standard pattern of variance trajectory.

Risk Identification

A business element may evidence one or more of these criteria, but unless it fulfils all three it just isn’t a risk.  A problem, or a potential problem maybe, but not yet a business risk worthy of inclusion on a business’ Risk Register.  After all, a business can only quantify tangible problems, not the metaphysical.
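The three-way test can be sketched as follows; the thresholds and field names are assumptions for illustration only.

```python
# A sketch of the hot/sensitive/weak test; thresholds and dictionary
# keys are invented for illustration, not prescribed values.
def is_risk(element):
    hot = abs(element.get("variance", 0.0)) > 0.05       # running at a variance
    sensitive = element.get("volatility", 0.0) > 0.10    # drives cost/schedule swings
    weak = bool(element.get("structural_weakness"))      # a named weakness exists
    return hot and sensitive and weak                    # all three, or it isn't a risk

steel = {"variance": 0.12, "volatility": 0.30,
         "structural_weakness": "no incoming quality inspection"}
# is_risk(steel) holds; drop the weakness and the element is merely hot
# and sensitive, i.e. following a standard pattern of variance.
```

Note the conjunction: two out of three marks a problem worth watching, not a register entry.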

The 10 Commandments of Risk

Old Newtonian physics claimed that things have an objective reality separate from our perception of them. Quantum physics, and particularly Heisenberg’s Uncertainty Principle, reveal that, as our perception of an object changes, the object itself literally changes.

Marianne Williamson.

All risk is a wet finger in the air.  A scientifically wet finger, but a wet finger nonetheless.  In large and complex programmes, which have a fluid dynamic all of their own, we are unable to define and analyse detailed statistical models in order to give certainty to probabilistic analysis.

Why Bother with Risk?

Why bother with risk in an ever-changing programme, if our analysis will always be insufficient and caveated by a long list of exceptions?  Why not just trust our technical experts and their inherently detailed understanding of the technical risks and their commercial contexts?  There is a strong argument for letting technical expertise run the projects in the absence of structural control.  There are, however, far more powerful counter-arguments.  The primary concern is, of course, that large and complex programmes – statistically – never achieve their cost and time parameters, usually causing embarrassment to the contractor and concern to the investors.

So, what should the ground-rules of Risk be to ensure that our analyses have a modicum of consistency in order to achieve some consensus in their predictability?

The 10 Commandments of Risk:

  1. Not all Genuine Concerns are Risks.  In all honesty, most ‘Risk’ statements in Risk Registers are more generic problem statements:  large aggregations of things that might have an adverse impact on the business.  They are issues, assumptions and trends.  To be a ‘Risk’ it has to be an atomic element of the business that can directly and quantifiably cause an impact in cost and/or time.  If it can’t then it needs to be decomposed and deconstructed until it can, which brings us to our second commandment.
  2. Risks Require Contingency (Assumptions Require Analysis).  Risks are used to develop contingency.  A business should have as few risks as they can in a Risk Register.  The more risk, the more contingency (cost or time).  CYA risks have no place in a Risk Register and it is the job of the CRO to ensure that all such Risks are weeded out.  What is left can be sifted for legitimate concerns.  In turn, these can then be decomposed and deconstructed in order to drive out the golden nuggets of risk which lie hidden inside such large, clumpy statements.
  3. Most Risks Need to Be Traced to Cost or Schedule.  In order to determine contingency the Risk needs to be simulated within a cost model or schedule.  In order to do this the Risk needs to be attached to a specific cost item or line in the schedule.  There are some risks at the strategic level (corporate reputational risk) which are not attached to cost or time and we will delve into these when we show how risk frameworks are used to classify and categorise risks.
  4. Risks Need To Be Classified and Categorised.  Categorising and classifying risks is an extremely useful way to ensure that duplication and redundancy is avoided.  It is statistically likely that a relatively even spread of risks will occur in a large and complex project.  In fact, in large technical delivery projects the majority of risks are usually technical because that is all that the business knows.  Unnatural skewing should reveal where exposure is more likely.  For instance, significant Risks around ‘resourcing’ often highlight capability gaps in processes or a lack of role clarity.
  5. Risks Require Resources.  Real risks need accurate and adequately resourced mitigation plans.  If the Risk Register is loaded with fanciful and fictitious risks then it would be astronomically expensive to actually mitigate them.
  6. Risks Must Be Hot, Sensitive & Weak.  See our first blog.
  7. Risk Statements Must be Singular and Atomic.  Risk statements must be singular and atomic if the ensuing Monte Carlo simulation is to have any meaning at all.  Simulations founded on duplicated or overlapping risks will have no credibility at all.
  8. Risk Statements Must Articulate a Direct Causal Chain.  A Risk statement must be made up of three direct elements of a causal chain:  The transition statement (RISK), the structural weakness (CAUSATION) and the impact (direct causal impact on time/cost).  Too often, Risk statements are taken from different parts of a causal chain and may only display loose and tenuous relationships between each other.
  9. Risk Causes Must Highlight a Structural Weakness.  Causation must clearly and precisely point to some form of structural weakness.  In a business, this may be legal, organisational, financial or technical.  However, if accurate and delegable mitigations are to be created then clear structural weaknesses must be identified.
  10. Causes Must Be Necessary and Sufficient.  The test for causation is necessity and sufficiency.  For instance, a fire needs both the presence of combustible material as well as the absence of fire retardant, to be a business risk.  It is not sufficient to say that there is an electrical fault somewhere.
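Commandment 7's insistence on singular, atomic risks can be illustrated with a minimal Monte Carlo contingency sketch; the probabilities and impact ranges below are invented for illustration.

```python
import random

# Contingency from a Monte Carlo over singular, atomic risks.
# Probabilities and (low, likely, high) impact ranges are invented.
risks = [
    {"p": 0.30, "impact": (50_000, 120_000, 400_000)},
    {"p": 0.10, "impact": (200_000, 500_000, 900_000)},
]

def p80_contingency(risks, runs=20_000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        total = 0.0
        for r in risks:
            if rng.random() < r["p"]:                     # does the risk fire?
                low, mode, high = r["impact"]
                total += rng.triangular(low, high, mode)  # sampled impact
        totals.append(total)
    totals.sort()
    return totals[int(runs * 0.8)]                        # 80th percentile

contingency = p80_contingency(risks)
# Duplicated or overlapping risks would double-count impacts here, which
# is exactly why risk statements must be singular and atomic.
```

An overlapping pair of risks would fire independently and inflate the P80, destroying the credibility of the simulation.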

In future blogs we will delve deeper into some of these elements but together they form the core rules of risk identification and management.  Indeed, by following these simple rules, along with the underlying context, Risk will become simpler and more meaningful.

The 4 Phases of Successful Bidding

ICT deals are unlike other technical deals. Whilst engineering deals may be ‘complicated’, with their intricate technical detail, ICT deals are often exponentially more ‘complex’ because they blend loose business concepts with technical specifications. Technical success is pegged to business benefits or unknown end states at the end of lengthy business transformations based on multiple increments of capability. For this reason the standard Colour Phases of bid development do not adequately support large, complex ICT deals. Robust process is a must in order to ensure adequate cross-functional collaboration across large, dispersed organisations. However, tech deals also need to evolve through iterative stages of architectural development. The diagram below sets out a colour-phased development lifecycle for large and complex tech deals. In this way, they get the process to manage the organisational governance as well as the architectural focus necessary to develop a winning value proposition.


The issue is never designing the solution; it is unpicking it. Tech deals are not only complex, they are aggregations of complexity: security systems, infrastructure, applications, business rules, benefits, integration – it all develops in isolation and it all has its own logic, its own costings and its own commercial parameters. It does, however, all have to work together. So, when it comes to the end of the bid and the team have to reduce costs in order to hit their estimated price-to-win, it is somewhat akin to playing a giant game of “Kerplunk“. Straw by straw the architecture is unpicked and all the stakeholders around the metaphorical table just hold their breath and hope that not all the marbles fall down. If they don’t, it is pure luck.

Technical ICT architectures are not like normal technical engineering projects or other capital works. The customer is not building an independent means of production. ICT (less robotic systems, i.e. embedded technology) is an integral and entwined part of business capability. It is inextricably linked to the business. If the dots are not all joined at the outset of the bid then when it comes to reducing costs at the end, the executives will strip down random parts of the technical solution which is usually the foundation of the value proposition. What will be left is a patchy solution that has obvious structural weaknesses (to the intelligent customer).


The problem is how to design elegant and sophisticated architectural solutions within cost, commercial and time parameters. The individual system components will be designed in splendid isolation with a variety of assumptions and guesses. The entire solution, however, needs to be designed, costed, stress-tested and reviewed within tight timeframes and against aggressive price points. Leading a large and disparate (often virtual) team to achieve these goals will take a robust process and a strong personality exerting herculean effort. The crux of the matter is how to manage varied personalities through a convoluted development process without stifling the elegance of the design yet still manage to corral such strong temperaments through tight review gates.


The solution is a blend of both the traditional colour-phased bid structure and the complexities of the architectural development process. Together it might be referred to as Colour-Phased Architecture. In this way the bid team are cognisant and appreciative of the subtleties and complexities of the architectural design process and the executive get a strong process which even the strictest governance may rely on.


Adapting Kruchten’s 4+1 Views, the architecture is developed through 5 critical structures:

  • Use Cases. The Customer narratives and scenarios are developed to produce and update the various iterations of the architectural solution.
  • Architecture. The solution should develop through 4 iterations:
    • Logical – This will ensure that it is anchored by the strategy and accurately reflects it.
    • Physical – This is where specific solutions are developed and the tighter parameters are loaded.
    • Transition – This addresses the specific issues around the deployment, transition and change management issues for the customer.
    • Transformation – The vendor should make the majority of the margin during this phase, and so much of the strategy and modelling should focus on the incremental capability uplifts. This is where the primary benefits realisation management should focus.

Architectural Development

The architecture of the solution is defined by careful communication between elements. The interaction between the structural colour-phases and the functional solution development supports this in two ways: (i) the Use Cases inform the design of the solution, and (ii) the phases inform the development of the solution. Initially, the customer use cases determine solution design. The solution must be derived from a desire to solve the customer’s problem. A list of requirements is essential and useful, but may only result in a patchwork quilt of technical components and business benefits. The logical solution will then be moulded and developed using the vendor strategy and broad parameters.

The physical solution, along with more detailed attention to the Transition and Transformation stages, is then pushed through another 3 phases of development. In this way, the team ensures that the architecture is not only technically comprehensive but also a sensible, well-costed solution that will meet the customer’s needs well into the future.

Process Colour-Phases

The colour phases are essential because without them the architectural development process would just spiral out of control. The colour phases ensure:

  • that the development of the solution occurs within a structured governance framework,
  • that there is cross-functional collaboration between the technical silos and the commercial support, and
  • that the progress of development is measurable and communicable to the non-technical executive.

To that extent there are 4 essential phases to ensure that the technical solution does not exist in splendid isolation but is a well-costed, commercially sensible plan to make money by playing to the vendor’s strengths whilst delivering something of real value to the customer. To do that, the process must ensure it develops:

  1. Strategy. The strategy must be developed from the outset. Architecture is not strategy. Without strategy there is nothing to anchor the technical development to, and nothing with which to prod and cajole business partners.
  2. Architecture. Solution development is only part of the process. As with any design process, the artistic talents and temperaments of the team must be constrained and focused.
  3. Commercials. The commercials have to be right. No matter how elegant or sophisticated the solution, unless it works commercially it will not be signed off.
  4. Viability & Review. The solution must be reviewable internally by peers and colleagues. It must exist as a coherent body of documents (which will be pulled together into the bid document). The team must be able to prove that the solution will work over the long term and exist as part of the customer’s enterprise architecture.
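The four phases above can be read as a simple stage-gate: each workstream must be signed off before the bid proceeds to review. A minimal Python sketch of that gating logic follows; the phase names come from the text, while the sign-off mechanics (`next_gate`, a set of signed-off phases) are an illustrative assumption, not a prescribed tool.

```python
# The four colour phases named in the text, in order.
PHASES = ["Strategy", "Architecture", "Commercials", "Viability & Review"]

def next_gate(signed_off):
    """Return the first phase not yet signed off, or None if all gates have passed.

    `signed_off` is a set of phase names the governance board has approved.
    """
    for phase in PHASES:
        if phase not in signed_off:
            return phase
    return None

# With only Strategy approved, the bid may not skip ahead to Commercials.
print(next_gate({"Strategy"}))  # prints: Architecture
```

The point of the sketch is that the ordering is enforced by the governance framework, not by the enthusiasm of the technical team: Commercials cannot be gated until Architecture is signed off, mirroring the dependency the text describes.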

There is both art and science in the design, development and management of large and complex bids. Bid teams not only require the gravitas of deep technical ability but also leadership and communication skills. Without these the solution will not be balanced, but rather a mess of technical solutions competing to over-deliver functionality. There is no easy way to navigate the issue, but a colour-phased process of architectural development blends the best of both worlds into a practical and deliverable lifecycle.

Figure 1 – A colour-phased lifecycle for technical bids.


In a word – Yes. Although it is probably not possible to outsource an entire front-office capability, it is possible to outsource key elements so that the capability is (i) better supported, and (ii) more effective.


At a particular UK outsourcing company, the account manager of a Local Government account in a poor area once asked the CEO of the Council if he felt that more IT would help him. The CEO snapped back that unless the account manager could solve teenage pregnancy with his IT, he should stop bothering him! The question remained: could we help reduce teenage pregnancy through the use of our services? At the time, no single process or set of processes existed that was particularly focused on the reduction of teenage pregnancy. Social workers merely had a bag of blunt tools which they applied on a case-by-case basis, usually after the fact (or in cases of extreme and obvious risk).

It would be ambitious. The area in question had the UK’s youngest grandmother; she was 28. Many young girls saw falling pregnant as their only way out of the poverty cycle, since it secured a council house and benefits. Regardless of the social problems this causes, the sheer drain on the public purse is immense.

On an investigative trip to India, where the outsourcing delivery partners were visited, the Use Case was put to them. The discussion focussed on 3 areas:

  • Analytics. Analysing and correlating large volumes of data is key. This is not something that social services did, nor could do. Analysis would require not only the infrastructure but also the personnel to input data and interpret the output.
  • Security. Data security and anonymity are critical. Employing local people was a key differentiator of the vendor, and sending personal data offshore would be sensitive. Data would need to be cleaned, anonymised and then transmitted. Only when analyses were made and returned would results be matched with identifying indicators.
  • Reporting. Reporting needs to be intelligent and intuitive. The analysis could not simply spit out typed reports; it would need to target social workers directly, inform them of specific risks, and link those risks to benefits, programmes, policies and guidance in and around the entire area.
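The clean–anonymise–transmit–re-match flow described under Security can be sketched in a few lines of Python. This is a minimal illustration, not the vendor’s actual implementation: the salted-hash pseudonymisation, the field names and the `pseudonymise`/`rematch` helpers are all assumptions made for the sake of the example. The essential property it demonstrates is that the lookup table linking tokens back to identities never leaves the onshore environment.

```python
import hashlib
import secrets

# Salt generated and kept onshore; without it the offshore team
# cannot reverse the tokens back to real identities.
SALT = secrets.token_hex(16)

def pseudonymise(records):
    """Strip identifying fields before transmission offshore.

    Returns the anonymised records plus an onshore-only lookup table
    used later to re-match analysis results to real case identities.
    """
    lookup = {}
    anonymised = []
    for rec in records:
        token = hashlib.sha256((SALT + rec["id"]).encode()).hexdigest()
        lookup[token] = rec["id"]
        # Only coarse, non-identifying attributes are transmitted.
        anonymised.append({"token": token, "age": rec["age"],
                           "postcode_area": rec["postcode"][:4]})
    return anonymised, lookup

def rematch(results, lookup):
    """Re-attach identities to the returned risk scores, onshore only."""
    return [{"id": lookup[r["token"]], "risk": r["risk"]} for r in results]

records = [{"id": "C-1001", "age": 15, "postcode": "NE1 4XX"}]
anon, lookup = pseudonymise(records)
# The offshore analysis returns scores keyed only by token...
results = [{"token": anon[0]["token"], "risk": 0.82}]
# ...which are matched back to cases onshore before reaching social workers.
print(rematch(results, lookup))
```

In practice the cleaning step would also suppress quasi-identifiers (dates of birth, full postcodes) rather than merely hashing the case number, but the shape of the round trip is the same.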


Being part of such a complex capability would be very daunting for a business. Outsourcing vendors typically eschew complexity. Profit is based not only on economies of scale but also on lower pay-band workers performing the tasks (typically in countries where labour is cheaper). Any increase in complexity adds the need for managerial oversight, which adds the indirect (non-chargeable) costs of higher pay-band workers. Complexity is not good.

In order to avoid this in front-office outsourcing, the vendor needs to get the customer to perform service orchestration: the customer provides the oversight and configures the services as they wish. This does not mean that the vendor is just crunching numbers in the background. Rather, the vendor provides a level of human interpretation of the analysis, feeding it back through a vendor representative in the social services environment who can then ‘push’ the information. This last part is essential because the vendor is likely to be financially incentivised only on the basis of lower teenage pregnancy. Dealing with public-service apathy and stagnant processes at a personal level is critical.


What if it goes wrong? Vendors are now part of vulnerable people’s lives; if something goes wrong, who carries the liability? Should the vendor provide warranties for the services, or simply guarantee results year by year? These areas are untested. However, it is likely that vendors will only warrant the technical aspects of the services, which is not really front-office outsourcing. One way around this is to ‘own’ a number of key roles, or at least the role specifications, so that vendors can realistically provide warranties for some of the outcomes.

In the end, front-office outsourcing is not only possible but will be increasingly necessary in order to provide vendors with the differentiators needed to maintain profits. The most critical issues will be getting vendors to deal with complexity without paying too much for the additional risk, and to do so in a way which allows them to take on the overall liability as well. The front office is a prime source of differentiation, and it is ground that outsourcers can capture so long as they are smart about both complexity and liability.