SCENARIO-BASED MODELLING: Storytelling our way to success

“The soft stuff is always the hard stuff.”

Unknown.

Whoever said ‘the soft stuff is the hard stuff’ was right.  In fact, Douglas R. Conant, coauthor of TouchPoints: Creating Powerful Leadership Connections in the Smallest of Moments, said as much when discussing an excerpt from The 3rd Alternative: Solving Life’s Most Difficult Problems, by Stephen R. Covey:

“In my 35-year corporate journey and my 60-year life journey, I have consistently found that the thorniest problems I face each day are soft stuff — problems of intention, understanding, communication, and interpersonal effectiveness — not hard stuff such as return on investment and other quantitative challenges. Inevitably, I have found myself needing to step back from the problem, listen more carefully, and frame the conflict more thoughtfully, while still finding a way to advance the corporate agenda empathetically. Most of the time, interestingly, this has led to a more promising path forward and a better relationship, which in turn has made the next conflict easier to deal with.”

Douglas R. Conant.

Conant is talking about the most pressing problem in modern organisations – making sense of stuff.

Sense Making

Companies today are awash with data.  Big data.  Small data.  Sharp data.  Fuzzy data.  Indeed, there are myriad software companies offering niche and bespoke software to help manage and analyse data.  Data, however, is only one-dimensional.  To make sense of information is, essentially, to turn it into knowledge, and to do this we need to contextualise it within the frameworks of our own understanding.  This is a phenomenally important point in sense-making – the notion of understanding something within the parameters of our own mental frameworks – and it is something most people can immediately recognise within their everyday work.

Contextualisation

Take, for instance, the building of a bridge.  The mental framework by which an accountant understands the risks in building the bridge is uniquely different from the way an engineer understands those risks, or indeed how a lawyer sees the very same risks.  Each was educated differently, and the mental models they all use to conceptualise the same risks lead to different understandings.  Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.

Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.

For instance, take again the same risk of a structural weakness within the new bridge.  The accountant will understand it as a financial problem, the engineer will understand it as a design issue and the lawyer will see some form of liability and warranty issue.  Ontologically, the ‘thing’ is the same but its context is different.  However, in order to make decisions based on their understanding, each person builds a ‘mental model’ to re-contextualise this new knowledge (with some additional information).

There is a problem.

Just like when we all learned to add fractions when we were 8, we have to have a ‘common denominator’ when we add models together.  I call this calibration, i.e. the art and science of creating a common denominator among models in order to combine and make sense of them.

Calibration

Why do we need to calibrate?  Because trying to analyse vast amounts of the same type of information only increases information overload.  It is a key tenet of Knowledge Management that increasing variation decreases overload.

It is a key tenet of Knowledge Management that increasing variation decreases overload.

We know this to be intuitively correct.  We know that staring at reams and reams of data on a spreadsheet will not lead to an epiphany.  The clouds will not part, the trumpets will not blare and no shepherd in the sky will point the right way.  Overload and confusion occur when one has too much of the same kind of information.  Making sense of something requires more variety.  In fact, overload only increases puzzlement because of the uncertainty and imprecision in the data.  This, in turn, leads to greater deliberation, which then leads to increased emotional arousal.  The ensuing ‘management hysteria’ is all too easily recognisable.  It leads to cost growth as senior management spend time and energy trying to make sense of a problem, and to strategic risk and lost opportunity as those same people neglect their own jobs while they do so.

De-Mystifying

In order to make sense, therefore, we need to aggregate and analyse disparate, calibrated models.  In other words, we need to look at the information from a variety of different perspectives, through a variety of lenses.  The notion IT companies would have us believe – that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle – is absurd.

The notion IT companies would have us believe – that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle – is absurd.

Information still needs a lot of structural similarity if it’s to be calibrated and analysed by both technology and our own brains.
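As a minimal sketch of what that structural similarity buys us, the Python below maps an engineer’s and a lawyer’s framing of the same bridge risk onto one common denominator (probability-weighted cost in dollars).  The field names and figures are illustrative assumptions, not data from the example.

```python
# A minimal sketch of calibration: two differently-framed views of the same
# bridge risk mapped onto a common denominator (probability-weighted cost in
# dollars). Field names and figures are illustrative assumptions.

def calibrate(probability: float, impact_dollars: float) -> float:
    """Common denominator: probability-weighted cost impact."""
    return probability * impact_dollars

# The engineer frames the risk as rework with a likelihood of occurrence.
engineer_view = {"risk": "structural weakness", "likelihood": 0.05,
                 "rework_cost": 4_000_000}

# The lawyer frames the same risk as a warranty claim with an expected award.
lawyer_view = {"risk": "structural weakness", "claim_probability": 0.02,
               "expected_award": 9_000_000}

common = {
    "engineering": calibrate(engineer_view["likelihood"],
                             engineer_view["rework_cost"]),
    "legal": calibrate(lawyer_view["claim_probability"],
                       lawyer_view["expected_award"]),
}
print(common)  # both views are now comparable, and combinable, in dollars
```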

The diagram below outlines how this is done, but it is only part of the equation.  Once the data is analysed and valid inferences are made, we are still only partially on our way to better understanding.  We still need those inferences to be contextualised and explained back to us in order for the answers to crystallise.  For example, in our model of a bridge, we may make valid inferences of engineering problems based on a detailed analysis of the schedule and the Earned Value, but we still don’t know if that’s correct.

Storytelling

As an accountant or lawyer, therefore, in order to make sense of the technical risks we need the engineers to play back our inferences in our own language.  The easiest way to do this is through storytelling.  Storytelling is a new take on an old phenomenon.  It is the rediscovery of possibly the oldest practice of knowledge management – a practice which has come to the fore out of necessity and due to the abysmal failure of IT in this field.

Scenario-Based Model Development

Using our diagram above in our fictitious example, we can see how the Legal and Finance teams, armed with new analysis-based information, seek to understand how the programme may be recovered.  They themselves have nowhere near enough contextual information or technical understanding of either the makeup or execution of such a complex programme, but they do know it isn’t going according to plan.

So, with new analysis they engage the Project Managers in a series of detailed conversations whereby the technical experts tell their ‘stories’ of how they intend to right-side the ailing project.

Notice the key differentiator between a bedtime story and a business story – DETAIL!  Asking a broad generalised question typically elicits a stormy response.  Being non-specific is either adversarial or leaves too much room to evade the question altogether.  Engaging in specific narratives around particular scenarios (backed up by their S-curves) forces the managers to contextualise the right information in the right way.

From an organisational perspective, specific scenario-based storytelling forces managers into a positive, inquisitive and non-adversarial narrative on how they are going to make things work, without having to painfully translate technical data.  Done right, scenario-based modelling is an ideal way to squeeze the most out of human capital without massive IT spends.
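For the S-curves mentioned above, here is a minimal sketch of how one might be produced: simulate total project cost and read off its cumulative distribution.  The task names, cost ranges and use of triangular distributions are illustrative assumptions.

```python
# A minimal sketch of an S-curve: the cumulative probability distribution of
# simulated total project cost. Task names, cost ranges and the use of
# triangular distributions are illustrative assumptions.
import random

TASKS = {"earthworks": (8, 10, 15),   # (best, likely, worst) cost in $M
         "piers":      (20, 24, 40),
         "deck":       (12, 15, 25)}

def simulate_total() -> float:
    """One Monte Carlo draw of total project cost."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS.values())

runs = sorted(simulate_total() for _ in range(10_000))
for p in (0.1, 0.5, 0.8, 0.95):       # selected points along the S-curve
    print(f"P{int(p * 100):>2}: ${runs[int(p * len(runs)) - 1]:.1f}M")
```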

McKinsey – Why the Customer pays for Risk

In the forthcoming McKinsey Insights article “Avoiding Blindspots in Your Next Joint Venture” the authors present some of their recent research into why JVs fail. Further to my previous post about the need to design accurate risk models into a JV/Alliance/PPP, the authors note:

“At the beginning of any JV relationship, parent companies naturally have different risk profiles and appetites for risk, reflecting their unique backgrounds, experiences, and portfolios of initiatives, as well as their different exposures to market risk. Parent companies often neglect this aspect of planning, preferring to avoid conflict with their prospective partners and getting to mutually agreeable terms—even if those terms aren’t best for either the JV or its parents. But left unaddressed, such asymmetries often come to light during launch, expand once operations are under way, and ultimately can undermine the long-term success of the joint venture.

Certainly, some JVs must be rigidly defined to be effective and enforce the right behavior. But when that isn’t the case, JV planners too often leave contingency planning to the lawyers, focusing on legal protection and risk mitigation without the business sense, which shows up in the legalese of the arbitration process and exit provisions. Both tend to be adversarial processes that kick in after problems arise, when in fact contingency planning should just as often focus on the collaborative processes that anticipate changes and create mechanisms or agreements that enable parent companies to adapt with less dysfunction. As the head of strategy for one insurance company noted, “If a JV is set up correctly, particularly regarding governance and restructuring, it should be able to weather most storms between the parents.” Such mechanisms might include, for example, release valves in service-level agreements, partner-performance management, go/no-go triggers, or dynamic value-sharing arrangements and can allow a joint venture to maintain balance in spite of partners’ different or evolving priorities and risks.”

Designing proper risk models need not be adversarial. In the hands of a skilled lawyer or commercial manager a risk model is a sharp and powerful weapon at the negotiating table. This does not mean that the contract is not a win/win deal. On the contrary, insight into the proper and necessary allocation of risk is essential for a win/win deal. Anything else is simply wilful blindness. In fact, a more mathematical approach to risk modelling lays the foundation for a negotiating process that is inquisitorial rather than adversarial. The primary questions remain: is the management sophisticated enough and does the business have the stomach for it?

The Customer Always Pays for Risk: risk allocation in large and complex contracts

A recent report in The Australian newspaper (17/12/13) into the Air Warfare Destroyer (AWD) debacle states that the project is already $106M over its $618M budget for 2012-13 (a wastage of over $2M a week). The article states that the project delays are a combination of “shipbuilding bungles, infighting between partners, Defence budget cuts and a cultural clash with the ship’s Spanish designer, Navantia.” Poor efficiencies at the ASC in Adelaide and little coherence in the support phase are also contributing to the mess, but the AWD Alliance still maintains it is on track because its emergency funds have not been exhausted.

The AWD Alliance is the “unwieldy and largely unaccountable” body responsible for the AWD project. The alliance is made up of ASC, the Defence Materiel Organisation and Raytheon Australia. The “secretive” alliance is apparently fractured by internal disputes, with the DMO blaming the ASC. The ASC, in turn, blames Navantia. Further still, the ASC is also blighted by a “poisonous” relationship with its primary subcontractor, BAE Systems.

This is a contracting issue:

  1. There is no issue with the Spanish design. This design was picked as part of a strict and comprehensive procurement process. If the ASC did not perform the requisite due diligence on the engineering then it should not be in the game of building ships.
  2. In multi-party construction contracts poor site efficiencies are largely a result of (i) cumbersome management, and (ii) poor depth of vision into the total supply chain. Both issues should have been obvious and ironed out at the contracting stage.
  3. There is no doubt that budget cuts and legislative change pose a high degree of risk. However, if the revenue curves of the corporate entities were left subject to ill winds from Canberra then their commercial teams did not do their jobs.
  4. Lastly, if a project is eating its emergency/contingency funds then it is an emergency and it is definitely not on track.

So, what’s wrong with Alliance contracting?

John Cooper, writing in the journal of Building & Construction Law (2009 25 BCL 372), notes that alliance contracting is increasingly popular in Australia. Promoted by contractors and adopted by some state governments, it is seen as a way to overcome the problems said to be associated with “more traditional forms of contracting”. From this I assume he is including PPP contracts and their PFI/PF2 subset.

Alliance contracts are supposed to be more conducive to collegial management and better outcomes because:

  1. They are governed by a charter of principles and not the black letters of a strict contract.
  2. Each party (theoretically) operates in good faith (although, unlike German franchise law, not necessarily to the mutual benefit of the project).
  3. There is an understanding of “collective responsibility”.
  4. There is a socially enforceable culture of “no-blame, no dispute”.

In fact, all of these points are patent nonsense and the article in The Australian and the Australian National Audit Office report on the same project clearly highlight the complete ineffectiveness of an Alliance contract in this instance.

At the heart of the problem is the risk model. Alliance contracts are popular for Defence because (i) the government underwrites the requirements risk (i.e. future requirements creep), and (ii) Defence does not have to expose this as an additional cost. So, the project appears to be good value for money. In fact, DMO is to blame here because it knows that it would never be able to get the AWD it wanted if it had to expose and pay for the risk. This is standard Defence sharp practice and, to my mind, borderline procurement fraud. By getting the government to underwrite the risk the DMO gets the ship it wants at an apparent bargain price. The Commonwealth ends up paying way over the odds, but the project would never have been approved if the risk had been exposed. The price would simply have been too high.

In a standard construction contract the client would not underwrite future risk. So, the builder would cover this through (a) additional systems engineering to uncover and cost future dependencies, (b) insurance against certain risks, and (c) pricing the remainder into the cost model, i.e. the customer would end up buying the risk back. In the end, the customer always pays for risk. Even if the builder has to absorb hefty liquidated damages for lateness the customer will still pay for them down the line in more aggressive management practices, exorbitant extension-of-time claims and even larger margins on acceleration costs. The customer always pays for risk.
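A minimal sketch of that price build-up; every figure is an illustrative assumption:

```python
# A minimal sketch of the price build-up: however risk is packaged, it lands
# in the price the customer pays. Every figure is an illustrative assumption.

base_cost        = 100.0          # $M, risk-free build cost
insured_risk     = 3.0            # $M of premiums for insurable risks
retained_risk_ev = 0.10 * 25.0    # 10% chance of a $25M uninsurable event
margin           = 0.12           # builder's margin, applied to the lot

price_to_customer = (base_cost + insured_risk + retained_risk_ev) * (1 + margin)
print(f"Contract price: ${price_to_customer:.1f}M")  # the customer buys the risk back
```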

NETWORK RISK

The primary cause of these problems is an unsophisticated and uneducated approach to contracting. Underlying these is the simple fact that risk cannot be allocated if the allocatee does not endorse the allocation. In fact, I would posit that risk cannot be allocated at all. Risk must be bought and sold in order for (i) a party to be truly incentivised to deal with risk, and (ii) the risk to go away. The last point is critical. In standard risk flow-down models risk never goes away. Rather, it simply flows down to the party with the least bargaining power to offload it. In the end, the customer always buys the risk back. In a network model, risk is sold to the party who wants it the most. In the end, they absorb the cost (or a certain percentage) based on the value they will reap in the event the risk is realised.

For instance, the network model below at Figure 1 is based on a large outsourcing contract. A multi-divisional outsourcing company won a contract to deliver products and services to a government body. Part of that contract was the hosting of IT infrastructure, a portal for public access and a billing application, the latter being coded from scratch. In this model, the code for the billing application is held in escrow, and the significant risk that the application will not be ready or fit for purpose is sold as a contingent risk to another company in the model. In this way:

  • the purchaser of the risk gets a (partially finished) billing application at a knock-down price (if the risk is realised).
  • the primary outsourcer can simply pass the application on to that purchaser without having to find a suitable programmer in mid-flight, and therefore
  • the primary outsourcer does not need to insure this risk (so much), and
  • the original application company are greatly incentivised, lest they lose their R&D costs.

Additional vehicles (other than escrow) for contingent risk may be:

  • Step-In Rights
  • Holding other titles and licenses in escrow
  • contingent transfers of other property.

In all these cases something happens automatically. There is no better way to make this happen than for someone to profit from another’s poor performance. In such cases, the vampiric action of the vestee is swift justice for sub-standard management. When risk is realised, the vestee swoops and kills. There can be no greater motivation for either party. The primary question is whether a business has enough faith in its management to set up contracts in this way. Although a network model of risk is, technically, the best means to manage risk in large and complex contracts, businesses need to decide whether they have the management sophistication and the stomach to deal with risk in this way.

Figure 1 – Model of “Derived Risk” in a large outsourcing contract.
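A minimal sketch of the value-based allocation behind Figure 1, loosely following the escrow example above; the parties, probability and values are illustrative assumptions:

```python
# A minimal sketch of the network model: a contingent risk is 'sold' to the
# party that would reap the most value were it realised, rather than flowed
# down to whoever has the least bargaining power. The parties, probability
# and values are illustrative assumptions.

risk = {"name": "billing application not ready or fit for purpose",
        "probability": 0.3}

# Value each candidate would capture if the risk is realised (e.g. taking the
# partially finished code out of escrow at a knock-down price).
candidates = {"rival software house": 12.0,   # $M
              "systems integrator":    7.5,
              "primary outsourcer":    2.0}

buyer, value = max(candidates.items(), key=lambda kv: kv[1])
premium = risk["probability"] * value * 0.5   # buyer shares half the expected upside

print(f"Sell the contingent risk to: {buyer}")
print(f"Indicative price of the contingency: ${premium:.1f}M")
```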

Measuring Legal Exposure

The function of a contract is to cover legal exposure.  It does not, by and large, govern the relations between parties.  Those are already established by the community, and contracts merely document well-established facts.  The way the parties will behave will already be an established reflection of their education, training and previous business experience.  It is naïve to think, for instance, that a contract will be used by engineers to help manage the construction of a building.  On the contrary, the contract will present a myriad of hurdles, obstacles, impasses and problems as the workers try to get on and do their job – build.  It is a truism, then, that almost all litigation is a function of poor contract management rather than poor contract design.  Indeed, I have never met a client who had either fully read OR fully understood the contracts they were in.

A contract, rather, seeks to cover the inevitable areas of risk when two parties necessarily compromise to enter into an agreement.  As my father used to say, ‘there are two parties to a contract – the screwor and the screwee.’  One party is always disadvantaged.  The lesser party needs to cover their legal exposure, and the greater party needs to ensure that not so much risk flows down that the lesser party is overloaded with risk, making the contract unworkable.

Legal exposure is derived from financial risk.  Contracts will generally cover most financial exposure.  However, in Westminster-based systems much of the law of contract is still based in Equity, so usually some degree of exposure remains.  A party can only be forced to indemnify so much; can only warrant so much and not beyond the reality of the arrangement.

Most contracts, however, do not measure the legal exposure a party faces.  Most contracts stick with the standard blanket coverage formula, i.e. zero exposure.  This approach is unhelpful and in many cases counter-productive:

  • Phantom Exposure.  Contract negotiations become unnecessarily bogged down over non-existent risk.  Arguing for 100% coverage when the risk is well covered already is just chasing phantom risk.
  • Lazy.  Quite frankly, the body of knowledge which exists in each sector, the sophistication of clients and the modern quantitative tools which exist to make contracting easier give no excuse for legal laziness.

Measuring legal exposure is both qualitative and quantitative.  Firstly, deriving financial risk is a mathematical function.  Secondly, as exposure is derived from the limitations of contractual coverage, legal exposure is a function of qualitative assessments.

My own method uses a threefold approach, namely:

  1. sensitivity analysis to measure financial risk, and then
  2. three separate qualitative measurements to define whether an element is a legal risk, then
  3. a legal assessment to determine if the remaining elements are covered (i.e. measure the exposure) and to what degree.

All of this is done as a collaborative process around a single bubble chart (shown below).  As is shown in the chart,

  • the bubble size (Z ‘axis’) relates directly to the mathematical analysis of financial sensitivity.
  • the X-axis is a qualitative scoring designed to assess the relative complexity of each item of volatility.
  • the Y-axis is another qualitative scoring to determine just how close the item is to the project team, i.e. can they actually do something about it?  The less a team can influence a risk the more such risk needs to be pushed upwards, so that the corporate functions of a business (Legal, Finance) can act upon it with centralised authority.
  • the colouring, lastly, deals with the notion of immediacy, i.e. prioritisation.

In this way, if a risk is both very complex and not able to be influenced by the project team (i.e. cannot be mitigated) then it, most likely, needs to be dealt with by the Legal function as there will be no way to otherwise influence it when the risk is realised.

Risk-Based Bubble Chart to engender cross-functional collaboration
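As a minimal sketch, such a chart might be assembled as follows; the risk names, qualitative scores and financial sensitivities are illustrative assumptions:

```python
# A minimal sketch of the bubble chart above. Risk names, qualitative scores
# and financial sensitivities are illustrative assumptions.
import matplotlib.pyplot as plt

risks = [
    # (name, complexity X 1-10, beyond-team-influence Y 1-10, sensitivity $M, immediacy)
    ("structural warranty",   8, 9, 12.0, "red"),
    ("steel price movement",  4, 3,  6.0, "orange"),
    ("site access delays",    2, 2,  2.5, "green"),
    ("subcontractor default", 7, 8,  9.0, "red"),
]

xs      = [r[1] for r in risks]
ys      = [r[2] for r in risks]          # high = the team cannot influence it
sizes   = [r[3] * 60 for r in risks]     # bubble area scales with sensitivity
colours = [r[4] for r in risks]          # colour encodes immediacy

fig, ax = plt.subplots()
ax.scatter(xs, ys, s=sizes, c=colours, alpha=0.5)
for name, x, y, *_ in risks:
    ax.annotate(name, (x, y))
ax.set_xlabel("Complexity (qualitative score)")
ax.set_ylabel("Beyond the project team's influence (qualitative score)")
ax.set_title("Legal risk concentrates in the upper-right quadrant")
plt.show()
```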

Once legal risk is conceptually isolated in the upper-right quadrant of the bubble chart then lawyers may make a qualitative determination as to the amount of legal exposure.  For instance, a builder may warrant the quality of workmanship on a specific structure and cover it with insurance.  Legal may determine that there is virtually no statistical evidence that such risk is likely to be realised.  Therefore, the existing premiums easily cover the risk highlighted in the chart.

Alternatively, the chart may have defined financial risk beyond, say, the indemnities provided by a firm’s subcontractors.  In such a case insurance or contractual renegotiation may be necessary.  It is important to know that in such circumstances it is precisely targeted, cross-functional management energy that is being expended to determine, define and collaboratively deal with specific financial risks.  Indeed, there is little more any business could hope for.

Governance is More Than Openness

In a recent blog Richard Sage (@BakedIdea) argues that governance is just a matter of openness and sharing.  If only life were so simple.  If this were the case then what of Enron?  What of the whole global financial crisis?  These people were open?  These people shared?  They had GAAP reporting duties – so what went wrong?

The simple fact of the matter is that (i) governance is more than just sharing, but (ii) less than the full apparatus of conformance which Richard sets out.

More Than Sharing

Governance is more than sharing.  It is about design and flow.  Financial institutions shared information internally and reported it externally but this made not one jot of difference to the near collapse of the global economy.  Collateralised Debt Obligations (CDOs) were so complex that it would take a long time to unpick each one.  It is essential to understand that if an organisation actively conspires to confound regulatory procedures then there is no governance structure that will catch it.

“Governance without design is somewhat akin to looking at a ball of multi-coloured string and trying to guess what the pullover will look like.”

Organisations (here I extend the net to government and not-for-profit) need to design for misuse.  They must understand that cross-functional information flows require some degree of architecture.  Without the necessary degree of design in governable artefacts (e.g. cost models, delivery schedules and contracts) it is impossible to unpick them.  In fact, it is somewhat akin to looking at a ball of multi-coloured string and guessing what the pullover is going to look like.

Governance Is Less Than You Think

I believe that governance is only the set of structures necessary to give confidence to institutional shareholders that their interests are being well looked after.  The functions are the business processes and technical systems which enforce and deliver them.  This is why corporate governance speaks only of Directors’ Duties and not of business process.  The how will be forever changing in our modern and dynamic world.

In the end, governance is counterintuitive to business.  Good governance is seen to reduce profits, to close off avenues of growth and to burden management with bureaucracy and nugatory process.  Yet good governance should clear the way.  It should lower the bar and reduce the hurdles.  In concert with a stringent and effective assurance process governance becomes light yet effective.  It delivers confidence without suffocating the organisation.

The Failure of Risk: lessons from the GFC

We live in uncertain times. The failures in risk management which led to the global financial crisis have created an unprecedented set of circumstances. Not only are regulators imposing heavier compliance burdens, but shareholders and investors are demanding greater reporting and higher levels of information transparency. On top of all this, operational costs are too tight to carry the overhead of separate risk and assurance functions.

When the analysis is done there are 6 key lessons to learn from the global financial crisis:

  1. Integrate G, R & C.  In medium and large corporations isolated risk management practices actively work against the business.  Technical and operational experts will identify risks from experience and create risk slush-funds to mitigate them.  These increase the cost of business and in many cases price the company out of the market.  In an integrated GRC system the firm is able to manage risks across business units so that the risk funds are held centrally and do not add a premium to initial project costs.  Risk identification and analysis percolates from the bottom up but governance is driven from the top down.  In an integrated system they both work within the business lifecycle to add the right mix of checks and balances so that no additional drag is added to investment/project approvals.
  2. Make Passive GRC Active.  Systems need to be active.  They need to hunt out risk, define it, quantify it and measure the dependencies of the risk.  Then, those same systems need to bring it to the attention of the executives so that they may make informed investment decisions.  In the end, humans follow the law of least effort:  employees will follow the path of least resistance in designing and gaining approval for their projects.  GRC must not follow a system of honour & audit but rather one of active assurance.  When GRC systems are passive the business lifecycle becomes clogged with nugatory and useless program reviews that turn into technical sales pitches by design teams.  Such events and practices only serve to affirm the belief that GRC is a legal burden and one which only serves to satisfy the needs of regulatory compliance.  Raytheon, for instance, have an excellent system of governance-by-exception.  Their Integrated Product Design System (IPDS) has active governance measures and allows Raytheon to manage a pipeline of thousands of critical projects dynamically and by exception.  (A minimal sketch of such an exception trigger follows this list.)
  3. Get Granular.  When projects fail it is not usually because the risks have not been adequately managed.  The primary problems in risk practices are the failures of risk identification and analysis.  Managers are simply unable to deal with risks at a granular level and then weigh them up on a per-project basis.  This is largely because the technical skills needed to do so are not within the standard sets of most executives (though they are within the more mathematical ones of the FS&I industry).  Where this disparity exists, businesses need to develop separate Red Teams or Assurance Teams, either from the existing PMO or from hand-picked executives.
  4. Bottom Up & Top Down.  Risk management is bottom-up but governance is top-down.  The technical skills and software reliance involved in effective risk management mean that the entire practice usually percolates from the bottom of a business, upwards.  Consequently, unless it fits within a comprehensive governance framework it will be open to being gamed by senior executives.  This is why major projects which are seen as must-win are often approved with little or no governance or assurance.
  5. Risk Ownership.  Risks need to be owned at the lowest responsible level.  This is to say that when things go wrong the person at the lowest level who has the greatest amount of operational responsibility must be able to take charge to mitigate all aspects of the risk.  It is vital that the person owning the risk be able to recognise the variables which may see the risk realised.  It is also critical that the risk owner understand the corporate decision points, i.e. the points at which the contingency plans should be triggered.
  6. Invest in the Right Type of Risk Culture.  Risk should not be a dirty word.  Risks are inherent in every project and balancing them quantitatively and qualitatively is an essential skill for all senior executives.  Risk should be as much about seizing opportunity as it is about guarding profitability.  Businesses need to invest in top talent in order to drive good risk practices from the top.  Effective, Active-GRC involves a complex array of tools, practices, structures and processes which need an experienced senior executive to drive them constantly and consistently in the business.  The softer side of risk management cannot be neglected.  The nature of risk forces people onto the defensive as they attempt to justify all aspects of their project designs.  CROs need to help executives understand that all projects must balance risk if they are to push profitability.  Otherwise, companies will be mired in conservative, risk-averse cultures which only act to add friction and reduce profitability.
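The governance-by-exception idea in point 2, as a minimal sketch; the thresholds, metrics and project data are illustrative assumptions, not Raytheon’s IPDS:

```python
# A minimal sketch of governance-by-exception: scan the project pipeline and
# surface only the exceptions for executive attention. Thresholds, metrics
# and projects are illustrative assumptions.

THRESHOLDS = {"cpi": 0.90, "spi": 0.90, "contingency_remaining": 0.25}

projects = [
    {"name": "Alpha", "cpi": 1.02, "spi": 0.97, "contingency_remaining": 0.60},
    {"name": "Bravo", "cpi": 0.84, "spi": 0.91, "contingency_remaining": 0.15},
]

def exceptions(project: dict) -> list:
    """Return the governance thresholds this project breaches."""
    return [metric for metric, floor in THRESHOLDS.items()
            if project[metric] < floor]

for p in projects:
    breaches = exceptions(p)
    if breaches:                      # only exceptions reach the review board
        print(f"{p['name']}: escalate ({', '.join(breaches)})")
```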

Risk practices need to work together inside a single, comprehensive risk framework that goes beyond simple probabilistic modelling and disjointed regulatory compliance.   Businesses need to implement processes which not only integrate the business lifecycle but actively increase both liquidity and opportunity for risk to be seen to add real value to the company.   Only once this is achieved can risk management cease to be an operational drag for the business and become a value-adding proposition which works actively to increase the profit and performance of a company.

Top 5 Benefits of Effective Risk Management

BENEFITS OF AN INTEGRATED “ACTIVE GRC” FRAMEWORK

After the failure of risk management during the recent (and ongoing) financial crisis one could be forgiven for thinking that risk management – as we know it – is dead.  However, effective risk management is the only means which businesses have to:  (i) assess and compare investment decisions, (ii) seize subtle opportunities, and (iii) ensure regulatory compliance.  Risk management has greater utility beyond these obvious benefits.  Listed below are 5 of the top financial benefits of effective risk management:

1.  IMPROVED LIQUIDITY

When managers cannot identify or mitigate complex risks they create risk contingency slush funds and pad their accounts with excessive risk premiums. This is not an efficient allocation of capital and it can even price a business out of the market. Precise identification of risk premiums removes these slush funds and creates greater firm liquidity and the ability to allocate capital where it is needed.
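A minimal sketch of the liquidity point: risks pooled centrally need less total contingency than the same risks padded project-by-project, because independent risks rarely all hit at once.  The distributions and confidence level are illustrative assumptions.

```python
# A minimal sketch of why central risk funds free up capital: the portfolio
# needs less contingency than the sum of per-project padding. Distributions
# and figures are illustrative assumptions.
import random

N_PROJECTS, CONFIDENCE, SIMS = 10, 0.8, 20_000

def overrun() -> float:
    """One draw of a single project's cost overrun, in $M."""
    return random.triangular(0, 10, 2)

def percentile(values, q):
    ordered = sorted(values)
    return ordered[int(q * len(ordered)) - 1]

# Each manager pads their own budget to the P80 of their project's overrun.
single = [overrun() for _ in range(SIMS)]
padded_total = N_PROJECTS * percentile(single, CONFIDENCE)

# A central fund only needs the P80 of the combined overrun.
portfolio = [sum(overrun() for _ in range(N_PROJECTS)) for _ in range(SIMS)]
pooled_total = percentile(portfolio, CONFIDENCE)

print(f"Sum of per-project slush funds: ${padded_total:.1f}M")
print(f"Central fund, same confidence:  ${pooled_total:.1f}M")
```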

2.  BETTER PROJECT PERFORMANCE

The best methods for risk identification and analysis in projects are the quantitative analysis of cost models and project schedules. However, these methods are only useful where such models are detailed enough. Good risk management leads to greater collaboration by cross-functional teams to optimise cost and schedule performance.

3.  BETTER OPPORTUNITY MANAGEMENT

With greater liquidity comes the ability to seize emerging opportunities. Not only can the company use this capital across portfolios to manage risks but it can also seize opportunities for M&A, talent acquisition, share buybacks, increased dividends, employee bonuses or increased project funding/investment.

4.  CONSENSUAL MANAGEMENT CULTURE

As managers work across the business to calibrate cost models with project schedules, and the contract and commercials with the technical architecture, the business is forced to adopt a more consensual, multi-disciplinary approach. Where GRC is implemented as part of a high-performance business initiative the culture is more likely to stick than one imposed from the top down.

5.  IMPROVED REPORTING & DECISION MAKING

An active GRC process which is fully integrated with the business relies on the quantitative analysis of core artefacts (cost models, project schedules, technical architectures and contracts). A quantitative culture, coupled with regular, detailed analytical outputs, also greatly improves the standard of financial and operational reporting and therefore the possibility of improved investment decision making.

Building a Risk Culture is a Waste of Time

The focus of a good risk management practice is the building of a high-performance operational culture which is baked into the business.  Efforts to develop risk cultures only serve to increase risk aversion in senior executives and calcify adversarial governance measures which decrease overall profitability.  The right approach to risk management is a comprehensive, holistic risk management framework which integrates tightly with the business.

The financial crisis is largely due to the failure of risk management and over-exposure in leading risk-based institutions.  More specifically, the failure of risk management is linked to:

  • The failure to link risk to investment/project approval decision making.  The aim of risk management is not to create really big risk registers, although in many organisations one could be forgiven for thinking that this is the goal.  The aim of identifying risks is to calibrate them with the financial models and program plans of the projects so that risks can be comprehensively assessed within the value of the investment.  Once their financial value is quantified and their inputs and dependencies are mapped – and only then – can realistic and practical contingency planning be implemented for accurate risk management.
  • The failure to identify risks accurately and comprehensively.  Most risk toolsets and risk registers reveal a higgledy-piggledy mess of risks mixed up in a range from the strategic down to the technical.  Risks are identified differently at each level (strategic, financial, operational, technical).  Technical and operational risks are best identified by overlapping processes of technical experts and parametric systems/discrete event simulation.  Financial risks are best identified by sensitivity analysis and stochastic simulation (a minimal sketch of a simple sensitivity analysis follows this list), while strategic risks will largely focus on brand and competitor risks.  Risk identification is the most critical but most overlooked aspect of risk management.
  • The failure to use current risk toolsets in a meaningful way.  The software market is flooded with excellent risk modelling and management tools.  Risk management programs, however, are usually implemented by vendors with a “build it and they will come” mentality.  Risk management benefits investment appraisal at Board and C-Suite level and it cannot be expected to percolate from the bottom up.
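The sensitivity analysis mentioned above, as a minimal sketch: vary one input at a time across its plausible range and rank inputs by the swing they produce in the outcome (the data behind a tornado chart).  The toy cost model and ranges are illustrative assumptions.

```python
# A minimal sketch of one-at-a-time sensitivity analysis on a cost model:
# vary each input across its plausible range and rank inputs by the swing
# they produce in total cost. The cost model and ranges are assumptions.

BASE   = {"labour_rate": 85.0, "productivity": 1.0, "steel_price": 700.0}
RANGES = {"labour_rate": (75.0, 110.0),
          "productivity": (0.8, 1.1),
          "steel_price": (600.0, 950.0)}

def total_cost(p: dict) -> float:
    """Toy cost model: labour plus materials, in $k."""
    labour = 50_000 * p["labour_rate"] / p["productivity"] / 1000
    materials = 3_000 * p["steel_price"] / 1000
    return labour + materials

swings = {}
for name, (lo, hi) in RANGES.items():
    low, high = dict(BASE, **{name: lo}), dict(BASE, **{name: hi})
    swings[name] = abs(total_cost(high) - total_cost(low))

for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:>12}: ${swing:,.0f}k swing")   # tornado-chart ordering
```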

RISK MANAGEMENT IS COUNTER-INTUITIVE

All this does not mean that risk management is a waste of time, rather that it is counter-intuitive to the business.  It is almost impossible to ask most executives to push profits to the limit if their focus is on conservatism.  Building a culture of risk management is fraught with danger.  The result is usually a culture of risk aversion, conservatism and a heavy and burdensome governance framework that only adds friction to the business lifecycle and the investment/project approval process.  Executives, unable to navigate the labyrinthine technicalities of such systems, achieve approvals for their pet programs by political means.  What is more, projects that are obviously important to the business actually receive less risk attention than small projects.  Employees learn to dismiss risk management and lose trust in senior management.

If risk management is to be an effective and value-adding component it must be baked into the business as part of the project/investment design phase.  If not, then risk management processes just build another silo within the business.  The key is to forget about “Risk” as the aim.  The goal must be a performance culture with an active and dynamic governance system which acts as a failsafe.  The threat of censure is the best risk incentive.

AWARENESS IS NOT MANAGEMENT

Management has long been aware of risk, but this does not always translate into true understanding of the risk implications of business decisions.  Risk policies and practices are often viewed as being parallel to the business and not complementary to it.

Why is it that most businesses rate themselves high on risk management behaviours?  This is largely because businesses do not correlate the failure of projects with the failure of risk and assurance processes. 

In a 2009 McKinsey & Co survey (published in June 2012 “Driving Value from Post-Crisis Operational Risk Management”) it was clear that risk management was seen as adding little value to the business.  Responses were collected from the financial services industry – an industry seen as the high-water mark for quantitative risk management. 

COLLABORATION IS THE KEY

Risk management needs to become a collaborative process which is tightly integrated with the business.  The key is to incentivise operational managers to make calculated risks.  As a rule of thumb there are 4 key measures to integrate risk management into the business:

  1. Red Teams.  Despite writing about collaboration, the unique specialities of risk management often require senior executives to polarise the business.  It is often easier to incentivise operational managers to maximise risks and check them by using Red Teams to minimise risks.  Where Red Teams are not cost effective, a dynamic assurance team (potentially coming from the PMO) will suffice.  Effective risk management requires different skills and backgrounds.  Using quantitative and qualitative risk management practices together requires a multi-disciplinary team of experts to suck out all the risks and calibrate them within the financial models and program schedules in order that investment committees can make sensible appraisals.
  2. Contingency Planning.  Operational risk management should usually just boil down to good contingency planning.  Due to the unique skill sets in risk management, operational teams should largely focus on contingency planning and leave the financial calibration up to the assurance/Red teams to sweep up.
  3. Build Transparency through Common Artefacts.  The most fundamental element of a comprehensive  risk process is a lingua franca of risk  – and that language is finance.  All risk management tools need to percolate up into a financial model of a project.  This is so that the decision making process is based on a comprehensive assessment and when it comes to optimise the program the various risky components can be traced and unpicked.
  4. Deeper Assurance by the PMO.  The PMO needs to get involved in the ongoing identification of risk.  Executives try and game the governance system and the assurance team simply does not have the capacity for 100% audit and assurance.  The PMO is by far the best structure to assist in quantitative and qualitative risk identification because it already has oversight of 100% of projects and their financial controls.

Traditional risk management practices only provide broad oversight. With the added cost pressures that businesses now feel it is impossible to create large risk teams funded by a fat overhead. The future of risk management is not for companies to waste money by investing in costly and ineffective risk-culture programs.  Good risk management can only be developed by tightly integrating it with a GRC framework that actively and dynamically supports better operational performance.

BUSINESS PROCESS FAILURES: the importance of logical architectures

Chart – IT executives’ top priorities for business processes (McKinsey & Co 2012 survey)

In a recent 2012 survey by McKinsey & Co, IT executives noted that their top priorities were ‘improving the effectiveness and efficiency of business processes’.  One of the critical failings of IT, however, is to implement effective and efficient business process architectures in the first place.  The IT priorities in the chart above only serve to highlight what we already know:  that IT service companies implement processes badly.

Why?

Whether through a failing of Requirements or Integration (or both), IT service companies often implement inappropriate business process architectures and then spend the first 6 to 12 months fixing them.  This is why those companies ask for a 6-month service-credit holiday.  It is also the reason those companies differentiate between Transition and Transformation.  The former is where they implement their cost model; the latter is where they implement their revenue model.

The failing is not within the design of the technical architecture.  Very few senior executives report that failed projects lacked the technical expertise.  Likewise, project management is usually excellent.  Requirements, too, are not usually the problem with business process implementations as most commercial systems implement standardised Level 1 or 2 business processes very well.

Logical Architectures instantiate the subtleties and complexities of the social systems which the software must implement

The first failing in the development of a technical architecture to implement a business process is the design of the Logical Architecture.  Logical Architectures are critical for two reasons: (i) because requirements are one hundred times cheaper to correct during early design phases than during implementation, and (ii) because logical systems are where the social elements of software systems are implemented.  Requirements gathering will naturally throw up a varying range of features, technical requirements, operational dependencies and physical constraints (non-functional requirements) that Solution Architects inevitably miss.  Their focus and value is on sourcing and vendor selection rather than the capture of the subtleties and complexities of human social interactions and the translation of them into architectable business constructs (that is the role of the Business Analyst).

The second failing is the development of trade-space – the ability to make trade-offs between logical designs.  This is the critical stage before freezing the design for the technical architecture.  It is also vital where soft, social systems such as knowledge, decision making and collaboration are a core requirement.  However, trade-space cannot be explored unless there is some form of quantitative analysis.  The usual outcome is to make trade-offs between technical architectures instead.  Like magpies, executives and designers have, by this stage, already chosen their favourite shiny things.  Energy and reputation have already been invested in various solutions, internal politicking has taken place, and the final solution all but eschews assurance as it is pushed through the final stages of governance.
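A minimal sketch of what such quantitative analysis might look like – a weighted scoring of candidate logical architectures before any technical design is frozen.  The criteria, weights and scores are illustrative assumptions.

```python
# A minimal sketch of quantitative trade-space: score candidate logical
# architectures against weighted criteria before freezing the technical
# design. Criteria, weights and scores are illustrative assumptions.

WEIGHTS = {"knowledge_flow": 0.35, "decision_latency": 0.25,
           "collaboration": 0.20, "cost_to_run": 0.20}

candidates = {  # criterion scores, 1 (poor) to 5 (strong)
    "centralised hub": {"knowledge_flow": 4, "decision_latency": 2,
                        "collaboration": 3, "cost_to_run": 4},
    "federated teams": {"knowledge_flow": 3, "decision_latency": 4,
                        "collaboration": 5, "cost_to_run": 2},
}

def weighted_score(scores: dict) -> float:
    """Aggregate a candidate's criterion scores using the shared weights."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(candidates.items(),
                           key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```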

With proper development and assessment of trade-space, companies have the ability to instantiate the complex concepts of front and middle office processes.  Until now, business analysts have hardly been able to articulate the complicated interactions between senior knowledge workers.  These, however, are far more profitable to outsource than the more mechanical clerical work which is already the subject of existing software solutions.  The higher pay bands and longer setup times of senior information work make executive decision making the next frontier in outsourcing.

Logical architectures are not usually developed because there is no easy, standardised means of assessing them.  Despite the obvious cost-effectiveness of logical architectures, most Business Analysts do not have the skills to design them and most Technical Architects move straight to solutions.  Logical Architectures which are quantitatively measurable and designed within a standardised methodology have the potential to give large technical service and BPO organisations greater profits and faster times-to-market.

The future is already upon us.  BPO and enterprise services are already highly commoditised.  The margins in outsourcing are already decreasing, especially as cloud-based software becomes more capable.  If high-cost labour companies (particularly those based in Western democracies) are to move to more value-added middle and front office process outsourcing then they will need to use logical architecture methodologies to design more sophisticated offerings.

In the next blog we will show one method of quantitatively assessing logical architectures in order to assess trade-space and make good financial decisions around the choices of technical designs.