SCENARIO-BASED MODELLING: Storytelling our way to success.

“The soft stuff is always the hard stuff.”

Unknown.

Whoever said ‘the soft stuff is always the hard stuff’ was right. In fact, Douglas R. Conant, co-author of TouchPoints: Creating Powerful Leadership Connections in the Smallest of Moments, commenting on an excerpt from The 3rd Alternative: Solving Life’s Most Difficult Problems by Stephen R. Covey, goes on to note:

“In my 35-year corporate journey and my 60-year life journey, I have consistently found that the thorniest problems I face each day are soft stuff — problems of intention, understanding, communication, and interpersonal effectiveness — not hard stuff such as return on investment and other quantitative challenges. Inevitably, I have found myself needing to step back from the problem, listen more carefully, and frame the conflict more thoughtfully, while still finding a way to advance the corporate agenda empathetically. Most of the time, interestingly, this has led to a more promising path forward and a better relationship, which in turn has made the next conflict easier to deal with.”

Douglas R. Conant.

Conant is talking about the most pressing problem in modern organisations – making sense of stuff.

Sense Making

Companies today are awash with data. Big data. Small data. Sharp data. Fuzzy data. Indeed, there are myriad software companies offering niche and bespoke software to help manage and analyse data. Data, however, is only one-dimensional. To make sense of information is, essentially, to turn it into knowledge, and to do this we need to contextualise it within the frameworks of our own understanding. This is a phenomenally important point in sense-making: the notion of understanding something within the parameters of our own mental frameworks is one that most people will immediately recognise from their everyday work.

Contextualisation

Take, for instance, the building of a bridge. The mental framework by which an accountant understands the risks in building the bridge is quite different from the way an engineer understands those risks, or indeed how a lawyer sees the very same risks. Each was educated differently, and the different mental models they use to conceptualise the same risks lead to different understandings. Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.


For instance, take again the risk of a structural weakness in the new bridge. The accountant will understand it as a financial problem, the engineer as a design issue, and the lawyer as a liability and warranty issue. Ontologically, the ‘thing’ is the same but its context is different. However, in order to make decisions based on their understanding, each person builds a ‘mental model’ to re-contextualise this new knowledge (along with some additional information).

There is a problem.

Just like when we all learned to add fractions when we were 8, we have to have a ‘common denominator’ when we add models together.  I call this calibration, i.e. the art and science of creating a common denominator among models in order to combine and make sense of them.

Calibration

Why do we need to calibrate? Because trying to analyse vast amounts of the same type of information only increases information overload. It is a key tenet of Knowledge Management that increasing variation decreases overload.


We know this to be intuitively correct. We know that staring at reams and reams of data on a spreadsheet will not lead to an epiphany. The clouds will not part, the trumpets will not blare and no shepherd in the sky will point the right way. Overload and confusion occur when one has too much of the same kind of information; making sense of something requires more variety. In fact, overload only increases puzzlement, because of the uncertainty and imprecision in the data. This leads to greater deliberation, which in turn leads to increased emotional arousal. The ensuing ‘management hysteria’ is all too easily recognisable. It leads to cost growth as senior management spend time and energy trying to make sense of a problem, and to further strategic risk and lost opportunity as these same people neglect their own jobs whilst doing so.

De-Mystifying

In order to make sense, therefore, we need to aggregate and analyse disparate, calibrated models. In other words, we need to look at the information from a variety of different perspectives, through a variety of lenses. The notion that IT companies would have us believe, that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle, is absurd.


Information still needs a lot of structural similarity if it’s to be calibrated and analysed by both technology and our own brains.

The diagram below outlines how this is done, but it is only part of the equation. Once the data is analysed and valid inferences are made, we are still only partially on our way to better understanding. We still need those inferences to be contextualised and explained back to us in order for the answers to crystallise. For example, in our model of a bridge, we may make valid inferences of engineering problems based on a detailed analysis of the schedule and the Earned Value, but we still don’t know if that’s correct.
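
To make that concrete, here is a minimal sketch of the kind of Earned Value arithmetic that produces such an inference. The programme figures, function name and thresholds are illustrative assumptions, not data from any real bridge:

    # Minimal sketch: Planned Value, Earned Value and Actual Cost for a
    # fictitious bridge programme, reduced to the two standard EVM indices.

    def ev_indicators(pv: float, ev: float, ac: float) -> dict:
        return {
            "SPI": ev / pv,  # schedule performance index (<1.0 = behind schedule)
            "CPI": ev / ac,  # cost performance index (<1.0 = over cost)
        }

    # Hypothetical figures to date: $40m planned, $32m earned, $45m spent.
    for name, value in ev_indicators(pv=40e6, ev=32e6, ac=45e6).items():
        print(f"{name}: {value:.2f}")

    # SPI 0.80 and CPI 0.71 support an inference of engineering trouble,
    # but only the engineers' own narrative can confirm or refute it.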

Storytelling

As accountants or lawyers, therefore, in order to make sense of the technical risks we need the engineers to play back our inferences in our own language. The easiest way to do this is through storytelling. Storytelling is a new take on an old phenomenon. It is the rediscovery of possibly the oldest practice of knowledge management – a practice which has come to the fore out of necessity, and due to the abysmal failure of IT in this field.

[Diagram: Scenario-Based Model Development]

Using the diagram above in our fictitious example, we can see how the Legal and Finance teams, armed with new analysis-based information, seek to understand how the programme may be recovered. They have nowhere near enough contextual information or technical understanding of either the makeup or execution of such a complex programme, but they do know it isn’t going according to plan.

So, armed with the new analysis, they engage the Project Managers in a series of detailed conversations in which the technical experts tell their ‘stories’ of how they intend to right the ailing project.

Notice the key differentiator between a bedtime story and a business story – DETAIL!  Asking a broad generalised question typically elicits a stormy response.  Being non-specific is either adversarial or leaves too much room to evade the question altogether.  Engaging in specific narratives around particular scenarios (backed up by their S-curves) forces the managers to contextualise the right information in the right way.

From an organisational perspective, specific scenario-based storytelling forces managers into a positive, inquisitive and non-adversarial narrative on how they are going to make things work, without having to painfully translate technical data. Done right, scenario-based modelling is an ideal way to squeeze the most out of human capital without massive IT spend.

Hidden Costs in ICT Outsourcing Contracts


Why are IT outsourcing contracts almost always delivered over-budget and over-schedule?  Why do IT outsourcing contracts almost always fail to achieve their planned value? How come IT contracts seem to be afflicted with this curse more than any other area?


The common answer is (i) that the requirements change, and (ii) that handovers from the pre-contractual phase to in-service management are always done poorly. Both are true, although neither explains the complexity of the situation. If requirements change were the issue then freezing requirements would solve it – it doesn’t. The complexity of large ICT projects derives directly from the fact that not all the requirements are even knowable at the outset. This high level of unknown-unknowns, coupled with the inherent interdependence of business and system requirements, means that requirements creep is not only likely but inevitable. Secondly, handover issues should be solvable by unpicking the architecture and going back to the issue points. This too is never so simple. My own research has shown that the problem is not in the handover itself, but that the subtleties and complexities of the project architecture are not usually pulled through into the management and delivery structures. Simply put, it is one thing to design an elegant IT architecture. It is another thing entirely to design it to be managed well over a number of years. Such management requires a range of new elements and concepts that never exist in architectural design.

The primary factor contributing to excessive cost (including cost from schedule overrun) is poor financial modelling. Simply put, the hidden costs were never uncovered in the first place. Most cost models are developed by finance teams and uncover only the hard costs of the project. There are, however, three cost areas which must be addressed in order to determine the true cost of IT outsourcing.

True Cost of IT

1.  Hard costs. This is the easy stuff to count; the tangibles. These are the standard costs: licensing, hardware, software and so on. They include not just the obvious but also change management (communications and training). The Purchaser of the services should be very careful to build the most comprehensive cost model possible, based on a detailed breakdown of the project structure, ensuring that all the relevant teams input costing details as appropriate.

2.  Soft Costs. The construction industry, for instance, has been building things for over 10,000 years. With this level of maturity one would imagine that soft costs would be well understood. They are not. With project costs in an extremely mature sector often spiralling out of control, it is easy to see how the same problem might afflict the technology sector, which is wildly different almost from year to year.

Soft costs deal with the stuff that is difficult to cost; the intangibles: the cost of information, as well as process and transaction costs. These costs are largely determined by the ratio of revenue (or budget, in the case of government departments) to Sales, General & Administration costs, i.e. the value of the use of information to the business (a rough sketch of this ratio follows the third cost area below). Note that this information cost is not already counted in the cost-of-goods-sold for specific transactions.

Soft costs go to the very heart of how a business or government department manages its information. Are processes performed by workers on high pay-bands? Are workflows long and convoluted? The answers to these questions have an exponential effect on the cost of doing business in an information-centric organisation. Indeed, even though the cost of computing hardware is decreasing, the real cost of information work – labour – is increasing. This is not just a function of indexed costs but also of the increasing accreditation and institutionalisation of the knowledge-worker community. Firstly, there is now more tertiary education for knowledge work which has hitherto been unaccounted for, or was part of an external function. The rise of the Business Analyst and the Enterprise Architect (and a plethora of other “architects”) all serve to drive delivery costs higher. Not only are the costs of this labour increasing, but the labour is now institutionalised, i.e. its place and value are not questioned – despite data suggesting there is limited economic value added through these services (i.e. no great improvement in industry delivery costs).

3.  Project Costs.  Projects are never delivered according to plan.  Requirements are interpreted differently, the cohesion of the stakeholder team can adversely impact the management of the project, even the sheer size and complexity of the project can baffle and bewilder the most competent of teams.  Supply chain visibility, complicated security implementations and difficult management structures all add to project friction and management drag.  There are many more factors which may have an adverse or favourable effect on the cost of performing projects. 
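
As flagged under soft costs above, here is a rough sketch of that ratio. The figures are hypothetical, and expressing the ratio as SG&A share of revenue is simply one convenient way to read it:

    # Hypothetical figures: SG&A spend relative to revenue (or budget) as a
    # crude proxy for what information handling costs the business.

    revenue = 500e6  # annual revenue, or departmental budget
    sga = 110e6      # sales, general & administration spend

    print(f"information intensity: {sga / revenue:.0%} of revenue")
    # A rising ratio against flat revenue suggests long workflows and
    # high pay-band workers doing routine information handling.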

[Graph: IT transition costs]

In the Defence community, the Ph.D. student Ricardo Valerdi created a cost model – COSYSMO – which isolated 14 separate factors peculiar to systems engineering projects and gave these factors cost coefficients in a cost model. Each factor is scored, and the scoring determines an effort multiplier, usually a number between approximately 0.6 and 1.8. Naturally, when all factors are taken into account the overall effect on the contract price is significant.
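
A toy illustration of how such effort multipliers compound is sketched below. The factor names, ratings and coefficients are hypothetical, not the published COSYSMO calibration:

    # Toy COSYSMO-style calculation: each factor rating maps to an effort
    # multiplier, and the multipliers are multiplied together.

    FACTORS = {
        "requirements_understanding": {"low": 1.45, "nominal": 1.0, "high": 0.77},
        "architecture_complexity":    {"low": 0.82, "nominal": 1.0, "high": 1.56},
        "stakeholder_cohesion":       {"low": 1.38, "nominal": 1.0, "high": 0.75},
    }

    def effort_multiplier(ratings: dict) -> float:
        m = 1.0
        for factor, rating in ratings.items():
            m *= FACTORS[factor][rating]
        return m

    base_estimate = 10e6  # nominal contract estimate
    m = effort_multiplier({"requirements_understanding": "low",
                           "architecture_complexity": "high",
                           "stakeholder_cohesion": "nominal"})
    print(f"multiplier {m:.2f} -> adjusted estimate ${base_estimate * m / 1e6:.1f}m")

Note how two unfavourable ratings, each within the 0.6 to 1.8 band, compound into an overall multiplier well above it.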

More importantly, for IT implementations the “project” is not short. IT outsourcing projects are generally split into two phases: Transition and Transformation. Transition involves what outsourcers call “lift-and-shift”: the removal of the data centres from the customer site, and the rear-basing or disposal of the hardware, which allows the company to realise significant cost savings on office space.

During the second phase – Transformation – the business seeks to realise the financial benefits of outsourcing. Here, a myriad of small projects is set in motion to change the way the business operates and thereby realise the cost benefits of computer-based work, i.e. faster processes from a reduced headcount, and better processes performed by workers on a lower pay-band.

IT outsourcing is not just about the boxes and wires. It involves all the systems, hard and soft – the people, processes and data which enable the business to derive value from its information. Just finding all of these moving parts is a difficult task, let alone throwing the whole bag of machinery over the fence to an outsourcing provider. To continue the metaphor, if the linkages between the Purchaser and the Vendor are not maintained then the business will not work. More importantly, certain elements will need to be rebuilt on the Purchaser’s side of this metaphorical fence, only serving to increase costs overall. The financial modelling which takes into account all of these people, processes and systems must, therefore, be exceptional if an outsourcing deal is to survive.

Benefits-Led Contracting: no immediate future for outcome-based agreements

The IACCM rightly points out that key supplier relationships underpinned by robust and comprehensible contracts are essential to the implementation of significant strategic change. Their research identifies a 9.2% impact on the bottom line from contract weakness, the top five causes being:

  • Disagreement over contract scope,
  • Weaknesses in contract change management,
  • Performance failures due to over-commitment,
  • Performance issues due to disagreement over what was committed,
  • Inappropriate contract structures or responsibilities.

Two things are given in this mess: (i) that contractual structures are weak and inappropriate for dealing with high levels of operational complexity and technical risk, and (ii) that legal means of enforcement are cumbersome, expensive and ineffective.

That business is ready to solve this legal problem by contracting for outcomes is (a) nonsense and (b) missing the point. Business is already dealing with the operational and technical risk of large and complex contracts. Business is already structuring many of its agreements to deal with outcomes. Large prime contracts, alliance contracts and performance-based contracts are already commonplace in PFI/PPP and Defence sector deals. That none of these is wholly efficient or effective is a discussion for another time. It is, however, for the legal community to devise more sophisticated ways of contracting in order to solve their side of the problem.


PEOPLE ARE THE KEY

The primary reason outcomes cannot be contracted for is that the vendor doesn’t own the people. This is critical because, without the ability to control and intervene in the delivery of work, the risk increases exponentially. Consequently, the risk premium paid for outcome-based contracts will make them either (a) prohibitively expensive or (b) impossible to perform (within parameters). So, a business which offers you an outcome-based contract is either having you on or about to charge you the earth.

Competitive Advantage in IT

Although a recent article in SearchCIO alludes to competitive advantage generated by IT departments, arguments like this can take the CIO down a dangerous road. The holy grail for many CIOs is to run a department which is both profitable and increases business capability. Mostly, however, IT departments are costly and the subject of constant complaint.

Can IT ever be a profit centre?

Economists have long argued that businesses should strip overhead cost chargebacks (i.e. pure overhead, not costs included in the cost of goods sold) away from business verticals and their processes in order to gain a clearer view of what is profitable and what is not. If they don’t, then smaller, profitable processes are in danger of being swamped by overhead. In this way, many businesses outsource or cut the wrong activities.
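
A toy example of that swamping effect, with hypothetical numbers:

    # A process that is profitable on its direct costs can look
    # unprofitable once a flat overhead chargeback is allocated to it.

    processes = {
        # name: (revenue, direct_costs)
        "invoice_processing": (120_000, 90_000),
        "claims_handling":    (900_000, 700_000),
    }
    it_overhead = 250_000  # total IT chargeback, split evenly here

    share = it_overhead / len(processes)
    for name, (revenue, direct) in processes.items():
        print(f"{name}: direct margin {revenue - direct:+,.0f}, "
              f"after chargeback {revenue - direct - share:+,.0f}")

    # invoice_processing: +30,000 direct, -95,000 after allocation. On the
    # allocated view it looks like a candidate for cutting, even though
    # cutting it removes a genuinely profitable activity.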

It is notoriously difficult to cost IT chargebacks so that market verticals are charged just the right overhead. Should businesses charge their verticals for email? They often do, but isn’t this just a cost of business that the centre should absorb? Isn’t the burden of communication and reporting largely placed onto verticals anyway? So if they could run their business units in a more entrepreneurial way, wouldn’t the cost of IT be significantly reduced?

What if we extend that argument and let IT be a profit centre? Why don’t we let business units find cheaper ways of doing business and compete with the IT department? Security, integration and management-time arguments aside, it is likely that if IT departments were able to charge for the things they were really good at, this would be a source of competitive advantage within the business.

So, there is a good reason why IT departments aren’t profit centres but, of course, this doesn’t solve the problem of the high cost of IT inside business.

Getting Rid of the Help Desk – a structured approach to KM

In a recent article in CIO magazine, Tom Kaneshige argues that the rise of BYOD spells the demise of the traditional Help Desk. He intimates that BYOD has now been overtaken by BYOS – bring-your-own-support! The network-enabled user, with access to huge volumes of information, requires a new kind of Help Desk.

He is right that, ultimately, power-users need better, faster support delivered to them in a format and by people with a deeper understanding of the context and with more intricate solutions.

BYOS is the exception and not the rule. 

Although the IT function is becoming more commoditised, the larger field of knowledge work isn’t, hasn’t been and won’t be commoditised any time soon. Otherwise, any 12-year-old with a laptop would be in with a chance. Help Desks don’t need to be expanded, but they do need to become more mature, agile and integrated into the KM procedures of modern networked enterprises (i.e. those businesses with a heavy KM focus). Expanding the remit of the Help Desk opens the door to colossal cost increases. Internal knowledge management functions need to become more structured, beyond simplistic portals.

INTERNAL KNOWLEDGE MANAGEMENT

In a recent article in McKinsey Quarterly, Tom Davenport argues that organisations need to get a lot smarter in their approaches to supporting knowledge workers. He says that greater use of social media and the internet will harm the business more than help it. Lower-level knowledge workers need more structured support for their processes. High-level knowledge workers, on the other hand, are better supported by an open platform of tools. Getting the right balance is as much art as science.

BYOS is the wrong approach.  It’s a derogation of KM responsibilities.  Organisations need to focus on an approach to KM with the following structures:

  1. A good Help Desk function for knowledge workers involved in highly structured processes.
  2. An IT function which supports a flexible arrangement of tools for advanced knowledge workers.
  3. Knowledge Managers:  people who provide the focal point for certain areas of knowledge.
  4. Portals:  A single entry point for people seeking access to communities of interest.

So, be careful when thinking about Tom Kaneshige’s advice and “blowing up” your Help Desk. IT can be a self-licking lollipop: more tools and more information won’t necessarily improve productivity. At the lower levels, it sometimes makes more economic sense to support the process. It is only at the upper levels of expertise that it is more profitable to support the person.

Sometimes the best defense is deletion – CSO Online – Security and Risk

Sometimes the best defense is deletion – CSO Online – Security and Risk.

The point is prescient. In these early days of Big Data awareness, the battle between information management and store-now/analyse-later can obfuscate two other issues: cost and necessity.

ONE BIG POT

Is the technology really so practical that an organisation can actually move away from structured databases and just stick all its information into one big ‘pot’, to be mined for gold nuggets at a later date?

Storing information (as opposed to just letting stuff pile up) is a costly business, and the decision to store information usually comes from people on higher pay-bands. The decision of where to locate information is often a manual one, which not only has a significant management overhead of its own but also involves co-ordination with other high pay-bands.

THE COMPLEXITY OF INFORMATION


Add to this dilemma the complexities of ‘legal hold’ on material and the identification of ‘discoverable’ items, and suddenly information management looks a lot harder and the siren song of Big Data seems a lot more alluring. The problem is that information that is not valuable to some is valuable to others. Who is qualified to make that decision? Should all information be held, given that it will likely have some enterprise value? The battle is between cost and necessity:

  1. Cost: Deciding what to keep and what to get rid of takes management time and effort that costs money. The problem is that it is neither cost-effective nor good policy to push hold/delete decision-making down to the lowest clerical level. The secret is to have those decisions made by more senior case-workers, but only within their limited remit.
  2. Necessity: The secret is to categorise management information to determine necessity. Use a workflow to cascade and delegate (not avoid) work. As an item moves through the workflow it accumulates metadata. No metadata means no necessity, and therefore the item should be disposed of automatically (setting aside arguments of regulatory compliance).

THE ANSWER

The answer is to automate the deletion of information (other than ‘Legal Hold’ material). Once a document or question has reached the end of the workflow without accumulating any metadata, the information should be disposed of automatically. Case-workers make the decisions to act on the document/question, and metadata is attached by more clerical staff (on lower pay-bands) as the item moves through the workflow. If no metadata is attached, it can be assumed that the item is not important and it is therefore disposed of. Cost is minimised by letting case-workers make decisions of relevance within their own sphere of expertise, without the additional management overhead of de-confliction, meetings and so on. In this way the enterprise makes a collective decision of importance and stores the information accordingly, answering the issue of necessity.
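
A minimal sketch of that disposal rule, assuming a simple workflow in which case-worker stages may attach metadata (the stages and fields here are hypothetical):

    # Items flow through case-worker stages; stages may attach metadata.
    # At the end, no metadata (and no legal hold) means automatic disposal.

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        name: str
        legal_hold: bool = False
        metadata: dict = field(default_factory=dict)

    def dispose_or_keep(item: Item) -> str:
        if item.legal_hold:
            return "keep (legal hold)"
        return "keep" if item.metadata else "delete"

    def claims_review(item: Item):
        # A case-worker decision within a limited remit: tag claims only.
        if "claim" in item.name:
            item.metadata["category"] = "insurance-claim"

    for doc in (Item("claim-2013-004.pdf"), Item("lunch-menu.docx")):
        claims_review(doc)
        print(doc.name, "->", dispose_or_keep(doc))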

Will CIOs Really Focus on the Business in 2013?

“Whilst CIOs are predominantly drawn from the infrastructure segment of ICT there is unlikely to be a shift in focus towards proactive business initiatives.”

The CIO’s commercial prerogatives largely stem from CEO directives, and these tally with other recent CEO surveys from McKinsey & Co and others. It is likely, however, that the need to extend services to corporate clouds across a myriad of new and personal devices, during these times of severe cost pressure, will keep CIOs occupied for the next year at least.

Looking to the future, until business schools focus their corporate decision-making modules on information management and technology enablers, the dearth of IM-savvy senior executives will continue, and with it the weak pull-through into the CIO role. The solution is likely to come in one of two ways, namely:

  1. A cost/complexity inflection point will be reached.  Medium sized businesses will begin to outsource not only their IT but also their IM.  As better IM begins to solve business problems some people will naturally be pulled through into corporate CIO roles at FTE.
  2. Alternatively, clever CEOs will shift the accounting of their IT departments towards profit centres. CIOs will be forced to come up with innovative chargeback models and new services in order to compete, beyond storage, for non-essential services. The good will survive and the bad will move back to being small, in-house IT departments.

EA as Strategic Planning: I’m Still Not Convinced

A recent blog entitled “EA is Strategic Planning” highlights a sentiment among many enterprise architects (a widely abused moniker) that what they are doing is new, ingenious and necessary. I’m still not convinced. Whilst one cannot decry the skills, expertise, knowledge and ability of many enterprise architects, I am yet to see a cogent argument that what they do is either cost-effective or necessary.

Heresy?  Hardly.

The enterprise has done remarkably well since the Dutch East India Company was granted its charter in 1602. The rise of the enterprise has not abated, and diversified companies such as Du Pont and ITT have shown that complexity and size are no obstacle to good, valuable shareholder growth.

IS EA JUST GOOD IT?

I am in two minds: (i) EA has certainly helped the IT community deal with complexity by bringing a portfolio view to ICT programs, but (ii) EA has not added significant value to a listed company (beyond plain good sense and well-delivered IT programs), or reduced its risk to an extent that would warrant a dedicated EA function.

EA has likely been the product of a traditional lack of the requisite skills to translate the social value of collaborative software into corporate monetary value.  It is worth noting that embedded systems (such as robotics) and operational systems (systems that a given corporation simply cannot perform its operations without) are not included in this assessment as their value to the business can be calculated in a simple NPV assessment of projects, i.e. the system will directly result in higher discounted cash flows.  That this should be the job of a programmer is nonsense.  That large commercial enterprises are only beginning to adopt social media systems (which people have been using for years) highlights the general inability of enterprises to grasp the financial value of subtle and complex ICT.

SO WHAT DOES EA OFFER?

Enterprise architecture is not strategic planning. As much as I like David Robertson’s book “Enterprise Architecture as Strategy”, it is farcical to suggest that the structure of an organisation should either come first or drive (beyond setting broad parameters) the functional design of the business model. If EA is to deliver value to the organisation then it must reach beyond large, complex IT. To add real value it must be the function capable of reaching across business siloes to solve the problems the corporation does not even know it has yet.

ENTERPRISE ARCHITECTURE AS A SEPARATE DISCIPLINE

Enterprise architecture must grow out of its humble ICT beginnings if it is to have the boardroom cachet and intellectual gravitas necessary to drive strategy. EA must develop beyond its systems engineering fundamentals and extend its validity into the statistical relationships between technology structures, information performance and shareholder return. Only in this way will EA be able to communicate the financial return which subtle and complex MIS systems can add to a company. Whatever enterprise architects believe they can do, they will not get the opportunity to display their value beyond simple tenders unless they can convince the finance function.

Building a Risk Culture is a Waste of Time

The focus of a good risk management practice is the building of a high-performance operational culture which is baked into the business. Efforts to develop risk cultures only serve to increase risk aversion in senior executives and calcify adversarial governance measures which decrease overall profitability. The right approach to risk management is a comprehensive, holistic risk management framework which integrates tightly with the business.

The financial crisis was largely due to the failure of risk management and over-exposure in leading risk-based institutions. More specifically, the failure of risk management is linked to:

  • The failure to link risk to investment/project approval decision-making. The aim of risk management is not to create really big risk registers, although in many organisations one could be forgiven for thinking that this is the goal. The aim of identifying risks is to calibrate them with the financial models and program plans of the projects so that risks can be comprehensively assessed within the value of the investment. Only once their financial value is quantified and their inputs and dependencies are mapped can realistic and practical contingency planning be implemented for accurate risk management.
  • The failure to identify risks accurately and comprehensively. Most risk toolsets and risk registers reveal a higgledy-piggledy mess of risks, mixed up in a range from the strategic down to the technical. Risks are identified differently at each level (strategic, financial, operational, technical). Technical and operational risks are best identified by overlapping processes of technical experts and parametric systems/discrete-event simulation. Financial risks are best identified by sensitivity analysis and stochastic simulation (see the sketch after this list), while strategic risks will largely focus on brand and competitor risks. Risk identification is the most critical but most overlooked aspect of risk management.
  • The failure to use current risk toolsets in a meaningful way. The software market is flooded with excellent risk modelling and management tools. Risk management programs, however, are usually implemented by vendors with a “build it and they will come” mentality. Risk management benefits investment appraisal at Board and C-suite level; it cannot be expected to percolate up from the bottom.
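
As flagged in the second point above, here is a minimal Monte Carlo sketch of calibrating identified risks into a project’s financial model. The distributions, probabilities and dollar values are hypothetical:

    # 10,000 simulated project outcomes: a triangular base estimate plus
    # two discrete risks, yielding finance-ready P50/P90 cost figures.

    import random

    def project_cost() -> float:
        base = random.triangular(9e6, 14e6, 10e6)  # low, high, mode
        if random.random() < 0.30:                 # 30% chance: design rework
            base += random.uniform(0.5e6, 2e6)
        if random.random() < 0.10:                 # 10% chance: supplier failure
            base += random.uniform(1e6, 4e6)
        return base

    runs = sorted(project_cost() for _ in range(10_000))
    p50, p90 = runs[len(runs) // 2], runs[int(len(runs) * 0.9)]
    print(f"P50 ${p50/1e6:.1f}m, P90 ${p90/1e6:.1f}m")
    # The gap between P90 and P50 is a defensible contingency figure for
    # the investment committee, rather than a row in a risk register.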

RISK MANAGEMENT IS COUNTER-INTUITIVE

None of this means that risk management is a waste of time, but rather that it is counter-intuitive to the business. It is almost impossible to ask executives to push profits to the limit if their focus is on conservatism. Building a culture of risk management is fraught with danger. The result is usually a culture of risk aversion, conservatism, and a heavy, burdensome governance framework that only adds friction to the business lifecycle and the investment/project approval process. Executives, unable to navigate the labyrinthine technicalities of such a system, achieve approvals for their pet programs by political means. Worse, projects that are obviously important to the business actually receive less risk attention than small projects. Employees learn to dismiss risk management and lose trust in senior management.

If risk management is to be an effective and value-adding component, it must be baked into the business as part of the project/investment design phase. If not, then risk management processes just build another silo within the business. The key is to forget about “Risk” as the aim. The goal must be a performance culture with an active and dynamic governance system which acts as a failsafe. The threat of censure is the best risk incentive.

AWARENESS IS NOT MANAGEMENT

Management has long been aware of risk, but this does not always translate into a true understanding of the risk implications of business decisions. Risk policies and practices are often viewed as being parallel to the business and not complementary to it.

Why do most businesses rate themselves highly on risk management behaviours? Largely because businesses do not correlate the failure of projects with the failure of risk and assurance processes.

In a 2009 McKinsey & Co survey (published in June 2012 as “Driving Value from Post-Crisis Operational Risk Management”), it was clear that risk management was seen as adding little value to the business. Responses were collected from the financial services industry – an industry seen as the high-water mark for quantitative risk management.

COLLABORATION IS THE KEY

Risk management needs to become a collaborative process which is tightly integrated with the business. The key is to incentivise operational managers to take calculated risks. As a rule of thumb, there are four key measures for integrating risk management into the business:

  1. Red Teams. Despite writing about collaboration, the unique specialities of risk management often require senior executives to polarise the business. It is often easier to incentivise operational managers to maximise risks and to check them by using Red Teams to minimise risks. Where Red Teams are not cost-effective, a dynamic assurance team (potentially drawn from the PMO) will suffice. Effective risk management requires different skills and backgrounds. Using quantitative and qualitative risk management practices together requires a multi-disciplinary team of experts to draw out all the risks and calibrate them within the financial models and program schedules, so that investment committees can make sensible appraisals.
  2. Contingency Planning. Operational risk management should usually boil down to good contingency planning. Given the unique skill sets in risk management, operational teams should largely focus on contingency planning and leave the financial calibration to the assurance/Red Teams to sweep up.
  3. Build Transparency through Common Artefacts. The most fundamental element of a comprehensive risk process is a lingua franca of risk – and that language is finance. All risk management tools need to percolate up into a financial model of the project, so that the decision-making process is based on a comprehensive assessment and, when it comes time to optimise the program, the various risky components can be traced and unpicked.
  4. Deeper Assurance by the PMO. The PMO needs to get involved in the ongoing identification of risk. Executives try to game the governance system, and the assurance team simply does not have the capacity for 100% audit and assurance. The PMO is by far the best structure to assist in quantitative and qualitative risk identification because it already has oversight of 100% of projects and their financial controls.

Traditional risk management practices only provide broad oversight. With the added cost pressures that businesses now feel it is impossible to create large risk teams funded by a fat overhead. The future of risk management is not for companies to waste money by investing in costly and ineffective risk-culture programs.  Good risk management can only be developed by tightly integrating it with a GRC framework that actively and dynamically supports better operational performance.

Wall Street Beat? The Fiction of 2013 IT Spending Forecasts

Wall Street Beat: 2013 IT Spending Forecasts Look Upbeat.

If this is the case, then an organisation with a US$100m IT spend is set to increase its capex by $3.3m this year and $6.1m next year. That is an almost $10m increase in capex over the next two years.
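
A back-of-envelope check, assuming the dollar amounts were derived from forecast growth rates of 3.3% and 6.1% applied to the original base:

    # Reproducing the arithmetic above; compounding the second year on the
    # grown base gives slightly more than the simple figure.

    spend = 100e6                       # current annual IT capex
    y1 = spend * 0.033                  # +$3.3m this year
    y2_simple = spend * 0.061           # +$6.1m on the original base
    y2_compound = (spend + y1) * 0.061  # about +$6.3m if the base has grown
    print(f"cumulative: ${(y1 + y2_simple)/1e6:.1f}m simple, "
          f"${(y1 + y2_compound)/1e6:.1f}m compounded")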

I am not sure where they get these figures from.

[Chart: McKinsey & Co, technology spending and the tech rebound]

If we assume the standard wisdom that economies traditionally take two years to recover from recession and a further two years to return to trend growth, then it will be 2017 before IT budgets hit 3.4% growth. Given the savage cuts in IT budgets after the recent recession(s), I think these figures are conservative. A further factor to consider is that the ICT industry is so highly segmented that generalised growth figures are meaningless.

Looking at the finances of the tech rebound of 2003/4 (shown above in the McKinsey & Co chart) we can see that, at the high end, IT capex of $73m accounts for 12% of the overall budget. At this rate, 6% growth equals a 36.5% growth in capex by 2015.

This is, of course, nonsense. The moral of the story: don’t take reports of astonishing growth in the tech sector at face value. Research has shown that the ICT sector is made up of so many tiny segments that even McKinsey’s figures are to be viewed with caution.

In sum, the burst of the 2001 tech bubble saw IT budgets plummet by roughly 70%. There are no reliable current figures for the general sum of cost cuts per sector in ICT budgets. However, if we count on 10-25% overall budget reductions, it will be well beyond 2017 before we see budgets returning to pre-2008 levels in real terms. If anything is certain, however, it is that tech always surprises.