Hidden Costs in ICT Outsourcing Contracts


Why are IT outsourcing contracts almost always delivered over budget and behind schedule? Why do they almost always fail to achieve their planned value? And why does IT seem more afflicted by this curse than any other sector?


The common answer is that (i) the requirements change, and (ii) handovers from the pre-contractual phase to in-service management are always done poorly. Both are true, although neither explains the complexity of the situation. If requirements change were the issue then freezing requirements would solve it – it doesn't. The complexity of large ICT projects derives directly from the fact that not all the requirements are even knowable at the outset. This high level of unknown-unknowns, coupled with the inherent interdependence of business and system requirements, means that requirements creep is not just likely but inevitable. Secondly, handover issues should be solvable by unpicking the architecture and going back to the issue points. This too is never so simple. My own research has shown that the problem is not in the handover itself but that the subtleties and complexities of the project architecture are not usually pulled through into the management and delivery structures. Simply put, it is one thing to design an elegant IT architecture. It is another thing entirely to design it to be managed well over a number of years. Such management requires a range of elements and concepts that never appear in the architectural design.

The primary factor contributing to excessive cost (including cost from schedule overrun) is poor financial modelling. Simply put, the hidden costs were never uncovered in the first place. Most cost models are developed by finance teams and capture only the hard costs of the project. In total, however, there are 3 cost areas which must be addressed in order to determine the true cost of IT outsourcing.

True Cost of IT

1.  Hard costs.  This is the easy stuff to count; the tangibles. These are the standard costs: licensing, hardware, software etc. They include not just the obvious items but also change management (communications and training). The Purchaser of the services should be very careful to build the most comprehensive cost model possible, based on a detailed breakdown of the project structure, ensuring that all the relevant teams input costing details as appropriate.

2.  Soft Costs.  The construction industry, for instance, has been building things for over 10,000 years. With that level of maturity one would imagine that soft costs would be well understood. They are not. If project costs in an extremely mature sector often spiral out of proportion, it is easy to see how the same problem afflicts a technology sector which is wildly different almost from year to year.

Soft costs deal with the stuff that is difficult to cost; the intangibles: the cost of information, as well as process and transaction costs. These costs are largely determined by the ratio of revenue (or budget, for government departments) to Sales, General & Administration costs, i.e. the value the business derives from its use of information. Note that this information is not already counted in the cost-of-goods-sold for specific transactions.
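
As a minimal sketch, here is one way to read that ratio in code, expressed here as SG&A's share of revenue; the function name and figures are illustrative assumptions, not a calibrated model:

    # One reading of the ratio above: SG&A spend relative to revenue
    # (or budget, for a government department). Figures are illustrative.
    def information_intensity(revenue: float, sga_costs: float) -> float:
        """Share of revenue consumed by Sales, General & Administration."""
        return sga_costs / revenue

    ratio = information_intensity(revenue=250e6, sga_costs=45e6)
    print(f"SG&A ratio: {ratio:.1%}")  # -> 18.0%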

Soft costs go to the very heart of how a business or government department manages its information. Are processes performed by workers on high pay-bands? Are workflows long and convoluted? The answers to these questions have an exponential effect on the cost of doing business in an information-centric organisation. Indeed, even though the cost of computing hardware is decreasing, the real cost of information work – labour – is increasing. This is not just a function of indexed costs but also of increasing accreditation and institutionalisation in the knowledge worker community. Firstly, there is greater tertiary education for knowledge work which was hitherto unaccounted for or part of an external function. The rise of the Business Analyst and the Enterprise Architect (and a plethora of other “architects”) serves to drive delivery costs much higher. Not only are the costs of this labour increasing but the labour is now institutionalised, i.e. its place and value are not questioned – despite data suggesting limited economic value added through these services (i.e. no great improvement in industry delivery costs).

3.  Project Costs.  Projects are never delivered according to plan. Requirements are interpreted differently, the cohesion of the stakeholder team can adversely affect the management of the project, and the sheer size and complexity of the project can baffle and bewilder the most competent of teams. Supply chain visibility, complicated security implementations and difficult management structures all add project friction and management drag. Many more factors may have an adverse or favourable effect on the cost of performing projects.

IT Transition Cost Graph

In the Defence community, the PhD student Ricardo Valerdi created a cost model – COSYSMO – which isolated 14 separate factors peculiar to systems engineering projects and gave these factors cost coefficients in a cost model. Each factor is scored, and the scoring determines an effort multiplier, usually a number between approximately 0.6 and 1.8. Naturally, when all 14 factors are taken into account the overall effect on the contract price is significant.
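
As a rough illustration of how such a model behaves (a minimal sketch: the driver names and multiplier values below are illustrative assumptions, not Valerdi's calibrated coefficients):

    # Minimal sketch of a COSYSMO-style effort multiplier calculation.
    # Factor names and values are illustrative; the point is that
    # per-factor multipliers compound across the whole contract.
    from math import prod

    effort_multipliers = {
        "requirements_understanding": 1.3,   # poorly understood requirements
        "architecture_understanding": 1.1,
        "stakeholder_team_cohesion":  1.2,
        "process_capability":         0.9,   # better than nominal
        # ... COSYSMO scores 14 such drivers in total
    }

    nominal_effort = 10_000  # person-hours, illustrative
    adjusted = nominal_effort * prod(effort_multipliers.values())
    print(f"Adjusted effort: {adjusted:,.0f} person-hours")  # ~15,444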

More importantly, for IT implementations the “project” is not short. IT outsourcing projects are generally split into 2 phases: Transition and Transformation. Transition involves what outsourcers call “lift-and-shift”: the removal of the data centres from the customer site and the rear-basing or disposal of the hardware, which allows the company to realise significant cost savings on office space.

During the second phase – Transformation – the business seeks to realise the financial benefits of outsourcing. Here, a myriad of small projects are set in motion to change the way the business operates and thereby realise the cost benefits of computer-based work, i.e. faster processes from a reduced headcount and better processes performed by workers on a lower pay-band.

IT outsourcing is not just about the boxes and wires. It involves all the systems, hard and soft – the people, processes and data which enable the business to derive value from its information. Just finding all of these moving parts is a difficult task, let alone throwing the whole bag of machinery over the fence to an outsourcing provider. To continue the metaphor, if the linkages between the Purchaser and the Vendor are not maintained then the business will not work. More importantly, certain elements will need to be rebuilt on the Purchaser's side of this metaphorical fence, serving only to increase costs overall. The financial modelling which takes into account all of these people, processes and systems must, therefore, be exceptional if an outsourcing deal is to survive.

INFOGRAPHICS: the worst form of risk management

Aristotle wrote that “metaphor is the worst form of argument”. He was right. If you have something to show or prove, then do so precisely and in a way which is meaningful and useful to your target audience.

Recently the guys at Innovation of Risk posted an article on the use of infographics to analyse risk. I don't know who coined the term “PowerPoint Engineering” but most infographics fit neatly into this category. Infographics can save a presentation or they can sink it, but mostly they are used to convey ill-conceived and poorly thought out ideas which snowball into worse-run projects. The best advice is to take these bath-tub moments (why do people think they have great ideas when they're washing?) and run the analysis with an expert using an expert system. If you can't do that then (a) you're in the wrong department for having the idea in the first place, and (b) chances are there is a tonne of minute but important detail you have missed.

Whilst I think that the visual display of information is vital to achieving stakeholder buy-in, it is also clear that imprecise PPT-engineering masquerading as infographics is the worst form of management snake-oil there is. An erstwhile systems engineering mentor of mine used to say, “if you think they're BS'ing you then ask them what the arrows mean”. Nine times out of ten they won't have a clue.

The Failure of Risk: lessons from the GFC

We live in uncertain times. The failures in risk management which led to the global financial crisis have created an unprecedented set of circumstances. Not only are regulators imposing heavier compliance burdens but shareholders and investors are demanding greater reporting and higher levels of information transparency. On top of all this, operational costs are too tight to carry the overhead of separate risk and assurance functions.

When the analysis is done there are 6 key lessons to learn from the global financial crisis:

  1. Integrate G, R & C.  In medium and large corporations isolated risk management practices actively work against the business.  Technical and operational experts will identify risks from experience and create risk slush-funds to mitigate them.  These increase the cost of business and in many cases price the company out of the market.  In an integrated GRC system the firm is able to manage risks across business units so that risk funds are held centrally and do not add a premium to initial project costs.  Risk identification and analysis percolate from the bottom up but governance is driven from the top down.  In an integrated system, both work within the business lifecycle to add the right mix of checks and balances so that no additional drag is added to investment/project approvals.
  2. Make Passive GRC Active.  Systems need to be active.  They need to hunt out risk, define it, quantify it and measure its dependencies.  Then, those same systems need to bring it to the attention of executives so that they may make informed investment decisions.  In the end, humans follow the law of least effort: employees will follow the path of least resistance in designing and gaining approval for their projects.  GRC must not follow a system of honour & audit but rather one of active assurance.  When GRC systems are passive the business lifecycle becomes clogged with nugatory program reviews that turn into technical sales pitches by design teams.  Such events and practices only serve to affirm the belief that GRC is a legal burden which exists only to satisfy the needs of regulatory compliance.  Raytheon, for instance, has an excellent system of governance-by-exception.  Its Integrated Product Design System (IPDS) has active governance measures and allows Raytheon to manage a pipeline of thousands of critical projects dynamically and by exception.
  3. Get Granular.  When projects fail it is not usually because the risks have not been adequately managed.  The primary problems in risk practice are failures of risk identification and analysis.  Managers are simply unable to deal with risks at a granular level and then weigh them up on a per-project basis.  This is largely because the technical skills needed to do so are not within the standard skill sets of most executives (although they are within the more mathematical ones of the FS&I industry).  Where this disparity exists, businesses need to develop separate Red Teams or Assurance Teams, either from the existing PMO or from hand-picked executives.
  4. Bottom Up & Top Down.  Risk management is bottom-up but governance is top-down.  The technical skills and software reliance involved in effective risk management mean that the entire practice usually percolates from the bottom of a business upwards.  Consequently, unless it fits within a comprehensive governance framework it will be open to being gamed by senior executives.  This is why major projects which are seen as must-win are often approved with little or no governance or assurance.
  5. Risk Ownership.  Risks need to be owned at the lowest responsible level.  This is to say that when things go wrong the person at the lowest level who has the greatest amount of operational responsibility must be able to take charge to mitigate all aspects of the risk.  It is vital that the person owning the risk be able to recognise the variables which may see the risk realised.  It is also critical that the risk owner understand the corporate decision points, i.e. the points at which the contingency plans should be triggered.
  6. Invest in the Right Type of Risk Culture.  Risk should not be a dirty word.  Risks are inherent in every project and balancing them quantitatively and qualitatively is an essential skill for all senior executives.  Risk should be as much about seizing opportunity as it is about guarding profitability.  Businesses need to invest in top talent in order to drive good risk practices from the top.  Effective, active GRC involves a complex array of tools, practices, structures and processes which need an experienced senior executive to drive them constantly and consistently in the business.  The softer side of risk management cannot be neglected.  The nature of risk forces people onto the defensive as they attempt to justify all aspects of their project designs.  CROs need to help executives understand that all projects must balance risk if they are to push profitability.  Otherwise, companies will be mired in conservative, risk-averse cultures which only add friction and reduce profitability.

Risk practices need to work together inside a single, comprehensive risk framework that goes beyond simple probabilistic modelling and disjointed regulatory compliance.   Businesses need to implement processes which not only integrate the business lifecycle but actively increase both liquidity and opportunity for risk to be seen to add real value to the company.   Only once this is achieved can risk management cease to be an operational drag for the business and become a value-adding proposition which works actively to increase the profit and performance of a company.

 

Measuring Risk in Logical Processes

Logical architecture is valuable in the design of large systems for 2 key reasons: (i) it helps developers instantiate the softer concepts and more social aspects of large systems, and (ii) it provides another review-gate to iron out design flaws before proceeding to the physical system. Military systems provide good examples of the value of logical architecture. Many Defence systems are so complex that they are never developed at all; if they are, they are often broken down into such small components that the integration becomes unmanageable. Joint Effects Targeting, counter-IED exploitation, systems to fuse operational and intelligence data, and the nuclear firing chain are all areas with enormous social input, so the development of a logical architecture is paramount.

Unless a person has a pedigree in military systems, logical architecture is usually the least understood and least used part of the design process. Certainly in Agile environments, or any area requiring rapid application development where the application is fixed (portals, billing systems, SAP etc), logical architecture design is nugatory.

In this blog we look at logical process design but the method is equally applicable to the entire logical design phase.

BENEFITS OF LOGICAL ARCHITECTURE

When designing processes, however, logical architecture is an invaluable tool for measuring, assessing and comparing risk before moving to the more expensive technical design and implementation phases. Because logical designs can be created, compared and assessed quickly, they become an excellent technical/commercial appraisal tool. Cross-functional teams of executives and architects can collaborate on logical designs before a GO/NO-GO investment decision and thereby create 3 major benefits:

  • Reduce the time of the physical design cycle.
  • Increase executive involvement and the effect of executive steering on designs.
  • Significantly reduce the risk in physical designs.

PROCESS VALUE

The Value of a Process

There is a way of viewing, and thereby measuring, risk in logical processes. Ultimately, the value of a process is its cost discounted by its risk. So, a process which has a total cost of $100,000 and a 60% chance of success has a nominal value (not “worth” or “price”) of $60,000. Which is to say that, on average, the business will realise only 60% of its value. This is roughly the same as saying that, on average, for each $100k in earnings the firm will spend $40k on faults. Whether the value indicator is dollars or white elephants does not matter, so long as it is applied consistently over the choices. This simple measuring mechanism allows senior executives to engage in the design process and forces architects to help assign costs to difficult design components.
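
The same arithmetic as a two-line function (a sketch; the names are mine):

    # Nominal value of a process: total cost weighted by probability of success.
    def nominal_value(total_cost: float, p_success: float) -> float:
        return total_cost * p_success

    print(nominal_value(100_000, 0.60))  # -> 60000.0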

COSTS & CONCEPTS

The difficulty is in ascribing costs to concepts. In order to do this the team must first instantiate the concepts in some form of logical structure, such as a software system or a management committee/team. The team then ascribes an industry benchmark cost to this structure, accounting for uncertainty. Uncertainty is important because the benchmark cost will not represent the actual cost exactly (in fact the benchmark cost should represent the 50% confidence cost). So, when it comes to determining the probability it is vital to use the experts to estimate the range of what the construct could cost (as little as, and as much as).
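
A minimal sketch of such a three-point estimate, using a triangular distribution (the figures are illustrative assumptions):

    # Capture expert uncertainty as a three-point (triangular) estimate
    # around the industry benchmark. Figures are illustrative.
    import random
    from statistics import median

    low, benchmark, high = 80_000, 120_000, 200_000  # as little as / benchmark / as much as
    samples = [random.triangular(low, high, benchmark) for _ in range(100_000)]
    # With a right-skewed range the median lands above the benchmark,
    # which is exactly why the benchmark alone cannot be taken as the 50% cost.
    print(f"median sampled cost: {median(samples):,.0f}")  # ~130,000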

PROCESS RISK

The difficulty with measuring logical architectures is in measuring concepts. Concepts usually have no intrinsic value and no standard means of comparison. In short: (i) assemble a small, cross-functional team of experts, (ii) ascribe costs (with uncertainty) to the concepts, (iii) apply a risk equation, and then (iv) simulate. One possible equation is:

R = (P × Ct × T × 100) / (Cy × Sl)

Where:

  • R is the overall risk.
  • P is the probability of an adverse event occurring in the process.
  • Ct is the criticality of the location of the event, in the process.
  • T is the likely time it will take to notice the manifestation of the risk (i.e. feedback mechanisms).
  • Cy is the availability of a contingency plan which is both close and effective to the point of the problem, in the process, and
  • Sl is the likelihood of success that the process will be fixed and achieve an acceptable outcome.
  • 100 simply makes it easier for the team to see differences between scores.

In this equation, we determine the overall risk of the process.  It does not have to be perfect but rather it just needs to be applied consistently and account for the major variables.  If applied rigorously and evenly, measuring risk in logical architectures has the ability to reduce the design cycle, increase the certainty of the choice, build better stakeholder buy-in and significantly reduce the risk in the physical solution.
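
A minimal Monte Carlo sketch of the approach, combining the equation with three-point estimates for each variable (all distributions and scores below are illustrative assumptions):

    # Monte Carlo sketch of the risk equation above; all scores illustrative.
    import random
    from statistics import mean

    def process_risk(p, ct, t, cy, sl):
        return 100 * (p * ct * t) / (cy * sl)

    def sample():
        return process_risk(
            p=random.triangular(0.05, 0.30, 0.10),   # probability of adverse event
            ct=random.triangular(1, 5, 3),           # criticality of event location
            t=random.triangular(1, 10, 4),           # time to notice manifestation
            cy=random.triangular(1, 5, 2),           # contingency plan availability
            sl=random.triangular(0.4, 0.95, 0.7),    # likelihood the fix succeeds
        )

    scores = [sample() for _ in range(50_000)]
    print(f"mean risk score: {mean(scores):.1f}")
    # Candidate logical designs can be compared by running the same
    # simulation over each and ranking the score distributions.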

Building a Risk Culture is a Waste of Time

The focus of a good risk management practice is the building of a high-performance operational culture which is baked into the business. Efforts to develop risk cultures only serve to increase risk aversion in senior executives and calcify adversarial governance measures which decrease overall profitability. The right approach to risk management is a comprehensive, holistic risk management framework which integrates tightly with the business.

The financial crisis is largely due to the failure of risk management and over-exposure in leading risk-based institutions. More specifically, the failure of risk management is linked to:

  • The failure to link risk to investment/project approval decision making.  The aim of risk management is not to create really big risk registers, although in many organisations one could be forgiven for thinking that this is the goal.  The aim of identifying risks is to calibrate them with the financial models and program plans of the projects so that risks can be comprehensively assessed within the value of the investment.  Only once their financial value is quantified and their inputs and dependencies are mapped can realistic and practical contingency planning be implemented for accurate risk management.  (A minimal sketch of this calibration follows the list below.)
  • The failure to identify risks accurately and comprehensively.  Most risk toolsets and risk registers reveal a higgledy-piggledy mess of risks, mixed up in a range from the strategic down to the technical.  Risks are identified differently at each level (strategic, financial, operational, technical).  Technical and operational risks are best identified by overlapping processes of technical experts and parametric systems/discrete-event simulation.  Financial risks are best identified by sensitivity analysis and stochastic simulation, while strategic risks will largely focus on brand and competitor risks.  Risk identification is the most critical but most overlooked aspect of risk management.
  • The failure to use current risk toolsets in a meaningful way.  The software market is flooded with excellent risk modelling and management tools.  Risk management programs, however, are usually implemented by vendors with a “build it and they will come” mentality.  Risk management benefits investment appraisal at Board and C-Suite level and it cannot be expected to percolate from the bottom up.
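
As referenced in the first point above, here is a minimal sketch of folding identified risks into an investment appraisal; the expected-monetary-value approach and all figures are illustrative assumptions, not a prescription:

    # Fold identified risks into an investment appraisal as expected
    # monetary value (probability x impact), rather than leaving them
    # in a register. All figures are illustrative.
    risks = [
        {"name": "integration overrun", "probability": 0.30, "impact": 400_000},
        {"name": "vendor delay",        "probability": 0.15, "impact": 250_000},
        {"name": "rework from reqs",    "probability": 0.40, "impact": 150_000},
    ]

    base_npv = 1_200_000
    emv = sum(r["probability"] * r["impact"] for r in risks)
    print(f"risk-adjusted NPV: {base_npv - emv:,.0f}")  # 1,200,000 - 217,500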

RISK MANAGEMENT IS COUNTER-INTUITIVE

All this does not mean that risk management is a waste of time, but rather that it is counter-intuitive to the business. It is almost impossible to ask most executives to push profits to the limit if their focus is on conservatism. Building a culture of risk management is fraught with danger. The result is usually a culture of risk aversion, conservatism and a heavy, burdensome governance framework that only adds friction to the business lifecycle and the investment/project approval process. Executives, unable to navigate the labyrinthine technicalities of such a system, achieve approvals for their pet programs by political means. Worse, projects that are obviously important to the business actually receive less risk attention than small projects. Employees learn to dismiss risk management and lose trust in senior management.

If risk management is to be an effective and value-adding component it must be baked into the business as part of the project/investment design phase. If not, then risk management processes just build another silo within the business. The key is to forget about “Risk” as the aim. The goal must be a performance culture with an active and dynamic governance system which acts as a failsafe. The threat of censure is the best risk incentive.

AWARENESS IS NOT MANAGEMENT

Management has long been aware of risk but this does not always translate into a true understanding of the risk implications of business decisions. Risk policies and practices are often viewed as being parallel to the business and not complementary to it.

Why is it that most businesses rate themselves highly on risk management behaviours? Largely because businesses do not correlate the failure of projects with the failure of risk and assurance processes.

In a 2009 McKinsey & Co survey (published in June 2012 “Driving Value from Post-Crisis Operational Risk Management”) it was clear that risk management was seen as adding little value to the business.  Responses were collected from the financial services industry – an industry seen as the high-water mark for quantitative risk management. 

COLLABORATION IS THE KEY

Risk management needs to become a collaborative process which is tightly integrated with the business.  The key is to incentivise operational managers to make calculated risks.  As a rule of thumb there are 4 key measures to integrate risk management into the business:

  1. Red Teams.  Despite writing about collaboration, the unique specialities of risk management often require senior executives to polarise the business.  It is often easier to incentivise operational managers to maximise risks and to check them by using Red Teams to minimise risks.  Where Red Teams are not cost-effective, a dynamic assurance team (potentially drawn from the PMO) will suffice.  Effective risk management requires different skills and backgrounds.  Using quantitative and qualitative risk management practices together requires a multi-disciplinary team of experts to draw out all the risks and calibrate them within the financial models and program schedules so that investment committees can make sensible appraisals.
  2. Contingency Planning.  Operational risk management should usually just boil down to good contingency planning.  Due to the unique skill sets in risk management, operational teams should largely focus on contingency planning and leave the financial calibration up to the assurance/Red teams to sweep up.
  3. Build Transparency through Common Artefacts.  The most fundamental element of a comprehensive risk process is a lingua franca of risk – and that language is finance.  All risk management tools need to percolate up into a financial model of a project.  This is so that the decision-making process is based on a comprehensive assessment and so that, when it comes to optimising the program, the various risky components can be traced and unpicked.
  4. Deeper Assurance by the PMO.  The PMO needs to get involved in the ongoing identification of risk.  Executives try to game the governance system and the assurance team simply does not have the capacity for 100% audit and assurance.  The PMO is by far the best structure to assist in quantitative and qualitative risk identification because it already has oversight of 100% of projects and their financial controls.

Traditional risk management practices only provide broad oversight. With the added cost pressures that businesses now feel it is impossible to create large risk teams funded by a fat overhead. The future of risk management is not for companies to waste money by investing in costly and ineffective risk-culture programs.  Good risk management can only be developed by tightly integrating it with a GRC framework that actively and dynamically supports better operational performance.

Wall Street Beat? The Fiction of 2013 IT Spending Forecasts

Wall Street Beat: 2013 IT Spending Forecasts Look Upbeat.

If this is the case then an organisation with a USD $100m IT spend is set to increase its capex by $3.3m this year and $6.1m next year. That is an increase of almost $10m in capex over the next 2 years.

I am not sure where they get these figures from.

McKinsey & Co chart: technology spending and the tech rebound

If we assume the standard wisdom that economies traditionally take 2 years to recover from recession and a further 2 years to return to trend growth, then it will be 2017 before IT budgets hit 3.4% growth. Given the savage cuts in IT budgets after the recent recession(s), I think even these figures are conservative. A further factor to consider is that the ICT industry is so highly segmented that generalised growth figures are meaningless.

Looking at the finances of the tech rebound of 2003/04 (shown above in the McKinsey & Co chart) we can see that – at the high end – IT capex of $73m accounts for 12% of the overall budget. At this rate, 6% growth equals a 36.5% growth in capex by 2015.

This is, of course,  nonsense.  The moral of the story is:  don’t look at reports of astonishing growth in the tech sector.  Research has shown that the ICT sector is made up of so many tiny segments that even McKinsey’s figures are to be viewed with caution.

In sum, the burst of the 2001 tech bubble saw IT budgets plummet roughly 70%. There are no reliable current figures as to the general size of cost cuts per sector in ICT budgets. However, if we assume 10-25% overall budget reductions then it will be well beyond 2017 before we see budgets returning to pre-2008 levels in real terms. If anything is certain, however, tech always surprises.
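
A back-of-envelope check on that claim (a sketch, assuming the ~3.3% growth rate from the forecasts above applies from 2013, and the 10-25% cut range from the text):

    # If budgets were cut by 10-25% and then grow at ~3.3% a year from
    # 2013, when do they regain pre-2008 levels in real terms?
    from math import log, ceil

    def recovery_year(cut: float, growth: float = 0.033, start: int = 2013) -> int:
        years = log(1 / (1 - cut)) / log(1 + growth)
        return start + ceil(years)

    for cut in (0.10, 0.25):
        print(f"{cut:.0%} cut -> recovered by ~{recovery_year(cut)}")
    # 10% cut -> ~2017; 25% cut -> ~2022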

 

ALIGNMENT: Building a Closer Relationship Between Business and IT

The business gurus Kaplan and Norton describe “Alignment” as a state where all the units of an organisational structure are brought to bear to execute corporate strategy in unison. When Alignment is executed well it is a huge source of economic value. When it is executed badly it is a colossal source of friction which can cripple the business. The authors go on to note:

Alignment

IT DOESN’T NEED ALIGNMENT, IT NEEDS BETTER UNDERSTANDING

IT and the Business speak of alignment in two radically different ways. The Business talks about alignment between business units; when speaking of tech it uses words and phrases such as ROI and operational performance. IT talks about alignment in a way that makes it feel as though it matters to the business. That profitable, customer-facing business units could achieve more if the corporate centre were to align them under a single, cohesive strategy is one thing. That IT departments fail to execute strategy, or even deliver operational effectiveness, through poor understanding of requirements, an inability to see the technical reality of commercial value, or failure to realise the social cohesion which enterprise software systems need – that is not mis-alignment, it is just bad practice.

THE RELATIONSHIP BETWEEN THE BUSINESS AND ICT IS DIVERGING

The increasing capabilities of a smarter, more mobile, more virtual workforce mean a greater commoditisation of knowledge work. With this comes the polarisation of Business and ICT. A broader ICT function with a wider array of narrower and deeper areas of expertise will, increasingly, be incapable of coding the more subtle and complex social aspects of human collaboration. In such a world the ICT agenda must be set by the corporate centre.


ICT NEEDS TO FOCUS ON EXECUTION NOT ALIGNMENT

ICT's economic value will be realised when it (and therefore Enterprise Architects) can support business units to reach across to each other to create valuable products and services which justify the corporate overhead. McKinsey & Co, for instance, focus heavily on central knowledge management. This enables research to drive service-line improvement in relevant sectors. IBM spends over £3bn on R&D and the development of leading-edge products years ahead of their time. ICT needs to focus on the execution of corporate strategy, not alignment. Alignment is a structural issue whereas execution is a functional issue. Stop tinkering with the structures and focus on the functions and operations.

GOVERNANCE – ALIGNMENT AT A PRICE

Moves to improve the business relevance of ICT usually result in heavier, more burdensome technical governance. The finance function imposes capital project controls on technology projects and insists that benefits be quantified. Although greater cost transparency will bring IT closer to the Business, heavier ICT governance only serves to drive ICT investment underground. Pet projects abound, useless apps proliferate and ICT costs continue to rise. Meanwhile, in a perverse inverse relationship, assurance becomes even lighter on larger programs.

Alignment takes strong leadership and clear definitions of business intent. A fancy set of IT tools is not necessary for alignment; rather, such tools are important when it comes to agility. Mis-alignment is the fault of deep-rooted cultural divisions which can only be overcome through strict adherence to financial value and the use of a lingua franca engendered through a common architectural framework. If ICT is to realise its potential and add real financial value then it must actively support the real-time execution of business operations.

BUSINESS PROCESS FAILURES: the importance of logical architectures

business process risk. chart

In a 2012 survey by McKinsey & Co, IT executives noted that their top priority was ‘improving the effectiveness and efficiency of business processes’. One of the critical failings of IT, however, is the inability to implement effective and efficient business process architectures in the first place. The IT priorities in the chart above only serve to highlight what we already know: that IT service companies implement processes badly.

Why?

Whether through a failing of Requirements or Integration (or both), IT service companies often implement inappropriate business process architectures and then spend the first 6 to 12 months fixing them. This is why those companies ask for a 6-month service-credit holiday. It is also why those companies differentiate between Transition and Transformation: the former is where they implement their cost model, the latter is where they implement their revenue model.

The failing is not within the design of the technical architecture.  Very few senior executives report that failed projects lacked the technical expertise.  Likewise, project management is usually excellent.  Requirements, too, are not usually the problem with business process implementations as most commercial systems implement standardised Level 1 or 2 business processes very well.

Logical Architectures instantiate the subtleties and complexities of the social systems which the software must implement

The first failing in the development of a technical architecture to implement a business process is the design of the Logical Architecture. Logical Architectures are critical for two reasons: (i) because requirements are one hundred times cheaper to correct during early design phases than during implementation, and (ii) because logical systems are where the social elements of software systems are implemented. Requirements gathering will naturally throw up a varying range of features, technical requirements, operational dependencies and physical constraints (non-functional requirements) that Solution Architects inevitably miss. Their focus and value lie in sourcing and vendor selection rather than in capturing the subtleties and complexities of human social interactions and translating them into architect-able business constructs (that is the role of the Business Analyst).

The second failing is the development of Trade-Space: the ability to make trade-offs between logical designs. This is the critical stage before freezing the design for the technical architecture. It is also vital where soft, social systems such as knowledge, decision making and collaboration are a core requirement. However, trade-space cannot be effected unless there is some form of quantitative analysis. The usual outcome is to make trade-offs between technical architectures instead. Like magpies, executives and designers have, by this stage, already chosen their favourite shiny things. Energy and reputation have already been invested in various solutions, internal politicking has taken place, and the final solution eschews almost all assurance as it is pushed through the final stages of governance.

With proper development and assessment of trade-space, companies have the ability to instantiate the complex concepts of front and middle office processes. Until now, business analysts have hardly been able to articulate the complicated interactions between senior knowledge workers. These, however, are far more profitable to outsource than the more mechanical clerical work which is already the subject of existing software solutions. The higher pay-bands and longer setup times of senior information work make executive decision making the next frontier in outsourcing.

Service offerings

Logical architectures are not usually developed because there is no easy, standardised means of assessing them. Despite the obvious cost-effectiveness of logical architectures, most Business Analysts do not have the skills to design them and most Technical Architects move straight to solutions. Logical Architectures which are quantitatively measurable and designed within a standardised methodology have the potential to give large technical service and BPO organisations greater profits and faster times-to-market.

The future is already upon us. BPO and enterprise services are already highly commoditised. The margins in outsourcing are decreasing, especially as cloud-based software becomes more capable. If high-cost-labour companies (particularly those based in Western democracies) are to move to more value-added middle and front office process outsourcing then they will need to use logical architecture methodologies to design more sophisticated offerings.

In the next blog we will show one method of quantitatively assessing logical architectures in order to assess trade-space and make good financial decisions around the choices of technical designs.

The Cost of Capability: a better way to calculate IT chargebacks


THE VALUE OF SHARED SERVICES

Almost every C-Suite executive will agree that shared services, done well, are a critical factor in moving the business forward. The problem is that, implemented poorly, they can overload good processes and profitable service lines with villainous overhead allocations.

IT chargebacks are important because, used well,  they can assist the business with the following:

  • help IT prioritise service delivery to the most profitable business units,
  • help the business understand which IT services are value-adding to the market verticals, and
  • reduce the overall vulnerability of IT-enabled business capability.

OVERHEAD ALLOCATIONS CAN RUIN GOOD PROCESSES

However, many shared service implementations are poorly received by the business units because they add little or no value and are charged at higher than the market rate. As Kaplan pointed out in “Relevance Lost: the rise and fall of management accounting” (co-authored with H. Thomas Johnson), the result of poor overhead cost allocation is that perfectly profitable processes and services, burdened by excessive and misallocated overhead costs, seem to be unprofitable. Kaplan goes further and points out that all overhead which cannot be directly incorporated into the cost-of-goods-sold should be absorbed by the business and not charged back to the market verticals and service lines. This is the fairest method but most businesses avoid it because high SG&A costs have a negative impact on financial ratios and therefore on investor attractiveness.

HIGGLEDY-PIGGLEDY 

In a recent article (shown below) McKinsey & Co pointed out a variety of methods which their client firms use to calculate IT chargebacks. Even though they differentiated between new and mature models, it is worth noting that very few companies charged their business units for what they actually used (Activity-Based Costing). Rather, they used some form of bespoke methodology: usually (i) a flat rate, (ii) a budget rate with penalties (to change behaviour), or (iii) a market rate (usually with additional penalties for IT R&D costs).

McKinsey & Co chart: IT chargeback methods and metrics
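
A minimal sketch of how those three bespoke methods differ for a single business unit (all rates, penalties and usage figures are illustrative assumptions):

    # Sketch of the chargeback methods listed above, applied to one
    # business unit. Rates and usage figures are illustrative.
    def flat_rate(units: int, rate: float = 1_200.0) -> float:
        return units * rate                      # same rate regardless of behaviour

    def budget_rate(units: int, budgeted: int, rate: float = 1_000.0,
                    penalty: float = 1.5) -> float:
        over = max(0, units - budgeted)          # penalise usage above budget
        return min(units, budgeted) * rate + over * rate * penalty

    def market_rate(units: int, rate: float = 1_400.0, rnd_levy: float = 0.05) -> float:
        return units * rate * (1 + rnd_levy)     # market price plus an IT R&D levy

    usage, budgeted = 110, 100                   # e.g. seats of a service
    for name, cost in [("flat", flat_rate(usage)),
                       ("budget+penalty", budget_rate(usage, budgeted)),
                       ("market+R&D", market_rate(usage))]:
        print(f"{name:>15}: {cost:,.0f}")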


ALIGNMENT & ACCOUNTABILITY

Chargebacks are essential. They are a critical means for companies to take charge of their IT costs. Otherwise, a ballooning IT overhead can destroy perfectly good processes and service lines. However, chargebacks can obscure accountability. If they are not calculated transparently, clearly and on the basis of value then there will be no accountability of IT to the business whose capabilities it enables. Without accountability there can never be alignment between IT and the business.

CHARGEBACK AS AN INDICATOR OF MANAGEMENT-VALUE-ADDED

Traditional methods of IT cost modelling, on which standard chargebacks are calculated, only account for the hard costs of ICT, namely infrastructure and applications. It should be noted that chargebacks should only be applied to Management Information Systems (e.g. knowledge bases, team collaboration sites such as MS SharePoint, CRM systems and company portals). All other systems are either embedded (e.g. robotics) or operational (i.e. mission-critical to a business unit's operations). MIS are largely used by overhead personnel, whereas operational systems and the finance for embedded systems should be accounted for in the cost-of-goods-sold. The real question, therefore, is: what is the value of the management support to my business? The question exposes the myth that Use = Value. It does not. Good capability applied well = Value.

THE COST OF CAPABILITY

The cost model, therefore, needs to determine the cost of capability. Metrics based on per-unit costs are inappropriate because the equipment amortises so rapidly that the cost largely represents a penalty rate. Metrics based on per-user costs are unfair because each user is at a different level of ability. In previous blogs we have outlined how low team capabilities – distributed locations, poor requirements, unaligned processes etc – all have a direct, negative financial correlation with project value. We have also written about how projects should realise benefits along a value ladder, delivering demonstrable financial and capability benefits – rung by rung – to business units.

It is reasonable to say, therefore, that managers should not have to pay the full chargeback rate for software which is misaligned to the business unit and implemented badly.

It is unfair for under-performing business units to be charged market rates for inappropriate software which the IT department mis-sold them. If that business unit were a company in its own right it would be offered customisation and consulting support. In large firms the business often scrimps on these costs to save money. Given the usual overruns in software implementations, business units are traditionally left with uncustomised, vanilla software which does not meet their needs. The training budget is misallocated to pay for cost overruns and little money is ever left for proper process change.

In order to create a fair and accurate chargeback model which accounts for the Cost of Capability, use the following criteria:

  • Incorporate the COSYSMO cost coefficients into software and service costings so that low-capability business units pay less (a sketch of this follows the list below).
  • Only charge for professional services which the business doesn't own.  Charging for professional/consulting services which are really just work substitution merely encourages greater vertical integration.  This is duplication, and duplication in information work creates friction and exponential cost overruns.
  • Watch out for category proliferation, especially where the cost of labour for some unique sub-categories is high.  Don’t let the overall cost model get skewed by running a few highly specialised services.  Remove all IT delivery personnel from the verticals.  Where there are ‘remoteness’ considerations then have people embedded but allocate their costs as overhead.
  • Do not allow project cost misallocation.  Ensure that cost codes are limited.
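
A minimal sketch of the capability-adjusted chargeback from the first point above (the multiplier values are illustrative assumptions, not calibrated COSYSMO coefficients):

    # Capability-adjusted chargeback: a COSYSMO-style coefficient
    # discounts the rate for low-capability business units.
    CAPABILITY_MULTIPLIER = {
        "high":    1.00,   # pays the full rate: well placed to extract value
        "nominal": 0.85,
        "low":     0.65,   # misaligned, badly implemented software costs them less
    }

    def chargeback(base_rate: float, users: int, capability: str) -> float:
        return base_rate * users * CAPABILITY_MULTIPLIER[capability]

    print(chargeback(base_rate=900.0, users=50, capability="low"))  # 29,250.0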

To avoid the “build it and they will come” trap, a clear and precise chargeback model should be created for all IT costings. Businesses should start by charging simple unit costs, such as per user. Everything else will initially be overhead; firms may then move to a more complex chargeback model later.

It is important that low-capability business units do not pay full price for their software and services. As Kaplan is at pains to point out, businesses which do this risk making perfectly good processes and service lines seem unprofitable. The only way to properly broker for external services is to account for the cost of capability.

 

The Complexity of Cost: the core elements of an ICT cost model

There are 2 reasons why IT cost reduction strategies are so difficult. Firstly, many of the benefits of ICT are intangible and it is difficult to trace their origin: it is hard to determine the value of increased customer service, or of the productivity gained from better search and retrieval of information. Secondly, many of the inputs which actually make IT systems work are left unaccounted for and unaccountable. The management glue which implements the systems (often poorly, and contrary to the architecture) and the project tools, systems and methods which build and customise the system are very difficult to cost – because IT, unlike standard capital goods, is often maintained as a going concern under constant development (upgrades, customisation, workflows etc).

Standard IT cost models only account for the hard costs of the goods and services necessary to implement and maintain the infrastructure, applications and ancillary services. Anything more is deemed a project cost to be funded from the overhead.

This is unsatisfactory.

The value of technology systems – embedded systems excluded – is in the ability of information workers to apply their knowledge by communicating with the relevant experts (customers, suppliers etc) within a structured workflow (process) in order to achieve a corporate goal.

Capturing the dependencies of knowledge and process within the cost model, therefore, is critical. Showing how the IT system enables the relevant capability is the key. A system is more valuable when used by trained employees than by untrained ones. A system is more valuable when workers can operate, with flexibility, from different locations. A system is more valuable where workers can collaborate to solve problems and bring their knowledge to bear on relevant problems. So how much is knowledge management worth?

The full cost of a system – the way systems are traditionally modelled – assumes 100% (at least!) effectiveness. Cost models such as COSYSMO and COSYSMOR account for internal capability with statistical coefficients. Modelling soft costs such as information effectiveness and technology performance helps the business define the root causes of poor performance, rather than relying on subjective self-analysis. If a firm mis-assesses its capability scores in COSYSMO, the projected cost of an IT system could be out by tens of millions.
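
A minimal sketch of that sensitivity (the base cost and scores are illustrative assumptions; the multiplier range echoes the 0.6-1.8 COSYSMO span quoted earlier):

    # How a wrong capability assessment moves the projected cost.
    # Base cost and multipliers are illustrative.
    base_cost = 60_000_000  # $60m programme

    for label, multiplier in [("scored optimistically", 0.8),
                              ("scored at nominal", 1.0),
                              ("scored pessimistically", 1.4)]:
        print(f"{label:>22}: ${base_cost * multiplier / 1e6:,.1f}m")
    # The spread between the optimistic and pessimistic scorings is
    # already a $36m swing: "out by tens of millions".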

Financial models for IT should therefore focus less on the cost of technology and more on the cost of capability.  The answer to this is in modelling soft costs (management costs), indirect costs and project costs as well as the hard costs of the system’s infrastructure, apps and services.