Hidden Costs in ICT Outsourcing Contracts


Why are IT outsourcing contracts almost always delivered over budget and behind schedule?  Why do they almost always fail to achieve their planned value?  And why do IT contracts seem to be afflicted by this curse more than any other sector?


The common answer is that (i) the requirements change, and (ii) handovers from the pre-contractual phase to in-service management are always done poorly.  Both are true, yet neither explains the complexity of the situation.  If requirements change were the issue then freezing requirements would solve it – it doesn’t.  The complexity of large ICT projects derives directly from the fact that not all the requirements are even knowable at the outset.  This high level of unknown-unknowns, coupled with the inherent interdependence of business and system requirements, means that requirements creep is not only likely but inevitable.  Secondly, handover issues should be solvable by unpicking the architecture and going back to the issue points.  This too is never so simple.  My own research has shown that the problem is not in the handover itself but that the subtleties and complexities of the project architecture are not usually pulled through into the management and delivery structures.  Simply put, it is one thing to design an elegant IT architecture.  It is another thing entirely to design it to be managed well over a number of years.  Such management requires a range of elements and concepts that never appear in the architectural design.

The primary factor contributing to excessive cost (including cost from schedule overrun) is poor financial modelling.  Simply put, the hidden costs were never uncovered in the first place.  Most cost models are developed by finance teams and uncover only the hard costs of the project.  There are, however, three cost areas which must be addressed in order to determine the true cost of IT outsourcing.

True Cost of IT

1.  Hard Costs.  This is the easy stuff to count; the tangibles.  These are the standard costs: licensing, hardware, software and so on.  They include not just the obvious items but also change management (communications and training).  The Purchasor of the services should be very careful to build the most comprehensive cost model, based on a detailed breakdown of the project structure, ensuring that all the relevant teams input costing details as appropriate.

2.  Soft Costs.  The construction industry, for instance, has been building things for over 10,000 years.  With this level of maturity one would imagine that soft costs would be well understood.  They are not.  With project costs so often spiralling out of control in an extremely mature sector, it is easy to see how the same might afflict the technology sector, which changes wildly almost from year to year.

Soft costs deal with the stuff that is difficult to cost; the intangibles: the cost of information, as well as process and transaction costs.  These costs are largely determined by the ratio of revenue (or budget, for government departments) to Sales, General & Administration costs, i.e. the value of the use of information to the business.  Note that this information is not already counted in the cost of goods sold for specific transactions.

Soft costs go to the very heart of how a business or government department manages its information.  Are processes performed by workers on high pay-bands?  Are workflows long and convoluted?  The answers to these questions have an exponential effect on the cost of doing business in an information-centric organisation.  Indeed, even though the cost of computing hardware is decreasing, the real cost of information work – labour – is increasing.  This is not just a function of indexed costs but also of the increasing accreditation and institutionalisation of the knowledge worker community.  There is now greater tertiary education for knowledge work which was hitherto unaccounted for, or part of an external function.  The rise of the Business Analyst and the Enterprise Architect (and a plethora of other “architects”) all serve to drive delivery costs much higher.  Not only are the costs of this labour increasing but the labour is now institutionalised, i.e. its place and value are not questioned – despite the data suggesting there is limited economic value added through these services (i.e. no great improvement in industry delivery costs).

3.  Project Costs.  Projects are never delivered according to plan.  Requirements are interpreted differently; the cohesion of the stakeholder team can adversely impact the management of the project; even the sheer size and complexity of the project can baffle and bewilder the most competent of teams.  Poor supply chain visibility, complicated security implementations and difficult management structures all add to project friction and management drag.  Many more factors may have an adverse or favourable effect on the cost of performing projects.

IT Transition Cost Graph

In the Defence community, PhD student Ricardo Valerdi created a cost model – COSYSMO – which isolated 14 separate factors peculiar to systems engineering projects and gave these factors cost coefficients in a cost model.  Each factor is scored, and the score determines an effort multiplier, usually a number between approximately 0.6 and 1.8.  Naturally, when all factors are taken into account the overall effect on the contract price is significant.
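
To see why these multipliers matter, here is a minimal Python sketch of how multiplicative effort multipliers compound into a contract estimate.  The factor names and values are illustrative assumptions, not COSYSMO’s published coefficients.

```python
# Illustrative sketch: COSYSMO-style effort multipliers compound multiplicatively.
# Factor names and values are hypothetical, not Valerdi's published coefficients.

base_effort_hours = 10_000  # nominal systems-engineering effort estimate

effort_multipliers = {
    "requirements_understanding": 1.3,  # poorly understood requirements
    "architecture_complexity":    1.4,
    "stakeholder_team_cohesion":  1.2,
    "process_capability":         0.9,  # a mature process reduces effort
    "tool_support":               0.8,
}

overall = 1.0
for multiplier in effort_multipliers.values():
    overall *= multiplier

print(f"overall multiplier: {overall:.2f}")                      # ~1.57
print(f"adjusted effort: {base_effort_hours * overall:,.0f} h")  # ~15,725 h
```

Even with two of the five factors favourable, the unfavourable ones dominate and the estimate grows by more than half.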

More importantly, for IT implementations the “project” is not short.  IT outsourcing projects are generally split into two phases: Transition and Transformation.  Transition involves what outsourcers call “shift-and-lift”: the removal of the data centres from the customer site and the rear-basing or disposal of the hardware, which allows the company to realise significant cost savings on office space.

During the second phase – Transformation – the business seeks to realise the financial benefits of outsourcing.  Here, a myriad of small projects is launched to change the way the business operates and thereby realise the cost benefits of computer-based work, i.e. faster processes from a reduced headcount and better processes performed by workers on a lower pay-band.

IT outsourcing is not just about the boxes and wires.  It involves all the systems, hard and soft – the people, processes and data – which enable the business to derive value from its information.  Just finding all of these moving parts is a difficult task, let alone throwing the whole bag of machinery over the fence to an outsourcing provider.  To continue the metaphor, if the linkages between the Purchasor and the Vendor are not maintained then the business will not work.  More importantly, certain elements will need to be rebuilt on the Purchasor’s side of this metaphorical fence, which only serves to increase costs overall.  The financial modelling which accounts for all of these people, processes and systems must, therefore, be exceptional if an outsourcing deal is to survive.

Benefits-Led Contracting: no immediate future for outcome-based agreements

The IACCM rightly points out that key supplier relationships underpinned by robust and comprehensible contracts are essential to the implementation of significant strategic change.  Their research identifies a 9.2% impact on the bottom line from contract weakness.  The top five causes are:

  •      Disagreement over contract scope,
  •      Weaknesses in contract change management,
  •      Performance failures due to over-commitment,
  •      Performance issues due to disagreement over what was committed,
  •      Inappropriate contract structures or responsibilities.

Two things are given in this mess: (i) that contractual structures are weak and inappropriate for dealing with high levels of operational complexity and technical risk, and (ii) that legal means of enforcement are cumbersome, expensive and ineffective.

That business is ready to solve this legal problem by contracting for outcomes is (a) nonsense and (b) missing the point.  Business is already dealing with the operational and technical risk of large and complex contracts.  Business is already structuring many of its agreements to deal with outcomes.  Large prime contracts, alliance contracts and performance-based contracts are already commonplace in PFI/PPP and Defence sector deals.  That none of these is wholly efficient or effective is a discussion for another time.  It is, however, for the legal community to devise more sophisticated ways of contracting in order to solve their side of the problem.


PEOPLE ARE THE KEY

The primary reason for not being able to contract for outcomes is that the vendor doesn’t own the people.  This is critical because, without the ability to control and intervene in the delivery of work, the risk increases exponentially.  Consequently, the risk premium paid for outcome-based contracts will make them either (a) prohibitively expensive, or (b) impossible to perform (within parameters).  So, a business which offers you an outcome-based contract is either having you on or about to charge you the earth.

INFOGRAPHICS: the worst form of risk management

Aristotle wrote that “metaphor is the worst form of argument.”  He was right.  If you have something to show or prove, then do so precisely and in a way which is meaningful and useful to your target audience.

Recently the guys at Innovation of Risk posted an article on the use of infographics to analyse risk.  I don’t know who coined the term “PowerPoint Engineering” but most infographics fit neatly into this category.  Infographics can save a presentation or they can sink it, but mostly they are used to convey ill-conceived and poorly thought-out ideas which snowball into worse-run projects.  The best advice is to take these bath-tub moments (why do people think they have great ideas when they’re washing?) and run the analysis with an expert using an expert system.  If you can’t do that then (a) you’re in the wrong department for having the idea in the first place, and (b) chances are there is a tonne of minute but important detail you missed.

Whilst I think that the visual display of information is vital to achieving stakeholder buy-in, it is also clear that imprecise PPT-engineering masquerading as infographics is the worst form of management snake-oil there is.  An erstwhile systems engineering mentor of mine used to say, “if you think they’re BS’ing you then ask them what the arrows mean”.  Nine times out of ten they won’t have a clue.

Sometimes the best defense is deletion – CSO Online – Security and Risk

Sometimes the best defense is deletion – CSO Online – Security and Risk.

The point is prescient.  In these early days of Big Data awareness, the battle between information management and store-now/analyse-later can obfuscate other issues: cost and necessity.

ONE BIG POT

Is the technology really practical enough that an organisation can move away from structured databases and just put all its information into one big ‘pot’, to be mined for gold nuggets at a later date?

Storing information (as opposed to just letting stuff pile up) is a costly business, and the decision to store information usually comes from people on higher pay-bands.  The decision of where to locate it is often a manual one which not only carries a significant management overhead of its own but also involves coordination with other high pay-bands.

THE COMPLEXITY OF INFORMATION


Add to this dilemma the complexities of ‘legal hold’ on material and the identification of ‘discoverable’ items.  Suddenly information management looks a lot harder and the siren song of Big Data seems a lot more alluring.  The problem is that information that is not valuable to some is valuable to others.  Who is qualified to make that decision?  Should all information be held, given that it will likely have some enterprise value?  The battle is between cost and necessity:

  1. Cost:  Deciding what to keep and what to get rid of takes management time and effort that costs money.  The problem is that it is neither cost-effective nor good policy to push hold/delete decision-making down to the lowest clerical level.  The secret is to have those decisions made by more senior case-workers, but only within their limited remit.
  2. Necessity:  The secret is to categorise management information to determine necessity.  Use a workflow to cascade and delegate (not to avoid) work.  As an item moves through the workflow it accumulates metadata.  No metadata means no necessity, and the item should therefore be disposed of automatically (setting aside arguments of regulatory compliance).

THE ANSWER

The answer is to automate the deletion of information (other than items under ‘legal hold’).  Once a document or question has reached the end of the workflow without accumulating any metadata, the information should be disposed of automatically.  Case-workers make the decisions to act on the document or question, and metadata is attached by more clerical staff (on lower pay-bands) as the item moves through the workflow.  If no metadata is attached, it can be assumed that the item is not important and it is therefore disposed of.  Cost is minimised by letting case-workers make decisions of relevance within their own sphere of expertise, without the additional management overhead of de-confliction meetings and the like.  In this way the enterprise makes a collective decision of importance and stores the information accordingly, thus answering the issue of necessity.
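
A minimal Python sketch of that disposal rule follows, assuming a simple in-memory workflow; the stages, item names and legal-hold flag are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    legal_hold: bool = False
    metadata: dict = field(default_factory=dict)

def run_workflow(item: Item, stages) -> str:
    """Pass an item through each stage; case-workers attach metadata only
    where the item is relevant to their own remit."""
    for stage in stages:
        stage(item)
    if item.legal_hold:
        return "retain (legal hold)"
    # No accumulated metadata means no one found the item necessary.
    return "retain" if item.metadata else "dispose"

# Hypothetical stages: each tags only documents within its remit.
def finance_stage(item):
    if "invoice" in item.name:
        item.metadata["finance"] = "payable record"

def hr_stage(item):
    if "employment" in item.name:
        item.metadata["hr"] = "personnel record"

stages = [finance_stage, hr_stage]
print(run_workflow(Item("supplier-invoice-0042"), stages))  # retain
print(run_workflow(Item("old-team-photo"), stages))         # dispose
```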

Top 5 Benefits of Effective Risk Management

BENEFITS OF AN INTEGRATED “ACTIVE GRC” FRAMEWORK

After the failure of risk management during the recent (and ongoing) financial crisis, one could be forgiven for thinking that risk management – as we know it – is dead.  However, effective risk management is the only means businesses have to: (i) assess and compare investment decisions, (ii) seize subtle opportunities, and (iii) ensure regulatory compliance.  Risk management has utility beyond these obvious benefits.  Listed below are five of the top financial benefits of effective risk management:

1.  IMPROVED LIQUIDITY

When managers cannot identify or mitigate complex risks they create risk contingency slush funds and pad their accounts with excessive risk premiums. This is not an efficient allocation of capital and it can even price a business out of the market. Precise identification of risk premiums removes these slush funds and creates greater firm liquidity and the ability to allocate capital where it is needed.

2.  BETTER PROJECT PERFORMANCE

The best methods for identifying and analysing risk in projects are the quantitative analysis of cost models and project schedules. However, these methods are only useful where such models are sufficiently detailed. Good risk management leads to greater collaboration by cross-functional teams to optimise cost and schedule performance.

3.  BETTER OPPORTUNITY MANAGEMENT

With greater liquidity comes the ability to seize emerging opportunities. Not only can the company use this capital across portfolios to manage risks but it can also seize opportunities for M&A, talent acquisition, share buybacks, increased dividends, employee bonuses or increased project funding/investment.

4.  CONSENSUAL MANAGEMENT CULTURE

As managers work across the business to calibrate cost models with project schedules, and the contract and commercials with the technical architecture, the business is forced to adopt a more consensual, multi-disciplinary approach. Where GRC is implemented as part of a high-performance business initiative, the culture is more likely to stick than one imposed from the top down.

5.  IMPROVED REPORTING & DECISION MAKING

An active GRC process which is fully integrated with the business relies on the quantitative analysis of core artefacts (cost models, project schedules, technical architectures and contracts). A quantitative culture, coupled with regular, detailed analytical outputs, also greatly improves the standard of financial and operational reporting and therefore the possibility of improved investment decision-making.

Improving Contract Management: manage the deal not the database

The guys at Selectica have some great points, but to make expensive enterprise software work it’s important to work a system and not to work the software:

  1. Don’t try to put all your contractual information into one single database at once.  Not only do individuals have different ways and systems (what I call the e-Hub of someone’s daily life) from which they manage their data, they may also run into legal issues around probity and confidentiality (by cross-contaminating case management with archival material).  Businesses do not need to invest in costly customisation, but they do need to strike a financial balance between customisation and counter-intuitive vendor processes.  One neat tool is to create a visual model of the deal (its structures, functions and concepts) and provide hyperlinks to the various file systems.  This removes the need to develop a common taxonomy, as workers now have a visual reference point (rather than a word) for their own understanding.
  2. With process automation it is critical to ensure that the business doesn’t codify its culture.  This will only calcify bottlenecks.  A firm needs to make sure that it re-engineers its CLM process before it creates a workflow from it.  Remove non-tasks and automate simple clerical work and approvals.
  3. The business also needs to make sure that experts are not only notified but also edified and contextualised.  When pushing workflows out to experts, such as in-house counsel or outside counsel, these people must have a clear view of the dependent components of the deal’s architecture.  Businesses can speed this process up and reduce its costs by linking their own systems to online legal databases such as Thomson Reuters (Westlaw AU, FirstPoint), LexisNexis or CCH.

In sum, good contract management needs a highly cross-functional and multi-disciplinary approach if it is to be successful without adding cost and friction to business operations.  Enterprise products such as Selectica’s are a great start, but customers must be careful to make sure the software supports their own system, otherwise they will spend all their money and time working the software.

Measuring Risk in Logical Processes

Logical architecture is valuable in the design of large systems for two key reasons: (i) it helps developers instantiate the softer concepts and more social aspects of large systems, and (ii) it provides another review-gate to iron out design flaws before proceeding to the physical system.  Military systems provide good examples of the value of logical architecture.  Many Defence systems are so complex that they are never developed at all; those that are developed are often broken down into components so small that the integration can become unmanageable.  Joint Effects Targeting, counter-IED exploitation, systems to fuse operational and intelligence data, and the nuclear firing chain are all areas with enormous social input, so the development of a logical architecture is paramount.

Unless a person has a pedigree in military systems, logical architecture is usually the least understood and least used part of the design process.  Certainly in Agile environments, or any area requiring rapid application development where the application is fixed (portals, billing systems, SAP etc), logical architecture design is nugatory.

In this blog we look at logical process design, but the method is equally applicable to the entire logical design phase.

BENEFITS OF LOGICAL ARCHITECTURE

When designing processes, however, logical architecture is an invaluable tool for measuring, assessing and comparing risk before moving to the more expensive technical design and implementation phases.  Because logical designs can be created, compared and assessed quickly, they become an excellent technical/commercial appraisal tool.  Cross-functional teams of executives and architects can collaborate on logical designs before a GO/NO-GO investment decision and thereby create three major benefits:

  • Reduce the time of the physical design cycle.
  • Increase executive involvement and the effect of executive steering on designs.
  • Significantly reduce the risk in physical designs.

PROCESS VALUE

The Value of a Process

There is a way of viewing, and thereby measuring, risk in logical processes.  Ultimately, the value of a process is its cost weighted by its chance of success.  So, a process which has a total cost of $100,000 and a 60% chance of success has a nominal value (not “worth” or “price”) of $60,000.  Which is to say that, on average, the business will realise only 60% of its value.  This is roughly the same as saying that, on average, for each $100k in earnings the firm will spend $40k on faults.  Whether the value indicator is dollars or white elephants does not matter, so long as it is applied consistently across the choices.  This simple measuring mechanism allows senior executives to engage in the design process and forces architects to help assign costs to difficult design components.
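
As a minimal sketch of that arithmetic (the first set of figures is the worked example above; the second design is hypothetical):

```python
# Nominal value of a process: total cost weighted by its chance of success.
def process_value(total_cost: float, p_success: float) -> float:
    return total_cost * p_success

designs = {
    "design_a": (100_000, 0.60),  # the worked example above
    "design_b": (120_000, 0.85),  # hypothetical: dearer but more reliable
}
for name, (cost, p) in designs.items():
    print(f"{name}: nominal value ${process_value(cost, p):,.0f}")
# design_a: $60,000; design_b: $102,000 – the dearer design carries more value.
```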

COSTS & CONCEPTS

The difficulty is in ascribing costs to concepts.  In order to do this the team must first instantiate the concepts in some form of logical structure, such as a software system or a management committee/team.  The team then ascribes an industry benchmark cost to this structure, accounting for uncertainty.  Uncertainty is important because the benchmark cost will not represent the actual cost exactly (in fact the benchmark cost should represent the 50% confidence cost).  So, when it comes to determining the probability, it is vital to use the experts to establish what the construct could cost (as little as, and as much as).
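
One simple way to capture that as-little-as/as-much-as spread is a three-point estimate sampled from a triangular distribution.  A minimal sketch with illustrative figures, assuming the benchmark is the most likely value:

```python
import random

# Three-point estimate for one logical construct (illustrative figures).
low, benchmark, high = 80_000, 100_000, 150_000

# Sample plausible costs from a triangular distribution around the benchmark.
samples = sorted(random.triangular(low, high, benchmark) for _ in range(10_000))
print(f"median cost ~ ${samples[len(samples) // 2]:,.0f}")
print(f"90th percentile ~ ${samples[int(len(samples) * 0.9)]:,.0f}")
```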

PROCESS RISK

The difficulty with measuring logical architectures is in measuring concepts.  Concepts usually have no value and no standard means of comparison.  In short: (i) assemble a small, cross-functional team of experts, (ii) ascribe costs (with uncertainty) to the concepts and apply a risk equation, and then (iii) simulate.  One possible equation is:

R = 100 × (P × Ct × T) / (Cy × Sl)

Where:

  • R is the overall risk.
  • P is the probability of an adverse event occurring in the process.
  • Ct is the criticality of the location of the event, in the process.
  • T is the likely time it will take to notice the manifestation of the risk (i.e. feedback mechanisms).
  • Cy is the availability of a contingency plan which is both close and effective to the point of the problem in the process.
  • Sl is the likelihood that the process will be fixed and achieve an acceptable outcome.
  • 100 simply makes it easier for the team to see differences between scores.

In this equation we determine the overall risk of the process.  It does not have to be perfect; it just needs to be applied consistently and account for the major variables.  If applied rigorously and evenly, measuring risk in logical architectures has the ability to reduce the design cycle, increase the certainty of the choice, build better stakeholder buy-in and significantly reduce the risk in the physical solution.
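
To make step (iii) concrete, here is a minimal Monte Carlo sketch in Python.  The factor scores, their as-little-as/as-much-as ranges and the exact form of the risk function are all illustrative assumptions:

```python
import random

def logical_risk(p, ct, t, cy, sl):
    # Drivers of risk multiply in the numerator; mitigants sit in the
    # denominator; 100 simply scales the score for legibility.
    return 100 * (p * ct * t) / (cy * sl)

# Expert scores for one process as (low, most likely, high) on a 0-1 scale.
factors = {
    "p":  (0.2, 0.3, 0.5),    # probability of an adverse event
    "ct": (0.4, 0.6, 0.8),    # criticality of the event's location
    "t":  (0.3, 0.5, 0.9),    # time to notice the manifestation
    "cy": (0.5, 0.7, 0.9),    # availability of an effective contingency
    "sl": (0.6, 0.8, 0.95),   # likelihood the fix achieves a good outcome
}

scores = sorted(
    logical_risk(**{k: random.triangular(lo, hi, mode)
                    for k, (lo, mode, hi) in factors.items()})
    for _ in range(10_000)
)
print(f"median risk score ~ {scores[len(scores) // 2]:.1f}")
print(f"90th percentile ~ {scores[int(len(scores) * 0.9)]:.1f}")
```

Because each factor carries its own uncertainty, comparing the score distributions of two candidate designs is more informative than comparing two single-point scores.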

Building a Risk Culture is a Waste of Time

The focus of a good risk management practice is the building of a high-performance operational culture which is baked into the business.  Efforts to develop risk cultures only serve to increase risk aversion in senior executives and calcify adversarial governance measures which decrease overall profitability.  The right approach to risk management is a comprehensive, holistic risk management framework which integrates tightly with the business.

The financial crisis is largely due to the failure of risk management and over-exposure in leading risk-based institutions.  More specifically, the failure of risk management is linked to:

  • The failure to link risk to investment/project approval decision-making.  The aim of risk management is not to create really big risk registers, although in many organisations one could be forgiven for thinking that this is the goal.  The aim of identifying risks is to calibrate them with the financial models and program plans of the projects so that risks can be comprehensively assessed within the value of the investment.  Only once their financial value is quantified and their inputs and dependencies are mapped can realistic and practical contingency planning be implemented for accurate risk management.
  • The failure to identify risks accurately and comprehensively.  Most risk toolsets and risk registers reveal a higgledy-piggledy mess of risks, mixed up in a range from the strategic down to the technical.  Risks are identified differently at each level (strategic, financial, operational, technical).  Technical and operational risks are best identified by overlapping processes of technical experts and parametric systems/discrete-event simulation.  Financial risks are best identified by sensitivity analysis and stochastic simulation, while strategic risks will largely focus on brand and competitor risks.  Risk identification is the most critical but most overlooked aspect of risk management.
  • The failure to use current risk toolsets in a meaningful way.  The software market is flooded with excellent risk modelling and management tools.  Risk management programs, however, are usually implemented by vendors with a “build it and they will come” mentality.  Risk management benefits investment appraisal at Board and C-Suite level and it cannot be expected to percolate from the bottom up.

RISK MANAGEMENT IS COUNTER-INTUITIVE

All this does not mean that risk management is a waste of time, but rather that it is counter-intuitive to the business.  It is almost impossible to ask executives to push profits to the limit if their focus is on conservatism.  Building a culture of risk management is fraught with danger.  The result is usually a culture of risk aversion, conservatism and a heavy, burdensome governance framework that only adds friction to the business lifecycle and the investment/project approval process.  Executives, unable to navigate the labyrinthine technicalities of such a system, achieve approvals for their pet programs by political means.  Worse, projects that are obviously important to the business actually receive less risk attention than small projects.  Employees learn to dismiss risk management and lose trust in senior management.

If risk management is to be an effective and value-adding component it must be baked into the business as part of the project/investment design phase.  If not, then risk management processes just build another silo within the business.  The key is to forget about “Risk” as the aim.  The goal must be a performance culture with an active and dynamic governance system which acts as a failsafe.  The threat of censure is the best risk incentive.

AWARENESS IS NOT MANAGEMENT

Management has long been aware of risk, but this does not always translate into a true understanding of the risk implications of business decisions.  Risk policies and practices are often viewed as being parallel to the business rather than complementary to it.

Why is it that most businesses rate themselves highly on risk management behaviours?  Largely because businesses do not correlate the failure of projects with the failure of risk and assurance processes.

In a 2009 McKinsey & Co survey (published in June 2012 as “Driving Value from Post-Crisis Operational Risk Management”) it was clear that risk management was seen as adding little value to the business.  Responses were collected from the financial services industry – an industry seen as the high-water mark for quantitative risk management.

COLLABORATION IS THE KEY

Risk management needs to become a collaborative process which is tightly integrated with the business.  The key is to incentivise operational managers to take calculated risks.  As a rule of thumb there are four key measures for integrating risk management into the business:

  1. Red Teams.  Despite writing about collaboration, the unique specialities of risk management often require senior executives to polarise the business.  It is often easier to incentivise operational managers to maximise risks and check them by using Red Teams to minimise risks.  Where Red Teams are not cost-effective, a dynamic assurance team (potentially drawn from the PMO) will suffice.  Effective risk management requires different skills and backgrounds.  Using quantitative and qualitative risk management practices together requires a multi-disciplinary team of experts to suck out all the risks and calibrate them within the financial models and program schedules so that investment committees can make sensible appraisals.
  2. Contingency Planning.  Operational risk management should usually boil down to good contingency planning.  Given the unique skill sets in risk management, operational teams should largely focus on contingency planning and leave the financial calibration to the assurance/Red Teams to sweep up.
  3. Build Transparency through Common Artefacts.  The most fundamental element of a comprehensive risk process is a lingua franca of risk – and that language is finance.  All risk management tools need to percolate up into a financial model of a project, so that decision-making is based on a comprehensive assessment and, when it comes time to optimise the program, the various risky components can be traced and unpicked.
  4. Deeper Assurance by the PMO.  The PMO needs to be involved in the ongoing identification of risk.  Executives try to game the governance system, and the assurance team simply does not have the capacity for 100% audit and assurance.  The PMO is by far the best structure to assist in quantitative and qualitative risk identification because it already has oversight of 100% of projects and their financial controls.

Traditional risk management practices only provide broad oversight. With the added cost pressures that businesses now feel it is impossible to create large risk teams funded by a fat overhead. The future of risk management is not for companies to waste money by investing in costly and ineffective risk-culture programs.  Good risk management can only be developed by tightly integrating it with a GRC framework that actively and dynamically supports better operational performance.

ALIGNMENT: Building a Closer Relationship Between Business and IT

The business gurus Kaplan and Norton describe “Alignment” as a state where all the units of an organisational structure are brought to bear to execute corporate strategy in unison.  When alignment is executed well it is a huge source of economic value.  When it is executed badly it is a colossal source of friction which can cripple the business.


IT DOESN’T NEED ALIGNMENT, IT NEEDS BETTER UNDERSTANDING

IT and the Business speak of alignment in two radically different ways.  The Business talks about alignment between business units; when speaking of tech it uses words and phrases such as ROI and operational performance.  IT talks about alignment in a way that makes it feel as though it matters to the business.  That profitable, customer-facing business units could achieve more if the corporate centre were to align business units under a single, cohesive strategy is one thing.  That IT departments fail to execute strategy, or even deliver operational effectiveness, through poor understanding of requirements, an inability to see the technical reality of commercial value, or a failure to realise some of the social cohesion which enterprise software systems need is not mis-alignment – it is just bad practice.

THE RELATIONSHIP BETWEEN THE BUSINESS AND ICT IS DIVERGING

The increasing capabilities of a smarter, more mobile, more virtual workforce mean a greater commoditisation of knowledge work.  With this comes the polarisation of Business and ICT.  A broader ICT function, with a wider array of narrower and deeper areas of expertise, will increasingly be incapable of coding the more subtle and complex social aspects of human collaboration.  In such a world the ICT agenda must be set by the corporate centre.


ICT NEEDS TO FOCUS ON EXECUTION NOT ALIGNMENT

ICT’s economic value will be realised when it (and therefore Enterprise Architects) can support business units to reach across each other to create valuable products and services which justify the corporate overhead.  McKinsey & Co, for instance, focus heavily on central knowledge management.  This enables research to drive service line improvement in relevant sectors.  IBM spends over £3bn on R&D and the development of leading-edge products far ahead of their time.  ICT needs to focus on the execution of corporate strategy, not on alignment.  Alignment is a structural issue whereas execution is a functional issue.  Stop tinkering with the structures and focus on the functions and operations.

GOVERNANCE – ALIGNMENT AT A PRICE

Moves to improve the business relevance of ICT usually result in heavier, more burdensome technical governance.  The finance function imposes capital project controls on technology projects and insists that benefits be quantified.  Although greater cost transparency will bring IT closer to the Business, heavier ICT governance only serves to drive ICT investment underground.  Pet projects abound, useless apps proliferate and ICT costs continue to rise.  Meanwhile, in a perverse inverse relationship, assurance becomes even lighter on larger programs.

Alignment takes strong leadership and clear definitions of business intent.  A fancy set of IT tools is not necessary for alignment; rather, tools matter when it comes to agility.  Mis-alignment is the fault of deep-rooted cultural divisions which can only be overcome through strict adherence to financial value and the use of a lingua franca engendered through a common architectural framework.  If ICT is to realise its potential and add real financial value then it must actively support the real-time execution of business operations.

BUSINESS PROCESS FAILURES: the importance of logical architectures


In a 2012 survey by McKinsey & Co, IT executives noted that their top priority was ‘improving the effectiveness and efficiency of business processes’.  One of the critical failings of IT, however, is implementing effective and efficient business process architectures in the first place.  These priorities only serve to highlight what we already know: that IT service companies implement processes badly.

Why?

Whether through a failing of requirements or integration (or both), IT service companies often implement inappropriate business process architectures and then spend the first 6 to 12 months fixing them.  This is why those companies ask for a 6-month service-credit holiday.  It is also why they differentiate between Transition and Transformation: the former is where they implement their cost model; the latter is where they implement their revenue model.

The failing is not in the design of the technical architecture.  Very few senior executives report that failed projects lacked technical expertise.  Likewise, project management is usually excellent.  Requirements, too, are not usually the problem with business process implementations, as most commercial systems implement standardised Level 1 or 2 business processes very well.

Logical architectures instantiate the subtleties and complexities of the social systems which the software must implement

The first failing in the development of a technical architecture to implement a business process is the design of the Logical Architecture.  Logical Architectures are critical for two reasons: (i) because requirements are a hundred times cheaper to correct during early design phases than during implementation, and (ii) because logical systems are where the social elements of software systems are implemented.  Requirements gathering will naturally throw up a varying range of features, technical requirements, operational dependencies and physical constraints (non-functional requirements) that Solution Architects inevitably miss.  Their focus and value lie in sourcing and vendor selection rather than in capturing the subtleties and complexities of human social interactions and translating them into architect-able business constructs (that is the role of the Business Analyst).

The second failing is the development of trade-space: the ability to make trade-offs between logical designs.  This is the critical stage before freezing the design for the technical architecture.  It is also vital where soft, social systems such as knowledge, decision-making and collaboration are a core requirement.  However, trade-space cannot be exploited unless there is some form of quantitative analysis.  The usual outcome is to make trade-offs between technical architectures instead.  Like magpies, executives and designers have, by this stage, already chosen their favourite shiny things.  Energy and reputation have already been invested in various solutions, internal politicking has taken place, and the final solution all but eschews assurance as it is pushed through the final stages of governance.

With proper development and assessment of trade-space, companies have the ability to instantiate the complex concepts of front and middle office processes.  Until now, business analysts have hardly been able to articulate the complicated interactions between senior knowledge workers.  These, however, are far more profitable to outsource than the more mechanical clerical work which is already the subject of existing software solutions.  The higher pay-bands and longer setup times of senior information work make executive decision-making the next frontier in outsourcing.


Logical architectures are not usually developed because there is no easy, standardised means of assessing them.  Despite their obvious cost-effectiveness, most Business Analysts do not have the skills to design logical architectures and most Technical Architects move straight to solutions.  Logical architectures which are quantitatively measurable and designed within a standardised methodology have the potential to give large technical service and BPO organisations greater profits and faster times-to-market.

The future is already upon us.  BPO and enterprise services are already highly commoditised.  The margins in outsourcing are decreasing, especially as cloud-based software becomes more capable.  If high-cost-labour companies (particularly those based in Western democracies) are to move to more value-added middle and front office process outsourcing then they will need to use logical architecture methodologies to design more sophisticated offerings.

In the next blog we will show one method of quantitatively assessing logical architectures in order to assess trade-space and make good financial decisions around the choices of technical designs.