SCENARIO-BASED MODELLING: Storytelling our way to success

“The soft stuff is always the hard stuff.”


Whoever said ‘the soft stuff is the hard stuff’ was right.  In fact, Douglas R. Conant, coauthor of TouchPoints: Creating Powerful Leadership Connections in the Smallest of Moments, when talking about an excerpt from The 3rd Alternative: Solving Life’s Most Difficult Problems, by Stephen R. Covey, goes on to note:

“In my 35-year corporate journey and my 60-year life journey, I have consistently found that the thorniest problems I face each day are soft stuff — problems of intention, understanding, communication, and interpersonal effectiveness — not hard stuff such as return on investment and other quantitative challenges. Inevitably, I have found myself needing to step back from the problem, listen more carefully, and frame the conflict more thoughtfully, while still finding a way to advance the corporate agenda empathetically. Most of the time, interestingly, this has led to a more promising path forward and a better relationship, which in turn has made the next conflict easier to deal with.”

Douglas R. Conant.

Conant is talking about the most pressing problem in modern organisations – making sense of stuff.

Sense Making

Companies today are awash with data.  Big data.  Small data.  Sharp data.  Fuzzy data.  Indeed, there are myriad software companies offering niche and bespoke software to help manage and analyse data.  Data, however, is only one-dimensional.  To make sense of information is, essentially, to turn it into knowledge, and to do this we need to contextualise it within the frameworks of our own understanding.  This is a phenomenally important point in sense-making: the notion of understanding something within the parameters of our own mental frameworks is something most people can immediately recognise in their everyday work.


Take, for instance, the building of a bridge.  The mental framework by which an accountant understands the risks in building the bridge is different from the way an engineer understands those risks, or indeed from how a lawyer sees the very same risks.  Each was educated differently, and the mental models they use to conceptualise the same risks lead to different understandings.  Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.

Knowledge has broad utility – it is polyvalent – but it needs to be contextualised before it can be capitalised.

For instance, take again the same risk of a structural weakness within the new bridge.  The accountant will understand it as a financial problem, the engineer will understand it as a design issue and the lawyer will see some form of liability and warranty issue.  Ontologically, the ‘thing’ is the same but its context is different.  However, in order to make decisions based on their understanding, each person builds a ‘mental model’ to re-contextualise this new knowledge (with some additional information).

There is a problem.

Just like when we all learned to add fractions when we were 8, we have to have a ‘common denominator’ when we add models together.  I call this calibration, i.e. the art and science of creating a common denominator among models in order to combine and make sense of them.
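As a minimal sketch of what calibration might look like in practice (the scales, scores and weighting below are invented for the example, not part of any real model), the 'common denominator' can be as simple as normalising each discipline's native risk score onto a shared 0–1 scale before combining them:

```python
def normalise(value, lo, hi):
    """Map a raw score onto a common 0-1 scale (the 'common denominator')."""
    return (value - lo) / (hi - lo)

# Each discipline scores the same bridge risk in its own native units.
engineer_risk = normalise(7.2, lo=0.0, hi=10.0)      # design severity, 0-10
accountant_risk = normalise(1.8e6, lo=0.0, hi=5e6)   # financial exposure, $
lawyer_risk = normalise(3.0, lo=1.0, hi=5.0)         # liability rating, 1-5

# Once calibrated, the three views can be combined and compared directly.
combined = (engineer_risk + accountant_risk + lawyer_risk) / 3
print(round(combined, 3))
```

The point is not the arithmetic but the precondition: until the models share a scale, adding them together is meaningless.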


Why do we need to calibrate?  Because trying to analyse vast amounts of the same type of information only increases information overload.  It is a key tenet of Knowledge Management that increasing variation decreases overload.

It is a key tenet of Knowledge Management that increasing variation decreases overload.

We know this to be intuitively correct.  We know that staring at reams and reams of data on a spreadsheet will not lead to an epiphany.  The clouds will not part, the trumpets will not blare, and no shepherd in the sky will point the right way.  Overload and confusion occur when one has too much of the same kind of information.  Making sense of something requires more variety.  In fact, overload only increases puzzlement due to the amount of uncertainty and imprecision in the data.  This, in turn, leads to greater deliberation, which then leads to increased emotional arousal.  The ensuing ‘management hysteria’ is all too easily recognisable.  It leads to cost growth as senior management spend time and energy trying to make sense of a problem, and it leads to further strategic risk and lost opportunity as these same people neglect their own jobs whilst trying to make sense of it.


In order to make sense, therefore, we need to aggregate and analyse disparate, calibrated models.  In other words, we need to look at the information from a variety of different perspectives, through a variety of lenses.  The notion IT companies would have us believe – that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle – is absurd.

The notion IT companies would have us believe – that we can simply pour a load of wild data into a big tech hopper and have it spit out answers like some modern Delphic oracle – is absurd.

Information still needs a lot of structural similarity if it’s to be calibrated and analysed by both technology and our own brains.

The diagram below outlines how this is done, but it is only part of the equation.  Once the data is analysed and valid inferences are made, we are still only partially on our way to better understanding.  We still need those inferences to be contextualised and explained back to us in order for the answers to crystallise.  For example, in our model of a bridge, we may make valid inferences of engineering problems based on a detailed analysis of the schedule and the Earned Value, but we still don’t know if that’s correct.


As an accountant or lawyer, therefore, in order to make sense of the technical risks we need the engineers to play back our inferences in our own language.  The easiest way to do this is through storytelling.  Storytelling is a new take on an old phenomenon.  It is the rediscovery of possibly the oldest practice of knowledge management – a practice which has come to the fore out of necessity and due to the abysmal failure of IT in this field.

Scenario-Based Model Development

Using our diagram above in our fictitious example, we can see how the Legal and Finance teams, armed with new analysis-based information, seek to understand how the programme may be recovered.  They themselves have nowhere near enough contextual information or technical understanding of either the makeup or execution of such a complex programme, but they do know it isn’t going according to plan.

So, with the new analysis, they engage the Project Managers in a series of detailed conversations in which the technical experts tell their ‘stories’ of how they intend to right the ailing project.

Notice the key differentiator between a bedtime story and a business story – DETAIL!  Asking a broad generalised question typically elicits a stormy response.  Being non-specific is either adversarial or leaves too much room to evade the question altogether.  Engaging in specific narratives around particular scenarios (backed up by their S-curves) forces the managers to contextualise the right information in the right way.

From an organisational perspective, specific scenario-based storytelling forces managers into a positive, inquisitive and non-adversarial narrative on how they are going to make things work, without having to painfully translate technical data.  Done right, scenario-based modelling is an ideal way to squeeze the most out of human capital without massive IT spend.






Hidden Costs in ICT Outsourcing Contracts


Why are IT outsourcing contracts almost always delivered over-budget and over-schedule?  Why do IT outsourcing contracts almost always fail to achieve their planned value? How come IT contracts seem to be afflicted with this curse more than any other area?


The common answers are that (i) the requirements change, and (ii) handovers from the pre-contractual phase to in-service management are always done poorly.  Both are true, although neither explains the complexity of the situation.  If requirements change alone were the issue then freezing requirements would solve it – it doesn’t.  The complexity of large ICT projects derives directly from the fact that not all the requirements are even knowable at the outset.  This high level of unknown-unknowns, coupled with the inherent interdependence of business and system requirements, means that requirements creep is not only likely but inevitable.  Secondly, handover issues should be solvable by unpicking the architecture and going back to the issue points.  This, too, is never so simple.  My own research has shown that the problem is not in the handover itself but that the subtleties and complexities of the project architecture are not usually pulled through into the management and delivery structures.  Simply put, it is one thing to design an elegant IT architecture.  It is another thing entirely to design it to be managed well over a number of years.  Such management requires a range of new elements and concepts that never exist in architectural design.

The primary factor contributing to excessive cost (including from schedule overrun) is poor financial modelling.  Simply put, the hidden costs were never uncovered in the first place.  Most cost models are developed by finance teams and uncover only the hard costs of the project.  There are, however, three cost areas which must be addressed in order to determine the true cost of IT outsourcing.

True Cost of IT

1.  Hard Costs.  This is the easy stuff to count; the tangibles.  These are the standard costs: licensing, hardware, software etc.  It is not just the obvious, but also includes change management (communications and training).  The Purchaser of the services should be very careful to build the most comprehensive cost model based on a detailed breakdown of the project structure, ensuring that all the relevant teams input costing details as appropriate.

2.  Soft Costs.  The construction industry, for instance, has been building things for over 10,000 years.  With this level of maturity one would imagine that soft costs would be well understood.  They are not.  With project costs in an extremely mature sector often spiralling out of control, it is easy to see how this might also afflict the technology sector, which is wildly different almost from year to year.

Soft costs deal with the stuff that is difficult to cost; the intangibles: the cost of information as well as process and transaction costs.  These costs are largely determined by the ratio of revenue (or budget, in the case of government departments) against Sales, General & Administrative costs, i.e. the value of the use of information to the business.  Note that this information is not already counted in the cost-of-goods-sold for specific transactions.

Soft costs go to the very heart of how a business or government department manages its information.  Are processes performed by workers on high pay-bands?  Are workflows long and convoluted?  The answers to these questions have an exponential effect on the cost of doing business in an information-centric organisation.  Indeed, even though the cost of computing hardware is decreasing, the real cost of information work – labour – is increasing.  This is not just a function of indexed costs but also of the advent of increasing accreditation and institutionalisation in the knowledge-worker community.  Firstly, there is greater tertiary education for knowledge work which has hitherto been unaccounted for or part of an external function.  The rise of the Business Analyst and the Enterprise Architect (and a plethora of other “architects”) all serve to drive delivery costs much higher.  Not only are the costs of this labour increasing, but the labour is now institutionalised, i.e. its place and value is not questioned – despite the data showing there seems to be limited economic value added through these services (i.e. no great improvement in industry delivery costs).

3.  Project Costs.  Projects are never delivered according to plan.  Requirements are interpreted differently, the cohesion of the stakeholder team can adversely impact the management of the project, even the sheer size and complexity of the project can baffle and bewilder the most competent of teams.  Supply chain visibility, complicated security implementations and difficult management structures all add to project friction and management drag.  There are many more factors which may have an adverse or favourable effect on the cost of performing projects. 

IT Transition Cost Graph

In the Defence community, the Ph.D. student Ricardo Valerdi created a cost model – COSYSMO – which isolated 14 separate factors peculiar to systems engineering projects and gave each a cost coefficient within the model.  Each factor may be scored, and the scoring then determines the effort multiplier, usually a number between approximately 0.6 and 1.8.  Naturally, when all factors are taken into account, the overall effect on the contract price is significant.
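A hedged sketch of how such a parametric model behaves (the factor names and multiplier values below are illustrative, not Valerdi's published COSYSMO coefficients): effort multipliers are multiplicative, so even a handful of unfavourable ratings compounds quickly into the contract price.

```python
from math import prod

# Illustrative ratings only -- not the published COSYSMO coefficients.
# Each factor is scored and mapped to an effort multiplier (~0.6 to 1.8).
effort_multipliers = {
    "requirements_understanding": 1.25,  # rated unfavourably -> penalty
    "architecture_complexity":    1.40,
    "stakeholder_cohesion":       1.10,
    "team_capability":            0.80,  # rated favourably -> discount
}

# Multipliers compound: the composite scales the nominal effort estimate.
composite = prod(effort_multipliers.values())
nominal_effort = 10_000                  # person-hours under nominal ratings
adjusted_effort = nominal_effort * composite
print(round(composite, 3), round(adjusted_effort))
```

Three mildly adverse ratings more than offset one strong capability discount, inflating the estimate by over 50% in this sketch.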

More importantly, for IT implementations the “project” is not short.  IT outsourcing projects are generally split into two phases: Transition and Transformation.  Transition involves what outsourcers call “lift-and-shift”: the removal of the data centres from the customer site and the rear-basing or disposal of the hardware, which allows the company to realise significant cost savings on office space.

During the second phase – Transformation – the business seeks to realise the financial benefits of outsourcing.  Here, a myriad of small projects is undertaken in order to change the way the business operates and thereby realise the cost benefits of computer-based work, i.e. faster processes from a reduced headcount, and better processes performed by workers on a lower pay-band.

IT outsourcing is not just about the boxes and wires.  It involves all the systems, hard and soft – the people, processes and data which enable the business to derive value from its information.  Just finding all of these moving parts is a difficult task, let alone throwing the whole bag of machinery over the fence to an outsourcing provider.  To continue the metaphor, if the linkages between the Purchaser and the Vendor are not maintained then the business will not work.  More importantly, certain elements will need to be rebuilt on the Purchaser’s side of this metaphorical fence, serving only to increase costs overall.  The financial modelling which takes into account all of these people, processes and systems must, therefore, be exceptional if an outsourcing deal is to survive.

Information Outsourcing

Although the Gartner article deals with the monetisation of information assets, the sentiments may lead many businesses to outsource their entire information management responsibility.

The volume of data that most businesses can – or think they should be able to – manage is reaching an inflection point.  Businesses which grasp how analytics supports their revenue model will be able to grapple with the continuing demands of information management (IM).  Businesses which cannot cope with the perceived threat of information overload may seek to outsource this responsibility.  The former will survive; the latter will fail.  The research is clear:

  • IM is critical business:  derogating from one’s IM responsibility leads to an overall loss of revenue as businesses are unable to respond to market trends, develop appropriate differentiators, design suitable new products and services, or leverage their information and knowledge for wider benefit.  Information is a firm’s core business, whether it likes it or not.  Outsourcing the responsibility to understand the intricacies of a company’s business model and dependencies into the extended value net is a recipe for disaster.  Businesses should use all available software and technical expertise to do this but must do so with internal resources.
  • Outsourcing accounts for cost differentiators not key value drivers:  Firms which seek to cut costs by outsourcing their IT function do not recoup their losses.  The lessons of Ford, GM and Levi Strauss still remain.  Businesses which outsource their entire IT function continue to lose economic-value-added (EVA).  Although it is a good idea to outsource platforms and infrastructure it is rarely beneficial to outsource applications and services which are deeply intertwined with the more social aspects of a company’s business processes, i.e. if your process isn’t rigidly vanilla and perfectly understood then don’t outsource it.  Banks have well documented electronic processes which allow customers to manage their money and transactions remotely.  Even so, they manage these processes internally because it’s core business.

Businesses which purport to leverage economies of scale in order to make sense of a firm’s information are not telling the whole story.  It is virtually impossible to crunch structured and unstructured data to squeeze out additional value unless the vendor has also programmed the client’s value chain and key differentiators into their big-data algorithm.

“IM is not a software problem; it is a business problem.  Regardless of the promises made by vendors, they will never be able to support management in their daily need to navigate the subtleties and complexities of corporate information.”

It is highly likely that by 2016 the next fad, after Big Data, will be the monetisation of a firm’s information assets.  No doubt in the low end of the market there will be some level of commoditisation of information, which will support more targeted marketing and the procurement of specialist advertising services.  However, businesses which outsource critical IM functions (largely through cost pressures) will turn unprofitable (if they are not already) as they become unable to respond to the market.

The Complexity of Cost: the core elements of an ICT cost model

There are two reasons why IT cost reduction strategies are so difficult.  Firstly, many of the benefits of ICT are intangible and it is difficult to trace their origin.  It is hard to determine the value of increased customer service, or of the increase in productivity from better search and retrieval of information.  Secondly, many of the inputs which actually make IT systems work are left unaccounted for and unaccountable.  The management glue which implements the systems (often poorly, and contrary to the architecture) and the project tools, systems and methods which build/customise the system (because IT, unlike standard capital goods, is often maintained as a going concern under constant development, e.g. upgrades, customisation, workflows etc.) are very difficult to cost.

Standard IT cost models only account for the hard costs of the goods and services necessary to implement and maintain the infrastructure, applications and ancillary services.  Anything more is believed to be a project cost needed to be funded by the overhead.

This is unsatisfactory.

The value of technology systems – embedded systems excluded – is in the ability of information workers to apply their knowledge by communicating with the relevant experts (customers, suppliers etc) within a structured workflow (process) in order to achieve a corporate goal.

Capturing the dependencies of knowledge and process within the cost model, therefore, is critical.  Showing how the IT system enables the relevant capability is the key factor.  A system is more valuable when used by trained employees than by untrained ones.  A system is more valuable when workers can operate, with flexibility, from different locations.  A system is more valuable where workers can collaborate to solve problems and bring their knowledge to bear on relevant problems.  So how much is knowledge management worth?

The full cost of a system – the way systems are traditionally modelled – assumes 100% (at least!) effectiveness.  Cost models such as COSYSMO and COSYSMOR account for internal capability with statistical coefficients.  Modelling soft costs such as information effectiveness and technology performance helps the business define the root causes of poor performance rather than relying on subjective self-analysis.  If a firm makes the wrong assessment of capability scores in COSYSMO, the projected cost of an IT system could be out by tens of millions.
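To see how a single mis-scored capability factor could put a projection out by tens of millions, consider a hedged sketch (the base cost and multiplier values are invented for illustration, not drawn from the published COSYSMO tables):

```python
# Illustrative figures only: a $50m nominal programme where one
# capability factor is mis-rated, shifting its effort multiplier.
base_cost = 50_000_000         # nominal cost under the true ratings
true_multiplier = 1.00         # team capability is actually 'nominal'
assumed_multiplier = 0.80      # optimistically scored as 'high'

projected = base_cost * assumed_multiplier   # what the model forecast
actual = base_cost * true_multiplier         # what the project will cost
shortfall = actual - projected
print(f"Projection understates cost by ${shortfall:,.0f}")
```

One optimistic rating on one factor opens a $10m gap before any of the other coefficients are even considered.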

Financial models for IT should therefore focus less on the cost of technology and more on the cost of capability.  The answer to this is in modelling soft costs (management costs), indirect costs and project costs as well as the hard costs of the system’s infrastructure, apps and services.


Will the HR Function Survive Enterprise 2.0?


What will HR look like in the future?  If recent articles online are to be believed then the HR function will be more powerful and more important than ever.  We do know the following:

  • The standard job market will become more fragmented as better information management allows much of the work we know today to become commoditised.
  • In a world where executive education is highly specialised and more competitive, the acquisition of top talent will become more difficult as senior executives look for richer, more cross-functional, more multi-disciplinary skillsets in their stars.
  • Top executives will add complexity to HR through cross-functional skillsets (i.e. top talent will be less obvious because they will not necessarily rise in vertical portfolios; the best project manager may also be a registered psychologist, for instance).

What we do not know is just how the enterprise will react.  The onset of better databasing technologies and mobile methods of capture did not ease the information management problem in organisations (rather it has just created mountains of unmanageable data).

  • So, will the  fragmentation of the labour market and the rise of cross-functional skill sets add too much complexity?  Will the HR market be able to cope?
  • Will the HR function reach an inflection point where complexity is too great and the entire function is outsourced completely? Or,
  • will better information management allow line managers to integrate talent sourcing directly into operational business processes?

If a recent (27 Sep 2012) survey by KPMG is an indicator, then cost pressures mean that businesses will have to get smarter about HR if they are to remain competitive.  In boom times, even with 35% of respondents arguing for greater direct collaboration with operational management, it is unlikely that even this volume of vociferous response would change the HR paradigm.  However, with cost pressures at almost unbearable levels and social media increasing the transparency and speed of operations, it is unlikely that the HR function will survive even as it stands today.

The following outcomes are likely: (i) commoditised work will be consumed by line management into standard operations, and (ii) top, cross-functional talent will be outsourced to ever more high-end boutique HR consultancies.  Smaller HR firms will fall by the wayside, but the high-end firms will demand higher margins and their clients will demand greater results.  It is possible that, much like the IT services firms of the ’90s, boutique HR consultancies could take stakes in the realised profit that certain new-hires make.

Whatever the answer is, it will be global and there will be big money in it.

The Polyvalence of Knowledge: using financial ratios to inform system choice (Pt II)

For too long the languages of CIOs and CFOs have not been confluent.  CFOs want hard returns in the form of discounted cash flows and better return on equity (ROE).  CIOs speak of ‘intangible’ benefits, better working practices and compliance.  Despite its poor reputation, however, information management does show up in the balance sheet.  Intuitively, it is easy to understand that if a business not only increases the speed and accuracy of decision making but also decreases the number of highly paid executives needed to do it, then the effect on the bottom line can be significant.

The first part of this blog looked at the structure of information management that shows up in the balance sheet.  This part looks at how to calculate some of those ratios.

Unfortunately, the International Financial Reporting Standards (IFRS) are almost entirely geared towards the performance of capital as a measure of productive wealth.  As the information economy picks up speed, however, capital is neither the scarcest nor the most valuable resource a business can own.

The difficulty is in calculating the value of managerial decision making.  Without going in to the detailed calculations of Return-on-Management (ROM) I have outlined below two new financial ratios which allow businesses to determine a financial indicator of both information effectiveness and technology performance.

  1. Information Effectiveness.  This ratio measures the effectiveness of corporate decision making at increasing the financial value of the business.  It is defined as the decision-making value minus the decision-making costs.  As described in a previous blog, the value of information is calculated through Information Value-Added (IVA):  Information Value-Added = (Total Revenues + Intellectual Property + Intellectual Capital) – (operations value-added + capital value-added + all purchases of materials, energy and services).  This is to say that once all labour, expenses and capital (that are not part of an information system) are accounted for, their cost is subtracted from the total of gross revenues (plus IP).  In other words, it is the part of the profit which is not directly accounted for by operations or increased capital value.  It is profit attained purely through better managerial decision making.  This might be achieving better terms and conditions in the supply of a product, or it might be a reduction in insurance costs on a given contract.  The cost of information management is somewhat easier.  Ultimately, corporate decision making is accounted for as ‘overhead’ and therefore shows up in Sales, General & Administrative expenses (SG&A) on the income statement.  The sum total of managerial activity – which is ultimately what information management is for – can be accounted for within the SG&A ledger.  The more operational aspects of SG&A, such as R&D costs, should be removed first, however.
  2. Technology Performance.  This measures the ability of corporate information systems to increase company value.  Ultimately, it answers the question of whether a firm’s technology is creating value, as opposed to being simply value for money.  More specifically: how much value are the company’s systems adding?  This is shown as the total value added by information support (IVA) less the total cost of technology and management, as a percentage of shareholder equity.  Note that shareholder equity is chosen over short-term indicators such as EBT because many KM/management systems take time to deliver longer-term equity, as opposed to short-term cash flow.  This metric assists in determining whether the cost of technology is worth the value it is creating.
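The arithmetic of the two ratios can be sketched as follows (the figures are invented for illustration; treat the functions as a reading of the definitions above, not a standard accounting calculation – note the second bracket of the IVA formula sums the deductions):

```python
def information_value_added(revenue, ip, intellectual_capital,
                            operations_va, capital_va, purchases):
    """IVA: the profit not directly accounted for by operations or capital."""
    return (revenue + ip + intellectual_capital) - (
        operations_va + capital_va + purchases)

def information_effectiveness(iva, sga_decision_costs):
    """Decision-making value less decision-making (SG&A) costs."""
    return iva - sga_decision_costs

def technology_performance(iva, tech_and_mgmt_costs, shareholder_equity):
    """(IVA - total technology/management cost) as a % of equity."""
    return 100 * (iva - tech_and_mgmt_costs) / shareholder_equity

# Invented figures, in $m:
iva = information_value_added(revenue=500, ip=20, intellectual_capital=10,
                              operations_va=300, capital_va=80, purchases=60)
print(iva)                                  # 90
print(information_effectiveness(iva, 40))   # 50
print(technology_performance(iva, 30, 400)) # 15.0
```

Compiled quarterly, these two numbers give the CIO and CFO a shared, self-benchmarked index rather than a vendor-supplied comparison.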

Financial ratios have benefits over other performance indicators.  For instance, there is a tendency within the corporate environment to benchmark costs against firms in a similar segment.  This is excellent where small and medium-sized enterprises have aggressive cost management programs.  However, in large companies with sprawling ICT environments, benchmarked costs become less relevant for cost comparison and more relevant for contract management.  The benefit of financially driven information management is that it allows companies to benchmark against themselves.  By compiling quarterly or yearly indices, firms can benchmark their own information management performance.  More importantly, these non-standard financial ratios provide not only a means for the CIO and CFO to communicate using a common language but also the ability to refine the exact nature of the solutions.

In sum, financial ratios will not tell a business what brands to buy, but they will help executives refine their choice.

The Polyvalence of Knowledge (Pt I): how financial ratios can influence system choice

In this, the first part of “The Polyvalence of Knowledge”, we examine the use of financial analysis to inform system choice – in particular, back-office business systems rather than operational systems.  Operational systems, such as the software used to tip a smelter, are best analysed through NPV.  Back-office systems, however, cannot in any way be directly linked to an increase or decrease in revenue/cash flow.  These investments are much harder to appraise because there are few ways of determining exactly how much value they add to a business.  This blog looks at how to analyse financial statements in order to determine exactly which systems are needed.

Does One Size Really Fit All?

Modern systems can be described as multi-valent: one system can act on a number of critical functional areas.  But does one size really fit all?  Business and ICT believe they are achieving good value for money by purchasing a single inexpensive system to fill multi-faceted roles.  What ends up happening, however, is that the system achieves little in each area and becomes a costly white elephant.

What prompts the one-size-fits-all solution?  The primary cause of many of these implementations is (a) multiple business units have problems, coupled with (b) an inability by ICT to develop precise, accurate and complex business cases directly supporting the improved financial performance of the business.  In many cases a senior executive becomes nervous about the security of information, a separate business unit voices their frustration with their inability to collaborate and co-ordinate information and ICT says that it can solve both problems with one system.

Firstly, what are the primary back-office systems, what are they used for and what are their financial benefits?

  •   Electronic Document & Record Management Systems.  EDRMS are designed for the storage and retrieval of high-value records, such as contracts, patents and other documents containing intellectual capital which is hard to replace.  The loss of such material would be considered a security breach and would compromise current and future operations.  EDRMS, unfortunately, are usually only fully implemented in back-office units which have a culture of compliance and are therefore the least likely to need it.

Due to the nature of mature documentation, EDRMS typically support contracts and supply chain & vendor management.  These systems assist in the search and retrieval of framework agreements for procurement as well as operational information.  A business with a well-embedded EDRMS and developed supporting business practices could expect to have lower costs in its supply chain.  Supply chains themselves tend to be capital intensive, and so a high-performing supply chain will support a greater Return-on-Assets ratio, i.e. better contract and supply chain management tends to support higher capital utilisation.  Service companies tend not to have capital-intensive supply chains and are not, therefore, significant users of EDRMSs.

  •   Business Intelligence Systems.  BI systems exist in a variety of forms and have promised much over the years.  They can be as simple as reporting tools for standard data warehouses or may be implemented as complex artificial intelligence over multiple operational systems.  BI holds the power to reduce complexity in decision making, and good BI therefore holds the power not only to reduce management staff (overhead) but also to increase a company’s Information Productivity index (a ratio showing the ‘value of information’, i.e. SG&A-to-Revenue, accounting for the cost of capital).  Ultimately, if a company uses its BI systems well then it will show in its Information Productivity index.
  •   Project Management Systems.  PPM systems are designed to improve the speed and accuracy of resource allocation across a distributed enterprise as well as contribute to better project cost control.  Fluctuations in resource efficiency are not usually felt in the overhead but rather in project cost and schedule overruns.  It is important to note that PPM systems are only as good as the discipline with which they are used; none are effective on their own for significant analysis or project optimisation.  However, if a distributed enterprise does not have a PPMS, the likelihood of cost and schedule overrun increases significantly beyond the standard 30% risk factor.

It should be noted that I class CRM systems as a hybrid of project and risk management systems.

  •   Messaging & Email.  Little need be said about ubiquitous messaging and email systems other than that they are merely a cost of doing business.  The costs of these systems should be seen as sunk costs because the modern business simply cannot afford to do without them.  When building business cases for modern messaging, firms should look further, to more valuable social networking applications, for the following reasons:
  1. The structure of SN groups of interest already provides valuable metadata to automatically tag messages for archive, search and retrieval.
  2. Parcelling conversations by subject, group and associative images mirrors human memory more closely than standard systems do.
  3. The development of Communities of Interest (COIs) along with the web-based structure and storage of documents/non-critical records is both easier and more secure.

For these reasons and more, companies should seriously consider SN apps as replacements for standard email systems.  More importantly, the security and storage features of SN apps remove the need for the plethora of inappropriate SharePoint implementations; in such cases, greater attention could instead be paid to developing more focused operational systems.

  •   Enterprise Resource Planning Systems.  ERPS reduce the clerical burden of processing payroll and human resource transactions.  Their value lies in the amount of overhead (finance and HR) labour they can remove from a business.  Good ERPS implementations should show in reduced labour and lower SG&A costs.
  •   Knowledge Management Systems.  KMSs exist to store the non-critical knowledge capital and intellectual assets of a firm.  These may simply be the records and materials one needs for daily work; for instance, it has been estimated that a law firm loses $100,000 in knowledge when a partner leaves.  Alternatively, KMSs may store the accumulated knowledge capital of a firm, such as frameworks and intellectual property.  Businesses which use KMSs well have a higher Knowledge Capital value, which is the difference between market value and shareholder equity, less estimated goodwill.
  •   Collaboration Systems.  Collaboration systems or team sites are generally smaller, simpler and locally managed KMSs.  MS SharePoint is one such example and also shares functionality with PPMSs.  Any financial benefits will be similar to standard KMSs.
  •   Enterprise Risk Management Systems.  xRMSs exist to reduce a broad spectrum of risks across the enterprise.  They provide a database for the documentation of risk although they offer little analytical capacity.  Separate systems often must be used for this.  xRM effectiveness will normally only show in reduced project cost and schedule overruns.  However, a large company may also be able to reduce its risk premium through the effective management of financial risk.
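Two of the ratios mentioned in the list above can be sketched numerically.  This follows the definitions as given in this post (Information Productivity as SG&A-to-Revenue; Knowledge Capital as market value less shareholder equity less estimated goodwill); all figures and names are illustrative assumptions, not data from any real firm.

```python
# Illustrative calculations for two ratios discussed above.
# All input figures are hypothetical.

def information_productivity(sga: float, revenue: float) -> float:
    """SG&A-to-Revenue ratio, per the definition used in this post:
    a lower ratio suggests overhead converts information into
    revenue more efficiently."""
    return sga / revenue

def knowledge_capital(market_value: float,
                      shareholder_equity: float,
                      estimated_goodwill: float) -> float:
    """Difference between market value and shareholder equity,
    less estimated goodwill."""
    return market_value - shareholder_equity - estimated_goodwill

# Hypothetical firm: $50m SG&A on $400m revenue;
# $900m market value, $500m equity, $100m goodwill.
ip = information_productivity(50e6, 400e6)   # 0.125
kc = knowledge_capital(900e6, 500e6, 100e6)  # 300,000,000.0
print(f"Information productivity (SG&A/Revenue): {ip:.3f}")
print(f"Knowledge capital: ${kc:,.0f}")
```

Note that other formulations of Information Productivity exist; the sketch deliberately sticks to the shorthand used here.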

Why haven’t back-office systems increased information productivity, knowledge capital and asset utilisation?  Simply, the ICT and Finance functions do not generally work together to support cost reduction or revenue growth strategies.  Largely, back-office systems are implemented to satisfy whims of personal functionality, security or broad-based compliance.  In addition, current financial ratios focus almost entirely on capital rather than information, yet capital is arguably no longer the scarcest or most expensive resource.

Read the next instalment of this blog to see how to analyse the financial systems in order to determine which applications the business requires.

The Cost of Information

In its simplest form, the cost of information is the aggregate of all information systems (less depreciation), management salary costs and management expenses.  This is roughly equal to the net cost of information in a company.  Unsurprisingly, the number is huge.
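The aggregation just described can be written as a minimal sketch.  The figures are hypothetical placeholders, inserted only to show how the components combine.

```python
# Net cost of information, per the aggregation described above:
# information systems (less depreciation) + management salaries
# + management expenses.  All inputs are hypothetical.

def net_cost_of_information(systems_cost: float,
                            depreciation: float,
                            mgmt_salaries: float,
                            mgmt_expenses: float) -> float:
    return (systems_cost - depreciation) + mgmt_salaries + mgmt_expenses

total = net_cost_of_information(
    systems_cost=20e6,   # all information systems
    depreciation=5e6,    # depreciation on those systems
    mgmt_salaries=60e6,  # management salary costs
    mgmt_expenses=10e6,  # management expenses
)
print(f"Net cost of information: ${total:,.0f}")  # $85,000,000
```

Even with modest placeholder inputs, the labour components dominate the systems component, which anticipates the argument below.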


Information is expensive.  It is not merely an amorphous mass of noughts and ones floating about in the commercial ether.  That may be the nature of information, but costing it is a different matter.  Information is the lifeblood of an enterprise and, more particularly, of management.  The role of management (those not directly involved in the manufacture of the company’s core products or services) is to optimise corporate output by using information to make the best decisions on the allocation of capital and resources.  To achieve this, management requires information for decision making, and that information comes at two costs.  The first is the cost of capital: information systems (or at least machines with a diagnostic function) and the services that provide the information, including overhead costs of shared peripherals (printers, routers etc.) and organisational costs such as support staff and training.  The second is the cost of resources: the people (knowledge workers) who process the information.


Direct and indirect labour costs are always the most expensive aspect of the cost of information.  The cost of computers has dropped significantly over the last 40 years while the cost of labour has risen steeply.  In accounting terms, per-unit costs of computing are down but per capita costs are up.  Consequently, with every reduction in the cost of workstations there is generally a corresponding rise in the use of services: a single workstation is cheaper, but the cost of running it is much higher.  The cost of labour will therefore always exceed the cost of capital, and so the organisational costs of information will always be the highest.


Significantly, organisational costs increase the further information is pushed away from the customer.  Look at any back office and consider that all that activity is triggered by single customer requests.  Information is decomposed and pushed off into various departments, so the volume of information increases the further it travels from the customer.  Processing systems and resources proliferate to deal with this throughput.  As transactions increase, so too do setup costs, further systems (network devices, security etc.) and additional management for governance and oversight.

Transactions cost money.  The higher the number of transactions, the higher the setup costs between unit workers, the higher the management overhead for oversight, governance and workflow and, more pertinently, the lower the overall productivity.  Each time a manager turns on a computer (a setup cost) to check a spreadsheet or a project plan, or to approve a budget expense or purchase, they increase costs for the business and slow down the eventual output.

Information is produced through the direct application of labour to capital (people using machines), and it is a cost that must be apportioned and controlled like any other capital cost.  The high cost of information does not mean it should be undervalued.  On the contrary, low capital costs mean that capital purchases are always attractive so long as a business can control the associated labour rates.  Information may be expensive, but properly developed and properly used it is also extremely valuable.


The moral of the story is that organisations should reduce the cost of people, not the cost of machines.  The market is a good arbiter of the cost of information systems: they are cheap, and haggling over price in the current market is wasted effort.  Reduce overall labour costs in departments, but more importantly reduce transactional volume by increasing organisational learning.  Single workers will then be able to perform more types of transactions and so increase process efficiency.  Per capita costs may rise, but units of work will also increase.  In the end, the idea is to have fewer but more expensive workers who ultimately get more done in less time.