The Financial Value of a System: how to determine how much to pay for your ICT

Much work has been done in the field of Applied Information Economics (Hubbard, John Wiley & Sons 2007).  Most of the analysis goes towards how much a business should pay for a large commercial decision but glosses over the individual value of systems, human activity and infrastructure.



It’s easy to say that a company should spend no more than $375k on an advertising campaign, for instance.  But how much should the company spend on its technology?  How much should the firm spend on its ERP system?  How much on its CRM system?  Is it even possible to measure the value derived from good customer relationship management and, if so, how much of that value can be accurately attributed to the technology?  And how much is it worth the company spending to come up with that figure?


If we take this analysis further, what is the value of a back-office system?  What decisions do back-office systems help businesses make?  ERP systems assist in monthly financial reporting and variance analysis.  Project modules of ERPs help companies determine whether project costs have overrun.  The core value of back-office systems, as opposed to operational systems, is that they reduce risk (cost) rather than create opportunity (profit).  Operational systems, which can directly increase discounted cash flows, are therefore better suited to NPV analysis.  Back-office systems, which are largely seen as sunk costs (the cost of doing business), are better appraised through Net Present Cost (NPC) analysis.  Before the business gets to that stage, however, it must come up with a detailed cost model.


Although the types of systems are the topic of another blog, the investment goal for a back-office system should be one which reduces uncertainty in decision making.  Therefore, what is the expected opportunity loss (EOL) from poor decision making?  How does one calculate the amount of revenue lost from poor decision support?  More importantly, how does a business calculate the value of the decision support which a back-office system delivers?

  •   Estimate Financial Value.  In operational systems this is far easier.  For instance, in an investment appraisal of new systems to automate certain plant and equipment, experts may attest that system controls improve efficiency by 20%.  The decision is likely to be clear cut.  What about a new ERP system?

In such cases it is important to take a holistic view of the whole ‘capability’ (i.e. the technology, people and processes together).  Imagine that the new ‘system’ will enable an engineering services firm to quote and estimate proposals (in this example it is important to imagine that the new system will enable them to do so ‘perfectly’).

In this case, what is the current value of bidding and tendering information?  With the current information, for instance, a firm may have $10 million EBIT from tenders won, based on a 60% win rate and a 40% cost blowout.  The firm wishes to improve its profitability by increasing its bid capability (which includes cost and schedule estimation).  If each project were tendered perfectly (let us forget for the moment about failures of project delivery) they could achieve almost $17 million in EBIT.  To this end they want to know how much to invest in ICT, and how much in people and process, in order to achieve the additional $7 million profit.


For the system to work perfectly it must not only contribute perfect information but the information must also be perfectly usable, i.e. lost revenue should be a factor of human error not human input.  Note that the system will only increase the accuracy of bids/proposals (costs and schedules etc).  It may or may not increase the probability of winning.

Using the example above, how much should the company invest in technology?  Firstly, in this example our ‘experts’ estimated with 90% confidence that the business would achieve $15,500,000 with a new capability.  This is largely because the problems seem relatively well known to them.  They allowed a 10% chance that the business would only achieve $100,000, largely because they don’t think their analysis is wrong.  They also estimate that the break-even threshold is about $375,000, which is roughly the equivalent of one new role (to account for capacity) and some new technology to improve workflow.

Without going through Hubbard’s detailed calculations, that gives us an estimated value of information of $616,878.  This means that the company should invest no more than this sum in their bid capability to achieve the desired $7 million profit.

This means that the overall capital implementation and operating expenses (let us say out to 3-year projections) should be no more than approximately $617,000.
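Hubbard’s full Applied Information Economics calculation is not reproduced here, but the underlying idea, expected opportunity loss, can be sketched.  Purely for illustration, assume the experts’ 90% interval ($100,000 to $15,500,000) is read as the 5th to 95th percentiles of a normal distribution; this is an assumption of the sketch, not the blog’s exact method, so it will not reproduce the value-of-information figure quoted above:

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Calibrated 90% interval from the experts: $100k to $15.5m.
lo, hi = 100_000.0, 15_500_000.0
mu = (lo + hi) / 2               # mean of the assumed normal
sigma = (hi - lo) / (2 * 1.645)  # a 90% interval spans about +/-1.645 sigma
threshold = 375_000.0            # break-even threshold

# Expected opportunity loss: E[(threshold - X)+] for X ~ N(mu, sigma),
# i.e. the average shortfall if the outcome lands below break-even.
z = (threshold - mu) / sigma
eol = (threshold - mu) * norm_cdf(z) + sigma * norm_pdf(z)
print(f"Expected opportunity loss: ${eol:,.0f}")
```

The point of the exercise is the shape of the reasoning: the value of better information is bounded by the losses it could prevent.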


How much of the approximately $600k should be spent on technology and how much on people (new roles/new hires and training), and process?

CAPABILITY CONFIGURATIONS – parametric modelling

The simplest way to assess how much to spend on technology is to develop a calibrated estimate of the configuration of the capability.  In sum, there is no system or tool that can authoritatively tell a company how it should spend its cash.  The business is the expert and the best way is still to calibrate its experts to give the best options to management.

With Capability Configurations one must note that any given capability may have multiple configurations.  There is no right configuration just one that is optimised for the given parameters.  It is up to the business to offer a variety of configurations with a range of costs.

After each configuration is developed it is run through Monte Carlo simulation to determine the probability of achieving the cost target within the desired range.  The strength of this method is twofold.  Firstly, it simulates a range of costs, acknowledging that static costs cannot be predicted.  Secondly, the cost ranges are determined by experts in the first place.

It is worth noting:

  1. Estimates should be calibrated and given by experts.  These are not wild guesses but represent the true high and low ends of the likely spectrum (not way-out possibilities).
  2. The cost models for the capability configurations must be decomposed to the lowest level.  Monte Carlo simulations run best on more detailed models.
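The simulation step above can be sketched in a few lines.  The decomposed cost model below uses hypothetical line items with calibrated three-point estimates (all figures invented to roughly match the example) and estimates the probability of landing under the two cost targets:

```python
import random

random.seed(42)

# Hypothetical decomposed cost model for one capability configuration:
# (low, most likely, high) three-point estimates per line item, as elicited
# from calibrated experts. All figures are invented for illustration.
cost_model = {
    "application licences":  (60_000, 90_000, 150_000),
    "implementation labour": (80_000, 120_000, 200_000),
    "training":              (30_000, 60_000, 110_000),
    "infrastructure":        (40_000, 70_000, 130_000),
    "new role (3 years)":    (90_000, 110_000, 140_000),
}

budget_cap = 617_000   # the value-of-information ceiling
ideal_cost = 450_000   # the stretch target

trials = 20_000
within_cap = within_ideal = 0
for _ in range(trials):
    # random.triangular takes (low, high, mode)
    total = sum(random.triangular(lo, hi, mode)
                for lo, mode, hi in cost_model.values())
    within_cap += total <= budget_cap
    within_ideal += total <= ideal_cost

print(f"P(total <= ${budget_cap:,}): {within_cap / trials:.0%}")
print(f"P(total <= ${ideal_cost:,}): {within_ideal / trials:.0%}")
```

Each alternative capability configuration gets its own cost model and its own run, and the probabilities are compared across configurations.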


The first simulation was performed on a cost model with high application, training and infrastructure costs.  The analysis showed a 99% chance of achieving costs within the $617,000 range and a 46% chance of achieving the ideal cost (here estimated at approximately $450,000).  The second capability configuration included more people at a greater expense and estimated a lower probability of achieving the desired $450k goal.  Intuitively, when simulating technology costs in the low tens of thousands against salaries in the hundreds of thousands, the probability of success will be greater for the technology-heavy configuration: it is less risky to invest in technology and systems than in people.  People always cost the most so, ideally, a business will wish to spend more on technology than on people, with some money going towards training of high-performing staff who are difficult to replace.

This was a highly simplistic model.  A more decomposed parametric model showing greater detail may have yielded a slightly different result.


The major criticism is that this approach only takes into account hard costs and does not account for the integration of the capability into the business.  How will the capability take to the business?  Will the firm be able to develop it, implement it and run it?  The use of these and other qualitative factors will be examined in another blog soon.


The Polyvalence of Knowledge: using financial ratios to inform system choice (Pt II)

For too long the language of CIOs and CFOs has not been confluent.  CFOs want hard returns in the form of discounted cash flows and better return on equity (ROE).  CIOs speak of ‘intangible’ benefits, better working practices and compliance.  Despite its poor reputation, however, information management does show up in the financial statements.  Intuitively, it is easy to understand that if a business not only increases the speed and accuracy of decision making but also decreases the number of highly paid executives needed to do it, then the effect on the bottom line can be significant.

The first part of this blog looked at the structure of information management that shows up in the financial statements.  This part looks at how to calculate some of those ratios.

Unfortunately, International Financial Reporting Standards (IFRS) are almost entirely geared towards the performance of capital as a measure of productive wealth.  As the information economy picks up speed, however, capital is neither the scarcest nor the most valuable resource a business can own.

The difficulty is in calculating the value of managerial decision making.  Without going into the detailed calculations of Return-on-Management (ROM), I have outlined below two new financial ratios which allow businesses to derive a financial indicator of both information effectiveness and technology performance.

  1. Information Effectiveness.  This ratio measures the effectiveness of corporate decision making at increasing the financial value of the business.  It is defined as decision-making value minus decision-making cost.  As described in a previous blog, the value of information is calculated through Information Value-Added (IVA):  Information Value-Added = (Total Revenues + Intellectual Property + Intellectual Capital) – (operations value-added + capital value-added + all purchases of materials, energy and services).   That is to say, once all labour, expenses and capital that are not part of an information system are accounted for, that cost is subtracted from the total of gross revenues (plus IP).  In other words, IVA is the part of the profit which is not directly accounted for by operations or increased capital value.  It is profit attained purely through better managerial decision making.  This might be achieving better terms and conditions in the supply of a product, or a reduction of insurance costs on a given contract.   The cost of information management is somewhat easier.  Ultimately, corporate decision making is accounted for as ‘overhead’ and therefore shows up in Selling, General & Administrative expenses (SG&A) on the income statement.  The sum total of managerial activity – which is ultimately what information management is for – can be accounted for within the SG&A ledger.  The more operational aspects of SG&A, such as R&D costs, should be removed first, however.
  2. Technology Performance.  This measures the ability of corporate information systems to increase company value.  Ultimately, it answers the question of whether a firm’s technology is creating value, as opposed to being simply value for money.  More specifically, how much value are the company’s systems adding?  This is shown as the total value added by information support (IVA) less the total cost of technology and management, as a percentage of shareholder equity.  Note that shareholder equity is chosen over short-term indicators such as EBT because many KM/management systems take time to deliver longer-term equity, as opposed to short-term cash flow.  This metric assists in determining whether the cost of technology is worth the value it is creating.
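The two ratios fall out of ordinary ledger lines.  A toy calculation with invented figures (the variable names follow the definitions above, not any published accounting standard):

```python
# Invented figures (all in $m) to illustrate the two ratios. The variable
# names follow the definitions above, not any published accounting standard.
total_revenues        = 250.0
intellectual_property = 15.0
intellectual_capital  = 10.0
operations_value_added = 180.0
capital_value_added    = 20.0
purchases              = 45.0   # materials, energy and services

# Information Value-Added: the profit not directly accounted for by
# operations or increased capital value.
iva = (total_revenues + intellectual_property + intellectual_capital) \
      - (operations_value_added + capital_value_added + purchases)

sga_information_cost = 22.0   # SG&A with operational items (e.g. R&D) removed
technology_cost      = 6.0
shareholder_equity   = 120.0

# 1. Information Effectiveness: decision-making value less its cost.
information_effectiveness = iva - sga_information_cost

# 2. Technology Performance: IVA less technology and management cost,
#    as a percentage of shareholder equity.
technology_performance = (iva - technology_cost - sga_information_cost) \
                         / shareholder_equity

print(f"IVA: ${iva:.1f}m")
print(f"Information effectiveness: ${information_effectiveness:.1f}m")
print(f"Technology performance: {technology_performance:.1%}")
```

Tracked quarter on quarter, movements in these two numbers say more than any single snapshot.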

Financial ratios have benefits over other performance indicators.  For instance, there is a tendency within the corporate environment to benchmark costs against firms in a similar segment.  This is excellent where small and medium-sized enterprises have aggressive cost management programs.  However, in large companies with sprawling ICT environments, benchmarked costs become less relevant for cost comparison and more relevant for contract management.  The benefit of financially driven information management is that it allows a company to benchmark against itself.  In compiling quarterly or yearly indices, firms can benchmark their own information management performance.  More importantly, these non-standard financial ratios provide not only a means for the CIO and CFO to communicate in a common language but also the ability to refine the exact nature of the solutions.

In sum, financial ratios will not tell a business which brands to buy but they will help executives refine their choice.

The Polyvalence of Knowledge (Pt I): how financial ratios can influence system choice

In this, the first part of “The Polyvalence of Knowledge”, we examine the use of financial analysis to inform system choice: in particular, back-office business systems rather than operational systems.  Operational systems, such as the software used to tip a smelter, are best analysed through NPV.  Back-office systems, however, cannot be directly linked to an increase or decrease in revenue or cash flow.  These investments are much harder to appraise because there are few ways of determining exactly how much value they add to a business.  This blog looks at how to analyse financial statements in order to determine exactly which systems are needed.

Does One Size Really Fit All?

Modern systems can be described as multivalent: one system can act on a number of critical functional areas.  But does one size really fit all?  Business and ICT believe they are achieving good value for money by purchasing a single inexpensive system to fill multi-faceted roles.  However, what often happens is that the system achieves little in each area and becomes a costly white elephant.

What prompts the one-size-fits-all solution?  The primary cause of many of these implementations is (a) multiple business units have problems, coupled with (b) an inability by ICT to develop precise, accurate and complex business cases directly supporting the improved financial performance of the business.  In many cases a senior executive becomes nervous about the security of information, a separate business unit voices their frustration with their inability to collaborate and co-ordinate information and ICT says that it can solve both problems with one system.

Firstly, what are the primary back-office systems, what are they used for and what are their financial benefits?

  •   Electronic Document & Record Management Systems.  EDRMS are designed for the storage and retrieval of high-value records, such as contracts, patents and other documents containing intellectual capital which is hard to replace.  The loss of such material would be considered a security breach and would compromise current and future operations.  EDRMS, unfortunately, are usually only fully implemented in back-office units which have a culture of compliance and are therefore the least likely to need them.

Due to the nature of mature documentation, EDRMS typically support contracts and supply chain & vendor management.  These systems assist in the search and retrieval of framework agreements for procurement as well as operational information.  A business with a well-embedded EDRMS and developed supporting business practices could expect to have lower costs in its supply chain.  Supply chains themselves tend to be capital intensive, so a high-performing supply chain will empower a greater Return-on-Assets ratio, i.e. better contract and supply chain management tends to support higher capital utilisation.  Service companies tend not to have capital-intensive supply chains and are not, therefore, significant users of EDRMSs.

  •   Business Intelligence Systems.  BI systems exist in a variety of forms and have promised much over the years.  They can be as simple as reporting tools for standard data warehouses or may be implemented as complex artificial intelligence over multiple operational systems.  BI holds the power to reduce complexity in decision making, and good BI therefore holds the power not only to reduce management staff (overhead) but also to increase a company’s Information Productivity index (a ratio showing the ‘value of information’, i.e. SG&A-to-Revenue, accounting for the cost of capital).  Ultimately, if a company uses its BI systems well, it will show in its Information Productivity index.
  •   Project Management Systems.  PPM systems are designed to improve the efficiency and accuracy of resource allocation across a distributed enterprise as well as contribute to better project cost control.  Fluctuations in resource efficiency are not usually felt in the overhead but rather in project cost and schedule overruns.  It is important to note that PPM systems are only as good as their use: none are effective for significant analysis or project optimisation on their own.  However, if a distributed enterprise does not have a PPMS then the likelihood of cost and schedule overrun increases significantly beyond the standard 30% risk factor.

It should be noted that I class CRM systems as a hybrid of project and risk management systems.

  •   Messaging & Email.  Little need be said about ubiquitous messaging and email systems other than that they are merely a cost of doing business.  Costs of these systems should be seen as sunk costs because the modern business simply cannot afford to do without them.  When building business cases for modern messaging, firms could look further to valuable social networking applications for the following reasons:
  1. The structure of SN groups of interest already provides valuable metadata to automatically tag messages for archive, search and retrieval.
  2. Parceling conversations by subject, group and associative images works more similarly to human memory than standard systems.
  3. The development of Communities of Interest (COIs) along with the web-based structure and storage of documents/non-critical records is both easier and more secure.

For these reasons and more companies should seriously look to SN apps to replace standard email systems.  More importantly, the security and storage issues taken on by SN apps remove the necessity for the rollout of the plethora of inappropriate SharePoint implementations.  In these latter cases greater attention could be paid to the development of more focused operational systems.

  •   Enterprise Resource Planning Systems.  ERPs reduce the clerical burden of processing payroll and human resource transactions.  The value of an ERP is in the amount of overhead (finance and HR) labour it can remove from a business.  Good ERP implementations should show in reduced labour and lower SG&A costs.
  •   Knowledge Management Systems.  KMSs exist to store the non-critical knowledge capital and intellectual assets of a firm.  These may simply be the records and materials one needs for daily work.  For instance, it has been estimated that it costs a law firm $100,000 in lost knowledge when a partner leaves.  Alternatively, KMSs may also store the accumulated knowledge capital of a firm, such as frameworks and intellectual property.  Businesses which use KMSs well have a higher Knowledge Capital value, which is the difference between market value and shareholder equity, less estimated goodwill.
  •   Collaboration Systems.  Collaboration systems or team sites are generally smaller, simpler and locally managed KMSs.  MS SharePoint is one such example and also shares functionality with PPMSs.  Any financial benefits will be similar to standard KMSs.
  •   Enterprise Risk Management Systems.  xRMSs exist to reduce a broad spectrum of risks across the enterprise.  They provide a database for the documentation of risk although they offer little analytical capacity.  Separate systems often must be used for this.  xRM effectiveness will normally only show in reduced project cost and schedule overruns.  However, a large company may also be able to reduce its risk premium through the effective management of financial risk.
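Two of the indicators mentioned in the list above reduce to simple arithmetic.  The figures below are invented for illustration, and the Information Productivity proxy here ignores the cost-of-capital adjustment mentioned earlier:

```python
# Invented figures ($m) illustrating two indicators mentioned above.
market_value       = 950.0
shareholder_equity = 600.0
goodwill           = 80.0

# Knowledge Capital: market value less shareholder equity, less goodwill.
knowledge_capital = market_value - shareholder_equity - goodwill

# Information Productivity proxy: SG&A as a share of revenue
# (ignoring the cost-of-capital adjustment mentioned above).
sga, revenue = 45.0, 400.0
sga_to_revenue = sga / revenue

print(f"Knowledge capital: ${knowledge_capital:.0f}m")
print(f"SG&A-to-revenue: {sga_to_revenue:.2%}")
```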

Why haven’t back-office systems increased information productivity, knowledge capital and asset utilisation?  Simply, the ICT and Finance functions do not generally work together to support cost reduction or revenue growth strategies.  Largely, back-office systems are implemented to satisfy the whims of personal functionality, security or broad-based compliance issues.  In addition, current financial ratios focus almost entirely on capital rather than information yet it could be said that the former is neither the scarcest nor most expensive anymore.

Read the next instalment of this blog to see how to analyse financial statements in order to determine which applications the business requires.

The Architectural Enterprise: financially led capability development


There is one truism in the world of enterprise architecture, namely:  do not focus on developing the architecture first.  In other words, enterprises should focus on developing capability and not architecture.  Focusing on architecture can only ever gild the lily.  To focus on architecture first is to focus on systems first and not value.  To focus on architecture first is to focus on structures first rather than functions.

Enterprise architecture programs have received poor reviews in the past few years and most struggle even to leave the comfortable boundaries of the all-too-familiar systems rationalisation through data model interoperability.

Architecture, however, should not be the focus.  The focus should be value and the means to achieve this should be through organisational capability.

This blog is Part I of how to develop a comprehensive enterprise architecture program within an organisation, namely:  developing a capability portfolio.

STEP 1:  Value.

The best way to define value in an organisation or department is through variance analysis (so long as this is performed well and to at least 3 degrees of depth) in the relevant business unit.  In the Architectural Enterprise (the fictional enterprise built and run on EA guidelines) the specific variance would be referred to an Architectural Council to ensure that the variance was cross-referenced for dependencies and all the ‘noise’ was stripped away, i.e. personnel as opposed to role issues. The architectural team can now focus on supporting the process, service delivery or value activities.

Alternatively, if the EA program needs to start more stealthily, the ICT department may begin by cost-modelling the ICT budget.  The financial model needs to include 3-point estimates for all relevant costs.  Importantly, the higher bound is what the organisation does pay, the middle bound is what it should pay, and the lower bound (the 10% CI) is what it could pay.  This forces the team not only to prime the model with uncertainty (for later simulation) but also to make sure that realistic stretch targets are imposed for projected cost reductions.

Once the model is run through a deep sensitivity analysis the team can then strip out all non-capability cost drivers, such as internal transfers, tax liabilities, interest and depreciation etc.
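One way to sketch that sensitivity analysis is to simulate the 3-point model and rank each cost driver by its correlation with the simulated total.  All budget lines and figures below are hypothetical:

```python
import random

random.seed(7)

# Hypothetical ICT budget lines with (could pay, should pay, does pay)
# estimates, used here as (low, mode, high) of a triangular distribution.
budget = {
    "application support": (400_000, 550_000, 700_000),
    "infrastructure":      (300_000, 380_000, 500_000),
    "service desk":        (150_000, 200_000, 280_000),
    "licences":            (250_000, 300_000, 330_000),
}

trials = 10_000
samples = {name: [random.triangular(lo, hi, mode) for _ in range(trials)]
           for name, (lo, mode, hi) in budget.items()}
totals = [sum(samples[name][i] for name in budget) for i in range(trials)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The widest, most uncertain lines dominate the variance of the total,
# so they correlate most strongly with it.
ranked = sorted(budget, key=lambda n: -pearson(samples[n], totals))
for name in ranked:
    print(f"{name}: r = {pearson(samples[name], totals):.2f}")
```

The highest-ranked drivers are the candidates for the capability analysis in Step 2, once non-capability items have been stripped out.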

STEP 2:  Capability.

What is left are the most sensitive capability cost drivers.  The team now needs to make sure they focus on what is valuable and not just what is sensitive.  This is critical to ensure that the team doesn’t focus on low impact back-office ICT enabled capability but rather on high-impact, high-value operational capability.  The key is to ensure that an accurate value chain is used.

The best way to achieve an accurate value chain is to (a) develop a generic value chain based on industry baselines and then, (b) refine it using the firm’s consolidated financial returns.  The team should work with a management accountant to allocate costs amongst the various primary and supporting functions.  This will highlight exactly where the firm (i) derives its primary value, and (ii) along which lines it is differentiated.

Once the team understands these financial value-drivers and competitive subtleties they can now calibrate the capability-cost drivers with the value chain.

STEP 3:  Architecture.

Once the team has a short list of sensitive areas which are both architecturally weak and financially valuable they can then set about increasing the capability within defined parameters.

To do this, the team needs to create a parameterised architecture.  The architectural model has two facets: (a) it has capability components enabling it to encompass the dependencies of the area the team is focusing on, and (b) the capability components have attached values.

Determining values for the model is oftentimes difficult.  Not all components of capability have easily defined financial parameters.  What value does the team place on information?  On processes?  On services, or even on task functions in certain roles?  Although this will be the subject of another blog, the intangible aspects of capability must together account for 100% of the financial value.  For instance, a role filled to 80% (due to shortfalls in the person filling the role) will not necessarily mean that the capability runs at 80%.  For a good set of capability coefficients the team can use the COSYSMO model elements.  These coefficients allow the team to see how the overall cost varies with differences in organisational capability.

Once the architectural model is built the team can adjust the parameters so that they achieve both the cost that they could deliver whilst making sure that they are also increasing overall capability, i.e. process output, software integration, role utilisation etc.

Whereas standard cost reduction programs reduce cost without accounting for the wider capability sensitivities, this methodology models a capability financially while remaining cognisant of how intangible aspects support its overall enterprise value.


Through this method the team is not only able to identify the areas requiring attention but is also able to ensure that costs are reduced without compromising overall business capability.  Moreover, the team will be able to do this whilst engaging with both the technical and the financial teams.

Most importantly, by focusing on capability and not architecture the organisation can hone in not just on the hot-spots – because there will be too many to count – but on the valuable hot-spots.

How to do “Anti-Law” (What is “Anti-Law” – Pt 2)


Designing legal clauses should be an extension of the architectural process.  The contract, then, should be about 75-85% derived directly from the business and technical architectures and the taxonomy (terms).  The rest will be terms of art (contract construction, jurisdiction etc).


The first thing people say to me when I set out this proposition is that it’s obvious and sensible.  So why isn’t it done already?  Firstly, to a certain extent it is, we just don’t know it:  most commercial and technical managers already understand a great deal of the law surrounding their areas and simply design around it.  Secondly, it’s not intuitive – it’s not the logical next step when we’re doing the work.  Commercial architecture – the cost models, program plans etc which are just the attributes of the other architectures – has no spatio-temporal extent.  We can’t see, feel or touch it like we can the stuff we’re engineering, and therefore our understanding is governed by how we construct these concepts in our minds (Wittgenstein).


The counterintuitive bit is that many commercial entities are actually separate things – they are relationships to an entity.  In ontological terms they are ‘tuples’, instantiated in their own right.  In the same way, a packet of cornflakes has a cost, and then the manufacturer puts a price on it.

The price is a different thing but related to the cost. We know that the price, in turn is related to their profit margin which in turn is related to their overheads which in turn are made of R&D costs, admin, advertising etc.  All these elements are part of the business architecture which influences the commercial (financial and legal) architecture of the given project.


Risk is the key relationship we are modelling.  It is best to see risk as a relationship rather than a property.  Widget A does not have risk in and of itself.  Widget A does have risk when used in a certain machine, in a certain contract.  This is important to note because in later whitepapers I will talk about the need to separate the logical and physical commercial architectures in order to make trade-offs and impact assessments.


Risk links to architecture through a series of relationships.  Ultimately, if we look at components in a database then the relationships I would draw would be:


In the above the ‘context’ is the domain architectural instantiation (the physical architecture) we are modelling, for instance, Architecture A or 1 etc.  This allows us to compare risk profiles across different possibilities and then make trade-offs.  The difference between the ‘mitigation’ and the ‘mitigation instantiation’ is merely logical v physical.  Logically we might wish to insure against a risk but when it comes down to it physically we are insuring part, funding part with a bond issue and hedging against the rest.  The point is to identify clearly in the model what is mitigated and what is not.  In a large and complex model of thousands of elements we will want to make sure that all mitigations are accounted for and there are no overlaps.  If we’ve overlapped mitigations (insurance, loans etc) on a number of parts then the cost could be astronomic.  The calculations will be clearer in later whitepapers when I explain how to aggregate derived risk.
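The relationship set described above (a risk tied to a component and a context, a logical mitigation, and its physical instantiation) can be sketched as a minimal data model.  All class and field names here are my own illustration, not a fixed schema:

```python
from dataclasses import dataclass

# A minimal sketch of the relationships described above. Risk is modelled
# as a relationship between a component and a context, not as a property
# of the component itself. All names are illustrative only.

@dataclass
class Component:
    name: str

@dataclass
class Context:
    """A physical architectural instantiation, e.g. 'Architecture A'."""
    name: str

@dataclass
class Risk:
    component: Component
    context: Context
    description: str

@dataclass
class Mitigation:
    """A logical mitigation, e.g. 'insure against this risk'."""
    risk: Risk
    strategy: str

@dataclass
class MitigationInstantiation:
    """A physical mitigation, e.g. part insurance, part bond issue."""
    mitigation: Mitigation
    mechanism: str

def unmitigated(risks, mitigations):
    """Risks with no logical mitigation attached at all."""
    covered = {id(m.risk) for m in mitigations}
    return [r for r in risks if id(r) not in covered]

def overlapping(mitigations):
    """Risks carrying more than one mitigation (possible double cost)."""
    counts = {}
    for m in mitigations:
        counts[id(m.risk)] = counts.get(id(m.risk), 0) + 1
    return [rid for rid, n in counts.items() if n > 1]
```

A model like this makes it mechanical to ask the two questions raised above: which risks carry no mitigation at all, and which carry overlapping, and therefore potentially double-costed, mitigations.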


The physical mitigations are the most important things to agree on.  These are the processes and the structures that the business must agree on.  This will be a collaborative effort of the legal, financial, technical and programmatic teams.  They must be realistic, sensible and accounted for in the program plan.  Once clearly set out, a precise and specific contract clause (not a mopping-up clause) can be tightly crafted around each and every one.  These clauses must have triggers built into the program plan and each trigger must have adequate information derivation for decision making (i.e. you need to know when the clause might need to be used).

The trick in all of this is not in deciding what to do but in identifying the risks in the first place but that is for examination another time.


Repackaging risk to mitigate effectively will be the topic of another blog.

What is ‘architecty’ about IT architecture?

While the building and construction industry may rail against the self-proclamation of architect status by IT workers, one wonders to whom the greater disservice is being done.  Although a ‘system’ may not manifest the obvious beauty at which we turn and marvel in the spectacle of its design, the question should be asked:  cannot a system have elegance too?

What is ‘architecty’ about architecture?

To me architecture has always encompassed the synergy of design and form.  It is the gentle blending of the functional with the artistic.  An objet d’art yet a liveable space with true Vitruvian utility.  An expression of values and placement in the world yet something obviously sensible and practical.  Entirely down to earth yet striving to connect with the heavens.  Think of the pyramids, the Taj Mahal or the Paris Opera.  Can we compare IT to these?  Could we ever design an IT system that could convey the sense of awe and achievement that great buildings and spaces inspire?


First we must broaden our perspective of ‘system’.  I always think of a system as a collection of parts functioning in unison for a common purpose:  the body, the metro, a biometric system at an airport or even a bee colony.  A system has spatio-temporal extent.  It is not conceptual and we can and do interact with it – purposefully or not.  It will always be hard to describe a network of underfloor cables and servers as beautiful.  But what about a well designed software application?  Where code has been masterfully crafted together into a contained system which delivers meaning to our lives and allows us greater utility to interact with humanity and our environment, can that be architecture?  What of the larger system?  Think of Ewan McGregor in “The Island”.  Possibly not the best example but a highly complex yet (almost) perfect blending of the utility of the system with the elegance of its structure.

What is elegant about systems?

Edward de Bono describes a good joke as one that is completely logical and obvious only in hindsight. I think an elegant system is the same: almost impossible to design logically and progressively; rather, it requires some divergent and parallel thinking to arrive at the seemingly obvious answer. An architected system, therefore, should be a pleasure to use: inspiring yet unobtrusive, functional, and providing a clear means to our ultimate goal. The Venetian transport system, or even Facebook?

What do IT people do that is creative and beautiful?

IT allows us to interact with our environment in a way which heightens not only the end experience but the overall journey. Software designers can create incredible applications which are not only functionally rich but also a delight to interact with. We can all think of a system that encompasses these things, but there is one thing that architecture isn’t, and that is haphazard. ‘Architecting’ is design for the purpose of a single entity. Whether at the macro or micro level, architecting produces a single ‘architecture’. It has unity, singular identity and purpose. Think of the Parisian skyline. One thing is certain: architecting haphazardly is as bad as not architecting at all.

A new word for IT architects?

Even so, do IT architects ‘architect’? Mostly not. I would say that software designers often come the closest. This is not to say that IT is just Lego, positioning pre-fabricated electronic building blocks to achieve a slightly different look. By and large IT is more about technical engineering (capacity, throughput, interfaces, latency, queuing) than about designing into the human experience, but this does not mean it has to be, nor is it always the case.

The Cost of Information

In its simplest form, the cost of information is the aggregate of all information systems (less depreciation), management salary costs and management expenses. This is roughly the net cost of information in a company. Unremarkably, the number is huge.
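As a minimal sketch, the aggregate described above can be expressed as a simple sum. All figures and names below are hypothetical illustrations, not real company data:

```python
# Sketch of the net cost of information: information systems
# (less depreciation) plus management salaries and expenses.
# All figures are hypothetical.

def net_cost_of_information(systems_book_value, accumulated_depreciation,
                            management_salaries, management_expenses):
    """Aggregate of information systems (less depreciation),
    management salary costs and management expenses."""
    return (systems_book_value - accumulated_depreciation
            + management_salaries + management_expenses)

# Example: $4.0m of systems less $1.5m depreciation,
# $10.0m of management salaries, $2.5m of management expenses.
cost = net_cost_of_information(4_000_000, 1_500_000, 10_000_000, 2_500_000)
print(cost)  # 15000000
```

Even with modest hypothetical figures, the labour-side terms dominate the systems term, which is the point the post goes on to make.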


Information is expensive. It is not merely an amorphous mass of noughts and ones floating about in the commercial ether. That may be the nature of information, but costing it is another matter. Information is the lifeblood of an enterprise, and of management in particular. More specifically, the role of management (those not directly involved in the manufacture of the company’s core products or services) is to optimise corporate output by using information to make the best decisions on the allocation of capital and resources. To achieve this, management requires information for decision making, and that information comes at a cost: the cost of capital and the cost of resources. The capital costs include information systems (or at least machines with a diagnostic function) and the services that provide the information, along with the overhead costs of shared peripherals (printers, routers etc.) and organisational costs such as support staff and training. On top of the capital costs come the resource costs: the people, the knowledge workers, who process the information.


Direct and indirect labour costs are always the most expensive aspect of the cost of information. The cost of computing has dropped significantly in the last 40 years, yet the cost of labour has risen sharply. In accounting terms, per unit costs of computing are down but per capita costs are up. Consequently, with every reduction in the cost of workstations there is generally a corresponding rise in the use of services: a single workstation is cheaper, but the cost of running it is much higher. The cost of labour will therefore always exceed the cost of capital, and so the organisational costs of information will always be the highest.


Significantly, organisational costs increase the further from the customer the information is pushed. Look at any back office and consider that all of that activity is triggered by single customer requests. Information is eventually decomposed and pushed off into various departments, so the further it moves from the customer, the greater its volume. Processing systems and resources proliferate in order to deal with this throughput. As transactions increase, so too do set-up costs, further systems (network devices, security etc.) and the additional management required for governance and oversight.

Transactions cost money. The higher the number of transactions, the higher the set-up costs between unit workers, the greater the management needed for oversight, governance and workflow and, more pertinently, the lower the overall productivity. Each time a manager turns on their computer (a set-up cost) to check a spreadsheet, review a project plan or approve a budget expense or purchase, they increase costs for the business and slow down the eventual output.

Information is produced through the direct application of labour to capital (people using machines). Information is a cost that must be apportioned and controlled like any other capital cost. The high cost of information does not mean that information should be undervalued. On the contrary, low capital costs mean that capital purchases are always attractive so long as the business can control the associated labour rates. After all, information may be expensive, but properly developed and properly used it is also extremely valuable.


The moral of the story is that organisations should reduce the cost of people, not the cost of machines. The market is a good arbiter of the cost of information systems: they are cheap, and haggling over price in the current market is wasted effort. Try to reduce overall labour costs in departments but, more importantly, reduce transactional volume by increasing organisational learning. Single workers will then be able to perform more types of transaction, increasing process efficiency; per capita costs may rise, but so will units of work. In the end, the idea is to have fewer but more expensive workers who ultimately get more done in less time.

The Value of Information


Information is expensive, of that there is no doubt. The cost of information technology as a percentage of revenue remains high despite falling capital costs, as does the cost of maintaining the specialised management skills needed to sort and interpret the incredible volume of information. The question is whether information is actually financially valuable. Companies spend a large amount on managing information, but what return do they see? What is the value-added figure for information?

Information Value-Added = (Total Revenues + Intellectual Property + Intellectual Capital) - (Operations Value-Added + Capital Value-Added + all purchases of materials, energy and services).

This is to say that once all labour, expenses and capital not forming part of an information system are accounted for, that cost is subtracted from total gross revenues (plus IP). In other words, it is the part of the profit which is not directly accounted for by operations or increased capital value: profit attained purely through better managerial decision making. This might mean achieving better terms and conditions in the supply of a product, or reducing insurance costs on a given contract.
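The formula above can be sketched directly in code. All the figures below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch of the Information Value-Added formula: gross revenues plus
# IP and intellectual capital, less operations value-added, capital
# value-added and all purchases. Figures are hypothetical.

def information_value_added(total_revenues, intellectual_property,
                            intellectual_capital, operations_va,
                            capital_va, purchases):
    return ((total_revenues + intellectual_property + intellectual_capital)
            - (operations_va + capital_va + purchases))

# Example: $50m revenues, $2m IP, $1m intellectual capital,
# less $30m operations VA, $5m capital VA and $10m purchases.
iva = information_value_added(50_000_000, 2_000_000, 1_000_000,
                              30_000_000, 5_000_000, 10_000_000)
print(iva)  # 8000000
```

On these assumed figures, $8m of profit is left unexplained by operations and capital, and is attributed to better managerial decision making.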


Note that I include the term ‘intellectual capital’.  I define intellectual capital as an information asset which, if valued, would increase the value of the firm.  This is not to say that the information asset itself has value (such as patents and trademarks which may be bought and sold and therefore are IP) but rather information such as mailing lists, customer preferences, methodologies, databases etc.  These are generally valued in a business as goodwill but ideally should be valued separately.


Information value as an index can be calculated as the ratio of information value-added to information costs (see previous blog). In any given business unit, the value of its information is the additional money earned from management’s better decision making. If that decision making results in better operations then the gain is ‘operations value-added’ and should not be included; increased revenue not directly attributable to operations, however, should be counted as ‘information value-added’.
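The index itself is a simple ratio. A minimal sketch, using hypothetical figures for information value-added and information cost:

```python
# Index of information value: information value-added divided by
# information costs (the aggregate from the previous blog).
# Figures are hypothetical.

def information_value_index(info_value_added, information_costs):
    """Value-added earned per unit of information cost."""
    return info_value_added / information_costs

# Example: $8m of information value-added against $15m of information costs.
idx = information_value_index(8_000_000, 15_000_000)
print(round(idx, 2))  # 0.53
```

An index below 1 on these assumed figures would suggest the unit’s information spend is not yet paying for itself through better decision making.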


So, is information financially valuable? The standard answer is no. In most companies and business units gross revenues (plus IP/IC) may increase, but unless unit labour costs and technology costs are kept in check the overall productivity of information is limited. In many managerial accounting case studies the value of information is counted as gross revenue less the cost of IT (where the analysis takes place in a non-operational function such as purchasing). Given the reductions in IT capital costs over the years, one would assume the return on invested capital from IT is high. Add in the associated rise in labour costs, however, and the story is different: year-on-year declines are evident. The conclusion is clear: information is no longer adding much value in business.


In order to achieve greater value from corporate information a business must do two things:

Firstly, reduce per unit managerial labour costs. Instead of merely reducing headcount, the per unit cost of management should be reduced; in this way the company works cheaper, not harder. Overall headcount should be addressed through operational process improvements, not structural adjustments.

Secondly, increase profitability from managerial, non-operational decision making (operational decisions being subject to their own dynamics). With a renewed focus on non-operational decisions which increase profit or reduce costs, businesses will find that their earnings-to-information-cost index improves.


In order to achieve a greater return on invested capital, companies seek to ‘sweat the assets’. However, the labour costs associated with processing and managing information will always rise faster than capital expenditure on IT (and the associated operational expenditure on cloud computing services). Sweating IT assets is of limited value since they depreciate so quickly that they have virtually no value as soon as they are purchased. Increasing the value of information will largely come from increased revenue from higher management performance, not from lower IT costs. So, in order to achieve a greater return on information, companies should seek to ‘sweat the management’.

The Architectural Enterprise: developing an EA program in an organisation


Enterprise architecture is something that happens, not something an organisation does. The teams, roles, structures and functions already exist within most organisations. The benefits of an enterprise architecture are best driven from a central, cross-functional governing body with the power to (a) analyse the business and (b) drive alignment of the technical architecture.

EA is not a separate team or function. EA is best achieved at a high level by linking the functions of teams and business units. Most businesses already have enough EA structures in place; the usual failing is focus, not facilities.

Using an active governance, risk and compliance (GRC) framework, enterprises can ensure alignment of business capability and technical architecture. To achieve this, businesses should focus on the following three steps:

  1.   Focus on business capability, not technical architecture.
  •   Use financial modelling.
  •   Use value chain modelling for alignment.
  •   Use parametric modelling to ensure that capability does not decrease with costs.
  2.   Become a source of value for the business by offering capability development to the Strategic Business Units.
  •   It is the only way to get full visibility of their cost structures.
  •   It builds better stakeholder buy-in for complex projects.
  3.   Place enterprise architecture within the GRC function, with an Architectural Council (Tier 2 executives) to provide the focus for architectural development.
  •   Reinvigorate the business lifecycle governance process with dynamic assurance measures.
  •   Remove much of the onerous, burdensome program review and develop a more analytical, assurance-based model.

The Construction of Contracts: Problems with Plain English for the Interpretation of Legalese

Lawyers have long been derided for their use of verbose and complex language.  Businesses often criticise legal teams for turning simple commercial agreements into indecipherable esoteric jargon.

The state of legal drafting is likely responsible for a large portion of today’s commercial litigation.


When I was at law school someone asked why judgements were written in such complicated language. The lecturer gave the rather unsatisfactory answer of “well, they just are, so I suggest you get used to writing like that as well.” So begins the institutionalisation of the legal mind. Whilst lawyers argue that complex topics beget complex documents, it need not always be the case. It is simply not sufficient to argue that circumlocutory language is essential to navigate the labyrinthine technicalities of the law.


The problem is not the generation of meaning; it is surviving interpretation. It is one thing to explain a business model so that your nine-year-old daughter understands it. It is another thing entirely for the same language to stand up to intense scrutiny from a party who stands to lose money on its interpretation.

The phrase lawyers use is “covering the field”. Lawyers reduce the penetration of an attack by closing off all avenues of construction so that only one interpretation remains possible. What is left is usually a wordy cocktail so complex that it defies the very interpretation intended.


The problem with plain English is that it dumbs-down complex concepts.  Jargon, on the other hand, usually arises in two situations:  (a) where the author seeks to look cleverer than they really are, and (b) where the author is trying to convey complex concepts precisely to an audience of the same background understanding.

The fact is that English is an imprecise language; it is no wonder that French is the international language of diplomacy. In English, for instance, nouns have no grammatical gender, so there is often no way of telling which preceding noun the ‘he’ or ‘she’ of a sentence relates to.


Understanding, therefore, is created through a common conceptualisation, not the design of the language. Two builders, for example, will understand an agreement to build a house based on the architectural schematics. Two bankers will likewise understand the securitisation of debt and its sale as a derivative. Both of these contracts will be entirely opaque to most of us, and certainly to the nine-year-old daughter.

The trick, therefore, is in the creation of a common conceptualisation, one that is understandable and interpretable to the rest of us.  “The rest of us” should include the judge and expert witnesses should the matter ever be disputed.


English, therefore, is not the problem. Architecture is the problem. Agreements fail through the inability to create a common operating model. Much can be brought over from the building and construction industry, wherein the architecture, engineering and schedule form key artefacts of the business agreement and reduce ambiguity and uncertainty. Likewise, where parties create an easily interpretable model (physical or conceptual), the likelihood of ambiguity, misinterpretation or, perish the thought, obligation avoidance is greatly reduced. Defence departments are moving this way through the use of defence architecture frameworks (such as MoDAF, DoDAF and NAF), although none are yet optimised for use in legal documents. It will be a long time, and only with a huge push from the business community, before the state of legal drafting changes for the better.