The Complexity of Cost (Pt.1): problems with ICT cost reduction


In a crisis the company P&L statement can be a useful starting point for cost reduction programs.  Over the long term, however, general ledger entries lack the level of detail required for per-unit analysis (McKinsey, May 2010).  Unfortunately, few companies have systems which can analyse the complexity of cost and spend in order to make accurate and detailed changes.

In the following series of blogs we will highlight the problems with standard ICT cost reduction & management programs and detail how to structure and run one effectively.

The key to an effective ICT cost reduction & management program is detailed cost modelling.  Most financial systems do not capture costs at the right level of detail for businesses to perform accurate and detailed cost reductions.  Businesses need to perform intricate spend analyses and build up detailed cost models for ICT which highlight the following (a minimal sketch follows the list):

  • The capabilities which various ICT components support (and where in the Value Chain they lie).  Only through this level of visibility can the business consolidate its ICT spend.
  • The HR and process dependencies which are indirectly attributed to various ICT elements.  Only with this level of detail can ICT remove duplication and redundancy.
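As a minimal sketch (and only a sketch: the classes, fields and figures below are illustrative assumptions, not a prescribed schema), such a model might tie each ICT component to its per-unit cost, the capabilities it supports, its value-chain position and its indirect dependencies:

```python
# Hypothetical skeleton of a per-unit ICT cost model; all names and
# figures are illustrative only.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ICTComponent:
    name: str
    annual_cost: float                        # per-unit cost, not a GL roll-up
    capabilities: list[str]                   # business capabilities supported
    value_chain_stage: str                    # e.g. "operations", "service"
    hr_dependencies: list[str] = field(default_factory=list)
    process_dependencies: list[str] = field(default_factory=list)

components = [
    ICTComponent("CRM platform", 850_000, ["customer management"],
                 "marketing_sales", ["2 FTE admins"], ["lead-to-order"]),
    ICTComponent("legacy CRM", 400_000, ["customer management"],
                 "marketing_sales", ["1 FTE admin"], ["lead-to-order"]),
]

# Capabilities supported by more than one component are consolidation
# candidates; the dependency lists show what must move before a system
# can be cut without breaking a process.
counts = Counter(cap for comp in components for cap in comp.capabilities)
print([cap for cap, n in counts.items() if n > 1])  # ['customer management']
```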

In the absence of this granularity, cost reduction programs invariably fail or fail to stick.  In fact, McKinsey & Co note that 90% of cost reduction programs fail: only 10% actually succeed in realising sustained cost management three years on.

In a typical IT cost reduction cycle the following happens:

  • Headcount is reduced.  The remaining people then have to work harder (but with fewer skills, because tasks are pushed to the lower pay bands) to achieve the same amount of work.
  • Many, often unique, soft skills are also removed (from experienced people in the higher pay bands) in the redundancies.
  • Overall service levels decrease.
  • Further cost reductions are then required and some applications and services are axed.

In simple businesses this is not a problem.  In large and complex businesses the outcome usually follows a vicious cycle, namely:

  1. The firm still needs to retain a significant management overhead in order to deal with complexity.
  2. In these cases, poor transfer pricing and high overhead allocations mean that perfectly good, competitive core business processes seem cost-ineffective.
  3. Critically, Kaplan notes in his seminal work “Relevance Lost: The Rise and Fall of Management Accounting” that the increased costs of processes lead to the outsourcing of perfectly good processes.
  4. Capability suffers and the  business loses competitive advantage.
  5. The business is no longer able to deal with the level of complexity and complexity reaches an inflection point.  The business outsources the whole problem (e.g. large ERP programs with heavy customisation), getting locked into horrific terms and conditions.
  6. Core business is lost and competitive advantage is reduced. Remaining managers pad out their budgets with excessive risk and contingency in order to shield themselves from further cost reductions.
  7. Overheads increase again and the business eventually prices itself out of the market.


In a recent (2010) Accenture survey on general cost reduction effectiveness in the banking industry, 40% of respondents noted that their program had reduced overall ICT effectiveness and impacted adversely on both customer service and general management.

In order to reduce costs effectively without impinging on capability, and to make the new cost base stick, it is essential to view costs and spend at the most granular level possible.

In our next blogs we will detail how to structure and run an effective ICT cost reduction and cost management program, including effective ICT cost modelling.

 

The Law of Mobility: The legal implications of BYOD


The legal ramifications of moves towards corporate Bring-Your-Own-Device policies extend far beyond simple issues of IT security and the legal discovery issues of locally held commercial data.  The biggest challenge facing the commercial world is how far businesses will have to go in regulating the online life of an employee.

Most companies have a dusty old clause in their employee contracts which states that there is no privacy in the use of firm equipment.  Recent proposed legislative amendments in the US and cases in Canada (R. v. Cole, 2012 SCC 53) clearly show that the use of social media on corporate platforms is (a) increasingly permissible and (b) increasingly shielded from employer access.  More importantly, they highlight how corporate and personal data are being blended together in a socio-corporate online collage.

Previously, companies and government departments have been able to ignore personal cries for BYOD due to: (i) enterprise security concerns, (ii) legal risks around e-Discovery, (iii) a perception of limited utility in social media and (iv) cost pressures relating to IT support.  However, now:


  1. Enterprise security is no longer an excuse.  Increases in corporate cloud-based applications and desktop virtualisation mean that limited data is stored or cached on local devices.  In addition, any security breaches can be isolated to a certain user profile.  In the end, Bradley Manning and Wikileaks highlight the fact that there is little that will stop a disgruntled employee if they are intent on data theft.  Heavily layered, holistic security is the only answer.
  2. Mobile connectivity and enterprise workflows reduce local data storage.  Previously, compliance requirements for eDiscovery have limited the ability to store data locally.  However, mobile coverage is now better and costs have fallen for 3/4G and wifi access.  Coupled with cloud/virtual apps, the ability to sign documents in and out of company portals means that firms can reap the benefits of extended and flexible working along with greater Discovery compliance.
  3. The benefits of social media have extended the boundaries and time of the corporate workplace.  Corporate blogging has now, apparently, increased to 38%, with two-thirds of companies having a social media presence (up from 50% in 2009).  Social media not only provides additional channels for marketing but also increases both external and internal customer/stakeholder engagement (and such engagement extends beyond both the doors and the timeframes of the office).
  4. Multi-device support does not require bigger IT departments.  In fact, support is far more user-friendly (e.g. Salesforce.com, MS 365 etc) and has not resulted in burgeoning IT departments.  Companies can specify which devices they support and outsource platform support to infrastructure providers.

The fact of the matter is that companies and government departments must move to BYOD sooner rather than later.  In a recent article, Elizabeth Johnson of law firm Poyner Spruill LLP notes that in the US:

  • 87% of people confirm that they use personal devices at work.
  • 48% of companies state that they will not allow it.
  • 57% of the same companies acknowledge that employees do it anyway.
  • 72% check email on their personal devices.
  • 42% check email on personal devices even when sick.

In fact, many US college students claim that they would accept lower pay for the flexibility to use personal devices at work.  Whatever the case, the creeping cloud of BYOD will take hold, if only because better enterprise apps and improved enterprise security unlock the cost benefit of not having to pay for new devices.

I would posit that much of the blame for limited uptake lies in the fact that organisations are simply unwilling to deal with the additional layer of complexity.  BYOD lies at the nexus point of enterprise trust: their data in your hands.  How far are companies willing to let go of their information in order to reduce costs and increase productivity?  Will the law protect commercial interests in data rather than just IP?  Or computer-based personal records?  In the case of PhoneDog v. Kravitz the employer (PhoneDog) set up the Twitter account “@PhoneDog_Noah”, which the employee used “to disseminate information and promote PhoneDog’s services.”  During his employment, Kravitz’s Twitter account attracted approximately 17,000 followers.  When he left he kept using it and gained another 10,000 followers.  PhoneDog claimed that the account was theirs and sued for damages.  The court was satisfied that an economic interest was established and that harm was done.

In brief, the answer is that companies need to define the touchpoints where their data meets the social sphere.  If businesses are to reap the benefits of increased customer/stakeholder engagement through wider adoption of emerging social software platforms enabled by BYOD, then they need to deal with the added complexity at the nexus point of security, legal and information management.

 

The Social Enterprise: what will business 2.0 look like?


If Andrew McAfee’s book “Enterprise 2.0: New Collaborative Tools for your Organization’s Toughest Challenges” is to be believed then:

“We are on the cusp of a management revolution that is likely to be as profound and unsettling as the one that gave birth to the modern industrial age. Driven by the emergence of powerful new collaborative technologies, this transformation will radically reshape the nature of work, the boundaries of the enterprise, and the responsibilities of business leaders.”

Most pundits believe that Enterprise 2.0 is the full adoption of Web 2.0 by an organisation.  In the next few years, therefore, we will see:

  • Cloud technologies and better enterprise application security enabling bring-your-own-device, and with it the greater fragmentation of organisational information.
  • Greater transparency of organisational work through social media leaks (i.e. people advertising their work and mistakes on the internet)
  • The decomposition of many more business processes into micro-tasks (much of which can be outsourced or contracted out).
  • The improvement of distributed working practices enabled by better collaboration tools, devices and connectivity.
  • Increased pace of business through improved self-governance, empowered in turn by better oversight (from GRC and finance software to more pervasive CRM implementations).
  • Shorter time-to-market cycles driven by improved idea generation and organisational creativity (so-called ‘ideation’).

So, is Enterprise 2.0 the social enterprise?  Are the benefits of Enterprise 2.0 merely social?  Simply a more hectic work schedule enabled by greater ease of using mobile devices and tighter communities of practice?


A 2010 survey by McKinsey & Company found that most executives do believe that this is the sum total of Enterprise 2.0 benefits.  Most simply believe that (i) knowledge flow and management will improve.  Many believe that (ii)  their marketing channels will be greatly improved whilst only a few believe that (iii)  revenue or margins will increase in the networked enterprise.

If this is the dawn of the new enterprise then why do so many large businesses find it difficult even to implement Microsoft SharePoint?

The most likely truth is that this is not the dawn of Enterprise 2.0.  We are probably not on the cusp of a grand new age of information work.  Our businesses are unlikely to change significantly, although the hype will be re-sold by IT vendors for some time.  One only has to hearken back to the ’80s and the cries of the ‘paperless office’ to realise the low probability of Enterprise 2.0 materialising.

Whether it will be Enterprise 2.0 is debatable but we are entering the age of The Social Enterprise.  It has ushered in a new age of commercial culture but it will unlikely herald a paradigm shift in commercial structures.  The truth is that human networks and communities operate in parallel to corporate reality.  Networks are how humans interact; they are not how humans are paid.  Ask anyone who has ever been through or performed a cost reduction exercise.  In short, emerging social software platforms (ESSPs) are fun and sexy but they do not currently affect operations in most businesses.  Emerging social software platforms will make a difference internally when they affect cost structures and not just when they show up in sales figures.  This means that ESSPs need to be able to track and apportion innovation; they need to actively manage workflow (not just act as passive engines); they need to engage dynamically in governance and highlight good corporate participation and collaboration.  Only once these elements are incorporated into scales of remuneration and talent sourcing will both the enterprise and the workers benefit.

Maybe then we can move to Enterprise 3.0.

Will the HR Function Survive Enterprise 2.0?


What will HR look like in the future?  If recent articles online are to be believed then the HR function will be more powerful and more important than ever.  We do know the following:

  • The standard job market will become more fragmented as better information management allows much work as we know it to become commoditised.
  • In a world where executive education is highly specialised and more competitive, the acquisition of top talent will become more difficult as senior executives look for richer, more cross-functional, more multi-disciplinary skillsets in their stars.
  • Top execs will add complexity to HR through cross-functional skillsets (i.e. top talent will be less obvious because they will not necessarily rise in vertical portfolios; the best project manager may also be a registered psychologist, for instance).

What we do not know is just how the enterprise will react.  The onset of better database technologies and mobile methods of capture did not ease the information management problem in organisations (rather, it has just created mountains of unmanageable data).

  • So, will the  fragmentation of the labour market and the rise of cross-functional skill sets add too much complexity?  Will the HR market be able to cope?
  • Will the HR function reach an inflection point where complexity is too great and the entire function is outsourced completely? Or,
  • will better information management allow line managers to integrate talent sourcing directly into operational business processes?

If a recent (27 Sep 2012) survey by KPMG is an indicator then cost pressures mean that businesses will have to get smarter about HR if they are to remain competitive.  In boom times, even with 35% of respondents arguing for greater direct collaboration with operational management, it is unlikely that this volume of vociferous response would change the HR paradigm.  However, with cost pressures at almost unbearable levels and social media increasing the transparency and speed of operations, it is unlikely that the HR function as it stands today will survive.

The following outcomes are likely: (i) commoditised work will be consumed by line management into standard operations, and (ii) the sourcing of top, cross-functional talent will be outsourced to ever more high-end boutique HR consultancies.  Smaller HR firms will fall by the wayside but the high-end firms will demand higher margins and their clients will demand greater results.  It is possible that, much like IT services firms of the ’90s, boutique HR consultancies could take stakes in the realised profit that certain new-hires generate.

Whatever the answer, it will be global and there will be big money in it.

Setting Strategic Posture: Boards and technology


In their recent article on the low standard of the technological conversation in the boardroom (“On Technology, Boards Need to Get More Sophisticated”), Michael Bloch, Brad Brown and Johnson Sikes missed the goal but scored the point.

I posit that the paucity of good tech-talk in the boardroom does not require that Directors lift their technological game.  Do technical governance and commercial oversight need to improve?  Absolutely!  In fact, I believe that it is the CIOs and the CMOs who really need to improve their conversation.

In summ: Directors, through the cachet of their reputation and the gravitas of their experience, provide the confidence to institutional investors that their money is being well looked after.  Company officers (senior execs) provide the Directors with the facts necessary to deliver that corporate oversight.

To say, therefore, that Directors need tech lessons points clearly to two underlying facts: those companies either have the wrong Directors or the wrong execs, or both.  Either way, Directors are the wrong people to be educating.  In fact, pointing the finger at the Directors is a commercial abdication by the company.  Better information and decision-making frameworks are the keys, though this does include the focus on governance to which the authors allude.

James Quinn wrote a great article in HBR in May 1985 on the increasing sensitivity of the Board to technology issues.  He noted that despite ICT asset spend often accounting for in the vicinity of 50% of a company’s capital spend, there was still a lack of understanding by the Board of what part ICT actually played in these projects.  That was 27 years ago.

Lack of Board understanding equals lack of Board oversight.

Richard Nolan and F. Warren McFarlan wrote a terrific article published in HBR in October 2005.  Entitled “Information Technology and the Board of Directors”, the authors argued for differing approaches to oversight depending on the market.  They outline that companies either have a Defensive or an Offensive need for IT.  For instance, vertically integrated companies with a simple supply chain or businesses with operational computing (such as factories) have little need for strategic IT.  These firms operate IT in a Defensive mode which allows a more passive approach to IT investment oversight.

On the other hand, companies which place a strategic imperative on IT (such as banks and insurance brokers), or businesses which are betting on IT to turn the business around (such as the publishing industry), would see an immediate loss of significant revenue if the IT failed.  In these cases they operate IT in an Offensive mode, which requires greater scrutiny and oversight of IT investment programs.

The question remains: do Boards need to get more sophisticated on tech?  No:

  1. Boards need to set the strategic posture of ICT investment and oversight,
  2. Chief Marketing Officers need to be clear about the technology and user profiles of consumers of the company’s products,
  3. CIOs need to be transparent about the data needed to support decision making,
  4. CTOs need to be clear about strategically important technology and opaque about all the other boxes and wires, and
  5. Whoever is in charge of GRC needs to be clear about the integration of business and technical governance.

In the end, technology is but one topic as the Board combs through the technical enablers of critical investments.  What does not need to happen is for technology to become the focus of the conversation.

Technology is fun and interesting but it must not bedazzle the company.  Technology is usually a critical enabler but sometimes it can become a strategic instrument.  The Board needs enough information to choose which but it is up to the senior execs to provide the information necessary for effective oversight.

 

The Polyvalence of Knowledge: using financial ratios to inform system choice (Pt II)

For too long the language of CIOs and CFOs has not been confluent.  CFOs want hard returns in the form of discounted cash flows and better return on equity (ROE).  CIOs speak of ‘intangible’ benefits, better working practices and compliance.  Despite its poor reputation, however, information management does show up in the balance sheet.  Intuitively, it is easy to understand that if a business not only increases the speed and accuracy of decision making but also decreases the number of highly paid executives needed to do it, then the effect on the bottom line can be significant.

The first part of this blog looked at the structure of information management that shows up in the balance sheet.  This part looks at how to calculate some of those ratios.

Unfortunately, International Financial Reporting Standards (IFRS) are almost entirely geared towards the performance of capital as a measure of productive wealth.  As the information economy picks up speed, however, capital is neither the scarcest nor the most valuable resource a business can own.

The difficulty is in calculating the value of managerial decision making.  Without going into the detailed calculations of Return-on-Management (ROM), I have outlined below two new financial ratios which allow businesses to determine a financial indicator of both information effectiveness and technology performance.

  1. Information Effectiveness.  This ratio measures the effectiveness of corporate decision making at increasing the financial value of the business.  It is defined as the decision-making value minus the decision-making costs.  As described in a previous blog, the value of information is calculated through Information Value-Added (IVA): Information Value-Added = (Total Revenues + Intellectual Property + Intellectual Capital) – (operations value-added + capital value-added + all purchases of materials, energy and services).  This is to say that once all labour, expenses and capital (that is not part of an information system) are accounted for, the cost is subtracted from the total of gross revenues (plus IP).  In other words, it is the part of the profit which is not directly accounted for by operations or increased capital value.  It is profit which is attained purely through better managerial decision making.  This might be achieving better terms and conditions in the supply of a product or it might be in the reduction of insurance costs on a given contract.  The cost of information management is somewhat easier.  Ultimately, corporate decision making is accounted for as ‘overhead’ and therefore shows up in Sales, General & Administrative expenses (SG&A).  The sum total of managerial activity – which is ultimately what information management is for – can be accounted for within the SG&A ledger.  The more operational aspects of SG&A, such as R&D costs, should be removed first, however.
  2. Technology Performance.  This measures the ability of corporate information systems to increase company value.  Ultimately, it answers the question of whether a firm’s technology is creating value, as opposed to being simply value for money.  More specifically, how much value are the company’s systems adding?  This is shown as the total value added by information support (IVA) less the total cost of technology and management, as a percentage of shareholder equity.  Note that shareholder equity is chosen over short-term indicators such as EBT because many KM/management systems will take time to deliver longer-term equity, as opposed to short-term cash flow.  This metric assists in determining whether the cost of technology is worth the value it is creating.
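As a minimal sketch of how the two ratios fit together (every figure below is hypothetical, purely to make the arithmetic concrete):

```python
# Illustrative calculation of IVA and the two ratios described above;
# all inputs are made-up figures, in $m.

def information_value_added(total_revenues, intellectual_property,
                            intellectual_capital, operations_value_added,
                            capital_value_added, purchases):
    """IVA = (revenues + IP + intellectual capital)
             - (operations VA + capital VA + purchases)."""
    return ((total_revenues + intellectual_property + intellectual_capital)
            - (operations_value_added + capital_value_added + purchases))

iva = information_value_added(
    total_revenues=500, intellectual_property=40, intellectual_capital=25,
    operations_value_added=300, capital_value_added=90, purchases=120)

sg_and_a = 45            # SG&A less operational items such as R&D
tech_and_mgmt_cost = 60  # total cost of technology and management
shareholder_equity = 800

information_effectiveness = iva - sg_and_a  # decision value less its cost
technology_performance = (iva - tech_and_mgmt_cost) / shareholder_equity

print(f"IVA: {iva}m")                                               # 55m
print(f"Information effectiveness: {information_effectiveness}m")  # 10m
print(f"Technology performance: {technology_performance:.1%}")     # -0.6%
```

On these made-up numbers, decision making is net positive while the systems are eroding a sliver of equity value, which is exactly the distinction the two ratios are designed to expose.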

Financial ratios have benefits over other performance indicators.  For instance, there is a tendency within the corporate environment to benchmark costs against firms in a similar segment.  This is excellent where small and medium-sized enterprises have aggressive cost management programs.  However, in large companies with sprawling ICT environments, benchmarked costs become less relevant for cost comparison and more relevant for contract management.  The benefit of financially driven information management is that it allows a company to benchmark against itself.  By compiling quarterly or yearly indices, firms can benchmark their own information management performance.  More importantly, these non-standard financial ratios provide not only a means for the CIO and CFO to communicate using a common language but also the ability to refine the exact nature of the solutions.

In summ, financial ratios will not tell a business what brands to buy but they will help executives refine their choice.

The Architectural Enterprise: financially led capability development


There is one truism in the world of enterprise architecture, namely: do not focus on developing the architecture first.  In other words, enterprises should focus on developing capability, not architecture.  Focusing on architecture can only ever gild the lily.  To focus on architecture first is to focus on systems first and not value.  To focus on architecture first is to focus on structures rather than functions.

Enterprise architecture programs have received poor reviews in the past few years, and most struggle even to leave the comfortable boundaries of all-too-familiar systems rationalisation through data-model interoperability.

Architecture, however, should not be the focus.  The focus should be value and the means to achieve this should be through organisational capability.

This blog is Part I of how to develop a comprehensive enterprise architecture program within an organisation, namely: developing a capability portfolio.

STEP 1:  Value.

The best way to define value in an organisation or department is through variance analysis (so long as this is performed well and to at least 3 degrees of depth) in the relevant business unit.  In the Architectural Enterprise (the fictional enterprise built and run on EA guidelines) the specific variance would be referred to an Architectural Council to ensure that the variance was cross-referenced for dependencies and all the ‘noise’ was stripped away, i.e. personnel as opposed to role issues.  The architectural team can then focus on supporting the process, service delivery or value activities.

Alternatively, if the EA program needs to start more stealthily, then the ICT department may begin by cost-modelling the ICT budget.  The financial model needs to include 3-point estimates for all relevant costs.  Importantly, the higher bound is what the organisation does pay, the middle bound is what it should pay, and the lower bound (the 10% CI) is what it could pay.  This forces the team not only to prep the model with uncertainty (for later simulation) but also to make sure that realistic stretch targets are imposed for projected cost reductions.

Once the model is run through a deep sensitivity analysis the team can then strip out all non-capability cost drivers, such as internal transfers, tax liabilities, interest and depreciation etc.
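As a minimal sketch of such a model, the snippet below draws each cost line from a triangular distribution over its could/should/does-pay bounds; the line items and figures are hypothetical, and a real model would use a proper sensitivity analysis rather than the crude spread ranking shown here:

```python
# 3-point ICT cost model: (could_pay, should_pay, does_pay) per line, in $k.
import random

cost_lines = {
    "hosting":      (400, 550, 700),
    "app_support":  (900, 1100, 1400),
    "licences":     (300, 350, 500),
    "service_desk": (250, 300, 420),
}

def simulate_total(n=10_000):
    """Monte Carlo totals, each line drawn from triangular(low, high, mode)."""
    return [sum(random.triangular(lo, hi, mode)
                for lo, mode, hi in cost_lines.values())
            for _ in range(n)]

totals = simulate_total()
print(f"mean total: {sum(totals) / len(totals):,.0f}k")

# Crude sensitivity proxy: each line's bound spread relative to should-pay.
for name, (lo, mode, hi) in sorted(cost_lines.items(),
                                   key=lambda kv: -(kv[1][2] - kv[1][0]) / kv[1][1]):
    print(f"{name}: spread {(hi - lo) / mode:.0%} of should-pay")
```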

STEP 2:  Capability.

What is left are the most sensitive capability cost drivers.  The team now needs to make sure they focus on what is valuable and not just what is sensitive.  This is critical to ensure that the team doesn’t focus on low-impact, back-office, ICT-enabled capability but rather on high-impact, high-value operational capability.  The key is to ensure that an accurate value chain is used.

The best way to achieve an accurate value chain is to (a) develop a generic value chain based on industry baselines and then (b) refine it using the firm’s consolidated financial returns.  The team should work with a management accountant to allocate costs amongst the various primary and supporting functions.  This will highlight exactly where the firm (i) derives its primary value, and (ii) along which lines it is differentiated.

Once the team understands these financial value-drivers and competitive subtleties they can then calibrate the capability-cost drivers with the value chain.
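A minimal sketch of that calibration step (the functions, allocations and sensitivity figures are all illustrative assumptions):

```python
# Weight each sensitive cost driver by where the firm derives its value.
value_chain = {   # share of total cost allocated by the management accountant
    "inbound_logistics":  0.10,
    "operations":         0.35,
    "outbound_logistics": 0.08,
    "marketing_sales":    0.17,
    "service":            0.12,
    "support_functions":  0.18,
}

# Most sensitive capability cost drivers from the Step 1 model ($k swing).
sensitive_drivers = {"operations": 900, "service": 400, "support_functions": 700}

# A large swing in a low-value support function matters less than a smaller
# swing in a primary, differentiating activity.
priorities = {k: swing * value_chain[k] for k, swing in sensitive_drivers.items()}
for name, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: weighted priority {score:,.0f}")
```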

STEP 3:  Architecture.

Once the team has a short list of sensitive areas which are both architecturally weak and financially valuable they can then set about increasing the capability within defined parameters.

To do this, the team needs to create a parameterised architecture.  The architectural model has two facets: (a) it has capability components enabling it to encompass the dependencies of the area the team is focusing on, and (b) the capability components have attached values.

Determining values for the model is often difficult.  Not all components of capability have easily defined financial parameters.  What value does the team place on information?  On processes?  On services, or even on task functions in certain roles?  Although this will be the subject of another blog, the intangible aspects of capability must all affect the 100% of financial value.  For instance, a Role filled to 80% (due to shortfalls in the person filling the role) will not necessarily mean that the capability runs at 80%.  For a good set of capability coefficients the team can use the COSYSMO model elements.  These coefficients allow the team to see how the overall cost is varied by differences in organisational capability.
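As a minimal sketch of the idea, with made-up coefficients standing in for the COSYSMO-derived ones (treat every number as an assumption):

```python
# A parameterised capability component: cost varies with how completely
# the capability is actually filled.
from dataclasses import dataclass

@dataclass
class CapabilityComponent:
    name: str
    base_cost: float    # should-pay cost in $k
    fill_level: float   # e.g. a Role filled to 0.8
    coefficient: float  # how strongly under-capability inflates cost

    def effective_cost(self) -> float:
        # A shortfall does not hit cost linearly; the coefficient models
        # how sharply a gap in capability drives cost upwards.
        return self.base_cost * (1 + self.coefficient * (1 - self.fill_level))

portfolio = [
    CapabilityComponent("order_processing", 1200, 0.80, 1.5),
    CapabilityComponent("claims_handling",   900, 0.95, 0.6),
]

for c in portfolio:
    print(f"{c.name}: {c.effective_cost():,.0f}k")  # 1,560k and 927k
```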

Once the architectural model is built, the team can adjust the parameters to achieve the cost they could deliver whilst making sure that they are also increasing overall capability, i.e. process output, software integration, role utilisation etc.

Whereas standard cost reduction programs reduce cost without accounting for the wider capability sensitivities, this methodology models a capability financially whilst remaining cognisant of how intangible aspects support its overall enterprise value.

IN SUMM

Through this method the team is not only able to identify the areas requiring attention but is also able to ensure that costs are reduced without compromising overall business capability.  Moreover, the team will be able to do this whilst engaging with both the technical and the financial teams.

Most importantly, by focusing on capability and not architecture, the organisation can home in not just on the hot-spots – because there will be too many to count – but on the valuable hot-spots.

How to do “Anti-Law” (What is “Anti-Law” – Pt 2)

THE BIG IDEA

Designing legal clauses should be an extension of the architectural process.  About 75-85% of the contract should then flow by design directly from the business and technical architectures and the taxonomy (terms).  The rest will be terms of art (contract construction, jurisdiction etc).

WE ALREADY DO IT

The first thing people say to me when I set out this proposition is that it’s obvious and sensible.  So, why isn’t it done already?  Firstly, to a certain extent it is, we just don’t know it: most commercial and technical managers already understand a great deal of the law surrounding their areas and simply design around it.  Secondly, it’s not intuitive – it’s not the logical next step when we’re doing the work.  Commercial architecture – the cost models, program plans etc which are just the attributes of the other architectures – has no spatio-temporal extent.  We can’t see, feel or touch it like we can the stuff we’re engineering, and therefore our understanding is governed by how we construct these concepts and thoughts in our minds (Wittgenstein).

THE COUNTERINTUITIVE BIT

The counterintuitive bit is that many commercial entities are actually separate things: they are relationships to an entity.  In ontological terms they are ‘tuples’, instantiated on their own, in the same way that a packet of cornflakes has a cost and the manufacturer then puts a price on it.

The price is a different thing but related to the cost.  We know that the price, in turn, is related to the manufacturer’s profit margin, which in turn is related to their overheads, which in turn are made up of R&D costs, admin, advertising etc.  All these elements are part of the business architecture which influences the commercial (financial and legal) architecture of the given project.

WHAT RELATIONSHIPS

Risk is the key relationship we are modelling.  It is best to see risk as a relationship rather than a property.  Widget A does not have risk in and of itself.  Widget A does have risk when used in a certain machine, in a certain contract.  This is important to note because in later whitepapers I will talk about the need to separate the logical and physical commercial architectures in order to make trade-offs and impact assessments.

ARCHITECTURAL RISKS

Risk links to architecture through a series of relationships.  Ultimately, if we look at components in a database then the relationships I would draw would be:

REQUIREMENT > ELEMENT > CONTEXT > RISK > MITIGATION > MITIGATION INSTANTIATION > CONTRACT CLAUSE.

In the above, the ‘context’ is the domain architectural instantiation (the physical architecture) we are modelling, for instance Architecture A or 1 etc.  This allows us to compare risk profiles across different possibilities and then make trade-offs.  The difference between the ‘mitigation’ and the ‘mitigation instantiation’ is merely logical v physical.  Logically we might wish to insure against a risk, but when it comes down to it physically we are insuring part, funding part with a bond issue and hedging against the rest.  The point is to identify clearly in the model what is mitigated and what is not.  In a large and complex model of thousands of elements we will want to make sure that all mitigations are accounted for and there are no overlaps.  If we’ve overlapped mitigations (insurance, loans etc) on a number of parts then the cost could be astronomical.  The calculations will be clearer in later whitepapers when I explain how to aggregate derived risk.
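As a minimal sketch of that chain held as plain data (the class names, fields and example values are all illustrative assumptions, not a fixed schema):

```python
# Risk modelled as a relation between an element and a context, not as a
# property of the element itself.
from dataclasses import dataclass

@dataclass
class Mitigation:
    logical: str          # what we wish to do, e.g. "insure against outage"
    instantiation: str    # what we physically do, e.g. "part insured, part hedged"
    contract_clause: str  # the precise clause crafted around it

@dataclass
class RiskRelation:
    requirement: str
    element: str          # the architectural component
    context: str          # the physical architecture, e.g. "Architecture A"
    risk: str
    mitigation: Mitigation

model = [
    RiskRelation(
        requirement="99.9% availability",
        element="payment gateway",
        context="Architecture A",
        risk="single-supplier outage",
        mitigation=Mitigation("insure against outage losses",
                              "part insured, part dual-sourced",
                              "Clause 14.2: supplier outage remedies")),
]

# Because risk is a relation, the same element can carry different risk
# profiles in Architecture A versus B, enabling trade-offs; iterating the
# model also checks that every mitigation lands in exactly one clause.
for r in model:
    print(r.context, r.element, "->", r.risk, "->", r.mitigation.contract_clause)
```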

PHYSICAL MITIGATIONS

The physical mitigations are the most important things to agree on.  These are the processes and the structures that the business must agree on.  This will be a collaborative effort across legal, financial, technical and programmatic functions.  They must be realistic, sensible and accounted for in the program plan.  Once clearly set out, a precise and specific contract clause (not a mopping-up clause) can be tightly crafted around each and every one.  These clauses must have triggers built into the program plan, and each trigger must have adequate information derivation for decision making (i.e., you need to know when the clause might need to be used).

The trick in all of this is not deciding what to do but identifying the risks in the first place; that, however, is an examination for another time.

HOW TO MITIGATE

Repackaging risk to mitigate effectively will be the topic of another blog.

What is ‘architecty’ about IT architecture?

While the building and construction industry may rail against the self-proclamation of architect status by IT workers, one wonders to whom the greater disservice is being done.  Although a ‘system’ may not manifest the obvious beauty at which we turn and marvel in the spectacle of its design, the question should be asked: cannot a system have elegance too?

What is ‘architecty’ about architecture?

To me architecture has always encompassed the synergy of design and form.  It is the gentle blending of the functional with the artistic.  An objet d’art yet a liveable space with true Vitruvian utility.  An expression of values and placement in the world yet something obviously sensible and practical.  Entirely down to earth yet striving to connect with the heavens.  Think of the pyramids, the Taj Mahal or the Paris Opera.  Can we compare IT to these?  Could we ever design an IT system that could convey the sense of awe and achievement that great buildings and spaces inspire?

Possibly.

First we must broaden our perspective of ‘system’.  I always think of a system as a collection of parts functioning in unison for a common purpose:  the body, the metro, a biometric system at an airport or even a bee colony.  A system has spatio-temporal extent.  It is not conceptual and we can and do interact with it – purposefully or not.  It will always be hard to describe a network of underfloor cables and servers as beautiful.  But what about a well designed software application?  Where code has been masterfully crafted together into a contained system which delivers meaning to our lives and allows us greater utility to interact with humanity and our environment, can that be architecture?  What of the larger system?  Think of Ewan McGregor in “The Island”.  Possibly not the best example but a highly complex yet (almost) perfect blending of the utility of the system with the elegance of its structure.

What is elegant about systems?

Edward de Bono describes a good joke as one that is completely logical and obvious only in hindsight.  I think an elegant system is the same: almost impossible to design logically and progressively; rather, it requires some divergent and parallel thinking to arrive at the seemingly obvious answer.  An architected system, therefore, should be a pleasure to use: inspiring and yet unobtrusive, functional, and providing us a clear means to our ultimate goal.  The Venetian transport system, Facebook even?

What do IT people do that is creative and beautiful?

IT allows us to interact with our environment in a way which heightens not only the end experience but the overall journey.  Software designers can create incredible applications which are not only functionally rich but also a delight to interact with.  We can all think of a system that encompasses these things, but there is one thing that architecture isn’t, and that is haphazard.  ‘Architecting’ is design for the purpose of a single entity.  Whether at the macro or micro level, architecting produces a single ‘architecture’.  It has unity, singular identity and purpose.  Think of the Parisian skyline.  One thing is for certain: architecting haphazardly is as bad as not architecting at all.

A new word for IT architects?

Even so, do IT architects ‘architect’?  Usually not.  I would say that software designers often come the closest.  This is not to say that IT is just Lego, positioning pre-fabricated electronic building blocks to achieve a slightly different look.  By and large IT is more about technical engineering (capacity, throughput, interface, latency, queuing) than it is about designing into the human experience, but this does not mean that it has to be, nor is it always the case.

The Cost of Information

In its simplest form the cost of information is the aggregate of all information systems (less depreciation), management salary costs and management expenses.  This is roughly equal to the net cost of information in a company.  Unremarkably, the number is huge.
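A back-of-envelope sketch of that aggregate, with entirely hypothetical figures:

```python
# Net cost of information, illustrative figures in $m.
info_systems  = 120  # all information systems, at book value
depreciation  = 30
mgmt_salaries = 250  # management salary costs
mgmt_expenses = 80   # management expenses

cost_of_information = (info_systems - depreciation) + mgmt_salaries + mgmt_expenses
print(cost_of_information)  # 420: unremarkably, the number is huge
```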

INFORMATION IS EXPENSIVE

Information is expensive.  It is not merely an amorphous mass of noughts and ones floating about in the commercial ether.  Indeed, that might be the nature of information, but costing it is different.  Information is the lifeblood of an enterprise.  In fact, information is the lifeblood of management.  More specifically, the role of management (those not directly involved in the manufacture of the company’s core products or services) is to optimise corporate output by using information to make the best decisions on allocations of capital or resources.  To achieve this, management requires information for decision making, and this information comes at a cost: the cost of capital and the cost of resources.  The capital costs cover systems such as information systems (or machines with at least a diagnostic function) and services to provide the information, including the overhead costs of shared peripherals (printers, routers etc) and organisational costs such as support staff and training.  In addition to capital costs are the resource costs, the people (knowledge workers) needed to process the information.

MANAGE THE LABOUR NOT THE CAPITAL

Direct and indirect labour costs are always the most expensive aspect of the cost of information.  The cost of computers has dropped significantly in the last 40 years and yet the cost of labour has risen exponentially.  In accounting terms, per-unit costs of computing are down but per capita costs are up.  Consequently, with every reduction in the cost of workstations there is generally a corresponding rise in the use of services.  A single computer workstation is cheaper but the cost of running it is much higher.  The cost of labour, therefore, will always exceed the cost of capital, and so the organisational costs of information will always be the highest.

TRANSACTION COSTS

Significantly, organisational costs increase the further away from the customer the information is pushed.  Just look at any back office and consider that all that activity is triggered by single customer requests.  Information is eventually decomposed and pushed off into various departments.  In this way, the volume of information increases the further it moves from the customer.  Processing systems and resources proliferate in order to deal with this throughput.  As transactions increase, so too do set-up costs, further systems (network devices, security etc) as well as additional management for governance and oversight.

Transactions cost money.  The higher the number of transactions, the higher the setup costs between unit workers, the higher the management overhead for oversight, governance and workflow and, more pertinently, the lower the overall productivity.  Just think: each time a manager turns their computer on (setup costs) to check a spreadsheet or a project plan, or to approve a budget expense or purchase, they increase the costs for the business and slow down the eventual output.

Information is produced through the direct application of labour to capital (people using machines).  Information is a cost that must be apportioned and controlled like any other capital cost.  The high costs of information should not mean that information should be undervalued.  On the contrary, the low capital costs mean that capital purchases are always attractive so long as a business can control the associated labour rates.  After all, information may be expensive but properly developed and properly used it is also extremely valuable.

THE MORAL

The moral of the story is that organisations should reduce the cost of people and not the cost of machines.  The market is a good arbiter of the cost of information systems: they are cheap, and haggling over price in the current market is wasted effort.  Try to reduce overall labour costs in departments but, more importantly, reduce transactional volume by increasing organisational learning.  In this way, single workers will be able to perform more types of transactions and therefore increase process efficiency.  Per capita costs may rise but units of work will also increase.  In the end, the idea is to have fewer but more expensive workers who will ultimately get more done in less time.