Logical architecture is valuable in the design of large systems for two key reasons: (i) it helps developers instantiate the softer concepts and more social aspects of large systems, and (ii) it provides another review gate to iron out design flaws before proceeding to the physical system. Military systems provide good examples of its value. Many Defence systems are so complex that they are never developed at all; those that are developed are often broken down into components so small that integration becomes unmanageable. Joint Effects Targeting, counter-IED exploitation, systems that fuse operational and intelligence information, and the nuclear firing chain are all areas with enormous social input, so the development of a logical architecture is paramount.
Unless a person has a pedigree in military systems, logical architecture is usually the least understood and least used part of the design process. Certainly in Agile environments, or in any area of rapid application development where the application is fixed (portals, billing systems, SAP, etc.), logical architecture design is nugatory.
In this blog we look at logical process design, but the method is equally applicable to the entire logical design phase.
BENEFITS OF LOGICAL ARCHITECTURE
When designing processes, however, logical architecture is an invaluable tool for measuring, assessing and comparing risk before moving to the more expensive technical design and implementation phases. Because logical designs can be created, compared and assessed quickly, they become an excellent technical and commercial appraisal tool. Cross-functional teams of executives and architects can collaborate on logical designs before a GO/NO-GO investment decision, thereby creating three major benefits:
- Reduce the time of the physical design cycle.
- Increase executive involvement and the effect of executive steering on designs.
- Significantly reduce the risk in physical designs.
There is a way of viewing, and thereby measuring, risk in logical processes. Ultimately, the nominal value of a process is its total cost weighted by its probability of success. So a process which has a total cost of $100,000 and a 60% chance of success has a nominal value (not “worth” or “price”) of $60,000. Which is to say that, on average, the business will realise only 60% of the process's value; this is roughly the same as saying that, for each $100k in earnings, the firm will spend $40k on faults. Whether the value indicator is dollars or white elephants does not matter, so long as it is applied consistently across the choices. This simple measuring mechanism allows senior executives to engage in the design process and forces architects to help assign costs to difficult design components.
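The arithmetic above can be sketched in a few lines. This is a minimal illustration of the nominal-value calculation, using the $100,000 / 60% figures from the worked example; the function name is ours, not a standard one:

```python
def nominal_value(total_cost: float, p_success: float) -> float:
    """Nominal value of a process: its total cost weighted by its
    probability of success (an expected value, not a market price)."""
    return total_cost * p_success

# The worked example: $100,000 cost with a 60% chance of success.
print(nominal_value(100_000, 0.60))  # 60000.0
```

The same function works whatever the value indicator is, provided it is applied consistently across the options being compared.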
COSTS & CONCEPTS
The difficulty is in ascribing costs to concepts. In order to do this the team must first instantiate the concepts in some form of logical structure, such as a software system or a management committee/team. The team then ascribes an industry benchmark cost to this structure, accounting for uncertainty. Uncertainty is important because the benchmark cost will not represent the actual cost exactly (in fact the benchmark cost should represent the 50% confidence-interval cost). So, when it comes to determining the probability, it is vital to use the experts to establish what the construct could cost (as little as, and as much as).
The difficulty with measuring logical architectures is in measuring concepts. Concepts usually have no intrinsic value and no standard means of comparison. In short: (i) assemble a small, cross-functional team of experts, (ii) ascribe costs (with uncertainty) to the concepts and apply a risk equation, and then (iii) simulate. One possible equation, in which the adverse factors sit in the numerator and the mitigating factors in the denominator, is:

R = 100 × (P × Ct × T) / (Cy × Sl)

where:
- R is the overall risk.
- P is the probability of an adverse event occurring in the process.
- Ct is the criticality of the location of the event, in the process.
- T is the likely time it will take to notice the manifestation of the risk (i.e. feedback mechanisms).
- Cy is the availability of a contingency plan which is both close and effective to the point of the problem, in the process, and
- Sl is the likelihood of success that the process will be fixed and achieve an acceptable outcome.
- 100 simply makes it easier for the team to see differences between scores.
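The "simulate" step above can be sketched as a small Monte Carlo run. This assumes one plausible form of the risk equation, R = 100 × (P × Ct × T) / (Cy × Sl), with the adverse factors in the numerator and the mitigating factors in the denominator; all input ranges below are hypothetical expert estimates:

```python
import random

def risk_score(p, ct, t, cy, sl):
    # Adverse factors (probability, criticality, time-to-notice) raise risk;
    # mitigating factors (contingency, fix likelihood) reduce it.
    # The factor of 100 just spreads the scores for easier comparison.
    return 100 * (p * ct * t) / (cy * sl)

random.seed(1)  # reproducible sketch

scores = []
for _ in range(10_000):
    p  = random.uniform(0.1, 0.4)   # P: probability of an adverse event
    ct = random.uniform(0.2, 0.9)   # Ct: criticality of the event's location
    t  = random.uniform(0.1, 0.8)   # T: time to notice the manifestation
    cy = random.uniform(0.3, 1.0)   # Cy: availability of a contingency plan
    sl = random.uniform(0.5, 1.0)   # Sl: likelihood of a successful fix
    scores.append(risk_score(p, ct, t, cy, sl))

scores.sort()
median_risk = scores[len(scores) // 2]
```

Running the same simulation for each candidate logical design gives the team a distribution of risk scores per option, which is a far better basis for a GO/NO-GO comparison than a single point estimate.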
With this equation we determine the overall risk of the process. It does not have to be perfect; it just needs to be applied consistently and to account for the major variables. If applied rigorously and evenly, measuring risk in logical architectures can reduce the design cycle, increase the certainty of the choice, build better stakeholder buy-in and significantly reduce the risk in the physical solution.