The core problem in modelling anything, especially project performance, is not understanding what the answer should look like; that gap leads to naive and uninformed decisions. Know what you’re looking for before you start looking, and you’ll have a higher probability of recognising it when you see it.
A senior executive at a financial institution was recently reminiscing about the implementation of probabilistic modelling in his organisation. Monte Carlo analysis, he said, was adopted enthusiastically at first, but the enthusiasm soon died down.
He explained that they had a number of investments and projects across a range of portfolios. Each one showed a 100% chance of success when its financial model was simulated. Limited attention was paid to these projects as they ran their course, and each one suffered catastrophic losses. How could probabilistic modelling fail them so badly?
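The failure mode described here is easy to reproduce. A minimal sketch, with hypothetical cost and revenue figures of my own invention: when the input ranges fed to a Monte Carlo simulation are unrealistically narrow, every trial succeeds and the model reports a 100% chance of success, however risky the project actually is.

```python
import random

def simulate(n_trials, cost_low, cost_high, revenue_low, revenue_high):
    """Monte Carlo over uniform cost/revenue ranges; returns P(profit > 0)."""
    successes = 0
    for _ in range(n_trials):
        cost = random.uniform(cost_low, cost_high)
        revenue = random.uniform(revenue_low, revenue_high)
        if revenue - cost > 0:
            successes += 1
    return successes / n_trials

random.seed(1)
# Overconfident inputs: ranges so narrow the project "cannot" lose,
# so the simulation dutifully reports a 100% chance of success.
print(simulate(10_000, 95, 105, 110, 120))  # prints 1.0
# Realistic ranges that admit cost overruns and revenue shortfalls
# reveal a substantial probability of loss.
print(simulate(10_000, 80, 160, 90, 140))
```

The simulation itself is working correctly in both cases; it is the inputs that determine whether the answer means anything.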
Any mildly experienced manager at a financial institution would have had a gut feeling for the standard risks. Senior managers would have known, roughly, the risks and opportunities on each project, within a range.
The key to decent probabilistic modelling is to infuse an element of Bayesian analysis, i.e. build what you do know into the model of what you don’t. This will not only improve the granularity of the model itself (and therefore the usefulness of the simulation) but also limit the uncertainty of the critical ranges themselves.