Models & Arches
We have discussed our concerns about the limits of models on several different occasions, but this recent post at The Psy-Fi Blog wraps the weaknesses of models within an idea that we obviously embrace: the Romans’ approach of over-engineering arches.
If you can model the load that a bridge can take, then you can design it optimally to ensure that money isn’t wasted on unnecessary materials and labor. On the other hand, the tendency in such situations is to design for exactly the situation you’ve specified: four-axle trucks, say. So what happens when six-axle trucks come along?
The Romans had a different way of dealing with the problem. They had their bridge architects stand underneath the structure when the supports were removed. They figured that this would concentrate the minds of their chief modellers. And of course it did, but at the cost of over-specification: structures built before the days of computer-aided modelling were usually far better built, and far more expensive, than they needed to be.
The benefit of this over-specification is that these structures are able to deal with the unexpected, not just the precise circumstances for which they were designed. Six-axle trucks would be no problem; likely sixteen-axle trucks would be too. There’s a 2,000-year-old Roman aqueduct in use in Spain today.
* * * * *
The problem with our financial models is that the people using them — the truck drivers — don’t understand the weight limits, and the people who designed the models don’t do enough to make them understand. Often the view seems to be that if the bridge hasn’t failed yet, it must be OK. Unfortunately, if you build a bridge that’s an efficient short-cut, you may find yourself dealing with far more traffic than it was ever designed for; and if you don’t understand the risks, you may find that it collapses when you least expect it.
The key point is that the people overseeing the use of mathematical models need to understand the risks. One of those risks is that if a model becomes popular, the temptation to use it — and abuse it — grows ever greater. The other is that designing a system to depend entirely on a single point of failure is plainly stupid: models should only ever form one part of a risk management culture.