Excellent article, posted to the discussion forum of the Fly NextGen LinkedIn group:
Today’s economic turmoil, it seems, is an implicit indictment of the arcane field of financial engineering — a blend of mathematics, statistics and computing. Its practitioners devised not only the exotic, mortgage-backed securities that proved so troublesome, but also the mathematical models of risk that suggested these securities were safe.
The models, according to finance experts and economists, failed to keep pace with the explosive growth in complex securities, the resulting intricate web of risk and the dimensions of the danger.
But the larger failure, they say, was human — in how the risk models were applied, understood and managed. Some respected quantitative finance analysts, or quants, as financial engineers are known, had begun pointing to warning signs years ago. But while markets were booming, the incentives on Wall Street were to keep chasing profits by trading more and more sophisticated securities, piling on more debt and making larger and larger bets.
“Innovation can be a dangerous game,” said Andrew W. Lo, an economist and professor of finance at the Sloan School of Management of the Massachusetts Institute of Technology. “The technology got ahead of our ability to use it in responsible ways.”
Bob Pearce commented:
While this article focuses on how risk modeling contributed to the failure of the financial system, it offers an interesting perspective on the pitfalls of modeling risk in very complex, human-centric systems such as NextGen.
This quote jumped out at me: “Better modeling, more wisely applied, would have helped, Mr. Lindsey said, but so would have common sense in senior management.” — My question: What would a mechanism look like that encourages systematic, serious-minded sanity checks in a complex organizational context?