Posted by: Jack Santos
Congress has met. The crisis continues.
But what about the next one?
You can spend time and energy avoiding a crash, or fixing the cause to avoid one in the future. You can't do both. Congress is focusing on the former. But what about the latter?
At Burton Group's Catalyst event, Nick Leeson told us that the Barings Bank failure in the 1990s was due mainly to poor employee oversight. Managers who didn't understand the work. Organizations that valued short-term profits at any cost.
I think the issue is more pernicious than that, and strikes at the heart of our troubles today. We captured it in an email exchange between me and Chris Howard, that started with the following question posed to us:
Question: The magnitude of the current crisis makes it clear that there is significant room for improvement in current credit-assessment approaches. What more is needed, from a technology or human perspective, to capture the full range of risk factors in the market? What impact is the crisis having on practices around the application of existing technology, integration efforts, etc.?
My thesis is that IT was quite instrumental, though certainly not solely responsible, for the financial crisis:
Failures in employee and investment oversight can be traced to inept use and deployment of business intelligence, asset management, and portfolio valuation systems; a lack of a good handle on the raw information the organization is managing, and on who has access to it.
Ultimately the root cause is organizational immaturity in the governance of computer system implementation and usage, and an inability to deal with information overload (email, data) effectively.
Business decisions made as part of the enterprise's use of computer systems have caused these problems - and it makes for an interesting conversation about how to address them.
The complexity of financial instruments is the most compelling reason for widespread failure, and that is a combination of crafty humans and technology. The BI angle comes in when reporting is insufficient to notice problems within those complex instruments.
The subprime thing was exacerbated by continual selling and redistillation of mortgages into more and more complex packages, making it impossible to track/know the extent of risk and exposure.
So, I think that IT has a role in the failure, but was not directly responsible. IT simply facilitated and unwittingly obfuscated questionable practices.
As I said in my talk in London last week, this is all very Enron-esque.
Per Nick Leeson's comment, I think employee oversight (performance management systems, MBO and bonus determination approaches, organizational flattening through expanded automation, workforce geographic dispersion through expanded automation) practically predetermined the kind of behavior that he spoke about.
As for complexity of instruments, I would pose a similar argument. More compute power, more software made it possible. Rather than critically think about how we were doing it, or institute the peer/management review of key decisions in the algorithms, we just did it. Blame the software ease of use and the pressure for results -- faster faster faster.
Ultimately it’s a management issue. But don't think that management's view of IT as a black art, and fear of technology, didn’t come into play.
Of course IT was not directly responsible, but IT made it possible. And better IT/Business dialogue might have prevented it.
As for Business Intelligence - BI is not only used after the fact, but also before the fact, in designing new financial instruments and assessing that design's impact. Obviously, not well enough. And it wasn't a tool issue, but a critical-thinking issue on the part of the designers.
In a perfect world where we could capture and track the risks, and make decisions based on our assumptions and accurate data, distillation of financial instruments is a great way to spread risk and create a broader market. The concept could work. The implementation, through the software that IT and the business created, was not thought through and basic design decisions were just plain wrong.
This is a case study on misuse of technology!
IT created fertile ground for this to happen, although IT did not provide the thought-leadership around the creation of the products themselves. Nor did they question the appropriateness of the complexity. I totally agree that it was a lack of oversight, coupled with a lack of insight into the dark side of technology and its usage possibilities.
The opportunity for meltdown occurs when IT just does what business says without an understanding of the risk involved or an effective dialogue on implementation options. But even if IT had objected (which is a stretch), no one would have paid attention anyway.
NO NO NO! - it's not a case of IT failing to suggest that a plug be pulled. It's more of a design issue: how we design and use systems.
So let's focus on where you are going. (BTW - the personnel argument that Leeson addressed leads into rights management and provisioning, and has significant implications for how to manage the newer workers in the workforce - the millennials - who are used to being entrepreneurial with IT tools.)
It's not a black-and-white case of "we want a system to manage Collateralized Debt Obligations (CDOs)" versus "IT won't let us do it because CDOs are too complex". It's more that IT builds the system as asked, but key design and usage decisions are not elevated to business decisions, and stay at lower operational levels. For example (not that this happened in this case), a programmer or business analyst just picks a parameter without effectively raising the issue, so more eyes never get a chance to review the assumption.
Unfortunately, our financial meltdown is not as simple as one or a few parameter changes (an amusing blog post on the causes is here). But with system processing capacity ever increasing, our software is handling more and more complexity, including more and more inherent assumptions. The real question is whether the design decisions are being elevated, and whether our blind faith in technology masks those decisions (I argue the latter).
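The "buried parameter" failure mode is easy to sketch in code. The following is a hypothetical illustration, not code from any real trading system; the function name, the toy formula, and the 0.3 correlation figure are all invented for the example:

```python
# Hypothetical sketch of a buried design assumption: the hard-coded
# default correlation below is exactly the kind of parameter that should
# be escalated to a business decision, not left inside one function.

def expected_tranche_loss(notional, default_prob, recovery_rate):
    """Toy expected-loss figure for a single tranche of a structured product."""
    ASSUMED_CORRELATION = 0.3  # picked by one analyst, never reviewed
    loss_given_default = 1.0 - recovery_rate
    # Naive scaling: a low hard-coded correlation quietly understates tail risk.
    return notional * default_prob * loss_given_default * (1.0 + ASSUMED_CORRELATION)

print(expected_tranche_loss(1_000_000, 0.05, 0.40))
```

Nothing in the function's signature or documentation reveals the correlation assumption; that is the point. A reviewer reading only the call site would never know the question had been decided.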
It is the job of the IT manager to make sure that appropriate design decisions are getting escalated, or that there is a methodology and process in place that enables business decision exposure. This is at the crux of IT governance.
Instead, I would argue, the pressure to deliver glosses over the issue; it is a strong force that enables this behavior and leads to a crisis. In fact, with financial software, a crisis like this is just one way that kind of behavior and early-stage decision-making manifests itself. Juxtapose this with software like Windows, where the normal result of a bad design decision is a blue screen. In some cases it is even more insidious - like when calculations were wrong because of a mis-coded floating-point algorithm in an Intel chip, or when a spacecraft got lost because of a bad assumption about metric versus English measurements.
The counter argument is that the processes, methodologies, and conceptual (and detailed) understanding were in place, and key design/decision points had enough review and oversight - and the crisis was a result of conscious decisions made by senior managers (and this may still prove to be the case).
But financial organizations' grounds for defense ("it's way too complex to understand") say to me that the inability to explain the complexity falls at the feet of the designers (business and IT). "Too complex" is just a smokescreen - it's not that we can't understand it (I have enormous faith in the brain to comprehend this stuff); we just didn't have the appropriate review points, discussions, warnings, etc. to give us time to understand it and mitigate it.
Now, a side observation.
A crash like this will slow down the economy, and the development (read: innovation) of new products (financial and otherwise). Because of a slower, or contracting, economy the side effect will be more caution, more review, slower development, more time taken with design decisions. So rather than trying to figure out why the engine broke down at 10,000 RPM and fixing that, we rev back down to 100 and just go slower - taking longer to innovate and getting better at it until we can exceed 10,000 RPM again sometime in the future. Which is probably the right answer to "how to react?" But I submit that understanding the reasons for the breakdown, and addressing them, will get us to a better place sooner (i.e., the ability to run at 10,000+ RPM).
It does come down to design, and the *unchecked drivers* (just do it, don't question the ethics/risk of it) for that design. The complexity of the financial instruments has more to do with conceptual product construction and how products are bought/sold/decomposed/reconstituted/traded. Certainly IT makes those functions easier to perform at large scale and high speed.
The "faith in technology" issue needs some modification, though. If we design to contract, and create interfaces with our systems that support that contract, then the way that contract is implemented in the technology is moot (as long as the interface returns the correct information). The design issue you raise has to do with those contracts. Now, the design of those contracts should be the output of meetings and collaborative design. That's a feature of an Agile SDLC, which most large organizations are *not* practicing. So whether the IT organization is pushing back on meetings or simply hiding behind a calcified pre-iterative SDLC, the result will be the same.
I get the feeling that excitement ruled at these companies, and that sometimes IT worked hand-in-hand with Biz on breaking new, risky ground. Not unlike tweaking search algorithms at Google, but with results that were much more volatile.
Re: "excitement ruled". That may be true. But isn't that a good thing? It means we are innovating, creating new products, breaking new ground. The issue is mitigating the risk of that. And when designing new financial products (like anything else) the risk gets mitigated in a variety of ways - more review, breaking the problem down into pieces, dry runs, regression tests, performance tests, etc. The types of things that we in IT are getting better at doing, but still, clearly, not well enough when it comes to understanding business impact. And I use "we in IT" as inclusive of business designers and participants.
Yes, it is a good thing when governed correctly. It goes to your 10K RPM analogy. Governance should not snuff it out, but should allow it to perform with optimal efficiency and *safety*. That safety extends to the company as a whole. It is wrong to over-react and snap to the opposite polarity just because we're afraid.
And that’s what is happening now, isn’t it? We just came close to shutting down Wall Street and basically lost the independent investment bank model instituted (for good or bad - which is a whole other discussion) by the Glass-Steagall act.
And then this comes in my inbox:
Hmmm...isn't this the definition of "knee jerk"? The answer is NOT to spend more money on risk management software (although that MAY help). The answer is better design decisions. We need to take ownership of our decisions and approaches, rather than blame the tools.
The issue is that overemphasis on technology is a dangerous myopia. You miss the train coming at you.
Which is how we got here in the first place.....