IT people are always being encouraged to become knowledgeable about the world of business. This is something I have been preaching for over a decade! Most IT people work in some sort of business, and there's no denying that an understanding of how commerce works helps them do their jobs better, whatever the industry. But what about the flow of knowledge in the other direction, from IT to business? Is there anything in IT that is even applicable to business? To the layperson, IT usually seems to be a world of confusing acronyms, incomprehensible jargon, and thoroughly uninteresting technical detail. But that's because most business people have only experienced the sharp end of IT (real, physical contact with machines and software) and not the ideas and concepts that underpin all that technology and the way it works.
These ideas and concepts have, on the whole, very little to do with beige-colored boxes, spaghetti wiring, and flashing lights. They are concerned, instead, with understanding the problems those boxes and wires are marshaled to solve. These problems are not just technical problems, but also business problems: how to manage information, how to structure and organize activity, how to improve efficiency and productivity, and so on and so forth. The use of computers as part of the solution to these problems is nothing more than a modern convenience. Pen and paper would do just as well in terms of functionality, but these days such a solution would mean a totally unacceptable hit to performance. Even so, there’s little point in deploying technology to solve a problem before you understand the nature and extent of the problem, which “solutions” work and which don’t, and how both problem and solution relate to the organization as a whole.
You are probably wondering where this is coming from. Well, I am currently engaged with a very large customer, and when I went back through my notes from the very beginning of the engagement, I realized that this was going to be a massive challenge. With the number of contractors and consultants in place, the complexity of the global organization, and the ITO provider trying to shoehorn a global infrastructure into cookie-cutter solutions, the project was going to end in disaster if it wasn't handled with extreme care. This engagement definitely fits the definition of a strategic IT project, and the trouble is that most projects like this end in failure. So I wanted to share some thoughts, ideas, and input on how not to royally screw up such strategic projects.
Let’s return for a moment to the subject of non-obvious obviousness. Knowledge like that outlined above is blindingly obvious once you understand it, but completely opaque and obscure until you do. Mathematics is a good example of what I’m talking about: once you understand it, it is easy; until you do, it seems impossible. Going from a state of ignorance to a state of knowledge like this can be a tricky process; it requires something akin to a flash of insight. Grasping a whole subject may take many separate flashes of insight, in the right order.
Nevertheless, people do learn mathematics to advanced levels, so the process must be possible. Still, I wondered why I was getting odd looks when I discussed the concepts and ideas around this subject: they were new, and they seemed nonintuitive, even perverse, to the uninitiated. I begged them to stick with me: endeavor to persevere, so to speak. I told them it would be worth it, because in a gestalt-like way an understanding of the whole subject is worth much more than an understanding of its individual topics, thanks to the interconnectedness of the ideas.
In general, strategic IT projects build systems that are large, complex beasts: often multisite and, in this case, multicountry. Organizations prepared to outlay millions on these systems usually expect some sort of return on their investment, in the form of having their information-related problems solved for the foreseeable future. In other words, these projects are hard to do, and they have to be done right; the future of the organization may very well depend on them.
There are several ways to screw up a strategic IT project, each of which has numerous possible causes. Ultimately, most projects that fail do so because of poor management, bad planning, or, more commonly, both. These failures manifest themselves in a variety of ways, depending on the kinds of incompetence shown.
I think it is safe to say that we can group them into four categories of failure:
- Building a system that doesn’t do the things it should do
- Building a system that doesn’t perform or behave correctly
- Building a system that is obsolete before its planned end of life
- Failing to get the system finished and installed
The first kind of failure is about what systems do; in other words, it is about functionality. In a way, we should be less worried about this kind of failure than about the others, partly because there is already so much good advice out there, and partly because eliciting functional requirements and embodying them in a system has always been part of systems development.
The second kind of failure involves how the system operates: its performance, reliability, security, and a host of related factors. There is a tendency to refer to requirements of this nature as “nonfunctional” in order to distinguish them from the “functional” requirements I mentioned previously.
Of course, both kinds of requirements are important, but of the two, I’d argue that it’s far more important to get the “how” requirements right first. This is a consequence of one of the most fundamental observations we can make about the way strategic systems work.
The third kind of failure is typically caused by carelessness, parsimony, indecent haste, or an inability to do “back-of-the-napkin” math. It’s the failure of the system to scale properly, whether by accident or design. Accidental nonscaling is somewhat forgivable, but designing it into a system is inexcusable; the system needs to scale for as long as its planned lifetime (plus contingency). So, when you build a system, you need to think not only about its annual throughput growth rate, but about when you plan to retire it, so you know for how many years it will be expected to cope with an increasing load. The more successful the business, the faster the system’s spare capacity will run out.
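The back-of-the-napkin math here is just compound growth. As a minimal sketch (the function name and the numbers are my own, purely illustrative), the number of years before a given amount of headroom is consumed by a fixed annual growth rate is:

```python
import math

def years_until_capacity_exhausted(current_load: float,
                                   capacity: float,
                                   annual_growth_rate: float) -> float:
    """Years until compounding load growth exceeds capacity.

    Solves current_load * (1 + g)**t = capacity for t.
    """
    if current_load >= capacity:
        return 0.0  # already out of headroom
    return math.log(capacity / current_load) / math.log(1 + annual_growth_rate)

# Illustrative (made-up) numbers: a system sized for 5x today's
# throughput, with load growing 30% per year.
t = years_until_capacity_exhausted(current_load=1.0, capacity=5.0,
                                   annual_growth_rate=0.30)
print(f"Spare capacity runs out in about {t:.1f} years")  # about 6.1 years
```

The point of the arithmetic: at 30% annual growth, a seemingly generous 5x headroom lasts only about six years, so a system with a ten-year planned lifetime would need roughly 14x headroom (1.3 to the 10th power is about 13.8). The more successful the business, the more brutal this math becomes.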
The fourth kind of failure probably has the most causes, but there are two main themes: lack of appropriate project infrastructure, and lack of respect for the needs of the project team. It’s a source of continual amazement to me that organizations that commit to spending millions and millions of dollars on a sparkling new system are rarely prepared to invest enough to ensure that the project team has what it needs to do the job properly. It’s like planning a televised moon landing, but with a proviso that the astronauts will wear secondhand deep-sea diving suits, the ground team will write with crayons, and the event will be captured on a 2004-vintage mobile phone camera. The other theme, respect for the project team, is unfortunately beyond the scope of this discussion. There are other reasons why a project might fail, reasons that may not be under anyone’s control. However, the point here is to minimize the risk of failure, not to eliminate all risk (which is impossible).
To finish, consider for a moment this quotation from Aristotle: “It is possible to fail in many ways…while to succeed is possible only in one way.” Aristotle lived more than two thousand years before the invention of the computer, but he understood success well enough: to succeed, you must avoid every way of failing. This means that to develop a system successfully, you have to get its functional aspects right, its “quality of service” aspects right, and its scaling right, and you’d better adopt a style of working that, well, works for all the people who will be participating in the project, at all levels and in all areas. To do anything else increases the risk of your project going completely sideways. Why take these unnecessary risks? With the right knowledge and an appropriate way of looking at things, they are entirely avoidable.