One of the most puzzling things in high technology, especially for executives on the business side of things, is the software development process. It’s the high tech equivalent to the “Black Hole” phenomenon made famous in Astronomy. Endless resources can be poured into a software development project, yet there never seems to be an end in sight. Monitoring the progress of a software project can be like peering into the darkness of a seemingly bottomless pit.
And why is this so? It seems that in such a typically high tech, yet now familiar activity, we would have long ago figured it out. We’re in an age where PCs, with the power of supercomputers from just a few years back, are slapped together like bicycles, and don’t cost much more than a bike. You would think that the process of software development would, by now, amount to simply turning a crank—yet it seems it hasn’t advanced much since the dawn of the PC age.
I don’t mean to be overly dramatic here. But I have been in the high tech and software industries since 1983, and I have never been involved with—or even personally known of—a software project that came in on time and under budget. Never. Not even ONCE. That’s pretty incredible. Now, I realize that there are almost certainly examples of on-schedule projects out there, but they are in the overwhelming minority of all software that is developed.
THEY ALWAYS SLIP
It’s just accepted in the software business that projects will slip, particularly when the end result is an actual commercial product. The businesses I’ve been involved in have tried everything. When I’ve had direct responsibility, we’ve taken every approach imaginable. We’ve tried an approach of “no upfront planning”—starting coding as soon as possible. We’ve tried “extensive and laborious upfront planning”—with a detailed spec, and a prototype, completed prior to initiating production coding. I’ve seen many projects that tried intermediate steps, falling between the two extreme approaches above. We’ve tried to start projects by purchasing as many “pre-written” modules as possible, used various languages and platforms, hired dedicated debugging personnel, tried code generators, assembled both small teams and large teams, you name it—we’ve tried it. Project schedules have been written with the utmost conservatism, at the insistence of senior management. No matter. Across a number of different companies, EVERY project has slipped out beyond the wildest nightmares of everyone involved.
ONE LINE OF CODE, TWO WEEK DELAY
Once I asked our lead programmer to change ONE LINE OF CODE in a well-established product. He estimated it would take just a few seconds to make the change, and a few hours to test it. The change would be final by the end of the day, at the latest. Two weeks later, I was still waiting for a solid product.
Now, don’t misunderstand. I’m not writing this to bash software developers. While not every developer I’ve worked with over the years has been a world-beater, I’ve had the fortune to work with quite a number whom I consider to be outstanding. Many have been extremely bright, dedicated and hard working. But no matter how much thought, time and effort went into it, our projects always slipped. A lot. We usually ended up with a commercially successful product, but how much better could we have done, had we figured out a way to bring the product to market on time? The only saving grace was that the competition had the same problem.
MORE ART THAN SCIENCE
The reason, I believe, is that writing software remains much more of an art than a science. That statement may sound surprising, until you look a little deeper. There is certainly plenty of methodology available to guide a team toward sound, time-tested practices in developing software. However, a software program is really just a document written in a foreign language. That’s why C++ and Java are called programming languages. It’s also interesting that many programmers who aren’t classically trained in computer science come from an English, music, or other language background. Just as writing a novel is guided by syntax, grammar and rules of composition, so is writing a software program. In writing a novel you are essentially creating a unique work that has never been done quite the same way before. The same is true of a software program. If you knew exactly how the writing of a novel or software program would go before you began, there would be no need to write it—it would have already been done. While there are plenty of rules (representing the science) to writing good software, at the end of the day it’s a unique, written creation (the art).
COMPLEXITY OVERWHELMS EXPERIENCE
Another key reason why conquering the software development process has appeared to be impossible is the vastly increased complexity associated with software projects today. Let’s face it, the average piece of software today does a lot more, and is quite a bit larger in terms of lines of code, than at the dawn of the PC era. The creation of graphical user interfaces really started the explosion in the size of software code. So much more code is needed to bring the user-friendly products of today to life. And what enabled this, of course, was the arrival of modern operating systems, especially the overcoming of the 640K memory limit that the original DOS operating system required PC programs to run in. Windows and other modern operating systems almost eliminated the need to write software efficiently, at least from a code size perspective. Today the embedded systems world is pretty much the last bastion where writing code efficiently lives on—it’s pretty much a lost art to most of the software world. It’s interesting to speculate: if we were still writing in the 640K box, would software development have evolved into a more predictable science by now? Maybe, but the world would be less productive as a result.
WHAT TO DO FROM A BUSINESS PERSPECTIVE?
As you can tell from this discussion, I don’t have a great set of answers on how to bring software to market on time. It’s one of the great frustrations of my career. I still strongly believe that hiring the best people you can will make the problem better, even if it can’t be solved completely. I also believe in keeping development teams small, with the minimum of structure necessary to run the project. It’s also wise, in my opinion, to structure your product releases to be more frequent, while adding fewer new features per release. This should at least minimize the pain of each release slipping, since the slip time of each smaller release should be less. And knowing what you’re going to be coding—developing a spec document and sticking to it (no feature creep!)—is also sound practice, although I’ve found it to be no panacea. Beyond that, I’m at a loss. Maybe one of you has a strong opinion on how to bring projects out on time? If so, post a comment—this is a discussion worth having.
Phil Morettini
PJM Consulting
www.pjmconsult.com
Hey, Phil, this is another thought-provoking article. Thanks!
As a software developer in the ’80s I remember hearing about the ninety-ninety rule, which went something like this: “The first 90% of the programming accounts for the first 90% of the time to finish development. The remaining 10% accounts for the other 90% of the time to finish development.” Yeah, it sounds like something from Dilbert, yet it’s way too true. Even the fact that this “rule” adds up to well over 100% is a poke at the reality that things take time.
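Taken literally, the ninety-ninety rule implies a project consumes roughly 180% of its original estimate. A minimal sketch of that arithmetic (the 100-day estimate is purely hypothetical; the percentages are the rule’s own, not real project data):

```python
# The ninety-ninety rule, taken literally: the first 90% of the work
# consumes 90% of the estimated time, and the remaining 10% of the
# work consumes another 90% of the estimated time.
estimate = 100.0  # original schedule estimate, in days (hypothetical)

first_90_percent_of_work = 0.9 * estimate  # 90 days
last_10_percent_of_work = 0.9 * estimate   # another 90 days

actual = first_90_percent_of_work + last_10_percent_of_work
print(actual)             # 180.0 days
print(actual / estimate)  # 1.8 -- the project takes 180% of its estimate
```

Which is the joke: the schedule was “90% done” on time, and the project still nearly doubled.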
There is no silver-bullet fix for this reality. As you say, there is art involved. Also, there are cultural differences that need to be taken into account, and there are parallels to sales and marketing.
Don’t these sound familiar?
To the developer – “Why does it take so long to change one line of code?”
To the sales person – “How hard is it to make one phone call?”
To the marketer – “How hard could it be to change one line on a website or brochure?”
Phil, great topic and nice observations. I will comment on your observations and add my own before making some suggestions that I have experience with.
I particularly like your art observation; it is something I have believed and professed since my first years as a programmer for IBM in the early ’90s, working on early CASE tools and methodologies. I believed that programming was more art than science, using pure creativity to create something from nothing. Yes, you did need training, and it WAS on a very “scientific” platform, but it was very akin to an artist learning the basics of painting on, say, a canvas or other medium. That said, as with artists, the most significant reason for delay in my experience is that quest for perfection in their craft. Every developer I have worked with (including myself in another life) has ALWAYS said, “…it can be better if…I can make it better if…let me try one more thing…”. I see this artistic drive even more in my last 8+ years working occasionally with the Media & Entertainment industry, where production artists (animators, CGI artists, producers, directors, actors, all creative types) essentially say the same thing. Studio execs have had essentially the same problem since their industry began. When was the last time you heard about a film coming in on time and under budget?
It is because of this artistic freedom that methodologies and newer technologies and standards have come in to address the problems of complexity and scope in projects and personnel. As an example, I learned Structured COBOL during the mid-’80s in high school (yes, I was a geek) to help my dad build a Tool Management System for the Navy. I knew I could code in many other ways to get the job done (think spaghetti), but I was taught: if I do it this way, other developers can work with my code and vice versa. So over the years, the predominant movement has been to allow more complex systems to be built by encapsulating capabilities (technology) and providing a set of rules to access them (methodology). There is a whole other conversation that could be spawned on the topic of “the right technology” or “the right methodology,” but I feel it becomes a religious debate more often than not. I prefer instead to be technology agnostic and apply what is most immediately pragmatic, while keeping in mind the ultimate strategy of the business solution. In this way, I find that complexity can be managed, though of course it must always be watched.
From my experience, the problem most often starts in the up-front portion of an engagement. It starts with really understanding the problem from BOTH the business aspect and the technology aspect, translating that effectively so that everyone is on the same page, and then moving forward with a solution that specifically addresses the issue(s). This seems straightforward, but honestly, I have seen too many solutions designed by technical architects who have “done this before” or, worse yet, developers “who know better.” What ultimately happens is an immediate disconnect, so that even if you are using agile development with rapid prototyping, you are already behind the 8 ball when it “isn’t quite right.” We won’t even think about what happens with a traditional waterfall process. Associated with this is the expectation setting from the solutions team based on these “similar” solutions or requirements. More often than not, because these solutions are designed with little to no requirements, they tend to be larger in scope, because the components that make up the initial description are “something that we have done before.”
So what we have done to address this is twofold. Firstly, send in people who are adept at listening and understanding requirements at the appropriate level, whether it’s business value or technical application, to come as close as possible to a true understanding of the problem. This is easier said than done, primarily because while “listening and understanding” seems to be a basic skill, true mastery requires practice and is surprisingly hard to find. Secondly, we have found much success in a “start small – grow fast” philosophy. What I mean by that is very specifically scoping a project down to its bare essence at the beginning to ensure short-term success. This accomplishes several things. We get validation very quickly from the customer (stakeholder) as to whether or not we do have a solution that addresses the problem. If possible, we can start gathering metrics from the solution to promote further investment in the proposed solution. We train the implementation team on the true business value and nature of the solution, providing valuable perspective. Finally, both customer and developer are able to move incrementally on what works and what doesn’t work from a pragmatic point of view.
Now, I am NOT saying this works EVERYWHERE. Our organization has been groomed and trained for this engagement style. But to the original discussion: yes, I believe there is a way, and I have seen a significant measure of success in our engagements, both in smaller 6-8 week projects and in multi-year engagements.
Hi Phil,
It is apparent that you have learned some hard lessons in software. Yikes! I wish you had found proper help, as you could have had assistance through this struggle. As an experienced project manager in software releases and implementations, I can say there is most certainly a process for minimizing headaches, managing expectations, developing code and, yes, releasing it on time!
In case you are still in the struggle… remember scope, schedule & budget. That “simple” change you requested from a coder was indeed a setback. It is *never* as simple as you first hope. So remember: if you change the scope, the schedule and budget change with it.
But I think you did come across something good: multiple, frequent, small releases are good. They get users working in the system early, and you can get great feedback for future needs and releases. Make sure you set expectations, and you can be on your way to successful releases!
Good luck!
Ashleigh
In my more than 30 years of experience in business and computer-related fields, including stints as a programmer and systems analyst, I have found that nothing takes the place of the initial research phase of a project. Sitting down with an actual future user of the software product, even doing their job alongside them, is the only way to grasp all of the data, features and functions that the product will need to contain to make the user’s life and job easier or more efficient. And isn’t that the objective of most business-oriented software products?