Sunday, March 11, 2007

Ten Commandments of Successful Software Development

#1 Thou shall start development with software requirements

Every morning, some software developer wakes up with a great new idea for a software application, utility, or tool. Those who go off and start writing code without first considering the requirements for their program are likely doomed to failure. Invest up front in developing your software requirements and you will be starting down the path to successful software development. A software development organization without any requirements management process in place will not repeatedly achieve development success. Here are some tips as to why and how you should develop and manage software requirements for any project, regardless of size.

For starters, if you can’t define the requirements for your software system, you will never be able to measure the success of your development effort. If you were to write a calculator program, would it be successful if it could add two numbers and produce the correct result? What about subtraction, multiplication, and division? Does the calculator need to handle floating-point numbers or just integers? How many digits of precision are needed in results? What should the calculator’s behavior be if a divide by zero is encountered? Should the results be displayed in a textual or graphical format? How long does the result have to be saved after it is displayed? The list goes on. Even in this trivial example, requirements are extremely important to determining the success of the project.
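To make these questions concrete, here is a minimal Java sketch (our own illustration, not from any actual calculator) showing how three of them, floating-point support, digits of precision, and divide-by-zero behavior, each become an explicit decision the developer must code one way or another:

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class Calculator {
        // Requirement decision: results carry 10 significant digits, rounded half-up.
        private static final MathContext PRECISION =
                new MathContext(10, RoundingMode.HALF_UP);

        // Requirement decision: operands are arbitrary decimals, not just integers.
        public BigDecimal divide(BigDecimal a, BigDecimal b) {
            // Requirement decision: divide-by-zero reports an error to the caller
            // rather than crashing or silently returning a sentinel value.
            if (b.compareTo(BigDecimal.ZERO) == 0) {
                throw new ArithmeticException("divide by zero");
            }
            return a.divide(b, PRECISION);
        }
    }

Every line of this fragment answers a question the requirements either did or did not ask; where they did not, the developer has just invented a requirement on the user's behalf.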

It is very difficult to write good software requirements. Without good requirements precisely stating what a software program is supposed to accomplish, it is very difficult to judge the application's success, much less complete the project in the first place. One of the main reasons it is so hard to write good software requirements has to do with the nature of human language. English, like all spoken languages, is imprecise and leaves much to be inferred by the listener or reader. Computers, by their digital nature, are very precise and cannot easily be programmed to infer missing requirements. Therein lies the dichotomy. Think of a requirement statement as simple as "the program shall add the two numbers and display the results." You could raise all the same questions posed in the calculator example in the last paragraph. Even if these questions were answered, more requirements would probably be uncovered during development of the application.

If your software development team is asking questions like those in the previous calculator example, it probably is a good sign. There is no surer route to failure for a software development project than incomplete requirements. Of course, the next steps after asking questions and developing requirements are to document them, organize them, and track them; requirements management tools provide functions that help you do this.

Many development projects actually start with a good set of functionality requirements, such as input this, perform this processing, and output that. What are often left out are performance and other environmental requirements as summarized in Table 1-1. How quickly does the program have to complete the required processing?

Table 1-1 Commonly Overlooked Performance and Environmental Requirements

Requirement                      Examples

Processing Speed                 CPU speed
Memory Capacity                  Cache size, RAM size
Network Capacity                 Network interface card speed, switch/router bandwidth
Persistent Storage               On-line disk capacity; tape backup capacity
Internationalization Support     Will the application be deployed in different countries or in different languages?
Minimum Display                  Monitor size and resolution; number of colors supported
Financial                        Budget and schedule
Environmental                    Power requirements, air conditioning, special temperature or humidity requirements


How much RAM or disk space can it use? Does the software have to support internationalization for worldwide use? What is the minimum display size required? Environment-related requirements are becoming increasingly important, especially with the advent of cross-platform development environments such as Java. Java truly provides a write-once, run-anywhere environment ranging from smartcards to workstations to mainframes. A Java applet that looks great on a workstation's 21" color monitor, however, may look much different on the 4" screen of a monochrome personal digital assistant (PDA). Finally, don't forget budget and schedule requirements. In gathering these requirements, the reason for the second commandment should become very clear to you.

#2 Thou shall honor thy users and communicate with them often

Most software applications are not used by their own developers; they are intended to be used by the developers' customers, clients, and end-users. This implies that someone in your development organization had better spend a lot of time communicating with users so that their requirements can be correctly understood. In the calculator example from the first commandment, a developer may be perfectly happy with the four basic arithmetic functions, while the user would like a square root function or a memory function. What the developer thinks is a complete set of requirements isn't necessarily so.

Moving from a trivial to a real world example, an enterprise-wide IT application may have many types of users, each with their own business requirements. Take a payroll system for example. One type of user is the employee, whose requirements include getting paid the correct amount in a timely fashion. A second type of user is a manager, who wants to be able to administer salary increases and track budgets. A third type of user might be the HR administrator, who wants to compare salary ranges across an entire organization. Each user type will have unique requirements.

The second part of this commandment focuses on the word "often." Frequent communication is required, among other reasons, because English (or any modern language) is imprecise. Communication with users only starts at the requirements definition phase. A developer may have to discuss a requirement with a user several times before the true definition of the requirement is captured. In addition, user requirements are likely to change often, and indeed several requirements may even conflict. Frequent communication with users gives the developer early notice of requirements changes the user is considering.

A successful software development organization has established processes for frequently communicating with users during all stages of the development process. The sooner an incorrect or missing requirement is discovered, the easier it is to fix the problem. Many successful development organizations have made customer advisory teams an integral part of the software development process. Such customer teams participate in all stages of development, from initial requirements gathering to production acceptance signoff. The Web-Centric Production Acceptance (WCPA) process (presented in our book Software Development: Building Reliable Systems, ISBN 0-13-081246-3) is another vehicle for bringing users and developers together and for promoting and instilling good communication practices.

#3 Thou shall not allow unwarranted requirements changes

While user requirements do often change, it is the job of the development organization to manage these changes in a controlled fashion and to ensure that requirements do not change or grow unnecessarily. Pity the poor programmer who started off writing a basic arithmetic calculator, agreed to add square roots and a few other user-requested functions, then a few more, and soon had the task of developing a sixty-five-function scientific calculator. Modifying or adding requirements can happen for many reasons, not the least of which is failure to honor Commandments One and Two.

Another reason requirements grow beyond their original scope is simply that software is so easy to change compared to hardware. No one would purchase a calculator at the local electronics store and then expect the manufacturer to add additional transistors to the calculator chip to implement a new function. If the calculator were a software application, no one would think twice about the ability to add a new function.

Perhaps an even more common reason that requirements change lies with the programmers themselves. Many a programmer has accepted a new user requirement not based on valid business reasons but simply to please a user. Other times the software may be fully functional and pass all unit test requirements, but the programmer just wants to add one more feature to make the application "just a little better." Good developers always want their code to be perfect in their own eyes. The cost of making even a simple change, however, is minor compared to the cost of the retesting and requalification that may result.

This does not mean we are against iterative development. Chapter 10 of our Software Development book, "The Software Life Cycle," prominently discusses the use of iterative or spiral development. Even in a spiral development model, however, new requirements are introduced at the start of a new iteration, not continually during the development process. Since requirement changes impact project budget and schedules, allowing changes only at set points in the software life cycle leaves time to trade off the value and validity of each proposed requirement against its cost. Meanwhile, developers can complete each stage with a frozen set of requirements, speeding the total development cycle.

Object-oriented and component-based software technologies help further isolate the impact of many requirement changes. Still, requirement changes are a constant problem for many software projects. Part of this is the developer’s fault. Luckily, managing requirements is mainly a process issue rather than a technical one. Here are some ways to help prevent requirements from constantly expanding beyond their original scope:

  • Document all user requirements, allowing the users to review the completed requirements document and agree to its completeness;

  • Get users to agree up-front that future requirements changes will only be accepted after being evaluated for schedule and budget impacts;

  • Practice iterative development. Get users and developers to understand that the first version is not the final version. That "one last change" can always wait for the next version. Many a software system has suffered unexpected delays because a simple last-minute change rushed through just before release broke huge parts of the system.

#4 Thou shall invest up front in software architecture

Every morning, some developer goes to work with software requirements for a new application in hand and starts writing code. For those who go off and start writing code without first developing software architecture, their programs are likely doomed to failure. Invest up front in your software architecture and you will be starting down the path to successful software development.

Developing architecture for an industrial-strength software system prior to its construction is as essential as having a blueprint for a large building. The architecture becomes the blueprint, or model, for the design of your software system. We build models of complex systems because we cannot comprehend any such system in its entirety. As the complexity of systems increases, so does the importance of good modeling techniques. There are many additional factors to a project’s success, but starting with a software architecture backed by rigorous models is one essential factor.

In the face of increasingly complex systems, visualization and modeling become essential tools in defining software architecture. If you invest the time and effort up front to correctly define and communicate software architecture, you will reap many benefits, including:

  • Accelerated development, by improved communication among various team members;

  • Improved quality, by mapping business processes to software architecture;

  • Increased visibility and predictability, by making critical design decisions explicit.

Here are some tips on how and why to always start your software development project with software architecture. Start with a minimum yet sufficient software architecture to define the logical and physical structure of your system. Some sample activities are summarized in Table 1-2.

Software architecture is the top-level blueprint for designing a software system. Developing good software architecture requires knowledge of the system's end users, the business domain, and the production environment, including the hardware and the other software programs the system will interface with. Knowledge of programming languages and operating systems also contributes to good software architecture. As software systems grow more and more complex, ever more knowledge is required of the software architect. Object-oriented and component-based technologies may simplify individual programs, but the complexity typically remains at the architectural level, as more objects or components and their interactions must be understood.

Table 1-2 Software Architecture Activities

Activity                                 Example                             Architecture Level

Gather user requirements                 Generate use-case examples          Logical architecture
                                         Document sample user activities
                                         Create class diagrams
                                         Create state diagrams
                                         Create sequence diagrams
                                         Create collaboration diagrams

Start design and production acceptance   Define packages and components      Physical architecture
                                         Define deployment environment


There are no shortcuts to designing good software architecture. It all starts with a small number, perhaps one to three, of software architects. If you have more than three architects working on a single program’s software architecture, they probably are not working at the right level of detail. When software architecture delves too deeply into detailed design, it becomes impossible to see the whole architecture at a top level and properly design from it.

Most software applications are much more complex than the makers of GUI development tools would sometimes like you to believe. Every application should be built around a software architecture that defines the overall structure, main components, internal and external interfaces, and logic of the application. Applications that work together in a common subsystem should adhere to an architecture that defines the data flow and event triggers between applications. Finally, applications that run across your entire organization need to follow some set of minimal guidelines to assure interoperability and consistency between applications and maximize opportunities for reuse of components.

Software architectures should always be designed from the top down. If you are going to implement a multi-tier software architecture across your IT organization, it's best to do this before you have lots of components written that can only communicate with other applications on the same host. Start by developing your organization's overall application architecture. Next, define the system architecture for the major systems that will be deployed in your organization. Finally, each application in every system needs its own internal software architecture. All of these architectures should be developed up front, before you develop any production code or invest in any purchased software packages. The notion of completing software architecture up front does not contradict the spiral model of software development, which utilizes prototyping and iterative refinement of software. Rather, prototyping should be acknowledged as an essential step in defining many parts of your architecture.

Trying to design a global set of every reusable component you think you might ever need is a losing proposition. You only know which components will be useful with lots of real experience delivering systems. If you don’t prototype, you don’t know if what you’re building is useful, tractable, and feasible, or not.

#5 Thou shall not confuse products with standards


A common mistake made by IT organizations is to confuse products with standards. Standards are open specifications such as TCP/IP or HTML. Standards can be either de facto or official. De facto standards, while not endorsed by any standards body, are widely accepted throughout an industry. Official standards are controlled by standards bodies such as the IEEE or ISO. Products can implement specific standards or may be based on proprietary protocols or designs. Standards, because many vendors typically support them, tend to outlive specific products. For instance, in the early 1990's, Banyan Vines was one of the top two network operating systems for PC's. Today, suffering from its own proprietary protocol, Banyan Vines has been relegated to niche player status in the network operating system market.

If your IT organization chooses to standardize on a product, say Cisco routers for network connectivity, you should not do so until you first standardize on a protocol for network connectivity, such as TCP/IP. Here are some common mistakes IT organizations make when defining their application, system, and software architectures.

  • The application architecture is defined at too high a level. Some CIO's make the mistake of declaring Windows NT (or Unix, or mainframes) the corporate application architecture for their IT organization. Even the various third-party programs designed for NT, or any other operating system, do not define all the characteristics of how to run a business. This is not to say that a corporation might not standardize on NT and use it wherever possible in its IT infrastructure, only that application architecture requires a finer granularity of detail. In general, application architectures should not be so specific as to be tied to particular products.

  • The application architecture is defined at too low a level. Oracle Version 8 is not an application architecture; it is a specific version of a vendor's database product. Once again, application architectures should not be product specific. A better architectural phrase would be "relational databases that implement the SQL standard." This does not preclude a company from deciding to purchase only SQL DBMS systems from Oracle, but specific product choices should be made only after the underlying standards decision has been made.

  • System architecture does not address how the system is going to be tested. Many software projects have architectures that are wonderfully elegant from a computer science perspective yet fail miserably because no attention was ever paid to how the system was going to be tested. One of the most commonly overlooked test factors is performance testing. System architecture must take into account how a system is going to be fully exercised and tested. This is especially relevant when designing multi-tier applications. For instance, in a three-tiered system, the architecture may allow for individual testing of components in each of the three tiers but not for end-to-end system testing verifying the correct interoperation of all three tiers. An equally bad architecture allows for end-to-end testing without allowing for testing of components in each individual tier. There is no worse plight than to know that your whole system isn't operating correctly and to have no way to isolate which component is causing the problem. (A sketch of a testable tier structure follows this list.)

  • Software architecture does not consider production rollout of the application. Besides taking into account how an application will be tested, the process of rolling out an application into production needs to be considered in your software architecture. Many great systems have been designed that were never fielded because the infrastructure to support their widespread use was not available. The Web-Centric Production Acceptance process specifically addresses the production rollout process for web-centric applications.
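As promised above, here is a hypothetical Java sketch (the names and rules are ours, not from the book) of a three-tier structure in which each tier sits behind an interface, so a test can stub out any single tier or wire all the tiers together end to end:

    // Tier 3: persistence, reachable only through an interface.
    interface DataTier {
        String fetchAccount(String id);
    }

    // Tier 2: business rules, likewise behind an interface.
    interface BusinessTier {
        String balanceReport(String accountId);
    }

    class BusinessTierImpl implements BusinessTier {
        private final DataTier data;
        BusinessTierImpl(DataTier data) { this.data = data; }
        public String balanceReport(String accountId) {
            return "Report for " + data.fetchAccount(accountId);
        }
    }

    // Tier 1: the client-facing layer.
    class PresentationTier {
        private final BusinessTier business;
        PresentationTier(BusinessTier business) { this.business = business; }
        String render(String accountId) {
            return "<html>" + business.balanceReport(accountId) + "</html>";
        }
    }

    class TierTests {
        public static void main(String[] args) {
            // Individual-tier test: stub the data tier and exercise the
            // business tier in isolation.
            DataTier stub = id -> "stub-account-" + id;
            BusinessTier businessAlone = new BusinessTierImpl(stub);
            System.out.println(businessAlone.balanceReport("42"));

            // End-to-end test: wire presentation, business, and data tiers
            // together through the same interfaces (here the stub stands in
            // for a real persistence implementation).
            PresentationTier ui = new PresentationTier(new BusinessTierImpl(stub));
            System.out.println(ui.render("42"));
        }
    }

Because every tier is reached only through its interface, neither style of testing requires changing the architecture after the fact.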

#6 Thou shall recognize and retain your top talent

Too many software development books concentrate on technology or management practices. At the end of the day, much of your success in software development will come down to who is working for you and how happy they are in their work. You can have the best technology and organize your team in the best way possible, but if you have the wrong team it won't do you much good. Surprisingly, given the effort so many organizations put into recruiting talent, many fail to recognize and retain that talent after it joins the organization.

Organizations that are successful at retaining their top talent start by recognizing that talent. There are plenty of metrics that can be gathered around a developer's productivity and quality. Don't forget, however, the more subjective criteria summarized in Table 1-3. Who are the developers who always show up at others' code reviews and make constructive comments? Who is known as the "goto" person when you have a tough bug to find? Which developers really understand the business you are in? Who has the best contacts with your customers? Be sure not to concentrate 100% on hard metrics and overlook factors such as these. Once you know whom you want to keep around, start thinking of ways to make it happen.

Table 1-3 Traits of Successful Developers

Skill Dimension   Trait                              Example

Technical         Operating system knowledge         Understands operating system principles and the impact of the OS on code design choices
                  Networking knowledge               Understands networking infrastructure and matches application network demands to available infrastructure
                  Data management                    Understands how and when to use databases, transaction monitors, and other middleware components
                  Hardware                           Knows the limits of what can be accomplished by the software application on various hardware configurations

Business          Understands the business           Can differentiate between "nice to have" requirements and those that are essential to the function of the application
                  Market knowledge                   Keeps up-to-date on developer tools, frameworks, and hardware

Professional      Written and verbal communication   Effective presenter at code reviews; documentation is easy to read and understand
                  Teamwork                           Participates actively in others' code reviews
                  Flexibility                        Can work well on a wide variety of coding assignments
                  Reliability                        Always completes assignments on time; strong work ethic
                  Problem solving skills             Viewed as a "goto" person for help in solving difficult software bugs


Throughout most of the 1990's, demand for skilled high-technology workers has far exceeded the supply. It is easy to throw software developers into this general category and assume good developers are no harder to find than other high-tech workers. Based on our experiences, we believe good software developers are even scarcer than good IT personnel in general. Just consider some of the numbers. The Java language was introduced in 1995. By late 1997, IDC estimated that there was a worldwide demand for 400,000 Java programmers and that this would grow to a need for 700,000 Java programmers by the year 1999. While retraining existing developers to program in the Java language has filled much of this demand, it still represents a tremendous outstripping of the supply of knowledgeable Java programmers.

Developer skill, more than any other factor, is the largest single determinant of the outcome of a software project. This is reflected in software costing and scheduling models, most of which place higher weighting factors on developer skill than on any other factor, including even project complexity. In other words, skilled developers working on a complex software development project are more likely to produce a successful application than lesser-skilled developers working on a simpler project. It is no accident, therefore, that our Software Development book devotes a large part of its text to describing what makes a good developer (Chapter 5), how to hire one (Chapter 7), and how to retain your developers after they are hired (Chapter 8). Studies have shown that top developers can be two to three times more productive than average developers and up to one hundred times more productive than poor developers. This range of productivity is wider than in any other profession. Good developers not only produce more lines of code; their code has fewer errors and is of higher general quality (i.e., it performs better, is more readable, is more maintainable, and excels in other subjective and objective factors) than code produced by average or poor developers.

One false belief we have heard from many software development managers, especially those without a development background, is the notion that as development tools become more advanced, they "level the playing field" between average and great developers. Anyone who has ever attended a software development-related convention has seen slick demonstrations showing how "non-programmers" can use a development tool to quickly build an application from scratch. Modern development tools, especially Integrated Development Environments (IDE's) addressing everything from requirements definition to testing, certainly help developer productivity. This is especially true in the area of graphical user interface code. Even with the best of IDE's, however, there remains a highly intellectual component to software development. The best software requirements, the best software architectures, and the most error-free code continue to come from the best software developers and software architects.

Rather than leveling the playing field, our experiences have shown that good IDE’s, used as part of the development process, increase rather than decrease the difference between average and great developers. There is often a compounding effect as an unskilled developer misuses a built-in function of the IDE, introducing bugs into the program, while never gaining the experience of thinking out the complete solution. We are not, of course, by any means against the use of good IDE’s or the concept of code reuse. It’s just that neither of these are a substitute for developer skill.

#7 Thou shall understand object-oriented technology

Every key software developer, architect, and manager should clearly understand object-oriented technology. We use the term "object-oriented technology" rather than "object-oriented programming" because one does not necessarily imply the other. There are many C++ and Java programmers who are developing in an object-oriented programming language without any in-depth knowledge of object-oriented technology. Their code, apart from the syntax differences, probably looks very much like previous applications they have written in FORTRAN, C, or COBOL.
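A contrived Java fragment illustrates the difference (the payroll domain is our own invented example). Both pieces compile, but the first simply transplants procedural habits into Java syntax, while the second actually uses the object model:

    // Procedural habit carried into Java: data and the routines that
    // operate on it are kept apart, as in a FORTRAN or COBOL program.
    class PayrollRoutines {
        static double applyRaise(double salary, double percent) {
            return salary * (1.0 + percent / 100.0);
        }
    }

    // Object-oriented thinking: the Employee owns its data and the
    // operations on that data, and hides its representation.
    class Employee {
        private double salary;
        Employee(double salary) { this.salary = salary; }
        void applyRaise(double percent) { salary *= 1.0 + percent / 100.0; }
        double salary() { return salary; }
    }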

While object-oriented technology is not a panacea for software developers, it is an important enough technology that the key engineers in every development organization should understand it. Even if your organization does not currently have any ongoing object-oriented development projects, you should have people who understand this technology. For starters, without understanding the technology you will never know if it is appropriate to use on your next project or not. Secondly, due to the long learning curves associated with object-oriented technology, organizations need to invest in it long before they undertake their first major project. While object-oriented programming syntax can be learned in a few weeks, becoming skilled in architecting object-oriented solutions usually takes six to eighteen months or more, depending on the initial skill set of the software engineer.

#8 Thou shall design web-centric applications and reusable components

As in the case of object-oriented programming, not all software architectures will be web-centric. With the explosion of the public Internet, corporate intranets and extranets, however, web-centric software is becoming more and more universal. This changes not only the way you design software, but also some of the very basic infrastructure requirements as well. Here are some of the infrastructure components needed for a typical web-centric application:

  • Database server. A web-centric application will typically access one or more corporate databases. Unlike a two-tiered client-server application, however, a web-centric application is less likely to access the database directly. More commonly, a web-centric application accesses some sort of middle-tier application server containing the business rules of the application. The middle tier then communicates with the database server on behalf of the web-centric client. There are many advantages to such a multi-tiered approach, including greater application scalability, security, and flexibility.

  • Application servers. In a web-centric architecture, application servers implement the business logic of the application. In many cases, this is programmed using the Java language. From a Java program, the Java Database Connectivity (JDBC) API is most often used to connect back to the central database (a minimal JDBC sketch appears after this list). Specialized application servers may offer services such as DBMS data caching or transactions. A single business function is often broken down into components that execute across many application servers.

  • Web servers. Web servers are used to store and distribute both Java applications and web pages containing text and graphics. Many advanced applications will generate web pages dynamically to provide a customized look and feel.

  • Caching proxy servers. These servers, while not explicitly part of the application, are typically located strategically across the network to cut down on network bandwidth and provide faster access times to web-based data and applications.

  • Reverse proxy server. A reverse proxy server is typically used to provide remote users secure access over the Internet back to their corporate Intranet.

  • Web clients. Until recently, a web client meant either Netscape’s Communicator (or Navigator) browser or Microsoft’s Internet Explorer browser. Today, a web client could still be one of these browsers, or it could be any of the following:

    • An HTML rendering JavaBean component in your application

    • An applet viewer built into a Java Development Kit (JDK)

    • A Java application

    • A collection of functions built directly into the operating system
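As promised above, here is a minimal sketch of the middle-tier pattern described under application servers: the server component, not the web client, talks to the central database through JDBC. It is written in present-day Java for brevity, and the connection URL, credentials, and table names are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountService {
        // Hypothetical connection string; a real application server would
        // obtain pooled connections from its own configuration.
        private static final String DB_URL = "jdbc:oracle:thin:@dbhost:1521:orcl";

        public double lookupBalance(String accountId) throws SQLException {
            try (Connection conn = DriverManager.getConnection(DB_URL, "appuser", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT balance FROM accounts WHERE id = ?")) {
                stmt.setString(1, accountId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getDouble("balance") : 0.0;
                }
            }
        }
    }

The web client never sees this code or the database; it only sees whatever the application server chooses to expose, which is exactly the scalability and security advantage of the multi-tiered approach.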

One of the main advantages of web-centric design is that it starts taking IT out of the business of supporting heavyweight clients. In fact, most new operating systems ship with one or more bundled web browsers so no additional client installation is required for a web-centric application. Even if you are deploying to older desktops without a bundled web browser, the popular browsers are available for free and easily installed. If a web-centric application is designed correctly, the end-user client really doesn’t matter, as long as an HTML rendering component and Java Virtual Machine (JVM) are present.

If there is any disservice that the web has brought to software development, it is the belief among inexperienced managers that the web has trivialized software development. True, almost any word processor today can spit out HTML code, dozens of development tools promise "point and click" generation of Java code, and the web makes software distribution a non-issue. All of this has allowed web-savvy organizations to develop new applications on "Internet time," several times faster than in traditional client-server environments. It has not, however, by any means trivialized software development. From requirements definition through production acceptance, the same disciplines that apply to client-server development hold true for web-centric development. We remind developers of this continually throughout our Web-Centric Production Acceptance (WCPA) process, presented in Chapter 13.

While embracing web-centric design does not necessarily require using reusable components, it certainly is a good time to start. More and more development organizations every day are investing in the design and development of reusable components. Chapter 17 of our Software Development book discusses component-based software development in greater detail, along with several of the popular component frameworks. It is such frameworks that have fostered the popularity of reusable components. Consider some of the reasons why more and more people are investing in reusable component-based design.

It can take longer and be more expensive to design and implement a given function as a reusable component than as a non-reusable one. The savings only come when the component is reused. Especially with web-centric design, however, you will find your developers reusing well-designed components more and more. This reuse is facilitated by component standards such as JavaBeans. The cost trade-off, therefore, is to compare the overhead of reusable design with the average number of times a component is expected to be reused. A reusable component, on average, might cost from 10% to 25% more to develop. Few development managers today could justify a 25% cost and schedule overrun just to save a future project money. Properly implemented, however, reusable components can begin saving your project money today:

  • If you invest in the design of reusable components and an accompanying framework, you will undoubtedly find components you can reuse from elsewhere in place of some of the code you would otherwise develop.

  • It is likely components developed on your own project can be reused elsewhere in the organization.

  • You can buy and sell components (either externally, or by exchanging with other development groups inside your company).

  • Well-built components are much easier to swap out and upgrade.
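To make the JavaBeans point concrete, here is a hypothetical bean sketch (the class and property names are our own) showing the conventions that make a component reusable by builder tools and by other developers: a no-argument constructor, get/set property pairs, serializability, and bound-property change events:

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;
    import java.io.Serializable;

    public class CurrencyField implements Serializable {
        private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
        private double amount;

        public CurrencyField() { }                  // no-arg constructor for builder tools

        public double getAmount() { return amount; }

        public void setAmount(double newAmount) {   // get/set pair defines the property
            double old = this.amount;
            this.amount = newAmount;
            // Bound property: interested beans are notified of the change.
            changes.firePropertyChange("amount", old, newAmount);
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            changes.addPropertyChangeListener(l);
        }
    }

Because the bean exposes itself only through these conventions, a builder tool or another development group can wire it into an application without ever reading its source.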

#9 Thou shall plan for change

The best developers and architects plan for change during all phases of the software life cycle. During the course of an average one year development cycle, not only will the design be subject to change, but so too will the user requirements, the development tools, the third party software libraries, the DBMS system, the operating system, the hardware, the network, the programmers, and many other aspects of the application that cannot possibly be foreseen or otherwise planned for. Some aspects of change, such as a new release of the operating system, can certainly be planned for by discussing schedules with the vendor and making a decision whether a new release should be installed or not. Sometime during the application’s life, however, the underlying operating system will probably have to be upgraded so it’s really just a matter of when changes such as these are done. In either case, you still have to plan for the changes.

The longer the expected project lifetime, the more important it is to plan for change. The Cassini mission to Saturn, operated by JPL, was launched in October of 1997. With any luck, the spacecraft will enter orbit around Saturn in 2004. The JPL engineers running the Cassini ground station definitely must plan for changes in hardware and software prior to the spacecraft’s encounter with Saturn in 2004. Any company that designs a long lifetime product with an embedded hardware and software component must pay careful attention to planning for change. Back down on earth, for instance, every high-end Xerox printer contains an embedded workstation controller. Typically these workstations are commercial off-the-shelf products with an average lifespan of eighteen months. Xerox high-end printers are designed for five to ten or more years of continuous operation. Xerox must plan for change in the embedded workstation components for each of its printer lines.

There are many ways to plan for change. For starters, allow extra budget and schedule in your project for unforeseen changes. At the start of the project, work to clearly identify all risk items that could lead to a possible change somewhere in the future. During design, look for ways to mitigate the risk of a change further downstream. At the coding level, look for ways to set up code so it can be quickly and easily adapted to new situations and events within your business. For instance, use tabular definitions whenever possible rather than "hard coding" parameters into your code.
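Here is a contrived Java sketch of that last piece of advice; in practice the table would live in a DBMS or a property file so that a business change requires no recompiling (the tax-rate domain and all names are our own invention):

    import java.util.HashMap;
    import java.util.Map;

    public class TaxCalculator {
        // Hard-coded: changing a rate means editing, rebuilding, and retesting code.
        static double taxHardCoded(String state, double amount) {
            if (state.equals("CA")) return amount * 0.0725;
            if (state.equals("NY")) return amount * 0.04;
            return 0.0;
        }

        // Table-driven: the code stays frozen; only the table contents change.
        static final Map<String, Double> RATES = new HashMap<>();
        static {
            RATES.put("CA", 0.0725);   // in production, loaded from a DBMS table
            RATES.put("NY", 0.04);
        }

        static double taxTableDriven(String state, double amount) {
            return amount * RATES.getOrDefault(state, 0.0);
        }
    }

Here is a real-life example of two companies that implemented the same application, and how they did (or didn't) plan for change.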

We worked with two companies of about the same size that implemented the Oracle Financials application package in early 1997. In both companies, various business units wanted to modify the standard financials applications to meet some unique need of the business unit. Much of Oracle Financials' operation is table-driven, with those tables residing in a DBMS. The IT departments thus entertained each business unit's request, as most of the changes could be done by a database administrator with little or no coding. The first company went ahead, approved, and implemented the customizations for each business unit. The second company planned ahead for change and considered what the impact of the customizations would be the next time Oracle Financials came out with a major upgrade. It decided the marginal business benefits of providing the customizations were outweighed by the future costs of maintaining them.

Well, as you might guess, in 1998 Oracle released a major revision of Financials based on its Network Computing Architecture (NCA). With NCA, the Oracle Financials client is deployed via a web browser, versus loading client software on each desktop requiring access to the application. NCA offers tremendous business advantages to corporations through reduced application administration costs, along with the improved functionality bundled into the release. The company that did the customization is still evaluating how to roll out Oracle's NCA, as the customizations it performed prevent a simple upgrade and the DBA's who did the initial customization are not yet trained on NCA. By contrast, the second company, with no extensive customization, was able to complete the upgrade to NCA in a single weekend. It has been enjoying the added functionality and client administration cost savings ever since.

#10 Thou shall implement and always adhere to a production acceptance process

Mentioned several times already in the first nine commandments, our last commandment centers around the use of the Web-Centric Production Acceptance (WCPA) process presented in Chapter 13. In our book we focus on the WCPA as tailored for web-centric applications. The WCPA is really a superset of the Client-Server Production Acceptance (CSPA) process presented in Building the New Enterprise. Most of the WCPA will be useful even if you are not yet designing web-centric applications. Production acceptance takes an application from the early stages of development into production status. Planning for the WCPA, however, really begins at the first stages of the software development process. This is where we first start getting users involved, and keep them involved throughout the development process through the use of customer project teams. At the same time, the development team needs to start getting IT operations involved. The WCPA shows that it is never too early to start getting both users and operations involved.

One of the reasons we developed the WCPA is to serve as a communications vehicle. All too often, development organizations are isolated from the business groups who will use their applications and the operations groups that will run and maintain them. Proper use of the WCPA will help promote and instill good communication practices. Just as iterative development is a key software development process, so too is iterative, ongoing communication with operations and with users.

This commandment is important because without a closely followed WCPA your business may lose valuable revenue or even customers because your web-centric application does not function as expected. Perhaps one of the earliest examples of a WCPA can be traced to Netscape’s web server when they first opened for business in mid-1994. When designing their web site, Netscape engineers studied the web server load of their competitor at the time, the Mosaic web site at the National Center for Supercomputer Applications (NCSA). The NCSA site was receiving approximately 1.5 million web "hits" a day at the time. Netscape engineers thus sized their web site to initially handle 5 million hits a day during its first week of operation. Luckily, Netscape had planned their web site architecture to be scalable, and were able to add additional hardware capacity to handle the load.

The success of the WCPA process is also related to the robustness of your software system’s architecture. An even greater percentage growth than Netscape’s occurred at AT&T Worldnet. AT&T had expected to sign up 40,000 customers for their Worldnet Internet service during its first month of operation and had designed the site accordingly. During its first month of operation, Worldnet registered 400,000 new subscribers, ten times the expected amount. Luckily for AT&T, they had architected the system for growth and put in place the equivalent of a WCPA process. As a result, all new subscribers were able to start receiving service with few complaints of busy signals on dial-in attempts (in contrast to some other well-known Internet services).

A great example of what happens when you don’t follow a complete WCPA process occurred at a major U.S. bank during 1998. The bank was planning an upgrade to its Internet home banking service. Prior to the upgrade, the bank used a load-balancing scheme to distribute users to a number of front-end web servers, all of which connected to the bank’s mainframe back-end systems. As part of the upgrade process, the bank was planning to let all users change their login ID and password, thus allowing more individual flexibility than the previous bank-generated login ID scheme. The first time a customer logged in after the upgrade, he would be required to select a new login ID and password or confirm keeping the old login ID. This process was delegated to a separate new web server in order to not interfere with any of the software on the existing load-sharing servers. The bank, of course, did have some production acceptance processes in place and tested the entire web-centric system. On the first day of production, users started to complain of extremely long access times. Unfortunately, the bank had not taken into account the potential bottleneck of funneling all users through the single server while they were queried for potential login ID changes.

While no process can guarantee a new production system will function 100% correctly, web-centric applications require new kinds of planning like that covered in the WCPA. Not only are user loads on the Internet much more unpredictable, web-centric applications typically involve the interaction of a much larger number of software and hardware systems.

http://www.harriskern.com/index.php?m=p&pid=377&authorid=34&aid=97

Software Development Has Always Been Difficult

In the previous article, we defined our own ten commandments for successful software development. In this article, we present some of the reasons why successful software development has always been so difficult. The answer lies in the unique combination of people, processes, and technology that must come together for a software development project to succeed. If you understand the dynamics of this combination, you will start to understand why there has never been, and never will be, any "silver bullet" in software development. This is a necessary starting point in understanding the difficulty surrounding successful software development. Only by learning from the lessons of the past can we hope to avoid making the same mistakes in the future. Let's take a brief look at the history of modern software development and identify some of the difficulties surrounding it.

In the 1970's, development backlogs for corporate IT departments averaged eighteen months or longer. Since IT was usually the only department with the necessary resources to develop software, it owned a monopoly and wasn't often concerned about service levels or prices. In the 1980's, developers struggled with PC's, DOS, and 64K memory limitations. In the 1990's, just as many software developers thought they were starting to understand client-server software, widespread use of the Web set expectations for point-and-click access to any piece of corporate data. Software and network infrastructures struggled to catch up with web technology that made obsolete, literally overnight, many of even the newest client-server software architectures. One thing, however, has remained constant over time: there are no silver bullets in software development.

Successful software development starts with good requirements and good software architecture, long before the first line of code is ever written. Even then, since software is so easy to modify compared to hardware, it is all too easy for users to change the requirements. The impact of changing a single line of code can wreak havoc on a program, especially a poorly designed one. On the people side, you'll need more than just a few good software developers for a successful project. Besides good developers, you'll also need system administrators and other support staff, such as database administrators, in your development organization. As you schedule and budget a project, remember to make programmer skill the largest weighting factor, more so than the language, development tool, OS, and hardware choices you will also have to make. Finally, start planning for testing, production rollout, and maintenance of your software early in the project life cycle, or you'll never catch up. If COBOL programmers in the 1970's had planned for users accessing their programs through a web-based front end in the year 2001, imagine where we would be today!

Software’s Difficult Past

In the 1970's, IT departments running large mainframes controlled most corporate software development projects. The mainframe was the infrastructure for the enterprise-computing environment. COBOL was the language of choice, and any department with adequate budget willing to wait out the average IT department programming backlog of eighteen months could have the application it wanted developed or modified. Software was difficult to develop if for no other reason than that development was so tightly controlled by a small group of people with the necessary skills and access to expensive computers. In reality, much of the perceived unresponsiveness of centralized IT organizations was not due to any lack of software development skills or organizational structure; it was simply a result of the software architectures imposed by COBOL and mainframes.

Mainframe-based enterprise software applications, such as payroll processing, were typically large monolithic programs in which even simple changes were difficult to implement. The complicated structure of such programs usually limited the number of people who could modify them to their original developers. It was cost-prohibitive to have a new developer learn enough about a large mainframe program to modify it. This is painfully obvious today as many organizations return to 1970's-era code and try to make it year 2000 compliant. For this reason, development managers would instead simply wait for the original developers to finish their current task and then assign them to go back and modify their earlier work. COBOL technology was well understood by the developers who programmed in it. Even in the rather simple model of centralized mainframe development organizations, however, people and process issues already carried weight equal to technology issues in their impact on the success of software development.

In the 1980's, inexpensive PC's and the popularity of simpler programming languages such as BASIC led to the start of IT decentralization. Even small departments with no formal IT staff could purchase a PC, figure out the details of DOS configuration files, and get a department member with a technical background to learn BASIC. There was no longer always a requirement to wait eighteen months or more for a centralized IT organization to develop your software program. All of a sudden, large companies had dozens or perhaps even hundreds of "unofficial" IT departments springing up, with no backlog to complete, which could immediately start developing stand-alone applications. The only infrastructure they needed was a PC and plenty of floppy disks for backing up their programs. Software seemed easy for a moment, at least until a program grew larger than 64K or needed more than a single floppy drive's worth of storage. Even the year 2000 was only a far-off concern that crossed few developers' minds. Most PC applications couldn't access mainframe data, but most developers were too concerned with installing the latest OS upgrade to worry. Software development was still difficult; we were just too busy learning about PC's to notice.

One result of the 1980's PC boom on software development was the creation of "islands of automation." While the software program on a stand-alone PC might have been very useful to its user, such programs often led to duplicated work and lower productivity for the organization as a whole. One of the biggest productivity losses suffered by organizations was probably duplicate data entry, because a stand-alone system could not communicate with a centralized system and the same data was required by both. Many organizations still suffer from the "multiple data entry" problem today, and it continues to be a challenge for software developers, who must reconcile different input errors when trying to collect and merge data. This reconciliation, referred to as "data cleansing," is especially prominent in one of the hottest new fields of software, data warehousing. Data cleansing is a well-known problem to anyone trying to build a large data warehouse from multiple sources. Electronically connecting "islands of automation," rather than resolving the problem, simply increases the volume of data that must be combined from various systems. As with many software development-related problems, the answer lies not in simply interconnecting diverse systems, but in doing so with a common software architecture that prevents such problems in the first place.

In the 1990's, corporations started to worry about centralized software development again. Microsoft Windows replaced DOS and brought a graphical user interface to stand-alone applications, along with a whole new level of programming complexity. Business managers realized that stand-alone PC applications might solve the needs of one department but did little to solve enterprise-wide business and information flow problems. At the same time, Unix finally matured to the point that it brought mainframe-level reliability to client-server systems. This helped connect some of those PC islands of automation, but at a cost: MIS directors often found themselves supporting three separate development staffs, one each for mainframes, Unix, and PC's.

In the second half of the 1990's, our kids suddenly started teaching us about the World Wide Web and the Internet. Almost overnight, network infrastructure went from connecting to the laser printer down the hall to downloading multi-megabyte files from a web server halfway across the world. All it takes is a few clicks, and anyone who can figure out how to use a mouse can get stock quotes and Java-enabled stock graphs in a web browser. A few more clicks to register on the site and you can be completing e-commerce transactions to sell or purchase that same stock. With the explosion of the Internet and its inherent ease of use, the same expectations were instantly set for accessing corporate data, upwards of 80% of which is still stored on mainframes. Fewer computer users than ever before understand or even care about software development and its accompanying infrastructure. Software development, however, continues to be very difficult, and mostly for the same reasons.

Year 2000 and Other Similar Problems

Software has always been very difficult to develop and even more difficult to modify. Witness the billions of dollars being spent by corporations worldwide to upgrade or replace approximately 36 million applications so they will function correctly in the year 2000 (Y2K) and beyond. Those unfamiliar with software development struggle to understand why something as simple as the representation of the year, a four-digit number comprehended by most kindergarten children, can wreak such havoc on software. Given software's difficulty in handling the Y2K issue, it is even more amazing that a brand new programming language like Java can help accomplish such feats as bringing color images back from a small toy-like rover on Mars and displaying them on our PC's at home a few minutes later.
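The defect itself fits in a few lines. The sketch below is our own miniature, using the common "windowing" repair with an illustrative pivot of 50; it shows why a two-digit year, harmless for decades, suddenly breaks arithmetic at the century boundary:

    public class Y2kExample {
        // Buggy: two-digit years assume 19xx, so the year 2000 ("00") goes negative.
        static int ageBuggy(int birthYY, int currentYY) {
            return currentYY - birthYY;             // 0 - 65 = -65 in the year 2000
        }

        // Windowed repair: map two-digit years onto a 1950-2049 window.
        static int expand(int yy) {
            return yy >= 50 ? 1900 + yy : 2000 + yy;
        }

        static int ageFixed(int birthYY, int currentYY) {
            return expand(currentYY) - expand(birthYY);
        }

        public static void main(String[] args) {
            System.out.println(ageBuggy(65, 0));    // prints -65: the Y2K bug
            System.out.println(ageFixed(65, 0));    // prints 35: correct in 2000
        }
    }

Multiply that one small assumption by the millions of programs that made it, and the scale of the repair effort becomes easier to understand.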

Many people think the Y2K problem is a one-time occurrence in the history of software. This is not so. Some other similar software problems include:

  • Around the year 2015, the phone system is projected to run out of three-digit area codes, requiring changes to approximately 25 million applications

  • In 1999, European countries switched over to a new universal currency, the euro, for non-cash transactions. By mid-2002, the use of the euro will be expanded to include cash transactions. These changes will impact approximately 10 million applications.

  • Around the year 2075, United States social security numbers, based on a nine-digit number, are expected to run out. Approximately 15 million applications use social security numbers and would be impacted by this.


It is Hard to Structure Development Organizations for Success

There are certainly more wrong ways to structure a software development organization than there are correct ways. No single organizational structure, however, will meet every company's needs. Centralized development organizations are often too big to be responsive to departmental concerns. Decentralized organizations may not have enough staff to provide needed specialty skills. Nevertheless, certain organizational concepts apply no matter how you structure your developers. For instance, integrated software development teams, where software architects, developers, testers, and other specialists are teamed together, almost always face fewer barriers to success than more traditional "silo" organizations. In the latter, software architects, developers, and testers are divided into separate teams that hand the project from one step of its life cycle to the next. There are several problems with this type of organization. First, it is not conducive to iterative development processes. Second, since no group has ownership in the others' products, there is a natural tendency to blame problems on the work of another group. Chapter 6 of our book Software Development focuses on organizing your software development organization for success and provides more information on these and other organizational topics.

It is Hard to Schedule and Budget Correctly

While entire books have been devoted to software development scheduling and budgeting, it remains rare to find a software development project completed under budget and on schedule. One reason is that development managers often set software schedules and budgets early in a project's life cycle with little or no buy-in from the actual developers. Another reason is that many software development projects begin with preset budget or schedule limitations and then try to back into the eventual end product. The best single piece of advice we have is to avoid using historical "magic numbers" from other projects when developing your budget or schedule. Accurate software development scheduling and budgeting requires that you understand the project, the developers, the development environments, and the other factors that will impact your schedule and budget. These issues are addressed in Chapter 12 of our Software Development book.

It is Hard to Select the Right Language and Development Tools

Language choice continues to have a major impact on software development projects, starting with the software architecture. For a given project, the software architecture will look quite different if FORTRAN is chosen as the development language than if Java is chosen. Chapter 14 discusses some of the features and benefits of today's most widely used programming languages. Combined with other information in this book, the reader should be able to quickly narrow the language choice for any single project to one or two languages. Once a language is chosen, you will also need to select one or more development tools. Many development tools start by having you design the user interface and thus focus on that task. Mapping the user interface to the back-end database and adding business logic is left as an exercise for the developer. Other tools start with the database design and use it to derive the user interface and structure the business logic. In both cases, the developer is forced to trade off one design for the other. A better approach, although supported by fewer development tools, is to start by defining your business logic. Once the business logic of an application is designed, it is much easier to derive an appropriate user interface along with the required back-end database structure. Chapter 15 discusses the features to look for in a development tool and describes the various kinds of development tools available.
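Here is a small Java sketch of the business-logic-first approach (the pricing rule and all names are hypothetical): the rule is captured in plain code with no user interface or database dependencies, so either can be derived from it afterward rather than the other way around:

    // Business logic first: a plain interface with no UI or DBMS dependencies.
    interface OrderPricing {
        double priceWithDiscount(double subtotal, int customerYears);
    }

    class StandardPricing implements OrderPricing {
        // Hypothetical rule: 2% discount per year as a customer, capped at 10%.
        public double priceWithDiscount(double subtotal, int customerYears) {
            double discount = Math.min(0.02 * customerYears, 0.10);
            return subtotal * (1.0 - discount);
        }
    }

    // A trivial front end derived from the logic; a GUI or web form could be
    // substituted without touching StandardPricing, and a database schema
    // could likewise be derived from the same interface.
    class ConsoleFrontEnd {
        public static void main(String[] args) {
            OrderPricing pricing = new StandardPricing();
            System.out.printf("Total: %.2f%n", pricing.priceWithDiscount(100.0, 3));
        }
    }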

It is Hard to Select the Right OS and Hardware Platform

In the future, platform-independent languages like Java might make OS and hardware issues irrelevant. Today, however, the OS and hardware platform chosen continue to have an impact on software development. Chapter 16 discusses general requirements for hardware environments, including developer desktops, servers, and production hardware. Also discussed in this chapter are hardware architecture issues, such as SMP vs. MPP (symmetric multiprocessing vs. massively parallel processing), and their impact on software architecture.

It is Hard to Accomplish a Production Rollout

One of the most overlooked reasons for the failure of software projects is the difficulty associated with a successful production rollout. From bringing a new corporate-wide financial system online to upgrading a simple software application, a successful production rollout does not happen without careful planning and execution. Some of the best designed and developed software applications never see production use because some important factor of the production environment was not taken into account. Chapter 13 presents our solution to the production rollout problem, the Web-Centric Production Acceptance process. As with many software development issues, planning for production operations needs to begin early in the software life cycle, as the software architecture is being defined. Many chapters in this book therefore contain information to help you accomplish a successful production rollout of your software project.

http://www.harriskern.com/index.php?m=p&pid=377&authorid=34&aid=28

Rapid Application Development

For the last ten years, many software projects have incorporated "Rapid Application Development" methodologies in an effort to decrease development times. RAD, as it is generally referred to, covers an umbrella of methodologies based on spiral, iterative development techniques. RAD techniques range from the simple use of GUI development tools to quickly build prototypes, to processes incorporating complete, cross-functional business analysis. In January 1997, Cambridge Technology Partners, one of the early practitioners of RAD, adapted its methodology to address the special needs of electronic commerce. Dubbed CoRAD, for customer-oriented RAD, Cambridge’s methodology brings together a unique combination of technical, business, creative, and cognitive disciplines to implement high-impact, successful electronic commerce solutions. If you are even considering building an electronic commerce application, you should read closely to avoid the pitfalls many early electronic commerce sites faced because they concentrated too narrowly on either the technical or the creative side of electronic commerce. Furthermore, you need to realize that e-commerce isn’t just about building a web site – it’s about building a whole new business channel.

CoRAD projects consist of five distinct phases: strategic planning, product definition, product design, product development, and product delivery. CoRAD treats your electronic commerce project as a product because that is how customers will view it. Successful web sites have to be launched, marketed to customers, and provide incentives for customers to try them out, just like traditional consumer products. Customers will compare your web site against your competitors’, judging its usefulness. If the site crashes or takes too long to download, customers will go elsewhere. The role of technical, business, creative, and cognitive specialists in each phase is described below. Before discussing each phase of the CoRAD methodology, however, let’s spend some time describing why it is needed in the first place.

Why Another Methodology?

Cambridge began developing its RAD methodology for client-server solutions in 1991. Over the years, as client-server technologies matured, Cambridge continued to evolve its RAD model. Internet applications, however, including electronic commerce, extranets, on-line communities, interactive marketing, and interactive web services, place new demands on software over and above traditional client-server development. The CoRAD methodology brings together four key disciplines for the rapid development of an Internet application:

  • Technical

  • Business

  • Cognitive

  • Creative


Internet applications and online business have placed new technical demands on software architecture. Often, it is impossible to accurately predict how many people will use an Internet site. For example, when Netscape sized its first web site, it considered the NCSA site from which the original Mosaic web browser was distributed. At the time, NCSA was receiving 1.5 million "hits" per day. Netscape wanted to be able to handle at least three times that load and designed its site for 5 million hits a day. That number was surpassed in Netscape’s first week of operation, and the site now routinely handles 150 million or more hits per day – 100 times the original NCSA reference. While your site may not see this amount of growth, experts say an Internet architecture should be capable of scaling to handle ten times the expected load without reaching an architectural bottleneck. On an Internet site, you will have to consider the scalability of your application software; networking and security software must also scale commensurately with application usage.
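
To see what such a sizing exercise looks like in practice, here is a minimal back-of-the-envelope sketch in Java. The traffic figures and the peak factor are assumptions chosen for illustration, not numbers taken from the Netscape example.

    public class CapacityEstimate {
        public static void main(String[] args) {
            long expectedHitsPerDay = 5000000L; // assumed expected load
            double peakFactor = 3.0;            // assume the peak hour runs at 3x the average rate
            int headroom = 10;                  // design for ten times the expected load

            double avgPerSecond = expectedHitsPerDay / 86400.0;
            double peakPerSecond = avgPerSecond * peakFactor;
            double designTarget = peakPerSecond * headroom;

            System.out.printf("average: %.0f hits/s, peak: %.0f hits/s, design target: %.0f hits/s%n",
                    avgPerSecond, peakPerSecond, designTarget);
        }
    }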

Electronic commerce applications also have a major business impact. They affect how you market your products, how you sell, and how you service your customers. How will you transition your people to work in this new environment, or will you need new people? What new business processes will be needed? Will you need new channels or partners?

Finally, the creative and cognitive skills needed for successful Internet applications are substantially different from those needed for traditional internal client-server applications. No matter how good the technology employed, the business goals of your electronic commerce application cannot be achieved unless the targeted customers use the solution you provide. Experience on the Internet has proven this is not always the case. Today’s electronic commerce customers typically have a choice. They may choose to use your interactive solution or choose not to. They may choose a competitor’s web site, or they may use another traditional channel offered by your own company. Users of traditional client-server applications, by contrast, used them because they had no choice. The Internet changes the relationship between application and content. To design successfully for the web, you need to be able to influence your customers to choose your content from your site. Your electronic commerce application is merely the means for them to do so, rather than an application that forces them to do so. CoRAD’s creative and cognitive disciplines come into play in creating an application with content that customers will choose to view. This is done using a five-step process, as shown in Figure 1-1, The CoRAD Approach.



[Figure 1-1: The CoRAD Approach – the five CoRAD phases, with continuous renewal]

Strategic Planning

The CoRAD methodology starts with a three- to six-week strategic planning workshop where Cambridge helps its clients identify and prioritize the electronic commerce initiatives that will give them the highest return on investment. Cambridge examines the internal factors that could contribute to or impede the success of an e-commerce endeavor – e.g., existing Internet initiatives, technical infrastructure, operational processes, organizations, and staffing. It also conducts an external assessment focused on competitors in the industry and e-commerce best practices from other industries. A strategy and a work plan are developed that include recommendations for the next phase.

Product Definition

The next phase of an engagement is a three- to six-week product definition workshop. This step, much like a traditional software requirements analysis, is the key to avoiding surprises halfway through the project. The product definition stage begins by defining the scope of the project, including identification of the target customers, their needs, and the functions the site must perform. Sample customers are identified and, if possible, engaged directly to understand the design context. This is where cognitive disciplines become an absolutely crucial element of successful web implementations: they help derive a more complete understanding of your customers, including their environment, habits, goals, and conceptual models. In turn, this helps you define an electronic commerce product that is truly intuitive for your specific customers to use. Use of these cognitive experts continues through the product design and development stages.

In parallel with gathering information about the customer’s requirements, the product definition phase examines several technology choices. These choices include everything from development environments to hardware platforms to integration with existing systems. Existing network and security infrastructures are also examined to evaluate whether they are e-commerce ready. This typically involves IT executives, as they are the ones who will ultimately have responsibility for operating the site. The phase results in the selection of a technology framework for the project.

Another parallel task in product definition is an analysis of your business environment. The goal of this analysis is to identify any needed process or organizational changes, such as new marketing plans. Business goals are clarified and critical success factors are established. These may range from specific dollar sales to more subjective qualities such as customer loyalty indexes. A cross-functional consensus on the product is established when possible, along with executive sign-off.

Most importantly, the first prototype of the product is developed. Even at this early stage, you should avoid building user-interface-only prototypes. Prototypes, although they may lack some of the final functionality, should incorporate all the key business logic of the product. Both user interfaces and back-end databases are easier to derive once the business logic has been defined than vice versa.

Product Design

The third phase in Cambridge’s CoRAD methodology is the product design stage. The emphasis during this six- to twelve-week phase is on architecture. The technologists architect an infrastructure for your product that is secure, scalable, and reliable. The creative and cognitive designers build on the work begun in the prototype to architect a powerful and effective user interface for the customer. A good rule of thumb for web applications is that they should be so simple to use that no help screens are required. While this may seem limiting, it is achievable: PeopleSoft, for example, a popular vendor of enterprise resource planning applications, has designed all of its web-deployed interfaces without help screens. Also during this phase, the business consultants architect new processes and organizations to align the functionality in the application with the delivery capabilities of the organization. Customer testing of the prototype also continues during this phase, allowing additional feedback to be gathered and incorporated into the evolving design.

Product Development

During the six- to twenty-week product development phase, the focus turns to implementation. The software is written and the content is refined. At the same time, the organization is prepared to assimilate the changes the product will bring prior to its launch. Larger-scale customer pilots are undertaken and more feedback is gathered. This helps ascertain that the product meets the requirements for the release and that it is fast, reliable, and intuitive. Depending on the application, internal tests may also be useful for stressing the application and finding bugs before public release.

Product Delivery

The final phase in the CoRAD methodology is product delivery. This phase has two parts: preparation and execution. During rollout preparation, a conversion strategy is developed if any conversion from an existing system is being done, and the necessary conversion programs are then written. Rollout execution involves the complete installation of the application in the production environment for the extended user community.

Cambridge’s CoRAD methodology is just one example of a rapid application development methodology. Like many methodologies, CoRAD is tailored to the individual projects where it is applied. Whether you use CoRAD or your own version of a rapid application development methodology, your software project is more likely to be successful if some sort of RAD approach is followed.

http://www.harriskern.com/index.php?m=p&pid=377&authorid=34&aid=40

The Software Life Cycle

Many people view the software development life cycle as the time between when a programmer sits down to write the first line of code and when the completed program successfully compiles and executes. Successful software development organizations have much more complete definitions of a software life cycle. These life cycle definitions start with the early requirements gathering and analysis stages and proceed through ongoing operation and maintenance. The maturity of a software development organization, in fact, is closely related to its understanding of the software life cycle and the underlying processes and procedures required to successfully develop software. The Software Engineering Institute has captured this in a model called the Capability Maturity Model for Software, a model for judging the maturity of the software processes of an organization and for identifying the key practices required to increase the maturity of these processes. This chapter introduces the Capability Maturity Model and then discusses how it applies during the software life cycle, from initial requirements definition to production acceptance.

The Capability Maturity Model for Software

The United States Government, as one of the largest developers and users of software in the world, has always been very concerned with improving software processes. As a result, the Software Engineering Institute (SEI) was created. The Institute is a federally funded research and development center that has been run under contract by Carnegie Mellon University since 1984. Software professionals from government, industry, and academia staff the SEI. The SEI’s web site, at http://www.sei.cmu.edu, contains information about all the activities of the institute. One of the most important contributions to software development to come out of the SEI is its series of capability maturity models, which describe how to measure the maturity of software development organizations. The SEI has defined six capability maturity models:

  1. SW-CMM: A capability maturity model for measuring software development organizations.

  2. P-CMM: The people capability maturity model, for measuring an organization’s maturity in managing its people.

  3. SE-CMM: A capability maturity model for measuring systems engineering organizations.

  4. SA-CMM: A capability maturity model for how an organization acquires software.

  5. IPD-CMM: A capability maturity model for measuring an organization’s ability to perform integrated product development.

  6. CMMI: The capability maturity model integration.


The CMMI is the most recent focus of the SEI’s activities and currently exists in draft form. The project’s objective is to develop a capability maturity model integration product suite that provides industry and government with a set of integrated products to support process and product improvement. The project will serve to preserve government and industry investment in process improvement and to enhance the use of multiple models. Its output will consist of integrated models, assessment methods, and training materials.

The first capability maturity model developed by the SEI was the capability maturity model for software, also known as the SW-CMM. Watts Humphrey and William Sweet first developed it in 1987. The SW-CMM defines five levels of maturity commonly found in software development organizations and describes the processes required to increase maturity at each level. While concepts such as network computing and the Internet were unknown then, the SW-CMM remains a benchmark by which software development organizations are judged. The Software Engineering Institute has updated the model since then, the latest version being SW-CMM version 2 draft C, released in October 1997. The basics of the model, however, have not changed. Now more than ever, as development organizations are forced to work to schedules on "Internet time," process maturity remains critical to software development organizations.

The capability maturity model for software categorizes software development organizations into one of five levels according to the maturity of their processes. A brief description of each of the five maturity levels is given below along with key process areas for each level. Within each process area, a few representative traits of organizations performing at this level are listed. The complete SW-CMM, of course, includes many more details than are possible to cover in this chapter.

Level One: Initial

At this level, software development is ad hoc and no well-defined processes are followed. As such, organizational focus is typically placed on those key developers, or "heroes," who happen to fix the software bug of the day. Organizations at this level of maturity are not likely to be successful at delivering anything but the simplest software projects. An organization operating at this level might expect to take six to nine months to move to level two, assuming a proper management team is in place and a focused effort is made to improve the organization.

Level Two: Repeatable

At this level, there is a focus on project management to bring repeatability to the software development processes. The key process areas expected to be mastered by organizations at this level are listed below.

  • Requirements Management: software requirements are developed prior to application design or coding; at each step in the software design process, requirements are mapped to software functions to ensure all requirements are being met; software testing includes requirements traceability matrices (a minimal traceability sketch appears after this list).

  • Software Project Planning: software projects are scheduled and budgeted accurately; software engineers of the right skill mix and experience are assigned to each project.

  • Software Project Control: software projects are tracked against their plan; proper management oversight is used to identify project risks instead of waiting until delivery dates are missed.

  • Software Acquisition Management: any third party software acquired for use on the project is properly evaluated for training, performance, usability or other limitations it may impose on the project.

  • Software Quality Assurance: each developer is held accountable for software quality; quality metrics have been established and quality is tracked against these metrics.

  • Configuration Management: all developers use a software revision control system for all project code; software baselines are properly established and tracked.


Having these processes and their management in place will typically result in organizations that can deliver small to mid-sized projects in a repeatable fashion. Organizations at this level that do not move toward level three often fail when they undertake larger projects or fail to meet cost, quality, and schedule constraints that are imposed on them. Level Two software groups are fairly common among the IT organizations of large corporations where software development management has been made a priority. Moving to the next level, however, requires a concentrated effort in software process development and might take anywhere from 12 to 24 months for a typical Level Two organization.
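
As a minimal illustration of the requirements traceability matrices mentioned under Requirements Management above, the hypothetical Java sketch below keeps a table mapping each requirement to the tests that verify it. The class and the requirement and test IDs are invented; real projects would normally use a requirements management tool for this.

    import java.util.*;

    public class TraceabilityMatrix {
        // requirement ID -> IDs of the tests that verify it
        private final Map<String, Set<String>> matrix = new HashMap<String, Set<String>>();

        public void addRequirement(String reqId) {
            if (!matrix.containsKey(reqId)) {
                matrix.put(reqId, new HashSet<String>());
            }
        }

        public void trace(String reqId, String testId) {
            addRequirement(reqId);
            matrix.get(reqId).add(testId);
        }

        // An untested requirement is exactly the gap a traceability
        // matrix exists to expose.
        public List<String> untestedRequirements() {
            List<String> gaps = new ArrayList<String>();
            for (Map.Entry<String, Set<String>> e : matrix.entrySet()) {
                if (e.getValue().isEmpty()) gaps.add(e.getKey());
            }
            return gaps;
        }
    }

Calling trace("REQ-12", "TEST-47"), for instance, records that test TEST-47 verifies requirement REQ-12; untestedRequirements() then lists every requirement with no verifying test.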

Level Three: Defined

Organizations at Level Three have moved on from simple project management of software development to focus on the underlying engineering processes. The key process areas to be mastered by organizations at this level are listed below.

  • Organization Process Focus: a process focus is ingrained into the culture of the development organization.

  • Organization Process Definition: the organization translates its process focus into clearly defined processes for all aspects of software development, from initial requirements definition to production acceptance.

  • Organization Training Program: the organization not only trains all software engineers on the software technologies being used, but also on all processes.

  • Integrated Software Management: organizations have implemented the categorization, indexing, search, and retrieval of software components to foster reuse of software as much as possible.

  • Software Product Engineering: individual software products are not simply developed in isolation, but are part of an overall software product engineering process that defines business-wide applications architecture.

  • Project Interface Coordination: individual software projects are not defined in isolation.

  • Peer Reviews: peer reviews of software are conducted at various points during the software life cycle: after design is complete, during coding, and prior to the start of unit test.


Achieving Level Three of the capability maturity model is the goal of most large software development organizations. Levels Four and Five go on to define additional criteria that very few organizations are able to meet.

Level Four: Managed

At this level, the entire software development process is not only defined but is managed in a proactive fashion. The key process areas to be mastered by organizations at this level are listed below.

  • Organization Software Asset Commonality: besides enabling reuse through software management, reuse is built into the design process by following common design standards, interfaces, programming guidelines, and other standards.

  • Organization Process Performance: the organization has established metrics for evaluating the performance of its software processes.

  • Statistical Process Management: statistical methods are used to develop, implement, and track the use and effectiveness of processes (a minimal control-limit sketch appears after this list).


Organizations at Level Four thus not only manage the quality of their software products, but can also manage the quality of their software processes and understand the second-order effect of process quality on product quality.
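
To illustrate the statistical process management practice referenced in the list above, the following sketch computes control-chart limits for a process metric. The defect figures are invented, and the three-sigma rule is a common control-charting convention rather than anything mandated by the SW-CMM.

    public class ControlLimits {
        public static void main(String[] args) {
            // Defects per thousand lines of code in past releases (assumed data).
            double[] samples = {4.1, 3.8, 5.0, 4.4, 3.9, 4.6};

            double mean = 0.0;
            for (double s : samples) mean += s;
            mean /= samples.length;

            double variance = 0.0;
            for (double s : samples) variance += (s - mean) * (s - mean);
            variance /= samples.length;
            double sigma = Math.sqrt(variance);

            // A new release falling outside mean +/- 3*sigma signals that the
            // process itself has changed and should be investigated.
            System.out.printf("UCL = %.2f, LCL = %.2f%n",
                    mean + 3 * sigma, Math.max(0.0, mean - 3 * sigma));
        }
    }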

Level Five: Optimized

This is the "Holy Grail" of software development. In fact, very few large organizations have ever achieved a Level Five score in SEI evaluations. To do so requires a demonstration of continuous process improvement in software development. The key process areas to be mastered by organizations at this level are listed below.

  • Defect Prevention: the organization not only focuses on quality assurance, that is, finding and correcting defects, but on defect prevention.

  • Organization Process and Technology Innovation: the organization continually innovates both in new processes that are developed and in new technology that is applied to the software development process.

  • Organization Improvement Deployment: continuous process improvement in software development is not just a buzzword; it is planned, executed, and tracked against the plan with ongoing feedback loops.


Certainly many organizations have achieved some of these criteria on some projects; achieving Level Five, however, requires universal adherence by all software development groups on every project.

The software processes of the SW-CMM can be applied across the entire software life cycle, from requirements gathering through final testing. The rest of this chapter provides a brief description of different stages of the software development process.

Requirements Analysis and Definition

This is where every software project starts. Requirements serve many purposes for a software development project. For starters, requirements define what the software is supposed to do. The software requirements serve as the basis for all the future design, coding, and testing that will be done on the project. Typically, requirements start out as high-level general statements about the software’s functionality as perceived by the users of the software. Requirements are further refined through performance, look and feel, and other criteria. Each top-level requirement is assigned to one or more subsystems within an application. Subsystem-level requirements are further refined and allocated to individual modules. As this process points out, requirements definition is not just a process that takes place at the start of a development project, but an ongoing one. This is especially true when a spiral development model is used. The ability to manage the definition and tracking of requirements is a key process area required of Level Two organizations by the SW-CMM.
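
The refinement just described – top-level requirements allocated to subsystems and then to modules – can be pictured as a simple tree. The hypothetical Java sketch below illustrates the idea; the class and the requirement wording are invented.

    import java.util.*;

    public class Requirement {
        final String id;
        final String text;
        final List<Requirement> children = new ArrayList<Requirement>();

        Requirement(String id, String text) {
            this.id = id;
            this.text = text;
        }

        // Refine this requirement into a lower-level child requirement.
        Requirement refine(String childId, String childText) {
            Requirement child = new Requirement(childId, childText);
            children.add(child);
            return child;
        }
    }

    // A top-level requirement, refined to a subsystem, then to a module:
    //   Requirement top = new Requirement("REQ-1", "Support online orders");
    //   Requirement sub = top.refine("REQ-1.1", "Order entry subsystem validates orders");
    //   sub.refine("REQ-1.1.3", "Validation module rejects orders missing a customer ID");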

System Architecture and Design

The system design stage is where the software architect plays a crucial role. The architect takes the output from the requirements stage, which states what the software is supposed to do, and defines how it should do it. This is a crucial stage for any software project because even the best programmers will have trouble implementing a poor design.

Test Plan Design

Designing your software test plan is really part of system design, but we have decided to break this out into a separate stage because it is so often overlooked. Many great software designs end in unsuccessful projects because no one thought about how the system would be tested.

Implementation

During the implementation, or coding, stage of a software development project, software engineers complete detailed designs and write code. One of the key processes Level Three organizations perform during the implementation stage is the peer review. During a peer review, a group of developers meets to review a software module, including the code under development, the detailed design, and the requirements of the module. To someone who is not a software developer, getting a half dozen people in a room to review, line by line, hundreds of lines of code in a software module may seem like a large waste of time. In reality, however, code reviews are one of the common processes followed by successful software development organizations of all kinds. Here is the outline for a sample code review:

  1. Attendees: We typically try to have between four and eight code reviewers (peers of the developer) from the same or related application development teams. The developer’s manager attends if possible. We also try to involve a system architect or senior developer as a facilitator for the code review.

  2. Ground Rules: We have found that these ground rules are essential to making a code review productive. We always review the ground rules at the start of each code review.

  • No grading. The purpose of the code review is not to grade or otherwise measure the performance of a developer, but to identify and resolve any issues while the code is still under development. Unless this is perfectly understood and practiced, we often find that a developer’s peers are unwilling to raise issues with the code in front of a manager or a more senior developer for fear of having a negative impact on the developer. This dilutes the entire value of the code review.

  • No incomplete phrases. During the review, incomplete phrases are typically well understood by all, given their context. Two weeks later, however, a developer will typically not be able to understand the meaning of an incomplete phrase noted in the meeting minutes. When everyone speaks and writes in complete sentences, follow-up on the review becomes much simpler.

  • Majority rules. Everyone must agree up front that when issues are raised, they will be discussed until everyone is in agreement. This is a peer review by developers working on the same or closely related applications. If there are any questions as to the validity of the design or of a code segment, this is the time to resolve them. If there is not 100% agreement, then those in the minority must ultimately, if they cannot sway others to their case, side with the majority view.

  • Stay intellectually committed. It takes a lot of effort to follow, line by line, the code of another developer. However, everyone in the code review is making a major investment for the benefit of the entire team. Everyone needs to be 100% committed to the review process and must not duck what they perceive to be issues.


  3. Requirements Review: The developer presents the requirements that have been allocated to the module.

  4. Design Review: The developer presents the detailed design of the module. Visual aids include flow charts, UML diagrams, call graph trees, class hierarchies, and user interface components.

  5. Code Review: At this point, the facilitator takes over and walks everyone through the code. This frees the developer to concentrate on listening to and documenting comments made by the reviewers.


  6. Summary: The facilitator summarizes the findings of the code review. If necessary, specific design modifications are agreed to. If major design changes are required, a follow-up code review is always conducted. If only minor code changes are identified, no follow-up review is typically required.

Validation and Testing

Many people do not understand why software is so prone to errors and simply accept software bugs as a fact of life. The same people who accept their PC rebooting twice a day would, however, be intolerant of picking up the phone and not receiving a dial tone because of a software bug in the telephone company’s switch. One can easily think of other examples of mission-critical software, such as control software for nuclear power plant operations or the software in your car’s anti-lock brake computer. Even state-of-the-art development techniques cannot eliminate 100% of software bugs, but proper software validation and testing can go a long way toward detecting most bugs before software is released to end users. This section discusses some of the common types of testing performed during the software development life cycle.

Unit Testing

Unit testing is the testing of a single software module, usually developed by a single developer. In most organizations, unit testing is the responsibility of the software developer.
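
As a minimal illustration, here is what a unit test for a small module might look like using the JUnit framework. Both classes are invented for this sketch.

    // InterestCalculator.java – the (hypothetical) module under test
    public class InterestCalculator {
        public double simpleInterest(double principal, double rate, int years) {
            return principal * rate * years;
        }
    }

    // InterestCalculatorTest.java – the developer's unit test
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class InterestCalculatorTest {
        @Test
        public void computesOneYearOfSimpleInterest() {
            // 1000 at 5% for one year should yield 50 of interest.
            InterestCalculator calc = new InterestCalculator();
            assertEquals(50.0, calc.simpleInterest(1000.0, 0.05, 1), 0.001);
        }

        @Test
        public void zeroPrincipalYieldsZeroInterest() {
            assertEquals(0.0, new InterestCalculator().simpleInterest(0.0, 0.05, 1), 0.001);
        }
    }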

Subsystem and System Testing

An application typically is made up of one or more software modules. System testing tests all the software modules in an application. On larger applications, subsystem testing may precede it. One focus of subsystem and system level testing is to test all the interactions between modules.

Black-Box and White-Box Testing

Two different approaches to developing software tests are black-box and white-box test design methods. Black-box test design treats the software system as a "black-box," and doesn’t explicitly use knowledge of the internal software structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box," or software component, and focus specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural." Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but rather simply discouraged. In practice, it hasn’t proven useful to use a single test design method. Many organizations use a mixture of different methods so that they aren’t hindered by the limitations of a particular approach.

It is important to understand that these methods are used during the test design phase. The influence of the test design method used is hard to see once the tests are implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, simply because the developer designing a unit test is typically the same person who wrote the code. Subsystem and system level tests are more likely to use behavioral test design methods.
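
The contrast is easiest to see side by side. In the hypothetical Java sketch below, the first test is designed purely from the stated requirement (behavioral), while the second is designed with the code in view, choosing inputs so that every branch is executed (structural). The Range class is invented for illustration.

    // Range.java – the (hypothetical) component under test
    public class Range {
        public static int clamp(int value, int lo, int hi) {
            if (value < lo) return lo; // lower-bound branch
            if (value > hi) return hi; // upper-bound branch
            return value;              // fall-through branch
        }
    }

    // RangeTest.java – behavioral and structural test cases side by side
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class RangeTest {
        // Behavioral (black-box): derived only from the requirement
        // "clamp limits a value to the range [lo, hi]".
        @Test
        public void valuesAreLimitedToTheStatedRange() {
            assertEquals(5, Range.clamp(5, 0, 10));
            assertEquals(0, Range.clamp(-3, 0, 10));
            assertEquals(10, Range.clamp(42, 0, 10));
        }

        // Structural (white-box): inputs chosen, with the code in view,
        // so that each branch and a boundary case are executed.
        @Test
        public void everyBranchAndBoundaryIsExecuted() {
            assertEquals(0, Range.clamp(-1, 0, 10));  // lower-bound branch
            assertEquals(10, Range.clamp(11, 0, 10)); // upper-bound branch
            assertEquals(0, Range.clamp(0, 0, 10));   // boundary value, not clamped
        }
    }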

Alpha and Beta Testing

Alpha testing refers to internal company testing of an application prior to its external release. Beta testing is typically performed by external users prior to the official release of the software. In both cases, users exercise the application software for its intended purpose and report back any bugs they encounter. Many companies have found both alpha and beta testing extremely useful because they allow much more extensive testing than could ever be accomplished solely by the in-house development and quality assurance teams.

Several years ago, Sun Microsystems began an extensive alpha test program of its Solaris operating system. Literally hundreds of engineers throughout the company who were unrelated to the actual operating system development installed early builds of Solaris, often six months or more before customer release. This testing was so successful that the Solaris group went the next step and began installing weekly alpha release updates on their main file server machine, providing production service to over 400 engineers. It certainly doesn’t take long to get bugs fixed in that environment. The Solaris alpha test program was so successful that many other software product groups within Sun now alpha test their software on internal engineering desktops and servers.

Beta testing may also have the advantage of providing users early access to new features and thus building or maintaining a customer base in rapidly changing markets such as the Internet. Netscape has certainly been one of the most widespread sponsors of beta test programs. Some of Netscape’s web browser products have undergone half a dozen or more beta releases with millions of users. Before widespread use of the Internet, such feedback was impossible simply because of distribution issues. Netscape can beta test six releases of its software, one per week, with a million or more Internet downloads for each release. Just producing and distributing a million CD-ROMs would take six weeks for most software vendors.

Stress Testing

The purpose of stress testing is to ascertain that application software will meet its design requirements even under full performance loads and under all possible input conditions. Because software performance is so closely tied to system hardware, stress testing is most often accomplished on the actual production hardware, or a replica thereof.
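
As a minimal illustration, the hypothetical Java harness below drives an operation from many threads at once and reports the worst response time observed; a real stress test would typically use dedicated load-testing tools and, as noted above, run against production-class hardware.

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;

    public class StressTest {
        static void doOperation() { /* call the application under test here */ }

        public static void main(String[] args) throws InterruptedException {
            final int threads = 50;
            final int callsPerThread = 1000;
            final AtomicLong worstNanos = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int t = 0; t < threads; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < callsPerThread; i++) {
                            long start = System.nanoTime();
                            doOperation();
                            long elapsed = System.nanoTime() - start;
                            // Record the worst response time seen so far.
                            long worst;
                            do {
                                worst = worstNanos.get();
                            } while (elapsed > worst
                                    && !worstNanos.compareAndSet(worst, elapsed));
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            System.out.println("worst case: " + (worstNanos.get() / 1000000) + " ms");
        }
    }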

Production Acceptance

After all other testing has been completed, the final step is for the software to undergo production acceptance. This is where operations personnel integrate the software into the production baseline and perform final regression testing. The main purpose of these tests is to document the correct operation of the software in its production rollout. A production acceptance process tailored to client-server systems is presented in the book Building The New Enterprise.

http://www.harriskern.com/index.php?m=p&pid=377&authorid=34&aid=41