Sunday, March 11, 2007

The Software Life Cycle

Many people view the software development life cycle as the time between when a programmer sits down to write the first line of code and when the completed program successfully compiles and executes. Successful software development organizations have much more complete definitions of the software life cycle. These life cycle definitions start with early requirements gathering and analysis stages and proceed through ongoing operation and maintenance. The maturity of a software development organization, in fact, is closely related to its understanding of the software life cycle and the underlying processes and procedures required to successfully develop software. The Software Engineering Institute has captured this in a model called the Capability Maturity Model for Software: a model for judging the maturity of an organization's software processes and for identifying the key practices required to increase the maturity of those processes. This chapter introduces the Capability Maturity Model and then discusses how it applies during the software life cycle, from initial requirements definition to production acceptance.

The Capability Maturity Model for Software

The United States Government, as one of the largest developers and users of software in the world, has always been very concerned with improving software processes. As a result, the Software Engineering Institute (SEI) was created. The Institute is a federally funded research and development center, which has been run under contract by Carnegie Mellon University since 1984. Software professionals from government, industry, and academia staff the SEI. The SEI's web site, at http://www.sei.cmu.edu, contains information about all the activities of the institute. One of the most important contributions to software development to come out of the SEI is its series of capability maturity models, which describe how to measure the maturity of software development organizations. The SEI has defined six capability maturity models:

  1. SW-CMM: A capability maturity model for measuring software development organizations.

  2. P-CMM: The people capability maturity model, for measuring an organization’s maturity in managing its people.

  3. SE-CMM: A capability maturity model for measuring system-engineering organizations.

  4. SA-CMM: A capability maturity model for how an organization acquires software.

  5. IPD-CMM: A capability maturity model for measuring an organization’s ability to perform integrated product development.

  6. CMMI: The capability maturity model integration.


The CMMI is the most recent focus of the SEI’s activities and currently exists in draft form. This project’s objective is to develop a capability maturity model integrated product suite that provides industry and government with a set of integrated products to support process and product improvement. This project will serve to preserve government and industry investment in process improvement and enhance the use of multiple models. The project’s output will consist of integrated models, assessment methods, and training materials.

The first capability maturity model developed by the SEI was the capability maturity model for software, also known as the SW-CMM. Watts Humphrey and William Sweet first developed it in 1987. The SW-CMM defines five levels of maturity commonly found in software development organizations and describes the processes required to increase maturity at each level. While concepts such as network computing and the Internet were far from mainstream then, the SW-CMM remains a benchmark by which software development organizations are judged. The Software Engineering Institute has updated the model since then, the latest version being the SW-CMM version 2 draft C, released in October of 1997. The basics of the model, however, have not changed. Now more than ever, as development organizations are forced to work to schedules on "Internet time," process maturity remains critical to software development organizations.

The capability maturity model for software categorizes software development organizations into one of five levels according to the maturity of their processes. A brief description of each of the five maturity levels is given below along with key process areas for each level. Within each process area, a few representative traits of organizations performing at this level are listed. The complete SW-CMM, of course, includes many more details than are possible to cover in this chapter.

Level One: Initial

At this level, software development is ad hoc and no well-defined processes are followed. As such, organizational focus is typically placed on those key developers, or "heroes," who happen to fix the software bug of the day. Organizations at this level of maturity are not likely to be successful at delivering anything but the simplest software projects. An organization operating at this level might expect to take six to nine months to move to Level Two, assuming a proper management team is in place and makes a focused effort to improve the organization.

Level Two: Repeatable

At this level, there is a focus on project management to bring repeatability to the software development processes. The key process areas expected to be mastered by organizations at this level are listed below.

  • Requirements Management: software requirements are developed prior to application design or coding; at each step in the software design process, requirements are mapped to software functions to ensure all requirements are being met; software testing includes requirements traceability matrices.

  • Software Project Planning: software projects are scheduled and budgeted accurately; software engineers of the right skill mix and experience are assigned to each project.

  • Software Project Control: software projects are tracked against their plan; proper management oversight is used to identify project risks instead of waiting until delivery dates are missed.

  • Software Acquisition Management: any third-party software acquired for use on the project is properly evaluated for training, performance, usability, or other limitations it may impose on the project.

  • Software Quality Assurance: each developer is held accountable for software quality; quality metrics have been established and quality is tracked against these metrics.

  • Configuration Management: All developers use a software revision control system for all project code; software baselines are properly established and tracked.
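
The traceability matrix mentioned under Requirements Management can be sketched as a simple coverage check. The sketch below is purely illustrative; the requirement IDs, module names, and test names are hypothetical:

```python
# A hypothetical requirements traceability matrix: every requirement
# must map to at least one design element and at least one test.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

trace = {
    "REQ-001": {"design": ["login_module"], "tests": ["test_login"]},
    "REQ-002": {"design": ["report_module"], "tests": ["test_report"]},
    "REQ-003": {"design": [], "tests": []},  # not yet covered
}

def uncovered(requirements, trace):
    """Return the requirements lacking a design element or a test."""
    return [req for req in requirements
            if not trace.get(req, {}).get("design")
            or not trace.get(req, {}).get("tests")]

print(uncovered(requirements, trace))  # -> ['REQ-003']
```

A check like this, run as part of the build, gives a Level Two organization an objective way to flag requirements that have not yet been designed for or tested.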


Having these processes and their management in place will typically result in an organization that can deliver small to mid-sized projects in a repeatable fashion. Organizations at this level that do not move toward Level Three often fail when they undertake larger projects, or fail to meet the cost, quality, and schedule constraints imposed on them. Level Two software groups are fairly common among the IT organizations of large corporations where software development management has been made a priority. Moving to the next level, however, requires a concentrated effort in software process development and might take anywhere from 12 to 24 months for a typical Level Two organization.

Level Three: Defined

Organizations at Level Three have moved on from simple project management of software development to focus on the underlying engineering processes. The key process areas to be mastered by organizations at this level are listed below.

  • Organization Process Focus: a process focus is ingrained into the culture of the development organization.

  • Organization Process Definition: the organization translates its process focus into the clear definition of processes for all aspects of the software development process from initial requirements definition to production acceptance.

  • Organization Training Program: the organization not only trains all software engineers on the software technologies being used, but also on all processes.

  • Integrated Software Management: organizations have implemented the categorization, indexing, search, and retrieval of software components to foster reuse of software as much as possible.

  • Software Product Engineering: individual software products are not simply developed in isolation, but are part of an overall software product engineering process that defines business-wide applications architecture.

  • Project Interface Coordination: individual software projects are not defined in isolation.

  • Peer Reviews: peer reviews of software are conducted at various points during the software life cycle: after design is complete, during coding, and prior to the start of unit test.


Achieving Level Three of the capability maturity model is the goal of most large software development organizations. Levels Four and Five define additional criteria that very few organizations are able to meet.

Level Four: Managed

At this level, the entire software development process is not only defined but is managed in a proactive fashion. The key process areas to be mastered by organizations at this level are listed below.

  • Organization Software Asset Commonality: besides enabling reuse through software management, reuse is built into the design process by following common design standards, interfaces, programming guidelines, and other standards.

  • Organization Process Performance: the organization has established metrics for evaluating the performance of its software processes.

  • Statistical Process Management: statistical methods are used and managed in the development, implementation, and tracking of process use and effectiveness.


Organizations at Level Four thus not only manage the quality of their software products, but can also manage the quality of their software processes and understand the second-order effect of process quality on product quality.

Level Five: Optimized

This is the "Holy Grail" of software development. In fact, very few large organizations have ever achieved a Level Five score in SEI evaluations. To do so requires a demonstration of continuous process improvement in software development. The key process areas to be mastered by organizations at this level are listed below.

  • Defect Prevention: the organization not only focuses on quality assurance, that is, finding and correcting defects, but on defect prevention.

  • Organization Process and Technology Innovation: the organization continually innovates both in new processes that are developed and in new technology that is applied to the software development process.

  • Organization Improvement Deployment: continuous process improvement in software development is not just a buzzword but is planned, executed on, and tracked against the plan with ongoing feedback loops.


Certainly many organizations have achieved some of these criteria on some projects; however, achieving Level Five requires universal adherence by all software development groups on every project.

The software processes of the SW-CMM can be applied across the entire software life cycle, from requirements gathering through final testing. The rest of this chapter provides a brief description of different stages of the software development process.

Requirements Analysis and Definition

This is where every software project starts. Requirements serve many purposes for a software development project. For starters, requirements define what the software is supposed to do. The software requirements serve as the basis for all the future design, coding, and testing that will be done on the project. Typically, requirements start out as high-level general statements about the software's functionality as perceived by the users of the software. Requirements are further defined through performance, look and feel, and other criteria. Each top-level requirement is assigned to one or more subsystems within an application. Subsystem-level requirements are further refined and allocated to individual modules. As this progression shows, requirements definition is not an activity that takes place only at the start of a development project, but an ongoing process. This is especially true when a spiral development model is used. The ability to manage the definition and tracking of requirements is a key process area required of Level Two organizations by the SW-CMM.
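The allocation just described, from top-level requirements to subsystems to individual modules, can be sketched as a small data structure. All of the requirement, subsystem, and module names below are hypothetical:

```python
# Illustrative allocation of top-level requirements to subsystems
# and then to individual modules; all names are hypothetical.
allocation = {
    "REQ-100 User authentication": {
        "security subsystem": ["password_check", "session_manager"],
    },
    "REQ-200 Monthly reporting": {
        "reporting subsystem": ["report_builder"],
        "storage subsystem": ["archive_writer"],
    },
}

def modules_for(requirement):
    """Flatten the subsystem allocation into the full list of modules
    that must together satisfy a given top-level requirement."""
    return [module
            for modules in allocation[requirement].values()
            for module in modules]

print(modules_for("REQ-200 Monthly reporting"))
# -> ['report_builder', 'archive_writer']
```

Keeping the allocation explicit like this makes it straightforward to answer, at any point in the life cycle, which modules a given requirement depends on.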

System Architecture and Design

The system design stage is where the software architect plays a crucial role. The architect takes the output from the requirements stage, which states what the software is supposed to do, and defines how it should do it. This is a crucial stage for any software project because even the best programmers will have trouble implementing a poor design.

Test Plan Design

Designing your software test plan is really part of system design, but we have decided to break this out into a separate stage because it is so often overlooked. Many great software designs end in unsuccessful projects because no one thought about how the system would be tested.

Implementation

During the implementation, or coding, stage of a software development project, software engineers complete detailed designs and write code. One of the key processes Level Three organizations follow during the implementation stage is peer review. During a peer review, a group of developers meets to review a software module, including the code under development, the detailed design, and the requirements of the module. To someone who is not a software developer, getting a half dozen people in a room to review, line by line, hundreds of lines of code in a software module may seem like a large waste of time. In reality, however, code reviews are one of the common processes followed by successful software development organizations of all kinds. Here is the outline for a sample code review:

  1. Attendees: We typically try to have between four to eight code reviewers (peers of the developer) from the same or related application development teams. The developer’s manager attends if possible. We typically try to involve a system architect or senior developer as a facilitator for the code review.

  2. Ground Rules: We have found that these ground rules are essential to making a code review productive. We always review the ground rules at the start of each code review.

  • No grading. The purpose of the code review is not to grade or otherwise measure the performance of a developer, but to identify and resolve any issues while the code is still under development. Unless this is perfectly understood and practiced, we often find a developer's peers are unwilling to raise issues with code in front of a manager or more senior developer for fear of having a negative impact on the developer. This dilutes the entire value of the code review.

  • No incomplete phrases. During the review, incomplete phrases are typically well understood by all given their context. However, two weeks later, a developer will typically not be able to understand the meaning of an incomplete phrase noted in the meeting minutes. When everyone speaks and writes in complete sentences, it makes follow-up on the review much simpler.

  • Majority rules. Everyone must agree up front that when issues are raised, they will be discussed until everyone is in agreement. This is a peer review by developers working on the same or closely related applications. If there are any questions as to the validity of the design or of a code segment, this is the time to resolve them. If there is not 100% agreement, then those in the minority must ultimately, if they cannot sway others to their case, side with the majority view.

  • Stay intellectually committed. It takes a lot of effort to follow, line by line, the code of another developer. However, everyone in the code review is making a major investment in the review for the benefit of the entire team. Everyone needs to be 100% committed to the review process and not duck what they perceive are issues.


  3. Requirements Review: The developer presents the requirements that have been allocated to the module.

  4. Design Review: The developer presents the detailed design of the module. Visual aids used include flow charts, UML diagrams, call graph trees, class hierarchies, and user interface components.

  5. Code Review: At this point, the facilitator takes over and walks everyone through the code. This frees the developer to concentrate on listening to and documenting comments made by the reviewers.


Summary

The facilitator summarizes the findings of the code review. If necessary, specific design modifications are agreed to. If major design changes are required, a follow-up code review will always be conducted. If only minor code changes are identified, no follow-up review will typically be required.

Validation and Testing

Many people do not understand why software is so prone to errors and in fact simply accept software bugs as a fact of life. The same people who accept their PC rebooting twice a day would, however, be intolerant of picking up the phone and not receiving a dial tone because of a software bug in the telephone company's switch. One can easily think of other examples of mission-critical software, such as control software for nuclear power plant operations or the software in your car's anti-lock brake computer. Even state-of-the-art development techniques cannot eliminate 100% of software bugs, but proper software validation and testing can go a long way toward detecting most bugs before software is released to end users. This section discusses some of the common types of testing performed during the software development life cycle.

Unit Testing

Unit testing is the testing of a single software module, usually developed by a single developer. In most organizations, unit testing is the responsibility of the software developer.
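A minimal unit test, written here with Python's built-in unittest framework, might look like the following. The module under test, a hypothetical discount calculator, is inlined for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Module under test (hypothetical): return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        # Bad inputs should fail loudly, not return a wrong answer.
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note that the tests cover not only the normal case but also a boundary value (zero discount) and an invalid input, which is where unit-level bugs most often hide.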

Subsystem and System Testing

An application typically is made up of one or more software modules. System testing tests all software modules in an application. On larger applications, subsystem testing may precede this. One focus of subsystem and system level testing is to test all the interactions between modules.

Black-Box and White-Box Testing

Two different approaches to developing software tests are black-box and white-box test design methods. Black-box test design treats the software system as a "black-box," and doesn’t explicitly use knowledge of the internal software structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box," or software component, and focus specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural." Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but rather simply discouraged. In practice, it hasn’t proven useful to use a single test design method. Many organizations use a mixture of different methods so that they aren’t hindered by the limitations of a particular approach.

It is important to understand that these methods are used during the test design phase. The influence of the test design method used is hard to see once the tests are implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, simply because the developer designing a unit test is typically the same person who wrote the code. Subsystem and system level tests are more likely to use behavioral test design methods.
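The distinction can be made concrete with a trivial function. In the sketch below (the function is hypothetical), the black-box cases are derived only from the stated specification, while the white-box cases are chosen by reading the implementation so that every branch executes:

```python
def classify(n):
    """Spec: return 'negative', 'zero', or 'positive' for an integer n."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# Black-box (behavioral) cases: chosen from the specification alone,
# using representative values and the boundary at zero.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"

# White-box (structural) cases: chosen by reading the code so that
# every branch of the if/elif/else executes at least once.
assert classify(-1) == "negative"   # n < 0 branch
assert classify(0) == "zero"        # n == 0 branch
assert classify(1) == "positive"    # else branch

print("all behavioral and structural cases passed")
```

For a function this small the two case sets coincide, but in real code structural cases routinely reach error-handling branches that a specification-only test set would miss.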

Alpha and Beta Testing

Alpha testing refers to internal company testing of an application prior to its external release. Beta testing is typically done by external users prior to the official release of the software. In both cases, users exercise the application software for its intended purpose and report back any bugs they encounter. Many companies have found both alpha and beta testing extremely useful because they allow much more extensive testing than could ever be accomplished solely by the in-house development and quality assurance teams.

Several years ago, Sun Microsystems began an extensive alpha test program for its Solaris operating system. Literally hundreds of engineers throughout the company who were unrelated to the actual operating system development installed early builds of Solaris, often six months or more before customer release. This testing was so successful that the Solaris group went the next step and began installing weekly alpha release updates on their main file server machine, providing production service to over 400 engineers. It certainly doesn't take long to get bugs fixed in that environment. The Solaris alpha test program proved so successful that many other software product groups within Sun are now alpha testing their software on internal engineering desktops and servers.

Beta testing may also have the advantage of providing users early access to new features, thus building or maintaining a customer base in rapidly changing markets such as the Internet. Netscape has certainly been one of the most widespread sponsors of beta test programs. Some of Netscape's web browser products have undergone half a dozen or more beta releases, with millions of users. Before widespread use of the Internet, such feedback was impossible simply because of distribution issues. Netscape can beta test six releases, one per week, of its software with a million or more Internet downloads for each release. Just producing and distributing a million CD-ROMs would take six weeks for most software vendors.

Stress Testing

The purpose of stress testing is to ascertain that application software will meet its design requirements even under full performance loads and under all possible input conditions. Because software performance is so closely tied to system hardware, stress testing most often is accomplished on the actual production hardware, or a replica thereof.
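A stress test can be sketched as driving a component with a high volume of inputs, including boundary conditions, while checking both correctness and elapsed time. The component and the load figures below are purely illustrative:

```python
import time

def lookup(table, key):
    """Hypothetical component under stress: a simple table lookup."""
    return table.get(key)

# Build a large table, then hammer it with the full range of inputs,
# including keys that fall outside the table (boundary conditions).
table = {i: i * i for i in range(100_000)}

count = 0
start = time.perf_counter()
for key in range(-1_000, 101_000):           # includes out-of-range keys
    result = lookup(table, key)
    if 0 <= key < 100_000:
        assert result == key * key           # correctness under load
    else:
        assert result is None                # misses must fail cleanly
    count += 1
elapsed = time.perf_counter() - start

print(f"processed {count} lookups in {elapsed:.3f} seconds")
```

A real stress test would run against the production hardware and the production data volumes, but the principle is the same: verify that results stay correct, not just that the system stays up, while the load is applied.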

Production Acceptance

After all other testing has been completed, the final step is for software to undergo production acceptance. This is where operations personnel will integrate the software into a production baseline and perform final regression testing. The main purpose of these tests is to document the correct operation of the software in its production rollout. A production acceptance process tailored to client-server systems is presented in the book titled Building The New Enterprise.

http://www.harriskern.com/index.php?m=p&pid=377&authorid=34&aid=41