Wednesday, January 31, 2007

Software Quality at Top Speed

Software products exhibit two general kinds of quality, which affect software schedules in different ways. The first kind, and the one people usually think of when they refer to "software quality," is a low defect rate.

Some project managers try to shorten their schedules by reducing the time spent on quality-assurance practices such as design and code reviews. Some shortchange the upstream activities of requirements analysis and design. Others--running late--try to make up time by compressing the testing schedule, which is vulnerable to reduction since it’s the critical-path item at the end of the schedule.

These are some of the worst decisions a person who wants to maximize development speed can make. In software, higher quality (in the form of lower defect rates) and reduced development time go hand in hand. Figure 1 illustrates the relationship between defect rate and development time.


Figure 1. Relationship between defect rate and development time. As a rule, the projects that achieve the lowest defect rates also achieve the shortest schedules.

A few organizations have achieved extremely low defect rates (shown on the far right of the curve), and when you reach that point, further reducing the number of defects will tend to increase the amount of development time. This applies to life-critical systems such as the life-support systems on the space shuttle. It doesn’t apply to the rest of us.

The rest of us would do well to learn from a discovery made by IBM in the 1970s: Products with the lowest defect counts also have the shortest schedules (Jones 1991). Many organizations currently develop software with defect levels that give them longer schedules than necessary. After surveying about 4000 software projects, Capers Jones reported that poor quality was one of the most common reasons for schedule overruns (1994). He also reported that poor quality is implicated in close to half of all canceled projects. A Software Engineering Institute survey found that more than 60 percent of organizations assessed suffered from inadequate quality assurance (Kitson and Masters 1993). On the curve in Figure 1, those organizations are to the left of the 95-percent-removal line.

That 95-percent-removal line--or some point in its neighborhood--is significant because that level of pre-release defect removal appears to be the point at which projects achieve the shortest schedules, least effort, and highest levels of user satisfaction (Jones 1991). If you’re finding more than 5 percent of your defects after your product has been released, you’re vulnerable to the problems associated with low quality, and you’re probably taking longer to develop your software than you need to.

Design Shortcuts

Projects that are in a hurry are particularly vulnerable to shortchanging quality-assurance at the individual-developer level. Any developer who has been pushed to ship a product quickly knows how much pressure there can be to cut corners because "we’re only three weeks from shipping." For example, rather than writing a separate, completely clean printing module, you might piggyback printing onto the screen-display module. You know that’s a bad design, that it isn’t extendible or maintainable, but you don’t have time to do it right. You’re being pressured to get the product done, so you feel compelled to take the shortcut.

Two months later, the product still hasn’t shipped, and those cut corners come back to haunt you. You find that users are unhappy with printing, and the only way to satisfy their requests is to significantly extend the printing functionality. Unfortunately, in the two months since you piggybacked printing onto the screen-display module, the printing functionality and the screen-display functionality have become thoroughly intertwined. Redesigning printing and separating it from the screen display is now a tough, time-consuming, error-prone operation.

The upshot is that a shortcut that was supposed to save time actually wasted time in the following ways:

  • The original time spent designing and implementing the printing hack was completely wasted because most of that code will be thrown away. The time spent unit-testing and debugging the printing-hack code was also wasted.
  • Additional time must be spent to strip the printing-specific code out of the display module.
  • Additional testing and debugging time must be spent to ensure that the modified display code still works after the printing code has been stripped out.
  • The new printing module, which should have been designed as an integral part of the system, has to be designed onto and around the existing system, which was not designed with it in mind.

All this happens when the only necessary cost--if the right decision had been made at the right time--was to design and implement one version of the printing module. And now you still have to do that anyway.

This example is not uncommon. Up to four times the normal number of defects are reported for released products that were developed under excessive schedule pressure [2]. Projects that are in schedule trouble often become obsessed with working harder rather than working smarter. Attention to quality is seen as a luxury. The result is that projects often work dumber, which gets them into even deeper schedule trouble.

Error-Prone Modules

One aspect of quality assurance that's particularly important to rapid development is the existence of error-prone modules, which are modules that are responsible for a disproportionate number of defects. Barry Boehm reported that 20 percent of the modules in a program are typically responsible for 80 percent of the errors [5]. On its IMS project, IBM found that 57 percent of the errors clumped into 7 percent of the modules [1].

Modules with such high defect rates are more expensive and time-consuming to deliver than less error-prone modules. Normal modules cost about $500 to $1000 per function point to develop; error-prone modules cost about $2000 to $4000 per function point [2]. Error-prone modules tend to be more complex than other modules in the system, less structured, and unusually large. They often were developed under excessive schedule pressure and were not fully tested.

If development speed is important, make identification and redesign of error-prone modules a priority. Once a module’s error rate hits about 10 defects per thousand lines of code, review it to determine whether it should be redesigned or reimplemented. If it’s poorly structured, excessively complex, or excessively long, redesign the module and reimplement it from the ground up. You’ll shorten the schedule and improve the quality of your product at the same time.
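The arithmetic behind that threshold is simple; here is a minimal sketch, with the struct and function names invented purely for illustration:

    /* Toy sketch: flag modules whose defect density crosses ~10 per KLOC. */
    struct module_stats {
        const char *name;       /* module identifier                   */
        int defects_found;      /* defects attributed to this module   */
        int lines_of_code;      /* module size in source lines (> 0)   */
    };

    int needs_redesign_review(const struct module_stats *m)
    {
        double per_kloc = 1000.0 * m->defects_found / m->lines_of_code;
        return per_kloc >= 10.0;    /* ~10 defects/KLOC: candidate for redesign */
    }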

Quality-Assurance and Development Speed

If you can prevent defects or detect and remove them early, you can realize a significant schedule benefit. Studies have found that reworking defective requirements, design, and code typically consumes 40 to 50 percent of the total cost of software development (Jones 1986). As a rule of thumb, every hour you spend on defect prevention will reduce your repair time by three to ten hours [2]. In the worst case, reworking a software requirements problem once the software is in operation typically costs 50 to 200 times what it would take to rework the problem in the requirements stage (Boehm and Papaccio 1988). It's easy to understand why. A 1-sentence requirement can expand into 5 pages of design diagrams, then into 500 lines of code, 15 pages of user documentation, and a few dozen test cases. It's cheaper to correct an error in that 1-sentence requirement at requirements time than it is after design, code, user documentation, and test cases have been written to it.

Figure 2 illustrates the way that defects tend to become more expensive the longer they stay in a program.


Figure 2. The longer a defect remains undetected, the more expensive it becomes to correct.

The savings potential from early defect detection is huge--about 60 percent of all defects usually exist by design time (Gilb 1988), and you should try to eliminate them by design time. A decision early in a project not to focus on defect detection amounts to a decision to postpone defect detection and correction until later in the project when they will be much more expensive and time-consuming. That’s not a rational decision when time is at a premium.

Quality-Assurance Practices

The various quality-assurance measures have different effects on development speed. Here is a summary.

Testing

The most common quality-assurance practice is undoubtedly execution testing, finding errors by executing a program and seeing what happens. The two basic kinds of execution testing are unit tests, in which the developer checks his or her own code to verify that it works correctly, and system tests, in which an independent tester checks to see whether the system operates as expected.

Testing is the black sheep of QA practices as far as development speed is concerned. It can certainly be done so clumsily that it slows down the development schedule, but most often its effect on the schedule is only indirect. Testing discovers that the product’s quality is too low for it to be released, and the product has to be delayed until it can be improved. Testing thus becomes the messenger that delivers bad news.

The best way to leverage testing from a rapid-development viewpoint is to plan ahead for bad news--set up testing so that if there’s bad news to deliver, testing will deliver it as early as possible.

Technical Reviews

Technical reviews include all the kinds of reviews that are used to detect defects in requirements, design, code, test cases, and other work products. Reviews vary in level of formality and effectiveness, and they play a more critical role in maximizing development speed than testing does.

The least formal and most common kind of review is the walkthrough, which is any meeting at which two or more developers review technical work with the purpose of improving its quality. Walkthroughs are useful to rapid development because you can use them to detect defects earlier than you can with testing.

Code reading is a somewhat more formal review process than a walkthrough but nominally applies only to code. In code reading, the author of the code hands out source listings to two or more reviewers. The reviewers read the code and report any errors to the code’s author. A study at NASA’s Software Engineering Laboratory found that code reading detected about twice as many defects per hour of effort as testing (Card 1987). That suggests that, on a rapid-development project, some combination of code reading and testing would be more schedule-effective than testing alone.

Inspections are the most formal kind of technical review, and they have been found to be extremely effective in detecting defects throughout a project. Developers are trained in the use of inspection techniques and play specific roles during the inspection process. The "moderator" hands out the material to be inspected before the inspection meeting. The "reviewers" examine the material before the meeting and use checklists to stimulate their reviews. During the inspection meeting, the "author" paraphrases the material, the reviewers identify errors, and the "scribe" records the errors. After the meeting, the moderator produces an inspection report that describes each defect and indicates what will be done about it. Throughout the inspection process you gather data about defects, hours spent correcting defects, and hours spent on inspections so that you can analyze the effectiveness of your software-development process and improve it.

Because they can be used early in the development cycle, inspections have been found to produce net schedule savings of 10 to 30 percent (Gilb and Graham 1993). One study of large programs even found that each hour spent on inspections avoided an average of 33 hours of maintenance, and inspections were up to 20 times more efficient than testing (Russell 1991).

Comment on Technical Reviews

Technical reviews are a useful and important supplement to testing. Reviews find defects earlier, which saves time and is good for the schedule. They are more cost-effective on a per-defect-found basis because they detect both the symptom of the defect and the underlying cause of the defect at the same time. Testing detects only the symptom of the defect; the developer still has to isolate the cause by debugging. Reviews also tend to find a higher percentage of defects (Jones 1986). And reviews serve as a time when developers share their knowledge of best practices with each other, which increases their rapid-development capability over time. Technical reviews are thus a critical component of any development effort that aims to achieve the shortest possible schedule.

The Other Kind of Quality

I mentioned at the beginning of the article that there were two kinds of quality. The other kind of quality includes all of the other characteristics that you think of when you think of a high-quality software product--usability, efficiency, robustness, maintainability, portability, and so on. Unlike the low-defect kind of quality, attention to this kind of quality tends to lengthen the development schedule.

Summary

When a software product has too many defects, developers spend more time fixing the software than they spent writing it in the first place. Most organizations have found that an important key to achieving the shortest possible schedule is focusing their development processes so that they do their work right the first time. "If you don't have time to do the job right," the old chestnut goes, "where will you find the time to do it over?"

Friday, January 26, 2007

Improve software development department efficiency with VMware

Companies developing software for themselves or for customers know how complex, expensive, and time-consuming releasing a new product can be.

Development team members have to work on code independently and share it when needed, building and rebuilding it on the same or different environments, while QA engineers have to test it across multiple configurations and scenarios, all the way to the final deployment in production, where several factors are out of their control and can undermine stability and reliability.

IT managers have traditionally had little or no ability to mitigate these technical issues and smooth the release path. But server virtualization changed everything, quickly becoming one of the first choices for speeding up the process.

In this article we'll see how deploying a VMware virtual infrastructure can eliminate most of the problems a development department encounters, raising its capacity to deliver new products to unexpected levels.

VMware is not the only company offering virtualization solutions, but its wide range of products and its ability to seamlessly migrate a virtual machine from one platform to another make it the best choice for this scenario.


Typical problems


The very first issue in software development is environment integrity.

For many software engineers it is normal to keep their development tools on their everyday workstation. When a new project starts, the large majority of them simply begin coding in the same environment they use for browsing the web, reading email, watching videos or presentations, and often even gaming.

Such a system should be perfectly clean, like the fresh installation on which customers are expected to host the product being developed. Unfortunately, this rarely happens.

Daily use for so many tasks implies a lot of installed software, which injects libraries, requires elevated permissions, modifies environment variables, and so on, to say nothing of possible malware infections.
Code can easily run, or fail to run, because of these variations, and moving it to different machines, whose operating systems have been altered in different ways, will produce different results, leading to complex and time-consuming troubleshooting.


Another issue that frequently slows down complex projects is environment portability.

Software architects, engineers, and product managers have to verify how a product is evolving throughout the development process, and they have to collaborate on it to improve or debug its routines.

Crowding many people around the same monitor, or granting remote access to the development environment, is highly impractical. On the other hand, moving code from one operating system to another is not simple either.

The difficulty is not only environment integrity, which cannot be guaranteed in any way, but also the simple matter of delivering all the parts needed to run a piece of code.

Any application based on database access, for example, is very hard to share if the developer, as often happens, has installed the database instance on his own machine or relies on a remote instance on a dedicated network segment that not everybody can reach.

But even without a database, the development team may need libraries or, in the case of multi-tiered applications, other services that don't move along with the code.


A third typical problem is the lack of physical resources.

Even when software engineers are savvy enough not to rely on locally installed services, they need remote services, which have to be deployed, configured, and announced to the team.

This takes time, but above all it assumes machine availability, which cannot be taken for granted.

Similarly, hardware configurations often have to be modified during development, for example by adding more memory or another network interface card.
Adding new components can be even more complicated in big companies where hardware is acquired in bulk from OEMs.

And the number of machines needed for software development is not limited to the ones hosting required services; it also depends on how many systems the company wants to support.

QA engineers have to try the same code on several versions of the same operating system to verify that it works as expected in all possible scenarios: with different permission levels in several Windows editions, or with different library availability in several Linux distributions.

A dedicated machine is expected to be available for each of these, and things become very complex when multiple new applications are in development concurrently.

It's worth noting that the shortage of hardware, once solved for developers and testers, can soon turn into a problem for IT managers.
Once the big project is finished, they are left with a number of computers that will sit unused until the next one and may become obsolete in the meantime, forcing them to replace some or all of the machines.


Even with enough resources, software engineers and testing staff still have to face the most frightening risk: running out of time.

A long series of logistical operations can severely slow down development, distracting coders from their focus.

For example, taking environment integrity seriously means always debugging code on a fresh installation, which is impractical unless developers reinstall the whole operating system from scratch every time.

But even without that level of rigor, when the code being developed includes an installer, it is critical to work on a virgin OS.

The time shortage affects testers too: they not only need multiple physical machines for every platform on which the code has to be certified, but also need to reinstall the same operating system several times, perhaps because the last installation failed or simply because they have to test different languages, service packs, or concurrent applications.

Fundamentally, every test case should be conducted in a dedicated environment, and this implies notable effort. Even if disk-imaging backup solutions are in place, they are of limited help, given their restore times and their dependency on the underlying hardware, which can change and require saving a whole new set of images.


Improving the development phase


The most popular and oldest product from VMware is also the most important in the whole solution chain: Workstation.

Workstation offers a wide range of features able to address the largest part of the problems mentioned above.
The probability that a software engineer tries it and still sticks with traditional tools is near zero.

The first problem Workstation addresses is environment integrity: developers and testers can count on the Snapshot feature, which allows saving a virtual machine's state and reverting to it whenever needed.

A savvy use of the snapshot feature is for developers to install a brand-new operating system in the virtual machine, fit it with all the tools they need to produce new code, and finally save a snapshot.
This yields a pristine environment completely isolated from the everyday workstation.

In this scenario a developer can browse the internet, read his email, and even play games on his own machine without jeopardizing the development workspace.

For maximum security the virtual machine can be completely disconnected from the real network card, so there is no chance the workspace will be compromised by a remote attack or virus infection, and no hassle of continuously patching the operating system or installing an antivirus to maintain security.

But even if the workspace cannot be compromised, it can still become overloaded with libraries, utilities, and other things during a project.

In this case our software engineer can revert to a clean state as soon as the project is closed, simply by recalling the first snapshot, within seconds.


The snapshot feature is quite evolved, and it's an essential tool for QA as well.
When compatibility testing is in progress, testers need to ensure the new application works correctly with several different products, whether from the same company or from third parties.

The Snapshot Manager permits saving multiple states of the same virtual machine, allowing testers to install one application after another without reinstalling the whole environment every time.

For example, in a scenario where a new prototype application has to be tested for compatibility with Microsoft Office and several service packs, the best approach is to save a first snapshot of the freshly installed operating system, another after the non-service-packed version of Office has been installed, and still another once the service pack is in place.

At this point testers are able to proceed with installing our code.
If something goes wrong, or if they want to test the same installation with a different service pack, they just revert to the snapshot taken before the service pack installation.

Trying to do the same thing without virtualization or a lot of different physical machines would take hours or days.


This process can be further improved thanks to two other Workstation features: multiple snapshot branches and linked clones.

The multiple snapshot branches feature permits setting an already-taken snapshot as the starting image and taking new snapshots from there.

Linked clones act in a similar way but detach the new snapshots from the original virtual machine image's location.

Both features are particularly useful for QA, since they don't copy what already exists in the original virtual machine but only reference it, storing just what is done from that point forward.

To clarify, we can reconsider the previous example: a tester who needs to verify a new application's compatibility against multiple Microsoft Office versions and their service packs can begin by creating a snapshot of the brand-new operating system.

After installing Office 2003 on top of this snapshot, the QA engineer can return to the snapshot he took of the fresh OS and set it as the starting point for a second branch, where Office 2000 is installed.

At this point he can take a new snapshot on each branch before installing Service Pack 2 on Office 2003 and Service Pack 1 on Office 2000.

Our application can then be tested against all of these environments, while the Snapshot Manager makes it easy to create and discard snapshots and linked clones.


Snapshots and linked clones not only drastically reduce the time needed to prepare a new development or testing environment but also address another critical problem we already discussed: the lack of physical resources.

With them, QA engineers don't need a new machine for every single environment to test, just enough disk space to contain several branches of snapshots and clones.

Another great feature of Workstation is Teaming, useful both in development and testing.

Teaming allows logically linking together multiple virtual machines and launching them at the same time.

It also allows users to create virtual network connections between them with simulated link speeds.

So, for example, a software engineer developing a multi-tier application can check how his code performs when used over a modem or a broadband connection, and a usability tester can verify how much bandwidth a networked application needs to run without giving a bad user experience.



Addressing portability
As already said, the biggest benefit of VMware software is the ability to seamlessly share virtual machines between its different products.

This not only permits developers to move their work without modifications and show it to teammates or product managers, but also allows applications to be ported to other virtualization facilities, where the code will be tested or even put into production.

As simply as copying a folder, the virtual machine containing the new software can be moved from Workstation to Server, the enterprise virtualization product that VMware offers for free.

There it can be tested for compatibility and usability, and its performance can be verified under stress tests.

After the QA phase, the same virtual machine can be moved again into the VMware product aimed at datacenter deployment, ESX Server, where it is put into production.

Anytime a problem appears, the virtual machine can be moved back and forth between these platforms to patch errors or test new configurations.

And if a customer wants an onsite demonstration of the new product, the same virtual machine can be moved once again, placed in VMware's free virtualization product for desktops, Player, and distributed to sales managers worldwide.


Conclusion
Virtualization is not revolutionizing just datacenter planning and deployment. It's also touching the most critical part of the IT industry: application development.

VMware saw this before its competitors and is creating a whole ecosystem that improves software engineers' efficiency by cutting away unproductive time.
As side benefits, companies fully adopting virtualization gain safer environments and flexible ways to reach new customers. But this is just the beginning: today all of these operations are done manually, but in the near future VMware will provide automation for some of them with a new product called Virtual Lab Manager, expected before the end of this year.

This will greatly simplify the control and optimization of software production phases in big companies where multiple departments adopt different development tools but need to leverage virtual machine images in mandatory testing and production virtual infrastructures.

Automation is around the corner, and with it a new dimension in the software development lifecycle.

Avoiding the most common software development goofs

Finding defects in code has been the bane of developers' existence since the earliest days of computer programming. Maurice Wilkes, the British computer scientist best known for his work on the EDSAC, said in 1949:

"As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs."

This keen observation from more than 50 years ago still resonates with anyone tasked with developing software. But why do we make mistakes? And what are some of the ways we can avoid making mistakes, to diminish the task of debugging software after it is written? In this paper, we use our years of experience developing and commercializing static source code analysis tools to help answer these questions.

During this decade, we have analyzed hundreds of millions of lines of code, seen programming errors from the very simple to the most complicated, and heard first-hand accounts of the bugs that killed development organizations. While it is impossible to relate all of the relevant and interesting anecdotes in this type of discussion, our aim is to convey a general impression of the mistakes that keep developers and managers awake at night.

As a means for communicating our experience, we first discuss the cost of mistakes in software development and hypothesize as to why developers make mistakes. Then, in an attempt to help developers identify their most common mistakes as they write their code, we examine some of the categories of these mistakes, both from a pure source code perspective as well as from a higher level programming methodology perspective. Finally, we make the case for automatic technology to help weed out these mistakes earlier in the development process.

The cost of software defects

It is a well known fact that software defects are a very costly problem. According to a study commissioned by the National Institute of Standards and Technology (NIST), software errors are costing the U.S. economy an estimated $59.5 billion annually. The study also reports that more than one-third of these costs could be eliminated by an improved testing infrastructure that enables earlier and more effective identification and removal of software defects.

Drilling into the problem further, it has been shown that the cost of discovering a defect increases drastically the later it is found in the development lifecycle. A defect found during the coding phase of a project is very inexpensive to fix. This makes sense intuitively since the developer responsible for the defect is working on the questionable code, has all of the context of that code in his head at the time the defect is discovered, and as such, can make a reasonable fix in a small amount of time.

When that same defect slips into the QA or system-integration phase of the development lifecycle, it can become an order of magnitude more expensive to address. Now the defect must be discovered as the program is being executed, and the person who discovered it must reproduce the defect and communicate the errant behavior to the development organization.

Then the development organization must determine which part of the code was likely to cause that particular fault, assign the appropriate developer or developers to investigate further to determine the root cause in the faulty code, then finally fix the defect without introducing other problems into the code.

Another order of magnitude in cost is added if a defect slips past the QA organization and reaches the field. Not only does an organization have all of the above issues in removing that defect, it must now also bear the cost of reproducing the issue through its support organization, not to mention the cost of the bad public perception surrounding a "buggy product."

Software defects end up costing organizations millions of dollars every year. But the problem is not that the cost of discovering a defect in the field is high; it is that organizations are discovering defects in the field at all. The distribution of defects across the development lifecycle (from coding to testing to release) is what determines the actual cost of those defects to the organization.

If two organizations each have one thousand defects in their code and the first finds them all in the coding phase but the second discovers them all after the product has been released, the first organization is in much better shape financially. Therefore, we must focus on discovering more defects earlier in the process.

Why do developers make mistakes?
If it's clear to everyone that software defects are an expensive problem (and we assume that it is), why do developers make mistakes? Or rather, why do they make as many mistakes as they do, to the point where NIST performs studies showing that they cost businesses sixty billion dollars a year? Based on our experience developing software, interacting with thousands of software developers, and seeing the types of bugs that come out of the software development process, we view the following as the top reasons developers make mistakes.

Ignorance. The reader might think from this header that we are taking a shot at the educational system that trains our software developers, but that is not the thrust of this argument. Developers are ignorant of the systems that they develop. A single developer can keep thousands, maybe even tens of thousands of lines of code in his or her head for the purpose of perfectly understanding how different pieces of the code interact.

However, today's systems run to hundreds of thousands, if not millions or tens of millions, of lines of code. A single developer working on that type of system will be calling functions or methods of which he is quite ignorant. The pieces of the code that he is forced to interact with may have been written years ago by someone who is no longer available to explain their intent or nuance. So the developer does his best, quickly reading through the implementation or the comments (potentially incorrect!) provided when he needs to interact with another piece of the system. And this leads to errors.

Stress. We mentioned above that the developer does his best to "quickly" read through the implementation of a piece of code that he must interact with. If you are a developer, you probably didn't think twice about the phrasing of that sentence (nor did we when writing it) because that is the reality of any software development process. Managers put pressure on developers to generate code quickly; deadlines come fast, and that leads to hasty coding, which leads to mistakes. Often these mistakes are not in the most common case of the code (since that is well tested), but in edge cases. When time is of the essence and developers are stressed, the parts of the code less traversed suffer. Yet these defects can be just as costly as mainstream bugs.

Boredom. Not all coding is rocket science. In fact, a good number of coding projects, once the design is complete, would be classified by most developers as "boring." Of course, if a developer is bored, he is much less likely to produce good code than if he is excited about his work.

Pounding out those last few cases in a switch statement, when the first few took dozens of minutes, can be just mind-numbing enough to switch off the brain and produce the simplest of mistakes. Boredom also leads to shortcuts; if you are bored with a given task, you are probably looking for ways to eliminate your boredom as quickly as possible. And unfortunately, a shortcut in coding often translates into a defect in the code.

Human Frailties. Certainly the above points play into this last point about the very nature of human beings. Humans are creative and intelligent and able to solve difficult problems through reason. However, we are not robots. We are not so good at repeating the exact same operation thousands of times without some variance. If you doubt this, pull out a piece of paper and sign your name ten times.

Signing your name is probably something you've done thousands of times in your life, yet each time is a little different. This variance means that even if a developer understood every interface in a system perfectly, had all the time in the world, and were programming the most interesting project computer science has ever known, he would still make a mistake in the translation from the design in his head to the code that he writes. That is just a fact of life.

Common goofs
When discussing common programming defects, we have (at least) two choices for categorization. We can either categorize based on root cause in the code (e.g., null pointer dereference, failure to unlock after acquiring a lock, buffer overrun, etc.) or based on a higher level reason for the mistake (e.g., improper error handling, typo, copy and paste, etc.).

Having a hybrid of these two categorizations is difficult in this format, so we choose the latter because we feel it gives a better sense for why a particular defect is introduced. However, we acknowledge that this higher level categorization is very subjective. We're not here to forge new territory in defect classification, but rather want to shed light on why we believe these defects are made.

The examples below are admittedly toy fragments meant only to highlight the particular issue in the discussion. Bear in mind that these problems do manifest themselves over hundreds or thousands of lines of code within and across functions and methods in real systems.

Ignorance. If you were to ask most developers, "should you return a pointer into data on the stack?" they would answer a resounding no. However, from time to time, we see the following type of code in programs:
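A toy sketch of the pattern (the get_name name comes from the discussion below; the buffer size and name string are illustrative):

    #include <string.h>

    char *get_name(void)
    {
        char name[64];                          /* lives in this function's stack frame */
        strncpy(name, "J. Developer", sizeof(name));
        name[sizeof(name) - 1] = '\0';
        return name;                            /* BUG: pointer into a dead stack frame */
    }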

The function looks simple enough: it is putting a name into a character array and then returning that array, presumably for the caller to use. However, once the stack is popped upon return from this function, that pointer is no longer a reliable piece of data. Once other functions are called, the data containing that name will likely be overwritten. To make this function work correctly, we should allocate the memory dynamically so that it persists past the end of the function:
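A corrected sketch under the same assumptions:

    #include <stdlib.h>
    #include <string.h>

    char *get_name(void)
    {
        char *name = malloc(64);                /* heap memory outlives this call */
        strncpy(name, "J. Developer", 64);
        name[63] = '\0';
        return name;                            /* caller must eventually free() this */
    }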

Now the caller of the function can trust that the pointer points to valid data for as long as that memory is not freed. Imagine a potential caller:
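For instance (calls_get_name is named in the text; the printf is illustrative):

    #include <stdio.h>

    /* assumes the heap-allocating get_name above */
    void calls_get_name(void)
    {
        char *name = get_name();
        printf("name: %s\n", name);             /* prints fine; note nothing frees name */
    }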

This code will work just fine in printing the name. However, notice that with the change to the get_name function, we have now introduced a resource leak in calls_get_name! If the developer implementing calls_get_name does not realize that the implementation changed, there is a defect due to the developer's ignorance of that changed interface.

Copy and paste. Now suppose our developer is tasked with writing a function similar to get_name, but one that instead duplicates the name of an incoming parameter. The developer would likely copy and paste the original code. Copying and pasting code is a common practice and often stems from developer boredom (since the task is not seen as interesting) or from schedule stress that leaves insufficient time to code a function from scratch. So, the developer copies get_name as follows:
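That is, an exact copy of the heap-allocating sketch above:

    char *get_name(void)
    {
        char *name = malloc(64);
        strncpy(name, "J. Developer", 64);
        name[63] = '\0';
        return name;
    }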

And then he changes the name and adds a parameter:
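Something like the following (the dup_name function name and its parameter are invented for this sketch; temp_name comes from the discussion below):

    char *dup_name(const char *name_to_copy)
    {
        char *temp_name = malloc(64);
        strncpy(temp_name, name_to_copy, 64);
        temp_name[63] = '\0';
        return temp_name;
    }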

Then he just changes the part that does the strncpy to call strdup since he knows that's a good way to duplicate a string:
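Continuing the sketch:

    char *dup_name(const char *name_to_copy)
    {
        char *temp_name = malloc(64);        /* still here from the pasted code */
        temp_name = strdup(name_to_copy);    /* strdup allocates its own copy of the string */
        return temp_name;                    /* the 64 bytes from malloc are now leaked */
    }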

And now the function works as desired. However, the astute reader notices that in the midst of the copy and pasting, the developer has left the original call to malloc in the code, thus causing a resource leak on the very next line when he reassigns the temp_name pointer:
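The offending pair of lines, isolated from the sketch above:

    char *temp_name = malloc(64);        /* this allocation is orphaned...           */
    temp_name = strdup(name_to_copy);    /* ...the moment the pointer is overwritten */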

Error handling. One of the most common problems we see in code is in the handling of error conditions. Programmers tend to program for the common case, leaving the outliers, from a path-execution standpoint, largely untested. However, these outliers are exactly the scenarios that the end user is likely to hit as the load becomes high or the application has been running for days or weeks at a time. Examine the following piece of code, pulled directly from Linux:
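The original listing is not reproduced here; the condensed sketch below, with argument lists abbreviated and the lock variable name assumed, shows the shape of the problem described:

    static int vortex_setup(/* ... */)      /* function name invented for the sketch */
    {
        int err;

        spin_lock_irq(&vortex->lock);            /* lock acquired up front */
        /* ... setup work ... */
        err = vortex_adb_allocroute(/* ... */);
        if (err < 0)
            return err;                          /* BUG: early return, lock still held */
        /* ... more setup ... */
        spin_unlock_irq(&vortex->lock);          /* only the common path unlocks */
        return 0;
    }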

Here a lock is being acquired near the beginning of the function with the call to spin_lock_irq. On the common case, right before the end of the function, the corresponding unlock function is called. However, notice that there is an error case in the middle of the function depending on the return value of vortex_adb_allocroute. If this function fails, the calling function returns without unlocking the acquired lock! This can lead to deadlock, causing the kernel to hang. In this particular case, failing to handle the error case correctly led to a concurrency-type problem, but this bad behavior can also lead to other coding defects like resource leaks.

Off by ones. Similar to the case of returning pointers from the stack, if you were to ask a developer "How do you index arrays in C/C++ code?" most would appropriately respond that arrays are 0-indexed and the maximum value that should be used to index into an array is the size of the array minus one. However, we still see this type of code more often than we'd like:
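A toy version of the pattern (something_very_important is named below; the function name, array size, and types are illustrative):

    int *find_record(void)
    {
        int *ptr = something_very_important();   /* documented never to return NULL */
        int array[10];
        int i;

        for (i = 0; i <= 10; i++)    /* BUG: i == 10 writes one element past the end... */
            array[i] = 0;            /* ...and, stack layout permitting, zeroes ptr */

        return ptr;                  /* caller may now dereference a null pointer */
    }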

In this case, depending on how the stack is arranged, it is likely that ptr will be overwritten by the buffer overrun caused by the off-by-one error in indexing the array. What's worse, this pointer is now null, and as such, the caller of the function may inadvertently dereference a null pointer. If you were to catch this type of problem in testing, it would seem very strange that the pointer is null if you know that the something_very_important function can never return a null pointer!

Typos. From time to time, a developer simply omits some punctuation. Unlike in English, where the reader can likely "figure out what you meant," a computer will blindly execute code as is, causing the functionality to be incorrect. In the example below, the developer clearly meant to break if the element found in the array was greater than 100. But because he forgot the { and }, the break occurs on the first iteration of the loop:
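In sketch form (the variable and helper names are illustrative):

    for (i = 0; i < length; i++) {
        if (values[i] > 100)
            record_match(values[i]);    /* only this line is governed by the if */
            break;                      /* BUG: unconditional; runs on the first iteration */
    }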

And finally, the following typo was discovered in the X.org code that controls root access in a certain piece of the system:
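A sketch of the reported condition (getuid and geteuid are the standard Unix identity calls):

    if (getuid() == 0 || geteuid != 0) {
        /* BUG: "geteuid" with no parentheses is the address of the function,  */
        /* which is never 0, so the second test is always true and the check   */
        /* passes for any user.                                                */
        /* ... privileged operations ... */
    }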

Notice that the second "call" to geteuid does not have parentheses following the identifier. As such, it is treated as a function pointer and its value is compared against 0. This test always succeeds, allowing a normal user of the system to gain root access when this piece of code is triggered. Yes, this piece of code is in a real system that tens of thousands of users are probably still using.

Avoiding the goofs

Unfortunately, we do not have a silver bullet for guaranteeing that developers will not make some of the common mistakes that lead to very expensive defects.

There's no way to make code less complex or to give developers more time to write it. However, there is technology that helps alleviate the problem of human frailties in the software development process. Research in static source code analysis has made tremendous strides in the past decade; gone are the false-positive-ridden days of Lint and other lightweight code-scanning tools.

All of the goofs listed in this paper are easily detected by state-of-the-art static source code analysis technology. Compared with testing tools (e.g., Purify), static source code analysis has the benefit of analyzing all of the paths through a given code base and is not tied to the particular test suite of the application. Compared with manual code audits or developer debugging, static source code analysis technology isn't hindered by the human frailties discussed previously.

A static analysis tool is not ignorant of the numerous interfaces in the code, since it can analyze the whole program, keeping billions of contexts in memory simultaneously. Nor does static source code analysis ever suffer from stress, boredom, or typos: computers are very good at performing the same operation thousands of times in a row without variance. If you want to avoid the most common development goofs, augment your development process to include the latest technology to help find defects earlier in the lifecycle.

Thursday, January 25, 2007

How is Software Development guided?

The software development process is almost invariably guided by some systematic software development method (SDM). Referred to by a number of terms, including process models, development guidelines, and systems development life cycle models (SDLC), software development methods nevertheless generally include the same development phases:

  • The existing system is evaluated and its deficiencies identified, usually by interviewing system users and support personnel.
  • The new system requirements are defined. In particular, the deficiencies in the existing system must be addressed with specific proposals for improvement.
  • The proposed system is designed. Plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues.
  • The new system is developed. The new components and programs must be obtained and installed. Users of the system must be trained in its use, and all aspects of performance must be tested. If necessary, adjustments must be made at this stage.
  • The system is put into use. This can be done in various ways: the new system can be phased in, according to application or location, with the old system gradually replaced, or, where it is more cost-effective, the old system can be shut down and the new system implemented all at once.
  • Once the new system has been up and running for a while, it should be exhaustively evaluated. Maintenance must be kept up rigorously at all times, and users of the system should be kept up-to-date concerning the latest modifications and procedures.

System Development Life Cycle Models:

The systems development life cycle model was developed as a structured approach to information system development that guides all the processes involved from an initial feasibility study through to maintenance of the finished application. SDLC models take a variety of approaches to development.

System Development Life Cycle models include:

The waterfall model:

This is the classic SDLC model, with a linear and sequential method that has goals for each development phase. The waterfall model simplifies task scheduling, because there are no iterative or overlapping steps. One drawback of the waterfall is that it does not allow for much revision.

Rapid application development (RAD):

This model is based on the concept that better products can be developed more quickly by: using workshops or focus groups to gather system requirements; prototyping and reiterative testing of designs; rigid adherence to schedule; and less formality of team communications such as reviews.

Joint application development (JAD):

This model involves the client or end user in the design and development of an application, through a series of collaborative workshops called JAD sessions.

The prototyping model:

In this model, a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved from which the complete system or product can now be developed.

Synchronize-and-stabilize:

This model involves teams working in parallel on individual application modules, frequently synchronizing their code with that of other teams and stabilizing code frequently throughout the development process.

The Spiral model:

This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects.

How do you choose the “right” programming language for your project?

For most projects, the right language is easy to choose.

Your company may have standardized on a particular development environment and language (and you may have been hired because you were already familiar with the language). Or you may be updating or enhancing an existing program; it's almost always best to use the same language the existing program is written in. In some cases, however, someone will need to select the best language, or, since "best" may be somewhat arguable, at least an appropriate one. In such cases, you or your team of developers may need to know several languages for different purposes.

Some general rules of thumb about programming languages are as follows:

  • Perl or a similar scripting language is most suitable for small tasks and for acting as glue between other, larger programs.
  • Visual Basic is most suitable for relatively novice programmers and relatively simple programs.
  • Java, C++, or comparable languages such as Python and Tcl are most suitable for larger applications using object orientation as a design model.
  • C is most suitable for programs where efficiency and performance are the primary concern.
  • The appropriate assembler language is most suitable where the program is relatively short and high performance is critical.

Where constraints permit, some programmers may favor one object-oriented language over another (for example, Java, C++, Python, or Tcl). A programmer with skills in C is likely to prefer C++, which combines the procedural and other concepts and syntax of C with object-oriented concepts.

What are some trends regarding the future of software development?

Blogs - A growing number of big-name software developers are finding that they can make better software applications if they share information with potential customers from the start and incorporate customer feedback into development decisions. While developers of games software have used this method for years, business software makers are now also catching on and using blogs (Web logs) as an important part of the development process.

Big-name support for independent software vendors (ISVs) - Big players like Microsoft, IBM, and Sun have recognized that they cannot fill every niche industry’s software demands, so they have begun to actively seek partnerships with small ISVs, in hopes that by encouraging ISVs to focus on niche vertical industry applications, everyone will benefit.

Component-based development - In this approach, software is developed in modules that are linked dynamically to construct a complete application. Charles Simonyi (creator of the WYSIWYG editor) believes that eventually, software development will become so modular that even lay-people will be able to assemble components effectively to create customized software applications.

Continued improvements in refactoring tools - Eric Raymond, a leading philosopher of program development, maintains that the concept of refactoring is consistent with the get-something-working-now-and-perfect-it-later approach long familiar to UNIX and open source programmers. The idea is also embodied in the approach known as Extreme Programming. As software applications become larger, better refactoring tools will be required to maintain code bases and diagnose bugs.

Outsourcing - Using this approach, software companies hire employees around the world to take advantage of time zone and labor/cost differences. Proponents say that in effect, software development teams now have a 24-hour work day, and are able to provide fast turn-around. Detractors say that outsourcing parts of a project leads to sloppy coding and only works if there is a high degree of coordination regarding modularized tasks, and above-average communication within the team.

How has the open source development process influenced software development in general?

Open source software is developed collaboratively; source code is freely available for use and modification. The open source movement arose because some developers came to believe that competition among vendors leads to inferior products and that the best approach to development is a collaborative one.

The OSI (Open Source Initiative) is an industry body that certifies products as open source if they conform to a number of rules:

  • The software being distributed must be redistributable to anyone else without restriction.
  • The source code must be made available (so that the receiving party can improve or modify it).
  • The license can require improved versions of the software to carry a different name or version from the original software.

Despite its emphasis on the collaborative process, the biggest influence that open source has had on software development in general may be through competition: by competing with proprietary software products, open source products force vendors to work that much harder to hold their market share in the face of viable open source alternatives.

Tuesday, January 23, 2007

Must You Outsource Software Development? Here Are Some Reasons Why You Should

How do you secure your rights over intellectual property? When a software development project is finished, who retains ownership of both the software’s source code and the intellectual property rights?

These are valid concerns that should be addressed before the outsourcing of your software development needs commences. Developing software on the company's own premises is undeniably safer, but lower-cost alternatives nowadays are enticing.

The programs and equipment needed to build software are expensive, local skilled labor may be scarce, and time may not be on your side. Offshore outsourcing of your software development needs to India will give you the in-house advantage of a virtual team of your own choosing, cost-effectively, and you will be introduced to a broad spectrum of competencies more advanced than your own.

Software development is a complex job, highly technical and fast-changing. To fully adapt, even Fortune 500 companies outsource their software needs offshore. In a business environment that changes quickly, where competition is keener than it seems, speed in accomplishing tasks and quality of inputs are vital factors that cannot be compromised.

When is offshore outsourcing of software development to India beneficial?


A big project entails hiring more manpower and key support personnel. The regular hiring and selection process takes time, and training uses up a portion of the company's resources.

Outsourcing your software development needs eliminates this rigorous and expensive process of staff selection and deployment.

Routine tasks that draw time away from the competencies that bring in revenue can be outsourced. As higher priority is accorded to core functions, your company gains more market advantage over its competitors while maintaining its technological upper hand through the expertise of your outsourced software developers.

Offshore outsourcing of software development favors small companies that have just started and must scrimp wherever cost is concerned. Outsourcing to India is definitely a far less expensive option.

But how secure?

Indian companies are known for their high standards of work ethics. Ownership of rights over intellectual property, software code, and software development specs remains with the client, and Indian outsourcing service providers assure the strictest measures to safeguard these rights.

Next time you outsource your software development needs, think of India. It's one decision you can never go wrong with.

Offshore Software Development: Save Money - Get Quality & Enjoy Success

In recent years, offshore software development has emerged as a successful business strategy adopted by giant corporations worldwide. Many renowned foreign web hosting companies prefer outsourcing their software development so they can focus on their core competencies. It not only enhances their business but also gets them exclusive, cost-effective solutions for their business requirements. Though you may already be aware of the key benefits of offshore software development, here I would like to give you some in-depth information on those benefits.

Immense Benefits To Startup Company

Offshore software development offers huge benefits to companies just starting up, as it helps them leverage their IT budget and resources without hiring a team of programmers to carry out their projects. These companies can save 40 to 50 percent simply by handing their software development projects over to one of the preeminent offshore companies.

Huge Resources

Offshore companies are always enriched with huge resources to carry out successful software development processes. Outsourcing companies are always on the winning edge, as they gain access to a huge resource pool in their effort to enhance their business.

Premium Quality

Offshore outsourcing has spread around the world like wildfire, further boosting the ongoing competition among software development companies in developing countries. At this stage, every company is armed with its best services to deliver superior quality and assured reliability of software at competitive prices.

Proven Software Development Processes


Rising competence among offshore development companies has given rise to customer-centric, mature & standardized development processes designed to minimize project risk & development time.

Large Pool of Technical Expertise

Offshore development companies are backed by solid teams of programmers & developers, relieving you of the stress of hiring new employees for your software. Because the team is focused on software development, you can concentrate on your core competencies and achieve your goals.

Low Cost Services

With intense competition in the IT industry, offshore companies are proving themselves by providing the highest-quality software at the lowest possible prices. Outsourcers are taking full advantage of this competition, which is ushering in an era of offshore outsourcing.

Post-Delivery Maintenance

To prove their efficiency & build sustainable relationships with their clients, offshore companies provide post-delivery maintenance & technical support with great care, which encourages outsourcers to come back to them for further development services.

The benefits of offshore software development are immense, & the list keeps growing. In conclusion, offshore software development is no bad deal when it puts so many benefits in your hands, with reliable success on outsourced projects.

Quality Certifications and What they Mean in Software Development

Large-scale software development companies are still quite young, and the software industry itself is a fairly new one. Outsourcing of software development has been around for only a couple of decades, and as the industry gains maturity, quality certification has taken on a whole new meaning for suppliers as well as customers. Quality certification in software differs slightly from quality certification in manufacturing: though a number of business process management and quality control principles are derived from the popular quality certifications, the implementation and implications are noticeably different.

There are two broad types of quality certification available to software development companies: the ISO 9001:2000 standard, and the various levels of the SEI CMM. Some organizations achieve ISO certification first and then work towards an SEI CMM level, whereas others go directly to SEI CMM. ISO certification, however, is a lot easier (and a lot cheaper) to obtain than SEI CMM, so companies with ISO certifications are plentiful whereas SEI CMM-rated companies are far fewer.

One of the key benefits of quality certification for a software development company is that it showcases the maturity and continuity of the organization. Both certifications pay attention to processes: ISO guidelines state that you should define a process and make sure it is being followed, whereas SEI CMM dictates certain parameters of a process within which the company should work. Achieving certification and maintaining the documented processes provides a long-term growth pattern for the company and at the same time builds a differentiating factor with customers.

Apart from demonstrating the maturity and continuity of the organization, software development companies need quality certification to ensure the success of large projects. The tried and tested development methodologies that are part of the certification process ensure that the code and designs the company produces are of a high standard and will withstand the test of use and durability. Customers planning to do business with a quality-certified company find it much easier to get a good quality software product. Non-certified companies have a tough time competing with certified ones, which is why more and more software development companies are moving towards quality certification.

Most medium to large companies are moving towards SEI CMM certification, as it was developed with software development in mind. There are various levels of the certification, and level 5 is the highest a software development company can achieve. The entire SEI CMM certification process is lengthy, time consuming and quite expensive compared with ISO 9001:2000, but the benefits often compensate for that.

So if you are a software company and have not yet gone down the path of certification, it is time you gave it serious thought. If you are an organization looking to outsource software development work to companies in India, China, the Philippines, Poland or parts of Eastern Europe, it is advisable to consider their quality certifications. Though we have mostly discussed ISO 9001:2000 and SEI CMM, there are other industry- and technology-specific certifications that software development companies can obtain. Usually these are granted by software manufacturers or independent bodies, and though they may not be as critical as the quality certifications discussed above, they carry real weight when evaluating a supplier.

Software Development Life Cycle

Software development life cycle (SDLC) is a process adopted and followed during the development of software. Also known as the software life cycle or software process, the SDLC consists of several stages.

Requirements analysis

Becoming acquainted with the specific requirements of the desired software is the first important step. This requires skill and experience in software engineering, so that exactly the right software is developed.

Specification analysis

A software development process enters the specification stage once the deliverables are identified. This is the stage at which the software is precisely described in a written form that can be understood. Specifications are most important for external interfaces, which must remain stable.

Architectural analysis

Architecture refers to a conceptual representation of the software system. A well-defined architecture ensures that the software has all it needs to meet current requirements and can accommodate future ones. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware and the host operating system.

Coding

The coding stage is the step everyone associates with software development. Here the design is reduced to working code.

Testing

The coding stage is followed by the testing phase. This is one of the most important stages of any software development life cycle, where extensive testing is done to ensure that code written by different software developers works together in harmony.
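
As a small, hedged illustration of that idea, here is the kind of automated check that might run in this phase. The two functions stand in for modules written by different developers; every name is invented for the example (Python):

    import unittest

    # Hypothetical pieces written by two different developers.
    def parse_order(text):                 # developer A's module
        item, qty = text.split(",")
        return {"item": item.strip(), "qty": int(qty)}

    def price_order(order, unit_price):    # developer B's module
        return order["qty"] * unit_price

    class OrderIntegrationTest(unittest.TestCase):
        def test_parse_then_price(self):
            # The two modules must agree on the shape of the order dictionary.
            order = parse_order("widget, 3")
            self.assertEqual(price_order(order, 2.50), 7.50)

    if __name__ == "__main__":
        unittest.main()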

An important part of software development is documentation. Many times this step is overlooked, only to cause problems when future maintenance and additions become necessary. Many software projects have also been found to fail for lack of training among end users. Ideally, training is an integral part of the software development life cycle, in which end users have all their questions answered by the developers.

Earlier, the entire process of software development consisted of a lone developer writing the code. Today, however, the undertaking is much bigger and more complicated, involving teams of architects, analysts, programmers, testers and users who work in tandem to create the code. This is the main reason the SDLC has become so important: without a well-defined development life cycle, software is often found either to fail or to underperform.

Ten Tools Every Software Developer Needs

In the early days, software development was more art than science and developers were looked on as geeks and quasi-magicians. Over the years, methodologies have evolved that have brought the software development process more into the mainstream. Here are ten recommendations for the modern programmer. Get to know these tools and you’ll be in high demand in the software development field.

In no particular order:

1) SQL – Structured Query Language is the lingua franca of database programming, and all modern business programming requires some database interaction. A strong understanding of SQL will ensure you can speak the database’s language when the time comes (see the first sketch after this list).

2) Database Design – Good database design is a key factor in any modern complex system. You may never have to design a database from the ground up, but you will certainly need to know key concepts like indexing, foreign keys and table normalization; the first sketch after this list shows all three in miniature.

3) UML – The Unified Modeling Language isn’t really a language at all; rather, it’s a mechanism for expressing relationships and processes in any system. UML is widely used in commercial software design and development, and it will greatly enhance your ability to communicate about and understand complex systems.

4) Object Oriented Design – Good OO design skills are required in most software development today. While UML might be used to express a system’s design, the software developer must be able to actually design the objects themselves using sound OO principles. By analogy, UML is the written sheet music, and OO design is the act of composing it (the third sketch after this list gives a small example).

5) Refactoring – Refactoring is closely related to OO design. It is the process of improving an existing implementation by applying sound design principles and making changes accordingly (see the second sketch after this list).

6) Design Patterns – Software developers often face similar or even identical problems while developing disparate systems. Some problems, and their solutions, are so common that they have been cataloged into a common set of design patterns. The more of these patterns a developer knows, the more productive he or she will be; the third sketch after this list shows one of the best known, Strategy.

7) Web Apps – Web application programming is evolving rapidly and it’s a completely different model than traditional desktop or client/server application programming. The modern developer will become familiar with the evolving technologies and stay abreast of the changing landscape.

8) Client/Server Apps – Client/Server apps operate in more controlled environments than Web apps and come with their own sets of concerns. Many C/S applications run today’s businesses and will continue to for the foreseeable future.

9) Programming Language Skills – The basis for all software development is the programming language. Languages come in and out of favor and the modern developer has to keep current on the languages that are in-demand. At the time of this writing, Java, C#, C++, HTML, XML, and other Web-oriented languages are in high demand.

10) Infrastructure – Although the hardware, operating systems, network topology and administrative concerns that go along with those things aren’t directly tied to software development, they are very closely related. A software developer who has in-depth knowledge of any of those topics along with the previous 9 tools will be highly sought after in today’s marketplace.
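
A few of the tools above are easiest to appreciate in concrete form. The short sketches below are illustrations of mine, not excerpts from any particular system; every table, class and function name in them is invented. Python is used throughout, with its built-in sqlite3 module standing in for a production database.

First, SQL and database design together (items 1 and 2): a tiny normalized schema with a foreign key and an index, then a JOIN query against it.

    import sqlite3

    conn = sqlite3.connect(":memory:")            # throwaway in-memory database
    conn.execute("PRAGMA foreign_keys = ON")      # SQLite enforces FKs only on request
    conn.executescript("""
        -- Normalization: customers live in their own table rather than being
        -- retyped on every order row.
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE orders (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(id),  -- foreign key
            total       REAL NOT NULL
        );
        -- Indexing: speeds up the common lookup "orders for customer X".
        CREATE INDEX idx_orders_customer ON orders(customer_id);
        INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
        INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.0), (12, 2, 10.0);
    """)

    # Everyday business SQL: join the two tables and aggregate.
    for name, spent in conn.execute("""
        SELECT c.name, SUM(o.total)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """):
        print(name, spent)    # Acme 349.0, then Globex 10.0

Second, refactoring (item 5): the classic "extract function" move, which removes duplication without changing behavior.

    TAX_RATE = 0.2

    # Before: the same total-with-tax logic copied into two functions.
    def print_receipt(items):
        total = sum(qty * price for qty, price in items) * (1 + TAX_RATE)
        print(f"total: {total:.2f}")

    def email_receipt(items):
        total = sum(qty * price for qty, price in items) * (1 + TAX_RATE)
        return f"total: {total:.2f}"

    # After: the duplicated logic is extracted into one well-named function;
    # behavior is unchanged, but the design is sounder.
    def total_with_tax(items):
        return sum(qty * price for qty, price in items) * (1 + TAX_RATE)

    def print_receipt_v2(items):
        print(f"total: {total_with_tax(items):.2f}")

    def email_receipt_v2(items):
        return f"total: {total_with_tax(items):.2f}"

Third, OO design and design patterns (items 4 and 6): the Strategy pattern, in which interchangeable algorithms hide behind one abstraction so callers never depend on a concrete class.

    class ShippingCost:
        """The abstract role; concrete strategies are substitutable for it."""
        def cost(self, weight_kg: float) -> float:
            raise NotImplementedError

    class Ground(ShippingCost):
        def cost(self, weight_kg):
            return 5.0 + 1.0 * weight_kg

    class Express(ShippingCost):
        def cost(self, weight_kg):
            return 15.0 + 2.5 * weight_kg

    class Shipment:
        def __init__(self, weight_kg: float, strategy: ShippingCost):
            self.weight_kg = weight_kg
            self.strategy = strategy      # the variable part is plugged in

        def quote(self) -> float:
            return self.strategy.cost(self.weight_kg)

    print(Shipment(4.0, Ground()).quote())    # 9.0
    print(Shipment(4.0, Express()).quote())   # 25.0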

User Driven Modelling - Background Information

Explanation of the Problem to be Addressed

Research Aim

This research arises out of work to create systems that facilitate the management of design- and cost-related knowledge within the organisations involved, with the aim of using this knowledge to reduce the cost of manufacturing products. This thesis identifies ways that problems arising from the model development process can be addressed by a new way of providing for the creation of software. Drawing on experience from projects that have used a combination of proprietary and bespoke software, it is possible to identify the approach of User Driven Programming (UDP). This research unites the approaches of object orientation, the Semantic Web, relational databases and event-driven programming, and encourages much greater user involvement in software development.

Software development is time consuming and error prone because of the need to learn computer languages. If people could instruct a computer without this requirement, they could concentrate all their effort on the problem to be solved. Within this research this is termed User Driven Programming (UDP); for the examples demonstrated, the term User Driven Modelling (UDM) describes the application of user driven programming to model development. This research aims to create software that enables people to program using visual metaphors. Users enter information in a diagram, which for these examples is tree based, and the program translates this human-readable representation into computer languages.

This research demonstrates how a taxonomy can be used to automatically produce software. At present the technique is most suitable for modelling, visualisation and searching for information. The research explains the technique of User Driven Model Development, which could form part of a wider approach of User Driven Programming. This approach involves the creation of a visual environment for software development, where modelling programs can be created without requiring the model developer to learn programming languages. The theory behind this approach is explained, along with the main practical work in creating the system. The basis of the approach is modelling of the software to be produced in ontology systems such as Jena and Protégé.
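
The thesis's actual implementation rests on the ontology tools named above. Purely as a toy illustration of the translation idea (my own sketch, not the author's code), a user-entered tree, here a nested Python dict standing in for the visual taxonomy, can be walked mechanically and emitted as source code:

    # Toy sketch: a user-entered taxonomy is translated into Python classes.
    taxonomy = {
        "Wing": {"area": 25.0, "span": 12.0},
        "Fuselage": {"length": 18.0, "diameter": 2.2},
    }

    def generate_classes(tree):
        lines = []
        for concept, attributes in tree.items():
            lines.append(f"class {concept}:")
            for name, value in attributes.items():
                lines.append(f"    {name} = {value!r}")
            lines.append("")
        return "\n".join(lines)

    source = generate_classes(taxonomy)
    print(source)          # the human-readable model, now as code
    exec(source)           # run as a top-level script, this defines Wing etc.
    print(Wing.area)       # 25.0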

The research applies this technique to aerospace engineering but it should be applicable to any subject.

Why a different approach is needed


User involvement is important in the development of software, but a domain expert does not necessarily possess expertise in software development, and a software developer cannot have expertise in every domain to which software might apply. So it is important to make it possible for software to be created using methods as close as possible to those the domain expert normally uses. The proportion of domain experts in a particular domain (aerospace engineering, for example) who can develop their own programs is fairly low, but the proportion who are computer literate in the everyday use of computers is much higher. If this computer literacy is harnessed to allow domain experts to develop and share models, productivity in software development will increase and misunderstandings between domain experts and developers will be reduced. Domain experts can then explore the problem they are trying to solve and produce code to solve it. The role of the developer becomes more that of a mentor and enabler than someone who has to translate all the ideas of the expert into code.

User Driven Model Development

The intention of the research into User Driven Modelling (UDM), and more widely User Driven Programming (UDP), is to enable non-programmers to create software from a user interface that lets them model a particular problem or scenario. This involves a user entering information visually in the form of a tree diagram. The research develops ways of automatically translating this information into program code in a variety of computer languages, which is important and useful for the many employees who have insufficient time to learn programming languages. To achieve this, visual editors are used to create and edit taxonomies that are then translated into code. It is also important to examine visualisation techniques, in order to create a human-computer interface that allows non-experts to create software.

The research concentrates mainly on using this technique for modelling, searching and sorting, but it should be usable for other types of program development. Research relevant to User Driven Programming in general is covered, as it could be applied to the problem in future.

This research unites the approaches of object orientation, the semantic web, relational databases and event-driven programming. Tim Berners-Lee defined the semantic web as 'a web of data that can be processed directly or indirectly by machines' (http://www.w3.org/People/Berners-Lee/Weaving/Overview.html). The research examines ways of structuring information, and of enabling processing and searching of that information, to provide a modelling capability.
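
To make "processed directly or indirectly by machines" concrete: semantic web data is typically held as subject-predicate-object triples that programs query by pattern. A minimal sketch of my own, with invented facts, not drawn from the thesis:

    # Minimal triple store: facts as (subject, predicate, object) tuples.
    triples = [
        ("Wing", "partOf", "Aircraft"),
        ("Fuselage", "partOf", "Aircraft"),
        ("Wing", "hasCost", "12000"),
    ]

    def match(s=None, p=None, o=None):
        """Return every triple consistent with the pattern (None = wildcard)."""
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    print(match(p="partOf", o="Aircraft"))   # both parts of the aircraft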

UDM could also help increase user involvement in software by providing templates that enable non-programmers to develop modelling software for the purposes that interest them. If more users are involved in the creation of software, and the source code is open, development communities can form that share ideas and code and learn from each other. These communities could include both software experts and domain experts, who are far more able to attain the expertise to develop their own models this way than with current software languages. Vanguard are creating a modelling network where universities can share decision support models (http://wiki.vanguardsw.com/). We are creating a modelling network that will link to Vanguard's (http://www.cems.uwe.ac.uk/amrc/seeds/models.htm).

Criteria necessary for User Driven Model Development

This section explains the factors necessary to make possible the User Driven Model Development approach outlined later.

Firstly, it is necessary to find a way for people with little programming expertise to use an alternative form of software creation that can later be translated into program code. The main approach taken was the use of visual metaphors, although others may investigate a natural language approach. A translation method can then convert this representation into program code in a number of languages, or into a meta-language that can be further translated. To achieve this, the translator must understand and interpret equations that relate objects in the visual definition and obtain the results. For the user to understand the translation that has been performed, it is then important to visualise the translated code, and this must be accessible to others who use the translated implementation. Web pages are a useful mechanism for this, as they are widely accessible.
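
For the equation-interpretation step in particular, here is a minimal sketch of one possible approach (my own simplification, assuming each node in the model is either a number or an arithmetic equation naming other nodes):

    # Toy model: each node is a number or an equation over other nodes.
    model = {
        "span":  12.0,
        "chord": 2.0,
        "area":  "span * chord",        # equation relating two objects
        "cost":  "area * 480.0",        # equation built on another equation
    }

    def evaluate(node, model):
        value = model[node]
        if isinstance(value, (int, float)):
            return value
        # Resolve every node the equation mentions, then evaluate it.
        # (Naive substring matching; fine for a toy illustration.)
        names = {n: evaluate(n, model) for n in model if n in value}
        return eval(value, {"__builtins__": {}}, names)

    print(evaluate("cost", model))      # 12.0 * 2.0 * 480.0 = 11520.0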

How To Work Out A Software Development Contract With An Overseas Provider

You may be surprised to learn that many companies in the US and UK do not put together a watertight contract when dealing with an overseas software services provider. Most agreements are made via email with little or no regard to important aspects such as dispute resolution, intellectual property rights, confidentiality and employee infringement. If you plan to use an offshore provider soon, here are some basic tips on drawing up a workable contract that safeguards the interests of both parties:

Define deliverables: Since software development is mostly intellectual work and its definition has many grey areas, it is advisable to define deliverables in detail. This helps make sure that the understanding of the work is clear on both sides and there is no miscommunication with the supplier. You can also define the change management process and the number of revisions allowed, as this makes the deliverables more structured.

Mention the acceptance clause: What is good for the goose may not be good for the gander. Though an old saying, there is a great deal of truth in it: sometimes the software services provider may consider the work completed when you would not accept it. The goal, or the premises on which the work will be accepted, should therefore be clearly stated for both parties.

Confidentiality rights on both sides: Sometimes companies get an NDA signed with the service provider and expect it to hold even when the project runs for a long time. This is not advisable. A proper contract should be drawn up for ongoing work so that confidentiality of information is maintained by the service provider. Though some customers feel their projects do not warrant such a clause, the information exchanged may include company or business information that has been given out unknowingly.

Employee Infringement: Approaching a service provider's employee directly is one of the cardinal sins a client can commit, so as a service provider it is necessary to include this clause in the contract. The opposite can also happen: the service provider may approach the client's personnel for direct or indirect gain. An employee infringement clause keeps such practices in check and provides a legal route if there is substantial evidence of infringement.

Force Majeure: Relatively recent natural calamities such as the tsunami and Hurricane Katrina have made many large companies take the force majeure clause seriously. It protects the interests of both parties.

Last but not least, pricing: This is probably the most common source of argument between customer and supplier, in every industry throughout the world. The total project price and the milestones at which charges fall due should be included as an important schedule within the contract.

There might be other specific terms and conditions agreed between you and the supplier. These should all be written into the contract, not only for the record but also to ensure continuity of work in case of personnel changes at the supplier's company.

Oracle E-Business Suite Integration & Development – Software Factory Approach

When we talk about implementing, customizing, integrating, converting data for and tailoring an ERP for a large corporate business or non-profit organization, we need to formalize and structure the software development project. This approach is also referred to as a Software Development Factory.

The Software Factory concept is based on a production line for systems, running from user requirements to delivered software. Production is carried out without direct communication between developers (the production-line workers) and the users, system analysts and designers (the customer side), based upon agreed scope, schedule, cost and quality standards.

A software development process is fundamental to a software factory's success: it covers the whole software development cycle and supports the planning and control of project activities and resources. The activities can be categorized as follows:

• Project Management: project scope definition activities; version control; work, quality and risk plans; human resources organization, training and allocation, and so forth;

• Business Requirements Mapping: based on specific business requirements and gaps in Oracle Applications functionality, customizations (extensions) are planned and developed;

• Module Design and Build: activities to estimate, plan, design, build, test and document custom program modules (forms, reports, database, etc);

• Business System Test: an integrated approach to testing the quality of all application system elements;

• Performance Testing: these activities help the project team define, build and execute a performance test on specific system modules and configurations;

• Adoption and Learning: accelerates the implementation team’s ability to work together by learning the organization-specific customizations.

Other important features of the development process are standard names for file structures, tables, fields, variables and other key elements used during development. This facilitates upgrade procedures and application maintenance.

Oracle E-Business Suite: Software Factory Development Process


Software Factory for Oracle E-Business Suite Projects uses a software development process based on AIM Advantage (Application Implementation Method). AIM Advantage is a proven, comprehensive method and toolkit to successfully guide implementation of an Oracle Applications solution. Developed and sold by Oracle Corporation, AIM is used by Oracle consultants, partners and customers when implementing Oracle Applications.

For Oracle Applications extensions specifically, AIM offers templates and tools for the entire extension development cycle: problem definition, business requirements analysis, system analysis, design, build, test and transition to the production environment. The activities executed fall into the same categories listed above, from Project Management through Adoption and Learning.

If you would like to see what an Oracle Applications customization built with these techniques looks like, you can contact us; we can work on your company's specific needs and deliver quality results at an affordable cost.

The Benefits of Custom Software Development vs. Generic Applications

In the world of business computing, it is essential to have quality software regardless of the type or size of your business. While technology is a great thing, it can be complicated, especially when it comes to software. You don’t want to purchase generic applications that are difficult to use and maneuver, and you don’t want to pay for features you will never use. This is why custom software development is often a much better choice.

Custom software development starts with identifying your goals and the needs of your business. In many cases custom software development is less expensive than a general application because it is designed to meet your business needs exactly: you don’t pay for programs and features you will never use, and the software does precisely what you need it to do, saving time for you and the other employees who use it.

It is important to choose a software programmer or developer who has taken the time to understand your business and what you want the software to do for it. Check their references and verify that they are credible. Find out about training, customer support, and refunds in the event you are not happy with the software. There are many reputable software programmers to be found in the newspaper, the yellow pages, and on the internet. Get an estimate for the work, a description of what the software will do, and a completion date; all of this, along with training time and customer support terms, should be in writing before you pay any money for services.

The 80:20 Software Product Development Lifecycle

The argument about off-the-shelf software products versus customized software has been going on for quite some time. As the industry matures, ready-made software manufacturers have arrived at a middle path: the 80:20 software product development process, which captures the benefits of off-the-shelf as well as customized software.

Software architects have come up with an option in which most of the common modules of the software are developed in advance and part is left to be customized later. Thus eighty percent of the software is ready-made, and the twenty percent of customization gives the customer highly flexible software for a fraction of the cost of fully customized software. Benefit analysis shows that it is usually better to adopt this development concept than to build from scratch, as it saves time and money and boosts the bottom line of the business.
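
One common way to leave the final twenty percent open, offered here as a hedged sketch rather than a description of any particular vendor's product, is a hook (or plugin) interface in the base product: the vendor ships the common eighty percent once, and each customer supplies only a small callable of their own. All names below are invented:

    # Base product: the common 80%, written once by the vendor.
    class InvoiceEngine:
        def __init__(self, discount_hook=None):
            # Customization point: customers plug in their own pricing rule.
            self.discount_hook = discount_hook or (lambda subtotal: 0.0)

        def invoice(self, line_items):
            subtotal = sum(qty * price for qty, price in line_items)
            discount = self.discount_hook(subtotal)   # the customer's 20%
            return subtotal - discount

    # Customer-specific 20%: a three-line rule instead of a whole application.
    def volume_discount(subtotal):
        return subtotal * 0.1 if subtotal > 1000 else 0.0

    engine = InvoiceEngine(discount_hook=volume_discount)
    print(engine.invoice([(10, 120.0)]))   # 1200 - 120 = 1080.0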

The cost of developing roughly eighty percent of the software is spread over a number of customers, which makes the product very affordable compared with 100% customized software.

So is there a downside to the 80:20 software product development lifecycle? It depends. The process requires good management skills on both the customer and the vendor side to ensure the benefits really accrue to the customer. The vendor should demonstrate the software properly and show which sections can be customized; the customer should spec out the customization sheet accurately so the vendor can deliver the software as required.

This process has had a good level of success with large software manufacturers, who have started developing many applications that customers can purchase and then have customized by third-party suppliers. Most database software companies operate with this business model. Not only can the software manufacturer concentrate on developing a good base product; third-party software development companies can also build competencies in customizing it.

Customization in 80:20 software products is usually done on the user interface, the navigation system, and within the database tables. Additional modules can be added depending on the customer's requirements. The cost of modifications is usually governed by the time the vendor needs to do the work; sometimes the vendor quotes a flat fee per module and the customer is free to choose which modules their business requires.

However, the 80:20 process is not the only way to go. Many organizations with special needs will continue to commission customized software, and organizations with limited budgets will work with off-the-shelf products. Requirements and budget remain the key factors governing the choice, and each option has its pros and cons.

How to Successfully Deploy Productivity Software Across Teams

Part 1 of 4: Software, Change and Missionaries

In 25 words or less, what would you define as the critical success factors for rolling out project and performance management software across a workgroup or team?

If you were to start today, what would be on your list for doing what works and avoiding what doesn't? If you aren't sure, but interested, maybe even slightly uncomfortable at the prospect, you are not alone. Rolling out a project management or groupware solution is enough to make any of us in management cringe at some time or another. It starts the voices in the back of the head, quite persistent in their warning that there is trouble ahead. The voices say that to launch such a venture is to:

* Enlist direct reports and peers in learning "one more" software package amidst their protests that they are already swamped, and frankly don't need or want another tool,
* Incur high frustration, something akin to "herding cats" or "pushing a wet noodle up a hill,"
* Expose oneself to being publicly challenged, defied, and/or defeated,
* Fail to generate the outcomes expected, promised, or hoped for,
* Engage in risks such as spending money and taking on resistance... and for what, exactly?

In fact, introducing new software across a team or group looks a lot like engaging in a change management effort amidst numerous risks. It is one. Employing new software tools, especially as part of a step forward in performance and productivity, requires a change in the way people work. Let me say it again more directly: it changes the way people work. For many people in management, the software rollout (change) process means embarking with the expectation that "things are going to be different," often, as we will see, without having clearly defined and worked through with the group receiving the software exactly what is going to be different and why.

This article is a four-part review of our findings and recommendations in this area. But before we continue, let's be clear about what to avoid in implementing software, or as we like to describe it...

The course of the hapless missionary.

The typical implementation process often starts with a technology adopter or visionary who becomes a "missionary" for use of a software tool: someone who decides that a software package is a valuable solution for others. Sometimes they arrive at this decision out of personal experience; sometimes based on what they think someone else on their team needs, e.g. "You really need to be more organized."

Having uncovered value in the software, they determine that it would be beneficial for others, if not everyone, to share in their discovery and experience the benefits of using this software tool. They begin the process of attempting to convert others. They may or may not encounter interest, but they always encounter some form of opposition, passive if not active. Not everyone is convinced they should convert to the new "betterment" tool.

Part of the missionary approach to tools is the belief that others would share in their appreciation of the tool and benefit from it... if they would "just try it." However, sharing their perspective and tools, even enthusiastically, is usually not enough to convert the team or group. Typical results yield only a couple of converts within a workgroup of six to twelve, with usage by the rest limited to a cursory review or a try-it approach (drive it around the block and then back to the lot). This is further anchored by the position of other team members that:

1. "I'm too busy to learn a new tool," and
2. "I'm doing basically OK with the (preferred) tools I presently know how to use" (so why do I need to change?).

Encountering this type of reaction, an effort that started out with such zeal is easily thwarted. Our findings indicate that "missionaries" are eaten by the "cannibals" (give up on the software conversion due to resistant staff) within six months if not supported by insistence from a person or position of power. What do we mean by that? Despite the lofty commitment to increased performance, productivity or simply reduced work effort, the average implementation across a group drifts into what might be called an aspiration: an aspiration to improve that is not realized, and is only moderately helped by a command from above. Going from inspiration to aspiration without a successful deliverable turns out to be bad business. It's not just an unsuccessful attempt that generates little return on investment; it can leave individuals frustrated and the workgroup less collaborative.

Purchasing and installing productivity software from this purview parallels the life cycle of the average piece of home workout equipment. The average consumer starts out with gusto in the inspired phase, shifts to an aspiration as usage declines, and then concludes in storage or a garage sale. From inspiration to aspiration to storage on the sidelines: one more goal unachieved, because the change in behavior demanded more work, more effort, more discipline... more of something than was available to resource it.

The good news is that we have identified not only the painful average life cycle of (marginally successful) groupware implementations, but also the patterns of what works in successful software introduction: patterns that emerge in three best-practice areas. We'll start with the first practice, "Shed illusions about performance improvement and replace with four key reality concepts," in part two.

Outsourcing Software Development Offshore

The trend to offshore outsourcing is continuing apace, with Gartner estimating that a quarter of all technology jobs will be based in low-cost countries such as India by 2008. These are the lessons learned from a year of offshore outsourcing.

When starting any offshore outsourcing venture:

  1. Make an effort to understand the cultural issues
  2. Concentrate a lot of effort on communication
  3. Start small and build up; try a model office and, when it's working, take it offshore
  4. Pilot what you want to do
  5. Expect work to take 2-3 times longer while you develop your processes
  6. Don't underestimate the management effort required to make the venture work
  7. Document your processes in detail and make sure everyone understands them
  8. Involve the offshore team from the beginning of a project and in the planning process
  9. Don't assume implicit understanding of "the way things are"
  10. Make an effort to understand why things aren't working, don't assume you know

These are some of the other important findings:

  1. Make sure you have strong project managers offshore. Indian developers are not good at managing themselves and grind to a halt if they do not have clear goals and objectives set for them
  2. Expect around 30 per cent staff churn. Loyalty does not seem strong, and employees seem to have no compunction about going down the road for a few rupees more
  3. Almost all of the companies seem to exaggerate their capabilities and worry about it later. Do not take industry certification (Capability Maturity Model etc.) as a sign that they actually operate according to it. Agree in advance on the processes and procedures necessary for your business
  4. Don't be blinded by claims of huge financial savings. Sure, the rates are low, but you will spend a lot managing the relationship and man-managing the Indian resources remotely
  5. Outsourcing is suitable for large, well-defined projects. Do not consider it for smaller pieces of work; it is not worth it
  6. Do not put all your eggs in one basket. Keep some internal resource; you will need it, believe me

Offshore outsourcing can work given the right conditions and plenty of forward planning, but don't be fooled into thinking it's a bed of roses. There are plenty of thorns on the way to success.