Friday, December 29, 2006

Outsourcing IT Development: Advantages and Disadvantages

You can outsource almost anything. Maybe you don't know it yet, but it's true. A couple of days ago, when I was drinking coffee in the kitchen, my wife pointed at the faucet that was leaking big time. The good ole faucet was there when we moved in about ten years ago, and trying to fix it again didn't make sense any more. Since I religiously believe in DIY, I bought a new faucet and set about working. When the old faucet was gone, I found out the metal pipe under the sink had to be replaced, too. There was no way I could do it without recourse to welding. I realized I was ready to outsource that part of the project, so I called the plumber.


IT development outsourcing isn't much different from any other kind of outsourcing. When you face a pressing need to start a new IT development project, you have to weigh your current in-house capacity first. If your experience and budget allow you to cope with the task without resorting to any outside expertise, you should probably take full advantage of your potential and do it yourself. However, if there's a danger that you'll bite off more than you can chew, it's time to consider the advantages and disadvantages of outsourcing.

Advantages

Basically, outsource service providers offer you higher quality services at a lower cost. This makes the advantages of IT development outsourcing obvious, so let's have a look at just a few of them.

Outsourcing IT development is one of the most effective ways to stretch your budget. When managers plan IT development outsourcing, they usually aim to cut the company's expenditures by 30%, a figure that speaks for itself. Of course, there's always the risk of failure, but if you outsource prudently, you'll be able to implement projects of a scale that would be impossible to reach on your own.

If you need to have state-of-the-art IT solutions worked out and innovations implemented with small losses, outsourcing may be the only way out. It will save you from the nightmare of retraining your employees (or even hiring new ones) and/or paying for re-equipment.

Cutting your costs and upgrading the quality of the services you offer will allow you to expand the competitive capacity of your business. I suppose the state the IT market is in today makes this simple argument a crucial one.

When you outsource IT development to an outside company, you can concentrate on your core activities. You won't be able to completely forget about the project (or the part of it you have chosen to outsource) as soon as you sign a contract with an outsource service provider, but you won't have to spread yourself thin, either.

If you deal with an experienced and highly qualified vendor, you'll be able to gain valuable expertise in support of your IT capacity. Almost any vendor will try to set a dependency trap for you, but that doesn't mean you have to fall into it; instead, learn everything you can from the vendor's expertise.

Disadvantages

So, you have finally decided in favor of outsourcing. Will it automatically make you wealthy and happy? Far from it. Various studies show that 20% to 35% of IT outsourcing contracts are not renewed after they expire. Needless to say, in most of these cases the customers were not satisfied with the quality and/or price of the services. Outsourcing as a nightmare was eloquently illustrated by Beth Cohen, president of Luth Computer Specialists, Inc.: "There was a company in Dayton that decided to outsource much of its IT and production to a foreign company about five years ago. After about nine months of outsourcing, the company realized that there was a huge loss in quality for both production and IT support. The company decided to cancel the contract and rehire their old employees. They ended up getting most of their old employees back but at a higher wage than before. Most people would think that the story ends there. However, as hard as it is to believe, the company is actually considering outsourcing again. They think it will be different this time. It will be interesting to see what happens."

Forewarned is forearmed. This is why I suggest we discuss the pitfalls awaiting a business that puts out to the sea of outsourcing.

You will lose control over the project, or at least over the part you have chosen to outsource. This is the problem that frightens almost any manager who has little or no experience in outsourcing. This is the challenge any business involved in outsourcing faces. This is the risk you have to take. It is inevitable that outsource service providers take control, at least in part, over outsourcing projects. However, they are not supposed to abuse the confidence their customers place in them. In order to minimize the risk, you have to be extremely careful when studying the background of your potential vendor. Once you decide in favor of a particular company and begin negotiating the contract, try to make the whole process of project implementation as transparent for you as possible.

It's usually difficult to avoid the inherent problems of communication.

* Telephone conversations are bad enough, but email and instant messaging take even more time. You'll have to put up with an endless stream of emails to be sent and received. Besides, if you are dealing with an overseas vendor, the time zone problem will surely arise: the difference between you and your vendor may be seven hours or more. Just imagine: you arrive at the office just as your vendor's employees are about to leave. The best way around this problem is to set a mutually acceptable time for online meetings and to require that your vendor stick to the schedule. In fact, you can even benefit from the difference in time zones. For instance, you transmit a rush order to the vendor at the end of your working day; the vendor, being those seven or more hours ahead, receives it in the morning (their morning); and by the time you arrive at the office, a considerable amount of work will have been done.
* Standards of correspondence may differ to the point of misunderstanding. If you run into problems like that while corresponding with your potential outsource service provider, try to work out some standards that both of you will find easy to follow, or start looking for another vendor.
* Language and/or cultural differences might contribute to all kinds of mix-ups. For instance, many people who know only the fundamentals of English are sure that when they ask your opinion about something and you say, "It's okay," it means you like it a lot. Don't waste your time on foreign vendors communicating in something like "pidgin English," and even if the person you're contacting has a fairly good command of English, ask for the resumes of the employees who will be responsible for each part/stage of the project to make sure they are fluent in English.

An outsource service provider might be trying to diversify the business so zealously that achieving progress in one particular area becomes questionable. The solution to this problem lies in the company's portfolio. Examine the relevant case studies and success stories, ask the vendor for references, and, if you are still uncertain, do not hesitate to check these references.

Some vendors advertise services and even take on projects despite having little or no experience in the corresponding areas. Apparently, they intend to farm out at least some parts of such projects to subcontractors, which certainly doesn't look very attractive to the customer. This problem resembles the previous one, and the recommended solution is the same.

Almost all outsource service providers place the highest emphasis on the most advantageous projects. It's only natural, but it surely doesn't make the life of the customers with lower profit potential easy. In order not to become a neglected customer, you should:

* insist on appending to the contract a project implementation schedule that includes as many milestones and deadlines as you find it necessary;
* stipulate for tough financial sanctions in case the vendor fails to meet any of the deadlines;
* agree on some incentive payments for completing the project on schedule (or even ahead of schedule);
* last but not least, build partnership relations with the vendor whose work you are satisfied with and whose high-value customer you want to become.

Most vendors try to accumulate as many projects as they can. It's also easy to understand. However, the burden might appear to be beyond the vendor's strength, and this will most likely wreck the project schedule, if not the whole project. If you don't want it to happen to you, you can:

* find out the scale of the vendor's operations including the approximate number of employees and customers - of course, if it's possible;
* request the resumes of all the vendor's employees that are going to be involved in the project implementation;
* ask the vendor to describe in detail these employees' responsibilities;
* follow the advice given in the previous paragraph.

An unscrupulous vendor may be simply unqualified for the project that an imprudent customer has chosen to outsource. One way to solve this problem is to focus on the expertise of your potential outsource service provider at the selection stage.

A number of problems may arise due to the incompetence of a customer who is a novice in outsourcing. That's right: you don't have to assume that the outsource service provider is the root of all evil. Incompetent customers tend to modify long-established standards and procedures. A vendor who knows that the customer is always right tries to implement the project the way the customer wants it, which finally leads to a total mess. In order to avoid this kind of situation, try to find out as much as you can about IT development outsourcing from your contacts and… from articles like this.

Conclusion

Will outsourcing IT development really profit your business? Uh, maybe yes, or maybe no. In other words, it depends.

If you don't possess in-house expertise and/or budget necessary to implement a vital IT development project, outsourcing it - in full or in part - to an outside company seems to be the best solution you can find.

However, you should be discreet selecting the vendor, examining the vendor's expertise, negotiating the contract, and monitoring the project implementation. In this case, outsourcing IT development will be rewarding, and the return on investment might be the greatest you have ever had.

Content Management Systems: What's the Catch?

"Content Management System" (CMS) is an expression that is widely used in relation to Web site development and maintenance today. Maybe even more than widely: the search for "Content Management System" returned 2,190,000 results pages from Google. What's so special about it? Is a CMS a must for any individual or company that owns a Web site? Can you do without it? If you know the answers to these questions, you don't have to read any further - this article is for those who want to find out what a CMS is and whether they need it at all.

What Is a CMS?

A traditional Web site built of static HTML pages works great if it's relatively small and simple, if it doesn't have many interactive features, and if you don't have to change its content too often. However, if you need professional help to add/delete or modify a page and upload it to the site, it may be time for you to think about employing a CMS.

A CMS makes creating and modifying content independent of publishing it. You can type or paste text in special boxes and upload images using the placeholders in a form, or if the CMS provides a WYSIWYG editor, you can even work on a page and at the same time view what it will look like. All content including metadata (page title, description, keywords, etc.) is added to the database and used by the CMS to dynamically generate Web pages. In most cases, there is no need to install any special administrative software on the user's computer. Managing a Web site becomes easy even for non-technical business people - all they have to do is to log on via their browser, create or change a Web page or several pages, and then have the CMS publish the content to the Web server at the click of a button.

There is a whole range of content management systems. A simple and easy-to-use CMS can be sufficient for an individual or a small business, while a large company may require a more sophisticated solution. A sophisticated CMS may be used to manage company workflow by setting roles and permissions for different employees: some may be allowed only to upload their material, others to review and edit it, and still others to approve and publish it.

Advantages and Disadvantages

One of the most important benefits of using a CMS is reducing the costs of professional assistance. A CMS enables people with little (or no) Web authoring skills to re-design, maintain, and update their Web sites themselves. You can finally break free of the HTML coding dependency trap and start working on the content yourself. If you can use your Web browser, you will be able to use a CMS (not any CMS, though). You can make changes to your Web site quickly and easily, correct mistakes as soon as you notice them, and on the whole, work smarter, not harder.

Another advantage is that with a CMS, the design of a Web site always remains consistent because the content obtained from the database can be inserted into pre-designed templates. Using templates also makes it possible to change the entire site quickly and easily: you modify the template rather than each and every page.

There sure are some disadvantages as well. One of them is that a Web site based on a CMS tends to load more slowly, and search engines may index it less effectively than a static HTML site. These problems can be solved by adding a function that allows you to publish the site as static HTML pages or simply by choosing a CMS that uses search engine friendly URLs.

Also, a CMS is usually a fairly expensive product. Even if you get an open source solution, you will most certainly have to pay for installing it and setting it up. Don't be surprised to find out that your initial costs are higher than if you had your Web site built of static HTML pages - in the long run, a CMS may save you more.

How to Get a CMS

There are several options that you may consider.

1. Developing a CMS in house. This option is the most unlikely one. If you want to use a CMS because you aren't a Web guru, how can you have enough knowledge and skills to create one yourself? The very point of using a CMS is that you don't need to be a professional to do it. You may have technical staff who can build it for you, but what happens if they quit and the system needs improvements or bug fixing? I'm not sure you'll easily find someone who qualifies for this job, and I don't know how much it will cost you.

2. Outsourcing the development of a CMS. This option is the most expensive one. There's no doubt that a lot of vendors will offer to develop a CMS that fits all your needs. There's no doubt that most of these vendors will have enough experience for that. The problem is that even if you want a CMS-based site with rather modest functionality, you're probably looking at the $6,000 range as a minimum. Developing a CMS with a respectable number of functions and modules from scratch may cost several hundred thousand dollars.

3. Purchasing a commercial off-the-shelf CMS. This option is probably the easiest and the most reliable one. There is a wide choice of such products on the market, so you may choose the one that suits you best. These systems are normally well tested, and they work smoothly. On the downside, the affordable systems aren't very flexible or efficient, while the high-end sophisticated ones are extremely expensive. Besides, you will still need to hire professionals to have the system deployed and set up.

4. Using an open source based CMS. This option seems to be the most effective one. Open source systems are usually as good as commercial ones, and you don't have to pay for them because software distributed under the GNU public license is free to use, modify, and redistribute. Development teams and users all over the world have spent years working with open source systems, so the core functionality of an open source CMS is thoroughly tested and improved. Such a system is also easy to extend because the development community continuously creates new features and modules.

SQL Server T-SQL LPAD & RPAD Functions (String Padding Equivalent to PadLeft & PadRight)

Here is my method for achieving left and right string padding in the Microsoft SQL Server T-SQL language. Unfortunately T-SQL does not offer functions like Oracle PL/SQL's LPAD() and RPAD() and C#'s PadLeft() and PadRight() functions. However, you can achieve the same thing using the T-SQL REPLICATE and LEN functions. Suppose you have a numeric column called Quantity and you need to return it as a string left-padded with 0's to exactly ten characters. You could do it this way:

SELECT REPLICATE('0', 10 - LEN(CAST(Quantity AS VARCHAR(10)))) + CAST(Quantity AS VARCHAR(10)) AS PaddedQuantity
FROM TableX

The calls to the CAST function are based on the assumption that the value you are padding is numeric. If Quantity were already a string, you could do it like this:

SELECT REPLICATE('0', (10 - LEN(Quantity))) + Quantity AS PaddedQuantity
FROM TableX

In certain cases you might be concerned that the value you want to pad might be wider than your maximum number of characters. In that case you could use a CASE expression to check the LEN of your input value to avoid passing a negative result as the second argument of the REPLICATE function (a negative count makes REPLICATE return NULL, which would make the whole concatenation NULL). No need to worry about passing a 0 to REPLICATE, though: it will simply return an empty string, which is what you'd want since no padding would be necessary.
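Here is a sketch of that guard, again using the hypothetical Quantity column and a ten-character target width:

```sql
SELECT CASE
           WHEN LEN(CAST(Quantity AS VARCHAR(20))) >= 10
               -- value already at or beyond the target width: return it unpadded
               THEN CAST(Quantity AS VARCHAR(20))
           ELSE
               -- safe to pad: the REPLICATE count is guaranteed positive here
               REPLICATE('0', 10 - LEN(CAST(Quantity AS VARCHAR(20))))
                   + CAST(Quantity AS VARCHAR(20))
       END AS PaddedQuantity
FROM TableX
```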

Update: I decided to go ahead and turn these into user defined functions. Here is a script for fnPadLeft:

IF EXISTS (
SELECT *
FROM dbo.sysobjects
WHERE id = object_id(N'[dbo].[fnPadLeft]')
AND xtype IN (N'FN', N'IF', N'TF')
)
DROP FUNCTION [dbo].[fnPadLeft]
GO

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

CREATE FUNCTION fnPadLeft
(
@PadChar char(1),
@PadToLen int,
@BaseString varchar(100)
)
RETURNS varchar(1000)
AS
/* ****************************************************
Author: Daniel Read

Description:
Pads @BaseString to an exact length (@PadToLen) using the
specified character (@PadChar). Base string will not be
trimmed. Implicit type conversion should allow caller to
pass a numeric T-SQL value for @BaseString.

Unfortunately T-SQL string variables must be declared with an
explicit width, so I chose 100 for the base and 1000 for the
return. Feel free to adjust data types to suit your needs.
Keep in mind that if you don't assign an explicit width to
varchar it is the same as declaring varchar(1).

Revision History:

Date Name Description
---- ---- -----------

***************************************************** */
BEGIN
DECLARE @Padded varchar(1000)
DECLARE @BaseLen int

SET @BaseLen = LEN(@BaseString)

IF @BaseLen >= @PadToLen
BEGIN
SET @Padded = @BaseString
END
ELSE
BEGIN
SET @Padded = REPLICATE(@PadChar, @PadToLen - @BaseLen) + @BaseString
END

RETURN @Padded
END

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

And for fnPadRight:

IF EXISTS (
SELECT *
FROM dbo.sysobjects
WHERE id = object_id(N'[dbo].[fnPadRight]')
AND xtype IN (N'FN', N'IF', N'TF')
)
DROP FUNCTION [dbo].[fnPadRight]
GO

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

CREATE FUNCTION fnPadRight
(
@PadChar char(1),
@PadToLen int,
@BaseString varchar(100)
)
RETURNS varchar(1000)
AS
/* ****************************************************
Author: Daniel Read

Description:
Pads @BaseString to an exact length (@PadToLen) using the
specified character (@PadChar). Base string will not be
trimmed. Implicit type conversion should allow caller to
pass a numeric T-SQL value for @BaseString.

Unfortunately T-SQL string variables must be declared with an
explicit width, so I chose 100 for the base and 1000 for the
return. Feel free to adjust data types to suit your needs.
Keep in mind that if you don't assign an explicit width to
varchar it is the same as declaring varchar(1).

Revision History:

Date Name Description
---- ---- -----------

**************************************************** */
BEGIN
DECLARE @Padded varchar(1000)
DECLARE @BaseLen int

SET @BaseLen = LEN(@BaseString)

IF @BaseLen >= @PadToLen
BEGIN
SET @Padded = @BaseString
END
ELSE
BEGIN
SET @Padded = @BaseString + REPLICATE(@PadChar, @PadToLen - @BaseLen)
END

RETURN @Padded
END

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Example usage:

SELECT dbo.fnPadLeft('X', 15, 100.25)  -- returns 'XXXXXXXXX100.25'
SELECT dbo.fnPadRight('X', 15, 'ABC')  -- returns 'ABCXXXXXXXXXXXX'

Seven Sins of Software Sellers: Common Mistakes In Protecting Intellectual Property

It has been a couple of years since the dot-com party abruptly ended—when software as ordinary as sand commanded prices befitting a masterpiece painting.

Today, firms must rely on their genuinely innovative ideas for business survival. Buzzwords and high-priced promotion have been exposed; software companies must stand or fall on strong patents and truly groundbreaking innovation. Now executives must understand whether a new idea is tainted with defective legal protections such as poor patents or incomplete contracts.

Here are some of the common legal mistakes software executives make when evaluating the quality of technology protection.

One: When a Patent Application is Not a Patent

If a patent is appropriate, companies often mistake various filings and procedures for full patent protection. For example, patent attorneys may recommend initially filing a "provisional patent application." This provisional application protects the filing date for one year. The document itself generally describes the invention but often does not provide the detail needed for a full patent filing. Companies, or their business partners or investors, dangerously mistake these "provisional patent applications" for the real thing.

Two: Endure the Tedious Patent Homework

If a company is serious about its technology, it cannot simply file a patent application or hide an invention from the competition. It must also consistently survey the literature in the industry to ensure that no one else has filed claims for the same invention. Otherwise, intellectual property pirates could pilfer the technology from the conscientious inventor who honestly and fairly kept it secret.

Aggressive players of the patent game might also try to push the patent rules to their limit. Firms often file overly broad patent claims simply to crowd out the market. Another technique is to file numerous patents for variations on the same inventive theme. The honest company must diligently review published patent applications, articles, web sites or other marketing materials. Intellectual property will become devalued without this effort. Getting patents or confidentiality agreements is like arriving at the battlefield—not fighting the battle itself. The real fight involves constant surveys of intellectual property articles and public claims, as well as expensive litigation.

Three: Guns without gunpowder

An intellectual property strategy must include reserves for litigation. If the software is genuinely the most valuable in the market, then others will copy it aggressively. Often, competitors will encroach on the idea to test the bounds of the law and the resolve of the rightful owner. Even if a competitor does not use the entire idea, they might incorporate elements of the new concept into a competing product. Innocently or not, competitors might include some elements of the idea in new variations of their product. The original owner of the intellectual property must first expend the time and the funds to monitor the market to ensure that no one else uses the idea. This requires diligence and notices to the infringing party to "cease and desist" from marketing the offending products. To preserve future claims, a company must demonstrate a track record of diligent surveillance of the marketplace, and discipline in providing notice to potential infringers.

Four: Incomplete Confidentiality Agreements

Trade secrets absolutely demand well-crafted confidentiality agreements. Skimp on these details and an entire trade secrets program may collapse. If the company's trade secrets are truly valuable, then the employees and all third parties with access to the company secrets should sign these documents. Collecting signed agreements is the easy part. Organized files can make the difference between retaining and losing trade secrets. These agreements should be considered as valuable as stock certificates, bonds, or patent and trademark certificates. File them, bury them, stash them in a safe deposit box: do whatever it takes to hold onto a complete and organized file of these documents, as well as all agreements that include significant confidentiality clauses.

Five: Selling the Soul of IP

License agreements, especially with Fortune 500 customers, often include one-sided provisions giving customers rights to intellectual property under the most trivial of breaches. Some large software users may demand significant rights under ancillary software escrow agreements or license agreements. These sometimes include provisions granting rights to source code upon late delivery, or missed milestones. Identify the critical elements of the license, and negotiate hard to protect the IP—while at the same time closing that make-or-break software deal.

Six: Matching the Product Cycle with the Patent

The life of the patents should correspond to the lifecycle of the software innovation. Patents are usually issued between two and four years from the filing date—sometimes much longer. During the same period, the hardware platform for the customer may have changed, new chips marketed, communications speeds enhanced, and computer architecture improved. By the time an enforceable patent issues, there may be relatively little time to exploit the invention. Occasionally, large companies file an excessive number of process patent applications on software. The delay getting the patent will render it virtually worthless in the context of software. However, the simple act of filing the application can be used to bully smaller companies when negotiating cross licensing and revenue sharing deals. For the smaller technology-focused firm, however, this brute force strategy yields few valuable results, since it will lack the money and time to casually exploit worthless filings to intimidate under-funded competitors.

Seven: Bragging Rights Should Follow Patent Rights

Sadly, inventors who have boasted of their invention in articles, in speeches, or to colleagues may have accidentally disqualified the idea from patent protection. A good patent attorney will not waste the client's time preparing a filing in cases of early disclosure and tardy filing. Even if the Patent and Trademark Office overlooks the late filing, a competitor bringing a lawsuit will not.

Patent attorneys will initially determine if someone (including the inventor) has already sold or promoted the invention, that is, whether "prior art" disqualifies the patent. The rule simply provides that premature publishing or disclosure of an idea will render the patent unenforceable. Often, the inventor brags of the invention to colleagues in the profession through articles, web pages, chat rooms, or speeches. These inventors (who are at times professional academics) are judged by their peers by how prolifically they produce meaningful innovations in their fields. Also, a smaller software firm, attempting to generate product excitement to attract customers and investors, will prematurely boast of its great idea. Many patents have turned to legal dust from the early and innocent enthusiasm of the inventor. In this case, business clearly trumps academics in a for-profit technology firm: disclose as little as possible until the concept is safely on file with the Patent and Trademark Office.

Follow the Rules

The list above may not address all the issues. The firm that can avoid these mistakes, however, and follow these precautions, may profit far more for their hard work developing software. Elegant code may generate cheers from engineers, but it is effective legal protection that will generate profits from customers.

Who Owns Your Software? Software Copyright and the Work for Hire Doctrine

Businesses all over the United States hire software developers to create software that offers a competitive advantage or cuts operating costs. Frequently both business owners and software developers enter into these agreements to develop software without addressing the issue of copyright. How does copyright law apply to these kinds of agreements, especially in cases where copyright ownership is not addressed explicitly? Who owns the software?

Think about a car repair shop owner who hires a developer to create a program that allows the shop to keep track of when its customers are due for scheduled maintenance and automatically generates a reminder email telling the customer it is time for the maintenance. The owner and the developer negotiate a $20,000 price for producing the program, and the developer agrees to have the project ready for installation in three months' time. The parties send correspondence back and forth agreeing to these basic terms but never address copyright ownership.

The developer creates the program at his home, installs it on time, and receives his $20,000 fee. The program exceeds all expectations, customers rave about the customer service, and profits rise. The repair shop owner brags about this new service to his competitors. Several of them inquire if they might purchase the program and thoughts of a yacht and early retirement flood through the owner's head.

The shop owner is about to sign the first agreement to sell the software when he receives a letter from the software designer stating that the owner has no right to sell the software since the developer, not the shop owner, owns the copyright. The shop owner never thought about the copyright but thinks he must own it since he paid $20,000 for the program. Only at this point do the parties contact their respective lawyers for advice on who owns the copyright.

Copyright ownership is critical, since the copyright owner will have the exclusive right to reproduce, distribute, and create a derivative work (among other rights). Under our facts, if the shop owner does not own the copyright, he does not have the right to reproduce the software to sell to others. If he did so without the software developer's permission, the developer could sue him for damages arising from copyright infringement.

The federal law addressing this situation is the Copyright Act of 1976, 17 USC 201(a). The general rule is that the author of the work owns the copyright. The Copyright Act, however, contains an important exception called the "work for hire" doctrine. If the facts establish that the "work for hire" doctrine applies, the person for whom the work was created (in this case the shop owner) owns the copyright. The doctrine applies when an employee creates a work within the scope of his or her employment, or when a certain type of work is specially ordered or commissioned and the parties expressly agree that it is to be considered a work for hire [Freiburn, 2004].

In our case, the bare facts indicate that the software designer is not the shop owner's employee. As a result, the employee portion of the "work for hire" doctrine does not apply. What about the second portion of the "work for hire" doctrine? Might it apply and revive the shop owner's dreams of early retirement?

Applying the second portion of the "work for hire" doctrine requires that the facts establish the following three conditions:

1. The work must be specially ordered or commissioned. This simply means that the shop owner hired and paid the designer to create something new, as opposed to paying him for an existing work [Jassin, 2004]. That appears to be the case here: the shop owner hired the software designer to create a new software program.
2. Both parties agreed, prior to beginning the work, that the work would be considered a work for hire [Jassin, 2004]. In our case, the parties did not address this point at any time before the software designer began the work.
3. The work must fall into at least one of the following nine categories under the Copyright Act:
1. translation;
2. motion picture contribution or other audiovisual work;
3. collective work contribution (magazine article);
4. an atlas;
5. a compilation;
6. instructional text;
7. a test;
8. answer material for a test; and
9. supplemental work such as a preface to a book.

In our case, software does not fall under any of these categories. In other words, even if the parties had agreed to designate the software as a work for hire before the work began, the designation would likely be invalid because software does not fall within any of the categories listed above.

The software developer is the copyright owner. Unless the shop owner gets the developer’s permission, his yacht will remain firmly anchored in the showroom, and he will be at work bright and early Monday morning.

The issue raised in my fictional story could have been avoided if the parties had sought legal advice prior to negotiating the agreement. The copyright ownership issue could have been addressed with the assistance of legal counsel in preparing the agreement.

From the shop owner's perspective, the agreement might have provided that the software developer assigned his copyright ownership to the shop owner as part of the deal. Although the developer might be reluctant to do this, or might require an additional fee to give up the right, the issue is at least on the table up front for resolution. As a rule of thumb, employing counsel to resolve a dispute like the one in this article will likely cost at least three times as much as having lawyers raise and resolve these issues when the agreement is first negotiated.

Copyright law is an area that regularly defies the mathematical logic used to create the software programs to which the law applies. When your livelihood or business revenue depends on your intellectual property ownership rights, an ounce of legal prevention averts the expenditure of three times the amount for prescribing a legal cure.

Thursday, December 28, 2006

Supporting Software Development in Virtual Enterprises

Overview of the DHT approach

Our approach to project coordination and sharing of project artifacts is implemented in a framework that employs two complementary forms of information integration:

* logical integration provides a view of the shared information space based on a virtual central artifact repository to facilitate project coordination
* physical integration provides transparent access to objects that appear in the virtual repository, but are actually stored and managed in autonomous, distributed, heterogeneous repositories.

The structure of the virtual repository is described with a semantic hypertext data model. [NS91] Hypertext is an information management concept that organizes data into content objects called nodes, containing text, graphics, binary data, or possibly audio and video, that are connected by links which establish relationships between nodes or sub-parts of nodes. The resulting directed graph, called a hypertext corpus, forms a semantic network-like structure that can capture rich data organization concepts while at the same time providing intuitive user interaction via navigational browsing.

The DHT hypertext data model augments the traditional node and link model with aggregate constructs called contexts that represent sub-graphs of the hypertext, and dynamic links that allow the relationships among nodes to evolve automatically as artifacts are created and modified. The DHT data model defines the structure of objects in the global hypertext, and the operations (including updates) that may be performed on them.

DHT achieves physical integration with a client-broker-server architecture that provides transparent access to heterogeneous repositories through intermediary information brokers we call transformers. Clients are software tools (or engineering environments) that developers use to access objects concurrently in server repositories.

Over the past five years that the DHT prototype has evolved, about a dozen different types of software development tools and heterogeneous software repositories have been integrated to run within DHT.

2.1 Architecture

The DHT architecture is based on a client-broker-server model. [ACDC96] Clients implement all application functionality that is not directly involved in a server's storage management. Thus, a client is typically an individual tool, but may be a complete software development environment.

Software artifacts are exported from their native repository through server transformers. A transformer is a kind of mediator that exports local objects (artifacts and relationships) as DHT nodes and links, and translates DHT messages into local repository operations (Figure 2). Transformers run at the repository site, typically on the same host as the repository; thus, from the repository viewpoint, the transformer appears to be just another local tool or application.



A request-response style communication protocol implements the operations specified in the DHT data model, [NS91, NS94] and includes provisions for locating transformers and authenticating and encrypting messages. The protocol also provides a form of time stamp-based concurrency control [KS86, NS94] to track and prevent 'lost updates'.
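The time stamp-based scheme for preventing 'lost updates' can be sketched as follows. This is a minimal illustration, not the actual DHT protocol code; the names (`Repository`, `StaleUpdateError`) and the logical clock standing in for real time stamps are assumptions made for the sketch. The idea is that an update request must present the time stamp the client read, and is rejected if another client has committed a newer version in the interim.

```python
class StaleUpdateError(Exception):
    """Raised when another client has updated the object since it was read."""

class Repository:
    def __init__(self):
        self._objects = {}   # oid -> (contents, timestamp)
        self._clock = 0      # logical clock standing in for real time stamps

    def create(self, oid, contents):
        self._clock += 1
        self._objects[oid] = (contents, self._clock)
        return self._clock

    def read(self, oid):
        contents, stamp = self._objects[oid]
        return contents, stamp   # client must remember the stamp it read

    def update(self, oid, contents, read_stamp):
        _, current = self._objects[oid]
        if read_stamp != current:         # someone committed after our read
            raise StaleUpdateError(oid)   # the update would be 'lost'
        self._clock += 1
        self._objects[oid] = (contents, self._clock)
        return self._clock
```

A client that reads an object, then tries to update it with a stale time stamp, gets an error instead of silently overwriting another developer's work.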

Our experience has been that transformers for new repositories can be developed with modest effort (i.e. hours to days), based on reusable server templates that are augmented with code to interact with specific repositories.

2.2 Data model

The DHT data model consists of three types of primitive objects: nodes, which represent content objects such as program modules or project documents; links, which model relationships among nodes; and contexts, which enumerate sets of links to allow object compositions to be specified as sub-graphs. Nodes, links and contexts are all first-class objects, each with a type, attributes and a unique object identifier (OID). In addition, links have anchors, which specify regions or sub-components within node contents to which the endpoints of a link are attached.

Contexts enumerate, but do not actually contain, links. Thus, a given link can be a member of several contexts, making it possible to compose different views of the same objects by imposing different structures or configurations as described by links among those objects. Contexts are also first class objects, and so may serve as the endpoints of links.
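The three primitive object types can be sketched in a few classes. This is an illustrative model only, not the actual DHT API; the class names and the global OID counter are assumptions for the sketch. The key property from the text is preserved: contexts enumerate link OIDs rather than containing links, so one link can belong to several contexts, giving different views of the same objects.

```python
import itertools

_oid_counter = itertools.count(1)   # stands in for DHT's OID allocation

class DhtObject:
    """Nodes, links and contexts are all first-class: type, attributes, OID."""
    def __init__(self, obj_type, **attributes):
        self.oid = next(_oid_counter)
        self.type = obj_type
        self.attributes = attributes

class Node(DhtObject):
    """A content object, e.g. a program module or project document."""
    def __init__(self, obj_type, contents, **attributes):
        super().__init__(obj_type, **attributes)
        self.contents = contents

class Link(DhtObject):
    """A relationship between two objects; anchors attach endpoints to
    regions or sub-components within node contents."""
    def __init__(self, obj_type, source_oid, target_oid,
                 source_anchor=None, target_anchor=None, **attributes):
        super().__init__(obj_type, **attributes)
        self.source = source_oid        # endpoints may be nodes or contexts
        self.target = target_oid
        self.source_anchor = source_anchor
        self.target_anchor = target_anchor

class Context(DhtObject):
    """Enumerates (but does not contain) links, describing a sub-graph."""
    def __init__(self, obj_type, link_oids=(), **attributes):
        super().__init__(obj_type, **attributes)
        self.link_oids = set(link_oids)
```

For example, a single 'describes' link between a design document node and a module node can be listed in both a release-view context and a review-view context without being duplicated.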

A fixed set of operations can be applied to DHT objects: create, delete, read and update an object. The owners or administrators of a given repository can elect to provide any subset of these operations (e.g. segmented by user groups, network location, or by type of client), as appropriate for the level of access they intend to offer. In addition, any operation can be performed by a single repository on its own objects. Cooperation among repositories is not required. Therefore, the DHT model preserves a high degree of local repository autonomy.

3 Tool integration

Whether artifacts are stored in a real or virtual repository, software developers create, manipulate and configure shared artifacts using software tools and environments. Many of these objects will exist before the virtual enterprise is formed, and thus before integration by DHT. It is impractical to expect developers and organizations to abandon their favorite tools to use new tools that can access a DHT corpus. Therefore, DHT includes a strategy for migrating existing and new tools to the DHT environment, and a configurable cache mechanism to enable alternative approaches for creating access to, and controlling concurrent updates to, collaborative information spaces.

The migration strategy specifies five levels of integration:

* Level 0. At the foundation level DHT provides a process integration capability [cf. MS92] that enables the configuration (via incremental modeling) and binding of individual developers to development roles, process tasks, and product components to appropriate (client-side) tools. During process prototyping the choice of tool(s) may be unspecified (no tool) or specified by class name or similar place holder (tool stub or bitmap), which enables process walkthrough or simulation. [S96, SM97] To support process enactment, executable tools must be bound to corresponding task steps in order to be invoked on the specified product component.

* Level 1. At this level tools are not integrated at all. They exist unmodified, or as 'helper applications', and require auxiliary tools (e.g. Web browsers) to interact with DHT on their behalf. Auxiliary tools simply perform node retrieval and update, and link resolution, to and from a tool's standard input/output, or files in the local file system. The use of a Web form-based interface to an existing relational data base management system would be an example.

* Level 2. Integration at this level treats DHT nodes as file-like objects. Tools use file system calls like open(), read(), write(), etc., to access a node's contents, passing a string representation of the node's OID rather than a file pathname. Level 2 integration can be accomplished without recompiling or modifying source code; simply relinking the tool with a DHT compatibility library (described below) is all that is required. Note, however, that Level 2 tools do not have knowledge of DHT links.

* Level 3. At this level a tool is aware of links as relationships among objects, and can follow them. This awareness does not appear at the user interface. An example of a level 3 tool is a document compiler that resolves links of type 'include' to incorporate text from other nodes into a source node.

* Level 4. Finally, at this level a tool integrates hypertext browsing and linking into its user interface. This may require extensive modification to the tool's source code. Fortunately, many tools and environments incorporate extension languages or escapes to external programs that can be used to implement linking without re-compilation. For example, this technique was used to implement the DHT editor using GNU Emacs Lisp.

Process modeling and enactment are supported at Level 0, as described later, and can be used together with any other level of tool integration.

Levels 0 and 1 provide 'facade-level' integration of tools at the user interface. Levels 2 to 4 provide increasing scope for data integration capabilities. Control integration of the kind represented by a software bus or similar message/event broadcast mechanism is not provided, however. As Reiss [R96, p. 405] observes, control integration forces tools to have a common reference framework, which is typically a file name and line number. In this regard, the Level 2 file system emulation scheme could therefore be used to support compatibility with such a control integration scheme. The following sub-sections expand on the role of DHT's file system emulation scheme and object caching framework.

A vast legacy of software development tools and environments use the file system as their repository. These applications read and write objects as files through the file system interface, typically by calling standard I/O library routines supplied for the tool's implementation language. Our goal to provide a reasonable cost implementation strategy precludes requiring that all of these tools be modified to use the DHT tool/application interface in place of the file system library.

To solve this problem, the DHT architecture exploits the file-like nature of DHT atomic nodes to provide a file system emulation library. This library intercepts file system calls and converts them to DHT access operations when strings encoding DHT object identifiers are passed as the pathname argument; the Unix version of the library, for example, provides entry points mirroring the standard I/O calls such as open(), read() and write().

To enable a tool for DHT access, one simply re-links it with the emulation library. Thus, the tool will continue to function as before when invoked with real file names, yet will retrieve contents from the DHT object cache (described below) when DHT object identifiers are used.
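The dispatch idea behind the emulation library can be illustrated as follows. The real library is a relinking shim for compiled tools; this Python sketch only shows the pathname test, and the `dht:` prefix, `object_cache` dictionary and `dht_open` name are all assumptions made for the illustration. A "pathname" that encodes a DHT object identifier is served from the local object cache, while an ordinary file name falls through to the real file system.

```python
import builtins
import io

object_cache = {}  # OID string -> cached node contents (assumed populated)

_real_open = builtins.open   # keep a handle on the ordinary file system call

def dht_open(path, mode="r", *args, **kwargs):
    """Open either a DHT node (by encoded OID) or a real file."""
    if isinstance(path, str) and path.startswith("dht:"):
        oid = path[len("dht:"):]
        if "r" in mode:
            # Serve the node's contents from the cache as a file-like object.
            return io.StringIO(object_cache[oid])
        # Writes would go back through the cache layer in a fuller sketch.
        raise NotImplementedError("write access goes through the cache layer")
    return _real_open(path, mode, *args, **kwargs)  # a real file name
```

A tool using this wrapper continues to work unchanged on real files, yet reads node contents from the object cache when handed an OID string instead of a pathname, which is exactly the behavior the relinked library provides.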

3.1 Object caching

Many software development artifacts which DHT manages change slowly, while others see frequent access during a short period of time. To facilitate collaborative information sharing, and to improve access latency and reduce transformer loads, we have found it desirable to cache frequently used objects, especially those from repositories accessed over the Internet.

A cache layer is built into the basic request interface to provide transparent node and link caching. The cache is maintained in the local file system; node contents are cached in separate files to support the file system emulation library discussed above, while links and node attributes are cached in a hash table. Clients call a set of object access functions to retrieve objects through the cache layer.

Each DHT object has a 'time-to-live' attribute that specifies the length of time an object in the cache is expected to be valid. The cache layer uses this attribute, set by the transformer when the object is retrieved, to invalidate objects in the cache upon access. An administrative function periodically sweeps through the entire cache to remove objects whose time-to-live has expired.

The time-to-live attribute is not a guarantee of validity, however. Certain shared objects may be updated frequently by multiple clients. To allow such clients to verify that requested objects have not been modified by another client, the cache layer can be configured with different cache policies to support specific tool/application needs:

* Never use the cached copy; always retrieve an object from the repository.
* Use the cached copy if its time-to-live has not expired.
* Use the cached copy if it has not been modified; verify this by retrieving the object's time stamp from the repository.
* Always use the cached copy if present, regardless of its time-to-live or time stamp.
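The four policies above, together with the time-to-live attribute, can be sketched in one retrieval function. The names (`CachePolicy`, `fetch`) and the shape of the repository interface are assumptions for the sketch, not the actual DHT API; each cache entry holds the contents, the repository time stamp, and an expiry computed from the object's time-to-live when it was retrieved.

```python
import time
from enum import Enum, auto

class CachePolicy(Enum):
    NEVER = auto()    # always retrieve the object from the repository
    TTL = auto()      # use the cached copy while its time-to-live holds
    VERIFY = auto()   # use the cached copy only if the repo stamp matches
    ALWAYS = auto()   # use the cached copy whenever one is present

def fetch(oid, cache, repo, policy, now=None):
    """Retrieve an object through the cache under the given policy."""
    now = time.time() if now is None else now
    entry = cache.get(oid)
    if entry is not None:
        contents, stamp, expires = entry
        if policy is CachePolicy.ALWAYS:
            return contents
        if policy is CachePolicy.TTL and now < expires:
            return contents
        if policy is CachePolicy.VERIFY and stamp == repo.timestamp(oid):
            return contents
    # Cache miss, expired, stale, or policy NEVER: go to the repository.
    contents, stamp, ttl = repo.retrieve(oid)
    cache[oid] = (contents, stamp, now + ttl)
    return contents
```

Under the TTL policy a second access within the time-to-live window never touches the repository, while the NEVER policy goes to the repository on every access, which is the trade-off between latency and freshness the text describes.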

The cache interface layer does not automatically write updates through to the repository. Instead, a separate function DhtSync() causes the cache to send an update request to synchronize the cached copy with that in the repository. This enables DHT-integrated tools to tailor their cache access to different policies for concurrent object access and update, which is especially important when dealing with legacy software development repositories that impose different user workspace models. Therefore, when using DHT, we need not endorse a particular workspace model as 'best in all circumstances', and can thus avoid or mitigate some of the costs of transitioning to a different workspace model.



As indicated in Figure 3, by delaying synchronization and specifying the non-validating cache policy, the cache can be used as a 'local workspace'. Objects, once placed into the cache, are read and updated locally, and thus are not affected by updates from other developers. A 'sweep' application periodically synchronizes the cached copies, possibly invoking tools that will merge objects that have changed in the interim.

Alternately, updates can be written-through immediately, by calling DhtSync() after each update operation. [cf. BHP92] This, coupled with the verifying cache policy, can be used to implement a 'shared workspace' policy for development (Figure 3), in which each developer sees updates from other developers upon object access.

To simulate an 'RCS-style' of version-controlled development, in which developers obtain a transaction or exclusive write access to an object through locking, a lock attribute must be added to objects. The lock is bound to the user-ID of the developer who seeks to control the object. The DHT concurrency mechanism ensures that only one developer can set the lock, which is cleared when the object is 'released'. However, applications must cooperate by not modifying objects unless they have successfully set the lock attribute; there is no way to enforce the lock by denying updates if someone insists on updating an object. This policy can be coupled with the validating or non-validating cache policy, depending on the preferences of the developer.
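The advisory nature of this locking scheme can be made concrete with a small sketch (class and method names are illustrative, not the DHT API): the lock is just an attribute bound to a user-ID, only one developer at a time can hold it, and well-behaved tools check it before modifying the object, even though nothing physically prevents a rogue write.

```python
class LockDenied(Exception):
    """Raised when the advisory lock is held by another developer."""

class LockableObject:
    def __init__(self, contents):
        self.contents = contents
        self.lock = None   # user-ID of the developer holding the lock

    def acquire(self, user_id):
        """Set the lock attribute; only one developer may hold it."""
        if self.lock is not None and self.lock != user_id:
            raise LockDenied(f"locked by {self.lock}")
        self.lock = user_id

    def release(self, user_id):
        """Clear the lock when the object is 'released'."""
        if self.lock == user_id:
            self.lock = None

    def update(self, user_id, contents):
        # Cooperative check: a well-behaved tool refuses to modify an
        # object unless it has successfully set the lock attribute.
        if self.lock != user_id:
            raise LockDenied("set the lock attribute before updating")
        self.contents = contents
```

A second developer attempting to acquire or update a locked object is turned away until the holder releases it, which reproduces the RCS-style exclusive check-out discipline for cooperating applications.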

Taken together, the multi-level scheme for integrating new and legacy tools, and the support for different policies for configuring object-sharing workspaces, provide DHT with the ability to configure and coordinate collaborative workspaces within a project. These workspaces can then be accessed and updated concurrently using tools familiar to distributed developers, even though the individual tools and object repositories may either lack such support or implement different policies for sharing access and synchronizing updates. Nonetheless, the challenge remains of how best to support cache consistency while maintaining autonomy.

From Nothing, to Monumental, to Agile

Most software development is a chaotic activity, often characterized by the phrase "code and fix". The software is written without much of an underlying plan, and the design of the system is cobbled together from many short-term decisions. This actually works pretty well while the system is small, but as the system grows it becomes increasingly difficult to add new features. Furthermore, bugs become increasingly prevalent and increasingly difficult to fix. A typical sign of such a system is a long test phase after the system is "feature complete". Such a long test phase plays havoc with schedules, as testing and debugging are impossible to schedule.

The original movement to try to change this introduced the notion of methodology. These methodologies impose a disciplined process upon software development with the aim of making software development more predictable and more efficient. They do this by developing a detailed process with a strong emphasis on planning inspired by other engineering disciplines - which is why I like to refer to them as engineering methodologies (another widely used term for them is plan-driven methodologies).

Engineering methodologies have been around for a long time. They've not been noticeable for being terribly successful. They are even less noted for being popular. The most frequent criticism of these methodologies is that they are bureaucratic. There's so much stuff to do to follow the methodology that the whole pace of development slows down.

Agile methodologies developed as a reaction to these methodologies. For many people the appeal of these agile methodologies is their reaction to the bureaucracy of the engineering methodologies. These new methods attempt a useful compromise between no process and too much process, providing just enough process to gain a reasonable payoff.

The result of all of this is that agile methods have some significant changes in emphasis from engineering methods. The most immediate difference is that they are less document-oriented, usually emphasizing a smaller amount of documentation for a given task. In many ways they are rather code-oriented: following a route that says that the key part of documentation is source code.

However I don't think this is the key point about agile methods. Lack of documentation is a symptom of two much deeper differences:

* Agile methods are adaptive rather than predictive. Engineering methods tend to plan out a large part of the software process in great detail for a long span of time; this works well until things change, so their nature is to resist change. The agile methods, however, welcome change. They try to be processes that adapt and thrive on change, even to the point of changing themselves.
* Agile methods are people-oriented rather than process-oriented. The goal of engineering methods is to define a process that will work well whoever happens to be using it. Agile methods assert that no process will ever make up for the skill of the development team, so the role of a process is to support the development team in their work.

In the following sections I'll explore these differences in more detail, so that you can understand what an adaptive and people-centered process is like, its benefits and drawbacks, and whether it's something you should use: either as a developer or customer of software.

Programmers are Responsible Professionals

A key part of the Taylorist notion is that the people doing the work are not the people who can best figure out how to do that work. In a factory this may be true for several reasons: partly because many factory workers are not the most intelligent or creative people, and partly because there is a tension between management and workers, in that management makes more money when the workers make less.

Recent history increasingly shows us how untrue this is for software development. Increasingly bright and capable people are attracted to software development, attracted by both its glitz and by potentially large rewards. (Both of which tempted me away from electronic engineering.) Despite the downturn of the early 00's, there is still a great deal of talent and creativity in software development.

(There may well be a generational effect here. Some anecdotal evidence makes me wonder whether more bright people have ventured into software engineering in the last fifteen years or so. If so, this would be one reason for the cult of youth in the computer business; like most cults, it needs a grain of truth in it.)

When you want to hire and retain good people, you have to recognize that they are competent professionals. As such they are the best people to decide how to conduct their technical work. The Taylorist notion of a separate planning department that decides how to do things only works if the planners understand how to do the job better than those doing it. If you have bright, motivated people doing the job then this does not hold.

Plug Compatible Programming Units

One of the aims of traditional methodologies is to develop a process where the people involved are replaceable parts. With such a process you can treat people as resources who are available in various types. You have an analyst, some coders, some testers, a manager. The individuals aren't so important, only the roles are important. That way if you plan a project it doesn't matter which analyst and which testers you get, just that you know how many you have so you know how the number of resources affects your plan.

But this raises a key question: are the people involved in software development replaceable parts? One of the key features of agile methods is that they reject this assumption.

Perhaps the most explicit rejection of people as resources comes from Alistair Cockburn. In his paper Characterizing People as Non-Linear, First-Order Components in Software Development, he makes the point that predictable processes require components that behave in a predictable way. However, people are not predictable components. Furthermore, his studies of software projects have led him to conclude that people are the most important factor in software development.

In the title, [of his article] I refer to people as "components". That is how people are treated in the process / methodology design literature. The mistake in this approach is that "people" are highly variable and non-linear, with unique success and failure modes. Those factors are first-order, not negligible factors. Failure of process and methodology designers to account for them contributes to the sorts of unplanned project trajectories we so often see.

One wonders if the nature of software development works against us here. When we're programming a computer, we control an inherently predictable device. Since we're in this business because we are good at doing that, we are ideally suited to messing up when faced with human beings.

Although Cockburn is the most explicit in his people-centric view of software development, the notion of people first is a common theme with many thinkers in software. The problem, too often, is that methodology has been opposed to the notion of people as the first-order factor in project success.

This creates a strong positive feedback effect. If you expect all your developers to be plug compatible programming units, you don't try to treat them as individuals. This lowers morale (and productivity). The good people look for a better place to be, and you end up with what you desire: plug compatible programming units.

Deciding that people come first is a big decision, one that requires a lot of determination to push through. The notion of people as resources is deeply ingrained in business thinking, its roots going back to the impact of Frederick Taylor's Scientific Management approach. In running a factory, this Taylorist approach may make sense. But for the highly creative and professional work, which I believe software development to be, this does not hold. (And in fact modern manufacturing is also moving away from the Taylorist model.)

Managing a People Oriented Process

People orientation manifests itself in a number of different ways in agile processes, leading to different effects, not all of them consistent.

One of the key elements is that of accepting the process rather than the imposition of a process. Often software processes are imposed by management figures. As such they are often resisted, particularly when the management figures have had a significant amount of time away from active development. Accepting a process requires commitment, and as such needs the active involvement of all the team.

This ends up with the interesting result that only the developers themselves can choose to follow an adaptive process. This is particularly true for XP, which requires a lot of discipline to execute. Crystal considers itself a less disciplined approach that's appropriate for a wider audience.

Another point is that the developers must be able to make all technical decisions. XP gets to the heart of this where in its planning process it states that only developers may make estimates on how much time it will take to do some work.

Such technical leadership is a big shift for many people in management positions. Such an approach requires a sharing of responsibility where developers and management have an equal place in the leadership of the project. Notice that I say equal. Management still plays a role, but recognizes the expertise of developers.

An important reason for this is the rate of change of technology in our industry. After a few years, technical knowledge becomes obsolete; this half-life of technical skills is without parallel in most other industries. Even technical people have to recognize that entering management means their technical skills will wither rapidly, and ex-developers need to trust and rely on current developers.

Wednesday, December 27, 2006

Bluetooth Made it Easy!

Introduction

Bluetooth is an open specification for a cutting-edge technology that enables short-range wireless connections between desktop and laptop computers, personal digital assistants, cellular phones, printers, scanners, digital cameras and even home appliances on a globally available band (2.4GHz) for worldwide compatibility. In a nutshell, Bluetooth unplugs your digital peripherals and makes cable clutter a thing of the past.

Bluetooth Connections

Bluetooth devices are addressed in two ways: (1) When referring to the local device configuration, a Bluetooth device is the local Bluetooth hardware, which can be a USB dongle, a UART device, a PCMCIA card or a BCSP device. The user is expected to select the interface of his Bluetooth device in his configuration. (2) When referring to the Bluetooth application, it is the Bluetooth system as a whole, e.g. a Bluetooth modem, a Bluetooth mobile phone or a Bluetooth PDA.

Bluetooth typically supports two kinds of radio adapters: USB and CompactFlash (UART or BCSP). These adapters can be added to a PC or a notebook to make it Bluetooth enabled. A user initiates the bonding procedure and enters a passkey to create a paired relationship between two devices.

Security

Security is an important part of any wireless communication technology, allowing unauthorized access to your computer to be rejected. Bluetooth offers three levels of security:

Low (Security Mode 1, No security)
No security procedure is needed for connections.

Medium (Security Mode 2, Service level enforced security)
Authentication or Authorization is requested when a specific service is accessed by other Bluetooth enabled devices. If two devices are connecting for the first time, or if two devices do not have a trusted relationship, then the same passkey must be provided on both sides to complete the Authentication. This mode allows you to assign different access rights for each service supported by the server.

High (Security Mode 3, Link level enforced security)
If either of two devices is in Security Mode 3, Authentication is requested whenever a connection is initiated between two Bluetooth enabled devices. The passkey must be provided on both sides to complete Authentication.
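The difference between the two stricter modes can be summarized with a small sketch. The function below is purely illustrative; the name, parameters and string events are invented for this article, not part of any Bluetooth API:

```javascript
// Illustrative only: decides whether an authentication (passkey) step is
// triggered, based on the three security modes described above.
function authRequired(securityMode, event) {
    if (securityMode === 3) {
        // Link level: authenticate as soon as any connection is initiated
        return event === "connect";
    }
    if (securityMode === 2) {
        // Service level: authenticate only when a specific service is accessed
        return event === "serviceAccess";
    }
    return false; // Mode 1: no security procedure is needed
}
```

In Mode 2 a plain connection attempt does not trigger authentication; only access to a protected service does, which is what lets you assign different access rights per service.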

Bluetooth Capabilities


With Bluetooth enabled devices, a number of profiles are available for performing various tasks:

AV Headphone Profile

enables users to use a Bluetooth enabled headphone to listen to high-quality stereo music played on a computer. Only one Headset or AV Headphone connection can exist at a time, since there is only one virtual Bluetooth audio device.

Basic Imaging Profile

(BIP) enables users to receive pictures from a Bluetooth enabled device such as a digital camera, mobile phone, or other compatible device. It also enables remote control of shooting, display, and other imaging functions: you can control a camera to take pictures and receive pictures sent from BIP-enabled digital devices.

Bluetooth Dial-up Networking (DUN) Profile

enables users to dial up to the Internet wirelessly through a Bluetooth enabled modem or a mobile phone that supports the DUN Profile.

Bluetooth FAX Profile

enables users to send a fax from a computer via a Bluetooth enabled mobile phone or modem.

File Transfer Profile

(FTP) enables users to transfer files and/or folders between Bluetooth enabled laptops, desktops, PDAs, mobile phones, etc. You can connect to a Bluetooth enabled mobile phone and transfer files or folders to/from the phone, share a folder on your computer with other Bluetooth enabled devices, or access a shared folder on another Bluetooth enabled device.

Headset Profile

enables users to use a Bluetooth headset as a wireless earpiece and microphone, i.e. as a device for audio input/output.

Bluetooth Human Interface Device (HID) Profile
enables users to use a Bluetooth enabled HID device such as a keyboard, mouse or joystick to control your computer.

Bluetooth LAN Access Profile

(LAP) allows users to access a Local Area Network (LAN) via a Bluetooth enabled LAN access point. You can also use your computer as a LAN access point.

Bluetooth Object Push Profile

(OPP) enables users to send and receive Personal Information Management (PIM) data objects (including messages, notes, calendar items, and business cards) to and from a Bluetooth enabled PDA or mobile phone.

The following objects are supported:

Contacts (*.vcf)
Calendars (*.vcs)
Notes (*.vnt)
Messages (*.vmg)

You can push objects to a Bluetooth enabled mobile phone or PDA or, receive objects from a Bluetooth enabled mobile phone or PDA.

Bluetooth Personal Area Networking (PAN) Profile
enables PCs, laptops, PDAs, and other Bluetooth enabled devices to form either of two kinds of PAN networks. In a Group ad-hoc Network (GN), which functions as an isolated network, multiple PAN Users (PANUs) are linked together via a GN controller. Alternatively, a PAN can consist of multiple PANUs linked to a Network Access Point (NAP), which provides access to external Local Area Network (LAN) infrastructure. BlueSoleil supports all three of these device roles — GN (controller), PANU, and NAP.

For example, you can set up a Group Ad-hoc Network (peer-to-peer networking): one device acts as the GN, and the others function as PANU devices. These computers can then reach each other or run applications based on TCP/IP.
You can also access a LAN via a Network Access Point (or a computer acting as a NAP). After the computers connect to the NAP, they become members of the LAN and can communicate directly with other computers in the LAN.

Bluetooth Printer Profile

(HCRP) enables your computer to connect to a Bluetooth enabled printer.

Bluetooth Serial Port Profile
(SPP) provides PCs, laptops, PDAs, GPS receivers, cordless serial adapters, and other Bluetooth enabled devices with a virtual serial port, enabling them to connect with each other wirelessly via Bluetooth instead of a serial cable. Typically it uses four Bluetooth Serial Ports for out-going connections and two Bluetooth Serial Ports for incoming connections.

Bluetooth Synchronization (SYNC) Profile

enables users to synchronize PIM objects on their computer with those on other Bluetooth enabled computers, as well as on Bluetooth enabled mobile phones, PDAs, and other devices.

Four kinds of objects are supported in synchronization:

Contacts (*.vcf)
Calendars (*.vcs)
Notes (*.vnt)
Messages (*.vmg)
Supported Outlook versions: MS Outlook 2000, Outlook 2002 (xp), Outlook 2003.

Conclusion


Though the above capabilities are found in most Bluetooth enabling software, this article is based on BlueSoleil, used with Motorola's Bluetooth enabled mobile phones.

BlueSoleil is a Windows-based software from IVT Corporation that allows your Bluetooth® enabled desktop or notebook computer to wirelessly connect to other Bluetooth enabled devices such as cameras, mobile phones, headsets, printers, and GPS receivers. You can also form networks and exchange data with other Bluetooth enabled computers or PDAs.

Date Validation and Trimming Spaces in JavaScript

Summary


The following functions trim leading and trailing whitespace from a string.


/* Function to remove the leading & trailing spaces */
function Trim(strInput)
{
strInput = LTrim(strInput);
strInput = RTrim(strInput);
return strInput;
}

/* Function to remove the leading spaces */
function LTrim(strInput)
{
var intCntr;
for (intCntr=0; intCntr < strInput.length; intCntr++)
{
if (strInput.charAt(intCntr) != " ")
break;
}
return strInput.substr(intCntr, strInput.length);
}

/* Function to remove the trailing spaces */
function RTrim(strInput)
{
var intCntr;
for (intCntr=strInput.length-1; intCntr > -1; intCntr--)
{
if (strInput.charAt(intCntr) != " ")
break;
}
return strInput.substr(0, intCntr + 1);
}



Output : var strName = Trim("Senthil ");
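Any JavaScript engine with regular expression support can do the same job in a single function. This is just an alternative sketch of the Trim above, not a replacement for it:

```javascript
/* Regex alternative: removes leading and trailing spaces in one pass */
function TrimRegex(strInput)
{
    return strInput.replace(/^ +| +$/g, "");
}
```

For example, TrimRegex("  Senthil ") returns "Senthil".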

To validate a date, use the following function:





/* Function for validating the date
Declaring valid date character, minimum year and maximum year
*/
var dtCh= "/";
var dtCh1= "-";
var minYear=2000;
var maxYear=2050;
function isDate(dtStr)
{
var daysInMonth = DaysArray(12)
dtCh= "/";
dtCh1= "-";
if (dtStr.indexOf(dtCh)==-1)
{
if (dtStr.indexOf(dtCh1) == -1)
{
return false;
}
dtCh=dtCh1;
}

var pos1=dtStr.indexOf(dtCh)
var pos2=dtStr.indexOf(dtCh,pos1+1)
var strMonth=dtStr.substring(0,pos1)
var strDay=dtStr.substring(pos1+1,pos2)
var strYear=dtStr.substring(pos2+1)
var strYr=strYear
if (strDay.charAt(0)=="0" && strDay.length>1) strDay=strDay.substring(1)
if (strMonth.charAt(0)=="0" && strMonth.length>1) strMonth=strMonth.substring(1)

for (var i = 1; i <= 3; i++)
{
if (strYr.charAt(0)=="0" && strYr.length>1)
{
strYr=strYr.substring(1)
}
}
var month=parseInt(strMonth, 10)
var day=parseInt(strDay, 10)
var year=parseInt(strYr, 10)
if (pos1==-1 || pos2==-1)
{
return false
}
if (strMonth.length<1 || month<1 || month>12)
{
return false
}
if (strDay.length<1 || day<1 || day>31 || (month==2 && day>daysInFebruary(year)) || day > daysInMonth[month])
{
return false
}
if (strYear.length != 4 || year==0 || year<minYear || year>maxYear)
{
return false
}
if (dtStr.indexOf(dtCh,pos2+1)!=-1 || isInteger(stripCharsInBag(dtStr, dtCh))==false)
{
return false
}
return true
}

/* Function to build a lookup table of the number of days in each month
(index 1-12; February is set to 29 here and checked separately) */
function DaysArray(n)
{
var days = [];
for (var i = 1; i <= n; i++)
{
days[i] = 31
if (i==4 || i==6 || i==9 || i==11) {days[i] = 30}
if (i==2) {days[i] = 29}
}
return days
}



/* Function to check whether the input parameter is a number */
function isInteger(s){
var i;
for (i = 0; i < s.length; i++){
/* Check that current character is number */
var c = s.charAt(i);
if (((c < "0") || (c > "9"))) return false;
}
/* All characters are numbers */
return true;
}



/* Search through string's characters one by one.
If character is not in bag, append to returnString
*/
function stripCharsInBag(s, bag)
{
var i;
var returnString = "";
for (i = 0; i < s.length; i++){
var c = s.charAt(i);
if (bag.indexOf(c) == -1) returnString += c;
}
return returnString;
}

/* February has 29 days in any year evenly divisible by four,
EXCEPT for centurial years which are not also divisible by 400.
*/
function daysInFebruary (year)
{
return (((year % 4 == 0) && ( (!(year % 100 == 0)) || (year % 400 == 0))) ? 29 : 28 );
}




Output : alert(isDate(document.getElementById("tBDate").value));
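If you only need a quick validity check once the month, day and year have been parsed, an alternative sketch is to round-trip the components through JavaScript's built-in Date object. Date silently rolls invalid combinations over into the next month (e.g. February 30 becomes an early-March date), so a comparison catches them. The function name here is my own:

```javascript
/* Alternative sketch: rely on Date's rollover behavior. An invalid
   day/month combination changes under construction, so comparing the
   components after the round-trip detects it. */
function isValidDate(month, day, year)
{
    var d = new Date(year, month - 1, day); // Date months are 0-based
    return d.getFullYear() === year &&
           d.getMonth() === month - 1 &&
           d.getDate() === day;
}
```

For example, isValidDate(2, 29, 2004) is true (leap year) while isValidDate(2, 30, 2004) is false.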

A small Introduction to BizTalk Server 2004

Summary:

Microsoft BizTalk Server 2004 is an integration server product that enables you to develop, deploy, and manage integrated business processes and XML-based Web services. It supports the goal of creating business processes that unite separate applications into a coherent whole.

Introduction:

Microsoft BizTalk Server 2004 supports service-oriented architecture (SOA); the goal is to create business processes that unite separate applications into a coherent whole.

BizTalk Server has been used for application integration, where the following two scenarios are most important:
++ Connecting applications within a single organization, commonly referred to as enterprise application integration (EAI).
++ Connecting applications in different organizations, often called business-to-business (B2B) integration.

BizTalk Server 2004 Engine:

The BizTalk Server 2004 engine provides two core capabilities: a way to specify the business process, and a mechanism for communicating between the applications the business process uses.

The components of BizTalk Server 2004 Engine:
A. Incoming Message
B. Message Box
C. Business Rule Engine and Orchestrations
D. Outgoing Message

BizTalk Server engine
A business process is implemented as one or more orchestrations, each of which consists of executable code. These orchestrations are not created by writing code in a language such as C#, however. Instead, a business analyst uses the Orchestration Designer for Business Analysts to graphically organize a defined group of shapes to express the conditions, loops, and other behavior of the business process. Business processes can also use the Business Rule Engine, which provides a simpler and more easily modified way to express the rules in a business process, enabling more business-oriented users to directly create and modify sets of business rules. Each orchestration creates subscriptions to indicate the kinds of messages it wants to receive.

The message process is as follows:

Incoming Message

-- A message is received through a receive adapter. Different adapters provide different communication mechanisms, so a message might be acquired by accessing a Web service, reading from a file, or in some other way.

-- The message is processed through a receive pipeline. This pipeline can contain components that perform actions such as converting the message from its native format into an XML document or validating the message’s digital signature.

Message Box

-- The message is delivered into a database called the MessageBox database, which is implemented by using Microsoft SQL Server.

Orchestrations

-- The message is dispatched to its target orchestration, which takes whatever action the business process requires.

-- The result of this processing is typically another message, produced by the business process and saved in the MessageBox database.

Outgoing Message

-- This message, in turn, is processed by a send pipeline, which may convert it from the internal XML format used by BizTalk Server 2004 to the format required by its destination, add a digital signature, and more.

--The message is sent out through a send adapter, which uses an appropriate mechanism to communicate with the application for which this message is destined.
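The flow above can be sketched end to end in a few lines. Everything in this snippet is invented for illustration (it is not the BizTalk API), but it mirrors the receive pipeline, MessageBox, orchestration, and send pipeline sequence just described:

```javascript
// Purely illustrative stand-ins for the engine components described above
var messageBox = [];                        // stands in for the MessageBox database

function receivePipeline(raw) {             // e.g. convert native format to XML
    return "<msg>" + raw + "</msg>";
}

function orchestrate(xml) {                 // the business process produces a new message
    return xml.replace("order", "invoice");
}

function sendPipeline(xml) {                // convert internal XML to the destination format
    return xml.replace(/<\/?msg>/g, "");
}

function processMessage(raw) {
    var inbound = receivePipeline(raw);     // receive adapter + receive pipeline
    messageBox.push(inbound);               // saved to the MessageBox database
    var result = orchestrate(inbound);      // dispatched to the target orchestration
    messageBox.push(result);                // result saved back to the MessageBox
    return sendPipeline(result);            // handed to the send adapter
}
```

Note that both the inbound message and the orchestration's result pass through the MessageBox; the pipelines only handle format conversion at the edges.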

Three roles are necessary to create and maintain BizTalk Server 2004 applications. These roles, and the functions they perform with the BizTalk Server 2004 engine, are as follows:
++ Business analyst: Defines the rules and behaviors that make up a business process, and determines the flow of the business process by defining what information gets sent to each application and how one business document is mapped into another.
++ Developer: Implements the business process defined by the business analyst. Implementation includes tasks such as defining the XML schemas for the business documents that will be used, specifying the detailed mapping between them, and creating the orchestrations necessary to implement the business process.
++ Administrator: Performs tasks such as setting up communication among the parts and deploying the application in an appropriately scalable way.

Software Development's Evolution towards Product Design

Occasionally, some poor fellow at a dinner party makes the unfortunate mistake of asking what I do for a living. My initial (and quite subdued) response is that I help design software for artists.

Then comes the inevitable question, “Oh, so you are a programmer?” A gleam appears in my eye and I no longer feel obligated to blather on about the rainy weather. With a great flourish, I whip out my gold nibbed pen and draw a little diagram on a napkin that explains concisely how modern software development works. In the grand finale, I circle one of the little scribbles buried deep in the entire convoluted process and proudly proclaim ‘And that is what I do!”. This admittedly selfish exercise usually keeps everyone involved merrily entertained until dessert arrives.

After dozens of napkin defiling lectures, I’ve put together an extended PDF of my sketch for download. In short, we have a one page infographic that explains:

* The evolution of software development over four distinct eras.
* The key goals of software development and our saddest failures.
* Where software development is moving in the future.

The diagram also contains a surprising amount of poo. But then, that is the bigger lesson lurking within the scrawls. Much of what software developers create fails to serve the full spectrum of their customer’s needs. The funny part is that the usually non-technical folks that I’m talking to laugh heartily at this point…they know exactly what I’m talking about.

The Technocrat Era: Programmers serving programmers

At the dawn of software history, programmers wrote software for other programmers. This was a golden era. Life was so simple. The programmers understood their own technical needs quite intimately and were able to produce software that served those needs. The act of software development was a closed circuit. A programmer could sit in a corner and write code that he wanted. By default it also happened to apply to other programmers.



Programmers that grew up in these idyllic days still remember it fondly. There is still the programmer who will claim that all they need is EMACS and the latest version of GCC to make great software. For software intended for a highly technical audience, they may well be right.

The Early Business Era: Programmers attempt to serve others

The end of the Technocrat era came about due to a startling discovery: the vast majority of the world was composed of non-technical people. Artists, accountants, authors, history majors and other unexpected consumers roamed the prehistoric landscape outside the hallowed engineering halls.

A new class of software entrepreneur called a ‘business man’ came into existence. This new creature realized that the hordes of non-technical people had their own needs that might be served by a well written program running on one of these new fangled “personal computers.” The business man gathered programmers and told them to build software to solve things like balancing budgets or writing letters.

The software products that the programmers created were technical marvels. They provided practical benefits far beyond what was available. VisiCalc revolutionized finance. WordStar and WordPerfect forever changed the act of writing. Even the Quantel Paintbox changed the world of art.

And yet the recently converted users, who couldn’t live without the efficient, powerful and technically amazing new software, were curiously ambivalent. They liked what these new products did from a practical standpoint, but found them to be confusing and often quite irritating.

You see, the programmers treated their customers just like programmers. They made them memorize crazy expert keyboard conventions and loaded the product with dozens of obscure features. This is what the programmers wanted out of a piece of software, so they assumed that the customers must want the same.

Unfortunately, the customer needs were a different beast. Their needs could be roughly divided into two categories:

* Practical needs: They wanted a product that worked. For example, the product obviously needed to save time and money.
* Emotional needs: They also wanted products that possessed less tangible benefits. They wanted applications that treated them with kindness and understanding if they made a mistake. They even wanted to use products that were attractive and conferred status. The customers desired programs that appealed to the softer aspects of their humanity.

When emotional needs were raised, the programmers quickly determined that the customers were indeed “nuts.” The pure technocrats simply did not possess the broad skill set necessary to comprehend, never mind serve, the customer’s unexpectedly important emotional needs.

The Late Business Era: Programmers and artists meet and do battle

The software market had become quite competitive at this point. Business folks, experienced in the Jedi wisdom of more mature markets, reasoned that perhaps serving emotional needs could give their companies an edge. A few companies experimentally hired non-programmers such as artists, marketing and usability dabblers. I use the term artist quite liberally here since it captures that charming, hand-waving vagueness of all classes of “people people.”



Oh, the suffering that resulted. The inevitable culture clash was a bit like unleashing wild dogs upon one another and then watching them sulk afterwards.

None of the freshly introduced team members spoke one another’s language. The artists talked about fluff like color and mood. The marketing people made outrageous requests with zero comprehension about technical feasibility. The programmers were suddenly enslaved by bizarre, conflicting feature demands that they did not understand. “Make it friendlier” translates poorly into C++ code.

Let’s take something as simple as making an interface more appealing.

* The artist would whip up a picture sporting rounded corners and more pleasing colors. They’d send it over to the programmer with a hand-scribbled note: “Make this.”
* The programmer then would either A) State that an infinite number of programmers could never finish such a technical abomination or B) Recreate the image using rectangles and the preexisting color scheme.
* Everyone would then rage at one another about their general incompetence.

In these battles for dominance, the winners lost horribly. If the programmers got the upper hand, they produced software that, despite great technical accomplishments, was ugly, difficult to use and no better than currently existing products. If the artists got the upper hand, you ended up with lovely products that didn’t do anything worthwhile.

The best products came from those odd teams that managed to compromise. The technology was clumsy and the emotional benefits of the software shaky. But it was better than the crap that customers had to put up with before. The original Mac OS was a great example of this. Later versions of Windows also managed to address a few emotional needs.

The Product Design Era: Can programmers and artists learn to work together?

The clock of progress has moved forward once again. The competition ratcheted up one more notch, and it became evident that companies that failed to master the lessons of the last era would be punished by their customers. Our battle-scarred software companies were left looking for a better way.

Obviously, designing for emotional needs in addition to technical needs was a winning evolutionary strategy. In the last era however, mixing multiple potent skill sets together left organizational and cultural wounds that often were difficult to heal.

The companies that survived the influx of designers and marketing folks often relegated the survivors to their own separate silos. Marketing people got one org tree, developers got another. Artists and usability folks in most cases were shoved in random back corners to rot in black despair. Many groups were so busy protecting their domain that it was surprising that software got released at all. Release dates slipped by multiple years as the development talent stagnated in a cesspool of misplaced process.

What was needed was the most dramatic of transformations: a change in the core development process. What made it difficult is that the original culture of technocratic software development would need to evolve to support a broadly humanistic approach to product design.

The lead users of the Product Design Era

This change found root, as is the case for most dramatic transitions, from the most unlikely places.

There is a concept in product design known as the ‘lead user’[1]. This is a group of users that solves a difficult problem far ahead of the mass market. Often they’ll cobble together their own tools and discover fundamental process and technology issues years in advance of mainstream users.

The old joke about writing Shakespeare with infinite monkeys randomly typing on typewriters got it all wrong. Instead give me a million customers trying to do their job with broken tools and one of them will stumble upon a process that is truly better. By watching the edges of the market place, we gain great insight into the direction that the larger market will take in the future.

The lead users in software development came from several widely divergent areas.

* The first was the game industry. Here, small cross functional teams built products focused entirely on serving emotional needs. Due to intense competition and vicious delivery cycles, many teams were forced to innovate far outside the traditional software development methodologies.
* The second were companies like Apple that followed curious ‘design’ philosophies more similar to those pursued by consumer goods companies than software companies. What do shampoo companies and software development have in common? A remarkable amount, it turns out.
* The third were web design companies. Due to the low cost of entry and the early emphasis on the web as a marketing medium, the web design market is dominated by tiny, hungry graphic design firms. They brought with them a culture of small teams, close collaboration between artists and programmers, and a nearly slavish devotion to serving their customers.

What the next era looks like

Each of these lead users follows a variant of what is broadly known in more mature industries as “product design.” They see software development not as a pure technical exercise. Instead they look at it as an integrative process of building a new product that solves both their customer’s emotional and practical needs. Even when forced to use a “technically inferior” platform, the religious devotion to rapidly and effectively serving the customer’s complete spectrum of needs makes their product offerings more attractive than the competition.

Robert G. Cooper, a well known researcher on new product development, states that there are several core factors [2] (listed in order of importance) for any successful new product design process:

1. A unique, superior and differentiated product with good value-for-money for the customer.
2. A strong market orientation – voice of the customer is built in.
3. Sharp, early, fact-based product definition before product development begins.
4. Solid up-front homework – doing front-end activities like market analysis well.
5. True cross-functional teams: empowered, resourced, accountable, dedicated leader.
6. Leverage – where the project builds on the business’s technology and marketing competencies.
7. Market attractiveness – size, growth, margins.
8. Quality of the launch effort: well planned, properly resourced.

Many companies in the Late Business era already emphasize a few of these factors. However, there are some differences. Notice how technical competencies are important, but last on the list. Notice also how creating a solution to customer needs is first on the list.

Software companies that understand product design tend to pour their efforts into the following activities:

* Focus on a unifying team goal built around customer needs. They ensure that the product is always driven by customer needs, not internal whims.
* Work together in cross functional teams. They build an organization that encourages the sharing of skills to promote problem solving. They discourage the formation of silos of individual ‘experts’.
* Communicate clearly with a process that applies to all team members. They build a commonly followed process that includes both fuzzy front-end activities like design and production activities like coding. This process forms the language that unifies disparate groups.
* Work efficiently with design-friendly tools. The team avoids custom coding and ‘tossing it over the fence’ by adopting tools that work on a common data format across all skill sets.

Benefits of a product design philosophy

The benefits of a product design process are well documented. New products that deliver superior, unique benefits to the customer have a commercial success rate of 98% compared to 18.4% for undifferentiated products. These products reach an outstanding 53.5% market share.

Some of the highlights of a strong product design include:

* Create highly competitive products that achieve market dominance.
* Save money by focusing on the right features that bring customer value, not low yield ‘nice-to-have’ features.
* Create passionate customers that accelerate the spread of your marketing message.

To the losers, the success of their rivals appears miraculous. “How is it that a slow web app can take away market share from our superior desktop application?” they ask in surprise.

The answer will be simple. The successful company identified the correct emotional and practical needs of the customer and poured their efforts into serving those needs. The richer companies - flush with silos of ‘wise’ experts - fought with one another and threw random features at the customer. It is rarely about doing more; it is about grokking customers and doing the small set of correct things necessary to succeed.

Dangers of the Product Design Era

As with any process transition, some groups are adopting these techniques slower than others.

* Even within game development there are huge swaths of publishers and development teams that are ignorant of techniques used to incorporate market research, concept testing and new-to-the world innovation into their process.
* Many web developers still create new products by following their ‘gut’ without clearly identifying their ultimate customer.
* Most traditional desktop software developers are just barely escaping the Late Business era of functional silos and warring factions.

Most laggards who hold desperately onto their aging processes will die off in the face of advancing competition. Those with larger war chests merely buy a small period of additional learning time.

Unfortunately, many companies that attempt to adopt a product design philosophy will also fail, despite their best efforts. Cultural change is hard work. To adopt product design you must alter the most basic DNA of the company’s values.

* It involves asking vice presidents to give up their empires that they’ve fought decades to establish. What is the point of an organization of engineers when engineers are all just members of small cross-functional teams?
* It involves asking the men and women in the trenches to give up their own dreams of building their own empires according to the old rules.
* If the ultimate reward of the old system is an isolated corner office with windows, try convincing people to work together in a common war room that has walls covered with whiteboards.

Such traumatic change is absolutely necessary. If you want to make great products for happy customers, you need to make the transition to a broader product design methodology. We may have once been a clique of technocrats. But now we must take our place in the broader society by providing human solutions to the very human customers that we serve.