Thursday, February 22, 2007

The wise work manager for context-based scoping

It's a long-standing rule of thumb that we should not use java.lang.Thread within a Java EE server's servlet or Enterprise JavaBeans (EJB) container. The reason is that a managed environment should have complete visibility over all spawned threads; when users create their own threads, that visibility goes by the wayside. In this article, we explain how to reuse threads safely in a managed environment instead of creating new ones, running time-consuming tasks in parallel and thereby improving response time and throughput. To achieve these goals, we use the Work Manager for Application Servers specification, which provides an application-server-supported facility for concurrent execution. We describe how to convert a startup servlet to do tasks in parallel efficiently, how to do audit trailing without using Java Message Service, and how to change some time-consuming API calls into fast, asynchronous invocations.

The code and examples shared in this article successfully run in IBM WebSphere Application Server 6.0.2. The code can be reused with minor changes, however, in other environments that support the Work Manager API. Please refer to relevant application server resources.

The problem

Suppose we have some startup servlets in an existing Java EE application. There are 13 different modules in the complete ear file, a mix of war files (Web projects) and jar files (client or utility files). The existing startup servlet of one module takes about 6 minutes, because the application caches a lot of data from the database, loads and instantiates some classes from an XML configuration file, and so forth. Each Web module has a corresponding startup servlet; two of those servlets take about 10 minutes each to start, and the rest take about 30 seconds each. This happens sequentially, depending on the order in which the application server loads each Web module. Even worse, when the ear deploys in a clustered environment of 10 servers, the total startup time reaches 200 minutes, taking into account the sequential startup of the different cluster nodes.

Obviously, we need a performance boost. Our solution is to use the Work Manager API, which eventually reduces the startup time to less than a minute.

Introducing the Work Manager API

In considering how to solve our problem, we first think about the sequential loading of the startup servlets of the individual Web modules in the ear file. If we had a mechanism to run these startup servlets in parallel, the first phase of the solution would reduce the overall startup time to 6 to 7 minutes. (This assumes all startup servlets perform independent actions. We will explain how to handle dependent modules later.)

You may be considering spawning threads in each startup servlet. However, we have already mentioned that the Java EE specification does not advocate using our own threads within a container. Could we then use a single thread? There are two problems with this approach:

1. You are not supposed to create a thread within a container.
2. Even if you create a thread, how will you be able to pass the individual module-specific metadata (that is, the classpath information, the transactional context, the security settings, and so forth) to that single thread? That's impossible.

The Work Manager API can help out with this situation, freeing us from creating our own threads and giving us the facility to pass context-sensitive information. If you tell a work manager to do something, you can assume the container will perform the work as if the work manager is sitting within your current Web context.

A work manager starts a parallel daemon worker thread along with the application server. You configure the work manager through the administration console and give it a Java Naming and Directory Interface (JNDI) name, just as you attach a JNDI name to an EJB component. This JNDI name attached to the work manager is available in the global namespace. When you want your Web module to perform an action, you create an implementation of the Work interface and submit that instance to the work manager. The work manager daemon creates another thread, which invokes the run method of your Work implementation. Hence, using a thread pool, the work manager can service as many Work implementations as are submitted to it. Not only that, the work manager takes a snapshot of the current Java EE context on the thread when the work is submitted. We can consider this as a WorkWithExecutionContext object.

In Figure 1 below, we show the background scenario. When a module submits a task to the work manager, a corresponding context is created by inheriting all the contexts of the caller module.


Figure 1. Context-based execution

As an example, let's assume I have a startup servlet AServlet in module A, which is part of my ear file. Suppose the startup servlet in this module uses three jar files: log4j.jar, concurrent.jar, and moduleAspecific.jar. When we call the work manager from module A's AServlet, the work manager makes the classpath entries log4j.jar, concurrent.jar, and moduleAspecific.jar available in the execution context of the global work manager thread. If you have many different modules, say, module B and module C, tell the work manager to execute your Work implementations; then all your modules will execute in parallel, with none waiting for the others to complete.


Figure 2. Each module can be executed in parallel with a work manager. Click on thumbnail to view full-sized image.

A new concept based on the work manager, named asynchronous beans, can be found in WebSphere 5.0 and later versions. An asynchronous bean is a Java object or EJB that can be executed asynchronously by a Java EE application by using the Java EE context of its creator. IBM has revamped the work manager so that it now is considered a thread pool. This new concept, along with the use of Java's concurrent utilities, forms the basis for Java Specification Request 237 (Work Manager for Application Servers), which most likely will be incorporated into Java EE 6. The details and downloads are available in Resources.
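To get a feel for the model JSR 237 standardizes, here is a minimal stand-alone sketch using java.util.concurrent, the standard-Java utilities the specification builds on. Note that, unlike a work manager, this does not propagate any Java EE context to the worker threads; that context propagation is exactly what the work manager adds on top. The module names are placeholders:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelStartupSketch {
    public static void main(String[] args) throws Exception {
        // Thread pool playing the role of the work manager's pool
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Each Callable stands in for one module's startup work
        List<Callable<String>> tasks = List.of(
                () -> "module A cached",
                () -> "module B cached",
                () -> "module C cached");

        // Submit every task; the pool runs them in parallel
        for (Future<String> f : pool.invokeAll(tasks)) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```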

To illustrate how the manager works, we will use the following three classes from asynchbeans.jar, available within the lib directory of your WebSphere installation: Work, WorkManager, and WorkItem. You can start by creating a work manager, using WebSphere's administrative console.


Figure 3. Use the application server console for creating a work manager. Click on thumbnail to view full-sized image.

As shown in Figure 3, you can click on the Work Managers link on the left to create a work manager of your own. For our purposes here, however, we use the DefaultWorkManager, shipped with every WebSphere Application Server 5.x or 6.x. Please note the global JNDI name "wm/default" for this work manager. As explained before, this JNDI name is available across all the applications of your ear file, as it lives in the global namespace.

The solution

For the code that follows, you need one or all of the following three imports:

import com.ibm.websphere.asynchbeans.Work;
import com.ibm.websphere.asynchbeans.WorkItem;
import com.ibm.websphere.asynchbeans.WorkManager;

To improve the performance of our startup servlets, let's begin by writing a CommonServiceLocator, which will have one method named getWorkManager(), as shown below:

public static WorkManager getWorkManager()
{
    // workManager is a static variable
    if (workManager == null) {
        try {
            InitialContext ctx = new InitialContext();
            String jndiName = "java:comp/env/wm/default";
            workManager = (WorkManager) ctx.lookup(jndiName);
            System.out.println("WorkManager obtained");
        } catch (Exception ex) {
            OscarLogger.logException("", ex);
            System.out.println("Unable to look up work manager: " + ex.getMessage());
        }
    }
    return workManager;
}

Let's assume we can invoke the above method anywhere in any module (provided we have put the above class in the application server's classpath):

WorkManager wmger = CommonServiceLocator.getWorkManager();

Being a normal "resource," a work manager also needs to be registered in web.xml if you are trying to access it from the Web module:



<resource-ref>
    <description>WorkManager</description>
    <res-ref-name>wm/default</res-ref-name>
    <res-type>com.ibm.websphere.asynchbeans.WorkManager</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>


It's time to complete our StartupServlet:

public class StartupServlet extends HttpServlet
{
    public void init()
    {
        try
        {
            WorkManager wm = CommonServiceLocator.getWorkManager();
            wm.startWork(new StartupWork(configFileNames));
            // The configFileNames parameter is optional.
            // You may want to pass some XML or configuration files here.
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}

Here, we obtain our WorkManager instance with the help of the service locator we wrote earlier. Then we just call the startWork() method to initiate the action, which is written within the StartupWork class, explained below. Note that the thread won't wait for the action to finish: it's asynchronous. If you have, say, 10 Web modules, each with its own startup servlet and corresponding action class, the application server will invoke the init methods of all the servlets at startup without waiting for any of them to complete.

Quickly add StartupServlet to web.xml and give an appropriate value for load-on-startup so that StartupServlet's init method is called during the application server's startup process.
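For reference, a minimal web.xml entry for such a servlet might look like the following sketch (the package name com.example is a placeholder; use your module's actual class):

```xml
<servlet>
    <servlet-name>StartupServlet</servlet-name>
    <!-- Hypothetical package; substitute your own -->
    <servlet-class>com.example.StartupServlet</servlet-class>
    <!-- A non-negative value makes the container call init() at startup -->
    <load-on-startup>1</load-on-startup>
</servlet>
```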

As you can see in the above snippet, we need to define a StartupWork class. Please remember to implement the Work interface we have been talking about:

class StartupWork implements Work
{
    private String configFileNames;

    public StartupWork(String configFileNames)
    {
        this.configFileNames = configFileNames;
    }

    public void run()
    {
        try {
            // Do your time-consuming actions here, such as
            // loading and instantiating classes from, say, a config file,
            // connecting to a database and caching lots of data,
            // performing complex I/O operations,
            // reading and storing an XML DOM,
            // or doing asynchronous logging or audit trailing
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public synchronized void release()
    {
        // Release and clean up here
    }
}

As you can see, the Work interface simply extends the java.lang.Runnable interface. You need to implement two methods, run() and release(). In the run() method, you do all your heavy work; release() can be used for cleanup actions if needed. Note that the release() method should never be called directly from the application; the container calls it at runtime, for example when the Java Virtual Machine is shutting down. Also note that the above code could easily be extended to incorporate logging or audit trailing, which usually requires writing large amounts of formatted data to files.
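As a concrete illustration of such an extension, the following stand-alone sketch shows an audit-trail Work implementation. Since com.ibm.websphere.asynchbeans.Work is not available outside WebSphere, a local interface of the same shape stands in for it here, and the file name and record format are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Local stand-in with the same shape as com.ibm.websphere.asynchbeans.Work
interface Work extends Runnable {
    void release();
}

// Writes audit records off the request thread; in WebSphere you would
// submit an instance to the work manager instead of blocking on file I/O
class AuditTrailWork implements Work {
    private final Path auditFile;        // hypothetical target file
    private final List<String> records;
    private volatile boolean released;

    AuditTrailWork(Path auditFile, List<String> records) {
        this.auditFile = auditFile;
        this.records = records;
    }

    public void run() {
        if (released) return;            // honor an early release request
        try {
            Files.write(auditFile, records,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void release() {
        released = true;                 // the container asks us to stop
    }
}

public class AuditDemo {
    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("audit", ".log");
        new AuditTrailWork(f, List.of("user=jo action=login")).run();
        System.out.println(Files.readAllLines(f).get(0));
    }
}
```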

Finally, we would like to conclude by introducing one more class named WorkItem. Note that in the above code snippets, we do not capture the startWork() method's return value. Actually, startWork() returns an instance of WorkItem, through which you can capture the return values of your actions. Further, many WorkItems can be combined to tell the work manager to wait until all the submitted work has completed.

For example, assume we have a Work implementation called SampleWork, whose run method fires messages to a back-end mainframe system, and that we have three such messages to fire. Typically, we would fire them one at a time, so the overall time would be the sum of the individual times taken for each message. Using WorkItem, the approach could be:

// Code as before to get the work manager

// Submit three Work implementations; each returns a WorkItem
WorkItem witem1 = wm.startWork(new SampleWork(message1params));
WorkItem witem2 = wm.startWork(new SampleWork(message2params));
WorkItem witem3 = wm.startWork(new SampleWork(message3params));

// Collect the WorkItems in a list
List items = new ArrayList();
items.add(witem1);
items.add(witem2);
items.add(witem3);

// Join them: block until all three pieces of work complete
wm.join(items, WorkManager.JOIN_AND, (int) WorkManager.INDEFINITE);

In the above code, we collect the returned WorkItems and join on them until all of them complete. You may specify WorkManager.JOIN_OR instead to make the call return after any one of the threads completes. Obviously, the above approach reduces the execution time significantly when you have many such tasks to join.
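In standard-Java terms, the JOIN_AND and JOIN_OR semantics map roughly onto ExecutorService.invokeAll() and invokeAny(). The following stand-alone sketch, written under that assumption, shows both behaviors:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JoinSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        // Each Callable stands in for one SampleWork firing a message
        List<Callable<String>> messages = List.of(
                () -> "reply 1", () -> "reply 2", () -> "reply 3");

        // JOIN_AND analogue: block until every task has completed
        List<Future<String>> all = pool.invokeAll(messages);
        System.out.println("all done: " + all.size());

        // JOIN_OR analogue: return as soon as any one task completes
        String first = pool.invokeAny(messages);
        System.out.println("one done: " + first.startsWith("reply"));

        pool.shutdown();
    }
}
```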

Conclusion

This article examined the possibilities for high-performance Java EE applications using context-scoped threads created by the application server with the Work Manager API. Developers can use these high-performance threads to enhance the speed of startups, logging, audit-trailing and time-consuming method invocations without sacrificing core Java EE principles.

Speed up SOA/ESB software development with a "container-less" technique

Service-oriented architecture (SOA) and enterprise service bus (ESB) are two buzzwords that have gained significant momentum and traction in live architectures over the last few years. Speaking from firsthand experience, I would say SOA/ESB techniques do deliver increased modularity, flexibility and reusability of components, which has been the age-old promise of object-oriented programming (OOP) and even general software engineering. In particular, SOA/ESB appears to extend effectively on the hallowed OOP principle of encapsulation. But it seems all new paradigms come with a "dark side" that must be effectively recognized and managed to minimize undesirable consequences. What is the dark side of SOA/ESB? This article examines it and proposes a powerful technique and "quasi-architecture" as the solution.

The dark side of SOA/ESB is increased runtime and testing complexity. A complex SOA/ESB architecture can resemble a circuit wiring diagram, where service calls are the wires and operations are the components. However, circuits have an inherent one-way flow of electricity, and SOA can involve two-way communication (service A calls service B, service B calls service A). If you have n services, there are n-squared possible interactions between those services. Even limiting a system to some small subset of that total can lead to a tangle. Also, often later into development, one might realize that a particular function covered by service A is better handled by service B. Depending on the design, moving (refactoring) code between service boundaries can be difficult, time consuming and even fraught with peril. Is there some way to minimize all that?

The container

Outside this custom-built complexity is the problem of the container. For this article, "container" refers to a Java Platform Enterprise Edition (Java EE) Web server that supports Java Message Service (JMS) and, say, message-driven Enterprise JavaBeans (EJB), the basic and increasingly standardized enterprise approach to SOA/ESB. Two containers I have worked with are BEA WebLogic and JBoss. (They have properties similar to other containers, such as IBM WebSphere.) Despite all its power and justifiable fanfare, the container has a dark side too. I worked on a system involving just a half-dozen services, but noticed the following (note that many of these issues are also associated with simple client-server applications but are exacerbated with SOA/ESB):

  • Long startup and shutdown time for the container. At several seconds per service, startup time for all services was nearing two minutes.
  • Difficulty in debugging one service independently of another. If one service B is incorrectly compiling or behaving, it can be difficult for another service A to function, even if service A does not depend on B.
  • Complex classpath setup requirements sometimes interfere with correct functioning.
  • Complex container setup requirements. A distribution directory must be set up with the right jar files, Spring/Hibernate configuration files, etc. It is neither simple nor automatic to keep the distribution directory synched up with code changes.
  • On Windows, (re)compiles/builds can fail because the container is "holding on" to files such as log files or jar files; i.e., a delete or overwrite of a file fails because the file is open by some process. Hence, the container often must be shut down and restarted just to do a rebuild/redeploy of the code.
  • Using the debugger generally requires setting up a remote debugger. A developer must interrupt the process of running through code and frequently connect/disconnect. New deployment requires disconnecting/reconnecting the debugger. Many developers never set up remote debugging and resort to crude System.out.println and — not intentionally, but in practice — long edit-compile-test cycles.
  • Since the container is inherently multithreaded, the debugging view is significantly complicated. WebLogic immediately spawns dozens, or more, helper threads on startup even if none are used immediately, or ever. This requires initialization and memory, and dormant threads seem to slow JVM performance on active threads—i.e., a significant waste of resources for "low load" situations.
  • Container-based debugging typically involves scanning log files, which can be time consuming and problematic for many reasons:
      ◦ Too much output
      ◦ Must restart the container to make log4j logging-level changes
      ◦ Logs "scroll" to new files, and a simple scan does not keep up, thereby seeming to falsely indicate a code hang
      ◦ Difficult to understand what the system is doing if there is no log output
      ◦ Must recompile lots of code just to make minor new logging calls
      ◦ Cannot access critical internal variables and stack state

All of the above interfere, sometimes in a time-consuming way, with the critical edit-compile-test loop, where developers spend most of their time. This overall cost is not obvious, difficult to quantify, and almost invisible. But at the end of the day, it can all add up substantially. Developers do not always realize how much time they spend setting up the system just to get the code to run, because it is sometimes a mechanical, almost-unconscious detour.

Can one mix SOA/ESB with RAD, or rapid application development? The above enumeration of the quite substantial "dark side" seems to preclude it. The container (and Java EE at times) could be said to suffer from "monolithicity."

Container vs. container-less

Contrast the massive overhead described above to the fast approach of running code in the Eclipse environment. The developer can just right-click on a Java file and run it as an application or as a JUnit test to instantly start executing the code — just point and click, or plug and play.

Eclipse always automatically compiles the code. And it supports complex inter-package dependencies. This mode of execution is sometimes called "container-less," to contrast it with its complementary mode. The big question is: "Why is executing code in the Eclipse environment so quick and easy, as compared to executing code within the container?"

The answer is clearly that the container provides many subsystems that "wrap" the code in various layers of insulation, so to speak. The major capabilities provided are:

  • Concurrency: Any number of message-driven EJB components can run simultaneously on one machine.
  • Transactions: Atomic commit/rollback of the whole service call or parts of it.
  • Pooling: Such as with threads or database connections.
  • Distributed computing: The same EJB can be run on multiple machines.

For this article, let's call the general union of the above properties "scalability."

What is remarkable is how little of this container-centric functionality developers care about in just implementing specific business logic, generally the main point of value for custom enterprise development. Most business logic is not really oriented around any of the above areas. Thus, programmers must deal with two major layers of complexity: application logic and container wrapping.

If all containers are somewhat interchangeable, then a programmer's coding effort that frequently focuses on a container's general minutiae (say, for example, JMS message-passing infrastructure, or mock/test/simulation systems) can be repetitive and unproductive, or in other words, not valuable. Yet, on the multi-developer SOA/ESB projects with which I've worked, the developers spend significant, even excessive time on all the container wrapping.

This is part of the core of what industry analysts such as Bruce Tate (in his book Beyond Java) complain about in their criticisms of the complexity of Java EE. But if this is the case, how can we develop code that minimizes time spent on container wrapping? Is there some kind of architecture approach that can minimize the amount of time the programmer deals with container wrapping?

Container-less testing

An answer, model, prototype, and solution can be found in extending the existing technique of container-less JUnit testing. Developers, while developing and testing SOA/ESB-type code, may have JUnit tests that exercise some large part of a service independent of the container, using the Eclipse "Run as JUnit" Java File menu option.

Container-less tests executed in Eclipse or from Ant do not remove all the above pains and disadvantages associated with the container but definitely diminish many of them significantly. Container-less tests are sometimes not as easy to set up because all project effort is typically driven toward getting the container-based code to function correctly. But often the initial up-front effort is well worth it, because of the possible immense savings in the edit-compile-test cycle.

The immediate problem with the container-less test for SOA/ESB code is at the service boundary. What happens when one service calls another, as is the case for any nontrivial SOA/ESB system? One typical approach is to use stub code for inter-service calls. However, stubs can be time consuming to write and maintain. A whole set of parallel code must be maintained relative to live code. Often, to effectively or nontrivially test a service, nontrivial (dynamic) behavior from the stub is required. Test systems that attempt to mimic live systems can evolve into halls of mirrors, with programmers wandering, meandering, or finding themselves lost in them. A developer can get sidetracked building and maintaining complex stub/mock/simulator systems.

Container-less tests for various aspects of the system, e.g., live database calls (or Hibernate), are well known. The approach I advocate is something similar that might be called "container-less inter-service calls," which may sound like a contradiction, but is possible; the sophisticated technique is not widely known but has many advantages.

A foremost killer-app advantage is that virtually no time is required for recompiling via Ant targets, and no time is required for shutting down WebLogic, or restarting and redeploying. Eclipse recompiles the changed Java files almost instantly.

Another paradoxical advantageous property is that unit tests become nearly indistinguishable from integration tests. It is possible to write a thorough inter-service unit test that is in many ways actually an integration test. This pushes integration testing closer to individual developers and away from the larger time-consuming process of inter-developer integration, and thereby tightens the loop. Each developer participates in more of the complete, overall functionality, and there is less working in isolated silos, or less of the dreaded, project-killing heavy-pain-on-integration effect. (These concepts are explored by Fowler in his continuous integration philosophy.)

Note: Container-less tests require excellent understanding of the differences in the way the container and Eclipse manage the classpath and delicate care in setting up inter-project dependencies in Eclipse. The Eclipse classpath behavior can be extremely complex and can lead to subtle problems in running the container-less tests because of the multilayered way it builds up the classpath.

JUnit testing can be extended beyond mere tests to the overall functionality of the system. Consider some SOA/ESB processes that span multiple services. Imagine just having all the code in Eclipse and running it such that the code never makes any inter-service JMS calls — it just moves "quasi messages" around in memory while exercising all the code that crosses multiple services. Why not? Let's call this quasi-architecture "container-less SOA." It comes with the basic realization that services and JMS are not necessary for exercising all the business logic. JMS just boils down to a distributed parameter-passing system between Java methods.

Example: Container-less SOA

Enough of the pitch! Let's work through a tangible example and a live code fragment. Consider a basic emerging SOA/ESB (inter-service!) design pattern that could be called "the assembler." The assembler takes orders for a product and issues requests for parts to different services. This is a fundamentally cross-service operation, whose correctness is difficult to verify piecemeal. One standard approach might be to create mock part-services that reply with mock objects (mock parts).

Instead, consider the container-less SOA technique that exercises entirely live code. To be specific, the part-services are Customer, Order, AssetManagement, and AssetDetail. The Assembly service starts with a service call to Customer, then one to Order, then three in turn to AssetManagement, then AssetDetail, one more to AssetManagement, and finally one more to Order.

The container-less SOA code fragment to completely run all that business logic without mock objects looks as follows (the code reflects the condition, not strictly required, that the messages are sent and delivered on different JMS queue names):

public void sequence() throws CoreApplicationException, JMSException {
    Thread thread = new Thread(new Runnable() {
        public void run() {
            // Instantiate the "part" service beans directly; normally
            // the container creates these EJB components
            CustomerMessageBean customerbean = new CustomerMessageBean();
            OrderMessageBean orderbean = new OrderMessageBean();
            AssetManagementMessageBean assetbean = new AssetManagementMessageBean();
            DetailMessageBean detailbean = new DetailMessageBean();

            customerbean.ejbCreate();
            orderbean.ejbCreate();
            assetbean.ejbCreate();
            detailbean.ejbCreate(); // normally the container calls this EJB method

            MessageReceiver receiver = null;

            try {
                receiver = new MessageReceiver(jmsServerUrl);

                Message msg = receiver.receiveQueue("customer_queue");
                customerbean.onMessage(msg);

                // The assembly bean then calls the Order bean to find
                // out about the customer's order
                msg = receiver.receiveQueue("order_queue");
                orderbean.onMessage(msg);

                // Assembly then makes three sequential calls to assetbean
                // for digital assets, such as maps or images
                for (int i = 1; i <= 3; i++) {
                    msg = receiver.receiveQueue("asset_queue");
                    assetbean.onMessage(msg);
                }

                // Get some additional details on the assets, say map location info
                msg = receiver.receiveQueue("detail_queue");
                detailbean.onMessage(msg);

                // Asset rendering files, e.g., fonts
                msg = receiver.receiveQueue("asset_queue");
                assetbean.onMessage(msg);

                // Assembly then wants more info on the order again
                msg = receiver.receiveQueue("order_queue");
                orderbean.onMessage(msg);

                // Assembly then gets a final asset type
                msg = receiver.receiveQueue("asset_queue");
                assetbean.onMessage(msg);

            } catch (Exception e) {
                throw new RuntimeException(e);
            } finally {
                done = true;
            }
        }
    });

    // Start the above thread/mock-container/beans that will
    // service the assembly bean in a concurrent "helper" or "supplier" thread
    thread.start();

    // Receive the assembly message to "start assembly"
    MessageReceiver receiver = new MessageReceiver(jmsServerUrl);
    Message msg = receiver.receiveQueue("assembly_queue");

    AssemblyMessageBean assemblybean = new AssemblyMessageBean();
    assemblybean.ejbCreate();

    // In the next call, the "supplier" thread runs concurrently with assembly.
    // Assembly makes the requests and the thread supplies them in the exact
    // order. (If assembly makes a different sequence of calls than expected,
    // then a more sophisticated supplier thread is required.)
    assemblybean.onMessage(msg);

    // At this point assembly has sent and received its last request,
    // and the thread terminated immediately after it sent the last reply
}

Note that all the beans actually implement the standard Java EE message-driven bean interface, MessageDrivenBean. The supply-part service code runs in a thread to simulate the concurrency provided by the container. The start-assembly message is received at the bottom of the method and handed to the assembly bean. The assembly bean then sends out its requests to the other services, which are handled by the sequence of code running in the thread.

The MessageReceiver contains mainly JMS-oriented code. Here, I advocate that the only stub, or mock, code be JMS-simulating code that passes the JMS messages around in memory (implemented in code in MessageReceiver)—instead of through a live JMS implementation—thereby completely bypassing the need for the container, at least during development.
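The internals of MessageReceiver are not shown in this article, but the in-memory idea can be sketched with named BlockingQueues standing in for JMS destinations. All names here are hypothetical, and a real version would wrap the payloads in javax.jms.Message objects:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal in-memory stand-in for JMS: payloads are plain objects handed
// between threads through named BlockingQueues instead of a live broker
public class InMemoryBus {
    private static final Map<String, BlockingQueue<Object>> queues =
            new ConcurrentHashMap<>();

    private static BlockingQueue<Object> queue(String name) {
        return queues.computeIfAbsent(name, n -> new LinkedBlockingQueue<>());
    }

    public static void send(String queueName, Object msg) {
        queue(queueName).add(msg);
    }

    // Blocks like a JMS receive() until a message arrives on the queue
    public static Object receiveQueue(String queueName) throws InterruptedException {
        return queue(queueName).take();
    }

    public static void main(String[] args) throws Exception {
        // A "supplier" thread plays one service; main plays the assembler
        Thread supplier = new Thread(() -> send("order_queue", "order-42"));
        supplier.start();
        System.out.println(receiveQueue("order_queue"));
    }
}
```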

The other minor implementation detail is that in Eclipse, the "part" services must be dependent projects of Assembly. Previously, the services could be compiled independently, and that should be maintained, but the test code introduces the new dependency.

Using the above code, massive amounts of cross-service business logic and functionality can be exercised in a way that is indistinguishable from the live, production system. No stubs or their expensive associated maintenance for the services are required. A system that would probably be tested piecemeal or through time-consuming container startup/shutdown can be tested directly, immediately. Developer integration of the Assembly service goes from a complex undertaking to something almost automatic.

The above arrangement is not a "mock-service" system; it is actually a "mock-container" system! It simulates the core, basic operation of the container in instantiating message beans and passing them the message data.

Note the fine print: This container-less SOA pattern should not be taken to extremes. For example, if the individual part-services are extremely time consuming to execute, then mock services would be preferred. However, in general, the goal of minimizing the mock code/objects required is definitely preferred from the development point of view. For example, maybe only some subsystem of the part-service requires a mock implementation. Hence, my personal guideline: Mock systems should be used to replace time-consuming systems but not to avoid the difficulty of executing live code associated with a non-container-less SOA approach.

New perspective: Container scalability revisited

All this might lead to the question: "What exactly is the container providing if much of it sometimes gets in the way of development?"

One key answer is scalability. To summarize the previous section on scalability, it seems that scalability signifies that the code continues to run well for much larger datasets, hiding, as much as possible, the hardware from the software, so to speak. Java EE provides the infrastructure for scalability in the form of concurrency, transactions, pooling, and distribution. For example, any number of message-driven beans can run on any number of machines.

The container delivers on its key promise of scalability by, for example, providing seamless multithreading (or similarly, hardware clustering) of the message beans. But this multithreading capability is often of no direct need to the developer, who, during code development, is usually more focused on getting the sequence of business logic correct. Similarly for transactions — the developer should focus on writing code that works correctly and then just wrap it with the transaction logic.

Therefore, the developer should work with small datasets that minimize the edit-compile-test loop and best-approximate the larger datasets seen in production — or, more specifically, work with small datasets that best cover the same execution-code-flow of larger ones. The proposed container-less inter-service pattern is a powerful tool for this purpose.

This technique has an influence on integration testing. In this newer approach, the chief purpose of integration testing is to determine how the code functions under higher loads (i.e., scalability), not to find logical inconsistencies in the code (i.e., non-load-related bugs), which will have largely been ironed out in unit testing.

Effectively, integration testing is redefined. Individual developers have the full system at their fingertips. The focus of inter-developer code testing moves from standard integration testing (making sure pieces fit together) to scalability/load testing (pieces are already resolved to fit together; make sure pieces run fast enough).

Another way of looking at the technique is as a system for minimizing developer overhead. A system can be easy or hard to develop on. Minimizing developer overhead indirectly contributes to minimizing application overhead and ultimately decreasing overall project overhead.

High pliability via scalability plug-ins

I call this concept of container-less SOA a quasi-architecture because it can hide behind the real architecture and be visible mostly only to developers. Overall, it clearly leads to a new architecture concept. Instead of building a system in which all the full-blown scalability systems are entrenched in and tightly coupled with the code, the scalability systems can be utilized as plug-ins, and the developer can easily switch between the dual environments of low scalability with fast development and high scalability with heavy load testing. The simple philosophy expressed and manifested in the technique is that instead of the code being subservient to the container, the container(s) should be subservient to the code.

A related area is the use of the HSQLDB database. HSQLDB is an in-memory database that can handle most aspects of a production database like Oracle, and it can be seen as another scalability system. On one system, when converting almost 250 persistence/data-layer test methods to run in-memory on HSQLDB instead of Oracle, I saw a speed improvement of almost three times. There are even some gains to be had by using a low-overhead system like MySQL instead of Oracle, if the persistence code does not use many database-implementation-specific features.

Developers can sometimes quickly iron out kinks in business logic by utilizing the less-monolithic, lower-overhead, faster, low-scalability systems. The standard concept of out-of-container testing is an acknowledgement of this reality. Container-less SOA and these related offshoots simply build on it in a logical progression.

Consider a system with a plug-in architecture for messaging, the container (bean management), and the database, including connection pooling. All of these could run in-memory in the low-scalability scenario, but be replaced with production-scalable systems with a quick flip of a switch (i.e., implemented as a Java property). We could call such a scenario "highly pliable." There is really no inherent technical restriction or constraint against this elegant architecture. Actually, maybe high pliability defined in this way is as valuable as other recognized critical features, such as high scalability. In actuality, they are tightly coupled with each other in successful systems.
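As a sketch of what such a "flip of a switch" plug-in might look like, the Java fragment below selects between an in-memory and a production messaging implementation behind one interface. MessageBus, InMemoryBus, JmsBus, and the scalability.mode property are hypothetical names of my own, not from any framework; the JMS variant is deliberately stubbed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

interface MessageBus {
    void publish(String message);
    void subscribe(Consumer<String> handler);
}

// Low-scalability plug-in: everything stays in one JVM, ideal for dev loops.
class InMemoryBus implements MessageBus {
    private final List<Consumer<String>> handlers = new ArrayList<>();
    public void publish(String message) { handlers.forEach(h -> h.accept(message)); }
    public void subscribe(Consumer<String> handler) { handlers.add(handler); }
}

// High-scalability plug-in would wrap a real JMS provider; stubbed here.
class JmsBus implements MessageBus {
    public void publish(String message) {
        throw new UnsupportedOperationException("JMS provider not configured");
    }
    public void subscribe(Consumer<String> handler) {
        throw new UnsupportedOperationException("JMS provider not configured");
    }
}

public class BusFactory {
    // The switch itself: -Dscalability.mode=jms selects the production bus;
    // anything else (including unset) selects the in-memory one.
    public static MessageBus create() {
        return "jms".equals(System.getProperty("scalability.mode"))
                ? new JmsBus() : new InMemoryBus();
    }
}
```

Flipping to the high-scalability environment is then just a command-line property; no business code changes.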

Pliability of a system does seem to be orthogonal to scalability in the following sense: some systems are highly scalable, in the sense of handling massive data loads, but have low pliability, in the sense that one developer can neither wrap his local effort around the overall system nor exercise its overall flow. Yet the latter is exactly what is required for maximum programmer productivity and effectiveness against that system. Maybe pliability is the opposite of monolithicity.

This view also meshes closely with the new widespread concept of dependency injection. The scalability systems can be dependency injected into the code. But that's another article!

The general direction of container-less SOA is a packaged architecture. The code could be developed such that it is one cohesive application that doesn't use any external systems, e.g., JMS/socket-based messaging, external database, connection pools, etc.; then a packaging step can substitute in the components that permit scalability.

With the container-less SOA approach, an added benefit is that programmer emphasis on developing container-common features, such as JMS libraries, is decreased. Also, because the message-passing architecture associated with container-less SOA tends to be better encapsulated, refactoring the code between services becomes more feasible. Once the code is refactored, it is easier to retest the full-flow functionality to ensure the refactoring did not change any logic, compared with a piecemeal testing system.

In fact, the whole concept of refactoring might be revisited under the container-less SOA architecture. The SOA architecture encourages building all the business logic to function correctly and then, at the very last moment, exploring, deriving, and delineating the service boundaries, which is quite the opposite of the current convention of attempting to dictate the service boundaries before any code is written. Putting business logic into services then becomes more of a final wrapping step rather than a cumbersome priority always to keep foremost in mind during development.

To summarize, a container-less SOA/ESB technique holds significant potential for speedier SOA/ESB software development. The SOA/ESB approach is a highly enterprise-level model right now and remains far from a rapid-application-development workflow; the container-less approach, however, brings these paradigms closer together by leveraging a self-contained system.

When AMD delivers a chip, its entire complex functionality is encased in a few-inch-square piece of packaged silicon, despite containing many sophisticated subsystems, such as a floating-point unit, a cache, or a logic unit. Similarly, with this approach, all the live inter-service code and functionality is almost fully circumscribed, available, and accessible in an individual developer's local environment, in addition to being spread out over the enterprise network. The developer can switch between the two with a flick of the switch for maximum coding and testing leverage. Container-less SOA contains a developer-centric philosophy at its heart, but not at the expense of the overall enterprise. Both benefit.

The Implementation of an FTP Server

In the previous article, we discussed RFC 959, which defines how an FTP server should work. We also walked through some of the commands that are used in this protocol. In this article, we will be creating an example FTP server in Delphi.

Required Tools

Delphi 6 or higher and Indy 10.1.5, the latest snapshot. I've only tested this application on Windows XP, so, although I do not think you will encounter any problems on most other platforms, please let me know if you do. I will try to make the application as platform agnostic as possible.

The application

To create our FTP server, we are going to use the IdFTPServer component available in Delphi. This component implements all the commands needed to create a viable server application. As always, I am going to implement as many commands as possible in this application, but by no means all of them. To implement an FTP server application that meets the minimum requirements, you need to:

  • Be able to show all files in a given directory.
  • Change and remove a directory.
  • Upload and download files.
  • Delete and view file details.
  • Authenticate a user.

There are many more commands, but you can get by with implementing just the minimum. To see how many commands you can implement with Indy, drop an IdFTPServer component on a form and look at the Object Inspector's Events tab. Indy has done most of the interpreting of the FTP protocol, which makes it easy to implement the commands.

Start Delphi and create a new application. On the form add the following components:

  • One Memo
  • Three Edits
  • Two Buttons
  • Four Labels

In Edit1's Text property add "myusername" and in Edit2's Text property add "mypassword." Then from the Indy Servers tab drop the IdFTPServer component, and from the Indy Intercepts tab drop a TIdServerInterceptLogFile component; rename it to logfile1. Now we need to connect this intercept to the IdFTPServer component: click on the IdFTPServer component, go to its Intercept property in the Object Inspector, and click the drop-down arrow. It should contain the word "logfile1." Select it and we're done.

On button1 add the caption "Connect" and on button2 add the caption "Exit." Double click on button1 and add the following code:

procedure TForm1.Button1Click(Sender: TObject);
begin
  // Set the listening port from Edit3 and start the server
  IdFTPServer1.DefaultPort := StrToInt(Edit3.Text);
  IdFTPServer1.Active := True;
  ShowMessage('Connected');
end;

Then do the same for button2 but add the following code:

procedure TForm1.Button2Click(Sender: TObject);
begin
  IdFTPServer1.Active := False;
  Close;
end;

Button1 does a couple of things: it sets the default port (21 is the standard FTP port, and it is where client applications will try to connect to the server) and activates the FTP server. Button2 just closes down the application. Your form should now look something like this:



Now that the GUI is done, let's move on to the code. Most of the procedure names below give you a clue as to what the code is all about. I have commented most of the code so that you can easily work out what is going on in a particular procedure.

The setSlashes function replaces forward slashes with backslashes and collapses double backslashes into single ones. This makes sure that any pathnames sent by the client FTP application are processed in the correct form. For example, if the client supplies a pathname like "c:/afilename", this function will correct it to "c:\afilename", which is the correct way to handle pathnames on Windows.

function setSlashes(APath: String): String;
var
  slash: string;
begin
  // Convert forward slashes to Windows backslashes
  slash := StringReplace(APath, '/', '\', [rfReplaceAll]);
  // Collapse double backslashes into single ones
  slash := StringReplace(slash, '\\', '\', [rfReplaceAll]);
  Result := slash;
end;

The event below gets fired when a client FTP application connects to the server. You can use this event to track what a particular client is doing; for example, you can get the IP address from which the client connected and the time of the connection. Handling the event is entirely optional.

procedure TForm1.IdFTPServer1Connect(AContext: TIdContext);
begin
  // Here you could record the client IP address, the time the
  // client logged in, etc., for tracking purposes
end;

The VDirectory string contains the directory to change to; the directory name is sent by the client FTP application:

procedure TForm1.IdFTPServer1ChangeDirectory(ASender: TIdFTPServerContext;
  var VDirectory: String);
begin
  ASender.CurrentDir := VDirectory;
end;

The client sends the name of the file to be deleted in the APathName variable. The procedure below takes that pathname and checks whether the file exists before deleting it.

procedure TForm1.IdFTPServer1DeleteFile(ASender: TIdFTPServerContext;
  const APathName: String);
begin
  // Only delete the file if it actually exists
  if FileExists(APathName) then
    DeleteFile(APathName);
end;

This event gets fired when the client wants to verify whether a file exists. The procedure uses the FileExists() function to carry out the request:

procedure TForm1.IdFTPServer1FileExistCheck(ASender: TIdFTPServerContext;
  const APathName: String; var VExist: Boolean);
begin
  // FileExists already returns the Boolean we need
  VExist := FileExists(APathName);
end;

procedure TForm1.IdFTPServer1GetFileDate(ASender: TIdFTPServerContext;
  const AFilename: String; var VFileDate: TDateTime);
var
  fdate: TDateTime;
begin
  // FileAge returns -1 when the file does not exist
  fdate := FileAge(AFilename);
  if fdate <> -1 then
    VFileDate := fdate;
end;

We use the FindFirst and FindNext functions to get the requested file size, as below:

procedure TForm1.IdFTPServer1GetFileSize(ASender: TIdFTPServerContext;
  const AFilename: String; var VFileSize: Int64);
var
  LFile: String;
  rec: TSearchRec;
  ASize: Int64;
begin
  ASize := 0; // initialize, in case FindFirst finds nothing
  LFile := setSlashes(HomeDir + AFilename);
  try
    if FindFirst(LFile, faAnyFile, rec) = 0 then
      repeat
        ASize := rec.Size;
      until FindNext(rec) <> 0;
  finally
    FindClose(rec);
  end;
  VFileSize := ASize;
end;

This is perhaps the most important procedure of them all. This event lists all the files in a given directory. It is from here that you can manipulate all the files on a file system. So let's step carefully through the code.

The first thing that we do is use the FindFirst/FindNext functions to run through any files that may be in a directory, using the path that the client sent, which is stored in the APath variable:

SRI := FindFirst(HomeDir +APath + '*.*', faAnyFile - faHidden -
faSysFile, SR);

Then as FindFirst finds the files, we use the specially created list item type to store the different components of the files:

LFTPItem := ADirectoryListing.Add; LFTPItem.FileName := SR.Name; LFTPItem.Size := SR.Size; LFTPItem.ModifiedDate := FileDateToDateTime(SR.Time);

Then we check to see whether a directory has been found, and store the relevant file type.

// SR.Attr is a bitmask, so test the faDirectory bit rather than comparing for equality
if (SR.Attr and faDirectory) <> 0 then
  LFTPItem.ItemType := ditDirectory
else
  LFTPItem.ItemType := ditFile;
SRI := FindNext(SR);

Then we simply close the file search operation down and set the current directory:

FindClose(SR); SetCurrentDir(HomeDir + APath + '..');

procedure TForm1.IdFTPServer1ListDirectory(ASender: TIdFTPServerContext;
  const APath: String; ADirectoryListing: TIdFTPListOutput;
  const ACmd, ASwitches: String);
var
  LFTPItem: TIdFTPListItem;
  SR: TSearchRec;
  SRI: Integer;
begin
  ADirectoryListing.DirFormat := doUnix;
  SRI := FindFirst(HomeDir + APath + '*.*',
    faAnyFile - faHidden - faSysFile, SR);
  while SRI = 0 do
  begin
    LFTPItem := ADirectoryListing.Add;
    LFTPItem.FileName := SR.Name;
    LFTPItem.Size := SR.Size;
    LFTPItem.ModifiedDate := FileDateToDateTime(SR.Time);
    // SR.Attr is a bitmask, so test the faDirectory bit
    if (SR.Attr and faDirectory) <> 0 then
      LFTPItem.ItemType := ditDirectory
    else
      LFTPItem.ItemType := ditFile;
    SRI := FindNext(SR);
  end;
  FindClose(SR);
  SetCurrentDir(HomeDir + APath + '..');
end;

http://www.devarticles.com/c/a/Delphi-Kylix/The-Implementation-of-an-FTP-Server/4/

Monday, February 19, 2007

Successful SEO For Your Website

Many of you are surely looking for tips on successful SEO for your website; in layman's terms, on getting top rankings on search engines.

The most important target for search engine optimizers and webmasters has been Google.

Since its inception, this legendary search engine has grown to a mammoth size, from a small company into a multinational giant with facilities worldwide.

It's by far the most used search engine available today.

Official statistics show that Yahoo and MSN hold a big share of the search market, but they still have a long way to go to compete with Google.

Google has always tried to bring relevant content to its users, so the most important tip for getting your site visible in Google is content, and I repeat, relevant content.

You often come across pages on the Internet with long promotional information, nothing but text, text, and more text, on websites trying to sell you a package for only $49.

A long time ago, I used to bypass this information and scroll straight to the bottom, wondering why this guy had put up so much text. Does he think any sane person will read all of it?

Many of you will have the obvious answer: they are trying to lure the search engines into putting them on top. For those who don't, that's correct; they fill in the text not only for you, but for the search engines to read as well.

So, shall we do the same? Well, it depends on your choice.

You have to ask yourself these questions.

What is your primary goal?

a) Get top listings on search engines, no matter what your visitors feel about your website?
b) Put search engine rankings second and create a site fully loaded with graphics and content to keep your visitors hooked?
c) Get the best of both worlds?

Many of you may think I am talking completely offbeat: how can you get the best of both worlds?

The answer is by adding more pages, serving as much information as possible to your visitors and to content-hungry search engines.

In this way you serve the online community, giving them as much relevant information as possible by adding blogs, articles, links, news, and a directory related to your business field to your website.

You may create small links and sub-domains for these additional services at the bottom of your pages and mention them in your sitemap (which you will, of course, submit to Google), and ensure the sitemap sits in a fairly visible corner of your homepage.

The next thing would be to invite as many visitors as possible to your homepage, and to offer free pages where they can advertise their businesses or share their views on the blogs, articles, and forums you have created with so much effort.

This would definitely give your company advertising leverage and a reputation among potential buyers, who would consider you a stable and large business.

Now, since your services are free, you would get more of these automatic links and offline referrals, which again serve as advertising for your business as well as inbound links to your website from various sources, without you actually doing any work.

Hey, did I mention that link building and relevant links pointing to your website are among the most important factors for getting top listings on Google and a superior PageRank?

http://www.articlecity.com/articles/web_design_and_development/article_1231.shtml

Content Management System CMS

Most organizations have global visions these days, and these aspirations have their ramifications. One of them is the creation and management of huge amounts of data, a time-consuming activity that requires a large team of professionals. A content management system (CMS) allows you to organize your data and lets your organization share, use, retrieve, or search it.

A CMS is software that helps organize and facilitate the creation of documents and content. The term also refers to web applications used to manage web content and websites. Generally, the system requires client software for creating and editing articles.

A content management system consists of two parts: the content management application (CMA) and the content delivery application (CDA). The CMA allows a content writer or manager without knowledge of HTML to create, modify, or remove content from a website without involving the webmaster. The CDA uses this information to update the website. Most CMS systems offer web-based publishing, revision control, and format management, along with search, indexing, and retrieval of content.

Web-based publishing allows the user to work with templates, wizards, and other tools, while format management allows formatting of different documents, such as scanned documents and legacy electronic documents, into PDF or HTML. Revision control helps in updating content to a newer or older version and tracks changes made by others to the files. A CMS also helps with indexing, search, and retrieval: the user can search the available data by keyword, and the CMS retrieves it.

If you want to use such a system for the effective management of your websites, Mosaic Services provides effective CMS development and PHP content management systems. The competitive rates and services provided by the company will surely benefit your organization. Mosaic Services is a prestigious SEO company with a list of satisfied national and international clients who vouch for the SEO, web design, and development services it provides.

Visit www.technology.mosaic-service.com to gain an insight into the organization and the products and services it offers. The site serves your web design and development needs, and its sister sites offer other advertising and marketing services that can enhance your business's future prospects.

http://www.articlecity.com/articles/web_design_and_development/article_1238.shtml

5 Tips When Choosing Multiple Domain Hosting

If you have, or are planning to have, several domains running on the web, then you should consider multiple domain hosting, which allows you to host several domains under a single hosting plan. Most web hosting companies call it shared hosting.

The main advantage of multiple domain hosting is that it consolidates all your domains under one hosting plan, which makes domain management a breeze. It also makes it easier to modify settings, since you do everything from the same control panel.

Here are a few tips you might want to bear in mind when looking for a shared hosting service.

Hard Disk Space

Nowadays most shared web hosting plans provide more than enough disk space to satisfy your needs. It is important to know beforehand how much disk space your website will use, not just currently but projected 6 months, 1 year, and 2 years down the road.

Some web applications require more disk space than others; file hosting sites are one example. Also remember to check the cost of upgrading your disk space in case you need to. Most web hosting providers charge on a per-GB basis.

Bandwidth

Most web hosting providers cap each hosting plan's bandwidth. Check to make sure the plan provides sufficient bandwidth for your current websites and room for growth. Most providers will charge you on a per-GB basis for any bandwidth used beyond the limit.

Speed is also important. No one wants to visit a website that loads slowly. File hosting and video sites, for example, require acceptable speed to deliver file downloads or streaming video.

Number of Domains Allowed

Some web hosts limit the number of domains on each hosting plan. It is best to get one with unlimited domains so you don't have to worry about the limit.

Emails/FTP accounts

You will probably need separate email accounts for each domain you have. Most web hosting services provide web-based as well as Outlook-based email accounts. Preferably, get one with unlimited email/FTP accounts.

Ease of control panel

With many domains, emails, FTP accounts, etc. to manage, it is important that the control panel is easy to use and that all information can be accessed easily. Most web hosting services provide cPanel; some use their own customized control panel, GoDaddy being a good example.

Whichever control panel you are using, familiarize yourself with the interface and get used to it as fast as possible.

http://www.articlecity.com/articles/web_design_and_development/article_1244.shtml

How To Install SharePoint 2007 Beta 2

Following are detailed instructions on how to install SharePoint 2007 Beta 2 on a clean installation of Windows 2003. These instructions were written while completing the installation on a Virtual Server 2005 machine that was NOT joined to a domain. I have seen a lot of articles that say the server has to be joined to a domain, but that is not a requirement. Below are the detailed installation instructions for SharePoint 2007 Beta 2 on Windows 2003 using SQL Server 2005 on the same machine:

Install Windows 2003 and update the machine with the latest Services Packs and hotfixes from Microsoft by using the "Windows Update" utility.

Create a local machine account to run SQL Server and SharePoint with. Make sure that this account is an administrator on the local machine, that its password is set not to expire, and that it is not required to change the password at first login.

* If you are doing the installation on a server joined to a domain you can use a domain account that is set as an administrator on the local machine

* To create a new local machine account go to Start and right click on "My Computer". Select "Manage" from the list that appears and the "Computer Management" snap-in appears. Expand "Local Users and Groups" and right click on the "Users" folder. Select "New User" and fill in the appropriate information.

Insert the SQL Server 2005 disk into the machine and make sure that "Autoplay" is enabled so the initial installation screen appears.

From the initial screen select "Choose to Install components"

Click the "Accept terms" checkbox on the license screen that appears and then click "Next"

Click "Install" on the next screen that appears.

Once the process to install "Native Client and Microsoft SQL Server 2005 setup support files" has completed click "Next".

Scanning of the system hardware will occur next.

On the screen that appears after the hardware scan is complete click "Next".

When the system configuration check has completed click "Next".

On the registration page enter the Name, Company and product key and click "Next".

Select the components that should be installed.

You will need SQL Server Database Services, Workstation and Client components; and may want to install Books Online and development tools.

You can click the "Advanced" button and select the specific components that you would like to install. Click "Next".

Click "Default instance name" checkbox on the instance name screen and click "Next".

On the next screen choose what account will run SQL Server 2005. Specify the information for the domain/local account you created earlier to run SQL Server in the appropriate boxes. Make sure the "Use a domain user account" radio button is checked.

Click SQL server and SQL Server Agent check boxes at the bottom of the screen. Click "Next".

On the next screen choose to use Mixed Mode authentication. You can change this after the SharePoint setup is complete.

On the collations screen leave the defaults and click "Next".

On the "Error and Usage Report Settings" screen leave the defaults and click "Next".

On the "Ready to Install" screen click "Install".

The setup process will run and you will see the setup progress screen.

When the setup completes click "Next". Then click "Finish".

Give the SharePoint account that you created earlier (it can be the same account used to run SQL Server) the Database Creator, sysadmin, and security admin rights on SQL Server.

Next, install the Windows Workflow extensions. Make sure you have build 3807.7 or higher.

The 3807.7 version can be downloaded here: http://www.microsoft.com/downloads/details.aspx?FamilyID=5c080096-f3a0-4ce4-8830-1489d0215877&DisplayLang=en#Instructions. You need to make sure that you install the Runtime Components download that is appropriate for your system if you don’t have Visual Studio 2005 installed on your machine.

Double click on the install file for Windows Workflow.

Click "Ok" on the license screen that appears.

Install IIS on the server and make sure ASP.NET 2.0 is registered with IIS. If you are not sure, you can run aspnet_regiis -i from within the 2.0 folder. The default install location is C:\windows\Microsoft.NET\Framework\v2.0.50727.

* To install IIS on the server go to the "Control Panel" and double click on "Add\Remove Programs". Click on "Add/Remove Windows Components" and in the window that appears check the box next to "Application Server".

Highlight "Application Server" and click on the "Details" button. Make sure that the check boxes next to "ASP .NET" and "Internet Information Services (IIS)" are checked. Click OK and then click "Next".

Insert the SharePoint 2007 Beta 2 disk into the server.

If the disk doesn't autoplay, run setup.bat on the disk.

Enter the product key on the appropriate screen. Click "Continue".

Click the check box indicating that you accept the terms of use. Click "Continue".

You will need to choose whether you are going to install SharePoint 2007 as a standalone single server (this will install SQL Desktop Express) or as the advanced version, which can be used for a single server or a server farm.

You will need SQL Server 2000 SP3 or SQL Server 2005 for the Advanced version. We will do the advanced version in this process on SQL Server 2005. Click "Advanced".

On the next screen you can choose from three options. We will be installing on a single server with SQL Server 2005, so we will choose the first option, "Complete – Install all components. Can add servers to form a SharePoint farm.", because the last option installs the desktop database engine, which we don't want. On the file location tab you can pick the installation location on the server.

Click "Install Now".

Make sure the "Run the SharePoint Products and Technologies Configuration Wizard now." checkbox is checked and click "Close".

When the Configuration Wizard appears click "Next".

Click "Yes" on the pop-up window that appears about restarting services on the server.

Click "No, I want to create a new server farm," as we don't have an existing SharePoint 2007 installation.

On the Specify Configuration Database Settings screen, specify the name of the server. If you used the options above in the SQL Server setup, this will be the name of the server on which you are installing SharePoint 2007.

Leave the configuration database name as it is.

Enter the username and password of the account that you created earlier in the setup process.

Click "Next".

Leave the default options on the screen that appears and click "Next".

On the next screen that appears click "Next".

SharePoint will then complete nine configuration tasks.

Once the tasks are completed, you will be redirected to the SharePoint Administration page.

Click http://www.totalproductivitysolutions.com/ProgrammingTips/Steps to install Sharepoint 2007.doc to download the word document version of these instructions.

http://www.articlecity.com/articles/web_design_and_development/article_1245.shtml

Tuesday, February 13, 2007

Let Me In: Pocket PC User Interface Password Redirect Sample

This article describes what to do to implement a custom password tool (redirect) for a Pocket PC device, and provides sample code to demonstrate the concept.

NOTE: The power-on password functionality may not function predictably on some devices. This is because the original equipment manufacturer of the device may change the StartUI component in some ways, which may change the behavior of the component or your ability to modify it.

The "Let Me In" sample replaces the standard Pocket PC password user interface with a doodle grid. The user can specify a pattern that connects the grid points by drawing on the grid in the Settings tool. After the password tool is enabled, and after the device is powered-on, the user must enter the pattern before any other tasks can be performed.

NOTE: Although the user sees the password tool in Settings, you (the developer) see the tool in the control panel.


MORE INFORMATION

Creating a Password Tool

To create a password tool (also known as an applet), follow these steps:
1. Create a DLL project that outputs a module named Password.cpl.
2. Implement and export the CPlApplet() function to create the new tool.

Use the CheckPassword, SetPassword, and SetPasswordActive functions to set the platform password settings. This password is used in two situations that must be considered:

a. The ActiveSync desktop component can request this password when the device tries to connect.
b. The device can request this password during power-on before any other operations can be performed.
3. Implement and export the PromptForPasswd() function. This function displays the power-on password interface.
4. Create a .cpl file association in the device registry. This is required for some other critical registry keys to work correctly:
[HKCR\.cpl]
(default) = "cplfile"

[HKCR\cplfile\Shell\Open\Command]
(default) = "\Windows\ctlpnl.exe %1"
Microsoft also recommends the following optional keys for completeness:
[HKCR\cplfile]
(default) = ""

[HKCR\cplfile\DefaultIcon]
(default) = ","
5. Set the location of the redirect password module in the device registry.
[HKLM\ControlPanel\Password]
"Redirect" = "\windows\password.cpl"
NOTE: You must name the module Password.cpl for the module to be backward compatible with earlier versions of the Pocket PC platform.
6. NEW TO POCKET PC 2002: set the password request timeout policy in the device registry:
[HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Shell]
"ActivePeriod"=DWORD:
This minutes value determines how long the device must be off continuously before power-on prompts for the password. A value of 0 (zero) indicates that the device always prompts for the password. In other words, if this value is not 0 (zero) and the device has been off for less than the specified period, the password dialog does not appear when you resume the device. The control panel tool can provide a user interface to make this value user-configurable.
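The ActivePeriod policy in step 6 can be sketched as a small, portable decision function. This is an illustration of the documented behavior, not operating-system code:

```c
/* Sketch of the ActivePeriod policy: the device prompts for the password
 * on power-on only when it has been off continuously for at least
 * active_period_minutes. A value of 0 means "always prompt". */
int should_prompt(unsigned active_period_minutes, unsigned minutes_off)
{
    if (active_period_minutes == 0)
        return 1;                               /* always prompt */
    return minutes_off >= active_period_minutes; /* off long enough? */
}
```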

Special Functions

The SetPassword Function

This Pocket PC function sets the system password to a new string.

BOOL SetPassword(
LPWSTR lpszOldPassword,
LPWSTR lpszNewPassword );


Parameters

lpszOldPassword. The current device password. This determines whether the user or program has permission to call this function. Specify NULL if there currently is no password set.

lpszNewPassword. The new password that you want. Specify NULL to remove the current password without setting a new one.

Return Values

TRUE indicates that the change of password was successful.
FALSE indicates that the change of password was not successful.

The SetPasswordActive Function

This Pocket PC function enables or disables the password.

BOOL SetPasswordActive(
BOOL bActive,
LPWSTR lpszPassword );


Parameters

bActive. TRUE enables the password. FALSE disables the password.

lpszPassword. The current device password. This determines whether the user or program has permission to call this function.

Return Values

TRUE indicates that activation of the password was successful.
FALSE indicates that activation of the password was not successful.
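As a rough illustration of how these two functions interact, here is a portable mock of the documented semantics. The Mock* names and the fixed-size buffer are assumptions made for the sketch; the real implementations live inside the Pocket PC OS:

```c
#include <string.h>

/* Portable mock of the documented password semantics: the caller must
 * present the current password to change it or toggle its active state. */
static char g_password[64] = "";   /* empty string = no password set */
static int  g_active = 0;

static int mock_check(const char *pw)
{
    if (g_password[0] == '\0')
        return pw == NULL;             /* no password set: NULL matches */
    return pw != NULL && strcmp(pw, g_password) == 0;
}

int MockSetPassword(const char *old_pw, const char *new_pw)
{
    if (!mock_check(old_pw))
        return 0;                      /* FALSE: no permission */
    if (new_pw == NULL) {
        g_password[0] = '\0';          /* remove the current password */
        g_active = 0;
    } else {
        strncpy(g_password, new_pw, sizeof g_password - 1);
        g_password[sizeof g_password - 1] = '\0';
    }
    return 1;                          /* TRUE: success */
}

int MockSetPasswordActive(int active, const char *pw)
{
    if (!mock_check(pw))
        return 0;                      /* FALSE: wrong current password */
    g_active = active ? 1 : 0;
    return 1;
}

int MockGetPasswordActive(void)
{
    return g_active;
}
```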

When active, this password can be used by the StartUI component and ActiveSync to control access to the device. In the case of ActiveSync, this function results in a password prompt that appears on the desktop. The StartUI component prompts for this password if the power-on password is enabled by means of the following registry setting:

[HKCU\ControlPanel\Owner]
"PowrPass"=REG_BINARY:
A flag-byte value of 0x01 enables the power-on prompt. A value of 0x00 disables the power-on prompt.

The GetPasswordActive Function

This Pocket PC function indicates whether the current password is active.

BOOL GetPasswordActive( void );


Return Values

TRUE indicates that the password is active.
FALSE indicates that the password is not active.

The PromptForPasswd Function

This function is implemented by the password redirect module and is called by the operating system to prompt the user for a password.

LPTSTR PromptForPasswd(
HWND hwndParent,
BOOL fTightSecurity );


Parameters

hwndParent. The handle of the parent window. Make any password window a descendant of this window.

fTightSecurity. Always set this to TRUE. This parameter is currently reserved.

Return Values

A password string that will be validated by the operating system. This string must be allocated with LocalAlloc and will be freed by the operating system.
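The ownership contract of PromptForPasswd (the module allocates the string, the operating system frees it) can be illustrated with a portable sketch. Here malloc and narrow strings stand in for LocalAlloc and LPTSTR, and get_pattern_as_string is a hypothetical placeholder for the doodle-grid UI:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for showing the doodle grid and reading input. */
const char *get_pattern_as_string(void) { return "0-4-8"; }

/* Illustration of the PromptForPasswd ownership contract: the redirect
 * module allocates the returned string and the caller (the OS) frees it.
 * On Pocket PC the allocation must use LocalAlloc and the string is wide. */
char *prompt_for_passwd_sketch(void)
{
    const char *entered = get_pattern_as_string(); /* show UI, read input */
    char *result = malloc(strlen(entered) + 1);    /* LocalAlloc on CE */
    if (result != NULL)
        strcpy(result, entered);
    return result;  /* ownership passes to the caller, which frees it */
}
```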

The LetMeIn.exe Download

NOTE: To make the code easier to read, error checking in the sample is not comprehensive.

The following file is available for download from the Microsoft Download Center:
Download LetMeIn.exe now (http://download.microsoft.com/download/wince/sample/1.0/wce/en-us/letmein.exe)
Release Date: 06-June-2002

For additional information about how to download Microsoft Support files, click the following article number to view the article in the Microsoft Knowledge Base:
119591 (http://support.microsoft.com/kb/119591/EN-US/) How to Obtain Microsoft Support Files from Online Services
Microsoft scanned this file for viruses. Microsoft used the most current virus-detection software that was available on the date that the file was posted. The file is stored on security-enhanced servers that help to prevent any unauthorized changes to the file.

The LetMeIn.exe file contains 19 files of varying sizes.

File name Size in KB
EULA.txt 2
LetMeIn.def 1
LetMeIn.ico 1
LetMeIn.rc 3
LetMeIn.vcb 49
LetMeIn.vcl 3
LetMeIn.vco 49
LetMeIn.vcp 36
LetMeIn.vcw 1
LmiDialog.cpp 16
LmiDialog.h 1
LmiDoodler.cpp 11
LmiDoodler.h 1
LmiGlobals.cpp 1
LmiGlobals.h 1
LmiMain.cpp 3
LmiWindows.h 1
ReadMe.txt 7
Resource.h 1

APPLIES TO
• Microsoft Windows CE Platform Software Development Kit for Handheld PC 2000


Keywords:
kbinfo kbfile KB314989

http://support.microsoft.com/default.aspx?scid=kb;en-us;314989

Changes in password protection on Pocket PC 2002

Introduction

Password protection has changed in Pocket PC 2002, which is why some programs that use the PromptForPasswd function may not work on it. This article describes the new features and how you can make your program work on Pocket PC 2002.

Background

Using built-in password protection is a way to create an application that runs every time the device is switched on and that blocks all other applications until it finishes. It is useful for creating two kinds of applications:

1. Different custom protection and security programs.
2. Applications for special-purpose devices based on Pocket PC, where the user cannot switch to other programs.

To create such a program, one should create a custom DLL that exports the PromptForPasswd function. This function should have the following signature: LPTSTR PromptForPasswd(HWND hParent);

This DLL should have the .cpl extension. You also need to add a Redirect value to the Password key in the ControlPanel registry section:

HKEY_LOCAL_MACHINE\ControlPanel\Password\Redirect

After you create a DLL that exports the PromptForPasswd function and register it under the Redirect key, your PromptForPasswd function will be called every time the device is switched on ... on Pocket PC, but not on Pocket PC 2002.

New features of Pocket PC 2002

Pocket PC 2002 has two new password related features:

1. Alphanumeric password
2. Password activation delay ("Prompt if device unused for...")

The first feature is not interesting for us, because Pocket PC also supports alphanumeric passwords; it is just a question of the default user interface. One can write a program for Pocket PC that substitutes the default password applet in the Control Panel with one that supports an alphanumeric password. For example, you can download Microsoft Password for Pocket PC, which replaces the standard Pocket PC 4-digit password applet with an alphanumeric one.

The second feature is the reason why some password-related programs do not work on Pocket PC 2002. The password delay option ("Prompt if device unused for...") was added in order to simplify working with Pocket PC devices. If the delay is set to 30 minutes, the password will not be requested (and the PromptForPasswd function will not be called) unless the device has been switched off for at least 30 minutes. If you need your PromptForPasswd function to be called every time the device is switched on, you should set the activation delay value to zero.

Setting password timeout

Password activation delay value is stored in registry in the following location: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Shell\ActivePeriod

You can set it to 0 if you want the old Pocket PC behavior.

Conclusion

Pocket PC 2002 is not fully backward compatible with Pocket PC, and password protection is one of the things that changed. Programs that use the PromptForPasswd function can work improperly (sometimes the PromptForPasswd function may not be called). However, you can easily change your program to make it work on Pocket PC 2002 by changing the ActivePeriod value in the registry.

http://www.pocketpcdn.com/articles/pocketpc2002password.html

Friday, February 9, 2007

The Differences Between Software Development and Software Engineering

Software development and software engineering go hand in hand when it comes to the implementation of software. Software development deals more with the creation of the software and when this is complete, software engineering takes over with the creation of software systems. Both of these disciplines are at times interchangeable and without much difference to the layman. If you just want to have one specific piece of software designed, such as database software that will keep track of your bird watching hobby, then you’ll just need software development. If, however, you want your bird watching database to be able to support multiple functions, such as delivering a report with statistics and results, then you’ll more likely need the expertise of software engineering.

Software engineers will implement and design software applications through the use of many mediums. These software applications will then be used for a variety of purposes that include business practices to entertainment purposes. It is these software applications that allow users to make their time on the computer as functional and productive as possible. Types of software applications include language applications, office applications, entertainment packages, and applications for education.

The cost of hiring a software developer will be significantly less than hiring a software engineer. Before you make your final decision about what you want the software to do, you need to plan your budget and your timeline, and determine what you want the end result to be. The industry of software development continues to grow each year as more and more businesses have their own software developed for them, specific to what they do and what they want the software to do. Most companies will already be using some type of software application, such as Office Suite, and most likely won’t need another application developed for them. For most intents and purposes you’ll be fine hiring a software developer for you and your business needs.

How To Work Out A Software Development Contract With An Overseas Provider

You may be surprised to know that many companies in the US and UK do not put together a watertight contract when dealing with an overseas software services provider. Most of the agreements are done via email with little or no regard to important aspects such as dispute resolution, intellectual property rights, confidentiality issues and employee infringement. If you plan to use an offshore provider soon, here are some basic tips on how to draw up a workable contract which safeguards the interests of both parties:

Define deliverables: Since software development is mostly intellectual work and has many grey areas in its definition, it is advisable to define deliverables in a detailed fashion. This helps in making sure that the understanding of the work is clear on both sides and there is no miscommunication of any kind with the supplier. You can also choose to define the change management process and the number of revisions allowed as it makes the deliverables more structured.

Mention the acceptance clause: What is good for the goose may not be good for the gander. Though an old saying, it holds a great deal of truth. Sometimes the software services provider may consider the work completed whereas you might not accept it. Thus the goal or the premises on which the work will be accepted should be clearly stated for both parties concerned.

Confidentiality rights on both sides: Sometimes companies get an NDA signed with the service provider and expect it to hold true even when working on the project for a long time. This method is not advisable. A suitable contract must be drawn up in the case of ongoing work so that issues such as confidentiality of information are maintained by the service provider. Though some customers feel that their projects do not warrant such a clause, the information exchanged may even be about the company, business or related information which has been given out unknowingly.

Employee Infringement: Approaching a service provider's employee directly is one of the cardinal sins which can be committed by a client. Thus as a service provider, it is necessary that this clause is mentioned in a contract. The opposite can also happen where the service provider may approach the client's personnel for indirect or direct gain. An employee infringement clause keeps a check on such practices and provides a legal route if there is substantial evidence of the infringement.

Force Majeure: The relatively recent natural calamities of the Tsunami and Hurricane Katrina have made it necessary for many large companies to seriously consider the Force Majeure clause. This is necessary to protect the interests of both parties.

Last but not least, pricing: This is probably the most common reason for arguments between a supplier and vendor and is applicable in all industries throughout the world. A clear mention of the total project pricing and milestones at which the charges will be paid should be included as an important schedule within the contract.

There might be other specific terms and conditions which may have been agreed by you and the supplier. These should all be mentioned in the contract not only for the sake of posterity but also for ensuring continuity of work in case of personnel change in the supplier's company.

Offshore Software Development: Save Money - Get Quality & Enjoy Success

In recent years, offshore software development has emerged as a successful business strategy adopted by giant corporations worldwide. Many renowned foreign web hosting companies prefer outsourcing their software development so that they can focus on their core competencies. It not only enhances their business but also gets them exclusive cost-effective solutions for their business requirements. You may already be aware of the key benefits of offshore software development, but here I would like to give you some in-depth information on these benefits.

Immense Benefits To Startup Company

Offshore software development offers huge benefits to companies just starting up as it helps them leverage their IT budget & resources without hiring a team of programmers to carry out their projects. These companies can simply save 40 to 50% by handing over their software development projects to one of the preeminent offshore companies.

Huge Resources

Offshore companies are always enriched with huge resources to carry out successful software development processes. Companies that outsource are always on the winning edge, as they gain access to a huge resource pool in an effort to enhance their business.

Premium Quality

Offshore outsourcing has spread around the world like wildfire, which has further boosted the ongoing competition among software development companies in developing countries. At this stage, every company is armed with the best of services to deliver superior quality & assured reliability of software at competitive prices.

Proven Software Development Processes

Rising competencies among offshore development companies have given way to customer-centric, authentic, mature & standardized development processes that are designed to minimize project risks & development time.

Large Pool of Technical Expertise

Offshore development companies are backed by a solid team of programmers & developers, relieving you of the unnecessary stress of hiring new employees for your software. As the team is highly focused on software development, it lets you concentrate on your core competencies to achieve your goals.

Low Cost Services

With extreme competition in the IT industry, offshore companies are proving themselves by providing the highest quality software at the lowest possible prices. Outsourcers are taking full advantage of this increasing competition, which is leading to the birth of an era of offshore outsourcing.

Post Maintenance

To prove their efficiency & build sustainable relationships with their clients, offshore companies provide post-maintenance services & technical support with great interest, which attracts outsourcers to avail themselves of rich development services.

The benefits of offshore software development are immense & the list keeps growing. In conclusion, offshore software development is by no means a bad deal if you are getting plenty of benefits in hand, with sure-shot successes from outsourced projects.

Quality Certifications and What they Mean in Software Development

Large scale software development companies are still quite young and the software industry itself is a fairly new one. Outsourcing of software development has been around for only a couple of decades and as the industry gains maturity, quality certification has taken on a whole new meaning for suppliers as well as customers. Quality certification in software is slightly different from quality certification in manufacturing. Though a number of business process management and quality control principles are derived from popular quality certifications, the implementation and implications are noticeably different.

There are two broad types of quality certifications which can be obtained by software development companies. One is the ISO 9001:2000 standard and the other is the various levels of SEI CMM. Some organizations may achieve ISO certification first and then work towards an SEI CMM level certification, whereas some may go directly to an SEI CMM certification. ISO certification, however, is a lot easier than SEI CMM (as well as a lot cheaper), and thus companies with ISO certifications are quite numerous, whereas SEI CMM level companies are far fewer in number.

One of the key benefits of quality certification in a software development company is that it showcases the maturity and continuity of the organization. Both quality certifications pay attention to processes. ISO guidelines state that you should define a process and make sure that it is being followed whereas SEI CMM dictates certain parameters of a process within which the company should work. Achieving certification and maintaining the documented processes provides a long term growth pattern in the company and at the same time helps in building a differentiating factor with customers.

Apart from the maturity and continuity of the organization, software development companies need quality certification to ensure the success of large projects. Tried and tested development methodologies which are part of the certification process ensure that the coding and designing produced by the company are of a high standard and will withstand the test of use and durability. Customers planning to do business with a quality certified company find it much easier to get a good quality software product. Non-certified companies have a tough time when competing with a certified company and that is the reason why more and more software development companies are moving towards quality certification.

Most medium to large companies are moving towards SEI CMM level certification, as that quality certification has been developed with software development in mind. There are various levels of the certification, and level 5 is the highest a software development company can achieve. The entire certification process for an SEI CMM level is lengthy, time consuming and quite expensive when compared with ISO 9001:2000, but the benefits often compensate for that.

So if you are a software company and have not yet gone down the path of certification, it is time you gave it serious thought. If you are an organization looking to outsource software development work to companies in India, China, the Philippines, Poland or parts of Eastern Europe, it is advisable that you consider their quality certifications. Though we have mostly discussed ISO 9001:2000 and SEI CMM quality certifications, there are other industry- and technology-specific certifications which can also be obtained by software development companies. Usually these certifications are given by software manufacturers or independent bodies, and though they might not be as critical as the quality certifications mentioned, they carry a good level of importance when evaluating a supplier.

Outsourcing your Software Development

Inefficiency of your company’s existing software or the need for specialized software functions particularly suited to your business may prompt you to seek the services of a software developer. Your business may require custom software for applications such as contact management, invoicing or inventory. The mere thought of selecting a developer can be daunting if you are not technically minded, but be assured that your role in the selection process is one of assessing the developer, rather than that of assessing software technology. Successful software development relies heavily on a strong partnership with the developer. Thus, picking the right developer is crucial, and the following suggestions will assist you in hiring a reputable and proficient developer.

Establish your software requirements

Software development cannot occur without a well structured and clearly defined set of your business’s software requirements, as the work is in essence a process of addressing needs and solving problems. Consequently, development success will depend largely on the time and effort you dedicate to this stage of the process. It is only by analyzing needs and desired functions that a developer can provide you with as accurate a job proposal and cost estimation as possible.

Be extremely thorough and precise at this stage, including key employees’ suggestions and needs, and compile a comprehensible requirements document, separating the mandatory needs from the optional. Draw up a list of potential developers by asking businesspeople you know for recommendations or by researching web directories. Send them the requirements document, as well as information about your company (such as business objectives) and your budget, so that they can in turn provide you with a job proposal and quotation.

Assess the candidate developers

A preliminary assessment of developers’ written proposals and quotations should give you a good indication of their suitability in addressing your needs, but a final decision should be determined by in-person interviews as well. Meeting face-to-face is crucial in evaluating not only the candidates’ services, but their personalities and communication skills too. The latter two are vital aspects in ensuring a strong collaborative partnership with the company, which will largely determine the success of the software development. In your assessment of the potential developers, consider these factors:

Experience and expertise

You will obviously want to hire someone who is proficient in the field and keeps abreast of the latest software technology trends and discoveries. Be sure that the candidate is a genuine software developer able to suggest solutions to your problems, and not merely a programmer who wants exact instructions on what program he or she should write. Also be careful of developers who are preoccupied with their particular area of technology specialization at the expense of your particular needs. A good developer should provide you with the type of technology most suited to your requirements.

It is preferable to choose a developer who is both experienced in their own domain and familiar with your particular industry. The reason for this is that they will be aware of the common types of needs (both clearly stated and implied), problems and general expectations in your line of work.

Visiting a candidate’s website should give you a good indication of these aspects, but the best and most direct way to determine a candidate’s experience and expertise is to contact former and current clients. Ask them specific questions about the development company’s general service delivery, response to problems, and the efficiency of the developed software.

You can ask to see samples of software, and test it yourself to see whether it is user-friendly (although remember that training will be provided) and effective.

Industry awards are also obviously a good indication of a company’s expertise.

Size

There are advantages and disadvantages to both big companies and sole proprietor situations. A big company may house all the skills and services needed by your requirements, but you run the risk of getting lost among many clients. The opposite is true for a small company or sole proprietor. Therefore, size is not an important deciding factor. Rather, make sure that the developer you choose can cope with the size of your company, and either cover all your requirements or be able to outsource specialized skills to reputable contacts.

Personality and communication skills

This may sound trivial, but your instinctual like or dislike of the person or group is significant in the selection process. You will be working in close partnership with the developer, discussing problems which can become draining and difficult, so it is vital that you get along. The ability to communicate clearly and patiently, without loads of jargon, is also imperative. Software development entails your description of needs and problems being translated by the developer into functional solutions. Misunderstandings are inevitable in such a complex communication situation, therefore be sure that a good basic level of interaction is evident from the start.

Note too their interest in the work and in your vision. Passion for a subject will generate creative problem solving.

Support

Your company will need technical and administrative support during and after software implementation. This includes staff training, user-manuals or help documentation, and debugging of software. The company should also be committed to the general improvement of your software and the software should support integration with your existing applications and major systems, and comply with all platforms. These issues, along with specifications of the amount of support provided, should be clearly stated in the contract.

Price

This is another factor which should not solely determine your choice of developer. Software development is a complex process and you should expect to invest a substantial amount of money in the process.

More important issues of price in choosing a developer are those of costing methods and charging for changes. Avoid companies that charge hourly rates without specifying the amount of time that the job will take. A good developer should be able to make a fairly accurate cost estimation that constitutes fixed fees, providing that your requirements have been clearly and completely stated. Be prepared, however, for possible added costs later in the development process if changes are needed (which they usually are). Changes cannot be predicted, but be certain that you understand the developer’s means of dealing with and charging for changes (this should also be stated in the contract).

Legal issues

An important aspect that should be stated in the contract is that of licensing. Ensure that you will be able to use the software on all the computers that you need to, and be aware of any specific copyright claims the developer might have.

A guarantee as to the product’s effectiveness should also be stated.

Begin the development

After considering all these factors in the evaluation of candidates, you should be able to hire one that you are happy with. After signing the contract and starting the development, remember that communication is key. Address problems and announce required changes as early in the process as possible. The beginning of development will involve a more in-depth analysis of your company needs by the developer. This may include interviews and observation, and should demand a fair amount of your time. Keep in mind, however, that this is the most crucial phase of development and therefore a sound investment of time. The developer should then provide you with a functional specification of your requirements, which can be signed off to commence the project. Make certain, however, that signing off the requirements does not bind you to them, but allows for changes to be made for an added fee. The remainder of the process entails the development of prototype(s), testing, implementation and post-development training, support and maintenance.