Wednesday, December 13, 2006

ADAPTIVE SOFTWARE

The Problem With Software

The problem with software is that it takes too much time and money to develop, and is brittle when used in situations for which it was not explicitly designed. Various software design methodologies address this problem:

* 1970s: Structured Programming makes it feasible to build larger-scale software systems - provided you have a specification of the desired results at the start of the project and the specification rarely changes. A typical application is a database report-writing program which reads an input file and produces an output file. We call this an input/output based application.

* 1980s: Object-Oriented Programming makes it easier to reorganize when the specification changes, because functionality is split up into separate classes that are designed to have minimal interaction between them. However, each change to the specification (or to the environment) still requires programmer intervention, with a costly redesign/reimplement/rebuild/retest cycle. A typical application is a desktop publishing system, where user-initiated events (mouse clicks, menu selections, etc.) trigger computation. We call this a user-initiated event based application.

* Today: Adaptive Programming is aimed at the problem of producing applications that can readily adapt in the face of changing user needs, desires, and environment. Adaptive software explicitly represents the actions that can be taken and the goals that the user is trying to achieve. This makes it possible for the user to change goals without needing to rewrite the program. A typical application is an information filter which searches the Internet or company intranet for information of personal interest to the reader. Note that much of the searching may go on when the user is not even logged in. The application does more on behalf of the user without constant interaction, and the sophistication comes from a splitting of responsibilities between the program and the user. We call this an agent based application.

Of course, there have been other proposed methodologies for improving software. Some address the problem of managing change, but only adaptive programming is about anticipating change and automatically dealing with it within a running program, without need of a programmer. As a definition, we can say:

"Adaptive software uses available information about
changes in its environment to improve its behavior."

The Challenge of Complex Environments

Software is expected to do more for us today, in more situations, than we ever expected in the past. This is the challenge of complex environments. The complexity comes from three dimensions. First, there are more users. Now everyone, not just trained professionals, uses software. Second, there are more systems and more interactions among them. A company that once had a homogeneous mainframe now has a wide variety of desktop, server, and mainframe machines, running a wide variety of protocols. Third, there are more resources and goals. Programmers are accustomed to trading off time versus space. Now they also have to worry about bandwidth, security, money, completeness of results, quality of information, resolution of images and audio, and other factors, and they have to make the right trade-offs for a wide variety of users.

Together, these three dimensions make the designer's job harder. The designer can't foresee all the circumstances in which an application will be used, and thus can't always make the right design choices. This can shorten a product's lifetime, because upgrades will be needed for unforeseen situations, and it undermines the end user's confidence that the product will even perform as advertised.

Five Myths of Traditional Software Development

Traditional software development is based on principles which are no longer appropriate in this new world of complex environments. It is worth looking at five of these myths.

* The myth of the specification: Traditional software development assumes that the first task is to determine the specification, and that all design and development follows after the specification is finalized. More modern approaches stress spiral and rapid development methods, in which the specification is constantly being refined. The target is really a combination of related components, not a single application.

* The myth of software maintenance: The word "maintenance" gives the impression that the software has somehow degraded, and needs to be refurbished to its original condition. This is misleading. The bits in a program do not degrade. They remain the same, while the environment around the program changes. So it is really a process of upgrading or evolving to meet the needs of the changing environment. By viewing it as a maintenance problem, one risks making the mistake of trying to preserve the old structure when its time has passed.


* The myth of the black box: Abstraction plays a crucial role in the development of reliable software. However, treating a process as a black box with an input/output specification ignores the practical problem of resource usage. When a system built out of these black boxes is too big or too slow, there is no way to improve the performance other than to tear the boxes apart. Some work has been done on the idea of Open Implementation, in which modules have one interface for input/output behavior, and another orthogonal interface for performance tweaking. Adaptive software adds to this by making the orthogonal interface a two-way system: there is a feedback loop that provides information on the results of the performance tweaking.

* The myth of design choices: In traditional software development, the designers consider several alternatives to implement each component, and then make choices based on the desired product. The alternatives that were not chosen, along with the rationale for the choice, are then discarded. That means when the environment changes (perhaps a new communications protocol becomes popular, or the user gets a computer that is twice as powerful), one needs to start all over again to see what choices are now the best. Often, the original programmers have moved on, and so their design rationale is lost. The alternative is to capture the design criteria as a formal part of the program, and let the program reconfigure itself, as the environment changes, to optimally satisfy the criteria.

* The myth of the expert programmer: Most programmers take pride in their abilities, and feel they can always come up with a good solution, given a suitable specification of the problem. In reality, programmers are only one resource available to a project, and an expensive resource at that. Furthermore, it is impractical to ship a developer with each program (except for very large and expensive custom applications). This suggests a tradeoff where the programmers and designers do what they can do best: formally describe what the program is trying to do. Then the program itself does the calculating and configuring necessary to achieve these goals in the current environment, as sketched below.
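
To make the last two myths concrete, here is a minimal sketch of what it looks like to keep design alternatives and design criteria inside the program, let the program choose among them, and feed performance measurements back into that choice. Everything in it (the alternatives, attributes, and weights) is invented for illustration; it is a sketch of the idea, not a prescribed implementation.

    # Illustrative only: the alternatives, attributes, and weights below are
    # invented; nothing here refers to a real library.

    from dataclasses import dataclass

    @dataclass
    class Alternative:
        name: str
        est_latency_ms: float    # designer's estimate, refined by feedback
        est_memory_mb: float

    def score(alt, weights):
        # The design criteria, kept as a formal part of the program:
        # lower latency and memory are better, weighted by current priorities.
        return -(weights["latency"] * alt.est_latency_ms +
                 weights["memory"] * alt.est_memory_mb)

    def choose(alternatives, weights):
        return max(alternatives, key=lambda a: score(a, weights))

    alternatives = [
        Alternative("in_memory_index", est_latency_ms=5.0, est_memory_mb=800.0),
        Alternative("on_disk_index", est_latency_ms=40.0, est_memory_mb=50.0),
    ]
    weights = {"latency": 1.0, "memory": 0.1}   # the stated priorities
    current = choose(alternatives, weights)

    def report_measurement(alt, observed_latency_ms):
        # Feedback loop: fold a runtime measurement into the estimate, then
        # let the program reconfigure itself if the ranking has changed.
        global current
        alt.est_latency_ms = 0.8 * alt.est_latency_ms + 0.2 * observed_latency_ms
        current = choose(alternatives, weights)

The point is not the particular scoring rule, but that the rationale for the choice stays in the program, so a change in the environment or in the priorities triggers reconfiguration rather than a full redesign cycle.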

Key Technologies for Adaptive Software

Now we know what adaptive software does: it uses information from the environment to improve its behavior over time, as the program gains more experience. And we know why it is difficult to do this: because specifications are incomplete and changing, because the environment is constantly changing, because carefully crafted designs may rely on assumptions that become obsolete, and because there is never as much programmer time as you really need. In this section we survey some of the tools that help us overcome these difficulties. Here are five of the most important:

* Dynamic Programming Languages provide a robust framework for writing applications that persist over long lifetimes, and can be updated while they are running.
* Agent Technology is a point of view that encompasses the idea of acting in accordance with a user's preferences.
* Decision Theory provides the basic terminology to talk about an uncertain world, and about what the preferred outcomes are.
* Reinforcement Learning gives us a way to learn what sequence of actions to perform, given only local feedback on the worth of individual actions.
* Probabilistic Networks provide powerful algorithms for computing optimal actions based on whatever is known about the current state of the world.

Dynamic Programming Languages

Static languages like C require the programmer to make a lot of decisions that nail down the structure of the program and the data it manipulates. Dynamic languages like Dylan and CLOS (Common Lisp Object System) allow these decisions to be delayed, and thus provide a more responsive programming environment in the face of changing specifications. Changes that would be too pervasive to even try in static languages can be easily explored with dynamic languages. Dynamic languages provide the interfaces by which a program can change or extend its performance. For example, in Dylan, a running program can add a method to an existing class without access to the original source code; can define a brand new class or function under program control; can debug another Dylan program, even one running remotely over the web. All this makes it possible to build, re-build and modify complex programs using components. Java is currently the most popular language with dynamic features, although it is not as thoroughly and elegantly dynamic as Dylan and CLOS.
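
As a rough illustration of that kind of runtime flexibility, the sketch below uses Python as a stand-in for Dylan or CLOS; the ReportWriter class and its methods are invented for illustration. It adds a method to an existing class at runtime and defines a brand-new class entirely under program control.

    # Python standing in for a dynamic language; ReportWriter and its methods
    # are invented for illustration only.

    class ReportWriter:                      # imagine this class came from a library
        def __init__(self, rows):
            self.rows = rows

    # Add a method to the existing class at runtime, without its source code.
    def to_csv(self):
        return "\n".join(",".join(map(str, row)) for row in self.rows)

    ReportWriter.to_csv = to_csv

    # Define a brand-new class under program control.
    Summary = type("Summary", (ReportWriter,), {"count": lambda self: len(self.rows)})

    writer = Summary([[1, 2], [3, 4]])
    print(writer.to_csv())   # uses the method patched in above
    print(writer.count())    # uses the dynamically created subclass

This only gives the flavor; as described above, Dylan and CLOS make such changes to a running program first-class, down to remote debugging and redefinition while the program runs.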

Agent Technology

There has been a lot of talk about software agents, and not much agreement on what they are. The Oxford English Dictionary entry for "Agent" says: from Latin agere ("to do"), 1. One who acts or exerts power. 2. He ... who produces an effect. 3. Any natural force. (e.g. a chemical agent). 4. One who does the actual work - a deputy, substitute, representative.

So we see that essentially, an agent does something and optionally does it for someone. Of course, all programs are intended to do something for someone. Therefore, in order to make a distinction between "software agent" and just a "program," we stress that software agents should be able to immediately respond to the preferences of their sponsors, and act according to those preferences. In addition, we think of an agent as existing in a complex environment. It can perceive part of the environment, and take some actions, but it does not have complete control over the environment.
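
A minimal sketch of that view might look like the loop below, where the environment interface and the preference function are hypothetical placeholders rather than a real API: the agent perceives only part of the world, chooses the action its sponsor's current preferences favor, and acts without full control over the outcome.

    # Minimal agent loop; 'environment' and 'preferences' are hypothetical
    # placeholders, not a real API.

    def run_agent(environment, preferences, steps=100):
        for _ in range(steps):
            percept = environment.observe()          # only a partial view of the world
            actions = environment.available_actions(percept)
            # Act according to the sponsor's current preferences, which may
            # change between one iteration and the next.
            action = max(actions, key=lambda a: preferences(percept, a))
            environment.apply(action)                # the outcome is not fully under our control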

In short, we want the agent to do the right thing, where "right thing" is defined by the user or users at that point. There are three reasons why this is difficult:

* You can't always get what you want: In a complex environment, you generally can't optimize everything at once. You can't have a program that is 100% accurate, yet uses no resources and runs in no time. So we need a way for the user to say which resources and results are most important, and which ones are less so.
* You never know what's going to happen: Traditional programs are built with Boolean logic, which treats everything as "true" or "false." This is appropriate for the internal working of a digital computer, but out in the real world there are always going to be events about which we are uncertain. For example, it is "likely" that we will be able to establish a given network connection, but we won't know for sure until we try.
* You're not the only one in the world: There are other programs out there which can change the environment, and with which we can cooperate, contract, compete, and communicate.

Decision Theory

Decision Theory is the combination of utility theory (a way of formally representing a user's preferences) and probability theory (a way of representing uncertainty). Decision Theory is the cornerstone for designing proper adaptive software in an uncertain, changing environment; a small expected-utility sketch follows the list below. It is "the" language for:

* Describing adaptive software: To respond to user preferences and deal with uncertainty, decision theory provides the only mathematically sound formalism to describe what it means to "do the right thing."
* Building adaptive software: In some simple cases, once you've done an analysis of a problem using decision theory, it becomes clear how to implement a solution using traditional methods. But more often we need to use decision-theoretic technology like reinforcement learning or probabilistic networks. Sometimes, a compilation step can be used, so that the run-time computational demands are minimal.
* Communicating with agents: What should a user say to a software agent? Or what would one software agent say to another? Most importantly, they should be able to talk about what they want to happen, and so they need the language of utility theory. But they also may want to talk about what they believe, and that requires probability theory.
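
As a small, invented example of the first point (and of the trade-offs raised in the previous section), the sketch below ranks actions by expected utility: a utility function encodes the user's preferences over result quality, time, and cost, and probabilities encode uncertainty about outcomes. All of the names, numbers, and weights are hypothetical.

    # Expected-utility sketch; the actions, outcomes, probabilities, and
    # weights are all invented for illustration.

    def utility(outcome, weights):
        # Utility theory: the user's preferences over an outcome's attributes.
        return (weights["quality"] * outcome["quality"]
                - weights["seconds"] * outcome["seconds"]
                - weights["cost"] * outcome["cost"])

    def expected_utility(action, weights):
        # Probability theory: weight each possible outcome by its likelihood.
        return sum(p * utility(o, weights) for p, o in action["outcomes"])

    actions = [
        {"name": "search_local_cache",
         "outcomes": [(0.9, {"quality": 0.6, "seconds": 1, "cost": 0.0}),
                      (0.1, {"quality": 0.0, "seconds": 1, "cost": 0.0})]},
        {"name": "search_whole_intranet",
         "outcomes": [(0.7, {"quality": 0.95, "seconds": 30, "cost": 0.2}),
                      (0.3, {"quality": 0.3, "seconds": 30, "cost": 0.2})]},
    ]
    weights = {"quality": 10.0, "seconds": 0.1, "cost": 1.0}   # the user's stated preferences

    best = max(actions, key=lambda a: expected_utility(a, weights))
    print(best["name"])

Changing the weights (say, making time much more expensive) changes which action is "right," without changing any of the code that carries out the actions.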

Reinforcement Learning

Reinforcement Learning is a powerful technique to automatically learn a utility function, given only a representation of the possible actions and an immediate reward function - an indication of how well the program is doing on each step. Given enough practice (either in the real world or in a simulated environment), reinforcement learning is guaranteed to converge to an optimal policy. Reinforcement Learning problems could in principle be solved exactly as a system of nonlinear equations, but in practice the problems are much too large for an exact solution, so we rely on an approximation technique based on dynamic programming.
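
A minimal tabular Q-learning routine, one common form of that dynamic-programming-based approximation, is sketched below. The environment interface (reset, actions, step) and the parameter values are assumptions made for illustration, not a specific library.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning; 'env' with reset/actions/step methods is a
    # hypothetical environment interface, not a real library.

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
        Q = defaultdict(float)                    # (state, action) -> estimated value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                acts = env.actions(state)
                if random.random() < epsilon:     # explore occasionally
                    action = random.choice(acts)
                else:                             # otherwise exploit current estimates
                    action = max(acts, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)   # immediate reward only
                future = 0.0 if done else max(Q[(next_state, a)] for a in env.actions(next_state))
                # Move the estimate toward the reward plus discounted future value.
                Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
                state = next_state
        return Q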

As an example, consider assigning cellular phone calls to channels. You want to use the available channels wisely so that there is no interference and a minimum of dropped calls. Using reinforcement learning, Harlequin invented an algorithm that dynamically adapts to changes in calling patterns, and performs better than all existing published algorithms. Reinforcement learning has been applied to other domains, such as flying an airplane, scheduling elevators, and controlling robots.

Probabilistic Networks

Probabilistic networks (also known as Bayesian networks, Decision networks, and Influence diagrams) are a way of representing a complex joint probability distribution as a graphical model of nodes (representing random variables) and arcs (representing dependence between variables). There are algorithms for computing the probability of any variable given evidence about the others, for learning qualitative and quantitative dependencies from data, for quantifying the sensitivity of an answer to uncertainties in the data, and for computing the value of information: the worth of acquiring a new piece of information.
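
As a toy illustration of the representation (not of the efficient algorithms themselves), the sketch below encodes an invented two-node network, StolenCard -> UnusualPurchase, and computes a posterior by enumerating the joint distribution; the structure and numbers are made up.

    # Toy two-node network, StolenCard -> UnusualPurchase; the structure and
    # numbers are invented, and real systems use far more efficient inference
    # than direct enumeration of the joint distribution.

    P_stolen = 0.001                                  # prior on the parent node
    P_unusual_given = {True: 0.80, False: 0.05}       # conditional table for the child

    def posterior_stolen(unusual_observed=True):
        # P(StolenCard | UnusualPurchase) by enumerating the joint distribution.
        joint = {
            s: (P_stolen if s else 1 - P_stolen)
               * (P_unusual_given[s] if unusual_observed else 1 - P_unusual_given[s])
            for s in (True, False)
        }
        return joint[True] / (joint[True] + joint[False])

    print(round(posterior_stolen(True), 4))   # a small prior, raised by the evidence

Real networks have many more nodes, and the algorithms mentioned above avoid enumerating the full joint distribution.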

Industries and Applications for Adaptive Software

Here are some real-world applications of adaptive software undertaken by the Adaptive Systems Group at Harlequin:

Crime Pattern Analysis and Fraud Detection: Harlequin makes a data visualization tool for fraud and crime analysis called Watson. For mid-size databases, trained users can easily navigate through the various displays to find the patterns they are looking for. However, for larger databases we needed to add data mining techniques to help spot the patterns. As new patterns in the data occur, we provide ways of visualizing them.

Financial: A credit card company wanted to be able to track purchase authorization requests, and decide which requests were likely to be from a stolen card, or likely to be a credit risk. Harlequin supplies the software on which the authorization system is built. The key for the customer was to have an efficient system capable of processing large volumes of transactions in real time, yet flexible enough that changes could be made in minutes, not days, with no need to bring the system down to install upgrades.

Manufacturing: A multinational company faced the problem of needing to continually build new versions of their product to meet new industry standard specifications. Each test run of a new product variation costs about $10,000. Harlequin is actively consulting on this project to bring together all the prior test results and accumulated experience of the organization in one online tool, and to provide sophisticated statistical optimization techniques to recommend an optimal experimental test plan, based on past results and predicted future contingencies.

Electronic Publishing: Harlequin's raster image processing software for high-end commercial printing is the fastest on the market. But customers are interested in the speed of the total solution, so we are developing a dynamic workflow management system that will optimize the tracking and scheduling of the customer's complete set of job tasks. This requires monitoring the jobs and being able to predict how long each job will take at each processing stage.

Telecommunications: A telecommunications company wanted to build an experimental switching system for delivering video on demand to the home. This requires strict real-time response (30 frames per second, no matter what), but it also requires flexibility in the software. The customer required that they be able to add new functionality and even redefine existing functions while the switch is running, because it would not be acceptable to interrupt service. Harlequin worked with the customer to meet these requirements using a real-time version of CLOS. The result was a switching system that met all requirements, and was built with only 25 programmers, as compared to the 250 needed on the system it replaced. A corresponding 10-fold improvement was also seen in development costs and engineering change turn-around time.