Sunday, December 3, 2006

Metadata-Driven Application Design and Development

In this article, I present an overview of a pragmatic, metadata-driven approach to designing applications, and show how it carries through into their development. The article is written in an open style and is not targeted solely at the deep technical community. I hope it will demonstrate why metadata-driven design is one of the most powerful tools at the disposal of IT professionals engaged in building a software product.

Remember 30?

Turning thirty was a brief affair. No run-up to a massive 'farewell twenties' party, no late night at a nightclub, and no enormous hangover in the morning. But it was no less memorable for the lack of pomp and ceremony. In place of a hangover was a small realization: formulating patterns is important to anyone working with technology.

I was in Singapore, to assist a small team running a package selection and integration project for a bank. I had arrived shortly before my birthday, with the intention to lay low, get some reading done, maybe do a little work, and certainly defer all celebrations until I returned home. A colleague of mine, Lyn, was clearly appalled by my attitude and had an altogether different idea. After dinner on that ceremonious day I was duly press-ganged up to the seventieth-floor bar of our hotel. So there we sat, with wonderful views out over Singapore harbour, me drinking the ubiquitous Singapore Sling and Lyn drinking a fruit cocktail, as she was nearly six months into her second pregnancy. Not a scenario I ever imagined for my thirtieth birthday.

It was the first time either of us had worked closely with the other, and so we sat discussing the paths of professional fortune that had brought us to this project. Lyn had formerly worked for SWIFT and held much of the business knowledge the project would require. As for me, my previous engagement had been on a large custom application project, but this time my role was to define an architecture in which to position and integrate the various products to be employed on the project.

Clearly, Lyn and I were different characters with very different professional backgrounds. Business versus techie on the seventieth floor. The stage was set for a showdown, a tête-à-tête to the bone, but strangely we didn't end up trying to kill each other. Business versus techie it may have been, but that evening Lyn and I reached an agreement on one thing that has influenced my professional work ever since I first started reading the patterns literature: IT projects run on patterns. And these patterns are not just for techies.

How High Are Your Patterns?

Patterns of analysis, design, and implementation are virtually ubiquitous in the IT industry today and have received much publicity. Given the corpus of work freely available in the patterns area, there can be little doubt that the majority of key players in the IT industry believe applying patterns to the building of applications is a valuable and powerful approach. Many aspects of patterns feed directly into the common goals of methodologists, analysts, designers, implementers, testers and management, namely higher quality, more maintainable, more reliable and faster to develop solutions.

Powerful as patterns can be, they have one aspect in application design and development that has arguably proven difficult to achieve. That aspect is a substantial improvement in traceability (and hence consistency), from the analysis of the solution domain through to the creation of an application based on a standard framework-based implementation.

It's a simple but lofty goal: to make software quicker to develop and more reliable, the IT industry needs to learn to reuse software infrastructure more effectively; to reuse infrastructure more effectively, we need to define the allowable (or supportable?) patterns of use for the target infrastructure. Introduce more complete traceability into the software lifecycle and you get the potential to realize this, because traceability provides a formalised path to transform analysis-time artefacts right through to build-time artefacts. This is the key to being able to forward-engineer the analysis; you need to know the technical (infrastructure) environment you are planning to target with generated code.

Looking at the various infrastructure initiatives in the IT industry, across many technologies, it is clear that many software infrastructure providers are looking to productize the concepts around patterns into a framework for the general implementation of applications. To a certain extent, this has been going on in the IT industry for many years. However, this time the infrastructure providers are looking much higher in the layers of the application architecture. This time, they are out to steal the wind of the application architect and put it in the sails of their framework.

Looking at what is on offer today and what is clearly in the pipeline, application architects out there should be tracking this carefully; this time we have infrastructure products and tools that have been creeping up towards the base of the functional layers. And with this creeping comes a new adversary for the not-invented-here syndrome. The tools are not just helping you to model, but are helping you to speed your coding via systematic code generation.

Want some of this? Then you need to start working with patterns, both on the infrastructure and application layers.

This movement in the IT industry has been gathering strength for many years. A simplified view could be, naturally, simple.

* The higher the patterns of [reuse of] infrastructure climb, the more we can produce repeatable, higher quality, faster-to-market business applications.

The less simple, more technical view takes a longer-term perspective.

* Understand the patterns and we can build the supporting infrastructure.
* Productise the infrastructure and we can build high-productivity tools to support that infrastructure.
* Give the technical community the tools, and they will create more repeatable, higher quality applications with a faster time to market than is typical today.

Patterns, Patterns, Everywhere

If our work in information technology is as heavily linked to patterns as the industry appears to believe, then working with patterns is a reality for most of us. We probably don't notice the patterns we employ in our everyday roles, but this is the nature of a pervasive subject: if something is everywhere, after only a brief period most of us simply don't see it anymore, especially if it's related to tasks we know and perform frequently.

If they are everywhere, what does it mean to work with patterns? Many things to many people I am sure, but for the purposes of this article it has two concrete meanings. In the design process, using patterns means basing the design on commonly occurring structures ('patterns of design' for example, as per the definitions documented so very successfully in the 'Gang of Four' book 'Design Patterns'), preferably using design tools capable of storing the design in some form of model representation such as UML. In the build process—where the real code is cut—using patterns means using software infrastructure products and tools, including code frameworks and code generators such as CodeSmith.

Fortunately, the nature of architecture and design proffers the opportunity to identify and classify patterns. The valuable work done by other authors means I don't need to engage in a long discourse on the value proposition of patterns in design. However, the same statement cannot be made carte blanche for the implementation phase of the software lifecycle, where patterns—or more specifically frameworks—have unfortunately gained the occasional bad reputation.

Effectively, frameworks suffer from being 'generic'. It is often possible to use—or abuse—a framework in many different ways. In providing for a reasonable number of alternative uses and styles of use, so as to offer a generic base service to the people building applications, frameworks have gained an unjust reputation for being overly complex and hence difficult to understand, as IT professionals are swamped with information on the various approaches on offer. Getting the level of a framework right, achieving the balance between prescription and flexibility, has proven a difficult challenge to meet. Get it wrong and you always face the question: do you really need patterns to implement software-based systems?

The root cause of this issue is probably that most application teams are under pressure to produce higher quality solutions as fast as possible. Naturally, this leads teams away from approaches that are potentially more complex and that hence may delay the onset of the application's build. When working under pressure, nobody is looking for an approach that they feel has the potential to bring more stress.

Anyone for Some Stress Reduction?

Thinking about stress brings a question to mind. Ever found a pattern to help relieve some stress? I have one that is absolutely fantastic. Go home, hide in the study away from the madding crowd, scan-read the remainder of the day's emails and try to update the little work-and-play mind-map I keep hidden on my laptop, all the while sipping a good single malt or maybe a white port. Now that is a truly great pattern.

I have always found patterns in many places, from the manner in which start-of-day status meetings are run to the end-of-day wind-down. It is inherent in the nature of patterns that they help us to manage complexity. A good pattern provides an understandable description of complex structures and can equally assist in the description of complex relationships. This can make life so much more pleasant and stress-free for anyone who needs to work with complexity. Admittedly, doing so also has a couple of other useful side effects, but we will investigate those later in the article.

If the use of software frameworks and infrastructure can be defined by patterns [of use], then metadata is the language used to describe those patterns; describing the patterns is the responsibility of metadata. Metadata is not a single language: many different dialects of metadata exist and may even cohabit. Metadata is the common term for the representation of the data models that describe patterns. Hence, for any given set of metadata, the quality of the description it offers is determined by the associated metadata model (literally, the model of the metadata).

It is worth noting at this point that patterns are not and should never be restricted to design-time. Previously, standards such as the Unified Modelling Language and initiatives such as Model Driven Architecture indicated to many that modelling and metadata were upstream in the project lifecycle. Not so. IBM, like many large technology organizations, started work on formalising a model of a total system based on UML during the late 90s. Many of these initiatives never became official external publications, but in October 2003 Microsoft broke this apparent silence and published details of its pending System Definition Model (SDM). With models such as SDM covering the entire scope of a system, you can expect to see model-driven, and hence metadata-driven, processes encroaching on almost every area of the software development lifecycle.

Describe Your Patterns to Reap the Benefits of Functional, Policy and Service Abstractions

Using metadata in the design and build processes—or metadata-driven design and development of applications—proffers one solution that can elegantly leverage the powerful nature of patterns, and hence leverage the high value software framework and infrastructure products that are now available in the market.

Metadata helps to describe the patterns present in your application's domain. Describing (or modelling) the patterns that the application must employ helps to promote understanding of what features the software infrastructure must provide and the style in which that infrastructure could best be utilised. This provides very early visibility of the infrastructure needs and also permits, very importantly, a safer, more controlled environment in which the software infrastructure can climb higher in the layers of an application's architecture. Control over how infrastructure creeps up the layers is critical to providing higher value services to the application developers. This provides your application developers with an increased level of functional abstraction.

Modelling the needs of an application using patterns (in turn described by metadata) may be applied to almost any area. Hence, metadata can be used to describe anything from the mapping of object attributes to relational tables, through the definition of rules around the runtime session management of a pool of related objects, to the signatures of the services offered by a particular class of object. Such rules form the policy definitions used by application developers to control the underlying behaviour of the infrastructure.
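
To make this concrete, here is a minimal sketch, in Python for brevity, of what such metadata might look like. The structures and field names (ParameterDef, ServiceDef, table_mapping, session_policy) are illustrative assumptions of mine rather than a prescribed model; the point is only that a signature, a persistence mapping and a session policy can all be captured as plain data.

    # A minimal, hypothetical metadata model: the structures and field names
    # here are illustrative assumptions, not a prescribed standard.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ParameterDef:
        name: str             # logical parameter name
        type: str             # logical type, e.g. "string", "decimal", "date"
        required: bool = True

    @dataclass
    class ServiceDef:
        name: str                        # service name exposed at the boundary
        parameters: List[ParameterDef]   # the service signature
        table_mapping: Dict[str, str] = field(default_factory=dict)  # attribute -> table.column
        session_policy: str = "shared"   # e.g. "shared" or "per-request"

    # Example instance: a customer lookup service described entirely as data.
    customer_lookup = ServiceDef(
        name="CustomerLookup",
        parameters=[ParameterDef("customerId", "string"),
                    ParameterDef("includeHistory", "boolean", required=False)],
        table_mapping={"customerId": "CUSTOMER.CUST_ID"},
        session_policy="shared",
    )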

When carefully and judiciously applied to the modelling of your application, metadata may be employed in the description of both the infrastructure and application service boundaries and for the configuration data of the associated service implementations. Subsequently, the metadata descriptions associated with these services (and the metadata model) may be used in the forward-engineering of your services—into existing and future technology environments. This provides a rich—and reusable—approach to service definitions.

Using metadata in this manner, across the business (application) and technical (infrastructure) domains, requires an agreement on the overall metadata model between the business analysis and the software infrastructure. This is an agreement on a behavioural contract between the application and its infrastructure, and it permits both the detailed functional analysis and the construction of software infrastructure to run in parallel. In such a scenario, the application development team may tackle critical issues at a very early stage in the development process, which is known to be a key factor in improving the effectiveness of the software development lifecycle.

The metadata-driven approach also works well with other mainstream approaches such as use case modelling and artefact-based methodologies (for example, the Rational Unified Process), has the potential to naturally lend itself to reuse and repeatability, and is independent of the target technology platform.

Technology

Before diving into the discussion about how metadata links to the service-oriented architecture and Web services, it is probably worth a little pause at this point to discuss the technology behind metadata.

In thinking about how this should best be formulated, I came to the conclusion that there is no easy way to say it in a technology-related article. So let's just put it out there—quite simply, there is no 'technology' in metadata.

Metadata itself is a concept. That is why many different models of metadata—or metadata models—can exist. The metadata concept may be applied to many areas of technology, and hence to many software products, in the business and technical layers. JNDI has a meta model, as do LDAP and Active Directory. They all use similar concepts in their meta models, but all have different (physical) metadata.

What is clear is that design and development products using metadata also embed or integrate with tools. Frequently these tools are graphical and, more and more, are linked to improving development productivity. Next to the metadata itself, which clearly has primary importance, this use and integration of tools to support the storage, propagation and use of your metadata model across the software lifecycle is probably the most important aspect of technology in the design and build environments.

When selecting tools to assist with design and development, the most important criteria are often centered on:

* The ability of the tools to store and use metadata, both of their own design and yours.
* The ability of the tools to share metadata, which is particularly useful between the design-time and build-time to help reduce the semantic gap between the design and the implementation.
* The integration of the development tooling with the underlying software infrastructure and runtime platform, particularly for code generation.
* The integration of the design and development tooling with code generation technology, such as scripting languages that may be used to interrogate the metadata model (a small sketch of this kind of interrogation follows this list).
* Standards (MOF) compliance.
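
As an illustration of the kind of scripted interrogation meant in the penultimate point, the following minimal Python sketch walks a toy metadata model and prints the service signatures it finds. The model layout is the hypothetical one sketched earlier in this article, not any particular tool's repository format.

    # Hypothetical tooling script: interrogate a metadata model and report
    # the service signatures it contains. The model layout is an assumption.
    services = [
        {"name": "CustomerLookup",
         "parameters": [{"name": "customerId", "type": "string", "required": True},
                        {"name": "includeHistory", "type": "boolean", "required": False}]},
        {"name": "PaymentSubmit",
         "parameters": [{"name": "amount", "type": "decimal", "required": True},
                        {"name": "currency", "type": "string", "required": True}]},
    ]

    for service in services:
        signature = ", ".join(
            f"{p['name']}: {p['type']}{'' if p['required'] else ' (optional)'}"
            for p in service["parameters"])
        print(f"{service['name']}({signature})")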

Linking SOA and Web Services Using Metadata

Instead of creating a several thousand word essay on why using metadata in design and development is so great, I'll try an alternative, more 'techie-friendly' approach. Let's assume we all agree that metadata-driven design and development is just fine. Patterns are great, patterns govern most of what we do in IT, metadata is a cool way to represent your patterns, and finally that metadata can be used in both design and build.

As we all agree that metadata-driven design and development is great, we don't have to waste any more time debating its merits. In place of the essay, let's take a look at what we could implement with this approach based on a link between SOA and Web services.

First, a cautionary note to the reader before continuing. The examples are exactly that—examples. Focussing on one aspect of using metadata in a system does not mean the same concepts do not apply to other areas. Many areas of systems already use accepted metadata models, particularly around authentication and authorisation using products such as Active Directory/LDAP and certificate management services.

The Proposition

Walking through two example applications built on similar principles but different implementations, I am going to explore how a metadata-based approach can be used to define the service interfaces on an existing application and on a new application. The primary goal of this exercise is to propagate those interfaces to an additional technology environment using a combination of infrastructure products, a (code) framework for building applications (an application framework) and a metadata model seeded in the design phase.

Example Applications—Common Ground

In this walkthrough, the two example applications perform similar functional roles and have a similar facade to the exterior world.

As I am focussing on the service interface layer on the core server, I'm going to make a few assumptions about the implementation of each application. First, both applications are running on the same database and data model—application #2 is the candidate replacement for application #1. Second, I am not concerned with the presentation interface or any middle-tier user interface components. Third, I am not concerned with any detailed deployment aspects or configuration of any communications mechanisms. Fourth, the server model is identical, with application sessions shared across all requests for services and server-side session handling embedded in the application or the runtime platform. Fifth, and last, the development of the application is carried out by multiple teams in several time zones.

While the actual (business) functionality may be similar between the applications, what is not common between the applications is the model for presenting their service interfaces on the system boundary.

Example Application #1—the Original, the Legacy

The server-side interface of the original application consisted of a C header file. Each function on the interface employed a signature tailored to its purpose. External components called the interface functions using DCE-based remote procedure calls.

Adding a new function is relatively simple, but highly coupled to the DCE technology. Adding an alternative, non-DCE invocation channel to a function is not possible without significant re-engineering.

Note that the data dictionaries of all services are exposed directly on the application's interfaces and that function names are placed directly in the global namespace of the application (they must be unique across all functions being developed by the teams). This is shown in Figure 1.

Figure 1. Application #1

Example Application #1—the Evolution

The original application #1 was later wrapped by a set of C++ classes to expose the server-side interfaces on a CORBA bus. The C++ interfaces are not direct wrappers, but group 'C' functions into sets of services and provide a uniform 'generic' object as the request context.

Each service defines a number of classes derived from the request context class, to create a set of dedicated 'containers' for the parameters required by the underlying 'C' functions on the server-side. Thus functions are grouped into services and the parameter definitions for all functions are encapsulated by the set of container class definitions.

To facilitate the handling of these 'more generic' requests, a new server-side (application) component type was introduced to check the request context, validate that the correct parameters have been supplied, and map these parameters to the correct underlying 'C' function. The request mapper is the implementation of the service defined in the interface definition language (IDL). Normally, to permit parallel development, there is one request mapper per service IDL. However, this model is not enforced by the system.

Adding a new function to an existing service requires the derivation of a new container class and the addition of new code in the mapper to marshal the container data and make the underlying 'C' call. Adding a new function to a new service requires first that the service interface be defined in IDL and a mapper be created based on that service interface.
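
To illustrate the coded-marshalling style being described, here is a minimal Python sketch that mimics the shape of the evolved application #1: a request context base class, a derived 'container' per function, and a request mapper that unpacks the container and calls the underlying function. The class and function names are invented for illustration; the real implementation described here is C++ over CORBA, not Python.

    # Hypothetical Python analogue of the evolved application #1:
    # hand-coded marshalling from typed request containers to 'C' functions.

    class RequestContext:
        """Generic base class for all request 'containers'."""
        pass

    class GetAccountBalanceRequest(RequestContext):
        """Dedicated container for the parameters of one underlying function."""
        def __init__(self, account_id: str, currency: str):
            self.account_id = account_id
            self.currency = currency

    # The pre-existing 'C'-style function being wrapped (stubbed here).
    def c_get_account_balance(account_id: str, currency: str) -> float:
        return 42.0  # placeholder result

    class AccountServiceMapper:
        """One mapper per service IDL: checks the container type and
        hand-codes the call to the underlying function."""
        def execute(self, request: RequestContext):
            if isinstance(request, GetAccountBalanceRequest):
                # Hand-written marshalling: every new function needs new code here.
                return c_get_account_balance(request.account_id, request.currency)
            raise TypeError(f"Unsupported request type: {type(request).__name__}")

    # Usage
    mapper = AccountServiceMapper()
    print(mapper.execute(GetAccountBalanceRequest("ACC-001", "GBP")))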

Note that the data dictionaries of all services are exposed directly on the application's interfaces, but are scoped by service and request context definitions as shown in Figure 2.

Figure 2. Application #1 evolved

Example Application #2—the King of the Hill?

The key difference between application #2 and application #1 is that application #2 was built with a metadata-based approach. Application #2 is not restricted to reusing the existing 'C' functions.

Each server-side function in application #2 is written based on a common metadata model for the representation of a service request. This is similar to the evolved application #1 in concept, but the handling of the generic request contexts is embedded in the server code. Server-side functions must now understand the generic request context format. This is shown in Figure 3.

Figure 3. Application #2

To represent the request context, concrete parameters are not desirable. In place of a concrete function signature, at runtime application #2 uses a generic structure for the request context. In this instance, the request context needs to be similar in nature to a name-value-pair property bag. Property bags can be hierarchical and hence can be used to represent any arbitrary data structure, much like an XML document based on an XML Schema.
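
As a minimal sketch of the idea, the following Python fragment uses nested dictionaries to stand in for such a hierarchical property bag; the service name and field layout are invented for illustration rather than taken from application #2.

    # A hierarchical name-value property bag standing in for a concrete signature.
    # The service name and field layout below are invented for illustration.
    request = {
        "service": "PaymentSubmit",
        "context": {
            "payment": {"amount": "150.00", "currency": "GBP"},
            "beneficiary": {"account": "ACC-009", "name": "A N Other"},
        },
    }

    def flatten(bag, prefix=""):
        """Yield (path, value) pairs for every leaf in a hierarchical property bag."""
        for key, value in bag.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                yield from flatten(value, path)
            else:
                yield path, value

    # Because the structure is generic, the same code can walk any request.
    for path, value in flatten(request["context"]):
        print(path, "=", value)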

The service name and request context are all that is required by the new server-side application components to execute the request. The service name coupled to the definition of the permitted structure of the request context is persisted in the database as a 'service definition'. This definition, effectively an entry (row) in the database, belongs to that service alone.

A new generic request handler application module is included in the system. This module has been designed to interrogate the metadata repository and validate incoming requests against the associated service description metadata. It can also create the internal memory structure from the request context to pass to the target service implementation.

Thus the interface definition is held in the metadata repository (in this case, the application's database).

Adding a new service to the application is straightforward on the server side: implement the generic interface and handle the request context structure required for the service request, assign the service a name, register the name and request context description in the database, and perform any deployment actions required to deploy the new service code on the server.
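
Sketched minimally in Python, with an in-memory dictionary standing in for the database table of service definitions and with invented service names, the registration and generic handling might look like this.

    # Hypothetical sketch of application #2's server side: service definitions
    # live in a metadata repository and a generic handler validates and dispatches.

    # Metadata repository: in the real system this would be rows in the database.
    service_definitions = {}
    service_implementations = {}

    def register_service(name, required_fields, implementation):
        """Register a service: its definition (metadata) and its implementation."""
        service_definitions[name] = {"required": required_fields}
        service_implementations[name] = implementation

    def handle_request(service_name, context):
        """Generic request handler: validate the context against the service
        definition metadata, then dispatch to the target implementation."""
        definition = service_definitions.get(service_name)
        if definition is None:
            raise LookupError(f"Unknown service: {service_name}")
        missing = [f for f in definition["required"] if f not in context]
        if missing:
            raise ValueError(f"{service_name}: missing fields {missing}")
        return service_implementations[service_name](context)

    # Adding a new service is a registration, not a new interface signature.
    register_service("CustomerLookup", ["customerId"],
                     lambda ctx: {"customerId": ctx["customerId"], "name": "A N Other"})

    print(handle_request("CustomerLookup", {"customerId": "CUST-42"}))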

The same cannot be stated for the client side. In place of the definitive client-side header file or IDL interface definitions of application #1, the client-side of application #2 receives only a generic entry point function via the same technical-interfacing technique. This is a big change, in that the service data dictionaries are no longer propagated directly onto the service interfaces. The service interfaces, from an external perspective, are all soft-set in the database. Note also that the scope of a service definition is now one-to-one with the service implementation.

Clearly, application #2 suffers from an imbalance. The server-side is now very flexible and well defined, while the client-side needs to understand the metadata before it is possible to understand what services are on offer.

Enter the world of service discovery. The services of application #2 need to be discoverable by clients wishing to use them. This is possible via the tried and trusted pattern called documentation. Users (clients) of the application's services can read the interface specification and therefore construct well-formed, valid requests.

Metadata-Driven SOA

Obviously, example application #2 is very service oriented at the system boundary—it does not know anything else—but it would be unfair to claim example application #1 does not embody some aspects of a service oriented architecture.

In its original form, application #1 is like many so-called legacy applications written in 'C' or COBOL. The code may have been modified many times, but within the domains of the technologies available at the time these applications were fundamentally service-oriented. 'C' programs had their extern function signatures; COBOL programs had their copybooks. Include an infrastructure like IMS or CICS, and you certainly have a service-oriented system facade, as these TP monitors forced that model: a bucket of data in, run the request, a bucket of data out, with the data buckets described via data structures. That is a service, right?

So what is the fundamental difference between applications #1 and #2? The answer [for me] is they are both service-oriented at their system boundaries, but to be a true service-oriented application the fractal model must be applicable from the system boundary to the database, with service interfaces defined for each component or sub-system and each service treated as a black box by the caller.

However, staying focused on the system boundaries of the examples shows that the quality of the service description is vastly different between them.

* The original application #1 is doing nothing more than exposing library or executable end points, with its entry point function signatures propagated over the system boundary via the DCE technology.
* The evolved application #1 is doing the same, but with a more elegant end point description mechanism employing a simple abstraction. Entry point functions are exposed as a simple service-oriented abstraction via CORBA, with the link between the abstraction and the concrete implementation via coded marshalling.
* Application #2 has only one technical end point to expose. Exposing this end point in any technical environment in the same manner (via DCE, CORBA, COM, asynchronous messaging, etc.) will generate the same result—the data dictionary will never be exposed, only the signature of a generic service call. The value in this approach is to use the metadata-driven service definitions, and the associated application infrastructure for invoking those services, as the implementation runtime behind proxies generated in a number of different technologies, permitting wider usage of the services.

The metadata-driven nature of the services of application #2 leads the solution to a dead-end if a pure technical 'code it' approach is taken to providing access to those services. In such a metadata-driven application, exposing functions is replaced by exposing metadata.

Sound a little dangerous, this idea of exposing metadata? Exposing the metadata itself is not the true intent of a metadata-driven application. Using the metadata to drive the propagation of services [functions] over the system boundary is a more accurate phrasing of the approach that needs to be employed.

From SOA to CORBA, from SOA to Web Services, from SOA to ...

To demonstrate how a real metadata-based service design can outrun the competition, let's propose some new modifications to our existing example applications.

* We have our basic server-side runtime setup for all three applications; this stays 'as is'.
* We would like to add a clean migration path to a very fashionable new integration capability based on Web services.
* We also want a migration path for a number of the external applications that have been integrated via CORBA against a number of specific services of application #1.
* Someone asked for documentation, so let's give them that too.
* These external applications are to be migrated to use the corresponding services of application #2, requiring that application #2 supports the same type of CORBA interface as employed in the evolved application #1.

Providing support for both CORBA and Web services access channels to the services of application #2 may all be achieved efficiently, effectively and safely using an infrastructure and code generation. The same cannot be stated for either guise of application #1. If the same approach were applied to application #1, the most likely result would be increased complexity, owing to the lack of a central model to describe its services. Note it is the central model that is important, much like the tModel concept in UDDI being the central model (as opposed to the physical data model used in MSDE or SQL Server as the repository).

Using the metadata service definitions, coupled to an appropriate tools and infrastructure environment, a metadata-driven application is capable of providing this bridging approach to propagate its services into many technologies via code generation. This is a direct result of all services being regular (described in a common grammar and vocabulary) and of all service descriptions being available in a meta-format (a descriptive format—the metadata) at both build-time and runtime.
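
As a minimal, hypothetical sketch of that bridging, the fragment below uses a simple string template as the 'code generator' and emits a Python proxy class from a service definition; a real tool in this scenario would emit CORBA IDL, WSDL, C++ or other target-technology artefacts, but the principle of deriving the proxy from the metadata is the same.

    # Hypothetical code generation from a metadata service definition.
    # A string template emitting Python keeps the sketch short and runnable;
    # a real generator would target IDL, WSDL, C++, and so on.

    service_definition = {
        "name": "CustomerLookup",
        "parameters": ["customerId", "includeHistory"],
    }

    PROXY_TEMPLATE = '''
    class {name}Proxy:
        """Generated proxy: marshals concrete parameters into the generic request."""
        def __init__(self, transport):
            self.transport = transport  # e.g. a CORBA or Web services channel

        def call(self, {args}):
            context = {{{context}}}
            return self.transport.send("{name}", context)
    '''

    def generate_proxy(definition):
        params = definition["parameters"]
        return PROXY_TEMPLATE.format(
            name=definition["name"],
            args=", ".join(params),
            context=", ".join(f'"{p}": {p}' for p in params),
        )

    print(generate_proxy(service_definition))
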
Design Direct to Implementation—Linking Metadata to Tools

There are a number of prerequisites to this approach that are formidable enough subjects to warrant articles of their own. But to touch on them briefly, in a horribly simplified vein, we need to go shopping for the following items.

* Modelling tools that are capable of providing a graphical interface on to a model repository where we may store our metadata models and the associated metadata instances (not necessarily in the same physical store). Depending on your application domain, the usual suspects are likely to be a UML-style tool, a process modelling tool or a combination of the two. The tools must provide a programmatic interface to their model storage.
* One or more target infrastructure products, preferably standards-based or de facto third party standards-based but home-grown if required. These products should be focussed on providing the bulk of the technical services required by an application, from persistent storage to session management (and that is session management on all tiers of the application, where possible).
* Development environment tools capable of supporting the programmatic interfaces of the modelling tools and the target infrastructure products.
* Development environment tools capable of supporting code generation. For those familiar with Lex and Yacc, this is not the type of 'tool' in question. Supporting code generation in this context needs to be at a higher level than the basis of regular expressions and context-free grammars. Ideally, code generation tools should embody some of the principles of dealing with metadata as this helps significantly reduce the complexity of the code generator.
* One or two strong technical leaders, familiar with the concept and use of employing metadata to drive the application development lifecycle.
* A number of designers familiar with the concept of metadata modelling and the design of application infrastructure.
* A number of developers familiar with the concept of using metadata and who agree with the designers.

The modelling tools, infrastructure products and development environments all exist in today's marketplace. The final 'people-oriented' points are arguably the most difficult prerequisites to get right; as is the case with most software teams, it is the mix of people and approaches that often makes or breaks the software lifecycle.

Professor Belbin's test will help get the mix of 'people types' reasonably well balanced, but there is no compensating for a unified team. This is critically important in a metadata-driven approach, as all team members must adhere to the model if the application is to achieve its goals.

Putting the whole thing together, we arrive at a process whereby it is viable for the solution analysis (sometimes termed the requirements analysis) to feed directly into the application design and the application infrastructure. The application design governs the overall schema of the component model and the metadata definitions. The application infrastructure looks to provide support for the application schema via a set of application framework APIs, the reuse of standard infrastructure or the building of bespoke infrastructure. Finally, the application tooling is responsible for interrogating the models and metadata of the application's design and generating code on top of the application infrastructure. This is shown in Figure 4.

Figure 4. A metadata-driven process

Build-Time

Looking a little more closely at the build-time, and once again focussing on the service interfaces, the metadata service definitions are used by the application tooling to provide code generation of the service interface implementations (the service proxies, in different technology environments).

The application's metadata is used directly to derive generated code. It is important to note that the metadata feeds into the build of the application's services too, as does the common infrastructure of the application. The generated code should be based on the same code base (the framework) as the core application. This helps to keep the volume of generated code to a minimum—code is generated on top of this infrastructure layer. This is important to help reduce the need for complete regression testing.

The role of the generated code is to provide the marshalling of requests from one format (such as a request originating from a CORBA peer, or a Web services POST) to the generic internal metadata-oriented request format. The metadata must contain information about the service interface, such as parameter types and names, but can be extended to include default values and validation. If extended in this vein, the generated proxy will be able to use the same validation rules as the core service (on the assumption that the same metadata is used by the core service—which in this scenario, it should be!).
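
A minimal sketch of that idea follows, assuming a hypothetical metadata layout that carries per-parameter types, defaults and a required flag; the same validation routine is driven by the same metadata whether it is called from the generated proxy or from the core service, so the rules cannot drift apart.

    # Hypothetical: one validation routine, driven by the same service metadata,
    # shared by the generated proxy and the core service implementation.

    service_metadata = {
        "name": "PaymentSubmit",
        "parameters": {
            "amount":   {"type": float, "required": True},
            "currency": {"type": str,   "required": True, "default": "GBP"},
        },
    }

    def validate(context, metadata):
        """Apply the defaults and type checks described by the metadata."""
        validated = {}
        for name, rules in metadata["parameters"].items():
            if name not in context:
                if "default" in rules:
                    validated[name] = rules["default"]
                elif rules.get("required", False):
                    raise ValueError(f"Missing required parameter: {name}")
                continue
            validated[name] = rules["type"](context[name])  # coerce to the declared type
        return validated

    # Both the generated proxy and the core service call validate() with the
    # same metadata, so the rules stay consistent across the boundary.
    print(validate({"amount": "150.00"}, service_metadata))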

The generic infrastructure is, fundamentally, extended to support a specific technology via systemic code generation. The code generation should simply be defining a wrapper on the underlying application infrastructure to marshal requests from 'one side to the other'. This is made possible by the presence of a metadata model, as that model defines an overall structural representation for the service interfaces. The result is that the infrastructure for a given technical environment can be tested independently of the code generation, a key factor in increasing the quality of a solution employing code generation. Deriving from the metadata also permits different code generation policies to be applied, such as 'include code generation for request validation' versus 'no code generation for validation'.

Figure 5. Systemic code generation

One downside to code generation is the need to ensure that all critical requirements of the target environment (to be bridged by the code generator) are well described in the service metadata model. If not, the metadata model needs to be revised or extended to support the concepts in the target environment that are absent from the metadata model (that is, the model is not complete).

Another downside to deriving the service proxies in this manner is that of versioning. Versioning and configuration management is often a very thorny subject and certainly a subject that warrants dedicated treatment. However, the issues facing proxy versioning are not so different from the issues faced by more typical development approaches. If a service definition changes, the associated proxy will need to be regenerated and any integration against that proxy will need to be assessed for impact (all facilitated by the formalised traceability of the approach). This is no different to any other scenario, except here we need to ensure the core services, infrastructure and generated proxies are all matching what was intended in a deployment!

For the former, there is no real answer to this as it is all about the completeness of the metadata model being utilised. Looking for standards may help, but nothing will beat a well reviewed and documented (use cases!) model. Fortunately, for the latter, derivation from a metadata model means both packaging and impact analysis on changes to packages or packaging may be performed within the same tools-based regime. At code generation, the exact configuration of the build is well known and can be used to populate a deployment repository.

Runtime

Returning to the original intent of showing what can be done with a metadata-driven approach to design and development of applications, the best place to review that is in the runtime.

Assuming the build process used a common application infrastructure tailored towards the metadata service definitions and that metadata was used in the creation (coding—no magic there!) of the core services of the application as well as the generation of service proxies, the consistency of the overall solution will be as complete as the metadata model itself.

A request originating from any supported external source is marshalled by the respective service proxy. In that marshalling, dependent on the code generation policy employed, the service proxy may perform some validation of the request based on the rules supplied via the metadata. This could even include authentication, authorisation and session management via a hand-off of metadata to delegated sub-systems.

Once marshalled into the generic service request format, the request is forwarded to the request manager for execution. In that execution, the request manager interrogates the request and matches it against the metadata for the target [core] service. If all is well, the request is accepted by the target service and that service may then use additional metadata in the processing of the request. This is shown in Figure 6.

Figure 6. Request processing
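
Condensed into a minimal, hypothetical Python sketch, the flow of Figure 6 looks something like this; the proxy, request manager and service names are invented for illustration, with a plain dictionary standing in for an external (for example, Web services) request.

    # Hypothetical end-to-end runtime flow: generated proxy -> request manager
    # -> core service, consulting the service metadata along the way.

    service_metadata = {"CustomerLookup": {"required": ["customerId"]}}
    core_services = {
        "CustomerLookup": lambda ctx: {"customerId": ctx["customerId"], "status": "active"},
    }

    class RequestManager:
        def execute(self, service_name, context):
            meta = service_metadata.get(service_name)
            if meta is None:
                raise LookupError(f"Unknown service: {service_name}")
            missing = [f for f in meta["required"] if f not in context]
            if missing:
                raise ValueError(f"{service_name}: missing {missing}")
            return core_services[service_name](context)

    class GeneratedWebServiceProxy:
        """Stands in for a generated proxy: marshals an external request
        (here a simple dict, imagine a SOAP/HTTP POST) into the generic format."""
        def __init__(self, manager):
            self.manager = manager

        def handle_post(self, payload):
            return self.manager.execute(payload["service"], payload["body"])

    proxy = GeneratedWebServiceProxy(RequestManager())
    print(proxy.handle_post({"service": "CustomerLookup",
                             "body": {"customerId": "CUST-42"}}))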

The final result is an approach to solution design that flows from inception to implementation, providing a pragmatic view on how and why metadata-driven applications potentially have a longer life than their more traditional counterparts.

Documentation

If this approach permits the generation of code from service definitions, there is no reason not to generate the service interface specification documentation also. This has been common practice for many years, with top-down documentation generation from tools like Rational Rose (to Microsoft Office Word) or bottom-up via code-oriented tools such as AutoDuck or JavaDoc.
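
For example, a minimal sketch of generating a plain-text interface specification from the same kind of hypothetical service metadata used in the earlier sketches:

    # Hypothetical: generate interface documentation from service metadata.
    service_definitions = [
        {"name": "CustomerLookup",
         "description": "Return the customer record for a given identifier.",
         "parameters": [("customerId", "string", "required"),
                        ("includeHistory", "boolean", "optional")]},
    ]

    def generate_spec(definitions):
        lines = []
        for svc in definitions:
            lines.append(f"Service: {svc['name']}")
            lines.append(f"  {svc['description']}")
            for name, type_, presence in svc["parameters"]:
                lines.append(f"  - {name} ({type_}, {presence})")
            lines.append("")
        return "\n".join(lines)

    print(generate_spec(service_definitions))
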
Applicability

A few observations on the organizational aspects involved in applying this approach are worthwhile. To characterize the types of organizations that often appear drawn to a metadata-driven philosophy of development, this approach is probably geared towards:

* Designers and developers that want a closer link between the design and the implementation;
* Organizations that are looking seriously at a real infrastructure-driven, higher productivity and higher quality approach to their development;
* Organizations that are not nervous about getting into bed with a couple of productivity-enhancing tools;
* Organizations looking to build knowledge repositories and the associated tools, to better describe their product(s) via a formalised description language;
* Organizations that need to support a range of technical environments, particularly at the system boundary.

Positive Benefits?

This approach has been used to varying degrees on many projects I have discussed, reviewed and worked with. Many of these projects have sought the high goal of complete forward engineering from the models to the runtime, and some have found success when dealing with a specific domain.

At Temenos, we have been using this and related techniques in the production of the new 24x7-capable banking system, 'T24', launched late in 2003. More specifically, we have used these techniques in the production of a software development kit (programmatic APIs on existing functions) and Web services deployment tooling. It might not be easy, and it might hurt your head from time to time. But looking at the model for the T24 solution, it is clear to us that a metadata-driven approach to the design and development of your applications will, when the next technology wave comes, help you engineer your existing services out of a hole and into the limelight.

Check, carefully, the initiatives of many of the IDE and tools vendors. Metadata representation and code generation are being courted once more. This time, however, the aim appears to be to help the development process become more productive by providing tools to manage the abstractions and complexity in today's technical environment.