Friday, December 5, 2008
Pretty bold statement, some would say. I don’t think so. Let’s consider the facts.
When times are tough, the first thing most companies do is slash budgets. The IT budget gets reduced just like everyone else’s. The focus shifts from strategic initiatives to simply keeping the lights on and completing projects as quickly as possible. Enterprise Architecture efforts are usually the first to be eliminated or significantly reduced. Point solutions become the norm, resulting in duplication of software, hardware, and overall effort. Smokestack applications rise from the ashes of the Enterprise Architecture. Everyone becomes more concerned about keeping their jobs than doing the right thing for the company. IT managers shift into aggressive empire-building mode in order to protect their jobs and minimize their own risk. (The old mentality of “I own more than you, therefore I am more important than you” is still alive and well, unfortunately. IT managers also think that if they can “own” and control every piece of their application, it will reduce their risk and allow them to deliver results faster.) Governance becomes unenforceable and largely forgotten.
Through this chaos, interesting trends emerge. While the initial IT budget is reduced through a series of staff reductions and some technology rationalization efforts, the costs begin to creep back up in subsequent years. When the economy finally turns around and the pressure to keep the budget low eases, the IT budget suddenly becomes larger than it was prior to the cuts. Why? The explanation is simple. The empire building and unfettered decision making by IT management finally bear fruit. There is more software, more licenses, more hardware, and more code in the data center, all of which require more people to support. There is very little reuse and sharing because each group has built silo applications residing on its own unique platform. Costs increase, efficiency decreases, and it takes longer to deliver new capabilities, especially if they require several applications to integrate with each other.
Enterprise Architecture and SOA can help reverse these trends and, in fact, keep IT budgets low. Most companies have a number of redundant systems, applications, and capabilities that have grown through the kind of uncontrolled behavior described above. EA, through effective discovery and governance mechanisms, can eliminate these redundancies while maintaining the same capacity and level of operational responsiveness. Additionally, EA groups can influence or implement new architecture approaches – virtualization, green technologies, cloud computing, etc. – to help consolidate resources and gain efficiencies. SOA, as a subset of EA, provides much the same benefits. Encapsulating key business functions as reusable services helps achieve more consistency, save money, and enable faster project delivery. An effective EA program can protect a company’s IT budget from ballooning by establishing and enforcing standards, promoting reuse opportunities, and ensuring transparency across all IT systems.
The bottom line is that companies cannot afford not to invest in EA and SOA. These programs will make organizations more efficient through the economic downturn and help achieve the necessary savings. In the long run, EA and SOA will keep costs down while increasing business agility. Effective EA and SOA programs are a competitive advantage, not overhead. They will easily pay for themselves and, more importantly, enable organizations to avoid uncontrolled spending in the future. Enterprise Architecture and SOA are a must, not an option!
Wednesday, October 29, 2008
If you refer to the diagram above, you will notice several major components that make up the SOA Ecosystem.
- Registry/Repository (RegRep)
- Service Management
- Shared Service Environments
- Service Consumers
To truly comprehend how the SOA ecosystem operates, you need a clear understanding of what each component does and what role it plays. Let’s start from the service consumer side.
- Service Consumers
- Application Developers build applications that consume services. They use IDEs and other development tools to construct service requests and parse responses. Developers interact with the Registry/Repository to find the right services, obtain service metadata, and understand usage patterns.
- Application Testers perform quality assurance tasks on the final product.
- Application Servers that execute the application code interact directly with the SOA platform hosting the services.
- SOA Infrastructure
- Service Management Platform acts as an entry point into the SOA infrastructure. It retrieves policy information about the service being executed and applies it appropriately to the request. The policy is used to understand service security and authority, associated SLAs, constraints, contracts, etc. The Service Management Platform is often utilized to keep track of the service consumption and run-time metrics, which are then fed into the Registry/Repository.
- The role of the Enterprise Service Bus has already been discussed.
- Registry/Repository acts as a central store for services and their metadata. Its uses and integration points are discussed alongside each related component.
- Security / Authentication Platform is a part of the larger IT infrastructure and is typically represented by either LDAP or Active Directory technology.
- Shared Service Environments are used to host reusable services. While different organizations choose to approach service hosting differently, if a common service hosting platform can be established, many issues related to service scalability, performance, reuse, security, implementation, standardization, etc. can be easily resolved. A centrally managed platform can be easily upgraded to accommodate additional – foreseen or unforeseen – volume. Standard capabilities can be provided to perform security, authentication, logging, monitoring, instrumentation, deployment, and many other tasks.
- Service Creation
- Service Architects and Developers create reusable services using the appropriate design and development tools. They also interact with the Registry/Repository to discover existing services and register new services and related metadata. The created services should ideally be deployed into a Shared Service Environment.
- Service Testers perform quality assurance tasks on the new or modified services. They use special SOA testing tools to create test cases and automate their execution. These tools interface with the Registry/Repository to retrieve metadata about the services and update related information once testing is complete.
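To make the Service Management Platform's role in this ecosystem more concrete, here is a minimal sketch in Python. All names (`PolicyRegistry`, `ServiceManagementGateway`, the `allowed_callers` policy field) are hypothetical illustrations, not a real product API; an actual platform would evaluate far richer policies (SLAs, contracts, constraints) and feed its metrics back into the Registry/Repository.

```python
import time

class PolicyRegistry:
    """Stands in for the Registry/Repository's policy store (hypothetical)."""
    def __init__(self):
        self._policies = {}

    def register(self, service, policy):
        self._policies[service] = policy

    def lookup(self, service):
        return self._policies.get(service, {})


class ServiceManagementGateway:
    """Entry point into the SOA infrastructure: applies policy to each
    request and records run-time consumption metrics."""
    def __init__(self, registry):
        self.registry = registry
        self.metrics = []  # later fed back into the Registry/Repository

    def handle_request(self, service, caller, payload, backend):
        policy = self.registry.lookup(service)
        # Enforce a simple authorization policy before invoking the service.
        allowed = policy.get("allowed_callers")
        if allowed is not None and caller not in allowed:
            raise PermissionError(f"{caller} may not invoke {service}")
        started = time.monotonic()
        result = backend(payload)
        self.metrics.append((service, caller, time.monotonic() - started))
        return result
```

In use, a consumer never calls the backend directly; every request passes through the gateway, which is what makes policy enforcement and metrics collection possible in one place.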
Friday, September 26, 2008
That is the question. Many SOA thought leaders have addressed this topic. Most recently, David Linthicum wondered if ESBs were evil. He also talked about ESBs hurting SOA in his blog. Eric Roch has chimed in on the debate by providing some general guidelines for how to use the ESBs. Joe McKendrick has summarized the recent debate in his blog.
There seems to be a lot of pent-up emotion in the industry when it comes to ESBs. A lot of people tend to view ESBs as over-engineered, complicated, and unnecessary. Maybe it is a backlash from the vendor hype or repeated experience with failed ESB implementations. Maybe it is a reaction to the industry’s push towards choosing the tools first and fitting the solution into them later rather than vice versa. Maybe it is a response to architects calling their ESB implementation Enterprise SOA. I don’t know. What I do know is that ESBs have their place and, when properly used, are very useful.
SOA is not just about exposing services via a ubiquitous protocol and letting people use them. A successful SOA must have the following elements in place:
- Governance and Processes
- SOA Governance
- SOA Methodology
- SOA Reference Architecture (and possible Reference Implementations)
- SOA Maturity Model
- Service testing and versioning approaches
- SOA design patterns
- Service Management platform
- SOA Governance platform
- Registry / Repository (often is part of the SOA Governance platform)
- SOA testing tools
Note that the ESB plays a central role in the SOA ecosystem. It needs to be tightly integrated with the Registry/Repository that stores policy information and service metadata, the service management platform that ensures compliance with the predefined policies, and the platforms exposing the physical service endpoints. ESBs are very useful when utilized to perform the following tasks:
- SLA and policy management
- Security reconciliation
- Protocol reconciliation
- Message transformation
- Orchestration (possibly, in conjunction with a BPM tool)
- Logging and instrumentation
- Metrics collection
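A few of the mediation tasks above can be sketched in a few lines of Python. This is an illustrative toy, not a real ESB product: `EnterpriseServiceBus`, `register`, and `send` are hypothetical names, and a real bus would also handle protocols, security, and orchestration.

```python
class EnterpriseServiceBus:
    """Toy mediation sketch covering message transformation, routing,
    logging/instrumentation, and metrics collection."""
    def __init__(self):
        self._routes = {}      # operation -> (transformer, endpoint)
        self.log = []          # logging and instrumentation
        self.call_counts = {}  # metrics collection

    def register(self, operation, transformer, endpoint):
        self._routes[operation] = (transformer, endpoint)

    def send(self, operation, message):
        transformer, endpoint = self._routes[operation]
        canonical = transformer(message)          # message transformation
        self.log.append((operation, canonical))   # instrumentation
        self.call_counts[operation] = self.call_counts.get(operation, 0) + 1
        return endpoint(canonical)                # route to the provider
```

The consumer speaks its own message format; the bus transforms it into the form the provider expects, so neither side needs to know the other's details.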
When services are created, it is impossible to know who will consume them or how. In fact, it should be irrelevant. Services should not worry about all of the potential consumers, protocols, and contracts; reconciling them is the job of the ESB. Services should not have to include all of this complexity in their designs and implementations. They should only ensure that the business logic is properly implemented and a standard interface is provided. The ESB will take care of the rest.
Obviously, without proper planning and architectural oversight, ESBs can fail. Using an ESB to support only a handful of services is overkill. Blindly choosing a product without performing adequate analysis always leads to problems. However, putting ESBs in the right place in the SOA ecosystem and utilizing them for the right purposes will simplify development, increase efficiency, clearly distribute responsibilities between architectural components, and improve standardization. ESBs are not evil when used correctly.
Friday, August 15, 2008
Friday, August 1, 2008
One of the cornerstones of SOA is service reuse. Success of an SOA program is often measured by the number of services created and reused. The biggest problem with testing in an SOA environment manifests itself when a service has several consumers and changes are made to it. How do you validate that this change does not impact service consumers? How do you determine the best way to deal with this change? Do you ask all of the service consumers to perform their own regression testing to make sure internal service changes do not impact them? Obviously, this is not an effective solution. With more and more services seeing more and more reuse, you need a solution that minimizes the amount of manual testing required but, at the same time, provides a clear understanding of how service changes impact consumers.
Services are composed of three primary elements – interface, contract, and implementation. The interface represents the protocol and communication mechanism between the service and its consumers. The contract defines all of the interaction details such as message formats, SLAs, policies, etc. The implementation is self-explanatory. A service can expose multiple interfaces and may potentially support multiple contracts. The key to understanding the impact on service consumers is to verify whether or not changes to any of the service elements invalidate how it behaves today. Changes that have no impact are called non-breaking; changes that modify the behavior are called breaking.
Each shared service needs an automated test suite created as part of its normal implementation. This addresses two issues – it provides an initial test bed for the service and automates all future testing needs. The tests should verify the observable behavior of each service element. When a service is modified in any way, the automated test suite should be executed to understand the impact of the changes. If all tests pass, the changes can be considered non-breaking and consumers should be unaffected. If any test fails, this indicates a breaking change, and a new version of the service needs to be created. Alternatively, the impacted consumers can be changed but, ideally, a breaking change should trigger a new service version.
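The breaking/non-breaking decision above can be expressed very compactly. In this sketch, a service's contract is captured as request/expected-response pairs; the function names and the strict equality check are my own simplification (a real harness would compare schemas, SLAs, and message formats, not just payload equality).

```python
def run_contract_tests(service, cases):
    """Each case pairs a request with the response consumers observe today.
    Returns (request, passed) for every case."""
    return [(request, service(request) == expected)
            for request, expected in cases]

def classify_change(results):
    """All tests pass -> non-breaking change; any failure -> breaking change,
    which should trigger a new service version."""
    return "non-breaking" if all(ok for _, ok in results) else "breaking"
```

An internal refactoring that preserves observable behavior passes every case; a renamed response field fails, flagging the need for a new version.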
The biggest problem with SOA many companies face is the lack of a consistent, comprehensive testing approach. Without automated regression testing for shared services, organizations are exposed to the risk of high manual testing costs every time a service is changed or a new consumer is added. Beyond reducing those costs, automated testing can drive service versioning and serve as a formal validation mechanism that service consumers can trust. Automation can save millions of dollars in manual labor and ensure the stability of the whole SOA platform.
Wednesday, June 25, 2008
Despite Gartner labeling CoDA as an emerging trend, I don’t believe it is a new concept. Context and location aware applications have been in existence for a while. Think back to the Internet bubble days when all kinds of schemes were designed to deliver coupons, advertisements, and other “useful” information to your mobile device when you got close to a certain location. RFID and its applications became the staple of context aware applications. Even Gartner based its research on these trends. It is true, however, that CoDA has not yet become mainstream and is moving up the Gartner hype cycle curve.
CoDA is still very immature. The vision is that CoDA applications will ubiquitously run on a variety of devices, technologies, and platforms. For this to become a reality, technology needs to be created that would allow the same services to be delivered to a variety of platforms that possess the same context aware capabilities. Users should benefit from being mobile, not be hampered by it. For example, salespeople who leave the office for a client visit should be able to obtain specific customer information, find out sales status, and view the whole relationship picture immediately on their preferred device. The same capabilities should exist on all mobile platforms, which will truly make context aware applications possible. At the same time, mobile devices should evolve to ubiquitously interact with the network. Whether a WiFi, cellular, or any other kind of network is available should not prevent the application and the device from performing their functions.
Even though CoDA was billed by Gartner as the next step in the evolution of SOA, I don’t think it fits into the same paradigm. SOA’s primary goal is to create composite applications through the leverage of existing services. EDA, or as Gartner likes to call it, Advanced SOA, pursues the same objective, except that events, rather than services, are consumed. By contrast, CoDA aims to enhance the user’s experience through knowledge of his/her context and to tailor application behavior accordingly. While it builds on the concept of reusable services that deliver the right information at the right time, the concept otherwise has little in common with SOA. In my opinion, CoDA is a move towards more intelligent applications, but it is definitely not the next evolution of SOA.
CoDA still has a long way to go. It is an exciting concept that has science fiction written all over it. However, the technology, devices, networks, and people are nearing the point when context aware applications will become commonplace. The exciting thing is that I don’t think we have much longer to wait.
Sunday, June 8, 2008
The reasoning for this is simple. SOA on a small scale is not SOA. It is just a bunch of services. SOA’s goal is to create and leverage services across the organization. A single project or a couple of services cannot achieve this. Furthermore, effective governance, best practices, and lifecycle processes cannot be established on a small scale. They need to be designed and implemented with the large scale in mind. Testing them on a single project is not only impractical – it also provides no knowledge of how SOA will truly work within the organization.
Any successful SOA implementation will eventually have all of its elements in place – infrastructure, technology, governance, practices, processes, and people. Consider the impact of growing all this organically. You will end up with a hodge-podge of services implemented on different platforms using different approaches and design patterns. The technology set will be inconsistent. Governance mechanisms, which typically tend to be established late in the game, will most likely allow inadequately designed and implemented services to go into production. All this would have to be remediated at some point. Imagine the effort required to clean up years of organic growth! Most companies simply move on and leave the mess behind.
Now imagine what will happen if all of the SOA elements are in place from the very beginning. No rework, re-platforming, or cleanup will be required. All of the services will reside on the right platform that can be scaled for future demands, all of the best practices will be followed, and the governance mechanisms will be able to catch most, if not all, of the subpar services. The company will be able to reap SOA benefits right away without having to do the costly cleanup or conversion.
Of course, waiting to complete all the preliminary work can take years. No company, regardless of how strong its commitment to SOA is, can wait that long to start seeing the benefits of something that will require a lot of upfront investment. Thus, the most pragmatic approach is to introduce as many SOA elements as possible that will provide the most complete and consistent SOA foundation for the future. This should be achieved within a reasonable timeframe, so that services can start to be built and benefits can be quickly shown. All the remaining strategic tasks should continue to be addressed in parallel with the ongoing tactical service implementations.
The prescription above will not cure all of your SOA ills but will introduce a dose of prevention for the future. Building services following a consistent set of standards, using a consistent set of tools, and deploying on a consistent platform from the very beginning will ensure the success of your whole SOA program, not just a few projects or services.
Thursday, May 29, 2008
EDA is based on the concepts of events, publishers, and subscribers. At the most basic level, the idea behind EDA is that publishers publish events and subscribers consume them. Of course, some logic and rules must be applied to properly route the events. Through this mechanism, systems become connected in a loosely coupled fashion. This makes the integrations a lot easier and eliminates the need for each publisher and subscriber to know the details of how to communicate with each other.
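The publish/subscribe mechanism described above fits in a few lines. This is a deliberately minimal in-process sketch (class and method names are my own); a real EDA platform would add asynchronous delivery, durable queues, and routing rules.

```python
from collections import defaultdict

class EventBus:
    """Loosely coupled pub/sub: publishers and subscribers know only the
    bus, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Simplest routing rule: deliver to every subscriber of this type.
        for handler in self._subscribers[event_type]:
            handler(payload)
```

A publisher emits "order.created" without knowing that billing and shipping are listening; new subscribers can be added without touching the publisher.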
Sound familiar? Absolutely! Change the terms events, publishers, and subscribers to services, providers, and consumers respectively, and the paragraph above reads like an explanation of SOA. Why is this, you ask? Because EDA is nothing more than an asynchronous version of SOA. The major difference between the two is that services are typically invoked in real time while events are published and consumed asynchronously. All the other concepts are virtually the same.
While the architectural approaches and design patterns for EDA are slightly different from SOA, the fundamental concepts are still the same. A central event handling infrastructure that knows how to receive, route, and transform the messages is required. It should be viewed as practically the same thing as the Enterprise Service Bus (ESB). In fact, generically, I would call it the Enterprise Eventing Bus (EEB). As events are published, they need to be translated into a common representation, so that a consistent set of rules and operations can be applied to them. A canonical model is the best solution to achieve this goal. Additionally, the same façade pattern should be used as described in the SOA Façade Pattern post to abstract the publishers from knowing and being tied directly to the Enterprise Canonical Model. Note that the logical EDA architecture presented below is very similar to the one introduced in the SOA Façade Pattern.
Wednesday, May 21, 2008
I believe there is a place for both REST and SOAP in an SOA program. You cannot necessarily prohibit the use of one technology in favor of the other, with one notable exception – you cannot yet standardize solely on REST. SOAP should still be considered the preferred protocol, with REST utilized in very specific situations. There are a number of reasons why.
- Enterprise applications require enterprise capabilities
REST was designed for simple Web-based interactions, not for complex enterprise applications. A plethora of WS-* standards exist specifically to address the complexity of enterprise integration and interaction needs. Capabilities such as transactions, security, policy, guaranteed delivery, and many others are a must in any enterprise caliber system. REST does not yet support this level of standards and, most likely, never will due to its focus on simplicity and performance.
REST does not support any specific security standards. In fact, it relies on the infrastructure and middleware to secure end-to-end communications. If service consumers have more robust and complex security requirements than can be met with the underlying infrastructure alone, REST becomes insufficient.
- Strong contracts
Strong adherence to service contracts is the cornerstone of any SOA implementation. It ensures that services expose well-defined contracts and provide adequate information on how to access them. SOAP-based web services have built-in capability to validate their contracts. REST does not inherently support this type of verification. In fact, all the interactions between the consumers and RESTful services contain nothing more than a command and a list of parameters. It becomes the responsibility of the service to ensure the validity of the contract.
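The point about RESTful calls carrying nothing more than a command and parameters can be illustrated with a sketch. The command name `get_quote` and the `REQUIRED_PARAMS` table are hypothetical; the point is that the service itself must police the contract, whereas a SOAP stack would reject a malformed request during WSDL/schema validation before it ever reached the implementation.

```python
# Hypothetical contract table: which parameters each REST command requires.
REQUIRED_PARAMS = {"get_quote": {"symbol", "currency"}}

def handle_rest_call(command, params):
    """The RESTful service validates its own contract, since the protocol
    provides no built-in contract verification."""
    missing = REQUIRED_PARAMS.get(command, set()) - params.keys()
    if missing:
        return {"status": 400, "error": f"missing parameters: {sorted(missing)}"}
    return {"status": 200, "quote": f"{params['symbol']}/{params['currency']}"}
```

Every RESTful service ends up re-implementing this kind of validation by hand, which is exactly the gap that SOAP's built-in contract checking fills.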
As a general rule, REST should be used when performance and simplicity are paramount, no special security requirements exist, and no complex interactions between the consumer and the service are necessary. The best uses for RESTful services are simple extracts of small data sets, inquiries against open public data sources, calls to external vendors that support the protocol, etc.
A number of SOA and middleware vendors are feverishly working on creating solutions that support both SOAP-based and RESTful services. Very soon, REST and SOAP will simply become some of the many communication protocol choices existing in the rich SOA toolbox. Most of the differences will be eliminated by a combination of server-side technologies and client-side frameworks. However, until this happens, strict standards should be established guiding the use and adoption of RESTful services.
Wednesday, May 14, 2008
One of the differences is the cost of traversing distributed network boundaries, which is nontrivial in terms of both complexity and performance. Service-oriented designs acknowledge these costs by putting a premium on boundary crossings. Because each cross-boundary communication is potentially costly, service-orientation is based on a model of explicit message passing rather than implicit method invocation. Compared to distributed objects, the service-oriented model views cross-service method invocation as a private implementation technique, not as a primitive construct — the fact that a given interaction may be implemented as a method call is a private implementation detail that is not visible outside the service boundary.
Another difference is autonomy of the code. Object-oriented programs tend to be deployed and act as a unit. They are not autonomous but rather embedded into a managing container. Service-oriented development departs from object-orientation by assuming that atomic deployment of an application is the exception, not the rule. Services are deployed, managed, and run as autonomous units. Services are, in fact, the containers in which OO code lives.
The final difference is the architecture. Since OO almost always deals with in-memory calls, it places no premium on the number of method invocations or the amount of data passed. In fact, a good OO interface contains many small methods, each created for a single purpose. Under SOA, network overhead needs to be considered, which puts a premium on the number of calls made from the client to the service. Thus, the number of service calls should be minimized while the amount of data passed in each call should be maximized.
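The chatty-versus-coarse-grained contrast can be shown directly. In this sketch the `@remote` decorator is a hypothetical stand-in that merely counts boundary crossings; the method and function names are illustrative.

```python
# Hypothetical stand-in for a network boundary: counts each crossing.
BOUNDARY_CROSSINGS = {"count": 0}

def remote(func):
    def wrapper(*args, **kwargs):
        BOUNDARY_CROSSINGS["count"] += 1
        return func(*args, **kwargs)
    return wrapper

# Chatty, OO-style interface: three crossings to assemble one picture.
@remote
def get_name(customer_id): return "Ada"
@remote
def get_balance(customer_id): return 100
@remote
def get_status(customer_id): return "active"

# Coarse-grained service interface: one crossing, maximal data per message.
@remote
def get_customer_summary(customer_id):
    return {"name": "Ada", "balance": 100, "status": "active"}
```

In memory the three fine-grained getters are the better OO design; across a network, the single coarse-grained call wins because it pays the boundary cost once.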
Keep in mind that OO and SOA are not competing but rather complementary approaches. Think about service as an outer shell of an OO application that enables it to become sharable across the network.
Monday, May 12, 2008
There are basically three ways to create a canonical model.
- Buy it or adopt an existing industry standard model
There are a number of organizations that have either developed a set of standard models targeting a specific vertical (e.g. IBM - http://www-03.ibm.com/industries/financialservices/doc/content/bin/fss_ifw_gim_2006.pdf) or maintain industry standard definitions (like MISMO for the mortgage industry or ACORD for insurance). You can adopt one of these models to serve as the canonical representation of all your business entities.
Pros:
- Most of the work is already done
- Another organization maintains the model for you and introduces changes as necessary
- The model is standard and should help with external partner integration
Cons:
- The specifics of your organization may not be completely captured, which requires custom additions to be made
- Some changes or customizations may be needed that would make it harder to upgrade in the future
- The elements may be too generic or unnecessarily complex
- High learning curve for canonical model consumers
- Create it from scratch
A canonical model is created from scratch and built out completely before any work utilizing it can begin. This would require at least 3-6 months of effort meeting with various groups across the organization, collecting and sorting the information, and validating the result with the potential users.
Pros:
- Would provide the most complete and targeted model
- Users will have innate knowledge of the model since they helped build it
Cons:
- Requires all projects that need to use the canonical model to stop until it is completed
- Not a highly realistic or pragmatic approach
- Requires modifications to be made and managed internally
- Build it incrementally
A canonical model is built incrementally over the span of multiple projects. Only those elements that are required by the project are added or modified.
Pros:
- Does not require a lot of upfront effort to get started
- Efficient and demand-driven – model only what is needed
- Low learning curve – users have more opportunity to learn the model as it evolves
Cons:
- High propensity for change – the model is frequently refactored as new projects leverage it
- Requires a centralized team to own or govern it
- Frequent changes would require a large amount of testing and updates to the existing consumers
The best way to manage changes to the canonical model is to establish a centralized team to own it outright or to provide governance over it. This team would be responsible for making and tracking changes, notifying consumers, performing compatibility testing, versioning, training, and communications. A comprehensive list of all canonical model consumers needs to be maintained in order to notify them of relevant changes and understand the overall impact of modifications. Without a centralized team, there can never be a canonical model because there will be no one to synchronize or drive all the disparate efforts towards a single goal.
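The centralized team's consumer ledger and change-notification duty can be sketched as follows. The class name, version string, and change description are all hypothetical; in practice this bookkeeping would live in the Registry/Repository rather than an in-memory object.

```python
class CanonicalModelGovernance:
    """The central team's ledger: who consumes the canonical model, and
    who must be notified (and impact-assessed) when it changes."""
    def __init__(self):
        self._consumers = set()
        self.notifications = []

    def register_consumer(self, consumer):
        self._consumers.add(consumer)

    def publish_change(self, version, description):
        # In this sketch, impact analysis is simply "everyone on the list".
        for consumer in sorted(self._consumers):
            self.notifications.append((consumer, version, description))
        return len(self._consumers)  # number of impacted consumers
```

The return value gives the team an immediate impact count for any proposed change, which is the input to compatibility-testing and versioning decisions.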
Regardless of the chosen approach, changes to the canonical model are inevitable. Therefore, the façade pattern described in one of my earlier posts must be utilized when using canonical models in the SOA context.
Tuesday, May 6, 2008
In the last post, I discussed the value of the canonical modeling and described how to minimize the impact of canonical model changes on the service consumers. The solution was to use the façade pattern. I would like to elaborate on this topic since a more in-depth discussion is needed to define the pattern and understand its uses.
A good definition of the façade pattern can be found on Wikipedia: http://en.wikipedia.org/wiki/Facade_pattern. In general terms, it is described as “a simplified interface to a larger body of code”. This is exactly how it should be applied to SOA. A façade should be built in front of any service whose interface is based on the canonical model. Consumers would not access the service directly but rather through its exposed façade interface. In fact, the canonical interface should only be exposed for internal consumption. Each façade should be designed to be specific for each consumer or a group of consumers and not directly tied to the canonical model. The diagram below depicts the pattern details and its usage.
There are several distinct benefits of using the façade pattern.
- Façade shields service consumers from the changes in the canonical model.
If every consumer depended on the canonical model directly, even the smallest change could have disastrous effects. All of the services, as well as potentially all of the service consumers, would need to be re-tested. Unless automated regression and functional tests were already in place, this would be a major undertaking. Using the façade pattern minimizes the impact of any canonical model change. Since the façades are specific to each consumer and are not directly tied to the canonical model, the only thing that would need to change is the internal mapping between the façade interface and the canonical model.
- Façade hides the complexity of the canonical model.
Modeling the whole business domain is not a simple task. Therefore, canonical models are usually large and complex. Service consumers do not typically want to know the entire canonical model and understand all of its intricacies. They want to get the data they need and continue performing their business functions. Exposing a consumer-specific interface via the façade prevents service consumers from having to know any canonical model details. Additionally, since canonical models are fairly generic, most of the data elements in the returned entity may not be relevant to the consumer. A façade simplifies the request and response data structures and ensures that only relevant information is returned.
- Façade returns data representation understood by the consumer.
A canonical model is generic; it is designed to describe the whole organization. However, service consumers typically operate in their own specific domains. A service façade designed to return data in a format that consumers understand simplifies the overall consumption experience and reduces the overall effort. The consumer does not need to perform any translations and can start working with the data right away. Additionally, a façade can help represent the same entity differently for different consumers if required. There may be instances, for example, when one Line of Business (LoB) thinks of a customer one way while another LoB views a customer completely differently. These views may even be largely incompatible, but as long as they are represented in the canonical model, a façade can be created to address specific LoB needs.
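The three benefits above can be made concrete with a small sketch. The canonical fields, façade names, and consumer views here are invented for illustration; the pattern, not the field list, is the point.

```python
def canonical_customer_service(party_id):
    """Internal-only interface: returns the full, generic canonical entity.
    Consumers never call this directly."""
    return {"party_id": party_id, "given_name": "Ada",
            "family_name": "Lovelace", "household_size": 3,
            "geo": {"lat": 51.5, "lon": -0.1}, "segments": ["retail"]}

def marketing_facade(customer_id):
    """Marketing's consumer-specific view: household data, friendly names.
    Only the mapping below changes if the canonical model changes."""
    c = canonical_customer_service(customer_id)
    return {"id": c["party_id"],
            "name": f"{c['given_name']} {c['family_name']}",
            "household_size": c["household_size"]}

def mobile_facade(customer_id):
    """Another consumer's view: location only; canonical details never leak."""
    c = canonical_customer_service(customer_id)
    return {"id": c["party_id"], "lat": c["geo"]["lat"], "lon": c["geo"]["lon"]}
```

If the canonical model renames a field, only the mapping inside each façade is touched; the contracts exposed to marketing and mobile consumers stay exactly as they were.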
Saturday, May 3, 2008
The primary goal of any SOA program is to introduce a variety of reusable services. Reusability typically implies that a service has a number of different consumers and data providers. The first and most obvious value of a canonical model is that it acts as an abstraction layer between all those consumers and providers. It is an old and well-known design pattern – when you have a number of data sources and consumers that need to be integrated together, you introduce an abstraction layer so that neither is aware of the internal details of the other. This way, any changes made to a consumer or a provider will have minimal impact on all of its integration points. This is the second benefit of using a canonical model: it minimizes the impact on service consumers of internal service changes, modifications of data sources, or a switch to a new backend data source. The canonical model should remain unchanged regardless of what happens inside the service, which, in turn, ensures that the contract between the service consumer and the service itself remains unaffected. The maximum possible impact on the service may be the need to change the mappings between internal service data structures and the canonical model.
Since the canonical model minimizes the impact of internal service changes on consumers, it also reduces the need for regression testing. (This is the third benefit of canonical modeling.) If services did not provide a layer of abstraction between their internal implementation, backend data sources, and their consumers, any change inside the service or data provider would be reflected in the interface and would thus require a full regression test. Use of a canonical model eliminates the need to perform rigorous regression testing since, as discussed above, such changes do not impact the service consumers. The only thing that needs to be done is to run a test validating that the service contract has not changed.
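A contract-validation check of that kind can be very lightweight. The sketch below, with hypothetical field names, records the published contract as a set of field names and verifies that the service response still matches it; internals can change freely as long as this check passes.

```python
# Hypothetical published contract: the canonical field names that
# service consumers rely on.
PUBLISHED_CONTRACT = {"customer_id", "full_name", "household_id"}

def get_customer(customer_id: str) -> dict:
    # The internal implementation and backend data sources can change
    # freely; only the returned canonical shape matters to consumers.
    return {"customer_id": customer_id, "full_name": "Jane Doe", "household_id": "H-42"}

# If this check passes, a full consumer-facing regression run is unnecessary.
def contract_unchanged() -> bool:
    return set(get_customer("C-1").keys()) == PUBLISHED_CONTRACT

print(contract_unchanged())
```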
Representing standard structures in a canonical model maximizes service reuse (the fourth benefit). Consider all the entities your business deals with every day: customers, accounts, prices, payments, etc. Different parts of the organization may view these entities differently. One division, for example, may care about a customer's household information while another cares about his or her geographic location. Without a canonical model, you would end up with multiple slightly different representations of the same entity. This results in services built on disparate models targeted at specific audiences only. Other groups trying to reuse these services would require changes to address their needs or would simply not be able to consume them. Representing all entities in a standard way eliminates this incongruence and allows different parts of the organization to speak the same language. This, in turn, maximizes the potential and actual reuse of services built across the company.
Of course, some would argue that any changes to the canonical model would impact all of the service consumers, and they would be right. To minimize this risk, a façade pattern needs to be used. Rather than exposing the canonical interface directly to the consumers, a façade would need to be built based on each consumer's needs. It would expose data contracts specific to each consumer or group of consumers. The service would never be called directly but only through one of its façade interfaces. This way, any changes made to the canonical model would not impact the service consumers directly; the façade would remain intact. The only change that would potentially need to be made is a modification of the mappings between the façade and the canonical data structures.
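To make the change-isolation point concrete, the sketch below (field names are hypothetical) supposes the canonical model has renamed a field; only the façade mapping absorbs the rename, and the contract the consumer sees is unchanged.

```python
# Suppose the canonical model renamed "full_name" to "display_name".
def canonical_customer_v2() -> dict:
    return {"customer_id": "C-1", "display_name": "Jane Doe"}

# The facade mapping absorbs the canonical change; consumers still
# receive the contract they were originally built against.
def consumer_facade(canonical: dict) -> dict:
    return {"id": canonical["customer_id"], "name": canonical["display_name"]}

print(consumer_facade(canonical_customer_v2()))
```

The consumer-facing keys (`id`, `name`) never change; only the one line of mapping inside the façade does.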
Used together, the canonical modeling technique and the façade pattern will maximize service reuse and minimize the impact of internal changes on service consumers. The approach will save cost and time on regression testing efforts. Regardless of what the opponents say, use of these techniques is critical to the overall success of an SOA program.
Links to some good SOA & canonical modeling articles:
Thursday, May 1, 2008
Agile was created to build software quickly while requirements continue to shift, allowing end users to see results in days rather than months. Refactoring (read: "change") is assumed to be normal and is, in fact, welcomed. Software design is often organic and evolves with each story card or iteration.
SOA is an architectural style that requires rigorous planning, forethought, and discipline. Most experts will tell you that the "A" ("Architecture") is the most important element of SOA. Services must be designed with an eye toward future reuse, not just immediate requirements. Contract-first development, which is inherent in any SOA approach, is largely alien to an ASDM. The same is true for comprehensive design cycles that focus on designing reusable, flexible, and architecturally sound services. Agile simply has no room for design. Code is self-documenting, Agilists will tell you. The Agile flavors that do account for some design time still focus on a very narrow set of requirements and never take the big picture into account.
The most telling example of the incompatibility between SOA and Agile is a project that needs to build a large number of shared services for a large number of consumers. Under an ASDM, each service would be built incrementally, over time, as story cards with new requirements come up. As each new consumer's requirements are satisfied, the service must change, which most likely triggers the need to test the impact on the existing consumers. The more consumers the service has, the more testing must be done. Using model-driven design and contract-first development would undoubtedly solve this problem. The service interface would be modeled in its entirety and a complete contract presented to each consumer. This would eliminate the need to retest each subsequent consumer integration.
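Contract-first development can be sketched as defining the complete service interface up front and letting implementations evolve behind it. In the example below (all names and operations are hypothetical), consumers are built against the abstract contract, so incremental delivery of implementations never forces them to change.

```python
from abc import ABC, abstractmethod

# Contract-first: the complete service interface is modeled up front
# and handed to every consumer before any implementation exists.
class CustomerService(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

    @abstractmethod
    def update_address(self, customer_id: str, address: str) -> bool: ...

# Implementations can then be delivered incrementally, story card by
# story card, without ever changing the contract consumers code against.
class InMemoryCustomerService(CustomerService):
    def __init__(self):
        self._store = {"C-1": {"customer_id": "C-1", "address": "unknown"}}

    def get_customer(self, customer_id: str) -> dict:
        return self._store[customer_id]

    def update_address(self, customer_id: str, address: str) -> bool:
        self._store[customer_id]["address"] = address
        return True

svc: CustomerService = InMemoryCustomerService()
svc.update_address("C-1", "1 Main St")
print(svc.get_customer("C-1")["address"])
```

A later, fully backed implementation would slot in behind the same `CustomerService` contract with no consumer retesting triggered by the swap.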
To take this example even further, imagine that this needs to be done for several projects, not just one, and the timelines are not in sync! The amount of refactoring and testing would become enormous. Throw in the typical governance processes each organization has in place, a registry that keeps track of each service's lifecycle and policies, a service management platform that must be integrated with it, an ESB through which services must be exposed, etc., and you will get an even better picture. Without proper planning, design, and architecture, your SOA program will not succeed. No Agile methodology applies the level of rigor and depth required to build truly reusable services. Stop using Agile before it's too late for your SOA program!