Wednesday, December 30, 2009

Microsoft Azure

When I first heard about Microsoft Azure, I thought it was yet another feeble attempt by Microsoft to get in on the hype with a subpar product just to get a foot in the door. I have to admit, though, that I was pleasantly surprised. Shockingly, Azure is a robust, well-designed cloud platform that may yet prove to be better than some of its competitors.

In a nutshell, Azure encompasses three products.

  • Windows Azure
    • Compute: Virtualized compute based on Windows Server
    • Storage: Durable, scalable, & available storage
    • Management: Automated management of the service
  • SQL Azure
    • Database: Relational processing for structured/unstructured data
  • .Net Services
    • Service Bus: General purpose application bus
    • Access Control: Rules-driven, claims-based access control

Essentially, Azure provides the complete cloud computing stack that allows developers to write their own applications on top of it. The self-administration interface is simple and intuitive. Depending on the services you are using, it allows you to allocate server or database capacity, hook in the service bus, and configure your application in minutes.

The Windows Azure platform introduces the Web and Worker roles. This mirrors a pattern used in WCF that decouples the network transport from the component logic. The Web role allows applications to accept incoming requests via a variety of protocols supported by IIS. The Worker role cannot accept any direct requests from the Internet but instead receives messages from an internal queue provided by the Windows Azure storage service. Under the covers, Web and Worker roles run in their own instances of the Microsoft VM engine. All the queues and communication protocols can be configured via the control panel.
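The decoupling between the two roles can be sketched with an in-process queue standing in for the Azure queue; the names and wiring below are purely illustrative, not actual Azure SDK code:

```python
import queue
import threading

# Sketch of the Web/Worker role pattern: the Web role accepts requests
# and enqueues them; the Worker role only ever reads from the queue.
work_queue = queue.Queue()
results = []

def web_role(request):
    """Accepts an incoming request and enqueues it for background work."""
    work_queue.put(request)

def worker_role():
    """Drains the queue and processes each message; it never takes
    requests directly from the outside."""
    while True:
        msg = work_queue.get()
        if msg is None:  # sentinel used here to stop the worker
            break
        results.append(f"processed:{msg}")
        work_queue.task_done()

worker = threading.Thread(target=worker_role)
worker.start()
for r in ["order-1", "order-2"]:
    web_role(r)
work_queue.put(None)
worker.join()
```

The point of the sketch is the one-way flow: the front end never waits on the back end, and the back end never listens on the network.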

SQL Azure is no less impressive. It allows you to store data directly in the cloud in three different forms:

  • Blobs
  • Tables
  • Relational
All of these storage types are exposed through RESTful services and are really easy to use. For relational data, complete databases can be hosted in the cloud, and applications can access them directly whether they themselves are hosted in the cloud or in a private datacenter.
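As a rough illustration of the REST-based access, the snippet below builds a blob upload request; the account, container, and blob names are made up, and the authentication headers the storage service requires are omitted for brevity:

```python
import urllib.request

# Hypothetical names for illustration only.
account = "myaccount"      # storage account (made up)
container = "invoices"     # container (made up)
blob_name = "2009-12.pdf"

# Blobs are addressed as plain URLs and manipulated with HTTP verbs.
url = f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
req = urllib.request.Request(url, data=b"...", method="PUT")
req.add_header("x-ms-blob-type", "BlockBlob")
# urllib.request.urlopen(req) would send the upload once the required
# signed authorization header is added.
```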

The .Net Services platform provides a couple of services – access control and message routing. Access control handles identity validation, transformation, and federation, based on rules defined through the control panel. The service bus part of the platform does what you would expect any ESB to do – service endpoint registration and access, message transformation and routing, and improved security.

Even though Azure is still a relatively immature platform, it holds a lot of promise. Microsoft has finally hit the mark. Some risks still need to be addressed, however. The typical cloud computing concerns remain – security, privacy, longevity, etc. Additionally, a platform like Azure may cause some issues for IT departments that need to adhere to regulations like Sarbanes-Oxley, SAS 70, and others. Division of responsibilities, following IT governance processes, quality control, and other sticky situations may keep CIOs and other IT managers up at night. These things will eventually work themselves out as the Azure platform matures and IT processes evolve. Despite the drawbacks, I believe Azure is a viable and solid platform for “cloudizing” your applications.

Friday, July 24, 2009

Cloud Computing and the Reality

Cloud computing is all the hype nowadays. All you hear from vendors, analysts, and consulting companies is that cloud computing will solve all of your IT problems. Here are just a few promises associated with cloud computing:
  • Eliminate your data center
  • Solve all of your scalability and on-demand computing challenges
  • Simplify infrastructure
  • Reduce IT spend
  • Make IT operations more efficient
Are they all true? Possibly. To determine what cloud computing may mean to you, examine how it fits into your IT strategy and the way you deliver technology services to the business. Here are a few things to consider.

First of all, everyone needs to understand what cloud computing really is. According to Wikipedia, “Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet”. Too many people, however, forget that it is only a style and begin to associate cloud computing with specific product offerings such as the Amazon Elastic Compute Cloud (Amazon EC2), Google Apps, Microsoft Azure, and others. Companies are not limited to just third-party solutions. They can implement their own private clouds if they choose.

Secondly, you need to understand the vision behind cloud computing. The idea is simple – to seamlessly provide flexible, on-demand computing resources whenever necessary. This is not a revolutionary development. The Application Service Provider (ASP) model has been in existence for years. Infrastructure outsourcing practices were in use long before cloud computing became a term. So, what is all the hype then, you ask? The keys are the ubiquitous nature of the protocols used, the increased reliability of the Internet, and the packaging of the offering as a generic service. Cloud computing, as a general approach, may support outsourcing of specific applications, generic computing resources or platforms, and software services. It may potentially lead to outsourcing of the whole data center.

Finally, all the pros and cons behind cloud computing need to be considered. Having someone take care of all your computing resources without investing into expensive data centers is an appealing concept but loss of control and unreliable SLAs may be a cause of concern for a number of businesses. Since the Internet is the primary communication mechanism for the public cloud, its reliability and performance need to be questioned whenever considering third party cloud offerings. Private clouds provide better control, reliability, and performance but what is the real difference between those and existing data centers? In my opinion, aside from following a different architectural model of allocating computing resources, nothing. On-demand computing is a great concept but making it work effectively is a tough task. Technologies exist today to dynamically divert unused resources to those applications that need them most. Grid computing, virtualization platforms, and others all provide these capabilities. However, there are limitations. Whenever maximum capacity is reached, hardware needs to be added. No software trick will work to cover this up. Therefore, efficient capacity and pipeline management need to exist to make cloud computing an effective and viable platform.

While there are some cloud computing zealots and some realists, many are still cautious about this technology. And for a good reason. In my opinion, cloud computing has proven its worth in a number of situations but it is still not ready for the enterprise. Public clouds are too fickle for really demanding applications. Private clouds have not yet been effectively built. More importantly, however, the lack of cloud computing standards and consensus among the key players will present challenges for anyone trying to enter this arena.

Wednesday, May 20, 2009

SOA Misconceptions

I continue to see a number of common misconceptions about SOA. A number of articles were written about this when SOA was first introduced. They primarily concentrated on dispelling the myth that SOA = Web Services. Current misconceptions run deeper and are a lot more general.

1. SOA is Expensive
SOA doesn't have to be expensive. There is an abundance of open source tools and technologies that can be used to build a truly state-of-the-art SOA platform.

2. SOA is Inefficient
SOA is as inefficient as you make it. Not all problems and organizations require a complete SOA stack but, at a certain scale, it becomes necessary. Otherwise, you will have too much complexity and, in fact, breed inefficiency.

3. SOA is Non-productive
A well established SOA program can run like a well-oiled machine. I am sure a lot of people can cite numerous examples of lost productivity because of unnecessarily complex SOA environments and processes. However, in large and complex corporate IT departments well defined organizational structures and processes are a must. They actually make things more efficient and increase productivity. Every situation requires a different approach. However, SOA, as a general pattern for building software, has been shown to dramatically improve productivity. Think about it -- if several projects can reuse an already existing, well tested service, this results in tremendous productivity improvements and costs savings!

4. SOA is Unnecessary
This is true, with a caveat. SOA has been employed as an IT and business strategy to make organizations more efficient, productive, and agile. If you want your company to achieve these goals, SOA becomes a part of the answer. Otherwise, you are stuck in the world of point-to-point integrations, Just a Bunch of Services, and an unmitigated integration mess in general. Obviously, smaller companies can get away without employing SOA for a much longer period of time but larger ones will feel the pain much sooner.

5. SOA is Too Complicated
Yes, SOA is complicated. But what major effort isn't? Enterprise Architecture is complex, so we shouldn't do it?! EAI was complex but companies did it out of necessity. Master Data Management is complex, so companies should forget about managing their data?! SOA can be as simple or complex as you need to make it. Create a roadmap for your SOA program and follow it. It should guide you in your quest to achieve SOA maturity, however simple or complex you need to make it.

SOA is a program. You can make it into whatever you need. You can use whatever technologies and approaches you like. As long as you keep the goal of increasing agility and saving money as a result of creating reusable business capabilities in mind, you should be successful.

SOA and the Trough of Disillusionment

All the signs point to the fact that SOA has entered the trough of disillusionment. Why? I am seeing a lot of companies de-funding or stopping SOA efforts altogether. Many executives don't see the hype anymore and thus no longer understand the value of SOA. Many companies have tried SOA and failed, so they don't want to try it again. Plus, the economy is not helping. Budgets are getting slashed, jobs are lost, people are afraid to innovate. The first type of spending that companies cut is strategic. They want to get through the tough economic times with minimal required spending -- just to keep the lights on. Thus, SOA program funding gets cut and the executive support goes downhill with it.

What can we do to make sure that SOA moves onto the slope of enlightenment and further to the plateau of productivity? Continue to educate your organizations and executives about the value of SOA! Even though SOA is not in vogue anymore, it doesn’t mean that it lost its value. The benefits are still there. You just have to work harder to make them visible. Document your successes and shift into a marketing mode. Promote the SOA program and the benefits you achieved as much as possible. Show real value. This will certainly get executives’ attention and guarantee their support. Don’t give up. Continue to fight negative perceptions and concentrate on delivering value.

SOA, like any other enterprise wide initiative, is a differentiator. Companies that can successfully implement SOA will become a lot more successful than those that can’t. When the economic crisis ends, organizations that continued to invest into SOA will come out on top. Those that didn’t will be left behind and will need to spend a lot more money and efforts to catch up. SOA can produce tangible business benefits. Don’t let the disillusioned minority silence its value proposition.

Monday, March 23, 2009

SOA Funding Models

One of the primary reasons SOA efforts fail in many companies is simply due to inadequate or inappropriate funding models. Costs are typically at the core of every problem and SOA programs are not exempt. We hear horror stories all the time – the initial investment to establish an SOA environment was too high, so the effort was cancelled; there are many services created in the company but they are hardly reused; etc. Establishing a funding model that is right for your company is the key to moving the SOA program forward.

Any SOA initiative is comprised of two parts – infrastructure and services. Both need to have a separate funding model established in order to successfully support SOA program’s goals.

SOA Infrastructure Funding

Infrastructure funding requires a pretty straightforward approach. When discussing SOA infrastructure, I am referring to shared platforms that are used by a number of services across the organization. Some companies host services on the same platforms whose functionality is being exposed. However, even if this is the case, some shared infrastructure components like ESBs, service management technology, Registry/Repository, etc. must exist to support the SOA program’s needs. Thus, it is safe to assume that some form of shared SOA infrastructure exists. There are two possible ways to provide effective funding to build it out.
  1. Fund all the SOA infrastructure centrally
  2. Identify appropriate projects to acquire new or extend existing SOA infrastructure
Central funding is probably the easiest and most effective approach. It allows the organization to establish an independent roadmap for introducing and upgrading SOA infrastructure. It also makes the SOA program operate more efficiently, as cost, scaling, and availability issues will no longer be relevant to individual projects. If the central funding option is selected, several approaches for recouping the initial and ongoing investment can be utilized.
  • Do not recoup the investment
  • Place an entry fee to use any SOA infrastructure component
  • Charge a small fee for each usage instance

Since all the SOA infrastructure is provided centrally, not recouping the initial investment is a real option. If the organization’s fiscal model does not call for IT recouping all its costs from the business groups using their products, this option works well. If this is not the case, however, you have a choice between placing a predefined entry fee that each application / project must pay to use the specific SOA infrastructure platform and charging end users based on the total usage.

The per-use-fee scenario is a little tricky, as each SOA infrastructure component needs to define what a transactional unit is and how much to charge for it. Transactional units can be different for each SOA platform. For example, an ESB transactional unit can be a service call; for a Registry/Repository, an individual user and/or a UDDI request; etc. In this case, the total usage based on predefined transactional units would be calculated, multiplied by the unit cost, and charged to the business units. The most effective way to determine a unit cost is to divide the total investment made in the platform by the total transactional units being consumed. The obvious effect is that unit costs decrease with increased usage. Here are all the formulae discussed above.

Usage charges per platform:
Unit = Different per Platform
Unit Cost = Total Platform Investment / Total Amount of Units Consumed
Line of Business Usage = Units Used by Line of Business * Unit Cost
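The same formulas translate directly into code; the dollar and usage figures below are invented for illustration:

```python
def unit_cost(total_platform_investment, total_units_consumed):
    """Unit Cost = Total Platform Investment / Total Amount of Units Consumed."""
    return total_platform_investment / total_units_consumed

def lob_charge(units_used, cost_per_unit):
    """Line of Business Usage = Units Used by Line of Business * Unit Cost."""
    return units_used * cost_per_unit

# Example (made-up numbers): a $500,000 ESB platform serving
# 1,000,000 service calls yields a $0.50 per-call unit cost.
cost = unit_cost(500_000, 1_000_000)
# A line of business that made 40,000 calls is charged $20,000.
charge = lob_charge(40_000, cost)
```

Note how the unit cost falls automatically as total consumption grows, which is the effect described above.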

Some companies have chosen to grow their SOA infrastructure gradually, without a central program or funding. A typical approach in this scenario has been to attach SOA spending to the most appropriate projects. Thus, the projects would purchase new SOA infrastructure platforms or upgrade existing ones to suit their needs. There are several problems with this approach.

  1. Typically, the projects purchasing the infrastructure don’t want to share it with other potential consumers unless there is significant pressure from above. The platforms don’t end up being reused or, if so, only minimally. The projects do not have any incentive to share their investments with anyone else, especially since those investments are seen as critical to the projects’ success.
  2. Projects often get cancelled due to over-inflated budgets. SOA infrastructure is expensive, and cost-conscious enterprises do not want to invest in what looks like excessive infrastructure for a project’s needs.
  3. Demand to extend a platform based on a project’s needs typically comes without enough lead time to accommodate the project’s timeline. Thus, projects face a tough decision – to extend their delivery date or use alternative infrastructure.

Funding the SOA infrastructure centrally is more effective in delivering service-oriented solutions faster, moving the enterprise more efficiently towards a higher level of SOA maturity, and addressing the project needs. Project-based funding will most likely spell doom to the SOA program as a whole.

Service Funding

As discussed earlier, funding for the SOA infrastructure should come from a central source. Where the money comes to build individual services, however, presents a bigger challenge. Since projects are the primary drivers behind demand for services, special consideration should be given to project needs and budgets. However, service design and implementation can incorporate additional requirements that fall outside of the project scope. Another typical project-related problem stems from the shared nature of services. It is unfair to burden a project with the full cost of a service that will be utilized by a number of other consumers.

There are three possible ways to address the service funding concerns.

  1. Make the first project to build a service provide the complete funding
  2. Establish a central funding source that will cover all service design and construction expenses
  3. Provide supplementary funding to projects building services

If option 1 is selected, several strategies for recouping the initial investment can be used.

  • Do not recoup the investment
  • Place a surcharge on each instance of service leverage
  • Charge a small fee for each service call

As mentioned above, it is unfair for the project to carry the complete costs of the service build-out, especially if it includes additional requirements. Thus, unless the project implements one of the options to recoup its initial investment, funding option #1 is not going to be viable. Not recovering the funds is not a realistic option either, as it does not give projects an incentive to build truly reusable services. The other cost recovery strategies may work but require detailed metrics to be captured on service leverage and/or transactional volume.

Establishing a central funding source for all projects to use when building reusable services is probably the ideal approach. Few companies, however, would be willing to write what in essence would be a blank check for the projects to use in their service delivery efforts. The opportunity for abuse and misappropriations would be too tempting. Unless strong governance and control mechanisms are in place, this funding method will most likely end up costing the company more money and provide unrealistically small return on investment.

Providing supplementary funding to projects building services is probably the most realistic approach. A central fund needs to be established to cover the efforts falling outside of the project scope. Since shared services would typically incorporate other projects’ and enterprise requirements, the actual cost ends up being higher than what projects budgeted for their needs. Thus, the easiest way to distribute supplementary funding is to allow the projects to pay for functionality already included in their budgets and cover all the additional costs through the central fund.
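A minimal sketch of that split, with invented figures:

```python
def service_funding_split(project_scope_cost, total_service_cost):
    """Split a service's cost between the sponsoring project and the
    central fund: the project pays only for the functionality already
    in its budget; the central fund covers the delta added by shared
    and enterprise requirements."""
    supplement = max(total_service_cost - project_scope_cost, 0)
    return {"project": min(project_scope_cost, total_service_cost),
            "central_fund": supplement}

# Made-up example: the project budgeted $80k for its own needs, but the
# reusable service costs $120k; the central fund covers the $40k delta.
split = service_funding_split(80_000, 120_000)
```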

Whatever funding approach is used, it needs to be carefully administered. A party not involved in day-to-day project work is best suited to play the administrative role. There could be a couple of different groups managing the infrastructure and service funding and chargeback mechanisms. Overall, however, this should fall under the SOA Governance umbrella and be managed centrally as part of the SOA Program.

Thursday, January 29, 2009

Service Orchestration Guidelines

Many SOA articles, white papers, and vendor documents talk about “service orchestration”. But an understanding of the underlying concepts and orchestration best practices remains elusive.

A quick Google search will produce a number of articles and links that discuss service orchestration and related topics. Most of them will talk about BPM engines, ESBs, and BPEL. This, unfortunately, pollutes the true definition of service orchestration and gives it a much more technology centric view.

In my opinion, Service Orchestration is an automated way to combine several services together to achieve new functionality. The end result is a new composite service that provides a distinct business capability and can be invoked independently. It must have all the appropriate attributes as discussed in my previous article.

Orchestration is a technology independent concept. It can be achieved via a descriptive language such as BPEL, built-in tools within a specific platform (ESBs typically provide their own orchestration mechanisms), or programmatically. Depending on your needs, situation, or technology available, the best way to perform service orchestration may be different. Here are a few guidelines to help you create service orchestrations faster and make them more flexible, maintainable, and scalable.
  • Use the platform with built-in orchestration capabilities as your first choice
  • Avoid implementing service orchestrations programmatically whenever possible
  • Choose a platform or mechanism that can easily perform flow logic, result aggregation, message transformation, and business rule application
  • Ensure the composite service fits the definition of a service, i.e. has all the attributes of a service
The rationale behind the above guidelines is very simple. You want to choose a platform that already provides most of the capabilities you will need when creating new service orchestrations. You will typically need to call several services, aggregate their results in some way or chain the calls together through some kind of logic, transform the end result to match the exposed contract(s), and return it. The less work you have to do and the more you can rely on the platform’s capabilities, the more efficient your orchestration will be. If you can complete your orchestration work through a visual interface and never see the code, you are on the right path. This way, you will spend less time maintaining the orchestration, it will be easier to make changes, and you don’t have to build all the necessary mechanisms from scratch.

Many would argue that a programming language will give you the most flexibility when implementing an orchestration. While this is true, the overhead is pretty large and the efficiency is low. First of all, no programming language seamlessly integrates all the mechanisms you need to create an orchestration, especially in a visual way. Secondly, every time an orchestration needs to change in some way, no matter how small, new code needs to be written, deployed, and tested. While the same steps need to be performed on any orchestration platform, the level of effort will be a lot smaller on full-featured orchestration platforms.

When creating service orchestrations, it is important to maintain proper relationships between composite and atomic services. The diagram below shows which services should be allowed to interact with each other.

The following list details the rules and guidelines for establishing relationships between composite and atomic services.

  1. Atomic business services should not call each other. Use orchestration to combine several business services together.
    The goal of service orchestration is to combine several services together through a series of logical steps and expose new capability to the consumers. Orchestration platforms, as discussed above, provide a lot of functionality to make this work easy and efficient. If individual services are allowed to call each other, they would not be taking advantage of the orchestration platform’s capabilities. Furthermore, when business services call each other, it establishes a tight coupling between them, which makes them less reusable and harder to maintain. Atomic business services should provide specific, well defined business capabilities and be reusable in their own right. Reliance on other services to complete work indicates plurality of purpose and lack of specificity.
  2. Business services can call Utility services.
    While coupling services together should be avoided as much as possible, sometimes generic, low-level functionality that needs to be invoked from a business service is exposed via utility services. It would be overkill, and sometimes even impossible, to use an orchestration platform to allow business services to take advantage of such functionality as logging, retrieving or storing configuration information, and authorization.
  3. Utility services cannot call Business services.
    Utility services should not be tied to any business processes or capabilities. Thus, a utility service calling a business service would violate this rule.
  4. Business services cannot call Composite services.
    The logic behind this guideline is the same as in disallowing business services from calling each other. A composite service is also a business service. Thus, a business service calling a composite service should not be allowed.
  5. Composite services can call other Composite services.
    Other composite services are allowed to participate in orchestrations. They should be treated as regular atomic services in this case.
Note that, even though atomic business services and composite services are, in essence, both business services, they are different, and the guidelines provided above are not contradictory in their treatment. There are two levels at which they should be compared -- logical and physical. From a logical perspective, atomic business services and composite services are the same. They expose some kind of unique business capability and adhere to the service definition guidelines. From a physical perspective, however, they are different. Atomic business services, as opposed to composite services, rely solely on internal business logic and direct interaction with backend data sources to perform their work. Thus, by definition, atomic business services should not call other business services as part of their implementation. By the same token, since composite services already rely on other business services to complete their work, they should not differentiate between calling atomic business services or other composite services.
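The five rules above can be captured as a simple allow-list check. Note that utility-to-utility and composite-to-utility calls are not addressed by the rules, so their treatment below (disallowed and allowed, respectively) is my own assumption:

```python
# Service kinds: "business" (atomic), "utility", "composite".
ALLOWED_CALLS = {
    # Rules 1, 2, 4: atomic business services may call only utilities.
    "business": {"utility"},
    # Rule 3: utilities stay business-process agnostic. (Utility-to-
    # utility calls are not covered by the rules; disallowed here.)
    "utility": set(),
    # Rule 5 plus orchestration itself: composites combine atomic
    # business services and other composites. (Composite-to-utility is
    # an assumption, by analogy with rule 2.)
    "composite": {"business", "utility", "composite"},
}

def may_call(caller_kind, callee_kind):
    """Return True if the guidelines permit caller_kind -> callee_kind."""
    return callee_kind in ALLOWED_CALLS[caller_kind]
```

A check like this could back a design-time governance gate that flags disallowed service dependencies before they are built.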

Service orchestration is a complex topic and might take a series of articles to discuss completely. However, the rules outlined above should establish a good foundation for creating and managing composite services.

Friday, January 23, 2009

Services Explained

Since the SOA term was coined, many discussions have raged about what a service is from business and technical perspectives. In this article, I offer my views on the topic.

There are several categories of services. Many leading SOA vendors and thinkers typically break them down into Business and Utility types. A Business service represents a business capability that is needed to complete a step within a larger business process. Examples may include retrieving customer information, making payments, or checking order status. Utility services represent a technical capability that is business process agnostic. Examples are e-mail, logging, or authentication services.

Services can be combined together to create composite services. This is called orchestration. An example of this can be a Money Transfer service that needs to debit one account and deposit money into another one. Composite services can also be categorized as Business and Utility. Best practices and general guidelines for orchestrations, atomic services, and their relationships will be discussed separately.
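The Money Transfer example can be sketched as a composite that orchestrates two atomic services; the in-memory account store and function names are made up for illustration:

```python
# Toy account store standing in for a real backend data source.
accounts = {"A": 100, "B": 50}

def debit(account, amount):
    """Atomic business service: withdraw funds from one account."""
    if accounts[account] < amount:
        raise ValueError("insufficient funds")
    accounts[account] -= amount

def deposit(account, amount):
    """Atomic business service: add funds to one account."""
    accounts[account] += amount

def transfer(src, dst, amount):
    """Composite service: orchestrates debit and deposit. A real
    orchestration would also need compensation logic if the deposit
    fails after the debit succeeds."""
    debit(src, amount)
    deposit(dst, amount)

transfer("A", "B", 30)
```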

Regardless of the type, a service is comprised of three components.
  • Interface
    • This defines how a service is exposed and can be accessed by its consumers.
    • Interfaces are not limited to Web Services and can be represented by any remote messaging protocol.
  • Contract
    • This defines what services expect during the interaction with the consumer. Message structures, relevant policies, and related security mechanisms are all part of a contract.
    • The contract defines a “legal” agreement on how the service and its consumers should interact.
  • Implementation
    • This is the actual service code.
A service may have multiple interfaces. Different consumers may need to access the service via different protocols. For example, a Java consumer may want to access a service via a Web Service protocol while a mainframe application can only use MQ Series. Even though a service may itself expose multiple interfaces, it is more effective to let the ESB platform handle this. For more details about the rationale behind this recommendation, see the “To ESB or Not to ESB?” post.
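The multiple-interfaces idea can be illustrated as one implementation fronted by two protocol adapters; the adapter classes below are stand-ins, not real Web Service or MQ Series bindings:

```python
def get_order_status(order_id):
    """The single service implementation: one body of logic."""
    return {"order_id": order_id, "status": "shipped"}

class SoapAdapter:
    """Stand-in for a Web Service endpoint exposing the service."""
    def handle(self, order_id):
        return get_order_status(order_id)

class QueueAdapter:
    """Stand-in for an MQ-style listener exposing the same service."""
    def handle(self, message):
        return get_order_status(message["order_id"])

# Two consumers, two protocols, one implementation.
soap_result = SoapAdapter().handle("42")
mq_result = QueueAdapter().handle({"order_id": "42"})
```

In practice, as recommended above, the ESB would own these adapters so the service itself can keep a single interface and contract.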

A service may also have multiple contracts. I have recommended in the past that for a service to be maximally reusable, it needs to implement the Service Façade pattern (see “SOA Façade Pattern” post and “Service Façade” pattern). This pattern recommends that multiple different interfaces and contracts for the same service be created. The Concurrent Contracts pattern also addresses this issue.

It is important to understand that while the interface, contract, and implementation describe a service as a whole, they are not closely tied together. An interface is not dependent on the contract details and should not be tied to the specific messaging structure. The opposite is also true – the contract should not be tied to any specific communication protocols. Additionally, the implementation should be flexible enough to accommodate the potential for multiple interfaces and contracts. Ideally, however, I would recommend that a service expose only a single contract and interface and the ESB would take care of exposing additional endpoints and facades as necessary.