Discussions and thoughts related to SOA, Enterprise Architecture, design patterns, service/application testing and management, software development methodologies, new trends in architecture, state of IT, and technology in general.
Wednesday, January 15, 2014
Implementing Effective Enterprise Architecture
http://www.slideshare.net/LeoShuster/implementing-effective-enterprise-architecture
Wednesday, September 18, 2013
Introduction to Enterprise Architecture
Here's the link: http://www.slideshare.net/LeoShuster/introduction-to-enterprise-architecture-26319680. Enjoy!
Thursday, August 22, 2013
A Roadmap to Business Agility
I’ve argued that enabling business users to make changes to IT systems themselves and providing a streamlined deployment process go a long way towards achieving business agility. However, the more astute readers will note that there has to be a more holistic approach to achieving business agility. And they would be right! In fact, there are many approaches and models to maximize an organization’s business agility. A Google search on “business agility” brings up hundreds of relevant articles. All of them approach this topic from different perspectives, and all of them have merit. In this discussion, I will present an alternative approach to maximizing business agility – a simple yet powerful roadmap for moving the organization forward.
My model consists of two variables – Business Enablement and IT Complexity. The hypothesis it proposes is simple – by increasing Business Enablement and reducing IT Complexity, you will maximize your organization’s business agility. The visual representation of this model is shown below.
As you can see, on the way towards maximum agility, each organization will pass through a variety of states.
- Highly IT Dependent
  - Business depends on IT to perform even the simplest tasks
- Process Dependent
  - All processes are defined and understood
  - Business and IT both follow consistent and repeatable processes
- Enabled & Largely Independent
  - Business is self-sufficient in most operations
  - Needs to rely on IT in some situations
- Self-Sufficient & Automated
  - Business can perform all the critical functions with little or no IT involvement
  - IT plays a supporting role
These maturity phases are all very high level, but you can see how and where progress should be made to achieve the next level of maturity. The maturation process makes the business increasingly self-sufficient while at the same time mandating simplification of IT. Without the Business Enablement and IT Complexity variables moving in the right direction, the subsequent phases on the maturity curve cannot be reached. The proof of this hypothesis is provided below.
The IT Complexity component of the equation has to be explained a bit further, as it carries a very specific meaning in this model. There are many elements contributing to IT Complexity. They may include number of systems, number of interfaces, diversity of platforms, supported technologies, etc. My simple definition of IT Complexity is the amount of effort required to introduce a unit of change to IT systems. A unit of change is a measure of a basic change that can be made to a system’s functionality. Each organization may have a different notion of a unit of change, which is why using this measure helps generalize this approach across a variety of IT organizations.
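The "unit of change" definition lends itself to a simple metric. Here is a minimal sketch: IT Complexity measured as the average effort needed to introduce one unit of change across systems. The system names and figures are invented for illustration.

```python
# IT Complexity as average effort per unit of change; systems and
# figures below are illustrative assumptions, not real measurements.
from statistics import mean

# (system, effort in hours, units of change delivered)
changes = [
    ("billing", 120, 3),
    ("crm", 40, 2),
    ("portal", 10, 1),
]

def it_complexity(records):
    """Average effort (hours) per unit of change -- lower is better."""
    return mean(effort / units for _, effort, units in records)

print(round(it_complexity(changes), 1))
```

Tracking this number over time is one way to tell whether simplification efforts are actually moving the IT Complexity variable in the right direction.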
Some IT environments can be so complex and tightly interconnected that even the simplest change can take a lot of effort. On the other hand, there may be organizations with such well defined systems and clear boundaries around them that changes can be made quickly with little or no impact on the outside world. Most organizations, however, will find themselves somewhere in the middle of the two extremes.
The reason IT Complexity plays a prominent role in the business agility model presented here is simple. The faster and easier that changes can be made to IT systems, the more agile the entire organization can become. In fact, by simplifying the IT environment and introducing next generation technologies such as Business Rules, BPM, Cloud Computing, etc. you will decrease the time required to make changes to IT systems. I’ve argued this exact point in the Death of Custom Software Development post.
The bottom line is that by striving to increase Business Enablement and reduce IT Complexity you will ultimately maximize your organization’s business agility. Why? Simply because by increasing Business Enablement, you enable business to react faster to any situation, and by decreasing IT Complexity, you make IT more efficient. Putting these two variables together leads to maximum business agility.
The path towards business agility is neither straight nor simple. The distance between each maturity level on the model is measured in years, not months. The journey requires a significant investment of time and resources, strong backing from IT and business leaders, deep commitment from everyone involved, and a lot of hard work. But the results are worth the effort. If you had to choose between market leadership, mere survival, and extinction, where would you want your company to be? My guess is that market leader is where everyone wants to be. To get there, you will need to follow the simple roadmap outlined above.
Friday, August 2, 2013
Business Enablement through Release Management
- Establish a Business Development Environment, a location where business users can make changes to elements of IT systems under their control and test the impacts of these changes
- Once the changes are ready, provide an automated way to execute as many regression tests as necessary to validate that the changes do not impact any existing systems
- Enable an automated way to execute performance / load tests to ensure that changes do not adversely impact established NFRs
- Ensure stakeholders validate that the changes do not break current systems’ functionality
- Upon successful completion of all the tests, provide an automated way to deploy changes to production
- If a change is confined to business owned elements only and no changes by IT are needed, the change is eligible for the streamlined release
- If any IT changes are required, the regular release management process should be followed
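The flow above can be sketched in code. This is a minimal illustration, assuming each automated stage reports a simple pass/fail; the stage names and the `business_owned_only` flag are invented for the example, not taken from any specific release tool.

```python
# Minimal sketch of the streamlined release flow; stage names and the
# business_owned_only flag are illustrative assumptions.

def run_stage(name):
    print(f"running {name}...")
    return True  # stand-in: each automated stage reports pass/fail

def release(change):
    # Changes touching IT-owned elements follow the regular process.
    if not change["business_owned_only"]:
        return "regular release management process"
    for stage in ("automated regression tests",
                  "automated performance tests",
                  "stakeholder validation",
                  "automated deployment to production"):
        if not run_stage(stage):
            return f"halted at {stage}"
    return "deployed"

print(release({"business_owned_only": True}))
```

The key design point is the single gate at the top: only changes confined to business-owned elements ever enter the streamlined path, so IT keeps control of everything else.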
Thursday, July 18, 2013
Achieving True Business and IT Partnership
“Business’ expectations are unrealistic”
“All business cares about is dates”
“Business doesn't understand how IT operates”
Or…
“IT never delivers anything on time”
“IT can’t deliver anything for less than $100K”
“IT processes are too complex and get in the way”
We know that the best way to deal with this conflict is to establish a true partnership between business and IT organizations. Yet, few companies are successful in achieving such a symbiotic relationship. Why? There are many reasons for this. However, none is more relevant than IT’s desire to control the entire software delivery lifecycle. No, your eyes do not deceive you. I indeed contend that many problems will be solved if IT gives up some control over how software is delivered into production.
I can hear the accusations of heresy right now. I can feel the wrath of IT purists enraged with such preposterous ideas. I can hear IT managers grinding their teeth in response to business infringing on their domain. Hold your angry rhetoric for just a bit longer. I will explain my reasoning shortly.
In this day and age, software packages have become so sophisticated that more and more features are aimed at non-IT people. Many systems include rules engines, BPM capabilities, scripting languages, etc. IT no longer has to reinvent the wheel to provide business the features it needs. As I argued in my earlier post, Death of Custom Software Development, IT organizations have to become more and more integration rather than custom development focused.
This argument can be extended further. IT no longer has to control the entire software delivery lifecycle. Enable the business to make changes to rules, processes, static values, web pages, printed communications, etc., and you will enter a brave new world of true partnership. It’s that easy! Let the business control its own domain with minimal support from IT and let IT control the technology and infrastructure. The two will intersect when they truly need each other. IT will start playing a true supporting role by enabling the business to be productive and ensuring that changes business is about to introduce will not have a negative impact on everything else. Business will start looking at IT as an enabling partner, not just an expensive scapegoat. Everyone will hold hands and sing Kumbaya.
Of course, a lot needs to be done to achieve this Utopian society. Roadmaps need to be created, reviewed, and approved. Next generation packages need to be procured and implemented. Business users that will be empowered to make changes will need to be identified and trained. Significantly simplified and streamlined release management processes need to be created. Culture needs to change.
It is a long and arduous journey, but it is well worth the effort. Think about how much work IT does today that should really be done by the business. Let the business finally be the master of its own domain. And for the first time in many years, business will have no one but itself to blame for defects and missed deadlines!
Monday, April 29, 2013
Death of Custom Software Development
Finally, the company’s entire culture must change. Everyone, from business and IT leadership to sales people and developers, must embrace the integration first mentality. Without it, there will be constant infighting between the development and integration clans. Philosophical battles will ensue. But if clear direction is given to the organization and business and IT stand shoulder to shoulder, there will be very little choice but to accept the change.
One CIO I worked for issued a very clear and concise statement of direction to his IT organization related to software development decision criteria: "Reuse before buy before build." Everyone understood and embraced it. The IT organization understood what decisions took priority over others. Business stakeholders started asking questions like "Are there software assets we can reuse for this effort?" and "What vendor solutions exist for this problem?" The entire organization adopted the integration first mentality as a result of the CIO's guidance.
While software development will never completely disappear, most IT organizations supporting mature businesses need to shift towards the integration first culture. Let's face it, IT groups are not in the business of software development -- they are in the business of supporting their business. IT needs to concentrate on delivering business value, not building custom software!
Friday, August 31, 2012
The Architect Manifesto
There are several key principles that help us become successful as IT architects. No matter what problems we face, no matter what types of solution we need to apply, no matter what technologies we use, these general guidelines should always be followed. This forms the basis for the Architect Manifesto.
My research indicates that there have been multiple attempts at defining an Architect Manifesto. None of them seem to have succeeded or gained traction. Yet, I am certain that we, as architects, need a guiding set of principles for everything we do. We already unconsciously understand and follow them, but a formal definition will strengthen and inspire us. This is my attempt at verbalizing what we already know and apply in everyday work – the Architect Manifesto.
We, IT architects, guide the design and evolution of computer systems. We strive to achieve maximum value for our business partners through this work. We constantly discover better ways to architect systems. As a result, we have come to value:
Simplicity over complexity
Pragmatism over perfectionism
Thinking out of the box over applying the same approaches to every problem
Developing technology agnostic solutions over making technology driven decisions
Delivering solutions to the business problems over concentrating purely on technology deliverables
Looking at the big picture over getting too deep into details
Long term, strategic thinking over short term, tactical thinking
That is, while there is a place for the tactics on the right, we should strive towards approaches on the left.
I would appreciate feedback from all of the architects out there to make this manifesto more complete and relevant.
Wednesday, January 26, 2011
Business Process Excellence for Financial Services 2011 Conference
Wednesday, January 12, 2011
SOA & Cloud Symposium Podcast
Tuesday, July 27, 2010
Architectural Thinking
I always believed that while good architecture skills can be taught, it is the way we think that differentiates us and makes us good architects. Our natural ability to understand the problem and determine the right solution is the magic formula. This – and this alone – distinguishes good architects in our midst. You don't have to have "architect" in your job title or even be in IT to be a good architect. You can understand the technology in minute detail, know all the design patterns, possess tremendous depth of experience, have in-depth knowledge of various design techniques, etc. All this can be taught or acquired. At the end of the day, however, it is your approach to problem solving and the way you think that will make you stand out among your peer architects.
Why do I put such a heavy emphasis on the thinking style? To help you understand, we need to take a step back and define "architecture". According to Wikipedia, "the software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships between them." Many books, articles, and white papers provide a definition of architecture as well. However, in my opinion, these definitions are too complex and do not truly reflect the nature of architecture. My personal definition is very simple: architecture is a high level view of a technical solution to a business problem.
What does the definition of architecture have to do with our approach and thinking style? Everything! Taking all the specifics of how we actually deliver architectural solutions out of the conversation, to do our job well we simply need to determine the right technical solution to the right business problem. We need to consider all the possibilities when addressing a business problem – what is there today, what is missing, what problem are we really trying to solve, is the current process the most efficient, what would deliver the most value, are we solving the right problem? We should be able to look at the problem from a very high level, abstract ourselves from the underlying technology or processes, and envision the most effective and efficient solution. As the old adage goes, we should see the forest, not just the trees. Once the "ideal" solution is found, we can start concentrating on its details and understanding the technology implications. Many times, we will discover that our vision cannot be readily implemented due to technology limitations, insufficient process maturity, and a host of other factors. Do not despair, however. Your vision becomes the solution goal state, and you will need to create a roadmap to get there.
Undoubtedly, some of you will disagree with the approach presented above. I can hear the arguments now – "You must consider the current situation to determine the right solution to the problem", "The project has a limited scope, and thinking broader is impractical", "The technology landscape should be considered upfront to design the most effective solution", etc. However, if we are to find the right technical solution, we must shed the old baggage. It is very hard to find new and innovative solutions if you cannot think outside the current box. To a large extent, the architectural thinking principles are grounded in the definition of what makes a good architect.
Architecture is an art, not a science. Therefore, a good architect is more of an artist – creative, imaginative, someone who can paint with a big brush. In my opinion, the characteristics and approaches listed below are the true differentiating factors among architects.
- Abstract thinking – this is the #1 quality of a good architect. You must be able to see the big picture and understand it abstractly, absent of many details that can cloud your judgment.
- Out-of-the-box thinking – many situations require us to be creative and innovative in our approaches. Good architects should be able to adapt to new situations easily and come up with the right solutions regardless of the situation.
- Clarity of vision – you, as an architect, should be able to clearly envision the solution and all of its implications including business process, technology, low level design, development, and potential phased delivery.
- Strength of convictions – architects should always try to do things right the first time, oppose inappropriate or wrong decisions, and stand up for what they believe are the right architectural solutions.
- Critical thinking – architects should always cast a critical eye towards their domain. You should challenge everything. Nothing should remain status quo or off limits. There are always opportunities for improvement. Don’t miss them because you feel comfortable with the current situation or are used to doing things a certain way.
- Problem solving skills – architects are problem solvers. Good architects strive to solve problems in the minimalist way, i.e. reaching the right solution in the most efficient manner. Even better architects ensure that they are solving the right business problem.
- Soft skills – this one is obvious. Good architects should have excellent soft skills to work well with the diverse audiences they are exposed to every day.
As in the everlasting nature vs. nurture debate, I believe good architects cannot be made – a large portion of what makes architects stand out is ingrained in how we think, act, and approach problems. To be truly effective, we should practice all the elements of architectural thinking and exhibit all the traits of a good architect.
Wednesday, December 30, 2009
Microsoft Azure
In a nutshell, Azure encompasses three products.
- Windows Azure
  - Compute: Virtualized compute based on Windows Server
  - Storage: Durable, scalable, & available storage
  - Management: Automated management of the service
- SQL Azure
  - Database: Relational processing for structured/unstructured data
- .Net Services
  - Service Bus: General purpose application bus
  - Access Control: Rules-driven, claims-based access control
The Windows Azure platform introduces the Web and Worker roles. This is an implementation of a pattern similar to the one used in WCF that decouples the network transport from the component logic. The Web role allows applications to accept incoming requests via a variety of protocols supported by IIS. The Worker role cannot accept any direct requests from the Internet but instead can receive messages from an internal Azure queue hosted by SQL Azure. Under the covers, Web and Worker roles run in their own instances of the Microsoft VM engine. All the queues and communication protocols can be configured via the control panel.
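The Web/Worker split boils down to a producer/consumer pattern. Here is a generic sketch of the idea, not Azure SDK code: a stdlib queue stands in for the hosted Azure queue, and the role names are illustrative.

```python
# Generic sketch of the Web/Worker role split: the web side only accepts
# and enqueues requests; the worker side drains the queue. A stdlib
# Queue stands in for the hosted Azure queue service.
from queue import Queue

work_queue = Queue()

def web_role(request):
    """Accepts an incoming request and hands it off without doing the work."""
    work_queue.put(request)
    return "accepted"

def worker_role():
    """Never accepts requests directly; pulls messages off the queue."""
    results = []
    while not work_queue.empty():
        results.append(f"processed {work_queue.get()}")
    return results

web_role("resize-image-42")
print(worker_role())
```

The payoff of the decoupling is that the two sides can be scaled independently: slow background work piles up in the queue instead of blocking the request-accepting front end.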
SQL Azure is no less impressive. It allows you to store data directly in the cloud in three different forms:
- Blobs
- Tables
- Relational
The .Net Services platform provides a couple of services – access control and message routing. Access control serves the identity validation, transformation, and federation purposes. This is all based on the rules defined through the control panel. The service bus part of the platform does what you would expect any ESB to do – service endpoint registration and access, message transformation and routing, and improved security.
Even though Azure is still a relatively immature platform, it holds a lot of promise. Microsoft has finally hit the mark. Some risks still need to be addressed, however. The typical cloud computing concerns remain – security, privacy, longevity, etc. Additionally, a platform like Azure may cause some issues for IT departments that need to adhere to regulations like Sarbanes-Oxley, SAS 70, and others. Division of responsibilities, following IT governance processes, quality control, and other sticky situations may keep CIOs and other IT managers up at night. These things will eventually work themselves out through maturing the Azure platform or enhancing the IT processes. Despite the drawbacks, I believe Azure is a viable and solid platform for “cloudizing” your applications.
Friday, July 24, 2009
Cloud Computing and the Reality
- Eliminate your data center
- Solve all of your scalability and on-demand computing challenges
- Simplify infrastructure
- Reduce IT spend
- Make IT operations more efficient
First of all, everyone needs to understand what cloud computing really is. According to Wikipedia, “Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet”. (See http://en.wikipedia.org/wiki/Cloud_computing.) Too many people, however, forget that it is only a style and begin to associate cloud computing with specific product offerings such as the Amazon Elastic Compute Cloud (Amazon EC2), Google Apps, Microsoft Azure, and others. Companies are not limited to just third party solutions. They can implement their own private clouds if they choose.
Secondly, you need to understand the vision behind cloud computing. The idea is simple – to seamlessly provide flexible, on-demand computing resources whenever necessary. This is not a revolutionary development. The Application Service Provider (ASP) model has been in existence for years. Infrastructure outsourcing practices were in use long before cloud computing became a term. So, what is all the hype then, you ask. The keys are the ubiquitous nature of the protocols used, the increased reliability of the Internet, and the packaging of the offering as a generic service. Cloud computing, as a general approach, may support outsourcing of specific applications, generic computing resources or platforms, and software services. It may potentially lead to outsourcing of the whole data center.
Finally, all the pros and cons behind cloud computing need to be considered. Having someone take care of all your computing resources without investing into expensive data centers is an appealing concept but loss of control and unreliable SLAs may be a cause of concern for a number of businesses. Since the Internet is the primary communication mechanism for the public cloud, its reliability and performance need to be questioned whenever considering third party cloud offerings. Private clouds provide better control, reliability, and performance but what is the real difference between those and existing data centers? In my opinion, aside from following a different architectural model of allocating computing resources, nothing. On-demand computing is a great concept but making it work effectively is a tough task. Technologies exist today to dynamically divert unused resources to those applications that need them most. Grid computing, virtualization platforms, and others all provide these capabilities. However, there are limitations. Whenever maximum capacity is reached, hardware needs to be added. No software trick will work to cover this up. Therefore, efficient capacity and pipeline management need to exist to make cloud computing an effective and viable platform.
While there are some cloud computing zealots (http://www.infoworld.com/d/architecture/soa-realized-enterprise-computing-cloud-computing-146) and realists (http://www.gcn.com/Articles/2009/03/09/Guest-commentary-SOA-cloud.aspx), many are still cautious about this technology. And for a good reason. In my opinion, cloud computing has proven its worth in a number of situations but it is still not ready for the enterprise. Public clouds are too fickle for really demanding applications. Private clouds have not yet been effectively built. More importantly, however, lack of cloud computing standards and consensus among the key players will present challenges for anyone trying to enter this arena.
Wednesday, May 20, 2009
SOA Misconceptions
1. SOA is Expensive
SOA doesn't have to be expensive. There is an abundance of open source tools and technologies that can be used to build a truly state of the art SOA platform.
2. SOA is Inefficient
SOA is as inefficient as you make it. Not all the problems and organizations require a complete SOA stack but, at a certain level, it becomes necessary. Otherwise, you will have too much complexity and, in fact, breed inefficiency.
3. SOA is Non-productive
A well established SOA program can run like a well-oiled machine. I am sure a lot of people can cite numerous examples of lost productivity because of unnecessarily complex SOA environments and processes. However, in large and complex corporate IT departments, well defined organizational structures and processes are a must. They actually make things more efficient and increase productivity. Every situation requires a different approach. However, SOA, as a general pattern for building software, has been shown to dramatically improve productivity. Think about it -- if several projects can reuse an already existing, well tested service, the result is tremendous productivity improvements and cost savings!
4. SOA is Unnecessary
This is true, with a caveat. SOA has been employed as an IT and business strategy to make organizations more efficient, productive, and agile. If you want your company to achieve these goals, SOA becomes a part of the answer. Otherwise, you are stuck in the world of point-to-point integrations, Just a Bunch of Services, and an unmitigated integration mess in general. Obviously, smaller companies can get away without employing SOA for a much longer period of time but larger ones will feel the pain much sooner.
5. SOA is Too Complicated
Yes, SOA is complicated. But what major effort isn't? Enterprise Architecture is complex, so we shouldn't do it?! EAI was complex but companies did it out of necessity. Master Data Management is complex, so companies should forget about managing their data?! SOA can be as simple or complex as you need to make it. Create a roadmap for your SOA program and follow it. It should guide you in your quest to achieve SOA maturity, however simple or complex you need to make it.
SOA is a program. You can make it into whatever you need. You can use whatever technologies and approaches you like. As long as you keep the goal of increasing agility and saving money as a result of creating reusable business capabilities in mind, you should be successful.
SOA and the Trough of Disillusionment
What can we do to make sure that SOA moves onto the slope of enlightenment and further to the plateau of productivity? Continue to educate your organizations and executives about the value of SOA! Even though SOA is not in vogue anymore, it doesn’t mean that it lost its value. The benefits are still there. You just have to work harder to make them visible. Document your successes and shift into a marketing mode. Promote the SOA program and the benefits you achieved as much as possible. Show real value. This will certainly get executives’ attention and guarantee their support. Don’t give up. Continue to fight negative perceptions and concentrate on delivering value.
SOA, like any other enterprise wide initiative, is a differentiator. Companies that can successfully implement SOA will become a lot more successful than those that can’t. When the economic crisis ends, organizations that continued to invest into SOA will come out on top. Those that didn’t will be left behind and will need to spend a lot more money and efforts to catch up. SOA can produce tangible business benefits. Don’t let the disillusioned minority silence its value proposition.
Monday, March 23, 2009
SOA Funding Models
Any SOA initiative comprises two parts – infrastructure and services. Both need a separate funding model established in order to successfully support the SOA program’s goals.
SOA Infrastructure Funding
Infrastructure funding requires a fairly straightforward approach. When discussing SOA infrastructure, I am referring to shared platforms that are used by a number of services across the organization. Some companies host services on the same platforms whose functionality is being exposed. However, even if this is the case, some shared infrastructure components like ESBs, service management technology, Registry/Repository, etc. must exist to support the SOA program’s needs. Thus, it is safe to assume that some form of shared SOA infrastructure exists. There are two possible ways to provide effective funding to build it out.
- Fund all the SOA infrastructure centrally
- Identify appropriate projects to acquire / extend new / existing SOA infrastructure
- Do not recoup the investment
- Place an entry fee to use any SOA infrastructure component
- Charge a small fee for each usage instance
Since all the SOA infrastructure is provided centrally, not recouping the initial investment is a real option. If the organization’s fiscal model does not call for IT recouping all its costs from the business groups using their products, this option works well. If this is not the case, however, you have a choice between placing a predefined entry fee that each application / project must pay to use the specific SOA infrastructure platform and charging end users based on the total usage.
The per-use-fee scenario is a little tricky as each SOA infrastructure component needs to define what a transactional unit is and how much to charge for it. Transactional units can be different for each SOA platform. For example, an ESB transactional unit can be a service call, Registry/Repository – an individual user and/or a UDDI request, etc. In this case, total usage amount based on predefined transactional units would be calculated, multiplied by the unit cost, and charged to the business units. The most effective way to determine a unit cost is to divide the total investment made in the platform by the total transactional units being consumed. The obvious effect is that unit costs would decrease with increased usage. Here are all the formulae discussed above.
Usage charges per platform:
Unit = Different per Platform
Unit Cost = Total Platform Investment / Total Amount of Units Consumed
Line of Business Charge = Units Used by Line of Business * Unit Cost
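The formulae above can be turned into a small chargeback calculation. All figures below are made up for illustration; the lines of business and the ESB example are assumptions.

```python
# The usage-charge formulae as a small chargeback calculation.
# All figures are illustrative assumptions.
total_platform_investment = 500_000   # total spent on, say, the ESB
units_consumed = {"retail": 120_000, "lending": 60_000, "wealth": 20_000}

total_units = sum(units_consumed.values())            # 200,000 units
unit_cost = total_platform_investment / total_units   # $2.50 per unit

charges = {lob: units * unit_cost for lob, units in units_consumed.items()}
print(unit_cost)          # 2.5
print(charges["retail"])  # 300000.0
```

Note the effect described in the text: if total consumption doubled, the same investment divided by more units would halve the unit cost, so heavier platform usage makes each transaction cheaper for everyone.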
Some companies have chosen to grow their SOA infrastructure gradually, without a central program or funding. A typical approach in this scenario has been to attach SOA spending to the most appropriate projects. Thus, the projects would purchase new SOA infrastructure platforms or upgrade existing ones to suit their needs. There are several problems with this approach.
- Typically, the projects purchasing the infrastructure don’t want to share it with other potential consumers unless there is significant pressure from above. The platforms don’t end up being reused or, if so, only minimally. The projects have no incentive to share their investments with anyone else, especially since those investments are seen as critical to the projects’ success.
- Projects often get cancelled due to over-inflated budgets. SOA infrastructure is expensive, and cost-conscious enterprises do not want to invest in what looks like excessive infrastructure relative to the project’s needs.
- Demand to extend a platform based on a project’s needs typically comes without enough lead time to accommodate the project’s timeline. Thus, projects face a tough decision – extend their delivery date or use alternative infrastructure.
Funding the SOA infrastructure centrally is more effective at delivering service-oriented solutions faster, moving the enterprise more efficiently towards a higher level of SOA maturity, and addressing project needs. Project-based funding will most likely spell doom for the SOA program as a whole.
Service Funding
As discussed earlier, funding for the SOA infrastructure should come from a central source. Where the money to build individual services comes from, however, presents a bigger challenge. Since projects are the primary drivers behind demand for services, special consideration should be given to project needs and budgets. However, service design and implementation can incorporate additional requirements that fall outside the project scope. Another typical project-related problem stems from the shared nature of services: it is unfair to burden a project with the full cost of a service that will be utilized by a number of other consumers.
There are three possible ways to address the service funding concerns.
- Make the first project to build a service provide the complete funding
- Establish a central funding source that will cover all service design and construction expenses
- Provide supplementary funding to projects building services
If option 1 is selected, several strategies for recouping the initial investment can be used.
- Do not recoup the investment
- Place a surcharge on each instance of service leverage
- Charge a small fee for each service call
As mentioned above, it is unfair for a project to carry the complete cost of the service build-out, especially if it includes additional requirements. Thus, unless the project implements one of the options to recoup its initial investment, funding option #1 is not viable. Not recovering the funds is not a realistic option either, as it gives projects no incentive to build truly reusable services. The other cost recovery strategies may work but require detailed metrics to be captured on service leverage and/or transactional volume.
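The two viable recovery strategies reduce to simple arithmetic. Here is a rough sketch; the investment amount, consumer count, and per-call fee are hypothetical.

```python
# Hypothetical cost-recovery sketch for a project that funded a shared service.
initial_investment = 120_000  # project's cost to build the reusable service ($)

def leverage_surcharge(expected_consumers):
    """Strategy: surcharge each instance of service leverage.

    Spread the initial investment evenly across the expected reuse instances.
    """
    return initial_investment / expected_consumers

def calls_to_recoup(fee_per_call):
    """Strategy: charge a small fee per service call.

    Return how many calls are needed before the fee pays back the investment.
    """
    return initial_investment / fee_per_call

print(leverage_surcharge(6))     # each of 6 later consumers chips in equally
print(calls_to_recoup(0.05))     # calls needed at 5 cents per call
```

Both functions assume the metrics they depend on (number of leveraging consumers, transactional volume) are actually being captured, which is exactly the prerequisite noted above.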
Establishing a central funding source for all projects to use when building reusable services is probably the ideal approach. Few companies, however, would be willing to write what would, in essence, be a blank check for projects to use in their service delivery efforts. The opportunity for abuse and misappropriation would be too tempting. Unless strong governance and control mechanisms are in place, this funding method will most likely end up costing the company more money and providing an unrealistically small return on investment.
Providing supplementary funding to projects building services is probably the most realistic approach. A central fund needs to be established to cover the efforts falling outside the project scope. Since shared services typically incorporate other projects’ and enterprise requirements, the actual cost ends up being higher than what a project budgeted for its own needs. Thus, the easiest way to distribute supplementary funding is to let projects pay for the functionality already included in their budgets and cover all additional costs through the central fund.
Whatever funding approach is used, it needs to be carefully administered. A party not involved in day-to-day project work is best suited to play the administrative role. There could be a couple of different groups managing the infrastructure and service funding and chargeback mechanisms. Overall, however, this should fall under the SOA Governance umbrella and be managed centrally as part of the SOA Program.
Thursday, February 19, 2009
Making SOA ROI Real
http://xml.sys-con.com/node/847118
Thursday, January 29, 2009
Service Orchestration Guidelines
A quick Google search will produce a number of articles and links that discuss service orchestration and related topics. Most of them will talk about BPM engines, ESBs, and BPEL. This, unfortunately, pollutes the true definition of service orchestration and gives it a much more technology-centric slant.
In my opinion, Service Orchestration is an automated way to combine several services together to achieve new functionality. The end result is a new composite service that provides a distinct business capability and can be invoked independently. It must have all the appropriate attributes as discussed in my previous article.
Orchestration is a technology independent concept. It can be achieved via a descriptive language such as BPEL, built-in tools within a specific platform (ESBs typically provide their own orchestration mechanisms), or programmatically. Depending on your needs, situation, or technology available, the best way to perform service orchestration may be different. Here are a few guidelines to help you create service orchestrations faster and make them more flexible, maintainable, and scalable.
- Use the platform with built-in orchestration capabilities as your first choice
- Avoid implementing service orchestrations programmatically whenever possible
- Choose a platform or mechanism that can easily perform flow logic, result aggregation, message transformation, and business rule application
- Ensure the composite service fits the definition of a service, i.e. has all the attributes of a service
Many would argue that a programming language gives you the most flexibility when implementing an orchestration. While this is true, the overhead is large and the efficiency low. First, no programming language seamlessly integrates all the mechanisms you need to create an orchestration, especially in a visual way. Second, every time an orchestration needs to change, no matter how small the change, new code needs to be written, deployed, and tested. While the same steps need to be performed on any orchestration platform, the level of effort is much smaller on a full-featured orchestration platform.
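To make the overhead argument concrete, here is a minimal sketch of a composite service hand-coded in Python. The service names and return values are hypothetical stand-ins for real service clients; the point is that flow logic, result aggregation, and message transformation all have to be written, and rewritten, by hand.

```python
# Hypothetical programmatic orchestration: every concern an orchestration
# platform would provide visually (flow logic, aggregation, transformation)
# must be hand-coded and re-tested on every change.

def get_customer(customer_id):
    """Stand-in for an atomic business service."""
    return {"id": customer_id, "name": "Acme Corp"}

def get_open_orders(customer_id):
    """Stand-in for another atomic business service."""
    return [{"order": 101, "status": "open"}, {"order": 102, "status": "open"}]

def customer_summary(customer_id):
    """Composite service: flow logic + result aggregation + transformation."""
    customer = get_customer(customer_id)      # step 1 of the flow
    orders = get_open_orders(customer_id)     # step 2 of the flow
    return {                                  # aggregate and transform the results
        "customerName": customer["name"],
        "openOrderCount": len(orders),
    }

print(customer_summary(42))
```

Even in this toy form, changing the flow (say, adding a credit check between the two calls) means editing, redeploying, and retesting code, whereas a full-featured orchestration platform would let you rewire the flow declaratively.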
When creating service orchestrations, it is important to maintain proper relationships between composite and atomic services. The diagram below shows which services should be allowed to interact with each other.

The following list details the rules and guidelines for establishing relationships between composite and atomic services.
- Atomic business services should not call each other. Use orchestration to combine several business services together.
The goal of service orchestration is to combine several services together through a series of logical steps and expose a new capability to consumers. Orchestration platforms, as discussed above, provide a lot of functionality to make this work easy and efficient. If individual services are allowed to call each other, they do not take advantage of the orchestration platform’s capabilities. Furthermore, when business services call each other, it establishes a tight coupling between them, which makes them less reusable and harder to maintain. Atomic business services should provide specific, well defined business capabilities and be reusable in their own right. Reliance on other services to complete work indicates plurality of purpose and lack of specificity.
- Business services can call Utility services.
While coupling services together should be avoided as much as possible, sometimes generic, low-level functionality that a business service needs to invoke is exposed via utility services. It would be overkill, and sometimes even impossible, to use an orchestration platform to let business services take advantage of functionality such as logging, retrieving or storing configuration information, and authorization.
- Utility services cannot call Business services.
Utility services should not be tied to any business processes or capabilities; a utility service calling a business service would violate this rule.
- Business services cannot call Composite services.
The logic behind this guideline is the same as that for disallowing business services from calling each other. A composite service is also a business service; thus, a business service calling a composite service should not be allowed.
- Composite services can call other Composite services.
Other composite services are allowed to participate in orchestrations. In this case, they should be treated as regular atomic services.
Service orchestration is a complex topic and might take a series of articles to discuss completely. However, the rules outlined above should establish a good foundation for creating and managing composite services.
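The rules above can be captured as a simple allowed-call matrix that a design-time governance check could enforce. This is a sketch; the service kinds are the ones defined in this article, and pairs not covered by the rules default to "not allowed."

```python
# Sketch: encode the composite/atomic call rules as an allowed-call matrix.
ALLOWED_CALLS = {
    ("business", "utility"):    True,   # business services can call utility services
    ("business", "business"):   False,  # use orchestration instead
    ("business", "composite"):  False,  # a composite is also a business service
    ("utility", "business"):    False,  # utility services must stay process-agnostic
    ("composite", "business"):  True,   # composites orchestrate atomic services
    ("composite", "utility"):   True,
    ("composite", "composite"): True,   # composites may participate in orchestrations
}

def may_call(caller_kind, callee_kind):
    """Return True if the caller kind is allowed to invoke the callee kind."""
    # Pairs the rules do not mention are conservatively disallowed.
    return ALLOWED_CALLS.get((caller_kind, callee_kind), False)

print(may_call("business", "utility"))    # allowed
print(may_call("utility", "business"))    # disallowed
```

A registry/repository or static-analysis tool could run such a check over declared service dependencies before anything reaches production.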
Friday, January 23, 2009
Services Explained
There are several categories of services. Many leading SOA vendors and thinkers typically break them down into Business and Utility types. A Business service represents a business capability that is needed to complete a step within a larger business process. Examples may include retrieving customer information, making payments, or checking order status. Utility services represent a technical capability that is business process agnostic. Examples are e-mail, logging, or authentication services.
Services can be combined together to create composite services; this is called orchestration. An example is a Money Transfer service that needs to debit one account and deposit the money into another. Composite services can also be categorized as Business and Utility. Best practices and guidelines related to orchestrations, atomic services, and their relationships will be discussed separately.
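The Money Transfer example can be sketched as a composite orchestrating two atomic business services. This is a toy in-memory illustration, not a real payment API.

```python
# Toy sketch of the Money Transfer composite from the example above.
accounts = {"A": 500.0, "B": 100.0}  # in-memory stand-in for an account system

def debit(account, amount):
    """Atomic business service: remove funds from an account."""
    if accounts[account] < amount:
        raise ValueError("insufficient funds")
    accounts[account] -= amount

def credit(account, amount):
    """Atomic business service: add funds to an account."""
    accounts[account] += amount

def transfer(src, dst, amount):
    """Composite service: orchestrates debit and credit as one new capability."""
    debit(src, amount)    # in a real system these two steps would run inside
    credit(dst, amount)   # a transaction or have a compensating flow on failure

transfer("A", "B", 50.0)
print(accounts)
```

Note that `transfer` provides a distinct business capability of its own and can be invoked independently, which is what qualifies it as a service rather than mere glue code.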
Regardless of the type, a service is composed of three components.
- Interface
- This defines how a service is exposed and how it can be accessed by its consumers.
- Interfaces are not limited to Web Services and can be represented by any remote messaging protocol.
- Contract
- This defines what the service expects during its interaction with consumers. Message structures, relevant policies, and related security mechanisms are all part of a contract.
- The contract defines a “legal” agreement on how the service and its consumers should interact.
- Implementation
- This is the actual service code.
A service may also have multiple contracts. I have recommended in the past that, for a service to be maximally reusable, it implement the Service Façade pattern (see the “SOA Façade Pattern” post and the “Service Façade” pattern). This pattern recommends creating multiple different interfaces and contracts for the same service. The Concurrent Contracts pattern also addresses this issue.
It is important to understand that while the interface, contract, and implementation describe a service as a whole, they are not closely tied together. An interface is not dependent on the contract details and should not be tied to a specific messaging structure. The opposite is also true: the contract should not be tied to any specific communication protocol. Additionally, the implementation should be flexible enough to accommodate the potential for multiple interfaces and contracts. Ideally, however, I would recommend that a service expose only a single contract and interface and let the ESB take care of exposing additional endpoints and facades as necessary.
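The decoupling argument can be sketched as one implementation fronted by two façades, each exposing a different contract. The function names and message fields here are hypothetical; the underlying idea is the Service Façade / Concurrent Contracts approach described above.

```python
# Sketch: one service implementation, multiple contracts via façades.

def get_customer_impl(customer_id):
    """Implementation: the actual service code, contract-agnostic."""
    return {"id": customer_id, "name": "Acme Corp", "tier": "gold"}

def full_contract_facade(customer_id):
    """Contract A: the full message structure, e.g. for internal consumers."""
    return get_customer_impl(customer_id)

def slim_contract_facade(customer_id):
    """Contract B: a reduced message structure, e.g. for external consumers."""
    record = get_customer_impl(customer_id)
    return {"id": record["id"], "name": record["name"]}  # omit internal fields

print(slim_contract_facade(7))
```

Because the façades own the message structures, the implementation can evolve (or grow a third contract) without breaking either existing consumer group, which is precisely the reuse benefit the pattern promises.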
Friday, December 5, 2008
EA & SOA in Down Economy
Pretty bold statement, some would say. I don’t think so. Let’s consider the facts.
When times are tough, the first thing most companies do is slash budgets. The IT budget gets reduced just like everyone else’s. The focus shifts from strategic initiatives to simply keeping the lights on and completing projects as quickly as possible. Enterprise Architecture efforts are usually the first ones to be eliminated or significantly reduced. Point solutions become the norm, resulting in duplication of software, hardware, and overall effort. Smokestack applications rise from the ashes of the Enterprise Architecture. Everyone becomes more concerned about keeping their jobs than doing the right thing for the company. IT managers shift into aggressive empire-building mode in order to protect their jobs and eliminate their own risks. (The old mentality of “I own more than you, therefore I am more important than you” is still alive and well, unfortunately. IT managers also think that if they can “own” and control every piece of their application, it will reduce their risk and allow them to deliver results faster.) Governance becomes unenforceable and largely forgotten.
Through this chaos, interesting trends emerge. While the initial IT budget is reduced through a series of staff reductions and some technology rationalization efforts, the costs begin to creep back up in subsequent years. When the economy finally turns around and the pressure to keep the budget low eases, the IT budget suddenly becomes larger than it was prior to the cuts. Why? The explanation is simple. The empire building and unfettered decision making by IT management finally bears fruit. There is more software, more licenses, more hardware, and more code in the data center, all of which require more people to support them. There is very little reuse and sharing because each group has built silo applications residing on their own unique platforms. Costs increase, efficiencies decrease, and it takes longer to deliver new capabilities, especially if they require several applications to integrate with each other.
Enterprise Architecture and SOA can help reverse these trends and, in fact, keep IT budgets low. Most companies have a number of redundant systems, applications, and capabilities that have grown through the kind of uncontrolled behavior described above. EA, through effective discovery and governance mechanisms, can eliminate these redundancies while maintaining the same capacity and level of operational responsiveness. Additionally, EA groups can influence or implement new architecture approaches to help consolidate resources and gain efficiencies; examples include virtualization, green technologies, and cloud computing. SOA, as a subset of EA, provides much the same benefits. Encapsulating key business functions as reusable services will help achieve more consistency, save money, and enable faster project delivery. An effective EA program can protect a company’s IT budget from ballooning by establishing and enforcing standards, promoting reuse opportunities, and ensuring transparency across all IT systems.
The bottom line is that companies cannot afford not to invest in EA and SOA. These programs will make organizations more efficient through the economic downturn and help achieve the necessary savings. In the long run, EA and SOA will keep costs down while increasing business agility. Effective EA and SOA programs are a competitive advantage, not overhead. They will easily pay for themselves and, more importantly, enable organizations to avoid uncontrolled spending in the future. Enterprise Architecture and SOA are a must, not an option!
Wednesday, October 29, 2008
SOA Ecosystem

If you refer to the diagram above, you will notice several major components that make up the SOA Ecosystem.
- ESB
- Registry/Repository (RegRep)
- Security
- Service Management
- Shared Service Environments
- Service Consumers
To truly comprehend how the SOA ecosystem operates, one must develop a clear understanding of what each component does and what its role is. Let’s start from the service consumer side.
- Service Consumers
- Application Developers build applications that consume services. They use IDEs and other development tools to construct service requests and parse responses. Developers interact with the Registry/Repository to find the right services, obtain service metadata, and understand usage patterns.
- Application Testers perform quality assurance tasks on the final product.
- Application Servers that execute the application code interact directly with the SOA platform hosting the services.
- SOA Infrastructure
- The Service Management Platform acts as an entry point into the SOA infrastructure. It retrieves policy information about the service being executed and applies it appropriately to the request. The policy is used to understand service security and authority, associated SLAs, constraints, contracts, etc. The Service Management Platform is often utilized to track service consumption and run-time metrics, which are then fed into the Registry/Repository.
- The role of the Enterprise Service Bus has already been discussed.
- The Registry/Repository acts as a central repository for services and their metadata. Its uses and integrations are discussed at each related point.
- The Security / Authentication Platform is part of the larger IT infrastructure and is typically represented by either LDAP or Active Directory technology.
- Shared Service Environments are used to host reusable services. While different organizations choose to approach service hosting differently, if a common service hosting platform can be established, many issues related to service scalability, performance, reuse, security, implementation, standardization, etc. can be easily resolved. A centrally managed platform can be easily upgraded to accommodate additional – foreseen or unforeseen – volume. Standard capabilities can be provided to perform security, authentication, logging, monitoring, instrumentation, deployment, and many other tasks.
- Service Creation
- Service Architects and Developers create reusable services using the appropriate design and development tools. They also interact with the Registry/Repository to discover existing services and register new services and related metadata. The created services should ideally be deployed into a Shared Service Environment.
- Service Testers perform quality assurance tasks on the new or modified services. They use special SOA testing tools to create test cases and automate their execution. These tools interface with the Registry/Repository to retrieve metadata about the services and update related information once testing is complete.

