Mr Potato Head Explains SOA & BPM

Excellent article originally posted by Craig Reid explaining SOA and BPM. Enjoy!

Original article:

I was lucky enough to attend the Sydney BP Trends Group run by Johan Nelis a few years ago. The meeting was all about what SOA (Service Oriented Architecture) is and how it works with BPM.

Having had no previous knowledge of or involvement with SOA, I was pretty keen to understand it. Not being a textbook kind of guy, I am always looking for a metaphor, analogy or image to help myself understand things, and bizarrely enough, when I learned about SOA and BPM my analogy was clear:

Mr Potato Head.

Why? Well, SOA is all about providing flexibility. It’s all about having a modular architecture that is “as flexible as the business needs it to be”. It stands in contrast to the “old” IT way of building rigid systems that are slow and costly to change. If we think of Mr Potato Head as our offering to the customer, the business decides what Mr Potato Head looks like (services). Now imagine that each one of Mr Potato Head’s bits (ears, eyes, hats, etc.) is a business process. These processes make up the offering or service to the customer. So the business decides what he looks like, and IT plug together his individual processes out of their big box of ears, eyes, mouths, etc.

If the business decides that they want to change their offering to the customer, and hence the processes involved, they simply tell IT what they want, and IT go back to their big box of Mr Potato Head parts and pick out a new process (ear, eye, etc.!). Mr Potato Head now looks different because they have changed the process, and the customer receives a new service or offering from the company.

If we look at how this would have worked in the old days, the business would have come to IT with their request, and IT would have told them that all Mr Potato Head’s parts were glued together and that to change their processes they’d have to, say, cut off an arm, build a new one and glue it on. This would take time, money and a lot of effort.

But with our new SOA-oriented business, Mr Potato Head can take on the world! We simply plug in new processes to provide the business with what it needs, so the business can respond rapidly. Business and IT are in complete alignment.
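
To make the analogy concrete, here is a minimal Python sketch (all names are invented for illustration, not from any SOA product): an offering is an assembly of pluggable process parts, and swapping one part leaves the rest untouched.

```python
class Process:
    """A reusable business process from IT's 'big box of parts'."""
    def __init__(self, name, behaviour):
        self.name = name
        self.behaviour = behaviour

    def run(self):
        return self.behaviour


class Offering:
    """What the customer sees: a pluggable assembly of processes."""
    def __init__(self):
        self.parts = {}

    def plug_in(self, slot, process):
        # swap one part without touching the rest of the assembly
        self.parts[slot] = process

    def serve(self):
        return {slot: p.run() for slot, p in self.parts.items()}


potato = Offering()
potato.plug_in("ears", Process("standard-ears", "listen to customer feedback"))
potato.plug_in("mouth", Process("standard-mouth", "send order confirmations"))

# The business changes its mind: swap one part, nothing else is re-glued.
potato.plug_in("mouth", Process("sms-mouth", "send SMS confirmations"))
print(potato.serve()["mouth"])  # send SMS confirmations
```

The point of the sketch is that the change touched only one slot: the "ears" process never knew anything happened.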

HP Discovery and Dependency Mapping


Understanding the capability of an organisation is becoming fundamental to any transformation/change programme. This capability is typically captured through business process mapping and modelling techniques. However, as technology continues to advance, more and more vendors are providing automated solutions and tools to help with “discovering” assets across the enterprise and interpreting the “dependency” between business services and the technology typically delivered by IT departments.

This article introduces interesting advances being made by HP in the area of Application Discovery and Dependency Mapping (ADDM).

I’d encourage CTOs, Enterprise Architects and IT Directors to continue reading and to embrace these new advances to help them better understand how to align Business and Technology in their organisation.

Advanced visibility into services and infrastructure

HP Discovery and Dependency Mapping Advanced Edition (DDMA) software automates discovery and dependency mapping of services, applications, and underlying infrastructure. Mapping helps you perform failure impact analyses which minimize downtime. Improved visibility into IT helps you transform into a modern, flexible, and converged infrastructure that reduces operational expense, defers capital expense, and improves business uptime. 80% of all service disruptions are caused by faulty changes, and DDMA provides the visibility required for more effective changes.
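
As a rough illustration of what a dependency map enables (this is a generic graph traversal, not HP's actual algorithm, and the component names are invented), failure impact analysis amounts to finding everything that transitively depends on a failed component:

```python
from collections import defaultdict

def impacted_by(failure, depends_on):
    """Return everything that transitively depends on the failed component."""
    # Invert the dependency edges: component -> things that depend on it.
    dependents = defaultdict(set)
    for item, deps in depends_on.items():
        for dep in deps:
            dependents[dep].add(item)
    # Walk "upwards" from the failure through every dependent.
    impacted, stack = set(), [failure]
    while stack:
        node = stack.pop()
        for parent in dependents[node]:
            if parent not in impacted:
                impacted.add(parent)
                stack.append(parent)
    return impacted

# Illustrative map: each service lists the things it depends on.
dep_map = {
    "online-checkin": ["web-tier", "booking-db"],
    "web-tier": ["vm-cluster-1"],
    "booking-db": ["san-array-2"],
}
print(sorted(impacted_by("san-array-2", dep_map)))
# ['booking-db', 'online-checkin']
```

A real discovery tool builds `dep_map` automatically by scanning the estate; the analysis on top of it is essentially this traversal.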

Key benefits

  • Increased productivity by automating the discovery of infrastructure and software
  • Lowered mean time to resolution for critical events by understanding service decomposition
  • Increased business service availability by intelligently choosing issues to address
  • Improved visibility into existing legacy IT infrastructure for data center transformation
  • Better planning for modernization of application portfolios and IT infrastructure

Further Reading

If your organisation is looking to map IT dependencies to reduce downtime and expense, and to plan for change, you should consider HP’s DDMA solution. Below you will find a white paper and a rich media demonstration.

Read the latest EMA Radar Report ranking HP Discovery and Dependency Mapping Advanced Edition (DDMA) software as the “best of show” product.

For a demonstration of this solution, click here. Note that this is a Silverlight demonstration and works best in Internet Explorer v8+.

Take Charge of Application Integration Chaos


As I continue to serve as the Enterprise Architect at Aer Lingus (Dublin, Ireland), I am collecting references from EA journals, magazines and general articles that mirror my experience in shaping the IT Strategy and Enterprise Architecture and transforming People, Process and Technology as part of a 3-5 year programme of work.

One key capability that I’m introducing relates to Business Process Management (BPM) and Enterprise Application Integration (EAI), as part of an effort to consolidate, rationalise and retire the application landscape, which previously extended to 242 applications and will be reduced to 52 core applications (in Year 1 – 2011). The article below is pertinent to my current experiences and hits the spot. Recall that in my “past life” I was a TIBCO and IBM consultant; I have over 15 years of experience carrying out integration of applications (EAI) and information (EII), which typically extends to information management strategies …

Want to know more about what I’m doing to transform Business/IT capabilities at Aer Lingus? Leave your contact details and any other comments/feedback and I’ll be in touch …


At a high level, application integration means leveraging technical infrastructure to make diverse applications communicate with each other to achieve a business objective. The integration needs to be seamless and reliable, regardless of platform and geographical location of these applications.

The move toward service-oriented architecture, business process management, and software as a service has accelerated the recognition that application integration can increase business efficiency. To ensure that the integration is both beneficial and feasible, one should closely examine the business processes that are being supported before focusing on the systems and technologies themselves. Understanding this is the key to determining how to select the most suitable integration technologies.

This article examines application integration challenges in the context of an enterprise’s diverse technology landscape and architectural concerns. The “scenario-driven approach” presented here describes how to successfully implement application integration standards at the enterprise level, leveraging the TOGAF methodology and, ultimately, supporting the business capabilities.

Challenges of Application Integration

Making suboptimal or incorrect choices when selecting your technology toolset can lead to complex architectural issues, which in turn lead to tightly coupled systems and support and maintenance problems. Moreover, guidelines and best practices espoused by architecture groups are not consistently written down, nor are they consistently followed by application development teams at large. Without clear standards on when to use which technology, and for what purpose, one runs the risk of creating an unnecessarily complex technology environment.

Many enterprises have also suffered from organic growth and now have every integration technology of the last two decades in place: JDBC/ODBC, database links, remote method invocation (RMI), enterprise Java beans (EJB), Web services, Java message services (JMS), MQSeries, and flat files abound. This technology diversity has resulted in increased support costs and has adversely affected system performance.

Additionally, there are organizational barriers that may impact your application integration efforts as well. These issues typically arise from the fact that enterprise IT systems span multiple departments in the organization. Different development teams in the enterprise, if not properly guided or monitored, may tend to choose the path of least resistance or resort to technologies that they are familiar with, which can add to integration complexity.

Organizational issues in application integration can be tricky in a larger organization. However, these companies often have the greatest need for an effective application integration environment. Therefore, it is particularly important to clearly define standards, so that development groups can focus on business needs.

Levels of Application Integration

Application integration can occur at many different levels of a system’s architecture, including the data layer, the application layer, the service layer, and the presentation layer.

Integrating applications at the data layer can sometimes be the quickest implementation approach, due to the simplicity and power of data layer integration technologies. Data layer integration includes the use of database links, shared database catalogs, and direct database queries. However, this approach leaves many internal application details exposed, and upstream and downstream application changes result in significant impact, rework, and testing.

With application-layer integration, low-level implementation details may or may not be exposed to or accessible from other systems. Application layer integration includes the use of flat files, message queues, and remote procedure calls; an API of some form is leveraged. Although this gives a better degree of separation, this approach still has problems.

Message-based technologies such as JMS and MQSeries are examples of popular queuing solutions but are based on proprietary implementations. Error handling can be problematic, as messages can be lost on “undeliverable mail” queues. The level of application detail that must be exposed to interfacing applications is one of the major disadvantages of integration using remote procedure calls (the RMI and RPC styles). In short, integrating applications at the application layer is preferable to data layer integration, but it still has its own issues.

Service-based integration includes the use of Web services. The advantage of Web services integration is interoperability, even though the integration can still be point to point. Each Web service client still has the responsibility of knowing which service to call, so the addition of new endpoints will result in additional coding.

An enterprise service bus (ESB) combines the strengths of existing service-based integration technologies but provides more abstraction and interoperability. Application integration using an ESB combines message-oriented processing and Web services, which is the foundation for an event-driven SOA. In our opinion, services-based integration, especially when paired with an ESB, is the preferred method for all application integration in an enterprise.
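
As a toy sketch of why the bus helps (illustrative Python, not any vendor's API): producers publish an event to the bus, the bus delivers it to all subscribers, and adding a new consumer requires no change to the producer, in contrast with point-to-point web-service calls where each client must code each endpoint.

```python
from collections import defaultdict

class MiniBus:
    """A minimal publish/subscribe mediator standing in for an ESB."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The bus, not the sender, decides who receives the event.
        return [handler(message) for handler in self.subscribers[topic]]


bus = MiniBus()
bus.subscribe("order.created", lambda m: f"billing saw {m['id']}")
bus.subscribe("order.created", lambda m: f"warehouse saw {m['id']}")

# The producer publishes once; it is unaware of either consumer.
results = bus.publish("order.created", {"id": 42})
print(results)  # ['billing saw 42', 'warehouse saw 42']
```

A real ESB adds transformation, routing rules, reliability and protocol mediation on top of this basic decoupling, but the decoupling is the foundation.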

Now What?

To get your application integration under control, we recommend that you work within your existing landscape of systems and technologies and leverage the TOGAF methodology, instead of kicking off a large “EAI project.”

We use a “scenario-driven approach” that focuses on aligning business scenarios with supporting technologies. This approach consists of the following steps:

  1. Leverage TOGAF architecture principles.
  2. Identify technology standards and create building blocks.
  3. Identify usage scenarios and map scenarios to the technology standards.

Leverage TOGAF Architecture Principles

The main reason to integrate applications is to support a business process. Taking a technology-first approach can lead to inflexible solutions that can be costlier to maintain as the business environment changes.

Architecture groups do not typically create technology standards in a vacuum for idealistic purposes (although some development teams may have differing opinions). When developing standards, you need to make sure there is direct traceability to enterprise architecture principles such as:

  • Creating loosely coupled interfaces.
  • Setting platform independent, open standards.
  • Developing reusable, shared services.
  • Minimizing application impact.
  • Promoting data consistency.
  • Recognizing that business logic is a strategic business asset and should not be placed in closed vendor solutions.

These principles are based on the default TOGAF enterprise architecture principles; however, they may be too vague for some project teams and leave much room for interpretation. To address this vagueness, the architecture group needs to identify acceptable integration technologies and map usage scenarios to those technologies to control the complexity of the integration environment.

Identify Technology Standards and Create Building Blocks

Based on organizational strategy, solutions in place, and staffing and skill levels, you should then identify the technology standards that are preferred at your company. Determine if your strategy is to be a custom Java shop, a Microsoft shop, or an SAP enterprise. Regardless of the technology, set clear standards for technology usage and avoid allowing developers to choose their favorite technology to use for integration.

TOGAF promotes the use of building blocks to support effective enterprise architecture. Building blocks are simply a package of functionality defined to meet business needs. For example, an architecture building block is a high-level, abstract architectural pattern. A solution building block is a specific instance of a technology or product. Every organization must decide for itself what arrangements of building blocks work best for it.

For each architecture building block, create the corresponding solution building block(s). For example, ETL is an architecture building block; it can be realized by a solution building block consisting of products such as Informatica PowerCenter or Business Objects Data Integrator.

A well-specified catalog of building blocks will lead to improvements in application integration, facilitate interoperability, control technical diversity, and provide flexibility in the creation of new systems and applications.
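
In its simplest form, such a catalog is a lookup from each architecture building block to its approved solution building blocks. The sketch below is illustrative only, reusing the ETL example from above; the ESB entry is a placeholder, not a product endorsement.

```python
# Hypothetical building-block catalogue: ABB name -> approved SBBs.
catalogue = {
    "ETL": ["Informatica PowerCenter", "Business Objects Data Integrator"],
    "ESB": ["(approved ESB product)"],
}

def approved_solutions(abb):
    """Look up the approved realisations of an architecture building block."""
    if abb not in catalogue:
        # Anything outside the catalogue requires the exception process.
        raise ValueError(f"{abb!r} has no approved solution - file an exception")
    return catalogue[abb]

print(approved_solutions("ETL"))
# ['Informatica PowerCenter', 'Business Objects Data Integrator']
```

The value of even this trivial structure is that "not in the catalogue" becomes an explicit, enforceable condition rather than a matter of developer preference.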

Table 1 lists a sample of the approved architecture and corresponding solution building blocks.


Map Scenarios to Standards

As a final step, map the integration technologies to various usage scenarios. You will need to develop usage scenarios that can be used to identify types of application interaction.

Our set of usage scenarios includes:

  • Perform domain entity validation.
  • Synchronize business information in multiple systems.
  • Notify of state change between applications.
  • Notify other systems of event of interest.

Notice how our usage scenarios are not technology focused—the scenario is not “put a message on an application queue”; the scenario is “Notify other systems of event of interest.”

We use a matrix to map the preferred integration standards to various usage scenarios at various layers, i.e. data, business logic, and presentation. The goal of this matrix is to enable consistent architecture between system interfaces by establishing a common set of prescribed integration and usage patterns.

For example, if a usage scenario is to synchronize information in multiple systems, the preferred integration standard would be to use message bus or ESB as suggested by the matrix. Any deviations from preferred integration techniques would require an exception process to be followed.

The preferred technology for a particular usage scenario is indicated on the matrix with a + sign; a – sign indicates that this technology is acceptable but not preferred. Also, some technologies such as FTP and DB links are not included, meaning that they are not permitted and their usage would require an exception approval.
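
A minimal sketch of how such a matrix might be encoded and queried (the entries below are invented for illustration, not the authors' actual matrix):

```python
# Scenario -> {technology: "+" preferred, "-" acceptable}; absence means
# the technology is not permitted for that scenario without an exception.
MATRIX = {
    "synchronize business information in multiple systems": {
        "ESB/message bus": "+", "web services": "-",
    },
    "notify other systems of event of interest": {
        "ESB/message bus": "+", "message queue": "-",
    },
}

def standard_for(scenario, technology):
    """Classify a technology choice for a given usage scenario."""
    row = MATRIX.get(scenario, {})
    if technology not in row:
        return "not permitted - exception approval required"
    return "preferred" if row[technology] == "+" else "acceptable"

scenario = "synchronize business information in multiple systems"
print(standard_for(scenario, "ESB/message bus"))  # preferred
print(standard_for(scenario, "FTP"))  # not permitted - exception approval required
```

Encoding the matrix this way also makes it easy to audit proposed designs automatically rather than relying on each team to read the standards document.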

The matrix acts as a guide to architecture and solution delivery teams. Basing decisions on this matrix minimizes the decision variability across teams and provides a base for service orientation that ensures interoperability and true integration in a heterogeneous enterprise landscape.


The key to successful services-based integration is to focus on why applications need to exchange information. Our scenario-driven approach enables the application teams to use the matrix as a guide to identify approved integration technologies based on business requirements. This approach helps to control technical diversity as well as fosters consistent integration standards across the enterprise.

Figure 1


By Carrin Tunney, an enterprise architect at DTE Energy with more than 16 years in application development and distributed systems, who is TOGAF certified and a Sun Certified Enterprise Architect; and Srini Sastry, a technical architect at DTE Energy with more than 15 years of application development experience, who also holds TOGAF and Sun Certified Enterprise Architect certifications.

SOA Gateway Trends for 2011 and Beyond | Cloud Zone

Interested in learning more about SOA Gateways? Typically, when embarking on an SOA programme (or project), it is important to plan ahead for service registry/repository deployment and configuration. Several vendors offer hardware/appliance-style solutions, e.g. IBM WebSphere DataPower.

Read on …

SOA Gateway Trends for 2011 and Beyond | Cloud Zone.

Airline reservations systems: can IT deliver?

Currently looking at modernisation options for Aer Lingus’ mainframe-based Passenger Management System (PMS).

As part of the Enterprise Architecture I’m developing, modernisation will be served through a Service Oriented Architecture (SOA) which will also exploit EAI and BPM capabilities while also leveraging Enterprise Information Management led solutions to help address our customer information needs.

While looking at this, I came across an article worth reviewing. Although it was written in 2008, its relevance today is still pressing. Now that concepts such as SOA, EAI/BPM and EIM are mainstream and better understood, RES modernisation (amongst other things) is more real than ever before.

Interested to learn more? Read on …

Airline reservations systems: can IT deliver?.

Solvency II


Solvency II:

Solvency II is a fundamental review of the capital adequacy regime for the European insurance industry. It aims to establish a revised set of EU-wide capital requirements and risk management standards that will replace the current Solvency requirements.


The European Commission publishes the technical specifications for the fifth quantitative impact study (QIS5), see the FSA’s QIS5 page for further information.

The Insurance Sector Newsletters contain useful information for firms about the FSA’s approach to moving from ICAS to Solvency II.

The FSA publishes Delivering Solvency II – an update that summarises the key policy developments and implementation activities.

The Solvency II Directive is due to be implemented on 1 November 2012. Any changes to the go live date will be formally communicated by the European Commission, when the FSA will consider and communicate the potential impact on planning and preparations for itself and firms.


The Solvency II Directive will apply to all insurance and reinsurance firms with gross premium income exceeding €5 million or gross technical provisions in excess of €25 million (please see Article 4 of the Directive for full details).

In a nutshell:

  • Solvency II will set out new, strengthened EU-wide requirements on capital adequacy and risk management for insurers with the aim of increasing policyholder protection; and
  • the strengthened regime should reduce the possibility of consumer loss or market disruption in insurance.

Central elements:

Central elements of the Solvency II regime include:

  1. Demonstrating adequate Financial Resources (Pillar 1): applies to all firms and considers key quantitative requirements, including own funds, technical provisions and calculating Solvency II capital requirements (the Solvency Capital Requirement (SCR) and the Minimum Capital Requirement (MCR)), with the SCR calculated either through an approved full or partial internal model or through the European standard formula approach.
  2. Demonstrating an adequate System of Governance (Pillar 2): including effective risk management system and prospective risk identification through the Own Risk and Solvency Assessment (ORSA).
  3. Supervisory Review Process: the overall process conducted by the supervisory authority in reviewing insurance and reinsurance undertakings, ensuring compliance with the Directive requirements and identifying those with financial and/or organisational weaknesses susceptible to producing higher risks to policyholders.
  4. Public Disclosure and Regulatory Reporting Requirements (Pillar 3).
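
To illustrate the Pillar 1 standard formula approach: capital charges calculated per risk module are aggregated with a prescribed correlation matrix, BSCR = sqrt(Σᵢⱼ Corrᵢⱼ · SCRᵢ · SCRⱼ). The sketch below uses invented figures and correlations; the actual risk modules and matrix are prescribed in the Directive and the QIS5 technical specifications.

```python
from math import sqrt

def aggregate_scr(charges, corr):
    """Aggregate per-module SCR charges using a correlation matrix."""
    total = 0.0
    for i, scr_i in enumerate(charges):
        for j, scr_j in enumerate(charges):
            total += corr[i][j] * scr_i * scr_j
    return sqrt(total)

charges = [100.0, 80.0]      # e.g. market risk, underwriting risk (invented)
corr = [[1.00, 0.25],
        [0.25, 1.00]]        # correlations below 1 give a diversification benefit

print(aggregate_scr(charges, corr))                  # below 180 (= 100 + 80)
print(aggregate_scr(charges, [[1, 1], [1, 1]]))      # 180.0: no diversification
```

The aggregation is why the standard formula rewards diversified books: perfectly correlated risks simply add, while imperfect correlation yields a combined charge below the sum of the parts.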

Adoption procedure:

Solvency II is being created in accordance with the Lamfalussy four-level process:

  • Level 1: framework principles: this involves developing a European legislative instrument that sets out essential framework principles, including implementing powers for detailed measures at Level 2.
  • Level 2: implementing measures: this involves developing more detailed implementing measures (prepared by the Commission following advice from CEIOPS) that are needed to operationalise the Level 1 framework legislation
  • Level 3: guidance: CEIOPS works on joint interpretation recommendations, consistent guidelines and common standards. CEIOPS also conducts peer reviews and compares regulatory practice to ensure consistent implementation and application.
  • Level 4: enforcement: more vigorous enforcement action by the Commission is underpinned by enhanced cooperation between member states, regulators and the private sector.

The Level 1 Directive text was adopted by the European Parliament on 22 April 2009 and was endorsed by the Council of Ministers on 5 May 2009, thus concluding the legislative process for adoption. This was a key step in the creation of Solvency II.  The Directive includes a ‘go live’ implementation date of 1 November 2012 for the new requirements, which will replace our current regime.

Delivering Solvency II:

In June 2010 we published Delivering Solvency II giving a summary of the key policy developments and implementation activities.  The first issue includes: Completing the fifth QIS; Deciding to use an internal model; Reporting, disclosure and market discipline (Pillar 3); System of Governance; Getting involved in FSA forums; and Key contacts.

Delivering Solvency II

How to Combine Lean Six Sigma, SOA & BPM To Deliver Real Business Results

Managing Waste and Improving Efficiencies

In the current economic climate, organisations are often forced to seek innovative ways to save time and money and to reduce waste. This is typically achieved by reviewing waste management and understanding where there may be efficiency gains. Sadly, this activity is rarely given the attention, time, effort or resources necessary to deliver long-term business benefits. The end result is often poor management decisions, redundancies and “quick & dirty” cost-cutting initiatives that lead to low morale across the organisation.

What’s the Solution?


Organisations should invest time in reviewing how they can adapt their SOA/BPM strategies to include Lean Six Sigma techniques.

Lean Six Sigma (LSS) produces real results in difficult economic times by uncovering process waste, reducing non-value adding activity, and increasing productivity. The benefits are even felt in IT. According to the consulting firm McKinsey & Company, “companies can reduce application development and maintenance costs by up to 40%” and improve application development productivity “by up to 50%” by applying LSS techniques, freeing budget for needed investments.



Business process management (BPM) and service-oriented architectures (SOAs) combine with LSS to accelerate improvements and results. At the same time, they increase organizational flexibility and technology-enabled responsiveness. Many successful companies have found that the linkages are clear. Early adopters who have worked their way past cultural and organizational barriers are seeing impressive performance and financial results:




  • Improved responsiveness to market challenges and changes through aligned and significantly more flexible business and technical architectures
  • Improved ability to innovate and achieve strategic differentiation by driving change into the market and tuning processes to meet the specific needs of key market segments
  • Reduced process costs through automation and an improved ability to monitor, detect, and respond to problems by using real-time data, automated alerts, and planned escalation
  • Significantly lower technical implementation costs through shared process models and higher levels of component reuse
  • Lower analysis costs and reduced risk through process simulation capabilities and an improved ability to gain feedback and buy-in prior to coding 



The rewards can be great, especially for those who take action now.

Process improvement experts are uniquely positioned to play a key role in this transformation as they are able to leverage their business and technical knowledge in combination with the tools and techniques of Lean Six Sigma.

Examples will be provided along with recommendations for getting started.


  • Understand the basics of Lean Six Sigma and how BPM and SOA support the Lean Six Sigma methodology
  • Understand how to use data to select the right improvement project
  • Understand how Business Architects and Business Analysts can play a role in accelerating results

The presentation, available for download here, was provided by Hans Skalle, an esteemed colleague from IBM’s Global Business Integration Group.

Hans Skalle specializes in Business Process Management (BPM) solutions, business process modeling, and the development of financial models and business cases to support BPM and integration software investments. He has worked directly in the Information Technology industry since 1980 including 8 years as a Business Analyst. He is the lead author of an IBM Redpaper: Aligning Business Process Management, Service-Oriented Architecture, and Lean Six Sigma for Real Business Results. 

Hans has more than 20 years of hands-on process improvement consulting experience and is a past Master Evaluator for Minnesota’s Malcolm Baldrige-based Quality Award in the US. He has in-depth knowledge of various performance improvement methodologies including Six Sigma, Lean Sigma, ISO 9000 and other tools and techniques used to drive and sustain continuous improvement and competitive advantage through change and innovation.

Monitoring SOA end-to-end


For those organisations that have moved into live running of business applications based on SOA, one of the (many) current headaches is monitoring and managing end to end transactions. Although application, network and infrastructure monitoring tools have been around for many years, the loosely coupled nature of SOA presents some challenges in providing the transaction visibility, integrity and recovery capability that mainframe users have enjoyed since the 1970s.

Of course, this state of affairs is nothing new for an emerging technology standard such as SOA. The move from host-based computing to distributed client-server environments produced similar problems for transaction monitoring. The introduction of multi-phase commit functionality helped to provide better distributed transaction management.
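
As a reminder of how multi-phase commit works, here is a toy two-phase commit sketch (illustrative only, not any TP monitor's API): the coordinator commits only if every participant votes yes in the prepare phase; otherwise everyone rolls back.

```python
class Participant:
    """A resource (database, queue, ...) taking part in the transaction."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "pending"

    def prepare(self):
        return self.can_commit  # phase 1: vote yes/no

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"


def two_phase_commit(participants):
    """Coordinator: commit everywhere only if everyone can; else abort."""
    if all(p.prepare() for p in participants):  # phase 1: voting
        for p in participants:                  # phase 2: commit
            p.commit()
        return "committed"
    for p in participants:                      # phase 2: abort
        p.rollback()
    return "rolled_back"


ok = [Participant("db"), Participant("queue")]
print(two_phase_commit(ok))   # committed

bad = [Participant("db"), Participant("queue", can_commit=False)]
print(two_phase_commit(bad))  # rolled_back
```

The loosely coupled SOA world lacks exactly this kind of coordinator spanning the whole business process, which is why the equivalent guarantees have to be designed in, as discussed below.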

Nowadays, application architectures typically have several layers that are not tightly integrated with each other. Most modern applications are accessed via a web browser across the internet/intranet, hosted on a portal server which in turn calls web services, possibly choreographed via an ESB, orchestrated by a process engine, run on one or more application servers, using business rules from a rules engine, calling legacy applications and databases on mainframes or servers in one or more data centres. And don’t get me started on where Software as a Service or Cloud Computing fits in…

So, when a web user of the business service experiences a problem (not responding, misbehaving, returning errors), how do we identify and rectify where the problem lies?

Enterprise Monitoring Framework

It would be a fair assumption that you already have an enterprise monitoring framework providing monitoring data on networks, servers, security, database, and some applications. The main additional requirements that an SOA environment might bring are: portal, web server, processes, enterprise service bus, services and an application server. For many of these components discrete monitoring tools are either built in or available. In fact, if any layer of your current stack is not instrumented you should consider replacing it with a product that provides the relevant performance statistics.

The two parts of the stack that won’t initially be ready for monitoring, not surprisingly, are the process layer and the services themselves. The business process is typically executed either as Business Process Execution Language (BPEL) or within the black box of a specialist BPM tool. It is also possible that the process is being choreographed within the portal layer, or running as an old-fashioned program-like service in the application server. Both of these architecturally unsound (but sometimes more practical) approaches still require the same monitoring instrumentation as processes and services, as in all these cases there is limited built-in monitoring. Services, be they web services or other standard encapsulated code, are by their nature programs transforming an input into an output as specified. To find out what happens within a service, we need to ensure that the service tells us.

Therefore we are back to the old programming approach to providing insight to what is happening within code: alerts and flags. My experience is that currently you will still need to architect some code-based alerting into the processes and services. This is complicated by the need to understand the context in which the process or service will be invoked and consumed. One way this is done is by returning a status message containing the transaction ID to a monitoring console or database on completion of the task. If there has been a problem, an error code is typically returned that can be actioned by your service monitoring infrastructure. However, in the loosely coupled world of SOA, the specific service may be being used by a number of different business processes, so the response to the error condition will need to meet the particular requirements of this process.
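
The status-message pattern described above might be sketched like this (the wrapper, the field names and the in-memory "console" are all hypothetical, standing in for your service monitoring infrastructure):

```python
import uuid

def monitored(service_fn, console):
    """Wrap a service so it reports completion or errors with a transaction ID."""
    def wrapper(payload, txn_id=None):
        txn_id = txn_id or str(uuid.uuid4())
        try:
            result = service_fn(payload)
            console.append({"txn": txn_id, "status": "OK"})
            return result
        except Exception as exc:
            # The error code goes to the monitoring infrastructure; the
            # consuming process decides how to react to it.
            console.append({"txn": txn_id, "status": "ERROR", "detail": str(exc)})
            raise
    return wrapper


console = []  # stand-in for a monitoring console or database
price_service = monitored(lambda p: p["fare"] * 1.2, console)
price_service({"fare": 100}, txn_id="T-001")
print(console[-1])  # {'txn': 'T-001', 'status': 'OK'}
```

Note that the wrapper only reports; because the same service may be consumed by several processes, the response to an ERROR status belongs with each consuming process, exactly as argued above.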

To understand this context requires an overall End to End Service Management Strategy, comprising the following:

  • Business Service Monitoring Strategy (BSMS) – This defines the business metrics and events that need to be captured and measured during execution of the high-level business process or service.
  • Business Transaction Management (BTM) – In a traditional CICS-like Transaction Processing (TP) environment, each transaction is managed to provide trusted commit, rollback and recovery. In an SOA world you will need to emulate this across the whole business process. Some of the more sophisticated process engines and ESBs provide tooling that makes it reasonably easy to track each transaction, providing the audit trail to prove completion or to enable rollback. However, you will still need to define and develop the recovery procedures yourself to cover the round trip.
  • Business Activity Monitoring (BAM) – Even if the transaction completes, there could still be performance issues or delays in returning the results of each transaction. This requires more detailed activity monitoring of each component of your SOA stack to identify potential and actual bottlenecks. As you can imagine, in a large stack there could be a considerable number of components to monitor across the complete journey. Expecting your monitoring team to keep track of all of this manually is unreasonable; an automated, script-based intelligent tracking system is required to meet the service levels your support teams or outsourcer will be held to.
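The BTM idea of emulating commit/rollback across a whole business process is often implemented as a compensation pattern: each completed step registers an undo action, and a failure rolls back the completed steps in reverse order while an audit trail records what happened. The sketch below assumes that pattern; the class and method names are illustrative, not from any specific process engine.

```python
# Hedged sketch of business-transaction tracking with compensation-based
# rollback and an audit trail. Names are illustrative assumptions.

class ProcessTransaction:
    def __init__(self, txn_id: str):
        self.txn_id = txn_id
        self.audit_trail = []      # proves completion (or failure) of each step
        self._compensations = []   # recovery procedures, most recent last

    def run_step(self, name, action, compensate):
        """Run one step; on failure, roll back all completed steps."""
        try:
            action()
        except Exception:
            self.audit_trail.append((name, "FAILED"))
            self.rollback()
            raise
        self.audit_trail.append((name, "DONE"))
        self._compensations.append((name, compensate))

    def rollback(self):
        # Undo completed steps in reverse order ("covering the round trip")
        while self._compensations:
            name, compensate = self._compensations.pop()
            compensate()
            self.audit_trail.append((name, "ROLLED_BACK"))
```

For example, if a stock reservation succeeds but a later payment step fails, the reservation's compensating action releases the stock, and the audit trail shows the reserve, the failure, and the rollback.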

Having worked on a large SOA transformation programme for the past few months, I have been grappling with the challenges of delivering this. If you deliver all of the above, will you achieve end-to-end SOA monitoring? My experience is that it provides the groundwork.

Original article written by John Moe, Head of Business Integration, TORI Global


Over more than 25 years, John has managed a number of strategic transformation programmes for large companies, using ERP technologies, process improvement and change management techniques to ensure successful adoption and transition.

More recently John has consulted and mentored extensively on SOA and BPM, helping individuals, teams and organisations to understand and gain significant business benefit from these architectural approaches.

A former Gartner Consulting Director, John works closely with senior IT management to remove the perceived gap between Business and IT, using a synthesis of process, people and systems to make change stick.

Combining Lean Six Sigma and SOA/BPM

Reducing Operating Costs

Today the pressure is on the CIO and the IT organization to identify, enable, and create new business opportunities while dramatically reducing operating costs. In virtually every industry, aggressive, more technologically agile competitors are now offering new products and services faster or are executing processes more efficiently, to win customers, market share, and profit.

Thankfully, advances in technology and technical standards, specifically SOAs, are now allowing IT budgets to be reclaimed and the organization to be repositioned. New technical tools and capabilities complement traditional BPM methods and even unlock existing application functionality to greatly accelerate process improvement and innovation.

Leading firms are using BPM technologies to accomplish the following tasks:

  • Choreograph human and system interactions
  • Provide real-time visibility into key performance indicators (KPIs)
  • Manage escalations in the event of failure or missed targets
  • Provide the foundation for continued process improvement and optimization
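Two of the tasks above, real-time KPI visibility and escalation on missed targets, can be combined into one small check loop. This is a hedged sketch under assumed names (`check_kpis` and a pluggable `escalate` callback); it is not a real BPM product's API.

```python
# Illustrative sketch: compare observed KPI values against targets and
# escalate any misses. The escalate callback stands in for whatever alerting
# channel (dashboard, pager, workflow) the BPM tooling would provide.

def check_kpis(metrics: dict, targets: dict, escalate) -> list:
    """Return the names of KPIs that missed their targets,
    invoking escalate() once per breach."""
    breaches = []
    for name, target in targets.items():
        observed = metrics.get(name)
        if observed is None or observed < target:
            breaches.append(name)
            escalate(f"KPI '{name}' missed target {target} (got {observed})")
    return breaches
```

Run periodically against live process metrics, a loop like this gives the real-time visibility and planned escalation described above without the monitoring team tracking anything by hand.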

However, on their own, SOA and BPM do not address waste management or efficiency considerations. This is where Lean Six Sigma can help.

Step Back – Definitions & Terms

Lean Six Sigma (LSS) produces real results in difficult economic times by uncovering process waste, reducing non-value-adding activity, and increasing productivity. The benefits are even felt in IT: according to the consulting firm McKinsey & Company, companies can reduce application development and maintenance costs “by up to 40%” and improve application development productivity “by up to 50%” by applying LSS techniques, freeing budget for needed investments.

Business process management (BPM) and service-oriented architectures (SOAs) combine with LSS to accelerate improvements and results. At the same time, they increase organizational flexibility and technology-enabled responsiveness.

Many successful companies have found that the linkages are clear. Early adopters who have worked their way past cultural and organizational barriers are seeing impressive performance and financial results: 

  • Improved responsiveness to market challenges and changes through aligned and tuned processes that meet the specific needs of key market segments
  • Improved ability to innovate and achieve strategic differentiation by driving change into the market
  • Reduced process costs through automation and an improved ability to monitor, detect, and respond to problems by using real-time data, automated alerts, and planned escalation
  • Significantly lower technical implementation costs through shared process models and higher levels of component reuse
  • Lower analysis costs and reduced risk through process simulation capabilities and an improved ability to gain feedback and buy-in prior to coding


Lean Six Sigma (LSS) and business process management have much in common. Both methodologies use iterative improvement and design techniques to deliver financial and performance benefits through better managed and optimized processes.  

By combining key concepts from LSS with the capabilities of BPM (including process modeling and analysis, automation, and executive dashboards that deliver real-time performance metrics to process consumers), a company can ensure that its people are focused on the most meaningful value-added work. SOAs add increased flexibility so that processes can be quickly assembled from reusable Lego-type building blocks of technical functionality.

Companies that successfully bring together LSS, BPM, and SOA initiatives will realize a competitive advantage.

To fully understand the linkages between BPM, SOA, and LSS, and fully realize the benefits of these linkages, it is important to establish definitions and list key concepts for each initiative.


Process improvement experts are uniquely positioned to play a key role in this transformation as they are able to leverage their business and technical knowledge in combination with the tools and techniques of Lean Six Sigma.

Takeaways

  • Understand the basics of Lean Six Sigma and how BPM and SOA support the Lean Six Sigma methodology
  • Understand how to use data to select the right improvement project
  • Understand how Business Analysts can play a role in accelerating results
  • Consider embedding Lean Six Sigma methodology into your SOA/BPM strategy as part of your overall Enterprise Architecture

The rewards can be great, especially for those who take action now.

The Human Side of SOA – Part 2

All organizations have both short term and long-term goals. For most businesses, IT is no longer viewed as a strategic differentiator or even as a strategic enabler. It has become “part of the scenery” at best and, at worst, a hindrance.

The powerful combination of SOA and BPM has the potential to return IT to its former position as an agent for strategic advantage. The optimal value of SOA then is to support initiatives that are aligned with corporate strategy, especially those that focus on a move to exploit BPM. 

To this end, many organizations are following the advice of industry analysts to create an SOA Competency Center (SOA CC) that acts as a shared services group to oversee the transition to an Integrated Services Environment (ISE). 

The SOA CC ensures not only that an organization adopts the principles of SOA, but also that SOA is adopted correctly and with the least disruption to the IT organization while supporting business users. This team must comprise members from across the organization, from both IT and business.

Traditionally, application deployments were stove-piped or functionally oriented in design. A department had a business need for an application with a specific set of functionality; the IT organization would build or buy the application, then install, configure and maintain it. This process repeated across multiple applications and departments, and each time the data and business logic were available only to that application and its user community.

For example, the human resources department might use an HR vendor application, the sales and marketing department might use a CRM vendor system and the customer support group might use its own custom application. 

The move to an ISE is, at least in part, an evolution from a technology focus to a business focus, from a functional or departmental orientation to a process-orientation.

Integration moves away from the technology focus of subroutines, methods and components to business components that are focused on the discrete, granular events that are part of the business process.

By moving to ever higher levels of abstraction, IT’s deliverables achieve closer alignment with the artifacts of business modeling and begin to close the gulf of understanding between these two organizations that, in truth, rely on each other for their continued existence.

The essence of the ISE and its primary consequence is increased business agility which, like virtue, can be said to be its own reward.

Projects must now be approached with an eye to the future and, more importantly, the re-use of the services created by that application. The company, as well as the project, needs to be aligned to corporate directions. 

The move to an ISE is a journey and, like any worthwhile journey, is not completed in a day and is not without costs.

The journey to an ISE rewards bold strokes, requires executive commitment and demands concerted effort in pursuit of a vision.

Ultimately, however, the success or failure of the transition depends on the people who undertake the journey, and it is the human side of SOA that will make or break the project.