Great Thinking, Great Strategy

Introduction

In many organisations, members of the C-suite and senior management are over-confident about their strategies, firmly believing that they are doing the right thing.

Yet those responsible for strategy frequently fail, and in retrospect it is sometimes clear why. Unfortunately, these same organisations do not actively look back and learn the lessons. Instead, they continue implementing “bad” strategies and blame any failures on circumstances or factors beyond their control.

This video by Mark Chussil demonstrates techniques for improving the way we think, and thereby the way we develop (and then successfully deliver) strategy. If you’re interested in strategy and delivering real change, please watch the video and read the “Citations” section below for further information.

Why Strategies Fail

Watch Mark Chussil’s speech, “Why Strategies Fail: Human Strategists and Biased Tools,” delivered December 9, 2011, to the Chief Strategy Officers Summit in New York City, sponsored by The IE Group. Please see “Citations” below for material not visible on the slides.

Citations

The slides in the “Why Strategies Fail” speech contain citations of others’ work. Because the slides are not visible in the video, those citations are reproduced here.

  • The 90% confidence quiz was inspired by a similar exercise in Decision Traps, an excellent book by Jay Russo and Paul Schoemaker.
  • Quotations from Steve Burd of Safeway and Craig Herkert of Supervalu came from an article in The Wall Street Journal of October 16, 2009.
  • Performance data from Safeway and Supervalu came from Fortune.com.
  • The quotation at the end — “Those are my principles, and if you don’t like them…well, I have others” — is from Groucho Marx.

HP Discovery and Dependency Mapping

Introduction

Understanding the capability of an organisation is becoming fundamental to any transformation/change programme. Typically, this capability is captured through Business Process mapping and modelling techniques. However, as technology advances, more and more vendors are providing automated solutions and tools to help “discover” assets across the enterprise and interpret the “dependencies” between Business services and the Technology typically delivered by IT departments.

This article introduces interesting advances being made by HP in the area of Application Discovery and Dependency Mapping (ADDM).

I’d encourage CTOs, Enterprise Architects and IT Directors to continue reading and to embrace these new advances to help them better understand how to align Business and Technology in their organisation.

Advanced visibility into services and infrastructure

HP Discovery and Dependency Mapping Advanced Edition (DDMA) software automates the discovery and dependency mapping of services, applications, and underlying infrastructure. Mapping helps you perform failure impact analyses that minimize downtime. Improved visibility into IT helps you transform to a modern, flexible, converged infrastructure that reduces operational expense, defers capital expense, and improves business uptime. It is often claimed that 80% of all service disruptions are caused by faulty changes; DDMA provides the visibility required to make changes more safely and effectively.
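DDMA’s output is essentially a dependency graph: nodes are discovered services, applications and infrastructure, and edges record who depends on whom. To make the failure-impact idea concrete, here is a minimal Python sketch (not HP’s implementation or API; the component names are invented) showing how such a map supports impact analysis:

    # Illustrative sketch only: a toy dependency map and failure impact analysis.
    # This is not HP DDMA's API; component names and structure are hypothetical.
    from collections import defaultdict, deque

    dependents = defaultdict(list)   # maps a component to the services that depend on it

    def add_dependency(service, depends_on):
        """Record that `service` depends on `depends_on`."""
        dependents[depends_on].append(service)

    def impact_of_failure(component):
        """Return every service transitively affected if `component` fails (BFS)."""
        affected, queue = set(), deque([component])
        while queue:
            node = queue.popleft()
            for svc in dependents[node]:
                if svc not in affected:
                    affected.add(svc)
                    queue.append(svc)
        return affected

    # A hypothetical discovered topology.
    add_dependency("checkout-app", "payment-db")
    add_dependency("reporting-app", "payment-db")
    add_dependency("payment-db", "san-array-01")

    print(impact_of_failure("san-array-01"))
    # -> {'payment-db', 'checkout-app', 'reporting-app'}

Run before a planned change, this kind of query shows exactly which Business services a “faulty change” to one component would take down.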

Key benefits

  • Increased productivity by automating the discovery of infrastructure and software
  • Lowered mean time to resolution for critical events by understanding service decomposition
  • Increased business service availability by intelligently choosing issues to address
  • Improved visibility into existing legacy IT infrastructure for data center transformation
  • Better planning for modernization of application portfolios and IT infrastructure

Further Reading

If your organisation is looking to map IT dependencies to reduce downtime and expense, and to plan for change, you should consider HP’s DDMA solution. Below are a white paper and a rich-media demonstration.

Read the latest EMA Radar Report ranking HP Discovery and Dependency Mapping Advanced Edition (DDMA) software as the “best of show” product.

For a demonstration of this solution, click here. Note that this is a Silverlight demonstration and works best in Internet Explorer v8+.

The Seven Layers of the OSI Model

Introduction

As interest in, and take-up of, Cloud Computing and XaaS-based (PaaS, IaaS, DaaS, SaaS, etc.) utility computing solutions increases, CIOs, CTOs, Enterprise Architects and IT Directors find themselves under growing pressure to understand the impact of new technologies that could improve an organisation’s agility and competitiveness.

However, when embarking on transformation/change initiatives, many organisations stop at Business Process Re-engineering, Business Process Management and Business Capability Mapping activities in their effort to understand how to re-align the Business and Technology functions.

To properly comprehend an organisation, all aspects of “people, process and technology” need to be understood.

At this stage, when considering “processes and technology”, it is worth taking a step back and reminding ourselves of the “old school” Open Systems Interconnection model (OSI model), which describes the functions of a communications system in terms of abstraction layers. It is a model all Architects should be mindful of when looking to understand an organisation holistically, as it holds the key to properly capturing the information that underpins the IT-related considerations all IT departments must manage.

The OSI Model (a gentle reminder)

The Open Systems Interconnection model (OSI model) is a product of the Open Systems Interconnection effort at the International Organization for Standardization. It prescribes a way of characterizing and standardizing the functions of a communications system in terms of abstraction layers: similar communication functions are grouped into logical layers.

The OSI, or Open System Interconnection, model defines a networking framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding down to the bottom layer, over the channel to the next station, and back up the hierarchy.

Application (Layer 7)

This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail, and other network software services. Telnet and FTP are applications that exist entirely at the application layer. Tiered application architectures are part of this layer.

Presentation (Layer 6)

This layer provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Session (Layer 5)

This layer establishes, manages and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.

Transport (Layer 4)

This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.

Network (Layer 3)

This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as well as addressing, internetworking, error handling, congestion control and packet sequencing.

Data Link (Layer 2)

At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management and handles errors in the physical layer, flow control and frame synchronization. The data link layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it. The LLC sublayer controls frame synchronization, flow control and error checking.

Physical (Layer 1)

This layer conveys the bit stream – electrical impulse, light or radio signal – through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards and physical aspects. Fast Ethernet, RS232, and ATM are protocols with physical layer components.
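In the TCP/IP world that most organisations actually run, layers 1-3 live in hardware and the operating system, layer 4 is typically TCP, and layers 5-7 collapse into the application protocol. The following minimal Python sketch (the host, port and request are placeholders, chosen for illustration) annotates a simple TCP exchange with the OSI layers involved:

    # Minimal sketch mapping a simple TCP exchange onto the OSI layers.
    # The host, port and request are placeholders; point it at a server you control.
    import socket

    HOST, PORT = "example.com", 80   # assumption: any reachable HTTP server

    # Layers 1-3 (physical, data link, network) are handled by the NIC, its
    # driver and the OS IP stack; application code never touches them directly.
    # SOCK_STREAM/TCP is the classic layer-4 (transport) protocol, providing
    # the end-to-end error recovery and flow control described above.
    with socket.create_connection((HOST, PORT)) as s:   # layer 4: transport
        # In practice, layers 5-7 are folded into the application protocol.
        # Here HTTP plays session (the request/response exchange), presentation
        # (ASCII text encoding) and application (resource transfer) at once.
        s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = s.recv(4096)        # the transport layer delivers a byte stream
        print(reply.decode("ascii", errors="replace")[:80])

The point for Architects: each layer is a separate concern with separate failure modes, owners and controls, which is exactly why the model remains useful when mapping Business services onto Technology.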

Mintzberg’s Ten Management Roles

During my time at Aer Lingus, Organisational Transformation has been a key topic. I’ve learnt that to make real change happen, “people change” is where it is at. Until organisations go back to basics and focus on, and invest in, their core asset (i.e. their people), “real change” will never be achieved or sustained.

My interest led me to read about Henry Mintzberg’s management roles and framework, which resonates strongly with my recent experiences.

To learn more, read on …

Mintzberg’s Ten Management Roles.


An ‘open source’ model airline – Ryanair

I haven’t flown with Ryanair before, but I have several friends and acquaintances who are regularly impressed with its low-budget service, which survived the bad weather and snow of December 2010/January 2011.

As I learn more about the Airline business and seek new, innovative ways in which an Enterprise Architecture approach could serve Aer Lingus well, I came across an interesting article by Alan Williamson that describes an “open-source” airline model used by Ryanair.

Interested? Click here to learn more.

Design your IT architecture around key business questions

As I continue in my role as an Enterprise Architect at Aer Lingus, I realise that years of doing Enterprise Architecture, Transformation Programmes and IT Change Management have really been about ONE thing: understanding the business – its strategy, its rules, its processes, its mission – and combining these with IT to truly differentiate an organisation from the competition.

While I think about this … A relevant article springs to mind. Interested? Read on …

Design your IT architecture around key business questions.

Document-centric BPM and the emergence of case management, Part II


By Alan Earls, 12/16/2010

Editor’s Note: In Part I of this special report, technology journalist Alan Earls examines whether document-centric BPM is morphing into case management. Here, in Part II, he describes the power – and the complexity – of case management BPM.

Case management BPM – also sometimes known as dynamic BPM or, in IBM parlance, advanced case management – has been getting lots of attention lately.

With its roots in document-centric BPM, case management BPM can be a natural evolutionary direction for some organizations. But with its much greater complexity and higher ambitions in terms of what it seeks to accomplish, it’s not for everyone.

One key driver for developing and adopting case management BPM: extremely high payroll costs for knowledge workers in developed countries, according to IDC analyst Maureen Fleming. Knowledge workers tend to work on many projects, with the concept of the “case” as an underlying core principle. As a result, organizations interested in understanding the processes tied to this often highly unstructured work need a better grasp of case management in order to make their knowledge-centric work more efficient and systematic.

“Case management BPM is expensive from the larger vendors and relatively immature for lower-cost BPM suite vendors,” Fleming warns. Furthermore, there are often skills gaps on the professional services side that present their own set of challenges. “Depending on who you talk to, case management is either huge or just a subset of the BPM software market. In other words, there is a lot of variation in how vendors view this as an opportunity,” she notes.

Customers, of course, view it more in terms of the problem at hand. “Enterprises that view case management as a content-centric problem look for different types of solutions than companies that view this as a process problem,” Fleming says.

In her view, case management is inherently an integration – or a mashup – of multiple content and data types driven by requirements. While that concept is straightforward, getting there isn’t. “With a BPM suite, there is often a discovery phase that helps the process actors articulate linear workflow, but I’m not seeing the same level of sophistication for case management, which is highly process-centric but only partially linear,” she adds.

As with any application, Fleming notes, there are multiple potential pitfalls and risks. One is the possibility of adopting a system that is relatively inflexible, making it difficult to adjust to meet evolving needs. Another is underestimating integration requirements.

Still, Fleming says that IDC expects decision-centric BPM, including case management, to grow faster over the next few years than classical BPMS-based process applications, though from a smaller base. “In general, we believe decision-centric automation will grow faster than most types of applications and middleware over the next five years,” she says.

Automating the right tasks in the right way

ebizQ contributor James Taylor, a specialist in decision management and related areas, also focuses on the complexity of this flavor of BPM. “There was a convergence around the idea that some processes had complex data, multiple documents and lots of people involved,” says Taylor, who is CEO and principal consultant at Decision Management Solutions. “Some approaches are very technology-centric, focused on integration, and some were more focused on people.”

In other words, in its evolution, BPM had already shown that many processes had work activities that could be highly automated. However, those processes still sometimes had exceptions as well as tasks that did not lend themselves to an automated approach.

Now, advanced case management solutions (the term that Taylor prefers to use) can provide automation that prompts for human intervention when needed but then continues to automate for better efficiency. “For instance, you now see insurance companies automating the claims process and banks automating loan origination in a way that integrates straight through processing and complex case/exception management,” he explains.

“These kinds of systems monitor the process, apply rules and predictive analytics to make decisions and know when to escalate an issue for intervention,” he says. “The system does what it is good at and lets people do the things, like talking to other people, that they are good at in a seamless whole.”
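A minimal Python sketch of that pattern might look like the following; the fields, thresholds and rules are invented for illustration and stand in for the rules engines and predictive models Taylor describes:

    # Toy sketch of straight-through processing with case/exception escalation.
    # Fields, thresholds and rules are hypothetical, for illustration only.
    AUTO_APPROVE_LIMIT = 1_000   # assumed cut-off for automatic approval

    def decide_claim(claim):
        """Apply simple rules; anything the rules cannot settle becomes a human case."""
        if claim.get("fraud_score", 0.0) > 0.8:
            return "escalate: possible fraud, route to an investigator"
        if not claim.get("policy_active", False):
            return "reject: policy not active"
        if claim["amount"] <= AUTO_APPROVE_LIMIT:
            return "approve: within straight-through limit"
        return "escalate: open a case for manual review"

    print(decide_claim({"amount": 250, "policy_active": True}))
    # -> approve: within straight-through limit
    print(decide_claim({"amount": 9_000, "policy_active": True, "fraud_score": 0.9}))
    # -> escalate: possible fraud, route to an investigator

The system handles the routine volume automatically and hands only the genuine exceptions to people, which is the “seamless whole” Taylor describes.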


Document-centric BPM and the emergence of case management, Part I


By Alan Earls, 12/10/2010

Some application categories are short-lived. Capabilities change, business conditions morph and buzzwords fade away. But in the instance of what’s currently called (at least by some) document-centric BPM, there’s a remarkable degree of consistency between the old and the new.

According to Sandy Kemsley, an independent BPM consultant based in Toronto, its roots go back some 30 years.

In the early 1980s, new document-imaging and workflow approaches were emerging. “BPM was always part of document management, but then it started to become a separate discipline in the late 1990s,” Kemsley says.

Big analyst firms came in with new theories and terminology proliferated. Now there was human-centric BPM, integration-centric BPM and, of course, document-centric BPM.

Where does that leave decision-makers who are considering a document-centric solution – or living with one they already own? Kemsley says that despite being in a somewhat “muddled” state, document-centric functionality is a well-established winner. When it comes to systems that handle things like transactional documents, the value is clear: the ROI comes from reduced need for data re-entry and reduced head count. “That has been the case for the past 30 years and nothing has changed that,” she says.

What has changed is the emergence of additional flavors of BPM, especially case-management-centric BPM (sometimes known as dynamic BPM), which offers potential efficiencies for workers handling complex semi-structured or unstructured processes.

“A current challenge is to know your problem and then figure out how to map requirements into structured or unstructured workflow—and if it falls in between, you may need to do both,” says Kemsley. For example, she notes, an insurance claim may start off very structured and then quickly get into unstructured case management territory, where processes and solutions have to be invented or applied uniquely. It is still the same “case,” but having the right tools makes for better management.

Case management can also be thought of as a form of customer service, she says: “One person can now see everything that is being done and what stage everything is at.”

Document-centric BPM has revolutionized processes in many organizations and case management offers similar potential. “This kind of system means people don’t have to be at their desk, they can start to work at home or wherever,” Kemsley says.

Adds Gartner analyst Toby Bell: “If you are doing document-centric today, you should look to leverage that toward a case-management, human-driven approach in the future.”

‘People-driven’ BPM

When explaining what case-management-centric BPM is and why organizations need it, Kemsley says simply: “You want to get away from people having notes about clients stuck to their monitors.” Elaborating on the difference between the two BPM flavors, Kemsley notes that document-centric BPM has focused mostly on enhancing repeatable, process-oriented activities, traditionally built around paper documents. In contrast, its offspring – case-management or “dynamic” BPM – links documents, people and all kinds of social media to enhance the messy process of addressing an insurance claim, a government benefits appeal or a legal case. “In situations of that type, when people are able to more effectively share and track information, it benefits the individual and the company gets benefits, too, so they can incentivize the individuals to participate,” she says.

A 2009 Forrester Wave report (“Dynamic Case Management – An Old Idea Catches New Fire”) contends that demand for case-based or “people-driven” BPM products is an outgrowth of the service sector’s adoption of many of the Lean and Six Sigma approaches long used in industry.

The result has been the gradual elimination of many tasks through automation, outsourcing or process improvements. Analyst Craig LeClair, who wrote the Forrester Wave report, uses an insurance industry claim as an example. “Scanning in claims documents and entering data into a claims system is where traditional [document-centric] BPM would coordinate activity among the submission, underwriting, policy creation, claims and customer service,” LeClair says. BPM would also traditionally extract metadata from core processes and make it available to better serve customers across all lines, he notes.

Exceptions to the rule

What’s left, increasingly, is “exception management” – handling the more complex tasks that can’t be fit into a preformed solution. In other words: case management. “Today’s knowledge workers have a greater variety of tasks to deal with and they aren’t locked down in one place, like the production workers traditionally served by document-centric BPM,” LeClair says. The tasks left over are more diverse and require a broader level of information support and even analytical support.

The new processes might look like “snippets of structured functionality” combined with social technology for access to expertise. “Image capture and document management are still very important, but case-management capability is where the big, high-value developments are,” says LeClair.

If you’re considering a case-based BPM system, LeClair recommends thinking about it from a business-process vantage point. One key distinction between the document-centric BPM system you might currently have and a case-based system is that all “exceptions” are carefully scripted in a document-based system, he says. In contrast, in the dynamic or case-based world, the business outcome becomes the driver. For that reason, “companies need to involve their business process analysts early,” LeClair says. “They should try to align desired business outcomes in their existing BPM system with the strategic goals of the organization, and then use that as the basis for moving forward to dynamic, case-based BPM.”

In his report, LeClair notes that the rapid emergence of dynamic BPM may spur acquisitions among industry players and could bring in others, such as Oracle, which has relevant ECM and BPM assets. And, he warns, BPM pros need to keep in mind that case management should be considered a “lean approach for automating processes,” but with much more control given to the “worker.” Indeed, he urges a “design for people” approach that incorporates Web 2.0 techniques.

And, he recommends: “Reengineer the process first, then pick the tool. Focusing on the tool too early is a huge pitfall.”


Taking an Enterprise Architecture approach to BPM


By Peter Schooff, Contributing Editor, ebizQ, 12/17/2010

In this Q & A, Peter Schooff speaks with ebizQ contributor Dr. Alexander Samarin about corporate BPM strategies. Samarin, chief enterprise architect of the African Development Bank, is author of “Improving Enterprise Business Process Management Systems” (Trafford Publishing, October 2009).

PS: Why should a company consider BPM?

AS: For me, BPM is three complementary things. First, there’s the BPM discipline, how to use processes to better manage an enterprise. Second, [there’s] BPM software [and] technology from many, many vendors. And third, [there’s the] architecture of a BPM enterprise system that is built to manage, to govern, execution of the processes within enterprises.

Together, they’re very powerful tools and [a] primary force for making coordination between systems, employees, customers and partners understandable, explicit and executable. With such coordination, it becomes possible to monitor the dynamics of various indicators, values and risks, and it helps people make better decisions. In addition, it helps with evaluating the feasibility and impact of future changes.

PS: What are some problems that companies face when they go to BPM?

AS: Typical problems at first [involve the question] “What is it?” A lot of effort is spent explaining BPM. Usually within a company, there is a mixture of opinions from the Internet. So [building] a commonly agreed-upon understanding of BPM is a must.

Second: “What does it do for me?” [It’s necessary to explain] to everyone how BPM will address his or her concerns and how his or her current working practices will change for the better. Of course, it’s not necessary to talk to each of the thousands of people within a company; instead, be prepared for talks with about 20 groups of people.

[A third question involves project size.] BPM projects usually start small, without a bigger view or understanding of how to grow. [I] recommend considering BPM as an enterprise-wide initiative from the very beginning.

The last typical question is: “How do we change it?” By definition, any BPM solution will be changed…So [my recommended] approach is to architect for flexibility, because many, many changes will be carried out.

PS: Is an enterprise architecture approach important for companies that are considering BPM?

AS: Yes, both enterprise architecture and BPM are enterprise-wide activities or programs, and you don’t want them to collide. At first look, they’re very different. Enterprise architecture is about [moving from] one state to another as a transition, and the typical lifespan is [measured in] years. BPM, on the contrary, is about continual improvement, and a typical time span is weeks or months.

But the two may be very complementary and enrich one another. For example, enterprise architecture does a great job of describing the enterprise genotype, or full nomenclature, of enterprise artifacts. And there are many techniques to evaluate the enterprise phenotype, a set of observable characteristics such as performance. But enterprise architecture cannot answer how the enterprise genotype defines the enterprise phenotype. BPM, for its part, is very strong with executable models of the relationships between artifacts. And in this way, it can form some kind of a bridge between enterprise genotype and enterprise phenotype.

Actually, your enterprise architecture team should be the best friend of your BPM initiative and vice-versa.

PS: Essentially we’re talking about processes that are human processes. So what exactly is the social aspect of BPM?

AS: English is not my mother language, so I [looked] at the different meanings of this word “social” and took some of those meanings.

First, [it is] affordable to everyone. BPM and new tools are now more affordable for small and medium enterprises and governments. Second, [it involves] public or common ownership. Right now, this is mainly the organization of work and the provision of convenient access to different artifacts, which is common practice in modern BPM tools.

And the third meaning, which is the simplest, [involves] human-initiated interdependent relationships with others…Of course, that last meaning is the most interesting for clients of BPM.

PS: You touched on this in the first question, but to drill down a little bit more, exactly how does BPM help companies solve their problems?

AS: BPM mainly helps companies by managing a common understanding of work. One aspect [is] coordination. Coordination is externalized from people’s heads, applications, quality documents and habits into an understandable and explicit form. One of these forms is the well-known BPMN [Business Process Modeling Notation] diagram.

Then this explicit coordination is used throughout the whole improvement lifecycle: plan or model, implement, do or execute, check or control, and act or optimize. In many enterprises, those phases are carried out by different roles, and often different languages are used…And each time information moves from one role to another, there are translation errors; explicit coordination in BPM removes the source of those errors.

Then BPM helps people to express coordination the same way, so that different people within an enterprise are solving the same problems in very similar ways, thus improving reusability.

And finally, BPM makes your enterprise information more flexible, because executable coordination serves as a way to assemble bigger services from smaller ones. So BPM provides some kind of bridge, glue or guidance between strategy and execution.
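To make that notion of explicit, executable coordination concrete, here is a minimal Python sketch; the process, steps and roles are invented for illustration and are far simpler than a real BPMN model:

    # Toy illustration of explicit coordination: the process is data that people
    # can read and software can execute. Steps and roles are hypothetical.
    ONBOARDING = [
        ("collect_documents", "client services"),
        ("verify_identity",   "compliance"),
        ("open_account",      "operations"),
        ("notify_customer",   "client services"),
    ]

    def run(process, do_step):
        """Execute the steps in order; one shared definition drives every run."""
        for task, role in process:
            do_step(task, role)

    run(ONBOARDING, lambda task, role: print(f"{role}: {task}"))

Because the same definition is used to discuss, execute and optimize the process, nothing is lost in translation between roles.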

PS: What do you see ahead for BPM?

AS: I can see that it should [include] better understanding among BPM experts, more practical standards, easily comparable business cases, more commonly agreed-upon knowledge, and better interchange between tools from different vendors. We know that BPM is a vendor-driven market right now; I see it [becoming] more customer-driven in the years ahead.

This Q & A was excerpted from a recent ebizQ podcast. It has been edited for editorial style, clarity and length.


Solvency II

REF: http://www.fsa.gov.uk/pages/About/What/International/solvency/index.shtml

Solvency II:

Solvency II is a fundamental review of the capital adequacy regime for the European insurance industry. It aims to establish a revised set of EU-wide capital requirements and risk management standards that will replace the current Solvency requirements.

News:

The European Commission has published the technical specifications for the fifth quantitative impact study (QIS5); see the FSA’s QIS5 page for further information.

The Insurance Sector Newsletters contain useful information for firms about the FSA’s approach to moving from ICAS to Solvency II.

The FSA publishes Delivering Solvency II – an update that summarises the key policy developments and implementation activities.

The Solvency II Directive is due to be implemented on 1 November 2012. Any changes to the go-live date will be formally communicated by the European Commission, at which point the FSA will consider and communicate the potential impact on planning and preparations for itself and firms.

Application:

The Solvency II Directive will apply to all insurance and reinsurance firms with gross premium income exceeding €5 million or gross technical provisions in excess of €25 million (please see Article 4 of the Directive for full details).

In a nutshell:

  • Solvency II will set out new, strengthened EU-wide requirements on capital adequacy and risk management for insurers with the aim of increasing policyholder protection; and
  • the strengthened regime should reduce the possibility of consumer loss or market disruption in insurance.

Central elements:

Central elements of the Solvency II regime include:

  1. Demonstrating adequate Financial Resources (Pillar 1): applies to all firms and considers the key quantitative requirements, including own funds, technical provisions and calculating the Solvency II capital requirements (the Solvency Capital Requirement – SCR – and the Minimum Capital Requirement – MCR), with the SCR calculated either through an approved full or partial internal model or through the European standard formula approach (a simplified aggregation sketch follows this list).
  2. Demonstrating an adequate System of Governance (Pillar 2): including an effective risk management system and prospective risk identification through the Own Risk and Solvency Assessment (ORSA).
  3. Supervisory Review Process: the overall process conducted by the supervisory authority in reviewing insurance and reinsurance undertakings, ensuring compliance with the Directive requirements and identifying those with financial and/or organisational weaknesses susceptible to producing higher risks to policyholders.
  4. Public Disclosure and Regulatory Reporting Requirements (Pillar 3).
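To illustrate the Pillar 1 standard formula approach: the capital charges of the individual risk modules are aggregated through a prescribed correlation matrix rather than simply summed, which is what produces a diversification benefit. The following Python sketch uses invented module charges and a simplified correlation matrix; the real modules, matrices and adjustments are set out in the QIS5 technical specifications:

    # Simplified sketch of standard-formula aggregation:
    #   BSCR = sqrt( sum over i,j of Corr[i][j] * SCR_i * SCR_j )
    # Module charges and correlations below are hypothetical; the actual
    # values are prescribed in the QIS5 technical specifications.
    from math import sqrt

    modules = ["market", "counterparty", "life"]
    scr = {"market": 120.0, "counterparty": 30.0, "life": 80.0}   # assumed charges, EUR m
    corr = {   # simplified symmetric correlations between distinct modules
        ("market", "counterparty"): 0.25,
        ("market", "life"): 0.25,
        ("counterparty", "life"): 0.25,
    }

    def rho(i, j):
        if i == j:
            return 1.0
        return corr.get((i, j), corr.get((j, i)))

    bscr = sqrt(sum(rho(i, j) * scr[i] * scr[j] for i in modules for j in modules))
    print(f"Basic SCR ~ {bscr:.1f}, versus a simple sum of {sum(scr.values()):.1f}")

Because the correlations are below 1, the aggregated requirement comes out lower than the simple sum of the module charges, rewarding diversification across risks.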

Adoption procedure:

Solvency II is being created in accordance with the Lamfalussy four-level process:

  • Level 1: framework principles: this involves developing a European legislative instrument that sets out essential framework principles, including implementing powers for detailed measures at Level 2.
  • Level 2: implementing measures: this involves developing more detailed implementing measures (prepared by the Commission following advice from CEIOPS) that are needed to operationalise the Level 1 framework legislation.
  • Level 3: guidance: CEIOPS works on joint interpretation recommendations, consistent guidelines and common standards. CEIOPS also conducts peer reviews and compares regulatory practice to ensure consistent implementation and application.
  • Level 4: enforcement: more vigorous enforcement action by the Commission is underpinned by enhanced cooperation between member states, regulators and the private sector.

The Level 1 Directive text was adopted by the European Parliament on 22 April 2009 and was endorsed by the Council of Ministers on 5 May 2009, thus concluding the legislative process for adoption. This was a key step in the creation of Solvency II.  The Directive includes a ‘go live’ implementation date of 1 November 2012 for the new requirements, which will replace our current regime.

Delivering Solvency II:

In June 2010 we published Delivering Solvency II giving a summary of the key policy developments and implementation activities.  The first issue includes: Completing the fifth QIS; Deciding to use an internal model; Reporting, disclosure and market discipline (Pillar 3); System of Governance; Getting involved in FSA forums; and Key contacts.

Delivering Solvency II