What is IT Governance?

For corporates considering a new IT governance programme, the first requirement is to agree upon what it means, what it involves and who is responsible for its implementation and oversight. This includes ensuring that external IT services also follow accepted IT governance guidelines, so that best practice is maintained throughout the IT environment, whether in-house or outsourced.

Inadequate IT governance is not the exception, especially in mid-sized enterprises; perhaps more surprisingly, it is also common in many large enterprises.

One of the root causes of these challenges is that the people responsible for the success of IT initiatives often use the term “governance” loosely, without sharing a common understanding of it and without fully comprehending what it actually involves. In these cases, the first imperative in implementing a coherent corporate governance environment is to define what the term “governance” actually means.

The next step is to identify the key distinctions between good and poor governance and, having done so, to determine the path from poor to good governance over a realistic, pre-determined period of time.

What is governance?

A good place to start in our quest for a clear definition is the World Bank, which has described a common understanding of governance, defining it as ‘the rule of the rulers, typically within a given set of rules’.

Or more simply put, governance is the process by which authority is conferred on rulers, by which they make the rules and by which those rules are enforced and modified.

How does the World Bank concept of governance translate to enterprises?

Corporate governance (the rules) refers to the formation and steering of the rules and processes of an organisation by which businesses are operated, regulated and controlled for the effective achievement of corporate goals. Corporate governance structures (the rulers) are those bodies or councils specifically concerned with governance, while the Board of Directors is ultimately accountable for the application of good governance. Typically, the board carries out its governance duties via committees that oversee critical areas such as audit, compensation, acquisitions and so on.

To complicate matters, different countries use different corporate governance guidelines and regulations. One of the most commonly referenced is the OECD Principles of Corporate Governance. Another is the Sarbanes-Oxley Act, a United States federal law on accounting reform. There are also industry-specific regulations such as Basel III for banking, HIPAA for health insurance, and so on.

The importance of IT governance

Since organisations are increasingly dependent on IT for their operations and profitability, the need for better accountability of technology-related decisions has become a key part of corporate governance, making IT governance a highly strategic subset of the overall enterprise governance.

In the case of IT, governance – or the rules – links IT strategies to the overall enterprise goals and strategies. It also institutionalises best practices for planning, acquiring, implementing and monitoring IT performance; it manages the risks that IT poses to business and it ensures accountability of IT costs.

The IT governance structure

An organisation’s IT strategy committee, or its equivalent, is typically composed of board and non-board members who together form the governance structure that oversees IT governance. They are the rulers, and they may in turn have sub-committees or groups responsible for specific areas of IT governance.

Over the years, multiple industry-standard IT governance and control frameworks have evolved and are available for enterprises to adopt. The most commonly referenced are ISO/IEC 38500:2008 (Corporate governance of information technology) and the Control Objectives for Information and Related Technology (COBIT).

In addition to these, there are many other related frameworks and methodologies that help enterprises address specific aspects of their IT governance. Fortunately, the Calder-Moir IT Governance Framework has drawn upon and integrated the wide range of management frameworks, standards and methodologies that exist today – some of which overlap and compete – into a conceptual approach that provides an effective visualisation of IT governance.

Where does IT outsourcing governance fit?

Most enterprises today outsource at least some, and in many cases all, of their IT or IT-enabled business services to third parties. Because IT is now such a prominent driver of business success and efficiency, it is vitally important for organisations to accept that while they may outsource their IT service delivery, they remain accountable for that delivery to the business. Organisations need to know that their third-party service providers are following the accepted principles of good governance, to ensure they are in a position to manage the risks effectively and continue to deliver value to their corporate customers.

This specific focus, called ‘outsourcing governance’, is essentially a sub-set of IT governance, and its primary concern is regulating the interface between the enterprise and the outsourced service provider. One crucial point is that, given the close interrelationship between the in-house and outsourced IT environments, focusing on IT outsourcing governance in isolation invariably proves inadequate – it must be considered within the context of IT governance as a whole.

by Paul Michaels, CEO of ImprovIT, and Navin Anand, Managing Partner & Sudha Iyer, Consultant at WhiteBox Business Solutions 

Self-Service Business Intelligence: Empowering Users to Generate Insights

Executive Summary

In today’s economic environment, organizations must use business intelligence (BI) to make smarter, faster decisions. The business case for BI is well established. Access to BI is what gives companies their competitive edge and allows them to discover new business opportunities. Yet, in too many organizations, decisions are still not based on business intelligence because of the inability to keep up with demand for information and analytics. IT has been stripped down to the barest numbers, even while information workers are demanding more control and faster access to BI and business data. To satisfy this demand and accelerate time to value, one approach involves setting up an environment in which the information workers can create and access specific sets of BI reports, queries, and analytics themselves—with minimal IT intervention—in a self-service BI (SS BI) environment.

Information workers become more self-sufficient by having an environment that is easy to use and supplies information that is easy to consume. It is these two themes—ease of use and information consumability—that play crucial roles in a fully functioning SS BI environment.

Self-service BI is defined as the facilities within the BI environment that enable BI users to become more self-reliant and less dependent on the IT organization. These facilities focus on four main objectives:

  1. easy access to source data for reporting and analysis,
  2. easy-to-use BI tools and improved support for data analysis,
  3. fast-to-deploy and easy-to-manage data warehouse options such as appliances and cloud computing, and
  4. simpler and customizable end-user interfaces.

Tenets of Self-Service BI

  • Make BI tools easy to use
  • Make BI results easy to consume and enhance
  • Make it easy to access source data
  • Make DW solutions fast to deploy and easy to manage

10 Key Recommendations

1.    Don’t assume that simply installing easy-to-use BI tools creates a self-service BI environment.

It’s a start, but it just isn’t that simple. You must have a solid and sound infrastructure in place that supplies the required data. The infrastructure requires planning and design, data integration and data quality processing, data models for the data warehouse and marts, scalable databases, and so on. It requires an understanding of the types of data the information workers will need.

The bottom line is that your job is to make these functions look easy and appealing. Simply installing technologies will not make your BI environment self-service enabled. What will make it easier and more appealing is to have a complete and solid infrastructure in place that makes access easy, the creation of analytics simple, and the display of results easy to understand. It also means giving the right environment to the right workers. Whether consumer, producer, or collaborator, the technology must match the tasks users want to perform in a way that is simple and engaging.

2.    IT needs to monitor the self-service BI environment.

There must be a layer of administration and manageability. Ensure that IT has monitoring and oversight capabilities when information workers deploy, share, and collaborate using BI capabilities. IT should be able to monitor the usage of any BI component that an information worker publishes, whether the data used was from a governed or ungoverned source. They should also know who else is using it. IT must be able to determine which queries are too costly or long-running, or which bog down the performance of other queries.

IT not only needs to monitor BI components, but also needs to secure, validate, and audit them.

The key here is to ensure that business users feel they have the “power” to create their own analytic capabilities, while IT retains the ability to monitor the environment and jump in to help out when needed.
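As a minimal sketch of what such monitoring might look like, the fragment below scans a hypothetical usage log for costly queries and tallies who is using each published component. The log schema (user, component, source_governed, duration_s) is an assumption made for illustration, not any real BI product’s API; in practice the same checks would run against whatever audit tables or usage logs your platform actually exposes.

```python
# Hypothetical sketch: flagging costly self-service queries from a usage log.
# The log schema below is an illustrative assumption, not a real product API.
from dataclasses import dataclass

@dataclass
class QueryLogEntry:
    user: str               # who ran the query
    component: str          # which published BI component it came from
    source_governed: bool   # governed vs. ungoverned data source
    duration_s: float       # elapsed execution time in seconds

def flag_costly_queries(log, max_duration_s=300.0):
    """Return entries exceeding the duration threshold, so IT can follow up."""
    return [e for e in log if e.duration_s > max_duration_s]

def usage_by_component(log):
    """Count distinct users per published component, to see who else uses it."""
    usage = {}
    for e in log:
        usage.setdefault(e.component, set()).add(e.user)
    return {component: len(users) for component, users in usage.items()}

if __name__ == "__main__":
    log = [
        QueryLogEntry("jdoe", "sales_dashboard", True, 12.4),
        QueryLogEntry("asmith", "adhoc_extract", False, 640.0),
    ]
    print(flag_costly_queries(log))   # the ad hoc extract gets flagged
    print(usage_by_component(log))
```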

3.    Support collaborative business intelligence.

Enable different types of information workers to share BI results and work together to define new ways of viewing and analyzing data. Start simply—use a setup that IT can configure easily and use technology that the information worker can understand and use easily. SS BI may need to mimic something information workers are already familiar with (Microsoft Office, for example). Use technology that meshes with your traditional BI environment and/or interfaces seamlessly with it. You will need to provide collaborative features that enable teams of information workers to develop and publish charts, dashboards, and so on, and the users of these analytics to rate or comment on them.

4.    Don’t give information workers too much responsibility.

Most information workers really don’t want the entire responsibility for generating information and reports. It’s not part of their job!

They may find the tools and infrastructure too difficult to use, or they may forget their training before they can use the environment. Make sure that those who do construct self-service BI components also define key metrics, entities, hierarchies, and terms in a consistent fashion.

They should be trained to use the existing technical and business metadata as well as the existing standards and nomenclature.

You should strive to strike a balance between self-service and IT-generated delivery of information. You can do this by taking small steps toward self-service if your business users are not used to the technology, fear doing something “wrong,” or feel they are not properly trained for these activities. Nothing will destroy a self-service environment faster than no one using it. It may take more handholding than you expect. One key success factor, though, cannot be ignored—the business users must play by the rules when it comes to defining their metrics, analytics, algorithms, and so on.

5.    Understand the information requirements of information workers and provide appropriate tools/ reports/dashboards.

Understand what each group of information workers wants to accomplish with BI. What are their motivations? What are their skill sets, capabilities, and even interest in learning how to serve themselves? You may find that most of your information users are consumers with little interest in creating, producing, or generating their own reports, queries, or analytics. But be aware that information workers change their roles frequently.

The best practice here is to get inside the heads of your users to understand what it is that they want to do, accomplish, or create. One suggestion is to examine or be familiar with their compensation models. Their bonus structure will give you a clear idea of what motivates them at work!

In addition, keep in mind that this may be a new service to many business people. Their reluctance to embrace it may come from fear of the unknown, inertia around the way they have always done things, or ignorance about the benefits that they might receive from the environment. In any case, be prepared to change what the users can do—design ways to monitor the utilization of the environment. As users become familiar with the self-service environment, many may begin to change their role from consumer to producer, from producer to collaborator, and so on.

6.    Create a starter set of standard reports, analyses, and widgets.

Provide a library of standard BI components (queries, reports, analyses, widgets). Make them appealing to information consumers (the largest audience). These can also act as templates for the information producers.

The best practice is to make these parameter-driven and customizable. It is an amazing but true fact that one of these reports can replace hundreds of hard-coded, customized reports and analyses. The ability to select parameters based on immediate needs also makes consumers feel as though they are truly self-sufficient. They are not overwhelmed, because the BI results have simple, intuitive interfaces to filter, navigate, and analyze a predefined set of data. All of these “starter” components will help with the adoption of self-service BI simply because we all make better editors than creators. So the more you supply, the faster the adoption.
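As a minimal sketch of the parameter-driven idea, assuming a SQL-accessible data mart: one report definition serves any combination of parameters the consumer picks. The table and column names (sales, region, amount, order_date) are invented for this example, not taken from any specific BI product.

```python
# Minimal sketch of one parameter-driven report replacing many hard-coded ones.
# The schema is an illustrative assumption.
import sqlite3

REPORT_SQL = """
    SELECT region, SUM(amount) AS total_sales
    FROM sales
    WHERE region = :region
      AND order_date BETWEEN :start_date AND :end_date
    GROUP BY region
"""

def run_sales_report(conn, region, start_date, end_date):
    """One report definition; the consumer supplies the parameters."""
    params = {"region": region, "start_date": start_date, "end_date": end_date}
    return conn.execute(REPORT_SQL, params).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL, order_date TEXT)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [("EMEA", 100.0, "2011-01-15"), ("EMEA", 250.0, "2011-02-03"),
         ("APAC", 75.0, "2011-01-20")],
    )
    # The same definition serves any region/date combination the user picks,
    # typically via a drop-down or prompt rather than a code change.
    print(run_sales_report(conn, "EMEA", "2011-01-01", "2011-03-31"))
```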

7.    Establish a governance committee.

The governance committee should consist of representatives from both the information worker population and IT professionals. Their responsibilities include reviewing requests for new components or modifications to existing standard ones, determining whether an existing component can satisfy a request or if a new one is needed, examining requests for self-service, determining what to provide, and identifying needed training.

Governance also includes the creation of role-based access and security for particular user groups, as well as the determination of which self-service objects should be promoted to the governed environment for general use.

Remember that the governance committee should promote the use of self-service BI, not hinder its adoption. It is not meant to be a restrictive group, so it should perform the needed PR and communications about its purposes to ensure this message is heard.

8.    Allow the data warehouse to be used with other types of data.

There are times when urgent business requirements cannot be satisfied in a timely manner using the data warehouse alone. It may be that other sources of data, such as operational data, external information, or analytic data from other sources, must be brought together for the needed analytic. In this case, data virtualization provides a quick way to give rapid and flexible access to multiple data sources. However, you will need to provide a monitoring mechanism for the sources accessed to ensure that the performance of these systems is not negatively affected.

The governance committee should be involved in this process.

We all know that emergencies happen—requests come in with an urgency that cannot be met through traditional mechanisms. Workarounds happen. In fact, there is data that may be needed regularly for analytics but should not or cannot be incorporated into the data warehouse—for example, real-time or sensitive data. Data federation technologies have come a long way to allow different data sources to be combined in a virtual fashion and yet act as if they were physically integrated.

Data governance and some form of monitoring will be needed to ensure that the end-run or workaround can be halted if the data is subsequently incorporated into the data warehouse. Note: retrofitting can be painful!
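A hedged sketch of the federation idea follows, using two in-memory SQLite databases to stand in for the warehouse and a live operational system. The schemas are illustrative assumptions; real data-virtualization products do this declaratively rather than in hand-written code.

```python
# Sketch of data federation: joining governed warehouse data with an
# operational source at query time, without copying anything into the
# warehouse. Schemas here are illustrative assumptions.
import sqlite3

def federated_customer_view(warehouse, operational):
    """Combine both sources virtually at query time; nothing is copied."""
    names = dict(warehouse.execute(
        "SELECT customer_id, customer_name FROM dim_customer"))
    balances = dict(operational.execute(
        "SELECT customer_id, balance FROM account_balance"))
    return [{"customer_id": cid, "name": name, "balance": balances.get(cid)}
            for cid, name in names.items()]

if __name__ == "__main__":
    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("CREATE TABLE dim_customer (customer_id INT, customer_name TEXT)")
    warehouse.execute("INSERT INTO dim_customer VALUES (1, 'Acme Ltd')")
    operational = sqlite3.connect(":memory:")  # stands in for a live system
    operational.execute("CREATE TABLE account_balance (customer_id INT, balance REAL)")
    operational.execute("INSERT INTO account_balance VALUES (1, 1234.56)")
    print(federated_customer_view(warehouse, operational))
```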

9.    Buffer less experienced information workers from the complexities of the BI environment.

Use features such as Web browsers, interactive graphics, wizards, drop-down lists, and prompts to guide users through BI tasks. This will free up IT professionals from spending large amounts of time responding to requests for new data, building new reports, and so on. It also gives the information consumers a sense of control and adds to the flexibility of the overall BI environment.

But beware—what’s intuitive to a BI professional is not necessarily intuitive to a naïve user. BI implementers have to think outside of their own boxes to truly understand what business users who want SS BI really need. It may mean doing their job for a day!

10.    Watch your costs.

This is a major product differentiator.

If you already have a BI vendor’s platform in place, you can often add a self-service capability with minimal effort and cost. Many vendors offer entry-level products geared toward companies with limited budgets. Some companies use open source solutions, but there may be additional “deployment” costs.

Consider software-as-a-service (SaaS) offerings to cut capital and IT staff costs.

You must be careful not to break your budget through your self-service BI implementation! There are many deployment options available to BI implementers today that can greatly reduce the costs of these environments. However, remember to ensure that their deployment options will fit into your overall conceptual and technical architecture.

Download the Report

This report describes the technological underpinnings of these four objectives in great detail while recognizing that there are two opposing forces at work—the need for IT to control the creation and distribution of BI assets and the demand from information workers to have freedom and flexibility without requiring IT help. Companies seeking to implement self-service BI must reach a middle ground in which information workers have free access to data, analytics, and BI components while IT has oversight into the SS BI environment to observe its utilization. This gives the information workers the independence and self-determination they need to answer questions and make decisions while giving IT the ability to monitor the SS BI environment and apply governance and security measures where necessary. For guidance, this report provides practical recommendations to ensure a successful SS BI environment.

To access the report, click here.

IT risks—a director’s perspective

(Extracted from PWC’s ‘To the Point’ series – Spring 2011)

Some directors may be uncomfortable with the subject of information technology. Given how complex companies’ enterprise systems are, directors may be unclear about the questions they should be asking or the answers they should expect. But for some companies, where IT enables the company’s operations, it represents a major risk that boards should oversee.

How does a director know whether to step up the level of IT oversight? Much depends on the company and its complexity. Greater director oversight of IT is likely warranted if your company:

  • has a high volume of transactions; for example, a financial services company
  • collects and stores sensitive data about third parties (customers, patients)
  • has an open access network or open databases, allowing entry to the system by outsiders
  • maintains proprietary know-how, processes, procedures, or other intellectual property
  • has a multi-national scope

Even if your company doesn’t have these environmental factors, you should consider the need to increase director oversight when the level of IT risk increases, such as when:

  • major IT projects are underway—new systems, technologies or platforms
  • integrating programs from more than one platform—using “best of breed” products from different providers that require “bridging” programs to pass data from one platform to another
  • integrating an acquired business—especially one on a different IT platform
  • technology is enabling a new corporate strategy

So, how can boards be comfortable they are in a position to oversee IT risks that are important to the company? By

  • having someone on the board with reasonable technology skills,
  • asking the right questions and applying skepticism when considering the answers, for example, by asking follow-up questions and seeking corroboration through other sources, possibly an independent board advisor
  • understanding the full cost of technology, including the consulting fees to install the systems, as well as the licensing fees, equipment, training, maintenance, etc., and assessing the implications of any cost variability
  • getting regular updates on project status and understanding the factors that would signal when a project is in trouble

IT oversight often falls to the audit committee, though strategically significant technologies might be overseen by the full board. And it’s important to realize technology oversight doesn’t end with major systems as we’ve discussed here. Directors should be aware of and comfortable with the company’s web presence, as well as its use of social media and its policies governing such use (see also To the Point, “Social Media: What Directors Need to Know,” Summer 2010).

Mintzberg’s Ten Management Roles

During my time at Aer Lingus, Organisational Transformation has been a key topic of importance. I’ve learnt that to make real change happen, “people change” is where it is at. Until organisations go back to basics and focus on, and invest in, their core asset (i.e. people), “real change” will never be achieved or sustained.

My interest led me to read about Henry Mintzberg’s management roles and framework. This is something that resonates with me in line with my recent experiences.

To learn more, read on …

Mintzberg’s Ten Management Roles.


Henry Mintzberg organizational configurations model framework

Too many consultancies and “experts” overuse and abuse the term Business/IT or Business/Technology alignment. What they often forget is that the core asset of an organisation is its people, as well as its processes and technology.

I’ve realised that transforming an organisation’s people is of paramount importance to a Business Transformation or Enterprise Change programme. That said, it is important that organisations rethink the role of people across the company and take a fresh look at the structure of the organisation to maximise the potential of its people. People are not one-dimensional. Until CEOs and organisations take heed of this, all we have is a white-collar industrial factory made up of individuals consigned to pigeonholes, stopping real change, innovation and competition.

If, like me, you’re passionate about making “real change” happen in organisations, you will want to understand and consider an organisational model offered by Henry Mintzberg.

Read on about Henry Mintzberg’s organizational configurations model framework.


Why Governance? And Why Now?

This is a topic that is of key interest to me in my role as an Enterprise Architect and IT Strategist.

As any architect will attest, governance is not so much a question of ‘why,’ but rather about ‘how’ and ‘when.’ More specifically, conversations and debates usually focus on how much governance is really necessary, as well as when and where to apply it.

Presently, I’m serving as the Enterprise Architect for Aer Lingus (Dublin, Ireland), tasked with introducing IT Governance and the role of the Technical Design Authority (TDA) with a view to installing quality assurance disciplines that improve Business/IT alignment. I am also keen to ensure that IT has the correct controls and structures in place to avoid being “pushed around” by the Business.

Governance is not a new topic by any means. In this day and age, where end-users are fast becoming used to more responsive, agile and scalable IT solutions, governance is required to ensure that “demand and supply” between Business and IT are managed respectfully and properly.

Read on …

Why Governance? And Why Now?

By Ron Karas, 08/05/2010

Now, three initiatives are bringing a lot of these conversations to the forefront: cloud computing, SOA and mainframe modernization. There are similarities in the way governance is approached in each of these categories. Each is intended to break down silos, protect and preserve the integrity of information, and provide IT with more agility to create business value.

The more applications and services are exposed and proliferate throughout the Web and across composite applications and services, the greater the risk associated with access and reuse of these technology assets. This gap will continue to widen as more products and services are introduced and integrated. As the infrastructure continues to evolve, there will be a demand for improved transparency, due to the higher likelihood of policy violations and coding errors.

Yet, governing those assets as they evolve with the infrastructure can be tricky in terms of responsibility and ownership. That’s because it’s hard to clearly define the boundaries of an application or service once it’s used by different teams. This becomes increasingly complex once an application or service is tweaked to address a specific business need; more changes to the software increase the risk of coding errors if governance is not appropriately applied.

Applying governance after the horse has left the barn can often be difficult and somewhat ineffective. In this context, governance is regarded as a tactical effort focused on tools and functions within the infrastructure, as opposed to a more strategic initiative designed to align technology with the company’s larger business goals.

There are several reasons, or excuses, as to why governance sometimes takes a back seat in the overall IT strategy. It is usually a combination of culture and software development processes that view governance as the step to take when things go awry, or as something to apply only to the most critical applications and services. While governance may be a priority for certain departments, and controls may be in place with regard to how much of an application or service is shared, inconsistent governance practices will eventually make themselves known in unexpected ways.

When to Start

The specifics of where and when to start with governance depend on the existing infrastructure and its maturity and reach. The simple answer is: at the onset of an IT initiative. Lack of governance from the earliest stages has a cost. The cost of fixing software code after it has been deployed can be 30 to 200 times higher than if the issues were addressed as the code was being written; by that measure, a defect that costs $100 to fix during coding could cost $3,000 to $20,000 to fix in production.

In an ideal situation, governance is part of the planning and design phases and carried out throughout the software development life cycle. Unfortunately, going back in time and retracing steps is usually not an option, especially for larger organizations with multiple IT balls in the air. When new initiatives such as cloud or SOA are being mapped out, they present the opportunity to insert governance as part of the overall technology strategy.

But what if there isn’t an immediate and new opportunity to extend governance policies and practices beyond their initial scope? This raises the question of whether to simply leave governance efforts as they are. Why fix it if it’s not broken? While a company may choose to keep its IT infrastructure status quo for many sound business and architectural reasons, the reality is that there will come a time when it needs to interact and exchange information with a partner, customer or other outside organization that has exposed part of its infrastructure to the Web. Mitigating the associated risks and increasing transparency will require more stringent governance by both parties.

Culture

Governance is most effective when introduced through the combination of culture and technology. This requires raising awareness of its relevance and importance to the organization in a manner that’s in step with the company culture and existing processes. Engaging developers, encouraging the sharing of best practices and creation of policies and attaching rewards are some ways to achieve this.

From a technology standpoint, the use of existing policies and best practices can accelerate governance efforts, significantly reducing the hours required to create these tools from scratch. The concept of policies and best practices is usually not foreign to development teams; consistent enforcement in the form of a real governance initiative typically is. In fact, many such policies and best practices are widely available from architects and developers who realize the value to the industry as a whole of paying it forward.

Can there ever be too much governance? Yes – too much governance can be worse than too little governance if it hinders productivity. Start off with a more passive approach to governance in the early stages of development, notifying project teams of issues and their potential impact. As you move closer to production, you can take a more active approach, such as blocking users from checking in artifacts to the registry/repository unless they comply with established policies.
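A minimal sketch of that passive-to-active progression follows, assuming hypothetical policy checks and a simple stage flag; none of this reflects any real registry/repository product’s API.

```python
# Sketch of stage-aware governance: the same policy checks warn in early
# development but block check-in closer to production. The policy rules and
# artifact shape are illustrative assumptions.

def check_policies(artifact):
    """Return a list of policy violations (empty if compliant)."""
    violations = []
    if not artifact.get("owner"):
        violations.append("no owner assigned")
    if not artifact.get("documented"):
        violations.append("missing documentation")
    return violations

def gate_check_in(artifact, stage):
    """Passive in 'dev' (notify only); active in 'prod' (block non-compliant)."""
    violations = check_policies(artifact)
    if not violations:
        return True
    if stage == "dev":
        print(f"warning: {artifact['name']}: {', '.join(violations)}")
        return True   # notify the project team but allow the check-in
    print(f"blocked: {artifact['name']}: {', '.join(violations)}")
    return False      # block check-in to the registry/repository

if __name__ == "__main__":
    artifact = {"name": "customer-service", "owner": "", "documented": False}
    gate_check_in(artifact, "dev")   # warns, allows
    gate_check_in(artifact, "prod")  # blocks
```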

An important point to keep in mind: when organizations institute governance for the first time, it is a new effort and will require additional costs in the short term, but it will ultimately reduce overall development costs.

Making governance worthwhile requires a consistent approach to developing and deploying applications and services. It also means enforcing policies across different departments based on a common set of best practices, standards, policies, and patterns.

Finally, it stands to reason that if the services and applications are going to be distributed throughout the infrastructure, so should governance. This way, enterprises can put into place the policies and best practices that should be followed as the software continues to evolve and serve different parts of the organization whether it’s an SOA, cloud or any major IT architecture.


Solvency II

REF: http://www.fsa.gov.uk/pages/About/What/International/solvency/index.shtml

Solvency II:

Solvency II is a fundamental review of the capital adequacy regime for the European insurance industry. It aims to establish a revised set of EU-wide capital requirements and risk management standards that will replace the current Solvency requirements.

News:

The European Commission publishes the technical specifications for the fifth quantitative impact study (QIS5), see the FSA’s QIS5 page for further information.

The Insurance Sector Newsletters contain useful information for firms about the FSA’s approach to moving from ICAS to Solvency II.

The FSA publishes Delivering Solvency II – an update that summarises the key policy developments and implementation activities.

The Solvency II Directive is due to be implemented on 1 November 2012. Any changes to the go live date will be formally communicated by the European Commission, when the FSA will consider and communicate the potential impact on planning and preparations for itself and firms.

Application:

The Solvency II Directive will apply to all insurance and reinsurance firms with gross premium income exceeding €5 million or gross technical provisions in excess of €25 million (please see Article 4 of the Directive for full details).

In a nutshell:

  • Solvency II will set out new, strengthened EU-wide requirements on capital adequacy and risk management for insurers with the aim of increasing policyholder protection; and
  • the strengthened regime should reduce the possibility of consumer loss or market disruption in insurance.

Central elements:

Central elements of the Solvency II regime include:

  1. Demonstrating adequate Financial Resources (Pillar 1): applies to all firms and considers key quantitative requirements, including own funds, technical provisions and calculating Solvency II capital requirements (the Solvency Capital Requirement (SCR) and the Minimum Capital Requirement (MCR)), with the SCR calculated either through an approved full or partial internal model, or through the European standard formula approach (a simplified sketch of the standard-formula aggregation appears after this list).
  2. Demonstrating an adequate System of Governance (Pillar 2): including effective risk management system and prospective risk identification through the Own Risk and Solvency Assessment (ORSA).
  3. Supervisory Review Process: the overall process conducted by the supervisory authority in reviewing insurance and reinsurance undertakings, ensuring compliance with the Directive requirements and identifying those with financial and/or organisational weaknesses susceptible to producing higher risks to policyholders.
  4. Public Disclosure and Regulatory Reporting Requirements (Pillar 3).
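For a sense of what the Pillar 1 standard formula involves: the QIS5 technical specifications aggregate module-level capital charges through a prescribed correlation matrix. The following is a simplified sketch of that structure, not the full calculation as specified in the Directive and its implementing measures.

```latex
% Simplified sketch of the standard-formula aggregation (after the QIS5
% technical specifications); i and j range over risk modules such as
% market, counterparty default, life, non-life and health risk.
\[
  \mathrm{BSCR} \;=\; \sqrt{\sum_{i,j} \mathrm{Corr}_{i,j}\,
      \mathrm{SCR}_i \, \mathrm{SCR}_j}
\]
% The full SCR then adds an adjustment term and an operational-risk charge:
\[
  \mathrm{SCR} \;=\; \mathrm{BSCR} \;+\; \mathrm{Adj} \;+\; \mathrm{SCR}_{\mathrm{op}}
\]
```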

Adoption procedure:

Solvency II is being created in accordance with the Lamfalussy four-level process:

  • Level 1: framework principles: this involves developing a European legislative instrument that sets out essential framework principles, including implementing powers for detailed measures at Level 2.
  • Level 2: implementing measures: this involves developing more detailed implementing measures (prepared by the Commission following advice from CEIOPS) that are needed to operationalise the Level 1 framework legislation.
  • Level 3: guidance: CEIOPS works on joint interpretation recommendations, consistent guidelines and common standards. CEIOPS also conducts peer reviews and compares regulatory practice to ensure consistent implementation and application.
  • Level 4: enforcement: more vigorous enforcement action by the Commission is underpinned by enhanced cooperation between member states, regulators and the private sector.

The Level 1 Directive text was adopted by the European Parliament on 22 April 2009 and was endorsed by the Council of Ministers on 5 May 2009, thus concluding the legislative process for adoption. This was a key step in the creation of Solvency II.  The Directive includes a ‘go live’ implementation date of 1 November 2012 for the new requirements, which will replace our current regime.

Delivering Solvency II:

In June 2010 we published Delivering Solvency II giving a summary of the key policy developments and implementation activities.  The first issue includes: Completing the fifth QIS; Deciding to use an internal model; Reporting, disclosure and market discipline (Pillar 3); System of Governance; Getting involved in FSA forums; and Key contacts.

Delivering Solvency II

Solvency II: What CIOs need to know

Solvency II, the EU directive that updates capital adequacy rules for the European insurance industry, is about to move to centre stage. We look at what IT departments of insurance companies in the UK must do.

Compliance with Solvency II will present IT managers with many challenges, not least the sheer scale of the exercise. There is also greater complexity, with legal rules shifting from spelling out a series of provisions to being a principles-based system.

Peter Skinner, the British MEP who nursed the package through the European Parliament, says, “Solvency II shifts the focus of supervisory authorities from merely checking compliance with a tick-the-box approach based on a set of rules to more proactively supervising the risk management of individual companies based on a set of principles.”

The directive, which cleared the Brussels legislative machinery in April last year, requires IT architectures to be ready for the directive’s enactment in national legislation by 31 October 2012. Non-compliance could endanger an insurance company’s right to trade.

Timing

Timing for setting up the modelling software to meet the new rules has to follow a set programme, broken down into stages. For instance, according to risk management consultancy Watson Wyatt, the UK Financial Services Authority (FSA) required that as early as March 2009, firms should have stated whether they planned to apply for internal model approval.

Between June and November 2010, the first model dry-run period should have started. By October 2011, the FSA should be in receipt of the first batch of dry-run submissions. Second dry runs should take place in 2011 and 2012, with the FSA review/approval process running from 2012.

Jürgen Weiss, principal research analyst at Gartner, reckons that some companies will start the main IT work in three months, but others will not get going for another nine months.

Weiss says most European insurers are still in “a discovery phase”. IT managers are uncertain about budgeting their future Solvency II programmes. Some have not even requested an IT budget to cover work in 2010 on the regulations.

Almost all IT organisations that are familiar with the regulations have focused exclusively on the first of the three pillars of Solvency II. This primarily addresses the quantitative capital requirements for European insurers and the actuarial models with which these requirements are being calculated.

Gartner believes the effort to comply with Pillar 2 requirements will be significantly higher than for the other two pillars, because of the heterogeneous IT landscape of many insurers and the work needed to at least semi-automate data collection and normalisation. Weiss says this is worrying.

He says several Level 2 implementing measures on Solvency II, published in November 2009 by the EU’s advisory body for the insurance industry, the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), explicitly address IT issues. Examples are advice on data quality, data governance and documentation.

Collaboration

Weiss says risk managers and actuaries should now be collaborating with their IT colleagues. Business and IT managers should also be aware that Solvency II requires a holistic approach to risk management, encompassing people, processes and applications.

A contrasting view on timing comes from Steve Bell, financial services advisory partner at Ernst & Young: “With regard to timescales, there are a large number of interim dates, and in my experience for most clients they are not running too late as there is time remaining to gear up programmes.

“IT will need to deliver new or enhanced risk management systems. This will be the key new IT system build. The bigger challenge will be to provide accurate data at a lower level of granularity, at more regular intervals than before, from source systems, many of which in insurance are legacy in nature. This is the area that will be harder for insurance IT teams.”

Management consultancy Deloitte and Touche is helping insurance companies to specify their IT needs so they can purchase compliance software. It says an official EU guideline, IP 58, gives a steer on the pre-application process, offering advice on supervisory reporting and disclosure that “deals with the requirements for insurance companies to report to both the regulators and the public”.

Suppliers

Companies lining up to supply insurance companies with the software they need include: IBM, SAS, SAP, Oracle, Sungard, Fermat, EMB, Algorithmics, Towers Perrin, plus a fragmented array of specialist application providers.

IBM says the revision of its insurance industry framework – which it describes as a blueprint to address all three pillars of Solvency II – is complete and already being used by more than 150 insurers.

Isabella Hess, senior managing consultant at IBM Global Business Services, says IT departments should be thinking about adopting an enterprise-wide information architecture as, for most large groups, the concepts and strategic direction are “more or less ready”.

Hess says there is little real choice as the regulator and analysts are unlikely to look favourably on any big player adopting a more simplistic, standard model-based risk management framework.

Data quality

One fundamental issue is the quality, availability and traceability of data. For example, the data “granularity” defined by data models is usually hardwired into policy systems and can be difficult and expensive to change. (Granularity is the level of detail of attributes, fields, and data types that can be provided.)
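As a hedged illustration of what hardwired granularity means in practice, compare a coarse-grained policy record with a fine-grained one. The field names below are invented for this example and do not come from any real policy administration system.

```python
# Illustration of data granularity in a policy data model. A coarse-grained
# record keeps only totals; a fine-grained one preserves the per-coverage
# detail that risk calculations may need. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class PolicyCoarse:
    policy_id: str
    total_premium: float    # one aggregate figure; per-coverage detail is lost

@dataclass
class Coverage:
    peril: str              # e.g. "fire", "flood"
    sum_insured: float
    premium: float

@dataclass
class PolicyFine:
    policy_id: str
    coverages: list[Coverage] = field(default_factory=list)  # detail kept

# Moving from PolicyCoarse to PolicyFine after the fact is the expensive,
# "hardwired" change the article describes: the detail was never captured.
```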

Similarly, insurers may need to collect more comprehensive information about the quality and risk-sensitivity of their investment portfolios than was required previously, and do so more frequently and faster.

Solvency II software will eventually better reflect an insurance company’s exposure to risk. This will enable a company to plan its business development, liquidity management and risk appetite to get the best payback on its capital reserves. In other words, IT managers will be buying a system that will enable firms to make better use of their capital.

Hess refers warmly to phase two of the International Accounting Standards Board’s forthcoming IFRS 4 on insurance contracts, for which an exposure draft is due in the second quarter of 2010. In planning Solvency II architecture, she advocates the use of other industry standards. These include Acord, the emerging international standard for information exchange, and service-oriented architecture for data exchange.

How much will it cost?

Hess says large insurance firms could be facing bills for around €100m each as a result of Solvency II. However, many have partially completed work on data and processes, leaving only “heavy lift work in intellectual modelling and embedding their enterprise risk models” to be done.

Hess estimates that second-tier insurers will need to invest between €30m and €40m over three years. Smaller companies could expect to pay €1m to €1.5m each.

According to the CEA, the European insurance and reinsurance federation, the total number of insurance companies operating in the EU is 5,200. This figure could be higher if one takes in associated economic zones, such as the European Economic Area.

More accurate ideas on cost are likely to come from the publication of the next Quantitative Impact Studies (QIS) on Solvency II, expected in August 2010. The fifth in a series of reports, the study aims to assess the likely impact on insurance markets and products, social and economic impacts and the likely impact on insurers’ balance sheets and business behaviour of the potential policy options being considered by the EC.

Insurance companies are reminded that in 2012 it will not be enough just to say that you have purchased a Solvency II compliance software package. A Brussels mandarin close to the directive emphasises that this will not satisfy the regulators. “In the UK, the FSA will never give blind approval to the software itself, but will check on functionality,” he says.

Continue reading

The Business Case for SOA & The Role of NextGen Architects

ABSTRACT

Five years ago, the business case for Service-Oriented Architecture (SOA) didn’t matter. Organizations pursued SOA initiatives based on a desire to achieve ‘competitive advantage’ and ‘agility’ without specifying clear metrics and success criteria. A great deal of time and effort is spent on technology architecture, governance and vendor assessments, which is good, but the fundamental point is that SOA is about business.

Today’s budgetary environment of doing less with less, coupled with the current economic climate, makes it important for SOA champions to make their business cases for investment compelling and to create and maintain momentum.

What is required is a new generation of architects capable of taking a business-led and multi-disciplinary approach to SOA, enabling better Business/IT alignment. Next generation (NextGen) architects are no longer part of an IT department or individual IT projects. NextGen architects are “architects of the business” – organizational resources delivering business-led SOA solutions that directly support new business initiatives and oversee business change, governance and budgetary controls.

This webinar includes advice and recommendations based on actual experiences and successes across a range of public- and private-sector industries.

FURTHER INFORMATION

Join me on 26 Feb 2010 at 11:00 am (GMT) by clicking here.

This presentation has been postponed. I will provide further updates once a new date/time has been agreed.

Apologies for any inconvenience.