Self-Service Business Intelligence: Empowering Users to Generate Insights

Executive Summary

In today’s economic environment, organizations must use business intelligence (BI) to make smarter, faster decisions. The business case for BI is well established. Access to BI is what gives companies their competitive edge and allows them to discover new business opportunities. Yet, in too many organizations, decisions are still not based on business intelligence because of the inability to keep up with demand for information and analytics. IT has been stripped down to the barest numbers, even while information workers are demanding more control and faster access to BI and business data. To satisfy this demand and accelerate time to value, one approach involves setting up an environment in which the information workers can create and access specific sets of BI reports, queries, and analytics themselves—with minimal IT intervention—in a self-service BI (SS BI) environment.

Information workers become more self-sufficient by having an environment that is easy to use and supplies information that is easy to consume. It is these two themes—ease of use and information consumability—that play crucial roles in a fully functioning SS BI environment.

Self-service BI is defined as the facilities within the BI environment that enable BI users to become more self-reliant and less dependent on the IT organization. These facilities focus on four main objectives:

  1. easy access to source data for reporting and analysis,
  2. easy-to-use BI tools and improved support for data analysis,
  3. fast-to-deploy and easy-to-manage data warehouse options such as appliances and cloud computing, and
  4. simpler and customizable end-user interfaces.

Tenets of Self-Service BI

  • Make BI tools easy to use
  • Make BI results easy to consume and enhance
  • Make it easy to access source data
  • Make DW solutions fast to deploy and easy to manage

10 Key Recommendations

1.    Don’t assume that simply installing easy-to-use BI tools creates a self-service BI environment.

It’s a start, but it just isn’t that simple. You must have a solid and sound infrastructure in place that supplies the required data. The infrastructure requires planning and design, data integration and data quality processing, data models for the data warehouse and marts, scalable databases, and so on. It requires an understanding of the types of data the information workers will need.

The bottom line is that your job is to make these functions look easy and appealing. Simply installing technologies will not make your BI environment self-service enabled. What will make it easier and more appealing is to have a complete and solid infrastructure in place that makes access easy, the creation of analytics simple, and the display of results easy to understand. It also means giving the right environment to the right workers. Whether consumer, producer, or collaborator, the technology must match the tasks users want to perform in a way that is simple and engaging.

2.    IT needs to monitor the self-service BI environment.

There must be a layer of administration and manageability. Ensure that IT has monitoring and oversight capabilities when information workers deploy, share, and collaborate using BI capabilities. IT should be able to monitor the usage of any BI component that an information worker publishes, whether the data used came from a governed or ungoverned source, and should know who else is using it. IT must also be able to determine which queries are too costly or long-running, or which bog down the performance of other queries.
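For illustration, the kind of oversight described above can be sketched in a few lines. The query-log fields and thresholds below are invented for the example, not taken from any particular BI platform:

```python
# Illustrative sketch only: the log schema and thresholds are assumptions,
# not part of any specific BI product's monitoring API.

# Hypothetical query-log entries as IT might collect them from the BI platform.
query_log = [
    {"user": "analyst_a", "component": "sales_dashboard", "runtime_s": 4,    "rows_scanned": 10_000},
    {"user": "analyst_b", "component": "adhoc_query_17",  "runtime_s": 1900, "rows_scanned": 250_000_000},
    {"user": "analyst_c", "component": "margin_report",   "runtime_s": 35,   "rows_scanned": 1_200_000},
]

def flag_costly(log, max_runtime_s=600, max_rows=100_000_000):
    """Return entries that exceed either the runtime or the scan threshold."""
    return [q for q in log
            if q["runtime_s"] > max_runtime_s or q["rows_scanned"] > max_rows]

for q in flag_costly(query_log):
    print(f"Review {q['component']} (owner: {q['user']}): "
          f"{q['runtime_s']} s, {q['rows_scanned']:,} rows scanned")
```

In practice the thresholds would be tuned per platform, and the flagged list would feed the review step rather than block users outright, preserving the self-service feel.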

IT not only needs to monitor BI components, but also needs to secure, validate, and audit them.

The key here is to ensure that business users feel they have the “power” or ability to create their own analytic capabilities, while IT retains the ability to monitor the environment and step in to help when needed.

3.    Support collaborative business intelligence.

Enable different types of information workers to share BI results and work together to define new ways of viewing and analyzing data. Start simply—use a setup that IT can configure easily and use technology that the information worker can understand and use easily. SS BI may need to mimic something information workers are already familiar with (Microsoft Office, for example). Use technology that meshes with your traditional BI environment and/or interfaces seamlessly with it. You will need to provide collaborative features that enable teams of information workers to develop and publish charts, dashboards, and so on, and the users of these analytics to rate or comment on them.

4.    Don’t give information workers too much responsibility.

Most information workers really don’t want the entire responsibility for generating information and reports. It’s not part of their job!

They may find the tools and infrastructure too difficult to use, or they may forget their training before they can use the environment. Make sure that those who do construct self-service BI components also define key metrics, entities, hierarchies, and terms in a consistent fashion.

They should be trained to use the existing technical and business metadata as well as the existing standards and nomenclature.

You should strive to strike a balance between self-service and IT-generated delivery of information. You can do this by taking small steps toward self-service if your business users are not used to technology, fear doing something “wrong,” or feel they are not properly trained for these activities. Nothing will destroy a self-service environment faster than no one using it. It may take more handholding than you expect. One key success factor, though, cannot be ignored: the business users must play by the rules when it comes to defining their metrics, analytics, algorithms, and so on.

5.    Understand the information requirements of information workers and provide appropriate tools/ reports/dashboards.

Understand what each group of information workers wants to accomplish with BI. What are their motivations? What are their skill sets, capabilities, and even interest
in learning how to serve themselves? You may find that most of your information users are consumers with little interest in creating, producing, or generating their own reports, queries, or analytics. But be aware that information workers change their roles frequently.

The best practice here is to get inside the heads of your users to understand what it is that they want to do, accomplish, or create. One suggestion is to examine or be familiar with their compensation models. Their bonus structure will give you a clear idea of what motivates them at work!

In addition, keep in mind that this may be a new service to many business people. Their reluctance to embrace it may come from fear of the unknown, inertia around the way they have always done things, or ignorance about the benefits that they might receive from the environment. In any case, be prepared to change what the users can do—design ways to monitor the utilization of the environment. As users become familiar with the self-service environment, many may begin to change their role from consumer to producer, from producer to collaborator, and so on.

6.    Create a starter set of standard reports, analyses, and widgets.

Provide a library of standard BI components (queries, reports, analyses, widgets). Make them appealing to information consumers (the largest audience). These can also act as templates for the information producers.

The best practice is to make these parameter-driven and customizable. It is an amazing but true fact that one of these reports can replace hundreds of hard-coded, customized reports and analyses. The ability to select parameters based on immediate needs also makes consumers feel as though they are truly self-sufficient. They are not overwhelmed, because the BI results have simple, intuitive interfaces to filter, navigate, and analyze a predefined set of data. All of these “starter” components will help with the adoption of self-service BI simply because we all make better editors than creators. So the more that you supply, the faster the adoption.
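The idea of one parameter-driven template replacing many hard-coded reports can be sketched as follows; the dataset, field names, and parameters are invented for the example:

```python
# Sketch of a parameter-driven report: one template, many reports.
# The data and filter parameters are hypothetical.
sales = [
    {"region": "East", "quarter": "Q1", "revenue": 120},
    {"region": "East", "quarter": "Q2", "revenue": 150},
    {"region": "West", "quarter": "Q1", "revenue": 90},
    {"region": "West", "quarter": "Q2", "revenue": 110},
]

def revenue_report(data, region=None, quarter=None):
    """Filters are chosen by the information consumer at run time,
    so the same template answers many different requests."""
    rows = [r for r in data
            if (region is None or r["region"] == region)
            and (quarter is None or r["quarter"] == quarter)]
    return {"rows": rows, "total": sum(r["revenue"] for r in rows)}

# The same template serves very different requests:
east_only = revenue_report(sales, region="East")      # total = 270
q1_everywhere = revenue_report(sales, quarter="Q1")   # total = 210
```

A real BI tool exposes the same pattern through drop-down prompts rather than function arguments, which is what keeps consumers from ever touching code.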

7.    Establish a governance committee.

The governance committee should consist of representatives from both the information worker population and IT professionals. Their responsibilities include reviewing requests for new components or modifications to existing standard ones, determining whether an existing component can satisfy a request or if a new one is needed, examining requests for self-service, determining what to provide, and identifying needed training.

Governance also includes the creation of role-based access and security by a particular user group as well as the determination of which self-service objects should be promoted to the governed environment for general use.

Remember that the governance committee should promote the use of self-service BI, not hinder its adoption. It is not meant to be a restrictive group, so it should perform the needed PR and communications about its purposes to ensure this message is heard.

8.    Allow the data warehouse to be used with other types of data.

There are times when urgent business requirements cannot be satisfied in a timely manner using the data warehouse alone. It may be that other sources of data, such as operational data, external information, or analytic data from other sources, must be brought together for the needed analytic. In this case, data virtualization provides a quick way to give rapid and flexible access to multiple data sources. However, you will need to provide a monitoring mechanism for the sources accessed to ensure that the performance of these systems is not negatively affected.

The governance committee should be involved in this process.

We all know that emergencies happen—requests come in with an urgency that cannot be met through traditional mechanisms. Workarounds happen. In fact, there is data that may be needed regularly for analytics but should not or cannot be incorporated into the data warehouse—for example, real-time or sensitive data. Data federation technologies have come a long way to allow different data sources to be combined in a virtual fashion and yet act as if they were physically integrated.
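A minimal sketch of the federation idea follows: two sources are queried in place and joined virtually, without loading either into the warehouse. All names and data here are invented for illustration:

```python
# Minimal data-federation sketch: join sources on the fly, as a
# virtualization layer would, rather than physically integrating them.
warehouse_customers = [
    {"customer_id": 1, "name": "Acme Ltd"},
    {"customer_id": 2, "name": "Brite Co"},
]

# e.g. a real-time operational feed that should not be copied into the warehouse
live_orders = [
    {"customer_id": 1, "amount": 500},
    {"customer_id": 1, "amount": 250},
    {"customer_id": 2, "amount": 300},
]

def federated_view(customers, orders):
    """Combine the two sources virtually; neither source is modified or moved."""
    totals = {}
    for o in orders:
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0) + o["amount"]
    return [{"name": c["name"], "order_total": totals.get(c["customer_id"], 0)}
            for c in customers]
```

The monitoring concern raised above applies here directly: every call to the view hits the live operational source, which is why usage of such workarounds needs to be watched.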

Data governance and some form of monitoring will be needed to ensure that the end-run or workaround can be halted if the data is subsequently incorporated
into the data warehouse. Note: retrofitting can be painful!

9.    Buffer less experienced information workers from the complexities of the BI environment.

Use features such as Web browsers, interactive graphics, wizards, drop-down lists, and prompts to guide users through BI tasks. This will free up IT professionals from spending large amounts of time responding to requests for new data, building new reports, and so on. It also gives the information consumers a sense of control and adds to the flexibility of the overall BI environment.

But beware—what’s intuitive to a BI professional is not necessarily intuitive to a naïve user. BI implementers have to think outside of their own boxes to truly understand what business users who want SS BI really need. It may mean doing their job for a day!

10.    Watch your costs.

This is a major product differentiator.

If you already have a BI vendor’s platform in place, you can often add a self-service capability with minimal effort and cost.
Many vendors offer entry-level products geared toward companies with limited budgets. Some companies use open source solutions, but there may be additional “deployment” costs.

Consider software-as-a-service (SaaS) offerings to cut capital and IT staff costs.

You must be careful not to break your budget through your self-service BI implementation! There are many deployment options available to BI implementers today that can greatly reduce the costs of these environments. However, remember to ensure that their deployment options will fit into your overall conceptual and technical architecture.

Download the Report

This report describes the technological underpinnings of these four objectives in great detail while recognizing that there are two opposing forces at work—the need for IT to control the creation and distribution of BI assets and the demand from information workers to have freedom and flexibility without requiring IT help. Companies seeking to implement self-service BI must reach a middle ground in which information workers have free access to data, analytics, and BI components while IT has oversight into the SS BI environment to observe its utilization. This gives the information workers the independence and self-determination they need to answer questions and make decisions while giving IT the ability to monitor the SS BI environment and apply governance and security measures where necessary. For guidance, this report provides practical recommendations to ensure a successful SS BI environment.


Solvency II: What CIOs need to know

Solvency II, the EU directive that updates capital adequacy rules for the European insurance industry, is about to move to centre stage. We look at what IT departments of insurance companies in the UK must do.

Compliance with Solvency II will provide IT managers with many challenges, not least the sheer scale of the exercise. There is also greater complexity, with the legal rules shifting from spelling out a series of provisions to a principles-based system.

Peter Skinner, the British MEP who nursed the package through the European Parliament, says, “Solvency II shifts the focus of supervisory authorities from merely checking compliance with a tick-the-box approach based on a set of rules to more proactively supervising the risk management of individual companies based on a set of principles.”

The directive, which cleared the Brussels legislative machinery in April last year, requires IT architectures to be ready for the directive’s enactment in national legislation by 31 October 2012. Non-compliance could endanger an insurance company’s right to trade.


Timing for setting up the modelling software to meet the new rules has to follow a set programme, broken down into stages. For instance, according to risk management consultancy Watson Wyatt, the UK Financial Services Authority (FSA) required that as early as March 2009, firms should have stated whether they planned to apply for internal model approval.

Between June and November 2010, the first model dry-run period should have started. By October 2011, the FSA should be in receipt of the first batch of dry-run submissions. Second dry runs should take place in 2011 and 2012, with the FSA review/approval process running from 2012.

Jürgen Weiss, principal research analyst at Gartner, reckons that some companies will start the main IT work in three months, but others will not get going for another nine months.

Weiss says most European insurers are still in “a discovery phase”. IT managers are uncertain about budgeting their future Solvency II programmes. Some have not even requested an IT budget to cover work in 2010 on the regulations.

Almost all IT organisations that are familiar with the regulations have focused exclusively on the first of the three pillars of Solvency II. This primarily addresses the quantitative capital requirements for European insurers and the actuarial models with which these requirements are being calculated.

Gartner believes the effort to comply with Pillar 2 requirements will be significantly higher than the effort for the other two Solvency II pillars, because of the heterogeneous IT landscape of many insurers and the work needed to at least semi-automate data collection and normalisation. Weiss says this is worrying.

He says several level-two implementation measures on Solvency II, published in November 2009 by the EU’s advisory body for the insurance industry, the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), explicitly address IT issues. Examples are advice on data quality, data governance and documentation.


Weiss says risk managers and actuaries should now be collaborating with their IT colleagues. Business and IT managers should also be aware that Solvency II requires a holistic approach to risk management, encompassing people, processes and applications.

A contrasting view on timing comes from Steve Bell, financial services advisory partner at Ernst & Young: “With regard to timescales, there are a large number of interim dates, and in my experience for most clients they are not running too late as there is time remaining to gear up programmes.

“IT will need to deliver new or enhanced risk management systems. This will be the key IT new system build. The bigger challenge will be to provide accurate data to a lower level of granularity on more regular intervals than before from source systems many of which in insurance are legacy in nature. This is the area that will be harder for insurance IT teams.”

Management consultancy Deloitte & Touche is helping insurance companies to specify their IT needs so that they can purchase compliance software. It says an official EU guideline, IP 58, gives a steer on the pre-application process, offering advice on supervisory reporting and disclosure, which deals with the requirements for insurance companies to report to both the regulators and the public.


Companies lining up to supply insurance companies with the software they need include: IBM, SAS, SAP, Oracle, Sungard, Fermat, EMB, Algorithmics, Towers Perrin, plus a fragmented array of specialist application providers.

IBM says the revision of its insurance industry framework – which it describes as a blueprint to address all three pillars of Solvency II – is complete and already being used by more than 150 insurers.

Isabella Hess, senior managing consultant at IBM Global Business Services, says IT departments should be thinking about adopting an enterprise-wide information architecture as, for most large groups, the concepts and strategic direction are “more or less ready”.

Hess says there is little real choice as the regulator and analysts are unlikely to look favourably on any big player adopting a more simplistic, standard model-based risk management framework.

Data quality

One fundamental issue is the quality, availability and traceability of data. For example, the data “granularity” defined by data models is usually hardwired into policy systems and can be difficult and expensive to change. (Granularity is the level of detail of the attributes, fields, and data types that can be provided.)
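As a simple illustration of why grain matters, the sketch below (with invented policy data) shows that coarser summaries can always be derived from fine-grained records, while the reverse is impossible once only totals are stored:

```python
# Illustrative only: how granularity constrains what can be reported.
# A policy system that stores only monthly totals (coarse grain) cannot
# answer per-claim questions; claim-level records (fine grain) can do both.
claim_level = [  # fine-grained: one record per claim
    {"policy": "P1", "month": "2010-01", "claim": 400},
    {"policy": "P1", "month": "2010-01", "claim": 150},
    {"policy": "P1", "month": "2010-02", "claim": 900},
]

def monthly_totals(claims):
    """Coarser grain is derivable from finer grain, never the reverse."""
    out = {}
    for c in claims:
        key = (c["policy"], c["month"])
        out[key] = out.get(key, 0) + c["claim"]
    return out

totals = monthly_totals(claim_level)
# totals == {("P1", "2010-01"): 550, ("P1", "2010-02"): 900}
```

If the source system had only stored the monthly totals, the per-claim detail that Solvency II reporting may demand would be unrecoverable, which is why changing hardwired granularity is so expensive.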

Similarly, insurers may need to collect more comprehensive information about the quality and risk-sensitivity of their investment portfolios than was required previously, and do so more frequently and faster.

Solvency II software will eventually better reflect an insurance company’s exposure to risk. This will enable a company to plan its business development, liquidity management and risk appetite to get the best payback on its capital reserves. In other words, IT managers will be buying a system that will enable firms to make better use of their capital.

Hess refers warmly to phase two of the International Accounting Standards Board’s forthcoming IFRS 4 on insurance contracts, for which an exposure draft is due in the second quarter of 2010. In planning Solvency II architecture, she advocates the use of other industry standards. These include Acord, the emerging international standard for information exchange, and service-oriented architecture for data exchange.

How much will it cost?

Hess says large insurance firms could be facing bills for around €100m each as a result of Solvency II. However, many have partially completed work on data and processes, leaving only “heavy lift work in intellectual modelling and embedding their enterprise risk models” to be done.

Hess estimates that second-tier insurers will need to invest between €30m and €40m over three years. Smaller companies could expect to pay €1m to €1.5m each.

According to the CEA, the European insurance and reinsurance federation, the total number of insurance companies operating in the EU is 5,200. This figure could be increased if one takes in associated economic zones, such as the European Economic Area.

More accurate ideas on cost are likely to come from the publication of the next Quantitative Impact Studies (QIS) on Solvency II, expected in August 2010. The fifth in a series of reports, the study aims to assess the likely impact on insurance markets and products, social and economic impacts and the likely impact on insurers’ balance sheets and business behaviour of the potential policy options being considered by the EC.

Insurance companies are reminded that in 2012 it will not be enough just to say that you have purchased a Solvency II compliance software package. A Brussels mandarin close to the directive emphasises that this will not satisfy the regulators. “In the UK, the FSA will never give blind approval to the software itself, but will check on functionality,” he says.
