Canadian Underwriter
Feature

Managing Data, Reducing Risk


May 1, 2010   by Wesley Gill



Even before the recent financial crisis, risk management and changes to regulatory capital frameworks were considered the next big things for insurers. With rating agencies recommending that insurers implement an enterprise risk approach to maintain a strong rating, and with implementation dates for new regulatory capital frameworks fast approaching, insurers cannot afford to delay risk management projects.

Insurance companies can learn a great deal from the lessons banks absorbed while implementing Basel II. One of the biggest issues banks faced under Basel II related to data management, not modelling. Banks concentrated on the modelling side and considered themselves well-equipped for risk modelling, but they found the data required by the regulatory framework was not available in a form consistent or reliable enough to populate their sophisticated models to a standard acceptable for regulatory purposes. As a result, 70% to 80% of the effort in many regulatory capital initiatives went into collecting, cleansing, preparing and storing the required data.

PLANNING A DATA MANAGEMENT PROJECT

For any insurer, risk models rely on pulling together large volumes of data from different, and typically incompatible, IT systems distributed throughout the company. Such data might feature different names for the same thing (John Smith or J. Smith), discrepancies in how values are represented (April 1, 2009 or Apr 1/09), multiple representations of the same data (i.e., both internal and external customer risk ratings, making it difficult to arrive at one risk rating for the customer) and so on.
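For illustration, the short Python sketch below shows one way such inconsistencies might be normalized before the data feeds a risk model. The field names and formats are hypothetical, and true entity resolution (deciding that "J. Smith" and "John Smith" are the same person) requires more than string normalization.

```python
# A minimal sketch of normalizing inconsistent source records before they
# feed a risk model. Field names and formats are hypothetical examples.
from datetime import datetime

def normalize_date(raw: str) -> str:
    """Try several common source formats and emit one canonical form."""
    for fmt in ("%B %d, %Y", "%b %d/%y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

def normalize_name(raw: str) -> str:
    """Collapse spacing and case so 'JOHN  SMITH' and 'John Smith' compare equal."""
    return " ".join(raw.split()).title()

records = [
    {"customer": "John Smith",  "effective": "April 1, 2009"},
    {"customer": "JOHN  SMITH", "effective": "Apr 1/09"},
]
for r in records:
    r["customer"] = normalize_name(r["customer"])
    r["effective"] = normalize_date(r["effective"])

print(records)  # both rows now share one representation of name and date
```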

Ideally there should be an enterprise data warehouse for the whole company, one that integrates all relevant information from internal IT systems and external sources, and that serves as a uniform and reliable source for all kinds of analytical tasks, including risk analysis. In this way, an insurer can be sure the same source data used for balance sheet reporting is also used for risk calculations, which is extremely important for the validity, transparency, reconciliation and acceptance of analyses and reports.

One of an insurance company’s most important assets is its data, yet this asset is often underused. One reason is that up to three-quarters of it is unstructured: emails, adjuster notes, financial reports and the like. Insurers must therefore implement both data-mining and text-mining techniques to gain maximum benefit from their risk management applications.
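As a rough illustration of what text mining can contribute, the hypothetical Python sketch below scans a free-form adjuster note for risk-related phrases. The keyword lists are invented for the example; a production system would rely on far more sophisticated natural-language tooling, but the idea of turning free text into structured risk indicators is the same.

```python
# A minimal sketch of a text-mining step over an unstructured adjuster note,
# using a hypothetical keyword list to produce structured risk indicators.
import re

RISK_TERMS = {
    "fraud":    ["inconsistent statement", "prior claim", "staged"],
    "severity": ["hospitalized", "total loss", "structural damage"],
}

def mine_note(note: str) -> dict:
    """Return counts of risk-related phrases found in one free-text note."""
    text = note.lower()
    return {
        category: sum(len(re.findall(re.escape(term), text)) for term in terms)
        for category, terms in RISK_TERMS.items()
    }

note = ("Insured gave an inconsistent statement about the loss; "
        "vehicle is a total loss and driver was hospitalized.")
print(mine_note(note))  # {'fraud': 1, 'severity': 2}
```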

To ensure a successful data management project, an insurer must focus on the core elements: defined data sources; data exchange, or ETL (extract, transform and load), processes; data quality; a unified data model and repository; and governance.

DATA MANAGEMENT FOR RISK

Data sources

One of the most difficult parts of a data management project is determining which data sources need to be accessed. A risk management system should be able to process and integrate information from a variety of sources, including front- and back-office applications and systems inherited through mergers and acquisitions. Because of such activity, insurance companies are notorious for having multiple legacy systems, which often result in multiple data silos.

Another data challenge facing insurance companies is the wide variety of formats. Data may be stored in relational databases, flat files, outdated database formats, XML or other formats. Additionally, some of the relevant data may be unstructured, free-form text. Once the data has been identified, insurers must decide how to extract it, and where and how to store it.
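The hypothetical Python sketch below illustrates the basic pattern: records arriving in two different formats (CSV and XML in this example) are mapped into one common structure before storage. The feeds, tags and field names are invented for illustration.

```python
# A minimal sketch of pulling the same logical record out of two different
# source formats (CSV and XML) into one common structure.
import csv, io
import xml.etree.ElementTree as ET

CSV_FEED = "policy_id,premium\nP-100,1250.00\n"
XML_FEED = "<policies><policy id='P-200'><premium>980.50</premium></policy></policies>"

def from_csv(text: str):
    for row in csv.DictReader(io.StringIO(text)):
        yield {"policy_id": row["policy_id"], "premium": float(row["premium"])}

def from_xml(text: str):
    for p in ET.fromstring(text).iter("policy"):
        yield {"policy_id": p.get("id"), "premium": float(p.findtext("premium"))}

unified = list(from_csv(CSV_FEED)) + list(from_xml(XML_FEED))
print(unified)  # both feeds land in the same record shape
```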

Unified data model and data repository

A data warehouse is essentially a large relational database that consolidates data from core data sources of various applications.

Sometimes a data warehouse will have one or more “data marts” with specific purposes. Data marts are smaller, subsidiary databases populated from the main data warehouse (the official source).
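The following Python sketch, using an in-memory SQLite database with invented table names, illustrates the relationship: the data mart is a smaller, purpose-built extract populated from the warehouse.

```python
# A minimal sketch of the warehouse/data-mart relationship using an
# in-memory SQLite database. Table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE warehouse_claims (
    claim_id TEXT, line_of_business TEXT, incurred REAL)""")
con.executemany(
    "INSERT INTO warehouse_claims VALUES (?, ?, ?)",
    [("C1", "auto", 5000.0), ("C2", "property", 12000.0), ("C3", "auto", 800.0)],
)

# The data mart is a smaller, purpose-built extract populated from the
# warehouse -- here, an auto-line view for a risk-analysis team.
con.execute("""CREATE TABLE mart_auto_claims AS
    SELECT claim_id, incurred FROM warehouse_claims
    WHERE line_of_business = 'auto'""")

print(con.execute("SELECT * FROM mart_auto_claims").fetchall())
# [('C1', 5000.0), ('C3', 800.0)]
```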

When implementing a data warehouse, it is imperative to use a data model. The data model is a “single version of truth” that stores comprehensive, accurate, consolidated and historical information related to the insurance industry. The model should be updated regularly to reflect the evolving entities and business issues that affect the capital and risk platform. According to research firm Celent, “insurers should consider purchasing a predefined data model, preferably one that is compatible with ACORD XML standards, from a systems integrator or technology vendor with deep experience in data mastery for insurance.”
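To make the idea concrete, the simplified Python sketch below defines a couple of shared entities that every source feed would be mapped into. These toy definitions are purely illustrative and are no substitute for the kind of predefined, ACORD-compatible model Celent describes.

```python
# A minimal sketch of what a shared data model provides: one definition of
# each business entity that every feed must map into. Illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Party:
    party_id: str
    name: str

@dataclass
class Policy:
    policy_id: str
    holder: Party
    effective: date
    line_of_business: str

# Every source system's records are mapped into these shared shapes, so
# downstream risk and reporting jobs see one version of the truth.
holder = Party("PTY-1", "John Smith")
policy = Policy("POL-9", holder, date(2009, 4, 1), "auto")
print(policy)
```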

ETL and data quality

Data mapping and cleansing are by far the most challenging parts of any data management project. It is generally recognized that 70% to 80% of the implementation effort for a risk management project is associated with data management. The use of ETL tools designed specifically for the insurance industry can provide significant efficiencies.

Data quality is paramount for any system whose purpose is to produce valuable business information. No insurance company can be sure its economic and/or regulatory capital calculations are accurate and reliable if the supporting data has not been cleansed and validated according to defined business rules. A simple description or abbreviation can have multiple meanings. Within auto insurance, the abbreviation BI stands for “bodily injury,” while for business owner policy insurance, BI represents “business interruption.” These discrepancies are only amplified as insurers implement a cross-border data warehouse.
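One way to handle such ambiguity is a business-rule table consulted during cleansing. The Python sketch below, using an invented rule table, resolves the “BI” abbreviation by line of business and rejects records it cannot resolve.

```python
# A minimal sketch of a business-rule validation step that resolves an
# ambiguous abbreviation by context, using the "BI" example above.
# The rule table and record fields are hypothetical.
BI_MEANING = {
    "auto":           "bodily injury",
    "business_owner": "business interruption",
}

def expand_coverage_code(record: dict) -> dict:
    """Replace an ambiguous 'BI' code with its line-of-business meaning."""
    if record["coverage"] == "BI":
        try:
            record["coverage"] = BI_MEANING[record["line_of_business"]]
        except KeyError:
            raise ValueError(
                f"No rule for 'BI' in line {record['line_of_business']!r}"
            )
    return record

print(expand_coverage_code({"line_of_business": "auto", "coverage": "BI"}))
# {'line_of_business': 'auto', 'coverage': 'bodily injury'}
```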

Auditability/governance

A vital component of any risk management system is the ability to reproduce results from a regulatory and governance perspective. One challenge insurers face with regulatory capital initiatives is meeting the prescribed disclosure requirements and being able to reconcile and trace data back to its origins.

To address this issue, a data management system is critical. The system should generate audit trails and trace information throughout the flow of data, from the point of extraction in the source systems all the way to report generation. This way, regulators and internal auditors can validate the path of the data, and internal developers can use the log information to improve the process iteratively. Additionally, a data management system should maintain historical data, which is vital for back-testing, historical simulations and comparisons of calculation and analysis results across time frames.
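As an illustration of the principle, the Python sketch below (with hypothetical stage names and fields) attaches an audit entry to a record at each step from extraction to load, so any reported figure can be traced back to its source.

```python
# A minimal sketch of carrying an audit trail alongside the data as it
# moves from extraction to reporting. Stage names and fields are hypothetical.
from datetime import datetime, timezone

def stamp(record: dict, stage: str, detail: str) -> dict:
    """Append one audit entry recording what happened to this record."""
    record.setdefault("audit_trail", []).append({
        "stage": stage,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return record

rec = {"claim_id": "C1", "incurred": "5,000"}
rec = stamp(rec, "extract", "pulled from legacy claims system CLM01")
rec["incurred"] = float(rec["incurred"].replace(",", ""))
rec = stamp(rec, "transform", "parsed incurred amount to numeric")
rec = stamp(rec, "load", "written to warehouse table claims_fact")

for entry in rec["audit_trail"]:
    print(entry["at"], entry["stage"], "-", entry["detail"])
```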

CONCLUSION

To survive and emerge stronger, it is essential for insurance companies to implement an enterprise risk management strategy that meets current, expected and future regulatory requirements. Fundamental to this initiative is a holistic, unified approach to data management, one that ensures a smooth flow of information throughout the organization.

Although new regulatory capital frameworks are not due to be implemented until 2013 or later, from both a business and a change management perspective it is advisable to start early on the design of a data management system flexible enough to anticipate regulatory requirements.



