
Diving Into Big Data


February 1, 2013   by Craig Harris



An IBM white paper notes that more than 2.5 quintillion bytes of data are created each day, while about 90% of the data in the world today has been created in the last two years alone. That sheer volume is only the first of big data’s commonly cited “Vs.” Velocity refers to “real-time” or streaming data and the need to incorporate that information into business processes and decision-making. And then there is the variety of data: structured, semi-structured and unstructured information. This could involve text, sensor data, audio, video, radio-frequency identification (RFID), click streams, log files, Twitter and Facebook posts – essentially, any type of measurable data.

“All of these terms are valid, and each captures part of what ‘big data’ can be,” says Mark Cairns, vice president and chief information officer for RSA Canada. “Personally, I see big data as a meshing of information from every possible touchpoint… and the activities to evaluate, interpret and glean value from this data by turning it into information… to enhance any organization’s decision-making capabilities.”

“It (big data) is the bringing together of a large volume of data from a variety of sources that can exceed an organization’s ability to access and analyze it in a timely manner,” observes Anna McCrindell, vice president of commercial insurance solutions for Gore Mutual Insurance Company.

“Big data to us is how we can effectively and intelligently manage massive amounts of data (historic data, transactional data and external data), structured and unstructured, to support our business operation and gain competitive advantage,” offers Robert Merizzi, chief information officer and executive vice president, business systems transformation for Aviva Canada.

Of the “Vs” associated with big data, “variety” may be the most relevant part of the equation for the p&c insurance industry, notes Aon’s global chief information officer, Steve Betts.

“When we look at the primary insurance market, I would say that only about 10% to 20% of data is in a structured format, in terms of it being organized by headers and key information,” Betts reports. “The majority of data is in the underlying descriptions and exposures, so getting a handle on that and having the ability to draw insights from that data presents a huge opportunity across the industry,” he suggests.
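To make Betts’ point concrete, here is a minimal sketch of imposing structure on free-text submission data with simple pattern matching. The sample description, field names and patterns are all invented for illustration; production systems rely on far more robust text analytics than regular expressions.

```python
import re

# A free-text risk description, typical of the unstructured 80-90%
# of submission data Betts describes. Entirely invented.
description = (
    "Three-storey bakery and cafe, built 1987, sprinklered, "
    "total insured value $2,400,000, 14 employees."
)

def extract_fields(text: str) -> dict:
    """Turn one unstructured description into structured key/values."""
    record = {}
    m = re.search(r"built (\d{4})", text)
    if m:
        record["year_built"] = int(m.group(1))
    record["sprinklered"] = bool(re.search(r"\bsprinklered\b", text))
    m = re.search(r"total insured value \$([\d,]+)", text)
    if m:
        record["tiv"] = int(m.group(1).replace(",", ""))
    m = re.search(r"(\d+) employees", text)
    if m:
        record["employees"] = int(m.group(1))
    return record

print(extract_fields(description))
# {'year_built': 1987, 'sprinklered': True, 'tiv': 2400000, 'employees': 14}
```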

NEW INSIGHTS

The ability to interpret new and emerging forms of data is what interests most brokers and insurance companies. “What was once a disjointed and ‘siloed’ set of information can now offer real-time insights when these varying data sets are compiled into an interactive structure,” suggests Geoffrey Kendrick, senior underwriter in Zurich Canada’s global corporate division. “No longer is interpretation and analysis limited to a sole set of parameters that offered a singular view or application. Credible modelling for future behaviours and a better understanding of historical trends is what is now available,” Kendrick says.

“I think the real challenge is how you extract value from that mass of internal and external data, evaluate and then apply it to business decision-making in terms of underwriting, claims and marketing,” says Dan Adamson, president and CEO of Outside Intelligence, a firm that offers big data services to insurers such as Gore Mutual, RSA Canada and The Dominion.

There are several commonly cited areas for application of big data in the p&c sector, such as marketing and new business generation, broker measurement, risk assessment, underwriting and pricing, risk or loss mitigation and claims management, especially fraud detection.

Given that more insurance companies are spending substantial money on advertising, it should be no surprise that senior executives are asking questions about the return on investment, says Julie Donahue, vice president, insurance global services at IBM.

“A lot of insurance companies are evaluating marketing programs to measure degree of success and new business generation,” Donahue reports. “Some of the questions coming down from the board level are: Was this sponsorship effective? How much sales growth did that marketing campaign yield? There are advantages if an insurer can pull different parts of data together to measure these areas.”

Risk selection and broker productivity are also emerging areas that are top of mind among insurance companies. “They (insurers) want to know what percentage of brokers are selling their products as opposed to stuff from other companies, especially given that a big chunk of business is distributed through brokers,” Donahue notes. “What are the sales percentages divided by brokers? Is there a certain threshold of business per brokerage per year? Does it fluctuate by months or quarters? Some of those answers will likely involve big data.”
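A minimal sketch of the kind of broker roll-up Donahue describes appears below, using invented policy records and column names; a real measurement program would of course run against a carrier’s actual book.

```python
import pandas as pd

# Hypothetical bound-policy data; brokers, quarters and premiums
# are invented for illustration.
policies = pd.DataFrame({
    "broker":  ["A", "A", "B", "B", "B", "C"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2", "Q2"],
    "premium": [12_000, 9_500, 22_000, 8_000, 15_500, 4_000],
})

# Share of total premium written through each brokerage.
share = policies.groupby("broker")["premium"].sum()
share_pct = (share / share.sum() * 100).round(1)

# Quarter-over-quarter fluctuation per brokerage, the kind of
# threshold question Donahue describes.
by_quarter = policies.pivot_table(
    index="broker", columns="quarter", values="premium", aggfunc="sum"
)

print(share_pct)
print(by_quarter)
```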

Adamson agrees the type of business brokers bring to an insurer is a heightened area of scrutiny. “Companies have been examining the risk selection of brokers and the kinds of risks that are coming in to them from an underwriting perspective,” he says. “There may be too much of a negative risk in terms of selection, but there may also be areas where brokers are leaving out positive information, which could lead to more aggressive pricing on an account. That is data insurers want to have,” he explains.

Adamson cites the example of a bakery for which an insurer provides commercial coverage. “Does that insurer take into account inspection reports, online reviews, safety records? This type of information is not necessarily structured,” he says. “We had an example of a bakery that was voted one of the top employers in its region. The broker had no idea, but this information was publicly available and the underwriter was quite interested in it,” Adamson adds.

INFORMATION OPPORTUNITY

Brokers themselves are also showing increased interest in big data, sources say. Adamson reports his firm is working with Hub International in this area.

“(Brokers) want to more accurately assess risks and provide detailed information to underwriters,” he suggests. “I think brokers are going to be looped in to this, but like always, there are those who are leading this and others that do business the traditional way. The Hub people see this as an opportunity to know more about their customers and provide better information to their insurance company partners,” he adds.

Aid to Risk Assessment

One example of big data in broker risk assessment is Aon’s Global Risk Insight Platform (GRIP), says Betts. This repository of information offers clients real-time data to compare coverage options and pricing against benchmarks in relevant sectors.

For Betts, “GRIP pulls together the data into one place across insurance businesses, geographies, industry segments. When you look at the fact that we do business in more than 120 countries, and draw $60 billion in premiums for our primary business, that represents a lot of data. Our focus has been on getting this data together and overlaying it to allow us to draw business insights.”
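Aon has not published GRIP’s internals, so the following is only a generic sketch of the benchmarking idea Betts describes: placing one client’s rate within a peer distribution for its segment. All figures are invented.

```python
import numpy as np

# Invented peer rates (premium per $1,000 of insured value) for one
# industry segment and geography; GRIP's real data and methods are
# proprietary, so this only illustrates the benchmarking concept.
peer_rates = np.array([1.8, 2.1, 2.4, 2.6, 2.9, 3.1, 3.4, 3.8, 4.2])
client_rate = 3.0

# Where does the client sit relative to the peer distribution?
percentile = (peer_rates < client_rate).mean() * 100
print(f"Client pays more than {percentile:.0f}% of peers")
print(f"Peer median: {np.median(peer_rates):.2f}, client: {client_rate:.2f}")
```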

He cites the example of a client in a certain industry looking to expand to another country. “We now have the span and scope of data to provide key metrics on that region,” Betts says. “Where are the risks? What are the market conditions? How does that apply to property, general liability, directors and officers liability?”

The increasing prevalence of unstructured data is a relatively new dimension for this kind of risk assessment, Betts notes. “Traditionally, high-level analysis of risks and exposures was heavily focused on structured elements of data, but as the tools mature, we are really able to draw more insight out of broader, unstructured sets of disparate data,” he observes. “We need to go across the spectrum with our broker and reinsurance business, and that means looking at exposure information, underwriting, policy and premium data, claims data, portfolio risk and reinsurance/cat exposures,” Betts says.

Aid to Underwriting

Underwriting is another area of big data that insurance companies have started to embrace. McCrindell notes that Gore Mutual’s “first step” involved using Outside Intelligence’s Risk Discovery, a platform for big data analysis.

“We are able to have rapid access to large amounts of data from many sources that may not have been as readily accessible before,” McCrindell says. “This allows us to deliver decisions on submissions faster to our broker partners, as well as learn more about the risks we already write. The competitive advantage can be lost if too much time is taken to analyze a risk,” she says.

“The goal is to focus on giving underwriters better information to identify and price risks more accurately,” Adamson comments. “We know insurers want to prove this out in terms of return on investment. For example, if we can get ahead of certain kinds of risks by gathering information from new sources and integrate that into existing data, can we underwrite and price that risk more accurately?”

Guidewire’s Eugene Lee suggests that big data can also result in better underwriting decisions in real time. “It means better underwriting decisions for underwriters who can access real-time forecast and forensic weather information to determine whether they should inspect a property for prior damage or block a policy change if the property falls inside the path of an oncoming hurricane,” he notes as one example.
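Lee’s hurricane example reduces to a geometry test. Below is a minimal sketch using the shapely library, with an invented forecast cone and invented property coordinates; a real system would consume live forecast feeds rather than a hard-coded polygon.

```python
from shapely.geometry import Point, Polygon

# Simplified forecast cone for an approaching storm, as lon/lat
# vertices. Invented for illustration; real feeds are far more
# detailed and updated continuously.
forecast_cone = Polygon([
    (-80.5, 25.0), (-79.0, 27.5), (-80.0, 30.0),
    (-82.5, 29.0), (-82.0, 26.0),
])

def allow_policy_change(lon: float, lat: float) -> bool:
    """Block mid-term changes for properties inside the cone."""
    return not forecast_cone.contains(Point(lon, lat))

# An illustrative property location that falls inside the cone.
print(allow_policy_change(-81.4, 28.5))  # False: change is blocked
```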

RSA’s Mark Cairns expects that pricing will be influenced by new data sources and analytic tools. “In the insurance industry, we have seen some of our larger pioneers taking advantage of vehicle telemetry to aid in driver insight analysis,” he says. “This will lead to another metric or rating component when determining the optimal pricing (and) alter how traditional actuarial-based personal auto rating has been calculated,” he predicts.
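A small illustration of how telemetry might feed the rating component Cairns anticipates. The thresholds, weights and factor form are invented; real usage-based rating is an actuarial exercise, not a few lines of arithmetic.

```python
# Minimal sketch: turn raw telemetry summaries into a multiplicative
# rating factor around 1.0. All parameters are invented.

def telematics_factor(km: float, hard_brakes: int, night_km: float) -> float:
    """Surcharge/discount factor from distance, braking and night driving."""
    brakes_per_100km = hard_brakes / km * 100
    night_share = night_km / km
    factor = 1.0
    factor += 0.05 * max(0.0, brakes_per_100km - 2.0)  # frequent hard braking
    factor += 0.10 * max(0.0, night_share - 0.25)      # heavy night driving
    if brakes_per_100km < 1.0:                         # smooth-driver credit
        factor -= 0.05
    return round(factor, 3)

# A smooth daytime driver versus a harsher profile.
print(telematics_factor(km=1200, hard_brakes=6, night_km=120))   # 0.95
print(telematics_factor(km=1200, hard_brakes=48, night_km=600))  # 1.125
```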

Aid to Loss Prevention

Access to new forms of data can affect loss prevention measures for clients, says Kendrick. “Zurich has a wealth of data that we utilize for our partners; from implementing industry-specific loss reduction strategies to global risk transfer solutions that result in operational efficiencies,” he notes. “(We) take advantage of broad data sets to help identify evolving and international claims trends and make strategic underwriting decisions.”

FRAUD DETECTOR

When it comes to claims, one of the most frequently touted benefits of big data is in improved fraud detection. IBM is using big data analysis techniques for insurance fraud with a large Canadian p&c insurer – an initiative that Donahue reports is the first of its kind in the world.

“In terms of fraud, we are taking internal and external data and using sophisticated analytic tools to look at it through a different lens,” Donahue explains, adding that initial results of the project are expected this spring. “We are backing up a bit into the customer selection and underwriting process. We’re asking: ‘Can we use data, claims, customer information and location intelligence to screen out fraud earlier in the process?’”
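IBM has not disclosed the project’s methods, so the following is only a generic sketch of early-stage fraud screening: scoring a claim against a handful of red-flag rules. The fields, rules and weights are all invented.

```python
# Invented red-flag rules with weights; real screening combines many
# more signals, statistical models and investigator feedback.
RED_FLAGS = [
    ("claim soon after policy inception",
     lambda c: c["days_since_inception"] < 30, 0.30),
    ("frequent prior claims",
     lambda c: c["prior_claims_3yr"] >= 3, 0.25),
    ("claimed amount near policy limit",
     lambda c: c["claim_amount"] > 0.9 * c["policy_limit"], 0.25),
    ("address linked to earlier investigations",
     lambda c: c["linked_address"], 0.20),
]

def fraud_score(claim: dict) -> float:
    """Sum the weights of triggered red flags."""
    return sum(w for _, rule, w in RED_FLAGS if rule(claim))

claim = {
    "days_since_inception": 12,
    "prior_claims_3yr": 1,
    "claim_amount": 46_000,
    "policy_limit": 50_000,
    "linked_address": False,
}

score = fraud_score(claim)
print(f"score={score:.2f} -> "
      f"{'refer to SIU' if score >= 0.5 else 'fast-track'}")
```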

The true benefits of big data fraud detection and prevention will be found in greater information sharing amongst p&c insurance companies, Donahue suggests. “The real opportunity, of course, is to do this across companies,” she says. “As many in the industry know, organized fraud rings will use multiple companies, so access to the widest possible data is crucial.”

“There is a strong need to have industry-wide data sharing to gain business insights (e.g. anti-fraud) that will enable the insurance industry to provide better service and lower cost to the consumers,” suggests Robert Merizzi of Aviva Canada.

Lee also notes that big data may have a wider application in the efficiency of claims management departments. “(Big data) can mean reduced claim expenses for managers who can more accurately allocate their internal and independent adjusting resources to those properties that require inspection and remote adjust those that do not,” he comments.

While there are rapidly emerging examples of big data in the p&c insurance sector, several sources say that true business adoption of unstructured information is still in the early stages.

“At RSA, we are in our infancy taking advantage of big data,” Cairns observes. “We are in the process of augmenting our infrastructure to include the requisite tools to be able to interpret big data in a more succinct and timely manner,” he says.

“I don’t think of big data as a thing,” Lee says. “It’s the water we swim in. So the practical application of big data in the insurance industry is to help our organizations do everything we do, but to do them more effectively.”

HURDLES TO CLEAR

Several challenges stand in the way of full-fledged implementation of big data analysis for real business outcomes.

“I think one of the key challenges is that there is no magic formula to bringing disparate data sets together to get a common basis from which to draw insights,” Betts notes. “You have to constantly work at it, and in some cases, it is a gradual, iterative approach,” he says.

“I go back to proving out the return on investment,” Adamson says. “You can’t just do a science experiment – the data, both external and internal, has to be extracted into meaningful business decisions. It is not just dumping more screens of data onto underwriters; they already have that. It means weeding out the noise and synthesizing the data to allow the underwriters to do their jobs better,” he says.

“There are many challenges when it comes to big data,” Merizzi observes. “A few come (to) mind: How to manage and store these massive amounts of data in a cost-effective way? How to deal with historic data that have different formats? How to combine structured and unstructured data seamlessly in order to enable analytics?”
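Merizzi’s third question, combining structured and unstructured data for analytics, often comes down to reducing text to columns so that a single table can feed downstream models. A minimal sketch with invented records follows; real systems would use proper NLP rather than keyword flags.

```python
import pandas as pd

# Structured policy records and unstructured adjuster notes, with
# invented fields. The point is the join: text-derived features
# become columns alongside the structured data.
policies = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "premium":   [1800, 2400, 950],
})

notes = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "note": [
        "Water damage in basement, prior leak repaired in 2010.",
        "No issues noted on inspection.",
        "Roof shows wear; owner disputes prior hail damage claim.",
    ],
})

# Crude keyword features standing in for real text analytics.
notes["mentions_prior_loss"] = notes["note"].str.contains("prior", case=False)
notes["mentions_dispute"] = notes["note"].str.contains("disput", case=False)

combined = policies.merge(
    notes[["policy_id", "mentions_prior_loss", "mentions_dispute"]],
    on="policy_id")
print(combined)
```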

For McCrindell, one of the “Vs” that applies to big data is a potential concern for insurers: veracity. “One of the main challenges when it comes to big data is being able to identify what information is credible,” she suggests. “Ensuring the accuracy of data is key. As an insurer, we need to vet what is credible, and balance it with traditional information and experience to make the best decisions,” she emphasizes.
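A toy illustration of the veracity problem McCrindell raises: three sources disagree on a building’s construction year, and assumed credibility weights (invented here) decide which value to accept.

```python
# Conflicting reports of the same fact, each with an invented
# credibility weight reflecting how much the source is trusted.
reports = [
    {"source": "inspection report", "year_built": 1987, "weight": 0.9},
    {"source": "broker submission", "year_built": 1990, "weight": 0.6},
    {"source": "online listing",    "year_built": 1979, "weight": 0.3},
]

# Weighted vote: the value with the most combined credibility wins.
votes: dict[int, float] = {}
for r in reports:
    votes[r["year_built"]] = votes.get(r["year_built"], 0.0) + r["weight"]

best = max(votes, key=votes.get)
print(f"accepted year_built={best} (support {votes[best]:.1f})")
```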

In addition, big data’s arrival on the insurance scene has been preceded by pre-existing IT projects at companies. “In many cases, insurance companies already have several multi-year data warehousing projects on the go, so they need to look for solutions that help piece this information together,” Adamson says. “We often find we can help them with the external data part of the solution, and then we can deal with the integration problem.”

The p&c insurance industry has a frequently cited reluctance to be on the leading edge of technological innovation. Big data’s tendency to be seen as a “living exercise” that will continue to grow and “morph dynamically” may represent a “contrarian mindset to the traditional insurance space,” Cairns suggests.

Donahue holds that insurers have to roll up their sleeves and get started. “A lot of insurers spend a great deal of time getting ready for the party; they should just have the party. Define a key business issue, understand your desired outcomes and start a big data pilot project,” she says.

For Betts, there is an undercurrent of interest in big data among Aon senior managers that he has not traditionally seen in IT projects. “Our insurance executives in the risk business have shown real enthusiasm for the possibilities found in GRIP and big data,” he says. “When they see what is possible through this data, they get excited,” he adds.

“With the changes in data analytics and actuarial capabilities, companies are now able to do analysis that was not previously possible,” McCrindell observes. “This changes how business is priced, the speed at which insurers can respond, and their ability to provide sound risk management solutions for their broker partners,” she says.

“The applications for utilizing a big data structure are unlimited and present meaningful opportunities for insurers to offer new, optimized and tailored risk reduction strategies,” Kendrick notes.

“The possibilities of what can be gleaned from all this information are almost endless,” Cairns adds. “Firms that can take advantage of the tens of millions of tweets per day, or the more than 10 terabytes of Facebook information generated daily, will be able to garner immense value from both sources,” he argues.

“I think the biggest challenge for enterprises is analogous to the challenges we face as a society; namely, that we need to become a lot more ‘data-literate,'” Lee concludes. “What is data good for? How should it be read and interpreted? As our society becomes more data-literate, consumer expectations will change and businesses will have to change along with them.”

