How Big Data Analytics is Re-Inventing the Insurance Business

The insurance industry is–by definition and by practice–generally averse to risk. But thanks to the success of early adopters of data analytics, insurance companies in the $1.1-trillion U.S. market are scrambling to ramp up their own data analytics practices before it’s too late.

In his 25 years in the insurance business, Capgemini’s Seth Rachlin has never seen insurance companies move so quickly to change their business models.

“Traditionally insurance has been a slow-moving business,” says Rachlin, who is a vice president within Capgemini Financial Services’ insurance business unit and heads up its data analytics practice. “But the pace of change frankly in the past two to three years is something I’ve never seen before within the industry.”

The industry’s “reform” moment occurred when a handful of insurance companies at the bleeding edge of analytics posted stellar results. That triggered a chain reaction that is still playing out today.

“It’s driven by, broadly speaking, the capability of data and analytics to materially impact performance,” Rachlin says.  “We’re seeing a tremendous desire to leverage technology broadly, and data more specifically. The business is getting it, and the business is wanting to act on it. And I think there’s even a level of fear of being left behind.”

The Transformation

The analytic transformation originated in the automotive insurance market, which accounts for a big chunk of the larger $500-billion property and casualty (P/C) insurance business in this country.

Traditionally, car insurance companies would price policies based on rating classes. Perhaps 10 to 20 variables went into these pricing calculations: things like the driver’s age, gender, ZIP code, miles driven, and driving record.
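
To make that concrete, here is a toy sketch of factor-based pricing in Python; the base rate, rating factors, and relativity values are all invented for illustration and are not actual actuarial figures.

```python
# Toy version of traditional rating-class pricing: a base rate is
# multiplied by a relativity (multiplier) for each rating factor.
# All numbers are made up for illustration.

BASE_RATE = 600.0  # hypothetical annual base premium, in dollars

RELATIVITIES = {
    "age_band":  {"16-24": 1.80, "25-64": 1.00, "65+": 1.15},
    "territory": {"urban": 1.25, "suburban": 1.00, "rural": 0.90},
    "mileage":   {"low": 0.95, "average": 1.00, "high": 1.20},
    "record":    {"clean": 1.00, "one_incident": 1.35},
}

def price_policy(driver: dict) -> float:
    """Multiply the base rate by the relativity of each rating factor."""
    premium = BASE_RATE
    for factor, classes in RELATIVITIES.items():
        premium *= classes[driver[factor]]
    return round(premium, 2)

# A young urban driver with average mileage and a clean record:
print(price_policy({"age_band": "16-24", "territory": "urban",
                    "mileage": "average", "record": "clean"}))  # 1350.0
```

Because the model is multiplicative over only a handful of factors, two drivers who match on those factors pay exactly the same premium, however much their actual risk differs.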

But then a handful of auto insurance firms started gathering far more data about potential clients, such as credit scores and reputational data from Yelp, and using it to populate models with upwards of 1,000 variables.

All this data allowed the models to support a much larger number of finer-grained rating classes. As the classes became smaller and more targeted, the early analytic adopters could not only price their risk more effectively than competitors using traditional pricing models, but also lower their claims payouts.
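
Below is a minimal sketch of what the finer-grained approach can look like, assuming a Poisson GLM on claim frequency fit with scikit-learn; the synthetic data, the sparse “true” signal, and all parameter values are assumptions made for illustration, not a description of any carrier’s actual model.

```python
# Sketch of a claim-frequency model over many rating variables.
# The data is synthetic; in practice the columns would come from
# internal and third-party sources (credit, telematics, etc.).
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_policies, n_features = 10_000, 1_000

# Standardized rating variables, where only a small subset actually
# drives claim frequency in this synthetic setup.
X = rng.normal(size=(n_policies, n_features))
true_coef = np.zeros(n_features)
true_coef[:20] = rng.normal(scale=0.2, size=20)
claims = rng.poisson(np.exp(-2.0 + X @ true_coef))  # observed claim counts

# L2 regularization (alpha) keeps the 1,000-variable fit stable.
model = PoissonRegressor(alpha=1.0, max_iter=300)
model.fit(X, claims)

# Per-policy expected claim frequency supports far finer pricing
# than a lookup in a coarse rating table.
print(model.predict(X[:5]))
```

Each policy gets its own predicted frequency, so pricing granularity is limited by the data rather than by the size of a rating table.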

“That experience of using data and using models built on the data to better price and better select risk – that’s been going on by leading companies for a number of years now,” Rachlin says. “But everybody’s kind of got religion and they’re trying to apply them more broadly across the industry to the issues of how price affects customer acquisition and how data can influence risk selection.”

But why is this occurring now? According to Rachlin, there are two main reasons: advances in the sophistication of statistical modeling techniques, and the availability of parallel computing power.

Check out the full story
