Analytics: Absolutely Critical but Potentially Deceptive, Especially Now!

We have talked in the last few years about the Three Levers of Value: selling more, managing risk better, and costing less to operate. Of course, achieving these Levers depends on data and on the insights analytics provides from that data. As insurers progress on their analytics journey, they need to collect and assess data, enrich it where necessary, and then apply analytics models to obtain insights that drive action, either through formal processes or targeted interactions.

Data Governance and Stewardship

Data itself is very hard to govern, in part because ownership of data is unclear, definitions of data are inconsistent, and quality of data is difficult to measure. Owners of data may include the CIO, the chief data officer, actuarial, underwriting, or claims leaders, or occasionally a data governance committee. A data infrastructure, independent of core operating systems, must be established, including operational data stores, virtual data warehouses, data marts, and tools that can be used to access and visualize the data.

Definitions of data and how it can be used need to be captured in a data dictionary or thesaurus. There are many types of data, including customer, policy, claims, financial, and operating data, as well as unstructured data like notes, videos, call recordings, and pictures. The data can come from within the organization or from third parties.
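To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative field names rather than any standard) of what a single data dictionary entry might capture about a data element: its definition, owner, source, and permitted uses.

```python
from dataclasses import dataclass, field

@dataclass
class DataDictionaryEntry:
    """One governed data element; all field names here are illustrative only."""
    name: str              # canonical element name, e.g. "policy_effective_date"
    definition: str        # business definition agreed with the data owner
    owner: str             # accountable steward (person, role, or committee)
    source: str            # system of record or third-party feed
    data_type: str         # expected type/format
    permitted_uses: list = field(default_factory=list)  # e.g. ["underwriting", "claims"]

entry = DataDictionaryEntry(
    name="policy_effective_date",
    definition="Date coverage begins under the issued policy",
    owner="Chief Data Officer",
    source="Policy administration system",
    data_type="date (ISO 8601)",
    permitted_uses=["underwriting", "actuarial", "claims"],
)
print(entry.name, "->", entry.owner)
```

Whether entries live in a purpose-built catalog tool or something this simple matters less than having an agreed owner and definition for each element.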

Finally, data needs to be kept current and relevant. As we collect more and more data to feed into our analytics models through the data infrastructure, we need to understand the five “Vs” of data (a simple profiling sketch follows the list):

  • Volume: How much data is coming in
  • Velocity: The speed at which data is coming in
  • Variety: The types of data, including structured and unstructured, from various sources
  • Veracity: Quality of data, including whether it is “good” enough for the purpose needed
  • Value: Can you monetize the data collected by using it to push one of the Three Levers?
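To show how a few of these can be measured in practice, here is a minimal profiling sketch over a hypothetical batch of claims records (field names and thresholds are illustrative only), covering volume, velocity, and a simple veracity check:

```python
from datetime import datetime

# Hypothetical incoming batch of claims records, each with a received timestamp
batch = [
    {"claim_id": "C1", "paid_amount": 1200.0, "received_at": datetime(2020, 5, 1, 9, 0)},
    {"claim_id": "C2", "paid_amount": None,   "received_at": datetime(2020, 5, 1, 9, 5)},
    {"claim_id": "C3", "paid_amount": 430.0,  "received_at": datetime(2020, 5, 1, 9, 7)},
]

# Volume: how much data is coming in
volume = len(batch)

# Velocity: records per minute over the batch window
window_minutes = (max(r["received_at"] for r in batch)
                  - min(r["received_at"] for r in batch)).total_seconds() / 60
velocity = volume / max(window_minutes, 1)

# Veracity: share of records complete enough for the intended purpose
complete = sum(1 for r in batch if r["paid_amount"] is not None)
veracity = complete / volume

print(f"volume={volume}, velocity={velocity:.2f}/min, veracity={veracity:.0%}")
```

Variety and value are harder to reduce to a single number, but the same habit applies: measure the feed before you trust what it drives.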

Analytics and Models

Data becomes the input into our analytics models. It is now possible to automate the matching, gathering, and analysis of data. We can use AI algorithms to scan the large volumes of disparate external and internal data carriers have at their disposal, determine the most relevant data, and ingest it into predictive analytics models. The AI models “learn” over time based on their historical success rates. Decisions can initially be recommended to a human and, down the road, executed autonomously. The huge volume and variety of data can improve both the speed and the accuracy of the analytics being performed.
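As a minimal sketch of that learning loop (synthetic data and a plain scikit-learn logistic regression, not any particular carrier’s model): the model is fit on historical outcomes, refit as new outcomes arrive, and its predicted probability for a fresh case is what drives the recommendation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical data: two risk features and a binary loss outcome
X_hist = rng.normal(size=(500, 2))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)

# New observations arrive with known outcomes; fold them in and refit,
# so the model "learns" from its accumulating history
X_new = rng.normal(size=(50, 2))
y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] + rng.normal(scale=0.5, size=50) > 0).astype(int)

X_all = np.vstack([X_hist, X_new])
y_all = np.concatenate([y_hist, y_new])
model = LogisticRegression().fit(X_all, y_all)

# Score a fresh case: the predicted probability drives the recommendation
print(f"predicted loss probability: {model.predict_proba([[0.3, -0.1]])[0, 1]:.3f}")
```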

Historically, data and analytics were reactive. We would collect data about things that had happened and run analysis using analytics models with scenarios that had different underlying assumptions to make decisions manually. Once a decision was made, it needed to be implemented. This could happen through a process improvement, changing an underwriting guideline, altering a claims adjudication process, or accelerating/decelerating an activity like selling a product.

Now, analytics utilizing AI can be embedded in all parts of a process, driving its outcome. For example, underwriting processes can utilize data-driven analytics to understand what data to gather, match data, pre-fill data into forms or screens, identify and score risk, decide whether to recommend underwriting the policy, and update and assess the overall portfolio risk across policies.
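A highly simplified sketch of how such a scoring step might be embedded in that flow, with hypothetical application fields, hand-picked weights, and illustrative thresholds rather than a real underwriting model:

```python
def prefill_application(application: dict, third_party: dict) -> dict:
    """Fill gaps in the submitted application from a third-party data source."""
    filled = dict(application)
    for key, value in third_party.items():
        filled.setdefault(key, value)
    return filled

def score_risk(app: dict) -> float:
    """Toy risk score in [0, 1]; a real model would be trained, not hand-weighted."""
    score = 0.2
    score += 0.3 if app.get("prior_claims", 0) > 2 else 0.0
    score += 0.2 if app.get("years_in_business", 10) < 3 else 0.0
    return min(score, 1.0)

def underwriting_decision(score: float) -> str:
    """Route by score: auto-accept, refer to an underwriter, or decline."""
    if score < 0.3:
        return "accept"
    if score < 0.6:
        return "refer"
    return "decline"

app = prefill_application(
    {"business_name": "Example Salon", "prior_claims": 1},
    {"years_in_business": 2, "industry_code": "8121"},
)
score = score_risk(app)
print(app, "->", round(score, 2), "->", underwriting_decision(score))
```

The point is less the scoring logic than the placement: the analytics sits inside the process and shapes the next step, rather than being run after the fact.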

We still run scenarios with different underlying assumptions. However, the data being fed into the scenarios can be updated in real time. Based on the statistical probabilities of the outcomes, we can alter our actions. We can utilize an Agile engineering model to continue to improve our analytics and AI algorithms, bringing together the folks who build the models with the folks who understand the technology and data architecture.
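Here is a minimal sketch of that kind of scenario run, using synthetic frequency and severity assumptions and made-up numbers, estimating the probability that aggregate losses exceed a threshold under a baseline and a stressed set of assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_losses(claim_frequency, mean_severity, n_sims=5_000):
    """Aggregate-loss simulation: Poisson claim counts with gamma severities."""
    counts = rng.poisson(claim_frequency, size=n_sims)
    # Gamma severity with the given mean; the shape parameter is illustrative only
    return np.array([rng.gamma(shape=2.0, scale=mean_severity / 2.0, size=c).sum()
                     for c in counts])

# Two scenarios with different underlying assumptions (numbers are made up)
baseline = simulate_losses(claim_frequency=4.0, mean_severity=10_000)
stressed = simulate_losses(claim_frequency=6.5, mean_severity=12_000)

threshold = 60_000
for name, losses in [("baseline", baseline), ("stressed", stressed)]:
    print(f"{name}: P(losses > {threshold:,}) = {(losses > threshold).mean():.1%}")
```

When the incoming data changes, only the assumptions feeding the scenarios need to be refreshed; the comparison of outcome probabilities is rerun on demand.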

Analytics and Insight

Sounds great! However, this whole thing can be quite deceptive. There are a number of subjective aspects of analytics modeling related to AI algorithms: the structure of the models themselves, the underlying causal assumptions, and the gap between what the model says will happen and what actually does happen, which can be very large.

Let’s take an example right out of the news. The Institute for Health Metrics and Evaluation (IHME) does COVID-19 analytics modeling for infections, testing, beds needed, ICU beds needed, and deaths per day at a country and state level. It is based on data from hospitals as well as mobility information derived from de-identified mobile phone data. The model originally predicted about 65,000 deaths by August 4. Today it is predicting 147,000 deaths by that date. What changed? Well, states have relaxed social distancing efforts to open up their economies, and people are moving around more. This directly impacts the death rate.

Another thing that changed: the quality of the data. Nursing home and in-home infections and deaths were not being reported initially. We are still unsure whether all the data is accurate. If it is not, the conclusions from the analytics models will not be accurate either.

The same risks occur in modeling insurance outcomes. We may believe we have correct data about workers’ compensation risks. However, COVID-19 can create secondary illnesses like strokes and heart attacks, and people may be out of work much longer than the two to three weeks it takes to get over a mild COVID-19 infection. If no one is driving a car, is the underwriting model used to price an auto policy still accurate? What about professional liability insurance? Could a doctor’s malpractice risk be altered by the pandemic, or could a hair salon face suits for not providing proper protections?

We don’t want to throw the baby out with the bath water. Just because our existing assumptions have changed doesn’t mean all the data we collected or the models we created should be disposed of. We need to update assumptions, allow AI algorithms to relearn (accepting errors in the short term), and use Agile processes to continually evolve the analytics models we rely on. Most importantly, we need to apply common sense to the situation at hand so we can continue to leverage predictive analytics and AI.
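As one hedged sketch of what “relearning without starting over” could look like, assuming a scikit-learn-style model: keep the historical data, but weight recent observations more heavily when refitting. The weighting scheme here is an assumption for illustration, not a prescription.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Older data reflects pre-pandemic behavior; recent data reflects the new regime
X_old = rng.normal(size=(400, 2))
y_old = (X_old[:, 0] > 0).astype(int)
X_recent = rng.normal(size=(100, 2))
y_recent = (X_recent[:, 1] > 0).astype(int)   # the relationship has shifted

X = np.vstack([X_old, X_recent])
y = np.concatenate([y_old, y_recent])

# Keep all the data, but let recent observations count more during refitting
weights = np.concatenate([np.full(400, 1.0), np.full(100, 3.0)])
model = LogisticRegression().fit(X, y, sample_weight=weights)

print("coefficients after reweighted refit:", model.coef_.round(2))
```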

Upcoming Special Interest Group

On June 11, my colleague Eric Weisburg and I will host our Virtual Special Interest Group to cover the latest challenges for analytics. Please register and join us and your peers in the conversation.
