Analytics: In the time of COVID-19, can it really be predictive and unbiased?

Analytics models are like living organisms. They need to be reviewed, recalibrated, and updated based on the initial outcomes they produce and the relevance of the historical data fed into them.

During the first week of December, Novarica SVP Mitch Wein and I hosted our last Analytics Special Interest Group of the year. We were joined by a panel of IT leaders—Josh Edwards of Pekin Insurance, Asif Syed of The Hartford Steam Boiler, Kristi Altshuler of Donegal Group, and Jim Kinzie of QBE Insurance Group—to discuss analytics during the pandemic, counterintuitive lessons from data, and the ethics of data sets and models.

The pandemic has disrupted the data lifecycle. Insurers are questioning the relevance of data in current and future predictive models—pre-pandemic data may not represent the post-pandemic future.

Many insurers are holding off recalibrating their models because they don’t fully understand the driving behaviors and exposures. They are in an analytics holding pattern, using this time to build governance processes and models that monitor and identify potential risks and exposures in their books of business.

The panel discussed an interesting use case regarding the personal auto refunds that policyholders were receiving. A participant described how their company decided not to distribute refunds. They came to this conclusion after mining unstructured social media brand-sentiment data surrounding insurers that did deliver refunds. They found that the discount created a negative incentive and proved polarizing, as many policyholders felt they deserved a bigger discount. Our panelists shared other instances where data findings appeared to be almost counterintuitive.

Toward the end of the session, one of our panelists asked how companies ensure that they create models that are more than just predictive, taking into account bias and ethics. Our panelists discussed how the answer begins with the model's initial design, ensuring that business leadership selects and discusses a diverse set of inputs. Machines can't run unmonitored; there need to be checkpoints and an understanding that a model needs to be retired if it drifts too far in the wrong direction. We heard from one panelist that evaluating the initial data sets for bias is critical for model development. Insurers need to be aware that any model could potentially produce biased results.
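None of the panelists shared implementation details, but the "checkpoint" idea above has a common concrete form: periodically comparing the distribution of a model's inputs or scores against the baseline period it was trained on. The sketch below, a generic illustration using the standard population stability index (PSI) rather than anything discussed in the session, shows how a sustained shift (such as pre- vs. post-pandemic behavior) can be quantified; the variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a model input or score between a
    baseline period and a recent period. Larger values mean more drift."""
    # Bin edges come from the baseline distribution (e.g., pre-pandemic).
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logs.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: scores drift after an external shock.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # scores during the training period
recent = rng.normal(0.5, 1.2, 10_000)     # scores after behavior shifts
psi = population_stability_index(baseline, recent)
```

A common rule of thumb treats PSI below 0.1 as stable, 0.1 to 0.25 as worth watching, and above 0.25 as a signal to recalibrate or retire the model, which maps directly onto the governance checkpoints the panel described.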

We look forward to continuing the discussion with data and analytics professionals at our next Analytics Special Interest Group on February 11th, 2021.
