Are You Ready for Artificial Intelligence?

Insurance carriers are contemplating how to leverage artificial intelligence applications—if they haven’t implemented them already—to improve top-line growth, customer experience, claim fraud detection, risk assessments, and more.

There are many types of AI applications and tools that support sophisticated algorithms and machine learning techniques. Above all else, the factor that determines the success or failure of any AI application is data quality. This is not to diminish the importance of robust AI tools and good data scientists, but they still need high-quality raw material to make a good product. In the case of AI, the raw material is data.

CIOs must take steps to prepare their organizations' internal and third-party data to feed these algorithms and tools. Here are some of the data issues to consider when doing so.

Discussions of data augmentation using new public data and various third-party sources are common in IT departments today. Before insurer CIOs explore outside data, they should assess the quality, structure, and availability of their critical internal information.

Consider the following. Does the carrier have mastering in place for key data elements that link across the enterprise? What is the source of truth for critical data elements? Are there policies and governance in place to manage data element changes during new projects or initiatives? If the answer to any of these questions is no, what will happen when a project changes an element that comes into play downstream, for example in an algorithm that handles customer calls or drives a chatbot? CIOs need to identify this change before it happens, not when an algorithm generates errors and customers call to complain.
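As a rough illustration of the kind of downstream linkage check this implies, the sketch below compares a mastered customer identifier across two hypothetical extracts, one from policy systems and one from claims. The file names, column names, and threshold are assumptions for the example, not a prescribed implementation.

import pandas as pd

# Hypothetical extracts; in practice these would come from the mastered
# source-of-truth systems identified during data governance work.
policies = pd.read_csv("policy_extract.csv")   # assumed to contain customer_id
claims = pd.read_csv("claims_extract.csv")     # assumed to contain customer_id

# Claims whose customer identifier no longer resolves against the policy master.
orphaned = claims[~claims["customer_id"].isin(policies["customer_id"])]

orphan_rate = len(orphaned) / len(claims)
print(f"Orphaned claim records: {len(orphaned)} ({orphan_rate:.1%})")

# A simple guardrail: stop the pipeline and alert the data owner before a
# downstream model or chatbot starts consuming broken linkages.
if orphan_rate > 0.01:  # threshold is an assumption for illustration
    raise ValueError("Customer ID linkage degraded; investigate upstream change")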

A report by Novarica, Master Data Management in a Big Data World, provides a checklist of best practices for managing critical internal data. These include identifying a business data sponsor, establishing an organizational structure for the data, aligning data architecture to business goals, and defining policies to govern data, among others. Data mastering is a multi-year journey that advances through incremental steps.

Then there is the matter of data storage. Most carriers have invested in some form of structured data warehouse. CIOs must assess whether this structure can support the intense data pulls and manipulation that data scientists require to develop algorithms. If not, organizations may need to export data to Hadoop or another platform that allows for more flexibility and speed.
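To make that option concrete, the sketch below shows one way such an export might look, using PySpark to pull a table from a relational warehouse over JDBC and land it as Parquet on a Hadoop cluster. The connection details, credentials, and table name are placeholders, and the approach assumes a Spark cluster with HDFS and a suitable JDBC driver.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warehouse_export").getOrCreate()

# Pull a claims table from the structured warehouse (placeholder connection details).
claims = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://warehouse.example.com:5432/insurance")
    .option("dbtable", "claims")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land the data as columnar Parquet files on HDFS, where data scientists can
# run heavy exploratory pulls without loading the production warehouse.
claims.write.mode("overwrite").parquet("hdfs:///analytics/claims")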

Organizations will also need to determine who will manage this data (IT or the data science teams) and whether to host it on premises or in the cloud. Many organizations now leverage the scalability and flexibility of the cloud, but without the necessary controls in place, a single errant query can blow through the cost model.
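One common control is to estimate a query's cost before letting it run. The sketch below assumes a carrier using Google BigQuery and its dry-run feature; the byte threshold and query are illustrative only, and other cloud warehouses offer comparable guardrails.

from google.cloud import bigquery

client = bigquery.Client()  # assumes default project credentials are configured

def run_with_cost_guard(sql, max_bytes=100 * 1024**3):
    """Dry-run the query and refuse to execute it if it would scan more than
    max_bytes (the 100 GB default here is an illustrative assumption)."""
    dry_run = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    estimate = client.query(sql, job_config=dry_run)
    if estimate.total_bytes_processed > max_bytes:
        raise RuntimeError(
            f"Query would scan {estimate.total_bytes_processed:,} bytes; blocked"
        )
    return client.query(sql).result()

rows = run_with_cost_guard("SELECT claim_id, paid_amount FROM analytics.claims")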

Then there is the task of determining what third-party data to pull from the myriad sources of information—everything from images to data elements. Carriers need to determine how they will link this data to their own as well as verify the source, quality, and coverage of that third-party data. Organizations often consider multiple sources to build a complete data set.
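As a rough sketch of that verification step, the example below joins a hypothetical third-party property dataset to internal policy records on a shared address key and reports coverage, that is, how many policies the external source actually enriches. The file names, columns, and threshold are assumptions for illustration.

import pandas as pd

# Internal policies and a purchased third-party property dataset (hypothetical files).
policies = pd.read_csv("policies.csv")            # includes a normalized address_key
third_party = pd.read_csv("vendor_property.csv")  # includes address_key, roof_age, etc.

# Link external attributes to internal records on the shared key.
enriched = policies.merge(third_party, on="address_key", how="left")

# Coverage: what share of the policy book the vendor data actually describes.
coverage = enriched["roof_age"].notna().mean()
print(f"Vendor coverage of policy book: {coverage:.1%}")

# Low coverage is one reason organizations combine multiple sources.
if coverage < 0.80:  # illustrative threshold
    print("Consider supplementing with an additional data source")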

The industry now has tremendous opportunities to leverage data science to change business models. Even so, programmers, tools, and advanced processes cannot build superior products from inferior raw materials. Prepare for AI by preparing data.
