Large Language Model Threat: What CISOs Should Know About the World of ChatGPT

Report Summary

Today, hackers can use LLMs such as ChatGPT to create malware and craft perfect phishing emails.

Tari Schreider
Strategic Advisor

March 14, 2023 – Large language models (LLMs) are deep learning algorithms that recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive data sets. An LLM trained on large volumes of adversarial attack information would act as a force multiplier for attackers. Next-gen hackers won't suffer the frailties of their predecessors, such as poor coding, misspelled phishing emails, and weak target reconnaissance, resulting in more successful hacking incidents.

This Impact Brief serves as a primer on LLMs to introduce CISOs to a threat they likely have not had much time to contemplate. This emerging threat is illustrated by a BlackBerry survey of 1,500 IT decision-makers, a majority of whom said a successful cyberattack would be credited to ChatGPT within the year.

Clients of Aite-Novarica Group’s Cybersecurity service can download this report and the corresponding charts.

This report mentions BlackBerry, Check Point Research, Council of the EU, DeepMind, European Parliament, George Mason University, Google, Microsoft, OpenAI, Perplexity AI, Replika, WithSecure, and Writesonic Inc.
