Report

Large Language Model Threat: What CISOs Should Know About the World of ChatGPT

Today, hackers can use LLMs such as ChatGPT to create malware and craft perfect phishing emails.

March 14, 2023 – Large language models (LLMs) are deep learning algorithms that recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive data sets. An LLM trained on large volumes of adversarial attack information would act as a force multiplier for attackers. Next-generation hackers won't be hampered by the sloppy code, misspelled phishing emails, and poor target reconnaissance that limited their predecessors, resulting in more successful hacking incidents.

This Impact Brief serves as a primer on LLMs, introducing CISOs to a threat they likely have not had much time to contemplate. The urgency of this emerging threat is illustrated by a BlackBerry survey in which 1,500 IT decision-makers said a successful cyberattack would be credited to ChatGPT within the year.

Clients of Aite-Novarica Group’s Cybersecurity service can download this report.

This report mentions BlackBerry, Check Point Research, Council of the EU, DeepMind, European Parliament, George Mason University, Google, Microsoft, OpenAI, Perplexity AI, Replika, WithSecure, and Writesonic Inc.

Related Content

CIO/CTO Checklist: Explaining AI and ML Algorithm Outcomes to Regulators

CIOs, CTOs, and heads of architecture must embrace best practices that support AI explainability.  
