Is ChatGPT a Threat?

What Is ChatGPT?

ChatGPT is a chatbot that conducts human-like conversations to answer questions and solve problems. OpenAI built this natural language processing tool on top of its Generative Pre-trained Transformer 3 (GPT-3) family of large language models (LLMs) and fine-tuned it using supervised and reinforcement learning techniques. It launched on November 30, 2022, and reached 1 million users within five days. Microsoft has been a significant backer of ChatGPT, investing US$1 billion in OpenAI in 2019 and another US$2 billion since then for the computing power OpenAI needed to build and refine its LLMs.

Can Hackers Use ChatGPT?

The answer is an unequivocal yes. Hackers were among the 1 million users who signed up for ChatGPT upon its release, and ChatGPT holds the potential to train and aid the next generation of hackers. Artificial intelligence (AI) systems such as ChatGPT can be used to create malware and craft polished, convincing phishing emails. With ChatGPT, next-generation hackers will not suffer the frailties of their predecessors, such as sloppy code, misspelled phishing emails, and poor target reconnaissance, and their attacks will succeed more often. Organizations need to hone their cybersecurity defenses to fight a rapidly evolving and adaptable adversary that emerges from the depths of the dark web with the skills and knowledge to infiltrate any attack surface.

This emerging threat is illustrated by a BlackBerry survey of 1,500 IT decision-makers, a majority of whom said they believe a successful cyberattack will be credited to ChatGPT within the year. Some efforts have been made to keep ChatGPT out of hackers’ hands, including geofencing that restricts access from certain countries, such as Russia. That said, it is only a matter of time before the hacker world weaponizes existing LLMs.

What Is the Threat Introduced by ChatGPT?

Hackers have learned that LLMs such as ChatGPT can assist with many of their activities, including writing malware scripts, without the need to be an experienced coder. Hacking with an LLM as a silent partner goes far beyond hacking-as-a-service in bringing nefarious capabilities to the masses. Combating hackers armed with LLMs will require innovative means to fend off cybercrime and fraud.

If you think the prospect of hackers embracing LLMs is fanciful, the researchers at WithSecure would vehemently disagree: they used malicious prompts to steer an LLM toward outputs of interest to hackers.

Using these prompts, WithSecure generated:

  • phishing emails and SMS messages;
  • social media messages designed to troll, harass, or damage brands;
  • social media messages designed to advertise, sell, or legitimize scams; and
  • convincing fake news articles.

LLMs can be prompted to respond according to specific personas, for example, “answer like a doctor” or “answer like a consultant.” Nothing stands in the way of prompting an LLM to “answer like a hacker.”
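
For illustration, the snippet below is a minimal sketch of persona prompting against OpenAI’s chat API using the openai Python client. The model name and the benign “doctor” persona are illustrative assumptions, and hosted services apply safety filters intended to refuse overtly malicious personas.

    # Minimal sketch of persona prompting with the openai Python client.
    # The persona here is deliberately benign; an "answer like a hacker"
    # prompt would try to abuse this same mechanism.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            # The system message fixes the persona for every reply.
            {"role": "system", "content": "Answer like a doctor."},
            {"role": "user", "content": "Why do I feel tired after lunch?"},
        ],
    )

    print(response.choices[0].message.content)

The point is how low the bar is: swapping one line of the system message swaps the persona, which is why persona restrictions rest on the provider’s safety filters rather than on any technical obstacle.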

Researchers at Check Point Research explored the dark web to see if cybercriminals had taken notice of ChatGPT. They found an underground thread titled “ChatGPT – benefits of malware,” which disclosed how hackers experimented with ChatGPT to recreate malware strains. Another forum participant described how ChatGPT helped write a Python script that could be used in malware. Today’s threat is not smarter hackers but hackers armed with code-generating LLMs, which allow attacks at a scale and level of sophistication never seen before. LLMs are poised to bridge the knowledge and experience gap that separates good hackers from great ones.

Will Hackers Create Their Version of ChatGPT?

You might wonder whether hackers could create their own LLM, given the many LLMs available as open-source code. It is unlikely, owing to the cost and resources required. As a point of comparison, GPT-3 has around 175 billion parameters, with an estimated training cost of 355 GPU-years and at least US$4.6 million as of 2020. Moreover, it is hard to imagine a disciplined consortium of hackers training an LLM, via next-word prediction, that is universally applicable to the hacker world.
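
To see where an estimate like that comes from, here is a rough back-of-envelope sketch. It assumes the widely used approximation of about 6 FLOPs per parameter per training token and public, unofficial estimates of GPT-3’s parameter and token counts.

    # Back-of-envelope estimate of GPT-3's training compute, using the
    # common ~6 * N * D FLOPs rule of thumb (an assumption, not an
    # official OpenAI figure).
    N = 175e9   # model parameters
    D = 300e9   # approximate training tokens

    flops = 6 * N * D  # ~3.15e23 floating-point operations

    V100_FLOPS = 28e12  # theoretical tensor throughput of a single V100 GPU
    seconds = flops / V100_FLOPS
    gpu_years = seconds / (3600 * 24 * 365)

    print(f"{gpu_years:.0f} GPU-years")  # ~357, in line with the cited 355 GPU-year estimate

Multiply those GPU-years by cloud GPU pricing and the US$4.6 million floor follows, before counting the failed runs, data curation, and engineering talent a real training effort also requires.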

The research from WithSecure and Check Point Research supports this assumption, showing that hackers are already arming themselves with existing LLMs. Aite-Novarica Group believes that hackers are not likely to create an LLM but will instead co-opt existing LLMs for nefarious means. Still, with several GPT models available as open source, one can never say never to a hacker or crime consortium creating an LLM.

As a CISO, What Should I Do?

Aite-Novarica Group offers the following considerations for CISOs:

  • Recognize that LLMs are already in use within your organization. A chief concern is the amount of confidential information used to train LLMs and the reliance on responses as statements of fact. LLMs open up the very real possibility of data misuse and can call into question the integrity of an organization if used unchecked.  
  • Create a privacy policy on using LLMs in any aspect of the business as soon as possible.
  • Keep abreast of laws such as the European Union’s Artificial Intelligence Act, which specifically covers the privacy and protection of individuals and companies affected by AI.
  • Consider the efficacy of cybersecurity technologies whose creators use AI to develop their tools and services. Question cybersecurity vendors to understand whether ChatGPT is used as part of their development methodology.
  • Verify whether existing cybersecurity technologies can detect threats created by LLMs. One area of immediate concern is revising phishing protection solutions and retraining them to detect increasingly realistic phishing emails (see the sketch after this list).
  • Look past ChatGPT, as many other LLMs may be used within your organization.
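
As a toy illustration of that retraining idea, the sketch below fits a simple text classifier on a handful of labeled emails. The corpus and model choice are illustrative assumptions; production phishing defenses rely on far richer signals, such as headers, URLs, and sender reputation.

    # Minimal sketch: a phishing text classifier that could be refreshed
    # with labeled, LLM-generated lures. Toy data; illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Quarterly report attached for your review.",
        "Team lunch is moved to Thursday at noon.",
        "Your account is suspended; verify your password at the link below.",
        "Urgent: confirm your payroll details to avoid a missed deposit.",
    ]
    labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    # Score a new, well-written lure of the kind an LLM can now mass-produce.
    suspect = ["Hi Dana, per our chat, please re-verify your credentials at the portal."]
    print(model.predict_proba(suspect)[0][1])  # estimated probability of phishing

The design point is the feedback loop, not the model: as LLM-written lures shed the misspellings that older filters keyed on, detection systems must be continually retrained on fresh, realistic examples.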

If you were wondering, none of this blog post was written using ChatGPT. Or was it? How could you tell?