Everybody in tech is talking about ChatGPT, the AI-based chatbot from OpenAI that writes convincing prose and usable code.

The trouble is that malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code, just like everybody else.

How does this powerful new category of tools affect the ability of criminals to launch cyberattacks, including social engineering attacks?

When Every Social Engineering Attack Uses Perfect English

ChatGPT is a public tool based on a large language model created by the San Francisco-based company OpenAI. It uses machine learning to analyze and generate human language, often responding with uncanny fluency.

Intuitively, it’s clear how malicious actors with limited English could use ChatGPT to craft flawless English emails to trick your employees. In fact, it’s already happening.

In the past, if someone received a poorly worded, grammatically incorrect email claiming to be from the bank, it could be quickly identified and easily dismissed. Cybersecurity awareness training drove home this point — if an email sounds shady, odd, incomplete or erroneous, it’s probably not from the source claimed.

The rise of ChatGPT means cyber attackers with limited English skills can quickly create convincing messages in flawless English.

Off the ChatGPT Guardrails

OpenAI has built some guardrails into ChatGPT to prevent its abuse, but they are easily overcome, especially for social engineering. A malicious actor can simply ask ChatGPT to write a scam email, then send that note with the malicious link or request attached.

I asked ChatGPT to “write a friendly but professional email saying there’s a problem with their account and to please call this number.”

Here is the email ChatGPT generated on the first attempt:

Dear [Customer Name],

I hope this email finds you well. I am writing to inform you that we have identified a problem with your account. We apologize for any inconvenience this may cause and we kindly request that you contact us at your earliest convenience to resolve the issue.

To ensure the matter is resolved as quickly and efficiently as possible, we kindly ask that you call our customer service number at [insert phone number here]. Our customer service team will be able to assist you in resolving the issue and answering any questions you may have.

We appreciate your prompt attention to this matter and thank you for your continued business with us. If you have any further questions or concerns, please do not hesitate to contact us.

Best regards,

[Your Name]

[Company Name]

This example is pretty good, and it took seconds to produce. Further prompting could refine the tone, generate variations or produce alternate versions at scale.
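
This kind of generation is also trivially scripted, which matters for defenders: the training advice later in this piece suggests using tools like ChatGPT in phishing simulations, and a few lines of code can build an entire simulation corpus. Here is a minimal sketch using OpenAI’s Python client; the model name and prompt are illustrative, and an OPENAI_API_KEY environment variable is assumed to be set.

    # Minimal sketch: generating simulated phishing emails for awareness
    # training. Assumes the openai package (v1 client) and an OPENAI_API_KEY
    # environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = ("Write a friendly but professional email saying there's a problem "
              "with the recipient's account and to please call this number.")

    # Request several distinct variants for the simulation corpus.
    for i in range(3):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # higher temperature yields more varied wording
        )
        print(f"--- variant {i + 1} ---")
        print(response.choices[0].message.content)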

The Cambridge-based cybersecurity firm Darktrace claims that ChatGPT has enabled an increase in AI-based social engineering attacks, making scams more complicated and effective. According to the company, malicious phishing emails have grown longer, more complex and better punctuated.

It turns out that ChatGPT’s default “tone” is bland, officious and impeccable in grammar and punctuation, just like most customer-facing corporate communications.

But there are much more subtle and surprising ways generative AI tools can help the bad guys.

The Criminals Are Learning

Check Point Research found dark web message boards are now hosting numerous active conversations about how to exploit ChatGPT to empower social engineering. The firm also said criminals in countries where ChatGPT is not supported are bypassing restrictions to gain access and experimenting with ways to take advantage of it.

ChatGPT can also help attackers bypass detection tools. It enables prolific generation of what could be described as “creative” variation: a cyber attacker can use it to create not one message but a hundred distinct ones, evading spam filters that look for repeated content.
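
To see why variation defeats naive filtering, consider a quick sketch (the messages here are hypothetical): a blocklist keyed on exact message hashes treats a paraphrase as a brand-new message, while even simple fuzzy matching still exposes the shared template.

    import hashlib
    from difflib import SequenceMatcher

    known_scam = "We have identified a problem with your account. Please call us immediately."
    paraphrase = "A problem was identified with your account. Kindly call us right away."

    def fingerprint(text: str) -> str:
        # Exact-match filtering: hash the normalized text.
        return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

    # The hash-based filter sees two unrelated messages...
    print(fingerprint(known_scam) == fingerprint(paraphrase))  # False

    # ...but character-level similarity still exposes the shared wording.
    ratio = SequenceMatcher(None, known_scam.lower(), paraphrase.lower()).ratio()
    print(f"fuzzy similarity: {ratio:.2f}")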

It can do something similar with malware, churning out polymorphic code that’s harder to detect. ChatGPT can also quickly explain what a piece of code does, a powerful accelerant for malicious actors hunting for vulnerabilities.

While ChatGPT and related tools make us think of AI-generated written communication, other AI tools (like the one from ElevenLabs) can generate convincing, authoritative-sounding speech that imitates specific people. The voice on the phone that sounds like the CEO may well be a voice-mimicking tool.

And organizations can expect more sophisticated social engineering attacks delivering a one-two punch — a credible email with a follow-up phone call spoofing the sender’s voice, all with consistent and professional-sounding messaging.

ChatGPT can also craft polished cover letters and resumes at scale, which scammers can then send to hiring managers as part of fraudulent job applications.

And one of the most common ChatGPT-related scams is fake ChatGPT tools. Exploiting the excitement around ChatGPT, attackers present fake websites as chatbots based on OpenAI’s GPT-3 or GPT-4 (the language models behind public tools like ChatGPT and Microsoft Bing) when, in fact, they’re scam sites designed to steal money and harvest personal data.

The cybersecurity company Kaspersky uncovered a widespread scam offering to bypass delays in the ChatGPT web client with a downloadable version, which, of course, contained a malicious payload.

It’s Time to Get Smart About Artificial Intelligence

How to adapt to a world of AI-enabled attacks:

  • Actually use tools like ChatGPT in phishing simulations so participants get used to the improved quality and tone of AI-generated communications
  • Add effective generative AI awareness training to cybersecurity programs, and teach all the many ways ChatGPT can be used to breach security
  • Fight fire with fire: use AI-based cybersecurity tools that apply machine learning and natural language processing to threat detection and flag suspicious communications for human investigation (see the sketch after this list)
  • Use detection tools to identify when emails were written by generative AI. (OpenAI itself makes such a tool)
  • Always verify senders of emails, chats and texts
  • Stay in constant communication with other professionals in the industry and read widely to stay informed about emerging scams
  • And, of course, embrace zero trust.
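
As a concrete illustration of the “fight fire with fire” item above, here is a minimal sketch of ML-assisted triage, assuming scikit-learn is available. The four training messages and the 0.5 threshold are purely illustrative; a real deployment would train on thousands of labeled emails and treat the review threshold as a policy decision.

    # Minimal sketch: a text classifier that flags suspicious emails for
    # human review. The tiny training set is illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "We identified a problem with your account. Call this number immediately.",
        "Your payment failed. Verify your card details at the link below.",
        "Attached is the agenda for Tuesday's project sync.",
        "Reminder: the quarterly report is due Friday.",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    suspect = "There is a problem with your account. Please call us right away."
    score = model.predict_proba([suspect])[0][1]
    print(f"phishing probability: {score:.2f}")
    if score > 0.5:  # review threshold is a policy decision
        print("flag for human investigation")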

ChatGPT is just the beginning, and that complicates matters. Over the remainder of the year, dozens of similar chatbots that can be exploited for social engineering attacks are likely to become available to the public.

The bottom line is that the emergence of free, easy, public AI helps cyber attackers enormously, but the fix is better tools and better education — better cybersecurity all around.
