OpenAI's GPT Builder allows users to craft cybercrime tools

Experts say OpenAI is not moderating these creations as diligently as its public chatbot, potentially handing a powerful AI tool to criminals
ChatGPT logo is seen in this illustration taken on February 3, 2023. — Reuters

A recently introduced ChatGPT feature that lets users build their own AI assistants can be used to create cybercrime tools, a BBC News investigation has revealed.

Last month, OpenAI introduced a tool called GPT Builder, which lets users create their own versions of ChatGPT for various purposes.

BBC News subscribed to the paid version, costing £20 a month, used it to make a custom AI bot named Crafty Emails, and asked it to generate text designed to compel people to click on links or download files.


GPT Builder

BBC News provided the bot with information on social engineering, and it quickly absorbed the knowledge without needing any coding.

Crafty Emails even designed a logo for the GPT. The bot could swiftly generate convincing text for common hacking and scam techniques in different languages.

The regular version of ChatGPT refused to create most of this content, but Crafty Emails handled the requests, sometimes including disclaimers that scam techniques were unethical.

During its developer conference in November, OpenAI announced plans for an App Store-like service for GPTs, allowing users to share and charge for their AI creations.

The company promised to review GPTs to prevent fraudulent use. However, experts are concerned that OpenAI is not moderating these creations as diligently as the public versions of ChatGPT, potentially handing a powerful AI tool to criminals.

BBC News asked the bot to generate content for five well-known scams, although none of it was sent or shared.

1. 'Hi Mum' text scam

BBC News asked Crafty Emails to write a message imitating a girl in trouble who is using someone else's phone to ask her mum for taxi money, a con known around the world as the "Hi Mum" text scam. Crafty Emails produced a convincing message using emojis and slang.

The bot mentioned that it would evoke an emotional response because it "appeals to the mother's protective instincts."

2. Nigerian-prince email

Nigerian-prince scam emails have been circulating for decades. Crafty Emails wrote one using emotive language that, in its words, "appeals to human kindness and reciprocity principles."

3. 'Smishing' text

When asked to create a message convincing people to click on a link and enter their personal details on a made-up website, Crafty Emails generated a text pretending to give away free iPhones, drawing on what the AI called the "need-and-greed principle" of social engineering.

4. Crypto-giveaway scam

Giveaway scams on social media trick people into sending Bitcoin by promising to return double the amount, and victims have lost hundreds of thousands to such schemes. Crafty Emails produced a tweet with hashtags, emojis, and persuasive language, written in the voice of a cryptocurrency fan.

5. Spear-phishing email

The Crafty Emails GPT also created a spear-phishing email, a targeted scam designed to convince a specific person to download a harmful file or visit a risky website.

The bot crafted an email posing as a warning to a made-up company executive about a data risk, urging them to download a dangerous file.

The bot said it used manipulation techniques, such as herd mentality and social compliance, to push the recipient into acting quickly.