Thursday, March 16, 2023

New Study Claims ChatGPT Can't Outperform Human-Designed Email Phishing Scams

The dangers surrounding AI-powered technology are plentiful, and they extend to popular chatbots like ChatGPT.

With the help of such tools, cybercriminals have started producing videos on YouTube as a way to spread malware, and a new report by Hoxhunt is shedding more light on how these tools are being abused.

The company conducted thorough research on ChatGPT to get an idea of how much danger it actually poses, and the results are alarming, to say the least.

The authors conducted the study by creating a phishing simulation designed to rival the output of ChatGPT, with the chatbot prompted to generate phishing emails of its own. The test centered on whether ChatGPT could produce emails that were more convincing, that is, more likely to persuade recipients they were authentic and get them to click on phishing links.

Around 53,000 users received the respective test emails, and overall the human social engineers ended up outperforming ChatGPT by nearly 45%. Below are the key takeaways the firm's engineers noted during this trial.

For starters, the failure rates of users who received the human-crafted emails were compared against those of users who received the ChatGPT-designed ones. On that measure, the human social engineers performed better than ChatGPT by around 45%.

Thirdly, AI is already being used by many threat actors carrying out phishing attacks, so the need of the hour is dynamic security of the highest order, with defenses that adapt rapidly to changes in the threat landscape.

Finally, security training is designed to protect against humans clicking on links in phishing emails produced by the likes of AI.

As for what kind of phishing emails the leading OpenAI tool produced, the researchers shared screenshots showing the two respective emails.

One was written by humans and the other by ChatGPT, and it is striking how difficult it is to tell the difference, showing just how convincing the AI-generated effort is.

The news is alarming, but it does offer some reassurance that humans are not yet being outperformed by AI tools like ChatGPT when it comes to scams. Keep in mind, however, that the study used an older version of ChatGPT and not the latest GPT-4.

by Dr. Hura Anwar via Digital Information World
