AI chatbots have become a huge craze as of late, with the accuracy of their responses hailed as a sign of an advancing industry. Even so, these tools can be used for malicious purposes just like any other. ChatGPT, the most popular chatbot out there, has recently begun to be used for phishing attacks, with malicious actors relying on it to draft convincing emails that lure users into clicking on unsafe links.
That said, cybersecurity experts at Check Point Research recently sought to demonstrate how dangerous ChatGPT can be if used with ill intent. They did this by using ChatGPT to create an Excel file injected with malicious code.
All they had to do was tell the chatbot to write code that would execute a malicious download from a URL once text was entered into an Excel file. The code the chatbot produced was chillingly effective, and it highlights just how dangerous the tool can be in the wrong hands.
The researchers then went on to use the chatbot to draft a phishing email. They gave precise instructions about which brand the chatbot should mimic, and they received just what they had asked for. While they also received a prompt saying that their account had been locked for suspicious activity, it was quite easy to work around, which suggests such safeguards may be of limited use.
Any advancement in technology brings its share of pros and cons. ChatGPT is an exciting step forward in an industry that could revolutionize the entire world, but that does not change the very real risks it can pose. More work needs to be done to safeguard users, and a well-educated consumer may be able to spot a phishing email from a mile off. ChatGPT itself also needs to be optimized to prevent it from being used for such purposes.
Read next: This Dangerous Android Malware Attacks Crypto Exchanges and Banks Alike
by Zia Muhammad via Digital Information World