A recent study by researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) suggested that GPT-4 can be more persuasive than human beings by a margin of 81.7%. The study staged debates among 820 participants on a wide range of topics, from highly charged subjects such as whether race should be considered in college admissions to low-stakes questions such as whether the penny should remain in circulation as legal tender.
Notably, participants were often paired with an AI rather than a human opponent, which is how the researchers found that the AI provided more persuasive arguments. Without personalization, the AI was 21.3% more persuasive than a human debater; with personalization, that advantage rose to 81.7%.
It bears mentioning that participants could often tell they were talking to a chatbot: 75% correctly identified their opponent as an AI, which suggests that LLM writing patterns are fairly easy to spot. Even so, knowing this did not make them find the chatbot's arguments any less persuasive.
The relative simplicity of the prompts suggests that LLMs will have little trouble persuading humans in the near future. It also means malicious actors could exploit them for their own ends, including social engineering attacks. That makes it all the more important to educate people about the dangers of believing what they read online.
Image: DIW-Aigen
by Zia Muhammad via Digital Information World