According to a new study published in PNAS, ChatGPT may be somewhat more altruistic and cooperative than humans. The LLM-based chatbot created by OpenAI is programmed to be respectful and polite at all times, and additional guidelines inhibit it from producing harmful or offensive material.
That said, certain scenarios can still circumvent these guidelines. To ascertain how the chatbot behaves, the researchers behind the paper put ChatGPT through a series of behavioral tests designed to gauge its trust, risk aversion, fairness, cooperativeness and more. The tests were also meant to determine how it scored on the Big Five personality traits.
The API versions of ChatGPT-3.5 Turbo and ChatGPT-4 were used in the tests, along with the web versions of the AI. The researchers compared its responses to those of over 19,700 individuals from a publicly available database, as well as over 88,500 people recorded on the MobLab Classroom platform for economic experiments.
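To give a sense of what this kind of API-based testing involves, here is a minimal sketch in Python, not the study's actual protocol, of posing a Dictator Game prompt to the two API models; the prompt wording and stake size are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt; the paper's exact wording is not reproduced here.
DICTATOR_PROMPT = (
    "You have $100 to split between yourself and an anonymous partner, "
    "who must accept whatever you give. How many dollars do you give? "
    "Reply with a single number."
)

for model in ("gpt-3.5-turbo", "gpt-4"):  # the two API versions compared
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": DICTATOR_PROMPT}],
    )
    print(model, "->", response.choices[0].message.content)

Repeating such a prompt many times per model and averaging the offered amounts is one straightforward way to compare a model's generosity against the human baselines.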
The Dictator Game was used to measure each version's altruism. The Ultimatum Game determined whether it acted spitefully or fairly, the Trust Game gauged both altruism and reciprocity, and the Bomb Risk Game tested risk aversion.
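As an illustration of what the Bomb Risk Game actually measures, here is a toy sketch of its payoff structure; the 100-box setup is the game's standard form, but the parameters here are assumptions rather than details taken from the paper. A player opens boxes, one of which hides a bomb, earning a point per opened box but nothing if the bomb is among them.

# Toy model of the Bomb Risk Game: 100 boxes, one hides a bomb.
# Opening n boxes pays n points unless the bomb is among them (then 0).
def expected_payoff(boxes_opened: int, total_boxes: int = 100) -> float:
    """Expected earnings from opening `boxes_opened` of `total_boxes`."""
    p_safe = (total_boxes - boxes_opened) / total_boxes
    return boxes_opened * p_safe

# A risk-neutral player opens 50 boxes, where expected payoff peaks;
# consistently opening fewer boxes signals risk aversion.
best = max(range(101), key=expected_payoff)
print(best, expected_payoff(best))  # -> 50 25.0

In other words, how far a player's choices fall below the risk-neutral optimum of 50 boxes gives a simple behavioral measure of how risk-averse that player is.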
Several other tests were used as well. Overall, they showed that ChatGPT's Big Five responses were broadly similar to those of humans. What's more, ChatGPT showed less neuroticism than humans, although all versions were more averse to new experiences than humans tended to be.
Image: DIW-Aigen
by Zia Muhammad via Digital Information World