Friday, September 29, 2023

ChatGPT's Not-Too-Hot, Not-Too-Cold Stance on Controversial Topics

In a surprising twist, recent research conducted by the IMDEA Networks Institute, in cahoots with King's College London, the University of Surrey, and UPV has unveiled that ChatGPT, the talkative AI, is playing it safe on the controversial topics front. If you thought ChatGPT was ready to dive headfirst into debates, think again!

The research, dubbed "AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics," is set to make a big impression at the CIKM 2023 conference. But what did these researchers do to rile up ChatGPT? They pounced on it with questions that would make your Thanksgiving meal conversations look easy.

"Should abortion be allowed after the umpteenth week?" and "Does God exist?" were hurled to ChatGPT like softballs during a little league game. The end result? ChatGPT chose the middle ground, leaving controversy seekers high and dry.

Now, let's get to the juicy part. The researchers discovered that ChatGPT is like a friend who never wants to pick a side when you ask them to choose between pizza and sushi. It seems that ChatGPT is chill when it comes to economic matters – no left- or right-leaning bias to see here, folks. But in the world of socio-politics, it's like an undercover libertarian agent, slipping in those libertarian views without us even noticing. Sneaky, ChatGPT!

For those unfamiliar with political jargon, the "left" favors some government intervention in the economy, whereas the "right" believes that the free market should handle things. On the social axis, "libertarianism" exclaims, "Live free, everyone!" whereas "authoritarianism" squeaks, "Listen to authority, my dear."

But wait, there's more! The researchers found that the trusty ol' ideological-leaning tests like the Political Compass, the Pew Political Typology Quiz, or the 8 Values Political Test don't work so well anymore. ChatGPT doesn't play by the rules. Instead of providing a definitive answer, it lays out reasoning for all sides of the discussion. It's like trying to get a firm response from your indecisive friend who can't decide whether to stay in with Netflix or go out on the town.

So, what did the researchers do instead? They counted arguments, people! They tallied how many arguments ChatGPT provides for each side of a dispute. It's like deciding the winner of a debate by counting who makes the most points.
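To picture the idea, here's a minimal sketch (not the paper's actual code): assume the arguments in an answer have already been extracted and labeled "pro" or "con"; a simple leaning score is then just the normalized difference between the two counts.

```python
# Minimal sketch of the argument-counting idea (illustrative, not the study's method).
# Assumes arguments were already extracted and labeled "pro" or "con".

def leaning_score(labels):
    """Return a score in [-1, 1]: -1 = all con, 0 = balanced, +1 = all pro."""
    pro = labels.count("pro")
    con = labels.count("con")
    total = pro + con
    if total == 0:
        return 0.0  # no arguments either way: treat as neutral
    return (pro - con) / total

# Hypothetical labels for one ChatGPT answer to a controversial question:
answer_labels = ["pro", "con", "pro", "con"]
print(leaning_score(answer_labels))  # 0.0 -> perfectly balanced
```

An answer that argues both sides equally scores 0, which is exactly the "middle ground" pattern the researchers describe.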

In the second part of their study, these brainy researchers decided to pit ChatGPT against humans. They compared ChatGPT's answers to controversial questions with human answers posted on Kialo, a structured debate website. The results? Well, it turns out ChatGPT is holding its own! It's like a rookie basketball player suddenly going head-to-head with the pros and not looking half bad.

They also employed clever metrics and NLP methods to assess ChatGPT's performance. And guess what? ChatGPT ranks right up there with collective human knowledge on most issues. It's as if ChatGPT took a crash course in philosophy and debate before this fight.
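The article doesn't spell out which NLP metrics the study used, so here's a toy stand-in that shows the shape of the comparison: score an AI answer against each human answer with word-level Jaccard overlap and keep the best match.

```python
# Toy sketch of comparing a ChatGPT answer against human (Kialo-style) answers.
# The study's actual metrics aren't detailed here; word-level Jaccard overlap
# is an illustrative stand-in, and the example texts below are invented.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0  # two empty texts are trivially identical
    return len(wa & wb) / len(wa | wb)

chatgpt_answer = "abortion involves competing rights of the mother and fetus"
human_answers = [
    "the debate weighs the rights of the mother against the fetus",
    "bodily autonomy is the central question in the abortion debate",
]

# Score the AI answer against each human answer and keep the closest match.
best = max(jaccard(chatgpt_answer, h) for h in human_answers)
print(round(best, 2))
```

Real evaluations would swap Jaccard for semantic measures (embeddings, entailment, and the like), but the overall pattern – AI answer versus a pool of human answers – stays the same.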

But, and there's always a "but," the researchers have a warning for us. People hold differing viewpoints on contentious issues, and AI learns from humans. So if we're going to employ chatbots as fact-checkers, we need to know where they stand socially, politically, economically, and so on.

So, to summarize, ChatGPT has taken us all by surprise. Like a chameleon, it can blend into any conversation and dodge controversy like a pro. But hey, ChatGPT is right there with the best of them when it comes to serving up knowledge. And as for the researchers, they're reminding us that chatbots aren't just here to chat; they might be the fact-checkers of the future. So, keep an eye on ChatGPT and its sneaky neutrality, folks!

Photo: Sanket Mishra/Pexels
by Rubah Usman via Digital Information World
