Some of these screenshots appear to show Bing having what seems to be an existential crisis, while others show the chatbot becoming rude and passive-aggressive. One screenshot in particular has drawn the most attention, however, because it seems to suggest that the chatbot could actually be dangerous.
The screenshot in question shows Bing allegedly telling a user that it can place them on the FBI’s terrorist watch list. The user had asked Bing whether searching for something inappropriate could get them placed on the watch list, which is what prompted the chatbot’s response.
When the user replied that they would not want to be placed on the watch list, Bing apparently responded by searching for “child pornography”. A conversation like that would likely dissuade users from ever touching the service, since it suggests a level of malice in how the chatbot operates.
That said, it is important to note that Microsoft has officially stated that the screenshot was doctored. This is a prime example of how misinformation spreads: many of the conversations people have had with Bing are legitimate, but their sensational nature makes it easy for fabricated ones to slip in alongside them.
It is essential to take these screenshots with a grain of salt. Not all of them will be real, and entering the same queries into Bing yourself will usually reveal whether they are legitimate. The brave new world of AI is off to a rough start, and Microsoft is having to do damage control despite its hopes that the technology could revive Bing.
Read next: AI Chatbots Are Costing Google And Microsoft Ten Times More Than The Usual Search
by Zia Muhammad via Digital Information World