Nieman Lab, an organization focused on the future of news and journalism innovation, ran a test to find out whether ChatGPT would provide accurate links to news articles and publications. The test revealed that it didn't. Instead, ChatGPT fabricated its own URLs, a behavior known as "AI hallucination," to make its answers appear correct. This shows that AI chatbots are more prone to making up information than many people realize.
Andrew Deck, a researcher at Nieman Lab, asked ChatGPT to provide URLs for ten high-profile news stories from major publishers including The Financial Times, The Times, The Wall Street Journal, The Verge, Politico, Le Monde, El Pais, and the Associated Press. In response, ChatGPT produced fake URLs that led to 404 error pages. An OpenAI spokesperson said the company is still working on incorporating news into the chatbot, and that with further development ChatGPT will be able to cite accurate links to news publications. The spokesperson said nothing, however, about the hallucination problem or the made-up URLs.
OpenAI has deals with many of these large news companies, so ChatGPT's failure to provide correct URLs to their articles is a major concern. We still don't know when OpenAI will make news citations accurate in ChatGPT, or how reliable they will be. But one thing is certain: if AI is making up news URLs, it can just as easily make up other facts. So don't rely mindlessly on AI chatbots, including ChatGPT, to find accurate information.
Image: DIW-Aigen
by Arooj Ahmed via Digital Information World