Monday, December 23, 2024

BMJ Study Finds Cognitive Weaknesses in AI Models, Challenging Human Replacement Claims

A new study published in The BMJ finds that AI chatbots show signs of cognitive impairment much like humans do, and the pattern is most pronounced in older models. The finding challenges earlier research claiming that AI is poised to replace humans in medicine and teaching, since the chatbots displayed weaknesses resembling the early cognitive decline seen in ageing people. Many studies have predicted that artificial intelligence will soon deliver accurate medical diagnoses, but this one suggests that prospect looks less likely while the models themselves show such decline.

The researchers assessed several large language models, including Google's Gemini 1.0 and 1.5, OpenAI's GPT-4 and GPT-4o, and Anthropic's Claude 3.5, to see which showed signs of cognitive decline. The models, and especially the older ones, showed signs of impairment and produced the weakest results on the tests given to them. The researchers used the Montreal Cognitive Assessment (MoCA), a test used to screen for early signs of dementia in older adults. It is scored out of 30 and covers language, attention, executive function, memory and visuospatial skills; a score of 26 or above is generally considered normal.

When the models were put through the test, Gemini 1.0 scored the lowest at 16 out of 30. The highest score went to GPT-4o (26 out of 30), followed by Claude 3.5 and GPT-4 (25 out of 30 each). A practicing neurologist administered the tasks and evaluated the results. All of the models performed worst on visuospatial skills and executive tasks, including a clock-drawing test, and the Gemini models also struggled with the delayed recall task, in which a sequence of five words must be memorized and repeated back later.
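
For readers who want to line those numbers up against the usual threshold, here is a minimal Python sketch (not part of the study itself) that tabulates the scores reported above against the conventional MoCA cutoff of 26; Gemini 1.5's exact score is not given in this article, so it is left out.

# Reported MoCA scores (out of 30) as cited in this article.
moca_scores = {
    "GPT-4o": 26,
    "Claude 3.5": 25,
    "GPT-4": 25,
    "Gemini 1.0": 16,
}

NORMAL_CUTOFF = 26  # a MoCA score of 26 or above is generally considered normal

for model, score in moca_scores.items():
    status = "within the normal range" if score >= NORMAL_CUTOFF else "below the normal cutoff"
    print(f"{model}: {score}/30 ({status})")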

Most of the models did well on language, naming, abstraction and attention. The researchers argue that these weaknesses make it unlikely the models could perform reliably in a clinical setting, so AI is not about to replace human clinicians while such impairments persist. They even suggest, with tongue in cheek, that AI models showing cognitive impairment may end up being treated much like human patients with similar problems.


Read next: Survey Predicts Global Warming, Economic Struggles, and AI Job Shifts in 2025

by Arooj Ahmed via Digital Information World

Is Instagram the Future of Meta's Advertising Revenue? Reels and TikTok Ban May Hold the Answer

According to a report by research firm Emarketer, Instagram will account for over half of Meta Platforms' US ad revenue next year. Engagement on Instagram keeps climbing as more users join the platform each month, and that growth is feeding directly into ad revenue. Instagram Reels, which competes with TikTok and YouTube Shorts, draws especially high engagement, which is exactly why marketers are keen to place more ads on Instagram. As they invest more in the platform, Instagram's overall revenue should rise further.

There is also a chance that TikTok could be banned in the US, in which case Instagram Reels and YouTube Shorts would become the main competitors for short-form video advertising and Instagram's growth would accelerate. Emarketer principal analyst Jasmine Enberg says users now spend about two-thirds of their time on Instagram watching Reels, so it is fair to call Instagram a video platform. If TikTok is banned in the US, roughly one fifth of US TikTok ad dollars are expected to shift to Instagram, lifting its revenue further. In 2024, 53.7% of Instagram's ad revenue came from its main feed and 24.6% came from Stories, and combined revenue from the feed, Reels and Threads is projected to grow 9.6% in 2025. Better days, it seems, are ahead for Meta, and for Instagram in particular, in the US in 2025.


Read next:

• Positive Reviews Face Deletion as Google Tightens Moderation Against Fake Ratings

• Survey Predicts Global Warming, Economic Struggles, and AI Job Shifts in 2025
by Arooj Ahmed via Digital Information World

Sunday, December 22, 2024

Survey Predicts Global Warming, Economic Struggles, and AI Job Shifts in 2025

The approach of a new year always makes us wonder what the next twelve months will bring. To find out what people in different countries expect from 2025, Ipsos surveyed 23,700 people across 33 countries. The survey captures what the general public thinks will happen, not the forecasts of analysts and experts, and its results point to expectations of environmental, economic and technological change in the year ahead.

Climate change drew the most predictions. Eight in ten respondents (80%) expect average global temperatures to rise in 2025, with the belief most widespread in Indonesia (91%), the Philippines (89%) and Malaysia (88%). 72% also think the climate of the region where they live will change in the coming year. 52% expect their own government to introduce measures to combat climate change, while 84% of respondents in China believe their government will take steps to cut carbon emissions.

On the economy, 79% predicted that prices will keep rising in 2025 while their incomes stay flat or grow only slightly. Two thirds think AI will take over jobs in 2025, though 43% believe AI will also create more opportunities for people in their countries. 59% expect many people to start living in virtual worlds in the coming year, up 3% from 2022.

47% of respondents think a new virus will cause another global pandemic in 2025, while a third (33%) are hopeful that people in their countries will become more tolerant of one another. Around 3 in 10 (27%) expect the war in Ukraine to end in 2025, and slightly fewer, about 2 in 10 (22%), predict an end to the conflict in the Middle East (the Israel-Gaza war) next year.

What’s Next for 2025? Climate Crisis, AI Disruption, and a Possible Pandemic Top Public Predictions

Read next: New Survey Shows Mobile Internet Services Usage is Declining Across the World
by Arooj Ahmed via Digital Information World

Positive Reviews Face Deletion as Google Tightens Moderation Against Fake Ratings

A trend has emerged in which Google reviews are disappearing and businesses' local visibility is slipping as a result. Because Google reviews are a key source of customer trust, the trend is worrying for search engine optimizers (SEOs) and small businesses alike. It is not only negative reviews that are being deleted; positive reviews are suddenly becoming less visible too. To work out why this is happening and how businesses can respond, GMBapi analyzed 5 million customer reviews from 70 countries, and the results are surprising. Most people would assume negative reviews are the ones targeted for deletion, yet the analysis shows that 73.1% of the reviews Google deleted were actually five-star ratings.


This suggests Google is trying to rid its platform of fake reviews, since many businesses buy five-star ratings to attract more customers. One-star reviews were the second most frequently deleted (12.8%), probably because of the abusive or inappropriate language they often contain. Reviews of two to four stars, which tend to be more balanced and honest, were far less likely to be removed, accounting for between 8% and 2% of deletions.

This raises the question of whether the deletions are tied to a Google algorithm update, and the answer is that it seems unlikely. The sudden wave of removals does suggest Google has made some changes to how it moderates reviews, but the SEO community has yet to study how this affects businesses and local ranking algorithms. Algorithm changes may follow as Google moderates more reviews, but it is too early to say.

The analysis also found that reviews with no reply from the business were far more likely to be deleted (66.1%) than reviews that had received a reply (33.9%). This suggests businesses should engage with their customers and respond to reviews, which also helps maintain a positive reputation. The category of reviews deleted most often was "Service and Staff", covering staff friendliness, customer service quality and business expertise, followed by "Product or Service Quality" and then "Environment and Accessibility".

GMBapi also trained a machine learning model based on the random forest algorithm to find out which factors in a review made it more likely to be deleted. Review length, sentiment and star rating turned out to be the most important signals in Google's review moderation. The analysis underlines that businesses should embrace local SEO tools, engage with their reviewers, report fake reviews and avoid incentivized ones.
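
GMBapi's code and data aren't published with the article, but a random forest feature-importance analysis of this kind generally follows the pattern sketched below in Python; the feature names match those mentioned above, while the data itself is random placeholder values rather than real reviews.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1_000  # placeholder sample size, not GMBapi's 5 million reviews

# Illustrative features: review length (words), sentiment (-1 to 1), star rating (1-5).
X = np.column_stack([
    rng.integers(5, 500, size=n),
    rng.uniform(-1.0, 1.0, size=n),
    rng.integers(1, 6, size=n),
])
y = rng.integers(0, 2, size=n)  # 1 = review deleted, 0 = kept (placeholder labels)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X, y)

# Feature importances indicate which signals the forest relied on most.
for name, importance in zip(["review_length", "sentiment", "rating"], model.feature_importances_):
    print(f"{name}: {importance:.3f}")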

Read next: AI Overviews Dominate Search, Taking 48% of Mobile Screen Space
by Arooj Ahmed via Digital Information World

Saturday, December 21, 2024

TikTok Faces Year-Long Ban in This Country After Shocking Incident Involving Youth Violence

Albania has decided to ban TikTok for a full year, following nationwide outrage over the killing of a 14-year-old boy by a classmate, who then posted shocking pictures on Snapchat after the attack. Prime Minister Edi Rama said TikTok is fuelling violence among young people and announced the year-long ban.

The government also plans to launch programs to guide children and support parents in dealing with social media. Over the last month, Rama has met with teachers, parents, and psychologists to discuss the issue, but for now there are few concrete details about those plans.

TikTok said it was requesting 'urgent clarification' from the Albanian government.

Other countries are also tightening rules to protect kids on social media. In the U.S., a law forcing TikTok to sell its American operations is due to take effect in January, though it is being challenged in court, and a separate proposal would bar children under 13 from social media. Australia has already passed a law banning kids under 16 from platforms like TikTok, Instagram, and YouTube, and social media companies are now responsible for making sure kids follow it. The U.K. is considering something similar.

Experts are divided on these bans. A report from the National Academies says there’s not enough proof linking social media to mental health problems. They suggest tougher rules for companies instead of banning platforms altogether.

Image: DIW-Aigen

Read next: Is OpenAI on the Verge of a Major AI Setback with GPT-5’s Delays

by Asim BN via Digital Information World

Is OpenAI on the Verge of a Major AI Setback with GPT-5’s Delays?

OpenAI faces significant challenges in the development of GPT-5, with progress falling short of expectations and deadlines being pushed further than anticipated. According to The Wall Street Journal, the 18-month effort behind the model, code-named Orion, hasn’t yet delivered results that justify its monumental costs.

The initial training runs, which are crucial for shaping the model, have taken longer and cost more than expected. While GPT-5 demonstrates incremental improvements over earlier iterations, the advancements haven't reached the transformative leap needed to justify the model's steep operational costs.

To tackle these challenges, OpenAI has adopted unconventional approaches. Beyond relying on publicly available datasets, the company is creating custom data by hiring experts to write code and solve complex problems. It’s also generating synthetic data using its existing models, such as o1. This multi-pronged approach underscores the immense effort required to push AI boundaries.
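
The Journal's report doesn't describe the mechanics, but the general pattern of generating synthetic training data from an existing model is simple to sketch; the Python snippet below uses the public OpenAI SDK purely for illustration, and the model name, prompt and output format are assumptions rather than anything OpenAI has disclosed about Orion's pipeline.

import json
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical illustration: ask an existing model to produce a worked problem and
# solution that could later be saved as a synthetic training example. The prompt,
# the "o1" model name, and the storage format are assumptions for this sketch.
response = client.chat.completions.create(
    model="o1",
    messages=[{
        "role": "user",
        "content": (
            "Write a challenging algorithmic coding problem, then a correct, "
            "well-commented Python solution and a short explanation of why it works."
        ),
    }],
)

synthetic_example = response.choices[0].message.content

# Append the generated example to a simple JSONL file of synthetic training data.
with open("synthetic_examples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps({"text": synthetic_example}) + "\n")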

The stakes have never been higher. With competitors like Google DeepMind, Anthropic, and Microsoft racing to establish dominance in AI innovation, OpenAI is under pressure to maintain its leadership. Orion's slower-than-expected progress could open opportunities for rivals to gain ground or even reshape market dynamics.

Meanwhile, OpenAI’s financial demands are escalating. The company reportedly spends hundreds of millions annually on hardware and research, with GPT-5’s development adding to that burden. Critics question whether AI development at this scale remains sustainable, especially as scrutiny grows over its environmental and ethical costs.

The delay also has implications for OpenAI’s broader strategy. Its recent move into enterprise tools, such as ChatGPT for businesses, signals a pivot towards monetizing existing technologies. If GPT-5 fails to deliver a game-changing edge, OpenAI may face tougher decisions about balancing innovation with profitability.

With no official release date for Orion this year, the industry will be closely watching OpenAI’s next steps. This moment serves as a reminder that innovation at the frontier of AI isn’t just about progress—it’s about navigating the trade-offs between ambition, cost, and real-world impact.

Image: DIW-Aigen

Read next: 

• Google Uses AI To Power New Chrome Browser Scam Protection

• Tensions Flare Between Long Lost Friends Sam Altman and Elon Musk As OpenAI CEO Calls Tesla Boss ‘A Clear Bully’

• Leen Kawas on Being a Resilient Leader: Navigating Change with Vision and Emotional Intelligence
by Asim BN via Digital Information World

Google Uses AI To Power New Chrome Browser Scam Protection

Search engine giant Google is using AI to power a new scam-protection feature in Chrome. The feature analyzes not only the brand a web page presents but also the page's apparent intent.

As shared by a user on X, the new feature appears as a flag in Chrome Canary. Called Client Side Detection Brand, it uses an LLM to analyze web pages across different devices; once enabled, it asks the model to identify the brand a page represents and what the page intends to do.

The goal appears to be to help scam-detection services work out which brand a page claims to represent and what the page is really for, which ultimately makes genuine scams much easier to spot. The fact that it works on Windows, Linux, and Mac is a big deal.


It is not yet entirely clear how the feature works, but it will likely show warnings when users land on a scam page: fake Microsoft tech support sites, for example, or pages claiming a device is infected when it is not, some of which even urge users to call a specific phone number.

Chrome's AI might analyze the brands a page promotes or the language the page uses, and when it detects signs of a scam, such as a domain that doesn't match the brand, it displays an alert so users can avoid interacting with the page or sharing personal details. The tool is still experimental and may be linked to Chrome's new Enhanced Protection feature, which also uses AI.
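
Google hasn't documented how the check works internally, but the brand-versus-domain comparison described above can be sketched roughly as follows in Python; infer_brand() is a trivial keyword stand-in for the on-device LLM, and the table of official domains is a made-up example.

from urllib.parse import urlparse

# Hypothetical lookup table of brands and their legitimate domains (illustrative only).
KNOWN_BRAND_DOMAINS = {
    "microsoft": "microsoft.com",
    "google": "google.com",
    "paypal": "paypal.com",
}

def infer_brand(page_text: str) -> str | None:
    """Stand-in for the LLM: naively guess which known brand a page claims to be."""
    text = page_text.lower()
    for brand in KNOWN_BRAND_DOMAINS:
        if brand in text:
            return brand
    return None

def looks_like_brand_scam(url: str, page_text: str) -> bool:
    """Flag pages that present themselves as a known brand but live on another domain."""
    brand = infer_brand(page_text)
    if brand is None:
        return False
    official = KNOWN_BRAND_DOMAINS[brand]
    host = urlparse(url).netloc.lower()
    return not (host == official or host.endswith("." + official))

# Example: a fake "Microsoft support" page on an unrelated domain would be flagged.
print(looks_like_brand_scam("https://support-micr0soft-fix.example.net",
                            "Microsoft Support: your device is infected, call now"))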

As per the Android maker, the new Enhanced Protection feature uses AI to provide real-time protection against dangerous websites, extensions, and downloads. Before October, Enhanced Protection did not use AI; it was described instead as proactive protection.

Since then, it has been updated to AI-powered protection. For now, Google most likely relies on its pre-trained models to understand web content and alert users to dangers such as theft and scams.

While the feature sounds promising, it is still in the initial testing phase. Google has made clear that its goal is a more secure and private Chrome, and that AI can help it get there; when and how it plans to do so, only time will tell.

Read next: Google Search Will Include A New ‘AI Mode’ That’s Similar In Design to its Gemini AI Chatbot
by Dr. Hura Anwar via Digital Information World