A study from the Association for the Advancement of Artificial Intelligence examines the disconnect between public perception and actual AI performance. Although AI systems continue evolving, ensuring accurate responses remains an unresolved challenge.
Despite extensive funding, prominent AI models struggle to maintain reliability. The AAAI’s research panel collected insights from experts and surveyed hundreds of participants to assess current capabilities.
The findings indicate that widely used AI models face difficulties with factual accuracy. In evaluations using straightforward question sets, these systems provided incorrect answers in more than half of the cases. Researchers have attempted various methods to enhance precision, such as retrieving relevant documents before response generation, applying automated reasoning to eliminate inconsistencies, and guiding AI through step-by-step problem-solving processes.
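To make the mitigation techniques above concrete, here is a minimal, hedged sketch of how retrieval before generation and step-by-step prompting are commonly combined. It is illustrative only, not taken from the AAAI report; the `retrieve` helper and the document store are invented placeholders.

```python
# Illustrative only: a toy retrieval step plus a step-by-step prompt.
# Real systems use vector search and an actual model call; both are
# stubbed out here.

def retrieve(question: str, store: dict[str, str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(store.values(),
                    key=lambda text: len(q_words & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, store: dict[str, str]) -> str:
    """Ground the model in retrieved passages, then request explicit
    step-by-step reasoning before the final answer."""
    context = "\n".join(f"- {p}" for p in retrieve(question, store))
    return (f"Use only the sources below to answer.\n"
            f"Sources:\n{context}\n\n"
            f"Question: {question}\n"
            f"Think through the problem step by step, then state the answer.")

docs = {"doc1": "The Eiffel Tower is 330 metres tall.",
        "doc2": "Mount Everest rises 8849 metres above sea level."}
print(build_prompt("How tall is the Eiffel Tower?", docs))
```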
Even with these refinements, meaningful progress has been limited. Approximately 60 percent of AI specialists remain skeptical about achieving reliable factual accuracy in the near term. This reinforces the importance of human oversight when using AI tools, particularly in domains where precision is essential, such as finance and healthcare.
The study also highlights a major gap in understanding. Nearly 79 percent of AI experts believe the general public overestimates current AI capabilities. Many individuals lack the necessary knowledge to critically evaluate claims made about AI advancements. Industry analysts have observed that AI enthusiasm recently peaked and is now entering a period of reduced expectations. This trend influences digital marketing strategies, where businesses may allocate resources based on unrealistic assumptions about AI’s potential. When results do not align with projections, financial setbacks may occur.
Additionally, 74 percent of researchers argue that AI development is shaped more by popular interest than by scientific necessity. This raises concerns that fundamental challenges, including factual reliability, might be overlooked in favor of commercially appealing advancements.
Organizations adopting AI-driven solutions must recognize the limitations of these technologies. Regular evaluations and expert reviews are essential to mitigating errors, particularly in regulated sectors where misinformation carries significant consequences.
AI-generated content can negatively impact credibility if inaccuracies persist. Search platforms may deprioritize sites that publish unreliable information, reinforcing the need for careful oversight. A balanced approach where AI assists but humans validate remains the most effective strategy for maintaining trust and relevance.
Beyond content creation, decision-makers must take a measured approach to AI investment. Committing resources to new technologies without proven returns can result in costly miscalculations. Businesses that develop a clear understanding of AI’s capabilities and constraints will be better positioned to implement sustainable strategies that deliver real value.
Image: DIW-Aigen
Read next:
• Phones Aren’t the Only Distraction: Study Shows Workplace Procrastination Persists Despite Device Distance
• How Is AI Fueling a Data Explosion Bigger Than All of Human History?
• New Survey Shows that Gmail is the Most Used Email Service Provider in the US
by Asim BN via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Showing posts with label Social Media. Show all posts
Showing posts with label Social Media. Show all posts
Monday, March 31, 2025
New Survey Shows that Gmail is the Most Used Email Service Provider in the US
Google recently announced that it is integrating its Gemini AI assistant into Gmail, and the move has made many users anxious about generative AI reading their personal emails. Google has highlighted several advantages of the integration, such as faster email search, email prioritization, and highlighting of important messages with Gemini's help. Users remain skeptical, however, questioning why AI is needed for these tasks and worrying that AI models will be trained on their emails.
Statista Consumer Insights recently surveyed Americans to find out which email service providers dominate the US market, and it was no surprise that Gmail leads, with 75% of respondents using it. The second most used provider, Yahoo Mail, trails far behind at 31%.
Other email services used by Americans include Microsoft Outlook/Hotmail (25%), Apple iCloud Mail (17%), and AOL Mail (10%). Another 9% of respondents reported using AT&T Mail, while Spectrum and Xfinity (Comcast) are used by 8% and 7% respectively (respondents could name more than one provider, so the shares exceed 100%). The survey polled 1,249 US respondents between the ages of 18 and 64.
Read next: Digital Fatigue: A Third of Americans Willing To Trade Security for Convenience
by Arooj Ahmed via Digital Information World
How Is AI Fueling a Data Explosion Bigger Than All of Human History?
An enormous amount of data is now collected, stored, and processed in the digital world, and its volume and influence keep growing as technology advances. Using data from Avison Young and the IDC Global DataSphere Forecast, we visualized this rapid increase over the years, along with the challenges and opportunities that come with it. Projections indicate that the next three years will generate more data than all of human history combined. Artificial intelligence is one of the biggest drivers of this growth, tracing back to early generative models around 2014 and the launch of OpenAI's GPT-1 in 2018.
In 2010, worldwide data totaled just 2 zettabytes, growing gradually to 13 zettabytes in 2014, 33 zettabytes in 2018, and 64 zettabytes by 2020.
In 2022, worldwide data reached 101 zettabytes, up from 84 zettabytes in 2021. That was the year OpenAI released ChatGPT, which drew a million users within five days of launch. With AI products now woven into daily life, worldwide data is estimated to reach 182 zettabytes in 2025. From 2010 through 2022, roughly 542 zettabytes of data were created in total, yet from 2024 to 2026 alone about 552 zettabytes are expected, underscoring how fast the technology and AI markets are growing.
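For a rough sanity check, the figures above imply a steep compound growth rate. The short Python sketch below computes it from the article's own numbers; nothing here is new data, the yearly values are the ones cited above.

```python
# Sanity check of the article's own figures (IDC DataSphere): from 2 ZB
# in 2010 to an estimated 182 ZB in 2025. No new data is introduced.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

zettabytes = {2010: 2, 2014: 13, 2018: 33, 2020: 64,
              2021: 84, 2022: 101, 2025: 182}

rate = cagr(zettabytes[2010], zettabytes[2025], 2025 - 2010)
print(f"Implied growth: {rate:.1%} per year")  # about 35% per year

# At ~35% a year, output roughly doubles every 28 months, which is why
# the 2024-2026 window (~552 ZB) can rival all of 2010-2022 (~542 ZB).
```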
Read next: Digital Fatigue: A Third of Americans Willing To Trade Security for Convenience
by Arooj Ahmed via Digital Information World
Sunday, March 30, 2025
Phones Aren’t the Only Distraction: Study Shows Workplace Procrastination Persists Despite Device Distance
According to a new study published in Frontiers in Computer Science, keeping smartphones out of reach isn't enough to curb procrastination and distraction so people can focus on their work. The study asked whether placing smartphones farther away at work reduces workers' non-work-related smartphone use. The researchers gathered 22 participants and had them work in a soundproof, private room with their usual work devices, including phones and laptops. The smartphones received their usual notifications, which the researchers did not control.
The researchers tested two conditions: in the first, the phone sat on a desk within easy reach; in the second, it was placed 1.5 meters away on another desk. The only difference between the conditions was the distance between the smartphone and the participant. Putting the smartphone farther away did reduce phone use, but participants simply distracted themselves by other means, such as using their laptops instead.
Phone placement made no difference to focus, and the time spent on work versus leisure activities remained the same. The study also found that participants preferred smartphones as their distraction device because phones connect them to work and loved ones. Since smartphones bundle everything from alarm clocks and navigation to news and music, people favor them over other devices, and even when a phone serves no practical purpose, social media is always there for entertainment. Computers can serve many of the same functions but are less convenient and less portable.
The researchers suggest ways to reduce distractions at work, such as silencing or scheduling notifications. However, they concede that eliminating phone use entirely is unrealistic, since people, especially younger ones, depend heavily on their phones and find them hard to resist.
Image: DIW-Aigen
Read next: Job Hopping Becomes the Norm as 70% of U.S. Workers Eye Career Moves
by Arooj Ahmed via Digital Information World
Are AI Crawlers Threatening Website Performance, SEO, and Bandwidth Costs?
AI crawlers are hitting more and more websites, and these bots are affecting sites' search rankings and speed. The crawlers come from companies like Anthropic, OpenAI, and Amazon, and they harvest website data for AI models. SourceHut, for instance, has blocked several cloud providers, including Microsoft Azure and Google Cloud, because they were sending too much bot traffic its way.
According to data from Vercel, OpenAI's GPTBot made 569 million requests in a month, while Anthropic's Claude made 370 million. AI crawlers account for around 20% of Google's search crawler volume. DoubleVerify found an 86% rise in general invalid traffic (GIVT) in late 2024 due to AI crawlers, with 16% of that bot traffic coming from ClaudeBot, GPTBot, and AppleBot.
Chart: DoubleVerify
The Read the Docs project reported cutting its daily traffic from 800 GB to 200 GB by blocking AI crawlers, saving it around $1,500 per month.
AI crawlers differ from traditional crawlers in their depth and frequency, consuming more resources by revisiting the same pages every few hours. SEO professionals and website owners need to manage AI crawlers while maintaining visibility in search results: check server logs and bandwidth spikes for unusual activity, and monitor heavy traffic to resource-intensive pages, as in the sketch below. Using robots.txt and Cloudflare's AI Labyrinth can also help block unauthorized bot traffic.
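As a starting point for the log check mentioned above, here is a minimal Python sketch that counts requests from well-known AI-crawler user agents in a combined-format access log. The log path and the bot list are assumptions to adapt to your own server.

```python
# A minimal sketch: count requests from known AI-crawler user agents in
# a standard (combined-format) access log. Path and bot list are
# assumptions; adjust both for your server.

import re
from collections import Counter

AI_BOTS = ["GPTBot", "ClaudeBot", "Claude-Web", "Applebot", "CCBot",
           "Amazonbot", "Google-Extended"]

counts: Counter[str] = Counter()
with open("/var/log/nginx/access.log") as log:          # hypothetical path
    for line in log:
        ua_match = re.search(r'"([^"]*)"\s*$', line)    # last quoted field
        if not ua_match:
            continue
        for bot in AI_BOTS:
            if bot.lower() in ua_match.group(1).lower():
                counts[bot] += 1

for bot, n in counts.most_common():
    print(f"{bot}: {n} requests")
```

Crawlers that honor the robots exclusion protocol can then be turned away declaratively with robots.txt entries (e.g., `User-agent: GPTBot` followed by `Disallow: /`), while tools like AI Labyrinth target bots that ignore it.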
Read next:
• American Support for the TikTok Ban Hits New Low, Study Claims
• YouTube Updates Shorts View Count To Capture Every Play While Testing Variable Notification Frequency For Better Engagement
by Arooj Ahmed via Digital Information World
AI-Powered Sextortion Scams Surge: Cybercriminals Exploit Data Breaches for Blackmail
According to a new blog post from Avast's threat intelligence researchers, many cybercriminals are now combining AI with data breaches to run sextortion attacks. The scammers use AI and stolen data to craft personalized scams, and many online daters are falling victim. Sextortion attacks rose 137% in the US, and 49% and 34% in the UK and Australia respectively. The criminals are also adopting new tactics to carry out these attacks.
Avast's Threat Intelligence Director, Michal Salat, says sextortion victims receive alarming messages claiming that hackers have access to their private videos and images. The scams gain credibility from passwords stolen in data breaches. Scammers also use AI to create deepfake images and explicit videos that paste the victim's face onto other bodies. As AI improves, extortion by text, email, and phone is growing more sophisticated, and victims panic out of fear of exposure.
Scammers are also pulling images from Google Maps to threaten victims with fabricated pictures of their homes. They harvest victims' emails, names, and addresses from the dark web, then combine that personal information with Google Maps imagery to create unsettling footage of victims' homes. The scammers claim to have access to victims' devices and threaten to leak their personal information or sexual content.
Even though the images and threats are AI-generated, they still shock victims, especially when the personal data is accurate, and the pressure to comply with ransom demands mounts. About 15,000 Bitcoin wallets are linked to the Google Maps scams, which suggests the scammers are making huge profits. To protect yourself, do not open attachments or reply to suspicious emails, texts, or calls. Teenagers are especially vulnerable and often encounter these attacks through social media; if it happens to them, they should stay calm and should not pay the ransom.
Image: DIW-Aigen
Read next: Too Much Social Media? Study Links Heavy Use to Rising Irritability
by Arooj Ahmed via Digital Information World
Sunday, March 16, 2025
AI’s Growing Influence on Workplaces Sparks Concerns Over Declining Critical Thinking
Researchers from Microsoft Research and Carnegie Mellon University studied knowledge workers, analyzing 1,000 real-world examples of AI tool use to determine whether AI is changing our critical thinking.
The results showed that AI is changing how we think at work and affecting job satisfaction and its related challenges. The researchers examined how knowledge workers apply critical thinking when working with AI, and what causes them to think more, or less, while using AI tools.
The study found that the more people trust AI, the less they question its results, while those who believe their own skills exceed AI's tend to think critically about its responses. This creates a risk: as AI improves, we become less likely to question its output even when some scrutiny is warranted. Several factors keep people from thinking critically, including awareness barriers, motivation barriers, and ability barriers.
Microsoft senior researcher Lev Tankelevitch says most people are less critical of AI output when the tasks they perform with AI are low-stakes, and they naturally become more critical when the stakes are high. The main concern is that workers who do not exercise critical thinking routinely while using AI may forget to apply it when it truly matters. There is no doubt generative AI has made cognitive tasks easier, helping workers with comprehension, knowledge gathering, and analysis.
Even though AI retrieves information quickly, professionals should focus on checking its accuracy. AI is also used for problem-solving, but workers must refine and adapt its solutions to real-world scenarios, and professionals should supervise AI to ensure it produces high-quality, relevant results rather than letting it act unchecked. As AI roles evolve in workplaces, jobs will shift toward prompt engineering, quality control, and output verification. Workplace success will be defined by how well employees direct and assess AI, not just by personal task execution. To keep workers thinking critically while using AI, organizations should integrate verification steps into workflows (a minimal sketch follows below) and design AI interfaces that prompt users to critically analyze every response. The skills needed in AI-driven workplaces keep evolving, but critical thinking remains essential.
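One way to read "integrate verification steps into workflows" is a gate that refuses to release AI output until a human has signed off. The Python sketch below is a generic illustration under that assumption, not the study's or Microsoft's design; the class and field names are invented.

```python
# Generic human-in-the-loop gate: AI output is unusable until a human
# reviewer has explicitly verified it. Names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AiDraft:
    task: str
    output: str
    reviewed_by: list[str] = field(default_factory=list)

def release(draft: AiDraft, min_reviewers: int = 1) -> str:
    """Only return AI output that a human has explicitly verified."""
    if len(draft.reviewed_by) < min_reviewers:
        raise PermissionError(
            f"'{draft.task}' needs {min_reviewers} human review(s) "
            f"before use; has {len(draft.reviewed_by)}."
        )
    return draft.output

draft = AiDraft(task="quarterly summary", output="Revenue rose 4%...")
draft.reviewed_by.append("analyst@example.com")   # the verification step
print(release(draft))
```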
Image: DIW-Aigen
Read next: Social Media Abstinence Lowers Anxiety, But Mindful Usage Helps Prevent Loneliness, Researchers Discover
by Arooj Ahmed via Digital Information World
Can You Trust AI for Medical Advice? New Study Uncovers the Risky Truth
According to a new study published in NPJ Digital Medicine, Spanish researchers investigated whether large language models are reliable when it comes to giving health advice. They tested seven LLMs, including OpenAI's ChatGPT and GPT-4 and Meta's Llama 3, on 150 medical questions and found that results varied across the models. Most AI-based search engines give incomplete or incorrect answers to health-related questions. Even though AI-powered chatbots are increasingly in demand, there had been few proper studies showing whether LLMs give reliable medical answers. This study found that LLM accuracy depends on phrasing, retrieval bias, and reasoning, and that the models can still produce misinformation.
For the study, the researchers assessed four search engines (Google, Yahoo!, DuckDuckGo, and Bing) and seven LLMs, including ChatGPT, GPT-4, Flan-T5, Llama 3, and MedLlama 3. ChatGPT, GPT-4, Llama 3, and MedLlama 3 had the upper hand in most evaluations, while Flan-T5 lagged behind the pack. For the search engines, the researchers analyzed the top 20 ranked results, using a passage extraction model to identify relevant snippets and a reading comprehension model to determine whether a snippet gave a definitive yes/no answer. Two types of user behavior were also simulated: lazy users stopped searching as soon as they found the first clear answer, while diligent users cross-referenced three sources before settling on an answer. The lazy users got the most accurate answers, which suggests that top-ranked results are correct most of the time.
For the large language models, the researchers tried different prompting strategies, such as asking a question without any context, using friendly wording, and using expert wording. They also supplied some models with sample Q&As, which helped some models but had no effect on others. A retrieval-augmented generation (RAG) setup was tested as well, in which LLMs were given search engine results before generating their own responses, as sketched below. Model performance was measured by accuracy, common response errors, and improvement from retrieval augmentation.
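To make the evaluation design concrete, here is a hedged Python sketch of the idea: the same yes/no medical question asked under three prompt styles, with or without retrieved snippets. The `ask_llm` function is a stand-in for whichever model client you use; the study's exact prompts and scoring are more involved.

```python
# Sketch of the evaluation design: one yes/no medical question asked
# under three prompt styles, optionally with retrieved context (RAG).
# ask_llm() is a stub, not the study's actual model interface.

PROMPTS = {
    "no_context": "{q} Answer yes or no.",
    "friendly":   "Hi! Could you help me out? {q} Answer yes or no.",
    "expert":     "As a medical expert, answer with yes or no: {q}",
}

def ask_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; replace with your client."""
    return "yes"  # canned reply so the sketch runs end to end

def evaluate(question: str, truth: str,
             snippets: list[str] | None = None) -> dict[str, bool]:
    """Score each prompting strategy; prepend retrieval context if given."""
    results = {}
    for name, template in PROMPTS.items():
        prompt = template.format(q=question)
        if snippets:  # retrieval-augmented generation variant
            prompt = "Context:\n" + "\n".join(snippets) + "\n\n" + prompt
        answer = ask_llm(prompt).strip().lower()
        results[name] = answer.startswith(truth.lower())
    return results

print(evaluate("Can vitamin D deficiency cause bone pain?", "yes",
               snippets=["NIH: vitamin D deficiency can cause bone pain."]))
```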
The results showed that search engines answered 50-70% of queries accurately, while the LLMs reached about 80% accuracy. LLM responses varied with how questions were framed; the expert prompt (using an expert tone) was the most effective but sometimes produced less definitive answers. Bing had the most reliable answers, though it wasn't meaningfully better than Yahoo!, Google, or DuckDuckGo. Many raw search results were irrelevant or off-topic, but filtering for relevant answers improved precision to 80-90%. Smaller LLMs improved after search engine snippets were added, but poor-quality retrieval worsened LLM accuracy, especially for COVID-19-related queries.
Error analysis revealed three major LLM failure modes on health queries: misunderstanding the medical consensus, misinterpreting questions, and giving ambiguous answers. Performance also varied by dataset, with questions from a 2020 dataset producing more accurate responses than those from a 2021 dataset.
Read next: AI Search Traffic Jumps 123% as ChatGPT and Perplexity Reshape SMB Strategies
by Arooj Ahmed via Digital Information World
Saturday, March 15, 2025
Social Media Abstinence Lowers Anxiety, But Mindful Usage Helps Prevent Loneliness, Researchers Discover
According to a new study from the University of British Columbia, we can protect our mental health without quitting social media completely by prioritizing meaningful connections over mindless scrolling. A complete digital detox isn't realistic for many people given modern work life, so the study suggests you can stay mentally healthy while using social media by stopping yourself from scrolling mindlessly.
Social media offers young adults advantages as well as disadvantages: it helps people stay connected to different communities, but prolonged use can also increase anxiety, depression, and other mental health issues that affect daily life. For the research, the team recruited 393 social media users between the ages of 17 and 29 who had experienced some negative effects of social media use. The participants were divided into three groups: a tutorial group that learned healthy social media habits, an abstinence group that stopped using social media completely, and a control group that continued its usual social media use. The researchers tracked the groups' social media activity for six weeks and assessed several aspects of their mental health.
The results showed a huge decline in the abstinence group's social media use, while the tutorial group became somewhat more mindful and started using social media more selectively. Both the abstinence and tutorial groups spent less time passively scrolling and stopped comparing themselves to others; the tutorial group saw significant changes, and the abstinence group saw the biggest ones.
The researchers found that each group's approach was effective for different aspects of mental health. The tutorial method reduced loneliness and FOMO, while abstinence helped reduce anxiety and depression. Abstinence, however, was not effective against loneliness, probably because it cut off social connection. This shows that while completely cutting off social media may seem highly effective for various mental health issues, it can also lead to social isolation. The tutorial method helped participants use social media when it truly mattered: they learned to notice when social media made them feel good or bad, unfollowed or muted accounts that triggered negative feelings, and engaged with friends and family through comments and messages instead of mindless scrolling.
Image: DIW-Aigen
Read next: Keto and Low Carb Diet Proven to Reverse Several Chronic Diseases, Yet Big-Pharma Influence Suppresses Mainstream Awareness
by Arooj Ahmed via Digital Information World
Did the Biden Administration Order Big Tech to Censor AI Platforms? Republican Congressman Issues New Letter to Find Out
Letters were issued to 16 of America's leading tech companies, including Google and OpenAI, to determine whether or not the Biden administration had ordered censorship on AI tools.
The letters came from Republican Congressman Jim Jordan, who demanded records of past communications that might substantiate this alarming allegation against the former president's administration regarding censorship of lawful speech on AI platforms.
We've seen Trump's top tech advisors touch on the subject in the past, including how they would get to the bottom of it. The matter is the next phase of the culture war between Silicon Valley and conservatives. Many held the opinion that silencing voices on various social media apps was unlawful to begin with; now the Trump administration wants to know whether AI giants and other intermediary firms were involved.
Letters were sent to the heads of OpenAI, Apple, and Google, citing a report on the Biden administration's efforts to suppress speech in AI.
Other tech companies, including Amazon, IBM, Meta, Scale AI, Microsoft, and Cohere, were also asked for details on the matter. All of the companies have been asked to reply by the March 27 deadline.
What has everyone curious is the one leading tech firm missing from the list: Musk's xAI. That likely has a lot to do with Musk's entry into politics and his closeness to the president, which appears to shield him from such investigations.
Many companies had anticipated an investigation like this, which is why several leading AI firms have already altered how their chatbot assistants handle politically sensitive questions.
At the start of the year, OpenAI shared that it was changing how it trains AI models to represent different perspectives and to make sure ChatGPT was not censoring certain people's views.
OpenAI denied this was an attempt to appease the Trump administration, calling it instead a way to double down on the firm's core values. Meanwhile, Anthropic shared that its latest AI model will refuse fewer queries: no matter how controversial the question, it will try its best to provide a nuanced reply.
Other firms have been slower to change how their AI systems tackle political prompts. Remember, Google said its Gemini would not reply to politically themed queries; interestingly, even after the elections, the assistant still does not consistently answer political questions as simple as who America's head of state is.
Some tech executives, like Meta's CEO, appear to have added fuel to conservative accusations of censorship in Silicon Valley. Zuckerberg mentioned that his firm was pressured to silence certain kinds of content related to COVID-19 misinformation.
Image: DIW-Aigen
Read next: Apple’s Legal Battle Hearing with UK Government Against Access to User Data Heard Behind Closed Doors
by Dr. Hura Anwar via Digital Information World
Apple’s Legal Battle Hearing with UK Government Against Access to User Data Heard Behind Closed Doors
A hearing in Apple's legal battle with the British government over access to user data was recently held behind closed doors.
The news comes just one day after privacy groups in the UK demanded the proceedings be conducted more transparently, arguing that the public's rights were at stake. Despite that, the press was denied entry to the courtroom.
The American tech giant has appealed to the tribunal against the Home Office's demand for encrypted data held on Apple's iCloud servers. Several media outlets, including the BBC, Computer Weekly, and the Guardian, applied for entry to the hearing but were likewise turned away.
It was notable that Sir James Eadie, who usually represents the government in major cases, appeared in the courtroom for this hearing.
The iPhone maker is fighting a technical capability notice issued under the nation's Investigatory Powers Act, which compels organizations to help law enforcement agencies obtain evidence.
The notice demands access to the company's Advanced Data Protection (ADP) service, which covers highly sensitive encrypted user data stored remotely on Apple's servers.
Apple opposed the demand and challenged the order at the tribunal, which rules on whether the UK's intelligence and security services have acted lawfully. The news comes after the Cupertino firm chose to withdraw ADP from the UK in February this year.
Apple has reiterated time and again that it has never designed a backdoor or any kind of master key for its products and never will.
ADP uses end-to-end encryption (E2E), meaning only account holders can decrypt their files. Messaging services like iMessage and FaceTime remain end-to-end encrypted by default.
The government's secret legal demands are formally known as technical capability notices (TCNs), and recipients of a TCN cannot disclose the order's existence unless the home secretary grants permission. The tribunal's website also states that hearings should be held in private only when it is deemed absolutely necessary.
It also notes that no information should be disclosed that might threaten the country's national security. Two days earlier, American lawmakers called on the tribunal to lift the cloak of secrecy around the British government's order and make Friday's hearing public.
According to a recent Bloomberg report, UK officials have held talks with their American counterparts about the order, assuring the US that it is not seeking any kind of blanket access, only data related to serious criminal offenses, terrorist attacks, or abuse cases.
Image: DIW-Aigen
Read next: Google All Set to Replace Google Assistant with Gemini This Year
by Dr. Hura Anwar via Digital Information World
Google All Set to Replace Google Assistant with Gemini This Year
Tech giant Google has confirmed that it is closing the door on Google Assistant, which will be replaced by Gemini later in 2025.
Google shared that the change will apply to all smartphones, tablets, vehicles, and devices that connect to your phone, like smartwatches and headphones, as well as television-connected devices.
The news is hardly surprising: when the tech giant rolled out Bard, it said it would bring Bard features to Assistant, and last year the company removed a host of Google Assistant features.
The company announced the change in a blog post, saying it was busy upgrading more users on phones ahead of the transition. Later this year, Assistant will no longer be available for new installs from the mobile app store on most devices.
The company is also upgrading tablets, vehicles, and phone-connected devices, and is keen to bring the new Gemini experience to home devices such as televisions and smart displays.
Google noted that the change will not reach older devices, such as phones running Android 9 or earlier or lacking sufficient RAM. The Gemini app's requirements on Android: nearly any Android phone with at least 2 GB of RAM running Android 10 or later, and Android tablets such as the Pixel Tablet meeting the same 2 GB RAM and Android 10+ bar.
The news is welcome, as Gemini is well equipped to handle many tasks on Google's behalf, so few users should mind the change. It also signals that Gemini may be Google's future not only on Android but in Search as well.
Image: DIW-Aigen
Read next:
• What the Latest SEO Study Means for Your Business’s Online Visibility
• Global Smartphone Sales Flat in Q4 2024 as Android Faces Record Lows in India, US
by Dr. Hura Anwar via Digital Information World
Friday, March 14, 2025
Global Smartphone Sales Flat in Q4 2024 as Android Faces Record Lows in India, US
According to a new report by Counterpoint Research, global smartphone sales were essentially flat in Q4 2024, producing only small shifts in global OS shares. Android held a 74% market share, the same as last year, but still faced some challenges: despite its overall stability, it hit its lowest quarterly share on record in both India and the US. Sales growth from Google and Motorola helped offset double-digit YoY declines among smaller brands in the US. The iPhone also showed strong market stability, with sales declining only 1%.
HarmonyOS held a 4% global market share, unchanged YoY. In China, HarmonyOS's share rose to 19%, mostly on the strength of new launches, and it stayed ahead of iOS for the fourth consecutive quarter. Meanwhile, smartphone subsidies in China are boosting local OEMs, which will probably drive further growth.
Read next: TikTok Leads Revenue and Downloads Despite US Ban Concerns, ChatGPT Surges, DeepSeek Joins Top Ten
by Arooj Ahmed via Digital Information World
Meta Adopts X’s Community Notes for Fact-Checking, Testing Begins with 200K Interested Users
Tech giant Meta has confirmed that it will use Elon Musk's X technology for its much-anticipated crowdsourced fact-checking feature, Community Notes.
The news is not too surprising, considering X was the first to run Community Notes as part of its platform. Meta shared a detailed post on the matter yesterday, saying its new content moderation tool will use the same open-source algorithm that X's system is built on. Over time, Meta hopes to adapt the algorithm to serve the Facebook, Threads, and Instagram apps.
X's algorithm is open source today, which means other tech companies can study and use it as they wish. Meta wants to build on what X created, learn from research experts in the domain, and improve the system for its own array of apps.
As its own variant develops over time, Meta hopes to explore other algorithms that can support Community Notes with similar rankings and ratings. According to Meta's leadership, the feature will be a better alternative to the app's previous fact-checkers and human moderators.
We can confirm that the feature goes into testing by Meta next week. The company has already explained how users can become a contributor for Community Notes, provided they meet its list of requirements. This includes being above the age of 18 and having verified phone numbers.
Contributors won’t get the chance to submit these Community Notes for ads, but they can do so on nearly every other kind of content. This might be a post by Meta, politicians, public figures, and so on. Any post receiving the Community Note cannot get an appeal, but at the same time, there’s no more penalty for that kind of content being flagged online.
The social media giant says it’s well aware of how this feature will give rise to more content, but that will not affect the content on display and how frequently it gets shared online. A spokesperson from the company explained media outlets that this Community Notes won’t be a replacement for any kind of content moderation.
Meta has no immediate plans to open source its own version of the system or publish details of how it works, though that could change in the future.
So far, more than 200,000 people have signed up to become contributors, and the waitlist remains open to anyone who wants to join.
Experts are already debating how well Community Notes can replace professional fact-checkers. The prevailing view is that while the tool adds useful context to online content, it is no substitute for formal fact-checking.
The system is not perfect, and it could be exploited by groups or companies pushing their own agendas. Meta, however, says that publishing a note requires agreement between contributors with differing viewpoints, a safeguard designed to protect against coordinated campaigns trying to game the system.
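X's open-sourced ranking system is a matrix-factorization model rather than a simple vote count, but the core idea, requiring agreement across raters who usually disagree, can be illustrated with a much-simplified sketch (the thresholds and cluster labels here are invented for illustration):

```python
# Toy illustration of cross-viewpoint agreement for Community Notes.
# X's real open-sourced algorithm uses matrix factorization; this
# simplified version only checks that a note is rated helpful by
# raters from more than one viewpoint cluster.
from collections import defaultdict

def note_publishable(ratings, min_clusters=2, min_helpful_share=0.7):
    """ratings: list of (rater_cluster, rated_helpful: bool) pairs."""
    votes_by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        votes_by_cluster[cluster].append(helpful)

    # Clusters in which most raters found the note helpful.
    agreeing = [
        c for c, votes in votes_by_cluster.items()
        if sum(votes) / len(votes) >= min_helpful_share
    ]
    return len(agreeing) >= min_clusters

# A note backed by only one side of a divide does not publish:
print(note_publishable([("left", True), ("left", True), ("right", False)]))  # False
print(note_publishable([("left", True), ("right", True), ("right", True)]))  # True
```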
Meta plans to expand Community Notes across the US once it is satisfied with the results of the initial testing phase, though no timeline has been given.
Read next:
• New Report Shows AI Chatbots and Search Engines Are Unable to Refer Traffic to Websites Despite Increase in AI Scraping
• New Research Shows Frequent App Crashes Result in Lower User Engagement
by Dr. Hura Anwar via Digital Information World
Thursday, March 13, 2025
New Research Shows Frequent App Crashes Result in Lower User Engagement
Mobile app crashes can have a strong impact on user engagement. Many developers rush apps to market with missing features and persistent stability problems. Last year, a botched Sonos app update cost the company millions of dollars and, ultimately, its CEO his job. In the race to ship, companies often leave underlying issues unfixed, leading to apps that crash and freeze frequently.
Researchers found that crashes reduce consumption and shorten the time users spend in an app. Curiously, a single crash can actually increase page views: the interruption triggers the Zeigarnik Effect, the psychological tension people feel when a goal is interrupted, which drives them to return and finish what they started.
When an app crashes frequently, however, users grow frustrated and abandon it altogether. One of the clearest examples is HBO Max's transition to Max in 2023: frequent crashes annoyed users, and many simply stopped using the app. New feature releases can also introduce crashes, which hurts revenue. Lower engagement depresses advertising income, which depends on page views, and crashes disrupt in-app purchases as well, so overall revenue can fall dramatically.
The researchers advise caution with premature releases, which can lead to clusters of crashes. Rather than pushing updates to everyone at once, they recommend first testing them with users who are more prone to crashes; if the update performs well there, it can then be rolled out to the broader customer base.
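The staged approach the researchers describe amounts to a simple gate: expose the update to a small, crash-prone cohort first, and widen the rollout only while the observed crash rate stays under a threshold. A minimal sketch (the stage fractions and the 2% threshold are invented for illustration, not taken from the study):

```python
# Minimal staged-rollout gate: widen the release only while the
# canary cohort's crash rate stays under a threshold.
ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.00]  # share of users on the update
CRASH_THRESHOLD = 0.02                     # max tolerable crash rate (2%)

def next_stage(current_stage, sessions, crashes):
    """Return the next rollout fraction, or None to halt the rollout."""
    crash_rate = crashes / sessions if sessions else 0.0
    if crash_rate > CRASH_THRESHOLD:
        return None  # halt and fix the update before going wider
    idx = ROLLOUT_STAGES.index(current_stage)
    return ROLLOUT_STAGES[min(idx + 1, len(ROLLOUT_STAGES) - 1)]

# Canary cohort of crash-prone users: 1,000 sessions, 8 crashes (0.8%).
print(next_stage(0.01, sessions=1_000, crashes=8))     # 0.05 -> expand
print(next_stage(0.05, sessions=10_000, crashes=400))  # None -> halt (4%)
```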
Image: DIW-Aigen
Read next: Americans Waste 2 Hours Daily on Phones, Here’s What’s Stealing Their Focus!
by Arooj Ahmed via Digital Information World
ChatGPT Can’t Keep Up: Google Handles 373x More Traffic and Keeps Growing
According to a new analysis by SparkToro, AI chatbots and AI-powered search engines are popular, but they still come nowhere near traditional Google Search. The analysis found that Google Search handles 373 times more traffic than ChatGPT, and its traffic has grown year-over-year. Many users, marketers, and analysts have claimed that AI chatbots are eating into Google's business, but the data shows Google remains firmly dominant.
The research also shows that even if ChatGPT receives 1 billion queries daily, its search market share would still be under 1%. A Semrush study found that only about 30% of ChatGPT queries fall into the traditional search category, and that ChatGPT performs a live web search for only 46% of queries. Google handles roughly 14 billion searches per day, a 93.57% market share; by Google's own account, it served more than 5 trillion searches in 2024. ChatGPT, with an estimated 37.5 million traditional-search-style queries per day, holds about a 0.25% share. Yahoo holds 1.35%, Microsoft Bing 4.10%, and DuckDuckGo 0.73%, which shows how far behind AI chatbots like ChatGPT still are.
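The headline shares are easy to sanity-check against the figures given; a quick back-of-the-envelope calculation using SparkToro's numbers (illustrative only):

```python
# Rough consistency check on the SparkToro figures.
google_daily = 14e9    # Google searches per day
google_share = 0.9357  # Google's reported market share

# Implied size of the whole search market:
total_market = google_daily / google_share  # ~14.96 billion queries/day

chatgpt_searchlike = 37.5e6  # ChatGPT search-style prompts per day
print(f"Implied total market: {total_market / 1e9:.2f}B queries/day")
print(f"ChatGPT share: {chatgpt_searchlike / total_market:.2%}")  # ~0.25%
```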
According to data from Datos, Google search volume grew 21.64% from 2023 to 2024. Google CEO Sundar Pichai attributes part of the surge to AI Overviews, which many users have embraced. But heavy Google usage does not translate into more traffic for websites: the analysis found that roughly 60% of Google searches end without a click on any website, which works out to about 3 trillion zero-click searches in 2024.
Read next: Google’s Secret to Staying on Top – 86.94% of Americans Still Use It Daily!
by Arooj Ahmed via Digital Information World
Wednesday, March 12, 2025
New Report Finds That Only 4% of the Global Population Holds Bitcoin
According to a new report from River, a Bitcoin financial services company, only 4% of the world's population owns Bitcoin despite its growing popularity. In the US, 14% of individuals own Bitcoin, the highest concentration of ownership of any country, while Africa has the lowest regional adoption rate at 1.6%. The study also notes that Bitcoin represents about 0.2% of global wealth, and estimates its total addressable market at $225 trillion, assuming it captures 50% of store-of-value assets.
River says Bitcoin has reached only 3% of its maximum adoption potential, meaning adoption is still in its early stages; the 3% figure was calculated from both individual and institutional ownership. Developed countries are more open to Bitcoin than developing ones. And while Bitcoin has even become a US government reserve asset, significant hurdles still stand in the way of mass adoption globally.
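Put numerically, River's assumptions imply the following (a rough sketch; the $450 trillion store-of-value pool is inferred from the stated figures, not quoted in the report):

```python
# Back-of-the-envelope math implied by River's figures (illustrative).
tam = 225e12          # total addressable market, USD
capture_rate = 0.50   # assumed share of store-of-value assets captured

store_of_value_pool = tam / capture_rate  # implied pool: $450 trillion
adoption_progress = 0.03                  # 3% of maximum adoption reached

print(f"Implied store-of-value pool: ${store_of_value_pool / 1e12:.0f}T")
print(f"Adoption headroom remaining: {1 - adoption_progress:.0%}")  # 97%
```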
The biggest barriers to mass adoption are gaps in technical and financial education. Misconceptions abound, with many people dismissing Bitcoin as a Ponzi scheme or scam. Cryptocurrencies are also highly volatile, which suits short-term traders but not everyday transactions. That volatility hits developing countries hardest, pushing them toward US dollar stablecoins for stability and lower transaction fees.
Read next: AI Search Is Lying to You, And It’s Getting Worse
by Arooj Ahmed via Digital Information World
AI Search Is Lying to You, And It’s Getting Worse
Facts matter. Trust matters. But in the race to reinvent search, both are getting trampled. A recent Columbia Journalism Review study reveals a hard truth — machines, built to deliver answers in an instant, are often serving up fiction with a straight face. Instead of guiding users to reliable sources, search engines now deal in confidence, not accuracy, replacing verifiable facts with AI-generated guesswork. The promise was a smarter way to find information; the reality is a flood of misinformation, dressed up as truth, delivered without a second thought.
The study highlights a growing issue with AI search tools scraping online content to generate responses. Instead of directing users to the original sources, these systems often provide instant answers, significantly reducing website traffic. A separate, unrelated study also found that click-through rates from AI-generated search results and chatbots were substantially lower than those from Google Search. The situation becomes even more problematic when these AI tools fabricate citations, misleading users by linking to non-existent or broken URLs.
An analysis of multiple AI search models found that over half of the citations generated by Google’s Gemini and xAI’s Grok 3 led to fabricated or inaccessible webpages. More broadly, chatbots were found to deliver incorrect information in more than 60% of cases. Among the evaluated models, Grok 3 had the highest error rate, with 94% of its responses containing inaccuracies. Gemini fared slightly better but only provided a fully correct answer once in ten attempts. Perplexity, though the most accurate of the models tested, still returned incorrect responses 37% of the time.
The study’s authors noted that multiple AI models appeared to disregard the Robot Exclusion Protocol, a standard that allows websites to restrict automated content scraping. This disregard raises ethical concerns about how AI search engines collect and repurpose online information. Their findings align with a previous study published in November 2024 that examined ChatGPT’s search capabilities, revealing consistent patterns of confident but incorrect responses, misleading citations, and unreliable information retrieval.
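For context, the Robot Exclusion Protocol works through a plain-text robots.txt file served at a site's root, which a compliant crawler is expected to consult before fetching pages. A minimal sketch of that check using Python's standard library (the domain and user-agent strings are placeholders):

```python
from urllib import robotparser

# A compliant crawler checks robots.txt before fetching any page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

# A publisher might list e.g. "User-agent: GPTBot / Disallow: /" to opt out.
if rp.can_fetch("ExampleAIBot", "https://example.com/article"):
    print("Allowed to crawl this URL")
else:
    print("robots.txt forbids crawling this URL")
```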
Experts have warned that generative AI models pose significant risks to information transparency and media credibility. Critics such as Chirag Shah and Emily M. Bender have raised concerns that AI search engines remove user agency, amplify bias in information access, and frequently present misleading or toxic answers that users may accept without question.
The study ran 1,600 queries to compare how different generative AI search tools retrieved article details such as headlines, publishers, publication dates, and URLs. The evaluation covered eight tools: ChatGPT Search, Microsoft Copilot, DeepSeek Search, Perplexity and its Pro version, xAI's Grok-2 and Grok-3 Search, and Google Gemini. Each was tested with direct excerpts from ten randomly selected articles from each of 20 publishers, 200 excerpts in all. The results underscore a significant challenge for AI-driven search: despite their growing integration into digital platforms, these tools still struggle with accuracy and citation reliability.
Read next:
• How to Increase Subscribers on YouTube?
• Social Media Users Unknowingly Participate in Marketing Experiments, Research Reveals
• Engagement Trends Show Threads Growing, X’s Virality Strength, and Bluesky’s Slowdown
by Arooj Ahmed via Digital Information World
Engagement Trends Show Threads Growing, X’s Virality Strength, and Bluesky’s Slowdown
Buffer analyzed 1.7 million posts from X, Threads, and Bluesky and found that all three platforms share the same median engagement: four interactions per post. That might suggest the platforms are interchangeable, but they aren't; each has its own patterns, dynamics, audience behavior, and posting consistency. A Buffer data scientist studied posts from 56,000 users to surface the trends. Here, engagements means the total number of reactions a post receives, including likes, comments, and reposts.
The study highlights that posts on Threads tend to earn higher engagement, though some of the data shows X posts matching them. A note on terminology: engagement rate is the percentage of people who see a post and interact with it (like, comment, and so on), while total engagements count every interaction. Engagement rate tells you how well a post resonates with its audience; total engagements tell you about overall activity on the platform.
In 2024, posts on X, Threads, and Bluesky all had a median of four engagements per post. By February 2025, however, Threads had risen to a median of five, X held at four, and Bluesky had slipped to three. The differences may look small, but they show each platform developing its own distinct identity.
Median engagement shows how a typical post performs, but it hides viral outliers; the gap between the median and the average is what reveals virality. X posts average 328 engagements, Threads 58, and Bluesky 21. X's standard deviation exceeds 5,000, meaning outcomes there are highly unpredictable, while Threads and Bluesky post lower but more consistent numbers. A high standard deviation signals strong viral potential; a low one signals predictable engagement.
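To see why the median stays put while the average balloons, consider a toy example (invented numbers, not Buffer's data):

```python
import statistics

# Toy data: nine ordinary posts plus one viral outlier.
ordinary = [2, 3, 3, 4, 4, 4, 5, 5, 6]
viral = [50_000]
engagements = ordinary + viral

print(f"median: {statistics.median(engagements)}")     # 4.0 -- the typical post
print(f"mean:   {statistics.mean(engagements):.0f}")   # ~5004 -- inflated by the outlier
print(f"stdev:  {statistics.stdev(engagements):.0f}")  # ~15811 -- unpredictability
```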
Taken together, X is the platform with the most viral potential: even with a median of four engagements, a post that takes off can reach extreme levels. Threads offers moderate engagement and is stabilizing quickly; virality there is less predictable, but audience growth is steadier. Bluesky has the smallest engagement spread and leans more on community than on viral reach.
Read next: Even with Reduced Expectations for Ratings, Consumers Actively Contribute Reviews on Google and Social Media
by Arooj Ahmed via Digital Information World
OpenAI is Rolling Out New Responses API Tool That Can Search Through Large Volumes of Online Data
The future of AI includes agents, and that's why the maker of ChatGPT is working to help developers design their own.
OpenAI is releasing a new Responses API that offers developers the building blocks for agents: systems that can search large volumes of online data and carry out tasks on a computer so the user doesn't have to.
According to OpenAI's head of Deep Research and Operator, the company can build some agents itself, but the internet is complex, and many industries and use cases need a foundation on which developers can build efficient agents tailored to their own needs.
The API's built-in web search runs on the same model ChatGPT uses for search, giving developers real-time data and citations from the web when using GPT-4o and GPT-4o mini. It also includes a computer use tool, based on the same model behind OpenAI's Operator, which lets an agent perform tasks on a computer on the user's behalf.
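As a rough sketch of how a developer might call the built-in web search tool through OpenAI's Python SDK (tool names and fields follow OpenAI's launch documentation but may change as the API evolves; the query is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the Responses API to answer using live web results with citations.
response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # built-in web search tool
    input="What did Counterpoint Research report about Q4 2024 OS shares?",
)

# Aggregated text answer; cited sources appear as annotations on the
# individual output items.
print(response.output_text)
```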
The goal is to support agents in roles like customer service, where they can work through FAQs, or legal research, where they can dig up old cases.
OpenAI also shared its Agents SDK, which it describes as a way for developers to orchestrate AI agent workflows. Several agents can work together as a unit to solve difficult tasks, and the SDK should make it much simpler for developers to manage them and keep them working toward a single goal.
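A minimal sketch of two agents working as a unit with the Agents SDK (installed via pip install openai-agents); the agent names, instructions, and query below are illustrative, not taken from OpenAI's examples:

```python
from agents import Agent, Runner

# A specialist agent for routine support questions.
faq_agent = Agent(
    name="FAQ agent",
    instructions="Answer common customer-support questions concisely.",
)

# A front-line agent that routes work to specialists via handoffs.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route customer questions to the right specialist.",
    handoffs=[faq_agent],  # agents working as a unit toward one goal
)

result = Runner.run_sync(triage_agent, "How do I reset my password?")
print(result.final_output)
```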
The Responses API and Agents SDK build on tools OpenAI has already rolled out to developers, such as the Chat Completions API, which lets developers build tools that respond to user queries. OpenAI also plans to retire the Assistants API in favor of the new Responses API by the middle of next year, saying it has folded in many key improvements based on developer feedback.
Read next: Hidden Threat: Even One Breath in These Cities Could Be Life-Threatening
by Dr. Hura Anwar via Digital Information World