Tech giant Meta has confirmed it will use technology from Elon Musk's X for its much-anticipated crowdsourced fact-checking feature, Community Notes.
The news is not too surprising, given that X pioneered Community Notes. Meta detailed the move in a post yesterday, saying the new content-moderation tool will use the same open-source algorithm that powers X's version. Over time, Meta hopes to adapt the algorithm to serve Facebook, Threads, and Instagram.
Because X's algorithm is open source, other tech giants are free to study and reuse it as they see fit. Meta therefore plans to build on what X has created, learn from researchers in the field, and improve the system for its own family of apps.
As its own variant matures, Meta hopes to explore other algorithms that support Community Notes with similar rating and ranking approaches. According to Meta's leadership, the feature will be a better alternative to the third-party fact-checkers and human moderators the company previously relied on.
Meta begins testing the feature next week. The company has already explained how users can become Community Notes contributors, provided they meet its requirements, which include being over 18 and having a verified phone number.
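The stated requirements amount to a simple gate check. A minimal sketch, with a hypothetical function name and fields that are illustrative rather than Meta's actual API:

```python
# Hypothetical sketch of the eligibility check Meta describes: contributors
# must be over 18 and have a verified phone number. Names here are
# illustrative, not taken from any real Meta interface.

def is_eligible_contributor(age: int, phone_verified: bool) -> bool:
    """Return True if a user meets the stated Community Notes requirements."""
    return age >= 18 and phone_verified

print(is_eligible_contributor(21, True))   # True: meets both requirements
print(is_eligible_contributor(17, True))   # False: too young
```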
Contributors won't be able to submit Community Notes on ads, but they can on nearly every other kind of content, including posts from Meta itself, politicians, and public figures. A post that receives a Community Note cannot be appealed, but flagged content no longer carries any additional penalty.
The social media giant says it's aware the feature will generate more content, but that won't affect what is displayed or how often it's shared. A company spokesperson told media outlets that Community Notes won't replace other forms of content moderation.
Meta has no plans to open source its system or publicly detail how it works, though that could change in the future.
So far, more than 200,000 people have expressed interest in becoming contributors to the moderation feature, and the waiting list remains open; anyone who wants to take part still can.
Experts are already debating how well Community Notes can replace fact-checkers. The broad consensus is that while the tool adds some context to content published online, it is no substitute for formal fact-checking.
The system is not perfect and could be exploited by groups or companies with agendas of their own. Meta, however, says publishing a note requires agreement between a variety of individuals and groups, a policy designed to protect against organized campaigns trying to sway the system toward a personal agenda.
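The agreement requirement can be illustrated with a toy version of the "bridging" idea behind X's open-source algorithm: a note surfaces only when raters from more than one viewpoint cluster find it helpful. This is a deliberately simplified stand-in, not the real matrix-factorization implementation:

```python
# Toy illustration of cross-perspective agreement: a note is publishable only
# if raters from at least two different viewpoint clusters rated it helpful.
# A simplified sketch, not X's or Meta's actual algorithm.

def note_publishable(ratings) -> bool:
    """ratings: list of (cluster_id, helpful: bool) tuples."""
    helpful_clusters = {cluster for cluster, helpful in ratings if helpful}
    return len(helpful_clusters) >= 2  # agreement across differing groups

print(note_publishable([("A", True), ("A", True), ("B", False)]))  # False
print(note_publishable([("A", True), ("B", True)]))                # True
```

A one-sided pile of "helpful" ratings from a single camp is not enough, which is the property Meta says guards against coordinated campaigns.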
Meta plans to expand the Community Notes model across the US once it's comfortable with the results of the initial testing phase, though there is no confirmed timeline yet.
Read next:
• New Report Shows AI Chatbots and Search Engines Are Unable to Refer Traffic to Websites Despite Increase in AI Scraping
• New Research Shows Frequent App Crashes Result in Lower User Engagement
by Dr. Hura Anwar via Digital Information World
Friday, March 14, 2025
Thursday, March 13, 2025
New Research Shows Frequent App Crashes Result in Lower User Engagement
Mobile app crashes can have a strong impact on user engagement. Many developers rush apps to release with missing features and persistent crashes. Last year, a botched app rollout at Sonos cost the company millions of dollars, and its CEO lost his job as well. App companies race to ship, but unfixed issues lead to constant crashes and frequent freezes.
Researchers found that these crashes reduce usage and shorten the time users spend in apps. Curiously, after a single crash, page views can actually increase: the Zeigarnik Effect means an interrupted goal creates psychological tension, and users return to complete what they started.
But when an app crashes frequently, users get frustrated and stop using it altogether. One of the biggest examples is HBO Max's transition to Max in 2023, when frequent crashes drove annoyed users away. Newly released features can also introduce crashes, which hurts revenue: advertising income depends on page views and falls when engagement drops, and in-app purchases suffer too, dragging the app's overall revenue down dramatically.
The researchers advise caution with premature app releases, since they can lead to clustered crashes. Rather than mass-releasing updates, they recommend testing first with users more prone to crashes, then expanding to the broader customer base if the update performs well.
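The staged-rollout approach the researchers recommend is commonly implemented by hashing each user into a stable bucket and shipping the update to a small cohort first. A minimal sketch, with illustrative bucket counts and thresholds:

```python
# Minimal staged-rollout sketch: deterministically bucket users, release to a
# small percentage first, and widen only once crash metrics look healthy.
# Bucket count and percentages are illustrative.
import hashlib

def rollout_bucket(user_id: str, buckets: int = 100) -> int:
    """Deterministically map a user to one of `buckets` rollout buckets."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % buckets

def in_rollout(user_id: str, percent: int) -> bool:
    """True if this user falls inside the first `percent`% of buckets."""
    return rollout_bucket(user_id) < percent

# Start with 5% of users; expand toward 100% as crash rates stay acceptable.
cohort = [u for u in ("alice", "bob", "carol", "dave") if in_rollout(u, 5)]
```

Because the bucket is derived from a hash of the user ID, each user stays in the same cohort across sessions, so crash metrics for the pilot group are comparable over time.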
Image: DIW-Aigen
Read next: Americans Waste 2 Hours Daily on Phones, Here’s What’s Stealing Their Focus!
by Arooj Ahmed via Digital Information World
ChatGPT Can’t Keep Up: Google Handles 373x More Traffic and Keeps Growing
According to a new analysis by SparkToro, AI chatbots and search engines are popular but still nowhere near displacing traditional Google Search. The analysis found that Google Search handles 373 times more traffic than ChatGPT, and its traffic has also grown year over year. Many users, marketers, and analysts claim AI chatbots and search engines are competing with Google Search, but the research shows Google remains firmly dominant.
The research also shows that ChatGPT's market share would still be under 1% even if it received 1 billion search-related queries daily. A Semrush study found that only about 30% of ChatGPT prompts fall into the traditional search category, while ChatGPT invokes search for roughly 46% of queries. Google handles about 14 billion queries per day, giving it a 93.57% market share; according to Google, it saw more than 5 trillion searches in 2024. ChatGPT, with an estimated 37.5 million traditional search-like queries per day, holds about 0.25% of the market. Yahoo has 1.35%, Microsoft Bing 4.10%, and DuckDuckGo 0.73% of the search market, which shows that AI chatbots like ChatGPT still trail far behind.
According to data from Datos, Google searches grew 21.64% from 2023 to 2024. Google CEO Sundar Pichai attributes part of the surge to AI Overviews, which many users now rely on. But heavy Google usage doesn't mean websites are getting more traffic or clicks: the analysis found that about 60% of Google searches ended without a click on any website, roughly 3 trillion zero-click searches in 2024.
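The share and zero-click figures above follow from simple division. A rough reconstruction, where the daily query volumes are the report's estimates and everything else is arithmetic:

```python
# Back-of-the-envelope reconstruction of the figures above. Query volumes are
# the report's estimates; shares follow from simple division.
google_daily = 14e9                  # Google queries per day
google_share = 0.9357                # Google's stated market share
total_market = google_daily / google_share   # implied daily search market

chatgpt_search_daily = 37.5e6        # ChatGPT's estimated search-like queries/day
chatgpt_share = chatgpt_search_daily / total_market
print(f"{chatgpt_share:.2%}")        # ≈ 0.25%

# Zero-click searches: 60% of Google's ~5 trillion 2024 searches.
zero_click = 0.60 * 5e12             # ≈ 3 trillion searches with no website click
```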
Read next: Google’s Secret to Staying on Top – 86.94% of Americans Still Use It Daily!
by Arooj Ahmed via Digital Information World
Wednesday, March 12, 2025
New Report Finds Only 4% of the Global Population Owns Bitcoin
According to a new report from River, a Bitcoin financial services company, only 4% of the world's population owns Bitcoin despite its growing popularity. In the US, 14% of individuals own Bitcoin, making America the country with the highest concentration of ownership and the highest adoption rate, while Africa has the lowest adoption rate at 1.6%. The study also highlights that Bitcoin constitutes 0.2% of global wealth. Its total addressable market is estimated at $225 trillion, assuming it captures 50% of store-of-value assets.
River's report says Bitcoin has achieved only 3% of its maximum adoption potential, meaning adoption is still in its early stages. Developed countries are more open to Bitcoin than developing ones. The 3% figure was calculated from both individual and institutional ownership. Bitcoin has even become a US government reserve asset, but plenty of hurdles still stand in the way of mass global adoption.
The main obstacles to mass adoption are technical complexity and a lack of financial education. Misconceptions abound, with many people dismissing Bitcoin as a Ponzi scheme or a scam. Digital currencies are also highly volatile, which suits short-term traders but not daily transactions. That volatility hits developing countries hardest, pushing them toward US-dollar stablecoins for lower transaction fees and stability.
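The report's headline percentages translate into concrete numbers with simple arithmetic. The population figures below are rough outside assumptions (world ≈ 8 billion, US ≈ 335 million); the percentages and TAM come from the report:

```python
# Quick arithmetic behind the ownership figures above. Population counts are
# rough assumptions; the percentages and TAM are the report's.
world_pop = 8e9                        # assumed world population
us_pop = 335e6                         # assumed US population

global_holders = 0.04 * world_pop      # 4% of the world: ~320 million people
us_holders = 0.14 * us_pop             # 14% of the US: ~47 million people

tam = 225e12                           # report's total addressable market ($)
store_of_value = tam / 0.50            # implied store-of-value pool: $450T
```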
Read next: AI Search Is Lying to You, And It’s Getting Worse
by Arooj Ahmed via Digital Information World
AI Search Is Lying to You, And It’s Getting Worse
Facts matter. Trust matters. But in the race to reinvent search, both are getting trampled. A recent Columbia Journalism Review study reveals a hard truth — machines, built to deliver answers in an instant, are often serving up fiction with a straight face. Instead of guiding users to reliable sources, search engines now deal in confidence, not accuracy, replacing verifiable facts with AI-generated guesswork. The promise was a smarter way to find information; the reality is a flood of misinformation, dressed up as truth, delivered without a second thought.
The study highlights a growing issue with AI search tools scraping online content to generate responses. Instead of directing users to the original sources, these systems often provide instant answers, significantly reducing website traffic. A separate, unrelated study also found that click-through rates from AI-generated search results and chatbots were substantially lower than those from Google Search. The situation becomes even more problematic when these AI tools fabricate citations, misleading users by linking to non-existent or broken URLs.
An analysis of multiple AI search models found that over half of the citations generated by Google’s Gemini and xAI’s Grok 3 led to fabricated or inaccessible webpages. More broadly, chatbots were found to deliver incorrect information in more than 60% of cases. Among the evaluated models, Grok 3 had the highest error rate, with 94% of its responses containing inaccuracies. Gemini fared slightly better but only provided a fully correct answer once in ten attempts. Perplexity, though the most accurate of the models tested, still returned incorrect responses 37% of the time.
The study's authors noted that multiple AI models appeared to disregard the Robots Exclusion Protocol, a standard that allows websites to restrict automated content scraping. This disregard raises ethical concerns about how AI search engines collect and repurpose online information. Their findings align with a previous study published in November 2024 that examined ChatGPT's search capabilities, revealing consistent patterns of confident but incorrect responses, misleading citations, and unreliable information retrieval.
Experts have warned that generative AI models pose significant risks to information transparency and media credibility. Critics such as Chirag Shah and Emily M. Bender have raised concerns that AI search engines remove user agency, amplify bias in information access, and frequently present misleading or toxic answers that users may accept without question.
The study analyzed 1,600 queries to compare how different generative AI search models retrieved article details such as headlines, publishers, publication dates, and URLs. The evaluation included ChatGPT Search, Microsoft Copilot, DeepSeek Search, Perplexity along with its Pro version, xAI's Grok-2 and Grok-3 Search, and Google Gemini. The models were tested using direct excerpts from ten randomly selected articles from each of 20 different publishers. The results underscore a significant challenge for AI-driven search, showing that despite their growing integration into digital platforms, these tools still struggle with accuracy and citation reliability.
Read next:
• How to Increase Subscribers on YouTube?
• Social Media Users Unknowingly Participate in Marketing Experiments, Research Reveals
• Engagement Trends Show Threads Growing, X’s Virality Strength, and Bluesky’s Slowdown
by Arooj Ahmed via Digital Information World
Engagement Trends Show Threads Growing, X’s Virality Strength, and Bluesky’s Slowdown
BufferApp analyzed 1.7 million posts from X, Threads, and Bluesky and found that these three platforms have a common median engagement rate, that is four interactions per post. This may tell us that these platforms are similar in terms of engagements, but that isn’t the case because they have different patterns, dynamics, audience behavior, and consistency when it comes to posts. A data scientist for Buffer analyzed posts from 56,000 users to see the trends on X, Threads, and Bluesky. Engagements mean the total number of reactions a post receives, which can include likes, comments, and reposts.
The study highlights that posts on Threads tend to earn higher engagement, though some data shows X posts matching them. Engagement rate is the percentage of people who interact with a post (by liking, commenting, and so on), while total engagements count all interactions. Engagement rate tells you how well a post resonates with its audience; total engagements reflect overall interaction on the platform.
In 2024, posts on X, Threads, and Bluesky all had a median of four engagements per post. By February 2025, Threads posts had risen to a median of five engagements, X stayed at four, and Bluesky slipped to three. The differences look small, but they show each platform developing its own distinct identity.
Median engagement shows how a typical post performs but says nothing about viral content. The gap between median and average engagement reveals virality: X averages 328 engagements per post, Threads 58, and Bluesky 21. X's standard deviation exceeds 5,000, meaning it is highly unpredictable, while Threads and Bluesky see lower but more consistent engagement. High standard deviation signals strong viral potential; low standard deviation signals predictable engagement.
Because of all these factors, X is the platform with the most viral potential: despite a median of four engagements, a post that takes off can reach extreme levels of virality. Threads has moderate engagement but is stabilizing quickly; virality there is more random, but audience growth is steadier. Bluesky has a small engagement spread and leans toward community interaction rather than viral reach.
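The median/average/standard-deviation distinction the analysis relies on is easy to see with Python's statistics module: a single viral outlier barely moves the median but inflates the mean and the deviation. The sample numbers below are made up for illustration:

```python
# Why median and mean diverge on viral platforms: one outlier post barely
# moves the median but drags the mean and standard deviation upward.
# The engagement counts are illustrative, not Buffer's data.
from statistics import mean, median, pstdev

engagements = [4, 3, 5, 2, 4, 6, 3, 4, 5000]   # one viral post among typical ones

print(median(engagements))          # 4   -> what a typical post gets
print(round(mean(engagements)))     # 559 -> dragged up by the outlier
print(pstdev(engagements) > 1000)   # True -> high spread = viral-prone platform
```

This is exactly the pattern the study describes for X: a median of four, an average of 328, and a standard deviation over 5,000.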
Read next: Even with Reduced Expectations for Ratings, Consumers Actively Contribute Reviews on Google and Social Media
by Arooj Ahmed via Digital Information World
OpenAI is Rolling Out New Responses API Tool That Can Search Through Large Volumes of Online Data
The future of AI includes chatbots and agents, and the makers of ChatGPT want to help developers build their own.
The company is releasing a new Responses API that offers developers building blocks for agents that can comb through huge volumes of online data and carry out tasks on a computer, so the user doesn't have to.
According to the head of Deep Research and Operator, OpenAI can build some agents itself, but because the internet is so complex, many industries and use cases need a foundation on which developers can build efficient agents tailored to their own needs.
The new tool builds in web search using the same model ChatGPT relies on when searching, giving developers real-time data and citations from the internet while using GPT-4o and GPT-4o mini. It also includes a computer-use feature, based on the company's own Operator model, that users can authorize to perform tasks on their behalf.
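A request enabling the web search tool might look something like the following. This is a hedged sketch based on OpenAI's announced shape; the field names follow public examples but may differ from the final API, and no request is actually sent here:

```python
# Hypothetical shape of a Responses API call with web search enabled, built as
# a plain payload for illustration. Field names follow publicly shown examples
# and may not match the final API exactly; nothing is sent over the network.
request = {
    "model": "gpt-4o",
    "tools": [{"type": "web_search_preview"}],   # enable real-time web search
    "input": "What did OpenAI announce about the Responses API?",
}

# With the official client this would be roughly:
#   client.responses.create(**request)
# and the response would include citations alongside the generated answer.
```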
The goal is to support agents built for tasks like customer service, where they can search FAQs, or legal work, where they can dig up old cases.
In other news, the AI giant shared its Agents SDK, which it describes as a way for developers to orchestrate AI agent workflows. Several agents can work as a unit to solve even the most difficult tasks, making it much simpler for developers to manage agents and keep them working toward a single goal.
The Responses API and Agents SDK build on tools the company previously rolled out to developers, such as the Chat Completions API, which lets developers design AI tools that reply to user queries. Along the same lines, OpenAI plans to retire the Assistants API in favor of the new offering by the middle of next year, having folded in many key improvements based on developer feedback.
Read next: Hidden Threat: Even One Breath in These Cities Could Be Life-Threatening
by Dr. Hura Anwar via Digital Information World