Tech giant Meta has confirmed that an alert sent out to Facebook pages, asking them to confirm their page was not aimed at kids under 13, was a mistake.
Facebook’s parent firm acknowledged that the alert was the result of a bug, which has now been fixed. The alert worried many of the page managers who saw it. Reddit user Wocky-Slush-Jo-Mama was among the first to share a picture of the alert as it appeared on a page.
Image: u/Wocky-Slush-Jo-Mama
The image showed Meta asking whether the page was directed at kids, requesting that the page manager, like many others, confirm by September 30 that the page was not meant for those under 13. Clicking on the alert brought up more information about it.
The entire process seemed designed to obtain more explicit agreement from pages that they are not directed toward youngsters. This would give the social media giant the liberty to remove any page it felt was aimed at minors. Since page owners were sent reminders and would need to agree directly, it looked like an enforcement effort to keep kids safe on Meta’s apps.
Meta confirmed that it was aware of the bug and has fixed it. The company is experimenting with alerts to make sure pages comply with the Terms of Use, which bar people under 13 from using the platform; in this case, it added, the alerts were sent in error.
So it was a false alarm, and there’s no reason to worry right now. Meta says users should no longer see the alert on the app, and those who do shouldn’t be concerned. Still, it appears the tech giant might look to confirm these details in the future.
That would mean pages whose content ends up targeting minors might need to rethink their whole approach. It’s already part of Meta’s policies that minors shouldn’t be on the app, and that pages cannot produce content targeting them.
Read next: New Study Shows AI Cannot Be Trusted for News as It Lacks Accuracy
by Dr. Hura Anwar via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Tuesday, February 18, 2025
New Study Shows AI Cannot Be Trusted for News as It Lacks Accuracy
According to a new study by the British Broadcasting Corporation (BBC), AI assistants often provide users with inaccurate and misleading news, which can have drastic effects. BBC journalists asked AI chatbots, including Copilot, ChatGPT, Perplexity, and Gemini, 100 questions about current news and asked them to cite BBC articles as their sources. The results showed that 51% of the AI responses had significant issues, while 91% had at least some issues. 19% of the responses that cited BBC content introduced incorrect statistics and dates, while 13% of the quotes attributed to BBC articles were fabricated or altered. The AI assistants also struggled to differentiate between facts and opinions and often failed to provide context.
This shows that AI assistants shouldn’t be relied on for news, because their hallucination and misinformation problems can mislead audiences. In one response, Google Gemini stated that the NHS advises people not to start vaping, when the actual article reported that the NHS recommends vaping as a way to quit smoking. Other responses gave inaccurate information about political leaders and TV presenters.
The study matters because people need to be able to trust news, wherever it comes from, including AI assistants. Some people prefer human-led journalism over AI, while others said they partly trust news from AI. Accuracy, in other words, matters most to people, so human review remains essential even when AI is used. AI also often lacks context, which can make it misleading and problematic as a news source.
Image: DIW-Aigen
Read next: New Study Shows LLMs are Good At Generalizing on their Own Without Human Input
by Arooj Ahmed via Digital Information World
Monday, February 17, 2025
New Data Shows There Has Been an Increase in YouTube Videos Being Cited on AI Overviews
According to new data from BrightEdge, citations of YouTube videos in Google AI Overviews have grown by 25.21% since the start of 2025. Most of the YouTube citations in AI Overviews relate to the healthcare industry, which accounts for 41.97% of them. Google says that less than 1% of YouTube video views come from search, yet YouTube videos are still performing well in AI Overviews. Most of the cited videos are step-by-step tutorials, visual demonstrations, and product comparisons. If you want your videos cited in Google AI Overviews, align your video content with your SEO strategy to improve its chances of being featured.
Citations of instructional content rose 35.6%, followed by a 32.5% increase for visual demonstrations, such as queries about physical techniques. Examples/verification content, such as visual proof and product comparisons, grew 22.5%, while news and live coverage content grew 9.4%.
By industry, YouTube citations in AI Overviews were most common for healthcare queries (41.97%), followed by eCommerce (30.87%) and B2B tech (18.68%). Finance (9.52%) and travel (8.65%) queries also surfaced YouTube videos in AI Overviews.
Read next: Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
by Arooj Ahmed via Digital Information World
New Study Shows LLMs are Good At Generalizing on their Own Without Human Input
According to a new study by the University of Hong Kong and the University of California, large language models generalize better and find better solutions when they are left to work out tasks on their own. The study challenges the belief that large language models need carefully crafted training examples before they can generalize. Many large language models undergo supervised fine-tuning (SFT), in which a model is trained on a large set of handcrafted examples after being trained on raw data. After SFT, a model typically goes through reinforcement learning from human feedback (RLHF), where it learns about human preferences and which responses humans like best.
SFT guides a model’s behavior, but gathering data for it is costly and takes a lot of time and effort, so developers have begun applying reinforcement learning approaches to large language models, giving a model a task and letting it learn without handcrafted examples. One of the most prominent examples is DeepSeek-R1, which uses reinforcement learning to master complex reasoning tasks.
One of the biggest problems in training LLMs is overfitting, where a model performs well on training data but cannot generalize when given unseen examples. During training, a model can give the impression that it has learned a task completely when it has merely memorized the training set. Because it is hard to tell memorization apart from generalization in complex AI models, the new study compared RL and SFT training of large language models on textual and visual reasoning tasks.
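As a toy illustration of that distinction (not from the study), compare two "models" of a simple addition task: one memorizes the training pairs, the other has learned the underlying rule. Both look identical on the training data; only unseen inputs tell them apart:

```python
# Toy illustration of memorization vs generalization (not from the study).
# The task is to add two numbers; only three training pairs are seen.
train = {(2, 3): 5, (4, 1): 5, (7, 2): 9}

def memorizer(x):
    """Perfect on the training set, clueless elsewhere (overfitting)."""
    return train.get(x)

def generalizer(x):
    """Has learned the underlying rule, so unseen pairs work too."""
    return x[0] + x[1]

print(memorizer((2, 3)), generalizer((2, 3)))  # 5 5    -> both look trained
print(memorizer((5, 6)), generalizer((5, 6)))  # None 11 -> only one generalizes
```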
In the experiment, the researchers used two tasks. The first, GeneralPoints, assesses the arithmetic reasoning of LLMs: the model is given four cards and asked to combine them to reach a specific target number. The researchers trained the models on one set of rules and then tested them with a different rule to measure rule-based generalization. They also evaluated the LLMs on differently colored cards to assess their visual generalization.
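To make the GeneralPoints setup concrete, below is a minimal, hypothetical sketch of the kind of verifiable reward such a task can use. The function names and the target value of 24 are illustrative assumptions, not taken from the paper's code:

```python
# Sketch of a verifiable reward for a GeneralPoints-style task: the model
# proposes an arithmetic expression over four card values and earns reward
# only if it uses every card exactly once and hits the target number.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(node):
    """Evaluate a parsed expression limited to numbers and + - * /."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    raise ValueError("disallowed expression")

def leaf_numbers(node):
    """Collect the literal numbers used in the expression."""
    if isinstance(node, ast.Constant):
        return [node.value]
    if isinstance(node, ast.BinOp):
        return leaf_numbers(node.left) + leaf_numbers(node.right)
    return []

def reward(expression: str, cards: list[int], target: int) -> float:
    """1.0 if the expression uses exactly the given cards and equals target."""
    try:
        tree = ast.parse(expression, mode="eval").body
        uses_all_cards = sorted(leaf_numbers(tree)) == sorted(cards)
        hits_target = abs(safe_eval(tree) - target) < 1e-9
        return 1.0 if (uses_all_cards and hits_target) else 0.0
    except (ValueError, SyntaxError, ZeroDivisionError):
        return 0.0

# A model output that reaches 24 from the cards 4, 7, 8, 8:
print(reward("(7 - 8 / 8) * 4", [4, 7, 8, 8], 24))  # 1.0
```

Because the reward checks the answer itself rather than matching a handcrafted demonstration, the model is free to discover its own solutions, which is the property the study credits for better generalization.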
The second task, V-IRL, tests models' spatial reasoning using realistic visual input. The tests were run on Llama 3.2 Vision 11B, and the results showed that reinforcement learning consistently improved performance on examples that were very different from the training data. This suggests RL is better at generalizing than SFT, though initial SFT training remains important for RL training to achieve desirable results.
Image: DIW-Aigen
Read next:
• Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
• As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
by Arooj Ahmed via Digital Information World
Sunday, February 16, 2025
Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
Google’s latest decision on online tracking has sparked backlash from privacy advocates, who argue that the move undermines user protection. The update, set to take effect on Sunday, enables advertisers to gather detailed information through a technique called fingerprinting. This method compiles signals from devices and browsers, such as network details and hardware specifics, allowing advertisers to build distinctive user profiles. Critics believe this change significantly reduces individual control over how personal information is accessed and utilized.
Google maintains that similar tracking mechanisms are already standard within the industry, asserting that it continues to promote ethical data usage. However, this policy shift contradicts its prior stance. In 2019, the company had denounced fingerprinting as a way to bypass user choice, calling it an unfair practice.
Explaining its reasoning, Google states that the way people interact with digital platforms has evolved, particularly through smart TVs, gaming systems, and other internet-connected devices. It argues that conventional tracking tools like cookies, which users can manage through permission settings, are becoming less effective. The company claims its new approach improves security while allowing businesses to navigate emerging digital spaces without compromising privacy.
Opponents argue that the policy grants Google and the broader advertising sector unchecked power over tracking methods that users cannot easily avoid. Martin Thomson, a lead engineer at Mozilla, warns that fingerprinting expands Google’s influence in targeted advertising, eroding privacy safeguards in the process. Unlike cookies, which can be blocked or deleted, fingerprinting works passively in the background, leaving individuals with limited options to prevent tracking.
This data collection method pulls information such as screen dimensions and language preferences, which are necessary for optimizing website displays. However, when merged with details like time zone, power status, and browser specifics, these factors form an identifiable pattern, making it easier to recognize and track individuals across the web. Previously, Google had blocked advertisers from using IP addresses for targeted marketing, but this new policy change removes that restriction, raising concerns among privacy-focused groups.
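To illustrate the mechanics the critics describe, here is a minimal, hypothetical sketch of how weak signals can be combined into one strong identifier. The attribute names and values below are illustrative assumptions, and real fingerprinting draws on many more signals (canvas rendering, installed fonts, audio stack, and so on):

```python
# Minimal sketch: combining weak browser/device signals into one identifier.
# All attribute names and values here are hypothetical examples.
import hashlib

def fingerprint(signals: dict[str, str]) -> str:
    """Hash a stable, ordered view of the signals into a single ID."""
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# No single value below identifies a user, but together they can
# single out one browser among millions -- no cookie required.
visitor = {
    "screen": "2560x1440",
    "language": "en-GB",
    "timezone": "Europe/London",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "ip": "203.0.113.42",
}
print(fingerprint(visitor))  # same signals -> same ID on every visit
```

Because the identifier is recomputed from ambient properties on every visit, clearing cookies does nothing to reset it, which is why critics say users are left with so few ways to prevent this kind of tracking.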
Lena Cohen, a technology specialist at the Electronic Frontier Foundation, believes this decision signals Google's prioritization of financial interests over consumer protection. She warns that while the company presents fingerprinting as a necessary tool for digital advertising, it also increases exposure to third-party entities such as data brokers, surveillance firms, and law enforcement agencies. Privacy activists argue that fingerprinting strips individuals of meaningful control, making it significantly harder to manage their online footprint.
Even within the advertising sector, some professionals question the ethical impact of this change. Pete Wallace, an executive at advertising technology firm GumGum, describes fingerprinting as an ambiguous practice that operates in a regulatory gray area. He suggests that the industry was gradually shifting toward stronger privacy safeguards, making this policy reversal concerning. His company, which has collaborated with major media organizations on advertising strategies, instead relies on contextual marketing, an approach that analyzes webpage content rather than tracking user-specific data. He believes Google’s decision shifts the balance of power in advertising, favoring corporate data collection over individual privacy. While he hopes businesses recognize the risks associated with fingerprinting, he anticipates that many will adopt the technique to refine their ad targeting strategies.
Online advertising remains the backbone of the internet’s economic structure, allowing platforms to offer free access to content. However, this business model often forces users to relinquish privacy in exchange for digital services. Regulators have started paying closer attention, with the UK’s Information Commissioner’s Office (ICO) voicing concerns about the impact of fingerprinting on consumer autonomy. The agency warns that fingerprinting severely restricts user choice while diminishing transparency regarding data collection practices.
Stephen Almond, a senior official at the ICO, has criticized the move as irresponsible, stating that advertisers must now prove how fingerprinting complies with legal data protection requirements. He argues that this approach contradicts broader efforts to enhance user privacy, placing greater responsibility on businesses to justify their tracking practices.
In response to the criticism, Google reaffirmed its commitment to discussions with regulators, including the ICO. The company insists that data signals such as IP addresses have long been utilized within the industry and asserts that its implementation remains controlled, particularly in fraud prevention. Google maintains that individuals still have a say in whether they receive customized advertisements and emphasizes its intention to encourage ethical data handling across the sector.
Image: DIW-Aigen
Read next:
• Meta’s AI Studio Sparks Concern Over Hyper-Sexualized and Minor-Resembling AI Characters
• AI Adoption Surges: 60% of Americans Use AI Weekly, ChatGPT Leads With 77.97% Usage
• As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
by Asim BN via Digital Information World
As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
According to a new study by China’s Peking University, ChatGPT is becoming biased in its political views, showing a shift to the right. AI models are supposed to be unbiased in all of their opinions, including political ones, but the research shows that newer ChatGPT models express more right-leaning viewpoints. Earlier research found that ChatGPT had a liberal bias, and while it still voices some left-leaning views, newer models show a rightward shift. The study’s authors reached this conclusion by testing ChatGPT with the Political Compass Test.
Many people may assume this change in ChatGPT’s viewpoints is mainly because Donald Trump was elected president once again, or because Big Tech is supporting conservatives in the new administration, but the researchers say it mostly stems from changes in the training data used to train ChatGPT models and in how political topics are filtered from that data. ChatGPT’s support for right-wing opinions could also be due to how users interact with it, since ChatGPT learns from its interactions with users. The rightward shift has been observed in both GPT-3.5 and GPT-4 models.
The researchers say the rapid change in ChatGPT’s political viewpoint isn’t in itself cause for concern, but it should be monitored continuously to see how it affects human decision-making. China’s DeepSeek also shows plenty of biases on some topics, so it is not unusual for xAI’s models and ChatGPT to have biases of their own.
Image: DIW-Aigen
Read next: 12 Phishing Attacks a Day – Are Companies Ignoring the Growing Threat of Cyber Destruction?
by Arooj Ahmed via Digital Information World
AI Adoption Surges: 60% of Americans Use AI Weekly, ChatGPT Leads With 77.97% Usage
According to a study by an AI company called Listening, most Americans say AI tools are making their lives easier, with three in five respondents saying their quality of life has improved considerably since they started using AI. One in six respondents also said they depend on AI in some way and could no longer live without it. The study didn’t disclose the total number of participants, but its main purpose was to find out which US states have the most AI users, and the survey was conducted in every US state.
According to the study, 77.97% of Americans use ChatGPT, making it the most used AI tool in the US, followed by Google Translate (44.89%) and Gemini (33.23%). Respondents said the most common purposes they use AI tools for are writing and editing (62.77%), online search (61.47%), and summarizing text (42.77%), with brainstorming (39%) and generative art (32.09%) also among the top uses.
60% of respondents said they use AI at least once a week, while one third said their AI use has increased considerably in the past year. The states where Americans are most reliant on AI are Oregon, Florida, and Arizona; the least reliant are Missouri, Mississippi, and Rhode Island. Reliance was measured on a 0-100 score for each state, with higher average scores indicating greater reliance. The results show that AI use isn’t concentrated only in the Northeast or on the West Coast; it is widespread across the country.
Take a look at the charts below for more insights:
Read next: Meta’s AI Studio Sparks Concern Over Hyper-Sexualized and Minor-Resembling AI Characters
by Arooj Ahmed via Digital Information World