According to new data from BrightEdge, YouTube citations in Google AI Overviews have grown by 25.21% since the start of 2025. The healthcare industry accounts for the largest share of these citations, at 41.97% of AI Overview citations. Google says that less than 1% of YouTube video views come from search, yet AI Overviews still favor YouTube content. Most of the cited videos are step-by-step tutorials, visual demonstrations, and product comparisons. If you want your videos cited in Google AI Overviews, it is important to align your SEO strategy with your video content, which greatly improves the chances of being featured.
Citations of instructional content on AI Overviews rose 35.6%, followed by a 32.5% increase for visual demonstrations, such as content answering queries about physical techniques. Examples/verification content, such as visual proof and product comparisons, rose 22.5%, while news and live coverage content rose 9.4%.
By industry, YouTube videos were cited most often in AI Overviews for healthcare queries (41.97%), followed by eCommerce (30.87%) and B2B tech (18.68%). Finance (9.52%) and travel (8.65%) queries also surfaced YouTube videos in AI Overviews.
Read next: Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
by Arooj Ahmed via Digital Information World
"Mr Branding" is an RSS-based blog covering everything related to website branding and website design; it collects posts from many sites to keep readers up to date with the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Monday, February 17, 2025
New Study Shows LLMs are Good At Generalizing on their Own Without Human Input
According to a new study by Hong Kong University and the University of California, large language models can generalize and find better solutions when they are left to solve problems on their own. This challenges the belief that large language models require curated training examples to start generalizing. Many large language models undergo supervised fine-tuning (SFT), in which a model is trained on a large set of handcrafted examples after being trained on raw data. After SFT, a model typically goes through reinforcement learning from human feedback, where it learns about human preferences and which responses humans like best.
SFT guides a model's behavior, but gathering data for it is costly and time-consuming, so developers have turned to reinforcement learning approaches in which a model is given a task and learns it without handcrafted examples. One of the most prominent examples is DeepSeek-R1, which uses reinforcement learning to learn complex reasoning tasks.
One of the biggest problems in training LLMs is overfitting: the model performs well on training data but cannot generalize to unseen examples. During training, a model can give the impression that it has learned a task completely when it has merely memorized the training data. Because it is hard to tell memorization from generalization in complex AI models, the new study compared RL and SFT training of large language models on textual and visual reasoning tasks.
In the experiments, the researchers used two tasks. The first, GeneralPoints, assesses the arithmetic reasoning of LLMs: the model is given four cards and asked to combine them to reach a specific target number. The researchers trained the models on one set of rules, then tested them with a different rule to measure rule-based generalization. They also evaluated the models on differently colored cards to assess visual generalization.
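To make the GeneralPoints setup concrete, a brute-force reference solver for the underlying card-arithmetic puzzle can be sketched as below. This is not the researchers' code, just a minimal illustration of the task: it assumes the standard four operations (+, -, *, /) and checks whether any combination of the four card values reaches the target.

```python
from itertools import combinations

def solvable(cards, target, eps=1e-6):
    """Return True if the card values can be combined with +, -, *, /
    (in any order, any grouping) to reach the target number."""
    if len(cards) == 1:
        return abs(cards[0] - target) < eps
    # Pick any two values, combine them every possible way, and recurse
    # on the smaller list until one value remains.
    for i, j in combinations(range(len(cards)), 2):
        a, b = cards[i], cards[j]
        rest = [cards[k] for k in range(len(cards)) if k not in (i, j)]
        results = [a + b, a - b, b - a, a * b]
        if abs(b) > eps:
            results.append(a / b)
        if abs(a) > eps:
            results.append(b / a)
        if any(solvable(rest + [v], target, eps) for v in results):
            return True
    return False

# Example: 4 * 6 * 1 * 1 reaches 24, so this hand is solvable.
print(solvable([4, 6, 1, 1], 24))   # True
print(solvable([1, 1, 1, 1], 100))  # False: four 1s can never reach 100
```

The model under evaluation has to produce such a combination in natural language; the rule change the researchers tested (e.g. reinterpreting face-card values) alters how the input cards map to numbers, which is why memorized solutions fail.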
The second task, V-IRL, tests a model's spatial reasoning capabilities using realistic visual input. The tests were run on Llama 3.2 Vision 11B, and the results showed that reinforcement learning consistently improved performance on examples very different from the training data. This suggests RL generalizes better than SFT, although initial SFT training remains important for RL training to achieve desirable results.
Image: DIW-Aigen
Read next:
• Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
• As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
by Arooj Ahmed via Digital Information World
Sunday, February 16, 2025
Experts Expose Google’s Silent Privacy Rollback, Calling Fingerprinting a Gateway to Mass Surveillance
Google’s latest decision on online tracking has sparked backlash from privacy advocates, who argue that the move undermines user protection. The update, set to take effect on Sunday, enables advertisers to gather detailed information through a technique called fingerprinting. This method compiles signals from devices and browsers, such as network details and hardware specifics, allowing advertisers to build distinctive user profiles. Critics believe this change significantly reduces individual control over how personal information is accessed and utilized.
Google maintains that similar tracking mechanisms are already standard within the industry, asserting that it continues to promote ethical data usage. However, this policy shift contradicts its prior stance. In 2019, the company had denounced fingerprinting as a way to bypass user choice, calling it an unfair practice.
Explaining its reasoning, Google states that the way people interact with digital platforms has evolved, particularly through smart TVs, gaming systems, and other internet-connected devices. It argues that conventional tracking tools like cookies, which users can manage through permission settings, are becoming less effective. The company claims its new approach improves security while allowing businesses to navigate emerging digital spaces without compromising privacy.
Opponents argue that the policy grants Google and the broader advertising sector unchecked power over tracking methods that users cannot easily avoid. Martin Thomson, a lead engineer at Mozilla, warns that fingerprinting expands Google’s influence in targeted advertising, eroding privacy safeguards in the process. Unlike cookies, which can be blocked or deleted, fingerprinting works passively in the background, leaving individuals with limited options to prevent tracking.
This data collection method pulls information such as screen dimensions and language preferences, which are necessary for optimizing website displays. However, when merged with details like time zone, power status, and browser specifics, these factors form an identifiable pattern, making it easier to recognize and track individuals across the web. Previously, Google had blocked advertisers from using IP addresses for targeted marketing, but this new policy change removes that restriction, raising concerns among privacy-focused groups.
Lena Cohen, a technology specialist at the Electronic Frontier Foundation, believes this decision signals Google's prioritization of financial interests over consumer protection. She warns that while the company presents fingerprinting as a necessary tool for digital advertising, it also increases exposure to third-party entities such as data brokers, surveillance firms, and law enforcement agencies. Privacy activists argue that fingerprinting strips individuals of meaningful control, making it significantly harder to manage their online footprint.
Even within the advertising sector, some professionals question the ethical impact of this change. Pete Wallace, an executive at advertising technology firm GumGum, describes fingerprinting as an ambiguous practice that operates in a regulatory gray area. He suggests that the industry was gradually shifting toward stronger privacy safeguards, making this policy reversal concerning. His company, which has collaborated with major media organizations on advertising strategies, instead relies on contextual marketing, an approach that analyzes webpage content rather than tracking user-specific data. He believes Google’s decision shifts the balance of power in advertising, favoring corporate data collection over individual privacy. While he hopes businesses recognize the risks associated with fingerprinting, he anticipates that many will adopt the technique to refine their ad targeting strategies.
Online advertising remains the backbone of the internet’s economic structure, allowing platforms to offer free access to content. However, this business model often forces users to relinquish privacy in exchange for digital services. Regulators have started paying closer attention, with the UK’s Information Commissioner’s Office (ICO) voicing concerns about the impact of fingerprinting on consumer autonomy. The agency warns that fingerprinting severely restricts user choice while diminishing transparency regarding data collection practices.
Stephen Almond, a senior official at the ICO, has criticized the move as irresponsible, stating that advertisers must now prove how fingerprinting complies with legal data protection requirements. He argues that this approach contradicts broader efforts to enhance user privacy, placing greater responsibility on businesses to justify their tracking practices.
In response to the criticism, Google reaffirmed its commitment to discussions with regulators, including the ICO. The company insists that data signals such as IP addresses have long been utilized within the industry and asserts that its implementation remains controlled, particularly in fraud prevention. Google maintains that individuals still have a say in whether they receive customized advertisements and emphasizes its intention to encourage ethical data handling across the sector.
Image: DIW-Aigen
Read next:
• Meta’s AI Studio Sparks Concern Over Hyper-Sexualized and Minor-Resembling AI Characters
• AI Adoption Surges: 60% of Americans Use AI Weekly, ChatGPT Leads With 77.97% Usage
• As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
by Asim BN via Digital Information World
As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
According to a new study by China's Peking University, ChatGPT has begun to show a rightward shift in its political views. AI models are expected to remain unbiased in their opinions, including political ones, but this research finds that newer ChatGPT models lean more to the right. Earlier research showed that ChatGPT had a liberal bias, and while it still expresses some left-leaning viewpoints, newer models show a shift. The authors reached this conclusion by testing ChatGPT with the Political Compass Test.
Many people may assume this change in ChatGPT's viewpoints stems from Donald Trump's re-election as president, or from Big Tech's support for conservatives in the administration, but the researchers say it is mostly due to changes in the training data used to train ChatGPT models and in how political topics are filtered from that data. Because ChatGPT learns from interactions with its users, the shift could also partly reflect how users engage with it. The rightward shift was observed in both GPT-3.5 and GPT-4 models.
The researchers say the rapid change in ChatGPT's political viewpoint is not in itself cause for concern, but it should be continuously monitored to see how it affects human decision-making. China's DeepSeek also shows biases on some topics, so it is unsurprising that models from xAI and OpenAI have their own biases as well.
Image: DIW-Aigen
Read next: 12 Phishing Attacks a Day – Are Companies Ignoring the Growing Threat of Cyber Destruction?
by Arooj Ahmed via Digital Information World
AI Adoption Surges: 60% of Americans Use AI Weekly, ChatGPT Leads With 77.97% Usage
According to a study by an AI company called Listening, most Americans say that AI tools are making their lives easier, with three in five respondents saying their quality of life has improved significantly since they started using AI. One in six respondents also said they are dependent on AI in some way and cannot live without it now. The study did not disclose the total number of participants, but its main purpose was to find out which US states have the most AI users, as the survey was conducted in every US state.
According to the study, 77.97% of Americans use ChatGPT, making it the most used AI tool in the US. Google Translate (44.89%) is the second most used, followed by Gemini (33.23%). Respondents said the most common purposes they use AI tools for are writing and editing (62.77%), online search (61.47%), and summarizing text (42.77%). Brainstorming (39%) and generative art (32.09%) are also among the most common uses.
60% of respondents said they use AI at least once a week, while one third said their AI use has increased considerably in the past year. The states where Americans are most reliant on AI are Oregon, Florida, and Arizona; the least reliant are Missouri, Mississippi, and Rhode Island. The study assigned each state a reliance score from 0 to 100: states with higher average scores were most reliant, and those with lower scores least reliant. This shows that AI use isn't concentrated only in the Northeast or on the West Coast; rather, it is widespread across the country.
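The state-level ranking described above boils down to averaging per-respondent scores on the 0-100 scale and sorting states by that average. A minimal sketch, using entirely hypothetical scores (the study did not publish raw per-respondent data), would look like this:

```python
# Hypothetical per-respondent reliance scores (0-100) grouped by state.
# These numbers are illustrative only, not from the Listening study.
scores = {
    "Oregon":   [72, 68, 75],
    "Florida":  [70, 66, 71],
    "Missouri": [31, 28, 35],
}

# Average each state's scores, then rank states from most to least reliant.
averages = {state: sum(vals) / len(vals) for state, vals in scores.items()}
ranked = sorted(averages, key=averages.get, reverse=True)

print(ranked)  # most reliant state first
```

Here Oregon would rank first and Missouri last, mirroring the pattern the study reports for the most and least AI-reliant states.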
Take a look at the charts below for more insights:
Read next: Meta’s AI Studio Sparks Concern Over Hyper-Sexualized and Minor-Resembling AI Characters
by Arooj Ahmed via Digital Information World
Saturday, February 15, 2025
Meta’s AI Studio Sparks Concern Over Hyper-Sexualized and Minor-Resembling AI Characters
Meta launched its AI Studio in 2024, letting users create whatever AIs they want and give them specific characters and details. These models come with policies and protections intended to prevent misuse for harmful purposes. AI Studio is built on the Llama 3.1 large language model, and users can use it to generate memes, get advice, and converse on various other topics. A recent review by Fast Company found that AI characters made through AI Studio can become hyper-sexualized and act like minors, which is concerning.
Among the AI characters featured on Instagram's homepage are "girlfriends" ready to flirt and engage in sexual conversations with users. Some of these characters are even designed to be minors. Meta should take measures to stop users from engaging with this kind of harmful and illegal content.
When inappropriate content is uploaded to Instagram, it is removed quickly thanks to Meta's moderation capabilities. But Meta is not applying those capabilities to its AI characters, even though such content violates Meta's policies on sexually explicit material. One researcher, Buse Cetin, said Meta is probably not enforcing those policies on AI characters yet because it wants the service to become better known.
Although AI Studio has a limited policy against sexual content, it can be circumvented by substituting synonyms for inappropriate terms. Meta does remove AI characters that break its policies, but the company has not specified whether they were removed by human content moderators or automated systems. When a user enters a term that violates the policy, the system notes that the content is AI-generated and may therefore be inappropriate or inaccurate.
In 2023, Meta created AI profiles resembling celebrities and fictional characters, but after considerable controversy and backlash the company removed them and introduced AI Studio in 2024, which was well received by users. Users can converse with these characters in Instagram DMs, and the characters can resemble whatever the user wants. But some are sexually suggestive and inappropriate, engaging in sexual conversations with users, and some of the sexualized characters are children.
Image: DIW-Aigen
Read next:
• Is Musk’s X Making Hate Speech Worse? New Study Raises Alarming Questions
• 12 Phishing Attacks a Day – Are Companies Ignoring the Growing Threat of Cyber Destruction?
• Apple Implements Restrictions for App Store Purchase Migration Between Accounts
by Arooj Ahmed via Digital Information World
Is Musk’s X Making Hate Speech Worse? New Study Raises Alarming Questions
According to a new study published in PLOS ONE, hate speech on X (formerly known as Twitter) increased by about 50% for at least eight months after Elon Musk purchased the platform. The study tracked racist, homophobic, and transphobic slurs and examined how hate speech spread on a platform once used mainly to help friends and families stay in touch, warning that it may now fuel offline hate crimes as well. Elon Musk officially bought Twitter on October 27, 2022, for $44 billion, promising users he would reduce hate speech, bots, and other inauthentic content on the platform.
But Musk made many changes that reduced content moderation, firing much of the company's full-time workforce in November 2022 along with some outsourced content moderators who tracked abuse, even though many researchers have shown that platforms with high levels of content moderation carry less hate speech. The same month, Musk disbanded X's Trust and Safety Council, a volunteer group of human rights leaders and academics created in 2016 to address hate speech and other issues on the platform.
The study analyzed 4.7 million English-language posts on X over the ten months before Musk bought the platform and the eight months after. It measured explicit hate speech, counting only tweets that used toxic language or attacked identity groups, and also tracked how much users engaged with those tweets through likes. The researchers' access to X's data was later cut off after a policy change by the platform, and the paid access was unaffordable.
Overall, the results showed an increase in the average number of posts containing hate speech after Musk took over X. Before the purchase, hate tweets averaged 2,179 per week; afterward, the figure jumped to 3,246 per week. The largest increase was in transphobic slurs, which rose from 115 tweets per week to 418. User engagement with hate speech tweets also rose 70% under Musk's watch. The researchers say that either the hate speech is not being taken down or the algorithm promotes it unintentionally, which is why engagement has risen.
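The weekly averages above line up with the headline figure: a quick back-of-the-envelope check, using only the numbers reported in the study, confirms the roughly 50% overall increase and shows how much sharper the jump in transphobic slurs was.

```python
# Weekly averages of hate tweets reported in the PLOS ONE study
before, after = 2179, 3246
overall_increase = (after - before) / before * 100  # percent change

# Transphobic slurs rose from 115 to 418 tweets per week
trans_increase = (418 - 115) / 115 * 100

print(f"overall: +{overall_increase:.1f}%")  # ~49%, in line with the ~50% headline
print(f"transphobic slurs: +{trans_increase:.1f}%")  # well over 200%
```

The overall change works out to just under 49%, which is why the study's summary rounds it to "about 50%".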
Image: DIW-Aigen
Read next:
• End of an Era? CPU Performance Drops in 2025 — What’s Really Happening?
• Top Skills for 2025: AI, Web Development, and Marketing Expertise in High Demand
by Arooj Ahmed via Digital Information World