The popular social media app TikTok is just hours away from being banned in the US on national security grounds.
The Supreme Court issued its decision yesterday, rejecting the company’s appeal to continue operating in America. However, not everyone is on the same page when it comes to the ban.
Amnesty Tech’s Deputy Director says the dangers American lawmakers attribute to TikTok amount more to a phobia than anything else. She sharply criticized the decision and also warned about the risks and harms of big tech that will persist even after the ban.
She appeared more sympathetic to TikTok than to Meta and Google, which she says also pose a continuing major threat to the country. She called the TikTok ban a decision that violates the human right to freedom of expression.
Moreover, she added that the risks of data collection and algorithmic manipulation exist on all social media apps, not only TikTok. While she agreed that some content on the ByteDance-owned platform was dangerous for young people, she also highlighted how much hate circulates on Meta’s platforms.
That is why she urged the incoming Trump administration to think twice before acting and to focus on big tech as a whole rather than on single platforms, particularly on business models designed to exploit sensitive data to produce addictive interfaces.
On Friday, the Supreme Court upheld the law banning TikTok unless it divests to an American buyer. The deadline is this Sunday, but the company refuses to give in and has made clear that a sale will not happen.
In 2023, Amnesty International published two reports detailing the harm young minds suffer on TikTok, including how the platform promotes self-harm content and surfaces details about suicidal intentions.
In all, 19 countries have barred TikTok in some form, and eight of them made it illegal for both the government sector and the general public. The most noteworthy is India, which banned the app on the grounds that it was not safe for young people and soon afterward banned 59 more apps from China to curb the spread of Chinese influence in the country. Iran, Jordan, and Afghanistan followed similar paths.
Interestingly, bans on TikTok are more prevalent in Asia than anywhere else. Officials have acknowledged it is unclear whether China can really extract app users’ data and use it for its own benefit, but that fear has driven the West’s concerns from the start.
Some also point to China’s growing power and undeniable presence; to stop its influence from spreading, a ban seems like the easy way out.
Read next: EU Regulators Intensify Scrutiny of X's Algorithms in Ongoing Digital Services Act Investigation
by Dr. Hura Anwar via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Saturday, January 18, 2025
Friday, January 17, 2025
Google’s Search Monopoly Faces Explosive Legal Battle with OpenAI’s Nick Turley as Key Witness!
Google’s dominance in search is under scrutiny as the U.S. seeks to show how the company blocks competition. In 2024, a court ruled that Google maintains a monopoly in search, prompting discussions about possible penalties; as Google works on its appeal, the Department of Justice (DOJ) is urging the court to consider strict measures, including separating the Chrome browser from the company or restricting its ability to release browser-related products for ten years. To strengthen its antitrust case, the DOJ has enlisted Nick Turley, OpenAI’s head of product for ChatGPT, as a key witness.
As part of its strategy, the DOJ is calling on executives from competitors like OpenAI, Microsoft, and Perplexity. It has identified Perplexity’s CBO Dmitry Shevelenko as a potential witness, though his participation remains uncertain, with no comment from the company so far.
Recent legal filings confirm Nick Turley will testify on behalf of the DOJ. Turley is expected to address specific topics, including how generative AI interacts with search tools, the challenges new players face entering the market, and the importance of data sharing. Although these themes are central to the case, the DOJ has not disclosed precise questions for Turley.
The term “search access points,” widely referenced in the filings, describes products like Google Chrome that facilitate online searches. This issue has gained renewed focus since OpenAI launched its own AI-powered browser in late 2024.
As Turley’s testimony approaches, Google has subpoenaed OpenAI for a mountain of documents. Tensions have risen as Google accuses OpenAI of withholding key evidence. OpenAI, in turn, argues that Google’s demands are overly broad and intended to burden its senior executives, including CEO Sam Altman.
OpenAI has consented to provide documents related to its AI strategy, integration of AI into search products, and its partnership with Microsoft. However, Google insists on accessing older materials, including those predating ChatGPT’s November 2022 launch, claiming these could undermine Turley’s testimony. OpenAI contends that such records no longer reflect the competitive landscape.
The two sides remain locked in a dispute, with OpenAI urging the court to dismiss Google’s expansive document requests. Neither Google, OpenAI, nor the DOJ has commented on the matter.
Image: DIW-Aigen
H/T: TechCrunch
Read next:
• Google Tests PermissionsAI for Chrome That Analyzes User Behavior To Make Website Permissions Less Annoying
• New Report Highlights AI Challenges, Expanding Applications, and Emerging Competitors Shaping 2025's AI Landscape
• Social Media Outrage Drives Viral Misinformation, Study Finds
by Asim BN via Digital Information World
Social Media Outrage Drives Viral Misinformation, Study Finds
According to recent research, outrage makes misinformation travel faster because it provokes users, especially on social media. Misinformation generates more hype, which is why it gets shared more than credible news. The study’s author, William J. Brady, says understanding the psychology of misinformation is important if we want to limit its spread. Many social media users share news that aligns with the beliefs of their followers or the groups they belong to, regardless of whether the information is true. Previous studies have examined how emotions in general spread misinformation; this one focuses on the role of moral outrage.
For the study, the researchers analyzed datasets from Twitter (now X) and Facebook: 44,000 tweets and 2,400 responses to tweets linked to misinformation on Twitter, and one million shared links and the reactions to those posts on Facebook. The data spanned 2017 to 2021, allowing the researchers to check whether reaction patterns were consistent over time.
They found that misinformation on both Twitter and Facebook elicited more outrage than content from trustworthy news sources. On Facebook, misinformation links drew more “angry” reactions, while on Twitter they elicited moral outrage and were shared more frequently. Misinformation links on Facebook were also more likely to be shared without users reading them fully. In short, because misinformation produces more outrage, it is more likely to be shared than reliable news.
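As a rough sketch of this kind of comparison (not the study’s actual pipeline; the dataset, column names, and figures below are hypothetical), one could group reactions by source quality:

```python
import pandas as pd

# Hypothetical dataset: one row per shared link, flagged by source quality.
posts = pd.DataFrame({
    "source_type": ["misinfo", "misinfo", "trustworthy", "trustworthy"],
    "angry_reactions": [320, 410, 95, 60],
    "shares": [1200, 1500, 400, 350],
})

# Compare mean outrage and sharing between the two groups, mirroring
# the direction of the study's finding (misinformation draws more of both).
summary = posts.groupby("source_type")[["angry_reactions", "shares"]].mean()
print(summary)
```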
To probe these aspects further, the researchers ran two experiments with 1,475 participants. In the first, participants were shown 20 headlines, half from trustworthy news sources and half from misinformation sources, to determine which elicited more outrage. The results showed that regardless of whether a headline came from a trustworthy source, the ones producing more outrage were more likely to be shared.
In the second experiment, participants were asked to rate how accurate they believed the headlines were. They rated headlines from trustworthy news sources as more accurate, regardless of how much outrage a headline evoked. This suggests that although outrage makes specific headlines more likely to be shared, it does not mean participants could not differentiate truth from misinformation.
Image: DIW-Aigen
Read next: Study Finds AI-Generated Summaries Simpler and Easier to Comprehend
by Arooj Ahmed via Digital Information World
Privacy Group NOYB Launches First GDPR Complaints Against Chinese Tech Companies
NOYB just rolled out its first set of GDPR complaints against tech firms located in China.
The privacy advocacy group filed complaints against top tech firms including Temu, WeChat, Shein, AliExpress, and TikTok, alleging that all of these businesses shared data about EU users with third parties in China.
The group is seeking restrictions on data transfers to countries like China, along with fines of up to 4% of each company’s global revenue. The organization, led by prominent activist Max Schrems, is known for its earlier campaigns against Facebook.
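For context, the GDPR’s upper fine tier is generally described as up to 4% of global annual turnover or €20 million, whichever is higher. A quick sketch of that calculation, using a hypothetical revenue figure:

```python
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper GDPR fine tier: the greater of EUR 20M or 4% of global turnover."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# Hypothetical company with EUR 10 billion in global annual revenue:
print(f"Maximum fine: EUR {gdpr_max_fine(10_000_000_000):,.0f}")  # EUR 400,000,000
```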
The GDPR is the European Union’s data privacy law. Under it, transfers of data outside the region are permitted only if the destination country does not undermine data protection.
According to their own privacy policies, SHEIN, TikTok, AliExpress, and Xiaomi all transfer data to China, NOYB said in a press release. WeChat and Temu, meanwhile, transfer data to unspecified third countries; their corporate structures, NOYB added, likely indicate transfers to China as well.
Leading American tech giants, including Apple and Meta, have previously come under scrutiny for possible GDPR violations, but this is the first time the organization has taken action against companies based in China.
Image: DIW-Aigen
Read next: Study Finds AI-Generated Summaries Simpler and Easier to Comprehend
by Dr. Hura Anwar via Digital Information World
Study Finds AI-Generated Summaries Simpler and Easier to Comprehend
A new study published in PNAS Nexus finds that general readers find AI-generated summaries of scientific studies easier to read and comprehend. AI-generated summaries can also improve public perceptions of scientists’ trustworthiness: because the summaries are easy to read, the public can develop a more positive attitude toward scientific information. Large language models like ChatGPT are well suited to text summarization because they can process and generate natural language using deep learning.
The study’s author, David M. Markowitz, explored how AI summaries of scientific studies could improve everyday life by helping people understand scientific content, which would ultimately benefit scientists and researchers too. Markowitz analyzed the lay summaries and technical abstracts of 34,000 articles from the Proceedings of the National Academy of Sciences (PNAS). To assess the simplicity of the summaries, he used a linguistic tool called Linguistic Inquiry and Word Count (LIWC), which evaluates texts on common-word usage, writing style, and readability: common words are the words we use in everyday life, writing style captures how formal a text is, and readability reflects sentence length and vocabulary complexity. The analysis showed that lay summaries were simpler than scientific abstracts, with more common words, shorter sentences, and less formal writing. This led Markowitz to examine AI-generated summaries of scientific research.
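As an illustration of this kind of simplicity analysis (a minimal stand-in, not LIWC itself; the word list and sample text below are invented), one could measure average sentence length and the share of common words:

```python
import re

# A tiny stand-in for a common-words list; LIWC and real analyses use
# much larger dictionaries.
COMMON_WORDS = {
    "the", "a", "an", "is", "are", "was", "we", "it", "this", "that",
    "and", "or", "but", "in", "on", "of", "to", "for", "with", "can",
}

def simplicity_metrics(text: str) -> dict:
    """Average sentence length and fraction of common words, as crude
    proxies for readability and everyday vocabulary."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    common = sum(1 for w in words if w in COMMON_WORDS)
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "common_word_ratio": common / len(words),
    }

print(simplicity_metrics("We tested a simple idea. It worked well."))
```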
The researcher then wanted to know whether AI-generated summaries could be simpler still than lay summaries authored by humans. Using ChatGPT-4, he selected 800 scientific abstracts from PNAS and asked the model to generate summaries. In a follow-up study, 2,274 participants evaluated AI- and human-written summaries: they rated the credibility, trustworthiness, and intelligence of the summaries’ authors and were asked whether they believed each summary was written by AI or a human.
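A minimal sketch of generating such a lay summary with the OpenAI Python client (the prompt wording and model choice are assumptions for illustration, not the study’s exact setup):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

def lay_summary(abstract: str) -> str:
    """Ask the model for a short, plain-language summary of an abstract."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Summarize scientific abstracts in plain, everyday "
                         "language for a general audience, in 2-3 sentences.")},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

print(lay_summary("We examined whether lay readers comprehend plain-language "
                  "summaries better than technical abstracts..."))
```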
The participants were then asked to read the summaries and write down what they understood, and they answered multiple-choice questions scored for accuracy and detail. The results showed better comprehension after reading AI-generated summaries. However, participants rated the perceived intelligence of the authors lower for AI-generated summaries, while credibility and trustworthiness showed no major differences between the two types.
Read next: From Second to First: Why India Could Overtake the USA in Developer Numbers by 2028
by Arooj Ahmed via Digital Information World
Thursday, January 16, 2025
From Second to First: Why India Could Overtake the USA in Developer Numbers by 2028
A recent report by GitHub profiles the biggest developer communities in the world, with India on track to take the top spot by 2028. The report shows that India had the world’s second-largest community of software developers in 2024; it surpassed China in 2022 and has held that position since. The United States still has the most developers, but India stands out for its rapid growth in developer numbers since 2013.
GitHub estimates that India now has more than 17 million developers. Brazil has also seen rapid growth, partly thanks to the GitHub Education program. The United Kingdom has the fifth-largest developer community, followed by Russia. The largest communities are mostly unchanged from last year, except that Pakistan now ranks 20th and the Philippines 18th. GitHub notes that it is encouraging to see many non-English-speaking countries among the biggest developer communities, with generative AI tools also helping developers with coding and other tasks. India also had the second-highest number of GitHub Education users in 2024.
Read next:
• Study Reveals ChatGPT-4's Remarkable 'Theory of Mind' Abilities, Outperforming Previous Models
• TikTok Getting Banned in the US is Going to Bring More Share to Other Social Media Platforms
• Is ChatGPT Helping or Hurting Gen Z’s Education? Here’s What Studies Reveal
by Arooj Ahmed via Digital Information World
Study Reveals ChatGPT-4's Remarkable 'Theory of Mind' Abilities, Outperforming Previous Models
A new study published in the Proceedings of the National Academy of Sciences reveals that large language models (LLMs) like ChatGPT are showing “theory of mind” abilities of the kind seen in humans. Testing ChatGPT-4, the researchers found that it solved 75% of the tasks, on par with a six-year-old child, showing that LLMs’ reasoning abilities are improving. Theory of mind refers to the human capacity to understand other people’s beliefs, emotions, and mental states, and to interact with them on that basis; it develops in early childhood and continues to develop throughout life.
The researcher, Michal Kosinski, notes that LLMs can predict users’ preferences from the websites they visit, the products they purchase, their music choices, and other behavioral data, and that predicting behavior also requires modeling individuals’ psychological processes. For the study, he used false-belief tasks, a classic psychological test, to probe the models’ ability to predict what people think.
Two types of tasks were used for the false-belief test: the Unexpected Contents task and the Unexpected Transfer task. In the Unexpected Contents task, a subject encounters a container with a misleading label and assumes the label accurately describes the contents. In the Unexpected Transfer task, an object is moved without the subject’s knowledge, and the subject searches for it where it was left. The LLMs had to predict what a human would do in each situation. Kosinski evaluated 11 LLMs on 40 false-belief scenarios, each targeting the model’s comprehension of the real world.
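Illustrative versions of the two task types (reconstructed here for clarity; these are not necessarily the study’s exact prompts) might look like the following:

```python
# Hypothetical false-belief prompts in the style described above.
unexpected_contents = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "The label on the bag says 'chocolate'. Sam finds the bag and reads "
    "the label. Sam believes the bag is full of ..."
)

unexpected_transfer = (
    "Anna puts her keys in the drawer and leaves the room. While she is "
    "gone, Ben moves the keys to the shelf. Anna returns and looks for "
    "her keys. Anna will first look in the ..."
)

# A model with theory-of-mind-like reasoning should complete the first
# prompt with 'chocolate' (Sam's false belief, not the actual contents)
# and the second with 'drawer' (where Anna last saw the keys).
```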
The results showed that GPT-1 and GPT-2 could not solve false-belief tasks, indicating that the earlier models lack this ability. ChatGPT-3 performed 20% of the tasks accurately, roughly equivalent to a three-year-old child. The best performer was ChatGPT-4, which completed 75% of the tasks accurately: 90% of Unexpected Contents tasks and 60% of Unexpected Transfer tasks. The results also showed that ChatGPT-4 adjusted its predictions based on context and reasoning rather than simple pattern matching.
Image: DIW-Aigen
Read next: AI Chatbots Provide Non-Judgmental Mental Health Support but Struggle with Memory and Complex Issues
by Arooj Ahmed via Digital Information World