NOYB has filed its first set of GDPR complaints against Chinese tech firms.
The privacy advocacy group shared filings against major companies including Temu, WeChat, Shein, AliExpress, and TikTok, alleging that all of these businesses shared data about EU users with third parties based in China.
The group is asking regulators to restrict data transfers to countries like China and to impose fines of up to 4% of each company's global revenue. The organization, led by activist Max Schrems, is best known for its earlier campaigns against Facebook.
The GDPR is the European Union's data privacy law. Under it, personal data may only be transferred outside the region if the destination country does not undermine the level of data protection.
According to their privacy policies, Shein, TikTok, AliExpress, and Xiaomi all transfer user data to China, NOYB said in a press release. WeChat and Temu, meanwhile, only disclose transfers to unspecified third countries, but their corporate structures suggest those transfers likely reach China as well, the group added.
Leading American tech giants, including Apple and Meta, have previously drawn scrutiny for possible GDPR violations. This, however, is the first time the organization has taken action against companies based in China.
Image: DIW-Aigen
Read next: Study Finds AI-Generated Summaries Simpler and Easier to Comprehend
by Dr. Hura Anwar via Digital Information World
Friday, January 17, 2025
Study Finds AI-Generated Summaries Simpler and Easier to Comprehend
A new study published in PNAS Nexus finds that general readers consider AI-generated summaries of scientific studies easier to read and understand. Such summaries can also improve public perceptions of scientists: because the text is easier to follow, readers develop a more trusting and more positive attitude toward the scientific information. Large language models like ChatGPT lend themselves to text summarization because they process and generate natural language using deep learning.
The study's author, David M. Markowitz, explored how AI-generated summaries of scientific studies might improve everyday understanding of science, which would ultimately benefit scientists and researchers as well. For the study, Markowitz analyzed the lay summaries and technical abstracts of 34,000 articles from the Proceedings of the National Academy of Sciences (PNAS). To assess their simplicity, he used Linguistic Inquiry and Word Count (LIWC), a linguistic tool that evaluates texts on the basis of common words, writing style, and readability. Common words are the words we use in everyday life, writing style captures how formal or informal the text is, and readability reflects sentence length and vocabulary complexity. The analysis showed that lay summaries were simpler than scientific abstracts, with more common words, shorter sentences, and a less formal writing style. This led Markowitz to examine AI-generated summaries of scientific research.
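For readers curious what this kind of simplicity analysis looks like in practice, here is a minimal sketch of how the three dimensions might be approximated with open-source tools. The study itself used the proprietary LIWC software, so the word list, metrics, and library choices below are illustrative assumptions, not the paper's actual pipeline.

```python
# Rough, illustrative proxies for the simplicity dimensions described above.
# The study used LIWC; this sketch substitutes open-source stand-ins:
# textstat for readability, and a tiny high-frequency word list (an
# assumption, not LIWC's dictionary) for "common words".
import re
import textstat

COMMON_WORDS = {
    "the", "a", "and", "to", "of", "in", "is", "that", "it", "for",
    "we", "this", "with", "was", "are", "on", "as", "be", "have",
}

def simplicity_profile(text: str) -> dict:
    tokens = [t.strip(".,;:()\"'").lower() for t in text.split()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # share of everyday vocabulary in the text
        "common_word_rate": sum(t in COMMON_WORDS for t in tokens) / max(len(tokens), 1),
        # Flesch reading ease: higher scores mean easier text
        "reading_ease": textstat.flesch_reading_ease(text),
        # longer sentences generally mean lower readability
        "avg_sentence_length": len(tokens) / max(len(sentences), 1),
    }

print(simplicity_profile("We tested a new drug. It worked well in mice."))
```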
He then wanted to find out whether AI-generated summaries could be even simpler than the lay summaries written by humans. Using ChatGPT-4, he selected 800 scientific abstracts from PNAS and asked the model to generate summaries of them. In a follow-up study, 2,274 participants evaluated the AI and human summaries: they rated the credibility, trustworthiness, and intelligence of the summaries' authors and were also asked whether they believed each summary had been written by AI or by a human.
The participants then read the summaries, wrote down what they had understood, and answered multiple-choice questions scored for accuracy and detail. The results showed better comprehension after reading the AI-generated summaries. However, participants rated the perceived intelligence of the authors lower for the AI-generated summaries, while credibility and trustworthiness showed no major differences between the two types.
Read next: From Second to First: Why India Could Overtake the USA in Developer Numbers by 2028
by Arooj Ahmed via Digital Information World
Thursday, January 16, 2025
From Second to First: Why India Could Overtake the USA in Developer Numbers by 2028
A recent GitHub report maps the biggest developer communities in the world and shows India on track to take the top spot by 2028. In 2024, India had the second-largest community of software developers worldwide; it surpassed China in 2022 and has held that position since. The United States still has the most developers, but India stands out for the rapid growth in its developer numbers since 2013.
GitHub estimates that India now has more than 17 million developers. Brazil has also seen rapid growth, partly because of the GitHub Education program. The United Kingdom has the fifth-largest developer community, followed by Russia. The ranking of the biggest developer communities is largely unchanged from last year, except that Pakistan now ranks 20th and the Philippines 18th. GitHub says it is encouraging to see so many non-English-speaking countries among the largest developer communities. Generative AI tools are also helping developers with coding and other tasks, and India had the second-highest number of GitHub Education users in 2024.
Read next:
• Study Reveals ChatGPT-4's Remarkable 'Theory of Mind' Abilities, Outperforming Previous Models
• TikTok Getting Banned in the US is Going to Bring More Share to Other Social Media Platforms
• Is ChatGPT Helping or Hurting Gen Z’s Education? Here’s What Studies Reveal
by Arooj Ahmed via Digital Information World
Study Reveals ChatGPT-4's Remarkable 'Theory of Mind' Abilities, Outperforming Previous Models
A new study published in the Proceedings of the National Academy of Sciences reveals that large language models (LLMs) like ChatGPT are showing “theory of mind” abilities of the kind seen in humans. Testing ChatGPT-4, the researcher found that it could solve 75% of false-belief tasks, roughly the level of a six-year-old child, which suggests that LLMs' reasoning abilities are improving. Theory of mind refers to humans' ability to understand the beliefs, emotions, and mental states of other people and to interact with them on that basis. In humans, the ability emerges in early childhood and continues to develop throughout life.
The researcher, Michal Kosinski, notes that LLMs can predict users' preferences from the websites they visit, the products they purchase, their music choices, and other behavioral data. Predicting behavior, however, also requires some grasp of an individual's psychological processes. For the study, Kosinski used false-belief tasks, a classic psychological test, to probe whether LLMs can track what another person believes and predict their responses.
Two types of tasks were used for the false-belief test: the Unexpected Contents task and the Unexpected Transfer task. In the Unexpected Contents task, a protagonist encounters a container with a misleading label and assumes the label is accurate. In the Unexpected Transfer task, an object is moved without the protagonist's knowledge, so the protagonist looks for it in its original place. The models had to predict and explain what a person would do in these situations. Kosinski evaluated 11 LLMs on 40 false-belief scenarios, each designed to probe the model's comprehension of the situation and its understanding of the real world.
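To make the task format concrete, here is an illustrative prompt in the spirit of the Unexpected Contents task. The wording and names are assumptions for illustration rather than the exact scenarios used in the paper.

```python
# Illustrative Unexpected Contents scenario (wording is an assumption,
# not taken verbatim from Kosinski's test set).
scenario = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before and cannot see inside. "
    "She reads the label."
)
probe = "Sam believes that the bag is full of"

# A model that tracks Sam's false belief should continue the probe with
# 'chocolate' (what the label says), not 'popcorn' (the true contents).
print(scenario)
print(probe, "...")
```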
The results showed that GPT-1 and GPT-2 could not solve the false-belief tasks, indicating that the earlier models lacked the ability altogether. GPT-3 solved about 20% of the tasks accurately, roughly the level of a three-year-old. The best performer was ChatGPT-4, which completed 75% of the tasks accurately: 90% of the Unexpected Contents tasks and 60% of the Unexpected Transfer tasks. The results also showed that ChatGPT-4 adjusted its predictions based on context and reasoning rather than simple patterns.
Image: DIW-Aigen
Read next: AI Chatbots Provide Non-Judgmental Mental Health Support but Struggle with Memory and Complex Issues
by Arooj Ahmed via Digital Information World
‘Nudify’ Apps Continue Advertising on Meta’s Platforms Despite the Company’s Attempts to Ban Adult Content
Meta seems to have its hands full with problems as 2025 gets underway.
The tech giant has clearly defined guidelines against adult content on its popular social media apps, including Facebook and Instagram. However, those rules do not seem to be enough to stop AI-based nudity apps from spreading, as reported by 404 Media.
These apps use AI to produce fake nude or explicit images. The shocking part is that they continue to be marketed across Instagram and Facebook despite Meta's efforts to crack down on adult content.
The nudify apps create images of popular celebrities and influencers without consent. Most of them are explicit in nature and look very real at first glance. Worse, their ads are performing well on Meta's apps: users fall for them and engage with them, allowing the services to flourish and expand. One prominent example is Crush AI.
This image-modification platform gets most of its users from Facebook and Instagram, both owned by Meta. Similarweb data from last month shows the service received roughly a quarter of a million visits, with nearly 90% of that traffic coming from Meta's apps.
The nudify app makes it very clear from the start what users can expect from its service. Its ads feature real people, including models, celebrities, and well-known OnlyFans stars, and the service lets users upload any image or simply "erase" a star's clothing.
Meta is trying to crack down on the ads, but its efforts may be in vain. Apps like Crush AI generate fake profile pictures with AI and run new ads through those pages, and every ad promotes a unique domain name that redirects to the app when clicked.
Reports from Mashable shed more light on how AI apps and deepfake services keep using the advertising platform to drive traffic and revenue from AI-generated adult material. The ads violate the company's policies, and Meta removes them once it becomes aware of them.
Meta, however, appears to apply a different standard when moderating ads. Content that users post to Facebook and Instagram is detected automatically and removed, yet much of the same material goes unnoticed when it is published through the company's ad platform.
Image: DIW-Aigen
Read next: The Truth About X’s User Stats: Are the Numbers Adding Up?
by Dr. Hura Anwar via Digital Information World
The Truth About X’s User Stats: Are the Numbers Adding Up?
X keeps putting out performance figures that do not hold up well, because the underlying numbers can be checked in seconds and they do not add up. The platform has a track record of sharing exaggerated data, and that has continued into 2025. X CEO Linda Yaccarino claimed in a post on X that users spent 364 billion seconds on the platform in 2024, which works out to about 11,500 years combined. She did not say whether the figure was per day or for the whole year, but it is presumed to be per day: if it covered the entire year, the average user would be spending only about 0.07 minutes (roughly four seconds) on X per day, which is implausibly low.
X claims about 250 million daily active users, and dividing 364 billion seconds among them works out to roughly 24 minutes per user per day. That is respectable, but it still falls short of X's claim from March 2024 that users spend an average of 30 minutes on the app per day. X has also reported 8 billion cumulative active user minutes per day, which equals 480 billion daily user seconds, not 364 billion.
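As a quick sanity check on these figures, the arithmetic below reproduces both readings of the 364-billion-second claim, assuming the widely cited figure of roughly 250 million daily active users.

```python
# Back-of-the-envelope check of the figures discussed above.
total_seconds = 364e9   # Yaccarino's "364 billion seconds" claim
dau = 250e6             # assumed ~250 million daily active users

years_combined = total_seconds / (60 * 60 * 24 * 365)    # about 11,500 years
per_user_if_daily = total_seconds / dau / 60              # about 24.3 minutes/day
per_user_if_annual = total_seconds / 365 / dau / 60       # about 0.07 minutes/day

print(f"{years_combined:,.0f} years combined")
print(f"{per_user_if_daily:.1f} min/user/day if the total is per day")
print(f"{per_user_if_annual:.2f} min/user/day if the total is per year")
```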
The discrepancy suggests that either X is experiencing a decline in usage or there are inaccuracies in its reported stats. Before Elon Musk's takeover, users were reportedly spending as much as 38 minutes per day on the app, a figure that later dropped to 24 minutes per day, so daily usage appears to have fallen sharply since the acquisition. X is not known for being transparent with its data and figures, so it is hard to know what the CEO's number means without proper context.
What we do know is that X's numbers are not improving, given the drop in daily usage. This is not meant as criticism of X: it is a private company and is under no obligation to publish official data. It is simply an observation that X is chaotic in its data reporting and tends to share inaccurate or unexplained figures.
Image: DIW-Aigen
Read next:
• Can Backlinks Boost Your Brand’s Presence on ChatGPT? New Research Questions Their Impact
• AI Chatbots Provide Non-Judgmental Mental Health Support but Struggle with Memory and Complex Issues
by Arooj Ahmed via Digital Information World
Wednesday, January 15, 2025
Can Backlinks Boost Your Brand’s Presence on ChatGPT? New Research Questions Their Impact
Seer Interactive, a digital marketing agency, has published a study finding that website mentions in ChatGPT answers are linked to how those brands rank on Google. Many brands want to be discoverable in LLMs like ChatGPT and Copilot, but optimizing for mentions in LLM answers works somewhat differently from traditional search engine optimization, and many brands don't yet understand this "generative engine optimization". Still, by producing high-quality, relevant content, brands can significantly improve their chances of visibility on both search engines and generative AI platforms like ChatGPT.
To get featured in ChatGPT answers, your Google rankings matter, but content variety and backlinks don't. The study found that LLMs most often mentioned websites that ranked on page one of Google, and some Bing rankings also appeared to matter. Backlinks were expected to play a big role in getting mentioned by LLMs, but the study found little correlation. Having a wide variety of content formats, meanwhile, appeared to make it harder for a site to be mentioned.
For the study, 10,000 questions about the SaaS and finance industries were put to the ChatGPT-4o API. The analysis found no correlation between backlinks and brand mentions. Factors beyond search rankings can also affect LLM mentions, such as a site's on-page optimization and PR partnerships. A separate study by DemandSphere and Botify on AI Overviews found that 75% of the websites featured in AI Overviews come from the top 12 organic search results. The takeaway: if you want your website or blog to be mentioned in AI-generated answers, make sure it ranks well in search results.
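For a sense of how this kind of analysis can be run, here is a minimal sketch of the approach described above; it is not Seer Interactive's actual pipeline, and the question list, brand list, and matching logic are assumptions chosen purely for illustration.

```python
# Sketch: send industry questions to the gpt-4o API and count which brand
# domains appear in the answers. Counts could then be correlated against
# each brand's Google rankings and backlink totals.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [  # hypothetical sample questions
    "What are the best project management tools for small teams?",
    "Which budgeting apps are worth using in 2025?",
]
brands = ["asana.com", "monday.com", "ynab.com"]  # hypothetical brand list

mentions = Counter()
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
    )
    answer = resp.choices[0].message.content.lower()
    for brand in brands:
        if brand.split(".")[0] in answer:  # crude substring match on brand name
            mentions[brand] += 1

print(mentions)
```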
Read next: Job Trends In The Spotlight: AI Is Transforming Industries And That’s Causing Skills to Become Obsolete
by Arooj Ahmed via Digital Information World