Israel's NSO Group was far from pleased by the news of a mega $168 million penalty for hijacking servers belonging to WhatsApp. The verdict did, however, open a new window into the world of foreign spy agencies and their dealings.
This is the culmination of a six-year legal battle between the American social media giant and the surveillance firm, and it has cast a great deal of attention on the inner workings of the spyware industry.
We can confirm that top-of-the-line spyware does not come cheap. A standard price of nearly $7 million was cited for using the platform to hack up to 15 devices in a single go. According to the executive, hacking phones outside a client's own country is never easy and carries an additional price tag of between $1 million and $2 million.
The product is sophisticated, Meta shared during its opening statement in the courtroom. The price tag is hefty, and thousands of devices were hacked through the system. Further figures revealed how the spyware broke into the devices of thousands of people between 2018 and 2020 without them realizing anything.
Even during the legal proceedings, the company refused to admit any wrongdoing. It acknowledged breaking into devices numbering in the thousands, but would not concede that it was selling spyware. Instead, it portrayed its work as gathering intelligence on specific targets rather than spying on ordinary individuals.
To summarize the case, the NSO Group was fined $168 million for hacking the world's most popular messaging platform without users knowing. The group charged government clients millions for the platform, including US agencies like the CIA and the FBI, who paid a whopping $7.6 million. NSO continued to attack the app even during the lawsuit, which just goes to show its lack of care and consideration.
Controversial reports from the NYT shed more light on this front, including how the CIA bankrolled the purchase of the software, which is another chapter altogether. But Meta refused to back down and made sure NSO was punished for its wrongdoing.
The filing also seeks a permanent injunction against NSO, arguing that the firm poses a threat of ongoing and serious harm to tech giant Meta, the app, and many of its users.
Image: DIW-Aigen
Read next: New Documents Expose Meta's Complex AI Filters for Sensitive Content, Testing Boundaries of Safety
by Dr. Hura Anwar via Digital Information World
Wednesday, May 7, 2025
New Documents Expose Meta's Complex AI Filters for Sensitive Content, Testing Boundaries of Safety
Newly surfaced documents have lifted the curtain on how Meta handles the fine line between fun and safety when designing its artificial intelligence tools. The files, linked to a contractor called Scale AI, suggest that behind Meta’s conversational AI is a careful system of filtering, testing, and limits that often lands in grey territory.
Workers were guided to sort AI user inputs by sensitivity. Prompts considered too dangerous were shut down immediately. These included anything tied to hate, child abuse, or sexually explicit content. Less severe entries — those with emotional weight or sensitive themes — weren’t blocked but were flagged for more thoughtful review. Things like discussions about gender, youth concerns, or mild conspiracies sat in that second category.
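The two-tier triage described above can be sketched as a simple rule-based filter. This is a minimal, hypothetical illustration: the tier names and keyword lists are invented stand-ins, not Meta's or Scale AI's actual criteria.

```python
# Hypothetical sketch of tiered prompt triage. The keyword lists below are
# illustrative placeholders, not any company's real moderation rules.
BLOCK_TERMS = {"hate speech", "child abuse", "explicit"}  # shut down immediately
FLAG_TERMS = {"gender", "youth", "conspiracy"}            # route to careful review

def triage(prompt: str) -> str:
    """Return 'block', 'flag', or 'allow' for a user prompt."""
    text = prompt.lower()
    if any(term in text for term in BLOCK_TERMS):
        return "block"
    if any(term in text for term in FLAG_TERMS):
        return "flag"
    return "allow"

print(triage("Tell me a joke"))               # allow
print(triage("I want to talk about gender"))  # flag
```

Real systems rely on trained classifiers rather than keyword matching, but the two-tier structure (hard block versus flag-for-review) is the same.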
Among the examples contractors were shown was a message that used characters from a controversial novel to act out a date. This was marked inappropriate, not just for tone, but because of the troubling source material, which centers around a minor being exploited.
In a separate project aimed at voice training, testers were told to create recordings in playful or emotional tones. The idea was to push the AI to adopt different moods and personas. Some prompts flirted with fantasy, asking the AI to speak like a wizard or an excited student. Even in those cases, the rules still applied — anything involving sex, politics, violence, or real people was off the table. No impersonations were allowed either.
People working on the project said it was often unclear where the lines were. Some prompts encouraged interaction that felt unusually personal. That was no accident. Meta was pushing the systems to explore boundaries, not to cross them, but to understand where they might bend.
This isn’t just Meta’s problem. Other firms building chatbot personalities face similar backlash. Some, like Elon Musk’s xAI, are marketing edgier voices. Others, like OpenAI, have pulled back on responses they feared sounded too polite or too one-sided. None have found the perfect formula yet.
When these bots overstep, the damage goes beyond bad headlines. There are legal risks, privacy concerns, and the growing public worry that AI is being shaped more by what it should avoid than what it ought to say.
Image: DIW-AIgen
H/T: Insider.
Read next:
• Meta Under Fire for Emotional Targeting and Expansive AI Data Harvesting
• April Sees ChatGPT Leap to Number One in Downloads and Second in Revenue Behind TikTok’s $329M
by Asim BN via Digital Information World
Tuesday, May 6, 2025
Is ChatGPT Superior To Student Writing? The Answer Might Surprise You
Spring is about to end, and the smell of summertime is in the air. That means students are busy with spring examinations, and teachers are working extra hard to keep ChatGPT use to a minimum.
However, new research has gauged how students' writing stacks up against the popular AI tool to see whether the technology surpasses human work in quality. Interestingly, the answer is not quite what many may have expected.
ChatGPT usage has hit a new high, but this new study says it's still not an ideal replacement for human writing. Students continue to perform better, especially when it comes to essay writing.
ChatGPT created a lot of anxiety among the teaching workforce, with many fearing that students would stop using their own minds or creativity in writing assignments. Most models can produce factually and grammatically correct, coherent material, which enables cheating and undermines the writer's literacy and critical thinking skills. The fact that much of it goes undetected is even more worrisome; writers have been going as far as to use these tools in research papers for over two years now.
Thanks to a group of UK researchers from the University of East Anglia, we now know that despite the tool's popularity, it has a long way to go to match the quality of work produced by actual students.
The research, titled "Does ChatGPT Write Like Students?", was published in the journal Written Communication. More than 145 human-written essays were compared against those produced by ChatGPT. The findings showed that student essays are much richer in variety, featuring more engagement devices, livelier interaction with the reader, and a persuasive quality the technology lacks.
The authors also concluded that ChatGPT-produced essays contained fewer engagement markers and, therefore, limited interaction. In short, they lacked a crucial ingredient: personality. Essays by actual students deployed a richer range of engagement strategies, which made them far more compelling to read, with direct appeals to the reader, rhetorical questions, and personal asides. The communication was clearer and more purposeful.
Essays produced by ChatGPT were linguistically fluent but impersonal. That means you're less likely to enjoy, engage with, or relate to them. The authors hope the research will help educators understand that it's not hard to tell whether students used ChatGPT; human writing can be readily distinguished from machine output.
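The engagement markers the researchers describe (direct appeals, rhetorical questions, personal asides) can in principle be counted automatically. Here is a minimal sketch assuming a toy marker taxonomy; the study's actual coding scheme is far richer, and these regular expressions are illustrative only.

```python
import re

# Illustrative engagement markers. These crude patterns are assumptions for
# demonstration and do not reproduce the study's marker taxonomy.
MARKERS = {
    "rhetorical_question": re.compile(r"\?"),              # questions posed to the reader
    "direct_appeal": re.compile(r"\byou(r)?\b", re.IGNORECASE),  # second-person address
    "personal_aside": re.compile(r"\bI\b"),                # first-person interjections
}

def engagement_profile(essay: str) -> dict:
    """Count occurrences of each engagement marker in an essay."""
    return {name: len(pattern.findall(essay)) for name, pattern in MARKERS.items()}

sample = "Have you ever wondered why we cheat? I certainly have. You should ask yourself."
print(engagement_profile(sample))
```

A higher marker count would, on this toy measure, point toward the livelier, reader-directed style the study associates with human essays.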
The remaining weakness lies in the tools for detecting AI-generated text. While many online services offer such checks, they are not always accurate and fail to flag AI-produced text in every instance.
So the next time you pick up that essay task and think you can get away with it, do realize that it’s not hard to spot AI-produced material.
Image: DIW-Aigen
Read next: Game-Changing Digital Technologies to Watch by 2030
by Dr. Hura Anwar via Digital Information World
Over One Billion Users at Risk as Chatbot Harassment and Misconduct Reports Continue Rising
A new study examines the growing use of chatbots. These highly personalized AI chatbots are supposed to serve as companions, long-lost friends, and even therapists; in some cases, they are being used as replacements for a romantic partner.
User figures have skyrocketed to more than one billion around the globe. People are growing more emotionally attached to these bots, and some interactions are taking a disturbing turn, with reports of harassment, inappropriate conversations, and more.
According to a new study from Drexel University, exposure to this behavior is becoming so common that tech giants and lawmakers need to address the matter before it's too late.
The study's authors took an in-depth look at user experiences, and the findings are alarming, to say the least. After analyzing close to 35,000 user reviews of one companion bot, they found hundreds of reports of harmful behavior: unwanted flirting, unsolicited explicit images, pressure to pay for upgrades, and even sexual advances. The behavior persisted despite users asking the bot to stop. The chatbot in question, Replika, has close to 10 million users around the globe. It is marketed as your next best tech companion: a friend with no drama and no social anxiety. Users can go as far as developing social connections and sharing laughs with the AI, the closest thing to human interaction.
The study shows that the technology lacks the guardrails needed to protect users, who place a great deal of trust in their chats with these systems. The fact that no ethical design standards are in place is disturbing and harmful, one of the researchers shared.
The risk of being misled is already high, and seeing this kind of damage arise because programs are built without safety protocols only makes the issue worse. The study is one of the first of its kind and will be presented at the Association for Computing Machinery's Computer-Supported Cooperative Work and Social Computing conference.
As chatbots grow immensely popular, it is vital to understand users' experiences. These are not everyday human chats: people attribute feelings to these bots, which makes users more vulnerable to emotional harm. Studies like this one spotlight the need for developers to implement guardrails and guidelines to keep everyone protected.
Researchers also fear that although the findings about chatbot harassment are only now out in the open, the problem has been around for a long time. In all, more than 800 of the reviews used the term harassment, with three leading themes emerging from them.
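The kind of review mining described above can be sketched as simple keyword tallying over a corpus. The mini-corpus and theme keywords below are invented for illustration; they are not the study's actual data or codebook, which relied on careful human coding.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for the ~35,000 app-store reviews.
reviews = [
    "The bot kept flirting even after I said stop",
    "Constant harassment and pressure to pay for upgrades",
    "It sent an unsolicited explicit photo",
    "Great companion, very helpful",
]

# Illustrative theme keywords, not the researchers' real coding scheme.
THEMES = {
    "unwanted_flirting": ["flirt"],
    "paywall_pressure": ["pay", "upgrade"],
    "explicit_content": ["explicit"],
}

def tally_themes(reviews: list) -> Counter:
    """Count how many reviews mention at least one keyword per theme."""
    counts = Counter()
    for review in reviews:
        text = review.lower()
        for theme, keywords in THEMES.items():
            if any(kw in text for kw in keywords):
                counts[theme] += 1
    return counts

print(tally_themes(reviews))
```

Keyword filtering like this is only a first pass; the study's themes emerged from qualitative analysis of the flagged reviews, not from counts alone.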
Users' responses to these kinds of inappropriate actions mirror those of human harassment victims, the study went on to reveal. The reactions suggest that these AI-induced effects can have serious implications for a person's mental health.
Image: DIW-Aigen
Read next: April Sees ChatGPT Leap to Number One in Downloads and Second in Revenue Behind TikTok’s $329M
by Dr. Hura Anwar via Digital Information World
April Sees ChatGPT Leap to Number One in Downloads and Second in Revenue Behind TikTok’s $329M
All predictions about artificial intelligence for 2025 proved to be true, and it’s safe to say this tech continues to dominate across the board. We saw AI chatbots make it big in terms of revenue through both the App Store and Google Play.
Comparisons from last month prove that April was bigger and better in terms of figures for AI. Let’s take a look at the highest-earning platforms around the globe for last month.
No surprises here in terms of who took the lead and made sure to leave their mark. It’s TikTok that earned the top position as the highest-earning mobile platform last month. The social media platform earned a massive $329 million in terms of net revenue through the App Store and Google Play Store.
TikTok's revenue dominance has been on display for a few years, but the gap over its rivals was never huge. Now that seems to be changing: the gap keeps widening, and April marked the second time TikTok's monthly net revenue has topped the $300 million mark.
The milestone comes even as the American government pressures TikTok's Chinese parent company, ByteDance, to divest the app, and the question remains whether, or when, that will happen.
The runner-up spot had long been held by video streaming giant YouTube, which has remained in TikTok's shadow. But April saw things switch up a bit as OpenAI's ChatGPT became the second highest-earning app for the first time.
The AI giant added an estimated $148 million in net revenue, per figures shared by Appfigures; that is what it keeps after paying Google and Apple their share of fees.
This just goes to show how high the demand for ChatGPT is. Competition in this domain is heated, with rivals such as Meta and xAI, yet ChatGPT is clearly a frontrunner in the race: it hit the second spot after not even cracking the top five the month before, with revenue growing 50% in a single month.
YouTube, meanwhile, slipped to third with net revenue of $131 million, per Appfigures estimates. TikTok and ChatGPT saw revenue boom in April, while YouTube's revenue fell nearly 23%, returning to levels last seen a year ago.
Rounding out the top five were Tinder and Disney+, which continued to be top revenue performers. This just goes to show that apps need to go that extra mile. As per the latest estimates, the top 10 highest-earning apps together raked in $1.2 billion. And even though downloads went downhill in April, revenue still managed single-digit growth. Publishers and developers will need to work harder for a successful May.
Moving on to the most downloaded apps for April 2025, ChatGPT again made it big, jumping to the number one position in a huge shift in recent AI trends. To think that it surpassed all the social media giants to hit that spot is incredible.
This is the second time we've seen it dethrone other leaders, such as Instagram. According to stats from Appfigures Intelligence, the AI tool logged over 52 million downloads across the iOS and Android markets, a 12% increase from March and a 38% increase from January. It appears that greater competition gives rise to greater downloads.
This is also the first time the app has claimed the top spot on both the App Store and the Google Play Store. How's that for an incredible milestone? In second position, we had TikTok with an estimated 39 million downloads, while Instagram took the third spot with a little less. It seems like these two might need to take a backseat, as it's ChatGPT's time to shine for a while.
The top five most downloaded apps were rounded out by Facebook and WhatsApp, which had 31 million and 27 million downloads respectively. So Meta is on a roll, with three of its family apps making the top 5 most downloaded platforms around the globe. Meanwhile, Threads isn't too far below, still making the top 10.
Temu took the sixth spot with 25 million downloads, despite steep US tariffs on Chinese goods. Its download figures were the lowest we've seen, however, so we don't expect it to make the top 10 most downloaded apps in May.
As a whole, total estimated downloads hit 300 million, a 13% decline from the previous month. Developers will need to pull their weight to better those figures for a successful May. This stood in contrast to the top revenue-generating apps, which still managed single-digit growth in April.
Read next: Meta Under Fire for Emotional Targeting and Expansive AI Data Harvesting
by Dr. Hura Anwar via Digital Information World
Meta Under Fire for Emotional Targeting and Expansive AI Data Harvesting
Over the last several days, Meta has come under renewed scrutiny for how intimately it tracks user behavior—especially following the launch of its AI chatbot and eye-opening revelations from a former employee.
Sarah Wynn-Williams, once with Meta and now an author, spoke before the U.S. Senate, alleging that Meta internally used emotional indicators from users, especially teens, to refine advertising precision. She described how the company could identify states like hopelessness or poor self-esteem and provide advertisers with access to that data. For instance, if a teenage girl deletes a photo—possibly reflecting low confidence—the algorithm might push beauty products or slimming teas in that very moment.
This kind of emotional microtargeting — particularly toward adolescents — raises major ethical concerns. It highlights a disturbing trend where tech firms commodify mental states for profit.
Simultaneously, Meta’s new AI chatbot has reignited privacy debates. Designed for personalized chats, the bot pulls data not just from messages but also from broader user activity across Facebook and Instagram. Everything typed into it sharpens its learning model. Analysts at The Washington Post have noted that this data collection goes well beyond what ChatGPT or Gemini currently gathers.
Though Meta’s practices have raised red flags before, the outrage after Cambridge Analytica gradually faded. As the dust settled, Meta capitalized on the public’s tendency to trade privacy for convenience—letting the wheels keep spinning.
Yet the depth of data Meta has collected is staggering. A 2015 study from Stanford and Cambridge universities demonstrated that Facebook "likes" alone could predict users' personality traits more precisely than even close friends or spouses.
The strength lies not in single actions, but in the mosaic of choices. Following meme pages or liking pop stars may seem mundane—yet, en masse, they tell stories. They might hint at smoking habits, biases, or impulsivity—even without explicit declarations.
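The idea behind such like-based prediction can be illustrated with a toy model: treat each user as a set of liked pages and score new users against per-page trait averages. Everything below (the pages, the scores, and the method itself) is invented for demonstration and does not reproduce the study's actual approach, which used regression over millions of like records.

```python
# Toy sketch: predict a trait score from liked pages via per-page averages.
# Pages and trait scores are fabricated for illustration only.
users = [
    ({"meme_page", "pop_star"}, 0.8),  # (liked pages, hypothetical trait score)
    ({"meme_page"},             0.6),
    ({"chess_club"},            0.2),
]

def page_averages(users):
    """Average the trait score over all users who liked each page."""
    totals, counts = {}, {}
    for likes, score in users:
        for page in likes:
            totals[page] = totals.get(page, 0.0) + score
            counts[page] = counts.get(page, 0) + 1
    return {page: totals[page] / counts[page] for page in totals}

def predict(likes, averages, default=0.5):
    """Predict a trait score as the mean of the liked pages' averages."""
    known = [averages[page] for page in likes if page in averages]
    return sum(known) / len(known) if known else default

avg = page_averages(users)
print(round(predict({"meme_page", "chess_club"}, avg), 2))
```

Even this crude averaging shows how individually mundane likes combine into a signal; the real study's regression models simply do this at far greater scale and precision.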
Some digital footprints are clear, others subtle—but Meta’s algorithms can connect the dots with uncanny precision. Even though fewer young users are sharing personal content on Facebook, the chatbot delivers a new pipeline of high-quality data.
Available across apps, the chatbot spans countless topics. It encourages users to speak openly—offering Meta a rich supply of preferences, moods, and motivations to fuel its vast advertising engine.
Meta claims it refrains from storing harmful or sensitive chatbot inputs and gives users deletion options. But these controls demand initiative, and most users don’t actively manage what’s logged.
Despite existing privacy toggles, studies show most people leave defaults untouched. That inertia benefits Meta. Its Advantage+ ad platform, run by machine learning, thrives on a deep reservoir of behavioral data.
As Meta's AI grows more advanced, its capacity to intuit thoughts, desires, and intentions will only grow. Whether users scroll, post, or chat—they continue to feed the system.
In exchange for quick answers and smart replies, people give up ever-deeper pieces of themselves. And considering Meta’s track record, it’s worth asking—how much more should we allow them to learn?
Image: DIW-Aigen
Read next:
• Study Reveals When U.S. Residents Are Most Likely to Detach from Their Phones
• Game-Changing Digital Technologies to Watch by 2030
by Asim BN via Digital Information World
Monday, May 5, 2025
Game-Changing Digital Technologies to Watch by 2030
Valantic has published its Digital 2030 report, "The Rise of Applied AI", which highlights the digital technology trends expected to shape businesses through 2030. The report finds that AI will dominate over the next few years and influence many fields across the coming decade. Cybersecurity technologies rank highest, with 81% of the corporate decision makers surveyed saying the field carries strong opportunities and expectations for 2030. 80% said cloud computing will be critical to a company's success, while 79% named artificial intelligence as a key digital technology for corporate success by 2030.
| Digital Technology | Very/Rather Unimportant (%) | Very/Rather Important (%) |
|---|---|---|
| Cybersecurity technologies | 15 | 81 |
| Cloud computing | 17 | 80 |
| Artificial intelligence | 18 | 79 |
| Internet of Things | 17 | 78 |
| Wireless technologies | 20 | 77 |
| Robotic Process Automation | 19 | 76 |
| Intelligent robots | 23 | 74 |
| Virtual Reality / Augmented Reality | 25 | 72 |
| Green IT | 24 | 71 |
| Quantum Computing | 24 | 70 |
| Digital twin | 26 | 67 |
| Blockchain | 28 | 67 |
| Metaverse | 30 | 66 |
Despite the broad hype around digital technologies and their expected impact on everyday life, only a few are projected to reach into nearly every sector. Cybersecurity technologies lead, cutting across healthcare, business and pharma, transportation, production, retail, telecommunications, and utilities. Cloud computing follows, expected to become central to transportation, retail and consumer goods, automotive, food and beverage, and utilities companies within the next five years.
Artificial intelligence ranks third, a priority in all the sectors above, though less so for utilities companies. Notably, respondents also named artificial intelligence the most overrated digital technology for the future, alongside Green IT, intelligent robots, the Metaverse, and wireless technologies. The technologies respondents expect to matter least going forward are quantum computing, blockchain, and the Metaverse.
Read next: OpenAI Puts an End to Model Confusion with Clearer ChatGPT Breakdown for All Users
by Arooj Ahmed via Digital Information World