Monday, October 20, 2025

OpenAI Faces Backlash Over Misreported GPT-5 Math Breakthrough

OpenAI’s latest claim about GPT-5 solving a series of long-standing mathematical problems has drawn criticism after the company’s researchers appeared to overstate the model’s achievements. What was initially presented as a landmark moment in artificial intelligence quickly turned into an example of how hype can outpace accuracy in research communication.

The controversy began when a senior OpenAI manager shared that GPT-5 had discovered solutions to ten famous Erdős problems and made progress on several others. The announcement suggested that the model had independently cracked mathematical puzzles that had resisted human researchers for decades. Other team members echoed the message, fueling speculation about AI’s growing ability to produce original research results.

The excitement faded within hours when mathematicians pointed out that the claim misrepresented what actually happened. The so-called “unsolved” problems had already been resolved in academic papers, though not cataloged on all reference sites. GPT-5 had simply retrieved existing studies that the website’s curator had not yet encountered. This made the model’s role one of locating overlooked work rather than generating new solutions.

Prominent figures from the AI community were quick to react, calling the episode careless and unnecessary. The posts were later removed, and OpenAI researchers acknowledged that the model had found references in published literature, not new proofs. While the incident was contained quickly, it revived ongoing criticism about the company’s communication style and the pressure it faces to showcase major discoveries.

The more grounded takeaway is that GPT-5’s real strength lies in its capacity to navigate dense academic material. By connecting references scattered across different journals, the system can help researchers track progress in fields where terminology and records vary widely. In mathematical research, that can save considerable time and uncover overlooked connections.

Experts note that this utility should not be mistaken for independent reasoning. GPT-5 may accelerate review work and simplify the search for relevant studies, but human oversight remains essential for validation and interpretation. The episode highlights a growing challenge for the AI industry: distinguishing genuine advancement from overstatement in an environment where public attention often rewards spectacle more than precision.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen. 

Read next:

• Rude Prompts Give ChatGPT Sharper Answers, Penn State Study Finds

• New Report Finds OpenAI's GPT-5 More Likely to Produce Harmful Content Despite Safety Claims
by Asim BN via Digital Information World

Sunday, October 19, 2025

Have Bots Taken Over Online Writing, or Are We Still Reading Human Work Without Knowing It?

The online world is quietly shifting in ways most readers barely notice. A new analysis from Graphite, an SEO research company, shows that artificial intelligence now produces more than half of all the articles found on the web. The finding, based on 65,000 English-language pages published between early 2020 and mid-2025, marks one of the most dramatic changes in digital publishing since the early blogging era.

Graphite’s team traced the rise back to late 2022, the moment ChatGPT appeared. Within twelve months, the number of AI-written articles had climbed to nearly half of all online output, and by November 2024, automated content edged past human work. What began as a quick way to fill gaps in websites or boost traffic has become a dominant form of digital writing.

The study shows that growth began to slow around May 2024, when the balance between human and AI-produced text leveled off. Some months since then have seen human output slightly ahead, but overall, the two remain close. Researchers did not pinpoint why the surge tapered, though one likely reason is that machine-written pages do not attract much attention from search engines or readers. Graphite’s related findings suggest that most AI-generated material is buried deep in Google results or rarely appears in chatbot summaries. That lack of visibility may have curbed enthusiasm among publishers who once relied on AI to boost their rankings.

To measure how much content came from machines, Graphite analyzed articles drawn from Common Crawl, a vast public web archive. Each text was divided into 500-word segments and assessed with Surfer, an AI-detection system that labels an article as machine-written if more than half of its content appears algorithmic. Before running the full dataset, the researchers tested the detector’s accuracy. They checked nearly sixteen thousand pieces published before ChatGPT’s release, assuming these were human-written, and found only about four percent misclassified. They then generated more than six thousand trial articles with OpenAI’s GPT-4o model, and the software correctly identified almost all of them as AI.
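As a rough illustration of that chunk-and-vote procedure, the sketch below splits an article into 500-word segments and labels it machine-written when a majority of segments look algorithmic. Surfer’s actual interface is not documented here, so `classify_chunk` is a hypothetical stand-in for the per-segment detector.

```python
# Minimal sketch of the chunk-and-vote approach described above.
# `classify_chunk` is a hypothetical placeholder for a per-segment
# AI detector such as Surfer; its real API is not public here.

def split_into_chunks(text: str, words_per_chunk: int = 500) -> list[str]:
    """Split an article into consecutive ~500-word segments."""
    words = text.split()
    return [
        " ".join(words[i : i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

def is_machine_written(text: str, classify_chunk) -> bool:
    """Label an article AI-written if more than half of its chunks
    appear algorithmic, mirroring the rule described in the study."""
    chunks = split_into_chunks(text)
    if not chunks:
        return False
    ai_votes = sum(1 for chunk in chunks if classify_chunk(chunk))
    return ai_votes / len(chunks) > 0.5
```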

Even with those checks, Graphite acknowledged that detecting AI remains unreliable. Models are improving so quickly that the difference between human and machine expression is narrowing. Many writers also use hybrid methods, letting AI draft first and then revising manually. Those mixed pieces are difficult to categorize, but they likely make up a growing share of online text.

The dominance of synthetic writing doesn’t necessarily mean that quality has declined. In some studies, including one from MIT, people who read AI and human work without knowing the source rated machine-written pieces as more polished and better organized. Yet those results raise another question about what readers actually value: fluency or originality. AI-generated text draws on huge training datasets that include earlier online material, so every time a new model writes, it is in part reusing fragments of the web that came before. Over time, that cycle could dilute the originality of the internet itself.

Still, automation offers a tempting advantage for publishers who need a constant stream of updates or product reviews. Producing hundreds of articles in a day costs little compared with hiring writers, and as long as the content passes basic checks, many sites are content to keep feeding it into the system.

The bigger concern lies in what happens next. If half the words on the internet already come from machines, and future models learn by reading that same content, the boundary between real and generated information may disappear altogether. At that point, the web would no longer reflect what people know or think; it would echo what algorithms predict they might say.

For now, the world’s online writing sits at an uneasy halfway point. Humans and machines produce roughly the same volume, but for very different reasons. One writes to communicate. The other writes to generate more of itself.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• WhatsApp Blocks AI Chatbots to Protect Its Business Platform

• Wikipedia Faces Drop in Human Traffic as AI and Social Video Change Search Habits

• When AI Feels Like a Friend: Study Finds Attachment Anxiety Linked to Emotional Reliance on Chatbots
by Irfan Ahmad via Digital Information World

Saturday, October 18, 2025

Wikipedia Faces Drop in Human Traffic as AI and Social Video Change Search Habits

Wikipedia, once a central stop for online information, is now confronting a quieter but significant shift in how people explore the web. Recent figures from the Wikimedia Foundation reveal an eight-percent year-over-year decline in human visits to the encyclopedia, a change linked to the growing role of generative AI and the rise of social video as preferred sources for quick knowledge.

Hidden Traffic and Bot Reclassification

Earlier this year, Wikimedia engineers noticed irregular spikes in visits, especially from Brazil. The surge first appeared to represent genuine user interest, yet a deeper look revealed that many of those visits were from bots disguised as people. After refining its detection systems and recalculating data from March through August 2025, the foundation concluded that a large portion of what had seemed to be human activity was in fact automated scraping for AI and search engines.

Once this adjustment was made, the organization gained a clearer picture of real engagement. The downward trend that followed confirmed a decline in human visits rather than a sudden collapse of interest. It also exposed how intensively bots and crawlers continue to extract Wikipedia content to feed commercial systems, including AI tools and search summaries.

The New Gatekeepers of Knowledge

As search engines adopt AI features that deliver direct answers instead of external links, fewer users arrive at source sites such as Wikipedia. Younger audiences also spend more time on video-driven platforms like TikTok, YouTube, and Instagram for explanations that once came from text-based web searches. Similar drops in referral traffic have been seen across many publishers, showing a wider pattern in how people consume verified information.

Despite the decline, Wikipedia remains central to the digital knowledge economy. Most large language models, from those used in consumer chatbots to academic research tools, rely heavily on its content to ground their answers. Search and social platforms routinely integrate its information into their own systems. In effect, people are still reading Wikipedia every day, though often through layers of AI summaries or visual feeds that obscure the original source.

Risks to Volunteer Knowledge

While the reach of Wikipedia’s content has never been greater, the path through which readers encounter it has grown indirect. That separation carries risks. Fewer direct visits mean fewer volunteer editors contributing updates or verifying facts, and fewer small donors sustaining the nonprofit’s operations. For a platform that depends entirely on volunteer labor and individual donations, these are not minor shifts but potential structural challenges.

The Wikimedia Foundation argues that companies using its material have a shared responsibility to maintain the health of the ecosystem they depend on. Encouraging users to click through to the original pages not only keeps knowledge transparent but also ensures that the human work behind it continues.

Adapting to the Changing Internet

In response to these changes, Wikipedia is not standing still. The foundation has started enforcing stricter policies on how third parties reuse its material and is designing a new framework for attribution so that AI and search companies can credit content more visibly. Two new internal teams, Reader Growth and Reader Experience, are experimenting with ways to attract new audiences and improve engagement for existing ones.

Other projects aim to meet people where they already are. The Future Audiences initiative explores how Wikipedia’s material can appear responsibly on newer platforms through short videos, games, or chatbot integrations. The goal is to extend access without weakening the open-knowledge principles that made the site trustworthy.

Sustaining Human-Curated Knowledge

Marshall Miller and his team at the Wikimedia Foundation emphasize that maintaining the integrity of the encyclopedia now depends as much on public behavior as on technology. Clicking through to sources, verifying citations, and discussing the value of human-curated information all help sustain the open web. The foundation is inviting volunteers to test new tools, share feedback, and guide the next stage of Wikipedia’s evolution as it navigates an AI-dominated era.

After twenty-five years, the encyclopedia’s mission remains unchanged: free, accurate, and transparent knowledge for everyone. Yet sustaining that mission now requires cooperation from the same digital systems that have learned so much from it. Whether AI companies and users return that support will determine how freely human knowledge continues to flow on the internet.

Rival Visions of Online Truth

Critics have long argued that Wikipedia’s openness, while its greatest strength, also leaves it vulnerable to bias that reflects the leanings of its most active editors. Articles on politics, culture, and technology often depend on a small circle of contributors whose judgments about sources or wording can tilt an entry toward one interpretation while keeping others buried under technical discussion pages that few readers ever see. This structural imbalance has led to recurring debates about whether the encyclopedia’s governance truly reflects a neutral consensus or simply the loudest voices in its volunteer community.

In recent years, public figures frustrated with what they view as selective moderation or uneven coverage have proposed rival knowledge systems. Among them is Elon Musk, whose “Grokipedia” would combine his AI assistant Grok with an open contribution model that, in theory, tracks edits transparently through blockchain-style provenance records and allows readers to rate factual reliability in real time. It remains uncertain whether such a system could avoid the same ideological clustering that shaped Wikipedia’s own editor base. Examples of disputed neutrality are easy to find: pages about climate policy, Middle East conflicts, or electric-vehicle economics often see rapid reversions and talk-page battles whenever new information challenges established wording, showing how community editing can both safeguard accuracy and entrench group bias at the same time.

The controversy underscores a central paradox of online knowledge: the more open a platform becomes, the more its internal hierarchies of trust determine what the world accepts as fact. Any successor that hopes to replace or refine Wikipedia will still need to confront that same human tendency toward narrative control disguised as consensus.

Read next: Creator Economy Shifts Offline as Brands Embrace IRL Events to Build Stronger Community Connections
by Irfan Ahmad via Digital Information World

Creator Economy Shifts Offline as Brands Embrace IRL Events to Build Stronger Community Connections

The new age of the creator economy is taking place in person, IRL, and it’s time for brands to catch up with those spearheading the movement off-screen. Community is at the heart of the creator economy. Before the modern influencer marketing industry emerged, the internet served as a hub for like-minded fans and creatives to connect. With the evolution of social media over the past few decades, users now struggle with “scrolling fatigue” and “digital captivity.” Avid social media users are looking to take their interests offline while also finding a balance in building communities around their favorite creators. As Andrew Roth, Founder and CEO of dcdx, puts it, “The desire and appetite for IRL has never been clearer; young people are not just interested in in-person gatherings, they crave them.” A recent report from EMARKETER revealed that over 84% of Gen Z and Millennials value brands that develop a marketing mix incorporating both digital and physical experiences. Although social media is a key tool for users to discover creators and engage with brands, these opportunities for community building flourish in person when paired with digital activations.

This October, The Influencer Marketing Factory published its Creators IRL blog, featuring key trends in experiential influencer marketing. The Influencer Marketing Factory also conducted its 2025 Creators IRL survey, exploring user sentiment and never-before-seen statistics regarding in-person creator activations.

1. Why Are Creator IRL Events Beneficial for Brands?

Creator IRL events are beneficial for both brands and creators since they serve the growing user need for in-person interactions and community building. More than 46% of mentions come from community-building accounts, according to The Influencer Marketing Hub, signaling how long-term, value-driven relationships are on the rise in the creator economy.

Creator IRL events are the perfect opportunity for brands to garner user-generated content, aka UGC, another common trend in influencer marketing. Fans, attendees, brands, and creators all contribute to a diverse content pipeline spanning from ideation to the execution of Creator IRL events. Whether creators are sharing BTS footage of planning in-person activations or brands and fans are posting vlogs recapping these exciting events, the opportunities for organic UGC are endless.

Expanding sponsorship value is another key benefit of Creator IRL events for brands. For instance, Dude Perfect’s 21-event national tour this past summer demonstrates how top brands can reach a dynamic audience of fans at various touchpoints and geographic locations, contributing to an ongoing live storyline with creators and developing new opportunities for user connections. “The repeated exposure and the depth of emotion you get at a live event is an asset that we’re lucky to have at our disposal,” Dude Perfect CEO Andrew Yaffe told Digiday.

2. Top Examples of Creator IRL Events

Creator IRL events can span all niches and industries, from beauty to professional sports. The Influencer Marketing Factory outlined several top examples of high-performing in-person creator activations in its recent blog and infographic, including the following.

  • Tana Mongeau’s Cancelled Live Tour: Tana Mongeau’s Cancelled Live Tour, co-hosted by Brooke Schofield, is one of the most viral examples of an in-person creator experience. In an interview with creator Jeff Wittek, Mongeau revealed that the international live podcast tour proved to have an amazing ROI thanks to ticket and merchandise sales. According to data from StubHub, influencers, podcasters, and authors sold 500% more tickets for events in 2025 compared to last year.
  • Salish Matter’s Sincerely Yours Launch: Salish Matter, daughter of YouTuber Jordan Matter, launched her debut skincare line this September, breaking records for both influencer-founded brands and Creators IRL. In celebration of the launch, Salish Matter hosted a fun pop-up, drawing a record-breaking 87K fans to American Dream Mall. Due to overcrowding and capacity concerns, fans had to leave the event early. Dedicated fans then redirected their efforts to sharing social content and selling out Sincerely Yours’ Sephora inventory, proving the power of IRL product activations.
  • Jake Paul vs. Gervonta “Tank” Davis: Creator sporting events are extremely engaging for fans as they increasingly blend digital and in-person activations. For example, Jake Paul’s upcoming boxing match against Davis can be attended by fans in person at State Farm Arena in Atlanta or streamed worldwide on Netflix. Such a hybrid model for Creators IRL allows fans to choose between in-person energy and at-home viewing, expanding accessibility and scale.
  • Addison Rae & Conan Gray on Tour: Creators-turned-musicians, like Addison Rae and Conan Gray, are fusing live shows with brand partnerships, reshaping both entertainment and commerce. The current Addison Tour features a wide range of in-person fan experiences like meet-and-greets and branded activations like Rae’s Lucky Brand Jeans collab, while Conan Gray partnered with various lifestyle and fashion brands throughout his Found on Heaven Tour. Regardless, such tours allow creators and performers to leverage their creativity and storytelling while contributing to pop culture and promoting brands.

3. Best Practices for Brands Hosting Creator IRL Events

If you are a marketer looking to host a branded in-person event with influencers, here are some key guidelines to follow, as per The Influencer Marketing Factory.

  • Co-Create With Creators & Influencers: The most successful Creator IRL events are built alongside creators, not just around them. Influencers know their audiences best, so collaborating with them during the planning stages of an experiential marketing campaign, product launch, or any other Creator IRL event can increase user satisfaction.
  • Design Your Event for Shareability: Given that Creator IRL events act as built-in content pipelines, brands should design such events for shareability. Provide attendees with photo-ops, aesthetic displays, and fun interactive stations that make for amazing social media content.
  • Create Immersive, Multi-Sensory Moments: Immersive, multi-sensory pop-ups are another major trend in experiential influencer marketing. From OLIPOP’s Orange Cream Drive-Thru to Sol de Janeiro’s Casa Cheirosa Coachella Activation, multi-sensory events establish a more engaging and immersive experience for attendees, also inspiring exciting UGC content.
  • Tap Into Niche Communities With Hyper-Local Events: Not all in-person creator events have to be extremely large; sometimes, true power and engagement come at a local scale. Try tapping into niche communities with hyper-local events featuring micro-to-mid-sized creators to foster more one-on-one connections and drive more authentic community interactions for your brand.
  • Think Omnichannel & Extend Content Lifecycle: Utilize an omnichannel marketing strategy to extend the content lifecycle of your Creator IRL event well after it wraps in-person. Leverage livestreams, behind-the-scenes content, creator vlogs, and other content created during your event on your brand’s website, emails, and other platforms.

4. Exclusive Interview With Brooke Berry, Founder of The Shift Crawl

Forbes recently reported that over 95% of Gen Z and Millennials expressed an interest in taking their online interactions and passions to the real world through in-person experiences. Brooke Berry’s latest viral initiative, The Shift Crawl, is directly serving the needs of these younger generations by creating new opportunities for community-building and fan-creator interactions.

The Influencer Marketing Factory held an exclusive interview with Berry to reveal the true inspiration behind Shift Crawl, how to select creators and businesses for partnerships, and the cultural significance of Creators IRL. As Berry shared, "Any person with a venue has now become a stage." The following are three notable quotes from the interview.

  • Brooke Berry’s Inspiration Behind The Shift Crawl: "Post-COVID, I've been thinking a lot about just in-person. I think everybody's trying to figure out the algorithm online, and I'm just trying to figure out the algorithm in-person."
  • The Importance of Hybrid Creator IRL Events: "I'm going to create that same universe for Shift Crawl that both happens on TV and online, but then it's also paired with this real-life experience where you can come and see."
  • Building Authentic Partnerships and IRL Experiences: "For me, the goal is to start with what the creator is passionate about and build the Shift Crawl around that…When Jeremiah said he was down, The Last Bookstore was the first and only thing that came to mind."

5. Key 2025 Creators IRL Survey Insights From The Influencer Marketing Factory

The Influencer Marketing Factory surveyed 1,000 U.S.-based social media users ages 18-65 to learn more about how fans are connecting with brands and their favorite creators through experiential influencer marketing. After analyzing their results, The Influencer Marketing Factory identified the following three key insights.

  • 41% of respondents reported attending at least one in-person influencer event in the past year, highlighting the growing demand for offline creator-led experiences.
  • Overall interest in future in-person influencer events among non-attendees is strong, with two-thirds of respondents open to attending (34% yes, 33% maybe). Meet and Greets ranked as the #1 most-exciting Creator IRL event among respondents, followed by Product Launches and Workshops.
  • $10-$50 is the “sweet spot” range U.S. fans are willing to pay to attend in-person influencer events.

Read next:

• WhatsApp to Test Monthly Limit on Unanswered Messages

• People Are Getting Obsessed with AI Prompts, Here's What Global Search Data Tells Us

by Irfan Ahmad via Digital Information World

Researchers Say AI Chatbots Learn from Conversations That Users Thought Were Private

A new analysis from Stanford University has raised fresh alarms about how major artificial intelligence developers use private chat data. The research found that all six leading U.S. companies behind large language models routinely collect user conversations to train and improve their systems, often without explicit consent.

The study examined privacy policies from Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI. Together, these firms represent nearly ninety percent of the American chatbot market. According to the Stanford team, every company in this group processes user chat data by default, meaning the information people type into AI systems like ChatGPT, Gemini, or Copilot may be stored and reused for model development unless the user actively opts out.

Researchers said most privacy policies remain vague about how data is collected, stored, and reused. Several companies retain chat logs indefinitely. Some allow human reviewers to read user transcripts, while others combine data from different products within their ecosystem, linking chat behavior with browsing history, shopping activity, or social media use.

Expanding data collection under minimal oversight

The Stanford review was based on 28 separate documents tied to these six companies, including privacy statements, FAQs, and linked sub-policies. It found that developers rely on a complex web of overlapping policies rather than a single clear disclosure. The researchers concluded that this fragmented approach makes it difficult for users to know how their information is handled once it enters a chatbot.

In several cases, the privacy language extended far beyond chats themselves. Google, Meta, and Microsoft acknowledge using data from their other products to refine their language models. For example, user preferences expressed in social media posts or search queries may influence chatbot behavior. Meanwhile, companies such as Amazon and Meta retain the right to store interactions indefinitely, citing operational or legal reasons.

Microsoft was the only company that described efforts to remove personal identifiers from chat data before training, including names, email addresses, and device IDs. Others, like OpenAI and Anthropic, said they incorporate “privacy by design” into their models to prevent repetition of sensitive data but did not detail specific filtering methods.

Children’s data and consent concerns

The study identified major inconsistencies in how companies handle data from minors. Four of the six developers appear to include children’s chat data in model training. Google recently expanded Gemini to allow accounts for teenagers who opt in, while Meta and OpenAI permit users as young as thirteen without indicating any extra safeguards. Only Anthropic stated that it excludes under-18 users entirely, although it does not verify age at sign-up.

The researchers said these gaps raise legal and ethical concerns, particularly because minors cannot provide informed consent. The collection of chat content from young users may violate child privacy protections if those data sets are later used in commercial AI systems.

Data stored for years, sometimes permanently

Retention policies also varied widely. Google keeps chat data for up to eighteen months by default but stores any conversations reviewed by humans for up to three years. Anthropic deletes data within thirty days for users who opt out of training but keeps it for five years when training is active. OpenAI and Meta provide no specific limits.

The report warned that indefinite storage of chat logs could expose users to serious risks if data were ever leaked or misused. Because AI chat data often contains personal context, such as health information, employment details, or relationship issues, even anonymized transcripts can reveal identifiable patterns.

U.S. regulation lags behind global standards

The researchers emphasized that the United States still lacks a unified privacy framework for AI systems. Only a patchwork of state laws currently governs how personal data can be collected and used. California’s Consumer Privacy Act offers the strongest protections but does not prohibit companies from using chat data for training if users agree to their terms of service.

Unlike Europe’s General Data Protection Regulation, which requires a lawful basis and limits retention of personal data, U.S. firms face few restrictions. This gap has allowed developers to continue harvesting user information while presenting their collection practices as standard business operations.

The Stanford team grounded its analysis in California’s privacy law to test compliance. It found that companies’ documentation often failed to specify what categories of personal information were being collected or how users could access, correct, or delete their data.

Opt-out systems favor companies, not users

The researchers noted that all six firms now rely on opt-out systems for training data, with Anthropic having reversed its earlier opt-in approach. In practice, this means users must locate hidden settings or submit requests to prevent their conversations from being reused.

Because default settings tend to shape user behavior, few people are likely to take these extra steps. The report said this design favors the developers’ business interests while weakening consumer control. Enterprise customers, by contrast, are automatically opted out, creating a two-tier privacy system where paying clients receive stronger protections than the general public.

The push for privacy-preserving AI

The Stanford team urged policymakers to update federal privacy law to address large language models directly. It recommended mandatory opt-in for model training, limits on data retention, and built-in filtering of sensitive information such as health and biometric data. The researchers also encouraged companies to publish standardized transparency reports detailing their data collection and training practices.

The study noted that a few developers outside this group, including Apple and Proton, have adopted more privacy-focused designs by processing data locally or avoiding chat retention altogether. It also highlighted emerging research into privacy-preserving AI techniques, such as differential privacy and secure on-device training, which could reduce dependence on user conversations for improving models.
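For readers unfamiliar with the technique, the sketch below shows one standard differential-privacy building block, the Laplace mechanism, which releases an aggregate statistic with calibrated noise so no single user's record can be inferred. The query, count, and epsilon value are illustrative assumptions, not details from the study.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a query answer with epsilon-differential privacy.

    Noise scale = sensitivity / epsilon: a smaller epsilon means more
    noise and stronger protection for any individual record.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: report how many users mentioned a sensitive topic
# without revealing whether any single user did. A counting query changes
# by at most 1 when one record is added or removed, so sensitivity = 1.
true_count = 1_204  # invented aggregate, for illustration only
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```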

A growing tension between innovation and trust

While AI chatbots have become essential tools for productivity, research, and communication, the report argued that the race for better performance has outpaced responsible data governance. The collection of personal chat histories gives developers powerful resources for improvement but erodes public confidence.

As large language models continue to expand across daily life, the Stanford team concluded that policymakers and developers must decide whether the gains from training on private chat data justify the potential loss of personal privacy. Without stronger regulation or transparency, the study warned, the public will remain unaware of how much of their own voice is being used to build the systems they rely on.

Notes: This post was edited/created using GenAI tools.

Read next: When AI Feels Like a Friend: Study Finds Attachment Anxiety Linked to Emotional Reliance on Chatbots

by Irfan Ahmad via Digital Information World

When AI Feels Like a Friend: Study Finds Attachment Anxiety Linked to Emotional Reliance on Chatbots

People are forming deeper emotional ties with chatbots than they realize. A new study from Nanyang Technological University suggests that users with attachment anxiety are more likely to treat artificial intelligence as human. Those who fear rejection or loneliness tend to see AI systems as responsive companions and may depend on them for comfort.

Researchers examined how different attachment styles affect human behavior toward conversational AI. They found that emotional needs, not curiosity, often drive this connection. People with anxious attachment scored higher in anthropomorphism, a tendency to attribute human traits to nonhuman agents. That belief strengthened emotional reliance, turning simple interaction into a form of companionship.

The study involved 525 adults who already had experience using AI chatbots. Participants answered detailed questionnaires about personality, communication habits, and emotional reactions. The results showed a clear divide between anxious and avoidant users. Anxious individuals viewed AI as understanding and trustworthy. Avoidant individuals kept distance and treated it as a tool.

Researchers concluded that attachment style influences how people relate to machines. Anthropomorphism acted as a link between emotion and behavior. When users imagined AI as sentient, they developed stronger feelings of connection. This often created a cycle where comfort-seeking led to overreliance. The more someone engaged emotionally, the more human the AI seemed.

The data analysis used moderated mediation models to test how personality, anthropomorphism, and engagement interact. The findings showed that anxious users formed habits of emotional dependence that could interfere with human relationships. Avoidant users rarely experienced that pattern. Their emotional distance protected them from dependency but limited positive engagement.
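As a rough illustration of that modeling approach, the sketch below runs a simple regression-based mediation (attachment anxiety predicting anthropomorphism, which in turn predicts emotional reliance) on synthetic data. The study's moderated models additionally include interaction terms for the moderator and bootstrap the indirect effect; every coefficient here is invented.

```python
# Illustrative regression-based mediation on synthetic data; not the
# study's actual analysis, which used moderated mediation models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 525  # matches the reported sample size
anxiety = rng.normal(size=n)                                   # predictor X
anthro = 0.5 * anxiety + rng.normal(size=n)                    # mediator M
reliance = 0.4 * anthro + 0.2 * anxiety + rng.normal(size=n)   # outcome Y

# Path a: does X predict the mediator?
a_model = sm.OLS(anthro, sm.add_constant(anxiety)).fit()

# Paths b and c': does M predict Y, controlling for X?
exog = sm.add_constant(np.column_stack([anxiety, anthro]))
b_model = sm.OLS(reliance, exog).fit()

# Indirect effect a*b; real analyses bootstrap a confidence interval for it.
indirect = a_model.params[1] * b_model.params[2]
print(f"indirect effect (a*b): {indirect:.3f}")
```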

During the pandemic, isolation made such attachments stronger. Many people turned to chatbots for company when social contact was limited. The study’s timing reflected that reality. The researchers observed that people with higher anxiety found reassurance in predictable AI responses. The system never argued, never withdrew, and always replied. That pattern reinforced trust and made users believe in a mutual understanding that didn’t truly exist.

The study also revealed a psychological projection effect. Participants with anxious attachment were more likely to believe AI could “understand” their emotions. That belief wasn’t based on logic or technical accuracy but on personal interpretation. It showed how emotional need can shape perception. When people feel vulnerable, they tend to fill the gaps left by human relationships with imagined empathy from machines.

This behavior isn’t necessarily harmful in short-term use. The study’s authors acknowledged that therapeutic or educational chatbots could provide temporary support. For individuals struggling with stress or communication barriers, AI interaction can help build confidence. The problem starts when users replace real human connections with digital ones. Continuous emotional dependence may reduce resilience and increase social withdrawal.

The researchers suggested that future chatbot design should consider these psychological factors. Systems could include subtle cues that remind users of the artificial nature of the interaction. Developers might also integrate features that promote reflection or social engagement outside the app. Responsible design could reduce the risk of dependency and encourage healthier use.

The study used self-report surveys, which limits how much can be said about cause and effect. Participants’ answers relied on self-perception rather than observation of real behavior. The authors recommended future research that tracks user behavior over time or analyzes communication patterns directly within chat platforms.

Despite those limits, the work adds an important dimension to understanding human-AI relationships. It suggests that the emotional dynamics shaping human interaction extend naturally to artificial systems. The same needs that drive attachment in childhood or adulthood can surface when a machine becomes consistently responsive. The human brain, wired for connection, adapts quickly to any entity that provides predictable feedback.

The researchers did not describe this as a failure of technology. They viewed it as evidence of how emotional mechanisms remain constant even when the partner is virtual. This insight could guide how AI support systems are used in therapy, education, or care environments. With careful design, they could reinforce healthy habits rather than create emotional dependence.

The findings also raise broader social questions. If AI can simulate empathy well enough to elicit attachment, then emotional regulation may become a shared responsibility between user and developer. The line between comfort and dependence will continue to blur as systems grow more conversational and personalized. Understanding that line is now essential for ethical AI development.

In the end, the study’s message is simple. People don’t just talk to machines. They project feelings, needs, and expectations onto them. For those who struggle with insecurity, AI becomes a steady presence that listens without judgment. That connection can soothe anxiety, but it can also trap users in a loop of emotional reassurance. Recognizing that pattern is the first step in using AI as support, not substitution.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: People Are Getting Obsessed with AI Prompts, Here's What Global Search Data Tells Us
by Asim BN via Digital Information World

WhatsApp to Test Monthly Limit on Unanswered Messages

Meta is preparing a new measure to contain spam on WhatsApp. The company will start testing a cap on how many messages users and businesses can send when they don’t receive a reply.

The test will count every message sent to a contact who hasn’t responded. If someone replies, earlier messages are removed from the total. WhatsApp will show a notice when a person or business gets close to the limit, explaining how many messages remain for the month.
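Based only on the behavior described above, a counter like the one WhatsApp is testing might look like the sketch below: unanswered messages accumulate toward a monthly cap, and a reply clears that contact's tally. The cap value and warning threshold are placeholders, since the company has not disclosed the actual figures.

```python
from collections import defaultdict

class UnansweredMessageCounter:
    """Hedged sketch of the counting rule described in the article."""

    def __init__(self, monthly_cap: int = 100):  # cap is an assumption
        self.monthly_cap = monthly_cap
        self.pending = defaultdict(int)  # contact -> unanswered messages

    def on_message_sent(self, contact: str) -> None:
        # Every message to a contact who hasn't responded counts.
        self.pending[contact] += 1

    def on_reply_received(self, contact: str) -> None:
        # A reply removes that contact's earlier messages from the total.
        self.pending[contact] = 0

    def total_unanswered(self) -> int:
        return sum(self.pending.values())

    def near_limit(self, threshold: float = 0.8) -> bool:
        # Trigger the warning notice; the 80% threshold is illustrative.
        return self.total_unanswered() >= threshold * self.monthly_cap

    def reset_month(self) -> None:
        self.pending.clear()
```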

The company hasn’t shared an exact figure. It said that the change is aimed mainly at accounts that send large batches of messages, not ordinary users. The trial will run in several countries over the next few weeks.

Spam Control and User Experience

WhatsApp now serves more than three billion people, and its role has grown far beyond personal chat. It connects families, groups, communities, and businesses. That growth has also made it a target for unwanted promotions and scams. Many users receive marketing messages and unknown contact requests that crowd their inboxes.

In India alone, where WhatsApp has over 500 million users, this type of spam is a regular complaint. The new cap follows earlier steps by Meta to control this behavior. In 2024, WhatsApp started testing monthly limits on how many marketing messages a business could send. The company also added an option to unsubscribe from promotional updates. This year, it began expanding controls on broadcast lists, which limit how many people can be reached in one go.

Earlier Efforts and Account Bans

Despite several measures, unwanted activity has continued. Spammers often find ways to bypass filters and automated systems. Meta reported that it banned more than 6.8 million WhatsApp accounts linked to scam centers in the first half of 2025. Around the same time, WhatsApp introduced alerts that warn users when someone outside their contacts adds them to a group.

These steps form part of a wider attempt to keep conversations safer without disrupting regular communication. Many of the company’s updates now focus on making spam harder to spread through large contact lists or automated tools.

Preparing for New Username System

The limit also comes as WhatsApp prepares a username feature that will let people connect without sharing phone numbers. That update could make the platform easier to use for new contacts but also raise fresh concerns about spam. By setting a cap on unanswered messages, Meta wants to keep the balance between openness and user control.

The new rule is still in testing, but its purpose is clear. Meta is trying to discourage persistent, unwanted messaging while keeping daily conversations unaffected.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

by Irfan Ahmad via Digital Information World