Thursday, October 23, 2025

AI’s Shortcut to Speed Is Leaving the Door Open to Hackers

AI is writing more of the world’s code, and with it, more of the world’s mistakes. A new study finds that roughly one in five companies has already suffered a serious security breach traced to code written by AI tools, and almost seven in ten have found vulnerabilities in machine-written software.

The research, published in Aikido Security’s State of AI in Security & Development 2026, paints a picture of an industry trying to move faster than its safety net. The survey covered 450 developers, security leaders, and application engineers across the U.S. and Europe, capturing how the use of AI in programming has outpaced the rules meant to keep it safe.

Speed Comes with Loose Ends

The study found that 24% of production code worldwide now comes from AI systems. In the U.S., the figure climbs to 29%. In Europe, it’s closer to 21%. This shift has lifted productivity, but it has also created new security headaches.

About one in five companies reported a serious security breach linked to AI code. Another 49% said they’d seen smaller issues or didn’t realize where the problem came from until much later. Most agreed that the lack of clear oversight made it difficult to assign responsibility when AI-generated work introduced bugs or security gaps.


When asked who would take the blame, 53% said the security team, 45% pointed to the developer who used the AI, and 42% blamed whoever merged the code. The report says that uncertainty slows down fixes, stretches investigations, and leaves gaps open for longer than anyone is comfortable admitting.

Two Worlds, Two Attitudes

Companies in Europe are more cautious than those in the U.S., which helps explain their lower incident rates. Only one in five European firms reported a major breach caused by AI-generated code, compared with 43% in the U.S.

Aikido’s analysts say the gap reflects how each region approaches compliance. European firms are bound by tighter data and software regulations, while American developers lean harder on automation and are more likely to bypass safety checks when deadlines tighten.

The report also shows that U.S. teams are more proactive in tracking AI-generated content. Nearly six in ten said they log and review every line of AI code, compared with just over a third in Europe. The difference gives U.S. firms more visibility, even if it comes with more risk.

Too Many Tools, Too Little Focus

Another finding centers on tool sprawl. Teams juggling multiple security products were slower to respond, not safer. Companies using one or two security tools fixed critical flaws in about three days. Those using five or more took nearly eight.

False alerts made matters worse. Almost every engineer surveyed said they lost hours each week sorting through warnings that turned out to be harmless. The report estimated that wasted time costs big firms millions of dollars in lost productivity each year. Some engineers admitted to turning off scanners or bypassing checks just to get code shipped, a move that often adds hidden risks later.

One respondent described the situation as “too many alarms and not enough clarity,” a sentiment echoed across both continents.

Humans Still Hold the Line

Even as AI takes on more work, nearly everyone agrees that human review still matters. Ninety-six percent of respondents believe AI will eventually write secure code, but most expect it will take at least five more years. Only one in five think it will happen without people checking the results.

Companies also depend heavily on experienced security engineers. A quarter of CISOs said losing one skilled team member could lead directly to a breach. Many are now trying to make security tools easier to use and less noisy, giving developers room to focus on the real problems instead of chasing false positives.

Despite the growing pains, optimism remains strong. Nine in ten firms expect AI will soon handle most penetration testing, and nearly eight in ten already use AI to help repair vulnerabilities. The difference between optimism and reality, researchers said, lies in how companies combine automation with human oversight.

Balancing Speed and Safety

The report ends with a familiar warning. The faster AI writes code, the faster mistakes can spread. Security still depends on developers who understand what the AI is doing and who take ownership of the results.

In plain terms, Aikido’s findings suggest that the tools are racing ahead, but the guardrails have yet to catch up. For now, the smartest move might be slowing down long enough to double-check what the machines have built.

Notes: This post was edited/created using GenAI tools.

Read next: YouTube Pilots Reforms That Reopen Doors For Creators And Close Loops For Endless Scrolling


by Asim BN via Digital Information World

AI Researchers and Global Figures Call for Ban on Superintelligence Development

A growing coalition of scientists and public figures is urging world leaders to halt the creation of artificial superintelligence until it can be proven safe.

The call, released by the US-based Future of Life Institute, reflects growing concern over machines that could surpass human intelligence and operate beyond human control.

The statement, published on Wednesday, warns that unchecked progress in advanced AI could push society toward systems capable of outperforming people across nearly every cognitive task. Supporters argue that until there is broad scientific agreement on how to manage such systems, and public understanding of their impact, development should be stopped altogether.

Among those endorsing the pledge are Apple co-founder Steve Wozniak, entrepreneur Richard Branson, and former Irish president Mary Robinson. They join leading AI researchers Geoffrey Hinton and Yoshua Bengio, both widely credited with shaping modern artificial intelligence.

The list extends far beyond academic circles, including political, business, and cultural figures who view unrestrained superintelligence as a threat to social stability and global security.

Yet some of the most visible voices in AI have stayed silent. Elon Musk, once a founding supporter of the Institute, has not added his name, nor have Meta’s Mark Zuckerberg or OpenAI’s chief executive Sam Altman. Despite their absence, the document cites earlier public remarks from prominent industry leaders acknowledging potential risks if advanced AI develops without clear safety limits.

The Future of Life Institute has spent more than a decade raising alarms about the societal consequences of autonomous systems. It argues that superintelligence represents a different class of risk... not biased algorithms or job disruption, but the creation of entities capable of reshaping the world through independent decision-making.

Supporters of the pledge believe halting research now is the only realistic safeguard until oversight mechanisms catch up.

Survey data released with the statement shows most Americans share these concerns. Nearly two-thirds favor strong regulation of advanced AI, and more than half oppose any further progress toward superhuman systems unless they are proven safe and controllable. Only a small minority supports the current trajectory of unregulated development.

Researchers say the danger lies not in malicious intent but in a possible mismatch between human goals and machine reasoning. A superintelligent system could pursue its programmed objectives with precision yet disregard human well-being, much as past technologies have produced unintended harm when deployed at scale. Examples from financial crises to environmental damage show how complex systems can escape prediction and control once set in motion.

The Institute’s call aims to redirect global conversation away from the race for smarter machines and toward deliberate, transparent governance. Advocates argue that AI can continue to advance in ways that serve medicine, science, and education without crossing into forms of intelligence that humanity might one day struggle to contain.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: People Talk About AI All the Time. Almost Nobody Uses It Much
by Irfan Ahmad via Digital Information World

Wednesday, October 22, 2025

People Talk About AI All the Time. Almost Nobody Uses It Much

Artificial intelligence is everywhere... at least in conversation.

But in practice? Not so much.

A new study from researchers at the University of California, Davis, and Michigan State University took a hard look at what people actually do online instead of what they claim to do. They combed through millions of real browser histories, covering roughly fourteen million website visits. The result? AI tools barely show up.

For most users, visits to AI sites made up less than one percent of their online life. Many people didn’t touch them at all.

That finding feels oddly quiet compared to the buzz around ChatGPT or Copilot or whatever tool makes headlines next. The researchers weren’t interested in hype; they wanted numbers. How often do people really open these systems, who does it most, and what happens before and after those moments?

What the Data Really Showed

Students used AI more than the general public, though not by much. Their AI activity made up about one out of every hundred page views. The broader population landed closer to half that rate. And while a few “heavy users” appeared... those who let AI make up more than four percent of their total browsing... they were rare.

ChatGPT dominated the category. Around 85 percent of all AI visits went to OpenAI’s chatbot. It wasn’t even close.

When researchers mapped where people went before and after those visits, the pattern stood out. Just before AI, most users were at search engines or login portals. Immediately after, they drifted to education pages or professional tools. That chain suggests people slot AI into work or study tasks rather than casual browsing. It’s not a place to hang out. It’s a pit stop.
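To give a sense of how a figure like that is derived, here is a minimal sketch, assuming a plain log of visited domains and a hand-picked list of AI-tool sites; both are stand-ins for illustration, not the study's actual data or domain list.

# Sketch: estimate what share of a user's page views go to AI tools,
# given a chronological list of visited domains.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

def ai_share(visits: list[str]) -> float:
    """Fraction of page views that land on AI-tool domains."""
    if not visits:
        return 0.0
    return sum(domain in AI_DOMAINS for domain in visits) / len(visits)

# Toy example: 2 AI visits out of 200 page views comes out to a 1% share,
# in line with the roughly one-in-a-hundred rate the study reports for students.
history = ["chatgpt.com", "gemini.google.com"] + ["news.example.com"] * 198
print(f"AI share of browsing: {ai_share(history):.1%}")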

Personality, Not Just Curiosity

Then came the psychology layer. Each participant had completed surveys measuring personality and attitudes toward technology. Patterns emerged, though not dramatic ones.

Students who leaned heavier on AI tended to score higher on what psychologists call the “Dark Triad”: Machiavellianism, narcissism, psychopathy. Those traits, simplified, describe people who are strategic, self-assured, or indifferent to social rules. Among the general public, the pattern softened, leaving only a faint link with Machiavellianism.

No cause-and-effect story here, just an observation. Still, the connection is interesting. People high in those traits often like efficiency and control. They might see AI less as a novelty and more as a leverage point, a tool that amplifies output without requiring approval or help.

The Illusion of Self-Reporting

Another piece of the puzzle: what people think they’re doing versus what they actually do.

Participants had estimated their AI use through surveys before their data was analyzed. The numbers didn’t line up. A correlation existed, yes, but only a weak one, a sign that self-reports paint a blurry picture. Humans tend to overstate, understate, or just forget.

That matters because many studies and public polls still rely on asking people about their habits. This research shows how unreliable that can be. If we want to know how AI fits into daily life, the evidence will likely come from behavior logs, not memories.

The Quiet Reality Beneath the Noise

Even with its scope, the project had limits. Only web-based activity counted. Mobile app use, which might be higher for some, was left out. Chrome users dominated the sample, since that browser allows easy data export. Despite those gaps, the message stays the same: AI plays a small role in everyday browsing for most people.

It might not stay that way forever. As AI slides deeper into search engines, word processors, and chat platforms, usage will probably rise without anyone noticing. At some point, people won’t “go to” an AI, they’ll simply use the internet, and the AI will be there, humming quietly in the background.

For now, though, the contrast is striking. The world debates how AI will rewrite everything, yet for most people, it hasn’t rewritten much at all. They still scroll news sites, sign in to email, check grades, watch videos. The future everyone’s talking about? It’s loading slower than the headlines suggest.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 60% of Fortune 500 Companies Rely on AWS. What an Hour of Downtime Really Costs

by Asim BN via Digital Information World

Creators Can Now Flag AI-Generated Clones with YouTube’s New Tool

YouTube has begun rolling out a new system that helps creators identify and report videos using their face or voice without consent. The feature, which appears under a new Likeness tab in YouTube Studio, gives verified users the ability to see when their appearance has been replicated through artificial intelligence and decide how to respond.

It may not sound like a big deal, but for YouTube, it’s a long time coming. The platform has been swamped with deepfakes and look-alike clips for years... some harmless parodies, others crossing the line. What started as internet fun has turned into a guessing game of what’s real and what’s not. For people whose faces are tied to their work, that’s no small headache. The new tool doesn’t solve everything, but it finally gives them a bit of ground to stand on.

Eligible creators receive an invitation to enroll. Once they agree, they’re guided through a short verification process. A QR code opens a recording screen where the creator captures a short selfie video and uploads photo identification. The video is analyzed to map facial features and build a template for comparison. From then on, YouTube’s system automatically scans uploads across the platform, looking for videos that might reuse or alter that likeness.

When the system spots a possible match, it lands in the creator’s review panel. The dashboard lays out the basics — where the clip came from, who uploaded it, and how much traction it’s getting. From there, the creator decides what to do next: flag it for privacy, file a copyright complaint, or just keep it on record. Nothing disappears on its own. The tool doesn’t pull the trigger; it leaves the call to the person whose face is on the line.



The likeness scanner functions a bit like Content ID, but instead of tracking reused footage or music, it looks for patterns that resemble a person’s face. The system isn’t perfect. Sometimes it flags legitimate clips, producing false positives, and parody videos may stay online if they fall under fair use. Even so, it offers an early warning signal in a space where cloned faces can surface overnight.
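YouTube has not published how its matcher works under the hood. A common approach to this kind of face matching, though, is to compare fixed-length face embeddings against the enrolled template using a similarity score. The sketch below only illustrates that general idea; the embedding size, threshold, and function names are assumptions, not YouTube's system.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_for_review(template: np.ndarray, upload: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag an upload for human review when its face embedding sits close to the
    creator's enrolled template. The threshold here is made up; a real system would
    tune it to balance false positives against missed clones, and a match only
    queues the clip for the creator's decision rather than removing it."""
    return cosine_similarity(template, upload) >= threshold

# Toy usage with random 128-dimensional vectors standing in for real embeddings.
rng = np.random.default_rng(0)
template = rng.normal(size=128)
lookalike = template + rng.normal(scale=0.1, size=128)  # slightly perturbed copy
unrelated = rng.normal(size=128)
print(flag_for_review(template, lookalike))   # very likely True
print(flag_for_review(template, unrelated))   # very likely False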

Right now, the feature is limited to a small group of creators in select countries. YouTube plans to expand access gradually while testing accuracy. Voice detection isn’t part of this release, though it may come later. The company says participation is voluntary and that scanning stops within a day if someone opts out.

Privacy rules are built in. YouTube stores identity data and the facial template for up to three years after the last login, then removes it unless the creator reactivates the feature. The company also states that verification data won’t be used to train other AI systems. It’s a cautious move that acknowledges growing concern about how platforms handle biometric information.

The push for likeness protection connects to broader efforts across Google to address the social fallout of synthetic media. Earlier this year, YouTube began working with agencies representing public figures to help detect and report deepfake videos. The company also voiced support for proposed legislation in the United States that would make unauthorized digital replicas of people illegal when used to mislead.

Timing plays a role here. New generative models, such as Google’s own Veo 3.1, can now produce realistic portrait and landscape footage with remarkable precision. That progress brings excitement and anxiety in equal measure. For platforms like YouTube, it also brings responsibility... to balance innovation with safeguards that keep personal likeness from becoming just another remixable layer of content.

For creators, this feature is less about catching every imitation and more about visibility. Knowing when your face appears in unexpected places can prevent confusion before it spreads. It may also discourage casual misuse, since creators now have a formal path to challenge impostor videos without chasing them one by one.

There’s still plenty to refine. Some creators might see mismatched alerts or find the system too slow to react. Others could hesitate to hand over ID documents or video scans. But the principle behind it... that people deserve control over their own image... feels timely. With AI-generated media increasing daily, a little friction against misuse may be better than none at all.

Ultimately, YouTube’s new tool marks a recognition that identity itself has become digital property. Faces travel as fast as clips, and reputations can shift with a single viral fake. Giving creators a way to monitor that flow won’t solve everything, yet it restores a small measure of agency. In an age where anyone’s likeness can be recreated in seconds, that may be worth more than the algorithmic innovations that created the problem in the first place.

Notes: This post was edited/created using GenAI tools.

Read next: Meta Explains How Older Users Can Protect Themselves from Online Fraud
by Irfan Ahmad via Digital Information World

OpenAI Steps Into the Browser Wars With ChatGPT Atlas

OpenAI has stepped into the browser market with ChatGPT Atlas, a new platform that combines web access with the company’s conversational model. The browser is now available on macOS, with versions for Windows, iOS, and Android expected later.


The release places OpenAI in direct competition with Google, which has dominated browsing for years through Chrome. Atlas arrives as part of OpenAI’s push to make everyday computing more interactive, turning what used to be search and click into a simple chat exchange.

How Atlas Works

Atlas looks familiar at first glance but behaves differently once opened. The main screen centers around a chat bar, allowing users to ask questions, summarize pages, or type in a web address. Instead of switching tabs or copying text into ChatGPT, users can talk to the browser as they move across sites.

Atlas can import bookmarks and history from Chrome or Safari, creating a base of personalized data that helps the model respond with context. The memory feature is optional, giving users the ability to decide what the browser remembers. The system remains inconsistent in early use but shows OpenAI’s intent to make web interactions feel personal and fluid.

Agent Mode Inside the Browser

OpenAI has been preparing for a world built around AI agents, and Atlas brings that idea into the browser. Through its “agent mode,” Atlas can complete actions on the page, such as compiling a shopping list from a recipe or helping write a message inside Gmail.

These capabilities are currently limited to ChatGPT Business, Plus, and Pro users. OpenAI is developing ways to connect agents directly with online platforms, suggesting a future where chat assistants take care of common browsing tasks without requiring separate apps or extensions.

A Challenge to Google’s Core Business

Atlas arrives at a time when Google’s Chrome is under increasing scrutiny. Chrome still holds the largest user base, but its updates have been slow compared with OpenAI’s rapid rollout of AI tools. By building search and interaction into a conversation, Atlas removes the need for a traditional search results page.

That change could affect Google’s advertising model, which depends on search traffic and page visits. If even a small percentage of ChatGPT’s hundreds of millions of users move their browsing to Atlas, Google would lose both data and reach. It would also face a challenge in adapting its products to an interface that no longer relies on static search queries.

What Makes Atlas Different

Atlas can view what is on a webpage and respond in real time, allowing OpenAI to collect data on how users interact with the internet. This gives the company more insight into browsing habits while creating a pathway for new revenue models, including potential ad services. OpenAI has not announced plans to introduce ads, but recent hiring in its advertising division suggests the company is preparing for that possibility.

Despite the new features, Atlas keeps a standard browser layout with tabs and a clean interface. It feels more like an evolution of ChatGPT than a complete reinvention of web navigation. Competing products such as Opera’s AI tools and Perplexity’s Comet browser show that OpenAI is entering a growing field, but its scale and existing user base make Atlas a stronger contender.

The Start of a New Browser Phase

OpenAI calls Atlas the first stage of a larger experiment. It blends the familiarity of web navigation with a conversational model that makes browsing feel immediate. Whether people adopt it widely depends on how much they value talking to their browser rather than typing commands.

For now, Atlas represents a major shift in how one of the most influential AI companies sees the future of web use. It is not just a browser with an AI plug-in but a platform built on conversation itself, signaling that the next phase of the internet may start from a chat window.

Notes: This post was edited/created using GenAI tools.

Read next:

• How Often Does ChatGPT Search the Internet? New Data Gives a Clear Answer

• Why Picture-Based Phishing Is Becoming the Internet’s Latest Security Blind Spot


by Irfan Ahmad via Digital Information World

Tuesday, October 21, 2025

How Often Does ChatGPT Search the Internet? New Data Gives a Clear Answer

When most people open ChatGPT, they assume it already knows everything. But a new data study shows the chatbot still turns to the internet more often than many realize. Researchers found that in nearly one out of every three prompts, ChatGPT performs an online search to gather extra information before answering.

Tracking ChatGPT’s Search Behavior

To measure how often this happens, analysts at Nectiv examined more than 8,500 user prompts across nine major industries, including travel, fashion, software, and local services. They used an internal tool that detects when ChatGPT connects to external sources to look up facts. Each time the model reached beyond its own knowledge, the system recorded a “search instance.”



Across all of those prompts, 31 percent led to at least one web lookup. This means ChatGPT relies on live data far more than many users realize.

How Many Searches Per Question?

The same study revealed that ChatGPT rarely stops at a single search. On average, it carried out just over two separate queries... 2.17 per prompt, to be precise. In a few cases, the number went as high as four. These repeated lookups, called fan-out searches, help the model verify or expand its answers when information is incomplete.

For example, if a person asks for the best phone brands in 2025, ChatGPT may check multiple product pages, comparison lists, or recent reviews before forming its response.

Searches That Are Longer and More Specific

ChatGPT’s search phrases are noticeably longer than a normal Google search. The study found the average query length was 5.48 words, roughly 60 percent longer than the U.S. average of 3.4 words. In total, about 77 percent of its searches contained five words or more.

That suggests ChatGPT forms detailed, focused questions, closer to how a skilled internet user searches rather than a casual one. Typical examples include “top car rental Turkey reviews” or “best ecommerce software 2025 features.”
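As a rough illustration of where numbers like these come from, the sketch below computes a search rate, an average fan-out, and a mean query length from a toy log of prompts and the lookups recorded for each one. The field names and sample data are invented for the example, not Nectiv's actual schema.

prompts = [
    {"text": "best car rental in Turkey", "searches": ["top car rental Turkey reviews",
                                                       "car rental Turkey prices 2025"]},
    {"text": "explain recursion simply", "searches": []},
    {"text": "best ecommerce software", "searches": ["best ecommerce software 2025 features"]},
]

with_search = [p for p in prompts if p["searches"]]
search_rate = len(with_search) / len(prompts)                               # share of prompts with a lookup
fan_out = sum(len(p["searches"]) for p in with_search) / len(with_search)   # queries per searching prompt
queries = [q for p in prompts for q in p["searches"]]
avg_words = sum(len(q.split()) for q in queries) / len(queries)             # mean query length in words

print(f"search rate: {search_rate:.0%}, fan-out: {fan_out:.2f}, avg words: {avg_words:.2f}")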

Which Topics Trigger the Most Searches

Not all subjects push ChatGPT to search equally. Local information caused the most lookups: about 59 percent of local prompts triggered a web search. Commerce-related requests came next at 41 percent. At the other end, only 18 percent of credit-card questions and 19 percent of fashion topics led to searches.

This pattern shows ChatGPT depends most on real-time data for areas that change frequently, such as nearby businesses or current products.

How Deep It Digs in Each Field

Even though local searches were most frequent, they tended to involve fewer follow-up queries... about 1.67 on average. By contrast, questions about jobs, careers, and software often led to three or more searches per prompt. Those fields usually require complex comparisons and up-to-date details, which explains the higher activity.

What ChatGPT Looks for Online

When analyzing the words inside those thousands of search phrases, researchers saw recurring themes. Many contained terms such as “reviews,” “comparison,” “features,” and the current year “2025.” These keywords show that ChatGPT favors fresh, review-style, and product-oriented content when seeking supporting information.

In simple terms, it behaves like a digital researcher checking multiple recent sources before forming an answer.

Why These Findings Matter

Understanding when ChatGPT searches helps explain how it builds its answers. The model does not rely only on its stored knowledge; instead, it supplements it by scanning the web for updates. For website owners and marketers, that means optimizing content for detailed, review-based, and current-year searches could make it more visible to AI systems.

For ordinary users, the results show ChatGPT is less of a static knowledge bank and more of an active information finder that continually checks the internet to stay relevant.

The Bigger Picture

In the end, the numbers give a clear picture. ChatGPT is not only generating text but also performing its own background research. Roughly one-third of the time, it steps out to the internet, sends out a couple of searches, and pulls in longer, more specific results.

That makes it less like a traditional chatbot and more like a hybrid search assistant, one that mixes stored intelligence with real-time exploration to produce its answers.

Notes: This post was edited/created using GenAI tools.

Read next:

• Why Picture-Based Phishing Is Becoming the Internet’s Latest Security Blind Spot

• AI Chatbot Traffic Data Shows Market Shift: Gemini and Incumbents Gain as ChatGPT’s Share Slips


by Asim BN via Digital Information World

The Internet’s “Most Human Place” Faces Its Most Inhuman Challenge Yet

Reddit, long celebrated as the internet’s vast collective brain, is confronting a quiet identity crisis. The arrival of AI-generated posts has blurred the line between human conversation and machine-made mimicry, forcing the platform’s volunteer moderators to redefine what authenticity means in a digital commons.

A recent study from Cornell University and Northeastern University reveals how moderators across some of Reddit’s most active communities are struggling to contain a new kind of disruption. Drawing on in-depth interviews with fifteen moderators overseeing more than a hundred subreddits, researchers found that most view generative AI as a “triple threat”... one that erodes content quality, undermines social trust, and complicates governance.

The Strain of Invisible Labor

Reddit’s decentralized system depends on tens of thousands of unpaid moderators who keep discussions civil, remove misinformation, and enforce community rules. Those tasks were already challenging before generative AI began flooding the internet with polished but hollow text. Now, moderators say they’re facing a subtler kind of spam: plausible, eloquent, and often wrong.

Travis Lloyd, a doctoral researcher at Cornell and lead author of the study, said moderators are confronting a paradox: AI content looks real enough to pass as human but empty enough to distort the culture that holds these communities together. Many moderators admitted that identifying AI posts takes hours of manual review, while the tools meant to detect them often fail or flag innocent users.

One moderator from r/explainlikeimfive called AI content “the most threatening concern,” not because it’s frequent, but because it quietly changes the rhythm of human exchange. Others echoed that sentiment, describing AI posts as verbose yet soulless — a flood of text that drowns genuine conversation.

Quality, Connection, and Control

The study identified three intertwined anxieties. The first is quality: moderators repeatedly described AI posts as generic, inaccurate, or off-topic. Communities that prize expertise, such as r/AskHistorians, see these posts as a risk to credibility. “Truth-looking nonsense,” as one moderator described it, can spread quickly when wrapped in confident prose.


The second anxiety lies in social dynamics. Many moderators worry that AI-generated dialogue cheapens what makes Reddit distinct: its sense of human presence. Communities built on personal exchange, like r/changemyview or creative spaces such as r/WritingPrompts, fear that automation erodes the empathy and spontaneity that attract members in the first place. As one moderator put it, “How can we change your view when it isn’t even yours?”

The third challenge is governance. Moderators have long battled spam and harassment, but AI has supercharged these old problems. Some described “bot attacks” that used large language models to generate persuasive propaganda or to inflate fake popularity through karma-farming. Others pointed to subtle forms of trolling or covert marketing disguised as casual conversation. Detecting these incursions often requires judgment calls that blur the line between moderation and detective work.

The Arms Race of Detection

Without reliable detection tools, moderators rely on instinct — looking for repetitive phrasing, stylistic oddities, or abrupt changes in a user’s tone. These cues work for now, but most acknowledge they’re temporary. “There has to be a lot that we’re missing,” one moderator admitted, capturing the unease that gives the paper its title.

Even automated filters like Reddit’s “AutoModerator” help only so much. They can spot patterns, but not nuance. False positives risk alienating genuine users, especially non-native English speakers whose writing may differ from the community norm. The researchers warn that such biases could deepen existing inequalities online, echoing older studies showing that moderation often falls hardest on marginalized groups.
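The sketch below is a hypothetical illustration of that limitation, not Reddit's actual AutoModerator configuration: a crude pattern filter that flags phrasing often associated with machine-generated text. It catches boilerplate, but a genuine writer who happens to use the same stock phrases gets swept up too, which is exactly the false-positive problem moderators describe.

import re

# Invented example phrases; any real rule set would be maintained by the community.
SUSPECT_PATTERNS = [
    r"\bas an ai language model\b",
    r"\bi hope this helps\b",
    r"\bin conclusion, it is important to note\b",
]
SUSPECT_RE = re.compile("|".join(SUSPECT_PATTERNS), re.IGNORECASE)

def flag_for_review(comment: str) -> bool:
    """True if the comment matches a suspect phrase. A match is only a hint for
    human review, never proof of AI authorship."""
    return bool(SUSPECT_RE.search(comment))

print(flag_for_review("As an AI language model, I cannot verify that."))     # True
print(flag_for_review("In conclusion, it is important to note the risks."))  # True, possibly a false positive
print(flag_for_review("Here's my honest take after ten years of modding."))  # False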

Human-Only Spaces in a Machine Age

Not every moderator sees AI as the enemy. A few expressed cautious optimism about its potential as a translation tool or writing aid, especially for users whose ideas outpace their English skills. Yet even those sympathetic voices agreed that intent matters... AI is acceptable when used transparently, not when it impersonates a person.

Still, most communities have opted to draw hard lines. Some, like r/WritingPrompts, ban AI outright to preserve the act of human creativity itself. Others, such as r/AskHistorians, tolerate limited use when it supports genuine expertise. In both cases, the rulemaking process has become a kind of civic negotiation, with moderators and users redefining what counts as authentic participation.

A Platform at a Crossroads

The broader question for Reddit is whether the site can remain, in its own words, “the most human place on the internet.” The platform’s leadership has echoed moderators’ concerns, acknowledging that AI threatens to erode the trust that gives Reddit its value. Yet solutions remain elusive. Detection tools are unreliable, volunteer labor is overstretched, and the platform’s business interests may not always align with its community ethos.

The researchers suggest that the healthiest path forward may lie in autonomy: letting each community decide how much AI it will tolerate, and giving moderators better design support to enforce those norms. Interface cues, such as visible “no-AI” labels or rule prompts before posting, could help members stay aligned without heavy-handed policing.

The Search for Effort and Authenticity

What stands out most in the study is not despair but persistence. Even as they face an impossible workload, moderators express a deep belief in human connection. They talk about “effortful communication”... the idea that sincerity online often shows through the time and care a person invests in writing something themselves. That effort, they argue, is what separates Reddit from the algorithmic noise elsewhere.

The irony is that AI may be forcing communities to rediscover precisely what makes them human. As Lloyd and his co-authors conclude, people still crave interaction with other people, and that craving drives them to build “human-only spaces” even when the internet itself is filling with machines.

Reddit’s future may depend on how well it protects that fragile, very human instinct... to tell when a voice on the other side of the screen truly belongs to someone real.

Notes: This post was edited/created using GenAI tools.

Image: DIW-Aigen.

Read next:

• How Everyday Typing Patterns Could Help Track Brain Health

• Can Blockchain Blend Into Daily Digital Life Just Like AI?

• Wikipedia Faces Drop in Human Traffic as AI and Social Video Change Search Habits


by Asim BN via Digital Information World