Sunday, August 3, 2025

In 2025, Americans No Longer Search the Same Way. Trust, Age, and Task Now Shape Where They Go

People in the U.S. still use search engines to look things up, but the digital landscape has split.

A survey conducted by Claneo in early 2025 shows that search engines remain the default tool for general knowledge, with 72 percent saying they use them several times a week. But other platforms are creeping in. Around 25 percent of people now say they regularly use AI chatbots, while 15 percent rely on AI-powered search engines. These shares aren’t dominant yet, but they’re growing steadily.

Among younger users, platform habits are different. Those aged 16 to 27 don’t just search less with traditional engines, they also favor tools that look and feel different. YouTube is used frequently by 68 percent of this group. Instagram follows close behind at 65 percent. TikTok isn’t far off either, drawing in 58 percent of these younger users. At the same time, 34 percent of them already use AI chatbots for information searches. That’s a stronger uptake than seen in any older age group.

Americans still prefer search engines, but younger users shift to AI, YouTube, TikTok, and Instagram.

When it comes to trust, the picture is split. AI search tools are gaining credibility, with 79 percent of respondents saying they trust AI-based search engines. A slightly smaller share, 77 percent, say they feel the same about AI chatbots. These confidence levels still trail behind older, more established platforms. Amazon scores highest on trust, reaching 87 percent. Search engines and YouTube follow, both at 86 percent. Walmart and Pinterest sit just behind at 85 percent.

Other platforms rank lower. Trust in Asian e-commerce services remains fragile. Thirty-one percent of Americans surveyed describe them as untrustworthy. Short-message platforms are next, with 28 percent sharing doubts. Facebook also draws concern from 27 percent of respondents. TikTok lands at 25 percent. Distrust of AI tools is lower, but still present: 23 percent rate AI chatbots as untrustworthy, while 21 percent say the same about AI search engines.

What people search for also affects where they go. For general knowledge, search engines still dominate. About 64 percent of Americans choose them first when they want broad information. YouTube is next, but far lower, with 22 percent. AI chatbots pull 17 percent, and Wikipedia accounts for 14 percent. Others like Reddit, TikTok, and Facebook are used less often for this purpose.

When the search is easy, say, a quick fact check, people still turn to engines first. Forty-seven percent use them for simple information tasks. Twenty-eight percent use AI chatbots. AI-powered engines make up 23 percent. Reddit handles 21 percent of these queries. These preferences change when the question gets more complicated.

For harder topics, search engine use falls to 36 percent. AI chatbots hold steady at 27 percent. AI search engines reach 21 percent, and Reddit remains close behind with 20 percent. These shifts show that users often branch out when questions get deeper or the information becomes harder to sort through.

Product search is a different story. People don't use traditional engines as much when they shop. Just 44 percent search for products through general engines. Amazon isn’t far behind at 41 percent. Walmart draws in 32 percent. The pattern changes again depending on price.

If users are shopping for affordable goods, Walmart leads with 55 percent. Amazon follows at 51 percent. Asian shopping platforms pull in 45 percent. But for expensive items, the field tightens. Amazon and eBay both land at 22 percent. Price comparison websites trail at 16 percent.

Entertainment still dominates social media. Many people say they go to YouTube, Instagram, or TikTok mainly for trends and videos. YouTube ranks highest at 59 percent. Instagram draws 54 percent. Facebook and TikTok are used this way by 53 and 51 percent of users, respectively. Still, these platforms are slowly entering the search space too, especially for brands and products.

When people decide which platform to use, trust and clarity matter more than speed. Forty-nine percent of users say trustworthiness is the most important trait in online search results. Clear, understandable content comes next at 38 percent. Low prices matter to 35 percent. Ratings and reviews sit close behind at 34 percent. Layout and presentation also play a role, but they rank lower.

According to the survey, speed ranks even lower. Many users say they’ll wait longer for solid answers, especially if the information comes from a source they trust. This points to a shift where credibility and content quality outweigh speed or design.

People don’t only use AI for search. In fact, AI tools are being used for a wide mix of tasks. Nineteen percent of users rely on them to process complex information. Another 19 percent use them for research. Creative help is close behind at 17 percent. Finding simple facts accounts for 16 percent of usage. Writing and text generation stands at 15 percent.

That said, not everyone is on board. In the U.S., 39 percent of survey participants said they don’t use AI for any of these purposes. In Germany, the number is lower, around 29 percent.

Search is no longer one platform serving every need. In 2025, users break their habits into categories. They don’t search in the same place for a how-to guide, a winter coat, and a technical article. Younger audiences are driving many of these changes, but the trend isn’t limited to them. People across all age groups are choosing platforms that fit the job.

Some platforms are rising because they’re easier to trust. Others gain ground because they handle complexity well. The decision of where to search now rests on a simple question: what’s the task?

Read next: Most Americans Still Use Social Media, But 41% Are Pulling Back in 2025


by Irfan Ahmad via Digital Information World

Saturday, August 2, 2025

Most Americans Still Use Social Media, But 41% Are Pulling Back in 2025

Social media is still part of everyday life for nearly everyone in the United States. In a recent study conducted by PartnerCentric, 99 percent of respondents said they use at least one social network. But that doesn’t mean all is steady. Many are shifting how they interact with these apps. In 2025, more than four in ten Americans say they’re cutting back, and 16 percent have already quit one platform entirely. Some of those who left just moved to another app.

Facebook Keeps Its Hold as Most Used and Most Profitable

Facebook is still the most dominant social media platform in the country. Eighty-six percent of Americans use it. Among Boomers and Millennials, the usage rate is almost identical, both at 87 percent. Gen Z leads slightly at 90 percent. Gen X follows at 82 percent.

People spend time there. A lot of it. On average, users are spending two hours a day on Facebook. For Gen Z and Millennials, that goes up to two hours and twelve minutes. Boomers use it less but still average just over ninety minutes a day.

It’s not just for scrolling. People are shopping there too. The average user spent $133 on Facebook Marketplace last month. That's the highest among all platforms.

Most users don’t post much. Only 7 percent said they share content regularly. Others interact sometimes, but 43 percent mostly scroll. They view, but don’t engage.

TikTok Grabs the Most Time per Day

TikTok has grown, but its reach still doesn’t match Facebook. Fifty-six percent of Americans use it. Among Gen Z, that number jumps to 79 percent. Millennials come in at 58 percent. Gen X trails at 45 percent. Only a third of Boomers use TikTok.

Even with fewer users, TikTok eats up more time. Gen Z users spend the most, averaging three hours daily. Millennials and Boomers both hover around two and a half hours. Gen X lands at two hours and twelve minutes.

TikTok Shop is also gaining traction. People spent an average of $40 on the platform in the past month. Millennials spent the most at $50. Boomers barely used it, with an average of $1.

What draws users in? About one-third said they use it to follow trends. For Gen Z, nearly 1 in 5 said they mainly use TikTok for memes and humor.

Instagram Leads in Reach but Not in Time

Instagram remains one of the most widely used platforms. Seventy-nine percent of Americans use it. Among Gen Z, the number is even higher, 89 percent. Millennials follow at 81 percent. Gen X comes in at 74 percent, and Boomers at 57 percent.

Average time spent is lower than TikTok or Facebook. Overall, people spend about 1.9 hours per day on Instagram. Gen Z again leads at 2 hours and 18 minutes. Boomers spend around 96 minutes.

Users go to Instagram for different reasons. One-third use it to keep up with people they know. Others use it to follow trends or browse memes. Among Gen Z, 26 percent go there for trends. About 11 percent use it just for humor content.

X Still Serves as a Key News Platform

X (formerly Twitter) hasn’t disappeared. Sixty-two percent of Americans use it. Among Gen Z, 73 percent are active. Millennials follow at 63 percent. Usage among Gen X and Boomers is lower, at 56 and 51 percent, respectively.

Time spent on X averages 1 hour and 12 minutes daily. Gen Z stays longest again, with 1 hour and 36 minutes.

What keeps people on X? News. Thirty-seven percent said that’s their main reason for logging in. It’s especially popular for live updates and fast headlines.

Younger Americans Are Turning to Social Media Instead of Google

Search habits are changing. More people are using social apps to look things up. That’s especially true among younger adults.

For product reviews, 60 percent of Americans still start with Google. But Gen Z and Millennials are different. Over 10 percent prefer Reddit instead. Half of Gen Z doesn’t choose Google first for reviews at all.

Boomers have their own habits. Ten percent of them check YouTube before using Google to review a product. When searching for something specific, 48 percent of Americans still go to Google first. But one in three Boomers skip Google and go straight to Amazon or another shopping site.

TikTok has become a go-to search tool for Gen Z. One in ten of them uses TikTok to look up products. More surprisingly, 28 percent now turn to TikTok instead of Google for recipes.

Reddit, Discord, and Substack Are Getting Bigger

Community-focused platforms are also growing. Reddit is now used by two-thirds of Americans. Many join to talk with others who share the same interests. Others go there to read honest reviews. Among Boomers, 17 percent use Reddit first when checking out products, more than Google.

Discord is another growing app. Forty percent of Americans use it. Half of Gen Z is on Discord. Around 18 percent of users pay for access to at least one private server.

Substack is smaller but growing. Just over 20 percent of Americans use it. Among them, 26 percent pay for one or more subscriptions.

Pinterest Sees a Comeback Among Gen Z

Pinterest is attracting younger users again. Fifty-four percent of Gen Z adults use it. Across all ages, usage is 43 percent.

Gen Z uses Pinterest as a search tool too. Some even use it instead of Google to find recipes. Millennials lean more toward using Pinterest for tracking style or culture trends. One in five Millennials said that was their main use.

Offline Outreach Still Matters

Not everyone is online all the time. As some pull back from social apps, other methods of reaching them become more important.

Email is still effective. So are newsletters and printed ads, like flyers or posters. These offline channels remain useful, especially for people who’ve stepped away from social platforms altogether.

People Are Using Tools to Limit Social Time

Screen limit apps are part of the new pattern. One in four Americans uses a screentime tool to control how long they spend on social apps. These tools lock or block apps for hours at a time.

Some people have taken a bigger step. Sixteen percent quit at least one app this year. For many, that meant switching to a different one, rather than quitting entirely.

Only 1 percent of people in the survey said they’ve left all social platforms. None of them said they felt like they were missing anything.

Survey Overview

The data in this report comes from a nationwide survey conducted in May 2025. A total of 994 Americans took part. Half identified as male, 49 percent as female, and 1 percent as nonbinary. Ages ranged from 18 to 75, with an average age of 41. The study focused on how people use social media, how much time they spend on it, and how habits are shifting in 2025.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Agentic AI Coding Tools Gain Momentum in Corporate Engineering Workflows
by Irfan Ahmad via Digital Information World

Agentic AI Coding Tools Gain Momentum in Corporate Engineering Workflows

The use of autonomous AI tools in software engineering is accelerating fast, with new data suggesting that companies are no longer limiting artificial intelligence to passive assistance. Instead, more engineering teams are deploying what are known as agentic AI systems, which are capable of carrying out tasks on their own, without human confirmation at every step.

Between December 2024 and May 2025, a sample of over 400 companies showed a major shift in how these tools are used. The data, drawn from Jellyfish’s engineering management platform, showed that agentic AI tools were adopted by just over half of organizations at the start of the year. By May, that number had grown sharply, with 82% of firms using these tools in day-to-day engineering work.

These tools go beyond offering suggestions or generating small code snippets. Instead, they take direct action in the development workflow, such as writing code, opening code reviews, submitting commits, and leaving review feedback without prompting. This movement marks a key transition from interactive systems that rely on constant human oversight to more autonomous systems that operate with minimal supervision.

Among the many entry points for AI adoption, automated code reviews have emerged as the most common. That’s partly because they present fewer risks and allow teams to experiment without committing to full workflow automation. In this area, the numbers tell a clear story. Between January and May, the share of companies using AI-powered code reviews grew from 39% to 76%. For some early adopters, these tools now handle as much as 80% of all code reviews.

This shift has been accompanied by small but measurable efficiency gains. Average cycle times for reviews completed by AI were modestly faster in the second quarter of 2025, suggesting that these tools may already be contributing to higher throughput in some teams. Overall, usage of agentic code review tools rose by 11% among early adopters during the same period.

Several tools have become favorites among engineering teams, especially for reviewing code. GitHub Copilot Reviewer, Cursor BugBot, and CodeRabbit remain widely used, while platforms like Graphite and Greptile are becoming more popular. Bito.ai has also emerged as a new player in this space.

Still, while AI has firmly established itself in the review phase of software development, a smaller but growing group of companies is now exploring fully agentic coding workflows. These involve agents not only checking code, but also writing and submitting it into production pipelines. Although the overall share of companies testing these workflows remains low, it has increased significantly. Back in January, fewer than 2% of companies had any such pilot in place. By May, nearly 8% had started to test autonomous code writing and submission processes.

The expansion of this category is being helped along by tools like Claude Code, Devin, and Codex, which some teams are already using in internal workflows. Adoption of this kind of fully autonomous tooling rose 4.5 times in just five months, reflecting a growing readiness among some firms to delegate entire programming tasks to AI systems.

This steady move toward greater autonomy shows how quickly engineering organizations are adapting their development processes to integrate more capable AI. With most teams now past the experimentation phase, and more pushing into deeper automation, the shift toward AI-native workflows appears to be underway.

Read next: OpenAI’s Cheaper ChatGPT Go Tier, Pinned Chats, and Themes Signal Broader Rollout Before GPT-5
by Web Desk via Digital Information World

OpenAI’s Cheaper ChatGPT Go Tier, Pinned Chats, and Themes Signal Broader Rollout Before GPT-5

OpenAI is developing a new subscription option called ChatGPT Go, offering a cheaper alternative to its existing paid tiers, as spotted by Tibor Blaho. At the same time, it has begun testing design updates across its mobile and web platforms, including interface changes that let users pin chats, mark favorites, and personalise color themes. These updates, still unannounced, reflect a series of silent changes rolling out ahead of a possible GPT-5 launch.

The new Go plan is expected to cost less than the current $20 Plus tier. Internally, pricing discussions place it somewhere between $10 and $15. This tier would likely give users consistent access to modern models like o3, but without premium functions like agents, advanced customisation, or developer features included in higher subscriptions.


OpenAI’s current structure includes a $200 Pro tier with broader access limits, early tool previews, and advanced support. The Go option would sit below Plus and appeal to users with more casual or infrequent AI needs, including those who want stable model usage but don’t need enterprise-grade tools.

Alongside pricing changes, OpenAI has pushed interface experiments to selected accounts on both web and mobile. The most prominent test involves a redesigned sidebar that introduces the ability to pin chats, letting users keep key conversations visible regardless of recency. A new “Favorites” section is also being tested, allowing quicker access to saved threads.


These layout tools, though still hidden for many, signal a shift toward more persistent workspace control within ChatGPT. There is no official toggle for enabling the changes, suggesting they are being remotely activated on a per-user basis while OpenAI refines the rollout.

Customization features have also started appearing in ChatGPT’s Android beta app. Version 1.2025.210 includes an expanded color system for chat themes. Basic options, such as green, yellow, pink, blue, and orange, are available to all users. Two additional themes come with account-based restrictions. Purple is available to Plus, Pro, Team, and Enterprise users. The black theme is currently limited to Pro accounts only. These distinctions also appear in the ChatGPT web interface through the "Chat Theme" experiment, though availability remains inconsistent.


All three changes (pricing, personalization, and persistent sidebar control) are emerging within a short window and appear to be part of a larger adjustment cycle. OpenAI is expected to unveil GPT-5 shortly, and these incremental changes suggest preparations are underway to align the platform’s interface and plan structure with upcoming model capabilities.

Whether ChatGPT Go launches broadly or stays in limited testing, the current activity reflects OpenAI’s ongoing efforts to scale the product across more usage levels while reshaping how users navigate, save, and organise their conversations.

Read next: Tech Debt and Brand Trust: Travis Schreiber on Why Old Systems Erode Your Reputation
by Irfan Ahmad via Digital Information World

Friday, August 1, 2025

Tech Debt and Brand Trust: Travis Schreiber on Why Old Systems Erode Your Reputation

Your tech stack isn’t just about productivity. It’s tied directly to how customers see your business. Slow systems, broken workflows, and outdated tools frustrate users and quietly erode trust. Over time, these problems add up, and they show up in reviews, complaints, and even security risks.

Travis Schreiber, Director of Operations at Erase, has seen this play out repeatedly. He’s spent years helping companies connect their backend processes with their reputation strategies. “Most of the time, people don’t think about how their tech impacts perception until it’s too late,” he says. “You get a few bad reviews because your customer portal is clunky or an integration fails, and suddenly it’s a pattern that anyone Googling you can see.”

Here’s how old tech stacks chip away at trust, why it matters, and what businesses can do to fix it.

Tech Debt Isn’t Just Internal

When most teams talk about tech debt, they treat it as an internal issue, an IT headache or a project they’ll get to later. But customers notice it long before leadership does.

A 2024 Salesforce study found that 88% of customers say experience matters as much as the product itself. Laggy checkout flows, outdated design, or broken automations don’t just annoy people, they push them toward competitors.

Schreiber recalls working with a mid-sized car insurance company that ran on a legacy billing system. “It was fine until it wasn’t,” he says. “When their invoices started going out late, support tickets piled up, and people started posting screenshots of errors on social media. It wasn’t just about fixing the billing tool anymore. It became a reputation problem.”

How Security Risks Amplify

Old tools aren’t just clunky, they’re vulnerable. Legacy systems often miss modern security patches or require custom fixes that get deprioritized.

“Outdated CRMs are one of the biggest risks we see,” Schreiber notes. “We had a massive healthcare client whose internal communication platform accidentally indexed private internal chat logs on Google.”

The reputational damage from a single breach can outlast the technical fix. According to IBM’s 2023 Cost of a Data Breach Report, 51% of consumers say they won’t do business with a company after a breach.

Customers rarely care if the root cause was outdated middleware or an API misconfiguration. They care that their data wasn’t safe, and they’ll say so publicly.

The Link Between Tech and Perception

Even simple annoyances tie back to brand trust. Poor mobile optimization, email errors from bad automations, or slow response times due to clunky ticketing systems all create a paper trail online.

“Negative reviews almost never say, ‘Your backend API failed,’” Schreiber explains. “They say, ‘I couldn’t log in,’ or, ‘They didn’t get back to me for a week.’ The tech problem turns into a trust problem instantly.”

Over time, these touchpoints stack. You don’t just lose a sale. You lose credibility. Search results start surfacing complaints. Prospects see screenshots in forums. AI summaries and reputation tools pick up that chatter.

Fixing Tech Debt Before It Hits Reviews

The good news: this isn’t just an IT problem. It’s operational. It’s fixable if you treat tech debt as part of brand protection, not a separate track.

1. Audit Your Stack

Review every tool and integration that touches customers. “Look at it like a customer would,” Schreiber says. “Sign up for your own service. Click every email. Use your own support system. If it feels slow or clunky, they feel it too.”

2. Prioritize Patches Over New Features

Don’t ignore updates for the tools you already use. Companies obsess over adding flashy features while their login process still takes 45 seconds to load. Fix the basics before building anything new.

3. Secure Automations

Automated workflows save time, but unsecured or misconfigured ones expose data. Audit permissions and remove any stale connections.

4. Embed Reputation Monitoring

Set up alerts for complaints about broken systems. Tools like Brand24 or even simple Google Alerts help you catch issues early. If your billing portal is glitching and three people mention it on Reddit, you want to know before it’s on page one of your search results.
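At its simplest, this kind of monitoring is keyword matching over incoming mentions. The sketch below assumes a hypothetical list of mentions (in practice fed by a Google Alerts RSS feed or a tool like Brand24) and a deliberately naive, illustrative keyword list:

```python
# Illustrative complaint vocabulary; a real list would be tuned to your product.
COMPLAINT_TERMS = {"down", "broken", "error", "can't log in", "glitch", "not working"}

def flag_complaints(mentions, terms=COMPLAINT_TERMS):
    """Return mentions whose text contains any complaint keyword.
    Naive case-insensitive substring matching, so expect some false positives."""
    flagged = []
    for m in mentions:
        text = m["text"].lower()
        if any(term in text for term in terms):
            flagged.append(m)
    return flagged

# Hypothetical mentions pulled from a social listening feed.
mentions = [
    {"source": "reddit", "text": "Their billing portal is broken again"},
    {"source": "x",      "text": "Loving the new dashboard design"},
    {"source": "reddit", "text": "Can't log in for the third day running"},
]

for m in flag_complaints(mentions):
    print(f"[{m['source']}] {m['text']}")  # prints the two complaint mentions
```

Even a script this crude, run against a mentions feed every hour, surfaces a glitching billing portal before the complaints pile up in search results.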

Bake Reputation Into Operations

Tech debt isn’t just about code. It’s about how your operational processes either protect or damage your reputation.

Automate Review Monitoring

If a system failure triggers a wave of bad reviews, you should know immediately. Integrate review tracking into your workflows. Assign someone ownership of responding quickly with context and resolution.

Standardize Communication Scripts

When tech fails, the response matters as much as the fix. Build scripts for customer-facing teams that explain outages or errors clearly. “The worst thing you can do is go silent,” Schreiber says. “Even a quick post saying, ‘We know, we’re fixing it,’ buys you goodwill.”

Document and Train

Tech fixes don’t stick if your team doesn’t know how to use them. Build simple documentation, and train staff on every major system.

Why Reputation Starts With Infrastructure

Reputation management is often seen as PR. In reality, it’s operational. The tools you use and how you maintain them directly shape how customers talk about you.

“You can spend six figures on brand campaigns, but if your login page times out, none of that matters,” Schreiber says.

Modern search amplifies this. AI summaries and review aggregators don’t care how strong your marketing is, they scrape whatever complaints or praise are most visible. If old tech is creating new problems, that’s what will surface first.

The Bottom Line

Your tech stack isn’t invisible. Customers feel it every time they interact with your business. When outdated systems or ignored fixes get in the way, they don’t just hurt efficiency. They quietly chip away at trust.

By treating tech maintenance and process design as part of reputation management, businesses can stay ahead. Audit systems, fix what customers feel first, and embed safeguards that keep problems from leaking into public view.

Because once it’s out there, it’s not just an IT ticket, it’s a Google result. And that’s a much harder fix.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 

• AI Models Write Code That Works, But Often Miss Security Basics

• Cybercrime Grows More Aggressive in 2025 as Identity Becomes a Central Target
by Asim BN via Digital Information World

Cybercrime Grows More Aggressive in 2025 as Identity Becomes a Central Target

A spike in infostealers and ransomware reveals how stolen credentials are now central to large-scale attacks.

The first half of 2025 has brought major changes to how cyberattacks are launched and carried out. Threat groups have begun relying more on tools that steal personal data from browsers and devices. This shift has helped them carry out more damaging attacks against companies and individuals around the world.

Flashpoint's latest analysis shows that identity-based intrusions have become the fastest-growing threat this year. The report tracks a steep rise in the use of infostealers, malware designed to extract saved passwords, login cookies, and payment details from everyday devices. These tools now serve as a starting point for more serious threats like ransomware and large data breaches.

Over six months, the number of stolen credentials has jumped by 800 percent. Analysts say this has allowed attackers to move quickly between targets using stolen access, rather than trying to break through with brute-force methods.

How infostealers shape larger breaches

Once malware gains access to a browser or device, it often retrieves saved account data. This can include email logins, work credentials, and session tokens. If attackers can get hold of even a single active session, they may be able to bypass multi-factor security or access internal systems without raising alarms.

This kind of access lets attackers explore deeper layers of a network. In many recent cases, a single compromised device has led to a full-scale breach of a company’s data.
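The reason a stolen active session sidesteps multi-factor checks is that MFA is verified at login, before the session token is issued; on later requests the server typically validates only the token itself. The minimal sketch below (illustrative secret, user, and TTL values, not any real system) shows that flow, and why a short expiry window limits how long a stolen token stays useful:

```python
import hmac, hashlib

SECRET = b"server-side-secret"  # illustrative only; never hardcode real secrets

def issue_token(user, now):
    """Signed session token issued AFTER login and MFA succeed.
    Format: user|issued_at|signature."""
    payload = f"{user}|{int(now)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate(token, now, max_age=900):
    """Per-request check: only the token is verified, no MFA re-prompt.
    A short max_age (here 15 minutes) caps a stolen token's lifetime."""
    try:
        user, issued, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{user}|{issued}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    if now - int(issued) > max_age:
        return None  # expired: even a stolen token goes dead quickly
    return user

tok = issue_token("alice", now=1_000_000)
print(validate(tok, now=1_000_500))  # alice -- anyone holding the token is "alice"
print(validate(tok, now=1_002_000))  # None -- expired
```

The middle print is the whole problem in one line: whoever presents the token is treated as the user, which is why infostealer logs full of session cookies are so valuable.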

Flashpoint identifies several malware types leading the surge. Lumma and RedLine remain the most active, although other families like StealC and Acreed are appearing more often on cybercrime forums. The tools are often sold at low cost and used repeatedly across different targets.

Ransomware spreads through the same infection points

The same stolen credentials often help ransomware groups break into corporate systems. This type of malware locks files, demands payment, and can also leak sensitive data. Since January, the number of ransomware incidents has risen 179 percent.

Many of these attacks trace back to earlier infostealer infections. The initial access gained through stolen logins often opens a path to internal systems, where attackers then install ransomware. This two-step approach has become a common pattern this year.

Security teams now face threats that combine multiple tools and stages, rather than relying on a single method. The ability to link these threats early is becoming essential.

Public vulnerabilities grow faster than defenses can keep up

Another issue making things worse is the sharp rise in known software flaws. Since February, public disclosures of vulnerabilities have grown by 246 percent. Exploit code for many of these flaws is widely available, up 179 percent in the same time.

Researchers also point to a major lag in public databases that track vulnerabilities. Tens of thousands of issues remain unanalyzed in sources like the National Vulnerability Database. This leaves security teams without critical information as they try to manage growing exposure.

The speed at which attackers take advantage of newly published exploits continues to shrink. In some cases, malware begins using a vulnerability within hours of it appearing online.

Data breaches reflect a wider failure to contain access

Data breaches have also spiked. So far in 2025, their frequency has climbed 235 percent. In 78 percent of the cases tracked, attackers got in through unauthorized access, most often by using stolen credentials.

The United States has been the most affected, with two-thirds of global breaches recorded there. Much of the stolen data includes personal information, which is often used for fraud or resold on dark web platforms. Once released, this kind of data tends to circulate for years.

Some of the biggest breaches in recent months have been linked to logs from infostealers. These logs are often posted on underground sites shortly after collection and then reused in follow-up attacks. Industries like healthcare, telecommunications, and legal services remain especially vulnerable.

Geographic spread of infostealer infections


Flashpoint’s research lists the countries where the most infostealer logs have been uploaded. India ranks first, followed by the United States, Brazil, and Indonesia. Other nations with high infection rates include Pakistan, Mexico, Egypt, the Philippines, Vietnam, and Argentina.

These countries have become prime sources of stolen credentials now circulating online. In many cases, the malware behind these logs was never detected by the original user.

Broader patterns in a shifting landscape

This year’s attacks show a move toward layered threats. A typical campaign might begin with a cheap malware infection, move into credential theft, and end in ransomware or data extortion. This structure allows attackers to cause more damage without increasing effort.

At the same time, the boundary between cybercrime and global conflict is becoming less clear. Threat actors tied to state interests or working in politically unstable regions are using similar tools and tactics. This makes it harder for defenders to separate criminal groups from state-aligned campaigns.

Security teams now face both technical and strategic challenges. Many organizations are still focused on incident response, but that approach no longer matches the speed or complexity of current threats.

A shift toward early detection, attack surface reduction, and more timely intelligence will be critical for stopping these threats before they spread.

Notes: This post was edited/created using GenAI tools. 

Read next: AI Models Write Code That Works, But Often Miss Security Basics


by Web Desk via Digital Information World

OpenAI Pulls ChatGPT Search Feature After User Chats Appear in Google

OpenAI has taken down a feature that briefly made ChatGPT conversations searchable on Google after users discovered that private discussions, some containing personal or sensitive corporate information, had become publicly visible online. The change followed a surge of attention on social media, where people showed how entire conversations, prompts and responses included, could be found with a targeted Google search. The exposed links had been generated by users through ChatGPT’s own interface, which offered the option to make a conversation public. Once a public link landed somewhere a search engine could reach, it was indexed like any other webpage.

Although the feature required an explicit opt-in, many users either misunderstood its reach or failed to realize that ticking a single checkbox would let search engines index the full content of a chat. As a result, people found examples that revealed names, job roles, locations, and even internal planning notes. In some cases, the content involved real business data, including references to client work or strategic decisions. One widely circulated example showed details about a consultant, including their name and job title, which Google’s crawler had picked up and surfaced in open web results.


The company pulled the feature within hours of the issue gaining traction online. But the incident highlighted a growing tension between collaborative AI use and the risks of publishing generated content, especially when privacy expectations are not made clear at the point of sharing. Even though the interface technically required users to go through multiple steps to make a conversation shareable, the design failed to convey the full extent of the consequences. The implications of a checkbox and a share link proved too easy to overlook, especially for users focused on sharing something helpful or interesting.

Image: @wavefnx / X

This is not the first time an AI tool has let sensitive content leak into public view. In previous cases, platforms like Google’s Bard and Meta’s chatbot tools also saw user conversations appear in search results or on public feeds. While those companies eventually responded with changes to their systems, the pattern remains familiar: AI products often launch with limited controls, and only after issues arise do the developers begin closing the gaps. What has become clear is that privacy needs to be a core part of the design process rather than an afterthought fixed in response to public backlash.

In this case, OpenAI stated that enterprise users were not affected, since those accounts include different data protections. But the broader exposure still created risks for regular users, including those working in professional settings who use ChatGPT for early-stage writing, content drafting, or even internal planning. If a team member shared a conversation without understanding the public nature of the link, their company’s ideas could have been made accessible to anyone who knew where to look.

Some experts urged users to take action by checking their ChatGPT settings and reviewing which conversations had been shared in the past. Users can visit the data controls menu, view the shared links, and delete any that remain active. Searching for a brand name using Google’s “site:chatgpt.com/share” format can also reveal whether any indexed material is still visible. In many cases, people shared content innocently, but once those links are indexed, they become part of the searchable web until removed manually or delisted by the platform.
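For readers who want to run that check themselves, the “site:chatgpt.com/share” tip above can be wrapped in a small helper that builds the corresponding Google search URL. This is a minimal sketch: the function name and the “Acme Corp” brand term are illustrative, not part of any official tooling.

```python
from urllib.parse import quote_plus

def build_exposure_check_url(term: str) -> str:
    """Build a Google search URL scoped to ChatGPT's shared-link path,
    to check whether any indexed shared chats mention `term`.

    Mirrors the manual tip of searching:  site:chatgpt.com/share "term"
    """
    query = f'site:chatgpt.com/share "{term}"'
    return "https://www.google.com/search?q=" + quote_plus(query)

# "Acme Corp" is a hypothetical brand name used for illustration.
print(build_exposure_check_url("Acme Corp"))
```

Opening the printed URL in a browser shows whether any shared conversations mentioning the term are still indexed; any that appear can then be deleted from ChatGPT’s data controls menu.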

The situation also pointed to a wider challenge for companies adopting generative AI tools in business settings. Many organizations have begun integrating AI into daily work, whether to brainstorm marketing strategies or write client-facing drafts. But they may not always realize that a single act of sharing could expose internal knowledge far beyond its intended audience. Without strict internal policies or staff training, mistakes can happen quickly and remain unnoticed until they show up in a search result.

OpenAI’s swift response likely limited the spread of these conversations, though some content had already been cached or archived by the time the feature was taken offline. What remains uncertain is how many users were affected, or how widely their shared material circulated before the links were removed. Regardless of the numbers, the case has prompted new questions about how AI tools handle public visibility, and whether existing safeguards are enough to protect users from accidental exposure.

While the original intention behind the share feature may have been to encourage collaboration or allow useful chats to be viewed by others, its rollout showed how easily privacy can be compromised when interface design does not match the complexity of real-world use. Even when technical consent is given, it may not be informed. That gap between what users intend and what systems permit has now created a reputational cost for the company, and a learning moment for anyone deploying AI at scale.

For businesses, the incident serves as a reminder that data shared with AI tools should be treated with the same care as internal documents. Conversations with chatbots may feel informal or experimental, but once shared, they can end up outside the company’s control. To avoid similar issues, enterprises should conduct audits, clarify usage policies, and establish guardrails before allowing employees to rely on AI for confidential or strategic work. The risks are not always visible at first, but when exposed, the impact can be immediate and difficult to reverse.

This episode has shown how even a small checkbox can open the door to unintended consequences. As AI tools become more powerful and widely used, both companies and users will need stronger frameworks to ensure that privacy, once granted, isn’t quietly lost along the way.

Notes: This post was edited/created using GenAI tools.

Read next: AI-Powered Apps Are Redefining Mobile Categories in 2025
by Web Desk via Digital Information World