Friday, August 1, 2025

OpenAI Pulls ChatGPT Search Feature After User Chats Appear in Google

OpenAI has taken down a feature that briefly made ChatGPT conversations searchable on Google after users began discovering that private discussions, some containing personal or sensitive corporate information, had become publicly visible online. The change followed a surge in attention on social media where people shared how entire conversations, including prompts and responses, could be found using a targeted search format. These shared links had been generated by users through ChatGPT’s own interface, where they had the option to make a conversation public. Once the public link was placed somewhere accessible to search engines, it became indexed just like any other webpage.

Although the feature required an explicit opt-in, many users either misunderstood its reach or failed to realize that clicking a simple checkbox would allow search engines to index the full content of a chat. As a result, people found examples that revealed names, job roles, locations, and even internal planning notes. In some cases, the content involved real business data, including references to client work or strategic decisions. One widely circulated example showed details about a consultant, including their name and job title, which had been picked up by Google's crawler and appeared in the open web results.


The company moved quickly to pull the feature within hours of the issue gaining traction online. But the incident highlighted a growing tension between collaborative AI use and the risks that come with publishing generated content, especially when privacy expectations are not made fully clear at the point of sharing. Even though the interface technically required users to go through multiple steps to make a conversation shareable, the design failed to convey the full extent of the consequences. A checkbox and a share link proved too easy to overlook, especially when users were focused on sharing something helpful or interesting.

Image: @wavefnx / X

This event is not the first time AI tools have allowed sensitive content to leak into public view. In previous cases, platforms like Google’s Bard and Meta’s chatbot tools also saw user conversations appear in search results or on public feeds. While those companies eventually responded with changes to their systems, the pattern remains familiar. AI products often launch with limited controls, and only after issues arise do the developers begin closing the gaps. What’s become clear is that privacy needs to be a core part of the design process rather than an afterthought fixed in response to public backlash.

In this case, OpenAI stated that enterprise users were not affected, since those accounts include different data protections. But the broader exposure still created risks for regular users, including those working in professional settings who use ChatGPT for early-stage writing, content drafting, or even internal planning. If a team member shared a conversation without understanding the public nature of the link, their company’s ideas could have been made accessible to anyone who knew where to look.

Some experts urged users to take action by checking their ChatGPT settings and reviewing which conversations had been shared in the past. Users can visit the data controls menu, view the shared links, and delete any that remain active. Searching for a brand name using Google’s “site:chatgpt.com/share” format can also reveal whether any indexed material is still visible. In many cases, people shared content innocently, but once those links are indexed, they become part of the searchable web until removed manually or delisted by the platform.
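The site-restricted search described above can be scripted. The following is a minimal sketch that only builds the Google query URL; the helper name and example brand are illustrative, and any hits would still need to be reviewed by hand:

```python
# Minimal sketch: build a Google search URL that uses the site: operator
# to look for indexed ChatGPT share links mentioning a given brand.
# The helper name and example brand are invented for illustration.
from urllib.parse import quote_plus

def indexed_chat_query(brand: str) -> str:
    # Restrict results to ChatGPT's public share path, and quote the
    # brand so Google treats it as an exact phrase.
    query = f'site:chatgpt.com/share "{brand}"'
    return "https://www.google.com/search?q=" + quote_plus(query)

print(indexed_chat_query("Acme Corp"))
```

Opening the printed URL in a browser shows whether any shared conversations mentioning the brand remain in Google's index; anything found can then be deleted from ChatGPT's shared-links panel.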

The situation also pointed to a wider challenge for companies adopting generative AI tools in business settings. Many organizations have begun integrating AI into daily work, whether to brainstorm marketing strategies or write client-facing drafts. But they may not always realize that a single act of sharing could expose internal knowledge far beyond its intended audience. Without strict internal policies or staff training, mistakes can happen quickly and remain unnoticed until they show up in a search result.

OpenAI’s swift response likely limited the spread of these conversations, though some content had already been cached or archived by the time the feature was taken offline. What remains uncertain is how many users were affected, or how widely their shared material circulated before the links were removed. Regardless of the numbers, the case has prompted new questions about how AI tools handle public visibility, and whether existing safeguards are enough to protect users from accidental exposure.

While the original intention behind the share feature may have been to encourage collaboration or allow useful chats to be viewed by others, its rollout showed how easily privacy can be compromised when interface design does not match the complexity of real-world use. Even when technical consent is given, it may not be informed. That gap between what users intend and what systems permit has now created a reputational cost for the company, and a learning moment for anyone deploying AI at scale.

For businesses, the incident serves as a reminder that data shared with AI tools should be treated with the same care as internal documents. Conversations with chatbots may feel informal or experimental, but once shared, they can end up outside the company’s control. To avoid similar issues, enterprises should conduct audits, clarify usage policies, and establish guardrails before allowing employees to rely on AI for confidential or strategic work. The risks are not always visible at first, but when exposed, the impact can be immediate and difficult to reverse.

This episode has shown how even a small checkbox can open the door to unintended consequences. As AI tools become more powerful and widely used, both companies and users will need stronger frameworks to ensure that privacy, once granted, isn’t quietly lost along the way.

Notes: This post was edited/created using GenAI tools.

Read next: AI-Powered Apps Are Redefining Mobile Categories in 2025
by Web Desk via Digital Information World

Thursday, July 31, 2025

AI-Powered Apps Are Redefining Mobile Categories in 2025

Artificial intelligence is now a regular part of mobile software. In 2025, more app developers have built AI features into their tools. Users have responded by applying those features across a wide range of tasks, many outside traditional work or learning environments.

ChatGPT Sees Broader Use Outside Work Hours

One sign of this shift is how people are using AI assistants like ChatGPT. Last year, most usage happened on weekdays, a pattern typical of tools built for work or school. But this year, ChatGPT's weekend usage has grown, and the pattern now looks more like that of general search apps, which people rely on whether they are working or not.

Lifestyle Prompts Overtake Educational Tasks


Prompt data shows a rise in lifestyle and entertainment queries. Topics such as wellness, travel, shopping, and meal planning have grown. These prompt types now take up a larger share compared to 2024. Educational and technical queries remain popular, but their share has declined. People are using AI in more personal ways.

Functional Apps Face Pressure from General AI Tools

Many category-specific apps are now competing with flexible AI platforms. Apps for budgeting, nutrition tracking, and study help are seeing some users shift to general chatbots. These chatbots can answer many kinds of questions, so people explore them for a broader range of needs.

AI Mentions Surge Across App Stores

Thousands of mobile apps launched with AI-related keywords this year. Software tools remain the most active category for these updates. Apps in wellness, employment, learning, and finance also added AI references. Developers are adapting to the demand for smarter app experiences.

Mixed Outcomes for Traditional App Subgenres

Some mobile app subgenres are growing despite the presence of AI competition. A few are even seeing better performance on user metrics. The difference seems to depend on how they use AI. Apps that add task-specific AI tools tend to hold their ground. For example, nutrition apps now include image-based calorie tracking powered by AI. This gives users quicker results than traditional logging.

Tailored Features Help Apps Stay Competitive

Generic chatbot tools can handle many tasks, but often miss the fine detail users expect from niche apps. To compete, developers are building features that solve narrow problems more efficiently. Apps that respond to this need may retain their users. Those that don’t may be replaced.

H/T: Sensor Tower.

Read next: ChatGPT and Google AI Give Different Answers to the Same Questions, Study Finds


by Irfan Ahmad via Digital Information World

YouTube Relaxes Its Rules on Swear Words in Early Video Content

YouTube has loosened its restrictions around how bad language affects video monetization, making it easier for creators to earn money even if their clips include profanity in the opening seconds. The company has updated its Advertiser Friendly Guidelines, easing one of the more contentious policies that had caused frustration among content creators in recent years.

Reversal of Previous Tightening

This change rolls back a stricter rule introduced in 2023, which had made any video that featured strong language in its first few seconds ineligible for full advertising revenue. That earlier revision had followed an even broader update in 2022, when YouTube first tightened its rules to limit the use of violence and offensive language in monetized content. The policy especially impacted gaming creators, whose streams often include in-game violence and spontaneous speech that could contain swear words.

After a wave of criticism, YouTube softened its approach slightly in 2023 by narrowing the restriction window to the first seven seconds of a video. But even that adjustment didn’t fully address creators’ concerns, as many videos were still receiving limited monetization, marked by the platform’s yellow icon that indicates reduced ad income.

What’s Changing Now

Under the latest update, videos that include profanity within the first seven seconds will no longer be automatically penalized. This means that creators can now retain full advertising revenue, even if strong language appears near the start of their content. While this adjustment makes the rules more flexible, it does not entirely lift all limits related to language.

Creators should still be aware that titles and thumbnails containing bad language will continue to trigger monetization restrictions. In addition, if profanity appears too frequently within a video, even if the early seconds are allowed, monetization may still be reduced under the platform’s guidelines.

The Role of Ad Placement

The earlier policy around bad language in a video’s opening moments had mainly stemmed from concerns about how close a brand’s advertisement appeared to offensive material. Advertisers typically prefer a buffer between their message and any strong language. But changes in advertiser tools now allow brands to fine-tune where and how their ads appear, including setting limits on content sensitivity. That flexibility has given YouTube more room to relax its own rules without risking ad relationships.

By shifting the responsibility onto advertisers to control the kinds of content they want to appear next to, YouTube can now allow creators more freedom in how they speak, without necessarily hurting its advertising model.

Limitations Still Apply

Although this update gives creators more breathing room, it’s not a free pass for excessive swearing. Videos that rely heavily on profanity, or repeatedly use strong language throughout, may still see limited monetization. And inappropriate language in text elements like video titles and thumbnails remains a red flag for YouTube’s ad systems.

So while early swearing will no longer automatically lead to reduced income, creators still need to moderate how much strong language they use if they want to fully benefit from the change.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen

Read next: Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness
by Web Desk via Digital Information World

Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness

Mark Zuckerberg has introduced new language around Meta's AI development plans that indicates a potential shift in the company’s direction. Over the last several months, Meta has invested heavily in both infrastructure and talent. At the same time, internal communications and earnings remarks have started to reflect a more cautious approach to openness, particularly as the company begins building what it calls “personal superintelligence.”

A recently published note from Zuckerberg laid out this vision. It described a future where highly capable AI systems assist individuals in their personal goals. He did not define superintelligence in technical terms or explain how such systems would be built. Instead, the memo outlined a broad idea of personalized tools that could support users across creative, social, and productivity tasks.

This approach contrasts with other companies that are developing general-purpose models aimed at automating workflows. Zuckerberg's message suggested that Meta wants to keep the focus on individual empowerment. This vision also connects to the company’s longer-term goal of moving away from reliance on mobile devices. Meta has spent years developing smart glasses and has signaled interest in making them central to future computing experiences.

Talent Shifts and Billion-Dollar Recruiting

Meta has increased its hiring of AI experts since early this year. In one move, it brought in the founder of Scale AI through a $14.8 billion investment. That deal placed Alexandr Wang as the company’s new Chief AI Officer. The company has also recruited individuals from OpenAI and Apple, including engineers who contributed to major large language models.

Recent reporting indicated that Meta extended multiyear offers worth hundreds of millions, and in one case more than $1 billion, to staff from Thinking Machines Lab. The startup was founded by a former OpenAI executive. Meta did not confirm the financial details but acknowledged the interest in expanding the team.

Alongside those moves, Meta has committed over $72 billion to AI infrastructure. That includes compute power, model training capacity, and scaling systems. These steps suggest the company is preparing to build more advanced AI models, even as it evaluates how much of that work to make public.

Open Source Remains Unclear

For years, Meta positioned open-source AI as a safer and more inclusive approach. Company leaders argued that transparency could prevent misuse and help governments understand how models work. More recently, Zuckerberg indicated that the company may not share some of its largest models in the future.

His recent statements during Meta’s second-quarter earnings call included references to safety and practicality. According to him, some of the models now being developed are so complex that releasing them would have little benefit for outside developers. In some cases, he added, sharing might give an advantage to competing firms.

These comments followed a memo that suggested Meta would remain a leader in open-source work but would be more selective about what gets released. While this is not a reversal of past policy, it shows a growing awareness inside Meta that some advanced AI models may carry risks that make full transparency harder to justify.

Usage Gains Tied to AI Integration

Meta’s recent product performance also reflects increased use of AI to drive engagement. Time spent on Facebook rose 5 percent in the second quarter. Instagram saw a 6 percent gain. Both trends were attributed to updates in recommendation systems, which now use large language models to present more relevant content.

The company also noted that video viewership grew by 20 percent over the past year. Instagram played a large role in that growth, as Meta has focused on promoting original material and improving content ranking methods. Threads, its text-based social app, has seen an increase in daily use following the integration of new AI tools.


All in all, Meta reported 3.4 billion family daily active people across its platforms in June. That figure included Facebook, Instagram, Messenger, and WhatsApp. It marked a 6 percent increase from the previous year and supported a 22 percent rise in revenue across those apps, reaching $47.1 billion in the quarter.

Broader Shifts in AI Positioning

Zuckerberg’s internal memo came just before Meta’s Q2 earnings report. The timing appeared aligned with the company’s efforts to frame its AI investments to investors. With delays affecting the launch of its larger Llama 4 model, internal reports suggested that Meta’s leadership had been reevaluating its approach. Some concerns were raised about the tradeoff between openness and competitive advantage.

There has also been tension around the slow progress of Meta’s generative AI roadmap. Executives inside the company have reportedly questioned whether its development pipeline can keep pace with external labs. These concerns may have shaped the more cautious stance reflected in the memo and earnings discussion.

At the same time, Meta seems to be preparing for a future in which its computing platforms are less dependent on Apple and Google. Smart glasses, which the company continues to develop, were described as key devices in future AI use. Zuckerberg pointed to this shift as an opportunity to move users toward platforms owned and controlled directly by Meta.

The company’s strategy remains focused on scaling up its AI capabilities, refining its product experience, and adjusting its messaging around transparency. While the long-term details remain unclear, the recent changes suggest that Meta is actively shaping its next phase of AI development around tighter control, personal devices, and internal platforms.

Notes: This post was edited/created using GenAI tools.

Read next: Most Adults Use AI Without Realizing, But True Power Remains Untapped


by Irfan Ahmad via Digital Information World

Wednesday, July 30, 2025

Most Adults Use AI Without Realizing, But True Power Remains Untapped

A new poll has found that most adults in the United States have used artificial intelligence for online searches. Younger people report using it more frequently than older age groups, and for more types of tasks.

Online Search Remains the Main Use

Among all surveyed adults, 6 in 10 said they use AI at least sometimes to find information online. That rate rises to nearly three-quarters for people under the age of 30. Searching is the most common AI-related activity, based on the eight categories included in the poll.

This may understate its true usage, since many search engines now include AI-generated summaries automatically. People may be receiving answers produced by AI without realizing it.

Work-Related Use Is Still Limited

The data also shows that AI has not become a major part of most workplaces. About 4 in 10 adults said they have used AI to assist with work tasks. A smaller share mentioned using it for email writing, creative projects, or entertainment. Fewer than one in four reported using AI for shopping.

Younger users are more likely to include AI in their work. Some use it to plan meals or generate ideas, while others rely on it to help write or code. Still, this type of usage remains less common among the general public.

Generational Differences Are Clear

Younger adults are more engaged with AI overall. Around 6 in 10 of those under 30 said they have used it to brainstorm. Only about 2 in 10 older adults said the same. Daily use for idea generation is more frequent among people in their twenties.

Older users show less interest in applying AI beyond basic information lookups. They tend to avoid using it for more personal or technical tasks.

AI Companionship Is Rare

The least common form of interaction with AI was companionship. Fewer than 2 in 10 adults overall reported using AI for that purpose. Among people under 30, the rate rises to about a quarter.

The survey results suggest that this type of usage remains outside the mainstream. Most people do not view AI as a substitute for personal interaction, although some younger users said they understand why others might explore it.

Overall Usage Remains Focused

The findings indicate that while AI tools have entered public use, they are still seen as limited-purpose systems. Most interactions involve information searches, and regular use beyond that is less frequent. Adoption has grown, but remains uneven across tasks and age groups.

The poll was conducted by the Associated Press-NORC Center for Public Affairs Research between July 10 and July 14. It included 1,437 adults drawn from a representative national sample, with a margin of error of 3.6 percentage points.


Notes: This post was edited/created using GenAI tools.

Read next: How Hidden Bluetooth and WiFi Signals Let Mobile Apps Track You Indoors
by Irfan Ahmad via Digital Information World

Walmart Tops Global 500 Again as U.S., China, and Tech Titans Dominate $41.7 Trillion List

Walmart has landed in first place again, as it has for over a decade now. The U.S. retail giant sits at the head of this year’s Fortune Global 500, the annual list ranking companies by revenue. Amazon took second. Behind them came State Grid of China, followed by Saudi Aramco, and then China National Petroleum. It’s a familiar line-up, but the weight behind these names keeps growing. Taken together, just the ten highest-ranked companies earned more than $4.7 trillion last year. Most of that came from retail, oil, healthcare, or finance, sectors that continue to stretch across borders and markets.

UnitedHealth rises, Apple slips

Apple is still inside the top ten. But not quite where it was. It dropped one spot this year, pushed aside by UnitedHealth Group, which moved from eighth to seventh. That same reshuffle appeared in the U.S.-only Fortune 500 a month earlier, so the shift didn’t come as a surprise. Apple’s fall wasn’t dramatic, but it did reflect the fact that healthcare, as an industry, isn’t slowing. CVS Health and Berkshire Hathaway also stayed strong in the upper tier, keeping U.S. firms in control of most of the top ten.

Global revenue grows slowly, but steadily

The full list of 500 companies brought in $41.7 trillion in revenue last year. That’s about 1.8 percent higher than the year before. Profits came in just under $3 trillion, the second highest total Fortune has ever recorded for its global list. Saudi Aramco, once again, took the lead on earnings. It posted $105 billion in profit alone, the fourth straight year it has held the top position in that category.

Sectors with staying power

Finance continues to dominate in size. There are 121 financial companies in the Global 500. Energy, long a staple of the list, came next with 79. Then came motor vehicles and parts, with 35 firms. The tech sector wasn’t far behind, landing 34. Healthcare had 33. These five areas together made up most of the list and drove nearly two-thirds of total revenue. So while innovation is constant, the largest corporate machines (oil, banks, manufacturers) still hold most of the weight.

China and the U.S. still lead, close together

There were 138 American companies on this year’s list. Greater China, including Hong Kong, Macau, and Taiwan, came in just behind, with 130. Nine out of the ten most profitable companies came from these two countries. That balance hasn’t changed much lately. It’s still China and the United States shaping what this list looks like. Between them, they hold the pulse of global corporate power, even as growth slows in some industries and picks up in others.

The biggest tech names earned big, even with lower ranks

Amazon stayed at number two. Apple remained in the top ten. Alphabet landed at 13, Microsoft showed up at 22. Then came Meta at 41, Nvidia at 66, and Tesla at 106. None of them topped the revenue rankings, but their profits stood out. The group brought in $2 trillion in revenue and earned $484 billion in net income. That’s more than most countries generate in GDP. They didn’t move much in rank, but their financial output still dwarfs what most businesses achieve in years. No other group, outside energy, posted returns that high.

Record number of women CEOs

There are 33 women now leading companies on the Fortune Global 500 list. That’s about 6.6 percent of the total. It’s still low, but it’s the highest count so far. Most of them are in the U.S., though China, France, the U.K., and Brazil have some as well. Some names stood out more than others: Mary Barra at GM, Jane Fraser at Citigroup, Sarah London at Centene, and Sandy Xu at JD.com. But the broader pattern is one of slow, steady increase.

Where these companies are based

Companies on the list are spread across 243 cities and 36 countries. Beijing, Tokyo, New York, Shanghai, and London are home to more of them than anywhere else. London re-entered the top five after a few years off the leaderboard. These cities, often finance hubs or government capitals, keep drawing the biggest corporations. Their infrastructure, access, and labor markets still offer scale.

New companies join, even as others fall away

Nine companies made it onto the list for the first time. Among them were QNB Group, ICICI Bank, and Lithia Motors. They entered from different industries and countries, but their arrival signals movement in the sectors they belong to, especially banking and auto retail. Their exact rankings weren’t near the top, but entry alone is a sign of growth or change worth noting.

What the numbers don’t show

Revenue climbed. Profits held steady. But the backdrop is far less calm. Geopolitical shifts, trade disputes, and fast-moving changes in AI policy are already starting to affect how companies grow, spend, and plan. These issues haven’t upended the rankings just yet, but the next few years may look different. The Fortune Global 500 still captures the biggest players, as it always has, but what happens next may depend less on size and more on how these firms navigate what's coming.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Study Finds SpaceX’s Starlink Satellites Leak Radio Noise Into Protected Bands, Disrupting Earth Telescopes
by Irfan Ahmad via Digital Information World

YouTube Begins Using Age-Guessing Technology in the U.S.

YouTube has started testing a new system in the United States that tries to figure out how old users are based on how they use the platform. This technology looks at different pieces of information to estimate whether someone might be a teenager, even if they said something else when they created their account.

The new system is being introduced in stages. Only a small group of users will see it at first. YouTube plans to check how it performs before making it more widely available.

Why This Matters for Teen Users

If the system decides that someone is under 18, YouTube will automatically switch on features that are meant to offer a safer experience. This includes turning off personalized ads and limiting repeated recommendations of certain types of videos. It will also activate reminders for screen time and bedtime, along with other tools that support healthy use of the platform.

These settings already exist, but until now, they were only applied to users who had confirmed their age. Many teens skip that step or enter a different date when signing up. YouTube is trying to cover that gap by using patterns in user behavior instead of relying on what people enter.

How Mistakes Are Handled

Some adults may be flagged by mistake. If that happens, they’ll be asked to prove their age. They can do this by uploading a photo of a government ID, using a credit card, or submitting a live selfie. Once verified, they can access content that is only available to users over 18.

If the system believes someone is an adult based on their account history or usage habits, they won’t need to go through the verification process.

Background on YouTube’s Plans

YouTube had already mentioned its plan to use this kind of technology earlier this year. The move fits into a broader effort to add more safety features for young users. In the past, the platform launched a separate app for children and introduced supervised accounts for teens.

The new system builds on that approach and focuses on users who are signed in. Those who aren’t logged in already face limits on what they can watch, especially when it comes to age-restricted videos.

What Data Is Being Used

YouTube hasn’t listed every detail, but it said it will look at how long an account has been active and how people interact with videos. The goal is to make a reasonable guess without asking for too much personal data upfront.
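YouTube has not published its model, so any reconstruction is guesswork, but the signals it names (account age, viewing behavior) fit a simple scoring pattern. The following is a purely illustrative sketch with invented signal names and thresholds, not YouTube's actual system:

```python
# Illustrative toy sketch only: YouTube has not disclosed its model.
# Every signal name and threshold below is invented for demonstration.
def likely_minor(account_age_days: int, teen_content_ratio: float,
                 late_night_watch_ratio: float) -> bool:
    """Crude rule-based guess from the kinds of signals YouTube describes."""
    score = 0
    if account_age_days < 365 * 2:      # newer accounts skew younger
        score += 1
    if teen_content_ratio > 0.5:        # mostly watches teen-oriented videos
        score += 1
    if late_night_watch_ratio < 0.1:    # little late-night viewing
        score += 1
    return score >= 2                   # majority of signals must agree

print(likely_minor(200, 0.7, 0.05))
```

A production system would use a trained classifier over far richer behavioral data, but the basic idea is the same: combine weak signals into an overall estimate, then fall back to explicit verification when the estimate is challenged.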

This is part of a larger trend where companies are being pushed to do more to protect minors online. Lawmakers in several U.S. states are working on or have passed new rules that make age checks or parental consent a requirement. These include places like Texas, Georgia, Florida, Utah, Maryland, and Connecticut.

Some of those laws are already being challenged in court, and a few haven’t taken effect yet. But the direction is clear. Governments want more responsibility from platforms when it comes to younger users.

Other Countries Are Moving Too

In the United Kingdom, a new law passed in 2023 has also started to take effect. It requires websites to check the age of their users. Platforms that don’t follow the rules could face penalties.

YouTube’s update is one example of how companies are responding to this shift. The use of machine learning to estimate age is becoming more common, even though the full list of signals being used is usually kept private.

For now, the rollout remains small. The company said it will expand once it’s sure the system works as intended.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Facebook Most Cited in Online Abuse Reports from Environmental Activists
by Irfan Ahmad via Digital Information World