Friday, September 26, 2025

OpenAI Introduces ChatGPT Pulse, a Paid Feature That Automates Personalized Briefings

OpenAI has introduced ChatGPT Pulse, a new tool that produces daily personalized reports. The feature is only available to Pro subscribers, who pay $200 a month, and is part of the company’s effort to make ChatGPT work more like an assistant than a chatbot.

How it works

Pulse runs mostly overnight. It processes a user’s chat history, memory settings, and feedback, then compiles a set of five to ten cards the next morning. These cards can include news updates, reminders, or suggestions based on personal context. Each card links to a full report, and users can ask ChatGPT questions about the content.


The feature also works with connected apps such as Gmail and Google Calendar. When switched on, Pulse can highlight important emails or prepare a daily agenda. OpenAI says these integrations are off by default, and users can control how much data is shared.

From Tasks to Pulse

An earlier experiment called Tasks let users set reminders, such as getting news at a specific time. Pulse expands on that idea by running automatically, without waiting for a manual request. OpenAI executives describe it as the next stage in building assistants that can anticipate needs.

Why it is limited to Pro

Pulse requires heavy computing power, which is why it sits behind the Pro subscription. OpenAI has said it is short on server capacity and is working with Oracle and SoftBank to expand its data centers. The company wants to release the feature more widely, starting with Plus subscribers, once it becomes more efficient.

What it shows

Examples shown by OpenAI include sports roundups, travel itineraries, family activity ideas, and restaurant suggestions tailored to dietary preferences. The system can also prepare drafts such as meeting agendas or gift reminders.

Pulse is designed to stop after presenting a limited set of cards. The company says this choice is deliberate, to avoid the constant scrolling pattern of social media feeds.

Looking ahead

For now, Pulse is aimed at individual users, but the company sees it as a step toward more capable AI agents. Future versions could handle tasks such as making bookings or drafting emails for approval, though those features remain in early development.

Other startups are exploring similar tools, including Huxe, which comes from the team behind Google’s NotebookLM. Analysts say the market is still open, as most AI agents today rely on prompts rather than working proactively.

OpenAI stresses that Pulse remains experimental and optional. Its success will depend on whether users find enough value to justify its high subscription cost.

Notes: This post was edited/created using GenAI tools.

Read next: Trump Signs Off on TikTok Deal, But Key Details Remain Unsettled


by Irfan Ahmad via Digital Information World

Thursday, September 25, 2025

Microsoft Ends Israeli Military Unit’s Access to Cloud and AI Services Used in Palestinian Surveillance

Microsoft has withdrawn access to some of its cloud and artificial intelligence services from a unit of the Israeli military after evidence emerged that its technology had been central to a mass surveillance program targeting Palestinians in Gaza and the West Bank.

The decision follows months of scrutiny triggered by investigative reports that revealed how the military’s intelligence wing, Unit 8200, was storing and processing enormous volumes of civilian communications through Microsoft’s Azure platform.

Surveillance Program and Scale

The program relied on the interception of millions of Palestinian phone calls each day. Intelligence officers could capture, replay, and analyze conversations with the help of AI-driven tools hosted on Microsoft’s infrastructure. Sources described the system as capable of handling an immense flow of information, with internal slogans pointing to the goal of recording nearly a million calls per hour.

According to documents cited in investigations, the collected material reached several thousand terabytes in scale and was initially stored in a Microsoft data center located in the Netherlands. That arrangement gave Israeli intelligence officers near-limitless access to analyze the material, with applications ranging from general monitoring of daily life in the occupied territories to the identification of potential targets in Gaza.

Corporate Response and Internal Pressure

Microsoft’s decision came after an independent review ordered earlier this year to assess whether its services were being misused. The company concluded that a military client had violated its rules by using Azure infrastructure for the systematic surveillance of a civilian population. Employees and investors had also raised concerns about the firm’s role in providing technology for military operations, particularly as the humanitarian toll of the genocide in Gaza has escalated.

The decision was relayed to Israel’s Ministry of Defense in recent days, with Microsoft informing officials that subscriptions linked to Unit 8200 would be terminated. The measures include revoking access to certain cloud storage capabilities and restricting the use of AI-powered services. The company stressed that its global policy forbids enabling mass civilian surveillance and that this principle applies across all regions where it operates.

Data Relocation and Alternative Providers

After the initial reporting earlier this summer, Unit 8200 began transferring large portions of stored communications out of Microsoft’s European servers. Intelligence sources indicated that the data, estimated at thousands of terabytes, was moved to alternative infrastructure, with Amazon Web Services named as a potential new host. Amazon has not publicly commented on whether it has agreed to manage the repository.

The relocation underscored the sensitive nature of hosting military surveillance data on foreign commercial platforms, raising questions within Israel about the risks of relying on overseas providers for operations tied to national security.

Historical Ties and Earlier Reviews

Collaboration between Microsoft and the Israeli military intensified in recent years. In 2021, company executives met with senior commanders of Unit 8200 to discuss technical cooperation, including the creation of a segregated environment within Azure to handle intelligence workloads. Those arrangements were later examined by Microsoft after internal leaks suggested their scale.

An earlier review carried out in mid-2024 had initially cleared the company, with investigators saying they found no proof that Azure tools were being used to harm civilians. However, subsequent evidence gathered by reporters and advocacy groups contradicted those findings, prompting a second inquiry that resulted in this week’s termination.

Reaction from Activists and Workforce

The revelations sparked widespread protests from Microsoft staff as well as campaign groups critical of the company’s ties to Israel’s military. Demonstrations were staged both at US headquarters and at European sites, with a worker-led initiative calling itself “No Azure for Apartheid” pushing for a full severance of contracts with the Israeli defense sector.

Some employees also faced disciplinary action after staging direct protests inside company offices. Organizers described Microsoft’s latest move as a step forward but argued that it addressed only a fraction of the firm’s relationship with Israel’s defense establishment, since other contracts remain in place.

Critics argue that Microsoft’s actions reveal a deep moral failure. They note the company has never condemned Israel’s genocide in Gaza, even while its technology was used to support surveillance tied to military operations there. Nor has it apologized for enabling that system or acknowledged that employees who protested were standing on the right side of history. Instead, it protected contracts and avoided accountability. Activists say this silence shows a corporation unwilling to choose between right and wrong, exposing a culture where profit outweighs morality. For many, the only meaningful response is to boycott Microsoft and other firms that empower such actions, until corporate greed and complicity give way to a new morality that values human life over corporate gain.

Broader Context and Implications

The decision marks the first known case of a major US technology company suspending services previously provided to the Israeli military since the beginning of the genocide in Gaza. It comes against the backdrop of international criticism over the humanitarian crisis in the territory, where tens of thousands of Palestinian civilians have been killed during nearly two years of bombardment and siege.

Legal experts and human rights monitors have noted that the surveillance project illustrates the degree to which advanced cloud infrastructure from American companies has been integrated into military campaigns. For Microsoft, the move represents both a corporate governance decision and a response to reputational risks, as it seeks to demonstrate consistency in applying its own standards.

Ongoing Reviews

Microsoft has said that its inquiry is still continuing and that additional measures may follow depending on new findings. The company emphasized that the investigation did not involve examining customer data directly but was based on internal records, correspondence, and contractual details. Senior executives also acknowledged that earlier assessments may have been incomplete, partly due to limited transparency from staff working on the Israeli contracts.

While Microsoft’s wider commercial agreements with Israel remain intact, the suspension of specific services linked to Unit 8200 highlights a shift in how global technology firms are forced to balance commercial interests, ethical guidelines, and mounting pressure from employees and civil society. The long-term outcome may depend on whether other cloud providers face similar scrutiny over their role in hosting sensitive military operations.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• AI’s Sources of Truth: What Chatbot Citations Reveal About the Future of Health Information

• Why Parental Control Apps Like AirDroid Are Essential in Today’s Digital Landscape

• 5G Networks Show Stability but Still Struggle to Beat 4G
by Irfan Ahmad via Digital Information World

AI’s Sources of Truth: What Chatbot Citations Reveal About the Future of Health Information

Large language models (LLMs) have rapidly shifted from experimental tools to everyday advisors. For millions of people, asking AI chatbots such as ChatGPT about a migraine or autoimmune disorder feels as natural as typing a query into Google. But instead of returning a list of links, these systems summarize and cite information, raising a pressing question: Where exactly do these chatbots get their medical knowledge?

A new study, AI’s Sources of Truth: How Chatbots Cite Health Information, analyzed 5,472 citations generated by the four leading web-enabled models: ChatGPT, Claude, Gemini, and Perplexity. The findings show both encouraging signs of reliability and some concerning blind spots. More importantly, they suggest how our relationship with healthcare information is being rewritten by AI systems.

The Concentrated Core of AI’s Health Sources

When chatbots answer health questions, their citations are surprisingly concentrated. The most frequently cited domain across all models was PubMed Central, a free archive of biomedical research, which appeared 385 times in the sample. AI systems currently lean heavily on peer-reviewed research that’s openly available.

Rank  Website                   Total mentions
1     pmc.ncbi.nlm.nih.gov      385
2     my.clevelandclinic.org    174
3     www.mayoclinic.org        163
4     www.ncbi.nlm.nih.gov      150
5     www.sciencedirect.com     93

Close behind were some of the internet’s most trusted health websites. The Cleveland Clinic’s patient information portal was cited 174 times, and the Mayo Clinic’s site 163 times. Another top source was the NIH’s National Center for Biotechnology Information (NCBI) site, with 150 mentions. These four show that chatbots gravitate toward established, credible medical knowledge.


Overall, nearly one in three citations (30.7%) in the study came from health media sites. About 23% of references were traced to commercial or affiliate sites (like corporate blogs, product pages, or other pages with a marketing slant). Another roughly 23% were from academic research sources. The chatbots as a group seem to favor accessible, consumer-friendly explanations of health topics. Traditional news articles made up only about 3.7% of citations, and social media or user-generated content only 1.6%. Mainstream journalism and personal anecdotes thus barely register in the bots’ answers.
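The study's category breakdown comes down to a simple tally over labeled citations. A minimal sketch of that aggregation in Python, where the domains, category labels, and tiny sample below are hypothetical stand-ins for the study's 5,472 classified citations:

```python
from collections import Counter

# Hypothetical (domain, category) pairs standing in for the study's
# classified citation dataset; labels here are illustrative only.
citations = [
    ("my.clevelandclinic.org", "health media"),
    ("www.mayoclinic.org", "health media"),
    ("pmc.ncbi.nlm.nih.gov", "academic"),
    ("www.sciencedirect.com", "academic"),
    ("example-shop.com", "commercial"),
    ("www.mayoclinic.org", "health media"),
]

# Count citations per category and report each category's share.
by_category = Counter(category for _, category in citations)
total = len(citations)
for category, count in by_category.most_common():
    print(f"{category}: {count} ({count / total:.1%})")
```

Run over the full dataset, the same tally yields the shares reported in the study (30.7% health media, about 23% commercial, about 23% academic, and so on).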

Fresh, Up-to-Date Information in Answers

When it comes to how current the information is, the chatbots show a strong bias toward recent material. Nearly two-thirds of all cited sources were published in either 2024 or 2025. In fact, the single most common publication year among the citations was 2025, accounting for about 40% of all references. Citation counts drop off dramatically for earlier years.

This recency bias likely reflects both the design of the bots (some have browsing enabled to find current information) and a built-in preference for newer, more relevant data. If you ask about a medical treatment or an emerging health issue, the chatbots are inclined to cite something from the last year or two rather than a decades-old paper. That is a reassuring habit, given how quickly medical consensus can change.

Different Chatbots, Different Source Preferences

The most interesting insight from the study is how each AI model has its own style in sourcing information. While all four chatbots broadly favored authoritative, recent, open-access material, the mix of sources varied by platform.


For example, ChatGPT and Claude showed the strongest preference for highly authoritative domains. Around 68% of all citations from ChatGPT came from domains with the highest domain authority rankings (like DR 81–100 on Ahrefs), and Claude was similar at 67.4%. In comparison, Google’s Gemini and Perplexity were a bit less top-heavy: about 56–58% of their citations were from these elite top-rated sites. Gemini and Perplexity dipped more into mid-tier sources (for instance, websites that are reputable but not the absolute top of the internet’s authority food chain), and Perplexity in particular ventured the furthest down the credibility ladder. The study notes that Perplexity cited the largest share of low-authority websites (3.3% of its sources were from domains in the lowest credibility tier).


Looking at content categories: ChatGPT tended to cite health media outlets the most, with 35.8% of its references coming from sites like Mayo Clinic, WebMD, Cleveland Clinic, etc. About 23% of ChatGPT’s citations were academic papers or journals, meaning it still included a fair amount of hard science but leaned more toward those consumer health explainers. Claude, by contrast, was more evenly split, roughly 29.7% health media and 28.9% academic sources, essentially balancing between easy-to-read guides and original research.

Gemini stood out by citing government and NGO sources far more than the others. Nearly a quarter (24.9%) of Gemini’s citations were from official public health sites or nonprofit health organizations. Meanwhile, Perplexity was the real outlier. It’s the only model where commercial content was the number-one source category, making up 30.5% of its citations. Perplexity also cited social or user-generated content more than any other bot. This chatbot is a bit more likely to throw in a Reddit thread, a Quora answer, or a YouTube video as part of an answer.

The Future of Health Search

The shift from Google-style search to AI-powered health assistants is behavioral. Instead of wading through a swamp of links, users now get tailored explanations, neatly cited, with bias toward accessibility and recency.
  1. Trust is being redefined. People may start trusting AI models as much as, if not more than, traditional search engines. Yet each model’s sourcing bias means users could receive subtly different “truths.”
  2. Paywalled research is at risk of invisibility. If LLMs overwhelmingly favor open-access content, cutting-edge but gated science could be sidelined from public discourse.
  3. Media narratives may shape science. With 59% of citations coming from summaries and health media, the interpreters of science could become more influential than the researchers themselves.
  4. Transparency matters. That LLMs cite live, working links is a step toward accountability, but users must still validate the credibility and intent of those sources.
Read next: 5G Networks Show Stability but Still Struggle to Beat 4G
by Irfan Ahmad via Digital Information World

Instagram Crosses 3 Billion Users as Growth Reshapes Meta’s Social Platforms

Instagram has surpassed 3 billion monthly active users, reaching one of the biggest milestones in its history and placing the service alongside Facebook and WhatsApp at the top of Meta’s global portfolio.

A Decade of Expansion

The platform’s rise has been steady and unusually consistent. In 2013 Instagram counted around 130 million monthly users. Within a year it had nearly doubled to 250 million, then rose to 400 million in 2015 and 545 million in 2016. By 2017 the app had attracted 800 million people, and in 2018 it passed 1.06 billion. That figure kept climbing: 1.25 billion in 2019, 1.49 billion in 2020, 1.76 billion in 2021, and just over 2 billion in 2022. Growth slowed slightly to 2.14 billion in 2023 and 2.27 billion in 2024, before accelerating sharply to 3 billion in 2025.

From Q3 2013 to Q3 2025, Instagram grew from about 130 million to 3 billion monthly users, an average of roughly 20 million new users added each month.
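That average is easy to sanity-check from the two endpoint figures. A quick back-of-the-envelope calculation, assuming the Q3 2013 and Q3 2025 readings and a span of exactly twelve years (the exact result shifts slightly with different endpoint assumptions):

```python
# Rough check of Instagram's average monthly user growth between
# the Q3 2013 and Q3 2025 figures quoted in the article.
start_mau = 130_000_000    # Q3 2013
end_mau = 3_000_000_000    # Q3 2025
months = 12 * 12           # twelve years between the two readings

avg_per_month = (end_mau - start_mau) / months
print(f"{avg_per_month / 1e6:.1f} million new users per month")
```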

Business and Product Drivers

Meta, then known as Facebook, acquired Instagram in 2012 for $1 billion, a deal that initially raised questions because the app had little revenue and limited reach. Since then, Instagram has become central to Meta’s business. Analysts estimate it will generate more than half of Meta’s advertising revenue in the United States this year.

The strongest growth has come from short-form video, direct messaging, and recommendation-based feeds. Reels, launched in 2020, positioned Instagram against TikTok and YouTube Shorts. Algorithmic recommendations have also boosted activity, though they have sparked frustration among users who prefer content from friends over suggested clips.

Upcoming Adjustments for Users

Meta is now testing new controls that will allow people to fine-tune recommendations. Early prototypes show users being able to add or remove topic categories, changing which Reels or suggested posts they see. The navigation bar will also be updated to place direct messaging at the center of the experience, with the upload button moved elsewhere. These adjustments reflect Instagram’s shift toward private interaction and discovery, rather than its origins as a photo feed.

Policy and Regulation Pressures

Growth has not come without scrutiny. In April 2024 Meta stopped reporting quarterly active user numbers for each app and began focusing instead on overall engagement across its platforms. The company said in July that 3.48 billion people use its family of services daily. At the same time, regulators have continued to examine Meta’s acquisitions of WhatsApp and Instagram. A U.S. antitrust trial has revealed internal discussions showing concern inside Meta that Instagram’s popularity was eroding Facebook’s position.

The company has also faced pressure on child safety. In 2024 Instagram introduced new privacy defaults, making all accounts for under-18 users private unless changed manually. The update was aimed at building safer digital spaces for younger people while meeting regulatory expectations.

Meta’s Balancing Act

Instagram now joins Facebook and WhatsApp in exceeding 3 billion monthly users, but its cultural weight is different. Instagram has become the most influential of Meta’s apps among younger people, while Facebook continues to lose ground with that audience. The uneven momentum has forced Meta to maintain balance: supporting Instagram’s expansion while trying to revive interest in its original network.

With steady user growth over more than a decade and new tools shaping how people interact with content, Instagram has become one of the pillars of Meta’s global reach, as well as a key driver of its future strategy.

Quarter    MAU (Instagram)
Q3 2013    130 million
Q3 2014    250 million
Q3 2015    400 million
Q3 2016    545 million
Q3 2017    800 million
Q3 2018    1,060 million
Q3 2019    1,255 million
Q3 2020    1,490 million
Q3 2021    1,765 million
Q3 2022    2,010 million
Q3 2023    2,145 million
Q3 2024    2,270 million
Q3 2025    3,000 million
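The table also makes the growth pattern easy to quantify. A short sketch computing the year-over-year growth rate implied by each pair of adjacent readings (figures in millions, taken from the table above):

```python
# Year-over-year growth rates implied by Instagram's Q3 MAU readings
# (values in millions of monthly active users).
mau = {
    2013: 130, 2014: 250, 2015: 400, 2016: 545, 2017: 800,
    2018: 1060, 2019: 1255, 2020: 1490, 2021: 1765, 2022: 2010,
    2023: 2145, 2024: 2270, 2025: 3000,
}

years = sorted(mau)
for prev, cur in zip(years, years[1:]):
    growth = (mau[cur] - mau[prev]) / mau[prev]
    print(f"{prev} -> {cur}: {growth:+.1%}")
```

The output shows growth cooling to single digits in 2023 and 2024, then jumping by roughly a third between 2024 and 2025, matching the article's description of a sharp acceleration.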

Notes: This post was edited/created using GenAI tools.

Read next:

• Making Instagram Content Work: A Closer Look at What Each Post Type Really Does

• YouTube Adds Options to Hide End Screens
by Irfan Ahmad via Digital Information World

YouTube Adds Options to Hide End Screens

YouTube is rolling out a tool that lets viewers hide the recommendation panels that appear at the end of videos. A new button in the top corner of the player allows the end screen to be removed so the final moments of a video remain clear. The same button can be used to restore the panel if needed.

Applies on a single video

The setting works only on the video currently being watched. It does not turn off end screens across the platform. YouTube said the change was developed in response to feedback from users who wanted fewer interruptions when finishing a clip.

Minor effect on engagement

End screens remain available for creators to add. Early testing showed that hiding them led to less than a two percent drop in clicks. YouTube also said that the subscribe option tied to the channel watermark, which appeared on hover, generated only a very small share of sign-ups. That button is being removed, since the main subscribe control already sits under the player.

Focus on viewing experience

The adjustments are designed to reduce clutter without taking away creator tools. By removing duplicate features and letting users dismiss overlays, YouTube aims to make it easier to watch videos without losing track of the content itself.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• YouTube Plans Path Back for Creators Banned Over Pandemic and Election Rules

• Consumers Want AI Labels but Doubt Their Own Skills

by Irfan Ahmad via Digital Information World

Wednesday, September 24, 2025

Consumers Want AI Labels but Doubt Their Own Skills

A Pew Research Center survey of more than 5,000 adults shows that most people want to know when artificial intelligence is involved in creating content. Three out of four said it is very important to be able to tell whether a picture, video, or piece of text comes from AI or a person. Only a small group, about 12 percent, felt confident they could make that judgment themselves.

Worries Outweigh Excitement

Half of Americans said they are more worried than excited about AI’s growing role in daily life. Ten percent said they feel more excited than concerned, and about four in ten said their feelings are mixed. More than half described the risks of AI as high, compared with a quarter who thought its benefits are high.

Calls for More Control

Sixty percent of respondents said they want more control over how AI is used in their lives. Last year the figure was 55 percent. Most people are open to AI helping with tasks like weather forecasting, fraud detection, or drug development, but they reject its role in religion or matchmaking. Nearly three quarters said AI should have no place in faith-related advice.

Impact on Human Abilities

A majority of respondents expect AI to weaken skills that are central to human life. Fifty-three percent believe it will reduce creative thinking. Fifty percent think it will make it harder to build strong personal relationships. Only a small minority expect improvements in these areas. Some see a role for AI in problem-solving, with three in ten saying it could help, though more people predict harm.

Younger Adults Show More Awareness

Awareness of AI is strongest among younger Americans. Sixty-two percent of adults under 30 said they have heard or read a lot about it. Among those 65 and older, the share drops to 32 percent. Younger adults are also more likely to believe that AI will harm creativity and relationships, even as they interact with the technology more often.

Push for Transparency

The findings suggest that Americans are not opposed to AI itself but want clear boundaries and honesty about how it is used. Labels that reveal when AI is involved could help build trust. For institutions and businesses, openness may shape how people respond to the technology in the years ahead.





Notes: This post was edited/created using GenAI tools.

Read next: Companies Struggle With a Hidden Cost of AI
by Irfan Ahmad via Digital Information World

WhatsApp Adds Built-In Message Translation

WhatsApp is rolling out a translation feature that works inside chats, groups, and channel updates. The update lets people convert text into their preferred language without leaving the app or copying content into another service.

On iPhones, the option uses Apple’s Translate system. It supports more than nineteen languages at launch, including Chinese, Japanese, and Korean. Android has fewer choices to begin with, covering English, Spanish, Hindi, Portuguese, Russian, and Arabic. Each language requires a download before use, about 24 megabytes on average, and then works directly on the device.

To translate a message, users hold down on the text and tap “Translate.” They can pick the language direction and save it for later. Android users also get the ability to turn on automatic translation for an entire conversation, so incoming messages appear in the chosen language without extra steps.

Meta says the translations are processed locally on the phone. The company highlights that no message content is sent to its servers, keeping the process private. This follows a series of recent updates aimed at protecting chats, including alerts for unknown group invitations and extra privacy controls.

The new tool is being pushed out gradually on both platforms. More languages are expected to be added over time as the feature expands.


Notes: This post was edited/created using GenAI tools.

Read next: iPhone Air Durability Tests Show Surprising Strength, but Trade-Offs and High Costs Remain
by Asim BN via Digital Information World