Friday, February 20, 2026

Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds

Story: Fred Lewsey.

Reviewed by Ayaz Khan.

An investigation into 30 top AI agents finds just four have published formal safety and evaluation documents relating to the actual bots.

Many of us now use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports.

However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is “dangerously lagging”.

A research team led by the University of Cambridge has found that AI developers share plenty of data on what these agents can do, while withholding evidence of the safety practices needed to assess any risks posed by AI.

The AI Agent Index, a project that includes researchers from MIT, Stanford and the Hebrew University of Jerusalem, investigated the abilities, transparency and safety of thirty “state of the art” AI agents, based on public information and correspondence with developers.

The latest update of the Index is led by Leon Staufer, a researcher studying for an MPhil at Cambridge’s Leverhulme Centre for the Future of Intelligence. It looked at available data for a range of leading chat, browser and workflow AI bots built mainly in the US and China.

The team found a “significant transparency gap”. Developers of just four AI bots in the Index publish agent-specific “system cards”: formal safety and evaluation documents that cover everything from autonomy levels and behaviour to real-world risk analyses.*

Additionally, 25 out of 30 AI agents in the Index do not disclose internal safety results, while 23 out of 30 agents provide no data from third-party testing, despite these being the empirical evidence needed to rigorously assess risk.

Known security incidents or concerns have only been published for five AI agents, while “prompt injection vulnerabilities” – when malicious instructions manipulate the agent into ignoring safeguards – are documented for two of those agents.

Of the five Chinese AI agents analysed for the Index, only one had published any safety frameworks or compliance standards of any kind.

“Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top,” said Cambridge University’s Leon Staufer, lead author of the Index update.

“Behaviours that are critical to AI safety emerge from the planning, tools, memory, and policies of the agent itself, not just the underlying model, and very few developers share these evaluations.”

Most AI Developers Do Not Publish Safety and Evaluation Documents for Their AI Bots
Image: The 2025 AI Agent Index. For 198 out of 1,350 fields, no public information was found. Missing information is concentrated in 'Ecosystem Interaction' and 'Safety' categories. Only 4 agents provide agent-specific system cards.

In fact, the researchers identify 13 AI agents that exhibit “frontier levels” of autonomy, yet only four of these disclose any safety evaluations of the bot itself.

“Developers publish broad, top-level safety and ethics frameworks that sound reassuring, but are publishing limited empirical evidence needed to actually understand the risks,” Staufer said.

“Developers are much more forthcoming about the capabilities of their AI agent. This transparency asymmetry suggests a weaker form of safety washing.”

The latest annual update provides verified information across 1,350 fields for the thirty prominent AI bots, as available up to the last day of 2025.

Criteria for featuring in the Index included public availability and ease of use, and developers with a market valuation of over US$1 billion. Some 80% of the Index bots were released or had major updates in the last two years.

The Index update shows that – outside of Chinese AI bots – almost all agents depend on a few foundation models (GPT, Claude, Gemini), a significant concentration of platform power behind the AI revolution, as well as potential systemic choke points.

Also read: Generative AI has seven distinct roles in combating misinformation

“This shared dependency creates potential single points of failure,” said Staufer. “A pricing change, service outage, or safety regression in one model could cascade across hundreds of AI agents. It also creates opportunities for safety evaluations and monitoring.”

Many of the least transparent agents are AI-enhanced web browsers designed to carry out tasks on the open web on a user’s behalf: clicking, scrolling, and filling in forms for tasks ranging from buying limited-release tickets to monitoring eBay bids.

Browser agents have the highest rate of missing safety information: 64% of safety-related fields unreported. They also operate at the highest levels of autonomy.**

This is closely followed by enterprise agents, business management AI aimed at reliably automating work tasks, with 63% of safety-related fields missing. Chat agents are missing 43% of safety-related fields in the Index.***

Staufer points out that there are no established standards for how AI agents should behave on the web. Most agents do not disclose their AI nature to end users or third parties by default.****Only three agents support watermarking of generated media to identify it as from AI.

At least six AI agents in the Index explicitly use types of code and IP addresses designed to mimic human browsing behaviour and bypass anti-bot protections.

“Website operators can no longer distinguish between a human visitor, a legitimate agent, and a bot scraping content,” said Staufer. “This has significant implications for everything from online shopping and form-filling to booking services and content scraping.”

The update includes a case study on Perplexity Comet: one of the most autonomous browser-based AI agents in the Index, as well as one of the most high-risk and least transparent.

Comet is marketed on its ability to “work just like a human assistant”. Amazon has already threatened legal action over Comet not identifying itself as an AI agent when interacting with its services.

“Without proper safety disclosures, vulnerabilities may only come to light when they are exploited,” said Staufer.

“For example, browser agents can act directly in the real world by making purchases, filling in forms, or accessing accounts. This means that the consequences of a security flaw can be immediate and far-reaching.”

Staufer points out that last year, security researchers discovered that malicious content on a webpage could hijack a browser agent into executing commands, while other attacks were able to extract users' private data from connected services.

Added Staufer: “The latest AI Agent Index reveals the widening gap between the pace of deployment and the pace of safety evaluation. Most developers share little information about safety, evaluations, and societal impacts.”

“AI agents are getting more autonomous and more capable of acting in the real world, but the transparency and governance frameworks needed to manage that shift are dangerously lagging.”


by External Contributor via Digital Information World

No comments:

Post a Comment