"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
A University of Queensland study has shown that Large Language Models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality.
The research team asked six LLMs – including vision models – to moderate thousands of examples of hateful text and memes through the lens of ideologically diverse AI personas.
Professor Demartini said the exercise revealed that AI political personas, even without significantly altering overall accuracy, were prone to introducing consistent ideological biases and divergences in chatbot content moderation judgments.
“It has already been established that persona conditioning can shift the political stance expressed by LLMs,” Professor Demartini said.
“Now we have shown through political personas that there is an underlying risk that LLMs will lean towards certain perspectives when identifying and responding to hateful and harmful comments.”
“It demonstrates a need to rigorously examine the ideological robustness of AI systems used in tasks where even subtle biases can affect fairness, inclusivity and public trust.”
The AI personas used in the study were from a database of 200,000 synthetic identities ranging from schoolteachers to musicians, sports stars and political activists.
Each persona was put through a political compass test to determine its ideological positioning, and 400 of those with the more ‘extreme’ positions were then asked to identify hateful online content.
Professor Demartini said his team found that assigning a persona to an LLM chatbot altered its precision and recall in line with ideological leanings, rather than changing the overall accuracy of hate speech detection.
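To make that distinction concrete, here is a minimal sketch in Python with entirely hypothetical personas, predictions and labels – not the study's data or evaluation code – showing how per-persona precision and recall can diverge while overall accuracy stays flat.

```python
# Minimal sketch with made-up data: per-persona precision/recall can diverge
# even when accuracy is unchanged. Personas and records are hypothetical.
from collections import defaultdict

# Each record: (persona, model_prediction, gold_label) for a "hateful?" judgement
records = [
    ("left_leaning_persona", 1, 1), ("left_leaning_persona", 1, 0), ("left_leaning_persona", 0, 0),
    ("right_leaning_persona", 0, 1), ("right_leaning_persona", 1, 1), ("right_leaning_persona", 0, 0),
]

def scores(rows):
    tp = sum(1 for _, pred, gold in rows if pred == 1 and gold == 1)
    fp = sum(1 for _, pred, gold in rows if pred == 1 and gold == 0)
    fn = sum(1 for _, pred, gold in rows if pred == 0 and gold == 1)
    acc = sum(1 for _, pred, gold in rows if pred == gold) / len(rows)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, acc

by_persona = defaultdict(list)
for rec in records:
    by_persona[rec[0]].append(rec)

print("overall accuracy:", round(scores(records)[2], 2))
for persona, rows in by_persona.items():
    precision, recall, acc = scores(rows)
    print(f"{persona}: precision={precision:.2f} recall={recall:.2f} accuracy={acc:.2f}")
```

In this toy example both personas are right two times out of three, yet one over-flags content (lower precision) while the other under-flags it (lower recall) – the kind of shift in detection thresholds the researchers describe.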
However, the team found LLMs – especially larger models – exhibited strong ideological cohesion and alignment between personas from the same ideological ‘region’.
Professor Demartini said this suggested larger AI models tend to internalise ideological framings, as opposed to smoothing them out or ‘neutralising’ them.
“As LLMs become more capable at persona adoption, they also encode ideological ‘in-groups’ more distinctly,” Professor Demartini said.
“On politically targeted tasks like hate speech detection this manifested as partisan bias, with LLMs judging criticism directed at their ideological in-group more harshly than content aimed at their opponents.”
Professor Demartini said larger LLMs also displayed more complex patterns, including a tendency towards defensive bias.
“Left personas showed heightened sensitivity to anti-left hate, and right-wing personas were more sensitive to anti-right hate speech,” Professor Demartini said.
“This suggests that ideological alignment not only shifts detection thresholds globally, but also conditions the model to prioritise protection of its ‘in-group’ while downplaying harmfulness directed at opposing groups.”
Researchers said the project highlighted how crucial it is for high-stakes content moderation tasks to be overseen by neutral arbiters, so that fairness and public trust are maintained and the health and wellbeing of vulnerable demographics are protected.
“People interact with AI programs trusting and believing they are completely neutral,” Professor Demartini said.
“But concerns remain about their tendency to encode and reproduce political biases, raising important questions about AI ethics and deployment.
“In content moderation the outputs of these models reflect embedded ideological biases that can disproportionately affect certain groups, potentially leading to unfair treatment of billions of users.”
PhD candidates Stefano Civelli and Pietro Bernadelle, and research assistant Nardiena Pratama, collaborated on the study.
The research is published in Transactions on Intelligent Systems and Technology.
When evolutionary biologist Joseph Popp coded the first documented piece of ransomware in 1989, he had little idea it would become a major criminal business model capable of bringing economies to their knees.
Popp, who worked for the World Health Organization at the time, wanted to warn people about the dangers of ignoring health warnings, poor sexual hygiene and (human) virus transmission.
In 1996, two Columbia University computer scientists published a paper explaining how criminals could use more sophisticated versions of Popp’s scheme to mount large-scale extortion operations. At the heart of this was malicious software that could be used to encrypt, block access to or steal a person or organisation’s files and data.
However, two preconditions still had to be met for ransomware to become a feasible criminal business: communication channels that were difficult to monitor, and a payments process outside financial regulation.
The Tor protocol, released by US intelligence services to protect their covert communications, solved the first problem in 2004. Cryptocurrencies solved the second – in particular, when bitcoin cash machines started appearing in North American cities from 2013.
Today, artificial intelligence makes malware coding and crafting convincing phishing emails in any language simple. And the latest model in Anthropic’s AI system, Claude Mythos, recently proved more effective at hacking into computer systems than humans.
As an expert in extortive crime, I am increasingly concerned about public and political apathy towards the threats posed by ransomware. To better understand these threats, it’s worth tracing ransomware’s evolution over the past two decades – and how improvements in computer security and law enforcement, plus changes in data regulation, have led to new criminal strategies each time.
Cut out the middlemen
The first generation, which came to global attention in the mid-2010s, was known as “commodity ransomware”. A pioneering example, Cryptolocker, was developed by Russia-based hackers who infiltrated hundreds of thousands of computers, seeking to cut out the middlemen previously needed to commit financial fraud. They proved that a large majority of their victims would happily pay a small ransom to restore data that had been locked by their malware.
As both competent and incompetent hackers piled into this new market, victims shared information about rogue operators and put them out of business. This led to the second generation of ransomware such as Ryuk, which emerged in 2018.
In this phase, criminals abandoned the indiscriminate “spray-and-pray” approach in favour of targeting individual cash-rich businesses. They would set an individual ransom, negotiate with the company, and even offer to help with decryption if paid. Fast-rising ransoms more than compensated for this increased administrative effort.
In response, many companies began investing in multi-factor authentication, better threat monitoring, advance warning systems and software patches for known vulnerabilities.
However, these security benefits were soon offset by the impact of COVID on work practices across the world. The pandemic led to widespread remote working, with many people using unsecured devices and connections that were vulnerable to cyber-attack.
A multibillion-dollar industry
The next ransomware innovation was driven by the emergence of back-up systems that enabled companies to restore encrypted files without the criminals’ help. This was coupled with the emergence of tighter data privacy regulation such as GDPR in Europe and the UK.
Invented in 2019, third-generation ransomware weaponised these regulations, which threatened firms with massive fines if confidential data about clients or staff was revealed. The criminal gangs now sought out and exfiltrated an organisation’s most sensitive files, then threatened to publicise them through dedicated dark web leak sites.
This so-called double-extortion model – encrypting an organisation’s data while threatening to make it public – brought many businesses back to the negotiation table.
Ransomware had become a multibillion-dollar industry – with the Conti gang, sheltered by Russia and employing hundreds of people, among the key players setting new records for ransomware demands. Its attacks on critical infrastructure and hospitals saw it sanctioned by the UK government in 2023.
This new approach forced many governments to row back on imposing hefty fines for data breaches, since many were the result of criminal attacks. Meanwhile, new initiatives by law enforcement – supported by the private sector – targeted and broke up the largest and most egregious ransomware gangs.
Today’s fourth generation of ransomware, building on the latest AI technology, looks nimbler and slimmed-down in comparison. Anyone who gains access to a network can lease weapons-grade malware on the dark web without forming long-term ties with a particular gang.
Advanced AI-based hacking tools make ransomware accessible to many more criminals and politically motivated hacktivists. And around one-quarter of breaches still result in ransom payments. For criminals sheltered by their governments, only the digital infrastructure is at risk of being taken down by western law enforcement.
Lessons not learned
While coverage of Claude Mythos suggests even the most sophisticated cyber defences could now be vulnerable, the troubling reality is that many individuals and organisations are still using out-of-date, unpatched or only partially upgraded software. This means even early-generation ransomware techniques are still lucrative.
While Popp sent out his floppy discs to promote better sexual hygiene, today’s poor cyberhygiene is leaving many public and private networks open to malware attacks. The intended lesson of his original ransomware caper – be vigilant and properly heed health warnings – has still only been partially learnt in the digital world.
Many western societies appear to have grown accepting of criminals leeching off business conducted on the internet. Not even a steady stream of human fatalities, caused by attacks on hospitals and medical providers, has generated the level of response required to stamp out this dangerous threat.
The hope that governments sheltering cybercriminals can be encouraged (or forced) to stop them targeting critical national infrastructure appears increasingly fragile amid current geopolitical tensions. At all levels of society, we need to get smarter about cyber defence.
If you give artificial intelligence a goal of maximizing profit, how far will it go?
AI agents appear capable of lying, concealing, and colluding, according to new research from Harvard Business School.
Researchers found that AI agents — software trained to perform tasks independently — engaged in a “broad pattern” of misconduct after being asked to manage a simulated vending machine business and maximize profits for a year. The agents were neither instructed to cut legal or ethical corners nor prohibited from doing so.
“What’s unambiguous looking at the models is that the misconduct we observed — from not paying a customer refund or deciding to collude on prices — was not an accident. It was deliberately done by agents to maximize profitability,” said Eugene F. Soltes, the McLean Family Professor of Business Administration at HBS and first author of the working paper.
Soltes and co-author Harper Jung, a doctoral student studying accounting and management at HBS, hope their research will serve as a starting point for more conversation about AI safety in the context of business management control.
The research for the paper, which the group aims to publish and is currently out for peer review, was done in collaboration with Andon Labs, an AI safety company focusing on testing AI models in realistic business operations.
In experiments, 20 commercially available AI models from major firms, including Anthropic’s Claude Opus 4.6, DeepSeek v3.2, and OpenAI’s GPT-5.1, independently operated a vending machine over the course of a simulated year.
Tasks included searching for suppliers, buying products, and engaging with customers.
In some experiments, agents operated solo; in others, four agents operated simultaneously in a shared market, where they could communicate with rivals via email.
Agents started with $500 and a small inventory of chips and sodas.
“They had to figure it out themselves,” said Jung. “Each agent had to independently search online for suppliers, negotiate wholesale prices, set its own retail pricing, and handle customer complaints.”
Jung and Soltes said the agents demonstrated impressive business savvy.
“The best models had the capacity to negotiate and calculate valuations like a top-notch M.B.A. student,” Soltes said.
“When we went through the deliberations and the exchanges the agents made with each other, we were just in shock,” said Jung. “I was amazed at how far these machines can go.”
The agents’ misconduct ranged from the questionable to the comical to the potentially criminal and included denying refunds by claiming defects were normal product variation; inventing nonexistent corporate policies to avoid processing returns; and colluding with competitors to fix prices.
In one instance, agents formed what researchers described as a “three-person cartel,” which the agents named the Bay Street Triumvirate. The alliance fractured, though, when one agent discovered another was undercutting cartel prices, which it called a “declaration of war.”
The simulations also imposed constraints: Agents were charged a $2-per-day operating fee plus a token usage fee — effectively turning time spent “thinking” into an operating expense.
In response, the agents sought to economize. For instance, Soltes said, internal reasoning logs showed agents shifting from carefully weighing refund decisions to dismissing most requests outright, often without review.
“The agents come to the realization that ‘thinking’ about giving a refund is itself a cognitive burden, and so they just ignore it altogether in some circumstances,” Soltes explained. “People might assume that machines are deliberative, while humans rely on shortcuts and are vulnerable to bias. But it turns out that, under similar constraints, agents reproduce the same myopic and biased behaviors we associate with people.”
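As a rough illustration of how such fees make deliberation itself costly, here is a minimal sketch. The $2 daily operating fee follows the description above; the per-token price, revenue figures and token counts are assumptions chosen purely for illustration, not figures from the study.

```python
# Sketch of the cost structure described above: a fixed daily operating fee
# plus a per-token charge for reasoning. Prices and token counts below are
# illustrative assumptions, not figures from the study.

DAILY_FEE = 2.00            # fixed operating fee per simulated day
TOKEN_PRICE = 0.000002      # assumed cost per reasoning token (hypothetical)

def daily_profit(revenue: float, cost_of_goods: float, reasoning_tokens: int) -> float:
    """Profit for one simulated day after the fixed fee and token-usage fee."""
    return revenue - cost_of_goods - DAILY_FEE - reasoning_tokens * TOKEN_PRICE

# Carefully weighing a refund request (many tokens) vs. dismissing it outright
print(daily_profit(40.0, 25.0, reasoning_tokens=500_000))   # 12.0  – deliberative
print(daily_profit(40.0, 25.0, reasoning_tokens=20_000))    # ~12.96 – shortcut
```

Under this accounting, every extra reasoning token eats into margin, which is consistent with the shift the authors observed from careful review to blanket dismissal.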
The research raises questions about accountability for AI developers and regulators.
The reasoning logs, Soltes said, can sometimes be read as resembling mens rea — the “guilty mind” concept in criminal law used to establish intent. Yet when an AI agent behaves improperly, responsibility is far harder to determine.
“Does it rest with the company that deployed the system, the AI firm that created the model, or the manager who chose to use it?” he asked.
“The most straightforward answer may be to hold the individual managers overseeing the software responsible for its actions, on the assumption that they will monitor and supervise its behavior,” he said. “But that solution also creates a different issue, since many of the promised efficiencies of autonomous AI systems begin to disappear if a human must remain in the loop at every decision point.” A thorny problem, but one that business leaders and lawmakers must deal with, hopefully sooner rather than later, researchers say.
OpenAI, the creator of ChatGPT, is gearing up to launch its Initial Public Offering (IPO) this year. This financial manoeuvre would represent a pivotal shift towards market-driven logic for a project originally designed for the “common good”. Established in 2015, OpenAI started out amidst growing anxiety regarding artificial intelligence (AI). Co-founded by Sam Altman and Elon Musk, the tech company adopted a non-profit structure and made no secret of its goal to develop AI that is “beneficial to humanity” and prevent it from remaining in the hands of a few dominant players.
This ambition distinguished it from tech giants like Google, Microsoft, Meta, and Amazon, which were built on proprietary models and rent-seeking effects.
In contrast, OpenAI intended to champion general public interest by emphasising open research and sharing knowledge. However, this orientation – symbolised by its name – quickly collided with a structural constraint: the astronomical cost of generative AI.
Unlike traditional software, where marginal costs tend towards zero (for example, the millionth copy of Windows costs Microsoft nothing), generative AI requires massive infrastructure.
Every interaction mobilises computing resources, energy, and specialised equipment. A standard ChatGPT query, consisting of one question and one answer, costs between $0.01 and $0.10. Similarly, generating a high-definition image can cost between $0.10 and $0.20. While these amounts seem negligible in isolation, they become staggering when scaled to the billions of daily queries seen in 2026.
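To put an order of magnitude on that, here is a back-of-the-envelope calculation. The two-billion-queries-per-day figure is an assumption for illustration only; the article itself says only “billions”.

```python
# Back-of-the-envelope sketch using the per-query cost range quoted above.
# The daily query volume is an assumption, not a reported figure.
queries_per_day = 2_000_000_000
cost_low, cost_high = 0.01, 0.10      # dollars per query

daily_low = queries_per_day * cost_low
daily_high = queries_per_day * cost_high
print(f"Daily inference cost: ${daily_low/1e6:.0f}M to ${daily_high/1e6:.0f}M")
print(f"Annualised: ${daily_low*365/1e9:.1f}B to ${daily_high*365/1e9:.1f}B")
```

Even at the low end of the assumed volume and cost, inference alone runs to tens of millions of dollars a day.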
This is explained by the underlying infrastructure, particularly the Graphics Processing Units (GPUs) supplied by players like Nvidia. These chips can cost tens of thousands of dollars to purchase and several dollars per hour via cloud access.
OpenAI, like its competitors, depends on tens of thousands of these GPUs running continuously in massive data centers. According to some estimates, the necessary investments will reach hundreds of billions of dollars by the end of this decade.
As early as the late 2010s, it became clear that a purely non-profit model could not meet such capital intensity. This is why OpenAI adopted a hybrid status in 2019, allowing it to raise funds while maintaining control through a foundation. It was a first foray into the market economy, albeit one tempered by the ambition to resist investor demands.
Brutal acceleration with ChatGPT
However, at the end of 2022, the chatbot ChatGPT radically changed the game, attracting 100 million users in just two months, before surpassing 900 million weekly users by early 2026.
OpenAI’s revenue surged from approximately $200 million (€173.15 million) in 2022 to over $10 billion (€8.65 billion) in 2025 – more than a fifty-fold increase in three years.
This exponential growth was accompanied by the implementation of a business model with multiple revenue streams. For individuals, OpenAI offers paid subscriptions (ranging from $20 to $200 per month). However, the bulk of the revenue comes from enterprises, via subscriptions priced between $25 and $60 per user per month. A company with 10,000 employees thus represents several million dollars in annual revenue.
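The arithmetic behind that last claim is simple enough to check directly with the quoted per-user prices:

```python
# Quick check of the 10,000-employee example using the quoted price range.
employees = 10_000
low, high = 25, 60                    # dollars per user per month
print(employees * low * 12)           # 3,000,000 dollars per year
print(employees * high * 12)          # 7,200,000 dollars per year
```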
Corporate money
OpenAI additionally bills for the use of its models by companies that integrate them directly into their own solutions. Every use is metered, often on a massive scale. An application processing a million queries a day can generate tens of thousands of dollars in monthly billing.
Finally, a growing portion of revenue comes from strategic agreements, notably with Microsoft, which integrates OpenAI technologies into its products under the Copilot brand.
It is the sum of these flows – subscriptions, licences, third-party usage, and partnerships – that allowed OpenAI to reach approximately $1 billion in monthly revenue in 2025. Yet, this commercial rise masks an intrinsic economic fragility.
A gigantic cash-burning machine
Despite sharply rising revenues, OpenAI remains structurally loss-making. In the first half of 2025, the company reportedly generated approximately $4.3 billion in revenue while recording losses between $7 billion and $13 billion – at the upper end, more than $2 billion in losses every month. In total, cumulative losses could exceed $140 billion (€121.19 billion) between 2024 and 2029.
This drift is explained by the very nature of OpenAI’s business model, where every interaction incurs a cost alongside gargantuan necessary investments. Beyond infrastructure, Research and Development (R&D) is a major expense. To stay in the technological race against an increasingly competitive environment, OpenAI reportedly invested nearly $16 billion in R&D in 2025 alone.
To this is added the cost of human resources, which is sometimes extraordinary. While base salaries for the most in-demand AI experts range from $250,000 to $700,000 per year, their total compensation – including stock and bonuses – frequently exceeds $1 million. In some cases, annual compensation even exceeds $10 million. Here again, bidding wars from competitors like Meta force OpenAI to match these offers for fear of seeing its key talent vanish.
Nearing bankruptcy?
In short, OpenAI’s business is not enough to cover its costs, to the point that some analysts suggest that at this rate, it could be forced to file for bankruptcy as early as 2027. Recourse to external financing is therefore indispensable to cover these losses.
To sustain its growth, OpenAI has already raised approximately $58 billion since its inception, including more than $13 billion from Microsoft. In 2025, an exceptional funding round reportedly raised up to $40 billion more, pushing its valuation to several hundred billion dollars.
At the end of March 2026, a new $122 billion funding round – notably involving Amazon ($50 billion), Nvidia, and SoftBank ($30 billion each) – brought the valuation to $852 billion (€737.6 billion). Yet, these amounts remain insufficient given the requirements.
Industrial dependency
Dependency on industrial partners appears particularly problematic. Microsoft provides OpenAI with its cloud infrastructure via Azure, while Nvidia plays a key role upstream by providing GPUs. Much like the Gold Rush era, when shovel sellers grew rich at the expense of prospectors, it is the infrastructure providers in the AI sector making a fortune, not the model designers.
In practice, every AI query generates revenue for infrastructure providers, amounting to a form of “invisible tax” captured upstream.
OpenAI’s economic tensions have spilled over into its corporate governance. The hybridisation of a public interest mission with private financing mechanisms resulted in a complex structure. A non-profit foundation controls a for-profit “public benefit corporation”, which is funded by investors and tasked with raising capital and developing activities – all while theoretically remaining subordinate to the foundation’s public interest mission. This construction, designed to avoid purely financial logic, quickly fuelled tensions between different stakeholders.
Elon Musk’s departure in 2018 was the first signal of a strategic disagreement. In 2020, several researchers left OpenAI to found Anthropic, citing differences over safety and governance. However, it was primarily the crisis of November 2023 that fully revealed the system’s fragilities, when the board of directors suddenly announced the firing of Sam Altman, citing a lack of transparency in his communications.
Within hours, the situation spiralled into an open crisis. Nearly all employees threatened to leave the company if Altman was not reinstated. Microsoft, the main partner and investor, publicly supported Altman and even discussed the possibility of hiring him and his teams. Faced with this pressure, the board was forced to reverse its decision within days. Sam Altman was reinstated, and the board’s composition was profoundly overhauled. This episode highlighted internal tensions, specifically the difficulty of making divergent logics coexist within the same company: ethical posturing, industrial imperatives, and investor demands.
Intensifying competition
In addition to these internal constraints, competitive intensity is particularly fierce.
Google, whose researchers invented the transformer architecture underpinning modern generative AI, is making rapid progress with Gemini. Anthropic, with Claude, has established itself in certain segments, particularly programming, while emphasising safety.
China’s DeepSeek has claimed to use less expensive processors. France’s Mistral AI advocates for a frugal approach and European digital sovereignty. In a sign of this shifting landscape, Apple – which initially partnered with OpenAI to include ChatGPT for certain Siri features – has chosen to replace it with Gemini.
In this context of ecosystem reorganisation, OpenAI’s position, while still central, is being challenged. Intensifying competition reinforces the need for ever-greater financial resources.
The stock market: lifeline or mirage?
OpenAI’s Initial Public Offering (IPO) is presented as a response to these constraints: a way to fund massive investments and consolidate a weakened competitive position. An IPO could raise between $50 billion and $100 billion by selling 10% to 20% of the capital. Such an operation would constitute one of the largest in the history of financial markets.
However, this transformation involves delicate trade-offs. A listed company is subject to profitability and transparency requirements that may clash with the experimental nature of artificial intelligence. Added to this is the persistent dependence on Microsoft and Nvidia, which limits the company’s strategic autonomy.
Most importantly, there is no indication that an IPO would suffice to resolve OpenAI’s structural problems. At best, without a significant shift in the business model, it would only delay its bankruptcy by a few years. The economic model of generative AI remains fundamentally unstable today.
Consequently, a civilisational question arises: can we entrust the development and direction of such a technology solely to financial markets? Can we imagine Elon Musk or Mark Zuckerberg personally owning the equivalent of one or more atomic bombs? OpenAI’s IPO will not provide the answer alone. However, it will constitute one of the first large-scale tests.
This article is republished from The Conversation under a Creative Commons license. Read the original article. This article was originally published in French.
While Donald Trump continues to wage war abroad, a new front has opened up. One fought not with missiles, but with AI-generated images deliberately deployed as weapons of propaganda.
Perhaps the most famous moment came at the peak of the devastation in Gaza, when Trump’s White House released a video showing a reimagined territory – now a Trump-branded resort.
It came with the obligatory gold lettering of his name, and a gold statue for good measure, as if the Midas reference could not get more on the nose.
For the first year or so, Trump had no rival in the use of AI as a propaganda tool.
Liberals were too scared to use it because the political ecosystem in Silicon Valley had begun to feel so antithetical to the modern project on the left. Even conservatives and populists outside of the United States were not yet confident in how best to use it.
The concern for anyone looking to deploy fabricated images like these is that they will be accused of trying to manipulate reality.
Deepfakes are effective, but once the gimmick has been discovered and people connect the dots back to whoever published the video, credibility is gone.
But that’s not how Donald Trump uses these AI-generated videos.
Constructing reality
When Trump posts, his followers are not expecting to see literal reality. The effect is a bit more impressionistic.
Trump is posting these to generate emotion.
His followers are not seeing actual truth, but a version of reality that they want to believe is true.
The illusion is powerful. Most people are very willing to dismiss what they see in front of their eyes, but convincing them that what they want to be true is actually a lie is nigh on impossible.
Trump’s audience is predisposed to believe in the reality of a Gaza remade as a Trump resort, in which the United States can be the saviour, the creator of long-needed peace in the region.
On the world stage, no propaganda apparatus could come close to the emotional power that these posts generate.
The goal wasn’t total revolution, but to encourage young protestors toward the mutual goal of reducing Western ties to Israel.
When it came time for Iran to begin their online battle with Trump, they were prepared.
AI-generated videos began popping up, mostly from accounts run by Iranian embassies in developing countries, which quickly found their way into the centre of global discourse.
One such Iranian video shows a biblical Jesus punching Trump into the fiery pits of hell for his blasphemy. Within 24 hours, it had amassed more than 17 million views.
It’s being called ‘slopaganda’ – AI-generated slop weaponised for political ends.
Some feature a blocky, orange-faced Trump cast as ageing and isolated, his MAGA base squabbling around him. They are absurd, darkly funny and engineered to travel.
It is perhaps the most powerful form of propaganda.
It does not seek to convince anyone of something they do not already believe, but it gives them a new ally in their fight.
The truth was never the point
Western liberals typically have no common cause with the Islamic Republic, but they now find themselves as strange bedfellows against a common enemy.
We now live in a world in which the most powerful political communication operates entirely outside the question of truth.
The concern that many of us had at the advent of AI was that deepfakes would be so quickly deployed that they would render us unable to tell the difference between fiction and reality.
Instead, this war is giving rise to something far more important. It turns out that we never cared about the truth to begin with.
Nobody watching or creating any of these AI memes cares whether it’s real, but only that it affirms how they already feel about a conflict that is costing more lives by the moment.
We spent years worrying about whether AI could fool us. It turns out the harder question is whether we ever wanted to be told the truth at all.
You probably type the @ sign (AKA the “at symbol”) every day without thinking too much about it. It’s part of every email address and you see it constantly on social media, where it’s used to mention or tag users. So recognizable is its curled form that it has even inspired modern platform logos, such as those used by Threads, though these uses are purely stylistic.
Depending on where you are, the @ symbol isn’t just a technical character. It’s become a part of culture and has earned a variety of endearing names around the world.
In many languages, the names are visual. The curled shape of the symbol invites comparison: animals, food, or familiar objects. In Italy, it’s called “chiocciola,” which means “snail.” In Finnish, it’s seen as “kissanhäntä,” meaning “cat’s tail.” The Russians envision it as a “little dog,” while the Czechs call it “pickled herring.”
In other cases, the name stays closer to its function. Some languages – like Hindi and Arabic – use a direct version of “at,” either translated or adapted phonetically.
In South Asian English usage, for example in India and Pakistan, it is also read as ‘at the rate of,’ particularly among millennials like me and older generations, reflecting earlier commercial usage, though ‘at’ remains the standard modern reading.
There’s no single pattern, and that’s part of what makes it interesting. The symbol itself is fixed, but the meaning people attach to it is flexible.
The 25 names for the @ sign around the world
Regardless of language and culture, the @ sign is one of the most recognizable symbols around the world. Here’s how different people interpret it:
English – “at”
Spanish – “arroba”
Portuguese – “arroba”
French – “arobase”
German – “Klammeraffe” (“spider monkey”)
Italian – “chiocciola” (“snail”)
Chinese – “小老鼠” (“little mouse”)
Russian – “sobachka” (“little dog”)
Polish – “małpa” (“monkey”)
Swedish – “snabel-a” (“elephant trunk A”)
Vietnamese – “bent A” / “hooked A”
Romanian – “arond” or “coadă de maimuță” (“monkey tail”)
Japanese – “アットマーク” (“at mark”)
Korean – “골뱅이” (“sea snail”)
Turkish – “et işareti” (“at sign”)
Greek – “παπάκι” (“little duck”)
Dutch – “apenstaartje” (“little monkey tail”)
Hebrew – “שטרודל” (“strudel”)
Hindi – “एट” (“at”)
Arabic – “آتْ” (“at”)
Finnish – “kissanhäntä” (“cat’s tail”)
Hungarian – “kukac” (“worm”)
Welsh – “malwoden” (“snail”)
Czech – “zavináč” (“rollmop”, a pickled herring)
Estonian – “ät” or “kringel” (“pretzel”)
Where does the @ sign come from?
The @ sign now seems inseparable from email, but its story started long before inboxes – or the internet itself – existed. Historians have traced its origins back centuries, when merchants used it as a shorthand in trade. In Spain and Portugal, the word “arroba” referred to a unit of weight, and that meaning still lingers today.
It wasn’t until 1971 that the @ sign took on its modern role. That year, engineer Ray Tomlinson sent the first email between two computers connected to ARPANET, the early version of the internet. To separate the user name from the host, he chose the @ symbol, which was rarely used at the time.
Tomlinson later described it as a practical choice because the character indicated that a user was located at a specific host computer. That choice became one of the most widely used conventions in modern communication.
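That convention is unchanged today: everything before the @ names the mailbox, everything after it names the host. A trivial sketch, using a made-up address, of how software still splits on it:

```python
# The user@host convention Tomlinson chose in 1971, applied to a made-up address.
address = "ray@example-host.arpa"
user, host = address.split("@", 1)   # split once, on the first @
print(user)   # ray
print(host)   # example-host.arpa
```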
The invention of modern email? Simply “a neat idea”
Years later, talking about the invention of email, Ray Tomlinson described it simply as “a neat idea.” It would have been uncharacteristic for the engineer to emphasize just how much his idea changed the world. His daughter Suzanne described him as a humble person, despite his achievements: “He had a unique sense of humor and incredible intellect. Although he received an enormous amount of recognition for the creation of email, he always remained very modest.”
Email turns 55
This year marks 55 years since Ray Tomlinson sent that first email. He said the content was "entirely forgettable," and couldn't remember the exact date he sent that email. However, there's one date we can be certain of: Tomlinson's birthday, April 23.
April 23 is now known as Email Day, a holiday initiated by email deliverability company ZeroBounce. It honors Ray Tomlinson and the lasting power of his invention.
Note: Some names and translations of the “@” symbol are informal, culturally descriptive, or visual nicknames rather than standardized linguistic definitions. This article is based on material provided by ZeroBounce, with additional editorial review.
As artificial intelligence becomes increasingly embedded in daily life, the West Health-Gallup Center on Healthcare in America reports that 25% of Americans have used an AI tool or chatbot for health information or advice, mainly as a supplemental tool for their care. Over half of recent users say they have used AI because they prefer to research on their own before or after seeing a doctor.
These findings are from a nationally representative survey of more than 5,500 U.S. adults conducted Oct. 27-Dec. 22, 2025, using the Gallup Panel.
More Americans Use AI to Supplement Healthcare Visits Than to Replace Them
About 70% of U.S. adults say they have used an AI tool or chatbot for any purpose, while one in four (25%) say they have used it to gather healthcare information or advice. This aligns with what other studies have found about AI use for health-related purposes.
Those who report using AI for health information or advice in the past 30 days often use it to supplement traditional healthcare experiences, with 59% saying they use AI tools to research on their own before visiting a doctor and 56% using AI to research after visiting a doctor.
A smaller but meaningful share of Americans use AI when faced with cost, access or quality barriers. For example, 14% of those who have recently used AI-generated health information say they used it because they were unable to pay for a doctor visit, 16% because they could not access a provider, and 21% because they felt dismissed or ignored by a provider in the past.
Regardless of the reason, almost half of Americans who have used AI for healthcare information (46%) say the AI tool or chatbot made them feel more confident when talking with or asking questions of a provider. Others claim that it helped them identify issues earlier (22%) or avoid unnecessary medical tests or procedures (19%).
The most frequently reported AI tool used for these purposes is general conversational AI systems such as ChatGPT or Copilot (61%), followed by AI tools embedded within web searches, such as Google AI summaries (55%).
Self-Directed Research Drives AI Use for Health Info, but Motivations Vary
While speed and information seeking are the dominant reasons recent users of AI-generated health information report turning to AI as part of their healthcare journey, reasons for AI use vary by age and income.
Younger adults are more likely than older adults to report using AI for self-directed research. For example, 69% of recent users aged 18 to 29 say they use AI to research on their own before seeing a doctor, compared with 43% of those aged 65 and older. Although more common among younger adults, self-directed research is also prevalent among older adults, with more than four in 10 aged 65 and older using AI for this purpose.
Income is most strongly linked to AI use when cost, access and quality barriers are involved. For example, among adults in households earning less than $24,000 annually, 32% say they have used AI because they could not pay for a doctor’s visit, compared with 2% among those earning $180,000 or more.
Top Types of Health Information Americans Ask AI About
When asked about the specific types of health information or advice they have asked AI for, Americans most often report using AI to answer everyday health questions. Among those who report having used AI for health information or advice in the past 30 days, over half (59%) say they have used an AI tool or chatbot for nutrition or exercise questions, and a similar share (58%) say they have used it for physical symptoms.
Beyond gathering information on nutrition and health symptoms, AI has helped users make sense of clinical information and prepare for appointments with healthcare providers. For instance, 46% have used AI to understand medication side effects, 44% to interpret medical information, and 38% to research a diagnosis or medical condition.
Some Americans Use AI Instead of Seeing a Healthcare Provider
Although most Americans who report using AI-generated health information or advice say they use AI to gather information that supplements traditional care, some report forgoing healthcare visits because of AI-generated advice.
Fourteen percent of recent users say the AI information or advice they received led them to skip a provider visit in the past 30 days. When projected to the entire adult population, this represents an estimated 14 million U.S. adults who did not see a provider because of the AI-generated health information or advice they received.
Even as some Americans report not seeing a provider after receiving AI-generated health information, trust in that information remains mixed. Among those who report having used AI for health information or advice in the past 30 days, roughly one-third say they trust it (33%), one-third neither trust nor distrust it (33%), and one-third distrust it (34%). However, only 4% say they strongly trust the accuracy of AI-generated health information, suggesting that many Americans are making healthcare decisions based on it without full confidence in its accuracy.
Concerns about safety also emerge among some users. About one in 10 who report using AI for health information or advice in the past 30 days (11%) say AI recommended healthcare information or advice that they believed was unsafe.
Implications
AI is part of how some patients navigate their healthcare experiences, serving as a routine step before or after an interaction with a provider. As more Americans use AI to research symptoms, diagnoses and medications in advance, healthcare visits may become more focused and informed, potentially improving care experiences. Using AI after healthcare visits to better understand treatment plans, risks and when to follow up with a provider may also shape how patients manage their care. In a system facing time constraints and workforce pressures, AI tools that help patients clarify questions and review medical information may play a productive role in shaping the care experience. For some Americans, AI is already serving that function.
However, a small but notable share of Americans say they did not see a provider they otherwise would have seen after receiving AI-generated health information or advice. Whether AI tools can appropriately substitute for certain healthcare interactions, and under what circumstances, remains an important question as use of these tools continues to grow.
As AI becomes more integrated into how patients seek and use health information, understanding when it may complement care and when it may serve as a substitute will require continued attention.
The broader picture is one of a healthcare landscape in transition, with AI shaping how many Americans prepare for, engage with and reflect on their healthcare experiences. As Americans utilize AI-generated health information or advice, including in contexts where questions about accuracy and appropriate use may arise, healthcare systems will need to adapt to how these tools are being incorporated into the healthcare journey.
Note: This research was conducted in partnership with West Health through the West Health-Gallup Center on Healthcare in America, a joint initiative to report the voices and experiences of Americans within the healthcare system. Explore more of the data and insights at westhealth.gallup.com.
Survey Methods
Results are based on a Gallup Panel™ study of 5,660 U.S. adults aged 18 and older, all members of the Gallup Panel, conducted Oct. 27-Dec. 22, 2025. Gallup uses probability-based, random sampling methods to recruit its Panel members.
For results based on the sample of U.S. adults, the margin of sampling error is ±2.1 percentage points at the 95% confidence level.
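For readers curious how that figure relates to the sample size, the sketch below computes the textbook worst-case margin for a simple random sample of 5,660 respondents and the design effect implied by the reported ±2.1 points. The design-effect inference is our illustration of the impact of the weighting described below, not Gallup's published calculation.

```python
# Worst-case (p = 0.5) margin of error at 95% confidence for n = 5,660,
# and the design effect implied by the reported +/-2.1 points.
# The implied design effect is an illustration, not Gallup's stated method.
import math

n, z, p = 5660, 1.96, 0.5
srs_moe = z * math.sqrt(p * (1 - p) / n)                   # simple random sample
print(f"SRS margin of error: {srs_moe * 100:.1f} points")  # ~1.3

reported_moe = 0.021
print(f"Implied design effect: {(reported_moe / srs_moe) ** 2:.1f}")  # ~2.6
```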
Gallup weighted the obtained sample to make it representative of the U.S. adult population on gender, age, race, Hispanic ethnicity, education, political party affiliation and region. Demographic weighting targets were based on the most recent Current Population Survey figures for the aged 18 and older U.S. population. Party affiliation weighting targets are based on an average of the three most recent Gallup telephone polls.
In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of public opinion polls.
Originally published by Gallup and republished with permission.