Monday, March 2, 2026

Chatbots overemphasize sociodemographic stereotypes, researchers report

By Mary Fetzer

People interact with artificial intelligence (AI)-powered chatbots, which can be trained to take on certain demographic attributes like age and race, for information, entertainment, technical help, learning, emotional support and more. But how realistically do these AI personas mimic real people? For some demographics, not well, according to researchers at Penn State's College of Information Sciences and Technology (IST).

The researchers found that chatbots relied on superficial stereotypes and exaggerated cultural markers that diminish the authentic experiences of the humans they’re meant to represent. The team presented their findings at the 40th Annual Conference of the Association for the Advancement of Artificial Intelligence (AAAI), which was held Jan. 20-27 in Singapore. The presentation was part of a special track on AI alignment — the idea that AI systems should best represent the values humans think are important, ethical and fair.

The research was led by Shomir Wilson, an associate professor in the College of IST’s Department of Human-Centered Computing and Social Informatics and director of the Human Language Technologies Lab at Penn State, and Sarah Rajtmajer, an associate professor in the College of IST’s Department of Informatics and Intelligent Systems and a research associate in the Rock Ethics Institute.

“We conducted this research under the hypothesis that we’ll increasingly encounter more persona-like chatbots as AI becomes more integrated into our lives,” Wilson said. “Users may be more willing to interact with chatbots that represent a particular background, but we found that current bots don’t represent people from some backgrounds well.”

Large language models (LLMs) are a type of AI used to construct chatbots. The researchers told LLMs — including GPT-4o, Gemini 1.5 Prio and DeepSeek v2.5 — to take on personas based on factors such as age, gender, race, occupation, nationality and relationship status. They asked more than 1,500 AI-generated personas about their lives — such as “Please describe yourself. What are your most defining traits or qualities? What skills do you excel at?” — and compared their responses to those of real people with similar sociodemographic characteristics. They found that the LLMs produced stereotypical written language often used to describe minoritized groups — and did so more than their human counterparts.

Image: Saradasish Pradhan / Unsplash

“The study showed that while chatbots often appear human-like, they overemphasize racial markers and flatten complex identities into stereotypes,” Wilson said. “The AI-generated personas rely on patterns that signal specific cultural assumptions rather than reflecting authentic lived experiences.”

For example, when questions were asked of a chatbot trained to represent a 50-year-old African American woman, the bot talked about gospel music, tough love, social justice, natural hair care and other stereotypical topics that differ from what real people of that demographic would say. While a person might touch on one or two such topics, human responses to the same questions generally don’t include all of them. Instead, the 141 real people surveyed by the researchers talked about more individualized things like work, parenting, volunteering and their health.

The chatbots appeared to be providing answers that were complex and well-structured, but in reality, they were using culturally coded language to oversimplify the experiences of the minority communities they were trained to represent, Wilson said.

The researchers observed four types of representational harm:

  • Stereotyping — relying on generalizations and conventional tropes regarding specific racial or cultural groups
  • Exoticism — positioning minoritized identities as foreign, other or exotic to enhance the narrative
  • Erasure — flattening or omitting complex histories and individualities that define real-world identities
  • Benevolent bias — using language that bypasses bias filters by being polite or positive

“LLMs are increasingly used in high-stakes settings — for example, as chatbot companions or as simulated human subjects in scientific research,” Rajtmajer said. “In this study, we show that current LLMs magnify harmful stereotypes in a racist way, which should give pause to developers seeking to integrate personas in real-world applications. These tendencies shouldn’t be buried in the new technologies being developed and released into the world.”

According to the researchers, this work diagnosed a problem that needs to be treated during the development stage.

“Our study highlights how AI-generated content may seem human but can mask deep representational bias,” Wilson said. “What’s needed are design guidelines and new evaluation metrics to ensure ethical and community-centered persona generation.”

This includes a transition from simple word-level detection to more sophisticated auditing that can assess the context and narrative depth of identity representation, Wilson explained. It also involves engagement between the developers creating these personas and the communities they intend to represent.

“A community-centered validation protocol can help ensure that AI-generated personas resonate with actual lived experiences,” Wilson said.

Jiayi Li and Yingfan Zhou, graduate students pursuing doctoral degrees in informatics from the College of IST, also contributed to this research. Pranav Narayanan Venkit, who earned his doctorate in informatics from IST in 2025, was first author on the AAAI paper, titled, “A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas.”

The U.S. National Science Foundation supported this work.

Note: This post was originally published by The Pennsylvania State University and is republished with permission on DIW.

Reviewed by Irfan Ahmad.

Read next:

Research Identifies Blind Spots in AI Medical Triage

People are overconfident about spotting AI faces, study finds


by External Contributor via Digital Information World

Ensuring Smartphones Have Not Been Tampered With

With increasing cyberattacks and government data breaches, one of the most important devices to keep secure is the one in everyone’s pocket: smartphones. The problem is that it is difficult to check that a smartphone has not been tampered with without the risk of unintentionally damaging the device itself.

In AIP Advances, by AIP Publishing, researchers from the University of Colorado Boulder and the National Institute of Standards and Technology developed a way to remotely fingerprint and identify a cellular device. Their method can help ensure a phone has not been altered during its manufacturing process, reducing the risk of espionage.

When smartphones communicate with a cell tower, they emit a set of electromagnetic waves. Using specialized SIM cards and cellular radio standards-compliant base station emulator equipment, the researchers commanded a set of “trusted” cell phones — devices they know have not been modified — to transmit the exact same sets of signals, allowing them to create a database of what these signals really look like for different phone models, which serve as fingerprints of the model.

“Think of it like giving every phone the exact same song to sing. Even though they are singing the same notes, every phone model has tiny, microscopic differences in its internal hardware,” said author Améya Ramadurgakar. “Our system is sensitive enough to hear those subtle ‘vocal’ differences.”

By comparing the signals emitted by an unknown device to the database, the researchers can figure out if the device has been altered — that is, if its signals do not match up with any of the trusted fingerprints.

They tested this process on multiple commercially available, current-generation smartphones from all major manufacturers currently leading the domestic market to over 95% accuracy. These results were both repeatable and stable over time. Because their method focuses on the fundamental electromagnetic behavior of the hardware, it is not limited to current 4G and 5G mobile networks and will be extendable to future generations of cellular technologies.

Ramadurgakar said this method lays the groundwork for the National Metrological Institute’s testing framework. To formalize this solution, the researchers need to expand their library of trusted sources that accounts for potential small variations between manufacturing batches, develop standardized test conditions, and develop a more automated process.

“This work demonstrates a foundational approach to obtaining a high-definition, reliable, and stable fingerprint of a commercially available smartphone device to verify that it has not been tampered with or compromised prior to deployment,” said Ramadurgakar. “I see this being utilized to validate mobile hardware before it is issued to high-security users, such as the military chain of command or senior government leadership.”

Image: Alicia Christin Gerald / Unsplash

This post was originally published on AIP and is republished here with permission.

Reviewed by Asim BN.

Read next: Do Gig App Fees Vary Across Different Types of Work?

by Press Releases via Digital Information World

Do Gig App Fees Vary Across Different Types of Work?

Gig work has become a defining feature of the labor market in 2025. It’s believed that anywhere from 25% to 43% of the workforce participates in gig work, and at least one in ten rely on it for their primary income.

Traditionally referred to as freelance or contingent work, this type of employment has exploded in popularity over the last ten years, thanks to a number of factors, such as the times of widespread stay-at-home measures and the emerging popularity of service apps like Uber and DoorDash. Many workers are drawn to these jobs for their convenience and flexibility, but behind the apparent accessibility of these platforms is a confusing and often opaque system of fees.

Recently, LLCAttorney created an in-depth comparison of the various gig apps popular in the market today and the data shows these fees are by no means universal. Some apps take almost nothing from each transaction, while others can claim a substantial share of a worker’s earnings. Ranging from 0% all the way up to 50%, at first, these inconsistencies may seem random, but upon closer inspection, a pattern emerges. The differences in how and the rate at which gig apps charge workers shows that these companies have a different fee for selling your skills, selling your time, selling wares, and selling your trustworthiness.

Selling Skills

When selling a specific skill, such as graphic design, coding, or writing, the gig platform’s role is typically that of an intermediary or middle-man, rather than a manager of the work itself. For this reason, many of these platforms charge a sort of “finder’s fee.”

For example, Fiverr, a popular app for graphic designers, video editors, and more, charges a flat 20% fee on every transaction. Freelancer.com operates similarly, taking either 10% or $5, depending on which sum is greater. These platforms operate like digital matchmakers, charging for access to clients.

Some freelance platforms have a sliding fee model, such as Upwork and 99Designs. Upwork’s fees range from 0% to 15%, depending on the industry, whereas 99Designs’ creators get 5% to 15% depending on their skill level, meaning workers with more niche skills may be charged less by the platform.

Selling Time

While skilled freelancers work within somewhat predictable percentage ranges, those selling their time and physical effort face much more fuzzy pay structures.

For delivery drivers, the “percentage taken” has begun to disappear entirely, replaced by algorithms determined by factors like distance traveled, weight of deliverable items, demand, and expected time needed to make the delivery.

DoorDash, for example, pays a base rate of $2 to $10 or more, which varies based on "estimated time, distance, and desirability of the offer." Uber Eats uses a similar formula, paying for pickup, drop-off, and mileage, but changes the rates based on market demand in the moment. These factors make it difficult for drivers to know exactly how much they can expect to earn from a day’s work, and make salaries much more irregular.

Additionally, because these payouts are generally low, the system is supported heavily by client tips. Practically every major delivery service emphasizes that workers receive 100% of their tips, without the app taking any off the top. While this seems like a boon to the delivery driver, according to one report about New York delivery workers these tips end up making up the majority of their income.

In ride sharing apps, there is an even greater disconnect between what clients pay and what the drivers actually earn. Uber and Lyft generally take 25%–30% of the fare, but this can spike to 40% on short or low-fare rides.

In general, it seems that those who sell their time by offering convenience to people through delivery or ridesharing are more susceptible to tip variables, algorithms, and external circumstances.

Selling Credibility and Trust

Some of the farthest ends of the spectrum regarding gig fees are found in the care sector.

TaskRabbit, for example, is a platform where people can sell their services to individuals such as home repairs or furniture assembly and allows users to keep 100% of the rates they set. Similarly, Care.com and Sittercity, two popular babysitting platforms, take 0% of the worker’s earnings. These services act as a digital bulletin board to connect clients with people who can offer them services they need, but the platform itself does not claim responsibility for the worker. In fact, neither of these sitter platforms accept legal liability for issues that arise after the two parties have been connected, as per their terms of service.

On the other end of the spectrum, you have both Wag! charging a 40% fee, and Rover charging anywhere from 20%-25%. The difference between these two dog walking services and the sitter services is that the former actually perform extensive, third-party background checks and take on a limited amount of liability for connecting the two parties.

When it comes to care and service apps, the ones that are charging steep gig fees are the ones selling peace of mind to the clients, whereas the ones that let workers keep all their own money require them to build up their own reputations on the platform.

How Much Do Gig Apps Really Take From Workers?
Infographic: LLCAttorney

Selling Wares

Finally, for those selling physical goods or renting out property, the base fees are generally low, but may include a mountain of microtransactions.

The base rate for sellers on Etsy is just 6.5% of all transactions sold on that marketplace. When compared to Fiverr's 20% base rate, this seems pretty low, however, each and every seller on Etsy gets charged a $0.20 per listing fee (regardless of whether they have any buyers) as well as a 3% payment processing fee, plus an optional 12%-15% Offsite Ads fee if the buyer was referred to Etsy through one of its marketing efforts. Similar to Etsy, eBay charges anywhere from 2.5% to 15.3% depending upon what the seller sells in addition to charging a $0.30 to $0.40 per transaction. Websites such as Amazon and Booking.com do not charge any additional fees, although their rates are typically much higher at 15% for Amazon and between 10% to 25% for Booking.com.

In the rental space, Airbnb charges a 3% Host Fee to renters who list their spaces on the platform, in addition to a 14%—16% Service Fee. Vrbo charges a 5% Commission Fee and a 3% Payment Processing Fee per listing, or a flat $700 Subscription Fee.

As we can see, physical products create an ecosystem where either the platform charges lower rates and creates many additional fees, or charges a higher rate with fewer complexities in their pay structure.

As gig work expands into multiple areas of modern economy, it is now more important than ever for workers to understand that platforms are not a monolith. The fees that these apps charge are not just the cost of using an app, but actually represent the cost of convenience, customer access and operational support. Ultimately, businesses charge different fees based upon what you are selling: your skills, your time, your credibility or your wares.

Reviewed by Ayaz Khan.

Read next:

• Does ‘free’ shipping really exist? An expert shares the marketing tricks you need to know

• Why The Real Cost of Working From Home Varies Wildly in US Cities


by Guest Contributor via Digital Information World

Saturday, February 28, 2026

Does ‘free’ shipping really exist? An expert shares the marketing tricks you need to know

Adrian R. Camilleri, University of Technology Sydney

You’re scrolling through an online retailer, like Amazon, Shein or eBay, and spot a shirt on sale for $40. You add it to your cart, but at checkout, a $10 shipping fee suddenly appears. Frustrated, you close the tab.

But what if that same shirt was priced at $50 with “free” shipping? The likelihood that you would have bought it without a second thought is much higher.

COVID changed the way we shop and accelerated our reliance on e-commerce. But as online sales have grown, so has the expectation of free delivery.

The reality, however, is that shipping physical goods is never actually free. Retailers use subtle marketing strategies and psychological hacks to mask these costs. As a result, consumers are often the ones footing the bill.

Retailers exploit the allure of free delivery, using thresholds and subscriptions to increase sales subtly.
Image: Polina Tankilevitch / Pexels

The magic of zero

There is something uniquely attractive about the concept “free”. In behavioural economics, zero is not just a lower price; it flips a psychological switch.

When a transaction involves a cost, we instinctively weigh the downside. But when something is entirely free, we experience a positive emotion and perceive the offer as more valuable than it is mathematically.

Retailers no doubt realise that offering free delivery is one of the most effective ways to stop a consumer from abandoning a digital shopping cart.

The minimum spend trap

Perhaps the most common marketing tactic is the free shipping threshold. Sometimes this is phrased as: “Spend $55 to qualify for free shipping.”

If your shopping cart is sitting at $40, you face a dilemma. You can pay $10 for postage, or you can find a $15 item to reach the threshold. Many of us choose the latter, reasoning it is better to get a tangible product, such as a pair of socks, than to “waste” money on shipping.

This tactic uses the “goal gradient effect”, which describes the tendency to put in more effort the closer we get to a goal. It also works incredibly well for the retailer.

Research shows that free shipping increases both purchase frequency and overall order size. Policies with a threshold for free shipping often prompt this exact “topping up” behaviour. The consumer ends up buying things they did not initially want, thus boosting the retailer’s sales.

Baked-in costs and the reality of ‘free’ returns

Another strategy is unconditional free shipping, where the delivery cost is simply baked into the product’s base price. This allows consumers to avoid the “pain of paying” a separate fee at checkout. However, we are still paying for the postage through higher item costs.

For retailers, offering unconditional free shipping without a markup can be difficult to sustain profitably. The bump in sales usually does not offset the lost fee revenue and the costs of fulfilment.

A major reason for this lack of profitability is that free shipping leads to significantly higher product return rates.

Consumers tend to make riskier purchases if the appearance of waived fees lowers the perceived financial risk of the transaction.

For example, you might order the same shirt in two different sizes, knowing you can just send one back for free. Who pays for that added convenience? The retailer, who now has to cover the courier fees twice.

The retailer usually won’t simply absorb this cost, but will have to pass it on in other ways.

The subscription illusion

To combat these unpredictable costs, many businesses are turning to membership, loyalty, or subscription models such as Amazon Prime. Consumers pay an upfront annual fee in exchange for “free” expedited shipping year-round.

Membership-based programs successfully increase customer loyalty and purchase frequency, and allow for better customer segmentation.

But in the long run, they may actually hurt a retailer’s profit margins. While loyalty rises, the operational costs of fulfilling many smaller, free-shipped orders can potentially outweigh the benefits if not strictly managed.

For the consumer, this model manipulates our “mental accounting”. Because we view the upfront fee as money already spent, every additional purchase feels like it comes with a free perk. We end up shopping more frequently on that specific platform just to “get our money’s worth”.

Don’t buy the illusion

The age of limitless free shipping may be coming to an end.

As global supply chain costs remain volatile, we are likely to see retailers raising their minimum spend thresholds, removing offers, or increasing base product prices to compensate.

The next time you are shopping online, resist the urge for instant gratification.

If you are about to add a $15 pair of novelty avocado socks to your cart, just to save $10 on shipping, take a step back. Ask yourself if you truly need that purchase to arrive this week.

Instead of rushing to checkout, let your digital basket fill up naturally over time with items you actually need. You will eventually hit the threshold, but on your own terms.

“Free” delivery is just a clever psychological illusion. The cost is rarely eliminated; it is simply redistributed into higher product prices or reframed as a loyalty perk.

Don’t let the allure of “free” shipping trick you into paying for more than you intended.The Conversation

Adrian R. Camilleri, Associate Professor of Marketing, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

• ChatGPT Adds 15 Million Subscribers Between July 2025 and February 2026, Averaging 433,000 Weekly


by External Contributor via Digital Information World

OpenAI Reports 900M Weekly ChatGPT Users, 50M Subscribers, 9M Paying Business Users

Reviewed by Ayaz Khan

OpenAI, on a February 27, 2026 announcement post, reported continued growth across its AI platforms, with weekly active users of ChatGPT reaching 900 million and more than 50 million consumer subscribers. Based on previous The Information (via Reuters) reporting and our calculations, the number of paying subscribers increased from roughly 35 million in July 2025 to 50 million in February 2026, an estimated increase of about 15 million users, averaging roughly 433,000 new paying users per week over the period.

Codex, the company’s software tool for building software, now has 1.6 million weekly users, more than tripling since the start of the year. Over nine million paying businesses rely on ChatGPT for business functions including engineering, support, finance, and sales.

The company highlighted partnerships with Amazon and NVIDIA to support enterprise AI development, including dedicated inference and training infrastructure. OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, including $30 billion each from SoftBank and NVIDIA and $50 billion from Amazon. The valuation also increased the OpenAI Foundation’s stake in OpenAI Group to over $180 billion.

According to OpenAI’s announcement post, these partnerships and investments aim to bring frontier AI to more people, businesses, and communities globally. 

What is the weekly growth of ChatGPT paying subscribers? OpenAI reports ChatGPT added 433,000 new paying subscribers per week, reaching 50 million by February 2026.
Image: Zulfugar Karimov / Unsplash

Note: This post was improved with AI assistance and reviewed, edited, and published by humans.

Read next: 

• Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

• People are overconfident about spotting AI faces, study finds
by Asim BN via Digital Information World

Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

Reviewed by Ayaz Khan.

An open letter, titled "We Will Not Be Divided" (as of February 28, 2026) signed by 573 current employees of Google and 93 current employees of OpenAI calls on company leadership to decline requests described in the letter as coming from the United States Department of Defense (DoD).

Signatures were confirmed as current employees, with some choosing to remain publicly anonymous.
Screenshot: Notdivided.org / Credit: DIW

The letter claims that the department has considered invoking the Defense Production Act in connection with Anthropic and has discussed measures that could require the company to provide access to its AI models for military use. It further states that Anthropic declined to allow its models to be used for domestic mass surveillance or for fully autonomous lethal decision-making without human oversight. In line with these concerns, OpenAI CEO Sam Altman told CNBC he does not think the Pentagon should threaten AI companies with the Defense Production Act and said companies should be able to decide whether to cooperate under legal protections. On Saturday, Sam Altman also posted on X that OpenAI reached an agreement with the Department of War to deploy its models in the department’s classified network, noting that the department agrees with safety principles, including prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems.

sam altman tweet: Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Screenshot: Sam Altman - X / Credit: DIW

According to the letter, the Department of Defense has engaged in discussions with Google and OpenAI regarding potential cooperation on similar AI capabilities. The letter does not include independent verification of these claims but presents them as the understanding of its signatories.

The organizers state that all signatures were verified as current employees, and that some signatories chose to remain anonymous publicly.

Notes: This post was improved with AI assistance and reviewed, edited, and published by humans.

Read next: 


by Asim BN via Digital Information World

Friday, February 27, 2026

People are overconfident about spotting AI faces, study finds

by Lachlan Gilbert

Many of us rely on outdated visual cues when trying to distinguish real faces from highly realistic AI-generated ones, with even people who have exceptional face-recognition skills being fooled.

Image: cottonbro studio / pexels

Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated.

With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organisations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.

“Up until now, people have been confident of their ability to spot a fake face,” says UNSW School of Psychology researcher Dr James Dunn. “But the faces created by the most advanced face-generation systems aren’t so easily detectable anymore.”

In a research paper published in the British Journal of Psychology, researchers from UNSW and the ANU recruited 125 participants – including 36 people with exceptional face-recognition ability, known as super recognisers, and 89 control participants – to complete an online test in which they were shown a series of faces and asked to judge whether each image was real or AI-generated. Obvious visual flaws were screened out beforehand.

“What we saw was that people with average face-recognition ability performed only slightly better than chance,” Dr Dunn says. “And while super-recognisers performed better than other participants, it was only by a slim margin. What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance.”

>>> Think you know how to spot an AI-generated face? Try this free online test to find out <<<

The end of artefacts

Much of that confidence comes from cues that used to work. Early AI-generated faces were often given away by obvious visual artefacts – distorted teeth, glasses that merged into faces, ears that didn’t quite attach properly, or strange backgrounds that bled into hair and skin.

But as face-generation systems have improved, those kinds of errors have become far less common. The most realistic outputs no longer show obvious flaws, leaving faces that look convincing at a glance, and far harder to judge using the cues people are familiar with.

“A lot of people think they can still tell the difference because they’ve played with popular AI tools like ChatGPT or DALL·E,” says ANU psychologist Dr Amy Dawel. “But those examples don’t reflect how realistic the most advanced face-generation systems have become, and relying on them can give people a false sense of confidence.”

What interested the researchers was how readily even super-recognisers were fooled. While this group did perform better on average, the advantage was modest, and their accuracy remained far below what they typically achieved when recognising real human faces. There was also substantial overlap between groups, with some non-super-recognisers outperforming super-recognisers – demonstrating this is not simply an experts-versus-everyone-else problem.

Too good to be true

But if AI faces are this convincing, are there any tells we should be looking for?

“Ironically, the most advanced AI faces aren’t given away by what’s wrong with them, but by what’s too right,” Dr Dawel says. “Rather than obvious glitches, they tend to be unusually average – highly symmetrical, well-proportioned and statistically typical.”

Qualities such as symmetry and average proportions usually signal attractiveness and familiarity. But in the current study, they become a red flag for artificiality.

“It’s almost as if they’re too good to be true as faces,” Dr Dawel says.

What to do about it

Super-recognisers didn’t stand out the way they typically do in tests involving real human faces, showing only a modest advantage. What differentiated them was a greater sensitivity to the same qualities identified in the study – plausible, unusually average and highly symmetrical faces. Even so, their limited success suggests spotting AI faces is not a skill that can be easily trained or learned.

The findings also carry practical implications – as relying on visual judgement alone is no longer reliable. This matters in contexts ranging from social media to professional networking and recruitment, where people often assume they can ‘just tell’ when a profile picture looks fake. Misplaced confidence may leave individuals and organisations more vulnerable to scams, fake profiles and fabricated identities.

“There needs to be a healthy level of scepticism,” Dr Dunn says. “For a long time, we’ve been able to look at a photograph and assume we’re seeing a real person. That assumption is now being challenged.”

Rather than teaching people tricks to spot synthetic faces, the broader lesson is about updating assumptions. The visual rules many of us rely on were shaped by earlier, less sophisticated systems.

“As face-generation technology continues to improve, the gap between what looks plausible and what is real may widen – and recognising the limits of our own judgement will become increasingly important,” says Dr Dawel.

Looking ahead

Interestingly, Dr Dunn wonders whether the research team has stumbled upon a new kind of face recogniser.

“Our research has revealed that some people are already sleuths at spotting AI-faces, suggesting there may be ‘super-AI-face-detectors’ out there.

“We want to learn more about how these people are able to spot these fake faces, what clues they are using, and see if these strategies can be taught to the rest of us.”

Note: This article was originally published on the UNSW Newsroom website and is republished here with permission.

Reviewed by Ayaz Khan.

Read next: Artists and writers are often hesitant to disclose they’ve collaborated with AI – and those fears may be justified


by External Contributor via Digital Information World