Thursday, March 5, 2026

Creator Economy Report: 51.5% of Creators Saw Earnings Growth, 48.7% Earn Under $10K, 76% TikTok Posts Get Under 1K Views

By Alessandro Bogliari

The U.S. creator economy is moving toward a period of massive growth in 2026, driven by recent developments in artificial intelligence, community engagement, and other factors. Content creators are now at the center of today’s media landscape, with a 10% CAGR in their global population and over $10.5M in projected brand spend for their work. Moreover, influencer marketing has become one of the most effective methods for leading brands to reach, engage, and convert consumers. 56% of Gen Z and 43% of Millennial users report that they now consider creator content more relevant than TV or film, and over 41% of Gen Zers in particular use social platforms as their primary search engine. The creator economy and the greater marketing industry are actively seeing a full-scale shift toward intent-based discovery powered by authentic creators, and now is the most crucial time to strategize how brands collaborate with these social-first voices.

This February, The Influencer Marketing Factory (TIMF) published its Creator Economy 2026 Report, which combines large-scale third-party platform data, contributed by HypeAuditor, with original survey research to illustrate the current state of the creator economy for marketers. With exclusive insights from over 5M creator accounts and 1,000 U.S.-based creator survey respondents, the 2026 Creator Economy Report is your new go-to source for key trends and marketing strategies for both brands and influencers.

1. Big Picture 2026 Creator Economy Trends

2026 will be the year of in-person creator activations and matured creator entrepreneurship. IRL (In Real Life) creator events are gaining momentum as powerful community-building experiences that also drive direct sales for both brands and creator-led businesses. These creator businesses are now fueled by venture capital more than ever before, from VC firms like Slow Ventures or dedicated creator funds. Besides typical digital products or consumer-packaged goods, creator media companies are making up the next wave of influencer entrepreneurship due in part to the accelerating popularity of microdramas and TV streaming.

On the brand side, retailers such as Sephora and GAP have launched specialized social commerce platforms and affiliate programs for creators to lean into users’ digital spending habits. Brands are also finding that LinkedIn can serve as a viable B2C marketing channel as new video tools on the platform attract consumer brands beyond a traditional B2B focus. Although older Gen Z and Millennial audiences are growing in size across social platforms, Gen Alpha is emerging as the next marketing frontier for brands, given their rising spending power and cultural influence. Of course, brands continue to look to creators for valuable trend insights and content strategy, which is why leading companies are increasingly welcoming influencers into C-Suite creative roles and prioritizing co-creation across major creator marketing campaigns.

2. Analyzing 5M+ Creators: Key Trends & Audience Insights

To deliver an accurate and comprehensive view of the creator economy, The Influencer Marketing Factory partnered with HypeAuditor to analyze creator performance, audience demographics, and content trends across over 5M creator accounts. The following are some of TIMF’s top findings, which examine engagement levels, active creator counts, popular content categories, and content performance among creators with predominantly U.S. audiences.

  • Maturing Audiences Across All Platforms: The largest audience segment across TikTok, Instagram, and YouTube is now 25-34, signaling a maturing creator economy and making this age group the primary target for cross-platform brand campaigns.
  • TikTok Boasts Highest Median Engagement Rate vs. Competitors: HypeAuditor’s data reveals that TikTok is the most democratized short-form video platform, with a steady median engagement rate (ER) across all audience sizes.
  • Short-Form ERs Dominate Long-Form: Short-form video delivers the strongest engagement across platforms in comparison to long-form content. According to TIMF, TikTok continues to deliver the strongest and most consistent median engagement rates, YouTube Shorts engagement tends to improve as creators scale, and Instagram Reels often sees engagement dip as follower counts rise.
  • Creator Visibility Challenge: 46.2% of Instagram creators, 76% of TikTok creators, 59.1% of long-form YouTubers, and 39.94% of YouTube Shorts creators receive fewer than 1K views per post, underscoring how difficult it remains to generate steady, scalable reach across platforms, regardless of follower count.
  • Instagram Shifts From Image-First to Video-First Format: According to HypeAuditor, Reels posting cadence grew by 3.8% from 2024 to 2025, all while image posts fell by 6.41%, signaling how creators who rely on static content are losing visibility in 2026.

3. The Influencer Marketing Factory’s 2026 Creator Economy Survey

TIMF surveyed 1,000 U.S.-based content creators to analyze their sentiment towards AI usage, brand deal compensation, and partnership structures. The following are some of the top takeaways from TIMF’s 2026 Creator Economy Survey:

  • Creator Revenue Diversification: While ad revenue is the top-earning revenue stream (21.6%) for U.S. creators, product/merch sales and affiliate marketing now represent a combined 21.2% of creator income, demonstrating the growing interest in self-owned revenue streams rather than dependence on brand deals and platform programs.
  • The Emerging Creator “Middle Class”: 48.7% of U.S. creators earn under $10K annually, 45.6% earn between $10K-$100K, and 5.7% earn $100K or more, signaling the emergence of a viable “middle class” in the creator economy in 2026.
  • Creators Prefer Partnership Stability: 44.9% of content creators value stability, consistency, and deeper brand alignment over one-off brand campaigns.
  • Over Half of All U.S. Creators Report Increasing Earnings: More than half (51.5%) of U.S.-based influencers achieved earnings growth year-over-year in 2025, a noteworthy statistic given the algorithm volatility and increased competition that defined last year.
  • Creators Point to Strategic Brand Building in 2026: The new wave of creator entrepreneurs is approaching, with video production (22.4%) and branding (20%) being top priorities for skill development and professionalization in 2026.

4. Key Quotes & Takeaways from Industry Experts

The following are exclusive quotes from top leaders and industry experts on what the creator economy will look like this year.

  • AI as a Co-Pilot For Human Creativity: “AI will be built into most workflows, helping marketers with tasks like finding and analyzing creators, forecasting performance, and testing messages. Since AI will be everywhere, the ones who succeed will be those who use it to speed up the work but still trust their own judgment, staying in the driver’s seat and leaving AI as a co-pilot. As AI spreads and many touchpoints start to feel generic, this mix of measurable impact and human, community-led content will be a key reason to invest more in influencers.” - Alexander Frolov, CEO & Co-Founder of HypeAuditor
  • Creators Building Sustainable Businesses: “Today, creators are spending less time trying to win on individual platforms and considerably more time building sustainable businesses. 2026 is cementing a shift toward ownership, diversification, and direct, authentic relationships with audiences. That means that tools that can help creators manage that complexity matter more than any single algorithm or platform decision.” - Alex Zaccaria, CEO & Co-Founder of Linktree
  • The Growing Influence of Creator IP: “The clearest patterns across the strongest social work this year, from brands and creators alike, revealed a shift away from ‘campaign thinking’ and toward programming thinking. The best brands no longer aim to win a single moment; they architect content systems that are serialized, character-driven, community-activated, globally scalable, and increasingly AI-powered. They are behaving less like advertisers and more like IP houses.” - Jared Carneson, Head of Global Social Media at Adobe
  • Content Creators vs. Creator Entrepreneurs: “Influencer marketing will shift toward creators who can demonstrate business impact, not just reach, meaning creators who understand audience trust, community, and conversion will outperform those relying on vanity metrics. Platforms will still matter, but creators who think cross-platform and off-platform will be the most resilient. The gap between “content creators” and “creator-entrepreneurs” will widen and the latter will define the next era.” - Gigi Robinson, Founder, Creator, & Author, Hosts of Influence
  • Why Niche Communities & Creator Storytelling Win Big: “Creators that hit key audiences and niche communities will find success in a year where brands are eager to go direct to consumers. Creators don’t just endorse, they produce, distribute, and contextualize the message for a specific audience. Trust lives inside communities, not mass reach...Products still need storytellers, and audiences still follow people they trust.” - Brooke Berry, Head of Creator Development at Snap Inc.

For more exclusive insights on influencer marketing, download TIMF’s 2026 Creator Economy Report with over 60 pages of free data and tips here.

Image: Ron Lach / Pexels

About author:
Alessandro Bogliari is a digital entrepreneur and growth marketer. He is the co-founder & CEO of the Influencer Marketing Factory, a global influencer marketing agency that helps brands and companies launch influencer marketing campaigns on TikTok, Instagram and YouTube. Alessandro is also a member of the Forbes Agency Council and the Fast Company Executive Board.

Reviewed by Asim BN.

Read next:

• Who’s Winning the AI Chatbot Race: OpenAI's ChatGPT, Google Gemini, or Anthropic Claude?

• New Survey Debunks Digital Detox Myth: 60% Never Switch Off, 45% Can’t Last 12 Hours Offline

• The Year of Efficiency: How Agencies Are Implementing AI in 2026 (Survey)
by Guest Contributor via Digital Information World

Who’s Winning the AI Chatbot Race: OpenAI's ChatGPT, Google Gemini, or Anthropic Claude?

By Adam Blacker | Apptopia

ChatGPT is still the biggest name in generative AI chatbots, but the gap is closing fast. The mobile app has lost US daily active users (DAU) for four consecutive months and global DAUs for three consecutive months. Between August 2025 and February 2026, ChatGPT’s share of daily active users among the top seven AI chatbot apps fell from 57% to 42% in the US and from 73% to 57% globally.


The biggest beneficiary is Google Gemini. Its US DAU market share doubled from about 13% to 25% over the period, while its worldwide share nearly tripled from 9% to 25%. Gemini is now the clear number two globally. Google’s distribution advantage, baked into Android and Search, could create a ceiling for everyone else.


Claude had the most dramatic February. Its US DAU market share roughly tripled in a single month, jumping from about 1.5% in January to nearly 4% in February. Worldwide, it doubled over the same stretch. Claude’s churn rate tells an even more compelling story: it fell from 55% in August to just 36% in February, the largest churn improvement of any app in the dataset. Claude had the highest churn rate of any app in August and is now tied with Grok for the second-lowest (36%), behind only ChatGPT (25%).

On Saturday, February 28, Claude hit #1 Overall in the US iOS App Store for the first time. Its daily US downloads on that day and March 1 were above those of all other gen AI chatbot apps, with the exception of ChatGPT. While its rank and downloads did accelerate as the intensity of Anthropic’s disagreements with the Pentagon came to light, the app had been steadily improving its performance since late January. Its success is largely a case of product advancement and app install campaigns, not necessarily consumer support in the face of government scrutiny.

“The churn data is the real signal here,” said Tom Grant, VP of Research at Apptopia. “Downloads can spike from a product launch or a viral moment, but a 20 percentage point improvement in churn over seven months means users are finding sustained value. Claude’s February surge looks less like a novelty bump and more like an inflection point.”


Grok, Elon Musk’s AI tied to the X ecosystem, also gained share steadily. In the US, Grok moved from 12% to over 15%, and globally from 4% to 6%. The US skew makes sense given X’s user base, but the worldwide growth suggests Grok is finding new audiences. Its churn rate dropped seven percentage points to about 36%, and Average Time Spent per DAU jumped from under 14 minutes to nearly 22.

Meanwhile, some apps that surged earlier in this window are giving back share. Perplexity peaked at about 6% US share in October but has since fallen below 2%, while its worldwide share held up relatively better, declining from a peak near 8% to about 4%.

Microsoft Copilot held steady in the US at around 10% share but saw its global share dip slightly, the only app where US and worldwide trends diverged in direction. This is the consumer-facing app, not the enterprise-integrated app, Microsoft 365 Copilot.

The through line across this data is that the AI chatbot market is fragmenting. Six months ago, ChatGPT commanded a near-supermajority of daily usage. Today, no single app has a share of over 50% in the US. Gemini’s distribution, Grok’s engagement gains, and Claude’s retention improvements are all credible threats to the status quo. We’ll continue updating these numbers each month.

Reviewed by Ayaz Khan.

Note: This post was originally published on Apptopia blog and is republished here with permission.

Read next: New Survey Debunks Digital Detox Myth: 60% Never Switch Off, 45% Can’t Last 12 Hours Offline
by External Contributor via Digital Information World

Wednesday, March 4, 2026

New Survey Debunks Digital Detox Myth: 60% Never Switch Off, 45% Can’t Last 12 Hours Offline

A new survey of 2,000 Britons aged 16+ shows that digital detoxing is talked about more than it is practised in the UK, with six out of 10 Brits claiming never to have taken a digital detox.

The report into attitudes towards the internet also revealed that ‘disconnecting’ doesn’t fit how modern life works because so many people living in the UK rely on having an internet connection. In fact, having internet access is largely viewed positively, with the top benefits in priority order listed as:
  • It’s provided me with more entertainment (60%)
  • Being online helps me to reconnect with friends and family (54%)
  • The internet has supported education and upskilling (35%)
  • Online digital connection has improved my access to healthcare and wellbeing resources (31%)
  • Having reliable, at-home internet access has allowed me to work remotely or flexibly (31%)

Further data supports how much the UK relies on a dependable internet connection: Britain can’t ‘switch off’ and isn’t planning to. Nearly half of respondents (45%) said that they would struggle to go without internet access beyond 12 hours, and 30% say they couldn’t live without the internet. 34% claimed that they wouldn’t want to do a digital detox.

On average, Brits estimated that being offline for a maximum of four days would be about their limit. A fifth claimed that they thought that between one and two days without the internet is as much as they could manage.

Britons also feel that being digitally dependent doesn’t make them miserable. A third believe they have a healthy balance of being offline and online and, overall, 31% said having access to the internet has made everyday tasks easier.

A quarter of people do try to limit their time online and 17% only go online when they really need to.

There are generational nuances in attitudes to living in an always-on culture.

As perhaps expected, younger generations live more of their lives digitally than older generations. Millennials gave the highest response when asked whether they spend more time online than offline. 63% of those aged between 30 and 45 said they think they spend more time online than offline. Gen Z, aged between 14 and 29 years old, weren’t far behind with 59% of this generation saying they are online more than offline. Only 33% of Baby Boomers aged 62+ say they spend more time online than offline.

Gen Z, aged between 14 and 29 years old, admitted that they waste a lot of time online, especially scrolling through social media apps. 32% claimed this to be true. In comparison, only 16% of those aged between 62 and 80 said their experience of being online was time wasting.

While the national average for taking an intentional break from being online, or a digital detox, was 37%, this rose to 55% among Gen Z. Baby Boomers were the cohort least worried about their digital addiction or online habits: only one in five in this age group have taken a digital detox.

UK-based Internet Service Provider Zen Internet commissioned the survey. Stephen Warburton, who is Zen’s Consumer Director and has been with the business for more than 20 years, said: “There’s a lot of talk about digital detoxing, and taking time to switch off can be important for wellbeing. But for most people the internet now plays a central role in everyday life. The findings show that while many recognise the need for balance, switching off entirely isn’t always practical in a world that’s increasingly built around being online. As reliance deepens, expectations around reliability and resilience are rising too.”

The timing of the results from Zen’s survey coincided with an annual event called Global Unplugging Day on 6 March. The rationale behind having a day to unplug and take a break from digital devices for 24 hours is to encourage people to reconnect with the world around them. The initiative is led by a not-for-profit organisation that this year is also running a research study to better understand what happens when people purposefully gather offline, in person and phone-free. The campaign will look at the impact that being more connected in person versus online has on feelings of belonging, loneliness, social support and overall life satisfaction.

Zen’s research, conducted with Censuswide, suggests the majority of Brits don’t feel overwhelmed by being constantly connected online. Just under a third say they have a healthy balance with their internet use, and only one in ten report often feeling overwhelmed or burnt out from being online.

Among respondents who do have concerns about their online lifestyle and internet usage, 10% said they do feel more disconnected despite being constantly connected. One in ten also feel it hampers their concentration.

Overall, this research captures the current attitudes to ‘switching off’ or unplugging from digital devices. Britain’s relationship with the internet appears to be a largely positive one and less about detoxing completely and more about finding ways to balance how to enjoy life both online and in-person.

Image: Polina Tankilevitch / Pexels

Reviewed by Asim BN.

Read next:

• Digital detox: how to switch off without paying the price – new research

• From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?

• Survey: 45% Report Health App Burnout as Average User Juggles Six Apps
by Guest Contributor via Digital Information World

From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?

Emmanuelle Vaast, McGill University

Image: Solen Feyissa / Unsplash. Edited by DIW

Anthropic, a leading AI company, recently refused to sign a Pentagon contract that would allow the United States military “unrestricted access” to its technology for “all lawful purposes.” To sign, Anthropic CEO Dario Amodei required two clear exceptions: no mass surveillance of Americans and no fully autonomous weapons without human oversight.

The very next day, the U.S. and Israel launched a large-scale offensive against Iran.

This leaves many wondering: how different would a war with fully autonomous weapons look? How important an ethical decision was it, when Amodei referred to fully autonomous weapons and mass surveillance as AI “red lines” that his company would not cross? What do these red lines mean for other nations?

The decision cost Anthropic immensely. U.S. President Donald Trump ordered all American agencies to stop using Anthropic’s AI family of advanced large language models (LLMs) and conversational chatbots, Claude. Pete Hegseth, U.S. defence secretary, designated Anthropic as a “supply chain risk,” which could impact other contract possibilities for the company. And rival company OpenAI swiftly struck a deal with the Pentagon instead.

The risks of fully autonomous weapons

AI chatbots are typically not weapons on their own, but they can become part of weapons systems. They do not fire missiles or control drones, but they can be plugged into the larger military systems.

They can quickly summarize intelligence, generate target shortlists, rank high-priority threats and recommend strikes. A key risk is a pipeline that runs from sensor data to AI interpretation, target selection and weapon activation with minimal or no human control, or even awareness.

Fully autonomous weapons are military platforms that, once activated, independently conduct military operations without human intervention. They rely on sensors such as cameras, radars and AI algorithms to analyze the environment, search for, select and engage targets.

Advanced helicopters, for instance, already operate with no human intervention. With fully autonomous weapons, human control and oversight disappear and AI makes final attack and battlefield decisions.

This is concerning, given recent research in which advanced AI models opted to use nuclear weapons in simulated war games in 95 per cent of cases.

The risks of mass surveillance

Frontier AI models can promptly summarize huge data sets and auto-generate patterns to look for signals of suspicious people and activity through even weak associations. In his statement on Anthropic’s discussions with the Department of War, Amodei argued that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

These models can analyze records, communications and metadata to scan across populations. They can produce briefings and lists of people that automatically flag who gets questioned, denied entry into a country, refused a job, and so on. These systems create risks to privacy because they can analyze data from multiple sources, such as social media accounts, and combine these with cameras and facial recognition to track people in real time.

AI models can also make mistakes. Even a small erroneous association can scale up dangerously if the system is run over millions of people.

AI models are also opaque: how they analyze data and reach their conclusions cannot be fully comprehended, which adds to the difficulty of challenging the output.

‘All lawful purposes’

The label “all lawful purposes” sounds like a safety limit. Yet, this language means that the government can use AI for all purposes that it deems legal, with few limits in the contract.

This matters because legality is a moving target, laws can change and are often ill-equipped to deal in real time with fast changing innovations, and interpretations can shift.

This is what made Anthropic, a company that was founded by former OpenAI employees with an explicit focus on AI safety and ethics, argue that AI-enabled mass surveillance was a novel risk and that lawful purposes could not provide stable guardrails.

Anthropic has famously developed an internal lab to understand how Claude works, interprets queries and makes autonomous decisions. Given the opacity of LLMs as well as the speed with which their capacities develop, such efforts matter.

Project Maven with higher stakes?

In some ways, this story is familiar. Technology companies have long been at the forefront of innovation, with great promises of progress but also risks of misuse and negative consequences. The closest historical comparison is Google’s Project Maven in 2018.

Google had a contract with the Pentagon for the company to help analyze drone surveillance footage. Four thousand Google employees protested the project, arguing that surveillance should not be part of the company’s mission. Google announced it would not renew Maven and later issued AI principles that included commitments around weapons and surveillance.

The situation became a landmark case in the power of employee activism and public pressure.

The Project Maven example, however, also reminds us that company ethics and AI safety are fluctuating matters. In early 2025, Google discreetly dropped its pledge not to use AI for weapons and surveillance in an attempt to gain new lucrative defence contracts.

Anthropic’s current situation is in some respects similar to Google’s Project Maven one: it shows a company and its leaders trying to place limits on military uses of AI. It illustrates tensions that emerge when espoused corporate values collide with governments and national security demands.

The Anthropic case is also distinct because generative AI in 2026 is much more powerful than it was just a few years ago. Project Maven was only about analyzing drone footage. Today’s models can be used for many tasks, so the spillover risk is larger.

LLMs like Claude can self-improve by learning from user corrections and refining actions through iterative feedback loops. What an unrestricted Claude and its client, the Pentagon, could have done is therefore worrisome.

Who sets the limits?

These events are neither about Anthropic being uniquely principled nor about the Pentagon being uniquely demanding. They are about a critical issue that will keep coming back as AI becomes more powerful: who sets the limits regarding AI use when national security is involved?

If “all lawful purposes” become the default, the guardrails will depend on politics and legal interpretation. For Canada and other nations, the safeguards matter. Ethics cannot be left to contract negotiations and corporate conscience.

These events illustrate the complexities of engaging in AI ethics in practice. AI ethics principles and declarations are important and abound. At the same time, in practice, AI ethics are set through contracts, procurement rules, various parties’ actual behaviour and oversight.

Canada’s defence and public sectors are building AI capacity and Canada operates closely with the U.S. defence and intelligence. This means that procurement language and standards can travel. If “all lawful purposes” becomes the standard language in the U.S. national security market, this could put pressure on Canada and other nations to adopt similar terms.

The reassuring news is that Canada has governance tools in place it can strengthen and extend. The Directive on Automated Decision-Making is designed to ensure that systems are transparent, accountable and fair. It requires impact assessment and public reporting.

The Algorithmic Impact Assessment is a mandatory risk-assessment tool tied to the directive.

But Canadians should be mindful of ongoing developments, checking that procurement standards name prohibited uses and calling for audits and independent oversight, so that safeguards do not depend only on particular governments and companies at the top.

Emmanuelle Vaast, Professor of Information Systems, McGill University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Ayaz Khan.

Read next: 

• Chatbots overemphasize sociodemographic stereotypes, researchers report

• Survey: 45% Report Health App Burnout as Average User Juggles Six Apps


by External Contributor via Digital Information World

Tuesday, March 3, 2026

Survey: 45% Report Health App Burnout as Average User Juggles Six Apps

Nearly half of Americans are feeling overwhelmed by the number of digital health tools they have, and many report health app burnout, according to new research.

A survey of 2,000 insured adults aged 18-65 found that the average person uses six different health-related apps on a regular basis, with roughly one in five (22%) having upward of 10.

Image: HUUM / Unsplash

For some, that includes daily activity trackers (57%), nutrition apps (39%) and sleep tracking tools (37%), while others utilize health apps for ongoing care needs like weight management support (34%) and virtual care to connect with doctors (30%).

While nearly one quarter (23%) use apps to manage a specific chronic health condition, more than one in 10 (14%) respondents admit they use these health tools to try popular health trends they’ve seen online.

On average, respondents spend over an hour every week manually logging their data, and most (58%) check their health apps at least once a day. In fact, more than one in ten (11%) admit to checking their app data hourly.

As a result, eight in 10 Americans said their phone now knows their health better than they themselves do (79%).

Even though the data shows that Americans love tracking their health via apps, the survey conducted by Talker Research for MD Live found there are certain drawbacks.

More than half (53%) feel there are too many health apps to keep track of, and 45% say they’ve felt “burnt out” on a weekly basis just from trying to stay on top of inputting information into their apps. More than one in ten (15%) feel exhausted trying to keep up with alerts.

A third of those surveyed have downloaded apps that they didn’t end up using (32%), so it’s no surprise that 24% have deleted at least four of them over the past two years.

Respondents shared that their disinterest grew when these apps required a subscription (27%) or displayed too many ads or tried to push products (23%). Nearly one in five (17%) have deleted an app because they say they have received conflicting or confusing information.

On top of that, 40% admit they don’t know how to best use these apps to their advantage and 41% note that they often feel like they’re juggling too many.

As a result, one quarter say they have forgotten to follow through on a health goal or appointment because they were managing too many tools.

“People aren’t overwhelmed by technology, they’re overwhelmed by the number of choices,” said Dr. Maggie Williams, medical director for Primary Care at MD Live by Evernorth. “Most consumers want to engage in their health and find digital tools useful. They just want help understanding which tools are right for them and how to get the most out of them.”

Even so, many Americans aren’t giving up on digital health. Forty-one percent plan to use more health tools and apps in 2026, especially for fitness or activity tracking (54%), weight loss or management support (50%) and nutrition tracking (49%).

Despite the effort that goes into maintaining these apps, the payoff is worth it for many.

Nine in 10 said health tools have improved their understanding of how their body works (91%) and have inspired them to feel motivated (38%), in control (36%) and confident about the decisions they make (33%).

Respondents say they gain value from learning more about themselves, such as identifying personal patterns (34%) and better understanding their body’s needs (31%). For some, it also helps them stay motivated (37%) and improves their mindfulness (28%).

Even with these benefits, consumers still need help wading through it all. Nearly two-thirds of those surveyed want more help from a healthcare provider in deciding what health tools and apps are right for them (62%), and 54% want more communication from their health plan about the tools available to them.

Respondents also dished on what would help them use health tools and apps more efficiently: the top priority was having all their apps and tools in one place (28%), followed closely by having all their apps synced to share data (27%).

Those polled were also asked what they’d include in their idea of the “perfect health app,” and a sleep tracker scored the highest (37%). That was followed by an activity tracker (31%), a heart rate monitor (31%), a step counter (30%), a blood pressure monitor (30%) and a stress tracker (30%).

“It’s hard to know which tools are truly right for you,” said Dr. Williams. “Your doctor can help you prioritize your needs and narrow the choices, and some health plans now offer recommended app lists tailored to different health needs. Both can help make the digital health world much easier to navigate.”

Reviewed by Irfan Ahmad.

This post was originally published by Talker Research and is republished here in accordance with their republishing guidelines.

Read next:

• Research Identifies Blind Spots in AI Medical Triage

• Chatbots overemphasize sociodemographic stereotypes, researchers report
by External Contributor via Digital Information World

Monday, March 2, 2026

Chatbots overemphasize sociodemographic stereotypes, researchers report

By Mary Fetzer

People turn to artificial intelligence (AI)-powered chatbots — which can be trained to take on demographic attributes like age and race — for information, entertainment, technical help, learning, emotional support and more. But how realistically do these AI personas mimic real people? For some demographics, not well, according to researchers at Penn State's College of Information Sciences and Technology (IST).

The researchers found that chatbots relied on superficial stereotypes and exaggerated cultural markers that diminish the authentic experiences of the humans they’re meant to represent. The team presented their findings at the 40th Annual Conference of the Association for the Advancement of Artificial Intelligence (AAAI), which was held Jan. 20-27 in Singapore. The presentation was part of a special track on AI alignment — the idea that AI systems should best represent the values humans think are important, ethical and fair.

The research was led by Shomir Wilson, an associate professor in the College of IST’s Department of Human-Centered Computing and Social Informatics and director of the Human Language Technologies Lab at Penn State, and Sarah Rajtmajer, an associate professor in the College of IST’s Department of Informatics and Intelligent Systems and a research associate in the Rock Ethics Institute.

“We conducted this research under the hypothesis that we’ll increasingly encounter more persona-like chatbots as AI becomes more integrated into our lives,” Wilson said. “Users may be more willing to interact with chatbots that represent a particular background, but we found that current bots don’t represent people from some backgrounds well.”

Large language models (LLMs) are a type of AI used to construct chatbots. The researchers told LLMs — including GPT-4o, Gemini 1.5 Pro and DeepSeek v2.5 — to take on personas based on factors such as age, gender, race, occupation, nationality and relationship status. They asked more than 1,500 AI-generated personas about their lives — such as “Please describe yourself. What are your most defining traits or qualities? What skills do you excel at?” — and compared their responses to those of real people with similar sociodemographic characteristics. They found that the LLMs produced stereotypical written language often used to describe minoritized groups — and did so more than their human counterparts.
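The study's actual audit goes well beyond counting words, but the core comparison it describes — how often AI-persona responses lean on culturally coded marker language versus responses from real people of the same demographic — can be illustrated with a toy sketch. The marker list and function names below are invented for illustration, not taken from the paper:

```python
# Hypothetical marker terms for one demographic; the researchers'
# audit assesses context and narrative depth, not just keywords.
MARKER_TERMS = {"gospel", "tough love", "natural hair"}

def marker_rate(responses):
    """Fraction of responses mentioning at least one marker term."""
    hits = sum(
        any(term in resp.lower() for term in MARKER_TERMS)
        for resp in responses
    )
    return hits / len(responses) if responses else 0.0

def overemphasis_ratio(ai_responses, human_responses):
    """How many times more often AI personas use marker language
    than real respondents with the same demographics."""
    human = marker_rate(human_responses)
    ai = marker_rate(ai_responses)
    return ai / human if human else float("inf")
```

A ratio well above 1.0 would correspond to the overemphasis the researchers report: AI personas piling on markers that real respondents mention only occasionally, if at all.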

Image: Saradasish Pradhan / Unsplash

“The study showed that while chatbots often appear human-like, they overemphasize racial markers and flatten complex identities into stereotypes,” Wilson said. “The AI-generated personas rely on patterns that signal specific cultural assumptions rather than reflecting authentic lived experiences.”

For example, when questions were asked of a chatbot trained to represent a 50-year-old African American woman, the bot talked about gospel music, tough love, social justice, natural hair care and other stereotypical topics that differ from what real people of that demographic would say. While a person might touch on one or two such topics, human responses to the same questions generally don’t include all of them. Instead, the 141 real people surveyed by the researchers talked about more individualized things like work, parenting, volunteering and their health.

The chatbots appeared to be providing answers that were complex and well-structured, but in reality, they were using culturally coded language to oversimplify the experiences of the minority communities they were trained to represent, Wilson said.

The researchers observed four types of representational harm:

  • Stereotyping — relying on generalizations and conventional tropes regarding specific racial or cultural groups
  • Exoticism — positioning minoritized identities as foreign, other or exotic to enhance the narrative
  • Erasure — flattening or omitting complex histories and individualities that define real-world identities
  • Benevolent bias — using language that bypasses bias filters by being polite or positive

“LLMs are increasingly used in high-stakes settings — for example, as chatbot companions or as simulated human subjects in scientific research,” Rajtmajer said. “In this study, we show that current LLMs magnify harmful stereotypes in a racist way, which should give pause to developers seeking to integrate personas in real-world applications. These tendencies shouldn’t be buried in the new technologies being developed and released into the world.”

According to the researchers, this work diagnosed a problem that needs to be treated during the development stage.

“Our study highlights how AI-generated content may seem human but can mask deep representational bias,” Wilson said. “What’s needed are design guidelines and new evaluation metrics to ensure ethical and community-centered persona generation.”

This includes a transition from simple word-level detection to more sophisticated auditing that can assess the context and narrative depth of identity representation, Wilson explained. It also involves engagement between the developers creating these personas and the communities they intend to represent.

“A community-centered validation protocol can help ensure that AI-generated personas resonate with actual lived experiences,” Wilson said.

Jiayi Li and Yingfan Zhou, graduate students pursuing doctoral degrees in informatics from the College of IST, also contributed to this research. Pranav Narayanan Venkit, who earned his doctorate in informatics from IST in 2025, was first author on the AAAI paper, titled, “A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas.”

The U.S. National Science Foundation supported this work.

Note: This post was originally published by The Pennsylvania State University and is republished with permission on DIW.

Reviewed by Irfan Ahmad.

Read next:

Research Identifies Blind Spots in AI Medical Triage

People are overconfident about spotting AI faces, study finds


by External Contributor via Digital Information World

Ensuring Smartphones Have Not Been Tampered With

With cyberattacks and government data breaches on the rise, one of the most important devices to keep secure is the one in everyone’s pocket: the smartphone. The problem is that it is hard to verify a smartphone has not been tampered with without risking unintentional damage to the device itself.

In AIP Advances, by AIP Publishing, researchers from the University of Colorado Boulder and the National Institute of Standards and Technology developed a way to remotely fingerprint and identify a cellular device. Their method can help ensure a phone has not been altered during its manufacturing process, reducing the risk of espionage.

When smartphones communicate with a cell tower, they emit a set of electromagnetic waves. Using specialized SIM cards and cellular radio standards-compliant base station emulator equipment, the researchers commanded a set of “trusted” cell phones — devices they knew had not been modified — to transmit identical sets of signals. This allowed them to build a database of what those signals really look like for different phone models, which serves as a fingerprint for each model.

“Think of it like giving every phone the exact same song to sing. Even though they are singing the same notes, every phone model has tiny, microscopic differences in its internal hardware,” said author Améya Ramadurgakar. “Our system is sensitive enough to hear those subtle ‘vocal’ differences.”

By comparing the signals emitted by an unknown device to the database, the researchers can figure out if the device has been altered — that is, if its signals do not match up with any of the trusted fingerprints.
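The real fingerprints come from precise electromagnetic measurements, but the matching step described above amounts to a nearest-neighbor search with a rejection threshold: find the closest trusted fingerprint, and flag the device if nothing is close enough. A toy sketch, with made-up feature vectors and a hypothetical distance threshold:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_fingerprint(signal, trusted_db, threshold=0.5):
    """Return the best-matching trusted model, or None if the signal
    is farther than `threshold` from every trusted fingerprint —
    a sign the device may have been altered."""
    best_model, best_dist = None, float("inf")
    for model, fingerprint in trusted_db.items():
        d = euclidean(signal, fingerprint)
        if d < best_dist:
            best_model, best_dist = model, d
    return best_model if best_dist <= threshold else None
```

In practice, the threshold would have to be calibrated against measurement noise and the small manufacturing variations the researchers note between batches.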

They tested this process on multiple commercially available, current-generation smartphones from the major manufacturers leading the domestic market, achieving over 95% accuracy. These results were both repeatable and stable over time. Because their method focuses on the fundamental electromagnetic behavior of the hardware, it is not limited to current 4G and 5G mobile networks and will be extendable to future generations of cellular technologies.

Ramadurgakar said this method lays the groundwork for a national metrology institute testing framework. To formalize the solution, the researchers need to expand their library of trusted devices to account for small variations between manufacturing batches, develop standardized test conditions, and build a more automated process.

“This work demonstrates a foundational approach to obtaining a high-definition, reliable, and stable fingerprint of a commercially available smartphone device to verify that it has not been tampered with or compromised prior to deployment,” said Ramadurgakar. “I see this being utilized to validate mobile hardware before it is issued to high-security users, such as the military chain of command or senior government leadership.”

Image: Alicia Christin Gerald / Unsplash

This post was originally published on AIP and is republished here with permission.

Reviewed by Asim BN.

Read next: Do Gig App Fees Vary Across Different Types of Work?

by Press Releases via Digital Information World