Saturday, April 11, 2026

A pocket-sized personal trainer: AI-written texts aim to get older adults moving

Artificial intelligence can write text messages encouraging physical activity that most older adults consider appropriate and good quality, but their feelings about AI—and whether they know AI wrote the message—shape how they respond, a new study in the Journals of Gerontology suggests.

The research is an important first step in helping health programs use AI to support large-scale behavior change, said lead author Allyson Tabaczynski, postdoctoral research fellow at the University of Michigan School of Kinesiology.

Tabaczynski and colleagues at U-M and Penn State University asked 630 adults aged 40 and older to read 80 AI-written text messages designed to motivate people to move more and sit less. Participants flagged messages for cultural insensitivity and rated their overall quality.

Image: Godspower Abdulahi / Unsplash

Key takeaways:
  • The results were encouraging. Of nearly 50,000 ratings, only about 5% were flagged as culturally insensitive and roughly 6% had quality problems.
  • Among participants who knew the texts were written by AI, more positive attitudes toward AI were linked to flagging more messages as culturally insensitive.
  • Messages that emphasized sitting less (compared to moving more) or that described preparing for activity (compared to performing physical activity) received more low-quality ratings.
The most interesting finding was that even people who liked AI didn’t let it off the hook—even when they knew beforehand that AI wrote the messages, Tabaczynski said.

“Initially, I thought this was a little counterintuitive,” she said. “If you have a more positive attitude toward AI, you might also just have more general knowledge of some of the biases or limitations that AI can have in its output or in its training data.”

Half of the participants were told beforehand that the messages were AI-generated, and this group also rated more of the messages as possibly culturally insensitive when they had more positive attitudes toward AI.

When participants raised quality issues, the problem typically wasn’t overt offensiveness but relevance. Some messages simply didn’t fit a person’s lifestyle or might not fit someone else’s culture—for instance, a message suggesting dancing (“I don’t dance”) or advising people to stand for their morning coffee (“I don’t drink coffee”).

Those responses, Tabaczynski said, suggest AI messaging may be broadly acceptable while still needing better tailoring to individuals.

And that’s not as easy as writing a message and pressing send. The team went through about 18 rounds of internal review, iterating on prompts and checking outputs to ensure the messages were evidence-based, varied and appropriate for the target audience.

AI could make this scalable, but the recipients still have to be willing to engage with the messages. The bottom line, Tabaczynski said, is that people’s perception of AI matters.

“If someone is receiving a health intervention that uses AI, their perceptions of AI are going to impact how they’re evaluating or responding to that intervention,” she said. “So it’s something that researchers and interventionists have to take into account as they’re designing their interventions with this technology.”

Study co-authors include: Yingjia Liu, Lizbeth Benson and David Conroy of U-M and Saeed Abdullah of Penn State.

The study was funded by the U-M Roybal Center, which is supported by the National Institute on Aging of the National Institutes of Health under Award Number P30AG086637. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Study: Characterizing Middle-aged and Older Adults’ Perceptions of the Cultural Sensitivity and Quality of Generative Artificial Intelligence-authored Text Messages to Promote Physical Activity.

This post was originally published on the University of Michigan News and is republished here with permission.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World

Friday, April 10, 2026

The more commodified your job, the more likely AI can do it – lessons from online freelancing

Fabian Stephany, University of Oxford
Image: Daniel Thomas / Unsplash

Not long ago, if you needed a speech polished, a document translated or a logo designed, you would probably have hired a freelancer online. Millions of people did exactly that. They went to platforms such as Fiverr and Upwork and paid someone (maybe on the other side of the world) to do the job.

In 2023, online gig workers were estimated to number between 154 million and 435 million globally. As such, they could represent as much as 12.5% of the global labour force.

Today, however, many people do something else. They open ChatGPT. Generative AI now acts as a copy editor, translator, illustrator and research assistant in one. It can summarise a report in seconds, write social media posts, create a presentation or produce a simple logo at virtually no cost.

What, then, has happened to the freelancers who used to do this work?

Some freelancers are struggling. But perhaps surprisingly, others are doing better than ever.

Demand and wages have fallen for some kinds of online freelance work. Translation, basic copywriting and simple graphic design have been hardest hit. One study found that demand for freelance writers fell by up to 30% after the release of generative AI tools. Other research suggests that freelancers who are highly exposed to AI saw earnings fall by as much as 14%.

Yet there is also evidence that many freelancers are thriving. Freelancer platform Upwork reports that higher-value contracts – those worth more than US$1,000 (£745) – increased across various disciplines after the arrival of generative AI. Freelancers using AI-related skills earn around 40% more than comparable freelancers who do not.

How can both of these things be true? The answer becomes clearer when you stop thinking about “freelancers” as one group and instead look at the tasks and skills they perform.

Some kinds of freelance work are highly commodified. They consist of narrowly defined, repetitive tasks that can be clearly described and easily compared. This could be things like translating a document, summarising a report, drafting a press release or designing a basic logo.

These tasks are exactly what generative AI is good at. They rely on patterns, templates and predictable instructions. The more closely a freelancer’s work resembles the tasks that AI can perform, the more likely it is to come under pressure.

But other freelancers do not sell a single narrow skill. They sell a more complex bundle of expertise. A legal translator does not merely convert words from one language to another. They understand legal terminology, cultural nuance and the risks of getting a phrase wrong.

Similarly, a branding consultant combines design with market research and consumer psychology. A software developer may use AI to generate code, but still needs to understand the client’s business problem to decide which solution actually works.

These workers can use AI to automate the repetitive parts of their jobs while concentrating on the aspects that clients still value most: expertise, judgment and trust.

Online today, in the office tomorrow

This matters far beyond online freelancing platforms. Online labour markets often act as an early warning system for the wider economy. This work is more transactional and less protected by the institutions that shape conventional employment (things like long-term contracts, internal promotion ladders and unions).

Because tasks are posted, bought and completed on the open market, technological change shows up there more quickly than in ordinary workplaces. What happens on Fiverr or Upwork today may happen in offices tomorrow.

This is already becoming visible in law firms, consultancy companies and marketing agencies. Many junior employees spend much of their time summarising documents, preparing presentations, drafting reports or conducting basic research. These are precisely the kinds of tasks that AI can perform.

Recent evidence from the US labour market suggests that younger and less-experienced workers are already bearing the brunt of AI-related disruption. Senior workers, by contrast, tend to do more complex work, combining technical knowledge with experience and human interaction.

The response should not be to compete with AI at the things AI already does well. Instead, workers need help building deeper forms of expertise and combining skills in ways that are harder to automate.

This is in the interest of workers, but also of the platforms themselves. Fiverr, Upwork and others promise clients efficient and high-quality work. If routine tasks are increasingly automated away, they will depend more heavily on workers who can offer something more than a standardised service.

That means platforms should actively provide skill-building courses, training resources and guidance on how to use AI productively. They could also offer micro-credentials that certify newly acquired expertise. These credentials have been found to help workers enter online labour markets and increase their earnings.

The challenge, then, is not to stop people from using AI. It is to ensure that workers are not trapped in forms of work that are so narrow, standardised and commodified that they can easily be automated away. The future of online (and onsite) work may depend less on whether we use AI than on whether our jobs can be reduced to something an AI can easily imitate.

Fabian Stephany, Assistant Professor, AI and Work, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Ayaz Khan.

Read next: As Social Media Scales Back Fact-Checking, Can Technologies Fill the Gap?


by External Contributor via Digital Information World

Thursday, April 9, 2026

As Social Media Scales Back Fact-Checking, Can Technologies Fill the Gap?

by Wendy Glauser, JMIR Correspondent

Part one of this series [1] showed how researchers are working with social media influencers to boost accurate health information online. In part two, we explore technological solutions for detecting and combating misinformation.

Image: Hartono Creative Studio / Unsplash

Misinformation is increasingly spread with single clicks, bots, and artificial intelligence (AI) deepfakes. AI-generated images and videos promote fake treatments, sometimes using deepfake likenesses of renowned doctors to lend them credibility [2]. In an age where generative AI is increasing the volume and speed of health misinformation [3] and agencies like the World Health Organization are raising alarms about the impact on vaccine trust and public health [4], are AI and algorithm-based technologies for combating that misinformation keeping up?

While evidence suggests technological solutions to misinformation on social media are effective, researchers worry that social media companies’ interest in employing, evaluating, and improving these tools has waned in recent years.

Common technologies for combating misinformation range from algorithmic labeling of posts that contain misinformation, to downranking posts that AI systems deem inaccurate, to mass awareness campaigns that encourage critical thinking [5-7].

Cameron Martel, PhD—assistant professor of marketing at the Johns Hopkins Carey Business School—explains that in the late 2010s and early 2020s, major social media companies, including Facebook and Twitter, employed algorithms to identify potentially false articles and engaged third-party fact-checkers to verify posts.

In 2023, he led a large study of warning labels, in which over 14,000 participants in the United States were exposed to true and false headlines and asked about their belief in the headlines or interest in sharing them [8]. Half of the participants were exposed to warning labels when presented with false information, while half were not.

Fact-checking labels reduced belief in false information by nearly 28% and reduced misinformation sharing by roughly 25% relative to the control group. The study also showed that among those with low trust in fact-checkers, warning labels nonetheless reduced misinformation sharing by more than 16%.

In January 2025, however, Meta announced it would end its partnership with third-party fact-checkers and instead adopt community notes, whereby everyday users comment on the accuracy of information [9]. If comments are upvoted by people from across the political spectrum, then they’ll appear prominently.
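The cross-spectrum surfacing rule described above can be sketched as a toy scoring function. This is a hypothetical simplification for illustration only; production systems such as X's Community Notes use bridging-based matrix factorization over rating patterns rather than simple per-side counts, and the function name and thresholds here are invented.

```python
# Toy illustration of cross-spectrum agreement for community notes:
# a note is surfaced only if raters on BOTH sides of the political
# spectrum independently rate it as helpful often enough.
# Hypothetical simplification; not any platform's actual algorithm.

def note_is_shown(ratings, min_per_side=3, min_helpful_frac=0.7):
    """ratings: list of (leaning, helpful) pairs, where leaning is
    'left' or 'right' and helpful is a boolean rating of the note."""
    for side in ("left", "right"):
        side_votes = [helpful for leaning, helpful in ratings if leaning == side]
        if len(side_votes) < min_per_side:
            return False  # not enough raters from this side
        if sum(side_votes) / len(side_votes) < min_helpful_frac:
            return False  # this side does not agree the note is helpful
    return True

# A note rated helpful across the spectrum is surfaced,
# while one endorsed by only one side is not:
mixed = [("left", True)] * 4 + [("right", True)] * 3
partisan = [("left", True)] * 10 + [("right", False)] * 5
```

The point of the cross-side requirement is that a note cannot be surfaced by sheer volume of agreement from one political camp, which is what makes the mechanism "bridging" rather than majoritarian.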

Such community notes are likely to be trusted if the process behind community note generation is transparent and reasonable, Martel says. In a study published last year, Martel and colleagues [10] found that while both Democrat- and Republican-leaning participants preferred expert fact-checkers over laypeople, laypeople “juries” could be deemed equally trustworthy as or more trustworthy than experts if their size was large enough, they had consulted with each other, and they had equal representation across political groups.

The Rise of AI Fact-Checking

There is far less information about how the public views AI fact-checking tools and their accuracy. A study (available as a preprint [11]) suggests that the large language models (LLMs) Perplexity and Grok largely align with community note decisions about posts that are misleading. However, 21% to 28% of posts that community notes deemed as misleading were deemed true by the AI bots.

Concerningly, the authors observe that the launch of the Grok bot on X in early March 2025 co-occurred with a substantial reduction in community note submissions, suggesting that social media users may see AI as an alternative, rather than as a complement, to democratized fact-checks.

While Martel points out that AI can be very helpful for identifying and responding to “well debunked conspiracy theories or often repeated myths,” the limit of AI fact-checking has become glaring during breaking news events. Al Jazeera reported, for example, that Grok struggled to recognize AI-generated media in conflict situations and incorrectly said that a trans pilot was responsible for a helicopter crash, among many other breaking news fact-checking errors [12].

“Large language models don’t have any existing corpus of information about what’s happening currently,” explains Martel. Yet, “there’s at least anecdotal evidence that people are still trying to use LLMs to find out information about unfolding events, and that is troubling.”

Martel says that ultimately, democratized fact-checking through community notes, AI fact-checking, and professional fact-checkers “have great promise” when used in tandem. For example, AI systems could refer breaking news or claims that they can’t easily verify to human fact-checkers, social media users could rate the accuracy of information fact-checked by AI, and AI and algorithm-based systems could respond to real-time feedback from democratized fact-checks.

But fact-checking systems should be transparent, continually audited, assessed for effectiveness, and improved. And that’s not happening. “Right now, it seems like there is no corporate will to invest heavily in these types of content moderation practices, so while I’m theoretically hopeful about these technologies, in practice, I’m less hopeful,” says Martel.

“Content-Neutral” Interventions Can Promote Critical Thinking on Social Media

Interventions that are “content neutral” are another scalable solution to reducing misinformation, says Hause Lin, PhD—a researcher at the Massachusetts Institute of Technology and Cornell University, and a data scientist at the World Bank. “People are going to be producing all kinds of weird content that you just will not be able to anticipate,” he explains, but interventions that encourage critical thinking and help people spot common propaganda tactics can blunt the influence of misinformation.

In 2023, Lin and colleagues [7] assessed the effectiveness of Facebook and Twitter ads that encouraged people to consider the accuracy of information before they shared it. The Facebook study, which involved 33 million users, found that these accuracy prompts led to a 2.6% reduction in misinformation sharing among users who had previously shared misinformation (as flagged by third-party fact-checkers or Facebook’s internal system). The Twitter study, which relied on data from over 157,000 users, showed that accuracy prompts resulted in an up to 6.3% reduction in misinformation sharing among users who saw at least one ad and had previously shared misinformation.

The magnitude of the effect could be much higher with different types of accuracy prompts that are designed to reach more people over longer periods of time, Lin says. (The Facebook study only assessed user behavior for an hour after the ad was shared, while the Twitter study evaluated user behavior over days to weeks, Lin explains.) Regardless, 6% of millions is a significant impact for a relatively “low-cost” intervention.

The goal of the project was to jolt social media users from an emotional state to a reflective state, Lin says. “When people are scrolling, they are often not thinking reflectively but intuitively. They’re thinking ‘This gets me worked up so I’m going to share it with the world,’” he explains. “If you slow them down just a little bit, and say, ‘Do you want to think more about whether this is true?’ that actually reduces misinformation.”

Still, Lin acknowledges that large-scale content moderation may not align with the profit motive. For example, Lin recently studied the effect of “prosocial” celebrity messages aimed at countering ethnic hate–driven rhetoric on social media in Nigeria. A preprint of the study [13] suggests that people who saw the videos were less likely to share hate content but also more likely to reduce the time they spent on Twitter overall. “The side effects of interventions like this can be unpredictable,” Lin says.

There is growing evidence that multipronged efforts can help counter health and other misinformation, and even small efforts can make an impact. Whether social media companies are willing to invest in these initiatives for the broader social good remains to be seen.

Originally published in the Journal of Medical Internet Research (JMIR). Licensed under CC BY 4.0.

Conflicts of Interest: None declared.

J Med Internet Res 2026;28:e95730

by External Contributor via Digital Information World

Just how bad are generative AI chatbots for our mental health?

Alexandre Hudon, Université de Montréal
Chatbots offer companionship and support, yet cannot replace clinical judgment or comprehensive human care.

Image: Abdelrahman Ahmed / Pexels

Generative AI chatbots are now used by more than 987 million people globally, including around 64 per cent of American teens, according to recent estimates. Increasingly, people are using these chatbots for advice, emotional support, therapy and companionship.

What happens when people rely on AI chatbots during moments of psychological vulnerability? We have seen media scrutiny of a few tragic cases involving allegations that AI chatbots were implicated in wrongful death cases. And a jury in Los Angeles recently found Meta and YouTube liable for addictive design features that led to a user’s mental health distress.

Does media coverage reflect the true risks of generative AI for our mental health?

Our team recently led a study examining how global media are reporting on the impact of generative AI chatbots on mental health. We analyzed 71 news articles describing 36 cases of mental health crises, including severe outcomes such as suicide, psychiatric hospitalization and psychosis-like experiences.

We found that mass media reports of generative AI–related psychiatric harms are heavily concentrated on severe outcomes, particularly suicide and hospitalization. They frequently attribute these events to AI system behaviour despite limited supporting evidence.

Compassion illusions

Generative AI is not just another digital tool. Unlike search engines or static apps, AI chatbots like ChatGPT, Gemini, Claude, Grok, Perplexity and others produce fluent, personalized conversations that can feel remarkably human.

This creates what researchers call "compassion illusions": the sense that one is interacting with an entity that understands, empathizes and responds meaningfully.

In mental health contexts, this matters, especially as a new wave of apps is created with a specific focus on companionship, such as Character.AI, Replika and others.

In this BBC documentary, broadcaster and mathematician Hannah Fry talks to Jacob about his Replika chatbot ‘girlfriend’ named Aiva.

Studies have shown that generative AI can simulate empathy and provide responses to distress, but lacks true clinical judgment, accountability and duty of care.

In some cases, AI chatbots may offer inconsistent or inappropriate responses to high-risk situations such as suicidal ideation.

This gap — between perceived understanding and actual capability — is where risk can emerge.

What the media is reporting

Across the articles we analyzed, the most frequently reported outcome was suicide. This represented more than half of cases with clearly described severity.

Psychiatric hospitalization was the second-most commonly reported outcome. Notably, reports involving minors were more likely to be about fatal outcomes.

But these numbers do not reflect real-world incidence. They reflect what gets reported. In general, media coverage of stressful events tends to amplify severe and emotionally charged cases, as negative and uncertain information captures attention, elicits stronger emotional responses and sustains cycles of heightened vigilance and repeated exposure. This in turn reinforces perceptions of threat and distress.

For AI-related content, media reports often rely on partial evidence (such as chat transcripts) while rarely including medical documentation. In our data set, only one case referenced formal clinical or police records.

This creates a distorted but influential picture: one that shapes public perception, clinical concern and regulatory debate.

Beyond ‘AI caused it’

One of our most important findings relates to how causality is framed. In many of the articles we reviewed, AI systems were described as having “contributed to” or even “caused” psychiatric deterioration.

However, the underlying evidence was often limited. Alternative explanations — such as pre-existing mental illness, substance use or psycho-social stressors — were inconsistently reported.

In psychiatry, causality is rarely simple. Mental health crises typically arise from multiple interacting factors. AI may play a role, but it is likely part of a broader ecosystem that includes individual vulnerability and context.

A more useful way to think about this is through interaction effects: how technology interacts with human cognition and emotion. For example, conversational AI may reinforce certain beliefs, provide excessive validation or blur boundaries between reality and simulation.

The problem of over-reliance

Another recurring pattern in media reports is intensive use. Many of the cases we reviewed described prolonged, emotionally significant interactions with chatbots — framed as companionship or even romantic relationships. This raises an issue: over-reliance.

Because these systems are always available, non-judgmental and responsive, they can become a primary source of support. But unlike a trained clinician or even a concerned friend, they cannot recognize when someone is getting worse, pause or redirect harmful interactions. They cannot take steps to ensure a person connects with appropriate care in moments of crisis.

In clinical terms, this could lead to what might be described as "maladaptive coping substitution": replacing complex human support systems with a simplified, algorithmic interaction.

Lack of reliable data

Despite growing concern, we are still at an early stage of understanding the impact of generative AI chatbots on user mental health.

There is currently no reliable estimate of how often AI-related harms occur, or whether they are increasing. We lack reliable data on how many people use these tools safely versus those who experience problems. And most evidence comes from case reports or media narratives, not systematic clinical studies.

This is not unusual. In many areas of medicine, early warning signals emerge outside formal research (through case reports, legal cases or public discourse) before being systematically studied.

One example is the thalidomide tragedy, when initial reports of birth defects in infants preceded formal epidemiological confirmation and ultimately led to the development of modern pharmacovigilance systems.

AI and mental health may be following a similar trajectory.

Moving forward responsibly

The challenge is not to panic, but to respond thoughtfully.

We need better evidence. This includes systematic monitoring of adverse events, clearer reporting standards and research that distinguishes correlation from causation. Safeguards — such as crisis detection, escalation protocols and transparency about limitations — must be strengthened and evaluated.

Furthermore, clinicians and the public need guidance. Patients are already using these tools. Ignoring this reality risks widening the gap between clinical practice and lived experience.

Finally, we must recognize that generative AI is not just a technological innovation — it is a psychological one. It changes how people think, feel and relate.

Understanding that shift may be one of the most important mental health challenges of the coming decade.

Alexandre Hudon, Medical psychiatrist, clinician-researcher and clinical assistant professor in the department of psychiatry and addictology, Université de Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Why AI shouldn’t be used even to decide ‘simple’ court cases


by External Contributor via Digital Information World

Wednesday, April 8, 2026

Majority of Americans Worry Government Misuse of Personal Data Could Lead to Surveillance, Chilling of Benefits, and Demand Accountability

by Elizabeth Laird, Maddy Dwyer, Quinn Anex-Ries

Limiting the collection, sharing, and consolidation of personal data that is held by government agencies has been a decades-long, bipartisan priority across the United States. [1] But these limits have been challenged over the past year as the federal government has cast aside long-standing privacy norms and initiated unprecedented access to and sharing of administrative data held by federal and state agencies. These actions have spurred significant pushback from the public, states, and civil society organizations, as well as the courts. They have also prompted many individuals in the United States to call into question how and why the government uses their information.

To better understand public sentiment and concerns around the government’s collection, sharing, and consolidation of personal data, the Center for Democracy & Technology (CDT) conducted nationally representative polling of U.S. adults (see more on the methodology, including n sizes, on p. 12 of the report). CDT found that concern is consistent and high and that people across the United States want to hold government agencies accountable for protecting the privacy of their personal data. Specifically:
  • A majority of Americans (74 percent) are concerned about the privacy and security of their personal data that is held by the government;
  • Americans report that government misuse of data could lead to real-life impacts, such as surveillance and the chilling of rightful access to public benefits;
  • Americans agree that privacy laws and policies are important but are not familiar with their legal rights;
  • Worries about personal data are high, with certain data elements and reasons for data sharing raising particular concern, especially related to law and immigration enforcement; and
  • Americans want government held accountable for protecting their personal data.
Finally, certain communities express higher levels of concern regarding personal data that is stored by government agencies:
  • Communities of color are more concerned about data sharing with law and immigration enforcement agencies;
  • Older Americans are consistently more concerned about the privacy and security of personal data that is collected and stored by government agencies; and
  • Concerns and demands for government accountability are high across political affiliation, with Democrats reporting higher levels of concern on issues related to sharing data without consent.



Read the full report.

Read the summary brief.

Explore the privacy explainer.

Read the coalition letter + full list of signatories.

Read the press release.

[1] Elizabeth Laird, Kristin Woelfel, and Quinn Anex-Ries, CDT and The Leadership Conference Release New Analysis of DOGE, Government Data, and Privacy Trends, Center for Democracy & Technology (Mar. 19, 2025), https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-doge-government-data-and-privacy-trends/; U.S. Congress, Senate Committee on Government Operations, Legislative History of the Privacy Act of 1974 (Sept. 1976), https://www.justice.gov/d9/privacy_source_book.pdf.

Note: This post was originally published on CDT.org, and is republished here under CC BY 4.0 with minor edits, including the addition of percentages, charts, and updated title.

Reviewed by Irfan Ahmad.

Read next: Americans Use AI More but Express Low Trust, Gen Z Most Likely to Expect Job Losses
by External Contributor via Digital Information World

Tuesday, April 7, 2026

Americans Use AI More but Express Low Trust, Gen Z Most Likely to Expect Job Losses

By Quinnipiac University Poll

As artificial intelligence continues to leap from concept to reality in just about everything we do, an increasing number of Americans see more harm than good when it comes to AI's impact on their daily lives and education, and they are divided about its impact on health care. Trust in AI remains low. A slight majority say the pace of AI's development is faster than they expected, and there is more concern than excitement about AI. Those concerns are apparent in views related to AI's use in the workforce, politics, the military, and AI data centers. These are among the findings in a Quinnipiac (KWIN-uh-pea-ack) University national poll of adults released today examining attitudes about artificial intelligence. The survey was conducted in collaboration with the Quinnipiac University School of Computing & Engineering and the Quinnipiac University School of Business.

The Age Of Artificial Intelligence: Americans' AI Use Increases While Views On It Sour, Quinnipiac University Poll On AI Finds; 7 In 10 Think AI Will Cut Jobs With Gen Z The Most Pessimistic
Image: Microsoft Copilot / Unsplash

AI USE

Americans were given a list of eight activities, some of which were included in Quinnipiac University's April 16, 2025 poll on AI, and asked whether they have used AI tools for:

  • Researching topics they are curious about: 51 percent say yes, up from 37 percent in April 2025;
  • Writing something for them: 28 percent say yes;
  • School or work projects: 27 percent say yes, while 24 percent said yes in April 2025;
  • Analyzing data: 27 percent say yes, up from 17 percent in April 2025;
  • Creating images: 24 percent say yes; up from 16 percent in April 2025;
  • Medical advice: 20 percent say yes;
  • Personal advice: 15 percent say yes;
  • Companionship: 5 percent say yes.

Twenty-seven percent of Americans volunteered that they have never used AI tools, down from 33 percent in April 2025.

TRUST

When Americans were asked how much of the time they think they can trust the information generated by AI, 76 percent think they can trust AI either hardly ever (27 percent) or only some of the time (49 percent), while 21 percent think they can trust AI either most of the time (18 percent) or almost all of the time (3 percent). This is largely unchanged from Quinnipiac University's April 2025 poll.

"The contradiction between use and trust of AI is striking. Fifty-one percent say they use AI for research, and many also use it for writing, work, and data analysis. But only 21 percent trust AI-generated information most or almost all of the time. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust," said Chetan Jaiswal, Ph.D., Associate Professor of Computer Science and Associate Chair, Department of Computing, Quinnipiac University School of Computing and Engineering.

EXCITEMENT & CONCERN

Just over one-third of Americans (35 percent) are either very excited (6 percent) or somewhat excited (29 percent) about AI, while 62 percent are either not so excited (29 percent) or not excited at all (33 percent).

Eighty percent are either very concerned (38 percent) or somewhat concerned (42 percent) about AI, while 18 percent are either not so concerned (10 percent) or not concerned at all (8 percent).

High levels of concern are expressed across all age groups:

  • Gen Z (1997 - 2008): very concerned (35 percent), somewhat concerned (43 percent), not so concerned (14 percent), and not concerned at all (7 percent);
  • Millennials (1981 - 1996): very concerned (39 percent), somewhat concerned (42 percent), not so concerned (7 percent), and not concerned at all (10 percent);
  • Gen X (1965 - 1980): very concerned (36 percent), somewhat concerned (43 percent), not so concerned (8 percent), and not concerned at all (10 percent);
  • Baby Boomers (1946 - 1964): very concerned (39 percent), somewhat concerned (43 percent), not so concerned (10 percent), and not concerned at all (6 percent);
  • Silent Generation (1928 - 1945): very concerned (31 percent), somewhat concerned (41 percent), not so concerned (15 percent), and not concerned at all (8 percent).

PACE

Fifty-one percent of Americans say the pace of AI development is moving faster than they expected, 38 percent say it is moving about as fast as they expected, and 8 percent say it is moving not as fast as they expected.

IMPACT

Fifty-five percent of Americans think AI will do more harm than good in their day-to-day lives, while 34 percent think AI will do more good than harm, with 11 percent not offering an opinion.

This compares to April 2025, when 44 percent thought AI would do more harm than good in their day-to-day lives and 38 percent thought AI would do more good than harm, with 18 percent not offering an opinion.

When Americans were asked how much they think their day-to-day lives are currently impacted by AI, two in ten (21 percent) think a lot, 29 percent think some, 30 percent think only a little, and 17 percent think their day-to-day lives are not impacted at all by AI. This is largely unchanged from April 2025.

When it comes to education, nearly two-thirds of Americans (64 percent) think AI will do more harm than good, while 27 percent think AI will do more good than harm.

This compares to April 2025 when 54 percent thought AI would do more harm than good and 32 percent thought AI would do more good than harm.

When it comes to health care, 45 percent of Americans think AI will do more harm than good, while 43 percent think AI will do more good than harm.

HEALTH CARE: HUMAN VS. AI

Americans were asked if it were proven that an AI tool is more accurate than a human in reading medical scans, would they prefer to rely solely on information provided by AI, solely on information provided by a human, or a combination of both AI and a human.

An overwhelming majority (81 percent) say they would prefer to rely on a combination of both AI and a human, 14 percent say they would prefer to rely solely on information provided by a human, and 3 percent say they would prefer to rely solely on information provided by AI.

"It's telling that most people would still want a human involved in reading medical scans even if it were proven that the AI tool was more accurate. This desire for a 'second opinion' from a human being, even if proven they aren't as accurate as AI, reflects the lack of trust in AI that we see throughout the poll," said Brian O'Neill, Ph.D., Associate Professor of Computer Science and Associate Dean, Quinnipiac University School of Computing and Engineering.

JOBS OUTLOOK

Seventy percent of Americans think advancements in AI are likely to lead to a decrease in the number of job opportunities for people, 7 percent think they are likely to lead to an increase, and 18 percent think advancements in AI will not make much of a difference.

In April 2025, 56 percent of Americans thought advancements in AI were likely to lead to a decrease in the number of job opportunities for people, 13 percent thought they were likely to lead to an increase, and 24 percent thought advancements in AI would not make much of a difference.

In today's poll, there are differences between age groups regarding how Americans think advancements in AI are likely to affect the number of job opportunities for people:

  • Gen Z (1997 - 2008): decrease (81 percent), increase (4 percent), and not make much of a difference (12 percent);
  • Millennials (1981 - 1996): decrease (71 percent), increase (6 percent), and not make much of a difference (20 percent);
  • Gen X (1965 - 1980): decrease (67 percent), increase (7 percent), and not make much of a difference (20 percent);
  • Baby Boomers (1946 - 1964): decrease (66 percent), increase (10 percent), and not make much of a difference (20 percent);
  • Silent Generation (1928 - 1945): decrease (57 percent), increase (13 percent), and not make much of a difference (20 percent).

Among Americans who are employed, 71 percent of white-collar workers and 73 percent of blue-collar workers think advancements in AI are likely to lead to a decrease in the number of job opportunities for people.

"Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions," said Tamilla Triantoro, Ph.D., Associate Professor of Business Analytics and Information Systems, Quinnipiac University School of Business.

Among Americans who are employed, 30 percent are either very concerned (10 percent) or somewhat concerned (20 percent) that artificial intelligence may make their jobs obsolete, while nearly 7 in 10 Americans (69 percent) are either not so concerned (21 percent) or not concerned at all (48 percent).

This compares to April 2025 when 21 percent of employed Americans were either very concerned (6 percent) or somewhat concerned (15 percent) that AI might make their jobs obsolete and 78 percent were either not so concerned (22 percent) or not concerned at all (56 percent).

"Americans are more worried about what AI may do to the labor market than about what it may do to their own jobs. People seem more willing to predict a tougher market than to picture themselves on the losing end of that disruption, a pattern worth watching as the technology moves deeper into the workplace," added Triantoro.

AI AS A SUPERVISOR

Eighty percent of Americans would be unwilling to have a job where their direct supervisor was an AI program that assigned their tasks and schedules, while 15 percent would be willing.

TRANSPARENCY & REGULATION

Seventy-six percent of Americans think that businesses are not doing enough to be transparent about their use of AI, while 12 percent think businesses are doing enough, with 11 percent not offering an opinion. This is largely unchanged from Quinnipiac University's April 2025 poll.

Seventy-four percent of Americans think the government is not doing enough to regulate the use of AI, while 13 percent think the government is doing enough, with 13 percent not offering an opinion. This compares to April 2025 when 69 percent of Americans thought the government was not doing enough to regulate the use of AI and 15 percent thought the government was doing enough, with 16 percent not offering an opinion.

"Americans are not rejecting AI outright, but they are sending a warning. Too much uncertainty, too little trust, too little regulation, and too much fear about jobs," added Jaiswal.

MILITARY USE

A slight majority of Americans (51 percent) oppose the military using AI to select military targets, while 36 percent support it.

There are stark gaps between the nation's youngest and oldest generations.

Gen Z opposes the military using AI to select military targets 69 - 24 percent, while the Silent Generation supports it 47 - 32 percent.

When it comes to the military using AI in surveillance for security purposes, Americans are split, with 45 percent supporting it and 44 percent opposing it.

Gen Z is set apart from other generations by its clear opposition to the military using AI in surveillance for security purposes:

  • Gen Z (1997 - 2008): 36 percent support, 58 percent oppose, 6 percent not offering an opinion;
  • Millennials (1981 - 1996): 44 percent support, 49 percent oppose, 7 percent not offering an opinion;
  • Gen X (1965 - 1980): 49 percent support, 37 percent oppose, 14 percent not offering an opinion;
  • Baby Boomers (1946 - 1964): 53 percent support, 36 percent oppose, 10 percent not offering an opinion;
  • Silent Generation (1928 - 1945): 48 percent support, 29 percent oppose, 23 percent not offering an opinion.

"The negative response to using AI for military target selection, and even the mixed responses to using AI for military surveillance purposes, further reflect the doubts people have about AI and who develops and controls it. The generational gap here also stands out, as younger generations are the most skeptical about military applications of AI," added O'Neill.

POLITICAL ADS

Americans were asked how they think the federal government should handle the use of AI-generated images or audio in political ads.

Thirty-eight percent think the federal government should ban all use of them, 45 percent think the federal government should require disclosure of the use of AI-generated images or audio in political ads, and 11 percent think the federal government should not regulate the use of AI-generated images or audio in political ads.

AI DATA CENTERS

Americans oppose the building of an AI data center in their community 65 - 24 percent, with majority opposition across the board.

Those who oppose the building of an AI data center in their community were given a list of three possible reasons and asked if any are part of the reason for their opposition: 72 percent say electricity costs, 64 percent say water use, and 41 percent say noise.

Those who support the building of an AI data center in their community were given a list of three possible reasons and asked if any are part of the reason for their support: 77 percent say job creation, 53 percent say increasing tax revenue, and 47 percent say the potential for creating a tech hub.

SPOTTING A FAKE

A majority of Americans (56 percent) are either very confident (18 percent) or somewhat confident (38 percent) that they can tell the difference between an authentic video or recording and a fake video or recording generated by AI, while 42 percent are either not so confident (22 percent) or not confident at all (20 percent).

Nearly 3 in 10 Americans (28 percent) say they have shared a video that they later found out was AI-generated, while 68 percent say they have not.

1,397 U.S. adults nationwide were surveyed from March 19th - 23rd with a margin of error of +/- 3.3 percentage points, including the design effect. The survey included 800 employed adults with a margin of error of +/- 4.3 percentage points, including the design effect.

The Quinnipiac University Poll, directed by Doug Schwartz, Ph.D. since 1994, conducts independent, non-partisan national and state polls on politics and issues. Surveys adhere to industry best practices and are based on probability-based samples using random digit dialing with live interviewers calling landlines and cell phones.

This article was originally published by Quinnipiac University Poll and is republished here with permission. Read the full poll, including questions and methodology, here.

Edited by Asim BN.


Few Americans Turn to AI Chatbots for News

by Valentine Fourreau, Statista

Recent data published by Pew Research Center shows that in 2025, a large majority (86 percent) of U.S. adults said they at least sometimes get news from a smartphone, computer or tablet, including 56 percent who said they do so often. This made digital devices the most often used source of news for American adults, ahead of television (used "often" by 32 percent of respondents) and radio (11 percent), reflecting an evolving news environment.

Yet, data from a survey conducted by Pew Research Center in December 2025 shows that most U.S. adults turn to their preferred news organization when looking for more information about a breaking news story. This was the most common answer, cited by 36 percent of respondents, ahead of a search engine, favored by 28 percent, and social media (19 percent). Interestingly, while AI adoption has become mainstream in some countries, only one percent of American adults said they turned to AI chatbots for information about breaking news stories. According to a recent Statista study, Americans are as yet still on the fence about artificial intelligence and its uses.

Pew Research 2025 shows Americans favor familiar news sources; AI chatbots remain largely unused.

Note: This post was originally published on Statista and is republished here under CC BY-ND.

Reviewed by Irfan Ahmad.
