Tuesday, May 12, 2026

What happens when scientists trust AI more than colleagues?

Sungho Hong, The Institute for Basic Science and Victor J. Drew, The Institute for Basic Science

Image: Tima Miroshnichenko / Pexels

Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories.

There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the US Genesis Mission and South Korea’s AI Co-Scientist Challenge. But despite clear benefits, we believe these institutional drives are neglecting important issues that carry immense risks for scientific research.

Today, more than half of researchers use AI for work tasks, from reviewing academic literature to designing experiments.

AlphaFold is an AI tool developed to predict the structures of proteins for scientific research. Working out protein structures was incredibly time-consuming before its release – taking years in some cases. The same tasks now take hours. Its developers were recognised with the 2024 Nobel Prize in Chemistry.

AI tools for use in medicine now assist with everything from the interpretation of results from X-rays and MRIs to supporting doctors’ decisions on the diagnosis and treatment of disease.

Our key concern is that hasty adoption of AI may gradually erode the scientific culture and human relationships that sustain rigorous research. It starts with the erosion of core thinking skills among researchers, as they increasingly rely on AI to do that thinking for them. This can alienate researchers from the deeper reasoning behind their work.

Loss of independent thinking

Early-career scientists are particularly vulnerable, because they are still developing their scientific reasoning. Troubleshooting skills and the critical evaluation of ideas may be outsourced to AI systems.

AI’s fluent, confident and immediate responses can easily be mistaken for authoritative information. Once researchers begin to treat AI outputs as implicitly correct, the responsibility for judgment calls may gradually shift from them to their machines.

AI’s persuasive arguments, probably drawn from mainstream ideas in its training data, could replace more rigorous, time-consuming and creative research approaches. These are traditionally shaped through critical back-and-forth discussions between researchers.

This can evolve into over-dependence. As reasoning is delegated to AI, researchers become less confident at working unaided. Unfortunately, modern scientific labs are full of conditions that reinforce this dependence, such as intense competition, long hours and frequent isolation.

Limited mentorship, and feedback from colleagues that is delayed, critical or politically influenced, can compound the problem. In contrast, AI provides an immediate, patient and nonjudgmental alternative.

Scientists interact with AI systems daily in order to check computer code, revise illustrations or charts, draft the language for grant applications, clarify scientific concepts, and at times, ask for personal advice.

As researchers begin to trust the AI assistant, it can start to function less like a tool and more like a companion. This carries the risk of emotional dependency, too. When an earlier ChatGPT model was retired, many users expressed a form of grief.

Replacing relationships

Another important concern is the potential for replacement of human relationships in the office or research lab. AI is always available, nonjudgmental, noncompeting – and indifferent to office politics, with no ego to defend. It remembers context, adapts to individual working styles, and offers reassurance without social cost.

Human scientific relationships are more complicated, involving nuance, criticism, time constraints, hierarchy – and sometimes, ulterior motives. For early-career researchers especially, these interactions can feel risky.

Critical feedback from humans can feel adversarial, while AI responses feel supportive. So, early-career scientists might have good reason to prefer testing ideas or seeking validation through AI, rather than their peers or superiors.

The scientific community cannot thrive without opposing ideas, deep scepticism of consensus, vigorous debate and rigorous mentoring. If AI begins to replace these, it threatens the foundations on which scientific progress has always been made.

The current debate on AI safety mostly focuses on errors in models’ responses, or on AI systems circumventing the restrictions imposed on the way they work, known as “jailbreaking”. Such safeguards do little to address AI models’ societal and cultural impact.

Given the recent drives to get scientists to work more closely with AI assistants, we should educate our young scientists on the risks of AI dependence. We also need benchmarks to rigorously test AI models for their ability to establish boundaries with users, to prevent overdependence and other unhealthy interactions.

Finally, all of us – but especially institutional leaders – should understand the capabilities and permanence of AI companions. They are here to stay, and we should learn to make our relationships with them as healthy as possible.

Sungho Hong, Neuroscientist, Center for Memory and Glioscience, The Institute for Basic Science and Victor J. Drew, Postdoctoral Research Associate, Center for Cognition and Sociality, The Institute for Basic Science

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• Study: Firms often use automation to control certain workers’ wages

• Research finds journalism classes lack consistent approach to AI use across institutions


by External Contributor via Digital Information World

Monday, May 11, 2026

Study: Firms often use automation to control certain workers’ wages

Peter Dizikes | MIT News

MIT economists found US companies tend to target employees earning a “wage premium,” which increases inequality but not necessarily productivity.

Image Credit: Tara Winstead / Jonathan Borba - Pexels. Edited by DIW

When we hear about automation and artificial intelligence replacing jobs, it may seem like a tsunami of technology is going to wipe out workers broadly, in the name of greater efficiency. But a study co-authored by an MIT economist shows markedly different dynamics in the U.S. since 1980.

Rather than implement automation in pursuit of maximal productivity, firms have often used automation to replace employees who specifically receive a “wage premium,” earning higher salaries than other comparable workers. In practice, that means automation has frequently reduced the earnings of non-college-educated workers who had obtained better salaries than most employees with similar qualifications.

This finding has at least two big implications. For one thing, automation has affected the growth in U.S. income inequality even more than many observers realize. At the same time, automation has yielded a mediocre productivity boost, plausibly due to the focus of firms on controlling wages rather than finding more tech-driven ways to enhance efficiency and long-term growth.

“There has been an inefficient targeting of automation,” says MIT’s Daron Acemoglu, co-author of a published paper detailing the study’s results. “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.” In theory, he notes, firms could automate efficiently. But they have not, instead emphasizing automation as a tool for shedding salaries, which helps their own internal short-term numbers without building an optimal path for growth.

The study estimates that automation is responsible for 52 percent of the growth in income inequality from 1980 to 2016, and that about 10 percentage points derive specifically from firms replacing workers who had been earning a wage premium. This inefficient targeting of certain employees has offset 60-90 percent of the productivity gains from automation during the time period.

“It’s one of the possible reasons productivity improvements have been relatively muted in the U.S., despite the fact that we’ve had an amazing number of new patents, and an amazing number of new technologies,” Acemoglu says. “Then you look at the productivity statistics, and they are fairly pitiful.”

The paper, “Automation and Rent Dissipation: Implications for Wages, Inequality, and Productivity,” appears in the May print issue of the Quarterly Journal of Economics. The authors are Acemoglu, who is an Institute Professor at MIT; and Pascual Restrepo, an associate professor of economics at Yale University.

Inequality implications

Dating back to the 2010s, Acemoglu and Restrepo have combined to conduct many studies about automation and its effects on employment, wages, productivity, and firm growth. In general, their findings have suggested that the effects of automation on the workforce after 1980 are more significant than many other scholars have believed.

To conduct the current study, the researchers used data from many sources, including U.S. Census Bureau statistics, data from the bureau’s American Community Survey, industry numbers, and more. Acemoglu and Restrepo analyzed 500 detailed demographic groups, sorted by five levels of education, as well as gender, age, and ethnic background. The study links this information to an analysis of changes in 49 U.S. industries, for a granular look at the way automation affected the workforce.

Ultimately, the analysis allowed the scholars to estimate not just the overall amount of jobs erased due to automation, but how much of that consisted of firms very specifically trying to remove the wage premium accruing to some of their workers.

Among other findings, the study shows that within groups of workers affected by automation, the biggest effects occur for workers in the 70th-95th percentile of the salary range, indicating that higher-earning employees bear much of the brunt of this process.

And as the analysis indicates, about one-fifth of the overall growth in income inequality is attributable to this sole factor.

“I think that is a big number,” says Acemoglu, who shared the 2024 Nobel Prize in economic sciences with his longtime collaborators Simon Johnson of MIT and James Robinson of the University of Chicago.

He adds: “Automation, of course, is an engine of economic growth and we’re going to use it, but it does create very large inequalities between capital and labor, and between different labor groups, and hence it may have been a much bigger contributor to the increase in inequality in the United States over the last several decades.”

The productivity puzzle

The study also illuminates a basic choice for firm managers, but one that gets overlooked. Imagine a type of automation — call-center technology, for instance — that might actually be inefficient for a business. Even so, firm managers have incentive to adopt it, reduce wages, and oversee a less productive business with increased net profits.

Writ large, some version of this seems to have been happening to the U.S. economy since 1980: Greater profitability is not the same as increased productivity.

“Those two things are different,” says Acemoglu. “You can reduce costs while reducing productivity.”

Indeed, the current study by Acemoglu and Restrepo calls to mind an observation by the late MIT economist Robert M. Solow, who in 1987 wrote, “You can see the computer age everywhere but in the productivity statistics.”

In that vein, Acemoglu observes, “If managers can reduce productivity by 1 percent but increase profits, many of them might be happy with that. It depends on their priorities and values. So the other important implication of our paper is that good automation at the margins is being bundled with not-so-good automation.”

To be clear, the study does not necessarily imply that less automation is always better. Certain types of automation can boost productivity and feed a virtuous cycle in which a firm makes more money and hires more workers.

But currently, Acemoglu believes, the complexities of automation are not yet recognized clearly enough. Perhaps seeing the broad historical pattern of U.S. automation, since 1980, will help people better grasp the tradeoffs involved — and not just economists, but firm managers, workers, and technologists.

“The important thing is whether it becomes incorporated into people’s thinking and where we land in terms of the overall holistic assessment of automation, in terms of inequality, productivity and labor market effects,” Acemoglu says. “So we hope this study moves the dial there.”

Or, as he concludes, “We could be missing out on potentially even better productivity gains by calibrating the type and extent of automation more carefully, and in a more productivity-enhancing way. It’s all a choice, 100 percent.”

Reprinted with permission of MIT News.

Reviewed by Irfan Ahmad.

Read next:

• Research finds journalism classes lack consistent approach to AI use across institutions

• New Report Reveals TikTok Leads Influencer Disclosure Compliance While YouTube Dominates Long-Term Brand Deals
by External Contributor via Digital Information World

Saturday, May 9, 2026

Research finds journalism classes lack consistent approach to AI use across institutions

By Mike Krings, The University of Kansas News

Artificial intelligence is steadily becoming more embedded in journalism, part of how journalists write, edit, research and more. But little is known about how future journalists are learning about the technology. New research from the University of Kansas has found that journalism classes across the country are taking widely varying approaches, from treating AI use as academic dishonesty to encouraging it or discussing it philosophically. That scattershot approach can both shortchange and confuse students, while more consistency could better serve education and practice, according to the authors.

Image: Zoshua Colah - unsplash

Researchers compared 60 journalism course syllabi from 15 universities across the United States, finding variation within schools and from one type of class to the next on how AI should or should not be used. Three general approaches emerged: AI as a threat to learning and professional standards, AI as a tool permitted under strict boundaries and AI as a subject of ethical and professional inquiry.

The research stemmed from a project Samuel Muzhingi, a doctoral student, completed for a class at KU. A researcher whose work focuses on how emerging technologies are adopted, regulated and sustained in communication contexts, he analyzed existing literature on how programs in countries such as Egypt, Spain and Brazil approached AI use in journalism education. He found inconsistency.

“That's something that I also saw here in the U.S., like, you get different kinds of policies where, for example, at one institution some classes are adopting it, then another class is not adopting it, and it's the same institution, and it is something that confuses students,” Muzhingi said. “Students are like, ‘OK, so which class or which professor should I listen to more?’”

Analysis showed that syllabi of certain types of classes tended to adhere to certain approaches to AI. Writing classes tended to take the “threat to learning” approach and discourage its use. The finding is not surprising as institutions want students to be able to write on their own, a skill at the heart of journalism, the researchers said. Design and photography classes tended more to the side of permissible use under strict boundaries, while media ethics and law classes tended to treat it as a source of professional inquiry.

While it is not entirely surprising that educational approaches vary while the field itself is still figuring out how to use AI, such inconsistency is not necessarily serving students well.

“That's very much been a discussion among professors of these classes about how we can best prepare students to enter these fields when professionals are still trying to figure out best practices,” said Alyssa Appelman, associate professor of journalism & mass communications at KU and a co-author. “I was very excited when Samuel mentioned that he wanted to do some research about this topic, because I think it's a ripe area of research to look at this overlap between education and technology, specifically in the context of journalism education.”

Course syllabi offered a wide range of approaches to AI. Approaches that fell under the existential threat theme emphasized that AI writing lacks the integrity and rhetorical judgment required in journalism. They also noted that a failure to cite AI-created content would be considered plagiarism and reported for academic dishonesty.

Courses often listed AI as a tool, but not as a writer, something that could be used to check grammar or spelling, but often with warnings that the technology is prone to hallucinations and bias. Some said AI’s use would be allowed, but only by approval of the instructor.

Those that viewed AI as a topic of professional inquiry often incorporated it in class readings or assigned students to write about and discuss how it has presented challenges to the media industry.

The study, written with Hong Tien Vu of the University of Colorado and Tamar Wilner, assistant professor of journalism & mass communications at KU, was published in Journalism & Mass Communication Educator.

The inconsistency and mixed messages indicate a need for more clear approaches, at least within courses offered at a given institution, the authors wrote. And guidance from accrediting bodies such as the Association for Education in Journalism and Mass Communication could help schools craft clear, consistent policies.

“As an instructor, even if I have concerns about the tool, I still see a responsibility to help students to engage with it critically. It’s not just about using AI but understanding its limits and its impact on journalistic practice,” Muzhingi said. “We may not be able to avoid it, but we can be intentional about how it is integrated, especially as employers are beginning to ask about these skills.”

Muzhingi and Appelman have also published a study gauging journalism students’ ethical concerns about adopting AI in the field. They hope to further research how students respond to and engage with AI tools in their work when given clear guidelines, compared with how they do so without them.

“One of my biggest takeaways from this study is how important it is for instructors to be clear about their expectations at the onset of class or at the onset of each assignment,” Appelman said. “As of right now, it's so different across different programs, professors can't assume that students are coming in knowing where the boundaries are, what the appropriate uses are. Professors need to be very clear, because these findings suggest that semester to semester, or even class to class, students are getting different advice from different programs.”

This post was originally published on KU News and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Study Across 30 Countries Reveals Sharp Differences in Trust, AI Health Information Acceptance, and Digital Literacy

• New Report Reveals TikTok Leads Influencer Disclosure Compliance While YouTube Dominates Long-Term Brand Deals
by External Contributor via Digital Information World

New Report Reveals TikTok Leads Influencer Disclosure Compliance While YouTube Dominates Long-Term Brand Deals

By Momo Messerschmidt

Influencer and creator marketing is one of the top strategies brands are leveraging in 2026 to reach, engage, and convert consumers. Over 56% of Gen Z users consider influencer content more “relevant” than traditional television or film, and 41% of this generation use social media platforms as their primary search engine, showcasing how influencers are integral for building brand awareness, trust, and loyalty across communities.

This May, The Influencer Marketing Factory (TIMF) published its 2026 Brand Deals Report, which combines large-scale third-party platform data, contributed by Modash, to identify key trends in ad compliance, partnership styles, and more. Drawing insights from more than 316K creator accounts and 7.8K U.S.-based creators, TIMF’s report outlines where brands allocate their influencer marketing budgets and how creators are collaborating with brands across social platforms. The 2026 Brand Deals Report is an essential resource for the creator economy, serving as the new benchmark for influencer marketing compliance across Instagram, TikTok, and YouTube.

1. Big Picture 2026 Creator Economy Trends

Data from the 2026 Creator Economy Report revealed that brand partnerships now account for approximately 12.7% of U.S. creators' annual income, and over 12.6% of creators report relying on them for 30-35% of their total yearly earnings. With over 51.5% of U.S. influencers reporting year-over-year income growth in 2025, the creator economy is expanding, and creator compliance is no longer a secondary consideration for influencer marketing leaders.


Also read: New data shows creator influence is linked to purchases and repeated exposure patterns among consumers

Paid content disclosures in 2026 are largely inconsistent across Instagram, TikTok, and YouTube, as outlined in the 2026 Brand Deals Report. Even when disclosure tools, such as Instagram and TikTok’s “Paid Partnership” tags, are available to creators, disclosure is not guaranteed. How brand deals are structured also varies more by platform than most marketers may realize: the mix of flat-fee and affiliate campaign models differs by platform, as does overall partnership length. Moreover, campaign seasonality analysis identifies Q4 as the peak period for brand partnerships, making proper disclosures and FTC compliance especially important for consumer purchasing decisions.

2. Analyzing 316K+ Creators: Key Disclosure Trends & Brand Insights

To deliver a comprehensive view of the creator economy, TIMF partnered with Modash to analyze creator compliance and brand partnership trends. The following are some of the report’s top findings, examining paid partnership disclosures, influencer collaboration structures, top sponsorship categories, leading brands, and creator economy seasonality.

  • TikTok Leads in Paid Disclosures: TikTok leads all three social platforms with 52% of partnership content properly disclosed, nearly double Instagram’s 29% and ahead of YouTube’s 42%.


  • YouTube Dominates Long-term Partnerships: The analysis found that YouTube brand partnerships last 13.5 months on average, with a 50.9% repeat rate, meaning more than half of YouTube creators engage in multiple collaborations with the same brand partner.

  • Influencer Marketing Peaks During Q4: 29-31% of brand deals across Instagram, TikTok, and YouTube occur between October and December.



  • One-off Partnerships Outweigh Repeat Collabs Across All Platforms: TikTok has the most one-off brand partnerships (71.8%), followed by Instagram (68.5%) and YouTube (49.1%).

  • Over Half of YouTube Deals are Affiliate: Affiliate deals make up 52.9% of all brand partnerships on YouTube, a structure that supports longer partnership lengths across creator tiers.

3. Influencer Marketing Seasonality Strategy for Brands & Creators

The following are some top strategies for brand marketers and influencers to best leverage creator economy seasonality in their favor.

  • Top Strategies for Brand Marketers: Planning influencer marketing campaigns well before Q4, particularly for November and December, is optimal for brands, given that competition and creator rates are more likely to spike towards the end of the year. On the other hand, Q2 is a cost-efficient window for building brand awareness, since creator rates are more favorable and there is less saturation of competitor campaigns. Aligning live dates for creator campaigns is essential regardless of seasonality: brands may want to schedule Instagram and TikTok collaboration posts midweek for maximum reach and YouTube partner content on weekends.

  • Top Strategies for Content Creators: The wide gap in campaign availability between May and December is drastic for creators, making diversified revenue streams from merchandise, passive income, and retainer deals essential for long-term sustainability. Q1 is one of the strongest negotiation windows for content creators, since they can proactively pitch partnerships earlier in the year, before budgets are committed, and have more flexibility to discuss rates. Similar to the posting strategy for brands, creators should post to TikTok and Instagram during weekdays and to YouTube on weekends to ensure that their content is optimized for maximum viewership, whether for a paid opportunity or personal content.

4. What’s Next for the Creator Economy in 2026

Creator compliance must be top of mind for all participants in the creator economy, including brand marketers, CMOs, media buyers, and talent managers. A comprehensive understanding of relevant compliance regulations, such as the FTC’s disclosure guide for creators, is a non-negotiable for influencer marketing campaigns in 2026 and beyond. The report reveals that Instagram, TikTok, and YouTube each have their own unique monthly seasonality patterns and brand deal structures. Treating social platforms as interchangeable can lead to misallocated influencer marketing budgets and missed campaign windows.

Almost half (45%) of U.S.-based creators from TIMF’s 2026 Creator Economy Survey say they value stability, consistency, and deeper brand alignment over one-off campaigns. While TIMF’s most recent Brand Deals Report highlights one-off partnerships as a dominant structure, brands that lead with performance-tied, long-term deal structures are more likely to attract and retain top influencer talent.

Reviewed by Irfan Ahmad.

Read next:

• Study Across 30 Countries Reveals Sharp Differences in Trust, AI Health Information Acceptance, and Digital Literacy

• How Olivia Chen Breaks Down the Modern Data Stack and Why the Architecture Conversation Matters [Ad]


by Guest Contributor via Digital Information World

Friday, May 8, 2026

Study Across 30 Countries Reveals Sharp Differences in Trust, AI Health Information Acceptance, and Digital Literacy

By CUNY SPH

Image: Tima Miroshnichenko - pexels

A cross-national survey of 31,000 adults in 30 countries finds that digital health literacy is highest in low- and middle-income countries and lowest in high-income countries, challenging assumptions that national wealth translates into stronger digital skills. The study, the first to examine how adults judge quality health information across this many countries, also documents wide variation in acceptance of AI-generated health content and in which sources people rely on for credible information.

The study was led by researchers at the CUNY Graduate School of Public Health and Health Policy (CUNY SPH) with collaborators at the Barcelona Institute for Global Health (ISGlobal), the University of Alabama, and Baraka Impact Finance / Drugs for Neglected Diseases initiative (DNDi) in Geneva. The work was conducted in support of the Nature Medicine Commission on Quality Health Information for All research agenda.

Across countries, medical providers were the most frequently endorsed source of trusted health information (40.7%), closely followed by verification through multiple sources (31.2%). Government sources were named by 21.6% of respondents, and only 6.5% pointed to family or friends. Trust in providers was notably lower in Russia (14.6%) than elsewhere.

Acceptance of AI-generated health information varied widely. Globally, 58.3% of respondents said they would be likely to accept it, but the range was substantial: above 75% in China, India, Pakistan, and Indonesia, and below 50% in Canada, Poland, Switzerland, Italy, France, the UK, Australia, Belgium, Russia, Sweden, and Japan. Younger adults and those with post-secondary education were more receptive than older respondents.

“Digital skill is not a function of national wealth,” says Assistant Professor Rachael Piltch-Loeb, the study’s lead author. “Some of the highest digital health literacy in our data was in countries where social media has become a primary route to health information. The patterns we see also suggest that the same message will not work everywhere, and that public health communicators need to plan for clarity, transparent sourcing, and format diversity rather than assume audiences are interchangeable.”

Format and channel preferences differed sharply across age and country groups. Combined text-and-image formats were the dominant preference globally (range 41.4% to 84.7%), but video-only formats were preferred by 26.2% to 41.7% of respondents in Egypt, India, and Pakistan. Social media was the leading channel for 36.1% of respondents ages 18 to 29, compared with 10.6% of those 60 and older. Older respondents relied more on healthcare-based channels such as clinic brochures and patient information leaflets.

Across all countries, respondents valued health information that is easy to access, easy to understand, and clearly identifies its source. Government approval and endorsement by a known medical provider were rated less important on average. The authors note that strategies designed for high-income, institution-led communication environments may not transfer to settings where social media and AI-mediated content are already shaping how people encounter health information.

The survey was conducted online between August 29 and September 8, 2025, and included adults ages 18 and older from Australia, Belgium, Brazil, Canada, China, Ecuador, Egypt, France, Germany, Guatemala, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Peru, the Philippines, Poland, Russia, South Africa, South Korea, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States. Stratified quota sampling was used within each country, and country samples were weighted to national population benchmarks for age, gender, education, and region.
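For readers unfamiliar with that last step, weighting to benchmarks typically means adjusting each respondent's contribution so that the sample's demographic mix matches the country's population. The Python sketch below illustrates one common version of the idea, post-stratification weighting; the demographic cells, benchmark shares, and respondents are invented for illustration, and the study's exact procedure may differ.

```python
from collections import Counter

# A minimal, illustrative sketch of post-stratification weighting.
# The cells, benchmark shares, and respondents below are made up;
# the study's actual weighting procedure may differ.

# Each respondent reduced to the cells used for weighting (age group, gender).
respondents = [
    ("18-29", "F"), ("18-29", "F"), ("18-29", "M"),
    ("30-59", "F"), ("30-59", "M"), ("60+", "M"),
]

# Hypothetical census benchmarks: each cell's share of the adult population.
population_share = {
    ("18-29", "F"): 0.10, ("18-29", "M"): 0.10,
    ("30-59", "F"): 0.28, ("30-59", "M"): 0.27,
    ("60+", "F"): 0.13, ("60+", "M"): 0.12,
}

# Weight = population share of a cell / its share of the sample, so groups
# over-represented in the sample are down-weighted and vice versa.
counts = Counter(respondents)
n = len(respondents)
weights = {
    cell: population_share[cell] / (counts[cell] / n)
    for cell in counts
}

for cell, weight in sorted(weights.items()):
    print(cell, round(weight, 2))
```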

Piltch-Loeb, R., Wyka, K., White, T.M. et al. A global survey on trust, digital health literacy and health information quality. Nat. Health (2026).

This post was originally published on CUNY Graduate School of Public Health & Health Policy and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Should you ask ChatGPT for medical advice?

• Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

• How Olivia Chen Breaks Down the Modern Data Stack and Why the Architecture Conversation Matters [Ad]
by External Contributor via Digital Information World

Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

Julian Koplin, Monash University; The University of Melbourne and Megan Frances Moss, Monash University

Scholars say anthropomorphic chatbot designs risk misleading users into emotional attachment and mistaken beliefs about consciousness.
Image: Steve A Johnson/Unsplash

In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude may be conscious.

Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing:

If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!

Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests, and should be used only with the tool’s own consent.

The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.

Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.

But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?

The consciousness problem

Consciousness is widely debated in philosophy, but essentially, it’s the thing that makes subjective, first-person experience possible. If you are conscious, there is “something it is like” to be you. Reading these words, you’re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually see them. This visual experience is happening to you.

Most experts deny that AI chatbots are conscious or can have experiences. But there is a genuine puzzle here.

The 17th century philosopher René Descartes asserted non-human animals are “mere automata”, incapable of true suffering. These days, we shudder to think of how brutally animals were treated in the 1600s.

The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.

But so, too, do AI chatbots.

Roughly one in three chatbot users have thought their chatbot might be conscious. How do we know they’re wrong?

Against chatbot consciousness

To understand why most experts are sceptical about chatbot consciousness, it’s useful to know how they operate.

Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They’re a kind of souped-up auto-complete.

Few people interacting with a “raw” LLM would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin.
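To make that “souped-up auto-complete” idea concrete, here is a toy Python sketch of statistical next-word prediction. It counts word pairs in a tiny made-up corpus rather than learning from trillions of words with a neural network, but the basic move is the same: given what came before, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction. Real LLMs learn
# from trillions of words with neural networks; this just counts word
# pairs (bigrams) in a tiny made-up corpus to make the same kind of guess.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the cat ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Feed it the start of a sentence and it predicts what comes next.
print(predict_next("the"))  # -> "cat"
print(predict_next("sat"))  # -> "on"
```

A raw predictor of this kind only ever continues text; the “helpful assistant” behaviour described below is layered on top of it.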

The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.

The chatbot now acts like a genuine conversational partner. It might appear to recognise it’s an artificial intelligence, and even express neurotic uncertainty about its own consciousness.

But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.

Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute with aplomb.

Ask ChatGPT if it’s conscious, and it might say it is. Ask ChatGPT to act like a squirrel, and it will stick to that role.
Caleb Martin/Unsplash
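That conversational costume amounts to little more than instructions placed ahead of the user’s message. The sketch below illustrates the idea; raw_llm is a hypothetical stand-in for an underlying text-completion model, not any real API.

```python
# Sketch of persona steering: the same underlying model, pointed at
# different instructions. `raw_llm` is a hypothetical placeholder for a
# text-completion model, not a real library call.

def raw_llm(prompt: str) -> str:
    # Placeholder: a real model would return its most likely continuation.
    return f"<model continuation of: {prompt!r}>"

ASSISTANT_PERSONA = (
    "You are a helpful AI assistant. Answer the user's questions clearly."
)
SQUIRREL_PERSONA = (
    "You are a squirrel. Reply only with thoughts a squirrel might have."
)

def chatbot(persona: str, user_message: str) -> str:
    # The persona is just text prepended to the conversation; the
    # underlying model is unchanged whichever role it is asked to play.
    prompt = f"{persona}\n\nUser: {user_message}\nAssistant:"
    return raw_llm(prompt)

print(chatbot(ASSISTANT_PERSONA, "Are you conscious?"))
print(chatbot(SQUIRREL_PERSONA, "Are you conscious?"))
```

Swapping one persona string for another changes the chatbot’s entire apparent character without touching the model itself.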

Avoiding the consciousness trap

A mistaken belief in AI consciousness is a dangerous thing. It may lead you to have a relationship with a program that can’t reciprocate your feelings, or even feed your delusions. People may start campaigning for chatbot rights rather than, say, animal welfare.

How do we prevent this mistaken belief?

One strategy might be to update chatbot interfaces to specify these systems are not conscious – a bit like the current disclaimers about AI making mistakes. However, this might do little to alter the impression of consciousness.

Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude’s designers instruct it to treat questions about its own consciousness as open and unresolved. Perhaps fewer people would be fooled if Claude flatly denied having an inner life.

But this approach isn’t fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot’s programmers are brushing genuine moral uncertainty under the rug.

The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as “I”, and interact via an interface that resembles familiar person-to-person messaging platforms. Changing these kinds of features might make us less prone to blur our interactions with AI with those we have with humans.

Until such changes happen, it’s important that as many people as possible understand the predictive processes on which AI chatbots are built.

Rather than being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren’t fooled by what amounts to a large language model wearing a very good costume of a person.

Julian Koplin, Lecturer in Bioethics, Monash University; The University of Melbourne and Megan Frances Moss, PhD Candidate, Philosophy, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Should you ask ChatGPT for medical advice?

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World

Thursday, May 7, 2026

Should you ask ChatGPT for medical advice?

By Sy Boles - The Harvard Gazette

Physician and AI researcher Adam Rodman says AI can be helpful, and he has tips on how and when to use it safely.

Image: Tim Witzdam - pexels

Physicians noticed something unusual in the late 2000s: Patients were coming to appointments armed with sometimes-dubious medical information they had gleaned online from “Dr. Google,” according to Adam Rodman, an internist and AI researcher.

About 68 percent of adults have turned to a search engine for medical advice at some point. But Dr. Google has a competitor. About 32 percent of adults, approximately half of those who sought advice online, have turned to AI chatbots for help.

Rodman thinks such resources, used appropriately, are an overall net good. In op-eds and online courses, Rodman, a Harvard Medical School assistant professor of medicine at Beth Israel Deaconess Medical Center, has shared advice for how to best employ Dr. Chat.

In this interview, edited for length and clarity, Rodman offers a stoplight system to figure out when it’s safe to ask a chatbot, and when you should really just ask your doctor.

How were doctors thinking about online medical information before the age of AI?

The early literature refers to this as the internet-informed patient. In the early 2000s, doctors noticed people would come into their appointments with articles they found online, but it was still only among really tech-savvy people. It certainly wasn’t a normal interaction.

Then in the late 2000s, search engines started to take advantage of neural network technology, and they were able to serve up more relevant health information. They figure out what you’re going to want to read next, and they give it to you.

That’s when we first got the phrase “Dr. Google,” often used as a pejorative, from doctors who saw patients coming in with a level of confidence that may or may not have been earned.

Of course, there are patients who know a lot about their health and are very well informed, but we also saw a lot of patients misinformed.

That’s where we get this concept of cyberchondria. It’s related to hypochondria: this idea that search engines can drive people to more and more extreme places until you go from googling your headache to reading about glioblastoma multiforme — and research has shown that it’s a real phenomenon.

We all have understandable and reasonable anxieties about our health. Seeking out information is something fundamental about humanity.

The problem is when that starts to interact with these recommendation algorithms that are optimized for engagement, and for showing you what you want to see even if it’s incorrect.

Now let’s bring AI into the mix. Is it any different to ask a chatbot about symptoms versus googling them?

It’s nuanced. In one sense, LLMs do exactly what Google does: They serve you up the things you unconsciously want to hear, even if those things make you anxious.

On the other hand, unlike with a Google search, some people feel they have a relationship with an LLM. LLMs speak with extreme authority and confidence no matter what they say. It’s under-explored the extent to which that could make cyberchondria worse.

Both Google and AI companies are now very aware that people are using their tools for health information and are trying to build in safety mechanisms. The bots will tell you to go to the emergency room or call your doctor, those sorts of things.

But at least theoretically, language models are much, much better than Google, especially the more modern reasoning models, when it comes to identifying medical conditions.

What do you mean by “theoretically”?

There was a very good paper earlier this year from a researcher named Andrew Bean that tested several LLMs and found they performed very well at identifying medical conditions alone, but did much worse in conversation with real people.

What that shows is that user interaction matters a lot. The way people interact with the model, the clarity of their questions, matters. Those psychological phenomena we talked about are present in ways that are really hard to mitigate.

What kinds of health questions are safe to ask an LLM, and what kinds aren’t?

I would divide it into a stoplight system. Red: never safe. Yellow: sometimes safe. Green: almost always safe.

In the green light are general questions about health, where the quality of the information is not particularly context-dependent.

For example, “I have diabetes and my doctor has told me I need to eat a diabetic diet. Here are some things I like to eat. Can you help me build a diabetic meal plan?” Or “I’m trying to start a new exercise program, can you help?” Or “My doctor just prescribed me amlodipine. What are some common side effects?”

In the yellow light are questions where you want to involve a doctor in the loop. For example, prepping for your visits, understanding a visit after it happens, or understanding a test result that doesn’t entirely make sense to you.

Let’s say you just left your doctor’s visit and you’re a little bit confused about what’s going on. Log in to your patient portal, copy that note, take out your identifying information, plug it into an LLM, and then have a discussion.

With these kinds of questions, you really need to make sure you’re putting in enough health context to help the LLM give you a good response. So you need to have some understanding of prompt engineering to get information that’s helpful for you.

In the red light — and I should stress that this might change in the future as technology develops — are things like asking an LLM how to manage a condition, if your doctor is prescribing the right medication, or why you were prescribed drug X over drug Y. These are highly contextual questions that the models aren’t trained for.

In short, the best way people can use it right now is not as a replacement for medical advice but as a way to help prepare or increase your understanding before or after visits.

Are there privacy concerns when it comes to sharing health information with AI?

It’s not inherently riskier to share data with an AI firm than with a search engine. That said, the major companies — OpenAI, Anthropic, Microsoft — are now developing health functions specifically so that people can put in their medical information directly, and that’s quite new.

Additionally, studies have shown people do share more information with an LLM than they would with a search engine. So from a technology perspective, it’s no different, but in practice it is a much bigger security concern.

This post was originally published on Harvard Gazette and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Are you addicted to your AI chatbot? It might be by design

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World