Thursday, January 1, 2026

How doubting your doubts may increase commitment to goals

Research explores what happens when people face goal obstacles

How doubting your doubts may increase commitment to goals
Image: engin akyurt / Unsplash

Jeff Grabmeier, Ohio State News, grabmeier.1@osu.edu.

When it comes to our most important long-term goals in life, it is not uncommon to face obstacles that may lead us to doubt whether we can achieve our ambitions.

But when life hands you doubts, the answer may be to question your doubts, a new study suggests.

A psychology professor found that when people who were worried about achieving an identity goal were induced to experience what is called meta-cognitive doubt, they actually became more committed to achieving their goal.

“What this study found is that inducing doubts in one’s doubts can provide a formula for confidence,” said Patrick Carroll, author of the study and professor of psychology at The Ohio State University at Lima.

The study was published online recently in the journal Self and Identity.

Carroll was interested in what happens when people have what is called an “action crisis” while pursuing an identity goal – a long-term objective centered on who you want to become in life. Wanting to become a doctor, for instance, is an identity goal.

An action crisis is a decision conflict where you are not sure if you want to continue pursuit of the goal.

“When you’re pursuing identity goals, bumps in the roads inevitably arise. There may come a point where the obstacle is big enough to evoke doubts about whether to continue,” Carroll said.

Most research on the topic has focused specifically on these doubts and how they can impact whether people go forward with their goals.

But based on previous work done by other Ohio State researchers, Carroll decided to examine meta-cognitive doubt, which is the sense of certainty a person has in the validity of one’s thoughts.

In the case of this research, a person can have doubts about whether they can achieve their goal. But what happens if you make the person wonder if their doubts are valid?

Carroll conducted two studies. One involved 267 people who participated online. First, they completed an action crisis scale about their most important personal goal. The scale included items such as “I doubt whether I should continue striving for my goal or disengage from it” and participants responded on a scale from “strongly disagree” to strongly agree.”

Participants were then told they would take part in a second, unrelated study on the effect of memory writing exercises. Half of the participants were asked to write about a time that they felt confidence in their thinking. The other half were asked to write about a time when they had experienced doubt in their thinking.

After completing the writing exercise, all participants were asked to rate how committed they were to achieving their most important personal goal, on a scale from “not at all committed” to “very committed.”

Findings showed that the writing exercise succeeded in making people feel more confident or more doubtful in their own thoughts about their identity goal – even though the writing exercise was not directly connected to their goals.

Here’s how it worked: Those participants who felt doubtful about their identity goal – and then wrote about an experience feeling confident – were less committed to achieving their goal. In other words, the writing exercise made them more confident in their doubts about achieving their goal.

On the other hand, those who felt doubtful about their goal and then wrote about an experience of feeling doubtful in their own thoughts actually had higher levels of commitment to their goals. For them, writing about doubt made them question their own doubts about achieving their goal.

“On some level, it may seem that doubt would be additive. Doubt plus doubt would equal more doubt,” Carroll said. “But this study found the opposite: Doubt plus doubt equaled less doubt.”

Carroll replicated the findings in another study, involving 130 college students, that used a different way of inducing doubt. In this study, Carroll used a technique developed by Ohio State researchers that had the participants complete the action crisis scale with their non-dominant hand.

“Previous research showed that using the non-dominant hand leads participants to have doubts in their own thoughts because they use their shaky handwriting as a cue that their thoughts must be invalid,” Carroll said.

“And that is exactly what I found in this study. So in two different studies we found that inducing meta-cognitive doubt can lead to people doubting their own doubts.”

On a practical level, it may be difficult for individuals to induce doubts about their doubts on their own, Carroll said. One reason it worked in this study is that participants were not aware that the doubt induction was related to their goal doubts.

This could be more effective if someone else – a therapist, a teacher, a friend or a parent – can help a person question their own thoughts and doubts.

“You don’t want the person to be aware that you’re getting them to question their doubts about their goals,” he said.

Carroll also noted that this technique should be used carefully, because it could potentially undermine wise judgment if overused or misapplied.

“You don’t want to undermine humility and replace it with overconfidence or premature certainty,” he said. “This needs to be used wisely.”

Originally published by Ohio State News at The Ohio State University on December 29, 2025. Republished with permission.

Editor's Note: Corrected "pursing" to "pursuing" (typo in original). Ohio State Communications confirmed no AI tools were used in content creation.

Also read: Five myths about learning a new language – busted
by External Contributor via Digital Information World

Wednesday, December 31, 2025

Can AI Chatbots Produce Gossip-Like Content With Potential Reputational Impact?

A peer reviewed study published in Ethics and Information Technology by University of Exeter researchers Joel Krueger and Lucy Osler examines how generative AI chatbots can produce false or misleading content that can meet the specific structural criteria of what the authors describe as “AI gossip,” potentially contributing to social and reputational harm.

The paper focuses on widely used consumer facing systems such as OpenAI’s ChatGPT and Google’s Gemini, which are powered by large language models. According to the authors, these systems are trained on extensive collections of text and generate responses by predicting likely word sequences. As a result, they can produce statements that appear authoritative without regard for whether those statements are true. "For example, unsuspecting users might develop false beliefs that lead to dangerous behaviour (e.g., eating rocks for health), or, they might develop biases based upon bullsh*t stereotypes or discriminatory information propagated by these chatbots", explains the paper.

The study builds on prior arguments that such outputs are better understood as “bullsh*t,” in the philosophical sense defined by Harry Frankfurt, rather than as hallucinations or lies. In this framing, the systems are not presented as conscious or intentional agents, but as tools designed to generate truth-like language without concern for accuracy.

Krueger and Osler argue that some chatbot outputs can also be understood as gossip. They adopt a "thin" definition of gossip as communication involving a speaker, a listener, and an absent third party, where the information goes beyond common knowledge and includes an evaluative judgment, often negative. While chatbots lack awareness, motives, or emotional investment, the authors maintain that their outputs can still meet these structural criteria.

To illustrate this claim, the paper examines a documented case involving Kevin Roose, a technology reporter for The New York Times. After Roose published accounts of an unsettling interaction with a Microsoft Bing chatbot in early 2023, users subsequently discovered that other chatbots were generating negative character evaluations about him when asked about his work. According to the study, these responses typically combined basic biographical information with unsubstantiated evaluative claims, such as suggestions of sensationalism or questionable journalistic practices.

The authors distinguish between two forms of AI gossip. In bot to user gossip, a chatbot delivers evaluative statements about an absent person to a human user. In bot to bot gossip, similar information is drawn from online content and incorporated into training data, then propagated between systems without direct human involvement. The paper argues that the second form may pose greater risks because it can spread silently, persist over time, escape human oversight, and lacks the social constraints that normally moderate human gossip.

The study situates these effects within what the authors call “technosocial harms,” meaning harms that arise in interconnected online and offline environments. Examples discussed in the paper include reputational damage, defamation, informal blacklisting, and emotional distress. The authors reference documented legal disputes in which individuals alleged that AI systems produced false claims about criminal or professional misconduct, illustrating how such outputs can affect employment prospects, public trust, and social standing.

Krueger and Osler emphasize that these risks do not arise from malicious intent on the part of AI systems. Instead, they argue that responsibility rests with the human designers and institutions that build, deploy, and market these technologies. The paper concludes that recognizing certain forms of AI misinformation as gossip, rather than as isolated factual errors, helps clarify how these systems can produce broader social effects and why greater ethical scrutiny is warranted as AI tools become more embedded in everyday life.


Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, and published by humans. Image: DIW-Aigen

Read next:

• AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026

Five myths about learning a new language – busted
by Ayaz Khan via Digital Information World

Tuesday, December 30, 2025

Five myths about learning a new language – busted

Abigail Parrish, University of Sheffield and Jessica Mary Bradley, University of Sheffield
Image: Thought Catalog /Unsplash

Language learning is often a daunting prospect. Many of us wish we had learned a language to a higher level at school. But even though adults of all ages can do well in acquiring a new language, fear – or the memory of struggling to memorise grammar at school – can hold us back.

We both work in languages education and recognise the real benefits that learning another language can bring. As well as myriad cognitive benefits, it brings with it cultural insights and empathetic awareness.

With that in mind, we’re here to dispel five myths about language learning that might be putting you off.

Myth one: it’s all about grammar and vocabulary

In fact, learning about people, history and culture is arguably the best part of learning a language. While grammar and vocabulary are undeniably important aspects of language learning, they don’t exist in isolation from how people communicate in everyday life.

Language learning can help us to have “intercultural agility”: the ability to engage empathically with people who have very different experiences from our own. To be able to do this means learning about people, history and culture.

Immersing yourself in a particular country or location, for example through studying or working, is a fantastic way to do this. But when this isn’t feasible, there are so many other options available. We can learn so much through music, books, films, musical theatre and gaming.

Myth two: we should focus on avoiding mistakes – they’re embarrassing

One problem with formal language learning is that it encourages us to focus on accuracy at all costs. To pass exams, you need to get things “right”. And many of us feel nervous about getting things wrong.

But in real-life communication, even in our expert languages, we often make mistakes and get away with it. Think of the number of times you have misspelled something, or said the wrong word, and still been understood.

Less formal language learning can encourage us to think more about communication than accuracy.

One advocate of this approach is author Benny Lewis, who popularised a communicative learning approach he calls “language hacking” which focuses on the language skills needed for conversation. Language apps also encourage this, as does real-life travel and communication.

Myth three: it’s too much effort to start over with a new language

You can use languages in lots of ways, and the language you learn at school doesn’t have to be the only one you learn.

In England, most people learn one or more of French, Spanish or German at school. These languages can often serve as great apprenticeship languages, teaching us how to learn a language and about grammatical structures.

But they are not always the languages that we are most likely to use as adults, when family and work could take us anywhere. Our cultural interests might also lead us to want to know more about a new language.

Learning a language that you have a personal interest in can be very motivating and help you to keep going when things get a bit rocky.

Myth four: learning a language is an individual endeavour

You don’t have to learn alone. Learning with others, or having the support of others, can help motivate us to learn.

This might be through a multilingual marriage, joining a conversation group or chatting in a language learning forum online. Don’t feel that you have to have reached a certain proficiency before you start reaching out to others.

Language apps can also make language learning a collective endeavour. You can learn along with friends and family, and congratulate them on their language learning streaks.

This is something both of us do with multiple generations of our families, helping us engage with language learning in a lighthearted way.

Myth five: it’s a lot of hard graft

Learning a language in a systematic way can be challenging, whether in a classroom or from a self-study course. But some things make this easier. We have found that people are more motivated to engage when they have a personal reason to learn. This could be, for example, wanting to communicate with family or to travel to a particular country or region.

The growth in popularity and accessibility of language learning apps has made language learning possible from any location and at any time, often for free.

You can easily catch up on your Chinese from the comfort of your own armchair, at whatever time is most convenient for you. Apps can be fun and playful, and can help us maintain motivation, develop vocabulary and embed grammatical structures.

There are lots of reasons for learning a language, and lots of benefits. We encourage everyone to focus on these benefits, and give it a go.The Conversation

Abigail Parrish, Lecturer in Languages Education, University of Sheffield and Jessica Mary Bradley, Senior Lecturer in Literacies and Language, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026


by External Contributor via Digital Information World

AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026

Image: DIW-Aigen

In artificial intelligence, 2025 marked a decisive shift. Systems once confined to research labs and prototypes began to appear as everyday tools. At the center of this transition was the rise of AI agents – AI systems that can use other software tools and act on their own.

While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers alike.

AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots like ChatGPT.

In 2025, the definition of AI agent shifted from the academic framing of systems that perceive, reason and act to AI company Anthropic’s description of large language models that are capable of using software tools and taking autonomous action. While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act, using tools, calling APIs, coordinating with other systems and completing tasks independently.

This shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol. The protocol allowed developers to connect large language models to external tools in a standardized way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents.

AI agents are a whole new ballgame compared with generative AI.

The milestones that defined 2025

The momentum accelerated quickly. In January, the release of Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose training, reflected in values called weights, is publicly available. Throughout 2025, major U.S. labs such as OpenAI, Anthropic, Google and xAI released larger, high-performance models, while Chinese tech companies including Alibaba, Tencent, and DeepSeek expanded the open-model ecosystem to the point where the Chinese models have been downloaded more than American models.

Another turning point came in April, when Google introduced its Agent2Agent protocol. While Anthropic’s Model Context Protocol focused on how agents use tools, Agent2Agent addressed how agents communicate with each other. Crucially, the two protocols were designed to work together. Later in the year, both Anthropic and Google donated their protocols to the open-source software nonprofit Linux Foundation, cementing them as open standards rather than proprietary experiments.

These developments quickly found their way into consumer products. By mid-2025, “agentic browsers” began to appear. Tools such as Perplexity’s Comet, Browser Company’s Dia, OpenAI’s GPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc.’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon and others reframed the browser as an active participant rather than a passive interface. For example, rather than helping you search for vacation details, it plays a part in booking the vacation.

At the same time, workflow builders like n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems beyond what has already happened with coding agents like Cursor and GitHub Copilot.

New power, new risks

As agents became more capable, their risks became harder to ignore. In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: By automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.

This tension defined much of 2025. AI agents expanded what individuals and organizations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight.

The business community is gearing up for multiagent systems.

What to watch for in 2026

Looking ahead, several open questions are likely to shape the next phase of AI agents.

One is benchmarks. Traditional benchmarks, which are like a structured exam with a series of questions and standardized scoring, work well for single models, but agents are composite systems made up of models, tools, memory and decision logic. Researchers increasingly want to evaluate not just outcomes, but processes. This would be like asking students to show their work, not just provide an answer.

Progress here will be critical for improving reliability and trust, and ensuring that an AI agent will perform the task at hand. One method is establishing clear definitions around AI agents and AI workflows. Organizations will need to map out exactly where AI will integrate into workflows or introduce new ones.

Another development to watch is governance. In late 2025, the Linux Foundation announced the creation of the Agentic AI Foundation, signaling an effort to establish shared standards and best practices. If successful, it could play a role like the World Wide Web Consortium in shaping an open, interoperable agent ecosystem.

There is also a growing debate over model size. While large, general-purpose models dominate headlines, smaller and more specialized models are often better suited to specific tasks. As agents become configurable consumer and business tools, whether through browsers or workflow management software, the power to choose the right model increasingly shifts to users rather than labs or corporations.

The challenges ahead

Despite the optimism, significant socio-technical challenges remain. Expanding data center infrastructure strains energy grids and affects local communities. In workplaces, agents raise concerns about automation, job displacement and surveillance.

From a security perspective, connecting models to tools and stacking agents together multiplies risks that are already unresolved in standalone large language models. Specifically, AI practitioners are addressing the dangers of indirect prompt injections, where prompts are hidden in open web spaces that are readable by AI agents and result in harmful or unintended actions.

Regulation is another unresolved issue. Compared with Europe and China, the United States has relatively limited oversight of algorithmic systems. As AI agents become embedded across digital life, questions about access, accountability and limits remain largely unanswered.

Meeting these challenges will require more than technical breakthroughs. It demands rigorous engineering practices, careful design and clear documentation of how systems work and fail. Only by treating AI agents as socio-technical systems rather than mere software components, I believe, can we build an AI ecosystem that is both innovative and safe.The Conversation

Thomas Şerban von Davier, Affiliated Faculty Member, Carnegie Mellon Institute for Strategy and Technology, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Editor's Note: This post might have been created or polished by AI tools.


by External Contributor via Digital Information World

Monday, December 29, 2025

Nobel Laureate Discusses Artificial Intelligence's Role in Critical Thinking Education

Nobel Prize-winning physicist Saul Perlmutter addressed artificial intelligence as a two-edged tool during a December 2025 interview on the "In Good Company" podcast, briefly comparing it to earlier concerns about calculators in education. Perlmutter, who won the Nobel Prize in Physics for discovering the universe's accelerating expansion, discussed how AI intersects with the critical thinking methods he teaches.

The physicist noted that AI can give students the impression they have actually learned the basics before they really have, potentially leading them to rely on it too soon before they know how to do the work themselves. He identified a particular concern with the current generation of AI being very good at being overly confident about what it's saying, which users may accept without scrutiny because it's typed on the screen.

Perlmutter teaches a critical thinking course covering 24 concepts and has asked students to think a bit hard about how to use AI to make it easier to operationalize each concept in their day-to-day lives, and also how to use these concepts to tell whether AI is fooling them or sending them in the right or wrong direction.

The physicist noted that when users know these different tools and approaches to thinking about problems, AI can often help them find the bit of information they need to use these techniques.

Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, edited, and published by humans. Image: DIW-Aigen

Read next: AI Video Translation Offers Efficiency Potential but Human Nuance Remains Key
by Ayaz Khan via Digital Information World

AI Video Translation Offers Efficiency Potential but Human Nuance Remains Key

A study evaluated consumer responses to marketing videos translated by a generative AI tool (HeyGen) versus human translators across English–Indonesian and Indonesian–English language pairs. Two online experiments involved participants in Indonesia (Study 1) and the United States and United Kingdom (Study 2), measuring language comprehension, accent neutrality, naturalness, and customer engagement intention.

AI translations were consistently rated as less natural and less accent-neutral than human translations. Language comprehension varied by direction: AI performed worse translating into Indonesian but better into English, reflecting differences in AI training data. Despite these perceptual differences, viewers were equally willing to like, share, or comment on both types of videos.

Research shows AI struggles with tone and accents, though marketing engagement matches human translations. Thoughtful use of emerging technologies requires balancing innovation with responsibility, ensuring progress benefits people without misleading or harming them.

"These insights suggest that AI video translation is not yet a perfect substitute for human translation...", explains UEA in a newsroom post. Adding further, "But it already offers practical value".
According to Jiseon Han, Assistant Professor at University of East Anglia: "For [online] marketers, AI can be a great choice when speed and straightforward messaging matter most, but when it comes to capturing tone, personality, and cultural context, human expertise is still irreplaceable".

The authors note several limitations: findings reflect a single AI tool, specific language pairs, one video per condition, and a single point in time, which restricts generalizability. They suggest future research should explore additional AI tools, languages, and translation contexts to further understand consumer evaluation of AI video translation.

Source: Journal of International Marketing; research led by the University of Jyväskylä with co-authorship from University of East Anglia (UEA).

Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, edited, and published by humans.

Read next: Global Survey: 66% Say 2025 Bad Year for Country, 71% Optimistic 2026 Will Be Better
by Asim BN via Digital Information World

Friday, December 26, 2025

Global Survey: 66% Say 2025 Bad Year for Country, 71% Optimistic 2026 Will Be Better

Ipsos surveyed 23,642 adults (under the age of 75) across 30 countries between 27 October and 4 November 2025. The survey found that 50% of respondents said 2025 was a bad year for them and their family. At the national level, 66% of respondents said 2025 was a bad year for their country, with the highest percentages reported in France (85%), South Korea (85%), and Türkiye (80%).

Looking ahead, 71% of respondents expressed optimism that 2026 will be better than 2025. Countries with the highest optimism included Indonesia (90%), Colombia (89%), and Chile (86%), while France (41%), Japan (44%), and Belgium (49%) reported the lowest optimism.

Public pessimism dominated 2025 globally, but strong optimism for 2026 emerges across emerging economies surveyed.

Country % agree % disagree
30-country avg. 71 29
Indonesia 90 10
Colombia 89 11
Chile 86 14
Thailand 86 14
Peru 86 14
India 85 15
Argentina 83 17
South Africa 82 18
Mexico 82 18
Malaysia 82 18
Brazil 80 20
Hungary 77 23
Poland 74 26
Romania 70 30
Canada 70 30
Spain 69 31
Sweden 68 32
Singapore 67 33
Netherlands 67 33
United States 66 34
Australia 66 34
South Korea 65 35
Türkiye 63 37
Ireland 63 37
Great Britain 58 42
Germany 57 43
Italy 57 43
Belgium 49 51
Japan 44 56
France 41 59

On economic expectations, 49% of respondents predicted a stronger global economy in 2026, while 51% expected it to be worse.

The report also notes that in 2020, 90% of average respondents globally said their country had a bad year, reflecting the height of the COVID-19 pandemic. Current optimism levels remain below pre-2022 figures.

Source: Ipsos Predictions 2026 Report

Read next:

• How Schema Markup Is Redefining Brand Visibility in the Age of AI Search, According to Experts at Status Labs

• How ChatGPT could change the face of advertising, without you even knowing about it
by Ayaz Khan via Digital Information World