Tuesday, December 16, 2025

Researchers Find Structured Human–AI Collaboration Is Required for Creative Gains

What generative AI typically does best – recognise patterns and predict the next step in a sequence – can seem fundamentally at odds with the intangibility of human creativity and imagination. However, Cambridge researchers suggest that AI can be a useful creative partner, as long as there is clear guidance on how ideas should be developed together.
Image: DIW-Aigen

It’s a big question for people working on creative tasks: can humans and AI work together to boost creativity? Studies of humans and AI tackling creative tasks together have yielded widely inconsistent results.

But a team of researchers, including from the University of Cambridge, say that while simply adding AI into a process won’t improve creativity, AI can be an effective creative partner if there are clear instructions and guidance – for both humans and AI – on how to develop ideas together.

Their research, published in the journal Information Systems Research, offers practical insight into how human-AI collaboration can be structured to boost creativity.

“Adding AI doesn’t automatically lead to better ideas,” said co-author Dr Yeun Joon Kim, from Cambridge Judge Business School. “For human-AI pairs to work together and improve ideas over time, organisations must provide targeted support – such as guidance on how to build on and adapt ideas – to help employees and AI learn how to create more effectively.”

In their research, Kim and his colleagues redefined ‘augmented learning’ – a term first used in 1962 to describe how technology can help people learn more effectively.

The researchers argue that in the age of generative AI (GenAI), learning is no longer just about improving human understanding. Instead, it’s becoming a shared process where humans and AI learn and create together. The researchers describe this as an evolving partnership where both sides adjust their roles across tasks such as generating ideas, giving feedback and refining concepts.

Traditionally, technology was seen as a tool that simply made information easier to access. But GenAI, they say, acts more like a collaborator. Once a human enters a prompt, the system can take an active role in shaping ideas and decisions: shifting augmented learning from an individual process to a collective one.

The study points to Netflix as an example of human–AI teamwork. Rather than treating scriptwriting as a single task, Netflix breaks it into stages like idea generation and evaluation. Human writers create early drafts, while AI analyses character arcs, pacing and audience trends to help refine stories and improve how shows are developed and marketed.

Kim says he became interested in this research because AI was originally developed to answer a longstanding question: “Can machines generate something genuinely new?” While traditional technologies excel at routine tasks, many doubted that technology could make a creative contribution.

“When GenAI systems became widely available, I noticed that although they could generate ideas rapidly, people did not know how to collaborate with them in a way that improved creativity,” he said.

“We wanted to figure out how people can learn to work with GenAI in a more intentional way, so that human–GenAI co-creation leads to stronger joint results rather than just more content,” said co-author Dr Luna Luan from the University of Queensland.

The research included three linked studies, each involving between 160 and 200 human participants. The first study found that human-AI teams did not automatically become more creative over time when tackling social and environmental problems. The second study explored why. It identified three types of collaboration – humans proposing ideas, asking AI for ideas, and jointly refining ideas – and found that only joint refinement boosted creativity. Yet participants rarely increased this behaviour. The third study showed that simply instructing people to focus more on co-developing ideas led to clear improvements in human–AI creativity across repeated tasks.

“We were surprised that human-AI pairs did not naturally improve through repeated collaboration,” said Kim. “Despite AI’s generative power, creativity did not increase over time. We found that improvement occurred only when we introduced a deliberate intervention.”

“Specifically, instructing participants to engage in idea co-development – focusing on exchanging feedback and refining existing ideas rather than endlessly generating new ideas – was the key.”

The researchers say GenAI tools need to do more than churn out ideas. Their findings show that human–AI teams became less creative over time, mainly because they stopped engaging in the back-and-forth refinement that actually improves results. They say that AI systems should be designed to prompt users to give feedback, expand on suggestions and refine ideas, rather than racing through idea generation.

For organisations, the message is that creativity won’t automatically improve just by adding AI. Effective collaboration requires structure: clear instructions, templates and workflows that help people recognise when to challenge, refine or build on AI-generated ideas. Training should also teach staff how to treat AI as a creative partner by practising feedback exchange and iterative improvement.

The authors also call for a shift in how companies think about human–AI work. Instead of choosing between automation and collaboration, tasks should be broken into stages where humans and AI play different roles: for example, AI generating options and humans evaluating and refining them.

They warn that although many firms rushed to adopt GenAI after the release of ChatGPT in 2022, simply using the technology does not guarantee greater creativity at work. Its impact depends on how well people understand it and how effectively they collaborate with it.

The research was co-authored by Luna Luan, a Lecturer in Management at the University of Queensland in Australia, who recently earned her PhD at Cambridge Judge Business School; Yeun Joon Kim of Cambridge Judge; and Jing Zhou, Professor of Management at Rice University in Texas.

Reference:

Yingyue Luna Luan, Yeun Joon Kim, Jing Zhou. ‘Augmented Learning for Joint Creativity in Human-GenAI Co-Creation.’ Information Systems Research (2025). DOI: 10.1287/isre.2024.0984

Adapted from a piece published on the Cambridge Judge Business School website.

This article was originally published on University of Cambridge, with an updated title, and is shared under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Read next: The ‘one chatbot per child’ model for AI in classrooms conflicts with what research shows: Learning is a social process

by External Contributor via Digital Information World

The ‘one chatbot per child’ model for AI in classrooms conflicts with what research shows: Learning is a social process

Niral Shah, University of Washington

Image: Ethan Garvey / Unsplash

In the Star Trek universe, the audience occasionally gets a glimpse inside schools on the planet Vulcan. Young children stand alone in pods surrounded by 360-degree digital screens. Adults wander among the pods but do not talk to the students. Instead, each child interacts only with a sophisticated artificial intelligence, which peppers them with questions about everything from mathematics to philosophy.

This is not the reality in today’s classrooms on Earth. For many technology leaders building modern AI, however, a vision of AI-driven personalized learning holds considerable appeal. Outspoken venture capitalist Marc Andreessen, for example, imagines that “the AI tutor will be by each child’s side every step of their development.”

Years ago, I studied computer science and interned in Silicon Valley. Later, as a public school teacher, I was often the first to bring technology into my classroom. I was dazzled by the promise of a digital future in education.

Now as a social scientist who studies how people learn, I believe K-12 schools need to question predominant visions of AI for education.

Individualized learning has its place. But decades of educational research also make clear that learning is a social endeavor at its core. Classrooms that privilege personalized AI chatbots overlook that fact.

School districts under pressure

Generative AI is coming to K-12 classrooms. Some of the largest school districts in the country, such as Houston and Miami, have signed expensive contracts to bring AI to thousands of students. Amid declining enrollment, perhaps AI offers a way for districts to both cut costs and seem cutting edge.

Pressure is also coming from both industry and the federal government. Tech companies have spent billions of dollars building generative AI and see a potential market in public schools. Republican and Democratic administrations have been enthusiastic about AI’s potential for education.

Decades ago, educators promoted the benefits of “One Laptop per Child.” Today it seems we may be on the cusp of “one chatbot per child.” What does educational research tell us about what this model could mean for children’s learning and well-being?

Learning is a social process

During much of the 20th century, learning was understood mainly as a matter of individual cognition. In contrast, the latest science on learning paints a more multidimensional picture.

Scientists now understand that seemingly individual processes – such as building new knowledge – are actually deeply rooted in social interactions with the world around us.

Neuroscience research has shown that even from a young age, our social relationships influence which genes turn on and off. This matters because gene expression affects how our brains develop and our capacity to learn.

In classrooms, this suggests that opportunities for social interaction – for instance, children listening to their classmates’ ideas and haggling over what is true and why – can support brain health and academic learning.

Research in the social sciences has long demonstrated the value of high-quality classroom discourse. For example, in a well-cited 1991 study involving over 1,000 middle school students across more than 50 English classrooms, researchers Martin Nystrand and Adam Gamoran found that children performed significantly better in classes “exhibiting more uptake, more authenticity of questions, more contiguity of reading, and more discussion time.”

In short, research tells us that rich learning happens when students have opportunities to interact with other people in meaningful ways.

AI in classrooms lacks research evidence

What does all of this mean for AI in education?

Introducing any new technology into a classroom, especially one as alien as generative AI, is a major change. It seems reasonable that high-stakes decisions should be based on solid research evidence.

But there’s one problem: The studies that school leaders need just aren’t there yet. No one really knows how generative AI in K-12 classrooms will affect children’s learning and social development.

Current research on generative AI’s impact on student learning is limited, inconclusive and tends to focus on older students – not K-12 children. Studies of AI use thus far have tended to focus on either learning outcomes or individual cognitive activity.

Although standardized test scores and critical thinking skills matter, they represent a small piece of the educational experience. It is also important to understand generative AI’s real-life impact on students.

For example: How does it feel to learn from a chatbot, day after day? What is the longer-term impact on children’s mental health? How does AI use affect children’s relationships with each other and with their teachers? What kinds of relationships might children form with the chatbots themselves? What will AI mean for educational inequities related to social forces such as race and disability?

More broadly, I think now is the time to ask: What is the purpose of K-12 education? What do we, as a society, actually want children to learn?

Of course, every child should learn how to write essays and do basic arithmetic. But beyond academic outcomes, I believe schools can also teach students how to become thoughtful citizens in their communities.

To prepare young people to grapple with complex societal issues, the National Academy of Education has called for classrooms where students learn to engage in civic discourse across subject areas. That kind of learning happens best through messy discussions with people who don’t think alike.

To be clear, not everything in a classroom needs to involve discussions among classmates. And research does indicate that individualized instruction can also enhance social forms of learning.

So I don’t want to rule out the possibility that classroom-based generative AI might augment learning or the quality of students’ social interactions. However, the tech industry’s deep investments in individualized forms of AI – as well as the disappointing history of technology in classrooms – should give schools pause.

Good teaching blends social and individual processes. My concern about personalized AI tutors is how they might crowd out already infrequent opportunities for social interaction, further isolating children in classrooms.

Center children’s learning and development

Education is a relational enterprise. Technology may play a role, but as students spend more and more class time on laptops and tablets, I don’t think screens should displace the human-to-human interactions at the heart of education.

I see the beneficial application of any new technology in the classroom – AI or otherwise – as a way to build upon the social fabric of human learning. At its best it facilitates, rather than impedes, children’s development as people. As schools consider how and whether to use generative AI, the years of research on how children learn offer a way to move forward.

Niral Shah, Associate Professor of Learning Sciences & Human Development, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• Merriam-Webster Names “Slop” as 2025 Word of the Year

• Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight


by External Contributor via Digital Information World

Merriam-Webster Names “Slop” as 2025 Word of the Year

Merriam-Webster has chosen “slop” as its 2025 Word of the Year, the dictionary publisher announced on December 14, 2025.

The term is defined by Merriam-Webster as low-quality digital content produced usually in quantity by means of artificial intelligence.

Editors said the word reflected the large amount of AI-generated material encountered during the year, including unrealistic images, deceptive online content, automated writing, and similar media appearing across digital platforms.

Merriam-Webster noted that the word’s tone was less fearful than mocking, often used to highlight the limits of artificial intelligence rather than express alarm.

The publisher also reported high public interest in other words during 2025, including gerrymander, performative, touch grass, and tariff, based on "lookup data".

One important caveat: Merriam-Webster doesn’t clearly explain its selection process, and we couldn’t find a recent official page showing how this year’s word was chosen.

Merriam-Webster names slop 2025 Word of the Year, mocking proliferation of low-quality AI-generated digital content.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen

Read next: Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight
by Asim BN via Digital Information World

Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight

As momentum behind meaningful legislation on AI in the UK has appeared to stall, new research from the Ada Lovelace Institute shows that this delay – and the government’s broader shift away from regulation – is increasingly out of step with public attitudes.

The nationally representative polling examines not only whether the UK public support regulation of AI, but also how they expect it to function, and where gaps between public expectations and policy ambition may lie. Key findings include:

  • The public support independent regulation. The UK public do not trust private companies to self-regulate. There is strong public support (89%) for an independent regulator for AI, equipped with enforcement powers.
  • The public prioritise fairness, positive social impacts and safety. AI is firmly embedded in public consciousness and 91% of the public feel it is important that AI systems are developed and used in ways that treat people fairly. They want this to be prioritised over economic gains, speed of innovation and international competition when presented with trade-offs.
  • The public feel disenfranchised and excluded from AI decision-making, and mistrust key institutions. Many people feel excluded from government decision-making. 84% fear that, when regulating AI, the government will prioritise its partnerships with large technology companies over the public interest.
  • The public expect ongoing monitoring and clear lines of accountability. People support mechanisms such as independent standards, transparency reporting and top-down accountability to ensure effective monitoring of AI systems, both before and after they are deployed.
  • Public trust in institutions shaping AI remains low. Over half (51%) do not trust large technology companies to act in the public's interest, while distrust is even higher for social media companies (69%), and 59% do not trust the government.

Nuala Polo, UK Public Policy Lead at the Ada Lovelace Institute, said:

“Our research is clear: there is a major misalignment between what the UK public want and what the government is offering in terms of AI regulation. The government is betting big on AI, but success requires public trust. When people do not trust that government policy will protect them, they are less likely to adopt new technologies, and more likely to lose confidence in public institutions and services, including the government itself.”

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, said:

“Examples of the unmanaged risks – and sometimes fatal harms – of AI systems are increasingly making the headlines. Trust is built with meaningful incentives to manage harm. We see these incentives in food, aviation and medicines – consequential technologies like AI should not be treated any differently. Continued inaction on AI harms will come with serious costs to the potential benefits of adoption.”

Note: This post was first published by the Ada Lovelace Institute and is shared here with minor additions, including the paragraph on public trust in institutions shaping AI and an updated title.

Read next: Best way for employers to support employees with chronic mental illness is by offering flexibility
by Press Releases via Digital Information World

Monday, December 15, 2025

Best way for employers to support employees with chronic mental illness is by offering flexibility

More than 20% of Americans will be diagnosed with a mental illness in their lifetimes. That is, they will experience conditions that influence the way they think, feel and act – and that may initially seem incompatible with the demands of work.

Our new research suggests that what people living with chronic mental illnesses need most to succeed at work is for their managers to be flexible and trust them.

This includes the freedom to adjust their schedules and workloads to make their jobs more compatible with their efforts to manage and treat their symptoms. For that to happen, managers need to trust that these workers are committed to their jobs and their employers.

We’re management professors who reviewed hundreds of blog and Reddit posts and conducted in-depth interviews with 59 people. Those are the most significant findings from our peer-reviewed study, published in the October 2025 issue of the Academy of Management Journal.

Scouring Reddit posts and conducting interviews

We gathered our data from three sources: anonymous blog posts from 171 people, Reddit posts from 781 people, and in-depth interviews with 59 workers employed in a variety of jobs across multiple industries.

All these people worked while dealing with chronic mental illness, such as major depressive disorder, generalized anxiety disorder and bipolar disorder. The blog posts were maintained by a nonprofit concerned with the experiences of individuals living with mental illness. We focused on posts tagged “work.”

To identify relevant data on Reddit, we searched using a combination of the word “work” with several terms associated with mental illness. Additionally, we restricted our data collection to unsolicited narratives published prior to mid-March 2020 to avoid overlap with the employment changes that occurred during the COVID-19 pandemic. Because this data was gathered from the internet, we couldn’t obtain details about participants’ gender, age, profession or education.

We also recruited interviewees through social media postings, advertisements on a public university’s alumni listserv, and outreach to an organization that focuses on men’s mental health. In addition, we asked those we’d already interviewed whether they could recommend other people to interview.

The interviews took place in 2020 and 2021.

Speaking with people from all walks of working life

About 37% of the people we interviewed identified as women, and their average age was 41.5 years. Approximately 80% of them identified as Caucasian, 3.5% Black, 3.5% Hispanic, and less than 2% identified as either Indian, Korean American, mixed race or Middle Eastern and North African. About 3.5% chose not to answer.

They held a variety of jobs, including lawyer, professor, touring musician, consultant, teacher, real estate manager, chief technology officer, salesperson, restaurant server, travel agency manager, graphic designer, tester at a manufacturing plant, chemical engineer and bus driver. Several worked in tech fields.

When the employees we studied were trusted and given flexibility, they became better able to do their jobs while also attending to their well-being.

Employees who had lived with their condition for years used what we call “personalized disengagement and engagement strategies” to manage their symptoms. That refers to the fact that people with mental illness respond best to different coping strategies depending on their own preferences and symptoms, instead of using generic techniques they learned from self-help resources or peers.

Examples of personalized disengagement strategies ranged from leaving the workspace to meditate, to taking a walk, to finding a quiet space to cry.

Engagement strategies included immersing more deeply into work and having conversations with co-workers. These coping strategies will sound familiar to most people, including those without any chronic mental health conditions. But workplaces don’t always give employees, regardless of their disability status, the flexibility and self-determination necessary to enact their strategies. In fact, a recent survey by Mind Share Partners found that nearly half of employees didn’t even feel like they could disconnect from their jobs after working hours or while on vacation.

Many employees also told us that they benefited from trust and flexibility in the period after they were diagnosed, when they needed to explore different therapies and treatment techniques.

When managers allow for flexibility, trust workers to do what they need to do to address their symptoms, and convey their compassion, employees with chronic mental illness are more likely to keep their jobs and get their work done.

Affecting most employers

Mental illnesses became more prevalent in the aftermath of COVID-19, especially among adolescents and young adults.

So, if you’re an employer, chances are that our research is relevant to your workforce.

Depression, a common mental illness, cost an estimated US$1 trillion annually in lost productivity as of 2019, according to the World Health Organization.

People with anxiety and mood disorders, including bipolar disorder and major depressive disorder, may periodically have symptoms that interfere with their ability to do their jobs.

And while doing those jobs, they risk being stigmatized by co-workers who may know little about mental illness or be judgmental about people with those chronic conditions. That adds further stress beyond what others would experience at work.

Employee assistance programs could be falling short

In response, many employers offer benefits to help employees cope with mental and emotional problems, such as employee assistance programs, mental-wellness app subscriptions and stigma-reduction efforts.

These one-size-fits-all initiatives can help improve functioning for those with occasional or short-term emotional problems, and they can help improve leaders’ ability to respond to employees’ distress, which is crucial.

But as a whole, they are not enough to solve the problem.

Employee assistance programs, which nearly all big companies offer, have not proved systematically helpful to workers in achieving their goals. One study found that they reduced employees’ absences but did not reduce their work-related distress.

Another study even found that workers who used these programs became more inclined to leave their jobs.

Not missing out on peak performers

Contrary to stereotypes, people with chronic anxiety and depression, such as those we studied, are generally as capable of success in the workplace as anyone else in the right context.

The late actor Carrie Fisher and the Olympic swimmer Michael Phelps are two examples of people with a mental illness who were top achievers in their fields.

If you were a manager, wouldn’t you want people of this caliber working for you? If so, then it’s important to create the right conditions, which many employers fail to do despite their best efforts.

Needing more mental health support

Companies will face increasing pressure to support those with mental illness and other mental health challenges.

Monster’s 2024 State of the Graduate Report found that Gen Z employees – people born between 1996 and 2010 who are currently in their teens and 20s – are increasingly prioritizing support for mental health at work, with 92% of 18- to 24-year-olds surveyed wanting a job where they are comfortable discussing their mental health.

This trend suggests that employers wishing to attract top entry-level talent will need to effectively support mental health, highlighting the importance of continuing to research this issue.

Sherry Thatcher, Regal Distinguished Professor of Management and Entrepreneurship, University of Tennessee and Emily Rosado-Solomon, Assistant Professor of Management, Babson College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: 

• Revealing the AI Knowledge Gap in Marketing, The Cost of Upskilling

• Gen Z is burning out at work more than any other generation — here’s why and what can be done


by External Contributor via Digital Information World

Small Businesses Face Growing Cybercrime Burden, Survey Shows

More than four in five small businesses experienced a cybercrime in 2025, according to the Identity Theft Resource Center’s Business Impact Report. The nonprofit surveyed 662 owners and executives at companies with 500 or fewer employees, including solopreneurs, about incidents over the previous 12 months.

The findings show that 81 percent of respondents suffered a security breach, a data breach, or both. AI-enabled attacks were identified as a root cause in more than 40 percent of incidents, reflecting a shift towards more technologically advanced external threats.

The financial impact was notable, with 37% of affected businesses reporting losses exceeding $500,000.

To manage these costs, 38.3 percent of small business leaders reported raising prices. The report describes this burden as an invisible "cyber tax" that pushes up prices and contributes to inflation. It also notes declining confidence in cybersecurity preparedness and reduced use of basic security measures, despite growing concern about AI-driven risks.

Over one-third of breached small businesses reported losses exceeding $500,000, highlighting severe cybercrime costs.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Online shopping makes it harder to make ethical consumption choices, research says
by Ayaz Khan via Digital Information World

Online shopping makes it harder to make ethical consumption choices, research says

By Caroline Moraes - Professor of Marketing and Consumer Research.

Image: Vitaly Gariev / Unsplash

As the Christmas shopping period begins in earnest following Black Friday and Cyber Monday, new research led by the University of Birmingham and the University of Bristol sheds light on how consumers’ environmental and social concerns fail to translate into ethical purchasing actions during online shopping.

The study, published in the Journal of Business Ethics, explores how competitive shopping environments and marketing tactics can influence moral decision-making among consumers. It reveals that the intense focus on bargains and limited-time offers, such as those prevalent during the festive sales periods, can lead shoppers to discount any concerns they may have about sustainability or fair labour, in pursuit of a deal.

Caroline Moraes, Professor of Marketing and Consumer Research from the Centre for Responsible Business at the University of Birmingham, and co-author of the study, said: "Our findings show that the tactics used by online shops create tensions between ethical intentions and actual behaviour. Many consumers aspire to shop responsibly by buying sustainably and ethically made products. But the design of websites and the urgency and excitement that people experience across online shopping platforms, which increase even further during events like Black Friday and Boxing Day sales, can often override these values.”

The qualitative study examined how self-described ‘ethically oriented’ consumers practise online shopping for clothes.

"Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment." Prof Caroline Moraes, University of Birmingham

Dr Fiona Spotswood, Associate Professor in Marketing and Consumption at the University of Bristol Business School and lead author of the study, said: “We paid attention to how participants navigated existing digital retail websites, how they balance social and environmental information with other product information, and how they perform online shopping routines.”

The paper outlines that ethical decision-making is inhibited by some key characteristics of online shopping, including:

  • Websites designed for passive, habitual scrolling and browsing.
  • Price and aesthetic appeal placed front and centre of products’ selling points, rather than ethical factors.
  • A lack of information about the ethical and environmental sustainability credentials of products.
  • Pressure to make an immediate purchase through limited-time deals.

The research calls for retailers to adopt responsible marketing practices, ensuring transparency and fairness in promotional strategies and including ethical and sustainability criteria in their online shopping websites. It also urges consumers to reflect on the broader social and environmental impact of their purchases, particularly during peak shopping periods when ethical considerations are most likely to be compromised.

Professor Moraes said: “With more of us shopping online than ever before, our research serves as a timely reminder that people do want to be more ethical in their shopping practices, but it can be incredibly hard to act in that way. Businesses should take this into consideration when it comes to their e-commerce offering. Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment.”

Four tips on how to shop more ethically online

  1. Pause before you purchase. If you recognise you have been scrolling/browsing for a long time, take a break and ask yourself if you or the person you are buying for really needs this before hitting purchase.
  2. Search for specific sustainable options. Look directly for eco-friendly products and brands that prioritise fair labour practices and that have this information easily available.
  3. Avoid overbuying. Resist the urge to stockpile just because it is on sale at the click of a button. Someone else might need that item more than you do.
  4. Re-style and/or purchase second-hand. If you are shopping for clothes, consider re-styling what you already have and/or purchase second-hand items that can help you create your very own versions of the new styles you see online.
About the author: Caroline Moraes is Professor of Marketing and Consumer Research at Birmingham Business School, University of Birmingham, UK.

This post was originally published on University of Birmingham and republished with permission.

Read next:

• Study Finds Higher Digital Skills Linked to Greater Privacy and Misinformation Concerns

• Human-AI Collaboration Requires Structured Guidance, Research Shows
by External Contributor via Digital Information World