Tuesday, December 23, 2025

How ChatGPT could change the face of advertising, without you even knowing about it

Nessa Keddo, King's College London
Image: DIW-Aigen

Online adverts are sometimes so personal that they feel eerie. Even as a researcher in this area, I’m slightly startled when I get a message asking if my son still needs school shirts a few hours after browsing for clothes for my children.

Personal messaging is part of a strategy used by advertisers to build a more intense relationship with consumers. It often consists of pop-up adverts or follow-up emails reminding us of all the products we have looked at but not yet purchased.

This is a result of AI’s rapidly developing ability to automate the advertising content we are presented with. And that technology is only going to get more sophisticated.

OpenAI, for example, has hinted that advertising may soon be part of the company’s ChatGPT service (which now has 800 million weekly users). And this could really turbocharge the personal relationship with customers that big brands are desperate for.

ChatGPT already uses some advanced personalisation, making search recommendations based on a user’s search history, chats and other connected apps such as a calendar. So if you have a trip to Barcelona marked in your diary, it will provide you – unprompted – with recommendations of where to eat and what to do when you get there.

In October 2025, the company introduced ChatGPT Atlas, a web browser which can automate purchases. For instance, while you search for beach kit for your trip to Barcelona, it may ask: “Would you like me to create a pre-trip beach essentials list?” and then provide links to products for you to buy.

“Agent mode” takes this a step further. If a browser is open on the page of a swimsuit, a chat box will appear where you can ask specific questions. With the browser history saved, you can log back in and ask: “Can you find that swimsuit I was looking at last week and add it to the basket in a size 14?”

Another new feature (only in the US at the moment), “instant checkout”, is a partnership with Shopify and Etsy which allows users to browse and immediately purchase products without leaving the platform. Retailers pay a small fee on sales, which is how OpenAI monetises this service.

However, only around 2% of all ChatGPT searches are shopping-related, so other means of making money are necessary – which is where full-on incorporated advertising may come in.

One app, lots of ads?

OpenAI’s rapid growth requires heavy investment, and its chief financial officer, Sarah Friar, has said the company is “weighing up an ads model”, as well as recruiting advertising specialists from rivals Meta and Google.

But this will take some time to get right. Some ChatGPT users have already been critical of a shopping feature which they said made them feel like they were being sold to. Clearly a re-design is being considered, as the feature was temporarily removed in December 2025.

So there will continue to be experimentation into how AI can be part of what marketers call the “consumer journey” – the process customers go through before they end up buying something.

Some consumers prefer to use customer reviews and their own research or experience. Others appreciate AI recommendations, but studies suggest that overall, some sense of autonomy is essential for people to truly consider themselves happy customers. It has also been shown that audiences dislike aggressive “retargeting”, where they are continuously bombarded with the same adverts.

So the option of ChatGPT automatically providing product recommendations, summaries and even purchasing items on our behalf might seem very tempting to big brands. But most consumers will still prefer a sense of agency when it comes to spending their money.

This may be why advertisers will work on new ways to blur the lines – where internet search results are blended with undeclared brand messaging and product recommendations. This has long been the case on Chinese platforms such as WeChat, which includes e-commerce, gaming, messaging, calling and social networking – but with advertising at its core.

In fact, platforms in the west seem far behind their East Asian counterparts, where users can do most of their day-to-day tasks using just one app. In the future, a similarly centralised approach may be inevitable elsewhere – as will subliminal advertising, with the huge potential for data collection that a single multi-functional app can provide.

Ultimately, transparency will be minimal and advertising will be more difficult to recognise, which could be hard on vulnerable users – and not the kind of ethically responsible AI that many are hoping for.

Nessa Keddo, Senior Lecturer in Media, Diversity and Technology, King's College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Shrinking AI memory boosts accuracy


by External Contributor via Digital Information World

Shrinking AI memory boosts accuracy

Researchers have developed a new way to compress the memory used by AI models to increase their accuracy in complex tasks or help save significant amounts of energy.

Image: Luke Jones / Unsplash

Experts from the University of Edinburgh and NVIDIA found that large language models (LLMs) using a memory eight times smaller than that of an uncompressed LLM scored better on maths, science and coding tests while spending the same amount of time reasoning.

Alternatively, the method can be used to help LLMs respond to more user queries simultaneously, reducing the amount of power needed per task.

As well as energy savings, experts say the improvements could benefit AI systems that are used to solve complicated tasks or in devices that have slow or limited memory, such as smart home devices and wearable technology.

Problem solving

By “thinking” about more complex hypotheses or exploring more hypotheses concurrently, AI models improve their problem-solving abilities. In practice, this is achieved by generating more reasoning threads – step-by-step logical processes used to solve problems – in text form.

The model’s memory – called the KV cache – stores the portions of the reasoning threads generated so far. It can act as a bottleneck because its size slows down the generation of reasoning outputs during inference – the process by which AI models respond to an input prompt, such as answering a user query.

The more threads there are, and the longer they are, the more memory is required. The larger the memory size used, the longer the LLM takes to retrieve the KV cache from the part of the AI device where it is stored.
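To get a sense of the scale involved, here is a rough, purely illustrative estimate of KV cache size in Python. The model dimensions, thread counts and precision below are hypothetical and are not taken from the study; the point is simply that memory grows linearly with both the number of reasoning threads and their length.

    # Rough, illustrative KV cache estimate (hypothetical model, not from the study).
    # Assumes a 32-layer model with 32 attention heads of dimension 128,
    # storing keys and values in 16-bit precision (2 bytes per value).
    layers, heads, head_dim, bytes_per_value = 32, 32, 128, 2

    def kv_cache_bytes(num_threads: int, tokens_per_thread: int) -> int:
        # Each token stores one key and one value vector per layer and head.
        per_token = layers * heads * head_dim * 2 * bytes_per_value
        return num_threads * tokens_per_thread * per_token

    # 16 parallel reasoning threads of 4,000 tokens each:
    size_gb = kv_cache_bytes(16, 4000) / 1e9
    print(f"Uncompressed: {size_gb:.1f} GB")       # about 33.6 GB
    print(f"Compressed 8x: {size_gb / 8:.1f} GB")  # about 4.2 GB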

Memory compression

To overcome this, the team developed a method to compress the models’ memory – called Dynamic Memory Sparsification (DMS). Instead of keeping every token – the basic unit of data that an AI model processes – DMS decides which ones are important enough to keep and which ones can be deleted.

There is a slight delay between the time when the decisions to delete tokens using sparsification are made and when they are removed. This gives the model a chance to pass on any valuable information from the evicted tokens to preserved ones.

In managing which tokens to keep and which to discard, DMS lets the AI model "think” in more depth or explore more possible solutions without needing extra computer power.
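The following minimal Python sketch illustrates the delayed-eviction idea under stated assumptions: the importance scores, threshold and delay length are all made up, and this is not the authors' implementation of DMS, only a toy illustration of keeping some tokens while removing others after a short delay.

    from collections import deque

    # Toy sketch of delayed token eviction, loosely inspired by the idea behind
    # Dynamic Memory Sparsification. Importance scores, threshold and delay length
    # here are hypothetical; in the actual method the model itself decides which
    # tokens to keep.
    EVICTION_DELAY = 4   # a token marked now is only dropped this many steps later

    cache = []           # tokens currently kept in the "KV cache"
    pending = deque()    # (step_marked, token) pairs awaiting eviction

    def add_token(step_idx, token, importance, threshold=0.5):
        """Add a token; mark it for delayed eviction if its importance is low."""
        cache.append(token)
        if importance < threshold:
            pending.append((step_idx, token))
        # Evict tokens whose delay has elapsed. In the real method, the model can
        # first pass useful information from these tokens on to preserved ones.
        while pending and step_idx - pending[0][0] >= EVICTION_DELAY:
            _, old = pending.popleft()
            cache.remove(old)

    stream = [("the", 0.1), ("answer", 0.9), ("is", 0.2),
              ("42", 0.95), ("maybe", 0.3), ("because", 0.4)]
    for i, (tok, score) in enumerate(stream):
        add_token(i, tok, score)
    print(cache)  # ['answer', 'is', '42', 'maybe', 'because'] ("the" was evicted)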

Models tested

The researchers tested DMS on different versions of the AI models Llama and Qwen and compared their performance to models without compression.

The models’ performance was assessed using standardised tests. It was found that, even with memories compressed to one eighth of their original size, LLMs fully retained their original accuracy on difficult tasks while reasoning faster than non-compressed models.

In the standardised maths test AIME 24, which served as the qualifier for the United States Mathematical Olympiad, the compressed models performed twelve points better on average using the same number of KV cache reads to produce an answer.

For GPQA Diamond – a series of complex questions in biology, chemistry and physics authored by PhD-level experts – the models performed over eight points better.

The models were also tested with LiveCode Bench, which measures how well AI models can write code. The compressed models scored on average ten points better than non-compressed models.

“In a nutshell, our models can reason faster but with the same quality. Hence, for an equivalent time budget for reasoning, they can explore more and longer reasoning threads. This improves their ability to solve complex problems in maths, science, and coding,” said Dr Edoardo Ponti, GAIL Fellow and Lecturer in Natural Language Processing at the University’s School of Informatics.

The findings from this work were peer reviewed and were presented at the prestigious AI conference NeurIPS.

Dr Ponti and his team will continue to investigate how large AI systems represent and remember information, with the aim of making them far more efficient and sustainable, as part of a €1.5 million European Research Council-funded project called AToM-FM.

This article has been republished on DIW with permission from The University of Edinburgh.

Read next:

• Subnational income inequality revealed: Regional successes may hold key to addressing widening gap globally

• Why many Americans avoid negotiating, even when it costs them


by External Contributor via Digital Information World

Monday, December 22, 2025

Subnational income inequality revealed: Regional successes may hold key to addressing widening gap globally

A new study visualises three decades of income inequality data, the most comprehensive worldwide mapping to be done at a subnational level. Confirming worsening income inequality for areas with over 3.6 billion inhabitants, it also reveals hidden ‘bright spots’ where policy may be closing the gap.

Income inequality is one of the most important measures of economic health, social justice and quality of life. More reliably trackable than wealth inequality, which was recently given a gloomy report card by the G20, income inequality is particularly relevant to immediate economic relief, mobility and people’s everyday standard of living.

The new study, from an international team led by Aalto University and Cambridge University, is the first to comprehensively map three decades of income inequality data within 151 nations around the world. Despite finding that income inequality is worsening for half the world’s people, the study also indicates that effective policy may be helping to bridge the gap in regions such as Latin America — ‘bright spots’ in administrative areas that account for around a third of the global population.

‘This research gives us much more detail than the existing datasets, allowing us to zoom in on specific regions within countries,’ says one of the study’s lead authors, Professor Matti Kummu, from Aalto University. ‘This is significant because in many countries national data would tell us that inequality has not changed much over the past decades, while subnational data tells a very different story.’

‘The new data is particularly relevant in light of recent failings around wealth inequality, given that it could help shed light on what policy levers might be pulled to address inequality in the short-term,’ says co-lead author Daniel Chrisendo, now an Assistant Professor at Cambridge University.

‘We have vastly more complete data on income than we do on wealth, which tends to be much harder to uncover and track,’ explains Chrisendo. ‘Especially given that income inequality leads to wealth inequality, it’s critical to tackle both forms — but income inequality is perhaps the easiest to address from an immediate policy perspective.’

The study was published in Nature Sustainability on 5th December, and the new global subnational Gini coefficient (SubNGini) dataset, spanning 1990-2023, is publicly accessible online. Global annual data and trends can be explored visually using the Online Tool, which lets users see how income inequality has played out in regions around the globe and download the data for further analyses.
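For readers unfamiliar with the measure, a Gini coefficient summarises how far an income distribution departs from perfect equality, where 0 means everyone earns the same and 1 means a single person earns everything. A minimal, purely illustrative Python computation (not connected to the SubNGini dataset or the study’s methodology):

    def gini(incomes):
        """Gini coefficient of a list of incomes (0 = equality, 1 = maximal inequality)."""
        xs = sorted(incomes)
        n, total = len(xs), sum(xs)
        # Sorted-order identity for the mean absolute difference between incomes.
        weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
        return weighted / (n * total)

    print(gini([10, 10, 10, 10]))         # 0.0  - everyone earns the same
    print(round(gini([1, 2, 3, 10]), 2))  # 0.44 - income concentrated at the top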

Pinpointing the role of policy

There are many examples where regional efforts have shone more brightly than is revealed by national statistics, say the researchers. However, India, China and Brazil all present interesting case studies that affect large swathes of the global population.

‘With regards to India, relative success in the south is linked to sustained investments in public health, education, infrastructure and economic development that have benefited the local population more broadly,’ says Chrisendo.

Meanwhile, in China, market-oriented reforms and open-door policy have driven economic growth and dramatically reduced poverty since the 1990s. ‘But we can also see how this growth has been uneven, likely due to the Chinese government’s ‘Hukou’ policy limiting rural migrants' access to urban services,’ he explains. In response, the government has implemented various policy measures — such as regional development programs and relaxed Hukou restrictions — to address disparities and support internal migrants.

In Brazil, the mapping shows a potential correlation between reduced inequality and a regional cash transfer programme providing cash to poor families on condition of their children attending school and receiving vaccinations.

‘Overall, being able to visualise these success stories and pinpoint the changing trends in time could help decision-makers see what works,’ says Chrisendo.

Income inequality rising for half the world’s people

Relative income growth for the world’s poorest 40 percent is one of the UN’s Sustainable Development Goals (SDGs), yet the study confirms that the world is collectively off track to meet this goal by 2030. ‘Unfortunately, not only are we quite far from that goal, but the trend for rising inequality is actually stronger than we thought,’ says Kummu.

The researchers are now expanding the data visualisation to encompass a vast range of other socio-economic indicators, from how populations are aging, to life expectancy and time spent in schooling, to improved access to drinking water — with the extensive new datasets slated for public launch in 2026.

As an expert in global food systems and sustainable use of natural resources, Kummu hopes the new datasets can be used to better understand, for example, the linkages between development and environmental changes. The recent study revealed links between more unequal regions and lower ecological diversity, which he would like to explore further.

‘It’s ambitious, but to have subnational, high quality data spanning over three decades is crucial to understand different social responses to environmental changes and vice versa. It gives us the means to start understanding the causalities, not just the correlations — and with that comes the power to make better decisions,’ he concludes.

Matti Kummu, Professor, Department of Built Environment, Aalto University (matti.kummu@aalto.fi)

Daniel Chrisendo, Assistant Professor in Rural and Agricultural Economics, University of Cambridge (dc951@cam.ac.uk)


Editor's Note: This article has been republished on DIW with permission from Aalto University. Original publication date: December 4, 2025.

Read next:

• Why many Americans avoid negotiating, even when it costs them

• How U.S. Employees Report Using AI at Work

by External Contributor via Digital Information World

Sunday, December 21, 2025

Why many Americans avoid negotiating, even when it costs them

Would you pay thousands of dollars more for a car just to skip the negotiation process? According to new research by David Hunsaker, clinical associate professor of management at the IU Kelley School of Business Indianapolis, many Americans would—and do.

Image: Cytonn Photography / Unsplash

How common is this mindset?

“Across five studies, we found that 95% of individuals choose not to negotiate up to 51% of the time,” Hunsaker explained. This means negotiation avoidance is not the exception but the norm.

The research, published in Negotiation and Conflict Management Research, was conducted by Hunsaker in collaboration with Hong Zhang of Leuphana University and Alice J. Lee of Cornell University. Their work explores why people avoid negotiating, what it costs them, and how organizations can respond.

Negotiation avoidance is the norm, not the exception

This study spans five large-scale experiments exploring why people avoid negotiating and what it costs them. The research examines:

  • How often individuals forgo negotiation opportunities
  • The Threshold for Negotiation Initiation (TFNI)—the minimum savings people need to justify negotiating
  • The Willingness to Pay to Avoid Negotiation (WTP-AN)—how much extra people will pay to skip negotiating
  • Whether interventions, such as utility comparisons or social norm prompts, can reduce avoidance

“Our work focuses on how much individuals are willing to sacrifice, or even pay, to avoid negotiating altogether,” David explained.

The idea for this research emerged at a negotiation conference in Israel. Hunsaker and his colleagues visited a market where bargaining is expected, yet none of them negotiated. “We asked ourselves: Why don’t people negotiate even when the opportunity is clear?” Hunsaker recalled.

“We framed this research around a simple question: When you have the chance to negotiate, will you?” Hunsaker said. “Even in traditional contexts like buying a car, companies now advertise ‘no-haggle pricing’ as a selling point. Businesses can raise prices by 5% to 11%, and more than half of consumers will pay it.”

The research also revealed that people judge negotiation value by percentage saved, not the absolute dollar amount.

“On average, participants needed savings of 21% to 36% of an item’s price before considering negotiation worthwhile,” Hunsaker noted. “This shows that decisions are driven by perceived proportional value—not absolute dollars.”

Hunsaker hopes the findings spark awareness. “Negotiation aversion is real, but at key points in your career, negotiation skills matter,” he emphasized. “Recognizing these tendencies is the first step toward overcoming them.”

Negotiation tips from the expert

To help you become a better negotiator, here are three tips from Dr. Hunsaker:

Preparation is everything

“Most of the work happens before the conversation begins,” Hunsaker said. “Information is power. Know your options and be honest about whether you have strong alternatives. If you don’t, you’ll enter with less leverage. Many people overlook this step—understand your position before you negotiate.”

Start higher than your target

“This is hard for a lot of people because you don’t want to sound selfish, but there needs to be room for concessions. If you don’t make that room, the other party will become upset. Start with an offer better than your goal and it will help the other party feel more satisfied with the deal,” Hunsaker shared.

Focus on relationships, not victory

“It’s about developing strong relationships. People that go into negotiation with a winning mindset end up burning bridges or hurting feelings. The people you most often negotiate with will be repeat customers or longtime clients. If you burn those bridges, you will miss out on deals later. Focus on doing well but also focus on listening to the other party and creating a foundation of trust,” Hunsaker said.

David Hunsaker is a clinical associate professor of management at the Kelley School of Business Indianapolis. He joined the faculty in 2024 and specializes in organizational behavior and negotiation.

This article was first published on the Indiana University Kelley School of Business website on December 16, 2025. Republished with permission.

Read next:

• Most Data Centers Are Located Outside Recommended Temperature Ranges

• How U.S. Employees Report Using AI at Work


by External Contributor via Digital Information World

Saturday, December 20, 2025

How U.S. Employees Report Using AI at Work

As workplaces increasingly integrate emerging technologies, understanding how people actually employ these tools provides crucial insight into their practical value and evolving role in professional settings.

A Gallup workforce survey conducted in 2025 found that employees who used artificial intelligence (AI) at work reported using it for information-related and idea-generation purposes. Among U.S. employees surveyed in the second quarter of 2025 who said they used AI at least yearly, 42% reported using it to consolidate information, while 41% said they used it to generate ideas. Another 36% reported using AI to support learning new things. Gallup noted that these reported uses did not change meaningfully from its initial measurement in the second quarter of 2024.

When asked about the types of AI tools they used in their role, more than six in ten AI-using employees reported using chatbots or virtual assistants. AI-powered editing and writing tools were the next most commonly reported, followed by AI coding assistants. Use of more specialized tools, including those designed for data science or analytics, was less common overall but more frequently reported by employees who used AI at work more often.

Reported uses of AI at work (% of AI-using employees selecting each)
To consolidate information or data 42%
To generate ideas 41%
To learn new things 36%
To automate basic tasks 34%
To identify problems 20%
To interact/transact with customers 13%
To collaborate with coworkers 11%
Other 11%
To make predictions 9%
To set up, operate, or monitor complex equipment or devices 8%

AI Tools Employees Use in Their Roles

AI tool (% of AI-using employees selecting each)
Chatbots or virtual assistants 61%
AI writing and editing tools 36%
AI coding assistants 14%
Image, video, or audio generators 13%
Data science or analytics tools 13%
Task, scheduling, or project management tools 13%
Meeting assistants or transcription tools 12%
Presentation or slide deck tools 10%
AI-powered search or research tools 10%
Email or communication management tools 9%
Knowledge or information management tools 8%
Automation or robotic process automation (RPA) tools 5%
Other 4%

Gallup also reported that in the third quarter of 2025, 45% of U.S. employees said they used AI at work at least a few times a year, while daily use remained limited to about 10% of the workforce.

When tools make it easier to learn, solve problems, or work more effectively, they earn their place in daily practice.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, fact-checked and published by humans.

Read next:

• Most Data Centers Are Located Outside Recommended Temperature Ranges

• Resolve to stop punching the clock: Why you might be able to change when and how long you work
by Asim BN via Digital Information World

Most Data Centers Are Located Outside Recommended Temperature Ranges

Data center placement influences electricity demand and cooling requirements, which are documented factors in energy system planning.

An analysis by Rest of World found that a majority of the world’s operational data centers are located in climates outside the industry’s recommended temperature range.

The analysis combined climate records from the Copernicus Climate Data Store with facility location data from Data Center Map, covering 8,808 operational data centers worldwide as of October 2025.

Industry standards recommend average operating temperatures between 18°C and 27°C. Nearly 7,000 data centers were located outside that range, with most situated in regions cooler than recommended. About 600 data centers, representing less than 10% of the total, were located in areas with average annual temperatures above 27°C. In 21 countries, including Nigeria, Singapore, Thailand, and the United Arab Emirates, all operational data centers were located in regions exceeding the recommended temperature range.

The findings draw attention to the operational strain associated with cooling data centers in hotter climates.


Image: DIW-Aigen

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Resolve to stop punching the clock: Why you might be able to change when and how long you work
by Ayaz Khan via Digital Information World

Friday, December 19, 2025

Resolve to stop punching the clock: Why you might be able to change when and how long you work


Image: Luis Villasmil / Unsplash

About 1 in 3 Americans make at least one New Year’s resolution, according to Pew Research. While most of these vows focus on weight loss, fitness and other health-related goals, many fall into a distinct category: work.

Work-related New Year’s resolutions tend to focus on someone’s current job and career, whether to find a new job or, if the timing and conditions are right, whether to embark on a new career path.

We’re an organizational psychologist and a philosopher who have teamed up to study why people work – and what they give up for it. We believe that there is good reason to consider concerns that apply to many if not most professionals: how much work to do and when to get it done, as well as how to make sure your work doesn’t harm your physical and mental health – while attaining some semblance of work-life balance.

How we got here

Most Americans consider the 40-hour workweek, which calls for employees being on the job from nine to five, to be a standard schedule.

This ubiquitous notion is the basis of a hit Dolly Parton song and 1980 comedy film, “9 to 5,” in which the country music star had a starring role. Microsoft Outlook calendars by default shade those hours with a different color than the rest of the day.

This schedule didn’t always reign supreme.

Prior to the Great Depression, which lasted from 1929-1941, 6-day workweeks were the norm. In most industries, U.S. workers got Sundays off so they could go to church. Eventually, it became customary for employees to get half of Saturday off too.

Legislation that President Franklin D. Roosevelt signed into law as part of his sweeping New Deal reforms helped establish the 40-hour workweek as we know it today. Labor unions had long advocated for this abridged schedule, and their activism helped crystallize it across diverse occupations.

Despite many changes in technology as well as when and how work gets done, these hours have had a surprising amount of staying power.

Americans work longer hours

In general, workers in richer countries tend to work fewer hours. However, in the U.S. today, people work more on average than in most other wealthy countries.

For many Americans, this is not so much a choice as it is part of an entrenched working culture.

There are many factors that can interfere with thriving at work, including boredom, an abusive boss or an absence of meaning and purpose. In any of those cases, it’s worth asking whether the time spent at work is worth it. Only 1 in 3 employed Americans say that they are thriving.

What’s more, employee engagement is at a 10-year low. For both engaged and disengaged employees, burnout increased as the number of work hours rose. People who were working more than 45 hours per week were at greatest risk for burnout, according to Gallup.

However, the average number of hours Americans spend working has declined from 44 hours and 6 minutes in 2019 to just under 43 hours per week in 2024. The reduction is sharper for younger employees.

We think this could be a sign that younger Americans are pushing back after years of being pressured to embrace a “hustle culture” in which people brag about working 80 and even 100 hours per week.

Critiques of ‘hustle culture’ are becoming more common.

Fight against a pervasive notion

Anne-Marie Slaughter, a lawyer and political scientist who wears many hats, coined the term “time macho” more than a decade ago to convey the notion that someone who puts in longer hours at the office automatically will outperform their colleagues.

Another term, “face time,” describes the time that we are seen by others doing our work. In some workplaces, the quantity of an employee’s face time is treated as a measure of whether they are dependable – or uncommitted.

It can be easy to jump to the conclusion that putting in more hours at the office automatically boosts an employee’s performance. However, researchers have found that productivity decreases with the number of hours worked due to fatigue.

Even those with the luxury to choose how much time they devote to work sometimes presume that they need to clock as many hours as possible to demonstrate their commitment to their jobs.

To be sure, for a significant amount of the workforce, there is no choice about how much to work because that time is dictated, whether by employers, the needs of the job or the growing necessity to work multiple jobs to make ends meet.

4-day workweek experiments

One way to shave hours off the workweek is to get more days off.

A multinational working group has examined experiments with a four-day workweek: an arrangement in which people work 80% of the time – 32 hours over four days – while getting paid the same as when they worked a standard 40-hour week. Following an initial pilot in the U.S. and Ireland in 2022, the working group has expanded to six continents. The researchers consistently found that employers and employees alike thrive in this setup and that their work didn’t suffer.

Most of those employees, who ranged from government workers to technology professionals, got Friday off. Shifting to having a three-day weekend meant that employees had more time to take care of themselves and their families. Productivity and performance metrics remained high.

Waiting for technology to take a load off

Many employment experts wonder whether advances in artificial intelligence will reduce the number of hours that Americans work.

Might AI relieve us all of the tasks we dread doing, leaving us only with the work we want to do – and which, presumably, would be worth spending time on? That does sound great to both of us.

But there’s no guarantee that this will be the case.

We think the likeliest scenario is one in which the advantages of AI are unevenly distributed among people who work for a living. Economist John Maynard Keynes predicted almost a century ago that “technological unemployment” would lead to 15-hour workweeks by 2030. As that year approaches, it’s become clear that he got that wrong.

Researchers have found that for every working hour that technology saves us, it increases our work intensity. That means work becomes more stressful and expectations regarding productivity rise.

Deciding when and how much time to work

Many adults spend so much time working that they have few waking hours left for fitness, relationships, new hobbies or anything else.

If you have a choice in the matter of when and how much you work, should you choose differently?

Even questioning whether you should stick to the 40-hour workweek is a luxury, but it’s well worth considering changing your work routines as a new year gets underway if that’s a possibility for you. To get buy-in from employers, consider demonstrating how you will still deliver your core work within your desired time frame.

And, if you are fortunate enough to be able to choose to work less or work differently, perhaps you can pass it on: You probably have the power and privilege to influence the working hours of others you employ or supervise.

Jennifer Tosti-Kharas, Professor of Management, Babson College and Christopher Wong Michaelson, Professor of Ethics and Business Law, University of St. Thomas

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• What the hyperproduction of AI slop is doing to science

• Task scams are up 485% in 2025 and job seekers are losing millions


by External Contributor via Digital Information World