"Mr Branding" is an RSS-based blog for everything related to website branding and website design. It collects its posts from many sites to keep readers up to date with the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Written by: Joe Stafford - The University of Manchester. Reviewed by Asim BN.
Image: Nano banana
Researchers at The University of Manchester are calling for stronger, coordinated partnerships to tackle modern slavery and human trafficking, warning that gaps between organisations risk leaving victims and survivors without consistent protection and support.
Their appeal comes in a new review commissioned by Greater Manchester Combined Authority (GMCA), which examines how organisations across the city region work together to identify, safeguard and support people affected by modern slavery and human trafficking. The review focuses on partnerships involving local authorities, statutory services, law enforcement, housing providers and voluntary and community sector organisations.
The authors argue that tackling modern slavery depends on robust, long-term collaboration rather than ad hoc arrangements. While organisations across Greater Manchester have developed innovative partnership approaches, the review finds that these are not always embedded consistently across the system. Among the review’s key recommendations, the authors are calling for:
- Clearer strategic governance to strengthen modern slavery and human trafficking partnerships at a Greater Manchester-wide level.
- More consistent roles and responsibilities across organisations, so victims/survivors do not fall through gaps between services.
- Improved information-sharing and referral pathways, ensuring concerns are acted on quickly and safely.
- Sustainable funding and resources to support partnership working, rather than reliance on short-term arrangements.
- Stronger links between safeguarding, housing, immigration advice and criminal justice responses, reflecting the needs of victims.
The review suggests that where partnerships are well established, outcomes for victims are more likely to be improved. Such embedded collaboration enables earlier identification of exploitation, better safeguarding responses and coordinated support to help individuals recover and rebuild their lives. Strong partnerships also support disruption of criminal activity by improving intelligence-sharing and joint working.
However, the authors highlight challenges which can weaken partnership arrangements including variations in local practice, capacity pressures and funding uncertainty. Frontline professionals reported that without clear structures and shared accountability, collaboration often relies on personal relationships, making it fragile and difficult to sustain.
The researchers also note that victims and survivors of modern slavery often face overlapping vulnerabilities including insecure housing, mental ill-health and immigration insecurity. Without joined-up working across sectors, these complexities can delay support and increase the risk of re-exploitation.
The authors stress that the findings have national relevance: Greater Manchester's relatively cohesive modern slavery partnership approach offers lessons for the rest of the UK. As awareness of modern slavery grows, public bodies across the country face pressure to demonstrate high-quality partnership responses. The review positions Greater Manchester as a potential leader, but cautions that this requires investment in governance, coordination and shared learning.
“This review shows that partnership working is not optional when tackling modern slavery and human trafficking - it is essential. The needs of victims and survivors cut across organisational boundaries, and responses must do the same. Our recommendations set out how partners across Greater Manchester can strengthen their approach and provide protection and support.” - Dr Jon Davies.
Unless you’ve been off-grid in the last five years, chances are you’ve watched the meteoric rise of artificial intelligence and its usage by humans unfold online right in front of you. What’s more, you’ve probably gotten curious, logged on to an AI model, and asked it a question or two (yes, we know you did, so don’t deny it). Ever since ChatGPT’s debut in late 2022, people have started to depend on artificial intelligence to answer their questions and help navigate life, from putting together the perfect pasta recipe to more serious topics, such as creating a will or making investment decisions. And as more companies like Google and Meta have introduced their own AI models, this trend of human reliance on AI has only increased.
This isn’t necessarily a negative, as AI models make research incredibly easy and efficient compared to sorting through the various articles a search algorithm throws at you to find an answer. However, while students might use AI to cheat on exams and lawyers have used it to (incorrectly) cite case law, the larger concern isn’t the use of artificial intelligence, but people’s trust and reliance on this technology.
Although you might just want to use AI to figure out why the cat keeps digging in the litter box, things start to get a little dicey if you take the information AI offers without fact-checking the info or accounting for its biases. And when people start to use AI as a financial advisor instead of a human one, they run the risk of receiving poor advice that could lead them to financial hardship in the future.
More than eight out of ten Americans now trust artificial intelligence to help guide their financial decisions, according to a BestMoney survey.
Pros of Using AI with Finances
Artificial intelligence excels when it comes to sorting information and relaying that information to the user in a different way. In terms of general financial questions, ChatGPT and other models do a good job of breaking down those confusing finance concepts and explaining them in layman’s terms. For example, someone may not understand how budgeting helps track monthly expenses and control spending. AI can explain it clearly using practical, easy-to-follow examples.
AI can also help with straightforward financial input. If someone with little spreadsheet experience wants to use one for their personal budget, AI can take the numbers and format them into a ready-to-go spreadsheet. What’s more, AI could explain some functions and formulas that they might find useful in the future, expanding their knowledge base as well as their ability to use the program for their budgeting needs.
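The spreadsheet help described above comes down to simple arithmetic and formatting. As a minimal sketch of the kind of budget table an AI assistant might generate from raw numbers, here is a short Python snippet; the category names and amounts are invented purely for illustration.

```python
# Hypothetical monthly expenses, made up for illustration.
monthly_expenses = {
    "Rent": 1200.00,
    "Groceries": 350.50,
    "Transport": 90.00,
    "Subscriptions": 45.99,
}

def format_budget(expenses: dict) -> str:
    """Lay out expense categories in aligned columns with a running total,
    similar to what a ready-to-go budget spreadsheet would show."""
    lines = [f"{name:<15}${amount:>10.2f}" for name, amount in expenses.items()]
    total = sum(expenses.values())
    lines.append("-" * 26)
    lines.append(f"{'Total':<15}${total:>10.2f}")
    return "\n".join(lines)

print(format_budget(monthly_expenses))
```

From here, a spreadsheet formula like `=SUM(B2:B5)` does the same job as the `sum()` call, which is exactly the kind of function an AI assistant can explain alongside the formatted table.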
Cons of Using AI with Finances
For encyclopedic and financial information, AI is as good as Google. But when it comes to the important questions, like those that impact your finances, you're better off taking those to a pro.
The biggest issue with AI is that it cannot generate something new, as the algorithm picks up information from the internet and sorts through it to answer questions. This may not matter for questions about recipes, but it can pose an issue for questions that require critical thinking. No matter how real AI might seem, it isn’t human. So while it can give you options and advice, it doesn’t have the discernment to tell you which options are good or bad or best for you.
Another pitfall of using AI with finances is that you may not know what sources the information came from. For example, if AI is telling you that it's a good time to invest in a particular company, you should wonder why and what sources it used to come up with that information. If it’s pulling positive information from the company’s website, the bias can skew AI’s output. Even worse, AI could hallucinate and give a totally made-up answer. So checking sources and information is key. Even slightly incorrect or inaccurate information can wreak havoc on your financial future if you make the wrong decision.
When it comes to your finances, you usually have to include a lot of personal information to get an answer tailored to you, but you’re relying on AI companies to remove your personal data without any way of checking or guaranteeing that. Uploading your personal information can put you at risk, and without adding in your personal information, it’s difficult to get the specific answers you need to make sound financial decisions that account for your unique circumstances.
Best Practices of Using AI for Finances
Artificial intelligence is not evil; it's a tool. And if you use AI, it's important to develop good habits and avoid becoming overly dependent on it for every answer, especially when it comes to important life decisions. Below are some dos and don'ts to keep in mind when using AI for help.
Do double-check… and then triple-check. AI is known to hallucinate, give out false information, and draw on biased sources. By double-checking answers, information, and sources, you can avoid operating on incorrect information. Don't just ask for information; also ask AI for the sources it used, so you can check whether they're reputable and reliable.
Don’t use AI for personal matters. ChatGPT is good at explaining concepts from a bird’s-eye view, but things get tricky when you ask it to apply that reasoning to your personal life. You can ask AI to explain how the stock market works, but don’t ask it to take the current climate of the stock market and upload your personal financial information to get back advice on what you should invest in.
Do use a human backup, especially on the financial questions. There’s a reason why AI hasn’t taken over the financial advisor sector yet. Artificial intelligence can’t think critically, but humans can! Take advantage of human advice and compare it to what AI says, remembering that both can make mistakes, but only one has original thoughts and ideas.
Conclusion
People talk about financial literacy and media literacy, but it’s clear that the world is going to need to develop AI literacy skills as well. Artificial intelligence is an amazing invention that has many possibilities with just as many limitations. By building good AI literacy habits, people can continue to use AI as a tool for many questions they might have, even those about finances. By remembering to double-check information, keep questions general, and ask a human financial professional about questions with big consequences, AI and humans might just develop a healthy relationship with each other.
Generative AI was trained on centuries of art and writing produced by humans.
But scientists and critics have wondered what would happen once AI became widely adopted and started training on its own outputs.
A new study points to some answers.
In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.
The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.
The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.
For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.
After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.
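The loop the researchers ran can be caricatured in a few lines of code. In this toy simulation, both conversion steps (text-to-image and image-to-text) are collapsed into a single hypothetical `lossy_translate` function that keeps only words from a small "familiar" vocabulary; the vocabulary and function are illustrative assumptions, not the study's actual models, but they reproduce the same qualitative behavior: different starting prompts quickly collapse to a short, generic fixed point.

```python
# Words our toy "model" considers familiar; everything else is dropped,
# mimicking the pull toward generic cityscapes and landscapes.
GENERIC_VOCABULARY = {"a", "the", "building", "city", "landscape", "sky", "light"}

def lossy_translate(prompt: str) -> str:
    """One image->caption (or caption->image) step: keep only familiar words."""
    seen = dict.fromkeys(prompt.lower().split())  # de-duplicate, keep order
    kept = [w for w in seen if w in GENERIC_VOCABULARY]
    return " ".join(kept) if kept else "a city landscape"

def run_loop(prompt: str, steps: int = 10) -> str:
    """Iterate the lossy conversion autonomously, as in the study's setup."""
    for _ in range(steps):
        prompt = lossy_translate(prompt)
    return prompt

start = "The Prime Minister pored over strategy documents in the city"
print(run_loop(start))  # the Prime Minister is gone after the first pass
```

Within a step or two, every distinctive word (the Prime Minister, the strategy documents) has vanished and the output stops changing, which is the toy analogue of the bland interior the researchers ended up with.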
The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. The findings even suggest that AI systems currently operate this way by default.
The familiar is the default
This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.
But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.
This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.
The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.
Cultural stagnation or acceleration?
For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.
What has been missing from this debate is empirical evidence showing where homogenization actually begins.
The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.
This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.
Retraining would amplify this effect. But it is not its source.
This is no moral panic
Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.
But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”
The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.
This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.
They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.
In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norm. Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.
This pattern has already emerged in the real world: One study found that AI-generated lesson plans showed the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what's typical rather than what's unique or creative.
Lost in translation
Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.
In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.
But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.
The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”
If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.
The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.
Cultural stagnation is no longer speculation. It’s already happening.
Nine in 10 Americans use AI on their phone — but only 38% actually realize that they do.
Image: DIW-Aigen
A survey of 2,000 adults explored how AI is used every day, finding that many were unaware of its presence in their everyday lives — like weather alerts (42%), call screening (35%), autocorrect (34%), voice assistants (26%) and auto brightness (25%).
For many, AI-powered camera features like night mode (19%) and photo memory slideshows (20%) are essential to capturing and enjoying their photos.
Conducted by Talker Research for Samsung, the survey found that half of respondents (51%) don’t think they use AI on their phone, yet 86% reported using common AI tools daily when prompted with a list of features.
When it comes to their phone in general, one in six use their phone for at least 10 different career-related tasks a day, while more than twice as many (38%) complete a similar number of personal tasks on their device daily.
More than half of Americans primarily use their phone for tasks related to their job more than any other device (55%) — especially Gen Z (74%).
As a result, 47% think their phone is essential for their career, with younger respondents in millennial (65%) and Gen Z (62%) age ranges being the most likely to agree.
Similarly, six in 10 use their phone for staying organized more than they use other devices.
While the average person only uses half of the apps on their phone regularly, 57% are confident that they’d be able to describe what every feature on their phone does, whether or not they use it.
Even with their phones always on deck, there’s much to learn. A third of respondents discover new features on their phone at least once a month (34%).
When it comes to AI usage, some of the more specific uses include practical assistance. One respondent said that they use AI to get help with ideas, while another said they use it for organizing tasks better. A third of respondents use AI for job applications.
Some respondents are using AI for more creative purposes, like teaching them how to cook, helping them write lyrics or asking random questions for entertaining conversations.
Of those who didn’t initially think they used AI regularly, but then learned that they do, a quarter said learning that it’s already a part of their everyday life made their opinion of AI more favorable.
Americans are mostly interested in using AI for helping them save time on tasks (28%), while others want it to help make tasks easier (27%), provide instant solutions (23%) and to improve their skills or learn new things (22%).
As tech continues to evolve, the average respondent thinks we have about three years left of traditional phone use before AI changes how we interact with our devices; one in five think we have less than a year.
When asked about features that they’d like to see from their phone in the next decade, some want even more advanced AI capabilities like “health monitoring, detecting vital signs and providing personalized wellness insights and alerts” or “anticipating my thoughts and auto-inserting them without me having to type.”
Others have even greater aspirations for their phone, with one wanting it to drive their car and another hoping it can charge itself without needing electricity.
New Features People Want Their Phone To Be Capable Of In The Next Decade
“Knowing its owner by sense of touch and emotion and alerts to things we deem necessary.”
“Anticipating my thoughts and auto-inserting them without me having to type.”
“Understand your long-term preferences and goals.”
“I hope phones in the next 10 years will be able to fully project and interact with 3D holograms, allowing me to have virtual meetings, watch movies or even manipulate objects in 3D space without needing any extra devices.”
“I hope my phone will be able to last an entire week on a single charge while staying just as fast and powerful.”
“I’m hoping my phone will have advanced AI-powered health monitoring, detecting vital signs and providing personalized wellness insights and alerts.”
“I hope my phone will be capable of real time language translation during phone calls within the next 10 years.”
“I’m imagining something like: my phone can listen contextually to conversations (with privacy safeguards, of course) and instantly give me helpful suggestions.”
“To charge without needing electricity.”
“To take control of my finances and monthly bill paying.”
“Use eye controls to control the movement of the screen.”
“Calling for help in certain emergencies with a certain safe word.”
People have become used to living with AI fairly quickly. ChatGPT is barely three years old, but has changed the way many of us communicate or deal with large amounts of information.
It has also led to serious concerns about jobs. If machines become better than people at reading complex legal texts, translating languages, or presenting arguments, won't those old-fashioned human employees become irrelevant? Surely mass unemployment is on the horizon?
Yet, when we look at the big numbers of the economy, this is not what’s happening.
Unemployment in the EU is at a historical low of around 6%, half the level of ten years ago. In the UK, it is even lower, at 5.1%, roughly the level of the booming early 2000s, and it is even lower again (4.4%) in the US.
The reason why there are still so many jobs is that while technology does make some human enterprise obsolete, it also creates new kinds of work to be done.
It’s happened before. In 1800 for example, around a third of British workers were farmers. Now the proportion working in agriculture is around 1%.
Or more recently, after the first ATM in the world was unveiled by Barclays in London in 1967, there were fears that staff at high street bank branches would disappear.
The opposite turned out to be the case. In the US, over the 30-year period of ATM growth, the number of bank tellers actually increased by 10%. ATMs made it cheaper to open bank branches (because they needed fewer tellers) and more communities gained access to financial services.
Only now, with a bank on every phone, is the number of high street bank staff in steep decline.
An imposition?
But yes, AI will take away some jobs. A third of Americans worry they will lose theirs to AI, and many of them will be right.
But since the industrial revolution, the world has seen a flow of innovations, sustaining an unprecedented exponential economic growth.
AI, like the computer, the internet, the railways, or electric appliances, is a slow revolution. It will gradually change habits, but in doing so, provide opportunities for new businesses to emerge.
And just as there has been no immediate AI boom in economic growth, there is no immediate shift in employment. What we see instead is largely firms using AI as an excuse for standard job-cutting exercises. This leads to a different question: how will AI change how meaningful our jobs are and how much money we earn?
With technology, it can go either way.
Bank tellers became more valuable with the arrival of ATMs because instead of just counting money, they could offer advice. And in 2016, Geoff Hinton, a major figure in the development of AI, recommended that the world “should stop training radiologists” because robots were getting better than humans at analysing images.
Ten years later, demand for radiologists in the US is at a record high. Using AI to analyse images has made the job more valuable, not less, because radiologists can treat more patients (most of whom probably want to deal with a human).
So as a worker, what you want to find is a job where the machines make you more productive – not one where you become a servant to the machines.
Any inequality?
Another question raised by AI is whether it will reduce or increase the inequality between workers.
At first, many thought that allowing everyone to access an AI assistant with skills in processing information or clear communication would decrease earning inequality. But other recent research found the opposite, with highly skilled entrepreneurs gaining the most from having access to AI support.
One reason for this is that taking advice is itself a skill. In my own research with colleagues, we found that giving chess players top-quality advice does little to close the gap between the best and the worst – because lower-ability players were less likely to follow high-quality advice.
And perhaps that’s the biggest risk AI brings. That some people benefit from it much more than others.
In that situation, there might be one group which uses AI to manage their everyday lives, but find themselves stuck in low-productivity jobs with no prospect of a decent salary. And another smaller group of privileged, well-educated workers who thrive by controlling the machines and the wealth they create.
Every technological revolution in history has made the world richer, healthier and more comfortable. But transitions are always hard. What matters next is how societies can help everyone to be the boss of the machines – not their servants.
By Zeke Hughes, Managing Director – QuestBrand, The Harris Poll
Brand Momentum is the single clearest indicator of your brand's future success. Here is why, and how to use it.
For the past decade, marketers have looked to the usual suspects of brand equity, customer familiarity, consideration, and awareness to give us a clear picture of a brand’s health. While these are still solid metrics that tell you where your brand is positioned in the market, to fully understand where it’s headed, you need to track its momentum.
Why is tracking Brand Momentum so important?
Brand Momentum is critical because it tells you in real-time if customers find your brand relevant, on the rise, stagnating, or slipping. According to our HarrisQuest research, this is one of your earliest indicators of brand health. It tells you what’s working right now, where you can afford to double down to accelerate growth, or where you need to course correct. It predicts not only loyalty, but revenue, share of wallet, and even market valuation.
How is Brand Momentum different from Brand Equity?
Brand Equity tells you how durable and resilient your reputation is. There’s no denying it is a critical metric to track. But your Brand Momentum score tells you if your reputation is losing or gaining energy. This distinction is important as your brand may have high awareness and strong legacy equity, while quietly losing relevance.
“At its core Brand Momentum answers a simple question: based on what people have seen, read, or heard, is this brand on the rise, holding steady, or slipping?”
Because Brand Momentum is rooted in lived experience – media coverage, product launches, cultural moments, social conversation – it tends to respond faster than lagging indicators like purchase intent or loyalty.
Why does Brand Momentum matter now more than ever before?
Firstly, Brand Momentum has taken center stage because brand perception is now shaped in real time rather than by quarterly cycles. Your brand is subject to the whims of social media, where sentiment can turn on a dime.
“Reports tell you what’s happened; Brand Momentum tells you what’s on the horizon.”
Secondly, younger audiences have shortened the brand-forgiveness window. Gen Z and Gen Alpha are reassessing brands constantly, rewarding brands that seem authentic and aligned with their values and punishing those that don't.
Brands that lose that trust don't just plateau; they fall behind. Brand decisions no longer live in the boardroom and industry papers; they spill out onto social media and public conversation. Layoffs, supply issues, or substandard service are amplified and have external momentum repercussions.
Brand Momentum is emotional before it’s rational
Across industries, the brands gaining momentum aren't necessarily those that are the cheapest, biggest, or most innovative, though that is sometimes the case. The brands that see the highest uptick in Brand Momentum are those that consistently trigger the right emotional responses for their category.
For example, in automotive, trust and dependability still dominate. As EVs continue to disrupt the market, brands that signal reliability are rewarded with momentum. Toyota has mastered the art of trust: its brand narrative has remained focused on reliability for decades, reinforced by engineering quality, low recall rates, and long vehicle lifespans.
Images: The Harris Poll
In streaming, value and content relevance matter more than content quantity. Apple TV’s momentum with millennials in 2025 was driven by a clear shift from shows that were “prestige but niche” to shows that were “reliably worth paying for” – anchored in content consistency and cultural relevance. This reinforces the platform as one that consistently delivers quality. And, for Millennials, Brand Momentum builds when subscriptions feel justified month after month, not just for one tentpole release.
When it comes to e-commerce, where competition is fierce, momentum hinges on trust. Ultra-low pricing drives short-term attention, but momentum doesn't follow when confidence in quality and safety isn't strong enough.
This is why performance marketing alone rarely sustains momentum. Visibility without that emotional connection spikes sales but doesn’t build a trajectory.
How leaders should use Brand Momentum
Tracking Brand Momentum is only useful if it serves to change behavior. The most effective teams use it in three ways:
1. An early warning system
Sudden drops or dips in momentum flag reputational risks before they become a full-blown crisis. Equally, sudden upticks flag what's working and should be amplified.
2. A prioritization tool
By linking momentum changes to specific actions (partnerships, campaigns, messaging, and launches), teams can immediately see what's landing and what isn't.
3. A strategic compass
Momentum clarifies whether a brand is culturally aligned or drifting. It forces leadership to confront not just performance, but relevance. Good performance in one quarter shouldn’t be celebrated in isolation.
Image: For illustrative purposes. Credit: Kristen Morith / Unsplash
I grew up in rural Colorado, deep in the mountains, and I can still remember the first time I visited Denver in the early 2000s. The city sits on the plain, skyscrapers rising and buildings extending far into the distance. Except, as we drove out of the mountains, I could barely see the city – the entire plain was covered in a brown, hazy cloud.
That brown, hazy cloud was mostly made of ozone, a lung-irritating gas that reduces lung function, causes inflammation and respiratory symptoms such as coughing, and can trigger asthma attacks.
Denver still has air pollution problems, due in part to its geography, which creates temperature inversions that can hold pollution near the ground. But since 1990, ozone has decreased 18% across the U.S., reducing the smog that choked many cities in the 1960s and 1970s. The concentration of tiny dustlike particles of air pollution called PM2.5 has also decreased, by 37% since 2000.
These decreases occurred largely because of one of the most successful public health policies ever implemented by the United States: the Clean Air Act, first passed in 1970. The Clean Air Act regulates air pollution emissions and authorizes the Environmental Protection Agency to set air quality standards for the nation.
For years, when the Environmental Protection Agency assessed the economic impact of new regulations, it weighed both the health costs for Americans and the compliance costs for businesses. The Trump administration is now planning to drop half of that calculation – the monetary health benefits of reducing both ozone and PM2.5 – when weighing the economic impact of regulating sources of air pollution.
I am an environmental epidemiologist, and one of the things I study is people’s exposure to air pollution and how it affects health. Measuring the impact of air quality policies – including quantifying how much money is saved in health care costs when people are exposed to less air pollution – is important because it helps policymakers determine if the benefits of a regulation are worth the costs.
What air pollution does to your body
Breathing in air pollution like ozone and PM2.5 harms nearly every major system in the human body.
It is particularly hard on the cardiovascular, respiratory and neurological systems. Numerous studies have found that PM2.5 exposure is associated with increased death from cardiovascular diseases like coronary heart disease. Even short-term exposure to either PM2.5 or ozone can increase hospitalizations for heart attacks and strokes.
In the respiratory system, PM2.5 exposure is associated with a 10% increased risk for respiratory diseases and symptoms such as wheezing and bronchitis in children. More recent evidence suggests that PM2.5 exposure can increase the risk of Alzheimer’s disease and other cognitive disorders. In addition, the International Agency for Research on Cancer has designated PM2.5 as a carcinogen, or cancer-causing agent.
Reducing air pollution has been proven to save lives, reduce health care costs and improve quality of life.
For example, a study led by scientists at the EPA estimated that a 39% nationwide decrease in airborne PM2.5 from 1990 to 2010 corresponded to a 54% drop in deaths from ischemic heart disease, chronic obstructive pulmonary disease, lung cancer and stroke.
In the same period, the study found that a 9% decline in ozone corresponded to a 13% drop in deaths from chronic respiratory disease. All of these illnesses are costly for the patients and the public, both in the treatment costs that raise insurance prices and the economic losses when people are too ill to work.
Yet another study found that nationally, an increase of 1 microgram per cubic meter in weekly PM2.5 exposure was associated with a 0.82% increase in asthma inhaler use. The authors calculated that decreasing PM2.5 by that amount would mean US$350 million in annual economic benefits.
Especially for people with lung diseases like asthma or sarcoidosis, increased PM2.5 concentrations can reduce quality of life by worsening lung function.
Uncertainty is something we all deal with on a daily basis. Think of the weather. Forecasts have varying degrees of accuracy. The high temperature might not get quite as high as the prediction, or might be a bit hotter. That is uncertainty.
The EPA wrote in a notice dated Jan. 9, 2026, that its historical practice of providing estimates of the monetized impact of reducing pollution leads the public to believe that the EPA has a clearer understanding of these monetary benefits than it actually does.
Therefore, the EPA wrote, the agency will stop estimating monetary benefits from reducing pollution until it is “confident enough in the modeling to properly monetize those impacts.”
This is like ignoring weather forecasts because they might not be perfect. Even though there is uncertainty, the estimate is still useful.
Estimates of the monetary costs and benefits of regulating pollution sources are used to understand if the regulation is worth its cost. Without considering the health costs and benefits, it may be easier for infrastructure that emits high levels of air pollution to be built and operated.
What the evidence shows
Several studies have shown the impact of pollution sources like power plants on health.
For example, the retirement of coal and oil power plants has been connected with a reduction in preterm birth to mothers living near the power plants. Scientists studied 57,000 births in California and found the percentage of babies born preterm to mothers living within 3.1 miles (5 kilometers) of a coal- or oil-fueled power plant fell from 7% to 5.1% after the power plant was retired.
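The drop from 7% to 5.1% sounds modest in absolute terms, but the relative reduction is substantial. A quick calculation, using only the percentages reported in the study, makes the distinction explicit:

```python
# Preterm-birth share near California coal/oil plants, before and after
# retirement, as reported in the study cited above.
before, after = 0.070, 0.051

absolute_drop_pp = (before - after) * 100        # percentage points
relative_drop = (before - after) / before * 100  # percent of baseline risk

print(f"Absolute drop: {absolute_drop_pp:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}%")    # roughly 27%
```

In other words, retiring the plants coincided with about a quarter of the baseline preterm-birth risk disappearing among nearby mothers.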
Another study in the Louisville, Kentucky, area found that four coal-fired power plants either retiring or installing pollution-reduction technologies such as flue-gas desulfurization systems coincided with a drop in hospitalizations and emergency department visits for asthma and reduced asthma-medication use.
Reducing preterm births, hospitalizations, emergency department visits and medication use saves money by preventing expensive treatment, hospital stays and medications. For example, researchers estimated that for children born in 2016, the lifetime cost of preterm birth, including medical and delivery care, special education interventions and lost productivity due to disability in adulthood, exceeded $25.2 billion.
Circling back to Denver: The region is a fast-growing data center hub, and utilities are expecting power demand to skyrocket over the next 15 years. That means more power plants will be needed, and with the EPA’s changes, they may be held to lower pollution standards.