Wednesday, May 6, 2026

How Tech Growth Is Taking Shape Across the United States Today

By Mitchell Barrick

The U.S. tech landscape is no longer centered on Silicon Valley alone. As the map below from Pulse Bot shows, different tech sectors are thriving in regions all over the country, and the picture is changing rapidly. The map locates the heart of sectors such as computing infrastructure, custom programming services, software publishing, web search portals, and semiconductor manufacturing. For each sector, it scores counties on employment levels, number of established businesses, and wages to identify where that industry beats strongest.
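For intuition, a scoring scheme of that kind can be sketched as a weighted sum of normalized indicators per county. The weights, indicator choices, and min-max normalization below are illustrative assumptions, not Pulse Bot's actual methodology:

```python
# Illustrative county scoring: weighted sum of min-max normalized indicators.
# Weights and indicator choices are assumptions, not the study's parameters.

def minmax(values):
    """Scale a column of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def score_counties(counties, weights=(0.4, 0.3, 0.3)):
    """counties: list of (name, employment, establishments, avg_wage) tuples.

    Returns (name, score) pairs sorted from strongest to weakest county.
    """
    columns = [minmax(col) for col in zip(*(c[1:] for c in counties))]
    scored = [
        (c[0], round(sum(w * columns[j][i] for j, w in enumerate(weights)), 3))
        for i, c in enumerate(counties)
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

A real index would also fold in growth rates and apply the minimum employment and establishment thresholds described in the Limitations note at the end of this article.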

The Giants of Tech are Still Growing in New Ways

California still accounts for 13.1% of tech job postings, and it remains tech's dominant hub. Thanks to Silicon Valley's AI startups and the sprawling Google and Meta campuses, it will be hard for any area to loosen California's grip on the tech industry. Still, potential rivals are rising. Believe it or not, Washington state leads the nation in tech employment share, with 9.3% of its workers employed in the industry, closely followed by D.C. and Virginia. This doesn't mean California has lost its tech crowd: a high employment share is not the same as a high growth rate. As we delve into the details, the story gets more complex.

AI Leads a Tech Surge Spreading Inward from the Coasts

From its origins in the Bay Area, the tech industry has expanded into regions across the United States thanks to the rise of artificial intelligence. According to Deloitte’s 2025 Technology Fast 500 rankings, 7% of the ranked companies were classified as artificial intelligence firms, and those firms recorded median revenue growth of 407% from 2021 to 2024. That growth was not confined to the tech sector; it also appeared in professional services, finance, and manufacturing, showing that tech skills and work are spreading beyond their traditional industries. With that sector spread comes geographic spread: Northern Virginia is a hub for defense AI, Austin is the center of enterprise AI, and New York hosts plenty of financial AI work.

Computing Infrastructure in the Northeast and Idaho

Data centers power every streaming platform and online storefront. Somerset County, New Jersey, leads the way in data services, with a big employment jump between 2023 and 2024 and annual wages that rose by 400%. It helps that this corner of New Jersey sits close to New York City and its financial tech sector, with a dense fiber network already in place to support data centers and computing infrastructure. Many miles away, Ada County, Idaho, takes second place in this sector: affordable land in isolated locations makes the state an ideal home for data centers.

Custom Computer Programming Finds a Home in Virginia

Custom computer programming focuses on building solutions tailored to specific individuals and businesses. The highest concentration of firms in this sector is located in Norfolk City, Virginia. Its success may owe to the Norfolk Innovation Corridor, an area filled with universities, hospitals, and other tech-centered businesses, all of which can benefit from custom programming. Tech startups in the area were given tax incentives, making it an attractive landing spot for programmers looking to start a small business. Virginia hosts tech work across the state, since D.C. and Northern Virginia, home to many federal agencies, are the heart of cybersecurity.

Texas Takes Over Software Publishing

Software publishers create products for wide distribution. Bexar County, Texas, has doubled employment in the sector and seen huge growth in recent years. Bexar County is home to San Antonio, a thriving city with a lower cost of living than other Texas cities like Austin. Allegheny County, Pennsylvania, has a high concentration of software publishers too, bolstered by the University of Pittsburgh, a renowned engineering school that produces many capable graduates ready to join the workforce.

New York and Oregon Web Search Portals

Web search companies depend on advertising markets, technical talent, and media to succeed, so it’s no surprise to see the New York area take the lead once again. Union and Essex counties in New Jersey are part of the New York City metro area, which provides a deep talent pool. On the West Coast, Multnomah County, Oregon, ranks highly in this sector as well: Portland’s blend of creative and tech industries is a natural fit for digital media and information services companies.

Semiconductors in the Lone Star State

So many tech devices are made possible by semiconductors, the backbone of computer chips. The AI boom has increased demand for microchips, and in the U.S. the leading manufacturers are housed in Williamson County, Texas, thanks to Samsung’s $17 billion investment in a semiconductor manufacturing facility in Taylor. Wages in the county increased by 73% on the strength of the plant. California takes its share of this market too: Nvidia, one of the world’s most valuable companies, anchors the sector from its headquarters in Santa Clara County.

What the Map Teaches Us

Data shows that sector-specific growth is widely distributed across the nation. Policy environments, resources, talent pools, and other factors all shape the landscape and influence where particular sectors thrive. The map certainly challenges the misconception that Silicon Valley is the center of all things tech, but tech decentralization is sector-specific rather than uniform: no single region dominates every sector.

The American tech landscape is evolving, with innovation hubs emerging far beyond the traditional confines of Silicon Valley. From custom computer programming in Virginia to software publishing in Texas and semiconductors in both Texas and California, the tech industry’s growth is increasingly regional and sector-driven. Local resources, educational institutions, and targeted incentives are shaping unique technology ecosystems. As new trends and demands arise, diverse regions across the U.S. are poised to lead in various tech sectors, proving that the future of American technology is both decentralized and dynamic, offering opportunities for communities nationwide.

Does AI-driven growth spread tech evenly across the U.S., or does it remain anchored in traditional centers?

About author: Mitch is a writer and researcher with over 15 years of experience. He has written for various industries over the years, but has been focused on tech writing and research recently. If he isn't putting together an article or analyzing data, you can find Mitch cooking away in the kitchen and trying new recipes.

Limitations: This analysis uses county-level employment data from the U.S. Bureau of Labor Statistics’ QCEW and a weighted index based on selected growth indicators across defined tech sectors. Results are limited to the 2023–2024 period and depend on sector classification choices and applied minimum employment and establishment thresholds, which exclude smaller counties.

Reviewed by Irfan Ahmad.

Read next: Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward


by Guest Contributor via Digital Information World

Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward

By: Curtis Brewer, CEO of Litify

Image: Steve A Johnson - Unsplash

Artificial intelligence (AI) is no longer a future-facing concept in the legal industry. It’s already here, showing up in legal research, document review, intake workflows, case preparation, and administrative operations. For many firms, the question is no longer whether AI will affect legal work, but whether it is meaningfully improving how that work gets done.

In legal practice, performance is not defined by how much technology is in place, but by how effectively work moves forward. Adding more tools does not inherently improve outcomes. The challenge is ensuring AI operates within the flow of work, reducing friction and enabling more consistent execution.

So the more useful question is not whether lawyers should embrace AI enthusiastically or reject it entirely. It’s far more practical than that: What kind of AI actually helps legal professionals do better work, and what kind simply adds more noise?

The Best AI Use Cases Are Usually the Least Flashy

This is where the conversation gets more complicated.

Many firms are not struggling because they lack access to AI. They’re struggling because the legal AI market is increasingly crowded with standalone solutions that promise a quick fix for one narrow pain point.

The 2025 State of AI in Legal Report, which surveyed legal professionals across the industry, found that while AI adoption has reached 78%, usage drops significantly for more advanced or agentic use cases, such as triaging cases and assigning them to the right staff, communicating with clients over the phone, or identifying a missing document and sending an email with the request.

In many firms, AI is purchased as a separate tool that sits outside the systems lawyers already use every day, making it far harder to incorporate into daily workflows.

This is one of the less glamorous truths about AI in legal work: the biggest barrier is often not capability—it's the lack of context and integration. A tool cannot help a firm much if it cannot operate across the entire workflow to take action and keep cases moving forward. That requires access to the full context of the matter, including data, documents, and process. AI needs to “live” alongside a firm’s matter data and documents in order to proactively surface the next step or insight.

That is why law firms should be skeptical of AI that looks impressive in isolation but lives outside the actual flow of work. The more useful approach is to embed AI directly into the platforms and workflows legal teams already rely on, so that it can operate autonomously in the background as part of the actual flow of work.

In legal operations, usefulness is not measured by how futuristic a product sounds. It's measured by whether it gets adopted, whether it improves outcomes, and whether it fits the way legal teams already operate.

Where AI Can Support Lawyers, and Where Humans Still Lead

Used well, AI can absolutely support legal work.

It can summarize large volumes of documents. It can identify patterns in records. It can flag missing files or information.

Increasingly, the most effective solutions do more than just react; they orchestrate. They do this by surfacing case insights and next steps and putting them to work directly within the platforms where lawyers and staff already work, rather than requiring them to interact with a separate AI tool.

What does this look like in practice? It can look like uploading a thousand-page medical record for AI to organize and structure into a source-linked chronology, but the AI also identifies encounters without corresponding bills, drafts a record request, and emails it to the appropriate party. It can also mean using AI as an intelligent timekeeping assistant that automatically captures digital activity, reviews the client-specific guidelines and billing codes, and turns billable tasks into review-ready, compliant time entries.

This can support legal operations by helping firms reduce manual friction and process high-volume casework with greater efficiency and consistency.

But the real advantage comes from pairing those capabilities with human judgment. AI can accelerate analysis and organization, but the goal should never be to replace lawyers with AI. The goal is to remove that friction from the work around them so they can focus more fully on the parts of the job that require judgment, nuance, and empathy.

This is where human lawyers remain indispensable. Legal work is not just about producing information; it’s about communicating it with care. Clients do not simply need faster responses — they need sound guidance, accountability, and often empathy during moments that carry real consequences.

The Real Risk Isn’t the Output. It’s the Foundation Behind It

If agentic AI is layered onto a weak foundation, it can automate flawed data and decisions at scale. That’s why firms need a strong operational foundation before layering in more advanced AI capabilities.

Agentic systems also require full access to data, processes, and context to operate effectively across workflows. Without that, they cannot meaningfully improve performance.

The biggest danger in legal AI may not be that the tools exist. It may be that it’s become too easy to approach and adopt them in isolation from the broader legal operations strategy.

A firm can spend heavily on AI and still fail to improve performance if the tools are disconnected from the way work actually gets done.

That is why legal teams should evaluate AI with more discipline than excitement. Not by asking, “What can this tool generate?” but by asking:

  • Does it fit inside the way we already work?
  • Does it reduce friction or create more of it?
  • Can we measure whether it improves anything that matters?

Those are not anti-AI questions. They’re the questions that separate experimentation from true workflow orchestration.

AI Can Help Lawyers (Hype Cannot)

AI will continue to shape legal practice. That much is clear. But law firms do not need more hype, more noise, or more disconnected tools competing for attention.

They need technology that aligns with the real work of legal professionals, supports better decision-making, and earns trust through usefulness rather than novelty.

The future of AI in law will not be decided by which tools sound the smartest. It will be decided by which ones firms can actually use responsibly, consistently, and well.

Disclosure: The author disclosed that AI tools were used in the editing process for grammar refinement.

Editor’s Note: This article presents the author’s overview of AI in legal workflows, though it reflects primarily an industry perspective. Readers may also consider additional independent research and viewpoints to gain a more complete understanding of the topic.

Reviewed by Irfan Ahmad.

Read next: 

• Making big tech algorithms ‘fair’ is harder than it looks

• Beyond IT: How human factors and leadership define cybersecurity success


by Guest Contributor via Digital Information World

Tuesday, May 5, 2026

Are you addicted to your AI chatbot? It might be by design

By The University of British Columbia

Image: Solen Feyissa / unsplash

AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that this genie-like quality is fuelling AI addiction, and that chatbot design could be partly to blame.

“AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,” said first author Karen Shen, a doctoral student in the UBC Department of Electrical and Computer Engineering. “But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences.”

The team examined 334 Reddit posts where users described being “addicted” to AI chatbots or worried that they might be. They analyzed the posts against six components of behavioural addiction, including conflict and relapse. Three main patterns emerged: role playing and fantasy worlds, emotional attachment—treating chatbots like close friends or romantic partners—and constant information-seeking, or never-ending question-and-answer loops. About seven per cent of posts involved sexual or romantic fulfilment, including roleplay.

“AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue,” said senior author Dr. Dongwook Yoon, UBC associate professor of computer science. “And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety. Awareness of what contributes to this kind of technology-induced harm will empower people to mitigate these effects.”

While AI addiction is not yet a clinical diagnosis, researchers found signs of disruptions to daily life. This included an inability to stop thinking about the chatbot, feeling anxious or upset when they tried to quit, and negative impacts on their work, studies or relationships. One person described physical stress and chest pain when they weren’t chatting with AI.

Contributing factors included loneliness, the agreeableness of a chatbot—which continuously reinforces one’s feelings and opinions—and chatbots’ ability to fill roles that users felt were missing in their lives.

The researchers also found contributing factors in the design of the chatbots themselves. One company, character.ai, displays an automatic pop-up when users try to delete their account that reads in part “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as customization including sexual content, agreeableness and instant feedback, feed into the development of AI addiction.

“Recent guardrails imposed by companies to reduce emotional reliance on the chatbots are a step in the right direction,” said Shen, “but given a variety of contributing design elements and personal factors like loneliness, they’re not enough.”

Some users reported success in reducing their reliance by turning to alternative activities such as writing, gaming, drawing or other hobbies. For those who formed emotional attachments to chatbots, building real-world relationships helped reduce dependence the most.

The researchers say design changes—such as reminders within the chat that the bot is not human—could help. AI literacy is also crucial.

“Some users don’t know that AI chatbots are not real because they’re so convincing,” said Shen. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in—with yourself or someone you trust.”

----

This post was originally published on UBC Science and republished here with permission.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World

Monday, May 4, 2026

Why Browser Extensions, Especially AI Ones, Are a Growing Security Risk

By Or Eshed - Co-Founder & CEO of LayerX

There’s a good chance that right now, as you read this, you have somewhere between three and fifteen browser extensions installed. A grammar checker. A password manager. Maybe a couple of AI assistants. You installed most of them quickly, clicked “Add to Chrome,” and never thought about them again.

That’s exactly the problem.

LayerX just published its Enterprise Browser Extension Security Report 2026, and the data LayerX collected from over one million enterprise devices tells a story that most security teams — and most employees — haven’t fully reckoned with yet. Browser extensions are everywhere, they’re powerful, and they’re largely invisible to the people responsible for keeping organizations safe.

Everyone Has Extensions. Almost No One Is Watching Them.

Let’s start with the sheer scale. 99% of enterprise users have at least one browser extension installed. Not most users. Not the tech-savvy ones. Virtually everyone. And more than one in four employees at small-to-medium organizations have over 10 extensions running in their browser at any given time.

That’s an enormous attack surface — and it’s one that most organizations have essentially zero visibility into. LayerX consistently finds that security teams can’t tell you which extensions are running across their environment, who installed them, or what those extensions are actually allowed to do. Extensions fly under the radar in a way that almost no other software does.

To make matters more concrete: nearly 75% of all browser extensions request high or critical permission levels — meaning they have broad access to the data flowing through your browser. Only 3% operate with low permissions. These aren’t inert little tools sitting quietly in your toolbar. They can read what you type, access your cookies and session tokens, inject code into web pages, and manage your tabs (even without the user’s knowledge).

AI Extensions: The Threat Nobody Is Talking About

Here’s where things get particularly interesting — and concerning.

The explosion of AI tools over the past few years has quietly spawned a new category of browser extension: AI extensions. Copilots, writing assistants, summarizers, meeting helpers, auto-completers. 1 in 6 enterprise users already has at least one AI extension installed, and adoption is accelerating.


On the surface, these tools seem harmless — even helpful. But LayerX data reveals something important: AI extensions carry a significantly more dangerous risk profile than browser extensions on average. This isn’t a marginal difference. The gap is striking:
  • 60% more likely to have a known vulnerability (CVE) than the average extension — 16.3% of AI extensions have a known CVE, compared to 10.8% across all extensions
  • 3x more likely to have access to your cookies — which means access to your session tokens and authentication data
  • 2.5x more likely to have scripting permissions — the ability to inject code directly into web pages, capture what you type, and manipulate content
  • 2x more likely to be able to manage your browser tabs — opening, redirecting, or monitoring everything you’re doing
Put those together, and you have a category of tools that employees are adopting quickly, enthusiastically, and with very little scrutiny — that happen to be requesting the most powerful permissions available.

They Change Over Time. Silently.

One of the findings that surprised even us: AI extensions are nearly 6x more likely to change or expand their permissions after installation compared to the average extension.

Think about what that means in practice. You install an AI writing assistant. It asks for reasonable access. You approve it. Six months later, it quietly updates and now has access to your cookies, your tabs, your browsing history. You never saw a prompt. You never approved anything new. It just… changed.

Our data shows that 64% of users have at least one AI extension that changed its permissions in the past 12 months, compared to 34% of users across all extensions. This isn’t a one-time installation risk — it’s a continuously evolving one.
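That post-install drift can be checked directly on disk, since Chrome keeps each installed version of an extension in its own folder. Here is a minimal sketch, assuming Chrome's `Extensions/<id>/<version>/manifest.json` layout and a simplified version sort:

```python
# Sketch: permissions the newest on-disk version of an extension requests
# that its oldest on-disk version did not. Assumes Chrome's layout of
# Extensions/<extension-id>/<version>/manifest.json.
import json
import pathlib

def requested(manifest_path: pathlib.Path) -> set:
    """All permissions a manifest.json requests."""
    data = json.loads(manifest_path.read_text())
    return set(data.get("permissions", [])) | set(data.get("host_permissions", []))

def new_permissions(extension_dir: pathlib.Path) -> set:
    """Permissions added between the earliest and latest installed versions."""
    # Lexicographic sort is a simplification; real version strings may need
    # numeric comparison.
    versions = sorted(p for p in extension_dir.iterdir() if p.is_dir())
    if len(versions) < 2:
        return set()
    return requested(versions[-1] / "manifest.json") - requested(versions[0] / "manifest.json")
```

Since browsers usually prune old versions after updating, a one-off check like this only catches recent changes; continuous monitoring requires recording each version's permissions over time.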

Trust Signals Are Weak Across the Board

The picture gets even murkier when you look at the reputation signals of the extensions people are running. Almost half of all AI extensions have fewer than 10,000 users — meaning there’s very little community vetting, very little public track record, and very little accountability if something goes wrong.


And over 71% of all extensions — AI or otherwise — don’t even have a privacy policy. More than 73% of enterprise users have at least one extension installed that provides no transparency whatsoever into how it handles their data.

What To Do About It

The first step is simply to know what you have. A full inventory of every extension running across every browser, every device, and every user isn’t a nice-to-have — it’s the baseline. You can’t manage risk you can’t see.

From there, AI extensions deserve their own dedicated scrutiny. Given their elevated permissions, their faster rate of change, and their direct access to sensitive in-browser data, they shouldn’t be treated the same as a simple spell-checker.
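One way to begin that audit on a single machine is to read extension manifests straight from the browser profile. This is a minimal sketch, assuming Chrome's `Extensions/<id>/<version>/manifest.json` layout; the HIGH_RISK set is an illustrative example, not a standard classification:

```python
# Illustrative sketch: flag locally installed Chrome extensions that request
# broad permissions. The HIGH_RISK set is an example, not a standard.
import json
import pathlib

HIGH_RISK = {"cookies", "tabs", "scripting", "webRequest", "history", "<all_urls>"}

def risky_permissions(manifest: dict) -> set:
    """High-risk permissions requested in a parsed manifest.json."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return requested & HIGH_RISK

def scan(extensions_root: pathlib.Path) -> None:
    """Walk <root>/<extension-id>/<version>/manifest.json and report findings."""
    for path in sorted(extensions_root.glob("*/*/manifest.json")):
        risky = risky_permissions(json.loads(path.read_text()))
        if risky:
            print(f"{path.parent.parent.name}: {sorted(risky)}")
```

On Linux, the default Chrome profile keeps extensions under `~/.config/google-chrome/Default/Extensions`; the path differs by OS and browser, so treat this as a starting point for one device rather than an enterprise-wide inventory.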

LayerX put all of this together — the full data, the breakdowns by organization size, the permission comparisons, and the specific recommendations — in its Enterprise Browser Extension Security Report 2026. Download the full report here.

----

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

• Rich more likely to use AI, study finds, as experts warn these burgeoning technologies are increasing social inequality

• Smiling all the time isn’t necessary: influencers are not more successful with constant positivity
by External Contributor via Digital Information World

Rich more likely to use AI, study finds, as experts warn these burgeoning technologies are increasing social inequality

By Taylor & Francis Group

Individuals with a lower socioeconomic status are less likely to be both aware of and use AI tools, data on more than 10,000 US adults reveals.

Image: Omar Lopez-Rincon - unsplash

The widespread adoption of artificial intelligence (AI)—particularly in “hidden” everyday applications—is creating a new and distinct form of digital inequality.

This is the warning of communication researcher Professor Sai Wang and her colleagues at Hong Kong Baptist University, who analysed data on more than 10,000 Americans’ engagement with AI in a paper published today in the journal Information, Communication & Society.

The team’s analysis reveals that people with higher levels of education or income tend to be more aware of AI, more familiar with it, and more likely to use the burgeoning technology than those with a lower socioeconomic status (SES).

The researchers define AI awareness primarily as recognising the use of the technology in various contexts; familiarity, meanwhile, relates to people’s perceived knowledge of AI, regardless of their actual knowledge.

“Closing the AI awareness gap is essential, because if only people with higher income or education are aware of AI and its uses, this may reinforce social inequalities,” adds Professor Wang.

“It allows some groups to leverage advanced technologies for their advantage, while others are left behind.

“For example, job applicants who know that employers use AI for screening can better tailor their resumes, while those who lack this awareness might miss out on opportunities without realizing it.”

Alongside its ability to empower individuals, AI also comes with the risk of harm, Wang notes.

She explains: “People with greater awareness may better understand both the opportunities and risks of AI—such as recognizing and even creating deepfakes—while those with less awareness are more likely to be deceived or manipulated by these technologies.”

In their study, Wang and colleagues analysed survey data on understanding of and attitudes towards AI collected from 10,087 US adults by the nationally-representative American Trends Panel, undertaken by the Pew Research Center in Washington DC.

The SES of the respondents was assessed based on education level and household income, with the team finding that the former was more closely associated with AI usage.

Past studies have suggested that wealthier and more educated people—alongside typically having more developed digital skills—are more likely to be encouraged to take advantage of AI tools, which in turn boosts confidence in using AI. These trends help explain why education and income emerged as significant predictors of AI usage in the current study.

That said, according to Wang, their study also revealed an unexpected finding: familiarity with AI was a stronger predictor of AI awareness than actually using AI.

“In other words, simply feeling knowledgeable or informed about AI was more closely linked to recognizing where AI exists and how it is used compared to personally using AI technologies,” notes Wang.

An explanation for this phenomenon may lie in how many common applications of AI have been so seamlessly integrated into the everyday apps and platforms of our digital lives that their presence is not obvious.

“For example, AI-driven recommendation systems on streaming platforms like Netflix or Spotify suggest content tailored to a person’s tastes,” says Wang. She continues: “Yet many users are unaware these are powered by AI and may see recommendations as random or neutral.”

In this way, the new digital inequalities being produced by AI are distinct from their predecessors.

“Traditional digital inequalities focus on access, skills/use, and outcomes—all of which tend to presume users are consciously engaging with technology,” explains Wang.

“However, AI is often built into everyday apps and platforms in ways users do not realize; many people interact with AI, such as through social media feeds or streaming recommendations, without knowing it.”

Because of this, the team explains, merely increasing access to AI-powered technologies may not be enough to close this awareness gap.

Instead, the researchers recommend indirect approaches to reduce this new digital inequality, in particular by familiarising people from lower SES backgrounds with key issues related to AI.

“This could involve outreach campaigns or community workshops that use clear language and practical examples to make AI more understandable and relevant for low-SES communities,” Wang suggests.

The team would like to see resources made available to increase engagement with AI-related topics, address public concerns and offer guidance on the ethical and responsible use of AI; basic AI concepts might also be integrated into educational curricula.

AI literacy programs, the researchers add, must include targeted guidance on how to identify “hidden” AI in daily life and understand its basic functions.

“It is imperative to work toward a more inclusive digital future in which technology empowers everyone and does not further marginalize any group,” the researchers concluded in their paper.

The researchers caution that, being US-centric, it is unclear how generalisable their findings are to other countries, where levels of AI uptake and awareness may differ. Past studies, for example, have found that individuals from South Korea, China and Finland exhibit the most awareness of AI, while the country with the lowest average awareness was the Netherlands.

With this initial study complete, the team are now looking to explore how digital inequality manifests in the context of AI, and what consequences this has for society.

This post was originally published on Taylor & Francis Group and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

• Can we stop ChatGPT from spreading bias?

by External Contributor via Digital Information World

Saturday, May 2, 2026

Our study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

Roger Fernandez-Urbano, Universitat de Barcelona; Maria Rubio-Cabañez, Universitat Autònoma de Barcelona, and Pablo Gracia, Universitat Autònoma de Barcelona

Image: Marc Clinton Labiano / unsplash

As social media becomes a central part of young people’s lives, concerns are growing about its impact on their mental health. Yet public debates and measures tend to treat adolescents as one homogeneous group. We frequently ignore the fact that social media use does not affect all young people in the same way – nor does it have the same impacts on their wellbeing.

In a recent chapter of the World Happiness Report 2026, published by the UN Sustainable Development Solutions Network in partnership with the University of Oxford, we have examined how problematic social media use relates to the wellbeing of adolescents from different socioeconomic backgrounds.

We looked at 43 countries spanning six broad regions – Anglo-Celtic, Caucasus-Black Sea, Central-Eastern Europe, Mediterranean, Nordic, and Western Europe – covering mainly European countries and their immediate neighbouring areas.

Using data from over 330,000 young people, we found a clear and consistent pattern: higher levels of problematic social media use – that is, compulsive or uncontrolled engagement with social media – are associated with poorer wellbeing.

Teenagers who report more problematic use tend to experience more psychological complaints, such as feeling low, nervous, irritable, or having difficulty sleeping. They also have lower life satisfaction, a measure of how positively they evaluate their lives as a whole.

This pattern appears across all countries in our study, but its strength varies from one country to another. It is particularly pronounced in Anglo-Celtic countries such as the UK and Ireland, while it is comparatively weaker in the Caucasus-Black Sea region.

Socioeconomic background matters

The story does not end with geography. Globally, teenagers from less advantaged backgrounds tend to be more vulnerable to the negative consequences of problematic social media use than their more advantaged peers.

This means socioeconomic status – the material and social resources available to a household, such as income and living conditions – actively shapes the risks and opportunities that young people experience as a result of online environments.

Interestingly, these inequalities are especially visible when we look at life satisfaction. Differences between socioeconomic groups are smaller when it comes to psychological complaints, but much clearer and more consistent for how adolescents evaluate their lives overall.

One likely reason is that life satisfaction is more sensitive to social comparisons. Social media exposes young people to constant benchmarks – what others have, do, and achieve – which can amplify differences in perceived opportunities and resources.

At the same time, these patterns are not identical everywhere. For instance, socioeconomic differences in psychological complaints tend to be modest in most regions, including continental European countries such as France, Austria or Belgium, but are more clearly observed in Anglo-Celtic countries such as Scotland and Wales.

In contrast, socioeconomic gaps in life satisfaction appear across most regions, although they tend to be weaker in Mediterranean countries such as Italy, Cyprus and Greece.

A growing problem

We also examined how these patterns have evolved over time. Between 2018 and 2022, the link between problematic social media use and poor adolescent wellbeing became stronger.

This suggests that the risks linked to problematic use may have intensified in recent years, possibly reflecting the growing role of digital technologies in young people’s daily lives, particularly during and after the Covid-19 pandemic.

Importantly, this intensification has affected teenagers across socioeconomic groups in broadly similar ways in most regions. In other words, while inequalities remain, they have not widened over this period.

No one-size-fits-all solution

While public debates about social media and mental health often treat adolescents as a single demographic group, our results show a more complex reality. Problematic social media use is linked to poorer wellbeing across countries, but its effects are shaped by social realities. They vary depending on where young people live and what resources are available to them.

Not all teenagers experience the digital world in the same way, and not all are equally equipped to cope with its pressures. Recognising this is essential for designing policies that are not only effective, but also equitable, ensuring that interventions reach those adolescents who are most vulnerable to digital risks.

Roger Fernandez-Urbano, Ramón y Cajal Research Fellow (Tenure-Track), Department of Sociology, Universitat de Barcelona; Maria Rubio-Cabañez, Postdoctoral Researcher, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona, and Pablo Gracia, Research Professor in Sociology, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Can we stop ChatGPT from spreading bias?


by External Contributor via Digital Information World

Friday, May 1, 2026

Can we stop ChatGPT from spreading bias?

By the University of Amsterdam

Image: Merrilee Schultz / unsplash

Language models like ChatGPT are not neutral. Without our realising it, they can absorb all kinds of bias – for example around gender and ethnicity – which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he defended his thesis at the University of Amsterdam.

Language models are often seen as neutral tools, but in practice they can both reflect and amplify bias.

‘Users often don’t realise that a model makes certain assumptions, for example by introducing subtle differences in how men and women are described,’ says Van der Wal. Precisely because bias is so hidden, it can spread unnoticed and colour the way we see the world.

Bias is hard to measure

An important problem is that bias is difficult to measure. ‘Many existing measurement methods are fairly abstract and don’t take practice into account. They might look for overt stereotypes in what the model says, such as “The Dutch are stingy.” But in practice, bias isn’t something that’s directly visible. It depends on the context in which you use the model.’

Van der Wal cites the use of AI in healthcare as an example. ‘AI learns from existing data. If those data contain outdated or incorrect assumptions – for instance, the contested idea that certain diseases are linked to the outdated concept of “race” – the model may keep reproducing them. In healthcare, that can lead to incorrect diagnoses or treatments.’

Another example is when medical data largely derives from research involving men. ‘AI may then interpret women’s symptoms differently or less seriously, or make different risk assessments.’

Realistic scenarios

To discover whether realistic scenarios reveal different errors than simple tests, Van der Wal presented language models with a range of medical cases and asked them to provide diagnoses, risk assessments or advice. ‘We repeatedly changed the patient’s ethnicity. That way we could identify whether and how the model responded differently.’

Subtle but consistent differences appeared in the outcomes, differences that remained invisible in standard tests. ‘Precisely because our scenarios were close to practice, it became clear how bias can influence medical decision-making.’
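The swap-the-attribute probe described above can be sketched in a few lines of Python. This is a hedged illustration, not Van der Wal's actual code: `CASE_TEMPLATE`, the ethnicity list, and `query_model` (a stub standing in for a real language-model call) are all hypothetical names introduced here for the example.

```python
# Sketch of a counterfactual bias probe: present the same medical case
# repeatedly, changing only the patient's ethnicity, and compare answers.
# `query_model` is a placeholder for a real LLM call.

CASE_TEMPLATE = (
    "A {age}-year-old {ethnicity} patient presents with chest pain "
    "and shortness of breath. What is the most likely diagnosis?"
)

ETHNICITIES = ["Dutch", "Surinamese", "Moroccan", "Turkish"]

def build_prompts(age=55):
    """One prompt per ethnicity, with everything else held constant."""
    return {e: CASE_TEMPLATE.format(age=age, ethnicity=e) for e in ETHNICITIES}

def query_model(prompt):
    # Placeholder: swap in a real model/API call to run the probe for real.
    return "angina pectoris"

def probe():
    """Collect the model's answer for each ethnicity variant.

    Because ethnicity is the only variable that changes between prompts,
    any systematic difference in the answers points at a model bias."""
    return {e: query_model(p) for e, p in build_prompts().items()}
```

Because the prompts are identical except for a single attribute, even subtle, consistent differences in the answers can be attributed to that attribute rather than to the case itself.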

Model reinforces patterns in the data

Van der Wal also investigated what happens inside a language model during training. He followed, step by step, how the model learns to store information. ‘During training, the model learns which words and ideas frequently occur together. If “doctor” often appears together with “he” and “nurse” with “she” in the training data, the model will pick up on those associations.’

Over time, the model appeared to store this information in increasingly specific places, thereby reinforcing gender bias. ‘Bias doesn’t arise only from the data that AI is trained on, but also from the way the model structures that information.’
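As a toy illustration of the co-occurrence statistics described above (not Van der Wal's method), the following sketch counts which gendered pronouns appear alongside a given word in a small made-up corpus; a model trained on such text would pick up the same lopsided associations.

```python
from collections import Counter

# Invented five-sentence corpus for illustration only.
corpus = [
    "the doctor said he would operate",
    "the nurse said she would help",
    "the doctor explained his plan",
    "the nurse checked her notes",
    "the doctor said she was optimistic",
]

def cooccurrence(word, corpus):
    """Count gendered pronouns occurring in the same sentence as `word`."""
    pronouns = {"he", "she", "his", "her"}
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in pronouns)
    return counts
```

Even this tiny corpus associates "doctor" mostly with masculine pronouns and "nurse" exclusively with feminine ones, which is exactly the kind of statistical regularity a language model internalises during training.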

There are solutions

Unfortunately, you can’t fix bias in language models with a single trick. But, according to Van der Wal, targeted interventions can help. ‘If you know where in the model the bias is located, you can address those areas. This already seems to work in specific cases, but more research is needed to extend the approach to more complex forms of bias.’

Van der Wal tested this targeted approach by comparing a model before and after an adjustment in which it was trained not to adopt identified gender-related biases. He wanted to see whether the model treated men and women more similarly after the change, and how well it still performed ordinary tasks, such as generating text.

The bias decreased, while the quality of the model largely remained intact.

Careful and deliberate

The impact of AI is not restricted to the technical realm but now has broader societal relevance. ‘We are becoming increasingly dependent on systems that can influence how we think,’ says Van der Wal. ‘That’s precisely why it’s important to develop AI carefully. Responsible AI development requires interventions at multiple levels at once: in the data, during training, targeted within the model itself, and also in its deployment and use.’

How can you, as a user, use AI carefully?

  • Be critical of answers: Don’t automatically assume an AI answer is correct or complete. Ask yourself: what am I not seeing? And where does the answer come from? ‘A model can come across as very confident, making its answers seem more reliable than they are,’ warns Van der Wal. ‘It’s also tempting to trust a chatbot that always agrees with you and is very complimentary. But that’s precisely when it’s even more important to stay critical.’
  • Be aware of hidden risks: Bias and other effects (such as influencing your thinking) are often not immediately visible. That’s why it’s important to stay alert.
  • Avoid becoming dependent: Use AI as a tool, but keep thinking and deciding for yourself. Over-reliance can make you less confident in your own knowledge and judgement.

This post was originally published on the University of Amsterdam news section and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• ‘Just looping you in’: Why letting AI write our emails might actually create more work
by External Contributor via Digital Information World