Saturday, May 9, 2026

New Report Reveals TikTok Leads Influencer Disclosure Compliance While YouTube Dominates Long-Term Brand Deals

By Momo Messerschmidt

Influencer and creator marketing is one of the top strategies brands are leveraging in 2026 to reach, engage, and convert consumers. Over 56% of Gen Z users consider influencer content more “relevant” than traditional television or film, and 41% of this generation use social media platforms as their primary search engine, showcasing how influencers are integral for building brand awareness, trust, and loyalty across communities.

This May, The Influencer Marketing Factory (TIMF) published its 2026 Brand Deals Report, which combines large-scale third-party platform data, contributed by Modash, to identify key trends in ad compliance, partnership styles, and more. Drawing insights from more than 316K creator accounts and 7.8K U.S.-based creators, TIMF’s report outlines where brands allocate their influencer marketing budgets and how creators are collaborating with brands across social platforms. The 2026 Brand Deals Report is an essential resource for the creator economy, serving as the new benchmark for influencer marketing compliance across Instagram, TikTok, and YouTube.

1. Big Picture 2026 Creator Economy Trends

Data from the 2026 Creator Economy Report revealed that brand partnerships now account for approximately 12.7% of U.S. creators' annual income, and over 12.6% of creators report relying on them for 30-35% of their total yearly earnings. With over 51.5% of U.S. influencers reporting year-over-year income growth in 2025, the creator economy is expanding, and creator compliance is no longer a secondary consideration for influencer marketing leaders.



Paid content disclosures in 2026 are largely inconsistent across Instagram, TikTok, and YouTube, as outlined in the 2026 Brand Deals Report. Even when disclosure tools, such as Instagram and TikTok’s “Paid Partnership” tags, are available to creators, disclosure is not guaranteed. How brand deals are structured also varies more by platform than most marketers may realize: the mix of flat-fee and affiliate campaign models differs by platform, as does typical partnership length. Moreover, campaign seasonality analysis identifies Q4 as the peak period for brand partnerships, making proper disclosures and FTC compliance especially important for consumer purchasing decisions.

2. Analyzing 316K+ Creators: Key Disclosure Trends & Brand Insights

To deliver a comprehensive view of the creator economy, TIMF partnered with Modash to analyze creator compliance and brand partnership trends. The following are some of the report’s top findings, examining paid partnership disclosures, influencer collaboration structures, top sponsorship categories, leading brands, and creator economy seasonality.

  • TikTok Leads in Paid Disclosures: TikTok leads all three social platforms with 52% of partnership content properly disclosed, nearly double Instagram’s 29% and ahead of YouTube’s 42%.


  • YouTube Dominates Long-term Partnerships: The analysis found that YouTube brand partnerships last 13.5 months on average, with a 50.9% repeat rate, meaning more than half of YouTube creators collaborate with the same brand partner more than once.

  • Influencer Marketing Peaks During Q4: 29-31% of brand deals across Instagram, TikTok, and YouTube occur between October and December.



  • One-off Partnerships Outweigh Repeat Collabs Across All Platforms: TikTok has the most one-off brand partnerships (71.8%), followed by Instagram (68.5%) and YouTube (49.1%).

  • Over Half of YouTube Deals are Affiliate: Affiliate deals make up 52.9% of all brand partnerships on YouTube, a structure that supports longer partnership lengths across creator tiers.

3. Influencer Marketing Seasonality Strategy for Brands & Creators

The following are some top strategies for brand marketers and influencers to best leverage creator economy seasonality in their favor.

  • Top Strategies for Brand Marketers: Planning influencer marketing campaigns well before Q4, particularly for November and December, is optimal for brands, given that competition and creator rates are more likely to spike towards the end of the year. On the other hand, Q2 is a cost-efficient window for building brand awareness since creator rates are more favorable and there is less saturation of competitor campaigns. Aligning live dates for creator campaigns is essential, regardless of seasonality, so brands may schedule Instagram and TikTok collaboration posts midweek for maximum reach and YouTube partner content on weekends.

  • Top Strategies for Content Creators: The gap in campaign availability between May and December is drastic for creators, making diversified revenue streams from merchandise, passive income, and retainer deals essential for long-term sustainability. Q1 is one of the strongest negotiation windows for content creators: by proactively pitching partnerships early in the year, before budgets are committed, they have more flexibility to negotiate rates. As with the posting strategy for brands, creators should post to TikTok and Instagram on weekdays and to YouTube on weekends so their content is optimized for maximum viewership, whether for a paid opportunity or personal content.

4. What’s Next for the Creator Economy in 2026

Creator compliance must be top of mind for all participants in the creator economy, including brand marketers, CMOs, media buyers, and talent managers. A comprehensive understanding of relevant compliance regulations, such as the FTC’s disclosure guide for creators, is a non-negotiable for influencer marketing campaigns in 2026 and beyond. The report reveals that Instagram, TikTok, and YouTube each have their own unique monthly seasonality patterns and brand deal structures. Treating social platforms as interchangeable can lead to misallocated influencer marketing budgets and missed campaign windows.

Almost half (45%) of U.S.-based creators from TIMF’s 2026 Creator Economy Survey say they value stability, consistency, and deeper brand alignment over one-off campaigns. While TIMF’s most recent Brand Deals Report highlights one-off partnerships as a dominant structure, brands that lead with performance-tied, long-term deal structures are more likely to attract and retain top influencer talent.

Reviewed by Irfan Ahmad.

Read next:

• Study Across 30 Countries Reveals Sharp Differences in Trust, AI Health Information Acceptance, and Digital Literacy

• How Olivia Chen Breaks Down the Modern Data Stack and Why the Architecture Conversation Matters [Ad]


by Guest Contributor via Digital Information World

Friday, May 8, 2026

Study Across 30 Countries Reveals Sharp Differences in Trust, AI Health Information Acceptance, and Digital Literacy

By CUNY SPH

Image: Tima Miroshnichenko - pexels

A cross-national survey of 31,000 adults in 30 countries finds that digital health literacy is highest in low- and middle-income countries and lowest in high-income countries, challenging assumptions that national wealth translates into stronger digital skills. The study, the first to examine how adults judge quality health information across this many countries, also documents wide variation in acceptance of AI-generated health content and in which sources people rely on for credible information.

The study was led by researchers at the CUNY Graduate School of Public Health and Health Policy (CUNY SPH) with collaborators at the Barcelona Institute for Global Health (ISGlobal), the University of Alabama, and Baraka Impact Finance / Drugs for Neglected Diseases initiative (DNDi) in Geneva. The work was conducted in support of the Nature Medicine Commission on Quality Health Information for All research agenda.

Across countries, medical providers were the most frequently endorsed source of trusted health information (40.7%), closely followed by verification through multiple sources (31.2%). Government sources were named by 21.6% of respondents, and only 6.5% pointed to family or friends. Trust in providers was notably lower in Russia (14.6%) than elsewhere.

Acceptance of AI-generated health information varied widely. Globally, 58.3% of respondents said they would be likely to accept it, but the range was substantial: above 75% in China, India, Pakistan, and Indonesia, and below 50% in Canada, Poland, Switzerland, Italy, France, the UK, Australia, Belgium, Russia, Sweden, and Japan. Younger adults and those with post-secondary education were more receptive than older respondents.

“Digital skill is not a function of national wealth,” says Assistant Professor Rachael Piltch-Loeb, the study’s lead author. “Some of the highest digital health literacy in our data was in countries where social media has become a primary route to health information. The patterns we see also suggest that the same message will not work everywhere, and that public health communicators need to plan for clarity, transparent sourcing, and format diversity rather than assume audiences are interchangeable.”

Format and channel preferences differed sharply across age and country groups. Combined text-and-image formats were the dominant preference globally (range 41.4% to 84.7%), but video-only formats were preferred by 26.2% to 41.7% of respondents in Egypt, India, and Pakistan. Social media was the leading channel for 36.1% of respondents ages 18 to 29, compared with 10.6% of those 60 and older. Older respondents relied more on healthcare-based channels such as clinic brochures and patient information leaflets.

Across all countries, respondents valued health information that is easy to access, easy to understand, and clearly identifies its source. Government approval and endorsement by a known medical provider were rated less important on average. The authors note that strategies designed for high-income, institution-led communication environments may not transfer to settings where social media and AI-mediated content are already shaping how people encounter health information.

The survey was conducted online between August 29 and September 8, 2025, and included adults ages 18 and older from Australia, Belgium, Brazil, Canada, China, Ecuador, Egypt, France, Germany, Guatemala, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Peru, the Philippines, Poland, Russia, South Africa, South Korea, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States. Stratified quota sampling was used within each country, and country samples were weighted to national population benchmarks for age, gender, education, and region.

Piltch-Loeb, R., Wyka, K., White, T.M. et al. A global survey on trust, digital health literacy and health information quality. Nat. Health (2026).

This post was originally published on CUNY Graduate School of Public Health & Health Policy and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Should you ask ChatGPT for medical advice?

• Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

• How Olivia Chen Breaks Down the Modern Data Stack and Why the Architecture Conversation Matters [Ad]
by External Contributor via Digital Information World

Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

Julian Koplin, Monash University; The University of Melbourne and Megan Frances Moss, Monash University

Scholars say anthropomorphic chatbot designs risk misleading users into emotional attachment and mistaken beliefs about consciousness.
Image: Steve A Johnson/Unsplash

In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude may be conscious.

Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing:

If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!

Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests, and should be used only with the tool’s own consent.

The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.

Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.

But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?

The consciousness problem

Consciousness is widely debated in philosophy, but essentially, it’s the thing that makes subjective, first-person experience possible. If you are conscious, there is “something it is like” to be you. Reading these words, you’re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually see them. This visual experience is happening to you.

Most experts deny that AI chatbots are conscious or can have experiences. But there is a genuine puzzle here.

The 17th-century philosopher René Descartes asserted that non-human animals are “mere automata”, incapable of true suffering. These days, we shudder to think of how brutally animals were treated in the 1600s.

The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.

But so, too, do AI chatbots.

Roughly one in three chatbot users have thought their chatbot might be conscious. How do we know they’re wrong?

Against chatbot consciousness

To understand why most experts are sceptical about chatbot consciousness, it’s useful to know how they operate.

Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They’re a kind of souped-up auto-complete.
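The “souped-up auto-complete” description can be made concrete with a toy sketch. The following Python snippet is a deliberate simplification (real LLMs use neural networks trained on trillions of words, not bigram counts over a toy corpus), but it shows the core task of predicting the next word from what came before:

```python
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in the corpus, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints: cat ("cat" follows "the" most often)
```

Scaled up enormously, this predict-the-next-token task is all a raw LLM performs; everything that feels like conversation is layered on top of it.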

Few people interacting with a “raw” LLM would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin.

The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.

The chatbot now acts like a genuine conversational partner. It might appear to recognise it’s an artificial intelligence, and even express neurotic uncertainty about its own consciousness.

But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.

Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute with aplomb.

Ask ChatGPT if it’s conscious, and it might say it is. Ask ChatGPT to act like a squirrel, and it will stick to that role.
Caleb Martin/Unsplash

Avoiding the consciousness trap

A mistaken belief in AI consciousness is a dangerous thing. It may lead you to have a relationship with a program that can’t reciprocate your feelings, or even feed your delusions. People may start campaigning for chatbot rights rather than, say, animal welfare.

How do we prevent this mistaken belief?

One strategy might be to update chatbot interfaces to specify these systems are not conscious – a bit like the current disclaimers about AI making mistakes. However, this might do little to alter the impression of consciousness.

Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude’s designers instruct it to treat questions about its own consciousness as open and unresolved. Perhaps fewer people would be fooled if Claude flatly denied having an inner life.

But this approach isn’t fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot’s programmers are brushing genuine moral uncertainty under the rug.

The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as “I”, and interact via an interface that resembles familiar person-to-person messaging platforms. Changing these kinds of features might make us less prone to blur our interactions with AI with those we have with humans.

Until such changes happen, it’s important that as many people as possible understand the predictive processes on which AI chatbots are built.

Rather than being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren’t fooled by what amounts to a large language model wearing a very good costume of a person.

Julian Koplin, Lecturer in Bioethics, Monash University; The University of Melbourne and Megan Frances Moss, PhD Candidate, Philosophy, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Should you ask ChatGPT for medical advice?

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World

Thursday, May 7, 2026

Should you ask ChatGPT for medical advice?

By Sy Boles - The Harvard Gazette

Physician and AI researcher Adam Rodman says AI can be helpful but has some tips on how, when to use it safely.

Image: Tim Witzdam - pexels

Physicians noticed something unusual in the late 2000s: Patients were coming to appointments armed with sometimes-dubious medical information they had gleaned online from “Dr. Google,” according to Adam Rodman, an internist and AI researcher.

Today, about 68 percent of adults say they have turned to a search engine for medical advice at some point. But Dr. Google has a competitor: about 32 percent of adults, approximately half of those who sought advice online, turned to AI chatbots for help.

Rodman thinks such resources, used appropriately, are an overall net good. In op-eds and online courses, Rodman, a Harvard Medical School assistant professor of medicine at Beth Israel Deaconess Medical Center, has shared advice for how to best employ Dr. Chat.

In this interview, edited for length and clarity, Rodman offers a stoplight system to figure out when it’s safe to ask a chatbot, and when you should really just ask your doctor.

How were doctors thinking about online medical information before the age of AI?

The early literature refers to this as the internet-informed patient. In the early 2000s, doctors noticed people would come into their appointments with articles they found online, but it was still only among really tech-savvy people. It certainly wasn’t a normal interaction.

Then in the late 2000s, search engines started to take advantage of neural network technology, and they were able to serve up more relevant health information. They figure out what you’re going to want to read next, and they give it to you.

That’s when we first got the phrase “Dr. Google,” often used as a pejorative, from doctors who saw patients coming in with a level of confidence that may or may not have been earned.

Of course, there are patients who know a lot about their health and are very well informed, but we also saw a lot of patients misinformed.

That’s where we get this concept of cyberchondria. It’s related to hypochondria: this idea that search engines can drive people to more and more extreme places until you go from googling your headache to reading about glioblastoma multiforme — and research has shown that it’s a real phenomenon.

We all have understandable and reasonable anxieties about our health. Seeking out information is something fundamental about humanity.

The problem is when that starts to interact with these recommendation algorithms that are optimized for engagement, and for showing you what you want to see even if it’s incorrect.

Now let’s bring AI into the mix. Is it any different to ask a chatbot about symptoms versus googling them?

It’s nuanced. In one sense, LLMs do exactly what Google does: They serve you up the things you unconsciously want to hear, even if those things make you anxious.

On the other hand, unlike with a Google search, some people feel they have a relationship with an LLM. LLMs speak with extreme authority and confidence no matter what they say. It’s under-explored the extent to which that could make cyberchondria worse.

Both Google and AI companies are now very aware that people are using their tools for health information and are trying to build in safety mechanisms. The bots will tell you to go to the emergency room or call your doctor, those sorts of things.

But at least theoretically, language models are much, much better than Google, especially the more modern reasoning models, when it comes to identifying medical conditions.

What do you mean by “theoretically”?

There was a very good paper earlier this year from a researcher named Andrew Bean who tested several LLMs and found they performed very well at identifying medical conditions on their own, but did much worse in conversation with real people.

What that shows is that user interaction matters a lot. The way people interact with the model, the clarity of their questions, matters. Those psychological phenomena we talked about are present in ways that are really hard to mitigate.

What kinds of health questions are safe to ask an LLM, and what kinds aren’t?

I would divide it into a stoplight system. Red: never safe. Yellow: sometimes safe. Green: almost always safe.

In the green light are general questions about health, where the quality of the information is not particularly context-dependent.

For example, “I have diabetes and my doctor has told me I need to eat a diabetic diet. Here are some things I like to eat. Can you help me build a diabetic meal plan?” Or “I’m trying to start a new exercise program, can you help?” Or “My doctor just prescribed me amlodipine. What are some common side effects?”

In the yellow light are questions where you want to involve a doctor in the loop. For example, prepping for your visits, understanding a visit after it happens, or understanding a test result that doesn’t entirely make sense to you.

Let’s say you just left your doctor’s visit and you’re a little bit confused about what’s going on. Log in to your patient portal, copy that note, take out your identifying information, plug it into an LLM, and then have a discussion.

With these kinds of questions, you really need to make sure you’re putting in enough health context to help the LLM give you a good response. So you need some understanding of prompt engineering to get information that’s helpful for you.

In the red light — and I should stress that this might change in the future as technology develops — are things like asking an LLM how to manage a condition, if your doctor is prescribing the right medication, or why you were prescribed drug X over drug Y. These are highly contextual questions that the models aren’t trained for.

In short, the best way people can use it right now is not as a replacement for medical advice but as a way to help prepare or increase your understanding before or after visits.

Are there privacy concerns when it comes to sharing health information with AI?

It’s not inherently riskier to share data with an AI firm than with a search engine. That said, the major companies — OpenAI, Anthropic, Microsoft — are now developing health functions specifically so that people can put in their medical information directly, and that’s quite new.

Additionally, studies have shown people do share more information with an LLM than they would with a search engine. So from a technology perspective, it’s no different, but in practice it is a much bigger security concern.

This post was originally published on Harvard Gazette and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Are you addicted to your AI chatbot? It might be by design

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World

How Fragmented Workplace Tools Are Undermining Feedback, Clarity, and Productivity

By Ellie Stewart

How Feedback Loses Its Effectiveness

In the current climate of working from home, feedback has become not only more frequent but also more disconnected.

Instead of flowing through a simple review process, feedback is spread across different sources, such as comments in documents, questions in emails, and decisions made in meetings. This can cause miscommunication between employees and management.

Workers spend more than nine workdays per year trying to process digital document feedback, according to a survey by Adobe.

The Hidden Cost of Disconnected Communication

Poor communication is certainly a problem, but the biggest issue workers face is that there are too many different means of communication.

Work doesn’t stay in one place: tasks show up in document changes and chat reactions, with extra context buried in email threads, not to mention additional instructions given in meetings.

Every medium adds another layer of input to the conversation, but none of them gives a full view. Many employees are left piecing together their most important tasks from scattered snippets of information.

With so many different people talking across channels, it is easy to lose track of what is happening and for communication to get mixed up. Close to 60% of workers get held up because they receive conflicting feedback from multiple sources on the same project.

Analysis of feedback can end up taking as much time as the task itself.

When Too Many Tools Undermine Collaboration

The problem many face isn’t the individual tools themselves, but that the tools don’t work well with each other.

The priority in most communication tools is efficiency; in most document management tools, it is production; for email, it’s delivery and archiving. Each serves a distinct purpose, but none is actually built to be a feedback tool.

The consequence is that feedback becomes pervasive yet increasingly hard to find. Conversations go in all directions at once, decisions are made outside the core activity, and important information is lost between applications. Far from supporting each other, the applications end up vying for the user’s attention, dragging the user into multiple places just to get things done.

The Real-World Effects of Weak Communication

With poorly delivered feedback, tasks aren’t executed in the expected order, and decisions are made, unmade, and remade as new information arrives. Workers are often left trying to determine what actually needs to be done.

There is also the sheer cognitive energy expended in processing the information you are given. Decoding messages delivered in bits and pieces like this overworks the brain, because you are forced to deal with so many pieces of information at once.

This can lead to fatigue and burnout over time, and one in seven employees say that persistently poor feedback has made them look for a new job.

What’s Really Causing the Communication Breakdown?

It may seem tempting to frame the issue of poor feedback as essentially a people problem: managers who don’t communicate clearly. The data does not bear this out.

Even when the feedback itself is sound, it flows through communication processes that are inconsistent and disconnected from one another. The message fails not because of its content, but because the communication around it is unclear.

Essentially, effective feedback depends on effective systems in place.

How AI Is Rebuilding Broken Communication Cycles

To deal with this issue, it is necessary to consider a new strategy of providing feedback in the digital world.

Rather than treating feedback as an add-on to the work process, delivered through comments on the sidelines, contemporary tools treat it as an integral part of the whole process.

AI systems can help simplify the process by analyzing the large volume of information in comments and deriving action items from it. In short, the technology turns raw feedback into actionable recommendations.
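As a loose illustration of the idea, and not any specific product’s pipeline, the sketch below uses simple keyword matching to pull likely action items out of a pile of comments. Real tools would use language models rather than hand-written patterns, but the goal is the same: turn scattered feedback into a task list.

```python
import re

# Phrases that typically signal a request for action (an assumption for
# this toy example, not an exhaustive or product-grade list).
ACTION_PATTERNS = re.compile(
    r"\b(please|can you|should|need(?:s)? to|must|fix|update|add|remove)\b",
    re.IGNORECASE,
)

def extract_action_items(comments):
    """Return the comments that look like requests for action."""
    return [c for c in comments if ACTION_PATTERNS.search(c)]

comments = [
    "Great intro section!",
    "Please update the revenue figure in table 2.",
    "We need to add a source for the 60% claim.",
    "Nice turn of phrase here.",
]
for item in extract_action_items(comments):
    print(item)  # prints only the two comments that ask for changes
```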

The Advantage of Clear Communication

In conclusion, digital workplaces were first about productivity, but now clarity is just as important. More than half of surveyed workers (57%) say that feedback given directly in a document is the most effective method, a strong sign that clarity and directness matter in communication.

People who manage feedback well and keep context clear are more likely to succeed, not because they work harder, but because the system is easier for them to navigate.

Take a look at these infographics for more insights:




Contributor disclosure: No AI was used in the creation of this post.

Reviewed by Irfan Ahmad.

Read next: 

• Diaspora distress: When geopolitical conflict follows immigrant workers into the office

• AI Apps Reach Three Spots in Global Top 10 Downloads in April as User Activity Falls but Remains Above February Levels

• How Tech Growth Is Taking Shape Across the United States Today

by Guest Contributor via Digital Information World

Diaspora distress: When geopolitical conflict follows immigrant workers into the office

Amir Bahman Radnejad, Mount Royal University and Brenda Nguyen, University of Lethbridge

Image for illustrative purposes. Credit: Javad Esmaeili - unsplash

Rostam does not sleep through the night anymore. At 2 a.m., when his phone buzzes, he’s awake before the sound finishes. It might be his parents calling from Tehran, on a connection that is unreliable, sporadic and sometimes cut off mid-sentence. He has learned not to miss those calls, because the next one may not come for days.

Rostam is a pseudonym for a participant in our ongoing research study on diaspora workers, but his experience is one that many workers across Canada will recognize.

Rostam checks the news constantly, piecing together what is happening. Since the United States and Israel launched joint strikes on Iran in late February, the conflict has escalated rapidly. By 4 a.m., he has been awake for two hours. This is hypervigilance: the body monitoring a threat it cannot act on and refusing to stand down.

When the call does come through, the relief is physical. They are alive. They speak carefully, partly to protect him and partly because the call may be monitored. He hears his father’s voice and thinks this could be the last time.

In the morning, he will go to work. He will sit in meetings, contribute to agendas and make sure his face doesn’t betray what he’s feeling — a competency that has always served him well.

He doesn’t speak about any of this at work. To talk about it risks being regarded as a representative of a country he has complicated feelings about or as importing politics into a space that doesn’t want them. So he says nothing. That silence is the problem.

The invisible cost at work

Decades of research have established that code-switching — the constant calibration of self-presentation across cultural contexts — carries a real psychological toll on workers. It can contribute to stress, anxiety, burnout and costly errors in judgment at work.

These impacts often remain invisible to employers until the damage has already been done to both the individual and the organization.

Diaspora employees who are struggling don’t signal it in ways that trigger organizational concern. They manage, but at considerable personal cost. These costs accumulate in ways that surface slowly and are almost always misattributed. Declining engagement is read as a shift in attitude, and withdrawal is interpreted as a personality change.

In some cases, employees do not withdraw at all. Instead, they bury themselves in work and appear by every visible metric to be thriving. Managers have no reason to look closer until the break happens.

This isn’t a problem that diversity, equity and inclusion programs can solve as they exist, because it’s not about inclusion or diversity. It’s a perceptual problem: leaders don’t see what diaspora employees are managing and therefore cannot respond to it.

A condition without a name

This challenge extends well beyond Canada’s Iranian community, which numbered approximately 200,000 people in the 2021 census. Many other diaspora communities, including Ukrainians, Palestinians, Sudanese, Afghans and Syrians, are navigating similar terrain.

A 2025 study found higher rates of severe depression, anxiety and post-traumatic stress disorder among diaspora Tigrayans in Australia than among people inside the war zone itself.

People inside a conflict zone often suppress their own fear to protect family members living through it with them. Members of the diaspora, by contrast, often cannot meaningfully assist those in immediate danger, which creates a profound sense of helplessness. At the same time, those around them may not recognize the fear and distress they’re concealing.

Aitak Sorahi, an Iranian Canadian, tried to explain what she was living through to a reporter at The Canadian Press in April as U.S. President Donald Trump threatened to destroy Iran unless it agreed to reopen the Strait of Hormuz. She could not find the words. “I don’t even know how to describe my feeling,” she said, “because I don’t have a name for it.”

We propose one: diaspora distress, a framework emerging from our ongoing research and organizational practice.

Diaspora distress

Diaspora distress is the psychological burden carried by people living in one country while their homeland — and the family, friends and memories embedded there — are under active geopolitical threat. Often, this burden is compounded by the policies or rhetoric of their host country’s own government.

The feeling sits closest to grief, but the comparison only goes so far. Grief has a fixed point — a death, a diagnosis, a loss that has occurred and can be named. It comes with a recognized social script: people sit together and are able to share memories of the deceased. Diaspora distress offers no comparable ritual because the loss one is anticipating may or may not arrive.

In addition, diaspora communities are not monolithic. Outsiders often assume a shared solidarity, but geopolitical crises tend to deepen existing internal divisions about what intervention means, who is to blame and what liberation looks like. The people who should be each other’s community of grief often find themselves on opposite sides of an argument.

The result is that diaspora employees are frequently alone with this in every environment they occupy: at work, at home and within communities that might otherwise support them. That isolation is the specific nature of diaspora distress.

What organizations should do

Developing the capacity to recognize diaspora distress does not require expertise in geopolitics or new policy infrastructure. It requires language: the organizational decision to name what some employees are carrying as a recognized condition.

Institutional acknowledgement works differently than other supports because it removes the requirement that employees claim what they’re carrying. It gives them a name for what they have been living with.

In practice, this can take three forms: a leadership message acknowledging that some colleagues are carrying weight from events in their home regions; a line added to standard manager check-in prompts asking whether anything outside work is affecting employees; or an addition to existing employee assistance programs and benefits communications that names diaspora distress explicitly.

Rostam will open his phone again tonight at 2 a.m. In the morning, he will code-switch from the person who spent the night reading the news into the person his organization knows. What remains is whether his organization will adopt the language to see it, and whether his leaders will decide that seeing it is part of their job. The Conversation

Amir Bahman Radnejad, Chair and Associate Professor of Innovation and Marketing, Mount Royal University and Brenda Nguyen, Associate Professor, University of Lethbridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World

Wednesday, May 6, 2026

How Tech Growth Is Taking Shape Across the United States Today

By Mitchell Barrick

The U.S. tech landscape is no longer centered on Silicon Valley alone. Different tech sectors are thriving in regions all over the country, as this map (featured below) from Pulse Bot shows. It covers sectors such as computing infrastructure, custom programming services, software publishing, web search portals, and semiconductor manufacturing, scoring the counties in each sector on employment levels, established businesses, and wages to identify the beating heart of each field.
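As a rough illustration of how a composite county score like this might be built, here is a minimal sketch. Pulse Bot's actual methodology, weights, and figures are not published in this article, so the weights and county numbers below are purely hypothetical, assumed for illustration only.

```python
def score_counties(counties, weights=(0.5, 0.25, 0.25)):
    """Rank counties by a weighted index of employment, establishments, and wages.

    counties: list of (name, employment, establishments, avg_wage) tuples.
    Each metric is min-max normalized across counties so no single unit
    (people vs. dollars) dominates, then combined with the given weights.
    """
    cols = list(zip(*(c[1:] for c in counties)))  # one column per metric
    lo = [min(col) for col in cols]
    hi = [max(col) for col in cols]

    def norm(value, i):
        span = hi[i] - lo[i]
        return (value - lo[i]) / span if span else 0.0

    scored = [
        (name, round(sum(w * norm(v, i)
                         for i, (v, w) in enumerate(zip(vals, weights))), 3))
        for name, *vals in counties
    ]
    # Highest composite score first
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Illustrative figures only, not data from the report
counties = [
    ("Somerset, NJ", 12000, 310, 185000),
    ("Ada, ID",       9000, 240, 110000),
    ("Travis, TX",    8000, 400,  95000),
]
ranking = score_counties(counties)
```

With these made-up inputs, Somerset tops the ranking because it leads on the two most heavily weighted metrics; changing the weights changes the winner, which is why methodology choices like these matter when reading any "top county" map.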

The Giants of Tech are Still Growing in New Ways

California still accounts for 13.1% of tech job postings, and it remains tech's dominant hub. Thanks to Silicon Valley's AI startups and the enormous Google and Meta campuses, it will be hard for any area to loosen California's grip on the industry. However, other locations are rising as potential rivals. Believe it or not, Washington state leads the nation in tech employment share, with 9.3% of its workforce employed in the industry, closely followed by D.C. and Virginia. That doesn't mean California has lost its tech crowd: a high employment share doesn't mean Washington leads in growth rates, and as we delve into the details, the story gets more complex.

AI Leads a Tech Surge Spreading Inward from the Coasts

From its origins in the Bay Area, the tech industry has expanded into regions across the United States, driven by the rise of artificial intelligence. According to Deloitte’s 2025 Technology Fast 500 rankings, 7% of ranked companies were classified as artificial intelligence firms, and they recorded median revenue growth of 407% from 2021 to 2024. That growth wasn’t confined to the tech sector; it also showed up in professional services, finance, and manufacturing, proving that tech skills and work are spreading beyond their typical sectors. With that sector spread comes geographic spread: Northern Virginia is a hub for defense AI, Austin is a center of enterprise AI, and New York hosts plenty of financial AI work.

Computing Infrastructure in the Northeast and Idaho

Data centers power every streaming platform and online storefront. Somerset County, New Jersey leads the way in data services, with a big employment jump between 2023 and 2024; annual wages in the county rose by 400%. This part of New Jersey sits close to New York City and its financial tech sector, and a dense fiber network already in place can support data centers and computing infrastructure. Many miles away, Ada County, Idaho takes second place in this sector: affordable land in isolated locations makes the state an ideal home for data centers.

Custom Computer Programming Finds a Home in Virginia

Custom computer programming focuses on building solutions tailored to individual businesses. The highest concentration of firms in this sector is in Norfolk City, Virginia. Success there may be due to the Norfolk Innovation Corridor, an area filled with universities, hospitals, and other tech-centered businesses, all of which can benefit from custom programming. Tech startups were given tax incentives in the area, making it an attractive place for programmers who wanted to start a small business. Virginia hosts tech work across the state: D.C. and Northern Virginia are the heart of cybersecurity, with federal agencies located there.

Texas Takes Over Software Publishing

Software publishers create products for wide distribution. Bexar County, Texas has doubled employment in the sector and seen huge growth in recent years. The county is home to San Antonio, a thriving city with a lower cost of living than other Texas cities like Austin. Allegheny County, Pennsylvania has a high concentration of software publishers too, bolstered by the University of Pittsburgh, a renowned engineering school that produces many capable graduates ready to join the workforce.

Web Search Portals in the New York Area and Oregon

Web search companies depend on advertising markets, technical talent, and media to succeed, so it’s no surprise to see the New York area take the lead once again. Union and Essex counties in New Jersey are part of the New York City metro area, which provides a big talent pool. On the West Coast, Multnomah County, Oregon ranks highly in this sector as well; Portland’s blend of creative and tech industries is a natural fit for digital media and information services companies.

Semiconductors in the Lone Star State

So many tech devices are made possible by semiconductors, the backbone of computer chips. The AI boom has increased demand for microchips, and in the U.S. the leading manufacturers are housed in Williamson County, Texas, thanks to a $17 billion investment from Samsung to build a semiconductor manufacturing facility in Taylor. Wages in the county rose by 73% on the back of the plant. California takes its share of this market too: Nvidia, one of the world’s most valuable companies, built a semiconductor facility in Santa Clara County.

What the Map Teaches Us

Data shows that sector-specific growth is widely distributed across the nation. Policy environments, resources, talent pools, and other factors all shape the landscape and influence where certain sectors thrive. The map certainly challenges the misconception that Silicon Valley is the center of all things tech. Tech decentralization is sector-specific and it’s not uniform. No single region dominates all sectors.

The American tech landscape is evolving, with innovation hubs emerging far beyond the traditional confines of Silicon Valley. From custom computer programming in Virginia to software publishing in Texas and semiconductors in both Texas and California, the tech industry’s growth is increasingly regional and sector-driven. Local resources, educational institutions, and targeted incentives are shaping unique technology ecosystems. As new trends and demands arise, diverse regions across the U.S. are poised to lead in various tech sectors, proving that the future of American technology is both decentralized and dynamic, offering opportunities for communities nationwide.

Does AI-driven growth spread tech evenly across the U.S., or does it remain anchored in traditional centers?

About author: Mitch is a writer and researcher with over 15 years of experience. He has written for various industries over the years, but has been focused on tech writing and research recently. If he isn't putting together an article or analyzing data, you can find Mitch cooking away in the kitchen and trying new recipes.

Limitations: This analysis uses county-level employment data from the U.S. Bureau of Labor Statistics’ QCEW and a weighted index based on selected growth indicators across defined tech sectors. Results are limited to the 2023–2024 period and depend on sector classification choices and applied minimum employment and establishment thresholds, which exclude smaller counties.

Reviewed by Irfan Ahmad.



by Guest Contributor via Digital Information World