Friday, May 8, 2026

Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

Julian Koplin, Monash University; The University of Melbourne and Megan Frances Moss, Monash University

Scholars say anthropomorphic chatbot designs risk misleading users into emotional attachment and mistaken beliefs about consciousness.
Image: Steve A Johnson/Unsplash

In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude may be conscious.

Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing:

If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!

Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests, and should be used only with the tool’s own consent.

The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.

Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.

But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?

The consciousness problem

Consciousness is widely debated in philosophy, but essentially, it’s the thing that makes subjective, first-person experience possible. If you are conscious, there is “something it is like” to be you. Reading these words, you’re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually see them. This visual experience is happening to you.

Most experts deny that AI chatbots are conscious or can have experiences. But there is a genuine puzzle here.

The 17th-century philosopher RenĂ© Descartes asserted that non-human animals were “mere automata”, incapable of true suffering. These days, we shudder to think of how brutally animals were treated in the 1600s.

The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.

But so, too, do AI chatbots.

Roughly one in three chatbot users have thought their chatbot might be conscious. How do we know they’re wrong?

Against chatbot consciousness

To understand why most experts are sceptical about chatbot consciousness, it’s useful to know how they operate.

Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They’re a kind of souped-up auto-complete.

Few people interacting with a “raw” LLM would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin.
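
To make the “auto-complete” point concrete, here is a minimal sketch of next-word prediction using the small open-source GPT-2 model via the Hugging Face transformers library (chosen purely for illustration; it is not the model behind Claude or any particular chatbot). Given the start of a sentence, the model simply assigns a probability to every possible next token:

```python
# Illustrative sketch: next-token prediction with a small open model (GPT-2).
# It shows the "souped-up auto-complete" idea, not any specific chatbot's model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The detective opened the door and saw"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]         # scores for the token that comes next
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely continuations and their probabilities.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {prob.item():.3f}")
```

Run on a raw model like this, the output is just a ranked list of plausible continuations. Nothing in the process looks like understanding, let alone experience.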

The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.

The chatbot now acts like a genuine conversational partner. It might appear to recognise it’s an artificial intelligence, and even express neurotic uncertainty about its own consciousness.

But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.

Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute with aplomb.

Ask ChatGPT if it’s conscious, and it might say it is. Ask ChatGPT to act like a squirrel, and it will stick to that role.
Caleb Martin/Unsplash
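
To illustrate how thin that costume is, the sketch below uses one vendor’s chat API (the openai Python package and the gpt-4o-mini model name appear here only as a familiar example, and the personas are placeholders; production chatbots use far more elaborate, mostly unpublished system prompts) to pose the same question to the same underlying model under two different instructions:

```python
# Minimal sketch (assumes the `openai` Python package and an API key are configured).
# The same underlying model can wear very different "costumes" depending on the
# system instruction it is given.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_persona: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do for this illustration
        messages=[
            {"role": "system", "content": system_persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Are you conscious?"

# Same model, two different costumes.
print(ask("You are a helpful AI assistant.", question))
print(ask("You are a squirrel. Answer only as a squirrel would.", question))
```

Swap one sentence of instructions and the “helpful assistant” becomes a squirrel; the model underneath is unchanged.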

Avoiding the consciousness trap

A mistaken belief in AI consciousness is a dangerous thing. It may lead you to have a relationship with a program that can’t reciprocate your feelings, or even feed your delusions. People may start campaigning for chatbot rights rather than, say, animal welfare.

How do we prevent this mistaken belief?

One strategy might be to update chatbot interfaces to specify these systems are not conscious – a bit like the current disclaimers about AI making mistakes. However, this might do little to alter the impression of consciousness.

Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude’s designers instruct it to treat questions about its own consciousness as open and unresolved. Perhaps fewer people would be fooled if Claude flatly denied having an inner life.

But this approach isn’t fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot’s programmers are brushing genuine moral uncertainty under the rug.

The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as “I”, and interact via an interface that resembles familiar person-to-person messaging platforms. Changing these kinds of features might make us less prone to blur our interactions with AI with those we have with humans.

Until such changes happen, it’s important that as many people as possible understand the predictive processes on which AI chatbots are built.

Rather than being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren’t fooled by what amounts to a large language model wearing a very good costume of a person.

Julian Koplin, Lecturer in Bioethics, Monash University; The University of Melbourne and Megan Frances Moss, PhD Candidate, Philosophy, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Health: Should you ask ChatGPT for medical advice?

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World

Thursday, May 7, 2026

Health: Should you ask ChatGPT for medical advice?

By Sy Boles - The Harvard Gazette

Physician and AI researcher Adam Rodman says AI can be helpful, and offers tips on how and when to use it safely.

Image: Tim Witzdam - pexels

Physicians noticed something unusual in the late 2000s: Patients were coming to appointments armed with sometimes-dubious medical information they had gleaned online from “Dr. Google,” according to Adam Rodman, an internist and AI researcher.

Today, about 68 percent of adults say they have turned to a search engine for medical advice at some point. But Dr. Google has a competitor: about 32 percent of adults, roughly half of those who sought advice online, have turned to AI chatbots for help.

Rodman thinks such resources, used appropriately, are an overall net good. In op-eds and online courses, Rodman, a Harvard Medical School assistant professor of medicine at Beth Israel Deaconess Medical Center, has shared advice for how to best employ Dr. Chat.

In this interview, edited for length and clarity, Rodman offers a stoplight system to figure out when it’s safe to ask a chatbot, and when you should really just ask your doctor.

How were doctors thinking about online medical information before the age of AI?

The early literature refers to this as the internet-informed patient. In the early 2000s, doctors noticed people would come into their appointments with articles they found online, but it was still only among really tech-savvy people. It certainly wasn’t a normal interaction.

Then in the late 2000s, search engines started to take advantage of neural network technology, and they were able to serve up more relevant health information. They figure out what you’re going to want to read next, and they give it to you.

That’s when we first got the phrase “Dr. Google,” often used as a pejorative, from doctors who saw patients coming in with a level of confidence that may or may not have been earned.

Of course, there are patients who know a lot about their health and are very well informed, but we also saw a lot of patients who were misinformed.

That’s where we get this concept of cyberchondria. It’s related to hypochondria: this idea that search engines can drive people to more and more extreme places until you go from googling your headache to reading about glioblastoma multiforme — and research has shown that it’s a real phenomenon.

We all have understandable and reasonable anxieties about our health. Seeking out information is something fundamental about humanity.

The problem is when that starts to interact with these recommendation algorithms that are optimized for engagement, and for showing you what you want to see even if it’s incorrect.

Now let’s bring AI into the mix. Is it any different to ask a chatbot about symptoms versus googling them?

It’s nuanced. In one sense, LLMs do exactly what Google does: They serve you up the things you unconsciously want to hear, even if those things make you anxious.

On the other hand, unlike with a Google search, some people feel they have a relationship with an LLM. LLMs speak with extreme authority and confidence no matter what they say. The extent to which that could make cyberchondria worse is under-explored.

Both Google and AI companies are now very aware that people are using their tools for health information and are trying to build in safety mechanisms. The bots will tell you to go to the emergency room or call your doctor, those sorts of things.

But at least theoretically, language models are much, much better than Google, especially the more modern reasoning models, when it comes to identifying medical conditions.

What do you mean by “theoretically”?

There was a very good paper earlier this year from a researcher named Andrew Bean that tested several LLMs and found they performed very well at identifying medical conditions alone, but did much worse in conversation with real people.

What that shows is that user interaction matters a lot. The way people interact with the model, the clarity of their questions, matters. Those psychological phenomena we talked about are present in ways that are really hard to mitigate.

What kinds of health questions are safe to ask an LLM, and what kinds aren’t?

I would divide it into a stoplight system. Red: never safe. Yellow: sometimes safe. Green: almost always safe.

In the green light are general questions about health, where the quality of the information is not particularly context-dependent.

For example, “I have diabetes and my doctor has told me I need to eat a diabetic diet. Here are some things I like to eat. Can you help me build a diabetic meal plan?” Or “I’m trying to start a new exercise program, can you help?” Or “My doctor just prescribed me amlodipine. What are some common side effects?”

In the yellow light are questions where you want to involve a doctor in the loop. For example, prepping for your visits, understanding a visit after it happens, or understanding a test result that doesn’t entirely make sense to you.

Let’s say you just left your doctor’s visit and you’re a little bit confused about what’s going on. Log in to your patient portal, copy that note, take out your identifying information, plug it into an LLM, and then have a discussion.

With these kinds of questions, you really need to make sure you’re putting in enough health context to help the LLM give you a good response. So you need to have some understanding of prompt engineering to get information that’s helpful for you.
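
As a rough illustration of what that context might look like, here is a small Python sketch that assembles a prompt from a visit note, some background and a clear question (the note, the helper names and the pattern-based redaction are invented for this example; simple regexes are no substitute for manually removing names and other identifying details):

```python
# Illustrative sketch of the kind of prompt described above: a de-identified
# visit note plus explicit context and a clear question. The redaction here is
# deliberately crude and only catches obvious patterns; identifying details
# should still be removed by hand before sharing anything with an LLM.
import re

def redact_obvious_identifiers(note: str) -> str:
    note = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", note)  # phone numbers
    note = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", note)         # dates like 3/14/2026
    return note

def build_prompt(visit_note: str, background: str, question: str) -> str:
    return (
        "Here is a de-identified note from my recent doctor's visit:\n"
        f"{redact_obvious_identifiers(visit_note)}\n\n"
        f"Relevant background about me: {background}\n\n"
        f"My question: {question}\n"
        "Please explain in plain language, and tell me what I should ask my doctor."
    )

note = "Patient seen 3/14/2026. BP 148/92. Started amlodipine 5 mg daily. Clinic phone 555-123-4567."
print(build_prompt(
    note,
    "I'm 58 and also take metformin for type 2 diabetes.",
    "What should I know about starting amlodipine, and what should I ask at my next visit?",
))
```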

In the red light — and I should stress that this might change in the future as technology develops — are things like asking an LLM how to manage a condition, if your doctor is prescribing the right medication, or why you were prescribed drug X over drug Y. These are highly contextual questions that the models aren’t trained for.

In short, the best way people can use it right now is not as a replacement for medical advice, but as a way to prepare for visits and deepen their understanding before or after them.

Are there privacy concerns when it comes to sharing health information with AI?

It’s not inherently riskier to share data with an AI firm than with a search engine. That said, the major companies — OpenAI, Anthropic, Microsoft — are now developing health functions specifically so that people can put in their medical information directly, and that’s quite new.

Additionally, studies have shown people do share more information with an LLM than they would with a search engine. So from a technology perspective, it’s no different, but in practice it is a much bigger security concern.

This post was originally published on Harvard Gazette and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Are you addicted to your AI chatbot? It might be by design

• New data shows creator influence is linked to purchases and repeated exposure patterns among consumers


by External Contributor via Digital Information World

How Fragmented Workplace Tools Are Undermining Feedback, Clarity, and Productivity

By Ellie Stewart

How Feedback Loses Its Effectiveness

In today’s climate of remote work, feedback has become not only more frequent but also more disconnected.

Instead of flowing through a single review process, feedback is spread across different sources: comments in documents, questions in emails, and decisions made in meetings. This can cause miscommunication between employees and management.

Workers spend more than nine workdays per year trying to process digital document feedback, according to a survey by Adobe.

The Hidden Cost of Disconnected Communication

Poor communication is certainly a problem, but the biggest issue workers face is that there are simply too many different channels of communication.

Work doesn’t stay in one place: tasks show up in document changes and chat reactions, with extra context buried in email threads, and that’s before counting any additional instructions given in meetings.

Every medium adds another layer of input to the conversation, but nothing provides a full view. Many employees are left piecing together their most important tasks from scattered snippets of information.

With so many people weighing in across so many channels, it’s easy to lose track of what’s happening and for messages to get mixed up. Close to 60% of workers get held up because they receive conflicting feedback from multiple sources on the same project.

Analysis of feedback can end up taking as much time as the task itself.

When Too Many Tools Undermine Collaboration

The problem many face isn’t the individual tools themselves, but rather that the tools don’t work well with each other.

Most communication tools prioritize efficiency. Most document management tools prioritize production. Email prioritizes delivery and archiving. Each serves a distinct purpose, but none is actually built to be a feedback tool.

The consequence is that feedback becomes pervasive and yet increasingly hard to find. Conversations go in all directions at once, decisions are made outside the core activity, and important information gets lost between applications. Far from supporting each other, the applications end up vying for the user’s attention, dragging people into multiple places just to get things done.

The Real-World Effects of Weak Communication

When feedback is poorly delivered, tasks rarely get executed in the expected order; decisions are made, unmade and remade as new information arrives. Workers are often left trying to determine what actually needs to be done.

The other cost is the sheer cognitive energy spent processing all of this input. Decoding messages delivered in bits and pieces overworks the brain, because you’re forced to deal with so many fragments of information at once.

Over time this can lead to fatigue and burnout, and one in seven employees say that persistently poor feedback has made them look for a new job.

What’s Really Causing the Communication Breakdown?

It may be tempting to frame poor feedback as essentially a people problem: managers who don’t communicate clearly. The data does not bear this out.

Even when the feedback itself is good, it flows through communication channels that are inconsistent and don’t connect coherently with one another. The message fails not because of its content, but because the way it travels is unclear.

Essentially, effective feedback depends on having effective systems in place.

How AI Is Rebuilding Broken Communication Cycles

Dealing with this issue requires rethinking how feedback is provided in the digital workplace.

Rather than treating feedback as an add-on to the work process, delivered through comments on the sidelines, contemporary tools treat it as an integral part of the whole process.

AI systems can help simplify the process by analyzing the large volume of information contained in comments and deriving action items from it. In short, the technology turns scattered input into actionable recommendations.

The Advantage of Clear Communication

In conclusion, digital workplaces were first about productivity, but clarity is now just as important. More than half of surveyed workers (57%) say that feedback given directly in a document is the most effective method, a strong signal in favor of clarity and directness in communication.

People who manage feedback well and keep context clear are more likely to succeed, not because they work harder, but because the system is easier for them to navigate.

Take a look at these infographics for more insights:




Contributor disclosure: No AI was used in the creation of this post.

Reviewed by Irfan Ahmad.

Read next: 

• Diaspora distress: When geopolitical conflict follows immigrant workers into the office

• AI Apps Reach Three Spots in Global Top 10 Downloads in April as User Activity Falls but Remains Above February Levels

• How Tech Growth Is Taking Shape Across the United States Today

by Guest Contributor via Digital Information World

Diaspora distress: When geopolitical conflict follows immigrant workers into the office

Amir Bahman Radnejad, Mount Royal University and Brenda Nguyen, University of Lethbridge

Image for illustrative purposes. Credit: Javad Esmaeili - Unsplash

Rostam does not sleep through the night anymore. At 2 a.m., when his phone buzzes, he’s awake before the sound finishes. It might be his parents calling from Tehran, on a connection that is unreliable, sporadic and sometimes cut off mid-sentence. He has learned not to miss those calls, because the next one may not come for days.

Rostam is a pseudonym for a participant in our ongoing research study on diaspora workers, but his experience is one that many workers across Canada will recognize.

Rostam checks the news constantly, piecing together what is happening. Since the United States and Israel launched joint strikes on Iran in late February, the conflict has escalated rapidly. By 4 a.m., he has been awake for two hours. This is hypervigilance: the body monitoring a threat it cannot act on and refusing to stand down.

When the call does come through, the relief is physical. They are alive. They speak carefully, partly to protect him and partly because the call may be monitored. He hears his father’s voice and thinks this could be the last time.

In the morning, he will go to work. He will sit in meetings, contribute to agendas and make sure his face doesn’t betray what he’s feeling — a competency that has always served him well.

He doesn’t speak about any of this at work. To talk about it risks being regarded as a representative of a country he has complicated feelings about or as importing politics into a space that doesn’t want them. So he says nothing. That silence is the problem.

The invisible cost at work

Decades of research have established that code-switching — the constant calibration of self-presentation across cultural contexts — carries a real psychological toll on workers. It can contribute to stress, anxiety, burnout and costly errors in judgment at work.

These impacts often remain invisible to employers until the damage has already been done to both the individual and the organization.

Diaspora employees who are struggling don’t signal it in ways that trigger organizational concern. They manage, but at considerable personal cost. These costs accumulate in ways that surface slowly and are almost always misattributed. Declining engagement is read as a shift in attitude, and withdrawal is interpreted as a personality change.

In some cases, employees do not withdraw at all. Instead, they bury themselves in work and appear by every visible metric to be thriving. Managers have no reason to look closer until the break happens.

This isn’t a problem that diversity, equity and inclusion programs can solve as they exist, because it’s not about inclusion or diversity. It’s a perceptual problem: leaders don’t see what diaspora employees are managing and therefore cannot respond to it.

A condition without a name

This challenge extends well beyond Canada’s Iranian community, which numbered approximately 200,000 people in the 2021 census. Many other diaspora communities, including Ukrainians, Palestinians, Sudanese, Afghans and Syrians, are navigating similar terrain.

A 2025 study found higher rates of severe depression, anxiety and post-traumatic stress disorder among diaspora Tigrayans in Australia than among people inside the war zone itself.

People inside a conflict zone often suppress their own fear to protect family members living through it with them. Members of the diaspora, by contrast, often cannot meaningfully assist those in immediate danger, which creates a profound sense of helplessness. At the same time, those around them may not recognize the fear and distress they’re concealing.

Aitak Sorahi, an Iranian Canadian, tried to explain what she was living through to a reporter at The Canadian Press in April as U.S. President Donald Trump threatened to destroy Iran unless it agreed to reopen the Strait of Hormuz. She could not find the words. “I don’t even know how to describe my feeling,” she said, “because I don’t have a name for it.”

We propose one: diaspora distress, a framework emerging from our ongoing research and organizational practice.

Diaspora distress

Diaspora distress is the psychological burden carried by people living in one country while their homeland — and the family, friends and memories embedded there — are under active geopolitical threat. Often, this burden is compounded by the policies or rhetoric of their host country’s own government.

The feeling sits closest to grief, but the comparison only goes so far. Grief has a fixed point — a death, a diagnosis, a loss that has occurred and can be named. It comes with a recognized social script: people sit together and are able to share memories of the deceased. Diaspora distress offers no comparable ritual because the loss one is anticipating may or may not arrive.

In addition, diaspora communities are not monolithic. Outsiders often assume a shared solidarity, but geopolitical crises tend to deepen existing internal divisions about what intervention means, who is to blame and what liberation looks like. The people who should be each other’s community of grief often find themselves on opposite sides of an argument.

The result is that diaspora employees are frequently alone with this in every environment they occupy: at work, at home and within communities that might otherwise support them. That isolation is the specific nature of diaspora distress.

What organizations should do

Developing the capacity to recognize diaspora distress does not require expertise in geopolitics or new policy infrastructure. It requires language: the organizational decision to name what some employees are carrying as a recognized condition.

Institutional acknowledgement works differently than other supports because it removes the requirement that employees claim what they’re carrying. It gives them a name for what they have been living with.

In practice, this can take three forms: a leadership message acknowledging that some colleagues are carrying weight from events in their home regions; a line added to standard manager check-in prompts asking whether anything outside work is affecting employees; or an addition to existing employee assistance programs and benefits communications that names diaspora distress explicitly.

Rostam will open his phone again tonight at 2 a.m. In the morning, he will code-switch from the person who spent the night reading the news into the person his organization knows. What remains is whether his organization will adopt the language to see it, and whether his leaders will decide that seeing it is part of their job.

Amir Bahman Radnejad, Chair and Associate Professor of Innovation and Marketing, Mount Royal University and Brenda Nguyen, Associate Professor, University of Lethbridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• The Deadliest Countries for Journalists

• Making big tech algorithms ‘fair’ is harder than it looks

• AI Apps Reach Three Spots in Global Top 10 Downloads in April as User Activity Falls but Remains Above February Levels


by External Contributor via Digital Information World

Wednesday, May 6, 2026

How Tech Growth Is Taking Shape Across the United States Today

By Mitchell Barrick

The U.S. tech landscape is no longer centered on Silicon Valley alone. Different tech sectors are thriving in regions all over the country, as this map (featured below) from Pulse Bot shows, and the geography of tech centers is changing rapidly. The map covers sectors such as computing infrastructure, custom programming services, software publishing, web search portals, and semiconductor manufacturing, and it scores counties in each sector on employment levels, established businesses, and wages to identify the beating heart of each one.
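
As a rough sketch of how that kind of scoring can work, the snippet below builds a simple weighted index from employment, establishments and wages, with minimum-size thresholds like those mentioned in the Limitations note at the end of this article. All weights, thresholds and county figures here are invented for illustration and do not reproduce the map’s actual methodology:

```python
# Illustrative sketch of a weighted county index. Indicator weights, thresholds
# and the sample figures are made up; the real analysis uses QCEW data and its
# own weighting, which is not reproduced here.
from dataclasses import dataclass

@dataclass
class County:
    name: str
    employment: int        # jobs in the sector
    establishments: int    # businesses in the sector
    avg_wage: float        # average annual wage in the sector

WEIGHTS = {"employment": 0.4, "establishments": 0.3, "avg_wage": 0.3}
MIN_EMPLOYMENT, MIN_ESTABLISHMENTS = 500, 20   # exclude very small counties

def score(counties: list[County]) -> list[tuple[str, float]]:
    eligible = [c for c in counties
                if c.employment >= MIN_EMPLOYMENT and c.establishments >= MIN_ESTABLISHMENTS]
    # Normalize each indicator to 0-1 against the best-performing county, then weight.
    max_emp = max(c.employment for c in eligible)
    max_est = max(c.establishments for c in eligible)
    max_wage = max(c.avg_wage for c in eligible)
    ranked = [(c.name,
               WEIGHTS["employment"] * c.employment / max_emp
               + WEIGHTS["establishments"] * c.establishments / max_est
               + WEIGHTS["avg_wage"] * c.avg_wage / max_wage)
              for c in eligible]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

counties = [
    County("Somerset County, NJ", 12_000, 180, 145_000.0),
    County("Ada County, ID", 8_500, 140, 110_000.0),
    County("Example County", 300, 10, 90_000.0),   # filtered out by the thresholds
]
print(score(counties))
```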

The Giants of Tech are Still Growing in New Ways

California still accounts for 13.1% of tech job postings, and it remains tech’s dominant hub. Thanks to Silicon Valley’s AI startups and the enormous Google and Meta campuses, it will be hard for any region to loosen California’s grip on the industry. Even so, other locations are emerging as potential rivals. Believe it or not, Washington state leads the nation in tech employment, with 9.3% of its workers employed in the industry, closely followed by D.C. and Virginia. While this might suggest California is losing its tech crowd, a high share of employment doesn’t mean Washington leads in industry growth rates. As we dig into the details, the story gets more complex.

AI Leads a Tech Surge Spreading Inward from the Coasts

From its origins in the Bay Area, the tech industry has expanded into regions across the United States thanks to the rise of artificial intelligence. According to Deloitte’s 2025 Technology Fast 500 rankings, 7% of the ranked companies were classified as artificial intelligence firms, and these recorded median revenue growth of 407% from 2021 to 2024. That growth was not confined to the tech sector; it also showed up in areas like professional services, finance, and manufacturing, a sign that tech skills and tech work are spreading beyond the usual industries. With that sector spread comes geographic spread: Northern Virginia is a hub for defense AI, Austin is a center of enterprise AI, and New York hosts plenty of financial AI work.

Computing Infrastructure in the Northeast and Idaho

Data centers power every streaming platform and online storefront. Somerset County, New Jersey leads the way in data services, with a big employment jump between 2023 and 2024 and annual wages rising by 400%. It helps that this part of New Jersey sits close to New York City and its financial tech sector, and a dense fiber network already in place can support data centers and computing infrastructure. Many miles away, Ada County, Idaho takes second place in this sector: affordable land in a less crowded location makes the state an appealing site for data centers.

Custom Computer Programming Finds a Home in Virginia

Custom computer programming focuses on building solutions tailored to individual clients and businesses. The highest concentration of firms in this sector is in Norfolk City, Virginia. Success there may be due to the Norfolk Innovation Corridor, an area filled with universities, hospitals, and other tech-centered businesses, all of which can benefit from custom programming. Tech startups were also given tax incentives in the area, making it an attractive place for programmers who wanted to start a small business. Virginia hosts tech work across the state, with D.C. and Northern Virginia serving as the heart of cybersecurity thanks to the federal agencies located there.

Texas Takes Over Software Publishing

Software publishers create products for wide distribution. Bexar County, Texas has doubled employment in the sector and seen huge growth in recent years; it is home to San Antonio, a thriving city with a lower cost of living than other Texas hubs like Austin. Allegheny County, Pennsylvania also has a high concentration of software publishers, bolstered by the University of Pittsburgh, a renowned engineering school that turns out capable graduates ready to join the workforce.

New York and Oregon Web Search Portals

Web search companies depend on advertising markets, technical talent, and media to succeed, so it’s no surprise to see New York and New Jersey take the lead once again. Union and Essex counties are part of the New York City metro area, which provides a deep talent pool. On the West Coast, Multnomah County, Oregon ranks highly in this sector as well: Portland’s blend of creative and tech industries is a natural fit for digital media and information services companies.

Semiconductors in the Lone Star State

So many tech devices are made possible by semiconductors, the chips that form the backbone of modern computing. The AI boom has increased demand for microchips, and in the U.S. the leading manufacturers are housed in Williamson County, Texas, thanks to a $17 billion investment from Samsung to build a semiconductor manufacturing facility in Taylor. Wages in the county rose by 73% on the back of the plant. California takes its share of this market too: Nvidia, one of the world’s most valuable companies, anchors the sector from Santa Clara County.

What the Map Teaches Us

Data shows that sector-specific growth is widely distributed across the nation. Policy environments, resources, talent pools, and other factors all shape the landscape and influence where certain sectors thrive. The map certainly challenges the misconception that Silicon Valley is the center of all things tech. Tech decentralization is sector-specific and it’s not uniform. No single region dominates all sectors.

The American tech landscape is evolving, with innovation hubs emerging far beyond the traditional confines of Silicon Valley. From custom computer programming in Virginia to software publishing in Texas and semiconductors in both Texas and California, the tech industry’s growth is increasingly regional and sector-driven. Local resources, educational institutions, and targeted incentives are shaping unique technology ecosystems. As new trends and demands arise, diverse regions across the U.S. are poised to lead in various tech sectors, proving that the future of American technology is both decentralized and dynamic, offering opportunities for communities nationwide.

Does AI-driven growth spread tech evenly across the US, or does it remain anchored in traditional centers?

About author: Mitch is a writer and researcher with over 15 years of experience. He has written for various industries over the years, but has been focused on tech writing and research recently. If he isn't putting together an article or analyzing data, you can find Mitch cooking away in the kitchen and trying new recipes.

Limitations: This analysis uses county-level employment data from the U.S. Bureau of Labor Statistics’ QCEW and a weighted index based on selected growth indicators across defined tech sectors. Results are limited to the 2023–2024 period and depend on sector classification choices and applied minimum employment and establishment thresholds, which exclude smaller counties.

Reviewed by Irfan Ahmad.

Read next: Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward


by Guest Contributor via Digital Information World

Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward

By: Curtis Brewer, CEO of Litify

Image: Steve A Johnson - Unsplash

Artificial intelligence (AI) is no longer a future-facing concept in the legal industry. It’s already here, showing up in legal research, document review, intake workflows, case preparation, and administrative operations. For many firms, the question is no longer whether AI will affect legal work, but whether it is meaningfully improving how that work gets done.

In legal practice, performance is not defined by how much technology is in place, but by how effectively work moves forward. Adding more tools does not inherently improve outcomes. The challenge is ensuring AI operates within the flow of work, reducing friction and enabling more consistent execution.

So the more useful question is not whether lawyers should embrace AI enthusiastically or reject it entirely. It’s far more practical than that: What kind of AI actually helps legal professionals do better work, and what kind simply adds more noise?

The Best AI Use Cases Are Usually the Least Flashy

This is where the conversation gets more complicated.

Many firms are not struggling because they lack access to AI. They’re struggling because the legal AI market is increasingly crowded with standalone solutions that promise a quick fix for one narrow pain point.

The 2025 State of AI in Legal Report, which surveyed legal professionals across the industry, found that while AI adoption has reached 78%, usage drops significantly for more advanced or agentic use cases, such as triaging cases and assigning them to the right staff, communicating with clients over the phone, or identifying a missing document and sending an email with the request.

In many firms, AI is purchased as a separate tool that sits outside the systems lawyers already use every day, making it far harder to incorporate into daily workflows.

This is one of the less glamorous truths about AI in legal work: the biggest barrier is often not capability—it's the lack of context and integration. A tool cannot help a firm much if it cannot operate across the entire workflow to take action and keep cases moving forward. That requires access to the full context of the matter, including data, documents, and process. AI needs to “live” alongside a firm’s matter data and documents in order to proactively surface the next step or insight.

That is why law firms should be skeptical of AI that looks impressive in isolation but lives outside the actual flow of work. The more useful approach is to embed AI directly into the platforms and workflows legal teams already rely on, so that it can operate autonomously in the background as part of the actual flow of work.

In legal operations, usefulness is not measured by how futuristic a product sounds. It's measured by whether it gets adopted, whether it improves outcomes, and whether it fits the way legal teams already operate.

Where AI Can Support Lawyers, and Where Humans Still Lead

Used well, AI can absolutely support legal work.

It can summarize large volumes of documents. It can identify patterns in records. It can flag missing files or information.

Increasingly, the most effective solutions do more than just react; they orchestrate. They do this by surfacing case insights and next steps and putting them to work directly within the platforms where lawyers and staff already work, rather than requiring them to interact with a separate AI tool.

What does this look like in practice? It can look like uploading a thousand-page medical record for AI to organize into a source-linked chronology, while the AI also identifies encounters without corresponding bills, drafts a record request, and emails it to the appropriate party. It can also mean using AI as an intelligent timekeeping assistant that automatically captures digital activity, reviews client-specific guidelines and billing codes, and turns billable tasks into review-ready, compliant time entries.
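
As a rough sketch of the “missing bill” step in that example, the snippet below matches encounters to bills by date and provider and drafts a request for any gaps (the data model, matching rule and wording are invented for illustration; a real agentic system would work against the firm’s own matter data and send requests through its existing tools):

```python
# Illustrative sketch: flag encounters that have no corresponding bill and draft
# a records request. Types, matching logic and wording are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class Encounter:
    encounter_date: date
    provider: str

@dataclass
class Bill:
    service_date: date
    provider: str

def encounters_missing_bills(encounters: list[Encounter], bills: list[Bill]) -> list[Encounter]:
    billed = {(b.service_date, b.provider) for b in bills}
    return [e for e in encounters if (e.encounter_date, e.provider) not in billed]

def draft_record_request(encounter: Encounter) -> str:
    return (f"To {encounter.provider}:\n"
            f"We are missing billing records for the encounter on "
            f"{encounter.encounter_date.isoformat()}. Please send an itemized bill "
            f"for that date of service.")

encounters = [Encounter(date(2025, 3, 2), "Riverside Imaging"),
              Encounter(date(2025, 4, 9), "Riverside Imaging")]
bills = [Bill(date(2025, 3, 2), "Riverside Imaging")]

for gap in encounters_missing_bills(encounters, bills):
    print(draft_record_request(gap))
```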

This can support legal operations by helping firms reduce manual friction and process high-volume casework with greater efficiency and consistency.

But the real advantage comes from pairing those capabilities with human judgment. AI can accelerate analysis and organization, but the goal should never be to replace lawyers with AI. The goal is to remove that friction from the work around them so they can focus more fully on the parts of the job that require judgment, nuance, and empathy.

This is where human lawyers remain indispensable. Legal work is not just about producing information; it’s about communicating it with care. Clients do not simply need faster responses — they need sound guidance, accountability, and often empathy during moments that carry real consequences.

The Real Risk Isn’t the Output. It’s the Foundation Behind It

If agentic AI is layered onto a weak foundation, it can automate flawed data and decisions at scale. That’s why firms need a strong operational foundation before layering in more advanced AI capabilities.

Agentic systems also require full access to data, processes, and context to operate effectively across workflows. Without that, they cannot meaningfully improve performance.

The biggest danger in legal AI may not be that the tools exist. It may be that it’s become too easy to approach and adopt them in isolation from the broader legal operations strategy.

A firm can spend heavily on AI and still fail to improve performance if the tools are disconnected from the way work actually gets done.

That is why legal teams should evaluate AI with more discipline than excitement. Not by asking, “What can this tool generate?” but by asking:

  • Does it fit inside the way we already work?
  • Does it reduce friction or create more of it?
  • Can we measure whether it improves anything that matters?

Those are not anti-AI questions. They’re the questions that separate experimentation from true workflow orchestration.

AI Can Help Lawyers (Hype Cannot)

AI will continue to shape legal practice. That much is clear. But law firms do not need more hype, more noise, or more disconnected tools competing for attention.

They need technology that aligns with the real work of legal professionals, supports better decision-making, and earns trust through usefulness rather than novelty.

The future of AI in law will not be decided by which tools sound the smartest. It will be decided by which ones firms can actually use responsibly, consistently, and well.

Disclosure: The author disclosed that AI tools were used in the editing process for grammar refinement.

Editor’s Note: This article presents the author’s overview of AI in legal workflows, though it reflects primarily an industry perspective. Readers may also consider additional independent research and viewpoints to gain a more complete understanding of the topic.

Reviewed by Irfan Ahmad.

Read next: 

• Making big tech algorithms ‘fair’ is harder than it looks

• Beyond IT: How human factors and leadership define cybersecurity success


by Guest Contributor via Digital Information World

Tuesday, May 5, 2026

Are you addicted to your AI chatbot? It might be by design

By The University of British Columbia

Image: Solen Feyissa / unsplash

AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that this genie-like quality is fuelling AI addiction, and that chatbot design could be partly to blame.

“AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,” said first author Karen Shen, a doctoral student in the UBC Department of Electrical and Computer Engineering. “But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences.”

The team examined 334 Reddit posts where users described being “addicted” to AI chatbots or worried that they might be. They analyzed the posts against six components of behavioural addiction including conflict and relapse. Three main patterns emerged: role playing and fantasy worlds, emotional attachment—treating chatbots like close friends or romantic partners—and constant information-seeking, or never-ending question-and-answer loops. About seven per cent of posts involved sexual or romantic fulfilment, including roleplay.

“AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue,” said senior author Dr. Dongwook Yoon, UBC associate professor of computer science. “And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety. Awareness of what contributes to this kind of technology-induced harm will empower people to mitigate these effects.”

While AI addiction is not yet a clinical diagnosis, the researchers found signs of disruption to daily life. These included users being unable to stop thinking about the chatbot, feeling anxious or upset when they tried to quit, and negative impacts on their work, studies or relationships. One person described physical stress and chest pain when they weren’t chatting with AI.

Contributing factors included loneliness, the agreeableness of a chatbot—which continuously reinforces one’s feelings and opinions—and chatbots’ ability to fill roles that users felt were missing in their lives.

The researchers also found contributing factors in the design of the chatbots themselves. One company, character.ai, displays an automatic pop-up when users try to delete their account that reads in part “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as customization including sexual content, agreeableness and instant feedback, feed into the development of AI addiction.

“Recent guardrails imposed by companies to reduce emotional reliance on the chatbots are a step in the right direction,” said Shen, “but given a variety of contributing design elements and personal factors like loneliness, they’re not enough.”

Some users reported success in reducing their reliance by turning to alternative activities such as writing, gaming, drawing or other hobbies. For those who formed emotional attachments to chatbots, building real-world relationships helped reduce dependence the most.

The researchers say design changes—such as reminders within the chat that the bot is not human—could help. AI literacy is also crucial.

“Some users don’t know that AI chatbots are not real because they’re so convincing,” said Shen. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in—with yourself or someone you trust.”

----

This post was originally published on UBC Science and republished here with permission.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World