Tuesday, January 27, 2026

Most Americans Don’t Know About Web Hosting, But Know They Want It Cheap

Written by Derick Migliacci. Edited by Asim BN. Reviewed by Ayaz Khan.

Web hosting knowledge in America

Web hosting is one of the most foundational pieces of the internet: every site needs a hosting provider to keep its pages running and available to visitors. Web hosting currently supports billions of websites and is a multi-billion-dollar industry that is expected to keep growing rapidly.

Given that hosting is so foundational to the internet, you might assume that a majority of internet-dependent Americans know a lot about it. But according to cybersecurity website All About Cookies, that may not be the case.

A 2025 All About Cookies survey found that only 40% of Americans have a general idea of the concept, while 24% reported they don't know the meaning of web hosting.

Survey finds most Americans lack web hosting knowledge, prioritizing affordability and ease when building websites.

Additionally, the All About Cookies survey found that around one-fourth (25%) of respondents couldn’t correctly identify the essential functions a web host performs. While most correctly identified that a web host stores website files and data, manages domain names, and makes a website visible to online users, some confused web hosting responsibilities with tasks that align more with web design.


As evidenced by the graphic above, 27% of respondents incorrectly believed that web hosting involves design work, which is typically handled by a dedicated web or UX designer. Others incorrectly identified cybersecurity measures, such as protecting sites from viruses or installing antivirus software, as tasks a web host would handle. Alongside the incorrect answers, 6% of survey participants admitted they were not fully sure what web hosting entails.

Price is the #1 factor for people when considering a web host

Many individuals build websites for personal brands, their own interests, or a variety of other reasons and require a host to get these websites live.

Survey respondents put affordability at the forefront of their minds when making web hosting decisions, as evidenced by the graphic below.


Ease of setup was the second most important priority at 46%, followed by security and backups (39%).

Nearly one-third of Americans have tried building a website

With affordability and ease of setup at the top of users’ lists, it’s no surprise that many turn to popular website builders such as Squarespace or Wix. These tools give users an easy setup without having to spend the money or resources to outsource the project. According to the All About Cookies survey, 74% of respondents who tried building a website on their own relied on these tools.


Between their ease of use and heavy marketing, website builders top the list for people getting a site up and running. In exchange for that ease and affordability, however, these users give up the full control that managing their own web hosting would provide.

That control trade-off between web hosting and website builders is likely understood only by the 40% of respondents with general web hosting knowledge. According to data pulled from the All About Cookies survey, 32% of Americans have actually tried building a website at some point, some even attempting to code one from scratch.

Web Building For Small Businesses

A website is a huge part of building a small business. In the current age of online shopping and digital marketing, it’s almost impossible to avoid either hiring someone to create a site for your business or attempting to build and host one yourself.

With many small businesses having limited funds when starting out, a majority build and manage their own site in-house: 65% of small business owners don’t outsource their websites, opting to build one themselves due to a lack of funds or resources.


When it comes to how much small business owners pay for web building, they’re willing to spend $250 annually, on average. Of the small business owners surveyed, 68% reported spending between $50 and $250 annually.

Final Thoughts

Web hosting and building are things users need in today’s digital age, and these findings show that small business owners and individuals alike recognize that. The disconnect lies in people’s limited understanding of the topic, paired with their desire to host and build with affordability as the main priority, regardless of why they want a site in the first place.

A fuller knowledge of the web hosting and building landscape could benefit small business owners and individuals alike, regardless of whether they choose to build, host, or code a site themselves or employ someone to do so. Users who combine an understanding of web hosting with their wants and needs can make more informed decisions that give them a leg up over their less experienced peers.

About author:
Derick Migliacci is a Digital PR Strategist for AllAboutCookies.org. He brings 3 years of experience in the PR world as well as a passion for digital trends, cybersecurity, and technology.

Read next: 

• Gen Z Financial Struggles: 72% Social Life Impact, 67% Mental Health Hit, 47% Have One Hour or Less Free Daily

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening
by Guest Contributor via Digital Information World

Monday, January 26, 2026

Gen Z Financial Struggles: 72% Social Life Impact, 67% Mental Health Hit, 47% Have One Hour or Less Free Daily

Edited by Asim BN

Financial challenges have a wide-reaching impact, and in a recent study, Gen Z reported their social lives (72%), mental health (67%) and physical health (62%) have suffered due to money constraints in the last year.

Image: Karolina Grabowska kaboompics.com / pexels

The survey of 2,000 Gen Z hourly workers also found that more than a third (36%) are working multiple jobs, and just about half (47%) have an hour or less of free time each day.

Despite their grind, more than two-thirds of Gen Z workers (68%) doubt they’ll ever be able to fully retire.

And those who are confident they’ll be able to retire are working two jobs, on average, while those who are uncertain about retiring work just one.

This poses the question: Will Gen Z actually need to work two jobs to have enough money to retire?

The survey was conducted by Talker Research on behalf of DailyPay to investigate Gen Z’s financial health and the ways money difficulties have impacted their overall well-being, work and retirement plans.

According to the findings, most Gen Z (77%) think they’ll need to work past the typical retirement age to make ends meet: Half (49%) believe they’ll need to work full-time, and 29% anticipate they’ll need to work at least part-time.

And while, holistically, 67% of Gen Z hourly workers are still proactively saving for retirement, only 44% of those who don’t think they’ll retire are still saving for it just in case, leaving the rest of that group in a precarious financial position.

Looking at respondents’ work/life balance, the majority of Gen Z respondents (56%) went so far as to say they don’t feel like they have lives outside of their jobs.

Respondents also said they eat mostly home-cooked meals (44%), shop at discount stores (38%), opt to do free activities for fun (36%) and even cut their own hair (26%) to limit their spending.

Interestingly, Gen Z also reported that they’re financially responsible for one other person, on average, along with themselves.

Considering this, some of their more intense money-saving habits make a bit more sense in context. These include keeping the thermostat very low in the winter and high in the summer (18%), taking short or cold showers (15%) and air-drying their clothes instead of using a dryer (13%).

2025 was an incredibly difficult financial year for many, if not most, and when Gen Zers were asked about the most extreme things they did in the last year to save money, the responses put things into perspective.

Some respondents said they cut back on showering to reduce their water bills, while others turned off their hot water or electricity, did laundry in the bathtub and stopped buying necessities like groceries and toilet paper.

“Gen Z is facing a financial crisis that is actively undermining their health, their work performance and their hope for retirement,” said Andrew Brandman, chief operating officer at DailyPay. “The outdated pay cycle is misaligned with the younger generation’s modern financial needs and, for many, is negatively impacting their stability and well-being.”

In the study, the majority of Gen Z hourly workers (63%) reported their work performance has taken a hit in the last year because of their money worries.

More than a third (35%) also admitted they accepted their current jobs because they were desperate for work and many (31%) ended up in their current positions because they were attracted to how frequently they’d be paid (e.g. daily, weekly), instead of an attraction to the role itself.

“On-Demand Pay is no longer a niche perk; for many, it’s an essential benefit that restores control over pay and provides financial security to the employee,” said Brandman. “Empowering workers with real-time access to the pay they’ve already earned can be one of the most effective ways to help Gen Z stabilize their finances and thrive.”

GEN Z’S TOP MONEY SAVING HACKS

  • Eating mostly home-cooked meals (44%)
  • Shopping at discount stores (38%)
  • Using coupon apps, cashback sites and waiting for sales (36%)
  • Doing free activities for fun (36%)
  • Meal prepping (31%)
  • Buying in bulk (30%)
  • Cutting my own hair (26%)
  • Buying secondhand things (25%)
  • Buying generic brands only (24%)
  • DIY home repairs (21%)
  • DIY car maintenance (19%)
  • Keeping the thermostat very low in the cold months and higher during the warm months (18%)
  • Using public transit (17%)
  • Carpooling with others when possible (16%)
  • Biking or walking instead of driving (15%)
  • Taking shorter or cold showers (15%)
  • Air-drying clothes instead of using a dryer (13%)

Note: This article was originally published by Talker Research and is republished here per their guidelines.

Read next: Feeling unprepared for the AI boom? You’re not alone
by External Contributor via Digital Information World

Saturday, January 24, 2026

Feeling unprepared for the AI boom? You’re not alone

Patrick Barry, University of Michigan

Image: DIW-Aigen

Journalist Ira Glass, who hosts the NPR show “This American Life,” is not a computer scientist. He doesn’t work at Google, Apple or Nvidia. But he does have a great ear for useful phrases, and in 2024 he organized an entire episode around one that might resonate with anyone who feels blindsided by the pace of AI development: “Unprepared for what has already happened.”

Coined by science journalist Alex Steffen, the phrase captures the unsettling feeling that “the experience and expertise you’ve built up” may now be obsolete – or, at least, a lot less valuable than it once was.

Whenever I lead workshops in law firms, government agencies or nonprofit organizations, I hear that same concern. Highly educated, accomplished professionals worry whether there will be a place for them in an economy where generative AI can quickly – and relatively cheaply – complete a growing list of tasks that an extremely large number of people currently get paid to do.

Seeing a future that doesn’t include you

In technology reporter Cade Metz’s 2022 book, “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World,” he describes the panic that washed over a veteran researcher at Microsoft named Chris Brockett when Brockett first encountered an artificial intelligence program that could essentially perform everything he’d spent decades learning how to master.

Overcome by the thought that a piece of software had now made his entire skill set and knowledge base irrelevant, Brockett was actually rushed to the hospital because he thought he was having a heart attack.

“My 52-year-old body had one of those moments when I saw a future where I wasn’t involved,” he later told Metz.

In his 2018 book, “Life 3.0: Being Human in the Age of Artificial Intelligence,” MIT physicist Max Tegmark expresses a similar anxiety.

“As technology keeps improving, will the rise of AI eventually eclipse those abilities that provide my current sense of self-worth and value on the job market?”

The answer to that question, unnervingly, can often feel outside of our individual control.

“We’re seeing more AI-related products and advancements in a single day than we saw in a single year a decade ago,” a Silicon Valley product manager told a reporter for Vanity Fair back in 2023. Things have only accelerated since then.

Even Dario Amodei – the co-founder and CEO of Anthropic, the company that created the popular chatbot Claude – has been shaken by the increasing power of AI tools. “I think of all the times when I wrote code,” he said in an interview on the tech podcast “Hard Fork.” “It’s like a part of my identity that I’m good at this. And then I’m like, oh, my god, there’s going to be these (AI) systems that [can perform a lot better than I can].”

The irony that these fears live inside the brain of someone who leads one of the most important AI companies in the world is not lost on Amodei.

“Even as the one who’s building these systems,” he added, “even as one of the ones who benefits most from (them), there’s still something a bit threatening about (them).”

Autor and agency

Yet as the labor economist David Autor has argued, we all have more agency over the future than we might think.

In 2024, Autor was interviewed by Bloomberg News soon after publishing a research paper titled “Applying AI to Rebuild Middle-Class Jobs.” The paper explores the idea that AI, if managed well, might be able to help a larger set of people perform the kind of higher-value – and higher-paying – “decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators.”

This shift, Autor suggests, “would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and – akin to what the Industrial Revolution did for consumer goods – lower the cost of key services such as healthcare, education and legal expertise.”

It’s an interesting, hopeful argument, and Autor, who has spent decades studying the effects of automation and computerization on the workforce, has the intellectual heft to explain it without coming across as Pollyannish.

But what I found most heartening about the interview was Autor’s response to a question about a type of “AI doomerism” that believes that widespread economic displacement is inevitable and there’s nothing we can do to stop it.

“The future should not be treated as a forecasting or prediction exercise,” he said. “It should be treated as a design problem – because the future is not (something) where we just wait and see what happens. … We have enormous control over the future in which we live, and [the quality of that future] depends on the investments and structures that we create today.”

At the starting line

I try to emphasize Autor’s point about the future being more of a “design problem” than a “prediction exercise” in all the AI courses and workshops I teach to law students and lawyers, many of whom fret over their own job prospects.

The nice thing about the current AI moment, I tell them, is that there is still time for deliberate action. Although the first scientific paper on neural networks was published all the way back in 1943, we’re still very much in the early stages of so-called “generative AI.”

No student or employee is hopelessly behind. Nor is anyone commandingly ahead.

Instead, each of us is in an enviable spot: right at the starting line.

Patrick Barry, Clinical Assistant Professor of Law and Director of Digital Academic Initiatives, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI-Assisted Coding Reaches 29% of New US Software Code


by External Contributor via Digital Information World

Friday, January 23, 2026

AI-Assisted Coding Reaches 29% of New US Software Code

Edited by Asim BN. Reviewed by Ayaz Khan

Generative AI is reshaping software development – and fast. A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly: in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, compared with just 12% in China. AI usage is highest among less experienced programmers, but productivity gains go to seasoned developers.

The Study In A Nutshell

  • AI-assisted coding is spreading rapidly: In the U.S., the share of AI-generated code rose from 5% in 2022 to nearly 30% by the end of 2024
  • Large regional gaps: Adoption was highest in the U.S. (29%), followed by France (24%), Germany (23%) and India (20%); China (12%) and Russia (15%) lag behind (as of early 2025)
  • Measured productivity gains: In the aggregate, generative AI increased programmers’ productivity by an estimated 3.6%
  • Substantial economic impact: AI-assisted coding adds at least $23 billion per year to the U.S. economy
  • Unequal effects: Less experienced programmers use AI more often, but productivity gains accrue almost exclusively to experienced developers

The software industry is enormous. In the U.S. economy alone, firms spend an estimated $600 billion a year in wages on coding-related work. Every day, billions of lines of code keep the global economy running. How is AI changing this backbone of modern life?

In a study published in Science, a research team led by the Complexity Science Hub (CSH) found that by the end of 2024, around one-third of all newly written software functions – self-contained subroutines in a computer program – in the United States were already being created with the support of AI systems.

“We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world’s largest collaborative programming platform,” says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.

Regional Gaps Are Large

The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.

“The results show extremely rapid diffusion,” explains Frank Neffke, who leads the Transforming Economies group at CSH. “In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024.”

At the same time, the study found wide differences across countries. “While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast, while Russia (15%) and China (12%) still lagged behind at the end of our study,” he says.

“It’s no surprise the U.S. leads – that’s where the leading LLMs come from. Users in China and Russia have faced barriers to accessing these models, blocked by their own governments or by the providers themselves, though VPN workarounds exist. Recent domestic Chinese breakthroughs like DeepSeek, released after our data ends in early 2025, suggest this gap may close quickly,” says Johannes Wachs, a faculty member at CSH and associate professor at Corvinus University of Budapest.

Figure: Global diffusion of AI-assisted coding and its impact. Left: The share of AI-written Python functions (2019–2024) grows rapidly, but countries differ in their adoption rates. The U.S. leads early adoption of generative AI, followed by European nations such as France and Germany. From 2023 onward, India rapidly catches up, whereas adoption in China and Russia progresses more slowly. Right: Comparing usage rates for the same programmers at different points in time, generative AI adoption is associated with increased productivity (commits), breadth of functionality (library use) and exploration of new functionality (library entry), but only for senior developers, while early-career developers do not derive any statistically significant benefits from using generative AI. © Complexity Science Hub

Experienced Developers Benefit Most

The study shows that the use of generative AI increased programmers’ productivity by 3.6% by the end of 2024. “That may sound modest, but at the scale of the global software industry it represents a sizeable gain,” says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).

The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. “Beginners hardly benefit at all,” says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.

In addition, experienced software developers experiment more with new libraries and unusual combinations of existing software tools. “This suggests that AI does not only accelerate routine tasks, but also speeds up learning, helping experienced programmers widen their capabilities and more easily venture into new domains of software development,” says Wachs.

Economic Gains

What does all of this mean for the economy? “The U.S. spends an estimated $637 billion to $1.06 trillion annually in wages on programming tasks, according to an analysis of about 900 different occupations,” says co-author Xiangnan Feng from CSH. If 29% of code is AI-assisted and productivity rises by 3.6%, that adds between $23 billion and $38 billion in value each year. “This is likely a conservative estimate,” Neffke points out. “The economic impact of generative AI in software development was already substantial at the end of 2024 and is likely to have increased further since our analysis.”
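
As a quick sanity check on those figures, the short sketch below multiplies the reported wage bill by the 3.6% aggregate productivity gain. This is our own back-of-the-envelope reading of the numbers quoted above, not the paper’s exact method; the wage-bill bounds and the 3.6% figure are taken from the article, and the assumption that the gain applies directly to the full wage bill is ours.

```python
# Back-of-the-envelope check of the value-added range quoted above.
# Assumption (ours, not the study's exact method): the 3.6% aggregate
# productivity gain applies directly to the reported U.S. wage bill
# for programming tasks.

low_wage_bill = 637e9      # $637 billion per year (lower bound)
high_wage_bill = 1.06e12   # $1.06 trillion per year (upper bound)
productivity_gain = 0.036  # 3.6% aggregate productivity increase

low_value = low_wage_bill * productivity_gain
high_value = high_wage_bill * productivity_gain

print(f"Estimated annual value added: ${low_value / 1e9:.0f}B to ${high_value / 1e9:.0f}B")
# -> Estimated annual value added: $23B to $38B
```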


Looking Ahead

Software development is undergoing profound transformation. AI is becoming central to digital infrastructure, boosting productivity and fostering innovation – but mainly for people who already have substantial work experience.

“For businesses, policymakers, and educational institutes, the key question is not whether AI will be used, but how to make its benefits accessible without reinforcing inequalities,” says Wachs. “When even a car has essentially become a software product, we need to understand the hurdles to AI adoption – at the company, regional, and national levels – as quickly as possible,” Neffke adds.

About the study

The study “Who is using AI to code? Global diffusion and impact of Generative AI” by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (doi: 10.1126/science.adz9311).

Note: This post was originally published on the Complexity Science Hub and is republished on DIW with permission. No AI was used in writing this post.


by External Contributor via Digital Information World

Lack of coordination is leaving modern slavery victims and survivors vulnerable, say experts

Written by: Joe Stafford - The University of Manchester. Reviewed by Asim BN.
Image: Nano banana

Researchers at The University of Manchester are calling for stronger, coordinated partnerships to tackle modern slavery and human trafficking, warning that gaps between organisations risk leaving victims and survivors without consistent protection and support.

Their appeal comes in a new review commissioned by Greater Manchester Combined Authority (GMCA), which examines how organisations across the city region work together to identify, safeguard and support people affected by modern slavery and human trafficking. The review focuses on partnerships involving local authorities, statutory services, law enforcement, housing providers and voluntary and community sector organisations.

The authors argue that tackling modern slavery depends on robust, long-term collaboration rather than ad hoc arrangements. While organisations across Greater Manchester have developed innovative partnership approaches, the review finds that these are not always embedded consistently across the system. Among the review’s key recommendations, the authors are calling for:

- Clearer strategic governance to strengthen modern slavery and human trafficking partnerships at a Greater Manchester-wide level.

- More consistent roles and responsibilities across organisations, so victims/survivors do not fall through gaps between services.

- Improved information-sharing and referral pathways, ensuring concerns are acted on quickly and safely.

- Sustainable funding and resources to support partnership working, rather than reliance on short-term arrangements.

- Stronger links between safeguarding, housing, immigration advice and criminal justice responses, reflecting the needs of victims.

The review suggests that where partnerships are well established, outcomes for victims are more likely to be improved. Such embedded collaboration enables earlier identification of exploitation, better safeguarding responses and coordinated support to help individuals recover and rebuild their lives. Strong partnerships also support disruption of criminal activity by improving intelligence-sharing and joint working.

However, the authors highlight challenges which can weaken partnership arrangements including variations in local practice, capacity pressures and funding uncertainty. Frontline professionals reported that without clear structures and shared accountability, collaboration often relies on personal relationships, making it fragile and difficult to sustain.

The researchers also note that victims and survivors of modern slavery often face overlapping vulnerabilities including insecure housing, mental ill-health and immigration insecurity. Without joined-up working across sectors, these complexities can delay support and increase the risk of re-exploitation.

The authors stress that the findings have national relevance due to a relatively cohesive modern slavery partnership approach in Greater Manchester. As awareness of modern slavery grows, public bodies across the UK face pressure to demonstrate good quality partnership responses. The review positions Greater Manchester as a potential leader, but cautions that this requires investment in governance, coordination and shared learning.

“This review shows that partnership working is not optional when tackling modern slavery and human trafficking - it is essential. The needs of victims and survivors cut across organisational boundaries, and responses must do the same. Our recommendations set out how partners across Greater Manchester can strengthen their approach and provide protection and support.” - Dr Jon Davies.

This article was originally published by The University of Manchester and is republished with permission.


by External Contributor via Digital Information World

Over 8 in 10 Americans Trust AI for Financial Advice, and It Has Experts Worried

Written by Rachel Perez. Reviewed by Ayaz Khan.

Unless you’ve been off-grid in the last five years, chances are you’ve watched the meteoric rise of artificial intelligence and its usage by humans unfold online right in front of you. What’s more, you’ve probably gotten curious, logged on to an AI model, and asked it a question or two (yes, we know you did, so don’t deny it). Ever since ChatGPT’s debut in late 2022, people have started to depend on artificial intelligence to answer their questions and help navigate life, from putting together the perfect pasta recipe to more serious topics, such as creating a will or making investment decisions. And as more companies like Google and Meta have introduced their own AI models, this trend of human reliance on AI has only increased.

This isn’t necessarily a negative, as AI models make research incredibly easy and efficient compared to sorting through the various articles a search algorithm throws at you to find an answer. However, while students might use AI to cheat on exams and lawyers have used it to (incorrectly) cite case law, the larger concern isn’t the use of artificial intelligence, but people’s trust and reliance on this technology.

Although you might just want to use AI to figure out why the cat keeps digging in the litter box, things start to get a little dicey if you take the information AI offers without fact-checking the info or accounting for its biases. And when people start to use AI as a financial advisor instead of a human one, they run the risk of receiving poor advice that could lead them to financial hardship in the future.

More than eight out of ten Americans now trust artificial intelligence to help guide their financial decisions, according to a BestMoney survey.




Pros of Using AI with Finances

Artificial intelligence excels when it comes to sorting information and relaying that information to the user in a different way. In terms of general financial questions, ChatGPT and other models do a good job of breaking down those confusing finance concepts and explaining them in layman’s terms. For example, someone may not understand how budgeting helps track monthly expenses and control spending. AI can explain it clearly using practical, easy-to-follow examples.

AI can also help with straightforward financial input. If someone with little spreadsheet experience wants to use one for their personal budget, AI can take the numbers and format them into a ready-to-go spreadsheet. What’s more, AI could explain some functions and formulas that they might find useful in the future, expanding their knowledge base as well as their ability to use the program for their budgeting needs.

For encyclopedic and general financial information, AI is about as good as a search engine. But when it comes to the important questions, like those that directly impact your finances, you’re better off taking them to a pro.

Cons of Using AI with Finances

The biggest issue with AI is that it cannot generate something new, as the algorithm picks up information from the internet and sorts through it to answer questions. This may not matter for questions about recipes, but it can pose an issue for questions that require critical thinking. No matter how real AI might seem, it isn’t human. So while it can give you options and advice, it doesn’t have the discernment to tell you which options are good or bad or best for you.

Another pitfall of using AI with finances is that you may not know what sources the information came from. For example, if AI is telling you that it's a good time to invest in a particular company, you should wonder why and what sources it used to come up with that information. If it’s pulling positive information from the company’s website, the bias can skew AI’s output. Even worse, AI could hallucinate and give a totally made-up answer. So checking sources and information is key. Even slightly incorrect or inaccurate information can wreak havoc on your financial future if you make the wrong decision.

When it comes to your finances, you usually have to include a lot of personal information to get an answer tailored to you, but you’re relying on AI companies to remove your personal data without any way of checking or guaranteeing that. Uploading your personal information can put you at risk, and without adding in your personal information, it’s difficult to get the specific answers you need to make sound financial decisions that account for your unique circumstances.

Best Practices of Using AI for Finances

Artificial intelligence is not evil; it’s a tool. And if you use AI, it’s important to develop good habits and avoid becoming overly dependent on it for every answer, especially when it comes to important life decisions. Below are some dos and don’ts to keep in mind when using AI for help.

  1. Do double-check… and then triple-check. AI is known to hallucinate, give out false information, and may rely on biased sources. By double-checking answers, information, and sources used, you can avoid operating on incorrect information. Don’t just ask for information; also ask AI for the sources it used, so you can check whether they’re reputable and reliable.
  2. Don’t use AI for personal matters. ChatGPT is good at explaining concepts from a bird’s-eye view, but things get tricky when you ask it to apply that reasoning to your personal life. You can ask AI to explain how the stock market works, but don’t ask it to take the current climate of the stock market and upload your personal financial information to get back advice on what you should invest in.
  3. Do use a human backup, especially on the financial questions. There’s a reason why AI hasn’t taken over the financial advisor sector yet. Artificial intelligence can’t think critically, but humans can! Take advantage of human advice and compare it to what AI says, remembering that both can make mistakes, but only one has original thoughts and ideas.

Conclusion

People talk about financial literacy and media literacy, but it’s clear that the world is going to need to develop AI literacy skills as well. Artificial intelligence is an amazing invention that has many possibilities with just as many limitations. By building good AI literacy habits, people can continue to use AI as a tool for many questions they might have, even those about finances. By remembering to double-check information, keep questions general, and ask a human financial professional about questions with big consequences, AI and humans might just develop a healthy relationship with each other.

Read next:

• AI-induced cultural stagnation is no longer speculation − it’s already happening

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening


by External Contributor via Digital Information World

AI-induced cultural stagnation is no longer speculation − it’s already happening

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

Researchers find repeated text-image loops compress meaning, producing polished but empty visuals dubbed “visual elevator music.”
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. But note that the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening

• Why AI has not led to mass unemployment


by External Contributor via Digital Information World