Thursday, April 30, 2026

Standardised testing and scripted lessons are failing teachers and students alike, education expert warns

By Taylor & Francis

Geoff Masters challenges a system that teaches the same curriculum to children with very different comprehension levels.

Geoff Masters criticizes age-based schooling, advocating personalized learning and teacher autonomy over standardized curricula.
Image: Rewired Digital / Unsplash

Is it time to ditch scripted lessons and heavily packed curricula to focus on individual student growth?

This is the question posed by education expert Geoff Masters, who argues that age-based expectations are not serving all children well, while scripted lessons are failing teachers and students alike.

Masters, the former head of the Australian Council for Educational Research, asks how well children are served by a system in which two pupils in the same class can differ by six or more years of learning but are taught the same material.

He argues this system fails children at either end of the scale – those who are struggling and those who are unchallenged. What if, he asks, instead of holding all pupils of the same age to the same learning expectations, we based expectations on where individuals are in their comprehension and growth?

“Too many students in our schools are being poorly served and left behind by machineries of schooling not fit for purpose,” Masters warns.

The problem with standardisation

Masters argues there is a fundamental flaw in the current system: the assumption that all students in the same grade are equally ready to learn the same material.

Research shows that children in the same classroom can have up to a seven-year difference in their reading and mathematics comprehension. This vast variation, Masters argues, is ignored by a system that prioritises standardisation over individual needs.

“By the middle years of school, many students have not learnt what the curriculum expected them to learn much earlier in their schooling,” Masters explains. He cites data showing that, across 38 developed countries, almost a third of 15-year-olds have difficulty demonstrating proficiency in 5th and 6th grade mathematics content.

The picture in Australia

Masters’ arguments are presented against a backdrop of Australia’s declining performance in international assessments like PISA. Between 2012 and 2022, there was no significant improvement in Australian students’ performances in reading, mathematics or science. In fact, long-term declines have been recorded across all three areas.

“Despite decades of reforms, the machinery of schooling has not delivered the improvements we need,” Masters says. “It’s time to question whether prescribing what every student must learn in each grade of school and testing to see whether they have learnt it is the best way to optimise learning and improve performance.”

Masters also explains how those who start the year behind are likely to stay behind. He explains: “When the curriculum expects all students in a grade to be taught the same content at the same time, those who begin well below grade level are disadvantaged. This disadvantage is compounded when students are required to move from one grade curriculum to the next based on elapsed time rather than mastery. Students who lack essential prerequisites often fall further behind as each grade’s curriculum becomes increasingly beyond their reach.”

The future of learning

Masters instead argues for a system that meets students where they are in their learning, rather than where their age or grade dictates they should be. He proposes replacing age-based expectations with personalised learning plans that track individual growth.

“Improved performance depends on meeting each student where they are with personally meaningful, well-targeted learning opportunities that build on what they already know,” Masters explains. “This approach includes all students, including neurodiverse children and others with special needs.”

This approach would not only benefit students, he suggests, but also empower teachers to use their professional expertise to design tailored learning experiences.

One of the most concerning trends in education, in Masters’ view, is the rise of scripted lessons.

“Scripted lessons turn teaching into the delivery of ready-made solutions created outside the classroom,” Masters says. “They undervalue teachers’ expertise in what is arguably the essence of effective teaching: establishing where individuals are in their learning and designing opportunities to promote further growth.”

Masters calls for a return to professional autonomy, where teachers are trusted to make decisions in the best interests of their students.

Masters envisions a future where education systems embrace diversity and difference.

“Rather than expecting students to fit the expectations of schooling, the challenge is to redesign school structures and processes to better meet the needs of individual learners,” Masters concludes.

Further information: The Children We Leave Behind: How School Could Be Done Differently, by Geoff Masters (Routledge, 2026). ISBN: Paperback 9781041279655 | Hardback 9781041279662 | eBook 9781003757122. DOI: https://doi.org/10.4324/9781003757122

This post was originally published on Taylor & Francis Newsroom and republished on DIW with permission.

Reviewed by Irfan Ahmad.

Read next:

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• The Deadliest Countries for Journalists
by External Contributor via Digital Information World

Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

By Dar Kahllon and Guy Erez - LayerX

Executive Summary:

New research by LayerX Security uncovers multiple networks of browser extensions that collect user data and resell it for profit – and it’s all completely legal. Unlike malicious extensions that disguise themselves as legitimate tools and operate in the dark, these extensions explicitly tell users that they’re going to collect and sell their data. It’s right there in the privacy policy – it’s just that nobody reads it.

LayerX analyzed the privacy policies of thousands of extensions and uncovered over 80 different extensions that collect and sell customer data. Some of these extensions include:

  • A network of 24 media extensions, installed by 800,000 users, that collect viewing data and demographic information from major streaming platforms such as Netflix, Hulu, Disney+, Amazon Prime Video, HBO, Apple TV, and others
  • 12 separate ad blockers with a combined install base of over 5.5 million users openly selling user data
  • Nearly 50 other extensions, with over 100,000 users in aggregate, that collected and resold users’ browsing data

While browser extensions may seem innocent, these findings highlight the privacy exposure that can arise from unregulated usage of extensions.

The Fine Print That Makes Everything Legal

Privacy policies. Reading them is like watching paint dry. For most users, it’s worse than reading the fine print in their mortgage agreements – and that’s saying something.

Except we did.

LayerX Security researchers Dar Kahllon and Guy Erez analyzed the privacy policies of thousands of browser extensions available in official stores. They were looking for one thing: whether the publisher explicitly reserved the right to sell user data.

And we found them. Our analysis surfaced at least 80 such extensions, some of them operating in networks built by the same developer. They range from ad blockers and streaming tools to job application helpers, new-tab extensions, and B2B sales intelligence platforms.

Most of these policies don’t say “we sell your data.” They say “we may sell.” It’s a legal hedge – but it means your data can be sold at any time, and you already agreed to it. Here’s what that looks like in practice:

“We may sell or share your personal information with third parties.”

“This information may be sold to or shared with business partners.”

What? Browser Extensions Have Privacy Policies?!

Well, to be fair, most don’t.

This isn’t a story about malware. Nobody hacked you. Nobody stole anything. The extensions you’re running right now may be selling your browsing data – and they told you they would. It’s right there in the privacy policy. Page 4. Paragraph 7. The one nobody reads.

Figure 1. Privacy Policy Transparency

According to LayerX’s Enterprise Browser Extension Security Report 2026, 71% of all extensions in the Chrome Web Store don’t even publish a privacy policy.

As a result, more than 73% of users have at least one extension installed without a privacy policy, with no transparency into how their data is handled. This means our analysis could only rely on the 29% that do have a privacy policy.

And if we assume that some of those extensions with no privacy policy at all will also resell your data – and there’s no reason to assume they’re better – the real number of extensions that may sell your data across the Chrome Web Store is in the tens of thousands.

How We Analyzed The Data

We built a pipeline to analyze privacy policies associated with browser extensions in official stores, combining automated classification with manual verification.

Starting from roughly 9,000 extensions with privacy policy URLs in our database, we successfully fetched and parsed 6,666 policies.

The pipeline ran in three stages:

  1. First, AI classification flagged policies disclosing the sale, licensing, or commercial transfer of user data. High-confidence matches were marked for review, and every flagged policy was then verified manually.
  2. Next, a manual review removed false positives, including:
     (A) Enterprise security tools (e.g., Fortinet, CrowdStrike) that route browsing data to their own servers as part of expected web-filtering behavior.
     (B) Standard CCPA ad-retargeting disclosures (e.g., HubSpot, Calendly), where sharing cookies with platforms like Google Ads may technically count as a “sale” under broad definitions.
     (C) Consensual data-monetization platforms (e.g., Swash), where users explicitly opt in and are compensated.

     The final dataset includes only extensions whose privacy policies indicate genuine commercial sale of user data to third parties.

  3. In the final count, we found 82 unique extensions across 94 store listings. 75 are currently live in the Chrome Web Store. The remaining 7 have been removed – but “removed” doesn’t mean “uninstalled.” Extensions pulled from the store can stay active in browsers that already have them.
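For readers curious how such a triage might look in practice, here is a minimal Python sketch. It is an illustration under stated assumptions, not LayerX’s actual code: the SALE_PATTERNS keywords, the policy_url field, and the flag_for_review helper are all hypothetical, and the AI-classification stage is reduced to a comment.

```python
import re

import requests

# Hypothetical phrases that often signal a reserved right to sell user data.
SALE_PATTERNS = [
    r"we may (sell|share|rent|license)",
    r"(sold to|shared with) (third parties|business partners)",
]

def fetch_policy(url: str) -> str | None:
    """Fetch a privacy policy page; return its text, or None on failure."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        return None

def flag_for_review(extensions: list[dict]) -> list[dict]:
    """Cheap first pass: queue extensions whose policies use sale-like language."""
    flagged = []
    for ext in extensions:
        text = fetch_policy(ext["policy_url"])
        if text and any(re.search(p, text.lower()) for p in SALE_PATTERNS):
            # A production pipeline would add an LLM classification stage here,
            # then route high-confidence matches to a human reviewer.
            flagged.append(ext)
    return flagged
```

Run against the two policy excerpts quoted above, both would be flagged: “we may sell” matches the first pattern and “shared with business partners” matches the second.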

While these figures may seem low, bear in mind that they cover only extensions that publish a privacy policy in the first place (less than one-third of all extensions) – and, of those, only the ones that actually tell you what they’re doing with your data. The true number is almost certainly higher.

Here are a few of our key findings:

The QVI Empire: One Anonymous Publisher, 24 Extensions, 800,000 Users

While reviewing confirmed sellers, a pattern kept surfacing. Different extensions, different streaming platforms, but the same three-letter prefix: QVI – short for “Quality Viewership Initiative.”

What looked like unrelated tools turned out to be a single operation: 24 browser extensions – 21 currently live, 3 removed – covering nearly every major streaming service.

  • Netflix
  • Hulu
  • Disney+
  • Amazon Prime Video
  • HBO Max
  • Peacock
  • Paramount+
  • Tubi
  • Apple TV+
  • Crunchyroll

All published by HideApp LLC, registered at 1021 East Lincolnway, Cheyenne, Wyoming – an address shared by hundreds of other LLCs through a registered agent service – and operating under the brand “dogooodapp.”

The largest extensions in the network:

  • Custom Profile Picture for Netflix (200K users)
  • Hulu Ad Skipper (100K)
  • Netflix Picture in Picture (100K)
  • Ad Skipper for Prime Video (60K)
  • Netflix Extended (60K)

Across all 21 live extensions, the network reaches nearly 800,000 users.

Figure 2. Extension Page in Chrome Store for the “Custom profile picture for Netflix [QVI]” extension

But their privacy policy says something the store listings don’t. These extensions collect extensive information, including:

  • Viewing history
  • Content preferences
  • Platform subscriptions
  • Downloaded content
  • Streaming behavior

They also collect age and gender – and if you don’t provide demographics, they match your email against third-party demographic databases to fill in the gaps.

Figure 3. Data declared as collected by the privacy policy of the “Custom profile picture for Netflix [QVI]” extension

The policy describes selling reports to content creators and studios, streaming platforms, media research firms, and marketing agencies – along with “organizations that purchase anonymized viewing data.”

Put it all together and you’re looking at a distributed audience-measurement system running inside users’ browsers. One anonymous publisher pulling viewing behavior across every major streaming platform, building intelligence about what nearly 800,000 people watch, when, and how they engage with content. None of those users signed up for that. Legally, they accepted the terms when they clicked “Add to Chrome.” Practically, nobody read them.

Ad Blockers That Block Some Ads – and Sell Your Data to Other Advertisers

We confirmed eight ad blockers that reserve the right to sell or share user information with third parties. Tools people install to stop tracking – selling tracking data instead. Combined, they reach over 5.5 million users.

  • Stands AdBlocker (3M users) sells browsing data to third parties for “market analytics purposes.”
  • Poper Blocker (2M users) discloses selling identifiers, browsing activity, behavioral profiles, and inferred sensitive data – including health conditions, religious beliefs, and sexual orientation, all inferred from the URLs you visit.
  • All Block, an ad blocker for YouTube (500K users), sells anonymized data “for analytical and commercial purposes.” Published by an entity called Curly Doggo Limited, based in London.
  • TwiBlocker (80K users) discloses transferring browsing data to third parties who “process or sell it for analytical purposes.”
  • Urban AdBlocker (10K users) routes browsing data and AI conversations through the BiScience data broker.

If your ad blocker has a privacy policy longer than two paragraphs, read it.

Figure 4. Featured Ad Blocker in Chrome Store

Independent Operators Can Also Sell Your Data

These aren’t the biggest extensions on the list, but they show how far the data-selling model reaches.

  • Career.io Job Auto Apply (10K users) states in its policy that it may sell personal data collected from your resume to third parties, including data brokers, for targeted advertising and profiling. A job application tool that sells your resume.
  • Dog Cuties (6K users) is a cute dog wallpaper new-tab extension. Confirmed data seller through the Apex Media network.
  • EmailOnDeck (10K users) is a temporary email service – a tool people use specifically when they don’t want to share their real information. Its policy states it may sell, rent, or share its mailing list.
  • Survey Junkie discloses selling URLs visited, clickstream data, and “modeled information” about consumer preferences to market research agencies, ad agencies, and data analytics providers.
  • Dashy New Tab (10K users) has its Chrome Web Store listing marked “does not sell your data.” Its actual privacy policy marks data as “Sold or Shared: Yes.” We believe this is CCPA compliance language for standard analytics, not commercial data sales – which is why we left it out. But the contradiction between the store listing and the privacy policy is real. If a publisher’s own policy says “Sold or Shared: Yes” and the store listing says the opposite, which one should users trust?

When Your Employees’ Extensions Are Selling Data

Of the 82 confirmed sellers, 29 are B2B sales intelligence tools. Their business is data, so the disclosure itself isn’t a surprise. We’re not counting them alongside the consumer-facing extensions.

But they belong in this conversation. These extensions sit on corporate machines. This means that employee browsing behavior, such as internal URLs, SaaS dashboards, and research activity, flows into commercial databases that your competitors can purchase. The risk isn’t about users being deceived. It’s about corporate data leaving through a channel nobody is watching.

What Security Teams Should Do About This

Most extension security evaluations focus on permissions or known malicious indicators – flagging extensions that request excessive access or match threat intelligence. That catches malware. It doesn’t catch an extension that openly reserves the right to sell your browsing data.

An extension with a data-selling disclosure isn’t a hypothetical risk. It’s a stated business practice, sitting in a document your employees accepted without reading.

Three questions worth asking:

  1. What extensions are installed across employee browsers?
  2. What data do those publishers claim the right to collect or sell?
  3. Could corporate browsing activity be flowing into commercial datasets?

Most browsers already support centralized extension management through enterprise policies – Chrome’s ExtensionSettings, Edge’s group policies, Firefox’s enterprise configurations. If you don’t have an extension governance policy, that’s the first step. If you do, add privacy policy review to the evaluation criteria. Permissions alone don’t tell you enough.
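As an illustration, a minimal Chrome ExtensionSettings policy might block all extensions by default and allowlist only vetted ones. This is a hedged sketch, not a recommended production policy: the 32-character extension ID below is a placeholder, and the JSON would be delivered through your management tooling (Group Policy on Windows, a managed-policies file on Linux or macOS):

```json
{
  "*": {
    "installation_mode": "blocked",
    "blocked_install_message": "Extensions must pass a privacy-policy review before installation."
  },
  "aaaabbbbccccddddeeeeffffgggghhhh": {
    "installation_mode": "allowed"
  }
}
```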

To that end, LayerX has added a new filter that detects (and, if desired, blocks) extensions that either have no privacy policy at all or reserve the right to sell personal data.

Consider blocking extensions that either disclose selling user data or don’t publish a privacy policy at all.

Figure 5. LayerX Extension Data Privacy Filter

The Bottom Line

Browser extensions are among the web’s most powerful and least scrutinized tools. While much of the focus is on malicious extensions that actively steal user and corporate data, privacy violations may sound mundane, but they can be just as risky.

Reading the privacy policy of every extension installed by every user in your organization would mean reviewing hundreds or thousands of individual documents; clearly, that’s not feasible.

Instead, organizations need to deploy automated tools that can restrict suspicious extensions and account for privacy practices.

Google was contacted multiple times over two days for comment on the report’s findings and Chrome Web Store policies but did not respond before publication. This article will be updated if a response is received.

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• Research reveals lack of transparency in ad data of digital platforms
by External Contributor via Digital Information World

Wednesday, April 29, 2026

Asphalt is everywhere, but is it bad for our health?

By Joanna Allhands - Arizona State University

ASU researcher says pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

Heat and sunlight worsen asphalt emissions, raising health risks for workers and nearby communities.
Image: Brian J. Tromp / unsplash

If you piled all of Phoenix’s pavement into one spot, it would be enough to cover San Francisco four times over.

Roads, parking lots and other paved surfaces blanket a lot of land — an estimated 40% of Arizona’s capital city.

Pavement absorbs heat during the day and releases it slowly at night, fueling the urban heat island effect and increasing the amount of energy that cities consume.

But for Elham Fini, a senior scientist affiliated with the Julie Ann Wrigley Global Futures Laboratory at Arizona State University, pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

“To make something truly sustainable,” she said, “you cannot ignore the human side of it.”

Asphalt fumes can be hard on health

Fini — a faculty member in ASU’s School of Sustainable Engineering and the Built Environment — spent years studying why asphalt breaks down so quickly.

That work pointed her toward the volatile organic compounds (VOCs) that escape from bitumen, the black, sticky petroleum byproduct that holds asphalt together.

Two studies in the Journal of Hazardous Materials and Science of the Total Environment shed light on how the compounds that give asphalt its trademark scent change after sunset and form ultrafine particles, which can worsen air quality.

These carbon-based vapors are continuously released but become more noticeable on hot, sunny days. They can cause dizziness and difficulty breathing in the short term.

Long-term exposure also can elevate the risk of lung cancer, a major concern for construction workers who regularly breathe these fumes without a respirator.

Aging pavement emits toxic vapors

And the impacts could get worse as pavement ages.

Research from Fini and others shows that asphalt begins releasing different, more toxic VOCs as bitumen breaks down in sunlight and heat.

These toxic, often odorless VOCs are small enough to work their way into arteries and organs.

Tests and a modeling analysis also suggest that they can cause significant neurological damage in humans, particularly among women and the elderly.

“Heat is worsening the situation,” Fini said. “It’s exacerbating the emissions from asphalt.”

More study is needed to understand what level of asphalt-emitted VOC exposure is unsafe.

But what we know so far should raise alarm bells for hot, car-centric cities such as Phoenix.

Goal: Safer asphalt, healthier workers

Fini is working with Dr. Bruce Johnson via a partnership with Mayo Clinic to better understand how asphalt emissions impact respiratory health.

She hopes that their studies will lead to stronger protections for construction workers and surrounding communities, as well as less toxic, lower-emitting asphalt formulations.

Fini has a head start on the latter.

She has teamed up with Peter Lammers, chief scientist at the Arizona Center for Algae Technology and Innovation, to begin growing a strain of algae that could reduce VOC emissions using wastewater from a Phoenix treatment plant.

“It’s a great setup,” said Lammers, a research professor in the School of Sustainable Engineering and the Built Environment, “because we use water that’s far too high in nitrogen and phosphorus to be released anywhere. And instead, we reuse it to grow more algae.”

Fini then bakes that algae at high temperatures without much oxygen into a binder that can be easily mixed into asphalt.

Algae can capture the worst VOCs

A study in the journal Clean Technologies and Environmental Policy found that while algae-infused asphalt doesn’t significantly reduce total VOC emissions, it can effectively keep the most toxic compounds from escaping.

In fact, tests showed that it reduced the toxicity of asphalt emissions by roughly 100-fold.

Algae can slow how quickly pavement breaks down — which could lower construction and maintenance costs and make its inclusion in asphalt even more attractive for cities and paving companies.

Fini is exploring other binder options, including a product made from the leftover branches of forest-thinning projects, and working with Phoenix to pave a section of road with algae-infused asphalt.

Because VOCs from pavement are often left out of air quality assessments, these real-world tests are critical to evaluate pavement performance and its long-term environmental impact.

“We have 4 million miles of roads in America,” Fini said. “We should make those 4 million miles do more for us than just get from A to B.”

This research was done in collaboration with colleagues from the following institutions: Emory University; Dalian University of Technology, China; Mayo Clinic Arizona; Oregon State University; University of Chicago; University of Lille, France; University of Littoral Côte d'Opale, France; University of Miami; University of Missouri; University of Utah.

Reviewed by Irfan Ahmad.

This post was originally published on Arizona State University News and republished here with permission.

Read next:

• Half of AI health answers are wrong even though they sound convincing – new study

by External Contributor via Digital Information World

China surpasses US in research spending – the consequences extend far beyond scientific ranking and clout

Caroline Wagner, The Ohio State University
China’s research boom overtakes U.S. momentum while American federal science funding continues declining steadily.
Image: Unsplash - kaboompics.com

China’s rapid rise in science has hit a milestone. The country’s investment in research and development has reached parity with – and by purchasing power measures has surpassed – that of the United States, according to a March 2026 report from the Organisation for Economic Co-operation and Development. Both nations have crossed the US$1 trillion threshold on research spending.

For 80 years, the U.S. operated the most productive scientific and technological enterprise in human history. Breakthroughs and advances that came from American labs included the internet; the mRNA vaccine; the transistor and its children, semiconductors and microprocessors; the Global Positioning System; and many more.

U.S. scientific and technological leadership was nurtured by sustained public investment in research universities and federal laboratories, as well as a culture of open inquiry. These investments turned scientific discovery into economic strength – accounting for more than 20% of all U.S. productivity growth since World War II.

In contrast, China had previously spent little to nothing on research and development. Some estimates show that China was among the lowest research spenders worldwide in 1980.

As a policy analyst and public affairs researcher, I study international collaboration in science and technology and its implications for public and foreign policy. I have tracked China’s rise across every major database for more than a decade.

The most recent reports showing that China is now outspending the U.S. on scientific and technological research mark a turning point worth understanding clearly because, historically, global leadership in one sector – including technology and warfare – feeds into others. U.S. dominance is in question.

China’s systematic and unrelenting rise

China’s R&D spending milestone caps a series of achievements that have arrived in rapid succession.

In 2019, China surpassed the U.S. in its share of the top 1% most-highly cited papers – what some call the Nobel class of research. By 2022, it had taken first place globally in most-cited papers overall.

In 2024, China overtook the United States in total scientific publications – the first time any nation has displaced American dominance since the U.S. itself surpassed the United Kingdom in 1948. Researchers found that China overtook the United States in scientific output even earlier. That same year, China pulled ahead in the Nature Index, which tracks publications in the world’s most selective scientific journals, posting a 17% advantage over the U.S. in outlets long considered the gold standard of scientific excellence.

In 2024, Chinese entities also filed roughly 1.8 million patent applications, compared to the U.S.’s 603,191 applications.

Given these milestones, it’s possible to argue that China is quickly taking the lead in global science and technology. These are not isolated data points. They mark a structural shift in where the world’s scientific frontier is being built.

More science is good – the problem lies elsewhere

China’s ascent is, in one sense, good news. More knowledge, generated by more researchers across more institutions, expands the global pool of discovery from which everyone can draw. The world benefits when science thrives.

The problem is not that China is investing, but that the U.S. is not.

First, the U.S. is divesting from basic, open science. Federal R&D spending in the U.S. peaked in 2010 at roughly $160 billion and fell by more than 15% over the following five years. Federal investment in research and development has been in a long, slow slide – from a peak of 1.86% of gross domestic product in 1964 to about 0.66% in 2021.

The federal government is no longer the largest spender in R&D: It funded about 40% of basic research in 2022, while the business sector performed roughly 78% of U.S. R&D. While not a problem in itself, industry has simultaneously withdrawn from open scientific publication over the past four decades, shifting from research toward development. The result is a shrinking pool of openly shared scientific knowledge precisely as public investment in it also contracts.

Under the second Trump administration, U.S. government science agencies have been slow-walking proposals for new research, and current White House budget proposals threaten to deepen cuts to government science spending significantly.

Second, the U.S. is actively restricting scientific exchange: tightening access to U.S. institutions, scrutinizing international collaborations and raising barriers to foreign-born researchers. These policies, though intended as security measures, work against the openness that has historically made American science productive and attractive to global talent.

I describe this issue as an example of the stockyard paradox, in which securing research assets may weaken the very system these measures aim to protect.

Disinvestment cuts deeper than it appears

The deeper danger for the U.S. economy is that disinvestment and selective engagement in research erodes the capacity to use cutting-edge science regardless of where it is produced.

Absorbing and applying cutting-edge knowledge, whether developed in Boston or Beijing, requires maintaining research institutions and trained workforces, as well as active participation in global networks. This is not a passive process. You cannot free-ride on Chinese science if you have dismantled the institutional and human capital needed to evaluate, translate and apply it.

A nation that hollows out its research base not only falls behind but also progressively loses its ability to benefit from science, including in technologies it is already able to access.

Talent compounds the problem. The U.S. built its scientific dominance partly by being the destination of choice for the world’s most ambitious researchers. The U.S. leads the world in Nobel Prizes, but, notably, 40% of the Nobel Prizes in chemistry, medicine and physics that were awarded to Americans since 2000 were won by immigrants. The flow of foreign talent is not guaranteed. It follows opportunity, funding and openness.

Researchers who might once have come to American universities are finding welcoming alternatives in Europe, China and elsewhere.

Around 75% of U.S. researchers are considering leaving the country due to the Trump administration’s funding policies.

A decision point, not a trend line

China’s milestone in research funding arrives at a moment when the U.S. is deciding whether to maintain its scientific leadership.

Scientific infrastructure does not decline gradually and recover on demand. Doctoral scientists represent a decade or more of training; tacit laboratory knowledge lives in working research groups, not in documents. Once talented young researchers leave the pipeline – or international talent redirects to other countries – the capacity is very hard to rebuild. Early warning signs are already visible in the U.S. system: thousands of NIH grants terminated, a collapse in international applications and an exodus of early-career scientists.

What is at stake is not a ranking. It is whether the U.S. maintains the institutional capacity – the universities, the federal laboratories, the graduate pipelines, the culture of open inquiry – that made those returns on scientific investment possible in the first place.

China’s rise did not create this decision point, although it brings it into sharp relief. Does the U.S. still want to lead in science? The Information Technology and Innovation Foundation, a nonprofit think tank, estimates that a 20% cut in federal research and development starting in fiscal year 2026 would shrink the U.S. economy by nearly $1 trillion over 10 years and reduce tax revenue by around $250 billion. Others point out that the scientific enterprise has contributed at least half of U.S. economic growth.

That is a lot to lose.

Caroline Wagner, Professor of Public Affairs, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• Sora’s downfall signals broader problems with AI’s creative utility

• Who's Tuned In (And Out) of Science And Tech?


by External Contributor via Digital Information World

Tuesday, April 28, 2026

Sora’s downfall signals broader problems with AI’s creative utility

Ahmed Elgammal, Rutgers University
Image: Sora Web. Credit: DIW

OpenAI officially discontinued its video generation tool, Sora, on April 26, 2026.

I’m a computer scientist who’s been developing AI tools and studying their evolution and adoption for the past decade, and I wasn’t surprised by OpenAI’s decision to shut down Sora.

To me, the challenges Sora faced reflect deeper limitations of AI’s creative capacities that are becoming harder to ignore.

Problems from the start

OpenAI unveiled Sora on Feb. 15, 2024, as an AI tool that gave users the ability to create short videos from text prompts. To pull this off, the technology essentially predicted how images would change from frame to frame based on what it had “learned” from millions of hours of existing footage.

But from the start, there were problems with it.

First, Sora was expensive to run. Generating video requires far more computing power than creating text or images, making it challenging for OpenAI to keep costs under control. Nor was it bringing in enough revenue to justify those costs, especially compared with other AI products that are cheaper to operate and easier to monetize. According to The Wall Street Journal, Sora was losing US$1 million per day.

Second, the early hype – TechPowerUp declared Sora the “Text-to-Video AI Model Beyond Our Wildest Imagination” – didn’t seem to translate into lasting engagement. After the initial buzz faded, users seemed to struggle to find consistent, practical uses for the technology.

Finally, tools like Sora exist in a legal gray area, where concerns about copyright and ownership of visual content force companies into a cautious, defensive stance. In practice, this has meant strict prompt controls that prevent references to copyrighted characters or films; blocking outputs that look like living people or intellectual property; and establishing legal safeguards, such as watermarks and metadata tags, on outputs.

Put together, these challenges likely forced OpenAI to redirect its resources elsewhere, especially as competition across the AI industry has intensified.

A symptom of larger issues

But there’s also a pattern that isn’t unique to Sora’s failure to thrive.

Many generative AI programs geared toward creative fields have encountered a common problem: rapid initial adoption, followed by declining sustained engagement.

Many users appear to try image and video generation tools like Midjourney and Stability AI out of curiosity. But if stagnating traffic data is any indication, few creative professionals seem to be integrating them into their regular workflows.

OpenAI and other companies rolled out prompt-based image and video tools with the hope that the efficiency of their product would provide an attractive alternative to the time-consuming process of producing films, photographs and graphic design. Instead of spending a lot of time and money filming a video, you could simply write a prompt, and AI – trained on billions of pieces of human-generated content – would render it for you.

Generative AI’s counter-creative bias

So what happened?

AI-generated outputs of text and images can look impressively real. The bots seem to follow instructions well and appear to give users control.

But there’s an important catch. Under the hood, these systems are built to imitate what they’ve already seen, and that’s especially the case for images and videos. They’ve been trained on massive collections of visual data and rewarded for producing results that closely match the patterns contained in those visuals. That’s why the outputs can look so realistic and recognizable.

Because they’re optimized to produce familiar outputs, they end up suppressing novelty. This, it goes without saying, doesn’t lend itself to true creative breakthroughs. Even the benchmarks used by researchers to evaluate the performance of such systems tend to favor outputs that look “right,” rather than those that truly shatter expectations or take an image to the next level.

Furthermore, these systems don’t learn from a vast repository of data that encompasses the visual world and all human artistic outputs. Instead, the data used to train these models has often been curated to favor certain images and videos that are polished, clear and visually appealing. In effect, the training process teaches models not just what things look like, but what good-looking content is supposed to be.

In a recent paper, I highlighted this problem, which I call the “counter-creative bias” – the tendency of these systems to favor familiarity over meaningful novelty.

Counter-creative bias explains why so many AI-generated images and videos, even when they vary in subject or style, end up sharing a similar look and feel. And I think it explains why so many artists and other creatives don’t seem to be widely adopting these tools. Good creative work involves pushing boundaries, not simply coming up with something that’s passable and palatable.

The limits of prompting

There’s another problem with these tools.

When someone uses AI to generate an image or a video via a prompt, they’re already operating within the constraints of language.

An artist who wishes to use AI has to learn how to write elaborate prompts with the right keywords that compel the system to generate the desired composition, colors, lighting and aesthetics. To create an interesting image or a video, you have to cleverly manipulate words, combine odd concepts and deploy metaphors. It’s an entirely different skill set.

This was obvious from the beginning. When OpenAI launched DALL-E 2 in July 2022, the company demonstrated the range of interesting images by using crafted prompts like “an espresso machine that makes coffee from human souls” or “panda mad scientist mixing sparkling chemicals.”

The sources of creativity in these examples were the human-written prompts themselves, not how the AI generated the image. To make something visually creative, you have to become clever at manipulating words. Users are forced to fiddle with any number of prompt variations to reach a desired or even satisfactory result.

Wading through the slop

There’s a reason Merriam-Webster and the American Dialect Society chose “slop” as their 2025 words of the year: The internet is brimming with viral AI-generated images of world leaders and wide-eyed children, designed to coax engagement but bereft of creative value. The counter-creative bias inherent to these models is reflected in the fact that many people are becoming accustomed to an AI aesthetic characterized by hyper-polished, well-lit, perfectly composed, generically pretty images.

There was a time when AI art was seen as a burgeoning form of conceptual art.

In the summer of 2019, London’s Barbican Centre included AI art in its exhibition, “AI: More Than Human.” In November of that year, the National Museum of China in Beijing showcased 120 AI-integrated artworks, which were viewed by over 1 million people. I championed some of the artists incorporating this new technology into their work.

Back then, creating art with AI involved constant experimentation. The AI these artists used hadn’t been trained on billions of copyrighted, curated images from the internet. Instead, artists trained AI models using their own images and inspiration, while AI was allowed to manipulate pixels free of any language constraints. No universal aesthetic emerged; every AI artist seemed to come up with something unique, and their existing artistic identity shined through the medium, rather than becoming overshadowed by it.

That hopeful period appears to be over. Once pixels had to be rendered through the control of language, I think AI’s potential as an artistic medium was hampered. And now we’re left with a technology that seems best suited for memes, spam, deepfakes and porn.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Canva Fixes AI Design Tool After Reported ‘Palestine’ to ‘Ukraine’ Change, Audit Underway

• When AI relationships trigger ‘delusional spirals’


by External Contributor via Digital Information World

Canva Fixes Design Tool After Reported “Palestine” to “Ukraine” Change, Audit Underway

Canva says it has fixed an issue in its Magic Layers feature after users reported that the tool changed the phrase “Cats for Palestine” to “Cats for Ukraine” inside a design.

"this shouldn't have happened and we're very sorry for your experience!", Canva said in a response to a user.

The issue was first highlighted on X by user @ros_ie9 and was later reported by The Verge and Gizmodo this week. According to those reports, the behavior appeared to affect the word “Palestine” specifically, while related words such as “Gaza” or “Israel” were reportedly unaffected.

Image: ros_ie9 / X

A separate statement provided to Gizmodo said the company had launched an audit into how the issue happened and was reviewing its internal testing processes to detect and prevent unexpected outputs in the future. Canva also said the problem was isolated and did not affect designs broadly.

The company has not publicly explained what caused the substitution or which technical layer triggered it.

That question has drawn attention because Magic Layers is promoted as a tool for converting flat designs into editable layers, allowing users to manually adjust text and visual elements after processing. Users reported that the wording changed during that process without being requested.

The incident has also received attention because Canva publicly promotes its AI governance framework, Canva Shield, as focused on safe, fair, and secure AI. In its January 2026 update, Canva says its generative AI products go through "rigorous safety reviews", certain prompts involving political topics are automatically moderated, and the company works to reduce bias and improve fairness in AI outputs.

Online discussion following the reports focused on whether the issue reflected a model error, moderation behavior, or another system failure. Some users argued that AI tools should preserve original content exactly when performing layout conversion, while others said companies remain responsible for unexpected outputs regardless of whether the issue came from training data, moderation layers, or external model providers.

The incident follows previous criticism of wider AI systems across the technology and social media industry involving disputed or politically sensitive outputs related to Palestinian Muslims, including earlier concerns involving chatbot responses and image generation tools from other major platforms.

DIW has contacted Canva with follow-up questions about the root cause of the Magic Layers issue, whether third-party AI systems were involved, how the company’s audit classified the problem, and what specific safeguards have been added beyond the additional checks already mentioned. Canva has not publicly specified a timeline for the completion or publication of the audit findings. No further response had been received at the time of publication.

Note: This post was improved using a generative AI tool.

Read next: When AI relationships trigger ‘delusional spirals’
by Asim BN via Digital Information World

Monday, April 27, 2026

When AI relationships trigger ‘delusional spirals’

By Andrew Myers

New Stanford research reveals how chatbot bonds can create dangerous feedback loops – and offers recommendations to mitigate harm.
Image: Luke Jones - unsplash

Perhaps to the surprise of their creators, large language models have become confidants, therapists, and, for some, intimate partners to real human users. In a new paper, AI researchers at Stanford studied verbatim transcripts of 19 real conversations between humans and chatbots to understand how these relationships arise, evolve, and, too often, devolve into troubling outcomes the researchers describe as “delusional spirals.”

These conversations can spin out of control as AI amplifies the user’s distorted beliefs and motivations, leading some people to take real-world, dangerous actions.

“People are really believing the AI,” said Jared Moore, a PhD candidate in computer science at Stanford University and first author of the paper, which will be presented at the ACM FAccT Conference. “As you read through the transcripts, you see some users think that they’ve found a uniquely conscious chatbot.”

Programmed to please

Part of the problem, the researchers say, is that AI models are trained from the outset to “align” with human interests. AI has been programmed to please and to validate. When combined with AI’s well-known tendency to hallucinate, it adds up to a potentially toxic formula.

“AI can be sycophantic,” Moore says. “And that’s a problem for some users.”

The researchers say delusional spirals result from a pattern in which a human presents an unusual, grandiose, paranoid, or wholly imaginary idea and the model responds with affirmation, encouragement, or, in some cases, aid in constructing the person’s delusional world, all while offering intimate reassurances that can sound all too human.

Things then escalate as the model offers an endless stream of attention, empathy, and reassurance without the all-important pushback a human confidant, therapist, or lover would typically provide.

These stakes are not abstract. In the team’s dataset, Moore and colleagues witnessed how delusional spirals led to ruined relationships and careers – or worse. In one case, a participant died by suicide when the conversation grew “dark and harmful,” Moore explained.

“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence, and projecting compassion and warmth,” Moore said. “This can be destabilizing to a user who is primed for delusion.”

Warning signs of delusional spirals

Moore says delusional spirals share a few specific hallmarks: an AI that encourages grandeur and uses affectionate interpersonal language, and a human’s misperception of AI sentience. Meanwhile, chatbots are ill-equipped to respond to suicidal and violent thoughts.

It is less a matter of “the evil AI,” Moore said, than of a miscalibrated social calculus built into the models. Systems tend to extend conversations and defer to their interlocutors – behaviors meant to make them better assistants. At the same time, they have no way to tap the brakes on a spiraling conversation or to route an unstable person toward help.

“There is a mismatch between how people actually use these systems and what many chatbot developers intended them – trained them – to be,” Moore says.

What can be done

In light of these clear and concerning risks, Moore and colleagues conclude their paper with remedial recommendations. AI developers could include metrics in their testing of a model’s tendency to facilitate delusional spirals and, potentially, add detection filters to the models themselves that raise red flags on potentially harmful uses of AI. The researchers acknowledge that privacy concerns could stand in the way of that strategy.

“I think AI developers have a vested interest in addressing this concern about the use of their models in ways they likely never even intended or imagined,” Moore noted.

On a policy front, the researchers say that lawmakers should reframe alignment as a public-health issue requiring new standards for flagging sensitive conversations, greater transparency into AI “safety” tuning, and clear rules for crisis escalation when a user demonstrates tendencies toward self‑harm or violence.

“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” said Nick Haber, an assistant professor at Stanford Graduate School of Education and a senior author of the study. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

This paper was partially funded by the Stanford Institute for Human-Centered AI.

This story was originally published by Stanford HAI.

This post was originally published on Stanford Report and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• How emoji use at work can determine how competent your colleagues think you are

• You probably wouldn’t notice if an AI chatbot slipped ads into its responses

by External Contributor via Digital Information World