Friday, June 27, 2025

Meta's Threads Adds Independent Content Filters, Breaking More Ties with Instagram

Meta continues to carve out a separate identity for Threads, and its latest feature update takes another step in that direction. The app has introduced a dedicated content filter system that no longer relies on Instagram’s settings, giving users finer control over what shows up in their Threads experience.

Previously, people using Threads had to share one universal filter setting with Instagram, meaning any phrase or emoji blocked on one platform would be hidden on the other as well. With the new update, that link has been removed. Now, users can adjust filters within Threads itself, without affecting their Instagram preferences.


The feature, called Hidden Words, acts as a filter for unwanted content across several parts of the app. Whether it's a comment on your profile, a reply in a thread, or something in your search results, you can block words, phrases, or emojis you’d rather not see. Threads also lets you mute certain topics for up to 30 days, which can be helpful if you want to avoid spoilers or distance yourself from overhyped discussions for a while.
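Conceptually, this kind of filter is simple substring matching against a user-maintained blocklist. The sketch below is a hypothetical illustration only (Meta has not published how Hidden Words matches terms); the function and sample data are invented for clarity:

```python
def is_hidden(text: str, hidden_words: set[str]) -> bool:
    """Return True if the text contains any blocked word, phrase, or emoji.

    A minimal, case-insensitive sketch of a Hidden Words-style filter;
    the real matching rules are not public.
    """
    lowered = text.lower()
    return any(term.lower() in lowered for term in hidden_words)

# Filter a feed of replies against a user's blocklist.
posts = ["Great match today!", "SPOILER: the hero dies", "🎉 congrats"]
blocked = {"spoiler", "🎉"}
visible = [p for p in posts if not is_hidden(p, blocked)]
print(visible)  # ['Great match today!']
```

The same check could be applied wherever content surfaces, whether in profile comments, thread replies, or search results, which matches how the feature is described as working across the app.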

What’s new in this update is the ability to manage those filters in groups, making it easier to adjust entire categories of content at once. The platform’s leadership says the move is part of a broader effort to give users more influence over how they engage with the app, so they feel more at ease sharing and participating.

Since Threads made its debut in 2023, the app has gradually been moving out from Instagram’s shadow. Meta has started testing a standalone messaging system within Threads, and users can now deactivate a Threads profile without touching their Instagram account. These changes show that Meta is still reshaping Threads to be a space with its own rules and rhythms, rather than just a secondary feed for Instagram users.

Read next: YouTube Tests AI Tools That Could Change How Users Search, And How Creators Earn
by Irfan Ahmad via Digital Information World

OpenAI Disagrees with Dire Prediction About AI Replacing Entry-Level Jobs

OpenAI’s leadership doesn’t share the alarmist view that artificial intelligence is on track to wipe out half of all entry-level white-collar jobs, at least not in the short term.

Speaking at a live event hosted by The New York Times, OpenAI COO Brad Lightcap responded directly to a recent claim made by Anthropic CEO Dario Amodei, who predicted that up to 50% of junior office roles could vanish in the next few years. Lightcap said there’s no clear data supporting that scenario.

OpenAI, he explained, works with companies across nearly every sector. While those businesses are increasingly integrating AI into their workflows, he said they’re not replacing staff in large numbers. In fact, Lightcap argued that much of the fear around job loss misses where real friction is occurring. According to him, the employees who may be most vulnerable are not new hires, but longer-tenured staff who struggle to adapt to modern tools.

He emphasized that AI adoption is still unfolding gradually. So far, businesses are looking to augment their teams, not dismantle them.

Sam Altman, OpenAI’s CEO, offered a slightly more complex view. He doesn’t believe Amodei’s timeline is realistic either, but he didn’t rule out meaningful disruption. Altman acknowledged that some roles will likely disappear and that, compared to earlier waves of innovation, this transition could move faster. But he pointed out that past technologies have generally created more jobs than they destroyed, and he expects a similar outcome with AI.

Altman suggested that younger workers, especially those already fluent in AI tools, may be better positioned for this shift than some expect. He also stressed that societal change tends to move more slowly than technology does. Even when powerful tools exist, companies and institutions often take years to adjust, something he sees as a stabilizing force.

Both leaders agreed that the anxiety around AI is valid, particularly for people whose roles feel uncertain. But they pushed back on the idea that the job market is already collapsing. They described today’s AI as transformative but not yet capable of sweeping away entire industries.

Lightcap also noted a broader trend: while some feared AI would shrink engineering teams, many businesses are now asking for more developers, not fewer. With AI boosting output, companies are scaling faster, sometimes needing more staff, not less, to keep up with demand.

Altman, for his part, called for empathy and preparation. He didn’t deny that change will be painful for some, but he remains optimistic about long-term outcomes. The challenge, as he sees it, is helping people move with the technology, not against it.

The conversation pointed to a more grounded reality than the one some AI critics or enthusiasts describe. For OpenAI’s top leadership, the tools may be evolving quickly, but the way humans and organizations absorb them is more gradual, and that, they say, makes all the difference.


Image: DIW-Aigen

Read next: Consumers Are Asking AI Chatbots About More Than Just Tech, New Data Shows
by Irfan Ahmad via Digital Information World

Thursday, June 26, 2025

Consumers Are Asking AI Chatbots About More Than Just Tech, New Data Shows

The way people use AI chatbots is shifting. A year ago, most users turned to tools like OpenAI's ChatGPT for coding help and software tasks. That made sense since early adopters were largely from the tech crowd. But recent data shows this pattern is changing fast.

By early 2025, software-related prompts had dropped sharply. Back in spring 2024, software development made up 44% of all user prompts. Now it's down to 29%. In its place, a wide mix of new topics has appeared. People are asking more about personal finance, economics, entertainment, history, and education.

Software Prompts Dip as Finance and History Rise in ChatGPT Usage Trends

The biggest jump has come from people trying to sort out their finances. Over the past year, prompts about money, taxes, and the wider economy have grown faster than any other category. These now account for 13% of prompts, up from just 4% last year. More people seem to be asking chatbots to help explain things like inflation, tariffs, or how to handle their budgets.

Interest in entertainment, history, and general learning has picked up too. Chatbots are becoming a place for people to explore not just work topics but everyday questions and personal interests. At the same time, prompts about artificial intelligence and machine learning have slipped a little, from 15% to 14%. People aren’t just curious about AI anymore; they’re more focused on what they can do with it.

This shift shows that AI tools are now reaching far beyond the early tech-savvy crowd. The kinds of questions people are asking reveal what’s on their minds, what they find confusing, and what choices they’re trying to make.

For businesses, this change could be useful. Prompt data may soon become a key way to track what people are interested in across different industries. Companies might start using this insight to follow new trends or to understand what their customers are really thinking about.

The growing variety of prompts paints a simple picture: AI chatbots aren’t just for coding anymore. They’re turning into everyday tools that help people make sense of their world.

ChatGPT Prompt Topics    March 2024 - April 2024    March 2025 - April 2025
Software Development 44% 29%
History & Society 13% 15%
AI & Machine Learning 15% 14%
Economics, Finance, & Tax 4% 13%
Entertainment 6% 8%
Education & Academia 6% 7%
Tech Brands & Platforms 4% 5%
Law & Legal 3% 4%
US Politics & Government 2% 3%
Climate & Environment 3% 2%

Source: Sensor Tower - How ChatGPT is Reshaping Consumer Life
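The shifts in the table can be ranked by percentage-point change with a short script. The figures below are copied from the Sensor Tower data above; the variable names are just illustrative:

```python
# Prompt-share figures from the table above, as
# category: (share in Mar-Apr 2024, share in Mar-Apr 2025).
topics = {
    "Software Development": (44, 29),
    "History & Society": (13, 15),
    "AI & Machine Learning": (15, 14),
    "Economics, Finance, & Tax": (4, 13),
    "Entertainment": (6, 8),
    "Education & Academia": (6, 7),
    "Tech Brands & Platforms": (4, 5),
    "Law & Legal": (3, 4),
    "US Politics & Government": (2, 3),
    "Climate & Environment": (3, 2),
}

# Rank categories by percentage-point change, biggest gain first.
changes = sorted(
    ((name, new - old) for name, (old, new) in topics.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, delta in changes:
    print(f"{name}: {delta:+d} pp")
```

Running this puts Economics, Finance, & Tax at the top with a nine-point gain and Software Development at the bottom with a fifteen-point drop, matching the trend described in the article.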

Read next: Researchers Examine How AI Interprets Human Personality Using Language and Psychological Models
by Irfan Ahmad via Digital Information World

Court Sides With Meta in Authors’ AI Lawsuit, Dismisses Copyright Claims

A recent court decision has handed Meta a significant win in a copyright lawsuit brought by a group of 13 authors, among them Sarah Silverman. These authors had argued that the company misused their books to train its artificial intelligence systems, but the federal judge overseeing the case dismissed their claims.

The ruling, delivered by Judge Vince Chhabria, effectively ended the dispute without the need for a jury. In his assessment, the judge found that Meta’s use of the copyrighted material fell within the boundaries of fair use, making it legally permissible in this particular situation.

This outcome closely follows another courtroom victory for Anthropic in a similar copyright case, adding to a growing pattern that appears to be favoring technology companies. For years, tech firms have battled accusations that using copyrighted works to train AI models infringes on intellectual property rights. Now, some recent court decisions are beginning to lean in their favor. Still, these rulings do not settle the matter in a broad sense.

In fact, the judge emphasized that his conclusion only applied to this specific case. He pointed out that the authors who brought the lawsuit had struggled to frame the right arguments and had not provided enough convincing evidence to support their position. The decision does not give tech companies blanket approval to train AI models on any copyrighted material without consequence. It simply reflects the failure of the authors to build a strong enough case this time.

One of the key reasons the judge sided with Meta was the view that the company’s AI models were not just copying the books but using them in ways that changed their original purpose. This idea of transformation plays an important role when courts look at whether the use of copyrighted work is fair. Another factor that worked against the authors was the absence of clear evidence showing that Meta’s actions had damaged the commercial value of their books. Without demonstrating real harm to their market, the authors' claims were left without solid ground.

While this case focused on the use of books, it is far from the end of the legal road. Other lawsuits are still moving forward, including high-profile cases where companies like OpenAI and Microsoft are facing challenges for training AI models on news articles. At the same time, firms such as Midjourney are being sued over the use of films and television shows in their AI training processes.

The judge noted that each of these cases will depend heavily on their specific details. Some types of creative works may stand on shakier ground when it comes to fair use, particularly when AI-generated outputs could compete more directly with the original products. For example, the news industry might face greater risks of market disruption compared to other creative fields.

For now, the Meta decision is a notable step in an evolving debate, but it stops short of providing clear rules for everyone. More complex battles over AI and copyright are still ahead.


Image: DIW-Aigen

Read next: Google’s Gemini AI Will Access Phone, Messages, WhatsApp on Android Regardless of Activity Setting
by Irfan Ahmad via Digital Information World

Google’s Gemini AI Will Access Phone, Messages, WhatsApp on Android Regardless of Activity Setting

Google is once again stepping deeper into the private spaces of Android phones. This time, its Gemini AI system is preparing to weave itself more tightly into the daily apps people use, whether or not they’ve agreed to it. Starting from July 7, 2025, Gemini will begin working alongside core apps like Phone, Messages, WhatsApp, and various system utilities, regardless of whether a user has turned Gemini’s app activity tracking on or off.

At first glance, this may not sound too different from Google’s usual updates. Yet for many users and privacy advocates, this one feels like another chapter in a familiar story. Over the years, Google has repeatedly positioned itself as both the gateway to convenient digital life and the quiet collector of that life’s details. Time and again, the company has blurred the lines between improving services and expanding surveillance. The search engine years. The Gmail scans. The location history that kept ticking even when paused. Google’s track record shows a habit of designing tools that serve users but also quietly harvest data, often in ways that are only fully understood after headlines force a closer look.

This Gemini update seems to follow that same well-trodden path. Google says Gemini will now help users perform simple tasks like making calls or sending texts without the need to store their conversations in long-term activity logs. Before, using Gemini’s phone and messaging features required that history tracking be switched on, meaning Google could keep those interactions beyond a brief window. Now, the company says those same features will be available even if users have disabled the Gemini Apps Activity setting. Google maintains that chats won’t be saved for more than three days in these cases and won’t be used to train its AI models.

Some have argued that this change is actually a step forward for privacy. It allows basic assistant functions to work without long-term data storage. Others see it differently. The concern is less about what is written in Google’s policy updates and more about what happens behind the familiar fog of vague wording. When the company says Gemini will “help you use” these apps, what does that really mean? Will Gemini quietly scan message contents? Will it access call logs? Will it peek into WhatsApp exchanges under the hood? The language is open-ended, leaving many unsure where Gemini’s reach will stop.

It doesn’t help that the notification email linked users to a privacy hub that offered little practical guidance. Some Android owners have yet to receive the notice at all, adding to the confusion. Google has offered some reassurance, pointing to the ability to turn off these app connections, but the steps to do so aren’t exactly front and center. Even now, many users remain in the dark about what’s changing and how to control it.

This is not the first time Google has rolled out a new feature wrapped in flexibility on the surface but tied to deeper system integration underneath. Across the wider tech industry, this pattern is not unique. Companies often introduce helpful new tools with quiet trade-offs buried in the details. Over the past decade, the push to make digital assistants smarter has steadily chipped away at user control. Features arrive switched on by default, and opting out is rarely as simple as it sounds.

Image: DIW-Aigen

Read next: Study Reveals Gaps in AI Moderation as Youth Slang Outpaces Detection Systems
by Irfan Ahmad via Digital Information World

Wednesday, June 25, 2025

Study Reveals Gaps in AI Moderation as Youth Slang Outpaces Detection Systems

Young people have always felt misunderstood by the adults around them. That’s not new. What’s changing is the gap. It’s widening. Now even artificial intelligence can’t keep up with Gen Alpha.

At a recent tech conference in Athens focused on fairness and accountability, a student named Manisha Mehta presented research that points to a surprising issue. Kids’ fast-changing slang is often completely missed by the AI systems meant to keep them safe online.

Mehta's study looked at how well kids, their parents, and professional moderators could handle modern slang, comparing them to four well-known AI language tools developed by OpenAI, Google, Anthropic, and Meta. The goal was simple: to see whether people and machines could figure out what the slang actually meant, notice when the tone changed, and catch possible hidden risks.

To put the research together, Mehta worked with 24 classmates to build a list of 100 Gen Alpha phrases. Some phrases could either support or tease, depending on how and when they were used. Others came straight from gaming and social media circles. Expressions like “let him cook” or “ate that up” could either cheer someone on or poke fun at them. Words like “got ratioed” or “secure the bag” were pulled from the fast-moving world of online chats and games.

One of the key things that stood out was how often adults completely missed what these phrases meant. Parents and moderators were often left guessing, while the AI tools weren’t much better. The study makes it clear: many of the systems meant to keep kids safe simply don’t understand the language they’re using.

When the kids were tested on meanings, shifting tones, and spotting hidden harm, they almost always got it right. Their scores stayed high across the board. Parents, though, struggled badly. They often missed key meanings and failed to notice when a friendly phrase turned hurtful. Professional moderators didn’t do much better.

What this really shows is that adults, whether they’re at home or working to keep social platforms safe, can’t fully protect kids if they don’t understand the language those kids are using. A parent might only catch one out of every three moments when their child is quietly mocked or bullied in Instagram comments.

When tested on the same slang, the four AI tools landed roughly where the parents did. This suggests the data used to train these systems probably comes from more adult-focused language. Since most of what’s written in books or online comes from older people, it makes sense that these AI tools haven’t fully absorbed the latest slang from teenagers.

There’s more at stake here than just missed meanings. Gen Alpha, born in the years after smartphones became part of everyday life, has grown up fully connected to the internet. Many of their earliest social experiences have happened online, far from the view of parents and teachers. The systems built to watch over them can’t easily keep up, especially since much of the moderation now depends on automated tools. Parents can’t watch every post or chat, and even professional moderators miss things hidden in what seems like harmless talk. Meanwhile, kids’ slang keeps moving so quickly that what’s popular today could easily sound old in just a few months.

The study points to a subtle but growing gap. It’s not just a difference in age. It’s a difference in language. And when children and the systems meant to protect them don’t speak the same language, danger can easily slip through unseen.

Image: DIW-Aigen

Read next: Human vs. AI Perception: Research Uncovers Striking Differences in Object Recognition
by Asim BN via Digital Information World

Tuesday, June 24, 2025

Google Chrome for Android Finally Lets You Move the Address Bar After a Decade

Google is adding something small to Chrome on Android, but for many people, it might make a real difference. The address bar, which has always sat at the top of the screen in Chrome’s mobile version, can now be moved. If you want, you can place it at the bottom, where it may be easier to reach, especially on phones with larger displays.

The update is not arriving for everyone at once. It’s starting to appear now for some people, and Google is gradually making it available more widely. When it turns up on your device, you can press and hold the address bar to bring up an option that moves it to the lower edge of the screen. There’s also a setting inside the browser’s menu where you can make the same change, if you prefer to adjust it there.

For Android users, the address bar has remained in the same place for more than a decade. Chrome first launched on Android back in 2012, and in all that time, the bar at the top has been the usual way to browse. Although Google has changed and improved some things over the years, like letting the bar disappear when you scroll upwards to create more space on the page, the position of the bar itself has stayed the same.

For some, reaching the top of the screen can be awkward, particularly when using the phone with one hand. Moving the bar to the bottom might help with that. It’s not really a new idea. Google has tried this approach before, though in those earlier tests, the feature never became a regular part of Chrome for Android.

Interestingly, people using Chrome on iPhones have already had a similar choice. On iOS, it’s possible to switch between a top and bottom bar, either by pressing and holding the bar or by using the browser’s settings. That version has offered this flexibility for a while.

It might come as a surprise that Google is only now bringing the option properly to Android. Other browsers added it quite some time ago. Windows Phone, long gone from the market, was already giving users a bottom bar back in 2012. Apple’s Safari browser introduced a similar change in 2021, letting people move the bar for easier access.

Even so, this update means Chrome for Android is now catching up, giving people the chance to choose a layout that works better for them. For many users, having that choice may make everyday browsing feel a bit more comfortable.


Read next: AI Firms Face New Legal Boundaries as Court Differentiates Fair Use from Copyright Theft
by Irfan Ahmad via Digital Information World