Friday, May 16, 2025

Codex Arrives in ChatGPT as OpenAI’s New Assistant for Developers Writing and Reviewing Code

After much anticipation, OpenAI has released a research preview of its new coding assistant, Codex, a tool designed to help seasoned developers hand off repetitive programming chores to an AI that not only writes usable code but also explains each decision it makes along the way.

New Codex Tool From OpenAI Lets Developers Automate Code With Built-In Reasoning and Project Context

Accessible directly through the ChatGPT interface, Codex appears as a new sidebar tool within the web app. Developers interact with it by entering a prompt, then choosing whether they want code generation or guidance. With either path, the tool responds within a simulated environment built to mirror the user's actual development stack.

Each task runs inside an isolated container that pulls in the user's current codebase. This gives Codex enough context to offer relevant, accurate output rather than vague or out-of-place suggestions. If users want to guide Codex’s behavior further, they can add a special AGENTS.md file to their repository. That document acts like a manual tailored for the AI—defining architecture details, coding conventions, and project-specific notes.
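
OpenAI hasn't published a fixed schema for AGENTS.md, so the sketch below is only a hypothetical example of the kind of guidance a team might put in one; the directory names, commands, and rules in it are illustrative assumptions, not part of any official spec.

```
# AGENTS.md (hypothetical example)

## Architecture
- Monorepo: `api/` is a Flask service, `web/` is a React front end.
- All database access goes through `api/db/repository.py`; never query directly.

## Conventions
- Python: black formatting, type hints required on public functions.
- Tests live next to the code as `test_*.py`; run them with `pytest -q`.

## Notes for the agent
- Do not modify files under `migrations/` without being asked.
- Prefer small, reviewable diffs over sweeping refactors.
```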



This system runs on a specialized model called codex-1, which stems from OpenAI’s o3 family but is further refined through reinforcement learning. It was trained not just to output code but to evaluate, test, and revise it—closer to how real programmers work when iterating.

OpenAI isn’t ignoring the skepticism around AI coding tools. Developers have long pointed out that these systems can produce clumsy, error-prone, or even unsafe code—especially when they're asked to generate entire scripts rather than just assist with small pieces. Codex-1 tries to tackle those concerns head-on by making its process visible. Instead of spitting out answers in a black box, it works step by step, showing how it arrives at decisions and flagging uncertainties when needed.

That said, OpenAI urges users not to treat the system as a finished product. Any code it produces still needs thorough human review before it’s used in production. Codex is meant to assist—not replace—the developer’s judgment.

The tool is already rolling out to users on ChatGPT Pro, Team, and Enterprise plans, with support for educational and Plus accounts on the horizon. During the early preview window, access is unrestricted and free for those accounts, though OpenAI has confirmed that rate caps and a pricing model will arrive later as usage scales.

Read next: ChatGPT Usage Statistics: Numbers Behind Its Worldwide Growth and Reach
by Irfan Ahmad via Digital Information World

Deepfake Technology Explained: Risks, Uses, and How to Detect Fake Videos

While technology advances at breakneck speed, misinformation and disinformation are keeping pace. Deepfakes are becoming increasingly common on social media platforms like YouTube, Facebook, TikTok, and Instagram. Recently, a deepfake video of Ukrainian President Volodymyr Zelensky circulated on social media sites in which he appeared to tell Ukrainian soldiers to surrender to Russian forces. Even though the video was clearly identifiable as a deepfake, it raised the question of how this technology can be used to spread false information, especially if the media is blindsided as well. A similar situation unfolded recently amid heightened tensions between India and Pakistan, where manipulated videos and misleading posts circulated widely, falsely portraying cross-border military actions. These viral pieces of content not only inflamed public sentiment but also risked escalating conflict based on fabricated narratives.

Deepfakes didn't start with politics; the earliest viral examples targeted celebrities like Taylor Swift and Gal Gadot. Soon, many companies started using the technology, and one even let people animate pictures of their deceased loved ones to bring them to life. Nowadays, deepfakes have become so advanced that it is hard to differentiate between what's real and what's fake. In this article, we will take a deeper look at deepfakes and at how people and organisations can identify them to keep from being fooled.

What is a Deepfake?

A deepfake is a fake video, image, or audio clip created by AI that seems real; the technology used to create it is called deep learning. These tools make people appear to do or say things they never actually did, and many public figures have become victims of them.

Deepfakes have been multiplying since 2018, with an estimated 85,000 harmful deepfakes in circulation by the end of 2020. They are harmful because they can spread fake news, support unethical political goals, and be used for revenge. Creating powerful, realistic deepfakes once took skilled experts and powerful computers, but thanks to advances in AI and cloud technology, cheap or free apps now let people with no technical knowledge do it, and it's getting harder to separate fake from real.

Are Deepfakes Illegal?

Now that so many people are using deepfakes, another question arises: are they legal? The answer is that creating deepfakes isn't illegal in itself; it depends on how they are used. Many deepfakes are made for entertainment and are quite harmless, but those used to harm or exploit someone, or to spread misinformation, can cross into illegality.

In the EU, the rules around deepfakes are comparatively strict: frameworks such as AI legislation, the GDPR, disinformation policies, and copyright rules can all be applied to deepfake cases. However, none of these regulations addresses deepfakes directly, and it is still unclear whether deepfakes can be used in courts as evidence. Israel recently introduced a law requiring all edited images to be labeled as such, and this could soon be extended to deepfakes as well. Other countries are also trying to deal with deepfake issues on their own, but none has yet produced clear, dedicated regulations.

Deepfakes: How Do They Even Work?

We all know by now that deepfakes are powered by artificial intelligence, but it is specifically neural networks that let them mimic how someone looks or sounds. AI models are trained on datasets of thousands to millions of images, audio clips, and videos, from which they learn to reproduce a person's voice, facial expressions, and movements with striking accuracy.

When it comes to face swapping, the AI builds a digital version of someone's face and then overlays it on another person's, closely mimicking the original individual's facial expressions.
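
The classic face-swap approach is often described as two autoencoders that share a single encoder: the shared encoder learns a general representation of faces, while each identity gets its own decoder. The PyTorch sketch below is a deliberately minimal illustration of that idea, not a working deepfake pipeline; real systems add face alignment, adversarial losses, and far larger networks.

```python
# Minimal sketch of the shared-encoder autoencoder idea behind classic
# face swaps (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                          # one encoder learns "any face"
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs person A with decoder_a and person B with decoder_b.
# Swapping: encode a frame of person A, then decode it with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))     # B's face with A's expression
```

Because both identities pass through the same encoder, the latent code captures pose and expression rather than identity, which is what makes the swap transfer expressions so convincingly.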

Voice cloning works in a similar way: the AI analyzes recordings of a person's voice and then generates new audio that sounds exactly like something the person could have said.

The more visual and audio data the AI has, the more realistic the results become, and as the technology improves, it is becoming harder for people to differentiate between real and fake. There are various deepfake apps, like FaceApp, that are mainly used for fun but can still make realistic and convincing videos of people, even if the user doesn't have any technical skills.

The Purpose of Deepfakes

Deepfakes aren't always used for malicious purposes; they can also serve educational, creative, and empowering causes.

Positive Uses of Deepfakes

In positive settings, deepfakes are used in films to de-age actors or bring back the performances of people who have passed away. They are also used in voiceovers, parodies, and e-books to entertain and to make storytelling more engaging.

Some teachers are using deepfakes in educational settings to bring historical figures 'to life' and make lessons more fun and engaging for students. Deepfakes are also being used in marketing, virtual exhibitions, and presentations, and even criminal investigators have used them to aid communication and analysis.

Negative Uses of Deepfakes

On the other side, deepfakes are also being used in negative and exploitative ways. They can enable identity theft and fraud, and be used to blackmail people over things they never said or did. Deepfakes also spread fake news that looks believable and real, especially during a conflict or a political event.

Deepfakes have also been used for wartime manipulation, as seen during the Ukraine war when the fake video of the President circulated. Another striking example came from actor Jordan Peele, who used a deepfake to make it appear that Barack Obama was insulting Donald Trump, a demonstration of just how destructive the technology could be.

Spotting a Deepfake

Even though deepfakes have become more real than ever, there are still some signs that can help you spot them even if you aren't a tech expert. The following are some ways to detect a deepfake:

1- Check the Source

Before believing anything you see on the internet, always ask the question: Where does the video or the image come from?

If it comes from an unknown or suspicious account, be cautious and do not take its authenticity at face value. Check whether it's a fan or parody account or a trusted news outlet, and always look for reliable sources.

2- Take a Screenshot

Take a screenshot of the image or video and run it through a reverse image search such as Google Images or Bing Visual Search. This can help you trace whether the image is real and find the original version of it. Also check the sources where the image has been posted.
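
If a reverse search turns up a candidate original, you can also compare the two files programmatically. A minimal sketch in Python, assuming the Pillow and ImageHash packages are installed (the file names are placeholders):

```python
# Compare a suspect screenshot against a candidate original using
# perceptual hashing (pip install pillow imagehash).
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_screenshot.png"))
original = imagehash.phash(Image.open("candidate_original.png"))

# Hamming distance between hashes: 0 means visually identical,
# larger values mean more visual difference (edits, crops, face swaps).
distance = suspect - original
print(f"Hash distance: {distance}")
if distance > 10:  # the threshold is a rough heuristic, not a standard
    print("Images differ noticeably; the screenshot may be manipulated.")
```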

3- Fact-Check Yourself

Quickly check if the deepfake is being reported by credible news sources. If no trustworthy news outlet covers the story or event, it is probably fake. Trusting yourself and your gut feeling when it comes to deepfakes is also important.

There are also some other ways that can help you detect a deepfake:

● Look for visual cues like unnatural head or body movements and weird lighting or colors.
● Notice strange eye behavior, such as blinking too hard, odd blurs around the eyes, or eyes that move unnaturally (see the sketch after this list).
● Watch the facial expressions and notice whether the face aligns with the emotion being expressed, or shows no expression at all.
● AI often struggles with fine details like teeth and hair, so take a keen look at them; if they seem too perfect, it can be a deepfake.
● Listen to the audio carefully for anything that feels off or mismatched, such as unnatural silences, and check whether the mouth matches the words.
● Last but not least, always trust your instincts and confirm the reliability of a video before sharing it.
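
Some of these cues can even be checked automatically. One common heuristic for blinking is the eye aspect ratio (EAR): the eye's height relative to its width, computed from facial landmarks. The rough Python sketch below assumes you already have six (x, y) landmark points per eye from any face-landmark detector (for example, points 36-41 of dlib's 68-point model); the example coordinates here are made up:

```python
# Eye aspect ratio (EAR): a rough blink heuristic used in liveness checks.
# Real eyes blink every few seconds (EAR dips sharply, then recovers);
# deepfakes sometimes blink too rarely or at an unnatural rate.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmark points around one eye, ordered around the rim."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # upper-to-lower lid distance
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])   # corner-to-corner width
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Made-up landmark coordinates for illustration:
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], float)

print(eye_aspect_ratio(open_eye))    # ~0.67 -> eye open
print(eye_aspect_ratio(closed_eye))  # ~0.10 -> eye closed (a blink frame)
```

Tracking this value frame by frame across a clip and counting how often it dips gives a crude blink rate, which can then be compared with the few-seconds rhythm of natural blinking.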

Practical Tools and Resources to Detect Deepfakes

There are several tools available today that can help you identify deepfakes and verify the authenticity of videos and images. For example, Deepware Scanner allows users to upload videos and check if they are manipulated using AI detection models. Another great option is InVID, a powerful browser plugin that breaks down videos into key frames and helps analyze images and videos shared on social media. Sensity AI offers advanced deepfake detection services used by companies and researchers to spot manipulated media. Reality Defender uses real-time AI to detect and flag deepfakes on websites and social platforms. Combining these tools with trusted fact-checking sites and critical thinking will help you avoid falling victim to fake content.

Effect of Deepfakes on Companies:

Deepfakes are a serious threat to companies because they are hard to spot and can be used in fraud. Cybercriminals often target celebrities, politicians, and businesses by creating harmful deepfakes of them. For example, a deepfake of a company's CEO saying something controversial or leaking sensitive information can lead to financial losses and damage to the company's reputation.

In 2020, a criminal used a deepfaked voice of a company's CEO to trick a bank director in Dubai into approving a transfer of $35 million. The voice sounded so convincing that the director believed the request was genuine, and many similar scams have since come to light, with criminals using deepfakes to target people for money.

Companies must stay alert as deepfakes become a serious security risk. Training employees to recognize suspicious videos or audio is essential, especially for executives who might be impersonated. Implementing strict verification procedures for sensitive communications—like confirming unusual payment requests via multiple channels—can prevent costly scams. Many organizations are also adopting AI-powered security tools to detect manipulated media before it reaches internal systems. Staying informed and having clear policies around digital content helps reduce the chances of falling victim to deepfake fraud.

Keep Yourself Safe From Deepfakes

Deepfakes began as something simple, but they have become a global threat, capable of producing convincing fake videos and images of people and events. Anyone can now make a deepfake without any expert skills, which has made everyone in the digital world less safe.

If you come across a video or image that seems suspicious, avoid sharing it until you verify its authenticity. Report the content to the platform where it appeared, such as YouTube, Facebook, or TikTok, which have policies against manipulated media. Use the verification tools mentioned earlier to analyze the content or check if credible news sources have covered the event. Sharing unverified deepfakes only helps spread misinformation and can cause real harm, so a cautious approach protects both you and others.

Deepfakes are becoming more realistic, and spotting them is becoming a challenge, but human awareness combined with the right technology can stop a fake before it becomes a bigger threat.


Image: DIW-AIgen

Read next: ChatGPT Usage Statistics: Numbers Behind Its Worldwide Growth and Reach
by Arooj Ahmed via Digital Information World

Eric Schmidt: True AI Power Lies Beyond Language and Chatbots

Most people look at artificial intelligence and stop at chatbots. Words, sentences, clever answers. That’s where the fascination ends. But Eric Schmidt, who once ran Google, sees something else moving underneath, something harder to notice and maybe more important.

He recently spoke at a TED event, and what he pointed to wasn’t flashy. Not language. Not summaries. Planning. Strategy. That’s the shift. While the crowd is watching the show, he says, the real tech is learning how to think a few steps ahead—and do it alone.

Eric Schmidt says AI’s biggest leap isn’t chat but systems learning to plan, adapt, and act independently.
Image: Ted / YT

Systems built on reinforcement learning, a method that lets them try, fail, and adjust, are getting sharper. This isn't about chat anymore. It's about machines figuring out how to act without being told what comes next.

Right now, he says, everyone still thinks in terms of text. But AI is already going further. First it was language. Then came sequence—useful in biology, for instance. Now it’s about forming plans, solving problems in layers. And soon? Machines running whole business operations on their own. Quietly.

It’s Already in Use—Just Not Where Most People Are Looking

While people play around with bots that write poems or emails, deeper tools are being built. Schmidt mentioned a few: o3, Deepseek R1. They don’t just give answers. They try things out. Go back. Rethink. They loop.

He’s using some of them, personally. After investing in the space industry, he wanted to understand it better. Not casually—deeply. Instead of reading textbooks, he asked AI systems to study for him. One spent about 15 minutes pulling together a dense, research-heavy writeup. Schmidt called it “extraordinary.” Not polished, but insightful. Not written for style, but for depth.

Experts Say Planning Is Where Intelligence Starts to Look Real

Schmidt isn’t alone here. Yann LeCun from Meta has said similar things—today’s large language models don’t really think, he argues. They don’t understand space, memory, consequences.

To fix that, LeCun proposed a new kind of model—H-JEPA. Its goal? Let machines figure out steps, not just sentences. The idea is that, to be smart, an AI has to work with goals. It needs to try, adjust, move forward. That’s different from just guessing the next word.

And Schmidt agrees. Without that kind of reasoning, it’s all just smoke.

The Stakes Go Beyond Tech—It’s Geopolitical Now

He also framed the issue in terms of global power. Countries aren’t watching from the sidelines. The U.S. and China, he warned, are in a kind of AI arms race.

Trade restrictions have already changed how China approaches AI. With limited access to advanced chips, they’ve been pushed to rely on algorithmic workarounds—code over hardware. In a way, that’s making them faster.

Schmidt claimed that discussions about AI are happening inside military circles—real ones. Strategic ones. He even mentioned that in some rooms, people have suggested preemptive moves. That’s not science fiction. He thinks it’s five years away, maybe less.

Still, He Sees Room for Hope—If People Keep Up

Despite the tension, Schmidt isn’t all doom. He still sees AI as something that could change lives—in education, medicine, material science. He talked about AI tutors that adapt to the person, tools that help researchers explore what’s still unknown.

But he warned: this isn’t something to watch passively. “Ride the wave,” he said. Not once. Every day. The idea is, if you’re not using it, someone else is—and they’ll move faster than you.

In his mind, this isn’t like electricity or the internet. It’s bigger. AI, especially general intelligence, will shape the next century—or millennium. And it’s already starting to.



Read next: Future ChatGPT Could Store and Analyze Your Entire Digital Life
by Asim BN via Digital Information World

Thursday, May 15, 2025

New Threads Update Makes Link Sharing More Powerful for Creators

For users posting on Threads — especially digital creators, media professionals, and online publishers — links have long felt like an afterthought. Posts containing URLs often struggled to find traction in the app’s recommendation engine, leaving many to wonder whether external content was being subtly discouraged.

This wasn’t entirely speculative. Instagram and Threads executive Adam Mosseri has openly acknowledged that links aren’t a priority in the platform’s content ranking process. While he stopped short of calling it intentional suppression, the outcome remained the same: posts with links tended to fade into the background.

That’s beginning to shift.

Meta, now pivoting harder toward creators as a cornerstone of Threads’ growth, is starting to adjust how links function on the platform. One of the more visible changes is a new option allowing users to embed up to five links directly within their bio — offering more flexibility than the previous single-link constraint.

More consequential, though, is how the app is starting to treat links in posts. Rather than fading out, posts that contain URLs are now being included more frequently in the app’s recommendations. And to make that visibility measurable, Meta has introduced new link-specific analytics that show creators how their audience engages with shared content — including how many taps a given link receives.

Meta, in a recent update, emphasized that the goal is to help creators expand their reach, even beyond the confines of the Threads platform itself.



While this may be welcome news for those hoping to convert their Threads presence into tangible traffic, questions remain. The recommendation engine still operates with limited transparency, and even high-follower accounts often see more reach from suggested posts than from their own followers. That dynamic makes the algorithm’s behavior harder to predict — especially when the performance of link-based content is on the line.

Media organizations and independent publishers, in particular, have reported inconsistent outcomes. A small set of big publishers saw more reliable referral traffic from newer networks like Bluesky, which, despite its smaller size, delivered steadier results. However, a recent uptick in link-driven engagement on Threads has been observed, especially after Meta rolled back its earlier restrictions on recommending political content.

That said, caution remains warranted. Meta’s history is marked by frequent strategic pivots that have, in the past, disrupted the ecosystems of creators and publishers. Even so, with the platform loosening its grip on link visibility and engagement, there may be an emerging window for experimentation — one that could be worth exploring for anyone seeking new channels of audience growth.

Read next:

• Privacy Group Noyb Challenges Meta’s AI Training Plans Citing GDPR Violations

• ChatGPT Usage Statistics: Numbers Behind Its Worldwide Growth and Reach
by Irfan Ahmad via Digital Information World

ChatGPT Usage Statistics: Numbers Behind Its Worldwide Growth and Reach

OpenAI's ChatGPT has become an important part of people's lives, and as of May 2025, it has 170 million daily visitors (based on our estimates), with 800 million weekly active users. People rely a lot on ChatGPT, and it handles over 1 billion queries every day. With the rapid growth, it is expected that OpenAI will earn up to $11 billion in revenue by the end of 2025.

Total Number of ChatGPT users

In February 2025, ChatGPT had 400 million weekly users, a figure that doubled to 800 million weekly users in March 2025. OpenAI’s CEO, Sam Altman, also said during TED 2025 that ChatGPT’s user base had doubled in just a few weeks.

In comparison, ChatGPT had just 50 million active weekly users in January 2023, which jumped to 100 million in August 2023. By October 2024, ChatGPT had 250 million weekly active users, which reached 300 million in December 2024.

Beyond the Hype: ChatGPT Stats That Tell the Real Growth Story

Monthly Active Users (MAUs) of ChatGPT

In January 2023, ChatGPT had 100 million monthly active users, and it reached 173 million monthly active users in April 2023. In 2024, ChatGPT reached 180.5 million monthly active users. As of February 2025, it had 400 million weekly active users. According to a court document revealed by Google, ChatGPT had approximately 600 million monthly active users in March 2025. It is also worth remembering that ChatGPT reached 100 million monthly active users just two months after its launch, showing how quickly people adopted it.

Daily Active Users (DAUs) of ChatGPT

In 2025, ChatGPT averaged 170 million daily visitors. It's important to note that this visit count, calculated by Digital Information World using Similarweb’s April 2025 data, reflects total visits, not unique users. Since one person may visit multiple times, actual daily active user numbers may vary. The daily average of 170 million visits was derived by dividing 5.1 billion monthly visits by 30 days. Most people use ChatGPT regularly, so its daily active users typically range from 148 to 200 million, based on data from tools like SEMRush.
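
For transparency, the estimate is simple division; a quick sketch in Python reproduces it:

```python
# Deriving the daily-visit estimate from Similarweb's April 2025 figure.
monthly_visits = 5_100_000_000       # total visits in April 2025
daily_average = monthly_visits / 30  # spread evenly across 30 days
print(f"{daily_average:,.0f} visits per day")  # 170,000,000
```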

How Many Queries Does ChatGPT Receive per Day?

According to data from OpenAI, ChatGPT handles over 1 billion queries every day. People heavily rely on ChatGPT for their questions, daily tasks, and for conversations.

Within a week of its launch, ChatGPT was already handling 10 million queries every day, and its usage has grown rapidly, as well as its user base.

OpenAI also launched ChatGPT Pro on December 5, 2024; priced at $200 a month, it is aimed at engineers, researchers, and power users who need advanced AI capabilities daily.

User Demographics of ChatGPT

Gender:

There is a balanced gender split among users of ChatGPT, with 54% of its users being male and 46% of them being female, as per SimilarWeb insights.

Age:

Most ChatGPT users are young adults and professionals. Around 55% are between the ages of 18 and 34. Users aged 34–54 make up 32% of ChatGPT's total user base, while those aged 55 and older account for just 13%.

ChatGPT Users by Country

Even though ChatGPT is used globally, some countries have higher numbers of its users. The US has the highest number of ChatGPT users (18.11%), followed by India (7.99%) and Brazil (5.4%).

As per Similarweb, the UK accounts for 3.39% of ChatGPT users, while the Republic of Korea accounts for 3.94%. The remaining 61.16% of users come from other countries.

ChatGPT Traffic Stats

As per Similarweb data, ChatGPT web traffic has grown substantially, especially in 2025. In April 2025, ChatGPT’s website got 5.1 billion visits, a 30.77% increase from 3.9 billion in February 2025, showing that engagement with ChatGPT is rising. Traffic grew steadily between late 2023 and mid-2024, then jumped from 3.1 billion in September 2024 to 3.7 billion in October 2024. In April 2024, ChatGPT had 1.81 billion monthly visits; by April 2025, that had reached 5.1 billion.
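
The growth figure is easy to verify; a minimal check in Python:

```python
# Verifying the February-to-April 2025 growth rate from the table below.
feb_2025 = 3_900_000_000
apr_2025 = 5_100_000_000
growth = (apr_2025 - feb_2025) / feb_2025 * 100
print(f"{growth:.2f}% increase")  # 30.77%
```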

Month Monthly ChatGPT Website Visits
April 2025 5.1 billion
March 2025 4.5 billion
February 2025 3.9 billion
January 2025 3.8 billion
December 2024 3.7 billion
November 2024 3.8 billion
October 2024 3.7 billion
September 2024 3.1 billion
August 2024 2.6 billion
July 2024 2.44 billion
April 2024 1.81 billion
January 2024 1.6 billion
December 2023 1.6 billion
November 2023 1.7 billion
February 2023 1 billion
January 2023 616 million
December 2022 266 million
November 2022 152.7 million

Revenue of ChatGPT

OpenAI, the company behind ChatGPT, is generating substantial revenue from it. According to OpenAI's CFO, Sarah Friar, ChatGPT is on track to reach $11 billion in revenue in 2025. In 2022, ChatGPT brought in less than $10 million in revenue; that grew to $1 billion in 2023 and $3.7 billion in 2024. In other words, revenue is set to climb from under $10 million to around $11 billion in roughly three years, which shows how much global demand there is for ChatGPT.

ChatGPT Plus Users

Around 11 million users are subscribed to ChatGPT Plus, with roughly another 1 million subscribed to Team and business plans, which give them more advanced features.

Average User Session

On average, users spend 13 minutes 38 seconds per session on ChatGPT, while 14 minutes 14 seconds was the longest session time recorded in October 2023.

ChatGPT Users as Compared to Other Platforms

Platform Time To Reach 100 Million Users Industry
Instagram Threads 2 days Social Media
ChatGPT 2 months Chatbot
TikTok 9 months Social Media
YouTube 18 months Social Media
Instagram 30 months Social Media
Facebook 54 months Social Media
Twitter 60 months Social Media
Spotify 132 months Music Streaming
Netflix 216 months Entertainment

ChatGPT has grown faster than any other major tech platform, reaching 100 million users in just 2 months. Only Instagram Threads was faster than ChatGPT in reaching that milestone in just 2 days, but its usage has dropped while ChatGPT’s usage continues to increase.

The third-fastest app to reach 100 million users was TikTok, which did so in 9 months, while YouTube reached 100 million users 18 months after its launch.

ChatGPT also reached 1 million users in just 5 days, the fastest to do so after Instagram Threads, which did so in just an hour. Instagram reached 1 million users in 2 months, while Spotify reached this milestone in just 5 months.

Platform Year of Launch Time To Reach 1 Million Users
Instagram Threads 2023 1 hour
ChatGPT 2022 5 days
Instagram 2010 2 months
Spotify 2008 5 months
Dropbox 2008 7 months
Facebook 2004 10 months
FourSquare 2009 13 months
Twitter 2006 2 years
Airbnb 2008 2.5 years
Kickstarter 2009 2.5 years
Netflix 1999 3.5 years

Interesting Facts:

As per Similarweb data, 61.05% of ChatGPT's social media referral traffic comes from YouTube, meaning that among all social media platforms, YouTube viewers are the largest referral source of its user base, likely driven by content creators, tutorials, and AI-related videos linking to the platform.

According to Similarweb data, the United States was the biggest market for ChatGPT in April 2025, accounting for approximately 924 million of the 5.1 billion visits, or 18.11% of the total traffic.

Data from SEMrush suggests that many users still either mispronounce or mistype the popular term "ChatGPT," with some searching for "Chat GBT" (673,000 monthly searches as of April 2025) and "Chat Got" (246,000 searches), instead of ChatGPT.

Market Share of ChatGPT

Platform Underlying Model(s) AI Search Market Share
ChatGPT (excluding Copilot) GPT-3.5, GPT-4 59.90%
Microsoft Copilot GPT-4 14.30%
Google Gemini Gemini 13.40%
Perplexity Mistral 7B, Llama 2 6.30%
Claude AI Claude 3 3.30%
Grok Grok 2, Grok 3 0.70%
Deepseek DeepSeek V3 0.70%
Komo Not publicly disclosed 0.60%
Brave Leo AI Mixtral 8x7B 0.30%
Andi Not publicly disclosed 0.20%

ChatGPT leads the generative AI chatbot space with 59.9% of the total AI search market, followed by Microsoft Copilot at 14.3% and Google Gemini at 13.4%.

ChatGPT App Statistics

Overall, ChatGPT has 34 million average monthly downloads and is rated 4.5 stars on Google Play and 4.9 stars on the Apple App Store. On the App Store, ChatGPT is also ranked number 1 in productivity apps.

In January 2025, ChatGPT got 37 million new downloads, roughly a 4% decrease from 38.6 million in December 2024. The highest number of downloads recorded was in April 2025 (52 million).

Conclusion:

ChatGPT is seeing rapid growth, with its user base expanding substantially month after month. That growth reflects the strength of its AI technology and responses, and the fact that it gives users a single platform for many different purposes.

Read next: 

• Offline Nations in an Online Age: Mapping the World's Digital Divide

• Chatbots Hallucinate More With Confident or Short Prompts, Accuracy Drops Up to 20% in Critical Tasks

• OpenAI Becomes the Default Setting for Corporate AI Spend

• US Data Centers Projected to Consume 606 TWh of Energy by 2030
by Arooj Ahmed via Digital Information World

OpenAI Unveils Safety Tracking Hub Amid Transparency Concerns and Legal Pressure

Tech giant OpenAI has introduced a much-needed safety evaluations page meant to track how its models behave when pushed beyond their limits. Rather than waiting for user questions to pile up, the company now puts confusion patterns, bad answers, obedience gaps, and trick responses together under one roof.

It is important to note that the launch of this Safety Evaluations hub didn’t come out of the blue. OpenAI has been under fire lately. Multiple lawsuits claim it has relied on protected material to train its systems.

This new safety hub expands earlier efforts. In the past, system cards gave one-time reports when a model launched. Those weren’t updated often. This new hub, however, should evolve over time. It includes performance details about GPT-4.1 up through 4.5 and keeps that data open to visitors.

Though it sounds useful, the page isn't flawless. OpenAI checks its own work and decides what gets shared, which makes it harder for outsiders to trust everything shown there. There's no third-party audit, no independent voice checking what's missing or misrepresented.

OpenAI says it wants better visibility into how its models perform. But it holds the steering wheel and the map. So, while the platform may bring progress, it still leaves observers wondering what they’re not seeing.

Safety Evaluation Portal Launched by OpenAI, Critics Question Selective Disclosures

Image: DIW-AIgen

Read next: Banned Without Warning: Pinterest Apologizes Late, Users Still Distrust Platform
by Irfan Ahmad via Digital Information World

Wednesday, May 14, 2025

Banned Without Warning: Pinterest Apologizes Late, Users Still Distrust Platform

For weeks, Pinterest users were left in the dark — locked out of their accounts, confused, and, in many cases, furious. Without any notice, users found their profiles suspended or content suddenly gone. Many of them insisted they had followed the rules. But still, the bans kept happening. And during all that time, Pinterest said almost nothing.

People turned to Reddit, X, and community forums, trying to figure out what was going on. Some had lost years of saved Pins, carefully collected over time. Others said their normal posts had vanished without explanation. When they reached out to support, they got cold or copy-paste replies—if they heard back at all.

That silence only made things worse.

When Pinterest finally did speak up, on May 1, it didn't say much. The platform simply asked affected users to send private messages, as if the issue weren't widespread. There was no clear apology, no public plan, and for many users, no comfort. Some began talking about possible legal steps or even messaging Pinterest's executives directly on LinkedIn.

But now, finally, there’s some clarity.

On May 13, Pinterest officially admitted there had been a mistake. The company said the bans weren’t caused by any AI system, as many had assumed. Instead, the issue came from a problem inside their own systems — and some accounts had been flagged and blocked by accident.

Pinterest said it’s already restoring access for users who were wrongly banned. It also promised to improve how these kinds of errors are handled in the future.

Still, for many users, the apology came too late. Trust has been shaken. People are saying the damage is already done. And although the company has started making things right, some feel they were left unheard for too long.

This issue with Pinterest isn’t a one-off. It reflects a broader problem across the entire social media industry. Many platforms, including Meta, have leaned too heavily on automated systems and artificial intelligence for content moderation and account verification. In Meta’s case, some users are being asked to verify their identity using a video selfie, a process largely controlled by AI. But instead of improving safety, this tech-driven approach often ends up rejecting real people, locking them out for no valid reason, and offering no clear way to appeal. It’s a growing trend: less human support, more machine errors, and a worse experience for the very users these platforms are built for.

Delayed acknowledgment and absence of human help worsened Pinterest backlash, exposing growing dependence on flawed automated moderation systems.
Image: DIW-Aigen

Read next:

• Generative AI Platforms See Remarkable Engagement, With Users Spending 6+ Minutes per Session

• Tools of the Future, Shame of the Present: Why AI Users Stay Quiet at Work

• The World's Largest Unconnected Populations
by Irfan Ahmad via Digital Information World