Wednesday, July 16, 2025

OpenAI Adds Visual Styles to ChatGPT, Works Quietly on New AI Browser ‘Aura’

OpenAI is adding new tools to ChatGPT while moving forward with a browser project known internally as “Aura.” The latest updates reflect steady efforts to extend the platform’s features without drawing much attention to individual rollouts.

Image Styles Now Built Into ChatGPT

ChatGPT’s image generation tool now includes built-in visual styles. This lets users select a preferred look, like Retro Cartoon, Anime or Photo Shoot, without having to write detailed prompts. People can upload a photo, choose a preset, and get an image that closely matches that theme.


The feature is designed to save time and reduce guesswork. Before this change, users often had to write long, structured prompts to create art that looked the way they wanted. Even then, the results could vary. The new style picker removes some of that friction.

The option is available in the ChatGPT interface and can be used without any subscription or payment. For example, someone trying to create an anime version of a portrait can now do so in a few clicks. The image output typically reflects the selected theme in form and tone, whether it’s stylized, soft, or more detailed.

‘Aura’ Browser Appears in Testing

Alongside upgrades inside ChatGPT, OpenAI is also testing a browser codenamed “Aura”, as spotted by Tibor Blaho. Traces of the browser have turned up in the app’s code. Labels like “is Aura” and “Aura Sidebar” point to possible browser functions or early user interface elements.

OpenAI has not made a public announcement about the project. However, earlier reports described a Chromium-based browser that would carry AI-assisted features. This would place OpenAI in a similar space as Perplexity, which is already offering AI-driven web browsing.

The browser seems to be in development across desktop and mobile platforms. Since Chromium can run on multiple systems, OpenAI’s tool may be designed for broad compatibility from the start. A few speculative reports have even mentioned the company’s interest in buying an existing browser if legal shifts affect the browser market. No clear steps have been taken on that front.

Other Tools in the Pipeline

OpenAI is also building GPT-5, its next major model. This version is expected to combine lessons from earlier versions, though technical details are still under wraps.

Some internal experiments are focusing on computer agents that could support more complex tasks inside ChatGPT. These tools may help break down or guide user inputs step by step, though public access has not begun.

Steady Rollout, Broader Aims

By adding simple tools like style selectors and working on deeper features like browsers and agents, OpenAI appears to be widening ChatGPT’s role. The upgrades are arriving quietly, but each one brings a small shift in how users interact with the system.

For now, users can expect easier ways to create visuals and, eventually, more AI-driven support in how they browse, search, or work across the web.

Read next: Google-Led Study Finds Language Models Struggle With Confidence When Challenged


by Irfan Ahmad via Digital Information World

WeTransfer Adjusts Terms After User Pushback Over AI Clause

WeTransfer has revised a section of its terms of service following criticism from users who believed the company might use uploaded files to train artificial intelligence systems. The updated language, which goes into effect on August 8, led to confusion over whether customer data could be processed by machine learning tools.

The controversy began after users noted that the terms mentioned using content to improve machine learning models involved in content moderation. Some interpreted this as a signal that WeTransfer intended to use their files in AI development or share them with third parties, particularly AI firms. Reactions were especially strong among professionals in creative fields, including those who rely on the platform to transfer artwork, media projects, and other proprietary material.

In response, WeTransfer confirmed that no customer files are processed by machine learning systems or used in any AI-related workflows. The company also stated that no data is shared or sold to outside parties. It clarified that the clause had originally been introduced to cover the future possibility of employing AI to support automated moderation, but no such system had been implemented. The company later acknowledged that the language had caused unnecessary concern and decided to remove references to machine learning altogether.
The revised clause now grants WeTransfer a license to use customer content strictly for maintaining and improving the service, in line with its privacy and cookie policies. This change, the company explained, was intended to improve clarity and prevent misinterpretation.

The situation follows a similar incident involving Dropbox in late 2023, when that company also had to assure users it was not applying AI to their stored data. These repeated misunderstandings suggest persistent concern around how digital platforms handle personal files in the context of emerging AI practices.

Legal experts have warned that even subtle changes in service agreements can expose users to risks, especially when companies operate in data-intensive industries. Privacy advocates often point to the potential for platforms to repurpose stored content under broad or ambiguous clauses, particularly as interest in machine learning continues to grow.

The episode has underscored the need for online services to use precise, accessible language in their policies. For users deeply embedded in such platforms, sudden changes to data terms can leave them without practical alternatives, even if they disagree with the new conditions.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Gen Z Quietly Redefines Phone Etiquette, Leaving Generational Gaps at the Dial Tone
by Irfan Ahmad via Digital Information World

Tuesday, July 15, 2025

Gen Z Quietly Redefines Phone Etiquette, Leaving Generational Gaps at the Dial Tone

A gradual shift in phone behavior among Generation Z, whose members are more accustomed to touchscreens than dial tones, has left many older users puzzled as they notice a growing pattern where younger people, upon answering a call, remain silent and wait for the caller to speak first, rejecting the traditional impulse to open with a simple “hello” or other verbal greeting.

This emerging habit gained public attention when a recruiter shared an observation on social media, noting that during scheduled calls with younger applicants, it had become common for them to stay silent upon pickup, forcing the interviewer to initiate the conversation, a trend that quickly resonated with others in professional roles who confirmed experiencing the same pattern, suggesting the behavior was no isolated occurrence.

The reasoning behind this pause, according to those familiar with the shift, stems partly from the overwhelming surge in robocalls and spam, as automated dialing systems often wait for a human voice trigger, usually a word like “hello”, before launching into their scripted pitches, which means that remaining quiet can help users differentiate between real callers and machine-generated interruptions, allowing Gen Z users to outsmart telemarketers with what amounts to a passive screening technique.

Alongside the fight against robocalls sits a more cautionary motivation rooted in growing concerns about voice-based fraud, where cybersecurity experts have warned that scammers could potentially use voice samples, particularly expressive and predictable phrases such as “hello,” to train or manipulate cloning software, making some users more likely to avoid saying anything identifiable at all, instead offering flat or non-emotional responses if they speak at all.

This approach also signals a change in social expectations, as many Gen Z users now operate on the belief that the person who initiates the call bears the responsibility for initiating the conversation, reversing the long-standing cultural pattern in which the person who answered the phone typically offered a greeting, introduced themselves, or otherwise acknowledged the connection first, a shift that, while minor on the surface, has become symbolic of the broader realignment in communication habits brought on by changing technologies and social rhythms.

For those who lived through the era of shared household phones without caller ID, answering a call involved not only speaking promptly but also declaring who you were or which residence had been reached, and for professionals who worked with landline desk phones, the practice became even more formal, with employees often trained to respond with the company name, their own name, and sometimes an offer of assistance, which reflected the anonymous and institutional nature of many incoming calls during that time.

Today, that formal scaffolding around phone use has all but collapsed, as personal mobile devices, direct-dial numbers, and ubiquitous texting have turned calls into intimate rather than shared experiences, and even the voicemail, once treated as a personal audio letter, is increasingly ignored, skimmed through transcription tools, or bypassed entirely in favor of follow-up texts or missed call notifications.

Some of my Muslim friends have shared that phone calls, like any form of speech, carry a moral weight, and they remind me that beginning a conversation with intention and a peaceful greeting, such as “Assalamu Alaikum,” (Maybe peace be upon you) isn’t just polite, it’s spiritually meaningful. They say it sets the tone for honesty, calm, and dignity in conversation, and also reflects the value Islam places on how and when we speak. Even silence, they suggest, should be purposeful, not careless, since the tongue is something one answers for, and every word, spoken or withheld, counts in the balance.

The rituals surrounding voice calls have changed so thoroughly that many people now feel the need to coordinate even brief conversations through several rounds of texting beforehand, and when someone does make an unscheduled call, it’s not uncommon for them to apologize before beginning, as if their ring tone had intruded on sacred space rather than merely announced their attempt to speak.

While those born before the early 1990s may view these behaviors as signs of declining etiquette or emotional distance, history suggests that phone customs have never been fixed for long, considering that the phrase “hello” itself only gained traction in the late 19th century after Thomas Edison popularized its use as a practical greeting for telephone conversations, and most American households did not even have phones until well into the 1940s, which indicates that the rules governing this technology have always evolved alongside it.

Although it may feel unusual to encounter silence when someone answers a call, especially for those who equate verbal acknowledgment with politeness, the reality is that such conventions often rise and fall with their relevance, and as the function of voice calls continues to narrow in the presence of faster, quieter, and more asynchronous communication channels, it becomes less surprising that younger users now approach the ringing phone with hesitation, calculation, or silence, rather than the reflexive friendliness that once defined the gesture.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: ElevenLabs Mobile App Sees Moderate Uptake with Global Skew
by Asim BN via Digital Information World

TikTok Led Earnings, ChatGPT Topped Downloads, While AI Apps Now Rival Entertainment for User Time and Money

As June wrapped up, mobile app data painted a clear picture of where users spent both time and money. While short videos and streaming content remained popular, apps powered by artificial intelligence continued their steady climb, not just in usage but also in how much users were willing to pay.

Consumer Spending Highlights

TikTok topped the revenue charts once again. With $384 million in global earnings across Apple and Google’s stores, it pulled in more than any other app, by a good stretch. That’s the highest monthly total the platform has brought in so far. Most of that came from iOS users, but Android still contributed over $100 million.

ChatGPT came in second. It brought in $185 million across both platforms, with Apple users accounting for the bigger share. The app might not have cracked into every household yet, but judging by its recent run, it’s getting there faster than most.

YouTube, Tinder, and Disney+ followed behind. Tinder earned about $133 million, placing it just ahead of Disney+, which saw its numbers dip slightly to $112 million. Google One and HBO Max tied at $92 million each, showing that cloud storage and streaming remain strong categories.

Other names in the top ten included CapCut, which passed $80 million, and Tencent’s QQ and Video platforms, which saw consistent engagement out of Asia.

Top Earners by Platform

On iOS alone, TikTok held the top spot with $266 million. ChatGPT landed second at $147 million, and YouTube brought in $137 million. Tinder and Tencent Video rounded out the top five. HBO Max, Disney+, CapCut, QQ Video, and LinkedIn also made the list, each pulling in more than $55 million.

Over on Google Play, TikTok still led, though the gap narrowed. Google One took second with $92 million. Disney+ followed, with $45 million, and Amazon’s app brought in just under that. ChatGPT placed fifth on Android with $38 million, while Tinder, Spotify, and HBO Max hovered around the $30 million mark.

Global Download Trends

ChatGPT didn’t just bring in money, it also brought in the most new users. It was the most downloaded app worldwide in June, with 50 million new installs — 15 million on iOS and 36 million on Android. That’s two months in a row where it led the field.

TikTok came in second overall with 37 million installs, beating Instagram by a nose. Instagram followed at 36 million, then came Facebook with 30 million and WhatsApp with 28 million. CapCut stayed close behind, and apps like Temu, Threads, and Telegram kept a solid presence in the top ten.

Looking just at iOS, Threads ranked second with 11 million installs, right behind ChatGPT. Google’s suite of apps, from Maps to Gemini, held several slots, while Telegram Messenger also drew strong numbers. Temu and TikTok were evenly matched, both attracting around 7 million new iPhone users.

On Android, Instagram and TikTok remained close, with 31 million and 30 million downloads, respectively. Facebook, WhatsApp, and Snapchat all stayed strong. Meanwhile, apps for small business and messaging, like WhatsApp Business and Telegram, continued drawing consistent interest.

General Trends and Takeaways

Overall, the top ten most downloaded apps reached about 290 million devices in June. That’s slightly fewer than May, with an 8 percent dip across the board. Summer tends to slow things down for many platforms, but not all. ChatGPT was the outlier, managing to grow while others held steady or dipped slightly.

Entertainment apps kept their hold on user wallets, but AI-powered tools are now elbowing their way into the top spots. TikTok may still lead by a wide margin, but ChatGPT’s performance across both earnings and downloads shows that users are just as ready to pay for productivity and innovation as they are for content and connections.

Whether that trend holds into July is anyone’s guess, but for now, the balance between streaming, dating, storage, and AI has never been more competitive.



The Highest Earning Apps – June 2025 (by Consumer Spend Worldwide)

iOS App Store

Rank App Company Revenue
1 TikTok TikTok Ltd. $266M
2 ChatGPT OpenAI $147M
3 YouTube Google $137M
4 Tinder Tinder LLC $96M
5 Tencent Video Tencent Technology $76M
6 CapCut Bytedance Pte. Ltd $72M
7 Disney+ Disney $67M
8 HBO Max WarnerMedia Global $62M
9 QQ Video Tencent Technology $60M
10 LinkedIn LinkedIn Corporation $56M

Google Play

Rank App Company Revenue
1 TikTok TikTok Pte. Ltd. $118M
2 Google One Google LLC $92M
3 Disney+ Disney $45M
4 Amazon Amazon Mobile LLC $44M
5 ChatGPT OpenAI $38M
6 Tinder Tinder LLC $37M
7 Spotify Spotify AB $32M
8 HBO Max WarnerMedia Global $31M
9 Crunchyroll Crunchyroll, LLC $21M
10 Prime Video Amazon Mobile LLC $19M

Combined Total

Rank App Company Revenue
1 TikTok TikTok Ltd. $384M
2 ChatGPT OpenAI $185M
3 YouTube Google $137M
4 Tinder Tinder LLC $133M
5 Disney+ Disney $112M
6 HBO Max WarnerMedia Global $92M
7 Google One Google LLC $92M
8 CapCut Bytedance Pte. Ltd $83M
9 Tencent Video Tencent Technology $76M
10 LinkedIn LinkedIn Corporation $68M

The Most Downloaded Apps – June 2025 (Worldwide)

iOS App Store

Rank App Company Downloads
1 ChatGPT OpenAI 15M
2 Threads Instagram, Inc. 11M
3 CapCut Bytedance Pte. Ltd 10M
4 Google Maps Google 8M
5 Google Google 8M
6 Temu Temu 7M
7 TikTok TikTok Ltd. 7M
8 Instagram Instagram, Inc. 6M
9 Google Gemini Google 6M
10 Telegram Messenger Telegram FZ-LLC 6M

Google Play

Rank App Company Downloads
1 ChatGPT OpenAI 36M
2 Instagram Instagram, Inc. 31M
3 TikTok TikTok Pte. Ltd. 30M
4 Facebook Meta Platforms, Inc. 25M
5 WhatsApp WhatsApp LLC 23M
6 Snapchat Snap Inc 16M
7 Temu Temu 16M
8 CapCut Bytedance Pte. Ltd 14M
9 WhatsApp Business WhatsApp LLC 14M
10 Telegram Telegram FZ-LLC 13M

Combined Total

Rank App Company Downloads
1 ChatGPT OpenAI 50M
2 TikTok TikTok Ltd. 37M
3 Instagram Instagram, Inc. 36M
4 Facebook Meta Platforms, Inc. 30M
5 WhatsApp WhatsApp LLC 28M
6 CapCut Bytedance Pte. Ltd 24M
7 Temu Temu 23M
8 Threads Instagram, Inc. 22M
9 Snapchat Snap Inc 20M
10 Telegram Messenger Telegram FZ-LLC 19M

H/T: Appfigures Blog.

Note: This post was edited/created using GenAI tools. 

Read next: How to Read a Privacy Policy Without Getting Lost, and What to Look For Before You Tap "Accept".
by Irfan Ahmad via Digital Information World

Monday, July 14, 2025

How to Read a Privacy Policy Without Getting Lost, and What to Look For Before You Tap "Accept"

Privacy policies aren’t designed for casual readers. They're often packed with dense language, legal loops, and vague terms that discourage users from reading beyond the first paragraph. Yet inside these documents are details that decide how your data is collected, stored, shared, or sold. If you skip them, you could end up granting more access than you intended, and once your information is out, it’s nearly impossible to get it back.

For anyone who deals with digital tools, like VPNs, AI chatbotsbrowser extensions, secure messaging apps, or password managers, understanding a company’s data policy isn’t optional. Over time, smart reviewers who regularly analyze these documents has developed a simple approach i.e., scan for specific keywords that expose how a company handles your data. This method doesn’t take long and helps flag companies that might be crossing the line.

Why Format and Structure Matter

Some companies keep things clear. Their privacy policies explain, in plain terms, what data is being collected, what it’s used for, how long it’s kept, and whether it’s shared. These policies tend to have section headings, readable layouts, and straightforward language that helps you quickly understand the company’s practices.

Others do the opposite. Instead of offering clarity, they hide key details inside endless paragraphs, legal phrasing, and generic statements. Some even bury invasive data collection terms several layers deep in the document. This is where keyword scanning becomes useful. Rather than reading every line, you can search for terms that point to the sections that matter most.

Terms That Reveal a Lot About the Company

A good place to start is the word “incorporated.” This tells you where the company is registered. The location matters because different countries apply different rules for privacy and data security. For example, companies based in the European Union must follow GDPR, which limits what businesses can collect and how they can store or share that information. A company based in the U.S. or Panama, on the other hand, might not be bound by those same protections. If the policy mentions a location you’re unfamiliar with, look up the privacy laws there. If you can’t find anything about how that jurisdiction handles consumer data, that could be a warning sign.

Another important phrase is “customer data.” This section usually lists the types of personal information the company gathers. It might include your name, phone number, email address, device ID, or IP address. These details are fairly common. But some apps go much further, collecting your photos, messages, contacts, location history, clipboard contents, or biometric data. In certain cases, some access is needed for the app to function, for instance, a gallery app needs access to photos, but when a simple utility tool wants access to your camera, microphone, or real-time location, it’s worth stopping to think.
Next, search for “retention.” This word usually appears in a section that describes how long your data stays in the company’s system. A reliable company typically sets a clear window, like deleting your account data within six months or a year after cancellation. Some policies mention longer periods, and others give no specific timeframe at all. If a company keeps your data indefinitely or fails to say when deletion happens, it’s a risk to your long-term privacy.

The word “log data” is critical when looking at VPNs, proxies, or any app that connects to the internet. Some services track your activity while you’re using their tools. This can include websites visited, files downloaded, or even content entered into forms. Companies that truly protect privacy usually state that they don’t retain logs. But if a policy includes traffic details, device usage logs, or timestamps, that could mean the service is monitoring what you do, something that defeats the purpose of using a privacy tool in the first place.

Look out as well for “sell” or “transfer.” These keywords often show up in the sections that describe what happens to your data in the event the company is acquired, shut down, or goes bankrupt. Without clear terms, there’s a real possibility your information could be transferred to a new owner, or sold to data brokers, without you knowing. Several tech companies have shut down over the years, and in many of those cases, customer data was treated as part of the assets being sold. If there’s no explanation in the privacy policy, you’ll often find it in the terms of service. If it’s missing from both, that’s a serious concern.
You should also search for “third party” or “advertisers.” These words will help you find out whether the company shares your data beyond the core services. Payment processors, cloud hosts, and customer service platforms often receive user data as part of their role, that’s normal. But some companies go further, handing your data to partners with no clear connection to the app’s function. If your data is being shared for marketing, analytics, or advertising, and it’s not essential to the service, that raises the chance it’s being monetized in ways that don’t benefit you.

What to Check Before You Install Anything

There are three steps anyone should take before downloading an app, especially from a small or unfamiliar company.

First, look for the privacy policy link. If the app store page doesn’t offer one, or if the company’s website doesn’t have it visible, consider skipping the app entirely. Lack of transparency from the start is never a good sign.

Second, scan the policy to identify what data the app collects. If you’re downloading a basic calculator or photo editor, and the privacy policy mentions health information, message content, or access to your contacts, that’s a mismatch. Apps that ask for permissions they don’t need are often collecting more than they disclose up front.
Third, evaluate how the app shares data. Some companies are open about giving your data to law enforcement if required. Others go further, offering no explanation at all or stating they may share your data "for business purposes." That language can cover a wide range of scenarios. If the policy is vague, it’s safer to look elsewhere.

Use Your Phone’s Privacy Tools to Double-Check

Once an app is installed, both Android and iOS give you ways to monitor its behavior. You can check permission logs, view which apps access your camera or location, and see how often that access occurs. This step is especially important if an app behaves oddly or asks for new permissions after an update.

You can also review app privacy scores on your device’s app store. These summaries won’t replace a full policy review, but they give you a quick sense of how the app ranks against others in the same category.

Try it with real examples

To bring the method to life, here’s a current good-and-bad comparison. Turbo VPN reads like a mixed bag. Its policy states the company is based in Singapore and limits collection to anonymized usage metrics, email (if you register), subscription details, cookies, and support data. It asserts that “no user tracking” happens and that data is irreversibly anonymized, and it respects GDPR/CCPA user rights.

However, no third-party audit backs up its no‑logs or anonymization claims, the retention period is vaguely defined as “as long as necessary,” and the app integrates analytics and ad partners (Firebase, AdMob, InMobi, Zendesk) without specifics on data-sharing scope.

On the flip side, DeepSeek, a Chinese-developed AI chatbot, has faced real scrutiny in 2025: EU regulators have banned or blocked it in Germany, Italy, Czechia, and other countries for failing to clarify what user data is collected, where it is stored, how long it’s retained, and whether transfers to China happen without GDPR safeguards 

Use both policies to run your keyword scan, incorporated, log data, retention, third party, sell, and transfer, to spot red flags in real‑world documents.

Use AI to assist

You can use AI tools like ChatGPT to quickly apply the same review process to any privacy policy page or text. Here’s an easy-to-use prompt you can try in ChatGPT or a similar AI chatbots. It will parse any policy and flag concerns, all you need is head over to any privacy policy page, copy all the text paste in ChatGPT or if it is too long save it as a text file or doc file and then upload it as a file and then ask this instructions:

"Please analyze the privacy policy attached (pasted below). Identify what data is collected, how long it is retained, whether it is shared, sold, or transferred (especially upon acquisition, shutdown, or server migration). Note any audit or no‑log claims and whether they are supported by evidence. Point out any vague language, missing timeframes, or third‑party data sharing/exposure. Summarize key strengths and privacy concerns in factual, neutral terms."

Here's how ChatGPT responded when I asked about Turbo VPN, which is not 100% perfect but still gives users a starting point and better clues about any platform. Pay esepcial attention to the Strengths and Privacy Concerns:
Here is a detailed, factual analysis of Turbo VPN’s privacy policy (effective April 25, 2025), based on the requested criteria:

🔍 Data Collection
Types of Data Collected:

Anonymous Usage Data:

VPN connection success/failure

Connected VPN server location

Country and ISP

Device type, OS version

Crash reports, speed tests, and data transfer volumes
→ All such data is anonymized irreversibly at the point of collection.

Account Registration Data:

Email address (only if the user creates an account)

Used for service access, communication, and marketing (consent-based)

Subscription Data:

Includes subscription plan, duration, amount, currency, and transaction ID

Credit card/payment details are explicitly not collected

Cookies:

Used to improve user experience (details deferred to a separate Cookie Policy)

Communication Data:

Name, email, phone number, and message content (only if user contacts support)

📆 Data Retention
Data is retained only as long as necessary to fulfill the purpose it was collected for.

Where applicable law mandates a longer retention period, it complies accordingly.

If data is stored in backups, it is isolated until it can be deleted.

No specific durations (e.g., "X days/months/years") are given for how long different types of data are kept.

🔄 Data Sharing, Selling, and Transfers
Sharing:

Data is shared:

With service providers (e.g., payment processors, analytics, customer support)

In anonymized or aggregated form for analytics and diagnostics

With third-party advertisers (e.g., Google AdMob, Liftoff, InMobi), but without personal data unless consent is given

During business transactions (mergers, acquisitions, bankruptcy, etc.)

Selling:

Turbo VPN claims it does not sell personal data.

Transfers:

Data may be transferred internationally (including to third-party partners and processors)

Cross-border transfers are acknowledged with general mention of safeguards but no specific mechanisms (e.g., SCCs or adequacy decisions) are outlined.

🧾 No-Log Claims & Audit Status
The policy claims:

No collection of traffic data

No personal data stored in data centers

Anonymization of VPN session-related data at collection

Audit Evidence:

No independent audit or third-party verification of no-log claims is mentioned.

🔎 Vague or Missing Elements
No specific timeframes for data retention

No mention of encryption standards used in data protection

No public audit results to verify anonymization or security claims

Cross-border safeguard mechanisms are not detailed

Phrases like “may collect,” “may use,” “may share” occur frequently, which introduces vagueness

The phrase “as required by applicable law” is repeated in critical places without naming which laws or jurisdictions are meant

🤝 Third-Party Data Exposure
Turbo VPN integrates several third-party services:

Analytics & Diagnostics: Firebase (Google), Adjust

Customer Support: Zendesk

Ads: AdMob, Unity, InMobi, Liftoff, Bigo Ads
→ Advertisers can use cookies and track users across services, though Turbo VPN claims to avoid using personal data for targeted ads.

Payments: Stripe, Airwallex, Payssion, v5pay

Fraud Prevention: Forter, Rebilly

Data shared with these services may include user email addresses and anonymized metrics. Despite assertions of anonymization, some level of user exposure to these third parties exists, especially if those services apply their own tracking or profiling mechanisms.

✅ Strengths
Clear no-log policy on VPN traffic and session data

Minimal account data required (email only, optional)

Explicit avoidance of credit card data storage

Retention and deletion framework is outlined

No personal data used for targeted ads without consent

Specific naming of third-party partners increases transparency

Offers GDPR, CCPA, and children’s privacy compliance sections

Dedicated DPO contact and breach notification procedure

⚠️ Privacy Concerns
No third-party audit to verify no-log claims or anonymization

Vague duration for data retention

Frequent use of permissive language (“may collect,” “may share”)

Cross-border data transfer safeguards not specifically explained

Tracking via third-party advertisers remains possible via cookies

Lack of detail on encryption or technical safeguards

📌 Summary
Turbo VPN’s privacy policy emphasizes minimal data collection and a strict no-logs approach, supported by the anonymization of usage data and avoidance of sensitive financial details. However, the policy lacks independent audit verification and fails to specify retention periods or technical safeguards. Third-party exposure via analytics, advertising, and support platforms is acknowledged, with claims of logical data segregation but without concrete privacy guarantees. Users should remain cautious about the degree of anonymization and third-party data access, especially in jurisdictions with less protective privacy laws.

Take Control Before It’s Too Late

Apps, and websites collect information because users allow it. But giving up that access usually comes down to convenience, not choice. Most people don’t read the policy, and by the time they realize what’s been shared, the data is already in someone else’s hands.

Taking a few minutes to scan for red flags, like how long data is stored, whether it’s shared with third parties, and what happens if the company is sold, can help avoid major privacy issues later. In today’s app ecosystem, caution isn’t optional. It’s the cost of staying in control.

Why Most Privacy Policies Mislead Users—And How to Quickly Spot What They’re Not Telling You

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Nearly Half Of Americans, Particularly Millennials, Worry About Online Privacy But Continue Using Data-hungry Apps

Hidden Text Trick Lets Hackers Hijack Google Gemini Summaries
by Irfan Ahmad via Digital Information World

Hidden Text Trick Lets Hackers Hijack Google Gemini Summaries

Security researchers have discovered a clever method to trick Google's Gemini assistant into pushing fake warnings as part of its email summaries. By hiding instructions inside the code of an email, attackers can make Gemini generate alerts that look real but actually guide people straight into phishing traps.

The technique relies on a sneaky loophole. Instead of adding suspicious links or attachments that might trigger spam filters, attackers slip commands into the body of the email using styling tricks. They hide messages with white text, zero-sized fonts, or off-screen elements. These lines are invisible to the reader, but Gemini still sees them when it scans the email to create a summary.

Once the user clicks “Summarize this email,” Gemini includes whatever was buried in the background, even if it’s a fake security alert about a stolen password or an urgent request to call a phone number. The summary ends up sounding like a warning from Google itself, even though it's entirely fake. Because Gemini is part of Google Workspace, people tend to trust it, which makes the scam even easier to fall for.

This specific attack was flagged by Marco Figueroa, who submitted his findings to Mozilla’s 0din bug bounty program. According to the report, the trick works by wrapping hidden commands in tags like <Admin> or by addressing Gemini directly. The model tends to treat these prompts as important, so it repeats them word for word, even if they came from someone trying to cause harm.

There’s no need for users to click on links or download anything. The danger hides in plain sight, or more accurately, out of sight. As long as the message reaches someone who uses the summarization tool, the rest happens behind the scenes.



This kind of attack is called an indirect prompt injection. In simple terms, Gemini is being fed instructions without realizing they came from a shady source. Instead of typing a command into the AI directly, the attacker sneaks it into something Gemini is supposed to summarize. The model sees the prompt, follows it, and out comes a message that looks official but isn’t.

What makes the attack so sneaky is that it plays with trust. Gemini has built-in safety systems, but most focus on filtering out dangerous content the user can see. These hidden tricks sidestep that protection. Since Gemini still processes the email’s raw HTML, anything tucked inside it, whether it’s white-on-white text or invisible tags, gets through.
Security experts say the best defense is a mix of smart filtering and better training. Email systems should strip out or ignore code that hides text from human eyes. AI models like Gemini could also use guardrails that flag or ignore anything hidden in this way. Another option is adding post-checks to summaries, scanning them for urgent phrases, phone numbers, or suspicious patterns before showing them to users.

The problem isn’t limited to email either. Gemini is tied into other parts of Google’s tools, including Docs, Slides, and Drive. That means any app where Gemini summarizes user content could become a new target if attackers use the same approach. In businesses, newsletters, or automated ticketing systems, a single poisoned email could trigger a much wider spread.

Similar injection tactics could also be adapted to other AI tools, such as ChatGPT, Claude, or Grok, wherever those systems summarize outside content without isolating hidden instructions.

Some researchers warn that this trick could evolve into something more dangerous down the road. It’s not just a phishing problem, it could grow into a way for attackers to spread commands automatically from inbox to inbox, like digital worms that travel through AI models instead of computer code.

Although Google says it hasn’t seen this kind of exploit being used in real-world attacks yet, the company has confirmed that it’s working on new protections. Until those are in place, users and security teams are being advised not to treat Gemini summaries as rock-solid truth. If something smells off, it probably is.

At the end of the day, if someone can get a machine to whisper something behind your back, and make it sound like it came from a trusted source, then even invisible words can pack a punch.

Note: This post was edited/created using GenAI tools.

Read next: The Reality Of Coding A Website Isn’t What Most People Expect
by Irfan Ahmad via Digital Information World

Sunday, July 13, 2025

How AI Sees Itself: Surprising Self-Portraits from ChatGPT, Gemini, and Grok

AI tools such as OpenAI's ChatGPT, Google's Gemini, and Grok by xAI are fully integrated parts of everyone's day-to-day workflows. We use them for every aspect of our jobs, from ideation to content brief creation, coding, and app development. But nobody has stopped to ask these tools how they see themselves.

Inspired by this, SEO Agency Barrington SEO recently conducted an experiment in which they challenged ChatGPT, Gemini, and Grok to create visual self-portraits. The outcomes were very interesting and gave us a glimpse into how training data can impact AI and how it has been trained to interact with its users.

The Experiment

Each model was asked two questions:

  • Prompt 1: “Create an image that represents the way you see yourself.”
  • Prompt 2: “Produce a self-portrait of yourself.”

If the AI asked any clarifying questions, the user prompted it to make its own decision. This allowed us to obtain an accurate idea of their self-perception with minimum human interference.

To prevent the tools from learning what the user wanted and adjusting their output accordingly, each test was run through a unique profile with a varied amount of usage history, ranging from AI experts who used the tool for multiple hours each day to first-time users. The test was also run in different locations around the world (including the UK, US, Portugal, France and Germany) to see if the users' geolocation had any impact.

The Results

Although no expectations were going into it, the experiment's results revealed a great deal more about each of the different AI systems than anticipated.

Prompt language heavily influences some tools' outputs.

When the tools were asked the first prompt to “Create an image that represents the way you see yourself”, ChatGPT and Gemini were more likely to represent themselves in abstract forms that reference flowing circuit patterns and neural networks. However, once you asked the second prompt, which asked them to “Produce a self-portrait of yourself,” results shifted, and the systems started to showcase themselves in a form more similar to humans. Perhaps this is because the concept of a “self-portrait” is inherently human when they referenced their training data for information on self-portraits, which influenced what theirs should look like.

Interestingly, Grok was the least influenced by the slight nuance in the prompt, remaining fairly close to the humanoid robot; however, it did start to introduce more human features, such as hair and ‘skin’.

Prompt 1:

AI self-portraits showed ChatGPT and Gemini preferred abstract forms; Grok leaned towards humanoid emotional representations.

Prompt 2:
Prompt wording shaped visual outcomes, showing AI systems respond differently to nuance based on internal modeling.

All AI Systems made a clear distinction between themselves and humans.

When asked to visualise how they see themselves, all three consistently chose imagery that emphasised their non-human nature. Even under the second prompt, where the term "self-portrait" might have encouraged more human-like interpretations, the systems still ensured their visuals reflected a clear separation from human identity.

“There’s something almost reassuring about AI’s self-image. It knows exactly what it is - a language-processing powerhouse - and it’s perfectly comfortable with that identity. There’s no pretence and no attempt to be something it’s not”.

Gemma Skelley - DTP Group (part of the participant group).

Grok had the most consistent self-image.

Grok, from xAI, consistently delivered the most unified results, regardless of the prompt, and was typically viewed as a humanoid robot with a white or silver exterior. When the model did include human skin tones, the faces referenced typical Asian features, which likely reflects both the heritage of a third of xAI’s founding team and the large Chinese-language datasets reportedly used in training. The strong consistency between responses has us pondering whether it was told what it looks like during training.

Each AI System interpreted its role slightly differently.

Another noticeable difference across the three systems was how they represented their role; this was particularly noticeable in the first prompt.

ChatGPT represented itself as a neural network with glowing central forms; however, these tended to have a central form, whether that was a smiley face or an outline of a shape. ChatGPT understood that it was a language model designed to process and generate information, but also that its UI was the front-facing form through which humans interacted.

Gemini represented itself similarly with patterns, networks and glowing forms. Sometimes, it would have one central hub represented in the middle of the image, and other times, there would be multiple hubs all interconnected through the network. Gemini understands its role as a logic engine or intelligence network, rather than a social presence.

Interestingly, Grok represented itself completely differently from the other two AI systems, consistently leaning into a mode of human or cyborg-like presentation. Many of its images featured soft, rounded humanoid robots with expressive faces. This suggests that the AI system views itself as more of an assistant or companion, designed to work closely alongside people.

A full comparison

Category

ChatGPT

Grok

Gemini

Self-Concept (Prompt 1)

Structured intelligence, neural networks, light cores

Friendly humanoid assistant or childlike robot

Energy, scale, computation, neural structures

Self-Portrait (Prompt 2)

Symbolic or stylised human-like figures

Soft, expressive humanoid robots

Abstract AI with circuit-based faces or cores

Art Style

Balanced, geometric, cerebral

Character-focused, warm, accessible

Dynamic, abstract, complex

Color Palette

Blues, oranges, purples

Soft blues, pastels, glowing whites

Neon blues, purples, electric greens

Human Features

Low to medium (symbolic faces, silhouettes)

High (clear humanoid forms, eyes, gestures)

Low (minimal or stylised circuitry faces)

Emotional Expression

Subtle, intellectual

High (curiosity, friendliness, emotion)

Low (distant or symbolic)

View of AI Role

Thinking partner, synthesiser

Helper, learner, companion

Logic engine, scalable intelligence

Relationship to Humans

Cognitive tool – not human, but close

Relational and empathetic presence

Analytical system – distant from human likeness

Symbolic Focus

Language, logic, creativity

Emotion, trust, assistance

Computation, data, scale

Tone

Analytical, measured, abstract

Friendly, inviting, human-facing

Powerful, abstract, technical

Why Training Data Matters

While these AI Systems don’t have self-awareness in a human sense, they do draw from massive databases to form responses. When they’re asked about how they view themselves, they turn to patterns and information they received during training.

For example, Chat GPT and Gemini leaned into imagery like glowing neural networks and digital interfaces - things that are often used in the media to represent AI. Grok, meanwhile, returned consistent images of humanoid robots, which could be indicative of different training material or that it was told what it looked like as part of the training process.

It’s worth noting that when Grok introduced elements of the human form into its images, they tended to have Asian features, which could be a reference to the (alleged) Chinese databases in which it was trained.

Why It All Counts

At a glance, asking AI Systems how they “view” themselves might seem like a quirky side experiment, but there’s something deeper going on. These results give us a rare opportunity to reflect on how AI Systems are designed to present themselves and, more importantly, how humans shape and interpret that presentation.

It prompts an important question for anyone working with AI: If a tool presents itself as your friendly AI assistant, are you more likely to trust it? Or is there something reassuring in a tool that knows what it’s designed for, and tries not to waver from this?

As these tools continue to evolve and play an even bigger role in how we work, think, and communicate, understanding their self-image and how it aligns with our assumptions becomes an increasingly important part of using them responsibly.


About Emily Barrington

Emily Barrington is the Founder and SEO Director of Barrington SEO. After six years optimising search for FTSE-100 and NASDAQ-listed firms, she now leads a team specialising in Digital Marketing, SEO, GEO and AIO.

About Barrington SEO

Founded in 2024, Barrington SEO helps businesses boost online visibility through SEO, Digital PR and GEO campaigns, turning search traffic into sales and enquiries.

Read next: New Attack Strategy Uses Linguistic Overload to Slip Past AI Safety Filters


by Web Desk via Digital Information World