"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Thursday, June 19, 2025
OpenAI Tests Direct Gmail and Calendar Integration in ChatGPT, Prompting Data Privacy Concerns
At present, Gmail integration exists only through OpenAI’s Deep Research tool, which limits usage to background sourcing within longer reports. The latest test suggests a more hands-on approach is coming, one that turns email into an active part of the chat flow. If fully launched, this would bring ChatGPT closer to acting as a digital assistant that manages both information and personal workflow in real time.
But the shift raises questions about how personal data might be handled. Access to email bodies and calendar entries could, if left unchecked, open a path for sensitive information to feed back into the system. While OpenAI says user privacy is a priority, its track record has not always supported that. Earlier this month, it was revealed that conversations deleted by users inside ChatGPT had still been stored, undermining confidence in how memory and consent are managed.
This is especially relevant if Gmail access becomes part of the chatbot’s Search tool, where user queries already interact with broader AI functions. With no public explanation yet of whether email content will be used to train the model, the idea of handing over inbox access may give some users pause, even if the interface promises convenience.
ChatGPT’s integration with Google services could make it more powerful, but only if the boundaries around data use are clearly drawn. Without that, the benefit of automation may come at the cost of control.
Read next:
• Researchers Link Browser Fingerprints to Ad Targeting, Undermining Online Privacy Promises
• Two-Fifths of U.S. Workers Now Use AI; Frontline Adoption Stalls, Leadership Use Climbs Sharply
by Irfan Ahmad via Digital Information World
Two-Fifths of U.S. Workers Now Use AI; Frontline Adoption Stalls, Leadership Use Climbs Sharply
Over the past two years, the number of employees who say they use AI tools at least a few times each year has nearly doubled, rising from just over one-fifth of the workforce in 2023 to around two-fifths in 2025. Within that same period, the proportion of workers engaging with AI on a weekly basis has almost doubled as well, while daily use has quietly increased from four to eight percent over the past year alone.
The growth in AI use has been most visible in white-collar occupations, particularly in sectors such as technology, consulting, and financial services. In these industries, between one-third and one-half of employees now report using AI frequently in the course of their work. Among white-collar professionals more broadly, over one in four are now regular users of AI, a clear increase compared to the previous year.
Outside office-based roles, however, the picture has remained mostly unchanged. For those working in frontline jobs or production-related positions, regular use of AI has not followed the same trend. In fact, the percentage of these workers who report using AI a few times a week or more has remained flat, showing only a slight dip since 2023. The contrast between sectors where AI is being embraced and those where it remains largely unused suggests a growing divide in workplace technology exposure.
Among employees who manage other leaders or oversee larger teams, AI adoption appears to be advancing at a faster pace than among individual contributors. Roughly one in three of these senior leaders now report frequent AI use, which is about double the rate observed among non-managers. This suggests that those in strategic or supervisory roles may be more likely to explore or depend on AI-based tools in their decision-making processes.
Despite these shifts, most workers have not changed their outlook on the risk AI may pose to job security. The proportion of employees who believe it is likely that automation or AI will eliminate their position within the next five years remains consistent with previous years, holding steady at around fifteen percent. However, this figure rises slightly in some fields, particularly among those working in technology, retail, and finance, where around one in five anticipate that their roles could eventually be replaced.
Although more organisations are beginning to introduce AI into their operations, many have done so without offering clear guidance or structured support for their staff. While just under half of all employees now say that AI is being introduced into their workplace in some form, fewer than one in four say that their employer has provided a detailed strategy or communicated a clear plan about how AI should be used. Only three in ten say that their organisation has issued either broad guidelines or formal policies governing the use of AI tools. This means that many employees are encountering AI without knowing where it fits into the rules or priorities of their workplace.
When asked about the challenges surrounding the use of AI, employees most often point to confusion about its purpose or relevance. Even among those who regularly use AI at work, only a small proportion strongly agree that the tools they are given are genuinely helpful for the tasks they perform. For others, especially those without first-hand experience, the usefulness of AI remains unclear.
Where staff have used AI to support customer-facing tasks, feedback is more positive. Most workers with direct experience in this area say that AI has improved their interactions with customers. In contrast, those who have never used AI in this way are far less likely to believe it would make any difference, with fewer than one in five expecting a benefit.
Research indicates that a clearer sense of direction from leadership may be key to expanding AI use across the workplace. Employees who say their organisation has shared a detailed plan are significantly more likely to feel both comfortable and well-prepared to work with AI tools. In fact, those with this level of communication are several times more likely to describe themselves as confident users, compared with peers who have received no guidance.
If companies are serious about using AI more widely, it may not be enough to simply provide access to new tools. The evidence suggests that helping employees understand how AI fits into their role — and offering structured, practical support — is what makes the difference between curiosity and genuine adoption.
H/T: Gallup.
Read next: Researchers Link Browser Fingerprints to Ad Targeting, Undermining Online Privacy Promises
by Irfan Ahmad via Digital Information World
Facebook Adds Passkey Logins to Strengthen Account Security
With this update, users can rely on tools such as face recognition or fingerprint scanning to unlock their accounts. What makes this approach more secure is that login information never leaves the device. Instead, it is stored safely on the phone itself and used to confirm the user's identity when needed.
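For readers curious about the mechanics, here is a minimal Python sketch of the public-key idea that passkeys rely on: the device holds a private key, and only a signed challenge and a public key ever leave it. This is a conceptual illustration using the cryptography package, not Facebook's or the WebAuthn standard's actual implementation.

```python
# Conceptual sketch of the public-key idea behind passkeys (not Meta's implementation).
# Requires the `cryptography` package: pip install cryptography
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. During registration, the device generates a key pair; the private key never leaves it.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# 2. At login, the server sends a random challenge...
challenge = os.urandom(32)

# 3. ...the device unlocks its private key locally (e.g. via fingerprint) and signs the challenge...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# 4. ...and the server verifies the signature with the stored public key.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("Challenge verified; no password or biometric data ever left the device.")
```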
Facebook is gradually introducing this feature on mobile phones that run iOS or Android. Messenger will also be added to the rollout a bit later. Once the setup is complete, the same passkey can be used to access both services.
People who shop using Meta Pay will notice that the new system can also help fill in payment details automatically. It is being added as part of a wider effort to protect sensitive information, including private backups of messages.
Other social platforms such as TikTok, LinkedIn, and X introduced passkey support some time ago. While Facebook is joining the shift a little later, the change still gives users a more reliable way to stay in control of their accounts.
When the option becomes available, it will be listed in the account settings under the Accounts Center. Once turned on, it can be used right away on any supported device. For older devices, people can still rely on their usual login methods such as passwords or codes.
This update follows a growing focus on stronger security for online platforms, especially as concerns about account theft and data protection continue to rise. By giving people more options, Facebook is helping users take another step towards safer access.
Read next: 16 Billion Login Records Leak Online in One of the Largest Credential Exposures to Date
by Irfan Ahmad via Digital Information World
Wednesday, June 18, 2025
16 Billion Login Records Leak Online in One of the Largest Credential Exposures to Date
A digital breach of unprecedented scale has quietly unfolded online. Security researchers, after months of monitoring, have identified a network of exposed datasets containing a combined total of over 16 billion login records. These collections, found on unsecured servers, include usernames and passwords gathered from a wide range of platforms and services.
As reported by CyberNews, the data appears to originate from a mix of infostealer malware, credential stuffing sets, and previously unreported leaks. According to the investigators, the datasets surfaced across various storage systems left open on the internet, with some briefly accessible to the public. Although their availability was short-lived, the exposure window was long enough for researchers to capture and analyze a significant portion of the records.
The credentials come from a wide spread of online environments. Included are accounts linked to social media platforms, corporate tools, cloud services, VPN portals, and even government resources. Many of the records followed a repeating format: typically a web address, then a username and the associated password. This structure matches the way modern infostealer malware tends to collect sensitive information, allowing for automated use in later attacks.
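To make that structure concrete, the short Python sketch below parses lines in an assumed URL:username:password layout and tallies exposed logins per domain. The sample lines and field order are illustrative guesses based on the description above, not excerpts from the leaked files.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative only: the URL:username:password layout is assumed from the description
# above; real infostealer logs vary, and passwords may themselves contain colons.
sample_lines = [
    "https://example.com/login:alice@example.com:hunter2",
    "https://mail.example.org/signin:bob:correct-horse",
]

def parse_record(line: str):
    # Split from the right so the "https://" colon stays inside the URL field.
    url, username, password = line.rsplit(":", 2)
    return urlparse(url).netloc, username, password

domains = Counter(parse_record(line)[0] for line in sample_lines)
print(domains)  # rough count of exposed logins per domain
```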
Several factors set this incident apart. Unlike older leaks that sometimes recirculate in cybercrime forums, the bulk of the data here appears recent and unreported. In fact, researchers say only one previously known dataset — containing around 184 million records — was already in public discussion before this. The rest, according to early analysis, represent newly surfaced material. Some files included not only basic login credentials but also session tokens, cookies, and metadata, all of which can be exploited in targeted intrusions.
The variety in dataset naming conventions has made it difficult to trace every origin point. Some files were labeled generically, using terms like “logins” or “credentials,” while others hinted at geographic or platform-specific links. For instance, a collection exceeding 3.5 billion entries appeared tied to the Portuguese-speaking world, and another with over 455 million entries seemed connected to users in the Russian Federation. Smaller sets, like one named after Telegram, suggest the targeting of specific platforms or services.
Cybersecurity experts following the case have noted how such aggregated credential data fuels a range of malicious campaigns. Among the most likely threats are phishing schemes, identity theft operations, ransomware deployment, and business email compromise attempts. Because the records include both older and newer entries, some individuals and organizations may be at risk without even realizing it.
One of the more troubling aspects is the lack of clear ownership over the exposed data. While some believe that portions could have been compiled by security analysts for research, much of it is presumed to have passed through the hands of cybercriminal actors. The scale of the exposure makes it likely that the datasets are already being used or sold through underground channels.
Although no one can fully undo what has already leaked, security professionals are urging action. They advise individuals to review their existing accounts and update passwords, especially for any services used regularly. Enabling multi-factor authentication can reduce the risk of unauthorized access. Organizations, meanwhile, are encouraged to audit their systems, look for signs of compromise, and educate users on how to respond to potential phishing attempts or credential theft.
Massive breaches of this nature are becoming more common. Just last year, the RockYou2024 password dump revealed nearly 10 billion unique passwords, and in early 2024 another massive incident, known as the Mother of All Breaches, surfaced with over 26 billion records. This latest event, though smaller in scale than MOAB, is still notable because of the focus and freshness of its contents.
At a time when digital infrastructure underpins nearly every aspect of life and business, maintaining control over authentication data has never been more critical. While not all exposed records may be actively in use, even a small fraction of successful logins can result in major disruptions for individuals and companies alike. What matters now is not simply what leaked, but how quickly users and institutions respond to secure their systems.
Read next:
• Firms Rethink Internal AI Builds to Cut Costs, Improve Control, and Manage Risks of Autonomous Decisions
• Position Bias in AI Models Threatens Accuracy in High-Stakes Applications, MIT Warns
by Irfan Ahmad via Digital Information World
Firms Rethink Internal AI Builds to Cut Costs, Improve Control, and Manage Risks of Autonomous Decisions
Recent industry analysis from Gartner suggests that artificial intelligence is becoming more than just a tool in the background. If current trends continue, by the middle of this decade around half of the important choices made inside companies could be shaped or made outright by AI. This shift is not simply about speed, but about how information is processed, evaluated, and turned into action.
Where AI is handled properly, executives may find they can respond faster to change and manage resources more effectively. But where it’s deployed without proper oversight or alignment with business goals, the consequences could be harder to manage. Mistakes at scale are not just expensive, they can be hard to reverse.
In practical terms, these AI agents act as a kind of middle layer between raw data and final decisions. They’re designed to pull in streams of information, assess them in real time, and guide leadership through the more complex layers of judgment. While they don’t remove the need for people, they do change how people approach strategy and planning.
Some firms have already begun to restructure how departments work together. Analysts and data teams are now expected to sit closer to management. Their role isn’t just to deliver charts, but to help shape what kind of questions get asked in the first place. AI becomes more useful when it’s matched with human judgment that knows where to focus.
Not every firm will get this balance right. In fact, the same forecasts warn that a large number of data leaders may fall short in managing the synthetic data they use to train and test models. This could introduce weak points in both accuracy and compliance, which in turn could affect broader business outcomes.
AI itself isn’t neutral. It reflects the quality of the data behind it and the rules set by those who deploy it. In the future, some company boards may even start to bring automated systems into their oversight processes. By the end of the decade, it’s expected that a portion of global boards will begin using AI to independently review and challenge high-stakes decisions made by executives. This doesn't replace accountability, but it reshapes where and how it's applied.
Meanwhile, a growing number of firms are considering whether to build their own generative AI systems instead of relying on external providers. Those who go that route often cite lower long-term costs and stronger control over how their systems evolve. But the choice also comes with increased pressure to understand the risks from the inside out.
The role of leadership is changing. It’s no longer enough to manage teams and review quarterly plans. Those in charge will need to understand what machines can do, where they fall short, and how to make the most of a future where decisions are no longer made in isolation.
Image: DIW-Aigen
Read next: Position Bias in AI Models Threatens Accuracy in High-Stakes Applications, MIT Warns
by Irfan Ahmad via Digital Information World
Tuesday, June 17, 2025
Position Bias in AI Models Threatens Accuracy in High-Stakes Applications, MIT Warns
The researchers traced the root of this issue, referred to as “position bias”, to architectural design choices and how these models are trained to process sequences. Central to the analysis is how the attention mechanism, a core component of models like GPT-4 or LLaMA, handles the flow of information across multiple layers.
Using graph theory, the team demonstrated that attention patterns are not evenly distributed. Instead, certain tokens become dominant simply due to their position. When the model reads from left to right, earlier tokens often accumulate more influence as the layers deepen, even when their content is less relevant. This effect intensifies as more layers are added, creating a cascade where initial tokens disproportionately shape the model's decisions.
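The compounding effect is easy to reproduce in miniature. The NumPy sketch below is a toy model rather than the paper's code: it assumes uniform attention under a causal, left-to-right mask and simply composes that matrix across layers, which is enough to show the earliest tokens accumulating a disproportionate share of influence.

```python
import numpy as np

# Toy model of the cascade described above: uniform attention under a causal mask,
# composed across layers. Learned attention is not uniform; this only shows the bias
# that the mask structure alone introduces.
def causal_uniform_attention(n: int) -> np.ndarray:
    mask = np.tril(np.ones((n, n)))          # token i can only attend to tokens 0..i
    return mask / mask.sum(axis=1, keepdims=True)

n_tokens, n_layers = 12, 8
A = causal_uniform_attention(n_tokens)

influence = np.eye(n_tokens)
for _ in range(n_layers):
    influence = A @ influence                # compose attention across layers

# Total influence of each input token on all output positions: the earliest tokens
# dominate, even though every token carried equal "content".
print(np.round(influence.sum(axis=0), 3))
```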
The study shows that even without adding any formal position tracking, the structure of the model itself introduces a preference for the start of the sequence. In experiments with synthetic retrieval tasks, the performance of the models dipped when key information was placed in the middle of the input. The retrieval curve followed a U-shape, strong at the start, weaker in the center, then improving slightly at the end.
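A rough way to see that U-shape for yourself is a needle-in-a-haystack style probe like the sketch below. It is schematic only: ask_model is a placeholder for whatever chat client you use, and the filler text, needle, and scoring are simplistic stand-ins rather than the study's actual benchmark.

```python
# Schematic retrieval probe: place a "needle" fact at different positions in a long
# prompt and check whether the model recovers it. `ask_model` is a placeholder for
# any callable that takes a prompt string and returns the model's text reply.
def build_prompt(position: float, needle: str, filler: str, n_chunks: int = 50) -> str:
    chunks = [filler] * n_chunks
    chunks.insert(int(position * n_chunks), needle)
    return "\n".join(chunks) + "\n\nQuestion: What is the secret code mentioned above?"

def retrieval_curve(ask_model, needle="The secret code is 7421.",
                    filler="Lorem ipsum dolor sit amet."):
    results = []
    for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
        answer = ask_model(build_prompt(pos, needle, filler))
        results.append((pos, "7421" in answer))   # True if the needle was recovered
    return results
```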
This behavior wasn’t incidental. Controlled tests confirmed that position bias emerged even when the training data had no such leanings. In setups where the data favored certain positions, the models amplified those biases. When models were trained on sequences biased toward the beginning and end, they mirrored that pattern, heavily underperforming in the center.
The paper also explored how positional encoding schemes, tools designed to help the model track where a word appears, can partially counteract this effect. Techniques like decay masks and rotary encodings introduce a fading influence based on distance, nudging the model to attend more evenly across the sequence. However, these methods alone don’t eliminate the bias, especially in deeper networks where earlier layers already tilt the attention forward.
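As a rough illustration of what a decay mask does, the snippet below adds a distance-based penalty to the attention logits before the softmax, in the spirit of linear-bias schemes. The slope and matrix sizes are arbitrary choices for the example, not the formulation used in the paper.

```python
import numpy as np

def decay_mask(n: int, slope: float = 0.1) -> np.ndarray:
    # Penalise attention logits in proportion to query-key distance, nudging the model
    # to spread attention more evenly instead of piling onto early tokens.
    pos = np.arange(n)
    return -slope * np.abs(pos[:, None] - pos[None, :])

def softmax(x: np.ndarray) -> np.ndarray:
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

n = 8
scores = np.random.randn(n, n)                       # raw query-key logits
causal = np.triu(np.full((n, n), -np.inf), k=1)      # standard left-to-right mask
weights = softmax(scores + causal + decay_mask(n))   # decayed, causally masked attention
```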
In practical terms, this means that users relying on AI models for tasks like legal search, coding assistance, or medical records review may unknowingly encounter blind spots. If key content appears mid-document, the model might miss or misjudge it, even if everything else in the system functions as intended.
The implications go beyond diagnostics. By showing that position bias is both an architectural and data-driven phenomenon, the researchers offer pathways to mitigate it. Adjustments in attention masks, fewer layers, and smarter use of positional encodings can help rebalance the focus. The study also suggests that fine-tuning models on more uniformly distributed data could be essential in high-stakes domains where omission carries risk.
The research not only maps the bias but explains its evolution. As tokens move through the model, their contextual representations are repeatedly reshaped. Those that appear earlier begin to dominate, not because they contain better information, but because they become more deeply embedded in the model's reasoning. In this sense, the bias is baked into the system’s logic.
Rather than treating this as a bug, the team sees it as an opportunity for improvement. Their framework doesn’t just diagnose; it provides tools to reshape how models perceive position. By better understanding these internal biases, developers can build systems that reason more fairly and consistently across the full length of input, beginning, middle, and end.
Image: DIW-Aigen
Read next: Why a Wrench Might Outlast Code in the Age of AI
by Irfan Ahmad via Digital Information World
OpenAI Rolls Out ChatGPT Image Generation via WhatsApp at +18002428478
This rollout opens the tool to all users globally, according to the announcement shared through the company’s official X account. But for a platform so deeply associated with cutting-edge AI, the decision to lean on a toll-free number, something more closely tied to landline-era habits, comes across as a curious throwback.
It’s difficult to gauge how many users were actively hoping to reach an AI tool through a method that predates the smartphone. The symbolic use of a “1-800” code may even be lost on those who never had to think about long-distance calling. Still, the move could signal an effort to make AI services feel more approachable to demographics that might be less comfortable navigating app stores, new interfaces, or competing platforms.
In some ways, it suggests that OpenAI is trying to widen its reach, not by adding complexity, but by lowering the technical barrier. For users already familiar with WhatsApp, sending a quick message might feel less intimidating than signing into a new app or website. And for those who recall a time when customer service meant dialing a toll-free number, this might feel oddly familiar, even if the conversation now happens with an algorithm rather than a human voice.
Read next: Marketers Brace for Soaring Content Needs as Expectations Shift Through 2027
by Irfan Ahmad via Digital Information World