Google recently announced that it is integrating its Gemini AI assistant into Gmail, a move that has made many users anxious about generative AI reading their personal emails. Google has touted several advantages of the integration, such as faster email searches, smarter prioritization, and highlighting of important messages with Gemini's help. Users remain skeptical, however, questioning why AI is needed for these tasks and worrying that AI models will be trained on their emails.
A recent Statista Consumer Insights survey looked at which email service providers dominate the US market, and it was no surprise that Gmail leads, with 75% of respondents using it. Yahoo Mail is the second most used provider but trails far behind, at 31% of respondents.
Other email services used by Americans are Microsoft Outlook/Hotmail (25%), Apple iCloud Mail (17%), and AOL Mail (10%). 9% of respondents reported using AT&T Mail, while Spectrum and Xfinity (Comcast) are used by 8% and 7% of respondents respectively (respondents could select more than one service, which is why the shares sum to more than 100%). The survey was conducted among 1,249 US respondents between the ages of 18 and 64.
Read next: Digital Fatigue: A Third of Americans Willing To Trade Security for Convenience
by Arooj Ahmed via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Monday, March 31, 2025
How Is AI Fueling a Data Explosion Bigger Than All of Human History?
The digital world now collects, stores, and processes enormous amounts of data, and as technology advances, so does data's rise and influence. Using figures from Avison Young and IDC's Global DataSphere Forecast, we visualized the rapid increase in data over the years and the challenges and opportunities that come with it. Projections indicate that the next three years will generate more data than all of human history combined. One of the biggest drivers of this growth is artificial intelligence, beginning with the release of early generative AI models in 2014 and accelerating with the launch of OpenAI's GPT-1 in 2018.
In 2010, worldwide data totaled just 2 zettabytes, rising gradually to 13 zettabytes in 2014. It reached 33 zettabytes in 2018 and 64 zettabytes by 2020.
Worldwide data jumped from 84 zettabytes in 2021 to 101 zettabytes in 2022, the year OpenAI's ChatGPT was released and reached 1 million users within five days of launch. With AI products now woven into everyday life, data is projected to reach 182 zettabytes in 2025. Between 2010 and 2023, roughly 542 zettabytes of data were created in total, but from 2024 to 2026 alone, about 552 zettabytes are expected, underscoring the accelerating growth of the technology and AI markets.
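For a rough sense of scale, here is a minimal Python sketch using only the figures cited above (all values in zettabytes); the arithmetic is illustrative and not part of the IDC forecast itself:

```python
# Cumulative totals as stated in the article (zettabytes).
created_2010_to_2023 = 542  # ZB over fourteen years
created_2024_to_2026 = 552  # ZB over just three years

# Three years of output exceeding the prior fourteen is the
# "more data than all of human history" claim in miniature.
print(created_2024_to_2026 > created_2010_to_2023)  # True

# Implied compound annual growth rate between two cited data points:
# 64 ZB in 2020 growing to a projected 182 ZB in 2025.
cagr = (182 / 64) ** (1 / 5) - 1
print(f"Implied CAGR 2020-2025: {cagr:.1%}")  # roughly 23% per year
```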
Read next: Digital Fatigue: A Third of Americans Willing To Trade Security for Convenience
by Arooj Ahmed via Digital Information World
Sunday, March 30, 2025
Phones Aren’t the Only Distraction: Study Shows Workplace Procrastination Persists Despite Device Distance
According to a new study published in Frontiers in Computer Science, simply moving smartphones out of people's reach isn't enough to reduce procrastination and distraction so they can focus on their work. The study examined whether placing smartphones farther away at work reduces workers' non-work-related smartphone use. The researchers had 22 participants work in a soundproof, private room with their usual work devices, including their phones and laptops. The smartphones received their usual notifications, which the researchers did not control.
The researchers tested two conditions: in the first, the phone sat on the participant's desk within easy reach; in the second, it was placed 1.5 meters away on another desk. The only difference between the conditions was the distance between the smartphone and the participant. The results showed that putting the smartphone farther away reduced phone use, but participants simply distracted themselves by other means, such as using their laptops instead.
Ultimately, phone placement made no difference to focus: time spent on work and on leisure activities remained the same. The study also found that participants preferred smartphones as their distraction device of choice because they provide a connection to work and loved ones. Since smartphones bundle everything from alarm clocks and navigation to information sources and music players, people favor them over other devices, and even when a phone serves no practical purpose, social media is always there for entertainment. Computers can serve all these purposes too, but they are not as convenient or portable.
The researchers suggest ways to reduce distractions at work, such as silencing or scheduling notifications. However, they also concede that avoiding phone use entirely is unrealistic, because people, especially younger ones, are deeply dependent on their phones and struggle to resist them.
Image: DIW-Aigen
Read next: Job Hopping Becomes the Norm as 70% of U.S. Workers Eye Career Moves
by Arooj Ahmed via Digital Information World
Are AI Crawlers Threatening Website Performance, SEO, and Bandwidth Costs?
AI crawlers are hitting websites in growing numbers, and these bots are affecting sites' search rankings and speed. The crawlers come from companies like Anthropic, OpenAI, and Amazon, and they scrape websites to gather data for AI models. SourceHut, for instance, has blocked traffic from cloud providers such as Microsoft Azure and Google Cloud because they were the source of excessive bot traffic.
According to data from Vercel, OpenAI's GPTBot made 569 million requests in a month, while Anthropic's Claude crawler made 370 million. AI crawlers account for around 20% of the volume of Google's search crawler. DoubleVerify found an 86% increase in general invalid traffic (GIVT) in late 2024 because of AI crawlers, with 16% of that traffic coming from ClaudeBot, GPTBot, and AppleBot.
Chart: DoubleVerify
The Read the Docs project reported cutting its daily traffic from 800 GB to 200 GB by blocking AI crawlers, saving it around $1,500 per month.
AI crawlers differ from traditional crawlers in their depth and frequency, consuming more resources by revisiting the same pages every few hours. SEO professionals and website owners need to manage AI crawlers while maintaining visibility in search results: check server logs and bandwidth spikes for unusual activity, and monitor heavy traffic to resource-intensive pages. Using robots.txt rules or Cloudflare's AI Labyrinth can also help block unauthorized bot traffic.
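As a starting point for the log-checking step, here is a minimal Python sketch that tallies requests and bandwidth per known AI crawler from an nginx-style access log. The log path is an assumption to adapt to your server, and the bot list covers publicly documented user-agent tokens; extend it as needed:

```python
import re
from collections import Counter

# Assumed path; adjust for your server setup.
LOG_PATH = "/var/log/nginx/access.log"

# User-agent substrings of publicly documented AI crawlers.
AI_BOTS = ["GPTBot", "ClaudeBot", "Amazonbot", "Applebot", "CCBot", "Bytespider"]

requests, bytes_served = Counter(), Counter()

# Matches the common nginx "combined" log format:
#   ... "METHOD /path HTTP/x.x" status bytes "referer" "user-agent"
line_re = re.compile(r'"[A-Z]+ \S+ \S+" \d{3} (\d+|-) "[^"]*" "([^"]*)"')

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = line_re.search(line)
        if not match:
            continue
        size, agent = match.groups()
        for bot in AI_BOTS:
            if bot in agent:
                requests[bot] += 1
                bytes_served[bot] += 0 if size == "-" else int(size)
                break

for bot, count in requests.most_common():
    print(f"{bot}: {count} requests, {bytes_served[bot] / 1e9:.2f} GB served")
```

A robots.txt rule (for example, `User-agent: GPTBot` followed by `Disallow: /`) handles crawlers that honor it; log monitoring like this helps catch the ones that don't.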
Read next:
• American Support for the TikTok Ban Hits New Low, Study Claims
• YouTube Updates Shorts View Count To Capture Every Play While Testing Variable Notification Frequency For Better Engagement
by Arooj Ahmed via Digital Information World
AI-Powered Sextortion Scams Surge: Cybercriminals Exploit Data Breaches for Blackmail
According to a new blog post by Avast threat intelligence researchers, many cybercriminals are now combining AI with data breaches to carry out sextortion attacks. These scammers use AI and stolen data to run personalized scams, and many online daters are falling victim. Sextortion attacks rose 137% in the US, 49% in the UK, and 34% in Australia, and the criminals behind them keep adopting new tactics.
Michal Salat, Threat Intelligence Director at Avast, says sextortion victims receive alarming messages claiming that hackers have access to their private videos and images. Passwords stolen in data breaches make the scams more credible. Scammers also use AI to create deepfake images and explicit videos, pasting the victim's face onto someone else's body. As AI improves, extortion texts, emails, and calls grow more sophisticated, and the fear of exposure leaves victims rattled.
Scammers are also pulling images from Google Maps to threaten victims with fabricated pictures of their homes. They gather victims' emails, names, and addresses on the dark web, then combine that personal information with Google Maps imagery to produce unsettling images of victims' residences. Some also claim to have access to victims' devices and threaten to leak their personal information or sexual content.
Even though the images and threats are AI-generated, they still shock victims, especially when the personal data is accurate, and the pressure to comply with ransom demands mounts. About 15,000 Bitcoin wallets are linked to the Google Maps scams, which suggests the scammers are making substantial profits. To protect yourself, do not open attachments in, or reply to, suspicious emails, texts, or calls. Teenagers are especially vulnerable and often become targets through social media; if this happens to them, they should stay calm and should not pay the ransom.
Image: DIW-Aigen
Read next: Too Much Social Media? Study Links Heavy Use to Rising Irritability
by Arooj Ahmed via Digital Information World
Sunday, March 16, 2025
AI’s Growing Influence on Workplaces Sparks Concerns Over Declining Critical Thinking
Researchers from Microsoft Research and Carnegie Mellon University studied knowledge workers, analyzing roughly 1,000 real-world examples of AI tool use, to determine whether AI is changing our critical thinking.
The results showed that AI is changing how we think at work, with consequences for job satisfaction and new challenges. The researchers examined how knowledge workers apply critical thinking when working with AI, and what causes them to think more, or less, while using AI tools.
The study found that the more people trust AI, the less they question its results, while those who believe their own skills exceed AI's tend to think critically about its responses. This creates a risk: as AI gets better, we become less likely to question its output, even when scrutiny is warranted. Several factors keep people from thinking critically, including awareness barriers, motivation barriers, and ability barriers.
Lev Tankelevitch, a senior researcher at Microsoft, says most people are less critical of AI output when the tasks they perform with AI are low-stakes, and naturally become more critical when the stakes are high. The concern is that workers who don't exercise critical thinking routinely while using AI may fail to apply it when it truly matters. There is no doubt that generative AI has made cognitive tasks easier, helping workers in areas like comprehension, knowledge gathering, and analysis.
Even though AI retrieves information quickly, professionals should focus on checking its accuracy. AI is also used for problem-solving, but workers still need to refine and adapt its solutions to real-world scenarios, and professionals increasingly supervise AI to ensure high-quality, relevant results rather than performing every task themselves. As AI's role in the workplace evolves, jobs will shift toward prompt engineering, quality control, and output verification.
Workplace success will be defined by how well employees direct and assess AI, not just by personal task execution. To keep workers thinking critically while using AI, organizations should build verification steps into workflows and design AI interfaces that prompt users to critically analyze each response. The skills needed in AI-driven workplaces are evolving, but critical thinking remains essential.
Image: DIW-Aigen
Read next: Social Media Abstinence Lowers Anxiety, But Mindful Usage Helps Prevent Loneliness, Researchers Discover
by Arooj Ahmed via Digital Information World
Can You Trust AI for Medical Advice? New Study Uncovers the Risky Truth
According to a new study published in NPJ Digital Medicine, Spanish researchers investigated whether large language models are reliable when giving health advice. They tested seven LLMs, including OpenAI's ChatGPT and GPT-4 and Meta's Llama 3, with 150 medical questions and found that results varied across all the models tested. Many AI-based search engines give incomplete or incorrect answers to health-related questions, yet despite the growing demand for AI-powered chatbots, there had been few rigorous studies of whether LLMs give reliable medical answers. This study found that LLM accuracy depends on phrasing, retrieval bias, and reasoning, and that the models can still produce misinformation.
For the study, the researchers assessed four search engines (Google, Yahoo!, DuckDuckGo, and Bing) and seven LLMs, including ChatGPT, GPT-4, Flan-T5, Llama3, and MedLlama3. ChatGPT, GPT-4, Llama3, and MedLlama3 led most evaluations, while Flan-T5 lagged behind the pack. For the search engines, the researchers analyzed the top 20 ranked results, using a passage-extraction model to identify relevant snippets and a reading-comprehension model to determine whether each snippet contained a definitive yes/no answer. Two user behaviors were simulated: lazy users stopped searching as soon as they found the first clear answer, while diligent users cross-referenced three sources before settling on an answer. Lazy users actually got the most accurate answers, suggesting that top-ranked results are correct most of the time.
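A minimal Python sketch of how such user behaviors might be simulated over ranked, labeled snippets; the function names and labels here are illustrative, not taken from the paper:

```python
from typing import List, Optional

# Each ranked snippet carries "yes", "no", or None (no definitive answer),
# as a reading-comprehension model like the study's would assign.

def lazy_user(labels: List[Optional[str]]) -> Optional[str]:
    """Stop at the first snippet with a definitive yes/no answer."""
    for label in labels:
        if label in ("yes", "no"):
            return label
    return None

def diligent_user(labels: List[Optional[str]]) -> Optional[str]:
    """Collect the first three definitive answers and take the majority."""
    found = [label for label in labels if label in ("yes", "no")][:3]
    if not found:
        return None
    return max(set(found), key=found.count)

ranked = [None, "yes", "no", "yes", None]
print(lazy_user(ranked))      # "yes" (first definitive answer)
print(diligent_user(ranked))  # "yes" (majority of first three answers)
```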
For the large language models, the researchers used different prompting strategies: asking the question without any context, using friendly wording, and using expert wording. They also gave the LLMs sample Q&As, which helped some models but had no effect on others. A retrieval-augmented generation (RAG) setup was tested as well, in which LLMs were given search-engine results before generating their own responses. Model performance was measured by accuracy, by common errors in responses, and by improvement under retrieval augmentation.
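The study's exact templates aren't reproduced in the article, so the following Python sketch is purely illustrative of the three prompting strategies and the retrieval-augmented variant; the question and placeholder snippets are hypothetical:

```python
QUESTION = "Can vitamin D deficiency cause fatigue?"

# Three prompting strategies analogous to those described above.
no_context = QUESTION
friendly = f"Hi! I was wondering, {QUESTION[0].lower()}{QUESTION[1:]} Thanks!"
expert = (
    "You are a medical expert. Answer the following question with a "
    f"definitive yes or no, then explain briefly: {QUESTION}"
)

# Retrieval-augmented variant: prepend search-engine snippets to the question.
snippets = [
    "Snippet 1: ...",  # top-ranked search results would go here
    "Snippet 2: ...",
]
rag_prompt = (
    "Using only the evidence below, answer yes or no.\n\n"
    + "\n".join(snippets)
    + f"\n\nQuestion: {QUESTION}"
)
print(rag_prompt)
```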
The results showed that the search engines answered 50-70% of queries accurately, while the LLMs reached about 80% accuracy. LLM responses varied with how questions were framed; the expert prompt was the most effective but sometimes produced less definitive answers. Bing had the most reliable answers, though not meaningfully better than Yahoo!, Google, or DuckDuckGo. Many search-engine results were irrelevant or off-topic, but filtering for relevant answers improved precision to 80-90%. Smaller LLMs improved after search-engine snippets were added; poor-quality retrieval, however, made LLMs less accurate, especially on Covid-19-related queries.
Error analysis revealed three major failure modes for LLMs on health-related queries: misunderstanding the medical consensus, misinterpreting questions, and giving ambiguous answers. Model performance also varied by dataset, with questions from a 2020 dataset producing more accurate responses than those from a 2021 dataset.
Read next: AI Search Traffic Jumps 123% as ChatGPT and Perplexity Reshape SMB Strategies
by Arooj Ahmed via Digital Information World