A new study by security researchers reveals that iPhone farms are sending nearly 100,000 scam iMessages every single day.
These farms are banks of devices loaded with rotating temporary Apple IDs. Instead of ordinary text messages, they send iMessages, which lets scammers slip past the spam filters that would otherwise catch suspicious SMS on a user’s device. The scammers themselves don’t need any special skills to pull off these attacks: the operators behind the farms effectively sell Phishing-as-a-Service (PhaaS), renting out ready-made phishing infrastructure to anyone willing to pay.
Most of the scams circulating today involve false demands for payments such as unpaid road tolls or customs and shipping fees on incoming packages, along with fake warnings about unpaid taxes.
The majority of these scams are delivered through email and text messages, producing a familiar cat-and-mouse game: mobile carriers try to flag suspicious messages and block them, and the criminals adapt and move on.
The findings come from researchers at Catalyst, who say scammers are switching to iMessage because it is encrypted end to end. Carriers cannot inspect the contents, so the messages are never flagged or blocked.
What makes the situation worse is that at least one platform from China now offers these iPhone farms as a paid service. One example is Lucid, operated by Chinese threat actors, which targets 169 entities across 88 countries around the globe.
The model is highly scalable: subscription-based plans let cybercriminals run large-scale phishing campaigns that harvest sensitive details such as credit card numbers for financial fraud. To avoid detection, the platform sends its lures over Apple’s iMessage and Android’s RCS, bypassing the classic SMS spam filters and improving delivery and success rates.
Some scammers go as far as producing convincing replicas of legitimate organizations’ pages, such as courier services. One group, known as XinXin, has been seen selling phishing templates designed to imitate postal services, tax refund portals, and road toll payment systems.
There is even a Telegram group with around 2,000 members where these PhaaS kits are sold. Experts warn against ever clicking on phishing links received by email or text. No matter how tempting, it is safer to use your own bookmarks or type a known URL directly.
Because it is easy to make emails or texts look like they came from real companies, blind trust online is never advisable. Any message that demands a quick confirmation or pressures you to act immediately to avoid a fee is a strong sign that something is wrong.
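To make the bookmark advice concrete, here is a minimal sketch of checking a link’s host against domains you already trust before opening it. The trusted list and the example URLs are purely illustrative assumptions, not a vetted safelist.

```swift
import Foundation

// Hypothetical safelist: domains the user has bookmarked and already trusts.
let trustedHosts: Set<String> = ["usps.com", "irs.gov"]

func isTrustedLink(_ link: String) -> Bool {
    // Reject anything whose host is missing or does not belong to a trusted domain.
    guard let host = URLComponents(string: link)?.host?.lowercased() else { return false }
    return trustedHosts.contains { host == $0 || host.hasSuffix("." + $0) }
}

print(isTrustedLink("https://usps-toll-fee.example.top/pay"))  // false: lookalike host
print(isTrustedLink("https://www.usps.com/"))                  // true: subdomain of a trusted host
```

The point is not the code itself but the habit it encodes: navigate from something you already trust rather than from the message in front of you.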
Image: DIW-Aigen
Read next: OpenAI’s o3 Reasoning AI Model Fails to Impress and Might Be More Expensive to Run than Anticipated
by Dr. Hura Anwar via Digital Information World
Thursday, April 3, 2025
Wikipedia Pays the Price of the AI Boom: Nonprofit Struggles With Rising Costs as Bots Scrape Its Articles
Popular online encyclopedia Wikipedia is reportedly paying a major price for the AI boom. The encyclopedia giant is struggling with a rise in costs due to bots scraping content that is used for training AI models.
This is not only a financial constraint; it also puts a real strain on the platform’s bandwidth.
On Tuesday, the Wikimedia Foundation, the nonprofit that hosts Wikipedia, warned that automated requests for its content keep growing exponentially. The surge disrupts the site and forces the encyclopedia to add capacity, which in turn drives up its data center bills.
The infrastructure is built to absorb spikes in human traffic during major events, but the traffic generated by scraper bots is unpredictable and keeps showing up as higher costs and higher risk.
The Foundation shared that the bandwidth used for downloading content has grown by 50%. That traffic is not coming from human readers but from automated programs that keep downloading licensed images to feed pictures to their AI models.
Another serious issue is bots pulling large amounts of data from Wikipedia’s less popular articles. A closer look showed that nearly 65% of the most resource-intensive traffic comes from bots, a disproportionate share given that bots account for only about 35% of overall pageviews.
These bots also crawl critical systems in the developer infrastructure, such as the code review platform, which puts further strain on resources. In response, the encyclopedia’s site managers impose case-by-case rate limits on AI crawlers or ban them outright.
To address the issue more broadly, the Wikimedia Foundation says it is rolling out a Responsible Use of Infrastructure plan, acknowledging that the network strain created by AI scrapers is not sustainable.
The Foundation hopes to gather community feedback on how best to identify traffic coming from these scrapers and filter it out. Options include requiring authentication for high-volume scrapers and for API usage.
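As an illustration of the per-crawler throttling described above, here is a minimal fixed-window rate limiter sketch. The limits, the client key (a declared user agent), and the structure are illustrative assumptions, not Wikimedia’s actual policy or implementation.

```swift
import Foundation

// Track request counts per client within a fixed time window.
struct FixedWindowRateLimiter {
    let limit: Int               // maximum requests allowed per window
    let window: TimeInterval     // window length in seconds
    private var hits: [String: (count: Int, windowStart: Date)] = [:]

    init(limit: Int, window: TimeInterval) {
        self.limit = limit
        self.window = window
    }

    // Returns true if the request should be served, false if the client is over budget.
    mutating func allow(client: String, now: Date = Date()) -> Bool {
        if let entry = hits[client], now.timeIntervalSince(entry.windowStart) < window {
            guard entry.count < limit else { return false }   // over budget: reject
            hits[client] = (entry.count + 1, entry.windowStart)
        } else {
            hits[client] = (1, now)                           // start a fresh window
        }
        return true
    }
}

var limiter = FixedWindowRateLimiter(limit: 1_000, window: 3_600)  // e.g. 1,000 requests per hour
print(limiter.allow(client: "ExampleAIBot/1.0"))                   // true until the hourly budget runs out
```

Authenticated access for high-volume scrapers would simply swap the user-agent key for a verified API token, making the budget enforceable rather than advisory.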
The Foundation knows the threat is serious: its content is free, but its infrastructure is not, and it has to act now to restore a healthier balance.
Reddit faced something similar in 2023. Microsoft, for instance, scraped Reddit content and used it for AI features without alerting the platform, a practice Reddit’s CEO openly condemned, and Reddit went on to block Microsoft from crawling its pages.
Reddit also took action by charging third-party developers for access to its API. That move triggered a developer revolt, site-wide blackout protests, and the shutdown of some of the platform’s leading third-party clients.
Image: DIW-Aigen
Read next: YouTube in a Position To Become The Leader of Video Streaming in 2025
by Dr. Hura Anwar via Digital Information World
Wednesday, April 2, 2025
YouTube in a Position To Become The Leader of Video Streaming in 2025
YouTube is the most widely used video streaming service in the world, with an estimated worth of around $550 billion, making it the king of all media. Thanks to the variety of its content, the platform is also poised to lead all video media in both revenue and screen time in 2025, according to the latest statistics.
According to analyst Michael Nathanson of MoffettNathanson, YouTube was the second-largest media company by revenue in 2024, generating $54.2 billion, just behind Disney, which it is positioned to overtake in 2025. Moreover, in February 2025, and for the second time in the past 12 months, YouTube was the largest single source of aggregate TV watch time in the US, surpassing giants such as Disney, Fox Corp, Paramount Global, and Netflix.
If YouTube stays on this trajectory, Nathanson's analysis suggests it will grow even further in 2025 and could become the premier streaming aggregator, and the numbers point in that direction. YouTube generated $36.15 billion in ad revenue in 2024 alone, and its subscription revenue topped $15 billion in the 12 months to September 2024. YouTube TV has become the biggest online TV service in the US with more than 8 million subscribers, while YouTube Music and Premium have crossed 125 million subscribers. Nathanson expects YouTube to hold 10% of the $85 billion TV industry by the end of 2026 and to become the industry leader.
Nathanson also projects that YouTube's operating income, which was $7.8 billion in 2024, will grow to $10.2 billion this year and to $13.8 billion in 2027, with operating margin rising from 14% to 16% and then 18%, driven by more subscriptions and higher subscription prices.
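A quick back-of-the-envelope check shows how those margin and income figures relate to the revenue numbers cited above (operating margin = operating income / revenue); the sketch below simply re-runs that arithmetic on the estimates in this article.

```swift
// All values are the estimates cited in this article, in billions of US dollars.
let income2024 = 7.8
let revenue2024 = 54.2
let margin2024 = income2024 / revenue2024 * 100
print("2024 operating margin ≈ \((margin2024 * 10).rounded() / 10)%")   // ≈ 14.4%, in line with the ~14% cited

// Projected income at the projected margins implies roughly this much revenue:
for (year, income, margin) in [(2025, 10.2, 0.16), (2027, 13.8, 0.18)] {
    let impliedRevenue = ((income / margin) * 10).rounded() / 10
    print("\(year): implied revenue ≈ $\(impliedRevenue)B")              // ≈ $63.8B and ≈ $76.7B
}
```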
Another factor likely to grow YouTube's subscriber base and revenue is the spread of better smart devices and faster internet. In the coming years, faster smartphones will keep expanding the reach of online streaming, and new ways of delivering fast internet, such as Elon Musk's Starlink project, are reaching areas that previously had no access at all. Both trends will push people toward online services like YouTube over traditional TV because of the convenience and variety they offer, adding to its subscriber growth.
Image: DIW-Aigen
Read next: OpenAI Anticipates Delays in New Image Generation Tool for ChatGPT Due to Capacity Challenges
by Ehtasham Ahmad via Digital Information World
France Penalizes Apple with $162M Fine For Its ATT Privacy Model
iPhone maker Apple has been hit with a hefty $162 million fine by France’s competition watchdog over the rollout of its App Tracking Transparency (ATT) privacy framework.
The penalty is for abusing its dominant position as a distributor of mobile apps in the region through the way ATT was implemented. According to the Autorité de la concurrence, the Cupertino firm rolled the framework out on iOS and iPadOS devices in 2021, and the conduct continued until 2023.
The feature requires mobile apps to get users’ explicit permission before accessing the device’s unique advertising identifier, which is what allows a user to be tracked across different apps and sites for targeted advertising.
Until permission is granted, the identifier cannot be used to track the user, and no revenue is generated from identifier-based targeting. Once permission is given, however, apps and advertisers can profit from tracking the user’s activity.
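For context, this is roughly what the permission request looks like from an app’s side, a minimal sketch using Apple’s AppTrackingTransparency API; the function name and the handling of each status are illustrative, and the prompt’s wording is configured separately in the app’s Info.plist.

```swift
import AppTrackingTransparency
import AdSupport

// Ask the user for tracking permission; the advertising identifier (IDFA) is only
// meaningful after an explicit opt-in, otherwise the system returns an all-zeros value.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User opted in: the IDFA can be read and used for cross-app ad targeting.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized, IDFA: \(idfa.uuidString)")
        case .denied, .restricted, .notDetermined:
            // No opt-in: the identifier stays zeroed and cannot be used for targeting.
            print("Tracking not authorized")
        @unknown default:
            print("Unhandled authorization status")
        }
    }
}
```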
App developers must also explain why features like tracking are necessary, the French regulator noted; obtaining consent alone is not enough.
The regulator also stressed that the ATT model itself is not the core issue, and that Apple’s stated objective of protecting users’ personal information is legitimate. The problem is the lack of transparency for users and the fact that the way ATT is implemented is neither necessary for nor proportionate to that objective.
The authority described the consent framework as needlessly complex: the consent collected through ATT does not meet the legal obligations that apply under French law, so developers are forced to add their own consent solutions on top of it, leaving users facing a string of different consent pop-ups.
One issue raised is that users end up having to confirm consent to tracking more than once, while refusal takes only a single step; addressing this asymmetry would make the framework more neutral and acceptable.
Publishers were required to obtain double consent from users for tracking on third-party sites and apps, yet the iPhone maker did not ask for the same consent within its own apps, another major point of criticism.
In other words, double consent was demanded for third parties’ data collection, while consent for Apple’s own data collection was gathered only once, which the authority called unfair.
It is worth noting that the order does not require any specific changes to the framework; per Reuters, it is now up to Apple to ensure it complies with the ruling. The fine itself is barely a dent for Apple, whose earnings hit $36 billion on revenue of $124 billion in the final quarter of last year.
In a statement shared with the AP, the company said the ATT prompt is the same for all developers and that the feature has received support from consumers, data protection authorities, and privacy advocates around the world.
Image: DIW-Aigen
Read next:
• Social Media Platforms Dominate Time Spent Per Visit, YouTube and Naver Lead Rankings
• Meta Found Complicit In Running Illegal Ads for Settlements in The West Bank Of Palestine
by Dr. Hura Anwar via Digital Information World
Tuesday, April 1, 2025
Social Media Platforms Dominate Time Spent Per Visit, YouTube and Naver Lead Rankings
Some websites attract far more traffic than others because of their importance in today's digital world, Google, the most-used search engine, being the obvious example. However, Google does not top the list of websites where users spend the most time per visit.
Unsurprisingly, social media platforms dominate the rankings, as people spend more time engaging with content and connecting with others. According to Similarweb data, users spend significantly longer per visit on social media platforms compared to other websites, drawn by diverse entertainment options and real-time updates.
Websites Where Users Spend the Most Time per Visit
YouTube leads the list, with users spending an average of 20 minutes and 47 seconds per visit. As the world's most popular video streaming platform, YouTube generated $36.2 billion in ad revenue in 2024 alone. The platform’s video-based content keeps users engaged longer than text-heavy sites, making it the top-ranked website in terms of time spent.
At number two is Naver, the dominant search engine in South Korea, where users spend an average of 16 minutes and 4 seconds per visit. Despite being less known globally, Naver surpasses Google in South Korea in terms of both usage and engagement.
X (formerly Twitter) ranks third, with users spending 12 minutes and 37 seconds per visit. With nearly 650 million users, X serves as a hub for real-time news, discussions, and direct interactions with public figures, contributing to its high engagement levels.
The social media messenger WhatsApp is ranked fourth, with an average visit duration of 12 minutes and 26 seconds. With 3 billion users worldwide, the app remains the preferred choice for messaging, ensuring continuous daily usage.
At number five, Facebook retains its strong user engagement, with visitors spending 10 minutes and 59 seconds per session. Boasting 3.07 billion users, Facebook’s mix of social connections, groups, pages, and entertainment options contributes to its high retention.
Google ranks sixth, with an average visit duration of 10 minutes and 46 seconds. While it is the world's most visited website, its nature as a search engine means users typically enter, find information, and leave relatively quickly compared to social platforms.
Other Notable Websites
At number seven, Yahoo Japan (yahoo.co.jp) holds an average visit time of 8 minutes and 54 seconds, remaining a dominant search engine in Japan. It is followed closely by Yandex, Russia’s largest search engine, where users spend 8 minutes and 48 seconds per visit.
Instagram ranks ninth, with users spending 8 minutes and 41 seconds per session. As a leading photo and video-sharing platform with 2 billion users, Instagram attracts users who stay engaged with updates from celebrities, influencers, and friends.
At number ten comes LinkedIn, with an average session duration of 8 minutes and 33 seconds. Given its role as a professional networking and job-seeking platform, users spend extended time browsing job listings and industry updates.
Dzen.ru, a Russian content aggregation platform, ranks eleventh, with visitors spending 8 minutes and 30 seconds per visit.
Beyond these, other platforms rank lower in terms of time spent per visit:
Yahoo.com (7m 43s, rank 12)
Netflix (7m 07s, rank 13)
ChatGPT (6m 47s, rank 14)
Reddit (6m 07s, rank 15)
Amazon (5m 51s, rank 16)
Baidu (5m 35s, rank 17)
TikTok (4m 09s, rank 18)
Wikipedia (3m 18s, rank 19)
Microsoft Online (2m 16s, rank 20)
The data highlights that content format and user engagement strategies significantly influence time spent on websites. Platforms that offer immersive content - such as video, real-time interactions, and personalized feeds - retain users the longest.
| Website | Avg. Visit Duration (h:mm:ss) |
|---|---|
| youtube.com | 0:20:47 |
| naver.com | 0:16:04 |
| x.com | 0:12:37 |
| whatsapp.com | 0:12:26 |
| facebook.com | 0:10:59 |
| google.com | 0:10:46 |
| yahoo.co.jp | 0:08:54 |
| yandex.ru | 0:08:48 |
| instagram.com | 0:08:41 |
| linkedin.com | 0:08:33 |
| dzen.ru | 0:08:30 |
| yahoo.com | 0:07:43 |
| netflix.com | 0:07:07 |
| chatgpt.com | 0:06:47 |
| reddit.com | 0:06:07 |
| amazon.com | 0:05:51 |
| baidu.com | 0:05:35 |
| tiktok.com | 0:04:09 |
| wikipedia.org | 0:03:18 |
| microsoftonline.com | 0:02:16 |
Read next:
• Report Highlights AI’s Factual Inaccuracy and Rising Skepticism Among Experts
• New Survey Shows that Gmail is the Most Used Email Service Provider in the US
by Ehtasham Ahmad via Digital Information World
US Smart Speaker Market: Amazon Echo Reigns Supreme
Statista’s latest report highlights America’s favorite smart speakers, with Amazon Echo currently dominating the US market. The survey, conducted among 2,584 smart speaker owners in the US, found that 61% of respondents favor Amazon Echo voice-activated smart speakers running the Alexa voice assistant.
The second favorite among US consumers is Google Home, which runs Google Assistant and is preferred by 23% of respondents. HomePod, which runs Siri, and Nest, which runs Google Assistant, are each liked by 16% of respondents. Bose, which supports both Google Assistant and Alexa, is a favorite of 11% of Americans, and another 11% favor JBL LINK Series smart speakers running Google Assistant and Siri.
The survey also found that 75% of people in the US do not own a smart speaker, which suggests there is still considerable room for market growth. Of the US adults who do own a smart speaker, six in ten said they own the whole system.
The dominance of Amazon Echo in the US smart speaker market aligns with broader industry trends, where ecosystem integration plays a crucial role in consumer preference. Amazon’s aggressive push into smart home connectivity, along with Alexa’s vast skillset and compatibility with third-party devices, has solidified its lead. However, Google’s advancements in AI and Apple’s focus on privacy and premium audio could reshape the landscape in the coming years. With 75% of Americans yet to adopt smart speakers, future growth may depend on new innovations, improved AI capabilities, and deeper integration with everyday digital experiences.
Read next:
• Privacy Divide Widens: Spain, South Korea Concerned While U.S. Falls Behind, Statista Reports
• New Survey Shows that Gmail is the Most Used Email Service Provider in the US
by Arooj Ahmed via Digital Information World
Monday, March 31, 2025
Report Highlights AI’s Factual Inaccuracy and Rising Skepticism Among Experts
A study from the Association for the Advancement of Artificial Intelligence examines the disconnect between public perception and actual AI performance. Although AI systems continue evolving, ensuring accurate responses remains an unresolved challenge.
Despite extensive funding, prominent AI models struggle to maintain reliability. The AAAI’s research panel collected insights from experts and surveyed hundreds of participants to assess current capabilities.
The findings indicate that widely used AI models face difficulties with factual accuracy. In evaluations using straightforward question sets, these systems provided incorrect answers in more than half of the cases. Researchers have attempted various methods to enhance precision, such as retrieving relevant documents before response generation, applying automated reasoning to eliminate inconsistencies, and guiding AI through step-by-step problem-solving processes.
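To make the first of those mitigations concrete, here is a minimal sketch of retrieval-augmented generation: fetch the documents most relevant to a question, then ask the model to answer only from that context. The toy corpus, the keyword-overlap scoring, and the prompt format are illustrative assumptions, not any particular vendor's pipeline.

```swift
// A toy document store standing in for a real index.
let corpus = [
    "The AAAI panel surveyed hundreds of AI researchers about current capabilities.",
    "Chain-of-thought prompting asks a model to reason step by step before answering.",
    "Retrieval grounds a model's answer in documents fetched at query time."
]

// Naive retrieval: rank documents by how many words they share with the question.
func retrieve(query: String, from docs: [String], top k: Int = 2) -> [String] {
    let queryWords = Set(query.lowercased().split(separator: " ").map(String.init))
    let scored = docs.map { doc -> (String, Int) in
        let docWords = Set(doc.lowercased().split(separator: " ").map(String.init))
        return (doc, docWords.intersection(queryWords).count)
    }
    return scored.sorted { $0.1 > $1.1 }.prefix(k).map { $0.0 }
}

let question = "What did the AAAI panel survey?"
let context = retrieve(query: question, from: corpus).joined(separator: "\n")
// The grounding step: the model is told to answer from the retrieved context only,
// which makes unsupported claims easier to spot and trace back to a source.
let prompt = "Answer using only the context below.\n\nContext:\n\(context)\n\nQuestion: \(question)"
print(prompt)
```

Even with grounding like this, the survey's point stands: retrieval narrows the space for error but does not eliminate it, which is why human review remains part of the loop.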
Even with these refinements, meaningful progress has been limited. Approximately 60 percent of AI specialists remain skeptical about achieving reliable factual accuracy in the near term. This reinforces the importance of human oversight when using AI tools, particularly in domains where precision is essential, such as finance and healthcare.
The study also highlights a major gap in understanding. Nearly 79 percent of AI experts believe the general public overestimates current AI capabilities. Many individuals lack the necessary knowledge to critically evaluate claims made about AI advancements. Industry analysts have observed that AI enthusiasm recently peaked and is now entering a period of reduced expectations. This trend influences digital marketing strategies, where businesses may allocate resources based on unrealistic assumptions about AI’s potential. When results do not align with projections, financial setbacks may occur.
Additionally, 74 percent of researchers argue that AI development is shaped more by popular interest than by scientific necessity. This raises concerns that fundamental challenges, including factual reliability, might be overlooked in favor of commercially appealing advancements.
Organizations adopting AI-driven solutions must recognize the limitations of these technologies. Regular evaluations and expert reviews are essential to mitigating errors, particularly in regulated sectors where misinformation carries significant consequences.
AI-generated content can negatively impact credibility if inaccuracies persist. Search platforms may deprioritize sites that publish unreliable information, reinforcing the need for careful oversight. A balanced approach where AI assists but humans validate remains the most effective strategy for maintaining trust and relevance.
Beyond content creation, decision-makers must take a measured approach to AI investment. Committing resources to new technologies without proven returns can result in costly miscalculations. Businesses that develop a clear understanding of AI’s capabilities and constraints will be better positioned to implement sustainable strategies that deliver real value.
Image: DIW-Aigen
Read next:
• Phones Aren’t the Only Distraction: Study Shows Workplace Procrastination Persists Despite Device Distance
• How Is AI Fueling a Data Explosion Bigger Than All of Human History?
• New Survey Shows that Gmail is the Most Used Email Service Provider in the US
by Asim BN via Digital Information World