"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Tuesday, December 2, 2025
Global Smartphone Market to Grow in 2025 as Memory Shortage Drives Price Pressures for 2026
Apple’s performance accounts for a substantial part of the improved forecast. IDC expects the company to ship 247.4 million iPhones in 2025, reflecting 6.1% annual growth and marking its highest volume on record. China contributes significantly to this shift: IDC revised Apple’s 2025 outlook for the region from a projected 1% decline to 3% growth after recent monthly sales data showed sustained demand. Globally, Apple’s shipment value is projected to exceed 261 billion dollars in 2025, supported by 7.2% year-over-year growth.
The outlook changes in 2026 as component availability tightens. IDC now expects a 0.9% decline in worldwide smartphone shipments, reversing an earlier projection for slight growth. The revision reflects two factors: a global memory shortage that is raising costs and constraining supply, and Apple’s decision to move the launch of its next base model from late 2026 to early 2027. IDC notes that the shortage is expected to affect lower-end and midrange Android devices more noticeably because they are more sensitive to price increases.
Pricing is expected to rise even as unit volumes soften. IDC forecasts the global average selling price of smartphones to reach 465 dollars in 2026. Higher component costs are expected to push overall market value to 578.9 billion dollars. Manufacturers may raise retail prices or adjust their portfolios toward higher-margin models to manage the impact of memory-related cost increases.
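As a back-of-the-envelope check (our arithmetic, not IDC's), a 578.9 billion dollar market at a 465 dollar average selling price implies shipments of roughly 1.24 billion units in 2026, which is consistent with the slight volume decline IDC describes.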
The market closes 2025 with improving conditions, while the balance between component constraints and pricing trends shapes expectations for 2026.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next:
• How Small Language Models Differ from Large Ones in Power and Purpose
• Microsoft CEO on the Skills That Matter as AI Expands in the Workplace
by Irfan Ahmad via Digital Information World
How Small Language Models Differ from Large Ones in Power and Purpose
As AI becomes increasingly central to how we work, learn and solve problems, understanding the different types of AI models has never been more important. Large language models (LLMs) such as ChatGPT, Claude, Gemini and others are in widespread use. But small ones are increasingly important, too.
Image: DIW-Aigen
Let’s explore what makes small language models (SLMs) and LLMs different – and how to choose the right one for your situation.
Firstly, what is a language model?
You can think of language models as incredibly sophisticated pattern-recognition systems that have learned from vast amounts of text.
They can understand questions, generate responses, translate languages, write content, and perform countless other language-related tasks.
The key difference between small and large models lies in their scope, capability and resource requirements.
Small language models are like specialised tools in a toolbox, each designed to do specific jobs extremely well. They typically contain millions to a few billion parameters (these are the model’s learned knowledge points).
Large language models, on the other hand, are like having an entire workshop at your disposal – versatile and capable of handling almost any challenge you throw at them, with billions or even trillions of parameters.
What can LLMs do?
Large language models represent the current pinnacle of AI language capabilities. These are the models making headlines for their ability to “write” poetry, debug complex code, engage in conversation, and even help with scientific research.
When you interact with advanced AI assistants such as ChatGPT, Gemini, Copilot or Claude, you’re experiencing the power of LLMs.
- Also read: Which AI Models Answer Most Accurately, and Which Hallucinate Most? New Data Shows Clear Gaps
The primary strength of LLMs is their versatility. They can handle open-ended conversations, switching seamlessly from discussing marketing strategies to explaining scientific concepts to creative writing. This makes them invaluable for businesses that need AI to handle diverse, unpredictable tasks.
A consulting firm, for instance, might use an LLM to analyse market trends, generate comprehensive reports, translate technical documents, and assist with strategic planning – all with the same model.
LLMs excel at tasks requiring nuanced understanding and complex reasoning. They can interpret context and subtle implications, and generate responses that consider multiple factors simultaneously.
If you need AI to review legal contracts, synthesise information from multiple sources, or engage in creative problem-solving, you need the sophisticated capabilities of an LLM.
These models are also excellent at generalising. Train them on diverse data, and they can extrapolate knowledge to handle scenarios they’ve never explicitly encountered.
However, LLMs require significant computational power and usually run in the cloud, rather than on your own device or computer. In turn, this translates to high operational costs. If you’re processing thousands of requests daily, these costs can add up quickly.
When less is more: SLMs
In contrast to LLMs, small language models excel at specific tasks. They’re fast, efficient and affordable.
Take a library’s book recommendation system. An SLM can learn the library’s catalogue. It “understands” genres, authors and reading levels so it can make great recommendations. Because it’s so small, it doesn’t need expensive computers to run.
SLMs are easy to fine-tune. A language learning app can teach an SLM about common grammar mistakes. A medical clinic can train one to understand appointment scheduling. The model becomes an expert in exactly what you need.
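To make that concrete, here is a minimal sketch of what fine-tuning a small model on domain examples can look like, using the Hugging Face transformers library. The model choice (distilgpt2) and the grammar-correction texts are illustrative assumptions, not details from the article:

```python
# Minimal fine-tuning sketch for a small causal language model.
# The model name and training texts are illustrative placeholders.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilgpt2"  # a small model, chosen here only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain data: the grammar-mistake examples a language
# learning app might collect.
texts = [
    "Incorrect: She go to school. Correct: She goes to school.",
    "Incorrect: They was happy. Correct: They were happy.",
]

class GrammarDataset(torch.utils.data.Dataset):
    """Tokenizes the texts once and serves (input, label) pairs."""
    def __init__(self, texts):
        enc = tokenizer(texts, truncation=True, padding=True,
                        max_length=64, return_tensors="pt")
        self.input_ids = enc["input_ids"]
        self.attention_mask = enc["attention_mask"]

    def __len__(self):
        return self.input_ids.size(0)

    def __getitem__(self, i):
        # For causal LM fine-tuning, the labels are the input ids themselves.
        return {"input_ids": self.input_ids[i],
                "attention_mask": self.attention_mask[i],
                "labels": self.input_ids[i]}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-grammar",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=GrammarDataset(texts),
)
trainer.train()  # the small model quickly specialises on the domain data
```

A loop like this runs on a single consumer GPU, or even a CPU for a model this small, which is exactly the cost profile that makes SLMs attractive for narrow use cases.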
SLMs are faster than LLMs, too – they can deliver answers in milliseconds, rather than seconds. This difference may seem small, but it’s noticeable in applications such as grammar checkers or translation apps, which can’t keep users waiting.
Costs are much smaller, too. Small language models are like LED bulbs – efficient and affordable. Large language models are like stadium lights – powerful but expensive.
Schools, non-profits and small businesses can use SLMs for specific tasks without breaking the bank. For example, Microsoft’s Phi-3 small language models are helping power an agricultural information platform in India to provide services to farmers even in remote places with limited internet.
SLMs are also great for constrained systems such as self-driving cars or satellites that have limited processing power, minimal energy budgets, and no reliable cloud connection. LLMs simply can’t run in these environments. But an SLM, with its smaller footprint, can fit onboard.
Both types of models have their place
What’s better – a minivan or a sports car? A downtown studio apartment or a large house in the suburbs? The answer, of course, is that it depends on your needs and your resources.
The landscape of AI models is rapidly evolving, and the line between small and large models is becoming increasingly nuanced. We’re seeing hybrid approaches where businesses use SLMs for routine tasks and escalate to LLMs for complex queries. This approach optimises both cost and performance.
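Here is a minimal sketch of such a router, with hypothetical backends and a deliberately crude escalation heuristic; a production system might instead use a trained classifier or the SLM's own confidence score:

```python
# Hypothetical SLM-first router: answer routine queries locally,
# escalate complex ones to a large cloud model. All names are placeholders.

COMPLEX_HINTS = ("analyse", "analyze", "compare", "summarise",
                 "summarize", "strategy", "why")

def looks_complex(query: str) -> bool:
    """Crude heuristic: long queries or reasoning keywords escalate."""
    q = query.lower()
    return len(q.split()) > 40 or any(hint in q for hint in COMPLEX_HINTS)

def answer(query: str, slm_generate, llm_generate) -> str:
    """Route a query. slm_generate and llm_generate are caller-supplied
    functions wrapping whichever local SLM and cloud LLM are deployed."""
    if looks_complex(query):
        return llm_generate(query)   # slower and costlier, but more capable
    return slm_generate(query)       # fast, cheap, can run on-device

# Usage with stub backends:
if __name__ == "__main__":
    slm = lambda q: f"[SLM] quick answer to: {q}"
    llm = lambda q: f"[LLM] detailed answer to: {q}"
    print(answer("What are your opening hours?", slm, llm))
    print(answer("Compare these three marketing strategies and explain the trade-offs.", slm, llm))
```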
The choice between small and large language models isn’t about which is objectively better – it’s about which better serves your specific needs.
SLMs offer efficiency, speed and cost-effectiveness for focused applications, making them ideal for businesses with specific use cases and resource constraints.
LLMs provide unmatched versatility and sophistication for complex, varied tasks, justifying their higher resource requirements when a highly capable AI is needed.
Lin Tian, Research Fellow, Data Science Institute, University of Technology Sydney and Marian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read next: “Rage Bait” Named Oxford Word of the Year 2025
by Web Desk via Digital Information World
Microsoft CEO on the Skills That Matter as AI Expands in the Workplace
Microsoft CEO Satya Nadella noted that cognitive ability alone is insufficient for leaders and employees. He stated that emotional intelligence and social awareness are becoming more critical as AI automates routine responsibilities. Nadella explained that intellectual capability loses much of its value without emotional intelligence. The workplace is increasingly a space where human interaction and collective problem-solving define outcomes.
When interviewer Mathias Döpfner asked whether empathy considerations were driving Microsoft to call more people back to the office, Nadella acknowledged the value of physical workspaces for collaboration but emphasized flexibility. While something important gets lost when people don't come together in person, Microsoft maintains a balanced approach rather than imposing rigid mandates. Physical spaces remain valuable for picking up the social and emotional cues that enable better innovation and allow humans to accumulate knowledge through context that AI systems have not yet learned.
When asked whether companies could be entirely run by AI, Nadella described the notion as too far-fetched to imagine. He emphasized that human judgment, empathy, and decision-making remain irreplaceable. While AI can augment productivity, leadership and collaborative problem-solving cannot be fully replicated by machines. Nadella described a future work model involving macro delegation to AI agents that handle tasks but return for human guidance and micro steering when they encounter limitations or need direction.
Nadella stressed that successful AI implementation requires four elements. Organizations need a mindset embracing business process re-engineering rather than simply applying AI to existing workflows. They need appropriate tools, the skills to apply those tools effectively, and properly normalized data sets spanning multiple systems. Without this combination, AI projects will likely fail, which may explain why many executives expect productivity gains from AI but few have realized them.
Nadella's remarks reflect a broader perspective on AI adoption. Technology can enhance human capabilities, but leadership and empathy remain central to workplace effectiveness. Even in highly automated environments, human collaboration and understanding continue to shape business outcomes.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next: Threads Code Reveals AI Tool Designed to Summarize Profile Engagement Patterns
by Ayaz Khan via Digital Information World
Threads Code Reveals AI Tool Designed to Summarize Profile Engagement Patterns
The feature provides visitors with a snapshot of past engagements, including related interests or general activity patterns. It offers a quick overview without scrolling through individual posts or replies, resembling communication summaries on other platforms.
Findings by app researcher Alessandro Paluzzi suggest these summaries could appear for any profile, even without prior interactions. The full functionality remains unconfirmed, and Threads has made no official announcement.
Threads’ development occurs alongside profile transparency tools on other platforms. X recently introduced a feature showing account details such as location, join date and username changes. The tool aims to reduce inauthentic engagement and is not AI-powered.
Commentators under Paluzzi’s post have noted potential implications of Threads’ summaries, including influencing engagement decisions or highlighting repeated critical interactions. These observations reflect user commentary rather than confirmed outcomes.
Beyond the code analysis and reports of internal testing, no official purpose or verified results are available. How the feature would function if released, or whether it would be widely deployed, remains unknown.
If implemented, the AI tool would allow visitors to quickly understand prior engagement patterns without manually reviewing past activity.
- Also read: Which AI Models Answer Most Accurately, and Which Hallucinate Most? New Data Shows Clear Gaps
Threads’ exploration of AI-assisted summaries reflects a broader trend in social media toward tools that provide context and simplify interaction history. The feature remains experimental, with release timing and full functionality still unknown.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next: “Rage Bait” Named Oxford Word of the Year 2025
by Asim BN via Digital Information World
“Rage Bait” Named Oxford Word of the Year 2025
Oxford University Press has selected “rage bait” as its Word of the Year for 2025. The term refers to online content deliberately designed to provoke anger or outrage, typically posted to increase traffic or engagement on a website or social media account.
The phrase combines “rage,” a violent outburst of anger, with “bait,” food or another lure used to entice. Although technically two words, Oxford lexicographers treat it as a single unit of meaning, showing how English adapts existing words to express new ideas.
The first recorded use of “rage bait” was in 2002 on Usenet, describing a driver’s reaction to being flashed by another driver. Over time, it evolved into internet slang for content intended to elicit anger, including viral social media posts.
Usage of the term has tripled in the past 12 months, indicating its growing presence in online discourse. Experts note that the word reflects how people interact with and respond to online content.
The Word of the Year was chosen through a combination of public voting and expert review. Two other words were shortlisted: “aura farming,” defined as cultivating an attractive or charismatic persona, and “biohack,” describing efforts to optimize physical or mental performance, health, or wellbeing through lifestyle, diet, supplements, or technology.
Casper Grathwohl, President of Oxford Languages, said the increase in usage highlights growing awareness of the ways online content can influence attention and behavior. He also compared “rage bait” to last year’s Word of the Year, “brain rot,” which described the mental drain of endless scrolling.
The annual Word of the Year reflects terms that captured significant cultural and linguistic trends over the previous 12 months, based on usage data, public engagement, and expert analysis.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen.
Read next: Which AI Models Answer Most Accurately, and Which Hallucinate Most? New Data Shows Clear Gaps
by Irfan Ahmad via Digital Information World
Monday, December 1, 2025
Which AI Models Answer Most Accurately, and Which Hallucinate Most? New Data Shows Clear Gaps
Recent findings from the European Broadcasting Union show that AI assistants misrepresent news content in 45% of the test cases, regardless of language or region. That result underscores why model accuracy and reliability remain central concerns. Fresh rankings from Artificial Analysis, based on real-world endpoint testing as of 1 December 2025, give a clear picture of how today’s leading systems perform when answering direct questions.
Measuring Accuracy and Hallucination Rates
Artificial Analysis evaluates both proprietary and open-weights models through live API endpoints. Their measurements reflect what users experience in actual deployments rather than theoretical performance. Accuracy shows how often a model produces correct answers. Hallucination rate captures how often it responds incorrectly when it should refuse or indicate uncertainty. Since new models launch frequently and providers adjust endpoints, these results can change over time, but the current snapshot still reveals clear trends.
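To illustrate how the two measures can move independently, here is a small sketch of the bookkeeping in Python. The scoring rule (accuracy over all questions, hallucination rate over only the questions the model did not know) is our reading of the description above, not Artificial Analysis's published methodology:

```python
# Sketch of the two metrics as described above. Each graded response is
# one of: "correct", "incorrect", or "declined" (refused or flagged
# uncertainty). The scoring rule here is an assumption based on the
# article's definitions, not Artificial Analysis's published code.

def accuracy(grades: list[str]) -> float:
    """Share of all questions answered correctly."""
    return grades.count("correct") / len(grades)

def hallucination_rate(grades: list[str]) -> float:
    """Of the questions the model did not know (incorrect + declined),
    the share where it answered anyway instead of declining."""
    incorrect = grades.count("incorrect")
    declined = grades.count("declined")
    unknown = incorrect + declined
    return incorrect / unknown if unknown else 0.0

# A cautious model can score low on accuracy and low on hallucination:
cautious = ["correct"] * 16 + ["incorrect"] * 22 + ["declined"] * 62
print(round(accuracy(cautious), 2), round(hallucination_rate(cautious), 2))
# -> 0.16 0.26 (roughly the Claude 4.5 Haiku pattern in the tables below)
```

Under this reading, a model that declines often can rank near the bottom on accuracy yet near the top on restraint, which is exactly the divergence the tables below show.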
Hallucination Rates Across Models, Lowest to Highest
| Model | Hallucination Rate |
|---|---|
| Claude 4.5 Haiku | 26% |
| Claude 4.5 Sonnet | 48% |
| GPT-5.1 (High) | 51% |
| Claude Opus 4.5 | 58% |
| Magistral Medium 1.2 | 60% |
| Grok 4 | 64% |
| Kimi K2 0905 | 69% |
| Grok 4.1 Fast | 72% |
| Kimi K2 Thinking | 74% |
| Llama Nemotron Super 49B v1.5 | 76% |
| DeepSeek V3.2 Exp | 81% |
| DeepSeek R1 0528 | 83% |
| EXAONE 4.0 32B | 86% |
| Llama 4 Maverick | 87.58% |
| Gemini 3 Pro Preview (High) | 87.99% |
| Gemini 2.5 Flash (Sep) | 88.31% |
| Gemini 2.5 Pro | 88.57% |
| MiniMax-M2 | 88.88% |
| GPT-5.1 | 89.17% |
| Qwen3 235B A22B 2507 | 89.64% |
| gpt-oss-120B (High) | 89.96% |
| GLM-4.6 | 93.09% |
| gpt-oss-20B (High) | 93.20% |
When it comes to hallucination, the gap between models is striking. Claude 4.5 Haiku has the lowest hallucination rate in this group at 26 percent, yet even this relatively low figure means incorrect answers are common. Several models climb sharply from there: Claude 4.5 Sonnet reaches 48 percent, GPT-5.1 (High) 51 percent, and Claude Opus 4.5 58 percent. Grok 4 produces incorrect responses 64 percent of the time, and Kimi K2 0905 rises to 69 percent.
Beyond these, models enter the seventies and eighties. Grok 4.1 Fast shows a 72 percent rate, Kimi K2 Thinking 74 percent, and Llama Nemotron Super 49B v1.5 76 percent. The DeepSeek models show even higher rates, with V3.2 Exp at 81 percent and R1 0528 at 83 percent. Among the highest are EXAONE 4.0 32B at 86 percent, Llama 4 Maverick at 87.58 percent, and several Gemini models, including 3 Pro Preview (High) and 2.5 Flash (Sep), exceeding 87 percent. GLM-4.6 and gpt-oss-20B (High) top the chart at over 93 percent. This spread shows that while some models are relatively restrained, many generate incorrect answers frequently, making hallucination a major challenge for AI systems today.
Top Performers in Accuracy
| Model | Accuracy |
|---|---|
| Gemini 3 Pro Preview (High) | 54% |
| Claude Opus 4.5 | 43% |
| Grok 4 | 40% |
| Gemini 2.5 Pro | 37% |
| GPT-5.1 (High) | 35% |
| Claude 4.5 Sonnet | 31% |
| DeepSeek R1 0528 | 29.28% |
| Kimi K2 Thinking | 29.23% |
| GPT-5.1 | 28% |
| Gemini 2.5 Flash (Sep) | 27% |
| DeepSeek V3.2 Exp | 27% |
| GLM-4.6 | 25% |
| Kimi K2 0905 | 24% |
| Llama 4 Maverick | 24% |
| Grok 4.1 Fast | 23.50% |
| Qwen3 235B A22B 2507 | 22% |
| MiniMax-M2 | 21% |
| Magistral Medium 1.2 | 20% |
| gpt-oss-120B (High) | 20% |
| Claude 4.5 Haiku | 16% |
| Llama Nemotron Super 49B v1.5 | 16% |
| gpt-oss-20B (High) | 15% |
| EXAONE 4.0 32B | 13% |
Accuracy presents a different picture. Gemini 3 Pro Preview (High) leads the pack at 54 percent, meaning it correctly answers just over half of all questions, followed by Claude Opus 4.5 at 43 percent and Grok 4 at 40 percent. Gemini 2.5 Pro comes next with 37 percent, while GPT-5.1 (High) reaches 35 percent and Claude 4.5 Sonnet 31 percent. A cluster of models then falls into the upper to mid-twenties: DeepSeek R1 0528 at 29.28 percent, Kimi K2 Thinking at 29.23 percent, GPT-5.1 at 28 percent, and both Gemini 2.5 Flash (Sep) and DeepSeek V3.2 Exp at 27 percent. The remaining models descend from GLM-4.6 at 25 percent, and Kimi K2 0905 and Llama 4 Maverick at 24 percent, down to EXAONE 4.0 32B at 13 percent. The spread highlights that even the top-performing model answers fewer than six out of ten questions correctly, showing the inherent difficulty AI faces in delivering consistently reliable responses across a broad set of prompts.
Clear Trade-offs
The contrast between hallucination and accuracy charts shows that strong accuracy does not guarantee low hallucination. Some high-ranking models in accuracy still produce incorrect answers at significant rates. Others deliver lower accuracy yet avoid the highest hallucination levels. These gaps illustrate how unpredictable model behavior remains, even as systems improve.
Read next: ChatGPT Doubles Usage as Google Gemini Reaches 40 Percent
by Irfan Ahmad via Digital Information World
Sunday, November 30, 2025
ChatGPT Doubles Usage as Google Gemini Reaches 40 Percent
ChatGPT usage doubled among U.S. adults over two years, growing from 26 percent in 2023 to 52 percent in 2025, while Google Gemini climbed from 13 percent to 40 percent, according to Statista Consumer Insights surveys.
Microsoft Copilot reached 27 percent in 2025. Every other tool measured in the survey recorded 11 percent or below.
ChatGPT and Gemini scale
ChatGPT has over 800 million weekly users globally and ranks as the top AI app according to mobile analytics firm Sensor Tower (via FT). OpenAI released the tool in November 2022, and more than one million people registered within days.
The Gemini mobile app had about 400 million monthly users in May 2025 and has since reached 650 million. Web analytics company Similarweb found that people spend more time chatting with Gemini than ChatGPT.
Google trains its AI models using custom tensor processing unit chips rather than relying on the Nvidia chips most competitors use. Koray Kavukcuoglu, Google's AI architect and DeepMind's chief technology officer, said Google's approach combines its positions in search, cloud infrastructure and smartphones. The Gemini 3 model released in late November 2025 outperformed OpenAI's GPT-5 on several key benchmarks.
Changes among other tools
As per Statista, Microsoft Copilot grew from 14 percent in 2024 to 27 percent in 2025.
Llama, developed by Meta, dropped 20 percentage points between 2024 and 2025. Usage rose from 16 percent in 2023 to 31 percent in 2024, then fell to 11 percent in 2025.
Claude, developed by Anthropic, appeared in survey results for the first time in 2025 with 8 percent usage. Anthropic has focused on AI safety for corporate customers, and Claude's coding capabilities are widely considered best in class. Mistral Large recorded 4 percent usage in its first survey appearance.
Three tools from earlier surveys did not appear in 2025 results. Snapchat My AI declined from 15 percent in 2023 to 12 percent in 2024. Microsoft Bing AI held at 12 percent in both years. Adobe Firefly registered 8 percent in 2023.
Statista Consumer Insights surveyed 1,250 U.S. adults in November 2023 and August through September 2024. The 2025 survey included 2,050 U.S. adults from June through October 2025.
| AI Tool | 2023 Share | 2024 Share | 2025 Share |
|---|---|---|---|
| ChatGPT | 26% | 31% | 52% |
| Llama (Meta) | 16% | 31% | 11% |
| Google Gemini | 13% | 27% | 40% |
| Microsoft Copilot | N/A | 14% | 27% |
| Microsoft Bing AI | 12% | 12% | N/A |
| Snapchat My AI | 15% | 12% | N/A |
| Adobe Firefly | 8% | N/A | N/A |
| Claude | N/A | N/A | 8% |
| Mistral Large | N/A | N/A | 4% |
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next:
• Language Models Can Prioritize Sentence Patterns Over Meaning, Study Finds
• AI Models Struggle With Logical Reasoning, And Agreeing With Users Makes It Worse
by Irfan Ahmad via Digital Information World