"Mr Branding" is an RSS-based blog covering everything related to website branding and website design. It collects its posts from many sites to keep readers up to date with the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Monday, December 15, 2025
Small Businesses Face Growing Cybercrime Burden, Survey Shows
The findings show that 81 percent of respondents suffered a security breach, a data breach, or both. AI-enabled attacks were identified as a root cause in more than 40 percent of incidents, reflecting a shift towards more technologically advanced external threats.
The financial impact was notable, with 37% of affected businesses reporting losses exceeding $500,000.
To manage these costs, 38.3 percent of small business leaders reported raising prices. The report describes this burden as an invisible "cyber tax" that is passed on to customers and contributes to inflation. It also notes declining confidence in cybersecurity preparedness and reduced use of basic security measures, despite growing concern about AI-driven risks.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next: Online shopping makes it harder to make ethical consumption choices, research says
by Ayaz Khan via Digital Information World
Online shopping makes it harder to make ethical consumption choices, research says
By Caroline Moraes - Professor of Marketing and Consumer Research.
As the Christmas shopping period begins in earnest following Black Friday and Cyber Monday, new research led by the University of Birmingham and the University of Bristol sheds light on how consumers’ environmental and social concerns fail to translate into ethical purchasing actions during online shopping.
The study, published in the Journal of Business Ethics, explores how competitive shopping environments and marketing tactics can influence moral decision-making among consumers. It reveals that the intense focus on bargains and limited-time offers, such as those prevalent during the festive sales periods, can lead shoppers to discount any concerns they may have about sustainability or fair labour, in pursuit of a deal.
Caroline Moraes, Professor of Marketing and Consumer Research from the Centre for Responsible Business at the University of Birmingham, and co-author of the study, said: "Our findings show that the tactics used by online shops create tensions between ethical intentions and actual behaviour. Many consumers aspire to shop responsibly by buying sustainably and ethically made products. But the design of websites and the urgency and excitement that people experience across online shopping platforms, which increase even further during events like Black Friday and Boxing Day sales, can often override these values.”
The qualitative study examined how self-described ‘ethically oriented’ consumers practice online shopping for clothes.
"Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment." Prof Caroline Moraes, University of Birmingham
Dr Fiona Spotswood, Associate Professor in Marketing and Consumption at the University of Bristol Business School, and lead author of the study, said: “We paid attention to how participants navigated existing digital retail websites, how they balance social and environmental information with other product information, and how they perform online shopping routines.”
The paper outlines that ethical decision-making is inhibited by some key characteristics of online shopping, including:
- Online shopping websites are designed for passive habitual scrolling and browsing.
- Price and aesthetic appeal being front and centre of products’ selling points rather than ethical factors.
- Lack of information about the ethical and environmental sustainability credentials of products.
- Being pressured to make an immediate purchase with limited-time deals.
The research calls for retailers to adopt responsible marketing practices, ensuring transparency and fairness in promotional strategies and including ethical and sustainability criteria in their online shopping websites. It also urges consumers to reflect on the broader social and environmental impact of their purchases, particularly during peak shopping periods when ethical considerations are most likely to be compromised.
Professor Moraes said: “With more of us shopping online than ever before, our research serves as a timely reminder that people do want to be more ethical in their shopping practices, but it can be incredibly hard to act in that way. Businesses should take this into consideration when it comes to their e-commerce offering. Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment.”
Four tips on how to shop more ethically online
- Pause before you purchase. If you recognise you have been scrolling/browsing for a long time, take a break and ask yourself if you or the person you are buying for really needs this before hitting purchase.
- Search for specific sustainable options. Look directly for eco-friendly products and brands that prioritise fair labour practices and that have this information easily available.
- Avoid overbuying. Resist the urge to stockpile just because it is on sale at the click of a button. Someone else might need that item more than you do.
- Re-style and/or purchase second-hand. If you are shopping for clothes, consider re-styling what you already have and/or purchase second-hand items that can help you create your very own versions of the new styles you see online.
This post was originally published on University of Birmingham and republished with permission.
Read next:
• Study Finds Higher Digital Skills Linked to Greater Privacy and Misinformation Concerns
• Human-AI Collaboration Requires Structured Guidance, Research Shows
by External Contributor via Digital Information World
Study Finds Higher Digital Skills Linked to Greater Privacy and Misinformation Concerns
The research, published in Information, Communication & Society, analyzed European Social Survey data from nearly 50,000 respondents across 29 European countries and Israel between 2020 and 2022. Participants’ views on privacy infringement, misinformation, and work interruptions were combined into a digital concern scale ranging from 0 to 1.
Millennials aged 25 to 44 reported higher concern levels than younger and older age groups. No significant differences were found by gender, income, or urban–rural residence. Concern was lowest in Bulgaria and highest in the Netherlands and the United Kingdom.
The study found that people with greater digital literacy expressed more concern, particularly among frequent internet users, suggesting increased awareness and exposure may heighten unease rather than reduce it.
"Figure 2 depicts people’s digital concerns across the 30 countries. The results show an overall high level of digital concerns, with a mean of 0.65 on the 0–1 scale."
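As a rough illustration of how such a 0-to-1 index can be built from several survey items, consider the sketch below; the 1-to-5 response range and the example answers are assumptions for illustration, not details from the study:

```python
# Hypothetical sketch: averaging Likert-style survey items (privacy,
# misinformation, work interruptions) and min-max scaling the mean to [0, 1].

def concern_index(responses, scale_min=1, scale_max=5):
    """Average the item scores, then rescale the mean to the 0-1 range."""
    mean = sum(responses) / len(responses)
    return (mean - scale_min) / (scale_max - scale_min)

# A respondent answering 4, 3 and 4 on the three items lands near the
# reported sample mean of 0.65:
print(round(concern_index([4, 3, 4]), 2))  # -> 0.67
```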
H/T: Taylor & Francis Group
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next: The ‘AI Homeless Man Prank’ reveals a crisis in AI education
by Asim BN via Digital Information World
Sunday, December 14, 2025
The ‘AI Homeless Man Prank’ reveals a crisis in AI education
Learning to distinguish between truth and falsehood is not the only challenge society faces in the AI era. We must also reflect on the human consequences of what we create.
As professors of educational technology at Laval University and education and innovation at Concordia University, we study how to strengthen human agency — the ability to consciously understand, question and transform environments shaped by artificial intelligence and synthetic media — to counter disinformation.
A worrying trend
In one of the most viral “AI Homeless Man Prank” videos, viewed more than two million times, creator Nnamdi Anunobi tricked his mother by sending her fake photos of a homeless man sleeping on her bed. The scene went viral and sparked a wave of imitations across the country.
Two teenagers in Ohio have been charged for triggering false home intrusion alarms, resulting in unnecessary calls to police and real panic. Police departments in Michigan, New York and Wisconsin have issued public warnings that these pranks are wasting emergency resources and dehumanizing the vulnerable.
At the other end of the media spectrum, boxer Jake Paul agreed to experiment with the cameo feature of Sora 2, OpenAI’s video generation tool, by consenting to the use of his image.
But the phenomenon quickly got out of hand: internet users hijacked his face to create ultra-realistic videos in which he appears to be coming out as gay or giving make-up tutorials.
What was supposed to be a technical demonstration turned into a flood of mocking content. His partner, skater Jutta Leerdam, denounced the situation: “I don’t like it, it’s not funny. People believe it.”
These are two phenomena with different intentions: one aimed at making people laugh; the other following a trend. But both reveal the same flaw: that we have democratized technological power without paying attention to issues of morality.
Digital natives without a compass
Today’s cybercrimes — sextortion, fraud, deepnudes, cyberbullying — are not appearing out of nowhere.
Their perpetrators are yesterday’s teenagers: they were taught to code, create and publish online, but rarely to think about the human consequences of their actions.
Juvenile cybercrime is rapidly increasing, fuelled by the widespread use of AI tools and a perception of impunity. Young people are no longer just victims. They are also becoming perpetrators of cybercrime — often “out of curiosity,” for the challenge, or just “for fun.”
And yet, for more than a decade, schools and governments have been educating students about digital citizenship and literacy: developing critical thinking skills, protecting data, adopting responsible online behaviour and verifying sources.
Despite these efforts, cyber-bullying, disinformation and misinformation persist and are intensifying to the point of now being recognized as one of the top global risks for the coming years.
A silent but profound desensitization
These abuses do not stem from innate malice, but from a lack of moral guidance adapted to the digital age.
We are educating young people who are capable of manipulating technology, but sometimes unable to gauge the human impact of their actions, especially in an environment where certain platforms deliberately push the boundaries of what is socially acceptable.
Grok, Elon Musk’s chatbot integrated into X (formerly Twitter), illustrates this drift. AI-generated characters make sexualized, violent or discriminatory comments, presented as simple humorous content. This type of trivialization blurs moral boundaries: in such a context, transgression becomes a form of expression and the absence of responsibility is confused with freedom.
Without guidelines, many young people risk becoming augmented criminals capable of manipulating, defrauding or humiliating on an unprecedented scale.
The mere absence of malicious intent in content creation is no longer enough to prevent harm.
Creating without considering the human consequences, even out of curiosity or for entertainment, fuels collective desensitization as dignity and trust are eroded — making our societies more vulnerable to manipulation and indifference.
From a knowledge crisis to a moral crisis
AI literacy frameworks — conceptual frameworks that define the skills, knowledge and attitudes needed to understand, use and critically and responsibly evaluate AI — have led to significant advances in critical thinking and vigilance. The next step is to incorporate a more human dimension: to reflect on the effects of what we create on others.
Synthetic media undermine our confidence in knowledge because they make the false credible, and the true questionable. The result is that we end up doubting everything – facts, others, sometimes even ourselves. But the crisis we face today goes beyond the epistemic: it is a moral crisis.
Most young people today know how to question manipulated content, but they don’t always understand its human consequences. Young activists, however, are the exception. Whether in Gaza or amid other humanitarian struggles, they are experiencing both the power of digital technology as a tool for mobilization — hashtag campaigns, TikTok videos, symbolic blockades, coordinated actions — and the moral responsibility that this power carries.
But it’s no longer truth alone that is wavering, but our sense of responsibility.
The relationship between humans and technology has been extensively studied. But the relationship between humans through technology-generated content hasn’t been studied enough.
Towards moral sobriety in the digital world
The human impact of AI — moral, psychological, relational — remains the great blind spot in our thinking about the uses of the technology.
Every deepfake, every “prank,” every visual manipulation leaves a human footprint: loss of trust, fear, shame, dehumanization. Just as emissions pollute the air, these attacks pollute our social bonds.
Learning to measure this human footprint means thinking about the consequences of our digital actions before they materialize. It means asking ourselves:
- Who is affected by my creation?
- What emotions and perceptions does it evoke?
- What mark will it leave on someone’s life?
Building a moral ecology of digital technology means recognizing that every image and every broadcast shapes the human environment in which we live.
Educating young people to not want to harm
Laws like the European AI Act define what should be prohibited, but no law can teach why we should not want to cause harm.
In concrete terms, this means:
- Cultivating personal responsibility by helping young people feel accountable for their creations.
- Transmitting values through experience, by inviting them to create and then reflect: how would this person feel?
- Fostering intrinsic motivation, so that they act ethically out of consistency with their own values, not fear of punishment.
- Involving families and communities, transforming schools, homes and public spaces into places for discussion about the human impacts of unethical or simply ill-considered uses of generative AI.
In the age of manufactured media, thinking about the human consequences of what we create is perhaps the most advanced form of intelligence.
Nadia Naffi, Associate Professor, Educational Technology, Université Laval and Ann-Louise Davidson, Innovation Lab Director and Professor, Educational Technology and Innovation Mindset, Concordia University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read next: Revealing the AI Knowledge Gap in Marketing, The Cost of Upskilling
by External Contributor via Digital Information World
Saturday, December 13, 2025
Revealing the AI Knowledge Gap in Marketing, The Cost of Upskilling
While the marketing industry is known for frequent change, artificial intelligence has accelerated that evolution at a rate that can feel impossible to keep up with. Now that AI tools and generative engine optimization are in the mix, marketers must learn and adapt quickly to stay competitive.
Because of that, continuous learning is imperative, but many marketers are overworked, underpaid, and undertrained. With urgent tasks, ever-expanding workloads, smaller teams, and shrinking budgets, upskilling often gets postponed in favor of more time-sensitive matters.
A new study from Adobe for Business surveyed more than 400 American marketers to learn more about their tasks, whether they want to upskill, who pays for that training, and if the tools they use help or hurt their productivity. Adobe for Business provided Digital Information World with exclusive data from the survey, including breakdowns by career level and generation not available in the public report.
Extending beyond the job description
The study data revealed that more than one in five marketers reported undertaking 10 or more core responsibilities in their current positions. In addition to their regular duties, they are frequently assigned tasks beyond their job description, with more than half stating they have taken on extra duties outside of their previously agreed-upon tasks.
High-priority tasks can and will arise at any time, causing managers to request that their employees stop their current work to help. In fact, study participants disclosed receiving an average of five ad hoc tasks per week from their superiors.
According to the survey, the most common marketing role responsibilities were identified as marketing strategy (46%), social media marketing (41%), and content marketing (37%). But those who take on additional work tend to focus most on marketing strategy (14%), social media marketing (13%), and market research (10%) alongside their regular duties.
These additional tasks also vary by business size: marketers at small businesses reported performing 26% more tasks than those at enterprise-level businesses. Workers at small- and medium-sized companies tend to carry out social media duties most often, marketers at large businesses are handed extra marketing strategy work more frequently, and those at enterprise companies take on more project management tasks.
As more work is added to marketers’ plates, the need to familiarize themselves with, if not master, new specialities and training becomes imperative for success.
Marketers want to upskill, and many are funding their training from their own pockets
Marketers are witnessing the most sophisticated technological advancements in recent history, and many are doing what they can to learn new skills and incorporate them into their daily routines.
The study shared that nearly four in five marketers spent their personal time and money outside of work building new skills in the past year, averaging roughly 57 hours of learning off the clock. Of those who trained outside of working hours, more than three in four paid with money from their own pockets, investing an average of $310 in the past 12 months.
Learning and development trends fluctuate across generations. The report found that Gen X marketers spent more time upskilling than any other age group, with 79% seeking external training outside of regular working hours, averaging 69 learning hours in the past year. Gen Z marketers were only slightly behind on time spent advancing their knowledge, allocating 68 hours, but Gen Z (82%) had the highest percentage of marketers spending personal time expanding their skillset.
Nearly 80% of Millennials and 67% of Baby Boomers committed to outside training, but they spent the least time studying new skills, dedicating only 50 hours and 31 hours, respectively.
There are many different facets of marketing that professionals want to learn more about, but the top focus areas are:
- AI automation (39%)
- Graphic design (31%)
- Data analytics (27%)
- Leadership (26%)
- Data visualization (21%)
- Web development (20%)
- Email marketing (20%)
- TikTok (19%)
While employers expect marketers to understand and utilize AI, only 23% of survey respondents said they have received on-the-job training on the topic.
The data highlighted that every generation of marketers sees AI automation as the most essential skill to pursue outside of work. Baby Boomers feel most strongly about this, with 67% taking time off the clock to learn more. While Millennials were the least likely to use personal time to upskill, the report revealed that they received more AI automation training at work than any other generation.
Career-level AI training also ranges, but the data show that lower-level employees tend to get the least amount of AI knowledge improvements, as only 15% of entry-level workers reported receiving on-the-job AI training. Employees at the manager level (30%) were more likely to have received AI training while at work; however, more than half needed to broaden their AI capabilities outside of work.
How inefficiency costs companies and what that looks like
On top of marketers flagging that they are overworked and undertrained, they also reported that their employers often expect them to have access to and mastery of various software and tools that they use, even if they have never used them before. Some even pay for the use of these tools instead of their employers footing the bill.
Roughly one in 10 marketers in the study stated they use eight or more tools weekly, and more than two in five pay out of their own pockets for tools they regularly use. Some feel these tools hold them back: respondents said they lose an estimated 60 hours of productivity annually to inefficient tools.
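To put the reported 60 lost hours into dollar terms, a quick back-of-the-envelope calculation helps; the hourly cost and team size below are assumed for illustration and do not come from the survey:

```python
# Illustrative only: the survey reports roughly 60 lost hours per marketer
# per year; the rate and headcount are assumptions.

HOURS_LOST_PER_YEAR = 60    # reported in the study
HOURLY_COST = 40.0          # assumed fully loaded hourly cost, in USD
TEAM_SIZE = 10              # assumed size of the marketing team

cost_per_marketer = HOURS_LOST_PER_YEAR * HOURLY_COST
cost_per_team = cost_per_marketer * TEAM_SIZE
print(cost_per_marketer)    # -> 2400.0
print(cost_per_team)        # -> 24000.0
```

Even at this modest assumed rate, inefficient tooling costs a ten-person team tens of thousands of dollars a year.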
The tools marketers say hinder their efficiency the most are:
- Spreadsheets (26%)
- Collaboration tools (18%)
- Customer relationship management systems (13%)
- Email marketing platforms (11%)
- Social media management tools (11%)
- Project management (11%)
Marketers want to learn, so much so that they are willing to invest their own personal time and money to get the training they need to perform to the best of their abilities. Continuous learning is among the most impactful ways to do just that in a field that is constantly changing and innovating. As new technologies continue to emerge, it is in businesses’ best interest to ensure marketers are aware and trained on the most up-to-date tools that will empower them to produce the best results.
Visual learner? Scroll to the end of this article to view infographics highlighting key survey findings and statistics.
Read next: Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation
by Web Desk via Digital Information World
Human-AI Collaboration Requires Structured Guidance, Research Shows
Human-AI pairs did not naturally become more creative through repeated collaboration alone, and some became less creative over time. The researchers examined three collaboration approaches: humans proposing ideas, humans asking AI to generate ideas, and humans and AI jointly refining ideas.
Creativity improved only when participants focused on co-developing ideas, exchanging feedback and building on suggestions, rather than continuously generating new ideas without refinement.
A third experiment showed that explicit instruction to engage in co-development led to clear creativity gains across repeated tasks.
Dr. Yeun Joon Kim of Cambridge Judge Business School stated that organizations must provide targeted support, such as guidance on building and adapting ideas, to help employees and AI learn over time how to create more effectively.
The research indicates that companies should structure AI collaboration through clear instructions and workflows, rather than relying on AI use alone, to improve creative outcomes.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen
Read next:
• How Much Do Fake Account Verifications Really Cost Across the World?
• Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation
by Asim BN via Digital Information World
Friday, December 12, 2025
Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation
Google Translate now uses Gemini-powered text translation in Search and the Translate app to better handle context, including idioms and colloquial phrases, by understanding their true meaning rather than translating word-for-word. The rollout begins in the United States and India, covering translations between English and nearly 20 languages, including Spanish, Hindi, and Chinese, on Android, iOS, and the web.
Google is also launching a beta live speech-to-speech translation feature that delivers real-time audio translations through headphones. The beta is available on Android in the U.S., Mexico, and India, supports more than 70 languages, and works with any headphones. Google said Apple's iOS support and additional countries are planned for 2026.
The company also expanded language practice tools, adding progress tracking and improved feedback, and extending availability to nearly 20 additional countries.
However, users should note that the AI-powered translation feature can occasionally produce inaccurate results or make incorrect assumptions about context and region. While Gemini-powered translations represent significant improvements, blind reliance on any AI translation without verification — especially for critical communications, legal documents, or region-sensitive content — could lead to misunderstandings.
Testing in Pakistan, for example, revealed instances where Google Search's translation feature incorrectly defaulted to Hindi (India's official language) instead of Urdu when searching for regional terms like 'khota meaning in English,' suggesting the model still struggles with regional language detection and localization across South Asian markets.
Image: DIW
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.
Read next: How Much Do Fake Account Verifications Really Cost Across the World?
by Irfan Ahmad via Digital Information World