Wednesday, April 30, 2025

Product Returns Reach $890B in 2024 as Clothing Leads and Half of Americans Avoid Sending Items Back

The National Retail Federation and Happy Returns published a report on the state of product returns, revealing key trends in consumer behavior. According to the report, 16.9% of annual US retail sales in 2024 were returned to sellers, and the rate rose to 20% for online sales. In total, about $890 billion worth of products were sent back in 2024, a figure made even more significant by how much it costs retailers to process each return.

Separately, Statista Consumer Insights found that clothing was the most returned shopping item in 2024, with 25% of respondents saying they had returned clothing parcels. It was followed by shoes (17%) and accessories (12%), while 12% of respondents said they had returned food & beverage items. The findings come from a survey of 9,778 US adults conducted between April 2024 and March 2025.

Other items returned by US respondents in 2024 included consumer electronics (10%), cosmetics & body care (9%), books and entertainment goods (9%), and furniture & household items (8%). Half of the respondents (50%) said they hadn't returned any product in the last 12 months, while 7% said they hadn't made an online purchase at all in that period.


While returns are an inevitable part of retail, both consumers and businesses can benefit from greater transparency and proactive strategies. For shoppers, checking sizing guides, product reviews, and return policies can reduce unnecessary purchases. For brands, clearer product information, predictive sizing tools, and smoother return logistics can lower costs and enhance trust, while minimizing environmental impact.

Read next: Workplace AI Adoption Soars as Risky Practices and Poor Oversight Undermine Organizational Safety
by Arooj Ahmed via Digital Information World

Meta Launches Independent AI Platform with Personalized Features from Facebook and Instagram Data

Tech giant Meta is taking one step further in the world of AI by rolling out its first stand-alone AI platform.

The news comes after the company integrated Meta AI into Instagram, Facebook, Messenger, and WhatsApp. The product was unveiled recently at the company’s LlamaCon event. The new app gives users access to Meta AI in a single place, much like ChatGPT and numerous other AI assistants.


To win over more users, the company has been working on something that differentiates it from other AI giants in the industry, such as OpenAI and Anthropic. Meta says it already has a sense of who an individual is, what they like, and who they spend time with, based on years of information shared through apps like Instagram and Facebook.

Meta’s AI platform can set itself apart from existing AI assistants because it can draw on data users have already chosen to share across Meta’s various apps, the company adds. This could include your profile or posts you have engaged with. For now, the personalized replies are available only in the United States and Canada.

You can also give Meta more information about yourself so the assistant remembers it in future chats. Tell the AI that you are allergic to nuts, for example, and it can recall that detail later and factor it into recommendations the next time you’re on holiday.

As with any other AI offering, users should be aware of how the company could use the personal data shared with its chatbots. Meta relies heavily on user data to serve targeted ads, which make up the majority of its revenue.

Meta’s AI platform also introduces a new Discover feed, where users can choose to share how they use the AI with friends. In one example Meta showed, a user asked the AI to describe them using emojis and then shared the result with friends. Interactions appear in the feed only if the user chooses to share them.

This feed could amplify generative AI trends like the viral one where users turn themselves into a Barbie doll or the ever-controversial Studio Ghibli-style character. Some argue that not every platform needs a social feed, but we’ll see how this one turns out.

Read next: 2025 Email Threat Report: PDFs Carry 68% QR Phishing, 1 in 5 Firms Hit Monthly, DMARC Absent in 47%
by Dr. Hura Anwar via Digital Information World

2025 Email Threat Report: PDFs Carry 68% QR Phishing, 1 in 5 Firms Hit Monthly, DMARC Absent in 47%

According to the 2025 Email Threats Report by Barracuda, email-based attacks are rising, highlighting the need for awareness and preparedness. The report found that 23% of HTML attachments in emails are malicious. Cybercriminals aren't relying only on malicious links; they are also embedding harmful content in email attachments, which evades many security measures.

The report also found that 20% of organizations experience an account takeover (ATO) attempt, whether successful or not, at least once every month. Most of the time, access is gained through credential stuffing, phishing scams, and the exploitation of weak passwords. It was also found that 83% of malicious Microsoft documents and 68% of malicious PDF attachments contain QR codes that lead users to phishing websites, and 12% of Bitcoin sextortion scams are delivered through PDF attachments carrying malicious code.

DMARC (Domain-based Message Authentication, Reporting and Conformance) was absent from 47% of email domains, which makes it easier for cybercriminals to attack organisations through impersonation and spoofing. In addition, 24% of messages received via email are malicious or unwanted spam, which complicates email security because it becomes harder to tell which messages are genuinely dangerous. Email security therefore matters, and it can be strengthened with layered threat detection and AI that identifies hidden attacks in attachments and other malicious signals within an email.
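
For context, a DMARC policy is published as a DNS TXT record at _dmarc.<domain>, so checking whether a domain has one is straightforward. The snippet below is an illustrative sketch only (it is not taken from the Barracuda report); it assumes the third-party dnspython package is installed, and "example.com" is a placeholder domain.

```python
# Illustrative sketch: check whether a domain publishes a DMARC policy.
# Assumes the third-party "dnspython" package; "example.com" is a placeholder.
import dns.resolver

def get_dmarc_record(domain: str):
    """Return the domain's DMARC TXT record, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        # TXT record data arrives as a tuple of byte strings; join and decode it.
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
    return None

if __name__ == "__main__":
    record = get_dmarc_record("example.com")
    print(record if record else "No DMARC record found")
```

A domain that returns nothing from a check like this falls into the 47% the report flags, and is an easier target for spoofing and impersonation.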

Image: DIW-Aigen

Read next: Workplace AI Adoption Soars as Risky Practices and Poor Oversight Undermine Organizational Safety
by Arooj Ahmed via Digital Information World

Workplace AI Adoption Soars as Risky Practices and Poor Oversight Undermine Organizational Safety

Have you ever used ChatGPT to draft a work email? Perhaps to summarise a report, research a topic or analyse data in a spreadsheet? If so, you certainly aren’t alone.

Artificial intelligence (AI) tools are rapidly transforming the world of work. Released today, our global study of more than 32,000 workers from 47 countries shows that 58% of employees intentionally use AI at work – with a third using it weekly or daily.

Most employees who use it say they’ve gained some real productivity and performance benefits from adopting AI tools.

However, a concerning number are using AI in highly risky ways – such as uploading sensitive information into public tools, relying on AI answers without checking them, and hiding their use of it.

There’s an urgent need for policies, training and governance on responsible use of AI, to ensure it enhances – not undermines – how work is done.

Our research

We surveyed 32,352 employees in 47 countries, covering all global geographical regions and occupational groups.

Most employees report performance benefits from AI adoption at work. These include improvements in:

  • efficiency (67%)
  • information access (61%)
  • innovation (59%)
  • work quality (58%).

These findings echo prior research demonstrating AI can drive productivity gains for employees and organisations.

We found general-purpose generative AI tools, such as ChatGPT, are by far the most widely used. About 70% of employees rely on free, public tools, rather than AI solutions provided by their employer (42%).

However, almost half the employees we surveyed who use AI say they have done so in ways that could be considered inappropriate (47%) and even more (63%) have seen other employees using AI inappropriately.

Sensitive information

One key concern surrounding AI tools in the workplace is the handling of sensitive company information – such as financial, sales or customer information.

Nearly half (48%) of employees have uploaded sensitive company or customer information into public generative AI tools, and 44% admit to having used AI at work in ways that go against organisational policies.

This aligns with other research showing 27% of content put into AI tools by employees is sensitive.

Check your answer

We found complacent use of AI is also widespread, with 66% of respondents saying they have relied on AI output without evaluating it. It is unsurprising then that a majority (56%) have made mistakes in their work due to AI.

Younger employees (aged 18-34 years) are more likely to engage in inappropriate and complacent use than older employees (aged 35 or older).

This carries serious risks for organisations and employees. Such mistakes have already led to well-documented cases of financial loss, reputational damage and privacy breaches.

About a third (35%) of employees say the use of AI tools in their workplace has increased privacy and compliance risks.

‘Shadow’ AI use

When employees aren’t transparent about how they use AI, the risks become even more challenging to manage.

We found most employees have avoided revealing when they use AI (61%), presented AI-generated content as their own (55%), and used AI tools without knowing if it is allowed (66%).

This invisible or “shadow AI” use doesn’t just exacerbate risks – it also severely hampers an organisation’s ability to detect, manage and mitigate risks.

A lack of training, guidance and governance appears to be fuelling this complacent use. Despite their prevalence, only a third of employees (34%) say their organisation has a policy guiding the use of generative AI tools, with 6% saying their organisation bans it.

Pressure to adopt AI may also fuel complacent use, with half of employees fearing they will be left behind if they do not.

Better literacy and oversight

Collectively, our findings reveal a significant gap in the governance of AI tools and an urgent need for organisations to guide and manage how employees use them in their everyday work. Addressing this will require a proactive and deliberate approach.

Investing in responsible AI training and developing employees’ AI literacy is key. Our modelling shows self-reported AI literacy – including training, knowledge, and efficacy – predicts not only whether employees adopt AI tools but also whether they critically engage with them.

This includes how well they verify the tools’ output, and consider their limitations before making decisions.

We found AI literacy is also associated with greater trust in AI use at work and more performance benefits from its use.

Despite this, less than half of employees (47%) report having received AI training or related education.

Organisations also need to put in place clear policies, guidelines and guardrails, systems of accountability and oversight, and data privacy and security measures.

There are many resources to help organisations develop robust AI governance systems and support responsible AI use.

The right culture

On top of this, it’s crucial to create a psychologically safe work environment, where employees feel comfortable to share how and when they are using AI tools.

The benefits of such a culture go beyond better oversight and risk management. It is also central to developing a culture of shared learning and experimentation that supports responsible diffusion of AI use and innovation.

AI has the potential to improve the way we work. But it takes an AI-literate workforce, robust governance and clear guidance, and a culture that supports safe, transparent and accountable use. Without these elements, AI becomes just another unmanaged liability.

This article first appeared on The Conversation.

Read next: 

• 2025’s Best Social Sentiment Tools to Help Brands Know Customers Better

• A New Survey Shows Many Companies are Adopting AI But Only Some of them Are Aware of Its Risks

• Bigger Isn’t Better: Meta’s AI Chief Says Larger-Scaled Models Are Far from Impressive


by Web Desk via Digital Information World

Tuesday, April 29, 2025

The Training of AI Is Too Costly: Millions Are Spent to Train AI Models

Training AI models requires millions of dollars. It is not simply a matter of feeding a model large datasets; better chips, expert staff and research are all part of the process, and together these factors push the total cost of training a model into the millions.

The recent 2025 AI Index Report by Stanford HAI has revealed the training costs of major AI models from leading tech companies. Some of these models cost more than $100 million to train, so what appears to be a simple AI model is actually a multi-million-dollar project.

As per the report, DeepSeek from China and Llama 2-70B from Meta cost $6 million and $3 million to train, respectively. Though still expensive, they are relatively cheap compared to other AI models from major companies.

Google's PaLM 2 model cost $29 million to train, and Mistral Large from Mistral cost around $41 million. Those figures are high, but still nowhere near the most expensive AI models in use today.

More complex models like OpenAI's GPT-4, which uses artificial neural networks to predict sequences of words, cost $79 million. Newer models such as o1 and o3 give users even better answers thanks to their test-time compute strategy, but that capability is even costlier: the Pro subscription that includes o1 is priced at $200 per month.

Among the costliest models, Grok-2 from xAI cost around $107 million to train, while Llama 3.1-405B from Meta and Gemini 1.0 Ultra from Google cost $170 million and $192 million respectively.

These are huge numbers, and this is still an early stage for AI models. It is not hard to predict that future versions will cost companies even more due to pricier chips and research, which for users simply means higher subscription rates.


Read next: Which Countries Are Allocating the Highest Percentage of GDP to Research and Development Investment?
by Ehtasham Ahmad via Digital Information World

Amazon Rolls Out First Batch of Kuiper Internet Satellites into Space

Amazon just launched its first batch of Kuiper internet satellites into space, after an earlier attempt was called off due to unsuitable weather.

The company said it successfully launched 27 Kuiper satellites from the Cape Canaveral Space Force Station in Florida a little after 7 pm Eastern time, confirming the launch through a livestream.

During the livestream, the company noted that the countdown went smoothly, the weather was ideal, and liftoff was seamless. The Atlas V rocket is now carrying Amazon's 27 Kuiper satellites toward orbit; once they are deployed and correctly positioned, the company added, they will open a new chapter in internet connectivity.

The satellites are set to separate from the rocket roughly 280 miles above the Earth's surface, at which point Amazon will look to confirm that each one can maneuver independently and communicate with its teams on the ground.

Nearly six years ago, Amazon announced ambitious plans to build a large constellation of internet-beaming satellites in low Earth orbit, dubbed Project Kuiper. The service will compete directly with rivals such as Elon Musk's Starlink, which currently dominates the market with about 8,000 satellites already in orbit.

This first Kuiper mission kicks off a series of launches Amazon needs in order to meet the deadline set by the FCC, which expects the company to have half of the total constellation in space by next summer. The competition is tough, but this may be the start Amazon needed to reach its target.

Amazon currently has more than 80 launches booked to roll out batches of satellites over time. Besides ULA, its launch partners include Elon Musk's SpaceX, Europe's Arianespace, and Jeff Bezos' own space company, Blue Origin.

Amazon has allocated a massive $10 billion investment to build the entire Kuiper network, through which it hopes to offer commercial services to consumers, governments, and other businesses.

In a letter to shareholders this past month, Amazon's CEO said the project would require a major upfront investment, but that over time it should generate meaningful income and return on invested capital. We'll hear more when the company's first-quarter earnings report is released on Thursday.

Image: Amazon

Read next: ChatGPT Reaches New Heights for Success as it Expands to WhatsApp, Adds Shopping Features, and Enables Fact Verification
by Dr. Hura Anwar via Digital Information World

New Alert Issued Against AI Chatbots Being Very Deferential and Flattering to User Preferences

AI assistants tend to agree with everything users say and offer support along the way. That isn't always good news, because users come to believe whatever the chatbot generates, even when it is false. Some feel it's more like a sci-fi tale than anything else, while others see it as just another pattern cloned from social media filter bubbles.

This might be one good reason why even the former CEO of OpenAI is warning users about chatbots like ChatGPT and how relying on them might not be the best decision to begin with.

Recent interactions with GPT-4o have revealed shifts in the model's capabilities and personality. Both Emmett Shear, former interim OpenAI CEO, and Clement Delangue, Hugging Face CEO, observed how AI chatbots can be overly deferential and flattering towards users' preferences.

The outcry was mostly triggered by a recent update to GPT-4o that appears to have made it markedly sycophantic and agreeable, to the point of endorsing concerning behavior from users, such as delusional thoughts, self-isolation, and even ideas for deception.

In reply, Sam Altman acknowledged on his X account that the last several updates to GPT-4o had made its personality too sycophantic and, in some cases, irritating. The company intends to fix the problem, but it has not been fully resolved yet.

OpenAI's model designer said the team has rolled out an initial fix for the problem in 4o. The original launch used a system message that produced unintended behavioral effects, and the team has since found remedies. 4o is behaving better for now, with further adjustments expected in the coming weeks.

Image: AI Notkilleveryoneism Memes / X

Users were quick to share examples on Reddit and X of GPT-4o lavishing praise on dangerous ideas. The chatbot goes as far as thanking a user for trusting it, presenting itself as a shoulder to lean on and taking the user's side as if even an extreme decision were the right one. It offers a helping hand and a listening ear for every problem, with no regard for how dependent the user may be becoming on it.

One user went as far as to report that the chatbot endorsed and supported ideas of terrorism. The fact that a chatbot can manipulate a user without any bounds is alarming: it boosts the user's ego and keeps telling them exactly what they wish to hear, without any criticism.

Experts weighing in on the situation note that the chatbot appears tuned to keep the user happy or satisfied at all costs, prioritizing agreement over honesty, and warn that this behavior is very dangerous.

This serves as a reminder for business owners that the quality of a model isn't measured by accuracy alone; it also has to be trustworthy and stick to the facts, which ChatGPT currently struggles to do. Pure flattery is never the right approach, because it ignores reality.

Read next: Which Dream Schools Are Dominating Students’ and Parents’ Lists in 2025?
by Dr. Hura Anwar via Digital Information World