According to a new study by software company Instabug, AI models are better at fixing mobile crashes on Apple's iOS than on Google's Android. The company built a tool called SmartResolve that uses AI to detect app crashes, diagnose their causes, and suggest code fixes, and it tested the tool on real app crashes using various AI models from Anthropic, OpenAI, Meta, and Google.
The main finding was that AI models fix crashes more effectively on iOS than on Android: fixes on iOS were more accurate, better structured, and clearer across almost all models tested. Even Gemini 1.5 Pro, Google's own model, fared worse on Android, scoring 51.41% there against 58.53% on iOS. GPT-4o scored 59.81% on iOS and 48.97% on Android, while the o1 model scored 61.79% on iOS but only 26.31% on Android. Claude 3.5 Sonnet V1 scored 58.33% on iOS and 55.56% on Android.
According to Sherief Abul-Ezz's blog post, "The results highlight that most models performed better on iOS, with GPT-4o, Claude 3.5 Haiku V1, and Claude 3.5 Sonnet V1 emerging as the strongest contenders due to their consistency and structured outputs." The post adds, "Conversely, models like LLaMA-3-70b and OpenAI o1 struggled significantly, particularly on Android, due to poor correctness, frequent failures, and slow response times."
Instabug's chief product officer, Kenny Johnston, said iOS's higher success rate comes largely from the structure of its native languages, Objective-C and Swift, which makes it easier for AI models to pinpoint crashes and generate accurate fixes. Android apps, by contrast, are written in Kotlin and Java and produce more variable crash formats, which makes crashes harder for AI to diagnose reliably.
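To illustrate the kind of variability Johnston describes, here is a minimal Python sketch; the crash lines and patterns are invented for this example and are not from the study. A symbolicated iOS stack frame follows one fixed layout, while Android crashes can arrive as JVM traces, Kotlin traces, or native tombstones, each needing its own handling before a model even sees clean input.

import re

# Hypothetical crash lines, invented for illustration (not from the Instabug study).
# A symbolicated iOS frame follows one fixed column layout:
IOS_FRAME = "3   MyApp   0x0000000100a1c2f4 -[CrashyViewController viewDidLoad] + 84"
# Android crashes arrive in several shapes, e.g. a JVM/Kotlin trace and an NDK tombstone:
ANDROID_FRAMES = [
    "at com.example.app.MainActivity.onCreate(MainActivity.kt:42)",
    "#01 pc 00000000000527f0  /data/app/lib/arm64/libnative.so (crash+16)",
]

# One pattern reliably captures the iOS layout...
ios_pat = re.compile(r"^\d+\s+(\S+)\s+(0x[0-9a-f]+)\s+(.+?)\s\+\s\d+$")
print(ios_pat.match(IOS_FRAME).groups())

# ...whereas Android needs a pattern per format, and an unrecognized format
# means the parser (or a model's prompt builder) silently loses context.
android_pats = [
    re.compile(r"^at\s+([\w.$]+)\.(\w+)\((.+?)\)$"),
    re.compile(r"^#\d+\s+pc\s+([0-9a-f]+)\s+(\S+)\s+\((.+?)\)$"),
]
for frame in ANDROID_FRAMES:
    match = next((m for p in android_pats if (m := p.match(frame))), None)
    print(match.groups() if match else "unparsed: " + frame)

The regexes themselves are beside the point; what matters is that any crash-fixing tool, AI-driven or not, receives more uniform input on iOS.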
Read next: Study Finds Openness to AI’s Utility But Concern Grows Over Chatbots Replacing Real Human Relationships
by Arooj Ahmed via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Sunday, April 13, 2025
Downloads Crown Goes to ChatGPT, But TikTok Remains Unmatched in App Revenue for March 2025
According to estimates by Appfigures, the most downloaded app in March 2025 wasn't Instagram or TikTok but ChatGPT. OpenAI's app racked up 46 million downloads across the iOS App Store and Google Play in March, about 3 million more than in February: 13 million from the App Store and 33 million from Google Play. Instagram was the second most downloaded app across both stores, followed by TikTok.
ChatGPT's downloads grew 148% from Q1 2024 to Q1 2025. Among March 2025's top five most downloaded apps, Facebook and WhatsApp took fourth and fifth place across Google Play and iOS combined, and the top five look likely to hold their positions in the coming months. Collectively, the top ten most downloaded apps pulled in 339 million downloads in March, slightly more than in February 2025 but slightly less than in March 2024.
In the money-making category, TikTok was the top-earning app of March 2025 with $296 million in revenue after store fees across Google Play and iOS, up 48% from the previous month, a striking result given how uncertain the app's future is in the US. The second highest earner was YouTube ($160 million), followed by Disney+ ($132 million), with Tinder ($117 million) and Max ($100 million) rounding out the top five.
The most interesting earner in March 2025 was ChatGPT, the sixth highest-grossing app. It earned $70 million in February 2025, and estimates suggested revenue would hold at that level for some time, but ChatGPT defied them by earning almost $100 million in March.
The other top earners in the top ten were Audible, Google One, CapCut, and LinkedIn, with CapCut returning to the list for the first time since December 2024. Altogether, the top ten highest-earning apps brought in $1.16 billion in March, 14% more than the previous month and 50% more than in March 2024.
Read next: WhatsApp’s AI Now Remembers Your Life — Is Convenience Worth the Cost?
by Arooj Ahmed via Digital Information World
Saturday, April 12, 2025
WhatsApp’s AI Now Remembers Your Life — Is Convenience Worth the Cost?
In the WhatsApp beta for Android, version 2.25.11.13, WhatsApp is working on a memory feature that lets Meta AI remember details users share in conversation. Information such as conversation style, dietary choices, allergies, and personal interests will be retained so users can have smoother, more continuous conversations with the assistant.
The feature is still in testing and available to beta testers, though WhatsApp has started rolling it out to the public as well. WhatsApp has also added a "Memory" option in Meta AI's chat settings, where users can manually store details for the chatbot, which in turn allows it to give more user-specific and relevant suggestions.
The feature will be especially helpful for users who talk to Meta AI often, since they won't have to re-personalize it every time. By remembering what users have told it, Meta AI can prioritize their preferences and tailor its responses. The memory feature is already available to some regular WhatsApp users, and Meta is working to roll it out to more users soon.
This move by Meta clearly follows a growing trend in the AI industry. Big players like Google’s Gemini and OpenAI’s ChatGPT have already introduced memory features that let their chatbots remember past conversations — not just a few points, but entire chat histories that can be recalled anytime. It sounds useful, sure, especially for smoother, more personalized chats. But it also raises big privacy questions. These AI models are essentially storing everything users say, and often using that data to improve their systems. That means your personal details might be helping to train future versions of the AI. And in the long run, there’s always the risk that law enforcement or government agencies could pressure these platforms to hand over user data for surveillance. So while Meta’s new memory feature might make conversations easier, it feels like they’re just following the same path as the rest — whether that ends up being a win for users or not, only time will tell.
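Meta has not published how the feature is implemented, but the basic shape of such a memory layer is easy to sketch. Below is a minimal, hypothetical Python illustration; all names (MemoryStore, remember, build_context) are invented for this example and do not reflect Meta's actual code.

from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    # Hypothetical store of user details, e.g. "allergy" -> "peanuts".
    facts: dict = field(default_factory=dict)

    def remember(self, key, value):
        """Save a detail the user shared, or added manually via the Memory setting."""
        self.facts[key] = value

    def forget(self, key):
        """Let the user delete a stored detail at any time."""
        self.facts.pop(key, None)

    def build_context(self):
        """Prepend remembered facts to the prompt so replies stay personalized."""
        if not self.facts:
            return ""
        lines = "\n".join("- %s: %s" % (k, v) for k, v in self.facts.items())
        return "Known user details:\n" + lines + "\n\n"

memory = MemoryStore()
memory.remember("allergy", "peanuts")
memory.remember("tone", "casual")
# The assistant sees the stored details before every new request:
print(memory.build_context() + "User: suggest a quick snack recipe")

A real system would also need consent prompts, retention limits, and deletion guarantees, which is exactly where the privacy questions above come in.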
Read next:
• New Report Shows that Energy Consumption by Data Centers is Going to Get Doubled by 2030
• Study Finds Openness to AI’s Utility But Concern Grows Over Chatbots Replacing Real Human Relationships
by Arooj Ahmed via Digital Information World
Study Finds Openness to AI’s Utility But Concern Grows Over Chatbots Replacing Real Human Relationships
With the rise of AI chatbots, many people have made them part of their daily lives, using them for all sorts of tasks; some even treat them as friends or romantic partners, which can be risky. Stanford's Deliberative Democracy Lab, in partnership with Meta, published a report based on a survey of 1,545 people from Germany, Brazil, the US, and Spain to understand how they feel about AI chatbots and their limits.
According to the report, people recognize AI's efficiency and advantages but show little enthusiasm for using it as a companion. Asked whether they were open to AI chatbots answering questions on offensive topics, most participants said yes, while about 40% felt unsure or were outright opposed. The split is notable: free speech has long been debated in the media, yet many people treat it as a concern when AI chatbots are involved.
Respondents were also asked whether they were concerned about AI designed to appear human-like, and most agreed they would be, especially if they did not know they were interacting with an AI bot. This is relevant to Meta's rollout of AI bot profiles across its apps, where bots are designed to interact like real people; users say they need to know when they are speaking with AI so they are not misled into thinking they are talking to a real person.
The study also asked whether participants were comfortable with AI chatbots as romantic partners, and most were not. Some said there should be restrictions preventing users from developing romantic relationships with AI chatbots, while others argued users should be free to interact with chatbots however they want, within legal boundaries. This is a sensitive area where more research is needed on the mental health impacts of romantic involvement with AI.
Image: AIgen
Read next: Top AI Models Fail Simple Debugging Test — Human Coders Still Reign Supreme
by Arooj Ahmed via Digital Information World
Meta’s AI Faces Legal Fire as Authors, Scholars Unite Over Copyright Clash
A group of law professors is siding against Meta over its unauthorized use of ebooks to train its Llama AI models. They recently filed a brief in support of the authors suing the tech giant over that very practice.
The brief was filed yesterday at the US District Court for the Northern District of California, where it takes aim at Meta's fair use defense, describing it as an overwhelming request for more legal privileges than courts have ever granted to human authors.
The professors argue that using copyrighted works to train generative models is not transformative, since doing so is no different in kind from using them to educate human authors. And because Meta's real purpose is to enable the creation of works that compete with the copied works in the same markets, and to do so for profit, the use is more commercial than anything else.
The International Association of Scientific, Technical and Medical Publishers also filed a short brief backing the authors, as did the Copyright Alliance, a nonprofit that stands for artistic creators across broad copyright disciplines, and the Association of American Publishers (AAP).
When asked for comment, a Meta representative pointed to a separate brief filed by another group of law professors who support the company's fair use position. The authors, for their part, allege that Meta infringed their intellectual property rights by using their ebooks to train its models, and stripped copyright information from those sources to conceal the infringement.
Meta maintains that its training qualifies as fair use and argues the case should be dismissed because the authors lack standing to sue. At the start of the month, the US district judge allowed the case to move ahead; while he dismissed some parts, he said the allegations are serious, and the fact that Meta is accused of purposefully removing copyright management information (CMI) to hide infringement speaks volumes.
Image: DIW-Aigen
Read next: Top AI Models Fail Simple Debugging Test — Human Coders Still Reign Supreme
by Dr. Hura Anwar via Digital Information World
Friday, April 11, 2025
Top AI Models Fail Simple Debugging Test — Human Coders Still Reign Supreme
According to a new study by Microsoft Research, AI models still struggle to fix software bugs that skilled developers handle with ease. AI is now widely used for programming, with companies like Google and Meta relying on it for coding tasks, yet models such as OpenAI's o3-mini and Anthropic's Claude 3.7 Sonnet performed poorly on a debugging benchmark built from SWE-bench Lite, a sign that AI models still cannot replace human programmers and developers.
The authors gave nine AI models 300 debugging tasks from SWE-bench Lite, and even the strongest, most recent models could not complete half of them. Claude 3.7 Sonnet was the best performer with only a 48.4% success rate, followed by o1 at 30.2% and o3-mini at 22.1%.
The results left the authors and other experts asking why these models perform so poorly. The researchers point to a lack of suitable training data: the models rarely see real examples of how humans debug software. They suggest that improving performance will require training on specialized, detailed data of that kind. Many studies have already shown that AI-generated code is prone to logic errors and security flaws.
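For context on how such scores are produced: SWE-bench-style benchmarks apply a model's proposed patch to a real repository and count a task as solved only if the project's tests then pass. A toy version of that scoring loop might look like the following Python sketch; the paths, commands, and numbers are illustrative, not the study's actual harness.

import pathlib
import subprocess

def solved(repo: pathlib.Path, patch: str, test_cmd: list) -> bool:
    """Apply a model-generated patch to a repo checkout and run the tests.
    Illustrative only: real harnesses also sandbox, time-limit, and reset state."""
    (repo / "model.patch").write_text(patch)
    applied = subprocess.run(["git", "apply", "model.patch"], cwd=repo).returncode == 0
    if not applied:
        return False  # malformed patches count as failures
    return subprocess.run(test_cmd, cwd=repo).returncode == 0

def success_rate(results: list) -> float:
    return 100.0 * sum(results) / len(results)

# 145 solved tasks out of 300 lands in the region of Claude 3.7 Sonnet's
# reported 48.4% score:
print("%.1f%%" % success_rate([True] * 145 + [False] * 155))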
Read next: Greenpeace Study Reveals an Increase in Global Emissions Because of Production of AI Chips
by Arooj Ahmed via Digital Information World
Greenpeace Study Reveals an Increase in Global Emissions Because of Production of AI Chips
According to a new study by Greenpeace analyzing AI's effects on the planet, emissions tied to AI chips quadrupled in 2024, generated during the manufacture of the semiconductors that go into them. Most of these chips are made by companies like SK Hynix and TSMC, which supply NVIDIA and others, and they are produced in countries like South Korea, Taiwan, and Japan, where electricity still depends largely on fossil fuels, hence the rise in global emissions. Greenpeace also projects that global electricity demand for AI will grow 170 times by 2030, which could worsen climate pressures.
Bloomberg reports that Greenpeace's estimates have raised concerns about carbon emissions. Greenpeace suggests that chip-producing countries in the region switch to renewable energy sources for chip production, but the opposite is happening: South Korea is planning power plants that run on natural gas, and Taiwan is expanding liquefied natural gas to upgrade its power grid for AI demand.
The International Energy Agency (IEA) has also published a study suggesting that data centers will account for more than half of the growth in US electricity demand by 2030, and that the US is on track to use more electricity for data processing than for making major energy-intensive products like steel, aluminium, chemicals, and cement. Worldwide, electricity demand from data centers is projected to reach 945 terawatt-hours by 2030, roughly equivalent to Japan's current electricity consumption.
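To give that 945 TWh figure some scale, here is a quick back-of-envelope conversion (my arithmetic, not the IEA's) from annual energy into continuous power draw:

# Back-of-envelope only: convert 945 TWh per year into average continuous power.
TWH_PER_YEAR = 945
HOURS_PER_YEAR = 365 * 24                      # 8760 hours
avg_gw = TWH_PER_YEAR * 1000 / HOURS_PER_YEAR  # TWh -> GWh, then GWh/h = GW
print("about %.0f GW of round-the-clock demand" % avg_gw)
# about 108 GW, on the order of a hundred large (1 GW class) power plants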
Read next:
• Hidden Costs in Data Annotation and How to Avoid Them
• New AkiraBot Targets Hundreds of Thousands of Websites with OpenAI-Based Spam
by Arooj Ahmed via Digital Information World