Monday, October 21, 2024

Google Chrome's New Tab Groups Feature: Save Your Tabs for Future Use

Google introduced Tab Groups in Chrome back in 2020, and it has now made changes that let users save their tabs more easily. Tab Groups in Google Chrome are an easy way to organize and categorize your tabs so they can be identified at a glance. When users create a Tab Group of two or more tabs, they can name it, pick a color for it and arrange the tabs for better productivity.

If you like to keep a lot of tabs open in Chrome at the same time, Tab Groups help a lot in organizing all of those tabs into different categories, and each user can arrange them to suit their own style. But until now these groups were only temporary, meaning they existed only while Chrome was open. Once you closed Chrome, the tab groups disappeared.

Google has introduced a new feature to tackle this problem, and users can now save tab groups for later use. This way, users won't have to worry about losing their work on the desktop. Google went through several design iterations for saved tab groups, and right now the Saved Tab Group icon sits to the left of the bookmarks bar.


The good thing about the improved feature is that even if you choose to "Close Group" by right-clicking on it, the group will still be saved in the browser (just be sure not to close individual tabs inside the group, as Chrome won't be able to retrieve those). To reopen a closed group, just click its icon on the left side of the bookmarks bar. If you want to see all of your saved tabs, click on the small grid icon. Keep in mind that you cannot open the same tab group in different windows at the same time.


Read next: Consumers Spent $16.2 Billion on Apps in September 2024; App Store Generated $13.7 Billion Revenue
by Arooj Ahmed via Digital Information World

AI Transformations: Everyday Applications You Didn’t Know About

You know we’ve reached peak interest in artificial intelligence (AI) when Oprah Winfrey hosts a television special about it. AI is truly everywhere. And we will all have a relationship with it – whether using it, building it, governing it or even befriending it.

But what exactly is AI? While most people won’t need to know exactly how it works under the hood, we will all need to understand what it can do. In our conversations with global leaders across business, government and the arts, one thing stood out – you can’t fake it anymore. AI fluency that is.

AI isn't just about chatbots. To help understand what it is about, we've developed a framework which explains the broad range of capabilities it offers. We call this the "capabilities stack".

We see AI systems as having seven basic kinds of capability, each building on the ones below it in the stack. From least complex to most, these are: recognition, classification, prediction, recommendation, automation, generation and interaction.

Recognition

At its core, the kind of AI we are seeing in consumer products today identifies patterns. Unlike traditional coding, where developers explicitly program how a system works, AI “learns” these patterns from vast datasets, enabling it to perform tasks. This “learning” is essentially just advanced mathematics that turns patterns into complex probabilistic models – encoded in so-called artificial neural networks.

Once learned, patterns can be recognised – such as your face when you unlock your phone or clear customs at the airport.
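As a rough illustration of what this learning looks like in code, here is a minimal sketch using scikit-learn's small bundled digits dataset: a tiny artificial neural network learns pixel patterns from labelled examples and then recognises a digit it has not seen before. It is purely illustrative, not drawn from any production system.

    # Illustrative sketch only: a tiny neural network learns pixel patterns
    # from labelled 8x8 images of handwritten digits, then recognises new ones.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()  # ~1,800 labelled 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)  # "learning": patterns become weights in the network

    print("Recognised digit:", net.predict(X_test[:1])[0])
    print("Accuracy on unseen images:", round(net.score(X_test, y_test), 3))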

Pattern recognition is all around us – whether it's license plate recognition when you park your car at the mall, or when the police scan your registration. It's used in manufacturing for quality control to detect defective parts, in health care to identify cancer in MRI scans, and in Sydney to identify potholes using buses equipped with cameras that monitor the roads.

The AI capabilities stack is a framework for understanding how AI is used. Sandra Peter & Kai Riemer, CC BY-NC-ND

Classification

Once an AI system can recognise patterns, we can train it to detect subtle variations and categorise them. This is how your photo app neatly organises albums by family members, or how apps identify and label different kinds of skin lesions. AI classification is also at work behind the scenes when phone companies and banks identify spam and fraud calls.
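For a feel of how such classification works, here is a toy sketch of a spam/not-spam text classifier built with scikit-learn. The messages and labels are made up for illustration, and real spam and fraud filters are far more sophisticated.

    # Toy spam classifier: counts words in labelled messages, then learns which
    # words signal each class. Purely illustrative; not any vendor's real system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "You have won a free prize, click now",    # spam
        "Urgent: claim your reward today",         # spam
        "Are we still meeting for lunch?",         # not spam
        "Here are the notes from today's call",    # not spam
    ]
    labels = ["spam", "spam", "ham", "ham"]

    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(messages, labels)

    print(classifier.predict(["Click now to claim your free reward"]))  # ['spam']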

In New Zealand, non-profit organisation Te Hiku developed an AI language model to classify thousands of hours of recordings to help revitalise Te Reo Māori, the local indigenous language.

Prediction

When AI is trained on past data, it can be used to predict future outcomes. For example, airlines use AI to predict the estimated arrival times of incoming flights and to assign gates on time so you don’t end up waiting on the tarmac.

Similarly, Google Flights uses AI to predict flight delays even before airlines announce them.

In Hong Kong, an AI prediction model saves taxpayer money by predicting when a project needs early intervention to prevent it overrunning its budget and completion date. And when you buy stuff on Amazon, the ecommerce giant uses AI to predict demand and optimise delivery routes, so you get your packages within hours, not just days.
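The underlying recipe is simple: fit a model to past outcomes, then query it for a new case. Here is a toy sketch with synthetic flight data; the features and numbers are invented for illustration, and real airline systems use far richer inputs.

    # Toy flight-delay prediction on synthetic data; illustrative only.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Features per past flight: departure delay (min), distance (100s of km),
    # weather severity (0 = clear, 1 = severe).
    X = rng.uniform([0, 1, 0], [60, 40, 1], size=(500, 3))
    # Historical arrival delays: roughly the departure delay plus a weather penalty.
    y = 0.9 * X[:, 0] + 15 * X[:, 2] + rng.normal(0, 5, 500)

    model = LinearRegression().fit(X, y)  # learn from past flights

    # Predict the arrival delay for a flight leaving 20 min late, 800 km, bad weather.
    print("Predicted arrival delay (min):", round(model.predict([[20, 8, 1]])[0], 1))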

Recommendation

Once we predict, we can make recommendations for what to do next.

If you went to Taylor Swift's Eras tour concert at Sydney's Accor Stadium, you were kept safe thanks to AI recommendations. A system funded by the New South Wales government used data from multiple sources to analyse the movement and mood of the 80,000-strong crowd, providing real-time recommendations to ensure everyone's safety.

AI-based recommendations are everywhere. Social media, streaming platforms, delivery services and shopping apps all use past behaviour patterns to present you with their “for you” pages. Even pig farms use pig facial recognition and tracking to alert farmers to any issues and recommend particular interventions.
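At its simplest, a "for you" recommendation compares your past behaviour with that of similar users and surfaces what they engaged with. Here is a deliberately tiny sketch of that idea using a made-up user-item matrix; production recommenders are vastly larger and more nuanced.

    # Minimal collaborative-filtering sketch on a made-up user-item matrix.
    import numpy as np

    # Rows = users, columns = items; 1 means the user liked/watched the item.
    interactions = np.array([
        [1, 1, 0, 0, 1],   # user 0
        [1, 0, 0, 1, 1],   # user 1
        [0, 0, 1, 1, 0],   # user 2
    ])

    def recommend(user, k=2):
        norms = np.linalg.norm(interactions, axis=1)
        # Cosine similarity between this user and every other user.
        sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
        sims[user] = 0                            # ignore the user themselves
        scores = sims @ interactions              # weighted votes from similar users
        scores[interactions[user] == 1] = -1      # don't re-recommend seen items
        return np.argsort(scores)[::-1][:k]

    print("Items recommended for user 0:", recommend(0))  # e.g. items 3 and 2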

Automation

It’s a small step from prediction and recommendation to full automation.

In Germany, large wind turbines use AI to keep the lesser spotted eagle safe. An AI algorithm detects approaching birds and automatically slows down the turbines, allowing the birds to pass unharmed.

Closer to home, Melbourne Water uses AI to autonomously regulate its pump control system to reduce energy costs by around 20% per year. In Western Sydney, local buses on key routes are AI-enabled: if a bus is running late, the system predicts its arrival at the next intersection and automatically green-lights its journey.
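Conceptually, automation just attaches an action to a prediction. A heavily simplified sketch of the bus-priority idea might look like the following; the thresholds are hypothetical, and the real NSW system is of course far more involved.

    # Hypothetical, heavily simplified sketch of prediction-driven automation.
    def should_extend_green_light(minutes_behind_schedule: float,
                                  predicted_seconds_to_intersection: float) -> bool:
        """Extend the green phase if a late bus is predicted to arrive very soon."""
        bus_is_late = minutes_behind_schedule > 2          # assumed threshold
        arriving_soon = predicted_seconds_to_intersection < 30
        return bus_is_late and arriving_soon

    # A bus running 5 minutes late and 20 seconds away triggers the extension.
    print(should_extend_green_light(5, 20))  # True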

Generation

Once we can encode complex patterns into neural networks, we can also use these patterns to generate new, similar ones. This works with all kinds of data – images, text, audio and video.
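On a toy scale, the generation idea can be shown in a few lines of code: learn which words tend to follow which in a small text, then sample new text from those learned patterns. Large models do something far more sophisticated, but the principle of generating from learned patterns is the same.

    # Toy text generator: learn word-to-word patterns, then sample new text.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat chased the dog on the mat".split()

    follows = defaultdict(list)           # which words follow which
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word].append(next_word)

    random.seed(0)
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(follows[word])   # sample from learned patterns
        output.append(word)

    print(" ".join(output))   # new text that resembles, but isn't, the original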

Image generation is now built into many new phones. Don't like the look on someone's face? Change it into a smile. Want a boat on that lake? Just add it in. And it doesn't stop there.

Tools such as Runway let you manipulate videos or create new ones with just a text prompt. ElevenLabs allows you to generate synthetic voices or digitise existing ones from short recordings. These can be used to narrate audiobooks, but also carry risks such as deepfake impersonation.

And we haven’t even mentioned large language models such as ChatGPT, which are transforming how we work with text and how we develop computer code. Research by McKinsey found that these models can cut the time required for complex coding tasks by up to 50%.

Interaction

Finally, generative AI also makes it possible to mimic human-like interactions.

Soon, virtual assistants, companions and digital humans will be everywhere. They will attend your Zoom meeting to take notes and schedule follow-up meetings.

Interactive AI assistants, such as IBM's AskHR bot, will answer your HR questions. And when you get home, your AI friend app will entertain you, while digital humans on social media are ready to sell you anything, any time. And with voice mode activated, even ChatGPT gets in on the interaction.

Amid the excitement around generative AI, it is important to remember that AI is more than chatbots. It impacts many things beyond the flashy conversational tools – often in ways that quietly improve everyday processes.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI Concerns Grow in 2024: Data Security Tops at 46%, Costs at 43%, Accuracy at 36%
by Web Desk via Digital Information World

Sunday, October 20, 2024

Apple's AI Launch Falls Short, But Long-Term Success Likely, Says Gurman

A new report from Mark Gurman backs up the common belief that Apple is behind its rivals in AI development. Apple plans to roll out its first AI-powered Apple Intelligence features this month, but Gurman isn't impressed. He calls the upcoming changes lackluster. Still, he sees Apple's unique strong points. He thinks the company will end up leading the AI field down the road.

Gurman initially noted that Apple’s AI features are set to debut with iOS 18.1, expected to be released on October 28. While the rollout is eagerly anticipated, his latest assessment suggests these features may not live up to expectations. The centerpiece, a notification summary feature, will prove useful only if it performs reliably. Gurman points out that Apple's own research showed significant shortcomings compared to other AI chatbots, with OpenAI's ChatGPT outperforming Siri in both accuracy and overall capability.

"The research found that OpenAI’s ChatGPT was 25% more accurate than Apple’s Siri, and able to answer 30% more questions.", revealed the report, adding further, "In fact, some at Apple believe that its generative AI technology — at least, so far — is more than two years behind the industry leaders."

Apple's track record, though, hints that this gap might shrink faster than we think. Gurman points to past cases like Apple Maps, saying the company's way of bringing new ideas to life (whether by working on them in-house, bringing in the best people, or investing in tech startups) will push it ahead.

Apple's deep pockets and huge user base give it a big edge. The company can roll out new features across its wide-ranging ecosystem, with billions of devices ready to take on updates. Apple can also tweak its hardware to support new software capabilities. At launch, Apple Intelligence worked only on certain devices, but it is now compatible with most iPads and the newest iPhone models, with more in the pipeline.

Apple's plan to launch M4-based Macs and a new iPhone SE in 2025, plus its move to add AI features to gadgets like the Apple Watch and Vision Pro, shows how serious the company is about growing its AI presence. By 2026, we can expect almost every Apple product with a screen to have AI capabilities.

Apple's more cohesive hardware and software integration stands in contrast to rivals like Google and Samsung, which face challenges in rolling out updates across their more fragmented ecosystems. According to Gurman, these competitors may struggle to match Apple's pace in releasing new features and upgrades.

Despite Apple’s advancements in AI, Gurman questions whether consumers are genuinely interested in these innovations, suggesting that camera improvements are a more compelling factor for iPhone buyers. He predicts that if the iPhone succeeds this year, it will likely be due to features unrelated to AI.

Image: DIW-Aigen

Read next: AI Revolution Reshapes Work and Home, Accelerates Faster Than Any Previous Technology
by Asim BN via Digital Information World

Consumers Spent $16.2 Billion on Apps in September 2024; App Store Generated $13.7 Billion Revenue

Most app developers ask whether they should build for the Google Play Store (Android) or the Apple App Store (iOS), and the answer depends on what you are building. Revenue on the two platforms also matters. AppFigures compared the revenue of Google Play and the App Store in September 2024 to see how spending splits between the platforms. The most important thing to know is that the App Store captures the bulk of the revenue (84%) compared to Google Play (16%).

Consumers spent approximately $16.2 billion on Google Play and the App Store in September 2024: Google Play took in $2.5 billion and the App Store $13.7 billion. These figures are gross revenue, meaning in-app purchases before Google and Apple take their fees. So, for developers chasing revenue, the App Store is the stronger choice.
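As a quick sanity check, the reported split follows directly from the two figures; the small difference from the stated 84/16 comes from rounding in the source numbers.

    # Quick arithmetic check of the reported platform split (figures are rounded).
    app_store = 13.7    # billions of dollars, September 2024
    google_play = 2.5
    total = app_store + google_play

    print(f"Total consumer spend: ${total:.1f}B")           # $16.2B
    print(f"App Store share: {app_store / total:.0%}")      # ~85%, reported as 84%
    print(f"Google Play share: {google_play / total:.0%}")  # ~15%, reported as 16%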

The Games category generates the most revenue, $5.1 billion across the App Store and Google Play combined; the App Store accounted for $3.9 billion of that, roughly 75% of the total. A fun fact is that the Games category has the smallest share of revenue after the Shopping category. Alongside Games, the top categories in terms of total spend are Lifestyle & Dating, Social Networking, Photo & Video and Entertainment. So if you are an app developer and want to earn revenue, these five categories on the App Store are the best bets.



Read next: Research Suggests Connection Between Internet Speeds and Obesity Growth
by Arooj Ahmed via Digital Information World

ChatGPT Might Be Top Dog, But These AI Apps Are Snatching Up Profits!

When someone mentions an AI app or AI chatbot, ChatGPT is the first thing that comes to mind, and that explains why it saw huge revenue growth in Q3 2024. But even though ChatGPT is the biggest platform in terms of revenue, it isn't the only one seeing incredible growth. AppFigures' analysis highlights the impressive revenue growth of Claude, CoPilot, and Perplexity across the Google Play Store and Apple App Store. (Sorry Google lovers, Gemini wasn't included in the analysis.)

Taken together, Claude, CoPilot and Perplexity have already brought in $13.3 million in revenue from January 2024 to September 2024. An important thing to note is that CoPilot didn't start earning until May 2024, and Claude was also launched in May 2024. Consumers spent a total of $2.8 million on this trio in September 2024 alone.

CoPilot is catching up and is not far behind Perplexity in revenue. This year, all three AI apps have started generating double-digit revenue every month. According to AppFigures' estimates, the combined revenue of the Claude, CoPilot and Perplexity chatbots increased by 11% in September 2024. Even though the trio's growth is impressive, Claude is seeing the highest: its revenue grew 22% month over month in September 2024, while Perplexity grew 10% and CoPilot 1%.

In terms of total spend, Claude comes last at $2.4 million, while Perplexity has taken in $3.7 million and CoPilot $3.6 million. By comparison, ChatGPT's consumer spend is about 15 times higher every month. But as the trends change, the future of search is also going to change. Perplexity is catching up fast, so we will have to see which AI model becomes the Google of this era.

ChatGPT isn't the Only AI App That's Growing its Revenue

Read next: Is Google Giving YouTube Unfair Advantage? AI Overviews See Huge Spike in Citations
by Arooj Ahmed via Digital Information World

AI Revolution Reshapes Work and Home, Accelerates Faster Than Any Previous Technology

New research by the Federal Reserve Bank of St. Louis, Vanderbilt University, and Harvard Kennedy School reveals the true extent of AI's penetration into our daily lives. The researchers surveyed thousands of US workers to see how AI adoption is happening at work and at home, and came away with five key takeaways.

The research found that generative AI has been adopted faster than any other new technology in the past: 39.4% of Americans between the ages of 18 and 64 report using ChatGPT. Only two years after ChatGPT's release, generative AI has crossed a 30% adoption rate, while PCs took three years to hit a 20% adoption rate. The gap is probably down to cost: adopting AI at home costs little, while early PCs carried a hefty price tag.

AI adoption outpaces PCs: 39.4% of US adults use ChatGPT, surpassing 30% in two years.

Another finding of the research is that AI is being used by everyone, not just tech workers. It is most commonly adopted in business, management and computer-related fields, where the usage rate has already surpassed 40%. Just as PC adoption resulted in workplace inequality, AI adoption is also increasing inequality at work: 60% of workers with bachelor's degrees are using AI, compared to 20% of workers without a degree. The researchers say this could lead to inequalities in the labor market.

Artificial intelligence is also saving a lot of time across a number of tasks: 57% of respondents use AI for writing and 49% use it for searching for information. AI is also saving employees time by summarizing their reports and generating new ideas. The research also looked at how AI can boost worker productivity, though this effect is still in its early stages. Currently, between 0.5% and 3.5% of all US work hours are assisted by AI, which translates to an increase in labor productivity of between 0.125% and 0.875%. The researchers caution against drawing firm conclusions right now because AI is still in the early stages of its adoption, and it will take some time for it to be fully absorbed into workforces.
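Those two ranges line up if each AI-assisted work hour is assumed to be roughly 25% more productive; that back-of-envelope reading is an inference from the quoted numbers, not a figure stated by the researchers.

    # Back-of-envelope check: the productivity range matches the hours range if each
    # AI-assisted hour is ~25% more productive (an inference, not a stated figure).
    assisted_hours_share = [0.005, 0.035]      # 0.5% to 3.5% of all US work hours
    assumed_boost_per_assisted_hour = 0.25

    for share in assisted_hours_share:
        gain = share * assumed_boost_per_assisted_hour
        print(f"{share:.1%} of hours assisted -> {gain:.3%} productivity gain")
    # Prints 0.125% and 0.875%, matching the figures above.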

AI saves time: 57% use it for writing, 49% for research, enhancing productivity in daily tasks.

Read next: AI Has Potential to Boost Industry Margins Significantly Over Five Years, Yet Adoption Remains Slow
by Arooj Ahmed via Digital Information World

Saturday, October 19, 2024

Is Google Giving YouTube Unfair Advantage? AI Overviews See Huge Spike in Citations

As per new findings from BrightEdge, YouTube citations in Google's AI Overviews (AIO) grew by 310% in August and 200% in September.

One reason for such a high surge may be that Google is self-preferencing YouTube, a property it owns. Another is that Google's multimodal Gemini model can pick up content spoken in videos and cite it in generated AI answers.

Beyond that, the BrightEdge SEO platform also shared Google AIO trends related to shopping and e-commerce. The share of shopping keywords with both organic search rankings and AIO citations dropped from 12.4% to around 7%. Nonetheless, the share of overall keywords ranking in the top 10 increased from 21.6% in July to 24.5% in September.

AIO e-commerce queries dropped to 7.63% from 12.04%. The most significant increase was for unordered collapsed lists for shopping, which went up from 1.3% to 15.9%. At the same time, volatility for shopping queries fell from 37% to 26%, and the e-commerce AIO height increased from 651 pixels to 914 pixels, a rise of 40.4%.

These changes happened for a reason: for e-commerce queries, Google's AI Overviews are placing greater focus on supplemental information. Additionally, with the Christmas and New Year holidays approaching, results are stabilizing, which keeps optimization efforts consistent.

Image: DIW-Aigen

Read next:

• How TruthFinder Can Help You Uncover “Hidden” Details With Online Background Checks

• Research Suggests Connection Between Internet Speeds and Obesity Growth

• Meta’s LeCun: General AI Still Distant, World Models Key to Progress


by Ahmed Naeem via Digital Information World