Saturday, July 27, 2024

Research Shows Impressive Growth in MarTech Products Over the Last 13 Years

Martech Tribe and Chief Martech conducted research showing that marketing technology software products have transformed and grown dramatically over the past 13 years. When the first marketing technology survey was run in 2011, there were just 150 martech products; by 2024 the count had reached 14,106. That amounts to 9,304% growth over 13 years, with 27.8% growth in the last 12 months alone.
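As a quick sanity check on those figures, here is a minimal Python sketch of the percentage-growth arithmetic (the two counts are simply the survey numbers quoted above):

```python
count_2011 = 150     # martech products counted in the first (2011) survey
count_2024 = 14106   # products counted in the 2024 landscape

growth = (count_2024 - count_2011) / count_2011 * 100
print(f"{growth:.0f}% growth since 2011")  # -> 9304% growth since 2011
```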


Large martech companies account for 13% of all marketing technology apps, 36% come from medium-sized companies, and more than 51% come from companies with less than $10 million in revenue. From 2017 to 2023, most martech apps came from these "tail" companies (those with under $10 million in revenue).

The researchers also found 3,135 marketing-technology-focused ChatGPT bots in the GPT Store. Of these, 32% deal with content marketing, 8% help with social media marketing, and 8% with SEO. Other martech-oriented ChatGPT bots assist with product management, video marketing, and search and social advertising.

Read next: Reddit Outpaces Rivals: Users Spend Most Time on Platform in June 2024
by Arooj Ahmed via Digital Information World

Research Exposes High Costs and Low Returns in Google’s Advertising Ecosystem

If you frequently use Google, you've likely encountered ads during your searches. It's commonly assumed that when companies like Nike pay for ads on their own names, the money is well spent, but a recent study challenges this assumption. According to research by Northeastern University, published in the Proceedings of the International AAAI Conference on Web and Social Media, Google's advertising system, along with similar practices on search engines like Bing and DuckDuckGo, operates as a "sham market."

Christo Wilson, a computer science professor at Northeastern and lead author of the study, explains that companies often pay to advertise on their own brand terms, such as "Nike" or "Adidas," only to face competition from other advertisers. This practice, known as ad poaching, compels companies to spend heavily to protect their brand's search results, even though these ads are not very effective. Wilson argues that this setup wastes marketing budgets, since companies see no significant return on the investment.

To investigate the effectiveness of these ads, Wilson’s team recruited U.S. residents and had them install a browser extension that tracked their online activity. This extension captured data on their searches, the ads displayed, and their interactions with search results and ads on Google Chrome. The study found that while some users clicked on competitors' ads, this behavior was infrequent, with a high abandonment rate.

Despite these findings, advertisers continue to invest in these ads. Wilson attributes this to Google's market dominance, which forces advertisers to comply with its advertising policies. Google’s advertising practices are highly profitable, contributing significantly to its revenue. Wilson compares this to rent-seeking behavior, where Google benefits at the expense of advertisers due to its control over the online search market.

Wilson also points out the broader economic implications of this advertising model. He suggests that the high costs of brand ads are indirectly passed on to consumers. For instance, higher prices for products like Nike shoes may partly result from the significant sums companies spend on ineffective search ads.

Regulatory scrutiny has intensified in response to these issues. In the U.S., ongoing antitrust cases against Google reflect concerns about its advertising practices. Meanwhile, in 2023, India took more direct action by banning brand ads that violate trademark rules. Wilson believes that reforming online advertising practices and increasing market competition could address these issues. However, as long as Google maintains its dominance, significant change may remain elusive.

In conclusion, Wilson’s study highlights critical flaws in the current advertising ecosystem, suggesting that Google's practices create inefficiencies that affect both businesses and consumers. The research advocates for greater competition in online search to rectify these systemic issues.

Image: DIW-Aigen

Read next: X's Hidden Update, Your Tweets Now Fuel Grok AI! Here’s How to Opt-Out!
by Arooj Ahmed via Digital Information World

Friday, July 26, 2024

X's Hidden Update, Your Tweets Now Fuel Grok AI! Here’s How to Opt-Out!

Elon Musk’s social network X, formerly known as Twitter, has recently implemented a new policy that allows the platform to utilize user data to train its AI model, Grok. This change, quietly activated by default, enables X to use users’ tweets, interactions, inputs, and results with Grok for the training and fine-tuning of the AI system. The update was discovered by users, who noted that X had not made any formal announcement regarding this shift.

This policy enables Grok, an AI developed by xAI, another entity under Musk's ownership, to access extensive user data. The intent is to improve Grok's capabilities and make it a competitor to established models like OpenAI's ChatGPT. Although the move may appear aggressive, it mirrors wider AI industry practice: several major AI companies have trained their LLMs on publicly available data.

While some users are accepting of the policy, seeing their contribution as a minor part of advancing AI technology, others strongly criticize the lack of notice and the automatic opt-in. They view the default data-sharing setting as an unacceptable breach of privacy.

Users concerned about their data being used can adjust their settings. This must be done through the desktop version of X (as this setting is not available on mobile devices for now).

Users should go to the "Privacy and Safety" settings, which can be accessed directly by visiting this page: https://x.com/settings/grok_settings.

On that page, select "Grok" and uncheck the box that authorizes data sharing for training purposes.


Additionally, users have the option to delete their conversation history with Grok.

The policy shift has attracted criticism from privacy regulators, especially in Europe. The Irish Data Protection Commission (DPC), responsible for overseeing X’s compliance with the General Data Protection Regulation (GDPR), has expressed surprise at the automatic opt-in. The DPC has been engaging with X on data processing matters and is seeking clarification on the policy’s compliance with GDPR. Similar data-sharing plans by Meta were recently suspended in Europe due to regulatory concerns.

Read next: New Research Shows that AI Models Trained on the Output of Other AI Models Often Produce Incoherent Results
by Asim BN via Digital Information World

New Research Shows that AI Models Trained on the Output of Other AI Models Often Produce Incoherent Results

A new study published in Nature shows that AI models trained on AI-generated data tend to produce increasingly poor output. A computer scientist from the University of Oxford explains that, just as reprinting a copy of a picture over and over degrades it, models trained this way eventually produce incoherent and nonsensical content; the term for this failure is "model collapse."

The research examined many AI models, including large ones like GPT-3, which was trained on Common Crawl, an open repository of more than 3 billion crawled web pages. As ever more AI-generated junk sites enter such crawls, the problem is likely to get worse, and the effects of this polluted data will show up as degraded performance in future AI models.

To measure the effect, the researchers fine-tuned a large language model on Wikipedia data, then fine-tuned successive generations of the model on the output of the previous generation. LLMs tuned on another LLM's output showed higher perplexity, a measure of how unpredictable, and typically how degraded, the text is. The first generation's output was coherent, with well-structured sentences, but by the final generation the model was producing incoherent and nonsensical text.
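To make that feedback loop concrete, here is a minimal, hypothetical sketch of the same recursive setup using a toy bigram model in place of an LLM; the file name seed_corpus.txt is an assumption standing in for the human-written starting data, and the shrinking vocabulary across generations is a crude stand-in for collapse:

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Build a bigram table: word -> Counter of words that follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, n_words, rng):
    """Sample text by randomly walking the bigram chain."""
    word = rng.choice(list(model))
    out = [word]
    for _ in range(n_words - 1):
        followers = model.get(word)
        if followers:
            word = rng.choices(list(followers), weights=list(followers.values()))[0]
        else:
            word = rng.choice(list(model))  # dead end: restart anywhere
        out.append(word)
    return " ".join(out)

rng = random.Random(0)
corpus = open("seed_corpus.txt").read()  # assumption: human-written seed text

for gen in range(5):
    model = train_bigram(corpus)
    corpus = generate(model, 5000, rng)  # the next generation sees only model output
    print(f"generation {gen}: distinct words = {len(set(corpus.split()))}")
```

Each pass re-estimates the model from its own samples, so rare words drop out and the distribution narrows, which is the same basic mechanism the paper describes at LLM scale.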

The researchers note that, because human-generated data on the internet is finite, training AI models on the output of other models may become unavoidable. If so, such synthetic data will have to be used carefully, in controlled environments.

Image: DIW-Aigen

Read next: Upwork Survey: AI Increases Workload for 77% of Workers, 71% Experience Burnout
by Arooj Ahmed via Digital Information World

CIRP Reports that Many Android Users are Switching to iPhone, But That's Not Entirely Good News for Apple

CIRP has published a new study showing that many Android users are switching to iPhone. Yet even with these switchers, iPhone 15 sales are still lower than iPhone 14 sales were. This can cut both ways for Apple. In June 2023, 10% of Android users switched to iPhone; by June 2024 that figure had reached 17%.

This influx of customers may look good for Apple, but the opposite is closer to the truth. Many Android switchers say they don't need Apple's latest model; they are happy to buy an older iPhone that still runs the new OS at a reasonable price. That could mean weaker sales for the newest models, such as the current iPhone 15.

In addition, CIRP reports that only a few existing iPhone users are upgrading to the latest model. iPhone owners used to upgrade as soon as they could, which consistently worked in Apple's favor, but slower upgrade cycles now translate into lower sales of the latest models. And compared with the March 2024 quarter, the 17% of Android switchers recorded in Q2 is the highest share since 2017 (21%).


Read next: Researchers Are Trying to Identify Deep Fakes Using Astronomy Methods
by Arooj Ahmed via Digital Information World

Researchers Are Trying to Identify Deep Fakes Using Astronomy Methods

Astronomy methods are now being used to spot deepfakes by looking into people's eyes. According to Nature, researchers at the University of Hull in the UK found that deepfakes can be recognized using the CAS system, which astronomers use to measure the concentration, asymmetry and smoothness of galaxies. The researchers also applied the Gini coefficient, another astronomy measure of how light is distributed across an image, to the reflections of light in people's eyes in photographs.

Adejumoke Owolabi, a researcher on the study, used these two measures and found that they flagged deepfakes in about 70% of the images tested. In other words, examining the reflections in people's eyes makes it easier to find fake images. The method is not 100% accurate, but it can still help in identifying deepfakes.

When the researchers presented the idea at the UK's National Astronomy Meeting, it was well received. Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull, explains that in a real photograph the reflection in one eye should closely match the reflection in the other. Many scientists are working on deepfake detection, and other approaches, such as AI-based analysis of facial movements, spotting oddly blinking eyes, and checking for a visible pulse in people's faces, are also being used.
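As an illustration of the underlying idea, here is a minimal, hypothetical Python sketch of the Gini-based comparison; the eye crops below are random placeholder arrays, whereas in the study the inputs would be the detected reflection regions of each eye:

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of pixel intensities: near 0 when light is spread
    evenly, near 1 when it is concentrated in a few pixels."""
    v = np.sort(np.asarray(pixels, dtype=float).ravel())
    n = v.size
    total = v.sum()
    if total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float(((2 * ranks - n - 1) * v).sum() / (n * total))

# Placeholder grayscale crops; real inputs would be the corneal-highlight
# regions detected in each eye of the same face.
left_eye = np.random.rand(32, 32)
right_eye = np.random.rand(32, 32)

# Both eyes in a genuine photo see the same scene, so their reflection
# statistics should roughly agree; a large gap is one possible fake signal.
print(abs(gini(left_eye) - gini(right_eye)))
```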

Meta and OpenAI are investing millions of dollars in ways to identify deepfakes. The FBI and the Department of Defense are also concerned about the rise of deepfakes that could harm national security. Detection methods cut both ways, though: they can also be used to help generative models get better at evading them. Deepfakes are getting more realistic with each passing day, and soon many people may believe almost anything they see online.

Image: DIW-Aigen

Read next: A New Study Reveals that Many LLMs are Not Able to Perform Basic Reasoning Tasks
by Arooj Ahmed via Digital Information World

Thursday, July 25, 2024

A New Study Reveals that Many LLMs are Not Able to Perform Basic Reasoning Tasks

In a paper by researchers from the Juelich Supercomputing Center (JSC), the School of Electrical and Electronic Engineering at the University of Bristol, and the LAION AI laboratory, the authors found that many LLMs can perform reasoning, but they cannot perform it consistently. The study says that much of the time, LLMs fail even at basic tasks such as simple logic questions.

The authors argue that technical and scientific experts should reassess the claimed capabilities of large language models, and that the weaknesses and failure modes of LLMs should be analyzed systematically so that their shaky basic reasoning is exposed rather than hidden by headline benchmark scores.

The researchers call their test problem AIW, short for "Alice In Wonderland," and used many variations of it to see how models behave as the same problem is systematically varied. They asked LLMs questions of the form: "Alice has X brothers and Y sisters. How many sisters do Alice's brothers have?" The phrasing and numbers varied, but the answer is always Y + 1 (Alice's sisters plus Alice herself), something even schoolchildren can work out.
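As a minimal sanity check of that reasoning, here is a tiny Python sketch with hypothetical example values:

```python
def sisters_of_alices_brother(x_brothers, y_sisters):
    # Each of Alice's brothers has all of Alice's sisters plus Alice herself;
    # the number of brothers doesn't affect the answer.
    return y_sisters + 1

# Hypothetical values: Alice has 3 brothers (X) and 2 sisters (Y).
assert sisters_of_alices_brother(3, 2) == 3  # 2 sisters + Alice = 3
```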

Even though the question is simple, many LLMs couldn't solve it. They answered with flawed reasoning and presented incorrect answers as if they were correct. The wrong answers themselves are not the biggest problem; the bigger problem is that the models defend them with such confident-sounding arguments that it becomes hard to tell whether an answer is right or wrong.

Many LLMs showed correctness rates below 50%, while larger models like GPT-4o reached around 60%. Larger models do better than smaller ones, but they are still weak at this kind of reasoning. The AIW problems demonstrate that models which score highly on other capability benchmarks can nonetheless fail at basic reasoning: most of the models tested couldn't solve them.

Image: DIW-Aigen

Read next:

• X Rolls Out New ‘More About This Account’ Feature But The Results Aren’t Impressive

• ChatGPT vs Gemini vs Claude: Which Generative AI Model Is Best For Creating Trending Content?
by Arooj Ahmed via Digital Information World