Wednesday, August 9, 2023

AI Content Detection by Search Engines: Google’s Plan and Long-Term Strategy

The advent of AI has heavily impacted digital marketers working in SEO, offering them time-saving and affordable ways to craft original content for their work.

Beyond the ethical concerns, people wonder whether search engines can identify AI-generated content. The question matters because, if they cannot, it raises further questions about the appropriate and ethical use of AI.

Using machines to generate content is not a novel technique, and it may not be as unethical as it first appears.

For example, news sites compete to be first to break important headlines, and for years they have drawn on authoritative data feeds such as seismometers and stock markets to speed up publication. There is nothing disingenuous about a robot-written article built on accurate data, such as the magnitude, date, time, and location of a detected earthquake.

Such updates are quick to produce and easy for readers to absorb, as the sketch below illustrates.
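Automated reports of this kind are typically produced by filling a fixed template with verified sensor data. The following Python sketch is a hypothetical, simplified version of that approach; the QuakeEvent fields, template wording, and sample values are all invented for illustration.

```python
# Minimal sketch of template-based news generation from structured data.
# The event fields, template, and sample values are hypothetical.
from dataclasses import asdict, dataclass

@dataclass
class QuakeEvent:
    magnitude: float
    location: str
    date: str
    time: str

TEMPLATE = (
    "A magnitude {magnitude:.1f} earthquake was detected near {location} "
    "on {date} at {time}. Details will be updated as they are confirmed."
)

def draft_report(event: QuakeEvent) -> str:
    """Fill the fixed template with the verified sensor readings."""
    return TEMPLATE.format(**asdict(event))

print(draft_report(QuakeEvent(4.7, "San Bernardino, CA", "August 9, 2023", "06:42 PDT")))
```

Because every fact in the output comes straight from the data source, the speed of automation carries no cost in accuracy.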

On the other hand, there have been many malicious and unethical uses of machine-generated content, commonly referred to as “blackhat” implementations. Google has repeatedly disapproved of practices such as using Markov chains to generate low-effort text, because they fall under the category of automatically generated pages that offer no added value; a minimal example of such a generator is sketched below.
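To make the Markov-chain reference concrete, here is a minimal Python sketch of that kind of generator (the tiny corpus is invented for illustration). It stitches together word pairs observed in a source text, producing output that mimics the style of the source without conveying any real meaning:

```python
# Minimal sketch of a Markov-chain text generator of the low-effort kind
# Google's guidelines describe. The corpus below is invented for illustration.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    """Random-walk the chain: fluent-looking text with no real meaning."""
    word, output = start, [start]
    for _ in range(length - 1):
        candidates = chain.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

corpus = "search engines rank content and search engines reward content that helps users"
print(generate(build_chain(corpus), start="search"))
```

Output like this can look grammatical in small stretches while saying nothing new, which is exactly what the “no added value” category targets.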

What counts as “no added value,” however, is open to interpretation, and that ambiguity has confused many.

GPTx large language models (LLMs) and ChatGPT have changed machine-generated content by making the interaction conversational. At their core, LLMs are a glorified version of a phone’s predictive text feature: they predict the most likely next word. ChatGPT is a form of generative AI that adds a randomized element to this prediction, which is why it generates different responses to the same prompt.
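That randomness typically comes from sampling the next token from the model’s predicted probability distribution rather than always taking the single most likely word. The Python sketch below illustrates the idea with an invented vocabulary and made-up probabilities; it is not a real model, just a demonstration of why one prompt can yield different outputs:

```python
# Illustrative sketch of sampled next-token generation (not a real model).
# The vocabulary and probabilities below are invented for demonstration.
import random

def next_token(distribution: dict) -> str:
    """Sample one token according to the model's predicted probabilities."""
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution a model might predict after "The earthquake was"
predicted = {"strong": 0.5, "mild": 0.3, "unexpected": 0.2}

# Running the same "prompt" twice can pick different continuations.
print(next_token(predicted))
print(next_token(predicted))
```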

Unlike a traditional search engine, ChatGPT does not look up information; it generates plausible-sounding text. This drawback produces errors known as hallucinations, and in many instances ChatGPT even contradicts itself.

Such instances raise a genuine concern about whether AI-created content adds value. The problem is fundamental to how LLMs generate text, and it is unlikely to be solved without a fresh approach. The stakes are highest for topics that affect people’s money or health, where inaccurate content can do real harm to finances and well-being. CNET and Men’s Health both published inaccurate AI-generated content this year.

Google has claimed to be cautious with generated responses, citing as an example its Search Generative Experience (SGE) refusing to answer a medical question about giving a child Tylenol. Yet SGE directly contradicts this in practice, readily producing an answer when the same question is asked.

Google believes machine-generated content has a place in answering people’s questions. It hinted at this in May 2021 when it revealed its Multitask Unified Model (MUM), noting that searchers issue eight queries on average to complete a complex task.

In that process, the searcher learns more information with each query, asks related follow-up questions, and views more webpages along the way.

Google’s idea was to take a user’s first question, anticipate the expected follow-up questions, and generate a single consolidated answer from the knowledge in its index. If the plan succeeds, SEO strategies that depend on SERP visits could be wiped out, with the benefit accruing only to the user.

But this raises a question: why would Google send a searcher to a webpage containing a pre-generated answer when it has both the capability and the financial incentive to generate that answer itself and keep users inside its search ecosystem? Featured snippets and in-SERP flight search are examples of Google already doing this.

If Google concludes that generated answers are worth serving to users, the decision becomes a matter of weighing costs against benefits.

In other words, which will earn the search engine more revenue in the long run: bearing the cost of generating answers while searchers wait for responses, or quickly and inexpensively directing users to sites in the way they are already familiar with?

With the rapid adoption of ChatGPT came AI content detectors that display a percentage score for artificially generated content. The percentages vary from tool to tool, but they all report the same thing: how likely the text is to be AI-generated. The issue is that people misread these scores. When a detector labels content “60% written by AI and 40% by humans,” it does not mean that 60% of the text came from AI; it means the detector is 60% sure that 100% of the text is AI-generated.
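A small Python sketch makes the distinction explicit (the score and wording are hypothetical; no real detector API is shown):

```python
# Hypothetical detector output: a single classifier confidence, not a
# measurement of what fraction of the text was machine-written.
confidence_ai = 0.60  # detector's estimated P(entire text is AI-generated)

# Common misreading: treating the confidence as a proportion of the text.
misreading = f"{confidence_ai:.0%} of this text was written by AI"

# Correct reading: confidence that the whole text is machine-generated.
correct = f"The detector is {confidence_ai:.0%} sure that 100% of the text is AI-generated"

print("Misreading:", misreading)
print("Correct:  ", correct)
```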

Tricking an AI detector is simple. Adding something as small as a double exclamation mark can make a detector report that the text is 99% written by a human.

Some website owners may take false comfort in such weaknesses, believing that Google, too, cannot detect AI content.

This year, Google Search Central stated that its focus is on the quality of content rather than how it is produced. This reveals that Google cares more about the output than about the means of generation.

The answer to whether Google can spot AI-generated content lies in a different question: can LLMs consistently produce premium-quality content that is accurate and meets Google’s E-E-A-T criteria? So far, AI is getting better at generating fluent answers, but those answers often lack substance.

However, predictions suggest Google will shift its focus to long-form expert content, answering specific questions directly rather than sending searchers to multiple smaller websites.


Read next: Is Google Still the Best Source of Information?
by Ahmed Naeem via Digital Information World
