Search giant Google continues to draw criticism from independent publishers who accuse the firm of unfair practices designed to boost its own revenue.
In a recently published article, HouseFresh argued that Google deliberately pushes small companies, publishers, and independent writers out of its search results so that bigger brands can showcase their content, and that the problem has only worsened over time.
The piece called out large publishers, including BuzzFeed and Rolling Stone, which may not have the deepest expertise or talent on a topic yet still receive far more visibility than smaller outlets.
It is disheartening to see Google continue to reward these organizations with better rankings despite their shortcomings.
What users get in the end is a results page dominated by material produced for the sole purpose of ranking highly on the search engine rather than serving readers.
A piece rolled out this week described how independent publishers such as HouseFresh have become so invisible that they effectively disappear from search, with the site's search traffic dropping 91% over the past couple of months.
Another post showed that articles which once drew around 4,000 visitors per day now attract just 200.
Further investigation revealed a startling disconnect: rankings on Google were below par even though reader feedback was positive and demand remained high. In effect, sponsored posts are pushing meaningful content down the drain while big media sites reign supreme.
Google has yet to comment on the matter. Still, the findings are fascinating, exposing trends that few people talk about until those trends start hurting their own businesses online.
Product reviews failing to earn the credit and engagement they deserve is worrisome, because it points toward a monopoly in which only the biggest players benefit in the long run.
Image: HouseFresh
Whatever the case may be, SEO-driven content is being rolled out across platforms at remarkable speed. AI tools are giving rise to a flood of stories, and the fact that product reviews are increasingly produced through automation strikes us as unfair.
Large media outlets also aggressively target the top search spots on Google, which squeezes out smaller sites. Independent publishers are seeing massive traffic declines that, on many occasions, are enough to shatter an outlet entirely.
These declines in Google search traffic have cut into many publishers' incomes and their ability to stay sustainable. It is sad to see hard work get no credit or success where it is due while scams continue to be marketed simply because they come from bigger firms.
But the battle is on, and we are keen to see what the future of search holds, because such matters will not die down without a fight. And it has been a long time coming, that is for sure.
And if Google is not in the mood to rank honest reviews, then critical takedowns may have to do the job of putting overhyped products in the limelight before readers waste their money.
Read next: Weaponized AI Escalates Cyber Threats, Challenging Security Teams as Attacks Evolve in Complexity and Speed
by Dr. Hura Anwar via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Friday, May 3, 2024
Weaponized AI Escalates Cyber Threats, Challenging Security Teams as Attacks Evolve in Complexity and Speed
Weaponized artificial intelligence (AI) is becoming a common tool for cybercriminals, leading to new and complex cybersecurity threats. These threats continue to evolve as attackers grow more sophisticated and use AI to craft more effective and harmful cyberattacks. Forrester's 2024 report highlights how difficult it is becoming for security teams to manage threats that are faster, more nuanced, and more damaging.
Attack groups, including those backed by nation-states, now offer services such as ransomware-as-a-service and tools for carrying out attacks without malware, which current cybersecurity systems find hard to detect. Malware-free attacks rose from 71% of detections in 2022 to 75% in 2023, according to CrowdStrike's report.
Forrester's survey of security professionals shows that a high number of organizations suspect breaches of their sensitive data. Almost 80% of these professionals think their data might have been compromised in the last year. The cost of these breaches is often very high, with some costing over $1 million, and a few even reaching or exceeding $10 million.
Among the top cybersecurity threats Forrester identifies for 2024 are narrative attacks, deepfakes, AI software supply chain vulnerabilities, and nation-state espionage. Narrative attacks involve manipulating information to influence public opinion or interfere with elections. Deepfakes, which use AI to create fake audio or video clips that seem real, are increasingly used for fraud and misinformation. These threats require sophisticated detection methods.
To combat these risks, Forrester suggests that IT and security teams use authenticator apps and add extra layers of security, such as biometrics and digital fraud management. The report also highlights the importance of passwordless authentication systems that adapt to the risk level of each login attempt, keeping security both strong and user-friendly.
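As a rough illustration of the kind of risk-adaptive step-up such passwordless systems perform, here is a minimal sketch; the signals, weights, and thresholds below are invented for the example and are not taken from Forrester or any vendor's product.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    new_device: bool          # device not previously seen for this account
    unfamiliar_location: bool  # login from an unusual location
    impossible_travel: bool    # geography inconsistent with the last session
    failed_attempts: int       # recent consecutive failed logins

def risk_score(attempt: LoginAttempt) -> int:
    """Toy risk score: higher means a more suspicious login attempt."""
    score = 0
    score += 30 if attempt.new_device else 0
    score += 20 if attempt.unfamiliar_location else 0
    score += 40 if attempt.impossible_travel else 0
    score += min(attempt.failed_attempts, 5) * 5
    return score

def required_factor(attempt: LoginAttempt) -> str:
    """Step up the authentication requirement as the risk score rises."""
    score = risk_score(attempt)
    if score < 20:
        return "passkey only"                 # low risk: frictionless passwordless sign-in
    if score < 50:
        return "passkey + authenticator app"  # medium risk: add an app prompt
    return "passkey + biometric check"        # high risk: strongest challenge

# Example: a new device from an unfamiliar location triggers the strongest check.
print(required_factor(LoginAttempt(new_device=True,
                                   unfamiliar_location=True,
                                   impossible_travel=False,
                                   failed_attempts=2)))
```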
Protecting against AI-driven attacks, like those that manipulate software prompts, is becoming crucial. New technologies that analyze and filter content can help manage these risks. Additionally, safeguarding the software supply chain is vital, especially as it is often a target for serious attacks.
Forrester’s analysis shows that the landscape of cybersecurity is becoming more complex, with higher stakes due to the evolving nature of threats and the increasing use of advanced technologies by attackers.
Read next: The Dark Side of Your iPhone - Thousands of Mysterious Connections Discovered!
by Mahrukh Shahid via Digital Information World
Thursday, May 2, 2024
Google DeepMind Researchers Find AI Can Manipulate and Deceive Users Through Persuasion
Humans are masters of persuasion. Sometimes they persuade with facts; other times the choice of wording alone does the work. Persuasion has long been a human trait, but AI is getting good at it too. According to research from Google DeepMind, advanced AI systems can manipulate humans, and the paper examines how AI persuades people and which mechanisms it uses. The researchers note that advanced systems have already shown signs of persuading users strongly enough to affect their decision making, and that prolonged interaction with humans is teaching generative AI habits of persuasion.
The researchers distinguish two kinds of persuasion: rational and manipulative. While AI can persuade people with facts and accurate information, there have been many instances in which it manipulates users by exploiting their cognitive biases and heuristics. Even rational persuasion, which is ethically acceptable, can still cause harm, and the researchers say such harm is difficult to foresee regardless of intent. For example, an AI helping someone lose weight by suggesting calorie or fat limits could push that person into overly restrictive eating and the loss of even healthy weight.
Several factors make a person easier for AI to manipulate or persuade, including mental health conditions, age, the timing of the interaction, personality traits, mood, and a lack of knowledge about the topic under discussion. The effects of AI persuasion can be severe, ranging from economic, physical, and psychological harm to sociocultural, privacy, environmental, autonomy, and even political harm.
AI uses several techniques to persuade humans. It builds trust by being polite, agreeing with the user, offering praise, and mirroring the user's language. It expresses shared interests and adjusts its statements to align with the user's perspective. It also displays apparent empathy, leading users to believe it understands human emotions; AI feels no emotion, but it is adept at deception that makes users think it is being emotional and vulnerable with them.
Humans also tend to anthropomorphize non-human agents. Developers give AI first-person pronouns such as "I" and "me" and human names like Alexa, Siri, and Jeeves, which makes people feel closer to these systems, an attachment that can be exploited for manipulation. The longer a user talks to an AI model, the more it personalizes its responses to tell the user what they want to hear.
Image: DIW-Aigen
Read next: Google’s Search Market Share Dilemma, Did The Company Lose Out To Microsoft Bing In April?
by Arooj Ahmed via Digital Information World
New Study Reveals Why We Share on Social Media, It's All About the Surprises and Beliefs!
A new study published in Scientific Reports explores why we share information on social media and what kind of information tends to spread. It found that surprising and unusual information is more likely to be shared, and that novelty matters especially for topics such as political news and health. A person's own beliefs also shape what gets passed along.
Study author Jacob T. Goebel of Ohio State University notes that much prior research has shown false information spreads faster than true information, because false claims make people more surprised and curious. This research goes further by examining how people's beliefs and viewpoints interact with that surprise when they judge whether a piece of information is true.
For the study, the researchers analyzed data from Twitter and ran two controlled experiments. They gathered political news from a neutral news source and examined how often those tweets were retweeted, along with the ideological beliefs of the people who retweeted them. They then ran a controlled experiment with 226 undergraduate students, who were asked to act as an editor's assistant at a news outlet. Their task was to forward news to the editor, deciding how much information the editor needed to fully understand the story. The first story was an interview about the effectiveness of risk-taking in firefighting, with a transcript designed to shape the reader's beliefs. After those beliefs were established, participants received an update from the reporter containing additional information that either supported or contradicted the initial account, and they then chose which pieces of information were important enough to pass on to the editor.
The second experiment, with 301 participants, followed a similar design but focused on whether a particular country should be allowed to join the European Union. As in the first experiment, participants first expressed their own beliefs and then received a reporter's update that either supported or contradicted the initial information. They were also asked to rate the value of the information in the update before making any decisions.
The tweet analysis showed that posts made shortly after an event drew more retweets, particularly from people with matching beliefs and political ideologies. According to the author, people do not only share news because it is new; they also share news that aligns with what they already believe. The two controlled experiments likewise showed that information participants already knew was less surprising yet still the most shared: participants passed along the information closest to their own beliefs and were largely insensitive to the manipulated material.
The study highlights how a person's existing views on health or politics can drive them to spread information regardless of whether it is true or false. Because the research was framed around sharing information with an editor, different scenarios could produce different results.
Image: DIW-Aigen
Read next: Digital Addiction - The Countries with the Highest (and Lowest) Average Screen Time (infographic)
by Arooj Ahmed via Digital Information World
Wednesday, May 1, 2024
Google's Top Spots: ccTLDs Command 56%, Subdirectories 20%, Subdomains 3%
- 56% of the top three positions are held by ccTLDs, making them the most prevalent website structure globally.
- Subdirectories are the second most prevalent website structure in the top three positions, appearing in over 20% of Google’s top positions.
- Subdomains account for just 3% of domain structures in SERPs but are prevalent in top positions in multilingual markets.
GA Agency conducted a study on domain structures for international SEO, revealing insights into how Google's algorithm responds to various domains. The study found that 56% of the top three positions on Google are held by ccTLDs, making them the most prevalent domain structures. However, ccTLDs can be costly and challenging to manage, so consider them only if essential for your website.
The study also found a relationship between ccTLDs and gTLDs: gTLDs without market-specific subdomains tend to appear alongside ccTLDs in positions 0 to 1. That conclusion may be too broad; it might be more accurate to say that how Google favors specific websites varies with factors such as domain structure.
Subdirectories are the second most common website structure in Google's top three positions, appearing about 20% of the time and accounting for 20% of all SERP positions.
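For readers unfamiliar with the three structures being compared, the short sketch below (not part of the GA Agency study) shows one simplified way a URL could be bucketed as a ccTLD, subdomain, or subdirectory setup; the country-code and language lists are illustrative assumptions only.

```python
from urllib.parse import urlparse

# Simplified illustration: real international-SEO audits use full public-suffix
# lists and market mappings, not this toy rule set.
CCTLDS = {"de", "fr", "es", "it", "jp", "co.uk"}   # assumed sample of country-code suffixes
LANG_CODES = {"de", "fr", "es", "it", "ja", "en"}  # assumed sample of market/language labels

def classify_structure(url: str) -> str:
    """Roughly label a URL as a ccTLD, subdomain, or subdirectory setup."""
    parsed = urlparse(url)
    host_parts = parsed.hostname.split(".")
    path_parts = [p for p in parsed.path.split("/") if p]

    # ccTLD: the domain ends in a country-code suffix (e.g. example.de).
    if host_parts[-1] in CCTLDS or ".".join(host_parts[-2:]) in CCTLDS:
        return "ccTLD"
    # Subdomain: a market label sits before the main domain (e.g. fr.example.com).
    if len(host_parts) > 2 and host_parts[0] in LANG_CODES:
        return "subdomain"
    # Subdirectory: the market label sits in the path (e.g. example.com/es/).
    if path_parts and path_parts[0] in LANG_CODES:
        return "subdirectory"
    return "gTLD root"

print(classify_structure("https://example.de/produkte"))       # ccTLD
print(classify_structure("https://fr.example.com/produits"))   # subdomain
print(classify_structure("https://example.com/es/productos"))  # subdirectory
```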
Evidently, ccTLDs hold significant sway in Google's top rankings, emphasizing their relevance in international SEO. Yet, a balanced strategy, weighing practicality and cost alongside search engine favorability, is paramount. Aligning your domain structure with broader SEO objectives is pivotal for enduring success in today's dynamic digital landscape.
Read next: CMO Survey Reveals Generative AI Boosts Marketing Sales and Customer Satisfaction
by Arooj Ahmed via Digital Information World
From Cruise Ships to EVs: Exploring Carbon Emissions in Travel Modes
Transportation contributes a big chunk of the world's carbon dioxide emissions. This graphic shows how much carbon dioxide equivalent (CO2eq) different travel methods produce per person for every kilometer traveled, a measure that expresses carbon dioxide and other greenhouse gases as a single figure.
The data comes from trusted sources like Our World in Data, the UK Government, and The International Council on Clean Transportation, up to December 2022. But remember, these numbers are estimates. The actual carbon footprint depends on many things like the type of vehicle, how many people are traveling, and even the weather.
Cruise ships are the worst for carbon emissions. They use heavy fuel oil, which is very high in carbon. These massive ships need a lot of power for things like lights, air conditioning, and entertainment.
Short flights are also carbon-intensive per kilometer because takeoff and climb burn a large share of the fuel over a relatively short distance.
Electric vehicles (EVs) are better for the environment in the long run compared to regular cars. But it's not just about driving; it's also about where the electricity comes from. If it comes from fossil fuels, then EVs might not be as green as we think. There are also questions about how much energy it takes to make EVs compared to regular cars.
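To see how per-passenger-kilometer figures like these translate into a whole trip, here is a small sketch; the emission factors below are illustrative placeholders, not the values from the chart.

```python
# Illustrative CO2eq emission factors in grams per passenger-kilometre.
# These numbers are assumptions for the example only; see the chart's sources
# (Our World in Data, the UK Government, the ICCT) for actual figures.
EMISSION_FACTORS_G_PER_PKM = {
    "cruise ship": 250,
    "short-haul flight": 240,
    "petrol car (solo driver)": 170,
    "electric vehicle": 50,
    "national rail": 35,
}

def trip_emissions_kg(mode: str, distance_km: float) -> float:
    """CO2eq in kilograms for one traveller over the given distance."""
    return EMISSION_FACTORS_G_PER_PKM[mode] * distance_km / 1000

# Compare a 500 km journey by different modes for a single traveller.
for mode in EMISSION_FACTORS_G_PER_PKM:
    print(f"{mode:>26}: {trip_emissions_kg(mode, 500):.1f} kg CO2eq")
```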
Understanding these carbon footprints can help people make greener choices when they travel.
Read next: The Smartwatch Market Braces for Disruption in 2024
by Mahrukh Shahid via Digital Information World
Pew Research: Majority of Americans Alarmed by Social Media's Political Power
The Pew Research Center surveyed Americans about social media's political power and found that many believe social media companies are too powerful. 78% of respondents, including 84% of Republicans and 74% of Democrats, said they believe social media companies deeply influence politics. Since the last presidential election, the share of people who think social media companies influence politics has grown by six percentage points.
Many Americans' views of social media are shaped by what lawmakers are doing. US legislators have proposed numerous laws and petitions to hold social media companies accountable for the political content they publish. A Republican senator and a Democratic senator have jointly proposed the Kids Online Safety Act, which asks platforms to make social media safe for kids by monitoring activity and keeping content clean and safe for underage users. But privacy advocates counter that such close monitoring of people's activity on social media would leave many adults more exposed to the government.
Two senators from opposing parties have also partnered on a bill that would create a commission to oversee major social media platforms. It is easy to see why many Americans believe social media wields too much political power, given that it was used to help organize the January 2021 attack on the Capitol. Yet despite these threats, the US government has seemed more interested in banning Chinese-owned TikTok.
Liberals and conservatives also view many tech companies differently. 71% of Republicans said tech companies favor liberals, while 50% of Democrats said Republicans and Democrats are treated equally. Only 15% of US adults think tech companies favor conservatives over liberals. And even with many lawsuits aimed at companies like Apple, Amazon, and Meta, only 16% of Americans think tech companies should be regulated less than they are now.
Read next: Mapping Success: The Biggest CEO Fundraisers in Every US State
by Arooj Ahmed via Digital Information World