Thursday, April 27, 2017

5 Technical SEO Traps to Dodge


This article is part of an SEO series from WooRank. Thank you for supporting the partners who make SitePoint possible.

Search marketing can bring lots of benefits to your business and provide great marketing ROI. If you do it right, organic search provides you with a high-quality source of qualified traffic. However, SEO is much more than keywords and links. There are lots of technical aspects to SEO that, if you’re not careful, can trip you up and keep your site from performing as well as it could.

Here are five of the most common or trickiest to diagnose technical SEO mistakes you should avoid on your website.

Overzealous Robots.txt Files

Your robots.txt file is an important tool for your website’s SEO, and an integral part of making sure your website is properly crawled and indexed by Google. As we’ve explained in the past, there are all sorts of reasons you wouldn’t want a page or folder to get indexed by search engines. However, errors in robots.txt files are among the main culprits behind SEO problems.

Disallowing an entire server is a common technique for mitigating duplicate content issues during a website migration, but if that rule is left in place it will keep the whole site out of the index. So, if you’re seeing a migrated site failing to get traffic, check your robots.txt file right away. If it looks like this:

User-agent: *
Disallow: /

You’ve got an overzealous file that’s preventing all crawlers from accessing your site.

Fix this by getting more specific with the directives you issue in your file. Limit your disallow lines to individual pages, folders or file types, like so:

User-agent: *
Disallow: /folder/copypage1.html
Disallow: /folder/duplicatepages/
Disallow: *.ppt$

Of course, if you created the original robots.txt file as part of your website migration, wait until you’re done before you start allowing bots to crawl your site.
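
Once the migration is finished, letting crawlers back in is as simple as clearing that rule. A minimal post-migration robots.txt might look like this, where the empty Disallow value permits crawling of everything:

User-agent: *
Disallow: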

Inadvertent NoIndex Tags

The meta robots tag goes hand-in-hand with the robots.txt file. In fact, it can be wise to double up by using the meta robots tag on a page you’ve disallowed via robots.txt. The reason is that robots.txt only stops compliant crawlers from fetching a page; it doesn’t stop search engines from indexing that page if they find a link to it on another site.

So pages you’ve blocked could still wind up in the index.

The solution to this is to add the meta robots noindex tag (also known just as the noindex tag) to pages you really, really don’t want indexed. It’s a simple tag that goes in a page’s <head>:

<meta name="robots" content="noindex">

Again, there are plenty of times you’d want to use the meta robots tag to prevent a page from getting indexed. However, if your pages aren’t showing up in search results (you can check this using the site: search operator in Google), this should be one of the first things you check.
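
For example, a site: query restricted to your domain (example.com here is just a placeholder) returns only the pages Google has indexed, and you can narrow it to a folder to spot-check one section of the site:

site:example.com
site:example.com/blog/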

If you’ve used the site: search operator to check the number of pages you have indexed and it’s way below the number of pages you actually have, it’s time to crawl your site. Use WooRank’s Site Crawl feature to crawl your pages. Then, click on Indexing. Pages that are disallowed via the meta robots tag will be listed here.

WooRank Site Crawl Indexing report showing URLs restricted by meta robots

No problems with meta robots here. Nice.

Unoptimized Redirects

No matter how much you try to avoid using them, 301 redirects are sometimes necessary. 301s (and now 302s) enable you to move pages to new locations and still maintain link juice, authority and ranking power for those pages. However, 301s can only help your site’s SEO if you use them correctly. Implementing them incorrectly, or setting them and then forgetting about them completely, will degrade both your user experience and your search optimization.
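
How you set up a 301 depends on your server. As a rough sketch, on an Apache server a single-page redirect can be declared in the site’s .htaccess file (the paths and domain below are placeholders):

Redirect 301 /old-page.html https://www.example.com/new-page.html

The syntax differs on nginx, IIS or a CMS redirect plugin, but the goal is the same: send visitors and crawlers straight to the page’s new permanent location.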

Continue reading 5 Technical SEO Traps to Dodge on SitePoint.


by Stephen Tasker via SitePoint
