Wednesday, September 28, 2016

Pagination and SEO: Best Practices & Common Issues


This article is part of an SEO series from WooRank. Thank you for supporting the partners who make SitePoint possible.

If you’ve got a large site, or even a smaller site with some longform content, you’ve probably had to deal with pagination at some point. There are several reasons you’d want to paginate your content: you’ve got a series of related articles, a page with lots of search results, an ecommerce site with a long list of products in a category, or a discussion forum housing several comment threads. In this article we’ll go over some best practices for implementing pagination for SEO, some common problems that arise from pagination, and how to resolve them.

Pagination Best Practices

When to Paginate

Human usability is the main reason you would want to paginate your content. When you get beyond a few pages’ worth of links, it starts to get daunting for users. Plus, these pages can take a long time to load, which is a big no-no for both user experience and SEO. The other main reason to paginate content is to limit the number of links on one page. It’s generally considered best practice to keep the number of links on a page to 100 or fewer (even though Google dropped this guideline a few years ago), although that’s far from a hard limit. Some reasons to keep the number of outbound links below that number include:

  • Having a large number of links on a page hurts robots’ crawl efficiency, and could result in more valuable pages being passed over in favor of less important ones. You can guard against this using sitemaps (a minimal example follows this list), but those aren’t ironclad.
  • Link juice is divided evenly among all the links on a page, so the more links you have, the less value each one passes. Since you can no longer use "nofollow" to direct the flow of link juice, limiting the number of links on a page is the way to maximize the value each one passes.
  • Pages with lots of links generally have pretty thin content and aren’t the most legitimate-looking pages on the web. That doesn’t mean your pages are spam, or that search engines will automatically come to that conclusion, but paginating long lists of links helps show that your page is legitimate.
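For reference, a minimal XML sitemap listing a paginated series might look like the following. The URLs are placeholders for illustration, not from the article:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/category/page1</loc>
  </url>
  <url>
    <loc>http://example.com/category/page2</loc>
  </url>
</urlset>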

View More/Less Results & View All Page

Giving users the ability to view more or fewer rows per page can be a big win for usability, but it can cause headaches for SEOs. If you aren’t careful, you’ll end up with duplicate content getting crawled by search engines because each view more/less option lives at a different URL. Instead, use JavaScript to update the page in place, displaying the new number of rows at the same URL, as in the sketch below.
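Here’s a rough sketch of that approach. The endpoint, markup and response shape are all hypothetical, just to illustrate re-rendering the rows without changing the URL:

<select id="rows-per-page">
  <option value="10">10</option>
  <option value="25">25</option>
  <option value="50">50</option>
</select>
<ul id="results"></ul>
<script>
document.getElementById('rows-per-page').addEventListener('change', function () {
  // Fetch the requested number of rows from a hypothetical JSON endpoint.
  // The page URL never changes, so no duplicate URLs are created for crawlers.
  fetch('/api/results?rows=' + this.value)
    .then(function (response) { return response.json(); })
    .then(function (rows) {
      document.getElementById('results').innerHTML = rows
        .map(function (row) { return '<li>' + row.title + '</li>'; })
        .join('');
    });
});
</script>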

You can also avoid pagination duplicate content altogether by using the "noindex" robots meta tag together with a View All page. Add the noindex tag to your paginated pages’ <head> like this:

<meta name="robots” content=”noindex”/>

Then add the robots meta tag to your View All page, with "index" as the content value. This technically isn’t necessary, since indexing is the default unless otherwise indicated, but giving crawlers a push in the right direction doesn’t hurt. Finally, add the View All page to your sitemap, then test and submit the sitemap via Google Search Console’s sitemap tool.
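On the View All page, that tag looks like this:

<meta name="robots" content="index"/>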

Infinite Scroll

Using infinite scroll is a popular way of dealing with paginated content, particularly for long articles and social media feeds. It’s also really useful for mobile pages and very user friendly, but it’s not so friendly for search engines. However, there are some ways you can offset that unfriendliness so you can use infinite scroll without hurting your SEO too much.

First, if your paginated content uses separate URLs, include them in your sitemap. This will ensure that search engines find, crawl and index your content, including the pages loaded via infinite scroll that they otherwise couldn’t access.

Second, use HTML5’s History API to change the URL as new pages are loaded. When a user reaches the bottom of page two, the page two URL switches to the page three URL as that content loads. The same is true in the opposite direction: scrolling up to page two content switches the URL back.
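A minimal sketch of the History API approach might look like the following. The markup convention (one <section data-page="N"> per loaded chunk) and the URL pattern are assumptions for illustration:

<script>
// Watch each loaded chunk of the series; when one scrolls into view,
// swap the address bar URL to that chunk's page URL.
var observer = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      var page = entry.target.getAttribute('data-page');
      // replaceState updates the URL without reloading the page
      // or adding an extra entry to the browser history.
      history.replaceState(null, '', '/page' + page);
    }
  });
}, { threshold: 0.5 });

document.querySelectorAll('section[data-page]').forEach(function (section) {
  observer.observe(section);
});
</script>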

Use rel="prev”/”next” to Avoid Duplicate Title Tags & Meta Descriptions

When properly implemented, pagination generally doesn’t result in duplicate page content (unless you’ve decided to use view more/less and/or View All features). However, it can cause problems with two very important SEO factors: title tags and meta descriptions. You can find all of your duplicate title tags and meta descriptions in Google Search Console, in the HTML Improvements section under Search Appearance.


The rel="prev”/”next” tags are implemented in the </head><head> of the page, and are used to indicate the preceding and succeeding pages in the pagination chain. So, for example, the second page in the chain, example.com/page2, would have the tags implemented like this:

<link rel="prev” href=”http://ift.tt/2cWRddj;

<link rel="next” href=”http://ift.tt/2dlx6cv;

These tags tell search engines that the pages at the indicated URLs are all linked together, so their shared title tags and meta descriptions won’t be treated as duplicates. Search engines will also consolidate indexing properties like link juice across the series, and possibly send visitors from search engines to the first page in the series.
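Note that the chain has endpoints: the first page in a series only needs a rel="next" tag, and the last page only a rel="prev" tag, since nothing comes before or after them. Sticking with the example URLs above, page one would carry just:

<link rel="next" href="http://example.com/page2"/>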

If your paginated series uses URL parameters, for sorting or filtering for example, you can also use the canonical tag to point each parameterized URL at its clean equivalent. Page two of the example series would then look like this:

<link rel="canonical” href=”http://ift.tt/2cWRvAX;

<link rel="prev” href=”http://ift.tt/2cWRddj;

<link rel="next” href=”http://ift.tt/2dly5Jx;


Common Problems With Pagination

If you’re using rel="prev"/"next" annotations, implementing pagination is pretty straightforward. However, you can still take a few wrong turns that will impact how search engines crawl and access your content.

Continue reading Pagination and SEO: Best Practices & Common Issues on SitePoint.


by Greg Snow-Wasserman via SitePoint
