This article is part of an SEO series from WooRank. Thank you for supporting the partners who make SitePoint possible.
What is SEO?
Search engine optimization is the collection of strategies, tactics and techniques used to rank highly in search engine results pages (SERPs) in order to increase the amount of traffic to a website. That’s the traditional answer you’ll find in featured snippets when you Google "what is SEO?" And it’s not wrong — it’s just a little incomplete.
The better, more accurate definition would be "the strategies, tactics and techniques used to rank highly in search engine results for the keywords used by your target audience in order to increase your conversions and reach."
Think of it this way: You own a pizza restaurant in your neighborhood, so you go about optimizing your site for pizza-related keywords. You do a really good job and now you rank in the top ten Google results for the keyword "pizza". The problem is, you’re a local shop and your site isn’t optimized for local search, so people looking for recipes, or the history of pizza, or the nutritional information of pizza are finding you, but maybe not people looking to order pizza in your neighborhood. This is a disaster because your customers aren’t finding you when they are most likely to convert into a sale.
Making sure the right people find you at the right time is what SEO is really all about.
Do I Really Need to Do SEO?
Yes.
Since you’re reading about how to get started on SEO, you’ve probably already realized that you need it. But, if you’re still on the fence, here are some numbers that should convince you:
So search optimization is really important for getting your audience onto your site. But there's another really important reason for you to be doing SEO: your competitors are doing it too. That means not only is our poor pizza restaurant missing out on current sales, but its better-optimized competitors are also forming relationships with customers, improving brand recognition and winning repeat sales.
How Do You Do SEO?
Get Your Site Indexed by Google
Since the goal of SEO is to rank highly in search results, your first step is to get your site crawled and indexed by Google. You can submit your URL to Google directly, which doesn't require you to have an account. If you do have a Google Search Console account, use the Fetch as Google tool in the Crawl section. When Googlebot successfully fetches your page, click the "Submit to index" button.
You can have Google index just the page you submitted by checking the box for URL, or have it index your whole site (assuming all of your pages can be reached by following your internal links), beginning with the submitted URL, by checking "URL and all linked pages".
You can submit your URL to Bing, which requires you to have a Bing Webmaster Tools account.
The next best way to get your site crawled by search engines is to get links to your site in as many (reputable) places as you can. Put a link to your website in the About section of your social media pages, particularly your Twitter profile. Make sure to link your website with your Google+ profile and set up a Google My Business account to link to your website. Not only will this increase your chance of getting your site crawled, it will help your chances of appearing in the Google Answer Box and optimize your knowledge graph rich snippet. If you’ve got a YouTube account for your business, add a link to your channel’s About page and your video descriptions.
The vast majority of these links will be nofollow, so they won't actually help your ranking via improved link juice, but that's not the point here. Crawlers still follow those links and will index the pages they land on.
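For reference, a nofollow link is just a normal HTML anchor with a rel="nofollow" attribute (the URL and anchor text here are placeholders):

<a href="http://www.example.com/" rel="nofollow">My Pizza Shop</a>

The attribute tells search engines not to count the link as an endorsement when calculating rankings, but crawlers can still use it to discover the page.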
Finally, consider adding a blog to your site. People typically think of blogs as tools for content marketing and on-page SEO, but they can also provide a steady stream of fresh content. Sites with blogs have an average of 434% more indexed pages than those without.
Find a more detailed look at getting crawled and indexed by Google here.
Keyword Research
Despite rumors to the contrary, keywords are still very much relevant to SEO and picking the right keywords to optimize your site around is a core component of a successful SEO strategy. The process by which you find those keywords to target is called keyword research. Here are the basic steps to keyword research:
First, discover how people are currently finding you via search engines using Google Search Console, and which keywords are driving your best-converting users via Google Analytics. If your site is brand new, or doesn't get much organic traffic, get ideas from your products and categories, or by answering the questions "What is my website about?" and "What does my business do?" Find new keyword possibilities using a tool such as Google's AdWords Keyword Planner or Bing's keyword research tool, among several other options.
Once you have a nice, long list of keywords you want to target, narrow it down to those that have enough search volume to make them worth the effort. Google now prevents accounts that don't reach a certain, unspecified spend threshold from accessing estimated search volume data, giving them only a range of monthly searches. However, you can still access keyword volumes using WooRank's SERP Checker.
If you don't have a WooRank Pro or Premium account, you can use Bing's keyword research tool to find search volumes. However, since Bing accounts for less than 10 percent of the search market, this data will only show you the tip of the iceberg. Both Bing's and Google's keyword tools use PPC data, so the numbers won't be 100% accurate for organic search, but they're close enough to draw reliable conclusions.
When finalizing your keyword strategy, make sure your portfolio has a nice mix of head and long tail keywords, and don't go overboard with either type. Head keywords will bring you lots of traffic, but that traffic won't convert very well right away, and there's a good chance you won't rank well for them unless you run a rather big and well-established website. Long tail keywords, on the other hand, can convert like crazy, but won't bring in enough users on their own to be viable.
Learn more about forming a keyword strategy and doing keyword research here.
Technical SEO
Keywords are a core part of SEO, but there’s more to it than that. You also need to build your site with search engines in mind. Here are the basic technical elements your site needs to improve its search engine optimization.
Robots.txt
A robots.txt file is a simple text file in your website's root directory. It tells search engine bots which pages can and cannot be crawled. It's used mostly to keep search engines from indexing pages you don't want to show up in search results, like temporary folders or your legacy site after a redesign or migration. You can block all user-agents, none, or individual bots. A very basic robots.txt file that blocks all user-agents looks like this:
User-agent: *
Disallow: /
Allowing all robots to crawl your whole site looks like this:
User-agent: *
Disallow:
You can disallow specific user-agents from accessing specific folders, subfolders or pages by including them as disallow lines under the relevant user-agent line. Some search engines will recognize the ‘allow’ parameter so you can give access to specific files in disallowed folders.
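For example, a hypothetical robots.txt that blocks every bot from a temporary folder, blocks Googlebot from a legacy section, but still allows one file inside that section could look like this (all folder and file names here are made up for illustration):

User-agent: *
Disallow: /temp/

User-agent: Googlebot
Disallow: /old-site/
Allow: /old-site/menu.html

Keep in mind that Google and Bing recognize the Allow directive, but not every crawler does.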
Be very careful with your robots.txt file. Accidentally disallowing all bots, or blocking certain user-agents from the entire server, is a relatively common and easy-to-make mistake that can cause huge headaches for SEOs. For an in-depth look at how to use robots.txt, check out our guide here.
XML Sitemaps
Sitemaps are XML files that list every URL on a website and give a few basic details about each page. A simple sitemap for a website with just one page could look like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2016-08-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.9</priority>
    <xhtml:link rel="alternate" hreflang="fr" href="http://www.example.com/fr/" />
  </url>
</urlset>
Here’s a rundown of what that all means:
<urlset> is the current protocol for opening and closing sitemaps. It tells crawlers that the sitemap is starting.
<loc> indicates the URL of the page. It's required for every page in the sitemap, while the rest of the parameters are technically optional. Always write your URLs uniformly and use the canonical version: same protocol, www resolve, etc.
<lastmod> is the date your page was last updated or modified.
<changefreq> tells search engines how often you update your page, which can encourage them to recrawl your site when you update pages. Don't lie, though: if search engines see that the value in <changefreq> doesn't match up with actual changes, they'll just ignore it.
<priority> tells search engines how important that URL is compared to the other URLs on the site.
<xhtml:link> lets you list alternate versions of the URL, for example if you have a multilingual or international site.
Your site will work and can get indexed without a sitemap, and sitemaps aren't a ranking signal. But having one makes the whole process easier and faster. Plus, the more information you give about your pages, the more intelligently search engines can crawl your site, meaning bots are less likely to waste their crawl budget looking at unimportant pages. Sitemaps are especially important when you're adding new pages or launching a new site that doesn't have many links, or any links at all. For an in-depth look at XML sitemaps, check out this guide here.
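Once your sitemap is live, submit it through Google Search Console and Bing Webmaster Tools so crawlers find it right away. You can also point all crawlers to it from your robots.txt file with a single line (the URL below is a placeholder for your own sitemap's address):

Sitemap: http://www.example.com/sitemap.xml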
Canonical URLs
Canonical URLs help websites prevent issues caused by duplicate content. They tell search engines where to find the original version of content that appears at multiple URLs, showing them which one to list in search results and consolidating link juice at a single URL. There are all sorts of legitimate reasons you could end up with duplicate content: content management systems, e-commerce product platforms and syndicated content, for example. Search engines will see the rel="canonical" tag, know the content is a copy of the canonical URL, and pass ranking information on to the original page.
Rel="canonical” is implemented in the <head>
of HTML pages and the HTTP header for non-HTML pages:
- HTML:
<link rel="canonical” href=”http://ift.tt/2fI8pbO;
- HTTP:
Links: <http://ift.tt/2eikNds;; rel="canonical”
When you choose a canonical URL, pick the one that's best optimized for users and search engines, with well-optimized content. Also make sure you've set a preferred domain in Google Search Console.
Setting a preferred domain works as a www resolve: Google will take it into account when it encounters links to your site out in the wild, passing link juice, trust and authority to your preferred domain (with or without www), even when someone else links to the other version.
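If you want to enforce your preferred domain at the server level as well, you can use a 301 redirect. Here's a minimal sketch for Apache's .htaccess, assuming www.example.com stands in for your preferred domain (adjust the domain and protocol to match your site):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

The R=301 flag makes the redirect permanent, which tells search engines to transfer link juice to the www version.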
Learn more about rel="canonical” and dealing with duplicate content here.
URLs
Writing URLs for both human and search engine usability is important — both have an impact on SEO. Use URLs to create a clear hierarchy of information so users always know where they are. Always use the canonical version of your URL (www resolve, https, etc.) and include folders, subfolders and the page, in that order.
URL structure is important to search engines because it helps them understand how that page relates to the rest of the site. Ideally the URL will be similar to your title tag, so it should include your keyword early on. Search engines look for keywords in URLs to determine the topic and relevance of a page.
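For example, a URL like this one (a made-up address for our pizza shop) makes both the hierarchy and the topic of the page obvious:

http://www.example.com/menu/pizza/margherita

Compare that to something like http://www.example.com/?page_id=8293, which tells users and search engines nothing about the page.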
Optimizing URLs will also help your backlink profile as people are more likely to use relevant anchor text for URLs that are well-structured and include keywords. This can help you to rank for these keywords. For a detailed look at optimizing URLs, check out this guide.
HTML Headers and Subheads
<h1> to <h6> tags, also known as headers and subheads, denote the headlines and subheadings on your page. These tags, especially the <h1> tag, are very important for SEO. Search engines use HTML headers to establish the structure of your page and the topics it covers.
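For instance, our hypothetical pizza shop's homepage might structure its headers like this, with a single <h1> describing the whole page and subheads breaking the content into sections:

<h1>Order Pizza in Your Neighborhood</h1>
<h2>Our Classic Pizzas</h2>
<h3>Margherita</h3>
<h3>Pepperoni</h3>
<h2>Delivery Areas</h2>

One <h1> per page, with subheads nested in order, makes the page's structure obvious to both users and crawlers.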
by Greg Snow-Wasserman via SitePoint