The Beginner’s Guide to SEO

INTRODUCTION


Welcome to your SEO learning journey!

You’ll get the most out of this guide if your desire to learn search engine optimization (SEO) is exceeded only by your willingness to execute and test concepts.

This guide is designed to describe all major aspects of SEO, from finding the terms and phrases (keywords) that can generate qualified traffic to your website, to making your site friendly to search engines, to building links and marketing the unique value of your site.

The world of search engine optimization is complex and ever-changing, but you can easily understand the basics, and even a small amount of SEO knowledge can make a big difference. Free SEO education is also widely available on the web, including in guides like this! (Woohoo!)

Combine this information with some practice and you are well on your way to becoming a savvy SEO.

The basics of search engine optimization

Ever heard of Maslow’s hierarchy of needs? It’s a theory of psychology that prioritizes the most fundamental human needs (like air, water, and physical safety) over more advanced needs (like esteem and social belonging). The theory is that you can’t achieve the needs at the top without ensuring the more fundamental needs are met first. Love doesn’t matter if you don’t have food.

Our founder, Rand Fishkin, made a similar pyramid to explain the way folks should go about SEO, and we’ve affectionately dubbed it “Mozlow’s hierarchy of SEO needs.”

Here’s what it looks like:

Try Moz Pro, free!

Strong data and smart analytics are must-haves when it comes to SEO work. Try Moz Pro free for 30 days and see why so many marketers trust our SEO tools!

Guide to SEO Basics

As you can see, the foundation of good SEO begins with ensuring crawl accessibility, and moves up from there.

Using this beginner’s guide, we can follow these seven steps to successful SEO:

  1. Crawl accessibility so engines can read your website
  2. Compelling content that answers the searcher’s query
  3. Keyword optimization to attract searchers & engines
  4. Great user experience, including a fast load speed and compelling UX
  5. Share-worthy content that earns links, citations, and amplification
  6. Title, URL, & description to draw high CTR in the rankings
  7. Snippet/schema markup to stand out in SERPs

SEO 101

What is it, and why is it important?


Welcome! We’re excited that you’re here!

If you already have a solid understanding of SEO and why it’s important, you can skip to Chapter 2 (though we’d still recommend skimming the best practices from Google and Bing at the end of this chapter; they’re useful refreshers).

For everyone else, this chapter will help build your foundational SEO knowledge and confidence as you move forward.

What is SEO?

SEO stands for “search engine optimization.” It’s the practice of increasing both the quality and quantity of website traffic, as well as exposure to your brand, through non-paid (also known as “organic”) search engine results.

Despite the acronym, SEO is as much about people as it is about search engines themselves. It’s about understanding what people are searching for online, the answers they are seeking, the words they’re using, and the type of content they wish to consume. Knowing the answers to these questions will allow you to connect to the people who are searching online for the solutions you offer.

If knowing your audience’s intent is one side of the SEO coin, delivering it in a way search engine crawlers can find and understand is the other. In this guide, expect to learn how to do both.

What’s that word mean?

If you’re having trouble with any of the definitions in this chapter, be sure to open up our SEO glossary for reference!

Search engine basics

Search engines are answer machines. They scour billions of pieces of content and evaluate thousands of factors to determine which content is most likely to answer your query.

Search engines do all of this by discovering and cataloguing all available content on the Internet (web pages, PDFs, images, videos, etc.) via a process known as “crawling and indexing,” and then ordering it by how well it matches the query in a process we refer to as “ranking.” We’ll cover crawling, indexing, and ranking in more detail in the next chapter.

Which search results are “organic”?

As we said earlier, organic search results are the ones that are earned through effective SEO, not paid for (i.e. not advertising). These used to be easy to spot – the ads were clearly labeled as such and the remaining results typically took the form of “10 blue links” listed below them. But with the way search has changed, how can we spot organic results today?

Today, search engine results pages — often referred to as “SERPs” — are filled with both more advertising and more dynamic organic results formats (called “SERP features”) than we’ve ever seen before. Some examples of SERP features are featured snippets (or answer boxes), People Also Ask boxes, image carousels, etc. New SERP features continue to emerge, driven largely by what people are seeking.

For example, if you search for “Denver weather,” you’ll see a weather forecast for the city of Denver directly in the SERP instead of a link to a site that might have that forecast. And, if you search for “pizza Denver,” you’ll see a “local pack” result made up of Denver pizza places. Convenient, right?

It’s important to remember that search engines make money from advertising. Their goal is to better solve searchers’ queries (within SERPs), to keep searchers coming back, and to keep them on the SERPs longer.

Some SERP features on Google are organic and can be influenced by SEO. These include featured snippets (a promoted organic result that displays an answer inside a box) and related questions (a.k.a. “People Also Ask” boxes).

It’s worth noting that there are many other search features that, even though they aren’t paid advertising, can’t typically be influenced by SEO. These features often have data acquired from proprietary data sources, such as Wikipedia, WebMD, and IMDb.

Why SEO is important

While paid advertising, social media, and other online platforms can generate traffic to websites, the majority of online traffic is driven by search engines.

Organic search results cover more digital real estate, appear more credible to savvy searchers, and receive way more clicks than paid advertisements. For example, in the US, only ~2.8% of searches result in a click on a paid advertisement.

In a nutshell: SEO has ~20X more traffic opportunity than PPC on both mobile and desktop.

SEO is also one of the only online marketing channels that, when set up correctly, can continue to pay dividends over time. If you provide a solid piece of content that deserves to rank for the right keywords, your traffic can snowball over time, whereas advertising needs continuous funding to send traffic to your site.

Search engines are getting smarter, but they still need our help.

Optimizing your site will help deliver better information to search engines so that your content can be properly indexed and displayed within search results.

Should I hire an SEO professional, consultant, or agency?

Depending on your bandwidth, willingness to learn, and the complexity of your website(s), you could perform some basic SEO yourself. Or, you might discover that you would prefer the help of an expert. Either way is okay!

If you end up looking for expert help, it’s important to know that many agencies and consultants “provide SEO services,” but can vary widely in quality. Knowing how to choose a good SEO company can save you a lot of time and money, as the wrong SEO techniques can actually harm your site more than they will help.

We also maintain a recommended list of agency partners, all of whom use Moz SEO tools to power their work on your behalf!

White hat vs black hat SEO

“White hat SEO” refers to SEO techniques, best practices, and strategies that abide by search engine rules and focus primarily on providing more value to people.

“Black hat SEO” refers to techniques and strategies that attempt to spam/fool search engines. While black hat SEO can work, it puts websites at tremendous risk of being penalized and/or de-indexed (removed from search results) and has ethical implications.

Penalized websites have bankrupted businesses. It’s just another reason to be very careful when choosing an SEO expert or agency.

Search engines share similar goals with the SEO industry

Search engines want to help you succeed. In fact, Google even has a Search Engine Optimization Starter Guide, much like the Beginner’s Guide! They’re also quite supportive of efforts by the SEO community. Digital marketing conferences — such as Unbounce, MNsearch, SearchLove, and Moz’s own MozCon — regularly attract engineers and representatives from major search engines.

Google assists webmasters and SEOs through their Webmaster Central Help Forum and by hosting live office hour hangouts. (Bing, unfortunately, shut down their Webmaster Forums in 2014.)

While webmaster guidelines vary from search engine to search engine, the underlying principles stay the same: Don’t try to trick search engines. Instead, provide your visitors with a great online experience. To do that, follow search engine guidelines and fulfill user intent.

Google Webmaster Guidelines

Basic principles:

  • Make pages primarily for users, not search engines.
  • Don’t deceive your users.
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you’d feel comfortable explaining what you’ve done to a website to a Google employee. Another useful test is to ask, “Does this help my users? Would I do this if search engines didn’t exist?”
  • Think about what makes your website unique, valuable, or engaging.

Things to avoid:

  • Automatically generated content
  • Participating in link schemes
  • Creating pages with little or no original content (i.e. copied from somewhere else)
  • Cloaking — the practice of showing search engine crawlers different content than visitors.
  • Hidden text and links
  • Doorway pages — pages created to rank well for specific searches to funnel traffic to your website.

It’s good to be very familiar with Google’s Webmaster Guidelines. Make time to get to know them.

Bing Webmaster Guidelines

Basic principles:

  • Provide clear, deep, engaging, and easy-to-find content on your site.
  • Keep page titles clear and relevant.
  • Links are regarded as a signal of popularity and Bing rewards links that have grown organically.
  • Social influence and social shares are positive signals and can have an impact on how you rank organically in the long run.
  • Page speed is important, along with a positive, useful user experience.
  • Use alt attributes to describe images, so that Bing can better understand the content.

Things to avoid:

  • Thin content, pages showing mostly ads or affiliate links, or pages that otherwise redirect visitors away to other sites will not rank well.
  • Abusive link tactics that aim to inflate the number and nature of inbound links, such as buying links or participating in link schemes, can lead to de-indexing.
  • Messy URL structures: dynamic parameters can dirty up your URLs and cause duplicate content issues. Keep URLs clean, concise, descriptive, keyword-rich when possible, and free of non-letter characters.
  • Burying links in Javascript/Flash/Silverlight; keep content out of these as well.
  • Burying links in Javascript/Flash/Silverlight; keep content out of these as well.
  • Duplicate content
  • Keyword stuffing
  • Cloaking — the practice of showing search engine crawlers different content than visitors.

Guidelines for representing your local business on Google

If the business for which you perform SEO work operates locally, either out of a storefront or by traveling to customers’ locations to perform services, it qualifies for a Google My Business listing. For local businesses like these, Google has guidelines that govern what you should and shouldn’t do in creating and managing these listings.

Basic principles:

  • Be sure you’re eligible for inclusion in the Google My Business index; you must have a physical address, even if it’s your home address, and you must serve customers face-to-face, either at your location (like a retail store) or at theirs (like a plumber)
  • Honestly and accurately represent all aspects of your local business data, including its name, address, phone number, website address, business categories, hours of operation, and other features.

Things to avoid

  • Creation of Google My Business listings for entities that aren’t eligible
  • Misrepresentation of any of your core business information, including “stuffing” your business name with geographic or service keywords, or creating listings for fake addresses
  • Use of PO boxes or virtual offices instead of authentic street addresses
  • Abuse of the review portion of the Google My Business listing, via fake positive reviews of your business or fake negative ones of your competitors
  • Costly, novice mistakes stemming from failure to read the fine details of Google’s guidelines

Fulfilling user intent

Instead of violating these guidelines in an attempt to trick search engines into ranking you higher, focus on understanding and fulfilling user intent. When a person searches for something, they have a desired outcome. Whether it’s an answer, concert tickets, or a cat photo, that desired content is their “user intent.”

If a person performs a search for “bands,” is their intent to find musical bands, wedding bands, band saws, or something else?

Your job as an SEO is to quickly provide users with the content they desire in the format in which they desire it.

Common user intent types:

Informational: Searching for information. Example: “What is the best type of laptop for photography?”

Navigational: Searching for a specific website. Example: “Apple”

Transactional: Searching to buy something. Example: “good deals on MacBook Pros”

You can get a glimpse of user intent by Googling your desired keyword(s) and evaluating the current SERP. For example, if there’s a photo carousel, it’s very likely that people searching for that keyword search for photos.

Also evaluate what content your top-ranking competitors are providing that you currently aren’t. How can you provide 10X the value on your website?

Providing relevant, high-quality content on your website will help you rank higher in search results, and more importantly, it will establish credibility and trust with your online audience.

Before you do any of that, you have to first understand your website’s goals to execute a strategic SEO plan.

Know your website/client’s goals

Every website is different, so take the time to really understand a specific site’s business goals. This will not only help you determine which areas of SEO you should focus on, where to track conversions, and how to set benchmarks, but it will also help you create talking points for negotiating SEO projects with clients, bosses, etc.

What will your KPIs (Key Performance Indicators) be to measure the return on SEO investment? More simply, what is your barometer to measure the success of your organic search efforts? You’ll want to have it documented, even if it’s this simple:

For the website ____________, my primary SEO KPI is ____________.

Here are a few common KPIs to get you started:

  • Sales
  • Downloads
  • Email signups
  • Contact form submissions
  • Phone calls

And if your business has a local component, you’ll want to define KPIs for your Google My Business listings, as well. These might include:

  • Clicks-to-call
  • Clicks-to-website
  • Clicks-for-driving-directions

You may have noticed that things like “ranking” and “traffic” weren’t on the KPIs list, and that’s intentional.

“But wait a minute!” You say. “I came here to learn about SEO because I heard it could help me rank and get traffic, and you’re telling me those aren’t important goals?”

Not at all! You’ve heard correctly. SEO can help your website rank higher in search results and consequently drive more traffic to your website, it’s just that ranking and traffic are a means to an end. There’s little use in ranking if no one is clicking through to your site, and there’s little use in increasing your traffic if that traffic isn’t accomplishing a larger business objective.

For example, if you run a lead generation site, would you rather have:

  • 1,000 monthly visitors and 3 people fill out a contact form? Or…
  • 300 monthly visitors and 40 people fill out a contact form?

If you’re using SEO to drive traffic to your site for the purpose of conversions, we hope you’d pick the latter! Before embarking on SEO, make sure you’ve laid out your business goals, then use SEO to help you accomplish them — not the other way around.

SEO accomplishes so much more than vanity metrics. When done well, it helps real businesses achieve real goals for their success.

HOW SEARCH ENGINES WORK: CRAWLING, INDEXING, AND RANKING

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet’s content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It’s arguably the most important piece of the SEO puzzle: If your site can’t be found, there’s no way you’ll ever show up in the SERPs (Search Engine Results Pages).

How do search engines work?

Search engines work through three primary functions:

  1. Crawling: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Indexing: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Ranking: Provide the pieces of content that will best answer a searcher’s query, which means that results are ordered by most relevant to least relevant.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

What’s that word mean?

Having trouble with any of the definitions in this section? Our SEO glossary has chapter-specific definitions to help you stay up-to-speed.

Search engine robots, also called spiders, crawl from page to page to find new and updated content.

Googlebot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, the crawler is able to find new content and add it to its index, called Caffeine — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher’s query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

Crawling: Can search engines find your pages?

As you’ve just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don’t.

One way to check your indexed pages is “site:yourdomain.com”, an advanced search operator. Head to Google and type “site:yourdomain.com” into the search bar. This will return results Google has in its index for the site specified:

A screenshot of a site:moz.com search in Google, showing the number of results below the search box.

The number of results Google displays (see “About XX results” above) isn’t exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don’t currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google’s index, among other things.

If you’re not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn’t been crawled yet.
  • Your site isn’t linked to from any external websites.
  • Your site’s navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

Tell search engines how to crawl your site

If you used Google Search Console or the “site:domain.com” advanced search operator and found that some of your important pages are missing from the index and/or some of your unimportant pages have been mistakenly indexed, there are some optimizations you can implement to better direct Googlebot how you want your web content crawled. Telling search engines how to crawl your site can give you better control of what ends up in the index.

Most people think about making sure Google can find their important pages, but it’s easy to forget that there are likely pages you don’t want Googlebot to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

To direct Googlebot away from certain pages and sections of your site, use robots.txt.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn’t crawl, as well as the speed at which they crawl your site, via specific robots.txt directives.
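
As a rough sketch, here’s what a simple robots.txt file might look like; the paths, crawl-delay value, and sitemap URL below are hypothetical placeholders, not recommendations for any particular site:

# A hypothetical robots.txt, served at https://www.example.com/robots.txt
User-agent: *                 # these rules apply to all crawlers
Disallow: /staging/           # please don't crawl the staging/test section
Disallow: /promo-codes/       # or the special promo code pages
Crawl-delay: 10               # honored by some engines (e.g., Bing); Google ignores this directive
Sitemap: https://www.example.com/sitemap.xml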

How Googlebot treats robots.txt files

  • If Googlebot can’t find a robots.txt file for a site, it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site, it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot encounters an error while trying to access a site’s robots.txt file and can’t determine if one exists or not, it won’t crawl the site.

Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail address scrapers) build bots that don’t follow this protocol. In fact, some bad actors use robots.txt files to find where you’ve located your private content. Although it might seem logical to block crawlers from private pages such as login and administration pages so that they don’t show up in the index, placing the location of those URLs in a publicly accessible robots.txt file also means that people with malicious intent can more easily find them. It’s better to NoIndex these pages and gate them behind a login form rather than place them in your robots.txt file.

Defining URL parameters in GSC

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly:

https://www.example.com/products/women/dresses/green.htm
https://www.example.com/products/women?category=dresses&color=green
https://example.com/shopindex.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123$affid=43

How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages. If you use this feature to tell Googlebot “crawl no URLs with ____ parameter,” then you’re essentially asking to hide this content from Googlebot, which could result in the removal of those pages from search results. That’s what you want if those parameters create duplicate pages, but not ideal if you want those pages to be indexed.

Can crawlers find all your important content?

Now that you know some tactics for ensuring search engine crawlers stay away from your unimportant content, let’s learn about the optimizations that can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It’s important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

A boarded-up door, representing a site that can be crawled to but not crawled through.

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won’t see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there’s no guarantee they will be able to read and understand them just yet. It’s always best to add text within the <HTML> markup of your webpage.
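
For example, rather than embedding a headline inside an image, keep the words in the markup and describe the image with an alt attribute. The file name and copy below are hypothetical:

<!-- The heading and paragraph live in the HTML, so crawlers can read and index them -->
<img src="/images/chocolate-cake.jpg" alt="A slice of flourless chocolate cake on a white plate">
<h2>Our flourless chocolate cake</h2>
<p>Rich, fudgy, and naturally gluten-free.</p>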

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

A depiction of how pages that are linked to can be found by crawlers, whereas a page not linked to in your site navigation exists as an island, undiscoverable.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding Javascript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML.
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it’s essential that your website has a clear navigation and helpful URL folder structures.

Do you have clean information architecture?

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn’t have to think very hard to flow through your website or to find something.

Are you utilizing sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google’s standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.

If your site doesn’t have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console. There’s no guarantee they’ll include a submitted URL in their index, but it’s worth a try!
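
For reference, an XML sitemap is just a structured list of URLs in a format crawlers understand. A minimal sketch (the URLs and dates are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/puppies/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>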

Are crawlers getting errors when they try to access your URLs?

In the process of crawling the URLs on your site, a crawler may encounter errors. You can go to Google Search Console’s “Crawl Errors” report to detect URLs on which this might be happening – this report will show you server errors and not found errors. Server log files can also show you this, as well as a treasure trove of other information such as crawl frequency, but because accessing and dissecting server log files is a more advanced tactic, we won’t discuss it at length in the Beginner’s Guide, although you can learn more about it here.

Before you can do anything meaningful with the crawl error report, it’s important to understand server errors and “not found” errors.

4xx Codes: When search engine crawlers can’t access your content due to a client error

4xx errors are client errors, meaning the requested URL contains bad syntax or cannot be fulfilled. One of the most common 4xx errors is the “404 – not found” error. These might occur because of a URL typo, deleted page, or broken redirect, just to name a few examples. When search engines hit a 404, they can’t access the URL. When users hit a 404, they can get frustrated and leave.

5xx Codes: When search engine crawlers can’t access your content due to a server error

5xx errors are server errors, meaning the server the web page is located on failed to fulfill the searcher or search engine’s request to access the page. In Google Search Console’s “Crawl Error” report, there is a tab dedicated to these errors. These typically happen because the request for the URL timed out, so Googlebot abandoned the request. View Google’s documentation to learn more about fixing server connectivity issues.
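
If you’d like to spot-check the status code a single URL returns without waiting on a crawl report, a few lines of Python (or any HTTP client) will do; the URL below is a placeholder:

import urllib.request
import urllib.error

url = "https://www.example.com/some-old-page"  # placeholder URL
try:
    response = urllib.request.urlopen(url)
    print(url, response.status)    # successful (2xx) responses land here; redirects are followed automatically
except urllib.error.HTTPError as error:
    print(url, error.code)         # 4xx and 5xx responses raise HTTPError, e.g. 404 or 500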

Thankfully, there is a way to tell both searchers and search engines that your page has moved — the 301 (permanent) redirect.

Create custom 404 pages!

Customize your 404 page by adding in links to important pages on your site, a site search feature, and even contact information. This should make it less likely that visitors will bounce off your site when they hit a 404.

A depiction of redirecting one page to another.

Say you move a page from example.com/young-dogs/ to example.com/puppies/. Search engines and users need a bridge to cross from the old URL to the new. That bridge is a 301 redirect.
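
How you build that bridge depends on your server or CMS. On an Apache server, for instance, the redirect for this example could be a single line in the site’s .htaccess file (a sketch only; other platforms have their own redirect settings):

# .htaccess on example.com: permanently redirect the old URL to the new one
# (uses Apache's mod_alias Redirect directive)
Redirect 301 /young-dogs/ https://example.com/puppies/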

When you do implement a 301:

  • Link equity: Transfers link equity from the page’s old location to the new URL.
  • Indexing: Helps Google find and index the new version of the page.
  • User experience: Ensures users find the page they’re looking for.

When you don’t implement a 301:

  • Link equity: Without a 301, the authority from the previous URL is not passed on to the new version of the URL.
  • Indexing: The presence of 404 errors on your site alone doesn’t harm search performance, but letting ranking / trafficked pages 404 can result in them falling out of the index, with rankings and traffic going with them — yikes!
  • User experience: Allowing your visitors to click on dead links will take them to error pages instead of the intended page, which can be frustrating.

The 301 status code itself means that the page has permanently moved to a new location, so avoid redirecting URLs to irrelevant pages — URLs where the old URL’s content doesn’t actually live. If a page is ranking for a query and you 301 it to a URL with different content, it might drop in rank position because the content that made it relevant to that particular query isn’t there anymore. 301s are powerful — move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be reserved for temporary moves and in cases where passing link equity isn’t as big of a concern. 302s are kind of like a road detour. You’re temporarily siphoning traffic through a certain route, but it won’t be like that forever.

Once you’ve ensured your site is optimized for crawlability, the next order of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store your pages?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page’s contents. All of that information is stored in its index.

A robot storing a book in a library.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes…. (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing “Cached”:

A screenshot of where to see cached results in the SERPs.

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a “not found” error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can use the URL Inspection tool to learn the status of the page, or use Fetch as Google which has a “Request Indexing” feature to submit individual URLs to the index. (Bonus: GSC’s “fetch” tool also has a “render” option that allows you to see if there are any issues with how Google is interpreting your page).

Tell search engines how to index your site

Robots meta directives

Meta directives (or “meta tags”) are instructions you can give to search engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like “do not index this page in search results” or “don’t pass any link equity to any on-page links”. These instructions are executed via Robots Meta Tags in the <head> of your HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP header.

Robots meta tag

The robots meta tag can be used within the <head> of the HTML of your webpage. It can exclude all or specific search engines. The following are the most common meta directives, along with what situations you might apply them in.

index/noindex tells the engines whether the page should be crawled and kept in a search engine’s index for retrieval. If you opt to use “noindex,” you’re communicating to crawlers that you want the page excluded from search results. By default, search engines assume they can index all pages, so using the “index” value is unnecessary.

  • When you might use: You might opt to mark a page as “noindex” if you’re trying to trim thin pages from Google’s index of your site (ex: user generated profile pages) but you still want them accessible to visitors.

follow/nofollow tells search engines whether links on the page should be followed or nofollowed. “Follow” results in bots following the links on your page and passing link equity through to those URLs. Or, if you elect to employ “nofollow,” the search engines will not follow or pass any link equity through to the links on the page. By default, all pages are assumed to have the “follow” attribute.

  • When you might use: nofollow is often used together with noindex when you’re trying to prevent a page from being indexed as well as prevent the crawler from following links on the page.

noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.

  • When you might use: If you run an e-commerce site and your prices change regularly, you might consider the noarchive tag to prevent searchers from seeing outdated pricing.

Here’s an example of a meta robots noindex, nofollow tag:

<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex, nofollow" />
</head>
<body>...</body>
</html>

This example excludes all search engines from indexing the page and from following any on-page links. If you want to exclude multiple crawlers, like googlebot and bing for example, it’s okay to use multiple robot exclusion tags.
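
As a sketch, crawler-specific tags might look like the following; “googlebot” is the meta name Google documents for targeting its crawler, and “bingbot” (the name of Bing’s crawler) is commonly used the same way:

<meta name="googlebot" content="noindex, nofollow" />
<meta name="bingbot" content="noindex, nofollow" />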

Meta directives affect indexing, not crawling

Googlebot needs to crawl your page in order to see its meta directives, so if you’re trying to prevent crawlers from accessing certain pages, meta directives are not the way to do it. Robots tags must be crawled to be respected.

X-Robots-Tag

The x-robots tag is used within the HTTP header of your URL, providing more flexibility and functionality than meta tags if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.

For example, you could easily exclude entire folders or file types (like moz.com/no-bake/old-recipes-to-noindex):

<Files ~ "\/?no\-bake\/.*">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

The directives used in a robots meta tag can also be used in an X-Robots-Tag.

Or specific file types (like PDFs):

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

Understanding the different ways you can influence crawling and indexing will help you avoid the common pitfalls that can prevent your important pages from getting found.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

An artistic interpretation of ranking, with three dogs sitting pretty on first, second, and third-place pedestals.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: “We’re making quality updates all the time.” If your site suffered after an algorithm adjustment, compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers’ questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics — the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or “inbound links” are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

A depiction of how inbound links and internal links work.

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real-life WoM (Word-of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    • Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    • Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    • Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    • Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google’s core algorithm) is a link analysis algorithm named after one of Google’s founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.
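
To make that idea concrete, here’s a toy sketch of the calculation in Python. It’s a drastically simplified illustration of the original PageRank formula run on a made-up three-page link graph, not Google’s actual implementation:

# Toy illustration of the PageRank idea on a made-up three-page link graph.
# A page's score is built from the scores of the pages linking to it, so pages
# that earn more (and better) inbound links end up scoring higher.

links = {                      # who links to whom (hypothetical)
    "page-a": ["page-b", "page-c"],
    "page-b": ["page-c"],
    "page-c": ["page-a"],
}

damping = 0.85                                  # the classic damping factor from the original PageRank paper
scores = {page: 1 / len(links) for page in links}

for _ in range(20):                             # iterate until the scores settle
    new_scores = {}
    for page in links:
        inbound = sum(
            scores[other] / len(links[other])   # each linking page passes on a share of its own score
            for other, outlinks in links.items()
            if page in outlinks
        )
        new_scores[page] = (1 - damping) / len(links) + damping * inbound
    scores = new_scores

print(scores)   # page-c has earned the most inbound links here, so it ends up with the highest score

In the real world the graph has vastly more pages and many more signals layered on top, but the intuition is the same: links act like votes, and votes from important pages count for more.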

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there are no strict benchmarks for how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All of those can play a role in how well a page performs in search, but the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals in play, the top three have stayed fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher’s intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google’s core algorithm. Machine learning describes computer programs that continually improve their predictions through new observations and training data. In other words, RankBrain is always learning, and because it’s always learning, search results should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better result to users than the higher ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting the lesser relevant pages as a byproduct.

An image showing how results can change and are volatile enough to show different rankings even hours later.

Like most things with the search engine, we don’t know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue leveraging RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for searchers who might land on your page, and you’ve taken a big first step to performing well in a RankBrain world.

Engagement metrics: correlation, causation, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

  • Clicks (visits from search)
  • Time on page (amount of time the visitor spent on a page before leaving it)
  • Bounce rate (the percentage of all website sessions where users viewed only one page)
  • Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)

Many tests, including Moz’s own ranking factor survey, have indicated that engagement metrics correlate with higher ranking, but causation has been hotly debated. Are good engagement metrics just indicative of highly ranked sites? Or are sites ranked highly because they possess good engagement metrics?

What Google has said

While they’ve never used the term “direct ranking signal,” Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems like rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it would appear that Google falls short of calling engagement metrics a “ranking signal” because those metrics are used to improve search quality, and the rank of individual URLs is just a byproduct of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

  • Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1 spot after getting around 200 people to click on the URL from the SERP. Interestingly, ranking improvement seemed to be isolated to the location of the people who visited the link. The rank position spiked in the US, where many participants were located, whereas it remained lower on the page in Google Canada, Google Australia, etc.
  • Larry Kim’s comparison of top pages and their average dwell time pre- and post-RankBrain seemed to indicate that the machine-learning component of Google’s algorithm demotes the rank position of pages that people don’t spend as much time on.
  • Darren Shaw’s testing has shown user behavior’s impact on local search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for quality, and rank position changes as a byproduct, it’s safe to say that SEOs should optimize for engagement. Engagement doesn’t change the objective quality of your web page, but rather your value to searchers relative to other results for that query. That’s why, after no changes to your page or its backlinks, it could decline in rankings if searchers’ behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker. Objective factors such as links and content first rank the page, then engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results

Back when search engines lacked a lot of the sophistication they have today, the term “10 blue links” was coined to describe the flat structure of the SERP. Any time a search was performed, Google would return a page with 10 organic results, each in the same format.

A screenshot of what a 10-blue-links SERP looks like.

In this search landscape, holding the #1 spot was the holy grail of SEO. But then something happened. Google began adding results in new formats on their search result pages, called SERP features. Some of these SERP features include:

  • Paid advertisements
  • Featured snippets
  • People Also Ask boxes
  • Local (map) pack
  • Knowledge panel
  • Sitelinks

And Google is adding new ones all the time. They even experimented with “zero-result SERPs,” a phenomenon where only one result from the Knowledge Graph was displayed on the SERP with no results below it except for an option to “view more results.”

The addition of these features caused some initial panic for two main reasons. For one, many of these features caused organic results to be pushed down further on the SERP. Another byproduct is that fewer searchers are clicking on the organic results since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Query intent: Possible SERP feature triggered

  • Informational: Featured snippet
  • Informational with one answer: Knowledge Graph / instant answer
  • Local: Map pack
  • Transactional: Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know that answers can be delivered to searchers in a wide array of formats, and how you structure your content can impact the format in which it appears in search.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical location customers can visit (ex: dentist) or for a business that travels to visit their customers (ex: plumber), make sure that you claim, verify, and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors to determine ranking:

  1. Relevance
  2. Distance
  3. Prominence

Relevance

Relevance is how well a local business matches what the searcher is looking for. To ensure that the business is doing everything it can to be relevant to searchers, make sure the business’ information is thoroughly and accurately filled out.

Distance

Google uses your geo-location to better serve you local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher’s location, though seldom as pronounced as in local pack results.

Prominence

With prominence as a factor, Google is looking to reward businesses that are well-known in the real world. In addition to a business’ offline prominence, Google also looks to some online factors to determine local ranking, such as:

Reviews

The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on their ability to rank in local results.

Citations

A “business citation” or “business listing” is a web-based reference to a local business’ “NAP” (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources to continuously build its local business index. When Google finds multiple consistent references to a business’s name, location, and phone number, it strengthens Google’s “trust” in the validity of that data. This then leads to Google being able to show the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.

Organic ranking

SEO best practices also apply to local SEO, since Google also considers a website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although not listed by Google as a local ranking factor, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data like popular times to visit and average length of visits…

A screenshot of the "popular times to visit" result in local search.

…and even provides searchers with the ability to ask the business questions!

A screenshot of the Questions & Answers result in local search.

Curious about a certain local business’ citation accuracy? Moz has a free tool that can help out, aptly named Check Listing.

Now more than ever, local results are being influenced by real-world data. This interactivity reflects how searchers engage with and respond to local businesses, rather than relying purely on static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for it to use real-time engagement metrics to determine quality and relevance.

KEYWORD RESEARCH

Understand what your audience wants to find.


Now that you’ve learned how to show up in search results, let’s determine which strategic keywords to target in your website’s content, and how to craft that content to satisfy both users and search engines.

The power of keyword research lies in better understanding your target market and how they are searching for your content, services, or products.

Keyword research provides you with specific search data that can help you answer questions like:

  • What are people searching for?
  • How many people are searching for it?
  • In what format do they want that information?

In this chapter, you’ll get tools and strategies for uncovering that information, as well as learn tactics that’ll help you avoid keyword research foibles and build strong content. Once you uncover how your target audience is searching for your content, you begin to uncover a whole new world of strategic SEO!

Before keyword research, ask questions

Before you can help a business grow through search engine optimization, you first have to understand who they are, who their customers are, and what their goals are.

This is where corners are often cut. Too many people bypass this crucial planning step because keyword research takes time, and why spend the time when you already know what you want to rank for?

The answer is that what you want to rank for and what your audience actually wants are often two wildly different things. Focusing on your audience and then using keyword data to hone those insights will make for much more successful campaigns than focusing on arbitrary keywords.

Here’s an example. Frankie & Jo’s (a Seattle-based vegan, gluten-free ice cream shop) has heard about SEO and wants help improving how and how often they show up in organic search results. In order to help them, you need to first understand a little more about their customers. To do so, you might ask questions such as:

  • What types of ice cream, desserts, snacks, etc. are people searching for?
  • Who is searching for these terms?
  • When are people searching for ice cream, snacks, desserts, etc.?
    • Are there seasonality trends throughout the year?
  • How are people searching for ice cream?
    • What words do they use?
    • What questions do they ask?
    • Are more searches performed on mobile devices?
  • Why are people seeking ice cream?
    • Are individuals looking for health-conscious ice cream specifically or just looking to satisfy a sweet tooth?
  • Where are potential customers located — locally, nationally, or internationally?

And finally — here’s the kicker — how can you help provide the best content about ice cream to cultivate a community and fulfill what all those people are searching for? Asking these questions is a crucial planning step that will guide your keyword research and help you craft better content.

What’s that word mean?

Remember, if you’re stumped by any of the terms used in this chapter, our SEO glossary is here to help!

What terms are people searching for?

You may have a way of describing what you do, but how does your audience search for the product, service, or information you provide? Answering this question is a crucial first step in the keyword research process.

Discovering keywords

You likely have a few keywords in mind that you would like to rank for. These will be things like your products, services, or other topics your website addresses, and they are great seed keywords for your research, so start there! You can enter those keywords into a keyword research tool to discover average monthly search volume and similar keywords. We’ll get into search volume in greater depth in the next section, but during the discovery phase, it can help you determine which variations of your keywords are most popular amongst searchers.

Once you enter your seed keywords into a keyword research tool, you will begin to discover other keywords, common questions, and topics for your content that you might have otherwise missed.

Let’s use the example of a florist that specializes in weddings.

Typing “wedding” and “florist” into a keyword research tool, you may discover highly relevant, frequently searched related terms such as:

  • Wedding bouquets
  • Bridal flowers
  • Wedding flower shop

In the process of discovering relevant keywords for your content, you will likely notice that the search volume of those keywords varies greatly. While you definitely want to target terms that your audience is searching for, in some cases, it may be more advantageous to target terms with lower search volume because they’re far less competitive.

Since both high- and low-competition keywords can be advantageous for your website, learning more about search volume can help you prioritize keywords and pick the ones that will give your website the biggest strategic advantage.

How often are those terms searched?

Uncovering search volume

The higher the search volume for a given keyword or keyword phrase, the more work is typically required to achieve higher rankings. This is often referred to as keyword difficulty and occasionally incorporates SERP features; for example, if many SERP features (like featured snippets, knowledge graph, carousels, etc) are clogging up a keyword’s result page, difficulty will increase. Big brands often take up the top 10 results for high-volume keywords, so if you’re just starting out on the web and going after the same keywords, the uphill battle for ranking can take years of effort.

Typically, the higher the search volume, the greater the competition and effort required to achieve organic ranking success. Go too low, though, and you risk not drawing any searchers to your site. In many cases, it may be most advantageous to target highly specific, lower competition search terms. In SEO, we call those long-tail keywords.

Understanding the long tail

It would be great to rank #1 for the keyword “shoes”… or would it?

It’s wonderful to deal with keywords that have 50,000 searches a month, or even 5,000 searches a month, but in reality, these popular search terms make up only a fraction of all searches performed on the web. In fact, keywords with very high search volumes may even indicate ambiguous intent, and targeting those terms can put you at risk of drawing visitors to your site whose goals don’t match the content your page provides.

Take the keyword “pizza”: does the searcher want to know the nutritional value of pizza? Order a pizza? Find a restaurant to take their family to? Google doesn’t know, so it offers SERP features to help searchers refine their query. Targeting “pizza” means that you’re likely casting too wide a net.

If you’re searching for “pizza,” Google thinks you may also be interested in “cheese.” They’re not wrong…

Was your intent to find a pizza place for lunch? The “Discover more places” SERP feature has that covered.

These popular “fat head” terms account for roughly a quarter of search demand; the remaining 75% lie in the “chunky middle” and “long tail” of search.

A depiction of the search demand curve, showing the 'fat head' (top keywords with high traffic and competition), the 'chunky middle' (medium keywords with medium traffic and competition), and the 'long tail' (less popular and longer keyword phrases with less traffic and lower competition)

Don’t underestimate these less popular keywords. Long tail keywords with lower search volume often convert better, because searchers are more specific and intentional in their searches. For example, a person searching for “shoes” is probably just browsing. On the other hand, someone searching for “best price red womens size 7 running shoe” practically has their wallet out!

Getting strategic with search volume

Now that you’ve discovered relevant search terms for your site and their corresponding search volumes, you can get even more strategic by looking at your competitors and figuring out how searches might differ by season or location.

Keywords by competitor

You’ll likely compile a lot of keywords. How do you know which to tackle first? It could be a good idea to prioritize high-volume keywords that your competitors are not currently ranking for. On the flip side, you could also see which keywords from your list your competitors are already ranking for and prioritize those. The former is great when you want to take advantage of your competitors’ missed opportunities, while the latter is an aggressive strategy that sets you up to compete for keywords your competitors are already performing well for.

Keywords by season

Knowing about seasonal trends can be advantageous in setting a content strategy. For example, if you know that “christmas box” starts to spike in October through December in the United Kingdom, you can prepare content months in advance and give it a big push around those months.

Keywords by region

You can more strategically target a specific location by narrowing down your keyword research to specific towns, counties, or states in the Google Keyword Planner, or evaluate “interest by subregion” in Google Trends. Geo-specific research can help make your content more relevant to your target audience. For example, you might find out that in Texas, the preferred term for a large truck is “big rig,” while in New York, “tractor trailer” is the preferred terminology.

Which format best suits the searcher’s intent?

In Chapter 2, we learned about SERP features. That background is going to help us understand how searchers want to consume information for a particular keyword. The format in which Google chooses to display search results depends on intent, and every query has a unique one. Google describes these intents in their Quality Rater Guidelines as either “know” (find information), “do” (accomplish a goal), “website” (find a specific website), or “visit-in-person” (visit a local business).

While there are thousands of possible search types, let’s take a closer look at five major categories of intent:

1. Informational queries: The searcher needs information, such as the name of a band or the height of the Empire State Building.

If you’re enjoying this chapter so far, be sure to check out the keyword research episode of our One-Hour Guide to SEO video series!

A screenshot of the SERP feature result for the query 'who sings Sweet Caroline' (the answer is Neil Diamond.)

2. Navigational queries: The searcher wants to go to a particular place on the Internet, such as Facebook or the homepage of the NFL.

A screenshot of the query 'moz blog' and resulting SERP.

3. Transactional queries: The searcher wants to do something, such as buy a plane ticket or listen to a song.

A screenshot of the query 'plane tickets to seattle' and resulting SERP.

4. Commercial investigation: The searcher wants to compare products and find the best one for their specific needs.

A screenshot of the query 'ps4 vs ps4 pro' and resulting SERP.

5. Local queries: The searcher wants to find something locally, such as a nearby coffee shop, doctor, or music venue.

A screenshot of the query 'coffee shop near me' and resulting SERP with local results.

An important step in the keyword research process is surveying the SERP landscape for the keyword you want to target in order to get a better gauge of searcher intent. If you want to know what type of content your target audience wants, look to the SERPs!

Google has closely evaluated the behavior of trillions of searches in an attempt to provide the most desired content for each specific keyword search.

Take the search “dresses,” for example:

A screenshot of the query 'dresses' and resulting carousel.

From the shopping carousel, you can infer that Google has determined that many people who search for “dresses” want to shop for dresses online.

A screenshot of the local results for the query 'dresses.'

There is also a Local Pack feature for this keyword, indicating Google’s desire to help searchers who may be looking for local dress retailers.

A screenshot of the Refine By results for the query 'dresses.'

If the query is ambiguous, Google will also sometimes include the “refine by” feature to help searchers further specify what they’re looking for. By doing so, the search engine can provide results that better help the searcher accomplish their task.

Google has a wide array of result types it can serve up depending on the query, so if you’re going to target a keyword, look to the SERP to understand what type of content you need to create.

Tools for determining the value of a keyword

How much value would a keyword add to your website? These tools can help you answer that question, so they’d make great additions to your keyword research arsenal:

  • Moz Keyword Explorer – Input a keyword in Keyword Explorer and get information like monthly search volume and SERP features (like local packs or featured snippets) that are ranking for that term. The tool extracts accurate search volume data by using live clickstream data. To learn more about how we’re producing our keyword data, check out Announcing Keyword Explorer.
    • Bonus! Keyword Explorer’s “Difficulty” score can also help you narrow down your keyword options to the phrases you have the best shot at ranking for. The higher a keyword’s score, the more difficult it would be to rank for that term. More about Keyword Difficulty.
  • Google Keyword Planner – Google’s AdWords Keyword Planner has historically been the most common starting point for SEO keyword research. However, Keyword Planner does restrict search volume data by lumping keywords together into large search volume range buckets. To learn more, check out Google Keyword Planner’s Dirty Secrets.
  • Google Trends – Google’s keyword trend tool is great for finding seasonal keyword fluctuations. For example, “funny halloween costume ideas” will peak in the weeks before Halloween.
  • AnswerThePublic – This free tool populates commonly searched for questions around a specific keyword. Bonus! You can use this tool in tandem with another free tool, Keywords Everywhere, to prioritize ATP’s suggestions by search volume.
  • SpyFu Keyword Research Tool – Provides some really neat competitive keyword data.

ON-PAGE SEO

Use your research to craft your message.


Now that you know how your target market is searching, it’s time to dive into on-page SEO, the practice of crafting web pages that answer searchers’ questions. On-page SEO is multifaceted, and extends beyond content into other things like schema and meta tags, which we’ll discuss more at length in the next chapter on technical optimization. For now, put on your wordsmithing hats — it’s time to create your content!

Creating your content

Applying your keyword research

In the last chapter, we learned methods for discovering how your target audience is searching for your content. Now, it’s time to put that research into practice. Here is a simple outline to follow for applying your keyword research:

  1. Survey your keywords and group those with similar topics and intent. Those groups will be your pages, rather than creating individual pages for every keyword variation.
  2. If you haven’t done so already, evaluate the SERP for each keyword or group of keywords to determine what type and format your content should be. Some characteristics of ranking pages to take note of:
    1. Are they image- or video-heavy?
    2. Is the content long-form or short and concise?
    3. Is the content formatted in lists, bullets, or paragraphs?
  3. Ask yourself, “What unique value could I offer to make my page better than the pages that are currently ranking for my keyword?”

On-page SEO allows you to turn your research into content your audience will love. Just make sure to avoid falling into the trap of low-value tactics that could hurt more than help!

What’s that word mean?

There are bound to be a few stumpers in this hefty chapter on on-page optimization — be prepared for unknown terms with our SEO glossary!

Low-value tactics to avoid

Your web content should exist to answer searchers’ questions, to guide them through your site, and to help them understand your site’s purpose. Content should not be created for the purpose of ranking highly in search alone. Ranking is a means to an end, the end being to help searchers. If we put the cart before the horse, we risk falling into the trap of low-value content tactics.

Some of these tactics were introduced in Chapter 2, but by way of review, let’s take a deeper dive into some low-value tactics you should avoid when crafting search engine optimized content.

Thin content

While it’s common for a website to have unique pages on different topics, an older content strategy was to create a page for every single iteration of your keywords in order to rank on page 1 for those highly specific queries.

For example, if you were selling bridal dresses, you might have created individual pages for bridal gowns, bridal dresses, wedding gowns, and wedding dresses, even if each page was essentially saying the same thing. A similar tactic for local businesses was to create multiple pages of content for each city or region from which they wanted clients. These “geo pages” often had the same or very similar content, with the location name being the only unique factor.

Tactics like these clearly weren’t helpful for users, so why did publishers do it? Google wasn’t always as good as it is today at understanding the relationships between words and phrases (or semantics). So, if you wanted to rank on page 1 for “bridal gowns” but you only had a page on “wedding dresses,” that may not have cut it.

This practice created tons of thin, low-quality content across the web, which Google addressed specifically with its 2011 update known as Panda. This algorithm update penalized low-quality pages, which resulted in more quality pages taking the top spots of the SERPs. Google continues to iterate on this process of demoting low-quality content and promoting high-quality content today.

Google is clear that you should have a comprehensive page on a topic instead of multiple, weaker pages for each variation of a keyword.

Try Moz Pro, free!

Moz Pro offers robust page optimization tools to help you improve both user experience and rankings. Try it free for 30 days and see why so many marketers trust our SEO tools!

A depiction of four pages targeting similar keywords with a red X over them, and one page targeting multiple keyword variants with a green checkmark.

Duplicate content

Like it sounds, “duplicate content” refers to content that is shared between domains or between multiple pages of a single domain. “Scraped” content goes a step further, and entails the blatant and unauthorized use of content from other sites. This can include taking content and republishing as-is, or modifying it slightly before republishing, without adding any original content or value.

A depiction of two pages with the exact same content, barring specific location information.

There are plenty of legitimate reasons for internal or cross-domain duplicate content, so Google encourages the use of a rel=canonical tag to point to the original version of the web content. While you don’t need to know about this tag just yet, the main thing to note for now is that your content should be unique in word and in value.
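For reference, the canonical tag itself is just one line in the page’s head. Here’s a minimal sketch, using a hypothetical URL, of a duplicate page pointing engines to its preferred version:

<head>
<!-- Placed on the duplicate page; tells engines which URL is the preferred version -->
<link rel="canonical" href="https://www.example.com/original-article/" />
</head>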

Debunking the “duplicate content penalty” myth

There is no Google penalty for duplicate content. That is to say, for example, if you take an article from the Associated Press and post it on your blog, you won’t get penalized with something like a Manual Action from Google. Google does, however, filter duplicate versions of content from their search results. If two or more pieces of content are substantially similar, Google will choose a canonical (source) URL to display in its search results and hide the duplicate versions. That’s not a penalty. That’s Google filtering to show only one version of a piece of content to improve the searcher’s experience.

Cloaking

A basic tenet of search engine guidelines is to show the same content to the engine’s crawlers that you’d show to a human visitor. This means that you should never hide text in the HTML code of your website that a normal visitor can’t see.

When this guideline is broken, search engines call it “cloaking” and take action to prevent these pages from ranking in search results. Cloaking can be accomplished in any number of ways and for a variety of reasons, both positive and negative. Below is an example of an instance where Spotify showed different content to users than to Google.

Users were presented with a login screen in Spotify when searching for the National Philharmonic orchestra.

Viewing Google’s cached version of the page shows the content Spotify provided to the search engine.

In some cases, Google may let practices that are technically cloaking pass because they contribute to a positive user experience. For more on the subject of hidden content and how Google handles it, see our Whiteboard Friday entitled How Does Google Handle CSS + Javascript “Hidden” Text?

Keyword stuffing

If you’ve ever been told, “You need to include {critical keyword} on this page X times,” you’ve seen the confusion over keyword usage in action. Many people mistakenly think that if you just include a keyword within your page’s content X times, you will automatically rank for it. The truth is, although Google looks for mentions of keywords and related concepts on your site’s pages, the page itself has to add value outside of pure keyword usage. If a page is going to be valuable to users, it won’t sound like it was written by a robot, so incorporate your keywords and phrases naturally in a way that is understandable to your readers.

Below is an example of a keyword-stuffed page of content that also uses another old method: bolding all your targeted keywords. Oy.

An example of a keyword-stuffed paragraph, bolding all the target keywords.

Auto-generated content

Arguably one of the most offensive forms of low-quality content is the kind that is auto-generated, or created programmatically with the intent of manipulating search rankings rather than helping users. You may recognize some auto-generated content by how little sense it makes when read: the words are technically words, but they’re strung together by a program rather than a human being.

It is worth noting that advancements in machine learning have contributed to more sophisticated auto-generated content that will only get better over time. This is likely why in Google’s quality guidelines on automatically generated content, Google specifically calls out the brand of auto-generated content that attempts to manipulate search rankings, rather than any-and-all auto-generated content.

What to do instead: 10x it!

There is no “secret sauce” to ranking in search results. Google ranks pages highly because it has determined they are the best answers to the searcher’s questions. In today’s search engine, it’s not enough that your page isn’t duplicative, spammy, or broken. Your page has to provide value to searchers and be better than any other page Google is currently serving as the answer to a particular query. Here’s a simple formula for content creation:

  • Search the keyword(s) you want your page to rank for
  • Identify which pages are ranking highly for those keywords
  • Determine what qualities those pages possess
  • Create content that’s better than that

We like to call this 10x content. If you create a page on a keyword that is 10x better than the pages being shown in search results (for that keyword), Google will reward you for it, and better yet, you’ll naturally get people linking to it! Creating 10x content is hard work, but will pay dividends in organic traffic.

Just remember, there’s no magic number when it comes to words on a page. What we should be aiming for is whatever sufficiently satisfies user intent. Some queries can be answered thoroughly and accurately in 300 words while others might require 1,000 words!

NAP: A note for local businesses

If you’re a business that makes in-person contact with your customers, be sure to include your business name, address, and phone number (NAP) prominently, accurately, and consistently throughout your site’s content. This information is often displayed in the footer or header of a local business website, as well as on any “contact us” pages. You’ll also want to mark up this information using local business schema. Schema and structured data are discussed more at length in the “Other optimizations” section of this chapter.
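To give a rough idea of what that markup can look like, here’s a hedged sketch using made-up business details and schema.org’s LocalBusiness type, added to a page as JSON-LD:

<!-- Hypothetical NAP marked up with schema.org LocalBusiness (JSON-LD) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Ice Cream Shop",
  "telephone": "+1-206-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Seattle",
    "addressRegion": "WA",
    "postalCode": "98101"
  }
}
</script>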

If you are a multi-location business, it’s best to build unique, optimized pages for each location. For example, a business that has locations in Seattle, Tacoma, and Bellevue should consider having a page for each:

example.com/seattle
example.com/tacoma
example.com/bellevue

Each page should be uniquely optimized for that location, so the Seattle page would have unique content discussing the Seattle location, list the Seattle NAP, and even feature testimonials from Seattle customers. If there are dozens, hundreds, or even thousands of locations, a store locator widget could be employed to help you scale.

Local vs national vs international

Just remember that not all businesses operate at the local level and perform what we call “local SEO.” Some businesses want to attract customers on a national level (ex: the entire United States) and others want to attract customers from multiple countries (“international SEO”). Take Moz, for example. Our product (SEO software) is not tied to a specific location, whereas a coffee shop’s is, since customers have to travel to the location to get their caffeine fix.

In this scenario, the coffee shop should optimize their website for their physical location, whereas Moz would target “SEO software” without a location-specific modifier like “Seattle.”

How you choose to optimize your site depends largely on your audience, so make sure you have them in mind when crafting your website content.

Hope you still have some energy left after handling the difficult-yet-rewarding task of putting together a page that is 10x better than your competitors’ pages, because there are just a few more things needed before your page is complete! In the next sections, we’ll talk about the other on-page optimizations your pages need, as well as naming and organizing your content.

Beyond content: Other optimizations your pages need

Can I just bump up the font size to create paragraph headings?

How can I control what title and description show up for my page in search results?

After reading this section, you’ll understand other important on-page elements that help search engines understand the 10x content you just created, so let’s dive in!

Header tags

Header tags are an HTML element used to designate headings on your page. The main header tag, called an H1, is typically reserved for the title of the page. It looks like this:

<h1>Page Title</h1>

There are also sub-headings that go from H2 to H6 tags, although using all of these on a page is not required. The hierarchy of header tags goes from H1 to H6 in descending order of importance.

Each page should have a unique H1 that describes the main topic of the page; this is often generated automatically from the page’s title. As the main descriptive title of the page, the H1 should contain that page’s primary keyword or phrase. Avoid using header tags to mark up non-heading elements, such as navigational buttons and phone numbers. Use header tags to introduce what the following content will discuss.

Take this page about touring Copenhagen, for example:

<h1>Copenhagen Travel Guide</h1>
<h2>Copenhagen by the Seasons</h2>
<h3>Visiting in Winter</h3>
<h3>Visiting in Spring</h3>

The main topic of the page is introduced in the main <h1> heading, and each additional heading is used to introduce a new sub-topic. In this example, the <h2> is more specific than the <h1>, and the <h3> tags are more specific than the <h2>. This is just an example of a structure you could use.

Although what you choose to put in your header tags can be used by search engines to evaluate and rank your page, it’s important to avoid inflating their importance. Header tags are one among many on-page SEO factors, and typically would not move the needle like quality backlinks and content would, so focus on your site visitors when crafting your headings.

Internal links

In Chapter 2, we discussed the importance of having a crawlable website. Part of a website’s crawlability lies in its internal linking structure. When you link to other pages on your website, you ensure that search engine crawlers can find all your site’s pages, you pass link equity (ranking power) to other pages on your site, and you help visitors navigate your site.

The importance of internal linking is well established, but there can be confusion over how this looks in practice.

Link accessibility

Links that require a click to view (like those tucked inside a navigation drop-down) are often hidden from search engine crawlers, so if the only links to internal pages on your website are through these types of links, you may have trouble getting those pages indexed. Opt instead for links that are directly accessible on the page.
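To make the distinction concrete, here’s a small sketch (with a hypothetical /services/ page): the first link is a standard, crawlable HTML anchor, while the second relies entirely on JavaScript and may never be discovered by crawlers.

<!-- Crawlable: a standard anchor with an href attribute -->
<a href="/services/">Our services</a>

<!-- Risky: no href, so navigation only happens via JavaScript -->
<span onclick="window.location='/services/'">Our services</span>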

Anchor text

Anchor text is the text with which you link to pages. Below, you can see an example of a hyperlink without descriptive anchor text (just the bare URL) and a hyperlink with keyword-rich anchor text, as they would look in the HTML.

<a href="http://www.example.com/">http://www.example.com/</a>
<a href="http://www.example.com/" title="Keyword Text">Keyword Text</a>

On live view, that would look like this:

http://www.example.com/

Keyword Text

The anchor text sends signals to search engines regarding the content of the destination page. For example, if I link to a page on my site using the anchor text “learn SEO,” that’s a good indicator to search engines that the targeted page is one at which people can learn about SEO. Be careful not to overdo it, though. Too many internal links using the same, keyword-stuffed anchor text can signal to search engines that you’re trying to manipulate a page’s ranking. It’s best to make anchor text natural rather than formulaic.

Link volume

In Google’s General Webmaster Guidelines, they say to “limit the number of links on a page to a reasonable number (a few thousand at most).” This is part of Google’s technical guidelines, rather than the quality guideline section, so having too many internal links isn’t something that on its own is going to get you penalized, but it does affect how Google finds and evaluates your pages.

The more links on a page, the less equity each link can pass to its destination page. A page only has so much equity to go around.

So it’s safe to say that you should only link when you mean it! You can learn more about link equity from our SEO Learning Center.

Aside from passing authority between pages, a link is also a way to help users navigate to other pages on your site. This is a case where doing what’s best for search engines is also doing what’s best for searchers. Too many links not only dilute the authority of each link, but they can also be unhelpful and overwhelming. Consider how a searcher might feel landing on a page that looks like this:

Welcome to our gardening website! We have many articles on gardening, how to garden, and helpful tips on herbs, fruits, vegetables, perennials, and annuals. Learn more about gardening from our gardening blog.

Whew! Not only is that a lot of links to process, but it also reads pretty unnaturally and doesn’t contain much substance (which could be considered “thin content” by Google). Focus on quality and helping your users navigate your site, and you likely won’t have to worry about too many links.

Redirection

Removing and renaming pages is a common practice, but in the event that you do move a page, make sure to update the links to that old URL! At the very least, you should make sure to redirect the URL to its new location, but if possible, update all internal links to that URL at the source so that users and crawlers don’t have to pass through redirects to arrive at the destination page. If you choose to redirect only, be careful to avoid redirect chains that are too long (Google says, “Avoid chaining redirects… keep the number of redirects in the chain low, ideally no more than 3 and fewer than 5.”)

Example of a redirect chain:

(original location of content) example.com/location1 →
example.com/location2 → 
(current location of content) example.com/location3

Better:

example.com/location1 → example.com/location3
A depiction of 301 redirecting an old page to a new page.

Image optimization

Images are among the biggest culprits behind slow web pages! The best way to solve for this is to compress your images. There is no one-size-fits-all approach to image compression, so test options like “save for web,” different image dimensions, and compression tools like Optimizilla or ImageOptim for Mac (or Windows alternatives), then evaluate what works best.

Another way to help optimize your images (and improve your page speed) is by choosing the right image format.

How to choose which image format to use:

  • If your image requires animation, use a GIF.
  • If you don’t need to preserve high image resolution, use JPEG (and test out different compression settings).
  • If you do need to preserve high image resolution, use PNG.
    • If your image has a lot of colors, use PNG-24.
    • If your image doesn’t have a lot of colors, use PNG-8.

Learn more about choosing image formats in Google’s image optimization guide.

There are also ways to improve how fast a semi-slow page feels, such as showing a colored placeholder box or a very blurry, low-resolution version of an image while the full version renders, which helps visitors feel as if things are loading faster. We’ll discuss these options in more detail in Chapter 5.

Alt text

Alt text (alternative text) within images is a principle of web accessibility, and is used to describe images to the visually impaired via screen readers. It’s important to have alt text descriptions so that any visually impaired person can understand what the pictures on your website depict.

Search engine bots also crawl alt text to better understand your images, which gives you the added benefit of providing better image context to search engines. Just ensure that your alt descriptions read naturally for people, and avoid stuffing keywords for search engines.

Bad:

<img src="grumpycat.gif" alt="grumpy cat, cat is grumpy, grumpy cat gif">

Good:

<img src="grumpycat.gif" alt="A black cat looking very grumpy at a big sp

Submit an image sitemap

To ensure that Google can crawl and index your images, submit an image sitemap in your Google Search Console account. This helps Google discover images they may have otherwise missed.
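An image sitemap is an XML file that lists your pages along with the images that appear on them. As a minimal, hypothetical sketch (one page, one image, using Google’s image sitemap extension):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.example.com/desserts/chocolate-pie</loc>
    <image:image>
      <image:loc>https://www.example.com/images/chocolate-pie.jpg</image:loc>
    </image:image>
  </url>
</urlset>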

Formatting for readability & featured snippets

Your page could contain the best content ever written on a subject, but if it’s formatted improperly, your audience might never read it! While we can never guarantee that visitors will read our content, there are some principles that can promote readability, including:

  • Text size and color – Avoid fonts that are too tiny. Google recommends 16-point font and above to minimize the need for “pinching and zooming” on mobile. The text color in relation to the page’s background color should also promote readability. Additional information on text can be found in the website accessibility guidelines and via Google’s web accessibility fundamentals.
  • Headings – Breaking up your content with helpful headings can help readers navigate the page. This is especially useful on long pages where a reader might be looking only for information from a particular section.
  • Bullet points – Great for lists, bullet points can help readers skim and more quickly find the information they need.
  • Paragraph breaks – Avoiding walls of text can help prevent page abandonment and encourage site visitors to read more of your page.
  • Supporting media – When appropriate, include images, videos, and widgets that would complement your content.
  • Bold and italics for emphasis – Putting words in bold or italics can add emphasis, so they should be the exception, not the rule. Appropriate use of these formatting options can call out important points you want to communicate.

Formatting can also affect your page’s ability to show up in featured snippets, those “position 0” results that appear above the rest of organic results.

An example of a featured snippet, appearing in “position 0” at the top of a SERP.

There is no special code that you can add to your page to show up here, nor can you pay for this placement, but taking note of the query intent can help you better structure your content for featured snippets. For example, if you’re trying to rank for “cake vs. pie,” it might make sense to include a table in your content, with the benefits of cake in one column and the benefits of pie in the other. Or if you’re trying to rank for “best restaurants to try in Portland,” that could indicate Google wants a list, so formatting your content in bullets could help.
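To make that concrete, here’s a hypothetical sketch of how a “cake vs. pie” comparison could be marked up as a simple HTML table that a featured snippet might draw from:

<h2>Cake vs. Pie</h2>
<table>
  <tr>
    <th>Cake</th>
    <th>Pie</th>
  </tr>
  <tr>
    <td>Light, spongy crumb</td>
    <td>Flaky crust with a filling</td>
  </tr>
  <tr>
    <td>Typically frosted</td>
    <td>Typically topped with a crust, crumble, or left open</td>
  </tr>
</table>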

Title tags

A page’s title tag is a descriptive HTML element that specifies the title of a particular web page. It is nested within the head tag of each page and looks like this:

<head> <title>Example Title</title></head>

Each page on your website should have a unique, descriptive title tag. What you input into your title tag field will show up here in search results, although in some cases Google may adjust how your title tag appears in search results.

A screenshot of how a page title appears in the SERPs.

It can also show up in web browsers…

A screenshot of how a page title appears in a browser tab.

Or when you share the link to your page on certain external websites…

A screenshot of how a page title looks when shared on an external site, like Facebook.

Your title tag has a big role to play in people’s first impression of your website, and it’s an incredibly effective tool for drawing searchers to your page over any other result on the SERP. The more compelling your title tag, combined with high rankings in search results, the more visitors you’ll attract to your website. This underscores that SEO is not only about search engines, but rather the entire user experience.

What makes an effective title tag?

  • Keyword usage: Having your target keyword in the title can help both users and search engines understand what your page is about. Also, the closer to the front of the title tag your keywords are, the more likely a user will be to read them (and hopefully click) and the more helpful they can be for ranking.
  • Length: On average, search engines display the first 50–60 characters (~512 pixels) of a title tag in search results. If your title tag exceeds the characters allowed on that SERP, an ellipsis “…” will appear where the title was cut off. While sticking to 50–60 characters is safe, never sacrifice quality for strict character counts. If you can’t get your title tag down to 60 characters without harming its readability, go longer (within reason).
  • Branding: At Moz, we love to end our title tags with a brand name mention because it promotes brand awareness and creates a higher click-through rate among people who are familiar with Moz. Sometimes it makes sense to place your brand at the beginning of the title tag, such as on your homepage, but be mindful of what you’re trying to rank for and place those words closer toward the beginning of your title tag.
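Putting those three points together, a hypothetical title tag for a page targeting “wedding bouquets” might look something like this, with the target phrase up front, the brand at the end, and the whole thing at roughly 55 characters:

<head>
<title>Wedding Bouquets: Ideas &amp; Inspiration | Example Florist</title>
</head>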

Meta descriptions

Like title tags, meta descriptions are HTML elements that describe the contents of the page that they’re on. They are also nested in the head tag, and look like this:

<head>
<meta name="description" content="Description of page here."/>
</head>

What you input into the description field will show up here in search results:

A screenshot of how a meta description looks in the SERPs.

Title tag tips for better traffic

While there are no shortcuts in SEO, there are absolutely a ton of tips and tricks that can boost a page title’s clickability and attractiveness in the SERPs. Check out our Whiteboard Friday on the subject!

It’s worth noting that Google will sometimes display different text when it deems other content on your page more relevant to a user’s query. For example, if you search “find backlinks,” Google provides this meta description because it deems it more relevant to the specific search:

A screenshot of the meta description for the query 'find backlinks' in the SERPs.

While the actual meta description is:

A screenshot of the source code for the actual meta description on the page, which differs from what the SERP shows above.

This behavior often results in better descriptions for unique searches. However, don’t let it deter you from writing a default meta description for every page; they’re still extremely valuable.

What makes an effective meta description?

The qualities that make an effective title tag also apply to effective meta descriptions. Although Google says that meta descriptions are not a ranking factor, like title tags, they are incredibly important for click-through rate.

  • Relevance: Meta descriptions should be highly relevant to the content of your page, so they should summarize your key concept in some form. Give the searcher enough information to know they’ve found a page relevant enough to answer their question, without giving away so much information that it eliminates the need to click through to your web page.
  • Length: Search engines tend to truncate meta descriptions to around 155 characters. It’s best to write meta descriptions between 150–300 characters in length. On some SERPs, you’ll notice that Google gives much more real estate to the descriptions of some pages. This usually happens for web pages ranking right below a featured snippet.

URL structure: Naming and organizing your pages

URL stands for Uniform Resource Locator. URLs are the locations or addresses for individual pieces of content on the web. Like title tags and meta descriptions, search engines display URLs on the SERPs, so URL naming and format can impact click-through rates. Not only do searchers use them to make decisions about which web pages to click on, but URLs are also used by search engines in evaluating and ranking pages.

Clear page naming

Search engines require unique URLs for each page on your website so they can display your pages in search results, but clear URL structure and naming is also helpful for people who are trying to understand what a specific URL is about. For example, which URL is clearer?

example.com/desserts/chocolate-pie

or

example.com/asdf/453?=recipe-23432-1123

Searchers are more likely to click on URLs that reinforce and clarify what information is contained on that page, and less likely to click on URLs that confuse them.

The URL is a minor ranking signal, but you cannot expect to rank on the basis of the words in your domain/page names alone (see Google EMD update). When naming your pages or selecting a domain name, have your audience in mind first.

Page organization

If you discuss multiple topics on your website, you should also make sure to avoid nesting pages under irrelevant folders. For example:

example.com/commercial-litigation/alimony

It would have been better for this fictional multi-practice law firm website to nest alimony under “/family-law/” than to host it under the irrelevant “/commercial-litigation/” section of the website.

The folders in which you locate your content can also send signals about the type, not just the topic, of your content. For example, dated URLs can indicate time-sensitive content. While appropriate for news-based websites, dated URLs for evergreen content can actually turn searchers away because the information seems outdated. For example:

example.com/2015/april/what-is-seo/

vs.

example.com/what-is-seo/

Since the topic “What is SEO?” isn’t confined to a specific date, it’s best to host on a non-dated URL structure or else risk your information appearing stale.

As you can see, what you name your pages, and in what folders you choose to organize your pages, is an important way to clarify the topic of your page to users and search engines.

URL length

While it is not necessary to have a completely flat URL structure, many click-through rate studies indicate that, when given the choice between a longer URL and a shorter URL, searchers often prefer shorter URLs. Like title tags and meta descriptions that are too long, too-long URLs will also be cut off with an ellipsis. Just remember, having a descriptive URL is just as important, so don’t cut down on URL length if it means sacrificing the URL’s descriptiveness.

example.com/services/plumbing/plumbing-repair/toilets/leaks/

vs.

example.com/plumbing-repair/toilets/

Minimizing length, both by including fewer words in your page names and removing unnecessary subfolders, makes your URLs easier to copy and paste, as well as more clickable.

Keywords in URL

If your page is targeting a specific term or phrase, make sure to include it in the URL. However, don’t go overboard by trying to stuff in multiple keywords for purely SEO purposes. It’s also important to watch out for repeat keywords in different subfolders. For example, you may have naturally incorporated a keyword into a page name, but if located within other folders that are also optimized with that keyword, the URL could begin to appear keyword-stuffed.

Example:

example.com/seattle-dentist/dental-services/dental-crowns/

Keyword overuse in URLs can appear spammy and manipulative. If you aren’t sure whether your keyword usage is too aggressive, just read your URL through the eyes of a searcher and ask, “Does this look natural? Would I click on this?”

Static URLs

The best URLs are those that can easily be read by humans, so you should avoid the overuse of parameters, numbers, and symbols. Using technologies like mod_rewrite for Apache and ISAPI_rewrite for Microsoft, you can easily transform dynamic URLs like this:

http://moz.com/blog?id=123

into a more readable static version like this:

https://moz.com/google-algorithm-change

Hyphens for word separation

Not all web applications accurately interpret separators like underscores (_), plus signs (+), or spaces (%20). Search engines also do not understand how to separate words in URLs when they run together without a separator (example.com/optimizefeaturedsnippets/). Instead, use the hyphen character (-) to separate words in a URL.

Case sensitivity

URLs can be case sensitive, so it’s best to stick with lowercase. Instead of example.com/desserts/Chocolate-Pie-Recipe it would be better to use example.com/desserts/chocolate-pie-recipe. If the site you’re working on has lots of mixed-case URLs indexed, don’t fret — your developers can help. Ask them about adding a rewrite formula to something known as the .htaccess file to automatically make any uppercase URLs lowercase.

Geographic modifiers in URLs

Some local business owners omit geographic terms that describe their physical location or service area because they believe that search engines can figure this out on their own. On the contrary, it’s vital that local business websites’ content, URLs, and other on-site assets make specific mention of city names, neighborhood names, and other regional descriptors. Let both consumers and search engines know exactly where you are and where you serve, rather than relying on your physical location alone.

Protocols: HTTP vs HTTPS

A protocol is that “http” or “https” preceding your domain name. Google recommends that all websites have a secure protocol (the “s” in “https” stands for “secure”). To ensure that your URLs are using the https:// protocol instead of http://, you must obtain an SSL (Secure Sockets Layer) certificate. SSL certificates are used to encrypt data. They ensure that any data passed between the web server and browser of the searcher remains private. As of July 2018, Google Chrome displays “not secure” for all HTTP sites, which could cause these sites to appear untrustworthy to visitors and result in them leaving the site.

TECHNICAL SEO

Basic technical knowledge will help you optimize your site for search engines and establish credibility with developers.


Now that you’ve crafted valuable content on the foundation of solid keyword research, it’s important to make sure it’s not only readable by humans, but by search engines too!

You don’t need to have a deep technical understanding of these concepts, but it is important to grasp what these technical assets do so that you can speak intelligently about them with developers. Speaking your developers’ language is important because you’ll probably need them to carry out some of your optimizations. They’re unlikely to prioritize your asks if they can’t understand your request or see its importance. When you establish credibility and trust with your devs, you can begin to tear away the red tape that often blocks crucial work from getting done.

Beyond cross-team support, understanding technical optimization for SEO is essential if you want to ensure that your web pages are structured for both humans and crawlers. To that end, we’ve divided this chapter into three sections:

  1. How websites work
  2. How search engines understand websites
  3. How users interact with websites

Since the technical structure of a site can have a massive impact on its performance, it’s crucial for everyone to understand these principles. It might also be a good idea to share this part of the guide with your programmers, content writers, and designers so that all parties involved in a site’s construction are on the same page.

Try Moz Pro, free!

Find and fix critical technical issues that could be keeping you out of the SERPs. Try Moz Pro free for 30 days and see why so many marketers trust our SEO tools!

How websites work

If search engine optimization is the process of optimizing a website for search, SEOs need at least a basic understanding of the thing they’re optimizing!

Below, we outline the website’s journey from domain name purchase all the way to its fully rendered state in a browser. An important component of the website’s journey is the critical rendering path, which is the process of a browser turning a website’s code into a viewable page.

Knowing this about websites is important for SEOs to understand for a few reasons:

  • The steps in this webpage assembly process can affect page load times, and speed is not only important for keeping users on your site, but it’s also one of Google’s ranking factors.
  • Google renders certain resources, like JavaScript, on a “second pass.” Google will look at the page without JavaScript first, then a few days to a few weeks later, it will render JavaScript, meaning SEO-critical elements that are added to the page using JavaScript might not get indexed.

Imagine that the website loading process is your commute to work. You get ready at home, gather your things to bring to the office, and then take the fastest route from your home to your work. It would be silly to put on just one of your shoes, take a longer route to work, drop your things off at the office, then immediately return home to get your other shoe, right? That’s sort of what inefficient websites do. This chapter will teach you how to diagnose where your website might be inefficient, what you can do to streamline, and the positive ramifications on your rankings and user experience that can result from that streamlining.

Before a website can be accessed, it needs to be set up!

  1. Domain name is purchased. Domain names like moz.com are purchased from a domain name registrar such as GoDaddy or HostGator. These registrars are just organizations that manage the reservations of domain names.
  2. Domain name is linked to IP address. The Internet doesn’t understand names like “moz.com” as website addresses without the help of domain name servers (DNS). The Internet uses a series of numbers called an Internet protocol (IP) address (ex: 127.0.0.1), but we want to use names like moz.com because they’re easier for humans to remember. We need to use a DNS to link those human-readable names with machine-readable numbers.

How a website gets from server to browser

  1. User requests domain. Now that the name is linked to an IP address via DNS, people can request a website by typing the domain name directly into their browser or by clicking on a link to the website.
  2. Browser makes requests. That request for a web page prompts the browser to make a DNS lookup request to convert the domain name to its IP address. The browser then makes a request to the server for the code your web page is constructed with, such as HTML, CSS, and JavaScript.
  3. Server sends resources. Once the server receives the request for the website, it sends the website files to be assembled in the searcher’s browser.
  4. Browser assembles the web page. The browser has now received the resources from the server, but it still needs to put them all together and render the web page so that the user can see it in their browser. As the browser parses and organizes all the web page’s resources, it’s creating a Document Object Model (DOM). The DOM is what you can see when you right-click and “inspect element” on a web page in your Chrome browser (learn how to inspect elements in other browsers).
  5. Browser makes final requests. The browser will only show a web page after all the page’s necessary code is downloaded, parsed, and executed, so at this point, if the browser needs any additional code in order to show your website, it will make an additional request from your server.
  6. Website appears in browser. Whew! After all that, your website has now been transformed (rendered) from code to what you see in your browser.

Now that you know how a website appears in a browser, we’re going to focus on what a website is made of — in other words, the code (programming languages) used to construct those web pages.

The three most common are:

  • HTML – What a website says (titles, body content, etc.)
  • CSS – How a website looks (color, fonts, etc.)
  • JavaScript – How it behaves (interactive, dynamic, etc.)

HTML: What a website says

HTML stands for hypertext markup language, and it serves as the backbone of a website. Elements like headings, paragraphs, lists, and content are all defined in the HTML.

Here’s an example of a webpage and what its corresponding HTML looks like:

This is a screenshot from W3schools.com, our favorite place to learn and practice HTML, CSS, and JavaScript.
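Since that screenshot can’t be reproduced here, here’s a minimal, hypothetical page showing the kinds of HTML elements crawlers read: the title, headings, body copy, and links.

<!DOCTYPE html>
<html>
<head>
<title>Chocolate Pie Recipe</title>
</head>
<body>
<h1>Chocolate Pie Recipe</h1>
<p>This easy chocolate pie takes about an hour to make.</p>
<a href="/desserts/">See more dessert recipes</a>
</body>
</html>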

HTML is important for SEOs to know because it’s what lives “under the hood” of any page they create or work on. While your CMS likely doesn’t require you to write your pages in HTML (ex: selecting “hyperlink” will allow you to create a link without you having to type in “a href=”), it is what you’re modifying every time you do something to a web page such as adding content, changing the anchor text of internal links, and so on. Google crawls these HTML elements to determine how relevant your document is to a particular query. In other words, what’s in your HTML plays a huge role in how your web page ranks in Google organic search!

CSS: How a website looks

CSS stands for “cascading style sheets,” and this is what causes your web pages to take on certain fonts, colors, and layouts. HTML was created to describe content, rather than to style it, so when CSS entered the scene, it was a game-changer. With CSS, web pages could be “beautified” without requiring manual coding of styles into the HTML of every page — a cumbersome process, especially for large sites.

It wasn’t until 2014 that Google’s indexing system began to render web pages more like an actual browser, as opposed to a text-only browser. A black-hat SEO practice that tried to capitalize on Google’s older indexing system was hiding text and links via CSS for the purpose of manipulating search engine rankings. This “hidden text and links” practice is a violation of Google’s quality guidelines.

Components of CSS that SEOs, in particular, should care about:

  • Since style directives can live in external stylesheet files (CSS files) instead of your page’s HTML, it makes your page less code-heavy, reducing file transfer size and making load times faster.
  • Browsers still have to download resources like your CSS file, so compressing them can make your webpages load faster, and page speed is a ranking factor.
  • Having your pages be more content-heavy than code-heavy can lead to better indexing of your site’s content.
  • Using CSS to hide links and content can get your website manually penalized and removed from Google’s index.

JavaScript: How a website behaves

In the earlier days of the Internet, webpages were built with HTML. When CSS came along, webpage content had the ability to take on some style. When the programming language JavaScript entered the scene, websites could now not only have structure and style, but they could be dynamic.

JavaScript has opened up a lot of opportunities for non-static web page creation. When someone attempts to access a page enhanced with this programming language, that user’s browser will execute the JavaScript against the static HTML that the server returned, resulting in a webpage that comes to life with some sort of interactivity.

You’ve definitely seen JavaScript in action — you just may not have known it! That’s because JavaScript can do almost anything to a page. It could create a pop-up, for example, or it could request third-party resources like ads to display on your page.

Client-side rendering versus server-side rendering

JavaScript can pose some problems for SEO, though, since search engines don’t view JavaScript the same way human visitors do. That’s because of client-side versus server-side rendering. Most JavaScript is executed in a client’s browser. With server-side rendering, on the other hand, the files are executed at the server and the server sends them to the browser in their fully rendered state.

SEO-critical page elements such as text, links, and tags that are loaded on the client’s side with JavaScript, rather than represented in your HTML, are invisible from your page’s code until they are rendered. This means that search engine crawlers won’t see what’s in your JavaScript — at least not initially.

Google says that, as long as you’re not blocking Googlebot from crawling your JavaScript files, they’re generally able to render and understand your web pages just like a browser can, which means that Googlebot should see the same things as a user viewing a site in their browser. However, due to this “second wave of indexing” for client-side JavaScript, Google can miss certain elements that are only available once JavaScript is executed.

There are also some other things that could go wrong during Googlebot’s process of rendering your web pages, which can prevent Google from understanding what’s contained in your JavaScript:

  • You’ve blocked Googlebot from JavaScript resources (ex: with robots.txt, like we learned about in Chapter 2)
  • Your server can’t handle all the requests to crawl your content
  • The JavaScript is too complex or outdated for Googlebot to understand
  • JavaScript “lazy loads” content into the page only after the crawler has finished with the page and moved on, so the crawler never sees it.

Needless to say, while JavaScript does open a lot of possibilities for web page creation, it can also have some serious ramifications for your SEO if you’re not careful.

Thankfully, there’s a way to check whether Google sees the same thing as your visitors. To see how Googlebot views your page, use Google Search Console’s “URL Inspection” tool. Simply paste your page’s URL into the GSC search bar:

A screenshot of where to enter a page's URL in Google Search Console.

From here, click “Test Live URL”.

Where to test the live URL version in Google Search Console.

After Googlebot has recrawled your URL, click “View Tested Page” to see how your page is being crawled and rendered.

View Googlebot's live view of your page.

Clicking the “Screenshot” tab adjacent to “HTML” shows how Googlebot smartphone renders your page.

You’ll see how Googlebot renders your page versus how a visitor (or you) sees it. In the “More Info” tab, Google will also show a list of any resources it wasn’t able to fetch for the URL you entered.

Understanding the way websites work lays a great foundation for what we’ll talk about next: technical optimizations to help Google understand the pages on your website better.

How search engines understand websites

Imagine being a search engine crawler scanning down a 10,000-word article about how to bake a cake. How do you identify the author, recipe, ingredients, or steps required to bake a cake? This is where schema markup comes in. It allows you to spoon-feed search engines more specific classifications for what type of information is on your page.

Schema is a way to label or organize your content so that search engines have a better understanding of what certain elements on your web pages are. This code provides structure to your data, which is why schema is often referred to as “structured data.” The process of structuring your data is often referred to as “markup” because you are marking up your content with organizational code.

JSON-LD is Google’s preferred schema markup (announced in May ‘16), which Bing also supports. To view a full list of the thousands of available schema markups, visit Schema.org or view the Google Developers Introduction to Structured Data for additional information on how to implement structured data. After you implement the structured data that best suits your web pages, you can test your markup with Google’s Structured Data Testing Tool.
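For illustration only, here’s a minimal JSON-LD sketch for a recipe page (the recipe name, author, and times are placeholders); the script tag can sit in the page’s head or body:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Vanilla Cake",
  "author": { "@type": "Person", "name": "Jane Baker" },
  "prepTime": "PT30M",
  "cookTime": "PT45M",
  "recipeIngredient": ["2 cups flour", "1 cup sugar", "3 eggs"]
}
</script>

Structured data like this is what tells a crawler, unambiguously, which strings are the author, the ingredients, and the cook time.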

In addition to helping bots like Google understand what a particular piece of content is about, schema markup can also enable special features to accompany your pages in the SERPs. These special features are referred to as “rich snippets,” and you’ve probably seen them in action. They’re things like:

  • Top Stories carousels
  • Review stars
  • Sitelinks search boxes
  • Recipes

Remember, using structured data can help enable a rich snippet to be present, but does not guarantee it. Other types of rich snippets will likely be added in the future as the use of schema markup increases.

Some last words of advice for schema success:

  • You can use multiple types of schema markup on a page. However, if you mark up one element, like a product for example, and there are other products listed on the page, you must also mark up those products.
  • Don’t mark up content that is not visible to visitors and follow Google’s Quality Guidelines. For example, if you add review structured markup to a page, make sure those reviews are actually visible on that page.
  • If you have duplicate pages, Google asks that you mark up each duplicate page with your structured markup, not just the canonical version.
  • Provide original and updated (if applicable) content on your structured data pages.
  • Structured markup should be an accurate reflection of your page.
  • Try to use the most specific type of schema markup for your content.
  • Marked-up reviews should not be written by the business. They should be genuine unpaid business reviews from actual customers.

Tell search engines about your preferred pages with canonicalization

When Google crawls the same content on different web pages, it sometimes doesn’t know which page to index in search results. This is why the rel=”canonical” tag was invented: to help search engines better index the preferred version of content and not all its duplicates.

The rel=”canonical” tag allows you to tell search engines where the original, master version of a piece of content is located. You’re essentially saying, “Hey search engine! Don’t index this; index this source page instead.” So, if you want to republish a piece of content, whether exactly or slightly modified, but don’t want to risk creating duplicate content, the canonical tag is here to save the day.

Where to find rel=canonical in the page's source code.

Proper canonicalization ensures that every unique piece of content on your website has only one URL. To prevent search engines from indexing multiple versions of a single page, Google recommends having a self-referencing canonical tag on every page on your site. Without a canonical tag telling Google which version of your web page is the preferred one, https://www.example.com could get indexed separately from https://example.com, creating duplicates.
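As a sketch, a self-referencing canonical tag on the www version of a page would sit in its head section like this (the URL is a placeholder):

<link rel="canonical" href="https://www.example.com/mens-shirts/">

Any duplicate versions of that page, such as the non-www URL or a sorted/filtered variation, would point their canonical tag at this same preferred URL.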

“Avoid duplicate content” is an Internet truism, and for good reason! Google wants to reward sites with unique, valuable content — not content that’s taken from other sources and repeated across multiple pages. Because engines want to provide the best searcher experience, they will rarely show multiple versions of the same content, opting instead to show only the canonicalized version, or if a canonical tag does not exist, whichever version they deem most likely to be the original.

It’s also very common for websites to have multiple duplicate pages due to sort and filter options. For example, on an e-commerce site, you might have what’s called a faceted navigation that allows visitors to narrow down products to find exactly what they’re looking for, such as a “sort by” feature that reorders results on the product category page from lowest to highest price. This could create a URL that looks something like this: example.com/mens-shirts?sort=price_ascending. Add in more sort/filter options like color, size, material, brand, etc. and just think about all the variations of your main product category page this would create!

To learn more about different types of duplicate content, this post by Dr. Pete helps distill the different nuances.

How users interact with websites

In Chapter 1, we said that despite SEO standing for search engine optimization, SEO is as much about people as it is about search engines themselves. That’s because search engines exist to serve searchers. This goal helps explain why Google’s algorithm rewards websites that provide the best possible experiences for searchers, and why some websites, despite having qualities like robust backlink profiles, might not perform well in search.

When we understand what makes searchers’ web browsing experience optimal, we can create those experiences for maximum search performance.

Ensuring a positive experience for your mobile visitors

Given that well over half of all web traffic today comes from mobile, it’s safe to say that your website should be accessible and easy to navigate for mobile visitors. In April 2015, Google rolled out an update to its algorithm that promotes mobile-friendly pages over non-mobile-friendly pages. So how can you ensure that your website is mobile-friendly? Although there are three main ways to configure your website for mobile, Google recommends responsive web design.

Responsive design

Responsive websites are designed to fit the screen of whatever type of device your visitors are using. You can use CSS to make the web page “respond” to the device size. This is ideal because it prevents visitors from having to double-tap or pinch-and-zoom in order to view the content on your pages. Not sure if your web pages are mobile friendly? You can use Google’s mobile-friendly test to check!
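Here’s a rough sketch of the mechanics (the class name and the 600px breakpoint are arbitrary): a viewport meta tag plus a CSS media query that reflows the layout on narrow screens:

<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .product-grid { display: flex; }
  /* On screens 600px wide or narrower, stack the items vertically */
  @media (max-width: 600px) {
    .product-grid { flex-direction: column; }
  }
</style>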

A depiction of how responsive design can change the formatting on a screen, with text either beside or below an image.

AMP

AMP stands for Accelerated Mobile Pages, and it’s used to deliver content to mobile visitors at speeds much greater than with non-AMP delivery. AMP can load so quickly because it serves content from its cache servers (not the original site) and uses a special AMP version of HTML and JavaScript.

Learn more about AMP.

Mobile-first indexing

As of 2018, Google started switching websites over to mobile-first indexing. That change sparked some confusion between mobile-friendliness and mobile-first, so it’s helpful to disambiguate. With mobile-first indexing, Google crawls and indexes the mobile version of your web pages. Making your website compatible with mobile screens is good for users and your performance in search, but mobile-first indexing happens independently of mobile-friendliness.

This has raised some concerns for websites that lack parity between mobile and desktop versions, such as showing different content, navigation, links, etc. on their mobile view. A mobile site with different links, for example, will alter the way in which Googlebot (mobile) crawls your site and sends link equity to your other pages.

Improving page speed to mitigate visitor frustration

Google wants to serve content that loads lightning-fast for searchers. We’ve come to expect fast-loading results, and when we don’t get them, we’ll quickly bounce back to the SERP in search of a better, faster page. This is why page speed is a crucial aspect of on-site SEO. We can improve the speed of our web pages by taking advantage of tools like the ones we’ve mentioned below. Click on the links to learn more about each.

Images are one of the main culprits of slow pages!

As discussed in Chapter 4, images are one of the top reasons for slow-loading web pages! In addition to image compression, optimizing image alt text, choosing the right image format, and submitting image sitemaps, there are other technical ways to optimize the speed and way in which images are shown to your users. Some primary ways to improve image delivery are as follows:

There are more than just three image size versions!

It’s a common misconception that you just need a desktop, tablet, and mobile-sized version of your image. In reality, there is a huge variety of screen sizes and resolutions.

1. SRCSET: How to deliver the best image size for each device

The SRCSET attribute allows you to have multiple versions of your image and then specify which version should be used in different situations. This piece of code is added to the <img> tag (where your image is located in the HTML) to provide unique images for specific-sized devices.
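A minimal sketch of srcset in action (the file names, widths, and breakpoint are placeholders):

<img src="cake-800.jpg"
     srcset="cake-400.jpg 400w, cake-800.jpg 800w, cake-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="A slice of vanilla cake">

The browser picks whichever file best matches the device’s screen width and resolution, so a small phone never has to download the 1600-pixel version.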

This is like the concept of responsive design that we discussed earlier, except for images!

This doesn’t just speed up your image load time, it’s also a unique way to enhance your on-page user experience by providing different and optimal images to different device types.

An image depicting a desktop screen, tablet screen, and phone screen, all with different formatting of text and images.

2. Show visitors image loading is in progress with lazy loading

Lazy loading occurs when you go to a webpage and, instead of seeing a blank white space for where an image will be, a blurry lightweight version of the image or a colored box in its place appears while the surrounding text loads. After a few seconds, the image clearly loads in full resolution. The popular blogging platform Medium does this really well.

The low-resolution version loads first, followed by the full high-resolution version. This also helps to optimize your critical rendering path! While all of your other page resources are being downloaded, you’re showing a low-resolution teaser image that tells users things are happening and loading. For more information on how you should lazy load your images, check out Google’s Lazy Loading Guidance.
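One lightweight way to get similar behavior is the browser-native loading attribute, shown in this sketch; many sites instead use a JavaScript library, as Google’s guidance describes:

<!-- The browser defers fetching this image until the visitor scrolls near it -->
<img src="cake-hero.jpg" alt="Finished cake on a cake stand" loading="lazy" width="800" height="500">

Including width and height also reserves the image’s space on the page, so the layout doesn’t jump when the image finally arrives.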

Improve speed by condensing and bundling your files

Page speed audits will often make recommendations such as “minify resource,” but what does that actually mean? Minification condenses a code file by removing things like line breaks and spaces, as well as abbreviating code variable names wherever possible.

“Bundling” is another common term you’ll hear in reference to improving page speed. The process of bundling combines a bunch of the same coding language files into one single file. For example, a bunch of JavaScript files could be put into one larger file to reduce the amount of JavaScript files for a browser.

By both minifying and bundling the files needed to construct your web page, you’ll speed up your website and reduce the number of your HTTP (file) requests.
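As a before-and-after sketch (the file names are hypothetical), bundling might turn three separate script requests like this:

<script src="/js/menu.js"></script>
<script src="/js/carousel.js"></script>
<script src="/js/analytics.js"></script>

…into a single, minified request:

<script src="/js/bundle.min.js"></script>

Three HTTP requests become one, and minifying the bundled file strips out the whitespace and shortens the variable names inside it.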

Improving the experience for international audiences

Websites that target audiences from multiple countries should familiarize themselves with international SEO best practices in order to serve up the most relevant experiences. Without these optimizations, international visitors might have difficulty finding the version of your site that caters to them.

There are two main ways a website can be internationalized:

  • Language
    Sites that target speakers of multiple languages are considered multilingual websites. These sites should add something called an hreflang tag to show Google that your page has copy for another language (see the sketch after this list). Learn more about hreflang.
  • Country
    Sites that target audiences in multiple countries are called multi-regional websites and they should choose a URL structure that makes it easy to target their domain or pages to specific countries. This can include the use of a country code top level domain (ccTLD) such as “.ca” for Canada, or a generic top-level domain (gTLD) with a country-specific subfolder such as “example.com/ca” for Canada. Learn more about locale-specific URLs.
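Here’s a minimal hreflang sketch for a page that exists in English and French (the URLs are placeholders); the same set of tags goes in the head of both versions:

<link rel="alternate" hreflang="en" href="https://www.example.com/en/">
<link rel="alternate" hreflang="fr" href="https://www.example.com/fr/">
<!-- Fallback for searchers whose language isn't listed above -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">

Each version lists every alternate, including itself, so Google can match searchers in each locale to the right URL.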

LINK BUILDING & ESTABLISHING AUTHORITY

Turn up the volume.


You’ve created content that people are searching for, that answers their questions, and that search engines can understand, but those qualities alone don’t mean it’ll rank. To outrank the rest of the sites with those qualities, you have to establish authority. That can be accomplished by earning links from authoritative websites, building your brand, and nurturing an audience who will help amplify your content.

Google has confirmed that links and quality content (which we covered back in Chapter 4) are two of the three most important ranking factors for SEO. Trustworthy sites tend to link to other trustworthy sites, and spammy sites tend to link to other spammy sites.

But what is a link, exactly? How do you go about earning them from other websites? Let’s start with the basics.

What’s that word mean?

There’s a lot to remember when it comes to the wide world of link building. Check out more definitions for this section in the SEO glossary.

What are links?

Inbound links, also known as backlinks or external links, are HTML hyperlinks that point from one website to another. They’re the currency of the Internet, as they act a lot like real-life reputation. If you went on vacation and asked three people (all completely unrelated to one another) what the best coffee shop in town was, and they all said, “Cuppa Joe on Main Street,” you would feel confident that Cuppa Joe is indeed the best coffee place in town. Links do that for search engines.

Since the late 1990s, search engines have treated links as votes for popularity and importance on the web.

Internal links, or links that connect internal pages of the same domain, work very similarly for your website. A high amount of internal links pointing to a particular page on your site will provide a signal to Google that the page is important, so long as it’s done naturally and not in a spammy way.

The engines themselves have refined the way they view links, now using algorithms to evaluate sites and pages based on the links they find. But what’s in those algorithms? How do the engines evaluate all those links? It all starts with the concept of E-A-T.

You are what you E-A-T

Google’s Search Quality Rater Guidelines put a great deal of importance on the concept of E-A-T — an acronym for expert, authoritative, and trustworthy. Sites that don’t display these characteristics tend to be seen as lower-quality in the eyes of the engines, while those that do are subsequently rewarded. E-A-T is becoming more and more important as search evolves and increases the importance of solving for user intent.

Creating a site that’s considered expert, authoritative, and trustworthy should be your guiding light as you practice SEO. Not only will it simply result in a better site, but it’s future-proof. After all, providing great value to searchers is what Google itself is trying to do.

E-A-T and links to your site

The more popular and important a site is, the more weight the links from that site carry. A site like Wikipedia, for example, has thousands of diverse sites linking to it. This indicates it provides lots of expertise, has cultivated authority, and is trusted among those other sites.

To earn trust and authority with search engines, you’ll need links from websites that display the qualities of E-A-T. These don’t have to be Wikipedia-level sites, but they should provide searchers with credible, trustworthy content.

Moz has proprietary metrics to help you determine how authoritative a site is: Domain Authority, Page Authority, and Spam Score. In general, you’ll want links from sites with a higher Domain Authority than your site.

Followed vs. nofollowed links

Remember how links act as votes? The rel=nofollow attribute (pronounced as two words, “no follow”) allows you to link to a resource while removing your “vote” for search engine purposes.

Just like it sounds, “nofollow” tells search engines not to follow the link. Some engines still follow them simply to discover new pages, but these links don’t pass link equity (the “votes of popularity” we talked about above), so they can be useful in situations where a page is either linking to an untrustworthy source or was paid for or created by the owner of the destination page (making it an unnatural link).

Say, for example, you write a post about link building practices, and want to call out an example of poor, spammy link building. You could link to the offending site without signaling to Google that you trust it.

Standard links (ones that haven’t had nofollow added) look like this:

<a href="">I love Moz</a>

Nofollow link markup looks like this:

<a href="" rel="nofollow">I love Moz</a>

If follow links pass all the link equity, shouldn’t that mean you want only follow links?

Not necessarily. Think about all the legitimate places you can create links to your own website: a Facebook profile, a Yelp page, a Twitter account, etc. These are all natural places to add links to your website, but they shouldn’t count as votes for your website. (Setting up a Twitter profile with a link to your site isn’t a vote from Twitter that they like your site.)

It’s natural for your site to have a balance between nofollowed and followed backlinks in its link profile (more on link profiles below). A nofollow link might not pass authority, but it could send valuable traffic to your site and even lead to future followed links.

Use the MozBar extension for Google Chrome to highlight links on any page to find out whether they’re nofollow or follow without ever having to view the source code!

Your link profile

Your link profile is an overall assessment of all the inbound links your site has earned: the total number of links, their quality (or spamminess), their diversity (is one site linking to you hundreds of times, or are hundreds of sites linking to you once?), and more. The state of your link profile helps search engines understand how your site relates to other sites on the Internet. There are various SEO tools that allow you to analyze your link profile and begin to understand its overall makeup.

What are the qualities of a healthy link profile?

When people began to learn about the power of links, they began manipulating them for their benefit. They’d find ways to gain artificial links just to increase their search engine rankings. While these dangerous tactics can sometimes work, they are against Google’s terms of service and can get a website deindexed (removal of web pages or entire domains from search results). You should always try to maintain a healthy link profile.

A healthy link profile is one that indicates to search engines that you’re earning your links and authority fairly. Just like you shouldn’t lie, cheat, or steal, you should strive to ensure your link profile is honest and earned via good, old-fashioned hard work.

Links are earned or editorially placed

Editorial links are links added naturally by sites and pages that want to link to your website.

The foundation of acquiring earned links is almost always through creating high-quality content that people genuinely wish to reference. This is where creating 10X content (a way of describing extremely high-quality content) is essential! If you can provide the best and most interesting resource on the web, people will naturally link to it.

Naturally earned links require no specific action from you, other than the creation of worthy content and the ability to create awareness about it.

Links are relevant and from topically similar websites

Links from websites within a topic-specific community are generally better than links from websites that aren’t relevant to your site. If your website sells dog houses, a link from the Society of Dog Breeders matters much more than one from the Roller Skating Association. Additionally, links from topically irrelevant sources can send confusing signals to search engines regarding what your page is about.

Linking domains don’t have to match the topic of your page exactly, but they should be related. Avoid pursuing backlinks from sources that are completely off-topic; there are far better uses of your time.

Anchor text is descriptive and relevant, without being spammy

Anchor text helps tell Google what the topic of your page is about. If dozens of links point to a page with a variation of a word or phrase, the page has a higher likelihood of ranking well for those types of phrases. However, proceed with caution! Too many backlinks with the same anchor text could indicate to the search engines that you’re trying to manipulate your site’s ranking in search results.

Consider this: you ask ten separate friends, at separate times, how their day is going, and each responds with the same phrase:

“Great! I started my day by walking my dog, Peanut, and then had a picante beef Top Ramen for lunch.”

That’s strange, and you’d be quite suspicious of your friends. The same goes for Google. Describing the content of the target page with the anchor text helps search engines understand what the page is about, but the same description over and over from multiple sources starts to look suspicious. Aim for relevance; avoid spam.

Use the “Anchor Text” report in Moz’s Link Explorer to see what anchor text other websites are using to link to your content.

Links send qualified traffic to your site

Link building should never be solely about search engine rankings. Esteemed SEO and link building thought leader Eric Ward used to say that you should build your links as though Google might disappear tomorrow. In essence, you should focus on acquiring links that will bring qualified traffic to your website — another reason why it’s important to acquire links from relevant websites whose audience would find value in your site, as well.

Use the “Referral Traffic” report in Google Analytics to evaluate websites that are currently sending you traffic. How can you continue to build relationships with similar types of websites?

Link building don’ts & things to avoid

Spammy link profiles are just that: full of links built in unnatural, sneaky, or otherwise low-quality ways. Practices like buying links or engaging in a link exchange might seem like the easy way out, but doing so is dangerous and could put all of your hard work at risk. Google penalizes sites with spammy link profiles, so don’t give in to temptation.

A guiding principle for your link building efforts is to never try to manipulate a site’s ranking in search results.

But isn’t that the entire goal of SEO? To increase a site’s ranking in search results? And herein lies the confusion. Google wants you to earn links, not build them, but the line between the two is often blurry. To avoid penalties for unnatural links (known as “link spam”), Google has made clear what should be avoided.

🚫 Purchased links

Google and Bing both seek to discount the influence of paid links in their organic search results. While a search engine can’t know which links were earned vs. paid for just from viewing the link itself, there are clues it uses to detect patterns that indicate foul play. Websites caught buying or selling followed links risk severe penalties that can drop their rankings significantly. (By the way, exchanging goods or services for a link is also a form of payment and qualifies as buying links.)

🚫 Link exchanges / reciprocal linking

If you’ve ever received a “you link to me and I’ll link to you” email from someone you have no affiliation with, you’ve been targeted for a link exchange. Google’s quality guidelines caution against “excessive” link exchange and similar partner programs conducted exclusively for the sake of cross-linking, so there is some indication that this type of exchange on a smaller scale might not trigger any link spam alarms.

It is acceptable, and even valuable, to link to people you work with, partner with, or have some other affiliation with and have them link back to you.

It’s the exchange of links at mass scale with unaffiliated sites that can warrant penalties.

🚫 Low-quality directory links

These used to be a popular source of manipulation. A large number of pay-for-placement web directories exist to serve this market and pass themselves off as legitimate, with varying degrees of success. These types of sites tend to look very similar, with large lists of websites and their descriptions (typically, the site’s critical keyword is used as the anchor text to link back to the submitter’s site).

There are many more manipulative link building tactics that search engines have identified. In most cases, they have found algorithmic methods for reducing their impact. As new spam systems emerge, engineers will continue to fight them with targeted algorithms, human reviews, and the collection of spam reports from webmasters and SEOs. By and large, it isn’t worth finding ways around them.

Links should always:

  • Be earned/editorial
  • Come from authoritative pages
  • Increase with time
  • Come from topically relevant sources
  • Use relevant, natural anchor text
  • Bring qualified traffic to your site
  • Be a healthy mix of follow and nofollow
  • Be strategically targeted or naturally earned

How to build high-quality backlinks

Link building comes in many shapes and sizes, but one thing is always true: link campaigns should always match your unique goals. With that said, there are some popular methods that tend to work well for most campaigns. This is not an exhaustive list, so visit Moz’s blog posts on link building for more detail on this topic.

Find customer and partner links

If you have partners you work with regularly, or loyal customers that love your brand, there are ways to earn links from them with relative ease. You might send out partnership badges (graphic icons that signify mutual respect), or offer to write up testimonials of their products. Both of those offer things they can display on their website along with links back to you.

Publish a blog

This content and link building strategy is so popular and valuable that it’s one of the few recommended personally by the engineers at Google. Blogs have the unique ability to contribute fresh material on a consistent basis, generate conversations across the web, and earn listings and links from other blogs.

Careful, though — you should avoid low-quality guest posting just for the sake of link building. Google has advised against this and your energy is better spent elsewhere.

Create unique resources

Creating unique, high-quality resources is no easy task, but it’s well worth the effort. High quality content that is promoted in the right ways can be widely shared. It can help to create pieces that have the following traits:

Creating a resource like this is a great way to attract a lot of links with one page. You could also create a highly specific resource, with narrower appeal, that targets a handful of websites. You might see a higher rate of success, but that approach isn’t as scalable.

Users who see this kind of unique content often want to share it with friends, and bloggers/tech-savvy webmasters who see it will often do so through links. These high-quality, editorially earned votes are invaluable to building trust, authority, and rankings potential.

Build resource pages

Resource pages are a great way to build links. However, to find them you’ll want to know some advanced Google operators to make discovering them a bit easier.

For example, if you were doing link building for a company that made pots and pans, you could search for:

cooking intitle:"resources"

…and see which pages might be good link targets.

This can also give you great ideas for content creation — just think about which types of resources you could create that these pages would all like to reference and link to.

Get involved in your local community

For a local business (one that meets its customers in person), community outreach can result in some of the most valuable and influential links.

  • Engage in sponsorships and scholarships
  • Host or participate in community events, seminars, workshops, and organizations
  • Donate to worthy local causes and join local business associations
  • Post jobs and offer internships
  • Promote loyalty programs
  • Run a local competition
  • Develop real-world relationships with related local businesses to discover how you can team up to improve the health of your local economy

Refurbish top content

You likely already know which of your site’s content earns the most traffic, converts the most customers, or retains visitors for the longest amount of time.

Take that content and refurbish it for other platforms (Slideshare, YouTube, Instagram, Quora, etc.) to expand your acquisition funnel beyond Google.

You can also dust off, update, and simply republish older content on the same platform. If you discover that a few trusted industry websites all linked to a popular resource that’s gone stale, update it and let those industry websites know — you may just earn a good link.

You can also do this with images. Reach out to websites that are using your images without citing or linking back to you, and ask if they’d mind including a link.

Be newsworthy

Earning the attention of the press, bloggers, and news media is an effective, time-honored way to earn links. Sometimes this is as simple as giving something away for free, releasing a great new product, or stating something controversial. Since so much of SEO is about creating a digital representation of your brand in the real world, to succeed in SEO, you have to be a great brand.

Be personal and genuine

The most common mistake new SEOs make when trying to build links is not taking the time to craft a custom, personal, and valuable initial outreach email. You know as well as anyone how annoying spammy emails can be, so make sure yours doesn’t make people roll their eyes.

Your goal for an initial outreach email is simply to get a response. These tips can help:

  • Make it personal by mentioning something the person is working on, where they went to school, their dog, etc.
  • Provide value. Let them know about a broken link on their website or a page that isn’t working on mobile.
  • Keep it short.
  • Ask one simple question (typically not for a link; you’ll likely want to build a rapport first).

Measuring and improving your link efforts

So far, we’ve gone over the importance of earning quality links to your site over time, as well as some common tactics for doing so. Now, we’ll cover ways to measure the returns on your link building investment and strategies for sustaining quality backlink growth over time.

Total number of links

The most direct way to measure your link building efforts is by tracking the growth of total links to your site or page. Moz’s Link Explorer is a great tool for doing that. For example, say you recently published a blog post that received a lot of attention and you want to track total links that resource earned.

Pop the URL into Link Explorer…

A screenshot of the Link Explorer main page.

And then select “linking domains” from the “metrics over time” section to see month-over-month link growth.

A screenshot of the Metrics Over Time graph in Link Explorer.

You could also do this with a root domain, subdomain, or a specific page.

If you didn’t see the number of backlinks come in that you were aiming for, all hope is not lost! Each link building campaign is something you can learn from. If you want to improve the total links you earn for your next campaign, consider these questions:

Did you create content that was 10x better than anything else out there?

It’s possible that the reason your link building efforts fell flat is that your content wasn’t substantially more valuable than anything else like it. Take a look back at the pages ranking for that term you’re targeting and see if there’s anything else you could do to improve.

Did you promote your content? How?

Promotion is perhaps one of the most difficult aspects of link building, but letting people know about your content and convincing them to link to you is what’s really going to move the needle. For great tips on content promotion, visit Chapter 7 of our Beginner’s Guide to Content Marketing.

How many links do you actually need?

Consider how many backlinks you might actually need to rank for the keyword you were targeting. In Keyword Explorer’s “SERP Analysis” report, you can view the pages that are ranking for the term you’re targeting, as well as how many backlinks those URLs have. This will give you a good benchmark for determining how many links you actually need in order to compete and which websites might be a good link target.

What was the quality of the links you received?

One link from a very authoritative source is more valuable than ten from low-quality sites, so keep in mind that quantity isn’t everything. When targeting sites for backlinks, you can prioritize by how authoritative they are using Domain Authority and Page Authority metrics.

Beyond links: How awareness, amplification, and sentiment impact authority

A lot of the methods you’d use to build links will also indirectly build your brand. In fact, you can view link building as a great way to increase awareness of your brand, the topics on which you’re an authority, and the products or services you offer.

Once your target audience is familiar with you and you have valuable content to share, let your audience know about it! Sharing your content on social platforms will not only make your audience aware of your content, but it can also encourage them to amplify that awareness to their own networks, thereby extending your own reach.

Are social shares the same as links? No. But shares to the right people can result in links. Social shares can also promote an increase in traffic and new visitors to your website, which can grow brand awareness, and with a growth in brand awareness can come a growth in trust and links. The connection between social signals and rankings seems indirect, but even indirect correlations can be helpful for informing strategy.

Trustworthiness goes a long way

For search engines, trust is largely determined by the quality and quantity of the links your domain has earned, but that’s not to say that there aren’t other factors at play that can influence your site’s authority. Think about all the different ways you come to trust a brand:

  • Awareness (you know they exist)
  • Helpfulness (they provide answers to your questions)
  • Integrity (they do what they say they will)
  • Quality (their product or service provides value, possibly more than others you’ve tried)
  • Continued value (they continue to provide value even after you’ve gotten what you needed)
  • Voice (they communicate in unique, memorable ways)
  • Sentiment (others have good things to say about their experience with the brand)

That last point is what we’re going to focus on here. Reviews of your brand, its products, or its services can make or break a business.

In your effort to establish authority from reviews, follow these review rules of thumb:

  • Never pay any individual or agency to create a fake positive review for your business or a fake negative review of a competitor.
  • Don’t review your own business or the businesses of your competitors. Don’t have your staff do so, either.
  • Never offer incentives of any kind in exchange for reviews.
  • All reviews must be left directly by customers in their own accounts; never post reviews on behalf of a customer or employ an agency to do so.
  • Don’t set up a review station/kiosk in your place of business; many reviews stemming from the same IP can be viewed as spam.
  • Read the guidelines of each review platform where you’re hoping to earn reviews.

Be aware that review spam is a problem that’s taken on global proportions, and that violation of governmental truth-in-advertising guidelines has led to legal prosecution and heavy fines. It’s just too dangerous to be worth it. Playing by the rules and offering exceptional customer experiences is the winning combination for building both trust and authority over time.

Authority is built when brands are doing great things in the real-world, making customers happy, creating and sharing great content, and earning links from reputable sources.
