This means what is considered best practice is often in flux. What was good counsel yesterday may not be so today.
This is especially true for sitemaps, which are almost as old as SEO itself.
The problem is, when every man and their dog has posted answers in forums, published recommendations on blogs and amplified opinions with social media, it takes time to sort valuable advice from misinformation.
So while most of us share a general understanding that submitting a sitemap to Google Search Console is important, you may not know the intricacies of how to implement them in a way that drives SEO key performance indicators (KPIs).
Let’s clear up the confusion around best practices for sitemaps today.
What Is an XML Sitemap
In simple terms, an XML sitemap is a list of your website’s URLs.
It acts as a roadmap to tell search engines what content is available and how to reach it.
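To make this concrete, here is what a sitemap for a hypothetical nine-page site might look like (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/page-2</loc></url>
  <url><loc>https://www.example.com/page-3</loc></url>
  <!-- ...pages 4 through 8... -->
  <url><loc>https://www.example.com/page-9</loc></url>
</urlset>
```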
In the example above, a search engine will find all nine pages in a sitemap with one visit to the XML sitemap file.
On the website, it will have to jump through five internal links to find page 9.
This ability of an XML sitemap to assist crawlers in faster indexation is especially important for websites that:
Have thousands of pages and/or a deep website architecture.
Frequently add new pages.
Frequently change content of existing pages.
Suffer from weak internal linking and orphan pages.
Lack a strong external link profile.
@nishanthstephen generally anything you put in a sitemap will be picked up sooner
Side note: Submitting a sitemap with noindex URLs can also speed up deindexation. This can be more efficient than removing URLs in Google Search Console if you have many to be deindexed. But use this with care and be sure you only add such URLs temporarily to your sitemaps.
Even though search engines can technically find your URLs without it, by including pages in an XML sitemap you’re indicating that you consider them to be quality landing pages.
While there is no guarantee that an XML sitemap will get your pages crawled, let alone indexed or ranked, submitting one certainly increases your chances.
XML Sitemap Format
A one-page site using all available tags would have this XML sitemap:
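A sketch, with placeholder values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2019-08-21T16:12:20+03:00</lastmod>
    <changefreq>monthly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>
```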
But how should an SEO use each of these tags? Is all the metadata valuable?
Loc (a.k.a. Location) Tag
This compulsory tag contains the absolute, canonical version of the URL location.
It should accurately reflect your site protocol (http or https) and if you have chosen to include or exclude www.
For international websites, this is also where you can implement your hreflang handling.
By using the xhtml:link attribute to indicate the language and region variants for each URL, you move hreflang annotations out of the page itself, reducing page load time – something the alternative implementations via link elements in the <head> or HTTP headers can’t offer.
Yoast has an epic post on hreflang for those wanting to learn more.
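As a sketch, a two-language site might annotate its URLs like this (domains and language codes are illustrative – note each URL lists every variant, including itself):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en/page/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/page/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page/"/>
  </url>
  <url>
    <loc>https://www.example.com/de/page/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/page/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page/"/>
  </url>
</urlset>
```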
Lastmod (a.k.a. Last Modified) Tag
An optional but highly recommended tag used to communicate the date and time the page was last modified.

John Mueller has acknowledged that Google uses the lastmod metadata to understand when a page last changed and whether it should be crawled, contradicting advice from Gary Illyes in 2015.
The URL + last modification date is what we care about for websearch.
Your website needs an XML sitemap, but not necessarily the priority and change frequency metadata.
Use the lastmod tags accurately and focus your attention on ensuring you have the right URLs submitted.
Types of Sitemaps
There are many different types of sitemaps. Let’s look at the ones you actually need.
XML Sitemap Index
XML sitemaps have a couple of limitations:
A maximum of 50,000 URLs.
An uncompressed file size limit of 50MB.
Sitemaps can be compressed using gzip (the file name would become something similar to sitemap.xml.gz) to save bandwidth for your server. But once unzipped, the sitemap still can’t exceed either limit.
Whenever you exceed either limit, you will need to split your URLs across multiple XML sitemaps.
Those sitemaps can then be combined into a single XML sitemap index file, often named sitemap-index.xml. Essentially, a sitemap for sitemaps.
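A minimal sitemap index looks like this (file names and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
    <lastmod>2019-08-21</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-posts.xml</loc>
    <lastmod>2019-08-20</lastmod>
  </sitemap>
</sitemapindex>
```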
For exceptionally large websites that want to take a more granular approach, you can also create multiple sitemap index files.
But be aware that you cannot nest sitemap index files.
For search engines to easily find every one of your sitemap files at once, you will want to:
Submit your sitemap index(es) to Google Search Console and Bing Webmaster Tools.
Specify your sitemap index URL(s) in your robots.txt file, pointing search engines directly to your sitemap as you welcome them to crawl.
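For example, a robots.txt file referencing a sitemap index might look like this (the URL is a placeholder):

```
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap-index.xml
```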
You can also submit sitemaps by pinging them to Google.
Google no longer pays attention to hreflang entries in “unverified sitemaps”, which Tom Anthony believes to mean those submitted via the ping URL.
XML Image Sitemap
Image sitemaps were designed to improve indexation of image content.
In modern-day SEO, however, images are embedded within page content, so they will be crawled along with the page URL.
Moreover, it’s best practice to utilize JSON-LD schema.org/ImageObject markup to call out image properties to search engines as it provides more attributes than an image XML sitemap.
Because of this, an XML image sitemap is unnecessary for most websites. Including an image sitemap would only waste crawl budget.
The exception to this is if images help drive your business, such as a stock photo website or ecommerce site gaining product page sessions from Google Image search.
Know that images don’t have to be on the same domain as your website to be submitted in a sitemap. You can use a CDN as long as it’s verified in Search Console.
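As a sketch, the ImageObject markup mentioned above might look like this for a product photo (all values are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://cdn.example.com/images/product-photo.jpg",
  "name": "Product photo",
  "description": "Front view of the product on a white background",
  "license": "https://www.example.com/image-license"
}
```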
XML Video Sitemap
Similar to images, if videos are critical to your business, submit an XML video sitemap. If not, a video sitemap is unnecessary.
Save your crawl budget for the page the video is embedded into, ensuring you markup all videos with JSON-LD as a schema.org/VideoObject.
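A hedged sketch of that VideoObject markup, with placeholder values:

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "How to submit an XML sitemap",
  "description": "A short walkthrough of sitemap submission.",
  "thumbnailUrl": "https://www.example.com/thumbs/sitemap-video.jpg",
  "uploadDate": "2019-08-21",
  "duration": "PT2M30S",
  "contentUrl": "https://www.example.com/videos/sitemap-video.mp4"
}
```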
Google News Sitemap
Only sites registered with Google News should use this sitemap.
If you are, include articles published in the last two days, up to a limit of 1,000 URLs per sitemap, and update with fresh articles as soon as they’re published.
Contrary to some online advice, Google News sitemaps don’t support image URLs.

Google recommends using schema.org image or og:image to specify your article thumbnail for Google News.
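A minimal Google News sitemap entry, with placeholder values, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>https://www.example.com/news/article-slug</loc>
    <news:news>
      <news:publication>
        <news:name>Example News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2019-08-21T08:00:00+00:00</news:publication_date>
      <news:title>Example Article Headline</news:title>
    </news:news>
  </url>
</urlset>
```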
Mobile Sitemap

This is not needed for most websites.
Why? Because Mueller confirmed mobile sitemaps are for feature phone pages only, not for smartphone compatibility.

So unless you have unique URLs specifically designed for feature phones, a mobile sitemap will be of no benefit.
HTML Sitemap

XML sitemaps take care of search engine needs. HTML sitemaps were designed to help human users find content.
The question becomes: if you have a good user experience and well-crafted internal links, do you need an HTML sitemap?
Check the page views of your HTML sitemap in Google Analytics. Chances are, it’s very low. If it isn’t, that’s a good indication that you need to improve your website navigation.
HTML sitemaps are generally linked in website footers, taking link equity from every single page of your website.

Ask yourself: is that the best use of that link equity? Or are you including an HTML sitemap as a nod to legacy website best practices?

If few humans use it, and search engines don’t need it because you have strong internal linking and an XML sitemap, does that HTML sitemap have a reason to exist? I would argue no.
Dynamic XML Sitemap
Static sitemaps are simple to create using a tool such as Screaming Frog.
The problem is, as soon as you create or remove a page, your sitemap is outdated. If you modify the content of a page, the sitemap won’t automatically update the lastmod tag.
So unless you love manually creating and uploading sitemaps for every single change, it’s best to avoid static sitemaps.
Dynamic XML sitemaps, on the other hand, are automatically updated by your server to reflect relevant website changes as they occur.
To create a dynamic XML sitemap:
Ask your developer to code a custom script, being sure to provide clear specifications
Use a dynamic sitemap generator tool
Install a plugin for your CMS, for example the Yoast SEO plugin for WordPress
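If you go the custom-script route, the core of such a script is straightforward. Here is a minimal Python sketch – the build_sitemap function and sample data are hypothetical, and in practice the page list would come from your CMS database so the file reflects changes as they occur:

```python
from datetime import date
from xml.sax.saxutils import escape

def build_sitemap(pages):
    """Build sitemap XML from (url, lastmod) pairs.

    `pages` is a list of (url, lastmod_date) tuples; a real dynamic
    sitemap would pull these from a database query on each request
    (or on a schedule), so the lastmod values stay accurate.
    """
    entries = []
    for url, lastmod in pages:
        entries.append(
            "  <url>\n"
            f"    <loc>{escape(url)}</loc>\n"
            f"    <lastmod>{lastmod.isoformat()}</lastmod>\n"
            "  </url>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries)
        + "\n</urlset>"
    )

# Placeholder data standing in for a CMS query.
pages = [
    ("https://www.example.com/", date(2019, 8, 21)),
    ("https://www.example.com/blog/", date(2019, 8, 20)),
]
print(build_sitemap(pages))
```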
Dynamic XML sitemaps and a sitemap index are modern best practice. Mobile and HTML sitemaps are not.
Use image, video and Google News sitemaps only if improved indexation of these content types drive your KPIs.
XML Sitemap Indexation Optimization
Now for the fun part: how do you use XML sitemaps to drive SEO KPIs?
Only Include SEO Relevant Pages in XML Sitemaps
An XML sitemap is a list of pages you recommend to be crawled, which isn’t necessarily every page of your website.
A search spider arrives at your website with an “allowance” for how many pages it will crawl.
The XML sitemap indicates you consider the included URLs to be more important than those that aren’t blocked but aren’t in the sitemap.
You are using it to tell search engines “I’d really appreciate it if you’d focus on these URLs in particular.”
Essentially, it helps you use crawl budget effectively.
By including only SEO relevant pages, you help search engines crawl your site more intelligently in order to reap the benefits of better indexation.
You should exclude:
Parameter or session ID based URLs.
Site search result pages.
Reply to comment URLs.
Share via email URLs.
URLs created by filtering that are unnecessary for SEO.
Any redirections (3xx), missing pages (4xx) or server error pages (5xx).
Pages blocked by robots.txt.
Pages with noindex.
Resource pages accessible by a lead gen form (e.g., white paper PDFs).
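If you generate your sitemaps programmatically, the exclusion rules above can be expressed as a simple filter. This Python sketch is a hypothetical illustration of the list (the should_exclude function and its checks are assumptions, not a complete implementation):

```python
def should_exclude(url, status_code, robots_meta, blocked_by_robots=False):
    """Return True if a URL should be left out of the XML sitemap.

    `status_code` is the URL's HTTP status, `robots_meta` is the
    content of the page's robots meta tag ("" if absent), and
    `blocked_by_robots` flags URLs disallowed in robots.txt.
    """
    if status_code >= 300:          # redirects (3xx), missing (4xx), errors (5xx)
        return True
    if blocked_by_robots:           # blocked by robots.txt
        return True
    if "noindex" in robots_meta.lower():
        return True
    if "?" in url or ";jsessionid=" in url.lower():  # parameter / session ID URLs
        return True
    return False

print(should_exclude("https://www.example.com/page", 200, ""))           # False
print(should_exclude("https://www.example.com/search?q=shoes", 200, "")) # True
print(should_exclude("https://www.example.com/old-page", 301, ""))       # True
```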
I want to share an example from Michael Cottam about prioritizing pages:
Say your website has 1,000 pages. 475 of those 1,000 pages are SEO relevant content. You highlight those 475 pages in an XML sitemap, essentially asking Google to deprioritize indexing the remainder.
Now, let’s say Google crawls those 475 pages, and algorithmically decides that 175 are “A” grade, 200 are “B+”, and 100 “B” or “B-”. That’s a strong average grade, and probably indicates a quality website to which to send users.
Contrast that against submitting all 1,000 pages via the XML sitemap. Now, Google looks at the 1,000 pages you say are SEO relevant content, and sees over 50 percent are “D” or “F” pages. Your average grade isn’t looking so good anymore and that may harm your organic sessions.
But remember, Google is going to use your XML sitemap only as a clue to what’s important on your site.
Just because it’s not in your XML sitemap doesn’t necessarily mean that Google won’t index those pages.
When it comes to SEO, overall site quality is a key factor.
To assess the quality of your site, turn to the sitemap related reporting in Google Search Console (GSC).
Manage crawl budget by limiting XML sitemap URLs only to SEO relevant pages and invest time to reduce the number of low-quality pages on your website.
Fully Leverage Sitemap Reporting
The sitemaps section in the new Google Search Console is not as data rich as what was previously offered.
Its primary use now is to confirm your sitemap index has been successfully submitted.
If you have chosen to use descriptive naming conventions, rather than numeric, you can also get a feel for the number of different types of SEO pages that have been “discovered” – aka all URLs found by Google via sitemaps as well as other methods such as following links.
In the new GSC, the more valuable area for SEOs in regard to sitemaps is the Index Coverage report.
The report will default to “All known pages”. Here you can:
Address any “Error” or “Valid with warnings” issues. These often stem from conflicting robots directives. Once solved, be sure to validate your fix via the Coverage report.
Look at indexation trends. Most sites are continually adding valuable content, so “Valid” pages (aka those indexed by Google) should steadily increase. Understand the cause of any dramatic changes.
Afterwards, limit the report to the SEO relevant URLs you have included in your sitemap by changing the drop down to “All submitted pages”. Then check the details of all “Excluded” pages.
Reasons for exclusion of sitemap URLs can be put into four action groups:
Quick wins: For duplicate content, canonicals, robots directives, 4XX HTTP status codes, redirects or legalities exclusions, put the appropriate fix in place.
Investigate page: For both “Submitted URL dropped” and “Crawl anomaly” exclusions investigate further by using the Fetch as Google tool.
Improve page: For “Crawled – currently not indexed” pages, review the page (or page type as generally it will be many URLs of a similar breed) content and internal links. Chances are, it’s suffering from thin content, unoriginal content or is orphaned.
Improve domain: For “Discovered – currently not indexed” pages, Google notes the typical reason for exclusion as they “tried to crawl the URL but the site was overloaded”. Don’t be fooled. It’s more likely that Google decided the URL is not worth the effort to crawl due to poor internal linking or low content quality seen across the domain. If you see a large number of these exclusions, review the SEO value of the pages (or page types) you have submitted via sitemaps, focus on optimizing crawl budget, and review your information architecture, including parameters, from both a link and content perspective.
Whatever your plan of action, be sure to note down benchmark KPIs.
The most useful metric to assess the impact of sitemap optimization efforts is the “All submitted pages” indexation rate – calculated as the percentage of “Valid” (indexed) pages out of the total URLs submitted.
Work to get this above 80%.
Why not 100%? Because if you have focused all your energy on ensuring every SEO relevant URL you currently have is indexed, you have likely missed opportunities to expand your content coverage.
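The calculation itself is simple. A quick Python illustration with made-up numbers:

```python
# Figures from the "All submitted pages" view of the Index
# Coverage report (made-up numbers for illustration only).
valid_pages = 410      # URLs Google reports as "Valid" (indexed)
submitted_pages = 475  # total URLs submitted via your sitemaps

indexation_rate = valid_pages / submitted_pages * 100
print(f"Indexation rate: {indexation_rate:.1f}%")  # above the 80% target
```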
Note: If you run a larger website and have chosen to break it down into multiple sitemap indexes, you will be able to filter the report by those indexes. This will allow you to:
See the overview chart on a more granular level.
See a larger number of relevant examples when investigating a type of exclusion.
Tackle indexation rate optimization section by section.
In addition to identifying warnings and errors, you can use the Index Coverage report as an XML sitemap sleuthing tool to isolate indexation problems.
XML Sitemap Best Practice Checklist
Do invest time to:
✓ Include hreflang tags in XML sitemaps
✓ Include the <loc> and <lastmod> tags
✓ Compress sitemap files using gzip
✓ Use a sitemap index file
✓ Use image, video and Google News sitemaps only if indexation drives your KPIs
✓ Dynamically generate XML sitemaps
✓ Ensure URLs are included only in a single sitemap
✓ Reference sitemap index URLs in robots.txt
✓ Submit sitemap index to both Google Search Console and Bing Webmaster Tools
✓ Include only SEO relevant pages in XML sitemaps
✓ Fix all errors and warnings
✓ Analyze trends and types of valid pages
✓ Calculate submitted pages indexation rates
✓ Address causes of exclusion for submitted pages
Now, go check your own sitemap and make sure you’re doing it right.
Featured Image: Paulo Bobita. All screenshots taken by author.
So what are your site’s standout elements? Here are a couple of examples:
Recipe ingredients (if your website is about cooking).
Your company’s operating hours.
It’s pretty obvious to you how important these elements are to your site.
But how can you make search engines understand that?
The answer is with schema markup and rich snippets.
What Are Schema Markup & Rich Snippets?
Rich snippets are the extra pieces of information that appear around your website’s link.
Schema markup is code added to your site’s HTML, using the schema.org vocabulary, that search engines read to create rich snippets.
Take a look at this example.
Let’s say you want to buy a Darth Maul toy lightsaber.
You go on Google and type “Star Wars Darth Maul Lightsaber.”
The photo above shows you the results.
As you can see, you don’t only get links to the different websites selling the toy.
You also get a bunch of additional information to help you decide which link to click.
Amazon shows you star ratings and reviews.
Entertainment Earth includes the toy’s price, length, height, and weight.
Valuable pieces of information, right?
These are your rich snippets.
And if you add them to your site, you can naturally boost your visibility on Google’s SERPs.
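Under the hood, those snippets come from structured data on the page. As a sketch, Product markup with a price and rating might look like this in JSON-LD (all values are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Star Wars Darth Maul Toy Lightsaber",
  "image": "https://www.example.com/images/darth-maul-lightsaber.jpg",
  "offers": {
    "@type": "Offer",
    "price": "29.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "512"
  }
}
```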
How Rich Snippets Boost Your SEO Ranking
Technically, rich snippets don’t directly affect your site’s ranking on Google’s SERPs.
However, they can still get you ranked higher on Google.
For instance, let’s say you want to bake a Black Forest cake for a loved one’s birthday.
You go on Google and type “Black Forest Cake Recipe.”
Here are the results you get:
As a busy cake-lover, which recipe would you most likely click? One that is rated 5 stars and takes 2 hours and 45 minutes to bake? Or one that is rated 4.9 stars and takes 3 hours and 45 minutes to bake?
Naturally, you’d choose the first option.
The star rating, votes, and description (all rich snippets) encourage you to go to livforcake.com for your recipe.
This is how your rich snippets increase your click-through rate.
And the higher your click-through rate, the more likely Google will notice you.
So how do you add rich snippets to your site?
The Top 5 Schema Plugins for WordPress
The great news is if you’re using WordPress, you don’t have to touch a single line of code.
Simply choose a schema plugin, and creating rich snippets will be easy.
Here are the top five that marketers love.
1. Schema Pro
Schema Pro makes adding rich snippets to your site fast and easy.
In a matter of minutes, you can add your preferred configurations to all your pages and posts.
Schema Pro supports 13 useful schema types. These are:
Reviews (music, movies, products, books, etc.).
Recipes (you can create your own attractive schema rich card that’ll boost your click-through rate).
Software applications (add reviews and star ratings to give your applications a boost).
Products (give searchers detailed information on what you’re selling).
Articles (news, blogs, etc.).
Price: $79/month or $249/lifetime
2. All in One Schema Rich Snippets
All in One Schema Rich Snippets is one of the simplest plugins you can find for schema markup.
Although it’s simple, it provides you with snippets for reviews, ratings, events, articles, and software applications.
One great thing about this plugin is you can use it for free.
It doesn’t have a ton of fancy designs to select from, but it does have the basics you need for rich snippets on your site.
The downside of using this plugin is it doesn’t support automation.
You’ll have to add schema markup to each page of your site manually.
What’s interesting about All in One Schema Rich Snippets is it’s made by the same maker as Schema Pro.
In fact, its dashboard heavily advertises Schema Pro.
So how do you choose between the two?
Use Schema Pro if you’re running a bigger online business.
The price will be worth it for the automation, unique designs, and schema markup support for local businesses.
If you’re just starting out and want to try something for free?
Go with All in One Schema Rich Snippets.
3. Schema and Structured Data for WP & AMP
Schema and Structured Data for WP & AMP supports 33 schema types.
Three unique ones include:
How To (list the steps in your how-to article to be featured in your rich snippets)
Q&A (if your article is in a question and answer format, you can feature the most relevant questions and answers in your rich snippets)
Audio object (add details about audio you upload like date of upload, length, etc. to your rich snippets)
The best part? If the schema type you’re looking for isn’t part of the 33 this plugin offers, you can request a customized type!
Here are other features this plugin offers:
Reviews pulled from over 75 sites.
Your own customized review rating boxes with schema markup.
Compatibility with other schema plugins like SEO Pressor and WP SEO Schema.
I was recently helping one of my team members diagnose a new prospective customer site to find some low hanging fruit to share with them.
When I checked their home page with our Chrome extension, I found a misplaced canonical tag. We added this type of detection a long time ago when I first encountered the issue.
What is a misplaced SEO tag, you might ask?
Most SEO tags like the title, meta description, canonical, etc. belong in the HTML HEAD. If they get placed in the HTML BODY, Google and other search engines will ignore them.
If you go to the Elements tab, you will find the SEO tags inside the <BODY> tag. But, these tags are supposed to be in the <HEAD>!
Why does something like this happen?
If we check the page using VIEW SOURCE, the canonical tag is placed correctly inside the HTML HEAD (on line 56, while the <BODY> starts on line 139).
What is happening here?!
Is this an issue with Google Chrome?
The canonical is also placed in the BODY in Firefox.
We have the same issue with Internet Explorer.
Edge is no exception.
We have the same problem with other browsers.
HTML parsing vs. syntax highlighting
Why is the canonical placed correctly when we check VIEW SOURCE, but not when we check it in the Elements tab?
In order to understand this, I need to introduce a couple of developer concepts: lexical analysis and syntax analysis.
When we load a source page using VIEW SOURCE, the browser automatically color codes programming tokens (HTML tags, HTML comments, etc).
In order to do this, the browser performs basic lexical analysis to break the source page into HTML tokens.
This task is typically performed by a lexer. It is a simple, low-level task.
All programming language compilers and interpreters use a lexer that can break source text into language tokens.
When we load the source page with the Elements tab, the browser not only does syntax highlighting, but it also builds a DOM tree.
In order to build a DOM tree, it is not enough to know HTML tags and comments from regular text, you also need to know when a tag opens and closes, and their place in the tree hierarchy.
This syntactic analysis requires a parser.
An English spellchecker needs to perform a similar, two-phased analysis of the written text. First, it needs to translate text into nouns, pronouns, adverbs, etc. Then, it needs to apply grammar rules to make sure the part of speech tags are in the right order.
But why are the SEO tags placed in the HTML body?
Parsing HTML from Python
I wrote a Python script to fetch and parse some example pages with errors, find the canonical anywhere in the HTML, and print the DOM path where it was found.
After parsing the same page that shows misplaced SEO tags in the HTML Body, I find them correctly placed in the HTML head.
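The author used lxml; as an illustration, here is a minimal stand-in using Python’s standard-library html.parser, which likewise does not repair broken markup (the CanonicalFinder class and sample document are hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Report where the canonical link sits in the raw HTML.

    Unlike a browser, this parser does not fix invalid markup, so a
    canonical written inside <head> is reported inside <head>, even
    when invalid tags would make a browser close the head early.
    """
    def __init__(self):
        super().__init__()
        self.path = []        # stack of currently open tags
        self.found_at = None  # DOM-style path to the canonical

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.found_at = "/".join(self.path + [tag])
        if tag not in ("link", "meta", "br", "img"):  # skip void tags
            self.path.append(tag)

    def handle_endtag(self, tag):
        if self.path and self.path[-1] == tag:
            self.path.pop()

doc = """<html><head>
<span>injected by a misplaced script</span>
<link rel="canonical" href="https://www.example.com/">
</head><body><p>Hello</p></body></html>"""

finder = CanonicalFinder()
finder.feed(doc)
print(finder.found_at)  # html/head/link
```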
What are we missing?
Invalid tags in the HTML head
Some HTML tags are only valid in the HTML BODY. For example, <DIV> and <SPAN> tags are invalid in the HTML head.
When I looked closely at the HTML HEAD in our example, I found a script with a hardcoded <SPAN>. This means, the script was meant to be placed in the <BODY>, but the user incorrectly placed it in the head.
Maybe the instructions were not clear, the vendor omitted this information or the user didn’t know how to do this in WordPress.
I tested by moving the script to the BODY but still faced the misplaced canonical issue.
After a bit of trial and error, I found another script that when I moved it to the BODY, the issue disappeared.
While the second script didn’t have any hardcoded invalid tags, it was likely writing one or more to the DOM.
In other words, it was doing it dynamically.
But why would inserting invalid tags cause the browser to push the rest of the HTML in the head to the body?
Web browser error tolerance
I created a few example HTML files with the problems I discussed and loaded them in Chrome to show you what happens.
In the first example, I commented out the opening BODY tag. This removes it.
Here you see that if a script writes an invalid tag in the HTML head, it will cause the browser to close it early as before. We have exactly the same problem!
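You can reproduce this with a small test file. In the sketch below, a <div> (invalid inside the head) causes the browser’s parser to close the head early, so the canonical that follows lands in the <body> of the DOM tree:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Misplaced canonical demo</title>
  <!-- A <div> is not allowed in <head>. Browsers close the head here... -->
  <div>injected widget markup</div>
  <!-- ...so this canonical ends up in the <body> in the Elements tab. -->
  <link rel="canonical" href="https://www.example.com/">
</head>
<body>
  <p>Check the Elements tab: the canonical is inside BODY.</p>
</body>
</html>
```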
We didn’t see the problem with our Python parser because lxml (the Python parsing library) doesn’t try to fix HTML errors.
Why do browsers do this?
Browsers need to render pages, which our Python script doesn’t have to do. If they tried to render before correcting mistakes, the pages would look completely broken.
The web is full of pages that would completely break if web browsers didn’t accommodate errors.
This article from HTML5Rocks provides a fascinating look inside web browsers and helps explain the behavior we see in our examples.
“The HTML5 specification does define some of these requirements. (WebKit summarizes this nicely in the comment at the beginning of the HTML parser class.)
Unfortunately, we have to handle many HTML documents that are not well-formed, so the parser has to be tolerant about errors.
We have to take care of at least the following error conditions:
The element being added is explicitly forbidden inside some outer tag. In this case, we should close all tags up to the one which forbids the element, and add it afterward.
Please read the full article, or at least the section on “Browser’s Error Tolerance”, to get better context.
How to fix this
Fortunately, fixing this problem is actually very simple. We have two alternatives: a lazy one and a proper one.
The proper fix is to track down scripts that insert invalid HTML tags in the head and move them to the HTML body.
The lazy and quickest fix is to move all SEO tags (and other important tags) before any third party scripts. Preferably, right after the opening <HEAD> tag.
You can see how I do it here.
We still have the same invalid tag and script in the HTML head and the SEO tags are also in the head.
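As a sketch, the lazy fix looks like this (the vendor script URL is a placeholder):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- SEO-critical tags first, before any third-party scripts. -->
  <title>Page title</title>
  <meta name="description" content="Page description">
  <link rel="canonical" href="https://www.example.com/">

  <!-- The third-party script goes last. Even if it injects an
       invalid tag and closes the head early, the SEO tags above
       are already safely inside the head. -->
  <script src="https://vendor.example.com/widget.js"></script>
</head>
<body>
  ...
</body>
</html>
```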
Is this a common problem?
I’ve been seeing this issue happening for many years now, and Patrick Stox has also reported seeing the same problem happening often to enterprise sites.
One of the biggest misconceptions about technical SEO is that you do it once and you are done. That would be the case if the sites didn’t change, users/developers didn’t make mistakes and/or Googlebot behavior didn’t change either.
At the moment that is hardly the case.
I’ve been advocating that technical SEOs learn developer skills, and I hope this case study illustrates the growing importance of this.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.
About The Author
Hamlet Batista is CEO and founder of RankSense, an agile SEO platform for online retailers and manufacturers. He holds U.S. patents on innovative SEO technologies, started doing SEO as a successful affiliate marketer back in 2002, and believes great SEO results should not take 6 months.