Are There Any High Volume, Low Competition Keywords Left?

If you’ve done SEO in a competitive industry, you know how hard it can be to find topics to rank for that no one else has covered, or that don’t require extensive link building.

You’re smart enough to know that the good old-fashioned “publish and pray” approach to ranking content just doesn’t work.

This is why SEO pros must work low competition keywords into their overall strategies so that they can get their sites to compete with the big boys.

Why Are Low Competition Keywords Important?

As SEO pros, if we are starting out with a client who:

  • Is a startup.
  • Has a new website.
  • Has little to no domain rating.
  • Operates in a super-competitive industry.
  • Or some combination of the above.

…then we are under pressure to get results, fast.

Beyond the standard optimizations of the website, building our product or service pages, and trying to convince our clients they need a bigger budget for PPC or social promotion, we need to use our SEO expertise to make sure our clients stick around (or our bosses don’t fire us!).

While every client agrees that they understand SEO takes “six to eight” months, come month three with no sales generated, they get squeamish.

And this is exactly why we need to use low competition keywords in our SEO and content marketing strategies. Low competition keywords are those that can rank with little to no link building and little to no domain authority.

Performed properly, merely publishing a blog post built around a target of low competition keywords can provide (relatively) quick traffic and long-tail rankings.

Better yet, it provides us a solid foundation from which to go after more difficult, and often more volume-rich, topics.

Who Should Be Going After Low Competition Keywords vs. High Competition Keywords?

We’ve all reverse-engineered competitor websites in an effort to determine why they outrank ours.

We think: my topic is more in-depth, we have better graphics, better references, more social shares, and my blog is more up to date, so why doesn’t it rank better?

More often than not, this is because our competitors have a higher domain rating than we do.

Sidenote: I say domain rating (DR), Ahrefs’ measure of the strength of a website’s backlink profile on a logarithmic scale from 0 to 100, with the latter being the strongest. Domain authority (DA) is often used interchangeably.

Domain rating is why sites like Forbes can publish articles, do zero outreach, generate a fraction of the backlinks the rest of us do, and still outrank us.

Sites with high domain rating are usually trusted resources and can publish content and rank for it while the rest of us have to work a lot harder at it.

Therefore, if you have a site you are trying to rank a blog post for, chances are you’ll be going up against a media site or some very well-established juggernaut in your vertical.

This is also why you should be spending more of your time finding and developing content around high volume, low competition keywords and topics.

Bottom line: high DR sites can go after high keyword difficulty topics and will usually rank for them; low DR sites, well, can’t.

So how do we define a high DR site?

High DR is generally measured on the 0 to 100 scale, but it can also be considered at the industry level: a site can have a high DR relative to its own vertical.

For example, a site like Forbes is a general site that covers a broad range of topics and has a DR of 93. Forbes may or may not be going after the same keywords you are.

In that case, you may not have to compete with general high DR sites like Forbes (you lucky dog).

If you don’t, you are probably competing with other industry sites with a higher DR than yours, meaning you still have to be deliberate with your keyword research and topic creation.

Then again, there are industry sites that have huge DR.

For example, if you’re in the real estate space, there’s a good chance you’ll be going up against Realtor.com (DR 90) or Zillow (DR 91). In many cases, we’re up against industry sites and general sites for the same keywords/topics.

That’s OK though, we can still compete with these sites!

Where to Find High Volume, Low Competition Keywords

Now that we have an understanding of why low competition keywords are so important to so many of us, let’s take a look at three great ways I like to find high volume, low competition keywords.

Note: The definition of high volume keywords can be subjective. During your research, you may find that you are targeting topics that have thousands of searches every month, while other times you may be targeting those that have hundreds. Either way, keep in mind that the target keyword and topic may generate lots of long-tail traffic, which is why we will dive into using this research to piece together complete topics.

Method 1: Using Google SERPs to Reverse Engineer Results

Sometimes with keyword research you have a specific topic in mind when you begin; other times, you just visit competitors’ sites to find ideas. This first approach deals with the former.

For example, my financial planning client Castle Wealth Management does a lot of wealth management for high-net-worth clients and wanted to create a blog post on the topic of trusts.

But before diving into a specific topic just because the client wants to cover it, as an SEO and content marketing agency we push our clients to be more intentional about what we cover.

Being in the finance industry, I knew we’d be up against huge juggernauts like Forbes (DR 93), NerdWallet (DR 86), and Kiplinger (DR 84).

Taking to the Google SERPs and using the Ahrefs Chrome extension for its data on DR, referring domains, and the keywords a URL ranks for, we quickly found that generic posts on trusts, such as “What is a trust?”, were indeed dominated by huge brands like Bankrate.

Further, the topic had a Keyword Difficulty (KD) of 29, the top post had 40 referring domains (RD), and the top site had a domain rating of 89. Since the goal is generally to get within the top three rankings, we decided to move on from this topic.


After a handful of searches, we discovered the long-tail topic “why would you put your house in a trust?” was ripe for the picking; this exact keyword gets 200 searches per month.


Now, 200 is not really a pot of gold by any means in terms of volume, but you’ll see that sometimes you have to look beyond the primary keyword volume and keep an eye on long-tail traffic.

When researching this topic we found that a site (Darrow Wealth Management) with a similar DR (31 to our client’s 27) was ranking #1.

Further, their post on the topic only had 7 referring domains and was ranking in the top 100 for over 1,600 keywords, many of which were in the top 10. Long-tail goldmine? Perhaps…

Inside Ahrefs, you are able to take a look at all the keywords, their volume, their keyword difficulty, position, and more to get an idea of how your competition stacks up.


Per the Ahrefs data, Darrow Wealth Management ranks within the top five for a ton of searches related to “why would you put your house in a trust.”

With this knowledge in the bank, we can now take an even closer look at our primary keyword to see what kind of backlink profile we need to build if we want to rank on Page 1.


According to Ahrefs, the keyword difficulty of 3 means we’ll need links from about four websites to rank in the top 10 for this keyword. I’ve found this metric doesn’t always hold true: we were able to rank #2 for this search with only one backlink.

Fast forward about three months after the blog post went live, and you’ll see our high volume, low competition strategy worked.

Our client’s site now ranks for a wide range of related keywords.


While the site is not in Position 1 for the target keyword, we only actively built one backlink to the page and have benefited tremendously from our strategy. The blog post ranks for over 500 long-tail searches and grew organically over the next eight months.


This is but one strategy to find keywords and topics to write about. You can also let software do much of the work for you.

Oh, and if you’re saying, “Well, there really isn’t any competition for that at all; what a joke this strategy is,” just check out the high DR sites that were also going after this keyword and that our strategy beat out.

Our strategy put our client in the second spot, beating out sites like SFGate (DR 90), CNN (DR 92), and Pocket Sense (DR 70), all with one referring domain.


Method 2: Using Keyword Software to Target Specific Topics

Another method that works really well for finding high volume, low competition keywords is using software to help you filter through the data. Using the Ahrefs Content Explorer, we can apply some great filters to find low-hanging-fruit content.

Start with a generic keyword search. For example, “garment bag.”


Now you’ll see a bunch of pages in the index that mention your target keyword. In this case, there are over 14,000 pages, so I want to filter them down.

There are a lot of considerations and filters that can be applied, but part of the fun is playing around with the tool.

First and foremost, however, you’ll want to filter out any pages that are not ranking for anything. I’ll set the organic traffic filter to “From: 500” for starters, so we only see pages that get more than 500 organic visits per month. You can be more or less aggressive depending on the results.

In this case, the filter leaves me with 173 pages, which is pretty good considering where it started.


Because I want sites with a low, or relatively modest, domain rating, I’ll also filter out sites above DR 30.

I do this because if a low DR site ranks for a topic, chances are my site can do the same.

This is not always the case, however. Sometimes high DR sites can be goldmines for low competition, high volume topics. But in this example, we’ll stick to low DR sites with low competition, high traffic blogs we can steal from.
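
If you export the results to CSV, you can also reproduce these filters outside the tool; here is a minimal pandas sketch, with the file and column names guessed for illustration rather than taken from Ahrefs’ actual export format:

import pandas as pd

# Assumed CSV export of the Content Explorer results
pages = pd.read_csv("content_explorer_export.csv")

# Reproduce the two filters: organic traffic from 500, DR no higher than 30
shortlist = pages[(pages["organic_traffic"] >= 500) & (pages["domain_rating"] <= 30)]
print(shortlist)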

With our new filter set, we now see there are 10 pages to evaluate for our “garment bags” topic.


Now, all we have to do is open up the organic traffic tabs on the various sites to see if we can find some keyword opportunities that fit the Holy Grail bill.


Voila! Holy Grail SEO goldmine!

The first blog has a bunch of topics, and the metrics tell me it is a good option to cover.

First off, the referring domains to the site sit at one, the site has almost 1,200 keywords in the index, the keywords I see showing up all have really low keyword difficulty, and the volume of the keywords, collectively and for a few isolated ones, is great (e.g., “garment bag carry on”: 1,600 volume, KD 1; “garment duffel bag”: 500 volume, KD 1).

I can repeat the same process of investigation on the other sites that showed up in the list and jot down the other keywords I want to hit in my blog.

This approach is great, but the next approach can often surface topics even faster: using competitors to find high volume, low competition Holy Grail content.

Method 3: Looking at Competitors to Find High Volume, Low Competition Keywords

One of my favorite Holy Grail SEO techniques is to mine competitors’ sites for keywords and blog topics.

Even sites with high DR can be goldmines for high volume, low competition keywords.

And as we’ve seen in the Castle Wealth Management example, you can beat out sites with high DR with a well-written post targeting a low competition keyword.

Let’s stick with our finance example. This is an extremely competitive industry, with competition from niche sites, industry sites, and general news sites like CNN.

I’ll visit one of the most popular financial advice sites, NerdWallet, to see if I can’t find some high volume topics to cover.

I bring the site into Ahrefs Site Explorer to dissect it. Inside this tool, we can sort by Top pages – the pages that the software tells me rank for a ton of keywords – as well as set a filter for Keyword Difficulty.

In this case, I set the KD filter to 10 (although you could probably go a lot lower to see what you find).


Take a look at the filtered results to see if there are contextually relevant topics that you think your audience would enjoy.

For example, let’s say I run a general financial advice site. Right off the bat, I see “how to fill out a money order” and “best time to buy a tv” as potential targets.

To investigate further, I open the Organic Keywords report for each and look at the Volume and Keyword Difficulty of the results.

Position is also worth noting, as solid rankings here can likely be duplicated with a similar blog post.


Looking at these two side by side, we see that the “best time to buy a TV” topic has a slightly higher keyword difficulty: most of its keywords sit around 8, while “how to fill out a money order” sits around 6.

Further, there seems to be a lot more keyword volume for the money order primary keyword – 23,000 versus 11,000.

Ultimately, you may decide to cover both topics, but if you were to choose one over the other, you’d want to go with the easier of the two.

Digging a little further still, you’ll want to see how many backlinks you may need for each: about 7 and 9 links for the money order and TV blogs, respectively.


Putting Your Keyword Competition Data Together to Create Complete Topics

I’ll be the first to admit that good keyword research is only the beginning. You’ll still need to implement it properly if you want the research to help drive traffic.

Here are some more tips on bringing the grail home.

Keyword Research Still Needs to Be Complemented with Great Writing

Good keyword research does not mean you can get away with writing crappy blog posts. This may be an obvious point, but let’s get it out of the way.

Beyond good writing, one tip I have for you is to be exhaustive with a topic. Being exhaustive is part research and part logic, and I find it often involves a lot of keyword-rich sub-topics.

For example, let’s return to our keyword data from the aforementioned “garment bags” research.


From here, we know we have keyword data and volume, but that alone doesn’t make the blog come to life.

What we want to do is then use some logic to determine what would make good talking points in our post while also being intentional about the keyword opportunities. So, let’s say you decide on a general post of “15 Best Garment Bags of 2019” as your topic.

You can then use this keyword data to break down some good sub-topics that obviously have some volume behind them.

Note: For the sub-topic portion, I don’t always focus on topics with low keyword difficulty. This can be helpful to do, but if a sub-topic is appropriate for your post, don’t be afraid to cover it even if it turns out to be a competitive keyword.

Perhaps in this post, I’ll have my writer cover the following as sub-topics:

  • Best roll-up garment bags
  • Best carry-on garment bags
  • Best garment bags with wheels
  • Garment bag carry-on rules
  • When to get a tri-fold garment bag

In this case, my keyword research is dictating my sub-topics, making it a win for the overall topic but also for the implementation of my keyword research.
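
One quick way to let the data suggest sub-topics like these is to count the words that appear across your keyword export; frequent modifiers (“carry,” “wheels,” “roll”) point at sections worth writing. Here is a small sketch, assuming a one-column CSV of the keywords you gathered (the file and column names are illustrative):

import pandas as pd
from collections import Counter

# Assumed export of the keywords gathered during research
keywords = pd.read_csv("garment_bag_keywords.csv")["keyword"]

# Tally individual words across all keywords; the most common
# modifiers hint at sub-topics worth covering
modifiers = Counter(word for kw in keywords for word in kw.lower().split())
print(modifiers.most_common(20))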

Also, check out the “People also ask” and “Searches related to” sections in the Google SERPs for even more ideas Google is telling you people are interested in.


The Quest for Holy Grail SEO Never Ends

The ability to drive traffic with keywords that are high volume and low competition is very real. It is a great strategy for startups, new websites, sites with little to no domain rating, and sites that just want to build more traffic.

The research is actually quite simple and, with a little practice, can be done in just a few minutes. While we never rule out link building as a deliverable, it is nice to show gains in traffic without having to spend a bunch of time and resources building links.

Give Holy Grail SEO a try for yourself and I promise, done right, it will work wonders for your content marketing strategy.



Restaurant app Tobiko goes old school by shunning user reviews

You can think of Tobiko as a kind of anti-Yelp. Launched in 2018 by Rich Skrenta, the restaurant app relies on data and expert reviews (rather than user reviews) to deliver a kind of curated, foodie-insider experience.

A new Rich Skrenta project. Skrenta is a search veteran with several startups behind him. He was one of the founders of DMOZ, a pioneering and widely used web directory. Most recently, Skrenta was the CEO of human-aided search engine Blekko, whose technology was sold to IBM Watson in roughly 2015.

At the highest level, both DMOZ and Blekko sought to combine human editors and search technology. Tobiko is similar; it uses machine learning, crawling and third-party editorial content to offer restaurant recommendations.


Betting on expert opinion. Tobiko is also seeking to build a community, and user input will likely factor into recommendations at some point. However, what’s interesting is that Skrenta has shunned user reviews in favor of “trusted expert reviews” (read: critics).

Those expert reviews are represented by a range of publisher logos on profile pages that, when clicked, take the user to reviews or articles about the particular restaurant on those sites. Where available, users can also book reservations. And the app can be personalized by engaging a menu of preferences. (Yelp recently launched broad, site-wide personalization itself.)

While Skrenta is taking something of a philosophical stand in avoiding user reviews, his approach also made the app easier to launch because expert content on third-party sites already existed. Community content takes much longer to reach critical mass. However, Tobiko also could have presented or “summarized” user reviews from third-party sites, as Google does in knowledge panels with TripAdvisor or Facebook, for example.

Tobiko is free and currently appears to have no ads. The company also offers a subscription-based option that has additional features.

Why we should care. It’s too early to tell whether Tobiko will succeed, but it provocatively bucks conventional wisdom about the importance of user reviews in the restaurant vertical (although reading lots of expert reviews can be burdensome). As they have gained importance, reviews have become somewhat less reliable, with review fraud on the rise. Last month, Google disclosed an algorithm change that has resulted in a sharp decrease in rich review results showing in Search.

Putting aside gamesmanship and fraud, reviews have brought transparency to online shopping but can also make purchase decisions more time-consuming. It would be inaccurate to say there’s widespread “review fatigue,” but there’s anecdotal evidence supporting the simplicity of expert reviews in some cases. Influencer marketing can be seen as an interesting hybrid between user and expert reviews, though it’s also susceptible to manipulation.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.



3 Ways to Use XPaths with Large Site Audits



When used creatively, XPaths can help improve the efficiency of auditing large websites. Consider this another tool in your SEO toolbelt.

There are endless types of information you can unlock with XPaths, which can be used in any category of online business.

Some popular ways to audit large sites with XPaths include:

  • Building redirect maps.
  • Auditing ecommerce product pages.
  • Auditing blog content.

In this guide, we’ll cover exactly how to perform these audits in detail.

What Are XPaths?

Simply put, XPath is a syntax that uses path expressions to navigate XML documents and identify specified elements.

This is used to find the exact location of any element on a page using the HTML DOM structure.

We can use XPaths to help extract bits of information such as H1 page titles, product descriptions on ecommerce sites, or really anything that’s available on a page.

While this may sound complex to many people, in practice, it’s actually quite easy!
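
For instance, here are a few simple XPath expressions (the class and attribute names are illustrative, not any particular site’s markup):

//h1  (every H1 heading on the page)
//span[@class="author"]  (every span element with the class "author")
//meta[@name="description"]/@content  (the content attribute of the meta description tag)

Reading from left to right, each expression walks down the document tree, with the square brackets filtering elements by their attributes.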

How to Use XPaths in Screaming Frog

In this guide, we’ll be using Screaming Frog to scrape webpages.

Screaming Frog offers custom extraction methods, such as CSS selectors and XPaths.

It’s entirely possible to use other means to scrape webpages, such as Python. However, the Screaming Frog method requires far less coding knowledge.

(Note: I’m not in any way currently affiliated with Screaming Frog, but I highly recommend their software for web scraping.)
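
For those curious about the Python route, a minimal sketch using the requests and lxml libraries might look like the following (the URL and the author XPath are placeholders for your own targets):

import requests
from lxml import html

# Fetch the page and parse it into an HTML tree
response = requests.get("https://www.example.com/blog/some-post")
tree = html.fromstring(response.content)

# Run an XPath against the tree; this author-name expression
# is a placeholder, not a real site's markup
authors = tree.xpath('//span[@class="author"]/text()')
print(authors)

The same XPath expressions you build for Screaming Frog work here unchanged.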

Step 1: Identify Your Data Point

Figure out what data point you want to extract.

For example, let’s pretend Search Engine Journal didn’t have author pages and you wanted to extract the author name for each article.

What you’ll do is:

  • Right-click on the author name.
  • Select Inspect.
  • In the dev tools elements panel, you will see your element already highlighted.
  • Right-click the highlighted HTML element and go to Copy and select Copy XPath.


At this point, your computer’s clipboard will have the desired XPath copied.
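
The copied expression will look something like this (a made-up example; yours will mirror the actual structure of the page you inspected):

//*[@id="post-12345"]/div[2]/p/span/a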

Step 2: Set up Custom Extraction

In this step, you will need to open Screaming Frog and set up the website you want to crawl. In this instance, I would enter the full Search Engine Journal URL.

  • Go to Configuration > Custom > Extraction


  • This will bring up the Custom Extraction configuration window. There are a lot of options here, but if you’re looking to simply extract text, choose XPath as the extraction method and “Extract Text” as the output.


Step 3: Run Crawl & Export

At this point, you should be all set to run your crawl. You’ll notice that your custom extraction is the second to last column on the right.

When analyzing crawls in bulk, it makes sense to export your crawl into an Excel format. This will allow you to apply a variety of filters, pivot tables, charts, and anything your heart desires.

3 Creative Ways XPaths Help Scale Your Audits

Now that we know how to run an XPath crawl, the possibilities are endless!

We have access to all of the answers; now we just need to find the right questions.

  • What are some aspects of your audit that could be automated?
  • Are there common elements in your content silos that can be extracted for auditing?
  • What are the most important elements on your pages?

The exact problems you’re trying to solve may vary by industry or site type. Below are some unique situations where XPaths can make your SEO life easier.

1. Using XPaths with Redirect Maps

Recently, I had to redesign a site that required a new URL structure. The former pages all had parameters as the URL slug instead of the page name.

This made creating a redirect map for hundreds of pages a complete nightmare!

So I thought to myself, “How can I easily identify each page at scale?”

After analyzing the various page templates, I came to the conclusion that the actual title of the page looked like an H1 but was actually just large paragraph text. This meant that I couldn’t just get the standard H1 data from Screaming Frog.

However, XPaths would allow me to copy the exact location for each page title and extract it in my web scraping report.

In this case, I was able to extract the page titles for all of the old URLs and match them with the new URLs through the VLOOKUP function in Excel. This automated most of the redirect map work for me.
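
As a rough sketch of that lookup, assuming the extracted titles sit in column B of the old-URL sheet and a “New URLs” sheet holds titles in column A with their new URLs in column B (the layout here is illustrative):

=VLOOKUP(B2, 'New URLs'!A:B, 2, FALSE)

Dragged down the column, this returns the matching new URL for each old page, leaving only the non-matches for manual review.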

With any automated work, you may have to perform some spot checking for accuracy.

2. Auditing Ecommerce Sites with XPaths

Auditing ecommerce sites can be one of the more challenging types of SEO auditing. There are many more factors to consider, such as JavaScript rendering and other dynamic elements.

Sometimes, stakeholders will need product-level audits on an ad hoc basis. Sometimes this covers just certain categories of products; other times, it may be the entire site.

Using the XPath extraction method we learned earlier in this article, we can extract all types of data including:

  • Product name
  • Product description
  • Price
  • Review data
  • Image URLs
  • Product category
  • And much more

This can help identify products that may be lacking valuable information within your ecommerce site.

The cool thing about Screaming Frog is that you can extract multiple data points to stretch your audits even further.
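
To illustrate what multi-field extraction looks like in code form, here is a sketch in the same vein as the earlier Python example (every XPath below is hypothetical; copy real ones from your own product pages):

import requests
from lxml import html

# Illustrative XPaths for common ecommerce fields
FIELDS = {
    "name": '//h1[@class="product-title"]/text()',
    "price": '//span[@class="price"]/text()',
    "description": '//div[@class="product-description"]//text()',
}

tree = html.fromstring(requests.get("https://www.example.com/product/123").content)

# One extracted value (or list of values) per field
record = {field: tree.xpath(xpath) for field, xpath in FIELDS.items()}
print(record)

In Screaming Frog, the equivalent is simply adding one extractor row per field in the Custom Extraction window.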

3. Auditing Blogs with XPaths

This is a more common method for using XPaths. Screaming Frog allows you to set parameters to crawl specific subfolders of sites, such as blogs.

However, using XPaths, we can go beyond simple meta data and grab valuable insights to help identify content gap opportunities.

Categories & Tags

One of the most common ways SEO professionals use XPaths for blog auditing is scraping categories and tags.

This is important because it helps us group related blogs together, which can help us identify content cannibalization and gaps.

This is typically the first step in any blog audit.

Keywords

This step is a bit more Excel-focused and advanced. How this works is you set up an XPath extraction to pull the body copy out of each blog post.

Fair warning, this may drastically increase your crawl time.

Whenever you export this crawl into Excel, you will get all of the body text in one cell. I highly recommend that you disable text wrapping, or your spreadsheet will look terrifying.

Next, in the column to the right of your extracted body copy, enter the following formula:

=ISNUMBER(SEARCH("keyword",A1))

In this formula, A1 equals the cell of the body copy.

To scale your efforts, you can have your “keyword” equal the cell that contains your category or tag. However, you may consider adding multiple columns of keywords to get a more accurate and robust picture of your blogging performance.
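
For example, if the category or tag you scraped for each blog sits in column B of the same row (a layout assumption), the scaled version of the formula becomes:

=ISNUMBER(SEARCH(B1,A1))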

This formula will present a TRUE/FALSE Boolean value. You can use this to quickly identify keyword opportunities and cannibalization in your blogs.

Author

We’ve already covered this example, but it’s worth noting that this is still an important element to pull from your articles.

When you blend your blog export data with performance data from Google Analytics and Search Console, you can start to determine which authors generate the best performance.

To do this, sort your blogs by author and start tracking average data sets, including the following (a quick way to automate this blend is sketched after the list):

  • Impressions – Search Console
  • Clicks – Search Console
  • Sessions – Analytics
  • Bounce Rate – Analytics
  • Conversions – Analytics
  • Assisted Conversions – Analytics
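
If you’d rather automate the blend than do it in Excel, a minimal pandas sketch might look like this (the file and column names are assumptions about your own exports, not a fixed format):

import pandas as pd

# Crawl export with the extracted author per URL, plus a
# Search Console export with performance per URL
crawl = pd.read_csv("crawl_export.csv")    # columns: url, author
gsc = pd.read_csv("search_console.csv")    # columns: url, impressions, clicks

# Join performance onto each blog post, then average by author
merged = crawl.merge(gsc, on="url", how="left")
print(merged.groupby("author")[["impressions", "clicks"]].mean())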

Share Your Creative XPath Tips

Do you have some creative auditing methods that involve XPaths? Share this article on Twitter or tag me @seocounseling and let me know what I missed!




When parsing ‘Googlespeak’ is a distraction


Over almost 16 years of covering search, specifically what Googlers have said on SEO and ranking topics, I have seen my share of contradictory statements. Google’s ranking algorithms are complex, and the way one Googler explains something might sound contradictory to how another Googler talks about it. In reality, they are typically talking about different things or nuances.

Some of it is semantics, some of it is being literal in how one person might explain something while another person speaks figuratively. Some of it is being technically correct versus trying to dumb something down for general practitioners or even non-search marketers to understand. Some of it is that the algorithm can change over the years, so what was true then has evolved.

Does it matter if something is or is not a ranking factor? It can be easy to get wrapped up in details that end up being distractions. Ultimately, SEOs, webmasters, site owners, publishers and those that produce web pages need to care more about providing the best possible web site and web page for the topic. You do not want to chase algorithms, racing after what is or is not a ranking factor. Google’s stated aim is to rank the most relevant results to keep users happy and coming back to the search engine. How Google does that changes over time. It releases core updates, smaller algorithm updates, index updates and more all the time.

For SEOs, the goal is to make sure your pages offer the most authoritative and relevant content for the given query and can be accessed by search crawlers.

When it is and is not a ranking factor. An example of Googlers seeming to contradict themselves popped up this week.

Gary Illyes from Google said at Pubcon Thursday that content accuracy is a ranking factor. That raised eyebrows because, in the past, Google has seemed to say content accuracy is not a ranking factor. Last month, Google’s Danny Sullivan said, “Machines can’t tell the ‘accuracy’ of content. Our systems rely instead on signals we find align with relevancy of topic and authority.” One could interpret that to mean that if Google cannot tell the accuracy of content, it would be unable to use accuracy as a ranking factor.

Upon closer look at the context of Illyes’ comments this week, it’s clear he’s getting at the second part of Sullivan’s comment about using signals to understand “relevancy of topic and authority.” SEO Marie Haynes captured more of the context of Illyes’ comment.

Illyes was talking about YMYL (your money, your life) content. He added that Google goes through “great lengths to surface reputable and trustworthy sources.”

He didn’t outright say Google’s systems are able to tell if a piece of content is factually accurate or not. He implied Google uses multiple signals, like signals that determine reputations and trustworthiness, as a way to infer accuracy.

So is content accuracy a ranking factor? Yes and no. It depends on whether you are being technical, literal, figurative or explanatory. When I covered the different messaging around content accuracy on my personal site, Sullivan pointed out the difference; he said on Twitter, “We don’t know if content is accurate” but “we do look for signals we believe align with that.”

It’s the same with whether there is an E-A-T score. Illyes said there is no E-A-T score. That is correct, technically. But Google has numerous algorithms and ranking signals it uses to figure out E-A-T as an overall theme. Sullivan said on Twitter, “Is E-A-T a ranking factor? Not if you mean there’s some technical thing like with speed that we can measure directly. We do use a variety of signals as a proxy to tell if content seems to match E-A-T as humans would assess it. In that regard, yeah, it’s a ranking factor.”

You can see the dual point Sullivan is making here.

The minutiae. When you have people like me, who for almost 16 years have analyzed and scrutinized every word, tweet, blog post or video that Google produces, it can be hard for a Google representative to always convey the exact clear message at every point. Sometimes it is important to step back, look at the bigger picture, and ask yourself: Why is this Googler saying this or not saying that?

Why we should care. It is important to look at long term goals, and as I said above, not chase the algorithm or specific ranking factors but focus on the ultimate goals of your business (money). Produce content and web pages that Google would be proud to rank at the top of the results for a given query and other sites will want to source and link to. And above all, do whatever you can to make the best possible site for users — beyond what your competitors produce.


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY-based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.


