SMX replay: SEO that Google tries to correct for you


Search engines have seen the same SEO mistakes countless times, and as Patrick Stox, SEO specialist at IBM, said during his Insights session at SMX Advanced, “Are you going to throw millions of dollars at a PR campaign to try to get us [SEOs] to convince developers to fix all this stuff? Or are you just going to fix it on your end? And the answer is they fix a ton of stuff on their end.”

During his session, Stox outlined a number of common SEO responsibilities that Google is already correcting for us. You can listen to his entire discussion above, with the full transcript available below.

For more Insights from SMX Advanced, listen to Amanda Milligan’s session on leveraging data storytelling to earn top-tier media coverage or Ashley Mo’s session on improving your YouTube ad performance.

Can’t listen right now? Read the full transcript below

Introduction by George Nguyen:
Meta descriptions? There are best practices for that. Title tags? There are best practices for that. Redirects? There are — you guessed it — best practices for that. Welcome to the Search Engine Land podcast, I’m your host George Nguyen. As you’re probably already aware, the internet can be a messy place, SEOs only have so many hours a day and — as IBM SEO specialist Patrick Stox explains — Google may have already accounted for some of the more common lapses in best practices. Knowing which of these items a search engine can figure out on its own can save you time and allow you to focus on the best practices that will make the most impact. Here’s Patrick’s Insights session from SMX Advanced, in which he discusses a few of the things Google tries to correct for you.

Patrick Stox:
How’s it going? I get to kick off a brand new session type. This should be fun. We’re going to talk a little bit about things that Google and, some for Bing, try to correct for you. If you were in the session earlier with Barry [Schwartz] and Detlef [Johnson], they were discussing some of the things that, you know, the web is messy, people make mistakes and it’s the same mistakes over and over. And if you’re a search engine, what are you going to do? Are you going to throw millions of dollars at a PR campaign to try to get us to convince developers to fix all this stuff? Or are you just going to fix it on your end? And the answer is they fix a ton of stuff on their end.

So the main thing here — I’m here as me. If I say something stupid or wrong, it’s me — not IBM.

The importance of technical SEO may diminish over time. I am going to say “may,” I’m going to say this with a thousand caveats. The reason being, the more stuff that Google fixes, the more stuff that Bing fixes on their end, the less things we actually have to worry about or get right. So, a better way to say this might be, “it’ll change over time” — our job roles will change.

One of the things: indexing without being crawled. Everyone knows this. If a page gets linked to, Google sees the links and the anchor text and knows the page is there. People are linking to it, it's important, so they index it. Even if they're blocked and can't actually see what's on that page, they're still going to do it. They're still going to index it.

This is something that happens on both Google and Bing: soft 404s. What happens is the page returns a status code of 200, which says everything is okay, but there's a message on the page that says something's wrong. Like, this isn't here, or whatever. They treat it as a soft 404; this is for Google and Bing. There are literally dozens of different types of messaging where they will look at the page that you just threw a 200 status code on and say, "that's actually a 404 page," and they treat that as a soft 404. They're like, "we know there's not actually anything useful there most of the time." This happens a lot with JavaScript frameworks because those aren't typically made to fail. You actually have to do some hacky workarounds, like routing to a 404 page, like Detlef talked about. So you've thrown a 200, but the page says "page not found," and search engines are like, "no, there's nothing there."
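
To make that concrete, here is a minimal Python sketch of a soft-404 heuristic: if a URL answers 200 but the body contains "not found" style wording, flag it. The phrase list and the `looks_like_soft_404` helper are hypothetical and purely illustrative; this is not how Google or Bing actually classify pages.

```python
# Toy heuristic only: not Google's or Bing's real soft-404 classifier.
import requests

NOT_FOUND_PHRASES = (
    "page not found",
    "nothing was found",
    "this page doesn't exist",
    "no longer available",
)

def looks_like_soft_404(url: str) -> bool:
    """Flag URLs that answer 200 but whose body reads like an error page."""
    response = requests.get(url, timeout=10)
    if response.status_code != 200:
        return False  # a real 4xx/5xx is not a *soft* 404
    body = response.text.lower()
    return any(phrase in body for phrase in NOT_FOUND_PHRASES)

print(looks_like_soft_404("https://example.com/some-deleted-page"))
```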

With crawling, crawl delay can be ignored. Google will typically put as much load on the server as your server can handle, up to the point where they get the pages that they want. Pages may be folded together before being crawled. If you have duplicate sections, say one on a subdomain, or HTTP versus HTTPS, they recognize these patterns and say, I only want one version. I want this one source of truth. Consolidate all the signals there. So, if they've seen it the same way in five different places before, they're going to just treat that as one. They don't even have to crawl the page at that point — they're like, this repeated pattern is always the same.
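
As a rough illustration of that folding, here is a small Python sketch. The `fold_key` helper is hypothetical and only maps the obvious variants (HTTP versus HTTPS, www versus non-www, trailing slashes) onto one key so duplicates group together; real search engines use far more signals than this.

```python
# Simplified illustration of "folding" URL variants into one key.
from urllib.parse import urlsplit, urlunsplit

def fold_key(url: str) -> str:
    """Map obvious variants of a URL onto a single canonical-ish key."""
    scheme, netloc, path, query, _fragment = urlsplit(url.lower())
    netloc = netloc.removeprefix("www.")
    path = path.rstrip("/") or "/"
    return urlunsplit(("https", netloc, path, query, ""))

urls = [
    "http://example.com/page/",
    "https://example.com/page",
    "https://www.example.com/page",
]
groups: dict[str, list[str]] = {}
for u in urls:
    groups.setdefault(fold_key(u), []).append(u)

for key, variants in groups.items():
    print(key, "<-", variants)  # all three variants fold into one key
```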

It kind of works that way with HTTPS, also. This is actually one of the duplicate issues: they will typically index HTTPS over HTTP. So, if you have both and you don't have a canonical (the canonical could go either way), typically they're going to choose HTTPS when they can.

302 redirects: I think there's a lot of misunderstanding among SEOs, so I'm actually going to explain how this works. 302s are meant to be temporary, but if you leave them in place long enough, they will become permanent. They'll be treated exactly like 301s. When the 302 is in place, what happens is if I redirect this page to that page, it actually acts like a reverse canonical: all the signals go back to the original page. But if you leave that for a few weeks, a few months, Google is like, "Nah, that's really still redirected after all this time. We should be indexing the new page instead." And then all the signals get consolidated there, instead.
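
If you want to see which status codes a redirect chain is actually returning, a quick Python sketch with the requests library does the job; the `show_redirect_chain` helper below is my own name, just for illustration.

```python
# Inspect a redirect chain and its status codes (301 vs. 302).
import requests

def show_redirect_chain(url: str) -> None:
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:  # each intermediate redirect response
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(response.status_code, response.url)  # final destination

show_redirect_chain("http://example.com/old-page")
```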

Title tags: anytime you don't write a title tag, or it's not relevant, or it's generic or too long, Google has the option to rewrite it. They're going to do it a lot, actually. You know, if you just write "Home," maybe they're going to add a company name. They're going to do this for a lot of different reasons, but the main reason, I would say, is that people were really bad about writing their titles. They were bad about keyword stuffing their titles. And it's the same with meta descriptions: they're typically going to pull content from the page. If you don't write a meta description, they're going to write one for you. It's not like, "Hey, that doesn't exist."
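
As a rough way to spot the cases most likely to get rewritten, here is a hypothetical Python sketch that pulls the title and meta description and flags missing, generic, or very long values. The `audit_title_and_description` helper and the 60-character cutoff are mine, not an official limit.

```python
# Flags the kinds of title/description issues that tend to get rewritten.
import requests
from bs4 import BeautifulSoup

def audit_title_and_description(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = (meta.get("content") or "").strip() if meta else ""
    return {
        "title": title,
        "title_missing_or_generic": title.lower() in ("", "home"),
        "title_too_long": len(title) > 60,  # arbitrary threshold for illustration
        "description": description,
        "description_missing": not description,
    }

print(audit_title_and_description("https://example.com/"))
```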

Lastmod dates in sitemaps — I believe Bing actually ignores this, too. The reason being: sitemap generators, the people making the sitemaps, this is never, ever right. I would say this is one of the things that is most often wrong, but who cares? They ignore it.
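
If you're curious how wrong your own lastmod values are, a small Python sketch can compare each `<lastmod>` entry against the page's Last-Modified response header. The `check_lastmod` helper is hypothetical, and not every page sends a Last-Modified header at all.

```python
# Compare each <lastmod> in a sitemap with the page's Last-Modified header.
import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def check_lastmod(sitemap_url: str) -> None:
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    for url_node in root.findall("sm:url", NS):
        loc = url_node.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url_node.findtext("sm:lastmod", default="(none)", namespaces=NS)
        header = requests.head(loc, timeout=10).headers.get("Last-Modified", "(none)")
        print(f"{loc}\n  sitemap lastmod: {lastmod}\n  Last-Modified:   {header}")

check_lastmod("https://example.com/sitemap.xml")
```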

Canonical tags: this is very common. Half of my job is trying to figure out how things got consolidated, or whether something is actually a problem. In many cases, the canonical tags will be ignored. There could be other signals in play, like hreflang tags or any number of things. But basically, if they think that something is wrong, they're just going to say, "Nope." The canonical is, you know, a suggestion. It is not a directive. So anytime they think that the webmaster, the developer, the SEO got it wrong, they're going to make their best guess at what it should be.
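
For auditing, here is a minimal Python sketch that pulls the rel=canonical value from a page and compares it with the URL you fetched. The `canonical_hint` helper is hypothetical; the point is only that the tag is a hint the engines can override.

```python
# Pull the rel=canonical hint and compare it with the URL actually fetched.
import requests
from bs4 import BeautifulSoup

def canonical_hint(url: str) -> tuple[str, bool]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    link = soup.find("link", rel="canonical")
    canonical = link.get("href", "").strip() if link else ""
    return canonical, canonical == url  # (declared canonical, matches fetched URL?)

print(canonical_hint("https://example.com/page?utm_source=newsletter"))
```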

It’s kind of the same with duplicate content. Duplicate content exists on the web. It is everywhere. In Google’s mind, they’re trying to help people by folding the pages together. All these various versions become one. All the signals consolidate to that one page. They’re actually trying to help us by doing that. And they actually do a pretty good job with that.

If you have multiple robots meta tags, they're going to choose the most restrictive. I've seen this a thousand times with different CMSes: in WordPress, you might have your theme adding a tag, plus Yoast adding a tag, plus any number of other things adding tags. And usually, if there are five tags that say index and one that says noindex, they're going to choose the most restrictive, and that's the noindex.
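
That "most restrictive wins" rule is easy to sketch. The `effective_indexing` helper below is hypothetical and only looks at index versus noindex.

```python
# If several robots meta tags disagree, the most restrictive one wins.
from bs4 import BeautifulSoup

MARKUP = """
<head>
  <meta name="robots" content="index, follow">   <!-- added by the theme -->
  <meta name="robots" content="noindex">         <!-- added by an SEO plugin -->
  <meta name="robots" content="index">           <!-- added by something else -->
</head>
"""

def effective_indexing(markup: str) -> str:
    """Return 'noindex' if any robots meta tag says noindex, else 'index'."""
    directives = [
        m.get("content", "").lower()
        for m in BeautifulSoup(markup, "html.parser").find_all(
            "meta", attrs={"name": "robots"}
        )
    ]
    return "noindex" if any("noindex" in d for d in directives) else "index"

print(effective_indexing(MARKUP))  # -> "noindex"
```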

With links, they’re typically going to ignore them. If you have bad links to your site — I think there was some discussion earlier — are you going to use the disavow file — or this might’ve been last night actually; Barry was talking about this. In general, the answer’s no. If you’re afraid you’re going to have a penalty, maybe, but for the most part you don’t have to worry about the links to your site anymore, which is great.

Then if you’re in local, the NAP listings, a lot of local SEOs we’ll really focus on, like, these all have to be the exact same thing. Well, variations, you know street, spelled out versus “st,” or LLC versus limited liability corporation. There are certain variations where basically they’re going to consolidate. They know that this is another version of this other thing, so they’re going to say it’s the same, it’s fine.

This actually came up earlier too, with Barry or Detlef, I can't remember which, but they were saying that Google only looks at HTTPS in the URL, not whether your certificate is actually valid. And that's 100% true. If they ever crawl a page that has an expired certificate, they go right through it. If you look in Search Console, all the links consolidate. They follow the redirect that's there even though the user is going to get an error.
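
You can mimic that crawler-style leniency in a Python sketch: the hypothetical `fetch_like_a_lenient_crawler` helper retries without certificate verification when the TLS handshake fails, which is roughly what lets a crawler read a page a browser would block with a warning. Disabling verification is for illustration only, not a recommendation.

```python
# Retry without certificate verification when the TLS handshake fails,
# roughly mimicking the lenient crawler behavior described above.
import requests

def fetch_like_a_lenient_crawler(url: str) -> requests.Response:
    try:
        return requests.get(url, timeout=10)
    except requests.exceptions.SSLError:
        # Expired/invalid certificate: a browser warns the user,
        # but a crawler can still read the page and follow redirects.
        return requests.get(url, timeout=10, verify=False)

response = fetch_like_a_lenient_crawler("https://expired.badssl.com/")
print(response.status_code, response.url)
```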

And then hreflang: I think, again, Barry had mentioned this. This is one of the most complicated things. In my world, this is the thing most likely to go wrong in a million different ways, because it really does get complex. With duplicates, they're typically going to show the right one anyway, even if you didn't localize the page at all — like, you have 30 versions, all English; as long as the signals are there, it's going to be okay. It's when the tags break, and that kind of thing, that you might end up with the wrong version showing, because, again, they're folding the pages together if they're duplicates and trying to show one main version. If everything's right, though, they will swap to show the right version for the right person. Within that tag, you know, it's a best practice to use a dash instead of an underscore — it doesn't really matter; their crawlers are very lenient. Detlef was talking about, like, "oh, you've got to get your semantic HTML right." Their crawlers have seen this stuff wrong 50 billion different times, and honestly, they are very lenient on a lot of things.

en-UK instead of en-GB: every hreflang article will tell you this is wrong, but it works. You will never see an error for this. Why? Because UK is not actually the ISO country code — it's a reserved code. And they've seen it wrong enough that they're like, "Eh, it's fine."
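
A hypothetical normalization helper shows how tolerant parsing might treat these common mistakes; the alias table and `normalize_hreflang` are mine, for illustration only, not how any particular crawler is implemented.

```python
# A small hreflang sanity check: "en-UK" is technically wrong (the ISO code
# is GB), and underscores are wrong too, but we normalize rather than fail.
COMMON_ALIASES = {"uk": "gb"}  # tolerated mistakes seen in the wild

def normalize_hreflang(value: str) -> str:
    parts = value.replace("_", "-").lower().split("-")  # tolerate underscores
    if len(parts) == 2:
        lang, region = parts
        region = COMMON_ALIASES.get(region, region)
        return f"{lang}-{region}"
    return parts[0]  # language-only value, e.g. "de"

for tag in ["en-UK", "en_GB", "fr-fr", "de"]:
    print(tag, "->", normalize_hreflang(tag))
```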

Same with self-referencing hreflang: you don't actually need that. Same with relative URLs versus absolute. There are best practices, basically, but then there's what actually works. And I think where we have to get as an industry is: let's not waste people's time. If Google and Bing have fixed this on their end, why are we pushing for it? We've got other priorities, other things that we can get done.

They’re even doing this in the browser, now. Most websites do not use lazy loading for their images. Google is going to take that on in the browser and I hope other browsers do this. I think this is the first step. I think they’re going to do a lot more with this, probably like preload directives and a bunch of things, but they’re going to, in the browser, take the strain off the server, off the websites, and they’re just going to be lazy loading images across the web. Now, a lot of people are thinking that they need this loading=“lazy” — that’s actually default. If you do nothing, you have lazy loading on your website as of Chrome 75. And that’s about it, thank you.


About The Author

George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.


Here’s how to set up the Google Site Kit WordPress plugin


On Oct. 31, Google announced the launch of its Site Kit WordPress plugin, which “enables you to set up and configure key Google services, get insights on how people find and use your site, learn how to improve, and easily monetize your content.”

This plugin allows you to easily connect the following Google Services in a dashboard format within your WordPress backend:

  • Search Console
  • Analytics
  • PageSpeed Insights
  • AdSense
  • Optimize
  • Tag Manager

It brings the convenience of accessing your site’s performance data while logged into the backend of the site. This is great for webmasters, developers and agencies who are often an admin for their own site or a client’s WordPress site. However, it does not offer the robust and dynamic capabilities of a Google Data Studio report or dashboard to sort data, so it may not be ideal for a digital marketing manager or CMO.

With that said, it wouldn’t hurt to implement this plugin, as it’s actually a nifty tool that can help you stay on top of your site’s performance metrics. It’s also another way to give Google more access to your site, which can have some indirect organic benefits.

Here is what the Google Site Kit plugin looks like within the WordPress plugin directory.

Installing and setting up Google Site Kit

To utilize the plugin, simply click install and activate as you would with any other WordPress plugin. You will then be prompted to complete the setup.

Step 1

Click on the “Start Setup” button.

Step 2

You will be prompted to give access to your site’s Google Search Console profile, which means you need to sign in to the Google account that has access to your site’s Search Console profile.

Step 3

Once logged in, you need to grant permissions for Google to access the data in your Search Console profile.

Step 4

Once you’ve granted all the respective permissions, you will get a completion notification and can then click on “Go to my Dashboard.”

Step 5

Once you’re in the Dashboard, you will see options to connect other services such as Analytics, AdSense and PageSpeed Insights. You can now choose to connect these services if you like. If you go to the plugin’s settings, you will see additional connection options for Optimize and Tag Manager.

Here is what the dashboard looks like with Search Console, Analytics and PageSpeed Insights enabled. You can see a clear breakdown of the respective metrics.

The plugin allows you to dive into each report, with navigation options on the left to drill down into Search Console and Analytics.

There is also an admin bar feature to see individual page stats.

In summary, this is a great plugin by Google but keep in mind it’s just version 1.0. I’m excited to see what features and integrations the later versions will have!


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About The Author

Tony Edward is a director of SEO at Tinuiti and an adjunct instructor of search marketing at NYU. Tony has been in the online marketing industry for over 10 years. His background stems from affiliate marketing and he has experience in paid search, social media and video marketing. Tony is also the founder of the Thinking Crypto YouTube channel.


Bing Announces Link Penalties – Search Engine Journal

Roger Montti


Bing announced new link penalties. These penalties are focused on taking down private blog networks (PBNs), subdomain leasing and manipulative cross-site linking.

Inorganic Site Structure

An inorganic site structure is a linking pattern that uses internal site-level link signals (with subdomains) or cross-site linking patterns (with external domains) in order to manipulate search engine rankings.

While these spam techniques already existed, Bing introduced the term “inorganic site structure” to describe them.

Bing noted that sites legitimately create subdomains to keep different parts of the site separate, such as support.example.com. These are treated as belonging to the main domain, passing site-level signals to the subdomains.

Bing also said that services like WordPress.com host standalone sites under subdomains, in which case no site-level signals are passed to the subdomains.

Examples of Inorganic Site Structure

An inorganic site structure is when, for example, a company leases a subdomain in order to take advantage of site-level signals and rank better.

Private blog networks were also included as inorganic site structure.

Domain Boundaries

Bing also introduced the idea of domain boundaries: the idea is that there are boundaries to a domain. Sometimes, as in the case of legitimate subdomains (e.g., support.example.com), those boundaries extend out to the subdomain. In other cases, like WordPress.com subdomains, the boundaries do not extend to the subdomains.

Private Blog Networks (PBNs)
Bing called out PBNs as a form of spam that abuses website boundaries.

“While not all link networks misrepresent website boundaries, there are many cases where a single website is artificially split across many different domains, all cross-linking to one another, for the obvious purpose of rank boosting. This is particularly true of PBNs (private blog networks).”

Subdomain Leasing Penalties

Bing explained why they consider subdomain leasing a spammy activity:

“…we heard concerns from the SEO community around the growing practice of hosting third-party content or letting a third party operate a designated subdomain or subfolder, generally in exchange for compensation.

…the practice equates to buying ranking signals, which is not much different from buying links.”

At the time of this article, I still see a news site subdomain ranking in Bing (and Google). This page belongs to another company. All the links are redirected affiliate-type links with parameters meant for tracking the referrals.

According to Archive.org, the subdomain page was credited to an anonymous news staffer. Sometime in the summer, the author was switched to a named person who is labeled as an expert, although the content is still the same.

So if Bing is already handing out penalties, that means Bing (and Google, which also ranks this page) still have some catching up to do.

Cross-Site Linking

Bing mentioned sites that are essentially one site broken up into multiple interlinking sites. Curiously, Bing said that these kinds of sites are already in violation of other link spam rules, but that additional penalties will apply.

Here’s the kind of link structure that Bing used as an example:

[Illustration: a spammy link network]

All these sites are interlinking to each other. All the sites have related content and, according to Bing, are essentially the same site. This kind of linking practice goes back many years. These are traditionally known as interlinked websites. They are generally topically related to each other.

Bing used the above example to illustrate interlinked sites that are really just one site.

That link structure resembles the structure of interlinked websites that belong to the same company. If you’re planning a new web venture, it’s generally a better idea to create one comprehensive site than to create a multitude of sites that each focus on just a small part of the niche.

Curiously, in reference to the above illustration, Bing said that kind of link structure was already in violation of link guidelines and that more penalties would be piled on top of those:

“Fig. 3 – All these domains are effectively the same website.
This kind of behavior is already in violation of our link policy.

Going forward, it will be also in violation of our “inorganic site structure” policy and may receive additional penalties.”

Takeaway

It’s good news to hear Bing is improving. Competition between search engines encourages innovation, and as Bing improves, search traffic may become more diversified as more people switch to Bing and other engines like DuckDuckGo.

Read Bing’s announcement: Some Thoughts on Website Boundaries




Google Releases its Site Kit WordPress Plugin Out of Beta

Matt Southern


Google has released version 1.0 of its Site Kit plugin for WordPress, which means it’s officially out of beta after six months.

In the time since the developer preview of Site Kit was released, Google says it drastically simplified the setup, fixed bugs, and polished the main user flows.

Site Kit allows WordPress users to access data from Google products right from their site’s dashboard. The plugin aggregates data from Google Search Console, Google Analytics, PageSpeed Insights, and AdSense.


With Site Kit there’s no additional code editing required, which makes it easy to set up products like Google Analytics for those without any developer experience.

Anyone can install Site Kit, but Google emphasizes that it’s especially useful for professionals who work on sites for clients. The reasons why include:

  • Clients and other teams can easily access data from Google products by logging into the WordPress dashboard.
  • Clients will see performance stats and improvement recommendations directly from Google.
  • Site Kit allows you to set roles and permissions and make sure only relevant people can see the data.

To get the most out of Site Kit, Google recommends reviewing the main dashboard on at least a weekly basis. You can also check the stats of individual pages by navigating to the page and clicking on Site Kit in the admin bar.


With this data, Google recommends comparing the top performing pages and seeing how people found them. This can help you discover trends, such as which topics get the most engagement on Twitter, which get the most engagement on Facebook, and so on.

To get started with Site Kit, simply install it from your WordPress dashboard.


