

Evergreen Googlebot with Chromium rendering engine: What technical SEOs need to know




It’s been an exciting week with important announcements from the stage at the 2019 Google I/O event. Probably the most impactful is that Google has committed to regularly updating its Googlebot crawl service to use the most recent stable version of its headless Chromium rendering engine. This is a significant leap forward, with more than 1,000 features now supported over the previous version.

Nearly all of the newly supported features are modern JavaScript syntax, officially ECMAScript 2015 (ES6) and later. If you are a JavaScript developer, you want access to the latest version of the language for the syntactic sugar that continually appears as the language matures. Whether you write vanilla JavaScript or favor one of the modern reactive frameworks, many of these neat new features exist because developers recommended better patterns for blocks of commonly written code.

One basic example is adding a value to an array, a very common operation, using push():

  names = ['Amy', 'Bruce', 'Chris'];
  names.push('David');

Reactivity in a Nutshell

In the example above, an array of names is defined and assigned three values: Amy, Bruce, and Chris. Then David is added to the list using the push() method. With modern reactive frameworks, mutation of values can trigger a ‘diff’ evaluation of the page DOM against a newer ‘virtual DOM’ maintained by the framework, and since the array values differ, page values can be updated by JavaScript without reloading the browser window.
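To make the idea concrete, here is a toy illustration of that kind of diff, not any real framework’s algorithm: compare old and new array values and report what changed, so only those items would need to be patched into the page.

```javascript
// Toy diff (illustrative only, not a real virtual-DOM implementation):
// return the values present in the new list but not in the old one.
function diffAdded(oldValues, newValues) {
  return newValues.filter(value => !oldValues.includes(value));
}

const before = ['Amy', 'Bruce', 'Chris'];
const after = [...before, 'David'];
console.log(diffAdded(before, after)); // → [ 'David' ]
```

A framework would then update only the DOM node for ‘David’ rather than re-rendering the whole list.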

Reactivity in web-facing applications is where JavaScript has really added to our capabilities, and where our capabilities continue to advance as modern JavaScript further evolves on the server and in the browser. It gets tricky to keep track of JavaScript written for the server versus JavaScript that gets shipped to the browser. For example, with ES6 you can do the following, including the ability to use ‘let’ (and ‘const’) in definition statements:

  let names = ['Amy', 'Bruce', 'Chris'];
  names = [...names, 'David'];

Backward Compatibility

The names array mutation above uses the newer ‘spread’ syntax ([...names]) to represent the current values of the names array, then adds David with an assignment operation instead of the push() method. The newer syntax is not compatible with Chrome 41, and therefore did not work prior to Googlebot’s update to Chrome 74. For developers, having to write or transpile ES6 down for backward compatibility is death by a thousand cuts.
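As a rough sketch of what that transpilation step looks like, here is the ES6 version next to roughly the kind of ES5 code a transpiler emits for an older target like Chrome 41 (the exact output varies by tool; this is an illustrative equivalent):

```javascript
// ES6 source: spread syntax plus assignment.
let names = ['Amy', 'Bruce', 'Chris'];
names = [...names, 'David'];

// Roughly what a transpiler produces for an ES5 target such as Chrome 41:
// concat() returns a new array, mimicking the spread-and-assign pattern.
var namesES5 = ['Amy', 'Bruce', 'Chris'];
namesES5 = namesES5.concat(['David']);

console.log(names.join(',') === namesES5.join(',')); // → true
```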

Now modern JavaScript syntax will largely work straight out of the box with Googlebot, and there are dozens of new features available, such as the one above. Just be aware that Bing and DuckDuckGo (as well as social share crawlers) may not be able to interpret ES6 syntax.

Real-Life Example

The Svelte framework was recently overhauled for version 3, which brought more precisely triggered, assignment-based page reactivity. There’s a fun viral video about it going around. Writing or transpiling the ‘names’ array code down to the older push() syntax for Google requires an extra step in Svelte, because push() adds values to an array but isn’t a variable assignment, and an assignment is what triggers page reactivity in Svelte 3.

  let names = ['Amy', 'Bruce', 'Chris'];
  names.push('David');
  names = names; // To trigger Svelte reactivity

It’s easy to see why being able to use the ES6 version:

  names = [...names, 'David'];

…is more developer friendly for Svelte users than before.

Evergreen Chromium rendering

Now that Googlebot’s evergreen Chromium rendering engine can be counted on, React, Angular, Vue, Svelte 3, and vanilla JavaScript users can worry a little less about Chrome 41-specific polyfills and about writing or transpiling down ES6 syntax. Concerns still exist, however. You need to test and make sure the rendering engine is behaving the way you anticipate, because Google is more guarded about exposing its resources than a user’s browser would be.

Google recommends checking the documentation for references to its Web Rendering Service (WRS): basically Chromium 74, currently, as used in products like the mobile-friendly test and the URL Inspection tool. For example, a geolocation script might ask for browser location services, but Google’s rendering engine doesn’t expose that API. These kinds of exceptions in your JavaScript may halt your indexing.
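The defensive pattern is to guard browser-only APIs so a renderer that lacks them doesn’t throw. A minimal sketch, assuming a callback interface of our own invention (the function and callback names here are illustrative, not a standard API):

```javascript
// Guard a browser-only API so a headless renderer that lacks it
// (such as Google's WRS) doesn't throw and halt rendering.
function requestLocation(onResult) {
  if (typeof navigator !== 'undefined' && 'geolocation' in navigator) {
    navigator.geolocation.getCurrentPosition(
      position => onResult(position.coords),
      () => onResult(null) // user denied the prompt or lookup failed
    );
  } else {
    // No geolocation API available: render location-neutral content.
    onResult(null);
  }
}
```

In a browser with location services the callback receives coordinates; under a renderer without the API, the page still renders with a sensible fallback instead of an uncaught exception.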

Tracking Googlebot

If you’re still tracking visits from older versions of Chrome in your server logs, be aware that Googlebot will eventually update its user-agent string to reflect the version of Chrome it is running. Also keep in mind that Google is a large, dispersed company whose divisions have varying access to its network resources. A particular department might have settings to modify before it can begin using the new Chrome engine, but it stands to reason that everything will be using it very soon, especially the critical Web crawling services.

Technical SEO Advice

What does this mean for technical SEOs? There will be fewer critical indexing issues to point out for sites running modern JavaScript. Traditional advice, however, remains largely intact. For example, the new rendering engine does not shortcut the indexing render queue for reactive code. That means sites running React, Angular, Vue, etc. are still better off pre-rendering relatively static sites, and best off server-side rendering (SSR) truly dynamic sites.

The nice thing about being a technical SEO is we get to advise developers about practices that align with Googlebot and that, mostly, they ought to be doing in the first place. The nice thing about being an SEO developer is there’s a never-ending river of exciting modern code to play with, especially with Google now caught up to Chromium 74. The only drawback is that evergreen Chromium Googlebot doesn’t help you with Bing, DuckDuckGo, or social media sharing crawlers.

That’s A Pretty Big Drawback

The more things change, the more they stay the same. You should still advise clients about pre-rendering and SSR. This ensures that no matter what user-agent you’re dealing with, it receives rendered content for search or sharing. The predicament is that if the planned application has a huge volume of reactive parts, for example constantly updating sports scores or stock market prices, reactivity is required and SSR alone won’t work.

That’s when it’s necessary to do SSR and also ship custom JavaScript for deferred hydration, similar to code-splitting. Basically, the complete HTML is shipped fully rendered from the server, and then JavaScript takes care of updating the reactive parts. If JavaScript doesn’t render in Bing or DuckDuckGo, that’s all right, because you already shipped fully rendered HTML. This can seem excessive, but keep in mind that a search engine will only ever represent rankings for your page in the state it was in at a particular point in time anyway.
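A minimal sketch of that pattern, with the DOM element mocked as a plain object for clarity (in a browser it would come from document.querySelector(); the names here are illustrative, not a real hydration API):

```javascript
// Deferred hydration sketch: the server ships fully rendered HTML
// (the score is already in the markup), and client-side JavaScript
// later patches only the reactive part.
function hydrate(el, getLiveValue) {
  // Crawlers that never execute this still see the server-rendered value.
  el.textContent = getLiveValue();
  return el;
}

const scoreEl = { textContent: '2-1' }; // server-rendered state
hydrate(scoreEl, () => '3-1');          // client-side live update
console.log(scoreEl.textContent); // → 3-1
```

The key point is that the server-rendered value is always present first, so a crawler that never runs the script still indexes meaningful content.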

Why Such Reactivity?

SSR can accomplish the SEO rendering feat across user-agents for you, and user browsers can run JavaScript for reactive features. But why bother? If you are using a reactive framework just because you can, maybe you didn’t need to in the first place. If the nature of your site doesn’t require much reactivity, you can avoid the trouble and expense of managing myriad complex details: build a static site, pre-rendering if necessary, and write vanilla JavaScript for the feature or two that may actually require reactivity.

Server Side Rendering

If you think server-side rendering is a piece of cake, read a post describing some of the horrors you might encounter before you charge in, especially if you’re trying to retrofit a pre-existing application. In short, you should be writing universal JavaScript, and it gets complex quickly, including security implications. Luckily, there is also a terrific new set of nicely written posts that comprise a fairly thorough React tutorial if you’re working from scratch. We highly recommend reading it to supplement the official React guide.

A New Hope

Things move quickly, and keeping up can be tough, even for Google. The news that it has updated to Chrome 74 for rendering more of the modern Web is long overdue, and it’s important that Google intends to keep Googlebot within weeks of consumer Chrome releases going forward. We can now test more code using local software to make sure our sites work with Googlebot. A very intriguing new paradigm for reactivity is Svelte, which has an SSR output mode you can test directly in its tutorial REPL. Svelte brings us reactivity that is closer to vanilla JavaScript than the others, a real achievement.

About The Author

Detlef Johnson is Editor at Large for Third Door Media. He writes a column for Search Engine Land entitled “Technical SEO for Developers.” Detlef is one of the original group of pioneering webmasters who established the professional SEO field more than 20 years ago. Since then he has worked for major search engine technology providers, managed programming and marketing teams for Chicago Tribune, and consulted for numerous entities including Fortune 500 companies. Detlef has a strong understanding of Technical SEO and a passion for Web programming. As a noted technology moderator at our SMX conference series, Detlef will continue to promote SEO excellence combined with marketing-programmer features and webmaster tips.



LinkedIn Users Can View All Sponsored Content From the Past 6 Months



LinkedIn pages will soon feature an ‘Ads’ tab showing all sponsored content an advertiser has run in the past six months.

The company says this change is being made in an effort to bring even greater transparency to ads on LinkedIn.

“At LinkedIn, we are committed to providing a safe, trusted, and professional environment where members can connect with each other, engage with relevant content, and grow their careers. Increased transparency to both our customers and members is critical to creating this trusted environment.”

While viewing ads in the new tab, users can click on the ads but the advertiser will not be charged.

Ad clicks from within the ‘Ads’ tab will not impact campaign reporting either.

From a marketing perspective, I see this as being an opportunity for competitor research.

Do you know a company that is killing it with LinkedIn advertising? View their Ads tab to see if you can learn from what they’re doing.

Of course, the Ads tab will only show you what their ads look like.

It won’t reveal anything about how those ads are targeted or what the company’s daily budget is. But hey, it’s something.

LinkedIn says this is the first of many updates to come as the company furthers its effort to provide users with useful information about the ads they see.

The new Ads tab is rolling out globally over the next few weeks.



SEMrush expands to Amazon with Sellerly for product page testing



SEMrush is a popular competitive intelligence platform used by search marketers. The company, recently infused with $40 million in funding to expand beyond Google, Bing and Yahoo insights, has launched a new product called Sellerly specifically for Amazon sellers.

What is Sellerly? Announced Monday, Sellerly is designed to give Amazon sellers the ability to split test product detail pages.

“By introducing Sellerly as a seller’s buddy in Amazon marketing, we hope to improve hundreds of existing Amazon sellers’ strategies,” said SEMrush Chief Strategy Officer Eugene Levin in a statement. “Sellerly split testing is only the first step here. We’ve already started to build a community around the new product, which is very important to us. We believe that by combining feedback from users with our leading technology and 10 years of SEO software experience, we will be able to build something truly exceptional for Amazon sellers.”

How does it work? Sellerly is currently free to use. Amazon sellers connect their Amazon accounts to the tool in order to manage their product pages. Sellers can make changes to product detail pages to test against the control. Sellerly collects data in real time, and sellers can then choose winners based on views and conversions.

Sellers can run an unlimited number of tests.
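Sellerly hasn’t published its decision logic, but “choosing winners based on views and conversions” amounts to comparing conversion rates between variants. A hypothetical sketch of that comparison (not Sellerly’s actual algorithm; the field names are our own):

```javascript
// Hypothetical winner selection for a two-variant product-page test,
// comparing simple conversion rates (conversions / views).
function pickWinner(variantA, variantB) {
  const rate = v => v.conversions / v.views;
  return rate(variantA) >= rate(variantB) ? variantA.name : variantB.name;
}

const control = { name: 'control', views: 1200, conversions: 36 };   // 3.0%
const challenger = { name: 'new title', views: 1150, conversions: 46 }; // 4.0%
console.log(pickWinner(control, challenger)); // → new title
```

In practice a real tool would also check statistical significance before declaring a winner, since small samples can flip either way by chance.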

Why we should care. Optimizing product detail pages is a critical aspect of success on Amazon. As Amazon continues to generate an increasing share of e-commerce sales for merchants big and small, and as competition increases, product page optimization becomes even more critical. Amazon does not support A/B testing natively. Sellerly is not the first split-testing product for Amazon product pages to market; Splitly (paid) and Listing Dojo (free) are two others that offer similar services.

About The Author

Ginny Marvin is Third Door Media’s Editor-in-Chief, managing day-to-day editorial operations across all of our publications. Ginny writes about paid online marketing topics including paid search, paid social, display and retargeting for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, she has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.



Google on Domain Penalties that Don’t Expire



Google’s John Mueller was presented with a peculiar situation: a website with zero notifications of a manual action that cannot rank for its own brand name. Mueller analyzed the situation, thought it through, then appeared to reach the conclusion that maybe Google was keeping it from ranking.

This is a problem that has existed for a long time, from before Mueller worked at Google. It’s a penalty that’s associated with a domain that remains even if the domain is registered by a new buyer years later.

Description of the Problem

The site with a penalty has not received notices of a manual penalty.

That’s what makes it weird: how can a site be penalized if it’s never been penalized?

The site had an influx of natural links due to word-of-mouth popularity. Yet even with those links, the site cannot rank for its own name or for a snippet of content from its home page.

Had those natural links or the content been a problem, Google would have notified the site owner. So the problem is not with the links or the content.

Nevertheless, the site owner disavowed old inbound links from before he purchased the site but the site still did not rank.

Here is how the site owner described the problem:

“We bought the domain three years ago to have a brand called Girlfriend Collective, it’s a clothing company on the Shopify platform.

We haven’t had any… warnings from our webmaster tools that says we have any penalizations… So I was just wondering if there was any other underlying issues that you would know outside of that…

The domain is and the query would be Girlfriend Collective.

It’s been as high as the second page of the SERPs, but… we get quite a few search queries for our own branded terms… it will not show up.

My assumption was that before we bought it, it was a pretty spammy dating directory.”

John Mueller’s response was:

“I can double check to see from our side if there’s anything kind of sticking around there that you’d need to take care of…”

It appears as if Mueller is being circumspect in his answer and doesn’t wish to say that it might be a problem at Google. At this point, he’s still holding on to the possibility that there’s something wrong with the site. You can’t blame him because he probably gets this all the time, where someone thinks it’s Google but it’s really something wrong with the site.

Is There Something Wrong with the Domain Name?

I checked the domain’s history. It was linking to adult sites prior to 2004, and sometime in mid-2004 it switched its monetization strategy away from linking to adult sites to displaying Google ads as a parked domain.

A parked domain is a domain that does not have a website on it; it just has ads. People used to type domain names into the address field, and sites like this one would monetize that “type-in” traffic with Google AdSense, usually through a service that shows ads on the domain owner’s behalf in exchange for a percentage of the earnings.

The fact that it was linking to adult sites could be a factor that caused Google to more or less blacklist the domain and keep it from ranking.

Domain Related Penalties Have Existed for a Long Time

This has happened many times over the years. It used to be standard to check the background of a domain before purchasing it.

I remember the case of a newbie SEO who couldn’t rank for his own brand name. Another SEO who was more competent contacted Google on his behalf and Google lifted the legacy domain penalty.

The Search Query

Mueller referred to the search queries the site owner wanted to rank for as being “generic” and commented that ranking for those kinds of “generic” terms is tricky.

This is what John Mueller said:

“In general, when it comes to kind of generic terms like that, that’s always a bit tricky. But it sounds like you’re not trying to rank for like just… girlfriend. “

However, the phrase under discussion was the company name, Girlfriend Collective, which is not a generic phrase.

It could be argued that the domain name is not relevant for the brand name. So perhaps Mueller was referencing the generic nature of the domain name when he commented on ranking for “generic” phrases?

I don’t understand why “generic” phrases entered into this discussion. The site owner answered Mueller to reinforce that he’s not trying to rank for generic phrases, that he just wants to rank for his brand name.

The search phrase the site owner is failing to rank for is Girlfriend Collective. Girlfriend Collective is not a generic keyword phrase.

Is the Site Poorly Optimized?

When you visit the website itself, the word Collective does not exist in the visible content.

The word “collective” is nowhere on the page, not even in the footer copyright. The word is there, but it’s in an image; it has to be in text for Google to recognize it for the regular search results.

That’s a considerable oversight to omit your own brand name from the website’s home page.

Screenshot of the site’s footer

  • The brand name exists in the title tag and other meta data.
  • It does not exist in the visible content where it really matters.
  • The word collective is not a part of the domain name.

A reasonable case could be made that the domain does not merit ranking for the brand name Girlfriend Collective, because the word collective exists only in the title tag of the home page, not on the page itself.
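The distinction between a term in the title tag and a term in the visible body text can be sketched with a simplified check. This is illustrative only; it uses naive regex tag-stripping rather than a real HTML parser, and the sample markup is a stand-in for the actual page:

```javascript
// Simplified check: does a term appear in the visible body text,
// rather than only inside the <title> tag? (Illustrative; a real
// audit tool would use a proper HTML parser.)
function termInVisibleText(html, term) {
  const visible = html
    .replace(/<title>[\s\S]*?<\/title>/i, ' ') // drop the title tag
    .replace(/<[^>]+>/g, ' ');                 // drop remaining markup
  return visible.toLowerCase().includes(term.toLowerCase());
}

const page =
  '<html><head><title>Girlfriend Collective</title></head>' +
  '<body><h1>Girlfriend</h1><p>Made from recycled bottles.</p></body></html>';
console.log(termInVisibleText(page, 'Collective')); // → false
```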

Google Does Not Even Rank it for Page Snippets

However, that reasonable case falls apart upon closer scrutiny. If you take any content from the page and search Google with that snippet, you’ll see that the domain does not even rank for the content on its own page.

The site is fully indexed, but the content is not allowed to rank.

I searched for the following phrases but only found other pages and social media posts ranking in Google, not the site itself:

  • “Five classic colors made from recycled water bottles.”
  • “A bunch of old water bottles have never looked so good.”

That first phrase, “Five classic colors…” doesn’t rank anywhere on Google for the first several pages.

But as you can see below, the site ranks #6 in Bing:

Screenshot of the site ranking in Bing.

Bing has no trouble ranking Girlfriend Collective for a snippet of text taken from the home page. Google does not show it at all. This points to the issue being something to do with Google and not with the site itself.

Even though the site appears to fall short in its search optimization, that is not the problem. The problem is that Google is preventing any content from that domain from ranking.

The reason Google is preventing that content from ranking is because the domain was problematic in the past. At some point in its history it was filtered from ranking. It’s a Legacy Google Penalty.

Checking snapshots of the domain via Archive.org shows that it was being used to promote adult websites prior to 2004.

This is what it looked like sometime in 2004 and onward. It appears to be a parked domain that is showing Google AdSense ads.

Screenshot of the domain from 2004.

This is a snapshot circa 2004. It wasn’t a directory, as the site owner believed. Checking the HTML source code reveals that the page is displaying Google AdSense ads. That’s what a parked domain looked like.

Parked domains used to be able to rank. But at some point after 2004 Google stopped ranking those pages.

There’s no way to know whether the domain received its penalty before 2004 or after.

Site Can’t Rank for its Own Brand Name

There are many reasons why a site can’t rank for its own domain name or for words from its own pages. If you suspect that your site may be suffering from a legacy Google penalty, you can verify its previous content by checking Archive.org, a non-profit that stores snapshots of what web pages looked like. Archive.org lets you verify whether your domain was previously used by someone else to host low-quality content.

Unfortunately, Google does not provide a way to contact them to resolve this matter.

Bing Ranks for Girlfriend Collective

If there was a big problem with links or content on the site that was keeping it from ranking on Google, then it would very likely be apparent on Bing.

Bing and Google use different algorithms. But if there was something so massively wrong with Girlfriend Collective, whether site quality or a technical issue, there would be a high probability that the massive problem would keep it from ranking at Bing.

Bing has no problem ranking the site for its brand name:

Screenshot of Bing search results showing the site ranking in a normal manner.

Bing ranks the domain normally. This may be proof that there is no major issue with the site itself; the problem may be at Google.

Google’s John Mueller Admits it Might be Google

After listening to how the site owner had spent three years waiting for the legacy domain penalty to drop off, three years of uploading disavows, and three years of bidding on AdWords for its own brand name, John Mueller seemed to realize that the issue was not on the site owner’s side but on Google’s.

This is what John Mueller offered:

“I need to take a look to see if there’s anything sticking around there because it does seem like the old domain was pretty problematic. So that… always makes it a little bit harder to turn it around into something reasonable.

But it feels like after a couple of years that should be possible. “

In the end, Mueller admitted that it might be something on Google’s side. However, an issue remains: there is no solution for other publishers. This is not something a publisher can fix on their own, like a disavow. It’s something a Googler must be made aware of in order to fix.

Watch the Google Webmaster Hangout here

Screenshots by Author, Modified by Author



Copyright © 2019 Plolu.