
Evergreen Googlebot with Chromium rendering engine: What technical SEOs need to know



It’s been an exciting week, with important announcements from the stage at the 2019 Google I/O event. Probably the most impactful is that Google has committed to regularly updating its Googlebot crawl service so that it always uses the most recent stable version of its headless Chromium rendering engine. This is a significant leap forward, with more than 1,000 features now supported over the previous version.

Nearly all of the newly supported features involve modern JavaScript syntax, officially ECMAScript 2015 (ES6) and later. If you are a JavaScript developer, you want access to the latest version of the language and the syntactic sugar that keeps appearing as it matures. Whether you write vanilla JavaScript or favor one of the modern reactive frameworks, many of these features offer cleaner patterns for blocks of commonly written code.

One basic example is adding a value to an array, something commonly done with push():

<script>
  // Pre-ES6 style: declare the array, then append a value with push()
  var names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names.push('David');
</script>

Reactivity in a Nutshell

In the example above, an array of names is defined with three values: Amy, Bruce, and Chris. David is then added to the list using the push() method. In modern reactive frameworks, mutating values like this can trigger a ‘diff’ of the page DOM against a newer ‘virtual DOM’ maintained by the framework; because the array values now differ, the page can be updated by JavaScript without reloading the browser window.

Reactivity in web-facing applications is where JavaScript has really added to our capabilities, and those capabilities continue to advance as modern JavaScript evolves on both the server and in the browser. It can get tricky to keep track of JavaScript written for the server versus JavaScript that gets shipped to the browser. For example, ES6 lets you do the following, including using ‘let’ (and ‘const’) in declaration statements:

<script>
  // ES6 style: block-scoped 'let' declaration and spread syntax
  let names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names = [...names, 'David'];
</script>

Backward Compatibility

The names array update above uses the newer ‘spread’ syntax, [...names], to copy the current values of the array, and then adds David with an assignment operation instead of the push() method. The newer syntax is not compatible with Chrome 41, and therefore would not have worked prior to Googlebot’s update to Chrome 74. For developers, having to write or transpile ES6 down for backward compatibility is death by a thousand cuts.
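For comparison, an ES5-compatible version, roughly what a transpiler such as Babel might emit (a sketch, not exact transpiler output), looks like this:

<script>
  // ES5-compatible equivalent: concat() returns a new array without spread syntax
  var names = ['Amy', 'Bruce', 'Chris'];
  names = names.concat(['David']);
</script>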

Now modern JavaScript syntax will largely work straight out of the box with Googlebot, and there are dozens of new features available beyond the one above. Just be aware that Bing and DuckDuckGo (as well as social share crawlers) may not be able to interpret ES6 syntax.
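For a sense of what else is now on the table, here is a short, illustrative sketch of a few other ES6 features the updated engine can parse (not an exhaustive list):

<script>
  // Arrow function with a template literal
  const greet = (name) => `Hello, ${name}!`;

  // Destructuring with a rest element
  const [first, ...rest] = ['Amy', 'Bruce', 'Chris'];

  console.log(greet(first)); // "Hello, Amy!"
  console.log(rest);         // ["Bruce", "Chris"]
</script>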

Real-Life Example

The Svelte framework was recently overhauled for version 3, which introduced more precisely triggered, assignment-based page reactivity. There’s a fun viral video about it going around. Writing (or transpiling down to) the older push() syntax for Google requires an extra step in Svelte, because push() adds a value to an array without performing a variable assignment, and an assignment is what triggers page reactivity in Svelte 3.

<script>
  let names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names.push('David');
  names = names; // To trigger Svelte reactivity
</script>

It’s easy to see why being able to use ES6:

<script>
  names = [...names, 'David'];
</script>

…is more developer-friendly for Svelte users than before.

Evergreen Chromium rendering

Now that Googlebot’s evergreen Chromium rendering engine can be counted on, React, Angular, Vue, Svelte 3, and vanilla JavaScript users can worry a little less about polyfills specific to Chrome 41 and about writing or transpiling down ES6 syntax in their projects. Concerns still exist, however. You need to test and make sure the rendering engine is behaving the way you anticipate, because Google is more guarded about exposing its resources than a user’s browser would be.

Google recommends checking the documentation for references to its Web Rendering Service (WRS) instances: basically Chromium 74, currently, in products like the mobile-friendly test and the URL Inspection tool. For example, a geolocation script might ask for browser location services, but Google’s rendering engine doesn’t expose that API. Unhandled exceptions like these in your JavaScript can halt rendering and hurt your indexing.
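One way to guard against that is defensive feature detection, so a missing API degrades gracefully instead of throwing. A minimal sketch, where showNearbyStores() and showDefaultStores() are hypothetical page functions:

<script>
  // Only request location services if the environment actually exposes the API
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      (position) => showNearbyStores(position.coords), // hypothetical success handler
      () => showDefaultStores()                        // hypothetical fallback
    );
  } else {
    // Googlebot's renderer won't expose geolocation; render a sensible default
    showDefaultStores();
  }
</script>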

Tracking Googlebot

If you’re still seeing older versions of Chrome from Googlebot in your server logs, know that Google will eventually update the user-agent string to reflect the version of Chrome it is actually running. Also keep in mind that Google is a fairly large and dispersed company, with divisions that have varying access to its network resources. A particular department might have settings to modify before it begins using the new Chrome engine, but it stands to reason that everything will be using it very soon, especially the critical Web crawling services.
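If you want to watch for the switch in your own logs, a small Node.js sketch like the following can report the Chrome version Googlebot claims; the sample log line and regular expression are illustrative assumptions, not a fixed format:

// Node.js sketch: report the Chrome version found in a Googlebot log line
const line = '66.249.66.1 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/74.0.3729.131 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"';

if (line.includes('Googlebot')) {
  const match = line.match(/Chrome\/(\d+)/);
  console.log(match ? `Googlebot reported Chrome ${match[1]}` : 'No Chrome token in user-agent');
}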

Technical SEO Advice

What does this mean for technical SEOs? There will be fewer critical indexing issues to point out for sites running modern JavaScript. Traditional advice, however, remains largely intact. For example, the new rendering engine does not shortcut the indexing render queue for reactive code. That means sites running React, Angular, Vue and the like are still better off pre-rendering relatively static pages, and best off server-side rendering (SSR) truly dynamic ones.

The nice thing about being a technical SEO is that we get to advise developers about practices that align with Googlebot and that, mostly, they ought to be doing in the first place. The nice thing about being an SEO developer is that there’s a never-ending river of exciting modern code to play with, especially with Google now caught up to Chromium 74. The only drawback is that evergreen Chromium Googlebot doesn’t help you with Bing, DuckDuckGo, or social media sharing crawlers.

That’s A Pretty Big Drawback

The more things change, the more they stay the same. You should still advise clients about pre-rendering and SSR. This ensures that no matter what user-agent you’re dealing with, it receives rendered content for search or sharing. The predicament is that if the planned application has a huge volume of reactive parts, for example constantly updating sports scores or stock market prices, then reactivity is required and SSR alone won’t work.

That’s when it’s necessary to do SSR and ship custom JavaScript for deferred hydration, similar to code-splitting. Basically, the complete HTML is shipped fully rendered from the server, and then JavaScript takes care of updating the reactive parts. If the JavaScript doesn’t run in Bing or DuckDuckGo, that’s all right, because you already shipped fully rendered HTML. This can seem excessive, but keep in mind that a search engine will only ever be able to represent rankings for your page in the state it was in at a particular point in time anyway.
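A rough sketch of that idea, assuming the server has already shipped fully rendered HTML and a hypothetical /js/live-scores.js module that exposes a hydrateScores() function for the reactive widget:

<script>
  // The HTML is already fully rendered by the server; defer loading the reactive code
  window.addEventListener('load', () => {
    // Dynamic import() lazily fetches the widget bundle, much like code-splitting
    import('/js/live-scores.js')
      .then((module) => module.hydrateScores(document.querySelector('#scores')))
      .catch(() => { /* crawler or older browser: the rendered HTML still stands */ });
  });
</script>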

Why Such Reactivity?

SSR can accomplish the SEO rendering feat across user-agents for you, and users’ browsers can run JavaScript for reactive features. But why bother? If you are using a reactive framework just because you can, maybe you didn’t need it in the first place. If your site doesn’t require much reactivity and you want to avoid the trouble and expense of managing myriad complex details, build static pages, using pre-rendering where necessary, or write vanilla JavaScript for the feature or two that actually requires reactivity.
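For that “feature or two” case, a small vanilla JavaScript sketch is often enough; the /api/score endpoint and the score element id are hypothetical:

<script>
  // Poll a score endpoint and update a single element; no framework required
  const scoreEl = document.getElementById('score');
  setInterval(() => {
    fetch('/api/score')
      .then((response) => response.json())
      .then((data) => { scoreEl.textContent = data.score; })
      .catch(() => { /* on failure, leave the server-rendered score in place */ });
  }, 10000);
</script>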

Server-Side Rendering

If you think server-side rendering is a piece of cake, read a post describing some of the horrors you might encounter before you charge in, especially if you’re trying to retrofit a pre-existing application. In short, you should be writing universal JavaScript, and it gets complex quickly, including security implications. Luckily, there is also a terrific new set of nicely written posts comprising a fairly thorough React tutorial if you’re working from scratch. We highly recommend reading it to supplement the official React guide.
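That said, the core of a React SSR setup is small even if the surrounding concerns are not. A minimal Express sketch, assuming an App component of your own, with routing, data loading and error handling omitted:

// Minimal React server-side rendering sketch with Express (Node.js)
const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./App'); // your own component; assumed here

const server = express();

server.get('*', (req, res) => {
  // Render the component tree to an HTML string on the server
  const html = ReactDOMServer.renderToString(React.createElement(App));
  res.send('<!doctype html><div id="root">' + html + '</div><script src="/client.js"></script>');
});

server.listen(3000);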

A New Hope

Things move quickly, and keeping up can be tough, even for Google. The news that it has updated to Chrome 74 to render more of the modern Web is long overdue, and it’s important that Google intends to keep Googlebot within weeks of consumer Chrome releases. We can now test more code with local software and be confident our sites work with Googlebot. One very intriguing new paradigm for reactivity is Svelte, which has an SSR output mode you can test directly in its tutorial REPL. Svelte brings us reactivity that is closer to vanilla JavaScript than other frameworks, a real achievement.
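Outside the REPL, a Svelte 3 component compiled for the server exposes a render() method. A minimal Node.js sketch, assuming an App.svelte component of your own that accepts a name prop:

// Minimal Svelte 3 server-side rendering sketch (Node.js)
require('svelte/register'); // compiles .svelte files on the fly for SSR
const App = require('./App.svelte').default;

// render() returns the markup plus any collected <head> content and CSS
const { html, head, css } = App.render({ name: 'world' });

console.log(html); // fully rendered HTML, ready to ship to any crawler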


About The Author

Detlef Johnson is Editor at Large for Third Door Media. He writes a column for Search Engine Land entitled “Technical SEO for Developers.” Detlef is one of the original group of pioneering webmasters who established the professional SEO field more than 20 years ago. Since then he has worked for major search engine technology providers, managed programming and marketing teams for Chicago Tribune, and consulted for numerous entities including Fortune 500 companies. Detlef has a strong understanding of Technical SEO and a passion for Web programming. As a noted technology moderator at our SMX conference series, Detlef will continue to promote SEO excellence combined with marketing-programmer features and webmaster tips.




Google Search Console unparsable structured data report data issue



Google has informed us that you may see a spike in errors in the unparsable structured data report within Google Search Console. This is a bug in the reporting system and you do not need to worry. The issue happened between January 13, 2020 and January 16, 2020.

The bug. Google wrote on the data anomalies page “Some users may see a spike in unparsable structured data errors. This was due to an internal misconfiguration that will be fixed soon, and can be ignored.” This was dated January 13, 2020 through January 16, 2020.

To be fixed. Google said it will fix the internal misconfiguration. It is, however, unclear whether the data will be corrected or whether you will continue to see a spike in errors within that date range.

Unparsable structured data report. The unparsable structured data report is accessible within Google Search Console. It aggregates parsing issues, such as structured data syntax errors, that specifically prevented Google from identifying the feature type.

Why we care. The main thing here is that if you see a spike in errors in that report between January 13th and 16th, do not worry; it is a bug with the report, not an issue with your website. Go back to the report in a few days and confirm that no errors occur after January 17th to be sure you have no real technical issues.


About The Author

Barry Schwartz is a Contributing Editor to Search Engine Land and a member of the programming team for SMX events. He owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on very advanced SEM topics. Barry’s personal blog is named Cartoon Barry and he can be followed on Twitter.




Google rolls out organic ‘Popular Products’ listings in mobile search results



Several years ago now, Google made the significant move to turn product search listings into an entirely paid product. Shopping campaigns, as they’re now called, have accounted for an increasing share of retail search budgets ever since. More recently, however, Google has been augmenting organic search results with product listings. It’s in a product search battle with Amazon, after all. On Thursday, the company announced the official rollout of “Popular Products” for apparel, shoe and similar searches in mobile results.

Organic product listings. Google has been experimenting with ways to surface product listings in organic search results, including Popular Products, which has been spotted in testing for several months. The section is powered by organic product feeds submitted through Google Merchant Center. Google says it identifies popular products from merchants and shows them in a single spot, allowing users to filter by style, department and size type. The listings link to the retailers’ websites.

Popular Products is now live in Google mobile search results.

Why we care. This is part of a broader effort by Google to enhance product search experiences as it faces increasing competition from Amazon and other marketplaces as well as social platforms. Earlier this week, Google announced it has acquired Pointy, a hardware solution for capturing product and inventory data from small local merchants that can then be used in search results (and ads).

In the past few years, Google has also prompted retailers to adopt product schema markup on their sites by adding support for it in Search and Image search results. Then last spring, Google opened up Merchant Center to all retailers, regardless of whether they were running Shopping campaigns. Any retailer can submit a feed to Google in real time to make its products eligible to appear in search results.

Ad revenue was certainly at the heart of the shift to paid product listings, but prior to the move, product search on Google was often a poor user experience, with listings frequently not matching what was on the landing page, from availability to pricing to the product itself. The move to a paid solution imposed quality standards that forced merchants to clean up their product data and provide it to Google in a structured manner, in the form of product feeds submitted through Google Merchant Center.


About The Author

Ginny Marvin is Third Door Media’s Editor-in-Chief, running the day to day editorial operations across all publications and overseeing paid media coverage. Ginny Marvin writes about paid digital advertising and analytics news and trends for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, Ginny has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.




Google buys Pointy to bring SMB store inventory online



Google is acquiring Irish startup Pointy, the companies announced Tuesday. Pointy has solved a problem that vexed startups for more than a decade: how to bring small, independent retailer inventory online.

The terms of the deal were not disclosed, but Pointy had raised less than $20 million, so it probably wasn’t an expensive buy for Google. Even so, it could have a significant impact on the future of product search.

Complements local inventory feeds. This acquisition will help Google offer more local inventory data in Google My Business (GMB) listings, knowledge panels and, especially, ads. It complements local inventory ads in Google Shopping campaigns, which first launched in 2013 and are largely utilized by enterprise merchants.

Numerous companies over the last decade tried to solve the challenge of how to bring small business product inventory online. However, most failed because the majority of SMB retailers lack sophisticated inventory management systems that can generate product feeds and integrate with APIs.

Pointy POS hardware

Source: Pointy

How Pointy works. The company created a simple way to get local store inventory online and then showcase that inventory in organic search results or paid search ads. It utilizes a low-cost hardware device that attaches to a point-of-sale barcode scanner (see image above) and is compatible with multiple POS systems, including Square.

Once the device is installed, it captures every product sold by the merchant and then creates a digital record of products, which can be pushed out in paid or organic results. (The company also helps small retailers set up local inventory ads using the data.) Pointy also creates local inventory pages for each store and product, which are optimized and can rank for product searches.

Pointy doesn’t actually track real-time inventory. Cleverly, however, it uses machine learning algorithms to estimate it by measuring product purchase frequency; the system assumes local retailers will keep frequently purchased items in stock. That’s an oversimplification, but it is essentially how it works.

Pointy said in a blog post that it “serve[s] local retailers in almost every city and every town in the U.S. and throughout Ireland.”

Why we care. The Pointy acquisition will likely help Google in at least three ways:

  • Provide more structured, local inventory data for consumers to find in Search.
  • Generate more advertising revenue over time from independent retailers.
  • Help Google more effectively compete with Amazon in product search.

Notwithstanding the fact that e-commerce outperformed traditional retail over the holidays, most people spend the bulk of their shopping budgets offline and prefer to shop locally. Indeed, Generation Z prefers to shop in stores, according to an A.T. Kearney survey.

One of the reasons people shop at Amazon is that they can find the products they’re looking for; they often don’t know where to find a particular product locally. As more local inventory data becomes available, more people may opt to buy from local stores instead.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.


