
How to prepare for a JS migration



An 80 percent decrease in organic traffic is every business’s nightmare. Unfortunately, such a scenario can become reality if a website migration is done incorrectly: instead of improving the current situation, it ends in catastrophe.


There are many types of migrations, such as changing, merging or splitting domains, redesigning the website or moving to a new framework.

Web development trends clearly show that the use of JavaScript has been growing in recent years and that JavaScript frameworks are becoming increasingly popular. We can expect even more websites to rely on JavaScript in the future.

Data source: HTTP Archive, State of JavaScript report (https://httparchive.org/reports/state-of-javascript)

As a consequence, SEOs will be faced with the challenge of migrating to JavaScript frameworks.

In this article, I will show you how to prepare for a migration of a website built with static HTML to a JavaScript framework.

Search engines vs. JavaScript

Google is the only major search engine able to execute JavaScript and “see” elements such as content and navigation even when they are powered by JavaScript. However, there are two things you always need to remember when considering a move to a JS framework.

Firstly, Google uses Chrome 41 for rendering pages. This is a three-year-old browser that does not support all the modern features needed to render advanced pages. Even though Google can render JS websites in general, some important parts may not be discovered because they rely on technology that Google can’t process.

Secondly, executing JavaScript is an extremely heavy process, which is why Google indexes JS websites in two waves. The first wave indexes the raw HTML; for JS-powered websites, that often means an almost empty page. In the second wave, Google executes the JavaScript, “sees” the additional elements it loads and only then indexes the full content of the page.

Because of these two factors, if you decide to move your website to a JavaScript framework, you always need to check whether Google can efficiently crawl and index it.

Migration to a JS framework done right

SEOs may not like JavaScript, but that doesn’t mean its popularity will stop growing. We should prepare as much as we can and implement modern frameworks correctly.

Below you will find information that will help you navigate the process of changing frameworks. I do not provide “ready-to-go” solutions, because your situation is the result of many different factors and there is no universal recipe. However, I want to stress the elements you need to pay particular attention to.

Cover the basics of standard migration

You can’t count on a miracle and assume Google will understand the change without your help. The whole migration process should be planned in detail.

I want to keep the focus on JS migration for this article, so if you need detailed migration guidelines, Bastian Grimm has already covered this.


Understand your needs in terms of serving the content to Google

This step should be done before anything else. You need to decide on how Google will receive the content of your website. You have two options:

1. Client-side rendering: This means you are relying entirely on Google for rendering. However, if you go for this option, you accept some inefficiency. The first important drawback of this solution is the delayed indexing of your content due to the two waves of indexing mentioned above. Secondly, things may not work properly because Chrome 41 does not support all modern features. And last, but not least, not all crawlers can execute JavaScript, so your JS website will look empty to Bing, Yahoo, Twitter and Facebook.


2. Server-side rendering: This solution relies on an external or additional mechanism/component that renders the JS website, creates a static snapshot and serves it to the search engine crawlers. At the Google I/O conference, Google announced that serving a separate version of your website only to crawlers is fine. This is called Dynamic Rendering: you detect the crawler’s user agent and send it a server-side rendered version. This option also has its disadvantages: you have to create and maintain additional infrastructure, rendering a heavy page on the server can introduce delays, and caching problems can mean that Googlebot receives a stale version of the page.
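To make dynamic rendering more concrete, here is a minimal sketch (assuming an Express app on Node 18+ for the global fetch and a hypothetical local rendering endpoint) of detecting crawler user agents and serving them a prerendered snapshot, while normal users get the client-side rendered app. The bot list and URLs are illustrative only, not an official Google or prerender.io API.

```typescript
// Minimal dynamic rendering sketch for an Express app (Node 18+, TypeScript).
import express, { Request, Response, NextFunction } from "express";

const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit/i;

// Hypothetical helper: fetch a prerendered snapshot of the requested URL from
// whatever rendering component you run (prerender.io, your own headless Chrome, etc.).
async function fetchSnapshot(url: string): Promise<string> {
  const response = await fetch(`http://localhost:3001/render?url=${encodeURIComponent(url)}`);
  return response.text();
}

const app = express();

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const ua = req.headers["user-agent"] ?? "";
  if (!BOT_PATTERN.test(ua)) {
    return next(); // regular user: serve the client-side rendered app as usual
  }
  try {
    // Crawler detected: serve the server-side rendered snapshot instead.
    const html = await fetchSnapshot(`https://example.com${req.originalUrl}`);
    res.set("Content-Type", "text/html").send(html);
  } catch (err) {
    next(err);
  }
});

app.listen(3000);
```

The key design point is that the routing decision is based purely on the user agent, while both versions must carry the same content to avoid cloaking concerns.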


Before migrating, you need to decide which of these two options fits your needs.

If the success of your business is built around fresh content (news, real estate offers, coupons), I can’t imagine relying only on the client-side rendered version. It may result in dramatic delays in indexing so your competitors may gain an advantage.

If you have a small website whose content is not updated very often, you can try leaving it client-side rendered, but before launching the website you should test whether Google really sees the content and navigation. The most useful tools for doing so are Fetch as Google in Google Search Console and the Chrome 41 browser.

However, Google officially stated that it’s better to use Dynamic Rendering to make sure they will discover frequently changing content correctly and quickly.

Framework vs. solution

If your choice is to use Dynamic Rendering, it’s time to answer how to serve the content to the crawlers. There is no one universal answer. In general, the solution depends on the technology AND developers AND budget AND your needs.

Below you will find a review of the options you have from a few approaches, but the choice is yours:

  • I need as simple a solution as possible.

I’d probably go for prerendering, for example with prerender.io. It’s an external service that crawls your website, renders your pages and creates static snapshots to serve whenever a specific user agent makes a request. A big advantage of this solution is that you don’t need to create your own infrastructure.

You can schedule recrawling and create fresh snapshots of your pages. However, for bigger and frequently changing websites, it might be difficult to make sure that all the pages are refreshed on time and show the same content both to Googlebot and users.

  • I need a universal solution and I follow the trends.

If you build the website with one of the popular frameworks like React, Vue, or Angular, you can use one of the server-side rendering methods dedicated to the given framework. Here are some popular matches: Next.js for React, Nuxt.js for Vue and Angular Universal for Angular.

Using one of these frameworks on top of React or Vue results in a universal application, meaning that the exact same code can be executed both on the server (server-side rendering) and in the client (client-side rendering). It minimizes the content-gap issues you could have if you relied on creating snapshots and heavy caching, as with prerender.io.

  • I need a universal solution and I don’t use a popular framework.

It may happen that you are going to use a framework that does not have a ready-to-use solution for building a universal application. In this case, you can build your own rendering infrastructure. That means installing a headless browser on your server that renders all the subpages of your website and creates the snapshots that are served to the search engine crawlers. Google provides a solution for that: Puppeteer, a library that does a similar job to prerender.io, except that everything happens on your own infrastructure (see the Puppeteer sketch after this list of options).

  • I want a long-lasting solution.

For this, I’d use hybrid rendering. This solution is said to provide the best experience to both users and crawlers, because both receive a server-side rendered version of the page on the initial request. In many cases, serving an SSR page is faster for users than executing all the heavy files in the browser. All subsequent user interactions are served by JavaScript. Crawlers do not interact with the website by clicking or scrolling, so every request they make is a new request to the server and they always receive the SSR version. It sounds good, but it’s not easy to implement.


The option you choose will depend on many factors, such as technology, developers and budget. In some cases you may have a few options, but in many cases restrictions will leave you with a single realistic choice.
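For the “universal solution without a popular framework” option mentioned above, here is a minimal sketch of generating static snapshots with Puppeteer (headless Chrome) on your own infrastructure. The URL list and output directory are assumptions for illustration; in practice you would feed it your sitemap and wire the output into your serving layer.

```typescript
// Minimal snapshot generator built on Puppeteer.
import puppeteer from "puppeteer";
import { mkdir, writeFile } from "node:fs/promises";

async function createSnapshots(urls: string[]): Promise<void> {
  await mkdir("./snapshots", { recursive: true });
  const browser = await puppeteer.launch();
  try {
    for (const url of urls) {
      const page = await browser.newPage();
      // Wait until the network is idle so client-side rendered content is in the DOM.
      await page.goto(url, { waitUntil: "networkidle0" });
      const html = await page.content(); // serialized DOM after JS execution
      const slug =
        new URL(url).pathname.replace(/[^a-z0-9]+/gi, "_").replace(/^_|_$/g, "") || "index";
      await writeFile(`./snapshots/${slug}.html`, html);
      await page.close();
    }
  } finally {
    await browser.close();
  }
}

createSnapshots(["https://example.com/", "https://example.com/category/shoes"]).catch(console.error);
```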

Testing the implementation

I can’t imagine a migration without creating a staging environment and testing how everything works. Migration to a JavaScript framework adds complexity and additional traps that you need to watch out for.

There are two scenarios. If for some reason you decided to rely on client-side rendering, you need to install Chrome 41 and check how the site renders and works in it. One of the most important points of an audit is checking for errors in the Chrome DevTools console. Remember that even a small error in processing JavaScript may result in issues with rendering.

If you decided to use one of the methods of serving the content to the crawler, you will need to have a staging site with this solution installed. Below, I’ll outline the most important elements that should be checked before going live with the website:

1. Content parity

You should always check whether users and crawlers see exactly the same content. To do that, switch the user agent in the browser to see the version sent to the crawlers and verify any general discrepancies in rendering. However, to see the whole picture, you will also need to check the DOM (Document Object Model) of your website. Copy the source code from your browser, then change the user agent to Googlebot and grab the source code again. Diffchecker will help you see the differences between the two files. Look especially for differences in content, navigation and metadata.
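As a starting point for this check, here is a rough sketch that fetches the same URL with a regular browser user agent and with Googlebot’s user agent and saves both responses for a diff. It assumes Node 18+ for the global fetch; note that it only compares the raw server responses, so you still need to inspect the rendered DOM of the client-side version as described above.

```typescript
// Rough content-parity check: fetch one URL as a browser and as Googlebot,
// report the size difference and dump both versions for a detailed diff.
import { writeFile } from "node:fs/promises";

const BROWSER_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0 Safari/537.36";
const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

async function fetchAs(url: string, userAgent: string): Promise<string> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  return res.text();
}

async function compare(url: string): Promise<void> {
  const [userHtml, botHtml] = await Promise.all([
    fetchAs(url, BROWSER_UA),
    fetchAs(url, GOOGLEBOT_UA),
  ]);
  console.log(`User version:      ${userHtml.length} bytes`);
  console.log(`Googlebot version: ${botHtml.length} bytes`);
  // Save both versions so they can be compared line by line in a diff tool.
  await writeFile("user.html", userHtml);
  await writeFile("googlebot.html", botHtml);
}

compare("https://example.com/").catch(console.error);
```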

An extreme situation is when you send an empty HTML file to Googlebot, just as Disqus does.


Their SEO Visibility has seen better days. Now the homepage is not even indexed.

2. Navigation and hyperlinks

To be 100 percent sure that Google sees and crawls your internal links and passes link juice through them, follow the clear recommendation shared at the Google I/O Conference 2018: implement internal links as standard <a> elements with real URLs in the href attribute rather than relying on onclick handlers.


If you rely on server-side rendering methods, you need to check whether the HTML of the prerendered version of a page contains all the links you expect; in other words, whether it has the same navigation as your client-side rendered version. Otherwise, Google will not see the internal linking between pages. Critical areas where you may have problems are faceted navigation, pagination and the main menu.
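Building on the parity files from the previous check, a quick way to spot missing navigation is to extract every <a href> target from both versions and compare the sets. This sketch assumes the cheerio HTML parser and two saved HTML files (the client-side rendered DOM and the prerendered snapshot); the file names are placeholders.

```typescript
// Sketch of an internal-link parity check between the user-facing DOM and the
// prerendered (bot) version of a page.
import { readFile } from "node:fs/promises";
import * as cheerio from "cheerio";

function extractLinks(html: string): Set<string> {
  const $ = cheerio.load(html);
  const links = new Set<string>();
  $("a[href]").each((_, el) => {
    links.add($(el).attr("href") ?? "");
  });
  return links;
}

async function compareLinks(userFile: string, botFile: string): Promise<void> {
  const userLinks = extractLinks(await readFile(userFile, "utf8"));
  const botLinks = extractLinks(await readFile(botFile, "utf8"));
  const missing = [...userLinks].filter((href) => !botLinks.has(href));
  console.log(`Links in user version: ${userLinks.size}`);
  console.log(`Links in bot version:  ${botLinks.size}`);
  console.log("Missing from the prerendered version:", missing);
}

compareLinks("user.html", "googlebot.html").catch(console.error);
```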

3. Metadata

Metadata should not depend on JavaScript at all. Google says that if you load the canonical tag with JavaScript, it probably will not be seen in the first wave of indexing, and this element will not be re-checked in the second wave. As a result, the canonical signals might be ignored.

While testing the staging site, always check if an SSR version has the canonical tag in the head section. If yes, confirm that the canonical tag is the correct one. A rule of thumb is always sending consistent signals to the search engine whether you use client or server-side rendering.

While checking the website, always verify if both CSR and SSR versions have the same titles, descriptions and robots instructions.

4. Structured data

Structured data helps the search engine to better understand the content of your website.

Before launching the new website, make sure that the SSR version displays all the elements you want to mark up with structured data and that the markup is included in the prerendered version. For example, say you want to add markup to the breadcrumb navigation. First, check whether the breadcrumbs are displayed in the SSR version. Then run the test in the Rich Results Tester to see whether the markup is valid.

5. Lazy loading

My observations show that modern websites love lazy loading images and content (e.g., products): the additional elements are loaded on a scroll event. It might be a nice feature for users, but Googlebot can’t scroll, so these items will never be discovered.

Seeing that so many webmasters have problems implementing lazy loading in an SEO-friendly way, Google published best-practice guidelines for lazy loading. If you want to load images or content on scroll, make sure you support paginated loading. This means that as the user scrolls, the URL should change (e.g., by adding pagination identifiers: ?page=2, ?page=3, etc.) and, most importantly, the content for that URL should be updated accordingly, for example by using the History API.
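As a rough illustration of paginated lazy loading, the browser-side sketch below loads the next chunk of products when a sentinel element scrolls into view and updates the URL with the History API. The /products?page=N endpoint and the element IDs are assumptions; for this to be SEO-friendly, each ?page=N URL must also work as a directly requestable, crawlable page.

```typescript
// Browser-side sketch of "paginated" lazy loading with IntersectionObserver
// and the History API.
let currentPage = 1;

async function loadNextPage(): Promise<void> {
  currentPage += 1;
  const res = await fetch(`/products?page=${currentPage}`);
  const itemsHtml = await res.text();
  // Append the newly loaded items to the list.
  document.getElementById("product-list")!.insertAdjacentHTML("beforeend", itemsHtml);
  // Reflect the newly loaded content in the URL so each chunk has its own address.
  history.pushState({ page: currentPage }, "", `?page=${currentPage}`);
}

const sentinel = document.getElementById("load-more-sentinel")!;
new IntersectionObserver((entries) => {
  if (entries[0].isIntersecting) {
    loadNextPage().catch(console.error);
  }
}).observe(sentinel);
```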

Do not forget to add rel="prev" and rel="next" markup in the head section to indicate the sequence of the pages.

Snapshot generation and cache settings

If you decided to create a snapshot for search engine crawlers, you need to monitor a few additional things.

You must check that the snapshot is an exact copy of the client-side rendered version of your website. You can’t load additional content or links that are not visible to a standard user, because that might be assessed as cloaking. If the process of creating snapshots is not efficient, e.g., your pages are very heavy and your server is not that fast, it may result in broken snapshots, and you will end up serving, for example, partially rendered pages to the crawler.

There are some situations when the rendering infrastructure must work at high speed, such as Black Friday, when you want to update prices very quickly. You should test the rendering under extreme conditions and see how much time it takes to update a given number of pages.

The last thing is caching. Setting up the cache properly will help you maintain efficiency, because many pages can be served quickly straight from memory. However, if you do not plan caching correctly, Google may receive stale content.
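To make the trade-off concrete, here is a minimal sketch of a TTL-based snapshot cache: pages are served from memory while fresh and re-rendered once the TTL expires. The 15-minute TTL and the render callback are assumptions to be tuned to how often your content changes and how much rendering capacity you have.

```typescript
// Sketch of a simple time-limited (TTL) snapshot cache.
interface CachedSnapshot {
  html: string;
  createdAt: number;
}

const TTL_MS = 15 * 60 * 1000; // how long a snapshot counts as fresh
const cache = new Map<string, CachedSnapshot>();

export async function getSnapshot(
  url: string,
  render: (url: string) => Promise<string>, // e.g., a Puppeteer-based renderer
): Promise<string> {
  const cached = cache.get(url);
  if (cached && Date.now() - cached.createdAt < TTL_MS) {
    return cached.html; // fresh enough: serve straight from memory
  }
  // Missing or expired: re-render and refresh the cache entry.
  const html = await render(url);
  cache.set(url, { html, createdAt: Date.now() });
  return html;
}
```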

Monitoring

Monitoring post-migration is a natural step. However, in the case of moving to a JS framework, sometimes there is an additional thing to monitor and optimize.

Moving to a JS framework may affect web performance. In many cases the payload increases, which may result in longer loading times, especially for mobile users. A good practice is monitoring how your users perceive the performance of the website and comparing the data from before and after the migration. To do so, you can use the Chrome User Experience Report.


It will show whether real user metrics have changed over time. You should always aim to improve them and to load the website as fast as possible.

Summary

Migration is always a risky process and you can’t be sure of the results. The risks can be mitigated if you plan the whole process in detail; for every type of migration, planning is as important as the execution. If you take part in a migration to a JS framework, you have to deal with additional complexity: you need to make additional decisions and verify additional things. However, as web development trends keep heading toward heavier use of JavaScript, you should be prepared to face a JS migration sooner or later. Good luck!


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About The Author

Maria Cieslak is a Senior Technical SEO Consultant at Elephate, the “Best Small SEO Agency” in Europe. Her day-to-day work involves creating and executing SEO strategies for large international structures and pursuing her interest in modern websites built with JavaScript frameworks. Maria has been a guest speaker at SEO conferences in Europe, including 2018’s SMX London, where she spoke on a wide range of subjects, including technical SEO and JavaScript. If you are interested in more information on this subject, you should check out Elephate’s “Ultimate Guide to JavaScript SEO”.








Google Search Console image search reporting bug June 5-7



Google posted a notice that between June 5 and June 7 it was unable to capture data on image search traffic. This is just a reporting bug and did not impact actual search traffic, but the Search Console performance report may show drops in image search traffic in that date range.

The notice. The notice read, “June 5-7: Some image search statistics were not captured during this period due to an internal issue. Because of this, you may see a drop in your image search statistics during this period. The change did not affect user Search results, only the data reporting.”

How do I see this? If you log in to Google Search Console, click into your performance report and then filter by clicking on the “search type” filter. You can then select Image from the filters.

Screenshot: how to filter by image traffic in Google Search Console.

Why we should care. If your site gets a lot of Google Image search traffic, you may notice a dip in your traffic reporting within Google Search Console, even though there is no corresponding dip in your other analytics tools. That being said, Google said this is only a reporting glitch within Google Search Console and did not impact the actual traffic to your website.


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.


Facebook Changes Reach of Comments in News Feed



Facebook announced a change to its algorithms that will affect the reach of comments on a post. Comments that have specific quality signals will be ranked more highly, while low-quality comment practices may result in reduced reach.

Comment Ranking in News Feeds

Facebook noted that not only are posts ranked in news feeds; comments are ranked as well.

Posts with comments that have positive quality signals will be seen by more people. Posts with low quality signals will have their news feed reach reduced.

Facebook Comment-Quality Signals

Facebook noted that their updated comment algorithm has four features:

  1. Integrity signals
  2. User indicated preferences
  3. User interaction signals
  4. Moderation signals

Integrity Signals

Integrity Signals are a measure of authenticity. Comments that violate community standards or fall into engagement-bait are negative signals. Violations of community standards are said to be removed.

Engagement Bait

Facebook engagement bait is a practice that has four features:

1. React Baiting

Encouraging users to react to your post

2. Follow and Share Baiting

This is described as telling visitors to like, share or subscribe.

3. Comment Baiting

Encouraging users to comment with a letter or a number is given as an example.

4. Monetization Baiting

This is described as asking for “stars” in exchange for something else, which could include something trivial like “doing push ups.”

User Indicated Preferences

This is a reference to user polls that Facebook conducts in order to understand what users say they wish to see in comments.

User Interaction Signals

These are signals related to whether users interact with a post.

Moderation Signals

This is a reference to how users hide or delete comments made in their posts.

Here is how Facebook describes it:

“People can moderate the comments on their post by hiding, deleting, or engaging with comments.

Ranking is on by default for Pages and people with a lot of followers, but Pages and people with a lot of followers can choose to turn off comment ranking.

People who don’t have as many followers will not have comment ranking turned on automatically since there are less comments overall, but any person can decide to enable comment ranking by going to their settings. (See more details here.) “

Facebook Targeting Low Quality Comments

One of the stated goals of this update is to hide low quality posts from people’s Facebook feeds and to promote high quality posts by people you might know.

This is how Facebook described it:

“To improve relevance and quality, we’ll start showing comments on public posts more prominently when:

  • The comments have interactions from the Page or person who originally posted; or
  • The comments or reactions are from friends of the person who posted.”

Read Facebook’s announcement here: Making Public Comments More Meaningful


Build your PPC campaigns with this mini campaign builder script for Google Ads



Need to quickly build a campaign or add keywords to an existing one? This script will do the work for you!

All you need to do is input a few keywords and headlines in a spreadsheet and BAM! You’ve got yourself the beginnings of a great campaign.

I’m a firm believer in Single Keyword per Ad Group (SKAG) structure – it increases ad/keyword relevance and therefore improves quality score, makes CPCs cheaper, gets you a higher ad rank and a better CTR.

Sadly, building out SKAG structures is a pretty time-consuming endeavor. You can’t implement millions of keywords and ads without PPC tech powering your builds.

But if a client just needs a couple of new keywords after updating their site with new content, this script is a quick and easy solution.

And that’s exactly what I love about PPC. There’s a special place in my heart for simple scripts anyone can use to achieve tasks that are otherwise repetitive or near-impossible.

What does the script do?

This tool will save a lot of time with small-scale builds where you know exactly which keywords and ad copy you need, for example when you’re adding a few keywords to an existing campaign.

You input your campaign name, keywords, headlines, descriptions, paths and final URL, and it will output three tabs for you: one with keyword combinations, one with negatives, and ads to upload to Google Ads Editor.

It creates one exact match and one broad match modifier campaign, and adds the keywords as exact match negatives to the broad campaign to make sure that search terms that match exactly go through the exact keywords.
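The author’s actual script lives in the linked spreadsheet; as a rough illustration of the transformation described above, here is a sketch (in TypeScript rather than Google Ads Scripts) of how each input keyword might be expanded into an exact match keyword, a broad match modifier keyword and an exact match negative for the broad campaign. The campaign naming, delimiter and row shapes are assumptions for the example.

```typescript
// Illustrative sketch of the exact/BMM/negative keyword expansion; not the
// author's spreadsheet script.
interface KeywordRow {
  campaign: string;
  adGroup: string;
  keyword: string;
  matchType: "Exact" | "Broad";
}

interface NegativeRow {
  campaign: string;
  keyword: string; // exact match negative, e.g. [blue running shoes]
}

function buildCampaigns(
  campaignName: string,
  delimiter: string,
  keywords: string[],
): { keywords: KeywordRow[]; negatives: NegativeRow[] } {
  const exactCampaign = `${campaignName}${delimiter}Exact`;
  const broadCampaign = `${campaignName}${delimiter}BMM`;
  const keywordRows: KeywordRow[] = [];
  const negativeRows: NegativeRow[] = [];

  for (const kw of keywords) {
    const adGroup = kw; // SKAG: one keyword per ad group
    keywordRows.push({ campaign: exactCampaign, adGroup, keyword: `[${kw}]`, matchType: "Exact" });
    // Broad match modifier: prefix every word with "+".
    const bmm = kw.split(/\s+/).map((w) => `+${w}`).join(" ");
    keywordRows.push({ campaign: broadCampaign, adGroup, keyword: bmm, matchType: "Broad" });
    // Exact negative in the broad campaign funnels exact searches to the exact campaign.
    negativeRows.push({ campaign: broadCampaign, keyword: `[${kw}]` });
  }
  return { keywords: keywordRows, negatives: negativeRows };
}

console.log(buildCampaigns("Shoes", " | ", ["blue running shoes", "trail running shoes"]));
```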

I’m sure you’re dying to give it a whirl, so let’s get cracking!

How do you use it?

Make a copy of this spreadsheet (note: you’ll need to authorize the script to run). You’ll find all the instructions there as a future reminder.

Once you’ve got the spreadsheet ready, input the following:

  • The campaign name
  • The campaign name delimiter to distinguish between broad and exact campaigns
  • Headline 1 (if this cell is not specified, then it will be the same as the keyword)
  • Headline 2
  • Optionally, headline 3
  • Description 1
  • Optionally, description 2
  • Optionally, path 1 and path 2
  • The final URL
  • The keywords (you can keep going outside of the box with these!)

You’ll see a handy character counter which will go red if you exceed the character limit. Bear in mind that this tool will assume that you’re using it correctly and so you’ll need to make sure that you’re staying within the limit!

You can also optionally create a second ad variant by choosing the part of your text you want to vary (e.g., headline 2 or description 2) and inputting the copy. Otherwise, just select “None” from the dropdown menu.

Once you’re done, click the gigantic “Go!” button and wait for the magic to happen.

It will generate three tabs labelled “Keywords,” “Negatives” and “Ads.” If you want to run the script again with different keywords, make sure you save these tabs elsewhere or rename them to prevent the script from overwriting them.

Finally, you can paste these tabs into Editor and update all the relevant settings and adjustments. Job done!

DOWNLOAD: You’ll need to authorize the script to run after you make a copy of this spreadsheet.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About The Author

Daniel Gilbert is the CEO at Brainlabs, the best paid media agency in the world (self-declared). He has started and invested in a number of big data and technology startups since leaving Google in 2010.
