Evergreen Googlebot with Chromium rendering engine: What technical SEOs need to know


It’s been an exciting week with important announcements from the stage at the 2019 Google I/O event. Probably the most impactful announcement is that Google has now committed to regularly updating its Googlebot crawl service to begin using the most recent stable version of their headless Chromium rendering engine. This is a significant leap forward with more than 1,000 features now supported over the previous version.

Nearly all of the newly supported features involve modern JavaScript syntax, officially called ECMAScript 2015 (ES6) and later. If you are a JavaScript developer, you want access to the latest version of the language for the syntactic sugar that continually appears as it matures. Whether you write vanilla JavaScript or favor one of the modern reactive frameworks, many of these conveniences exist because developers proposed better patterns for blocks of commonly written code.

One basic example is adding a value to an array, most commonly done using push():

<script>
  var names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names.push('David');
</script>

Reactivity in a Nutshell

In the example above, an array of names is defined and assigned three values: Amy, Bruce, and Chris. Then David is added to the list using the push() method. In modern reactive frameworks, mutating values like this can trigger a 'diff' of the page DOM against a newer 'virtual DOM' maintained by the framework; since the array values differ, the page can be updated by JavaScript without reloading the browser window.
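As a rough vanilla JavaScript sketch of that idea (not any particular framework's diffing algorithm), a 'diff' can be as simple as comparing old and new values and noting which positions changed:

```javascript
// Naive diff sketch: compare the previous and next arrays and report
// which indices a framework would need to re-render in the DOM.
function diffIndices(prev, next) {
  const changes = [];
  const len = Math.max(prev.length, next.length);
  for (let i = 0; i < len; i++) {
    if (prev[i] !== next[i]) changes.push(i);
  }
  return changes;
}

const before = ['Amy', 'Bruce', 'Chris'];
const after = ['Amy', 'Bruce', 'Chris', 'David'];
// Only index 3 differs, so only one DOM node needs updating.
const changed = diffIndices(before, after); // → [3]
```

A real virtual DOM diff works over trees of nodes rather than flat arrays, but the principle of updating only what changed is the same.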

Reactivity in web-facing applications is where JavaScript has really expanded our capabilities, and those capabilities continue to advance as modern JavaScript evolves on the server and in the browser. It can get tricky to keep track of which JavaScript runs on the server versus which gets shipped to the browser. For example, ES6 lets you do the following, including using 'let' (and 'const') in declaration statements:

<script>
  let names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names = [...names, 'David'];
</script>

Backward Compatibility

The names array update above uses the newer spread syntax ([...names]) to represent the current values of the names array, then adds David with an assignment operation instead of the push() method. The newer syntax is not compatible with Chrome 41, and therefore would not have worked prior to Googlebot's update to Chrome 74. For developers, having to write or transpile ES6 down for backward compatibility is death by a thousand cuts.
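For illustration, a transpiler such as Babel typically rewrites the spread assignment into ES5-safe method calls along these lines (exact output varies by tool and configuration):

```javascript
// ES6 source:  names = [...names, 'David'];
// A typical ES5-compatible rewrite uses concat(), which Chrome 41 supports:
var names = ['Amy', 'Bruce', 'Chris'];
names = names.concat(['David']);
// names is now ['Amy', 'Bruce', 'Chris', 'David']
```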

Now modern JavaScript syntax will largely work straight out of the box with Googlebot, and there are literally dozens of new features available, such as the one above. Just be aware that Bing and DuckDuckGo (as well as social share crawlers) may not be able to interpret ES6 syntax.

Real-Life Example

The Svelte framework was recently overhauled in version 3, which introduced more precisely triggered, assignment-based page reactivity. (There's a fun viral video about it going around.) Writing or transpiling the 'names' array code down to the older push() syntax for Google requires an extra step in Svelte, because push() adds a value to the array but is not a variable assignment, and assignment is what triggers page reactivity in Svelte 3.

<script>
  let names = [
    'Amy',
    'Bruce',
    'Chris'
  ];
  names.push('David');
  names = names; // To trigger Svelte reactivity
</script>

It’s easy to see why being able to use ES6 now:

<script>
  names = [...names, 'David'];
</script>

…is more developer friendly for Svelte users than before.

Evergreen Chromium rendering

Now that Googlebot’s evergreen Chromium rendering engine can be counted on, React, Angular, Vue, Svelte 3, and vanilla JavaScript users can worry a little less about Chrome 41-specific polyfills and about writing or transpiling down ES6 syntax in their projects. Concerns still exist, however. You need to test and make sure the rendering engine behaves the way you anticipate, because Google is more guarded about exposing its resources than a user’s browser would be.

Google recommends checking the documentation for references to its Web Rendering Service (WRS) instances (currently, essentially Chromium 74) in products like the mobile-friendly test and the URL Inspection tool. For example, a geolocation script might ask for browser location services, but Google’s rendering engine doesn’t expose that API. Exceptions like these in your JavaScript may halt your indexing.
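One defensive pattern (a sketch, not Google-endorsed code) is to feature-detect such APIs before calling them, so a renderer that doesn't expose them simply skips the feature instead of throwing:

```javascript
// Defensive sketch: only request geolocation when the environment exposes it.
// Headless renderers such as Google's WRS may not provide this API.
// `nav` defaults to the browser's navigator; it is injectable for testing.
function safeGeolocate(onPosition, nav) {
  if (nav === undefined) {
    nav = typeof navigator !== 'undefined' ? navigator : null;
  }
  if (nav && nav.geolocation) {
    nav.geolocation.getCurrentPosition(onPosition);
    return true; // feature available, request made
  }
  return false; // feature unavailable; the page still renders normally
}
```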

Tracking Googlebot

If you’re still tracking visits from older versions of Chrome in your server logs, note that Google will eventually update Googlebot’s user-agent string to reflect the version of Chrome it is running. Also keep in mind that Google is a fairly large and dispersed company whose divisions have varying access to its network resources. A particular department might need to modify settings before it can use the new Chrome engine, but it stands to reason that everything will be using it very soon, especially the critical Web crawling services.

Technical SEO Advice

What does this mean for technical SEOs? There will be fewer critical indexing issues to point out for sites running modern JavaScript. Traditional advice, however, will remain largely intact. For example, the new rendering engine does not shortcut the indexing render queue for reactive code. That means sites running React, Angular, or Vue etc. are still going to be better off pre-rendering relatively static sites, and best off server-side rendering (SSR) truly dynamic sites.

The nice thing about being a technical SEO is that we get to advise developers about practices that align with Googlebot and that, mostly, they ought to be following in the first place. The nice thing about being an SEO developer is that there’s a never-ending river of exciting modern code to play with, especially with Google now caught up to Chromium 74. The only drawback is that evergreen Chromium Googlebot doesn’t help you with Bing, DuckDuckGo, or social media sharing crawlers.

That’s A Pretty Big Drawback

The more things change, the more they stay the same. You should still advise clients about pre-rendering and SSR, which ensures that no matter what user-agent you’re dealing with, it receives rendered content for search or sharing. The predicament is that if the planned application has a huge volume of reactive parts, for example constantly updating sports scores or stock market prices, reactivity is unavoidable and SSR alone won’t work.

That’s when it’s necessary to do SSR and also ship custom JavaScript for deferred hydration, similar to code-splitting. Basically, complete HTML is shipped fully rendered from the server, and then JavaScript takes care of updating the reactive parts. If that JavaScript doesn’t render in Bing or DuckDuckGo, it’s all right, because you already shipped fully rendered HTML. This can seem excessive, but keep in mind that a search engine will only ever be able to represent rankings for your page as it existed at a particular point in time anyway.
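As a simplified, framework-agnostic sketch of deferred hydration (the element and score-fetching function here are hypothetical):

```javascript
// Deferred-hydration sketch: the server ships complete, rendered HTML;
// after load, JavaScript takes over only the reactive region and keeps
// it updated. Crawlers that don't run JS still see the server-rendered
// text already present in `el`.
function hydrateScores(el, fetchScores) {
  return fetchScores().then(function (scores) {
    el.textContent = scores.join(' | ');
    return scores;
  });
}

// Hypothetical usage in the browser:
// hydrateScores(document.querySelector('#scores'), () =>
//   fetch('/api/scores').then((r) => r.json()));
```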

Why Such Reactivity?

SSR can accomplish the SEO rendering feat across user-agents for you, and users’ browsers can run JavaScript for reactive features. But why bother? If you are using a reactive framework just because you can, maybe you didn’t need it in the first place. If the nature of your site doesn’t require much reactivity, you can avoid the trouble and expense of managing myriad complex details: build a static site, pre-rendering if necessary, and write vanilla JavaScript for the feature or two that may actually require reactivity.

Server Side Rendering

If you think server-side rendering is a piece of cake, read a post describing some of the horrors you might encounter before you charge in, especially if you’re trying to retrofit a pre-existing application. In short, you should be writing universal JavaScript, and it gets complex quickly, including security implications. Luckily, there is also a terrific new set of nicely written posts that comprise a fairly thorough React tutorial if you’re working from scratch. We highly recommend reading it to supplement the official React guide.

A New Hope

Things move quickly and keeping up can be tough, even for Google. The news that it has updated to Chrome 74 to render more of the modern Web is long overdue. It’s important that Google intends to keep Googlebot within weeks of consumer Chrome releases. We can now test more code with local software to make sure our sites work with Googlebot. One very intriguing new paradigm for reactivity is Svelte, which has an SSR output mode you can test directly in its tutorial REPL. Svelte brings us reactivity that is closer to vanilla JavaScript than the others, a real achievement.


About The Author

Detlef Johnson is Editor at Large for Third Door Media. He writes a column for Search Engine Land entitled “Technical SEO for Developers.” Detlef is one of the original group of pioneering webmasters who established the professional SEO field more than 20 years ago. Since then he has worked for major search engine technology providers, managed programming and marketing teams for Chicago Tribune, and consulted for numerous entities including Fortune 500 companies. Detlef has a strong understanding of Technical SEO and a passion for Web programming. As a noted technology moderator at our SMX conference series, Detlef will continue to promote SEO excellence combined with marketing-programmer features and webmaster tips.




Eight great ways to audit your ecommerce site’s SEO



30-second summary:

  • Optimizing your ecommerce site’s SEO is essential to maximizing your ecommerce site’s traffic and revenue potential.
  • Improve your SEO by structuring your website and page in a way that is easy for Google to understand.
  • Pagespeed optimization is a necessity, not an option, with the upcoming algorithm change.
  • Writing a relevant, informative blog on your ecommerce site will help increase organic traffic.

A recent statistic shows that 33% of global ecommerce traffic comes from organic search. In other words, you will miss out on a significant portion of potential traffic if your ecommerce site is not optimized for SEO. Although some SEO components are not entirely within your control, there are still many things you can do to improve your ecommerce site’s SEO.

In this article, we’ll go over eight ways you can audit and improve on your ecommerce site’s SEO health.

Additionally, we will discuss why it’s now more important than ever to optimize your page speed for the algorithm update in the coming months.

Let’s get started.

1. Canonical tags for filtered pageviews

In most ecommerce sites, you can filter product types based on different categories, such as color, fit, and size.

If your ecommerce site has many filtered pages, you want to use canonical tags in your filtered pages to avoid duplicate content.

Having duplicate content on your site can make it difficult for Google to rank your pages, since the duplicates will compete with each other for the top spots in search results.

The image below shows filters on the Underarmour website. You can also see that the URL comes with the “/shoes” text at the end when you filter based on shoes.

Filters on the Under Armour site

By placing the canonical tags on your filtered pages, you tell a Google crawler which page is the main page you want to rank.

In this case, the canonical tag on “underarmour.ca/en-ca/d/mens/shoes” points to the “underarmour.ca/en-ca/d/mens” URL.

Ecommerce site's SEO tips - Use canonical tags

This lets Google know that the latter URL is the primary URL you want to rank. You will also prevent your site from potentially receiving a duplicate penalty from Google.
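As an illustrative sketch (the URL mirrors the Under Armour example above; the helper and the one-segment filter pattern are assumptions), computing the canonical target for a filtered URL might look like:

```javascript
// Hypothetical helper: map a filtered category URL to the primary
// category URL its canonical tag should point at.
// Assumes the filter appends one extra path segment, e.g. ".../mens/shoes".
function canonicalFor(filteredUrl) {
  const u = new URL(filteredUrl);
  const segments = u.pathname.split('/').filter(Boolean);
  segments.pop(); // drop the filter segment ("shoes")
  return u.origin + '/' + segments.join('/');
}

canonicalFor('https://www.underarmour.ca/en-ca/d/mens/shoes');
// → 'https://www.underarmour.ca/en-ca/d/mens'
```

The resulting URL would then go in the filtered page's `<link rel="canonical">` tag.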

2. Optimize a meta description for each page

Does each one of your pages have a unique meta description?

Ecommerce site SEO tips - Optimize your meta descriptions based on your keywords

Writing a relevant and unique meta description for each page increases the likelihood of people clicking your link. 

Since click-through rate (CTR) is an important ranking factor in Google search results, failing to optimize a meta description for your page jeopardizes your search ranking.

That said, you may find this difficult to implement in practice if you have hundreds or thousands of pages on your ecommerce site.

In such a case, I recommend starting with the pages currently making you the most revenue and working down the list from there.

3. Keyword placement

While you certainly want to avoid stuffing your page with your target keywords, it’s a good idea to strategically include them throughout your page to let Google know what keywords you are targeting. 

You will want to include a few keywords in your URL, title, description, and alt image text to ensure your page is relevant for your target keywords.

Here is an example.

Ecommerce site's SEO tips - Place your keywords

And an example of keywords in the description.

Ecommerce site SEO tips - Place your keywords in the product title and description

4. Breadcrumb trails

Breadcrumb trails help website visitors navigate through your website and help organize your site in a well-structured manner.

Tips to audit and improve your ecommerce site's SEO - Use breadcrumbs to improve site navigation and improve CTR

Additionally, breadcrumbs make it easy for Google crawlers to understand your page better, which, in turn, may help your page rank better.

Moreover, search results will show breadcrumbs instead of your page’s permalink, which is another clue that Google emphasizes breadcrumbs for SEO.

Breadcrumbs in search results

You don’t need to overcomplicate things when implementing breadcrumb navigation. Just make sure to follow the standards of good practice, such as using breadcrumbs only when it makes sense and progressing from highest to lowest level.
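One common implementation detail, sketched below with hypothetical page names and URLs, is to describe the trail with schema.org BreadcrumbList markup so Google can read it unambiguously:

```javascript
// BreadcrumbList structured-data sketch (schema.org JSON-LD).
// The serialized string would be embedded in a
// <script type="application/ld+json"> tag on the page.
const breadcrumbs = {
  '@context': 'https://schema.org',
  '@type': 'BreadcrumbList',
  itemListElement: [
    { '@type': 'ListItem', position: 1, name: 'Men', item: 'https://example.com/mens' },
    { '@type': 'ListItem', position: 2, name: 'Shoes', item: 'https://example.com/mens/shoes' },
  ],
};
const jsonLd = JSON.stringify(breadcrumbs);
```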

5. Optimize your ecommerce site for sitelinks

Here is an example of sitelinks that appear in search results for a well-optimized site.

Tips to audit and improve your ecommerce site's SEO- Use sitelinks wisely

Google states that they only show sitelinks for structured websites that allow their algorithms to find good sitelinks.

Sitelinks improve click-through-rates and help searchers quickly see pages that are relevant to their interests. 

Furthermore, they add credibility to your site. Google doesn’t show sitelinks for websites that aren’t well optimized and trustworthy.

If Google trusts your website enough to give it that much space in search results, you can view that as something of a seal of approval.

At the very least, you can assume Google doesn’t see your website as untrustworthy or scammy.

While you can’t fully control whether your ecommerce site shows sitelinks in search results, there are a few things you can do to increase your chance.

  • Enable breadcrumbs and a sitelink search box
  • Structure your website clearly
  • Use anchor text for your internal links

6. Do you have reviews on your product page?

Reviews on your product page serve several purposes.

First, they give useful information to people who visit your website and help them make better purchasing decisions.

Add review snippets and ratings to improve your ecommerce site's SEO

Furthermore, your page can show a star rating in search results, which will help your search result stand out and improve your CTR.

As a matter of fact, a study done by CXL found that review stars can improve CTR by as much as 35%.

You can check out documentation from Google to learn how best to structure your page so you can maximize your site’s chance to display review snippets in search results.
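The gist of that structured data, sketched here with hypothetical product and rating values, is Product markup carrying an AggregateRating:

```javascript
// Product + AggregateRating structured-data sketch (schema.org JSON-LD).
// The name and numbers are illustrative; real markup must reflect
// actual on-page reviews.
const product = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Trail Microspikes',
  aggregateRating: {
    '@type': 'AggregateRating',
    ratingValue: 4.6,
    reviewCount: 128,
  },
};
// Embed JSON.stringify(product) in a <script type="application/ld+json"> tag.
const markup = JSON.stringify(product);
```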

7. Optimize your site speed

You’ve probably heard about the importance of website speed optimization.

But with Google confirming that page loading speed will become a ranking signal in the coming months, it’s now more important than ever to optimize page speed on your website.

According to Moovweb, 87% of ecommerce sites do not meet the page experience standards set out by Google.

You can also see this as an opportunity for your site to gain a competitive edge over others.

There are many things you can do to improve your page speed, such as:

  • Get a better hosting service
  • Use lazy loading, so images only load as users scroll down
  • Remove large elements on your page

You can use PageSpeed Insights from Google to determine what elements on your page are slowing down speed.

Google Page Speed Insights

Here is an article with more good advice on optimizing your page speed for the upcoming algorithm changes. 

8. Does your website have informational content?

Finally, it’s always a good idea to write blog posts around keywords that your target audience is searching for.

Add informational content that can answer your consumers' key concerns to optimize your ecommerce site

A study shows that 61% of online shoppers in the US are more confident about purchasing from recommendations that they read in blogs.

Blogging for your ecommerce site can help your business in several different ways.

  1. Build trust with your audience by providing helpful content
  2. Establish your brand as an expert in the field
  3. Improve search engine rankings with relevant, high-quality content on your blog

A blog post you publish today isn’t going to bring an impressive result by tomorrow. But it’s a long-term investment that will help your ecommerce site generate more traffic and build trust and authority with the audience in your niche in the long haul.

Conclusion

Improving SEO for your ecommerce site is rarely ever a quick, easy process.

This is especially true if your site has hundreds of pages that need to be optimized.

In that case, it is wise to start with the pages generating the most traffic and revenue.

Begin with your best-performing pages and work down the list as your time and effort allow.

Your results with SEO won’t be instantaneous, but you will be glad you put extra effort into optimizing your site’s SEO down the road.

Jin Choi is a writer at the MoneyNam blog.




How to use XPath expressions to enhance your SEO and content strategy



30-second summary:

  • As Google increasingly favors sites with content that exudes expertise, authority, and trustworthiness (E-A-T), it is imperative that SEOs and marketers produce content that is not just well written, but that also demonstrates expertise.
  • How do you understand what topics and concerns matter most to your customer base?
  • Can you use Q&As to inform content strategies?
  • XPath notations can be your treasure trove.
  • Catalyst’s Organic Search Manager, Brad McCourt, shares a detailed guide on using XPath notations and your favorite crawler to quickly obtain Q&As in a straightforward, digestible format.

As Google increasingly favors sites with content that exudes expertise, authority, and trustworthiness (E-A-T), it is imperative that SEOs and marketers produce content that is not just well written, but that also demonstrates expertise. One way to demonstrate expertise on a subject or product is to answer common customer questions directly in your content.

But, how do you identify what those questions are? How do you understand what topics and concerns matter most?   

The good news is that they are hiding in plain sight. Chances are, your consumers have been shouting at the top of their keyboards in the Q&A sections of sites like Amazon.

XPath and how to find customer questions and preferences

These sections are a treasure trove of (mostly) serious questions that real customers have about the products you are selling.

How do you use these Q&As to inform content strategies? XPath notation is your answer.

You can use XPath notations and your favorite crawler to quickly obtain the Q&As in a straightforward and digestible format. XPath spares you from clicking through endless screens of questions by automating the collection of important insights for your content strategy.

What is XPath?

XML Path Language (XPath) is a query language developed by the W3C for navigating XML documents and selecting specified nodes of data.

The notation XPath uses is called “expressions”. Using these expressions, you can effectively pull any data that you need from a website as long as there is a consistent structure between webpages.

This means you can use this language to pull any publicly available data in the source code, including questions from a selection of Amazon Q&A pages.

This article is not meant to be a comprehensive XPath tutorial; for that, there are plenty of resources from the W3C. However, XPath is easy enough to pick up knowing only the structure of XML and HTML documents, which is what makes it such a powerful tool for SEOs regardless of coding prowess.

Let’s walk through an example to show you how…

Using XPath to pull customer questions from Amazon

Pre-req: Pick your web crawler

While most of the big names in web crawling – Botify, DeepCrawl, OnCrawl – all offer the ability to extract data from the source code, I will be using ScreamingFrog in the example below.

ScreamingFrog is by far the most cost-effective option, allowing you to crawl up to 500 URLs without buying a license. For larger projects you can buy a license. This will allow you to crawl as many URLs as your RAM can handle.

Step one: Collect the URLs to crawl

For our example, let’s pretend we’re doing research on the topics we should include in our product pages and listings for microspikes. For those unaware, microspikes are an accessory for your boots or shoes. They give you extra grip in wintry conditions, so they are particularly popular among cold-weather hikers and runners.

Example for finding details using Amazon

Source: https://www.amazon.com/s?k=microspikes

Here we have a list of 13 question-and-answer pages for the top microspike products on Amazon.com. Unfortunately, some manual work is required to create the list.

List of questions - XPath and creating content

The easiest way is to search for the topic (that is, microspikes) and pull links to the top products listed. If you have the product’s ASIN (Amazon Standard Identification Number) handy, you can also generate the URLs using the above format, but switching out the ASIN.
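If you have the ASINs, generating the list can be scripted. A small sketch (the URL pattern and the placeholder ASINs are assumptions for illustration):

```javascript
// Hypothetical URL builder for Amazon Q&A pages from a list of ASINs.
// The path pattern is assumed from the article's example pages.
function qaUrl(asin) {
  return 'https://www.amazon.com/ask/questions/asin/' + asin;
}

const asins = ['B07EXAMPLE1', 'B01EXAMPLE2']; // placeholder ASINs, not real products
const urls = asins.map(qaUrl);
```

The resulting list can then be pasted straight into the crawler in List mode.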

Step two: Determine the XPath

From here, we need to determine the XPath.

In order to figure out the proper XPath notation to use to pull in the desired text, we have two main options:

  1. View the source code and determine the XPath manually
  2. View the rendered source code and copy the XPath directly from Chrome’s Inspect Element tool

Copy XPath

You’ll find that the expression needed to locate all questions in an Amazon Q&A page is:

//span[@class="a-declarative"]

Here is the XPath notation broken down:

  • // locates all instances of the expression that follows.
  • span is the specific tag we’re trying to locate. //span alone would locate every single <span> tag in the source code. There are over 300 of these, so we need to be more specific.
  • [@class] narrows the match: //span[@class] locates only <span> tags that have a class attribute assigned.
  • [@class="a-declarative"] narrows it further: //span[@class="a-declarative"] locates only <span> tags whose class attribute is set to "a-declarative", that is, <span class="a-declarative">.
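To make the selection logic concrete, here is a tiny JavaScript sketch (not a real XPath engine) of what the expression matches, modeled over a hypothetical flat list of parsed tags:

```javascript
// What //span[@class="a-declarative"] selects, modeled over a
// hand-rolled list of parsed tags rather than a real DOM.
const tags = [
  { name: 'span', attrs: {} },                         // no class: excluded
  { name: 'span', attrs: { class: 'a-declarative' } }, // matched
  { name: 'div', attrs: { class: 'a-declarative' } },  // wrong tag: excluded
];

const matches = tags.filter(
  (t) => t.name === 'span' && t.attrs.class === 'a-declarative'
);
// matches.length === 1
```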

There is an extra step in order to return the inner text of the specified tag that is located, but ScreamingFrog does the heavy lifting for us.

It’s important to note that this will only work for Amazon Question and Answer pages. If you wanted to pull questions from, say, Quora, TripAdvisor, or any other site, the expression would have to be adjusted to locate the specific entity you desire to collect on a crawl.

Step three: Configure your crawler

Once you have this all set, you can then go into ScreamingFrog.

Configuration -> Custom -> Extraction

Configure your crawler

This will then take you to the Custom Extraction screen.

Custom extraction screen

This is where you can:

  • Give the extraction a name to make it easier to find after the crawl, especially if you’re extracting more than one entity. ScreamingFrog allows you to extract multiple entities during a single crawl.
  • Choose the extraction method. This article is all about XPath, but you also have the option of extracting data via CSSPath or regex notation.
  • Place the desired XPath expression in the “Enter XPath” field. ScreamingFrog will even check your syntax for you, providing a green checkmark if everything checks out.
  • Select what you want extracted: the full HTML element, or the HTML or text found within the located tag. For our example, we want the text between any <span> tags with a class attribute set to “a-declarative”, so we select “Extract Text.”

We can then click OK.

Step four: Crawl the desired URLs

Now it’s time to crawl our list of Amazon Q&A pages for microspikes.

First, we’ll need to switch the Mode in ScreamingFrog from “Spider” to “List.”

Then, we can either add our set of URLs manually or upload them from an Excel or other supported format.

After we confirm the list, ScreamingFrog will crawl each URL we provided, extracting the text between all <span> tags containing the class attribute set to “a-declarative.”

In order to see the data collected, you just need to select “Custom Extraction” in ScreamingFrog.

Run the desired URLs

At first glance, the output might not look that exciting.

However, this is only because a lot of unneeded space is included with the data, so you might see some columns that appear blank if they are not expanded to fully display the contents.

Once you copy and paste the data into Excel or your spreadsheet program of choice, you can finally see the data that has been extracted. After some clean-up, you get the final result:

Final list of questions created using XPath

The result is 118 questions that real customers have asked about microspikes in an easily accessible format. With this data at your fingertips, you’re now ready to incorporate this research into your content strategy.

Content strategies

Before diving into content strategies, a quick word to the wise: you can’t just crawl, scrape and publish content from another site, even if it is publicly accessible.

First, that would be plagiarism, and you should expect to be hit with a DMCA notice. Second, you’re not fooling Google: it knows the original source of the content, and it is extremely unlikely your copied content will rank well, defeating the purpose of this entire strategy.

Instead, this data can be used to inform your strategy and help you produce high quality, unique content that users are searching for.

Now, how do you get started with your analysis?

I recommend first categorizing the questions. For our example there were many questions about:

  • Sizing: What size microspikes are needed for specific shoe or boot sizes?
  • Proper use: Can microspikes be used in stores, on slippery roofs, while fishing, while mowing lawns, or for walking on plaster?
  • Features: Are they adjustable? What material are they made of? Do they come with a carrying case?
  • Concerns: Are they comfortable? Do they damage footwear, or the flooring or ground you’re on? How durable are they?

This is an amazing insight into the potential concerns customers might have before purchasing microspikes.
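A first pass at that categorization can even be scripted. Here is a rough keyword-bucketing sketch; the category keywords mirror the article's examples and would need tuning against real data:

```javascript
// Rough categorizer: bucket scraped questions by keyword.
const categories = {
  sizing: ['size', 'fit'],
  features: ['adjustable', 'material', 'case'],
  concerns: ['comfortable', 'damage', 'durab'],
};

function categorize(question) {
  const q = question.toLowerCase();
  for (const cat of Object.keys(categories)) {
    if (categories[cat].some((w) => q.includes(w))) return cat;
  }
  return 'other';
}

categorize("What size should I get for a men's 10 boot?"); // → 'sizing'
```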

From here, you can use this information to:

1. Enhance existing content on your product and category pages

Incorporate the topics into the product or category descriptions, answering questions shoppers might have pre-emptively.

For our example, we might want to make it abundantly clear how sizing works – including a sizing chart and specifically mentioning types of footwear the product may or may not be compatible with.

2. Build out a short on-page FAQ section featuring original content, answering commonly asked questions

Make sure to implement FAQPage Schema.org markup for a better chance to appear for listings like People Also Ask sections, which are increasingly taking up real estate in the search results.

For our example, we can answer commonly asked questions about comfort, damage to footwear, durability, and adjustability. We could also address if the product comes with a carrying case and how to best store the product for travel.

3. Produce a product guide, incorporating answers to popular questions surrounding a product or category

Another strategy is to produce an extensive one-stop product guide showcasing specific use cases, sizing, limitations, and features. For our example, we could create specific content for each use case like hiking, running in icy conditions, and more.

Even better, incorporate videos, images, charts, and featured products with a clear path to purchase.

Using this approach, your end product will be content that demonstrates expertise and authority on a subject and, most importantly, addresses customer concerns and questions before they even think to ask. This helps prevent your customers from having to do additional research or contact customer service. Thanks to your informative and helpful content, they will be more ready to make a purchase.

Furthermore, this approach also has the potential to lower product return rates. Informed customers are less likely to purchase the wrong product based upon assumed or incomplete knowledge.

Conclusion

Amazon is just the tip of the iceberg here. You can realistically apply this strategy to any site that has publicly accessible data to extract, be that questions from Quora about a product category, Trip Advisor reviews about hotels, music venues, and attractions, or even discussions on Reddit.

The more informed you are about what your customers are expecting when visiting your site, the better you can serve those expectations, motivate purchases, decrease bounces, and improve organic search performance.

Brad McCourt is an Organic Search Manager at Catalyst’s Boston office. 




Five must-knows for advertisers and marketers



30-second summary:

  • 2020 set the stage for one of the most disruptive and fluid years search has ever seen.
  • Local search and Google My Business (GMB) set to be key focal areas for search advertisers and marketers amid shifts in COVID era search activity.
  • Google continues to make moves at further integrating ecommerce into search.
  • Manual Text Ads look to be on shaky ground as we move into 2021.
  • Ashley Fletcher, VP of Marketing at Adthena, shares five must-knows for search in 2021.

From algorithm changes to shifts in search activity driven by COVID-19, 2020 was one of the most disruptive years the search industry has ever seen. And although progress has been made in reining in the pandemic, a “return to normal” still seems a long way off. However, with vaccines raising the possibility that pre-COVID search habits may return, search professionals are hard at work trying to determine which industry changes are here to stay and which may fade away as the world begins to get long-overdue relief. This means the search landscape in 2021 is likely to evolve just as unpredictably as it did in 2020.

With that in mind, here are five key areas search advertisers and marketers should pay close attention to as we move into, and through, 2021.

Doubling down on GMB and local search

Remember when Google My Business (GMB) was just a helpful little tool for search advertising and marketing? Those days are now behind us.

Estimated to account for 33% of how local businesses are ranked, GMB is now a major factor when it comes to SEO. Moreover, as local becomes a bigger part of the search environment, with more users opting to stay close to home as a result of the COVID-19 pandemic, advertisers and marketers need to optimize their search strategies accordingly and stay abreast of any enhancements to GMB.

Greater consideration for voice search

With 157 million Amazon Echos in US homes at the start of 2020, voice search is poised to remain a massive player in search moving forward. Given how easy it is to use, and with more smart speakers set to be purchased in the years to come, voice search is likely to go from a secondary search channel to a primary one in short order. With this new avenue opening up and PPC having to be rethought as a result, advertisers should begin shifting their optimization from traditional keyword logic to spoken, phrase-centric queries.

Direct buy on Google? Amazon beware

Ecommerce is set to be one of the most intriguing areas of search in 2021. For years, Google has been signaling that shopping and ecommerce are key focal areas for its platform, and through the rollout of features such as Smart Shopping, it has never been in a better position to drive sales directly from its SERPs. This means that not only should Amazon be on high alert, but traditional retail search advertisers also need to seriously reconsider their search strategies in the year ahead.

The end of the text ad?

Could 2021 be the end of the road for text ads? The question has been on search pros’ minds ever since Google briefly scrapped the ability to create text ads in October, not to mention when the option to create expanded text ads (ETAs) disappeared from Google Ads dropdown menus on a smaller scale in August. Given the added emphasis being placed on Smart Bidding, manual text ads may have a limited lifespan at best, and 2021 could be the year this search staple is wound down entirely.

Being OK with uncertainty

Search advertisers are used to adapting to continuously evolving circumstances, but 2021 could push the term “evolution” to an extreme. From better understanding search patterns during the COVID era to figuring out which trends are here to stay and which are passing fads, 2021 is going to be a very hard year for search professionals to get their heads around, let alone always get right. It has never been more important to lean into both technology and teamwork to make sense of what lies ahead, and to move into the year with a whole new perspective on flexibility. Simply put, search advertising is set to chart completely foreign waters in 2021, and by embracing the fact that uncertainty is the new normal, search professionals will have a much easier time adapting to these new circumstances.

Closing note

While 2020 presented the search industry with an unprecedented amount of uncertainty, 2021 could bring even greater unpredictability as several foundational changes look set to take place within the space. However, by keeping an eye on these emerging areas and game-planning now, search advertisers and marketers will not only avoid potential headaches and growing pains but also put themselves in a position to drive success as soon as possible.

Ashley Fletcher is VP of Marketing at Adthena.


