
Automation layering: How PPC pros retain control when automation takes over


The PPC track at SMX Advanced kicked off with a keynote by Ginny Marvin, who considered the future of an industry in which automation is doing more and more of the work humans used to do. Her message was that we can’t escape automation, so we must find a way to coexist with the machines.

The topic of automation in PPC comes up a lot, but I suspect that when our industry talks about its impact, we mostly mean automations built by the likes of Google and Microsoft: disruptive (but not necessarily bad) capabilities like Smart Bidding, close variant keywords and responsive search ads.

But nobody ever said that advertisers can’t be disruptors too. They too can build automations to change the game and give themselves a competitive edge.

Having to build your own automations may sound daunting but remember that they don’t have to be cutting-edge like machine learning in order to be useful. In this post, I’ll explain an easy way to get started with your own automations using the principle of “automation layering.”

Automations from the engines are better with human help

In my new book, Digital Marketing in an AI World, I explain that humans plus machines usually perform better than machines alone. This is not a new concept and one most of you have probably come across in some form or other. One specific example I used to share in presentations came from Wired in 2014 and said that, “The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone.”

Because of the potential for better results, PPC pros want to remain involved. They have knowledge about the business that could meaningfully impact results. Sometimes there simply is not enough data for a machine learning system to come up with the same insight. So it’s generally agreed upon that humans + machines can outperform machines alone.

Generally, we tend to translate this concept into the PPC world by saying that account managers need to work together with automations from the engines.

When humans work together with automations from the ad engines like Google, the results are generally thought to be better than if the automation didn’t have the help of a smart PPC account manager.

Automations from the engines are better with automations from advertisers

Then I started thinking about the role human PPC managers need to play for that premise, humans + machines outperforming machines alone, to hold true. I realized that the humans in the equation could actually be replaced by machines as well, but in this case machines controlled by the PPC pro rather than the ad engine. PPC pros would keep the control (since they define the automation) and gain the time savings (because they no longer have to exert that control manually).

So we should try to replace some forms of human control with new layers of automation and see if that delivers the same benefits as humans + machines. If we can write down the steps we take, we can teach a machine to do those steps for us. And it can be a simple rule-based approach, which is much easier to create than something based on machine learning.

Humans don’t need to do repetitive manual work to help the automations from the engines. They can teach their own machines to automate their process.

The concept behind automation layering is not a new idea. In engineering, solutions can be broken down into systems that can themselves be connected to other systems. Each system accepts inputs and returns outputs and so long as there is agreement over the format of inputs and outputs, many systems can be strung together and work seamlessly together to solve more complex problems.

Likewise, an automation could interact with other automations. In PPC, let’s call this principle “automation layering.”  This is an important concept because it’s the next evolution of what PPC pros have been doing for years: using their own insights to control what Google does. But just like Google is getting ever-more automated, our control over it should also become more automated.

By replacing the manual work done by the PPC expert with an automation that follows their logic, PPC teams can still reap the benefits of having more control over automations created by the ad engines.

Let’s look at why automation layering makes sense in PPC.

Escaping automation is not an option

The reason humans worry about automations created by the engines is that we can’t escape them. They are launched at the engine’s discretion and, whether we like it or not, we have to spend time figuring out how they impact our work. Given how busy the typical PPC manager is, this extra work is not something to look forward to.

New automations promise great things, but the truth is that success with them depends on experimentation and reskilling, both tasks that take time to do well. To take an example from aviation, cutting corners on reskilling when new automation is introduced can lead to disastrous results, as seen with the 737 MAX. Luckily, the stakes in PPC are not as high, but I believe the analogy is relevant.

Automation layering for close variants

Some new automations cannot be turned off, so they force us to change how we work with Google Ads. Close variants are a recent example of this type of change. In September of last year, Google redefined what the different keyword match types, like “exact match,” mean.

Some account managers now spend extra time monitoring the search terms triggered by exact match keywords. This is a great candidate for automation layering: the PPC manager takes the structured logic they follow when checking close variants and turns it into an automation that does the checking for them.

There are two specific ways I’ve shared to layer an automation on top of Google’s exact match keywords to keep control when they expand to close variants with similar meaning.

The first way is to simply compare the performance of the close variant to that of the underlying exact keyword. If a user-defined performance threshold is met, the variant can automatically be added as a new keyword with its own bid, or as a negative keyword if its performance is significantly worse. Note that close variants used in conjunction with Smart Bidding should already get the appropriate bid to meet CPA or ROAS targets, but regardless, it can’t hurt to add your own layer of automation to confirm this.
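Here is a minimal, rule-based sketch of that first check. The thresholds, dictionary keys and the three possible outcomes are illustrative assumptions, not values from any particular API; the point is simply that the logic a human applies can be written down and handed to a machine.

# Hypothetical rule: compare a close variant's stats to its exact keyword's stats.
# Thresholds and dictionary keys are illustrative assumptions.
def classify_close_variant(variant, keyword, min_clicks=50, cpa_tolerance=0.20):
    """Return 'add_as_keyword', 'add_as_negative' or 'keep_watching'."""
    if variant["clicks"] < min_clicks:
        return "keep_watching"  # not enough data to judge yet

    variant_cpa = variant["cost"] / max(variant["conversions"], 1)
    keyword_cpa = keyword["cost"] / max(keyword["conversions"], 1)

    if variant_cpa <= keyword_cpa * (1 + cpa_tolerance):
        # Performs roughly as well as (or better than) the exact keyword,
        # so promote it to its own keyword with its own bid.
        return "add_as_keyword"
    # Performs significantly worse, so exclude it.
    return "add_as_negative"

print(classify_close_variant(
    {"clicks": 120, "cost": 300.0, "conversions": 3},
    {"clicks": 800, "cost": 1600.0, "conversions": 40},
))  # -> 'add_as_negative' (variant CPA of $100 vs. keyword CPA of $40)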

The second way is to use the Levenshtein distance calculation to find how far the close variant is from the exact keyword. It is a simple calculation that adds up the number of text changes required to go from one word to another. Every character added, deleted, or changed adds one point. Hence going from the correct spelling of my company name “Optmyzr” to the common typo “Optmyzer” has a Levenshtein distance of 1 (for the addition of the letter “e”). Going from the word “campsite” to “campground” on the other hand has a score of 6 because 4 letters need to be changed and 2 need to be added.

Layer your own automation on top of close variants to determine how different the close variant is from the exact match keyword. The Levenshtein distance function can be used to calculate the number of text changes required to go from one text string to another.

With a Google Ads script, we could write our own automation that turns these manual checks into fully automated ones. Because we define the automation ourselves, it gives us the same control we used to exert manually, along with the benefits normally associated with humans + machines.
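Google Ads scripts are written in JavaScript, but the heart of the check is just the Levenshtein function itself. Here is a short Python sketch of that calculation, using the examples from above, to show how little code the “human logic” actually requires; flagging any close variant whose distance from its exact keyword exceeds a threshold is then a one-line comparison.

def levenshtein(a: str, b: str) -> int:
    """Count the single-character insertions, deletions and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Optmyzr", "Optmyzer"))     # 1
print(levenshtein("campsite", "campground"))  # 6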

Automation layering for Smart Bidding

Other automations, like Smart Bidding, are optional. But given their pace of improvement, it’s just a matter of time before even the most ardent fans of doing PPC manually simply won’t be able to make enough of a difference to charge a living wage for manual bid management services.

The machines are simply better at doing the math that predicts future conversions and using this expected conversion rate to turn an advertiser’s business goals around CPA or ROAS into a CPC bid that the ad auction can use to rank the ad against all others.
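As a simplified illustration of that math (the real systems are far more sophisticated and use many more signals), here is roughly how a CPA or ROAS goal plus a predicted conversion rate can be turned into a CPC bid. All numbers are made up.

# Simplified illustration only; not Google's actual bidding algorithm.
def cpc_from_target_cpa(target_cpa, predicted_cvr):
    """Bid no more per click than the conversions it is expected to produce are worth."""
    return target_cpa * predicted_cvr

def cpc_from_target_roas(target_roas, predicted_cvr, predicted_conv_value):
    """target_roas is conversion value / cost, e.g. 4.0 for a 400% target."""
    expected_value_per_click = predicted_cvr * predicted_conv_value
    return expected_value_per_click / target_roas

print(cpc_from_target_cpa(50.0, 0.04))         # $50 target CPA, 4% CVR -> $2.00 max CPC
print(cpc_from_target_roas(4.0, 0.04, 200.0))  # 400% tROAS, 4% CVR, $200 cart -> $2.00 max CPC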

That said, remember that Smart Bidding is not the same as automated bidding. Part of the bid management process is automated, but there’s still work for humans to do. Things like setting goals and ensuring measurement is working are just two examples of these tasks.

Smart bidding doesn’t mean the entire bid management process is automated. Account managers still need to control dials for seasonality, conversion types, and fluctuating margins. These well-defined processes are great things to automate so they can be layered on Google’s Smart Bidding automation.

Besides dialing in adjustments for seasonality and special promotions, and figuring out how to connect these limited controls to business goals like acquiring new customers, driving store visits or driving higher repeat sales, there’s still the point that most companies care about profits. Despite what we may think after hearing of Uber’s $1 billion quarterly loss, the reality is that most companies don’t have hoards of cash from VCs and a recent IPO, so profits are what help these businesses grow. Curiously, Google Ads doesn’t really have a Smart Bidding strategy geared towards profits.

So it’s up to the human PPC pro to bridge that gap and perhaps add some automation layering. One way to drive towards profitable PPC is to take margins into account when setting ROAS goals.

More profitable items (the ones with higher margins) can have lower ROAS targets. Remember that ROAS in Google is “conv value/cost” (i.e., conversion value divided by ad cost). Assuming the conversion value is the cart value of the sale, for an item with a better margin, more of that cart value is product markup. So a lower ROAS can still deliver a profit, whereas for items with low margins, less of the cart value is markup and hence a higher ROAS is needed to break even.
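To make that concrete, if we treat “margin” as the fraction of the cart value that is gross profit, the break-even ROAS is simply one divided by the margin. A quick worked example, with made-up margins:

# Break-even logic: profit = margin * revenue - ad_cost >= 0
#                   <=> revenue / ad_cost >= 1 / margin
def breakeven_roas(margin):
    return 1 / margin

print(breakeven_roas(0.50))  # 2.0  -> a 200% ROAS already breaks even
print(breakeven_roas(0.10))  # 10.0 -> needs 1,000% ROAS just to break even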

PPC pros could manually assign different products to different smart shopping campaigns with different ROAS targets but that would be tedious and time consuming, especially if the margins for existing products were to change due to promotions and sales events. A smarter solution would be to apply automation layering and use a tool or script that sends products automatically to the right smart shopping campaigns where Google’s automations could take over.
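Here is a sketch of what that layer might look like, with made-up margin bands, campaign names and ROAS targets. Any real implementation would read margins from a product feed or business data and update the campaigns through a script or tool.

# Hypothetical automation layer: route each product to the smart shopping
# campaign whose ROAS target fits its current margin. All values are made up.
CAMPAIGNS = [
    (0.40, "Shopping - High margin - tROAS 300%"),
    (0.20, "Shopping - Mid margin - tROAS 600%"),
    (0.00, "Shopping - Low margin - tROAS 1000%"),
]

def campaign_for(margin):
    for min_margin, campaign in CAMPAIGNS:
        if margin >= min_margin:
            return campaign
    return CAMPAIGNS[-1][1]  # fallback for negative margins

products = {"SKU-1": 0.55, "SKU-2": 0.25, "SKU-3": 0.08}  # SKU -> current margin
for sku, margin in products.items():
    print(sku, "->", campaign_for(margin))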

Conclusion

The engines are automating many things we used to have lots of control over because we used to do them manually: from finding new keywords, to setting better bids, to writing ads. But when the people behind the businesses that advertise on Google get a say, results can be better than if the engine’s automation runs entirely on its own.

Just like Google is adding automations, so should you. Use the concept of automation layering to your advantage to retain the level of control you’re used to while also saving time by letting the machines do the work.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About The Author

Frederick (“Fred”) Vallaeys was one of the first 500 employees at Google where he spent 10 years building AdWords and teaching advertisers how to get the most out of it as the Google AdWords Evangelist.
Today he is the Cofounder of Optmyzr, an AdWords tool company focused on unique data insights, One-Click Optimizations™, advanced reporting to make account management more efficient, and Enhanced Scripts™ for AdWords. He stays up-to-speed with best practices through his work with SalesX, a search marketing agency focused on turning clicks into revenue. He is a frequent guest speaker at events where he inspires organizations to be more innovative and become better online marketers.


Restaurant app Tobiko goes old school by shunning user reviews



You can think of Tobiko as a kind of anti-Yelp. Launched in 2018 by Rich Skrenta, the restaurant app relies on data and expert reviews (rather than user reviews) to deliver a kind of curated, foodie-insider experience.

A new Rich Skrenta project. Skrenta is a search veteran with several startups behind him. He was one of the founders of DMOZ, a pioneering and widely used web directory. Most recently, Skrenta was the CEO of human-aided search engine Blekko, whose technology was sold to IBM Watson around 2015.

At the highest level, both DMOZ and Blekko sought to combine human editors and search technology. Tobiko is similar; it uses machine learning, crawling and third-party editorial content to offer restaurant recommendations.

Tobiko screenshots

Betting on expert opinion. Tobiko is also seeking to build a community, and user input will likely factor into recommendations at some point. However, what’s interesting is that Skrenta has shunned user reviews in favor of “trusted expert reviews” (read: critics).

Those expert reviews are represented by a range of publisher logos on profile pages that, when clicked, take the user to reviews or articles about the particular restaurant on those sites. Where available, users can also book reservations. And the app can be personalized by engaging a menu of preferences. (Yelp recently launched broad, site-wide personalization itself.)

While Skrenta is taking something of a philosophical stand in avoiding user reviews, his approach also made the app easier to launch because expert content on third-party sites already existed. Community content takes much longer to reach critical mass. However, Tobiko also could have presented or “summarized” user reviews from third-party sites as Google does in knowledge panels, with TripAdvisor or Facebook for example.

Tobiko is free and currently appears to have no ads. The company also offers a subscription-based option that has additional features.

Why we should care. It’s too early to tell whether Tobiko will succeed, but it provocatively bucks conventional wisdom about the importance of user reviews in the restaurant vertical (although reading lots of expert reviews can be burdensome). As they have gained importance, reviews have become somewhat less reliable, with review fraud on the rise. Last month, Google disclosed an algorithm change that has resulted in a sharp decrease in rich review results showing in Search.

Putting aside gamesmanship and fraud, reviews have brought transparency to online shopping but can also make purchase decisions more time-consuming. It would be inaccurate to say there’s widespread “review fatigue,” but there’s anecdotal evidence supporting the simplicity of expert reviews in some cases. Influencer marketing can be seen as an interesting hybrid between user and expert reviews, though it’s also susceptible to manipulation.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.




3 Ways to Use XPaths with Large Site Audits



When used creatively, XPaths can help improve the efficiency of auditing large websites. Consider this another tool in your SEO toolbelt.

There are endless types of information you can unlock with XPaths, which can be used in any category of online business.

Some popular ways to audit large sites with XPaths include:

  • Building redirect maps
  • Extracting product data on ecommerce sites
  • Auditing blog content by category, tag, keyword and author

In this guide, we’ll cover exactly how to perform these audits in detail.

What Are XPaths?

Simply put, XPath is a syntax that uses path expressions to navigate XML documents and identify specified elements.

This is used to find the exact location of any element on a page using the HTML DOM structure.

We can use XPaths to help extract bits of information such as H1 page titles, product descriptions on ecommerce sites, or really anything that’s available on a page.

While this may sound complex to many people, in practice, it’s actually quite easy!

How to Use XPaths in Screaming Frog

In this guide, we’ll be using Screaming Frog to scrape webpages.

Screaming Frog offers custom extraction methods, such as CSS selectors and XPaths.

It’s entirely possible to use other means to scrape webpages, such as Python. However, the Screaming Frog method requires far less coding knowledge.

(Note: I’m not in any way currently affiliated with Screaming Frog, but I highly recommend their software for web scraping.)
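For readers curious about the Python route mentioned above, here is a minimal sketch using the requests and lxml libraries (the URL is a placeholder). It does with a few lines what Screaming Frog’s custom extraction does through its interface: apply an XPath expression to a fetched page and return the matching elements.

# pip install requests lxml
import requests
from lxml import html

page = requests.get("https://example.com/some-article")  # placeholder URL
tree = html.fromstring(page.content)

# Extract the text of every H1 on the page; the same XPath expression
# works when pasted into Screaming Frog's custom extraction.
h1_texts = tree.xpath("//h1/text()")
print(h1_texts)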

Step 1: Identify Your Data Point

Figure out what data point you want to extract.

For example, let’s pretend Search Engine Journal didn’t have author pages and you wanted to extract the author name for each article.

What you’ll do is:

  • Right-click on the author name.
  • Select Inspect.
  • In the dev tools elements panel, you will see your element already highlighted.
  • Right-click the highlighted HTML element and go to Copy and select Copy XPath.

Screenshot: copying the XPath from the browser’s dev tools.

At this point, your computer’s clipboard will have the desired XPath copied.
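What ends up on the clipboard is just a short path expression. A purely hypothetical example of what it might look like for an author-name element (the exact path depends on the page’s HTML):

author_xpath = '//*[@id="post-12345"]/header/div[2]/span/a'  # hypothetical example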

Step 2: Set up Custom Extraction

In this step, you will need to open Screaming Frog and set up the website you want to crawl. In this instance, I would enter the full Search Engine Journal URL.

  • Go to Configuration > Custom > Extraction

Screenshot: Configuration > Custom > Extraction in Screaming Frog.

  • This will bring up the Custom Extraction configuration window. There are a lot of options here, but if you’re looking to simply extract text, match your configuration to the screenshot below.

Screenshot: the Custom Extraction configuration window.

Step 3: Run Crawl & Export

At this point, you should be all set to run your crawl. You’ll notice that your custom extraction is the second to last column on the right.

When analyzing crawls in bulk, it makes sense to export your crawl into an Excel format. This will allow you to apply a variety of filters, pivot tables, charts, and anything your heart desires.

3 Creative Ways XPaths Help Scale Your Audits

Now that we know how to run an XPath crawl, the possibilities are endless!

We have access to all of the answers; now we just need to find the right questions.

  • What are some aspects of your audit that could be automated?
  • Are there common elements in your content silos that can be extracted for auditing?
  • What are the most important elements on your pages?

The exact problems you’re trying to solve may vary by industry or site type. Below are some unique situations where XPaths can make your SEO life easier.

1. Using XPaths with Redirect Maps

Recently, I had to redesign a site that required a new URL structure. The former pages all had parameters as the URL slug instead of the page name.

This made creating a redirect map for hundreds of pages a complete nightmare!

So I thought to myself, “How can I easily identify each page at scale?”

After analyzing the various page templates, I came to the conclusion that the actual title of the page looked like an H1 but was actually just large paragraph text. This meant that I couldn’t just get the standard H1 data from Screaming Frog.

However, XPaths would allow me to copy the exact location for each page title and extract it in my web scraping report.

In this case I was able to extract the page title for all of the old URLs and match them with the new URLs through the VLOOKUP function in Excel. This automated most of the redirect map work for me.
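If you prefer to stay out of Excel, the same matching step can be done with a short pandas script. This is a hedged sketch: the file and column names (including an “Extracted Title” column produced by the custom extraction) are assumptions about how the crawls were exported.

import pandas as pd

# Old and new crawl exports, each with a URL column and the extracted page title.
old = pd.read_csv("old_site_crawl.csv")   # assumed columns: Address, Extracted Title
new = pd.read_csv("new_site_crawl.csv")   # assumed columns: Address, Extracted Title

# Join old URLs to new URLs on the shared page title, like a VLOOKUP.
redirect_map = old.merge(new, on="Extracted Title", how="left",
                         suffixes=("_old", "_new"))
redirect_map[["Address_old", "Address_new"]].to_csv("redirect_map.csv", index=False)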

With any automated work, you may have to perform some spot checking for accuracy.

2. Auditing Ecommerce Sites with XPaths

Auditing ecommerce sites can be one of the more challenging types of SEO auditing. There are many more factors to consider, such as JavaScript rendering and other dynamic elements.

Stakeholders will sometimes need product-level audits on an ad hoc basis. Sometimes this covers just a few categories of products; other times it may be the entire site.

Using the XPath extraction method we learned earlier in this article, we can extract all types of data including:

  • Product name
  • Product description
  • Price
  • Review data
  • Image URLs
  • Product Category
  • And much more

This can help identify products that may be lacking valuable information within your ecommerce site.

The cool thing about Screaming Frog is that you can extract multiple data points to stretch your audits even further.

3. Auditing Blogs with XPaths

This is a more common method for using XPaths. Screaming Frog allows you to set parameters to crawl specific subfolders of sites, such as blogs.

However, using XPaths, we can go beyond simple meta data and grab valuable insights to help identify content gap opportunities.

Categories & Tags

One of the most common ways SEO professionals use XPaths for blog auditing is scraping categories and tags.

This is important because it helps us group related blogs together, which can help us identify content cannibalization and gaps.

This is typically the first step in any blog audit.

Keywords

This step is a bit more Excel-focused and advanced. How it works: you set up an XPath extraction to pull the body copy out of each blog post.

Fair warning, this may drastically increase your crawl time.

When you export this crawl into Excel, all of the body text for each page will land in a single cell. I highly recommend that you disable text wrapping, or your spreadsheet will look terrifying.

Next, in the column to the right of your extracted body copy, enter the following formula:

=ISNUMBER(SEARCH("keyword",A1))

In this formula, A1 equals the cell of the body copy.

To scale your efforts, you can have your “keyword” equal the cell that contains your category or tag. However, you may consider adding multiple columns of keywords to get a more accurate and robust picture of your blogging performance.

This formula will return a TRUE/FALSE Boolean value. You can use it to quickly identify keyword opportunities and cannibalization in your blogs.
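The same check can also be run outside of Excel. Here is a hedged pandas sketch that mirrors the formula above; the file and column names (“Body Copy” and “Category”) are assumptions about how the crawl was exported.

import pandas as pd

df = pd.read_csv("blog_crawl_export.csv")  # assumed columns: Body Copy, Category

# Case-insensitive "does the body copy contain the category term?" flag,
# equivalent to =ISNUMBER(SEARCH(...)) in Excel.
df["contains_category_term"] = df.apply(
    lambda row: str(row["Category"]).lower() in str(row["Body Copy"]).lower(),
    axis=1,
)
print(df["contains_category_term"].value_counts())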

Author

We’ve already covered this example, but it’s worth noting that this is still an important element to pull from your articles.

When you blend your blog export data with performance data from Google Analytics and Search Console, you can start to determine which authors generate the best performance.

To do this, sort your blogs by author and start tracking average data sets including:

  • Impressions – Search Console
  • Clicks – Search Console
  • Sessions – Analytics
  • Bounce Rate – Analytics
  • Conversions – Analytics
  • Assisted Conversions – Analytics

Share Your Creative XPath Tips

Do you have some creative auditing methods that involve XPaths? Share this article on Twitter or tag me @seocounseling and let me know what I missed!


Image Credits

All screenshots taken by author, October 2019




When parsing ‘Googlespeak’ is a distraction



Over almost 16 years of covering search, specifically what Googlers have said about SEO and ranking topics, I have seen my share of contradictory statements. Google’s ranking algorithms are complex, and the way one Googler explains something might sound contradictory to how another Googler talks about it. In reality, they are typically talking about different things or nuances.

Some of it is semantics, some of it is being literal in how one person might explain something while another person speaks figuratively. Some of it is being technically correct versus trying to dumb something down for general practitioners or even non-search marketers to understand. Some of it is that the algorithm can change over the years, so what was true then has evolved.

Does it matter if something is or is not a ranking factor? It can be easy to get wrapped up in details that end up being distractions. Ultimately, SEOs, webmasters, site owners, publishers and those who produce web pages need to care more about providing the best possible web site and web page for the topic. You do not want to chase algorithms or race after what is or is not a ranking factor. Google’s stated aim is to rank the most relevant results to keep users happy and coming back to the search engine. How Google does that changes over time. It releases core updates, smaller algorithm updates, index updates and more all the time.

For SEOs, the goal is to make sure your pages offer the most authoritative and relevant content for the given query and can be accessed by search crawlers.

When it is and is not a ranking factor. An example of Googlers seeming to contradict themselves popped up this week.

Gary Illyes from Google said at Pubcon Thursday that content accuracy is a ranking factor. That raised eyebrows because in the past Google has seemed to say content accuracy is not a ranking factor. Last month, Google’s Danny Sullivan said, “Machines can’t tell the ‘accuracy’ of content. Our systems rely instead on signals we find align with relevancy of topic and authority.” One could interpret that to mean that if Google cannot tell the accuracy of content, it would be unable to use accuracy as a ranking factor.

Upon closer look at the context of Illyes’ comments this week, it’s clear he’s getting at the second part of Sullivan’s comment about using signals to understand “relevancy of topic and authority.” SEO Marie Haynes captured more of the context of Illyes’ comment.

Illyes was talking about YMYL (your money, your life) content. He added that Google goes to “great lengths to surface reputable and trustworthy sources.”

He didn’t outright say Google’s systems are able to tell if a piece of content is factually accurate or not. He implied Google uses multiple signals, like signals that determine reputations and trustworthiness, as a way to infer accuracy.

So is content accuracy a ranking factor? Yes and no. It depends on whether you are being technical, literal, figurative or explanatory. When I covered the different messaging around content accuracy on my personal site, Sullivan pointed out the difference. He said on Twitter, “We don’t know if content is accurate” but “we do look for signals we believe align with that.”

It’s the same with whether there is an E-A-T score. Illyes said there is no E-A-T score. That is correct, technically. But Google has numerous algorithms and ranking signals it uses to figure out E-A-T as an overall theme. Sullivan said on Twitter, “Is E-A-T a ranking factor? Not if you mean there’s some technical thing like with speed that we can measure directly. We do use a variety of signals as a proxy to tell if content seems to match E-A-T as humans would assess it. In that regard, yeah, it’s a ranking factor.”

You can see the dual point Sullivan is making here.

The minutiae. When you have people like me, who for almost 16 years have analyzed and scrutinized every word, tweet, blog post or video that Google produces, it can be hard for a Google representative to always convey the exact clear message at every point. Sometimes it is important to step back, look at the bigger picture, and ask yourself why this Googler is saying this or not saying that.

Why we should care. It is important to look at long term goals, and as I said above, not chase the algorithm or specific ranking factors but focus on the ultimate goals of your business (money). Produce content and web pages that Google would be proud to rank at the top of the results for a given query and other sites will want to source and link to. And above all, do whatever you can to make the best possible site for users — beyond what your competitors produce.


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.


