Keyword-based text ads allow retailers to understand a consumer’s window of intent and optimize accordingly.
However, shopping ads were created on a product-based bidding model, which means Google’s auction selects what products show up for a specific search result.
This model takes away an important aspect of optimization and bid control because retailers are not able to bid differently on a consumer throughout their purchase journey.
While this can be disheartening, it doesn’t have to be!
There is a solution to the product bidding disadvantage:
Through the campaign priority setting in Shopping campaigns, you are able to control how much you bid for different types of queries.
How do campaign priorities work?
When you have the same product in multiple shopping campaigns, you can determine which campaign should participate in the auction for that product with the campaign priority – high, medium, or low.
The highest priority campaign will always enter the auction first, regardless of how much you are bidding.
To create a shopping keyword segmentation structure, retailers must start by building three campaigns containing the same product, or group of products, each with a different priority setting – high, medium, and low.
The priority settings will act as a funnel, filtering down more specific keywords via negatives.
In practice, the structure works like this: the high-priority campaign absorbs broad, inefficient non-brand queries at low bids, while negatives push brand and product-specific queries down to the medium- and low-priority campaigns, where you can bid more aggressively on higher-intent traffic.
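The funnel logic above can be sketched in a few lines of Python. This is an illustrative model only – the campaign names, negative keywords, and bids are invented, and real query matching in Google Ads is more nuanced than the substring check used here:

```python
# Minimal sketch of the Shopping priority funnel (illustrative only):
# three campaigns hold the same products, and negatives in the
# higher-priority campaigns push specific queries down the funnel.

CAMPAIGNS = [
    # (name, negative keywords, bid) - ordered high priority -> low
    ("high-priority: generic queries", {"acme", "acme pro 200"}, 0.40),
    ("medium-priority: brand queries", {"acme pro 200"}, 0.90),
    ("low-priority: product queries", set(), 1.50),
]

def campaign_for_query(query):
    """Return the (name, bid) of the campaign that enters the auction.

    Google serves the highest-priority campaign whose negatives do not
    block the query, regardless of which campaign bids the most.
    """
    for name, negatives, bid in CAMPAIGNS:
        if not any(neg in query for neg in negatives):
            return name, bid
    return None

print(campaign_for_query("running shoes"))       # generic -> high priority, low bid
print(campaign_for_query("acme running shoes"))  # brand -> medium priority
print(campaign_for_query("acme pro 200 shoes"))  # product -> low priority, high bid
```

Note how the bid rises as the query becomes more specific: the funnel lets you pay the most only for the highest-intent searches.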
Shopping keyword segmentation gives advertisers the ability to:
Own the SERP on branded terms.
Optimize bids based on non-brand performance.
Control what products to advertise at different stages of the purchase journey.
Shopping keyword segmentation is a worthwhile approach to ensure you are driving sales on high intent queries and cutting spend on inefficient head terms.
Dynamic Search Ads Are an Asset, Not an Accessory
Leverage the power of Dynamic Search Ads to expand your keyword set at a lower cost.
When crafting ecommerce campaigns, you’ll often find that the most logical keywords are also the most expensive and least profitable.
There’s an oft-cited stat that around 15% of daily searches are new to Google; sure, you’ll find them via broad match with a higher CPC, but it’s more efficient to let DSA do the dirty work for you.
When crafting a DSA campaign, here are a few things you need to keep in mind:
Pages and URLs are your keywords. Make sure to segment like-sections of the site into their own ad groups to maximize copy relevancy.
By the same token, use negatives! Block DSAs from going to irrelevant pages. I doubt there would be any relevant queries coming from pages around careers or return policy.
Leverage a full suite of extensions, just like you would for a keyword targeting campaign. They’ll likely see less volume due to lower ad rank, but better to have them present.
Use any and all audiences available. Spend the majority of your time optimizing toward people, and let the engines pick the keywords.
Smart bidding features (target ROAS/CPA and eCPC) help amplify the effectiveness and efficiency of DSAs. Use them often.
It’s a common practice to take all converting keywords from a DSA campaign and deploy them into a traditional keyword-targeted campaign to maintain control.
While this is an effective way to ensure maximum volume, oftentimes CPCs spike to the point of inefficiency, because you end up targeting every query instead of only entering the auction when DSAs determine a conversion is most likely.
Unless a single query is getting dominant volume or underperforming relative to keyword targeting, it’s recommended to leave them be.
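The promotion rule above can be expressed as a simple filter over a search-term report. The thresholds below (10% click share, 80% of benchmark ROAS) are assumptions for illustration, not Google guidance – tune them to your account:

```python
# Illustrative heuristic: only break a converting DSA query out into a
# keyword-targeted campaign when it gets dominant volume or
# underperforms your keyword-campaign benchmarks.

def should_promote(query_stats, campaign_clicks, benchmark_roas,
                   volume_share=0.10, roas_floor=0.80):
    """query_stats: dict with 'clicks' and 'roas' for one DSA search term."""
    dominant = query_stats["clicks"] / campaign_clicks >= volume_share
    underperforming = query_stats["roas"] < benchmark_roas * roas_floor
    return dominant or underperforming

# A query driving 15% of campaign clicks is worth controlling directly:
print(should_promote({"clicks": 150, "roas": 4.0}, 1000, 3.5))  # True
# A small, efficient query is best left to the DSA campaign:
print(should_promote({"clicks": 12, "roas": 4.2}, 1000, 3.5))   # False
```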
Adopt Google Showcase Shopping Ads
In 2016, Google launched a new ad format – Showcase Shopping ads.
This solution looks to better position ecommerce, retail, and fashion advertisers with their customers.
Think of Google Showcase Shopping ads as your digital storefront. It’s the window shopping solution your online customers are looking for.
You’re able to group together different ecommerce, fashion, and retail products using vivid, high-quality digital images.
You can complement existing products with multiple smaller products, or combine several “me-too” products within a larger discounted offer.
Showcase ads are used to target more generic non-brand queries and appear on mobile search results.
On the SERP, the ad features a brand-specific, customized hero image relating to the search query along with two smaller images.
These smaller images show the actual products. When the user clicks into the ad it features the custom hero image, a custom description to help introduce the brand, and up to 10 individual products.
Aside from the visual differences, Showcase Shopping ads use maximum CPE (cost per engagement) bidding, which means that advertisers set the highest amount that they are willing to pay for an engagement.
Advertisers are then charged when someone expands the ad and spends 10+ seconds within it, or when a user clicks a link to the site before the 10 seconds elapse.
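The CPE charging rule is easy to pin down precisely. A minimal sketch of when an engagement becomes billable, based on the description above:

```python
# Sketch of the Showcase Shopping CPE charging rule: an engagement is
# billable when the user expands the ad and either stays 10+ seconds
# inside it or clicks through to the site sooner.

BILLABLE_DWELL_SECONDS = 10

def is_billable_engagement(expanded, seconds_in_ad, clicked_through):
    if not expanded:
        return False  # no expansion, no engagement, no charge
    return seconds_in_ad >= BILLABLE_DWELL_SECONDS or clicked_through

print(is_billable_engagement(True, 12, False))  # True: 10+ second dwell
print(is_billable_engagement(True, 4, True))    # True: clicked through early
print(is_billable_engagement(True, 4, False))   # False: bounced quickly
```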
In 2018, Showcase ads continued to gain mobile click share and have continued to gain momentum in 2019.
This ad format is a great branding tool, tailored towards user engagement rather than user acquisition and best used as an upper-funnel tactic.
Connect Online to Offline with Local Inventory Ads
According to Google, almost 80% of shoppers will go in-store when the retailer has an item they want immediately.
One of the best ways to address this expectation of immediate in-store availability is through local inventory ads.
This ad format is a great way to drive customers to your store, capturing their attention by highlighting products available at stores nearby.
Local inventory ads appear on mobile queries that include local intent (for example, “dresses near me”) and will trigger if the user is within 35 miles of a store.
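To make the 35-mile trigger concrete, here is a sketch using the standard haversine great-circle formula. The coordinates are made up for the example, and Google’s actual eligibility logic is of course more involved than a raw radius check:

```python
import math

# Illustrative check of the "within 35 miles of a store" trigger for
# local inventory ads. Coordinates below are hypothetical examples.

EARTH_RADIUS_MILES = 3959.0

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def eligible_for_local_ad(user, store, radius_miles=35):
    return miles_between(*user, *store) <= radius_miles

store = (40.7580, -73.9855)        # hypothetical Manhattan store
nearby_user = (40.6782, -73.9442)  # roughly 6 miles away in Brooklyn
print(eligible_for_local_ad(nearby_user, store))  # True
```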
When users click on your ad, they’re immediately directed to your Google-hosted local storefront page.
Your customized storefront page includes:
A product description.
An image of the product.
Links to your website.
Your phone number.
Your store’s hours of operation.
A map providing directions to your store.
Customers can also buy directly by clicking through to your website.
While local inventory ads are a great option for all brick-and-mortar advertisers, the setup and maintenance can prove challenging.
Advertisers must ensure in-store availability and inventory counts in the feed are updated daily.
In an attempt to alleviate the onboarding and maintenance of local feeds, Google launched a local feed partnership program. This new program allows third-party inventory data providers to provide sales and inventory data to Google on behalf of the merchant.
Once an advertiser launches local inventory campaigns the recommended way to measure impact is through multiple sources, like Google Ads and Google Analytics.
Monitoring key metrics like in-store traffic and online orders, as well as other analytics, allow retailers to optimize campaigns toward in-store visits and resulting offline and online sales.
Target the Less Obvious Audience
Audience tools like demographics, customer match and retargeting are some of the more powerful features Google and Bing have created in recent memory.
Advertisers have the ability to customize messaging, increase/decrease bids, and generally pinpoint whatever or whomever they want!
Some of the most commonly used (and recommended) audiences are in-market audiences, meaning Google is able to home in on people who are actively researching a significant purchase.
If you’re an insurance company, it seems like a no brainer to add an audience of users who are in market for insurance, right? The challenge is all of your competitors are doing the same thing.
Consider using audiences as a way to find what else your audience likes and target accordingly.
If you’re selling handbags or jewelry, you might find success targeting men who want to buy something for their significant other’s birthday or anniversary.
Boutique fitness club? Try targeting users interested in organic food.
Google has a bevy of tools to help identify these cohorts as well. Head over to the Audience Insights section of the audience manager to get a view of what your audience likes relative to the rest of the country.
Below is a snapshot from a luxury watch seller. Perhaps it shouldn’t be too surprising the audience indexes high for Pools, Sailing, and Trips to Miami!
Re-Evaluate Your KPIs
Return on Ad Spend (ROAS as we all affectionately call it) can be a dangerous metric. It’s a single snapshot in time, evaluating only whether a single order made money or not.
Optimizing to single-purchase ROAS only will diminish the ability to compete in challenging auctions. Consider evaluating towards Cost Per Acquired Customer, Customer Lifetime Value or one-year customer payback as a better true north metric.
For true, top-of-the-funnel prospecting search terms, consider targeting micro-conversions or “steps” as a way to add value without breaking the bank.
Optimize toward an email list subscribe, or use early-stage terms as a way to build retargeting pools to market to later.
Featured Image Credit: Paulo Bobita. All screenshots taken by author.
The majority of consumers (80% – 90%) routinely consult reviews before buying something, whether online or off. The powerful influence of reviews on purchase behavior has spawned a cottage industry of fake-reviews, a problem that is growing on major sites such as Amazon, Google and Yelp, among other places.
Just over 2% of reviews submitted were fake. TripAdvisor is one of those other places, where reviews form the core of the company’s content and the principal reason consumers visit. How much of the review activity on TripAdvisor is fraudulent? In its inaugural TripAdvisor Transparency Report the company says that 2.1% of all reviews submitted to the site in 2018 were fake. (A total of 4.7% of all review submissions were rejected or removed for violating TripAdvisor’s review guidelines, which extend beyond fraud.)
73% blocked by machine detection. Given the volume of review submissions TripAdvisor receives – more than 66 million in 2018 – that translates into roughly 1.4 million fake reviews. TripAdvisor says that 73% of those fake reviews were blocked before being posted, while the remainder of fake reviews were later removed. The company also says that it has “stopped the activity of more than 75 websites that were caught trying to sell reviews” since 2015.
TripAdvisor defines “fake review” as one “written by someone who is trying to unfairly manipulate a business’ average rating or traveler ranking, such as a staff member or a business’ competitor. Reviews that give an account of a genuine customer’s experience, even if elements of that account are disputed by the business in question, are not categorized as fake.”
The company uses a mix of machine detection, human moderation and community flagging to catch fraudulent reviews. The bulk of inauthentic reviews (91%) are fake positive reviews, TripAdvisor says.
TripAdvisor says that the review fraud problem is global, with fake reviews originating in most countries. However, it said there was a higher percentage than average of fake reviews “originating from Russia.” By contrast, China is the source of many fake reviews on Amazon.
Punishing fake reviews. TripAdvisor has a number of penalties and punishments for review fraud. In the first instance of a business being caught posting or buying fake reviews, TripAdvisor imposes a temporary ranking penalty.
Upon multiple infractions, the company will impose a content ban that prevents the individual or individuals in question from posting additional reviews and content on the site. It also prevents the involved parties from creating new accounts to circumvent the ban.
In the most extreme cases, the company will apply a badge of shame (penalty badge) that warns consumers the business has repeatedly attempted to defraud them. This is effectively a kiss of death for the business. Yelp does something similar.
Why we should care. Consumer trust is eroding online. It’s incumbent upon major consumer destinations sites to police their reviews aggressively and prevent unscrupulous merchants from deceiving consumers. Yelp has been widely criticized for its “review filter” but credit the company for its long-standing efforts to protect the integrity of its content.
Google and Amazon, in particular, need to do much more to combat review spam and fraud. Hopefully TripAdvisor’s effort and others like it will inspire them to do the same.
About The Author
Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.
When optimizing our websites for crawlability, our main goal is to make sure that search engines are spending their time on our most important pages so that they are regularly crawled and any new content can be found.
Each time Googlebot visits your website, it has a limited window in which to crawl and discover as many pages and links on your site as possible. When that limit is hit, it will stop.
The time it takes for your pages to be revisited depends on a number of different factors that play into how Google prioritizes URLs for crawling, including:
XML sitemap inclusion.
Position within the site’s architecture.
How frequently the page changes.
The bottom line is: your site only gets Googlebot’s attention for a finite amount of time with each crawl, which could be infrequent. Make sure that time is spent wisely.
It can be hard to know where to start when analyzing how well-optimized your site is for search engine crawlers, especially when you work on a large site with a lot of URLs to analyze, or work in a large company with a lot of competing priorities and outstanding SEO fixes to prioritize.
That’s why I’ve put together this list of top-level checks for assessing crawl hygiene to give you a starting point for your analysis.
1. How Many Pages Are Being Indexed vs. How Many Indexable Pages Are There on the Site?
Why This Is Important
This shows you how many pages on your site are available for Google to index, and how many of those pages Google was actually able to find and how many it determined were important enough to be indexed.
2. How Many Pages Are Being Crawled Overall?
Why This Is Important
Comparing Googlebot’s crawl activity against the number of pages you have on your site can give you insights into how many pages Google either can’t access, or has determined aren’t enough of a priority to schedule to be crawled regularly.
3. How Many Pages Aren’t Indexable?
Why This Is Important
Spending time crawling non-indexable pages isn’t the best use of Google’s crawl budget. Check how many of these pages are being crawled, and whether or not any of them should be made available for indexing.
4. How Many URLs Are Being Disallowed from Being Crawled?
Why This Is Important
This will show you how many pages you are preventing search engines from accessing on your site. It’s important to make sure that these pages aren’t important for indexing or for discovering further pages for crawling.
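One quick way to run this check is to test a list of known URLs against your robots.txt rules with Python’s standard-library `urllib.robotparser`. The rules and URLs below are invented examples:

```python
from urllib.robotparser import RobotFileParser

# Audit sketch: count how many known URLs your robots.txt blocks for
# Googlebot. Both the rules and the URL list are made-up examples.

robots_txt = """\
User-agent: *
Disallow: /checkout/
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

site_urls = [
    "https://example.com/products/blue-dress",
    "https://example.com/checkout/basket",
    "https://example.com/search?q=dresses",
]

blocked = [u for u in site_urls if not parser.can_fetch("Googlebot", u)]
print(f"{len(blocked)} of {len(site_urls)} URLs are disallowed")
for url in blocked:
    print("blocked:", url)
```

In a real audit you would feed in your full crawl’s URL list and review each blocked URL against the question above: is it important for indexing, or for discovering further pages?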
5. How Many Low-Value Pages Are Being Indexed?
Why This Is Important
Looking at which pages Google has already indexed on your site gives an indication into the areas of the site that the crawler has been able to access.
For example, these might be pages that you haven’t included in your sitemaps as they are low-quality, but have been found and indexed anyway.
6. How Many 4xx Error Pages Are Being Crawled?
Why This Is Important
It’s important to make sure that crawl budget isn’t being used up on error pages instead of pages that you want to have indexed.
Googlebot will periodically try to crawl 404 error pages to see whether the page is live again, so make sure you use 410 status codes correctly to show that pages are gone and don’t need to be recrawled.
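A simple way to find these wasted crawls is to mine your server logs for error URLs Googlebot keeps revisiting. The log data below is a made-up sample, and the repeat threshold is an arbitrary assumption:

```python
from collections import Counter

# Sketch: flag persistent 404s that Googlebot keeps recrawling. URLs
# that will never return are candidates for a 410, so Google can drop
# them from its crawl schedule sooner.

googlebot_hits = [  # (url, status) pairs from a hypothetical log extract
    ("/old-promo", 404), ("/old-promo", 404), ("/old-promo", 404),
    ("/typo-link", 404),
    ("/retired-range", 410),
    ("/products/new", 200),
]

recrawled_404s = Counter(url for url, status in googlebot_hits
                         if status == 404)

for url, hits in recrawled_404s.items():
    if hits >= 3:  # arbitrary threshold for "keeps coming back"
        print(f"{url}: crawled {hits} times as 404 - consider returning 410")
```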
7. How Many Internal Redirects Are Being Crawled?
Why This Is Important
Each request that Googlebot makes on a site uses up crawl budget, and this includes any additional requests within each of the steps in a redirect chain.
Help Google crawl more efficiently and conserve crawl budget by making sure only pages with 200 status codes are linked to within your site, and reduce the number of requests being made to pages that aren’t final destination URLs.
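The cost of a redirect chain is easy to quantify from crawl data: every hop is an extra request before Googlebot reaches a final URL. A minimal sketch, using an invented redirect map:

```python
# Sketch of measuring redirect waste: each hop in a chain is an extra
# request that spends crawl budget. The redirect map is illustrative.

redirects = {  # source -> target, as extracted from a site crawl
    "/old-home": "/home",
    "/home": "/",           # two-hop chain: /old-home -> /home -> /
    "/spring-sale": "/sale",
}

def hops_to_final(url, redirects, limit=10):
    """Count redirect hops before reaching a non-redirecting URL.

    The limit guards against redirect loops, a common crawler trap.
    """
    hops = 0
    while url in redirects and hops < limit:
        url = redirects[url]
        hops += 1
    return hops, url

hops, final = hops_to_final("/old-home", redirects)
print(f"/old-home reaches {final} after {hops} extra requests")
```

Any internal link pointing at `/old-home` costs two wasted requests per crawl; relinking it straight to `/` removes them.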
8. How Many Canonical Pages Are There vs. Canonicalized Pages?
Why This Is Important
The number of canonicalized pages on your site gives an indication into how much duplication there is on your site. While canonical tags consolidate link equity between sets of duplicate pages, they don’t help crawl budget.
Google will choose to index one page out of a set of canonicalized pages, but to be able to decide which is the primary page, it will first have to crawl all of them.
9. How Many Paginated or Faceted Pages Are Being Crawled?
Why This Is Important
Google only needs to crawl pages that include otherwise undiscovered content or unlinked URLs.
Pagination and facets are usually a source of duplicate URLs and crawler traps, so make sure that these pages that don’t include any unique content or links aren’t being crawled unnecessarily.
As rel=next and rel=prev are no longer supported by Google, ensure your internal linking is optimized to reduce reliance on pagination for page discovery.
10. Are There Mismatches in Page Discovery Across Crawl Sources?
Why This Is Important
If you’re seeing pages being accessed by users through your analytics data that aren’t being crawled by search engines within your log file data, it could be because these pages aren’t as discoverable for search engines as they are for users.
By integrating different data sources with your crawl data, you can spot gaps where pages can’t be easily found by search engines.
Google’s two main sources of URL discovery are external links and XML sitemaps, so if you’re having trouble getting Google to crawl your pages, make sure they are included in your sitemap if they’re not yet being linked to from any other sites that Google already knows about and crawls regularly.
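Once you have URL lists from each source, the gap analysis is just set arithmetic. All three URL lists below are invented samples standing in for analytics exports, log-file extracts, and sitemap contents:

```python
# Sketch: diff URL sets from different sources to find pages users
# reach that search engines never crawl. All lists are sample data.

analytics_urls = {"/", "/dresses", "/dresses/summer", "/gift-guide"}
logfile_googlebot_urls = {"/", "/dresses", "/sitemap-only-page"}
sitemap_urls = {"/", "/dresses", "/dresses/summer"}

# Seen by users but never crawled by Googlebot:
crawl_gap = analytics_urls - logfile_googlebot_urls

# Of those, pages missing from the sitemap are the least discoverable:
not_in_sitemap = crawl_gap - sitemap_urls

print("uncrawled pages:", sorted(crawl_gap))
print("also missing from sitemap:", sorted(not_in_sitemap))
```

Pages in the second set are the first candidates for sitemap inclusion and stronger internal linking.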
To Sum Up
By running through these 10 checks for your websites that you manage, you should be able to get a better understanding of the crawlability and overall technical health of a site.
Once you identify areas of crawl waste, you can instruct Google to crawl less of those pages by using methods like disallowing them in robots.txt.
You can then start influencing it to crawl more of your important pages by optimizing your site’s architecture and internal linking to make them more prominent and discoverable.
Last week we reported that Google has updated its algorithms to give original reporting preferred ranking in Google search. So when John Shehata, VP of Audience Growth at Condé Nast, a major publishing company, posted on Twitter that Yahoo is outranking the original source of the article, Google took notice.
The complaint. Shehata posted on Twitter, “Recently I see a lot of instances where Google Top Stories ranking syndicated content from Yahoo above or instead of original content. This is disturbing especially for publishers. Yahoo has no canonicals back to original content but sometimes they link back.”
He provided screenshots of this happening as evidence.
No canonical. John also mentioned that Yahoo, which is legally syndicating the content on behalf of Condé Nast, is not using a canonical tag to point back to the original source. Google’s recommendation for those allowing others to syndicate content is to have a clause requiring that syndicators use the canonical tag to point back to the source they are syndicating from. Using this canonical tag indicates to Google which article page is the original source.
The issue. Sometimes those who license content, the syndicators, post the content before or at the same time as the source they are syndicating it from. That makes it hard for Google or other search engines to know which is the original source. That is why Google wrote, “Publishers that allow others to republish content can help ensure that their original versions perform better in Google News by asking those republishing to block or make use of canonical. Google News also encourages those that republish material to consider proactively blocking such content or making use of the canonical, so that we can better identify the original content and credit it appropriately.”
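Publishers can audit their syndication partners for compliance programmatically. A minimal sketch using Python’s standard-library HTML parser – the markup below is a made-up fragment, not actual Yahoo code:

```python
from html.parser import HTMLParser

# Sketch: verify that a syndicated page carries a canonical tag
# pointing back at the original article.

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

syndicated_html = """
<html><head>
<link rel="canonical" href="https://original-publisher.example/story" />
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(syndicated_html)
original = "https://original-publisher.example/story"
print("canonical OK" if finder.canonical == original else "missing canonical")
```

Running this against each syndicated copy of your articles flags partners who are breaching a canonical-tag clause.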
Google’s response. Google Search Liaison Danny Sullivan responded on Twitter: “If people deliberately chose to syndicate their content, it makes it difficult to identify the originating source. That’s why we recommend the use of canonical or blocking. The publishers syndicating can require this.”
This affects both web and News results, Sullivan said. In fact, the original reporting algorithm update has not yet rolled out to Google News; it is currently just for web search.
Solution. If you allow people to syndicate your content, you should require them to use the canonical tag or make them block Google from indexing that content. Otherwise, do not always expect Google to be able to figure out where the article originated, especially when your syndication partners publish the story before or at the same time that you publish your story.
Why we care. While the original reporting change is interesting in this case, it is somewhat unrelated. If the same article is published on two different sites at the same time, both sites can appear to the search engines as the original source. If these sites are syndicating your content legally, review or update your contracts to require syndicators to either use canonical tags or block their syndicated content from indexing altogether. If syndicators are stealing your content and outranking you, Google should be better at dealing with that algorithmically, otherwise, you can file a DMCA takedown request with Google.
About The Author
Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.