4 Outdated Google Ads Tactics to Reassess

The pay-per-click advertising industry is constantly evolving. New features roll out relentlessly; a campaign you ran a year ago would likely be managed differently today.

But some outdated Google Ads tactics remain useful when reassessed. Here are four.

Quality Score

Google defines Quality Score as “a diagnostic tool to give you an idea of how the quality of your ad compares to other advertisers.”

The score rates each keyword on a scale of 1 to 10. A higher number indicates consistency across the search experience: if a user searches for “oval coffee tables,” the ad and the landing page behind it should use the same language. Keywords with higher Quality Scores generally earn lower costs per click over time.

One problem with Quality Score, however, is that it emphasizes click-through rate over conversions. A keyword might have a poor Quality Score but convert well. Changing that keyword could improve the Quality Score while reducing conversions.

Quality Score isn’t irrelevant, but it shouldn’t be the deciding factor. For keywords with low Quality Scores that aren’t converting, consider:

  • Adding negative keywords,
  • Including your target keywords more frequently in ad copy,
  • Updating the landing page to align with the ad’s message.
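
To spot the opposite case first — keywords with a low Quality Score that still convert — it helps to see both numbers side by side. Here is a minimal sketch using the Google Ads API’s Python client (google-ads); the customer ID is hypothetical, and the field names should be verified against the API version you run:

```python
# Minimal sketch: list low-Quality-Score keywords that still convert.
# Assumes a configured google-ads.yaml; CUSTOMER_ID is hypothetical.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # hypothetical account ID

QUERY = """
    SELECT
      ad_group_criterion.keyword.text,
      ad_group_criterion.quality_info.quality_score,
      metrics.conversions,
      metrics.average_cpc
    FROM keyword_view
    WHERE segments.date DURING LAST_30_DAYS
"""

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("GoogleAdsService")

# Keywords with a low score but real conversions are candidates to
# keep as-is, not to "fix" into fewer conversions.
for batch in service.search_stream(customer_id=CUSTOMER_ID, query=QUERY):
    for row in batch.results:
        qs = row.ad_group_criterion.quality_info.quality_score
        conv = row.metrics.conversions
        if qs <= 4 and conv > 0:
            print(
                f"{row.ad_group_criterion.keyword.text}: "
                f"QS={qs}, conversions={conv:.1f}, "
                f"avg CPC=${row.metrics.average_cpc / 1_000_000:.2f}"
            )
```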

A/B Testing

Advertisers once tested ad components by running them against each other in the same ad group. To see which call to action, landing page, or ad copy performed best, an advertiser would create two ads, which Google would serve evenly over time.

This is no longer the case.

Responsive search ads hold multiple headlines and descriptions and automatically serve the best-performing combinations in search results. Advertisers can’t see which combinations are converting, only the overall metrics. Even with just two ads, one will inevitably earn a larger impression share based on the campaign’s conversion goal. That lack of transparency and uneven serving prevents accurate testing.

The answer is Ad Variations, which tests the base version of an ad component against a trial version, split 50/50. To test landing pages, for example, an advertiser instructs Google to swap in the trial page half the time. Advertisers still can’t see the metrics for each combination, but they can see whether the base or the trial version performed better.

In the age of automation, Ad Variations experiments are the most effective way to test ad components.

Screenshot: performance comparison from an Ad Variations experiment, July 19 to August 18. Such experiments reveal only the overall performance of the base and trial versions.
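
Because Ad Variations splits traffic 50/50, the aggregate numbers it reports are enough for a standard significance check. The sketch below is not part of any Google tool; it is a plain two-proportion z-test with made-up click and conversion counts:

```python
# Quick significance check for a 50/50 base-vs-trial split.
# The counts below are hypothetical; plug in the aggregate numbers
# your Ad Variations experiment reports.
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Base: 4,100 clicks, 123 conversions; trial: 4,060 clicks, 158 conversions.
p = two_proportion_z(123, 4100, 158, 4060)
print(f"p-value: {p:.4f}")  # below 0.05 -> treat the trial as the real winner
```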

Match-Type Ad Groups

Creating ad groups based on match type was common before close variants and the phase-out of broad match modifier.

For example, keywords around the ‘oval table’ theme would have required two ad groups with the same keywords: one containing only exact match keywords, the other only phrase match. Crucially, every keyword in the exact match group would be added as a negative in the phrase match group, letting the advertiser control which ad group served for which query. Exact match queries would trigger one set of ads, phrase match queries another.

Setting the campaign to manual bidding lets advertisers control the cost (and ad text) for each variant, such as $2 for an exact match keyword and $1.50 for its phrase match counterpart.
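
The structure is mechanical enough to script. Below is a minimal plain-Python sketch, with made-up keywords and bids, that builds the paired groups and the cross-negatives that route each query to the intended group:

```python
# Generate paired exact/phrase ad groups with cross-negatives and
# per-match-type manual bids. Keywords and bid amounts are made up.
KEYWORDS = ["oval coffee table", "oval dining table"]

EXACT_BID = 2.00   # pay more for the tighter match
PHRASE_BID = 1.50

def build_match_type_groups(keywords: list[str]) -> dict:
    return {
        "Oval Tables - Exact": {
            "keywords": [(f"[{kw}]", EXACT_BID) for kw in keywords],
            "negatives": [],
        },
        "Oval Tables - Phrase": {
            "keywords": [(f'"{kw}"', PHRASE_BID) for kw in keywords],
            # Exact-match negatives push exact queries to the other group.
            "negatives": [f"[{kw}]" for kw in keywords],
        },
    }

for group, spec in build_match_type_groups(KEYWORDS).items():
    print(group, spec)
```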

Manual Bidding

Manual bidding allows bid adjustments for factors such as device and location, but Smart Bidding adjusts for these signals and more automatically. The machine learning behind Smart Bidding far outperforms manual adjustments; for example, Smart Bidding takes users’ browsers and operating systems into account.

However, manual bidding is still occasionally useful. For example, bidding above a certain amount on a set of keywords may not be profitable. Manual bidding caps the cost per click, trading the benefits of Smart Bidding for cost control.
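
A common way to choose that cap, shown below with hypothetical numbers, is to work backward from a target cost per conversion: multiply the most you can pay per conversion by the keyword’s conversion rate.

```python
# Back into a maximum CPC bid from a target cost per conversion.
# All numbers are hypothetical.
target_cpa = 40.00        # most we can pay per conversion, in dollars
conversion_rate = 0.05    # 5% of clicks convert

max_cpc = target_cpa * conversion_rate
print(f"Max CPC bid: ${max_cpc:.2f}")  # $2.00: above this, the keyword loses money
```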
