
How to Be a Data-Driven Advertiser While Still Protecting Your Brand


Data-driven ad testing can hum along nicely, until a VP sees one of the ads the team is running and exclaims, “That ad’s not brand-compliant!”

And the whole system stops.

We’ve seen this situation play out many times in the cycle of creative development and testing.

After much trial and error of our own, here’s what seems to resolve the situation best: prototype ads.

Why Prototype Ads Work

Prototype ads are concept ads. They’re often broken out into two types: concepts and variations. Concepts are just what they sound like—big, out-of-the-box ideas that are different from any ad you’ve run before. Concepts take a lot of work to develop, but because they are so different, they are often the source of breakout ads. Variations are incrementally different ads. They’re ideal for testing individual elements in the creative.

Prototype ads work because they let you make data-driven decisions while still mostly staying within brand-driven rules.

Prototype Ads Meet 60 Percent of Brand Guidelines

This requires some buy-in from the brand marketers. It’s uncomfortable for people who have memorized their brand guidelines manual. However, we’ve found that if ads are at least 60 percent compliant with brand guidelines, they won’t damage the brand. And having that much freedom with ad creation lets the creative team develop ads rapidly. This is essential given the volume of creative required.

Forcing ads to be even 10 percent more brand-compliant—so they meet 70 percent of brand requirements—makes creating new ads a lot harder. It slows ad creation down substantially and makes it much more expensive.

If a prototype ad happens to survive the first round of testing, it can be retooled to better fit within brand guidelines. We’ve found that it’s far more efficient to take a winning ad and tweak it a bit to make it brand-compliant than it is to take an underperforming but brand-compliant ad and incrementally test it until it finally (if ever) performs well.

Prototype Ads Work Under the Premise of “Fail Fast”

The first round of testing for prototype ads is ruthless. Each ad will only have about 10,000 impressions to prove itself.

This has three key benefits:

1. It lets you test a large number of ads very quickly. If 95 percent of ads tested will fail to beat the control, it’s essential to be able to test fast to weed through all the losing ads and find that one breakout gem.

2. If an ad is bending brand guidelines, it limits how much the ad will be seen. Ten thousand impressions is not enough exposure to damage a brand, especially if the ad being shown is at least 60 percent brand-compliant.

3. It minimizes wasted ad spend. Prototype ads only get 10,000 impressions to prove themselves. That’s roughly $15 to $20 in ad spend, as the quick sketch after this list shows. So no more spending $500 each on ads that don’t perform.
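To make that budget math concrete, here’s a minimal Python sketch. The CPM values are simply back-calculated from the $15 to $20 per 10,000 impressions above, and the tests-per-winner estimate assumes the 95 percent failure rate applies independently to each ad—both simplifying assumptions, not figures from our testing machine.

# Back-of-the-envelope math for fail-fast prototype testing.
# Assumed CPMs of $1.50-$2.00 are implied by $15-$20 per 10,000 impressions.

IMPRESSIONS_PER_TEST = 10_000

def test_cost(cpm: float, impressions: int = IMPRESSIONS_PER_TEST) -> float:
    """Cost of one prototype test at a given CPM (dollars per 1,000 impressions)."""
    return cpm * impressions / 1_000

for cpm in (1.50, 2.00):
    print(f"CPM ${cpm:.2f}: one 10k-impression test costs ${test_cost(cpm):.2f}")

# If 95% of ads fail, expect to test about 20 ads per breakout winner.
tests_per_winner = 1 / 0.05
print(f"Expected tests per breakout ad: {tests_per_winner:.0f}")
print(f"Expected spend per breakout ad at $2.00 CPM: ${tests_per_winner * test_cost(2.00):.2f}")

Under these assumptions, finding a breakout ad costs about $400 in prototype spend even at the high end—less than the $500 that might otherwise be sunk into a single underperforming ad.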

About Statistical Significance

“But 10,000 impressions is not enough to achieve statistical significance,” someone says. “You’re going to get a lot of false positives and false negatives with that system.”

That would be true—if we were doing a typical A/B split test. But we’re not. We’re not looking for small 5 percent to 10 percent improvements. We’re looking for that one breakout ad that will outperform 95 percent of other ads.

Here’s an example of this principle in action, using the Visual Website Optimizer A/B Test Statistical Significance Calculator to compare two scenarios.

In the first scenario, the variation gets only 15 conversions from 1,000 visitors. That’s nice, but it’s not enough for the test to reach statistical significance. If that variation does really well and gets 20 conversions from the same 1,000 visitors, it achieves statistical significance.

Twenty conversions is an “earthquake”—the type of performance breakout ads can produce. The first test, with just 15 conversions, doesn’t perform well enough to be a breakout ad.
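To make the comparison reproducible without the calculator, here’s a minimal Python sketch of a one-tailed two-proportion z-test, roughly the kind of computation behind such calculators. The control numbers (10 conversions from 1,000 visitors) are a hypothetical assumption; the article specifies only the variation’s figures.

from math import sqrt
from statistics import NormalDist

def one_tailed_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-tailed p-value that variation B converts better than control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return 1 - NormalDist().cdf((p_b - p_a) / se)

# Hypothetical control: 10 conversions from 1,000 visitors.
print(one_tailed_p(10, 1_000, 15, 1_000))  # ~0.16: not significant
print(one_tailed_p(10, 1_000, 20, 1_000))  # ~0.03: significant at the 95% level

Under this assumed control, the 15-conversion variation lands nowhere near the usual 95 percent confidence threshold, while the 20-conversion “earthquake” clears it.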

This is why the results from prototype ad tests can be trusted after so few impressions. A prototype ad test is fundamentally different from a standard A/B split test. It’s not looking for which version performs incrementally better—it’s looking for breakout results. And while we do have to test a lot of ads to find those breakout ads, that’s okay. We have an ad testing machine built to manage it.

Conclusion

This is how data-driven Facebook advertisers can stay true to their need for high-performing ads while still complying with brand-driven guidelines. It also fits well with how many companies see themselves: as brand-driven while still maximizing their profits.

Prototype ads may not always be perfectly true to brand guidelines, but they are true enough. The added performance they deliver is a good trade for bending branding rules, even if only a little.

Brian Bowman is the founder and CEO of ConsumerAcquisition.com. He has profitably managed more than $1 billion in online advertising spend and product development for leading online brands, including Disney, ABC, Match.com, and Yahoo!
