We tested 48 Facebook ads to bust 6 marketing myths


Facebook Ads are a predictable and scalable way to find new customers and followers. You can start with just a few dollars a day and any brand can create one. The ads just require a call to action, an image and some text. It seems so simple!

Looks can be deceiving. After setting up 48 individual ads for this test I’ve learned that they can be extremely complex. For a newcomer to the Facebook Ads platform, it can be daunting to ensure your money is being spent in the right place and on the highest-performing ad concepts. Thankfully there are hundreds of experts on just as many sites giving out advice.

But is this advice any good? Much of it has been repeated so often that the original sources and supporting data have been lost. After reading a few questionable claims, we were determined to find the truth ourselves.

Approach

We picked six well-publicized myths and put them through some heavy testing:

Summary of takeaways from this article

These myths all focus on image choice, so to ensure no other factors affected the outcomes we kept everything else the same: text, call to action, titles, demographics, budget, etc. The default ad setup is shown below:

Prototypical ad we ran

We optimized the ads for Cost Per Click (CPC) throughout the experiment and used it to measure success. With Cost Per Click, the advertiser is charged each time a user clicks the ad. Why is CPC important? The lower the CPC, the more people you can get to your site for the same advertising budget.

With CPC, Facebook only shows the ad to the people who are most likely to actually click on it. This is unlike Cost Per Impression, which charges per view and is more often used for brand awareness or page likes.
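The arithmetic behind why CPC matters is simple. Here is a minimal sketch; the dollar figures are hypothetical, not from our data:

```python
# Toy illustration: how CPC determines the number of visitors
# a fixed budget can buy. Figures are hypothetical.
def clicks_for_budget(budget_usd: float, cpc_usd: float) -> int:
    """Approximate number of clicks a budget buys at a given cost per click."""
    return int(budget_usd / cpc_usd)

# A 6% lower CPC buys roughly 6% more clicks for the same spend.
baseline = clicks_for_budget(100.0, 0.50)  # 200 clicks at $0.50 CPC
improved = clicks_for_budget(100.0, 0.47)  # 212 clicks at $0.47 CPC
```

This is why small percentage differences in CPC compound into meaningfully more traffic over a sustained campaign.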

We ran 48 ads in total – at least 8 for each myth. In our analysis we tested all ad clicks to a statistical significance level of 80%. Each graph we present includes each ad set's 80% confidence interval for gauging accuracy. For more details, see the notes at the end.
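As a sketch of how an 80% confidence interval on a click-through rate can be computed (a standard normal approximation; the click and impression counts below are hypothetical, not our actual data):

```python
import math

# z-score for a two-sided 80% interval (the 90th percentile of the
# standard normal distribution).
Z_80 = 1.2816

def ctr_confidence_interval(clicks: int, impressions: int, z: float = Z_80):
    """80% confidence interval for a click-through rate,
    using the normal approximation to the binomial."""
    p = clicks / impressions
    margin = z * math.sqrt(p * (1 - p) / impressions)
    return p - margin, p + margin

# Hypothetical ad set: 120 clicks from 10,000 impressions.
low, high = ctr_confidence_interval(120, 10_000)
```

The wider the interval, the less certain the measured rate – which is why small per-ad-set sample sizes push you toward a looser significance level like 80%.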

In all the experiments you should be looking for a lower CPC – a lower bar. The lower the bar, the more effective the ad. For a myth to be confirmed, it has to pass the significance test; if the result is inconclusive or fails the test, there is no statistical difference and the myth is busted. Now let's get to busting some Facebook Ad myths!

Myth: Ads must be relevant

Facebook ad guidelines directly instruct you to use relevant images. For example, SketchDeck would use ad images that feature technology or design, while GoPro would use images of extreme sports or the outdoors.

We are a design company, so our relevant images naturally relate to design. We contrasted these with four irrelevant images.

Relevant:

Examples of ads we ran

Irrelevant:

Examples of ads we ran

Myth busted

Irrelevant ads performed better!

Chart of CPC on Ads

The irrelevant ads had a 6% lower CPC than the relevant ads, and the difference was statistically significant.

Why is this?

  • We believe that the beauty and uniqueness of the images overshadowed their irrelevance. For example, the image of the Golden Gate Bridge from above had a 9% lower CPC than the experiment average, and a 16% lower CPC than some of the relevant ad images.
  • The ability to stand out against other ads and posts also helped the irrelevant images. Some caught the eye immediately with high-contrast, bright colors.
  • The people we targeted may also be overexposed to technology; it is very likely that their feeds are full of tech news and related images.
  • It is important to note that we only measured who clicked through our ads. The irrelevant images could have lower conversion rates on the site, since the people who ended up on the SketchDeck site may not need or care about design.

Myth: Use text on images

The second myth calls for overlaying text on ad images. We found many experts claiming that a call to action or brand statement on the image will make an ad more successful.

This myth had three ad sets. The first was simple, with no text overlaid; the other two featured different brand statements, which can be seen below. To make sure results were influenced only by the text, the same images were used throughout.

Text we used on images

Myth busted!

No text ads performed better!

The images with no text overlaid performed consistently better with a 6-7% lower CPC.

Chart of text vs no-text performance

Why is this?

  • We think the ads with text overlaid looked, well, like ads. Users saw the text or call to action, registered it as an ad and moved on. One of Facebook’s tips is that an ad image should not look out of place in the news feed, and not many users plaster text across the images they post.
  • The images with no text also conveyed a message very effectively. Both screamed that we were a creatively driven company, which helped them achieve a lower CPC on average.
  • It should be noted that as the text size increased, the CPC increased as well.
  • Using a weak call to action could also have driven the CPC up, but with Facebook’s 20% text rule it is hard to create a strong one. The only successful text ads we saw had “Free” or “50% off” as their main text.

Myth: People are better than products/objects

Facebook brand advertising guidelines state that images of people using your product perform better than the product alone. Their reasoning, which makes a lot of sense, is that the ad should not look out of place in a news feed.

Both ad sets were relevant to design and the design process. One set featured objects/products and the other featured happy people using products.

Objects:

Ads we ran in this experiment

People:

Ads we ran in this experiment

What did we find? Limited difference.

We found no discernible difference between people and objects. Both averaged about the same CPC, and statistically speaking there was no difference between the two sets. Therefore the myth is inconclusive!

Chart of experiment results for people vs. objects

Although objects had a lower CPC on average, there was not enough separation to declare a winner, with both sets hovering right around the average CPC of the experiment.

Why is this?

  • We think that objects performed slightly better because they fit the overall message of the ad. Notebooks, calligraphy and Photoshop are usually associated with design work, but the people in the photos could really be doing anything on their computers.
  • The simplicity of the objects could have earned them a better CPC as well; the message of the image is a little quicker to comprehend.
  • And again, the uniqueness of the object images helped them. Two images from the object set had been released only a few days before we started the experiment, while some in the people set are a few years old. (And happen to be from a co-working space I use on occasion.)

Myth: Smiling women beat unhappy men

The myth claims that a smiling woman will lead to more clicks than anything else. But because "smiling women versus everything" was too vague to test, we pitted them against men displaying a range of emotions.

Male:

Ads we ran with male faces

Female:

Ads we ran with female faces

Myth confirmed!

Female images performed better than the male images.

After running the ads we found that the smiling women outperformed the unhappy men, averaging a 6% lower CPC and hovering right around the experiment average.

Female face vs male face ads

That said, 5 out of the 8 ads in this set cost more than the average CPC, with 3 of them among the most expensive ads of the entire experiment. This indirectly confirms our earlier finding that people do not perform better than objects.

Why is this?

  • We really had no idea how this myth would turn out. It kept popping up with a few claims but no data, and it is a strategy that may work great for one niche product but fall flat for others.
  • I think the female images performed better than the male images because they were unique. The male images look like generic tech photos and do not stand out at all, while the female images look like something that fits perfectly in a Facebook or Instagram feed.
  • The female images were some of the hardest stock images to find, which attests to their uniqueness. The male images I know I have seen on other websites and ads – I literally just saw one of them on my Facebook feed a few hours ago.
  • So this may be another case of uniqueness and beauty beating out bland and overused images.

Myth: Don’t use your logo

Our fifth myth is all about logos, something we know a lot about already. The myth states that using a logo on ads leads to fewer clicks. It is a fairly easy concept to test: a logo could make an ad look out of place in the news feed or sidebar.

There were sets with a large and a small SketchDeck logo, and one set with no logo at all. We used the same background images throughout to make the logo the main focal point.

Ads with our logo experiment

Myth confirmed…Kinda!

Small logos are better than large logos! But the no logo set performed more consistently.

Logos are not a huge detriment to ads as long as they are small and discreet, so the myth is confirmed, kinda. The small logo set had the lower cost per click on average of the two logo sets, but the no-logo set still outperformed the large logo set.

Results of including our logo in ads chart

The small logo set had an 8% lower CPC on average than the large logo set, and when the outliers were excluded the CPC was 14% lower. The small logo set also included the most popular ad of the entire experiment, which is one of my favorite pieces of stock photography and can be seen below:

Best performing ad

Why is this?

  • We believe the large logo set performed so poorly because it, too, looked like an ad. The logo was the focal point of the entire image and there was no way to avoid it. Just as with the text-overlay myth, people saw it for what it was and moved on.
  • In contrast, the small logo was discreet enough to be ignored or missed completely, which helped it avoid the pitfalls of the large logo set.
  • Additionally, the smaller logo allowed the images behind it to shine, while the large logo masked them and made it hard to find meaning.

Myth: Use ultra simple images

Our final myth states that simple ad images perform better than busy or complex ones. There are two schools of thought here: a simple image may be best when users only have a few seconds to digest it, but a complex image stands out on Facebook.

It was the perfect design myth to test: minimalist vs. eye-catching. The first set, which can be seen below, would stand out against the endless updates and breaking stories of a news feed, but each image takes a while to reveal its true meaning.

The second set of images was simple and featured a single focal point. They are easy to consume but have no real eye-catching aesthetics.

Complex:

Complex image ad experiment

Simple:

Simple image ad experiment

What did we find?

Limited difference

We found that the simple and complex ad images cost about the same per click – the difference was under one cent when the CPCs were compared.

Difference in performance between complex and simple ads chart

Finding that there was no statistically significant difference between the two sets confirmed our conclusion. So we can bust the myth of simple images performing better than complex ones!

There was one simple image with a low cost per click, but the rest cost more than the experiment average. In all, 5 out of the 8 ad images from this myth cost above that average.

Why is this?

  • We really thought this myth would clear up some of our earlier findings, but there was no clear winner to be had. Both sets performed right around the experiment average.
  • It is possible that we went too extreme with each ad set: the simple ads were too boring and the complex images were too busy.
  • Also, neither set conveyed a concrete message. The complex images could easily have been mistaken for a startup or tech publication's, and the simple images could be from an education company.

Conclusion

We entered this experiment very curious and ready to learn. It was as much an exercise for us to learn as it was to help other marketers, and I think we accomplished both of those goals.

First, we confirmed how much design can impact an ad. It is the core of a Facebook Ad and the first thing prospective customers see, yet it is so often relegated to secondary status.

We also learned to approach every ad or test with an open mind. We were pleasantly surprised by the outcomes of the irrelevant ads – never would we have predicted how consistent and cost-effective the results were. The same can be said of the simple and complex images; we were adamant that there would be a clear winner.

Finally, never assume that all the expert advice will work perfectly for your brand or company. Test quickly and test often to find the best options. We were able to set this experiment up in a few hours, and it was far larger than most tests. A simple A/B test will work for most!

But unlike the experts we pulled the myths from, our best practices are based on real data and first-hand research.

To have a successful Facebook ad:

1. Be unique! Avoid popular images; they do not lead to effective ads

2. Create simple but eye catching images that are well designed

3. Make sure your messaging is clear to all target users

4. Use relevant products or objects instead of people

5. Avoid text overlays, but small or discreet logos can work

6. Test everything before you invest long term

For reference, our most popular ads can be seen below:

Ads we ran that worked well

Also our least popular ads can be seen below. They follow an interesting pattern:

Ads we ran that performed poorly

Notes:

To make sure our conclusions were not a fluke, we tested each myth for significance at an 80% significance level to confirm or deny a difference. Because most of the myths involved rejecting the null hypothesis, we thought this was a good way to further support our findings. Although we would have liked to use a higher significance level, the small sample size made it difficult. For more information about significance testing, please check out this great guide from Yale!
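For the curious, here is a minimal sketch of the kind of test this describes – a two-proportion z-test comparing the click-through rates of two ad sets at the 80% level. The click and impression counts are hypothetical, not our actual campaign data:

```python
import math

def two_proportion_z_test(clicks_a: int, impressions_a: int,
                          clicks_b: int, impressions_b: int):
    """Two-proportion z-test: is the click-through rate of ad set A
    different from ad set B? Returns (z statistic, two-sided p-value)."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts. At the 80% significance level we reject the
# null hypothesis (no difference) when the p-value is below 0.20.
z, p = two_proportion_z_test(clicks_a=120, impressions_a=10_000,
                             clicks_b=95, impressions_b=10_000)
significant = p < 0.20
```

Note that a difference passing at 80% would not necessarily pass at the more conventional 95% level – looser thresholds are a trade-off forced by small per-ad-set samples.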

jroberts


Redefine what's possible with SketchDeck.
