Several large tech companies have recently built platforms that claim to educate businesses on how best to market themselves and their products online. Examples include Meta for Business (formerly Facebook for Business; “Get step-by-step advice, industry information, and tools to track your progress, all in one place”), Think with Google (“Go further in your marketing with Google”), and Twitter for Business (“Grow Your Business with Twitter Ads”).
These sites are very attractive. They provide small and medium-sized businesses with a wealth of genuinely useful information on how to do business online, and, of course, they offer a variety of advertising tools and services designed to help those businesses improve their performance.
All of these sites have the same basic purpose. They want you to understand that their tools and services are powerful and highly personalized – and they want you to invest your marketing dollars in them.
Not as easy as it sounds
Facebook is perhaps the pushiest of the three companies listed above. In recent weeks, the company has been running ads telling all kinds of inspiring stories about the small businesses it has helped with its new services. Maybe you’ve seen some of these ads in airports, in magazines, or on websites. My Jolie Candle, a French candle maker, “find[s] up to 80% of their European customers via Facebook platforms.” Chicatella, a Slovenian cosmetics company, “attributes up to 80% of its sales to Facebook apps and services.” Mami Poppins, a German baby clothing supplier, “uses Facebook ads to generate up to half of its revenue.”
It sounds impressive, but should businesses really expect such big effects from advertising? The point is, when Facebook, Google, Twitter, and other big tech companies “educate” small businesses about their services, they are often in fact encouraging incorrect conclusions about the causal effects of advertising.
Take the case of one of our consulting clients, a European consumer goods company that has for many years positioned its brand around sustainability. The company wanted to determine whether an online ad emphasizing convenience could actually be more effective than an ad emphasizing sustainability. With the help of Facebook for Business, it ran an A/B test of the two ads and then compared ad ROI across the two conditions. The test found that ROI was much higher for the sustainability ad. So that’s what the company should invest in, right?
In fact, we don’t know.
There’s a fundamental problem with what Facebook is doing here: the tests it offers under the heading of “A/B tests” are not actually A/B tests at all. This is poorly understood, even by experienced digital marketers.
So what really goes on in these tests? Here is an example:
1) Facebook splits a large audience into two groups – but not everyone in the groups will receive treatment. That is, many people will never see an ad.
2) Facebook starts sampling people from each group and serves a different treatment depending on the group from which a person was drawn. For example, a person sampled from group 1 will see a blue ad, and a person sampled from group 2 will see a red ad.
3) Facebook then uses machine learning algorithms to refine its selection strategy. The algorithm could learn, for example, that young people are more likely to click on the red ad, so it will start showing that ad more to young people.
See what’s going on here? The machine learning algorithm that Facebook uses to optimize ad delivery effectively invalidates the design of the A/B test.
Here is what we mean. A/B testing is built on the idea of random assignment. But are the assignments made in step 3 above random? No. And this has important implications. If you compare the treated people in group 1 with the treated people in group 2, you can no longer draw any conclusions about the causal effect of the treatment, because the treated people in group 1 now differ from the treated people in group 2 on more dimensions than just the treatment. The treated people in group 2 who received the red ad, for example, will end up being younger than the treated people in group 1 who received the blue ad. Whatever this test is, it is not an A/B test.
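The steps above can be sketched in a small simulation. This is a hypothetical illustration, not Facebook’s actual delivery algorithm: we assume a uniform age distribution and a made-up serving rule under which the optimizer shows the red ad more often to younger users. Even though the initial split is random and the two ads are identical in effect, the treated subsets end up demographically different:

```python
import random

random.seed(0)

# Hypothetical population: each user has only one attribute, age.
population = [{"age": random.randint(18, 65)} for _ in range(100_000)]

# Step 1: a genuinely random split into two groups.
group1 = population[::2]   # will be shown the blue ad, if selected
group2 = population[1::2]  # will be shown the red ad, if selected

# Steps 2-3: the delivery algorithm has "learned" that young people click
# the red ad, so it preferentially serves the red ad to younger users.
# (These serving probabilities are invented for illustration.)
def serve_blue(user):
    return random.random() < 0.30                      # uniform delivery

def serve_red(user):
    return random.random() < (0.60 if user["age"] < 30 else 0.15)

treated1 = [u for u in group1 if serve_blue(u)]
treated2 = [u for u in group2 if serve_red(u)]

avg_age1 = sum(u["age"] for u in treated1) / len(treated1)
avg_age2 = sum(u["age"] for u in treated2) / len(treated2)

# The treated subsets now differ systematically, so any revenue gap between
# them confounds the ad creative with the audience the algorithm picked.
print(f"avg age, treated blue: {avg_age1:.1f}")
print(f"avg age, treated red:  {avg_age2:.1f}")
```

Running this, the red-ad treated group comes out several years younger on average, purely as an artifact of optimized delivery, with no causal difference between the ads at all.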
It’s not just Facebook. The Think with Google site suggests that ROI-type metrics are causal, when in fact they are merely associative.
Imagine a business wants to know whether an advertising campaign is effective at increasing sales. Answering this question, the site suggests, takes only a straightforward combination of basic technology and simple math.
First, you set up conversion tracking for your website, which tells you whether customers who clicked on an ad went on to make a purchase. Second, you calculate the total revenue from those customers and divide it by (or subtract from it) your ad spend. That’s your ROI, and according to Google it’s “the most important metric for retailers because it shows the real effect Google Ads has on your business.”
In fact, it is not. Google’s analysis is flawed because it lacks a point of comparison. To really know whether advertising is generating profits for your business, you would need to know what revenue would have been in the absence of the advertising.
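The arithmetic makes the gap concrete. Here is a minimal sketch with entirely made-up figures (the spend, tracked revenue, and counterfactual revenue below are hypothetical, chosen only to show how the two metrics can diverge):

```python
# Hypothetical campaign figures, for illustration only.
ad_spend = 10_000.0
revenue_from_ad_clickers = 50_000.0  # revenue tracked via conversion pixel

# The suggested metric: attributed revenue net of spend, over spend.
naive_roi = (revenue_from_ad_clickers - ad_spend) / ad_spend

# The missing counterfactual: suppose (hypothetically) these same customers
# would have spent 40,000 even without seeing any ads.
counterfactual_revenue = 40_000.0
incremental_revenue = revenue_from_ad_clickers - counterfactual_revenue
true_roi = (incremental_revenue - ad_spend) / ad_spend

print(f"naive ROI: {naive_roi:.0%}")  # 400%: looks like a great campaign
print(f"true ROI:  {true_roi:.0%}")   # 0%: the campaign merely broke even
```

Under these assumed numbers, the naive metric reports a 400% return while the incremental return is zero. The conversion pixel measures association; only a comparison against the no-ads counterfactual measures cause.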
Twitter for Business offers a somewhat more involved proposal.
First, Twitter works with a data broker to access cookies, email addresses, and other identifying information for a brand’s customers. Twitter then adds information on how those customers engage with the brand on Twitter – whether they click on the brand’s promoted tweets, for example. This supposedly allows marketing analysts to compare the average revenue of customers who engaged with the brand to the average revenue of customers who did not. If the difference is large enough, the theory goes, it justifies the ad spend.
This analysis is comparative, but only in the sense of comparing apples and oranges. People who regularly buy cosmetics don’t buy them because they see promoted tweets. They see tweets promoted for cosmetics because they regularly buy cosmetics. Customers who see a brand’s promoted tweets, in other words, are very different people than those who don’t.
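This selection effect can be simulated with invented numbers. In the sketch below we assume, purely hypothetically, that promoted tweets have zero causal effect on spending, but that heavy buyers are far more likely to see and engage with them. The naive engaged-vs.-not-engaged comparison still reports a large revenue gap:

```python
import random

random.seed(1)

# Hypothetical customer base: ads have NO causal effect on spend here.
customers = []
for _ in range(50_000):
    heavy_buyer = random.random() < 0.2
    # Heavy buyers spend a lot regardless of advertising.
    spend = random.gauss(300, 50) if heavy_buyer else random.gauss(50, 20)
    # Engagement with promoted tweets depends on buying habits,
    # not the other way around (assumed rates: 50% vs 5%).
    engaged = random.random() < (0.5 if heavy_buyer else 0.05)
    customers.append((spend, engaged))

engaged_spend = [s for s, e in customers if e]
other_spend = [s for s, e in customers if not e]

avg_engaged = sum(engaged_spend) / len(engaged_spend)
avg_other = sum(other_spend) / len(other_spend)

# The gap reflects who chooses to engage, not what the ads caused.
print(f"avg revenue, engaged:     {avg_engaged:.0f}")
print(f"avg revenue, not engaged: {avg_other:.0f}")
```

Even with a true ad effect of zero, the engaged group’s average revenue comes out several times higher, because the comparison groups were self-selected rather than randomly assigned.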
Companies can answer two types of questions using data: prediction questions (as in “Will this customer buy?”) and causal-inference questions (as in “Will this ad make this customer buy?”). These questions are different but easy to conflate. Answering causal-inference questions requires making counterfactual comparisons (as in “Would this customer have bought without this ad?”). The smart algorithms and digital tools created by big tech companies often present apples-to-oranges comparisons as support for causal inferences.
Big tech companies should be well aware of the distinction between prediction and causal inference and of its importance for the efficient allocation of resources – after all, they have been hiring some of the smartest people on the planet for years. Targeting potential buyers with ads is pure prediction: it doesn’t require causal inference, and it’s easy to do with today’s data and algorithms. Persuading people to buy is much more difficult.
Large tech companies should be commended for the useful materials and tools they make available to the business community, but small and medium-sized businesses should be aware that advertising platforms pursue their own interests when they offer training and educational information, and that those interests may or may not be aligned with their own.
Editor’s Note (12/16): The title of this piece has been updated.