Online Ad Effectiveness
Google is under antitrust heat over the dominance of its online advertising business. The WSJ provides a neat visual summary of how the ad-serving engine works in various scenarios. Steve Tadelis has a contrary take: he challenges the core assumption underlying the whole business, namely that online advertising is effective. His research concludes that advertising mostly intercepts consumers who would have bought anyway rather than creating new customers.
For more than a century, advertising was an art, not a science. Hard data didn’t exist. An advertising guru of the Don Draper type proclaimed: “What you call love was invented by guys like me to sell nylons” — and advertisers could only hope it was true. You put your commercials on the air, you put your brand in the paper, and you started praying. Would anyone see the ad? Would anyone act on it? Nobody knew. In the early 1990s, the internet sounded the death knell for that era of advertising. Today, we no longer live in the age of Mad Men, but of Math Men.
Looking for customers, clicks, conversions? Google and Facebook know where to find them. With unprecedented precision, these data giants will get the right message delivered to the right people at the right time. Unassuming internet users are lured into online shops, undecided voters are informed about the evils of U.S. presidential candidate Elizabeth Warren, and cars zip by on the screens of potential buyers — a test drive is only a click away.
But is any of it real? What do we really know about the effectiveness of digital advertising? Are advertising platforms any good at manipulating us? You’d be forgiven for thinking the answer to that last question is: yes, extremely good. After all, the market is huge. The amount of money spent on internet ads goes up each year. In 2018, more than $273bn was spent on digital ads globally, according to research firm eMarketer. Most of those ads were purchased from two companies: Google ($116bn in 2018) and Facebook ($54.5bn in 2018).
* * * * *
It all started with a surrealistic phone call to a data consultant. Tadelis was a professor of economics at the University of California, Berkeley when, in August 2011, he began a year-long stint at eBay. During one of his first conversations with eBay’s marketing team, they invited him to sit down with their consultants. The consultants could tell him how profitable each eBay ad campaign had been. And since Tadelis was an economist, maybe he’d like to quiz them about their methods.
“Proprietary transformation functions,” one of the consultants had said on the phone when Tadelis reached out. They used proprietary transformation functions, had 25 years of experience, and a long list of prominent clients. When Tadelis pressed them, he realized that “proprietary transformation functions” was only a clever disguise for your garden-variety statistics. You take the weekly expenditure on ads, combine it with the weekly sales, and voilà! Fold the mixture into a scatter plot and see what happens. Easy as that!
“This is garbage,” Tadelis thought. Correlation, as any Statistics 101 class will inform you, is not causation. What do these impressive numbers mean if the people who see your ad are the exact same people who were going to use eBay anyway? eBay is no small fry. Surely lots of people looking for shoes end up on the online auction site all by themselves, whether they see an ad or not? Picture this. Luigi’s Pizzeria hires three teenagers to hand out coupons to passersby. After a few weeks of flyering, one of the three turns out to be a marketing genius. Customers keep showing up with coupons distributed by this particular kid. The other two can’t make any sense of it: how does he do it? When they ask him, he explains: “I stand in the waiting area of the pizzeria.”
Economists refer to this as a “selection effect.” It is crucial for advertisers to distinguish such a selection effect (people see your ad, but were already going to click, buy, register, or download) from the advertising effect (people see your ad, and that’s why they start clicking, buying, registering, downloading). Tadelis asked how exactly the consultants made this distinction.
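The pizzeria story can be made concrete with a toy simulation (all numbers here are hypothetical, chosen only for illustration): if ads are shown mostly to people who were going to buy anyway, the naive "conversion rate among ad viewers" looks spectacular even when the ad itself changes nothing.

```python
import random

random.seed(0)

# Hypothetical population: 10% are "intenders" who will buy with or without ads.
population = [{"intender": random.random() < 0.10} for _ in range(100_000)]

for person in population:
    # Targeting mimics selection: intenders are far more likely to see the ad.
    person["saw_ad"] = random.random() < (0.8 if person["intender"] else 0.05)
    # The ad has zero causal effect in this toy world: only intenders buy.
    person["bought"] = person["intender"]

viewers = [p for p in population if p["saw_ad"]]
others = [p for p in population if not p["saw_ad"]]

rate_viewers = sum(p["bought"] for p in viewers) / len(viewers)
rate_others = sum(p["bought"] for p in others) / len(others)

# The gap between the two rates is pure selection, not advertising.
print(f"conversion among ad viewers:  {rate_viewers:.1%}")
print(f"conversion among non-viewers: {rate_others:.1%}")
```

A naive analyst comparing the two groups would credit the ad with a huge lift; by construction, the true advertising effect here is exactly zero.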
* * * * *
Two weeks later, Tadelis met the marketing consultants in the flesh. The advisers had put together a slick presentation demonstrating how eBay was raking in piles of cash with its brilliant ad campaigns. Tadelis recalled: “I looked around the room, and all I saw were people nodding their heads.” Brand keyword advertising, the presentation informed him, was eBay’s most successful advertising method. Somebody googles “eBay” and for a fee, Google places a link to eBay at the top of the search results. Lots of people, apparently, click on this paid link. So many people, according to the consultants, that the auction website earns at least $12.28 for every dollar it spends on brand keyword advertising — a hefty profit!
Tadelis didn’t buy it. “I thought it was fantastic, and I don’t mean extraordinarily good or attractive. I mean imaginative, fanciful, remote from reality.” His rationale? People really do click on the paid link to eBay.com an awful lot. But if that link weren’t there, presumably they would click on the link just below it: the free link to eBay.com. The data consultants were basing their profit calculations on clicks they would be getting anyway. Tadelis suggested an experiment: stop advertising for a while, and let’s see whether brand keyword advertising really works. The consultants grumbled. When, a few weeks later, Tadelis contacted the consultants about a follow-up meeting, he was told the follow-up had come and gone. He hadn’t been invited.
A few months after the awkward presentation, though, Tadelis got the chance to conduct his experiment after all. There was a clash going on between the marketing department at eBay and the MSN network (Bing and Yahoo!). eBay wanted to negotiate lower prices, and to get leverage decided to stop buying ads for the keyword ‘eBay’. Tadelis got right down to business. Together with his team, he carefully analyzed the effects of the ad stop. Three months later, the results were clear: all the traffic that had previously come from paid links was now coming in through ordinary links. Tadelis had been right all along. Annually, eBay was burning a good $20m on ads targeting the keyword ‘eBay’.
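The logic of the ad-stop test can be summarized in a few lines. The click counts below are hypothetical (the text does not report eBay's actual traffic figures); the point is the comparison: if organic clicks rise by roughly the amount paid clicks fall, the paid traffic was substitution, not acquisition.

```python
# Hypothetical weekly click counts around an ad stop (not eBay's real data).
before = {"paid": 100_000, "organic": 400_000}
after = {"paid": 0, "organic": 498_000}

paid_lost = before["paid"] - after["paid"]
organic_gained = after["organic"] - before["organic"]

# Share of the former "paid" traffic that simply shifted to the free link.
substitution_rate = organic_gained / paid_lost
truly_incremental = 1 - substitution_rate

print(f"substitution rate:  {substitution_rate:.0%}")  # 98%
print(f"truly incremental:  {truly_incremental:.0%}")  # 2%
```

In Tadelis's actual result the substitution was essentially total: paid clicks were replaced one-for-one by organic clicks.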
When Tadelis presented his findings to the company, eBay’s financial department finally woke up. The economist was given a free hand: he was permitted to halt all eBay ads on Google for three months throughout a third of the United States. Not just those for the brand’s own name, but also those targeted to match simple keywords like “shoes”, “shirts” and “glassware”. The marketing department anticipated a disaster: sales, they thought, were certain to drop at least 5%.
Week 1: All quiet.
Week 2: Still quiet.
Week 3: Zip, zero, zilch.
The experiment continued for another eight weeks. What was the effect of pulling the ads? Almost none. For every dollar eBay spent on search advertising, they lost roughly 63 cents, according to Tadelis’s calculations. The experiment ended up showing that, for years, eBay had been spending millions of dollars on fruitless online advertising, and that the joke had been entirely on the company. To the marketing department everything had been going brilliantly. The highly paid consultants had believed that the campaigns that incurred the biggest losses were the most profitable: they saw brand keyword advertising not as a $20m expense, but as a $245.6m return. For Tadelis, it was an eye-opener. “I kind of had the belief that most economists have: businesses are advertising, so it must be good. Because otherwise why would they do it?” He added: “But after my experience at eBay that’s all out of the window.”
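The gap between the consultants' view and the experimental result comes down to which clicks you count. A back-of-the-envelope version, using only the figures quoted in the text:

```python
# Figures from the text.
brand_spend = 20_000_000            # annual spend on "eBay" keyword ads
claimed_return_per_dollar = 12.28   # consultants' estimate, crediting every paid-link click

# The consultants' view: every click on the paid link counts as ad-driven revenue.
claimed_return = brand_spend * claimed_return_per_dollar
print(f"consultants' claimed return: ${claimed_return:,.0f}")  # $245,600,000

# Tadelis's experimental view of search ads overall: ~63 cents lost per dollar spent.
loss_per_dollar = 0.63
print(f"experimental loss on each $1 of search ads: ${loss_per_dollar:.2f}")
```

The same $20m line item reads as a nine-figure windfall or as a money pit, depending entirely on whether selection effects are stripped out.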
* * * * *
Luckily there is a way to measure the unadulterated effect of ads: run an experiment. Divide the target group into two random cohorts in advance: one group sees the ad, the other does not. This design rules out selection effects. Economists at Facebook conducted 15 such experiments, and they showed the enormous impact of selection. A large retailer launched a Facebook campaign. Naively, it looked as if the retailer’s ad only had to be shown 1,490 times before one person actually bought something. But the experiment revealed that many of those buyers would have shopped there anyway; only one in 14,300 found the webshop because of the ad. In other words, the selection effect was almost 10 times stronger than the advertising effect alone! And this was no exception: selection effects substantially outweighed advertising effects in most of these Facebook experiments. At its strongest, the selection bias was a full 50 (!) times more influential. In seven of the 15 Facebook experiments, advertising effects without selection effects were so small as to be statistically indistinguishable from zero.
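The arithmetic behind that "almost 10 times" figure is worth making explicit, using the two numbers reported in the text:

```python
# From the retailer experiment described in the text.
impressions_per_naive_conversion = 1_490    # naive: every buyer who saw the ad is credited to it
impressions_per_causal_conversion = 14_300  # experiment: buyers the ad actually created

naive_rate = 1 / impressions_per_naive_conversion
causal_rate = 1 / impressions_per_causal_conversion

# How much does the naive estimate overstate the true advertising effect?
inflation = impressions_per_causal_conversion / impressions_per_naive_conversion

print(f"naive conversion rate:  {naive_rate:.4%}")
print(f"causal conversion rate: {causal_rate:.4%}")
print(f"naive estimate overstates the ad effect by a factor of {inflation:.1f}")
```

The ratio works out to roughly 9.6, which is where the article's "selection effects were almost 10 times stronger" comes from.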
Referenced In This Post
The new dot com bubble is here: it’s called online advertising
In 2018, $273bn was spent on digital ads globally. We delve into the world of clicks, banners and keywords to find out if any of it is real. What do we really know about the effectiveness of digital advertising?