Why no one can define “premium” inventory

By Eric Picard (Originally published on June 17th, 2013)

What is premium inventory? The simple answer is that it’s inventory that the advertiser would be happy to run its advertising on if it could manually review every single publisher and page that the ad was going to appear within.

When buyers make “direct” media buys against specific content, they get access to this level of comfort, meaning that they don’t have to worry about where their media dollars end up being spent. But this doesn’t scale well across more than a few dozen sales relationships.

To address this problem of scale, buyers extend their media buys through ad networks and exchange mechanisms. But in this process, they often lose control over where their ads will run. Theoretically the ad network acts as a proxy for the buyer, providing the "curation" of the ad experience that the buyer needs, but in practice this is rarely the case. Ad networks don't actually have the technology to curate the advertising experience at scale (i.e., to monitor the quality of the publishers and pages they place advertising on) any more than the media buyer does, which leads to frequent problems of low-quality inventory on ad networks.

Now apply this issue to the new evolution of real-time bidding and ad exchanges. A major problem with buying on exchanges is that the curation problem gets dropped back in the laps of the buyers, across more publishers and pages than they can manually review, which requires a whole new set of skills and tools. Those skills aren't there yet, and the various systems providers haven't handled the problem well. So the agencies build out trading desks where that skillset is supposed to live, but as all the recent articles on fraud suggest, the quality of the results is highly suspect.

So the true answer to this conundrum of what is premium must be to find scalable mechanisms to ensure that a brand’s quality goals for the inventory it is running advertising against are met.
The market needs to be able to efficiently execute media buys against high-quality inventory at media prices that buyers are comfortable paying — if not happy to pay.

The definition of “high quality” is an interesting problem with which I’ve been struggling. Here’s what I’ve come up with: Every brand has its own point of view on “high quality” because it has its own goals and brand guidelines. A pharma advertiser might want to buy ad inventory on health websites, but it might want to only run on general health content, not content that is condition specific. Or an auto advertiser might want to buy ad inventory on auto-related content, but not on reviews of automobiles.
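To make this concrete: one way to think about brand-specific quality guidelines is as per-brand allow/block rules over content categories. The sketch below is purely illustrative, with invented brand names, a made-up category taxonomy, and a hypothetical `is_acceptable` helper; real systems would work against a standardized content taxonomy and far richer signals.

```python
# Hypothetical per-brand inventory rules expressed as content-category filters.
# Brand names and category labels are invented for illustration.

BRAND_RULES = {
    "pharma_brand": {
        "allow": {"health/general"},
        "block": {"health/condition-specific"},
    },
    "auto_brand": {
        "allow": {"auto"},
        "block": {"auto/reviews"},
    },
}

def is_acceptable(brand: str, page_categories: set) -> bool:
    """A page qualifies if it hits an allowed category and no blocked one."""
    rules = BRAND_RULES[brand]
    if page_categories & rules["block"]:
        return False
    return bool(page_categories & rules["allow"])

# The pharma brand accepts general health content...
print(is_acceptable("pharma_brand", {"health/general"}))  # True
# ...but rejects a page also tagged with condition-specific content.
print(is_acceptable("pharma_brand",
                    {"health/general", "health/condition-specific"}))  # False
# The auto brand rejects automobile reviews even on auto-related pages.
print(is_acceptable("auto_brand", {"auto", "auto/reviews"}))  # False
```

The point of the sketch is that the same page can be acceptable to one brand and unacceptable to another, which is why "premium" resists a single market-wide definition.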

Most brands obviously want to avoid porn, hate speech, and probably gambling pages — but what about content that is very cluttered with ads or where the page layout is so ugly that ads will look like crap? Or pages that are relatively neutral — meaning not good, but not horrible?

Then we run into a problem that nobody has been willing to bring up broadly, but it’s one that gets talked about all the time privately: Inventory is a combination of publisher, page, and audience.

How are we defining audience today? There's blended data such as comScore or Nielsen data, which use methodologies that are in some cases vetted by third parties, but relatively loosely. There's first-party data such as CRM, retargeting, or publisher registration data, which varies broadly in quality for many reasons but is generally well understood by the buyer and the seller. And there's third-party data from data companies. But frankly, nobody is rating the quality of this data. Even on a baseline level, there are no neutral parties evaluating the methodology from a data-sciences point of view to validate that the method is defensible. And as importantly, there is no neutral party measuring the accuracy of the data quantitatively (e.g., a data provider says that this user is from a household with an income above $200,000, but how have we proven this to be true?).
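One way to make that quantitative-accuracy question concrete is to compare a provider's claimed attributes against a verified truth set (for example, a survey panel) and report the match rate. This is only a sketch of the idea; the user IDs, income bands, and the `claim_accuracy` helper are all invented for illustration.

```python
# Hypothetical check of a data provider's audience claims against a
# verified truth set. All names and values are made up.

def claim_accuracy(claims: dict, truth: dict) -> float:
    """Fraction of the provider's claims that match verified values,
    measured only over users present in the truth set."""
    overlap = [u for u in claims if u in truth]
    if not overlap:
        return 0.0
    correct = sum(1 for u in overlap if claims[u] == truth[u])
    return correct / len(overlap)

# Provider claims household-income bands for four users...
claims = {"u1": "200k+", "u2": "200k+", "u3": "<200k", "u4": "200k+"}
# ...and a verified panel knows the true band for three of them.
truth = {"u1": "200k+", "u2": "<200k", "u3": "<200k"}

# Two of the three overlapping claims are correct.
print(claim_accuracy(claims, truth))
```

The harder problem the column points at is institutional, not computational: no neutral party currently builds and maintains such a truth set, so there is nothing defensible to score the claims against.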

When we talk about currency in this space, we accept whatever minimum bar that the industry has laid down as truth via the Media Rating Council, hold our nose, and move forward. But we’ve barely got impression guidelines that the industry is willing to accept, let alone all of these other things like page clutter and accuracy of audience data.

And even more importantly, nobody is looking at all the data (publisher, page, audience) from the point of view of the buyer. As we discussed above, every buyer, and potentially every campaign for every brand, will view quality differently. Because measuring quality is in direct competition with the goal of getting budgets spent efficiently (what some might call scale), nobody wants to talk about this problem. After all, if buyers start getting picky about inventory quality on any dimension, the worry is that the pool of inventory available to them shrinks. Brand safety, inventory quality, and related issues should be handled as a policy matter separate from media buying, because the minimum quality bar should not be negotiable based on scale. Running ads on low-quality sites is a bad idea from a brand perspective, and that line shouldn't be crossed just to hit a price or volume number.

So instead we talk about the issue sitting in front of our nose that has gotten some press: fraud. The questions that advertisers are raising about our channels center around this concern. But the advertisers should be asking lots of questions about the broader issue — which is, “How are you making sure that my ads are running on high-quality inventory?” Luckily there are some technologies and services on the market that can help provide quality inventory at scale, and this area of product development is only going to get better over time.