
Programmatic’s place at the top of the marketing funnel

By Eric Picard (Originally Published in iMedia – October 11, 2014)

For decades, modern marketers have built detailed marketing plans grounded in careful analysis of target audiences. Often, before products are even designed, significant market research has been conducted and applied to the product or service development process.

When a brand decides to spend millions of dollars to create a product or service, it typically then spends tens to hundreds of thousands of dollars on market research and product planning to get ready to launch it.  And then hundreds of thousands to millions of dollars to market the product.

Most of that market research and product strategy folds over into the marketing plan. And as part of that process, typically very detailed marketing personas are created — sometimes a handful, sometimes more than a dozen. These marketing personas are built into the marketing plan and drive many of the media mix decisions that are used to divvy up budget among channels. And often they do get distributed to the media agency as part of the marketing plan’s translation into media planning and strategy.

But in my experience, it is fairly common that by the time the media buyer gets the media plan from the planners, the marketing personas have been stripped off. And this is even more true when we bring programmatic media into view. As an example, consider a conversation I had this past year with a media buyer at a major trading desk.

This trading desk handles the media buying for a major home improvement retailer. And when I talked with the trading desk buyer about how the company approaches this customer’s media buys through its DSP partner, the buyer looked a little puzzled. To that person, it was about only two things:

  • Buying the “home improvement” segment
  • Setting the rest of the budget to optimize spend against CPA on its web pages and letting the DSP figure the rest out

The problem with this approach is that it’s extremely one dimensional — and loses much of the value that exists within the systems used. It’s like using an F-16 to commute to work. Or an aircraft carrier to run to the store.

I haven’t seen the marketing plan for the client, but I can imagine (having seen a lot of them over the years) that the retailer has several different personas. I’ll make up a few that probably exist in some form, and explain how I’d have approached the campaign using a DSP.

Persona 1: Reggie is a 28-year-old single male who lives in a major metropolitan area in a condo that he owns. He makes more than $50,000 a year and mostly shops at the client’s stores to buy décor items, fans, DIY project materials, and probably will buy things like air conditioners, painting supplies, hand tools, etc.

Personas 2 and 3: Sophie is a 35-year-old stay-at-home mother who lives in the suburbs of a major metropolitan area and is married to Tim, a 35-year-old executive who works in the city and commutes. Together they own a house that is more than 4,000 square feet and has at least half an acre of land. Tim is a weekend DIY warrior, who takes on various home improvement projects. He’s likely to take on light construction projects, buying building materials, painting materials, plumbing and electrical, and lots of landscaping tools such as riding mower, blowers, etc. Sophie is an avid gardener who buys numerous plants and gardening materials, and takes frequent courses on design and gardening at the store.

Persona 4: Arthur is 65 years old. He is retired, lives in a modest home in the suburbs, which he owns outright. He is in the process of getting ready to sell the house as he and his wife are looking to move to a smaller place or a retirement community. But he has three adult children who own homes nearby, and he frequently putters and does projects around their houses. He’s likely to buy building and painting materials.

Although I just made up these personas, they’re fairly typical of the kinds of personas I have seen over my career — if anything, they’re a bit light. Additional information that would typically accompany a persona includes the numbers of each of these personas that exist in each DMA in the U.S., perhaps even broken down by ZIP code within each DMA. And then marketing teams typically will use whatever tools are at their disposal to begin matching against mechanisms like PRIZM clusters and do some media mix modeling about how to reach these audiences.

At the handoff to media agency partners for digital media, the planners at that point begin using various tools to determine what sites have traffic that matches their target audiences, and an overall media plan and strategy is devised.

Once the plan is handed off to media buyers and their trading desk partners, the thinking is usually quite distilled. Buyers going directly to publishers will send over an RFP that simplifies the media plan (they may also include the plan itself) and then wait to hear back regarding what inventory is available. The trading desk partners typically decide what audience attributes align against available data segments for their goals.

Now let’s go back to the example I used above about the trading desk with a major client in the home improvement retail space. Given its customer personas, I’d have recommended a few other ways to engage and find audiences.

Perhaps it could target users who own homes of a certain size or homeowners who have been in their home for a certain number of years. It could target each of these segments by age and geography. It could differentiate both creative and offer by each of these. It could vary what products to highlight in its advertising based on some of the criteria, such as age, gender, and other elements. It could target households with children differently than households with adult children not living in the home. It could even target based on the age of children, assuming parents of college age students might be moving kids into apartments or dorms at the end of summer or fall. Or it could target urban apartment dwellers with fans in the summer and suburban homeowners with leaf blowers in the early fall, snowblowers in the late fall, and lawnmowers in the early spring.

In programmatic, we far too often fall into the trap of feeding only the portion of the purchase funnel that is focused on CPA at low costs of media plus data. As a market, we need to expand how we see programmatic media and really try to dig into the market for data and the use of sophisticated DSP platforms.


The fundamental disconnect between buyers and sellers

By Eric Picard (Originally Published on iMedia – November 20, 2013)

If we break down the way that buyers and sellers view the world from an advertising perspective, the buyer wants to reach a specific audience on quality publications. And the seller wants to sell as much inventory as possible at the highest price.

To these ends, each party has built its own set of processes, technologies, and methodologies. Historically, media buyers would come up with a plan for reaching ideal target audiences, identify publishers that match brand goals and have access to the target audiences, and then send RFPs over to those publishers. Once buyers passed along the RFP, control was largely out of their hands. Buyers could say yes or no to things, they could ask for clarification, and they could negotiate price. But exactly which audience they reach or what pages their ads land on has not been in their control; that has rested with the publisher’s sales, account, and operations teams.

Publisher sales organizations, meanwhile, have spent an immense amount of time and effort coming up with methods of “packaging” inventory to ensure the most sales at the highest price. They have created significantly complex packages that combine highly desirable inventory aligned to an RFP with less aligned, less desirable inventory that the buyer must take in order to get the inventory it really wants.

In conversations with media buyers, I’ve been told that they see their job as “forcing publishers to blow up packages and unbundle the bad stuff from the good stuff.” This tension between buyer and seller can be quite intense — because their goals are generally not seen as aligned. There is a problem of “information asymmetry” in this world, meaning that publishers have all the information about both the buyer’s goals and the publisher’s own inventory and audiences. Ultimately they package that inventory without much input from the buyer other than the original RFP and media plan. Buyers have very little information in this world and rely on the publisher to interpret the buyer’s goals properly and to deliver what they’ve agreed to.

Over in RTB land, media buyers have much more control. In this world, the “information asymmetry” goes in the other direction. Within a DSP or other buying tool, the media buyers specify the audiences they want to reach and the kinds of inventory that are acceptable — even down to creation of a white list of which publishers are acceptable. They use inventory quality vendors, verification vendors, data providers, and all sorts of techniques to gain control over the buying process.

In this world, publishers add very little value (basically none) to the buying process, and they exist with absolute data asymmetry. Not only do they not know why their inventory is being bought (they don’t get an RFP or media plan), but they also often don’t even know who is buying their inventory. They maintain very little control over the selling process in this circumstance, which rightly makes them nervous about RTB.

As the technologies and markets evolve, a new process needs to be developed where publishers and buyers can collaborate. This process must allow publishers to gain insight into the goals of the buyers such that they can make good decisions about where to invest in building content — content that attracts the kinds of audiences that buyers want to reach. And buyers need access to data that publishers have about their audiences (which they don’t normally make available to generic ad exchange buys) that can be bound together with inventory via private exchanges or even programmatic direct technologies. So between the buyer and the seller, we can come together with a strong handshake that drives the right kind of symmetry of information — one that drives the right business outcomes for everyone involved.

Enterprise Adoption Of Ad Tech Will Supercharge The Market

By Eric Picard (Originally published on AdExchanger 11/5/2013)

The appetite for ad technology is just beginning to spread to new markets in new ways. Expect to see significant growth in the sector over the next five years as marketers and large publishers invest in technology at a scale we’ve never seen.

The context for this shift: Ad technology is moving from a marketing or sales and operations expense to an enterprise-level IT investment. We’re now seeing very significant interest in this space by CIOs and CTOs at major corporations – beyond what we’ve seen in the past, which mainly came from the “digital native” companies, such as Google, eBay, Amazon, Yahoo, Facebook and Microsoft. Now this is becoming much more mainstream.

Historically, digital media was a very small percentage of advertising spending for large advertisers, and a small percentage of revenue for large, traditional media publishers.  But in the last two years, we have passed the tipping point. Let’s handle the two areas separately – starting with the marketer.

Marketers

First, let’s call the marketer by a slightly different name: the enterprise.

Large corporations, or enterprises, have invested massive amounts of money in IT over the last 30 years. Every major function within the enterprise has been through this treatment – from HR to supply chain, finance, procurement and sales to internally driven traditional direct marketing (the intersection of CRM and direct-marketing channels, such as mailing lists and even email marketing).

The great outlier here has been the lack of investment in advertising, which mainly has been driven by the fact that advertising is managed for the most part by agencies. Most marketing departments have allowed their media agency partners to take on the onus of sorting out how to effectively and efficiently spend their marketing budgets. And up until the past few years, digital marketing was a small percentage of spending for most major marketers.

Since there really hasn’t been much value in investing in advertising technology at the enterprise level for marketers on the traditional side, there was little driving change here. But as the percentage of the marketing budget on digital advertising has grown, and as the value of corporate data to digital advertising has grown, a significant shift in thinking has taken place.

Now we’ve got a way, through the RTB infrastructure – and, ultimately, through all infrastructure in the space – to apply the petabytes of corporate data that these companies own to drive digital advertising right down to the impression level. And we have mature infrastructures, bidders, delivery systems, third-party data and data pipelines, and mature technology vendors that can act on all this. None of this existed five years ago at scale.

Publishers

Just as the large marketers are enterprises, so are the large media companies that own the various online and offline publications that create advertising opportunities.

Until the last few years, the very largest of the traditional publishing conglomerates were still not paying much attention to digital media since it was a tiny fraction of overall revenue. But over the last few years there has been a significant shift as executives finally realized that, despite the lack of revenue from digital as a channel, digital media is experiencing explosive growth from a distribution standpoint. And ultimately all the traditional distribution channels – from print to television to radio – are being subsumed into the digital channel.

You need to look no further than the people who have been hired into the major media companies in the last few years with titles like VP of revenue platforms, GM of programmatic and trading, director of programmatic advertising and VP of yield operations. These senior positions didn’t exist at these companies two years ago, and generally were roles reserved for the digital natives.

The fact that we’re seeing new focus on digital media, with both senior roles and significant investments in people and technology, means that we’re likely to see additional significant investment by these media enterprises over the next few years. I expect to see the shift happen here quickly since the consulting companies upon which they and most enterprises rely to lead these initiatives already have media and entertainment practices.

Suddenly major advertisers and publishers – who are all major enterprises – are looking at the opportunity to apply their significant IT expertise to marketing in a new way. So let’s talk about the way that IT evolved in other channels historically to try to understand what’s about to happen here.

The Evolution Of IT

A major corporation will typically hire large consulting firms with a vertical practice in the area they want to modernize. Note that the biggest consulting firms – we’ll use IBM and Accenture as examples here – have developed vertical practices around nearly every department, large initiative or focus area within an enterprise. Also note that wherever these consulting firms step in to build a practice, they assemble a recommended “stack” of technologies that can be integrated together and create a customized solution for the enterprise. One interesting thing: In nearly every case, there are significant open-source software components that are used within these “stacks” of technology.

When we look carefully at where they’ve developed practices that smell anything like marketing, they’re typically assembled around big data and analytics. There are obvious synergies between all the other vertical practices they’ve created and the intersection of using big data to inform marketing decisions with analytics, based on detailed analysis of other corporate data. So this isn’t a surprise. It also isn’t shocking that there are many major open-source software initiatives around big data, ranging from staples such as Hadoop to startups like MongoDB.

But nowhere in the digital advertising landscape do we see major open source initiatives. Instead we see the massively complex Lumascape ecosystem map, with hundreds of companies in it.

So when we look at the shift to enterprise IT for digital marketing, there are plenty of companies to plug into a “stack” of technologies and build a practice around. But there is very little in the way of open source, and no clear way to actually bind together all the vendors into a cohesive stack that can be used in a repeatable and scalable fashion.

We are seeing some significant consulting firms come into existence in this space, including Unbound Company and 614 Group. I’m certain we’ll see the big players enter the fray as they sniff out opportunity.

When will digital take over traditional media?

By Eric Picard (Originally published on iMediaConnection.com, September 12, 2013)

In 2005 I worked on a project to map the infrastructure used for all traditional media advertising and determine if there was an opportunity to inject the new modern infrastructure of online advertising into the mix. This was a broad look at the space — with the goal to see if any overlap in the buying or selling processes existed at all and if there was a way to subtly or explicitly alter the architecture of online advertising platforms to drive convergence.

If you think about it, this is kind of a no-brainer. Delivering tens or hundreds of billions of ads a day in real time with ad delivery decisions made in a few milliseconds is much harder than getting the contracts signed and images off to printing presses (print media) or ensuring that the video cassettes or files are sent over to the network, broadcaster, or cable operator by a certain deadline. And the act of planning media buys before the buying process begins isn’t very different between traditional media and digital.

I went and interviewed media planners and buyers who worked across media. I talked to publishers in print, TV, radio, out-of-home, etc. And I went and talked to folks at the technology vendor companies who supported advertising in all of these spaces. It was clear to me that converging the process was possible, and as I looked at how the various channels operated, it was also clear that they’d benefit significantly from a more modern architecture and approach.

But in 2005, the idea of digital media technologies and approaches being used to “fix” traditional media was clearly too early. It would be like AOL buying Time Warner…Oh yeah, that happened. In any case, the idea of getting traditional folks to adopt digital media ad technology in 2005 was simply ludicrous.

And despite progress, and clearly superior technical approaches in digital (if lower revenue from the same content due to business model differences), there’s little danger of traditional and digital media ad convergence in the near term. This is actually a real shame because digital media now is stepping into a real renaissance from an advertising technology perspective.

Programmatic media buying and selling is clearly the future of digital, and I believe it will extend into traditional as well. And within programmatic, RTB is a clear winner (although not the only winner) in the space. The value proposition of RTB for the buyer is incredibly strong. Buyers get to deliver ads only to the specific audiences they desire and on the specific publishers (or groups of publishers) they want their ads associated with. While RTB is still mostly used for remnant media monetization, this is changing very fast.

Television is the obvious space to adopt digital media ad technology, and with terms like “Digital Broadcast,” “Digital Cable,” “IPTV,” and others, it would seem on the surface that we’re moments away from RTB making the leap from online display ads and digital video to television.

That’s not quite the case. While great strides are being made in executing on targeted television buys by fantastic companies like Simulmedia, Visible World, and others, this space is still not quite ready to make the transition to real-time ad delivery (what we think of as ad serving in the online space) at large, let alone RTB.

This is because the cable advertising industry is hamstrung by an infrastructure that is designed for throughput and scale of video delivery, which was absolutely not designed with the idea of real-time decisions at the set-top-box (STB) level in mind. Over the years we’ve seen video on demand (VOD) really take off for cable, but even there, where the video content is delivered via a single stream per STB, they didn’t design the infrastructure around advertising experiences. Even the newer players with more advanced and modern infrastructures and modern-sounding names like IPTV, such as Verizon’s FIOS solution, haven’t built in the explicit hooks and solutions needed to support real-time ad delivery decisions across all ad calls. That basically means that for the vast majority of ads, there’s no targeting whatsoever.

Some solutions like Black Arrow and Visible World have done the work to drop themselves into the cable infrastructure for ad delivery, but nobody has seen massive adoption at a scale that would let something happen at the national level. And the cable industry’s internally funded advanced advertising initiative — The Canoe Project — laid off most of its staff last year and has focused on delivering a VOD Clearinghouse to get VOD to scale across cable operators. So in 2013, we’re still not to the point where dynamic video advertising can be delivered on any television show during its broadcast, and even VOD doesn’t yet have a way to easily, cohesively, and dynamically deliver video advertising — let alone providing an RTB marketplace.

On the non-RTB side of programmatic buying and selling, I think we’ll see a lot of progress in traditional media. Media Ocean has been doing its own flavor of programmatic for quite some time. In fact, the Media Ocean name of the post-merger company was a product name within the Donovan Data Systems (DDS) portfolio — a product that helped bind together the DDS TV buying product with a television network selling product and allowed buyers and sellers to transact on insertion orders programmatically for spot television. With Media Ocean’s new focus on digital media (which is getting rave reviews from folks I’ve talked to who have seen it), there’s little doubt in my mind that these products will extend over to the traditional side of the market and ultimately replace (or be the basis of new versions of) the various legacy products that allowed DDS to dominate the media buying space for decades.

If our industry can get to the point where media buys across traditional and digital share a common process right up until the moment they diverge from a delivery perspective, I think the market overall will make great headway. And I’m bullish on this — I think we’re not far away, but it won’t happen this year.

Life after the death of 3rd Party Cookies

By Eric Picard (Originally published on AdExchanger.com July 8th, 2013)

In spite of plenty of criticism by the IAB and others in the industry, Mozilla is moving forward with its plan to block third-party cookies and to create a “Cookie Clearinghouse” to determine which cookies will be allowed and which will be blocked.  I’ve written many articles about the ethical issues involved in third-party tracking and targeting over the last few years, and one I wrote in March — “We Don’t Need No Stinkin’ Third-Party Cookies” — led to dozens of conversations on this topic with both business and technology people across the industry.

The basic tenor of those conversations was frustration. More interesting to me than the business discussions, which tended to be both inaccurate and hyperbolic, were my conversations with senior technical leaders within various DSPs, SSPs and exchanges. Those leaders’ reactions ranged from completely freaked out to subdued resignation. While it’s clear there are ways we can technically resolve the issues, the real question isn’t whether we can come up with a solution, but how difficult it will be (i.e. how many engineering hours will be required) to pull it off.

Is This The End Or The Beginning?

Ultimately, Mozilla will do whatever it wants to do. It’s completely within its rights to stop supporting third-party cookies, and while that decision may cause chaos for an ecosystem of ad-technology vendors, it’s completely Mozilla’s call. The company is taking a moral stance that’s, frankly, quite defensible. I’m actually surprised it’s taken Mozilla this long to do it, and I don’t expect it will take Microsoft very long to do the same. Google may well follow suit, as taking a similar stance would likely strengthen its own position.

To understand what life after third-party cookies might look like, companies first need to understand how technology vendors use these cookies to target consumers. Outside of technology teams, this understanding is surprisingly difficult to come by, so here’s what you need to know:

Every exchange, Demand-Side Platform, Supply-Side Platform and third-party data company has its own large “cookie store,” a database of every single unique user it encounters, identified by an anonymous cookie. If a DSP, for instance, wants to use information from a third-party data company, it needs to be able to accurately match that third-party cookie data with its own unique-user pool. So in order to identify users across various publishers, all the vendors in the ecosystem have connected with other vendors to synchronize their cookies.

With third-party cookies, they could do this rather simply. While the exact methodology varies by vendor, it essentially boils down to this:

  1. The exchange, DSP, SSP or ad server carves off a small number of impressions for each unique user for cookie synching. All of these systems can predict pretty accurately how many times a day they’ll see each user and on which sites, so they can easily determine which impressions are worth the least amount of money.
  2. When a unique ID shows up in one of these carved-off impressions, the vendor serves up a data-matching pixel for the third-party data company. The vendor places its unique ID for that user into the call to the data company. The data company looks up its own unique ID, which it then passes back to the vendor with the vendor’s unique ID.
  3. That creates a lookup table between the technology vendor and the data company so that when an impression happens, all the various systems are mapped together. In other words, when it encounters a unique ID for which it has a match, the vendor can pass the data company’s ID to the necessary systems in order to bid for an ad placement or make another ad decision.
  4. Because all the vendors have shared their unique IDs with each other and matched them together, this creates a seamless (while still, for all practical purposes, anonymous) map of each user online.
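
To make that handshake concrete, here is a minimal Python sketch of the lookup table a vendor might keep. All IDs, hostnames, and function names are invented for illustration; real sync implementations work through pixel redirects and server-to-server calls rather than in-process function calls.

```python
# Hypothetical sketch of the cookie-sync handshake described above.
# Vendor and data-company IDs and endpoints are made up for illustration.

# The vendor's side: a lookup table mapping its own user ID to the
# data company's user ID, built up one sync pixel at a time.
vendor_to_data_co = {}

def sync_pixel_url(vendor_user_id):
    """Step 2: on a low-value impression, fire a match pixel that
    carries the vendor's ID to the data company."""
    return f"https://match.example-data-co.com/sync?vendor_id={vendor_user_id}"

def receive_match(vendor_user_id, data_co_user_id):
    """Step 3: the data company calls back with its own ID for the same
    browser, and the vendor stores the pair in its lookup table."""
    vendor_to_data_co[vendor_user_id] = data_co_user_id

def data_co_id_for(vendor_user_id):
    """Step 4: at decision time, translate the vendor's ID into the data
    company's ID so its segments can be applied to the impression."""
    return vendor_to_data_co.get(vendor_user_id)

# Example flow for one user
receive_match("v-123", "d-987")
print(data_co_id_for("v-123"))  # -> "d-987"
```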

All of this depends on the basic third-party cookie infrastructure Mozilla is planning to block, which means that all of those data linkages will be broken for Mozilla users. Luckily, some alternatives are available.

Alternatives To Third-Party Cookies

1)  First-Party Cookies: First-party cookies also can be (and already are) used for tracking and ad targeting, and they can be synchronized across vendors on behalf of a publisher or advertiser. In my March article about third-party cookies, I discussed how this can be done using subdomains.

Since then, several technical people have told me they couldn’t use the same cross-vendor-lookup model, outlined above, with first-party cookies — but generally agreed that it could be done using subdomain mapping. Managing subdomains at the scale that would be needed, though, creates a new hurdle for the industry. To be clear, for this to work, every publisher would need to map a subdomain for every single vendor and data provider that touches inventory on its site.
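
As a rough illustration of what that mapping burden looks like, here is a hypothetical sketch (publisher and vendor hostnames are made up). In practice each publisher-owned subdomain would point at the vendor through DNS, so the cookies the vendor sets are scoped to the publisher's domain and count as first-party.

```python
# Illustrative only: hypothetical publisher and vendor hostnames.
# Each vendor gets a publisher-owned subdomain that resolves (via DNS)
# to the vendor's servers, making the vendor's cookies first-party.
subdomain_map = {
    "dmp.examplepublisher.com":     "collect.example-dmp.com",
    "ads.examplepublisher.com":     "serve.example-adserver.com",
    "measure.examplepublisher.com": "px.example-verification.com",
}

def first_party_host(vendor_host):
    """Find the publisher-owned subdomain mapped to a given vendor."""
    for pub_host, v_host in subdomain_map.items():
        if v_host == vendor_host:
            return pub_host
    return None

print(first_party_host("collect.example-dmp.com"))  # -> "dmp.examplepublisher.com"
```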

So there are two main reasons that switching to first-party cookies is undesirable for the online-ad ecosystem:  first, the amount of work that would need to be done; second, the lack of a process in place to handle all of this in a scalable way.

Personally, I don’t see anything that can’t be solved here. Someone needs to offer the market a technology solution for scalable subdomain mapping, and all the vendors and data companies need to jump through the hoops. It won’t happen in a week, but it shouldn’t take a year. First-party cookie tracking (even with synchronization) is much more ethically defensible than third-party cookies because, with first-party cookies, direct relationships with publishers or advertisers drive the interaction. If the industry does switch to mostly first-party cookies, it will quickly drive publishers to adopt direct relationships with data companies, probably in the form of Data Management Platform relationships.

2) Relying On The Big Guns: Facebook, Google, Amazon and/or other large players will certainly figure out how to take advantage of this situation to provide value to advertisers.

Quite honestly, I think Facebook is in the best position to offer a solution to the marketplace, given that it has the most unique users and its users are generally active across devices. This is very valuable, and while it puts Facebook in a much stronger position than the rest of the market, I really do see Facebook as the best voice of truth for targeting. Despite some bad press and some minor incidents, Facebook appears to be very dedicated to protecting user privacy – and also is already highly scrutinized and policed.

A Facebook-controlled clearinghouse for data vendors could solve many problems across the board. I trust Facebook more than other potential solutions to build the right kind of privacy controls for ad targeting. And because people usually log into only their own Facebook account, this avoids the problems that have hounded cookie-based targeting related to people sharing devices, such as when a husband uses his wife’s computer one afternoon and suddenly her laptop thinks she’s a male fly-fishing enthusiast.

3) Digital Fingerprinting: Fingerprinting, of course, is as complex and as fraught with ethical issues as third-party cookies, but it has the advantage of being an alternative that many companies already are using today. Essentially, fingerprinting analyzes many different data points that are exposed by a unique session, using statistics to create a unique “fingerprint” of a device and its user.

This approach suffers from one of the same problems as cookies, the challenge of dealing with multiple consumers using the same device. But it’s not a bad solution. One advantage is that fingerprinting can take advantage of users with static IP addresses (or IP addresses that are not officially static but that rarely change).
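
As a rough sketch of the idea (not any vendor's actual method), here is a toy fingerprint that hashes a few session attributes into a single identifier; real systems weigh far more signals statistically.

```python
import hashlib

def device_fingerprint(ip, user_agent, accept_language, screen, timezone):
    """Toy fingerprint: combine a handful of session attributes into one ID.
    Real systems use dozens of signals and statistical matching."""
    raw = "|".join([ip, user_agent, accept_language, screen, timezone])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

fp = device_fingerprint(
    ip="203.0.113.42",  # example address; static IPs make fingerprints more stable
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    accept_language="en-US,en;q=0.9",
    screen="1920x1080",
    timezone="UTC-5",
)
print(fp)
```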

Ultimately, though, this is a moot point because of…

4) IPV6: IPV6 is on the way. This will give every computer and every device a static permanent unique identifier, at which point IPV6 will replace not only cookies, but also fingerprinting and every other form of tracking identification. That said, we’re still a few years away from having enough IPV6 adoption to make this happen.

If Anyone From Mozilla Reads This Article

Rather than blocking third-party cookies completely, it would be fantastic if you could leave them active during each session and just blow them away at the end of each session. This would keep the market from building third-party profiles, but would keep some very convenient features intact. Some examples include frequency capping within a session, so that users don’t have to see the same ad 10 times; and conversion tracking for DR advertisers, given that DR advertisers (for a whole bunch of stupid reasons) typically only care about conversions that happen within an hour of a click. You already have Private Browsing technology; just apply that technology to third-party cookies.

Why no one can define “premium” inventory

By Eric Picard (Originally published on iMediaConnection.com on June 17th, 2013)

What is premium inventory? The simple answer is that it’s inventory that the advertiser would be happy to run its advertising on if it could manually review every single publisher and page that the ad was going to appear within.

When buyers make “direct” media buys against specific content, they get access to this level of comfort, meaning that they don’t have to worry about where their media dollars end up being spent. But this doesn’t scale well across more than a few dozen sales relationships.

To address this problem of scale, buyers extend their media buys through ad networks and exchange mechanisms. But in this process, they often lose control over where their ads will run. Theoretically the ad network is acting as a proxy of the buyer in order to support the need for “curation” of the ad experience, but this clearly is not usually the case. Ad networks don’t actually have the technology to handle curation of the advertising experience (i.e., monitoring the quality of the publishers and pages they are placing advertising on) at scale any more than the media buyer does, which leads to frequent problems of low quality inventory on ad networks.

Now apply this issue to the new evolution of real-time bidding and ad exchanges. A major problem with buying on exchanges is that the curation problem gets dropped back in the laps of the buyers across more publishers and pages than they can manually curate, which requires a whole new set of skills and tools. But the skills aren’t there yet, and the problem hasn’t been handled well by the various systems providers. So the agencies build out trading desks where that skillset is supposed to live, but the end results of the quality are highly suspect as we’re seeing from all the recent articles on fraud.

So the true answer to this conundrum of what is premium must be to find scalable mechanisms to ensure that a brand’s quality goals for the inventory it is running advertising against are met. The market needs to be able to efficiently execute media buys against high-quality inventory at media prices that buyers are comfortable paying — if not happy to pay.

The definition of “high quality” is an interesting problem with which I’ve been struggling. Here’s what I’ve come up with: Every brand has its own point of view on “high quality” because it has its own goals and brand guidelines. A pharma advertiser might want to buy ad inventory on health websites, but it might want to only run on general health content, not content that is condition specific. Or an auto advertiser might want to buy ad inventory on auto-related content, but not on reviews of automobiles.

Most brands obviously want to avoid porn, hate speech, and probably gambling pages — but what about content that is very cluttered with ads or where the page layout is so ugly that ads will look like crap? Or pages that are relatively neutral — meaning not good, but not horrible?

Then we run into a problem that nobody has been willing to bring up broadly, but it’s one that gets talked about all the time privately: Inventory is a combination of publisher, page, and audience.

How are we defining audience today? There’s blended data such as comScore or Nielsen data, which use methodologies that are in some cases vetted by third parties, but relatively loosely. There’s first-party data such as CRM, retargeting, or publisher registration data, which will vary broadly in quality based on many issues but are generally well understood by the buyer and the seller. And there’s third-party data from data companies. But frankly, nobody is rating the quality of this data. Even on a baseline level, there are no neutral parties evaluating the methodology used from a data sciences point of view to validate that the method is defensible. And as importantly, there is no neutral party measuring the accuracy of the data quantitatively (e.g., a data provider says that this user is from a household with an income above $200,000, but how have we proven this to be true?).

When we talk about currency in this space, we accept whatever minimum bar that the industry has laid down as truth via the Media Rating Council, hold our nose, and move forward. But we’ve barely got impression guidelines that the industry is willing to accept, let alone all of these other things like page clutter and accuracy of audience data.

And even more importantly, nobody is looking at all the data (publisher, page, audience) from the point of view of the buyer. And as we discussed above, every buyer — and potentially every campaign for every brand — will view quality very differently. Because the skillset of measuring quality is in direct competition with the goal of getting budgets spent efficiently — or what some might call scale — nobody wants to talk about this problem. After all, if buyers start getting picky about the quality of the inventory on any dimension, the worry is that they might reduce the scale of inventory available to them. The issues are directly in conflict with each other. Brand safety, inventory quality, and related issues should be handled as a separate policy matter from media buying, as the minimum quality bar should not be subject to negotiation based on scale issues. Running ads on low-quality sites is a bad idea from a brand perspective, and that line shouldn’t be crossed just to hit a price or volume number.

So instead we talk about the issue sitting in front of our nose that has gotten some press: fraud. The questions that advertisers are raising about our channels center around this concern. But the advertisers should be asking lots of questions about the broader issue — which is, “How are you making sure that my ads are running on high-quality inventory?” Luckily there are some technologies and services on the market that can help provide quality inventory at scale, and this area of product development is only going to get better over time.

Which Type Of Fraud Have You Been Suckered Into?

By Eric Picard (Originally published by AdExchanger.com on May 30th, 2013)

For the last few years, Mike Shields over at Adweek has done a great job of calling out bad actors in our space.  He’s shined a great big spotlight on the shadowy underbelly of our industry – especially where ad networks and RTB intersect with ad spend.

Many kinds of fraud take place in digital advertising, but two major kinds are significantly affecting the online display space today. (To be clear, these same types of fraud also affect video, mobile and social. I’m just focusing on display because it attracts more spending and it’s considered more mainstream.) I’ll call these “page fraud” and “bot fraud.”

Page Fraud

This type of fraud is perpetrated by publishers who load many different ads onto one page.  Some of the ads are visible, others hidden.  Sometimes they’re even hidden in “layers,” so that many ads are buried on top of each other and only one is visible. Sometimes the ads are hidden within iframes that are set to 1×1 pixel size (so they’re not visible at all). Sometimes they’re simply rendered off the page in hidden frames or layers.

It’s possible that a publisher using an ad unit provided by an ad network could be unaware that the network is doing something unscrupulous – at least at first.  But they are like pizza shops that sell more pizzas than it’s possible to make with the flour they’ve purchased. They may be unaware of the exact nature of the bad behavior but must eventually realize that something funny is going on. In the same way, bad behavior is very clear to publishers who can compare the number of page views they’re getting with the number of ad impressions they’re selling.  So I don’t cut them any slack.

This page fraud, by the way, is not the same thing as the “viewability” problem, which involves below-the-fold ads that never render visibly on the user’s page. Page fraud is perpetrated by the company that owns the web page on which the ads are supposed to be displayed. It knowingly does so by either programming its web pages with these fraudulent techniques or using networks that sell fake ad impressions on its web pages.

There are many fraud-detection techniques you can employ to make sure that your campaign isn’t the victim of page fraud. And there are many companies – such as TrustMetrics, Double Verify and Integral Ad Science – that offer technologies and services to detect, stop and avoid this type of fraud. Foiling it requires page crawling as well as advanced statistical analysis.
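
For illustration only, here is a toy check over crawled ad slots that flags the patterns described above: 1×1 iframes, hidden layers, and ads rendered off the page. The slot data structure and thresholds are invented; real vendors combine full page rendering with statistical analysis across many crawls.

```python
# Toy page-fraud check over a crawled page's ad slots (structure invented).
def suspicious_slots(ad_slots, viewport_width=1366):
    flagged = []
    for slot in ad_slots:
        w, h = slot["width"], slot["height"]
        x, y = slot["x"], slot["y"]
        if w <= 1 and h <= 1:
            flagged.append((slot["id"], "1x1 hidden iframe"))
        elif slot.get("hidden") or slot.get("opacity", 1.0) == 0:
            flagged.append((slot["id"], "hidden layer"))
        elif x + w < 0 or y + h < 0 or x > viewport_width:
            flagged.append((slot["id"], "rendered off the page"))
    return flagged

page = [
    {"id": "top-banner", "width": 728, "height": 90,  "x": 300,  "y": 0},
    {"id": "stacked-1",  "width": 300, "height": 250, "x": 900,  "y": 200, "hidden": True},
    {"id": "pixel-ad",   "width": 1,   "height": 1,   "x": 0,    "y": 0},
]
print(suspicious_slots(page))  # flags "stacked-1" and "pixel-ad"
```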

Bot Fraud

This second type of fraud, which can be perpetrated by a publisher or a network, is a much nastier kind of fraud than page fraud. It requires real-time protection that should ultimately be built into every ad server in the market.

Bot fraud happens when a fraudster builds a software robot (or bot) – or uses an off-the-shelf bot – that mimics the behavior of a real user. Simple bots pretend to be a person but behave in a repetitive way that can be quickly identified as nonhuman; perhaps the bot doesn’t rotate its IP address often and creates either impressions or clicks faster than humanly possible. But the more sophisticated bots are very difficult to differentiate from humans.

Many of these bots are able to mimic human behavior because they’re backed by “botnets” that sit on thousands of computers across the world and take over legitimate users’ machines.  These “zombie” computers then bring up the fraudsters’ bot software behind the scenes on the user’s machine, creating fake ad impressions on a real human’s computer.  (For more information on botnets, read “A Botnet Primer for Advertisers.”) Another approach that some fraudsters take is to “farm out” the bot work to real humans, who typically sit in public cyber cafes in foreign countries and just visit web pages, refreshing and clicking on ads over and over again. These low-tech “botnets” are generally easy to detect because the traffic, while human and “real,” comes from a single IP address and usually from physical locations where the heavy traffic seems improbable – often China, Vietnam, other Asian countries or Eastern Europe.
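
Here is a hedged sketch of what the simplest detection heuristics might look like, aimed only at the naive cases described above (impossible click rates, improbably heavy traffic from a single IP). The event format and thresholds are invented, and sophisticated botnets defeat checks this simple.

```python
from collections import Counter

# Events are (ip, user_id, timestamp_seconds, action) tuples; thresholds invented.
def flag_suspect_ips(events, max_clicks_per_minute=30, max_events_per_ip=5000):
    clicks_per_ip = Counter()
    events_per_ip = Counter()
    for ip, user_id, ts, action in events:
        events_per_ip[ip] += 1
        if action == "click":
            clicks_per_ip[ip] += 1

    suspects = set()
    window_minutes = 60  # assume the log covers one hour
    for ip, clicks in clicks_per_ip.items():
        if clicks / window_minutes > max_clicks_per_minute:
            suspects.add((ip, "clicking faster than humanly plausible"))
    for ip, total in events_per_ip.items():
        if total > max_events_per_ip:
            suspects.add((ip, "improbably heavy traffic from one address"))
    return suspects

# One click per second for an hour from a single address gets flagged.
events = [("203.0.113.9", "u1", t, "click") for t in range(3600)]
print(flag_suspect_ips(events))
```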

Many companies have invested a lot of money to stay ahead of bot fraud. Google’s DoubleClick ad servers already do a good job of avoiding these types of bot fraud, as do Atlas and others.

Anecdotally, though, newer ad servers such as the various DSPs seem to be having trouble with this; I’ve heard examples through the grapevine on pretty much all of them, which has been a bit of a black eye for the RTB space. This kind of fraud has been around for a very long time and only gets more sophisticated; new bots are rolled out as quickly as new detection techniques are developed.

The industry should demand that their ad servers take on this problem of bot fraud detection, as it really can only be handled at scale by significant investment – and it should be built right into the core campaign infrastructure across the board. Much like the issues of “visible impressions” and verification that have gotten a lot of play in the industry press, bot fraud is core to the ad-serving infrastructure and requires a solution that uses ad-serving-based technology. The investment is marginal on top of the existing ad-serving investments that already have been made, and all of these features should be offered for free as part of the existing ad-server fees.

Complain to – or request bot-fraud-detection features from – your ad server, DSP, SSP and exchange to make sure they’re prioritizing feature development properly. If you don’t complain, they won’t prioritize this; instead, you’ll get less-critical new features first.

Why Is This Happening?

I’ve actually been asked this a lot, and the question seems to indicate a misunderstanding – as if it were some sort of weird “hacking” being done to punish the ad industry. The answer is much simpler:  money.  Publishers and ad networks make money by selling ads. If they don’t have much traffic, they don’t make much money. With all the demand flowing across networks and exchanges today, much of the traffic is delivered across far more and smaller sites than in the past. This opens up significant opportunities for unscrupulous fraudsters.

Page fraud is clearly aimed at benefiting the publisher, but it also benefits the networks. Bot fraud is a little less clear – and I do believe that some publishers who aren’t aware of fraud are getting paid for bot-created ad impressions.  In these cases, the network that owns the impressions has configured the bots to drive up its revenues. But like I said above, publishers have to be almost incompetent not to notice the difference between the number of impressions delivered by a bot-fraud-committing ad network and the numbers provided by third parties like Alexa, Comscore, Nielsen, Compete, Hitwise, Quantcast, Google Analytics, Omniture and others.

Media buyers should be very skeptical when they see reports from ad networks or DSPs showing millions of impressions coming from sites that clearly aren’t likely to have millions of impressions to sell.  And if you’re buying campaigns with any amount of targeting – especially something that should significantly limit available inventory, such as geography or income – or with frequency caps, you need to be extra skeptical when reviewing your reports, or use a service that does that analysis for you.

Targeting fundamentals everyone should know

 

By Eric Picard (Originally published in iMediaConnection, April 11th, 2013)

Targeting data is ubiquitous in online advertising and has become close to “currency” as we think about it. And I mean currency in the same way that we think about Nielsen ratings in TV or impression counts in digital display. We pay for inventory today in many cases based on a combination of the publisher, the content associated with the impression, and the data associated with a variety of elements. This includes the IP address of the computer (lots of derived data comes from this), the context of the page, various content categories and quality metrics, and — of course — behavioral and other user-based targeting attributes.

But for all the vetting done by buyers of base media attributes, such as the publisher or the page or quality scores, there’s still very little understanding of where targeting data comes from. And even less when it comes to understanding how it should be valued and why. So this article is about just that topic: how targeting data is derived and how you should think about it from a value perspective.

Let’s get the basic stuff out of the way: anything derived from the IP address and user agent. When a browser visits a web page, it spits out a bunch of data to the servers that it accesses. The two key attributes are IP address and user agent. The IP address is a simple one; it’s the number assigned to the user’s computer by the internet to allow that computer to be identified by the various servers it touches. It’s a unique number that allows an immense amount of information to be inferred; the key piece of information inferred is the geography of the user.

There are lots of techniques used here, with varying degrees of granularity. But we’ll just leave it at the idea that companies have amassed lists of IP addresses assigned to specific geographic locations. It’s pretty accurate in most cases, but there are still scenarios where people are connected to the internet via private networks (such as a corporate VPN) that confuse the world by assigning IP addresses to users in one location when they are actually in another. This was the classic problem with IP-address-based geography back in the days of dial-up, when most users showed up as part of Reston, Va. (where AOL had its data centers). Today, with most users on broadband, the mapping is much more accurate and comprehensive.
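
A minimal sketch of the idea, using Python's standard ipaddress module and a handful of invented ranges, looks something like this; commercial geo databases map millions of ranges and update them constantly.

```python
import ipaddress

# Toy IP-to-geography table; ranges and locations are invented.
GEO_TABLE = [
    (ipaddress.ip_network("198.51.100.0/24"), ("US", "WA", "Seattle")),
    (ipaddress.ip_network("203.0.113.0/24"),  ("US", "VA", "Reston")),
    (ipaddress.ip_network("192.0.2.0/24"),    ("GB", "ENG", "London")),
]

def geo_for_ip(ip_string):
    ip = ipaddress.ip_address(ip_string)
    for network, location in GEO_TABLE:
        if ip in network:
            return location
    return None  # unknown; a real service falls back to coarser signals

print(geo_for_ip("203.0.113.42"))  # -> ("US", "VA", "Reston")
```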

As important as geography are the various mappings that are done against location. Claritas PRIZM and other derived data products make use of geography to map a variety of attributes to the user browsing the page. And these techniques have moved out of traditional media (especially direct-response mailing lists) to digital and are quite useful. The only issue is that the further down the chain of assumptions used to derive attributes, the more muddled things become. Statistically, the data still is relevant, but on a per-user basis it is potentially completely inaccurate. That shouldn’t stop you from using this information, nor should you devalue it — but just be clear that there’s a margin of error here.

User agent is an identifier for the browser itself, which can be used to target users of specific browsers but also to identify non-browser activity that chooses to identify itself. For instance, various web crawlers such as search engines identify themselves to the server delivering a web page, and ad servers know not to count those ad impressions as human. This assumes good behavior on behalf of the programmers, and sometimes “real” user agents are spoofed when the intent is to create fake impressions. Sometimes a malicious ad network or bad actor will do this to create fake traffic to drive revenue.
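
A toy version of that user-agent filtering might look like the following; the marker list is a small sample of the self-identifying crawler strings an ad server would actually maintain.

```python
# Well-behaved crawlers announce themselves in the user-agent string,
# so ad servers can skip counting their requests as impressions.
KNOWN_CRAWLER_MARKERS = ("googlebot", "bingbot", "slurp", "crawler", "spider")

def is_declared_crawler(user_agent):
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_CRAWLER_MARKERS)

print(is_declared_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # True
print(is_declared_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")) # False
```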

Crawled data

There’s a whole class of data that’s derived by sending a robot to a web page, crawling through the content on the page, and classifying the content based on all sorts of analysis. This mechanism is how Google, Bing, and other search engines classify the web. Contextual targeting systems like AdSense classify web pages into keywords that can be matched by ad sales systems. And quality companies, like Trust Metrics and others, scan pages and use hundreds or thousands of criteria to rate the quality of the page — everything from ensuring that the page doesn’t contain porn or hate speech to analyzing the amount of white space around images and ads and the number of ads on a page.

User targeting

Beyond the basics of browser, IP, and page content, the world is much less simple. Rather than diving into methodologies and trying to simplify a complex problem, I’ll simply list and summarize the options here:

Registration data: Publishers used to require registration in order to access their content and, in that process, request a bunch of data such as address, demographics, psychographics, and interests. This process fell out of favor for many publishers over the years, but it’s coming back hard. Many folks in our industry are cynical about registration data, using their own experiences and feelings to discount the validity of user registration data. But in reality, this data is highly accurate; even for large portals, it is often higher than 70 percent accurate, and for news sites and smaller publishers, it’s much more accurate.

Interestingly, the use of co-registration through Facebook, Twitter, LinkedIn, and others is making this data much more accurate. One of the most valuable things about registration data is that it creates a permanent link between a user and the publisher that lives beyond the cookie. Subsequently captured data from various sessions is extremely accurate even if the user fudged his or her registration information.

First-party behavioral data: Publishers and advertisers have a great advantage over third parties in that they have a direct relationship with the user. This gives them incredible opportunities to create deeply refined targeting segments based on interest, behavior, and especially custom-created content such as showcases, contests, and other registration information. Once a publisher or advertiser creates a profile of a user, it has the means to track and store very rich targeting data — much richer in theory than a third party could easily create. For instance, you might imagine that Yahoo Finance benefits highly from registered users who track their stock portfolio via the site. Similarly, users searching for autos, travel, and other vertical-specific information create immense targeting value.

Publishers curbed their internal targeting efforts years ago because they found that third-party data companies were buying targeted campaigns on publishers and then their high-cost, high-value targeting data was leaking away to third parties. But the world has shifted again, and publishers and advertisers both are benefiting highly from the data management platforms (DMPs) that are now common on the market. The race toward using first-party cookies as the standard for data collection is further strengthening publishers’ positions. Targeted content categories and contests are another way that publishers and advertisers have a huge advantage over third parties.

Creating custom content or contests with the intent to derive high-value audience data that is extremely vertical or particularly valuable is easy when you have a direct relationship with the user. You might imagine that Amazon has a huge lead in the market when it comes to valuation of users by vertical product interest. Similarly, big publishers can segment users into buckets based on their interest in numerous topics that can be used to extrapolate value.

Third-party data: There are many methods used to track and value users based on third-party cookies (those pesky cookies set by companies that generally don’t have a direct relationship with the user — and which are tracking them across websites). Luckily there are lots of articles out there (including many I’ve written) on how this works. But to quickly summarize: Third-party data companies generally make use of third-party cookies that are triggered on numerous websites across the internet via the use of tracking pixels. These pixels are literally just a 1×1 pixel image (sometimes called a “clear pixel”), or even just a simple no-image JavaScript call from the third-party server, that allows them to set and/or access a cookie on the user’s browser. These cookies are extremely useful to data companies in tracking users because the same cookie can be accessed on any website, on any domain, across sessions, and sometimes across years of time.
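
To make the mechanism concrete, here is a minimal sketch of a tracking-pixel endpoint (written with Flask purely for illustration; the route and cookie name are made up). A real data company's endpoint would return an actual 1×1 clear GIF and log far more about the request.

```python
from flask import Flask, request, make_response
import uuid

app = Flask(__name__)

@app.route("/pixel")
def pixel():
    uid = request.cookies.get("dc_uid")   # returning browser?
    if uid is None:
        uid = uuid.uuid4().hex            # first sighting: mint an anonymous ID
    # In this sketch an empty 204 response stands in for the 1x1 "clear" GIF.
    resp = make_response("", 204)
    resp.set_cookie("dc_uid", uid, max_age=60 * 60 * 24 * 365)
    # A real data company would also log (uid, referring page, timestamp)
    # here to build up its cross-site profile of the user.
    return resp
```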

Unfortunately for the third-party data companies, third-party cookies have recently come under intense scrutiny since Apple’s Safari doesn’t allow them by default and Firefox has announced that it will set new defaults in its next browser version to block third-party cookies. This means that those companies relying exclusively on third-party cookies will see their audience share erode and will need to fall back on other methods of tracking and profiling users. Note that these companies all use anonymous cookies and work hard to be safe and fair in their use of data. But the reality is that this method is becoming harder for companies to use.

By following users across websites, these companies can amass large and comprehensive profiles of users such that advertising can be targeted against them in deep ways and more money can be made from those ad impressions.

What everyone should know about ad serving

By Eric Picard (Originally published in iMediaConnection.com)

Publisher-side ad servers such as DoubleClick for Publishers, Open AdStream, FreeWheel, and others are the most critical components of the ad industry. They’re responsible ultimately for coordination of all the revenue collected by the publisher, and they do an amazing amount of work.

Many people in the industry — especially on the business side of the industry — look at their ad server as mission critical, sort of in the way they look at the electricity provided by their power utility. Critical — but only in that it delivers ads. To ad operations or salespeople, the ad server is most often associated with how they use the user interface — really the workflow they interact with directly. But this is an oversight on their part.

The way that the ad server operates under the surface is actually something everyone in the industry should understand. Only by understanding some of the details of how these systems function can good business decisions be made.

Ad delivery

Ad servers by nature make use of several real-time systems, the most critical being ad delivery. But “ad delivery” doesn’t adequately describe what those systems do. An ad delivery system is really a decision engine. It evaluates an ad impression at the exact moment it is created (by a user visiting a page), examines all the information about that impression, and decides which ad to deliver. But the real question is this: How does that decision get made?

An impression can be thought of as a molecule made up of atoms. Each atom is an attribute that describes something about that impression. These atomic attributes can be simple media attributes, such as the page location the ad is embedded into, the category of content the page sits within, or the dimensions of the creative. They can be audience attributes such as demographic information taken from the user’s registration data or a third-party data company. They can also be complex audience segments provided by a DMP, such as “soccer mom” — which is itself almost a molecular object made up of the attributes female, parent, and children in sports, along with various other demographic and psychographic atoms.
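As a rough illustration of this molecule-of-atoms idea, here is a minimal sketch in Python. The field names, the matching rule, and the “soccer mom” attribute bundle are illustrative assumptions, not any ad server’s actual schema.

from dataclasses import dataclass, field

@dataclass
class Impression:
    page_url: str                 # media atom: where the ad slot lives
    content_category: str         # media atom: the section the page sits within
    ad_size: tuple                # media atom: creative dimensions, e.g. (300, 250)
    audience: set = field(default_factory=set)   # atomic audience attributes

    def matches(self, required: set) -> bool:
        # A line item matches if the impression carries every attribute it requires.
        return required <= self.audience

# A composite segment such as "soccer mom" is itself a bundle of atoms.
SOCCER_MOM = {"female", "parent", "children_in_sports"}

imp = Impression(
    page_url="https://example.com/recipes/weeknight-dinners",
    content_category="food",
    ad_size=(300, 250),
    audience={"female", "parent", "children_in_sports", "age_25_44"},
)
print(imp.matches(SOCCER_MOM))  # True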

When taken all together, these attributes define all the possible interpretations of that impression. The delivery engine now must decide (all within a few milliseconds) how to allocate that impression against available line items. This real-time inventory allocation issue is the most critical moment in the life of an impression. Most people in our industry have no understanding of what happens in that moment, which has led to many uninformed business, partnership, and vendor licensing decisions over the years, especially when it comes to operations, inventory management, and yield.

Real-time inventory allocation decides which line items will be matched against an impression. The way these decisions get made reflects the relative importance placed on them by the engineers who wrote the allocation rules. Those rules are, of course, informed by business people responsible for yield and revenue, but the reality is that tuning allocation to a specific publisher’s needs is not possible in a large shared system. So the rules get tuned as well as they can be to match the overarching case that most customers face.

Inventory prediction

Well before an impression is generated and allocated in real time, inventory is sold in advance based on predictions of how much volume will exist in the future. We call these predicted impressions “avails” (for “available to sell”) in our industry, and they’re essentially the basis on which all guaranteed impressions are sold.

We’ll get back to real-time allocation in a moment, but first let’s talk a bit about avails. The avails calculation, done by another component of the ad server that is responsible for inventory prediction, is one of the hardest computer science problems facing the industry today. Predicting how much inventory will exist is hard — and extremely complicated.

Imagine, if you will, that you’ve been asked to tackle a different kind of prediction problem than ad serving — perhaps traffic patterns on a state highway system. As you might imagine, predicting how many cars will be on the entire highway next month is probably not very hard to do with a fairly high degree of accuracy. There’s historical data going back years, month by month. So you could look at the month of April for the last five years, check whether there’s any significant variance, and use a bit of fairly sophisticated math to determine a confidence interval for how many cars will be on the highway in April 2013.

But imagine that you now wanted to zoom into a specific location — let’s say the Golden Gate Bridge. And you wanted to break that prediction down further, let’s say Wednesday, April 3. And let’s say that we wanted to predict not only how many cars would be on the bridge that day, but how many cars with only one passenger. And further, we wanted to know how many of those cars were red and driven by women. And of those red, female-driven cars, how many of them are convertible sports cars? Between 2 and 3 p.m.

Even if you could get some idea of how many matches you’ve had in the past, predicting at this level of granularity is very hard. Never mind that many outside factors could affect it: there are short-term signals that can improve accuracy as you get closer to the event, such as weather and sporting events, and there are far less predictable events such as car accidents, earthquakes, etc.

This is essentially the same kind of prediction problem as the avails prediction problem we face in the online advertising industry. Each time we layer one more bit of data (some defining attribute) onto our inventory definition, we make it harder and harder to predict with any accuracy how many of those impressions will exist. And because we’ve signed up for a guarantee that this inventory will exist, the engineers creating the prediction algorithms need to be very conservative in their estimates.
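Here is a minimal sketch of that conservatism, assuming nothing more than a history of matching impression counts and a simple lower-bound on the mean. The numbers and the formula are illustrative; real inventory prediction systems model seasonality, trends, and attribute-level sparsity, which is what makes the problem genuinely hard.

import statistics

def conservative_avails(history, z=1.96):
    # Quote roughly the lower edge of a 95 percent confidence interval on the
    # mean, never the average itself, so guarantees are rarely over-sold.
    mean = statistics.mean(history)
    stderr = statistics.stdev(history) / len(history) ** 0.5
    return max(0, int(mean - z * stderr))

# Impressions matching one narrow targeting definition over the last five Aprils.
history = [1_200_000, 1_150_000, 1_320_000, 1_080_000, 1_250_000]
print(conservative_avails(history))  # noticeably below the simple 1,200,000 average

The narrower the targeting definition, the noisier the history, and the further below the average the quoted avails have to sit.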

When an ad campaign is booked by an account manager at the publisher, they “pull avails” based on their read of the RFP and media plan and try to find matching inventory. These avails are then reserved in the system (the system puts a hold, for a period of time, on the avails that are sent back to the buyer) until the insertion order (I/O) is signed by the buyer. At that moment, a preliminary allocation of predicted avails (impressions that don’t exist yet) is made by a reservation system, which divvies out the avails among the various I/Os. This is another kind of allocation the ad server does in advance of the campaign actually running live, and it has as much impact on overall yield as the real-time allocation does — or even more.
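The following sketch shows how such a reservation hold might look in code. The class names, the 14-day hold window, and the release logic are assumptions made purely for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AvailsHold:
    io_id: str
    targeting: frozenset      # the attribute definition the avails were pulled against
    impressions_held: int
    expires_at: datetime

class ReservationBook:
    def __init__(self):
        self.holds = {}

    def hold(self, io_id, targeting, impressions, days=14):
        # Put a temporary hold on predicted avails while the buyer reviews the I/O.
        self.holds[io_id] = AvailsHold(
            io_id, frozenset(targeting), impressions,
            expires_at=datetime.utcnow() + timedelta(days=days),
        )

    def sign(self, io_id):
        # On signature, the hold becomes the preliminary allocation for that I/O.
        return self.holds.pop(io_id)

    def release_expired(self, now=None):
        # Unsigned holds lapse, returning the avails to the sellable pool.
        now = now or datetime.utcnow()
        expired = [k for k, h in self.holds.items() if h.expires_at <= now]
        for k in expired:
            del self.holds[k]
        return expired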

How real-time allocation decisions get made

Once a contract has been signed guaranteeing that these impressions will in fact be delivered, it’s up to the delivery engine’s allocation system to decide which of the matching impressions to assign to which line items. The primary criterion used to make this decision is how far behind each matching line item is on delivering against its contract, which we call “starvation” (i.e., is the line item starving to death, or is it on track to fulfill its obligated impression volume?).

Because the engineers who wrote the avails prediction algorithms were conservative, the system generally has a lot of wiggle room when delivering against most line items that are not too complex. That means there are usually more matching impressions available at allocation time than were predicted ahead of time. So when none of the matching line items is starving, other decision criteria can be used. The clearest one is yield (i.e., of the eligible line items, which one will get me the most money for this impression?).
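A minimal sketch of that decision rule follows, assuming a simple pacing measure for “starvation” and eCPM as the yield tiebreaker; the field names, threshold, and example numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    booked_impressions: int     # contracted volume
    delivered: int              # impressions served so far
    elapsed_fraction: float     # share of the flight that has passed (0..1)
    ecpm: float                 # expected revenue per thousand impressions

    def pacing_gap(self) -> float:
        # How far behind schedule this line is; a positive gap means it is "starving."
        expected = self.booked_impressions * self.elapsed_fraction
        return (expected - self.delivered) / max(self.booked_impressions, 1)

def choose_line_item(matching, starvation_threshold=0.05):
    starving = [li for li in matching if li.pacing_gap() > starvation_threshold]
    if starving:
        # Feed whichever eligible line is furthest behind its contract.
        return max(starving, key=lambda li: li.pacing_gap())
    # No one is starving, so break the tie on yield.
    return max(matching, key=lambda li: li.ecpm)

candidates = [
    LineItem("auto_q2", 1_000_000, 400_000, 0.5, ecpm=4.00),   # behind pace
    LineItem("retail_roc", 500_000, 260_000, 0.5, ecpm=9.00),  # on pace
]
print(choose_line_item(candidates).name)  # auto_q2 wins despite the lower eCPM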

Implications of real-time allocation and inventory prediction

There’s a tendency in our industry to think about ad inventory as if it “exists” ahead of time, but as we’ve just seen, an impression is ephemeral. It exists only for a few milliseconds in the brain of a computer that decides what ad to send to the user’s machine. Generally there are many ways that each impression could be fulfilled, and the systems involved have to make millions or billions of decisions every hour.

We tend to think about inventory in terms of premium and remnant, or through a variety of other lenses. But the reality is that before inventory is sold or unsold, premium or remnant, or anything else, it runs through this initial mechanism. In many cases, extremely valuable inventory gets allocated to very low-CPM impression opportunities, or even to remnant, because of factors that have little to do with what that impression “is.”

There are many vendors in the space, but let’s chat for a moment about two groups: supply-side platforms (SSPs) and yield management companies.

Yield management firms focus on giving publishers ways to increase yield on inventory (get more money from the same impressions), and most have different strategies. The two companies people talk to me about most these days are Yieldex and Maxifier. Yieldex focuses on the pre-allocation problem — the avails reservations done by account managers as well as the inventory prediction problem. It also provides a lot of analytics capabilities and is going to factor significantly in the programmatic premium space as well. Maxifier focuses on the real-time allocation problem: it finds matches that drive yield up and improves matches on other performance metrics like click-through and conversions, as well as any other KPI the publisher tracks, such as viewability or even engagement. Maxifier does this while ensuring that campaigns deliver, since premium campaigns are paid on delivery but in many cases measured on performance. The company will also figure heavily in the programmatic premium space, but in a totally different way than Yieldex. In other words, the two companies don’t really compete with each other.

Google’s recent release of its dynamic allocation features for the ad exchange (sort of the evolution of the Admeld technology) also plays heavily into real-time allocation and yield decisions. Specifically, the company can compare every impression’s yield opportunity between guaranteed (premium) line items and the response from the DoubleClick Exchange (AdX) to determine on a per-impression basis which will pay the publisher more money. This is very close to what Maxifier does, but Maxifier does this across all SSPs and exchanges involved in the process. Publishers I’ve talked to using all of these technologies have gushed to me about the improvements they’ve seen.
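At its core, that per-impression comparison can be pictured as something like the sketch below, with both options expressed as an expected CPM. This is a simplified illustration of the idea, not DoubleClick’s or Maxifier’s actual logic.

def dynamic_allocation(guaranteed_ecpm, exchange_bid_ecpm=None):
    # Give the impression to the exchange only when its real-time bid beats the
    # value of the best eligible guaranteed line item; otherwise serve the guarantee.
    if exchange_bid_ecpm is not None and exchange_bid_ecpm > guaranteed_ecpm:
        return "exchange"
    return "guaranteed"

print(dynamic_allocation(guaranteed_ecpm=5.50, exchange_bid_ecpm=7.25))  # exchange
print(dynamic_allocation(guaranteed_ecpm=5.50))                          # guaranteed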

SSPs are another animal altogether. While the yield vendors above focus on increasing the value of premium inventory and/or maximizing yield between premium and exchange inventory (I think of this as pushing information into the ad server to increase value), the SSPs are handed remnant inventory and optimize its yield across all the various venues for clearing it. By forcing competition among ad networks, exchanges, and other vehicles, they can drive up the price of remnant inventory.

How to apply this article to your business decisions

I’ve had dozens of conversations with publishers about yield, programmatic premium, SSPs, and other vendors. The most important takeaway I can leave you with is that you should think about premium yield optimization as a totally different track than discussions about remnant inventory.

When it comes to remnant inventory, whoever gets the first “look” at the inventory is likely to provide the biggest increase in yield. So when testing remnant options, you have to ensure that you’re testing each one in exactly the same way — never stacked beneath one another. Most SSPs and exchanges ultimately provide the same demand through slightly different lenses. That means that, barring some radical technical superiority — which none has demonstrated to me so far — the decision will most likely come down to ease of integration and, ultimately, customer service.

What the heck does “programmatic” mean?

By Eric Picard (originally published on iMediaConnection.com 1/10/13)

This article is about programmatic media buying and selling, which I would define as any method of buying or selling media that enables a buyer to complete a media buy and have it go live, all without human intervention from a seller.

Programmatic is a superset of exchange, RTB, auction, and other types of automated media buying and selling that, until today, have mainly been proven out as clearing mechanisms for remnant ad inventory. So while an auction might or might not be involved in programmatic buying and selling, the new programmatic world is built on the same infrastructure that the ad exchanges, DSPs, SSPs, and ad servers have been plumbing and re-plumbing over the last five years.

Let’s talk first about so-called “programmatic premium” inventory, as this is what I’m seeing as the most confusing thing in the market today. Many people still think of programmatic media as remnant inventory sold using real-time bidding. But that’s far from the whole truth today. All display media could (mechanically) be bought and sold programmatically today — whether via RTB or not, whether it’s guaranteed or not, and whether it’s “premium” or not. Eventually all advertising across all media will be bought and sold programmatically. Sometimes it will be bought with a guarantee, sometimes it won’t.

What we’re talking about is how the campaigns get flighted and how ad inventory is allocated against a specific advertiser’s campaign. In premium display advertising, this is done today by humans using tools, mostly on the publisher side of the market. In the programmatic world, all buys — even the up-front buys — will be executed programmatically. So when I say that all ads will be bought and sold programmatically, I mean that literally. If Coke spends $50 million with Disney at an upfront event, that $50 million will still be executed programmatically throughout the life of that buy. The insertion order and RFP process goes away (as we know it) and is replaced by a much more efficient set of processes.

In this new world, sales teams don’t go away. They become more focused on the value that they can add most effectively. That’s in the relationship and evangelism of their properties and the unique content and brand-safe environment that they bring to the table. Sales teams will also engage in broader, more valuable negotiations with buyers — doing more business development and no “order taking.”

In a programmatic world, prices and a whole slew of terms can be negotiated in advance. Essentially what’s happening is that the order-taking process, the RFP, and the inventory avail “look up” that have been intensely manual for the past 20 years are being automated. And APIs between systems have opened up that allow all these various tools to communicate directly and to drive through the existing roadblocks.
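As a purely illustrative example of the kind of terms that can be negotiated up front and then executed programmatically, a deal record might look something like the sketch below. The field names and values are hypothetical and not tied to any particular vendor’s API.

# A hypothetical, pre-negotiated programmatic deal record. None of these field
# names come from a real system; they simply illustrate what gets agreed in
# advance so that execution can happen without a manual RFP and I/O cycle.
deal_terms = {
    "deal_id": "upfront-example-001",
    "buyer": "example_trading_desk",
    "seller": "example_publisher",
    "guaranteed": True,
    "impressions": 50_000_000,
    "negotiated_cpm": 12.50,
    "flight": {"start": "2014-04-01", "end": "2014-06-30"},
    "targeting": {"content_category": ["home_improvement"], "geo": ["US"]},
    "billing": {"payment_terms_days": 60, "measured_on": "delivery"},
}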

Here are five things everyone in our industry should know about programmatic media buying and selling.

It’s inevitable

Programmatic buying and selling is coming, is coming big, and will change the way people buy and sell nearly all media — across all media types — over the next five to 10 years. This will be the case in online display over the next two to three years.

It’s comprehensive

Programmatic is not just RTB, is not just “bidding,” and is not one channel of sales. It’s comprehensive, it’s everything that will be bought and sold, and it’s all forms of media across all sales channels. That’s why I’m hedging by saying five to 10 years, as it will take more than five years to do all these things across all media. But certainly fewer than 10. And a lot is transitioning over the next two years, especially online.

Prices will still vary

In non-programmatic buying and selling (old-fashioned, traditional relationship sales), different customers are charged different prices all the time for exactly the same product. That doesn’t go away. Different advertisers get different prices for all sorts of reasons. In the worst case, the buyer is simply a weaker negotiator. But it could be that an advertiser spends more than $1 million monthly with that publisher and therefore gets a huge discount on CPM. There are all sorts of reasons this happens. The same exact thing will happen programmatically. Various advertisers will have hard-coded discounts negotiated by humans in advance. Prices will drop as thresholds on overall spend are hit. Prestigious brands will get preferences that crappy or unknown brands don’t get. All of this can be accommodated right now — this very minute — in almost every major system out there. It’s here. Now.
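A minimal sketch of how such negotiated discounts and spend thresholds could be applied programmatically, using an entirely made-up rate card:

def effective_cpm(base_cpm, advertiser, monthly_spend):
    # Hard-coded discounts negotiated by humans in advance (illustrative values).
    negotiated_discount = {"prestige_brand": 0.20, "unknown_brand": 0.0}
    cpm = base_cpm * (1 - negotiated_discount.get(advertiser, 0.0))
    # Prices drop further as overall spend thresholds are hit.
    if monthly_spend >= 1_000_000:
        cpm *= 0.85
    elif monthly_spend >= 250_000:
        cpm *= 0.95
    return round(cpm, 2)

print(effective_cpm(10.00, "prestige_brand", 1_200_000))  # 6.8
print(effective_cpm(10.00, "unknown_brand", 50_000))      # 10.0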

Complexities will remain

All the various “ad platforms” of the past and the true ad platforms of today have opened up APIs and can communicate with each other programmatically. This is how the infrastructure powers programmatic buying and selling. I can’t stress enough how fundamental this change is. It’s not about bidding, auctions, futures, hedges, etc. — although those things will certainly exist and proliferate. It’s about automating the buying and selling process, removing friction from the market, and providing immense increases in value across the board. People talk about how complex the LUMAscape map of ad tech vendors is, but what they miss is that there’s plenty of room for lots of players when they can all easily connect and interact. I do believe we’ll see consolidation — mainly because there’s too little differentiation in the space (lots of copycat companies trying to compete with one another) — but I also believe the ecosystem can afford to be complex.

TV comparisons do not apply

People keep using the television upfronts as the analog to online premium inventory, and the television scatter market as the analog of remnant inventory. That’s not the right metaphor; they’re not equivalent. And even TV will move to programmatic buying and selling in the next decade. But let me lay this old saw to rest once and for all:

  • In television, the upfront is a discount mechanism. Buyers commit to a certain spend in order to get a discount. Publishers use the upfront as a hedge, mitigating the risk that they won’t sell out their inventory later in the year.
  • The scatter market is still the equivalent of guaranteed inventory online (although it’s more “reserved” than guaranteed). It’s just sold closer to the date the inventory goes live. I’d argue that, with the exception of the random “upfronts” run by some publishers online today, all online premium ad sales are the equivalent of the television scatter market.
  • Remnant is a wholly different thing in television — and isn’t part of the scatter market. TV limits remnant significantly (in healthy economies, to about 10 percent of total inventory). We’ve mucked that all up online by selling every impression at any price, which has lowered overall yield and flooded the market with cheap inventory — most of which is worthless.