Monthly Archives: July 2013

How arbitrage works in digital advertising today

By Eric Picard (Originally published on iMediaConnection.com July 11, 2013)

The idea that ad networks, trading desks, and various technical “traders” in the ad ecosystem are “arbitraging” their customers is fairly well understood. The idea that an intermediary of some kind sells ad inventory to a media buyer, but then buys it somewhere else for a lower price is a pretty basic reality in our industry. But what most of us don’t understand is how it gets done and especially how technically advanced firms are doing it.

So let’s talk about this today — how arbitrage is enacted by various constituents, and I’d love to hear some reactions in the comments about how marketers and media buyers feel about it, especially if they weren’t aware of how it was done. Note: There are many ways to do this; I’m just going to give you some examples.

Old school networks

Back in the day, ad networks would go to large publishers and negotiate low price remnant buys (wholesale buys) where they’d buy raw impressions for pennies on the dollar, with the rule being that the network could only resell those impressions without identifying the publisher (blind inventory resale).

The networks that have done this well traditionally apply some targeting capabilities to sell based on context/content and also audience attributes. But even this is all very old school. The more advanced networks even back in the old days employed a variety of yield optimization technologies and techniques on top of targeting to ensure that they took as little risk on inventory as possible.

RTB procurement

Many networks now use the exchanges as their primary “procurement” mechanism for inventory. In this world there’s very little risk for networks, since they can set each individual campaign up in advance to procure inventory at lower prices than they’ve sold it for. There is some risk that they won’t be able to procure enough inventory to cover what they’ve pre-sold. But the risk of being left holding a large amount of unsold inventory is much lower, which saves money.

Once you mitigate that primary risk and then add in the ability to ensure margin by setting margin limits, which any DSP can do “off the shelf,” the risk in managing an ad network is so low that almost anyone can do it — as long as you don’t care about maximizing your margins. That’s where a whole new class of arbitrage has entered the market.
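That off-the-shelf margin control amounts to a simple cap on the bid price. Here’s a minimal sketch in Python; the function, numbers, and flat margin floor are my own illustration, not any particular DSP’s logic:

```python
def capped_bid(client_cpm, min_margin, est_clearing_cpm):
    """Return the max CPM to bid on the exchange, or None to pass.

    client_cpm       -- the CPM the network pre-sold the impression at
    min_margin       -- the margin floor set in the DSP (e.g. 0.30 = 30%)
    est_clearing_cpm -- the estimated price needed to win the auction
    """
    max_bid = client_cpm * (1.0 - min_margin)  # never bid above this
    if est_clearing_cpm > max_bid:
        return None  # pass: winning this auction would erode the margin
    return max_bid

# Inventory pre-sold at a $5.00 CPM with a 30% margin floor:
print(capped_bid(5.00, 0.30, 2.10))  # bids the $5.00 price less 30%
print(capped_bid(5.00, 0.30, 4.00))  # None: passes on this auction
```

With the floor enforced on every bid, the network’s margin is guaranteed on whatever it does win, which is exactly why the remaining risk is so low.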

Technical arbitrage

There are many different ways that companies are innovating around arbitrage, but I’ll give you a baseline summary so you can understand why many of the networks (or networks that advertise as if they’re some kind of “services-based DSP”) are able to be successful today.

Imagine a network that has built an internal ad platform that enables the following:

  • Build a giant (anonymous) cookie pool of all users on the internet.
  • Develop a statistical model for each user that monitors what sites the network has seen them on historically on a daily/day-of-week basis.
  • Develop a historical model of how much it tends to cost to win inventory on each site on the exchange, perhaps even for each individual user.
  • When a campaign tries to reach a specific type of user, the system matches against each user. Then, in the milliseconds before the bid needs to be returned, the network’s systems determine how likely they are to see this user again that day, and whether they’re likely to find the user on sites where inventory has historically cost less than the site the user is on at the moment.
  • If the algorithm thinks it can find that user for less money, it will either bid low or it will ignore the bid opportunity until it sees that user later in the day — when it probably can win the bid.
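Stitched together, the steps above amount to a “buy this user cheaper later” decision made at bid time. Here’s a toy Python sketch; the sites, prices, probabilities, and the 0.5 likelihood cutoff are all invented for illustration:

```python
# Hypothetical per-user model: for each site, the historical CPM needed
# to win this user's impressions, plus the odds of seeing the user there today.
USER_MODEL = {
    "news-site.example": {"win_cpm": 2.40, "p_seen_today": 0.90},
    "blog-site.example": {"win_cpm": 0.60, "p_seen_today": 0.75},
}

def decide_bid(current_site, campaign_max_cpm, model=USER_MODEL):
    """Bid only if no cheaper site is likely to surface this user today."""
    here = model[current_site]["win_cpm"]
    cheaper_elsewhere = any(
        site != current_site
        and stats["win_cpm"] < here
        and stats["p_seen_today"] > 0.5   # "probably can win the bid later"
        for site, stats in model.items()
    )
    if cheaper_elsewhere:
        return None                        # ignore the bid opportunity for now
    return min(here, campaign_max_cpm)     # bid just enough to win

print(decide_bid("news-site.example", 3.00))  # None: a cheaper site is likely
print(decide_bid("blog-site.example", 3.00))  # 0.6: cheapest likely venue
```

A real system would run this match in the few milliseconds before the bid response is due, against a far richer user model.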

This kind of technology is now running on a good number of networks, with many “variations” on this theme — some networks are using data related to time of day to make optimization decisions. One network told me that it finds that users are likely to click and convert first thing in the morning (before they start their busy day), in mid-morning surfing (after they’ve gotten some work done), after lunch (when they’re presumably trying to avoid nodding off), and in the late afternoon before going home for the day. They optimize their bidding strategy around these scenarios either by time of day or (in more sophisticated models) depending on the specific user’s historical behavior.
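A crude version of that daypart strategy is just a table of bid multipliers keyed to those windows. The hours and multipliers below are hypothetical, sketched in Python:

```python
# Hypothetical bid multipliers for the four windows the network described:
# early morning, mid-morning, post-lunch, and late afternoon.
DAYPART_MULTIPLIERS = [
    (range(6, 9),   1.4),   # before the busy day starts
    (range(10, 12), 1.2),   # mid-morning surfing
    (range(13, 15), 1.3),   # after lunch
    (range(16, 18), 1.25),  # late afternoon, before heading home
]

def adjusted_bid(base_cpm, hour):
    """Scale the base bid when the hour falls in a high-conversion window."""
    for hours, mult in DAYPART_MULTIPLIERS:
        if hour in hours:
            return round(base_cpm * mult, 2)
    return base_cpm  # off-peak: bid the base price

print(adjusted_bid(2.00, 7))   # 2.8  (early-morning window)
print(adjusted_bid(2.00, 12))  # 2.0  (lunch hour, off-peak)
```

The more sophisticated models the network described would replace the shared table with multipliers learned per user from historical behavior.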

You shouldn’t begrudge the networks too much for this “technical arbitrage,” since all that technology requires a significant upfront investment. They’re still giving you access to the same user pool. But one question that nags at me is whether they’re giving you that user on sites that are not great.

It also raises a question: if these very technical networks are buying their inventory on a per-impression basis, all the stories about fraud get me a little worried. A truly sophisticated algorithm that matches on unique IDs should be able to see that some of those IDs are accumulating too many impressions to be human. But I haven’t done any analysis on this; it’s just a latent concern I have.
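The check I have in mind could be as simple as counting daily impressions per unique ID and flagging outliers. A toy Python sketch; the flat 500-impression cap is invented, and a real system would model per-user distributions instead:

```python
from collections import Counter

def flag_nonhuman_ids(impression_log, daily_cap=500):
    """Flag unique IDs whose daily impression counts look implausibly human.

    impression_log -- iterable of (unique_id, site) impression records
    daily_cap      -- hypothetical flat cutoff, purely for illustration
    """
    counts = Counter(uid for uid, _ in impression_log)
    return {uid for uid, n in counts.items() if n > daily_cap}

log = [("u1", "site-a")] * 12 + [("bot-7", "site-b")] * 9000
print(flag_nonhuman_ids(log))  # {'bot-7'}
```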


Life after the death of 3rd Party Cookies

By Eric Picard (Originally published on AdExchanger.com July 8th, 2013)

In spite of plenty of criticism by the IAB and others in the industry, Mozilla is moving forward with its plan to block third-party cookies and to create a “Cookie Clearinghouse” to determine which cookies will be allowed and which will be blocked.  I’ve written many articles about the ethical issues involved in third-party tracking and targeting over the last few years, and one I wrote in March — “We Don’t Need No Stinkin’ Third-Party Cookies” — led to dozens of conversations on this topic with both business and technology people across the industry.

The basic tenor of those conversations was frustration. More interesting to me than the business discussions, which tended to be both inaccurate and hyperbolic, were my conversations with senior technical leaders within various DSPs, SSPs and exchanges. Those leaders’ reactions ranged from complete panic to subdued resignation. While it’s clear there are ways we can technically resolve the issues, the real question isn’t whether we can come up with a solution, but how difficult it will be (i.e. how many engineering hours will be required) to pull it off.

Is This The End Or The Beginning?

Ultimately, Mozilla will do whatever it wants to do. It’s completely within its rights to stop supporting third-party cookies, and while that decision may cause chaos for an ecosystem of ad-technology vendors, it’s completely Mozilla’s call. The company is taking a moral stance that’s, frankly, quite defensible. I’m actually surprised it’s taken Mozilla this long to do it, and I don’t expect it will take Microsoft very long to do the same. Google may well follow suit, as taking a similar stance would likely strengthen its own position.

To understand what life after third-party cookies might look like, companies first need to understand how technology vendors use these cookies to target consumers. Outside of technology teams, this understanding is surprisingly difficult to come by, so here’s what you need to know:

Every exchange, Demand-Side Platform, Supply-Side Platform and third-party data company has its own large “cookie store,” a database of every single unique user it encounters, identified by an anonymous cookie. If a DSP, for instance, wants to use information from a third-party data company, it needs to be able to accurately match that third-party cookie data with its own unique-user pool. So in order to identify users across various publishers, all the vendors in the ecosystem have connected with other vendors to synchronize their cookies.

With third-party cookies, they could do this rather simply. While the exact methodology varies by vendor, it essentially boils down to this:

  1. The exchange, DSP, SSP or ad server carves off a small number of impressions for each unique user for cookie synching. All of these systems can predict pretty accurately how many times a day they’ll see each user and on which sites, so they can easily determine which impressions are worth the least amount of money.
  2. When a unique ID shows up in one of these carved-off impressions, the vendor serves up a data-matching pixel for the third-party data company. The vendor places its unique ID for that user into the call to the data company. The data company looks up its own unique ID, which it then passes back to the vendor with the vendor’s unique ID.
  3. That creates a lookup table between the technology vendor and the data company so that when an impression happens, all the various systems are mapped together. In other words, when it encounters a unique ID for which it has a match, the vendor can pass the data company’s ID to the necessary systems in order to bid for an ad placement or make another ad decision.
  4. Because all the vendors have shared their unique IDs with each other and matched them together, this creates a seamless (while still, for all practical purposes, anonymous) map of each user online.
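Reduced to code, the four steps above are just a pixel call that populates a lookup table. A minimal Python sketch, with the data company’s side faked and all names invented:

```python
# vendor_uid -> data_company_uid; the "lookup table" from step 3.
VENDOR_LOOKUP = {}

def data_company_lookup(vendor_uid):
    """The data company resolves its own cookie for this browser (faked)."""
    return "dc-" + vendor_uid

def record_match(vendor_uid, data_uid):
    """Step 3: persist the ID pair so future impressions can be mapped."""
    VENDOR_LOOKUP[vendor_uid] = data_uid

def serve_match_pixel(vendor_uid):
    """Step 2: on a low-value impression, fire the data company's pixel
    with the vendor's ID, e.g.
    https://sync.data-co.example/match?vendor_id=<vendor_uid>"""
    data_uid = data_company_lookup(vendor_uid)
    record_match(vendor_uid, data_uid)

def resolve_for_bid(vendor_uid):
    """Step 4: at decision time, map the vendor ID to the data company's."""
    return VENDOR_LOOKUP.get(vendor_uid)

serve_match_pixel("v-123")
print(resolve_for_bid("v-123"))  # dc-v-123
```

In production this handshake runs between every pair of vendors that need to trade data, which is why the resulting map of each user is so complete.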

All of this depends on the basic third-party cookie infrastructure Mozilla is planning to block, which means that all of those data linkages will be broken for Mozilla users. Luckily, some alternatives are available.

Alternatives To Third-Party Cookies

1)  First-Party Cookies: First-party cookies also can be (and already are) used for tracking and ad targeting, and they can be synchronized across vendors on behalf of a publisher or advertiser. In my March article about third-party cookies, I discussed how this can be done using subdomains.

Since then, several technical people have told me they couldn’t use the same cross-vendor-lookup model, outlined above, with first-party cookies — but generally agreed that it could be done using subdomain mapping. Managing subdomains at the scale that would be needed, though, creates a new hurdle for the industry. To be clear, for this to work, every publisher would need to map a subdomain for every single vendor and data provider that touches inventory on its site.
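Concretely, that mapping might look something like the following hypothetical DNS zone-file entries for a publisher’s domain, one CNAME per vendor, so that each vendor’s tracking host (and thus its cookie) lives under the publisher’s own domain and reads as first-party:

```
; Hypothetical zone-file entries for publisher.com, one per vendor.
; Each CNAME points a publisher-owned subdomain at the vendor's
; tracking host, making the vendor's cookies first-party cookies.
sync-vendor1   IN  CNAME  match.vendor1.example.
sync-dataco    IN  CNAME  collect.dataco.example.
```

Multiply those two lines by every vendor and data provider touching every publisher’s inventory, and the scale of the hurdle becomes clear.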

So there are two main reasons that switching to first-party cookies is undesirable for the online-ad ecosystem:  first, the amount of work that would need to be done; second, the lack of a process in place to handle all of this in a scalable way.

Personally, I don’t see anything that can’t be solved here. Someone needs to offer the market a technology solution for scalable subdomain mapping, and all the vendors and data companies need to jump through the hoops. It won’t happen in a week, but it shouldn’t take a year. First-party cookie tracking (even with synchronization) is much more ethically defensible than third-party cookies because, with first-party cookies, direct relationships with publishers or advertisers drive the interaction. If the industry does switch to mostly first-party cookies, it will quickly drive publishers to adopt direct relationships with data companies, probably in the form of Data Management Platform relationships.

2) Relying On The Big Guns: Facebook, Google, Amazon and/or other large players will certainly figure out how to take advantage of this situation to provide value to advertisers.

Quite honestly, I think Facebook is in the best position to offer a solution to the marketplace, given that it has the most unique users and its users are generally active across devices. This is very valuable, and while it puts Facebook in a much stronger position than the rest of the market, I really do see Facebook as the best voice of truth for targeting. Despite some bad press and some minor incidents, Facebook appears to be very dedicated to protecting user privacy – and also is already highly scrutinized and policed.

A Facebook-controlled clearinghouse for data vendors could solve many problems across the board. I trust Facebook more than other potential solutions to build the right kind of privacy controls for ad targeting. And because people usually log into only their own Facebook account, this avoids the problems that have hounded cookie-based targeting when people share devices, such as when a husband uses his wife’s computer one afternoon and suddenly her laptop thinks she’s a male fly-fishing enthusiast.

3) Digital Fingerprinting: Fingerprinting, of course, is as complex and as fraught with ethical issues as third-party cookies, but it has the advantage of being an alternative that many companies already are using today. Essentially, fingerprinting analyzes many different data points that are exposed by a unique session, using statistics to create a unique “fingerprint” of a device and its user.

This approach suffers from one of the same problems as cookies, the challenge of dealing with multiple consumers using the same device. But it’s not a bad solution. One advantage is that fingerprinting can take advantage of users with static IP addresses (or IP addresses that are not officially static but that rarely change).
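At its core, fingerprinting is a statistical reduction of whatever signals a session exposes into one stable identifier. A toy Python sketch; real systems weight many more signals, and these field names are purely illustrative:

```python
import hashlib

def device_fingerprint(session):
    """Hash a few session-exposed attributes into a stable device ID.

    A real fingerprinting system scores many more signals statistically;
    this toy version just concatenates a handful and hashes them.
    """
    signals = (
        session.get("user_agent", ""),
        session.get("accept_language", ""),
        session.get("screen", ""),
        session.get("timezone", ""),
        session.get("ip", ""),  # most useful when the IP rarely changes
    )
    return hashlib.sha256("|".join(signals).encode()).hexdigest()[:16]

s = {"user_agent": "Mozilla/5.0", "accept_language": "en-US",
     "screen": "1920x1080", "timezone": "-5", "ip": "203.0.113.7"}
print(device_fingerprint(s))  # same session attributes -> same ID
```

Note that a shared computer produces one fingerprint for every person using it, which is exactly the multiple-users-per-device problem mentioned above.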

Ultimately, though, this is a moot point because of…

4) IPV6: IPV6 is on the way. This will give every computer and every device a static permanent unique identifier, at which point IPV6 will replace not only cookies, but also fingerprinting and every other form of tracking identification. That said, we’re still a few years away from having enough IPV6 adoption to make this happen.

If Anyone From Mozilla Reads This Article

Rather than blocking third-party cookies completely, it would be fantastic if you could leave them active during each session and just blow them away at the end of each session. This would keep the market from building third-party profiles, but would keep some very convenient features intact. Some examples include frequency capping within a session, so that users don’t have to see the same ad 10 times; and conversion tracking for DR advertisers, given that DR advertisers (for a whole bunch of stupid reasons) typically only care about conversions that happen within an hour of a click. You already have Private Browsing technology; just apply that technology to third-party cookies.
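For what it’s worth, session-scoped third-party cookies would fully support the frequency-capping case above. A toy Python sketch, with a dict standing in for the session cookie and an invented cap of three:

```python
# Sketch of session-only frequency capping: the counter lives in a cookie
# the browser discards when the session ends, so no profile accumulates.
SESSION_CAP = 3  # hypothetical per-session cap

def should_serve(session_cookie, ad_id):
    """Serve only while this ad's per-session count is under the cap."""
    count = session_cookie.get(ad_id, 0)
    if count >= SESSION_CAP:
        return False  # user has seen this ad enough for one session
    session_cookie[ad_id] = count + 1
    return True

cookie = {}  # stands in for a session-scoped third-party cookie
print([should_serve(cookie, "ad-42") for _ in range(5)])
# [True, True, True, False, False]
```

Blow the cookie away at session end and the count resets, which is the whole point: convenience features survive, long-term profiles don’t.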