Category Archives: Economics

Life after the death of 3rd Party Cookies

By Eric Picard (Originally published on AdExchanger.com July 8th, 2013)

In spite of plenty of criticism by the IAB and others in the industry, Mozilla is moving forward with its plan to block third-party cookies and to create a “Cookie Clearinghouse” to determine which cookies will be allowed and which will be blocked.  I’ve written many articles about the ethical issues involved in third-party tracking and targeting over the last few years, and one I wrote in March — “We Don’t Need No Stinkin’ Third-Party Cookies” — led to dozens of conversations on this topic with both business and technology people across the industry.

The basic tenor of those conversations was frustration. More interesting to me than the business discussions, which tended to be both inaccurate and hyperbolic, were my conversations with senior technical leaders within various DSPs, SSPs and exchanges. Those leaders’ reactions ranged from complete panic to subdued resignation. While it’s clear there are ways we can technically resolve the issues, the real question isn’t whether we can come up with a solution, but how difficult it will be (i.e. how many engineering hours will be required) to pull it off.

Is This The End Or The Beginning?

Ultimately, Mozilla will do whatever it wants to do. It’s completely within its rights to stop supporting third-party cookies, and while that decision may cause chaos for an ecosystem of ad-technology vendors, it’s completely Mozilla’s call. The company is taking a moral stance that’s, frankly, quite defensible. I’m actually surprised it’s taken Mozilla this long to do it, and I don’t expect it will take Microsoft very long to do the same. Google may well follow suit, as taking a similar stance would likely strengthen its own position.

To understand what life after third-party cookies might look like, companies first need to understand how technology vendors use these cookies to target consumers. Outside of technology teams, this understanding is surprisingly difficult to come by, so here’s what you need to know:

Every exchange, Demand-Side Platform, Supply-Side Platform and third-party data company has its own large “cookie store,” a database of every single unique user it encounters, identified by an anonymous cookie. If a DSP, for instance, wants to use information from a third-party data company, it needs to be able to accurately match that third-party cookie data with its own unique-user pool. So in order to identify users across various publishers, all the vendors in the ecosystem have connected with other vendors to synchronize their cookies.

With third-party cookies, they could do this rather simply. While the exact methodology varies by vendor, it essentially boils down to this (a rough code sketch follows the list):

  1. The exchange, DSP, SSP or ad server carves off a small number of impressions for each unique user for cookie syncing. All of these systems can predict pretty accurately how many times a day they’ll see each user and on which sites, so they can easily determine which impressions are worth the least amount of money.
  2. When a unique ID shows up in one of these carved-off impressions, the vendor serves up a data-matching pixel for the third-party data company. The vendor places its unique ID for that user into the call to the data company. The data company looks up its own unique ID, which it then passes back to the vendor with the vendor’s unique ID.
  3. That creates a lookup table between the technology vendor and the data company so that when an impression happens, all the various systems are mapped together. In other words, when it encounters a unique ID for which it has a match, the vendor can pass the data company’s ID to the necessary systems in order to bid for an ad placement or make another ad decision.
  4. Because all the vendors have shared their unique IDs with each other and matched them together, this creates a seamless (while still, for all practical purposes, anonymous) map of each user online.
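
To make this concrete, here’s a rough Python sketch of those lookup tables. Every ID, store and function name is invented for illustration; real systems do this with pixel redirects between servers, not in-process calls:

```python
# Hypothetical cookie-sync flow between a DSP and a third-party data
# company. All IDs and names are illustrative, not any real vendor's API.

dsp_match_table = {}      # DSP's unique ID -> data company's unique ID
data_co_match_table = {}  # data company's unique ID -> DSP's unique ID

# Data the data company holds against its own IDs:
data_co_segments = {"dataco-user-abc": {"hhi": "200k+", "auto_intender": True}}

def serve_match_pixel(dsp_id: str, data_co_id: str) -> None:
    """Simulates the sync pixel: the DSP passes its ID in the pixel call,
    the data company looks up its own ID for that browser, and both sides
    record the pairing in their lookup tables."""
    dsp_match_table[dsp_id] = data_co_id
    data_co_match_table[data_co_id] = dsp_id

# Step 2: a low-value impression is carved off and the pixel fires.
serve_match_pixel("dsp-user-123", "dataco-user-abc")

# Step 3: at decision time, the DSP resolves the data company's ID and data.
matched_id = dsp_match_table["dsp-user-123"]
print(matched_id, data_co_segments[matched_id])
```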

All of this depends on the basic third-party cookie infrastructure Mozilla is planning to block, which means that all of those data linkages will be broken for Mozilla users. Luckily, some alternatives are available.

Alternatives To Third-Party Cookies

1)  First-Party Cookies: First-party cookies also can be (and already are) used for tracking and ad targeting, and they can be synchronized across vendors on behalf of a publisher or advertiser. In my March article about third-party cookies, I discussed how this can be done using subdomains.

Since then, several technical people have told me they couldn’t use the same cross-vendor-lookup model, outlined above, with first-party cookies — but generally agreed that it could be done using subdomain mapping. Managing subdomains at the scale that would be needed, though, creates a new hurdle for the industry. To be clear, for this to work, every publisher would need to map a subdomain for every single vendor and data provider that touches inventory on its site.
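
To make the subdomain-mapping hurdle concrete, here’s a toy sketch of the kind of per-vendor map every publisher would have to maintain; the vendor names and domains are made up:

```python
# Hypothetical per-publisher subdomain map. Each vendor gets a CNAME'd
# subdomain on the publisher's own domain, so the cookies the vendor sets
# there count as first-party. Every publisher would maintain one of these.
VENDOR_SUBDOMAINS = {
    "dsp-example":    "ads-dsp.examplepublisher.com",
    "data-example":   "ads-data.examplepublisher.com",
    "verify-example": "ads-verify.examplepublisher.com",
}

def pixel_url(vendor: str, path: str = "/sync") -> str:
    """Build a first-party pixel URL for a given vendor on this publisher."""
    return f"https://{VENDOR_SUBDOMAINS[vendor]}{path}"

print(pixel_url("dsp-example"))  # -> https://ads-dsp.examplepublisher.com/sync
```

Multiply that map by every data provider and every publisher in the ecosystem, and the scale of the hurdle becomes obvious.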

So there are two main reasons that switching to first-party cookies is undesirable for the online-ad ecosystem:  first, the amount of work that would need to be done; second, the lack of a process in place to handle all of this in a scalable way.

Personally, I don’t see anything that can’t be solved here. Someone needs to offer the market a technology solution for scalable subdomain mapping, and all the vendors and data companies need to jump through the hoops. It won’t happen in a week, but it shouldn’t take a year. First-party cookie tracking (even with synchronization) is much more ethically defensible than third-party cookies because, with first-party cookies, direct relationships with publishers or advertisers drive the interaction. If the industry does switch to mostly first-party cookies, it will quickly drive publishers to adopt direct relationships with data companies, probably in the form of Data Management Platform relationships.

2) Relying On The Big Guns: Facebook, Google, Amazon and/or other large players will certainly figure out how to take advantage of this situation to provide value to advertisers.

Quite honestly, I think Facebook is in the best position to offer a solution to the marketplace, given that it has the most unique users and its users are generally active across devices. This is very valuable, and while it puts Facebook in a much stronger position than the rest of the market, I really do see Facebook as the best voice of truth for targeting. Despite some bad press and some minor incidents, Facebook appears to be very dedicated to protecting user privacy – and also is already highly scrutinized and policed.

A Facebook-controlled clearinghouse for data vendors could solve many problems across the board. I trust Facebook more than other potential solutions to build the right kind of privacy controls for ad targeting. And because people usually log into only their own Facebook account, this approach avoids the problems that have hounded cookie-based targeting when people share devices, such as when a husband uses his wife’s computer one afternoon and suddenly her laptop thinks she’s a male fly-fishing enthusiast.

3) Digital Fingerprinting: Fingerprinting, of course, is as complex and as fraught with ethical issues as third-party cookies, but it has the advantage of being an alternative that many companies already are using today. Essentially, fingerprinting analyzes many different data points that are exposed by a unique session, using statistics to create a unique “fingerprint” of a device and its user.

This approach suffers from one of the same problems as cookies, the challenge of dealing with multiple consumers using the same device. But it’s not a bad solution. One advantage is that fingerprinting can take advantage of users with static IP addresses (or IP addresses that are not officially static but that rarely change).
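
Here’s a toy illustration of the statistical idea, assuming a handful of session-exposed signals; real fingerprinting systems use many more data points and probabilistic matching rather than a single hash:

```python
import hashlib

def device_fingerprint(signals: dict) -> str:
    """Hash a set of session-exposed signals into a device 'fingerprint'.
    Toy version: real systems weight and match signals statistically,
    since any one of them (IP, fonts, user agent) can drift over time."""
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

session = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:21.0)",
    "ip": "203.0.113.7",   # a static IP makes the fingerprint far more stable
    "screen": "1920x1080",
    "timezone": "UTC-8",
    "fonts": "Arial,Helvetica,Verdana",
}
print(device_fingerprint(session))
```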

Ultimately, though, this is a moot point because of…

4) IPv6: IPv6 is on the way. This will give every computer and every device a static, permanent unique identifier, at which point IPv6 will replace not only cookies, but also fingerprinting and every other form of tracking identification. That said, we’re still a few years away from having enough IPv6 adoption to make this happen.

If Anyone From Mozilla Reads This Article

Rather than blocking third-party cookies completely, it would be fantastic if you could leave them active during each session and just blow them away at the end of each session. This would keep the market from building third-party profiles, but would keep some very convenient features intact. Some examples include frequency capping within a session, so that users don’t have to see the same ad 10 times; and conversion tracking for DR advertisers, given that DR advertisers (for a whole bunch of stupid reasons) typically only care about conversions that happen within an hour of a click. You already have Private Browsing technology; just apply that technology to third-party cookies.
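
In rough pseudocode terms, the policy I’m suggesting is tiny; this hypothetical sketch ignores how Mozilla’s actual cookie internals work:

```python
# Hypothetical browser-side rule: third-party cookies survive only until
# the session ends, so session frequency capping and short-window
# conversion tracking keep working, but persistent profiles don't.
def on_session_end(cookie_jar: list) -> None:
    cookie_jar[:] = [c for c in cookie_jar if not c["third_party"]]

jar = [
    {"name": "site_login", "third_party": False},  # first-party: kept
    {"name": "freq_cap",   "third_party": True},   # useful only in-session
]
on_session_end(jar)
print([c["name"] for c in jar])  # -> ['site_login']
```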


Why no one can define “premium” inventory

By Eric Picard (Originally published on iMediaConnection.com on June 17th, 2013)

What is premium inventory? The simple answer is that it’s inventory that the advertiser would be happy to run its advertising on if it could manually review every single publisher and page that the ad was going to appear within.

When buyers make “direct” media buys against specific content, they get access to this level of comfort, meaning that they don’t have to worry about where their media dollars end up being spent. But this doesn’t scale well across more than a few dozen sales relationships.

To address this problem of scale, buyers extend their media buys through ad networks and exchange mechanisms. But in this process, they often lose control over where their ads will run. Theoretically the ad network acts as a proxy for the buyer in order to support the need for “curation” of the ad experience, but this is usually not the case. Ad networks don’t actually have the technology to handle curation of the advertising experience (i.e., monitoring the quality of the publishers and pages they are placing advertising on) at scale any more than the media buyer does, which leads to frequent problems of low-quality inventory on ad networks.

Now apply this issue to the new evolution of real-time bidding and ad exchanges. A major problem with buying on exchanges is that the curation problem gets dropped back in the laps of the buyers across more publishers and pages than they can manually curate, which requires a whole new set of skills and tools. But the skills aren’t there yet, and the problem hasn’t been handled well by the various systems providers. So the agencies build out trading desks where that skillset is supposed to live, but the quality of the end results is highly suspect, as we’re seeing from all the recent articles on fraud.

So the true answer to this conundrum of what is premium must be to find scalable mechanisms that ensure a brand’s quality goals are met for the inventory its advertising runs against. The market needs to be able to efficiently execute media buys against high-quality inventory at media prices that buyers are comfortable paying — if not happy to pay.

The definition of “high quality” is an interesting problem with which I’ve been struggling. Here’s what I’ve come up with: Every brand has its own point of view on “high quality” because it has its own goals and brand guidelines. A pharma advertiser might want to buy ad inventory on health websites, but it might want to only run on general health content, not content that is condition specific. Or an auto advertiser might want to buy ad inventory on auto-related content, but not on reviews of automobiles.

Most brands obviously want to avoid porn, hate speech, and probably gambling pages — but what about content that is very cluttered with ads or where the page layout is so ugly that ads will look like crap? Or pages that are relatively neutral — meaning not good, but not horrible?

Then we run into a problem that nobody has been willing to bring up broadly, but it’s one that gets talked about all the time privately: Inventory is a combination of publisher, page, and audience.

How are we defining audience today? There’s blended data such as comScore or Nielsen data, which use methodologies that are in some cases vetted by third parties, but relatively loosely. There’s first-party data such as CRM, retargeting, or publisher registration data, which will vary broadly in quality based on many issues but are generally well understood by the buyer and the seller. And there’s third-party data from data companies. But frankly, nobody is rating the quality of this data. Even on a baseline level, there are no neutral parties evaluating the methodology used from a data sciences point of view to validate that the method is defensible. And as importantly, there is no neutral party measuring the accuracy of the data quantitatively (e.g., a data provider says that this user is from a household with an income above $200,000, but how have we proven this to be true?).

When we talk about currency in this space, we accept whatever minimum bar that the industry has laid down as truth via the Media Rating Council, hold our nose, and move forward. But we’ve barely got impression guidelines that the industry is willing to accept, let alone all of these other things like page clutter and accuracy of audience data.

And even more importantly, nobody is looking at all the data (publisher, page, audience) from the point of view of the buyer. And as we discussed above, every buyer — and potentially every campaign for every brand — will view quality very differently. Because the skillset of measuring quality is in direct competition with the goal of getting budgets spent efficiently — or what some might call scale — nobody wants to talk about this problem. After all, if buyers start getting picky about the quality of the inventory on any dimension, the worry is that they might reduce the scale of inventory available to them. The issues are directly in conflict with each other. Brand safety, inventory quality, and related issues should be handled as a separate policy matter from media buying, as the minimum quality bar should not be subject to negotiation based on scale issues. Running ads on low-quality sites is a bad idea from a brand perspective, and that line shouldn’t be crossed just to hit a price or volume number.

So instead we talk about the issue sitting in front of our nose that has gotten some press: fraud. The questions that advertisers are raising about our channels center around this concern. But the advertisers should be asking lots of questions about the broader issue — which is, “How are you making sure that my ads are running on high-quality inventory?” Luckily there are some technologies and services on the market that can help provide quality inventory at scale, and this area of product development is only going to get better over time.

Which Type Of Fraud Have You Been Suckered Into?

By Eric Picard (Originally published by AdExchanger.com on May 30th, 2013)

For the last few years, Mike Shields over at Adweek has done a great job of calling out bad actors in our space.  He’s shined a great big spotlight on the shadowy underbelly of our industry – especially where ad networks and RTB intersect with ad spend.

Many kinds of fraud take place in digital advertising, but two major kinds are significantly affecting the online display space today. (To be clear, these same types of fraud also affect video, mobile and social. I’m just focusing on display because it attracts more spending and it’s considered more mainstream.) I’ll call these “page fraud” and “bot fraud.”

Page Fraud

This type of fraud is perpetrated by publishers who load many different ads onto one page.  Some of the ads are visible, others hidden.  Sometimes they’re even hidden in “layers,” so that many ads are buried on top of each other and only one is visible. Sometimes the ads are hidden within iframes that are set to 1×1 pixel size (so they’re not visible at all). Sometimes they’re simply rendered off the page in hidden frames or layers.

It’s possible that a publisher using an ad unit provided by an ad network could be unaware that the network is doing something unscrupulous – at least at first. But such publishers are like pizza shops that sell more pizzas than the flour they’ve purchased could possibly make: they may be unaware of the exact nature of the bad behavior, but they must eventually realize that something funny is going on. The discrepancy is plain to any publisher who compares the number of page views it’s getting with the number of ad impressions it’s selling. So I don’t cut them any slack.

This page fraud, by the way, is not the same thing as “viewability,” which concerns below-the-fold ads that never render visibly on the user’s page. Page fraud is perpetrated by the company that owns the web page on which the ads are supposed to be displayed. These publishers participate knowingly, by either programming their web pages with these fraudulent techniques or using networks that sell fake ad impressions on their web pages.

There are many fraud-detection techniques you can employ to make sure that your campaign isn’t the victim of page fraud. And there are many companies – such as TrustMetrics, Double Verify and Integral Ad Science – that offer technologies and services to detect, stop and avoid this type of fraud. Foiling it requires page crawling as well as advanced statistical analysis.
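
As a sketch of what the page-crawling side of that detection might look like, here’s a minimal check for hidden ad slots; the field names are invented, and real services layer statistical analysis across many pages on top of checks like this:

```python
# Minimal page-fraud check: flag ad iframes that are 1x1, zero-sized, or
# rendered invisibly/off the page. The field names are invented; a real
# detector crawls rendered pages and analyzes patterns across a whole site.
def suspicious_ad_slots(ad_iframes: list) -> list:
    flagged = []
    for frame in ad_iframes:
        too_small = frame["width"] <= 1 or frame["height"] <= 1
        hidden = frame.get("css_display") == "none" or frame.get("off_page", False)
        if too_small or hidden:
            flagged.append(frame["src"])
    return flagged

page_frames = [
    {"src": "adserver.example/ad1", "width": 300, "height": 250},
    {"src": "adserver.example/ad2", "width": 1,   "height": 1},   # stacked pixel
    {"src": "adserver.example/ad3", "width": 300, "height": 250, "off_page": True},
]
print(suspicious_ad_slots(page_frames))  # -> ['adserver.example/ad2', 'adserver.example/ad3']
```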

Bot Fraud

This second type of fraud, which can be perpetrated by a publisher or a network, is a much nastier kind of fraud than page fraud. It requires real-time protection that should ultimately be built into every ad server in the market.

Bot fraud happens when a fraudster builds a software robot (or bot) – or uses an off-the-shelf bot – that mimics the behavior of a real user. Simple bots pretend to be a person but behave in a repetitive way that can be quickly identified as nonhuman; perhaps the bot doesn’t rotate its IP address often and creates either impressions or clicks faster than humanly possible. But the more sophisticated bots are very difficult to differentiate from humans.

Many of these bots are able to mimic human behavior because they’re backed by “botnets” that sit on thousands of computers across the world and take over legitimate users’ machines.  These “zombie” computers then bring up the fraudsters’ bot software behind the scenes on the user’s machine, creating fake ad impressions on a real human’s computer.  (For more information on botnets, read “A Botnet Primer for Advertisers.”) Another approach that some fraudsters take is to “farm out” the bot work to real humans, who typically sit in public cyber cafes in foreign countries and just visit web pages, refreshing and clicking on ads over and over again. These low-tech “botnets” are generally easy to detect because the traffic, while human and “real,” comes from a single IP address and usually from physical locations where the heavy traffic seems improbable – often China, Vietnam, other Asian countries or Eastern Europe.
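
Here’s a toy version of the heuristics that catch simple bots: inter-event gaps faster than a human could manage, or improbable volume from a single IP. The thresholds are invented, and sophisticated bots defeat checks this naive:

```python
from collections import defaultdict

MIN_HUMAN_GAP_SECONDS = 0.5    # invented threshold: faster is suspect
MAX_EVENTS_PER_IP = 120        # invented threshold: single-IP farm pattern

def flag_bots(events: list) -> set:
    """events: (user_id, ip, timestamp_seconds) tuples, sorted by time.
    Toy heuristics only; the per-IP count runs over the whole list
    rather than a proper sliding window."""
    last_seen = {}
    per_ip = defaultdict(int)
    flagged = set()
    for user_id, ip, ts in events:
        if user_id in last_seen and ts - last_seen[user_id] < MIN_HUMAN_GAP_SECONDS:
            flagged.add(user_id)   # clicking faster than humanly possible
        last_seen[user_id] = ts
        per_ip[ip] += 1
        if per_ip[ip] > MAX_EVENTS_PER_IP:
            flagged.add(user_id)   # improbable volume from one IP
    return flagged

events = [("u1", "198.51.100.9", t * 0.1) for t in range(10)]
print(flag_bots(events))  # -> {'u1'}
```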

Many companies have invested a lot of money to stay ahead of bot fraud. Google’s DoubleClick ad servers already do a good job of avoiding these types of bot fraud, as do Atlas and others.

Anecdotally, though, newer ad servers such as the various DSPs seem to be having trouble with this; I’ve heard examples through the grapevine on pretty much all of them, which has been a bit of a black eye for the RTB space. This kind of fraud has been around for a very long time and only gets more sophisticated; new bots are rolled out as quickly as new detection techniques are developed.

The industry should demand that its ad servers take on the problem of bot-fraud detection, as it really can only be handled at scale by significant investment – and it should be built right into the core campaign infrastructure across the board. Much like the issues of “visible impressions” and verification that have gotten a lot of play in the industry press, bot fraud is core to the ad-serving infrastructure and requires a solution that uses ad-serving-based technology. The investment is marginal on top of the existing ad-serving investments that already have been made, and all of these features should be offered for free as part of the existing ad-server fees.

Complain to – or request bot-fraud-detection features from – your ad server, DSP, SSP and exchange to make sure they’re prioritizing feature development properly. If you don’t complain, they won’t prioritize this; instead, you’ll get less-critical new features first.

Why Is This Happening?

I’ve actually been asked this a lot, and the question seems to indicate a misunderstanding – as if it were some sort of weird “hacking” being done to punish the ad industry. The answer is much simpler:  money.  Publishers and ad networks make money by selling ads. If they don’t have much traffic, they don’t make much money. With all the demand flowing across networks and exchanges today, much of the traffic is delivered across far more and smaller sites than in the past. This opens up significant opportunities for unscrupulous fraudsters.

Page fraud is clearly aimed at benefiting the publisher, but it also benefits the networks. Bot fraud is a little less clear – and I do believe that some publishers who aren’t aware of fraud are getting paid for bot-created ad impressions. In these cases, the network that owns the impressions has configured the bots to drive up its revenues. But like I said above, publishers have to be almost incompetent not to notice the difference between the number of impressions delivered by a bot-fraud-committing ad network and the numbers provided by third parties like Alexa, comScore, Nielsen, Compete, Hitwise, Quantcast, Google Analytics, Omniture and others.

Media buyers should be very skeptical when they see reports from ad networks or DSPs showing millions of impressions coming from sites that clearly aren’t likely to have millions of impressions to sell. And if you’re buying campaigns with any amount of targeting – especially something that should significantly limit available inventory, such as geo or income – or with frequency caps, you need to be extra skeptical when reviewing your reports, or use a service that does that analysis for you.
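
One way to operationalize that skepticism is a simple sanity check of network-reported impressions against an independent traffic estimate; the numbers and site names below are invented:

```python
# Toy sanity check: compare network-reported impressions per site against
# an independent monthly page-view estimate (comScore, Quantcast, etc.).
# All numbers and site names are invented for illustration.
def improbable_sites(reported_impressions: dict, est_monthly_pageviews: dict,
                     max_ads_per_page: int = 5) -> list:
    flagged = []
    for site, impressions in reported_impressions.items():
        ceiling = est_monthly_pageviews.get(site, 0) * max_ads_per_page
        if impressions > ceiling:
            flagged.append(site)
    return flagged

reported = {"tiny-blog.example": 8_000_000, "bignews.example": 2_000_000}
estimates = {"tiny-blog.example": 40_000, "bignews.example": 90_000_000}
print(improbable_sites(reported, estimates))  # -> ['tiny-blog.example']
```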

How ad platforms work (and why you should care)

(By Eric Picard, Originally Published in iMediaConnection, 11/8/12)

Ad platforms are now open, meaning that startups and other technology companies can plug into them and take advantage of their feature sets. The ad technology space is now API driven, just like the rest of the web technology space. The significance of this change hasn’t hit a lot of people yet, but it will: it will touch almost every company in ad technology and have an impact on everything: buying, selling, optimization, analytics, and investing.

Companies in our space used to have to build out the entire ad technology “stack” in order to build a business. That meant ad delivery (what most people think of as “ad serving”), event counting (impressions, clicks, conversions, and rich media actions), business intelligence, reporting, analytics, billing, etc. After building out all those capabilities, in a way that can scale significantly, each company would build its “differentiator” features. Many companies in the ad technology space have been created based on certain features of an ad platform. But because the ad platforms in our space were “closed,” each company had to build its own ad platform every time. This wasted a lot of time and money and — unbeknownst to investors — created a huge amount of risk.

Almost every startup in the ad platform space has existed at the whim of Google — specifically because of DoubleClick, the most ubiquitous ad platform in the market. When Google acquired DoubleClick, its platform was mostly closed (didn’t have extensive open APIs), and its engineering team subsequently went through a long cycle of re-architecture that essentially halted new feature development for several years. The market demanded new features — such as ad verification, brand safety, viewable impressions, real-time bidding, real-time selling, and others — that didn’t exist in DoubleClick’s platform or any others with traction in the space.

This led to the creation of many new companies in each space where new features were demanded. In some cases, Google bought leaders in those spaces. In others, Google has now started to roll out features that replicate the entirety of some companies’ product offerings. The Google stack is powerful and broad, and the many companies that have built point solutions based on specific features that were once lacking in Google’s platform suddenly are finding themselves competing with a giant who has a very advanced next-generation platform underlying it. Google has either completed or is in the process of integrating all of its acquisitions on top of this platform, and it has done a great job of opening up APIs that allow other companies to plug into the Google stack.

I’ve repeatedly said over the years that at the end of the natural process this industry is going through, we’ll end up with two to three major platforms (possibly four) driving the entire ecosystem, with a healthy ecosystem of other companies sitting on top of them. Right now, our ecosystem isn’t quite healthy — it’s complex and has vast numbers of redundancies. Many of those companies aren’t doing great and are likely to consolidate into the platform ecosystem in the next few years.

So how does the “stack” of the ad platform function? Which companies are likely to exist standalone on top of the stack? Which will get consumed by the stack? And which companies are going to find themselves in trouble?

Let’s take a look.

[Diagram: the ad platform technology “stack,” showing buckets of services and modules]

Pretty much every system is going to have a stack that contains buckets of services and modules that contain something like what you see above. In an ideal platform, each individual service should be available to the external partner and should be consumable by itself. The idea here is that the platform should be decomposable such that the third party can use the whole stack or just the pieces it needs.

Whether we’re discussing the ubiquitous Google stack or those of competitors like AppNexus, the fact that these platforms are open means that, instead of building a replica of a stack like the one above, an ad-tech startup can now just build a new box that isn’t covered by the stack (or stacks) that it plugs into. Thus, those companies can significantly differentiate.
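
For example, a startup might consume just the event-counting service of an open stack and build its differentiator on top. The endpoints below are hypothetical, not any real platform’s API:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical open ad platform where each stack service is separately
# consumable. The startup uses only the event-counting plumbing and puts
# its engineering effort into its own differentiating logic.
PLATFORM = "https://api.adplatform.example/v1"   # invented endpoint

def count_events(campaign_id: str) -> dict:
    """Consume just the event-counting service, not the whole stack."""
    resp = requests.get(f"{PLATFORM}/events", params={"campaign": campaign_id})
    resp.raise_for_status()
    return resp.json()  # e.g. {"impressions": 1200000, "clicks": 3400}

def differentiated_metric(campaign_id: str) -> float:
    """The startup's own value-add, layered on the platform's plumbing."""
    events = count_events(campaign_id)
    return events["clicks"] / max(events["impressions"], 1)
```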

This does beg the question of whether a company can carve out a new business that won’t just be added as a feature set by the core ad platform (instantly creating a large well-funded competitor). To understand this, entrepreneurs and investors should review the offering carefully: How hard would it be to build the features in question? Is the question of growing the business one of technical invention requiring patents and significant intellectual property, or is it one of sales and marketing? Is the offering really a standalone business, or is it just a feature of an ad platform that one would expect to be there? And finally, will the core platforms be the acquirer of this startup or can a real differentiated business be created?

The next few years will be interesting. You can expect these two movements to occur simultaneously: Existing companies will consolidate into the platforms, and new companies will be created that take advantage of the new world — but in ways that require less capital and can fully focus on differentiation and the creation of real businesses of significance.

How Do Companies Make Any Money in Digital?

(By Eric Picard, Originally Published in AdExchanger 10/25/12)

In 2007 I wrote a paper that analyzed the lack of investment from 2001 to 2006 in the basic infrastructure of ad technology.  The dot-com bubble burst had a chilling effect on investment in the ad tech space, and as an industry we focused for about six years on short term gains and short term arbitrage opportunities.

This period saw the rise of ad networks and was all about extracting any value possible out of existing infrastructure, systems, and inventory. So all the “remnant” inventory in the space – the stuff the publishers’ in-house sales forces couldn’t sell – got liquidated at cheap prices. And those companies with the willingness to take risk and the smarts to invest in technology to increase the value of remnant got off the ground, succeeded in making buying and selling more efficient, and lived off the margins they created.

But we lost an entire cycle of innovation that could have driven publisher revenue higher on premium inventory – which is required for digital to matter for media companies. There’s been lots of discussion about the drop from dollars to dimes (and more recently to pennies) for traditional media publishers. And while the Wall Street Journal and New York Times might be able to keep a pay-wall intact for digital subscriptions, very few other publications have managed it.

In 2006 the ad tech ecosystem needed a massive influx of investment in order for digital to flourish from a publisher perspective.  These were my observations and predictions at the time:

  • Fragmentation was driving power from the seller to the buyer.
  • A lack of automation meant cost of sales for publishers, and cost of media buying management for agencies, were vastly higher in digital (greater than 10x what those things cost for traditional on both the buy and sell side).
  • Prices had stagnated in the digital space because of an over-focus on direct-response advertisers and the space’s inability to attract offline brand dollars.
  • Market inefficiency had created a huge arbitrage opportunity for third parties to scrape away a large percentage of revenue from publishers. Where there is revenue, investment will follow.
  • There was a need for targeting and optimization that existing players were not investing in, because the infrastructure that would empower it to take off didn’t exist yet.
  • Significant investment would soon come from venture capital sources that would kick start new innovation in the space, starting with infrastructure and moving to optimization and data, to drive brand dollars online.

Six years later, this is where we are. I predicted pretty successfully what would happen, but what I didn’t predict was how long it would take – nor that the last item, the one about brand dollars, would require six years. This is mainly because I expected new technology companies to step up to bat across the entirety of what I was describing. Given that the most upside is in brand dollars, I expected entrepreneurs and investors to focus their efforts there. But that hasn’t been the case.

So what’s the most important thing that has happened in the last six years?

The entire infrastructure of the ad industry has been re-architected, and redeployed.  The critical change is that the infrastructure is now open across the entire “stack” of technologies, and pretty much every major platform is open and extensible. This means that new companies can innovate on specific problems without having to build out their own copy of the stack.  They can build the pieces they care about, the pieces that add specific value and utility for specific purposes – e.g. New Monetization Models for Publishers and Brand Advertisers, New Ad Formats, New Ad Inventory Types, New Impression Standards, New Innovation across Mobile, Video and Social, and so on.

So who will make money in this space, how will they make it, and how much will they make?

I’ve spent a huge portion of my career analyzing the flow of dollars through the ecosystem. Recently I updated an older slide that shows (it’s fairly complex) how dollars versus impressions flow.

The important thing to take away from this slide is that inventory owners are where the dollars pool, whether the inventory owner is a publisher or an inventory aggregator of some kind.  Agencies have traditionally been a pass-through for revenue, pulling off anywhere from 2 to 12% on the media side (the trend has been lower, not higher), and on average 8 to 10% on the creative side depending on scale of the project.  Media agencies are not missing the point here, and have begun to experiment with media aggregation models, which is really what the trading desks are – an adaptation of the ad network model to the new technology stack and from a media agency point of view.

The piece of this conversation that’s relevant to ad tech companies is that so far in the history of this industry, ad technology companies don’t take a large percentage of spend.  In traditional media, the grand-daddy is Donovan Data Systems (now part of Media Ocean), and historically they have taken less than 1% of media spend for offline media systems. In the online space, we’ve seen a greater percentage of spend accrue to ad tech – ad serving systems for instance take anywhere from 2 to 5% of media spend.

So how do ad tech companies make money today and going forward? It’s a key question for pure transactional systems or other pure technology like ad servers, yield management systems, analytics companies, billing platforms, workflow systems, targeting systems, data management platforms, content distribution networks, and content management systems.

There’s only so much money that publishers and advertisers will allow to be skimmed off by companies supplying technology to the ecosystem. In traditional media, publishers have kept their vendors weak – driving them way down in price and percentage of spend they can pull off. This is clearly the case in the television space, where ad management systems are a tiny fraction of spend – much less than 1%.

In the online space this has been less the case, and a technology vendor can drive significantly more value than in the offline space. But it’s still unlikely that the marketplace will accept more than 10% of total media spend going to pure technology licensing.

This means that for pure-play ad tech companies with a straightforward technology license model – whether it’s a fixed fee, volume-based pricing, or a percentage of spend – the only way to get big is to touch a large piece of the overall spend. That means scaled business models that reach a large percentage of ad impressions. It also means that ultimately there will only be a few winners in the space.

But that’s not bad news. It’s just reality. And it’s not the only game in town. Many technology companies have both a pure-technology model and some kind of marketplace model where they participate in the ecosystem as an inventory owner. And it’s here that lots of revenue can be brought into a technology company’s wheelhouse. But it’s important to be very clear about the difference between top-line media spend and ‘real’ revenue. Most hybrid companies – think Google for AdSense, or other ad networks – report media spend for their marketplaces as revenue, rather than the revenue they keep. This is an acceptable accounting practice, but it isn’t a very good way to understand the value of the companies in question. So ‘real revenue’ is always the important number for investors to keep in mind when evaluating companies in this space.

Many ad technology companies will unlock unique value that they will be the first to understand. These technology companies can capitalize on this knowledge by hybridizing into an inventory owner role as well as pure technology – and these are the companies that will break loose bigger opportunities. Google is a great example of a company that runs across the ecosystem – as are Yahoo, Microsoft and AOL.  But some of the next generation companies also play these hybrid roles, and the newest generation will create even greater opportunities.

Entering the Fourth Wave of Ad Technology

By Eric Picard (Originally published on AdExchanger.com, 9/18/2012)

Ad tech is a fascinating and constantly evolving space. We’ve seen several ‘waves’ of evolution in ad tech over the years, and I believe we’re just about to enter another. The cycles of investment and innovation are clearly linked, and we can trace this all back to the late ’90s, when the first companies entered the advertising technology space.

Wave 1

The early days were about the basics – we needed ways to function as a scalable industry, ways to reach users more effectively, systems to sell ads at scale, systems to buy ads at scale, analytics systems, targeting systems, and rich media advertising technology.

There was lots of investment and hard work in building out these 1.0 version systems in the space. Then the dot-com bubble imploded in 2001, and a lot of companies went out of business.  Investment in the core infrastructure ground to a halt for years. The price of inventory dropped so far and so fast that it took several years before investment in infrastructure could be justified.

We saw this wave last from 1996 through 2001 or 2002 – and during that dot-com meltdown, we saw massive consolidation among companies who were all competing for a piece of a pie that dramatically shrank. But this consolidation was inevitable, since venture firms generally invest on a five-to-ten-year cycle of return – meaning that they want companies to exit within an eight-year window, ideally less.

Wave 2

The second wave was really about two things: Paid Search and what I think of as the “rise of the ad networks.” Paid search is a phenomenon most of us understand pretty well, but the ad network phase of the market – roughly 2001 to 2007 – was about arbitrage and remnant ad monetization. Someone realized that since we had electronic access to all this inventory, we could create a ‘waterfall’ of inventory from the primary sales source to secondary sources, and eventually a ‘daisy-chain’ of sources that created massive new problems of its own. But the genie was out of the bottle, and this massive supply of inventory – something no other industry sells – was loosed.

It’s actually a little sad to me, because as an industry we flooded the market with super cheap remnant inventory that has caused many problems. But that massive over-supply of inventory did allow the third wave of ad tech innovation to get catalyzed.

Wave 3

Most people believe that the third wave was around ad exchanges, real-time buying and selling, data companies, and what I like to call programmatic buying and selling systems. But those were really just side effects. The third wave was really about building out the next-generation infrastructure of advertising. Platforms like AppNexus and Right Media are not just exchanges; they’re fundamentally the next-generation infrastructure for the industry. Even the legacy infrastructure of the space got dramatic architectural overhauls in this period – culminating in the most critical system in our space (DoubleClick for Publishers) getting a massive Google-sponsored overhaul that, among other things, opened up the system via extensive APIs so that other technology companies could plug in.

Across the board, this new infrastructure has allowed the myriad ad tech companies to have something to plug into. This current world of data and real-time transactions is now pretty mature, and it’s extending across media types. Significant financial investments have been made in the third wave – and most of the money spent in the space has been used to duplicate functionality rather than innovate significantly on top of what’s been built. Some call these “me too” investments in companies that are following earlier startups and refining the model recursively. That makes a lot of sense, because generally it’s the first group of companies and the third group of companies in a ‘wave’ that get traction. But it leads to a lot of redundancy in the market that is bound to be corrected.

This wave lasted from about 2005 to 2011, when new investments began to center on the concepts established in Wave 3 – which really was a transition toward ad exchanges (then RTB) and big data.

That’s the same pattern we’ve seen over and over, so I’m confident of where the industry stands today and that we’re starting to enter a new phase. This third major ad tech wave was faster than the first, but a lot of that’s because the pace of technology adoption has sped up significantly with web services and APIs becoming a standard way of operating.

Wave 4

This new wave of innovation we’re entering is really about taking advantage of the changes that have now propagated across the industry. For the first time you can build an ad tech company without having to create every component in the ‘stack’ yourself. Startups can make use of all the other systems out there, access them via APIs, truly execute in the cloud, and build a real company without massive infrastructure costs. That’s an amazing thing to participate in, and it wasn’t feasible even three years ago.

So we’ll continue to see more of what’s happened in the third wave – with infrastructure investments for those companies that got traction, but that’s really just a continuation of those third wave tech investments, which go through a defined lifecycle of seed, early, then growth stage investments.  Increasingly we’ll see new tech companies sit across the now established wave 3 infrastructure and really take advantage of it.

Another part of what happened in Wave 3 was beyond infrastructure – it involved the scaled investment in big data.  There have been massive investments in big data, which will continue as those investments move into the growth phase. But what’s then needed is companies that focus on what to do with all that data – how to leverage the results that the data miners have exposed.

Wave 4 will really change the economics of advertising significantly – it won’t just be about increasing yield on remnant from $0.20 to $0.50. We’ll see new ad formats that take advantage of multi-modal use (cross device, cross scenario, dynamic creatives that inject richer experiences as well as information), and we’ll see new definitions of ad inventory, including new ad products, packages and bundles.

So I see the next five years as a period where a new herd of ad tech companies will step in and supercharge the space. All this infrastructure investment has been necessary, because the original ad tech platforms were built the wrong way to take advantage of modern programming methodologies.  Now with modern platforms available pretty ubiquitously, we can start focusing on how to change the economics by taking advantage of that investment.

I also think we’re going to see massive consolidation of the third wave companies. Most of the redundancies in the market will be cleaned up. Today we have many competitors fighting over pieces of the space that can’t support the number of companies in competition – and this is obvious to anyone studying the Lumascape charts.

Unfortunately, some of the earlier players who now have customer traction are finding that their technology investments are functionally flawed – they were too early and built out architectures that don’t take advantage of the newer ways of developing software. So we’ll see some of these early players with revenue acquiring smaller, newer players to take advantage of their newer, more efficient and effective architectures.

Companies doing due diligence on acquisitions need to be really aware of this – that buying the leader in a specific space that’s been around since 2008 may mean that to really grow that business they’ll need to buy a smaller competitor too – and transition customers to the newer platform.

For the investment community it’s also very important to understand that while the Wave 3 companies that survive the oncoming consolidation will be very big companies with very high revenues, these infrastructure-heavy investments will by nature carry lower margins and high-volume, low-price models to hit those high revenues. They will still operate on technology/software margins – over 80% gross margins are the standard that tech companies run after – but the Wave 3 companies have seen their gross margin numbers come in a bit lower than we’d like as an industry. This is because they are the equivalent of (very technically complex) plumbing for the industry. There are plenty of places where they invest in intelligence, but the vast majority of their costs and their value deal with the massive scale they can handle, while being open to all the players in the ecosystem to plug in and add value.

Being a Wave 4 company implicitly means that you are able to leverage the existing sunk cost of these companies’ investment.  Thomas Friedman talks about this in “The World is Flat” – one of his core concepts is that every time an industry has seen (what he called) over-investment in enabling infrastructure, a massive economic benefit followed that had broad repercussions.  He cites the example of railroad investment that enabled cheap travel and shipping that led to a massive explosion of growth in the United States.  He cites the investment in data infrastructure globally that led to outsourcing of services to India and other third world countries on a massive scale.  And frequently those leveraging the sunk cost of these infrastructure plays make much more money from their own investments than those who created the opportunity.

So what should investors be watching for as we enter this fourth wave of ad tech innovation?

  1. Companies that are built on light cloud-based architectures that can easily and quickly plug into many other systems, and that don’t need to invest in large infrastructure to grow
  2. Companies that take advantage of the significant investments in big data, but in ways that synthesize or add value to the big data analysis with their own algorithms and optimizations
  3. Companies that can focus the majority of technical resources on innovative and disruptive new technologies – especially those that either synthesize data, optimize the actions of other systems, or fundamentally change the way that money is made in the advertising ecosystem
  4. Companies that are able to achieve scale quickly because they can leverage the existing large open architectures of other systems from Wave 3, but that are fundamentally doing something different than the Wave 3 companies
  5. Companies that effectively take advantage of multiple ecosystems or marketplaces; these are risky, but will have extremely high rewards when they take off

This is an exciting time to be in this space – and I predict that we’ll see significant growth in revenue and capabilities as Wave 4 gets off the ground that vastly eclipse what we’ve seen in any of the other waves.

Why Media Companies Are Being Eaten By Tech Companies

By Eric Picard (Originally published on AdExchanger.com, August 20, 2012)

My friend and colleague Todd Herman once wrote a strategy paper about video content when we worked together at Microsoft. Called “Don’t be food,” it was a brilliant paper that laid out a strategy for effectively competing in a world where content is distributed everywhere by anyone. I love the concept of “Don’t be food.” It applies to so many existing business models, but it applies especially well where Todd aimed it: media.

The media business is being forcibly evolved through massive disruptions in content distribution. In the past, control over distribution was the primary driver of the media model. Printed material, radio and television content required a complex distribution model. Printing presses and distribution are expensive. Radio and television spectrum is limited, and cable infrastructure is expensive. Most media theory and practices have been deeply influenced by these long-term distribution issues, to the point that the media industry is quite rigid in its thinking and cannot easily move forward.

One of my favorite business case studies is the railroads. Railroad companies missed massive opportunities as new technologies such as the automobile and the airplane were adopted. They saw themselves as being in the “railroad business,” not the “transportation business.” Because of this they lost significant opportunities, and very few of the powerhouse companies from the rail era still exist.

In media, new technologies have been massively disrupting the space for more than a decade. And there is an ongoing debate about technology companies stepping in and disrupting the media companies. Google is a prominent example, and its recent acquisition of Frommer’s is yet another case where it has eaten a content company and continued to expand from pure technology into media.  But Google isn’t moving into media based on the existing rules that the media companies play by – it is approaching media through the lens of technology.

But this issue doesn’t only pertain to the oft-vilified Google: Amazon continues to disrupt the book industry by changing the distribution model through the use of technology, and is clearly gunning for magazine, radio and video content as well.  Microsoft is changing the engagement model and subsequently the distribution of content to the living room via its ever-expanding Xbox footprint, and is broadly expanding toward media with Windows 8, its new Surface tablet devices and smartphones – again using technology.  Apple has turned distribution models on their ears by creating a curated walled garden of myriad distribution vehicles (apps on devices), but charges a toll to the distributors – again using technology to disrupt the media space.  Facebook, Twitter and social media are now beginning to disrupt discovery and distribution in their own ways – barely understood, but again based on technology.

Existing media models are functionally broken – and will continue to be disrupted. Distribution is always a key facet of the overall media landscape, and will continue to be. But as distribution channels fragment and become more open, the role distribution plays will radically change. Distribution remains inherently important to the overall model of media, but it is no longer the key.

Technology is the key to the future of media. Technology can and has profoundly changed the way content is distributed, and will continue to do so. The future of media is wrapped up in technology, and this is an indicator of why technology companies are eating media companies’ lunches, if not actually consuming them in their entirety.

Media companies don’t understand technology because they are not run by technologists, and there is a vast gulf between the executive leadership of media companies and the understanding of technology they need. Every media company should be running significant education efforts to pass along the concepts needed to compete in the technology space, but I’m not convinced even that would be enough to fix the problems they face.

At Microsoft I once had an executive explain to me why most of the executives running businesses at the company were from a software background.  He said something along the lines of, “A super smart engineer who can wrap his or her head around platforms and technology issues can probably learn business concepts and issues faster than a super smart business person can learn technology.”  And he was right – it’s that simple.

Business schools should have requirements today for anyone graduating with an undergraduate or graduate degree to learn how to write software, and most importantly to develop a modern understanding of platforms. These platform models are the future of distribution, and are barely understood even among many technologists. The modern platform models used broadly on the Internet and to create software on devices that drive content distribution are relatively new, and are frequently not understood by people with technical backgrounds who haven’t spent time working with them.

Bad business decisions continue to be made by media companies because of the significant lack of technology leadership in both executive and middle management. As technology evolved, the model for many years was that business people figured out “Why and What” to build and “Where” to distribute it, and engineers figured out “How and When” something could be delivered.  Great technology companies break down the walls between Why, What, How, When and Where. Engineers have just as much say in all of those things as the business people. Great technology companies don’t treat engineers and technologists like “back room nerds.”  They recognize that engineering brilliance can be applied to the business problems facing them, and that technology innovation will drive their businesses to disrupt themselves toward future success.

Media companies must evolve away from their historical strengths based on distribution control, and must embrace technology as a key principle. And they need great engineers to do so. The problem is, great engineers won’t work for mediocre engineers. Great engineers won’t take bad direction from people they don’t respect, especially business people. And many media companies have treated their existing engineering organizations as an extension of traditional IT models, with mediocre engineering talent endemic in their organizations – frequently top to bottom.

Let me say again; great engineers will not work for mediocre engineers.  This means that the existing CTO and entire engineering infrastructure within a media company will not solve this problem. Before moving forward, executive leadership has to recognize that it is likely that their existing technology organization will fight, block and actively try to sabotage any efforts created outside of their own infrastructure. But it is very clear that without a significant change here, these companies are doomed.

For a traditional media company to compete effectively with Google, Amazon, Apple, Microsoft, Facebook, and the thousands of hot startups now competing with them, it must build a world-class engineering organization. This isn’t a light, fuzzy requirement; it’s fundamental to the company’s ability to survive for the next century. These companies must evolve forward. They must find ways to empower internal disruption.

Media companies must build startup organizations within their own internal structures that are isolated from the existing IT leadership and given bold, broad, empowered charters with the leeway to disrupt other teams’ businesses. They must build a new technology-driven culture within these large media companies that is separate from the existing groups, and then embrace those internal startups as the future of the company. This isn’t easy. It’s nearly impossible. And this very likely will not work the first time it’s tried. But if media companies don’t commit to this kind of change, they are going to be eaten.

How publishers sell ad inventory

By Eric Picard (Originally published on iMediaConnection, August 09, 2012)

Ad inventory is typically broken down into four buckets: sponsorships, premium guaranteed, audience targeted, and remnant. Each of these buckets can be sold through a variety of sales channels.

Revenue distribution across this “layer-cake” inventory model flows downward — the vast majority of revenue comes from sponsorships and premium guaranteed, with significantly less coming from the remainder.

The process of an advertising sale begins with the media buyer, who sends a request for proposal (RFP) document to numerous publishers. These RFPs typically are written in prose and define the overall goals of the advertiser in question, and of the specific campaign being executed. A typical RFP has between 50 and 100 elements that are laid before the publisher as acceptable or desirable outcomes, and these elements (attributes of the buy) are generally descriptors of the audience, of the media the advertiser is looking to run on, of the acceptable (and unacceptable) content to be associated with, etc.

Advertising inventory is the base unit sold by a publisher to an advertiser. It is measured in “impressions,” which are defined as an opportunity to show an advertisement to a person. Impressions at their most basic are blank vessels made up of opportunity. Inventory is generally defined in advance by the seller based on a variety of factors, and it is these predefined impressions that are contractually agreed upon between buyer and seller.

Nearly all impressions are initially sold on one or two media attributes based on content association (e.g., MSN>Entertainment or MSN>Entertainment>Celebrities; Yahoo>Autos or Yahoo>Autos>News). Or they’re sold purely by category, in some cases blind, meaning without knowledge of which publisher the impression will run on. Further refinement of the inventory is based on other media attributes, such as above-the-fold placement, rich media units, inventory quality scores, and other intelligence and filtering such as contextual targeting.

Beyond media attributes, there are numerous audience-based targeting attributes available for the buyer to request, or for the seller to offer. These include such attributes as geographic, demographic, psychographic, behavioral, etc.

It is the combination of these various attributes that defines the inventory being sold. Inventory is sold in a number of ways, including on a guaranteed basis (a buyer contracts with a seller for a fixed volume of inventory between specific dates) and on a non-guaranteed basis (if matching inventory is available, it will be sold, but the seller makes no guarantees on volume).
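
As a rough sketch of how those attribute combinations define sellable inventory, here is a simplified Python model. The field layout, attribute names, and prefix-matching rule are all assumptions made for illustration, not how any particular ad server represents them.

```python
from dataclasses import dataclass

# Simplified, hypothetical model: sold inventory is a content path plus
# required media and audience attributes.
@dataclass(frozen=True)
class InventoryDefinition:
    content_path: tuple       # e.g. ("MSN", "Entertainment")
    media_attrs: frozenset    # e.g. {"above_the_fold"}
    audience_attrs: frozenset
    guaranteed: bool          # guaranteed vs. non-guaranteed basis

@dataclass(frozen=True)
class Impression:
    content_path: tuple
    media_attrs: frozenset
    audience_attrs: frozenset

def matches(imp: Impression, inv: InventoryDefinition) -> bool:
    """An impression matches if its content path starts with the sold
    path and it carries every required media and audience attribute."""
    return (imp.content_path[:len(inv.content_path)] == inv.content_path
            and inv.media_attrs <= imp.media_attrs
            and inv.audience_attrs <= imp.audience_attrs)

line = InventoryDefinition(("MSN", "Entertainment"), frozenset(),
                           frozenset({"female", "18-34"}), guaranteed=True)
imp = Impression(("MSN", "Entertainment", "Celebrities"),
                 frozenset({"above_the_fold"}),
                 frozenset({"female", "18-34", "geo:US"}))
print(matches(imp, line))  # True
```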

In order to predict how much inventory will be available, publisher ad platforms need to look at historical data, factor in seasonality, and apply some very sophisticated algorithms to estimate how much inventory will exist during specific date ranges. These “avails,” as they are called, become the basis for all guaranteed ad sales.
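
A deliberately naive sketch of the idea (not any vendor’s actual algorithm): forecast a month’s avails by averaging the same calendar month across prior years and applying a growth factor. All numbers and names below are invented.

```python
import statistics

def forecast_avails(history, month, growth=1.0):
    """history: list of (year, month, impressions) tuples.
    Averages the same calendar month across prior years, then applies
    a growth factor. Real systems model seasonality far more carefully."""
    same_month = [imps for (_, m, imps) in history if m == month]
    if not same_month:
        raise ValueError("no history for that month")
    return int(statistics.mean(same_month) * growth)

history = [(2010, 8, 90_000_000), (2011, 8, 110_000_000)]
print(forecast_avails(history, month=8, growth=1.05))  # 105,000,000
```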

But ad inventory comes with complex, difficult-to-predict issues that are endemic to the problem. Predicting how many impressions will exist in a specific month is rather like predicting how many cars will cross the Golden Gate Bridge in a given week. Doing that from historical data isn’t too hard. Predicting the colors of those cars is probably feasible with some degree of accuracy. Maybe even predicting the general destinations of the cars is possible. But trying to predict how many red Toyotas driven by red-haired women with an infant in the car, who make more than $125,000 annually, will cross the bridge is probably not a solvable problem.

This is akin to the targeting requests made on a daily basis. This type of prediction is extremely challenging technically; nobody has been able to accurately predict in advance how much ad inventory will be available for more than three to four targeting attributes. Publishers therefore rarely sell inventory defined by more than three to four attributes, because doing so creates an immense amount of work for the publisher’s ad operations team during the live campaign. (They must monitor ad delivery carefully and adjust numerous settings to ensure the campaign delivers in full.)

Inventory is sold within a contract called an insertion order (I/O), and each sold element is typically called a “line item” on the I/O. Line items correspond to a variety of attributes within the publisher’s inventory management systems. A simple example would be MSN>Entertainment. But a more complex example would be MSN>Entertainment>Women>18-34.
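
The “A>B>C” notation above is just a path, and a trivial, hypothetical parser makes that explicit:

```python
# Hypothetical helper: split the "A>B>C" line-item notation into its
# component attributes.
def parse_line_item(spec: str) -> list[str]:
    return [part.strip() for part in spec.split(">")]

print(parse_line_item("MSN>Entertainment>Women>18-34"))
# ['MSN', 'Entertainment', 'Women', '18-34']
```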

Beyond a typical guaranteed media buy, there are several other mechanisms for selling ads. Some ads are re-sold by a third party such as an ad network (examples include Collective Media, ValueClick, Advertising.com, etc.). Some ads are sold through an automated channel such as a supply-side platform, or SSP (examples include Rubicon, Admeld, PubMatic, etc.). There are also ad exchanges that can sit in the middle of all the transactions, and as the industry has matured, the difference between an exchange and an SSP has become less clear. These exchanges and SSPs then create a marketplace that allows ad networks and various demand-side platforms (DSPs) to compete for the inventory in real time. We’ll refer to this as real-time bidding (RTB) even though in some cases this term doesn’t apply exactly.
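
To make those marketplace mechanics concrete, here is a toy sketch: an exchange or SSP collects bids from competing DSPs for a single impression and clears it at the second price, the common model in these marketplaces. The DSP strategies, floor price, and field names are all invented for illustration.

```python
def run_auction(bid_request, dsps, floor=0.10):
    """Collect one bid per DSP, drop passes and sub-floor bids, and
    clear at the second price (or the floor if only one bidder)."""
    bids = [(dsp, dsp(bid_request)) for dsp in dsps]
    bids = [(d, b) for d, b in bids if b is not None and b >= floor]
    if not bids:
        return None  # impression goes unsold or is passed back
    bids.sort(key=lambda pair: pair[1], reverse=True)
    winner, _ = bids[0]
    clearing = bids[1][1] if len(bids) > 1 else floor
    return winner.__name__, clearing

def brand_dsp(req):
    # Bids high, but only when its target segment is present.
    return 1.20 if "18-34" in req["audience"] else None

def dr_dsp(req):
    # Direct-response buyer bidding low on everything.
    return 0.35

req = {"audience": {"female", "18-34"}, "content": "Entertainment"}
print(run_auction(req, [brand_dsp, dr_dsp]))  # ('brand_dsp', 0.35)
```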

The management systems for buying RTB inventory are the demand-side platforms (DSPs) mentioned above. In RTB media buys, it is extremely rare to use more than three to four targeting attributes, just as in guaranteed media buys. Here the reason isn’t prediction; it’s that a campaign or line item with more than three to four attributes delivers extremely low volume, because the amount of matching inventory generally drops significantly with each targeting attribute you layer on. This means that a typical line item for an RTB campaign looks very much like one for a guaranteed buy: Entertainment>Women>18-34.
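
That drop-off is easy to see with back-of-envelope math: if each added attribute independently matches only a fraction of impressions, matching volume shrinks multiplicatively. The incidence rates below are invented, and real attributes are rarely independent, but the shape of the curve is the point.

```python
import math

# Invented incidence rates: the share of impressions matching each
# attribute (e.g. Entertainment content, female, 18-34, in-market, geo).
rates = [0.30, 0.50, 0.52, 0.20, 0.10]

def matching_volume(total_impressions, incidence_rates):
    # Assumes independence, so volume is the product of the rates.
    return int(total_impressions * math.prod(incidence_rates))

for k in range(1, len(rates) + 1):
    print(f"{k} attributes: {matching_volume(100_000_000, rates[:k]):,}")
# 1 attribute: 30,000,000 ... 5 attributes: 156,000
```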

For a DSP to spend an entire media buy against more than four targeting attributes, the buyer would have to manually create hundreds or thousands of line items, each of which would then have to be manually optimized and managed. That simply isn’t feasible at scale.

In summary
In a perfect world, advertisers would be able to find every available impression that matches their goals, using as many attributes as exist on those impressions. The problem is that existing inventory management and ad serving systems are not designed to deal well with more than three to four concurrent targeting attributes, whether for guaranteed media buys or RTB.

So why do advertisers and publishers prefer to transact on a guaranteed basis?

Inventory guarantees serve several purposes. The most critical is predictability; media buyers have agreed with the advertiser on a set advertising budget to be spent on a monthly basis throughout the year. They are contractually obligated to spend that budget, and it is one of their primary key performance indicators. Publishers like to have revenue predictability as well, which is solved by selling a guarantee on volumes for a fixed budget.

For all the innovation in the ad-tech space over the last decade, it’s fairly remarkable how few of a publisher’s core problems have been solved. At the end of the day, 60-80 percent of the revenue publishers bring in comes from their premium inventory, sold on a guaranteed basis, which generally represents less than half of their available inventory. Nearly all the ad technology innovation of the last decade has focused on what to do with the other half, in order to raise the median price of that inventory from nearly zero to a bit more than zero.
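
A quick worked example of what that concentration implies for pricing, using invented midpoints of the ranges above (70 percent of revenue from 40 percent of impressions):

```python
# Invented midpoints: premium takes ~70% of revenue on ~40% of
# impressions; everything else takes the remainder.
premium_rev, premium_imps = 0.70, 0.40
rest_rev, rest_imps = 1 - premium_rev, 1 - premium_imps

premium_ecpm_index = premium_rev / premium_imps  # 1.75
rest_ecpm_index = rest_rev / rest_imps           # 0.50
print(premium_ecpm_index / rest_ecpm_index)      # 3.5x price gap
```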

It seems to me that there is an opportunity to focus on something else. (And you might imagine that I’m doing just that.)

Why the display ecosystem might implode

By Eric Picard (Originally published on iMediaConnection, November 2011)

I sat on a panel at OMMA Display on Monday, and the discussion was designed to determine whether ad exchanges would be relegated to the land of direct-response advertising or whether they could foster brand-friendly environments.

I’ve written a lot over the past few years about the future of display, and the issues we face and need to overcome. But this panel brought up many key issues that I thought I’d take a quick look at in this article.

Creative formats for display are really awful
If we don’t solve this, we might just need to give up. Display ads are simply too small to deliver an effective brand experience. Even “brand-friendly” banner units like the venerable 300×250 are too small. Are we really saying that pre-roll is the best we can do?

I suggest we think through the issue of brand-friendly space and fix websites to accommodate it. Strip all the banners off every page of a major publisher and replace them with a brand-friendly unit that gives the advertiser a great venue for brand content while remaining user friendly. It’s not so hard; there are all sorts of vehicles to use here.

My personal favorite is the “sliding ad unit,” which moves the content down for a moment on page load, then retracts to reveal a “leave behind” unit that the consumer can explore (re-expanding the ad) if interested. I like this better than over-the-page ads that cover the content. But even expanding ads work well for this kind of thing, as long as they don’t expand on mouse-over. (My last startup, Bluestreak, pioneered expanding ads back in 1997, so I’ve thought about this a lot.) They should expand for one to three seconds on page load and only re-expand on clicks. If they expand quickly on page load, retract to show that they’re there and interactive, and keep the entire expansion and retraction under three to five seconds, consumers won’t push back too badly.

Targeted reach is critical to brand advertisers
Brand advertisers will pay to reach audiences that they define. They don’t need a conversion tracked, nor do they need CPA measured during the life of the campaign. They don’t even need clicks tracked, except that we’ve fought so hard to convince them clicks matter that they’ve finally shrugged their shoulders and said, “Fine, show me the clicks.” Too many people in our industry are drinking their own Kool-Aid.

Why do people still argue with me that GRPs and TRPs are not what we should use? They’re good-enough metrics for massive amounts of ad spend: tens of billions of dollars, in fact. And we have the arrogance in this industry to simply refuse to adopt and promote something that people with money have been requesting for more than 15 years. Really? The customer isn’t right? You know better? They have money to spend.

I get worked up on this topic; it’s ridiculously stupid that we won’t sell a product that customers with big budgets would like to buy from us. And the argument I continue to hear from the mouths of smart people? “We can do better.” This is a fool’s errand. When someone says, “I’m thirsty, and I’d like to buy a nice bottle of seltzer water from you,” the right response isn’t, “No, that’s not what you want. We sell the water and bubbles separately. It’s much, much more effective that way; I have data to prove it.” But that’s what we do. They keep saying, “But I just want a nice bottle of seltzer water.” And we keep telling them to pound sand.

More than this — we keep building incredibly complex tools to manage buying and selling in our space. That makes it very hard and inefficient for brand media buyers to adopt online display since they can get massive reach at a reasonable price from traditional media — but not so much from online.

So I have an idea: Let’s sell customers something they actually want, gross and targeted reach and frequency (GRPs and TRPs) that mesh well with the cross-media plans they already build, and let’s give them tools that don’t require an advanced math degree to use.

And still I’m going to get comments on this article saying that “we can do better than GRPs and TRPs.” Fantastic: go do that. But why not give buyers GRPs and TRPs too? Does it hurt to give customers what they’re asking for? Calculating GRPs and TRPs isn’t that hard.
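
To back that up, here is the standard media math in a minimal Python sketch. GRPs are gross impressions expressed as a percentage of the universe (equivalently, reach percentage times average frequency); TRPs are the same calculation restricted to the target audience. The audience sizes below are invented.

```python
# Standard media math (industry definitions, not any vendor's API):
# GRPs = 100 * gross impressions / universe size
#      = reach% * average frequency

def grps(gross_impressions: int, universe: int) -> float:
    return 100.0 * gross_impressions / universe

def trps(target_impressions: int, target_universe: int) -> float:
    # TRPs are simply GRPs computed against the target audience only.
    return grps(target_impressions, target_universe)

# Invented numbers: 50M impressions against a 200M-person universe,
# of which 12M impressions reach a 60M-person target audience.
print(grps(50_000_000, 200_000_000))  # 25.0 GRPs
print(trps(12_000_000, 60_000_000))   # 20.0 TRPs
```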

Build tools that are ideal for brands to use and that make it really easy to buy online display advertising in ways that make sense in the context of all the other money they spend. Give consumers better ad formats that actually are great venues to showcase brand ads with emotional impact. It’s not hard technically. It just requires some group consensus. And I fear that we’re not going to pull it off — despite its relative simplicity.

On the OMMA panel I was on, three of five panelists said that they felt that there was a real chance that the economics of display advertising could implode over the next few years. I was on the dissenting side of this panel. Let’s not make me a liar, shall we?