Category Archives: Publishers

Why Facebook will ‘own’ brand advertising

(Originally published in iMediaConnection, February 2012) by Eric Picard

I’ve been watching and reading the Facebook IPO announcement frenzy with curiosity. The most curious meme floating around is the one that pooh-poohs its strike price, market cap, and valuation because its ad business “clearly isn’t going to be able to sustain growth the way Google’s did” — to which I call BS.

Here’s why Facebook will ultimately be the powerhouse in brand advertising online (and eventually offline as well):

Facebook is a platform

To really do this one justice, I’d need to write a whole article about the power of platforms and explain why platform effects are almost impossible to defeat once they’ve started. Platform effects are similar to network effects, so let’s start there in case you’re one of the 20 people left on the planet who haven’t learned about them. Network effects arise when each additional user of a platform or network makes it more valuable to every other user. Telephones are the classic example: the more people who have a phone, the more valuable the phone network is to its users, so more people get telephones. Facebook has cracked that nut — it’s a vast social network, and network effects have made avoiding a Facebook account almost as difficult as going without a telephone or an email address.

Platform effects are similar, but even stickier: They come from opening a platform to third-party developers. Once developers are creating software that relies on a platform, the platform becomes more useful and therefore more widely adopted by end users. This has been proven repeatedly: Windows originally beat the Mac because so many software developers and hardware manufacturers supported the Windows PC platform. Apple has of course had the last laugh there, with the iPod/iPhone/iPad app marketplace taking a page right out of Microsoft’s playbook and kicking Microsoft in the teeth.

Facebook is a platform that consumer-facing applications like Zynga’s games have made good use of. But it is also a massive data and business-to-business platform, which has been less broadly publicized but is beginning to gain adoption. And that part of its platform, tied to the data from the consumer side, is why advertising will ultimately bow to Facebook (barring some horrible misstep on its part).

Facebook takes user data in return for free access to the Facebook platform

Facebook requires all users to opt into its platform — and despite all the various privacy debates and discussions about Facebook, it is actually pretty good about being transparent and providing value to users in return for sharing all sorts of data.

Facebook is right now (my opinion — open to debate) the most authoritative source of data on consumers, their interests, and brand affiliations. It’s going to grow and become more comprehensive, meaning that it will become the main source of all data used by brand advertisers to reach targeted users.

To my mind this is already destined to happen — and locked up, because Facebook is a platform. It builds content that no media company would be able to build (social content). So in that way it really doesn’t compete with online publishers. Online publishers have wisely adopted Facebook as a distribution platform, as well as an authentication platform for allowing consumers to access their content.

It’s only a matter of time before publishers become so intertwined with Facebook’s platform that all their content becomes effectively part of the Facebook platform. But not in a way that publishers should be worried about Facebook disintermediating them. If Facebook is smart, it will work this out now and find a way to give publishers what they want in return for this: Let the publishers own their own targeting data, and work out a way to help them make more money without losing that data ownership.

Facebook will own brand advertising, and will not need to own direct response

Most of the wonks in the ad space are pooh-poohing Facebook because of a near-sighted overemphasis on direct response advertising. They believe in this false premise because of a single proof point: Google paid search advertising. The idea goes something like this: “Google owns advertising online, and it got there by creating a vast pool of inventory that sells at extremely high CPMs (because it sits so close to the purchase on the purchase funnel). Most of the online ad industry has been focused on DR for its entire existence, so DR is where online must go. And since Facebook’s ad inventory sits further ‘up’ the purchase funnel than Google’s, Facebook will never justify a high enough CPM to compete for supremacy in the online space.”

The wonks are wrong on this topic. Google undisputedly “owns” paid search advertising. But the entire paid search market is made up of something close to 250 billion monthly ad impressions. Google earns a very high premium on those ads — around a $75 CPM. Facebook, however, has many more ways to play in the ad space than Google, and a lot more inventory to play with. Estimates put display ad volume well above 5 trillion monthly impressions, and Facebook serves a huge percentage of them. Because Facebook can cater to brands, it can be an efficient platform for selling ads that target authoritatively against very granular audiences. Nobody has cracked that nut yet — targeted reach at granularity and scale (disclaimer: this is specifically the problem I’ve been working on for the last year).

So Facebook could own brand advertising online, could own a role as the authoritative data provider for brand advertising, could own the way that the big brand content platform of TV makes its way into a more modern and effective ad model, and could very well be the winner of the online advertising (nay the entire advertising) space for brands.

Facebook will dominate local advertising

Facebook has already grown a massive advertising business, and my bet is that when the details of its ad revenue are fully disclosed, a big chunk of that business will prove to be locally based. It is the only real play to be had for local businesses online right now; the only place to get local audience reach at any kind of scale. Local is a massive advertising market — one that nobody has been able to crack online, and Facebook will be the gateway between traditional media and online media for local advertising. Zuckerberg must already secretly have 200 people working on this problem as I type.

I’m very bullish on Facebook, but then, this is all just my opinion: I don’t have any idea how much of this Facebook really understands itself. All it really needs is some decent ad formats, and it’s got everything pretty well sewn up.

Why publishers should stop selling remnant inventory

(Originally published in iMediaConnection, December 2011) by Eric Picard

Typically, online publishers make their money through the sale of online display advertising, with a few making a lot of money from paid search. But the way that publishers monetize their sites has evolved over the last 10 years to a point where a lot of energy is expended on work that doesn’t pay off very well.

The one thing to keep in mind as we progress through this discussion is yield. From a publisher point of view, this is essentially the profit on inventory sold. It’s always important for publishers to consider yield in any discussion of revenue, because while they may sell inventory at a higher CPM through the human premium sales channel, the cost of sales is always going to be very high there. So when you strip away cost of sales and technology costs, the revenue the publisher keeps is its yield.
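That yield arithmetic can be sketched in a few lines of Python. Every figure below (the CPMs, cost-of-sales percentages, and technology costs) is invented for illustration, not taken from any real publisher:

```python
# Hypothetical sketch of publisher yield: the revenue kept per 1,000
# impressions after stripping out cost of sales and technology costs.
# All numbers are illustrative assumptions.

def yield_per_thousand(cpm, cost_of_sales_pct, tech_cost_per_thousand):
    """Net yield per 1,000 impressions for a given sales channel."""
    return cpm * (1 - cost_of_sales_pct) - tech_cost_per_thousand

# Premium channel: high CPM, but a heavy human cost of sales.
premium = yield_per_thousand(cpm=8.00, cost_of_sales_pct=0.35,
                             tech_cost_per_thousand=0.10)

# Wholesale remnant: low cost of sales, but a rock-bottom CPM.
remnant = yield_per_thousand(cpm=0.30, cost_of_sales_pct=0.05,
                             tech_cost_per_thousand=0.10)

print(f"premium yield per 1,000 impressions: ${premium:.2f}")  # $5.10
print(f"remnant yield per 1,000 impressions: ${remnant:.2f}")
```

Even with a far leaner cost structure, the remnant channel in this toy example nets well under a quarter per thousand impressions — which is the gap the rest of this discussion is about.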

Many publishers began selling their remnant inventory off at wholesale prices to ad networks and an ever-changing and evolving ecosystem of other vendors who buy cheap remnant inventory, apply some “special sauce,” and resell the same inventory for a higher price. This arbitrage has evolved as many publishers saw an opportunity to liquidate their entire pool of inventory at any price and never leave any money on the table. But liquidating all inventory at “any price” is a horrible idea, and has led to many unintended consequences, namely driving a perception of “unlimited” inventory out to the market. There’s a reason that the broadcast networks limit the amount of remnant sales they do to approximately 10 percent.

I’ve stated before that most companies get addicted to “bad” revenue that operates at a net loss. I would argue that almost any remnant inventory sold in bulk at wholesale prices to resellers is “bad” revenue. While cost of sales may be much lower in this channel, all the value gets stripped off the inventory, and the final clearing price is super low — probably well under a $1 CPM, often as low as $0.30, and sometimes under $0.10.

Yahoo recently announced that they’re going to stop selling inventory this way, and I think this is one of the gutsiest and smartest things I’ve seen a big publisher do in a long time. Here’s why:

Publishers have gone to immense effort to build a refined product to sell. Your audience is not a “raw material.” You’ve taken the effort of cultivating an audience to consume your content, and you’ve developed a sales force to represent this inventory. By selling wholesale to a reseller, you turn that inventory into a “raw material,” and it’s up to the reseller to refine it and make it more valuable.

In a perfect world, you would sell your refined audience entirely through direct sales channels and liquidate it all at a very high yield. Since that is unlikely for basic human-scale reasons, there need to be secondary sales channels that don’t suffer from the same scale problems. One way publishers have tried to handle this problem is by dedicating a human sales force and a yield-optimization team to managing their remnant liquidation.

In my opinion they should have a dedicated team to manage selling the inventory that “fell through the cracks” of their human sales force, but this is not the same as wholesale remnant selling. They should only be selling inventory that can be sold as a “refined good,” not as a “raw material.” All publishers have high-quality inventory that their sales team cannot sell, even when it appears “sold out” in the publisher’s sales interfaces.

Because of the way inventory prediction works, there is generally more actual available inventory than most systems will allow to be reserved. The reason is technical: since the industry has told software engineers that guarantees are being made contractually, the systems use a very high confidence interval on the prediction of avails. The confidence interval reflects how confident the engineers are that the mathematically predicted inventory will actually exist.

Since the engineers are conservative, given the guaranteed nature of the contracts, there is always inventory that appears sold out in the sales system due to high demand but turns out to exist in greater quantity at delivery time. Additionally, most publisher-side sales systems allow (for very good reasons) the sales team to pull avails and hold a reservation for a short period before a deal closes, in the hope that the IO will be signed. This locks up a lot of premium inventory until it’s too late for another salesperson to sell it.

Let’s look at a hypothetical example: Samantha reserves a bunch of inventory for a December 15 start date for a big pharma client who has a proposal in front of them. On December 10, the client comes back and declines the offer, saying it will come back in January. Samantha frees the inventory up to the rest of the sales team, but with only five days left, it goes unsold (even though many other buyers would have loved access to it two weeks earlier).

This high-quality inventory is dropped on the butcher shop floor like some sad porterhouse that gets washed off and ground up for hamburger. Across the industry, tons of great inventory go to remnant sales that the publisher could have sold as New York strip, filet, and rib eye rather than grinding into hamburger. But the remnant sales channel doesn’t allow for this — everything looks like hamburger.

Finding a way to offer this inventory to the market through a secondary sales channel, at as much of a premium as possible, is a critical issue. The inventory should not be sold wholesale with all the data stripped away. It should be sold through a channel that avoids conflict and drives the highest possible yield. I’ll give some ideas on what these channels can look like below — but first, let’s discuss the most basic approach: simply stop selling remnant.

If publishers would simply reinvest the resources they spend selling the 40 to 60 percent of inventory that is currently sold as “wholesale” remnant, and put the same headcount (maybe different people) to use selling premium, an increase in premium sales of only a very small percentage would completely offset all their wholesale remnant revenue. When 10 percent of your revenue comes from half your inventory, there’s a problem. Better to stop selling it wholesale, bolster your average CPM, and protect your user experience. Ideally, remove ad units when there isn’t a sold impression (rather than always delivering ads). This does require some design changes, but could be well worth it. At the very least, if you can increase your sell-through on premium sales by just a tiny percentage, you will more than make up for all that unsold remnant.
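Here’s a back-of-the-envelope check of that claim, with hypothetical numbers chosen so that half the inventory produces 10 percent of revenue, as in the scenario above:

```python
# Hypothetical publisher: half its impressions sell as premium, half as
# wholesale remnant, and remnant contributes only 10% of total revenue.
# All figures are invented for illustration.

total_impressions = 1_000_000_000
premium_cpm, remnant_cpm = 4.50, 0.50  # dollars per 1,000 impressions

premium_rev = (total_impressions * 0.5 / 1000) * premium_cpm
remnant_rev = (total_impressions * 0.5 / 1000) * remnant_cpm
print(f"premium revenue: ${premium_rev:,.0f}")  # $2,250,000
print(f"remnant revenue: ${remnant_rev:,.0f}")  # $250,000 (10% of total)

# How much more inventory must sell at the premium CPM to replace all
# remnant revenue if wholesale remnant sales stop entirely?
extra_impressions = remnant_rev / (premium_cpm / 1000)
extra_sell_through = extra_impressions / total_impressions
print(f"extra premium sell-through needed: {extra_sell_through:.1%}")  # 5.6%
```

Under these assumptions, lifting premium sell-through from 50 percent of inventory to roughly 55.6 percent fully replaces the remnant channel’s revenue — a small shift in sales effort for the same dollars, at a much better average CPM.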

Yahoo certainly has done this math and determined that it no longer wants to feed the remnant slice of its revenue pie. Its decision is to push that inventory into what I’d call “audience-based sales.” It’s doing this through Right Media Exchange (RMX), which it owns. It’s requiring that advertisers have a seat on the exchange, and it’s creating programmatic mechanisms for those advertisers (through its partner agencies, trading desks, and DSPs) to purchase Yahoo’s premium inventory in an automated way without going through the premium sales channel. And it looks to me like it’s going to do this in a non-blind way, meaning the buyer will not be buying hamburger — it’ll be buying porterhouse or New York strip.

This is a bit of a gamble, but a super smart gamble. Publishers can create “tunnels” through any of the major exchanges, where they can set agency- or advertiser-specific rules about who can buy inventory at what price, with what discounts, and in what ways. This technology has been around for a while, and some publishers have opened “private exchanges” using it. But Yahoo’s taking this to the next level, where it’s only selling its “remnant” inventory this way now. A move I applaud (whoever made this call and got it pushed through, let me know — I want to buy you a drink!)

My sincere hope is that Yahoo sets hard price floors on what it sells through this sales channel, and that it doesn’t liquidate the inventory at any cost. And when it does let retargeting companies or its other former “wholesale” remnant customers purchase the inventory, it should ensure that it’s getting paid what the inventory (based on the audience it has attracted) is actually worth — and force the reseller to identify who the buyer is and what the closing price of the inventory is. But baby steps are fine with me!

3 ways that display advertising must change — or else

(Originally published in iMediaConnection, October 2011) by Eric Picard

Despite all the excitement in our industry about programmatic buying and selling of inventory (via ad exchanges, DSPs, SSPs, and a variety of direct-to-publisher vehicles like private exchanges and private marketplaces), the vast majority of dollars today are still spent the “old fashioned” way.

Since display ads began being sold in the mid-1990s, very little has changed in the way the vast majority of ad dollars are spent. Most ad dollars are spent via a guaranteed media buy — either a sponsorship (the brand is placed in a specific location for all impressions served to it) or a volume guarantee (ad space of a specific volume is reserved against either a specific location on a page or a specific group of pages, but rotates out dynamically on a per-page-view basis).

Sponsorships are great for buyers and sellers because they’re easy to manage. The buyer gets a fixed location, taking over every impression delivered to that ad location, and the seller doesn’t need to worry much about over- or under-delivery. (Sometimes sellers will sign up for a volume guarantee here, but many times they don’t.) And while sponsorships tend to yield low CPMs for the publisher, the ad buys are frequently for solid brands, and sponsorships tend to be large in dollar terms, if not on a CPM basis (e.g., it may be a multi-million-dollar buy, but the CPM is probably low).

The oft-misunderstood publisher benefit of sponsorships, despite the low CPM, is that the cost of sales tends to be much lower. A sponsorship buy can be executed quickly and doesn’t require a lot of labor after the fact. I’ll discuss more about the issue of cost of sales when I touch on efficiency. But don’t underestimate the importance here.

Guaranteed volume-based buys are in many ways the cause of vast problems in our industry, despite being generally more lucrative and higher yielding on a CPM basis than sponsorships. First, they tend to be very sales and operations intensive, which means the cost of sales is often extremely high (frequently above 30-40 percent, and sometimes significantly higher for some of the most complex campaigns). There are several reasons why guaranteed volume-based buys are complex and costly.

First, when inventory is sold in advance, some degree of prediction is involved in determining how much inventory of any specific type or location will exist in the future. This inventory-prediction problem is still one of the biggest issues we face as an industry. Predicting how many users will visit a specific section or page of a site is quite difficult on its own, and given the guaranteed nature of these buys, the predictions need to be extremely accurate — which is hard even when based only on seasonality and one or two locations. Once additional parameters are applied, like various types of targeting, frequency capping, and competitive exclusions, the predictions become nearly impossible to make accurately.

This difficulty with predicting specific inventory in advance is the root of the second problem — optimizing buys on the publisher side during the life of the campaign. This rears its head in general, but much more so when the buy is targeted. Most buyers have no idea of the complexity of delivering these buys and how much work happens behind the scenes at most publishers to pull it off. Frequently there are daily (sometimes multiple daily) optimizations done behind the scenes to make sure a targeted campaign delivers against its goals. This can involve making changes to prioritization in the ad delivery systems, spreading the buy to larger pools of inventory, and bumping lower-paying campaigns out of the same inventory pool (at least temporarily) in order to ensure delivery.

Most publishers are not aware of the vast amount of labor done by ad agencies on their buys across publishers in order to ensure that advertiser goals are met. This can range from just ensuring that volumes that were agreed to are met, to ensuring that click or conversion rates driven by the buy are meeting a performance goal (for the direct-response advertisers). In either case, the amount of work done by agencies to optimize these buys, frequently across dozens of publishers, is huge.

Buying and selling inventory must get more efficient
This brings us to our first big problem that must be solved. Media buying and selling needs to get more efficient. If you compare the efficiency (i.e., the costs) of buying and selling traditional media versus online media, there’s a very clear difference. I’ve been told by numerous sources that for big spenders, buying online media is 10 to 15 times less efficient than buying offline media. And there is certainly a similar lack of efficiency in selling online media.

One way that both buying and selling can become more efficient is through basic automation. Much of the back and forth of a media buy between buyer and seller is manual. There are no simple, standard, efficient means of automating the media buying process. Numerous tools on the market try to do this in the guaranteed space, but adoption has remained small so far. Between TRAFFIQ (full disclosure: I run product and engineering at TRAFFIQ), Centro, FatTail, isocket, Donovan Data Systems, DoubleClick, and others, there is plenty of choice for automating the buying and selling of guaranteed inventory, with systems focused on either the buy or the sell side of the problem.

And despite the promise that programmatic buying and selling will remove much of the inefficiency from the space, most publishers are so worried about putting premium inventory into exchanges that exchanges remain massive repositories of remnant inventory. Publishers must start using the private exchange and private marketplace functionality that’s available to represent premium inventory.

This doesn’t mean that salespeople go away, and it doesn’t mean that publishers lose control of their inventory. It just means that much of the inefficient order-taking and campaign optimization that is done on both sides of the media buy can be removed from the system and automated. Sales become a more evangelical process, less work goes on behind the scenes, and salespeople stop spending so much time “order-taking.” Today publishers can set dynamic floor prices against exchange cleared inventory, buyers can automate their bids, and at the end of the day, the whole marketplace can get more efficient.

Publishers often say they don’t want this to happen because they fear a drop in the CPMs of their guaranteed buys. The reality is that the cost of sales is so extreme on guaranteed media buys — especially targeted or frequency-capped ones — that publishers could easily shave 20 to 30 percent off their floor price if the cost of sales were significantly reduced.
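The arithmetic behind that claim is straightforward. A minimal sketch, assuming a $10 CPM, a 35 percent cost of sales today, and a 10 percent cost of sales after automation (all hypothetical figures):

```python
# If automation cuts the cost of sales, a publisher can discount its
# floor price substantially and still net the same yield per thousand.
# All figures are assumed for illustration.

guaranteed_cpm = 10.00
cost_of_sales_now = 0.35        # heavy human selling and operations
net_now = guaranteed_cpm * (1 - cost_of_sales_now)

cost_of_sales_automated = 0.10  # assumed cost after automation
breakeven_cpm = net_now / (1 - cost_of_sales_automated)
affordable_discount = 1 - breakeven_cpm / guaranteed_cpm

print(f"net yield today:         ${net_now:.2f}")        # $6.50
print(f"breakeven automated CPM: ${breakeven_cpm:.2f}")  # $7.22
print(f"affordable discount:     {affordable_discount:.0%}")  # 28%
```

In other words, under these assumptions the publisher could drop its floor price nearly 28 percent in an automated channel and keep exactly the same net yield it earns from human-sold guaranteed buys today.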

One major reason we’re having such trouble in the display industry is the predominance of performance, or DR, spend in our space. This overemphasis on DR in display has huge consequences for our space — from depressed CPMs to a focus on metrics and methodologies that require a lot of work. This leads us to our second major change that must take place.

Online display must become a brand friendly medium
Let’s face it. As a brand advertiser, you’re much better off putting your message on television or in magazines than on almost any digital vehicle. Our ads are too small to give the brand a proper emotionally reactive vehicle to reach audiences. Even the “brand friendly” 300×250 ad unit is tiny on today’s modern high-resolution screens. Luckily the IAB is responding to this problem with action, and there are many new larger standard ad sizes being promoted across the industry. But publishers have got to adopt them, and buyers have got to demand them as part of their RFPs. We should be moving much faster here — especially when you consider how many new tablet form-factor devices are moving into the hands of consumers.

But beyond the simple size of the ad, the design of most web pages leaves a lot to be desired from the perspective of a brand advertiser. There are too many ad units, not enough “white space,” too much noise on the page, and not enough back-and-forth value to the site’s own visitors or to the brands from the “advertising experience,” meaning the way ads are integrated with content. In a perfect world, the audience and the brand should be at the very least “neutral” in tension, and ideally the ads should be adding value to the viewing experience.

But there hasn’t been a huge outcry from the brands to fix this, because they don’t see online as a medium that caters to them or is brand friendly. The flat CPM pricing is fine, but the lack of GRP or TRP measurement to provide cross-media evaluative metrics is a major roadblock.

Another reason the biggest brands haven’t come online, beyond the efficiency and brand-friendliness issues, is that the ad units are shared with numerous less brand-centric advertisers, many of which run creatives that no brand advertiser would ever want running alongside its own. Our massive overemphasis on direct response, or performance, advertisers has somewhat tainted online display, and the willingness of publishers to liquidate every single available impression at fire-sale prices has led to much lower overall CPMs than in media that treat brands as their primary customers. This issue leads to our third and final major change that must happen in online display.

Online display must increase overall CPMs of inventory
If we can transform display into a high-quality space for brand advertising, we should be able to demand higher CPMs. This sounds nice and wonderful to most publishers, but many of the people reading this article will somewhat cynically push back at this point and talk about the “reality” we face in online display today.

So let me dispel a few myths by explaining the economics of our space in terms many of you have probably never heard.

Every emerging medium that I have researched or lived through has initially focused on DR advertisers as its primary target. There is an economic theory that drives this: budget elasticity. The idea is that a DR advertiser is theoretically managing spend based on pure ROI; that is, it only buys ads that drive profitable sales of products or services (i.e., the budget is “elastic”). In theory, this means DR advertisers will spend as much as they can, as long as the media buy creates more revenue than ad spend. And because the media experience is new in an emerging medium, and the advertising is novel, response rates to ads in new media tend to start out much higher, then eventually plateau.

The problem with this theory is that it only works out well for publishers catering to DR buyers when the conversion rate on their inventory is high enough to drive high CPMs. The type of inventory that drives high conversion rates is typically extremely well-targeted inventory, typified in our space by paid search advertising, where the users tend to be searching for the very thing that the advertiser is selling. There are some forms of display advertising that also drive high conversion rates. They are frequently driven by retargeting of search queries, very lucrative behavioral segments that show a user’s propensity to buy is higher than average, or similar principles.

Like all other emerging media, when display advertising first started out, the focus was on getting DR advertisers in the door. And like all other emerging media, the response rates on ads were relatively high in the early days. But unlike all emerging media before online display, we wrote software that managed media buys online right at the beginning of this industry. And all of the DR “knobs and dials” were locked down in code, which made it much harder to evolve out of DR into brand advertising. If response rates had grown or remained high, this wouldn’t have mattered. But like most “top of purchase funnel” ad experiences, the response rates are too low to justify high CPMs by the DR advertisers.

When a media type does not drive a very high conversion rate, DR advertisers are only willing to spend a very low CPM. There’s a magic point at which the price of the inventory is low enough that the DR formula for positive ROI starts to make sense even for low performing inventory. This inventory is generally cheaper than 50 cents and frequently cheaper than 5 cents. And there’s a ton of it available in our space. This overemphasis on DR has numerous unintended or unrealized consequences.
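That “magic point” is easy to express: a pure-ROI buyer can pay at most the revenue its ads generate per thousand impressions. Here is a minimal sketch, with invented conversion rates and order values:

```python
# Breakeven CPM for a pure-ROI direct-response buyer: conversion rate
# times value per conversion, scaled to 1,000 impressions. The input
# numbers below are invented for illustration.

def dr_breakeven_cpm(conversion_rate, value_per_conversion):
    """Highest CPM at which ad spend equals the revenue it generates."""
    return conversion_rate * value_per_conversion * 1000

# Search-like inventory: high purchase intent, high conversion rate.
print(f"${dr_breakeven_cpm(0.002, 40.00):.2f}")    # $80.00

# Untargeted display: conversions are rare, so bids head for the basement.
print(f"${dr_breakeven_cpm(0.00001, 40.00):.2f}")  # $0.40
```

At the same order value, inventory that converts like search can clear at premium CPMs, while generic display justifies only pennies — which is why DR demand alone drags untargeted display prices toward the basement.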

Many large publishers sell their guaranteed inventory at well above $3 on average, and many publishers average between $5 and $9 for what is sold by hand. But this typically represents well under half of their inventory, and for many publishers it’s more like 30-40 percent of their total inventory. Once you dip below the conversion threshold of a DR buyer on most ad inventory, you’re driving very hard toward the basement on your prices. And if more than half of your inventory is sold off for less than 20 percent of your total revenue, then something is very wrong with the way we’re managing our space.

Publishers would be much better off stripping half the ads off of their site, redesigning the site to accommodate larger brand-friendly ad units, selling a lot more sponsorships with their human sales force, and selling the remainder of those ads mostly through a very automated sales channel, such as a private exchange, or at the very least automating their sales with one of the available tools.

Selling even 10 to 20 percent more ad inventory through premium channels would generate more yield for most publishers than all of the remnant sales that take place today. Simply repurposing the sales and operations teams away from the remnant inventory problem and focusing them on selling premium could accomplish this.

To conclude, if we can make buying and selling inventory across the online display space more efficient, more brand friendly, and significantly increase our CPMs, then we’re going to have a rapidly growing and expanding space — one that would rival venerable offline media like print and television in size and scale. And that would become the perfect vehicle for those media to travel through as they become “tablet-ized” and “streamed.” But with such a huge overemphasis on DR, massive inefficiencies in buying, and low CPMs, we have a ways to go.

It’s not your data!

(Originally published under the title “Our industry’s Unethical, Indefensible behavior”, in iMediaConnection, April 2011) by Eric Picard

I’ve been writing a lot lately about the intersection of online privacy and advertising, and particularly about the way the third-party tracking ecosystem has evolved over the past few years. There is an ongoing onslaught of discussion about legislation and how we’re probably going to get regulated. Some of my closest friends in the industry are at odds with my position, and many people are finding themselves diametrically opposed to people they respect over this issue. People are claiming that if we stop the targeting, all the value in this industry will bottom out — that another bubble will burst, and advertising Armageddon will follow. I disagree. I believe a huge amount of value can be generated without marginally ethical behavior.

To me, it’s a very clear issue — one based on ethics and logic. If companies are tracking people across multiple websites without their consent, and without providing any recognizable value, and those people want the tracking stopped — then it should probably stop. There is real money on the table for the companies that do this data collection, and changing the opt-out model to an opt-in model would decimate their financial outlooks. But this ultimately doesn’t matter. As an industry, we are doing something that most people simply don’t want us to do.

When a publisher tracks what its visitors do on its own site, the practice is defensible. Users who visit a publisher’s site are electing to visit that publisher, and as long as the publisher collects data to be used only on its own website, this falls within the standard quid pro quo relationship that already exists.

People get free or reduced-cost content that they want to consume from a publisher. The publisher shows them ads, and frequently requires that the consumer register or subscribe (whether the subscription is free or paid) and hand over some data to be used to better sell ads to advertisers. While a person is visiting a publisher’s site, the publisher certainly has the right to track his or her behavior, and there are lots of reasons justifying that right. Consumers can simply avoid visiting that particular publisher if they disagree with its privacy policy. And letting a user specifically opt out of being tracked on that publisher’s site is a great option to provide.

However, my issue is with the practice that has exploded over the past few years, in which third-party companies place tracking tags all over the internet — across multiple publishers — and create comprehensive profiles of consumer behavior. This happens without any discernible value given back to the consumer (I have lots more to say on this below) and without consumers’ direct knowledge or consent. It is all enabled by third-party cookies, a capability the browser designers never intended cookies for; it is a sort of hack of the way browsers operate. If “hack” is too strong a word, it’s at least an unintended loophole in browser design that has been used in ways that are hardly defensible.
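To make the mechanism concrete, here’s a toy sketch of what “third-party” means in this context: a request (and therefore any cookie it sets) whose host differs from the host of the page the user actually chose to visit. Real browsers compare registrable domains using the Public Suffix List; this simplification, with invented hostnames, ignores that nuance:

```python
from urllib.parse import urlparse

def is_third_party(page_url: str, request_url: str) -> bool:
    """Toy check: a request is 'third-party' when its host differs
    from the host of the page the user is visiting. (Real browsers
    compare registrable domains via the Public Suffix List; this
    simplification ignores that.)"""
    page_host = urlparse(page_url).hostname
    request_host = urlparse(request_url).hostname
    return page_host != request_host

# A tracking pixel served from a tracker's domain while the user
# reads a news site is a third-party request -- and any cookie it
# sets is a third-party cookie.
print(is_third_party("https://news.example.com/story",
                     "https://tracker.example-ads.net/pixel.gif"))  # True
```

Because the same tracker domain is embedded on many publishers’ pages, its one cookie lets it stitch a browsing history together across all of them.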

While I am passionate on this topic, I actually think this argument is a moot point in many ways. I predict that the browsers are going to very elegantly enable consumers to block third-party cookies in the next few releases, and the whole house of cards built on top of this loophole in cookie security is going to fall to the ground.

The Internet Explorer team at Microsoft has already announced that IE 9 will make it extremely easy to block third-party cookies and content. And the technical people running the browser groups at Firefox (keep in mind, there really are no business people involved in that open-source browser) and Google (where technology drives most decisions) are pretty smart; they understand the tracking behavior they want to shield the public from. This is clearly an issue that technologists understand better than the general population, and most technical people I’ve talked to have arrived at the same conclusion: Blocking third-party tracking is in consumers’ best interest, it should be extremely easy to do, and it should be the default setting.

Most of the discussions I’ve had on the opposite side of this issue have been with business people. They believe that there is no danger to consumers from what they perceive to be anonymous tracking of online behavior. And they continue to look at people who don’t agree with them as privacy fanatics who are irrationally trying to limit their businesses from succeeding. This isn’t the case, and I certainly am not fanatical about privacy. But I’ve learned a lot over the past 10 years about this topic, and on top of this, the market has radically shifted in the past three years. The amount of tracking going on has seen a huge increase, and the safeguards on the data being collected are quite squishy.

There is a real issue here that apparently hasn’t been understood by a lot of non-technical people. So-called anonymous tracking is fairly easily cracked open. And now that there are many mechanisms that have been created for matching cookies across domains and companies, there are numerous broadly correlated profiles of user behavior floating around. Many of the companies that have copies of these profiles are small startups, many without nearly the funding or maturity needed to build extremely secure environments. And even some of the biggest companies out there have had significant security breaches over the last few years — breaches that have leaked millions of people’s data into the public domain.

Many of the executives at the companies operating in this sphere are very reputable and honorable people who are certainly not being malicious or trying to hurt people. But what happens if their companies are purchased by less-reputable entities? Clearly those with scruples will simply quit and find other work. But now we’ve got a company run by unethical and dangerous individuals with access to a ton of data that can pretty quickly and easily be reverse-engineered to do diabolical things.

Or what if a startup isn’t successful and goes into bankruptcy — and the data assets get auctioned off to the highest bidder? Or what if there is a security breach and a hacker gets access to the company’s log files or plants spyware on its servers? There have been cases in this industry of crackers getting into server farms and hosting software there that gave them access to a lot of data. And of course, there is the other problem of companies that are just unethical to begin with.

Many proofs have been created that show how easy it is to reverse-engineer anonymous tracking. With a small amount of data to correlate with non-private activity, any decent engineer can take apart the anonymous shell around a person’s profile and merge it with personally identifiable information from other sources. And suddenly we’ve got non-anonymous profiles with all sorts of data in the hands of not-so-scrupulous people. Not a recipe for comfort.
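As a toy illustration of that correlation attack (every record here is invented), a handful of quasi-identifiers is often all it takes to join an “anonymous” profile, keyed only by a cookie ID, to a named record from another source:

```python
# Toy re-identification by correlation: an "anonymous" profile is
# joined to a separate dataset on quasi-identifiers (ZIP code,
# birth year, gender). All data below is invented for illustration.
anonymous_profiles = {
    "cookie_8f3a": {"zip": "60614", "birth_year": 1977, "gender": "M",
                    "segments": ["auto shopper", "new parent"]},
}
public_records = [
    {"name": "J. Doe", "zip": "60614", "birth_year": 1977, "gender": "M"},
]

def reidentify(profiles, records):
    """Match each cookie-keyed profile to any record sharing the
    same quasi-identifier triple."""
    matches = {}
    for cookie_id, profile in profiles.items():
        for record in records:
            if (profile["zip"], profile["birth_year"], profile["gender"]) == \
               (record["zip"], record["birth_year"], record["gender"]):
                matches[cookie_id] = record["name"]
    return matches

print(reidentify(anonymous_profiles, public_records))
# {'cookie_8f3a': 'J. Doe'}
```

Once the join succeeds, every behavioral segment attached to the cookie becomes attached to the person.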

At this point, the business people typically argue that without the work they do, consumers will have the horrible experience (never mind that it’s the experience they already have) of seeing advertising that is not relevant. The argument runs that with better targeting, the ads consumers see will be more relevant, and they will have a better experience on websites that are ad-funded.

There is no persuasive argument to be made that consumers benefit (really at all) from third-party tracking. The ads are not perceptibly more relevant (to the consumer), despite the advertiser’s ability to do deep statistical analysis and see a measurable lift in performance. The only groups really benefiting from the third-party tracking that’s going on are the companies that sell it, and to some degree the advertisers that are able to make use of it for a tiny percentage of their overall spend.

This argument is really hard to defend, and has been made by the ad industry for the past 15 years. I’ve made this argument myself a bunch of times. See this video for definitive proof. Please note that watching myself in this video drove two major shifts in my life: First, I saw that even I didn’t really believe this argument anymore, and I stopped championing this position. Second, I realized I needed to lose a ton of weight (which I’ve since done).

The argument of more relevant display ads is a fallacy. There is simply not enough ad inventory available to really improve relevance to a degree that it would meet the bar of a consumer. Getting a tiny percentage lift on CPAs that are already tiny doesn’t matter enough to justify the issues I’m complaining about from a consumer perspective.

Just because I looked at a pair of shoes online and then one out of 50,000 of the ads I see afterwards are for the same pair of shoes doesn’t mean that we’re making advertising more relevant. It means we’re making a few ads more relevant. A tiny handful. A handful that is so small that it won’t for a moment change the way that consumers feel about online ads. And in order to make ads more relevant, we’d need hundreds of thousands or even millions of ads from a similar number of companies in order to make advertising feel more relevant to consumers.

One argument I hear a lot is that consumers prefer the ad experience from paid search because they feel the ads are more relevant. But there is no real comparison to make here. There are something like 5,000 advertisers that make up more than 90 percent of the U.S. ad spend on display, across approximately 5 trillion monthly impressions across hundreds of millions of ad locations. Paid search has more than 400,000 active advertisers at any given time, with only about 250 million impressions per month and only something like 2-3 million commercially viable keywords. Paid search has more relevant ads than display because of this high concentration of advertisers across a small number of ads. We’d need a similar kind of ratio to really appear more relevant to consumers based on targeting in display ads — and we’re nowhere close to this. If someone ever figures out how to get local advertisers to buy display advertising, this could happen — but we’re a long way from this nirvana.
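The density gap is easy to see with back-of-envelope arithmetic using the rough figures quoted above (which are the article’s own approximations, not audited numbers):

```python
# Advertiser density per impression, display vs. paid search,
# using the approximate figures cited in the text.
display_advertisers = 5_000
display_impressions = 5_000_000_000_000   # ~5 trillion per month

search_advertisers = 400_000
search_impressions = 250_000_000          # ~250 million per month

display_density = display_advertisers / display_impressions
search_density = search_advertisers / search_impressions

# Search packs on the order of a million times more advertisers
# per impression than display does.
print(round(search_density / display_density))  # 1600000
```

With a density gap that wide, it’s no surprise search ads feel relevant and display ads don’t.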

Another argument I hear is that we’re “not as bad as the offline direct marketers, who have been doing much more of this for years, and who have way more data than the online marketers.” And generally the argument is included that consumers clearly haven’t rebelled against direct mail, so they shouldn’t have a problem with what online marketing does.

This is simply silly from my point of view. First, the companies that lead the offline direct marketing industry are exactly the pivotal players that are enabling much of the third-party tracking going on in the online space. They’re the ones gluing together the cookies from multiple parties, so there is no “them vs. us.” We are the same exact industry, and the players are active across the board, across any perceived boundary.

Second, just because consumers have given in on the offline tracking that is going on and data sharing that happens regularly across the credit card and finance industry, this doesn’t imply their implicit acceptance of similar behavior in other venues. Like a frog dropped in warm water and slowly boiled, they didn’t understand what was happening in the offline world until it was too late. Now most consumers understand the issues, and they are not happy about this happening again in the online space where companies are more visibly collecting data about their behavior without permission. At least with the credit card companies, consumers get tangible benefit from the use of the credit card. In the online space, there is no perceptible value.

If you still believe that there is a credible argument to make to the average consumer on this topic, try explaining to an acquaintance who doesn’t work in the online advertising industry what tangible value they get from allowing a third party to track them. And be sure to explain what is really happening, including how many different sites they’re being tracked on without their consent. See if they call foul on you.

And frankly, you need to really question this issue yourself. Imagine your reaction if you found out that some company was hiring people to follow your wife, husband, mother, or children around and note what they do all day in order to build segmentation models for marketing. Imagine that when you confronted them, their response was, “But we anonymize the data — trust us.” It just doesn’t cut the mustard from my point of view.

I have discussed this issue with lots of consumers, and not a single one — not one person — has ever said that he or she was satisfied with the ability to opt out. Every single one has complained about the fact that this was done without permission.

From a moral and ethical standpoint, I can no longer say with a straight face that third-party tracking is OK. I simply don’t believe it. I see no justification, from a consumer’s point of view, for sitting back and swallowing all this tracking that doesn’t benefit them. Companies are making money off of their personal activity data. Every person I’ve talked to outside our industry believes that someone should have to ask permission before tracking them.

I now believe that companies with no direct relationship with a consumer should not have the right to track that consumer’s behavior across multiple websites, make money off that consumer’s data, and potentially put that user’s privacy at risk without explicitly asking permission first. First-party tracking is acceptable and justifiable. If I visit a publisher’s website, there’s an understood quid pro quo that all consumers are fairly aware of at this point; they know they need to put up with advertising in order to get access to content and free or reduced-cost tools (e.g., email, IM, etc.).

On the advertiser side, consumers generally don’t have a problem if they are tracked when they visit the website of a company of which they are a customer. Amazon is often used as an example here. Just as there is a reasonable expectation that a shop owner would watch what you’re looking at and make suggestions to you inside their store, Amazon has legitimate reasons to track shopping behavior and provides customer value by doing it.

In the end, just because we can do something doesn’t mean we should do something.


Why publishers are afraid of real-time bidding

(Originally published in iMediaConnection, January 2011) by Eric Picard

Real-time bidding (RTB) is a hot topic in the online ad industry these days. (Personally I wish the industry would talk about real-time buying and real-time selling — because bidding is not really a requirement to get the benefits. But that’s perhaps the topic of another article.)

There are a lot of misunderstandings about the issues in the RTB space, particularly among publishers. Many publishers are jumping in with both feet, but a large number are still in wait-and-see mode, with big concerns they want addressed before entering the fray. Some of those concerns are unfounded, while other issues they should probably care about aren’t even on their radars.

Years ago, publishers were very concerned about sales channel conflict with ad networks. This concern ultimately was addressed by publishers only allowing resale of their inventory by networks if it was sold blind, meaning that the network could not identify the publisher as the inventory source. Over time, ad networks have become deeply ingrained in the industry ecosystem, and for the most part, the channel conflict issues have been addressed through this blind resale mechanism.

When RTB companies first entered the market, publishers treated them just as if they were ad networks (some of the RTB companies are ad networks, by the way), and they limited the relationship to remnant inventory sold blind, matching the ad network model. Most publishers at this point are overly focused on the issue of channel conflict, because the drivers of RTB are different than many in the industry realize.

The RTB space has two faces — one that is focused on acquiring good inventory at a steep discount, and another that is focused on enabling programmatic media buying and selling. The latter focuses on removing the human negotiation process in order to increase efficiency in the purchase and sales processes. While the first group of buyers is discount focused, the second is focused on more efficient and effective media buying. And publishers should not only enable them — they should encourage them.

When publishers sell inventory using RTB, they not only reduce their cost of sales, but they also remove a huge amount of their ad operations costs. The buyer only picks up the inventory if it matches the buyer’s goals, and there is no guarantee involved. No manual hunting for placements that have enough traffic to cover the campaign goals. No fire drills to handle at the end of the month.

While publishers should be concerned about advertisers trying to get media discounts by playing the human and automated sales channels against each other, that’s easily overcome. Almost any mechanism publishers would use to expose their inventory to RTB — such as an exchange or a supply-side platform — would also enable them to set floor pricing. This means that the inventory would be protected from RTB prices undercutting their sales force.

I actually suggest that publishers expose their entire premium portfolio to RTB, and that they should not sell it blind. They should think of the RTB channel as part of their sales forces — a non-human sales team that lets the buyer achieve its goals more efficiently and reduces the costs of sales.

One issue that many publishers (rightfully) fear with RTB is data leakage. But this is an endemic problem that has been floating in the industry for years. Let me start with a more fundamental description of the issue:

Let’s say an advertiser makes a premium buy from a publisher for 50,000 impressions of a behavioral targeting segment called “auto shoppers,” starting on Jan. 1. The buy is frequency capped at one, and the advertiser is paying a $45 CPM for the privilege. The ad is served via third-party ad serving, and the advertiser also drops a tracking code from its demand-side platform into the creative — which it doesn’t disclose to the publisher.

Now that the campaign has gone live, the advertiser is reaching an average of about 1,500 unique web browsers per day, setting a cookie on each of those browsers. Immediately, the advertiser pulls the trigger on an RTB campaign that looks for those same cookies on the various ad exchanges, paying a $1.50 CPM to reach them. To make matters worse (from the publisher’s perspective), if that publisher is exposing remnant inventory to RTB in an exchange, the same advertiser might well deliver ads, during the run of its $45 CPM campaign, on that same publisher for a $1.50 CPM to users it has already reached.
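The economics of the scenario above are easy to sketch: reaching the same 50,000 cookied impressions through the premium buy versus through RTB retargeting, using the article’s own numbers:

```python
# Cost of reaching 50,000 impressions of the "auto shoppers"
# audience at the premium rate vs. the RTB retargeting rate.
impressions = 50_000
premium_cpm = 45.00   # negotiated premium buy
rtb_cpm = 1.50        # retargeting the same cookies on exchanges

premium_cost = impressions / 1000 * premium_cpm
rtb_cost = impressions / 1000 * rtb_cpm

print(premium_cost, rtb_cost)  # 2250.0 75.0
```

A thirty-fold discount for the same audience is exactly why publishers call this data leakage: the premium buy effectively seeded an audience the advertiser can then buy elsewhere for pennies.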

This is a real issue — something that happens every day. From the publisher’s perspective, these are data it has worked hard to collect and considers proprietary, and it feels strongly that nobody should be able to steal them.

The reality is that every ad network out there has been doing this since the beginning of the industry. If ad networks buy targeted inventory, they can cookie users quite easily, and then resell them as part of their targeted network sales.

But ultimately, there is a bigger conversation to have about the value of publisher targeting data:

  • Publishers only know what a specific browser is exposed to while on that publisher’s site, which is a very slim aperture on that browser’s overall activity.
  • Most data that advertisers want access to are pretty much a commodity. How many times does an advertiser need to know what kind of car someone drives, and who has access to that data? A lot of parties know this information, and many have the right to sell it. Only data that are either fairly unique (people who follow their finances on Yahoo Finance probably aren’t going to MSN Money) or that are processed and matched with other activity data (search engine results or shopping activity) have proprietary value that a publisher should protect.
  • Given the industry discussions about privacy, it’s very possible that third-party tracking of data for targeting could run into some issues over the next few years. This will set publishers up with many more unique data opportunities than they currently have.

Ultimately publishers need to grapple with the issue of RTB through the lens of their own business challenges. However, the two biggest issues that I usually hear about from publishers — channel conflict and data leakage — are ones that shouldn’t stop a publisher from participating in RTB. In fact, these issues should probably bring a new view to the conversation.

RTB ultimately has an opportunity to change the game for the better for all ecosystem participants — but only if enough valuable inventory is made available to the channel.

DSPs: What they really are and why you should care

(Originally published in iMediaConnection, May 2010) by Eric Picard

Recently on the Internet Oldtimers List, a member posted a link to a video mashup that took a clip from the movie “A Few Good Men” and replaced the famous “You can’t handle the truth!” dialogue between Nicholson and Cruise with a farcical debate about demand-side platforms (DSPs). What was interesting about the clip was its central argument: that DSPs lower the CPMs of premium publishers’ impressions (with Cruise arguing for the premium publisher and Nicholson for the DSP).

The video is cute — pretty well done, and worth a view if you’re someone on the inside of this particular space online. But what really surprised me about it was that very few people seem to really understand what’s happening with DSPs in general — and there’s obviously misinformation going around. This particular debate about DSPs lowering the yield of publisher impressions was one I hadn’t heard articulated before.

So let’s dig in by discussing what a demand-side platform really is. These advertiser- and agency-facing systems let buyers do self-service media buying from publishers; from publisher aggregators (sometimes now called sell-side platforms, or SSPs) like PubMatic, AdMeld, Rubicon, and others; and from ad exchanges. The most important part of these systems is that they enable real-time bidding on that inventory. This matters because in real-time bidding, the DSP lets the buyer specify business rules describing the value of impressions based on their audience attributes. The buyer can assign a monetary value to specific audiences, and the DSP can bid on every impression in real time based on its actual value to the advertiser.

One reason real-time bidding is so valuable is that advertisers can bring multiple data sources to bear on the valuation problem. This would include the targeting attributes that the publisher lists about its own impressions, data attributes from third-party data providers like BlueKai and others, and most importantly, proprietary data that the advertiser owns about its own set of customers. Based on all these different targeting attributes, the buyer can assign various business rules that align the campaign goals against potential impressions, and the bids can be set against all the various providers of inventory.

The DSP then will begin bidding across the sell-side platforms, exchanges, and any publishers that directly support real-time bidding, and will automatically optimize the bids based on success and results. The result can be as simple as reaching 100,000 people that fit some specific criteria — or it could optimize across CPC or CPA. Real-time bidding is vastly superior to other mechanisms when it comes to ensuring that the advertiser gets the best ROI. But there are some issues.
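In spirit, the valuation step looks something like the following sketch. The attribute names and dollar values are invented for illustration and don’t reflect any particular DSP’s rules or API:

```python
# Hypothetical business rules: incremental CPM value the buyer
# assigns to each audience attribute it cares about, drawn from
# publisher, third-party, and first-party data. All values invented.
VALUE_RULES = {
    "auto shopper": 6.00,
    "in-market: SUV": 10.00,
    "existing customer": 15.00,
}
BASE_BID = 0.50  # minimal interest in any impression at all

def bid_for(impression_attributes):
    """Price an impression in real time: base bid plus the value
    of every attribute that matches a rule."""
    return BASE_BID + sum(VALUE_RULES.get(attr, 0.0)
                          for attr in impression_attributes)

print(bid_for(["auto shopper", "male"]))               # 6.5
print(bid_for(["auto shopper", "existing customer"]))  # 21.5
```

The point is that the same impression gets a different price from every buyer, because each buyer scores it against its own data and goals.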

I’ve heard from many of the DSPs that they are running out of real-time biddable inventory, meaning that their CPMs are rising because supply is constrained. This might sound funny to those who are fond of saying there is an unlimited supply of display inventory — but consider that there are short- and long-term factors driving the imbalance. In the short term, the sources of this type of inventory are still somewhat limited; even with the explosive growth we’re seeing in this category, there are not enough impressions available to satisfy demand. DSPs can still participate in non-real-time auctions to supplement impressions, but there they lose the extra value that comes from examining each impression before bidding.

Long term, there will be lots of impressions being made available. (In fact, I predict that most impressions will ultimately be made available in real-time.) But this real-time bidding world is all based on audience targeting — and the same users that Whole Foods wants to reach are also highly valuable to Best Buy and The Home Depot. This means that those impressions driven by highly desirable audiences will be a small percentage of the total number. But note: Although from a percentage perspective we’re talking small numbers, from a volume perspective that could still represent massive amounts of high bid-density inventory. Paid search impressions are a tiny fraction of display impressions today, yet drive half the revenue in online advertising. This could change significantly if we can drive enough bid density on a small fraction of display inventory that represents valuable audiences.


I have heard some premium publisher folks state concerns that there could be issues with real-time bidding on display inventory due to asymmetric bidding and low bid density. Consider the following example that illustrates how low bid density (leading to asymmetric bids) could be a problem in the future as more impressions become available for real-time bidding. I’ll make it unrealistically simple to illustrate the issue:

An impression shows up for bid. It has the following attributes:

  1. Male
  2. 34 years old
  3. Greater than $150,000 income
  4. Chicago DMA
  5. New parent
  6. Auto shopper
  7. Jewelry shopper
  8. Health club member
  9. Impression is 300×250 pixels
  10. Site category is entertainment

Four advertisers participate in the auction:

Advertiser 1: Pampers — knows nothing extra

Advertiser 2: Ford — knows user owns a BMW and has been shopping for Land Rovers through proprietary data deals

Advertiser 3: Zales — has existing customer data that shows this is an inactive customer, a high spender in past who bought an engagement ring three years ago

Advertiser 4: An independent Chicago diaper service — knows nothing extra

The bidding follows like this:

Pampers bids $1 CPM.

Ford bids $5 CPM — it knows it has a low likelihood of converting this profile, so it doesn’t bid very high.

Zales bids $40 CPM — it knows that this customer bought his engagement ring at Zales three years ago, and given the new parent status, he is likely to be open to buying an expensive Mother’s Day present.

The Chicago diaper service bids $10 CPM based on simple CPA optimization.

Because this is a second-price auction, Zales wins but pays only a $10 CPM for the impression. In this simple example, that might not seem too bad. But in reality, the publisher should be able to predict, based on past bids on similar impressions, that this impression would sell for much more than the $10 CPM it cleared at. So the publisher has not extracted the maximum yield it could have from the auction it had in play.
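The auction above can be sketched in a few lines as a minimal second-price auction with no reserve price:

```python
# Minimal second-price auction: highest bidder wins but pays the
# second-highest bid. No reserve/floor price in this version.
def second_price_auction(bids):
    """bids: {bidder_name: cpm}. Returns (winner, clearing_price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    clearing_price = ranked[1][1]  # second-highest bid
    return winner, clearing_price

bids = {"Pampers": 1.00, "Ford": 5.00,
        "Zales": 40.00, "Diaper service": 10.00}
print(second_price_auction(bids))  # ('Zales', 10.0)
```

Zales’s $40 bid wins, but the diaper service’s $10 bid sets the price, which is exactly the gap the publisher would like to close.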

In the future, I predict that publishers will make use of yield optimization technology to fix this problem. The publisher should be setting a floor price on a per-impression basis based on its prediction of value to the advertisers in the marketplace. The publisher probably could have comfortably set a floor price that would have given it a higher yield (e.g., set the price at $12 or even $20 CPM based on historical trends for this type of impression and the current bidders in the auction). But this is a very hard technology problem to solve.
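A per-impression floor folds into the same second-price mechanics. In this sketch the $12 floor is a hypothetical stand-in for the publisher’s predicted value for the impression:

```python
# Second-price auction with a per-impression floor: the winner pays
# the greater of the floor and the second-highest bid, and the
# impression goes unsold if no bid clears the floor.
def second_price_with_floor(bids, floor):
    """bids: {bidder_name: cpm}. Returns (winner, clearing_price),
    or (None, 0.0) if the top bid fails to clear the floor."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    if top_bid < floor:
        return None, 0.0
    return winner, max(floor, ranked[1][1])

bids = {"Pampers": 1.00, "Ford": 5.00,
        "Zales": 40.00, "Diaper service": 10.00}
print(second_price_with_floor(bids, floor=12.00))  # ('Zales', 12.0)
```

The hard part isn’t the mechanics above; it’s predicting, impression by impression, a floor high enough to lift yield but not so high that impressions go unsold.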

In paid search, we’ve seen high bid density drive very high CPMs on highly desirable keywords within the auction. And where the bid density is lower, we’ve frequently seen lower CPMs. Essentially, bid density refers to how many participants within an auction are bidding over the same item. In paid search, overall this hasn’t been a problem — mostly because there are “single digit” millions of commercially viable keywords, and about half a million advertisers competing over them. This leads to pretty good distribution, with some keywords getting lots of competition, and some getting very little — and overall the average yield being very high for the search engine. It’s a supply and demand problem for the most part.

But in online display advertising, there are trillions of display impressions a month and fewer than 10,000 advertisers (at least in the world we live in today), with most of the dollars spent in the U.S. coming from fewer than 3,000 advertisers. Further, the role of agencies could change significantly under this new set of mechanisms. There’s no reason an agency using a DSP couldn’t withhold bids from its stable of advertisers so that only the single top bid across all of its advertisers is placed for each impression. From a bid density perspective, this could be damaging unless it’s offset by the kind of yield optimization I mentioned above, and by creating competition between advertisers that wouldn’t have competed in the past. Other factors could drive bid density and publisher yield lower still.

For instance: In an extreme world, each agency holding company could have its own DSP, and each of these would offer only one bid per impression as it reviewed the available targeting parameters and determined — based on each advertiser’s business rules — which of their campaigns would have the highest bid. In other words, each DSP could run an internal auction prior to placing a bid in the publisher-facing system. That would reduce the density of the auction on the publisher side significantly, causing the publisher to reduce yield. But it does require significant process change from how things are done today.
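A toy sketch of that scenario, with invented agencies and bid values, shows how forwarding only each agency’s internal winner thins the publisher-side auction and can cut the clearing price:

```python
# Two hypothetical agency DSPs, each holding bids for its own
# advertisers. All names and numbers are invented.
agencies = {
    "Agency A": {"Zales": 40.00, "Tiffany": 25.00},
    "Agency B": {"Ford": 5.00, "Pampers": 1.00},
}

def second_price(bid_values):
    """Clearing price of a second-price auction over raw bid values."""
    return sorted(bid_values, reverse=True)[1]

# Full auction: every campaign bids directly at the publisher.
all_bids = [bid for campaigns in agencies.values()
            for bid in campaigns.values()]
full_price = second_price(all_bids)       # $25 sets the price

# Internal rounds: each agency forwards only its top bid, so the
# publisher never sees the $25 bid that sat behind Agency A's winner.
forwarded = [max(campaigns.values()) for campaigns in agencies.values()]
thin_price = second_price(forwarded)      # only $5 sets the price

print(full_price, thin_price)  # 25.0 5.0
```

The winner is Agency A’s top campaign either way; what changes is the publisher’s yield, which falls from $25 to $5 because the runner-up bid never reached the publisher-facing auction.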

In the end, I think publishers would be foolish to worry too much here. It’s likely that their highest value impressions are going to go way up in yield, even if they see a drop on the rest of their impressions. And at the least, those two things should make up for each other. At the best, this could drive average yield higher in online display than we’ve ever seen before.

Why Rich Media Is Suddenly Everywhere

(Originally published in ClickZ, May 2003) by Eric Picard

Obviously, I believe in rich media. I founded an early rich media advertising company. I risked everything, convinced a few friends to join me, and started a company with the mission of improving online advertising results with innovative rich media ad technology.

Rich media has since ceased to be “standalone.” It’s a feature, not a product. It’s the special sauce, the frosting, the packaging that catches a consumer’s eye and draws her into a message.

I always believed rich media would dominate online advertising, even to the point that the term “rich media” would one day disappear. “High-fidelity” audio is an example of a term that vanished once what it described became ubiquitous.

Rich media is on the verge of ubiquity. Every site I visit on a daily basis is filled with it. It’s a medium of choice for online ads. Animated GIFs still account for the majority, but we’ve seen huge increases in Flash usage. In 2001, Flash accounted for less than 2 percent of ads served through our ad servers. Today, over a quarter of the ads we serve are Flash.

The real question is, “Why?” Why did rich media finally achieve prominence? Why now?

Broadband Adoption

This is a sore point for me. Broadband adoption has soared over the past few years. Certainly, it hasn’t penetrated as quickly or deeply as some predicted, but it’s made huge gains. Broadband is a factor in rich media adoption, but its effect is more psychological.

Flash (for example) doesn’t require broadband to be effective. Most rich media technologies have employed mechanisms for some time to mitigate dial-up speeds. The misconception that rich media requires broadband certainly made it hard for adoption to take off before there was a consensus that broadband penetration was high enough.

Many of the biggest ad agencies are watching broadband adoption rates because they long to run online video (another story entirely). Until now, they didn’t feel the audience was large enough.

Flash Player Penetration

Macromedia has done an excellent job of hollering from rooftops that its Flash player is on nearly every browser on the planet. Once media and creative teams were armed with stats to show clients, the Flash adoption battle was won.

Branding and Direct Response

Numerous studies clearly prove that online advertising has a significant branding effect. Many of those studies show that rich media use significantly improves recall and other branding criteria. At the same time, plenty of studies indicate significant conversion boosts when rich media is used. This is one of the few times online brand and direct response advertisers have reached consensus.

Falling Click Rates

Any sophisticated online marketer shudders at the thought of the click rate being used as an indicator of effectiveness. Unfortunately, it’s still a widely misused marketing success metric. As rich media enjoys much higher click rates, the metric is easy to use when trying to demonstrate an increase in effectiveness or mitigate a decrease in click rate for non-rich media.

Natural Selection

Possibly the least-discussed but highly significant factor in rich media’s rise. Since that oft-mentioned bubble burst, the online advertising industry suffered tremendous layoffs. Individuals who survived really are the cream of the crop. You’ll find very few dim bulbs anywhere in this industry today. It’s no accident the leaders in this space are all proponents of rich media and willing to push the creative and media planning envelope.

Publisher Financial Need

The number one reason rich media flowered. Online publishers have suffered huge revenue losses. They struggle to get a tiny fraction of prices they once charged for online media. This has made even the most conservative publisher open to almost any type of campaign.

If you’d told me three years ago Yahoo would be a Flash and rich media bastion, I’d have scoffed. The battles I was fighting! It wasn’t that long ago Yahoo had strict limits on the number of rotations an animated GIF could cycle through. Today, the site teems with floating ads, expanding banners, and all sorts of rich media that wouldn’t have been tolerated in the past.

Has Interactive Failed? Not the Way You Think

(Originally published in ClickZ, September 2002) by Eric Picard

Interactive industry pundits are complaining a lot lately about the negative treatment we’re getting from The Wall Street Journal and other traditional media.

Can we blame the media? An appalling lack of understanding about industry issues exists even among the online advertising “experts.” If our experts can’t get a handle on the issues, how can anyone outside be expected to do so? We stink at explaining ourselves to the outside world. We stink at communicating internally.

We argue about a host of issues, all from Balkanized perspectives with little respect for other ways of doing things. Add to this cacophony agendas and approaches within various marketing departments, and confusion starts piling up.

Walk in the Other Person’s Shoes

We need empathy — the ability to see things from another’s perspective. How do you respond to the following statements?

  • Online media should be bought using traditional offline metrics, such as reach and frequency.
  • CPM media buys are absurd. Everyone should buy CPC or cost per acquisition (CPA).

The statements are one dimensional. Each points to valid issues but not to answers.

I see five major constituencies in our industry, although there are probably others. How the two statements above are heard and perceived depends on which group the listener is in:

  • Traditional brand advertisers have advertised offline for years, buying media by gross rating points (GRP), reach and frequency, and other traditional brand media metrics. They understand clearly the science behind branding and prove their value to advertisers by showing them how many people they hit within the target market (sometimes through brand recognition studies).
  • Traditional direct marketers scientifically approach consumers via direct mail and other direct methods. They focus only on successful acquisition and care little about brand effect. They have the research proving what results will be before they lift a finger. This group uses very specific methods and language to describe their work.
  • “Traditional” online advertisers/marketers think of themselves as a hybrid of the first two. They love talking about the branding “side effect” (offensive to brand advertisers) and embrace direct measurement. Their dialect doesn’t quite make sense to brand or direct people outside the online space. Most are decidedly weak in their knowledge of traditional offline marketing concepts. They typically misunderstand the direct marketer’s proven science and have virtually no understanding of branding and associated relevant measures, such as reach and frequency.
  • Online brand advertisers have decided the only way to save online advertising is to build measurement tools that will match those used by their offline counterparts. They have started to eschew direct-response type information in favor of building consensus for the traditional brand path as applied to online.
  • Online direct advertisers only buy CPA or CPC when they have any say in the matter. They buy CPM when they must, but they make darn sure their actual CPA is very low. Some understand traditional direct offline science pretty well; others think they invented the concept of measuring return on investment (ROI). Those who know the science of offline direct are successful by using the same indices to build models online.

What does this all mean? Just because you’re an online direct advertiser doesn’t mean you should issue orders that the entire industry move to a response metric to value online advertising. And just because you’re an online brand advertiser doesn’t mean you should suggest we ignore responses and only focus on methodologies such as GRP. There may be two paths to take — as there are offline.

Rather than snipe at each other because each group has its own agenda, we must unify the messaging from our industry. A divergent but strong positioning of each segment (without diminishing the others) would be an improvement. For example:

  • Online advertising is proving to drive direct response better than any other medium.
  • Online advertising offers the best ROI on branding efforts of any medium.

Issues to be aware of: Online direct has been boosted by lower online media costs. If the online brand crowd is successful, online media will be revitalized — and costs will rise. This will hurt online direct, because they rely on cheap CPC/CPA buys. Unlike offline, online direct and brand share a much higher percentage of the same media space.

Diversionary Tactics

As troubling as the lack of perspective between groups is the lack of clarity in technology companies’ marketing messages. Many use industry issues (real or imagined) as weapons in their own marketing arsenals in ways that further confuse an already confused marketplace.

My comments are not aimed at the companies used as examples (which is why I’m using fake names — although some of you know who’s who), rather at their messaging. I’m not saying marketers at these companies should ignore the value they offer customers. Rather, they shouldn’t inflate minor issues or make untenable claims spun as solutions to major industry problems.

TrueMethods’s marketing inflates minor issues. Its Site Side Ad Serving Solution is promoted as the only privacy-friendly server in the industry, making the case that all its competitors share ad-serving data across customers. Virtually nobody in this industry does this. Even those who do cleanse and segregate data to protect customer information. They’d be out of business if they didn’t. This is a minor issue for a few publishers and marketers. It’s not a broad industry problem.

OneStream is a rich media technology company. Its message claims it is building standards for rich media advertising. OneStream doesn’t promote industry standards, just its own solutions. As a business, it should sell its products. What does it have to do with standards? Nothing.

A standard, by definition, applies to numerous offerings from different companies. Anyone can build to agreed-upon standards. OneStream suggests that the solution to a lack of industry standards is for the entire industry to unilaterally use its products. How inconvenient for competitors. If its mission is truly to help set industry standards, it should open its formats and offer standards that competing technology can be built to.

ZeroMedia offers an ad-serving and proprietary client-side creative format for ads. It claims to have solved all problems inherent to “first generation” locally installed ad-serving solutions (such as RealMedia and NetGravity) and “second generation” hosted ad-serving solutions (such as DoubleClick) that use their own server farms. ZeroMedia claims to have solved these problems by using CDNs to serve ads and a proprietary “patent-pending client-side intelligence.”

Many ad-serving solutions use CDNs (including Bluestreak, RealMedia, and others). Their “patent-pending client-side intelligence” requires individual users to choose ad preferences so ads can be targeted to them based on their defined criteria. Since the ad-serving solution seems to rely on this, it raises more questions than it answers.

Unless this industry starts communicating well, we’re not going to get past the misunderstandings in traditional media. If The Wall Street Journal doesn’t stop bashing online advertising, we’re in trouble. But we can’t complain about misrepresentation in the media if we can’t get our own story straight.

The stories above are on my mind, but I’m sure there are others. What are your suggestions for issues needing some housecleaning? We’ll try to air them here.

How to Play Nice With Technology Gatekeepers

(Originally published in ClickZ, July 2002) by Eric Picard

Back when Bluestreak was a rich media company, I could have written a doctoral thesis on working with tech gatekeepers. This was back in the heady days when publishers had a certain sense of superiority fueled by the artificial inflation of their valuations. We went to extreme lengths to develop rich media technology that didn’t impact user experience — to the point we nearly killed ourselves getting our initial software download down to 5.7k.

For the Web publisher, a technology gatekeeper manages the adoption of third-party ad technologies used by advertisers on the publisher’s site. These include ad servers, rich media, and analysis technology. The goal is to make sure third-party technologies won’t crash the Web site, make user experience suffer, or cause significant data discrepancies between the publisher and the third party.

It wasn’t only technology providers like Bluestreak that faced the gatekeeper issue. Media buyers and creative teams faced it as well. Nearly all the players in the industry were under the close scrutiny and influence of the technology gatekeepers.

They were the sheriffs of the Wild Web portals back in the gold rush. They carried the fastest six-shooters and had a posse of deputies to research, track, and nail the most miniscule bug in a technology. A license to run rich media on Yahoo or AOL was like having Wyatt Earp let you carry your guns into town because he deemed you a “good guy.”

Eventually, a time came when the sheriff was running the town. It was difficult to do any kind of business without making him happy first. When the gold rush dried up, the sheriff lost his posse. The town fathers turned the jail into a welcome center. Suddenly, everyone was allowed to carry his guns in town, even those who fired them into the air after 7 p.m.

Things have started to equalize. Once again, technology gatekeepers have budgets and teams. They are regaining the ability to say no to technologies they don’t approve of. That means it’s time to start learning about this breed of hombres so you can work with them easily (and without flinching when you’re asked to present your guns for inspection).

The technology gatekeeper as sheriff metaphor wasn’t chosen at random. There are a lot of parallels between the jobs and the psychological makeup of these roles.

Keeper of the Peace and Protector of Babies

The technology gatekeeper does her job with a clear conscience. She’s making the experience of visiting her Web site a safe one. She keeps unsavory technology that misbehaves from causing problems in the community. This could be a rogue Java applet or a Flash file that freezes older machines by overwhelming the CPU.

Remember: Gatekeepers feel they act in the best interest of the people they represent. Approaching them in any way that puts them in conflict with that role is a bad idea.

Don’t try to sway them by offering a bribe, even an innocent offer of industry schwag or tickets to a trade show. This is a surefire way to get their hackles up. Any tech gatekeeper worth his salt would be insulted or worse by that kind of behavior.

Never try to strong-arm or go around them (to the mayor — or VP of sales) to get your way. If the VP includes the gatekeeper in the meeting you’ve set up (which she’s likely to do), things will just get uncomfortable. A better approach is to start off on the right foot by having a meeting with all parties ahead of time. Then, move on to the gatekeeper as part of the process. This gets all the issues on the table, sets everyone’s expectations (including the gatekeeper’s), and makes everyone happy.

The only way to win trust from technology gatekeepers is to be trustworthy. Demonstrate you will not screw them. Keep them from getting in trouble for letting you walk their streets. Build the relationship over time and make sure you don’t let them down.

In ad technology, it’s likely you’ll eventually have a problem. These are the moments when you can actually improve your relationship with the gatekeeper. By being open and honest and doing everything in your power to fix the problem and keep him in the loop, you’ll win his trust and respect.

They Don’t Make ‘Em Like They Used To

The biggest problem we’ll face now that power is returning to gatekeepers is the majority of them are inexperienced. Disney, Yahoo, AOL, and some other major players have kept those important and skilled people in their roles, but they’re the exceptions. Most gatekeepers moved back to the traditional world where jobs with real salaries still exist.

Many of today’s new gatekeepers aren’t experienced in being empowered to turn away revenue under almost any circumstances. They gained their experience in a world where they were left to clean up the mess made by a third party rather than keeping the mess from happening in the first place.

Now that gatekeepers have some say again as the pendulum approaches center, they need advice on how to use this power. Here’s mine:

Let’s not return to the “good old days” of letting technical issues drive the publisher’s business decisions. I’m a technologist. I completely understand why testing is needed and what can happen when things explode. But many lucrative deals were lost by this industry because of technology gatekeepers’ excessive conservatism.

There was fear that user backlash from intrusive technology or techniques would drive people away from the publisher’s free content. This wasn’t the case. Let’s learn from that. Be flexible. At the very least, run live tests with companies without taking weeks and weeks to do so.

In the end, we should all strive for the same thing: success. Ours in particular, the industry’s in general. Everyone needs to work together. The overriding goal of the gatekeeper should be to facilitate the process, not throw a monkey wrench into the works.

Advanced Ad-Serving Features, Part 2: Third-Party Ad Servers

(Originally published in ClickZ, November 2001) by Eric Picard

Last time, we discussed advanced features of site-side servers. Now let’s go deeper. This week, we’ll go into the even more advanced features of third-party ad servers.

Third-party servers primarily serve the needs of advertisers and agencies. Sometimes they are called buy-side servers. They are part of the business infrastructure of these groups and must reliably and accurately deliver and report on ad serving and related user actions associated with the ads.

In addition to delivery and basic reporting, third-party servers provide unified comparative reporting for all publishers in a media buy, as well as many advanced features. From a feature standpoint, a third-party server is more complex than its site-side counterpart.

One thing to keep in mind: A third-party server is not able to “refuse” a call for an ad. If an ad tag is supplied from a third-party server to a site-side server and that ad is called, it must be served. Only a site-side server can schedule and deliver ad calls to users.

Beyond Banner Tracking

This is the big feature. Tracking beyond the banner enables the view of an ad session from impression to conversion (and beyond). This is a major reason a third-party server is a must for most advertisers. Some tracking types beyond the banner are:

  • Tracer tags. Tracer tags are single-pixel images placed on pages of the advertiser’s Web site so that activity on those pages can be correlated to the view or click of an ad. 
  • Post-click analysis. The user sees an ad and clicks on it. She arrives at a landing page on the advertiser’s Web site. She travels across three pages that have tracer tags on them. Each intersection of creative/tracer is credited to the advertiser’s reports. 
  • Post-impression (also called post-view) analysis. The user sees an ad but doesn’t click on it. That user (remembering the message) later travels to the advertiser’s Web site on his own. He moves across a number of pages with tracers on them. Each intersection of ad and tracer is correlated and credited to the advertiser’s reports. This analysis is a definitive branding measurement and is sometimes called a brand response report. Not all third-party servers collect post-impression data.
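The correlation described above amounts to joining ad events and tracer hits on the same anonymous cookie. Here is a minimal sketch in Python of how that credit assignment might work; the event shapes, the 30-day window, and the last-touch rule are illustrative assumptions, not any ad server’s actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical event logs, keyed by an anonymous cookie ID.
ad_events = [  # (cookie_id, event_type, creative, timestamp)
    ("c1", "click", "banner_a", datetime(2001, 11, 1, 10, 0)),
    ("c2", "impression", "banner_b", datetime(2001, 11, 1, 11, 0)),
]
tracer_hits = [  # (cookie_id, page, timestamp)
    ("c1", "/landing", datetime(2001, 11, 1, 10, 1)),
    ("c2", "/products", datetime(2001, 11, 2, 9, 0)),  # returned later on his own
]

WINDOW = timedelta(days=30)  # assumed attribution window

def correlate(ad_events, tracer_hits):
    """Credit each tracer hit to the most recent prior ad event for
    the same cookie: a click yields post-click credit, an impression
    alone yields post-impression (post-view) credit."""
    credits = []
    for cookie, page, t in tracer_hits:
        prior = [e for e in ad_events
                 if e[0] == cookie and e[3] <= t and t - e[3] <= WINDOW]
        if not prior:
            continue  # no ad exposure within the window: no credit
        _, etype, creative, _ = max(prior, key=lambda e: e[3])
        kind = "post-click" if etype == "click" else "post-impression"
        credits.append((creative, page, kind))
    return credits
```

Running `correlate(ad_events, tracer_hits)` credits the landing-page hit as post-click and the later return visit as post-impression.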

Reporting

  • Cross-publisher reports. A major reason to use a third-party server is that reporting is consolidated across all publishers within a campaign. 
  • Comprehensive data sets. Since both post-view and post-click data must be reconciled, reports must be unified and comprehensive.

Analytics

Some third-party servers offer advanced analytics capabilities. This is one of the fastest growing areas in the industry. Far more data is captured in an online ad campaign than in an offline one. Turning that data into actionable information isn’t simple. It takes days or weeks of human intervention and interpretation.

A powerful analytics package solves these problems by providing tools to get at actionable information more quickly. There are two basic types of tools to discuss:

  • Online analytical processing (OLAP) tool. This very powerful analytics tool enables the most control of data and reporting. Great power and flexibility comes at a great price, and few people are technical enough to use an OLAP tool to manipulate their data. In most agencies there are only a few, if any, people who can use these tools. It gets even sparser at the advertiser level. 
  • Wizard. To address problems with OLAP, some companies have started coming up with wizard-based interfaces for the most commonly asked questions. A good wizard-based interface can likely answer such questions as: Which publisher is the best media buy for my campaign goals based on the past six months of running ads across various publishers?

Optimization

Analytics deals with historical analysis to improve ongoing and future campaigns. Optimization deals with live campaigns that must be improved while still running. When done by hand (as is most often the case), only so much can be changed. Humans can optimize to a level of detail only so deep. This is best handled by technology, which provides much deeper analysis of data. Two types of optimization are:

  • Real time. Real-time optimization is the most powerful. Changes are made automatically to creative in rotation across placements based upon actual results read by the optimization tool. Real-time optimization requires real-time data to make changes. Few ad servers use a real-time reporting architecture, relying instead on 24-to-48-hour-delayed data. Real-time benefits include microtrend discovery (intraday changes in behavior within placements) and greater lift based on feedback loops. Additionally — if the system doesn’t make changes automatically, relying instead upon human approval or intervention — the lift is going to be lower. 
  • Recommendation. For situations where real-time data isn’t available, recommendation-based systems are the alternative. These systems read data when available and provide a list of recommendations to enable the customer to make changes. This inherently is a poorer performing model as changes are not happening quickly. Therefore, additional learning for the optimization tool is lost. The faster changes are made, the better the system gets at predicting performance. Still, this is a better method than hand optimization.
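A recommendation-based optimizer can be as simple as ranking creatives by observed click rate within each placement and flagging laggards for a human to act on. This is a hypothetical sketch; the data shape and the 50 percent threshold are assumptions, not how any particular vendor’s system works.

```python
from collections import defaultdict

def recommend(stats, threshold=0.5):
    """stats maps (placement, creative) -> (impressions, clicks).
    Within each placement, find the best-performing creative by click
    rate and recommend shifting rotation weight away from any creative
    whose rate falls below `threshold` times the best."""
    by_placement = defaultdict(list)
    for (placement, creative), (imps, clicks) in stats.items():
        ctr = clicks / imps if imps else 0.0
        by_placement[placement].append((creative, ctr))
    recs = []
    for placement, rows in sorted(by_placement.items()):
        rows.sort(key=lambda r: r[1], reverse=True)
        best_name, best_ctr = rows[0]
        for creative, ctr in rows[1:]:
            if best_ctr > 0 and ctr < threshold * best_ctr:
                recs.append((placement, creative, best_name))
    return recs

stats = {
    ("siteA", "banner1"): (10000, 120),  # 1.2 percent click rate
    ("siteA", "banner2"): (10000, 30),   # 0.3 percent, well below half
}
```

The point of the sketch is the shape of the loop, not the math: a real system would also weigh conversions, confidence in small samples, and pacing, and the slower the data arrives, the staler these recommendations get.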

Targeting

  • Geographic targeting. Geotargeting works much as it does on site-side servers but is somewhat less effective. You pay for the media regardless of whether you had an appropriate creative for the users an ad was served to. Wherever possible, try to geotarget at the publisher level. 
  • Profile-based targeting. As I detailed last time, ads can be targeted based on Web-surfing habits. Third-party ad servers have the same issues as site-side servers do. 
  • Session-specific targeting. Specifics include domain, browser type, and operating system. Again, this can be accomplished on the site side, usually to greater effect, as the publisher only shows the ad (and bills you) when there is an appropriate fit. When served by a third party, you pay for the media even if it doesn’t fit your demographic. (Remember, there are plenty of other types of targeting I’m not covering here.)
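Session-specific targeting of the kind described can be approximated by matching tokens in the request’s user-agent header. This is a toy sketch with a made-up rule format; production ad servers parse user agents far more carefully.

```python
def matches_session(user_agent, required_tokens):
    """Return True when every required token (e.g. a browser or OS
    marker) appears in the user-agent string, case-insensitively."""
    ua = user_agent.lower()
    return all(token.lower() in ua for token in required_tokens)

# Hypothetical rule: only serve this creative to IE on Windows.
ie_on_windows = ("msie", "windows")
```

On the site side this check would gate the ad call itself; done by a third party, the impression is already bought by the time the check runs, which is exactly the billing asymmetry noted above.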

Trafficking Controls

Without a third-party server, trafficking ads to multiple publishers is a problem. It can be complex, with many points of failure. A good third-party server simplifies the process of trafficking campaigns and should provide valuable accounting methods for successful delivery and approval of your ads by the publisher.

Dynamic Ad Serving

Most publishers have a limit on the number of ads they will accept at one time. Usually this ranges from 5 to 10 creatives per week. Third-party servers use dynamic ad serving to rotate multiple creatives through one ad tag. This allows the advertiser/agency to traffic as many creatives associated with those tags as they want. This simplifies life for the advertiser and the publisher by cutting down significantly on the work done by both.
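The rotation behind a single dynamic tag can be pictured as a weighted random choice over however many creatives the advertiser has trafficked. A simplified sketch (the names and weights are invented); real servers also pace delivery against campaign goals rather than drawing purely at random.

```python
import random

def make_rotator(creatives, seed=None):
    """creatives: list of (name, weight) pairs. Each call to the
    returned serve() simulates the third-party server answering the
    one ad tag's call by picking a creative in proportion to its
    rotation weight."""
    rng = random.Random(seed)
    names = [name for name, _ in creatives]
    weights = [weight for _, weight in creatives]
    def serve():
        return rng.choices(names, weights=weights, k=1)[0]
    return serve

# One tag on the publisher's page, two creatives behind it, 3:1 rotation.
serve = make_rotator([("banner_a", 3), ("banner_b", 1)], seed=42)
```

The publisher traffics one tag once; the advertiser swaps or reweights creatives behind it at will, which is the workload saving the column describes.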

Conclusion

There are other ad server features not covered here. But this is a column, not a book! You should now be educated enough to talk to a salesperson without too much trepidation.

Next, I’ll write about a topic near and dear to my heart: how to work with tech companies for long-term success. It’s time to set a few things straight about this marketplace. Customers need to understand that while they are in a position to beat up their tech partners (notice I don’t call them vendors) on issues such as price, they should think twice. If there are any tech firms out there that would like to voice their thoughts on the topic, drop me a line.