Category Archives: Online Privacy

Get creative with hyperlocal targeting

By Eric Picard & Max Dowaliby (Originally Published on Imedia – December 16, 2015)

Hyperlocal targeting is the shiniest method of delivering advertising to consumers based on their exact location. This is geo-targeting taken to its logical conclusion, now that nearly every person carries a GPS locator wherever they go, even though GPS is only one method of determining location. The introduction of location data into mobile advertising has allowed advertisers to leverage the always-on, always-connected mobile device as an indicator of location. This has made hyperlocal targeting one of the fastest-growing mechanisms for capturing dollars allocated to local advertising.

According to Borrell Associates, 42 percent of all local advertising is expected to be digital in 2015, totaling over $47 billion. Sixty-one percent of smartphone users say they are more likely to buy from mobile sites and applications that customize content or information to their current location.

There are many complexities to local advertising that have not been sorted out, even with these advances. For years, analysts have been talking about the coming transition of local dollars to digital, and it is possible that hyperlocal targeting could finally drive it. The main issue in the transition of local to digital has been the so-called “local independents” — the “mom-and-pop” shops — the standalone companies that make up the vast majority of local businesses. For these companies, local has for years meant Yellow Pages and newspapers and, for the larger ones, radio and potentially local television. The transition has mainly been held back by creative production, as these small businesses don’t have the means to create advertising that fits the needs of the digital space.

The “national-local” advertisers — the brands with local presence — ranging from quick-service restaurants to retailers are the main drivers of adoption of digital. Until mobile location data really became actionable, there was still little reason for the local dollars of the national brands to transition to digital — as geo-targeting was seen as too vague, and the creative value propositions were not quite strong enough. Things are changing.

Hyperlocal targeting is not a simple mechanism for identifying or targeting users. It’s more of an overall set of services for leveraging highly accurate, fresh, and relevant data about a user in order to make the best decision matching the ad opportunity to the consumer, based on their exact location. Let’s explore hyperlocal location targeting (what most people are referring to when they say hyperlocal targeting):

Advertisers have been able to do some sort of location targeting for years now. Targeting based on city, DMA, or zip code has been a well-used and well-performing tactic. However, the real challenge is getting more granular than a zip code. Since mobile phones provide signals that allow us to obtain incredibly granular information, the mass adoption and nonstop usage of these devices has — in many ways — solved the problem for us.

Hyperlocal location targeting refers to the ability to target small areas or “geo-fences,” including radius (general distance from a target location) and polygonal geo-fences (a shape drawn on a map). Both of these mechanisms have uses for targeting users; for instance, a message could be sent to users inside a certain radius of a specifically targeted destination. An example: A retailer might want to send “message A” when a user is within 2 miles, “message B” when a user is within 1 mile, and “message C” when a user is within 100 meters. This can be an extremely powerful tool to drive foot traffic or engagement. This location information also allows advertisers to provide relevant, contextually aware content to users. In the case of a polygon, a quick-service restaurant that delivers food might have very specific streets that form the boundary for where it delivers from one location versus another — and a radius simply won’t solve for this.
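The two geo-fence types described above can be sketched in a few lines of code. This is an illustrative sketch only: the thresholds, coordinates, and message names are hypothetical, not drawn from any specific vendor.

```python
# Sketch of radius-tier and polygonal geo-fence targeting.
# Thresholds and message names are hypothetical examples.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_message(user, store):
    """Tiered radius targeting: message A/B/C by distance to the store."""
    d = haversine_m(user[0], user[1], store[0], store[1])
    if d <= 100:
        return "message C"   # within 100 meters
    if d <= 1609:
        return "message B"   # within ~1 mile
    if d <= 3218:
        return "message A"   # within ~2 miles
    return None

def in_polygon(point, polygon):
    """Ray-casting test: is the point inside the polygonal geo-fence?"""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```

A delivery boundary drawn street-by-street would be expressed as the `polygon` vertex list; the radius tiers handle the retailer example.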

We now have incredibly accurate signals by which we should be able to target users, but there are still some key challenges when trying to leverage this data. Arguably the most important is the accuracy of this information. Depending on where the location information is coming from (browser, in-app, carrier, etc.), the precision varies greatly. Location information is conveyed via latitude-longitude coordinate pairs, and, as such, can vary in degrees of precision. Carrier-provided location data is often only accurate to the area that an individual cell tower provides service to, whereas in-app provided location data can be extremely accurate and place a user inside a retail store, or even in a certain part of a retail store. There is also a large amount of (usually) unintentional location fraud. This refers to reversed latitude-longitude pairs, missing coordinates, or centroids (a central point in a city, state, country, etc.). There are numerous location targeting partners who cleanse and validate location data to help mitigate this problem, but it remains an issue that cannot be ignored.
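The hygiene checks a cleansing partner might run can be sketched as below. The centroid list and the exact rules are hypothetical; real validation services use far larger databases of known centroids and more statistical tests.

```python
# Hypothetical checks for the location-fraud patterns described above:
# reversed coordinate pairs, missing coordinates, and known centroids.
KNOWN_CENTROIDS = {
    (39.8283, -98.5795),   # geographic center of the contiguous US
    (38.9072, -77.0369),   # example city-level centroid (Washington, DC)
}

def validate_location(lat, lon):
    """Return (ok, reason) for a single lat/lon signal."""
    if lat is None or lon is None:
        return False, "missing coordinates"
    if not (-90 <= lat <= 90):
        # A latitude beyond +/-90 usually means the pair was reversed.
        return False, "likely reversed lat/lon pair"
    if not (-180 <= lon <= 180):
        return False, "longitude out of range"
    if (round(lat, 4), round(lon, 4)) in KNOWN_CENTROIDS:
        # Centroids are city/state/country midpoints, not real device fixes.
        return False, "known centroid, not a device fix"
    return True, "ok"
```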

Freshness of the location information is important in hyperlocal location targeting. It is critical that a user be messaged when they are physically at a certain location, not when they were there five minutes ago. One of the challenges of dealing with location information is that this data cannot be cached the same way most information about a user can be. Location is fluid, and users are constantly moving. At scale, this makes location data an enormous stream of information to process.
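A freshness gate of the kind implied above could look like this, assuming each location signal carries a capture timestamp. The 60-second threshold is an illustrative choice, not an industry standard.

```python
# Minimal freshness check: only act on a location fix captured recently.
# The field name "captured_at" and the 60s window are assumptions.
import time

MAX_AGE_SECONDS = 60

def is_fresh(signal, now=None):
    """True if the location fix is recent enough to message on."""
    now = time.time() if now is None else now
    return (now - signal["captured_at"]) <= MAX_AGE_SECONDS
```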

However, when these challenges are overcome, the results are worth it. A quality hyperlocal campaign can provide incredible utility and relevance to a user. Messaging a user at the right place and right time works; we know that. It’s all about the execution. Users are clamoring for this kind of utility. Eighty percent of Google searches that included the term “near me” were from mobile devices in Q4 2014. Even more importantly, the prevalence of the term “near me” is up 34 times since 2011. Users now want — if not demand — relevant information and experiences based on where they are. Hyperlocal is a buzzword, and for good reason. Let’s just make sure we use it to its fullest capabilities. Get creative!


The real reason advertising isn’t more relevant

By Eric Picard (Originally Published on iMedia – February 18, 2015)

I have been pretty publicly dismissive of the idea that we will see significant consumer value driven by ad targeting’s creation of more relevant advertising in the near future. Despite the frequent claim in the industry, I’d call this a false meme today; we don’t have nearly enough disparate messages from marketers to segment the population well enough. At the very least this future is further out simply because there are not enough advertisers spending enough money on enough distinct messages for enough distinct industry verticals, or enough products, to allow us to have enough relevant messages to show people.

Let me be clear: There are privacy issues with which we must contend. But if we step past them for the purposes of this article and look just at this issue of relevance driving value to the consumer, we have a long way to go. The current trend toward massive use of retargeting clearly isn’t hitting this mark if we just make our judgment based on anecdotal input from friends, family, and ourselves. How many times have you experienced (or been told by someone else about) the situation where you visit an online store, buy a product, and then get targeted with ads for the product you just purchased for several days afterwards on numerous websites?

Are the ads more relevant to you? Maybe. Do they add any value to you? Quite the opposite. You probably find the situation as annoying as I do. If I buy a new grill, show me products related to grilling — not the damn grill I just bought. If I buy a new pair of shoes, show me clothing or accessories related to the shoes. If I buy a new car, stop showing me ads for that car or even its competitors. Instead, show me ads related to the fact that I just bought that specific car, or even just that are relevant to a recent car buyer. But at the very least, stop wasting your money showing me the exact product I just purchased.

Frankly, there are reasons why the scenarios I suggest above aren’t happening. About 10 years ago, I had a conversation with an executive at a major publisher who was complaining about how irrelevant the ads on the website were to him. He hated the fact that he kept seeing a “toenail fungus” ad when he didn’t have toenail fungus. Instead, he would love to have seen ads for rock climbing gear, as that was his passion and he was currently looking for new gear.

I explained to him that the toenail fungus ad was creating both category and brand awareness so that if and when he eventually got toenail fungus, he’d remember that he could fix the problem. I also noted that we currently had literally not one ad from an advertiser that sold rock climbing gear available to target to him, so we could not meet his ad targeting needs in that way. This gave him pause. He finally got the point and was willing to concede that maybe he was a good target for toenail fungus ads — but that he hated the creative of the ad and found it “disgusting.” I explained that we could adjust the creative acceptance policy of the site to deal with that issue editorially and that maybe the ad would be more effective if the images were less graphic.

In those days, before programmatic advertising, the solution to the problem seemed like it was just around the corner. But now, a decade later, we still haven’t solved the issue. For clarity, I do very much believe that there will be a tipping point — that as we add the infrastructure and data needed to micro-segment audiences, we will see major changes. Once we have the ability to show a high-quality ad experience and effectively segment users to put ads in front of them with the same level of segmentation as a niche magazine content experience, advertisers in the myriad niche segments of advertising will flood the digital channels with creative that can be matched to the right user. We should explore this a bit.

Consider this example: We are trying to build an advertising experience that is more relevant, and the profile of the person is a 45-year-old male suburban homeowner who is an avid golfer and sports car enthusiast, with teenagers in the house. We can probably find some number of ads that are relevant. But if we want to really add value to that person, we need deeper profile information, a better sense of where he is in the buying cycle for those individual areas, and categorization of creative messages to help tailor the ad experience for the individual.

Example: The avid golfer. There’s a whole ecosystem around golf that could be useful in creating value to the user beyond just showing ads for golf equipment in general. For instance, if our golfer was shopping for a new driver, it would be relevant to show him ads for drivers. Or if several new clubs had been purchased recently, maybe the ads should focus on balls, bags, shoes, or clothing.

Targeting our golfer based on specific product matches is pretty obvious, but equally interesting would be if he lived in the Northeast, it was winter, and he’d recently shown interest in booking a vacation. In that case, the systems should be tailoring the vacation advertising around golfing destinations. This means ads for all sorts of products and services need to be categorized by the messaging used within them such that this kind of matching could be accomplished. Similarly, tailoring ads for numerous products and services around golf should be possible and make those messages more relevant to our golfer. But obviously to make that experience work well, we’d need lots of products and services that could be tailored around the “concept” of golf. Otherwise, we’d show this poor guy the same five ads all the time.

Our systems are on the cusp of these capabilities today. In fact, some of these scenarios could be activated by specific vendors in the industry. But the capabilities need to be ubiquitous enough that marketers drive those scenarios into their advertising creative and into their media plans. So it’s a bit of a chicken-and-egg conundrum: Marketers aren’t driving these scenarios to their vendors, so the vendors haven’t yet activated the capabilities to fulfill the scenarios.

We will get there. But it could take some time.

How To Use RTB For Targeted Reach Instead Of Retargeting

By Eric Picard (Originally published on AdExchanger)

I was recently told by an executive in a position to know that 70 to 80% of revenue in the RTB space comes from retargeting. I found that stunning because it basically tells us that the RTB space is incredibly immature. If the vast majority of revenue in the space is retargeting, then nearly all the spending comes from ecommerce companies.

That means we have huge upside in this space because ecommerce companies certainly don’t make up anything near the majority of advertising spending.

Nearly 90% of advertising spend “all-up” is done on a targeted reach basis. In other words, the advertiser has come up with an ideal marketing persona (or series of marketing personas – many brands have five to 10 defined marketing personas) and their media plan is designed to reach people matching that persona. Using old-school methods, such as Nielsen or comScore, they find publishers with audiences matching their marketing personas, and that’s where they’ll buy impressions.

The problem is that this approach is extremely imprecise and wastes budget by spreading it across the whole audience that visits the publisher. On one hand, it’s wasteful because it pushes the message on audiences that don’t match campaign goals. On the other hand, some “waste” in media spending is OK, because there’s value in getting the message in front of slight target mismatches.

Case in point: I don’t have cable at home. We watch Hulu, Amazon Prime and Netflix when we consume TV content. But recently, while traveling, I saw a few hours of TV in my hotel each night. I was shocked by the vast number of pharmaceutical ads on broadcast television – especially on the news (which I hardly watch anymore).

Most ads related to conditions I’m not facing today – so in a sense those ads were wasted. But should I ever contract one of those conditions, I’ll likely remember those products exist. Or should one of my close friends or loved ones get stricken with those conditions, I’ll recall that a medication exists and engage in conversation with them.

So yes – this broadcast brand strategy certainly does have some value. As I’ve said before: There’s value in the fact that I know Dodge Ram owners are “RAM Tough.”

On the other hand, we can be much more precise now than in the past — if you can find the data. And if you believe in the methodology that created the data, there are ways to more precisely reach your target personas and target audiences of all flavors.

Find The Right Tools

Using demand-side platforms and social media marketing tools, including the self-service tools within Facebook, it’s now possible to find your target audience in a variety of ways. You can be very narrow or very broad. You can control exactly the sites on which you’ll reach that audience, or you can simply specify the sites on which you don’t want to reach your audience.

For brands that are very particular about running ads only on approved content, there is the white list – a specific list of publisher domains that you explicitly approve to run ads on. This does limit scale, but there’s no limit on the size of the white list you can create. And there are vendors, like Trust Metrics, that will build a custom white list for you, which both hones the targeting to sites that match your brand safety metrics and massively reduces fraud.

Or if you want, you can use private marketplaces to execute buys only on the sites you specifically negotiate with for access to their audiences over RTB. This has a lot of value for pharma and marketers that are extremely sensitive to running ads on sites that match their brand values.

If you want to specify a tightly targeted user base, one so targeted that it limits the audience to only a few thousand users, you can do that with tools like Facebook’s advertising platform, which lets you specify many different elements and tells you how limited your audience size is.

Or there are tools like Optim.al, which hones the audience and offers ways to expand or contract it. Or tools that let you find audiences similar to your targeting with less targeting but greater impression volume. (Disclosure: My company Rare Crowds does this.) Or you could use MediaMath’s built-in features to automatically find the right audience that performs best for your campaigns.

Nearly every company playing in the RTB space has functionality designed to meet the needs of advertisers that want to reach specific audiences, not just retarget people who visited your website or who are existing customers. There is the potential to reach people you haven’t reached before, find new customers and prospect for them.

The biggest growth sector for RTB this year is clearly going to be brand advertisers and those that use RTB for targeted reach — just like 90% of all media spending.

The 7 types of targeting you need to know

By Eric Picard (Originally Published in iMedia – May 10, 2014)

For as long as people have been buying ads, they have been targeting their desired audiences. The science behind this obviously has changed over the years. In the beginning — say, back in ancient Greece — it was as simple as putting the name of your pottery shop on a few of your clay pots. This evolved to more location-based models over the millennia, of course, and today we can geo-target your mobile device. End of story? Not quite.

As we think about the evolution of targeted advertising over the past 50 years, there are panel-based “currency” data providers such as Nielsen, Arbitron, and others. These services allow buyers to place ads on specific published content across numerous media, with an understanding of the overall audience breakdown that views this content. Buyers can place their ads on content where their desired audience makes up some percentage of the audience that consumes that content. By doing this across a certain number of publications or shows, they can be relatively confident that they are reaching a certain number of members of their target audience.

This is easy when you’re selling a product or service that has a very broad audience — say, toothpaste. But when you have a very targeted customer you’re trying to reach, it can be much more difficult. Other than niche publications clearly aligned with your target customer — say, knitting magazines or websites — it has been hard to find enough touchpoints to reach prospective customers easily.

That has changed significantly over the last few years. Let’s focus on digital media for our purposes. The core types of targeting available today include the following.

Panel-based data

Panel-based data is the most broadly used today, from providers such as Nielsen, comScore, and others. These panels are used as described above — to understand the overall audiences that consume content provided by a publisher. This “whole milk” approach works well for brand advertisers that have large audiences that are easy to find.

Geography

This category includes geo-targeting and geo-derived information such as Nielsen PRIZM clusters that merge information about households in specific geographies. This is much more important today than in the past, given that mobile devices offer information about where audiences are at the moment of the ad delivery, thereby taking location-based advertising to new heights. In mobile devices, this matters a lot, as some of the mechanisms available on the web are either not available on mobile, or much less available due to technical limitations related to cookies.

First-party audience data

First-party audience data is available from either the advertisers directly (data they have about their existing customers) or from publishers directly (data they have about their individual audience members). First-party data is derived either from explicitly provided information or from observed behavior.

On the advertiser side, this is typically CRM data; generally these are either customers or prospects with whom the advertiser has had direct contact. Perhaps the person in question has purchased from the advertiser before, or perhaps that person has signed up for a newsletter. In the case of e-commerce, perhaps the user has visited the site but hasn’t purchased, in which case a click-path analysis might derive some information about the person’s interests.

In the case of publishers, this information can be captured through registration (which actually tends to be much more accurate than professionals believe; as it turns out, many people don’t put in fake information) or from observed behavior (users who read financial news get put into a finance bucket to be targeted when consuming other kinds of content).

Third-party audience data

Third-party audience data is available from numerous providers. Typically these data points are derived from observing the behavior (anonymously) of the end users as they’re moving across numerous websites. Sometimes this data is derived from other sources, such as credit card activity matched anonymously to users via cookie matching.

Third-party retargeting data

Third-party retargeting data is available from numerous providers. These companies will typically place targeting tags on both the advertiser and publisher websites and then link those together in order to execute media buys. Because the provider needs to have matched cookies on both the advertiser and publisher websites, typically these services run as ad networks, since they need to close the loop directly. But there are providers that allow advertisers to create their own retargeting cookie pools and reach their customers and prospects over ad exchanges and through their own direct publisher relationships. This is frequently referred to as second-party targeting.

Look-alike targeting

Look-alike targeting is also available from numerous providers; the buyer provides the look-alike vendor or network with a pool of cookies or data definitions, and the provider then finds matching audiences who “look like” the users provided. This allows the buyer to get value similar to retargeting campaigns, but for much larger audiences.
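One simple way to think about “looks like” is attribute overlap. The sketch below scores candidates against a seed pool by Jaccard similarity; real look-alike vendors use far richer statistical models, and the attribute names and threshold here are made up for illustration.

```python
# Toy look-alike expansion: keep candidates whose attribute sets
# overlap enough (Jaccard similarity) with any seed profile.
def jaccard(a, b):
    """Similarity of two attribute sets, 0.0 to 1.0."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def look_alikes(seed_profiles, candidates, threshold=0.5):
    """Return candidate user IDs resembling at least one seed profile."""
    return [
        user for user, attrs in candidates.items()
        if any(jaccard(attrs, seed) >= threshold for seed in seed_profiles)
    ]
```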

Custom micro-segmentation

Custom micro-segmentation is available from a few providers. This enables the buyer to specify extremely targeted audiences that are orders of magnitude more targeted than what is available over the open market and that match their ad campaign goals exactly or much more closely. This type of targeting can be used for brand campaigns or for performance.

The types of targeting above are broad bucket definitions, and there are now literally hundreds of thousands, if not millions, of available targeting segments on the market. Vendors should be more than happy to educate buyers (and sellers) on the opportunity and methodologies behind the data segmentation. I highly recommend that one or more buyers within every buying group become an expert in the types of available segmentation and the data models involved.

Life after the death of 3rd Party Cookies

By Eric Picard (Originally published on AdExchanger.com July 8th, 2013)

In spite of plenty of criticism by the IAB and others in the industry, Mozilla is moving forward with its plan to block third-party cookies and to create a “Cookie Clearinghouse” to determine which cookies will be allowed and which will be blocked.  I’ve written many articles about the ethical issues involved in third-party tracking and targeting over the last few years, and one I wrote in March — “We Don’t Need No Stinkin’ Third-Party Cookies” — led to dozens of conversations on this topic with both business and technology people across the industry.

The basic tenor of those conversations was frustration. More interesting to me than the business discussions, which tended to be both inaccurate and hyperbolic, were my conversations with senior technical leaders within various DSPs, SSPs and exchanges. Those leaders’ reactions ranged from completely freaked out to subdued resignation. While it’s clear there are ways we can technically resolve the issues, the real question isn’t whether we can come up with a solution, but how difficult it will be (i.e. how many engineering hours will be required) to pull it off.

Is This The End Or The Beginning?

Ultimately, Mozilla will do whatever it wants to do. It’s completely within its rights to stop supporting third-party cookies, and while that decision may cause chaos for an ecosystem of ad-technology vendors, it’s completely Mozilla’s call. The company is taking a moral stance that’s, frankly, quite defensible. I’m actually surprised it’s taken Mozilla this long to do it, and I don’t expect it will take Microsoft very long to do the same. Google may well follow suit, as taking a similar stance would likely strengthen its own position.

To understand what life after third-party cookies might look like, companies first need to understand how technology vendors use these cookies to target consumers. Outside of technology teams, this understanding is surprisingly difficult to come by, so here’s what you need to know:

Every exchange, Demand-Side Platform, Supply-Side Platform and third-party data company has its own large “cookie store,” a database of every single unique user it encounters, identified by an anonymous cookie. If a DSP, for instance, wants to use information from a third-party data company, it needs to be able to accurately match that third-party cookie data with its own unique-user pool. So in order to identify users across various publishers, all the vendors in the ecosystem have connected with other vendors to synchronize their cookies.

With third-party cookies, they could do this rather simply. While the exact methodology varies by vendor, it essentially boils down to this:

  1. The exchange, DSP, SSP or ad server carves off a small number of impressions for each unique user for cookie synching. All of these systems can predict pretty accurately how many times a day they’ll see each user and on which sites, so they can easily determine which impressions are worth the least amount of money.
  2. When a unique ID shows up in one of these carved-off impressions, the vendor serves up a data-matching pixel for the third-party data company. The vendor places its unique ID for that user into the call to the data company. The data company looks up its own unique ID, which it then passes back to the vendor with the vendor’s unique ID.
  3. That creates a lookup table between the technology vendor and the data company so that when an impression happens, all the various systems are mapped together. In other words, when it encounters a unique ID for which it has a match, the vendor can pass the data company’s ID to the necessary systems in order to bid for an ad placement or make another ad decision.
  4. Because all the vendors have shared their unique IDs with each other and matched them together, this creates a seamless (while still, for all practical purposes, anonymous) map of each user online.
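The lookup table those four steps build can be sketched as below. The vendor and partner names and ID formats are hypothetical; this only illustrates the data structure, not any particular vendor's implementation.

```python
# Sketch of the ID-matching table built by the cookie-sync steps above.
class CookieSyncTable:
    def __init__(self):
        # vendor user ID -> {partner name: partner's user ID}
        self._by_vendor_id = {}

    def record_match(self, vendor_id, partner, partner_id):
        """Steps 2-3: the sync pixel returns the partner's ID for this user."""
        self._by_vendor_id.setdefault(vendor_id, {})[partner] = partner_id

    def lookup(self, vendor_id, partner):
        """Step 4: at decision time, translate our ID into the partner's ID."""
        return self._by_vendor_id.get(vendor_id, {}).get(partner)
```

For example, after `record_match("v-123", "dataco", "d-987")`, a bid request carrying vendor ID `v-123` can be enriched with the data company's ID via `lookup("v-123", "dataco")`.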

All of this depends on the basic third-party cookie infrastructure Mozilla is planning to block, which means that all of those data linkages will be broken for Mozilla users. Luckily, some alternatives are available.

Alternatives To Third-Party Cookies

1)  First-Party Cookies: First-party cookies also can be (and already are) used for tracking and ad targeting, and they can be synchronized across vendors on behalf of a publisher or advertiser. In my March article about third-party cookies, I discussed how this can be done using subdomains.

Since then, several technical people have told me they couldn’t use the same cross-vendor-lookup model, outlined above, with first-party cookies — but generally agreed that it could be done using subdomain mapping. Managing subdomains at the scale that would be needed, though, creates a new hurdle for the industry. To be clear, for this to work, every publisher would need to map a subdomain for every single vendor and data provider that touches inventory on its site.
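The scale problem is easy to see in a sketch: every publisher maintains one first-party subdomain per vendor. The vendor names and domain are hypothetical.

```python
# Illustrative subdomain map a publisher would maintain so each vendor's
# cookies are set under the publisher's own (first-party) domain.
VENDORS = ["dsp-one", "ssp-two", "dataco-three"]  # hypothetical vendors

def subdomain_map(publisher_domain):
    """One first-party subdomain per vendor, e.g. dsp-one.example.com."""
    return {v: f"{v}.{publisher_domain}" for v in VENDORS}
```

Multiply this by every vendor and data provider touching a publisher's inventory, across every publisher, and the operational hurdle described above becomes clear.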

So there are two main reasons that switching to first-party cookies is undesirable for the online-ad ecosystem:  first, the amount of work that would need to be done; second, the lack of a process in place to handle all of this in a scalable way.

Personally, I don’t see anything that can’t be solved here. Someone needs to offer the market a technology solution for scalable subdomain mapping, and all the vendors and data companies need to jump through the hoops. It won’t happen in a week, but it shouldn’t take a year. First-party cookie tracking (even with synchronization) is much more ethically defensible than third-party cookies because, with first-party cookies, direct relationships with publishers or advertisers drive the interaction. If the industry does switch to mostly first-party cookies, it will quickly drive publishers to adopt direct relationships with data companies, probably in the form of Data Management Platform relationships.

2) Relying On The Big Guns: Facebook, Google, Amazon and/or other large players will certainly figure out how to take advantage of this situation to provide value to advertisers.

Quite honestly, I think Facebook is in the best position to offer a solution to the marketplace, given that it has the most unique users and its users are generally active across devices. This is very valuable, and while it puts Facebook in a much stronger position than the rest of the market, I really do see Facebook as the best voice of truth for targeting. Despite some bad press and some minor incidents, Facebook appears to be very dedicated to protecting user privacy – and also is already highly scrutinized and policed.

A Facebook-controlled clearinghouse for data vendors could solve many problems across the board. I trust Facebook more than other potential solutions to build the right kind of privacy controls for ad targeting. And because people usually log into only their own Facebook account, this avoids the problems that have hounded cookie-based targeting related to people sharing devices, such as when a husband uses his wife’s computer one afternoon and suddenly her laptop thinks she’s a male fly-fishing enthusiast.

3) Digital Fingerprinting: Fingerprinting, of course, is as complex and as fraught with ethical issues as third-party cookies, but it has the advantage of being an alternative that many companies already are using today. Essentially, fingerprinting analyzes many different data points that are exposed by a unique session, using statistics to create a unique “fingerprint” of a device and its user.

This approach suffers from one of the same problems as cookies: the challenge of dealing with multiple consumers using the same device. But it’s not a bad solution. One advantage is that fingerprinting can take advantage of users with static IP addresses (or IP addresses that are not officially static but rarely change).
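At its core, fingerprinting reduces session-exposed signals to a stable identifier. The toy below hashes a handful of such signals; real systems weigh many more signals statistically, and the signal names here are illustrative assumptions.

```python
# Toy device fingerprint: hash a few session-exposed signals into an ID.
# Real fingerprinting uses many more signals and statistical weighting.
import hashlib

def fingerprint(ip, user_agent, accept_language, screen):
    """Deterministic short ID from the given (assumed) session signals."""
    raw = "|".join([ip, user_agent, accept_language, screen])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```

Because the same inputs always yield the same ID, a user with a static IP address keeps a stable fingerprint across sessions; any change in a signal changes the ID, which is exactly the fragility the paragraph above notes.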

Ultimately, though, this is a moot point because of…

4) IPv6: IPv6 is on the way. This will give every computer and every device a static, permanent, unique identifier, at which point IPv6 will replace not only cookies, but also fingerprinting and every other form of tracking identification. That said, we’re still a few years away from having enough IPv6 adoption to make this happen.

If Anyone From Mozilla Reads This Article

Rather than blocking third-party cookies completely, it would be fantastic if you could leave them active during each session and just blow them away at the end of each session. This would keep the market from building third-party profiles, but would keep some very convenient features intact. Some examples include frequency capping within a session, so that users don’t have to see the same ad 10 times; and conversion tracking for DR advertisers, given that DR advertisers (for a whole bunch of stupid reasons) typically only care about conversions that happen within an hour of a click. You already have Private Browsing technology; just apply that technology to third-party cookies.
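To show why session-scoped cookies would keep frequency capping intact, here’s a rough Python sketch (the cookie name and cap value are my own invention). The key detail: a cookie set without an Expires or Max-Age attribute is a session cookie, so the count disappears when the browser closes and no cross-session profile can accumulate.

```python
from http.cookies import SimpleCookie

FREQ_CAP = 3  # illustrative: max times one creative is shown per session

def should_serve(cookie_header, ad_id):
    """Return (serve?, Set-Cookie value) for a session-scoped frequency cap.

    No Expires/Max-Age is set, so the browser discards the count at the
    end of the session -- exactly the behavior proposed above.
    """
    jar = SimpleCookie(cookie_header)
    key = f"freq_{ad_id}"
    seen = int(jar[key].value) if key in jar else 0
    if seen >= FREQ_CAP:
        return False, jar[key].OutputString()
    jar[key] = str(seen + 1)
    return True, jar[key].OutputString()

served_first, cookie_value = should_serve("", "ad1")   # fresh session
served_capped, _ = should_serve("freq_ad1=3", "ad1")   # cap already reached
```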

Targeting fundamentals everyone should know

 

By Eric Picard (Originally published in iMediaConnection, April 11th, 2013)

Targeting data is ubiquitous in online advertising and has become close to “currency” as we think about it in advertising. And I mean currency in the same way that we think about Nielsen ratings in TV or impression counts in digital display. We pay for inventory today in many cases based on a combination of the publisher, the content associated with the impression, and the data associated with a variety of elements. This includes the IP address of the computer (lots of derived data comes from this), the context of the page, various content categories and quality metrics, and — of course — behavioral and other user-based targeting attributes.

But for all the vetting done by buyers of base media attributes, such as the publisher or the page or quality scores, there’s still very little understanding of where targeting data comes from. And even less when it comes to understanding how it should be valued and why. So this article is about just that topic: how targeting data is derived and how you should think about it from a value perspective.

Let’s get the basic stuff out of the way: anything derived from the IP address and user agent. When a browser visits a web page, it spits out a bunch of data to the servers that it accesses. The two key attributes are IP address and user agent. The IP address is a simple one; it’s the number assigned to the user’s computer by their internet provider so that the various servers it touches can identify it. An immense amount of information can be inferred from that number; the key piece is the geography of the user.

There are lots of techniques used here with varying degrees of granularity. But we’ll just leave it at the idea that companies have amassed lists of IP addresses assigned to specific geographic locations. It’s pretty accurate in most cases, but there are still scenarios where people are connected to the internet via private networks (such as a corporate VPN) that confuse the world by assigning IP addresses to users in one location when they are actually in another. This was the classic problem with IP-address-based geography back in the days of dial-up, when most users showed up as part of Reston, Va. (where AOL had its data centers). Today, when most users are on broadband, the mapping is much more accurate and comprehensive.
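At their simplest, those amassed lists are just sorted IP ranges with a location attached, searched with a binary search. A toy sketch (the two ranges below are invented for illustration; real vendors license databases with millions of ranges at much finer granularity):

```python
import bisect
from ipaddress import ip_address

# Hypothetical range table, sorted by range start.
GEO_RANGES = [  # (start, end, location)
    (int(ip_address("8.8.8.0")), int(ip_address("8.8.8.255")), "US"),
    (int(ip_address("81.2.69.0")), int(ip_address("81.2.69.255")), "GB"),
]
STARTS = [start for start, _, _ in GEO_RANGES]

def geo_lookup(ip):
    """Find the range containing ip, or None (e.g. an unmapped VPN egress)."""
    n = int(ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and GEO_RANGES[i][0] <= n <= GEO_RANGES[i][1]:
        return GEO_RANGES[i][2]
    return None
```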

As important as geography are the various mappings that are done against location. Claritas PRIZM and other derived-data products use geography to map a variety of attributes to the user browsing the page. These techniques have moved out of traditional media (especially direct-response mailing lists) to digital and are quite useful. The only issue is that the further down the chain of assumptions used to derive attributes, the more muddled things become. Statistically, the data is still relevant, but on a per-user basis it is potentially completely inaccurate. That shouldn’t stop you from using this information, nor should you devalue it; just be clear that there’s a margin of error here.

User agent is an identifier for the browser itself, which can be used to target users of specific browsers but also to identify non-browser activity that chooses to identify itself. For instance, various web crawlers such as search engines identify themselves to the server delivering a web page, and ad servers know not to count those ad impressions as human. This assumes good behavior on the part of the programmers; sometimes “real” user agents are spoofed when the intent is to create fake impressions. Sometimes a malicious ad network or bad actor will do this to create fake traffic to drive revenue.
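In practice, an ad server’s “don’t count this impression” check can be as simple as looking for the tokens that well-behaved crawlers declare in their user agent. A sketch (the token list is a small illustrative sample, and as noted above, it is useless against a spoofed user agent):

```python
# Tokens that well-known crawlers voluntarily include in their UA string.
KNOWN_BOT_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot")

def is_declared_bot(user_agent):
    """True if the user agent voluntarily identifies itself as a crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_BOT_TOKENS)
```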

Crawled data

There’s a whole class of data that’s derived by sending a robot to a web page, crawling through the content on the page, and classifying the content based on all sorts of analysis. This mechanism is how Google, Bing, and other search engines classify the web. Contextual targeting systems like AdSense classify the web pages into keywords that can be matched by ad sales systems. And quality companies, like Trust Metrics and others, scan pages and use hundreds or thousands of criteria to value the rank of the page — everything from ensuring that the page doesn’t contain porn or hate speech to analyzing the amount of white space around images and ads and the number of ads on a page.
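A crude sketch of what such a crawler does after fetching a page: strip the markup, count keywords, and run the text past a safety lexicon. The stopword and blocklist contents here are placeholders standing in for the hundreds or thousands of criteria real systems use:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "on", "for"}
BLOCKLIST = {"porn"}  # stand-in for a real brand-safety lexicon

def classify_page(html_text, top_n=3):
    """Strip tags, extract the top keywords, and flag brand-unsafe terms."""
    text = re.sub(r"<[^>]+>", " ", html_text).lower()
    words = [w for w in re.findall(r"[a-z]+", text) if w not in STOPWORDS]
    return {
        "keywords": [w for w, _ in Counter(words).most_common(top_n)],
        "brand_safe": not any(w in BLOCKLIST for w in words),
    }

page = classify_page(
    "<h1>Fly Fishing Gear</h1><p>Fishing rods and fishing reels for fly fishing.</p>"
)
```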

User targeting

Beyond the basics of browser, IP, and page content, the world is much less simple. Rather than diving into methodologies and trying to simplify a complex problem, I’ll simply list and summarize the options here:

Registration data: Publishers used to require registration in order to access their content and, in that process, request a bunch of data such as address, demographics, psychographics, and interests. This process fell out of favor for many publishers over the years, but it’s coming back hard. Many folks in our industry are cynical about registration data, using their own experiences and feelings to discount the validity of user registration data. But in reality, this data is highly accurate; even for large portals, it is often higher than 70 percent accurate, and for news sites and smaller publishers, it’s much more accurate.

Interestingly, the use of co-registration through Facebook, Twitter, LinkedIn, and others is making this data much more accurate. One of the most valuable things about registration data is that it creates a permanent link between a user and the publisher that lives beyond the cookie. Subsequently captured data from various sessions is extremely accurate even if the user fudged his or her registration information.

First-party behavioral data: Publishers and advertisers have a great advantage over third parties in that they have a direct relationship with the user. This gives them incredible opportunities to create deeply refined targeting segments based on interest, behavior, and especially custom-created content such as showcases, contests, and other registration information. Once a publisher or advertiser creates a profile of a user, it has the means to track and store very rich targeting data — much richer in theory than a third party could easily create. For instance, you might imagine that Yahoo Finance benefits highly from registered users who track their stock portfolio via the site. Similarly, users searching for autos, travel, and other vertical-specific information create immense targeting value.

Publishers curbed their internal targeting efforts years ago because they found that third-party data companies were buying targeted campaigns on publishers and then their high-cost, high-value targeting data was leaking away to third parties. But the world has shifted again, and publishers and advertisers both are benefiting highly from the data management platforms (DMPs) that are now common on the market. The race toward using first-party cookies as the standard for data collection is further strengthening publishers’ positions. Targeted content categories and contests are another way that publishers and advertisers have a huge advantage over third parties.

Creating custom content or contests with the intent to derive high-value audience data that is extremely vertical or particularly valuable is easy when you have a direct relationship with the user. You might imagine that Amazon has a huge lead in the market when it comes to valuation of users by vertical product interest. Similarly, big publishers can segment users into buckets based on their interest in numerous topics that can be used to extrapolate value.

Third-party data: There are many methods used to track and value users based on third-party cookies (those pesky cookies set by companies that generally don’t have a direct relationship with the user — and which are tracking them across websites). Luckily there are lots of articles out there (including many I’ve written) on how this works. But to quickly summarize: Third-party data companies generally make use of third-party cookies that are triggered on numerous websites across the internet via tracking pixels. These pixels are literally just a 1×1 pixel image (sometimes called a “clear pixel”), or even just a simple no-image JavaScript call from the third-party server, that allows the company to set and/or access a cookie on the user’s browser. These cookies are extremely useful to data companies in tracking users because the same cookie can be accessed on any website, on any domain, across sessions, and sometimes across years of time.
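Mechanically, the third-party server behind one of those pixels does very little: it returns a transparent GIF and sets (or re-reads) its cookie. A hedged sketch — the domain “tracker.example” and the cookie name “uid” are invented for illustration:

```python
import uuid

# The classic transparent 1x1 "clear pixel" GIF payload.
CLEAR_PIXEL_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00"
    b"\x01\x00\x00\x02\x02\x44\x01\x00\x3b"
)

def pixel_response(existing_cookie=None):
    """Return (body, headers) for a tracking-pixel request.

    Reuses the browser's existing cookie if it sent one, otherwise
    mints a new anonymous ID. Because the cookie belongs to the
    tracker's own domain, every site embedding the pixel reads the
    same ID -- the cross-site property described above.
    """
    uid = existing_cookie or uuid.uuid4().hex
    headers = [
        ("Content-Type", "image/gif"),
        ("Set-Cookie", f"uid={uid}; Domain=.tracker.example; Path=/"),
    ]
    return CLEAR_PIXEL_GIF, headers

body, headers = pixel_response("abc123")
```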

Unfortunately for the third-party data companies, third-party cookies have recently come under intense scrutiny since Apple’s Safari doesn’t allow them by default and Firefox has announced that it will set new defaults in its next browser version to block third-party cookies. This means that those companies relying exclusively on third-party cookies will see their audience share erode and will need to fall back on other methods of tracking and profiling users. Note that these companies all use anonymous cookies and work hard to be safe and fair in their use of data. But the reality is that this method is becoming harder for companies to use.

By following users across websites, these companies can amass large and comprehensive profiles of users such that advertising can be targeted against them in deep ways and more money can be made from those ad impressions.

We don’t need no stinkin’ 3rd party cookies!

By Eric Picard (Originally published on AdExchanger.com)

I’ve been writing about some of the ethical issues with “opt-out” third-party tracking for a long time. It’s a practice that makes me extremely uncomfortable, which is not where I started out. You can read my opus on this topic here.

In this article, I want to go into detail about why third-party cookies aren’t needed by the ecosystem, and why doing away with them as a default setting is both acceptable and not nearly as harmful as many are claiming.

 

First order of business: What is a third-party cookie?

When a user visits a web page, they load a variety of content. Some of this content comes from the domain they’re visiting. (For simplicity’s sake, let’s call it Publisher.com.) Some comes from third parties that load this content onto Publisher.com’s website. (Let’s call one of them ContentPartner.com.) For example, you could visit a site about cooking, and the Food Network could provide some pictures or recipes that the publisher embeds into the page. Those pictures and recipes sit on servers controlled by the content partner and point to that partner’s domain.

When content providers deliver content to a browser, they have the opportunity to set a cookie. When you’re visiting Publisher.com’s page, it can set a first-party cookie because you’re visiting its web domain. In our example above, ContentPartner.com is also delivering content to your browser from within Publisher.com’s page, so the kind of cookie it can deliver is a third-party cookie. There are many legitimate reasons why both parties would drop a cookie on your browser.
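The first-party vs. third-party distinction boils down to comparing the page’s domain with the cookie-setter’s domain. A naive two-label sketch (production code consults the Public Suffix List rather than just taking the last two labels):

```python
def registrable(domain):
    """Naive base domain: last two labels (real code uses the Public Suffix List)."""
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def cookie_party(page_domain, setter_domain):
    """Classify a cookie by whether the setter shares the page's base domain."""
    if registrable(page_domain) == registrable(setter_domain):
        return "first-party"
    return "third-party"
```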

If this ended there, we probably wouldn’t have a problem. But this construct – allowing content from multiple domains to be mapped together into one web page, which was really a matter of convenience when the web first was created – is the same mechanism the ad industry uses to drop tracking pixels and ads onto publishers’ web pages.

For example, you might visit Publisher.com and see an ad delivered by AdServer.com. And on every page of that site, you might load tracking pixels delivered by TrackingVendor1.com, TrackingVendor2.com, etc. In this case, only Publisher.com can set a first-party cookie. All the other vendors are setting third-party cookies.

There are many uses for third-party cookies that most people would have no issue with, but some uses of third-party cookies have privacy advocates up in arms. I’ll wave an ugly stick at this issue and just summarize it by saying: Companies that have no direct relationship with the user are tracking that user’s behavior across the entire web, creating profiles on him or her, and profiting off of that user’s behavior without his or her permission.

This column isn’t about whether that issue is ethical or acceptable, because allowing third-party cookies to be active by default is done at the whim of the browser companies. I’ve predicted for about five years that the trend would head toward all browsers blocking them by default. So far Safari (Apple’s browser) doesn’t allow third-party cookies by default, and Mozilla’s Firefox has announced it will block them by default in the next version of Firefox.

Why I don’t think blocking third-party cookies is a problem

There are many scenarios where publishers legitimately need to deliver content from multiple domains. Sometimes several publishers are owned by one company, and they share central resources across those publishers, such as web analytics, ad serving, and content distribution networks (like Akamai). It has been standard practice in many of these cases for publishers to map their vendors against their domain, which by the way allows them to set first-party cookies as well.

How do they accomplish this? They set a ‘subdomain’ that is mapped to the third party’s servers. Here’s an example:

Publisher.com wants to use a web analytics provider but set cookies from its own domain. It creates a subdomain called WebAnalytics.Publisher.com using its Domain Name System, or DNS, records. (I won’t get too technical, but DNS is the way the Internet maps domain names to IP addresses, the numeric identifiers of servers.) It’s honestly as simple as one of the publisher’s IT people opening up a web page that manages their DNS, creating a subdomain name, and mapping it to a specific IP address. And that’s it.

This allows the third-party vendor to place first-party cookies onto the browser of the user visiting Publisher.com. This is a standard practice that is broadly used across the industry, and it’s critically important to the way that the Internet functions. There are many reasons vendors use subdomains, not just to set first-party cookies. For instance, this is standard practice in the web analytics space (except for Google Analytics) and for content delivery networks (CDNs).
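The mapping can be pictured as a single extra DNS record. In this toy sketch, the IPs come from documentation address space and stand in for the publisher’s and the analytics vendor’s real servers:

```python
# One extra DNS record points a publisher subdomain at the vendor's server.
ZONE = {
    "publisher.com": "198.51.100.7",               # publisher's own server
    "webanalytics.publisher.com": "203.0.113.10",  # vendor's server, publisher's domain
}

def resolve(hostname):
    """Toy DNS lookup."""
    return ZONE[hostname.lower()]

def registrable_domain(hostname):
    """Naive two-label base domain (real code uses the Public Suffix List)."""
    return ".".join(hostname.lower().split(".")[-2:])
```

Requests to WebAnalytics.Publisher.com reach the vendor’s machine, yet any cookie it sets shares Publisher.com’s registrable domain, which is what makes it first-party.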

So why doesn’t everybody just map subdomains and set first-party cookies?

First, let me say that while it is fairly easy for a publisher’s IT department to map a subdomain, it would be impractical for a demand-side platform (DSP) or other buy-side vendor to go out and have every existing publisher map a subdomain for it. Those focused on first-party data on the advertiser side will still have access to that data in this world. But for broader data sets, they’ll be picking up their targeting data via the exchange as pushed through by the publisher on the impression itself. For the data management platforms (DMPs), given that this is their primary business, it is a reasonable thing for them to map subdomains for each publisher and advertiser that they work with.

Also, the thing that vendors like about third-party cookies is that by default they work across domains. That means a data company could set pixels on every publisher’s website it could convince to place them, and then automatically track one cookie across every site the user visited. Switching to first-party cookies breaks that broad set of actions across multiple publishers into pockets of activity at the individual publisher level. There is no cheap, convenient way to map one user’s activity across multiple publishers. Only those companies that have a vested interest — the DMPs — will make that investment, and that will keep smaller vendors who can’t make the investment from participating.

But, is that so bad?

So does moving to first-party cookies break the online advertising industry?

Nope. But it does complicate things. Let me tell you about a broadly used practice in our industry – one that every single data company uses on a regular basis. It’s a practice that gets very little attention today but is pretty ubiquitous. It’s called cookie mapping.

Here’s how it works: Let’s say one vendor has its unique anonymous cookies tracking user behavior and creating big profiles of activity, and it wants to use that data on a different vendor’s servers. In order for this to work, the two vendors need to map together their profiles, finding unique users (anonymously) who are the same user across multiple databases. How this is done is extremely technical, and I’m not going to mangle it by trying to simplify the process. But at a very high level, it’s something like this:

Media Buyer wants to use targeting data on an exchange using a DSP. The DSP enables the buyer to access data from multiple data vendors. The DSP has its own cookies that it sets (today these are third-party cookies) on users when it runs ads. The DSP and the data vendor work with a specialist vendor to map together the DSP’s cookies and the data vendor’s cookies. These cookie mapping specialists (Experian and Acxiom are examples, but others provide this service as well) use a complex set of mechanisms to map together overlapping cookies between the two vendors. They also have privacy auditing processes in place to ensure that this is done in an ethical and safe way and that none of the vendors gets access to personally identifiable data.
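Stripped of the specialists’ privacy machinery, the core join looks something like this sketch (the table shapes and the choice of a hashed email as the shared “data key” are my own simplification):

```python
import hashlib

def hashed_key(email):
    """Both sides hash the shared identifier the same way, so the raw
    email never needs to change hands."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def map_cookies(dsp_table, vendor_table):
    """Return {dsp_cookie_id: vendor_cookie_id} for users seen by both sides.

    Each table maps one side's anonymous cookie ID to the hashed key it
    observed for that user (e.g. captured at a login event).
    """
    by_key = {key: vendor_id for vendor_id, key in vendor_table.items()}
    return {dsp_id: by_key[key]
            for dsp_id, key in dsp_table.items() if key in by_key}
```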

Note that this same process is used between advertisers and publishers and their own DMPs so that first-party data from CRM and user registration databases can be mapped to behavioral and other kinds of data.

The trend for data companies in the last few years has been to move into DMP mode, working directly with the advertisers and publishers rather than trying to survive as third-party data providers. This move was intelligent – almost prescient of the change that is happening in the browser space right now.

My short take on this evolution

I feel that this change is both inevitable and positive. It puts more power back in the hands of publishers; it solidifies their value proposition as having a direct relationship with the consumer, and will drive a lot more investment in data management platforms and other big data by publishers. The last few years have seen a data asymmetry problem arise where the buyers had more data available to them than the publishers, and the publishers had no insight into the value of their own audience. They didn’t understand why the buyer was buying their audience. This will fall back into equilibrium in this new world.

Long tail publishers will need to rely on their existing vendors to ensure they can easily map a first-party cookie to a data pool; these solutions need to be built by companies that cater to long tail publishers, such as ad networks. The networks will need to work with their own DMP and data solutions to ensure that they’re mapping domains together on behalf of their long tail publishers and pushing that targeting data with the inventory into the exchanges. The other option for longer tail publishers is to license their content to larger publishers who can aggregate it into their sites. It will require some work, which also means ambiguity and stress. But certainly this is not insurmountable.

I also will say that first-party tracking is both ethical and justifiable. Third-party tracking without the user’s permission is ethically a challenging issue, and I’d argue that it’s not in the best interest of our industry to try and perpetuate – especially since there are viable and acceptable alternatives.

That doesn’t mean switching off of third-party cookies is free or easy. But in my opinion, it’s the right way to do this for long-term viability.

The inconvenience of privacy revisited

Back in 2004 I wrote an article called “The Inconvenience of Privacy.” It was the first article I wrote about privacy, and I still go back and read it. Frankly, it wasn’t a great article; the best thing about it was the premise of the title: Privacy is inconvenient to achieve even if it’s something people want!

The simple fact is that most people aren’t privacy fanatics. Most people don’t understand the implications of privacy issues, and most simply can’t be bothered to figure out how to keep their activity private. One common meme of the last few years is that “younger generations” today don’t care about privacy like “we used to” back in the pre-internet days. I actually don’t agree with this; I’d argue that young people simply never care much about privacy, in any generation. They don’t have anything to protect yet, and they face little real short-term harm from a lack of privacy. Until you have assets, a career, a professional reputation, a family (in other words, something to lose), privacy just doesn’t seem too important, especially since it’s so damn inconvenient to achieve.

The problem is that almost everything done on the internet is permanent. So while in the pre-internet world, our youthful indiscretions were washed away by time, the pictures from Spring Break 2009 are still showing up in web searches for your daughter’s name. And when her prospective bosses go and look her up online, their impression of her may well be set by the most indiscreet moments in her young life, certainly not what she’d want to be characterized by when interviewing for a job.

The problem is that with social media tools such as Facebook, and photo-sharing sites like Flickr, the power to have these “interesting” life moments immortalized is in the hands of others. And once something exists on the internet, it’s out in public forever. Most people aren’t even aware of tools like the Wayback Machine, or don’t realize that search engines maintain searchable archives of the pages and images they’ve indexed.

With all the social unrest in less democratic and stable societies, privacy is more than just an issue of convenience or professional acceptance; it can be an issue of freedom and personal liberty, possibly even mortal danger. Technologies that show us who is tracking us and, far more importantly in my opinion, that protect us from being tracked are floating around in the market. The most recent example that I’ve seen is SpotFlux, which raised $1M in venture funding.

Regardless of the issues, privacy and advertising are extremely intertwined, due to the overlap of various behavioral targeting and tracking technologies and the fact that some consumers feel their behavior is being tracked, without permission, to make some unnamed faceless corporation money. The upshot is that at some point a backlash against various tracking technologies will come, and extremely simple, convenient technologies for keeping your online activity private will arrive on the market. And the debate will simply end.

Then the debate about people blocking their activity from publishers will become a bigger issue, since publishers need to be able to track their users over sessions and over time so that they can more easily sell ads. So privacy and advertising are deeply intertwined and simply can’t be disconnected. The right solution to this problem has not come along yet, but I’m confident it will eventually.

The Ethical Issues with 3rd Party Behavioral Tracking

(Originally published on AdExchanger, October 2011) by Eric Picard

Companies that track consumers’ behavior across the web without their consent, and without providing them any recognizable value, should stop. I’ll argue that virtually no company that tracks consumer behavior across multiple sites actually provides consumers with recognizable value.

And the real issue here is that consumers never opt in to being tracked this way; if we required this, the ethical issues would go away. But we don’t require an opt-in because, in reality, consumers don’t want this and don’t benefit from it, and as an industry we’re acting in unethical ways. I realize that for this audience, my position makes me as unpopular as a New York City steam bath in August, but I challenge the industry to really stand up and do the right thing here.

For clarity: Publishers that track what their visitors do on that one publisher’s site face completely different issues. Consumers who visit a publisher’s site are engaging in a direct relationship with that publisher. As long as the publisher is collecting data to be used only on its own website, this is defensible; the consumer has elected to visit the site and gets the value of the content the publisher provides. And if the publisher asks the consumer to opt in to being tracked across multiple websites, then there is no ethical issue at stake. But cross-publisher behavioral tracking should definitely require an opt-in.

As long as a publisher has a clear privacy policy, data collection for their site without an opt-in is ethical. The consumer gets value from personalization of content as well as enabling the publisher to sell behaviorally targeted advertising. And the publisher has the right to collect this data to optimize their business, especially given that most publishers make the most of their revenue from advertising – this data is generally used to better sell ads to advertisers.

While a consumer is visiting a publisher’s site, the publisher certainly has the right to track his or her behavior. And having a user specifically ‘opt-out’ of being tracked on that publisher is the ‘right’ option to provide in terms of creating consumer good will.

There are three arguments commonly used by advertising industry professionals that make a case for behavioral targeting today:

1. Behavioral targeting makes advertising more relevant, a consumer benefit.

Targeting doesn’t make advertising more relevant; it makes a small percentage of the ads people see more relevant. In order to really increase the relevance of advertising via targeting, the number of advertisers would need to vastly increase. There are more than 5 trillion ad impressions per month in online display, and more than 90 percent of US display ad spend is driven by fewer than 6,000 advertisers.

Frequently the argument is made that with paid search, consumers feel that contextual targeting makes the ads more relevant. But contextual targeting doesn’t require consumer behavior data. And further, the comparison is flawed: Paid search has roughly 250 billion impressions a month and 400,000 active advertisers (versus 5 trillion ad impressions and 6,000 to 8,000 advertisers for display).

The math is pretty simple: there is very little opportunity to target display advertising against niche segments today in order to increase the overall relevancy of the ads. The reason people aren’t seeing relevant ads is not that targeting isn’t good enough, or that we’re not collecting enough behavioral data; it’s that there are too few ad creatives to apply against the vast number of ad impressions.
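The back-of-envelope arithmetic behind that claim, using the figures cited above:

```python
# Figures from the text above.
display_impressions = 5_000_000_000_000  # ~5 trillion display impressions/month
display_advertisers = 6_000              # advertisers driving >90% of US display spend
search_impressions = 250_000_000_000     # ~250 billion paid-search impressions/month
search_advertisers = 400_000             # active paid-search advertisers

# Impressions each advertiser's creatives must cover, per month.
display_per_adv = display_impressions / display_advertisers
search_per_adv = search_impressions / search_advertisers

# Each display advertiser spans ~1,300x more impressions than a search
# advertiser -- far too few creatives to make most impressions relevant.
ratio = display_per_adv / search_per_adv
```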

This could change over time as more advertisers start moving into display – especially if we ever find a way to make display advertising efficient and effective for local advertisers, at which point there’d be lots more creatives. But even in these cases – cross-publisher tracking wouldn’t be necessary. As long as we had geographic targeting available (which doesn’t require any browsing history) and basic data that the publisher could track themselves, the ads could be much more relevant and valuable. But, until we grow the number of advertisers, and more importantly, the number of creatives to more closely match paid search – this is a moot point.

2. There is no harm in third-party tracking technologies; the tracking is anonymous.

So-called anonymous tracking is not very secure; the anonymity is fairly easily broken. Cracking open that anonymous shell and merging it with personally identifiable information from other sources is a fairly easy engineering feat. Search for “netflix prize privacy” in any search engine for one example (keeping in mind that this example is from 2007). There have been many more examples since then.

Beyond this, many of the players in the behavioral targeting space are small startups, without a huge amount of investment in security infrastructure. Even major corporations have leaked millions of people’s data into the public domain. Searching for “AOL Data Leak”, “Sony PlayStation Data Leak”, “Skype Android Data Leak”, “Epsilon Data Leak” should prove interesting. More of these happen all the time. If these major corporations can’t keep your personal data secure, the idea that a small startup is a safe place for this data simply doesn’t ring true. And I say this with all respect to my friends working at these companies – at the same time, I’m worried about it.

There are lots of very ethical people in this industry who would never do anything intentional or nefarious with the behavioral data that is collected. But that doesn’t mean that bad actors don’t exist. And even if good people are at the helm in some of these companies today, over time mergers, acquisitions, or bankruptcies can cause changes in ownership of this potentially sensitive data.

I think I’ve shown above that the potential for damage is both real and proven, and could be quite harmful.

3. We’re not as ‘bad’ as the offline marketers.

I probably shouldn’t have to even engage in this kind of argument. But just because I hear this a lot – I’ll address it.

Yes, the offline direct marketers do use a lot of targeting data – much of it using personally identifiable information – in order to target users on direct mail campaigns and other mechanisms. Yes, they use credit card purchase data, and financial data that no consumer really would ever be excited about anyone getting access to. And yes – they’ve been doing it for years. As far as I’m concerned, this is unethical as hell. And I’ve written lots of articles over the years that state my position on this.

That doesn’t justify us doing this online, even if we were operating in a vacuum. However, we’re not operating in a vacuum. The process used in the online space to make much of this behavioral targeting data actually usable for buying ads requires cookie matching.

Cookie matching requires the use of a third party to find some kind of ‘data key’ that sits in a third party data provider, which is used to match two anonymous sources together. This can be an email address, or could be a telephone number, or a mailing address, or something else. This ‘key’ allows two separate anonymous cookies to be matched together as one single user.

The main providers of this service are the exact same “offline marketing” data companies that people in our industry reference as the ones who are “worse than we are.” In other words, there is no difference whatsoever between the online and offline data providers, because both are needed for this behavior to function in our space.

Some of you may remember that DoubleClick's acquisition of Abacus (an offline data provider to direct marketing companies that built targeted mailing lists) drew intense regulatory scrutiny back in 1999, over fears that Abacus' data would be merged with online data – a dangerous thing for the privacy of consumers.

But in reality, only a few years later, other providers of offline direct-marketing data began offering this kind of cookie matching service to online marketers – without any acquisitions at all. If this was such a concern that it brought regulators down on DoubleClick, then why is it not a concern when it's done as part of the day-to-day business of the online advertising industry?

In the end, from my perspective, this is really just about doing the right thing. Consumers should give their approval before anyone without a direct relationship with them begins tracking their behavior across numerous websites. This seems self-evident to me, and to most consumers I've talked to about it. The only arguments to the contrary have come from professionals in the marketing industry – and as I've shown above, those arguments don't hold water.


Why Facebook will ‘own’ brand advertising

(Originally published in iMediaConnection, February 2012) by Eric Picard

I've been watching and reading the Facebook IPO announcement frenzy with curiosity. The most curious meme floating around is the one that pooh-poohs its offering price, market cap, and valuation because its ad business "clearly isn't going to be able to sustain growth the way Google's did" – to which I call BS.

Here’s why Facebook will ultimately be the powerhouse in brand advertising online (and eventually offline as well):

Facebook is a platform

To really do this one justice, I'd need to write a whole article about the power of platforms and explain why platform effects are almost impossible to defeat once they've started. Platform effects are similar to network effects, so let's start there in case you're one of the 20 people left on the planet who hasn't learned about them. Network effects arise when each additional user of a platform or network makes it more valuable to every other user. Telephones are the canonical example: the more people who have a phone, the more valuable the phone network is to its users, so more people get telephones. Facebook has cracked that nut – it's a vast social network, and network effects have made avoiding a Facebook account almost as hard as going without a telephone or an email address.
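The intuition behind network effects can be put in rough numbers. One common formalization – Metcalfe's law, an illustration I'm adding here rather than a claim from the article – treats a network's value as proportional to the number of possible connections between its users:

```python
def possible_connections(users: int) -> int:
    """Number of distinct pairwise links in a network of `users` people.

    Metcalfe's law treats this pair count, which grows roughly with
    the square of the user base, as a proxy for the network's value.
    """
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the possible links.
print(possible_connections(10))   # 45
print(possible_connections(20))   # 190
```

That superlinear growth is why each new user makes the network harder to leave – and harder for a rival to replicate.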

Platform effects are similar, but even stickier: they come from opening a platform to third-party developers. Once developers are creating software that relies on a platform, the platform becomes more useful and therefore more widely adopted by end users. This has been proven repeatedly: Windows originally beat the Mac because so many software developers and hardware manufacturers supported the Windows PC platform. Apple has of course had the last laugh there, with the iPod/iPhone/iPad app marketplace taking a page right out of Microsoft's playbook and kicking Microsoft in the teeth.

Facebook is a platform that consumer-facing applications from Zynga and other game companies have made good use of. But it is also a massive data and business-to-business platform – less broadly publicized, but beginning to gain adoption. And that part of its platform, tied to the data from the consumer side, is why advertising will ultimately bow to Facebook (barring some horrible misstep on its part).

Facebook takes user data in return for free access to the Facebook platform

Facebook requires all users to opt into its platform — and despite all the various privacy debates and discussions about Facebook, it is actually pretty good about being transparent and providing value to users in return for sharing all sorts of data.

Facebook is right now (my opinion — open to debate) the most authoritative source of data on consumers, their interests, and brand affiliations. It’s going to grow and become more comprehensive, meaning that it will become the main source of all data used by brand advertisers to reach targeted users.

To my mind this is already destined to happen – and locked up, because Facebook is a platform. It builds content that no media company would be able to build (social content). So in that way it really doesn't compete with online publishers. Online publishers have wisely adopted Facebook as a distribution platform as well as an authentication platform that lets consumers access their content.

It's only a matter of time before publishers become so intertwined with Facebook's platform that all their content effectively becomes part of it. But that doesn't mean publishers should worry about Facebook disintermediating them. If Facebook is smart, it will work this out now and give publishers what they want in return: let publishers own their own targeting data, and work out a way to help them make more money without losing that ownership.

Facebook will own brand advertising, and will not need to own direct response

Most of the wonks in the ad space are pooh-poohing Facebook because of a near-sighted overemphasis on direct-response advertising. They believe this false premise on the strength of a single proof point: Google's paid search business. The argument runs: "Since Facebook owns ad inventory that is further 'up' the purchase funnel than Google's, Facebook will never justify a high enough CPM to compete for supremacy in the online space. Google owns advertising online, and it did this by creating a vast pool of inventory sold at extremely high CPMs (because it sits so close to the purchase on the purchase funnel); and because most of the online ad industry has been focused on DR for its entire existence, DR is where online must go."

The wonks are wrong on this topic. Google undisputedly "owns" paid search advertising. But the entire paid search market amounts to something like 250 billion monthly ad impressions, on which Google earns a very high premium – around a $75 CPM. Facebook has many more ways to play in the ad space than Google, and a lot more inventory to play with: estimates put display ad volume well above 5 trillion monthly impressions, and Facebook serves a huge percentage of them. Because Facebook can cater to brands, it can be an efficient platform for selling brand ads targeted authoritatively to very granular audiences. Nobody has cracked that nut yet – the targeted reach at granularity and scale "nut" (disclaimer: this is specifically the problem I've been working on for the last year).
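To make the scale argument concrete: CPM pricing means revenue = (impressions / 1,000) × CPM. A quick sketch using the article's round figures – note the display CPM below is my own illustrative assumption, not a number from the article:

```python
def monthly_revenue(impressions: float, cpm: float) -> float:
    """Revenue in dollars; CPM is the price per 1,000 impressions."""
    return impressions / 1_000 * cpm

# Article's rough figures: ~250B monthly search impressions at ~$75 CPM.
search = monthly_revenue(250e9, 75.0)

# Display pool: >5T monthly impressions. A $2 CPM is a purely
# illustrative assumption for comparison.
display = monthly_revenue(5e12, 2.0)

print(f"search:  ${search:,.0f}")   # search:  $18,750,000,000
print(f"display: ${display:,.0f}")  # display: $10,000,000,000
```

Even at a small fraction of search's CPM, the display pool's sheer volume is the point: the inventory Facebook sits on can rival search economics without ever matching search's price per impression.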

So Facebook could own brand advertising online, could own a role as the authoritative data provider for brand advertising, could own the way that the big brand content platform of TV makes its way into a more modern and effective ad model, and could very well be the winner of the online advertising (nay the entire advertising) space for brands.

Facebook will dominate local advertising

Facebook has already grown a massive advertising business, and my bet is that when the details of its ad revenue are fully disclosed, a big chunk of that business will prove to be locally based. It is the only real play to be had for local businesses online right now; the only place to get local audience reach at any kind of scale. Local is a massive advertising market — one that nobody has been able to crack online, and Facebook will be the gateway between traditional media and online media for local advertising. Zuckerberg must already secretly have 200 people working on this problem as I type.

I’m very bullish on Facebook, but then, this is all just my opinion: I don’t have any idea how much of this Facebook really understands itself. All it really needs is some decent ad formats, and it’s got everything pretty well sewn up.