
When will digital take over traditional media?

By Eric Picard (Originally published on iMediaConnection.com, September 12, 2013)

In 2005 I worked on a project to map the infrastructure used for all traditional media advertising and determine if there was an opportunity to inject the new modern infrastructure of online advertising into the mix. This was a broad look at the space — with the goal to see if any overlap in the buying or selling processes existed at all and if there was a way to subtly or explicitly alter the architecture of online advertising platforms to drive convergence.

If you think about it, this is kind of a no-brainer. Delivering tens or hundreds of billions of ads a day in real time with ad delivery decisions made in a few milliseconds is much harder than getting the contracts signed and images off to printing presses (print media) or ensuring that the video cassettes or files are sent over to the network, broadcaster, or cable operator by a certain deadline. And the act of planning media buys before the buying process begins isn’t very different between traditional media and digital.

I went and interviewed media planners and buyers who worked across media. I talked to publishers in print, TV, radio, out-of-home, etc. And I went and talked to folks at the technology vendor companies who supported advertising in all of these spaces. It was clear to me that converging the process was possible, and as I looked at how the various channels operated, it was also clear that they’d benefit significantly from a more modern architecture and approach.

But in 2005, the idea of using digital media technologies and approaches to “fix” traditional media was clearly too early. It would be like AOL buying Time Warner… Oh yeah, that happened. In any case, the idea of getting traditional folks to adopt digital ad technology in 2005 was simply ludicrous.

And despite progress, and despite clearly superior technical approaches in digital (if lower revenue from the same content, due to business-model differences), there’s little danger of traditional and digital media ad convergence in the near term. That’s a real shame, because digital media is now stepping into a real renaissance from an advertising technology perspective.

Programmatic media buying and selling is clearly the future of digital, and I believe it will extend into traditional media as well. Within programmatic, RTB is a clear winner (although not the only winner) in the space. The value proposition of RTB for the buyer is incredibly strong: buyers get to deliver ads only to the specific audiences they desire, and only on the specific publishers (or groups of publishers) they want their ads associated with. While still mostly used for remnant media monetization, this is changing very fast.

Television is the obvious space to adopt digital media ad technology, and with terms like “Digital Broadcast,” “Digital Cable,” “IPTV,” and others, it would seem on the surface that we’re moments away from RTB making the leap from online display ads and digital video to television.

That’s not quite the case. While great strides are being made in executing on targeted television buys by fantastic companies like Simulmedia, Visible World, and others, this space is still not quite ready to make the transition to real-time ad delivery (what we think of as ad serving in the online space) at large, let alone RTB.

This is because the cable advertising industry is hamstrung by an infrastructure designed for throughput and scale of video delivery, not for real-time decisions at the set-top-box (STB) level. Over the years we’ve seen video on demand (VOD) really take off for cable, but even there, where the video content is delivered via a single stream per STB, the infrastructure wasn’t designed around advertising experiences. Even the newer players with more advanced, modern infrastructures and modern-sounding names like IPTV, such as Verizon’s FiOS, haven’t built in the explicit hooks needed to support real-time ad delivery decisions across all ad calls. That basically means that for the vast majority of ads, there’s no targeting whatsoever.

Some solutions, like BlackArrow and Visible World, have done the work to drop themselves into the cable infrastructure for ad delivery, but none has seen adoption at a scale that would let something happen at the national level. And the cable industry’s internally funded advanced advertising initiative — the Canoe Project — laid off most of its staff last year and has refocused on delivering a VOD clearinghouse to get VOD to scale across cable operators. So in 2013, we’re still not at the point where dynamic video advertising can be delivered on any television show during its broadcast, and even VOD doesn’t yet have a way to easily, cohesively, and dynamically deliver video advertising — let alone provide an RTB marketplace.

On the non-RTB side of programmatic buying and selling, I think we’ll see a lot of progress in traditional media. Media Ocean has been doing its own flavor of programmatic for quite some time. In fact, the Media Ocean name of the post-merger company was originally a product name within the Donovan Data Systems (DDS) portfolio: a product that bound the DDS TV buying product to a television-network selling product and let buyers and sellers transact on insertion orders programmatically for spot television. With Media Ocean’s new focus on digital media (which is getting rave reviews from folks I’ve talked to who have seen it), there’s little doubt in my mind that these products will extend over to the traditional side of the market and ultimately replace (or be the basis of new versions of) the various legacy products that allowed DDS to dominate the media-buying space for decades.

If our industry can get to the point where media buys across traditional and digital share a common process up until the moment they diverge from a delivery perspective, I think the market overall will make great headway. And I’m bullish on this: I think we’re not far away, but it won’t happen this year.


Life after the death of 3rd Party Cookies

By Eric Picard (Originally published on AdExchanger.com July 8th, 2013)

In spite of plenty of criticism by the IAB and others in the industry, Mozilla is moving forward with its plan to block third-party cookies and to create a “Cookie Clearinghouse” to determine which cookies will be allowed and which will be blocked.  I’ve written many articles about the ethical issues involved in third-party tracking and targeting over the last few years, and one I wrote in March — “We Don’t Need No Stinkin’ Third-Party Cookies” — led to dozens of conversations on this topic with both business and technology people across the industry.

The basic tenor of those conversations was frustration. More interesting to me than the business discussions, which tended to be both inaccurate and hyperbolic, were my conversations with senior technical leaders within various DSPs, SSPs and exchanges. Those leaders’ reactions ranged from completely freaked out to subdued resignation. While it’s clear there are ways we can technically resolve the issues, the real question isn’t whether we can come up with a solution, but how difficult it will be (i.e. how many engineering hours will be required) to pull it off.

Is This The End Or The Beginning?

Ultimately, Mozilla will do whatever it wants to do. It’s completely within its rights to stop supporting third-party cookies, and while that decision may cause chaos for an ecosystem of ad-technology vendors, it’s completely Mozilla’s call. The company is taking a moral stance that’s, frankly, quite defensible. I’m actually surprised it’s taken Mozilla this long to do it, and I don’t expect it will take Microsoft very long to do the same. Google may well follow suit, as taking a similar stance would likely strengthen its own position.

To understand what life after third-party cookies might look like, companies first need to understand how technology vendors use these cookies to target consumers. Outside of technology teams, this understanding is surprisingly difficult to come by, so here’s what you need to know:

Every exchange, Demand-Side Platform, Supply-Side Platform and third-party data company has its own large “cookie store,” a database of every single unique user it encounters, identified by an anonymous cookie. If a DSP, for instance, wants to use information from a third-party data company, it needs to be able to accurately match that third-party cookie data with its own unique-user pool. So in order to identify users across various publishers, all the vendors in the ecosystem have connected with other vendors to synchronize their cookies.

With third-party cookies, they could do this rather simply. While the exact methodology varies by vendor, it essentially boils down to this:

  1. The exchange, DSP, SSP or ad server carves off a small number of impressions for each unique user for cookie synching. All of these systems can predict pretty accurately how many times a day they’ll see each user and on which sites, so they can easily determine which impressions are worth the least amount of money.
  2. When a unique ID shows up in one of these carved-off impressions, the vendor serves up a data-matching pixel for the third-party data company. The vendor places its unique ID for that user into the call to the data company. The data company looks up its own unique ID, which it then passes back to the vendor with the vendor’s unique ID.
  3. That creates a lookup table between the technology vendor and the data company so that when an impression happens, all the various systems are mapped together. In other words, when it encounters a unique ID for which it has a match, the vendor can pass the data company’s ID to the necessary systems in order to bid for an ad placement or make another ad decision.
  4. Because all the vendors have shared their unique IDs with each other and matched them together, this creates a seamless (while still, for all practical purposes, anonymous) map of each user online.
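
The sync flow in those steps can be sketched in a few lines of Python. This is a minimal sketch with made-up IDs and a toy in-memory data company, not any vendor's actual implementation:

```python
# Hypothetical sketch of cookie matching between a DSP and a data company.
# Each side keeps its own anonymous user ID; the sync pixel lets them
# exchange IDs once, after which a lookup table joins the two namespaces.

dsp_match_table = {}   # dsp_user_id -> data_company_user_id

def serve_sync_pixel(dsp_user_id, data_company):
    """Simulates the pixel call: the DSP passes its ID in the call, and the
    data company returns its own ID for the same browser."""
    dc_user_id = data_company.lookup_or_create()
    dsp_match_table[dsp_user_id] = dc_user_id
    return dc_user_id

class DataCompany:
    """Toy stand-in for a third-party data company's cookie store."""
    def __init__(self):
        self._next = 0
    def lookup_or_create(self):
        self._next += 1
        return f"dc-{self._next}"

dc = DataCompany()
serve_sync_pixel("dsp-123", dc)

# Later, at bid time: the DSP sees its own ID on an impression and can
# translate it into the data company's namespace to fetch segment data.
assert dsp_match_table["dsp-123"] == "dc-1"
```

In the real ecosystem this handshake happens pairwise between every vendor, which is why the result amounts to a shared map of each user.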

All of this depends on the basic third-party cookie infrastructure Mozilla is planning to block, which means that all of those data linkages will be broken for Mozilla users. Luckily, some alternatives are available.

Alternatives To Third-Party Cookies

1)  First-Party Cookies: First-party cookies also can be (and already are) used for tracking and ad targeting, and they can be synchronized across vendors on behalf of a publisher or advertiser. In my March article about third-party cookies, I discussed how this can be done using subdomains.

Since then, several technical people have told me they couldn’t use the same cross-vendor-lookup model, outlined above, with first-party cookies — but generally agreed that it could be done using subdomain mapping. Managing subdomains at the scale that would be needed, though, creates a new hurdle for the industry. To be clear, for this to work, every publisher would need to map a subdomain for every single vendor and data provider that touches inventory on its site.
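
To make the scale problem concrete, here is a sketch of what one publisher's DNS obligations would look like; the publisher and vendor names are invented for illustration:

```python
# Hypothetical sketch: every vendor that touches a publisher's inventory
# needs its own publisher-owned subdomain (typically a CNAME pointing at
# the vendor), so that cookies the vendor sets are first-party cookies.

vendors = ["adserver-co", "dmp-co", "ssp-co", "verification-co"]

def dns_records(publisher_domain, vendor_list):
    """Emit the CNAME records a single publisher would have to maintain."""
    return [
        f"{vendor}.{publisher_domain}. CNAME tracking.{vendor}.example."
        for vendor in vendor_list
    ]

records = dns_records("news-site.com", vendors)
for r in records:
    print(r)
# Multiply this by every publisher in the ecosystem, and the scale
# hurdle described above becomes clear.
```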

So there are two main reasons that switching to first-party cookies is undesirable for the online-ad ecosystem:  first, the amount of work that would need to be done; second, the lack of a process in place to handle all of this in a scalable way.

Personally, I don’t see anything that can’t be solved here. Someone needs to offer the market a technology solution for scalable subdomain mapping, and all the vendors and data companies need to jump through the hoops. It won’t happen in a week, but it shouldn’t take a year. First-party cookie tracking (even with synchronization) is much more ethically defensible than third-party cookies because, with first-party cookies, direct relationships with publishers or advertisers drive the interaction. If the industry does switch to mostly first-party cookies, it will quickly drive publishers to adopt direct relationships with data companies, probably in the form of Data Management Platform relationships.

2) Relying On The Big Guns: Facebook, Google, Amazon and/or other large players will certainly figure out how to take advantage of this situation to provide value to advertisers.

Quite honestly, I think Facebook is in the best position to offer a solution to the marketplace, given that it has the most unique users and its users are generally active across devices. This is very valuable, and while it puts Facebook in a much stronger position than the rest of the market, I really do see Facebook as the best voice of truth for targeting. Despite some bad press and some minor incidents, Facebook appears to be very dedicated to protecting user privacy – and also is already highly scrutinized and policed.

A Facebook-controlled clearinghouse for data vendors could solve many problems across the board. I trust Facebook more than other potential solutions to build the right kind of privacy controls for ad targeting. And because people usually log into only their own Facebook account, this avoids the problems that have hounded cookie-based targeting when people share devices — such as when a husband uses his wife’s computer one afternoon and suddenly her laptop thinks she’s a male fly-fishing enthusiast.

3) Digital Fingerprinting: Fingerprinting, of course, is as complex and as fraught with ethical issues as third-party cookies, but it has the advantage of being an alternative that many companies already are using today. Essentially, fingerprinting analyzes many different data points that are exposed by a unique session, using statistics to create a unique “fingerprint” of a device and its user.

This approach suffers from one of the same problems as cookies, the challenge of dealing with multiple consumers using the same device. But it’s not a bad solution. One advantage is that fingerprinting can take advantage of users with static IP addresses (or IP addresses that are not officially static but that rarely change).
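
A minimal sketch of the statistical idea follows. The fields below are illustrative only; real fingerprinting systems combine many more signals and weight them by entropy:

```python
import hashlib

def fingerprint(session):
    """Hash a handful of attributes exposed by a browser session into a
    stable identifier. Any one field is common across many users; the
    combination is often nearly unique."""
    signal = "|".join([
        session["user_agent"],
        session["screen"],
        session["timezone"],
        session["ip"],   # especially useful when the IP rarely changes
    ])
    return hashlib.sha256(signal.encode()).hexdigest()[:16]

a = fingerprint({"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
                 "timezone": "UTC-8", "ip": "203.0.113.7"})
b = fingerprint({"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
                 "timezone": "UTC-8", "ip": "203.0.113.7"})
assert a == b  # same device and session attributes, same fingerprint
```

Note that a shared family computer produces one fingerprint for several people, which is the multiple-users-per-device problem mentioned above.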

Ultimately, though, this is a moot point because of…

4) IPv6: IPv6 is on the way. It will give every computer and device a static, permanent, unique identifier, at which point it will replace not only cookies but also fingerprinting and every other form of tracking identification. That said, we’re still a few years away from enough IPv6 adoption to make this happen.

If Anyone From Mozilla Reads This Article

Rather than blocking third-party cookies completely, it would be fantastic if you could leave them active during each session and just blow them away at the end of each session. This would keep the market from building third-party profiles, but would keep some very convenient features intact. Some examples include frequency capping within a session, so that users don’t have to see the same ad 10 times; and conversion tracking for DR advertisers, given that DR advertisers (for a whole bunch of stupid reasons) typically only care about conversions that happen within an hour of a click. You already have Private Browsing technology; just apply that technology to third-party cookies.

What everyone should know about ad serving

By Eric Picard (Originally published in iMediaConnection.com)

Publisher-side ad servers such as DoubleClick for Publishers, Open AdStream, FreeWheel, and others are the most critical components of the ad industry. They’re responsible ultimately for coordination of all the revenue collected by the publisher, and they do an amazing amount of work.

Many people in the industry — especially on the business side — look at their ad server as mission critical, sort of the way they look at the electricity provided by their power utility: critical, but only in that it delivers ads. To ad operations or salespeople, the ad server is most often associated with its user interface — really the workflow they interact with directly. But that narrow view misses what matters most.

The way that the ad server operates under the surface is actually something everyone in the industry should understand. Only by understanding some of the details of how these systems function can good business decisions be made.

Ad delivery

Ad servers by nature make use of several real-time systems, the most critical being ad delivery. But ad delivery is not a name that adequately describes what those systems do. An ad delivery system is really a decision engine. It reviews an ad impression in the exact moment that it is created (by a user visiting a page), reviews all the information about that impression, and makes the decision about which ad it should deliver. But the real question is this: How does that decision get made?

An impression can be thought of as a molecule made up of atoms. Each atom is an attribute that describes something about that impression. These atomic attributes can be simple media attributes, such as the page location the ad is embedded into, the category of content the page sits within, or the dimensions of the creative. They can be audience attributes, such as demographic information taken from the user’s registration data or from a third-party data company. Or they can be complex audience segments provided by a DMP, such as “soccer mom” — itself almost a molecular object, made up of the attributes female, parent, and children in sports, along with various other demographic and psychographic atoms.
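
The molecule-of-atoms idea maps naturally onto a simple data structure. Here is a sketch; the attribute and segment names are invented for illustration:

```python
# Hypothetical sketch: an impression as a bag of atomic attributes.
# Complex segments like "soccer mom" are themselves built from atoms.

impression = {
    "placement": "homepage_top",
    "content_category": "sports",
    "creative_size": "728x90",
    "gender": "female",
    "is_parent": True,
    "children_in_sports": True,
}

def matches(impression, segment_definition):
    """A segment is just a set of required attribute values."""
    return all(impression.get(k) == v for k, v in segment_definition.items())

soccer_mom = {"gender": "female", "is_parent": True, "children_in_sports": True}
print(matches(impression, soccer_mom))  # True for this impression
```

The delivery engine's job is to evaluate such attribute sets against every eligible line item in a few milliseconds.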

When taken all together, these attributes define all the possible interpretations of that impression. The delivery engine now must decide (all within a few milliseconds) how to allocate that impression against available line items. This real-time inventory allocation issue is the most critical moment in the life of an impression. Most people in our industry have no understanding of what happens in that moment, which has led to many uninformed business, partnership, and vendor licensing decisions over the years, especially when it comes to operations, inventory management, and yield.

Real-time inventory allocation decides which line items will be matched against an impression. The way these decisions get made reflects the relative importance placed on them by the engineers who wrote the allocation rules. These, of course, are informed by business people who are responsible for yield and revenue, but the reality is that the tuning of allocation against a specific publisher’s needs is not possible in a large shared system. So the rules get tuned as best they can to match the overarching case that most customers face.

Inventory prediction

Well before an impression is generated and allocated in real time, inventory is sold in advance based on predictions of how much volume will exist in the future. We call these predicted impressions “avails” (for “available to sell”) in our industry, and they’re essentially the basis for how all guaranteed impressions are sold.

We’ll get back to the real-time allocation in a moment, but first let’s talk a bit about avails. The avails calculation, done by another component of the ad server responsible for inventory prediction, is one of the hardest computer-science problems facing the industry today. Predicting how much inventory will exist is hard — and extremely complicated.

Imagine, if you will, that you’ve been asked to predict a different kind of problem than ad serving — say, traffic patterns on a state highway system. Predicting how many cars will be on the entire highway next month is probably not very hard to do with a high degree of accuracy. There’s historical data going back years, month by month. So you could look at the month of April for the last five years, check for significant variance, and use a bit of math to determine a confidence interval for how many cars will be on the highway in April 2013.

But imagine that you now wanted to zoom into a specific location — let’s say the Golden Gate Bridge. And you wanted to break that prediction down further, let’s say Wednesday, April 3. And let’s say that we wanted to predict not only how many cars would be on the bridge that day, but how many cars with only one passenger. And further, we wanted to know how many of those cars were red and driven by women. And of those red, female-driven cars, how many of them are convertible sports cars? Between 2 and 3 p.m.

Even if you could get some kind of idea of how many matches you’ve had in the past, predicting at this level of granularity is very hard. Never mind the many outside factors: some, like weather and sporting events, can help you get more accurate as the day approaches; others, like car accidents and earthquakes, are simply unpredictable.

This is essentially the same kind of prediction problem as the avails problem we face in the online advertising industry. Each bit of data (each defining attribute) we layer onto an inventory definition makes it harder to predict with any accuracy how many of those impressions will exist. And because we’ve signed up for a guarantee that this inventory will exist, the engineers creating the prediction algorithms need to be very conservative in their estimates.
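
That conservatism can be illustrated with a toy forecast. This is a sketch assuming simple daily history; real systems model seasonality, trends, and overlapping attribute combinations:

```python
import statistics

def conservative_avails(history, z=1.64):
    """Forecast sellable impressions as the historical mean minus a safety
    margin, so the guarantee is met even in a below-average period."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) if len(history) > 1 else 0
    return max(0, int(mean - z * stdev))

# Five past periods of matching impressions for a narrowly targeted slice:
history = [120_000, 95_000, 110_000, 105_000, 98_000]
print(conservative_avails(history))
# The narrower the targeting, the noisier the history, and the bigger the
# gap between the forecast and the average. That gap is why real-time
# allocation usually finds more inventory than was predicted.
```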

When an ad campaign is booked by an account manager at the publisher, they “pull avails” based on their read of the RFP and media plan and try to find matching inventory. These avails are then reserved in the system (the system puts a hold, for a period of time, on the avails sent back to the buyer) until the insertion order (I/O) is signed by the buyer. At that moment, a preliminary allocation of predicted avails (impressions that don’t exist yet) is made by a reservation system, which divvies out the avails among the various I/Os. This is another kind of allocation that the ad server does in advance of the campaign actually running live, and it has as much impact on overall yield as the real-time allocation does — or even more.

How real-time allocation decisions get made

Once a contract has been signed guaranteeing that these impressions will in fact be delivered, it’s up to the delivery engine’s allocation system to decide which of the matching impressions to assign to which line items. The primary criterion used to make this decision is how far behind the matching line items are in delivering against their contracts, which we call “starvation” (i.e., is the line item starving to death, or is it on track to fulfill its obligated impression volume?).

Because the engineers who wrote the avails-prediction algorithms were conservative, the system generally has a lot of wiggle room when it comes to delivering against most line items that are not too complex. That means there are usually more impressions available at allocation time than were predicted ahead of time. So when none of the matching line items are starving, other decision criteria can be used. The clearest is yield (i.e., of the eligible line items, which one will get me the most money for this impression?).
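
The decision order described here (rescue starving line items first, then maximize yield) can be sketched as follows. The line items and the pacing formula are simplified illustrations, not any ad server's actual logic:

```python
def pacing(line):
    """Fraction of the contracted volume delivered so far, relative to how
    far through the flight we are. Below 1.0 means behind schedule."""
    return (line["delivered"] / line["goal"]) / line["flight_elapsed"]

def allocate(eligible_lines):
    """Pick a line item for one impression: rescue the most-starved
    guaranteed line first; if nothing is behind, take the highest yield."""
    starving = [l for l in eligible_lines if pacing(l) < 1.0]
    if starving:
        return min(starving, key=pacing)
    return max(eligible_lines, key=lambda l: l["cpm"])

lines = [
    {"name": "A", "delivered": 40, "goal": 100, "flight_elapsed": 0.5, "cpm": 2.0},
    {"name": "B", "delivered": 60, "goal": 100, "flight_elapsed": 0.5, "cpm": 8.0},
]
print(allocate(lines)["name"])  # "A": behind pace, so it wins despite lower CPM
```

With both lines on pace, the same call would instead pick the higher-CPM line, which is the yield case described above.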

Implications of real-time allocation and inventory prediction

There’s a tendency in our industry to think about ad inventory as if it “exists” ahead of time, but as we’ve just seen, an impression is ephemeral. It exists only for a few milliseconds in the brain of a computer that decides what ad to send to the user’s machine. Generally there are many ways that each impression could be fulfilled, and the systems involved have to make millions or billions of decisions every hour.

We tend to think about inventory in terms of premium and remnant, or through a variety of other lenses. But the reality is that before the inventory is sold or unsold, premium or remnant, or anything else, it gets run through this initial mechanism. In many cases, extremely valuable inventory gets allocated to very low-CPM impression opportunities, or even to remnant, because of factors having little to do with what that impression “is.”

There are many vendors in the space, but let’s chat for a moment about two groups of vendors: supply-side platforms (SSPs) and yield management companies.

Yield management firms focus on providing ways for publishers to increase yield on inventory (get more money from the same impressions), and most have different strategies. The two primary companies folks talk to me about these days are Yieldex and Maxifier. Yieldex focuses on the pre-allocation problem — the avails reservations done by account managers as well as the inventory prediction problem. Yieldex also provides a lot of analytics capabilities and is going to factor significantly in the programmatic premium space as well. Maxifier focuses on the real-time allocation problem and finds matches between avails that drive yield up, and it improves matches on other performance metrics like click-through and conversions, as well as any other KPI the publisher tracks, such as viewability or even engagement. Maxifier does this while ensuring that campaigns deliver, since premium campaigns are paid on delivery but measured in many cases on performance. The company is also going to figure heavily into the programmatic premium space, but in a totally different way than Yieldex. In other words, neither company really competes with each other.

Google’s recent release of its dynamic allocation features for the ad exchange (sort of the evolution of the Admeld technology) also plays heavily into real-time allocation and yield decisions. Specifically, the company can compare every impression’s yield opportunity between guaranteed (premium) line items and the response from the DoubleClick Exchange (AdX) to determine on a per-impression basis which will pay the publisher more money. This is very close to what Maxifier does, but Maxifier does this across all SSPs and exchanges involved in the process. Publishers I’ve talked to using all of these technologies have gushed to me about the improvements they’ve seen.

SSPs are another animal altogether. While the yield vendors above are focused on increasing the value of premium inventory and/or maximizing yield between premium and exchange inventory (I think of this as pushing information into the ad server to increase value), the SSPs are given remnant inventory to optimize for yield among all the various venues for clearing remnant inventory. By forcing competition among ad networks, exchanges, and other vehicles, they can drive the price up on remnant inventory.

How to apply this article to your business decisions

I’ve had dozens of conversations with publishers about yield, programmatic premium, SSPs, and other vendors. The most important takeaway I can leave you with is that you should think about premium yield optimization as a totally different track than discussions about remnant inventory.

When it comes to remnant inventory, whoever gets the first “look” at the inventory is likely to provide the highest increase in yield. So when testing remnant options, you have to test each one in exactly the same position — never daisy-chained beneath one another. Most SSPs and exchanges ultimately provide the same demand through slightly different lenses. That means, barring some radical technical superiority — which none has shown me so far — the decision will most likely come down to ease of integration and, ultimately, customer service.

How Do Companies Make Any Money in Digital?

(By Eric Picard, Originally Published in AdExchanger 10/25/12)

In 2007 I wrote a paper that analyzed the lack of investment from 2001 to 2006 in the basic infrastructure of ad technology.  The dot-com bubble burst had a chilling effect on investment in the ad tech space, and as an industry we focused for about six years on short term gains and short term arbitrage opportunities.

This period saw the rise of ad networks and was all about extracting any value possible out of existing infrastructure, systems, and inventory.  So all the “remnant” inventory in the space, the stuff the publisher’s in-house sales force couldn’t sell, got liquidated at cheap prices.  And those companies with the willingness to take risk and the smarts to invest in technology to increase the value of remnant got off the ground and succeeded in higher efficiency buying and selling, and lived off the margins they created.

But we lost an entire cycle of innovation that could have driven publisher revenue higher on premium inventory – which is required for digital to matter for media companies. There’s been lots of discussion about the drop from dollars to dimes (and more recently to pennies) for traditional media publishers. And while the Wall Street Journal and New York Times might be able to keep a pay-wall intact for digital subscriptions, very few other publications have managed it.

In 2006 the ad tech ecosystem needed a massive influx of investment in order for digital to flourish from a publisher perspective.  These were my observations and predictions at the time:

  • Fragmentation was driving power from the seller to the buyer.
  • A lack of automation meant cost of sales for publishers, and cost of media buying management for agencies, were vastly higher in digital (greater than 10x what those things cost for traditional on both the buy and sell side).
  • Prices were stagnant in the digital space because of an over-focus on direct-response advertisers and the space’s inability to attract offline brand dollars.
  • Market inefficiency had created a huge arbitrage opportunity for third parties to scrape away a large percentage of revenue from publishers. Where there is revenue, investment will follow.
  • There was a need for targeting and optimization that existing players were not investing in, because the infrastructure that would empower it to take off didn’t exist yet.
  • Significant investment would soon come from venture capital sources that would kick start new innovation in the space, starting with infrastructure and moving to optimization and data, to drive brand dollars online.

Six years later, this is where we are. I predicted pretty successfully what would happen, but I didn’t predict how long it would take — nor that the last item, having to do with brand dollars, would require six years. This is mainly because I expected new technology companies to step up to bat across the entirety of what I was describing. Given that the most upside is in brand dollars, I expected entrepreneurs and investors to focus their efforts there. But that hasn’t been the case.

So what’s the most important thing that has happened in the last six years?

The entire infrastructure of the ad industry has been re-architected, and redeployed.  The critical change is that the infrastructure is now open across the entire “stack” of technologies, and pretty much every major platform is open and extensible. This means that new companies can innovate on specific problems without having to build out their own copy of the stack.  They can build the pieces they care about, the pieces that add specific value and utility for specific purposes – e.g. New Monetization Models for Publishers and Brand Advertisers, New Ad Formats, New Ad Inventory Types, New Impression Standards, New Innovation across Mobile, Video and Social, and so on.

So who will make money in this space, how will they make it, and how much will they make?

I’ve spent a huge portion of my career analyzing the flow of dollars through the ecosystem. Recently I updated an older slide that shows (it’s fairly complex) how dollars versus impressions flow.

The important thing to take away from this slide is that dollars pool with inventory owners, whether the inventory owner is a publisher or an inventory aggregator of some kind. Agencies have traditionally been a pass-through for revenue, pulling off anywhere from 2 to 12% on the media side (the trend has been lower, not higher), and on average 8 to 10% on the creative side, depending on the scale of the project. Media agencies are not missing the point here, and have begun to experiment with media aggregation models – which is really what the trading desks are: an adaptation of the ad network model to the new technology stack, from a media agency point of view.

The piece of this conversation that’s relevant to ad tech companies is that, so far in the history of this industry, ad technology companies don’t take a large percentage of spend. In traditional media, the grand-daddy is Donovan Data Systems (now part of Media Ocean), and historically it has taken less than 1% of media spend for offline media systems. In the online space, we’ve seen a greater percentage of spend accrue to ad tech – ad serving systems, for instance, take anywhere from 2 to 5% of media spend.

So how do ad tech companies make money today and going forward? It’s a key question for pure transactional systems or other pure technology like ad servers, yield management systems, analytics companies, billing platforms, workflow systems, targeting systems, data management platforms, content distribution networks, and content management systems.

There’s only so much money that publishers and advertisers will allow to be skimmed off by companies supplying technology to the ecosystem. In traditional media, publishers have kept their vendors weak – driving them way down in price and percentage of spend they can pull off. This is clearly the case in the television space, where ad management systems are a tiny fraction of spend – much less than 1%.

In the online space this has been less the case – a technology vendor can drive significantly more value than in the offline space. Still, it’s unlikely the marketplace will accept much more than 10% of total media spend going to pure technology licensing.

This means that for pure-play ad tech companies with a straightforward technology license model – whether it’s a fixed fee, volume-based pricing, or a percentage of spend – the only way to get big is to touch a large piece of the overall spend. That means scaled business models that reach a large percentage of ad impressions. It also means that ultimately there will only be a few winners in the space.

But that’s not bad news. It’s just reality. And it’s not the only game in town. Many technology companies have both a pure-technology model and some kind of marketplace model in which they participate in the ecosystem as an inventory owner. And it’s here that lots of revenue can be brought into a technology company’s wheelhouse. But it’s important to be very clear about the difference between top-line media spend and ‘real’ revenue. Most hybrid companies – think Google with AdSense, or other ad networks – report the media spend flowing through their marketplaces as revenue, rather than the revenue they keep. This is an acceptable accounting practice, but it isn’t a very good way to value or understand the companies in question. So ‘real revenue’ is always the important number for investors to keep in mind when evaluating companies in this space.
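The gap between reported media spend and ‘real’ revenue is easy to see with a little arithmetic. A minimal sketch – all figures below are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical illustration of media spend vs. 'real' revenue for a
# hybrid ad tech company that reports marketplace spend as top-line revenue.

def real_revenue(media_spend: float, take_rate: float) -> float:
    """Revenue the company actually keeps after paying out inventory owners."""
    return media_spend * take_rate

# A marketplace that transacts $500M in media spend but keeps a 20% fee
# can report $500M of "revenue" while its real revenue is only $100M.
spend = 500_000_000
kept = real_revenue(spend, 0.20)
print(f"Reported spend: ${spend:,}")      # Reported spend: $500,000,000
print(f"Real revenue:   ${kept:,.0f}")    # Real revenue:   $100,000,000
```

The take rate itself varies widely by business model; the point is only that valuation should key off what is kept, not what flows through.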

Many ad technology companies will unlock unique value that they will be the first to understand. These technology companies can capitalize on this knowledge by hybridizing into an inventory owner role as well as pure technology – and these are the companies that will break loose bigger opportunities. Google is a great example of a company that runs across the ecosystem – as are Yahoo, Microsoft and AOL.  But some of the next generation companies also play these hybrid roles, and the newest generation will create even greater opportunities.

Entering the Fourth Wave of Ad Technology

By Eric Picard (Originally published on AdExchanger.com, 9/18/2012)

Ad tech is a fascinating and constantly evolving space. We’ve seen several ‘waves’ of evolution in ad tech over the years, and I believe we’re about to enter another. The cycles of investment and innovation are clearly linked, and we can trace this all back to the late ’90s, when the first companies entered the advertising technology space.

Wave 1

The early days were about the basics – we needed ways to function as a scalable industry, ways to reach users more effectively, systems to sell ads at scale, systems to buy ads at scale, analytics systems, targeting systems, and rich media advertising technology.

There was lots of investment and hard work in building out these 1.0 version systems in the space. Then the dot-com bubble imploded in 2001, and a lot of companies went out of business.  Investment in the core infrastructure ground to a halt for years. The price of inventory dropped so far and so fast that it took several years before investment in infrastructure could be justified.

We saw this wave last from 1996 through 2001 or 2002 – and during that dot-com meltdown, we saw massive consolidation among companies all competing for a piece of a pie that had dramatically shrunk. But this consolidation was inevitable, since venture firms generally invest on a five-to-ten-year cycle of return – meaning they want companies to exit within an eight-year window, ideally less.

Wave 2

The second wave was really about two things: paid search and what I think of as the “rise of the ad networks.” Paid search is a phenomenon most of us understand pretty well, but the ad network phase of the market – roughly 2001 to 2007 – was about arbitrage and remnant ad monetization. Someone realized that since we had electronic access to all this inventory, we could create a ‘waterfall’ of inventory from the primary sales source to secondary sources, and eventually a ‘daisy-chain’ of sources that created massive new problems of its own. But the genie was out of the bottle, and a massive supply of inventory, of a kind no other industry sells, was loosed.

It’s actually a little sad to me, because as an industry we flooded the market with super cheap remnant inventory that has caused many problems. But that massive over-supply of inventory did allow the third wave of ad tech innovation to get catalyzed.

Wave 3

Most people believe that the third wave was about ad exchanges, real-time buying and selling, data companies, and what I like to call programmatic buying and selling systems. But those were really just side effects. The third wave was about building out the next-generation infrastructure of advertising. Platforms like AppNexus and Right Media are not just exchanges; they’re fundamentally the next-generation infrastructure for the industry. Even the legacy infrastructure of the space got dramatic architectural overhauls in this period – culminating in the most critical system in our space (DoubleClick for Publishers) getting a massive Google-sponsored overhaul that, among other things, opened up the system via extensive APIs so that other technology companies could plug in.

Across the board, this new infrastructure has given the myriad ad tech companies something to plug into. This current world of data and real-time transactions is now pretty mature, and it’s extending across media types. Significant financial investments have been made in the third wave – and most of the money spent in the space has been used to duplicate functionality rather than innovate significantly on top of what’s been built. Some call these “me too” investments – companies following earlier startups and refining the model iteratively. That makes a lot of sense, because generally it’s the first group of companies and the third group in a ‘wave’ that get traction. But it leads to a lot of redundancy in the market, which is bound to be corrected.

This wave lasted from about 2005 to 2011, when new investments began to center on the concepts established in Wave 3 – which was really a transition toward ad exchanges (then RTB) and big data.

That’s the same pattern we’ve seen over and over, so I’m confident of where the industry stands today and that we’re starting to enter a new phase. This third major ad tech wave moved faster than the first, largely because the pace of technology adoption has sped up significantly, with web services and APIs becoming a standard way of operating.

Wave 4

This new wave of innovation we’re entering is really about taking advantage of the changes that have now propagated across the industry. For the first time, you can build an ad tech company without having to create every component in the ‘stack’ yourself. Startups can make use of all the other systems out there, access them via APIs, truly execute in the cloud, and build a real company without massive infrastructure costs. That’s an amazing thing to participate in, and it wasn’t feasible even three years ago.

So we’ll continue to see more of what happened in the third wave – infrastructure investments for the companies that got traction – but that’s really just a continuation of those third-wave tech investments, which go through a defined lifecycle of seed, early, and then growth-stage funding. Increasingly, we’ll see new tech companies sit on top of the now-established Wave 3 infrastructure and really take advantage of it.

Another part of what happened in Wave 3 went beyond infrastructure: the scaled investment in big data. There have been massive investments in big data, and they will continue as those investments move into the growth phase. But what’s needed next is companies that focus on what to do with all that data – how to leverage the results the data miners have exposed.

Wave 4 will really change the economics of advertising significantly – it won’t just be about increasing yield on remnant from $0.20 to $0.50. We’ll see new ad formats that take advantage of multi-modal use (cross device, cross scenario, dynamic creatives that inject richer experiences as well as information), and we’ll see new definitions of ad inventory, including new ad products, packages and bundles.

So I see the next five years as a period when a new herd of ad tech companies will step in and supercharge the space. All this infrastructure investment has been necessary, because the original ad tech platforms were architected in ways that can’t take advantage of modern programming methodologies. Now, with modern platforms almost ubiquitously available, we can start focusing on how to change the economics by taking advantage of that investment.

I also think we’re going to see massive consolidation of the third wave companies. Most of the redundancies in the market will be cleaned up.  Today we have many competitors fighting over pieces of the space that can’t support the number of companies in competition – and this is clearly obvious to anyone studying the Lumascape charts.

Unfortunately, some of the earlier players who now have customer traction are finding that their technology investments are functionally flawed – they were too early and built out architectures that don’t take advantage of newer ways of developing software. So we’ll see some of these early players with revenue acquire smaller, newer players in order to gain their more efficient and effective architectures.

Companies doing due diligence on acquisitions need to be keenly aware of this: buying the leader in a specific space that’s been around since 2008 may mean that, to really grow that business, they’ll need to buy a smaller competitor too – and transition customers to the newer platform.

For the investment community it’s also very important to understand that while the Wave 3 companies that survive the oncoming consolidation will be very big, with very high revenues, these infrastructure-heavy businesses by nature carry lower margins and high-volume, low-price models to reach those revenues. They will still operate on technology/software margins – gross margins over 80% are the standard tech companies run after – but the Wave 3 companies’ gross margin numbers have been a bit lower than we’d like as an industry. This is because they are the equivalent of (very technically complex) plumbing for the industry. There are plenty of places where they invest in intelligence, but the vast majority of their costs and their value come from the massive scale they can handle while remaining open for all the players in the ecosystem to plug in and add value.

Being a Wave 4 company implicitly means being able to leverage the existing sunk cost of these companies’ investments. Thomas Friedman talks about this in “The World Is Flat” – one of his core concepts is that every time an industry has seen (what he calls) over-investment in enabling infrastructure, a massive economic benefit has followed, with broad repercussions. He cites the railroad investment that enabled cheap travel and shipping and led to a massive explosion of growth in the United States, and the global investment in data infrastructure that enabled the outsourcing of services to India and other developing countries on a massive scale. And frequently, those leveraging the sunk cost of these infrastructure plays make much more money from their own investments than those who created the opportunity.

So what should investors be watching for as we enter this fourth wave of ad tech innovation?

  1. Companies that are built on light cloud-based architectures that can easily and quickly plug into many other systems, and that don’t need to invest in large infrastructure to grow
  2. Companies that take advantage of the significant investments in big data, but in ways that synthesize or add value to the big data analysis with their own algorithms and optimizations
  3. Companies that can focus the majority of technical resources on innovative and disruptive new technologies – especially those that either synthesize data, optimize the actions of other systems, or fundamentally change the way that money is made in the advertising ecosystem
  4. Companies that are able to achieve scale quickly because they can leverage the existing large open architectures of other systems from Wave 3, but that are fundamentally doing something different than the Wave 3 companies
  5. Companies that (effectively) take advantage of multiple ecosystems or marketplaces – risky, but with extremely high rewards when they take off

This is an exciting time to be in this space – and I predict that we’ll see significant growth in revenue and capabilities as Wave 4 gets off the ground that vastly eclipse what we’ve seen in any of the other waves.

Why Media Companies Are Being Eaten By Tech Companies

By Eric Picard (Originally published on AdExchanger.com, August 20, 2012)

My friend and colleague Todd Herman (LinkedIn) once wrote a strategy paper about video content when we worked together at Microsoft. Called “Don’t be food,” it was a brilliant paper that laid out a strategy for effectively competing in a world where content is distributed everywhere by anyone.  I love the concept of “Don’t be food.”  It applies to so many existing business models, but clearly where Todd initiated it – Media – it applies incredibly well.

The media business is being forcibly evolved through massive disruptions in content distribution. In the past, control over distribution was the primary driver of the media model. Printed material, radio, and television content required complex distribution models. Printing presses and physical distribution are expensive. Radio and television spectrum is limited, and cable infrastructure is expensive. Most media theory and practice has been deeply influenced by these long-term distribution constraints, to the point that the media industry is quite rigid in its thinking and cannot easily move forward.

One of my favorite business case studies is the railroads. Railroad companies missed massive opportunities as new technologies such as the automobile and the airplane were adopted. They saw themselves as being in the “railroad business,” not the “transportation business.” Because of this they lost significant opportunities, and very few of the powerhouse companies from the rail era still exist.

In media, new technologies have been massively disrupting the space for more than a decade. And there is an ongoing debate about technology companies stepping in and disrupting the media companies. Google is a prominent example, and its recent acquisition of Frommer’s is yet another case where it has eaten a content company and continued to expand from pure technology into media.  But Google isn’t moving into media based on the existing rules that the media companies play by – it is approaching media through the lens of technology.

But this issue doesn’t only pertain to the oft-vilified Google: Amazon continues to disrupt the book industry by changing the distribution model through the use of technology, and is clearly gunning for magazine, radio and video content as well.  Microsoft is changing the engagement model and subsequently the distribution of content to the living room via its ever-expanding Xbox footprint, and is broadly expanding toward media with Windows 8, its new Surface tablet devices and smartphones – again using technology.  Apple has turned distribution models on their ears by creating a curated walled garden of myriad distribution vehicles (apps on devices), but charges a toll to the distributors – again using technology to disrupt the media space.  Facebook, Twitter and social media are now beginning to disrupt discovery and distribution in their own ways – barely understood, but again based on technology.

Existing media models are functionally broken – and will continue to be disrupted. Distribution will always be a key facet of the overall media landscape. But as distribution channels fragment and become more open, the role distribution plays will radically change. Distribution remains inherently important to the overall model of media, but it is no longer the key.

Technology is the key to the future of media. Technology can and has profoundly changed the way content is distributed, and will continue to do so. The future of media is wrapped up in technology, and this is an indicator of why technology companies are eating media companies’ lunches, if not actually consuming them in their entirety.

Media companies don’t understand technology because they are not run by technologists. There is a vast gulf between media companies’ executive leadership and the technical understanding they need. Every media company should be running significant education efforts to build the competence needed to compete in the technology space, but I’m not convinced even that would be enough to fix the problems they face.

At Microsoft I once had an executive explain to me why most of the executives running businesses at the company were from a software background.  He said something along the lines of, “A super smart engineer who can wrap his or her head around platforms and technology issues can probably learn business concepts and issues faster than a super smart business person can learn technology.”  And he was right – it’s that simple.

Business schools should have requirements today for anyone graduating with an undergraduate or graduate degree to learn how to write software, and most importantly to develop a modern understanding of platforms. These platform models are the future of distribution, and are barely understood even among many technologists. The modern platform models used broadly on the Internet and to create software on devices that drive content distribution are relatively new, and are frequently not understood by people with technical backgrounds who haven’t spent time working with them.

Bad business decisions continue to be made by media companies because of the significant lack of technology leadership in both executive and middle management. As technology evolved, the model for many years was that business people figured out “Why and What” to build and “Where” to distribute it, and engineers figured out “How and When” something could be delivered.  Great technology companies break down the walls between Why, What, How, When and Where. Engineers have just as much say in all of those things as the business people. Great technology companies don’t treat engineers and technologists like “back room nerds.”  They recognize that engineering brilliance can be applied to the business problems facing them, and that technology innovation will drive their businesses to disrupt themselves toward future success.

Media companies must evolve away from their historical strengths based on distribution control, and must embrace technology as a key principle. And they need great engineers to do so. The problem is, great engineers won’t work for mediocre engineers. Great engineers won’t take bad direction from people they don’t respect, especially business people. And many media companies have treated their existing engineering organizations as an extension of traditional IT models, with mediocre engineering talent endemic in their organizations – frequently top to bottom.

Let me say it again: great engineers will not work for mediocre engineers. This means that the existing CTO and engineering infrastructure within a media company will not solve this problem. Before moving forward, executive leadership has to recognize that their existing technology organization will likely fight, block, and actively try to sabotage any efforts created outside its own infrastructure. But it is very clear that without significant change here, these companies are doomed.

For traditional media companies to compete effectively with Google, Amazon, Apple, Microsoft, Facebook, and the thousands of hot startups now competing with them, they must build world-class engineering organizations. This isn’t a light, fuzzy requirement; it’s fundamental to their ability to survive the next century. These companies must evolve forward. They must find ways to empower internal disruption.

Media companies must build startup organizations within their own internal structures – isolated from existing IT leadership and given bold, broad, empowered charters with the leeway to disrupt other teams’ businesses. They must build a new technology-driven culture within these large media companies, separate from the existing groups, and then embrace those internal startups as the future of the company. This isn’t easy. It’s nearly impossible. And it very likely will not work the first time it’s tried. But if media companies don’t commit to this kind of change, they are going to be eaten.

Why Facebook will ‘own’ brand advertising

(Originally published in iMediaConnection, February 2012) by Eric Picard

I’ve been watching and reading the Facebook IPO announcement frenzy with curiosity. The most curious meme floating around is the one that pooh-poohs its strike price, market cap, and valuation because its ad business “clearly isn’t going to be able to sustain growth the way Google’s did” — to which I call BS.

Here’s why Facebook will ultimately be the powerhouse in brand advertising online (and eventually offline as well):

Facebook is a platform

To really do this one justice, I’d need to write a whole article about the power of platforms and explain why platform effects are almost impossible to defeat once they’ve started. Platform effects are similar to network effects, so let’s start there, in case you’re one of the 20 people left on the planet who haven’t learned about them. Network effects arise when each additional user of a platform or network makes it more valuable to all of its users. Telephones are the classic example — the more people who have a phone, the more valuable the phone network is to its users, so more people get telephones. Facebook has cracked that nut — it’s a vast social network, and network effects have made going without a Facebook account almost as difficult as going without a telephone or an email address.
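One common way to formalize network effects is Metcalfe’s law, which values a network roughly in proportion to the number of possible connections among its users. A toy sketch of that intuition (the quadratic growth is the model’s claim, not a measured fact about any particular network):

```python
# Metcalfe's law intuition: a network's value grows roughly with the number
# of possible pairwise connections, n * (n - 1) / 2 -- quadratic in users,
# even though the user count itself grows linearly.

def possible_connections(n_users: int) -> int:
    """Number of distinct user pairs in a network of n_users people."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the connection count.
for n in (10, 20, 40):
    print(n, possible_connections(n))   # 45, 190, 780
```

This is why network-effect businesses get harder to displace the bigger they are: a rival starting from zero offers almost no connections at all.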

Platform effects are similar, but even stickier: they come from opening a platform to third-party developers. Once you have developers creating software that relies on a platform, the platform becomes more useful and therefore more adopted by end-users. This has been proven repeatedly — Windows originally beat the Mac because so many software developers and hardware manufacturers supported the Windows PC platform. Apple has of course had the last laugh, with the iPod/iPhone/iPad app marketplace taking a page right out of Microsoft’s playbook and kicking them in the teeth.

Facebook is a platform that consumer-facing applications like Zynga’s games have made good use of. But it is also a massive data and business-to-business platform — less broadly publicized, but beginning to gain adoption. And that part of its platform, tied to the data from the consumer side, is why advertising will ultimately bow to Facebook (barring some horrible misstep on its part).

Facebook takes user data in return for free access to the Facebook platform

Facebook requires all users to opt into its platform — and despite all the various privacy debates and discussions about Facebook, it is actually pretty good about being transparent and providing value to users in return for sharing all sorts of data.

Facebook is right now (my opinion — open to debate) the most authoritative source of data on consumers, their interests, and brand affiliations. It’s going to grow and become more comprehensive, meaning that it will become the main source of all data used by brand advertisers to reach targeted users.

To my mind this is already destined to happen — and locked up, because Facebook is a platform. It builds content that no media company would be able to build (social content). So in that way it really doesn’t compete with online publishers. Online publishers have wisely adopted Facebook as a distribution platform as well as an authentication platform for allowing consumers to access their content.

It’s only a matter of time before publishers become so intertwined with Facebook’s platform that all their content becomes effectively part of the Facebook platform. But not in a way that publishers should be worried about Facebook disintermediating them. If Facebook is smart, it will work this out now and find a way to give publishers what they want in return for this: Let the publishers own their own targeting data, and work out a way to help them make more money without losing that data ownership.

Facebook will own brand advertising, and will not need to own direct response

Most of the wonks in the ad space are pooh-poohing Facebook because of a nearsighted overfocus on direct response advertising. They believe in this false premise because of a single proof point: Google paid search advertising. The idea goes: “Since Facebook owns ad inventory that is further ‘up’ the purchase funnel than Google’s, Facebook will never justify a high enough CPM to compete for supremacy in the online space. Google owns advertising online, and it did this by creating a vast pool of inventory sold at extremely high CPMs (because it sits so close to the purchase on the purchase funnel); and because most of the online ad industry has been focused on DR for its entire existence, DR is where online must go.”

The wonks are wrong on this topic. Google undisputedly “owns” paid search advertising. But the entire paid search market amounts to something close to 250 billion monthly ad impressions. Google gets a very high premium on those ads — around $75 CPM. But Facebook has many more ways to play in the ad space than Google, and a lot more inventory to play with. Estimates put display ad volume well above 5 trillion monthly impressions, and Facebook has a huge percentage of these. Since Facebook can cater to brands, it can be an efficient platform for selling ads that target authoritatively to very granular audiences. Nobody has cracked that nut yet — the targeted reach at granularity and scale “nut” (disclaimer — this is specifically the problem I’ve been working on for the last year).
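The inventory-scale argument comes down to CPM arithmetic: CPM is the price per thousand impressions, so revenue is impressions divided by 1,000 times the CPM. A hedged sketch — the CPM figures below are purely hypothetical, chosen only to show that a much larger pool can out-earn a smaller, pricier one:

```python
def revenue(impressions: float, cpm: float) -> float:
    """CPM is the price per 1,000 impressions: revenue = impressions / 1000 * cpm."""
    return impressions / 1000 * cpm

# Hypothetical CPMs for illustration only (not the article's actual figures).
premium_pool = revenue(250e9, 10.0)  # smaller pool at a premium price
volume_pool = revenue(5e12, 1.0)     # 20x the impressions at a tenth the price

# The larger pool wins despite a far lower unit price.
print(f"Premium pool: ${premium_pool:,.0f}")  # Premium pool: $2,500,000,000
print(f"Volume pool:  ${volume_pool:,.0f}")   # Volume pool:  $5,000,000,000
```

The design point: whoever can attach brand-grade targeting to the huge display pool doesn’t need search-level CPMs to build a very large business.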

So Facebook could own brand advertising online, could own a role as the authoritative data provider for brand advertising, could own the way that the big brand content platform of TV makes its way into a more modern and effective ad model, and could very well be the winner of the online advertising (nay the entire advertising) space for brands.

Facebook will dominate local advertising

Facebook has already grown a massive advertising business, and my bet is that when the details of its ad revenue are fully disclosed, a big chunk of that business will prove to be locally based. It is the only real play to be had for local businesses online right now; the only place to get local audience reach at any kind of scale. Local is a massive advertising market — one that nobody has been able to crack online, and Facebook will be the gateway between traditional media and online media for local advertising. Zuckerberg must already secretly have 200 people working on this problem as I type.

I’m very bullish on Facebook, but then, this is all just my opinion: I don’t have any idea how much of this Facebook really understands itself. All it really needs is some decent ad formats, and it’s got everything pretty well sewn up.

5 reasons Twitter simply doesn’t matter

(Originally published in iMediaConnection, August 2009) by Eric Picard

Over the course of the last few weeks I’ve had about a dozen conversations with people about Twitter. People’s feelings range from gushing love of the 140 character medium, to disdain for the narcissistic tweeters among the digerati who simply won’t shut up.

I don’t have any particular beef myself with Twitter, and I’m as jacked in, connected, and narcissistic as the next guy. But much of the conversation about Twitter is incestuous and “insider-ish.” There’s a bit of haughty staring down the nose at the unwashed masses who aren’t tweeting — as if those who don’t tweet are simply showing that they have nothing to say.

And of course this week, discussing the denial-of-service attacks against Twitter has been all the rage. Which is to say that when you don’t have a vehicle to talk about nothing — or should I say, to tweet and re-tweet the nothing that is being tweeted — folks are tweeting about not being able to tweet.

A lot has been said about the power of micro-communi-blogging, or whatever category of the week that Twitter sits within. And as a communications tool, while I personally find it unwieldy and a bit untargeted, I’m nothing but respectful of those who get value out of Twitter. Shelly Palmer, for instance, is full of ways he’s gathering value out of Twitter. He said recently that by simply tweeting that he was thinking about dinner, he immediately had a readymade dinner party without having to make a single phone call or send a single email. Or should I have said, rt: @shelly_palmer?

More power to folks who find this to be a powerful medium for communications. But have you noticed that, for the most part, the people who are “power-tweeters” are either professional writers, or are using Twitter for personal PR?

Here are the reasons the buzz surrounding Twitter is a lot of hype.

1. Twitter is a recursive conversation among individuals who are promoting their own careers.

Not participating doesn’t hurt you. In some ways, it will help you if the folks you’re trying to impress are not part of the twit-clique. There’s nothing like being on the outside together to solidify a relationship with an interviewer or potential client; a little anti-Twitter camaraderie goes a long way. Because goodness knows that anyone who has chosen not to tweet at this point is resentful and annoyed by all the Twitter hype. And if you’re not an idiot, you’ll have already made certain that the person you’re meeting with is not an avid tweeter — because, well, that’s quite an easy thing to verify. And for the small part of the population that is hooked on the newest form of crackberry, stroking their ego is as easy as “following” them.

2. Signing up to follow someone on Twitter is easy and painless (especially if you never check your account).

Twitter is mostly about making the folks who are tweeting feel important and loved. Not to say that I don’t love them all already. But if all I need to do is simply sign up to follow them on Twitter, what a marvelously easy thing it is to make them feel better about themselves. And of course, when someone looks me up and sees that I’m following the “who’s who” of the online advertising digerati, I look both connected and very important. Oh, and I am, baby, I am — it’s how I roll.


3. Nobody can really follow what the hell’s going on anyway — who can read all that crap?

It’s really easy to say that you follow along on Twitter. But it’s an even worse time-suck than email. I get about 300-500 email messages a day. Most need to be scanned or read, and about 100 of them need to be responded to or handled one way or another. I also try to read the top-of-mind industry news every day — that’s about 10-20 “must read” articles.

Last week I wrote more than 150 emails. And when I went through them (I am just that kind of geek), most were a few paragraphs at least, and about a dozen ran several printed pages. Plus I wrote several long documents last week. Oh — and I actually did work. Who has time to tweet?

And of course there are the blogs of my favorite people, and my Facebook account — where I’ve shut off most of the feeds from Twitter (sorry FB friends) because I just don’t care about reading updates from whatever talk, conference, or luncheon most of my industry friends are attending. I prefer to find out what my friends are up to on Facebook — not read unedited input from my reporter friends while they’re sitting in a session at “advertising show of the week.” I’d rather read their edited article the next day, or their blog posting that night.

And there’s always my recent and growing love of Kodu on the Xbox 360 to suck away my free time.

4. Most of the professional tweeters are not writing their own stuff anyway.

Come on… you know better, don’t you? You really think all these CEOs, VPs, and entrepreneurs are writing their own tweets? Really? Yeah — and they really write all their own trade articles too. Honest.

Why do you think all the PR folks love Twitter? Just like blogging, it’s been a huge shot in the arm to the PR industry. If you have to be up to date, always responding, and seemingly always “in the know,” the only way to do that — and earn your executive salary — is to pay your PR firm to tweet for you. (Disclaimer: I write all my own articles — and I write all my own tweets.)

Yes, there are some seeming superhumans out there who manage to run their own companies, post on numerous mailing lists, tweet all over the place, write a blog, post on Facebook, spend time with the family, and have a garden (Yes, I’m talking about you, @thespos of Hespos.com and Underscore Marketing LLC). But for the rest of us, it just doesn’t work that way.

5. It’s the future of communications!

In 20 years, we’ll find Twitter right alongside the CB radio and personal blogs. Well — maybe not personal blogs, but…

Despite my tongue-in-cheek commentary above, I actually do use Twitter — and I do find it to be an amazing communications tool. But I will say that the hype is a bit, well, hyper. Twitter is a communications tool like many others, and it’s a new type of tool with incredible network effects: the more people who use it, the more valuable it is. But whether Twitter is the CB radio of our decade — or the next form of IM and email, but for broadcasting to the masses — is entirely uncertain. It will only remain as valuable as the number of people who really read it (or at least mine it and use analytics to tap into the mass-mind).

Oh — and you can follow me at @ericpicard. I’m really active on the tweeter.

You! Appearing Soon in an Ad Near You

(Published originally in ClickZ, September 2008) by Eric Picard

It occurs to me that in 10 years, Facebook will know what every ex-girlfriend and ex-boyfriend of an entire generation looks like. It already knows what millions of people’s children look like and obviously has numerous images of almost every person who uses its service.

I was talking with a friend the other day about the fact that people haven’t considered the ramifications of Moore’s law on real-time image processing. As computers grow more powerful over the next 10 years, many things we think of today as technically impossible, or at the very least technically difficult, will no longer be. Certainly this will impact technologies like targeting and analytics; it will also impact computer graphics. Looking across both of these worlds and their intersection, it’s easy to start predicting how this could come together.

It won’t be long before the kind of photo and video compositing done painstakingly by hand with lots of CPU horsepower today will be handled in real time on a consumer PC or even on servers in the cloud. This means advertising could be assembled in real time, too. Some companies have been doing this for a while. Visible World, for instance, has enabled creative shops to build template-driven ads that enable elements of the video to be swapped out based on targeting parameters. Near Mother’s Day, residents of an affluent neighborhood might see the expensive flower arrangement while residents of a working-class neighborhood see the inexpensive one.
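The template-swapping idea is simple to sketch. Here is a minimal, hypothetical illustration in Python — the slot names, segments, and asset filenames are all invented for the example, not drawn from any real system:

```python
# Hypothetical sketch of template-driven ad assembly: each slot in the
# creative template is filled based on the viewer's audience segment.

# Template slots mapped to candidate assets, keyed by segment.
ASSETS = {
    "product_shot": {
        "affluent": "deluxe_arrangement.mp4",
        "value": "classic_bouquet.mp4",
    },
    "voiceover": {
        "affluent": "vo_premium.wav",
        "value": "vo_everyday.wav",
    },
}

def assemble_ad(viewer_segment: str) -> dict:
    """Pick one asset per template slot for the given audience segment."""
    return {slot: choices[viewer_segment] for slot, choices in ASSETS.items()}

print(assemble_ad("affluent"))
# {'product_shot': 'deluxe_arrangement.mp4', 'voiceover': 'vo_premium.wav'}
```

The real work, of course, is in rendering the chosen assets into a finished video in real time; the selection logic itself is trivially fast, which is why the bottleneck the column describes is compositing horsepower, not targeting.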

But the kinds of things we’ll see in the next 10 years will make this seem amateurish and quaint. Imagine the following commercials:

  • A man stepping out of the new Lexus sedan catches your eye, as he seems somewhat familiar. As he crosses over to the trunk, winking at the attractive (and also somewhat-familiar looking) woman passing through the parking lot, you notice something. He looks an awful lot like you! He opens the trunk and hefts a set of golf clubs. The scene cuts to him beautifully teeing off into the sunrise. You really pay attention — because it’s almost as if someone had peered into your dreams and put them on the television. And you appear in the commercial as an idealized, slightly more chiseled, rugged, and handsome version of yourself.
  • A woman who looks oddly like your wife is getting three kids roughly the age, size, and look of your own kids into a minivan that matches the criteria of cars you’ve been shopping for just this week. The eight-year-old boy is carrying a soccer ball — just as your son would be. The toddler even carries a stuffed animal that resembles the one your daughter carries with her everywhere! You see the kids calmly watching a movie on the installed screens, and they seem quite comfortable. The mom seems calm, relieved to have such a nice ride that all the kids enjoy getting into — without squabbling.
  • An oddly appealing woman is fly-fishing. She seems so familiar, like you know her from somewhere. The ad focuses on the graphite rod she’s using, just like the one you were shopping for online last week but didn’t buy. You keep watching because the woman in the ad has such a nostalgic appeal to you. It’s almost as if she were a combination of three women you dated in college. And in truth, she is.

All these commercials seem like science fiction but aren’t far-fetched at all. We think about profile-based targeting as dealing with our habits and anonymously delivering products we’re interested in. But there’s no reason that, down the road, technology won’t enable the situations I just described. And while the privacy implications are vast — and the ads may seem a bit creepy — over time they may become acceptable. As we’ve seen in numerous studies, the current younger generation has very different feelings about privacy than older generations do. And opting in to scenarios like those I described may be quite commonplace in 10 years.