
Not getting left behind: AI business transformation

By Eric Picard

Here’s something people are missing about the changes coming from AI: you can’t treat AI as just a productivity tool. You can’t use it only to speed up document creation; you have to turn it into a thinking tool – a tool that makes your ideation and decision-making better.

If the CEO and the rest of the leadership team are not treating AI as a core capability across all aspects of the organization, their company is going to be standing still while competitors run by. So what do I mean by “core capability across all aspects of the organization?”

AI used correctly creates a workbench of hyper-intelligent advisors who augment the intelligence of the leadership team – especially the CEO – so they can make better decisions in less time. It needs to come from the top down, and it needs to be mandated – not delegated to the IT team, whose job is to get the best price with the highest uptime and the lowest number of support tickets (great for many things, but bad for business transformation). Companies need to keep AI adoption at very senior levels, abstracted from the IT team, and must be willing to live with the friction this causes.

In my practice, I’m seeing that when the leadership team starts using AI tools in their daily workflow, ingraining them into their habits, the whole organization starts moving faster. It’s when AI is used not only for document creation (we’ve really sped up writing – what a productivity boost!) but for ideation (our ideas are now battle-tested, pushed around, and stronger before we start executing!) that things really change. To be clear: this is not abdicating ideation to the AI, but bouncing ideas off the AI and getting feedback from a variety of points of view.

If you don’t understand what I’m talking about, try this simple experiment:

Create several specific personas. Here are some basic prompts that will help you try this out:

World Class CEO: You are a world class CEO with 25 years of leadership experience in the _______ industry. You are now retired, but mentoring other CEOs and leadership teams. You are cranky – you don’t glad-hand your mentees, you don’t tell them what they want to hear. You are inherently kind, you don’t argue for the sake of arguing, but you don’t hold back when you see mistakes being made. Your point of view is that _______________ and ____________ are great leaders who you try to emulate. And ____________ and _________ are overrated.

World Class CFO: You are a world class CFO with 25 years of experience at a combination of large publicly traded companies and startups that went through rapid growth and went public under your leadership. You understand the needs of a growing business, as well as the concerns of a large publicly traded company with regulatory oversight in the following industries _______________, _______________, ________________.

Innovative Entrepreneur: You are a massively successful entrepreneur in the ________________ and ____________ industries. You’ve disrupted the status quo in each of these industries before, driving incredible success by doing things nobody expected to work. You are seasoned enough to understand the problem of inertia, and you are not afraid of experimenting and failing. You’ve been through 3 acquisitions in which you had to transition to roles within large companies, so you understand how big companies differ from startups, and you were surprisingly successful navigating those organizations. You’re somewhat brash, and you tell the truth regardless of who you’re talking to.

Finish these prompts off by filling in the industry and names (the names have to be well-known figures), then copy/paste them into your AI tool of choice before starting to run ideas by them. You’ll be shocked at the outcome, and at the differences of opinion you get from each of these.

Now imagine if you had a stable of 20 or 30 or 100 of these predefined prompts that you could pull out whenever you needed them. Imagine if you debated all your big decisions with a large group of experts with strong points of view. Do you think your ideas would come out the other side stronger or weaker?
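If it helps to see the mechanics, here is a minimal Python sketch of running one question past a whole panel of saved persona prompts. The persona texts are abbreviated stand-ins for the templates above, and `build_panel_messages` is a hypothetical helper name; the commented-out call at the end assumes an OpenAI-style chat-completions client.

```python
# Sketch: run one question past a "panel" of saved persona prompts.
# PERSONAS and build_panel_messages() are illustrative names, not a real product API.

PERSONAS = {
    "World Class CEO": "You are a world class CEO with 25 years of leadership experience...",
    "World Class CFO": "You are a world class CFO with 25 years of experience...",
    "Innovative Entrepreneur": "You are a massively successful entrepreneur...",
}

def build_panel_messages(personas: dict, question: str) -> dict:
    """For each persona, build the message list to send to a chat model."""
    return {
        name: [
            {"role": "system", "content": prompt},  # the saved persona prompt
            {"role": "user", "content": question},  # the idea you want feedback on
        ]
        for name, prompt in personas.items()
    }

panel = build_panel_messages(PERSONAS, "Should we enter the European market next year?")

# You would then send each message list to your LLM of choice, e.g. (assumed client):
# reply = client.chat.completions.create(model="gpt-4o", messages=panel["World Class CFO"])
```

The point of the structure is that every persona sees the same question but answers from its own system prompt, which is what produces the differences of opinion.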

Having done this a lot, I can tell you my experience. I often disagreed with the input I was getting, which was half the value: it helped me become clearer on my own point of view. I got feedback I didn’t like, but sometimes it was what I needed to hear. When I pulled in advice from “experts” with different skill sets than my own, I got really valuable expansions of my thinking. For instance, when I was CPO at Bark, I knew very little about supply chain when I started, and became pretty knowledgeable over the time I was there. Part of the way I did that was to create a supply-chain expert prompt that I could run my ideas by. Note – I also became pretty close with our leadership on supply chain and had weekly meetings with various folks to get smarter faster. But I could ask really dumb questions of my AI expert, as often as I wanted, and then take my refined understanding to those other meetings.

It’s like having a superpower. And anyone not doing this in 2 years is going to be left behind.

I happen to use an AI workbench called CharmIQ that makes this much easier. You create “charms” in this tool – saved prompts like the ones I describe above. You can assign these prompts to any LLM; CharmIQ comes with baked-in access to all of them for one monthly subscription fee.

If you want to try it out, feel free to use this link; it gives you a discount and puts some change in my pocket. I’ll tell you, I use this tool for literally hours every day. It’s a game changer.

Click Here to Try CharmIQ.


How AI is going to change software development

by Eric Picard

As someone who’s spent decades watching technology waves crash over the software industry, I find myself constantly recalibrating my predictions about AI’s impact on how we build software. Two years ago, I wrote about the potential for AI to create massive productivity gains and fundamentally alter development practices. Now, with some genuinely surprising developments happening right under our noses, I can see we’re heading somewhere much more interesting than I initially thought.

The productivity gains from AI tools like GitHub Copilot, ChatGPT, Cursor, and Replit are real, but what’s caught my attention is how we’re seeing two distinct paths emerge. Professional developers are using AI as incredibly powerful assistants, while non-developers are essentially managing teams of AI-powered junior developers through conversational interfaces. Both approaches are working, just for different types of projects and different scales of ambition.

Two Worlds of AI-Powered Development

Professional developers have largely embraced AI as sophisticated tooling that makes them dramatically more effective. They’re using AI for code generation, debugging assistance, refactoring, and rapid prototyping, but they’re doing it within established architectural patterns and development methodologies. The productivity gains are genuine – I’m consistently hearing reports of 30-50% improvements on specific tasks – but these developers maintain architectural oversight and code comprehension. I’ve talked to several developers who are often referred to as “10x” developers, meaning they’re ten times more effective than others on their teams. These particular developers were 10x devs at big tech companies, so certainly more than ten times as effective as most developers. Each of them has told me they are 100x-ing themselves using AI. This is not a small change.

Then there’s what Andrej Karpathy has dubbed “vibe coding” – developers describing what they want in plain English and accepting AI-generated code without necessarily understanding every line. This “fully giving into the vibes” approach emphasizes rapid experimentation over careful architectural planning. Y Combinator reported that 25% of their Winter 2025 batch had codebases that were 95% AI-generated, which represents a fundamentally different relationship with code creation.

The key insight is that these aren’t competing approaches – they’re serving different needs. Professional development teams use AI to accelerate work within proven frameworks, while vibe coding enables non-developers or small teams to build functional applications that would have been impossible for them to create just a few years ago.

Replit: DevOps Intelligence for the AI Era

Replit’s explosive growth – from $10M to $100M ARR in less than six months – illustrates something important about where this is heading. Replit is actually the intelligent evolution of DevOps principles, bringing continuous integration, automated testing, and deployment automation to AI-driven development.

Traditional DevOps emerged because managing infrastructure, deployment pipelines, and scaling manually was becoming impossible at scale. Smart development teams adopted CI/CD, automated testing, infrastructure as code, and monitoring because these practices made complex systems manageable and reliable.

Replit takes these same principles and makes them accessible through conversational interfaces. When their AI Agent selects a tech stack, generates code, sets up databases, and handles deployment, it’s not eliminating DevOps – it’s automating DevOps intelligence so that non-developers can benefit from these practices without needing to understand them deeply.

This matters because it’s expanding who can build functional software. You don’t need to understand Kubernetes, Docker, CI/CD pipelines, or infrastructure configuration to get the benefits of modern deployment practices. The AI handles the complexity while applying proven DevOps principles under the hood.

The Custom Application Renaissance

What excites me most about this trend is that we’re finally approaching the custom application future that the internet promised but never quite delivered. For twenty years, we’ve talked about having rich, customized web applications for internal business processes, but the development overhead made it impractical for most organizations.

Now we’re entering an era where custom web applications for fairly complex business tasks can be built quickly and cost-effectively. The “intranet” that organizations have wished for – dynamic, task-specific applications that actually solve their particular workflow problems – is becoming achievable. Just this week I built two very powerful internal apps for one of my clients inside of CharmIQ. These apps automate extremely intensive processes that were bottlenecks for my client. And three weeks ago, I had no idea how to do this. I’d have hired someone to build them.

I’m hearing about custom applications for everything from inventory management to customer onboarding to internal reporting, applications that would have required months of development and significant ongoing maintenance. These aren’t replacing enterprise software entirely, but they’re filling the gaps where off-the-shelf solutions don’t quite fit.

The Architecture Challenge Ahead

As these AI-powered development approaches mature, we’re approaching a fundamental architectural question. Current approaches work well for their respective use cases, but we need architectural patterns optimized for AI collaboration rather than just AI assistance. Right now, AI developers are about as talented as junior developers with maybe 2-3 years of experience: they break things a lot, their code isn’t efficient, and they don’t always architect things properly. But that’s a short-term problem – we’re only a few years away from AI developing software as well as any human, or better. What happens then?

The traditional monolithic applications or coarse-grained microservices that work well for human development teams may not be optimal for AI-powered development environments. Some teams experiment with treating code as completely disposable, letting AI regenerate implementations for each iteration. This works for prototypes and simple applications, but it breaks down for complex systems where you lose accumulated knowledge and performance optimizations.

Components: The Architecture for AI Collaboration

I think the future lies in component-based architectures that provide the right granularity for AI systems to work effectively. This draws inspiration from earlier component models like Microsoft’s COM objects, adapted for modern cloud environments and AI capabilities.

Applications would be built from well-defined components with stable interfaces and clear functional boundaries. Each component handles specific capabilities – user authentication, payment processing, data transformation, content generation – with explicit contracts for inputs and outputs. The critical insight is that while these interfaces remain stable, AI systems can continuously optimize, refactor, or completely reimplement the internal logic of individual components.

This architecture offers several advantages for AI-powered development. Components provide bounded problem spaces where AI systems can operate effectively without breaking broader system functionality. The stable interfaces enable comprehensive testing and debugging, while internal flexibility allows for continuous optimization based on performance data and changing requirements.
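To make the stable-interface idea concrete, here is a minimal Python sketch. All the names (`PaymentProcessor`, `charge`, `checkout`) are illustrative assumptions, not from any real system: the contract stays fixed while the internals can be rewritten – by a human or an AI – without callers noticing.

```python
from typing import Protocol

class PaymentProcessor(Protocol):
    """Stable contract: the inputs and outputs are fixed, the internals are not."""
    def charge(self, amount_cents: int, token: str) -> bool: ...

class SimpleProcessor:
    """First implementation - perhaps human-written."""
    def charge(self, amount_cents: int, token: str) -> bool:
        return amount_cents > 0 and token.startswith("tok_")

class OptimizedProcessor:
    """A later reimplementation - perhaps AI-generated - behind the same contract."""
    def charge(self, amount_cents: int, token: str) -> bool:
        # Different internals, identical observable behavior.
        if amount_cents <= 0:
            return False
        return token[:4] == "tok_"

def checkout(processor: PaymentProcessor, amount_cents: int, token: str) -> str:
    # Callers depend only on the interface, so implementations can be swapped freely.
    return "paid" if processor.charge(amount_cents, token) else "declined"
```

Because `checkout` only knows the interface, a test suite written against the contract can validate any new implementation the AI produces before it goes live.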

What This Looks Like in Practice

Over the next five to ten years, I expect we’ll see component registries emerge that catalog available functionality with detailed specifications. AI systems will continuously monitor component performance and generate optimized versions for testing and gradual deployment.

Applications will become more dynamic, automatically reconfiguring by swapping component implementations based on load patterns, user behavior, or resource availability. Unlike current microservices managed by human teams, these components would be maintained by AI systems operating within architectural guidelines defined by human engineers. And eventually, we may even let the AI take that over too.
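A component registry of the kind described above might look something like this sketch. Everything here is a hypothetical illustration, not an existing framework: implementations register under an interface name, and the application resolves through the registry, so swapping the active variant is invisible to callers.

```python
# Sketch of a component registry that swaps implementations at runtime.
# All names are illustrative assumptions, not an existing framework.

class ComponentRegistry:
    def __init__(self):
        self._impls = {}    # interface name -> {variant name: implementation}
        self._active = {}   # interface name -> currently active variant name

    def register(self, interface: str, variant: str, impl):
        self._impls.setdefault(interface, {})[variant] = impl
        self._active.setdefault(interface, variant)  # first registration becomes active

    def activate(self, interface: str, variant: str):
        # e.g. triggered by monitoring data showing a better-performing candidate
        self._active[interface] = variant

    def resolve(self, interface: str):
        return self._impls[interface][self._active[interface]]

registry = ComponentRegistry()
registry.register("transform", "v1", lambda xs: [x * 2 for x in xs])
registry.register("transform", "v2-optimized", lambda xs: list(map(lambda x: x * 2, xs)))

# Callers always resolve through the registry, so this swap is invisible to them.
registry.activate("transform", "v2-optimized")
```

In practice the "activate" decision is the interesting part: it's where performance data, load patterns, or human-defined guidelines would gate which AI-generated variant actually serves traffic.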

The development process shifts toward interface-first design, where human architects focus on defining component boundaries and interactions, while AI systems handle implementation details. This division of labor plays to respective strengths: humans excel at architectural thinking and business requirements, while AI systems optimize implementations and handle routine coding tasks. And as the AI gets better and better at architecture and business requirements development, we may see a whole new world emerge.

The Transition Path Forward

This transformation is happening gradually but accelerating quickly. Professional developers are becoming more effective through AI assistance while maintaining architectural oversight. Non-developers are building functional applications through conversational interfaces that would have been impossible for them to create previously.

Current service-oriented architectures provide a foundation that can evolve toward component models as AI capabilities mature. Organizations with good interface design practices, comprehensive testing strategies, and strong observability will be best positioned for this transition.

The engineers who thrive will be those who can think architecturally about system design while effectively directing AI systems. Product managers become even more critical because rapid prototyping capabilities make clear product vision, competitive intelligence, customer-centric approaches and market understanding the primary competitive differentiators.

The Strategic Reality

As we move toward this future, competition shifts in important ways. Technical barriers to building certain types of software continue falling, but success increasingly depends on architectural excellence and product strategy rather than implementation speed alone.

Organizations that can design effective component architectures and orchestrate AI development systems will gain significant advantages in both development velocity and system reliability. The ability to continuously optimize software systems without traditional refactoring risks could become a major competitive edge.

However, this also presents new challenges around managing dynamic system complexity, ensuring security across AI-generated code, and maintaining coherent user experiences across rapidly evolving implementations.

The transformation isn’t about replacing human engineers – it’s about creating new collaboration models between human architectural thinking and AI implementation capabilities. The future belongs to organizations that can effectively combine these strengths while maintaining clear product vision and strategic focus.

We’re witnessing a shift from static implementations toward dynamic, continuously optimizing systems. While full realization is still years away, the foundation is being built through current experiments with AI-assisted development, vibe coding platforms, and component-based architectures. Replit’s growth numbers suggest this isn’t theoretical anymore – it’s happening faster than most of us expected, and the organizations preparing now will be best positioned to capitalize on the opportunities it creates.


AI Agents via CharmIQ have Supercharged my work

by Eric Picard

Today I’m exploring a tool that stands out in its ability to harness AI potential in ways I haven’t seen before—CharmIQ. This article discusses the capabilities of CharmIQ and is itself crafted using the platform. I’ve spent over twenty-five years building products and leading teams. I’ve witnessed technology evolve from multiple perspectives, helping guide both startups and large corporations through innovation challenges. This experience gives me a nuanced view on tools that genuinely transform how we work.

The artificial intelligence landscape, especially regarding large language models (LLMs), continues to expand rapidly. Each model offers distinct strengths and weaknesses. Navigating these options can overwhelm even experienced practitioners. CharmIQ addresses this challenge with a document-based approach rather than the chat interfaces that dominate the market. This represents more than a superficial interface change. It fundamentally changes how we interact with AI, allowing these capabilities to integrate more naturally into existing workflows.

CharmIQ’s power comes from its ability to create specialized AI agents called Charms. These agents function as virtual team members, each bringing unique perspectives to problem-solving. Every Charm possesses specialized knowledge and capabilities, enabling me to handle tasks that traditionally required teams of experts. These Charms act as cognitive partners, working alongside you whether generating strategic solutions, refining methodologies, or creating content.

Creating a Charm takes minimal effort. You simply describe the type of expert you need, and the system uses another Charm to write the definition for your new agent. Creating an effective Charm typically takes 2-3 minutes. The output from different Charms varies significantly: asking two different Charms the same question produces distinctly different answers.

CharmIQ has integrated with virtually every commercially available and open-source LLM. These integrations enable power users to leverage specific strengths of different models. Anthropic’s models excel at writing content. ChatGPT leads in creativity and reasoning. CharmIQ lets users switch between these models seamlessly, ensuring the right tool for each task.

This capability allows small teams to operate with the capacity and expertise of much larger organizations. By enabling AI-enhanced collaboration among human team members and AI-based agents, CharmIQ democratizes access to advanced capabilities. This matters significantly in today’s competitive landscape where agility drives success.

Consider a practical scenario: a product manager developing a go-to-market strategy. Traditionally, this involves coordinating across departments, synthesizing input from market research, sales, and engineering, and iterating through numerous drafts. With CharmIQ, the product manager can use specialized Charms for each step. One Charm analyzes market trends and customer insights, while another drafts a comprehensive go-to-market plan. This approach saves time and enhances quality by incorporating diverse perspectives.

The document-centric approach means all interactions, feedback, and iterations exist within a cohesive workspace. This eliminates friction associated with switching between tools and interfaces. The result? A streamlined workflow that lets teams focus on innovation and value delivery.

CharmIQ’s collaborative features extend beyond AI-human interaction to include real-time collaboration among team members. This proves invaluable in today’s remote and hybrid work environments. Team members work together within the dynamic workspace, sharing insights, providing feedback, and iterating on ideas without the constraints of traditional interfaces.

CharmIQ stands out not just for technical capabilities but for its strategic vision. By allowing users to customize and deploy multiple AI personas, it fosters creativity and experimentation. Users aren’t limited to predefined workflows. They can explore new approaches, test hypotheses, and iterate on solutions in real-time.

This flexibility benefits organizations trying to stay ahead in rapidly evolving markets. CharmIQ provides tools to adapt quickly to changing conditions, identify emerging opportunities, and respond with agility. For entrepreneurs and startups, this capability can determine success or failure.

As someone immersed in product management, I recognize the importance of aligning technology with business objectives. CharmIQ accomplishes this by providing a platform that enhances productivity, fosters collaboration, and drives innovation. It lets users focus on strategic thinking and high-quality work while reducing time spent on repetitive tasks.

By shifting from chat-based to document-centric interactions, CharmIQ redefines how we work with AI and each other. Its ability to integrate multiple AI models and create specialized agents offers remarkable flexibility and power. For teams of all sizes, CharmIQ enables AI-driven collaboration that unlocks new productivity and creativity levels.

I use this tool for hours daily. It saves me at least 1-2 days of work every week when generating work product. Incorporating CharmIQ into my workflow has boosted my productivity by 10-30x. If I primarily created documents or wrote code, I believe it would approach a 100x multiplier.

I’ve advised their team and CEO since they released their first internal beta. As they expanded access, they quickly discovered its broad appeal. Everyone who uses it becomes an avid power user within days.

If you want to try it, they offer a free trial. However, to fully understand its capabilities, I recommend signing up for a paid plan that unlocks all features. Use it for a few consecutive days, and you’ll likely find it transforming how you work.

Some personal and professional use cases:

I’ve introduced CharmIQ to all my teams and watched those who adopt it transform their workflow and approach. This transformation spans product managers, marketing teams, writers, and software developers. The software architect Charms I’ve created have educated teams on best practices and streamlined product launches.

Professional Example:

While leading Technology at Bark, we launched a new mobile app in just months using React Native. We accomplished this with two full-time developers (neither had used React Native before), plus a fractional team of one product manager and one QA specialist. From kick-off to launching both iOS and Android apps took three months. We used CharmIQ to create Charms that amplified each team member’s work: Market Research, Competitive Analysis, Product Strategy, Go-to-Market Plans, Architecture Design, Software Development Environment Configuration, and code writing and testing.

Personal Example – Writing:

As an author, I had spent three years writing a novel but got stuck with about 100 pages remaining. I couldn’t organize the final chapters and remained completely blocked for almost a year. I pasted my novel into CharmIQ and created Charms based on my favorite authors. I included my outline and what I had written so far, then asked these author-Charms (Neal Stephenson, Neil Gaiman, C.J. Cherryh, Frank Miller, Stephen King, Cormac McCarthy, Joan Didion, and Ian McEwan) for detailed feedback and help refining the remaining outline.

Their initial feedback proved harsh but broke my creative block. I finalized the outline in one night and wrote the remaining 100 pages as a first draft in about two weeks. After gathering human feedback, I made major revisions that continue today. My writing process now deeply incorporates CharmIQ.

Personal Example – Health:

I’m in my 50s with several long-term managed health issues, and I’ve created a repository for all my medical data, including test results, visit summaries, scans, and reports. I’ve created Charms representing my doctors: Primary Care Physician, Neurologist, Cardiologist, and Vascular Surgeon. The feedback from these Charms mirrors what my actual doctors tell me. I can also have them collaborate with each other – something nearly impossible in real medical practice.

I’ve created additional Charms for recipes, mixology, veterinary advice, restaurant recommendations, vacation planning, plumbing, HVAC, electrical work, home automation, and more.

Give Charm a try using this affiliate link and I’ll get a small financial bonus (anyone can sign up to be an affiliate – I’m just beta testing this for them). They’re a great team of wonderful humans, and they’ve built a product that has changed everything for me. I’d cry if it went away.


Programmatic Curation Surges Ahead: What is it, and why is it so hot?

by Eric Picard

As programmatic advertising continues to evolve, the concept of curation has become a critical focus. My article from a few years ago, The Fifth Wave of ad tech, highlighted the rise of “privileged programmatic,” but today’s discussions center on the nuanced roles of curation. To truly understand its implications, we need to dig into the distinctions between manual and smart curation, and clarify how these approaches differ from traditional ad networks and the proto-curation methods established by programmatic buyers using PMPs.

Ad Networks: The Initial Curators

In the early days of programmatic advertising, ad networks acted as intermediaries, facilitating transactions between buyers and sellers. However, their model was fundamentally flawed. By exploiting market inefficiencies, ad networks engaged in arbitrage—buying low and selling high—without significantly enhancing transactional value. This approach, while initially convenient, soon revealed its limitations as advertisers grew wary of inflated costs and minimal value addition.

The recognition of ad networks’ inefficiencies spurred a shift towards more transparent and efficient transaction models. Although some ad networks persist, their model is increasingly viewed as outdated and incompatible with the demands of a sophisticated programmatic ecosystem.

Proto-Curation: Traders using PMPs

For over a decade, advanced programmatic buyers have employed a strategy that could be termed proto-curation. This involves negotiating with publishers for privileged access to inventory, resulting in the manual creation and management of a vast array of Private Marketplace (PMP) deals within DSPs. These PMPs offer buyers curated inventory aligned with their specific needs, but managing thousands of such deals is labor-intensive and complex, and negotiating for privileged inventory access has varied results. This proto-curation is distinct from the curation models emerging today, and it answers the question of how curation differs from the approach trading desks have taken with PMPs for the last decade.

The Evolution of Curation: Standard and Smart Approaches

In recent years, curation has evolved into two distinct forms: Standard Curation and Smart Curation. Both approaches build upon the foundations laid by proto-curation, yet they offer unique methodologies and benefits.

Standard Curation involves human intervention to select inventory based on specific buyer criteria. This approach is akin to proto-curation but is more focused and refined, and is often done on behalf of the publisher. Manual curators negotiate inventory access with publishers, ensuring that DSPs receive bid opportunities that meet predefined criteria. This method provides a critical layer of control and efficiency that open exchanges cannot offer, making it indispensable for buyers seeking to optimize their programmatic strategies. This curation happens inside platforms designed to improve the work buyers have been doing for the last decade, providing strong workflow and tools that streamline the process of curating inventory through PMPs.

Another piece of the puzzle is that curation is done typically on the sell-side of the ecosystem. It’s in the publisher’s best interest to curate inventory from their end and to ensure that any privileged access to inventory is coming through curation platforms, so they can preserve prices and margins. Sometimes the manual curation is done by the publisher’s sales team, sometimes it’s done by a third party on behalf of the publisher.

Frequently these platforms bring together audience data as a differentiator, sometimes they act as the means for an advertiser to bring their own first party data to the media environment. Publishers typically put PMPs from their curation partners into higher privileged positions in the ad server than those done for advertisers and agencies directly – because it’s in the publisher’s interest to increase curated inventory’s value. Examples of these curation platforms include Permutive and Audigent.

Smart Curation, on the other hand, leverages advanced technology to enhance the curation process. By utilizing proprietary algorithms, signals, and data, smart curation refines inventory selection, aligning buying decisions with advertisers’ strategic goals. Unlike manual curation, smart curation minimizes human intervention, relying on advanced technology to streamline processes and maximize efficiency. Examples of smart curation include Yieldmo and OneTag.

Note – for all forms of curation, every vendor in the ecosystem is developing some curation product that proposes to be the way that curation should be done. Nearly every SSP/Exchange has a curation tool or marketplace, lots of the older data companies are getting into the curation game, and there are several standalone curation platforms on the market now. The goal of this article is to get you up to speed on what everyone’s talking about, and go a bit deeper into why it matters.

Curation isn’t just about Curated Audiences

While curated inventory against audiences defined in advertiser first-party data – potentially extended with lookalike audiences – is a significant use case, it’s not the only one. Many curation engines don’t use user data or target audiences at all; many curate using contextual data, and some use other performance signals. This is an important distinction because there have been several movements to rebrand curation around the concept of Curated Audiences, which in my mind is a subset of curation overall.

Dispelling Misconceptions: Curation vs. Ad Networks

Beware anyone telling you that curation is merely a rebranded version of ad networks. This simply isn’t true, and it’s often thrown out by very experienced people in the industry as a way to diminish the value of curation – it sounds smart while missing the point of what curation actually is. While both models involve intermediaries, their methodologies and value propositions are fundamentally different.

Ad networks thrived on market inefficiencies, engaging in arbitrage without adding significant value beyond transactional convenience. Conversely, curation – whether manual or smart – focuses on optimizing inventory selection without engaging in arbitrage. Curation grants inventory access ahead of buyers coming in directly through their DSP on the open exchange, and provides DSPs with refined bid opportunities at higher levels of privilege in the auction to improve results. There are no hidden costs or markups; instead, curation aims to maximize advertisers’ investments by aligning inventory with campaign goals.

Navigating the Programmatic Ecosystem with Curation

To fully appreciate curation’s value, it’s important to understand the programmatic ecosystem’s complexity. From Supply-Side Platforms (SSPs) to Pre-Bid Frameworks and Ad Server prioritization rules, numerous factors influence buyer-seller relationships. Advertisers lacking privileged access risk losing valuable impressions, a challenge that curation effectively addresses by refining bid opportunities.

The impact of this kind of privileged inventory access:

Imagine two bids on the same root impression that is sent to the exchange – even from the same DSP. Bid 1 could be $5.00 against the open exchange. Bid 2 could be $5.00 against a curated PMP. Bid 2 will always win because the publisher is going to favor (give privilege to) the PMP that they are curating for that advertiser. To make it even more complicated, some publishers may give enough privilege to curated PMPs that cost doesn’t even matter. If Bid 1 through the open exchange was $50, and Bid 2 through the curated PMP was $5 – Bid 2 would always win.
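To make the privilege mechanics concrete, here is a minimal, hypothetical sketch of tier-based auction logic – not any actual ad server’s implementation – where the priority tier dominates and price only breaks ties within a tier:

```python
# Hypothetical model of ad server priority tiers (illustrative only).
# Bids are compared by tier first; price only breaks ties within a tier.

from dataclasses import dataclass

# Lower number = more privileged tier in this simplified model.
PRIORITY = {"curated_pmp": 1, "open_exchange": 2}

@dataclass
class Bid:
    channel: str   # "curated_pmp" or "open_exchange"
    price: float   # CPM in dollars

def pick_winner(bids):
    # Sort key: (tier, -price). Tier dominates; price matters only within a tier.
    return min(bids, key=lambda b: (PRIORITY[b.channel], -b.price))

bid1 = Bid("open_exchange", 50.00)
bid2 = Bid("curated_pmp", 5.00)
print(pick_winner([bid1, bid2]).channel)  # curated_pmp wins despite the lower price
```

In this model a $50 open-exchange bid still loses to a $5 curated PMP bid, matching the scenario above; a real ad server’s prioritization rules are far more configurable, but the tier-before-price principle is the point.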

DSPs evaluate each bid opportunity provided by exchanges and SSPs, valuing them against campaign objectives. While comprehensive, this approach is inefficient: most bid opportunities hold little value for a specific campaign, yet DSP bidder algorithms value every one of them, even those that don’t warrant any scrutiny. Buying a bad piece of inventory just because the cost is low enough doesn’t lead to good outcomes. Consequently, the shift towards PMPs and curated inventory has become a strategic necessity, screening out inventory that shouldn’t be in consideration at all.
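A rough illustration of that screening idea, with made-up criteria (a domain allow-list and a viewability floor) standing in for whatever proprietary signals a real curator actually uses:

```python
# Illustrative only: pre-filtering bid requests before they reach a DSP's bidder,
# the screening role described above. The criteria here are hypothetical.

def curate(bid_requests, allowed_domains, min_viewability):
    # Only requests passing the curator's criteria flow through to the buyer.
    return [r for r in bid_requests
            if r["domain"] in allowed_domains
            and r["viewability"] >= min_viewability]

requests = [
    {"domain": "news.example", "viewability": 0.8},
    {"domain": "lowquality.example", "viewability": 0.2},
]
print(curate(requests, {"news.example"}, 0.5))  # only the first request survives
```

The efficiency gain is simply that the bidder never spends compute valuing opportunities that could never serve the campaign.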

Standard curation continues to provide value, especially when curators negotiate priority access with publishers. Meanwhile, smart curation uses technology either to streamline the process or to find powerful new ways to define and value inventory altogether. Smart curation is not the evolution of curation; it’s a subset of curation that defines and values inventory differently, using proprietary data and advanced algorithms to make informed decisions earlier in the bid stream than the DSP. Both approaches offer enhanced access to inventory, increased performance, and increased efficiency – significant value beyond what DSPs alone can achieve.

Strategic Implications and Future Directions

As programmatic advertising advances, the strategic implications of curation are profound. Advertisers must discern which platforms and technologies offer genuine value, distinguishing between superficial buzzwords and solutions delivering tangible results.

Publishers, too, should embrace the transparency and efficiency that curation offers. By collaborating with advertisers and technology providers, they can increase yield, preserve pricing, and unlock new revenue streams that enhance their competitive edge in a rapidly evolving market.

The lessons from previous ad tech waves remain relevant. Balancing innovation with value creation is critical, and success hinges on our ability to adapt and evolve. Curation represents not only a technological advancement but a strategic shift poised to redefine programmatic advertising’s future. By navigating this new terrain thoughtfully, advertisers and publishers can unlock new opportunities and drive meaningful results in an increasingly competitive market.


From Small to Large: Scaling Product Management

By Eric Picard

In my career as a Chief Product Officer, I’ve had the opportunity to witness firsthand the evolution of product management roles in both small and large companies. This journey has given me a unique perspective on the challenges and opportunities that product managers face as they navigate these different environments. Today, I want to share some insights on how product management scales from small organizations to larger ones and a little story I like to call “The Parable of the Rocks.”

“HEY! Give me back my rocks!”

In smaller organizations, product managers are like Swiss Army knives. They juggle an array of roles, from product marketing to technical product management. This requires a versatile skill set and the ability to adapt quickly to shifting demands. In these environments, the scope of each role is broad, and resources are often limited. The challenge lies in effectively balancing these diverse responsibilities. The ability to switch contexts seamlessly and maintain organization is not just a helpful trait; it becomes a superpower.

For instance, a product manager in a small company might start their day aligning the product roadmap with the engineering team, spend the afternoon crafting go-to-market strategies, and end the day troubleshooting technical issues. They are the glue that holds disparate functions together, ensuring that the product not only meets market needs but also stays on track with the company’s overarching goals.

This breadth of responsibility fosters a deep understanding of the product and its ecosystem. However, it also means that product managers in smaller companies often feel like they are carrying the weight of the world—or at least the product—on their shoulders. It’s a fast-paced and demanding role, but it also provides an unparalleled learning experience.

There comes a time in every company or team when the work becomes too much for one person, and that’s where things get very interesting. Eventually the work gets split across multiple product managers. Sometimes that individual contributor becomes a manager and has to divvy out their work. Sometimes a product leader is hired into the company as well. And as organizations grow, the product management role becomes more specialized, breaking into a variety of focused positions that allow for deeper expertise and efficiency.

  • Product Marketers focus on the Go-to-Market strategy, developing value propositions, creating sales materials, and assisting marketing and sales teams in targeting prospects. They decide whether to roll out by geography, market segment, or industry vertical, and prioritize efforts accordingly.
  • Product Strategists spend their time analyzing market opportunities, engaging with analysts and customers, crafting Market Requirements Documents, and conducting competitive analysis. Their role is to understand where the product fits in the market and how it can best meet customer needs.
  • Product Analysts or Product Operations Specialists ensure that products are properly instrumented for capturing user activity, enabling path analysis and financial performance evaluation. They provide invaluable insights into how the product is used and where improvements can be made.
  • Product Designers are responsible for the product’s look and feel, focusing on usability and user feedback. They conduct both qualitative and quantitative analyses, ensuring that the user interface is intuitive and effective.
  • Technical Project Managers coordinate the various teams and deliverables, ensuring that deadlines are met and resources are allocated efficiently. They play a critical role in keeping projects on track.

This specialization allows Technical Product Managers to concentrate on a more focused yet still pivotal role. They “own” the product, defining what will be built, prioritizing features on the roadmap, and writing the specifications that engineers use to develop the product. They still need to talk to customers, and they still need to stay on top of the market, but now they have help from partners. The Product Manager role now requires even more synthesis of input from various stakeholders, and convincing the organization that their vision is the best way to tackle business challenges. They need a strong enough ego to hold firm opinions backed by data, yet remain open to ideas coming from anywhere. This all sounds wonderful, but the organizational transition and the personal transition that these former superstar unicorns have to go through can be daunting.

This brings me to a story I often share when discussing this transition, which I call “The Parable of the Rocks.” Imagine being a product manager in a small team. Your day is spent picking up rocks—tasks, feature areas, responsibilities, and challenges—and putting them in your backpack. As the product develops and matures, you accumulate more and more rocks, and your backpack grows heavier. Eventually, it’s breaking your back. You’re walking hunched over, struggling to move forward, your chin is almost scraping the ground.

Finally, the company recognizes the need for help and hires a new product manager or even a leader for the PM organization, or splits the responsibilities out into some of the specializations mentioned above. This new person walks in, sees you bent double under the weight of all those rocks, and says, “Oh my god, let’s get some of that weight off.” They take some rocks out of your backpack and either put them in their own backpack or hand them off to other PMs or teams. If that person is a new product leader, they might decide that some of these things shouldn’t be done at all, and throw those rocks back on the ground.

At first, the product manager feels relief. They stand up straight, stretch, crack their back, and take a few steps forward. But then they notice those rocks on the ground, or see others carrying them, and doing things with them differently than they’d have done, and they say, “Hey, those are my rocks! Give me back my rocks!” This parable illustrates a common pitfall in transitioning from small to large teams. It’s natural to feel a sense of ownership over tasks you’ve been managing, but it’s critical to embrace the shift.

Letting go of certain responsibilities allows you to focus on strategic priorities and leverage the strengths of a larger team. It can be very hard to let go, because the new person who owns that rock might see it very differently, might change the very nature of a feature and how it solves the customer problem, or might deprioritize that feature altogether. The transition from small to large companies can be a transformative experience for product managers. It requires a willingness to adapt and a readiness to embrace new challenges and opportunities. Here are some strategies to navigate this transition successfully:

  • Develop a Growth Mindset: Be open to learning and adapting to new ways of working. Embrace the opportunity to deepen your expertise in specific areas and collaborate with specialized teams.
  • Cultivate Strong Communication Skills: In larger organizations, the ability to effectively communicate your vision and align cross-functional teams is paramount. Become a great data-driven storyteller. Inspire your teammates, inspire your customers. Foster relationships with stakeholders and build a network of allies.
  • Focus on Strategic Impact: Learn to balance both day-to-day tasks and long-term strategic goals. Leverage the resources available to you in larger organizations to drive meaningful impact. Don’t feel like you need to own all the rocks.
  • Let Go of the Rocks: Recognize the value in delegating responsibilities and sharing the load with your team. Trust in the capabilities of others and focus on the bigger picture.
  • Embrace Change: Change is inevitable in the transition from small to large companies. Embrace it as an opportunity for growth and innovation.

Scaling product management from small to large organizations involves a shift in mindset and approach. It’s a journey that offers both challenges and rewards, and one that can ultimately lead to greater strategic impact and career fulfillment. Embrace the shift, learn to love giving your rocks away, but ensure the new people have all the context they need to value them appropriately. Learn to tell great, inspirational, fact-based and data-driven stories. It’s only by convincing others of what you believe should be done or built that you’ll win – both as a company and personally in your career development.


5 Year Predictions – January 2023

Once every few years I like to write an article predicting what will happen in the future. Over the years I’ve had a pretty good track record of getting things right.  The world is shifting and moving a lot right now, but I believe that the future is bright.  Here’s how I think about the next five years, and beyond, through the lens of Ad Tech, Consumer Technologies, Media and Advertising.

  1. 3rd Party Cookies won’t go away, but they will slowly be rendered non-usable as persistent IDs

3rd party cookies, which have been a commonly used tool in the ad tech industry, will not completely disappear but will instead become increasingly less useful as persistent IDs. Google, for example, will not shut off 3rd party cookies in Chrome, but will instead make them less usable for persistent IDs over time. This gradual decline in functionality is expected to take place over a period of five to ten years, and by the end of this time frame, we will likely see the value of 3rd party cookies in the ad tech space significantly decrease. In five years, we will already be on this trajectory towards the obsolescence of 3rd party cookies as persistent IDs.

  2. New approaches to targeting inventory that are privacy-centric will arrive at scale

As the ad tech industry shifts away from 3rd party cookies as persistent IDs, new approaches to targeting inventory that prioritize privacy will become increasingly prevalent. These new methods will be built on technologies such as cohorts and will make use of panels of users that are statistically relevant. This will allow advertisers to not only target the audiences they care about but also more effectively attribute their advertising spend to various outcomes. 

The approaches currently being developed, including techniques such as embeddings and deep learning, will greatly surpass the current “brute force” methods used in ad tech and will lead to a move away from surveillance-based approaches towards those that prioritize privacy. Additionally, publisher and advertiser first-party data will be used to feed these privacy-centric models. The technology and techniques to match supply-side and demand-side data already exist, and this process will become increasingly easy, privacy-conscious, and available at scale. 

This will lead to a more equitable understanding of customer behavior. The seed audiences that act as panels for ML models will bring more equilibrium, reducing the information imbalance that has grown over the last decade in favor of the buy side.

  3. The lines between Buy and Sell Side ad technologies will blur

The lines between buy-side and sell-side ad technologies are becoming increasingly blurred. Companies like The Trade Desk are beginning to integrate directly with publishers, bypassing the SSP and exchange infrastructure. In response, SSPs and exchanges are starting to offer buying platforms, allowing buyers to bypass DSPs. This trend will continue for a few years, reach a peak, and then ultimately collapse in on itself. 

This is because DSPs are designed to lower competition over inventory and keep prices as low as possible, which is in line with their role as representatives of the buy-side. However, their algorithms are designed from a buyer’s perspective, and publishers will be wary of these direct paths, resulting in a decrease in yield. 

Exchanges and SSPs have mostly focused on liquidity and passing inventory through to DSPs at the lowest cost possible, while publishers have continued to lose power in the struggle between the buy-side and sell-side of the market. However, the pendulum will ultimately swing back towards equilibrium, and publishers will regain more control over data and measurement. Companies will invest heavily in ways to increase publisher yield and the market will balance out again.

  4. Web 3 Technology will iterate beyond just Cryptocurrency

Web 3 technology is evolving and shifting beyond cryptocurrency, towards solutions that support distributed identity and group collaboration. This will have a significant impact on advertising in several ways. Imagine a world where users have full control over their identity and data, and only share relevant information with the companies they choose to interact with, through mechanisms that obscure unnecessary information. Healthcare and finance industries already use some techniques for doing this at scale, and combining these techniques with approaches used in the Web3/crypto space can open up new possibilities. For example, a digital wallet that contains all the important information about an individual’s life, such as healthcare, financial, education, employment, real estate, municipal and government information and automatically shares only relevant information with companies and organizations.

Users could easily opt-in to being part of a brand’s community, which would merge CRM, CDP, Ad Serving, and Social Media. This would mean that users get special perks from that brand, including the ability to get special offers, customized products, early access, etc. Brands could reach out to users and ask for their opinions on products and reward them for their participation. Users could “stake” their interest in a new product or feature and in return get early access, similar to an Indiegogo campaign, but for major brand interactions. Users could also vote on product changes or feature prioritization based on their staking, and the staking could be based on a points system based on their loyalty.

For example, if you have owned five BMWs over the last 20 years, and you are a known high-value customer, you could participate in a user group of other high-value customers and apply your influence to get special options for your next car, or maybe even for mainstream features in all models. Maybe BMW would offer a limited-edition model just for that group of customers, or a special badge. Or maybe you and others have strong opinions about the placement of cup holders, and could influence a change in future models. The “staking” in this case could be the fact that you have already bought several BMWs, and you currently own one or more.

Concepts like staking are common in the Web3 and crypto space but haven’t yet gone mainstream. Over the next five years, we are likely to see more and more of these concepts integrated into mainstream industry, even if the behind-the-scenes mechanism is obscured from customers.

  5. Retail Media Marketplaces will grow and expand.

Retail media marketplaces are expected to grow and expand in the coming years. For big retailers like Amazon, Walmart, and Target, this represents an opportunity for additional revenue at higher margins. These networks have already expanded into grocery chains, and even to boutique e-commerce and retailers. They could expand even further beyond the virtual world and into the physical space between bricks-and-mortar stores.

The growth of these retail media marketplaces is due in part to the evolution of the old “coop-dollar” systems that have been in place for decades into something much more advanced. Brands can now pay for product placement in the search results for similar products. When combined with e-commerce experiences, this leads to better outcomes for all parties involved – brands, consumers, and e-commerce retailers. The margins on these media businesses are significantly higher compared to other parts of retailers’ businesses, which is why it is expected to proliferate.

Retailers have a direct consumer relationship, pure first-party data about the customer, and the positioning of these media units is almost perfectly located between the moment of purchase consideration and the purchase itself. This means brands will be willing to spend money on this “must-buy” piece of media. Additionally, bundling of virtual shelf placement with in-store environments will make this buy even stickier over time. If brands want to get good shelf positions, end-caps, and other in-store benefits for their products, they will need to also pay for placement in the virtual space. Ultimately, these will blur and blend and package together, but it is likely further out than five years.

  6. Social Networking will evolve to something else altogether.

Social networking is expected to evolve into something else altogether, with everything tied back through the social graph. This includes commerce, communications, education, search, and more. The social graph maps the connections between people and their interests, and platforms like Facebook understand who you know and the flow of information between you and your connections, as well as their interests and sentiments on various topics.

If the social graph were to become open, meaning it is no longer a walled garden, and your identity and the social graph extends beyond people to companies, products, brands, media, music, film, etc., and where you, the human, are in control, and it’s easy to manage, there would be significant opportunities for growth and change. Social graphs would connect everything, and the consumer would be in charge. Applications built on top of these open social graphs would be different from anything we have seen so far.

Facebook has already become Meta, and they’re trying to own the metaverse. But even without virtual reality, the social graph overlaid across everything would be transformative. It could lead to collaboration between ephemeral and permanent groups of people to do things together. For example, it would be easy to organize a friend group to buy out a restaurant for an evening party, find 800 people in the greater Boston area who also love the New England Patriots and want to have a meet and greet with the team, or have dinner at a local restaurant with a special menu with ten of your closest friends.

But this is just the tip of the iceberg. Connecting the social graph to everything else will change the world. And if identity is solved, so we know you’re not a Russian Bot, things will only get better.

  7. Artificial Intelligence will change everything.

Artificial Intelligence (AI) is expected to change everything in the coming years. We are already familiar with AI-powered solutions such as filters in Instagram or “lenses” in Snapchat, and predictive text to help with text messaging and correcting grammatical errors in documents. But these are just the beginning of a trend that is now starting to take off.

One example of this is ChatGPT, a new chatbot by OpenAI that complements their DALL-E offerings. ChatGPT enables the creation of very complex written content that can be indistinguishable from content created by humans. Some software developers are even using it both to check code for bugs and to write code from scratch.

Similarly, AI image and video generators are on the cusp of making significant strides. MidJourney, Dall-E 2, and numerous other solutions can generate images in almost any style just by describing what one would like to see. The results are getting exponentially better on an ever-shortening curve.

While it’s important to note that this technology also brings ethical concerns such as copyright and originality, which need to be addressed, the gains will outweigh these concerns. Over the next few years, the art of combining human input with computer-generated output will be refined, and every single software tool used for writing, office work, finance, design, etc., will be transformed. Corporate users will have AIs trained just with their own datasets, such that trade secrets and non-public information can be incorporated into the AI engines. For creators, the initial concerns about artists having their work stolen by these AI engines will be replaced by new understandings of how artists can have their own AI, trained on their behalf, to supercharge and speed up the creation and generation of work.

For production artists and graphic designers, these AI tools will become a seamless and integral part of their workflow, allowing them to create and generate content faster and more efficiently. Musicians will also have access to similar tools that will allow them to compose, produce, and record music in new and innovative ways. The impact of AI in these fields will not only change the way we create and consume art, but it will also open up new possibilities for expression and creativity.

  8. The long game: What big technology will sneak up on us and change all aspects of society?

    I’m going to say something that will sound boring:  Electricity.

When I do these predictions, I like to pick one long term trend and extrapolate even further out than 5 years.  The biggest trending technology I can think of is Electricity.  

Solar technology will continue to improve on a scale similar to Moore’s Law, which it has been meeting or beating for more than twenty years. Today the cost of solar power is about $0.08 per kilowatt-hour. If the costs keep dropping and the output keeps increasing at the same rate, electricity will become extremely cheap.
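As a purely illustrative compounding exercise (the 10% annual decline rate is an assumption for the sketch, not a forecast), here is what sustained cost declines do to that $0.08 per kilowatt-hour figure over twenty years:

```python
# Illustrative compound-decline arithmetic, not a forecast.
# Assumed rate: a hypothetical 10% cost reduction per year.

cost = 0.08  # dollars per kWh, the starting figure above
rate = 0.10  # assumed annual decline
for year in range(1, 21):
    cost *= (1 - rate)
print(round(cost, 4))  # prints 0.0097 – under a penny per kWh after 20 years
```

Whatever the actual rate turns out to be, the compounding is the point: steady percentage declines drive the cost toward negligible levels within a couple of decades.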

You may recall how 20 years ago you paid a long-distance fee for all phone calls except local ones (just the town you lived in). Electricity will never be totally free, but just as we now have essentially free calling to anyone, anywhere – even video calling – we’re approaching a world where the cost of electricity, and the cost of building out a distributed electrical grid everywhere, will be so low that the long-term prediction should be for very cheap power with low or even zero emissions. Solar everywhere and incredibly cheap electricity will eventually transform the way the world works.

Over the next five years, you should expect to see a lot more solar power implemented, on houses, on buildings, and even the beginnings of solar panels placed under fixed infrastructure like streets and parking lots. https://solarroadways.com/ 

Once that transition happens at scale, with nearly free electricity nearly everywhere, you’ll see big shifts. There will be a convergence with other lower-cost technologies like LEDs (Light Emitting Diodes) hitting their next generation, where laser diodes become cheap enough to replace LEDs. Lasers put out 1,000 times as much light as an LED, for only two-thirds as much energy. When laser diodes become cheap enough, and power is almost free, we’ll see a revolution in lighting and therefore in video. Effectively this means video everywhere, all the time: streets made of solar cells with laser diodes mounted into their transparent high-strength glass surfaces so the roads light up and animate; buildings covered in solar cells with laser diodes embedded in them; instant Christmas lights; video on the sides of buildings everywhere; and the ability to put lighted, animated signage anywhere for nearly no cost. Streetscapes and cities will radically transform when this happens.

And I’m bullish about carbon emissions because solar will be so cheap and the innovations on top of a newly formed, completely distributed solar grid are massive.

  9. And as always, my final prediction:

Sometime in the next five years, some new technology nobody has even thought about, or a simple reinvention of an existing widely used technology, will come into existence and totally scramble things. Just like the iPhone was unexpected, just like the success of Social Media was unexpected, something new will appear. And once again it will change everything.

The 6th Wave of Advertising Technology: Privacy

By Eric Picard, Originally published on AdExchanger – Wednesday, February 24th, 2021

There’s a revolution happening in digital media, primarily driven by a new focus on privacy. Major players at the core of the digital ecosystem have decided that privacy is a core value, and have made fundamental changes that block many standard practices. This change is going to upend the industry as we know it, and offers huge opportunities for anyone in the right position to take advantage of it.

Let’s work our way from where we’ve been to where we are, and then talk about where we’re going.

Wave 1: In the beginning (1996-1998)

The first wave of ad tech was about establishing scalable ways to operate the digital advertising business. Someone had to figure out how to sell ads in advance of the campaign running, how to implement and operate campaigns, how to track delivery and how to bill customers. We saw the rise of ad servers, the creation of sales and ad operations tools and workflows and the invention of buy-side ad serving. And we saw significant growth.

Wave 2: Formats, Targeting, Tracking, Attribution 1.0 (1999-2001)

After the basics got sorted, we saw innovative work in rich media ad formats (things like interactive ads, video, audio, visual effects, over-the-page, expanding ads, etc.). My first startup, Bluestreak, developed many of these formats. Across the industry we saw significant innovation in targeting of ads. (User behavior was tracked and turned into audience segments, which could be sold.) And a new attribution discipline emerged to measure what happened after a person saw or clicked on an ad.

Wave 3: Remnant Monetization, Multi-Touch Attribution, Yield Optimization (2002-2006)

When the “dot-com” bubble burst in 2001, the average CPM of display ad inventory dropped from about $25 to about $0.50 in the course of a year. All the peripheral ad tech companies that had been charging add-on fees for rich media and targeting began to struggle – that is, until they eventually realized they could sell directly to publishers as a way to drive yield. In the hunt for revenue at any cost, and as vast numbers of smart salespeople got laid off, someone figured out that secondary and even tertiary ad marketplaces could be used to monetize every single impression at some price. This model was in some ways a mistake, because it further devalued inventory that was already under price pressure. It took a long time for this wave to end, and in some ways it still hasn’t.

On the buy side, advertisers began to realize that “last touch” attribution was obscuring the real drivers of conversions, falsely rewarding some channels, specifically paid search. Sadly, some advertisers still use last-touch models.

Wave 4: The rise of programmatic (2007-2014)

A few really smart people realized that remnant marketplaces were evolving similarly to commodities and securities marketplaces. And they began building auction-based exchanges that sold inventory in much the same way paid search was sold.

This wave was incredibly powerful and it supercharged the industry. As we saw with the evolution of electronic exchanges in securities, the market moved away from daisy-chained tag-based auctions to real-time bidding (RTB). This extensible infrastructure also led to opportunities for nefarious actors to make money by fraudulently selling fake ads, defrauding advertisers and publishers of billions of dollars over many years. And a massive investment in the data infrastructure has led in many ways to a “surveillance state” that allows almost any company to track people’s behavior across the entire internet and build targeting segments that can be used to buy them as advertising.

Wave 5: Privileged Programmatic and Fraud Cleanup (2015-2020)

As it matured, programmatic advertising continued to walk in the footsteps of the securities exchanges. The largest ad buyers and sellers began to recreate privileged relationships inside the new RTB infrastructure. Examples include PMPs and “first look” mechanisms like header bidding and Prebid. It is now possible (though it will take a while) to completely recreate all the ways ads have historically been sold on top of RTB infrastructure, and the eventual result will be a much more scalable and automated way of doing business.

Similarly, the massive and hidden problem of fraud was uncovered, and measures were taken to root it out. Industry efforts like Ads.txt and Sellers.json, and whole new companies and technologies for fraud detection and prevention, have set the industry on a path to solving this crisis. The result: a massive maturation of the ecosystem.
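Ads.txt is a deliberately simple mechanism: a publisher hosts a plain-text file listing which exchanges are authorized to sell its inventory. Each data line carries an exchange domain, the publisher’s seller account ID, the relationship (DIRECT or RESELLER), and an optional certification authority ID. A minimal parsing sketch, with invented sample content:

```python
# Parse ads.txt data lines into records. Comment lines (#) and
# variable lines (e.g. CONTACT=...) are skipped per the IAB format.

def parse_ads_txt(text):
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or "=" in line:           # skip blanks and variables
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append({
                "exchange": fields[0].lower(),
                "seller_id": fields[1],
                "relationship": fields[2].upper(),
                "cert_id": fields[3] if len(fields) > 3 else None,
            })
    return records

sample = """# ads.txt for a hypothetical publisher
exchange.example, 12345, DIRECT, abc123
reseller.example, 67890, RESELLER
"""
records = parse_ads_txt(sample)
```

A buyer can then cross-check the seller ID in a bid request against these records and decline inventory from unlisted paths, which is exactly the spoofing attack ads.txt was designed to shut down.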

Wave 6: Privacy-Centric Advertising, New Format Innovation and Supply-Chain Optimization

Meanwhile the “surveillance state” we’ve found ourselves in has led to a huge backlash against third-party tracking that is upending the ecosystem again.

Over the last few years we’ve seen major initiatives by the technology industry to establish and enforce new privacy controls across all media. This trend is accelerating and broadening, and many of the mechanisms we’ve taken for granted in online advertising have been ruled privacy-unsafe and are being phased out. Many companies in the space have doubled down on a commitment to these older tracking approaches, and are trying to find a path forward that perpetuates them. I will hazard a prediction that this is not going to work.

The time is coming to an end when companies with no relationship to the consumer can track those consumers’ behavior across the internet and then sell that data. This evolution will strengthen companies that do have a direct (i.e. “first party”) consumer relationship, such as advertisers and publishers. It is also helping the largest incumbents like Facebook and Google, who have immense amounts of first-party data.

Technology providers will need to find ways to evolve their offerings such that they support the direct consumer relationships held by the advertiser and/or the publisher. This will mean in many cases either a completely new approach, or a set of innovations in how technology is integrated with the first-party companies’ infrastructure. The great thing about disruption is that it leads to new innovations.

Because the third-party data and tracking infrastructure is becoming less valuable, new ways to increase the value of ad opportunities will come to the foreground. Format innovation is back in the mix as a way to increase the value of inventory without breaching privacy protections. And the next wave of supply cleanup, after the war over fraud, will ensure supply chains are clean and optimized, with low-value suppliers shuffled out of existence.

Supply-chain optimization has been emerging as a focus area since around 2014, but now is becoming mainstream. The first round of supply-chain optimization was ‘brute force’ and ugly, but we’re now seeing intelligent and powerful supply-chain optimization enter the market, as well as industry initiatives like Sellers.json. These new technologies, initiatives and approaches are driving advertiser value and publisher yield significantly.


Why dynamic creative has bounced back from failure

By Eric Picard (Originally published on iMediaConnection October 14th, 2013)

Back in 1999 (when the moon did not yet have a Moonbase Alpha, nor had an explosion sent the moon rocketing across the cosmos — a reference for old-timers like me), while at my last startup, Bluestreak, we started experimenting with dynamic creative.

The idea was that e-commerce companies had thousands of products available online, and that based on where the ad ran, we should be able to test and optimize which products led to the most clicks and purchases. Over the next few years, we worked with several customers to experiment with this. We ultimately ran ads with several publishers that would rotate through a list of products, and we used our creative optimization technology to determine which combination of offers was getting the best results (based on clicks, interactions, or conversions).

It turned out that there were various combinations of location (publisher) and product that worked much better than others, and the tests were successful. But the question was really about matters of degrees. We saw significant improvements in results, and we developed great technology that supported all this. But after the bottom dropped out of the market in 2000 and 2001 and the price of inventory dropped significantly, the improvements in performance stopped mattering as much.

Essentially, the price of inventory was so low that it was cheaper to just run much higher volumes of unoptimized ads than to pay for the optimization service.

But I knew that creative optimization and dynamic creative would have their time and place. Eventually, either the impact of creative optimization would drive significantly better results, the price of inventory would come back up, or we’d be able to optimize offers based on user targeting rather than just by publisher.

Creative optimization and dynamic creative dropped out of the industry for eight to 10 years, but it came screaming back. As I guessed, the major driver was targeting based on user data. And over the past few years, the growth of real-time bidding and audience targeting has led to significant improvements in dynamic creative and optimization.

There are now several significant companies that have built their business around the idea of optimizing the offer shown to users based on their profiles, including a lot of retargeting. They build advertising campaigns that are driven by databases — ones that pull together the creative in ways that include hundreds, thousands, or even millions of possible combinations. The best offer is selected based on a variety of criteria, including audience targeting attributes such as demographics, behavioral data, and retargeting data. This information is extensively available and can be used to drive significantly better optimization than location alone.

We all know that with real-time bidding and ad exchanges, ads can be targeted based on this kind of data. And we all know that with basic tracking of impressions, clicks, and conversions, bid prices in ad exchanges can be adjusted to optimize results based on the number of clicks or conversions. But dynamic creative optimization can take things to the next level. Using all of these technologies and techniques in combination can significantly drive up ROI. The only question is how many different products, offers, or options are available for optimization purposes.

The more opportunities to adjust the creative — especially if those products or offers can be somehow predicted to match against different audiences’ preferences or interests — the more likely the user is to act.
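The select-and-learn loop described above can be sketched as a simple epsilon-greedy bandit: per audience segment, mostly serve the creative variant with the best observed click-through rate, while occasionally exploring the others. The segment and offer names here are invented, and real dynamic creative platforms use far richer models than this:

```python
import random
from collections import defaultdict

class CreativeOptimizer:
    """Epsilon-greedy selection of a creative variant per audience segment."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon  # fraction of traffic used for exploration
        self.impressions = defaultdict(lambda: defaultdict(int))
        self.clicks = defaultdict(lambda: defaultdict(int))

    def ctr(self, segment, variant):
        shown = self.impressions[segment][variant]
        return self.clicks[segment][variant] / shown if shown else 0.0

    def choose(self, segment):
        if random.random() < self.epsilon:
            return random.choice(self.variants)   # explore
        return max(self.variants, key=lambda v: self.ctr(segment, v))  # exploit

    def record(self, segment, variant, clicked):
        self.impressions[segment][variant] += 1
        if clicked:
            self.clicks[segment][variant] += 1

# Illustrative usage (epsilon=0 to make the demo deterministic)
opt = CreativeOptimizer(["sneaker_offer", "boot_offer"], epsilon=0.0)
opt.record("runners", "sneaker_offer", clicked=True)
opt.record("runners", "boot_offer", clicked=False)
```

After those two observations, `opt.choose("runners")` exploits the higher-CTR sneaker offer. The same scaffolding extends to conversions or revenue per impression as the reward signal.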
Read more at http://www.imediaconnection.com/content/35170.asp