Creative Optimization in the Age of CTV and Digital Video: A Q&A with Les Seifer, SVP, Global Creative

The demand for highly effective, personalized creative has never been greater – particularly across CTV and digital video. But while the advertising industry leverages data in almost every aspect of digital media planning, one area where we’re lagging is creative optimization. So, how can brands leverage data to increase the relevance of their campaigns to consumers, particularly across these growing channels? This is exactly what we aimed to demystify in a proprietary study with MAGNA: The Intersection of Audience Data + Creative Optimization: How to Drive Action on Streaming TV. On the heels of this report’s release, Les Seifer, Nexxen’s Senior Vice President, Global Creative, further unpacks the gaps in creative strategies so that advertisers can create more engaging and impactful ads. Here’s what he had to say.

Where do you see gaps in how advertisers approach their creative for CTV & digital video?

From my perspective, there is one major gap: many creative teams are not producing content that’s fully optimized for different screens or tailored to specific audiences, particularly in the realms of CTV and digital video. And honestly, it’s not entirely their fault. A big part of the problem is bandwidth – creative teams are stretched thin and often lack the resources to create multiple versions of an ad for every audience or screen. Another challenge is visibility: these teams don’t always have complete insight into how their ads will be distributed across multichannel platforms, nor how specific audiences will respond to their creative. Lastly, many of these teams have limited access to actionable insights, often relying solely on post-campaign brand lift reports that provide little more than surface-level data and lack the nuance needed to make real improvements.

What opportunities are there for advertisers to address those gaps?

The advertising industry excels at leveraging data in almost every aspect of digital media planning. Yet, ironically, one area where it’s lagging is creative optimization. And I say “ironically” because, at the end of the day, it’s the creative – the ad – that drives consumer response. Significant investments can be made in reaching the right audience, but if the content isn’t informed by data, all of that effort risks falling flat.

We hear a lot about making creative relevant to audiences, but how do you make that actionable?

To make creative genuinely relevant, going beyond surface-level data and digging into actionable insights is essential. This means understanding audience behavior and preferences as well as how consumers interact with content across different devices. One of the most effective ways to gather these insights is through pre-flight testing. By sharing creative assets with real consumers – who self-report their interests and demographics – advertisers can gather both voluntary feedback (like surveys) and involuntary feedback (such as facial coding, active attention and emotional reactions). This combination of data helps reveal who’s engaging with the content and how it resonates, providing a clearer picture of how the campaign might perform. With these insights in hand, advertisers gain a deeper understanding of audience response before committing to paid impressions. If issues like low purchase intent or weak brand recall surface, they can be addressed early in the process.
For example, strong purchase intent might suggest adding interactive elements – especially in CTV ads – while low brand recall could be improved with simple tweaks, like incorporating a logo or branded frame.

How does Nexxen approach solving these challenges?

At Nexxen, we take a holistic approach to creative testing. We begin by recruiting panels of real consumers to watch ads. As these panels watch the ads, we collect voluntary and involuntary data and feedback, as mentioned. The real value of this approach is the 360-degree view it provides, capturing both conscious feedback and subconscious reactions. Because we conduct this testing before the campaign goes live, we can make quick, cost-effective adjustments to the creative, improving performance before launch. Tackling insights and optimizations early delivers more value than relying on post-campaign results, which often come too late to act on.

Additionally, Nexxen offers “Creative Engaged Audience” data, identifying viewers who are not just watching the ad, but also showing greater attention, better brand recall and stronger purchase intent. Ignoring this segment means missing a key opportunity to connect with the most engaged part of an audience. What’s interesting is that this highly engaged group may not always align with the intended target audience. For example, a campaign aimed at parents might find that younger professionals are more engaged with the ad. Once these engaged groups are identified, we leverage tools like Nexxen Discovery to understand their interests and content consumption further, enabling the creation of more refined audience or contextual segments for the campaign.

The key takeaway? These highly engaged viewers are more likely to take action – they click more, buy more and interact with the brand more than others. By targeting them with the creative that resonates most, based on pre-flight testing, a stronger connection is built between the ad and the audience, leading to better campaign performance.

And finally, what’s next for creative optimization in digital video? What has you excited?

Current solutions are only scratching the surface. As more advanced tools like AI and machine learning come into play, we’re already seeing creative optimization become even more sophisticated. We’re working on ways to better predict what will resonate with different audience segments and enable more dynamic, personalized creative messaging in real time. The true game-changer will come as advertisers start to not only optimize their creative before launching, but also adjust it dynamically as the campaign unfolds. With the right data from pre-flight testing and mid-flight performance, we can continually optimize throughout the campaign, ensuring the creative stays relevant and effective from start to finish.

Life at Nexxen with Chris Wieland

In this installment of Life at Nexxen, Engineering Manager Chris Wieland shares what he enjoys most about his work, his favorite music venue in New York, and how curiosity led him to adtech.

Million Ads Machine: How Nexxen Runs an Accurate, High-Performance Ad Scoring Platform

In the fast-paced world of digital advertising, the ability to efficiently rank and score millions of ads in real time is paramount to success. As advertisers increasingly demand measurable results tailored to their unique campaign objectives, the challenge for Demand-Side Platforms (DSPs) intensifies. How do we strike the perfect balance between speed, accuracy, and scalability in a landscape marked by fluctuating demands and diverse goals?

Feature Engineering

Effective feature engineering is crucial for extracting actionable insights from complex, sparse datasets. At Nexxen, we distill thousands of candidate features into a few hundred high-impact signal-based features, which is key to optimizing model performance and enhancing prediction accuracy.

Our data scientists utilize a proprietary query engine to execute a variety of Spark and MapReduce jobs, deconstructing data across multiple dimensions. We’ve developed a framework that detects feature discrepancies, improving training data quality. This system combines automated pipelines with on-demand SparkSQL queries, allowing for comparisons between training data and incoming ad requests and ensuring that the models remain aligned with the latest data distributions (a simplified sketch of such a comparison appears below, after the Observability section).

Ad Scoring and Ranking Using Machine-Learning Models

Before scoring and ranking, a series of services handle pre-processing steps like filtering and discarding based on specific rules. These services leverage in-memory caches and distributed key-value stores for fast retrieval of metadata from relational databases and object stores. These lookups occur in milliseconds, which is crucial for real-time performance.

When an impression request arrives, the scoring system uses the deserialized trained models loaded into memory for immediate scoring and ranking. Requests are transformed into feature vectors, which are then scored using a Directed Acyclic Graph (DAG), where each machine learning model acts as a node in the DAG. The DAG structure allows for dependency-based execution, optimizing for various KPIs like CPA (Cost Per Action), CPC (Cost Per Click), or Viewability (a simplified sketch of this dependency-ordered execution also appears below).

The complete bidding workflow – including selection, filtering, and scoring – occurs within a few milliseconds, enabling Nexxen to handle millions of requests per second while maintaining high throughput and minimal latency.

[Figure: high-level design of the ad scoring system]

A/B Testing and Custom Bidding Strategies

The scoring platform provides A/B testing through user-split methodologies to quantify campaign lift while minimizing the expenses commonly associated with control-group impressions. Distinct machine learning model versions can be assigned unique budget caps, facilitating performance benchmarking and adaptive budget allocation.

Advertisers can further optimize their bidding algorithms by applying bid multipliers across various targeting vectors, providing increased flexibility to maximize campaign effectiveness.

Observability

Our observability infrastructure is divided into two core components:

1. Model Generation and Training: Tracks the success rates of data collection, training, and model distribution.
2. Model Performance: Monitors real-time performance metrics such as latency, throughput, and accuracy for each deployed model.

We leverage a time-series database to collect high-resolution metrics and generate dynamic dashboards. These dashboards allow us to separate signal from noise, providing insights into true performance anomalies.
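To make the feature-discrepancy framework described above more concrete, here is a minimal PySpark sketch of one way a training-vs-live comparison could look. The table names, the use of per-feature null rates as the drift signal, and the 5-percentage-point threshold are illustrative assumptions, not details of Nexxen’s actual pipeline.

```python
# A minimal sketch of a training-vs-live feature comparison (illustrative only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-discrepancy-check").getOrCreate()

# Hypothetical table names standing in for the real training and request logs.
training = spark.table("training_features")
live = spark.table("recent_ad_requests")

def null_rates(df, cols):
    """Return {feature: fraction of NULL values} for the given columns."""
    aggs = [F.avg(F.col(c).isNull().cast("double")).alias(c) for c in cols]
    return df.agg(*aggs).first().asDict()

# Only compare features present in both datasets.
features = [c for c in training.columns if c in set(live.columns)]
train_rates = null_rates(training, features)
live_rates = null_rates(live, features)

# Flag features whose null rate shifted by more than 5 percentage points
# (the threshold is an arbitrary example, not a production value).
drifted = {
    c: (train_rates[c], live_rates[c])
    for c in features
    if abs(train_rates[c] - live_rates[c]) > 0.05
}
for name, (train_r, live_r) in sorted(drifted.items()):
    print(f"{name}: train_null={train_r:.3f} live_null={live_r:.3f}")
```

In practice the same pattern extends to other distribution checks (value ranges, category frequencies), with null rate shown here only because it keeps the sketch short.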
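The DAG-based scoring described in the Ad Scoring and Ranking section can be sketched in a few lines: each node wraps a model, and nodes run in dependency order so that downstream nodes (such as a final bid value) can consume upstream predictions. The node names, stub models, and combination formula below are hypothetical, not Nexxen’s actual graph.

```python
# A minimal sketch of dependency-ordered ("DAG") model execution (illustrative only).
from graphlib import TopologicalSorter

def score_request(features, nodes, deps):
    """Run each model node in topological order, feeding it the outputs of its
    upstream nodes; nodes maps name -> callable(features, upstream_outputs)."""
    graph = {name: deps.get(name, set()) for name in nodes}
    outputs = {}
    for name in TopologicalSorter(graph).static_order():
        upstream = {d: outputs[d] for d in deps.get(name, set())}
        outputs[name] = nodes[name](features, upstream)
    return outputs

# Hypothetical model nodes; the constants stand in for real trained models,
# and the bid_value combination is a made-up example, not Nexxen's formula.
nodes = {
    "pCTR": lambda f, up: 0.021,        # predicted click-through rate
    "pCVR": lambda f, up: 0.004,        # predicted conversion rate given a click
    "viewability": lambda f, up: 0.78,  # predicted probability the ad is viewable
    "bid_value": lambda f, up: up["pCTR"] * up["pCVR"] * up["viewability"] * 1_000_000,
}
deps = {"bid_value": {"pCTR", "pCVR", "viewability"}}

scores = score_request({"placement": "ctv"}, nodes, deps)
print(scores["bid_value"])  # the value used downstream for ranking and bidding
```

The appeal of the graph structure is that models optimizing different KPIs can be added or re-wired without changing the execution logic, and independent branches can be evaluated concurrently if latency requires it.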
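For the A/B testing section, here is a rough sketch of what a user-split assignment with per-version budget caps might look like. The variant names, traffic shares, and budget figures are invented for illustration.

```python
# A minimal sketch of a user-split A/B assignment with per-version budget caps
# (variant names, shares, and caps are illustrative).
import hashlib

VARIANTS = [
    {"model": "ctr_model_v7", "share": 0.90, "daily_budget_usd": 45_000.0},
    {"model": "ctr_model_v8", "share": 0.10, "daily_budget_usd": 5_000.0},
]
spend_usd = {v["model"]: 0.0 for v in VARIANTS}

def assign_variant(user_id: str) -> dict:
    """Hash the user ID so each user consistently sees the same model version
    (a user split rather than a per-impression split)."""
    bucket = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for variant in VARIANTS:
        cumulative += variant["share"]
        if bucket < cumulative:
            return variant
    return VARIANTS[-1]

def can_bid(variant: dict, price_usd: float) -> bool:
    """Enforce each version's budget cap so a test variant cannot overspend."""
    return spend_usd[variant["model"]] + price_usd <= variant["daily_budget_usd"]

variant = assign_variant("user-123")
if can_bid(variant, price_usd=0.002):
    spend_usd[variant["model"]] += 0.002  # record spend after winning the auction
```

Hashing the user ID keeps the assignment deterministic without storing per-user state, which matters when the platform is handling millions of requests per second.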
Model Release and Versioning

Our CI/CD pipeline integrates GitLab and Jenkins for version control, build automation, and deployment. This setup enables seamless rollout of new machine learning models, or rollback to previous versions based on real-time performance metrics, ensuring both agility and reliability in model deployment (a minimal sketch of such a metrics-gated check appears at the end of this post).

Looking Ahead

As Nexxen looks to the future, a key focus will be leveraging larger, more sophisticated AI models to tackle AI-driven ad fraud and to guard against AI-powered algorithms perpetuating biases present in training data, which can lead to discriminatory ad targeting and reinforce existing inequalities.

By continuously refining its technology and methodologies, Nexxen is committed to developing strategies and further expanding customization on our scoring platform to address these issues while ensuring efficient predictions and maintaining high performance.
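As referenced in the Model Release and Versioning section, a rollback decision can be reduced to comparing real-time metrics for the newly deployed model against its predecessor. The sketch below is a hypothetical illustration of such a check; the metric names, thresholds, and values are assumptions rather than Nexxen’s actual release criteria.

```python
# A minimal sketch of a metrics-gated rollback check (metric names, thresholds,
# and values are hypothetical).
from dataclasses import dataclass

@dataclass
class ModelMetrics:
    p99_latency_ms: float   # serving latency at the 99th percentile
    auc: float              # accuracy proxy for the model's predictions
    error_rate: float       # fraction of scoring requests that failed

def should_roll_back(new: ModelMetrics, previous: ModelMetrics) -> bool:
    """Roll back if the new version is meaningfully slower or less accurate."""
    return (
        new.p99_latency_ms > previous.p99_latency_ms * 1.2
        or new.auc < previous.auc - 0.01
        or new.error_rate > previous.error_rate * 2
    )

# Example values as they might be pulled from the time-series store.
previous = ModelMetrics(p99_latency_ms=8.5, auc=0.742, error_rate=0.001)
candidate = ModelMetrics(p99_latency_ms=11.2, auc=0.744, error_rate=0.001)

if should_roll_back(candidate, previous):
    print("trigger rollback to the previous model version")
else:
    print("keep the new model version")
```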

From Insight to Activation: Nexxen’s Formula for Personalization in a Privacy-Conscious World

Conway’s Law, a principle coined by computer programmer Melvin Conway in 1967, states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” In simple terms, the way teams are structured within a company directly influences the architecture of the software they build. If teams are siloed or fragmented, the software systems they create will reflect that disjointedness, and vice versa – collaborative teams tend to build more cohesive systems.

At Nexxen, we’ve witnessed this law in action many times, both in positive and challenging ways. Below are a few real-world examples where Conway’s Law was applied, intentionally or otherwise, and how it shaped our technical architecture and team structure.

The Monolith

Several years ago, Nexxen acquired a successful ad-tech firm that had built an impressive and feature-rich platform. However, it came with one significant architectural challenge: it was built as a monolith – a single, unified codebase where all functionality is tightly coupled. While such systems may be easier to develop in the early stages, they come with several long-term downsides. These applications usually suffer from slower development and scaling limitations because of the butterfly effect, where, for example, changing one line of code in the UI may impact the data-science codebase.

It was clear we needed to break down this monolith and transition to a distributed architecture with microservices. Distributed services allow different components of the system (e.g., runtime, front-end, data) to operate independently, making the system easier to scale, maintain, and deploy.

To make this transition, we reorganized the original monolithic development teams into several smaller groups, each focused on a specific domain – such as runtime, front-end, and data. These new teams, now more autonomous, were incentivized to take ownership of their respective areas, leading to the natural evolution of independent services. Each group began decoupling their domain from the monolith and building it as a standalone service. As Conway’s Law dictates, as our communication structure shifted, the software followed, evolving from a monolithic architecture into distributed microservices.

Distributed teams lead to misalignment

Not all outcomes of the newly distributed team structure were purely positive. As teams became more independent, we noticed an increase in the rate of production issues. The issue wasn’t with the teams’ capabilities, but rather with a lack of alignment between them. As each team focused on its own service, communication across teams diminished, and this misalignment led to inconsistencies. Because the teams were operating in silos, the systems they built reflected that disconnection. The architecture had become fragmented, with services not always communicating well or adhering to shared practices.

Fearing the return of the monolith, we introduced a more structured communication framework while leaving the teams independent. For example, we implemented a rigid Slack structure, ensuring all teams had dedicated channels for cross-team collaboration, changes, and feedback. We created clear guidelines for communicating changes early and soliciting feedback across teams.
Another example was the enforcement of regular cross-team meetings and a shared response system that improved communication, leading to faster resolution times and better alignment across services. By improving the communication structure, we were able to reduce production issues and bring the architecture into better alignment with the business’s needs.

Freedom vs. Alignment

Another example of Conway’s Law at play can be seen in our front-end development teams. Nexxen operates several independent product lines, each with its own front-end application. Each product line is supported by its own development team, leading to a natural separation in how teams approach their work. Over time, this autonomy led to diverse tool choices – teams were using different logging frameworks, testing tools, monitoring solutions, and even CI/CD pipelines. There are clear advantages to allowing teams to choose their own tools, such as increased ownership and innovation, but this autonomy also raised the challenge of inconsistent practices. Each team’s differing choices in tools made it harder to maintain, monitor, and support these applications at a company-wide level.

To address this, we reorganized the teams under a single department while maintaining the independence of each team’s product line. This shared management layer ensured alignment on core tools and protocols (logging, testing, monitoring, etc.) while preserving the benefits of autonomy where it made sense. This approach allowed us to standardize critical infrastructure while giving teams the freedom to innovate within their domains.

Conclusion

Conway’s Law has been a guiding principle at Nexxen, sometimes applied intentionally and other times revealed through experience. Whether breaking down a monolith, dealing with the challenges of distributed teams, or balancing autonomy with alignment, we’ve learned that organizational structure and communication are just as critical to system design as the technologies we choose. By being mindful of how teams interact, we’ve been able to shape our architecture to better serve the business – and ultimately, our customers.