Four Winning Brand Strategies for the Big Game

In 2024, more than 123 million viewers tuned into the Big Game, making it the most-watched TV event of all time. Think about that – in the history of American television viewership, last year’s showdown between the Chiefs and 49ers took the number one spot, beating out every other pivotal media event (like the moon landing). And here’s the real kicker: those millions of fans weren’t all glued to the same screen. Our research suggests 2025 will be similar, with viewing plans varying by audience. Sure, 40% of viewers will catch the game on cable or satellite (the top choice for Boomers and Gen-X). But 51% (namely Millennials and Adult Gen-Z) are expected to stream it this year, while the rest will likely watch on a website or through other means.

This fragmentation isn’t an obstacle, though – it’s an opportunity. Football’s biggest night is more than a three-hour block on TV; it’s a flurry of activity across phones, streaming platforms and social media. For advertisers, this means that – instead of relying on a monolithic approach (where’s the fun in that, anyway?) – you can use data-driven strategies to meet viewers on the platforms and devices that most resonate with them. Here are four ways to create a brand moment worthy of the Big Game this year:

1. Use Audience Data to Your Advantage

Understanding the different ways your audience engages with content during the game will help you craft a winning strategy. After sports fanatics, for instance? Say less. These highly engaged, often Millennial viewers watch live sports weekly (87%) and prefer ad-supported platforms (83%), so try activating campaigns aligned with pre-game analysis or live commentary. Meanwhile, 52% of Millennials and Gen-Xers plan for this night weeks ahead of time, engaging with hosting tips, recipes and decor ideas. Target them with lifestyle content tied to food and event essentials.
Using data on fan viewing habits to identify where they are most engaged – and activating campaigns that align with their typical behaviors – will help you deliver more relevant and impactful messaging, maximize engagement and ensure your brand is present during the moments that matter most.

2. Optimize Your Ad Before You Activate

Let’s be honest: the game is as much about ads as it is football. And creating a stand-out campaign requires ensuring your creative resonates before it even goes live. Pre-launch testing helps gauge how audiences respond. Use surveys, attention measurement or emotional response analysis to fine-tune messaging, visuals and tone for maximum impact. Time and again, research has shown that optimizing creative is the key to resonating with audiences, especially when those optimizations are data-driven. By optimizing your ad, you can make sure it drives both measurable and memorable impact – the ultimate touchdown.

3. Dig Deeper with Real-Time Audience Insights

Knowing where your audience is watching is just the start – to refine and strengthen your campaign, you’ll need to tap into real-time digital and TV behavior. Some hints? Ads placed alongside live commentary or player stats drive 39% and 36% engagement, respectively, helping your message reach fans who are actively interacting with the game. Nearly 33% of viewers also engage with hosting tips, creating a perfect placement opportunity for brands that align with the more social side of Sunday night.

4. Blend TV and Digital Metrics to Extend Reach and Reduce Waste

With viewers spread across platforms, the challenge isn’t just reaching them – it’s doing so efficiently.
Blending data sources like ACR, set-top box and app analytics allows you to track cross-screen behaviors, reduce duplication and optimize frequency for a seamless viewer experience. These insights also reveal gaps and overlap between TV and digital audiences, helping you strategically allocate placements. For instance, reach Millennials and Adult Gen-Z on streaming, but engage Boomers and Gen-X on cable. By unifying metrics and refining placements, you can maximize reach and impact across this fragmented landscape.

Ready to make the play? The Big Game is your time to shine. Whether you’re building hype before kickoff, engaging fans on second screens or keeping the moment alive after the final whistle, there’s room for every brand to have an impact. It’s not always the biggest budgets that win; it’s the smartest strategies.

Innovating at Scale – Practices from Within Nexxen Engineering (Part 2)

At Nexxen, the stability of our platform is core to our engineering team’s mission, ensuring that our customers have a seamless experience while we continue to innovate at a fast pace. To achieve this, we rely on our ability to make small, incremental changes, push them to our production systems quickly, and immediately see the impact those changes have on the overall health of our platform. In my previous post, we discussed why and how we test in production. In this article, we’ll dive into our observability platform and our culture of ownership.

Observability-Driven Development

In a highly concurrent, low-latency system like Nexxen’s, validating a change requires us to examine the production environment holistically. This is where our observability platform, Atlas, comes into play. Atlas is an internally white-labeled, self-hosted Grafana LGTM stack maintained by our infrastructure team. It provides us with real-time visibility into the health and performance of our production systems, enabling us to quickly detect and diagnose issues. With Atlas, every engineer has access to a wealth of telemetry data – metrics, logs, and traces – which they can use to gain insight into how their changes are affecting the system.

At Nexxen, some of the first questions we ask when developing a new feature or making changes to our system are:

1. How will we know if this change is working as intended when it’s released to production?
2. How will we be alerted if the change is not performing as expected?

These questions are at the heart of our observability-driven development approach. By defining clear metrics upfront and ensuring that we have the necessary telemetry in place to track them, we can quickly assess the impact of our changes once they’re deployed to production. This proactive approach helps us catch potential setbacks early and avoid negative impacts on our customers.
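To make those two questions concrete, here is a minimal sketch of defining a success metric and its alert condition before a change ships. The names and threshold are hypothetical for illustration; in practice this telemetry would flow through a stack like Atlas rather than a hand-rolled class:

```python
from dataclasses import dataclass

@dataclass
class RolloutMetric:
    """A success metric defined up front, before the change is deployed."""
    name: str
    alert_threshold: float  # alert if the error ratio exceeds this
    successes: int = 0
    failures: int = 0

    def record(self, ok: bool) -> None:
        if ok:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def error_ratio(self) -> float:
        total = self.successes + self.failures
        return self.failures / total if total else 0.0

    def should_alert(self) -> bool:
        return self.error_ratio > self.alert_threshold

# Question 1: "How will we know it's working?" -> watch this metric.
# Question 2: "How will we be alerted?" -> the threshold is set before launch.
metric = RolloutMetric(name="new_feature.requests", alert_threshold=0.01)
for outcome in [True] * 98 + [False] * 2:  # simulated production traffic
    metric.record(outcome)
print(metric.error_ratio)    # 0.02
print(metric.should_alert())  # True -> the 2% error rate breaches the 1% bar
```

The point of the sketch is the ordering: the metric and its alert condition exist before the first production request arrives, so the answer to "is this change working?" is a lookup, not an investigation.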
Observability-driven development not only helps us identify and resolve issues more efficiently, but also enables us to continuously optimize our systems. By analyzing the telemetry data collected by Atlas, we can identify performance bottlenecks, resource inefficiencies, and opportunities for improvement, then proactively make optimizations and architectural changes that enhance the overall reliability and scalability of our platform.

A Culture of Ownership

Perhaps most importantly, Nexxen has a culture of ownership where every engineer is given the knowledge, tools, responsibility, and trust they need to own their work end-to-end. We all know how our systems work, and nothing is “thrown over the wall” for another team to run or monitor in production. To support this mindset, we have invested heavily in production-related tooling and practices. Engineers are encouraged to actively engage with production systems daily, as that is where our users interact with our code and infrastructure. We have built robust guardrails and safety nets that enable us to confidently make changes. By fostering a culture of trust, ownership, and continuous improvement, we are able to deliver exceptional value to our customers while maintaining a stable and reliable platform.

Conclusion

At Nexxen, we pride ourselves on our platform’s stability and our ability to keep improving our technology as we grow. Through realistic testing in production environments, tracking success metrics and analyzing performance data, and fostering a culture of ownership throughout our engineering teams, Nexxen’s platform delivers both innovation and stability.

Innovating at Scale – Practices from Within Nexxen Engineering (Part 1)

At Nexxen, the stability of our platform is core to our engineering team’s mission, ensuring that our customers have a seamless experience while we continue to innovate at a fast pace. To achieve this, we rely on our ability to make incremental changes, push them to our production systems quickly, and immediately see the impact those changes have on the overall health of our platform. I will highlight a few practices the Nexxen engineering team uses to innovate quickly at scale while minimizing change risk and keeping our production systems stable. In this article specifically, we’ll discuss why and how we test in production rather than in staging environments.

Testing in Production

When done with the right safeguards and observability in place, testing in production enables engineers to gain immediate confidence in their changes and ship faster than otherwise possible. It is also, arguably, safer than traditional staging-environment testing, which doesn’t capture the full complexity of real-world conditions and is therefore a less reliable predictor of production readiness. While staging environments serve a purpose, they are often unable to fully mimic the complexity of live systems, especially at scale. No amount of preparation and testing in a development or staging environment is the same as running your code on a production machine. The hardware is not the same, the network is not the same, the data is not the same, nor are the patterns and behaviors of interactions between different system components.

At Nexxen, we shorten the feedback loop and test directly in production through canary deployments, leveraging the power of Kubernetes to make this process seamless. Canary deployments involve rolling out changes to one or two production servers, limiting exposure to a small percentage of traffic, and closely monitoring the performance and behavior of the canaries before releasing the changes more broadly.
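The monitoring step of a canary rollout can be sketched as a simple guardrail check that compares the canary’s error rate against the baseline fleet before promoting the change. The function name, thresholds, and traffic numbers below are invented for illustration, not Nexxen’s actual tooling:

```python
def canary_healthy(baseline_errors: int, baseline_total: int,
                   canary_errors: int, canary_total: int,
                   max_relative_increase: float = 0.5) -> bool:
    """Return True if the canary's error rate stays within an
    acceptable margin of the baseline fleet's error rate."""
    if canary_total == 0 or baseline_total == 0:
        return False  # not enough traffic to judge either side
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    # Allow up to max_relative_increase above baseline, plus a small
    # absolute floor so a near-zero baseline doesn't auto-fail the canary.
    allowed = baseline_rate * (1 + max_relative_increase) + 0.0001
    return canary_rate <= allowed

# Baseline fleet: 50 errors in 100,000 requests; canary: 9 in 10,000.
print(canary_healthy(50, 100_000, 9, 10_000))  # False -> roll back
print(canary_healthy(50, 100_000, 7, 10_000))  # True  -> safe to widen rollout
```

A real deployment pipeline would run a check like this continuously (and over latency and resource metrics, not just errors) while the canary pods take a slice of live traffic, widening the rollout only while the guardrail holds.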
Operating at scale both enables and requires us to do this. For example, Nexxen’s DSP serves millions of requests per second across four datacenters, with an average latency under 80 milliseconds. This, of course, requires a substantial amount of hardware – enough that we can target a subset of it for testing new changes. But it also requires a substantial amount of precision: just a small increase in garbage collection time or ten milliseconds of additional latency could be detrimental to overall system performance. Testing anywhere but production doesn’t inspire confidence.

We still follow thorough SDLC (Software Development Lifecycle) procedures, such as passing unit and integration tests, undergoing code reviews, utilizing feature flags to manage new functionality, and securing proper approvals before any change is applied to production. Beyond that, Nexxen has invested heavily in modernizing our CI/CD pipelines, ensuring that we can rapidly and safely deploy – and roll back – changes across every part of our production system. This modernization enables us to deliver features faster without compromising the stability of our platform. Testing in production is the ultimate quality control checkpoint, ensuring that our changes work as intended in the real-world environment where they will ultimately run. In the next article, we’ll go further by exploring our observability platform and our culture of ownership.

Life at Nexxen with Erin Frey

For the latest edition in the series, we talked with Erin Frey, a Senior Account Executive based out of our Chicago office. Erin shares advice for building relationships in sales, how to deal with setbacks, and her current favorite restaurants in Chicago.

Creative Optimization in the Age of CTV and Digital Video: A Q&A with Les Seifer, SVP, Global Creative

The demand for highly effective, personalized creative has never been greater – particularly across CTV and digital video. But while the advertising industry leverages data in almost every aspect of digital media planning, one area where we’re lagging is creative optimization. So, how can brands leverage data to increase the relevance of their campaigns to consumers, particularly across these growing channels? This is exactly what we aimed to demystify in a proprietary study with MAGNA: The Intersection of Audience Data + Creative Optimization: How to Drive Action on Streaming TV. On the heels of this report’s release, Les Seifer, Nexxen’s Senior Vice President, Global Creative, further unpacks the gaps in creative strategies so that advertisers can create more engaging and impactful ads. Here’s what he had to say.

Where do you see gaps in how advertisers approach their creative for CTV and digital video?

From my perspective, there is one major gap: many creative teams are not producing content that’s fully optimized for different screens or tailored to specific audiences, particularly in the realms of CTV and digital video. And honestly, it’s not entirely their fault. A big part of the problem is bandwidth – creative teams are stretched thin and often lack the resources to create multiple versions of an ad for every audience or screen. Another challenge is visibility. These teams don’t always have complete insight into how their ads will be distributed across multichannel platforms, nor how specific audiences will respond to their creative. Lastly, many of these teams have limited access to actionable insights, often relying solely on post-campaign brand lift reports that provide little more than surface-level data and lack the nuance needed to make real improvements.

What opportunities are there for advertisers to address those gaps?

The advertising industry excels at leveraging data in almost every aspect of digital media planning.
Yet, ironically, one area where it’s lagging is creative optimization. And I say “ironically” because, at the end of the day, it’s the creative – the ad – that drives consumer response. Significant investments can be made in reaching the right audience, but if the content isn’t informed by data, all of that effort risks falling flat.

We hear a lot about making creative relevant to audiences, but how do you make that actionable?

To make creative genuinely relevant, going beyond surface-level data and digging into actionable insights is essential. This means understanding audience behavior and preferences, as well as how consumers interact with content across different devices. One of the most effective ways to gather these insights is through pre-flight testing. By sharing creative assets with real consumers – who self-report their interests and demographics – advertisers can gather both voluntary feedback (like surveys) and involuntary feedback (such as facial coding, active attention and emotional reactions). This combination of data helps reveal who’s engaging with the content and how it resonates, providing a clearer picture of how the campaign might perform. With these insights in hand, advertisers gain a deeper understanding of audience response before committing to paid impressions. If issues like low purchase intent or weak brand recall surface, they can be addressed early in the process. For example, strong purchase intent might suggest adding interactive elements – especially in CTV ads – while low brand recall could be improved with simple tweaks, like incorporating a logo or branded frame.

How does Nexxen approach solving these challenges?

At Nexxen, we take a holistic approach to creative testing. We begin by recruiting panels of real consumers to watch ads. As these panels watch the ads, we collect voluntary and involuntary data and feedback, as mentioned. The real value of this approach is the 360-degree view it provides, capturing both conscious feedback and subconscious reactions. Because we conduct this testing before the campaign goes live, we can make quick, cost-effective adjustments to the creative, improving performance before launch. Tackling insights and optimizations early delivers more value than relying on post-campaign results, which often come too late to act on.

Additionally, Nexxen offers “Creative Engaged Audience” data, identifying viewers who are not just watching the ad, but also showing greater attention, better brand recall and stronger purchase intent. Ignoring this segment means missing a key opportunity to connect with the most engaged part of an audience. What’s interesting is that this highly engaged group may not always align with the intended target audience. For example, a campaign aimed at parents might find that younger professionals are more engaged with the ad. Once these engaged groups are identified, we leverage tools like Nexxen Discovery to further understand their interests and content consumption, enabling the creation of more refined audience or contextual segments for the campaign. The key takeaway? These highly engaged viewers are more likely to take action – they click more, buy more and interact with the brand more than others. By targeting them with the creative that resonates most, based on pre-flight testing, a stronger connection is built between the ad and the audience, leading to better campaign performance.
And finally, what’s next for creative optimization in digital video? What has you excited?

Current solutions are only scratching the surface. As more advanced tools like AI and machine learning come into play, we’re already seeing creative optimization become even more sophisticated. We’re working on ways to better predict what will resonate with different audience segments and enable more dynamic, personalized creative messaging in real time. The true game-changer will come as advertisers start to not only optimize their creative before launching, but also adjust it dynamically as the campaign unfolds. With the right data from pre-flight testing and mid-flight performance, we can continually optimize throughout the campaign, ensuring the creative stays as relevant at the end of the flight as it was at launch.

Life at Nexxen with Chris Wieland

In this installment of Life at Nexxen, Engineering Manager Chris Wieland shares what he enjoys most about his work, his favorite music venue in New York, and how curiosity led him to adtech.

Million Ads Machine: How Nexxen Runs an Accurate, High-Performance Ad Scoring Platform

In the fast-paced world of digital advertising, the ability to efficiently rank and score millions of ads in real time is paramount to success. As advertisers increasingly demand measurable results tailored to their unique campaign objectives, the challenge for Demand-Side Platforms (DSPs) intensifies. How do we strike the right balance between speed, accuracy, and scalability in a landscape marked by fluctuating demands and diverse goals?

Feature Engineering

Effective feature engineering is crucial for extracting actionable insights from complex, sparse datasets. At Nexxen, we distill thousands of candidate features into a few hundred high-impact, signal-based features, which is key to optimizing model performance and enhancing prediction accuracy. Our data scientists utilize a proprietary query engine to execute a variety of Spark and MapReduce jobs, deconstructing data across multiple dimensions. We’ve also developed a framework that detects feature discrepancies, improving training data quality. This system combines automated pipelines with on-demand SparkSQL queries, allowing for comparisons between training data and incoming ad requests and ensuring that the models remain aligned with the latest data distributions.

Ad Scoring and Ranking Using Machine-Learning Models

Before scoring and ranking, a series of services handle pre-processing steps like filtering and discarding based on specific rules. These services leverage in-memory caches and distributed key-value stores for fast retrieval of metadata from relational databases and object stores. These lookups occur in milliseconds, which is crucial for real-time performance. When an impression request arrives, the scoring system uses deserialized trained models loaded into memory for immediate scoring and ranking. Requests are transformed into feature vectors, which are then scored using a Directed Acyclic Graph (DAG), where each machine learning model acts as a node in the DAG.
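As a rough illustration of that structure, each model can be registered as a graph node whose inputs include the scores of its upstream dependencies, with the execution order resolved topologically. The toy models, node names, and numbers below are invented for the sketch (the standard-library graphlib module stands in for whatever scheduler a production system would use):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each "model" is a function of the feature vector plus upstream scores.
def ctr_model(features, upstream):
    return 0.02 if features.get("video") else 0.01  # toy click-through rate

def cvr_model(features, upstream):
    return 0.10                                      # toy conversion rate

def cpa_bidder(features, upstream):
    # The CPA node depends on both the CTR and CVR nodes' outputs.
    return upstream["ctr"] * upstream["cvr"] * features["target_cpa"]

# Node -> set of predecessor nodes that must be scored first.
DAG = {"ctr": set(), "cvr": set(), "cpa": {"ctr", "cvr"}}
MODELS = {"ctr": ctr_model, "cvr": cvr_model, "cpa": cpa_bidder}

def score(features: dict) -> dict:
    """Run every model in dependency order, feeding upstream scores down."""
    scores: dict = {}
    for node in TopologicalSorter(DAG).static_order():
        scores[node] = MODELS[node](features, scores)
    return scores

result = score({"video": True, "target_cpa": 5.0})
# cpa score = ctr * cvr * target_cpa = 0.02 * 0.10 * 5.0 = 0.01
print(result)
```

Expressing the pipeline as a DAG rather than a fixed sequence means independent nodes (here, CTR and CVR) can in principle be evaluated in parallel, and swapping the terminal node is enough to optimize for a different KPI.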
The DAG structure allows for dependency-based execution, optimizing for various KPIs like CPA (Cost Per Action), CPC (Cost Per Click), or viewability. The complete bidding workflow – including selection, filtering, and scoring – occurs within a few milliseconds, enabling Nexxen to handle millions of requests per second while maintaining high throughput and minimal latency. Below is a high-level design of such a system:

A/B Testing and Custom Bidding Strategies

The scoring platform provides A/B testing through user-split methodologies to quantify campaign lift while minimizing the expenses commonly associated with control-group impressions. Distinct machine learning model versions can be assigned unique budget caps, facilitating performance benchmarking and adaptive budget allocation. Advertisers can further optimize their bidding algorithms by applying bid multipliers across various targeting vectors, providing increased flexibility to maximize campaign effectiveness.

Observability

Our observability infrastructure is divided into two core components:

1. Model Generation and Training: Tracks the success rates of data collection, training, and model distribution.
2. Model Performance: Monitors real-time performance metrics such as latency, throughput, and accuracy for each deployed model.

We leverage a time-series database to collect high-resolution metrics and generate dynamic dashboards. These dashboards allow us to separate signal from noise, providing insights into true performance anomalies.

Model Release and Versioning

Our CI/CD pipeline integrates GitLab and Jenkins for version control, build automation, and deployment. This setup enables seamless rollout of new machine learning models – or rollback to previous versions based on real-time performance metrics – ensuring both agility and reliability in model deployment.
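User-split A/B testing of the kind described above is commonly implemented with deterministic hashing, so a given user always lands in the same bucket for a given experiment. Here is a minimal sketch with hypothetical experiment and variant names (Nexxen's actual assignment logic is not described in this post):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: list[str], weights: list[int]) -> str:
    """Deterministically bucket a user into a weighted variant.
    Hashing user_id together with the experiment name keeps splits
    independent across experiments while staying stable per user."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % sum(weights)
    for variant, weight in zip(variants, weights):
        if bucket < weight:
            return variant
        bucket -= weight
    raise AssertionError("weights exhausted")  # unreachable by construction

# 90/10 split between the incumbent model and a challenger version.
variants, weights = ["control", "challenger"], [90, 10]
first = assign_variant("user-42", "bidder_model_v2", variants, weights)
# The same user in the same experiment always maps to the same variant.
print(first == assign_variant("user-42", "bidder_model_v2", variants, weights))  # True
```

Because assignment needs no stored state, a scoring service can compute it inline on every impression request, and each variant can carry its own model version and budget cap as the post describes.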
Looking Ahead

As Nexxen looks to the future, a key focus will be leveraging larger and more sophisticated AI models to tackle the challenges of AI-driven ad fraud, as well as navigating the potential for AI-powered algorithms to perpetuate biases present in training data – which can lead to discriminatory ad targeting and reinforce existing inequalities. By continuously refining its technology and methodologies, Nexxen is committed to developing strategies – and further expanding customization on our scoring platform – that address these issues while ensuring efficient predictions and maintaining high performance.

From Insight to Activation: Nexxen’s Formula for Personalization in a Privacy-Conscious World

Conway’s Law, a principle coined by computer programmer Melvin Conway in 1967, states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” In simple terms, the way teams are structured within a company directly influences the architecture of the software they build. If teams are siloed or fragmented, the software systems they create will reflect that disjointedness; conversely, collaborative teams tend to build more cohesive systems. This is best explained via the following simplistic illustration:

At Nexxen, we’ve witnessed this law in action many times, in both positive and challenging ways. Below are a few real-world examples where Conway’s Law was applied, intentionally or otherwise, and how it shaped our technical architecture and team structure.

The Monolith

Several years ago, Nexxen acquired a successful ad-tech firm that had built an impressive and feature-rich platform. However, it came with one significant architectural challenge: it was built as a monolith – a single, unified codebase where all functionality is tightly coupled. While such systems may be easier to develop in the early stages, they come with several long-term downsides. These applications usually suffer from slower development and scaling limitations due to the butterfly effect, where, for example, changing one line of code in the UI may impact the data-science codebase. It was clear we needed to break down this monolith and transition to a distributed architecture with microservices. Distributed services allow different components of the system (e.g., runtime, front-end, data) to operate independently, making the system easier to scale, maintain, and deploy. To make this transition, we reorganized the original monolithic development teams into several smaller groups, each focused on a specific domain – such as runtime, front-end, and data.
These new teams, now more autonomous, were incentivized to take ownership of their respective areas, leading to the natural evolution of independent services. Each group began decoupling its domain from the monolith and building it as a standalone service. As Conway’s Law dictates, as our communication structure shifted, the software followed, evolving from a monolithic architecture into distributed microservices.

Distributed Teams Lead to Misalignment

Not all outcomes of the newly distributed team structure were purely positive. As teams became more independent, we noticed an increase in the rate of production issues. The issue wasn’t with the teams’ capabilities, but rather with a lack of alignment between them. As each team focused on its own service, communication across teams diminished, and this misalignment led to inconsistencies. Because the teams were operating in silos, the systems they built reflected that disconnection. The architecture had become fragmented, with services not always communicating well or adhering to shared practices. Wary of recreating the monolith, we introduced a more structured communication framework while leaving the teams independent. For example, we implemented a rigid Slack structure, ensuring all teams had dedicated channels for cross-team collaboration, changes, and feedback, and we created clear guidelines for communicating changes early and soliciting feedback across teams. We also enforced regular cross-team meetings and a shared response system, which improved communication and led to faster resolution times and better alignment across services. By improving the communication structure, we were able to reduce production issues and bring the architecture into better alignment with the business’s needs.

Freedom vs. Alignment

Another example of Conway’s Law at play can be seen in our front-end development teams. Nexxen operates several independent product lines, each with its own front-end application.
Each product line is supported by its own development team, leading to a natural separation in how teams approach their work. Over time, this autonomy led to diverse tool choices – teams were using different logging frameworks, testing tools, monitoring solutions, and even CI/CD pipelines. There are clear advantages to allowing teams to choose their own tools, such as increased ownership and innovation, but it raised the challenge of inconsistent practices. Each team’s differing tool choices made it harder to maintain, monitor, and support these applications at a company-wide level. To address this, we reorganized the teams under a single department, while maintaining the independence of each team’s product line. This shared management layer ensured alignment on core tools and protocols (logging, testing, monitoring, etc.) while preserving the benefits of autonomy where it made sense. This approach allowed us to standardize critical infrastructure while giving teams the freedom to innovate within their domains.

Conclusion

Conway’s Law has been a guiding principle at Nexxen, sometimes applied intentionally and other times revealed through experience. Whether breaking down a monolith, dealing with the challenges of distributed teams, or balancing autonomy with alignment, we’ve learned that organizational structure and communication are just as critical to system design as the technologies we choose. By being mindful of how teams interact, we’ve been able to shape our architecture to better serve the business – and ultimately, our customers.

Life at Nexxen with Dominik Weber

For this installment, we spoke with Dominik Weber, Senior Creative Project and Insights Manager on Nexxen’s Studio Insights team. Dominik breaks down the process of generating insights for clients, the skill that helps the most in managing multiple projects, and what he enjoys most about living in London.

How the Nexxen SSP Scales: Our Technical Approach to High-Performance Systems
