Andy Nightingale, VP of Product Marketing at Arteris – Interview Series


Andy Nightingale, VP of Product Marketing at Arteris, is a seasoned global business leader with a diverse background in engineering and product marketing. He’s a Chartered Member of the British Computer Society and the Chartered Institute of Marketing, and has over 35 years of experience in the high-tech industry.

Throughout his career, Andy has held a range of roles, including engineering and product management positions at Arm, where he spent 23 years. In his current role as VP of product marketing at Arteris, Andy oversees the Magillem system-on-chip deployment tooling and FlexNoC and Ncore network-on-chip products.

Arteris is a catalyst for system-on-chip (SoC) innovation as the leading provider of semiconductor system IP for the acceleration of SoC development. Arteris Network-on-Chip (NoC) interconnect intellectual property (IP) and SoC integration technology enable higher product performance with lower power consumption and faster time to market, delivering proven flexibility and better economics for system and semiconductor companies, so innovative brands are free to dream up what comes next.

With your extensive experience at Arm and now leading product marketing at Arteris, how has your perspective on the evolution of semiconductor IP and interconnect technologies changed over the years? What key trends excite you the most today?

It’s been an extraordinary journey—from my early days writing test benches for ASICs at Arm to helping shape product strategy at Arteris, where we’re at the forefront of interconnect IP innovation. Back in 1999, system complexity was rapidly accelerating, but the focus was still primarily on processor performance and essential SoC integration. Verification methodologies were evolving, but interconnect was often seen as fixed infrastructure—necessary but not strategic.

Fast-forward to today, and interconnect IP has become a critical enabler of SoC (system-on-chip) scalability, power efficiency, and AI/ML performance. The rise of chiplets, domain-specific accelerators, and multi-die architectures has placed immense pressure on interconnect technologies to become more adaptive, intelligent, and aware of both physical constraints and software behavior.

One of the most exciting trends I see is the convergence of AI and interconnect design. At Arteris, we’re exploring how machine learning can optimize NoC (Network-on-Chip) topologies, intelligently route data traffic, and even anticipate congestion to improve real-time performance. This is not just about speed—it’s about making systems smarter and more responsive.
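To make the congestion-aware routing idea concrete, here is a toy sketch (my own illustration, not Arteris’s algorithm or FlexGen code) of how a router can steer traffic around hot links: a shortest-path search over a small mesh whose edge weights grow with observed link load.

```python
import heapq

def route(mesh_dim, load, src, dst, alpha=4.0):
    """Dijkstra over a mesh_dim x mesh_dim NoC mesh.

    Edge cost = 1 + alpha * load, so heavily loaded links
    become expensive and traffic detours around them.
    """
    def neighbors(node):
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < mesh_dim and 0 <= ny < mesh_dim:
                yield (nx, ny)

    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nb in neighbors(node):
            cost = 1.0 + alpha * load.get((node, nb), 0.0)
            nd = d + cost
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(pq, (nd, nb))

    # Reconstruct the chosen path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A congested link (0,0)->(0,1) pushes traffic onto a longer detour.
load = {((0, 0), (0, 1)): 0.9}
print(route(3, load, (0, 0), (0, 2)))
```

Real NoC routing works under far tighter constraints (deadlock freedom, wormhole switching, fixed routing tables), but the cost-driven trade-off between path length and congestion is the same basic principle.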

What excites me is how semiconductor IP is becoming more accessible to AI innovators. With high-level SoC configuration IP and abstraction layers, startups in automotive, robotics, and edge AI can now leverage advanced interconnect architectures without needing a deep background in RTL design. That democratization of capability is enormous.

Another key shift is the role of virtual prototyping and system-level modeling. Having worked on ESL (Electronic System Level) tools early in my career, it’s rewarding to see those methodologies now enabling early AI workload evaluation, performance prediction, and architectural trade-offs long before silicon is taped out.

Ultimately, the future of AI depends on how efficiently we move data—not just how fast we process it. That’s why I believe the evolution of interconnect IP is central to the next generation of intelligent systems.

Arteris’ FlexGen leverages AI-driven automation and machine learning to generate NoC (Network-on-Chip) topologies. How do you see AI’s role evolving in chip design over the next five years?

AI is fundamentally transforming chip design, and over the next five years, its role will only deepen—from productivity aid to intelligent design partner. At Arteris, we’re already living that future with FlexGen, where AI, formal methods, and machine learning are central to automating Network-on-Chip (NoC) topology optimization and SoC integration workflows.

What sets FlexGen apart is its blend of ML algorithms, combined to initialize floorplans from images, generate topologies, configure clocks, reduce clock domain crossings, and optimize the connectivity topology along with its placement and routing bandwidth, streamlining communication between IP blocks. Moreover, all of this is done deterministically, meaning results can be replicated and incremental adjustments made, enabling predictable, best-in-class results for use cases ranging from AI assistance for an expert SoC designer to creating the right NoC for a novice.

Over the next five years, AI’s role in chip design will shift from assisting human designers to co-designing and co-optimizing with them—learning from every iteration, navigating design complexity in real-time, and ultimately accelerating the delivery of AI-ready chips. We see AI not just making chips faster but making faster chips smarter.

The semiconductor industry is witnessing rapid innovation with AI, HPC, and multi-die architectures. What are the biggest challenges that NoC design needs to solve to keep up with these advancements?

As AI, HPC, and multi-die architectures drive unprecedented complexity, the biggest challenge for NoC design is scalability without sacrificing power, performance, or time to market. Today’s chips feature tens to hundreds of IP blocks, each with different bandwidth, latency, and power needs. Managing this diversity—across multiple dies, voltage domains, and clock domains—requires NoC solutions that go far beyond manual methods.

NoC solution technologies such as FlexGen help address key bottlenecks: minimizing wire length, maximizing bandwidth, aligning with physical constraints, and doing everything with speed and repeatability.

The future of NoC must also be automation-first and AI-enabled, with tools that can adapt to evolving floorplans, chiplet-based architectures, and late-stage changes without requiring complete rework. This is the only way to keep pace with the compressed design cycles and heterogeneous demands of modern SoCs and to ensure efficient, scalable connectivity at the heart of next-gen semiconductors.

The AI chipset market is projected to grow significantly. How does Arteris position itself to support the increasing demands of AI workloads, and what unique advantages does FlexGen offer in this space?

Arteris is not only uniquely positioned to support the AI chiplet market but has already been doing so for years, delivering automated, scalable Network-on-Chip (NoC) IP solutions purpose-built for the demands of AI workloads, including generative AI and large language model (LLM) compute, and supporting high bandwidth, low latency, and power efficiency across increasingly complex architectures. FlexGen, as the newest addition to the Arteris NoC IP lineup, will play an even more significant role in rapidly creating optimal topologies best suited to different large-scale, heterogeneous SoCs.

FlexGen offers incremental design, partial completion mode, and advanced pathfinding to dynamically optimize NoC configurations without complete redesigns—critical for AI chips that evolve throughout development.

Our customers are already building Arteris technology into multi-die and chiplet-based systems, efficiently routing traffic while respecting floorplan and clock domain constraints on each chiplet. Non-coherent multi-die connectivity is supported over industry-standard interfaces provided by third-party controllers.

As AI chip complexity grows, so does the need for automation, adaptability, and speed. FlexGen delivers all three, helping teams build smarter interconnects—faster—so they can focus on what matters: advancing AI performance at scale.

With the rise of RISC-V and custom silicon for AI, how does Arteris’ approach to NoC design differ from traditional interconnect architectures?

Traditional interconnect architectures were primarily built for fixed-function designs, but today’s RISC-V and custom AI silicon demand a more configurable, scalable, and automated approach than a modified one-size-fits-all solution. That’s where Arteris stands apart. Our NoC IP, especially with FlexGen, is designed to adapt to the diversity and modularity of modern SoCs, including custom cores, accelerators, and chiplets, as mentioned above.

FlexGen enables designers to generate and optimize topologies that reflect unique workload characteristics, whether low-latency paths for AI inference or high-bandwidth routes for shared memory across RISC-V clusters. Unlike static interconnects, FlexGen’s algorithms tailor each NoC to the chip’s architecture across clock domains, voltage islands, and floorplan constraints.

As a result, Arteris enables teams building custom silicon to move faster, reduce risk, and get the most from their highly differentiated designs—something traditional interconnects weren’t built to handle.

FlexGen claims a 10x improvement in design iteration speed. Can you walk us through how this automation reduces complexity and accelerates time-to-market for System-on-Chip (SoC) designers?

FlexGen delivers a 10x improvement in design iteration speed by automating some of the most complex and time-consuming tasks in NoC design. Instead of manually configuring topologies, resolving clock domains, or optimizing routes, designers use FlexGen’s physically aware, AI-powered engine to handle these in hours (or less)—tasks that traditionally took weeks.

As mentioned above, partial completion mode can automatically finish even partially completed designs, preserving manual intent while accelerating timing closure.

The result is a faster, more accurate, and easier-to-iterate design flow, enabling SoC teams to explore more architectural options, respond to late-stage changes, and get to market faster—with higher-quality results and less risk of costly rework.

One of FlexGen’s standout features is wire length reduction, which improves power efficiency. How does this impact overall chip performance, particularly in power-sensitive applications like edge AI and mobile computing?

Wire length directly impacts power consumption, latency, and overall chip efficiency—both in cloud AI / HPC applications that use the more advanced nodes and edge AI inference applications where every milliwatt matters. FlexGen’s ability to automatically minimize wire length—often up to 30%—means shorter data paths, reduced capacitance, and less dynamic power draw.
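As a rough back-of-the-envelope check (hypothetical numbers, not Arteris data), dynamic switching power follows P = α·C·V²·f, so a 30% reduction in switched wire capacitance from shorter wires yields a proportional drop in interconnect dynamic power:

```python
def dynamic_power(alpha, cap_farads, v_volts, freq_hz):
    """Dynamic switching power: P = alpha * C * V^2 * f."""
    return alpha * cap_farads * v_volts**2 * freq_hz

# Hypothetical NoC wire budget: 2 nF of switched capacitance,
# 0.8 V supply, 1 GHz clock, 25% activity factor.
baseline = dynamic_power(0.25, 2e-9, 0.8, 1e9)
shortened = dynamic_power(0.25, 2e-9 * 0.7, 0.8, 1e9)  # 30% less wire cap
print(f"baseline {baseline * 1e3:.0f} mW, shortened {shortened * 1e3:.0f} mW")
# With these assumed values: 320 mW drops to 224 mW.
```

The absolute numbers here are invented for illustration; the point is that because capacitance enters the equation linearly, every percent of wire capacitance saved is a percent of dynamic interconnect power saved, independent of voltage and frequency.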

In real-world terms, this translates to lower heat generation, longer battery life, and better performance-per-watt, all of which are critical for AI workloads at the edge, in mobile environments, and in the cloud, where they directly impact total cost of ownership (TCO). By optimizing the NoC topology with AI-guided placement and routing, FlexGen ensures that performance targets are met without sacrificing power efficiency—making it an ideal fit for today’s and tomorrow’s energy-sensitive designs.

Arteris has partnered with leading semiconductor companies in AI data centers, automotive, consumer, communications, and industrial electronics. Can you share insights on how FlexGen is being adopted across these industries?

Arteris NoC IP sees strong adoption across all of these markets, particularly for high-end, more advanced chiplets and SoCs, because it addresses each sector’s top challenges of performance, power efficiency, and design complexity while preserving core functionality and respecting area constraints.

In automotive, for example, companies like Dream Chip use FlexGen to accelerate work at the intersection of AI and safety for autonomous driving, leveraging Arteris for their ADAS SoC design while meeting strict power and safety constraints. In data centers, FlexGen’s smart NoC optimization and generation help manage massive bandwidth demands and scalability, especially for AI training and acceleration workloads.

FlexGen provides a fast, repeatable path to optimized NoC architectures for industrial electronics, where design cycles are tight and product longevity is key. Customers value its incremental design flow, AI-based optimization, and ability to adapt quickly to evolving requirements, making FlexGen a cornerstone for next-generation SoC development.

The semiconductor supply chain has faced significant disruptions in recent years. How is Arteris adapting its strategy to ensure Network-on-Chip (NoC) solutions remain accessible and scalable despite these challenges?

Arteris responds to supply chain disruptions by doubling down on what makes our NoC solutions resilient and scalable: automation, flexibility, and ecosystem compatibility.

FlexGen helps customers design faster and stay agile in adjusting to changing silicon availability, node shifts, or packaging strategies, whether they are doing derivative designs or creating new interconnects from scratch.

We also support customers with different process nodes, IP vendors, and design environments, ensuring customers can deploy Arteris solutions regardless of their foundry, EDA tools, or SoC architecture.

By reducing dependency on any one part of the supply chain and enabling faster, iterative design, we’re helping customers de-risk their designs and stay on schedule, even in uncertain times.

Looking ahead, what are the biggest shifts you anticipate in SoC development, and how is Arteris preparing for them?

One of the most significant shifts in SoC development is the move toward heterogeneous architectures, chiplet-based designs, and AI-centric workloads. These trends demand far more flexible, scalable, and intelligent interconnects—something traditional methods can’t keep up with.

Arteris is preparing by investing in AI-driven automation, as seen in FlexGen, and expanding support for multi-die systems, complex clock/power domains, and late-stage floorplan changes. We’re also focused on enabling incremental design, faster iteration, and seamless IP integration—so our customers can keep pace with shrinking development cycles and rising complexity.

Our goal is to ensure SoC (and chiplet) teams stay agile, whether they’re building for edge AI, cloud AI, or anything in between, all while providing the best power, performance, and area (PPA) no matter the complexity of the design, XPU architecture, and foundry node used.

Thank you for the great interview. Readers who wish to learn more should visit Arteris.


