F5 and NVIDIA expand collaboration on AI infrastructure

F5 and NVIDIA join forces to strengthen AI infrastructure by increasing token throughput, reducing latency, and enabling secure multi-tenant platforms.

  • Tuesday, 24th March 2026, by Sophie Milburn

F5, a provider of application and API delivery and security solutions, has announced expanded capabilities in collaboration with NVIDIA to enhance AI inference infrastructures. This collaboration integrates F5 BIG-IP Next for Kubernetes with NVIDIA BlueField-3 DPUs, creating a telemetry-aware infrastructure layer. The integration is designed to increase token throughput through improved GPU utilisation, reduce latency, and support secure multi-tenant AI platforms at scale.

In AI systems, tokens are measurable units of AI output, such as words or data fragments generated during inference. The production rate of these tokens affects user experience, infrastructure efficiency, and revenue per accelerator. As businesses and GPU-as-a-Service (GPUaaS) providers adopt AI, infrastructure efficiency is an important consideration. The solution from F5 and NVIDIA aims to address these factors, including token throughput and cost per token.
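To make the economics concrete, cost per token follows directly from accelerator cost and sustained throughput. The figures below are illustrative only, not vendor benchmarks from F5 or NVIDIA:

```python
# Illustrative cost-per-token arithmetic (hypothetical figures,
# not vendor benchmarks).
def cost_per_million_tokens(gpu_hourly_cost: float, tokens_per_second: float) -> float:
    """Cost to generate one million tokens on a single accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_cost / tokens_per_hour * 1_000_000

# Example: a $4/hour accelerator sustaining 2,000 tokens/s
print(round(cost_per_million_tokens(4.0, 2000), 4))  # → 0.5556
```

The relationship is linear: doubling sustained throughput on the same hardware halves the cost per token, which is why GPU utilisation is the lever both companies are targeting.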

The shift from application-centric to agent-driven AI workflows requires architectural approaches that improve token throughput and reduce costs. BIG-IP Next for Kubernetes now uses NVIDIA NIM statistics and GPU telemetry to make inference routing decisions, matching workloads with appropriate accelerators in real time to improve utilisation and reduce latency.
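In spirit, telemetry-aware routing means scoring each backend by its current load and sending the request to the one with the most headroom. The sketch below is a minimal illustration of that idea; the field names and weighting are hypothetical, not the actual NVIDIA NIM or BIG-IP telemetry schema:

```python
# Hypothetical sketch of telemetry-aware routing: pick the backend
# with the most headroom. Field names are illustrative, not the
# actual NVIDIA NIM or BIG-IP telemetry schema.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    gpu_utilisation: float   # 0.0-1.0, from GPU telemetry
    queue_depth: int         # pending inference requests

def route(backends: list[Backend]) -> Backend:
    """Return the backend with the lowest combined load score."""
    def score(b: Backend) -> float:
        # Weight queue depth lightly relative to utilisation (arbitrary weight)
        return b.gpu_utilisation + 0.1 * b.queue_depth
    return min(backends, key=score)

pool = [
    Backend("gpu-a", gpu_utilisation=0.9, queue_depth=4),
    Backend("gpu-b", gpu_utilisation=0.4, queue_depth=2),
]
print(route(pool).name)  # → gpu-b
```

A production system would refresh telemetry continuously and account for model placement, tenancy, and KV-cache state; the point here is only that routing becomes a function of live accelerator metrics rather than round-robin.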

Tests validated by The Tolly Group demonstrated increased token throughput, faster time to first token (TTFT), and reduced request latency. Offloading functions such as networking and AI-aware load balancing to NVIDIA BlueField-3 DPUs allows host CPU capacity to be preserved, enabling GPUs to perform high-throughput inference. This increases token yield and reduces costs without requiring modifications to AI models.
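TTFT is simply the delay between issuing a request and receiving the first streamed token. A minimal way to measure it client-side, assuming a generic streaming interface (the `fake_stream` helper is a stand-in for a real inference client, not an F5 or NVIDIA API):

```python
# Sketch of measuring time to first token (TTFT) and throughput for
# a streaming inference response. stream_tokens is any iterable of
# tokens; fake_stream below simulates one for illustration.
import time

def measure_ttft(stream_tokens):
    """Return (ttft_seconds, tokens_per_second) for a token stream."""
    start = time.monotonic()
    first = None
    count = 0
    for _ in stream_tokens:
        count += 1
        if first is None:
            first = time.monotonic() - start  # time to first token
    total = time.monotonic() - start
    return first, (count / total if total > 0 else 0.0)

def fake_stream(n=5, delay=0.01):
    """Simulated token stream: n tokens, `delay` seconds apart."""
    for _ in range(n):
        time.sleep(delay)
        yield "tok"

ttft, tps = measure_ttft(fake_stream())
print(ttft > 0, tps > 0)  # → True True
```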

AI applications require traffic control beyond traditional load balancing. BIG-IP Next for Kubernetes now supports inference-aware routing for agent-driven AI tasks. Integration with the NVIDIA DOCA Platform Framework facilitates deployment and management of NVIDIA BlueField DPUs. These capabilities aim to allow organisations to share GPU infrastructure securely across units or clients while maintaining performance and service predictability.

The collaboration between F5 and NVIDIA aims to provide tools to monitor token consumption, improve traffic flow, and optimise infrastructure utilisation. This approach seeks to allow organisations to achieve greater efficiency from GPUs and better align resources with AI workloads.

By combining NVIDIA infrastructure telemetry and DPU acceleration with F5 operational intelligence, enterprises can adapt AI infrastructures for more efficient, multi-tenant, and agent-driven workloads.
