Gcore unveils Inference at the Edge

Gcore has launched Gcore Inference at the Edge, a solution that provides ultra-low-latency experiences for AI applications. It enables the distributed deployment of pre-trained machine learning (ML) models to edge inference nodes for seamless, real-time inference.

Gcore Inference at the Edge empowers businesses across diverse industries—including automotive, manufacturing, retail, and technology—with cost-effective, scalable, and secure AI model deployment. Use cases such as generative AI, object recognition, real-time behavioural analysis, virtual assistants, and production monitoring can now be rapidly realised on a global scale.

Gcore Inference at the Edge runs on Gcore's global network of 180+ edge nodes, interconnected by low-latency smart routing. Each high-performance node sits at the edge of the network, placing servers close to end users. The nodes are equipped with NVIDIA L40S GPUs, a data-centre GPU built for AI inference workloads. When a user sends a request, an edge node routes it to the available inference region with the lowest latency, achieving a typical response time of under 30 ms.
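To illustrate the idea of latency-based region selection, here is a minimal sketch in Python. The region names, latency figures, and selection logic are assumptions for illustration only, not Gcore's actual smart-routing implementation, which is proprietary.

```python
# Hypothetical sketch of latency-based region selection; not Gcore's
# actual routing logic.
from dataclasses import dataclass

@dataclass
class InferenceRegion:
    name: str
    latency_ms: float   # measured round-trip time from the edge node
    available: bool     # whether the region has free GPU capacity

def pick_region(regions: list[InferenceRegion]) -> InferenceRegion:
    """Return the available region with the lowest measured latency."""
    candidates = [r for r in regions if r.available]
    if not candidates:
        raise RuntimeError("no inference region currently available")
    return min(candidates, key=lambda r: r.latency_ms)

regions = [
    InferenceRegion("frankfurt", 12.4, True),
    InferenceRegion("amsterdam", 9.8, False),   # at capacity, skipped
    InferenceRegion("luxembourg", 18.1, True),
]
print(pick_region(regions).name)  # -> "frankfurt"
```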

The new solution supports a wide range of foundation ML models and custom models. Available open-source foundation models in the Gcore ML Model Hub include LLaMA-Pro-8B, Mistral 7B, and Stable Diffusion XL. Models can be selected and trained to suit any use case, then distributed globally to Gcore Inference at the Edge nodes. This addresses a significant challenge for development teams: AI models are typically served from the same servers they were trained on, which are rarely close to end users, resulting in poor inference performance.
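As a sketch of what calling a deployed model endpoint might look like, the snippet below posts a prompt over HTTPS. The endpoint URL, header scheme, and payload fields are hypothetical placeholders, not Gcore's documented API; consult the provider's documentation for the real contract.

```python
import requests

# Hypothetical endpoint and auth header for illustration only.
ENDPOINT = "https://example-model.inference.example.com/v1/completions"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Summarise edge inference in one sentence.",
        "max_tokens": 64,
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```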

Benefits of Gcore Inference at the Edge include:

· Cost-effective deployment: A flexible pricing structure ensures customers only pay for the resources they use.

· Inbuilt DDoS protection: ML endpoints are automatically protected from DDoS attacks through Gcore’s infrastructure.

· Outstanding data privacy and security: The solution features built-in compliance with GDPR, PCI DSS, and ISO/IEC 27001 standards.

· Model autoscaling: Autoscaling is available to handle load spikes, so a model is always ready to support peak demand and unexpected surges.

· Unlimited object storage: Scalable S3-compatible cloud storage that grows with evolving model needs.
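Because the storage is S3-compatible, standard S3 tooling can address it by pointing at a custom endpoint. Below is a minimal sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders, not real Gcore values.

```python
import boto3

# Any S3-compatible store can be addressed by pointing boto3 at a custom
# endpoint. The endpoint URL and bucket name below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-storage.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload trained model weights so edge nodes can pull them on demand.
s3.upload_file("model.safetensors", "my-models", "llm/model.safetensors")

# List what is stored in the bucket.
for obj in s3.list_objects_v2(Bucket="my-models").get("Contents", []):
    print(obj["Key"], obj["Size"])
```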

Andre Reitenbach, CEO at Gcore, comments: “Gcore Inference at the Edge empowers customers to focus on getting their machine learning models trained, rather than worrying about the costs, skills, and infrastructure required to deploy AI applications globally. At Gcore, we believe the edge is where the best performance and end-user experiences are achieved, and that is why we are continuously innovating to ensure every customer receives unparalleled scale and performance. Gcore Inference at the Edge delivers all the power with none of the headache, providing a modern, effective, and efficient AI inference experience.”
