Tintri 'redefines' Quality of Service

New patent-pending technologies let users visualise contention, recover snapshots from any point in time, and manage both application recovery and storage policies at scale.


Tintri has announced three product updates that reset industry conventions for QoS and address long-standing performance and policy pains of enterprise data centres and service providers. The new capabilities include:

· Tintri OS 3.2. Administrators can now allocate exact maximum and minimum IOPS to each individual VM. Unlike conventional QoS, which requires administrators to predict the right values, Tintri provides visual guidance on the QoS values to specify, removing the guesswork. The patent-pending VM-level QoS is paired with powerful contention visualisation in the UI, so administrators can see the immediate impact of throttle changes on VM-level latency instead of waiting for end-user feedback. The visualisation spans the entire infrastructure, covering latency stemming from host, network and storage contention as well as the QoS throttle itself.
· Tintri SyncVM. This new product, based on patent-pending technology, allows the user to move back and forth between snapshots of an individual VM without losing other snapshots or performance history. Administrators can also use this capability to update hundreds of “child” VMs from a refreshed “master” VM without physically moving data or reconfiguring the VM or storage. They can even automate the process with Tintri PowerShell or REST APIs.
· Tintri Global Center 2.0. Enterprises and service providers can now monitor and manage more than 100,000 VMs from a single pane of glass. They can manage dynamic collections of VMs based on group definitions and policies. Groups can span VMstores, hypervisor types and geographies.
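To illustrate the kind of automation the REST APIs mentioned above enable, here is a minimal Python sketch that builds and prepares a per-VM QoS update. The endpoint path, field names (`minNormalizedIops`, `maxNormalizedIops`) and session-cookie scheme are illustrative assumptions, not a verified description of Tintri's actual API; consult the official Tintri REST API documentation for the real contract.

```python
import json
from urllib import request

def build_qos_payload(min_iops: int, max_iops: int) -> dict:
    """Build a per-VM QoS payload. Field names are hypothetical examples."""
    if min_iops < 0 or max_iops < min_iops:
        raise ValueError("require 0 <= min_iops <= max_iops")
    return {"minNormalizedIops": min_iops, "maxNormalizedIops": max_iops}

def prepare_vm_qos_request(base_url: str, session_id: str, vm_id: str,
                           min_iops: int, max_iops: int) -> request.Request:
    """Prepare (but do not send) a PUT applying QoS limits to one VM.

    The URL path below is an assumption for illustration only.
    """
    return request.Request(
        url=f"{base_url}/v310/vm/{vm_id}/qosConfig",  # hypothetical path
        data=json.dumps(build_qos_payload(min_iops, max_iops)).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Cookie": f"JSESSIONID={session_id}"},
    )
    # A caller would then send it with request.urlopen(...)
```

The same pattern (validate locally, then issue one REST call per VM) is how hundreds of "child" VMs could be updated in a loop, which is presumably what the PowerShell toolkit wraps.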


While enterprise customers can apply the new Tintri capabilities for VM-level performance isolation, service providers can more easily offer differentiated tiers of storage service and manage 100,000 VMs across multiple data centres.

“As a cloud service provider, we need to provide different tiers of storage to our customers based on their performance requirements and budget,” said Dan Timko, CTO and Co-Founder at Cirrity. “Before, the easiest way to do this was to have three separate storage platforms with different characteristics, which was very inefficient. With Tintri, we can now apply per-VM QoS policies that allow us to mix workloads from different customers with different service levels on the same storage without any ongoing management headaches. And with Tintri Global Center 2.0, we can manage over 100,000 virtual machines from multiple tenants all from a single pane of glass.”

“Administrators need to be able to perform storage operations—QoS, snapshots, clones, replication, etc., and see storage metrics such as IOPS, latency, throughput, flash hit ratios and more—at a VM level,” said Eric Burgener, Research Director, Storage at IDC. “Starting with VM-level data management in the first product they shipped in 2011, Tintri has continued to add more VM-level capabilities that now include VM-level QoS. IDC sees VM-level management as the wave of the future, not only to improve the efficiency of storage operations but also to make storage management more intuitive for the IT generalists that are increasingly managing storage in virtual environments.”
