What to keep in mind when monitoring services containing SaaS apps

By Karthik Ramchandran, Product Marketing Specialist, SolarWinds.


According to Forrester, the SaaS application and software market is expected to reach $75 billion in 2014. When you think about it, this makes sense: today’s users spend most of their time in “smart applications” such as Office 365 or Salesforce, and the user base accessing these applications is growing rapidly. With this in mind, network admins have to work hard to maintain the same level of performance for the SaaS-based applications their employees use as they do for applications hosted on their own servers.

Monitoring the performance of these applications makes a huge difference as more and more users adopt SaaS and cloud-based applications. Monitoring server load, user experience, and bottlenecks is crucial to optimising a system’s overall performance, regardless of whether the application is hosted on-premises, in a public cloud, or in a hybrid arrangement. If your organisation uses several SaaS-based applications, network admins will need to take proactive measures to ensure those applications don’t suffer downtime during crucial business hours.

In the wake of the SaaS revolution, there are a number of key points to keep in mind when ensuring that the SaaS apps your employees use perform as well as those on your existing network.

Monitor overall user experience

Since users will be accessing their preferred SaaS applications extensively, you should monitor overall user experience and users’ interaction with the application. This allows you to analyse performance from the end user’s perspective. Slow page-load times or image-matching failures can be the first indication of a problem with the application. By drilling deeper, you can determine whether the problem is tied to a specific page or location. Ultimately, monitoring user experience allows you to improve and optimise application performance, which in turn improves conversion rates.
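As a rough sketch of what such a synthetic user-experience probe might look like, the following is a minimal illustration; the threshold value and function names are assumptions for this example, not part of any specific monitoring product:

```python
import time
import urllib.request

# Illustrative threshold; tune it to your own application's SLA.
PAGE_LOAD_THRESHOLD_S = 2.0

def evaluate_load(elapsed_s, threshold_s=PAGE_LOAD_THRESHOLD_S):
    """Classify a measured page-load time against the threshold."""
    return "slow" if elapsed_s > threshold_s else "ok"

def check_page_load(url, timeout=10):
    """Fetch a page once and report status, size, and load time."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
        status = resp.status
    elapsed = time.monotonic() - start
    return {
        "status": status,
        "bytes": len(body),
        "load_time_s": round(elapsed, 3),
        "verdict": evaluate_load(elapsed),
    }
```

A real monitoring setup would run probes like this on a schedule, from several locations, and record the results so trends and per-location differences become visible.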

You can monitor user experience in two ways: from the perspective of the service provider, and from the perspective of the service consumer.

Keep in mind the needs of the service providers

1. User experience: Service providers are likely to have SLAs with end users, and they need to demonstrate that they are meeting uptime and other SLA commitments. For this, you need complete visibility into, and control over, the website and application testing environment. This allows you to test and apply account limitations, restrict access for users within a given network, and improve the accuracy and performance of applications and websites, offering a more streamlined experience for customers in different locations.

2. Infrastructure: There are many factors that can cause a service failure, therefore all aspects of the infrastructure must be monitored. These aspects include:

· Infrastructure applications: Email servers, directory services, and authentication servers all have to be monitored to avoid downtime. For example, Office 365 is an essential application that many organisations depend on; monitoring its critical metrics helps ensure optimal email performance and high availability during peak business hours.

· Physical and Virtual Servers: The physical and virtual servers where SaaS and other cloud applications are hosted will also have to be monitored. Service providers will have to monitor server hardware metrics such as temperature, fan speed, CPU load, and memory, and make sure there are no resource contention issues for virtual servers.

· Storage Performance: It’s very important that critical applications have a dedicated datastore with enough capacity. Inadequate storage can hurt application performance, especially when multiple applications depend on the same datastore.

· Network Performance: Network downtime is one of the main causes of application failure. Routers, switches, servers, and other devices all have to be continuously monitored for performance issues and high availability. Set up alerts based on baseline data so you’re notified of abnormal behaviour before a hardware failure occurs.
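The baseline-driven alerting idea can be sketched in a few lines. This is a minimal illustration only: the three-standard-deviation rule and the sample CPU figures are assumptions for the example, not a prescribed method:

```python
import statistics

def baseline_alert(history, current, k=3.0):
    """Return True when the current metric value deviates from its
    historical baseline (mean) by more than k standard deviations."""
    mean = statistics.fmean(history)
    spread = statistics.pstdev(history)
    return abs(current - mean) > k * spread

# Example: CPU load samples (percent) collected over previous hours.
cpu_history = [48, 50, 52, 49, 51, 50, 50, 49, 51, 50]
```

The point of using a baseline rather than a fixed threshold is that each device alerts on what is abnormal for that device, which surfaces degradation before it becomes an outage.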

3. Integration services (web services): The services you provide often depend on other SaaS providers or internal apps. Service providers have to recognise this and monitor the web services involved, such as those exchanging JSON or SOAP messages.

Web service failures can often be avoided by monitoring each service: checking its availability and latency, and validating the content returned by a query. This is achieved by continuously checking the overall health of the web services running across your servers.
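A minimal health check along these lines, probing a JSON endpoint for availability, latency, and expected content, might look as follows; the latency threshold and the idea of checking for a single expected key are simplifying assumptions for the sketch:

```python
import json
import time
import urllib.request

def validate_content(body_bytes, expected_key):
    """Content validation: response must be a JSON object containing expected_key."""
    try:
        payload = json.loads(body_bytes)
    except ValueError:
        return False
    return isinstance(payload, dict) and expected_key in payload

def check_web_service(url, expected_key, latency_threshold_s=1.0, timeout=5):
    """Probe a web service once: availability, latency, and content."""
    result = {"available": False, "latency_s": None, "content_ok": False}
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            result["available"] = resp.status == 200
    except OSError:
        return result  # unreachable or timed out
    result["latency_s"] = round(time.monotonic() - start, 3)
    result["within_latency"] = result["latency_s"] <= latency_threshold_s
    result["content_ok"] = validate_content(body, expected_key)
    return result
```

Checking the response body, not just the HTTP status, matters because a service can return 200 OK while serving an error page or truncated data.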

Keep in mind the needs of the service consumers

1. User experience: If part of your web application consumes web services, user experience can be the first indicator of a problem. From the user’s perspective, it’s essential that all front-end components, such as CSS, JavaScript, HTML, images, and third-party plugins, perform within acceptable thresholds. To ensure a consistent user experience, your websites and web applications will have to be tested periodically for responsiveness, page loading, and so on, in order to prevent application rollbacks.


Another way to do this is to monitor web transactions. You can time each step in a transaction, compare it against the historical availability and latency for that step, and quickly identify where the bottleneck lies.
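One way to sketch this kind of step-by-step transaction monitoring is with a small recorder; the class name and the step names in the example are illustrative, not taken from any particular tool:

```python
import time
from contextlib import contextmanager

class TransactionMonitor:
    """Record how long each step of a multi-step web transaction takes,
    e.g. login -> search -> add to basket -> checkout."""

    def __init__(self):
        self.timings = {}

    @contextmanager
    def step(self, name):
        # Time the wrapped block and store the duration under the step name.
        start = time.monotonic()
        try:
            yield
        finally:
            self.timings[name] = time.monotonic() - start

    def bottleneck(self):
        """Return the name of the slowest recorded step."""
        return max(self.timings, key=self.timings.get)
```

Wrapping each step of a scripted transaction in `monitor.step("name")` and comparing the recorded durations against history makes the slow step stand out immediately.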

2. Web service failures: Monitoring web services can help identify a failure in communication. For example, when you’re buying something online and the site hangs or freezes, it is often due to an unresponsive web service causing application failures. Such issues can happen when applications rely on external web services. In such cases it’s good to start by troubleshooting those web services, to identify whether the fault really lies in your own application or in the external service.

Keeping the above points in mind will prove essential when monitoring SaaS applications. These key considerations help IT teams take proactive measures to ensure that applications don’t suffer downtime during crucial business hours. At the same time, each application will be optimised as a result of continuous monitoring, thus improving overall efficiency.
 
