Alert Fatigue

Why You’ve Got IT Monitoring and IT Alerting All Wrong

IT alerting and IT monitoring are not what they used to be. In years past, software releases were scheduled a few times per year, and often a single monitoring tool would watch the infrastructure, catch problems and spit out alerts. Sorry, but those days are gone. Nowadays, start-ups run on containers and microservices, with continuous integration and delivery. As such, monitoring can, and needs to, happen at multiple points along the pipeline.

If you are not taking the time to calibrate your systems to reduce the amount of noise and ensure effective alerting, then you’ve got monitoring and alerting all wrong. Don’t worry though. It’s not a death sentence – thankfully. There are clear methods for turning IT monitoring noise into actionable IT alerting.

Come on feel the noise

It’s not just a catchy line from Quiet Riot. ‘Come on feel the noise’ also encapsulates how many engineers in IT Ops experience monitoring. Because of the need to monitor multiple points in the stack, multiple monitoring tools have arisen. And because there are multiple monitoring tools, there is a lot of noise. Per Big Panda’s CTO:

The old “one tool to rule them all” approach no longer works. Instead, many enterprises are selecting the best tool for each part of their stack with different choices for systems monitoring, application monitoring, error tracking, and web and user monitoring.

….

As companies add more tools, the number of alerts that they must field can grow by orders of magnitude. It’s simply impossible for any human, or teams of humans, to effectively manage that.

Indeed, it is impossible for Dev, Ops, IT or SecOps to stay on top of 100 alerts coming in day and night. Instead, these groups need to find a way to bring order to the madness. Teams need to be nimble to remain competitive and to support the many moving parts their groups are responsible for. As Big Panda’s CTO goes on to add:

If organizations [do not adjust their monitoring strategies] they will not only cripple their ability to identify, triage and remediate issues, but they run the risk of violating SLAs, suffering downtime, and losing the trust of customers.

Furthermore, by failing to bring order to the noise, engineers and their organizations will suffer a predictable set of problems:

  • Alert fatigue: too many alerts waking engineers up at night will not only leave them exhausted, but also erode your team’s ability to respond effectively.
  • Increased MTTR: because there are too many alerts, it takes extra time for engineers to respond intelligently to an issue or begin proper escalation.
  • Missed alerts: Like the boy who cried wolf, after too many false positives, engineers will begin to ignore alerts and, as a result, miss important issues.

Bring order to the IT alerting noise

The very purpose of monitoring is to set thresholds that tell the team when and how to act. If the monitoring and alerting tools are not producing actionable events, then there is a problem with how the system is set up.
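
To make “actionable events” concrete, here is a minimal sketch, in Python, of a threshold check that treats a one-off spike as noise and only raises an alert when a metric stays above its threshold for several consecutive checks. The class, function and metric names are illustrative assumptions, not part of any particular monitoring product:

```python
# Minimal sketch: turn raw metric samples into an actionable event only when a
# calibrated threshold is breached for several consecutive checks.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Alert:
    metric: str
    value: float
    message: str  # what the on-call engineer should do about it


def evaluate(metric: str, samples: List[float], threshold: float,
             sustained: int = 3) -> Optional[Alert]:
    """Return an Alert only if the last `sustained` samples all breach the threshold."""
    recent = samples[-sustained:]
    if len(recent) == sustained and all(v > threshold for v in recent):
        return Alert(
            metric=metric,
            value=recent[-1],
            message=f"{metric} above {threshold} for {sustained} consecutive checks",
        )
    return None  # a single spike is treated as noise, not an actionable event


# A lone spike at 96% produces nothing; three consecutive breaches produce one alert.
print(evaluate("cpu_percent", [42.0, 55.0, 96.0, 45.0, 50.0], threshold=90.0))
print(evaluate("cpu_percent", [42.0, 55.0, 96.0, 97.0, 97.0], threshold=90.0))
```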

But by bringing a strong testing mindset to bear on monitoring and alerting, teams can solve many of these issues. SolarWinds gets this just right when they indicate:

It is only with continuous monitoring that a network admin can maintain a high-performance IT infrastructure for an organization. … Adopting the best practices can help the network admin streamline their network monitoring to identify and resolve issues much faster with very less MTTR (Mean Time To Resolve)

So how can and should organizations bring order to the noise? At OnPage, our best practices encourage DevOps, IT and SecOps teams to implement the following procedures:

  • Establish a baseline for the system. Initially, set the IT monitoring and IT alerting parameters somewhat loosely so that you can determine the overall health and robustness of your system. While initially painful, this will allow you to see which types of alerts are garbage and which are meaningful. You won’t always know this from the outset, so it is a necessary part of the process. As our friends at SolarWinds go on to note, “Once normal or baseline behavior of the various elements and services in the network are understood, the information can be used by the admin to set threshold values for alerts.” (A minimal sketch of this baselining step follows the list.)
  • After three to four weeks of monitoring, you can review the audit trail in your OnPage console. Reviewing the console will allow you to see which components of your system are producing alerts that need an immediate response and which ones do not. In the language of OnPage, you are able to determine which alerts are low priority and which are high priority. Low-priority alerts, such as ‘server is 90% full’, can often be taken care of during normal working hours. High-priority alerts, such as a potential zero-day attack, need immediate attention and should wake up the on-call engineer. (A sketch of this kind of priority routing also follows the list.)
  • Ensure that the alerts come with proactive messaging. Clear, context-rich messages allow engineers to solve problems quickly and tell them whether the issue needs escalation or can be handled on the spot.
  • To keep up with the pace of change that will inevitably befall your system, it is important that every component of your IT stack follow this process. Otherwise, you will quickly drown in alerts.
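
As a rough illustration of the baselining step above, the sketch below derives an alert threshold from a metric’s observed “normal” behavior, here its mean plus three standard deviations. The function name, the three-sigma choice and the sample values are assumptions made for the example, not OnPage or SolarWinds functionality:

```python
# Sketch: derive an alert threshold from a few weeks of baseline samples.
import statistics


def baseline_threshold(samples, sigmas=3.0):
    """Set the threshold at mean + `sigmas` standard deviations of the baseline."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)  # spread of the baseline window
    return mean + sigmas * stdev


# Illustrative response-time samples (ms) gathered during the baselining weeks.
history = [120, 135, 128, 142, 130, 125, 138, 131]
print(f"alert when response time exceeds {baseline_threshold(history):.0f} ms")
```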
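
And as a rough illustration of low- versus high-priority routing with proactive messaging, this sketch classifies incoming alerts and either pages the on-call engineer immediately, with next steps attached, or queues the alert for normal working hours. The classification rules, function names and messages are illustrative assumptions, not the OnPage API:

```python
# Sketch: classify alerts by priority and route them with a proactive message.
from dataclasses import dataclass


@dataclass
class IncomingAlert:
    source: str
    summary: str


# Hypothetical markers that flag an alert as high priority.
HIGH_PRIORITY_MARKERS = ("zero-day", "outage", "data loss")


def classify(alert: IncomingAlert) -> str:
    text = alert.summary.lower()
    return "high" if any(marker in text for marker in HIGH_PRIORITY_MARKERS) else "low"


def page_on_call(alert: IncomingAlert, message: str) -> None:
    # High priority: wake the on-call engineer now, with guidance on what to do.
    print(f"[PAGE] {alert.source}: {alert.summary} -- {message}")


def queue_for_business_hours(alert: IncomingAlert) -> None:
    # Low priority: hold it for normal working hours.
    print(f"[QUEUE] {alert.source}: {alert.summary}")


def route(alert: IncomingAlert) -> None:
    if classify(alert) == "high":
        page_on_call(alert, message="Isolate the host and escalate to security.")
    else:
        queue_for_business_hours(alert)


route(IncomingAlert("ids", "Possible zero-day exploit detected"))
route(IncomingAlert("disk-monitor", "Server disk is 90% full"))
```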

Not every attack of heartburn is a heart attack. Similarly, not every alert is a high-priority issue that requires a 2 a.m. wake-up call. You need to know how to tell the difference.

Control the noise

If you want your stack to retain its value and usefulness, you need alerting that is meaningful and actionable. You need to create thresholds and analyze them. Having a thousand alerts come through will cause the most tolerant of engineers to lose their cool. You don’t want that, and we at OnPage don’t want that for your team either.
