How Cognitive Bias Leads to Reasoning Errors in Cybersecurity

Forcepoint’s Dr Margaret Cunningham shares insight on how human bias can impact decision making and business outcomes, offering unique guidance on overcoming bias through human understanding combined with advanced behavioural analytics.


Terms from cognitive science are not arbitrary labels applied to cybersecurity. The relationship between computing and cognition emerged as early as the 1950s, during the cognitive revolution, when psychological science moved beyond pure behaviourism and embraced the mind and its processes. Today, cognitive science is an expanding interdisciplinary domain that overlaps with nearly every aspect of cybersecurity.


In this article, we’ll explore six common cognitive biases – aggregate bias, anchoring bias, availability bias, confirmation bias, the framing effect and the fundamental attribution error – the impact they have on cybersecurity, and how they can be addressed.


How Human Biases Skew Security Strategies

We are all subject to cognitive biases and reasoning errors, which can impact decisions and business outcomes in cybersecurity. We regularly see business leaders influenced by external factors. For example, if the news headlines are full of the latest privacy breach executed by foreign hackers, with dire warnings of outside attacks, the people leading security programs tend to skew cybersecurity strategy and activity towards countering external threats.

This is availability bias in action: a single high-profile breach can cause enterprises to ignore or downplay the threats posed by malware, poor patching processes or the data-handling behaviour of their own employees. Relying on what’s top of mind is a common human decision-making shortcut, but it can lead to faulty conclusions.

Confirmation bias also unconsciously plagues security professionals. When individuals are exploring a theory for a particular problem, they are highly susceptible to confirming their beliefs by searching only for evidence that supports their hunch. For example, an experienced security analyst may “decide” what happened before investigating a data breach, assuming a malicious employee was responsible because of previous incidents. Expertise and experience, while valuable, can become a weakness if people regularly investigate incidents in a way that only supports their existing beliefs.


It’s not my fault, it’s PEBKAC

One social and psychological bias that impacts nearly every aspect of human behaviour is the fundamental attribution error: the tendency to attribute other people’s failures to their character rather than to their circumstances. Security professionals have been known to use the acronym PEBKAC, which stands for “Problem Exists Between Keyboard and Chair”. In other words, they blame the user for the security incident. Security engineers are not the only ones affected by this bias: end-users, in turn, blame poorly designed security environments for any incidents, or refuse to recognise their own risky behaviours.

Coping with the fundamental attribution error, and the related self-serving bias, is not easy and requires personal insight and empathy. For supervisors and leaders, acknowledging their own imperfections and failures can help create a more resilient and dynamic culture. Those designing complex software architectures should recognise that users will rarely be as security-focused as the system’s designers. Users’ failures are not because they are “stupid”, but because they’re human.

However, an exceptional human trait is that we are able to think about thinking, and can therefore recognise and address these biases. By taking a different approach and avoiding those instances where automatic thinking does damage, we can improve decision making.


Overcoming Bias with Applied Insight

A better understanding of biases makes it easier to identify and mitigate the impact of flawed reasoning and decision-making habits. The industry’s efforts to build harmony between the best characteristics of humans and the best characteristics of technology to tackle cybersecurity challenges depend on understanding and overcoming bias.

Building a deep understanding of human behaviour into risk-adaptive security solutions is key to the end goal of improving business processes and outcomes, reducing friction and enabling the business to thrive and succeed. Products created in this fashion can compute, and continuously update, a behavioural risk score for each end-user against a baseline of that user’s “normal” behaviour, wherever and however the user accesses the corporate network.
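The article doesn’t prescribe how such a score is derived, but a minimal sketch makes the idea concrete. Assume a single behavioural feature per user (say, megabytes downloaded per day) scored as a simple z-score against that user’s own history; the feature, function name and values below are hypothetical illustrations, not any product’s actual method:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Baseline:
    """A user's recent history for one behavioural feature,
    e.g. megabytes downloaded per day (hypothetical feature)."""
    history: list[float]

def behavioural_risk_score(baseline: Baseline, observed: float) -> float:
    """Score how far today's observation sits from this user's own 'normal'.

    Returns an absolute z-score: near 0 means typical behaviour for this
    user, larger values mean increasingly anomalous behaviour.
    """
    mu = mean(baseline.history)
    sigma = stdev(baseline.history) or 1.0  # guard against a perfectly flat history
    return abs(observed - mu) / sigma

# A user who normally downloads ~50 MB/day suddenly pulls 400 MB:
user = Baseline(history=[45.0, 52.0, 48.0, 55.0, 50.0])
print(behavioural_risk_score(user, 400.0))  # large score -> anomalous for this user
```

Real risk-adaptive products combine many such features and update the baseline continuously; the point of the sketch is that risk is measured against each user’s own behaviour, not a global average.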

Intelligent systems, informed by the individual risk assessment, can then apply a range of security countermeasures matched to the identified risk and the organisation’s appetite for risk. For example, data access can be allowed and monitored, downloads can be encrypted, or access to sensitive files can be blocked entirely, depending on the context of individual interactions with corporate data and the resulting risk score.
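As an illustrative sketch rather than any vendor’s actual policy engine, the graduated response described above can be expressed as a mapping from the risk score to a countermeasure, with thresholds scaled by the organisation’s risk appetite (the function, actions and threshold values are invented for illustration):

```python
from enum import Enum

class Action(Enum):
    ALLOW_AND_MONITOR = "allow access and monitor"
    ENCRYPT_DOWNLOADS = "allow access but encrypt downloads"
    BLOCK_ACCESS = "block access to sensitive files"

def countermeasure(risk_score: float, risk_appetite: float = 1.0) -> Action:
    """Map an individual behavioural risk score to a graduated response.

    A risk-tolerant organisation (higher risk_appetite) tolerates more
    deviation before escalating; the thresholds are illustrative only.
    """
    if risk_score < 2.0 * risk_appetite:
        return Action.ALLOW_AND_MONITOR
    if risk_score < 5.0 * risk_appetite:
        return Action.ENCRYPT_DOWNLOADS
    return Action.BLOCK_ACCESS

print(countermeasure(1.2))   # Action.ALLOW_AND_MONITOR
print(countermeasure(91.9))  # Action.BLOCK_ACCESS
```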


Take Action to Address Bias: Questions for Cybersecurity Professionals

Where do you start? Our advice to security professionals and business leaders is to take a few moments to walk through the six biases and ask these questions:

  • Do you or your colleagues make assumptions about individuals based on group characteristics?
  • Have you ever been anchored on a forensic detail that you struggled to move past in order to identify a new strategy for exploration?
  • Has the recent news cycle swayed your company’s perception of current risks?
  • When you run into the same problem over and over again, do you slow down to think about other possible solutions or answers?
  • When offered new services and products, do you assess the risk (and your risk tolerance) in a balanced way, from multiple perspectives?
  • And finally, does your team take steps to recognise its own responsibility for errors or for engaging in risky behaviours, and to give credit to others who may have made an error due to environmental factors?


People tend to make mistakes when there is too much information, complex information, or information linked to probabilities. However, there is a powerful parallel between how humans learn to think and reason and how security technology can be designed to improve how we cope with the “grey space.” In the case of biases, pairing behavioural analytics with security countermeasures can decrease the bias problem significantly and take you a step closer to a more secure environment.

