Threat Intelligence – Measuring Impact

Executive Summary

Measuring the actual impact and value provided by threat intelligence can be incredibly challenging. There’s a reason why few vendors ever mention how you might go about doing it.

In this article my intention is to get you started taking your first wobbly, bobbly steps towards measuring the impact of your threat intelligence investments. It’s not a checkbox, nor a tool you install; it’s an active process you need to engage in and with.

But if you do (and succeed), you will unlock the full power of truly data-driven decision making, as well as tangible, measurable improvements to your cyber security efforts.

Introduction to Measuring Threat Intelligence

Threat intelligence is often said to transform, even revolutionize, your cyber security efforts. Unfortunately, you are usually left to figure out on your own how to measure the impact of your threat intelligence products and services.

Fret not, my friend, for I have come to usher you towards enlightenment. Okay, perhaps not enlightenment exactly, but at least a better understanding of how you can practically approach figuring out the value of your current, or future, cyber threat intelligence provider.

I will start by outlining the areas where intelligence can be measured. Having done that, I will highlight a few examples of how we could go about measuring it.

There’s a reason why most security and threat intelligence vendors don’t talk about this. It could almost be considered our dirty little secret. Why don’t we talk about it, you ask? Because it’s quite a hard problem to tackle.

Humor me, will you? Try and find an article that thoroughly explains how to measure the impact of threat intelligence use. Go on, I’ll wait…

Assumptions

Defending against threats takes time. That may seem like stating the obvious, but nothing comes for free. There are lots of parameters to consider when defending, such as your exposure on the Internet, the number of employees, the technologies used, and so on.

But regardless, I will assume that you want to defend your organization as effectively as you can with the resources you currently have. Thus, one of your more important goals is to defend intelligently by prioritizing the threats that matter.

I’m also going to assume that you’ve got some basic understanding of key concepts concerning threat intelligence, such as the different types of intelligence (strategic, tactical, etc.) and the categories of intelligence (basic, current, and estimative).

Foundational Concepts for Measuring Threat Intelligence Efficacy

All security initiatives should contribute to the efficacy of our defense; we should do better than before the initiative.

Threat intelligence is no different: it should contribute to and improve your current defenses. And it should do so in a reasonably measurable way. So, how do we do that? We will use Key Performance Indicators (KPIs).

Key Performance Indicators (KPIs) should ideally have the following characteristics: quantifiable, strategic, actionable, relevant, and timely.

Essentially, they should enable you to learn about, understand, and improve your cyber security posture. Because if you ultimately aren’t improving your defense, what’s the point?

So a KPI should be:

  1. Quantifiable – possible to express as a number or percentage.
  2. Strategic – aligned with your cyber defense goals and objectives.
  3. Actionable – providing insights that enable improved performance.
  4. Relevant – tied to what you are actually measuring (cyber defense).
  5. Timely – provided on a regular basis and tracked over time.
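
To make this concrete, here’s a minimal sketch of what a KPI could look like as data. The field names, metric, and values are illustrative assumptions, not taken from any particular tool or standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KpiSample:
    """One measurement of a KPI, taken at a point in time."""
    name: str          # e.g. "false_positive_rate" (illustrative)
    value: float       # quantifiable: a number or percentage
    target: float      # actionable: the level we are steering towards
    measured_on: date  # timely: sampled regularly and tracked over time

# Tracked over time, samples like these let you see whether a
# threat intelligence investment actually moves the needle.
history = [
    KpiSample("false_positive_rate", 0.32, 0.10, date(2024, 1, 31)),
    KpiSample("false_positive_rate", 0.21, 0.10, date(2024, 2, 29)),
]
trend = history[-1].value - history[0].value  # negative = improving
print(f"trend: {trend:+.2f}")
```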

Clear? Hopefully it will become clearer as you progress in this article 🙂

Threat Intelligence Areas To Consider

Let’s talk about KPIs and how they relate to threat intelligence. Here are the areas to consider:

  1. Predictive and early detection
  2. Time to detect and respond to threats
  3. Accuracy of detection mechanisms and rules
  4. Incidents prevented and mitigated
  5. Relevance and actionability of intelligence
  6. Threat coverage and visibility
  7. Operational efficiency
  8. Collaboration and integrations
  9. Organizational impact

While I would love to write about all of these potential areas I will focus on a select few instead.

Using Threat Intelligence for Predictive and Early Detection

TL;DR – Measure the number of threats detected as a direct consequence of your threat intelligence program.

Performing predictive and early detection requires that you first learn of new threats, then proactively block them, and then wait for the block to register a “hit”.

Let’s say that you download one of those lists of “malicious” IP addresses that you can find pretty much anywhere. You import the addresses into your security product, an ingress firewall for example, and then wait.
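
As a rough illustration, here’s a hedged sketch of ingesting such a feed. The feed URL is hypothetical, and the output is a generic deny rule per address rather than any specific firewall’s syntax.

```python
import ipaddress
import urllib.request

# Hypothetical feed URL -- substitute your actual provider's list.
FEED_URL = "https://example.com/malicious-ips.txt"

def fetch_blocklist(url: str) -> list[str]:
    """Download a plain-text feed, one IP address per line,
    keeping only lines that parse as valid addresses."""
    raw = urllib.request.urlopen(url, timeout=10).read().decode()
    valid = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        try:
            ipaddress.ip_address(line)
            valid.append(line)
        except ValueError:
            pass  # silently drop malformed entries
    return valid

# Emit one deny rule per address in a generic, firewall-agnostic form.
for ip in fetch_blocklist(FEED_URL):
    print(f"deny from {ip}")
```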

You now must measure two things:

  1. False positives
  2. True positives

If you block something that turns out to be legitimate, you have accumulated a false positive. If you block something that was indeed malicious (however you determine that), you have a true positive. And if a block registers neither a false positive nor a true positive, you’ve essentially gotten yourself a true negative.
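
A minimal sketch of keeping that tally could look like the following. The verdicts and the blocklist size are illustrative assumptions; how you triage a hit as malicious or legitimate is up to you.

```python
from collections import Counter

# Each blocked connection attempt gets investigated and labeled.
# The "malicious"/"legitimate" verdicts come from your own triage
# process -- however you determine that.
hits = [
    {"ip": "203.0.113.7", "verdict": "malicious"},
    {"ip": "198.51.100.2", "verdict": "legitimate"},
]

tally = Counter()
for hit in hits:
    if hit["verdict"] == "malicious":
        tally["true_positive"] += 1   # blocked something truly bad
    else:
        tally["false_positive"] += 1  # blocked legitimate traffic

# Blocks that never register a hit can be counted as true negatives.
blocked_total = 1000  # assumed size of your imported blocklist
tally["true_negative"] = blocked_total - len(hits)
print(tally)
```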

You must also consider the decay of preemptively blocked IP addresses and domains. IP addresses are to a certain extent shared, and while an address may at one point have been associated with malicious actors, it need not be so forever.

You should therefore implement a decay function which removes the block after a given amount of time, ideally depending on the threat associated with the IP address.
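
Here’s one way such a decay function might look. The severity tiers and time windows are assumptions to tune for your own environment.

```python
from datetime import datetime, timedelta

# Assumed decay windows per threat severity -- tune to your risk appetite.
DECAY = {
    "high": timedelta(days=90),
    "medium": timedelta(days=30),
    "low": timedelta(days=7),
}

def is_expired(blocked_at: datetime, severity: str, now: datetime) -> bool:
    """Return True once a block has outlived its decay window."""
    return now - blocked_at > DECAY.get(severity, DECAY["low"])

def prune(blocks: list[dict], now: datetime) -> list[dict]:
    """Keep only blocks still within their decay window."""
    return [b for b in blocks
            if not is_expired(b["blocked_at"], b["severity"], now)]

blocks = [
    {"ip": "203.0.113.7", "severity": "high", "blocked_at": datetime(2024, 1, 1)},
    {"ip": "198.51.100.2", "severity": "low", "blocked_at": datetime(2024, 1, 1)},
]
# A month later, the low-severity block has decayed; the high one remains.
print(prune(blocks, datetime(2024, 2, 1)))
```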

Prevented and Mitigated Incidents Due to Threat Intelligence

TL;DR – Measure how many incidents were successfully prevented (or mitigated) due to threat intelligence.

This is a hard one, but also quite possibly the most important. If we prevent a fire from happening in the first place, what exactly was it that stopped it from starting at all? Was it one thing? Multiple things combined?

We can make some general inferences based on our broad understanding of how cyber attacks transpire and the various phases attackers traverse. A leaked set of valid credentials is, according to our incident response assignments, a common initial access vector.

If we discover the credentials, reset them, and then configure additional monitoring on the account, we both get a chance to see attempted attacks and can subsequently derive valuable statistics on the value of discovering and acting on leaked credentials.
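
As a sketch of how you might turn this into a number, consider tracking reset accounts and counting post-reset login attempts. All names and fields below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WatchedAccount:
    """An account whose leaked credentials were reset and put under
    extra monitoring -- names and fields are illustrative."""
    username: str
    reset_at: datetime
    attempted_logins: list = field(default_factory=list)

def prevented_incidents(accounts: list) -> int:
    """Count accounts where someone tried the old credentials after
    the reset: each one is a plausibly prevented incident."""
    return sum(1 for a in accounts
               if any(t > a.reset_at for t in a.attempted_logins))

accounts = [
    WatchedAccount("alice", datetime(2024, 3, 1),
                   [datetime(2024, 3, 4)]),        # attempt seen -- prevented
    WatchedAccount("bob", datetime(2024, 3, 1), []),  # no attempt observed
]
print(prevented_incidents(accounts))  # 1
```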

Precision and Accuracy of Detections

TL;DR – You are using threat intelligence to support your detection efforts; has it led to fewer false positives?

If you’ve worked with EDR tooling, or any kind of security product, you’ve experienced false positives: alerts raised as malicious when in fact they were not.

If you implement technical threat intelligence and ingest it into your detection engineering, this should, ideally, raise fewer false positives and more true positives, without sacrificing accuracy.

Speaking of precision and accuracy: these two related terms are often confused, and it can sometimes be a little hard to understand them intuitively. But below is an example that may serve as a useful foundation for explaining how they relate to each other.

Ideally you want both high precision and high accuracy, and this is where threat intelligence can help. But to understand whether your threat intelligence makes a difference, you need a baseline against which to compare results.

Thus, you need to measure your current rate of false positive reporting. And if you’ve got a managed detection and response service, start tracking the number of false positives being reported by your service provider.

If you’ve got a high true positive rate and a low false positive rate, that would indicate good accuracy, assuming you are also actually finding the threats that are out there. High precision is useless if you’ve got zero accuracy.

An Example of High Precision, Low Accuracy

From a defensive perspective you can think of it like this. You want to defend against phishing attacks. Let’s say that you’re absolutely rockin’ the detection of phishing attacks using delivery and banking themes. Nothing gets through. That’s high precision. But when it comes to general phishing attacks, your defenses are like Swiss cheese. That means your accuracy is bad. Remember that accuracy is calculated by adding up true positives and true negatives and dividing by the total of all outcomes: accuracy = (TP + TN) / (TP + TN + FP + FN).
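
Expressed as code, with illustrative counts matching the phishing example above (strong on delivery and banking themes, weak on everything else):

```python
def precision(tp: int, fp: int) -> float:
    """Of everything we flagged, how much was actually malicious?"""
    return tp / (tp + fp) if (tp + fp) else 0.0

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Of all verdicts, how many were correct -- the formula above."""
    total = tp + tn + fp + fn
    return (tp + tn) / total if total else 0.0

# Illustrative counts: almost everything we flag is real phish (high
# precision), but many generic phish slip through as false negatives,
# dragging accuracy down.
tp, tn, fp, fn = 40, 10, 2, 48
print(f"precision = {precision(tp, fp):.2f}")         # 0.95
print(f"accuracy  = {accuracy(tp, tn, fp, fn):.2f}")  # 0.50
```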

Getting Started With Measuring Threat Intelligence Effectiveness

Now it’s your turn to begin measuring and understanding the real impact your current threat intelligence efforts are having on your cyber security initiatives.

The first thing you are going to do is baseline your security organization, i.e. your current detection capabilities. Here’s what I want you to measure:

  1. Number of alerts – In your preferred EDR tooling, start measuring the rate of raised alerts (potential security violations).
  2. Number of false positives – Out of all raised alerts, measure the number of false positives.
  3. Number of false negatives – The number of times malicious activity was NOT detected.
  4. Number of true positives – Measure the number of true positives, i.e. real alerts which signified malicious activity.
  5. Number of true negatives – Instances where benign behavior was correctly identified as non-malicious, i.e. expected activity that was not flagged as a threat.

Take appropriate measurements daily, with a sliding window of monthly aggregations, as sketched below. At a minimum, begin by measuring the number of alerts, false positives, and true positives.
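
Here’s a minimal sketch of that daily-measurement-with-monthly-window idea, using illustrative numbers:

```python
from collections import deque
from datetime import date

WINDOW_DAYS = 30  # sliding window of roughly one month

# Each day you append one record; the deque keeps only the last 30.
daily = deque(maxlen=WINDOW_DAYS)

def record_day(day: date, alerts: int, fps: int, tps: int) -> None:
    daily.append({"day": day, "alerts": alerts, "fps": fps, "tps": tps})

def monthly_fp_rate() -> float:
    """False positives as a share of all alerts over the window."""
    alerts = sum(d["alerts"] for d in daily)
    fps = sum(d["fps"] for d in daily)
    return fps / alerts if alerts else 0.0

record_day(date(2024, 4, 1), alerts=25, fps=9, tps=4)
record_day(date(2024, 4, 2), alerts=18, fps=5, tps=6)
print(f"30-day FP rate: {monthly_fp_rate():.0%}")  # 33%
```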

Next up, we’ll look at how you can use these numbers to create a baseline of the accuracy and precision of your current detection capabilities, and how threat intelligence may help you reduce false positives and increase true positives, thereby driving both higher accuracy and higher precision in your detection.