
For this assignment you will be required to review the attached reading assignment.

Based on the white paper provided, what are the four goals of effective metrics as defined in the paper? In your own words, explain your understanding of the metric and where and how it can be beneficial. (25 pts per goal clarified)


SANS Institute InfoSec Reading Room

This paper is from the SANS Institute Reading Room site. Reposting is not permitted without express written permission.


Copyright SANS Institute Author Retains Full Rights

[Version June 2012]

How Can You Build and Leverage SNORT IDS Metrics to Reduce Risk?

GIAC (GCIA) Gold Certification

Author: Tim Proffitt, [email protected]
Advisor: David Shinberg

Accepted: August 19th, 2013

Abstract

Many organizations have deployed Snort sensors at their ingress points. Some may have deployed them between segmented internal networks. Others may have IDS sensors littered throughout the organization. Regardless of how the sensor is placed, the IDS can provide a significant view into traffic crossing the network. With this data already being generated, how many organizations create metrics for further analysis? What metrics are valuable to security teams and how are they used? What insights can one gain from good metrics and how can that be used to reduce risk to the organization? The paper will cover current technologies and techniques that can be used to create valuable metrics to aid security teams in making informed decisions.


1. Introduction

Metrics are used in many facets of a person's life and can be quite beneficial to the decision-making process. Is a car still getting the miles per gallon it should be? Are invested stocks increasing in value and at a rate that is desirable? What should the thermostat be set to this summer to minimize the amount of energy consumed? How much weight can one lose each week to get ready for trips to the community pool? Can a family afford to lease a car over the next three years with a variable income? Technologists should be asking, in the same vein, questions about IDS activities that can be answered by metrics. Is the sensor operating as intended? Is the sensor alarming on the correct events? Is the IDS seeing an increase or decrease in events, and should this matter? What events should an analyst find interesting? Is a sensor being managed in a manner that offers the best protection for the organization? Is the organization being attacked and not aware of it?

The Snort IDS, created in 1998, has seen a very large deployment with a long history. The open source community has richly supported this tool and offers additional GUI tools that generate graphical views and metrics. Several tools such as Sguil and Snorby have emerged to provide a nice platform for analysis by a security team.

Regardless of the technology utilized to generate the sensor alarms, security teams can create processes that will generate valuable data. Utilizing statistical techniques against collected data will allow security teams to build metrics. These metrics can then be used in decision making by management to reduce risk to the organization.

2. Creating Metrics

"On any given network, on any given day, Snort can fire thousands of alerts. Your task as an intrusion analyst is to sift through the data, extract events of interest, and separate the false positives from the actual attacks." (Beale, Baker, et al., 2006)

The term "metrics" describes a broad range of tools and techniques used to evaluate data (Greitzer 2005). The evaluation of that data is then used as a measurement compared to one or more reference points to produce a result. A simple technology security example would be to collect incomplete 3-way TCP handshake packets to a destination, over a period of time, with the intent to show a trend. This extremely simple example is one of many situations where technology metrics can help a manager make informed decisions about their security infrastructure. A good security team would be concerned if the above IDS metric were trending upward by a factor of two every month. What if the same metric trended downward by half every month? In the first case one could have an IDS showing that a resource is under a prolonged attack. In the second case the IDS could have a rule misconfiguration allowing conversations to be conducted but not monitored. Either way, this would be valuable data to a decision maker, or at least a situation that would need attention by a member of the team responsible for the IDS.

The technology-auditing focused organization ISACA defines information security as the protection of information assets against the risk of loss, operational discontinuity, misuse, unauthorized disclosure, inaccessibility, or damage (Brotby 2009). Technology security is concerned with the potential for legal liability that entities may face as a result of information inaccuracy, loss, or the neglect of care in its protection. A more current definition from CSO management circles describes information security as the triad of confidentiality, integrity and availability. This definition can cause an issue for security teams. How do security teams go about measuring confidentiality or integrity? One can measure availability as it pertains to network outages and system uptimes, but how can metrics be applied to availability as it pertains to technology security? These are very difficult questions to answer. The above simple metric of the sensor recording the TCP 3-way handshake does not answer these questions, at least not standing on its own. A metrics program needs to develop sound metrics to answer these questions and others that executive management will need for steering an organization.

2.1. What makes a good metric?

Bad metrics can be found most everywhere. Vendor dashboards are littered with them, presentations contain them, and security teams expect management to make decisions off them. Take a traditional, out-of-the-package IDS metric that shows the number of signatures being seen by the sensor. This can be valuable data, especially for the IDS team, intrusion response team or the individuals responsible for hardening infrastructure. Knowing that a SYN flood is being executed against a critical web server is important, but the metric says little of the overall security of the organization. Are the intrusion sensors in their current configuration protecting the organization? Is the protection the security team provides now better or worse than last year? Can the budget being allocated to managing the IDS be utilized better in a different control? Smaller, technical metrics should be rolled up into a more comprehensive security picture if security teams are going to be successful in creating good metrics and getting the point across to the upper management of the organization. A good start on metrics, measurements and monitoring information can be summarized as being manageable, meaningful, actionable, unambiguous, reliable, accurate, timely and predictive (Brotby, 2009).

To create quality metrics, security teams should strive to:

1. Develop a set of metrics that are repeatable and automated where applicable
2. Create baselines or timelines from the repeatable metrics
3. Have actionable enough metrics to make decisions
4. Be meaningful for management decisions

Teams should constantly be asking what needs to be measured and why. If there is not a good answer to "why", the team should consider whether this would make a good metric. Could the metric be used with other metrics to produce an aggregate picture of an overall security control? Many organizations have multiple technologies to combat malware, often at the endpoint, the mail gateway, the firewall and the server. Each of these technologies can produce metrics that can be grouped or aggregated to produce a metric that can show insight into the organization's ability to combat malware.

2.2. Statistical Techniques

There are several commonly used techniques for analyzing data that can be applied to create IDS metrics. Mean, median, aggregation, standard deviation, grouping, cross-sectional analysis, time series, correlation matrix, quartile analysis and Statistical Process Control can each be leveraged to build meaningful security metrics offering visibility into large data sets. Many of these techniques can be used in conjunction with one another to build more complex and often more insightful metrics.


The mean, or average as it is commonly known, is a standard aggregation metric. The average is the easiest of these techniques to compute: add the elements in the data set and divide by the number of elements in the set. It should be pointed out that averages can be a poor choice for highly variegated data sets, as they can obscure hidden spikes that might be interesting. A data set containing the number of thousands of SYN connections per hour {10,10,10,10,10,10,10,10,10,10} has the same average as the data set {1,1,1,1,90,1,1,1,2,1}. The second data set has a significant deviation (90) that could show interesting activity that might otherwise have been missed if the averages technique were utilized to show this data set's activity.

The median of a data set is the number that separates the top half of the set from the bottom half. The data set's median will highlight where half the elements are above and half the elements are below. Medians can help particularly with measuring performance. A median metric can aid IDS management in understanding performance or relevance. When a particular signature can be counted by number of instances fired, an analyst can rate his response based on whether the signature is above or below a calculated median.

Figure 1: Median Statistical Example
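To make the contrast concrete, the short Python sketch below (the hourly SYN counts are the hypothetical values from the example above) computes both statistics; the spike leaves the mean untouched but shows up immediately in the median and the maximum.

import statistics

# Hourly counts (in thousands) of SYN connections, taken from the example above.
steady = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
spiked = [1, 1, 1, 1, 90, 1, 1, 1, 2, 1]

for name, counts in (("steady", steady), ("spiked", spiked)):
    print(name,
          "mean:", statistics.mean(counts),      # both sets average 10
          "median:", statistics.median(counts),  # 10 versus 1
          "max:", max(counts))                   # 10 versus 90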

Aggregation is a popular technique for consolidating records into some type of summary data. Common to aggregation statistics are sum, standard deviation, highest, lowest and count. In most cases aggregation involves averaging numeric values and counting nonnumeric values such as signature descriptions or severity. Highest and lowest aggregation values will allow analysis of the most seen and the least seen data elements. Aggregation is heavily used in technology metrics. Top 20 alerts, total number of high-ranked signatures, and number of denied signatures are often generated for intrusion sensor dashboards.

Standard deviation measures the dispersion of a data set from the mean. This analysis technique can show if the data set is tightly clustered or widely dispersed. The smaller the standard deviation, the more uniform the data set will be. A higher standard deviation would indicate an irregular pattern. One can calculate the standard deviation by first calculating the mean of the set. Then, for each element, square the difference between the element and the mean. Add up the squares and divide by the number of elements in the set to produce the variance. The variance provides a measure of dispersion, and the root of the variance produces the standard deviation.¹ This type of statistical analysis could be used to show the types of TCP socket connection attempts to an organization's internet-accessible assets and whether that could be considered normal.

¹ If the reader is interested in further discussion of standard deviation, they can visit http://www.mathsisfun.com/data/standard-deviation.html.

Figure 2: Example Standard Deviation Chart
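The calculation described above can be scripted directly. A minimal Python sketch, assuming a hypothetical list of hourly counts of incomplete handshakes recorded by the sensor:

import statistics

# Hypothetical hourly counts of incomplete TCP handshakes seen by the sensor.
attempts = [12, 15, 11, 14, 13, 95, 12, 16, 14, 13]

mean = sum(attempts) / len(attempts)
variance = sum((x - mean) ** 2 for x in attempts) / len(attempts)
std_dev = variance ** 0.5

# statistics.pstdev() performs the same population calculation.
assert abs(std_dev - statistics.pstdev(attempts)) < 1e-9
print(f"mean={mean:.1f}  variance={variance:.1f}  std_dev={std_dev:.1f}")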

Time series analysis is the technique of understanding how a data set has developed over time. This technique is a series of recordings, taken at regular intervals, of a data set over a period of time. After grouping and aggregating the data set within the desired interval, the metric is typically sorted in ascending order. A time series technique can be a powerful tool in determining the current state of a technology versus how it has operated in the past.

"Time series analysis is an essential tool in the security analyst's bag of tricks. It provides the foundation for other types of analysis. When combined with cross-sectional analysis and quartile analysis, it provides the basis for benchmarking" (Jaquith 2007).

The time series technique will generate metrics for sensor behavior over a specified reporting time window. Have the sensors alarmed on more events today than what was seen last year? Has the number of incidents investigated decreased in the last 6 months?

Figure 3: Example Time Series Chart
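A time series metric reduces to bucketing alert timestamps into regular intervals. The sketch below assumes the alerts have been exported as (timestamp, signature) pairs, which is an illustrative format rather than one the paper prescribes:

from collections import Counter
from datetime import datetime

# Hypothetical alert records: (timestamp, signature name).
alerts = [
    (datetime(2013, 7, 22, 3, 15), "ICMP PING NMAP"),
    (datetime(2013, 7, 22, 9, 40), "WEB-MISC robots.txt access"),
    (datetime(2013, 7, 23, 1, 5), "SCAN SYN FIN"),
    (datetime(2013, 7, 23, 1, 6), "SCAN SYN FIN"),
]

# Aggregate alert counts per day, then report them in chronological order.
per_day = Counter(timestamp.date() for timestamp, _signature in alerts)
for day, count in sorted(per_day.items()):
    print(day, count)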

Cross-sectional analysis is a technique that will show how an attribute in the data set will vary over a cross section of comparable data. The technique will take a point in time and compare "apples to apples". For example, analysts may want to measure current high, medium and low ranked signatures in a data set. One could draw a sample of 1,000 alarms randomly from that population (also known as a cross section of that data set), document their attack-type profile, and calculate what percentage of that data set is categorized as attack signatures. Twenty percent of the sample could be categorized as attacking the organization. This cross-sectional sample provides one with a snapshot of that population at that one point in time. Note that an analyst does not know, based on one cross-sectional sample, if the attack alarms are increasing or decreasing; it can only describe the current proportion and what it could mean to the organization.


Figure 4: Cross Sectional Analysis Example
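The cross-sectional sample described above could be sketched as follows; the population, the categories and the sample size of 1,000 are illustrative assumptions:

import random
from collections import Counter

# Hypothetical population of alarms, each tagged with a category.
population = ["attack"] * 2000 + ["recon"] * 5000 + ["policy"] * 3000

sample = random.sample(population, 1000)   # a cross section at one point in time
profile = Counter(sample)
attack_pct = 100 * profile["attack"] / len(sample)
print(f"{attack_pct:.1f}% of the sampled alarms are categorized as attacks")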

Quartile analysis shares several traits with cross-sectional analysis. Each requires a collection of attributes to examine, and the analyst will identify a grouping and aggregation technique. In quartile analysis the aggregate is broken into quarters: first, second, third, fourth. The first quarter represents the top (or best) 25% of the aggregation. The fourth is the bottom 25%. By ranking each attribute into quartiles, the analyst gains an understanding of which section each item falls into. This type of analysis can be used to determine how well sensors are being managed, false positive acceptance rates, and aid in determining outliers (i.e., items in the first and fourth quartiles).

Figure 5: Quartile Analysis Example
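A quartile breakdown can be produced with the Python standard library (3.8 or later); the per-sensor false positive counts below are hypothetical:

import statistics

# Hypothetical false positive counts per sensor over a reporting period.
false_positives = [3, 8, 12, 15, 18, 22, 27, 31, 44, 58, 71, 90]

# Quartile cut points (Q1, Q2, Q3) dividing the data into four groups.
q1, q2, q3 = statistics.quantiles(false_positives, n=4)
print("Q1:", q1, "Q2 (median):", q2, "Q3:", q3)

# Sensors falling in the fourth quartile are outliers worth investigating.
outliers = [count for count in false_positives if count > q3]
print("fourth-quartile sensors:", outliers)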


Statistical process control is a technique that is used for determining scenarios outside normal operating patterns and thus establishing the concept of a baseline. As an example, the security team is recording events from a tuned, managed sensor. A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. This can be a picture of the behavior of the variation in the measurement that is being recorded. If a process is deemed "stable", then the sensor-generated alarms are considered to be in statistical control. If the distribution changes unpredictably over time, then the process is said to be out of control. The variation may be large or small, but it is always present. Statistical process control can guide a team to the type of action that is appropriate for trying to improve the functioning of a process being monitored. When the data set is charted and falls outside the statistical upper control limit or below the lower control limit, then a security team can investigate what caused the change and implement changes to remediate what changed the sensor's statistics.

Figure 6: Statistical Process Control Chart Example
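One common way to set control limits, and the one sketched below, is the baseline mean plus or minus three standard deviations; the daily alarm counts are hypothetical and the three-sigma choice is an assumption rather than a requirement of the technique:

import statistics

# Hypothetical daily alarm counts from a tuned, managed sensor (the baseline period).
baseline = [410, 395, 402, 420, 388, 415, 405, 398, 412, 400]

mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma

# New observations are flagged when they fall outside the control limits.
for day, count in enumerate([407, 399, 640, 412], start=1):
    status = "in control" if lower <= count <= upper else "OUT OF CONTROL"
    print(f"day {day}: {count} alarms -> {status}")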

For organizations with extremely large data sets and complex reporting requirements, advanced tool sets from Informatica, Siebel Analytics, Microsoft Business Intelligence or Information Builders may be warranted. For most organizations, the common Excel or Open Office spreadsheet can provide plenty of processes for carving through data sets to produce metrics. Fortunately for security teams, there are common statistical analysis tools that can make creating the analytics easier. For example, Microsoft Excel can calculate:


• Standard deviation: =STDEV.P()
• Absolute deviations: =AVEDEV()
• Mean: =AVERAGE()
• Median: =MEDIAN()
• Quartile analysis: =QUARTILE.EXC()

By utilizing spreadsheets, analysts can automatically generate line graphs, scatter charts, bar graphs and most visual graphics needed to generate metrics.

2.3. How can metrics identify an incident?

An intrusion will typically start off with a series of unsuccessful attempts to compromise a host. Due to the current complexity of authentication systems, clandestine attempts at intrusion generally take considerable time before the system is compromised or a damaging change is effected on the system, giving administrators a window of opportunity to proactively detect and prevent intrusion (Pillai). Therefore, monitoring IDS patterns can be an effective way of identifying possible attacks. However, an IDS can show an attack attempt but often has no way to validate that it was successful. A host's logs may show a new administrative user being added but offer no way to determine if this was done maliciously. Yet the sequence of alarms, followed almost immediately by the creation of an admin account, is an event that shouts 'successful attack' quite clearly.

Cross-technology correlation between a host event and a monitored event can be a straightforward piece of evidence. Attacks against a host known to have a service or vulnerability present can be correlated into metrics. Does the organization have a system that must run in a vulnerable state? Any alarms against this system should be interesting, but when the alarms are coming from several sources and are multiplying, an analyst should be notified. In the case of low and slow reconnaissance scans, many organizations will miss the activity. IDS sensors are typically not configured to escalate on "slow and low" single-packet probes, complex bounce or idle scans. If the signature event is not critical or high ranking and the number of packets is only a few an hour, many times this will not stand out from the potentially millions of events generated for that day. A reconnaissance scan of 20 sessions a day may not meet the threshold for an analyst's attention, but after 90 days the metric can show 1,800 sessions to a resource, which may be interesting. Data collected over time can generate metrics that will show reconnaissance attacks from sources and/or to destinations when the metrics are built.
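Accumulating per-source counts over the full window is what makes the low and slow pattern visible. A sketch, assuming hypothetical (source IP, sessions per day) records and an illustrative review threshold:

from collections import Counter

# Hypothetical records of (source IP, sessions seen that day) over a 90-day window.
daily_records = [
    ("203.0.113.7", 20), ("198.51.100.4", 2), ("203.0.113.7", 19),
    ("203.0.113.7", 21), ("198.51.100.4", 1), ("203.0.113.7", 20),
    # ...one entry per source per day for the rest of the window...
]

totals = Counter()
for source, sessions in daily_records:
    totals[source] += sessions

# 20 sessions a day is easy to miss; roughly 1,800 over 90 days is not.
REVIEW_THRESHOLD = 60  # illustrative cutoff, not a value from the paper
for source, total in totals.most_common():
    if total >= REVIEW_THRESHOLD:
        print(f"{source}: {total} sessions over the window -- review for slow recon")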

Baselines produce a powerful advantage from existing metrics. A baseline can be defined as a normal condition. A data set can then be measured, typically against the baseline, to show deviations. Most baselines are established at a point in time and serve to continue to track measurement against the reference point. By utilizing baselines with sensor metrics, security teams can develop key performance indicators (KPI) or leading indicators to identify when an incident may be occurring. When creating a baseline for total signatures recorded over a time period, the metric with a baseline can quickly show where signature deviations have occurred. In Figure 7 a baseline has been applied to an aggregate of signatures over time. An interesting indicator is present between 7/23/2013 and 7/24/2013, where the signature count was significantly higher than the baseline.

Figure 7: Baseline Example
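A baseline comparison of this kind might be sketched as follows; the daily signature totals, the six-day baseline window and the 50% tolerance are all illustrative assumptions:

import statistics

# Hypothetical daily signature totals; the first six days establish the baseline.
history = {"7/17": 300, "7/18": 310, "7/19": 295, "7/20": 305,
           "7/21": 298, "7/22": 302, "7/23": 480, "7/24": 510}

baseline = statistics.mean(list(history.values())[:6])   # the reference point
TOLERANCE = 0.5                                          # flag deviations above 50%

for day, count in history.items():
    if abs(count - baseline) / baseline > TOLERANCE:
        print(f"{day}: {count} signatures deviates sharply from baseline {baseline:.0f}")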

2.4. What metrics should security teams build?

Metrics can be built that provide visibility into trends, configuration errors and risks to the organization. By utilizing statistical techniques, teams can build several metrics that can help reduce risk to the organization by providing data to investigate. There are a number of simple metrics, utilizing the above statistical techniques, which a security team can easily build and put into practice with little effort. These simple metrics can not only provide insight into how the sensors are being utilized but can also lead to the building of more complex metrics as the program matures.


Top 20 Alarming Signatures (ordering, highest count) – This aggregation metric can be used to baseline traffic patterns, identify DoS scenarios, highlight misconfigurations, and show the top-talking signatures the sensor is processing. The top 20 signatures can offer insight into the state of the network and possible tuning of the sensor.
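This metric is a count-and-sort aggregation. A sketch, assuming the signature names have already been pulled from the sensor's alert log (the sample records are hypothetical):

from collections import Counter

# Hypothetical stream of signature names pulled from the sensor's alert log.
signatures = [
    "SCAN SYN FIN", "ICMP PING NMAP", "SCAN SYN FIN", "SQL injection attempt",
    "SCAN SYN FIN", "ICMP PING NMAP", "WEB-MISC robots.txt access",
]

# Counter.most_common(20) yields the Top 20 alarming signatures by count.
for signature, count in Counter(signatures).most_common(20):
    print(f"{count:>6}  {signature}")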

Top 20 Alerts by Date (ordering, highest count) – This aggregation metric can be used to highlight traffic patterns and potentially identify malicious activity during weekends or holidays. High numbers of failed authentications during the beginning of a work week could be considered normal, but the same number of authentication failures on a weekend may be interesting.

Alerts by Source IP (aggregation, grouping) – Creating data sets by source IP can allow for identifying low and slow reconnaissance over time. Additionally, a sharp upward trend from a few IP addresses can identify a DoS attack. Similar to the Top 20 Alerts by Date, this metric will allow teams to tune their sensors by eliminating false positives. Alarming or blocking the organization's vulnerability assessment scanners is most likely counterproductive and wastes visibility into the true top source IPs the analyst would be interested in.

Alerts by Destination IP (aggregation, grouping) – Data about destinations can highlight where outside entities are looking for, or have found, weakness. A high number of alerts, from differing sources, to a single destination should be very interesting to an analyst.

Alerts Categorized by Severity (aggregation, grouping) – By breaking the generated alerts up by severity, this metric can allow an analyst to focus on the riskier events first and remediate lower-risk events when applicable. Reconnaissance alarms can carry a lower severity ranking and can be displayed after the higher categories when the alerts are sorted.

Number of Alerts by Signature (aggregation, grouping) – By creating a metric for the number of alerts fired by a specific signature, an analyst can identify brute force attacks against a single or multiple destinations. This metric can also be used for tuning IDS signatures to reduce the amount of noise they are recording.

Alerts by Source Port (aggregation, grouping) – Collecting data by source port can highlight attacks specifically designed to compromise a specific vulnerability. Analysts should be interested if 75% of the alarms are packets destined for port 1337. This metric could be showing activity to services that are not allowed by the firewall. An attacker may have found a vulnerability from a misconfiguration.

Alerts by Destination Port (aggregation, grouping) – Destination port metrics are similar to source port but will provide data on systems that may be compromised. An analyst may see a large number of signatures firing for events destined for port 51001. Similar to the metric for Alerts by Source Port, destination port metrics can show attacks to services. There has been a significant increase in the amount of outbound beaconing from compromised hosts in an effort to defeat inbound defenses. Metrics created from identified incidents can uncover network sessions sharing common characteristics that recur at regular intervals. Teams can seek additional indicators of malware infection to support proactive incident detection as well as to supplement incident response efforts (Balland 2008).

Source IP by Country (aggregation, count, sort) – Geolocation data can be a very powerful indicator if used in conjunction with other metrics data. If an organization does not conduct business outside the United States, an analyst should be interested in any established conversations from a source IP outside the country. With the rise of the advanced persistent threat (APT), teams can create metrics for connections from countries that have known nation-state APT programs.
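Geolocation enrichment normally relies on an external IP-to-country database; in the sketch below, country_for() is a hypothetical stand-in for whatever GeoIP lookup the team already has, not a real API:

from collections import Counter

def country_for(ip: str) -> str:
    """Hypothetical stand-in for a real IP-to-country lookup (for example, a local GeoIP database)."""
    sample_map = {"203.0.113.7": "US", "198.51.100.4": "CN", "192.0.2.9": "RU"}
    return sample_map.get(ip, "UNKNOWN")

# Hypothetical source IPs from established conversations seen by the sensor.
sources = ["203.0.113.7", "198.51.100.4", "198.51.100.4", "192.0.2.9"]

by_country = Counter(country_for(ip) for ip in sources)
for country, count in by_country.most_common():
    print(country, count)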

Total Number of Alarms by Hour / Day (aggregation, sum, sort) – Creating baselines for the number of alarms over a time period can highlight patterns of an attack. Activity levels registered during off hours can be interesting for the security team. More activity on the weekends than during the traditional 9am to 5pm shift can be an interesting indicator. Using statistical techniques against the total number of alarms by hour will yield data that paints a picture of how the sensors are operating. This metric can also be used as a baseline for the creation of additional metrics to show activity outside of normal operating parameters.
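Bucketing alarms by hour of day and by weekday versus weekend is a small extension of the earlier time series grouping; the timestamps below are hypothetical:

from collections import Counter
from datetime import datetime

# Hypothetical alarm timestamps pulled from the sensor.
alarm_times = [
    datetime(2013, 7, 20, 2, 14),   # Saturday, off hours
    datetime(2013, 7, 20, 3, 2),    # Saturday, off hours
    datetime(2013, 7, 22, 10, 30),  # Monday, business hours
]

by_hour = Counter(t.hour for t in alarm_times)
weekend_alarms = sum(1 for t in alarm_times if t.weekday() >= 5)  # 5 = Saturday, 6 = Sunday

print("alarms per hour of day:", dict(sorted(by_hour.items())))
print("weekend alarms:", weekend_alarms, "of", len(alarm_times))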

Number of Denied Conversations (aggregation, count) – Denied connections can yield information about several scenarios. Using baseline data, this metric can show whether the organization's resources are currently under attack with a denial of service. The sensors may be recording a large number of denied connections to a destination that does not exist. A high number of denied connections can show a misconfiguration, or provide indicators of reconnaissance or a brute force attack. By grouping denied conversations and crafting a baseline, metrics can show when these scenarios have started.

Number of Failed Login Attempts (aggregation, count) – Metrics on failed logons over time can show indicators of password cracking attempts, policy issues and denial of service attempts. An organization can expect a reasonable number of failed logons due to human nature and a proliferation of devices, but when that baseline is doubled in a day…
