
1. Introduction

Accompanying our growing dependency on network-based computer systems is an increased importance of protecting our information systems. Intrusion detection (ID), the process of identifying and responding to malicious activity targeted at computing and networking resources [1], is a critical component of infrastructure protection mechanisms.

A natural tendency in developing an intrusion detection system (IDS) is to try to maximize its technical effectiveness. This often translates into IDS vendors attempting to use brute force to correctly detect a larger spectrum of intrusions than their competitors. However, the goal of catching all attacks has proved to be a major technical challenge. After more than two decades of research and development efforts, the leading IDSs still have marginal detection rates and high false alarm rates, especially in the face of stealthy or novel intrusions. The goal is also impractical for IDS deployment, as the constraints on time (i.e., processing speed) and resources (both human and computer) can become overwhelmingly restrictive.

An IDS usually performs passive monitoring of network or system activities rather than active filtering (as firewalls do). It is essential for an IDS to keep up with the throughput of the data stream it monitors so that intrusions can be detected in a timely manner. A real-time IDS can thus become vulnerable to overload attacks [20]. In such an attack, the attacker first directs a huge volume of malicious traffic at the IDS (or some machine it is monitoring) until it can no longer track all of the data necessary to detect every intrusion. The attacker can then successfully execute the intended intrusion, which the IDS will fail to detect. Similarly, an incident response team can be overloaded by intrusion reports and may be forced to raise detection and response thresholds [5], causing real attacks to be ignored. In such situations, focusing limited resources on the most damaging intrusions is a more beneficial and effective approach.

A very important but often neglected facet of intrusion detection is its cost-effectiveness, or cost-benefit trade-off. An educated decision to deploy a security mechanism such as an IDS is often motivated by the needs of security risk management [3,8,19]. The objective of an IDS is therefore to protect the information assets that are at risk and have value to an organization. To be cost-effective, an IDS should cost no more than the expected loss from the intrusions it guards against. This requires that an IDS consider the trade-off among cost factors, which at a minimum should include development cost, the cost of damage caused by an intrusion, the cost of manual or automatic response to an intrusion, and operational cost, which measures constraints on time and computing resources. For example, an intrusion whose response cost exceeds its damage cost should usually not be acted upon beyond simple logging.
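The damage-versus-response trade-off can be expressed as a simple decision rule. The following sketch is purely illustrative: the function name and cost values are hypothetical, not part of the paper's cost model, and a real deployment would derive per-intrusion damage and response costs from site-specific risk analysis.

```python
# Illustrative sketch of the damage-vs-response trade-off described above.
# The function and the cost values are hypothetical, not the paper's model.

def choose_action(damage_cost, response_cost):
    """Respond to an intrusion only if the damage it would cause
    outweighs the cost of responding; otherwise, merely log it."""
    if response_cost > damage_cost:
        return "log"        # responding costs more than the harm prevented
    return "respond"

# A low-damage probe is merely logged; a costly compromise triggers a response.
print(choose_action(damage_cost=2, response_cost=10))    # -> log
print(choose_action(damage_cost=100, response_cost=10))  # -> respond
```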

Currently these cost factors are, for the most part, ignored as unwanted complexities in the development process of IDSs. This is largely because achieving a reasonable degree of technical effectiveness is already a challenging task, given the complexity of today's network environments and the manual effort of knowledge-engineering approaches (e.g., encoding expert rules). Some IDSs do try to minimize operational cost. For example, the Bro [20] scripting language for specifying intrusion detection rules does not support for-loops, because iterating through a large number of connections is considered too time-consuming. However, we know of no IDS that considers any other cost factors. Nor are these cost factors sufficiently considered in the deployment of IDSs, because many organizations are not educated about the cost-benefit trade-offs of security systems, and analyzing site-specific cost factors is very difficult. We therefore believe that the security community as a whole must study the cost-effective aspects of IDSs in greater detail to help make intrusion detection a more successful technology.

We have developed a data mining framework for building intrusion detection models in an effort to automate the process of IDS development and lower its development cost. The framework uses data mining algorithms to compute activity patterns and extract predictive features, and then applies machine learning algorithms to generate detection rules [12,13]. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best-performing of all the participating systems, most of which were knowledge-engineered [15].

In this paper, we examine the relevant cost factors, cost models, and cost metrics related to IDSs, and report the results of our current research in extending our data mining framework to build cost-sensitive models for intrusion detection. We propose to use cost-sensitive machine learning techniques that can automatically construct detection models optimized for overall cost metrics instead of mere statistical accuracy. We do not suggest that accuracy be ignored, but rather that cost factors be included in the process of developing and evaluating IDSs. Our contributions are not the specific cost models and cost metrics described, but rather the principles of cost analysis and modeling for intrusion detection.
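To make the distinction between statistical accuracy and an overall cost metric concrete, consider the following minimal, hypothetical sketch (it is not the cost model developed later in this paper; the event format and cost values are invented for illustration). It scores a detector by the cumulative damage and response cost it incurs over labeled events, so a detector that alarms on everything can achieve a perfect detection rate yet still score worse than a more selective one.

```python
# Hypothetical cost metric: compare detectors by cumulative cost rather than
# by accuracy. Event format and cost values are illustrative only.

def total_cost(events, detect, damage_cost, response_cost):
    """Sum the cost a detector incurs: every alarm (true or false) pays the
    response cost, and every missed intrusion pays the full damage cost."""
    cost = 0.0
    for features, is_intrusion in events:
        if detect(features):
            cost += response_cost      # responding has a price either way
        elif is_intrusion:
            cost += damage_cost        # a miss costs the full damage
    return cost

# Four events: connection size as the single feature, label = intrusion?
events = [(50, True), (3, False), (60, True), (2, False)]

alarm_on_all = lambda f: True          # 100% detection rate, many false alarms
threshold    = lambda f: f > 10        # alarms only on large connections

print(total_cost(events, alarm_on_all, damage_cost=100, response_cost=10))
print(total_cost(events, threshold, damage_cost=100, response_cost=10))
```

Here the indiscriminate detector pays the response cost on all four events (total 40), while the selective one responds only twice and misses nothing (total 20): a cost-optimized model can differ from an accuracy-optimized one.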


Erez Zadok
2000-11-09