Is IDS Effective? It Depends.

2012-08-04 22:54:34 by chort

Recently Steven Alexander wondered if IDS is effective. This is a topic I've been ranting about at work recently, so I will share my thoughts here in long form.

I've been using Snort on and off over the last decade, in various environments. Recently I've been implementing commercial IDS/IPS in corporate office environments. I have done quite a lot of tuning, over a period of several months. I've turned a lot of signatures off, turned a lot of non-default signatures on, responded to actual security events, and even written my own signatures. My comments are based on these experiences.

First, to address the theoretical value of IDS, I will say that it has a place in the toolbox. I'm more dubious about IPS, that is to say full blocking, mostly due to the amount of extra work necessary to ensure collateral damage is minimized. There is a role to be played by IDS that isn't accomplished by other means. Netflow could give you some of it, log analysis could give you some of it, web proxies could cover a good deal, DNS monitoring could cover some of it, but nothing really gives the visibility of potentially malicious application traffic that IDS does.

I think IDS has some huge shortcomings, though. For a while I thought these were fairly Snort-specific and due to the community nature of the technology. I thought that a commercial IPS would go a long way toward solving the configuration nightmare, rampant false positives, and generally poor signature quality. Unfortunately, I was wrong.

While commercial IPS does have a slightly better UI than the various web front-ends for Snort, it's still laid out in a very unintuitive way. There is a maze of required steps for setup, and baffling options that must be explained by an experienced user in order to make any sense at all. The quality of the signatures enabled by default is highly suspect; a number of rules generated ludicrous amounts of false positives.

The worst part of operating IDS/IPS is the vast amount of time required to investigate alerts and determine that yes, they really are FPs. Many times, debunking an alert requires a detailed understanding of the vulnerability the signature is supposed to detect, and of what exploitation of it would actually look like. This can lead to hours of research on the web and futile quests to track down .PCAP files of an actual attack to compare against. Actual malicious traffic is often much quicker to identify, but proving a negative takes forever.

A great number of signatures appear to be very broadly constructed and apparently (the only rational explanation I can come up with) only tested against .PCAPs of exploits, not actually on live networks with legitimate traffic. At the most generous we can say any testing must have been strictly limited to a narrow set of clients and servers. Many signatures geared to detecting exploits of specific vulnerabilities are unusable, due to the number of false-positives they generate. Sure, they may detect real attacks, but the false-alert ratio is so high that investigating all of them is prohibitive, so they're simply disabled or ignored.

Given this, one could argue that, if standard applications are patched regularly, custom applications either go through extensive code review, or have some other type of prophylactic (such as WAF), and end-points all have anti-virus, you are protected and hence don't need to detect attacks. To this I say: So you're on some good drugs, huh?

These conditions never, ever exist, even for shops who think they do all of the above. Further, relying on anti-virus to detect attacks is a false hope. The severe lag in delivering signatures is a well-known problem, and in the meantime, if a system becomes infected, the first thing most malware does is cripple AV functions (disable updates, turn off real-time protection, etc).

In order to patch everything, patches need to exist--and what about patching systems or software you didn't know was on your network? Perhaps you have a fantastic inventory and you use NAC, app whitelisting, etc. to keep unapproved stuff (mostly) off the network, but you never know for certain. (Side-note: this is why turning off IDS signatures for software you "don't have," or disabling vulnerability scans for the same, is a bad idea--you'll miss what shouldn't be there, but actually is.) The vast majority of organizations don't even implement those controls (although they arguably should).

Lastly, WAFs are constantly being bypassed. Yes, the detection is getting better, but more evasions are being developed as well. It's essentially the same problem IPS faces. The difference is that, while IDS might not be able to identify every attempted attack, it still gets a second chance at identifying the post-compromise traffic (DNS queries, CnC traffic, etc). A WAF cannot detect an outbound IRC connection resulting from a command injection.

So what do I think IDS is good for? I think it does a pretty decent job of detecting client-side exploit attempts (via web content) and post-compromise behavior. I suppose web security proxies could do a good job of the former, but I don't have experience judging the effectiveness of proxy rules. I don't hear recommendations for proxies, or even discussion of proxy vendors within my social network of security peers, which I take to mean they aren't widely viewed as good solutions. For the post-compromise detection you could use other approaches, particularly DNS monitoring--either home-grown, or what Damballa claims to do. Keep in mind that with a total DNS-based approach, you'll miss malware that hard-codes CnC IPs. Nothing gives the comprehensive view and specific alerting that IDS does, to my knowledge.
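As an illustration of the home-grown DNS-monitoring option mentioned above, here is a minimal Python sketch (my own construction, not any vendor's product) that flags query names whose character entropy looks algorithmically generated--a common trait of DGA-style CnC domains. The threshold, minimum label length, and sample queries are purely illustrative.

```python
# Sketch: flag DGA-looking DNS query names by character entropy.
# Thresholds are illustrative, not tuned values from any real deployment.
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_dga(qname: str, threshold: float = 3.5) -> bool:
    """True if the registered label looks algorithmically generated."""
    # Measure only the label left of the TLD (e.g. "example" in example.com).
    label = qname.rstrip(".").split(".")[-2] if "." in qname else qname
    return len(label) >= 10 and entropy(label) > threshold

# Stand-in for a feed of logged DNS queries:
queries = ["www.example.com.", "xj2k9qpa7vbd31z.info.", "mail.google.com."]
alerts = [q for q in queries if looks_dga(q)]
```

As the post notes, an approach like this only sees CnC that actually resolves names; malware with hard-coded CnC IPs never shows up in the DNS logs at all.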

I have discovered a number of potentially useful signatures and rules that were disabled by default in the particular commercial IDS I'm using. It appears nearly all the disabled rules are due to performance overhead. Indeed after turning on about 200 additional rules, the CPU use nearly doubled. In a corporate environment, assuming you've sized the hardware appropriately for bandwidth, that's probably not an issue. In a datacenter environment those expensive rules could overwhelm your sensor.

As part of incident response I've noticed a few patterns that were glaringly obvious, yet I could not find an IDS rule for them. In those cases I created my own rule by cloning similar rules and editing them. These rules have generated some useful alerts that I would not have seen otherwise, but they haven't been a major component of detection thus far.
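For readers unfamiliar with the clone-and-edit approach, a hypothetical Snort-syntax rule (not one of the actual rules described above) for the kind of post-compromise pattern mentioned earlier--an outbound IRC NICK command from an internal host--might look like this:

```
# Hypothetical example only; $HOME_NET/$EXTERNAL_NET come from snort.conf,
# and sid is drawn from the local-rule range (>= 1000000).
alert tcp $HOME_NET any -> $EXTERNAL_NET 6667 \
    (msg:"LOCAL Possible IRC bot - outbound NICK command"; \
    flow:to_server,established; content:"NICK "; depth:5; \
    classtype:trojan-activity; sid:1000001; rev:1;)
```

Starting from a vendor rule with working `flow` and `content` options and editing the match and metadata is far less error-prone than writing a rule from scratch.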

So I do believe the value of IDS is, in part, determined by the skill of the operator. In order to make sense of alerts and weed out false positives, you have to be comfortable wading into application protocols, researching vulnerability reports (including filling in a lot of missing pieces that have been held back from public disclosure), and making judgments about what's expected to be on a particular network segment. You also need a lot of knowledge of attacker tactics, and of which resources are valuable to which kinds of attackers.

In closing, NIDS certainly isn't a panacea, or even a particularly good solution. Given how long the technology has been around, I would expect much higher quality in the UI and in signature accuracy by now. That these two issues are still the biggest problems just boggles my mind, and that's to say nothing of the ways NIDS can be evaded. I think the most value in NIDS is in detecting behavior and reputation, not specific exploits. I hope over time vendors move to much more threat-intel sorts of updates, essentially Indicators of Compromise for network traffic. I may or may not care if machines are being attacked, but I definitely care if an attack has been successful. Those sorts of alerts, combined with a rolling window of packet capture that is automatically preserved when a compromise is detected, would provide a lot of value.
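The rolling-window capture idea can be sketched in a few lines of Python: keep the most recent packets in a fixed-size ring buffer, and snapshot the buffer the moment an alert fires. The window size and the byte-string "packets" here are stand-ins for a real libpcap feed.

```python
# Sketch of a rolling packet-capture window, frozen on a compromise alert.
from collections import deque

class RollingCapture:
    def __init__(self, window: int = 10000):
        # deque with maxlen silently discards the oldest packet when full.
        self.buf = deque(maxlen=window)

    def packet(self, pkt: bytes) -> None:
        """Feed every observed packet through here."""
        self.buf.append(pkt)

    def preserve(self) -> list:
        """Called when a compromise is detected: snapshot the window."""
        return list(self.buf)

# Toy demonstration with a 3-packet window:
cap = RollingCapture(window=3)
for p in [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]:
    cap.packet(p)
evidence = cap.preserve()  # only the newest three packets remain
```

The appeal is that analysts get the traffic from *before* the alert, which is exactly what a capture started in response to the alert can never provide.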
