Information Sharing Considered Harmful, Maybe

2012-09-24 22:12:59 by chort

Lately the security echo chamber has been reverberating with talk of information sharing. Many parties, including (in possibly the most ironic blog post of the year) Oracle, are calling on the industry in general to share more information. The call is not unanimous, however. Several voices have urged restraint with information disclosure. Each side has good arguments, and I think everyone can agree that the status quo is not working. I urge more sharing; read on to see why.

Before I make my arguments, it's important to know why some people think sharing is dangerous. I'm going to simplify the main points to:

1.) The value of leaked information rapidly approaches zero
2.) The risk of a leak dramatically increases with each informed party

The first point was made by Richard Bejtlich, while the second was illustrated by Kurt Wismer (the inspiration for this post).

I will grant that the value of information does decrease as more parties have access to it. It's also definitely true that the risk of a leak goes up every time information is shared with another party. What is missing is the magnitude of that negative impact. Everything in security is a trade-off of some kind. In this case we're trying to figure out whether the benefit of sharing with another party is outweighed by the frequency * magnitude of a leak. To understand that we should really use quantitative methods. Since I don't have access to scientific data on the topic, I'll use example scenarios of my own to provide counterpoints.
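To make the trade-off concrete, here's a toy expected-loss sketch. Every number in it is an invented placeholder (benefit per party, leak probability, leak impact); a real analysis would need measured data, which is exactly the point about quantitative methods above.

```python
# Toy sketch: does the benefit of sharing with N parties outweigh the
# expected cost of a leak? All numbers below are invented placeholders.

def expected_leak_cost(parties, leak_prob_per_party, leak_impact):
    """Probability that at least one party leaks, times the leak's impact."""
    p_no_leak = (1 - leak_prob_per_party) ** parties
    return (1 - p_no_leak) * leak_impact

def net_value_of_sharing(parties, benefit_per_party,
                         leak_prob_per_party, leak_impact):
    """Total defensive benefit minus the expected cost of a leak."""
    benefit = parties * benefit_per_party
    return benefit - expected_leak_cost(parties, leak_prob_per_party, leak_impact)

# Sharing with 10 vetted peers, each gaining $50k of defensive value,
# where each added party carries a 2% chance of a $200k-damage leak:
print(net_value_of_sharing(10, 50_000, 0.02, 200_000))
```

Even with a pessimistic leak probability, the expected leak cost grows much more slowly than the aggregate benefit in this toy model, which is why the magnitude question matters and can't just be waved away.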

In Mr. Bejtlich's scenario, someone found information that had national security implications. It's easy to see that the situation he's describing is malware researchers finding information about industrial espionage groups' TTPs (Tactics, Techniques, and Procedures). Publicly revealing this information could jeopardize investigations being conducted by federal law enforcement, and potentially private firms, possibly including some of the intended victims.

I think it's important to note the difference between the constructed situation and the real-life situation it's being applied to. Although the current avalanche of spying is often aimed at defense projects, much of it is directly targeting private corporations, not government agencies. Although there are implications for national security, there are also very real effects on the global competitiveness of the firms being targeted. What's more, many of the victim firms aren't involved in weapons programs, they just happen to be in globally significant industries that foreign governments want their domestic businesses to become more competitive in.

It's likely in the best interest of targeted businesses to work with their national law enforcement, but that doesn't mean lying down and being robbed blind simply to avoid interfering with an investigation that may or may not be happening. I don't have personal knowledge, but it's easy to believe that, given the sheer scope of current industrial espionage, law enforcement is overwhelmed and likely cannot respond to every incident in a timely manner.

Does that mean every business is on its own to detect and remediate targeted attacks? Keep in mind many of the target companies probably don't even have an Incident Response team now, or have only started one very recently. Suppose a company does find indications of an intrusion: while it might tactically be in its interest to let domestic competitors suffer, it's almost certainly not in its strategic interest. Doesn't it make sense to share IOCs (Indicators Of Compromise, e.g. domain names, registry values) with industry peers? We can be sure the government won't give this information to most corporations, even if they have it. I was told point-blank by an FBI agent that they'd never share IOCs with private industry because they were government secrets.
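What acting on shared IOCs looks like in practice can be quite simple. The sketch below is hypothetical: the indicator values, log fields, and hostnames are all invented for illustration, but the shape (a peer shares a set of indicators, you check your own logs against it) is the core of the idea.

```python
# Hypothetical sketch: checking local DNS logs against IOCs shared by an
# industry peer. All domains, registry values, and hosts are invented.

shared_iocs = {
    "domains": {"bad-update-cdn.example", "cnc.invalid.example"},
    "registry_values": {r"HKCU\Software\Run\svch0st"},
}

local_dns_log = [
    {"host": "workstation-12", "query": "bad-update-cdn.example"},
    {"host": "workstation-07", "query": "www.example.com"},
]

def match_dns_iocs(log, iocs):
    """Return log entries whose queried domain appears in the shared IOC set."""
    return [entry for entry in log if entry["query"] in iocs["domains"]]

hits = match_dns_iocs(local_dns_log, shared_iocs)
for hit in hits:
    print(f"ALERT: {hit['host']} queried known-bad domain {hit['query']}")
```

Note that a competent analyst can act on this kind of list immediately, even without the collection infrastructure that produced it, which is precisely the benefit of sharing argued below.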

So what are the pros and cons of sharing threat intelligence in private industry? On the negative side, revealing attack details could cause adversaries to abandon infrastructure that was already under surveillance (unbeknownst to them), thus interfering with an investigation. It could also prod attackers into changing their TTPs, which might foil existing defenses being employed by high-performing organizations.

On the bright side, there's the potential to alert companies to threats they had no idea existed. It could provide extremely helpful information to competent security analysts who didn't have the infrastructure to collect the intelligence on their own, but are in a position to act on it. Third, it helps to raise awareness of the prevalence and scale of intellectual property theft by organized nation-state sponsored (or encouraged) entities. I think you would be hard-pressed to find someone critical of Google's decision to go public with information about Aurora. While it's true they revealed few details, their courage in bringing the information to the public is largely responsible for the level of awareness we have today.

Lastly on this point, although completely public information rapidly loses its value, there is a possible middle ground of sharing between vetted parties. It's also not true that public information is worth nothing. There's considerable value in educating nascent security teams and creating awareness at the executive and board level.

This brings us to the next point, which is each additional party increasing the risk of a leak. That's difficult to argue with, but really, how damaging would each leak be? Both of the examples of leaked viruses were from before 1990. That's not to say they're irrelevant, but the context was much different than today.

At the time a much smaller percentage of machines had anti-virus protection. Hence even if signatures were released quickly, a leaked virus could still infect a lot of machines. Today there isn't complete protection, but with widely installed malware protection and nearly always-on Internet connectivity, it's pretty difficult to catch users flat-footed when signatures are available. Second (and arguably more significant), at the time malicious code knowledge wasn't very public. Today anyone can download Metasploit, or visit The Exploit Database, or any number of semi-public forums, and obtain ready-made exploits for compromising a significant percentage of computers. Security by censorship may have been somewhat effective in the past, but we have long since crossed the watershed for that approach (at least as a primary control).

I don't think blindly applying past approaches without confirming they fit the current problem is wise. We should constantly be questioning the efficacy of our controls. When the cost of imposing a control outweighs the benefit of its restrictions, we should abolish that control. Just because something worked before doesn't mean it will work now, or forever.

If information-sharing isn't the end of the world, why do people speak out against it so stridently? There are many reasons, but one I'm particularly sensitive to is the simple fact that knowledge is power. If you're an expert in a domain, part of what you have that others don't is knowledge. As long as what you know is a mystery to others, and you speak of it in only vague and general terms, you continue to be held above your peers. Continuing to stay on top of a field is demanding. It's taxing to spend the majority of your waking hours researching a topic, and it becomes tedious after a while. As you get older, other interests compete for your time, and younger people with fewer distractions have more time to invest in information-gathering. Hence if you want to maintain your edge, it becomes more and more attractive to simply suppress other people's knowledge by not sharing yours.

I want to be very clear, I'm not attacking anyone personally here. I have nothing but respect for people who have gone before me (and who are coming after me). I'm just pointing out that everyone enjoys being recognized as the best, but staying the best for an extended period is really hard. It can feel like newcomers are getting off easy if they have information simply handed to them, rather than discovering it themselves. It's also easy to lose sight of what benefits the greater good vs. the (perceived) benefit to yourself. I'm simply encouraging everyone to examine their motives and think about how great the actual harm will be in sharing what they know.

Edit: I should point out (apropos of nothing, aside from my nagging feeling of incompleteness) that this exclusionary behavior is typical of any clique, and of many other professions. It's reasonable to make newcomers prove their commitment and competence by clearing some hurdles. Often, though, these barriers go beyond what's reasonable for the circumstances and become merely tools of exclusion, for no reason other than to make the clique seem more exclusive (and hence, worthy of attention). It's this behavior I'm trying to call attention to.

I don't think everyone should set up their Cuckoo instance to stream directly to their blog, and caution should definitely be exercised when thinking about what details to publish in a public forum, but we shouldn't let fear dissuade us from sharing at all.

I can think of several appropriate ways to increase sharing. One at the top of my list should be obvious. It's fun to make jokes about the number of security conferences, but most regular con-goers agree the best part is meeting peers from all around the world. I think conferences are a great place to build networks of trusted peers you can share information with, and solicit advice from. If you connect with peers in your industry, think about what information you can share with them that would be relevant. I bet there's a lot that could be done without giving away a competitive edge. I also bet most international industries are under much greater threat from foreign, rather than domestic competition. So go forth and share, just don't forget the OPSEC.
