The Great Security Pill Scam

2012-08-04 16:36:24 by chort

What do Information Security and weight loss have in common? Many people who pretend to be interested in each try to get desirable results without making any substantial changes. I recently posed the question "would you hire a trainer & ask them to make you skinny & fit, as long as no exercise or diet change?" It was rhetorical of course, but one of the replies pointed out that most Americans would do just that.

Sadly, I feel like I spent several years running late-night infomercials, selling expensive gadgets to people who wouldn't really use them. Sadder still is the prevailing attitude in the IT industry that buying a product is the solution to every tough problem, because it's easier to whip out the corporate checkbook than it is to solve the problem in a thoughtful way. The problem, as most of my peers know, is that these products rarely solve anything on their own. The only benefit is an organizational perception that "something has been done." When security incidents happen, because the issue wasn't solved comprehensively, everyone is shocked and loudly protests "but we were following industry best practices!" The people who say things like that actually believe it. How can we change the tune?

People who make decisions want a nice summary of the problem and the proposed solution, so they can make the decision and get on with their lives. Corporate culture rewards people who are decisive, not people who agonize over decisions by collecting vast amounts of input and processing it methodically, with a deep level of domain expertise. The more succinct an argument is, the less ambiguity there is over whether it's a good course of action. No one making a decision wants to hear "well, it depends..."

Thus it's very easy to understand why corporate directors often choose to buy products rather than hire people or invest in training. It's very easy (in theory) to know what you're getting for your money with a "Next-Gen Firewall." Helpful marketing people told you this product blocks all the bad stuff from coming into your network! They couldn't be lying, because false advertising is illegal and someone would stop them, right? You could easily imagine people think the same thing about diet pills, i.e. "they must work, otherwise they wouldn't be allowed on TV."

I think a major part of the problem is how far the effect is removed from the cause. Even if I were to stop eating junk food and carbs, and start working out regularly, it would take at least several weeks to notice a result. I'd also have to continue doing that for the rest of my life in order to stay fit and maintain a healthy physique. In other words, I'd have to be thinking about being healthy constantly. I don't really want to think about being healthy, I just want to look good to other people. That is really what organizations want from security: they don't want to think about being secure, they just want to look good to the press, competitors, customers, regulators, etc.

In theory, corporations (for-profit and otherwise) and government agencies have known all along they should aspire to have defensible infrastructure. When they hire people to help them out, even if we generously assume those people knew what they were doing, they'd come up with a lot of suggestions for how the organization should change its lifestyle to be more secure (healthier). Confronted with the choice between changing behavior and being unhealthy, the organization would then start looking for the easiest things it could do (not the things that would bring the greatest change). This leads to half-measures (I'll switch to diet soda) and improbable quick fixes.

This is where the security vendor industry comes in. People trying to make money usually aren't blind to what their customers want. Vendors who are (hello, RIM!) quickly go out of business. So to stay alive as a security vendor, you pander to what the customer thinks they want. This resulted in the security appliance market of magic boxes that you "simply plug into your network and they solve the problem." This is not to say that security appliances are fundamentally bad, but the way they are marketed is. Unfortunately, if they were marketed truthfully (you will need to hire an extra person to become an expert in this technology to use it effectively), no one would buy them.

I worked at several such security vendors for years, as an implementation engineer and later a sales engineer. While I tried to show customers how they could use the tools to their fullest extent, most of them just wanted a quick configuration for "best-practices" so they could plug it in and forget about it. The teams were generally too busy to spend any time learning or maintaining the products and it was easy to tailor sales pitches to that.

Now that I'm the architect and engineer trying to solve problems, I can see the issue distinctly. Often we will identify an area of great risk that requires attention. We come up with a direct solution that in part involves a change in behavior by individuals or business processes. These solutions often meet fierce opposition. The funny thing is, everyone agrees that security should be a big priority and that everyone else should be doing more to improve security, but when it comes down to them personally doing anything differently, it's unacceptable. When a change in behavior is proposed, it's rejected.

Mind you, security is a big concern, but all the accountability is put on the security department, and they're expected to magically solve problems without doing anything noticeably different. The security team is the personal trainer, and the expected function is to prescribe diet pills that will magically make departments shed risk and build defensive posture, all for the simple effort of generating a PO. There is a prevailing attitude that any perceptible changes or disruptions are inherently bad and should be avoided at all costs, even if the cost is continuing to ingest partially hydrogenated fats that will eventually lead to a heart attack.

The problem is that invisible controls are largely useless. We see this all the time with things like email that's "encrypted" but doesn't require a password to decrypt, penetration tests with the scope limited to only a few IP addresses, vulnerability scanning that's limited to only a tiny subset of possible tests, patching that's limited to only critical patches and only non-critical systems, security reviews that are conducted so late in a release process that they can't require any substantial changes before release, etc. As intelligent security professionals who think critically and do vast amounts of research, we know these steps are as effective as switching to diet soda, buying "toning" shoes, and eating a salad drenched in oil instead of fries. Unfortunately that doesn't matter much, because other people perceive these steps as "doing something" and so it's socially acceptable. Then these same people wring their hands and pretend to worry about how obese Americans are getting and how many databases LulzSec put on Pastebin.

The great disconnect is between intellectually acknowledging that something could be done in theory, and actually witnessing that thing being done. Security academics have known for years that MS-CHAPv2 has flaws, but it wasn't until researchers demonstrated just how easy it was to attack in practice that people started to take note. The same can be said of insecure cookie usage and Firesheep, or SQL injection and sqlmap, or whatever your favorite example is of a well-known flaw being animated and suddenly gaining credibility with the masses. The incredible frustration is that very intelligent people do a lot of research and warn about a problem, but people think their personal beliefs (and they are just that, beliefs) are just as valid as the researcher's conclusions, until it's made irrefutable.
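The SQL injection case is a good illustration of why demonstration beats warning: the flaw is trivial to show. Here's a minimal sketch in Python using an in-memory SQLite database (the table and inputs are invented for illustration), showing the string-concatenation mistake that tools like sqlmap automate the discovery of, next to the parameterized query that closes it.

```python
import sqlite3

# Throwaway in-memory database, purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

# Vulnerable: attacker-controlled input is spliced into the SQL string,
# so the input can rewrite the query itself.
user_input = "alice' OR '1'='1"
query = "SELECT * FROM users WHERE name = '%s'" % user_input
rows = conn.execute(query).fetchall()
print(len(rows))        # 2 -- the OR clause matched every row

# Safe: a parameterized query; the driver treats the input strictly as
# data, never as SQL syntax.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows_safe))   # 0 -- no user is literally named that string
```

A warning memo about "unsanitized input" is easy to dismiss; watching the first query hand back the entire table is not. That gap between the abstract warning and the live demo is exactly the disconnect described above.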

Just in the last few days I've realized the incredible burnout people suffer in the security industry: despite pouring their every waking moment into researching security topics, someone from another line of business thinks they know better, despite having done no research whatsoever and having picked the belief that causes them the least personal discomfort. It feels like no one believes they're vulnerable to bullets until they're actually shot by one. Sadly, if that ever happened, they'd blame the failing on their security guards, even if the guards had been begging them for years to wear a Kevlar vest.

I wish I could say I had a solution, but I don't think there is one. Clearly we need to resist attempts to buy lots of "magic" technology, and instead try to steer budget into hiring curious and thoughtful people. This means we also need to move away from our mental model of security engineers as people who run firewalls and web proxies, towards an idea that security engineers implement automation and analyze data. The only way to get people's attention is to demonstrate, over and over again, how the flaws in their behavior result in negative consequences. That means security practitioners need to know the business processes and application architecture just as well as, if not better than, the people who designed them. You can't get there by buying appliances.

Ultimately it's unreasonable to think people will change, even if confronted with reality. That doesn't mean we give up trying, but behavior modification by education shouldn't be the primary goal. The only way to effect substantial change is to be embedded in every process. This is uncomfortable for every team that has to accommodate new requirements, but that's too bad. You can't lose 50lbs by eating a bucket of KFC every day and wishing to be skinny. You can't get secure by making it the security team's problem.
