The Problems in Certifying Software Safety

2010-11-03 14:38:57 by chort

I just finished reading @TanAtHNN's 1999 paper contrasting the inspection of electrical devices and safes with that of software and information security products (thanks to Josh Corman for bringing it up). The paper pointed out failings of prominent technology associations in the area of certification, and pointed to encryption standards (such as FIPS) as examples of how it could be done right.

Overall I think the paper raises good questions. You would be hard-pressed to find people in the industry (especially security researchers) who don't think companies should be held to a higher-than-current standard for information technology. I believe the paper comes up a bit short, however, in recognizing the differences between physical products and digital products.

Inspecting and rating safes and vaults is a fairly straightforward process (not to marginalize engineers and inspectors; I'm sure the work is demanding). You have a box, with a door, secured by a lock. There are only so many ways to physically approach such a device, and the technology behind it doesn't evolve very quickly. Unless I've missed something, metallurgy doesn't experience weekly breakthroughs, and hinge mechanisms aren't revolutionized every year. Similarly, encryption standards do not change often, because they're judged to be sufficiently safe for a period of time, which is why they get adopted in the first place.

Software, on the other hand, evolves hourly. I read recently that the proof-of-concept for Twitter was constructed in a mere six weeks. Everyone in software has worked at a place where wizards took home a tough problem for the weekend and came back on Monday with working code. Even the largest, sloth-like software companies are turning out a high volume of patches and updates on a monthly basis. If you're in software and haven't done something new in six months, many will consider you dead.

When working with designs that are rarely changed, it makes a lot of sense to inspect each iteration at great length. The benefit/cost ratio of how long your product will be on the market vs. how long it takes to test is heavily in favor of the manufacturer for physical products, such as electrical adaptors and personal safes.

The time to inspect software completely ranges from just as long as, to very much longer than, that for physical devices. This is due to the complexity of the components and the myriad potential failure points. A safe might have several thousand pieces; software has hundreds of thousands to millions of lines of code. A physical device has a very limited number of input points; software has almost incomprehensibly many.
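The gap between the two input spaces can be made concrete with some back-of-the-envelope arithmetic (the specific numbers are illustrative, not measured from any real product):

```python
# A safe's combination dial: three numbers from 0-99 gives
# 100**3 = 1,000,000 possible inputs -- exhaustively testable.
safe_states = 100 ** 3

# A small program with just 40 independent boolean options already
# has 2**40 (over a trillion) configurations to exercise, before
# we even consider free-form input fields.
software_states = 2 ** 40

print(safe_states)                     # 1000000
print(software_states)                 # 1099511627776
print(software_states // safe_states)  # roughly a million times larger
```

Exhaustive inspection stops being a realistic option almost immediately once the inputs start combining.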

OK, so the analogy was bad, but shouldn't there be some kind of inspection for software? Instinctively I think yes! My biggest gripe with software is that I often find problems that I think should have been obvious to test for. I'm not talking about obscure flaws where you have to manipulate input in ways that would never happen during "normal" use; I'm talking about perfectly reasonable combinations of actions that result in disaster. I think there should be a digital equivalent of Aberdeen Proving Ground for software. Your code should be virtually run over by tanks, made to climb steep slopes at one-half normal power, struck point-blank with high-velocity projectiles, and sent across rivers with nothing more than a snorkel. In short, you should have to prove your software can survive actually being used.
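One cheap way to approximate that proving ground today is random-input fuzzing: hammer a function with garbage and see whether it fails cleanly or falls over. A minimal sketch, assuming a hypothetical `parse_record` function under test (the function and its expected failure modes are made up for illustration):

```python
import random
import string

def parse_record(line):
    """Hypothetical parser under test: expects 'name:age' records."""
    name, age = line.split(":", 1)
    return {"name": name.strip(), "age": int(age)}

def fuzz(fn, trials=1000, seed=42):
    """Feed fn random printable strings; collect anything that blows up."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        line = "".join(rng.choice(string.printable)
                       for _ in range(rng.randrange(0, 40)))
        try:
            fn(line)
        except ValueError:
            pass  # rejecting bad input with a clear error is fine
        except Exception as exc:
            failures.append((line, exc))  # anything else is a finding
    return failures

crashes = fuzz(parse_record)
```

It's crude compared to driving tanks over things, but it catches exactly the class of bug I'm complaining about: perfectly plausible input that was never tried before shipping.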

The problem is: how can this be done in a way that supports rapid evolution, frequent inspection, and wide application, all while still attaining a meaningful level of assurance? This isn't an easy task, as the debate around PCI-DSS clearly shows. Many people feel that PCI-DSS doesn't specify stringent-enough measures to offer meaningful protection, while others believe it is overly specific and diverts attention from more critical areas. Clearly it's very difficult even to come up with industry-crafted regulations that call out what the appropriate safeguards are.

I find myself going back to the insurance component of Tan's article. It seems to me that an awful lot of money is spent on protections that are "required" (by policy, regulation, or whim), but there isn't a lot of thought being put into what the return on investment is for them. Decision-makers in information technology look to external sources, mainly large analyst firms, for guidance. It's not clear that those analysts are anything other than the illegitimate children of vendor-paid lobbyists and soothsayers. It does not seem like there's any good data on what kinds of investments (training on secure coding, implementing IDS, erecting firewalls, etc.) actually pay out in the long run.

My opinion is: direct financial incentives drive behavior. What are customers most outraged by? That vendors and providers are unaccountable. I think the contractual and legal systems should support customers suing over defective software. Right now vendors get off the hook by writing up EULAs which basically say: you can only use this how we want you to, and if anything goes wrong it's your fault. WTF is that?

BMW can't tell me which roads I'm allowed to drive my car on, and if the axle breaks in half for no apparent reason, you'd better believe I'm going to sue. Strangely, in the software world we accept not only those restrictions, but we just grimace at the broken axle and go pay another company for their axle-dolly to strap under our vehicle. Can you imagine the stares you'd get if you rolled a car up to the valet stand at a restaurant with a furniture dolly underneath? We should be that embarrassed to slap anti-virus clients on anything with a processor and RAM.

Once customers are able to sue for defective software products and services, companies will need to figure out how to defend themselves. They still won't know what the most effective defense is, so it would be easier to just buy insurance. I think that's what they should do. Insurance companies will want to know which potential customers are the riskiest, and which are the safest, so they will have a huge incentive to collect data, conduct tests, and determine what measures are useful for lowering liability. At that point, my friends, we will start getting durable software.
