Monday, June 21, 2010

Defenders of the Faith

Quite recently, Tavis Ormandy released a 0-day vulnerability in a prominent piece of software. For this transgression, both he and his employer received a good deal of bad press. Sadly, very few in the professional security researcher crowd made enough noise about it; on the contrary, one man in particular came down squarely against him. Thankfully, however, we still have Brad Spengler. Last night he posted what none of us had the courage to say. You can find his post in the Daily Dave mailing list archives: http://seclists.org/dailydave/2010/q2/58

I won't rehash the post; I'd much rather you read it yourselves. But I would like to point out the timeline.

June 5) Tavis contacts Microsoft requesting a 60-day patch timeframe.

June 5-9) Tavis and Microsoft argue about the patch timeframe and are unable to come to an agreement.

June 9) Tavis releases the information to the public.

June 11) Microsoft releases an automated FixIt solution.

Tavis did not "give Microsoft 5 days to patch the bug," as various media outlets claimed.

As a few people (@dinodaizovi, @weldpond) have pointed out, this strikes at the heart of the term "Responsible Disclosure". A clever branding trick by software vendors, the term automatically implies that any other method of disclosure is irresponsible. So we must ask: were the actions Tavis took responsible? Would it have been more responsible to allow a company to sit on a serious bug for an extended period of time? The bugs we are discussing are APT-quality bugs. Disclosing them removes ammunition from APT attackers. If your goal is to stop attacks, and bugs are the supply chain of attacks, then you must make bug and exploit creation prohibitively expensive compared to the return on that investment. This is why OS mitigations are helpful. Removing high-value bugs from the marketplace is what full disclosure is good at.
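To make that economic argument concrete, here is a minimal back-of-envelope sketch. Every number in it (bug price, exploit development cost, per-target value, the post-disclosure "usable fraction") is an illustrative assumption of mine, not data from this post:

```python
# Toy model of attacker return on investment for a single bug.
# All dollar figures and fractions below are illustrative assumptions.

def attacker_roi(bug_cost, exploit_cost, value_per_target, targets,
                 usable_fraction):
    """Expected return divided by investment for weaponizing one bug.

    usable_fraction models how much of the bug's useful life remains;
    it collapses once the bug is publicly disclosed and defended against.
    """
    investment = bug_cost + exploit_cost
    expected_return = value_per_target * targets * usable_fraction
    return expected_return / investment

# A private, high-value bug: the investment pays off many times over.
print(attacker_roi(65_000, 35_000, 250_000, 20, usable_fraction=1.0))   # 50.0

# The same bug after full disclosure: detection exists, stealth is gone.
print(attacker_roi(65_000, 35_000, 250_000, 20, usable_fraction=0.02))  # 1.0
```

The specific numbers don't matter; the shape does. Disclosure attacks the return side of the attacker's business, which is exactly what OS mitigations do from the cost side.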

I'd like to explicitly debunk a couple of myths related to this issue now.

Myth 1) Targets are a commodity. (All targets carry the same value)

At some point, the security posture of common software is no longer about your mother's Windows XP desktop with a CRT monitor from 8 years back. It is not about the money wasted when salespeople's laptops need to be reimaged. It is about real security. It is about the financial information of your public company. It is about the plans for Marine One ending up in the hands of people who shouldn't have them. It is about the stability of our power grid.

This is because when a vulnerability becomes public, it is no longer as useful to serious attackers. Defensive security companies provide detection and prevention mechanisms, researchers provide useful mitigations, and high-end companies are able to arm their response teams with the information necessary to protect their particular environments. The companies with high-value data that are regularly attacked are able to proactively protect themselves. The attackers who have spent significant time evaluating a company's exposure to a particular bug will now find that bug to be much less useful for a stealthy attack. Yes, you may see an uptick in attacks, but you see a downtick in overall target value. The loss due to a 20+ company exploit spree such as "Aurora" is significantly greater than the monetary loss due to low-end compromises, which can be cleaned with off-the-shelf anti-virus tools. No one is persistently using advanced exploitation techniques against low-value targets such as Joe's desktop. These attacks are focused on large corporations, government, and military targets, with the goals of industrial espionage and military superiority.
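The Aurora-versus-commodity claim is, at bottom, simple arithmetic. A toy defender-side comparison, with incident counts and per-incident costs that are purely my own assumptions rather than measured breach data:

```python
# Toy defender-side comparison of aggregate loss. The incident counts
# and per-incident costs are illustrative assumptions, not breach data.

scenarios = {
    # One Aurora-style targeted breach: rare, but catastrophic per event.
    "targeted (APT-style)":     {"incidents": 1,   "loss_per_incident": 10_000_000},
    # Many commodity infections that off-the-shelf AV can clean up.
    "commodity (AV-cleanable)": {"incidents": 500, "loss_per_incident": 1_500},
}

for name, s in scenarios.items():
    total = s["incidents"] * s["loss_per_incident"]
    print(f"{name}: ${total:,}")
# targeted (APT-style): $10,000,000
# commodity (AV-cleanable): $750,000
```

Under these assumptions, even hundreds of cheap cleanups don't approach the cost of one successful targeted breach, which is the trade full disclosure makes.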

Myth 2) Only Tavis knew about the bug.

The media asks, "how could attackers know about this flaw if Tavis hadn't released it?" Every bug hunter knows this question is ridiculous. Security research, like all scientific research, moves like a flock of birds. I'm relatively sure that Leibniz wasn't spying on Newton's work, but they both developed calculus at the same time. They both had the same environment and the same problem to solve, so they developed the same working solution. I'm sure I'm not the only researcher to have lost bugs to another researcher's reporting. Within the past year I have lost several bugs which on the market would have sold for in excess of $65,000. Once the bugs became public, their value dropped to approximately $0 because companies were able to build protections against the vulnerabilities. The bugs that I lost were bugs that had lived for more than 5 years, yet they were discovered independently by me and others within months.

Even if no one else had found the bug, there are other ways an attacker could become aware of it. It would be unreasonable to assume that high-end researchers and their companies are not the targets of espionage. The value of their research is high, and if an attacker can get a free exploit and know that it won't be patched in the next 60 days, that is a win for the attacker. It is unreasonable to assume that a bug is not known to attackers once it is found by a researcher. Tavis has protected high-value targets by refusing to allow an unreasonable timeline for patching. Tavis has devalued the vulnerability by letting companies know about a threat that they otherwise would have been unaware of. Tavis has acted responsibly.
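A minimal sketch of why independent rediscovery is the norm rather than the exception. It assumes each researcher has an independent, fixed chance per year of finding a given bug, which is a gross simplification, and the numbers are mine, not measured collision rates:

```python
# Probability that at least one other party independently finds the
# same bug. A toy independence model; p and n below are assumptions.

def collision_probability(p_per_year, researchers, years):
    """P(at least one of `researchers` finds the bug within `years`),
    assuming an independent per-researcher, per-year probability p."""
    p_one_never_finds = (1 - p_per_year) ** years
    return 1 - p_one_never_finds ** researchers

# Even a small per-researcher chance compounds quickly:
print(collision_probability(p_per_year=0.05, researchers=10, years=5))
# ~0.92, consistent with 5-year-old bugs being rediscovered within months
```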

The long and short of this is that when only a handful of people have information, that information is very valuable and very useful. When everyone has this information, everyone can use it, but its value decreases significantly. Tavis simply devalued this flaw. Yes, what Tavis did means you might have to reimage your mother's computer when you visit at Thanksgiving. But also, what Tavis did means that you won't think twice about whether or not the power will be on when you get there. Despite branding, what Tavis did was responsible. In this case, "responsible disclosure" wouldn't have been responsible.


5 comments:

Monarch said...

It's rather odd that Tavis didn't sell the vulnerability to iDefense or ZDI. By selling to those companies, the high-value targets you speak of would gain protection. iDefense tells the companies about the risk and how to mitigate it. ZDI offers an IDS to protect these high-value targets.

Sure, this assumes every high-value target is an iDefense or ZDI customer, but shouldn't they be?

PS: no bug is worth $65k anymore...

David Sharpe said...

The point another person made in the DailyDave email chain, that TavisO's disclosure was possibly not in accordance with Google's Employee Code of Conduct policy, might be a problem for him. Obviously that is an internal Google matter. My point is that TavisO's actions might not be the best model for others who want to disclose in the future, since you run such a risk of damage to your reputation, plus possible sanctions at your day job.

Question: Sourcefire VRT runs a somewhat "liberal" shop HR-wise, from what I understand, but even in your environment, would conduct that puts possible strain on business relationships be a problem?

alex said...

Hi!

Love some of the fresh arguments here. Would love to know a couple of things:

A.) "This is because when a vulnerability becomes public it is no longer as useful for serious attackers"

Do you have a data set you're using to support this claim? For example, do you have some information that uses a rational scale for describing threat capability, matches a frequency component for particular vulnerability uses to that population, and then compares that frequency component to a data set that describes known 0-day use in data breaches?

B.) "The companies with high-value data that are regularly attacked are able to proactively protect themselves."

Would like to see how you define "high-value data", "regularly attacked" and "proactive protection". My experience is that there isn't necessarily always a correlation between "high-value data" and ability/willingness to create "proactive protection".

Also, if you have good definitions there, do you have supporting data that says "these companies patch within some time frame" where "some time frame" can be compared against data for "uptick" of attacks?

C.) "The loss due to a 20+ company exploit spree such as "Aurora" is significantly greater than the monetary loss due to low-end compromises which can be cleaned with off the shelf anti-virus tools."

Reviewing the data sets I have at my disposal, I'm seeing:

1.) I don't have a good estimate of hard costs for Aurora.
2.) No data supporting the claim that breaches of significant value are predominately caused by tools that cannot be "cleaned with off the shelf anti-virus tools." Rather, I'm seeing data that supports the notion that for a significant portion of data breaches, the effort to prevent them could have been classified as "simple and cheap" (source: VZ DBIR).

Finally, I think I would have difficulty asserting that we should *only* care about "large corporations, government, and military targets with the goals of industrial espionage and military superiority." And frankly, I find it really, really strange that a company whose past goodwill (for me at least) rested on the fact that they could provide the SMB market with sophisticated tools to defend themselves would make such an assertion. Off the top of my head, I can think of hundreds of millions of records exposed by data breaches that came from organizations other than those you seem to be designating as important. So no, not all targets have the same value...

Nikhil said...

Agreed..

I'd like to add one more point... A publicly released exploit forces administrators around the world to take the vulnerability seriously... Otherwise every security advisory says... *soft is unaware of the vulnerability being exploited in the wild...

Also, Tavis tried to defend the MS guys in the end... lol

cw said...

There seem to be many angles at play. Various parties tend to focus on roughly one main angle, assigning additional relevance to the elements most salient to their commercial or organizational interests. It seems to me that there are many shades of grey brought up by this issue, and by considering many points of view we might better understand the pros and cons of all the disclosure methods at play.

I'm aware of a particular organization that has experienced over 350 malware infections in 2010, many caused by drive-by exploits, the majority of those being publicly disclosed exploits or ripped from Metasploit. Some smaller percentage is caused by social engineering techniques. We all know that people should patch, but if economic and organizational dynamics do not create an environment where that happens the way it should, loss occurs. Whether or not these 350 incidents (some of which may have been classified as data breaches) cost more than a targeted attack remains to be seen. The value of the asset is going to change depending upon the attacker's motives. Since most orgs are not too eager to come forward and disclose their compromise counts, whether caused by targeted/APT-style attacks or by commodity malware (IF they even know), I wonder how we can obtain accurate metrics and create a meaningful comparison.

Despite a variety of concerns, I applaud the effort to offer a quantitative perspective on this phenomenon, but I am not sure who is in a position to have accurate metrics, which may then leave a lot to speculation and opinion.

I will say that I think Tavis's contributions offer far more benefit to the world than harm. There are those that would vehemently disagree with me and who would generalize in a black-and-white manner that publishing exploit code only helps the bad guys, which I find to be an overly simplistic assessment of a nuanced, complex situation.

The amount of pontification on this topic doesn't appear to be slowing down any time soon.