
Like more than a few others, I experienced the infosec outrage against Mary Ann Davidson, Oracle’s Chief Security Officer, before I actually read the now-redacted blog post. After taking the time to read what she actually wrote (still available through Google’s web cache), I think there’s more discussion to be had than I’ve seen so far.

First, it seems clear to me that the reaction is as much to the condescending tone of the blog post as to the content. Oracle’s CSO manages to talk down to customers and the security research community alike, from the title “No, You Really Can’t” to implications that security researchers are like ‘boy bands.’ All of this language and tone generates page views and emotional reactions, but I’m more interested in the actual content of what’s being said.

After reading the post, I found myself wanting to rewrite it in a more professional tone to see what it would look like. I haven’t done that, however, because it’s a lot of work to produce content that’s not really mine. Instead, I’ll try to distill the actual content for discussion.

Salient Statements (no judgement yet)

  • Customers are worried about breaches.
  • Reverse engineering vendor code to find vulnerabilities is not the most effective way to protect your organization.
  • The Oracle EULA prohibits reverse engineering.
  • Customers break this provision in the EULA enough for the CSO to be aware of it.
  • Oracle has a “robust” software assurance program that obviates the need for third-parties to find vulnerabilities.
  • A high percentage of the reported issues are false positives or already discovered internally.
  • The reverse engineering clause in the EULA isn’t about keeping security researchers out; it’s about intellectual property.
  • Bug bounty programs are not an economically advantageous vendor investment.

There are a few things in here that we can take at face value. Yes, customers (and vendors, I’ll add) are worried about breaches, and it’s a fact that Oracle’s EULA, along with many others, prohibits reverse engineering of code. On to more debatable points…

Good Risk Management

Jennifer Granick (@granick) recently pointed out in her Black Hat keynote that human beings are pretty bad at risk management, citing that we’re afraid of sharks but not cows, while cows actually kill more people than sharks. Davidson’s point that spending time finding vulnerabilities in Oracle’s code isn’t the most effective means of securing your organization seems accurate at face value, but it implies that a real-world tradeoff is going on, i.e., that researchers are spending time on reverse engineering instead of other, more effective security measures.

I won’t declare that the opposite is categorically true, but it seems dubious that there’s a one-for-one tradeoff here. More likely, this reverse engineering is being done by dedicated security researchers who are paid to do just this kind of work, or who pursue it as an extracurricular activity. That, frankly, raises the question of how this time is being funded by the market, and why.

We might legitimately ask ourselves as an industry why we have so much third-party security defect discovery going on.

Vendor Software Assurance and Transparency

There’s a common claim that this research is required to ensure secure software in the market. The counter-argument put forth by Davidson is that Oracle does this work already, and that only Oracle can do it effectively because of their inside knowledge of the code. I actually really like this point. It’s true that the original vendor can perform more effective software assurance than an outsider. The problem is that most simply don’t.

While Oracle may claim to be an exception, the reality for most vendors is demonstrated by the continuous publication of newly discovered vulnerabilities. Customers can actually make a difference here by following Davidson’s advice and asking about software assurance programs when they make a purchase. Sticking a question about how the vendor ensures the security of their developed code in every RFP you issue will make a material impact.

Still, if we assume that Oracle does actually do an exceptional job on software assurance, then the impact of security defects found by reverse engineering should be minimal, right? It’s not, and that is, in part, because of…

Low Quality Defect Reporting

Steve Christey Coley (@SushiDude) gets the credit for sparking this paragraph:

If you’ve ever worked in a tech support or QA role, you are intimately familiar with the impact of low quality defect reporting. “It’s broken” simply doesn’t cut it, and neither does cutting and pasting an error message with no context or reproduction.

With a customer base the size of Oracle’s, triaging and responding to security defect reports might very well be a significant time-suck. In fact, it really must be for the CSO to spend time penning individual letters to customers.

Is it appropriate for a vendor to reject a low quality defect report and ask for more details? Doesn’t that apply to security defects as well, or is there a requirement for a higher standard of care?

The Value of Bug Bounties

And how does someone learn what a good defect report is anyway? This is, perhaps, a side benefit of the economic model of bug bounties. When someone is looking to get paid, they’re more likely to read and follow the guidelines for submission.

It’s worth drawing a distinction between community bug bounty efforts, like Google’s Project Zero and TippingPoint’s Zero Day Initiative, which aren’t specific to one vendor, and vendor-driven programs. Davidson’s argument on economics seems targeted at vendor-driven programs. Her point is that her money is more effectively spent hiring additional internal staff, which is consistent with her claim that only internal staff can be effective at security testing.

Still, I can’t help wondering about the economics of off-loading all those low-quality security defects weighing down customer support into a ‘pay-to-play’ bug bounty program, where you can more effectively enforce standards. Even if that works out to be more economically sound and drives higher customer satisfaction with support, there’s still the pesky problem of the EULA’s prohibition on reverse engineering.

Protection of Intellectual Property

And this is really the point. That clause in the EULA that inhibits security research is there for an entirely different purpose: to protect intellectual property. Security research isn’t even present in the threat model that drove the inclusion of that clause. Oracle, like many, many other vendors, is more worried about competitors stealing its capabilities than about researchers finding vulnerabilities.

That point is also where there’s room for improvement. It won’t be an overnight change, but there’s no reason that security-conscious vendors can’t move in a direction that supports security research while maintaining the protections for intellectual property.

 

Title image courtesy of ShutterStock.com

Comments
  • Coyote

    Their only valid point is the EULA. But there is some hypocrisy that goes along with it (below). Aside from that, it is utter rubbish. I realise that they didn't write this article, but since I'm responding to your interpretation of their points, I'll treat it as if it were their points (which means some of this might be unfair to them – but they know how to play unfairly too).

    The fact remains that reverse engineering has its uses (many). I also find the following amusing:

    "and that only Oracle can do it effectively because of their inside knowledge of the code."

    Unless, of course, it is reverse engineered well enough. And then consider the experience and knowledge of the programmer(s) versus that of the one doing the RE. Also, there are ways to make code much more difficult to analyse, but then refer back to the part about experience and knowledge. Yes, it might be that it refers to something else they don't have access to, but you can argue this (and many other things) all year long, and it is (at times, if not always) hypothetical and also only one variable of many. But not having every variable doesn't mean you can't find a flaw. This goes for other things in life – it is not only in software.

    "Oracle has a 'robust' software assurance program that obviates the need for third-parties to find vulnerabilities."

    I suppose they have a bugzilla (or equivalent) for no reason. They are sure of themselves, but the fact that they do have bugs (as all software does) should be kept in mind.

    Lastly, on the subject of the EULA: even though they can get away with it legally (because they took over Sun Microsystems), the fact remains they did NOT create Java, and they did not create Solaris (or anything else they inherited from Sun), even though the source comments (the source is available in these cases) claim they wrote it (maybe it's worded differently – I can't recall – but the point is the same; it isn't their own work). I can't recall the rather convoluted history of Java, but I do seem to think it was Sun's; Solaris absolutely is from Sun.

    Oracle doesn't exactly have a great record, though, and I suppose that this is just one example of others.

    • terlin

      Thanks for the feedback. Your point about reverse engineering is a fair one. It's a skill, and there are people who excel at it. Of course, if Oracle's assurance program is really robust, they should be employing people with those skills directly. There's no information in Davidson's blog post on those kinds of details, however.

  • Aodhhan Murray

    It is funny to believe Oracle (with an irritatingly poor record of secure coding) can efficiently and accurately find problems in its own code. Especially when you consider that (roughly) 95% of programmers have no in-depth knowledge of how their code works at low levels (i.e., memory and processing), and 99% of programmers have never taken a 200-level machine language course or attempted to reverse engineer code. So given Oracle's coding record, along with the typical programmer's low-level knowledge… Oracle would be doing well to reverse engineer a cheese pizza.

    It's nice of Mary Ann to rant out a bunch of 'security' lingo and buzzwords, and to hint at using other methods of securing a corporation's enterprise network. Apparently she was given the job not based on academic research, systems design, or anything else that would prepare her for a high-level security role. By the way she put the words together, I can imagine most of her security background was garnered by half-grazing through security articles to get some sort of idea of the true nature of the beast. Not everything can be learned from best practice guides, or from those subordinates who have much more experience.

    Of course we should use defense-in-depth techniques. However, there are products, such as Oracle's, whose vulnerabilities make so many DiD strategies moot, because you just can't sanitize/filter every browser/client input/query and still have a reliable product for customers to use. Not to mention the fact that you still have to keep things flexible enough for programmers to write code. Wouldn't it be nice if we could limit SQL programming to 50 secure commands with no crazy methods or options? In this case, you can't just code for the database product itself; you have to code for those who will write applications that use the database. Microsoft and other DB vendors figured this out a decade ago.
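
    To make that last point concrete, here's a minimal sketch of the difference, using the standard JDBC API and a hypothetical users table (nothing Oracle-specific is assumed): string-built SQL leaves all the filtering to the application, while a parameterized query sends the user's input to the database separately from the SQL text, so it is never parsed as SQL.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class QueryExample {

            // Vulnerable: the application must sanitize every possible input itself.
            // A name like  x' OR '1'='1  changes the meaning of the statement.
            static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
                String sql = "SELECT id, name FROM users WHERE name = '" + name + "'";
                return conn.createStatement().executeQuery(sql);
            }

            // Safer: the value travels separately from the SQL text, so the
            // database never interprets user input as part of the query.
            static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, name FROM users WHERE name = ?");
                ps.setString(1, name);
                return ps.executeQuery();
            }
        }

    Parameterization doesn't fix flaws inside the database engine itself, but it illustrates why pushing safety into the API beats asking every application to filter everything.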

    But there is one thing to say in her defense. If, during the systems development lifecycle, you continue to approve the use of Oracle or a solution which only uses Oracle, then you are just as much to blame as Mary Ann… and you know why.

  • terlin

    You bring up an interesting point about what skillset a CISO should have these days. It's probably a subject for a different blog post. The real question is, given this record in security, how is Oracle such a successful company?