Disclosure notices that some company, website, or server has been compromised are growing in number. But there is something rather disconcerting about these notices that has been bothering me for quite some time, and it has prompted me to write about it.

The recent Adobe breach, coupled with a forensic audit I just completed, sent my mind racing down the path I'm about to share with you. Although there are disclosures that something has been breached, the notices are always lacking information, specifically, the details.

Now, I understand the need to keep some things in security close to the chest, so to speak, but there is a looming, unforeseen threat in not fully disclosing: when we withhold the details, we open the door to a further breach of untold proportions.

Take, for example, the Adobe breach. We were told that there was a compromise of a server that allowed access to the source code of Adobe products, along with usernames and credit card information of users. But little was said about the source code access.

One might think that the source code is simply the programming code that makes the Adobe product what it is, the part that makes it function. But Adobe's software, like much other self-updating software, also has a component that pushes updates to its users. Java, Microsoft products, and many, many more software providers do this on a regular basis.

Let’s take a walk on the wild side of my brain for a minute:

Let’s pretend I’m a hacker and I break into Oracle or Microsoft (pick anyone you’d like). I gain access to the source code which includes the programming code that allows the updates to be pushed to users. What do you suppose I could do with that code?

I could use it to push an update of my own, couldn’t I?

I could create a piece of malware so well crafted that it makes ransomware look like an amateur coded it. Or I could push a small update that implants a backdoor, rootkit, or some other method of access into the user's system. And if you think that isn't possible, look at how long Flame and Stuxnet went undetected!
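To make the risk concrete, here is a minimal, hypothetical sketch, not any vendor's actual mechanism, of the difference between an update client that blindly trusts its server and one that verifies a signature before installing. The HMAC shared-key check below stands in for the asymmetric signatures real update systems use; the point is that an attacker who steals the signing key or can alter the client code (exactly what source-code access threatens) defeats even the checking version.

```python
import hashlib
import hmac

# Hypothetical shared signing key. Real update systems use asymmetric
# (public-key) signatures, but the principle is the same: an attacker
# who obtains the key or the updater's code can push anything.
SIGNING_KEY = b"vendor-secret-key"

def sign_update(payload: bytes) -> str:
    """Vendor side: sign the update payload before publishing it."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def naive_client(payload: bytes) -> bool:
    """Installs whatever the update server delivers, no verification."""
    return True

def checking_client(payload: bytes, signature: str) -> bool:
    """Installs only if the signature matches the payload."""
    expected = sign_update(payload)
    return hmac.compare_digest(expected, signature)

legit = b"vendor_update_v11.bin contents"
malicious = b"backdoor dressed up as an update"

sig = sign_update(legit)
print(naive_client(malicious))          # True: the attack succeeds
print(checking_client(legit, sig))      # True: real update accepted
print(checking_client(malicious, sig))  # False: tampering detected
```

The `compare_digest` call is used instead of `==` so the comparison runs in constant time, which avoids leaking information through timing.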

I'm growing extremely concerned about this insistence on secrecy in disclosure.

In the original blog post regarding the breach, Adobe published a link to guidance on locking down ColdFusion. The ColdFusion 10 Lockdown Guide contains more than 56 pages of recommendations for securing ColdFusion installations, including one that calls for creating a dedicated, restricted user account for IIS server installations. How many of the server admins reading this post have done that?

But it's not just server admins who need to worry in this situation. What about average users of Adobe products? How does the average Joe or Jill consumer know that the update notice appearing on their device is really from Adobe? Or Java, or Microsoft, for that matter?

Nearly eight months ago, a client contacted me after receiving a Java pop-up saying an update was available. When he downloaded it, it was anything but a legitimate update. Unfortunately, he did not save any of the files or forensic data, so I was unable to review what happened or how it was done. But that one single incident came back to mind when I read about the Adobe source code breach.

Microsoft is also notorious for non-disclosure in its patches and updates. Nebulous statements like "may give a remote hacker access to…" or "could cause an elevation in privileges…" tell us nothing about what is actually being orchestrated. And what if disclosing the full details could lead a security researcher to find another, potentially more dangerous flaw?

The malicious powers that be will always have the inside track on which vulnerability accomplishes which end. It's the system admins, general users, and security professionals who don't have the time to scour hacker forums for details that suffer. Without full disclosure of the details, we're put at a direct disadvantage.

As for the Adobe breach, I'm recommending that all my students and clients go directly to the Adobe page to download the latest version of the software rather than trusting the update function at this point in time. I personally do the same with Java updates: if a pop-up tells me there's an update, I go straight to the Oracle website and get it.
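Where a vendor publishes a checksum alongside its installers, you can go one step further and verify the manually downloaded copy before running it. Here is a minimal stdlib-Python sketch; the filename and digest below are placeholders, and in practice the path is your downloaded installer and the digest comes from the vendor's own download page, not from the pop-up.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Stream the file in chunks so large installers need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare the local file's digest to the one the vendor published."""
    return file_sha256(path) == published_digest.strip().lower()

# Demo with a stand-in file; substitute your real installer and the
# checksum copied from the vendor's site.
with open("demo_installer.bin", "wb") as f:
    f.write(b"pretend installer bytes")

good = hashlib.sha256(b"pretend installer bytes").hexdigest()
print(verify_download("demo_installer.bin", good))        # True
print(verify_download("demo_installer.bin", "deadbeef"))  # False
```

A matching checksum only proves the file is the one the vendor published; it is no substitute for fetching both the file and the checksum over a trusted connection to the vendor's own site.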

A little healthy paranoia can be a good thing, especially when we're being left in the dark even after there has been a disclosure. It's a philosophy I recommend all administrators, technicians, and security professionals adopt now, rather than later.


About the Author: Debbie Mahler (@DebbieMahler) is the CEO of Internet Tech Specialists. She is an online instructor for Ed2go, a division of Cengage Learning and a global education provider, and teaches Introductory and Advanced PC Security courses for English-speaking colleges and universities.


Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.


  • http://www.allcotmedia.com Dawn

    That is a GREAT point… always go directly to the site for updates. Also, be careful if you access the site via a Google search… My husband and I both made an error when installing the latest edition of Firefox. Another company/person/a hacker actually garnered the number one slot in a Google search for Mozilla Firefox. And the update *looked* legit. It wasn't until *after* I installed it that I realized I did not download FF, but malware. I felt very silly until I found out my husband, a week earlier, made the same error. (And he's in the IT field!!) My gut instincts told me, when I hit install, that something wasn't quite right, but I didn't follow those instincts. I would say, to end users… if you're not sure, ASK someone who knows, and don't take action until you find out. I know IT and security is a science, but I want to bring a bit of new age mysticism in it. For end users: Trust your internal barometer and your instincts. If something doesn't feel right, don't click to download/open the file/whatever it might be. That and good old fashioned common sense will take you at least part of the way toward protecting yourself… for the rest, I recommend an expert like Internet Tech Specialists. :)

    • http://internettechspecialists.com Debbie

      That's funny you mentioned the intuition part Dawn. A lot of my students will post in the discussion area that they just "felt weird" about clicking on something but did it anyway. Maybe my next research project should be if malware carries negative energy with it that creates that intuitive feeling. That scientifically could be measured. More to ponder!

  • Nemo Dat

    Great post!

    56 pages of recommendations for securing installations???
    Madness. Sysadmins should send invoices for their involuntary time!

    Which prompts me to have a different perspective: As a user, software is often a time squandering, self-serving bureaucracy.

    1) Layers and layers of secret programs create security risks exponentially. They add obfuscated complexities which have little to do with the actual intended purpose of computers – as seen from USERS: An efficient productivity- and dignified communications tool. If through purpose creep we are more dependent on public-private partnerships and their possible ulterior motives, computers are not fit for their intended purpose: Money back please!

    2) Engineers ought to be able to design computers without built-in "vulnerabilities" (Linux MkIII?). Customers shouldn't need to become geeks and spend their resources to tweak things daily. Let me ask this question, out of the box: "Why do we need gazillions of risky updates (per day) in the first place?" Are we pushing built-in obsolescence and coercive dependency to a diabolical extreme? Intuitively, should we not go back to basics instead?

  • http://internettechspecialists.com Debbie

    Nemo, Thank you for the insightful post. I totally agree with your statement, "computers are not fit for their intended purpose" and would add that the same is true now for smart phones, tablets and any other connected device. I often wonder if we won't return some day to the "dark ages" prior to the age of technology as users abandon their use of social media and connectivity to the web of things.

    In response to your question about why we need the updates…. the issue isn't about built-in obsolescence, at least not as I see it. I see it as everyone jumping on the Microsoft model (started with founder, Bill Gates) of pushing products to market in order to be first with no regard to security or privacy consequences. The mindset of "be first, patch later" is what's at the heart of the problem. And unfortunately, because there are no consequences (financially or legally) for putting out a bad software product, companies continue to produce them. There is no incentive to build security and privacy in at the start.
