
I have been focused lately on reading Threat Modeling: Designing for Security (Shostack, 2014), and thus thinking a lot about educating development teams on how to engage in this type of activity. In the middle of this, I got an email from a co-worker with a tagline at the bottom from a smartphone app: “sent using XXXXX”.

I immediately had a warning light go on in the part of my brain that is (mis-)wired to think about security issues. I was pretty sure that my co-worker had to give his company email credentials to this third-party service provider in order to use this service. I looked into it, and sure enough, this app provides web-based searching of your email from your phone and requires credentials for every account you access.

They have a great privacy statement that gave me a warm fuzzy, but that is about all. Ignoring all the things wrong with how this app may have been installed on a device accessing corporate networks and data, I want to focus on what happened to our company's trust boundaries when this happened.

Let’s start out by assuming we started in an Eden-like state – we gave this employee a new phone (avoiding the complications involved in deciding to trust the phone she already owned). This phone was pristine, including only company-approved apps, and was set up to connect to the corporate network when onsite. Everyone is happy from a security point of view.

Now all hell breaks loose because we have a human involved, and one of my favorite quotes so far from the Threat Modeling book comes to mind: “A process without input is a miracle, while one without output is a black hole. Either you’re missing something, or have mistaken a process for people, who are allowed to be black holes or miracles” (actually a quote from David LeBlanc, co-author of Writing Secure Code).

Immediately this person adds her Gmail account to the device, because who wants to manage email accounts separately? Then she downloads her favorite app to manage all of her calendars on her new phone and adds her Tripwire account credentials to the app's settings. Then she finds a couple of apps (games, newsreaders, etc.) from various vendors on her favorite app store and grants all the permissions they “require” to operate (full network access, read phone state, etc.). Lastly, she installs her social media apps, and her phone is all set up.

As a baseline, the company itself took responsibility for trusting the OS to provide a safe sandbox for all apps to play in, and the phone vendor to have installed only trustworthy apps as part of its customization. So our trust boundary already extends beyond our company resources to the phone service provider, the phone manufacturer, the phone vendor, the phone OS developer and the security application company – all of which we may have to accept, but should be aware of.

Assume we have processes to keep the phone OS and apps up to date with security patches and have installed trusted security tools. So we have little mini trust boundaries in place to help protect the company, but in general our trust extends beyond the company, and this is all normal (the same can be said for any service we contract – think Target and its HVAC provider).

But where things go south for me right now is with the innocent actions of our new employee. She has done nothing wrong and nothing that would be considered surprising. All of the above the company “consciously” agreed to as its trust boundary and has accepted risk for.

But our new employee has, unknown to the company, extended our trust boundary in several ways:

  • Adding a Gmail account to the email client opens up risks if the app doesn’t isolate things properly – think XSS (these apps mostly scrape websites) – and extends the trust boundary from the employee's professional life to her personal life.
  • Adding games extends trust to Joe, the guy who wrote a quick app one night to play solitaire online with friends; he rents IT services from AWS to host the webapp that runs the game server. Occasionally one of his friends from high school contributes code, and Joe trusts him not to insert malicious code into the game.
  • Adding a calendar app and company credentials extends trust to Joe Developer, who wrote a quick app to aggregate his calendars; he rents IT services from AWS to host the webapp that runs the calendar service. Now this app has authentication credentials to your company network/email services and every calendar this employee has access to.
  • Lastly, the social media apps that are oh-so-important extend the company's trust boundary to all of the employee's friends, as they advertise comings and goings (work trips, vacations, what’s going on at work, whether you are happy or angry with your boss).

I think that, ultimately, none of this is all that egregious, and it should be a normal use case for IT distributing devices, but add this up over a 500-person company and your trust boundary grows far beyond your ability to manage it, making it effectively infinite. What the company does to mitigate these risks is more important than stopping them (although the third listed is one that should be stopped, IMNSHO), but the first step is awareness that the trust boundary grows so much bigger the minute we hand over the phone.
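To make that growth concrete, here is a deliberately crude back-of-envelope sketch in Python. Every number in it is a made-up assumption for illustration, not a measurement – the point is only that external parties multiply per employee, not the specific total:

```python
# Back-of-envelope sketch: how many external parties end up inside the
# trust boundary when every employee personalizes a company-issued phone.
# All counts below are hypothetical assumptions, chosen for illustration.

employees = 500            # size of the company in the example above
apps_per_employee = 8      # email, calendar, a few games, social media, ...
parties_per_app = 3        # e.g. the developer, a hosting provider, contributors

# Parties the company already consciously trusts (carrier, manufacturer,
# vendor, OS developer, security application company).
baseline_parties = 5

per_employee = apps_per_employee * parties_per_app
total_parties = baseline_parties + employees * per_employee

print(per_employee)    # 24 external parties per personalized phone
print(total_parties)   # 12005 parties inside the boundary overall
```

Even with these conservative, invented numbers, the boundary balloons from a handful of consciously accepted parties into thousands that nobody is tracking – which is the sense in which it becomes "effectively infinite."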

And that is what started me off on this tangent: the employee's credentials were stored on a third-party server beyond our control. The fact that that data now has a life of its own that we can’t manage – that is another blog post I am working on…

