That phrase came to me many years ago while working on a multi-million pound IT outsourcing deal. We were up to our necks in the finer points of platform-wide and stack-deep security, and I realised we were fighting amongst ourselves more than challenging the final competing vendors. The infighting was partly due to the large number of IT staff in the room likely to transfer to the winning team, and partly due to the view of security controls as a bolt-on extra. Our folks kept stressing the dependencies and limitations that squeeze the ability to implement what our policies required, essentially doing the suppliers' sales job for them. Poor procurement practice by any standard, but symptomatic of the syndrome named above.

Fast forward to a battle-weary post-integration team, sitting across desks from colleagues now settled in new roles. I'm being schooled about the pressure the vendor is under to get the network segregation done. There's the shortage of PMs, the co-ordination with two other service providers, and the risk of budget overruns. Vendor relationship managers doing the vendor account managers' job. Empathy for those on the ground rather than those who sign the cheques. Just like those hostages who feel an inappropriate affinity for their captors.

You might think the dynamic is different in the days of cloud computing. After all, we can shop around for fairly generic units of replaceable service. Except we can't. That fact was called out at length just this week in the New York Times, when the publication referenced the primacy (pun intended) of Amazon's cloud. The market for Platform-, Infrastructure-, and utility Software-as-a-Service (e.g. email, word processing, presentation creation tools, and spreadsheets) is concentrated around single brands. The cost and flexibility benefits come from the relatively customer-agnostic parts of their business models. To that extent, you rely more on them than they rely on you.
Yes, you can vote with your feet if the mickey is taken, but if we're honest, this kind of supply is almost as inelastic as your old IT service deal. There are few realistic options for supply at scale, and the act of reversing out of a big contract, selecting a new supplier, and making the operational switch can bleed any foreseeable benefit out of a change, something all parties in the procurement process know too well. In more traditional IT outsourcing, the 'interesting' characteristics of your organically grown network demanded huge investment on the supplier's part to make everything work, an investment they needed to see returned. Depending on the size of the deal, on how hybrid and private you want things, and on the layered extras (e.g. managed security service, software development, data analytics, project management, identity and access management), that might still be the case, but oftentimes the orchestration, integration, configuration, and maintenance piece is more in your court than it was before. No one likes to feel cornered in a financial relationship, but all except the biggest and richest cloud customers kind of are.

So, what can you do to keep things competitively healthy, especially when it comes to stuff historically seen as nice to have vs necessary? Things like security and data protection? This could easily become a chunky book, so I've limited myself to foundations and signposting good governance practice. Every part of it hangs off one enduringly vital but increasingly complex aim: to trust but verify.
Nothing works unless you pin down vendor, key downstream supplier, partner, and your own responsibilities. A RACI (who is Responsible, Accountable, Consulted, and Informed) solves nothing on its own. But the process of thrashing one out at a useful level of operational granularity, with everyone who has skin in the game, can kick off substantial culture change. That's if you keep an attendant record of all the dependencies, limitations, assumptions, risks, and issues raised. This is our management system equivalent of walking a mile in each other's shoes and keeping a diary of the pain when we do. No one should be able to claim they didn't know it was their job, assume the impact is trivial when they drop the ball, or throw it over the fence to another team.
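To make the idea concrete, here is a minimal sketch of a RACI matrix kept alongside that attendant record of dependencies, limitations, assumptions, risks, and issues. Every activity name, party, and log entry below is a hypothetical illustration, not a template; the value is in the thrashing-out process, not the data structure.

```python
# Illustrative sketch only: a RACI matrix plus the attendant
# dependencies/limitations/assumptions/risks/issues log the article
# recommends keeping while the matrix is thrashed out.
# All activities, parties, and entries are hypothetical examples.

RACI = {
    "incident management": {
        "Responsible": ["vendor SOC"],
        "Accountable": ["our CISO"],
        "Consulted": ["vendor account manager"],
        "Informed": ["service owners"],
    },
    "identity and access management": {
        "Responsible": ["our IAM team"],
        "Accountable": ["our CIO"],
        "Consulted": ["vendor support"],
        "Informed": ["internal audit"],
    },
}

# The attendant record: the 'diary of the pain' kept while walking
# a mile in each other's shoes.
dlari_log = []

def record(activity, kind, note, raised_by):
    """Log a dependency, limitation, assumption, risk, or issue
    raised against an activity while agreeing the matrix."""
    assert kind in {"dependency", "limitation", "assumption", "risk", "issue"}
    dlari_log.append({"activity": activity, "kind": kind,
                      "note": note, "raised_by": raised_by})

def accountable_for(activity):
    """So no one can claim they didn't know it was their job."""
    return RACI[activity]["Accountable"]

record("incident management", "dependency",
       "Vendor SOC alerting depends on log forwarding we own",
       "our IAM team")
```

The point of pairing the matrix with the log is that the matrix records who owns what, while the log records why ownership was contested, which is exactly the material you will need later when escalating or renegotiating.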
Almost in the same pot as creating a RACI and fostering associated buy-in, this is about the letter of contract law. It's about making sure it is a legal requirement to have someone appropriately qualified in post (your side and their side) to perform essential roles. It's about setting out expectations for responsiveness in terms of speed, regularity, and quality of inputs and outputs, then being crystal clear about the point where your job ends and theirs begins. If you don't, it will work itself out anyway, usually in a lawyered-up standoff within days of some high-profile product launch, critical audit finding, or system vulnerability getting noisily exploited by a nasty threat. Key targets for this effort (far from exhaustively, and depending on the nature of supply) are incident management; risk management (especially metric collection, measurement, and reporting); security event logging, monitoring, and management; identity and access management (especially system accounts, remote accounts, offshore support accounts, and god-like access); project management; threat and vulnerability management; their own downstream vendor management; physical and information asset management; cryptographic key and certificate management; backup and recovery management; data retention limitation; secure data and device destruction; and availability and capacity management. And build in an ongoing commitment to review and update that picture in case of change.
Again, this is fundamentally linked to the points above, but it's more than worthy of its own paragraph. As I tweeted just the other day, a system without the means to rapidly deal with exceptions is not just broken. It's destructive. Partner that with another hopefully not too trite one: the later you escalate, the more it sounds like an excuse. Too often, as the buffers are hit when trying to get stuff done, people are slow to face the fact that they don't have sufficient means or authority to fix it. Often that's because those one step up the line don't have the understanding and support of their management team. Using the output of the work you did to thrash out the RACI (it keeps coming in handy), set out regular routes to escalate engagement blockages and standard ways to describe risks. Get the conversations to the right places as fast as possible. If you have done the other things recommended here, you should have concrete information to go with that, and those you escalate to should be ready to help and know what you need them to do.
Of course, none of this is one and done. Especially the part we haven't discussed: the technical and procedural controls assessment. Whether it's a full SOC 2 Type II audit against standard and bespoke controls, pen testing, red teaming, your own brand of vendor tyre kicking, or taking what the vendor gives you (with more or less lip service paid to actual fitness for purpose and effective operation), it's out of date from the moment it's over. There should be a core of continually assessed controls to reassure you of service availability, security, and data protection (e.g. metrics for uptime, vulnerability management, IdM, and timely management of data subject requests), followed by at least annual rechecking of other controls. There should also be tracked treatment of any findings that fall short of your risk appetite. Does that feel like I'm stating the obvious? It should, but too many firms figuratively or literally put the report in a drawer until assessment time comes around again, or something goes bang.

But that's where cloud supply is fun. Vendors have little to no motivation to adjust their suite of operational, security, and data protection controls just for you. Their whole value proposition is based on economies of standardised scale. So that is where your risk appetite has to kick in. You have to be clear on what you rely on cloud vendors to provide (how much data processing, how sensitive the data is, where the data is, how available the cloud service needs to be) and pair it with the information they can't or won't provide, plus detail of any gaps in control they are unwilling or unable to rectify. You also need to give your staff (referring back to that detailed demarcation) the time, means, and skills to do their job. Too often, organisations gleefully realise part of the advertised savings by divesting themselves of specialist bodies. Bodies they then have to re-employ at exorbitant consultancy rates.
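That core of continually assessed controls can be as simple as a recurring check of reported metrics against your stated risk appetite, with anything falling short queued for tracked treatment. The metric names and thresholds below are hypothetical illustrations, not a recommended baseline.

```python
# Hedged sketch: continually assessed core controls checked against a
# stated risk appetite. Metrics and thresholds are illustrative only.

RISK_APPETITE = {
    "uptime_pct": 99.9,            # minimum acceptable service availability
    "critical_vuln_fix_days": 14,  # max days to remediate critical vulnerabilities
    "dsr_response_days": 30,       # max days to answer data subject requests
}

def assess(metrics):
    """Return the findings that fall short of appetite, ready to be
    tracked to treatment rather than put in a drawer."""
    findings = []
    if metrics["uptime_pct"] < RISK_APPETITE["uptime_pct"]:
        findings.append("availability below appetite")
    if metrics["critical_vuln_fix_days"] > RISK_APPETITE["critical_vuln_fix_days"]:
        findings.append("critical vulnerability remediation too slow")
    if metrics["dsr_response_days"] > RISK_APPETITE["dsr_response_days"]:
        findings.append("data subject requests not handled in time")
    return findings

# Example monthly figures (hypothetical): uptime is fine, but critical
# vulnerabilities are taking 21 days to fix against a 14-day appetite.
monthly = {"uptime_pct": 99.95,
           "critical_vuln_fix_days": 21,
           "dsr_response_days": 12}
```

Run monthly, this turns the point-in-time audit into a continuous picture, and the non-empty findings list is exactly the input your tracked-treatment process needs.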
There is no place more critical than where you translate and integrate between processes, technologies, and teams. Those folk with the deep local knowledge who up-skilled to understand how the solution dovetails with your network, systems, processes, and needs are your cloud service Crown Jewels. Watch out for single points of failure in that space, and treat them well.
You then take that picture, commonly called something like a risk articulation, and put it in front of whoever you 'RACIed' as accountable for the fallout. That's the fallout should those things you can't assess, and those things you know are broken, end up hurting the data sets, operations, and people in the firing line. Vendor resistance to these kinds of processes, not to mention unwillingness or inability to provide the kind of information described, should be viewed as a red flag. You should then question whether that type of cloud supply, or that specific cloud supplier, is capable of reliably, securely, and transparently doing what you need. Having that cumulative picture in hand is the single most effective antidote to expensive experiments driven by folk aggressively pitching the latest tech solution in search of a business problem. (At this point, you are invited to find and replace all mentions of cloud with either 'blockchain' or 'A.I.')

In the end, it's all down to your risk appetite, something you should thrash out upfront when it comes to leveraging all flavours of cloud. If you don't ask for these assurances, you definitely won't get them, but to ask the right questions, you have to detail your requirements, your ongoing governance expectations, and what's needed to mitigate your local risks. There's no better way to start working that out than to run incident scenarios with the same folk you sat down with to thrash out your RACI. Game out what happens with data misuse, a processing mistake, a misconfiguration exposing data, or a ransomware scenario, alongside the more usual outage-related playbooks.

And that, at considerable length, is the skeleton of my antidote to Supplier Stockholm Syndrome. You'll have noticed that what we have explored above largely lacks discussion of technical specifics. That's because the problems I see most are not about the finer points of configuration.
We can contract specialists to get into the finer points of cloud architecture and bucket security (especially bucket security), but that won't tackle your biggest risks. What we are almost always missing are the mechanisms to arm those managing vendor relationships with the data needed to do their job: the information they need to be sure of their ground, the contractual clarity necessary to put them on the front negotiating foot, and the support needed to maintain competitive distance while they land you and your management team the best, most stable, and most secure return on your investment.
About the Author: Sarah Clarke is a security and privacy GRC specialist, speaker, and award-winning blogger. She shares insights gained during her two decades working first in IT, then security, and now data protection. She currently works with BH Consulting to help large companies with their governance frameworks and writes for a range of publications via her own firm, Infospectives Ltd.

Editor's Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.