Exhibit B: Potentially Unwanted Leaks
If you have some technical literacy, you may have heard of potentially unwanted programs (“PUPs”). It’s all that glop and gloop – malware, viruses, trojans, worms – that can wreak havoc on your devices and systems. More often than not, PUPs end up on your system because of – you guessed it – the operator doing something silly. Clicked a link they shouldn’t have, opened an attachment from an unknown sender, downloaded and installed a sketchy app. You name it. Here’s our advice for this type of basic operator fail: make sure your OS is patched and updated, make sure your antivirus signatures are up-to-date, make sure your firewalls aren’t a sieve, and if you are one of the lucky ones with some AI at your disposal, make sure it picks up and, where possible, stops nasty traffic. If you are an everyday user, that’s about all you can do after you’ve done something you shouldn’t have.

That’s about all we have to say on PUPs here, except for this: it is the phrase “potentially unwanted” that catches our attention, because it captures the idea of non-linear costs. If something can be “potentially unwanted,” it also carries the potential to cause you problems – if not immediately, then at some point in the future. In other words, costs that can be both unpredictable and incalculable. That’s why we identified something else that is potentially unwanted: leaks of information. To clarify, we are not talking about the breaches you hear about in the news. We are talking about those tidbits of information – those breadcrumbs – that you and others leave behind: comments on web posts about what you were doing that day, pictures of you on social media getting a little rambunctious, or partial information that websites leave open to the public, such as part of a Social Security Number or your user preferences.

Why do the crumbs matter? Because when pieced together, they give an attacker a formidable weapon to use against you, particularly in a social engineering attack (phishing, spear-phishing, and pretexting are very common tactics). The “collecting the crumbs” approach is extremely powerful because it’s not only the good guys who have AI; the bad guys have it too. AI can be used to scour, collect, collate, and develop a picture of you that can be shockingly accurate and, of course, used against you. And the thing about these crumbs is that they are normally made available, in one way or another, for one reason: Homo sapiens. Go back to the “convenience” conversation. Sure, some argue there is an upside to being able to carry a camera in our pocket and post to social media with a touch of the screen, but we have never really weighed the cost of that capability. And why can’t we? Because that picture, post, or tidbit of information about where we went last night has no set or agreed “value” at any given time or between parties. Don’t believe us? How much do you value your family photos? Chances are, more than the two of us do. But if we all had similar standards of living and relatively equal financial positions, we’d be pretty consistent on the value of $100,000 to each of us individually. That is why these crumbs that you leave behind – and that others leave behind about you – may individually have no intrinsic value, but pieced together they generate a different value entirely. In the moment, that collated picture of you could be used in a social engineering attack to gain access to your employer’s network. But over time, the value of that collated picture of you could change.
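Before we go on, let’s make the crumb-collation idea concrete. Below is a minimal sketch in Python of what the aggregation step looks like in principle; every name in it (the merge_crumbs helper, the field names, the sample records) is hypothetical and invented for illustration, and real tooling, AI-assisted or not, is far more capable. The point it demonstrates is just the one above: records that are individually near-worthless merge into a usable profile.

```python
# Toy sketch (hypothetical names and data throughout): how scattered,
# individually harmless "crumbs" about one person can be merged into a
# single, surprisingly complete profile.

from collections import defaultdict

def merge_crumbs(crumbs):
    """Fold a list of small public records into one combined profile."""
    profile = defaultdict(set)
    for crumb in crumbs:
        for field, value in crumb.items():
            profile[field].add(value)
    # Sets de-duplicate repeated sightings; sort for stable output.
    return {field: sorted(values) for field, values in profile.items()}

# Each record below stands in for one public breadcrumb.
crumbs = [
    {"employer": "Acme Corp"},                      # professional profile page
    {"city": "Springfield", "hobby": "marathons"},  # race-results website
    {"ssn_last4": "6789"},                          # partially masked web form
    {"hangout": "Joe's Tap, Friday nights"},        # tagged social media photo
]

print(merge_crumbs(crumbs))
```

Run it and the printout is one merged record: employer, city, hobby, the last four of an SSN, and a favorite hangout – more than enough raw material for a convincing pretexting call.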
How could the value change? Instead of that collated picture being used in a social engineering attack simply to access your employer’s network, it gets used for blackmail, to manipulate you into more specific action. That’s the problem with these potentially unwanted leaks. There is no metric to determine their value. There is no market index to compare against. No “information” gold standard system. This is what makes the privacy discussion so difficult, because privacy, as many of us know it, is regrettably dead, in large part due to operator habits.

Because You Cannot Determine Cost, It Becomes All About You
No need to give a detailed account of what you have heard in the news the last few months regarding social media and privacy. But a brief summary: the industry moved so fast, altering our habits just as fast, that we never had the opportunity for a truly honest discussion about how information gets gathered, used, and retained, and how those processes could alter our behavior. That is why, until proven otherwise, you should consider accepting the following as truths:
- 15 pages (give or take a few) of terms and conditions agreements don’t count as “clear.”
- Free is never free. There is a cost somewhere. You just haven’t identified and weighed it.
- Your mobile devices are ticking information time bombs, because there’s a good chance you have no idea what information your devices and apps are scooping up.
- Not only do you have to worry about your own data protection habits, you also need to worry about the habits of the person you are communicating with. Remember, the receiver also has a copy of what you are communicating, and their information protection habits may not be as good as yours. It’s a pretty old tactic: if you can’t get at who you want directly, you get at them indirectly.
- If you expect tech to do all the heavy lifting for you (which still will not guarantee protection), you should also be prepared to forgo any claim to privacy. That’s not a statement about legality or universal rights. That’s a statement of practicality.
Practically speaking, that leaves us with two options:
- Normalize theft as an expected part of business and somehow try to incorporate that cost into our business models (we can’t, because it may be a non-linear, meaning non-calculable, cost), or
- Give up all expectations of privacy (gasp) and let AI and algorithms act as gatekeepers, literally analyzing every last kilobyte of data. (Kind of smashes the efficiency discussion, too.)
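What would that gatekeeper option actually look like? Here is a minimal sketch, again in Python and again entirely hypothetical (the two patterns, the gatekeeper function, and the sample messages are ours, invented for illustration): it inspects every outbound message and holds anything that matches a sensitive pattern.

```python
# Toy sketch of the gatekeeper option (patterns, function, and messages
# are all hypothetical): scan every outbound message and hold anything
# that looks like sensitive data before it leaves.

import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # SSN-shaped string
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-shaped string
]

def gatekeeper(message: str) -> bool:
    """Return True if the message looks safe to send, False to hold it."""
    return not any(p.search(message) for p in SENSITIVE_PATTERNS)

outbound = [
    "Lunch at noon?",
    "My SSN is 123-45-6789, use it for the form.",
]

for msg in outbound:
    print("SEND" if gatekeeper(msg) else "HOLD", "->", msg)
```

Two regexes are obviously nowhere near enough in practice, and that is the point: the closer the gatekeeper gets to “good enough,” the more of your traffic, time, and compute it consumes – every last kilobyte, scanned.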