This January, I was invited to Oslo to speak at NDC Security. For those who don't know NDC Conferences just yet, I would definitely recommend checking them out.
Unlike submitting to a CFP (call for papers), when you're invited to speak you aren't given a specific topic, just general themes. This time round was no different: I was told it's a security conference, so speak about security. To make this talk have the biggest impact on the audience, I went back through my history of moving from IT to IS, reviewed my current role in consulting, and realised I could focus on my motivation for becoming a consultant: bringing Privacy and Ethics to the workplace.
The weekend after presenting, a friend sent me this tweet. It went to the very core of my talk: we are being trusted with intimate details of our consumers' lives, and it is our job to respect that privilege and protect this information properly.
As Heather Burns discusses, René Carmille recognised that the information stored on those punch cards could be used to harm the people it belonged to. By removing the ability to process that information, Carmille was able to protect those data owners.
This is the exact message I want to share: we are the first line of defence for our consumers, and it is our job to treat the information they provide with the respect it deserves; because when we don't, people will find out, and the consequences for the data owners can be devastating.
An example I shared at NDC was an autopsy view of Equifax. I refuse to join the blame game that seems to arise whenever disaster strikes. It is not simply that one person was unable to do one thing in time and therefore affected the entire system. The truth of the matter is, if the failure of one person or one system can have that much impact on the entire environment and all its data, then we failed all the way back in the design phase.
In a prior role as a network architect, I would design networks resilient to the scenarios most likely to affect the client. I would look at the threat actors, create a threat map, and understand the environment. I would then continuously test against that map, updating it whenever things changed.
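To make that concrete, here's a minimal sketch of a threat map as data. The actors, targets, and the likelihood-times-impact scoring are all illustrative assumptions on my part, not a standard; the point is that once threats are written down and scored, you can test against them and re-prioritise as things change.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    actor: str        # who might attack (illustrative labels, not a taxonomy)
    target: str       # what they're after
    likelihood: int   # 1 (rare) to 5 (expected)
    impact: int       # 1 (nuisance) to 5 (business-ending)

    @property
    def risk(self) -> int:
        # Simple likelihood x impact scoring; swap in whatever
        # model your organisation actually uses.
        return self.likelihood * self.impact

threat_map = [
    Threat("opportunistic scanner", "unpatched edge services", 5, 3),
    Threat("phishing crew", "staff credentials", 4, 4),
    Threat("insider", "customer database", 2, 5),
]

# Review the highest risks first, and update the numbers
# whenever the environment changes.
for t in sorted(threat_map, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.actor} -> {t.target}")
```

Even something this small gives you an artefact the whole team can review and update, rather than a mental model living in one architect's head.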
Take, for example, a DevOps person: you know what your solution does and how it should react, and you test that it works. The next test you should do is the "hacker" test: what happens when a malicious actor asks, 'What can I make it do?'
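To show the difference, here's a hedged sketch in Python. The `parse_quantity` function and its limits are entirely hypothetical; what matters is the second test, which feeds the code input it was never designed to receive.

```python
import pytest  # assuming pytest as the test runner

def parse_quantity(raw: str) -> int:
    """Hypothetical order-quantity parser with defensive limits."""
    value = int(raw)  # raises ValueError on junk like "abc"
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

def test_it_works():
    # The "does it work?" test: expected input, expected output.
    assert parse_quantity("5") == 5

@pytest.mark.parametrize("evil", ["-1", "0", "999999", "abc", "5; DROP TABLE orders"])
def test_what_can_i_make_it_do(evil):
    # The "hacker" test: hostile input should fail safely,
    # not slip through into the rest of the system.
    with pytest.raises(ValueError):
        parse_quantity(evil)
```

The first test proves the solution works; the second proves it fails safely when someone tries to make it do something else.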
In the last year, we've been going absolutely insane over four letters: GDPR, the General Data Protection Regulation. At its most basic level, GDPR empowers EU citizens to take back control of their personal information. What's more, GDPR is going to shape the future of how we process information.
GDPR requires Privacy by Design, and holds directors personally liable. This means that while still brainstorming an idea, organisations will need to include privacy controls and considerations. Personal liability brings that risk closer to home: directors are going to scrutinise what we do as security people more closely.
You may say, "then they're going to put foolish rules in place!" At times, senior execs can be so far removed from the day to day that they don't understand a situation properly; that's okay. Take the time to explain how things are designed and how they actually work, using plain language and references these execs will understand.
Have formal Risk and Privacy Impact Assessments done on your solutions or products, so you can comprehend what information you actually hold and what the priorities really are.
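You don't need heavyweight tooling to begin. As a rough sketch (the fields, categories, and scales are my own assumptions, not a formal PIA template), even a simple data inventory forces the two questions that matter: what do we hold, and how badly could it hurt someone?

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    contains_personal_data: bool
    sensitivity: int      # 1 (public) to 5 (intimate), illustrative scale
    retention_days: int   # how long we actually need it

inventory = [
    DataAsset("web access logs", True, 2, 30),
    DataAsset("customer profiles", True, 4, 365),
    DataAsset("payment records", True, 5, 2555),
    DataAsset("aggregate usage stats", False, 1, 3650),
]

# Triage: personal data first, most sensitive at the top.
for asset in sorted(inventory,
                    key=lambda a: (a.contains_personal_data, a.sensitivity),
                    reverse=True):
    flag = "REVIEW" if asset.contains_personal_data and asset.sensitivity >= 4 else "ok"
    print(f"{flag:>6}  {asset.name} (keep {asset.retention_days} days)")
```

A formal assessment goes much further than this, of course, but a list like this is often enough to surface the priorities you didn't know you had.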
One thing I have learned is that, just like there's an XKCD for everything, Dr. Jessica Barker has brilliant advice for every situation. When presenting on awareness, Jess brought up something called Social Proof. When we say x% of people fell victim to phishing, we're actually saying, 'Don't worry, everyone fails, you will too, so there's no reason to try.' Instead, we should say that the other (100 - x)% of people protected themselves. Look at the positive: if others can do it, so can you.
On that topic of positivity, and why I bring it up here: so often as techs we only escalate when we fail. We present information about all the times our controls were bypassed; we're only seen when things are broken. Senior leadership, and likely everyone else, can't help but question whether we're really doing anything. Let's instead brag about our brilliant solutions. Let's show how our decisions actually minimised the impact of a failure, how we recognised a better solution before anyone else. Be the hero they didn't know they needed.
When designing awareness programmes, I look at bringing the risk closer to home. When people can see how they're personally impacted, they will often pay more attention. Teach people how to protect their family, especially their children. Not only will they pay more attention, but your training could end up protecting a child.
I'm not saying this will solve everything. But standing up for those who cannot protect themselves, pushing liability back to the manufacturer, and presenting the facts in a less fear-focused, more positive light will be a start towards a more secure and safer world.