Our joint webcast with UnderDefense LLC on Mitre ATT&CK for Blue Teams (in Ukrainian). Enjoy!
Let’s imagine for a moment how the “bad guys” plan their attacks. In a dark basement, with cyber-punk posters covering the graffiti on the walls and half-assembled computers lying here and there, malicious hackers gather around a poorly lit table to decide which version of the Black Hat Attack Methodology to use in their upcoming criminal operation. Sounds absurd, right? Of course, because attackers are not methodical.
As Penetration Testers, we see our main goal as functionally testing our clients’ defenses in order to assess their ability to withstand a real-world attack. Do we have to rely on external knowledge for that? Obviously, yes: it is impossible to know everything about every attack vector in 2016. Do we have to stick to a predefined set of instructions, a so-called methodology? That depends.
If you are not a pentester, and yet you have to act as one, the use of methodologies is inevitable. To conduct a pentest yourself, or to reproduce the results in a report from an external consultancy, you have to get your head around a methodology of some sort. In fact, this happens all the time: the perception in the market is that anyone, be it an accounting firm or an IT audit practice, can do Penetration Tests: just look at the plethora of methodologies out there!
But if you do pentesting for a living, do you really need methodologies? I am a big fan of seeing a pentest as a mission rather than a project. Of course, a mission has to have a plan, but it can rarely be scripted in detail. It’s essential to have a recurring cycle of acquiring, analyzing, and applying data, and to share it within the team. It’s very good to have both specialization and knowledge sharing among team members. But to write down “what we do”, “what we do when we’re in”, and “how we exfiltrate” in a static document? No, thanks.
To succeed at something, we have to have good mental models and practical how-tos at our disposal. The models let us build insight into how the attack will unfold and what we will have to do along the way. The how-tos and examples let us prepare for the actual operations: collect the data, apply or build the tools, make our moves, and bring back proof of a risk to the client’s business. The methodologies try to bridge the gap between the two for those who need it. Do you?
Since I wrote the first part of this post in May, several related articles have appeared in well-known online resources. The most notable of them, in my opinion, is a piece in Fortune that tries to bridge infosec and business, as many have tried (and mostly failed) before. You don’t have to read the whole article to catch what it and the others have in common: the very first paragraph ends with a statement we have all long since gotten used to.
If your company is like most, you’re spending an awful lot of your information technology budget on security: security products to protect your organization, security consultants to help you understand where your weaknesses lie, and lawyers to sort out the inevitable mess when something goes wrong. That approach can work, but it fails to consider the weakest link in your security fence: your employees.
So, if you’ve read my first post on the topic, you have an idea of how this stereotype might mislead everything that follows in the article. I warned you last time that anything that sounds like “humans are the weakest security link” should be followed or preceded by “by default”. And by “default” I mean “in case your company’s security management did nothing to change that”.
But easier said than done, right? So what could one do to, well, leverage the most influential factor in security – human nature?
To understand that, it’s necessary to get an idea of how our brain functions. I’ve spent quite some time getting familiar with this topic by reading the results of contemporary scientific research, and I encourage you to do the same! However, for the sake of this blog post, I am going to summarize the most critical points, the ones you have to embrace to, well, see the light.
Imagine that inside every human brain there are three animals: a crocodile, a monkey, and an actual human being. If you are familiar with the brain’s structure, you already know why: different parts of it developed during different evolutionary periods. Thus the croc is a personification of our reptile brain, the monkey is our mammal or limbic brain, and the human is our neocortex. Each of them does its job, and there is a definite hierarchy between them.
The croc is the boss by default, although he doesn’t micromanage. He is responsible for only three basic instincts –
As you can see, the crocodile brain performs the most important roles: the preservation of the individual human and of the species overall.
The monkey trusts the croc with its life. It’s sometimes afraid of the croc too, but still, there is little chance it will stay alive for long if the croc falls asleep or is gone, so yeah, the monkey trusts the croc.
The monkey’s work is more complicated. Protected by the crocodile, it can dedicate some of its time to training and learning from recurring experience. In other words, the monkey can be taught things if it does them enough times. There are many words for that ability, but we are going to stick with ‘the habit’. Using habits, we simplify our lives as much as possible, for better or worse, but certainly for more comfort.
And the human is usually quite different from both of them because, well, you know: abstract thinking, complex emotions, ethical frameworks, cosmology, and sitcom TV shows. With all that, the human brain optimizes its job as much as possible, so if there is a chance that the monkey can do something the human has to do, the human will take that chance. Going through the same procedures over and over, we train the monkey, and once it’s ready, we hand the task over to it. How many times have you missed a turn and driven along your usual route to the office, even on a weekend? The monkey took over, and the habit worked instead of your human reasoning, which was busy with something else at that moment.
To some, this may sound counterintuitive or even scary, but that’s how it is. If we thought through every decision we make, we wouldn’t be able to develop as a species and a society. Too much thinking in moments of crisis would kill us: deciding on the tactics for dealing with a saber-toothed tiger would eat up all the time needed to run to the cave or the nearest tree. Humans tend to take shortcuts and rely on their instincts and reflexes as much as possible. And in general, it’s a good strategy, given that humanity has spent many centuries training the monkey and adjusting the croc’s input data.
But then… boom! Cyber!
Recent developments in technology and communications have changed our lives. Now we have to do many old things in new ways, and as a result, it’s not easy for our brain to apply the tricks evolution has taught us over millennia. The new signs of danger do not trigger the monkey’s old habits or the croc’s even older instincts. We are used to dealing with danger tête-à-tête, not in front of a computer screen. Centuries-old fraud tactics find new life online, with humans unable to resist them because of the scale of anonymity and the ease of impersonation on the internet.
So what can we do? Not much. I don’t believe in technology when it comes to human nature, so I prefer to focus on the human (and the monkey, and the crocodile) instead. Having read and discussed much of what contemporary science can teach about behavioral economics, the irrationality of decision making, and, most importantly, habits, I have come to the conclusion that people can be taught to effectively resist modern cyber-threats the same way they have learned to survive other hazards: by leveraging the instincts, installing new reflexes, and transforming the habits.
In the next post, we’ll wrap this up with my method for transforming individuals and groups from a vulnerability into a countermeasure. I hope this sounds intriguing enough for you to stay tuned.
In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarizes the high-level ideas any security engineer or architect should be familiar with to deserve the title. Dr. McGraw is, of course, that smart gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has played a role in many career choices in the security industry. The first principle he explains is quite logical and intuitive: “Secure the weakest link”. This principle spans many disciplines, such as project management and logistics, and is evident to many: there is hardly a better way to dramatically improve something than to take its worst part and fix it. Pretty simple, right?
The vast majority of information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and don’t miss a chance to “blame the user” or point to human stupidity as an infinite source of security problems. However, when you start challenging this idea and asking what they have attempted to do to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and, believe me, it’s a lot of fun.
The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. It spends the majority of its computing time and calories burned maintaining the “situational awareness” that allows us to step on the brakes long before we could solve the system of equations representing the speeds and trajectories of our car and the one approaching from the side. Our brain, if properly trained, can serve as an effective security countermeasure that would outrun any security monitoring tool in detection or response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to teach us to avoid open fire, run from a tiger, and not jump from trees. And an even bigger problem is that we don’t seem to be starting.
So, what’s wrong with us? Why don’t we combine the collective knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? I frankly have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences. I don’t know. However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post. To be continued…