In 2020, there is no need to doubt the threat posed by phishing. We could spend endless time trying to size the risk phishing poses to our society and our organisations; I am not even sure I could give you an accurate estimate, as there is still so much we do not know. However, for many of us, being targeted by cyber-attacks using phishing is now part of a typical day, or at least a typical week.
The GÉANT network connects 50 million people across Europe: 50 million targets for a phishing email. We can use many technologies to prevent phishing, but most have already shown their limits. Even if they could block 95% of phishing emails, we would still need our users to detect and report the remaining 5%. Furthermore, even if we had systems that could stop 99.99% of phishing emails, they would not be able to block the more targeted, and more dangerous, spear-phishing without also blocking many legitimate emails. Technology cannot solve it all. As CISOs often like to say, humans must be the last firewall.
Phishing is a form of social engineering using email as a vector. Criminals mostly use phishing to achieve one of two goals: stealing information (credit card numbers, computer credentials) or installing malware (ransomware or a trojan) on the user's computer. We also see many CxO fraud attempts (a type of attack where attackers impersonate a CEO, CFO or another high-level executive) using spear phishing, often combined with other types of social engineering such as vishing (using voice, over the phone) or smishing (using short text messages).
Our trust in technology
Phishing attacks exploit human characteristics to their advantage. The first one is often our trust. We tend to trust people like us, people who belong to our group. We belong to many groups at the same time: family, friends, colleagues, fellow students, fellow researchers, sports teammates or school parents. The closer and more intimate people are, the more likely we are to trust them.
So, when we get an SMS from our best friend and see her or his picture on our smartphone, we tend to take it seriously and assume it is trustworthy (except, maybe, if our friend is fond of practical jokes). It also means we trust technology enough to believe that the message we received comes from our friend's phone. If our phone shows our friend's picture and name, it must be her or him. And we are somewhat right to assume so, as GSM technology seems challenging to hack.
However, if we receive an email from the same friend, with the same content, we should not put the same trust in this message. Why? Because email technology is not as trustworthy as GSM, yet. An attacker could spoof our friend's email address or use Punycode to create a look-alike of it. Even without that, we probably do not know our friend's email address by heart. Most of the time, the only information we look at is the sender's full name, not the email address. This may seem obvious to cybersecurity specialists or people with some computer literacy, but for the average user, even the "simple" concept of a domain name might be where we lose them. As there is no "driving licence" required to surf the Internet and get a mailbox, we cannot assume any basic IT knowledge on the user's side. That is normal. So, we should provide IT services that are safe and trustworthy. Email technology is not safe yet, but it could be. It should be. Technologies like anti-spoofing, DNS domain checks, SPF or DKIM have existed for years and are still not implemented and enforced on all email servers. If they were, we would be able to trust emails like our SMS.
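To make this concrete, here is a minimal sketch in Python of what such checks look like in practice. It uses the third-party dnspython library, and the domains are purely illustrative; it is a sketch, not a complete verification tool:

```python
# Minimal sketch: fetch a domain's SPF policy from DNS, and show how a
# Punycode look-alike domain appears in raw email headers.
# Requires the third-party dnspython package (pip install dnspython).
from typing import Optional

import dns.resolver


def get_spf_record(domain: str) -> Optional[str]:
    """Return the SPF policy published in the domain's TXT records, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        txt = b"".join(record.strings).decode("ascii", errors="replace")
        if txt.startswith("v=spf1"):
            return txt  # e.g. "v=spf1 include:... -all"
    return None


# Illustrative domain only: an SPF record lists the hosts allowed to
# send mail on the domain's behalf.
print(get_spf_record("example.org"))

# A look-alike of "geant.org" using an accented character: in raw
# headers, it would show up in its ASCII (Punycode) form instead.
print("géant.org".encode("idna"))  # e.g. b'xn--gant-bpa.org'
```

Receiving mail servers can run checks like these automatically; that is precisely the point: this burden belongs in the infrastructure, not on the user's shoulders.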
It would be tremendous progress. Still, it would not stop phishing completely. Criminals would still be able to create new look-alike domains, hijack existing email accounts or use different pretexts. That is where we need humans to be more vigilant.
Hypervigilance
The big issue is that, although we would like people to be "human firewalls", we did not hire them for that purpose. We want them to do their job: study, research, heal people, manage finances or IT systems. Being vigilant at all times is called hypervigilance. It is often the result of a traumatic event and, more importantly, an underlying cause of pathologies. Being vigilant requires extra energy. It is stressful. If we are cautious with each of the 150 to 500 emails we may receive every day, we will spend hours doing just that, every single day. Even more importantly, we will be exhausted. Not a good idea. What can we do then?
Nudges
We need to make it easier to spot phishing. Many little things can be changed to help users; usability must apply to security features too. For example, we can make the email address visible when the sender is unknown to us or comes from outside our organisation. This helps users spot look-alike domains or a sudden change in a friend's email address. We can also tag suspicious emails, but only when we are reasonably sure they are suspicious: below 80 or even 90% accuracy, users seem to disregard the insights coming from automated systems (Chen et al., 2018).
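As an illustration, here is a minimal sketch of such a nudge, with hypothetical field names, domain and address book (it is not drawn from any actual mail client): the full address is revealed only when the sender falls outside our usual circle, so the cue stays rare enough to keep its meaning:

```python
# Minimal sketch of a sender-display nudge: hide the raw address for
# known internal contacts, expose it (with a tag) for everyone else.
# Field names, domain and address book are hypothetical.
from dataclasses import dataclass


@dataclass
class Email:
    display_name: str
    address: str


INTERNAL_DOMAIN = "example.org"          # assumption: our organisation's domain
KNOWN_CONTACTS = {"alice@example.org"}   # assumption: the user's address book


def render_sender(mail: Email) -> str:
    """Return the sender line to display in the mail client."""
    domain = mail.address.rsplit("@", 1)[-1].lower()
    if mail.address.lower() in KNOWN_CONTACTS and domain == INTERNAL_DOMAIN:
        return mail.display_name  # familiar sender: keep the view clean
    tag = "[EXTERNAL] " if domain != INTERNAL_DOMAIN else ""
    return f"{tag}{mail.display_name} <{mail.address}>"


print(render_sender(Email("Alice", "alice@example.org")))
# -> Alice
print(render_sender(Email("Alice", "alice@xn--gant-bpa.org")))
# -> [EXTERNAL] Alice <alice@xn--gant-bpa.org>
```

Keeping the tag rare is deliberate: if every email carried it, it would become wallpaper, exactly the habituation effect described below.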
Let us not forget to change the warning signal regularly to avoid habituation. When a warning signal is repeated too often, we tend to ignore it (Brinton Anderson et al., 2016). It is like getting dressed: we feel our clothes on our skin for a few minutes, or even seconds, then we ignore them and do not think about them. When the signal changes, we start to pay attention again.
Learning in context
Will it be enough? Likely not. We also need to train our users to spot a phishing email. But first, we need them to think before they click. For some of us, reading email has become a habit. We go through our emails at a glance, see a word related to something we expect, and it triggers an automated response: we click. Some experts say phishing is about influence, but that ignores the fact that any social event occurs in a context, and the context defines the way we react. When we drive a car, we hit the brakes with our right foot. When we are on a bike, we use our hands to do the same thing. We do not have to think about it: the context conditions even our reflexes. That is why we need to train people in context. We did not learn to drive a car by reading a book, nor did we learn to swim while standing at the side of the pool. We learned the theory from books or from our teacher.
Then, we had to sit in the driver's seat or jump into the pool to acquire the actual skills. Phishing exercises are the same. They are like a vaccine: they allow our users to recognise phishing emails without the dangers inherent in the real thing. As with the flu, phishing emails come in different strains, and we must train with all of them to be sure we can detect and fight anything the criminals throw at us. As with any training, it should provide rapid feedback to users when they detect a phishing email, to help them improve. It should also be progressive and tailored to our population. A phishing email about exam results will likely work on students a few weeks per year; it will most likely not work on people in financial services (see Goel et al., 2017). Context is key.
Blaming is the path to failing
We should be careful to stay positive about phishing. Blaming users for something they might do by accident will not help. Worse, it might increase the workload of our helpdesk or our SOC: people might start reporting every suspicious email, spam, scam, or even internal communication, just to avoid making a mistake. We would then create an unnecessary burden on our first line of support. We must make our people cautious, not paranoid.
Phishing exercises do not only provide training; they also keep our users vigilant. Just as we need to repeat flu shots, we need to repeat phishing exercises. In my experience, a monthly frequency seems optimal. It keeps people aware and sharp, and it only takes a few seconds of their time. That is not a high price to pay to avoid losing our network, our research work, or some money.
A path to walk
Research on phishing is still in its infancy. We are starting to understand better how it works, what makes us click, and how we can improve. Still, we know very little for sure. For now, if we reduce the exposure, facilitate the detection, and train our users, we are doing the best we can. It is teamwork, as all these changes require the involvement of different teams, even different entities. We all have our share to do, and when we do it, we prevail.
About the author
Emmanuel Nicaise is a cybersecurity consultant with Approach and a researcher at the ULB (Brussels). With more than 25 years of experience in IT and cybersecurity and a master's degree in psychology, he fosters cyber safety in organisations using psychology and neuroscience. He is currently pursuing a PhD in social psychology, focusing on trust and vigilance in our digital society. Phishing is presently his main area of research.
References
- Brinton Anderson, B., Vance, A., Kirwan, C. B., Jenkins, J. L., & Eargle, D. (2016). From warning to wallpaper: Why the brain habituates to security warnings and what can be done about it. Journal of Management Information Systems, 33(3), 713–743.
- Chen, J., Mishler, S., Hu, B., Li, N., & Proctor, R. W. (2018). The description-experience gap in the effect of warning reliability on user trust and performance in a phishing detection context. International Journal of Human-Computer Studies. doi: 10.1016/j.ijhcs.2018.05.010
- Goel, S., Williams, K., & Dincelli, E. (2017). Got phished? Internet security and human vulnerability. Journal of the Association of Information Systems, 18(1), 22–44.
Read the other contributions to the GÉANT Cyber Security Month!