Last month, one of the largest global cyber-attacks in history infected Windows computers around the world. It exploited a security flaw in the Windows operating system that allowed it to jump from computer to computer on an internal network.
Only this week, yet another ransomware attack spread across the world.
The WannaCry attack was, in terms of computing technology, simple and even naive. For example, no sophisticated password-breaking software was used.
But the developers of the malware had a deep understanding of the behavioural biases which are frequently encountered in organisations. In short, the attack succeeded because of simple human fallibilities.
The attackers simply hoped that at least one user on a network would click on a malicious link in an email or download a malicious attachment. The virus could then get to work with minimal sophistication.
But how could this happen? Microsoft had released a patch for its operating system long before the virus began spreading at scale. The NHS in Britain had been notified by Microsoft two months earlier about a security patch that could have prevented the spread of the virus within the organisation.
A long-standing reluctance by large organisations to roll out system updates before substantial internal testing was instrumental in the crisis.
In effect, paranoia about system security was the cause of system-wide vulnerability.
This problem is not new. The illusion that elaborate rule-based systems can eliminate systemic risk was prevalent amongst regulators in the run-up to the financial crisis, and still persists even to this day. The apparent security provided by having lots of boxes ticked and the paperwork passed through endless committees before an update could be approved proved to be completely false.
This state of affairs calls into question the complicated security procedures in place within organisations. What is the point of spamming staff with emails forcing them to update their passwords as often as every two or three months, when this large-scale cyber-attack took place without breaking even a single password?
A study published in the Proceedings of the 17th ACM Conference on Computer and Communications Security assessed the security advantages of password expiration policies.
It shows that many users asked to update their passwords use simple mental shortcuts and heuristics on all but the first password. For example, they just change a character to a symbol or number, such as ‘s’ to ‘$’ or ‘A’ to ‘4’.
These shortcuts introduce patterns and biases which make it much easier to guess an updated password once an attacker has even partial access to the password history. The researchers in this study could break the current password of approximately 41 per cent of the accounts in less than three seconds if they knew the previous password.
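The mechanics behind this are disarmingly simple. The sketch below is an illustrative toy, not the researchers' actual algorithm (which uses a far larger set of transforms): given an expired password, it enumerates a handful of common update heuristics and checks each candidate against the hash of the new password.

```python
import hashlib

# A small, illustrative subset of common character substitutions;
# real transform-based crackers use much larger rule sets.
SUBSTITUTIONS = {"s": "$", "a": "4", "o": "0", "i": "1", "e": "3"}

def candidate_updates(old_password):
    """Yield plausible 'updated' passwords derived from an old one
    via simple substitution and digit heuristics."""
    # Substitute one character at a time, e.g. Sunshine1 -> $unshine1
    for i, ch in enumerate(old_password):
        low = ch.lower()
        if low in SUBSTITUTIONS:
            yield old_password[:i] + SUBSTITUTIONS[low] + old_password[i + 1:]
    # Increment a trailing digit, e.g. Sunshine1 -> Sunshine2
    if old_password and old_password[-1].isdigit():
        yield old_password[:-1] + str((int(old_password[-1]) + 1) % 10)
    # Append a digit, e.g. Sunshine -> Sunshine7
    for d in "0123456789":
        yield old_password + d

def crack(old_password, new_hash):
    """Return the new password if a derived candidate matches its hash."""
    for cand in candidate_updates(old_password):
        if hashlib.sha256(cand.encode()).hexdigest() == new_hash:
            return cand
    return None
```

Because each heuristic produces only a handful of candidates per old password, the whole search takes a few dozen hash comparisons rather than the billions a blind brute-force attack would need, which is why a known password history makes "fresh" passwords so cheap to recover.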
The situation is even more disconcerting. A report by Carnegie Mellon University showed that personal characteristics and traits were correlated with password strength among CMU students, faculty and staff. Individuals who reported annoyance with university password policies also tended to choose weaker passwords.
The obsession with frequent password updates actually reduces overall system security rather than enhancing it.
Instead of trying to design elaborate, fail-safe systems, companies should just reflect a bit more about how humans actually behave.