Understanding cybersecurity
Working in cybersecurity is frustrating, particularly if you work in application security or cloud security. Despite all the hard work that goes into securing our systems, we are constantly reminded of our failings by stories of yet more data breaches, more cyber attacks such as ransomware, and mounting losses from insecure systems. Why is cybersecurity so hard?
My first answer to that question is: attackers only need to succeed once, while defenders need to succeed every time. A single slip-up is all it takes to give an attacker an opening. Phishing is a good example of this: it only takes one person to fall victim to a phishing attack for the whole organisation to be compromised.
My second answer to this question is: our systems are highly complex, which makes them very difficult to protect. On top of the technical complexity, there are layers of sociocultural complexity too. Individuals and groups bring their own values and ideas of ‘best practice’, which conflict with those of other individuals and groups and with the culture of the organisation.
A third response to this question is: software development is a relatively young industry. It is volatile, creating uncertainty about how to build secure systems while still delivering value to customers and remaining profitable. A constant stream of new technologies appears, each claiming to solve the plethora of security problems that organisations face. Compared with other industries, such as construction, healthcare, and transport, software development is simply not as mature.
In my opinion, we need to understand what cybersecurity is and what it can learn from more mature industries. Our ideas about what cybersecurity is fall into a very simple notion: protecting the confidentiality of data, maintaining its integrity, and ensuring it is available to those who have a legitimate reason to access it, when they need to access it. We call this the CIA triad. The apparent simplicity of the acronym obscures the complexity involved in actually making our systems secure.
For some, security is a component of quality: if we build quality into our systems, we implicitly build security into them. But measuring quality, and therefore security, is no mean feat. Another way of looking at security is through the lens of safety: we are keeping data ‘safe’ while maintaining a ‘safe’ place within our organisations for the data and for the people responsible for its ‘welfare’ (such as development and operations engineers, data analysts, and system users). With this in mind, there is an argument that cybersecurity should learn from the research that has been carried out over many years into the quality and safety of the products and services that industry provides.
Toyota, for example, invested heavily in quality, becoming a learning organisation that was not afraid to challenge the prevailing traditions of car-making in order to develop affordable, high-quality cars that met its customers’ demands. Toyota challenged the old paradigm of mass-producing cars in large batches of similar styles as a way of reducing costs. Instead, it focused on quality and variety, which required new paradigms, new technologies, and new standards of safety and compliance to make it happen. Similarly, safety is a priority in many industries, including aircraft manufacturing and operation, nuclear power, mining, and space flight. In those industries, critical systems cannot tolerate a reduction in safety compliance or quality.
The research that academic institutions have put into quality and safety is immense. By comparison, there has been very little academic research into cybersecurity. My own search for academic papers on cybersecurity suggests that we need far more research into some of the practices we take for granted. Too often, an organisation’s notion of research amounts to market research and quantitative studies that seem to validate the use of one particular security practice over another. Vulnerability management becomes a normal way of working: imagine boarding an airplane only for the captain to announce, “we are waiting for the defect manager to come on board to manage the defects on this aircraft during the flight”. Yet that is what happens in cybersecurity: we expect our customers to accept the risks we take with their data. When developers discover multiple issues after running an application security scan, they normalise the acceptance that not all of them can be fixed, and so their products remain vulnerable. Diane Vaughan termed this the ‘normalisation of deviance’ following her research into the Space Shuttle Challenger disaster, in which a known component flaw was repeatedly accepted as tolerable until it proved catastrophic.
So, how can we understand cybersecurity? I posit that we should look at research in other domains of safety and quality to find parallels with the cybersecurity industry. Organisations should encourage, and work alongside, academic institutions to carry out more research into cybersecurity, particularly research that challenges current paradigms and allows us to advance new ways of working grounded in proven theories. We will never fully understand cybersecurity, just as we can never fully understand quality and safety; there will always be conjecture and new theories that challenge or advance current praxis. In my opinion, the cybersecurity industry relies on existing paradigms without truly understanding how effective they are at making organisations more secure. To understand cybersecurity is to challenge this norm.