For years, security research has been focused around technology. But now – finally – the humans in the system are getting the attention they deserve.
As a security community, we’re beginning to recognise the impossible demands we’ve placed on the end user – supposedly the weakest link – only to throw our hands up in despair when things don’t work out in practice the way they looked on paper, blaming systemic failures on the “weakness” of those users. But the end user is not the only human in the system. The competing perspectives, desires and pressures of everyone involved in getting something to market (the CEO, security practitioner, accountant, legal expert, safety consultant) all cause headaches for developers.
What’s the problem?
In a world where connectivity is a prerequisite, it’s easy to find examples where vulnerabilities in code have been exploited with some pretty devastating consequences. The first SQL injection attack was reported while I was (just about) still at school, yet it remains a significant and recurring problem. The tools to robustly protect against such vulnerabilities are widely known. So why don’t developers simply use them? We don’t believe it’s that straightforward – perhaps, as security professionals, we should stop blaming developers and instead ask “What stops these tools being used? And how do we make sure they are?”
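To make the point concrete, here is a minimal sketch of SQL injection and the standard, long-established mitigation – a parameterised query. It uses an in-memory SQLite database and a hypothetical `users` table purely for illustration:

```python
import sqlite3

# Hypothetical table with one row, used only to demonstrate the attack.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row is returned.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Robust: a parameterised query treats the input purely as data,
# so the same malicious string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # [('alice',)] – the injection succeeded
print(safe)        # [] – the injection failed
```

The fix has been available in every mainstream database driver for decades – which is precisely why “why isn’t it used?” is the more interesting question.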
What do developers have to deal with?
Consider a developer – with no domain expertise in cryptography – using a cryptographic library API. These libraries are potentially powerful tools for protecting data, but used incorrectly they can create a false sense of security. Choosing the most appropriate algorithm and mode of operation is vital, and selecting a sensible approach to key generation and secure key storage also requires fairly detailed crypto knowledge. Yet these APIs are prone to misuse, which can lead to vulnerabilities such as failure to validate certificate chains correctly, insecure encryption modes and inadequate random number generation.
Navigating past the potential security pitfalls in APIs (and other tools) without this specialist knowledge is really hard, and isn’t helped by a prevailing expectation amongst library designers that developers ‘are experts’ and should therefore know better.
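One of the pitfalls above – inadequate random number generation – is easy to fall into without specialist knowledge, because the obvious-looking API is the wrong one. The sketch below (Python standard library only) contrasts a predictable generator with a cryptographically strong one; the variable names are illustrative, not from any particular codebase:

```python
import random
import secrets

# Pitfall: 'random' is a deterministic PRNG. Anyone who learns (or
# guesses) the seed can reproduce every "key" it ever produces.
random.seed(42)  # an attacker who knows this recovers the key below
weak_key = random.getrandbits(256).to_bytes(32, "big")

# Suitable for key material: 'secrets' draws from the operating
# system's cryptographically secure random source.
strong_key = secrets.token_bytes(32)

print(len(strong_key))  # 32 bytes = a 256-bit key
```

Nothing in the library stops a developer picking the first approach – both calls succeed, both return 32 bytes, and only one of them is safe. That is exactly the kind of silent failure a developer without crypto expertise cannot be expected to spot.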
Usability is rarely seen as a fundamental requirement for design – let alone for security. We don’t think it’s reasonable to promote a ‘secure’ product while stating that its security depends on how it is used. How does a developer ensure that their product will be secure enough no matter how it is used, bearing in mind there’s no such thing as perfect security?
Then there are the time pressures of getting code into production and, for many, embracing continuous delivery. Amazon, in 2014, deployed 50 million changes. That’s more than one change deployed every second of every day. Add to this a constantly shifting threat landscape, and we’ve a situation where conversations about security risk being left out because they are too hard, too slow and too expensive.
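The arithmetic behind that claim is worth a quick check:

```python
# 50 million deployments over one year, expressed as a per-second rate.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000
rate = 50_000_000 / SECONDS_PER_YEAR
print(round(rate, 2))  # roughly 1.59 changes per second
```

At that pace, any security process that takes hours per change simply cannot sit on the critical path.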
How can we make it better?
Whilst stamping our feet and cursing developers might be cathartic, it clearly isn’t having much effect. We need to invest time and effort into understanding developers and the development process, so that we can re-focus our efforts on creating developer-friendly approaches. We need to motivate and support these professionals to make better security decisions.
To explore this thorny problem in more depth, the NCSC and the Research Institute for the Science of Cyber Security (RISCS) have brought together a multidisciplinary community to start understanding the challenges. This community of leading academics, industry practitioners and government experts spans the social science disciplines through to more traditional technical backgrounds associated with cyber security. From these discussions, we’ll be issuing a call to this community for research proposals that the NCSC will sponsor under the RISCS umbrella over the next financial year.
In the meantime, expect to see some more blogs from us on various aspects of this topic in the coming months, as we explore the key drivers and blockers to secure software development.
Source: National Cyber Security Centre