You may have noticed from my previous blogs, or the talks I’ve given, that I’m a massive advocate for the security benefits that can come with moving to a good cloud service.
And yet, as these services grow in popularity, it’s increasingly common to hear news stories reporting that a company or organisation has suffered a cyber breach, accidentally leaking or losing sensitive data.
In this blog post I want to talk about why accidental breaches are continuing to happen in the context of cloud services and suggest some measures that should make future incidents less likely.
Recently, we’ve seen a steady flow of incidents where organisations using cloud services haven’t applied the security settings needed to keep their information private. In many cases, the result has been that anyone with an internet connection can see their often sensitive data.
We’re talking here about things like S3 buckets being unintentionally left open, sensitive data being posted to public Trello boards and web service API keys being accidentally checked into GitHub. These are just some examples of the kind of issues that can affect many different types of cloud service, including SaaS offerings.
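To make the GitHub example concrete, here is a minimal sketch of the kind of check a pre-commit hook or secret scanner performs before code is pushed. It only looks for one well-known credential format (AWS access key IDs, which begin with `AKIA` followed by 16 upper-case alphanumeric characters); real tools such as git-secrets cover many more patterns, and the file content below is invented for illustration.

```python
import re

# AWS access key IDs follow a documented, easily matched format.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_leaked_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)

# Example: a config file accidentally staged for commit (made-up key).
staged = 'aws_access_key_id = "AKIAABCDEFGHIJKLMNOP"'
print(find_leaked_keys(staged))  # -> ['AKIAABCDEFGHIJKLMNOP']
```

Running a check like this automatically, before anything reaches a public repository, removes the burden on individual contributors to spot every mistake themselves.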
Many of these breaches have come to light through the work of security researchers who have responsibly disclosed their findings to the affected organisations. To these noble individuals I would like to say a big thank you.
Our heads are in the cloud
So, why are these breaches happening? My theory is that some people are treating cloud services in the same way they would an on-premise service: one with centralised enterprise control and oversight. This leads to the assumption that, by default, they get the same control over cloud services as they would over the on-premise equivalent.
To use a crude analogy, our traditional self-hosted IT can be thought of as having been built inside a medieval castle. It was up to us to decide how many portcullises we wanted and the minimum moat width. We could change the shape of the arrow slits when we had to defend against new weapons, and we could choose when to raise and lower the drawbridge to let only the people we wanted inside.
In our heads, we’ve assumed that cloud services resemble our castle. It just happens to be a castle that’s owned by somebody else and shared by several families. In many ways that’s true, especially when you stretch the analogy to suggest that archers on the battlements come as part of the service.
Unfortunately, that way of thinking allows us to forget that many cloud services aren’t designed to directly replicate old IT. Many cloud services are intentionally designed to promote collaboration and data sharing, while still allowing us to constrain access to named organisations or individuals.
In our old on-premise model, making some data available to ‘everyone’ meant ‘everyone within the organisation, but no-one else’. In the cloud it can mean the same thing, or, by design, it can mean ‘everyone on the Internet can see it’.
Whilst the cloud provider is responsible for delivering a secure platform for our data, we as data owners are still responsible for how the service is configured. This means we need to acknowledge and act on our responsibility to configure the service to only share the data that we intend to, with only the people we want to.
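For S3 specifically, the data owner’s side of that responsibility can be expressed directly as configuration. A minimal sketch, assuming boto3 and valid AWS credentials, of applying S3’s Block Public Access settings to a bucket (the bucket name is a placeholder; the four setting names are the real S3 parameters):

```python
def desired_public_access_block() -> dict:
    """Our intent, expressed as configuration: nothing in this bucket
    should ever be publicly readable, by ACL or by bucket policy."""
    return {
        "BlockPublicAcls": True,       # reject new public ACLs
        "IgnorePublicAcls": True,      # ignore any existing public ACLs
        "BlockPublicPolicy": True,     # reject bucket policies granting public access
        "RestrictPublicBuckets": True, # limit access to the bucket owner and AWS services
    }

if __name__ == "__main__":
    # Applying the configuration needs AWS credentials, so it is kept
    # out of the importable part of the sketch.
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="example-internal-data",  # placeholder bucket name
        PublicAccessBlockConfiguration=desired_public_access_block(),
    )
```

The point is less the specific API and more the mindset: the provider secures the platform, but a statement of intent like the one above is ours to write and apply.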
Finding the silver lining
We recommend that your focus should be on reducing the burden on users to make good security decisions:
- Make it obvious to contributors that they must not submit sensitive data to services – or parts of a service – that have public sharing enabled. This ensures that anyone contributing to publicly visible services will be on the same page about what is/isn’t appropriate to post.
- Set sharing to be ‘off’ by default on each of the services that you use to hold internal-only data. And – unless it’s required – entirely disable the ability to make data public. You should continue to use cloud services to share data with named individuals who are otherwise outside your organisation, in an intentional, managed and audited way.
- Identify an individual or a small team as being responsible for your organisation’s use of each cloud service. They can make sure the service is configured as expected and act as an authoritative contact if things go wrong (don’t forget to make sure this responsibility is tolerant to people leaving your organisation!). You can then get confidence in such configurations by including your organisation’s use of SaaS in your regular security audits and penetration tests.
- Reduce the desire for your people to use shadow IT. You can do this by creating organisation-wide accounts for the services that your people need or want to use. An individual signing up for services for their own use may not realise that they need to put a bit of thought and effort into the configuration. It’s very hard to audit/monitor something that you don’t know is there.
- Avoid sharing secrets such as credentials, API keys and password reset emails in shared services, unless they are appropriately protected by the service and can only be accessed by specific authenticated users. This is described in Cloud Security Principle 2 and our Secure Development and Deployment guidance.
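The auditing suggested above can be partly automated. As an illustration, a sketch (assuming boto3 and AWS credentials; the helper and its name are my own, not a library API) that flags any S3 buckets whose policy makes them public, so the responsible team can review them:

```python
def flag_public(bucket_status: list[tuple[str, bool]]) -> list[str]:
    """Given (bucket_name, is_public) pairs, return the names that are
    publicly visible and therefore need review."""
    return [name for name, is_public in bucket_status if is_public]

if __name__ == "__main__":
    # Gathering the real data needs AWS credentials, so it lives here.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    status = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            policy = s3.get_bucket_policy_status(Bucket=bucket["Name"])
            is_public = policy["PolicyStatus"]["IsPublic"]
        except ClientError:
            # No bucket policy at all: not public via policy.
            is_public = False
        status.append((bucket["Name"], is_public))
    print(flag_public(status))
```

Run regularly, a report like this turns “we assume nothing is public” into something the responsible individual or team can actually verify.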
I don’t think any of these suggestions will be a magical sticking plaster that makes accidental data leaks go away entirely. However, I am hoping they can make such incidents significantly less common.
As always, any comments gratefully received below.
Cloud Security Research Lead – NCSC
Source: National Cyber Security Centre