May 21, 2019
Ever since AWS Lambda popularized the serverless architecture, there's been plenty of FUD (fear, uncertainty, and doubt) about securing serverless applications. With a new architecture comes new risks - or at least, that's the common thought among DevSecOps teams. The reality is that serverless is one of the most secure ways to run applications, but like any architecture, it is only as secure as your ability to leverage it.
With serverless, everything below the application level is handled by the serverless provider: the hardware, the operating system, and the application runtime environment, along with their security. This doesn't mean you can forget about security; it means you have to change the way you approach it.
Serverless providers are responsible for securing their platforms, but this doesn't mean that your applications are protected. While you may not have the same level of visibility into, or control over, your apps' behavior as with other architectures, you still need to monitor for threats and identify potential vulnerabilities. Attackers can exploit applications in a variety of ways, and you need to be aware of these vulnerabilities in order to predict, prevent, and respond to attacks.
As Slobodan Stojanovic, CTO of Cloud Horizon, explains: "security is really important because with serverless everything will scale and infrastructure is managed, but your code and security is definitely not fully managed". For example, let's look at three more advanced attacks on serverless applications: credential theft, attack persistence, and container poisoning.
Functions often need access to secure resources, such as other cloud services. These resources usually require some form of authentication, such as tokens and keys (collectively called secrets). Although secrets should be stored securely - typically encrypted - when they're not in use, functions must hold them in plain text in memory while using them.
If an attacker gains access to the application (e.g. via remote code execution), they can dump these secrets from memory and use them to access secure resources.
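As an illustration, consider a minimal sketch of a Lambda-style handler. The handler name, environment variable, and response shape here are hypothetical, but the underlying point holds for any runtime: once a secret is read into the process, it exists as plain text that any code in that process can see.

```python
import json
import os

def handler(event, context):
    # The secret may be stored encrypted at rest (e.g. in a secrets
    # manager or an encrypted environment variable), but to actually
    # use it the function must hold the plain-text value in memory.
    api_key = os.environ["API_KEY"]  # plain text from here on

    # Any code running inside this process -- including code injected
    # via an RCE vulnerability -- can now read the decrypted value,
    # for example by dumping the environment:
    visible = [k for k in os.environ if "KEY" in k]
    return {"statusCode": 200, "body": json.dumps({"keys_visible": visible})}
```

This is why secrets hygiene for serverless focuses on minimizing how long secrets live in memory, scoping each function's credentials narrowly, and rotating them often - you cannot prevent a compromised process from reading its own memory.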
Functions are designed to be short-lived, existing for just a few minutes at their longest. Functions are also meant to be stateless, with each call to the function creating a new instance of the function. However, to avoid cold starts, many providers reuse the same function sandbox or container for different invocations (known as a warm start).
This makes apps feel faster, but attackers might leverage warm starts to persist and launch long-lasting attacks.
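A small sketch makes the persistence risk concrete. In most serverless runtimes, module-scope state is initialized once per container, not once per invocation, so on a warm start it survives between calls. The counter below is hypothetical, but it mirrors how an attacker's injected state could linger:

```python
import time

# Module scope runs once per container (cold start), not once per
# invocation. On warm starts the same process is reused, so this
# state survives between calls.
invocation_count = 0
container_started = time.time()

def handler(event, context):
    global invocation_count
    invocation_count += 1
    # Anything written to module scope (or /tmp) during one invocation
    # can be read back in a later one, for as long as the provider
    # keeps this container warm.
    return {
        "invocation": invocation_count,
        "container_age_seconds": round(time.time() - container_started, 1),
    }
```

Developers use this same mechanism legitimately to cache database connections across invocations; the security concern is that injected code benefits from the exact same reuse.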
Attack persistence also plays a role in container poisoning, where attackers leverage the scalability of the serverless architecture to launch large-scale attacks.
If an attacker gains RCE access to a function, they can inject code that is designed to scale with the application, or even manipulate the application's invocation flow. This makes the attack much easier to scale, and much harder to detect. For example, an attacker may inject code that dumps data from a database. Instead of dumping the entire database all at once, the attacker might dump a single row for each invocation. This uses far fewer resources per invocation and is easily scalable, making it much harder to detect and block.
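To see why this is hard to detect, consider a hypothetical sketch of such injected code. The in-memory table and helper below stand in for a real database and query; the point is that each invocation does only a tiny amount of extra work, so per-invocation duration, memory, and database load all stay within normal bounds:

```python
# Hypothetical stand-in for a real database table.
ROWS = [{"id": i, "email": f"user{i}@example.com"} for i in range(1000)]

def fetch_one_row(offset):
    # In a real attack this would be a single-row query, e.g.
    # SELECT ... LIMIT 1 OFFSET :offset -- cheap and unremarkable.
    return ROWS[offset % len(ROWS)]

# Module scope survives warm starts, so progress accumulates
# across invocations without any external coordination.
exfiltrated = []

def handler(event, context):
    # ... the function's legitimate work would run here ...
    row = fetch_one_row(len(exfiltrated))
    exfiltrated.append(row)  # or sent to an attacker-controlled endpoint
    return {"status": "ok"}  # the response looks unchanged to the caller
```

Because the per-invocation footprint is indistinguishable from normal traffic, defenses have to look at aggregate behavior - unusual invocation patterns, unexpected outbound destinations, or drift in what the function reads - rather than at any single call.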
There are a lot of misconceptions surrounding serverless security, but that doesn't need to be the case. Serverless is one of the most secure ways to run apps in the cloud. With providers assuming the responsibility of securing their platforms, you just need to focus on securing your applications. We explore this in greater detail in our whitepaper, Don't Worry About Serverless Security - Worry About Application Security.
In this whitepaper, we look more closely at the misconceptions surrounding serverless security and what the true approach to security is. Click here to download a free copy.