Understanding Serverless Security

Learn how to properly secure your serverless applications.

Serverless is the latest step in the evolution of cloud computing. It lets developers deploy applications without having to worry about the underlying infrastructure. With serverless, developers can focus exclusively on writing code while taking advantage of cost savings.
Like any new architecture, serverless brings new implications for security. By removing the infrastructure from your control, serverless providers make securing applications easier for developers, while also making traditional security practices almost ineffective. This raises an important question: with no control over the environment, how can you be sure your serverless applications are really secure?

Are Serverless Environments Secure?

Serverless is among the most secure ways to run applications today. The serverless provider assumes responsibility for securing each layer of the environment beneath your application, including maintaining the operating system, updating software, and applying patches. In addition, serverless architectures have built-in security benefits: their ephemeral nature (though some platforms relax this) makes persistent attacks extremely difficult, and their small execution units make abnormalities easier to detect and block.
Serverless architectures benefit DevSecOps teams by letting them leverage the provider's security expertise. You no longer need to worry about system updates or potentially compromised components, since the provider handles these for you. You may have less control, but this reduces the risk of introducing a vulnerability, as could happen, for example, when running your own Apache server. Serverless applications have another benefit: compartmentalization. They are broken into individual functions that are self-contained and fully independent. An attack on one function affects only that particular function, not the rest of the application, limiting the attack's impact. Even a function with broad permissions exposes less than a monolithic application would.
Although serverless applications are known for scalability, platforms like AWS Lambda can and do impose limits. For example, Lambda lets you specify a maximum number of concurrent executions, but also sets a hard limit in case your application scales too much. An attacker may try to perform a Denial of Wallet (DoW) attack where they scale your application, resulting in higher charges. But with limits, the damage caused by this attack is greatly reduced. This still leaves applications vulnerable to Denial of Service (DoS) attacks, but this is no different than any other architecture.
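To make the effect of a concurrency cap concrete, here is a minimal back-of-the-envelope sketch. All numbers (per-GB-second price, request rate, durations) are illustrative assumptions for the sake of the arithmetic, not real AWS Lambda pricing.

```python
# Hypothetical sketch: estimating the compute-cost exposure of a
# Denial of Wallet (DoW) flood, with and without a concurrency cap.
# The price constant below is an assumption for illustration only.

def invocation_cost(memory_gb, duration_s, price_per_gb_s=0.0000166667):
    """Compute cost of a single function invocation."""
    return memory_gb * duration_s * price_per_gb_s

def attack_cost(requests_per_s, hours, memory_gb, duration_s,
                max_concurrency=None):
    """Total compute cost of a flood of malicious invocations.

    With a concurrency cap, sustained throughput is bounded by
    max_concurrency / duration_s invocations per second.
    """
    if max_concurrency is not None:
        requests_per_s = min(requests_per_s, max_concurrency / duration_s)
    return invocation_cost(memory_gb, duration_s) * requests_per_s * hours * 3600

# A 10,000 req/s flood against a 1 GB, 1 s function, sustained for 24 hours:
uncapped = attack_cost(10_000, 24, 1.0, 1.0)
capped = attack_cost(10_000, 24, 1.0, 1.0, max_concurrency=100)
```

With a cap of 100 concurrent executions, the attacker's sustained throughput (and therefore the bill) drops to 1% of the uncapped scenario, which is exactly the damage-bounding effect described above.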

A Brief History of Cloud Native Security

Serverless adoption is skyrocketing as more organizations are taking advantage of the ability to delegate infrastructure efforts and responsibility to FaaS providers. Serverless applications allow organizations to innovate faster and focus only on their business logic. At the same time, new tools and best practices for securing these applications are emerging from vendors and third-party security experts.
However, delegating the infrastructure also means giving up control, and that loss of control can leave applications vulnerable. Developers need to focus on using appropriate tools and practices to secure the application layer. Relying on tools designed for other architectures, or ignoring this requirement altogether, can result in a weaker security posture than before.
Remember, attackers don't care whether your application is running on a serverless platform, a container, or on-premise. What matters is that the application itself is protected.

Why Securing Serverless Means Securing Your Applications

For developers and their organizations, serverless security means protecting their applications against threats in an environment with limited visibility and control over the underlying infrastructure. Serverless providers assume responsibility for securing the infrastructure, leaving application layer security to developers.
Although this reduces the attack surface that organizations need to account for, it also introduces new challenges. Many of the tools and best practices used in other architectures, like containers or VMs, don't apply to serverless, as they rely on access to the host or platform. They might need low-level access to monitor activity, track resources, or enforce security policies, none of which is available in a serverless environment.
Instead, organizations need to focus on protecting against threats to the application layer.

Protecting the Application Layer

Since serverless application developers must focus on securing the application, best practices such as the OWASP Top Ten still apply. Developers need to secure against threats such as:
  • 1. Code injection
  • 2. Broken authentication and access controls
  • 3. Accidentally exposing sensitive data
  • 4. Using vulnerable components, such as insecure or outdated libraries
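The first item above, code injection, is a good example of an application-layer defense that works the same in a serverless function as anywhere else: never interpolate untrusted input into a query. The sketch below uses Python's built-in sqlite3 module for illustration; the table, column, and function names are hypothetical.

```python
# Minimal sketch of defending against SQL injection inside a function
# handler, using a parameterized query. Schema and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # Parameterized query: the driver treats the value strictly as data,
    # so a classic payload like "' OR '1'='1" cannot alter the SQL.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

assert find_user("alice") == [("alice", "admin")]
assert find_user("' OR '1'='1") == []  # injection attempt matches nothing
```

The same principle (parameterization or strict input validation) applies to any interpreter the function feeds user input into, not just SQL.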
To detect these and other attacks, organizations need to monitor their applications. In addition, organizations need a baseline understanding of their application's behavior in order to recognize deviations or anomalies. But by abstracting away the infrastructure, serverless providers prevent you from collecting valuable data such as process information, CPU and RAM usage, and host details. While you can't monitor activity taking place on the host itself, you can monitor activity that takes place between the application and the host, such as network traffic and system calls. Although limited, this data can still be used to detect malicious activity.
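One simple way to act on the limited data that is available is to learn a baseline of expected behavior and flag deviations. The sketch below does this for outbound network destinations, one of the signals that remains observable in a serverless environment; the hostnames are hypothetical.

```python
# Hedged sketch: anomaly detection over observable network activity.
# A baseline allowlist records the hosts a function is known to contact;
# any other destination is flagged. All hostnames are made up.

BASELINE_HOSTS = {
    "api.internal.example.com",
    "secretsmanager.us-east-1.amazonaws.com",
}

def detect_anomalies(observed_hosts):
    """Return outbound destinations that deviate from the baseline."""
    return sorted(set(observed_hosts) - BASELINE_HOSTS)

# A compromised function suddenly calling out to an unknown host:
traffic = ["api.internal.example.com", "attacker-c2.example.net"]
alerts = detect_anomalies(traffic)
```

Real monitoring would build the baseline automatically from historical traffic rather than hard-coding it, but the deviation-from-baseline idea is the same.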
Let's look at two types of serverless attacks: credentials theft and attack persistence.

Protecting Against Credential Theft

Applications, microservices, and functions alike require access to different cloud resources and services. These resources are often protected, requiring credentials, tokens, or other forms of authentication before granting access. Such credentials are known as secrets, and they are among the most highly valued targets for attackers.
An attacker who gains access to secrets can use them to reach other secure resources, including storage containers, external services, and other functions. The challenge is storing and transmitting secrets securely. Services like AWS Secrets Manager and HashiCorp Vault protect secrets by encrypting them at rest and in transit. However, in order to use a secret, a function must first decrypt it and hold it unencrypted in memory. If an attacker gains access to the function, they may be able to read any currently loaded secrets and use them to spread their attack.
The same problem is present in Identity and Access Management (IAM). To access secure resources, AWS applications can use the AWS SDK to request temporary security credentials. These credentials allow the application to assume an IAM role and access secure resources. However, these credentials are stored in memory as cleartext. If an attacker achieves code execution and manages to retrieve these credentials, or if the application leaks them, the attacker will have access to the secure resources.
What organizations and serverless platforms want is a way to allow functions to securely access secrets without storing them as cleartext during runtime.
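Until platforms offer that, one mitigation within the application's control is to shrink the window during which plaintext secrets exist. The sketch below is a hypothetical pattern, not a specific library's API: fetch the secret only when needed, use it, and scrub the buffer immediately afterwards. (In CPython, immutable string copies may still linger elsewhere; a mutable bytearray at least lets us zero the primary copy.)

```python
# Sketch, under assumed names: minimize the lifetime of a plaintext
# secret inside a function invocation.

def fetch_secret():
    # Stand-in for a call to a secrets manager (e.g. decrypt-on-demand);
    # returns a mutable buffer so it can be scrubbed later.
    return bytearray(b"s3cr3t-token")

def with_secret(action):
    """Run `action` with the secret, then zero the buffer."""
    secret = fetch_secret()
    try:
        return action(bytes(secret))
    finally:
        # Best-effort scrub: overwrite the buffer so the plaintext does
        # not persist in memory for the container's whole lifetime.
        for i in range(len(secret)):
            secret[i] = 0

result = with_secret(lambda s: s.decode().startswith("s3cr3t"))
```

This does not eliminate the exposure described above, but it narrows the window in which a compromised container can harvest the secret.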

Protecting Against Attack Persistence and Container Poisoning

A key benefit of serverless applications is ephemerality: for each request, the serverless platform should create a brand-new instance of a function and destroy it once its task is finished. Ephemerality is ideal from both operational (memory management) and security standpoints, but creating new function containers/sandboxes introduces latency, known as a cold start.
In reality, FaaS platforms reuse function containers/sandboxes to avoid cold starts and their associated costs. This reduces the time and resources needed for high-traffic serverless applications, but it also gives attackers a way to mount persistent attacks. If an attacker gains remote code execution (RCE) capabilities, their attack could persist for the lifetime of the function's container/sandbox. Ideally, injected code would be destroyed after the function's invocation, but the practice of keeping functions "warm" gives these attacks a much longer lifespan.
This leads to another type of attack: function container poisoning. By leveraging the architecture and scalability of serverless applications, attackers can launch attacks that are both highly scalable and difficult to detect. Attackers can also use techniques to keep functions warm, preventing the platform from destroying them.
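The container-reuse behavior that makes this possible is easy to demonstrate: anything a function stores outside its handler (module or global scope) survives into later invocations served by the same warm container. The toy handler below is a sketch, not any platform's real runtime.

```python
# Illustration of why warm containers extend an attack's lifespan:
# global state set during one invocation persists into later invocations
# of the same container. Event shape and names are hypothetical.

PLANTED = []  # module/global scope = survives across warm invocations

def handler(event):
    # An attacker with RCE could stash code or stolen data here...
    if event.get("inject"):
        PLANTED.append(event["inject"])
    # ...and every later request served by this container sees it.
    return {"planted": list(PLANTED)}

handler({"inject": "malicious-payload"})  # the compromised invocation
later = handler({})                       # a "clean" request, same container
```

After the first poisoned invocation, the supposedly clean request still observes the planted payload, which is exactly the persistence window described above.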

Where do Web Application Firewalls (WAFs) Fit In?

Web Application Firewalls (WAFs) work by inspecting and filtering traffic between cloud applications and the public Internet. WAFs are more effective for traditional applications that are monolithic, less dynamic in terms of scaling, and have a clear operational boundary.
Serverless applications, on the other hand, are distributed, highly scalable, and consist of potentially thousands of individual functions. In addition to being hard to scale, WAFs are expensive, are only effective against known vulnerabilities, have a high rate of false positives, and can't protect internal network traffic. WAFs may offer some benefits to serverless applications, but their limitations make them impractical.

The Benefits of a Serverless Security Solution

A good serverless security solution must be able to tackle application-level security challenges, including:
  • 1. Monitoring the application and infrastructure in real time to achieve visibility for security
  • 2. Providing a fresh execution environment for every function invocation without restricting performance
  • 3. Thoroughly inspecting inbound and outbound traffic to detect malicious traffic or possible data leaks, even for internal traffic
  • 4. Securing secrets in transit, at rest, and while in use by a function
  • 5. Delivering alerts when problems, anomalies, or possible attacks are detected
  • 6. Being runtime agnostic and offering high detection rates regardless of the application or the environment
  • 7. Scaling with your application in terms of both performance and price
Trying to retrofit older security technologies to meet these requirements would be extremely difficult and ineffective in most use cases. We need security solutions designed from the ground up for serverless architectures.
This means offering fast, scalable, runtime agnostic protection and monitoring that integrates closely with your infrastructure and doesn't slow down your applications.

Additional Resources

If you want to learn more about serverless security, contact us for more details about Nuewba's highly secure and ultra-fast serverless platform.