March 04, 2019
Serverless has seen significant growth in recent years. And indeed, serverless does allow developers to spend more time doing what they love – working on code. This matters not only from a technical perspective – people do their best work on what they love – but also from a human-resources one: experienced developers are a costly resource, and it’s in the organization’s best interest to leverage their knowledge and skill where it counts most.
From any perspective, serverless is great for agile businesses. Faster app deployment leads to happier customers, after all. However, in the transition to serverless, there are still a few challenges that need to be overcome.
The major FaaS providers (AWS Lambda and its peers) work to minimize users’ cold-start latency and optimize performance – as they should. To do so, however, they often keep environments (containers and sandboxes) alive for up to 55 minutes after a function has finished running, ready in case the function is called again. This keeps latency in check but can create problems of its own. Reused containers can cause functions to run out of memory if leftover state isn’t cleaned up, and keeping containers alive unnecessarily weakens security by giving attackers more opportunities to penetrate open instances, or to move laterally through the network via reused containers.
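The reuse behavior is easy to observe: anything stored at module scope survives between invocations that land on the same warm environment. A minimal sketch (the handler and its accumulating list are illustrative, not any provider’s API):

```python
# Simulates how a FaaS runtime reuses a warm environment:
# module-level state persists across invocations until the
# container is finally recycled.

# Module scope: initialized once per container (the cold start),
# then shared by every invocation that reuses that container.
invocation_cache = []

def handler(event, context=None):
    # If cleanup is forgotten, this list grows on every warm
    # invocation -- the memory-exhaustion risk described above.
    invocation_cache.append(event)
    return {"warm_invocations": len(invocation_cache)}

# Two calls against the same "warm" environment:
first = handler({"id": 1})   # state from the first call...
second = handler({"id": 2})  # ...is still visible in the second
```

After these two calls, `second["warm_invocations"]` is 2, not 1 – the second invocation sees state left behind by the first, which is exactly why leftover data in a reused container is both a memory and a security concern.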
There are two options you can choose from when moving to serverless: lower-performance yet safer cold starts, or warm starts that can carry a financial and security cost.
The key advantage of serverless is that it divorces code from infrastructure – you write the code, the FaaS provider handles the infrastructure. The problem is that when you give up control over infrastructure, you also relinquish the ability to closely monitor your functions in their operational environment, to troubleshoot problems, and to track costs closely.
If you can’t install monitoring agents and daemons because the application environment is locked down, you end up sorely lacking visibility, which makes it difficult to course-correct when problems inevitably arise. This lack of control also affects security: an undetected error is a soft spot that an attacker can exploit. Make sure you have real-time visibility into all functions and activities.
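One inexpensive step you can take yourself is emitting structured JSON logs from inside the handler, so that whatever log pipeline the provider offers can at least be filtered and alerted on. A sketch (the field names and function names here are illustrative):

```python
import json
import time

def log_event(function_name, event_type, **fields):
    """Emit one structured log line; JSON fields can be indexed
    and queried by a log aggregator downstream."""
    record = {
        "ts": time.time(),
        "function": function_name,
        "type": event_type,
        **fields,
    }
    line = json.dumps(record, sort_keys=True)
    print(line)  # stdout is typically captured by the platform's log service
    return line

# Example: record a normal invocation and a caught error.
log_event("billing-worker", "invocation", duration_ms=42)
log_event("billing-worker", "error", message="upstream timeout")
```

This doesn’t replace real monitoring, but consistent machine-parseable fields make the logs you do get far more useful than free-form print statements.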
Debugging locally is an important part of the SDLC. Yet to truly understand how code performs, you need to run it in the actual environment and system that will eventually host it. Most serverless vendors offer log-based performance metrics (like AWS CloudWatch), but these are extremely limited in terms of debugging, and can be expensive overall to use.
What’s more, you can’t attach a debugger to a function running in AWS Lambda. Even though you can test and debug Lambdas locally, it’s not simple. And in any case, the final word on actual performance is still in the CloudWatch logs.
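Because a handler is ultimately just a function, one common workaround is to drive it locally with a stubbed event and context, where breakpoints actually work. A minimal sketch – the handler logic and the stub’s field values are hypothetical, though the attributes shown (`function_name`, `memory_limit_in_mb`, `aws_request_id`) are ones real Lambda context objects carry:

```python
import json

def handler(event, context=None):
    # Hypothetical business logic standing in for a real Lambda handler.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }

class FakeContext:
    """Stub of the Lambda context object with fields many handlers read."""
    function_name = "local-test"
    memory_limit_in_mb = 128
    aws_request_id = "00000000-0000-0000-0000-000000000000"

# Drive the handler the way the runtime would, but under a local
# debugger where you can step through and inspect state.
response = handler({"name": "dev"}, FakeContext())
```

This catches logic bugs early, but it doesn’t reproduce the real environment’s limits (memory, timeouts, IAM permissions) – which is why production behavior still has to be confirmed in the platform’s own logs.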
And it’s not only about performance. Debugging is all about identifying problems and figuring out how to solve them. Lacking this troubleshooting capability, if there’s a problem in your serverless app, there’s a good chance that you’ll never know it exists. And even if you do, it’s tough to know where exactly it originated.
In the end, you’re left with the choice between debugging only locally – and hoping for the best in production – or trying to maximize the usefulness of inherently limited CloudWatch logs.
There’s a mistaken assumption that security is built in to serverless environments. The truth is that even AWS Lambda, Azure Functions, and Google Cloud Functions do not fully secure serverless applications. None of these providers secures the application layer – leaving the topmost, and arguably most valuable, part of the app exposed. And without access to the infrastructure or servers, developers couldn’t add that protection themselves even if they were acutely aware of the security gap on the provider side.
Serverless users are becoming aware of the systemic flaws in serverless platforms (like container reuse and the clear-text exposure of credentials, such as IAM role credentials, at runtime), and are actively seeking viable, safe technology to supplement what serverless providers are currently not offering.
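The credential exposure is concrete: the Lambda runtime passes the function’s temporary IAM credentials to your code as ordinary environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`), so anything running in the process – including a compromised dependency – can read them. A sketch of a simple check, run here against a fake environment mapping with placeholder values:

```python
# Names under which the Lambda runtime exposes the function's
# temporary IAM credentials to all code in the process.
CREDENTIAL_VARS = (
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_SESSION_TOKEN",
)

def exposed_credentials(environ):
    """Return which credential variables are present in an environment mapping."""
    return [name for name in CREDENTIAL_VARS if name in environ]

# A fake runtime environment (the values are placeholders, not real keys):
sample_env = {
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "fake-secret",
    "AWS_SESSION_TOKEN": "fake-token",
    "PATH": "/usr/local/bin",
}
print(exposed_credentials(sample_env))
```

In a real function you would pass `os.environ` instead of `sample_env`; the point is that any third-party code you import can do the same.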
There are plenty of reasons to run your app in a serverless environment, but there are also some notable disadvantages you need to be aware of. Serverless platforms like AWS Lambda are highly attractive – offering a way to speed up the development cycle while letting developers concentrate on creating great apps.
But the inherent problems of security, monitoring, debugging, and performance are constant pain points that cannot – and should not – be ignored. As with any paradigm shift, though, if you go into it with a clear view of the possible pitfalls, the benefits far outweigh the downsides.
Read our new ebook for the full story on the top challenges in serverless.