October 02, 2019
While building our own FaaS platform (Nuweba), we needed a reliable way to test it and compare it to other managed FaaS solutions. Although many blogs and tools attempt to deal with FaaS benchmarking, each approaches the subject from its own perspective and contains technical mistakes, either in the testing methodology or in the presentation of the results.
We needed a tool that would allow us to reliably test FaaS platforms' performance in different scenarios. This is why we created FaaSbenchmark.
Additionally, we wanted to enable the community to benchmark FaaS platforms with just a few clicks, which is why we also created FaaSTest. FaaSTest is a website on which FaaS platforms' performance and benchmark tests are presented and can be easily analyzed. The benchmark results on FaaSTest are measured once a week using FaaSbenchmark.
FaaSbenchmark is an open-source framework written in Go for accurately benchmarking FaaS platforms' performance.
FaaSbenchmark enables you to test and understand the performance of various FaaS platforms like AWS Lambda, Google Cloud Functions, and Azure Functions.
In addition to FaaSbenchmark, we are releasing three packages used by FaaSbenchmark:
HTTP Bench: a fine-tuned HTTP(S) benchmark framework with synchronization across requests and detailed traces.
Azure-Stack: a Go package to deploy functions to Azure Functions.
SLS: a simple Go wrapper for the Serverless Framework (used in the AWS and Google tests).
The CLI is a tool for you to control and run performance tests using FaaSbenchmark. There are two available commands in the CLI:
The Terminal UI
We've created a TUI that lets you view results directly from the terminal.
FaaSbenchmark is not just about cold starts*; it's a generic framework for benchmarking a variety of different scenarios, such as:
Cloud services performance using FaaS
*We believe that Invocation Overhead is a more precise term than the commonly used Cold Start. You can find more details about Invocation Overhead in our next blog post.
As active members of the serverless community, we decided to offer an accessible and pluggable solution by open-sourcing this framework and inviting other members of the community to participate in our efforts.
This will include support for the major providers, test suites, and function stacks.
FaaSTest is a website that presents the FaaSbenchmark results and allows users to choose and define the different test results they want to examine.
Each week we run FaaSbenchmark on all the supported providers. The results can then be viewed on FaaSTest.com.
Our CI for FaaSbenchmark runs the tests for each provider inside that provider's environment (for example, the FaaSbenchmark tests for AWS run on AWS compute).
If you're interested in finding the fastest (pun intended) FaaS platform, or curious about the performance differences between FaaS providers, you can use FaaSTest to get an answer quickly and easily.
When entering FaaSTest, you choose the test you want to examine, for example: “Increasingloadlvl1”.
The results summary is shown on a podium ranking the best-performing providers by the average across all functions per provider.
* Please note that we currently don’t take into account invocation errors.
The results of the test are shown on a plot, where you can choose the providers you want to compare and the graph type: Bar, Box, or Line.
Each colored graph represents a different function.
The results consist of four different data points:
You can also:
The first release of FaaSTest only supports AWS, Azure, and GCP, but we invite you to add new test cases and FaaS providers using the open-source framework.
At the moment, Nuweba's benchmark sources and results will not be shown on FaaSTest.com. Our core values are reliability and transparency: since we don't currently offer a self-service solution, other users can't corroborate our results.
We would really value your feedback. Please contact us on GitHub, and thanks in advance!
In our next blog post, we'll explain why we should rethink what cold starts and warm starts are, why we should start calling them invocation overheads, and how to accurately test invocation overheads. Stay tuned for more information.