Knative is undoubtedly awesome. It is dynamic, source-centric, extensible, robust, and well thought out. And in a way, it allows you to take back control of your FaaS platform.
But nothing good comes easy. Knative is a big system, and the more configurations and extensions you add, the more complex the setup becomes.
In this first part of the series, we will try to solve some boilerplate issues, in addition to bringing something new to the table. At the very least, we will import your (already deployed) AWS Lambda functions to your Knative platform in just a few clicks (along with some keyboard hits and zero face smashes, I hope).
Before diving in, let's go through what's to come. This guide can prove useful both for those who have minimal knowledge of Kubernetes and aren't too familiar with Knative, and for those who want to experiment with some of its features.
Your next several minutes are going to look something like this:
- Setting up a single-master Kubernetes cluster using kubeadm on AWS, hassle-free (if you already have a cluster and would like to skip this part, please make sure it is configured as assumed; see the requirements section)
- Setting up Knative on your Kubernetes cluster, hassle-free (if you already have a Knative setup and would like to skip this part, please make sure it is configured as assumed; scroll quickly through the Knative section)
- Deploying a "Lambda function import" build template and an example service (Python 3.7)
- Pointing your API Gateway endpoint to your Lambda-like Knative service (and making sure it integrates with other AWS services like DynamoDB)
- Ending notes, as everything comes to an end eventually
So, you might be asking yourself a thing or two, which I'll try to answer right now:
- 🕺 What do I mean by hassle-free? Well, I'm going to attach snippets that work out of the box (even if you SSH in and copy & paste!). Some of them come with a few extra steps or a required order of execution, but they are true no-brainers. I've also added comments to the configuration files so you'll understand the basics of the underlying services; it's always better to understand what you're doing.
- ⏳ How long is it going to take? That's a great question - and that is entirely up to you! Copy & pasting while skipping explanations could get you running with your imported Lambda function in less than 10 minutes, and that's great. But if you would like to take it as a learning opportunity, open up Google in your browser and get ready to widen your serverless horizons!
- 🤓 What do I mean by "configured as assumed"? Before each section, I've compiled a list of requirements for the given service (e.g. assumptions about the AWS setup or the Kubernetes setup), to make sure that those who already have some of the resources ready have them all set up the same way. If you want to start from scratch, you don't have to go through the requirements list (you can, though); skip straight to the hands-on part, where configurations, scripts, and snippets are shared.
Long introductions always set high expectations! Or do they?
Single-master, single-node, AWS EC2s - ever heard of something more beautiful?
Setting up a Kubernetes cluster is a task of its own, and the right setup depends on what our application is going to need from the cluster:
- If you already have a Kubernetes cluster, make sure you meet each of the requirements.
- If you're only interested in a 1-click setup (a Kubernetes cluster with Knative on it) and don't want to check the requirements, you can skip this section.
* You'll need to perform this stage manually, using the AWS Web Console, AWS CLI, or any management tool of your choice
* We'll use Ubuntu 18.04 LTS - Bionic (Community AMI: ami-06397100adf427136 on us-west-1) as our base AMI, specifically going for:
- 1 x EC2 Instance for master, minimum 4GB RAM (2 vCPUs), 8GB Disk
- 1 x EC2 Instance for node, minimum 12GB RAM (4 vCPUs), 24GB Disk
- Make sure your node EC2's disk is extended to 24GB as mentioned above! The extension doesn't come as part of the AMI, so you might need to do it by hand (e.g. run `sudo growpart /dev/xvda 1` and similar commands)
- Shared VPC, DNS resolving, connected to Internet-facing interface on each of the instances (kind of a default)
- Security group allowing all traffic inside the VPC subnet (and SSH, obviously), attached to both instances. TCP 6443 can be added if you would like to control your cluster from your own computer instead of SSHing into the machines.
- IAM Role allowing EC2 access to the necessary AWS services. For the sake of this tutorial I created a role named knative_tester with the AdministratorAccess AWS managed policy attached. Please note this is bad practice, but this article isn't focused on a secure production deployment; it's about learning as much new stuff as possible :) This step is one of the most important ones.
- Optional stuff that I did myself: creating a Launch Template with the security group and the AMI, and launching both my node and my master as Spot Instances, using a Spot Request with the minimum requirements mentioned above. In addition, I attached an Elastic IP to the master instance so it stays reachable from the outside, even if the spot instance is replaced overnight. Remember that launching instances through a Launch Template doesn't automatically attach the IAM role to the EC2/Spot instances - you still need to do it manually or via the CLI!
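The manual steps above can be sketched with a couple of shell and AWS CLI commands. Treat this as an illustration: the role name knative_tester comes from the text, but the instance ID, device name, and the exact commands are assumptions you'd adapt to your own account and instances.

```shell
# On the node instance: grow the root partition and filesystem to use the full
# 24GB volume. Device names vary by instance type (/dev/xvda vs /dev/nvme0n1);
# check with lsblk first.
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1

# From your workstation: create the (deliberately over-permissive) IAM role,
# wrap it in an instance profile, and attach it to an instance.
# The instance ID below is a placeholder.
aws iam create-role --role-name knative_tester \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name knative_tester \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-instance-profile --instance-profile-name knative_tester
aws iam add-role-to-instance-profile --instance-profile-name knative_tester \
  --role-name knative_tester
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=knative_tester
```

Note that on EC2, roles are attached to instances through an instance profile; the console hides this indirection, but the CLI does not.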
Knative itself could be seen as a Kubernetes extension, in a way. It has its own objects and controllers, but relies on Kubernetes' core components. You can think of it as a big configuration of a Kubernetes cluster to support general serverless needs: building, serving, and eventing functions in a very customizable way.
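To make "Knative objects managed by Knative controllers" a bit more concrete, here is roughly what a minimal Knative Service looks like once the Serving components are installed. The service name, image, and env values are illustrative placeholders (the image is Knative's public helloworld-go sample), and the apiVersion may differ depending on your Knative release; we'll deploy the real thing later in the series.

```shell
# A minimal Knative Service: a Knative CRD, reconciled by Knative's controllers,
# but applied with plain kubectl like any other Kubernetes object.
cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
EOF
```

From this one object, Knative derives the Deployment, Revision, and Route for you, which is exactly the "big configuration on top of Kubernetes core components" idea described above.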
Remember that our final goal isn't getting the setup ready; I just tried making it as easy as possible. Stay tuned for part 2 of the blog: setting up Knative.