In the beginning there was basic computer sharing - a rare and expensive experience. Today, millions of people can simultaneously use the same online services that are provided by innumerable vendors, both large and small.
How we got to this point in the history of cloud computing is a fascinating story – and where this is taking us is very exciting. After all, understanding what we see in the rearview mirror can often help propel us forward. In this case, our destination is serverless.
It Started With Bare Metal
In most cases, a Bare Metal server serves a single customer. It is a single-tenant physical machine: one Bare Metal server might support multiple users and run many services, but only one customer has access to the physical hardware.
Even today, Bare Metal servers are used to provide high performance for data-intensive processes and to support specific use cases such as cryptocurrency mining. Because each Bare Metal server is dedicated to a single customer, the physical hardware can easily be customized to suit that customer’s needs.
Virtual Machines: Hardware, But Software
The Bare Metal model was expensive and difficult to maintain – a cheaper and easier solution was needed, but the answer wouldn’t come until the turn of the century. The late 1990s saw the rise of Virtual Machines (VMs). Instead of needing one computer per customer, a single computer could run software that emulated multiple computers. In this way you could have multiple VMs, running different OSs and applications, on a single piece of physical hardware. Logging in to one VM felt almost like logging in to a separate physical computer, although it was only a software emulation. The major drawback of VMs, however, was that performance was usually poor: hardware resources were over-taxed and not efficiently distributed.
It’s Up in the Cloud
2006 was a busy year for tech journalists – Google bought YouTube, Apple Macs started using Intel chips, and adoption of the Public Cloud truly started to rise. According to Jason Hiner at ZDNet:
While the Public Cloud originally started out as applications hosted over the internet--Software-as-a-Service--today's public cloud can involve applications, infrastructure, or data storage served up by a third party vendor.
The advantages of the Public Cloud mitigated most of the drawbacks of traditional VMs. The advent of fast internet connections and the explosive popularity of the smartphone (which compensates for limited device storage by taking advantage of cloud-based applications) helped drive the surge in Public Cloud use.
Getting it as a Service: IaaS and PaaS
Sitting on top of the Public Cloud are two very important layers: IaaS and PaaS. Around ten years ago, cloud providers realized that they could create cloud-based infrastructure platforms that included all of the features of a physical data center. The advantage was that the provider would manage the infrastructure so that customers wouldn’t have to. And this is how Infrastructure-as-a-Service (IaaS) was born.
The logical next step beyond IaaS is Platform-as-a-Service (PaaS). These are cloud platforms specifically meant for application development. The basic infrastructure of the platform (servers, storage, networking, OS) is managed by a third party (such as Microsoft Azure or Google App Engine), while developers manage only their applications.
Don’t Worry, It’s Contained
IaaS and PaaS are very important to cloud computing. However, one of the biggest problems in application development is that when you move code from one platform to another – even from one version of a platform to another version of the same platform – you can often get unexpected results. The solution is to use “containers”:
A container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
Although the idea of using containers wasn’t new, it was hard to do. Docker changed that. In 2013 Docker popularized this solution by making it really easy to containerize applications. Without excessive effort, developers could now bundle their applications with absolutely everything they need to run.
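To make the idea concrete, here is a minimal sketch of how an application might be containerized with Docker. The base image, file names, and command are illustrative assumptions, not taken from any particular project:

```dockerfile
# Start from a small base image that provides the runtime.
FROM python:3.12-slim

WORKDIR /app

# Bundle the application's dependencies into the image...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...along with the application code itself.
COPY app.py .

# The image now carries everything needed to run,
# regardless of the host's OS distribution.
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and run with `docker run myapp`, the same image behaves identically on a laptop, a Bare Metal server, or any cloud – which is exactly the portability the container model promises.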
Where to From Here?
The next logical step in the evolution of cloud computing for building and deploying apps is serverless. Simply put, serverless architectures combine Backend-as-a-Service (BaaS) – third-party services, such as databases and authentication, that client applications use directly – with Function-as-a-Service (FaaS) – server-side logic written by the app developer that runs in stateless containers, triggered by events, with everything managed by a third party. Martin Fowler describes the benefits of serverless as follows:
Serverless architectures may benefit from significantly reduced operational cost, complexity, and engineering lead time.
What Fowler is alluding to is the reduced operational cost that results from paying only for what you consume. Combine that with the fact that serverless architectures, event-based by their very nature, enable and encourage fast, agile development, and it is clearly the next logical step in the evolution of this space. So take your eyes off the rearview mirror, look ahead and tell me what you see. I see it too. It’s serverless.
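As a rough illustration of the FaaS half of that picture, here is a minimal sketch of an event-triggered, stateless function in Python. The `handler` name and the shape of the `event` dictionary are generic assumptions modelled on common FaaS platforms, not any specific provider’s API:

```python
import json

def handler(event, context=None):
    """A FaaS-style function: stateless, invoked once per event.

    `event` and `context` mirror the arguments many FaaS platforms
    pass in; the field names here are illustrative assumptions.
    """
    # All input arrives in the event; nothing persists between calls.
    name = event.get("name", "world")

    # Return a response the platform can hand back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function holds no state between invocations, the platform is free to spin instances up and down on demand – and to bill only for the invocations that actually run.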