
One of the biggest complaints against Functions as a Service (FaaS) offerings like AWS Lambda, Google Cloud Functions, Azure Functions, IBM Cloud Functions, etc. is the problem of cold start. Cold start refers to the delay between the invocation of a function and its actual execution. In the background, FaaS platforms use containers to encapsulate and execute the functions. When a user invokes a function, the platform keeps the container running for a certain period after the execution finishes (warm), and if another request comes in before the shutdown, the request is served by the existing warm container. If no warm container is available, a new container has to be provisioned and initialized before the function can run, and that startup delay is the cold start.
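To make the container reuse behavior concrete, here is a minimal sketch of an AWS Lambda-style Python handler that reports whether it was served by a freshly initialized (cold) container or a reused (warm) one. It relies on the fact that module-level code runs once per container instance; the handler name and response shape are illustrative assumptions, not a specific provider's required API.

```python
import time

# Module-level code runs once per container instance, so this timestamp
# and flag are only set when a new (cold) container is initialized.
_container_started_at = time.time()
_is_cold_start = True

def handler(event, context):
    global _is_cold_start
    cold = _is_cold_start
    # Subsequent invocations reuse the warm container, so flip the flag.
    _is_cold_start = False

    return {
        "statusCode": 200,
        "body": f"cold_start={cold}, container_age_s={time.time() - _container_started_at:.1f}",
    }
```

Logging this flag over a day of traffic is a simple way to see how often your own workload actually pays the cold start penalty.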
Most FaaS providers have cold starts of 1-3 seconds, and this latency has a dramatic impact on certain types of applications. Cold start duration varies by cloud provider and by programming language. Though it is almost a year old, this benchmark study shows the cold start latency impact across various FaaS offerings.
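If you want to observe the effect yourself rather than rely on a benchmark, a client-side timing script is enough to show the gap between the first invocation after an idle period and the immediate follow-up. This is a rough sketch; the endpoint URL is a hypothetical placeholder for whatever HTTP-triggered function you have deployed.

```python
import time
import urllib.request

# Hypothetical HTTPS endpoint for a deployed function; replace with your own.
FUNCTION_URL = "https://example-faas-endpoint.invalid/hello"

def time_invocation(url: str) -> float:
    """Return wall-clock seconds for one end-to-end function invocation."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# The first call after an idle period typically pays the cold start penalty;
# the second call is usually served by the now-warm container.
print(f"first call:  {time_invocation(FUNCTION_URL):.3f}s")
print(f"second call: {time_invocation(FUNCTION_URL):.3f}s")
```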

In the 2018 Serverless Community survey, developers cited cold start latency as one of their notable concerns when adopting serverless.
The cold start problem is overblown for various reasons. First and foremost, users should understand that while FaaS is maturing fast, it is not suitable for many workloads. FaaS can meet the needs of event-driven functions, but for most other workloads, containers are a better fit. It is also important for users to understand that the low cost of the service exists because FaaS providers need not run infrastructure in anticipation of use and can shut down unused resources in a more fine-grained way. They then pass on the cost savings from these resource efficiencies to users in the form of lower prices.
We strongly recommend that users consider the continuum of services, from containers to serverless containers to services like Google Cloud Run to FaaS. Taking a binary approach of Kubernetes vs. FaaS is shortsighted and will not help your organization use its resources optimally.
Disclaimer: SpotInst is a Rishidot Research client