From YouTube: How Fast is FaaS? Reducing Cold Start Times in Knative - Paul Schweigert & Carlos Santana, IBM

Description

How Fast is FaaS? Reducing Cold Start Times in Knative - Paul Schweigert & Carlos Santana, IBM

Knative is used to build serverless-style systems. One of the key features of these systems is the ability to scale a service up and down on demand, running pods only when they are needed to handle requests. When scaling up, however, users are likely to encounter the “cold start” problem, where the latency before a new pod is ready to handle requests is non-negligible (2-5 seconds or more). Scheduling a pod and making it available in Kubernetes fast enough to provide a FaaS (Function as a Service) experience is a hard problem, as it involves many components and orchestration steps. In this talk, Paul and Carlos will discuss how autoscaling works in Knative, the cold start problem space, and the steps Knative has taken to reduce container startup latency. One innovative solution is to pause the container's CPU while maintaining its state, which keeps warm containers available and orchestrated for fast responses; this is a practice typically used in FaaS systems.
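For context on the scale-up/scale-to-zero behavior the talk refers to, below is a minimal sketch of a Knative Service with per-revision autoscaling annotations. The service name, image, and numeric values are illustrative assumptions, not taken from the talk; setting min-scale above zero is the conventional workaround that keeps a warm pod around at the cost of idle resources, the trade-off the talk's paused-container approach is meant to improve on.

```yaml
# Illustrative Knative Service (values are assumptions, not from the talk).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Target roughly 10 concurrent requests per pod before scaling out.
        autoscaling.knative.dev/target: "10"
        # Keep one pod warm so requests never hit a cold start
        # (trades idle resource usage for lower latency).
        autoscaling.knative.dev/min-scale: "1"
        # Cap how far the service can scale out.
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
```

With min-scale left at the default of 0, the revision scales to zero when idle and the first request after that pays the cold-start latency discussed in the talk.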