From YouTube: Autoscaling Kubernetes Deployments: A (Mostly) Practical Guide - Natalie Serrino, New Relic

Description

Don’t miss out! Join us at our upcoming hybrid event: KubeCon + CloudNativeCon North America 2022 from October 24-28 in Detroit (and online!). Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.

Autoscaling Kubernetes Deployments: A (Mostly) Practical Guide - Natalie Serrino, New Relic (Pixie team)

Sizing a Kubernetes deployment can be tricky. How many pods should it have? How much CPU/memory is needed per pod? Is it better to use a small number of large pods or a large number of small pods? What's the best way to ensure stable performance when the load on the application changes over time? Luckily for anyone asking these questions, Kubernetes provides rich, flexible options for autoscaling deployments. This session covers the following topics:
- Factors to consider when sizing your Kubernetes application
- Horizontal vs. vertical autoscaling
- How, when, and why to use the Kubernetes custom metrics API
- Practical demo: autoscaling with application metrics from Prometheus, Linkerd, and Pixie (request throughput/latency, number of shoes purchased in my web store)
- Impractical demo: a Turing-complete autoscaler!
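
The talk's actual demo manifests aren't included in the description, but a minimal sketch of the custom-metrics approach it describes might look like the HorizontalPodAutoscaler below. It targets a hypothetical `webstore` Deployment and scales on a per-pod request-rate metric (here named `http_requests_per_second`), which would need to be exposed to the custom metrics API by an adapter such as prometheus-adapter; all names and thresholds are illustrative assumptions, not taken from the talk.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webstore-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webstore            # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                # custom metric averaged across pods
    pods:
      metric:
        name: http_requests_per_second   # assumed metric name served by a custom metrics adapter
      target:
        type: AverageValue
        averageValue: "100"   # scale out when avg per-pod RPS exceeds 100
```

With this in place, the HPA controller queries the custom metrics API each sync period and adjusts the replica count to keep the average per-pod request rate near the target, which is the horizontal-autoscaling pattern the session contrasts with vertical (per-pod resource) scaling.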