From YouTube: 4 Kubernetes Open Source Tools You Need in 2023
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from April 17-21, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Four Kubernetes open source tools that you need in 2023. We're here to discuss tools that we use here at Fairwinds, tools that we've developed and that we use to manage Kubernetes clusters, because it's hard to do. Whatever tooling you can find out there to help manage policies and help create governance policies that make your clusters more stable, more secure, and more efficient, all that great stuff, the more you can use, the better your lives will be as folks who operate Kubernetes clusters. So I'll do a quick intro here.
B
Good question. So we're not allowed to start using these until 2023? Is that what I'm getting from this?
A
This is your New Year's present, right? Like crinkle moss snuzzle tag, if anyone watches Gumball.
A
I'm Stevie Caldwell. I'm an SRE technical lead here at Fairwinds. I've been working in tech for longer than I will ever admit to anyone, but I've been through lots of stuff: I've been a sysadmin, a network engineer, done DevOps, and I've been working with Kubernetes for a few years now, and also starting with some open source development here at Fairwinds, with some of the tools that you see here today.
B
And I'm Andy. I'm the CTO here at Fairwinds, and I've been using Kubernetes for, I think, seven years now. I've been with Fairwinds for four and a half. Like Stevie, I have also been a sysadmin (I like to call myself a reformed sysadmin) for many, many years, and I'm also an author and maintainer of a lot of our open source. So, awesome.
A
Oh, I know you love reading the mission. Yeah, because everyone loves a slide that gets read out loud to them. So: Fairwinds is a trusted partner for Kubernetes security, policy, and governance. With Fairwinds, customers ship cloud native applications faster, more cost effectively, and with less risk. We provide a unified view between Dev, Sec, and Ops, removing friction between those teams with software that simplifies complexity.
B
All right, I think we're going to kick off a polling question here for everybody, just a little interactive piece. So, what is your greatest opportunity to improve your Kubernetes environment? Is it (A) getting help with the basics, (B) a general best practices assessment, (C) improving the security posture of your clusters, (D) saving money, or (E) improving the reliability of apps running in Kubernetes? I have to give Zoom points here: Zoom doesn't let me vote in the polls.
C
It looks like we've got 14 looking for help with the basics (so maybe Andy did get his answer in there), 38 getting a best practices assessment, 13 improving the security posture of their clusters, and the remaining 38 improving the reliability of apps running in Kubernetes.
B
All right, all right. Well, I like that. B and E, general best practices and improving the reliability, are completely tied together, because if you follow the best practices, hopefully you have a reliable cluster, and I think that leads perfectly into the first tool that we're going to talk about today, which is Polaris. So I'm going to figure out how to kill these slides, because I'm using a new browser these days and it is sometimes baffling. All right, so let's talk about the setup here. I have a Kubernetes cluster.
B
So if I do a kubectl get nodes: I believe Stevie and I will be using the same cluster today, from different perspectives, with the different tools. So we have a cluster here. It looks like it is working on scaling down some nodes; that's interesting. And then I have two demo applications running in here. So I have a relatively simple application that just runs a single deployment.
B
It's got an Ingress and a horizontal pod autoscaler. And then there's a more complex application that has a database and a cache and a front end and a back end; it's a multi-tiered web app. These are running in the cluster, and they're available at public endpoints. So we have this one here, which just pings the pods over and over again, and then we have this one here, which is the multi-tiered web app.
B
It lets you vote for where you want to go to lunch, even though I think all of these places would be more than a two hour drive for me, so I don't know that it's going to happen. But we can vote on where to go to lunch, and we are generating some traffic against these, so this vote count is just going to keep going up and up, hopefully (it may have finished generating load), but we have somewhat of a realistic situation here.
B
We have, you know, a multi-tiered web app, we've got another app, they're in different namespaces, and we want to figure out if we're following best practices in how we've deployed these. And so the third thing that I've installed in this cluster is Polaris, and if you go take a look at our documentation page for Polaris, which is at polaris.docs.fairwinds.com, you'll find instructions on how to install it.
B
It is a relatively straightforward Helm installation. I have customized it with a bunch of values that I won't necessarily get into too much detail about just yet, but what it has given us is a dashboard. You can also run Polaris as a CLI tool, so you can run this exact same set of checks against your cluster just from the CLI. I like the dashboard; you know, I'm an executive, so I love shiny, shiny dashboards. And so we can scroll down here and see... right now, I believe we're filtered by namespace.
B
But if I drop the namespace filter, we should see all of our namespaces pop up here, and we'll get a score that is related to the number of passing versus failing checks. And we've got some errors (things we consider dangerous), some warnings, and some passing. So if we scroll down, we will start to see some of the things we care about. I'm going to scroll past all of this RBAC stuff for the moment, and I'm going to look at the different namespaces, and we'll see different checks failing or passing in some of these places. So I'm going to look first at this deployment, and we see that we have some passing checks.
B
These are custom ones. We're not using hostPath volumes, because that's a big security no-no. And then we have this check here that says I should have a pod disruption budget for my deployment. Well, what does that mean? So I'm going to click this question mark, and that's going to take me back to that documentation page again, and I'm going to see here that missing pod disruption budget is the only one that has no description (nice, didn't mean to do that), but we should have descriptions for a lot of these.
B
But essentially, what we need to do is add a pod disruption budget that refers to that deployment.
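As a minimal sketch of what that fix might look like (the names and labels here are hypothetical, standing in for the demo deployment's pod labels):

```yaml
# Hypothetical example: a PodDisruptionBudget covering the demo deployment's pods.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: basic-demo-pdb
  namespace: demo
spec:
  minAvailable: 1        # keep at least one pod up during voluntary disruptions
  selector:
    matchLabels:
      app: basic-demo    # must match the labels on the deployment's pod template
```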
And so we have a whole bunch of these checks built in. They're documented here under reliability, efficiency, and security, and these are all best practices that we have learned, followed, and suggest to all of our customers, as we've built and run Kubernetes clusters over the years. So that's the very basics of what Polaris is, and you saw all the built-in checks. Now, you may want to add custom checks to Polaris; maybe there are some things that are specific to your environment, or some things that you want to enforce. And so this is where we'll jump into the Polaris values file that I have used here, and we'll scroll down into the config section, and we see here the list of checks and their level of... what would be the right word for that?
B
Severity level; that's the word I'm looking for. So these are the different severity levels: ignore, warning, and danger. The ignore ones just don't even show up, the warning ones showed up on our dashboard as warnings, and then we have the danger ones.
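In the values file, that section looks roughly like this (a sketch; the check names follow the Polaris docs, but treat the exact list as illustrative):

```yaml
# Illustrative Polaris values excerpt: each built-in check is assigned a severity.
config:
  checks:
    cpuRequestsMissing: warning
    memoryLimitsMissing: danger
    missingPodDisruptionBudget: warning
    hostPortSet: ignore      # "ignore" checks don't show up on the dashboard at all
```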
B
So if we come down here, we'll see that I've added a few custom policies. I've named them image registry, resource limits, and hostPath mount, and I've got some Karpenter stuff in here that I've been tinkering with. But we can look at this custom check for image registries. So perhaps you're only pushing your images to a specific registry at your company, or you are...
B
Can folks see this? I see my little preview here and I'm concerned... oh no, never mind, we're good, I saw something else. But perhaps we want to say that all of our images have to come from a specific list of registries. We can write this policy here: it's got a couple of messages to go with it, and we've assigned a category to it.
B
It's targeting the container specification, and then we write our policies in JSON Schema. And if you go into the Polaris repository, you'll see that all of the default checks are in the checks folder; they're in YAML files, and they look exactly like this. They're all JSON Schema checks. And so we're looking at the image property of the container, and we're going to say that it has to match any of these string patterns.
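A custom check along those lines might look something like this (a sketch based on the Polaris custom-check format; the registry names are made up):

```yaml
# Illustrative Polaris custom check: images must come from an approved registry.
checks:
  imageRegistry: warning
customChecks:
  imageRegistry:
    successMessage: Image comes from an approved registry
    failureMessage: Image should come from an approved registry
    category: Security
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          anyOf:
            - pattern: ^registry.example.com/.*   # hypothetical internal registry
            - pattern: ^quay.io/myorg/.*          # hypothetical approved mirror
```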
B
So this should be coming from any one of these registries, and any one that doesn't match will pop up as a warning in our Polaris dashboard, because we have that check currently set to warning. So that's how we add custom checks. And then the last thing I'll talk about real fast is exemptions. There are probably some things in here that have to run this way; for example, that pod disruption budget for cert-manager that I mentioned. cert-manager is a single-pod controller in this cluster.
B
There's no need for it to run more than one, because it runs as a reconciliation loop, and if it goes down or disappears for a little bit, it's going to take care of its business when it comes back; that's not a big deal. And so what I want to do is add an exemption for cert-manager. So I'm going to say controllerNames, and I'm going to go back here and see that the controller name is going to be cert-manager, so that goes here, and then I'm going to say rules, and it is going to be exempt from the...
B
What is it... I don't remember the exact name of it... the missing pod disruption budget rule. All right, so we'll add that in there as an exemption.
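Put together, that exemption in the Polaris values file would look roughly like this (a sketch; the rule name follows the Polaris built-in check naming):

```yaml
# Illustrative Polaris exemption: don't flag cert-manager for a missing PDB.
exemptions:
  - controllerNames:
      - cert-manager
    rules:
      - missingPodDisruptionBudget
```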
So ideally, we're going to save this, we're going to rerun the helm install that references this values file, we're going to update Polaris in that namespace, and I am going to wait for that to finish and go back to my dashboard, and hopefully that's gone away, and then some of those other settings to set things to warning have been enabled. Any questions or thoughts?
B
All right. And we see here that the deployment cert-manager just has this new check for priority class, which I believe is actually a custom check, so I'm not going to worry too much about that. But let's see, where is the...
B
So that's a good sign. But that's custom policies, exemptions, and how we deploy Polaris into our clusters. And then, like I said, once we have all of our policies green, or we have the specific ones green that we care about a lot, I'm going to go enable that validating admission webhook and start blocking things from entering the cluster that don't meet these requirements, that don't pass these policies.
B
Well then, I think I will go ahead and hand it off to you for the second one. One of the things that Polaris will tell you to do all the time is to set your resource requests and limits on your deployments, and if those aren't set, you may be asking how to set them. So that's where Stevie comes in.
A
Hello! So I'm going to see if I can share my screen. I always have difficulty sharing the right desktop, because... yeah, let's see, let's try this one, see if that works. All right, you all see a terminal here?
A
All right then, we'll keep going unless somebody pops into the chat and says, "Yo, that's not good." All right, so: Goldilocks.
A
What does it do? Andy was talking about how you'll get reports from Polaris about setting resource requests and limits, and I'm sure it's been hammered into you: yes, as a best practice, you should set resource requests and limits for CPU and memory on your workloads. And that's always something that people struggle to do, because it requires you to do some thinking about it, right? You need metrics. You need a good span of time for metrics.
A
You need to capture traffic at your high loads and your low lows and all that other stuff, and that's a lot to do manually if you're pulling up graphs somewhere in a Grafana or something and trying to do that math. Goldilocks does that for you, using some already existing Kubernetes projects, because it's always great to build on trusted and tested projects and open source software. And so that's what Goldilocks does. So Goldilocks can be installed...
A
So actually, first, prerequisites for Goldilocks: we were talking about those tried and true projects. Goldilocks requires you to install a metrics server and the vertical pod autoscaler. Both of those things are pretty standard; I feel like most people have them installed in their clusters, just so you can do a kubectl top or something like that, right?
A
So this is the cluster Andy was working on. It looks a little different because we're accessing it a little differently, and, you know, all my stuff's just not as cool looking as his.
A
I don't know, it's just a thing I aspire to. But anyway, disclaimer: my typing gets really bad when I am in front of folks, so there's going to be a lot of copy pasta if I can help it, so that I don't have to worry about my typing.
So, as you can see, we already have metrics server installed in this cluster. It's all running and good, because we can do the good old kubectl top pods and find stuff in there, right? And so the other thing that we need running in here, like I said, is the vertical pod autoscaler.
A
The vertical pod autoscaler has three components to it, and the component that Goldilocks needs is the recommender, which we have installed here. We also have the updater, but that's not exactly necessary; you could get away with just doing the recommender. The third portion of the VPA is the admission controller; we don't install that by default. So you can install the metrics server using its Helm chart, and for installing the vertical pod autoscaler...
A
The recommender gets metrics from the metrics server, and then uses those metrics to essentially make recommendations to the VPA objects that are created for the deployments that you've attached to the VPA, and then Goldilocks essentially uses those values to make recommendations, or to surface those for you.
A
Based on the VPA, so, like I said, this has already been installed in the cluster, and Goldilocks has been installed as well, which is also installable via a Helm chart. So we've installed the controller and the dashboard, and those are pretty much all you need to have Goldilocks running and giving you information for setting your resource limits and requests.
A
So how do you actually get Goldilocks attached to your workloads? How do you get it to create VPA objects for your workloads?
A
We actually have a bunch of namespaces that have the Goldilocks label on them, but we did create, as Andy said, two new namespaces: this Yelp one here, and this demo one here for the demo app. So we created those namespaces specifically to show you Goldilocks in action, so we're going to work within those. But just to show you: if you say k get vpa (and k is just my shortcut for kubectl)...
A
You'll see how those probably got created in a little bit, but let's start off with just labeling... well, let's take a look at the Goldilocks dashboard first. I'm going to run this in another terminal; I'm not a tmux aficionado like my buddy here, so I'm going to port-forward the Goldilocks dashboard here.
A
And then we're going to go over to the browser, and look, I even pulled up localhost already, because I am super lazy. So I pull up localhost 8080, and here's the Goldilocks dashboard. Now, normally, if you've just set up Goldilocks in your cluster, all this stuff isn't going to be here. These namespaces won't be here, because you typically haven't labeled your namespaces yet. These namespaces are already labeled, or they've already been set to have VPAs, so they show up.
A
If you were starting with a pure vanilla installation, there would be a nice little block of text up there that tells you exactly the command you need to run to manually label your namespace, and that's the command I'm going to show you now.
A
Let's start with our demo app. So we're going to get our demo app set up with the Goldilocks label, and again, copy pasting like it's my job, and in some cases it sometimes is. So we're doing a kubectl label on the demo namespace, with goldilocks.fairwinds.com/enabled set to true. We hit OK on that, and now we see that namespace is labeled, and if we go back now over to our dashboard, which is...
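For reference, that labeling step can also be declared on the namespace itself (a sketch; the namespace name comes from the demo above):

```yaml
# Declarative equivalent of:
#   kubectl label namespace demo goldilocks.fairwinds.com/enabled=true
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    goldilocks.fairwinds.com/enabled: "true"   # tells Goldilocks to create VPAs here
```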
A
Right here... no, it's not right there either... that's right here, there we go. And we refresh that guy, and we see the demo namespace right here. So that's just how easily you can add workloads to Goldilocks. Everything that's now deployed in that demo namespace will be picked up and will have a VPA associated with it. So if we do a get vpa -n demo, we can see there's the Goldilocks basic-demo VPA. And you'll notice... well, let's actually just look into it a little bit.
A
So, a couple of things you'll notice in here: the update policy is set to off. With the vertical pod autoscaler, if you have the updater installed, the updater can actually vertically scale the resources on your pods for you, based on this information down here from the recommender, right?
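The VPA objects Goldilocks creates look roughly like this (a sketch of the autoscaling.k8s.io resource; the names are illustrative):

```yaml
# Illustrative VPA as Goldilocks creates it: recommendations only, no automatic updates.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-basic-demo
  namespace: demo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: basic-demo
  updatePolicy:
    updateMode: "Off"    # the recommender still writes recommendations into status
```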
A
We have these set to off, so it doesn't do this automatically for you. And, I don't know if you noticed, but when I did a get labels on all the namespaces in the cluster, there were some of them that had a label with the update policy, or update mode, set to auto.
A
So you can pass another label to your namespace that will set the update mode to auto, and then it will automatically scale the resources on your pods as needed. But our default mode is set to off, and then these recommendations here that come from the recommender play a part in how we recommend resource limits and requests in Goldilocks. So if we go back here and look at the demo app...
A
We're going to just go ahead and collapse that little guy. So you see, it's a lot like Polaris, in the sense that there's a namespace, it shows you the workload, it shows you the container, and it shows you what Goldilocks is recommending you set for your resources and requests. So this is the current, and this is what it recommends for guaranteed quality of service and burstable quality of service, and for each of these it prints out some YAML that you can then copy-paste yourself into your...
your Helm charts. Or I guess you could live-edit it in the cluster, but I wouldn't recommend that; you should be doing some CI/CD stuff, right? Also, we have a little glossary down here that explains some of the stuff about the difference between the guaranteed quality of service and the burstable quality of service, which is essentially about how Kubernetes treats your workloads when it's under resource pressure, right?
A
So, you know, the kube-scheduler uses your resource requests for trying to bin-pack your pods onto nodes, and the HPA, the horizontal pod autoscaler, also uses your requests, so that's important to have set.
A
But if you have both your requests and your limits set to the same values, then if you have some sort of resource contention, your pods will sort of have a guarantee: this is what I'm going to use on this node. Burstable means that your pod could go up or down for short spikes of time, except for CPU, I guess, because then you get throttled if you do that. But for memory, you get a little burstable headroom there, and so that makes it a little more flexible, though I also think it means that it's more easily evicted if you have some resource contention. So these are the things that Goldilocks will recommend, and these are starting points, right?
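As a concrete sketch, the difference between the two QoS classes comes down to how requests and limits relate (the numbers here are made up):

```yaml
# Guaranteed QoS: requests == limits for every resource on every container.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 128Mi
---
# Burstable QoS: requests set lower than limits, leaving headroom for short spikes.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi
```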
A
So that's important to keep in mind: ultimately, you know your workloads and your business patterns better than we do, better than Goldilocks does. So this is a good starting point to test out and tweak, and to see how those things will work for you in terms of your traffic patterns.
A
So what I showed you was going in and manually adding a label to your namespace, but you can actually just go ahead and label all the namespaces in your cluster. Well, it's not actually labeling the namespaces; it's more like tagging them for use with Goldilocks, because if you do a --show-labels, you'll find that it doesn't actually create a label on the namespace. But it does create a VPA for every namespace in your cluster, which means that any new namespace will also automatically get VPAs added for any workloads running in those namespaces, right?
A
So, for example, let's see if this will work. I'm going to try... actually, I'm not going to do that here, because we actually didn't install Goldilocks with Helm in this cluster, so there is no Helm chart that I can upgrade. But potentially in my other cluster (let me just check it out here for a second) I do have a Goldilocks chart. This is my kind cluster; this is what I meant when I said
I was going to be using a bunch of different clusters. I think I already ran this command, so I'll just copy-paste it so you can see what it is, but I won't necessarily run it here. In this command, there are a couple of flags that you can set in Goldilocks: the controller flags.
A
So if you set on-by-default to true in both the controller and dashboard sections of Goldilocks, what that'll do is automatically add a VPA for whatever new namespace you create in your cluster, which is pretty cool, and also create the VPAs for all your existing stuff in the cluster.
cluster
and
the
last
thing
that
I'll
mention
about
Goldilocks
I
think
is
that
so
Goldilocks
uses
the
recommender,
but
you
can
actually
and
and
so
there's
like
a
limited
amount
of
information
like
it
only
goes
back
so
far.
A
I don't actually know how far back it goes, but not super far. But you can actually install Prometheus in your cluster, and then hook that up to the vertical pod autoscaler as a back end, as storage, and then however long you decide to set Prometheus retention for, that's the worth of data it has to reference in terms of making those kinds of recommendations for CPU and memory requests and limits. That's all I have. Any questions?
A
So we're going to go right into Nova.
A
All right, so, Nova. Let's take you to the Nova page: bye bye, localhost; hello, Nova documentation. All right, so, Nova. Again, this is a tool that we developed in-house when we just, you know, found a need for it, based on all the clusters that we managed, and it scans your cluster for updates to Helm charts, and to container images if you're not using Helm.
A
So, you know, it's useful for keeping track of add-ons: your cert-manager, your ingress-nginx, external-dns, all those things you want to keep track of. You want to keep those as up to date as possible, because obviously there are all kinds of security patches that'll come through, stability patches, and keeping those things up to date helps maintain the security and stability of your cluster, right? So Nova is really simple to use.
A
I am actually going to... so I can do both, right? I'm going to run it in the cluster we're using. But you're going to find... oh, that was the other one. Typed fine, as I said, fine, and that was pretty dope. You're going to find that there's not a lot in this cluster, because, again, this cluster doesn't have a lot of Helm charts installed in it.
A
But let me run through the command here. So nova find is the base command, and by default Nova will output the stuff in JSON, so you pass it the format table flag and it'll give you this cute little table, and --wide is just more information that it shows you: if you did this without the --wide, you get a little bit less, just the four columns.
A
And so this is how it finds old and deprecated versions of your Helm charts, or determines if your Helm charts are old or deprecated. Quickly going through these fields: release name, chart name, namespace, the Helm version. So Installed is obviously the chart version that's in your cluster, and Latest is the latest chart version that Nova
knows of. Old is a Boolean that says whether your version is old or not, and then Deprecated is a flag: sometimes, inside actual Helm charts, they can be marked as deprecated, so that you know that Helm chart should not be used in the future and you should start moving off of it. We saw this happen
in a big way when all those charts started moving off of the stable charts repo and into their separate repos, right? So that's why that flag is there. And let's see... so, where Nova gets this information
on the different versions of charts, and whether or not they're deprecated: we poll Artifact Hub to get that information. And there's one caveat that has been mentioned in previous presentations, and I'll make sure I carry on the tradition, which is that charts can be forked, and that's cool, but sometimes those forked charts wind up in Artifact Hub, and the Chart.yaml is sometimes the same between the forked chart and the original chart, which means that Nova doesn't have a way to figure out which upstream chart is actually the one that your release came from. So that is important to just keep in mind.
A
We do some matching and scoring to try to mitigate that, and we do a pretty good job of it, but that is just something to keep in mind, because that is where it gets that information from. So what Andy was going to ask about is showing the container scanning: Nova also scans container images. I'm just going to go ahead and copy-paste that command from over here.
A
So you see, we've got a similar output here. These are the containers running in this cluster; here's the current version, and here's another Boolean that tells you whether or not it's old.
A
It tells you the latest container version, but it also tells you, in three different ways, how out of date you are: it scopes it by the latest major version, the latest minor version, and the latest patch version, and that just gives you more info on what to update, right? So, for example, you might not have any concerns about updating your add-ons to the latest patch version, or even the latest minor version.
A
If it's like, hey, I'm on 0.60.1 and the latest minor version is 0.68.1 or something, you might be like, yeah, I can go ahead and patch that; I don't need to worry about any breaking changes and stuff like that. So it just gives you more information, so you can make really granular decisions about how you want to handle matching your add-ons and doing those upgrades.
A
Another thing about Nova is that there are some configurable options to it. So there's a command you can run, and I'm just going to go ahead and do it here, more copy pasta. So nova generate-config will actually... oh, did I not do that right? Yep, generate-config; it was a dash in between. There we go. And so that will give you a Nova config.yaml file. Let me actually put this at the top here.
A
So this is the config that Nova is using when it runs. I think these essentially map to the command-line arguments that you can pass into Nova on the fly. So, for example, format: json; as you can see, that's the default, and we pass in format table from the command line, but we could easily change this to table.
A
The difference is that you'll then have to point Nova to your config file when you run it, so that it uses your changed config. So I guess we can actually even go ahead and do a quick change there, change it to table, and then I think, if I go back up: nova find, format table... configuration... config nova-config.yaml. So this will just be... all right, so I won't say containers, sorry.
A
I think that will just give me, by default... oh, I said format table and I'm on a format table, so I want to see if that works without it. How about that?
A
That's magical. All right, so yeah: because we changed that in the config, now you've got table as your default, which is great, because why would you ever really want that to be in JSON? There are reasons you'd want it in JSON, but for this purpose you don't want it in JSON. A couple of other things that are interesting in this file that you can look into: you can set desired versions.
A
So that is a map that allows you to specify the desired version of your Helm charts or containers. For example, if you have some dependency in your cluster where you know you can't use anything above a certain Helm chart version, you can put that in here and set that version constraint, essentially, and then Nova will ignore it; it'll drop it from Nova's output.
A
Also, there's a URL list down here. So, you know, I was telling you that Nova polls Artifact Hub. If you have other repos that your Helm charts are sourced from, like maybe some private repos, you can add those to this URL list, and it actually uses both. So you can have the Artifact Hub polling set to true plus your own private URLs, and it'll scan or poll all of those to give you a report about your deprecated and out of date (old and dusty) containers and Helm charts.
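Pulling those options together, a Nova config file might look something like this (a sketch; the key names are illustrative, so check the output of nova generate-config for the exact spelling):

```yaml
# Illustrative Nova config: table output, a version constraint, and an extra chart repo.
format: table
wide: true
desired-versions:
  cert-manager: v1.9.1         # hypothetical pin: don't report anything newer than this
poll-artifacthub: true
url:
  - https://charts.example.com # hypothetical private repo, scanned alongside Artifact Hub
```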
A
Can you have my elite bash prompt? Oh, actually, you know who I got this elite bash prompt config from? That guy right there.
A
My CTO; I grabbed it from him, because he has a whole really cool setup for using Starship. So Starship is what I use to configure my bash prompt. So I don't know if that's public or...
A
Yeah, yeah. All right: Starship. And sometimes it's more information, like it gets a little messy depending on what I'm doing, but yeah, I find it very helpful. So shout out to my boss. Did I miss anything? Okay.
B
I don't think so; that was great. I think... yeah, the only thing... oh no, that's for Pluto. No, I think that's great. Thank you.
B
Pluto: yet another tool written as we ran into problems. We were doing the Kubernetes 1.16 upgrades, and if anybody remembers that particular upgrade, all of the old deployment extensions/v1beta1 API versions were removed in 1.16. And so, if you had a whole lot of old YAML laying around, or you were deploying old Helm charts that had deprecated API versions in them, you were potentially scrambling to update those, and we needed a way, across all of our customers, to tell them: hey.
B
this is where you're using deprecated or removed API versions, particularly removed, but deprecated as well. And so we wrote this tool called Pluto, and what we realized while we were trying to write Pluto was that the Kubernetes API server is a lot smarter than we are: it automatically translates API versions from one to the next. So if I was looking at a cluster that had a deprecated API version and I ran kubectl get deployment -o yaml,
B
the API server says: you gave me an extensions Deployment, but I know how to translate that into apps/v1, so I'm just going to do that. Which is great, and it makes it so that we can upgrade in place, but then it breaks our ability to deploy to the cluster after we've done that upgrade. So we wanted to prepare our customers beforehand rather than just blocking them from updating their clusters. There are a couple of different strategies by which we do this. First, we gave folks the ability to just scan their YAML files.
B
That's kind of the most obvious thing: I've got this list of YAML here, I want to run Pluto against it and ask whether any of these API versions are removed or deprecated. And then we thought, okay, well, we want to be able to do that with local files, and then we want to be able to template out Helm charts and feed those into it. So Pluto has lots of different options.
B
So if we take a look at Pluto, another CLI tool like Nova, we can run the help command and we'll see that there are several detect commands that detect different things. With detect-files, you just pass it a directory of files and it'll look through all of those; that's relatively straightforward.
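That detect-files flow can be sketched like this. The directory layout and manifest are illustrative, and the pluto invocation is shown as a comment since the tool may not be installed where you're following along:

```shell
# Create a directory with one old manifest in it (illustrative example).
mkdir -p manifests
cat <<'EOF' > manifests/old-deploy.yaml
apiVersion: extensions/v1beta1   # deprecated in 1.9, removed in 1.16
kind: Deployment
metadata:
  name: old-app
EOF

# Point Pluto at the directory; it walks every file it finds there:
#   pluto detect-files -d ./manifests
# and would report this Deployment with apps/v1 as the replacement.
```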
And then we have just the straight-up detect command, and this one's pretty interesting. So I'm going to template out a Helm chart, a very, very old Helm chart. This is the cert-manager 0.7 Helm chart.
B
This is probably, what, two and a half years old at this point, or something like that. And if we template this out and scroll back up here, we'll see we've got, well, a ValidatingWebhookConfiguration that is the admissionregistration.k8s.io/v1beta1 version, which was removed in 1.22.
B
We've got certmanager.k8s.io/v1alpha1; that's also been deprecated and removed. So there are a whole lot of versions in here that we would not want to try to apply to a cluster today, and obviously we wouldn't be applying this chart because it's two and a half years old, but it's a good example.
B
So then we're going to pipe that into pluto detect. We're going to give it the old dash, the standard-in shortcut that is so popular, and by default it's going to give us table output. I'm going to zoom out just a little bit here so we can see more of this output. We're going to see a list of objects: what kind they are, what API version they are, and what API version replaces it.
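The pipeline being demonstrated is `helm template ... | pluto detect -`. As a local sketch (the chart reference is illustrative, and the final grep stands in for Pluto so the snippet runs without it):

```shell
# The demo pipes rendered chart YAML into Pluto over stdin:
#   helm template cert-manager ./cert-manager-0.7 | pluto detect -
# A stand-in for a slice of that rendered output:
cat <<'EOF' > rendered.yaml
apiVersion: admissionregistration.k8s.io/v1beta1   # removed in 1.22
kind: ValidatingWebhookConfiguration
metadata:
  name: cert-manager-webhook
EOF

# `pluto detect -` would read this from stdin and flag the old version;
# here we just show the line it would flag:
grep 'apiVersion' rendered.yaml
```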
B
And then we're going to see: has it been removed, and has it been deprecated? All of these have both been deprecated and removed. If you're not familiar with the way that Kubernetes APIs are deprecated and removed: they're first marked as deprecated, and then, multiple versions of Kubernetes later, they're removed. In the Pluto FAQ on our documentation site there's a link to the policy that describes how this is done in the Kubernetes code base; there are specific rules about how this happens.
B
The important thing to note is that once it's been removed, you can't do anything with it. You can't query it; you can't apply YAML that has it. It basically seems like it doesn't exist anymore, like we've completely erased it off the face of the Earth. So we don't want to be applying any removed API versions.
A
Real quick: it looks like maybe the last column, after "replacement", is off screen. There we go, yes.
B
Yeah, it was actually wrapping, which is super annoying, but yeah, we have the removed and deprecated columns over here. I guess I could have used less or something like that, but anyway, that's how deprecations and such work. So if the Kubernetes API server can't tell us about these versions, then what can? That's the question we asked ourselves. Well, the first thing is Helm chart releases.
B
So when you apply a Helm chart, it creates a release object, and inside of that release object is all of the YAML that was applied as part of that Helm chart, in its raw form; it hasn't been translated through the API server yet. So we can look for those and ask for those. We have the pluto detect-helm command, which gives you the opportunity to do that. Now, I don't have any Helm charts
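For context on why this works: Helm 3 stores each release as a Kubernetes Secret (named along the lines of sh.helm.release.v1.NAME.vN) whose payload is, roughly, the release gzipped and base64-encoded, with the Secret's data adding a second base64 layer. This local sketch mimics that encoding to show the raw, untranslated manifest that pluto detect-helm gets to inspect:

```shell
# Mimic (roughly) how a Helm 3 release stores its manifest:
# gzip, then base64, with the Secret adding a second base64 layer.
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' \
  | gzip | base64 | base64 > release.b64

# Decoding recovers the raw manifest exactly as it was applied;
# no API-server translation has touched it.
base64 -d < release.b64 | base64 -d | gunzip
```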
B
As Stevie was pointing out, there aren't any in this cluster that are deprecated, but if I had applied any of them, they would show up here with their name and the namespace they're applied in. And then the very last thing, since we are actually running relatively short on time, the very last thing that it does, which has been recently added:
B
That is the last-applied-configuration annotation, and what that contains is essentially a string copy of the last applied YAML. So if you kubectl apply your YAML, that annotation gets set, and so we can look through it and see if there's anything there. Actually, I've found two things here using that functionality. The fun thing is that these are both provided by EKS, so they're not things that I actually control in this cluster, but Pluto has found the last-applied-configuration annotation and said: hey, this.
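The annotation in question is kubectl.kubernetes.io/last-applied-configuration. This sketch fakes a stored value (the object is illustrative) to show the kind of string Pluto is parsing:

```shell
# Sample of what kubectl apply stores in the
# kubectl.kubernetes.io/last-applied-configuration annotation:
# the applied object, serialized as one JSON string.
cat <<'EOF' > last-applied.json
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"name":"old-app"}}
EOF

# Pluto reads this stored copy, so it sees the version you *applied*,
# not the version the API server translated it to:
grep -o '"apiVersion":"[^"]*"' last-applied.json
```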
C
B
So those are the different ways to use Pluto to detect your deprecated API versions. Since it's a CLI tool, and we control the exit codes based on whether we find resources that have been removed or deprecated, you can use it in your CI to block or fail builds if people are trying to deploy things that are deprecated or removed, however you want to set that up.
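A minimal sketch of that CI gate. The commented pluto invocation is an assumption to check against your Pluto version's docs; the shell logic just shows the exit-code idea:

```shell
# In CI you'd run something along these lines (verify flags for your version):
#   pluto detect-files -d ./manifests
# Pluto exits non-zero when it finds deprecated or removed versions,
# so a plain shell gate is enough to fail the build.
pluto_exit=3   # stand-in for $? after a pluto run that found removals
if [ "$pluto_exit" -ne 0 ]; then
  echo "FAIL: deprecated or removed apiVersions detected"
fi
```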
So that's the last tool you should use. We've talked about Polaris, Goldilocks, Nova, and Pluto today. I believe we also have content detailing each one of those tools individually in even more depth, so take a look at our past webinars and you may find some more content if you're curious about these. But all together, if you're running all four of these and fixing everything they find and tell you about, you should be in a much better place in your Kubernetes journey: finding a lot more good configurations, keeping things up to date, not deploying deprecated API versions, and setting your resource requests and limits properly.
B
So the last piece, because I can't leave without talking about it at least briefly: what if I don't want to install all these tools and run them myself and write my own CI/CD code and deploy them into the hundred clusters that I'm running everywhere, and I really just want to install a single little agent that's going to report all this information back to a single dashboard? Can I do that?
B
So this is where we get to Fairwinds Insights. Fairwinds Insights is our commercial SaaS platform. It allows you to hook up all of these tools that we've talked about today, along with multiple other tools, and then adds much more functionality on top of them. We normalize all of the results from all of these tools into what we call Action Items, and they're reported into this dashboard. From here you can route them to different places.
B
You can write automation rules to send them to Slack, to generate Jira tickets, to generate GitHub issues. And then we add in the ability to take a look at the cost and efficiency of your clusters as well. So we've got this entire section on cost that lets you see what your workloads are costing you and how much your clusters are costing you, across multiple different clusters.
B
So if you like all these tools and you want to operationalize them at scale across a lot of clusters, give us a shout, and we can talk to you about Fairwinds Insights. I don't see any additional questions, so I want to thank you, Stevie, for presenting with me today; always a good time. Thank you to everyone who joined today and gave us your time. I hope you all have a great rest of your week. Thanks.