A: Hi, I am Danielle Cook. I am a VP at Fairwinds, and I'm also a co-chair of the CNCF Cartografos working group, where we publish the Cloud Native Maturity Model. I'll let Joe and Robert introduce themselves.
A: Awesome. Well, here's what we're here to talk about: Fairwinds has put together a Kubernetes benchmark report. We looked at 150,000 workloads against different configuration settings focused on security, efficiency, and reliability, and brought you this report. Robert, do you want to talk a little bit more about how we actually pulled that data together?
C: Yeah, so we basically examined, I think, several hundred thousand workloads across many different organizations: some big, some small, some big clusters, some small clusters, so a wide variety of different kinds of deployment models. We scanned all those workloads for common configuration issues which we've identified over time, and then we aggregated those results to figure out, okay, what percentage of organizations have, say, left the majority of their workloads exposed to running as root, or haven't set CPU and memory settings, things like that.
A: Awesome. And the big takeaway we have from this year's report: we were able to evaluate it against a year ago and compare and contrast the two. The big finding is that we're not doing well as an industry; we need to do better. So we're going to dig into these findings now, and look at how you compare, see what you're doing, see if you're configuring following these best practices. These are interesting findings, so let's jump in quickly.
A: Now, if we look at the top row, it says 46 percent: that's the share of organizations with zero to ten percent of workloads impacted. I know that's a little bit of a mind game, so Robert, why don't you explain that a little bit better?
C: So basically, every organization gets put into one of these 10 buckets. If that organization has very few of their workloads impacted, you know, zero to ten percent of their workloads impacted, they go in that top bucket. That's a really good place to be; you want to be in that top bucket. If they have almost all their workloads affected, then they're going to go in that bottom bucket, where 90 to 100 percent of workloads are affected. Some organizations are in that bottom bucket. That's a place where you don't want to be; it means pretty much all of their workloads are impacted.
A: Let's start with our first category, reliability. This relates to the graph we were just looking at, and here we can see that we are missing memory limits and requests, and that's a problem. So Joe, do you want to talk about why that's a problem?
B: I mean, I think this is one of the more important checks to add to your Kubernetes configuration right from the start. When you create a new configuration for a workload and go to deploy the application, you want to make sure that you at least have the memory settings set: the requests, as well as the limits, which is the maximum memory that workload should use. What we're noticing is that over time organizations are adding more containers, adding more workloads to Kubernetes, but oftentimes forgetting to do this, so the percentage of workloads impacted has increased over time, and you're seeing more organizations kind of neglect this from the start. I think this is an area where there's actually some great open source tooling that can help you address these issues, so there are opportunities to quickly close the gap here, but we are seeing that things are getting worse over time.
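As a concrete illustration of the memory settings described above (a minimal sketch; the names `my-app` and image tag are placeholders, not from the report), requests and limits live on each container in the Deployment spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # placeholder image
          resources:
            requests:
              memory: "256Mi"   # the floor Kubernetes schedules against
            limits:
              memory: "512Mi"   # the maximum memory the container may use
```

The request tells the scheduler how much memory to reserve; the limit caps what the container can consume before it is terminated.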
B: At the end of the day, you are consuming cloud because you're trying to get your applications out to customers fast; you're trying to enable your teams to ship faster, and cloud provides you with that flexibility and scalability. But, especially nowadays, it can be very expensive if you don't manage cloud resources correctly, and one of the first things you want to do is make sure your applications are operating within a bound of appropriate memory use. Because if it's unbounded, or you're not setting this configuration, Kubernetes has to guess, and sometimes it can over-allocate resources because it is guessing, and ultimately you're spending more in the cloud than you need to. Over-provisioning is a very large problem; I think we've seen that on average customers are over-provisioned by at least 30 percent. So one of the things we're hoping is that organizations put memory requests and limits in place to better manage their Kubernetes and container spend.
A: Awesome. All right, let's move on to the next one, which is around missing liveness and readiness probes. Here you can see again it's getting worse: 83 percent of organizations are not setting these. So Robert, let's talk a little bit about why these are important.
C: So these probes give Kubernetes some kind of information as to whether it can start sending traffic into your application, and make sure that if your application does experience some problem, you know, maybe the database goes down, or a particular pod loses connection to the database, it's going to stop sending traffic there and make sure your customers don't experience that outage, and it'll restart that pod to make sure it gets back into a healthy state. And also, if you're, say, issuing a new deployment, you're updating your source code, etc., as that deployment rolls out, if it takes, say, 30 seconds for your application to go from the binary starting to actually being ready to serve traffic, the probes keep traffic away until it's ready.
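The behavior described above maps to `readinessProbe` and `livenessProbe` on the container spec. A sketch, assuming an HTTP service; the `/ready` and `/healthz` endpoints, port, and timings are illustrative, not from the report:

```yaml
containers:
  - name: my-app              # placeholder name
    image: my-app:1.0.0       # placeholder image
    readinessProbe:           # gates traffic until the app can actually serve it
      httpGet:
        path: /ready          # illustrative endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:            # restarts the pod if it becomes unhealthy
      httpGet:
        path: /healthz        # illustrative endpoint
        port: 8080
      periodSeconds: 15
      failureThreshold: 3
```

The readiness probe covers the rollout case (no traffic until ready); the liveness probe covers the lost-database case (restart into a healthy state).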
A: We all know that 30 seconds feels like a long time; I mean, I know my attention span is gone if something doesn't load. Excuse me. All right, now that I'm done joking, let's move on to the next one, which is around deployments missing replicas.
B: Let me take this one and get things started. With missing replicas, this is actually a new check that we've added to this year's report, and we're finding that 25 percent of organizations are running over half of their workloads without replicas. Really, one of the advantages of Kubernetes is the ability to quickly scale containers; Kubernetes can help you respond to new inbound demand for your application, if you have lots of users accessing it, or if there are new workloads that have to spin up quickly to address a pipeline of work. What replicas do is really help with that horizontal scaling of Kubernetes, so you can make sure there are enough replicas, if you will, of your application in Kubernetes to handle that load. There's also generally a recommendation around having more than one replica set, so Robert, can you share a little bit more around what those general best practices are for replicas?
C: If you only have one replica, then you're at serious risk for downtime, because if anything happens to that one replica, Kubernetes is going to basically kill that pod and restart it, and that could even be just normal operating procedure for a Kubernetes cluster, right? A node might go away and get replaced by another node. You know, Kubernetes is supposed to be fault tolerant; there are aspects of the environment that are known to be noisy, or are deliberately noisy, and so if you don't have more than one replica set for a particular deployment, there's a very good chance you're going to experience blips of downtime.
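In manifest terms, the recommendation above is simply to set `spec.replicas` above one on the Deployment (a minimal sketch with placeholder names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # more than one copy, so a node replacement doesn't cause an outage
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # placeholder image
```

With multiple replicas, Kubernetes can drain and restart any single pod without the service going dark.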
A: Right, so that's our reliability findings. We are going to move on next to security. Here we can see our big statement, which is that 33 percent of organizations are running 90 percent or more of their workloads with insecure capabilities. That's not good. When we dig into that data, we can see the findings of the workloads that are impacted, but I think we should start with: what is an insecure capability?
C: So in Kubernetes there's a lot of configuration that goes around your Docker container as you're putting it into the cluster, and there are a lot of security settings involved in basically telling Kubernetes how to run this container. There are ways you can give that container extra permissions, for what it's allowed to do, how it's allowed to interact with the host node that's running that container, and one thing you can do is attach Linux capabilities to that container.
C: That's useful if, maybe, it does need some kind of extra access to the host node. But typically, your average API server, your average dashboard, whatever it is that your application teams are deploying, is not going to need those extra capabilities. They really just need to be able to serve traffic over the Internet.
C: Unfortunately, a lot of these capabilities are added by default, and they need to be explicitly dropped if you don't want them set for your container. So we recommend that for most workloads that are running applications, these capabilities be dropped.
C: You can see here in the data there's a pretty bimodal distribution, where some organizations have done this for the vast majority of their workloads, they're in that top bucket, and some organizations have basically done this for none of their workloads. So you can see there are some organizations that take this very seriously and are making sure to drop these capabilities wherever possible, and there are other organizations who are apparently just unaware that this is a thing and have left the default configuration for all their workloads.
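Dropping the default Linux capabilities, as recommended above, is done in the container's `securityContext` (a minimal sketch; names are placeholders):

```yaml
containers:
  - name: my-app                # placeholder name
    image: my-app:1.0.0         # placeholder image
    securityContext:
      capabilities:
        drop:
          - ALL                 # drop every default capability; add back only what's truly needed
```

A workload that genuinely needs one capability can then add it back individually under `capabilities.add`, which keeps the grant explicit and auditable.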
B: I think sometimes what you find with this particular area is that organizations rolling out Kubernetes across large parts of the organization may end up creating templates that allow for repeatability of configuration. But what can happen is, if those templates aren't aligned to best practices, you can actually propagate these misconfigurations, whether they're security oriented, reliability oriented, or cost oriented, and so sometimes these can become widespread problems quickly if you're not making sure that root template is following security best practices. This is an area that's easy to miss, right? A lot of times folks are trying to just get workloads to run, and securing those workloads isn't something you notice right away; it looks like things are running.
C: Yeah, totally, and I'll add: I think having some kind of policy enforcement in place matters, because this is the kind of thing where you might get yourself to 100 percent compliance today, but six months from now people are onboarding new workloads, etc., and they're not necessarily following those best practices. So make sure you've got some kind of policy enforcement mechanism in there to say: hey, anything new coming into this cluster has to be dropping these capabilities, unless it's got some kind of carve-out.
B: Well, I'll kick it off. I think with writable file systems, I know that in general for containerized workloads the best practice is to have the file system as read-only. A writable file system can open up additional risk to that container. But Robert, help us understand: what are some of those risks? Why is it a best practice to be read-only here?
C: Yeah, so from a security perspective, it really hampers any would-be attacker from attacking the system. They can't download an external script, they can't install new software; their hands are really tied in terms of what they can do inside the container if they do manage to break in. So it does a great job of just cutting down on what an attacker is able to do.
C: There's also the added reliability benefit, where you know that you're not accidentally keeping some state on the disk. If it's read-only, then you know that every time you spin up a pod you're going to get an identical environment, so it just helps you ship with a little bit more confidence. It's worth noting, too, that this is an option that is not set by default: the file system is writable by default with Kubernetes.
C: You have to explicitly add some configuration saying: I want my file system to be read-only. So again, you can see kind of a bimodal distribution, where there are some organizations that care very heavily about this and set it for the vast majority of their workloads, and some organizations who seem to be unaware that this is even an option for them and are leaving the vast majority of their workloads impacted.
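The explicit configuration mentioned above is `readOnlyRootFilesystem` in the container's `securityContext`. A sketch, assuming the app still needs a small writable scratch directory (the `/tmp` mount and names are illustrative):

```yaml
containers:
  - name: my-app                      # placeholder name
    image: my-app:1.0.0               # placeholder image
    securityContext:
      readOnlyRootFilesystem: true    # not the default; must be set explicitly
    volumeMounts:
      - name: tmp                     # writable scratch space, if the app needs one
        mountPath: /tmp
volumes:
  - name: tmp
    emptyDir: {}                      # ephemeral, so pods still start from an identical state
```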
A: Well, and I think here the stat says 56 percent. That's if you take the 70-to-100-percent-of-workloads-impacted buckets and add those all together: that's 56 percent of organizations who are just not doing it. So it needs to change. All right, moving on to our next one: privilege escalation allowed. So Joe, do you want to give me a definition of this one and talk about why this is important?
B: I think this is one that we see that is pretty prolific, right? I mean, similar types of statistics around this bimodal distribution, and it's a matter of just kind of switching the flag from true to false. But with privilege escalation, it really allows a container to, as you'd kind of expect, escalate its privileges, and it exposes the container to more security risk. To get into a little bit more detail around why that's bad: Robert, do you have any examples of why that would be a bad thing?
C: Yeah, I mean the big thing is, if somebody does manage to break into that container, or escape the container, it basically gives them a whole bunch of extra permissions on the host node, which would allow them to spread their attack throughout the cluster. So basically, a lot of the security configuration that we're talking about here is all about limiting the blast radius of an attack, right, assuming some attacker has gotten into a particular application.
C: They found a hole in one application, they've managed to get into that container, and all this extra configuration that's getting layered on top is keeping them in that container and restricting the amount of things they can do from within it. So this is yet another layer of protection you can put into place to make sure an attacker is really unable to do anything outside of that one application.
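The flag being switched from true to false here is `allowPrivilegeEscalation`, again under the container's `securityContext` (names are placeholders):

```yaml
containers:
  - name: my-app                         # placeholder name
    image: my-app:1.0.0                  # placeholder image
    securityContext:
      allowPrivilegeEscalation: false    # a process in this container can never gain more privileges than it started with
```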
A: Well, and I think one thing to mention on this, Joe: you said this is an easy fix, and sure, it's an easy fix if you have a few clusters. But when you have multiple clusters and multiple people using your platform, the problem isn't necessarily that it's a hard fix; it's a scalability problem. Are you auditing your clusters to make sure that this is done?
B: Are you auditing it, and can you implement controls that'll ensure the right best practices are in place? I think that goes back to policy enforcement, but there are also things like mutating admission controllers that can help enforce these good practices as well.
A: Awesome. So, next one: run as root. Now, there was a vulnerability around this in the last year, a little bit longer than a year now, which involved this being exploited, and we can see here that there are a lot of organizations that are still allowing root access. Joe, you were going to chime in on this.
B: This is similar to the privilege escalation one, where it's, fortunately, an easy fix, but the number of applications out there that are running as root by default is way too many. In some scenarios you might need a certain workload to run as root, depending on what it does, but for the most part I think this is a general best practice that you can apply.
B: The example that you mentioned: there was a vulnerability, I think we can share the link after this webinar, back in 2021, where if you had a workload that was running as root, the vulnerability was able to take advantage of that and gain additional access to the system, and one of the ways to mitigate that vulnerability in the near term was making sure that your workloads weren't running as root.
B: You were able to kind of add that defense in depth to your existing system to thwart the attack, and so I think you're going to find that this is an important defense-in-depth layer, right? If there are vulnerabilities that need root access to the system in order to do their nefarious things, being able to make sure your workloads don't have the ability to run as root becomes very important.
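Enforcing this in the pod spec is done with `runAsNonRoot` (the UID value shown is illustrative; names are placeholders):

```yaml
containers:
  - name: my-app              # placeholder name
    image: my-app:1.0.0       # placeholder image
    securityContext:
      runAsNonRoot: true      # kubelet refuses to start the container if it would run as UID 0
      runAsUser: 1000         # illustrative non-root UID to run the process as
```

With `runAsNonRoot: true`, an image that defaults to root fails fast at startup instead of silently running with root privileges.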
A: Great. So next up, and we have just two more on security, is image vulnerabilities. Here, 62 percent of organizations have this happening on 50 percent or more of their workloads; that is a lot. So Robert, talk to me about image vulnerabilities. What are they? Why should I care? Why should I be scanning for this?
C: Yeah, so every container image that gets shipped into your cluster has not only your built application binary inside of it, but also a bunch of other applications that might be being used. It's typically got a whole operating system, and all that software that's getting packaged up inside that container could potentially be vulnerable to an attack.
C: A lot of actual exploits in the wild involve, basically, a known vulnerability in a widely used piece of software like PHP or WordPress, or something like that. The attacker basically knows about the vulnerability, because everybody knows about the vulnerability; they see that you're using that old version, and they just jump in and exploit it.
C: So it's super important, for every container that you're putting into that cluster, to be getting that container and understanding what's inside of it, what version is running, and whether that version is known to be vulnerable. And if it is, you need to make sure you update the container and ship that new update.
A: All right, and so the last stop on the security front: we looked at outdated Helm charts and then outdated container images. The container images check is new to the report this year, and it's the graph at the bottom which really does show this split: some people have it covered, some people do not have it covered at all, and then the Helm charts are all over the place. So Joe, jump on in.
B: Yeah, I think, I mean, in cloud native, right, you're having to make sure that you're running the correct version of a container at any given time, and you might want to upgrade containers so that customers can take advantage of new functionality. But you may also need to upgrade so that you're resolving vulnerabilities, like in the previous slide: a lot of containers contain known vulnerabilities, and upgrading to a newer version is often a way to reduce your vulnerability risk.
B: I think the unique thing about the Kubernetes and cloud native landscape is that there are lots of third-party Helm charts and third-party containers that are needed to run your cluster, and this is sort of the equivalent of patch management for the cloud native world. These third-party containers are the different add-ons that are necessary for running critical services in your cluster, and keeping them up to date is important not only for the reliability of your system, making sure you're addressing any bugs, but again going back to that security aspect.
A: All right, so moving on to our efficiency, or cost efficiency, area. This is something that is increasingly important this year; companies are looking at what they're spending on cloud and how to reduce it. So here we're looking at configuration around memory limits and those being set too high. Yep, we'll jump into it and look at the first graph, which is around memory limits being too high. Joe, do you want to jump in on this?
B: Yeah, I think what we're finding with memory limits being too high is that, ultimately, this is a signal that organizations are generally over-provisioning their applications. We talked about the importance of setting memory requests and limits, the low end and the high end of memory usage, but they're being very generous on the high end, which means Kubernetes may end up consuming more cloud resources than it needs to. And while this is an important issue, there's also the issue of folks provisioning too much in memory requests, and that's where Kubernetes reserves, yeah, this is the exact slide there, an oversized amount of memory that the application doesn't actually need.
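To make the over-provisioning concrete, here is a sketch with illustrative numbers (not taken from the report): if an application actually peaks around 300Mi, an overly generous spec wastes the difference on every replica, because the scheduler reserves the full request whether it's used or not.

```yaml
# Over-provisioned (illustrative): the scheduler reserves 2Gi per replica,
# most of which the app never touches.
resources:
  requests:
    memory: "2Gi"
  limits:
    memory: "4Gi"

# Right-sized for an app observed to peak near 300Mi:
resources:
  requests:
    memory: "320Mi"   # close to observed usage, with a little headroom
  limits:
    memory: "512Mi"   # cap well above the peak, but far below 4Gi
```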
A: Well, and oftentimes, if you have created a platform for your developers to use, and they're using it, they might not be thinking about this, but you're the one the finance team is going to come to and say: our cloud bill is high, what is happening? So being able to dig in and put some guardrails in place for your developers, so that they can be setting the right memory requests and limits, is really important, and I think we found that.
B: But there still is this theme of neglecting the best practices, which can become problematic later on, maybe not immediately, around security, cost efficiency, and reliability. One of the areas that folks are actually fixing: we're seeing a class of organizations really implement policy and guardrails intently, and make this feedback loop to developers, and one of the first classes of issues they're fixing is really around those memory and CPU requests and limits that we talked about earlier.
B: The good news is, I think, Fairwinds has open sourced a tool here called Goldilocks, and there are tools coming out that help provide those specific recommendations for CPU and memory, so that developers don't have to guess; they can put the right values in place from day one.
B: You're seeing this class of issues being fixed faster. I think over time we're going to see more kind of context-sensitive issues, like the security issues, where applications may or may not need certain privileges or certain security contexts enabled; those would probably be the next wave of things to be fixed. But it's great to see that there is some progress in general happening around the CPU and memory settings.
A: Well, and here, overall, what we're seeing too is organizations trying to look at these issues that we've gone through, from reliability to security and cost efficiency. If they're doing it earlier in the process, in shift-left scenarios and in your admission control, you're finding and fixing them earlier, and that's a really good thing. It's not getting into production, and you're not having to waste time after the fact trying to fix these problems. Yep.
A: Okay, so those are our findings from the benchmark report, and now we're going to do a plug for Fairwinds Insights. We have a free SaaS product, which is Kubernetes governance software that helps scan for all of these problems we were talking about, and it really is a tool for when you're creating your internal developer platform: you want to create this paved road for your developers to work on, to get to the end without any issues, without any bumps in the road, and that's what Fairwinds Insights is really helping with.
B: I think you're absolutely right. At the end of the day, with the emerging role of platform engineering, their whole job is to enable teams to ship faster, and in larger organizations, making sure that teams are consistently following the best practices, but still able to ship their applications to Kubernetes and get feedback in a timely way, becomes critical. So Kubernetes governance is really an important capability of an internal developer platform, and what we bring to the table is three key capabilities that organizations are adopting at scale. They're adopting the security feedback, and that's not just security feedback after it's shipped; while we do provide that, it's also the feedback right at the time of pull request, so that those development teams, or even those DevOps engineers who are helping get applications to run, understand the best practices they should follow right from the start.
B: The other, and in 2023 this is incredibly important, is being able to manage cloud costs. Again, Kubernetes has gone from one team to many teams in many companies as adoption has soared, and it's important to bring visibility into container cost, because that is not something that comes natively or naturally with the current tooling out there. So being able to measure how much containers cost, and also get recommendations on those CPU and memory settings that we've talked about, becomes another critical capability for the development teams or platform engineering teams who ultimately have to fix these issues.
B: And finally, Fairwinds also provides a suite of guardrails. Guardrails are really the core of this feedback loop: at every stage of the process, whether you're editing your configuration or infrastructure as code, deploying your app, or trying to investigate why an app is not working correctly, guardrails and policy feedback are important for helping teams understand what they need to do now and go faster. So that's what you're going to see in our platform.
B: Yeah, I can actually share a quick demo, if that makes sense.
A: It does make sense; let's do that. And while Joe brings it up, I'll just say you can get a copy of the benchmarking report by visiting fairwinds.com, and you can also try the platform by visiting fairwinds.com and signing up for free.
B: Yeah, awesome. So I think what you'll see is that the Insights platform first organizes all of this feedback for developers and for platform teams into three different categories, the same ones you've seen in this presentation: security, reliability, and efficiency. You can also narrow in on specific tools, like Fairwinds Polaris, to understand the types of issues that are affecting your workloads, so you'll see some of the recommendations we talked about here today show up.
B: I think the thing we're seeing lots of organizations turn on today is really this whole notion of repository scanning, or infrastructure-as-code scanning: being able to integrate right at the time of pull request to detect things like image vulnerabilities, or to help you identify workloads that might be missing labels, which are important for cost allocation. A variety of types of checks can emerge here, where feedback is critical for those developers.
B: So Fairwinds really, again, helps drive a lot of this feedback loop and helps you plug important capabilities into your internal developer platform.
A: Awesome. And within Insights there are a number of open source tools, like Polaris, which we've mentioned, and Goldilocks, which we've mentioned, so there are lots of open source tools that you can be trying and using. We are huge advocates of open source, so you can check those out too. Here are some of the links again: you can get the full benchmarking report to see how you compare in more detail, and you can try out Insights.