From YouTube: Cloud Native Live: Optimizing and Securing Kubernetes Workloads with Polaris and Goldilocks
A
Hello everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, director of open source at Aqua Security. I'm also a Cloud Native Ambassador, and I'll be hosting today's show. So, Cloud Native Live: this is where we bring a new set of presenters every week to showcase how to work with cloud native technologies.
A
They will build things and break things, and they will answer your questions, every week on Wednesdays. This week we have Andy from Fairwinds to talk to us about Polaris and Goldilocks. Before we get to that, just a quick reminder that KubeCon is coming up. It's going to be both an in-person and a virtual experience, so make sure to register in time, and now is the time. And another disclaimer: this is an official livestream of the CNCF, and as such is subject to the CNCF code of conduct.
A
B
Hi, I'm Andy. I'm director of R&D and technology at Fairwinds. Fairwinds got its start as a managed service provider in Kubernetes, and we took the learnings from those years of managing lots and lots of clusters and built a lot of open source while we managed those clusters, to solve a lot of the problems that we had. And then on top of that, we built some SaaS.
B
Yeah, the two tools I'm going to show today were really built out of a need for some additional tooling to help us in our journey running Kubernetes for all of our different customers. Polaris really focuses on best-practice configuration of your application workloads, and then Goldilocks came out of a need to help our clients set their resource requests and limits properly on all of the deployments in their clusters.
B
So what I've done today is set up a cluster, a pretty bare-bones EKS cluster, and then I've installed an app on it. It's a demo app out there, a multi-tiered app called Yelb. I'm not actually sure how we're supposed to pronounce it.
B
It's a voting app, so I wrote a little loop over here in the console to just randomly vote, and so this app is running in my Kubernetes cluster. I got some YAML for deploying it from the repository that Yelb came from. It's a very, very bare-bones deployment, so we have just a few different pods running here.
B
We've got a database, a Redis server, a front end and a back end. What I'd like to do today is start with Polaris: I'm going to install Polaris in the cluster and look at the findings it has regarding this particular application. So I'm kind of modeling: if I had just deployed a brand new application into my cluster, how could Polaris help me improve the security posture and the configuration of that app?
A
Right
sounds
good,
and
while
we
do
that,
I
just
want
to
remind
everyone
that
they
can
type
questions
in
the
chat.
If
you
have
anything
just
type
it
as
you
think
about
it
and
I'll
pick
it
up
sometime
in
the
maybe
bring
it
up.
B
Great, all right. So in order to install Polaris, I'm just going to use Helm. I've actually already installed it, but I'll show how to install it. I'm going to install Polaris in the namespace polaris, from the fairwinds-stable Helm repository, and if I were doing this from scratch, I might want to add the --create-namespace flag just so that the namespace gets created. I've been doing too many other things today: helm upgrade --install.
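The install described in this turn can be sketched as follows. This is a sketch, not a verbatim capture of the demo; it assumes the standard fairwinds-stable chart repository and the chart name `polaris`:

```shell
# One-time setup: add the Fairwinds stable chart repository
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm repo update

# Install (or upgrade) the Polaris dashboard into its own namespace
helm upgrade --install polaris fairwinds-stable/polaris \
  --namespace polaris \
  --create-namespace
```

The same `helm upgrade --install ... --create-namespace` pattern is reused later in the talk for Goldilocks.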
B
So this should install Polaris in our polaris namespace within our cluster, and I'm going to pop over here and use k9s, because I really like the way it does port forwarding. I'm in the polaris namespace here, and I'm going to look for services in this namespace. I see that we have a dashboard in this namespace, so I'm going to port-forward to it using k9s, on localhost 8080.
B
Oh, there's the problem: I can't spell localhost. Dangers of a live demo. All right: localhost, 8080. There we go. So we see the Polaris dashboard here, and it's going to give us a rough score for our cluster. It's going to give us some numbers on how many checks we have passing, how many checks we have in warning, and how many checks we consider dangerous, and then some basic information about the cluster. And I'm really going to focus on the app namespace.
B
But it's going to give you stuff for the entire cluster, and when we filter down to just our namespace here for our app, we see we're doing really poorly. We have an F; that's not a great score. So let's take a look at the different findings. Like I said, we have different deployments for the UI, the database, the app server and Redis. Let's just focus for now on the UI, because that would be our, you know, front-facing portion, and I see we have some dangerous checks.
A
B
That are enabled, that are going on here, and we have some warning things. So the top one here, I'm going to just tackle that first. Let's take a look at what that means. So it says privilege escalation should not be allowed. If we click on the question mark here, I'm going to get a link to our docs, where we see privilegeEscalationAllowed, danger: securityContext.allowPrivilegeEscalation is true. So that is the default setting.
B
A
B
The question there? Yeah, no problem. Let me, I'm just going to let this take over for a moment. I'll have to switch back and forth, but that's all right. So we know we need to set securityContext.allowPrivilegeEscalation to false, and really what we should do is have another link back to the Kubernetes docs. But maybe I don't know what this means; maybe I'm unclear on this. So I'm going to look for the docs.
B
And we're going to find the Kubernetes documentation on configuring a security context. Let me make this a little bit bigger too. I tend to just kind of scroll through until I find the YAML I'm looking for. This is probably... hey, look at that: allowPrivilegeEscalation. So this.
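The change being made here, sketched as deployment YAML. The container name and image follow the Yelb demo app; treat the exact names as illustrative:

```yaml
# Container-level securityContext: turn off the default privilege escalation
spec:
  containers:
    - name: yelb-ui            # container name from the demo app; adjust to yours
      image: mreferre/yelb-ui  # illustrative image reference
      securityContext:
        allowPrivilegeEscalation: false
```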
B
And take a look at our UI, and we see that our red X there has gone away; we've just improved our security posture a little bit by fixing some of the default configuration in that app. So I'm going to keep going with a couple of these. If we look, we'll probably see that on all of these, so I'm going to take a quick look at my YAML here and just apply that to all of them. So my goal today is, I'd love to get a higher score.
A
So we started by installing Polaris just from Helm, we saw some issues, we fixed an issue, and we immediately saw the updated results. So Polaris is constantly watching for changes in the Kubernetes API and is always up to date, right?
B
That is correct. So initially, what it does is it scans. Polaris also has an admission webhook that you can install as well, to enforce these on apply to the cluster, and then it also has the ability to add custom checks. So we have all these built-in checks that we see here, but we're also able to add additional checks.
A
How does one add... I don't want to interrupt your flow, but if you could just say in a few words: what's the language where I can specify things? Yeah.
B
Let's see. I actually haven't written a custom check in a little while, so let me go here. We can go to the Polaris documentation at polaris.docs.fairwinds.com and go to the custom checks area. We essentially write them in YAML here, but I believe we're using JSON Schema under the hood to write those.
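A sketch of what a custom check looks like in the Polaris configuration. The check name `imageRegistry` and the registry pattern are made up for illustration; the structure (a YAML check whose body is a JSON Schema matched against the resource) follows what the talk describes:

```yaml
checks:
  imageRegistry: warning        # enable the custom check at warning severity
customChecks:
  imageRegistry:                # hypothetical check name
    successMessage: Image comes from an approved registry
    failureMessage: Image should come from an approved registry
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^quay\.io/.+$   # example policy: only allow quay.io images
```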
B
All right, so we've got our security context; we are no longer allowing privilege escalation, hopefully everywhere, and we've got rid of all of our dangerous checks. We just went from an F to a D-minus; that's great. "Ds get degrees", I believe, was the saying when I was in college. And we can see that our security score actually has gone up a little bit, so that's great. So that's some of the security things that you'll see.
B
If we keep looking, we may see some more. "Not allowed to run as root" is another common one. That's also set in the security context, but that's set at the pod-level security context. So we probably want to disable that. In fact, the CVE that was released last week: if you don't have the ability to run any containers as root, that would have, not mitigated it entirely, but reduced the blast radius of that CVE that we all had to deal with last week.
B
So
we
can
go
ahead
and
add
that
in
as
well.
B
So in the pod spec, not underneath containers, I'm going to add in our security context. This one's a little more tricky to modify, because you can just stop running as root, but some containers, depending on how you've built your container, don't necessarily always play nicely. So we're going to try this and see if it works.
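A sketch of the pod-level securityContext being added. The UID is an arbitrary non-root example, and as the talk notes, whether this works depends on how the image was built:

```yaml
spec:
  securityContext:          # pod-level: a sibling of `containers`, not under it
    runAsNonRoot: true
    runAsUser: 1000         # example non-root UID; the image must tolerate it
  containers:
    - name: yelb-ui         # container name illustrative
      image: mreferre/yelb-ui
```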
A
B
For the UI, I'm not horribly worried about it. The database container and the Redis server might have a little bit more promise.
B
All right, first things first: let's make sure our app is still running. Looks like we've refreshed; our database restarted and took all of its data with it, so our number of total votes is way down. Feel free to throw some load at this thing while we're on here; see if you can get one of these to win. We've got yelb.kepler.hillghost.com.
B
That's the URL, and it's HTTP only; I wasn't able to get a TLS certificate working with this particular app while I was prepping for it. So let's go back to our dashboard and take a quick look at our UI.
B
And see... all right, so we've dropped our ability to run as root; notice that check has gone green for us. So that's great. Let's see what time we have: 10:15. So I'm going to do one more of these before I jump into the efficiency side of things. Let's do one more security one: let's do capabilities. This is an interesting one. So again, insecure capabilities: this will link specifically into our internal list of insecure capabilities.
B
These
are
the
linux
kernel
capabilities
that
your
container
has.
These
are
also
covered
in
the
documentation
here.
B
Apply that, assuming I got my YAML correct. We did, and we'll take a look at our pods, and we have a CrashLoopBackOff; not super surprised there. So dropping capabilities is kind of like changing the user and the group that you're running as: depending on how your container is built and what it requires to do, you probably need some level of capabilities here. So my guess is we're going to need something related to networking, so that we can actually run whatever it is we're running here.
B
We're
running
an
nginx
container,
I
see
so
the
yelp
ui
container
is
obviously
built
on
an
nginx
container
and
then
serving
up
some
files
out
of
that
and
nginx
is
going
to
need
some
of
those
capabilities
to
run.
So,
let's
take
a
look
and
we're
going
to
add.
Actually
I've
done
this
in
a
little
while
so
I'm
going
to.
B
Let's take a look: let's close these, and we'll take a look at the list of capabilities. I'm fairly certain we're going... well, let's just guess here. I need that; let's try that. Feel free to throw into the questions if you know exactly what capabilities nginx needs. But that doesn't work, so we can move on from that.
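What the capabilities experiment looks like in YAML. Dropping everything is the strict starting point; the added capabilities are a guess at what an nginx-based image may need to bind port 80 and switch to its worker user, not a verified list (the live demo above shows exactly this kind of trial and error):

```yaml
spec:
  containers:
    - name: yelb-ui             # container name illustrative
      image: mreferre/yelb-ui
      securityContext:
        capabilities:
          drop:
            - ALL               # start from nothing
          add:                  # guesses for an nginx-based image:
            - NET_BIND_SERVICE  # bind the privileged port 80
            - CHOWN             # adjust ownership of temp/log paths
            - SETUID            # master process switches to the worker user
            - SETGID
```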
A
B
You know, I like to run through this from a super clean perspective here, because say I'm an ops person working on a team with developers that built a container, and I need to help, you know, change the security configuration here.
B
There
are
some
things
that
are
easy,
that
are
low
hanging
fruit
and
then
there's
a
lot
of
things
that
are
more
complex
to
change
and
more
complex
to
update
and
polaris
does
a
great
job
of
alerting
you
to
potential
issues
there,
but
it
still
takes
some
effort
to
get
the
to
get
these
things
all
working.
A
I
think
it
sounds
like
it's
related
to
the
user
change,
maybe
not
the
capabilities
change.
Do
you
think.
B
A
B
Kind of the level of complexity that it can take to get some of these configurations locked down: it's not just a matter of, you know, go set the thing, do the thing here, change your YAML. You really have to understand what's running inside your container, what capabilities it needs, and also build your container in such a way that it doesn't require, you know, root-level access, if you can. So let's put this back and get this running again. So that's really a decent overview of some of the security checks we have. There are probably additional ones I haven't talked about.
B
Really, one of my favorite areas to jump into is reliability and efficiency. So we'll see a couple of issues here, specifically around, and this is still this one, I apologize for that: memory limits, CPU limits, CPU requests, memory requests, and liveness and readiness probes. And again, they'll link out to the documentation.
B
If you click on the question mark here. But these things, the liveness and readiness probes, the CPU requests, the memory requests, are really the bare minimum for good reliability inside your Kubernetes cluster, especially if you're running multiple apps that need to use different amounts of resources, and things like that. So this is really where Goldilocks comes into play. CPU and memory requests are great; we can say "set them", but the question then is: okay, what do I set them to?
B
Maybe, you know, you can profile your app. You can run it for a little while; I could go in here and kind of get an idea of how much CPU and memory each piece of my application is using right at this moment. Maybe I have some sort of monitoring hooked up, and I can go look at historical graphs. But it can be a frustrating experience, especially across many apps, trying to go in and figure out:
B
What do I set these to? And so we set out a long while ago to try to make this at least a little bit easier, just move the needle a little bit, give people a tool that would make it possible to set those memory requests and limits in an easier way. And what that resulted in was a project that we have called Goldilocks. All these projects are on GitHub in our fairwinds-ops org.
B
Goldilocks is a controller that manages vertical pod autoscaler objects in recommendation mode, and then aggregates the recommendations from those vertical pod autoscaler objects into a dashboard. The way this works is we install Goldilocks in our cluster. We would do the same type of Helm install that we did for Polaris, so we'd install Goldilocks in the goldilocks namespace from the fairwinds-stable repository.
B
And --create-namespace. I'm going to run this; I already have it. If we take a look in the goldilocks namespace, we have two components: a controller and a dashboard. One of the prerequisites of installing Goldilocks is that we also have the vertical pod autoscaler installed.
B
So if we look in our vpa namespace, I've already installed the vertical pod autoscaler. We have a chart for that, and it can be installed as a subchart of Goldilocks, but essentially I really only need the recommender portion. I'm not going to run the vertical pod autoscaler in the automatic mode, where it changes your requests and limits; I'm just going to run it in recommendation mode. And then the last thing we have to do is label our namespaces.
B
So if we take a look at our app namespace, yelb, we have this label: goldilocks.fairwinds.com/enabled=true. What this does is it goes and creates a vertical pod autoscaler object for all of the deployments in our namespace here. So we see we have these; they've been here for a few days. I built this a few days ago.
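The label that opts a namespace into Goldilocks, as it would appear on the namespace manifest (it can equally be applied with `kubectl label`; the namespace name follows the demo app):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: yelb
  labels:
    # Goldilocks creates a recommendation-mode VPA per workload in this namespace
    goldilocks.fairwinds.com/enabled: "true"
```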
B
So they've been collecting information over that time. The vertical pod autoscaler will watch the resource usage of each container in your pods and create a recommendation. So if we look at, say, the UI, we look at the VPA object.
B
We see that in the status block there's a set of recommendations: a lower bound, a target, an uncapped target, and an upper bound. Currently these all look the same, because they're using the minimum that the vertical pod autoscaler is set to; it has a minimum target. But over time, if we had load on our application, we would see these numbers start to change. And so we can take a look at the Goldilocks dashboard.
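A sketch of a recommendation-mode VPA like the ones Goldilocks creates, with the kind of status block being described. The object name and the resource values are illustrative (the status is written by the recommender, not by you):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-yelb-ui      # name illustrative
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: yelb-ui
  updatePolicy:
    updateMode: "Off"           # recommendation only; never mutate running pods
status:
  recommendation:
    containerRecommendations:
      - containerName: yelb-ui
        lowerBound:     {cpu: 15m, memory: 100Mi}
        target:         {cpu: 15m, memory: 100Mi}
        uncappedTarget: {cpu: 15m, memory: 100Mi}
        upperBound:     {cpu: 15m, memory: 100Mi}
```

With no load yet, all four values sit at the recommender's minimum, matching what the dashboard shows in the demo.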
B
I'm going to pull up the Goldilocks dashboard; I've port-forwarded to it. We can list all of our namespaces; we see all of them that are labeled here and have VPA objects in them. So we saw that the yelb namespace had that label on it. I'm going to click into this namespace, and I'm going to see, just a little bit too big maybe, the various deployments within that namespace, and then each container within each deployment.
B
So if we had multiple containers, we'd see those here. And we see Goldilocks is giving us the same issue that Polaris was, which is that our limits aren't set, and it's going to give us a recommendation on how to set those. So if we install Goldilocks alongside our applications, we can get some recommendations over time on how to set those. Another nice thing: you can hook the VPA, the vertical pod autoscaler, up to Prometheus, to get some more history.
B
That's
redis.
We
applied
to
the
ui
so
now
we're
gonna
see
that
goldilocks
is
seeing
that
we
have
our
resource
requested
limits
set
to
exactly
what
it
recommends.
So
I'm
going
to
talk
a
little
bit
about
qos.
Now,
if
you're
not
familiar
with
quality
of
service
class,
it
is
when
your
it's
the
configuration
of
the
difference
between
your
limit
and
your
request.
So
if
your
limits
and
requests
are
equal,
it's
in
what's
called
the
guaranteed
qos
cost
because
you're
guaranteeing
that
amount
of
resources
to
your
container.
B
So we show both burstable and guaranteed QoS classes. Burstable is when you have requests that are lower than
A
B
Your limits, so that your workload can burst up to the limit. Those are actually defined down here, and link out to the Kubernetes documentation where it talks about it. We use the lower bound and the upper bound to build the burstable QoS recommendation, and then we use the target from the VPA recommendation for both the request and the limit for the guaranteed QoS class.
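The two recommendation styles described above, side by side as container resource blocks (values illustrative). Requests equal to limits yields the Guaranteed QoS class; requests below limits yields Burstable:

```yaml
# Guaranteed: requests == limits, both built from the VPA target
resources:
  requests: {cpu: 15m, memory: 100Mi}
  limits:   {cpu: 15m, memory: 100Mi}

# Burstable: requests < limits
# (VPA lower bound for requests, upper bound for limits)
resources:
  requests: {cpu: 15m,  memory: 100Mi}
  limits:   {cpu: 100m, memory: 250Mi}
```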
A
So there was one question, maybe to try to generalize: it was about Polaris, I think specifically; I'm not sure how it applies to Goldilocks. Does it follow some kind of standard for the specific set of rules that you chose to enforce there? Or, to generalize this even further: how do you choose which rules, which tests, get into Polaris, and how do you update them? Does it relate to any kind of compliance or standard?
B
It's
a
great
question,
so
this
the
current
set
of
checks
that
are
built
into
polaris
are
not
built
on
any
particular
standard,
they're,
really
kind
of
a
collection
of
things
that
we've
seen
as
common
best
practices
over
the
years.
So
we
don't
currently
have
anything
that
maps
specifically
to
standards.
We
are
talking
about
what
we
can
do
in
that
area.
We
just
achieved
our
stock
to
certification,
so
we're
working
on
things
to
work
with
standards
like
that.
The
other
thing
that
we
can
do
is
in
our
in
our
commercial
product.
B
A
B
I use... it's called Alacritty; it's a Rust terminal emulator. Cool, so yeah, it's out there; it's open source. So I'm trying to get some load running on these different pods so that we can see different recommendations in Goldilocks. The other thing that we can do is tweak these a little bit. Say we built our container and we've got these, you know, huge recommendations. I'm not going to go that high, because I'm not sure how big these nodes are.
B
It will tell you if you have over-provisioned your requests and limits as well. So we can go back here, take a look at the dashboard, and see that, hey, we've over-provisioned these; maybe we allocated too many resources. Maybe we have an opportunity to save a little bit of money here and reduce the number of nodes that we're using.
B
That would be cool, although in an infrastructure-as-code world it's not my favorite solution. One thing that we've been talking about doing, actually, is adding the ability to do Dependabot-style pull requests: you have issues in your Polaris report, and it goes and creates a pull request on your infrastructure-as-code or your Helm
B
Chart
or
you
know,
whatever
you
have
to
apply
these
settings,
which
that
would
be
super
cool,
but
our
issues
are
open
on
all
these
open
source
projects
so
feel
free
to
go.
Make
that
request
there
all
right.
Let's
go
refresh
our
dashboard
here,
see
what
we've
got
great
great,
green,
green,
green,
oh
well!
I
changed
it
to
lower
than
what
they
recommend.
So,
let's
just
get
it
all
green
because
green's
good
color.
B
Before, up to 100. Because really, the number one thing about efficiency is getting your resource requests and limits set properly, and if you've watched any of the stuff that I've done recently, or you've talked to me, you probably know that I tend to harp on that a lot.
B
It's
one
of
the
things
that
I
just
jumped
back
to
frequently,
but
I
have
noticed
over
running
clusters
for
so
many
different
clients
that
so
many
problems
can
be
solved
by
really
knowing
your
resource
requests
and
limits
and
then
setting
those
properly
and
you're
utilizing
the
horizontal
plot,
auto
scaler,
along
with
your
cluster
out
of
scaling
effect,
just
those
few
things
right.
There
can
increase
the
stability
of
your
kubernetes
deployments
considerably,
so
we've
done
security,
we've
done
efficiency.
Let's
talk
about
reliability,
that's
a
security
one!
That's
security,
one!
B
Let's
talk
about
liveness
and
readiness
probes.
So
right
now
you
may
have
noticed
that
in
my
animal
files
here
I
have
no
liveness
and
readiness
probes
whatsoever.
B
Liveness
and
radiance
probes
are
super
important
to
reliability
because
they
allow
you
they
essentially
allow
you
to
not
route
traffic
when
your
app's
not
ready
and
also
allows
your
app
to
be
your
pod
to
be
terminated
and
brought
back
up
when
it's
doing
things
that
it
shouldn't.
So
I
don't
know
if
you
notice,
but
whenever
I
restart
the
pods,
I
get
a
bunch
of
errors
in
the
console
here.
I
think
they're
further
up.
B
This
is
a
different
error
here,
but
that's
because
we're
still
routing
traffic
to
a
pod,
that's
shutting
down,
because
the
readiness
probe
hasn't
there
is
no
readiness
probe
configured.
So
traffic
is
always
being
routed
to
my
pod.
So
if
we
take
a
look
at
the
polaris
check
where
it
says,
liveness
probe
should
be
configured.
B
Liveness, readiness and startup probes. Let's find an example that's HTTP, because we're running an HTTP server here. So in our container, we want to configure a liveness probe.
B
This, and then we'll have to modify it a little bit for this application. We're in the UI; we know it's listening on port 8080... all right, sorry, port 80, due to the container port here. We probably don't have a health path that I'm aware of, and we don't need to send any custom headers. So there's our liveness probe: it's just going to be an HTTP GET on port 80. If we get a 200 back, it's going to pass; if we don't get a 200 back, it's not going to pass. Fairly straightforward, and then we can.
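The probe being written, roughly: path `/` and port 80 as discussed, with timing values that are illustrative rather than from the demo. A readiness probe takes the same shape but gates traffic instead of triggering restarts:

```yaml
containers:
  - name: yelb-ui             # container name illustrative
    image: mreferre/yelb-ui
    ports:
      - containerPort: 80
    livenessProbe:
      httpGet:
        path: /               # no dedicated health path, so probe the root
        port: 80
      initialDelaySeconds: 5  # timing values illustrative
      periodSeconds: 10
    readinessProbe:           # same shape; controls traffic routing, not restarts
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```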
B
A
B
Running, but we're not routing traffic just yet, because the readiness probe hasn't started. And then the readiness probe starts, we have the ready pod, and now we're ready to terminate the old one. So hopefully we've had time for any connections to move to the new ready pod once it was actually ready to start accepting connections. And we'll go back to Polaris.
B
A
B
So essentially we want to say, you know, this pod is supposed to run as root, or this pod has to run as root. We can add exemptions: you can exempt an entire deployment from all checks, but you can also exempt it from specific checks. So if we want to, for example, annotate our, let's do the UI, or actually the app server deployment: we weren't able to configure our liveness and readiness probes; perhaps there's some reason.
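Exemptions are expressed as annotations on the workload. A sketch, assuming Polaris's check identifier for this case is `readinessProbeMissing` (verify the exact check name against your Polaris version's docs):

```yaml
metadata:
  annotations:
    # Exempt this workload from one specific check:
    polaris.fairwinds.com/readinessProbeMissing-exempt: "true"
    # Or exempt it from every check:
    # polaris.fairwinds.com/exempt: "true"
```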
B
Of course, we could put this behind an ingress, maybe front it with some OAuth proxy. But if we take a look at our app server, we see that the liveness probe is still here, but readiness is not. I misspelled something; that can happen.
B
A
B
Drop the readiness probe issue from the list. So that is how you would do exemptions. Obviously, if we wanted our score to go straight to an A-plus, we could just turn on all the exemptions and we would get that, which comes to the top question there, which is: how is the score calculated?
B
A
All
right
another
question
there
about
any
way:
to
figure
some
sort
of
notification.
B
So
we
definitely
have
that
in
our
sas
product
we've
used
the
data
from
polaris
and
send
that
to
our
sas
product.
We
can
do
notifications
there.
I
don't
think
we
do
notifications
from
the
open
source
project.
B
You
can
run
so
an
additional
feature
of
polaris
is
there
is
a
cli
and
you
can
run
it
in
ci
cd
as
well.
If
you
want
so,
if
we
had
our
yaml
files
here,
we
could
run
the
polaris
cli
and
we
could
run
a
polaris
audit,
I
believe,
by
default.
It
tries
to
connect
to
your
cluster,
but
we
can
polaris
audit
dash
dash
audit
path
and
we
can
audit
our
yaml
in
place
right
here.
B
So
if
you
wanted
to
put
a
ci
cd
check
in
place,
use
polaris
to
audit
it,
and
then
you
know,
write
some
automation
to
send
a
notification
based
on
that
it
would
be
relatively
straightforward.
We
just
output
this
nice
json
object
here
that
you
can
parse
and
see
all
the
different
failing
checks
and
the
different
name
spaces
and
and
what's
going
on
with
them.
So
we
see
like
my
ingress,
as
I
mentioned
earlier,
doesn't
have
tls
configured.
We
would
see
this
as
a
result
in
our
json
object.
B
So
it'd
be
relatively
straightforward
to
build
a
a
pipeline
notification
for
that
using
the
cli.
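A minimal sketch of the kind of pipeline automation described: parse the JSON from a `polaris audit` run and collect danger-level failures. The field names (`Results`, `PodResult`, `ContainerResults`, `Success`, `Severity`) follow the general shape of Polaris audit output but should be checked against the version you run:

```python
def failing_checks(audit: dict, severity: str = "danger") -> list:
    """Collect 'namespace/workload: check' entries for failed checks
    at the given severity from a Polaris-style audit document."""
    failures = []
    for result in audit.get("Results", []):
        name = "%s/%s" % (result.get("Namespace", "?"), result.get("Name", "?"))
        pod = result.get("PodResult") or {}
        # Walk pod-level results plus each container's results.
        for scope in [pod] + list(pod.get("ContainerResults") or []):
            for check, outcome in (scope.get("Results") or {}).items():
                if not outcome.get("Success") and outcome.get("Severity") == severity:
                    failures.append("%s: %s" % (name, check))
    return failures
```

In a pipeline you would `json.load` the audit output, call `failing_checks`, and exit non-zero when the list is non-empty to fail the build.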
B
Yeah
yeah
would
be
great,
so
you
know
a
quick
summary
of
what
I
did
today
is.
B
We
took
an
app
just
deployed
with
some
very
basic
yaml
files,
with
almost
no
overrides
for
the
defaults
in
kubernetes,
and
we
used
polaris
to
identify
some
of
the
security
issues
with
those
default
deployment
yamls,
and
then
we
use
the
recommendations
from
polaris
to
fix
some
of
those
security
settings
so
not
running
his
route,
not
allowing
privilege
escalation
looking
at
kernel
capabilities,
and
then
we
used
goldilocks
to
take
a
look
at
resource
recommendations
to
set
our
resource
limits
and
our
resource
requests
for
the
deployment
that
we
had
deployed
to
the
cluster
for
that
application.
B
Yeah,
that's
a
great
question,
so
we
have
our
github
repositories
for
all
of
our
open
source.
Our
at
our
github
organization,
fairwinds,
ops
and
then
slash,
polaris
or
slash
goldilocks,
feel
free
to
file
an
issue
or
take
a
look
at
pr's
on
any
of
those,
and
in
addition
to
that,
we
also,
if
you
go
to
any
of
our
open
source,
repos,
there's
a
link
to
our
community
slack.
B
And
then
we
also
have
an
open
source
user
group
that
we've
been
working
on
on
building
recently
that
meets
every
so
often
and
there's
also
a
link
to
do
that
there
as
well
so
feel
free
to
reach
out
through
any
of
those
mediums
and
I'm
also
in
the
kubernetes.
So
if
you
want
to
hit
me
up
there,
I'm
always
available
there
as
well.
A
All right, great. So with that, Andy, thank you so much; this was a really great introduction to Polaris and Goldilocks. And everyone else, thank you as well for joining, and see you next Wednesday on Cloud Native Live. Thank you. Thank you.