From YouTube: Cloud Native Live: Writing Polaris policies
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from April 17-21, 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A: ...They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week we have Andy here with us to talk about writing Polaris policies. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct.
A: Basically, please be respectful of all of your fellow participants as well as the presenters. But yeah, with that done, I'll hand it over to Andy to kick off today's presentation.
B: ...Polaris is a policy engine for Kubernetes. Fairwinds has been managing Kubernetes clusters for seven or eight years now, and through all of that we realized that the thing that broke clusters most often was our customers deploying things into them in ways that broke the cluster. So we wrote Polaris as a policy engine to audit those misconfigurations and also to block them from getting into the cluster.
B: So that's what Polaris is, but today what I'm going to try to do is tackle one of our open issues. If you're curious about contributing to Polaris, or you want to add checks to it, this is the perfect place to watch: I'm going to do exactly what you would have to do to add a policy to Polaris.
B: Please feel free to jump in with any questions throughout this; I'll be happy to answer them as we go along. But I'm just going to go ahead and dive in. The issue we have today has actually been open for quite a while, and it probably has the most thumbs-ups of any of our Polaris issues at the moment.
B: It seemed like a good one to tackle. The original ask was to create a check to validate that pod anti-affinity or affinity terms were added to pods in your cluster. If you're not familiar with affinity or anti-affinity, those are just rules you can specify to affect where your pods are scheduled. The request is basically this: when you schedule a set of pods, it's not guaranteed to be spread across multiple nodes, and it's not guaranteed to be spread across multiple availability zones.
B: The scheduler will attempt to do that, but it's not guaranteed. By adding these terms we can either guarantee it or at least push further in the direction of spreading out our pods. One of our maintainers actually came in and suggested that instead we use pod topology spread constraints, which I think is a great suggestion.
B: We're going to follow that today, so I'm going to try to add a policy to Polaris that suggests you add pod topology spread constraints to your pod definitions. Over here I'm running a kind cluster.
B: It's 1.25; I just created it about 15 or 20 minutes ago, and I've installed a couple of applications in it. I have a deployment running two pods in the demo namespace, and I have several deployments in the Yelp namespace. So I have two different apps running in the cluster, and I know, because I wrote the YAML for these, that the app server, for example, has no affinities or topology spread constraints on it.
B: The first thing I'm going to do is re-familiarize myself with pod topology spread constraints and get them added to this deployment, so that once I write my check I can actually validate that the check is working. So we're going to pull up the Kubernetes documentation here on pod topology spread constraints. It goes right under the pod spec. There's a bunch of optional stuff; I'm just going to copy this straight out for the moment, go down to my pod spec back here, and break things real quick.
B: maxSkew is the amount by which we can unevenly distribute the pods. I don't think I really care about most of this at the moment. Let's see: optional, optional, optional, optional. All right, so we've got maxSkew; I'm just going to set that to 1 for the moment. The topologyKey, that's the interesting thing: it specifies what we're trying to spread across. Right now I'm going to use the hostname, so we're saying we want the pods to be spread across multiple nodes.
B: You might put something in here like a failure domain that would spread across multiple availability zones, or add multiple spread constraints to spread across both nodes and availability zones. I'm just going to keep it simple for the moment. For whenUnsatisfiable, I don't think I can be quite so strict as to say DoNotSchedule, because that is going to break in my kind cluster here: my kind cluster is only a single node, so this is definitely not going to be satisfiable.
B: You can still replace here... interesting. I think I'm going to leave this empty, but I'm not quite certain, so for the moment I'm just going to comment that out. All right, let's apply that.
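For reference, a minimal sketch of the kind of topologySpreadConstraints block being added to the deployment's pod spec here, based on the fields from the Kubernetes docs just discussed; the label selector values are illustrative, and whenUnsatisfiable is shown explicitly rather than commented out:

  # Spread the pods across nodes; maxSkew: 1 allows at most one pod of imbalance.
  # ScheduleAnyway keeps a single-node kind cluster schedulable.
  spec:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: app-server   # illustrative; should match the pod's labels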
B: Let's see, this one's six seconds old; let's describe it. I don't know why it's failing its readiness probes, but that's all right, it looks like it got scheduled just fine, and we're terminating the old ones. Okay, so we have one deployment with a topology spread constraint and several deployments in the cluster without one. Now we can actually start to write our policy to verify that we have those; we have both the positive and the negative case here.
B: All right, so I have a Polaris configuration here. The first thing I'm going to do, instead of trying to add this straight into Polaris and make my pull request and all that, is just add it into my configuration for Polaris as a custom check. This is a Polaris configuration; it's largely the default, but I do have a few custom checks in here already as examples that I've used in other demos and things like that.
B: So I'm going to go ahead and add this as a custom check, and then we can look at how we add it into the Polaris code base; it's a very easy way to get started. We're going to add a custom check called topologySpreadConstraint. I'm going to add it here in the list of checks that we want to apply, and I'm going to set it to the warning level. That way, if I do have the admission controller enabled, it's not going to block anything.
B: We're going to set the success message to "Pod has a topology spread constraint" (that's a difficult thing to say quite so many times in a row), and the failure message to "Pod should be configured with a topology spread constraint". So we've got our success message and our failure message, the information that we're going to share with the end user, and the category when we add this to Polaris.
B: This is going to go under the Reliability category, because the built-in categories are Efficiency, Security, and Reliability. This affects the reliability of pods and the stability of your cluster, so I'm going to put it in that category.
B: I want to be looking at the pod specification, because that's where the topology spread constraint exists, so I want to target the pod spec. That'll look for any pod spec, which is good. Now we get into the complex bit, where we start adding our schema. All right, so we have to put in the schema draft.
B: If you're not familiar with Polaris, Polaris uses JSON Schema to validate what's going on and to do the audit. We've extended JSON Schema a little bit, but it's largely vanilla JSON Schema. If we look down at this other custom policy that I've written, we're targeting the container, and then we're saying it's an object, it has a property called image, that property is of type string, and we're going to allow any of these pattern matches.
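As a point of reference, a custom check of that shape might look roughly like the following in a Polaris config; the check name, messages, and registry patterns here are illustrative rather than the exact policy on screen:

  customChecks:
    imageRegistry:                        # illustrative check name
      successMessage: Image comes from an allowed registry
      failureMessage: Image should come from an allowed registry
      category: Security
      target: Container                   # the schema is evaluated against each container
      schema:
        '$schema': http://json-schema.org/draft-07/schema
        type: object
        properties:
          image:
            type: string
            anyOf:                        # allow any of these pattern matches
              - pattern: ^quay\.io/.+$
              - pattern: ^gcr\.io/.+$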
B: This is actually going to be fairly similar to that. What I'm going to do is pull up the app server that I just modified here on the left, so that we can get a sense of where we are in the object. We know we're targeting the pod spec, which means we're essentially sitting here in our policy, just under there. So we're going to say type: object, then properties, and then topologySpreadConstraints.
B: Now we get into the tricky bit, where we have a list. So that's the property, and it's going to be a type... hooray! That's an excellent question; I don't know the answer to that. Usually when I get stuck like this in a Polaris policy, I start looking for other policies that are similar. On the bottom left of the screen here (I'll make that a little bit bigger) I am in the Polaris repo, in the checks directory.
B: This is all of the built-in checks that come with Polaris by default, so I'm going to look for something that targets the pod spec. Oh, priority class... but I need something with a list.
B: Let's look at... oh, let's look at runAsPrivileged. Let's see what this policy looks like: we're targeting the pod spec, and we create a definition. Oh, so JSON Schema lets us create predefined blocks that we can potentially reuse.
B: Let's say I'm going to find a different policy to get an example from, or I'm going to have to go find some JSON Schema examples, because this is where things get tricky: type object, required... Let's look for...
B: All right... those are okay. That's the container array again.
B: topologySpreadConstraints, and we've got items and type; that's what I was missing. Okay, so we have items, and then under items we have type: object. Now we are essentially targeting the actual object that is one of the items in topologySpreadConstraints. Then we need, let's say, properties, and we'll just say the topologyKey has to be of type string.
B: Yes, let's try that: kubernetes.io/hostname. So essentially I'm saying I have to have a topology spread constraint, and it has to have one of these topology keys. I need to find the one for zones... yep, topology.kubernetes.io/zone. So the pod has to have a topology spread constraint with a topologyKey that is either kubernetes.io/hostname or topology.kubernetes.io/zone.
B: All right, so I'm going to save that config and run Polaris. We're going to set the output format to pretty and pass the config flag with my Polaris config.yaml, and that should run my custom check.
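A sketch of what that invocation might look like, assuming the Polaris CLI's audit subcommand and a config file named polaris-config.yaml (the filename and exact flag spellings are illustrative):

  # Audit the cluster using the local config that contains the custom check
  polaris audit --format=pretty --config=polaris-config.yaml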
B: Let's take a look at our config. Also, in addition to questions, feel free to point out when I do things wrong, because that's definitely happening in here; if anybody sees the issue, let me know. I do think we have to say... let's see: pod spec, properties, topologySpreadConstraints... I think we need a required in here.
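Pulling those pieces together, a sketch of what the custom check might look like in the Polaris configuration at this point; it mirrors the fields dictated above (including the required block just mentioned), but it is illustrative rather than the literal file on screen:

  checks:
    topologySpreadConstraint: warning     # warning level, so admission control won't block
  customChecks:
    topologySpreadConstraint:
      successMessage: Pod has a topology spread constraint
      failureMessage: Pod should be configured with a topology spread constraint
      category: Reliability
      target: Pod                         # targets the pod spec, per the step above
      schema:
        '$schema': http://json-schema.org/draft-07/schema
        type: object
        required:
          - topologySpreadConstraints
        properties:
          topologySpreadConstraints:
            type: array
            items:
              type: object
              required:
                - topologyKey
              properties:
                topologyKey:
                  type: string
                  anyOf:                  # one of the two allowed topology keys
                    - const: kubernetes.io/hostname
                    - const: topology.kubernetes.io/zone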
B: Now we see that we're getting a warning from all of the ones that don't have topology spread constraints, and this particular one, the Yelp app server, does have one. So now our check is working; that's good. Now I'm going to modify the wording a little bit to say "a valid topology spread constraint", because we're not just going to say that it has to have one, but that it has to be configured a certain way. I want to be clear about the messaging there, and then I want to double-check something.
B: Let's go ahead: I'm going to copy these spread constraints and edit the deployment in the demo namespace to add them, but I'm going to give it a different topology key just to make sure it's working the way I expect. And actually I'm going to use Polaris a little bit differently here: instead of auditing directly in the cluster, I'm going to audit the YAML. We need the demo app configuration... there we go. We see this one is also warning.
B: It does have a topology spread constraint, but I modified the topology key I was using to be outside the required list, so this is now an invalid pod topology spread constraint. This might be something where we consider splitting this into two checks: one saying that you should have a spread constraint, and one saying that it should be configured a certain way. For now I think I'm just going to leave it the way it is, so we have a working custom policy.
B: So why do I have a diff here? I'm going to check out a branch. What issue is this? It is 547... so, 547, add topology spread check around pods... yeah, right. All right, so the first thing we're going to do is add a YAML file called topologySpreadConstraint.
B: Yeah, so all of the Polaris documentation is at polaris.docs.fairwinds.com.
B: And then, if you're looking for information on contributing: if you go to the Polaris repo, we should have a contributing guide in there somewhere. It might be in the documentation as well, but the general process is to file an issue, make sure it's something that we think is a good addition, and then feel free to open a PR for it.
B: And if the question is suggesting that we need to add documentation as part of the PR, I also agree with you, and I will be doing that. So hopefully that covers all the documentation questions that have come up. All right, I'm just going to grab the text that I wrote here and put that in the YAML file in Polaris. I'm going to write that, so now we have topologySpreadConstraint as a check, and I'm going to go ahead and build this here.
B: And then there are a few other things we have to do. First I'm just going to make sure it works, so we're going to go back to where I was running this locally, and I'm going to run the locally built version of Polaris. I'm going to use the version I just built, and I'm not going to pass my configuration in, because that's what adds the custom check. Let's see if we find it in the list here... all right.
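A rough sketch of that build-and-run loop, assuming a standard Go build from the root of the Polaris repo (the binary name and flags here are illustrative):

  # Build Polaris from the checked-out branch, then audit with only the built-in checks
  go build -o polaris .
  ./polaris audit --format=pretty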
B: So that's now working, I think... oh, maybe it's not. It's not, because just adding it to Polaris's list of checks is not enough.
B: What we need to do is find the default configuration and add it to that as well. And where is that? That is a great question, and I believe it is covered in the documentation under the built-in checks. Let's see.
B: So we have a very complete PR here. All right, let's go to the docs. It's docs.md... no, not docs.md; if we go to docs, then checks, then reliability.md, we have a table here that lists all the reliability checks. I'm going to add topologySpreadConstraint, and it's defaulting to warning. I'm going to say that it fails when there are no topology spread constraints on the pod. And then, let's see, this is an interesting doc in that it just talks about liveness and readiness probes.
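The new row in that table might look roughly like this, assuming a key / default / description column layout like the rest of reliability.md (the exact columns and wording are illustrative):

  | topologySpreadConstraint | `warning` | Fails when there are no topology spread constraints on the pod. |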
B: Example... okay, constraints across zones. I do feel like, if I'm going to recommend this, I need to understand maxSkew a little bit better before we put it in the documentation. You must specify it, and it must be greater than zero.
B: ...what it needs to necessarily do. If you have three zones with two, two, and one, with maxSkew set to one, that's the global minimum... okay, thank you for the example; that is much better. So one is fine; we're just going to put one in there for the documentation, and I'm going to leave out this label selector and drop the container. So we have an example, and that's probably enough documentation for this new policy.
A: No questions so far, yeah.

A: Yeah, but keep the questions coming if anyone has any. I think we'll also probably have a good amount of time at the end, if everyone is waiting for a final note or something. Gotcha.
B: All right, I see here we have a checks directory under test, and each check has a folder. That folder (this is looking at a little simpler example) has a failure.yaml and a success.yaml. So my assumption is that we go through and run the policies against each of these failure and success cases, and assert that the failures fail and the successes succeed, which I'm guessing happens in our CI somewhere.
B: ...topologySpreadConstraint.yaml. In this failure case I'm going to just drop the whole block, because we want it to fail when there's no topology spread constraint at all. And I'm going to do failure.invalid-topology-key: we're going to put that in here, and I'm going to go change the topology key. Oh, it's already bad; I'm going to save that. And we're going to fix our success case, because it's clearly not going to work. Where's our documentation? I'm going to grab the example from that... we're going to use the zone one, thanks.
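As a sketch of what one of those fixtures might look like, following the success/failure naming convention just described (the path, pod name, and image here are illustrative):

  # e.g. checks/topologySpreadConstraint/failure.invalid-topology-key.yaml (illustrative path)
  # A spread constraint whose topologyKey is outside the allowed list, so the check should fail.
  apiVersion: v1
  kind: Pod
  metadata:
    name: invalid-topology-key
  spec:
    containers:
      - name: app
        image: nginx
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: example.com/rack     # not hostname or zone, so this should fail
        whenUnsatisfiable: ScheduleAnyway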
B: The thing I'm thinking is that maybe allowing both of those topology keys is not the best solution. It might not be, but I think giving the options was probably best.
B: ...utation schema test; this looks like it might be it.
B: We're good. I think that's all the testing we need to add for this new check. Hopefully my reviewer will tell me if I'm missing anything there. Let's see if we've passed yet... we're testing the build.
B: I'll have to go look at why this is failing, but that's not important. So I think we have a valid PR, with nine minutes to spare.
A: Perfect timing, I have to say; plenty of time now for the Q&A, so perfect, yeah. Now is the time: if anyone was waiting for the end of the demo to ask their questions, now is the perfect time to start typing away and sending them in. But yeah, anything else you want to share right now, Andy?
B: No, I mean, in general Fairwinds has probably around 10 open source projects that we maintain, four or five of which we consider our flagship open source. So if you're interested in contributing to open source and you like working with Kubernetes, all of our tools are around Kubernetes: we have Goldilocks, Nova, Polaris, Pluto.
B: Those are kind of the main ones, and rbac-manager and Reckoner as well. So if you are interested in working on any of these, or any open source projects related to Kubernetes, please feel free to go to github.com/fairwindsops. We also have a community Slack, and I'm also in the Kubernetes Slack, if folks want to reach out or have any questions about this afterwards or anything like that. I'm happy to chat about open source.
A: Awesome. So now we are all obviously experts in writing Polaris policies. But if anyone wants to learn more, what would be the next resource they should check out, or is there anything they should move on to next?
B: Definitely, if you're interested in Polaris, take a look at the repository, take a look at the documentation, and again reach out in the Kubernetes Slack or in our community Slack, which should be linked in the repo. But definitely take a look. It also functions as a validating admission controller and a mutating admission controller, and I think the mutating piece is super interesting. It's not something we've explored a ton, but I think it's very interesting and has a lot of potential use cases. So take a look.
A: Perfect, and a final call for questions if anything is coming in. But I do have a question, though: what's in Polaris's future? What are the next steps for the actual project, any kind of roadmap things that are happening, or so forth?
B: Good question, good question. I think at the moment the actual feature set of Polaris is fairly stable. Where we see the most opportunity in the next six months or so is mostly just adding additional checks. I know I have a co-worker working on building out a set of checks to essentially satisfy the NSA hardening guide that we're all familiar with, which came out I think about a year ago at this point (I may have my timeline off on that). So really just adding checks, but other than that, that's it.
A: Makes sense, oh yeah, awesome. Thank you so much. If an audience question pops up, or someone realizes that, oh, they should have asked something, I think you gave a lot of good, helpful resources and places where they can reach out to you, so that's really great. With that said, thank you everyone for joining the latest episode of Cloud Native Live. It was really great to have a session about writing Polaris policies.
A: We also loved the few interactions from the audience, and hi to the people who said hi to us, of course, as well. As always, we run the latest Cloud Native Live every Wednesday, and in the coming weeks we have more great sessions coming up. Thank you for joining us today, and see you around in the coming weeks.