From YouTube: TGI Kubernetes 078: Pod Security Policies
Description
Join Josh Rosso as he hosts his first TGIK covering Pod Security Policies. We'll explore why PSPs are important and how they can be implemented in your Kubernetes clusters! Come Join and bring all your questions!
All right, there we go. So, learning quickly here: don't have the live stream open in your background, or that will cause issues. So we're all good. Now, hey everyone. Eisen from Mexico, welcome, thanks for joining us. Shahar from Atlanta, welcome! Glad to have you. How's it going, everybody? Happy Friday, happy Friday.

All right, everybody, so welcome to episode 78. My name is Josh Rosso, and I am on the same team with Duffie, and I also work with Joey. I came over to VMware from the Heptio acquisition. So I'm so happy to have you all here, and thanks for humoring me on my first TGIK. Hopefully we're gonna have a good time talking a little bit about pod security policies and diving a little bit into that. So, getting rolling here.
Let's start with our HackMD, and we'll talk a little bit about some of the news in the Kubernetes ecosystem, and then we'll get right into the content. So again, my name is Josh; you can find my handle in the HackMD here, and I'll put the HackMD link in there. All right, so a little bit about what we're gonna be covering here today. Obviously, our focus is pod security policies, but there's a couple things that I want to just talk to you all about as far as news in the Kubernetes ecosystem.

All right, probably the most exciting news that we have this week is that Kubernetes has officially reached five years old, which is super, super exciting. It's kind of interesting if you think about it, right? You know when you started your Kubernetes journey. Some of us have been here for just a couple months, some of us have been here for a year plus, and some have been here, you know, for many years. So it's really, really exciting to see Kubernetes hit its fifth anniversary right after KubeCon. Really, really cool. All right.
Some other cool links: there's a really cool post on Medium talking about how to set up a serious Kubernetes terminal. So if you get a chance, check this out. It's got some really cool tips and tricks around some interactive tooling that you can use to analyze, look at, and interact with your Kubernetes cluster, which is really exciting. Probably another big item to talk about here as well: our Kubernetes versions have changed. We had three releases based on a security issue.

So if you haven't already checked those out, please do. 1.14.3, 1.13.7, and 1.12.9 have been released. Even in this session, I unfortunately won't be using the newest version, but those versions have come out, so please be aware. Another cool post as well: there's an article (I'm sorry, clicked the wrong one there, let's go back; it's not on Medium, but on ITNEXT, through Medium) on boosting your Kubernetes productivity. It has some really cool tips and tricks around kubectl, like doing different types of completion easily.
Switching between namespaces and contexts, that sort of thing. So please be sure to check this post out as well if you want to get a little bit more savvy with some of the kubectl interactions that are out there. All right, and then just a couple more real quick: the Certified Kubernetes Administrator cert is now good for three years. So those of you who do have your CKA, be sure to check that out; the cert is, again, good for three years now. And KubeCon Barcelona has just finished.

So that's also really exciting news. We're all perhaps a little bit jet-lagged still, although it's been a while getting back from KubeCon Barcelona. So hopefully some of you were in Barcelona, and yeah, good stuff. Another cool thing is Tim Hockin did an Ask Me Anything, so please do check that out as well.
Inside of the AMA, you should be able to get an idea of some of the questions that people asked Tim and the answers that were posted as well. So overall, really cool stuff. And then, in the news this week again, the major things being: KubeCon's over and we're all back, Kubernetes turned five, and we have a couple different releases based on some small security things. So that's about it, and we'll get right into it here. Let's see who else has joined. Sergei, welcome from Russia, glad to have you with us. All right.
So what we're going to be getting into today is talking a bit about pod security policies. And around pod security policies, I first just kind of want to poll the group here in chat: go ahead and tell me whether you have PSPs being used in your clusters or not. Say yes if you're currently using PSPs, and say no if you're not, so I get kind of an idea of the room and what people are considering. So again: yes if you're using PSPs.
A
No,
if
not
so
pot
security
policies
are
something
that
we
work
with
a
lot
of
our
different
customers
around
just
kind
of
walking
through
setting
them
up
kind
of
conceptually
understanding
how
the
different
pieces
fit
together.
It's
it's
a
little
bit
challenging
conceptually
to
to
dive
into
this
this
area
of
pod
security
policies
and
how
it
kind
of
changes.
Your
your
structure,
of
how
things
work
inside
of
the
cluster,
so
cool
cool,
so
I
can
see.
It
looks
like
we
got
a
mixed
group
here.
We've got some who have PSPs on, and a lot that don't. So those of you who don't, you're going to get some good perspective here on some of the things that PSPs can impact. Looking good, all right. So the first thing that I'm going to start off with, as we conceptualize about pod security policies, is to try to show you a little bit about why these policies are important to consider.
So, inside of the master node. Let me go ahead and switch to my screen here. All right, so inside of the master node, we have no pod security policies turned on whatsoever. This is a pretty standard cluster: I've got just the kube control plane components running, plus CoreDNS and Calico. So Calico is my networking CNI, and CoreDNS is my DNS server, obviously.
So what exactly does, or how does, a Kubernetes cluster behave out of the gate when you're doing this kind of stuff without any pod security policies enabled? Well, it basically means that, for the most part, you can do almost anything with your deployments, right? You can take a manifest, you can drop it into the cluster, and it will instantiate and create that pod for you on your behalf. So, as an example of this, if I wanted to run an nginx pod: nothing too crazy.
All right, so we've got a very simple pod structure here: our labels are nginx and team-a, and our container is just gonna, for the sake of example, run off of nginx:latest. So applying this should theoretically instantiate the pod, all right.
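The manifest itself isn't visible in the transcript, but based on the description (labels for nginx and a team, a single container on nginx:latest), a minimal sketch might look like this; the pod name, container name, and label values are assumptions:

```yaml
# Hypothetical reconstruction of the demo pod; names and label values are guesses.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app: nginx
    team: team-a
spec:
  containers:
  - name: nginx
    image: nginx:latest
```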
So let's go ahead and set this up here. I'm gonna pop out of this window. All right, we'll do an apply for the nginx app pod, excuse me, and down here in the bottom we'll set up a little watch, just so you can see what's going on.

So if we do a watch for kubectl get pods in the namespace org-1... actually, I've got an app running inside of here, because I was playing around a little bit. So let me delete that real quick; delete the app deployment. All right, cool, we'll get rid of those, and let's just focus on our app pod at first, to keep it a little bit simpler. So we'll do apply.
A
Okay
of
our
app
pod,
yeah
awesome!
Alright!
So,
as
you
would
expect,
this
is
going
to
create
an
engine
container
and
this
nginx
container
is
running
inside
of
the
pod
network.
It's
it's
working,
just
as
you
just
as
you'd
expect.
Okay.
So
if
we
wanted
to
do
more,
how
do
you
say?
Maybe
things
that
we
shouldn't
be
doing
with
the
manifest?
It
really
wouldn't
be
hard
for
us
to
do
so.
A
Theoretically,
we
could
come
in
here
to
our
pod
and
say
we
wanted
to
to
go
into
the
kubernetes
cluster
and
start
kind
of
launching
workloads
inside
of
the
cluster
from
within
a
pod
itself.
Okay,
so
kind
of
giving
you
an
example
here.
So
if
we
go
into
pod,
particularly
our
app
pod
camel
alright
and
let's
go
ahead
and
just
change
this
up
a
little
bit.
Alright.
So
we'll
come
down
to
containers
here
and
we'll
set
up
a
section
for
volumes.
A
Okay,
all
right,
so
volume
is
good
good
and
then
inside
of
our
container
itself,
we're
gonna
put
in
section
for
volume
mounts
inside
of
here,
alright,
so
that
all
looks
good
now
inside
of
our
volumes,
we're
gonna
have
one
that
we've
named.
Let's
say
bad
inside
of
here
and
then
I'll
double
check
on
the
the
syntax
here,
but
I
think
we're
gonna
have
a
seat
here,
a
volume
mount,
so
the
volumes
should
have
a
name,
and
it
should
have
a
mount
path.
So
we'll
say
action.
This
will
be
in
volume.
A
Mounts
excuse
me
getting
my
mask
emos
mixed
up,
so
we'll
have
a
mount
path.
Gear
right
and
the
mount
path
is
going
to
be
inside
of
a
root
directory
called
bad
right
and
volumes
themselves
should
also
be
an
array
all
right.
So
we've
got
mount
paths
mounting
into
bad
and
we're
gonna
go
ahead
and
set
up
the
mount
to
the
volume
bad
inside
of
here.
A
So
this
volume
mount
is
going
to
bind
in
to
this
volume
right
here,
alright,
so
what
we
can
do
now
is,
since
we
don't
have
any
pod
security
policies
in
place
effectively,
we
should
be
able
to
anywhere
in
the
file
system
that
we
want
to
which
you
could
probably
guess
why?
That's
probably
not
the
the
best
thing
right.
So
inside
of
volumes
for
host
path,
we've
got
name.
This
is
going
to
be
the
post
PAP.
Our
path
will
be
a
path
for
Etsy
kubernetes.
A
Manifests:
okay,
so
we'll
push
this
inside
the
manifest
directory
and
then
the
type
is
of
course
this
is
a
directory,
so
we'll
go
ahead
and
call
it
directory
right.
So
we've
got
our
path
setup
we've
got
our
type
directory
set
up
and
it's
all
inside
of
host
path
for
the
name
bad
now
chances
are
decent.
A
That
I've
got
some
schema
issues
here,
but
let's
see,
if
I
can,
if
I
can
find
any
potential
issues
with
with
volume,
so
I'm
going
to
delete
that
pod
here
and
we're
gonna
go
ahead
and
apply
this
newly
setup
pod.
That
again
has
the
the
host
path
in
it.
So
I'm
only
make
sure
I
get
the
right
app
here
so
app
pod.
There
we
go
we'll
delete
that
and
in
the
bottom
of
my
screen
here
you
can
see
that
that
app
is
now
officially
deleted.
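The modified manifest he describes, the original pod plus a hostPath volume bound into /bad, would look roughly like the sketch below; the names are assumptions carried over from the earlier sketch:

```yaml
# Hypothetical sketch of the demo pod with the hostPath mount added.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: bad
      mountPath: /bad          # host's manifests dir appears at /bad inside the container
  volumes:
  - name: bad
    hostPath:
      path: /etc/kubernetes/manifests
      type: Directory
```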
Theoretically, what we've got here is a volume mount that's set up to effectively let us write any data that we want inside of this manifests directory. So again, keep in mind that these hosts in Kubernetes are multi-team, potentially even multi-tenant, where you've got many different workloads across different namespaces, perhaps, running on the same host. So, to hop into an exploit here, now that we've mounted this in place, there's been nothing in our manifest to stop us.
A
We
should
be
able
to
go
into
this
pod
and
start
doing
some
some
fairly
bad
things
right.
So
as
an
example,
here,
I'll
pop
one
window
over
and
I'm
gonna
open
up
two
buffers.
Okay,
so
again
we
have
got
a
pod,
so
let's
do
cube
cut.
It
will
get
pods
for
the
name,
space
org
one,
which
is
where
our
pods,
and
so
our
pot
is
called
ta
right.
Let's
go
ahead
and
exact
into
that
team.
A
So
something
in
here
is
named
bad,
so,
like
any
good
person
would
will
CD
in
too
bad,
because
that's
what
you
do
and
now
that
we're
in
the
bad
directory
I'm
gonna,
ask
the
sage
into
the
host
now,
because
I've
only
got
one
worker
node
I
know
that
the
the
KVM
based
host
here
is
running
on
202,
so
I'm
gonna
go
ahead
and
SSH
into
it
log
in
alright
and
if
I
sudo
in
and
go
to,
Etsy
kubernetes
manifests
I'm
now
in
the
manifest
directory.
Okay
and
we'll
set
up
a
watch
here.
A
So
let's
do
a
watch
LS
on
this
manifest
directory
now
at
see,
kubernetes
manifest
depending
on
how
you
install
your
clusters
or
set
them
up.
It
may
or
may
not
be
something
you're
used
to
without
getting
too
much
into
the
the
details
right
now.
This
is
a
directory
where
we
are
able
to
put
something
called
static
pods,
so
by
putting
a
manifest
in
this
directory,
if
static
pods
are
turned
on
the
cubelet
without
having
to
talk
to
an
API
server
will
actually
be
able
to
instantiate
that
container
and
run
it
on
the
host.
A
Now,
if
we
just
hop
on
the
internet-
and
we
type
in-
let's
say
we
do
pod
example-
let's
say
kubernetes
kubernetes
pod
example:
let's
do
that
alright
and
let's
say
pod
example:
yamo,
maybe
just
want
to
get
some
random
random
pot
off
the
internet
here,
okay,
so
kubernetes,
let's
see
here
my
sequel
pod.
Actually
you
know
what
there's
one
that
I
there's
one
I
was
using
earlier
called
a
croc
hunter.
Let's
see
if
I
can
find
that.
Okay,
all
right
perfect,
so
I've
got
this
pod
sea
croc
hunter!
That's
the
chart
do
see
here.
A
Here
it
is
okay,
great,
so
I
was
looking
at
this
this
pod
just
earlier
today,
when
I
did
the
same
Google
search
to
see
what
what
pods
would
come
up
so
I'm
just
gonna
go
ahead
and
grab
this
manifest
this
pod,
manifest
from
this
URL
alright
and
now,
if
I
go
back
to
my
host
again
in
the
top
window,
I'm
in
my
pod
and
in
the
bottom
window,
I'm
on
the
host
now
I'm
gonna
go
ahead
and
install
W
get
on
this,
because
this
pod
should
have
egress
capability,
so
I'll
do
a
quick,
app
update
and
I
should
have
baked
the
image.
A
But
that's
okay,
we'll
do
an
aft
install
W
get
so
we
have
W
get
alright
looking
good
and
then
once
W
get
is
installed.
We
are
going
to
download
this
pod.
So
if
I
do
a
double
you
get
for
this
croc
hunter
pod,
it
puts
it
in
this
directory
on
the
host
in
the
bottom
of
my
window.
Here
now
again,
since
this
is
a
static
pod
directory,
the
cubelet
isn't
concerned
about
talking
to
the
api
server
about
information
about
the
pod
or
where
it
should
go
right.
A
So,
if
I
back
up
here
and
just
do
a
quick,
let's
CD
back,
serve
root
here
routes,
home
directory,
if
I
do
a
docker
for
let's
say
well
croc
hunter
I
guess
you
can
see
that
croc
hunter
is
now
running
as
a
static
pod
in
this
host,
I
didn't
have
to
use
cute
cuddle.
I
didn't
have
to
do
anything.
Special
I
just
had
to
mount
into
this
directory
and
drop
it
in
kubernetes
aside
right.
A
If
you
had
the
ability
to
do
this,
if
you're
using
something
like
a
bun
or
some
Linux
operating
system
that
has
like
a
pre
start
or
a
start
directory
where
you
can
drop
different
units
or
things
in
to
start
up
on
startup
a
hunk
or
tube
to
boot
on
startup,
if
you
will,
you
can
effectively
take
advantage
of
this
and
start
having
things
and
processes
kind
of
boot
up
and
start
as
well.
Now,
an
interesting
thing
about
this
and
I
want
you
to
kind
of
hold
on
to
this.
A
This
little
nugget
for
a
second
here,
an
interesting
thing
about
this.
If
we
leave
our
terminal
is,
if
you
do
a
cute
cuddle
get
pods
in
the
default.
Namespace
croc
hunter
will
technically
show
up
okay,
but
that
doesn't
necessarily
mean
that
croc
hunter
is
being
managed
by
the
cube
API
server.
We'll
talk
a
little
bit
more
about
that
for
now,
but
for
now
we're
gonna
call
this
something
called
a
mirror
pod.
A
All
right,
we'll
talk
more
about
it
later,
but
for
it
suffice
to
say
that
I've,
inserted
a
pod
or
a
container
into
the
system
and
I'm
effectively
been
able
to
do
something
like
this
right.
So
you
can
see
how
from
a
multi-tenant
environment,
this
could
be
really
problematic,
as
people
could
get
into
the
hosts
and
do
bad
things
like
this.
Now.
What
I'm
gonna
do
to
kind
of
keep
things
simple
here:
I'm
gonna
leave
the
container
and
we're
gonna
go
back
to
the
host
all
right
and
in
this
host
oops.
A
Let's
get
back
into
it
in
this
host.
We're
gonna
just
clean
up
that
directory
here,
all
right,
so
we're
just
gonna,
go
back
to
Etsy.
Kubernetes
manifests
and
delete
that
so
Etsy
kubernetes
manifests
and
we
will
delete
the
the
pod
example
to
make
sure
it
goes
away
all
right,
okay.
So
this
is
an
example
of
how,
from
a
file
system
perspective,
having
no
real
enforcement
on
the
pods
that
are
being
created
is
going
to
enable
you
to
do
some
potentially
bad
things
now.
A
Here's
another
example
for
you,
and
just
just
to
kind
of
drive
this
home
and
then
we'll
talk
about
actually
implementing
the
PSP's
and
and
what
exactly
they
look
like
all
right.
So
if
we
go
into
the
app
pod
one
more
time,
let's
bring
ourselves
back
to
where
we
were
we're.
Gonna
take
host
mounts
out.
We're
gonna,
take
volumes
out
all
right
now.
This
cluster
is
running
with
calico
as
its
CNI,
plugin
and
calico
is
going
to
allow
us
to
put
a
bunch
of
network
policies
in
place.
A
So
we
can
restrict
traffic
in
the
cluster
and
make
sure
that
every
pod
can't
just
talk
to
every
other
pod
or
perhaps
certain
pods
can't
go
outbound
through
egress
and
and
so
on.
Right
as
an
example
here,
if
I
just
bring
back
up
this
bottom
window,
if
I
do
a
cube,
cuddle,
get
global
network
policy,
all
right,
I
have
this.
A
This
calico
Ciardi
called
global
network
policy
and
if
you're
not
super
familiar
with
calico,
suffice
to
say
that
this
is
telling
the
cluster
to
lock
down
all
traffic
by
default,
except
for
the
cube
system,
traffic
and
traffic
to
DNS
servers
everything
else,
I
locked
down
by
default
across
the
cluster,
all
right
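A default-deny GlobalNetworkPolicy of the kind he describes might be sketched as below. This is an assumption about his setup, and the kube-system and DNS exceptions he mentions would be carved out with separate allow policies:

```yaml
# Hypothetical sketch of a Calico default-deny policy; the apiVersion can differ
# depending on whether it's applied via calicoctl or as a raw CRD.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all()        # applies to every workload endpoint in the cluster
  types:
  - Ingress
  - Egress
```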
Now, org-1: let's look inside of it, because technically we shouldn't have been able to do things like apt update earlier, right? So if we go inside of here and do a kubectl get networkpolicy (and these are the standard Kubernetes network policies; they are namespace-scoped), and I ask for the one in org-1, I have org-1 set up to have an allow-all-egress policy. Now, I'm not going to show you the policy itself, because this is not about network policies, but you kind of see where this is going, right?
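He doesn't show the policy on stream, but a standard Kubernetes allow-all-egress NetworkPolicy for that namespace would look roughly like this; the namespace and policy names are guesses from the demo:

```yaml
# Hypothetical allow-all-egress policy for the org-1 namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: org-1
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - {}                   # a single empty rule allows all egress traffic
```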
A
The
whole
cluster
is
locked
down
by
default
and
then
someone
went
into
this
namespace
and
set
up
a
rule
that
allows
all
egress
traffic
it's
right
out
of
the
gate.
So
hypothetically
here
let's
say:
let's
say
we
wanted
to
remove
this
any
app
that
runs
inside
of
the
team,
a
namespace
we
no
longer
want
it
to
be
able
to
egress
by
default.
So
let's
do
a
cube
cuddle
and
we
will
delete
this.
This
network
policy
so
delete
network
policy
inside
of
the
namespace
org,
one
all
right.
So
now
it's
deleted.
A
So
in
a
theoretical
world,
if
I
go
out
and
read,
and
your
next
pod
I
am
NOT
going
to
be
able
to
do
like
an
apt
update,
because
it's
just
gonna
freeze
and
be
blocked
by
calico
from
getting
extra
traffic.
So
let's
try
that
real,
quick
I
have
the
app
pod
up
here.
Let's
go
ahead
and
watch
our
or
one
name
space
in
the
bottom,
so
I've
made
some
changes
to
it.
We
will
start
off
by
deleting
it.
So
let's
delete
that
pod.
A
Take
it
out
of
the
equation:
okay
and
then
let's
go
ahead
and
apply
this
pod
right
back
and
again,
I've
taken
out
all
the
volume
amount
stuff,
so
we're
just
focused
on
network
traffic.
Now
so
I
apply,
this
pot
container
goes
in
and
it
creates
same
idea
if
I
come
down
here.
I
should
be
able
to
exact,
let's
clear
this
out.
A
I
should
be
able
to
exact
into
that
same
team
a
container
under
bin
bash,
so
we'll
go
ahead
and
enter
that
now,
if
I
do
a
app
update,
you'll
notice,
depending
on
your
familiarity
with
apt
I,
did
through
DNS
resolve
deb
debian
or
get
resolved
to
this
IP
address.
However,
I
am
being
blocked,
which
might
actually
be
a
desirable
behavior
of
your
network.
So
again,
this
is
a
namespace
where
you
want
to
run
a
bunch
of
workloads
where
those
workloads
cannot
egress
outside
of
the
cluster.
It
is
completely
locked
down
for
egress
traffic.
A
Now, we don't have pod security policies in place, so getting around this model is pretty trivial. Let's leave our container again, and set up a watch one more time, right; I'm gonna bring it down a little bit. All right, so some of you probably know where I'm going with this. What could we do here to effectively bypass our network policy altogether?
A
Well,
if
we
go
into
our
containers
here,
we
could
set
up
post
network
equal
to
trip
now,
there's
a
lot
of
reasons
why
we
don't
want
to
set
host
meet
with
network
equal
to
true,
but
especially
for
user
workloads.
That
is
there.
There
are
some.
You
know:
exceptional
workloads
like
perhaps
the
cube
API
server,
or
perhaps
your
ingress
layer,
we're
actually
binding
to
the
host
network
might
make
cents,
but
this
should
theoretically
block
us
or
allow
us
to
get
past
that
block
this
host
network.
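The one-line change he makes is setting `hostNetwork: true` at the pod spec level; the rest of this manifest is the same assumed demo pod from earlier:

```yaml
# Hypothetical demo pod with the host-network bypass added.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  hostNetwork: true      # pod shares the node's network namespace, sidestepping pod-level NetworkPolicy
  containers:
  - name: nginx
    image: nginx:latest
```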
Apply that pod. All right, it is created with hostNetwork. Now again, without any PSPs in place, it was one line, line 15. This is all it took for me to effectively change the network security model of my running pod, right? What do I mean by that? Check this out real quick. First things first: there's no network policy that's allowing egress. So if I exec into team-a, I shouldn't be able to do an apt update, just like before, right?
So let's go ahead and exec into team-a. All right, so exec back in. Now, if I go and do an apt update, I'm able to access the external internet regardless of what my network policy is, because I'm now bound to the host network. Another interesting thing I can do, for those of you who are used to how the iptables mechanisms work and things like that: theoretically, I can not only leave via the host network, but I can actually access things in the service network inside of the cluster as well.
A
So
let's
say
as
an
example:
I
go
in
and
I
do
a
let's
see
here:
a
cube
cuddle
get
service
for
the
namespace
or
one
okay.
I've
got
this
teammate
cluster
IP
that
fronts
the
pod.
So
this
cluster
IP
is
an
internal
virtual
IP
that
shouldn't
be
exposed
in
theoretically.
You
know:
I
I
can
go
in
with
this
pod.
That's
not
allowed
to
egress
and
assuming
I
do
a
install
of
curl.
A
So
let's
go
ahead
and
curl
on
the
toast
real,
quick,
the
bottom,
so
we'll
install
curl
I
should
be
able
to
access
this
service
IP
because
I'm
on
the
host
itself,
it's
almost
like
I'm
some
route
process,
or
at
least
some
process
not
necessarily
route.
That's
running
and
able
to
access
things.
So
if
I
do
a
curl
to
here,
boom
I've
got
the
nginx
404
page.
So
now
I
can
access
these
internal
Service
IPS,
even
though
I
should
have
been
blocked
by
egress
and
then
lastly,
of
course,
and
I
promise.
A
This
is
the
last
example
of
this
since
I've
bound
to
the
host,
not
only
from
like
an
internal
security
standpoint
is
that
concerning,
but
I'm
on
host
network.
Now,
which
means
that
my
random
workload
has
bound
to
port
80
on
that
host.
So
if
we
think
about
this,
we
know
that
it's
running
on
one
92168,
0.2
Oh,
that's
my
worker
node
in
this
cluster
and
if
I
curl,
that
on
port
80
I
also
get
the
nginx
page.
A
So
this
shows
you
how,
with
these
minor
modifications,
you
actually
have
the
ability
to
do
some
pretty
serious
things
in
a
cluster
that
could
be
multi
team
all
right.
So
this
is
where
we're
gonna
start
talking
about
pod
security
policies
as
a
whole
and
how
they
work
all
right
and-
and
it
feels
a
little
bit
tricky
at
first
you're
right
to
be
frank,
it's
pretty
it's
pretty
tricky,
conceptually,
at
least
at
least
for
me.
It
was
hard
to
really
get
an
idea
of
how
these
things
work,
but
effectively.
But effectively, when somebody puts something like hostNetwork in, or when somebody binds to a host path, we want to stop that before the pod itself gets created. Okay. Now, as you know, there are no pod security policies on by default, which means we need to enable pod security policies on the cluster itself. All right, so I'm gonna switch over to the host and I am going to show you how to flip that on. And let me take a quick look at the chat: Olaf, welcome from Denmark, glad to have you. Timmy, welcome. Calcic, great to have you, welcome.
A
Welcome
thanks
for
joining
cool
cool,
all
right,
so
let's
go
back
and
we
are
going
to
SSH
into
the
master
node.
So
again
my
worker
noticed
OH.
My
master
node
is
201,
so,
let's
SSH
into
the
master
node
here
and
now
that
we've
shown
an
example
of
this.
You
might
have
a
pretty
good
idea
with
cube
admin,
which
is
how
these
clusters
were
stood
up.
I
am
able
to
go
in
and
edit
the
API
server.
So
to
do
this,
I
need
to
sudo
vim
at
C.
Kubernetes
manifests.
What we need to do here is look for the line where we can set enable-admission-plugins, all right. There is an old flag here that had a slightly different name, and with that flag, order mattered; with this one, it doesn't actually matter. So all we need to do is append PodSecurityPolicy to the end of this, right?
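In a kubeadm layout, the edit he describes lands in the kube-apiserver static pod manifest. A sketch of the relevant lines, with the other flags abridged:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (abridged).
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy   # PodSecurityPolicy appended here
```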
So
obviously,
you're,
probably
gonna,
do
this
with
your
automation.
A
Perhaps
you're
gonna
have
ansible
change
this,
or
perhaps
you
have
some
type
of
distribution
of
kubernetes,
where
you
can
flip
this
on
as
a
default
setting,
but
we're
gonna
flip
this
on
here
and
see
how
things
change
alright
now
I
also
want
to
show
you
before
I
before
I
save
this
up.
I'm
gonna
go
ahead
and
set
up
one
more
watch
down
here.
So
let's
do
a
cube.
Cuddle
get
pods
in
the
namespace
cube
system.
Alright,
I'll
make
that
little
bit
smaller.
This
is
my
set
of
cube
system
pods
all
right
when
I
save
this.
A
The
behavior
should
be
that
the
static
pod
Rhys
arts
itself
and
the
API
server
goes
offline
temporarily
and
comes
back
so
ideally,
I
have
a
highly
available
control
plane.
So
I
can
make
changes
like
this,
and
the
load
balancer
will
automatically
know
to
not
send
stuff
to
that
API
server,
while
it's
restarting
so
let's
go
ahead
and
save
this
up
and
quit
and
in
the
bottom
buffer.
Here
you
can
see
my
API
server
has
officially
gone
down
and
it
is
going
to
be
coming
up.
Alright,
so
I'll
check
the
chat
while
that's
coming
back
up.
A
Gosselin
welcome.
Welcome
merked,
yes,
I
am
using
a
standing
desk.
I'd,
be
very
talented
to
move
this
quickly
with
that
in
a
chair.
It
would
be
a
very
like
a
very
Rolley
chair,
I
guess
you
could
say,
but
yeah
I
mean
I
am
in
a
standing
desk
all
right.
So
my
kubernetes
api
server
is
coming
back
online
and
my
pods
are
now
running
again,
so
we
flipped
pod
security
policies
on
okay.
A
Now,
if
those
of
you
who
said
no
in
the
chat
to
never
running
pod
security
policies,
I
will
tell
you
what
I
just
did
is
usually
what
messes
people
up
out
of
the
gate
when
they
first
do
pod
security
policies:
okay,
because
you
go
okay,
well,
I
need
pod
security
policies,
so
obviously
the
first
thing
I'll
do
is
I
will
turn
them
on
all
right.
Now,
I'm
gonna
go
ahead
and
let
me
just
make
sure
because
my
I
was
playing
off
my
system
earlier.
A
Let
me
make
sure
I
don't
have
any
pot
security
policies
so
get
PSP.
I
do
have
one.
Let
me
let
me
delete
this
PSP
real,
quick.
Just
so
I
can
show
you
what
it
would
normally
be
like:
okay,
cool,
so
we're
back
to
normal
I
have
no
pod
security
policies
in
my
system,
but
I
have
turned
pod
security
policies
on
now
by
default.
This
will
sort
of
hose
you
right.
A
This
is
going
to
put
you
in
a
place
where
you're
super
fragile
and
if
a
pod
needs
to
restart
it
might
not
be
able
to
it,
probably
won't
be
able
to
so.
Let's,
let's
use
an
example,
so
we're
looking
at
our
system
here,
let's
use,
let's
use
Core
DNS
as
an
example,
so
core
G&S
runs
as
a
deployment
and
if
I
delete
this
core
DNS
pod,
it
is
the
job
of
the
replication
controller.
A
The
replication
controller
who
handles
a
deployment
is
the
job
of
the
replication
controller
to
make
sure
that
the
pods
are
instantiated
in
its
control
have
been
sharing.
It
has
the
amount
of
pods
that
you
expected
right.
So
let's
do
a
cute
cuddle
and
let's,
let's
delete
pod
all
right
and
if
I
delete
core
DNS
here
inside
of
the
name,
space
cube
system
and
now
I'm,
obviously
deleting
this,
but
those
of
you
have
been
using
kubernetes.
You
know
pods
get
started
and
stopped
all
the
time.
So
this
doesn't
have
to
be
me.
A
So
if
I
delete
this
pod,
the
pod
terminates
and
it
goes
away
eventually-
and
we
should
still
have
one
of
two
core
DNS
pods
running
inside
of
here
and
now
we
do
but
again,
based
on
what
I
told
you
assuming
I'm
not
lying
to
you,
we
should
have
had
a
core
DNS
pod
replaced
this
one,
because
the
replication
controller
should
be
instantiating
a
new
core
DNS
pod.
So
let's,
let's
check
this
out
and
see
see
what
it
means
real,
quick.
A
So
if
I
do
aq
cuddle
and
I'm
gonna
get
the
deployment,
if
you
will
for
core
DNS
inside
of
the
right
name,
stay
so
for
cube
system
all
right,
all
right,
so
I've
got
core.
Dns
I've
got
a
ready
one
of
two
now
one
thing
that
will
oftentimes
check
is
will
want
to
see
the
events
of
these
constructs
and
see,
if
there's
any
events
that
are
interesting.
That
might
have
given
us
some
details.
So
if
we
do
a
get
events
for
the
core
DNS
deployment
in
the
name,
space
cube
system.
A
Let's
see
here
get
events
for.
Oh
sorry,
get
events.
My
dad
I
meant
to
describe
sorry
not
get
events.
I
can
get
events
for
all
of
them,
but
I'm
just
gonna
describe
the
core
DNS
deployment.
So
let's
do
deploy
your
core
DNS
alright.
Now,
if
I
look
closely
in
this
window
here,
there's
no
events
of
this
particular
control,
which
is
what
I'm,
mostly
interested
in,
but
I
do
have
a
little
bit
of
information
as
I'm
debugging
I'm,
trying
to
figure
out
why
the
heck
has
core
DNS
just
disappeared.
Why
isn't
it
being
created?
A
Well,
I
can
see
that
I
have
some
type
of
replication
failure.
Condition
all
right
and
based
on
this
I
know
that
core
DNS
is
being
managed
by
this
replica
set
right
here.
So
the
problem
isn't
necessarily
going
to
surface
on
the
deployment
right.
It's
actually
gonna
surface
on
the
controller,
which
is
the
replica
set
that
actually
owns
the
instantiation
of
the
pot.
So
let's
go
ahead
and
do
a
cube.
Caudill
get
replica
set
for
core
DNS.
A
All
right
and
I'm
gonna
go
back
to
a
cube
system
here
and
get
our
s
cube
system
with
an
M
and
same
idea.
We've
got
two
desired
with
one
current
all
right.
So
let's
describe
this.
If
we
do
a
describe
replica
set
now
the
replica
set
itself,
it
does
have
an
event
now,
while
it's
wrapped
around,
hopefully
you
can
kind
of
see
it
here.
A
It's
telling
us
to
zoom
in
a
bit,
it's
telling
us
warning
I
failed
to
create
your
rep
via
the
replica
set
controller,
the
pod
core
DNS
with
the
replica
set
prefix,
and
it's
telling
you
it
was
forbidden
right,
particularly
it
was
forbidden
because
there
is
no
providers
available
to
validate
the
pod
requests.
So
what
does
this
mean?
Some
things
to
kind
of
know
about
pod
security
policies
here
that
are
that
are
really
important
and
I'm.
Okay, so when PSPs are on, the pods created must resolve against an existing policy, all right. Now, if we think about the flow here for just a moment, in regards to what's happened: we've basically got this situation where we had the replication, or sorry, I guess the replica set, controller, right? It went to create, it wanted to create, a pod. So what happened between the replica set controller and the actual creation of the pod?
A
So
that's
one
of
the
first
things
that
it
did
here
and
then
it
passed
that
through
the
the
pod
security
policy,
admission
control
so
we'll
just
say,
admission
controller
in
this
case
PSP
or
in
potentially
any
other
admission
control.
So
in
the
replica
set
controller
when
and
to
create
that
through
the
API
server,
it
got
blocked
on
this
command
right
here
or
this.
This
call,
because
you
had
Mission
Control,
are
blocked
us
and
said
no
way,
you're,
not
creating
this
pod.
You
don't
have
a
policy
to
validate
against.
A
You
have
to
have
some
policy
in
the
system
itself,
all
right.
So
now
it's
time
to
kind
of
talk
about
how
we
can
get
the
the
actual
policies
themself
created.
So
we
can
bring
our
cluster
back
in
to
a
state
where
pods
can
be
restarted.
We
can
create
new
pods
as
another
example
here,
I'm
gonna
bring
these
notes
back
in
a
sec,
but
let's
go.
Let's
go
back
to
our
original
example
with
nginx
right.
A
Now, as a user, I'm just like the replica set controller here, right? I'm just another, let's say, just another account as far as the API server is concerned. Same idea: it said, hey, I can't create this pod for you, there's no PSP for you to actually use. All right, so now we have to get the actual policies in place to make this work. And as you had seen, if I do get psp in my cluster, I've currently got no pod security policies in the system.
A
So what kind of policies do we actually want to make this work? If you think about the landscape of your apps for a moment, all the workloads that run in the cluster, let's do a kubectl get pods for all namespaces. Okay, so kubectl get pods --all-namespaces. Good, make it a little smaller. All right.
Okay, so the first thing you're probably going to have to do is hop into the Kubernetes docs and learn a bit about what these pod security policies are, in particular what some of the different parameters are that you can turn on and off. Now, albeit when you're first getting into this it's a bit arduous, and I'm going to show you a really cool tool by Sysdig towards the end of this.
It makes this a lot easier, or at least can help get you started, so more on that to come. But for the time being it can be a little bit challenging, to say the least. Inside the documentation they do have some examples, permissive and restricted, let's see if I can find them. Okay, cool, so inside of the docs you'll find some restricted examples and you'll find, sorry, not permissive but privileged examples.
The Kubernetes docs by default usually have you put in a privileged PSP, which is again kind of like those kube-system type things: you're allowed to use host ports, you're allowed to use hostPID, you can run as any user. But if we go down to the more restrictive policy, there are some more constraints. For example, in the list of volumes you'll notice that you're not allowed to mount a hostPath, and you'll notice that your user in the container must run as a non-root user.
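For reference, the privileged example in the docs looks roughly like this; this is a sketch from the PodSecurityPolicy v1beta1 API, and the exact example in the docs may differ slightly:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  # Everything is allowed: privileged containers, all caps,
  # all volume types, and full host namespace access.
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ['*']
  volumes: ['*']
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```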
These examples in the docs are actually a really good start. I'd argue they're probably more permissive than you need them to be, depending on how tight a ship you need to run in regards to some of your kube components, and I'll give you an example of that. If we go back up to the privileged policy for a moment, one of the things you'll notice is that we are allowing all capabilities, we're allowing pods to run as privileged, and so on.
Okay, now I'm gonna show you a slightly modified version of this policy. If we go back and look at policies, I have a policy that I usually run, or at least that I start people off with, that I call the kube-system PSP. It's up for debate whether that's a good name, because as you'll learn, PSPs themselves aren't namespace-scoped, which is something we'll talk about; they're globally scoped and we kind of reference them with namespaces.
But I do call this one kube-system because I like to scope it to kube-system itself, and I'll show you how that works in a moment. But let's look at what the kube-system policy is. Okay, so if we ignore all this stuff at the top and just focus on the policy, what I was able to do here is basically figure out:
What are the things that the kube-system components need? Now, I'll admit, when I first wrote this policy I didn't actually realize Sysdig had the tool to kind of auto-generate these, which is something I'll show you. But nonetheless, I had to go through an audit, look through, and figure out what policy settings I actually need to make this work.
Well, I know that to make it work I need hostPath access so the API server can see its certs, I need config maps, I need the ability to run as any user. There are a couple of different privileges in here that I do need, but unlike the permissive one, most of the kube-system components don't necessarily need the allow-all capability, so you don't need to give it every capability in the world per se, and I don't need to give it privileged: true or anything like that.
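A rough sketch of that kind of kube-system policy, based on what's described here; the actual policy from the episode's repo may differ, and the exact volume list is illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kube-system
spec:
  # No privileged containers and no allow-all capabilities,
  # unlike the privileged example from the docs.
  privileged: false
  allowPrivilegeEscalation: false
  # kube-system components still need a few host-level mounts,
  # e.g. so the API server can see its certs.
  volumes:
  - hostPath
  - configMap
  - secret
  - emptyDir
  # Control-plane components run as a variety of users.
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```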
I can kind of scope down this kube-system policy and apply it to my pods. So let's start off by applying this policy, and we'll get deeper into how everything's linked together. I'm gonna open up another window down here, and we're gonna do a watch on kubectl get psp. Looks good. Alright, so let's apply it: we'll do a kubectl apply, and we are going to apply the policies inside of the kube-system
namespace here. Or sorry, not the kube-system namespace, see, I'm already messing up the naming, it's just called the kube-system PSP. I'm gonna apply this policy, and now it has created the policy. One of the nice things about the get on the policy is that it will also give you kind of a nice view of all the things you potentially have turned on inside of it. All right, so I have flipped this on now.
If we go back to the cluster for a moment, let's do another get pods: kubectl get pods for the namespace kube-system, and we'll set up a watch here. So unfortunately it's not what I was expecting. I was hoping that with the introduction of this policy I would actually see the CoreDNS pod get created in here, because based on everything I know about CoreDNS, the privileges you see here should, I believe, be adequate.
Now, if you think about this for a moment, the PSP that you have here, this policy, is an object like anything else in Kubernetes. In order for something to be able to use this policy, it needs to have the RBAC "use" verb attached to this specific policy. And if we think about it from our diagram earlier (I know it's a very, very ugly, rough diagram, but you get the idea), we know that it was the replica set controller that was trying to go through the API server and create this pod.
So what I effectively need to ensure is that the replica set controller is in fact able to go in and instantiate this pod, based on what it's trying to do. This is where RBAC is going to come in, and this is where these PSPs oftentimes get a little bit tricky. So if we go back to our PSP here, I am going to open up a policy.
This is where it's gonna get probably the most tricky, so bear with me here and ask questions in chat if it gets confusing. Okay, so effectively what we're gonna do here is set up a cluster role: a cluster role that we can attach to from role bindings in a namespace, or cluster role bindings that are cluster-wide, it's up to us, but it's a cluster role that can be reused nonetheless. And we're gonna call this cluster role psp-kube-system.
We are going to provide "use" access to the kube-system pod security policy; that's this PSP right down here. So by giving this "use" access, we are going to give that service account, whatever it might be, the ability to resolve it. Now, I had mentioned to you a little bit about these different service accounts, so what I'm gonna do here is provide a role binding.
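A sketch of that cluster role; the name psp-kube-system and the policy name come from the episode, and on older clusters the apiGroup may be written as extensions instead of policy:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-kube-system
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  # The special "use" verb is what lets a subject resolve a PSP.
  verbs: ['use']
  # Grant use of only this specific policy.
  resourceNames: ['kube-system']
```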
Role bindings are gonna take the cluster role and apply it on a namespace level to everything running in that namespace. So I have a bunch of things inside of system:serviceaccounts that I am going to give access to. These system service accounts, if I kind of back up, let me just show you something in a big screen here.
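The role binding then grants that cluster role within kube-system; roughly like this, with the group subject as discussed in the episode:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-kube-system
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-kube-system
subjects:
# The service accounts group; because this is a RoleBinding,
# the grant only applies within the kube-system namespace.
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```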
So if I do a kubectl get serviceaccounts for the namespace kube-system, you can see here that we've got a bunch of service accounts for all these different controllers. Now, instead of using the group that they're part of, which is system:serviceaccounts, I could go in and give specific service accounts access to this policy. I don't have to do all service accounts inside of kube-system, I could be very hyper-specific. It would kind of look like this.
I'd create a subject, and the name would be something like the replica set controller. This way I could give just the replica set controller access. But in this case I'm gonna be a little lazy, because I want the daemon set controller to be able to create pods, I want the replica set controller to be able to create pods, and so on.
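That hyper-specific variant would swap the group subject for individual service accounts, something like the following; the subject names assume the controller service accounts that kubectl get serviceaccounts showed in kube-system:

```yaml
subjects:
# Grant only the workload controllers that create pods.
- kind: ServiceAccount
  name: replicaset-controller
  namespace: kube-system
- kind: ServiceAccount
  name: daemon-set-controller
  namespace: kube-system
```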
So let's get rid of that. All right, now, there's another subject in here for system:nodes; I'm gonna talk a little bit about that at the end, but I will show you what this system:nodes piece does in regards to mirror pods in a little bit. Alright, so we've got this role binding and this cluster role inside of here, so I'm gonna go ahead and save that, and see if I can get out of these things I changed real quick. All right.
"Did the error in the replica set events change?" Sure thing, yeah, sure thing, I'll show you. So inside of this, if I do a get, actually let's do a describe here. Let's do get replicaset in the namespace kube-system. Okay, so we've got CoreDNS in here, and again, this will be a good reminder for all of us. If we do a describe on this replica set, the error is still just kind of looping; you can see it's happened 18 times over 17 minutes.
It has constantly been trying to create the pod, and it never found a policy that this controller was able to use. So if I apply this RBAC, the RBAC you just saw with the cluster role and, excuse me, the role binding and the cluster role: we'll do an apply of policies, and we will do the kube-system PSP RBAC inside of here. Alright, and I'm gonna go ahead, before I hit enter here, and open up another window.
So let's go ahead and do a watch for kubectl get pods in the namespace kube-system. All right, so it's inside of there, and now we can apply this RBAC, and these RBAC objects have now been created. So if all goes well, in a moment here, once the replica set controller attempts to recreate that CoreDNS pod for me, I should see CoreDNS show back up here.
If all goes well. Now, it's probably on some amount of circuit-breaker-type backoff pattern, but we should effectively see it pop back up before too long. So let's give it a couple minutes and see how it goes. We've got that in place, we've got the PSP set up, looks good. And if we do another watch on the describe, let's see if we can see this here. Okay, so eighteen times over nineteen minutes, so I should eventually see this count increment to 19.
So we'll keep an eye on this and see if it increments to 19 in a moment here. Okay, so while we're waiting for this, thinking about it: now we've got a kube-system pod security policy in place, and we're giving all the service accounts that might want to create pods, these are the daemon set controllers, the replica set controllers, perhaps stateful set controllers, whatever they might be, they're now theoretically allowed to create pods. Hopefully we'll see CoreDNS show up in a moment here too.
Now, if we go in and think about our normal workloads, we probably don't want those standard workloads to be able to do the exact same thing. So in this case we're gonna want to think about doing something like a default policy, if you will. Alright, so I'm gonna check on that describe one more time.
Let's see here, see if we get anything. Okay, so we're still at 18, it hasn't tried again. Let's give it a little bit more time, hopefully it'll increment to 19 before too long. Actually, one way that I could prove it out to you, to be completely honest, just in case the backoff is gonna take a while: let's try killing this other pod, and hopefully it works, or else we'll lose our DNS altogether.
So if we do a kubectl delete pod and we delete the CoreDNS pod in the kube-system namespace, okay, it'll delete that pod, and that triggered the replica set controller, great. So now you can see two CoreDNS pods; it was able to actually go in and instantiate these pods and set them up, so pretty cool. Now these pods are able to run with that PSP. Now, you might wonder: well, Josh, how the heck do I know that it's running with that PSP itself, right?
How do I know it didn't just randomly start? Well, one way that we could validate it is we could select that pod and do a kubectl get pod in the namespace kube-system on the CoreDNS pod, ask for the YAML output of that pod, and pipe it through less. So let's do that. Now, what's cool about this, if you look around, is that inside of the annotations you'll notice that with PSPs enabled, it'll actually tag via an annotation which PSP resolved against the pod itself.
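In the pod's YAML, that annotation looks roughly like this; the kubernetes.io/psp key is set by the PSP admission controller, and the pod name here is just an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coredns-5c98db65d4-abcde   # example generated pod name
  namespace: kube-system
  annotations:
    # Which PodSecurityPolicy validated this pod.
    kubernetes.io/psp: kube-system
```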
So this is kind of a cool thing for us to sanity check with and wonder, you know, what exactly is the PSP that resolved? Was it kube-system, was it another PSP? We know that this is in fact that policy, and we know that that is what was allowing the replica set controller to instantiate that new pod. So overall, pretty cool stuff. Okay, and checking chat: Bogdan, "-d with watch", yeah, good call, I should use that, it'll help out with highlighting that stuff. Christopher from Germany, hello.
Welcome, welcome! Alright, so for those of you who are maybe just getting up to speed, where we're at right now is: we've got pod security policies turned on, we've got the kube-system namespace where we're allowing specific pods to be created, and now we want to go in and just add in kind of a default PSP that we can use and make sure that everything's working correctly.
How do I know this? Well, effectively, check this out: if we take kube-proxy, which is running on our host, and we do a kubectl delete pod kube-proxy for the namespace kube-system, I am going to be able to remove this pod and it will not actually start back up again, because that policy that I applied to kube-system, it's good for CoreDNS, it's good for some of these other pods, but it's actually not good
for the networking components, because they require a little bit more privilege. So, thinking this through a little bit here: if I come back to the policy, I've actually set up a separate policy for my networking stuff. Okay, because through examining my networking pods and things of that nature, I've started to realize, you know, they have some additional rights they need; in particular, they need privileged and they need to be able to allow privilege escalation.
They need to be able to bind to the host network, which most of my stuff in kube-system doesn't actually need. So my preference is to lock down most of the kube-system components and have a special exception for my networking components themselves. So same idea here: just like you learned before, we know we need another policy, and we can validate this if we go into kubectl get daemonset, and we get the daemon sets for, let's say, kube-system. Right: get daemonset, kube-system.
Let's see what we've got here: so we've got kube-proxy. All right, let's see what's going on with kube-proxy; clearly something's up. So this time it's not the replica set controller, but the daemon set controller that is blocked from being able to instantiate it. So when we think about our terribly ugly diagram here, we'll open up another one, and now the situation that we're in is, let's just say the DS controller, and we'll call this one the RS controller for short. Really the same idea though: we have the API server, it's validating.
We know that this first one is good, this one's fine, and we know that this new one is not good, because it is not able to find a policy where it has the privilege escalation and the host networking. So we need to basically do the exact same thing we did before: let's go in and let's apply that policy. So again, I'm gonna dig into that for a moment here: we've got kube-system, let's do the networking PSP, alright, bring this down a little bit.
Okay, and let's do a watch, and we'll add the -d, since actually I never knew that you could do that, that's really cool. So watch -d, and we'll do a kubectl get psp, no namespace needed. Great, so we've got the PSP inside of here. Alright, so let's look at our PSP one more time.
The big change here is we're gonna introduce a new PSP called networking, and its primary differences are that it is going to allow privileged, allow privilege escalation, and host networking for our networking-related components. So in particular, this is going to be applicable for Calico and kube-proxy. Alright, so let's see if we can apply this now: we'll do a kubectl apply for the networking PSP, so policies/networking-psp.yaml. All right, so that's created, and awesome, I just saw our highlighted change shoot up.
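Given the three differences called out, a sketch of what that networking policy could look like; the name matches the episode, but the rest of the spec is illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: networking
spec:
  # CNI plugins and kube-proxy need more than the
  # general kube-system policy allows.
  privileged: true
  allowPrivilegeEscalation: true
  hostNetwork: true
  volumes: ['*']
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```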
Again, we have to keep in mind that adding the PSP is not going to be enough to make the extra kube-proxy come in; it's still being blocked by the daemon set. And if we do a kubectl get daemonset, and Bogdan, this is, I think, kind of going towards your question: if we go to get the daemon set for kube-proxy here, let's go ahead and describe it. All right, let's see if I can describe the daemon set. Oh sorry, the daemon set's name is kube-proxy.
Alright, so we have the same thing going on: failed to create, forbidden. Now, what's cool about it this time is, since we have a policy in place, it actually is gonna tell us kind of the delta that we have going on. And to your point, Bogdan, yeah, it's the daemon set, exactly. So this is key.
It's actually going to be able to say: hey, I looked and saw the policies you have, but you're asking for privileged containers, and that is one of the things where you're asking for too much, and I am gonna block you, because none of the existing policies are gonna allow this particular thing to start up. So we're being blocked because of that privileged request, which is great; that's actually really good. We've limited the scope of our PSPs here.
If we do a get pods one more time, okay, clear that out: we've limited the scope of privileged containers to just calico-node and kube-proxy. But, as you know, to make this work we need proper RBAC in place, or else kube-proxy will never be recreated. So what does that mean? Now, before, if we go back to the general PSP for kube-system: if I go back and I say we'll do policies, it'll be the kube-system PSP, all right?
Let me, sorry, actually bear with me here: kube networking, sorry, the kube-system RBAC is what I meant, nope, and I just apply it? Okay, so let's see if we can do the right command here, how's that sound? Okay, great. All right, so this is kube-system; this is not the networking policy, this is the kube-system policy that's already applied. Remember that with the role binding we gave it, we said: hey, any service account that lives in the namespace kube-system,
you can use this policy, you can instantiate pods. All right, so that's great, but we don't want to do this for networking. For networking, we want to say: hey, I want to make sure calico-node only sees this, and I want to make sure kube-proxy only sees this. And how can we do that? Well, if we go back to our cluster for a moment and we do a kubectl get serviceaccounts and we grep for calico, or sorry.
Let's see here: if we get service accounts, sorry, for the namespace (again, all these namespace scoping things can get you), so kubectl get serviceaccounts for the namespace kube-system, see if we got results there, good deal. Now, if I search for calico, there is a calico-node service account; if I search for kube-proxy, there is a kube-proxy service account. This is a good thing.
What it means is we can give those two service accounts exclusive access to that networking PSP, so that when the pod comes online, while the replica set controller might not otherwise be able to create it, because its service account is bound to the PSP, we're basically going to be able to make an exception for these particular components. So what does that mean from the RBAC standpoint? Let's open this up: if we go to policies, okay, and then we do the, sorry, the networking RBAC.
Alright, we've got this cluster role, which is just like the one you saw before, except it's pointed at the networking PSP; it's going to allow us to use the networking PSP. And then we've got this new role binding, and you'll notice the kind, and I think I might have said "user" earlier; I was mistaken there.
The kind is ServiceAccount in this case, so we are going to allow the calico-node service account and the kube-proxy service account in the kube-system namespace to be able to resolve this pod security policy, and only this pod security policy. Well, actually, technically these service accounts also could theoretically use the kube-system policy, but we know that that policy is not good, because it doesn't give privileged access and things like that. Alright, so let's see if we can test this out now.
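Putting that together, the networking RBAC would look roughly like this; the resource names are illustrative, while the service account names match what kubectl get serviceaccounts showed:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-networking
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['networking']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-networking
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-networking
subjects:
# Only the networking components get the privileged policy.
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
```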
So if we do a kubectl apply and we do the networking here, so we'll do policies/networking-rbac, cool, so we've got the networking RBAC in place. And if I apply that, okay, we've got the PSP networking, unchanged, unchanged, looks good, which might mean I applied it earlier. So let's see if we can figure this out. So all right, let's see here: if we do a kubectl get psp, just want to make sure those are there.
So we've got kube-system and networking, looks good. Now if we do a kubectl get daemonsets for the namespace kube-system, awesome, this is looking much better. So theoretically kube-proxy picked it up and now has two that are available. Let's check the actual events again, just to make sure we've got a good read on this. So if we describe, okay, we'll do a describe on the kube-proxy daemon set inside of kube-system.
So for the pods, if you will, that run in networking and need extra privileged capabilities, we've given another set of permissive privileges, while the just-general kube-system pods don't need all that access. Maybe there's, I don't know, maybe something like a metrics server or something, I guess something that's running as a pod in your system, more or less, is going to be running in here, and it can resolve the kube-system policy.
kubectl get pod, nope, that one didn't, see here, this one had it, interesting. So for some reason my CoreDNS pods don't have the PSP in the YAML, which is super weird. So: pod, CoreDNS, namespace kube-system, and then let's do a -o yaml on that, okay, and look for psp in the annotations.
It is there; I wonder what I was doing with my search, sorry. Anyways, so CoreDNS was actually not resolving networking, it was resolving the kube-system PSP. All right, cool.
All right, sorry, I've been looking away from chat here. Yes, so around policy ordering, I think Duffy had kind of hit on that there: it's alphabetical, first match wins. So that's a very important point.
I mean, it is important, because you want to make sure you have the right policies falling in place, but at the end of the day, if a policy satisfies it, the pod's allowed to be created. And that's why checking for this annotation is helpful, because it will ensure that you have the correct policy in place, the one you were expecting it to resolve against. All right.
Okay, so I see some questions about a base PSP, and that is kind of the last thing we haven't gotten to here. So let's talk a little bit about what a base PSP would look like. Backing up a little bit: we have a couple namespaces. We have our kube-system namespace, but we have a bunch of other namespaces that our hypothetical company is using: lions, mountains, org-1, org-2, and so on and so forth.
And so far, none of these other namespaces are going to be able to have the privileges they need to actually create pods. Okay, so what we need is some kind of default policy that's going to provide us with the ability to effectively instantiate pods, just normal workloads, with some sensible defaults. So in this example, and this will all be in the TGIK GitHub for this episode,
in this example I have a policy I'm going to show you, cd policies, alright, I have a policy called default-psp. Now, default-psp is quite a bit different than the other ones for the most part, and you can ignore these annotations, I was just playing with them earlier. But for the most part, in the spec it has a bunch of what I would consider to be sensible defaults about what I want standard workloads to be able to use. Okay, so first off, I want all capabilities to be dropped by default.
I don't want any additional Linux caps to be available to me. I've got a set of volumes in here that I'm going to allow the workloads to use: I'm gonna allow them to get secrets, config maps (obviously, secrets they have RBAC access to), persistent volume claims, but I'm not gonna give them host paths and different things like that.
Well, one really strict thing is I'm no longer gonna let them run as root, so people who come onto my cluster are gonna have to have a non-root user running in their containers, hoping to prevent random privilege escalation things, or similar vulnerabilities that might pop up that root users might be able to expose and access. Same goes for groups.
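A sketch of a default policy along those lines; the field values are illustrative of what's described, not the exact file from the episode's repo:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  # Drop every Linux capability by default.
  requiredDropCapabilities: ['ALL']
  # Sensible volume types only; no hostPath.
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  - downwardAPI
  - projected
  hostNetwork: false
  hostIPC: false
  hostPID: false
  # Containers must run as a non-root user.
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  # Same restriction for groups: no group 0.
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
```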
Same goes for different parameters inside of here. So the default one, and you might want to tune this to your own organizational needs, is gonna be a policy that can always be resolved: always, no matter what, this policy can be resolved. All right, so I'll show you kind of how we achieve that, and we'll start off by setting up another watch down here. So let's do a watch, and again, we've gotta get used to that
pretty cool trick: watch -d kubectl get psp. All right, so we've got kube-system and we've got networking. Okay, so I'm going to apply the default PSP YAML file here, and hopefully do the apply correctly, there we go. All right, so now this has changed and we have got the default policy that is now in place, and this is again what we're going to use. But again, the thing to keep in mind:
Now, here's how I made this the default. Instead of using a role binding, which is going to give access to a PSP on a namespace level for service accounts and things like that, I'm going to use a cluster role binding. So with the cluster role binding, I'm gonna call this psp-default, and effectively what I'm gonna say here is: anything that is system:authenticated can see this PSP. Now, you still have to have the ability to create pods.
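That cluster-wide binding would look roughly like this; the names are illustrative, and it assumes a ClusterRole granting "use" on the default-psp policy exists:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-default
subjects:
# Every authenticated identity, in every namespace,
# can resolve the default policy.
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```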
You still have to have the ability to create the deployment, and so on and so forth. But if you are authenticated and authorized, again remembering that you've gotten past that first step in our chain at the API server, the authn and the authz, now we are gonna let you see this default PSP, always, whether you're in kube-system, the default namespace, or any other namespace. Alright, so let's go ahead and apply that inside of here: we'll do a kubectl apply, okay, apply, and let's see, so this is default.
I could also still create this pod and it could theoretically resolve against default, or at least attempt to resolve against default, as you'll see. Now I'm going to change this just a little bit to something called app-deploy; it's really no different, except I've wrapped your container, your pod at the bottom here, in a deployment. Okay, so we are gonna deploy three replicas of our nginx pod, and that is hopefully going against the default PSP. Okay, all right!
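The wrapped deployment is something like this minimal sketch; the metadata names and labels are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: org-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        # The stock nginx image runs as root by default.
        image: nginx
```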
Go into your replica set and figure out why things are in fact breaking. So let's figure out why this is breaking, because again, we should have been able to resolve against the default PSP. So I'll go back, and again, based on our diagram, I did a deployment, so we know it's going through this chain of command: there's a replica set controller that is trying to instantiate the pod, and it is clearly not finding a PSP. All right.
It should tell me why it failed, so let's see if we can figure that out real quick. We'll do a kubectl describe pod, right, again in the namespace org-1. All right, this is what I'm looking for, so interesting, okay. So it's on the pod, which I actually think is a bit of a better user experience from at least what I was recalling: on the pod it's saying, hey, trying to create this, but your container has runAsNonRoot, or sorry,
your PSP has runAsNonRoot, and the image will run as root. So our PSPs are actually doing what we want them to here. If we go back to our deployment, the app-deploy, this nginx image is running as root, so our PSP is doing its job in actually blocking us here, which is pretty cool. So what do we do? Well, suddenly we realize: okay, are we willing to let nginx run as a root user?
Is that okay with us? Let's assume for a moment that it's not. So we go back to our app teams and we say: hey, you have to find a container image that's not running as root; you need to select this. So if you go into the Kubernetes docs, there is an area that you can go into called security context.
Okay, and if you look up security context, you'll actually have ways to specify who you're going to run this particular pod as, which can be helpful. So if there are multiple choices, perhaps it's not just a root user, it's just selecting root by default, and you want to run as some other user. Now, for the nginx image, I actually would prefer to use an image that runs as non-root by default. So by doing some googling, and I won't waste your time googling Docker Hub right now, but I found out that Bitnami,
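If an image does support running as another user, the pod spec can request it via securityContext; a minimal sketch, where the UID 1001 is an assumption, pick whatever non-root user the image actually supports:

```yaml
spec:
  securityContext:
    # Run the whole pod as a non-root UID so the
    # MustRunAsNonRoot rule in the PSP is satisfied.
    runAsUser: 1001
    runAsNonRoot: true
  containers:
  - name: nginx
    image: nginx
```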
Bitnami, which is now part of VMware actually, but it's just connecting now, but nonetheless, Bitnami has images that are blessed and a little bit more security-minded, I hope that's okay to say. But basically, the Bitnami image I read about actually runs as non-root by default. So if I change my image to bitnami/nginx rather than just running nginx, as long as the docs for the bitnami/nginx container are correct, it should be able to run this as a non-root user.
So we will do a kubectl apply and do the app-deploy.yaml, all right, and apply that. And now you can see some of these things in watch are starting to change, and they're starting to churn against this default PSP, and boom, we have officially got running pods. Pretty cool stuff. And again, trying to keep in mind what we learned:
We
should
now
be
able
to
do
a
cute
cuddle
get
pod
this
app
inside
of
the
namespace
inside
of
the
namespace,
or
one
alright
and
inur'd
one
we
can
ask
for
Oh
and
then
I
will
AG
for
the
PSP,
and
you
can
see
that
it
in
fact
resolved
the
default
PSP.
Alright.
So
again,
now
we've
got
this
blank
default
PSP.
It
runs
everywhere,
it'll
block
anything
that
that
we
need
it
to
block
and
so
on,
and
and
that's
it
really
really
cool
stuff.
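One way to make that check concrete: the PSP admission controller records the policy it resolved in an annotation on the admitted pod. A sketch of what that excerpt looks like; the pod and policy names here are examples:

```yaml
# Excerpt of `kubectl get pod <name> -n org-1 -o yaml` for an admitted pod.
apiVersion: v1
kind: Pod
metadata:
  name: app-7d4b9c6f5d-x2x2x        # example generated pod name
  namespace: org-1
  annotations:
    kubernetes.io/psp: default-psp  # the PodSecurityPolicy that admitted this pod
```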
So just kind of to summarize here, there's a couple of tips and tricks and a couple more things I want to leave you all with, hopefully no more than 10 or 15 more minutes of content here. So if we look at this, we've got our PSPs, right? We have a kube-system PSP, right. We have a networking PSP, and the networking PSP is very, very high-privilege and so on. And then we've got a default PSP.
That default PSP is going to apply to anything in the cluster that needs to resolve a policy, right. The kube-system PSP is bound to system service accounts, okay, system service accounts inside of the kube-system, kube-system namespace, alright. So let's open that up a little bit. Alright, so in kube-system, the networking policy is bound to calico-node and kube-proxy in kube-system.
So how do you provide exceptions, right? You're guaranteed to hit this case where an app team is going to call you up and say: listen, I can't change this thing. I need my workload to have an exception; I can't just be beholden to the default PSP itself, right. So there's a couple of ways that you can approach this. You might have situations where you want to provide, let's call them exception PSPs, right, exception PSPs, and you might want to do them at a workload level, or you might want to do them at a namespace level. Okay, and this is where PSPs also get a little bit tricky: you have these two options. You can have a namespace called system-components, for example.
It's going to effectively want namespace-level access, kind of like kube-system. It's not too different from kube-system; that's effectively what we did. We set up a RoleBinding for all the service accounts, and we said: hey, you should be able to resolve this policy if you're a service account inside of kube-system.
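A namespace-level grant like the kube-system one described here is typically expressed as a RoleBinding to the built-in service-account group. A sketch; the binding and ClusterRole names are assumed examples:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp      # example name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-kube-system      # ClusterRole granting `use` on the kube-system PSP (assumed name)
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system  # every service account in the kube-system namespace
```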
But effectively, we may also want to provide these workload-level exceptions, which would basically mean the following: say we go back here, right, and we go back to our app, and we want to run nginx, nginx:latest, again, and our app team says there's no way, we're not going to change this thing. Okay, so let's go ahead and get it into that state real quick, so we'll apply this deployment.
Okay, it'll start recreating and then once again, once again fail. Now, probably what we need to tell our app team to do, we need to tell our app team that we want them to set up a service account dedicated to the nginx controller, okay. But for the time being, we're just going to give an extra privileged capability to the nginx, to the nginx workload here, alright. So if I go back into my policies, okay, and then we go to the default PSP for a moment here.
So here's what I'm actually going to do. Let's back out, we'll exit out of here, alright, and we will go into the TGIK episode 078 directory. (Cool, cool, yes, and I will definitely show the Sysdig thing; the Sysdig thing is our last little item, by the way, so don't worry, I promise I haven't forgotten it.) So inside of 078 I'm going to go to policies, alright. Now, I'm going to start off by just copying the default policy.
Okay, the default PSP, and I'm going to call this, it's a terrible name, just for convenience, call it the extra PSP, alright. So, extra PSP, and inside of the extra PSP we are going to change one thing about it: we are going to allow users to run as any, alright, so run as any user. We're going to make an exception for this team, alright, cool. Now, if we go back, I should name it correctly too, so extra PSP; we're going to call this guy the, the extra PSP.
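Assembled, the one-field change being made here looks roughly like this; everything other than the `runAsUser` rule is carried over from the assumed shape of the default policy:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: extra-psp        # the hastily chosen name from the session
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny       # the exception: was MustRunAsNonRoot in the default PSP
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:               # illustrative safe-volume list
  - configMap
  - secret
  - emptyDir
```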
Okay, good deal. Now let's apply the extra PSP here. So if we do a kubectl apply of extra PSP, alright, extra PSP was created, okay. We do a kubectl get psp, and we've now got default, extra, kube-system, and networking. Now again, all of this happens, all the magic here happens, in RBAC. So if we copy over, copy over a different policy, actually I'll copy the kube-system PSP... actually, you know, let's copy over the, the default RBAC here. So, default RBAC, and we're going to call this extra.
extra-rbac.yaml, okay, so extra RBAC, awesome. Alright, so let's change a couple of things about this. So I'm going to call it extra (again, these namings are terrible; it's just for the sake of time), extra, alright. And then what I'm going to provide here is, I'm going to provide access for the service account, and that service account is going to be the default service account, alright, okay. And this is actually going to be a role binding here, so we're going to do a RoleBinding.
Okay, the namespace of this RoleBinding is going to be the, the org-1 namespace. Okay, that's where we're at, so org-1; we'll take the namespace field out of here; looks good. The cluster role that we're going to reference is going to be, I think it was called extra PSP, so hopefully I've got that name right, and we'll call that extra PSP, alright. So the roleRef, oh sorry, the roleRef, which would be this, so psp-default-extra. Let's see if we can find this here, so psp-default-extra. Alright.
So we'll do a kubectl get psp to make sure I've got that right: extra-psp. Okay, so that's good for the resource name. extra-psp is the cluster role... cluster... sorry, extra-psp is the pod security policy; the cluster role is psp-default-extra, which we're referencing down here. All of that looks pretty darn good, okay, and yeah, the org-1 namespace at a conceptual level looks correct. I'm always worried about trying to write RBAC really fast like this, but let's, let's see what happens; we'll troubleshoot it if need be. So, kubectl apply extra-rbac, right.
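Put together, the RBAC being written here would look something like the sketch below: a ClusterRole allowing `use` of the extra PSP, plus a RoleBinding tying the default service account in org-1 to it. The exact file contents aren't on screen, so treat the field values as assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-default-extra
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["extra-psp"]   # the exception policy created above
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: extra                    # the hastily chosen name from the session
  namespace: org-1               # the app team's namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-default-extra
subjects:
- kind: ServiceAccount
  name: default
  namespace: org-1
```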
Delete... oh sorry, apply this time, is what I meant. Pending, alright. Alright, so we've still got a container create error here. Let's see if we can find out where in our PSPs here we've gone wrong. So: org-1 namespace; the service account name is the default service account; it all seems okay; psp-default-extra looks good.
Let's see here, see if we can find this real quick. So, get serviceaccount inside of default... so the service account is called default; that seems to be what it's using, so service account default.
Okay, and I wonder, actually, let's see here, I wonder if I explicitly have to call out the service account for any reason. org-1, service account default, okay. Let's see here, so we check this out: Kubernetes service account. I'll try to get a quick YAML example if I can here, so: configuring service accounts for pods.
Let's see, serviceAccountName, there it is, right inside the spec. Okay, let's see if we can, if we can make this change really quickly. Yeah, cool. So inside of the spec for the pod, I'm just going to go right here: serviceAccountName is default.
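Spelled out, the field being added sits at the pod-spec level of the deployment template, like so (the names are the session's running examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: org-1
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: default   # explicitly pin the SA that the RoleBinding targets
      containers:
      - name: nginx
        image: nginx:latest
```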
Yeah, actually, good call there, Rory, I think you might be right. I think we are blocking the group. Let's, let's check that right now; I think you are correct, but let's double-check. So what I should have done here, which is effectively what I told you all to do, is to have gone ahead and checked the, the actual error message. So let's, let's try that real quick, so: org-1, alright. So if we look here, and I make it a little bit bigger, so okay, there.
So if we do the kubectl get pods, alright, and we do the... oh sorry, I'm going to do a describe here, so kubectl describe the pod... let's see whether you're correct on this one. Oops, sorry, getting my commands mixed up now. So, describe pod inside of org-1: "Error: container has runAsNonRoot and image will run as root". Now, it seems to say the same thing, but I wonder if that's still, that's still the case.
So here's actually, here's something we can do to just rule out it potentially being something silly with the, the PSP I made. Now, not that I'd recommend that you do this, but let's go ahead and give it the roleRef to the actual kube-system policy, which we know has far more heightened privileges, right. So we'll do psp-kube-system and see if that happens to fix it, and that might validate what Rory said in there too. So I'm just going to do a quick split and make sure my, my kube-system...
Sorry, yeah. What was it called here? extra-rbac, right. "Authorization group cannot change roleRef", okay, so RBAC is getting to me here. So we'll delete that, we'll apply it again. Okay: created, created. Alright, let's see if we have any different results here; if not, I might need to troubleshoot offline and get back to you. So kubectl get pods inside of the namespace org-1, okay. So let's go ahead and kubectl delete the app.
So this is the app deploy, okay, and I'll set up our watch down here again. Cool, so: org-1. Alright, so kubectl apply the org-1 deploy, okay. Let's see if we have any better luck here, if it still blows up on us... hey, here we go, okay! So I would be willing to bet, Rory, that you were probably correct. The message wasn't much better for me, but to his point, I believe what was going on here is the RBAC for our...
what was it called, the extra PSP, right. So I am allowing it to run as any user, but I'd be willing to bet you that this guy also needs to allow the root group, and in fact we know for a fact it has to, because without the root group you're not going to be running as root. So that's almost guaranteed exactly what blew me up. So I just kind of tied it into the kube-system policy for testing purposes, and sure enough,
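If that theory is right, the fix to the exception policy would be along these lines. A sketch under assumptions: the `runAsGroup` rule was still a beta-era PSP field around the time of this episode, and the rest of the spec mirrors the assumed extra PSP from earlier:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: extra-psp
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny     # already relaxed in the session
  runAsGroup:
    rule: RunAsAny     # the suspected missing piece: allow the root group too
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
```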
we were able to create it without any issue at all. Alright, alright, so we've gone fairly over on time. I appreciate you all sticking around, and all the awesome questions and comments in the chat. I want to show PSP Advisor off real quick. So you saw me showing off tons and tons and tons of these policies and playing around with them, and you just saw how easy they are to screw up.
In fact, it's almost like I'm trying to sell you PSP Advisor right now, right? Literally, I messed it up, and if I had used PSP Advisor, I may have been able to prevent that. So basically, PSP Advisor is an open source project that's in the Sysdig Labs repository. It looks like Duffy put it in chat, so those of you who are still with us, you can check out the chat down there. But basically, you can clone this down.
It's a really, really simple binary, and what's great about it is not only will it give you an idea of some PSPs you can start off with, but it's actually, in my opinion, an awesome auditing tool: the ability to ask, what's running in my namespace? So after this session, some of you should just go into your clusters and try running this and see what results you get. So basically, I have a binary called kube-psp-advisor on my system, alright, and kube-psp-advisor,
if we look into its help real quick, is going to provide me with the ability to give a namespace, right. So if I do this here, namespace... and I can do it for the whole cluster, but I think namespace scopes it a little bit better. Let's start off by doing it for org-1; sorry, I'm used to the Kubernetes conventions here, so: namespace org-1, alright. So the pod security policy that it generates for org-1 is basically telling me: hey, your file system group needs to allow running as any.
So I could have taken PSP Advisor's advice. PSP Advisor is not necessarily saying you should use this as your policy for the namespace, but it is telling me: based on your workloads, you need this to be set or it won't work; you also need this to be set or it won't work. Now, here's a really cool example: let's look at kube-system. So if we take the namespace kube-system, inside of here it is going to analyze all my workloads once again, and notice how granular this gets.
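To give a flavor of the result, here is an illustrative sketch of the shape of a policy kube-psp-advisor generates for a namespace like kube-system; it is not a capture from the session, and the name and path entries are examples:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-kube-system-generated   # illustrative name
spec:
  privileged: true        # because some observed workloads require it
  hostNetwork: true       # e.g. kube-proxy, calico-node
  allowedHostPaths:       # one entry per path the workloads actually mount
  - pathPrefix: /usr/share/ca-certificates
    readOnly: true
  - pathPrefix: /lib/modules
    readOnly: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - hostPath
  - secret
  - configMap
```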
It allows host paths, which is what mine does, but it's actually going to take every single one of the paths that my workloads are using, things like CA certificates, and it's going to make sure that that workload has read-only access to that particular path. I can also see that I have some workloads running host network and privileged. So that is a really, really cool thing. Inside of here, I would say, you know, the only thing I'd be careful with in PSP Advisor, and maybe there's a way around this and the Sysdig folks can help out,
is that if I took this at face value, it's going to give certain workloads more privileges than they need, because it's analyzing what's being used at a namespace level. So what I'm getting at here is, if we go back to kube-system, so kubectl get pods, namespace kube-system: like I had mentioned, PSP Advisor is saying, hey, you have workloads that need host network and privileged, but on a technical level, only kube-proxy and calico-node need that, right. So, nonetheless, that's not to knock kube-psp-advisor.
This is an amazing, amazing place to start, an amazing place to audit how things work, and it should be a really good starting point. So all you would need is to run kube-psp-advisor, get some outputs, maybe massage them a little bit, and then all you're doing is setting up RBAC that allows certain service accounts to utilize that particular pod security policy. So if it were me, I would honestly take this output, massage it a bit, and use it instead of the kube-system policy
that I showed you today, because I think this is a far better, more descriptive policy. Alright, so that's PSPs. Frankly, it's a lot of information, and if you're still a little bit confused by it, you're probably doing better than I was, because it took me a really long time to wrap my head around some of these concepts. I really, really appreciate you all sticking around and being encouraging. Hopefully this wasn't too much of a brain dump on you, but yeah, again, this was our PSP session.
You can find me on Twitter at @joshrosso. I would love feedback, both positive and negative, so you know, throw it at me, and if you have additional questions, hit up, hit up Duffy, hit up myself. We love this stuff, we love talking about it and making people more aware of it. So please shoot us a message on Twitter and we'll do our best to answer for you. Alright, alright, everyone, thanks again. It's, it's been a pleasure, and until next time, we'll see you soon. Bye!