From YouTube: TGI Kubernetes 171: Pod Security Problems
Description
Join Evan Anderson and Pushkar Joglekar as they talk about Pod Security -- the history of Pod Security Policy and the problems it caused, and the future of PodSecurity admission controls.
A
Hello everyone, I'm Evan, and here's Pushkar, and we're going to be talking today about Pod Security Policy. Do you want to introduce yourself?
A
Okay, so usually the first part of this is putting together the show notes. So let's take a look at those.
A
Let's see, here we go. And for those of you who don't recall, the show notes are at tgik.io/notes; you can see it in the upper corner. Feel free to add your own notes to that: exciting stuff that happened in the last week.
A
Well, in the last two weeks, since TGIK was last live, we had our first hybrid KubeCon. So for those of you who have been living completely under a rock for the last year or so: we didn't have KubeCon last year because of this little pandemic thing, and now we're slowly inching our way back to normal. So LA was a mix of people in person, with social distancing and masks and stuff.
A
Like that, and online sessions. I was there in person, and I'd say about half the events were pre-recorded, and the other half were, you know, live, with questions from both online audiences and in-person audiences.
A
But yeah. I don't know if anyone wants to link to or mention any talks that they found particularly good.
B
We had San Diego, and then I was kind of hoping to go back again, and it took forever, almost two years since we had been in San Diego. And it turns out we were a few miles away from San Diego, in LA this time. So it definitely was very weird, different, and interesting to be back with people, and I think hybrid sort of worked for both of us; we met each other in person for the first time at KubeCon as well.
A
So I actually attended a couple of day-zero events, and one of the talks that was pretty interesting to me was at the cloud supply chain security conference: someone from SolarWinds gave a talk about how they were rebuilding all of their build and push pipelines to be much more paranoid. It was really interesting, if you get a chance to catch it on YouTube, or if you can go back in and watch it.
A
We've got people who are up pretty late, like Yuka in Finland, and Saudi Arabia, right? Lots of folks. We seem to get the late crowd here. That's fine, yeah! I didn't attend the eBPF day, but it seemed like that was pretty popular.
B
Yeah, yeah, definitely. And we had another Cloud Native SecurityCon. I think the whole Cloud Native SecurityCon, if I'm not wrong, started either in San Diego or after that. So this was like a coming back of SecurityCon in person, and we had a clear focus on security this time, I felt, even in the main event, with some keynotes and many, many talks talking about security. One of my favorite talks (I don't know if others who have joined in also watched it) was the one on burnout by Julia Simon.
B
Attendees in person as well as virtual, from what I could remember. Definitely a good talk, considering all of the stuff we have all been through. I think there are some good personal lessons to learn from that, and hopefully somebody there felt like, okay, they are not alone in this.

B
Yeah, that was Julia Simon.
A
There was a fun talk; they ended up like two in a row one of those days, but it was "Seven of Nine Kubernetes Security Secrets", and, oh, was it Brad? Gosh.
A
Thank you. And they had a lot of fun ways to get stuff running in a Kubernetes cluster that maybe wasn't what you might expect.
A
So, for those of you not in the know: Kubernetes started at Google, which already had a cluster operating system called Borg, and so the internal project was called Project Seven, referencing Seven of Nine, who was a particularly friendly converted Borg from, I think, Voyager, from the Voyager series.
A
So
this
was
a
friendly
borg
that
would
help
people
move
forward
towards
this
world
of
cluster
operating
systems,
because
at
the
time
it
was
really
clear
how
to
run
containers
on
a
single
instance,
but
it
wasn't
totally
clear
how
to
take
just
a
pile
of
computers
and
run
a
pile
of
containers
on
them
without
worrying
about
this
container
map.
Specifically
to
this,
you
know
to
this
computer.
B
I see some folks: Emma from Berlin and Agav from Pakistan. Great! It's kind of late for you in Pakistan; thank you for joining.
A
Yeah, I had fun doing my talk there. I did a talk and rebuilt the demo that day, but it was fun. I got to play with Cloud Native Buildpacks, which is a project that I've sort of poked at from a distance, but now I've gotten into the guts of it, I guess. I don't know.
A
As for any core Kubernetes news this week: I haven't seen any big announcements or anything like that, so everyone's still digesting 1.22, and the march on 1.23 hasn't quite started too much yet, I think.
A
Yeah, and I work on a little project called Knative, and so we've all been excited because we're going to number our next release 1.0. Lots of the individual components have been GA, but now we feel like our documentation and so forth are there too. Oh yeah, definitely throw that in about Cluster API; it went to v1 as well.
A
Knative has a bunch of v1 APIs, but the top-level version number was 0.26, which is both a big and a small number, and so with the next one we're going to be starting over the numbering scheme.
A
And with that roundup, we're probably going to dive a little bit into Pod Security Policy. So, for those of you not in the know (oh gosh, I'm going to close up Twitter): Pod Security Policy is deprecated.
A
So
this
is
all
about
a
feature
that
you
don't
want
to
get
on
board
with
because
it's
going
away,
but
we're
also
going
to
talk
about
why
it's
going
away
and
the
problems
it's
trying
to
solve
and
how
we're
doing
it
better
next
time,
do
you
want
to
talk
a
little
bit
more
detail
about
the
history
and
so
forth?.
B
Yeah, it's so interesting. I think this is one of the most popular features from a security perspective in Kubernetes. The historical context behind this is that Red Hat OpenShift has something called Security Context Constraints, which is SCC, and that was created before PSPs came into the picture upstream. That was a big influence, something that led to people thinking, okay, this might make sense to have upstream as well, and that's when Pod Security Policies came into the picture.
B
In
the
beginning,
there
were
very
few
fields
that
were
supported
and
then
slowly
slowly
features
kept
getting
added,
but
at
the
same
time
we
also
saw
some
external
admission.
Control
is
being
created,
like
I'm,
seeing
comments
about
oppa
and
so
many
others,
kevorno
and
few
others
that
people
are
fans
of
started
coming
in
and
they
were
able
to
actually
do
the
right
thing
in
terms
of
allowing
people
the
flexibility
to
manage
policy
outside
of
the
kubernetes
domain
and
be
very
specific
in
terms
of
what
you
want
to
allow
and
disallow.
B
I think the biggest thing, when Pod Security Policies were created, was: I want my security team to have one place to define what is allowed in my cluster. And then it became deprecated for a couple of reasons. First, you cannot have something in beta forever; that was a policy that was created, which led to a really big discussion of, okay, this has been in beta for a while. And secondly, more people started using this along with all the other external controllers.
A
So pod security is really about making sure that, you know, there's a bunch of interesting fields on the pod spec where, if you set them, you can assume basically full control of the kubelet: stuff like privileged, stuff like hostPID and hostNetwork and so forth, and volume types. If you can mount a hostPath volume that gives you the kubelet's certificate, for example, then you can act as the kubelet to the API server, and then go from there.
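To make that concrete, here is a hypothetical pod manifest sketching the security-sensitive fields just mentioned. The field names are real pod-spec fields; the pod itself, its name, and the hostPath location (standard on kubeadm-style nodes, but distro-dependent) are illustrative:

```yaml
# Illustrative only: a pod requesting several of the capabilities
# that Pod Security Policy was designed to restrict.
apiVersion: v1
kind: Pod
metadata:
  name: danger-pod            # hypothetical name
spec:
  hostPID: true               # share the node's process namespace
  hostNetwork: true           # share the node's network namespace
  containers:
  - name: shell
    image: busybox
    securityContext:
      privileged: true        # near-full access to the node
    volumeMounts:
    - name: kubelet-creds
      mountPath: /host-kubelet
  volumes:
  - name: kubelet-creds
    hostPath:
      path: /var/lib/kubelet/pki   # where the kubelet's certificates often live
```

Any one of these settings alone can be enough to escalate from "can create pods" to "controls the node", which is why an admission-time gate is needed at all.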
A
Some of the overrides are just different, and some of the Docker defaults aren't great. For example, Docker defaults to your containers running as root unless you declare that you run as some other user, and then Kubernetes overrides that with runAsUser, which also defaults to zero.
A
So,
but
if
it's
unset
rather
than
zero,
it
will
use
whatever
the
docker
container
says
to
run
as
so
that
might
be
safe
and
that
might
not.
But
you
can't
tell
until
you
actually
try
to
exec
that
pod.
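One way to remove that ambiguity is to make the effective user explicit in the pod's securityContext. A minimal sketch (the pod name and image are stand-ins; the fields are the real ones):

```yaml
# Illustrative snippet: pinning the effective user instead of
# inheriting whatever USER the image declares.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # stand-in image
    securityContext:
      runAsNonRoot: true   # the kubelet refuses to start the container as UID 0
      runAsUser: 1000      # pin a concrete non-root UID
```

runAsNonRoot makes the uncertainty described above fail closed: if neither the manifest nor the image metadata proves a non-root UID, the container does not start.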
B
Right, and I think we're getting a question from Carlos related to that, like: what usability problem are we talking about? So you're right, Carlos, and this is, I think, a very useful and important point: regardless of PSP or Pod Security admission, the securityContext piece is not changing at all.
B
Is
this
going
to
allow
me
to
essentially
fail
things
or,
or
would
I
know
well
before,
failing
so
that
I
can
have
some
confidence
that
this
policy
is
actually
going
to
work
as
expected,
so
that
drier
and
feature
is
something
that
came
up
in
the
newer
replacement,
which
I
hope
we
get
to
play
around
with.
A
We will, we will get to play around with that a little later. I feel like this is another big problem with pod security policies: this is a feature which is off by default, and when you turn it on, it slams the door closed until you set up security policies that allow your existing pods to run, right? So I've got a cluster; you know, I've set up two clusters, and you know, we can...
A
We can try turning on the PodSecurityPolicy admission controller and then see about running a new pod, and you know, we'll see that it doesn't work. But then I can also go and kill some existing pods, and we'll see what happens there too.
A
Yeah, Eric's pointing out as well that there are a lot of good bits in the blog post at the top about pod security that describes some of the problems that led to it going away, and there's a great talk from San Diego in 2019 that describes some of the problems there too. Yeah.
B
There are also two great talks that I hope will be up by this time next week, from this year's KubeCon. One was from Tabby and Tim Allclair, where they sort of told the history about what some of the blog is covering, and then there is another one from SIG Auth in the maintainer track, which went deep into how this actually works and what you can expect, with a lot of good, detailed demos in it.
A
About sharing this, okay. Oh, that's fun! So here I've got a Kubernetes cluster running version 1.22, so we've got the different APIs available. And, let's see, we've got some pods, and this is just on a single node. So let's see, let's just SSH in.
A
For a desktop system, Windows is pretty nice. And you'll note that I'm running Linux here, mostly because, using some of the Kubernetes stuff and some of the scripts and so forth, I often find that when you copy things, people use backslash line breaks, which is a bashism, it turns out. PowerShell uses a different character to escape newlines, because backslash is the Windows directory separator. But yeah.
A
So I like to develop on Windows, in part because it turns out that like half the world or more has a Windows machine as their desktop machine to work with, so I like to remember, you know, what that feels like. And it's also a little bit more powerful than my MacBook, so I like to use it for streaming.
B
Yeah, that's so true. When I started working and writing code, I couldn't afford a MacBook; it was just completely out of the picture. So you just end up using Windows, and then run VirtualBox on top, and then run your Unix or Ubuntu or Fedora, whatever you like, in a VM. That turned out pretty well, and I got to learn all the basics through Windows. So yes, I think a MacBook, when you can afford it, is great, but Windows is not too bad either.
A
I'm just going to show this off because it amazes me every single time: I can run VS Code and it will pop up a native Windows 10 window, which I guess you can't see here. But if you don't have VS Code Server installed, it will download VS Code Server into Windows Subsystem for Linux and then bounce out to Windows using the server mode of VS Code, which I find kind of amazing and mind-boggling.
A
Yeah. So I'm looking here: we've got one admission plugin, which is NodeRestriction, so we're going to need to add the PodSecurityPolicy admission controller. So we're going to go back over here and...
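On a kubeadm-style control plane, turning the controller on means appending to the admission-plugin flag in the API server's static pod manifest. A sketch of the relevant fragment (the existing NodeRestriction plugin matches what the stream shows; the rest of the manifest is elided):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout)
spec:
  containers:
  - command:
    - kube-apiserver
    # append PodSecurityPolicy to the enabled admission plugins
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
    # ...remaining flags unchanged...
```

The kubelet restarts the static pod automatically when this file changes, which is exactly why doing this before any policies exist locks new pods out of the cluster.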
A
So that's exciting. Where am I getting this answer from?

B
What's our context on kubectl?
A
Give me just a moment, but I'm pretty sure this is the same cluster, because if we look at the other cluster...
A
But it's not showing up in this list. I have a guess as to why; I'm guessing that the kubelet is actually logging complaints that it's not allowed to run the API server.
B
From the viewers: looks like Walid is saying it's because they are mirror pods, and the kubelet, which creates them, is not allowed by PSP to create pods.
A
Yep: creating a mirror pod for kube-apiserver is forbidden, because the pod security policy says we're not allowed to add this pod, right?
A
The flag turned on the PodSecurityPolicy admission controller, just the controller, without...
A
No pod security policies; that explains it. Yeah, so this is an awesome failure mode, I will point out, where your static pods start to disappear from your system if you modify them, because the kubelet tries to mirror them back and is told: no, you're not allowed to do that.
A
Enabling pod security policies: yeah, it's recommended that policies are added and authorized before enabling the admission controller. If you don't read that sentence, you're going to be sad. And then, in order to use it, either the requesting user or the target pod's service account needs to be authorized to use the policy.
B
And now, in Pod Security admission, you can either apply the pod security level to the entire cluster, with an exemption list, or you can apply it to an entire namespace. So now you can't have two different policies within your namespace, because in the newer feature the policy is applied to all the pods in the namespace.
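Both scopes can be sketched roughly like this. The cluster-wide default goes in an AdmissionConfiguration file passed to the API server; note the PodSecurity config API group was v1alpha1 around 1.22 and graduated in later releases, and the namespace name below is made up:

```yaml
# Cluster-wide default for the built-in PodSecurity controller,
# passed via the API server's --admission-control-config-file flag.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1alpha1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline
      enforce-version: latest
    exemptions:
      namespaces: [kube-system]   # the exemption list mentioned above
```

Per-namespace scope, by contrast, is just a label on the Namespace object:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
```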
A
Just a minute; I saw that Hai's asking a question: would this be the same behavior for OPA, if we enable OPA on the cluster but with no OPA pod policy? This depends on how OPA is designed and structured.
A
So
when
you're
designing
a
system
from
the
ground
up,
it's
great
to
put
in
a
secured
by
default
situation,
where
you
say,
hey
you're
not
allowed
to,
you
know,
create
pods
that
are
unsafe
unless
someone
specifically
enables
it.
A
The problem is that we're going into an existing system here that's already running, and so you may not want your default to be "lock everything down to be secure", because you may actually be breaking existing things that are working. Istio had this problem: in their 1.0 version, when you installed Istio, it would immediately lock down the cluster so that you couldn't do any network traffic, which meant that installing Istio was a breaking change. And then, after you had it installed, you needed to open things up again, but you couldn't do this without having some downtime, or routing stuff to a different cluster, effectively.
A
Your cluster had downtime. So something to think about when you're designing one of these policies is: how do people get from the state where they don't have it on to the state where they do have it on? And pod security policy didn't really think about that.
A
So it's sort of: if you didn't start with this, it's going to be really painful to get there later. And when we start poking at Pod Security a little bit later, after I've built up the pain a little more, we'll see how we've solved some of these sorts of problems.
A
So it looks like EKS has a pod security policy that's already installed, so in that sense they've succeeded by having defined these things.
A
The fact that it's not default in Kubernetes, though, means that you can't rely on it being everywhere. Even better: since this is a flag on the API server, you may be stuck with whatever on-or-off setting your cluster provider, whoever's creating the cluster, has set. I'm not sure if, like, GKE or DigitalOcean or Azure, you know, exposes that list where you can go and add extra strings to it, because it is a maintenance concern. You know, how many different snowflakes can we support? You know, if you've got five booleans...
B
Yeah, I think this is the classic fail-open or fail-closed design problem, and the privileged PSP, I think, being there just sort of allows you to run anything you want, and then you keep knocking it down as much as needed until you feel like: okay, I'm only running everything that's needed, with the limited permissions that I need.
A
So
it
looks
like
we're
already
seeing
hints
of
the
new
format,
so
let's
take
a
look
at
what
this
privileged.
So
here's
a
privileged
pod
security
policy
and
we've
got
the
whole
thing,
so
I'm
just
gonna
apply
it
to
my
cluster.
It's
probably
not
that
exciting.
So
I'm
not
going
to
go
and
show
it.
A
But in order to say that, it takes... I'm going to blow this up a little bit so people can see, yeah, yeah. It takes one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen... it takes twenty-some-ish lines to say "you can do anything you want". Okay, but maybe baseline is a better idea. Let's see what baseline looks like. Oops. So baseline... well, gosh, that's shorter. Or not.
A
This file, so that's your "hey, you're not doing completely crazy stuff" policy, only takes 74 lines to spell out. So that's one of the problems with Pod Security Policy as well: this says things like "allow all volume types except hostPath", and we have to spell out all of those types and assume that there aren't going to be more in the future.
A
Oh, so it sounds like you're actually saying that there is a security hole here: if I have this pod security policy, and Kubernetes adds a new security-sensitive field, and I go and upgrade my Kubernetes cluster, then until I've gone and upgraded my pod security policy after that upgrade, I'm potentially vulnerable, right?
B
So
that
new
field
correct,
so
what
will
happen
is
if
I'm
on
123
cluster
version
and
my
psp
is
120
or
port
security.
Admission
is
pointing
to
1
22
and
a
new
feature
is
added
in
123.
I
won't
be
able
to
use
it.
So
that's
where
the
recommendation
is
start.
Your
pod
security,
when
you
upgrade
a
cluster
with
a
warn
mode
for
the
latest
version,
so
that
you
see
what
you're
missing
and
then
you
can
enable
it
later.
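That recommendation can be sketched with the version-pinning labels Pod Security admission supports (the namespace name is made up; the label keys are the real ones):

```yaml
# Sketch: enforce against a pinned, known-good version while warning
# against the latest standards, as recommended above.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.22   # pinned
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: latest     # surface upcoming checks
```

With this shape, an upgrade that tightens the standards produces warnings first, and enforcement only changes when you move the pinned version forward.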
A
And then here is an example using... oh, so this is another interesting thing: pod security policies use RBAC to actually do the application. So this is basically a ClusterRoleBinding, applying across the entire cluster, that references a ClusterRole that describes the pod security policy.
A
Let me go back and look. Here it is. So there needs to be a ClusterRole as well, I think.
A
We were just passing it up here, yeah. So you need to create a ClusterRole that references the pod security policy and says you're allowed to use it, which is a special RBAC verb, and then with that you can go down here and do a RoleBinding that maps you to a particular...
A
You
know
to
one
of
these
things,
so
you
kind
of
have
like
three
resources.
You
need
to
look
at
as
well
and
you
need
to
understand.
Kubernetes
are
back
and
the
particular
way
it's
used
here
in
order
to
figure
out.
What's
going
to
happen,.
A
So yeah, and unfortunately it looks like, because all this is security-sensitive stuff and everyone wants to set up their cluster a little differently, this suggests "oh, bind these to specific namespaces, like development or canary, or something like that", but there's not actually a recipe I can copy to follow and just have a secure cluster.
A
It's going to be, you know: okay, maybe I can copy this ClusterRoleBinding and adjust it, and maybe I think baseline is sufficient for my cluster, so I can bind that to system:serviceaccounts, and that means all the service accounts. But I'm still going to need to go up here and copy this ClusterRole and fill in a name, which probably needs to be psp-baseline, right?
A
How it works out: this isn't actually everything you need for a secure cluster. Without this, you don't have a secure cluster, but this doesn't guarantee that your cluster is secure.
B
Exactly
it's
just
one
of
the
many
things
and
the
sad
part
of
all
of
this
was
this
is
so
niche
in
terms
of
knowledge,
even
within
kubernetes
that
if
you
had
a
cluster
as
an
end
user-
and
you
wanted
port
securities
enabled
on
your
namespace,
you
would
have
to
either
figure
out
the
admin
of
the
cluster
and
tell
that
person
hey.
Please
do
this
for
me
and
the
admin
is
doing
100
more
things
that
to
just
keep
the
cluster
alive.
B
A
A
A
One: OPA makes it at least a little easier to say "any volume, as long as it's not hostPath", right? As opposed to needing to have a positive list of all the volume types that aren't hostPath. And the other thing is that OPA allows you to actually apply other policies. Someone's asking about checking, for example, that a pod was built with the right security processes and so forth, and pod security policy has nothing to say about that.
A
So
we're
gonna
be
talking
about
warren
modes
in
in
a
little
bit.
We're
almost
done
pointing
out
the
the
challenges
here
with
pod
security
policy,
but
yeah-
and
this
is
not
to
say
you
know.
Oh
pod
security
policy
was
a
terrible
idea,
or
even
it
was
a
terrible
implementation.
It
was
pod.
Security
policy
was
introduced
in
kubernetes,
one
three
or
one,
four:
okay,
and
it's
a
hard
problem
and
we
didn't
get
it
right
the
first
time
and
that
shouldn't
be
a
big
shock
in
terms
of
apis.
B
And
I
mean
just
like
adding
on
to
that:
we
didn't
have
our
back
at
that
time.
We
didn't
have
admission
controllers
that
are
that
exist
today
and
we
did
as
a
community.
I
think
we
were
less
mature
and
the
project
was
new,
so
at
that
time
we
made
the
bed
this
best
decision.
That
made
sense
in
that
particular
context,
and
that
is
why
we
have
deprecation
cycles
where
you
get
to
fix
things
that
now
don't
make
sense
and.
A
So yeah, one of the things you would have seen as you were watching me apply this stuff to my cluster (which still isn't being admitted into the list of pods yet) was warnings from the API server that all these pod security policies I was defining are deprecated in 1.21 and going away in 1.25.
B
Yeah, this is the most hands-on blog post, I think, to get this working in a cluster. It does use kind, though, so we'll maybe have to make some modifications for minikube.
B
Yeah, it's new. I actually remember writing code for this a few months back, when we released it in alpha, and one of the things I wanted to give a shout-out to was: if anyone is sort of new to Golang or the Kubernetes project as a whole, look at how this is implemented, and especially the testing framework.
B
I think it was one of the best things I've seen in terms of how you can make something that is difficult to write easier for others, so that somebody who is not very familiar with Go or Kubernetes can pick it up.
A
I'm guessing I don't have... this is an API server flag, right?
A
Okay, so let's do that. It can't go worse than the last time.
A
This is the broken... this is the broken one.
B
Another code review session for TGIK, which talks about how the Pod Security admission is implemented: I think that's a good suggestion. We'll try to see if Jordan or the team can join us for that one.
A
Awesome. So let's see here, I am over here: the Kubernetes manifests, kube-apiserver.
A
Looking for a log... oh, crictl, that's a good idea. Well, let's see. First of all...
A
Oh, Carlos, I have two different clusters. This is the cluster with Pod Security Policy, and you can see that it still doesn't have a kube-apiserver, and this one is the one that we're turning Pod Security admission on. Note the names are super close: Pod Security admission and Pod Security Policy. So I have been worried that I'm not going to get the names right.
A
It looks like it, which is unfortunate, because the API server is kind of the one you want. Like, if you're going to have one pod, you might as well make it the API server.
A
Oh,
it's
you're
right,
it's
probably
in
a
config
file,
not
in
not
in
the
api
server.
B
A
A
Well, I can always spin up a kind cluster if necessary, but one of the nice things here is that when you turn it on, it doesn't do anything, rather than immediately slamming closed.
A
But no one actually wants to say where these things get stuck on disk. I know they have to go to disk, because otherwise it says "using the feature-gates command line flag on each Kubernetes component".
A
So I'm not sure, Choco, which log you're thinking of. I can run "kubectl get events" in kube-system, but I'm not...
A
It says "back-off restarting failed container", but its current state is going to be ready, and I'm not sure how to grab the logs out from a pod that's now healthy, from an earlier incarnation.
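For what it's worth, kubectl does have a flag for exactly this case; a sketch against a hypothetical pod name:

```shell
# --previous (short form -p) fetches logs from the prior, crashed
# incarnation of a container, even after the current one is healthy.
kubectl logs kube-apiserver-node1 -n kube-system --previous
```

This only reaches back one restart, since the runtime keeps just the last terminated container's logs.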
A
So
I
guess
I
can
just
start
a
kind
cluster
over
here.
A
B
A
Oh
excellent,
okay,
great.
B
And for folks who were always wondering when they first heard of kind (like me, at least, a few months or years back): kind is actually an acronym for "Kubernetes in Docker", where the K is Kubernetes and the D is Docker. It essentially allows you to have Kubernetes using Docker on your laptop, to sort of mimic what you will see when a cluster actually exists.
A
Oh, someone's formatting turned double dashes into em dashes.
A
So kind is really nice because it lets you just spin up...

A
Yeah, I'm not sure what happened there, but we seem to be back. Yes.
A
I don't think it's Windows, although I did have an interesting bug the last time that I did TGIK, where it turned out I'd fallen back to Ethernet, because the NIC on the motherboard had actually gone bad a week or so earlier, and I didn't realize it, right?
B
Yeah. For me, the main thing was just the sheer amount of extra legwork we had to do to get the cluster role, cluster role binding, and everything in place, and just, like, the ability to have the exception list very clear, as a set of YAML arrays or lists. I think those are the big changes that I've really enjoyed looking at. And the main thing, which was made as a clear assumption...
B
When
building
this
was,
we
will
always
have
for
any
complex
scenario,
existing
admission
controllers
that
are
external
to
kubernetes.
That
can
be
used
for
advanced
special
scenarios,
but
at
the
same
time,
if
somebody
cannot
or
don't
want
to
use
something
outside
of
kubernetes,
we
should
have
a
bare
bones:
admission
controller
within
kubernetes.
That
allows
you
to
do
something
basic
instead
of
having
root
everywhere
when
you
can
create
a
pod.
B
A
A
While we're starting the control plane, I'm going to pop back over here. Oh, I got a timeout starting the control plane. That's awesome.
A
It said that it got a TCP connection refused. So, do you remember how, over here, we were looking at these, you know, policies defined by the Pod Security Standards?
B
Yeah. So, if my memory serves right, this came before Pod Security admission came into being, and the reasoning behind it was that people were saying: okay, everything makes sense; pod security policy is something you want to do. It's not great, but it is something. But what people were looking for from the community was: what policy is good, and what policy is bad? Because sometimes, as we saw with the 70-plus fields, it's hard to figure out what makes sense and what doesn't. So this was, in a sense...
A
You can also use something like OPA if you want to rule out, you know, "oh hey, I don't want anyone using EBS volumes directly", or something like that; you can still put those policies in separately from pod security policy. But you kind of have these three profiles. There's privileged, which is like, you know, "I trust these folks and they get to do some crazy stuff", so they can privilege-escalate if they need to, and I trust them to just not do it.
A
You've got baseline, which is, you know: "hey, we don't think there's any problems here, you're a normal container, but I'm not going to bug you too much about hardening things". And then restricted is like: "hey, you need to actually harden the configurations in your pods; you need to say you're not going to run as root; you need to..."
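A rough sketch of what a restricted-compliant pod ends up declaring. The field names are the ones the restricted profile actually checks; the pod name and image are illustrative:

```yaml
# Illustrative: the hardening the restricted profile asks of each container.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app        # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true      # no root, and prove it
    seccompProfile:
      type: RuntimeDefault  # a seccomp profile must be set
  containers:
  - name: app
    image: myapp:1.0        # stand-in image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]         # shed every Linux capability
```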
A
Yeah, so it looks like that rules out a bunch of volume types: basically anything that's not CSI, anything that's not Container Storage Interface using persistent volumes, gets ruled out, plus a bunch of the privilege escalation settings. Oh look: if you didn't remember to set this for all the different container types, you could have a problem in your policy. So this kind of cans all that stuff together, and whoever's publishing Kubernetes is the one who needs to know this, rather than whoever is running Kubernetes. Yeah, exactly.
A
So let's see.
A
Trust me, because I don't have a Windows editor, but we're going to try taking out the feature gates and see if it's kind altogether that's unhappy, or if it's a problem with the feature gate.
B
Yeah, and I think if this doesn't work, we'll just hop onto the docs and look at what's going on. Yeah, in the meantime, I think somebody asked about the warn feature also, so maybe I'll talk about that a bit. Basically, there are three different modes in which you can implement this, or apply it: one is warn, one is audit, and another is enforce.
B
So it's kind of similar logic here, where I can see if something is not going to work for me if I've set it in warn mode for my namespace. And the other one is audit, where, if I'm a cluster admin and I have access to my audit logs, it will log in the audit log of the Kubernetes API: "this is going to fail if you enforce it".
B
So
that
allows
you
to
have
a
different
point
of
view
in
terms
of
where
to
look
at
some
if
something
is
failing
and
also
allows
some
visibility
into
what
could
fail
for
others
who
are
not
the
actual
owners
of
the
pods
that
are
going
to
fail.
So
that's
always
helpful
and
that's
why
the
two
modes
exist
and
enforce
it's
kind
of
the
name
suggest
what
it
means
is.
Essentially,
I'm
going
to
enforce
this
policy
and
things
will
break
if
something
is
not
going
to
work
for
you.
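The three modes are independent namespace labels, so they can be mixed; a sketch (the namespace name is made up):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject violating pods
    pod-security.kubernetes.io/warn: restricted    # warn the pod's creator
    pod-security.kubernetes.io/audit: restricted   # annotate the audit log
```

A common rollout is exactly this shape: enforce a looser level while warning and auditing against a stricter one, which gives the migration path being discussed here.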
A
Here we go. Oh, it looks funny. It looks like it's simply a matter of not having busted Docker on your desktop.
A
Now,
in
this
case,
I
have
a
prompt
to
update
and
restart
docker
desktop
that
I
had
ignored.
A
B
B
So we need to set the context, and while you're doing that, I'll answer one question. Warn, audit, and enforce are the three modes you can set. Basically, warn is going to give you the warning when you are trying to create a pod that is going to fail, and audit is going to also silently log an error, but in the API server's audit log, instead of showing it to the user who is trying to create the pod.
B
Exactly, and this is another interesting topic, where you can also figure out: okay, what does a dry run really mean in this case? So dry run is something where, before the pod is created, you can see if something is going to fail. And this is after the pod is created: it will allow you to create the pod, but it will still log some error or warning (technically, it's a warning) to share that this might fail when you enforce it.
B
So
that's
the
main
difference
and
one
thing
to
note
which
came
up
in
one
of
the
work
I
was
doing
recently
on.
The
nsa
hunting
blog,
we
wrote
for
kubernetes
was:
does
port
security
admission
and
psp
behave
in
the
same
way
for
existing
parts?
B
So the answer behind that is: both are admission controllers. Because you have a gate at the point when you're creating the pod, for both of them, any existing pods do not get any updates and do not get affected, based on what you have set up in your admission controller. So that's always something to keep in mind when we're dealing with both of them. All right, I'll stop talking. Let's try to get this working with at least one namespace.
B
So Carlos is saying I sound like I really enjoy working with Kubernetes. So, believe it or not, I started in cloud native at the beginning of my cloud career, and Kubernetes was pretty early in my career as well, so I've seen how bad it has been in the past and the improvements that we have made since. It just makes me feel happy and grateful that I get to contribute to making this better. So that's really what it is.
A
A little bit about this command — it's a little different than the other commands that we've had.
B
Correct. I just want to make sure — there is, I think, one more command that precedes this, which might be useful. Also, let's see, unless I missed it... yeah, so this explains a bit more about the audit mode and the enforce mode. Yeah, so you're right, I think this is the first command. So if we look at this, basically what it's trying to say is: my namespace is test-ns, and I'm going to set the baseline pod security standard.
B
So if you remember the pod security standards we just talked about: in pod security admission we are using the same three standards, as something built into Kubernetes. What that will allow you to do now is: I'm going to set my namespace setting for pod security as baseline, in warn mode. So anything that is going to fail my baseline standard, I am going to get a warning for. And it also says that my warn version is v1.22, which is the recent version.
A
And the other thing is: this is using labels on existing resources. This isn't creating — I'm not creating a new resource, a new pod security policy, or a new RBAC policy, or binding the RBAC policy to a set of credentials. This is basically — we've said the way that you do this enforcement is by namespace.
A
B
B
A
B
Exactly — so that's the main thing: the labels are applied at the namespace object level. So any pod in that namespace is going to get the same setting. And this is where, if the way you have been doing PSPs is having different policies applied to different pods in the same namespace, you have to be really careful when you're migrating, because now your namespace may grant higher permissions than some pods need.
A
So that's actually containerd — it's not necessarily Docker Engine — but it orchestrates things. Then the underlying OS: when you call open() or, you know, socket() or something like that, that goes directly to the Linux kernel. The container engine just sets things up, puts you in a corner that's supposed to be yours, and maybe sets some limits on you before it execs your pod, and after that you're talking directly to the OS. Yeah, yeah.
A
B
So what this will mean is: anything that is implemented as a security feature that can be set using the pod security standard up to 1.22 — you get to use all of them, and all of those will be enforced. But let's say now I have a 1.23 cluster, and a new feature, something like "abc security", is implemented.
B
I'm not going to be able to warn you on the "abc security" feature, because I am only going to look at features that were implemented in 1.22 or before.
A
That also potentially means that you could decide, in say 1.24, that there's a feature that maybe should be getting a little more scrutiny than it is right now. You know, maybe it's a bad idea to be able to open UDP ports, as a hypothetical — UDP probably isn't — but you could say: hey, in 1.24 we're going to start enforcing that you can't open UDP ports.
A
B
Exactly — and this is one of the exciting parts. I feel like we should always make it easier to move to secure defaults in a gradual way, instead of just issuing a mandate: you shall now only run in secure-by-default mode. So this allows you to know and get some idea: okay, I'm going to get enforced soon, so I might as well start figuring out why I'm getting these warnings.
B
So this is a good point, and one callout I want to make on that is: as part of TAG Security work, we've been working to define how we could do secure-by-default design for all cloud native apps. For that we have a very simple two-page document, which I'm adding in the notes, called Secure Defaults: Cloud Native 8, which is sort of inspired by PEP 8, the Python style guide. This is open for comment until the end of this month, and it basically gives some guiding principles.
B
If you follow this, you will have a better experience moving to secure by default. So if anyone is interested and wants to add comments or talk to me, I'd really appreciate it — comment in there, or if you want to share it with others, go ahead and share that link. It should be open for comment for all the folks on the internet — essentially, no extra permissions needed.
B
A
Yeah, so the next thing here is kind of neat, about dry run. Do you want to say anything more there, or just let people read?
B
A
Actually, pause — this is really neat. This is a dry run on applying a label to the namespace, right? Exactly. This tells you, when you say: oh, I'm thinking about applying this label that will enforce on future pods in this namespace — it will warn you about all the existing pods that don't meet that standard. It's not going to take them out of the namespace, because...
B
B
B
A
B
A
Oh, and here's the example. So yeah: enforce baseline, audit restricted, and warn restricted. And you could also, you know, set an enforce baseline version — you know, v1.22 — or audit restricted v1.23 or something like that, when that comes out.
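Put together as namespace labels, a combination like that would look something like this — the namespace name is illustrative; the label keys and the version pinning are the documented pod security admission forms:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns   # illustrative name
  labels:
    # Enforce the baseline standard as defined in the v1.22 release.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.22
    # Audit and warn against the stricter restricted standard.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

Pinning a version means the standard is evaluated as it was defined in that release, which is what makes upgrades predictable.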
B
A
So yeah, I'd say that if we look at the API change, I'd break it up into a couple of different categories. The first thing is that PSP was really fine-grained: you could apply it to users, you could apply it to service accounts, you could x out individual capabilities one by one and say "I'm okay with x, but not y", right? Pod security admission is a lot simpler — there are a lot fewer knobs and so forth to adjust. The second one is that there's been some thought about:
A
How do you roll this out? How do you make it upgrade-safe? How do you make it safe to turn on? So it's got some versioning capabilities, and by default, when you turn it on, it doesn't do anything until you take some other affirmative action to get it enforced. Whereas with pod security policy, you had to create all your pod security policies on the cluster, and you had to know they were right — but you couldn't test that they were right until you flipped the flag. Yeah, exactly.
B
A
Yeah, and then the third piece is that they thought a lot about how to communicate this to the user — you know, stuff like having dry run work and warn you about what's going on in your current cluster, and having warn and audit so that you can communicate both to the system administrator and to the end user, depending on who's responsible for that stuff. And that's not always the same: sometimes you expect developers to do devops and do the right thing, and sometimes you expect cluster administrators to go in and talk to people, like: hey, you've got to change stuff, and I'm going to help you, because we don't really expect you to know Kubernetes.
B
Yeah, exactly — and I think that's because the people who ended up designing this were from an end-user background. As a result, they had known the pain, the organizational problems, and the things that everyone needs visibility on. So some of that magic is coming into this implementation. That doesn't mean it's perfect, but hopefully it's better than the one that is getting replaced.
A
And it looks like there is one new thing at the end of this: we see a Kubernetes resource that is actually new, and it looks like this is the configuration for the pod security defaults. So if you want to ship and say: hey, all namespaces should be at least baseline, nobody's allowed to run privileged, right?
A
Then you can actually go in here and do that. So hopefully, as 1.22 rolls out, we'll see distros starting to say, you know: hey, we're going to ship with a default pod security that isn't wide open — or one that is default wide open, because we know a lot of people are still doing stuff like that, but that also puts a, you know, warn restricted on there to steer people.
B
B
Yeah — and I mean, I've been an end user before as well, and one of the things this maps to, in real human interaction, is that the security folks don't understand Kubernetes as much as we would want them to, right? So what would make sense for them is what —
B
they thought about, exactly. So all of us are learning, and if we just throw a lot of jargon around, it's not going to help anyone. But if we can give a nice story and tell them: hey, our default enforced pod security policy today is baseline, and we are going to make sure all the pods automatically get this applied, regardless of what they are doing — this is our cluster-level policy.
B
So this gives them some confidence that at the cluster level we are doing the right thing, and we are catching things that might be scary. And then one or two years later, you can say: we are always trying to take care of security more and more, so slowly, slowly, we are going to move from baseline to restricted enforce. That gives a good story of how you're moving towards more secure defaults, by actually looking at the object. And all the third-party auditors — all the people who have to go through audits for compliance — can give evidence of this, saying: hey...
A
Yeah, so one thing — I didn't actually read all the text before I looked at the YAML and talked about it, so that's my bad. It looks like this YAML object is actually a file that you need to pass to your API server on the command line. It's not a resource in the cluster, right?
A
You can kind of tell, because it doesn't have metadata on there — and if you don't have metadata, then you can't have a name, and so you can't really fit into the way the API server likes to manage things.
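A sketch of what that file looks like, based on the Kubernetes documentation for the alpha in 1.22 — it is passed to the kube-apiserver via `--admission-control-config-file`, and the exempted namespace shown here is just an illustration:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1alpha1
    kind: PodSecurityConfiguration
    # Cluster-wide defaults applied to namespaces that have no labels set.
    defaults:
      enforce: baseline
      enforce-version: latest
      audit: restricted
      audit-version: latest
      warn: restricted
      warn-version: latest
    # Requests matching an exemption skip policy evaluation entirely.
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system]   # illustrative exemption
```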
A
I didn't realize it at first either, until someone was asking — then I actually read the text above. So Lachlan's post also goes over this pretty well, but I thought the set of API decisions that changed between pod security policy and pod security admission was interesting.
B
B
You can do whatever you want, but one thing I would also suggest is: make sure you don't give permission for that namespace to all the users in your cluster. Restrict it to maybe cluster admins, or some folks who really need it — like, potentially, your devops or security person who needs to run maybe some CNI plugins. So that's something to keep in mind.
B
A
Yeah, well, I am excited about this. This is one of those things — that beta policy that you were talking about earlier, you alluded to it — it was around Kubernetes 1.18 or thereabouts, I think. I believe so, yeah — the, you know, the API...
B
B
Correct, correct. And, as luck would have it, SIG Security also was formed around the same time, so it was not just SIG Auth's responsibility anymore, and there were friends from SIG Security who could help out. I'm really glad this is how it has turned out. We'll probably have a lot more improvements in the migration from PSP to PSA in the next few releases, and yeah — because it's alpha and beta, you also get to decide the roadmap in the future.
A
And since this is an alpha feature, you probably won't be able to get it from your cloud providers until the next release.
B
A
A
kind is a good place to do this — it's really fast to spin things up, assuming your Docker is working.
B
B
Yeah, so again, a reminder: PSP will be removed in 1.25. We're not there yet — 1.25 is not the next release — but it's deprecated for sure. So we should — if you are using...
B
A
Yeah, so feel free to continue filling in any notes on talks and so forth, or other exciting stuff. Oh — an ingress-nginx CVE. I wonder if I can guess what this one's about.
A
Oh, fun: allows retrieval of a service account token. Yes.
B
A
It looks like you need to also add an additional ConfigMap setting, yeah. And it looks like maybe they allowed you to inject snippets of other configuration — it looks like a standard escaping-type vulnerability.
A
Yeah, amusingly, Carlos was pointing out that your nginx ingress might need to be privileged — but hopefully it doesn't. Hopefully at most it needs to be baseline, and it shouldn't necessarily even need that. I recently poked the Contour team, and they realized that they could stop binding to port 80 and just use a higher-numbered container port. Yeah, yeah.
B
A
So hopefully your ingress doesn't actually need to be privileged. Yes, MetalLB does, because it needs to either send ARPs or play with your routing table in funny ways, but...
A
B
B
A
And if you need PSP features that aren't in pod security, you can still use tools like OPA, yeah, or —
A
Kyverno is another policy agent that works for all different types of resources. So you may need both that and — you may want a belt-and-suspenders approach with the pod security admission controller, because it's easy to do now.
A
Awesome. Well, thank you — this was a lot of fun. I managed to break at least one cluster and discovered that my local Docker was broken, so, yeah.
B
Same here — I loved it a lot. I loved learning how to fail publicly. So this was great, and it was nice to see comments from everyone all around the world.
A
Oh, jsPolicy — there's another admission option. Or, as Carlos points out, you can write your own admission webhook if you — yes.
A
— that kind of thing. Okay, well, we'll see you all next week. I think it's going to be Bryan Liles, talking about something that Bryan Liles finds interesting. Yes.