From YouTube: Kubernetes SIG Security Meeting 20210128
B: All right, we'll say that it has clicked over and call it officially started. Hello, everyone. Thank you so much for coming to Kubernetes SIG Security, and thank you, Kirsten, for agreeing to take notes. (Sure.) Let's see what we've got. Ray, do you want to talk about third party?
C: Yeah, so the pull request for the RFP is out. I have the link on the agenda. There's some good discussion going on about what is in scope of the third party.
B: Yeah, I had a look at it, and I think it's looking real good, and yeah, there's some good discussion going on in there. Will the actual showing of the RFP to potential vendors begin when the PR lands?
C: I believe so, and I think we also have to change those dates as well, because we initially thought this would be released this past Monday, so we'll have to change some dates on that.
C: So the earliest, I believe, will be this upcoming Monday, but we'll discuss it in our next meeting next week to see what final changes need to be made to the RFP and when we can merge it to call it officially out.
B: Awesome, yeah. Thank you so much. I'm so glad to see this moving forward like this; looking forward to having it land and then doing some good work with some vendor.
E: I just have a quick update on the security docs subproject. We moved the meetings, so it's going to be alternating Thursdays with the SIG Security meeting; it will happen on the first and the third Thursday of every month. That's the update I have for today.
B: To pace this forward: I had posted in Slack, in the tooling channel, asking whether anybody was interested in being added to a GitHub notification group to look at PRs that they would put into their configuration for their taint analysis tool. I haven't seen any response to that yet, so I figured I'd call it out now, if anybody wants to be on that. I think it should be a pretty low number of interrupts, but I'm trying to avoid having it default to whoever the SIG chairs are just by nature of us being here, and instead hopefully get it to be something that's more driven by the community that shows up.
B: Right then, I think the next thing that we have here is to talk about Pod Security Policy. I know that a lot of folks have been thinking about Pod Security Policy; I know that I have spent a lot of time thinking about Pod Security Policy. Did everybody get a chance to see the email or the Slack messages about having some new proposals out that are firm enough that we could actually implement them, but certainly have a lot of room to be improved by having everybody's thoughts reflected in them?
B: Links are here in the notes for two different proposals. One of them is more like Pod Security Policy: it's flexible and has the ability to configure user-specified levels of what kinds of pod settings are allowed in different namespaces. The other one is much more simplistic, intended specifically to be replaced by an external admission controller — such as Gatekeeper or k-rail — for folks that have more sophisticated needs, while aiming to make it as simple as possible for those who don't believe they have those sophisticated needs.
B: So I think they're both really solid, but they have a lot of room to get better with everybody bringing their experience to bear on them. So if you're interested in helping make sure that Pod Security Policy gets replaced by something awesome, then please make yourself heard in the docs.
B: We have time and space to talk about it right here, if folks have things they want to talk about live, and of course it'll be coming before the next SIG Auth meeting as well, because ultimately this is something that they're going to have to implement and take care of in the future. So it's ultimately going to be a thing that takes place in SIG Auth, but hopefully we can provide some experience and some different points of view to drive that into a good place.
G: You know, and then there's Kyverno as well, so it's just a matter of — I think we need to get consensus on what that is, not only from the community but also from a vendor perspective, right? There are, you know, vendors that have integrations and all of that, but I think that by and large vendors will follow what the community and the operators out there are going to do.
B: Agreed — we absolutely have to pick something and implement it, because that has been such an issue of swirl in the past.
B: This has been a swirl forever, and so the hope is: here are two places we could draw a line in the sand. So let's talk about them, let's draw that line and say this is what we as an upstream project are going to do. This is a starting point; this is an option that users can use, and it'll be very easy for them to use because it'll be built in. And if it isn't sophisticated enough for their needs, then we have Kyverno...
B: We have k-rail that they can choose to stick in there if it's what they need. But PSP has been so hard, and so the benefit of having something that is upstream, out of the box, and super easy hasn't really been realized with PSP. Either one of these would be super easy and out of the box, but replaceable by something more powerful.
D: So I kind of like the frame you just used there — sorry, I didn't realize my video was off. I think it's a valid question that Pop raises about OPA Gatekeeper and Kyverno, because we certainly hear a lot about them. I like the frame that you just teed up. Initially I was thinking, well, yeah, it'd be interesting to know how these two proposals — which I haven't had a chance to read yet — compare to what something like Gatekeeper or Kyverno offer, and what you just said...
D: ...I think helps frame that. But I still do hear a lot of interest, and more in Gatekeeper, perhaps because it's been around longer; I'm not sure. So if we want to pursue a built-in, available-by-default use case — which is what I heard from you, and I think is a good frame — then I think we would want to clearly identify what we see as different between the two.
H: ...Quite expensive. The built-in, out-of-the-box case is the biggest one to me. About "well, you could just use OPA or Gatekeeper" — those don't come out of the box, and as it is right now, Pod Security Policies do come out of the box, and users often don't use them because they see them as having too much complexity. And if they are currently not using the thing that is in the box because they think it's too complex, I think expecting all users to go find something from somebody else...
H
Do
come
in
built
in
so
that
they're
able
to
just
do
that
without
having
to
take
those
extra
steps,
because
if
it
defaults
to
nothing,
go
find
this
from
someplace
else.
A
lot
of
users
just
straight
up,
aren't
going
to
do
it,
and
so
it
will
become
more
insecure
by
default.
As
a
result,.
G: It's almost like the docs page, right? You know, right now there's like, "here's how you..." — I agree there has to be a prescribed way to do whatever the policies are that we say, run-as-root or whatever they might be. And then the docs page — which I'm sure is part of that amazing document that I have to stop skimming and actually read all the way through — basically has prescribed ways of "here's how you implement this" post this change.
I: The difference is mainly that there's one proposal, bare-minimum pod security, that says: here are the problems of Pod Security Policy; let's cut off all the problematic features and call it a day. And then there's another, PSP++, that says: okay, here are the problematic features; let's fix them and ship something new.
H: PSP++ also has different plans that you can use that are pre-configured, whereas bare-minimum pod security policy, I believe, only uses one. I could be wrong.
B: An important feature of both of these proposals is that they can be turned on by default by distributions and have clusters still work, but then also users can opt out of restriction in the places where they need to — because of course we all run things like network plug-ins that require a lot of fancy pod features — but if we can discourage the use of those as a default, so that people intentionally say no...
A: I think we're in agreement that a sufficiently advanced use case is going to need a full-fledged policy engine — something as flexible as OPA or Kyverno, or something custom-built — and I think the difference between those proposals is where in the lifecycle, or, you know, the advancement of the cluster, the user hits that point. And so bare-minimum pod security policy...
A: ...or whatever it's called, is sort of just "here's the bare minimum to make create-pod not equal root-in-the-cluster," anticipating that pretty soon you're going to want something more flexible, but that by the time you hit that point you'll be familiar enough to go research and use an alternative. And PSP++ will take you a lot farther down the road of doing custom policies.
I: For bare-minimum pod security policy — or bare-minimum pod security — there's some discussion about only targeting the baseline policy, and then there's some extra Pod Security Policy behavior that is not going to be offered. What is the delta there? What can I do — what holes am I not closing?
A: There is a document called, I think, the Pod Security Standards, or something like that.
A: For those who aren't familiar: there's privileged, which is totally unrestricted; there's baseline, which basically says that if I run a pod that just has a name and a container image and doesn't configure anything else, then that's accepted by the baseline security policy; and then there's the restricted policy, which is the best-practices policy. So if you're really serious about security, you would try to use the restricted policy, and the main difference between restricted and baseline is that restricted requires you to run as non-root.
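[For reference, a minimal sketch of the kind of securityContext the restricted level points toward — the pod and image names here are illustrative, but the fields are the standard pod securityContext API:]

```yaml
# Illustrative pod with a locked-down securityContext of the sort
# the restricted level expects (runs as non-root, no privilege
# escalation, all capabilities dropped).
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example            # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      securityContext:
        runAsNonRoot: true                  # restricted requires non-root
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```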
A: You can actually still use all those volume types, but it's saying that the administrator has to set them up as a PersistentVolume, and then you use a PVC to claim it. So you can't do any of that stuff with bare-minimum pod security policy, because it just implements the baseline. And I think when I originally wrote it up, I included the restricted policy, and David Eads pointed out — I can't remember exactly how this came out...
A: ...but basically, if you go by best practices, and stuff that's not the default, you need to version it, because as we add new features and things that are off by default, or as the best practices evolve, that restricted policy by necessity has to evolve to match them. And so you end up having this versioned thing that changes over time.
A: The baseline policy, since by definition it's just the default pod configuration and nothing elevated from that, just tracks the defaults and, I think, can avoid being versioned. So that was part of the motivation behind that.
D: Yeah, the versioning thing is a really good point, Tim. OpenShift uses SCCs today — and we're still using them; kind of the, I don't know how to refer to it, perhaps the ancestor to PSPs, I'm not sure — and the restricted one is set by default on all worker nodes. You know, there's a complex set of logic that winds up being invoked...
D: ...that can cause a different SCC or PSP to be applied. It is complex, and when we've needed to change restricted, we worry a lot about the impact on running workloads, and so we have to manage that carefully. So that absolutely applies to anything we do that is similar to that approach: the versioning point is going to be key. How do we make adjustments and ensure that running workloads are not broken by them?
B: And so, you know, I think that's a good reason to have cut the scope of bare-minimum pod security to the scope that is extremely unlikely to change. But of course, if it does change — say, because CRI changes in a very serious way — then how to adjust that behavior over the course of releases in a safe way would, I think, be part of the discussion of how the pod spec would probably need to change as a result of such a serious runtime definition change, and that tension.
B: It doesn't do anything to try to encourage people to adopt more best practices that need to be actively opted into, but it does still reduce the spookiness of an out-of-the-box cluster where "create pod" is root on every node. Which is an interesting concept, because it seems like there's a bathtub curve on the understanding of it: when you ask people, they either say "yeah, of course, that's obvious" or "oh my gosh, really?" And the fact that there's such a broad divergence in the understanding of how much power you get over a cluster just by being able to create a pod is, I think, why we need to have something — at least from my point of view.
H: From an end-user perspective, part of where the differing options in the doc came from is that every company I've worked at for years now has asked for a kind of red/yellow/green — low/medium/high, low/reasonable/paranoid, however you want to describe those things; I think those were at least three of them. Like: "we don't want to come up with all of the YAMLs for all of the best practices. We don't want to think about it. We understand that we need to do it."
H: "Can you come up with this list of differing YAMLs that we can choose from, that have best practices on them, so that we can choose them according to needs and use case?" From an end-user perspective — because I've always worked at end-user companies until now — that is a thing that people really, really want. So it is giving people options, and I think the demand for it is out there.
D: Plus one to that. It's come up recently for us in, you know, a variety of customer environments where security is really important, and they can't afford to manage multiple custom policies and figure out how to apply those, but the concept of a minimum of three that meet the majority of their goals — that works really well.
G: What I mean is — again, we talked about that curve — basically there's a point where we have a rule set that has, you know, the basics, and then it grows: "okay, this thing happened." So it's a completely iterative process, if I get that right.
D: True, yeah. No, I think it will change, and that's why I think Tim's versioning point is so important. And — again noting that I need to go read these docs — I think it may be really hard to do something that isn't the most basic and that doesn't include versioning. And I'm not sure versioning should be a problem; I would think we could craft something that could work.
B: Yeah, we've got versioning in the doc that clearly needs versioning, and then the other doc has its scope set to avoid the need for versioning.
G: And I would say — oh, sorry, sorry, go ahead.
I: Okay, okay.
I: ...Instead being, you know, false or safe, and then in the future we could introduce super-safe — you know, really restrictive, straitjacket — with v1 as the annotation version, and extend the behavior. So I don't think...
H: But my question is: simpler for whom? Are we talking about ourselves? Are we talking about end users? Because I have seen a lot of demand from end users for gradations of policy. I have not actually seen a lot of demand from end users, in an immediate sense, for "we want one thing" — that's actually a thing that I have not seen a lot of demand for — and I think it's important to keep the wants and needs of users in mind.
B: I guess an important thing to note about the more complex proposal — while I am but a baby Go programmer...
A: I don't think we should be too worried about implementation complexity. You know, unless we're going all the way to full-fledged-policy-engine kinds of things, within the scope of just limiting preset fields on pods, I'm confident that we could easily implement something within a release timeframe. The long pole is going to be figuring out what we want to build.
F: Yes, a couple of comments I wanted to add to the discussion. For those of you who don't know, I do work on Kyverno, so, like Tabby said, I kind of have some knowledge of that implementation of a policy engine, but I totally agree that having something built in and in-cluster makes complete sense, and that's the right thing to do for a security baseline. And, you know, both Kyverno and Gatekeeper address a lot more use cases than just pod security.
F: So, you know, I don't think there's any concern about overlap or anything like that with what these policy engines would do. One question I did have — and this applies to both proposals — is the scope at which these policies could be applied, right? I think there was some discussion on this in the documents and on Slack about whether there could be an ability to target workloads versus namespaces, just because, at least in my experience, what we've seen in enterprises is...
F: ...there are some restrictions to using namespaces, especially for developers, right? They might not have the ability to segment or create their own namespaces, things like that. And then, if they have to run varying workloads in the namespace they're assigned to, how do they manage that with different levels of security?
A: It's not, though, because you also need to take into account the command that's running, and...
B: You know — like being very, very picky about which hashes you allowed in there, and all sorts of things — and it's very hard. You're totally right, Tim; I overstated it when I said that that's watertight. But even if it were watertight, the UX challenge around it is so, so huge. But I would love to hear creative solutions for allowing multiple privilege levels to coexist within a namespace, yeah.
D: Yeah, and I'd agree with Jim — it was, I guess, who said that — that's a use case I also see with our customers a lot, right? They need, you know — developers, yeah, multiple workloads needing privilege, different workloads in the same namespace.
B: "This one in this namespace needs privilege, but nothing else does." And, like, well, I could make a policy pretty easily that says in this namespace only init containers can run with privilege, but now the strength of that security boundary is: did the attacker take enough time to try an init container?
A: There's a piece of confusion that I often hear come up in the context of this conversation: within a namespace, workloads can absolutely run with different privileges, independent of the policy. The privileges that a workload is running with are described through the security context, and the policy engine is more aimed at the users — so, what can a user create a workload to do?
D: Well, so I was starting to think that I didn't understand PSPs, because I've spent so much time in SCCs. So, Tim, thank you for confirming that by and large there's still a huge overlap. Because absolutely, in OpenShift today you can have a different SCC applied to different workloads in the same namespace, and that's done through a variety of different mechanisms. You can actually associate an SCC with the namespace...
D: You can associate it with a service account. There's logic built in that checks the security contexts that come in with the pod against the available SCCs to see which is the most appropriate to apply. It's complex, it's hard for end users to understand, and it's easy to accidentally get the wrong SCC applied if you don't really know what you're doing. So I'm not saying it's the be-all and end-all — I just, you know, don't know yet how much that aligns with what PSPs do today.
D: So organizations who worry about insider threat — yeah, they absolutely want to minimize user access in those namespaces and the privileges those users have. So you're right: if somebody has a lot of privilege in that namespace, yes, they could take advantage of that SCC.
B: Because, for PSPs, for example, I really like the way that PSP tried to do this: if a PSP is allowed to be used by the user creating a pod, then that PSP can admit the pod; or, if a PSP is allowed to be used by the service account mounted into the pod, then the pod can be admitted by that PSP. So, like...
B: If you have two PSPs — a privileged one and an unprivileged one — and you grant everybody in the world the ability to use the unprivileged one, and then you grant some special user the ability to use the privileged one: if they create a pod — if they apply a YAML that starts with "kind: Pod" — great, they're the only person who can do that and everything's cool. But if they want to create a ReplicaSet or a Deployment, that doesn't matter, because it's not them creating the pod.
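[The grant being described works through RBAC — the `use` verb on `podsecuritypolicies` in the `policy` API group is the real mechanism; the role, binding, and user names here are illustrative:]

```yaml
# Sketch: grant one user permission to use a privileged PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-psp            # illustrative name
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["privileged"]     # the privileged PSP
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: special-user-privileged-psp   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-psp
subjects:
  - kind: User
    name: special-user                # illustrative user
    apiGroup: rbac.authorization.k8s.io
```

[As the discussion notes, this only helps when the user creates the Pod directly; pods created by the ReplicaSet or Deployment controllers are authorized against the controller's, or the pod's service account's, access instead.]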
B: So the other way to do it — and what's recommended in the documentation; definitely, Tim, jump in on top of me as soon as I get this wrong — is that instead you can bind to service accounts, and so then only pods that reference a certain service account will be able to be admitted as privileged.
B: But now it comes down to iterating over the service accounts in the namespace and finding the one that lets me admit a privileged pod — it's still "everybody can make a privileged pod" if we do it by service account. And so I really love the flexibility that's there, and the way that it tries to hit that goal, and I'm sad and frustrated with the way that other parts of Kubernetes make that really hard to do.
A: So I have a proposal that went out to SIG Architecture probably five months ago — I can surface it and link it in the notes — that was aiming to address this problem. The idea there is you have something...
A: ...call it restricted labels — maybe not a label; restricted tags, whatever — and you say who can create a restricted tag, so that it's bound to the user, and it's a metadata thing attached to every object. So maybe you have a restricted tag that says "privileged," and Tim and I, as cluster admins, are allowed to set the privileged tag on resources, and then you have some rules about, well...
A: ...the ReplicaSet controller is allowed to create a pod on behalf of a ReplicaSet that has restricted tags, and so you propagate the tag down to the pod, and then your policy is actually bound to the tag. So your policy says pods tagged "privileged" are allowed to use the privileged profile. So it's a way of getting that kind of asynchronous delegation of responsibility, but it's also a much larger architectural change that would be required to implement this.
B: I think the way that you can mix privilege levels with PSP is strong but impractical to use, because you could say: if you want a privileged pod, you have to make it as a bare Pod — done. And that would work, except that now you've got people managing pods, when the whole point is that Kubernetes should manage your pods instead. So it's rough. But anyway, that's a big limitation in both of these proposals that I think is pragmatically reflective of the fundamental limitation of the space we're working in inside Kubernetes.
F: So, even though the namespace is where the strong boundary is, would it not be beneficial to allow users to choose — you know, things like network policies use the pod selector? So I guess I don't fully understand the reason why the namespace... And if an attacker, you know, has access to the namespace...
F: ...they do, of course, get access to other data in that namespace. But to start with, if a user is defining their security policies, why not still allow them to just define the minimal set in which pods require that privilege? Is it because it would just provide a false sense of security for those parties?
B: Something that's trivial for an attacker to bypass would still be useful — for example, in forcing engineers at an end user to apply certain settings on their pods, because their pods wouldn't deploy unless they did the appropriate things or went and did clever attacker things. But personally, I am afraid of security controls that are only effective against people who are already honest and already not trying to bypass them, because I am afraid of giving people a false sense of security.
B: So as a soft policy suggestion, you're right, something like that is totally valuable. But I'm scared that it looks like a much stronger boundary than it really is.
B: Thanks — this has been a really interesting discussion. It seems like we're kind of petering out, but that makes sense; we're coming up towards time, and honestly, it's still a really hard world for us all to live in.
J: One thing I just wanted to ask: in terms of implementation, it seems like the easiest is just to go with a minimal default, so to speak, at least for now — I don't know what the plan is. Is it possible to also build in some sort of educational tool as part of those policies? So if we did have sort of a green/yellow/red policy tool, it's like: these are all the things that we're looking at...
J: ...these are the different levels, and we're not implementing these gaps, right? So if you are configuring your Kubernetes cluster out of the box, it's like: here's what the basic checks come with — or maybe it's built into the CLI or something like that — but it's just, if you're going to go with the sort of easy install: here are all the things you're missing. This will get you up and running, but you need to have this covered before you move to production, sort of thing. So it's kind of like a...
J: Yeah, I agree with Tim's sort of reasoning too — sorry — where it's like: you almost want the other policy engines and the external tools to grow and mature and become more useful, right? You don't want Kubernetes to say, "hey, we support this specific thing" all the time; you almost want to look at it like a container runtime, right?
J: But yeah, again, I've seen so many different implementations that I'd rather see it as "here are the different levels," and then it's also an educational tool to say: this is what you're missing; you need to do this. Yeah.
F: Yeah, so one discussion we've had related to that is using the Pod Security Standards to draft either a set of YAMLs, which somebody can use to test and see where they fall across the different levels, or maybe even creating a benchmarking tool — which could be a single binary they run in their cluster — that provides them with that feedback, to say maybe you're at baseline, or you're at privileged, and, you know, recommend other things.
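[As a sketch of what one of those test YAMLs might look like — all names here are illustrative — a probe pod that a baseline-level check should reject, since it requests host namespaces and a privileged container. One way such a YAML could be used is with a server-side dry run, e.g. `kubectl apply --dry-run=server -f probe.yaml`, to see whether the cluster's admission policy admits it:]

```yaml
# Illustrative probe: baseline disallows host namespaces and
# privileged containers, so only a "privileged" level admits this.
apiVersion: v1
kind: Pod
metadata:
  name: baseline-probe        # illustrative name
spec:
  hostNetwork: true           # host namespace: disallowed by baseline
  hostPID: true               # host namespace: disallowed by baseline
  containers:
    - name: probe
      image: registry.example.com/probe:1.0   # illustrative image
      securityContext:
        privileged: true      # disallowed by baseline
```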
D: I just think it's an interesting idea. I think about some of the challenges we've had — with so many different distros — with something like the CIS Kube benchmark. So maybe this would be simpler, but as soon as you said the word "benchmark," my mind split into five different distros, and, you know, how is that going to work? Yep. But that doesn't mean we shouldn't think about it.
B: Right. Well, kind of tying these ideas together: you know, Michael, I think this kind of educational information could come in several different places. I don't know if you have seen or reviewed the Pod Security Standards document, but within upstream Kubernetes as it exists today...
B: ...that would be a good place to have some more of that discussion. So if you have thoughts around there, PR something into that web page, because people would love to hear them. But then also within Kubernetes distributions — or installation tools, depending on how you want to think of them — could be another good place to have some information like that: either in their install documentation or, like, if there's messaging that comes out when you run kind to create a cluster...
B: It's like, "congratulations, now you have a cluster" — and, you know, kind is not a great example, because you really ought not to be using it to run your production workloads — but, you know, whatever your setup tool is, something like that.
J: And what you guys have done — I really haven't contributed to it at all, but you guys, and Tim and everyone who have been prepping these documents — it's been really useful.
B: Presumably we're —
B: Well then, yeah — the docs are there; please feel free.