From YouTube: sig-auth bi-weekly meeting 20200203
Description
No description was provided for this meeting.
A: Hello, everybody. This is the February 3rd, 2021 meeting of SIG Auth. We have a pretty full agenda today, and I know that pod security policy replacement has been kicked off of previous meetings, or I guess the last meeting was canceled. So I'm going to time-box the other things, and I think we should also discuss whether we want to have breakout sessions while PSP v2 is hot, just to kick off that work stream.
A: So, yeah. Let's try to get through the PRs of note and announcements in the first 10 minutes. Do you want to kick off this first announcement, Jordan?
A: Cool. Marosset, do you want to?
C: Yeah, so this is continuing with Windows privileged containers. I think we brought this KEP to a couple of SIG Auth meetings in 1.20, and there were a couple of design changes that I can highlight here, to keep things brief. We are pursuing getting this KEP to an implementable state and alpha, and I wanted to circle back and just make sure we're in alignment with SIG Auth around all the security policies.
C: When we last discussed this, there was a plan to just reuse the existing securityContext.privileged field. After a number of discussions with SIG Node, and some internal discussions, I think we're deciding to introduce a new field on the Windows security context options. I've highlighted some of the reasons in the KEP, but at a very high level, the reasons we're doing that are: one, there's no pod-wide privileged field on the security context today, which matters for Windows privileged containers.
C: For the new field on the Windows security context, we are actually deciding not to call it "privileged" either, for a number of reasons that are also mentioned in the KEP. The biggest one is that these containers will work quite differently and have vastly different capabilities than Linux privileged containers.
C: So we wanted to avoid the confusion we usually see, where people who are more familiar with Linux come in and assume things work the same way on Windows, and maybe don't go through all the documentation. We're still bikeshedding a little bit on the name.
C: But right now we're calling them host process containers, and that's a little bit of a carryover from how they're implemented (the KEP goes into a lot of detail about that), where essentially these privileged containers are packaged up like containers but are actually run as a kind of isolated Windows job object on the host, and the process runs in the host namespace.
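For reference, a rough Go sketch of where the proposed flag would sit on the pod API as described above. The field name HostProcess, its placement under the Windows security context options, and the image name are the proposal under discussion here (plus illustrative placeholders), not a settled API:

```go
// Sketch only: the proposed Windows "host process" flag lives on the
// Windows-specific security context options rather than reusing
// securityContext.privileged.
package hostprocess

import corev1 "k8s.io/api/core/v1"

func boolPtr(b bool) *bool    { return &b }
func strPtr(s string) *string { return &s }

// hostProcessPodSpec shows roughly where the proposed flag would sit,
// alongside runAsUserName, which the speaker notes is the main way of
// restricting what such a container can touch on the host.
func hostProcessPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		HostNetwork: true, // initially these pods must join the host network
		Containers: []corev1.Container{{
			Name:  "win-host-agent",
			Image: "example.com/win-agent:latest", // placeholder image
			SecurityContext: &corev1.SecurityContext{
				WindowsOptions: &corev1.WindowsSecurityContextOptions{
					HostProcess:   boolPtr(true), // proposed field, deliberately not "privileged"
					RunAsUserName: strPtr(`NT AUTHORITY\SYSTEM`),
				},
			},
		}},
	}
}
```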
C: So I just wanted to bring this up to raise awareness and see if there are any discussions that need to happen. I believe when we discussed this in 1.20, members of the SIG Auth community felt that if we were reusing the existing privileged field there wasn't much concern here, but since we are changing that, I wanted everybody to have a chance to review and provide feedback.
B: I'd probably direct that to the design itself, just to keep this time-boxed. I had made a comment about existing policy things that look at the privileged field sort of as a stand-in for "you're a powerful pod," and a new field that shows up will definitely get missed by existing policy enforcement mechanisms that think they're keeping out privileged or powerful things. That's my main concern. I haven't revisited the design since you made the updates, so I'll take a look at it.
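A minimal sketch of the concern being raised: a policy check written against the Linux-style privileged field never inspects the Windows options, so a pod using the proposed host-process flag would pass it untouched. The helper below is illustrative only, not any real policy engine:

```go
package policycheck

import corev1 "k8s.io/api/core/v1"

// blocksPrivileged mimics an existing "no privileged pods" rule.
func blocksPrivileged(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		sc := c.SecurityContext
		if sc != nil && sc.Privileged != nil && *sc.Privileged {
			return true // caught: Linux-style privileged container
		}
		// Nothing here inspects sc.WindowsOptions, so a host-process
		// container (equally powerful on a Windows node) is not caught.
	}
	return false
}
```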
C: The same way; that's one thing that we went back and forth on between SIG Node and myself in SIG Windows. There is no concept of a privileged field today in Windows, even with Docker, I believe, so outside of Kubernetes too.
C: The other reason, which I forgot to mention earlier but do want to highlight here too, is that the main way of restricting access to host resources with these privileged containers is the runAsUserName and/or credential spec fields that already exist on the Windows security context. We felt that having those live on the same object as the new field would potentially outweigh the risk of having it missed by policy enforcement, and we're hoping that, with a lot of documentation and by reaching out to those policy enforcement tools, we can get them to support Windows in a more Windows-focused way.
D: Maybe I can ask a more specific question. If I want to prevent, say, host network access, that's going to be listed for a Linux pod in the security context, or a peer to it in Windows; does it control the same thing? Like, if I'm controlling those fields for Windows pods, does it make sense today already, or is it already the case today that... like, the answer.
C: In the case of Windows, for host network specifically: host network doesn't work on Windows today. That was actually discussed in some of the KEP comments. There's a very hacky way of getting it to work with dockershim that requires you to manually configure certain named host networks on the node, and containerd doesn't support it. Where applicable, we would like to reuse the same fields, like hostname or host network.
C: I've commented here that, initially, these privileged containers will always inherit, or must always be joined to, the host network, and in cases like that we are going to enforce it in API validation and, in particular, with some checks in the kubelet, so that existing security context fields that are useful or applicable to these new pod types will also be validated.
A: Yeah, it seems like this is the same problem that we were facing when we were talking about the first versions of PSP, where these constraints aren't portable and we can't really guarantee compatibility of, I guess, the security guarantees from release to release. That isn't particularly surprising to me, and it seems like documentation, and action directed at the common policy engines, or I guess policy library maintainers, is a reasonable course of action to get over this.
E: These questions would be easy to answer, but it's not that simple, and nobody's talking about burning down all of pod spec in order to make sure that Linux and Windows pods are on an equal footing as far as how much they're respected by the pod spec. But yeah, it does seem like accepting that the pod spec is very Linux-specific, and then putting all of the Windows things somewhere separate, in the Windows options or a Windows pod; yeah, that also seems like a thing.
C: And actually, we're having the same discussions in the CRI spec too, and this KEP actually does address that. Right now there's a Linux pod sandbox config field, and there's a Linux container security context and pod security context field, and with this KEP (the SIG Node reviewers acknowledged this and think it is the way forward) we're introducing Windows-specific fields for all of those and mapping them to that.
C: I don't think we necessarily wanted to tackle whole pod-spec-level changes with this KEP, but if there's a desire to do that, we can maybe keep moving forward here or with that. But we are seeing more and more instances of this, where for a lot of the fields we're having a hard time deciding whether we should try to mash the Windows constructs into the existing API structure or break that out.
A: So let's review the changes offline; sure. Nothing you said sounds terribly concerning to me, but maybe Jordan and David can weigh in on the KEP, and whoever else has thoughts. Let's briefly check out this node service log viewer. I took a quick look at this. Do you want to intro it and tell us what the implementation and alpha timeline is that you're looking at?
F: So what we're trying to do here is this: OpenShift has a feature that allows us to view node logs from journald services, which we have now extended to also support Windows services, and we would like to upstream that. The way we're trying to upstream it, after talking to the SIG CLI folks, is to extend kubectl logs to also work with nodes and leverage the existing logs endpoint, which is already present in the kubelet, to also work with, in particular, systemd services and on the Windows side.
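For context, a sketch of the mechanism being described: reading a file from the kubelet's existing logs endpoint through the API server's node proxy, which is roughly what a node-aware kubectl logs would wrap. The exact command-line UX was still under discussion at this point, so treat the shape of this as an assumption:

```go
package nodelogs

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// nodeLogFile fetches one file (for example "messages") from a node's
// kubelet logs endpoint: GET /api/v1/nodes/<node>/proxy/logs/<file>.
func nodeLogFile(ctx context.Context, cs kubernetes.Interface, node, file string) (string, error) {
	data, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("logs", file).
		DoRaw(ctx)
	if err != nil {
		return "", fmt.Errorf("reading %q from node %s: %w", file, node, err)
	}
	return string(data), nil
}
```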
C: Yeah, today on Windows there's no equivalent of journalctl for Windows nodes, and this would potentially allow folks to grab arbitrary Windows event logs, which may or may not contain sensitive information; it's definitely higher risk than not exposing those. So we just wanted to make sure that all of that is considered and raised and reviewed.
A: Sounds good. And are you trying to get this into an implementable state by the enhancement freeze?
C: Yes, I believe so. This has been discussed in SIG Node, SIG CLI, SIG Windows, and now SIG Auth.
C: The KEP outlines changes to the kubelet to stream these logs, and hopefully that's going to be gated by feature gates, and then there were also questions on how to make similar changes in kubectl to be able to relay those to users. We would like to have this in an alpha state for 1.21.
G: A very high-level question: what RBAC is checked when you try to make this call? Like, what permissions must you have before you can start scraping those logs? Is it just nodes/logs, or is it something magical?
F: I don't remember exactly, but from what we understood from the SIG CLI folks, the minute you add support for viewing node objects when you extend kubectl logs, whatever RBAC is present for those objects and those endpoints will apply here too. We don't need to do anything extra, is what I've understood.
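Assuming the feature rides on the kubelet's existing node subresources, as the answer above suggests, the RBAC a log-scraping caller would need looks roughly like the sketch below. The exact subresources were not settled in the meeting, so the resource names here are a guess:

```go
package nodelogs

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeLogViewerRole grants read access to the kubelet's log and proxy
// subresources, which is the kind of permission being asked about here.
func nodeLogViewerRole() *rbacv1.ClusterRole {
	return &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "node-log-viewer"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},                         // core group for nodes
			Resources: []string{"nodes/log", "nodes/proxy"}, // kubelet subresources (assumed)
			Verbs:     []string{"get"},
		}},
	}
}
```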
A: Okay. I think people should review this offline, hopefully soon, considering we're coming up on the enhancement freeze. But I haven't looked at it much in detail, so I can't comment on it yet; I will try to formulate some opinions.
B: So yeah, I guess we have two different proposals out right now, but maybe before we... well, no, I changed my mind on that. I think it's worth having the beginnings of a discussion, and then we can see where we are in half an hour, or however much time Mike wants to give us, and figure out next steps and whether we need more of a breakout session to get through this. But yeah, right now...
B: We have two different proposals out. One is the PSP++ proposal that's up right now, if you haven't had a chance to see it. I would say the high-level objective of this one is to take something that's similar to pod security policy as it is today and clean it up: try to fix a bunch of the problems that we've called out with that proposal, and put forward something that is similar, though not quite so similar as to be an actual PodSecurityPolicy v2.
B
And
it's
not
configurable
beyond
just
enforcing
that
bare
minimum,
either
enforcing
or
not
enforcing
a
namespace
level.
The
that
bare
minimum
policy
and
the
idea
there
is
that,
once
you
move
beyond
the
need
for
that
bare
minimum,
then
we
expect
users
to
go
to
all
the
way
to
some
more
advanced
third-party
plug-ins,
such
as
opa
or
gatekeeper
workfare.
Now.
E: So then, jumping off of what Tim said: ultimately these are both walking in the same direction of giving something that is out of the box and easy to use for simple use cases, in order to ensure that folks who lack either the sophistication or the desire to go to something more like an external policy controller can still have a feeling that running a normal pod is consuming a fungible compute resource, rather than, you know, giving a root SSH session to every node in the cluster.
E
And
you
know
which,
how
how
much
flexibility
we
want
to
give
them
how
how
soon
in
a
growth
of
sophistication
journey.
They
have
to
jump
to
an
external
controller
is,
is
essentially
the
decision
that
that
we
need
to
make
if
we're
trying
to
choose
between
these
two
between
these
two
policies
and
that's
not
really
a
question
with
a
right
answer.
So
so.
Therefore,
I
was
hoping
that
we
could
all
have
our
feelings
out
about
it
and
and
get
to
a
point
where
we
felt
good
about
choosing.
B: Yeah, just to add to what Tabitha said about there not being a right answer.
B
There's
a
bunch
of
trade-offs
in
that,
and
you
know
the
more
the
more
advanced
we
make,
the
out
of
the
box
solution,
the
long
sort
of
the
longer
that
users
will
be
able
to
stay
on
that,
but
also,
in
my
opinion,
the
longer
they
stay
on
that
the
harder
the
transition
to
another
controller.
D: Not entirely, right? You have to have a way to avoid using anything you build in, in order to have an on-ramp, right? Like, if you're already using the built-in thing and you need to expand, you have to have a way to transition from one to the other, and that's going to be in the design of this admission plug-in and API.
B: ...to a cliff. Normally you need some way to sort of have an exception or punch a hole in the policy. Like, normally the ramp is: the built-in thing that's simple is working well, and then I come up with an exceptional case, and I realize the built-in thing doesn't work for my exceptional case.
G: Yeah, so my comment had been more around the idea of making it easier to run external policy engines, right, and to me that is an API machinery problem. Like, theoretically, if I could easily run OPA as a static pod, guaranteed, in a way where the only way the kube-apiserver couldn't talk to it is if they were scheduled together and they both blew up at the same time.
D
I
mean
I
think
that,
even
if
you
were
to
build
that
and
it
is
possible
to
write
a
static,
pod
controller
right
like
a
workload
controller
that
creates
static
pods
for
you
that
it,
you
would
still
need
to
design
the
the
first
api
to
give
you
an
on-ramp.
Instead
of
a
cliff.
E: Yeah, I'm not as sure about that. Going back to Tim's comment about the more advanced users of the out-of-the-box software having trouble moving on to a different thing: I disagree, because making advanced use of an out-of-the-box thing, even PSP today, fundamentally requires you to understand what your workloads are and what kinds of restrictions you want to put on them.
E
First,
in
a
philosophical
way
and
then
translate
that
into
into
those
policies,
and
so
you've
already
done
the
hard
work.
Then
switching
to
a
third
party
controller
is
a
matter
of
learning
the
syntax
for
that
third-party
controller
and
deciding
how
to
implement
those
same
feelings
in
it,
and
so,
like
I
feel
like
if
they've
made
sophisticated
use
of
of
one
controller.
E: ...moving to another controller is mostly operational problems, rather than the harder business or philosophical problems. And if we have a flexible built-in option, like what's in the PSP++ option, then if a separate third-party controller wants to capture that market share, the people who want to move beyond PSP++, well, the PSP++ objects are in their cluster already.
E
They
could
write
a
tool
that
would
translate
from
from
psp
plus
pluses
into
whatever
their
policy
object
is,
and
that
feels
like
they're,
the
ones
that
have
the
incentive
to
do
so,
rather
than
rather
than
us.
You
know
there
could
be
15
different
commercial
options
for
kubernetes
policy
controllers,
but
I
don't
think
it's
on
us
to
write
a
translator
into
all
of
their
different
policy.
E: But Gatekeeper, for example, has done quite a bit of work to make it easier for people to transition from PSP to Gatekeeper, with the work they've done in their policy library, and I have actually used that. It wasn't trivial, but it was less work than initially figuring out what the policies are and how to apply them without hurting the...
A: I guess your point is that that should potentially be their responsibility, if they want to capture some section of the users of PSP++ or the out-of-the-box solution, versus ours, I guess.
E: Yeah, I mean, we also ship kubenet, but we don't ship tools to make it easy to move from kubenet to Calico, or kubenet to Cilium, and so on. But the providers of those tools, as part of finding product-market fit, are trying to figure out who the users are that aren't using them already and want to, and what their needs are, and they provide tools to address those needs.
A
Right
so
I
think
the
migration
is.
I
think
it
is
also
only
one
of
a
number
of
problems
with
running
third-party
admission
controllers,
so.
D: Yeah, our design can clearly make that scarier or less scary, and if the barrier to entry is "annotate a namespace" and it's less scary, I would take that barrier to entry, as a for-instance. I don't need to write their policy for them, but I do want to make it possible for someone to do this and not be afraid that it's going to suddenly do something.
E: I mean, I feel like both the bare-minimum and the PSP++ proposals do have a way that you could run in a two-legged mode like that, because in either case, if you want to have both policy engines running without fighting each other, you decide on a namespace-by-namespace basis which policy engine you want to be, quote-unquote, active for that namespace, and then in the other policy engine you make that namespace fully privileged.
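A sketch of the per-namespace switch being discussed: mark a namespace so the built-in plugin stands down there and the external engine enforces. The label key and value below are hypothetical stand-ins; neither proposal had fixed a concrete mechanism at this point:

```go
package policyswitch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// externallyPolicedNamespace marks a namespace so that the built-in admission
// plugin treats it as fully privileged and an external policy controller is
// the one doing the enforcing there.
func externallyPolicedNamespace(name string) *corev1.Namespace {
	return &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			Labels: map[string]string{
				// Hypothetical key: "who enforces pod security in this namespace?"
				"example.k8s.io/pod-security-enforcer": "external",
			},
		},
	}
}
```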
B: So when we had talked about taking sort of "PSP: the good parts," the PSP++ proposal actually kept more of PSP than I was expecting. I felt like it kept PSP's good parts and also the mediocre parts, and then it cut out the binding thing; the binding thing was the really bad part, so I was glad to see that go.
B: As for the discussions about whether we have a different object name or a different object version: if we were actually going to keep schema compatibility with the existing PSP API, maybe deprecating the fields that were specifically for mutating pods, I could actually see our way to having a second admission plugin that looked at the same API and got bound a different way, that used different binding objects to decide what policies applied, if we were happy with the PSP API as a going-forward thing.
B
This
is
the
api
we
will
maintain
and
add
to
and
keep
matching
the
pod
spec.
I'm
not
convinced,
that's
the
api.
We
want
to
keep
and
maintain
and
like
sort
of
perpetually
expand
to
match
new
fields
that
come
into
pod
specs.
So
it
feels
like
that's
sort
of
the
main
question
to
answer.
First,.
E: I think it's miserable but inevitable, because pod spec is itself pretty huge and sprawling and will continue to change, and anything that wants to enforce policy over pod spec will have to reflect that complexity somehow. So, using the Gatekeeper library example: one way to do that is, each time pod spec gets more complicated, you have to add more policies to your cluster. The other way to handle it is the PSP way, where each time pod spec gets more complicated...
E
You
have
to
add
more
fields
to
your
policy
object,
but
I
would
love
to
hear
a
third
way,
but
I
don't
think
there
is
a
third
way.
I
think,
as
long
as
podspec
is
miserable.
All
of
us
who
are
yoked
to
it
will
also
be
miserable,
and
the
best
thing
that
we
can
hope
to
do
is
ship,
something
that
makes
that
easier
to
digest
for
users.
B
I
the
thing
I'm
wondering-
and
I'm
we've
struggled
with
this
with
psp
from
the
beginning
like
what
what
is
the
philosophy
of
pod
security
policy
like
what
decides
whether
it
cares
about
a
field
or
not,
so
you
can
do
things
with
pods
like
you
can
set
spec
node
name,
you
can
say
I'm
going
to
create
a
pod,
and
I
want
to
run
on
that
node.
If
you
can
express
a
preference
about
what
node
you
want
and
pod
security
biology
doesn't
control
that
you
can
say.
B
I
have
pod
anti-affinity
and
I
require
like
to
run
at
high
priority,
not
scheduled
with
pods
in
these
namespaces.
So
you
can
do
things
that,
like
disrupt
names,
pods
and
other
names,
just
disrupt
scheduling
and
pause
in
other
name
spaces
and
pod
security
policy
doesn't
have
an
opinion
about
that
cloud.
Security
policy
is
like
you
can't
mount
the
host.
You
can't
run
privileges,
you
can't
run
host
network.
B
I
haven't
seen,
and
I
mean
it's.
I
was
one
of
the
ones
who
helped
come
up
with
psp,
so
we
didn't
come
up
with
a
particularly
coherent
philosophy
of
what
exactly
does
was
the
line
for
like
what
pod
security
policy
includes
the
more
api
we
expose
the
more
policy
api
we
expose.
It's
not
an
unreasonable
assumption,
like
if
I
lock.
If
I
came
up
with
a
psp
and
locked
my
cluster
down,
I
thought
I
locked
my
cluster
down.
B
I
would
not
expect
pods
to
suddenly
become
more
powerful,
like
in
the
next
release
and
like
expose
all
these
additional
holes,
so
I
would
expect
the
policies
that
I
wrote
last
release
to
still
be
locking
my
spaces
and
pods
down,
and
we
we
really
struggled
with
like
a
psp
manifest
from
you
know
last
year,
must
not
get
more
permissive
or
less
permissive
and
like
how
different
people
had
different
opinions
on
that
different
clusters
have
different
opinions
on,
like
oh
yeah,
that's
clearly
a
more
powerful
pod.
You
can't
allow
it
like.
B
Oh
no,
that's
unrelated
to
pod
security.
You
should
just
allow
whatever's
there
like.
So
if,
if
we're
going
to
go
down
the
road
of
maintaining
this
sort
of
sprawling
api
that
maps
one
to
one
ish
with
podspec,
I
think
we
need
to
describe
what
that.
What
that
philosophy
is
like
what
decides
whether
it
goes
in
here
or
not,
and
what
an
existing
policy
object
will
do
for
new
fields
or
show
components.
E: Those aren't really pod issues; they're more like "what do you let people do with your API" kinds of issues. So, I don't know, I could maybe see a cluster security policy, or a scheduling security policy, that would have opinions on those things, but I was really thinking of "you can't break out of the container" as the scope, and I tried to make that clear when I was writing. But, I mean, what do you...?
J: And I think bringing up the cluster policy is a good way of saying what we probably need to reject: we can't create something that closes over all possible complexity. We can't define a cluster policy, because it would be intractable, and then someone would add an extension that breaks it. So, whatever the scope is... we thought about policy as primitives that would be used.
J
Originally,
that's
a
little
bit
of
you
know
if
he
shares
some
mindset
from
that,
but
if
we
flip
it
around
and
say
something
as
concrete
is
what
you
just
described
as
the
goal
of
you
know
preventing
preventing
a
x
from
doing
y,
because
that
is
a
reasonable
expectation
and
then
there's
a
set
of
problems
that
are
unbounded
that
are
left
as
an
exercise
to
the
integrator,
because
integrators
could
bring
in
a
a
volume
plug-in
that
is
completely
insecure
and
that's
not
our
fault.
J
B: I think the points where I had difficulty were understanding what the policy is doing around things like hostPath mounts, or CSI, like allowed CSI drivers; things where you can't actually reason about just their existence. You could give a read-only hostPath mount to, you know, /etc/hosts or something, and that doesn't let you escape the container; that's just injecting a useful...
B
You
know
hosts
file,
but
if
you
host
mount
a
read-only
volume
to
like
cubelet
pki
well,
now
you've
got
node
credentials
and
so
the
the
what
is
exposed
by
the
api
of
psp
lets.
You
express
a
wide
variety
of
policies
that
you
can't
actually
look
at
programmatically
and
say
yeah
this.
This
is
protecting
against
standard
breakout.
B
Similarly,
for
csi
drivers,
right,
like
you
can
say,
I
allow
c
to
csr,
driver,
fubar
and
baz,
and
those
are
extension
points
that
who
knows
what,
through
barbados,
do
in
that
cluster
and
so
by
having
the
api
surface,
be
bigger
and
more
expressive
than
what
we
really
want
to
express
like
the
pod
security
standards,
we're
inviting
people
to
use
it
to
express
all
kinds
of
policies
like
all
along
the
spectrum,
and
that
seems
to
be
straying
past
the
like
protect
against
container
breakouts
in
a
reasonable
way
and
give
a
good
ramp
to
like
more
complicated
policies.
B: Yeah, like, I've seen crazy ones that allow lots of different host paths but read-only, and really particular CSI drivers, and it's hard to know what the intent of the policy author was. When we're considering a new pod field, for the built-in ones it's a little easier to understand: if you say you can mount a secret, then it's also reasonable to let you mount a projected volume with a secret data source. Like, that was...
B
Well,
if
your
policy
lets
you
mount
a
secret,
then
it's
also
reasonable
to
let
you
mount
a
secret
a
different
way,
even
though
you
didn't
explicitly
opt
into
projected
volumes
right
so
because
we
understand
sort
of
the
meaning
of
those
things,
we
can
be
more
intense,
driven
in
how
we
apply
the
policy,
but
the
more
low
level
you
make
your
policy
primitives
the
harder
it
is
to
understand
the
intent
behind
what
they
were
doing,
and
so
my
ideal
would
be
something
with
a
small
surface
area.
Small
api
surface
area
that
we
could
tie
to.
B
Like
the
pod
security
standards,
maybe
a
little
more
than
tim's,
like
I
kind
of
like
having
some
level
of
versioning
like
this-
was
version
one,
the
baseline
or
version
one
of
the
restricted,
but
not
sort
of
this
we're
going
to
have
an
api
around
every
api
on
a
pod
spec
because
it,
the
psp
api,
is
actually
more
complicated
than
the
pod
spec
api
for
the
fields
that
it
addresses.
B: David, I definitely want to keep talking about this. I'd be happy to do a breakout session in a second meeting, if people are up for that.
B
I'd
like
to
propose
taking
the
alternate
weak
slot
of
sig
off
to
move
forward,
at
least
once
suspect.
We
might
need
more
than
that
to
move
forward,
and
I
like
going.
D
The
opposition,
the
opposite
spot
of
this
is
api
machinery
and
that's
a
lot
of
the
same
folks,
good
idea,
but,
like
that's
me,
jordan,
clayton
mo
I'll
I'll,
skip
I'll,
skip
api
machinery
I'll
skip
at
the
same
time.
We'll
see
what
happens
to
it.
A
And
that's
a
one-off
breakout
for
now
and
then
we
can
decide
in
there
whether
we
need
to
repeat.
B
Yeah,
if
people
I
kind
of
gave
my
my
opinion
about
like
what
sort
of
the
first
question
to
answer
was
if
other
people
have
something
similar
like
there's
a
lot
of
questions
and
we
can
easily
get
like
down
into
the
weeds
if,
if
there's
sort
of
a
primary
question
that
you
have,
that
you
think
we
should
answer
might
be
good
to
kind
of
gather
those.
So
we
talk
about
the
big
questions.
First,.
H: So on that note: if people never came up with what pod security policy's philosophy was in the first place, and now we are talking around our competing philosophies of what it ought to be, might it make sense for everybody to actually take a moment to ask, what is the philosophy of pod security policy, actually? I think that a lot of these questions might be settled by that, if we just step back several steps and ask what we are doing here. I wonder if a lot of these things that we are speaking around might actually be spoken to directly.
A: Yeah, that makes sense to me. Let's all come up with philosophies independently and then meet next week and compare.
A
The
authors
of
these
two
psp
docs,
do
you
mind
giving
comment
access
to
kubernetes,
sig
auth,
google
group.
D: Oh well, it's for discussion, actually. There are a couple of different choices to decide what to do. You know, we built an API server that we embed into many spots, and it's in places you wouldn't even think of, right? So, for example, the kube-scheduler or kube-controller-manager actually runs a server inside of it that exposes metrics, version, healthz, readyz, livez checks. Each of these by default uses something called a delegated authorizer and a delegated authenticator.
D: The question is: do we want to keep doing that? Or can we look and say, you know, scraping is a fairly common thing, metrics is common; we want to have a policy local to the binary that says I don't have to go out and check, because I know that if user foo hits my metrics endpoint, he should be allowed. I don't need to go and ask, because inside of my deployment (and it is going to be deployment-specific), this is expected, normal; it should have access.
D: I think Tim had actually identified something similar; I saw he had linked to a PR comment he made a while back. And so, as we think about that, we then have to think about what mechanism you would use to make this happen, and there have been several ideas.
D: One brainstormed idea is to use our old friend ABAC, which is actually designed as a file on disk, local to a binary, that would grant access, and it's actually well suited for this sort of purpose, where I have a very small number of rules and I can express them in like two lines or less.
D
It
disadvantages
like
people
don't
use
abac
anymore,
but
it
still
exists.
We
actually
do
still
have
it.
Another
option
is
to
add
an
argument.
You
could
have
a
structured
argument
that
made
it
work.
The
pr
I
made
actually
did
that
it's
it's
easy.
It's
constrained,
it's
not
very
expressive,
so
it
would
never
grow.
I
don't
know
you
could
see
it
as
good
or
bad.
It
is
kind
of
ugly.
I
will
admit
that
another
option
that
was
suggested
was
to
read
an
r
back
file
like
a
series
of
rbac
manifest
off
a
disk.
D
You
know
that
and
then
try
to
evaluate
that
locally
with
an
rbac
evaluator.
You
know
that's
a
well-known
api
I'll
grant
it
that
it's
really
big
for
the
task.
Rbac
was
built
to
create
sort
of
extensible
policy,
and
this
is
this
is
not
that,
and
I
guess
I
will
mention
the
last
possibility-
is
to
create
a
longer
auth
zcash
just
make
it.
I
don't
know
really
long.
The
problem
with
that
is
that
it
impacts
every
request,
which
is
not
equivalent
to
saying
this
particular
user.
D
So
that's
the
problem,
the
solutions
we
thought
of
so
far,
I'm
open
to
other
solutions
or
comments.
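As an illustration of the "policy local to the binary" idea described above (not any of the specific mechanisms listed), here is a small authorizer sketch that short-circuits a known metrics scraper before falling back to the delegated authorizer. The hard-coded user name and path are assumptions for illustration:

```go
package localauthz

import (
	"context"

	"k8s.io/apiserver/pkg/authorization/authorizer"
)

type localMetricsAuthorizer struct {
	delegate authorizer.Authorizer // the usual delegated SubjectAccessReview authorizer
}

func (l localMetricsAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
	// Local rule: a known scraper may hit /metrics without a round trip
	// to the kube-apiserver. User and path are deployment-specific assumptions.
	if !a.IsResourceRequest() && a.GetPath() == "/metrics" &&
		a.GetUser() != nil && a.GetUser().GetName() == "system:serviceaccount:monitoring:prometheus" {
		return authorizer.DecisionAllow, "allowed by local metrics rule", nil
	}
	// Everything else still goes out to the delegated authorizer.
	return l.delegate.Authorize(ctx, a)
}
```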
G: ...thing; I still think that's the best thing. But my general feeling was, if you started using ABAC for this, I would presume you would, I don't know, make that API look like the rest of the Kubernetes APIs and give it a proper TypeMeta version structure, and nicely get rid of all the awful TODOs in that structure.
D
I'd
be
open
to
making
it
v1,
I
think
I'd.
I
want
to
first
bring
it
as
beta
and
then
get
it
on
its
three
release,
beta
plan
and
so.
E: But now you're encouraging people to mix RBAC and ABAC, and ABAC is fine, except that once you have more than three rules you can never figure out what your rules are. So I am afraid, and I'm sorry to make a slippery slope argument, but I am afraid that the good idea about ABAC is "not even once," because ABAC once is fine, except that now you're not thinking of ABAC.
H: As an attacker, I would just like to say that, in my experience, people stopped using ABAC not only because RBAC solved the problem better, but because people could break it six ways to next Tuesday.
B: ...being confusing to have a static RBAC manifest with references to objects that don't actually exist in the API, like having a roleRef to a ClusterRole which only exists in the local static file.
B
That
especially
gets
strange
when
you
start
mixing
scopes
like
a
namespace
binding
to
a
cluster
scope
thing,
the
plus
side
of
our
back
is
you
could
try
out
your
policy
just
using
our
back
and
then
once
you're
happy
with
it,
you
like
get
export
to
yaml
and
throw
it
in
a
static
manifest.
So
this
is
like
we're
developing
it.
I
kind
of
like
it
mixing
two
scopes
like.
Oh,
it's,
the
our
back
roll,
which
are
back
roll,
a
static
one
in
a
local
file
or
a
live
one
in
the
api.
B: It's like just a case of kind of generic service-to-service authentication, which is a problem that has been solved a number of ways by service meshes.
B: This is authorization; sorry, authorization. And I think the common way to solve it in that case is to have your central policy definition and just push that definition down into the endpoints, to do authorization at the endpoint. So I wonder if there's something we could build that would basically do just that, where you still define your RBAC in the same way, but you can link in this library or this sidecar or whatever, that is capable of caching that policy and enforcing it locally.
A: I want to be respectful of everybody's time, and we are already three minutes over, so I think we should hold off on this discussion and maybe continue it in the SIG Auth Slack channel. Apologies to the last item.