From YouTube: sig-auth bi-weekly meeting for 20210407
A: Sorry, one second, let me pull that up.
A: So the question here is: when?

A: Right, so this would mostly come into play if, let's say, an exempt user had created a pod and then a non-exempt user edits that pod. How does the policy apply? Or, alternatively, if a pod was already in the namespace, and then the namespace policy was updated so that the pod is now in violation of that policy?
A: Does that mean the pod can no longer be edited, unless that edit makes it conformant with the policy? I'm proposing that the answer is no: we allow edits to anything except the fields that are enumerated here.
A: Yeah, so there's not actually that much of the pod that is mutable, and most of the fields that we care about for the security policy are actually immutable.
A: The ones I listed out here are the seccomp annotations, which are going away in 1.22, and the AppArmor annotations; I listed those out here, but they're actually immutable, so that's sort of irrelevant. We would allow updates to activeDeadlineSeconds and tolerations, both of which are mutable.
A: Yeah, status and nodeName are handled through subresources. I don't think we care about anything in pod status from the policy perspective, so I think it's okay to just ignore status updates.
A: The way the Pod Security Standards are defined today, they don't care about scheduling constraints. I definitely agree with you that there are cases where you might want to have a dedicated set of nodes and have separate policy around that. The way I would tell someone to make that work with the policy we're proposing here is to say: this namespace runs as restricted, or maybe this namespace runs as privileged, and only pods in this namespace are allowed to run on these nodes. So you have a separate policy applied at the namespace level, and you rely on the composability of this rather than enforcing it directly.
A: Yes, you'd have to have a separate policy mechanism. First of all, you need to taint those nodes to prevent anything that isn't tolerating that taint from running on them. Then you would have a policy that says only these namespaces are allowed to have this toleration.
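The taint-plus-toleration setup described here could be sketched as follows. The taint key and value are made up for illustration, and the "only these namespaces may use this toleration" rule would come from whatever separate policy mechanism you layer on top; nothing in this sketch enforces it by itself.

```yaml
# Dedicate nodes by tainting them; nothing schedules there without the toleration.
apiVersion: v1
kind: Node
metadata:
  name: dedicated-node-1
spec:
  taints:
  - key: dedicated          # hypothetical taint key for the example
    value: restricted-pool
    effect: NoSchedule
---
# A pod that a separate policy has permitted to carry the matching toleration.
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: restricted-ns
spec:
  containers:
  - name: app
    image: registry.example/app   # placeholder image
  tolerations:
  - key: dedicated
    operator: Equal
    value: restricted-pool
    effect: NoSchedule
```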
A: There are a few notable corner cases that you have to cover: one is wildcard tolerations, and the other is manual scheduling. So you can't let users create pods with the nodeName preset, and you can't grant the bind permission (on the binding subresource) to users.
A: I don't know if the Gatekeeper policy library ships with something like that. There has been a fair amount of discussion around a built-in scheduling policy that would limit things like node selectors, tolerations, and other things around that. I can find the proposal and link it to the agenda later.
A: Yeah, when that proposal was raised, we were still kind of undecided on the direction we were going with PodSecurityPolicy, and there were some questions around whether we were just going to push all policy out of the cluster, out of built-in things. That was part of the reason it didn't go anywhere.
A: One thing I would caution about there, though: I think having dedicated node pools is not a super simple use case. There are actually a lot of nuances to how you do that, which means depending on it for a security boundary is actually fairly error-prone.
A: Moving on to ephemeral containers. If anyone doesn't know, ephemeral containers are currently an alpha feature. I think they're targeting beta in 1.21, but that may have been pushed back, in part due to some issues that were raised around this proposal.
A: That's only done through the ephemeral containers subresource, but one of the challenges there is that the ephemeral containers subresource takes a different request type. It doesn't take the full pod specification; it just takes, basically, the ephemeral container that you're adding to the pod. What that means is that an admission controller looking at those requests only gets the ephemeral container. It doesn't get the full context for the whole pod.
A: So if your baseline policy says you can't run host network, and you're adding an ephemeral container, you can't tell if it's being added to a pod with host network. One option would be to say that we only check the fields that are on the ephemeral container and ignore, say, host network checks.
A: That kind of goes against what we were just talking about in the previous section, where we don't want you to be able to update the container image on a privileged pod in an unprivileged namespace.
A: Currently, ephemeral containers don't have security contexts, but that's also something we're planning to add soon. I think that slipped out of 1.21 as well, so we will want to check that once it's added.
A: I think it should be safe with the built-in controller, since all, or almost all, of the fields that you'd be checking are immutable. It's possible there could be some sort of edge case there, but if you're running in webhook mode, there would definitely be a race condition.
A: However, I think we're going to change the admission interface for that, so it should start getting the full pod spec once those changes go in. Again, this is a feature that's currently in alpha, so I kind of expect it to change, and now that we have an actual proposal here, we can make sure that they track together.
A: The other thing I want to call out around ephemeral containers is that the target use case for them is debugging, and I think it's a common case that you want to run your production workloads in least-privileged mode, but oftentimes, to effectively debug, you need elevated privileges. The example I gave here is CAP_SYS_PTRACE: if you wanted to ptrace a process for debugging, I believe that requires the privileged profile today.
A: This is probably more of a future-extensions thing to consider, but that would be if we wanted to allow you to have elevated privileges, but only on ephemeral containers.
C: Sorry, I don't remember what the RBAC around ephemeral containers looks like. Is separate control possible for who can create ephemeral containers versus who has permission to create pods? What I'm trying to think through here is whether we can allow this kind of policy-breaking, but only for ephemeral containers, without that actually just meaning that the elevated policy is what applies to most things most of the time. One way I was imagining it:
C: If you allowed the staff who would want to debug things to have impersonate privilege onto system debuggers, then normally they couldn't bypass any of these things. But when they ran kubectl debug, if they added the appropriate --as-group option, then they could add an ephemeral container that would break policy temporarily.
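On the command line, that flow might look roughly like this. The identity names are made up for the example, `--as`/`--as-group` are kubectl's standard impersonation flags, and the exact `kubectl debug` invocation is illustrative (the command was still alpha/beta in this era).

```shell
# Attach a debugging ephemeral container while impersonating an exempt
# identity. The user and group names below are hypothetical; they must
# match whatever identity the policy actually exempts.
kubectl debug my-pod -it --image=busybox \
  --as=system:ephemeral-debuggers \
  --as-group=system:authenticated
```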
A: Yeah, that's interesting. Because ephemeral containers are only added through the ephemeral containers subresource, you can control RBAC on that separately. I like the idea of using impersonate. We said that we explicitly didn't want to allow group exemptions, but that doesn't actually matter for the impersonate case.
A: You could just have, you know, "ephemeral debuggers" or whatever, control through RBAC who is allowed to impersonate that user, and then just grant that user permission to add ephemeral containers.
D: So does that mean that, to do this correctly within a namespace, those debuggers would not be allowed to otherwise create pods? Or would there be a workaround where, because they're able to do that for ephemeral containers, they can also create regular pods and basically do it for everything?
A: Yeah, the way I would suggest setting this up is: you have a made-up username that isn't an actual username anyone could have, you know, system:ephemeral-debuggers or whatever; have it as one of the statically configured exempt users; and then only grant RBAC bindings that allow that user to operate on the ephemeral containers subresource.
A: Just create and update on ephemeral containers, and then just control who has impersonate permission on that.
A: It means that we don't need to build some special second-tier policy level for ephemeral containers, if this workaround seems sufficient. It is a bit of a hacky workaround, though, so we might want to think about whether there's a way to streamline the user experience on this.
C: All of that said, I really like the idea here of ephemeral containers just being subject to the same policy restrictions, because that seems good from a principle-of-least-surprise standpoint: if you just turn this on, then ephemeral containers are not in any way a side door through your policy. That limitation is beautiful in its simplicity, but it is a big limitation.
A: Are there any SELinux experts here, other than Dan Walsh?
A: This has come up a couple of times: do you have any thoughts on what a sensible SELinux policy would be to set for the baseline?
A: Peter Hunt suggested that there might be specific values that could be banned or allowed explicitly, but there's a question there that I don't think they ever responded to.
C: So I am not familiar with the out-of-the-box use of SELinux on any distributions that aren't Red Hat distributions, and I am of the impression that SELinux labels are completely site-specific, modulo the fact that you inherit a ton of your site-specific labeling from your OS installer. So spc_t or container_t, literally speaking, are just strings, but they are special on Red Hat-type Linux distributions because they are hard-coded into what your filesystem looks like when you install the appropriate packages.
A: If there's an equivalent of unconfined for SELinux, then we could just deny that and say anything else goes. Actually, another reason AppArmor is different: with AppArmor there's validation in the kubelet that says you can only run with a policy that's been defined and loaded into the kernel, and only the system administrator can define and load policies.
C: And if so, maybe we can ban those; but otherwise it seems that we can't really have an opinion about SELinux without breaking people's reasonable SELinux use cases, because there's no way for this policy engine to know whether a particular label is actually restricting you or opening up your permissions.
A: Yeah, that makes sense. I guess, short of figuring something out, it's probably best to just say that the SELinux fields are unenforced, and if you're running an SELinux-enabled system, then it's on you to use a third-party controller to set that up.
C: I think that makes sense in light of our targeted use cases, and the fact that running SELinux as a compensating control inside a Kubernetes cluster that isn't OpenShift is actually incredibly difficult. If you can do that, you can certainly also run an external policy controller.

A: Exactly.
A: But that kind of breaks the shared-volume use case. The other option is to say that this policy has no opinion on the SELinux fields and you're on your own. Then there's kind of a third option, which goes against what we're trying to do here, but it's basically to introduce some sort of configuration knob for SELinux that says: here's the allow-listed set of labels that you can use.
A: Yeah, that would certainly be useful. We're constrained by our metrics implementation, Prometheus, in the cardinality of metrics, so anything that's user-definable is sort of a no-go for metrics.
A: Yeah, exactly. It's captured in the audit logs, so you can always implement your own metrics on top of your audit pipeline. You look like you've got something?
D: It sounds like, basically, a single bad deployment could cause that to happen. How do you tune your thresholds, or decide what would be interesting to alert on based on that? Since we don't have labels saying which pod, or which anything, this was, it's kind of hard to really tease that apart afterwards.
C: It tells you to go read your API server logs. But any time that number of denials spikes, it's either a security incident or an operational incident, because somebody tried to do something and they were prevented from doing it. How you would tune them, in terms of whether you want to get somebody up at night, questions like that.
C: I feel like those are site-specific. But any time you have an unusually large number of rejections, something bad is going on, and just from the count you don't have any idea what that bad thing is. Somebody is unhappy; if it's your enemy that's unhappy, that's great, and if it's your friend that's unhappy, you need to help them.
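A denial-spike alert of the kind described here could be wired up as a Prometheus alerting rule. The metric name, labels, and threshold below are assumptions for illustration only; this discussion predates any shipped metric, so treat every identifier in the sketch as hypothetical.

```yaml
# Hypothetical Prometheus alerting rule: page when admission denials spike.
# Metric name, labels, and threshold are made up for the example.
groups:
- name: pod-security
  rules:
  - alert: PodSecurityDenialSpike
    expr: sum(rate(pod_security_evaluations_total{decision="deny"}[5m])) > 1
    for: 10m
    annotations:
      summary: "Unusually many pod security denials; check API server audit logs."
```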
A: Yeah, I think that's true; it's mostly about how metrics are used. They give you a very high-level overview of the system and can be used for alerting or drawing attention to an issue, but when it comes to actually debugging that issue, it's expected that you go to the logs.
A: I don't think that would be too hard to implement. That's an interesting idea.
A: Yeah, I would want some input there. I think we have a metrics review team that explicitly needs to review all metrics changes, so I can get their input on that. Another one that occurred to me is whether we want to separate out create and update requests.
A: I'm not sure anyone cares about that, but it could potentially be interesting.
A: Yeah, we should definitely add exemption counters.
A: Yeah, it should definitely go in the audit logs. I'm not sure if I called that out explicitly here.
C: Yeah, that actually sounds really useful, because I kind of don't care that there's a bunch of admissions happening in namespaces that I've exempted; that's why I've exempted them. But when I'm granting user exemptions, that may be more interesting. Unless I'm granting that user exemption to a controller that is basically a fork of the ReplicaSet controller; but if I'm that fancy, I'm probably not using this.
A: I've lost my window... okay, any other suggestions on metrics?
A: Let's see. Okay, I don't think we can include profile version, for cardinality reasons.
E: From the troubleshooting side, and the CI use case we talked about, the warning would be handy.

A: Sorry, which use case was that?

E: The use case of testing policies on... well, okay, never mind, you're not going to be testing these policies that way. Never mind, I was thinking of something else.
C: I would say, if it's easy and cheap, yes. Off the cuff, I don't think I would want to wake somebody up if the number of warnings goes through the roof. But if I have a cluster and I'm trying to ratchet down the permissions in it, then I could apply a more restrictive policy as a warning config on some namespaces, and without the metric I would have to go to the cardinality of audit log events that show that warning.
A: Okay, yeah, I'll add that as a nice-to-have and consult with some SIG Instrumentation folks.
A: We could maybe validate that it was a valid version less than or equal to the current version, and just lump everything else into a generic future version.
C: That could be interesting: if you didn't think you were using any pinned versions and then suddenly saw some metrics on previous pinned versions, it could be a sign that something weird was happening.
A: Yeah, I do worry a little about the cardinality. Hopefully no one is actually running 21 different versions in their cluster, but at the upper end you have 21 times three times three: the three profile levels times the three modes (enforce, warn, and audit).
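The worst-case label cardinality mentioned here works out as a quick check; the counts come straight from the discussion (21 possible pinned versions, three profile levels, three modes).

```python
# Worst-case metric label cardinality from the discussion:
# 21 pinned versions x 3 profile levels x 3 modes.
versions = 21
levels = ["privileged", "baseline", "restricted"]
modes = ["enforce", "warn", "audit"]

combinations = versions * len(levels) * len(modes)
print(combinations)  # 189
```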
A: Let's see. Most of the production readiness section I ignored, since it's more for going to beta or GA. We've already talked a bit about version skew; monitoring requirements and dependencies should be pretty minimal.
A: Troubleshooting: we've talked about audit annotations.
A: Optional future extensions: maybe it's worth going through this quickly, since I know we're getting close to time. I don't want to talk through the details of these, but if anyone feels like something here should be part of the main KEP and not part of optional future extensions, let me know. The first one is a way to roll out baseline-by-default.
A: Okay, I'll leave it here for now. PodSecurityPolicy migration we've talked about a bit. I would love to see this, but I think it makes sense to leave it in future extensions for now; basically, that just means we are not going to plan on working on it for alpha.
A: Yeah, exactly. If someone wants to start working on this, by all means go for it, but I don't think it needs to be part of the core KEP for now.
A: We may want to think about doing something around 1.24 or so, but I'll leave this here for now.
A: Custom profiles: this would just be an extension mechanism, so that I'd be able to say, for example, you're allowed to set "deny" or "allow host network", and have a custom third-party controller that enforces the host-network policy.
A: In practice, the built-in admission would just ignore these levels and treat them like privileged, and expect that you've configured a third-party controller to handle them.
A: It's more explicit, so that when you're looking at your namespaces you can separate out "this one is actually privileged" versus "this one has my custom implementation enforcing on it."
A: The use case here is: say you have your enforcing policy pinned to 1.20 and you want to update that now to 1.22.
A
The
122
version
to
let
users
know
if
they're,
using
a
feature
that
is
going
to
be
denied
in
the
future.
You
might
want
to
set
a
warning
message
that
says
something
like
you
know:
here's
the
date
that
you
have
when
this
will
stop
working
or
here's
an
email
address
to
contact.
If
you
need
an
exemption
from
this
policy
or
whatever.
D: Is this on the namespace, this future annotation?
A: Yeah, that's where I was picturing it. Actually, since we're almost at time, I just quickly want to touch on Windows support. We've talked about how, even without explicit Windows support...
B: No, I think they're different, right? I think that's why there was a proposal around another spec for Windows pods.
A: ...documented. All right, that's all we have time for today. I think we are getting close on this proposal; I'll probably have at least one more of these meetings. Thanks to everyone who was able to join today, and have a good afternoon.