From YouTube: sig-auth bi-weekly meeting for 20210324
D: Hey, this is Pushkar. I'm new to the sig-auth agenda template, but I'm happy to take notes if somebody can point me to it; I have the doc open on my side.
A: Thank you, I'll drop a link in the chat. You may need to join the sig-auth Google group in order to have editor access on it.
A: Okay, where are we? All right. I think we can debate what color to paint this bikeshed later in the meeting, if there's time for it. Actually, before I jump through the unresolved sections: is there any big overarching feedback; comments, concerns, or questions on the overall proposal?
A: Okay. I thought there was another one, just talking about conformance. Thinking through conformance, "I'm not sure this meets the default bar today"; I didn't really understand whether that was an objection to the enabled-by-default, or...
C: I think it's a concern about whether a conformance test would require this admission plugin to be present, and require certain pieces of configuration for it. You can imagine things like a default enforcement level; you probably wouldn't want to make that a conformance thing by requiring it to be on.
A: ...test, which I didn't read as saying this was going to be required for conformance; it was just asking how we are going to test this feature. So I tend to agree that being able to be on by default is a good goal. If you don't have an opinion, having it on by default is, I think, a good thing, but I would be fine with someone running a cluster deciding that in their environment it should be off. We're just giving them the tools so that it can be on by default.
A: ...labels to the e2e namespaces as part of the conformance tests. I think I captured that as a possibility.
A: So if I say I would like to "allow" the baseline policy, really what I mean is that I'm going to deny anything above that. But "allow baseline" is, I think, the way a lot of people will think about it.
A: The problem is that the audit and warn labels don't quite line up with that semantically. If I say "audit baseline", what I actually mean is: write an audit record and audit annotation for pods that exceed the baseline level. Similarly with warn. One option we had discussed was something like "audit-above" or "warn-above", but those are cumbersome, especially once you have "audit-above-version" thrown in there.
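The semantics being discussed (enforce rejects pods above the configured level, while audit and warn only flag pods that exceed their levels) can be sketched roughly as follows. This is an illustrative model only, not the actual admission plugin; the bare label keys and helper names are simplifications for the example:

```python
# Illustrative sketch of the label semantics discussed above (NOT the real
# Kubernetes implementation). A pod's "level" is the most restrictive
# published Pod Security Standards level it satisfies.
LEVELS = ["privileged", "baseline", "restricted"]  # least to most restrictive

def exceeds(pod_level: str, policy_level: str) -> bool:
    """True if the pod requires more privilege than the policy level allows."""
    return LEVELS.index(pod_level) < LEVELS.index(policy_level)

def evaluate(pod_level: str, labels: dict) -> dict:
    """Apply enforce/audit/warn labels to a pod: reject, annotate, or warn."""
    result = {"allowed": True, "audit": False, "warnings": []}
    enforce = labels.get("enforce")
    if enforce and exceeds(pod_level, enforce):
        result["allowed"] = False  # "enforce: X" denies anything above X
    audit = labels.get("audit")
    if audit and exceeds(pod_level, audit):
        result["audit"] = True  # record an audit annotation, but still admit
    warn = labels.get("warn")
    if warn and exceeds(pod_level, warn):
        result["warnings"].append(f"pod exceeds the {warn} policy")
    return result
```

This makes the mismatch concrete: "audit: baseline" does not mean "audit baseline pods", it means "audit pods above baseline", which is why names like "audit-above" came up.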
B: I'm a big, big fan of "enforce". It is a little more ambiguous, but I don't think it's necessarily that ambiguous if you think of it as an admission controller, because an admission controller enforcing its will on you means rejecting you when it doesn't like you. So I think the extra context helps it not be as ambiguous as it could be.
C: I guess I'll say I didn't have that much issue with allow or enforce, and audit and warn, but I really didn't like "deny". I had difficulty with it.
A: Yeah, okay. Well, I just wanted to call out that piece. Oh, sorry, someone else was trying to... I think we can take this discussion offline. If you think of any ideas, then by all means please drop them in there.
A: Otherwise, let's move on. All right: versioning. There's some nuance that needs to be worked out around versioning.
A: I had two comments about that. I like keeping it gathered in the admission plugin so that, if you don't want this active in your cluster, you can completely isolate it and disable it, even if it's in the same bucket of code as the admission stuff. I'm envisioning that being in a location that's referenceable by things outside kubernetes/kubernetes.
A: That would make the same logic usable externally, maybe against an older cluster, as a webhook or some custom policy. As far as data integrity: because this is being added to Kubernetes 1.20 (or whatever), we already have to define how this behaves if there are labels with these keys that we consider malformed.
F: Yeah, we scrolled off the API section. We touched on this last time, but I didn't see any call-out here describing the decision to use labels as the main API versus creating a new resource. That feels a little bit novel to me, and at least I'd like us to make sure that everybody has considered the trade-offs and that we're happy with them.
B: Plus one. I think labels are much better than creating a new resource, and I say this as one of the primary people who used to advocate for creating a new resource. But calling out specifically why we think it's better than creating a new resource, even if it only takes a couple of sentences, could help people understand it better when they're reading it.
A: Just briefly going back to admission; sorry, the validation: I see this as more guardrails to protect users against accidentally having a typo that causes their...

A: I can add that into the validation section.
A: The typo-in-a-label point is another reason I really don't want unwieldy label keys like "warn.above" or "warn-above" or "warn-if-exceeds-level". The weirder, longer, and more hyphenated or dot-delimited we make these label keys, the more likely they are to be typoed. Yeah, and the prefix is already getting pretty long.
C: Labels also get you something versus objects. If you only have a fairly small enumeration of these things and you're not going to allow any customization, you would end up with relatively few API objects. Either that, or you would end up placing an API object inside of a namespace, which would present some sort of read challenges; not an impossibility, but you would have to dig down and then find the parent namespace, sort of thing.
A: Those were the two reasons I liked having it be on the Namespace object. We had talked about having a single policy level applying to a namespace for a given action, and if you had a separate object, you would either have to make it a singleton (say, the name of the object must be "default" or something), or you open the door to having multiple competing objects applying, which actually makes dry run...
A: No, the admission controller only allows pods that it can find the namespace for. The way my proof of concept was written, when a pod comes in, it looks for the namespace in its informer, and if it can't find it, it does a live GET of the namespace. So it will never allow in a pod whose namespace it hasn't explicitly retrieved.
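The lookup pattern just described (informer cache first, live GET on a miss, reject if the namespace still can't be found) can be sketched like this. The class and parameter names are hypothetical; this is not the proof-of-concept code itself:

```python
# Illustrative sketch of the namespace lookup described above (hypothetical
# names, not the actual proof-of-concept): consult a local informer cache,
# fall back to a live GET, and never admit a pod without its namespace.
class NamespaceResolver:
    def __init__(self, cache, live_get):
        self.cache = cache        # dict-like informer cache: name -> namespace
        self.live_get = live_get  # callable(name) -> namespace object or None

    def resolve(self, name):
        ns = self.cache.get(name)
        if ns is None:
            ns = self.live_get(name)  # cache miss: do a live GET
        if ns is None:
            # The admission controller rejects rather than guessing a policy.
            raise LookupError(f"namespace {name!r} not found; rejecting pod")
        return ns
```

The design choice worth noting is the fail-closed behavior: an informer lag never silently admits a pod under the wrong (or no) policy, at the cost of an occasional live API call.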
B: I do like the labels versus adding fields to the Namespace, just because it makes it more clear, I think, that this admission controller is not part and parcel of the core Kubernetes experience. Those additional fields would be really weird in clusters where you turned this off because you were using Gatekeeper.
A: Maybe there's a new policy that comes out in Kubernetes 1.28, and you'd really like to get that policy level, or those policy changes, on your 1.26 cluster; so you can run it as a webhook to backport that, essentially.
A: If we do that, how do we want to handle the case where your cluster is running 1.26 and you're asking for the 1.28 policy? In the built-in version we would just treat that as "latest"; that's required to handle version skew and backports or rollbacks. But in the webhook case we could say that if you ask for the 1.28 profile and the webhook knows what the 1.28 profile is, then it can actually enforce that.
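The resolution behavior being proposed can be sketched as a single function. This is an assumption-laden illustration (function name and version strings are made up): the built-in plugin clamps an unknown requested version to the newest profile it ships with, while a webhook built against a newer library simply knows more versions:

```python
# Illustrative sketch of the version-resolution rule discussed above (not
# real Kubernetes code). `known` is the list of policy versions this binary
# ships with, sorted oldest to newest.
def resolve_version(requested: str, known: list) -> str:
    """Resolve a requested policy version against the versions we know."""
    if requested == "latest" or requested not in known:
        return known[-1]  # unknown or "latest": fall back to our newest profile
    return requested
```

So a 1.26 built-in asked for "v1.28" answers with its own latest (v1.26), whereas a webhook compiled with the 1.28 library can honor the pin exactly; that difference is the crux of the skew discussion that follows.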
A: That's just because you love versioned endpoints, David; that's why you have so many of them! Yeah, I mean, my first thought would be: when you're running it as a webhook, you get to tell the webhook what "latest" means. Like: I'm running my webhook on a whatever server, so "latest" means 1.17, even though I built you with a library that understands...
A: One: can you share a webhook across multiple clusters, across multiple versions? Sure, sure; but then you can't necessarily resolve it unless you can identify, on the incoming request, what "latest" means at that point, right? You must know a priori. Obviously we could make the version part of the webhook's path... no, it just gets squirrelier and squirrelier. Yeah, but the thing I don't like about making a webhook enforce the version is that it's surprising to a normal person, because the policy is written in namespaces on the cluster.
A: Like, I don't know; I can't imagine myself wanting the 1.18 policy on a 1.17 cluster, because I don't know what that means. I don't have a time machine that lets me go into the future (or I guess technically I'm going into the past in this mode), and I definitely can't tell you... Well, the point is that if you don't specify a version, you're going to get whatever the webhook thinks "latest" means, which depends on what version of this policy it was built against.
A: Another one (this actually came up when we were debating whether to allow patch versions in the version information): I could imagine a CVE coming out that exploits a field that wasn't previously restricted, and we decide: okay, we weren't thinking about this in the past, but this field should really be restricted, because it can lead to this exploit.
A: There's another concern around version skew with the webhook, which is: if I forget to upgrade my webhook and no one's really looking at it, the webhook is at 1.16 while the cluster is now at 1.23.
A: Okay, I think we should move on. I'm trying to capture some of this in the discussion section, just to remember what we talked about. Yeah, thank you.
A: Did you have a question? Can we move on? Okay, that's covered. Okay: "under an older version of the policy, new fields may only be set to their default value or left unset". So this is the case where I have my policy pinned to 1.22.
A: I upgrade my cluster to 1.23, and we add a new field to pods. For backwards compatibility, all the policies need to allow the unset or default value of the new field; in other words, pods that were allowed in the previous version are still allowed, as long as they don't set that field.
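That backwards-compatibility rule can be sketched concretely. The field name and default below are assumptions chosen for illustration; the point is only the shape of the check: a policy pinned to an older version admits a pod if every field it doesn't know about is unset or still at its default value:

```python
# Illustrative sketch of the pinned-policy rule discussed above (not real
# Kubernetes code). Maps fields added AFTER the pinned policy version to
# their default values; the field name here is an assumed example.
NEW_FIELD_DEFAULTS = {"hostProcess": False}

def allowed_by_pinned_policy(pod_spec: dict) -> bool:
    """Admit only if newer fields are unset or hold their default value."""
    for field, default in NEW_FIELD_DEFAULTS.items():
        value = pod_spec.get(field, default)  # unset counts as the default
        if value != default:
            return False  # the pod relies on a field the pinned policy predates
    return True
```

This encodes the guarantee stated later in the discussion: every pod allowed under the old version keeps being allowed, and only pods opting into the new field are affected.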
A: There are two questions around this. The first is: do we want to restrict all new fields that get added to the pod with version skew, or should it only apply restrictions to fields that we think the policy is going to have an opinion about?
A: To give a concrete example, let's suppose we add a new resource field; maybe pod overhead, we can use that as an example. That's something the policy controller is probably not going to have an opinion about. Would we say that you have to use the default value, or upgrade to a newer policy version that says "I don't care about this field"?
A: I think I agree on ignoring fields that we don't care about, but suppose that we added a new type of privileged. So maybe we decided to have a privileged-plus field, right? Yeah, super-privileged.
A: And so new fields that the policy has an opinion about can be enforced, as long as they're allowed to either be unset or have their default value, I think.
A: The thought experiment is: every pod that was allowed in that old version by that policy must continue to be allowed. I think that's the guarantee.
A: And so then, if there are new fields, we can decide: is this a field that we have an opinion about? Pod overhead: no, we don't have an opinion about it, so whatever; if you want to set it, knock yourself out, but we don't care. If it's a field we have an opinion about, like super-privileged or hostProcess...
A: So the proposal is to have an officially supported webhook implementation. Basically, we would implement the core logic as a library in a separate repo; we would also have, you know, a containerized webhook implementation of that that you can run; and then we would import that library into Kubernetes for the core admission controller.
C: I think the maintenance overhead is fairly minimal, given that this particular one wouldn't need to identify whoever calls it. So it would be very similar to the sample that we already provide for our e2e tests. And I can see value for cases like you mentioned, where it's not there and somebody wishes to add it as a webhook because they aren't cluster admin, or they aren't able to administer the actual kube-apiserver.
A: I think, if we're wanting these... we talked about the tests for these being written in a way that they could be run against this reference implementation, or against...
A: It might be a heap of boilerplate, David. Thinking back to your question about detecting: I think it would actually be easy to detect when a server that knows about pod fields we don't know about is sending us an AdmissionReview.
C: I wouldn't go as far as "super weird"; we've had cases in the past, right? We had init containers come in; we're looking at adding the debug ephemeral containers; there was a discussion of attempting to do post hooks; there's a discussion of additional fields for Windows. I see cases where we are talking about adding fields that matter, and I can see a reason to allow fields with default values.
A: Right. So what I'm saying is: if we want to ensure that the webhook is updated prior to the server, so that it understands all the fields the server does, then the presence of those defaulted fields means the server is newer than we are, and so you could tell the webhook to reject pods sent to you by a server when they have fields you don't understand. You could; I don't know if we want to.
A: Those are allowed as well... yeah. So allowPrivilegeEscalation as an example: prior to (what is it, 1.7?) there was no concept of allowPrivilegeEscalation, so all pods have it implicitly, and the restricted policy doesn't have an opinion about it.
A: Yeah, that's a good example. Adding a new volume type might be another example.
A: I mean, the field-you-don't-recognize case is easy: round-trip it through your typed v1 Pod, and if there's any diff, then there's a field you don't recognize. But I don't think webhooks should get in the business of trying to figure out defaults.
C: On unrecognized fields like that: I was just imagining this as something we would try to write once per release and never touch again. We would try to do something like clear the fields we don't know and run it through the defaulting chain a second time; we have all the code to do that. We can just call set-defaults on the pod, and if you get a diff, and it is not the same as what came in to you, you know that it was not a default value.
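The round-trip idea just described can be sketched in miniature. This is only a toy model (the real code would deserialize into the typed v1 Pod and use its generated defaulting functions); here a set of known field names stands in for the typed schema:

```python
# Illustrative sketch of round-trip unknown-field detection (NOT the real
# typed-Pod code). Fields that do not survive a round trip through the
# schema we were built with are fields we don't recognize.
KNOWN_FIELDS = {"containers", "hostNetwork", "securityContext"}  # assumed schema

def unknown_fields(raw_pod_spec: dict) -> set:
    """Fields present in the request that our typed schema would drop."""
    round_tripped = {k: v for k, v in raw_pod_spec.items() if k in KNOWN_FIELDS}
    return set(raw_pod_spec) - set(round_tripped)
```

A non-empty result signals exactly the skew case under discussion: the sending apiserver knows pod fields that this (older) webhook does not, at which point the webhook can choose to reject or to proceed.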
A: In terms of a philosophy for new fields: I think we just talked through examples that make sense, and I think we agree. You should allow things that previously were allowed, so allow the absence of a field; you should allow the default value, at least for the baseline case.
A: You should allow them to opt into a more restricted version. All of those things make sense. I think writing up those litmus tests to help us evaluate new fields would be really useful. Yeah, will do. All right, I'm going to say: let's move on.
A: When a new namespace is created (I suppose this would make it a mutating controller, so there's that issue), should we set the default policy mode labels on the namespace? For instance, if I've configured my cluster so that the default level equals baseline, and I create a new namespace, would you expect it to get the baseline label added?
A: Yeah; I think in the demo I showed, you can find namespaces that haven't explicitly expressed a level, and if we stamp the labels onto them, we've lost that data by not being able to tell the difference.
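The trade-off being raised can be made concrete with a small sketch (illustrative names; the cluster default here is an assumption): resolve the effective level at evaluation time instead of mutating the namespace, so an unlabeled namespace stays distinguishable from one explicitly labeled with the default:

```python
# Illustrative sketch of the default-resolution choice discussed above (not
# real Kubernetes code): fall back to the cluster default lazily rather
# than writing it onto the namespace, preserving "was this explicit?".
CLUSTER_DEFAULT = "baseline"  # assumed cluster-wide configuration

def effective_level(namespace_labels: dict) -> tuple:
    """Return (level, explicit) without mutating the namespace."""
    if "enforce" in namespace_labels:
        return namespace_labels["enforce"], True
    return CLUSTER_DEFAULT, False
```

Stamping the label at creation time would collapse the `explicit=False` case into `explicit=True`, which is the data loss mentioned above.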
A: Okay, hopefully another quick one: should we default the kube-system namespace to being exempt? Or, when we auto-create it, should we automatically add the privileged label to it?
A: I like not treating kube-system specially; I think in the early days of kube we really, really didn't treat it specially. Now there are a few concessions, like you can't delete it, because that deletes the kubernetes service and that can be impossible to recover from. But not every cluster has to run privileged things inside kube-system; if you're not self-hosting stuff, you don't.
A: Okay, we're coming to the time limit. I would like to talk about updates, but before we do that: do folks want to have another one of these discussions in two weeks, or should we take everything offline?
A: I can think about sharing an anonymized version of that feedback. There were some comments on there; I felt like they were mostly comments that we had discussed in the past.
B: I would lean towards scheduling another one in two weeks, with the idea of it being a partway check-in; if things seem to be rolling along asynchronously, such that the meeting doesn't seem necessary, then we cancel it. But in the history of this project, having deadline-driven development has actually seemed to help things roll along, and so I wouldn't want to give that up, at least until the KEP is merged.
A: I was just going to say this was pretty high-bandwidth; I appreciate you bouncing back and forth quickly. Yeah, and everything we've talked about is going to go through the KEP anyway, so anyone who's not able to make the meeting will be able to follow.
A: Okay, once again we didn't get to talk about updates, but I think we should probably call it so everyone can get to their next meetings. So yeah, thank you; this was useful.
A: We don't need to wait until the next meeting in two weeks to follow up on some of these unresolved issues. If you have time before then to leave additional comments or weigh in on some of the unresolved sections or existing comment threads, please do, and I'll make an effort to push an update with what we talked about today; I probably won't get to that for a few days.