From YouTube: Kubernetes SIG Auth 2020-03-18
Description
Kubernetes Auth Special Interest Group (SIG) Meeting 2020-03-18
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
E: If we reach out to the major deployments that we're aware of, do a sweep, and no one says this is super important and they're using it heavily, then I'm in favor of removing it. This does tie into something I want to work on in 1.19, which is a way to surface messages to users who are using deprecated things. So I will reference this as a place where I would have liked to be able to send messages to users of this authentication mechanism to say, hey.
C: If we could talk about what we had last time: there were a few items on the agenda that we weren't able to get to. One of those was the multi-tenancy benchmarks, and the other was the request for the new git repo for the policy working group. So, just for time management, I'm not sure if there's any preferred sequence we want to go through. I'm happy to just do a quick overview of Kyverno and show what the tool does, but then I do want...
C: So really the intent is, yes, to make it easy for cluster operators to create policies, and then also to support, in addition to validation of certain constructs, mutate and generate. We're just adding a feature where we are going to be able to not just generate resources on triggers, but also keep resources synchronized. So, for use cases where you have multiple namespaces, and I saw something on Twitter just yesterday,
C: there was a proposal on keeping namespaces synchronized even across multiple clusters. What this would do is, if you have use cases where you want to keep things like role bindings or secrets synchronized across different namespaces in the same cluster, you'd be able to do that. And because of the way Kyverno is designed, it will work with pretty much any resource, and you can even write policies to generate policies and create customizations if required.
C: I mean, the Kyverno engine will sit behind the admission controller. It will receive requests from the API server and will be able to validate resources and send those back as responses to the API server. In cases where it's just a validate rule, it will be able to generate violations, which are a CR, and also generate events on both the resource itself and on the policy, so you will see those if anything is violated.
C: Sorry, I thought I had started my screen share. Yeah, so I did have these slides up and I was actually just talking through the structure. This is what I was just describing as the basics of how Kyverno works. It's fairly straightforward: an admission controller and then the policy engine itself. We also have a CLI which can do some of the same functions, but outside of a cluster, if you want to test policies and things like that.
C: So the structure of a policy looks something like this: you have policies with rules, and you have match and exclude blocks. You can match and exclude based on kinds, name selectors, label selectors, and namespaces, and we've also added users, roles, and groups. So you can control in a fairly fine-grained manner what type of resources you want the policy to match on or exclude, and then each rule will have either a mutate, a validate, or a generate construct.
C: Diving in a little bit more, this is just an example of the match and exclude. It's showing a match, and it uses label selectors to match one application. Then for kinds, over here, you can specify them, or you can leave that empty, so it would match all kinds with that label. In that case the policy is mostly operating on metadata if you leave kinds empty, but you can also specify particular kinds, and of course you could have more detailed YAML constructs, or policy constructs, for different definitions.
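For readers following along without the slides, a rough sketch of the structure being described might look like this. It is illustrative only: the policy and rule names, labels, kinds, and namespaces are made up, and the field layout follows the Kyverno schema as I recall it from around this period, so details may differ in other releases.

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: example-policy               # illustrative name
    spec:
      rules:
      - name: match-by-label             # illustrative rule name
        match:
          resources:
            kinds:                       # optional; leaving it empty matches all kinds
            - Deployment
            selector:
              matchLabels:
                app: my-app              # label selector for one application
        exclude:
          resources:
            namespaces:
            - kube-system
        # each rule then carries exactly one of: mutate, validate, or generate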
C: So here's an example of a mutate which is using a JSON patch. We support two styles of mutate: one is a JSON patch, and one is more of an overlay type of syntax. In a patch, of course, we can do different operations like add, delete, and modify, but there are also some additional decorations you can put onto the YAMLs. For example, the first one will do an if-then-else type of construct; this is something we call an anchor in the YAML, where putting the parentheses adds the conditional logic. Otherwise you could do an "add if not defined," which applies only in a mutate and adds something if it's not defined, as in the second example. So those are examples of using some simple logic within the policy YAML itself.
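As a rough illustration of the two anchor styles just mentioned (a sketch only; the overlay field names follow the Kyverno mutate syntax of that era, and the image and pull-policy values are made up):

    # conditional anchor: the parentheses make "image" a condition, so the
    # sibling field is applied only to entries whose image tag is ":latest"
    mutate:
      overlay:
        spec:
          containers:
          - (image): "*:latest"
            imagePullPolicy: Always

    # add-if-not-defined anchor: the "+()" form adds the field only when it is missing
    mutate:
      overlay:
        spec:
          containers:
          - (name): "*"
            +(imagePullPolicy): IfNotPresent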
C: A validate rule is also pretty straightforward. This simple example is showing that, if you want to require a label, say an app label, then you would write a policy in this manner: you write a validate rule.
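A sketch of what such a rule could look like (illustrative; the rule name and message are assumptions, and the "?*" wildcard, meaning any non-empty value, follows Kyverno's pattern syntax):

    - name: require-app-label            # illustrative rule name
      match:
        resources:
          kinds:
          - Pod
      validate:
        message: "The label 'app' is required."
        pattern:
          metadata:
            labels:
              app: "?*"                  # any non-empty value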
C: Then generate, the third type of rule, which would be a generate policy. This would be where you can generate resources, for example, when a new namespace is created; that's the most common use case. Some of the users in the community are also using it to generate role bindings when a CR is created, things like that, so it could be on any trigger.
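A minimal sketch of a generate rule along those lines (illustrative; the generated NetworkPolicy and the variable reference to the new namespace are assumptions based on Kyverno's published examples from that period):

    - name: default-deny-for-new-namespaces   # illustrative rule name
      match:
        resources:
          kinds:
          - Namespace
      generate:
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"   # the newly created namespace
        data:
          spec:
            podSelector: {}
            policyTypes:
            - Ingress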
C: Yeah, so the main difference, like I was mentioning, is just the language for policies. OPA, of course, uses Rego as its underlying language; Gatekeeper adds some constraint definitions on top of the Rego, but ultimately it still requires creating and managing policies with the Rego syntax as well. So that's the main difference.
C: The other thing is that managing policies is what we make very straightforward, and then all of the results: the policies themselves are CRs, and of course the policy violations also become custom resources in your cluster itself. So one effort that we are discussing in the policy working group is this: there's OPA Gatekeeper, there are tools like Falco, there are other tools like k-rail, there's Kyverno, so we see that there will be multiple ways of doing policies, but we want to try and standardize some of the machinery around policies.
C: There we go. Okay, so everything looks good, and I can see that my Kyverno is now running inside my cluster, and it has automatically also registered with the API server to start receiving webhooks. So at this point I shouldn't have any policy. If I say get cluster policies, it will show me there's nothing installed. But what we can do is install some best-practice policies from our repo itself.
C: So here we have a number of different policies. This one is, of course, a good example: disallowing a root user. If I take that policy, the way it's written, the way to read it, it is pretty much saying match on Pod, but then it's checking the security context at both the pod level and at each container level and making sure that it's set to true, and that's what we want to validate. So let's just make this...
C: So you would just add another pattern to this anyPattern block, and you would write it for initContainers, and then the same thing there. But that's a good point, and we'll update the example to include that. One thing you could also do is, in Kyverno, you can reference other parts of the policy.
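A rough sketch of the kind of anyPattern block being discussed (illustrative only; the runAsNonRoot check reflects the disallow-root-user best practice described above, and a third entry for initContainers would follow the same shape):

    validate:
      message: "Running as root is not allowed."
      anyPattern:                        # at least one of the patterns must match
      - spec:
          securityContext:
            runAsNonRoot: true           # set at the pod level
      - spec:
          containers:
          - securityContext:
              runAsNonRoot: true         # or set on each container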
C: So validate would mean that it creates a policy violation and reports that. But there are two modes: the default is to report the policy violation, but you could also enable a mode to enforce it, in which case it would reject the request completely. And I'll show, actually I have that policy, so I'll just open it up and see what it's doing, and then we'll go ahead and import it back in. Yes.
C: And so, if I now do get cluster policies, I'll see that one policy. If I do get policy violations, I shouldn't see anything yet because... oh well, actually it automatically started picking things up; there are already some violations. So I'm seeing all the pods in my cluster which are showing up; it looks like I had something like a guestbook running before, so it's showing that those are violating. And if I look at one of those, let's go ahead and check more details.
C: It will show me which policy, which resource, and more information, and of course if I now change the resource or change the policy, this will automatically recalculate at that point. But let's go ahead; instead of showing this... all of the samples I configured by default just to show the violations.
C: But what I could also do, if we go back to the docs, and I'll just quickly show the different validate modes, is change one of these policies to be able to block the requests as well at admission control. That's what this validationFailureAction does. So if I go into my policy, let's just add this validationFailureAction.
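For context, the change being made on screen is presumably something along these lines (a sketch; validationFailureAction and its audit/enforce values are taken from Kyverno's documentation, while the surrounding policy is the illustrative one above):

    spec:
      validationFailureAction: enforce   # "audit" (the default) only reports violations; "enforce" rejects the request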
C: Okay, so that's what I was expecting to happen before: now it ran that rule, and it's basically telling me that, because I changed this to enforce, I'm not able to edit or apply that YAML, and it's rejected through admission control. One quick thing I want to show: if you recall, the policy that we started with was written just at the pod level.
C: Kyverno has a feature where, for a policy written at the pod level, it will automatically also generate rules for the common pod controllers, and you can control that. There's a feature for this auto-generation of rules: you can control it and specify which pod controllers you want, but by default we do it for DaemonSets, Deployments, Jobs, and StatefulSets. The nice thing here is that if a user creates a deployment whose pod template spec doesn't match, it will get rejected at that level. You don't have to wait until the pod is actually deployed and running, because those things are hard to debug.
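If memory serves, the control being referred to here is an annotation on the policy, roughly along these lines (an assumption based on Kyverno's auto-generation feature of that era; the exact annotation key and accepted values may differ by release):

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: disallow-root-user                       # illustrative name
      annotations:
        # limit auto-generated rules to specific pod controllers,
        # or set the value to "none" to disable auto-generation
        pod-policies.kyverno.io/autogen-controllers: "Deployment,StatefulSet"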
C: Right now the feature is built in for pod controllers, but we do have some thoughts on how to make it extensible to anything that works with owner references. The basic idea is: if you have an owner reference and some other controller managing that resource, you would be able to apply the policy at the owner level versus the managed resource.
C: Yes, so that's one feature which had sort of come up from users, because it's a common problem with policies otherwise, depending on the level at which they're written. Ideally you want to create policies at the pod level, but then you want to apply them at the controller level and reject there. So this kind of gives you the best of both.
C: Let me see if there are any other quick features I want to highlight. So we talked about the admission controller. This is the auto-generation feature I was just mentioning, where you can apply policies at the controller level, and you can control this in a fine-grained way, like if you want to specify which particular resources you want. And there are several of these; I showed this on the repo as well, so we have several best-practice policies there.
C: The idea here was, and I think there's also a similar effort with Gatekeeper, to make sure that things which are covered, for example, in a pod security policy can also be covered through Kyverno, and we can sort of extend that. Everything is customizable, so if you want to write your own similar policies, those could be done as well.
C: No, yeah, good question. That was one of the reasons why we are now focusing on creating two things. One is validating the policy itself: like we just saw, in what I did I had the attribute in the wrong place, so that should have been caught through a validation rule. The second part is, how do I test the policy without applying it?
C: So for that we are creating a CLI where you could just have a policy which you're developing, but then you would test it against a cluster through your kubeconfig, and it would show you the violations, or what the results of that policy would be, before you activate it in your cluster.
C: Yeah, no, thank you for that. So, switching to the git repo request: this is from the policy working group, and I can pull up the request itself, but what we were looking for was a place for proposals and other artifacts, for collaboration. I submitted this request under the community org, and I think it follows a prior template and some precedents.
E: It's a matter of, if stuff atrophies and rots, who's responsible for moving it out and cleaning it up, especially when it gets to code. If something is alpha level with big disclaimers that this is not production-ready, that's a little easier. But if there are issues found, or something has a dependency that has a vulnerability, GitHub starts sending notifications about vulnerable dependencies, and those get routed to whoever owns the repo.
E: So it's mostly about who is responsible for it, especially for sort of experimental types of things: some might grow and proceed, some might not, and for the ones that don't, who decides when to delete them, or do they just sit there and live forever? Those are most of the questions I had.
F: With multi-tenancy, at least, we instructed them to be a little bit crisper about the fact that these were prototypes. I think a key point here is that code in those repos is not part of Kube, in the sense that there is no project in there that can be called part of Kube. It has to move out into a SIG-sponsored repo, through this SIG or through another SIG.
E: For code, because you don't want people depending on code that's nested inside one folder of a repo along with other prototypes, which you don't have a good, coherent way to version independently. So basically, before the point where you would want anyone to actually depend on the code, it needs to be in its own thing so you can version it.
C: Yeah, so happy to; in terms of the name and so on, we could add some more details to the name. And then, with the multi-tenancy working group, I know the leads for the working group have been pretty much responsible for managing it. There are three projects right now which are contributing into that repo, and they've been doing the governance of that repo itself.
C: Okay, so just let me know what's required and if there's anything else I need to add, either to this request or to the mailing lists. I think, from the procedure perspective, once everybody is ready, somebody would need to approve this request, and that would move it into the next stage of the repo creation, etc.
C: So the idea here was to try and measure the running cluster and to be able to report whether it was configured for some level of multi-tenancy, with adequate security constructs, and to be able to report that. What we have defined so far, which is in the working group repo, is about seven different categories where we would do validation checks, three profile levels that we're proposing, and two types of tests that we would run.
C: So the idea here would be to come up with a test tool, like an e2e tool, as well as some guidelines for how folks could configure their clusters for multi-tenancy.
C: All right, so I'll add links to the git repo for this as well, and there's a document with all of the tests that we have so far. So feel free, again, if there are any comments or feedback, and then we can carry on the discussion, perhaps in an upcoming meeting. Great.
D: Yeah, I mean, there's nothing to present, but I just want to get everybody's thoughts around how we define maturity level on some projects. I'm just thinking, the CNCF has a maturity model; I'm wondering if we should follow that, or is there another process that we should follow?
E: To think about the different dimensions we have: most of the sub-projects we have today, maybe all of them, are things that live in-tree in the Kubernetes project, and so they either have APIs associated with them or they have feature flags associated with them that have gone through sort of an alpha, beta, GA lifecycle. Something that lives out of tree may have an API associated with it, but obviously isn't paired to the Kubernetes release.
E: But I would expect similar levels of maturity. So alpha level basically means, release to release, you might have to delete and recreate everything, and it's not claimed to be production ready. Once you get to beta level, it is expected not to have known crasher bugs and is generally expected to have a path to GA that is understood and planned. And then once it reaches GA, that indicates it has been scale tested and audited in various aspects.
F: I was kind of thinking about, like, DNS as an example, where we defined the spec. I would kind of expect, and different SIGs have done this slightly differently, that for anything we would consider an official sub-project, certainly for the spec part of it, having it in a place that sig-auth owns, to at least scope it and bound it, would probably be a good idea, whether that's the enhancements repo or something a little bit more.
B: So, Rita asked me this; I asked her to put it on the agenda. I was wondering, since at least the CSI driver doesn't have an API, would it make sense to do a retroactive KEP for what exists today, where we put that into enhancements and then define the alpha, beta, GA graduation? That would be amazing, because...
D: So I just linked a graduation criteria doc from the CNCF. Now, that's at the project level, so sandbox, incubating, and graduated. Again, I don't know how relevant that is for a sub-project, but I think that is a little bit different from, let's say, where the APIs are at. So I just want to bring that up, as we've been talking about APIs.
F: You have to be owned by a SIG, and then incubating is effectively anything that hasn't gone GA, and then once it reaches GA, it's GA under the guidance of the Kubernetes project like most other things that we've done, so it has to follow the same deprecation policies. So that's maybe another variation we could consider.
E: I think the KEP process is flexible enough that it can accommodate a wide variety of types of designs. We've used it for everything from technical designs to release process changes to documentation process changes. I think it's a place that most people are already looking at on a regular basis, so putting it there is going to funnel it to the right people.