From YouTube: Kubernetes SIG Auth 2020-04-16
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2020-04-16
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: Let's get started; we have a big, full agenda. I wanted to note: I had written an item in, but I do not expect people to be ready for it or anything like that. I just wanted to bring it up and let people have a chance to look at it. We have a full agenda, so I don't want to discuss it now, and I'd rather give time to everyone else who got onto the agenda earlier. So, Jim, did you want to say anything about what you already wanted to talk about? Yeah.
B: Exactly. So we took the conformance tests, that is, using the e2e test package as one of the sets of subtests that can be run against a cluster, as something we looked at as a model. We also looked at other tools which do things like the CIS benchmarks; there are a few tools out there, like kube-bench and others, which can run configuration checks.

B: So, looking at some of those tools, the intent is to allow administrators or cluster operators to define some inputs in a definition, like a YAML definition. Then the test suite would automatically run and report back whether multi-tenancy is correctly configured. The assumption here, and obviously we're building on namespaces as a basic construct, and we'll talk a little bit more about that, is to check for various levels of things; some of them could be configuration checks and some could be behavioral and runtime checks.
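As a rough illustration of the kind of input definition being described, the sketch below is hypothetical; the API group and field names are illustrative rather than the working group's actual schema. It shows the shape of what a cluster operator might supply (which tenant credential to use and which namespaces that tenant owns) before the suite runs its configuration and behavioral checks:

    # Hypothetical benchmark input definition; field names are illustrative only.
    # A cluster operator points the suite at a tenant admin credential and the
    # namespaces that tenant owns; the suite then runs its checks and reports
    # pass/fail per benchmark.
    apiVersion: benchmarks.example.io/v1alpha1   # hypothetical API group
    kind: MultiTenancyBenchmarkConfig
    metadata:
      name: tenant-a-checks
    spec:
      profileLevel: 1                            # which profile level to validate
      tenantAdmin:
        kubeconfig: /path/to/tenant-a-admin.kubeconfig
      tenantNamespaces:
        - tenant-a-dev
        - tenant-a-prod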
B: The intent would be that what you would require for the configuration checks could vary, depending on whether the managed control plane is accessible or not; but for the behavioral checks, yes, the test suite would create namespaces, would try different configurations, and would try to do things across different namespaces.

B: Just even, for example, checking for network isolation, but without being too prescriptive about how exactly network isolation, in that example, should be set up, right. So there are a few test cases we've automated, just as a proof of concept, but the intent was, before we go off and do too many more, to at least gather feedback on the various profile levels, the test categories, and the way we were starting to define these tests.
B: So, for the scope of these benchmarks, these tests, a tenant really is just either a user or a group of users that owns some set of namespaces which are isolated from other tenants' namespaces. And there were three roles that we were interested in. One is, of course, the cluster operator or cluster admin role. There's a tenant admin role, where the assumption is that the tenant can manage their own set of namespaces and configure different aspects of multi-tenancy for those namespaces. And then there are users within these tenant namespaces.
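To make the tenant admin role concrete, here is a minimal sketch using plain Kubernetes RBAC; the namespace and group names are illustrative, and the benchmarks themselves do not mandate this exact setup. The cluster admin grants a tenant admin group broad rights inside that tenant's namespaces only, while cluster-scoped permissions stay with the cluster operator:

    # Illustrative only: one way a cluster admin might express the "tenant admin"
    # role for a single tenant namespace using standard RBAC.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: tenant-admin
      namespace: tenant-a-dev
    rules:
      # Full control over namespaced resources in this namespace, including
      # NetworkPolicies, Roles and RoleBindings for further delegation.
      - apiGroups: ["", "apps", "batch", "networking.k8s.io", "rbac.authorization.k8s.io"]
        resources: ["*"]
        verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tenant-a-admins
      namespace: tenant-a-dev
    subjects:
      - kind: Group
        name: tenant-a-admins              # illustrative group name
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: tenant-admin
      apiGroup: rbac.authorization.k8s.io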
B: So, similarly, what we were thinking of is: how do we allow for a fairly broad and flexible range of cluster configurations? Anything from, if you're bringing up a cluster directly using kubeadm or any other tool like that and you want to manually set up some multi-tenancy and configure namespaces, how do you make sure that's done correctly, versus moving more towards automated ways of setting up clusters and managing multi-tenancy, and so on?

B: And then, just to cover the profile levels, and then we'll stop and maybe have some discussion, because I think for the scope of this meeting and the time we have that would be good in terms of a milestone. The first profile level we're thinking about is a fairly manual configuration, where the idea is that what would be checked for is to make sure that tenants can't impact the Kubernetes control plane with this setup.

B: It would be using standard Kubernetes resources, so it should not be necessary to run any additional controllers or CRDs or things like that to enable multi-tenancy, and it would be restricted; features like, for example, tenants having their own CRD definitions would not be allowed, and some features may not work in this setup, right. But the idea was that even with just per-namespace configuration,
B: like network policies. So say, for example, there could be a default network policy that the cluster admin sets up, but then tenant admins should be able to add their own network policies for their workloads, and even define roles and role bindings for their namespaces.
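A minimal sketch of that division of labor, with illustrative namespace and label names: the cluster admin drops a default-deny policy into each tenant namespace, and the tenant admin layers their own allow rules on top for their workloads.

    # Illustrative default policy a cluster admin might apply to a tenant
    # namespace: deny all ingress traffic by default.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: tenant-a-dev
    spec:
      podSelector: {}            # selects every pod in the namespace
      policyTypes: ["Ingress"]
    ---
    # Illustrative policy a tenant admin might add for their own workloads:
    # allow traffic to "web" pods only from pods in the same namespace.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-web-from-same-namespace
      namespace: tenant-a-dev
    spec:
      podSelector:
        matchLabels:
          app: web
      ingress:
        - from:
            - podSelector: {}    # any pod in tenant-a-dev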
B: So this is where projects like, in fact, the next topic on the agenda, which Ryan and Adrian are going to present, HNC, come in. The intent was that HNC would be a good example of this level.

A: I guess I have a general question on this: other than namespaces, I'm not really aware of any built-in boundaries within Kubernetes. Certainly you can have extra boundaries; OpenShift has projects and their self-service approaches with that, and I'm sure you can do some kind of self-service flow through the cloud provider UIs, but that seems incredibly specific to any given variation, right?
B: Yes, so the intent was to start with just seeing whether tenants create namespaces, and perhaps there are controllers and others running in the background which then augment that namespace for multi-tenancy, or apply the right configurations as required. Now, what we are seeing in reality is that this becomes a bit tricky, and the user experience there is not that great if you're using just role bindings to allow or deny various operations on namespaces.

B: So most of the projects that are doing this type of tenant namespace management do have some CR, and within the multi-tenancy working group there is also a tenant controller which will then take over the management of these namespaces. So what we haven't figured out yet, and I don't know the full spectrum of these types of solutions, but the idea would be that one level is: okay, can I just create namespaces, and then do they get set up correctly?

B: The other level would be: if there's a custom resource which in turn acts as a proxy for creating the namespace, we create that custom resource, and these will be configurations the user would supply based on whichever type of cluster they're auditing. Based on that custom resource, namespaces would get created on behalf of the tenant, and then we would validate the namespaces at that level. So, yeah.
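To illustrate the custom-resource-as-a-proxy pattern described above, here is a purely hypothetical sketch; the working group's actual tenant CRD has its own schema, and the API group and fields here are invented for illustration. The benchmark input would name a resource like this, a controller in the cluster would stamp out and configure the namespaces, and the suite would then validate the result.

    # Hypothetical tenant custom resource acting as a proxy for namespace creation.
    # A controller watching this type would create and configure the listed
    # namespaces on the tenant's behalf.
    apiVersion: tenancy.example.io/v1alpha1    # hypothetical API group
    kind: Tenant
    metadata:
      name: tenant-a
    spec:
      admins:
        - kind: Group
          name: tenant-a-admins
      namespaces:
        - tenant-a-dev
        - tenant-a-prod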
F: Go ahead. Good. I'd like to add just one more comment here from the multi-tenancy working group perspective, which is that, to the questioner: yes, we are not introducing new concepts beyond namespaces, but the way to look at it is that the baseline multi-tenancy is one namespace equals one tenant.

F: So that's sort of the way we, the working group, have been thinking, but we definitely welcome any feedback and guidance. And then the benchmarks are sort of an orthogonal direction, sort of independent of, or in some sense mapping to, the profiles that were just being talked about, which is having benchmarks for each level of the three levels I mentioned earlier: one namespace equals one tenant, or an aggregation of namespaces equals one tenant, or a virtual cluster with a virtualized control plane.

B: I think that introduced the main concepts, and certainly in the slides there are links to the git repo as well as a working document, so please do add comments. Even if something comes up later, or any thoughts, feel free to reach out. But with that, I think it's a good time to transition, so let's cover HNC. Cool.
D: That's why we're here. So that's kind of the backstory, and we're really just looking for a permanent home, wherever that may be, for the HNC. My name's Jason. At a high level, the hierarchical namespace controller is just that: it allows you to have some sort of hierarchy of namespaces and will propagate resources, both Kubernetes and custom resources, into these child namespaces. And Adrian can speak to it much better than I can, if you want to add anything.
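For orientation, this is roughly the kind of object HNC works with: a per-namespace singleton that declares its parent, which the controller uses to propagate objects such as Roles and RoleBindings down the tree. The API group, version, and field names below reflect my understanding of the HNC prototype at the time and may differ from the current repo.

    # Approximate sketch of an HNC hierarchy declaration (group/version/fields
    # may differ by HNC version). Placing this in a child namespace declares its
    # parent; the controller then propagates permitted resources parent-to-child.
    apiVersion: hnc.x-k8s.io/v1alpha1
    kind: HierarchyConfiguration
    metadata:
      name: hierarchy          # singleton name per namespace
      namespace: team-a-dev    # the child namespace
    spec:
      parent: team-a           # the parent namespace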
G: Unless you were granted that via a role binding, but we'll ignore that for a second. So we've been developing this and we've gotten some good feedback on it. It's just now getting to the point where it's usable without any artificial limitations, such as on the types of resources it can copy; it can also handle Secrets, ConfigMaps, stuff like that, and now, as of recently, CRDs. And so having it in an incubator directory in a temporary repo, owned by a group that isn't supposed to own code, didn't seem ideal anymore.

A: Okay, any feedback? David, Jordan, Jim?
G: The bar for getting a crazy, radical new change to the idea of namespaces into API machinery is incredibly high, and without a very clear demand for all of those... I chatted about it with a couple of people, at least internally at Google, before I started on this project, and it seemed highly unlikely we would clear the bar without people screaming for it, which clearly they're not. Whereas a controller is something that we can implement relatively quickly, and it's optional.

G: People can install it if they like, it serves many of the same purposes, and it can be used as a way to prove out the idea. Now, let's say that we release this and people go, absolutely, that's what I really want; what I really want now is a hierarchical list or hierarchical watch or some other kind of feature that can really only be done by API machinery. Well, then we can.
A: Or are you still formulating an opinion?

E: I'm still formulating an opinion. It does surprise me that we would try to do something here before formulating primitives where someone with access to multiple projects or namespaces would be able to do things like set up network policy between them. And then the aggregation aspect of it really seems questionable in light of the way that our controllers would operate when aggregating the primitive pieces.

H: I think there is a constrained-delegation aspect to this, where right now we really have two nodes, basically, in our resource hierarchy, which is the cluster level and the namespace level, and delegating subtrees is hard. I think that this is what the proposal was meant to solve: allowing for more nested structures, like folders of namespaces.
G: There's another project in the working group called the tenant CRD that is, I think, closer to what you're describing, but this is an approach that is tied very closely to existing Kubernetes primitives and is a very minimal sort of extension. And so I guess what I'm asking is: what is the bar that is required to make this a sub-repo

G: that is sponsored by, say, SIG Auth? Is it that this is the recommended way to solve the kind of problem we're talking about, or that this is a recommended way, or a way that we're exploring? Because if it's the first one, then I think we need to do a lot more due diligence. If it's either of the second two, that's what I thought we were aiming for, then.
G: It's funny: I have heard that working groups are not supposed to have repos. Nevertheless, multi-tenancy does have one, and that's where we are right now; we're in multi-tenancy/incubator/hnc under kubernetes-sigs right now. I'm not sure how that happened, but it did, and I'm not super excited about moving it to a different temporary repo. So if the answer is that this is okay for a quarter, maybe that is a good enough answer for now.
G: We are basically at the point where, I would call it, we have a first draft of all of the features done, and we have a certain amount of productionization and stability. So, for example, it has metrics, it can be configured, and it's now at the point where I want to start getting feedback on it.
I: If I could just chime in with a few thoughts: we might need to revise our charter for kind of prototype- or incubator-style projects. I don't know that it sounds great to require something to be a subproject before being in a prototype stage. And also, Adrian, to go back to your earlier question of what the bar is for getting endorsed or placed into SIG Auth: I don't think it should be that this is the one true way; I think it should be that this is a recommended approach.

I: Yeah, sure. So I wanted to talk about this because dynamic audit has been in alpha; I don't actually know off the top of my head what release it was introduced in, but it's been in an alpha state for a while. We've had a few attempts to progress it, either to beta or to build out a policy spec first, and neither of those have made much progress.
C: I just wanted to ask: could you elaborate? Eventually, if we went that route together with this project and developed something like an out-of-tree, out-of-process, probably webhook, that's going to do the muxing of the requests and so on, how, physically, would that look? Do you mean having something else that needs to be installed alongside the API server? I'm just trying to imagine this.
I: Yeah, so I think with that approach we would have a separate binary that would be running. It could run anywhere, but it would probably make sense to ship it as a container with a pod spec, and so it could either run as a static pod on the master or as a regular pod in the user cluster, and then get registered as a webhook. And so, in that case, we wouldn't have dynamic webhooks; it would have to be statically configured on the master to say: send audit events to this server, wherever it is.
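Purely as a sketch of the deployment shape being described, and nothing more: the server does not exist, and the image name, flags, and CRD name below are hypothetical. It shows how such an out-of-process audit server could ship as a container and run, for example, as a static pod on the master before being wired up as the API server's audit webhook.

    # Hypothetical static pod for an out-of-process dynamic-audit server.
    # Image name, flags, and the CRD it watches are illustrative only.
    apiVersion: v1
    kind: Pod
    metadata:
      name: dynamic-audit-server
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: dynamic-audit-server
          image: example.io/dynamic-audit-server:v0.1.0
          args:
            - --listen-address=0.0.0.0:8443           # where the API server sends audit events
            - --sink-crd=auditsinkconfigs.example.io  # hypothetical CRD used to configure sinks
          ports:
            - containerPort: 8443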
I: Okay, so we already have the ability to have a static webhook, and so, in this case, the dynamic audit server would be that static webhook that you would have to manually configure through API server flags or config. And then, once you have that dynamic server set up, it would have an associated CRD that would be used to configure the dynamic sinks, and all of the logic for routing audit events to dynamic sinks would happen in the dynamic server.
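For reference, the static wiring that already exists today looks roughly like this: the API server takes an --audit-webhook-config-file flag pointing at a kubeconfig-format file that names the backend, plus an --audit-policy-file. The address below is illustrative and is where the hypothetical dynamic-audit server from above would sit.

    # kube-apiserver flags (the existing static audit configuration):
    #   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    #   --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig
    #
    # audit-webhook.kubeconfig: a kubeconfig-format file naming the backend;
    # here it points at the hypothetical dynamic-audit server sketched above.
    apiVersion: v1
    kind: Config
    clusters:
      - name: dynamic-audit-server
        cluster:
          certificate-authority: /etc/kubernetes/pki/audit-webhook-ca.crt
          server: https://10.0.0.10:8443/audit   # illustrative backend address
    users:
      - name: kube-apiserver
        user:
          client-certificate: /etc/kubernetes/pki/audit-webhook-client.crt
          client-key: /etc/kubernetes/pki/audit-webhook-client.key
    contexts:
      - name: webhook
        context:
          cluster: dynamic-audit-server
          user: kube-apiserver
    current-context: webhook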
I: Basically, what I'm proposing with this model is that we would make essentially no changes to the way audit works today. So you would still have a static audit policy that would control which events go to the static webhook, which is actually the dynamic audit server. So if you have a policy that says secrets get metadata, then your dynamic audit webhooks could never get more than metadata. You could still filter on top of the static policy, but you would never be able to get more than the static policy allows.
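Concretely, the static policy being referred to is the existing audit.k8s.io/v1 Policy object. With a rule like the one below, anything the dynamic server fans out to downstream sinks for Secrets is capped at metadata, regardless of what those sinks ask for:

    # Existing static audit policy format (audit.k8s.io/v1). With this policy,
    # secret objects are only ever recorded at the Metadata level, so any
    # downstream dynamic sink can never see more than that for secrets.
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
        resources:
          - group: ""                # core API group
            resources: ["secrets"]
      - level: RequestResponse       # everything else at full detail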
I: If you want the dynamic webhooks to be able to receive everything, then you would need the static policy to basically say everything gets full request-response level. One concern there is that we would hit some performance issues; we know that there are performance issues if you're auditing everything at full level, and I think those performance issues are something we should address in core Kubernetes regardless. So at least as far as VMware is concerned:

I: the main thing we need is just to turn that on, like, just dynamically turn it on. I feel like this API has been held up solely on the policy piece; people want a lot of things out of the policy, which may make sense, but that's kind of held up the ability to just configure the sinks dynamically. I almost wonder whether we could separate this thing out from the policy and say you're going to use the master policy, obviously that's there, but you could configure your sinks dynamically. I wonder if that would be more...
I: That's just the hardest thing. For us, you know, we adopt clusters into a management platform; we just want to turn the auditing on and get that audit data, and right now that would require restarting the API server, which is super unfortunate. And, you know, otherwise the cloud providers have a real monopoly over this data; it can only go to their tooling. There's no...
J: Sorry; for some things that traditionally were statically configured, like certificate authorities for different client and authentication proxy setups, that has recently been made dynamic. That used to require API server restarts and has started to honor updates to those files. Would it make sense to do something similar with the static audit configuration? That would let someone who did want to drive that dynamically regenerate and update it to basically only include the union of the things that they needed to route to their backends.

J: I mean, the presumption is that if we went the route where we basically keep the API server the way it is today and route things to the statically configured webhook, the presumption is that the person configuring the API server is pointing at a webhook that they know they are managing dynamically, and it's going to do this multiplexing, so there's some level of cooperation assumed there. If you wanted to have a back end that was going to be dynamically configurable...
A: I understand, right, but what that means is that your dynamic sink can only get the same events that the static one does. So there's no hierarchy of, like, a better sink, right? So if a cloud provider configures this thing to have all the events, the dynamic sinks can get all events; if they configure secrets at just metadata, well, then the sink just has metadata. I'm not, not...
I: We would certainly take that over the other options. I mean, looking at it from the perspective of building out kind of a multi-cloud system, adopting clusters and providing features, like third-party features: even if it was, hey, they set this policy that's somewhat restrictive, so we're not going to get absolutely everything, they would still have control over that, and that's a good kind of agreement, you know, for the real-world solution we're coming from. Thanks.

E: So I think what's being pointed out here is that this has failed to progress because a coherent solution that can have consistent and expected results across any cluster using this API doesn't exist and hasn't been presented since this alpha thing was created. I think one thing to consider here is that we've already taken a stance that things need to be moving to beta and showing progress on a regular cadence.

C: Actually, just before that: we thought we had a solution, and the reason why it stalled and we moved to another solution was mainly that there was no agreement on whether it was too complex or not. It was not about how consistent it is, because it actually covered all the use cases that had been figured out; I mean, I remember.
I: A couple of thoughts here. So, one: well, cloud providers would still need to enable this feature, I guess. If it's built into core, then it's kind of a more expected, standard thing; we could eventually call it something that's conformant. If we had the separate dynamic server, that doesn't mean that cloud providers can't still enable it; it would just be on them to go and configure it manually, and probably host the dynamic server. Also, it's not just the policy pieces that have held this up in alpha.
J: It's a lot better than writing different things to different backends, but that, combined with the sort of unsatisfying half-API of register a sink and you'll maybe get something, where what you get is undefined if a policy isn't associated with it, I don't see a great way for this to give us something that I think works well, I mean.
I: From our standpoint, having that master policy as the bound is ideal, because it gives the provisioner some level of control over what could ultimately come out of the cluster, and our relationships and discussions with the people trying to get this turned on are such that that would actually probably be more ideal.
I
So
I
don't
see
that
as
a
distraction
at
all
like
I
get
that
it's
similar
to
undefined,
but
it's
it's
being
defined.
You
know
within
that
relationship,
so
we're
a
time,
unfortunately,
but
I
think
that
this
is
a
good
conversation
and
I
want
to
kind
of
move
this
forward.
I
My
proposal
for
next
steps
here
are
we've
heard
kind
of
a
couple,
different
proposals,
and
so
I
suggest
that
everyone
who
has
some
kind
of
idea
of
how
to
move
forward
with
this,
maybe
try
and
write
up
like
not
a
cap
but
like
a
very
rough
like
kind
of
one-page
proposal
and
then
maybe
in
the
next
meeting,
we
could
sort
of
review
those
in
a
little
more
detail
than
we
have
here
and
kind
of
carry
the
discussion.
That
way.
Does
that
make
sense
good.
J: This happened a couple of years ago, and this was just for the static policy: the default policy got tweaked to just have metadata for some things and omit some high-volume things, clients, namespaces; yeah, so, I mean, we immediately failed scalability tests.