From YouTube: Kubernetes SIG Auth 20171213
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20171213
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: Yeah, okay, hi everybody. This is SIG Auth for December 13th, 2017. We've got a pretty small agenda today: some stuff about the multi-tenancy working group, and I'd just like to call out some other PRs that probably need review early in the 1.10 cycle. First, in terms of announcements, the code freeze is targeted to lift today. That's awesome, because we can start pushing more PRs through and getting all those merged.
B: Yeah, so we have a mailing list, which is in the Google Doc there. There have only been a few threads on it so far, mostly about the talks we gave at the deep dive at KubeCon. Even if you weren't there, you can catch up, because all the slides are on that list, so go ahead and join.
A: I didn't know that. Most working groups are sort of time-bounded: they have a specific goal, and once they've achieved that goal, for whatever definition of it, they generally end. I was wondering, with multi-tenancy, if there was a specific goal it was targeting, or if this was something we expected to live on for a long time.
C: The first round is coming up with definitions: taking the things we already have and slotting them in, asking what we already have at our disposal, and identifying gaps. That's really the first phase. Then it's partitioning off what needs to be done, what the gaps are, and who's going to work on those, or at least identifying those things.
A: Cool, so yeah, let me know if there's a more up-to-date link for that. I was looking in the community repo, but once you all open that PR and get a formal landing page, let us know and we can update the link. Okay.
A: [inaudible] Actually, something that came up around some of the multi-tenancy stuff was a pull that is planned for 1.10 to add some of the network policy objects to the user-facing roles. So in RBAC we have roles that are targeted at users of a specific namespace: there's a view role, which is intended for users of a namespace who can only view objects, and there's an edit role, for users of a namespace who can deploy workloads, and there's...
E: I think that one, the bound-token proposal, might be a little aggressive for 1.10, but it gets at how you could bypass the built-in secrets as part of this bounded proposal. So this one is more focused on the token; there's another part which is more about the mechanics in the Kubernetes system and how it might evolve.
C: Yeah, for those who haven't been following this, there are three main goals around it. The first is to remove the need for service account tokens to be in API objects. That's partly for security reasons, so that they're not easily visible or scriptable, and also for performance reasons: on large clusters you don't want huge numbers of static tokens, each with a CA bundle, sitting in your etcd. That's the first motivation.
C: The second is to let the tokens be bound or attenuated in a lot of different ways: to a specific audience, or to a specific node, or to a specific pod, and also lifetime-limited. And the third motivation behind this particular proposal is to make it an API, so that actors like kubelets can provision these on demand for a particular pod. So those are the three motivations. Each of those has value independently, but if we can get all three of those to happen, that'll be really great.
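The three motivations map directly onto fields of a bound-token request. The sketch below models the shape of what eventually shipped as the TokenRequest API; it's shown as a plain dict for illustration, not a real client call, and the specific names and values are examples:

```python
# One request object covering all three goals discussed above:
# audience binding, pod binding, and a limited lifetime.
token_request = {
    "apiVersion": "authentication.k8s.io/v1",
    "kind": "TokenRequest",
    "spec": {
        "audiences": ["https://my-service.example.com"],  # audience-bound
        "expirationSeconds": 3600,                        # lifetime-limited
        "boundObjectRef": {                               # pod-bound: token dies with the pod
            "kind": "Pod",
            "name": "my-pod",
        },
    },
}
```

Because the kubelet can POST one of these per pod on demand, no long-lived static token ever needs to sit in an API object.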
A: The next item, for the proposals at least, is the self-hosted webhook authorizer. Right now, when you configure an API server and point it at an external authorizer to handle things RBAC doesn't, it's not possible to run that authorizer on the pod network, just because of the way the API server reaches out. So this is a proposal to let you actually run those authorizers in the cluster and have the API server consult something through the pod network.
C
Let
you
explicitly
say
I
want
you
to
use
this
service,
and
while
we
are
at
it,
there
are
a
couple
other
configuration
options
like
letting
you
specify
what
should
happen
if
a
webhook
authorizer
fails
like.
Should
it
continue
on
to
the
next
authorizer
or
should
it
hard
fail?
That
was
something
that
a
feature
that
came
in
in
the
one:
nine
timeframe
that
we
want
to
make
configurable
and
then.
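The continue-vs-hard-fail choice being discussed can be sketched as a tiny authorizer chain. The names and structure here are illustrative, not the real kube-apiserver configuration surface:

```python
from typing import Callable, List

# Each authorizer takes (user, verb) and returns True to allow,
# False for "no opinion", or raises if the webhook itself is broken.
Authorizer = Callable[[str, str], bool]

def authorize(chain: List[Authorizer], user: str, verb: str,
              fail_open: bool) -> bool:
    """Run the chain; `fail_open` controls what a webhook error means."""
    for authz in chain:
        try:
            if authz(user, verb):
                return True
        except RuntimeError:
            if not fail_open:
                raise          # hard fail: surface the webhook error
            continue           # soft fail: fall through to the next authorizer
    return False               # nobody allowed it
```

With `fail_open=True`, a crashed webhook is skipped and a later authorizer (say, RBAC) can still allow the request; with `fail_open=False`, the request errors out, which is the safer default for security-sensitive clusters.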
C: Finally, what version of the SubjectAccessReview API objects should be sent and received? When the webhook was first introduced, those were still in v1beta1, and now that they're promoted to v1, it would be nice to let people opt in and say, "Yep, I can use the v1 format; please send that to me." So we're just kind of fleshing out the configurable pieces, and as part of that, letting you explicitly tell the API server to talk to a service instead of just an arbitrary URL.
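Opting into the v1 format means agreeing on payloads shaped roughly like the following. The user, group, and resource values are illustrative:

```python
# What the API server POSTs to the webhook authorizer:
sar_request = {
    "apiVersion": "authorization.k8s.io/v1",
    "kind": "SubjectAccessReview",
    "spec": {
        "user": "jane",
        "groups": ["developers"],
        "resourceAttributes": {
            "namespace": "default",
            "verb": "get",
            "group": "",          # core API group
            "resource": "pods",
        },
    },
}

# ...and the webhook's reply, with its decision in `status`:
sar_response = {
    "apiVersion": "authorization.k8s.io/v1",
    "kind": "SubjectAccessReview",
    "status": {"allowed": True},
}
```

The version negotiation matters because a webhook written against v1beta1 would otherwise receive v1 bodies it doesn't expect, so the opt-in keeps old authorizers working.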
A: I've added a new section for this particular meeting under "needs review." These are just PRs that are sitting around; they don't have any particular outstanding concerns, but we probably want to go review them and get them moving again. There was a lot of interest in the KMS gRPC plugin, allowing you to specify an external service to do the encryption at rest. This would allow people to hook the encryption mechanisms up to use keys from Vault, or to use encryption services like Google KMS or AWS KMS.
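The flow such a plugin enables is envelope encryption: the API server encrypts each object with a local data-encryption key (DEK), and only the DEK is sent to the external service to be wrapped. Below is a toy sketch of that flow; the XOR "cipher" and the stub class are stand-ins for real cryptography and a real gRPC client:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy reversible cipher; a stand-in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class RemoteKMSStub:
    """Stand-in for the external service (Vault, Google KMS, AWS KMS).
    It only ever sees the DEK, never the object data."""
    def __init__(self) -> None:
        self._kek = os.urandom(32)  # key-encryption key held by the KMS

    def wrap(self, dek: bytes) -> bytes:
        return xor(dek, self._kek)

    def unwrap(self, wrapped: bytes) -> bytes:
        return xor(wrapped, self._kek)

def encrypt_at_rest(kms: RemoteKMSStub, plaintext: bytes):
    dek = os.urandom(32)                       # fresh DEK per object
    return kms.wrap(dek), xor(plaintext, dek)  # store both in etcd

def decrypt_at_rest(kms: RemoteKMSStub, wrapped_dek: bytes,
                    ciphertext: bytes) -> bytes:
    return xor(ciphertext, kms.unwrap(wrapped_dek))
```

The design point is that etcd holds only ciphertext plus a wrapped DEK, so compromising etcd alone yields nothing without the external KMS.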
A
This
one
still
has
some
outstanding
concerns.
Just
around
is
the
G
RPC
format.
Okay,
do
we
need
to
do
a
different
webhook
mechanism
based
off
of
all
the
work
that
has
been
done
on
this
and
sort
of
this
kind
of
unpleasant
experience
of
having
to
tell
people?
Sorry
we're
doing
something
different
now,
so
all
that
work
you
did
isn't
getting
in.
We
definitely
want
to
target
this
for
110
and
any
eyes
on
that.
To
get
it
moving
would
be
appreciated.
A: No one yet. So one thing that became very clear from some of the conversations is that a lot of the people working on this are very motivated to get it in, but aren't particularly familiar with the SIG process, or don't show up to SIG Auth every time, or understand what sig-architecture is. So as we have those discussions, we should probably be updating that issue and just saying, "By the way, someone needs to go to sig-architecture and have this discussion."
C: The question is: for the various extension points we have, we have REST ones for things like the webhooks, and we have gRPC ones for things like CNI and CRI. The question for each new extension point is which model makes more sense. What are the principles we're using to decide? We should settle on how we make that decision, so this question is easier to answer the next time.
C
So
this
is
the
first
thing
from
the
API
server
that
is
sort
of
a
downward
facing
extension
point
not
an
outward
facing
extension
point,
so
the
storage
layer
is
completely
encapsulated
by
API
server.
That's
not
exposed
to
anyone
else,
and
so
this
is
a
aspect
of
the
storage
layer
and
so
we're
trying
to
figure
out,
given
the
call
pattern
and
the
performance
goals
and
where
we
expect
it
to
be
surfaced,
does
G
RPC,
make
sense.
A: We expect that to be very helpful for people building dashboards or something like that, because you can perform a bulk action and ask, "What are the things that this person can act on in this namespace?" and then hide particular edit controls based off of that information. I would appreciate it if anybody could take a look at that, maybe even after the holidays, because it's not particularly urgent, but this is one that didn't get into 1.9, so it'd be great if we could get it in this time.
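The bulk "what can this user act on in this namespace" query described above matches the shape of the SelfSubjectRulesReview API. A dashboard would POST something like the following and read the rules back to decide which edit controls to show; the dicts here are illustrative, and the response is abbreviated:

```python
# The request: "enumerate my permissions in this namespace."
rules_review = {
    "apiVersion": "authorization.k8s.io/v1",
    "kind": "SelfSubjectRulesReview",
    "spec": {"namespace": "default"},
}

# An abbreviated response: the server fills in status.resourceRules.
rules_review_status = {
    "resourceRules": [
        {"verbs": ["get", "list", "watch"], "resources": ["pods"]},
    ],
}

def can(status: dict, verb: str, resource: str) -> bool:
    """How a dashboard might consume the rules to hide edit controls."""
    return any(verb in rule["verbs"] and resource in rule["resources"]
               for rule in status["resourceRules"])
```

One round trip replaces the N separate SubjectAccessReview calls a dashboard would otherwise need, one per button it wants to show or hide.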
C: As far as that needs-review section goes, another thing that I'm planning to work on in 1.10 is the node restriction stuff around taints and labels. If you have thoughts on that, let me put it in the needs-review section again. Otherwise, I think we will probably start putting some concrete steps in place and see if we can make progress toward getting that locked down in 1.10.
A: Awesome. So I imagine the schedule will be a little less regular with the holidays coming up. I haven't looked at the calendar yet, but I imagine the next time we'll be meeting is in January, so just a heads up for anybody wanting to know where we are. Awesome. Well, thanks, everybody, for joining. This was a short one, so we'll go ahead and give you back 30 minutes. Happy Holidays, everyone.