From YouTube: Kubernetes SIG Auth 20180530
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20180530
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B
So I've been working on a little utility called kube-oidc that I would like to show off to everybody. It's just a set of tools that I think can help people who are trying to use OIDC: people who are trying to bootstrap clusters into using authentication, to actually have some amount of single sign-on, rather than having everybody pass around credentials to the boxes, which is surprisingly common.
B
So, just to remind everybody how OpenID Connect works in Kubernetes: OpenID Connect produces JWTs that can be validated against a particular provider. In this case (I'll show a little demo in a second that will use Google) the provider gives you an OpenID Connect JWT and then publishes a bunch of keys that allow you to validate it.
B
So, the way that Kubernetes validates JWTs: it doesn't actually have any way of initializing a login flow for JWTs. The API server only validates JWTs. It doesn't prescribe how you are going to get them, how you're going to get these things called ID tokens, and how you're eventually going to use them.
B
So this is a pretty big barrier to entry for a lot of administrators trying to set up OpenID Connect. Generally, what we can tell them is to use something like the Tectonic Console, which knows how to do the OIDC flows, but there are no really good open source solutions. So one of the simple things that kube-oidc provides is a little login service to demonstrate how this works. I'm going to go to localhost:8080, where my service is running, to get an ID token. It immediately redirects.
B
It redirects you via OAuth to Google, it bounces me around, I log in with my Red Hat account, and I've successfully logged in. This is all the information, the claims, that are asserted in the token; so it includes my email, and Kubernetes can now use this information. And then there's just a simple download, so I go ahead and download the kubeconfig. Cool. So now, over here, if I use that kubeconfig with kubectl:
B
This now works as me, and if I do something like kubectl get nodes, you'll see that the error actually shows who I'm authenticated as: I'm authenticated as me, as my email, not as some other identity or an admin user. So it's actually grabbing me an ID token and doing a simple little generation of the kubeconfig. Talking to people who've been trying to use OIDC, I think this will be helpful for them, just to understand: oh, have your users go here, log in, and then download a kubeconfig.
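For context, a kubeconfig generated by a login service like this typically carries the ID token in an OIDC auth-provider entry. The following is a hedged sketch, not the tool's actual output; the server address, user name, and all token values are placeholders:

```yaml
# Illustrative kubeconfig using the kubectl OIDC auth provider.
# Every value below is a placeholder, not output from the demo.
apiVersion: v1
kind: Config
clusters:
- name: demo
  cluster:
    server: https://localhost:9443
users:
- name: alice@example.com
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://accounts.google.com
        client-id: <client-id>
        client-secret: <client-secret>
        id-token: <id-token>
        refresh-token: <refresh-token>
contexts:
- name: demo
  context:
    cluster: demo
    user: alice@example.com
current-context: demo
```

kubectl sends the id-token as a bearer token on each request and uses the refresh-token to renew it when it expires.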
B
Then they can get started. If you actually look at this kubeconfig, it's right now pointed at localhost, so this isn't actually pointed at a real cluster, even though the cluster it's talking to is running on AWS. And this gets into the second part of my demo and the second utility in the kube-oidc repo, which is something that I called the oidc-proxy. So right now we have a proxy that's running on localhost:9443, and I'm still using my ID token to access this localhost proxy.
B
But it turns out that my Kubernetes cluster right now is not configured to understand OpenID Connect. It's just a regular Kubernetes cluster; I haven't reconfigured any of the flags. And in another terminal I have my login service running on port 8080, where I logged in, and then the proxy. The proxy is just an authenticating proxy that understands ID tokens; it actually uses much the same logic that the Kubernetes API server uses internally to authenticate ID tokens.
B
It also validates the ID tokens and then uses impersonation headers on the backend to talk to the API server. So that means that I logged in using my OpenID Connect token, and it authenticated me using impersonation headers. And the nice part about this is that I do not have to configure my API server at all to understand OpenID Connect. That has been really problematic, I think, for some providers who build services on top of Kubernetes.
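As a sketch of what that setup needs on the cluster side: an authenticating proxy forwards requests with Impersonate-User and Impersonate-Group headers, so its own identity must be granted the impersonate verb via RBAC. The service account name and namespace below are assumptions for illustration, not details from the talk:

```yaml
# Minimal RBAC sketch allowing a proxy's service account to
# impersonate users and groups (names here are assumed).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oidc-proxy-impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups"]
  verbs: ["impersonate"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-proxy-impersonator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oidc-proxy-impersonator
subjects:
- kind: ServiceAccount
  name: oidc-proxy
  namespace: kube-system
```

With this in place, the API server authenticates the proxy itself and then attributes each request to the user and groups named in the impersonation headers.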
B
Some of the ground rules, from the people that we've discussed this with, are that this has to be able to install and behave on any Kubernetes cluster, no matter which API server flags are enabled or not enabled. And I think that this is a fun little way of demonstrating that with an authentication proxy you could do something like, all of a sudden, build a Helm chart that works on top of any cluster and have a console that uses OpenID Connect authenticated users. It doesn't rely on any of the underlying setup.
B
It doesn't make any assumptions about what type of protocols the API server was configured to use. So that's it; it's a pretty quick demo, but that's about it. The repo that I'm working on just has a few tools that hopefully help people get started with this set of things and get to the point where you can tell your users to just go log in and download their configs from it.
B
Awesome. So now on to open notes. The first one is another item of mine: promoting the exec plugin support to beta. This PR was opened yesterday; I didn't realize that I was cutting it really, really close with code freeze, but this is not a very large PR, in the sense that it doesn't change much functionality.
B
It's just doing all of the things so we can get a v1beta1 tag in there. I don't know if this gets merged now or once code freeze is over; we should probably discuss that with various people, and it would be good to hear people's thoughts. But the biggest thing is that it doesn't change much of the functionality, so hopefully alpha will keep working, and then we'll have a sort of seamless transition into beta.
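For reference, exec plugin support is configured per user in the kubeconfig. A rough sketch follows; the command, args, and environment variable are hypothetical, while the apiVersion shown is the client authentication API group being promoted:

```yaml
# Sketch of an exec credential plugin entry in a kubeconfig.
# The helper command and its arguments are made up for illustration.
users:
- name: exec-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: my-credential-helper
      args: ["get-token"]
      env:
      - name: OIDC_ISSUER
        value: https://accounts.example.com
```

kubectl invokes the command and reads an ExecCredential object from its stdout, using the returned token (or client certificate) to authenticate.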
A
I know the TLS PR is kind of in the last stages of review, and that adds two fields to the response object, so that would intersect with this. I think this is basically promoting the existing response API as-is and dropping the input API, because the only thing we were using it for we didn't actually need. Is that correct? Whether or not the thing was interactive could be detected well enough.
A
I think the only other one that I remembered talking about was on retry: indicating to the plug-in whether it was being contacted in response to a 401 or just kind of greenfield. And the other thing I could think of is that if we ever wanted to support negotiate flows, you would need to provide kind of the negotiate headers. But I agree that if we're not using it, in its current state it was kind of wonky enough that we probably don't want to promote it to beta as-is without thought. Yeah.
A
I would actually consider taking the distinct pieces of information that we want to feed in and making those tokens that they could put in their invocation. So if you want the server response code, then you can pass that as a token, pass it as an arg to yourself; or an interactive true/false, or negotiate headers. It seems like we're going to have maybe one to three specific pieces of information, and letting you pass them to yourself as an arg might be cleaner. But I think that can be a follow-up.
B
But yeah, I think the request there was basically: rather than trying to be thoughtful about whether or not we should cache it, have a mode where we can just not cache it. The nice part about that is that it could be configuration to this exec plugin; you can already specify the environment variables and the arguments at the command line, so I could easily see that living there, and that has nothing to do with our call-and-response for the inputs and outputs.
D
You know, dynamic audit configuration, similar to dynamic admission control, but also allowing the policies to be dynamic as well. So I'm hoping to get working on this in the next couple weeks, and then we have a couple questions as far as how we want to implement some of it. You know, if there's a static config that lives on the master versus dynamic config in the cluster, what takes precedence? How do we handle that?
D
Do we want to have multiple webhook backends, and multiple policies from those backends? And then how does that sync into what Tim was scoping out? He has a thing up for streaming audit events off the cluster; can we kind of sync these two things up going forward? So I think those are some of the open questions that I have seen so far.
A
I do think the question is whether the static config is sort of a bedrock that is always outputting and is largely independent of the dynamic ones, or whether it is a limiting factor and all the dynamic ones must fit within it. I think Tim and I were on a thread where we were kind of trying to suss out which direction we're trying to go with this.
E
I think there are good use cases, from a command line flag or from a provider, for being able to reduce the scope in both directions. You don't want someone who's a cluster admin, but maybe less trusted than the provider, to disable auditing that you need enabled. We also may not want them to be able to enable full request logging, also for safety reasons. So it seems like you'd maybe want to put bounds around it.
D
Do we want the static one to limit the dynamic stuff, or do we think we just want them to operate separately? That, I guess, is kind of one of the core questions. Yeah.
A
That's kind of a core question: is the dynamic stuff a debug-mode thing, like you want to increase verbosity around a certain user or a certain set of requests temporarily, beyond what the base config does? Or are these long-running things that are disjoint, where you're sending one set of events one place and a different set of events a different place? Yeah, figuring out the use cases, and making sure we're not cutting ourselves off at the knees for the ones that seem important, would be good.
D
I know these, at least, are pretty long-running, so it would be kind of a long-running configuration. But you can definitely see how you might want to limit it from a cluster-admin perspective of, hey, we don't want to be logging requests for secrets, ever, because it's sensitive data. So I can kind of see it both ways. Okay.
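The "never log secret payloads" bound mentioned here can be expressed in today's static audit policy. A minimal sketch, using the v1beta1 policy API that was current at the time:

```yaml
# Sketch of a static audit policy that records request metadata
# generally but never logs request or response bodies for secrets.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# For secrets, record only metadata (user, verb, resource), no bodies.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Everything else at Request level (includes request bodies).
- level: Request
```

Rules are matched in order, so the secrets rule must come before the catch-all; a `level: None` rule would suppress even the metadata entry.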
A
It's really quiet today. I think the streaming audit is more likely to be useful for a debugging use case, just because the poll-based model is a little easier to plug into, and I suspect that the webhook push model is more likely to be used in an "I have a long-running thing" case. You know, maybe you're running on a cloud provider that has their own setup, GKE, and then you want to push your audit events off to your own custom setup.
A
And the streaming would almost be like: when you open the stream, you would provide something that would essentially transform into a policy describing the types of events and requests and users you wanted to stream, and it would start tapping into those for your stream, something like that, because we obviously don't want to accumulate all data on the off chance that someone might start a stream. So it would have to be dynamic as well.
B
I think there were also issues around this: if the goal is to be able to configure the API server to push a webhook to something running on the cluster, didn't we already have issues with the networking specifics for different configurations, with, I think, the self-hosted authorization proposal that is still open? Yeah.
B
Alright, if we're all good, I think we can continue the discussion there. Yeah, awesome. So somebody requested this PR, the non-root group API changes; is that person here? I actually didn't catch your name, but I resolved your issue.
A
I also wanted to call out the service account token volume projection PR; I'll add a link as well. That is Mike's alpha implementation for requesting a service account token to be injected. The design was merged a few weeks ago, and the PR is finishing up review, probably today or tomorrow. So that gets the API changes and the alpha implementation in, and he was going to start working on getting the kubelet to be doing that request in 1.12.
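In a pod spec, the projected service account token volume being discussed looks roughly like this; the audience, expiry, and path values are illustrative, not taken from the PR:

```yaml
# Illustrative pod using service account token volume projection.
# Audience, expiry, and paths are example values.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          audience: vault
          expirationSeconds: 3600
          path: vault-token
```

The kubelet requests a token scoped to the named audience and expiry and refreshes it as it approaches expiration, rather than mounting the legacy long-lived secret.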
F
The community is taking a look; I intend to push to get that into 1.11, not just, say, the API changes. And the other thing that I would also like to get into 1.11 is the token review changes. I think we settled on the behavior of that token review, and it's a lower priority for me than getting the implementation of the token volume source in, but I would also like to see that land; that was the change that plumbed the audience context all the way through.
F
I think this segues into our next topic, I guess: the token volume source API and the token review API. I think you are the only approver for those two sections, so if we can get those APIs settled on and approved, we can start getting other approvers for the implementation; I can get the storage team to work on that. And I noticed that a lot of the OWNERS files in staging, like specifically the client-go one, and a lot of API machinery, are a little scant sometimes.
A
One of the things that I am wanting to do as part of the sub-project stuff is to take the tech leads over those areas and make sure that all of them are in there as owners, and then people who are involved in those areas get pulled in as reviewers. So I'm hoping that the sub-project stuff will actually give us more principled ways to put people in those files and get more than, like, two people.
A
Tim or Eric or I will fight each other for it, I'm sure. I think that will probably be after next Tuesday, but I agree on the approver bandwidth. The thing is that things that cross a lot of SIGs, and things that touch APIs, tend to be the slowest, and we tend to have a lot of those, especially the cross-SIG ones. We've hit more API things recently with your token stuff.
A
So OWNERS files on these packages will help for some things, but they're not actually going to help with the thing that touches the kubelet and touches the API and touches the controllers, and requires scheduling and node and whoever else to all be aware. But we should do what we can to unblock the subtrees we control. So, yeah.
A
Please tag one of Tim, Eric, or me to request that that milestone be put on. The way we are tracking what needs to hit the milestone is things with the sig/auth label and the v1.11 milestone, so if your PR doesn't have a milestone, it doesn't mean we don't think it needs to go in. It just means that we didn't have to add the milestone before, and now we do; and so now we will, if it is targeting 1.11.
H
Let's go to the last comment. I think that, yeah, on this one I have several concerns. You know, I echo what Clare mentioned in the previous comment: I think this validation comes later, and volumes may already be mounted with the fsGroup earlier. I don't fully understand this; if someone could help me understand what needs to be done here.
H
I think, yeah, so I mean, I think I added the fsGroup check, but the concern is that at whatever point this runs, it won't check, because at whatever point verify-non-root fails, the pod would not be running, right? So wouldn't that still not allow the pod to run as root? Or are we saying that the mount will still be done even though the pod is not running? That's a risk.