From YouTube: Kubernetes SIG Auth 20181031
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20181031
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B
Hey, thanks Jordan for accommodating me last minute, I really appreciate it. So today we have a demo of the Kubernetes policy controller, which is essentially a policy admission controller backed by Open Policy Agent. This essentially allows policies that are semantic in nature, mostly rules that any organization would implement for governance, and for practices that are lessons learned from the past, like, you know, using specific registry paths and so on. So let's go to the demo; let me share my screen.
B
So if you see, this has a rule saying, via a whitelist, that images can only come from the QA registry. Since the incoming image did not match the regex, it was denied. So by defining a simple policy you can, you know, apply certain rules. What we envision is that there will be a library of such policies, which will basically be a set of deny rules. This is one of the validation rules, but you can also do mutation, to, say, annotate a pod on create which matches certain conditions, and things like that.
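The actual demo evaluates Rego policies with Open Policy Agent; as a rough Python sketch of the logic such a registry-whitelist validation rule expresses (the registry prefix and pod shape below are invented for illustration, not taken from the demo):

```python
import re

# Hypothetical whitelist: only images from this registry prefix are allowed.
ALLOWED_REGISTRY = re.compile(r"^qa\.example\.com/")

def deny_violations(pod):
    """Return deny messages for any container image outside the whitelist."""
    violations = []
    for container in pod.get("spec", {}).get("containers", []):
        image = container.get("image", "")
        if not ALLOWED_REGISTRY.match(image):
            violations.append(f"image {image} is not from an allowed registry")
    return violations

pod = {"spec": {"containers": [
    {"name": "ok", "image": "qa.example.com/app:v1"},
    {"name": "bad", "image": "docker.io/library/nginx:latest"},
]}}
```

An admission controller built this way rejects the request whenever the list of violations is non-empty.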
B
The audit is a service endpoint on the service itself, or it can be implemented as a separate service. So the way the rules are written is the thing that is interesting. All the rules are basically written in this fashion, which is a deny rule, so you can basically query, across all the objects, for all the deny violations.
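A hypothetical sketch of that audit-style query: because every rule is written as a deny rule, an audit pass can just iterate over stored objects and collect every violation. The rule below is a stand-in for illustration, not one of the demo's policies:

```python
def image_rule(obj):
    # Stand-in deny rule: flag objects whose image field is empty.
    if not obj.get("image"):
        yield f"{obj['name']}: image must not be empty"

def audit(objects, rules):
    """Query all objects for all deny violations, as the audit endpoint does."""
    return [msg for obj in objects for rule in rules for msg in rule(obj)]

objs = [{"name": "a", "image": "reg/app:v1"}, {"name": "b", "image": ""}]
```

The uniform deny-rule shape is what makes this single generic query possible.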
G
Even though you could use, here's an example with our API: on EKS, the x.509 certs you'll get are not accepted by the cluster API. Actually, they are partially accepted in the case of EKS: if you are only using kubelet roles they work, but they're not working for anything else, no other roles. So basically the question to the group is whether the CSR API should be promoted out of beta: what is required, what are the requirements, and what are the missing features?
G
Hold on, yeah, don't get me wrong, I'm not frustrated. Actually, we took a misadventure upon ourselves when we started using the beta API, so I'm actually more interested in getting this to work and identifying what actually has to be done, so we can all confidently say, hey, this is a good API for everyone to use, and we can actually start working on the conformance. If that makes sense, right.
F
I have a lot of open questions about what we should allow clients to specify and how we should allow clients to specify that information. Right now we take some information encoded in the CSR and some information encoded in the actual object, and I'm not sure that division makes sense. For example, should we use the actual encoded CSR just as a transport for the public key that is signed, and move all the attributes that we're going to sign into the end certificate into the outer object?
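For context, the CertificateSigningRequest object carries a PEM-encoded CSR blob in `spec.request` alongside outer fields such as `spec.usages` and the requesting `spec.username`, while the subject and SANs live inside the encoded blob. A toy sketch of the split being proposed; the `subject` and `dnsNames` outer field names are hypothetical, not part of the API:

```python
# Today: subject/SANs travel inside the opaque encoded CSR; only the
# requester identity and key usages sit on the outer object.
csr_today = {
    "spec": {
        "request": "<base64 PEM CSR: public key + subject + SANs>",
        "usages": ["digital signature", "key encipherment", "client auth"],
        "username": "system:serviceaccount:default:app",
    }
}

# Proposed: the encoded CSR becomes only a transport for the public key,
# and every attribute destined for the end certificate moves to the outer
# object, where an approver could inspect or modify it.
csr_proposed = {
    "spec": {
        "request": "<base64 PEM CSR: public key only>",
        "subject": {"commonName": "app.default.svc"},   # hypothetical field
        "dnsNames": ["app.default.svc"],                 # hypothetical field
        "usages": ["digital signature", "key encipherment", "client auth"],
        "username": "system:serviceaccount:default:app",
    }
}
```

The point of the split is that outer-object attributes are visible and mutable in the Kubernetes API, whereas attributes baked into the encoded CSR are fixed by the requester.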
F
That's not like a hard requirement, but we have this flow where an approver approves a certificate and a signer signs the certificate. Since those subjects are encoded in the CSR in our API today, they aren't modifiable by an approver in an approval flow. I think this causes problems when you have situations where a pod wants to get a certificate for itself, but it doesn't necessarily know what service it's running as, or you have to give the pod potentially broad read permissions.
F
That points to, "I need a certificate for being a node," and having those kinds of profiles mapped to specific structures that are filled in by approvers. And then there are other issues. I think that's the main thing I would like to see in the API; that would solve a lot of problems. There are other issues, such as support for multiple CAs, that we get asked about pretty often, and yeah.
G
All right, a couple of questions and follow-ups. So, as far as I remember, it's based on CFSSL internally, right? Yes, the API. I looked at CFSSL and it actually addresses some of your questions, for example the profiles that you mentioned, it has those. As far as I remember the way the API, the underlying CFSSL, is structured, it's sort of built around the CSR, but the CSR, as you maybe noticed yourself, is quite limited in its nature.
G
So there's alternative metadata that is included in the request itself; CFSSL, for example, includes a purpose and a TTL, which the CSR does not support right now. On the other side, the CFSSL server supports what are called profiles, right. It allows you to apply a profile as a blueprint on the request and limit it. So it would be nice to expose this, I guess; if we expose this in the, let's say, approval flow, it would suddenly give the approver the flexibility to grant this.
G
This would sort of solve your problem. For example, if I request a certificate with the purpose of being just a user, let's say for 12 hours, but the profile is limited to 6 hours, I will get 6 hours, or be rejected, depending on how we want to solve this, right. But would that address it? It's basically exposing a little bit more of what CFSSL already provides. Also...
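A minimal sketch of that profile-as-blueprint behavior, covering both policies the speaker mentions (cap an oversized TTL request, or reject it outright); the profile structure here is invented for illustration:

```python
def apply_profile(requested_ttl_hours, profile):
    """Return the granted TTL under a signing profile, or raise if the
    profile rejects requests that exceed its maximum."""
    if requested_ttl_hours <= profile["max_ttl_hours"]:
        return requested_ttl_hours
    if profile["on_excess"] == "cap":
        # Grant only what the profile's blueprint allows.
        return profile["max_ttl_hours"]
    raise ValueError("requested TTL exceeds profile maximum")

capping = {"max_ttl_hours": 6, "on_excess": "cap"}
strict = {"max_ttl_hours": 6, "on_excess": "reject"}
```

So a 12-hour request under the capping profile yields a 6-hour cert, while the strict profile refuses it, which is exactly the design choice left open in the discussion.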
D
...without making the requester know anything about the profile name. I think you want the ability for the approver to be able to say: I approve this cert, or this request, to be signed by this profile, with these additional dynamic attributes. I think it's the dynamic attributes that the CFSSL profiles don't deal well with. They can say: this profile means this CA is going to sign it, and the expiration is going to be fixed to this, or limited to this, and we're going to ignore these requests for extensions and we're going to honor...
F
Right, that is one way to do it, where the approver hands off a profile to the signer. Another way to do it is to have the approver fully reify the certificate and hand off a fully baked template to the signer, and just say: well, signer for the CA, go sign this template. Yeah, if I remember correctly, when we were talking through this, we...
G
Right, so basically I think the right way to work on that part specifically is just to write a design doc. I'll kick this off and send it to the group, and we'll just work on this until we identify the exact design details, and then move on to the implementation. The only other questions I have about this assume it's all done, right, so we sort out the limitations on profiles, which gives us a little bit more shape.
D
I think that requiring certificate issuance to be supported, in order to be conformant, is probably not going to be part of the base conformance profile, and neither is requiring that certificates issued by it be usable downstream. In fact, if you could not depend on certificates issued by this being able to authenticate to the API, that puts restrictions on the topology of clusters that I don't think we can dictate as part of base conformance. So...
G
So it's all just a little bit more work, and those are the only two options that we have. We're an independent user of Kubernetes clusters, and we've noticed that, basically... I've talked to the EKS team and asked them what they think we should use; they said service accounts. I'm just curious what the group thinks is the right way to authenticate against, let's say, a cloud-provider-independent, vendor-independent, conformant Kubernetes cluster, for the purposes of programmatic API access. For interactive user purposes that doesn't make sense, yeah.
G
So the way we've been working around this, building an independent flow for authentication, is using the CSR API. Assume you have a small identity provider, like a SAML identity provider or OpenID. You can manage users and let them authenticate against it, and then what happens, basically, is that this authentication flow includes the third party connecting to your Kubernetes cluster and issuing some sort of cluster-independent identity token, right. And there are only two independent identity tokens that are recognized by Kubernetes clusters right now, or rather not tokens but authentication methods.
G
The first one is client certs, which in my opinion is really good, because it's a really simple, industry-standard-compliant mechanism, and short-term certificates are really good from the security perspective. That's why we picked it in the first place: we've got a short-term cert that is trusted by the Kubernetes cluster API. The other way is to have a service account token, with the token managed by the Kubernetes cluster, which is a little bit less convenient but still works.
G
Outside of that, if you think about, you know, EKS, Azure, Google, or any other vendor or cloud provider: they don't have this common ground for external identities interacting with the cluster. They're all kind of expecting you to use their plugins, either on the kubectl side or on some other side, right, and we would like to avoid this, because for us the integration...
G
...complexity will skyrocket, right. Assuming that you have to write a plugin for every single cloud provider kind of defeats, for us, the purpose of using something like Kubernetes in the first place. We just assume that Kubernetes should give us some sort of way to talk to the cluster and present our identity without being aware of particular cloud providers, at least in, you know, 80% of the scenarios, right. Does this make sense?
E
I'm a little confused on this, right. It's kind of the point that each of these clusters has its own authentication method, right; it's why authentication is pluggable in the first place. It's meant as a non-opinionated thing for the cluster, and certainly service accounts and such are highly opinionated from the Kubernetes side, right. But if you want to have a way to, like, assert an authority... I assume you have at least some way of configuring the cluster. Maybe I'm misunderstanding, right.
E
But if you have an IDP, it could issue just a random noise token, and then you have a proxy sitting in front of the control plane; when it sees its own token, it sends a specific identity to the Kubernetes cluster behind it, right. And then it doesn't really matter, right; it could be some type of mutual TLS between that proxy and the Kubernetes cluster, and then it doesn't matter what kind of cluster it is. That's certainly possible, if that's all...
G
Exactly your point, right. So we have this proxy that basically works exactly as you described. The fact of how users actually interact with this proxy is irrelevant; what's relevant is how the proxy presents, or transfers, this authentication knowledge to the Kubernetes cluster, right. And right now there are only two cloud-vendor-independent ways of doing that. As you mentioned yourself, there are bearer tokens with service accounts, or mutual TLS, which we have at our disposal already. But there's also support for, like, header...
G
...a Kubernetes-native identity, not an arbitrary identity. And again, whether this authenticated proxy is able to negotiate with the API is up to the proxy; if the Kubernetes API doesn't trust the proxy to issue certain identities, it shouldn't, right. And this, in our opinion, should again be governed by RBAC in Kubernetes itself.
G
So basically what we're looking for is, as I mentioned, a programmatic way to issue identities which are Kubernetes-native, not cloud-provider-native, that are trusted by Kubernetes itself and will work on any compliant Kubernetes cluster, which is not the case today. Right now there is no clear way, except maybe service accounts, right, but that's slightly less convenient. And if the SIG's answer is: hey, just use service accounts for that purpose, we'll just resort to that; but before going there I was mostly curious.
G
The proxy basically holds this in memory, and it just authenticates every single user whenever it gets this authentication on the client side; in our case client authentication is also x.509, but that's pretty much irrelevant, right. So the real question is: how do we make this happen, and should it even be possible to have this across all cloud providers? We have talked to the Google team as well about that, and their opinion, at least of the project manager teams that we have talked to, is: hey...
G
So I'm not sure... Regardless of the proxy conversation, right, let's just move on to the root cause of the problem: should there or should there not be a way to issue some sort of trusted identities, or tokens, or other authentication mechanisms, that are trusted by all Kubernetes clusters that conform to the API? If the answer is no, that's, I mean, that's an answer.
G
If the answer is yes, then we should dig into what that answer should be: should it be x.509, or bearer token authentication, or any other form of authentication, or both, right. Because if that is possible, then all the different scenarios, like trusted proxies, are possible. If that's not possible, that's a different story, right; we should basically then be saying: okay, if that's not possible, the only way is to integrate with every single cloud provider out there, right. I...
D
So when you make an API request, you authenticate as whoever you authenticate as, and you can set headers on that request indicating that the request should be treated as if some other user made it. I'll dig up a link to the docs. Once the server determines who you are, you can dictate the username and group memberships associated with the request.
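This is Kubernetes user impersonation: a caller who authenticates as itself and holds `impersonate` RBAC permission sets the `Impersonate-User` header, plus one `Impersonate-Group` header per group, and the API server handles the request as that user. A small sketch of building those headers (the target user and groups are made up):

```python
def impersonation_headers(username, groups=()):
    """Build headers asking the API server to treat this request as if
    `username` (a member of `groups`) had made it. The caller still
    authenticates as itself and needs RBAC permission to impersonate."""
    headers = [("Impersonate-User", username)]
    # Impersonate-Group is repeated, one header per group membership.
    headers += [("Impersonate-Group", g) for g in groups]
    return headers

hdrs = impersonation_headers("jane", ["developers", "system:authenticated"])
```

The same mechanism is what `kubectl --as` and `--as-group` use under the hood.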
F
I think this is almost equivalent to request-header authentication for front proxies, but it doesn't erase the audit trail, and it's more widely exposed by, like, hosted Kubernetes platforms. That's pretty cool; I didn't know about that, thanks. However, I don't want to discourage you from fixing the certificates API.
E
Yeah, you would have to be careful, because by default service accounts are in the group that's part of their namespace, and I think you have to explicitly say that when you impersonate, so that might be a little confusing. But that only matters if your RBAC permissions are assigned to the group and not to the specific service account.
D
All right. So, for anyone following along with the node self-labeling proposal: this is like a year and a half in the making, and I think maybe we finally came up with something that will give people who want to isolate nodes the tools they need, and also give people who don't care about isolating nodes the ability to live without poking their eyes out.
D
Yeah, our ability to partition up that space is stronger than with arbitrary labels. So for the people who don't care about isolating their nodes, the implication of this limitation would be: if you want your kubelets to set arbitrary labels, put them under your own namespace; don't put them under the kubernetes.io namespace.
D
So that is a much smaller implication for people who don't care about this than some of the other things that were being discussed. The implication for someone who does want to isolate their nodes would be that they would need to use labels under the kubernetes.io namespace, ideally within this node-restriction prefix that would be set aside for that purpose; but any label under kubernetes.io that isn't one of these whitelisted ones would be safe.
D
Kubelets wouldn't be able to mess with those or overwrite them. This is a lot cleaner than some of the things we talked about that required translating between topology labels for CSI and Kubernetes labels; if you value your sanity, don't go read that suggestion, because it's really, really ugly. Anyway, I just wanted to bring people's attention to the latest iteration of this.
D
If you have thoughts or feedback on this, from either perspective, how you would use it as someone who cares strongly about node isolation, or how this would impact you as someone who doesn't care at all about node isolation, please weigh in. I think we're going to try to wrap up the discussion, probably today or tomorrow, and move forward. But anyway, this has been a long time in the making, and I think we're close.
D
So if you think about people who want to isolate their nodes, what this is saying is that only the set of whitelisted labels in the first list can be set by kubelets. So if you're using kubernetes.io labels outside of that whitelist, that's safe, as far as a compromised node not being able to monkey with your labels.
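A toy sketch of the whitelist semantics being described: a kubelet may set labels outside the kubernetes.io namespace, plus an explicit whitelist inside it, but nothing else under kubernetes.io. The whitelist contents below are illustrative, not the proposal's actual list:

```python
# Illustrative whitelist of kubernetes.io labels a kubelet may still self-set.
KUBELET_ALLOWED = {
    "kubernetes.io/hostname",
    "beta.kubernetes.io/os",
    "beta.kubernetes.io/arch",
}
RESTRICTED_DOMAIN = "kubernetes.io"

def kubelet_may_set(label_key):
    """True if a (possibly compromised) node may set this label on itself."""
    domain = label_key.split("/")[0] if "/" in label_key else ""
    in_restricted = (domain == RESTRICTED_DOMAIN
                     or domain.endswith("." + RESTRICTED_DOMAIN))
    return (not in_restricted) or label_key in KUBELET_ALLOWED
```

So a label under a user's own domain is always settable, a whitelisted kubernetes.io label is settable, and anything else under kubernetes.io, including a node-restriction prefix, is protected from the node.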
D
The problem with that is that it encourages people to use labels under the kubernetes.io namespace for node segmentation, and we don't want people just picking random labels under kubernetes.io. We want to give guidance to say: use these labels for this purpose, and this label for that purpose, and this prefix for this purpose. So the node-restriction.kubernetes.io prefix is just giving guidance to say: if you're isolating nodes, do it with this prefix, and we promise not to use this prefix for something random in the future.
D
That's probably a better approach anyway; our admission plugins probably shouldn't be making claims about labels outside the kubernetes.io namespace. If people want to do something with labels outside that namespace, they can set up an admission plugin themselves and allow or deny or whatever. Limiting ourselves to opinions about our own labels is probably good.
D
That would fall outside the whitelist in bullet point one, and so, like everything else outside that whitelist, kubelets wouldn't be allowed to mess with it. So that's just documentation, as you mentioned: to say, if you want to label nodes this way, use this prefix and you won't conflict with us in the future.
F
So this kind of throws a little bit of a wrench in our migration from old secret volumes for service account tokens to projected-volume service account tokens. We had some ideas on how this should look; maybe PSP could ACL projected volumes based on source, rather than just projected volumes as a whole. It doesn't solve our specific issue.
F
I think there are some backwards-compatibility things that we could do, where the secrets permission implies downward API, config map, and service account token, or something like that, just for the migration, and then maybe eventually phase that out, or never phase it out. I just wanted to make people aware of this issue; if they're interested, I will post a link to where we're discussing it, yep.
C
Just wanted to mention, back to the release: code freeze is November 15th, which is right in the middle of KubeCon China. If you're going to be at that KubeCon, keep that in mind. If you're not going to be there, keep in mind that your reviewers might be there; I will be there. So make sure you're coordinating.