From YouTube: Kubernetes SIG Auth 20180808
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20180808
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
And there's some discussion around whether this needs to follow the deprecation policy or not. My opinion is that this is more a case of advanced auditing going to GA, as opposed to a removal of a past feature. However, advanced auditing, even with the legacy log format, has slightly different behavior than basic auditing.
So if you have opinions on that, please chime in on that PR. Also, we have a new SIG Auth charter. This is a bit different from what we presented probably about two months ago. Originally, the steering committee looked at all of the charters and sort of refactored the process a bit so that we weren't all copying and pasting slightly different versions.
The audit and a few security-related admission plugins, including NodeRestriction, ServiceAccount, PodSecurityPolicy, ImagePolicy, and some others that might be coming down the pike, and also some other mechanisms for protecting things like secrets, securing the connection between the master and the nodes and other components, and ensuring that components in the cluster are running with the appropriate permissions: those are kind of our core responsibilities.
This hopefully doesn't come as a surprise to anyone; please speak up if you feel like anything is left out here.
Alright, I'm going to stop sharing, if someone else wants to try sharing their screen (they seem to be having some technical difficulties here). But yeah, just to finish running through the other processes: basically, consulting on features that might affect or depend on the other things that we've talked about. Thanks, June.
Kind of general security of Kubernetes should be handled by the appropriate SIGs for the component that we're discussing, so things like container isolation are mostly owned by SIG Node and SIG Networking. Data protection is kind of SIG Storage and SIG Node, and a few other things that are mentioned there.
I have seen setups where someone wanted one container to not be able to talk to the API at all, right, where they had one kind of dedicated container that was going to manage API communication and do stuff, and then firewalled that off from the other containers and interacted via files in a shared volume or some other mechanism.
As I said, it would be a significant architectural change to how we implement it today, and then, as Jordan brought up on the proposal, the kubelet would be responsible for managing n tokens per pod rather than one, where n is the number of containers. That's probably less concerning for me.
It seems more broadly useful: if you have tokens for the same service account with equivalent permissions, that just happen to be distinct, that are bound into the various containers in a pod, I guess that could be interesting from an auditability perspective. But it's going to be pretty hard to thread that information through and make authorization decisions with it, and the container lifetimes are all the same, basically, because they're in the same pod. The distinct permissions seem more interesting to me.
Yeah, if you do that broadly, you almost have to have some really broad name that ties to some set of permissions, and it starts to look a lot like RBAC or something like that: this token is scoped to this role. I'm not sure how to do that without specific opinions about authorization there.
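For context on what that scoping looks like today, RBAC grants apply to the whole service account, not to individual containers. A minimal sketch (all names here are illustrative):

```yaml
# Illustrative only: every container holding this service account's
# token gets the same access; there is no per-container scoping.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical role name
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: demo
subjects:
- kind: ServiceAccount
  name: my-app            # hypothetical service account
  namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```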
Well, if you're granting permission to a service account, and that permission is scoped, or I guess restricted, depending on which container you're in, we can't make strong guarantees that any one container won't be able to gain access to the full set of permissions of the service account.
Yeah, I'm planning on sending a PR out to move the service account admission controller over to using projected volumes (projected service account tokens) this week or next week, so I'll start to play around with turning that off, and also propose a plan on how to phase that in over the next couple of releases. Yep.
I would definitely be interested in seeing if there are ways that we can streamline the API to manually mount those service account tokens through the token volume projection. It's kind of a stepping stone to eventually disabling or even removing the auto-mount feature, since I think most pods do not need service account tokens mounted.
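The manual mount being discussed can be sketched roughly like this, using the service account token volume projection that was in flight at the time (the account name, audience, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-projection-demo          # illustrative name
spec:
  serviceAccountName: my-app           # hypothetical service account
  automountServiceAccountToken: false  # opt out of the legacy auto-mount
  containers:
  - name: app
    image: example/app:latest          # placeholder image
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: kubernetes         # illustrative audience
          expirationSeconds: 3600      # short-lived; the kubelet rotates it
```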
I have proposed on the feature that we promote kubelet TLS bootstrap to GA without promoting a dependent API, the certificates API, to GA. I am confused about the difference between GA features and GA APIs. My take on the certificates API is that while I think the code is GA quality and it is stable, I am unhappy with the current shape of the API; I'd like to make some changes before we promote it to GA.
However, it is a dependency of a feature that is widely used today and has been in Kubernetes for probably multiple years at this point, and it's something that we should probably move towards GA. At this point, I think we can commit to providing a backend for the kubelet to provision certificates.
Yeah, so even in beta, you have a timeline guarantee: if our kubelet features made use of the CSR API, we wouldn't drop it inside the skew window that we support before we had replaced it with something else of equivalent or better stability, and waited out the kubelet/API server skew window. So I mean, we own both of those pieces; we're not going to cut our own legs out from under us.
Say the beta API's v1 comes out in the next release. If the kubelet is depending on the beta API, you can't remove that until two releases later, so by then most kubelets are outside the skew window. Okay, okay. I don't think there are particular concerns that we would try to do something that would cause problems. It's more just kind of a process question: can a feature be called GA if it is built on top of a beta API?
This may be more of a question for the CSR API, but I saw something recently about wanting to be able to interact with the request: a request comes in for X, but you want to approve it conditionally and kind of tweak what's actually going to be issued. Is that the kind of thing you're talking about when you say you're not quite satisfied with the API?
The signer sees a CSR for the profile service backend with a signed public key. It knows what pod made the request, because that information is available in the CSR object; we forward user info there. It then goes and looks up what the pod is sitting behind and fills in those SANs, so it's all just kind of tied together, yeah.
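As a rough illustration of where that forwarded user info lives, a CertificateSigningRequest object looked something like this at the time (certificates.k8s.io/v1beta1; all values here are made up):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: profile-backend-csr           # illustrative name
spec:
  request: <base64-encoded PEM CSR>   # immutable once created
  usages:
  - digital signature
  - key encipherment
  - server auth
  # Identity of the requester, recorded by the API server:
  username: system:serviceaccount:demo:profile-backend
  groups:
  - system:serviceaccounts
  - system:serviceaccounts:demo
```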
It would be possible for a pod to say: I want a serving cert that is valid for the services pointing at me. That's a reasonable thing to want, and so you can express that and have a component whose job it is to monitor such requests, figure out what services map to it, and then interact with the CSR API to produce those certificates. That seems like a distinct thing.
But there's no way for it to generate the CSR unless you replace the signer, so the signer needs to change, because the CSR is immutable after it's created, unless you also want the thing that's watching to plumb the private key down to the pod once it's done. I think the crux is that in the model where the pod generates the private key, the CSR is immutable and the signer is not flexible.
It goes back to lots of manual PKI use cases that don't apply that well to this kind of use case, so I would be in favor of a model where we treat the CSR as just the public key, and it doesn't necessarily need to have any names in it. The names that are being requested are a separate field on the certificate request object, and that way you could have something like an admission controller or another component mutate them.
Already, the signing profiles we have will omit things that are in the CSRs. One of the issues is someone opens an issue saying, I requested this special extension and I got a cert back that didn't include it, right? There's no guarantee that the cert you get back has everything you asked for today. It just happens that what most people ask for, and the things that the kubelet asks for, are not fancy things.
G
If
we
just
did
one
thing,
which
was
pull
the
names
out
of
the
CSR
blob
out
into
a
field
like
that
usages
are,
then
you
could
do
a
flow
where
you
admission
controller,
mutating
admission
controller
that
saw
a
request
from
pod
swapped
in
the
serving
surgit
services.
Names
could
have
then
add
an
improver
that
validated
that
requests
from
certificate
request
from
a
certain
pod
only
had
names
that
matched
up
with
services
that
point
at
that
pod
and
then
the
signer
would
just
look
at
the
names
seems
like
we're
almost
there
for
that
kind
of
flow.
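To make that proposed flow concrete, here is a purely hypothetical sketch of what pulling the names out into their own field (alongside the existing usages field) might look like; the `dnsNames` field did not exist in the API at the time and is only an illustration of the idea:

```yaml
apiVersion: certificates.k8s.io/v1beta1   # hypothetical future shape
kind: CertificateSigningRequest
metadata:
  name: serving-cert-request
spec:
  request: <base64 CSR, treated as just the public key>
  usages:
  - server auth
  # Hypothetical field: requested names live outside the opaque CSR blob,
  # so a mutating admission controller can swap in the service names and
  # an approver can validate them against services pointing at the pod.
  dnsNames:
  - profile-backend.demo.svc
  - profile-backend.demo.svc.cluster.local
```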