From YouTube: Kubernetes SIG Auth 2020-01-08
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2020-01-08
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
Yeah, I mean, to me, even if it wasn't necessarily flexibility that we could gain, I do like the idea that if you had an encrypted set of data from etcd, you might actually have a chance of being able to use a standard tool. I think you'd still probably have to trim off prefixes for values.
B
We have this length-encoded format with some random prefix, yeah, that in my personal opinion should never have existed. We could have created a protobuf wrapper and given that another version, and I think we would have been in a better situation to make this migration. This format scares me a little bit. I think CMS is an option. I probably wouldn't recommend JWE.
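As an aside, the length-encoded, prefixed value format under discussion can be sketched roughly as follows. This is an illustrative reconstruction, not the exact on-disk layout: the prefix string, the provider-name separator, and the two-byte big-endian DEK length framing are assumptions based on the description above, and provider names are assumed not to contain `:`.

```python
import struct

# Hypothetical reconstruction of the envelope value format: a string prefix
# naming the provider, then a 2-byte length of the encrypted DEK, then the
# encrypted DEK, then the ciphertext of the actual object.
PREFIX = "k8s:enc:kms:v1:"

def pack_value(provider: str, encrypted_dek: bytes, ciphertext: bytes) -> bytes:
    """Prepend the provider prefix and a 2-byte big-endian DEK length."""
    header = (PREFIX + provider + ":").encode()
    return header + struct.pack(">H", len(encrypted_dek)) + encrypted_dek + ciphertext

def parse_value(value: bytes) -> tuple[str, bytes, bytes]:
    """Split a stored value back into (provider, encrypted DEK, ciphertext)."""
    prefix = PREFIX.encode()
    if not value.startswith(prefix):
        raise ValueError("not an envelope-encrypted value")
    rest = value[len(prefix):]
    provider, _, rest = rest.partition(b":")
    (dek_len,) = struct.unpack(">H", rest[:2])
    return provider.decode(), rest[2:2 + dek_len], rest[2 + dek_len:]
```

This is also why a "standard tool" reading etcd directly would have to trim the prefix and the length header off each value before it could do anything useful with the bytes.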
A
The only thing I know about CBC is... sorry, with GCM, since you can't control the number of writes you do, you have issues there. But since we only ever do one write per key, with some random key, it works out that way. And I did ask my friends who actually know crypto better than me whether this makes sense; the general gist I got was that it probably doesn't matter either way. It's not necessarily a big deal in this case, but...
F
I think when I started thinking about what that would look like, I quickly realized that we would need to rethink how much we encrypt. Basically, today we encrypt the whole envelope, including the metadata section, which is the crux of the problem: if you cannot decrypt the whole packet, the whole payload, you are basically no longer able to do anything with it.
C
If you're in an environment where the degree of control that lets you fix a problem with your encryption provider is the same level of access that you need to go fix etcd, like if you got into this problem via the API and couldn't get out of it, then that would make more sense to me. But this means you got into this problem because, basically, your storage configuration went sideways, and so...
F
But don't you think it's an issue that we cannot? So the problem is with billing: many KMS providers charge you for key versions. So it's a very legitimate use case for a customer to go and clean up the versions of the key which the customer believes are no longer in use, but there is no way to tell, by looking at the secret object, what version of the key was actually used.
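The gap being described, that nothing inspectable records which key version encrypted a given object, could be closed by storing the key ID in the clear next to the ciphertext. A minimal sketch of such a layout, with the framing entirely hypothetical:

```python
import struct

def wrap_with_key_id(key_id: str, ciphertext: bytes) -> bytes:
    """Store the KMS key version alongside the ciphertext, unencrypted, so it
    can be read back without a decrypt call. Hypothetical framing: a 2-byte
    big-endian key-ID length, then the key ID, then the ciphertext."""
    kid = key_id.encode()
    return struct.pack(">H", len(kid)) + kid + ciphertext

def peek_key_id(value: bytes) -> str:
    """Recover the key version used for a stored object without decrypting it."""
    (kid_len,) = struct.unpack(">H", value[:2])
    return value[2:2 + kid_len].decode()
```

With something like this, a customer (or the KMS) could enumerate which key versions are still referenced before deleting any.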
F
I'm trying to represent the customer here, because I actually have customers that got into this scenario, and they are very surprised by the fact that they cannot delete the key (you cannot delete the secret, yeah), and they are surprised by the fact that they cannot tell which versions of the key can be safely deleted. And I acknowledge that part of this responsibility may lie with the KMS, somehow.
G
You can get some mileage out of that, right? Like, if you pulled out name, resource version, and finalizers. But you don't know whether any of the individual finalizers are going to go through and say, "I need the secret data content to make a decision about what action I need to take in the finalizer," right? And so you can get some mileage out of it, but it's not complete.
B
Well, so consider the case of the service account token controller, right: it will blow up entirely, because every list it does when it's watching secrets is going to return 500s from the API. Actually, most lists across all namespaces will return 500s, because one secret is not decryptable. So this renders the cluster pretty unusable. Gets will continue to work if you remember what your secrets were named, but in general they're not discoverable. That cluster gets into a pretty messed up state in this situation.
G
You know, but I get it: how are you going to decide you want to delete it? And I guess, you know, if you can read the etcd data directly, you can know that you want to delete this particular one. But we're trying to solve something where you can't get the object anymore, but you know which one you want to delete, but the data you can't get.
A
Like, part of the problem here is the KMS API effectively lets you touch things that weren't meant to be touched by you, right? Like, we were supposed to control how we wrote things to etcd; the fact that it was etcd at all was a thing that... all of this was supposed to belong to the API server. And we gave you this really low-level control over how you contort bytes and stuff, and now you are telling us that you're in the critical path of us reading and writing data.
C
Is there... I do want to timebox this, because there were some things towards the end of the agenda I would really like to get to. But would it be worthwhile to see, especially if we're talking about how we store things, whether having a way to associate key IDs, or some method so that a provider that wanted to have a repair or reconcile or some sort of function that was really low level could scan etcd and identify keys, identify etcd keys, that were associated with no-longer-available...
C
...KMS keys. Like, I don't think this is a normal thing that we want to expose via the API, but if there was something that a managed provider could do that would sweep and say, "these KMS keys are gone; I know that you need these IDs, and the only reasonable thing to do is to actually just delete them," I think.
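The sweep being proposed is simple to state once each stored object's key ID is recoverable. A sketch of the core of such a reconcile function, using plain dicts and sets to stand in for an etcd scan and a KMS key listing (both names and shapes are hypothetical):

```python
def find_orphaned_objects(stored: dict[str, str], live_key_ids: set[str]) -> list[str]:
    """Given a map of object name -> KMS key ID recorded at write time, and
    the set of key IDs that still exist, return the objects that can never be
    decrypted again and are candidates for deletion by the sweep."""
    return sorted(name for name, key_id in stored.items()
                  if key_id not in live_key_ids)
```

A managed provider could run this out of band, without exposing anything new through the Kubernetes API itself.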
F
I think there's a disadvantage to this bulk re-encryption wrapping approach, because it kind of defeats the purpose of gradual key use. One of the ideas of key rotation is to limit the number of operations performed with a key, but if, basically, every time we rotate a key we re-encrypt all the secrets, then we basically kind of defeat that purpose.
D
Is he here? Yeah, I think he didn't respond. I can maybe speak a little bit to this, but sure: I know that we're working to extract the in-tree cloud-provider-aware credential mechanism, right? So I don't know where we're at; we reached out to Nick on this issue. Actually, Mike, have you talked to Nick about this one? Yeah.
B
It's essentially, if you are all familiar with the auth exec plug-in and the docker credential helpers, like the child of those two things. Basically, it proposes a config API group, similar to the exec plug-in API group, that will be used to feed registry credentials to the kubelet, to pass down to CRI to pull images. Seems reasonable.
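For anyone unfamiliar with the pattern, a minimal sketch of such an exec-style plugin might look like the following. This is purely illustrative: the request and response shapes here (`image`, `auth`, `cacheDuration`, the `CredentialProviderResponse` kind) are assumptions modeled on the auth exec plug-in and the docker credential helpers, not the proposed API.

```python
import json

def handle_request(request: dict) -> dict:
    """Build a credential-provider response for the requested image.
    All field and key names here are illustrative, not a final API shape."""
    image = request["image"]
    registry = image.split("/", 1)[0]  # e.g. "registry.example.com"
    return {
        "kind": "CredentialProviderResponse",
        "cacheDuration": "5m",
        "auth": {registry: {"username": "token-user", "password": "<fetched-token>"}},
    }

# The kubelet would exec the plugin binary, write the request JSON on stdin,
# and read the response JSON from stdout; here we call the handler directly.
response = handle_request({"image": "registry.example.com/team/app:v1"})
print(json.dumps(response))
```

The kubelet would then merge the returned credentials into what it passes down to CRI for the image pull.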
B
There were some alternatives that seemed less ideal overall. I think it's reasonable. I wouldn't want to reuse the existing exec plug-in API group; I think this should probably be a separate API group, maybe a kubelet API group. Other than that, it seems reasonable, if this is something that somebody wants to spend time to work on.
J
One question on this one was whether the kubelet should be handling this at all, or if we should just push it down into the image service API. I think right now we do sort of a mix: there are some things that the runtime figures out, and some pieces that the kubelet provides. And so I'd like to make that more consistent, and either push it all down into the image service...
C
It happens to be hard-coded using cloud APIs in the kubelet, but if that was instead done inside the container runtime... I'm not familiar enough with the container runtime to know if that would be at all reasonable, like whether it has the information that it needs to make the same kinds of decisions. But yeah, if we can keep it contained to either the kubelet or the container runtime, that would be my preference.
C
Please speak up, and be the point person too, so this doesn't languish. I think this is probably the trailing edge of the things that would still require in-tree cloud provider dependencies, just because it's getting such a late start, so I would really like to see at least a plan for this. Otherwise, even once the other cloud provider extraction stuff wraps up, like into this year, this will still be requiring things. There's an alternative as well, which I think we talked about in one of the linked issues:
C
Some of the credential providers that are cloud specific don't actually use the cloud SDKs; they just hit metadata endpoints and do things much more lightweight. So it's worth asking if that's a possibility for some of the providers, and if it is, then that could be a good intermediate step that would let us drop the super-heavy SDK dependencies, you know.
J
We've been talking, around the discussions of the future of pod security policy and some of the different alternatives that we're considering, and one of the things that came up in a few of those discussions was the same point: actually, most organizations just want a small selection of standard profiles. So it was basically the privileged one and the restricted one, and then maybe this intermediate default.
J
The idea we had was to publish essentially a recommended pod security profile, sort of with a qualitative description, and have links to "this is what this actually looks like in pod security policy," "this is what this looks like under Gatekeeper," and anyone else that wants to add to that. And we can say, sort of, this.
C
Having mechanisms that support, that are aware of, these levels, and have pre-canned implementations of these levels, is really nice. This actually ties into a later agenda item: I don't want us to be like Gatekeeper, hosting the manifests, really, but I'm fine with pointers to kube-compatible policy mechanisms that are on board with these well-known levels.
G
Yeah, so we talked about this last year; it's gone through a couple review cycles. There were active... sorry, this is CSR support for multiple signers, and at this point I think I've gotten everything resolved except for a couple minor questions. I went in the channel yesterday asking for additional comments and specifically tagged Mike Danese, who had a couple outstanding questions on the implementation PR and on the KEP itself. Do you want to talk to your questions, Mike, if you feel like it? Yeah.
C
...if it exists, especially if it is implemented in a way where they have an option to turn off the enforcement. So if we do it in an admission plug-in and they decide they don't care about this, they can always turn off the admission plugin in their environment. But I think the beta phase is the appropriate time to make changes like that.
G
That aligns with what I'd like to do. I would like to make changes like this extremely clear, so that users aren't surprised by a change of behavior, and give them a means to either temporarily suspend it, and so grant an RBAC permission to whoever they want, or to update the code that uses it. And I would say, as to this particular bit of code: if you wrote your own signer, that is complex enough that we can reasonably expect you to understand how to interpret a string field.
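For context, "interpreting a string field" here amounts to a custom signer filtering pending CSRs by the signerName string and simply ignoring requests addressed to other signers. A minimal sketch, using plain dicts in place of CSR objects (the `signerName` field is the one the KEP introduces; the example values are hypothetical):

```python
def csrs_for_signer(csrs: list[dict], signer_name: str) -> list[dict]:
    """Select only the CSRs this signer is responsible for; requests for
    other signers are left untouched for their own signers to handle."""
    return [c for c in csrs
            if c.get("spec", {}).get("signerName") == signer_name]
```

A custom signer that does this never collides with the built-in signers, which is the behavior the enforcement is meant to encourage.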
C
So what that would look like is release notes laying out the options you have. Like, if you want to keep all approvals the way they are and not benefit from this segmented enforcement, then as a cluster administrator you can turn off the admission plugin, of course, and you miss out, but you also don't break anything for anybody here.
C
The fact that we don't differentiate multiple signers today: I think I'm comfortable saying that means that, if you're running a custom signer, you are also in control of the kube-controller-manager for the cluster, because you would have to customize its configuration to say "don't run the built-in signer"; otherwise you would get dueling signers. So I don't think we generally have a case where end users who don't control the cluster configuration are running custom signers.
G
Right, I'll plan to make that update tomorrow. There was one other significant question: it was about whether defaulting the signer name was valuable in v1beta1. I think that question was also from you, Mike, yeah. And I tried to describe why I thought it was valuable and why I thought we should do it; I wasn't sure whether I convinced you or not.
C
...what will be possible or difficult for external signers that want to basically answer, or sign, the kubernetes.io requests. And so, James, from your experience: whether signers typically, like, pay attention to a few things and then just stomp the remainder, whether that's more normal, or whether they typically reject things that are out of bounds. That might be helpful to know, yeah.
A
Are you on the call? So, I think, you know, we've been talking a little bit, and I know Jordan had commented some on the thread. I think the general concern is kind of similar to the PSP discussion we had just a little while ago: I have this feeling that there is just a little bit too much centralization around the driver. Like, in the world I was sort of imagining, right...
A
The driver exists on its own, and it has, like, little links to where you can get providers, but you deploy the driver by itself, and then you go get your providers and you add those as you want. Like, the driver kind of works by itself; it shouldn't necessarily... you know, like, we had discussed some of the command-line flags and how they enumerate, like, minimum versions of particular providers. That seems like a really tight coupling to me. Yeah.
H
For sure. So I think this is an easy fix, right? I think it's easy to remove it from the chart. And then, as for the provider compatibility and the parameter check for the minimum version of the provider: those are just nice-to-haves that we put in place to give users a better error message, to tell them, "hey, here's the minimum version you should be running." But what we could do is make that check optional.
C
I mean, the more generic and agnostic the driver can be with respect to the provider that's running on top of it, the better. So yeah, if there's a generic mechanism that would say, like, "here's who I am and here's my version range," and then in the providers' manifests they can invoke it with specific parameters, and if it doesn't check, that's fine. I just want to avoid particular providers being embedded in the driver.
C
Right, that was the only remaining concern I had. It sounded like Mike and Michelle were happy with where things ended up. There's maybe some cleanup stuff to do after the move, but nothing blocking. I just want to have this conversation before we bring it in, yeah, before it becomes like an attractive, sort of first-party, "here's the official manifest" thing. So, yeah.