From YouTube: Kubernetes SIG Auth 2022-02-08 KMS #6
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-02-08 KMS #6
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
...started. I think we had some action items from last time, to sort of think about some of this stuff. I know, first off, you had made some suggestions in the Slack channel.
B
Yeah, so, basically: usually, before one starts to reinvent the wheel, it's always good to check which ways are out there. So I kind of googled (oh well, I'm not using Google, actually, I'm using a different search engine), but I crawled the internet and found out that there is a Google library.
B
It's called Tink. And I remember that, in my previous company, someone referred to this kind of well-defined library that more or less tries to protect the users from shooting themselves in the foot, and I was curious to look at it, to see if there may be some nice nuggets that could be useful for us. And then I ran into the topic of KMS: they support KMS as a first-class citizen.
A
Protobuf, I think, if I remember correctly, yeah. And I think, when we would go for a solution, either we would have this kind of custom solution where we hand-roll the binary format, or we just serialize a struct, right? So at some point it would make sense.
B
So I just had a high-level view of the library, went through the source code a little bit, and it looked super useful. What I also liked, and what I thought could be useful for the reference implementation as well, is that, like, for Kubernetes the smallest unit is not the container but the pod...
B
So for them it's not the key itself, but it's more like the key set. When you create a new key set with a new key, it has all this management built into it: you can rotate the keys and stuff like that. So it had some super sound-looking design decisions, and this is, yeah... I thought I'd propose it, and maybe someone thinks it's useful.
B
So, from the documents, what I could see (maybe you could open it up) is that they want a URI.
B
Yeah, so... the Go one... let me search for it.
B
Or you go to the current status: there's a reference to languages, okay, yeah, and when you scroll down, you see, further down, yeah, there is a URI and a credentials path. And then you have a client for your KMS.
A
Yeah, I misheard, I thought she said it. Actually, that's not... okay. So, yeah: Amazon and Google are supported in HashiCorp's, but not Microsoft.
A
Yeah, I mean, if we use this library, right, and it ends up in our reference implementation... you know, I wanted to be fair, right? I don't want to just have a bunch of, like, easy-use knobs for everybody but... that basically doesn't seem right. But I do think, if I remember correctly, doesn't HashiCorp support, like, a pass-through mode, where it uses different vaults under the hood?
D
Yeah, I mean, there is an adapter where you can connect it to different KMS backends; like, I think it has the GCP and Azure keywords for that. But in terms of auth and all that, I think it doesn't support all the auth scenarios; it supports very basic ones, like a client-ID/password kind of thing. Oh.
A
So, I mean, this would be, in a sense... I had not imagined our reference implementation supporting any of the cloud providers. I'd only imagined it supporting, like, PKCS#11 or something of that vein, right? Because that's just a standard, and it's just talking to hardware.
F
I mean, we don't have to support anything in the reference implementation. We just have to come up with, like, the interfaces and the tooling that they can reuse, and then we could have, again, a test version of this reference library that would only support PKCS#11, I guess (I don't remember exactly), but yeah: it only supports, like, the bare minimum, and we would use that for the tests.
A
Right, I guess we don't have to use any of their... So, let's see... so you get some kind of a client.
B
Yeah, exactly. So this was the thing that really attracted me to the concept: that the key set is the minimal thing we're working with. And this is what you see: the very first thing they do is create a key set, and to handle this key set they create a handle for it, and then they use these key templates to specify kind of a combination of algorithm and cryptographic primitives. Yeah, exactly... so before that, I looked into it without the KMS feature.
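The keyset-centric design B describes can be sketched as a toy model. This is plain Python, not Tink's actual API; all class and field names here are illustrative:

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Key:
    key_id: int
    material: bytes

@dataclass
class Keyset:
    """Toy model of the 'keyset as the smallest unit' idea: a set of keys
    with one primary. Rotation adds a key and repoints the primary, while
    old keys stay around so existing ciphertext can still be decrypted."""
    keys: list = field(default_factory=list)
    primary_id: int = 0

    def rotate(self) -> int:
        # Add a fresh key and make it the primary for all new encryptions.
        new = Key(key_id=secrets.randbits(32), material=secrets.token_bytes(32))
        self.keys.append(new)
        self.primary_id = new.key_id
        return new.key_id

    def primary(self) -> Key:
        return next(k for k in self.keys if k.key_id == self.primary_id)
```

The point of the design is that callers only ever hold a handle to the whole keyset, so rotation is a keyset-level operation rather than a key swap the caller has to coordinate.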
B
So I don't think it is super tightly coupled to it. I think, at the end of the day, it doesn't care all too much about where the master key comes from. And I guess, with an additional layer of the local KEK, maybe you would need to wrap this library and add something on top, or something; I don't know how extensible it is. I just had a brief look at it.
A
The reason I ask is: we want to limit how many in-memory keys we're holding, right, to the bare minimum. And we want to make sure that, at some point, at some level, whatever key we're using for our encryption with the upstream cloud KMS, or whatever you have... I guess we don't necessarily need the key hierarchy for a hardware KMS, but we probably do need it.
B
Yeah, I would need to look at that, but from what we see in the usage below: we have the initial key we created, and we want to encrypt the in-memory key set. So, the key set that lives in memory, and we're going to encrypt it with the master key that we got from the KMS.
B
So I would just check what type it is, right, the master key, but I would assume it needs to be some sane representation of it.
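The flow B describes, an in-memory key set encrypted with a master key fetched from the KMS, is classic envelope encryption. A minimal sketch follows, with a deliberately toy XOR stream cipher standing in for a real AEAD so it runs with only the standard library; nothing here is Tink's or Kubernetes' actual code:

```python
import hashlib
import secrets

def _xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher standing in for a real AEAD (e.g. AES-GCM).
    # NOT secure; it only makes the envelope structure runnable.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def envelope_encrypt(master_key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt data with a fresh in-memory data key (DEK), then wrap that
    key with the master key obtained from the KMS."""
    dek = secrets.token_bytes(32)
    ciphertext = _xor_stream(dek, plaintext)
    wrapped_dek = _xor_stream(master_key, dek)  # only the wrapped form is persisted
    return wrapped_dek, ciphertext

def envelope_decrypt(master_key: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = _xor_stream(master_key, wrapped_dek)  # unwrap using the KMS master key
    return _xor_stream(dek, ciphertext)
```

The master key never touches the stored data directly; it only wraps and unwraps the per-object DEK, which is what makes swapping the KMS backend cheap.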
A
I think I saw some comments on that doc. Are they up? Did you want to talk about them? I think maybe, Damien, you have written something.
F
Last time, we discussed that we might need to rethink how the data is stored and try to bring in a new format. And so, yeah, the different ideas I had were mostly focused around making it so that the change is seamless to the API server: whether there is a key hierarchy in place or not, the API server shouldn't be aware of that, because it's really up to the provider to enable it and then use it. Because, like, from the API server's side, nothing will really change: it would only encrypt the DEKs and decrypt them, and besides that...
F
So far, my idea was to change the store a bit, to include, every time we store data, an ID for the local KEK, whether it exists or not. And then this ID will be given back by the provider, like, if...
F
If, in the encryption response, we say that the provider will provide the UID of the local KEK in case it was used, then we can use it to store both the data and the association of the DEK with the local KEK it was encrypted with. And then, in the decryption request...
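F's proposed shape, where the plugin returns a local-KEK UID on encrypt and the API server hands it back as a hint on decrypt, might look roughly like this. Field and type names are hypothetical, not the actual gRPC API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncryptResponse:
    """Sketch of the proposed plugin response: ciphertext plus the UID of
    the local KEK that was used, if any (names are hypothetical)."""
    ciphertext: bytes
    kek_uid: Optional[str] = None  # None/empty when no local KEK was involved

@dataclass
class DecryptRequest:
    """The stored KEK UID is handed back on decrypt as a hint, so the plugin
    can find its local KEK instead of calling out to the cloud KMS."""
    ciphertext: bytes
    kek_uid: Optional[str] = None

def store_record(resp: EncryptResponse) -> dict:
    # The API server persists the ciphertext together with the KEK UID,
    # keeping the DEK-to-local-KEK association alongside the data.
    return {"data": resp.ciphertext, "kek_uid": resp.kek_uid}
```

Because the field is optional, a provider with no key hierarchy simply leaves it empty and nothing changes for the API server.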
A
Yeah, so I imagine something like this would happen. By the way, I don't know if y'all have really thought about some of this: if you look at the line where you have two bytes of DEK length written, right, that encodes the length of the DEK. So if the length of your DEK doesn't fit in two bytes, garbage will just get written, yeah, and then, like, horrible things will happen when you try to decode the data back. So, like, I love how there's this implicit contract with the KMS plugin.
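The failure mode A describes can be made concrete. Here is a sketch of a two-byte length-prefixed layout (hypothetical, mirroring the discussion rather than the exact upstream encoding) and the guard it needs:

```python
import struct

def encode_record(dek: bytes, payload: bytes) -> bytes:
    # Sketch of the layout under discussion: two bytes encode the DEK
    # length, followed by the DEK, then the encrypted payload.
    # Big-endian here, matching the guess in the discussion.
    if len(dek) > 0xFFFF:
        # This is the implicit contract A points out: without a guard, an
        # oversized DEK writes garbage that cannot be decoded later.
        raise ValueError("DEK length does not fit in two bytes")
    return struct.pack(">H", len(dek)) + dek + payload

def decode_record(blob: bytes) -> tuple[bytes, bytes]:
    (dek_len,) = struct.unpack(">H", blob[:2])
    return blob[2:2 + dek_len], blob[2 + dek_len:]
```

The contract only holds as long as every plugin returns a wrapped DEK under 64 KiB, which is exactly the kind of unstated assumption a structured format would remove.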
A
Well, yeah... it's crap. So one thing, just from all of this, right, is: once we get down to it, we're going to start adding new stuff to storage, and I want to get away from this horrible prefix spaghetti we've got going on. Like, I'd take just a simple JSON structure over this; at least a JSON structure you could read with a tool or something.
A
So, with that aside: I think the hardest part of this is that, since we have to, like, remain backwards compatible, we would have to allow the API server to send us an empty one.
F
Yeah, that's something I didn't mention. So my idea was that, if the API server sends an empty key ID, then it will get... like, the provider that needs to... and, like, it doesn't need to take a particular key to... or, no, that wouldn't... no, that's not the case. It will tell the provider that it needs to reach out to the KMS in the cloud rather than just looking at the local keys.
A
Yeah, I think we could handle that; I don't think that's a big deal. I think the larger question is: what is the storage format going to look like, right? Where are we going? Are we going with the binary format, for compactness? Are we going with a more friendly format? Not sure.
A
No, no, but, you know, we should be able to, like, get data out of etcd and not necessarily require the API server to do stuff, right? Like, if it's a nice structured format with keys and values, you could use regular tools, right? Whereas in this case you have to, like, very carefully strip off a prefix and very carefully interpret... I think it's big-endian, I don't remember how the length is encoded, right, and it's a certain number of bytes. Like, that's...
F
But I wonder about the cost of, like, moving away from the binary format. Because, like, since we have to prepend this information to all the kinds of secrets, or all the resources that are encrypted, I can see it becoming quite... yeah.
A
So I think what we could do is, if we went with, like, proto: we could do what Kubernetes does, right? So Kubernetes, for all proto requests, has the bytes "k8s", and then it's got, like, a two-byte version number, which I think is either zero-one or zero-zero today; it's never been changed. And that's it, that's the raw prefix of all that data, and then the rest is just proto data.
A
What my thought would be is: we could do something like that. A very small prefix to, you know, help... it would have to be different, so that it couldn't be confused with existing proto data that's stored unencrypted, right, but it could be something of that vein.
A
But then the rest of the data is in the actual proto, right. So the way we handle that in the rest of the decoding layer is: there's guaranteed to be a TypeMeta at the top level, which gets encoded/decoded into a runtime.Unknown, which then gives you enough information to find the actual resulting type. We don't have to get into that, but the point is, though, that we have a ton of machinery today to take binary data off the wire.
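Putting the two ideas together, a tiny fixed magic prefix plus a structured, tool-readable body, might look like this. The prefix value and the JSON body are purely illustrative; the real proposal would likely use proto for the body:

```python
import json

# Hypothetical marker, in the spirit of the "k8s" + version-byte magic that
# Kubernetes proto storage uses; it just needs to be unambiguous so it can't
# be confused with unencrypted proto data.
MAGIC = b"k8s:enc:v2:"

def encode_stored(envelope: dict) -> bytes:
    # A small fixed prefix says "this is encrypted data"; the rest is a
    # structured body that ordinary tools can read, instead of prefix spaghetti.
    return MAGIC + json.dumps(envelope).encode()

def decode_stored(blob: bytes) -> dict:
    if not blob.startswith(MAGIC):
        raise ValueError("not encrypted data in this format")
    return json.loads(blob[len(MAGIC):])
```

With this shape, fields like a wrapped DEK or a KEK UID become named keys you can inspect from an etcd dump, rather than offsets you have to know in advance.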
F
Thinking about it: should we make the change for all the encryption types? Because, like, from the example that you showed, since we have, like, this meta type in the proto, we can then infer which encryption type was used, whether it's KMS, whether it's AES-CBC. And then, like, if we kind of rework this whole storage layer, I think we could then separate the different types of encryption and have its own format for KMS... I mean, I'm not, I'm not...
A
...too strongly opinionated on that. I'm already skeptical of the value that most of this stuff provides, and then, when you combine it with, like, "I'm going to do a local file on your disk", I'm extra, extra skeptical of the value that is now being provided. So, like, I don't know, I don't really feel particularly compelled to improve that.
A
If we want to do it just to be consistent, I could buy that. Just so that, you know, we move away from this prefix spaghetti to, like, a very small, strict prefix that is just there, like, you know, for saying that this is encrypted data, basically, and then you have this nice structure.
F
More, like, in terms of consistency: if you want to build a tool to then get this data out of etcd, as you mentioned, it's easier if we are consistent between all the encryption types, rather than, like, a dedicated format for KMS and then a whole different, like, binary format for the other encryption types.
E
Hey, sorry, obviously I missed the past few meetings, but I also didn't see this quite clearly stated in the doc yet. There's a lot of talk about the data being stored: how is the DEK actually being stored, like? Is it the actual...
E
It's... it's mentioned: UUID, right? So when there's a decrypt request, does that mean the API server would need to go look for the local KEK first? Where is that?
A
On the other end, when we ask the KMS plugin "hey, I want this decrypted", if we happen to know a UID, if we were originally passed one, we'll also pass it back, effectively as a hint. So that way, the KMS plugin already has that locally... you know, if it has the capability of somehow locally handling that key, it does.
A
So, from the API server's perspective, it doesn't store it, right, because it never sees the KEK; it only sees an ID. But it would... okay: in this world, we are saying that we would make a change to the storage format of encrypted data on disk, to have a new field to optionally store a UID that refers to the UID of whatever key-encryption key is being used by the KMS plugin. And this is all in service of the idea that we want to be able to...
A
You'd have to configure more than one KMS at a time to get that behavior, so that it can detect if it was the first one or the second one that was used, and if it's the second one, it's like: oh, it's out of date, right? Basically, we need to basically make that information available through the KMS API itself, right. So we basically need a semantic to be able to say...
A
That is up to the KMS plugin to decide, right. It could be directly just using, like, a cloud KMS and directly doing encryption and decryption, and then the ID is, I don't know, like, maybe the name in the cloud KMS or whatever, or the UUID; like, whatever unique identifier that key has in the system. Or it could have generated a local key-encryption key, you know, had that...
A
You know, in memory, and gone ahead and stored it... So in that mode, what it would have to do is encrypt it using the cloud KMS, yeah, and then return it in the UID field, right.
A
The only reason we are really worried about these decryptions is that, in a lot of situations, you're using a cloud KMS, which means you're using, basically, network calls to get all this, right. If you were using PKCS#11, this does not matter: a local KMS can trivially handle thousands of decryptions, right, because it's in hardware; there's no network I/O.
A
So what we're saying is... well, the idea here is to say that is an implementation detail of plugins, but it's an important one. So instead of the API server somehow trying to bend over backwards and support it, why don't we just support it in our reference implementation via a key hierarchy?
E
...for the lifetime that you've configured, and then once that expires, the next time you call encrypt or decrypt, it will call the cloud KMS to decrypt again. So it's possible that, worst-case scenario, you are calling the cloud KMS to decrypt and then the local KEK to decrypt again. So two decryptions, yes.
A
Yes, yeah, right. And so, to me, what needs to also be included with, like, a reference to this local key-encryption key, if that reference is included... I think you also have to include some kind of TTL back to the API server, to tell it: okay, for this duration, or maybe until this time, or something like that, you can assume that this is going to be my key, right?
F
And that, I think... do we need to do that? Can we just, like, let the KMS plugin manage its own life cycle?
E
So, with this proposal, when we talk about key rotation, are we talking about the local KEK or the cloud KEK?
A
But yeah, you would need some level of custom extension, right, with the reference implementation and your cloud KMS to do that. And then you would be able to funnel that information through the stack, just as a way of saying: well, everything is new now, so everything has to rotate.
A
So the idea would be, like, you know: if one process generated a local key in memory, the other ones won't know about it. But when they are asked to decode data that needs that key, they'll basically have to go through the, you know, the whole process of basically building up their cache, and the final cache fallback is the cloud KMS. That's right, like... no.
E
But I think cloud rotation is also still a requirement, right?
A
Yeah, so I think what this would enable is a more graceful and sort of lazy approach, where you could imagine that your cloud KMS integration, like, every single day, creates a new version of its global key and starts encrypting with that, but it keeps the old ones around. And if the system is auto-rotating under the hood, it would slowly pick up the new key, right.
A
You do a storage migration, and you can move all the operations, and that's basically your rotation, right. Like, the rotation system is always happening, and if you want to force it, cool: do a storage migration, which is just a bunch of writes to etcd through the Kubernetes API. Once that is successful, nuke all the old versions, right. And the nice thing is: there's no, like, restarting API servers, there's none of that, right, because you've basically encoded that logic within the API enough to do it, right.
B
I'm wondering if I would ever want to delete any KEK, no matter how old it is; it doesn't take up that much space. I think the only problem would be that the older a key gets, the more failed iterations you would have in between, right, if there's no UID. But with the UID, it should be kind of straightforward.
A
Yeah, that's what I mean, right. Like, a while back, you know, the GKE folks were trying to change a bunch of stuff about how health checks are done today, because they're like: oh, these health checks are costing customers a bunch of money, and they're not happy about that. I was like: well, just don't charge them for it. Like, why are you bugging me about the fact that you're overcharging your customers?
A
I don't care, like, it's not my fault. But, that as an aside: we just need, like, a dedicated health-check API on KMS, so that a plugin can implement whatever it thinks is, like, a valid way of doing a health check, and, you know, perhaps some of this stuff feeds into it. So you can imagine our health-check API could include, like, the current UID the API server thinks is latest, and so that way, when it health-checks with the KMS, the KMS can be like...
A
...some... I don't know, I haven't thought through that. But I do think we need the dedicated health check, guys; that way, we don't have to ask people to, like, hack around it, basically. Because I think what they ended up doing is: inside of their KMS plugin, they just literally, like, look for the string "health" as the key that's being asked to be encrypted, and just return okay. It's just like: well, okay, that's fine, I guess.
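The workaround A describes, a plugin special-casing a literal probe string on the encrypt path, can be sketched like this (the sentinel value and function names are illustrative, not any specific plugin's code):

```python
HEALTH_PROBE = b"health"  # the literal string the plugin reportedly special-cases

def handle_encrypt(plaintext: bytes, real_encrypt) -> bytes:
    # Sketch of the hack: the plugin spots the probe value and
    # short-circuits, so "encrypt this string" doubles as a health check.
    # A dedicated health-check RPC would make this unnecessary.
    if plaintext == HEALTH_PROBE:
        return b"ok"
    return real_encrypt(plaintext)
```

The obvious downside is that a real secret whose plaintext happens to equal the sentinel would never actually be encrypted, which is exactly why a separate health endpoint is the cleaner contract.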
A
So one thing that reminds me of: the KEP we just merged for the observability stuff changes the existing beta API and adds a new field, optionally, behind a feature flag. Like, we did all the right things, but, you know, it's changing the existing thing. For me, I kind of feel like this stuff...
A
...we would put behind a feature flag and a full API version, so, like, v1beta2 of the gRPC API. And then, in there, we can really start looking at all of our choices. Because if we had to put a feature flag that started at alpha for, like, observability, I already guarantee you that we're going to be asked for this. If that's the...
A
Basically, right, but...
A
No, like, what I'm saying is: if you have to have a minimum three-version rollout anyway, you're not gaining anything from the backwards compatibility, because you're saying that it has to be staged as if it was not backwards compatible to begin with, right. And if we need to make all of these crazy storage-version changes anyway, I'm just like: just do it.
A
Basically. And when we introduce this new thing, right, maybe a completely brand-new configuration: in that struct, you can then specify, right, the same KMS plugin but running in a completely different mode as your new write key, right, basically, and so you can migrate, right. And the idea would be that we would leave the old functionality for many, many, many releases, right, to allow you to say: okay, yeah, I'm ready for the new thing, et cetera. And it might be that we basically...
A
We basically... maybe we freeze the beta API and just leave it alone, and just say: all right, here's the new one; we're going to go through all of these revisions, right, and that's the one that we want you to use. We don't really particularly want to force you into a place that you don't want to be; just leave the whole thing alone.
A
No, you can't... you can use them at the same time, right. The idea would be that you have more than one provider configured, right, and if you had the old KMS plugin sitting around, you could leave it, configure the new KMS plugin as a new provider, and gently migrate to it as you want to, right. So you can basically use both at the same time and migrate away, right. Basically, you would have two Unix domain sockets running on your...
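The side-by-side setup A describes maps onto the API server's EncryptionConfiguration: list the new KMS provider first so it handles new writes, and keep the old one so existing data still decrypts and a downgrade stays possible. Plugin names and socket paths below are illustrative:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      # Listed first: the new plugin, used to encrypt all new writes.
      - kms:
          name: new-kms-plugin
          endpoint: unix:///var/run/new-kms.sock
          timeout: 3s
      # Kept around: old data written by this plugin can still be decrypted.
      - kms:
          name: old-kms-plugin
          endpoint: unix:///var/run/old-kms.sock
          timeout: 3s
```

A storage migration (rewriting every object through the API) then re-encrypts everything under the first provider, after which the old entry and its socket can be removed.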
E
Yeah, that's... but why? Like, you don't want them to do that?
A
It would be so that you were not forced to migrate away from the old one, right, if you did not immediately feel the need, if you wanted to have both, right. Because you still need the ability to downgrade, right, so you might want both of them around, able to switch, you know, swap them around and do whatever you want.
A
So, like, I think we're relatively well covered there, as long as we express it, basically, as a new configuration in the encryption configuration structure.
A
Encrypt them for me and hand me the encrypted results, okay, right. And then, once the encrypted result is handed to it, every single time it does an encryption or decryption, the API server, as the ID of the key, hands it the encrypted blob as the...
A
Though, I don't know about you guys, but for me, a lot of this stuff becomes a lot clearer if I go try to, like, make a thing. Like, if I actually go try to build something, no matter how hacky, for sure.
B
I have a question. So, currently, we have different KMS plugins, and they are provided by the cloud providers, right? So we have one for Google, one for Amazon, one for Microsoft. And so, basically, when we create a reference implementation, the user of the Kubernetes cluster would need to decide: okay, do I want to continue with one KMS plugin, or move to the other KMS plugin?
B
So, when we would have a KMS plugin reference implementation, would we need to support all the possible cloud providers? Or would we kind of delegate it to the KMS plugins of the cloud providers, or how would we go forward?
A
So I think that is mostly a policy question for SIG Auth itself, right. So, like, we have the Secrets Store CSI driver, right, which does support a bunch of extensions to a bunch of different places, right. We could follow a similar model, where we have a canonical repo where we allow extensions and support them; we could do that. Or we could have a minimal reference implementation.
A
The only concern I ever have with something like the Secrets Store CSI driver, where we have a bunch of extension points, is: we don't want to play kingmaker, right? Like, we don't want to be a gatekeeper where we say that you can't be part of this reference implementation. You don't want to be in that state.
F
If I can give an example from SIG Instrumentation of what we did: we have a reference implementation for, like, all the metrics APIs, and what we did is... this is a SIG project, and, like, there are a few projects that are reusing it, and we kind of endorse them, so they are now also part of Kubernetes SIGs. It's like: if we want to guide users to any project that we know supports the reference implementation, we just guide them to that project. So what we could do is kind of the same.
D
So, I mean, right before the move, we extracted the providers out of the code base and made them out-of-tree, right. And then, like, today, the driver resides in kubernetes-sigs, and any time we make changes, we ensure, like, backward compatibility and all that. And then, as part of the community call, we have different providers that come in who use the driver, right. So we do define criteria for what is a supported provider, like, in terms of the basic tests that they need to have and all that.
D
We just regulate that, and when we say "supported provider", we mean adding them to our documentation. But anyone can implement a provider that works, and, like, they can still go and say users can use it, and all that, right. And then each provider resides in its own GitHub org, like, the Azure one resides in the Azure org, the GCP one in GCP's. So that model has worked well. Like, in terms of adding new items to the driver, it's mostly a community-driven decision, like, where all the providers think some feature is worth it.
A
I guess I'm less sure this extension point needs that level of, like, support, in a sense. Like, I can see there's a ton of different secret stores, and, like, all sorts of people want to interface with Kubernetes, because it's application-centric, and that is what the platform is. This is, like, an infrastructure detail, right. Like, in AKS, EKS, and GKE, I never expect to have to configure this.
D
So I think, yeah: the example implementation from the reference library will be useful for the KMS plugin implementers, to see: okay, this is how it works, and all of that. And then, obviously, based on versioning, they can decide which version of the library they want to use to support this particular feature and so on. But I think it does not require the level of contract that we have for driver providers today; like, I don't think that's necessary. That's...
E
The contract, right, is... go ahead.
A
I mean, if you insist, that's okay too, but we really want to make it so that, you know... like, I don't know if DigitalOcean, for example, supports KMS in Kubernetes today, but let's say they don't, and, you know, they're a smaller cloud and they want to support this, right. They shouldn't have to, like, spin up an entire engineering...
A
...team just to, like, get that built and maintained and supported, right. They should be like: yeah... like, it should be the equivalent effort of, like, running cert-manager, right? It's like... well, yes, you can go really wrong with cert-manager if you configure it wrong and it starts issuing certs for everything and you didn't, you know, do proper access control or whatever on it. But you didn't have to do all of ACME by hand, right; it did the handshakes for you.
A
Yeah, so I think I would lean the same way that we say for all Kubernetes libraries, right: like, they're compatible in the sense that they don't break any REST APIs, but they're not compatible at the Go level, right. So you can have compile issues if you bump from one version to just, like, you know, 0.23.2 to 0.23.3, right. You can...