From YouTube: WG-KMS Bi-Weekly Meeting for 20220510
A
Has everyone had time to look at the KEP? We're going to bring it up for discussion tomorrow in the SIG Auth meeting, which I believe is like the last one before folks will be out, or whatever. I just think that's the last one where we can actually get feedback from the leads.
D
Mostly the same, but I think there were a few things that got added since the doc, so I would say the doc is out of date now. We can probably not look at the doc anymore.
A
Yeah, because we're now in the KEP itself, yeah.
A
It does make it a little bit crowded, but in my head I was like: if we were to duplicate the diagram for the alternative flows, I feel like people would lose the fact that only this piece changes and the rest is the same. So I think it's better to leave it as is, even if it is noisy. And I have a suspicion that when others review it, they might be like:
A
"I don't understand what you're talking about here," because we've probably explained it at a level that is clear to the five of us that have been talking about this for a couple of months, but they'll be like, "I don't know what you're talking about. Where did you come up with this word?" or anything like that. But I do think the diagram does clear up a lot of the specifics.
A
I think the only nit I had, which I don't even really think matters, is that there's some framing, I think in the second diagram, which says that the API server will assume that a key hierarchy is not being used, or something like that. I remember seeing something like that in one of the diagrams.
A
I just remember something about the API server making assumptions. I was going to say: I don't think the API server ever cares. What it's doing is just sitting there shuffling the data that you gave it back to you, and you get to decide what it means, right? Basically, using metadata does not mean you're using a key hierarchy; using metadata just means you asked us to store something unencrypted for you. I think.
A
Yeah, well, even then, for example, you can't assume key hierarchies are being used, right, because you could shove everything into the key ID if you wanted. We just don't want you to, really; we're giving you a nice structured place to put your stuff. The reason that metadata is split out is not because it's fully required; it's to provide...
A
Yeah, it's to provide a place of structure, so that when a human being or the plugin has to go look at the state again, it's easier to reconstruct, and they're not doing byte parsing at length offsets, right? It just makes it a lot easier. But, you know, there's nothing preventing the key ID from being an entire protobuf-serialized structure that has schema within it that is completely opaque to us, right? It's just that we're saying: don't do that.
G
Yeah, I think there are certain assumptions. At least, I think the way we have the KEP and the diagram is that we are assuming that in metadata there is something which is like a local KEK, and that translates to the key hierarchy. I think that's how we have it, and probably also going forward when we document it, in terms of how the reference implementation is used and all of that.
G
But yes, you're right: at least as part of the KEP we are assuming this. And I think one good thing is that in the entire KEP we make the same set of assumptions. Even in the sequence diagrams, or wherever we actually document the flow, I think we make the same assumption: if you have something in metadata which signifies a local KEK, then we're just assuming that's what you're using for the key hierarchy.
A
Would we want to write that down as a specific field? Like, we want to persist it in storage. So it would be generated on the encryption request, sent through to the plugin, and the plugin can do whatever it wants. But when the encrypt response comes back, would we want to store that UID that we sent through as a field in the proto that we seal, right? That kind of thing. Is that useful?
A
Basically, for the encrypt operation, right? Or, you know, if you have cloud KMS logs or whatever and they're storing that, you could say that this piece of data is 100% correlated with that event. I think we're already basically saying you can correlate the key, like the cloud KMS key, because that's part of the API. I guess I'm asking if we think that the UID being stored also would be valuable.
C
I wonder if it's really meaningful to store this. Couldn't you just get the same amount of information by, let's say, logging the encryption request in the API server directly, and output an object ID, for example, along with the UID, but without storing the UID itself? And then, if you ever want to know what happened with this particular object, you will have the object ID from etcd.
A
That means the only other thing that you probably have to record is the cloud KMS side, right, like your final external KMS. If you record it there, you basically can say, well, I know in etcd the one place that has this UID is this secret in this namespace, and you can kind of look at that and be like, okay.
A
This is how it was correlated, right? So your external KMS would have timing and all that information, and you can correlate it back really easily. So in a sense it reduces a small operational burden, I think, which is: you no longer have to correlate Kubernetes API server logs with cloud KMS logs. You just look at your persistent storage for etcd, and if you bother to store it on your external KMS, it's going to be a pretty easy link.
G
Yeah, the only thing I can think of is there will be no history, right? If there are multiple operations performed on a secret, like it's being encrypted and decrypted multiple times for some reason, like it's always a cache miss, then the UID that's persisted is only the last one. So as a user, if I only rely on etcd data, then I'm not seeing the entire picture.
A
I mean, technically we could be fancy and keep, like, the last 10 somehow. I'm not saying that wouldn't actually be a significant implementation effort; it would be hard, right, because you'd have to pull the data out, store it under the context somehow for a bit, and then reuse it later. It would be kind of rough.
A
Who cares? Honestly, I'm not even sure if I would bring it up; it might just be a comment that's there. And then I think there are much bigger questions to talk about tomorrow, like, for example: are people okay with having a brand new v2 API, right? Like, that's actually the biggest ask: I want to make a new API. Do you all agree that we should make a new API? Yep.
A
It was limited, right? The amount of data it could send back to you was significantly limited, because it had to fit into, I think, 256 bytes or something like that, because it was length-prefixed, but it only gave it like two bytes to fit the length in. So whatever the max you can fit in, that is the length, right? So yeah, that's a pretty big ask.
C
Yeah, I was just wondering, because for me the reference implementation, as we've talked about, is focused on the new API, but the actual users of this reference implementation might be existing plugin developers that have to support the previous API as much as the new one, in order to be backward compatible with their previous versions.
A
While tedious, that's totally within what the server already supports, right? That's why the encryption config proposes a brand new spot for the v2 API. It doesn't mess with the v1 at all, and one of the implementation requirements I'm going to sort of assert for this is that we literally do not touch those files at all.
A
It has some, but I wouldn't... I don't feel confident. So I would rather just leave it alone and make a nice new thing. But yeah, I do.
A
I do think what our recommendation should be, when we're asking people to migrate, is to basically build a brand new plugin using the reference implementation. New in the sense of: basically just use the thing we gave you for free, and run both, right? So go ahead and start running the v2 one alongside your beta one, and migrate at whatever speed you're comfortable with. And I know I said three releases between v2 going GA and beta being removed.
A
That, to me, is the minimum, right? If folks feel like six is better, that's fine, or even if you say nine or twelve is better, whatever. The point is, I'd rather put a point in time where we say it's going to be gone, so that there is an end time. But, you know, three means one year; I think that's the absolute minimum we should provide people.
A
But this beta API is just like PSP: it's been around for a super long time and it hasn't changed, so it's effectively a v1 API, right? Removing it is fine, but you have to basically pretend it was v1. So I do think we're providing enough to migrate, but if we can make it even easier... I don't know. Certainly, what you could imagine is...
A
...like a shim that would take the v1, sorry, the v2 API as input, translate it into the v1beta1, and you put that shim in between your old beta implementation and this. If we think that's valuable enough to build, we can build it. Then basically what we're saying is you don't actually have to migrate at all; you can just run the shim forever.
C
If people have to rewrite all the plugins that have been written so far, it will take a while for actual users of our implementation to start growing. But yeah, if the reference implementation provides enough to just be extended easily, then it's fine to just rewrite the plugins, I think. But it really depends on how it looks at the end.
A
Yeah, I mean, I don't think a shim would be too hard to write; I just don't know if people would use it, because you still have to deploy it, right? You still have to deploy it alongside. It basically would be another Unix domain socket that translates to the existing Unix domain socket that's already running, and so you would change your config to use that one instead of the other one. And, I mean, look...
A
I think the APIs are similar enough that you could stub out pieces, right? Your status would basically return static data and basically say it's always healthy, and the key ID is basically "none". It would have to be non-empty; I think our validation would require that the key ID is always not empty, but nothing prevents you from putting in a static value that just never changes.
C
Yeah, also, if we allow them to put static values for the status, for example, it will kind of go against the idea of improving observability of the plugins as a whole. It will go against the idea of improving the plugins completely, away from what we've been trying to solve, because if we provide them a solution that is too easy, they won't really look at the improvements that we tried to make.
D
I think we should present it. At least, you know, after the presentation, assuming everybody else in the SIG is okay with this, then we can actively ping people from those plugins and invite them for feedback. I don't think we should spend too much time on this, unless people are coming to us and saying, hey, this is unacceptable.
A
Right, yeah, no, I will certainly be there; I hardly ever miss the SIG Auth meetings. It's more that someone would be presenting and answering maybe the bulk of the questions, and then maybe we would jump in if they asked, or if we felt more contextually aware of the thing.
A
Oh, that also reminds me: the other big thing is the rotation piece. I'm sure you've also seen that I dropped the whole storage version hash, and the reason it was dropped is because when I went to API Machinery, they said, "we plan on deleting that field, so don't use it."
A
It's meant to be a place where your API servers individually state their opinion about the world, and the diff that I had in here was basically saying that I want another field where API servers can also state their opinion.
A
How did folks feel about that part of the KEP? It's a little iffy now, because of the fact that we don't have a good spot to put the data.
A
The API we're talking about is this one right here. What is this thing called? The storage version status. It's an alpha API, right? So it's a feature-gated alpha API that's been stuck in alpha for a while, because API Machinery is busy and no one's picked up this particular piece of work, and it's a relatively hard implementation.
A
There are already semantics for what API version a particular API server wants to use for writing, and what versions it can read, and then there's a separate field that basically says: do all API servers currently agree on how they want to write data? Meaning that you can run a storage migration and it'll get consistent results, right? And this basically says the same thing for key IDs.
A
What particular key ID does each API server say it's at, and is that key ID the same across all API servers? If that's true, you know you can run a storage migration to cause the encryption at rest to be rotated correctly. So semantically it's easy to describe, but it relies on an API that you can't rely on. Realistically, it's basically saying: I'm going to piggyback on a thing that's not done.
A
And hence why there's that big old note saying that rotation will not be part of the graduation criteria. It'll just be part of testing, but we can't make it part of the graduation criteria, because that would mean we can't graduate this thing until the other thing graduates, and I didn't want that; that seemed like the wrong outcome.
D
Also because of this, right, I almost think maybe the rotation should be broken out of this KEP, just to make it more clear that this one is purely for performance and observability. And the rotation, yes, we introduce it here, but we don't track rotation until something more is available for us to actually use, and then actually push it out to beta and GA, or stable.
A
That's why it exists: to make rotation possible. So at the bare minimum, you basically have to say, "I want to support rotation very soon, so I've built the API to support it." And at that point, basically what I'm saying is: if you build the API with rotation in mind, and that's important to you, you should still be able to write a test that runs against a single API server.
A
One that works correctly, always, right? And I think that's sufficient. If we did that, and we were writing tests against a single API server and basically describing the steps of storage migration, it should give us what we want: we should be able to rotate correctly, even with a key hierarchy in place.
D
Yeah, okay, anything else we should discuss in terms of next steps? Because, you know, I think the KEP... feel free to review it async, right? Where are we in terms of these particular issues, and what's everyone doing? What's the status?
A
Make a silly change somewhere, make a branch, and make a PR with it against k/k, and then we would all just keep making PRs against that branch and merging them in, basically as a feature branch, right? We need to work together, but we can't merge this in tiny pieces; the stuff has to work as some cohesive whole before it goes in.
A
So something like that is probably what I was imagining, so that we don't block each other much, but at the same time we're not trying to merge something half done into the API server.
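The feature-branch flow described above can be sketched like this (branch, file, and commit names are illustrative, and the throwaway repo stands in for k/k):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "init"

# One long-lived feature branch holds the in-progress KMS v2 work...
git checkout -q -b feature-kms-v2

# ...and contributors merge small PRs into it instead of into master.
git checkout -q -b kms-v2-proto
echo "v2alpha1 proto" > api.proto
git add api.proto && git commit -q -m "KMS v2: add v2alpha1 proto"
git checkout -q feature-kms-v2
git merge -q --no-ff kms-v2-proto -m "Merge kms-v2-proto"

# Only when the branch works as a cohesive whole does it merge to master.
git checkout -q master
git merge -q --no-ff feature-kms-v2 -m "Merge feature-kms-v2"
git log --oneline
```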
G
Yeah, I think this is the tricky part, because I think when dual-stack was added, it was like a huge PR, about 10k lines. It took months to review it, and it landed just before the release. So we can either go with the approach of producing a mega PR with all of the changes, just in different commits, each commit individually addressing a certain area, or we can do it in phases, breaking it down, where we have the proto API in first and individual pieces just keep getting in.
A
Yeah, I think what we could try to do, to at least not necessarily de-risk the effort but de-risk the code base, is defer the final wiring to the end of the release. The idea being: we could have a sister folder to the beta API that's the v2alpha1 API, and be working in there, making changes, and even writing tests and everything.
A
But if we don't build the wiring from the outside to come consume it, it doesn't matter that it's there, because you can't use it as an external consumer yet. So that gives us basically the bulk of the release to sit there and fuss with at least unit-level tests. It's when you want to get to the integration-test bit that you have to have the wiring in front.
A
Yeah, but I do agree with you: initially, I suspect it's going to be at least a little bit like the whole dual-stack thing, where it's just going to be a massive PR, yeah.
A
If we could try to somehow get the more codegen-heavy stuff in first... like, you know, there's that script for keeping the codegen up to date, and those things. If we get that stuff in first, that's mostly boilerplate, but it increases the size of the PR significantly. So we could probably do that stuff first, get things cleaned up and probably get that merged without too much fuss, and then actually try to consume it afterwards.
A
That might break it up a little bit. But maybe this is a little bit too forward-thinking, because our KEP is not approved, and we're like, yeah, we're going to implement everything.
A
Yes, it's going to take some significant effort. At least the reference implementation doesn't have to initially track any Kubernetes release; we can work on it on the side and just make a PR that basically says: here's a giant reference implementation, take it, you'll like it.
E
Yeah, so for the reference library, I would try to finish as much as possible by tomorrow before the KMS meeting, and just have a pull request which can be reviewed. And everything that's not within the progress, I would just create new issues out of it, so others could pick it up.
E
It was definitely confusing, but the sequence diagrams have helped a lot. So I think it should work now, but I still need to write the tests and verify that everything works as expected, according to the sequence diagram. This is an old pull request that does that work with the current key ID and observed key ID, and at the very bottom, the last comment was about adding the new one.
E
And also, what's really interesting for me, especially from people who worked on the remote KMS, would be: how should the interface look to them? I just said, basically, there's an interface where at some point I'm handing over the current key ID, and I hope that the upstream people know what to do with it.
A
That was always a question: the Go interface for abstracting away an arbitrary external cloud KMS, basically. It has to be able to represent a hardware KMS, like PKCS#11, but also be able to represent a cloud KMS. The input parameters for things like the local KEK and such are easy, but it's not as clear to me beyond that. For example, in the Go standard library, there are the encrypt and decrypt interfaces in there.
A
The last parameter to them is the empty interface, right? And that's basically meant to be contextual options for the encryption that are dependent on what encryption is actually happening. I don't know if we need something like that, but I can imagine us needing it, as a way of passing in, I don't know, maybe cloud KMS needs a project ID or something, and it needs to be passed as a parameter on the side, something like that. I don't know, so I think that would be important to suss out.
A
We could certainly take some learnings from the existing v1beta1 integrations, right? We could kind of see how those different existing implementations interface with their cloud KMS, to get an idea of, well, here's the bare minimum you need to be sufficient for the three existing implementations that were public.
D
Okay, so the reference implementation is going to live in this repo?
A
Why wouldn't you make it something that you could import and use directly? So it seems like it should be in staging, so that it's synced out from k/k, and then you can consume it without literally vendoring all of k/k into your code base.
B
Yeah, all right, cool, that's fine. Anything...
A
Yes, let's totally do that. And again, at least for now, the KEP is very much a living document, right? We should iterate and keep it up to date. And also, I'm a big stickler on this: as we do the implementation, if it drifts from the KEP, update the KEP to match the implementation; don't let it go stale. It's nice to be able to tell someone: if you read this KEP, you will understand what we have implemented, because it matches; it hasn't diverged.
A
It basically says that implementing the key hierarchy in the KMS plugin, optionally, is a valid approach. That's basically what that one is saying, and the reason we believe it to be valid is that we believe the performance overhead is caused by network requests to an external KMS, not the gRPC layer. And we saw some of that with some of the initial work done, but we haven't really measured it with a large cluster where there's like ten thousand or a hundred thousand secrets, or whatever.
A
We
sort
of
compare
the
old
and
the
new
that'd
be
at
least
something
we
would
wanna
manually
do
to
suss
out
that,
like
yeah
like,
if
you
can
use
a
key
for
like
a
local
keck
for
like
1000
operations,
is
it
is
it
enough
to
like
dramatically
reduce
the
load
on
the
external
cameras.
A
I think even in the case where you don't need the key hierarchy, migrating an implementation to the reference one could still provide significant value, because we'll start it out with some feature set but then enhance it with more over time. If you're purely happy with what you have and you just don't want to change it at all, then yes, you're right that we're basically creating work for you. Though if you really feel that way, you probably should just write a shim adapter between your old plugin and the new API and basically do no work: just build a shim instead of trying to update to the reference implementation.
A
But I think what we would want to say is that none of this code is truly static. Software maintenance is a significant cost, and what we're saying is: if you're willing to pay the upfront cost of migrating to the new thing, then you get to leverage the community-built solution.