From YouTube: WG- KMS Bi-Weekly Meeting for 20200329
A
Hey everyone, this is the SIG Auth KMS meeting for March 29th, 2022. This is KMS meeting number 12. So I have one item to discuss, which is some thoughts I had around how we think about, and sort of handle, the rotation case within KMS.
B
Yeah, so the only thing would be just splitting up the issues, so yeah, just that: the work coordination topic.
A
We'll spend some time bringing up some issues too; that way we have some chance to discuss them. Okay. So this is very, very rough, but I tried to write down some of the thoughts I had around what the changes to the API would look like. So let me walk you guys through the highlights.
A
So the core change is the concept of a status, which is basically just an enhanced version request: it encodes the version, whether it's healthy, but also its current key ID. Now, this one is not based on any local key hierarchy. This is meant to be, like, the true key ID, as in: what is the key that you're using in Azure Key Vault? In particular, it's meant to be a very stable identifier, so it only changes when you're actually saying, I need rotation to occur.
A
It does not change on plugin restart and stuff like that, at least not often. Okay, and then within the...
A
When you do an encryption response, it also tells you the current key ID there, and we'll kind of get into why that's important. And then it has arbitrary metadata, which can include all the local details, or whatever the plugin needs. I just kind of said we would probably validate it in the same way that we validate labels in ObjectMeta. I don't know if that's a good or bad idea, but I feel like there should be some bounds on what you can send back and forth.
A
But this is just a bucket; you get to have whatever you want in here. And when we decrypt, we send the key ID that we recorded originally, you know, whatever it was at that point. The current key ID is what we observed, and then also all the metadata it gave us. And even in the decrypt response there is, maybe it doesn't matter as much, but there is the option for this metadata. It probably just makes sense. But so what?
A
Some object, right. So it's a per-resource field today; you know, it's opaque to the client, but it's like a SHA, the first eight characters of the SHA-256 hash, and it strings together the group, version, and kind that are the preferred storage version, right. It's opaque on purpose, to allow it to be extended, because the idea is clients never know how it was generated.
A
Is
the
infrastructure
provider
has
some
way
of
knowing
how
many
api
servers
there
are
and
knowing
dedicated
ips
for
all
of
them.
They
have
this
within
their
infrastructure
because
they
have
to
like
there's
no
way
they
can't
because
they
run
the
process.
But
the
point
is
this
is
not
guaranteed.
There's
no
guaranteed
and
consistent
way
to
do
this.
To
the
kubernetes
api,
like
you,
can't
rely
on
kubernetes
or
because
sometimes
that's
a
load
balancer
that
hides
the
number
of
api
servers
and
stuff
so
we'll
just
say
all
right:
infrastructure
provider.
A
You have to figure out how to do that; whether that's expressed directly in some means or not is sort of out of scope. But the first step of rotation is: the infrastructure provider basically checks each API server, and, let's just say we're only talking about the secrets resource, so they check the discovery document of the secrets resource and look at the storage version hash, and all they do is make sure all API servers agree.
A
So if they don't, the system isn't in a steady state right now, so just wait until it catches up. Basically, this should almost never happen, but it can, so you wait for it to be the same string. And then you record that string, like, in memory, and you know this process is happening. Then the infrastructure provider, again either through user-driven action or not, and this second step can actually happen before the first one,
A
It doesn't actually matter; you somehow initiate the change of the key, so, you know, if that's a label on the key or whatever object, however you do that. Right, at this point, what is expected to happen is that this API, which I said is polled, or could be streamed, I don't know, whatever, this string is supposed to change, right. But when and how often this is polled is between the KMS itself and the API server, so it's completely up to the system. It doesn't really matter.
A
The idea is that what would happen eventually, across all API servers and their respective plugins, is that they would observe the new key ID, right. You've said: hey, I was using Azure key number one at revision 2 with label X; now I want to use the same key ID, but I want to use revision 3, because it's newer or whatever, right. That is somehow encoded into this string, right.
A
So once that is there, then through discovery, the process that first recorded the strings that were here, and then caused rotation to start, can now sit there and wait for all API servers to converge on the new value. Again, it doesn't matter what the new value is; it just has to be different from the first one, and it has to be consistent across all API servers, right. So you're basically saying: all right, I caused the change; now I need to wait. At runtime, the nice thing is, so far exactly zero restarts have occurred.
A
You don't have to restart anything, right; the system is just slowly progressing forward. And at this point, now you know all your API servers and their respective KMS plugin processes have observed the new state that you asked them to observe, and now you can just run a storage migration of secrets, and you're basically done. Your rotation is done. If you want to be a little more relaxed,
A
You don't actually have to do the first step. You don't have to observe the initial value; you can just kind of wait some amount of time afterwards to make sure that the system converges. It's just nice to know what the initial value was, so you can guarantee that it changed, because the hash should certainly change. But that's the idea there. And then the reason for returning the key ID on, like, encrypt as well is because we would enhance the DEK storage.
A
We have the local cache to record what we saw, and then, as we're using that, this one we use as our level, right. That's what that is. This is the only API that keeps us at a particular level, all right, so we always understand.
A
So, even though I can decrypt the data, I have everything to decrypt the data, I actually need to return the stale-equals-true response. That way, in the case of storage migration, the system understands that the action needs to fully rewrite through to etcd anyway; it can't just no-op the update. I know that's a lot of words and, like, a crap ton of moving pieces, but that's the general idea.
C
One thing I want to make sure of: the current key ID that you have in the encrypt response, is it the one you used? It's basically the same as the one in the status response, right?
A
Yes. We want to record this within our DEK cache to make sure that, if we use it, we can validate, or we can correctly say, whether things are stale or not stale. Because we trust the KMS plugin to not accidentally tell us the wrong value, it has to be correct, but we can't know which sort of state we are at; it's just that we basically say okay. So yeah, that's the...
A
The general idea is that every operation basically encodes within it that level, and that's just as an observation. And then the status API, which basically happens synchronously, in the sense that each API server is just doing one of those per process, is how it understands the level. And then we export that in an opaque way in the storage version hash. So that way, if you want to have fully automated rotation without any restarts, all you have to do is record.
A
Oh yeah, so the metadata, it's meant to be completely opaque to the API server. It's just that the KMS is responsible, and this will be stored unencrypted.
A
So
the
kms
is
responsible
for
if
there
are
any
secrets
in
there
for
for
coordinating
how
they're
encrypted,
but
the
reason
the
metadata
is,
there
is
to
support
a
expressive
api
for
things
like
the
key
hierarchy
that
we
want.
So
we
would
put
in
there
like
in
within
our
implementation
and,
like
you
know,
encrypted
local
keck
and
just
a
blob
in
there
and
the
the
idea
is
that
the
api
server,
which
will
give
you
stable
storage
for
this
data
and
returned
you,
this
data,
including
the
current
key
id.
So
that
way
the.
A
Yes, the metadata is stored unencrypted, so the KMS is responsible for guaranteeing that it's not storing anything sensitive, or, if it is sensitive, that it is somehow encrypted, which is easy for it to do, because it has access to the cloud KMS or whatever. So it's not a hard problem; it's just a specific contract that we have to be careful to describe correctly, yeah.
A
Yes, so the idea is the value is opaque to the API server, but a change in the value signifies to it that it needs to change the hash, and that when it does reads from storage, especially when it's using a cached DEK, it can tell: hey, is it the same? It doesn't matter if it was before or after, it doesn't actually matter where it came in; is it exactly the same? And if it's not exactly the same, it just says the thing is stale and you need to do a hard rewrite.
A
The ability to do this rotation safely, basically, I use the concept of a status to kind of say: you have the concept of a version, but we also wanted health, and I kind of just cheated and said it's a string, and "ok" means good, and anything else does not mean good. Maybe there's a better way to express that, but I figured we wanted to have at least some way of being able to say that.
A
So I mean, this is all making the assumption that the API Machinery folks are happy with us piggybacking on their storage version hash API, which is technically a beta API right now, and being like: I would like to add to this. But the reason I feel semi-confident that this is okay is that the reason this API is there is to enable storage migration.
A
With the caveat, and you know this is written down for storage version migration, that you have to coordinate. If you have multiple masters, you have to have some external coordinating capability, because there exists no concept in the Kubernetes API to identify API servers and then directly make connections to individual ones. We have approximations, but not exact guarantees. So it just kind of waves that part away, and I'm basically waving that part away,
A
Also just saying that. I guess I would ask you, Anish: in Azure, do you feel like you guys could build a safe customer experience around this? Does it feel like a reasonable thing? Like, I am assuming that the customer initiates their rotation; it's more of, the customer initiates the rotation, and can you basically keep the API servers up the whole time and the KMS plugins up the whole time and do the rotation sort of for them?
A
Yeah, I think you're still fine. We have to come up... I think we already agreed to do the dynamic reload of the config, so as long as you have that, I think that's also okay. So yeah, you don't have to use this mechanism. The thought process behind this mechanism is: you configure your KMS provider once, basically as a single, only encryption item, and you basically never change it, and you use the external APIs to drive the behavior.
D
Basically, yeah. I think the only part, probably for all cloud providers, right, is whether we can detect the change dynamically. Like, the only way possible is through polling, or basically exposing a public endpoint where we can get, like, event notifications; that is one way to detect if there is a newer version of the key available. Or the straightforward way is to build that logic in the KMS plugin, so that we don't even have to have two instances of the KMS plugin.
D
It's just encrypt versions and decrypt versions, maybe on a very high level, where we can say: for encrypt, use this version, and for decrypt, use these different versions. But the cloud provider still has control over the KMS plugin, so they can configure the plugin to do this. If the plugin is not smart enough to detect dynamically whether a newer version is available, the only way the plugin can detect that there's a newer version is by polling.
A
So yeah, it's basically a way of saying: if I was going to write a new object, what is the hash of, basically, the group, version, kind, and other state, right. And writes only occur from that top one, right; so it's always that first one. So I think we're still fine there. I think, in regards to what you're saying about having, like, a smart KMS plugin,
A
If I'm understanding correctly, I believe we can give people the ability to build such a plugin. Because part of the reason I want to store the current key ID is, the idea is, even if you change it dynamically and you're not going to do a rotation, that is kept in storage. So, let's say you're a KMS plugin, and the key has been changed, but this KMS instance hasn't observed it, but maybe some others have. When it gets the decrypt request,
A
It will see the key ID in there and be like: oh, that one doesn't match mine, it's actually different than mine, right. So it can, at that instant, decide to do an inline check, or it could just say it doesn't matter: it's still in the correct format, whatever its own opaque format is, I can still figure out which key I have to use, I'll just use it, right. And maybe optimistically, like maybe in a second goroutine, be like:
A
I need you to go ahead and poll, because obviously something has changed; either I'm out of date or somebody else is out of date. I just want to make sure we're all on the same level. So I think you're pretty safe, because I think we've already sort of designed the whole thing in a way that having a disjoint understanding of the keys is okay, because we just synchronize using the external cloud.
A
What it looks like, so I think this is a base64-encoded SHA-256 hash, but purposely only the first eight characters, because the birthday problem doesn't matter too much when you only have 10,000 resources at most or something. So that's the idea, right: if you have a way of doing this call against every single API server, which is possible in OpenShift directly through the Kubernetes API, because each API server is addressable.
A
I think it is also directly possible in kubeadm clusters; less sure about the clouds, but certainly the cloud provider has a way, they manage the infrastructure, and so that's how they could fully automate this.
A
It's basically like, you know, the button that says either "rotate key", or maybe says "allocate key and provide me the new key", or however the cloud provider manages that. You just press that button, and then it probably, like, locks you out and doesn't let you do anything else for a little while; then it just goes and does a rotation for you. But you don't have any downtime, and really not much of anything happening; like, honestly, almost nothing happens, right.
D
Yes, yeah, but I think this can actually work. Like, I mean, the key discovery, like the key version discovery, is something that the KMS plugin can do however they want to, but other than that, automating this whole thing with this, I think that'll work great.
A
Yeah, so basically, what we're asking the KMS plugin through this API, and this is like a required field, right, is a way of saying: please come up with a stable identifier to describe your current key, and if you change it at any time, just tell me by changing the stable identifier to a new stable identifier. That's basically the contract we're telling them. And yeah, it is a harder question when you're using something like Azure Key Vault, because, you know, there's a full REST...
A
Yeah, and then basically, I went with the route of: I just want arbitrary metadata. So in here we would probably have the encrypted KEK and any other semantics that we feel are very relevant for the decrypt or debugging, anything that's not, basically, the root key that you can't give up, yeah. So, if we're kind of sort of happy with this general idea, I guess, do you folks have other things to discuss before we maybe try to make some issues?
D
I don't have anything. So, in this same repo, I added the proto one that I've been working with as well. So I think, at least for what we want to do, I think you also have the hierarchy stuff in this file, right, like with the metadata and stuff, and in my proto definition I just had metadata as bytes. So we can probably create a PR and then finalize the proto API on what we need, like how we want it to look.
A
Okay, so I see you have this one: that's the key rotation issue. Do we feel like we could rename this issue to "update to the newer proto", as a way of saying that that is, like...
A
So I can think of, I mean, there's like two core tracks of work in my head: there's the Kubernetes API server changes, and then there's the KMS plugin changes. Those could certainly be split up, I suspect, into smaller ones, but those are, like, the big chunks of work that are clearly isolated. But they're hard to do independently, because they talk to an API that doesn't exist yet, because we have an open issue, but...
A
So one of the weird things today in the code base is the fact that the health handler and the actual encryption system are basically, like, two separate instances of gRPC connections. They're actually disjoint.
A
I guess one thing I want to do is, I will go ahead right now and...
A
Because I think we went into some serious weeds, but this is recorded, so it might be worth watching the recording. But the gist is, we're trying to figure out what some next steps might be.
A
So I talked through this proto API, which is sort of a very high-level proposal for the v2 API, in a way that supports rotation through sort of this first-class concept of a current key ID, and sort of an extended status API for version, health, and that current ID, and then, basically, in all the encryption and decryption requests and responses,
A
The current thing is always encoded, to let our cache logic know if things are stale or not stale.
A
So I will take that item for the next API Machinery meeting, which is, you know, a week and change from now. You can do that. But yeah, we're trying to formulate what items we can work on right now, without maybe getting too far ahead of a KEP, you know, because we do...
D
Yeah, so for the health handler and all of those, it's the same thing, right: we want to validate our proposal, so when we actually have the KEP, we can also show them it works. I was thinking: should we build on the existing PoC that we have and just also add rotation, to see if it works like what we're proposing?
A
Yeah, I mean, I don't think that's a bad idea. I don't think it would be wasted effort, because I feel like it's, like, the trickiest part, and you know, if we can see it working, we can have a lot more confidence that what we're proposing is the right thing.
A
So yeah, I think that's fine. Do folks feel like we're at a place where we could start writing a KEP, or do we feel like we want to do more work on the PoC first and, at the same time, also reach out to, like, API Machinery?
C
I think you can always start drafting a KEP and then, like, start discussion here and there, and once we have some real action items, then put them on the KEP. Because there are already some points that we could tackle; part of it is the PoC, and I think having a way to track the effort might be a good idea.
A
Yeah, the only hesitation is, or the only hesitation I have is: do we feel that we have enough crispness in the technical details to... I guess it probably doesn't matter. Like, I feel like the KEP we're proposing has enough different pieces, and we've built enough consensus on a large set of them, that we just need to write them down.
A
Or whatever. I don't think any of those are, like, necessarily big deals, or really in any way blocked without the KEP work. So that's fair.
A
So what we're saying is...
A
So if someone wants to create a Google Doc, paste in the template, and start, you know, adding bits and pieces, that gives all of us a place to be working on it. I'm mostly proposing a Google Doc because it's easier for us all to work on it together, without making PRs and doing all that. Obviously, we eventually make that PR.
A
So that is an item. This is a piece related to, I guess, the PoC, and same with this, right. This is kind of, basically...
E
I have one suggestion for the KEP: that we just do it in this repo, because then it's Markdown and people can comment. It's actually easier than a Google Doc. Is that okay, or do you think a Google Doc is good for, like, some things? But for the KEP, because it's Markdown, it's actually better if it's just on GitHub.
A
So I don't disagree that, once it's at a certain point in time, it's good to move it into Git and then start having more deliberate agreement on changes. But I don't know if, as a starting point, it's a good idea, because then, for example, if I write something, but then Anish also writes something, whose starting point is the starting point? That is kind of what I was trying to avoid.
D
So I think the third one is the reference implementation, right. So, like, for the PoC, all of the code is just strung together in the KMS plugin, but for the reference implementation, I think if we can have what we want to start off with there, then that can be just a separate workspace. I think, on a very high level, it's basically just key generation and the rotation.
D
The KMS plugin can just import it and use that, for logging requests with the UID, so maybe a logging instance, and then metrics is something that we can eventually add. But I think this reference library work can be done separately; it doesn't have any impact from the proto API or any of the other changes, right. This is mostly just: how do you generate the key, then encryption with GCM using the local KEK, and then the rotation based on a counter, like, after how many operations this particular encryption is done.
D
And then we could add a third interface for interacting with, like, the external KMS, for encrypting and decrypting the local KEK. But again, that can be extended. Right now, the library is as simple as: hey, give me a key. It goes and looks at the cache and everything and just returns a key, and then I just use that, and then I can go and call the KMS to encrypt it, or we can pass a handler which will cause Key Vault to encrypt it or decrypt it.
A
Basically, like, for example, this is probably some kind of interface that is implementable by different vaults and such; this is probably an interface that's implemented by different logging libraries. This one, I guess, is Prometheus metrics, I guess, I don't know, something like that, probably. This one, I think, is pretty easy, and this one's pretty easy.
A
Okay, so I think the feeling we had last time, when we talked about that stuff, is that the Tink stuff was a little bit too opinionated and wanted too much control over how things are sort of laid out, in basically a byte slice, of how the keys and stuff should be laid out.
A
So I think our feeling was that we would probably end up hand-writing this, probably using Tink and the existing API server as inspiration, but we want a bit more control over, basically, like, this is what the key ID looks like, or sorry, this is what the encrypted key looks like. And basically, the Tink stuff wants to have, like, a flat slice, a proto array, for basically everything, because the unified interface of programming languages is slices and bytes.
A
So you want to use a nice map and not just a giant blob, because what Tink would basically say is: there will be one key, which is, like, bytes or whatever, and that's it. And I was like, I don't want to do that; I would like to have something a bit nicer, right, because this is meant to be unencrypted data on disk. So if it's there and observable, it would be nice if a human being could read it.
A
Also, I think, remind me if I'm remembering this right, Kristoff: with Tink, was it that you had to enforce when the rotation occurred, like, manually somehow?
B
Yeah, exactly. So I added an atomic counter on top of that structure and counted how many encryptions were happening.
A
Yeah, so that part was a little... I kind of was hoping that it would just do rotation at some interval, instead of asking me to figure out when the interval occurred. But I guess that's fair; maybe some places want time-based versus counter-based versus something else. Okay, that does bring up a question for me: if you have a relatively idle environment that isn't doing a lot of writes, do we feel like we would want to do the local KEK rotation?
B
Well, I would always default to the counter, ideally, because with a timeout, I don't know, I have no experience with what the upper bound is, what can happen on large clients on the customer side when they're using Kubernetes, how many of those writes could happen, and whether, for example, it's reasonable for certain customers to surpass the 2 million threshold.
A
I don't care, I'm just gonna throw away the key and just make a new one, just because enough time has passed. Like, as an example, you know, in OpenShift, the storage encryption at rest has just a one-week interval where keys are just rotated, not based on any write load or anything; it's just every week. It just happens, always, when you enable encryption at rest, right. And that was just based on... we have no idea how many writes there are. But in this case, we do know how many writes there are.
A
I'm more saying that, even in that world, do we feel like we should just proactively also cause, like... basically, you get at least one rotation per week no matter what, and you may get more if you do a crap ton of writes. That's sort of the API and semantic I'm thinking of. Is that okay? I mean, I don't think it can hurt anything.
B
No, it definitely doesn't hurt, especially for those customers who don't decrypt all too often; it wouldn't be a huge issue for them to also rotate once a week, right. But on the other side, those who actually would need more frequent rotations, who are using the key more often, wouldn't benefit that much from the weekly rotation.
B
So maybe, definitely, some upper limit in days would make sense.
A
I think the only, you know, real concern we have around this local KEK is: it is in memory in a Go process on the machine, right; it's not in your Key Vault. It's less safe, presumably, because the Key Vault is probably implemented using hardware, and at the very least it's in some data center, probably, so it's, like, extra isolated, right, because its entire purpose is security, whereas Kubernetes' purpose is application development.
A
As an aside, I suspect, in the final version of all of this, the Key Vault stuff would be in its own repo and just consuming the library. I'm not stressed at all about it if you just shove all of it in one repo, with the Key Vault stuff in its own folder, and it's importing the other folder and just using it. I don't care about that.
A
I think that's all, and I would prefer to limit how many Git repos we're working with to two, as in the API server and this one. That way, it's just less moving parts.
D
Yeah, I think that's what I was saying: we want to extract it and have it in the reference library. I didn't want to spend time on creating a library, so I just put all my code into, like, the Key Vault plugin, to validate things.
A
Yeah, that's cool. Okay, I think we're out of time. So, you know, if folks want to pick up stuff, that'd be cool.
C
Could you ask the question one more time? Yeah, I was wondering if we could start right away refactoring, like, the way the API server handles the health check for the KMS plugin, to only have, like, one job, essentially, like issue number four.
A
So you could try to fix the multiple-connections bit. You could technically just do that right now in the Kubernetes code base, but it would have to be against the v1 API, right, or the v1beta1 API. But that was...
A
Well, then, the new API has an actual concept of health. The current implementation of health literally just tries to call these two APIs and sees if they're not dead, which is not really health; it's kind of different, yeah. So yeah, there's a pretty significant change to health in this, which is that there's a real health check, which is a way of asking the KMS plugin.
A
So yeah, I mean, I think the limiting factor is getting the API in and generated in sort of both repos, so that way you can start sort of programming against it, and then I think you can sort of more parallelize the effort and just pretend the other end exists. I don't think it's too hard to do that, but you do have to have, like, a ghost structure to work against.
A
I have to go. Yeah, it's been good; see y'all.