From YouTube: WG KMS Bi-Weekly Meeting for 20220315
A
Hey everyone, this is the SIG Auth meeting for March 15, 2022, specifically KMS meeting number 10.
B
Yes, sure. So my current implementation was under an assumption, which is wrong, and which I can see from the source code: that the KMS plugin also does the encryption. But it does not. So this is something that I need to change. But anyway, my question was: if we introduce AES-GCM just in the KMS plugin, that's only half of the solution, right?
B
So it's pretty nice that we have stronger, authenticated encryption for the DEKs, but we would also need to add AES-GCM to the envelope logic in the API server, right? Yeah.
B
Okay, and everything except the last part: so we don't tell the KMS plugin what to do with the DEK.
A
It is up to the KMS plugin what it wants to do. In our reference implementation that would be AES-GCM using a key hierarchy, but the KMS plugin is free to do whatever, including not encrypting, because you can't make it actually do encryption, right? It could basically just send the data back to us unchanged.
A
You know, one of the things I had proposed a long time ago, which I don't know if Alex ever did, was to take my Citadel thing and strip out all the encryption and just return the keys as-is, as a way of measuring the pure impact of having a KMS plugin at all. Because it's not making any network calls and it's not even doing any encryption, but it is present in the loop, and it is causing gRPC connections to be made and funneled through.
A
That overhead we can never actually get around unless we came up with a completely different architecture. But yeah, I believe in the key hierarchy within our KMS reference implementation we would use GCM. And last time, when we talked about the proto config that Anish was looking at, you know, there would be this key.
A
We had talked about, well, maybe it would be nicer for it to be more structured, like there'd be more fields available so that there could be more.
A
I could imagine metadata that a KMS plugin emitted, which was: what is the mode I used to encrypt the plaintext that the API server sent me? It might be AES-GCM or whatever. That way, if we go down the route of including metadata, I would hope our reference implementation would also include that mode as part of its metadata, so that we have both flexibility on the API server to change things in the future, but also within the reference implementation.
A
Thank you for answering the question I was going to ask, Anish. Did I say anything that you all felt was inconsistent?
D
I was actually going to ask: should that change happen, we could work on it in parallel, right? That particular change.
A
Yeah, you know, I had a PR like two years ago that basically did that, and we just decided not to pursue it because we were like, oh, we'll fix everything, and then we did nothing. So yeah, if we want to... we still have two weeks in 1.24.
A
A really heavyweight thing for a KEP.
B
But it might need that KEP, because either we make it a property where you can configure how you want the envelope encryption to happen, for example; the default would be AES-CBC, so it's a behavior that you could change to AES-GCM. Or we just fully change it, for example to GCM, and decryption would still need to work, right? So we would need some kind of logic that says: if GCM fails, then please try CBC and assume that it worked.
A
So the PR I had, all it did was give the API server the capability of reading GCM data back, but it only let it use CBC, and it did not actually change any behavior. It just made it so that if, in the future, it encountered GCM data, it would understand it and be able to do something with it. The idea being that one release later, or maybe even more, you just flip it to say "always use GCM", and then it doesn't matter, because you can always downgrade back.
A
That was sort of the framing, because it basically lets you use the multiple release cycles as a way of avoiding the API question. It says: this is the new default; it is obviously not compatible with the old one, but we can gracefully move you forward and let you gracefully move back, and you're fine. So we haven't introduced any breaking change by changing the format; we've just pushed it across enough releases that it's not a breaking change.
A
I don't know if I'll be able to pick it up in the next two weeks. I mean, I can certainly try if there's enough motivation and interest, because I literally remember Mike's comments saying: this is a great idea; by the way, should we fix a bunch of other stuff while we're at it? And I'm like: okay, fine, maybe we'll fix a bunch of other stuff. And then we never did.
D
Yeah
but
as
you
said,
this
is
like
not
it's
related,
but
it's
it.
I
think
the
isolating
it
to
its
own
pr
makes
sense,
and
since
it's
not
a
breaking
chain,
you
can
introduce
it
in
this
release
even
right,
I
I
so
I
I'm
a
proponent
of
you
just
reviving
that
pr
and
push
it
through.
A
Okay, yeah, so we can totally do that. So, if I recall, Anish, you were going to maybe present something today, or show some stuff today. Do you want me to stop sharing?
C
Okay, so I have a working PoC of what we discussed last time. One thing was, I ran into an issue with using protobuf for serialization, so just for PoC purposes I'm using JSON right now, but changing that should be pretty simple. In terms of what I did: basically, I created a new API, a new proto file, so v2alpha1, and then in v2alpha1...
C
The things that we are adding: we're adding this new string uid field, so that we can observe requests going from the API server to the KMS plugin, and then also, like Mo mentioned, this new key field. Right now I am using this key only for the encrypted local KEK, but I think we can also add additional metadata here. And in terms of the Azure KMS plugin...
C
I was thinking we can send metadata on which KEK was used, that is, which KMS KEK was used to encrypt this local KEK, so that when we get it back for decryption we don't have to scramble through all the versions available; we can just pick the version that's in the metadata and try to decrypt with it. So in the encrypt request we are adding a new uid field, and then in the encrypt response...
C
...basically, the KMS plugin sends back the encrypted local KEK which was used for the encryption. In terms of the changes that I had to make, most of them were in TransformToStorage and TransformFromStorage: basically reading through the serialized data and then sending the encrypted KEK and the ciphertext. When you do an encrypt, you get the ciphertext and the encrypted KEK from the KMS plugin, and then you add that into etcd; and then, similarly, on the decrypt call we send the...
C
So I think the majority of the changes were in the KMS plugin. And then I looked at Mo's Citadel already, and I think it implemented a lot of the things that we need. Although it was about getting the KMS KEK and using it directly, I was able to reuse most of it for locally generating a KEK and then using that for encryption.
C
So basically I'm creating a new KEK service in my KMS plugin, and I'm using some of the logic that's already used in our API server for generating the key, and also storing it in an LRU cache. So I have a transformer LRU cache where I'm storing the base64-encoded encrypted KEK and the decrypted KEK, so that whenever the calls come in we can just automatically use that. And then, at any given point in time...
C
...I have one KEK which is used as the nominated KEK for encryption purposes. And then I added this encryption operation counter, which can be configurable, but right now I'm setting it to five just so that we can see it in the demo. What this does is reuse the same key for encryption a certain number of times, and then, when you hit the limit, it will rotate that KEK. So in this get logic we are saying...
C
...if this particular encryption KEK has been used for fewer than five operations, then we can still reuse it for encryption, so just get it from the cache and return it so we can encrypt with it. But if we have already hit five operations, then what you do is go and rotate this locally generated KEK, and then use that as the new KEK going forward for all the operations. And then here, in rotate-encryption-KEK...
C
...we ask Key Vault: can you encrypt it with the KMS KEK for me and give it back? And this metric I added so that, as part of the demo, we can actually see how many calls we are generating. But yeah, after we do that, we're basically adding the encrypted KEK to the cache, and we also have the encrypt key, so that whenever the calls come in we look at the cache; if it's not found in the cache, then we go outside and...
A
A question, Anish. Maybe it was the last thing you just said: where is the logic for when you... I guess this is just encryption, right? You haven't gone over decryption.
C
The decryption is basically in the get, so this is the part for decryption. When a decryption call comes in, we already get this encoded KEK as part of the decrypt request on the gRPC call. So the first thing we do is look it up in the cache to see if it's already there. If it is not found in the cache, then we take that encrypted KEK and make a call to Key Vault to say: hey, look, I have this encrypted KEK...
C
...can you decrypt it for me using your KMS key? And then, once we get that back (again, we have a metric here just for demo purposes), we also add that KEK to the cache, and then we send it back so that it can be used to decrypt the DEK. And then, in the case of the local encrypt and decrypt that we talked about: we had talked about AES-CBC being used in the kube-apiserver.
C
So I followed along with what Citadel is doing and used AES-CBC for encrypting the local KEK in the KMS plugin as well; that is where the CBC code goes. This is very similar to what Citadel is doing: we're basically creating a new AES-CBC service, and then we are actually doing decrypt and encrypt. The decrypt operation is basically using the local KEK to decrypt the DEK, and encrypt is using the local KEK to encrypt the DEK. So nothing else changes; it's just...
C
...that we have another KEK, which is there in memory, and we're using that. So we have that, and then... yeah, I think the majority of it was that. And then, obviously, in some of the logic where we were directly calling Key Vault, I just had to swap it to use the local encrypt/decrypt. But I have a working cluster with this.
A
Can I ask you a quick question? (C: Yeah, sure.) Could you show me where that get function is called? I was just trying to mentally map the code in my head. So where do we call get, in relation to that AES-CBC service thing?
C
Yeah, so basically, when we are actually doing decrypt: this decrypt is called when the gRPC request comes in. So here, when we call decrypt, we give it this encoded KEK, which we actually got as part of the gRPC encrypt or decrypt request, and then it uses that to say: hey, go get the KEK. And that lookup is just doing the get, so it's going and getting the KEK.
C
Yeah, we do. So this encrypted KEK is actually coming from storage, right? In a decrypt call, the API server is reading it from etcd, and it's giving us the ciphertext and the encrypted KEK as part of the decrypt request.
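The lookup path just described (cache first, Key Vault on a miss) might be sketched like this, with the Key Vault unwrap stubbed out and all identifiers hypothetical:

```go
package main

// Sketch of the "get" path: on decrypt, look the encrypted KEK up in an
// in-memory cache keyed by its base64 encoding; on a miss, ask Key Vault
// (stubbed here) to unwrap it, then cache the result.

import (
	"encoding/base64"
	"fmt"
	"sync"
)

// keyVaultUnwrap stands in for the real Azure Key Vault unwrap call.
type keyVaultUnwrap func(encryptedKEK []byte) ([]byte, error)

type kekCache struct {
	mu          sync.Mutex
	byEnc       map[string][]byte // base64(encrypted KEK) -> plaintext KEK
	unwrap      keyVaultUnwrap
	remoteCalls int // mirrors the kms decrypt metric from the demo
}

func newKEKCache(unwrap keyVaultUnwrap) *kekCache {
	return &kekCache{byEnc: map[string][]byte{}, unwrap: unwrap}
}

// get returns the plaintext local KEK for an encrypted KEK, calling Key
// Vault only on a cache miss.
func (c *kekCache) get(encryptedKEK []byte) ([]byte, error) {
	key := base64.StdEncoding.EncodeToString(encryptedKEK)
	c.mu.Lock()
	defer c.mu.Unlock()
	if kek, ok := c.byEnc[key]; ok {
		return kek, nil
	}
	c.remoteCalls++
	kek, err := c.unwrap(encryptedKEK)
	if err != nil {
		return nil, err
	}
	c.byEnc[key] = kek
	return kek, nil
}

func main() {
	stub := func(enc []byte) ([]byte, error) { return append([]byte("kek-for-"), enc...), nil }
	c := newKEKCache(stub)
	c.get([]byte("v1"))
	c.get([]byte("v1")) // cache hit: no second remote call
	fmt.Println(c.remoteCalls) // prints 1
}
```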
C
Yeah, so we're actually mimicking what we're doing today with DEKs in the API server. So this is the get call. If the encrypted KEK is nil, that means it's an encrypt operation, because there is no data available. Currently I'm assuming it's an encrypt operation, but it can also mean this is old data; that's a case we have to handle. But let's go down the route where this is a decrypt operation, so we are not in that if branch.
C
Not
in
that,
if
loop,
basically,
we
are
saying,
go,
look
up
the
cache
first
thing
with
this
encrypted
kit.
If
it
is
available
in
cache,
you
return
the
unencrypted
check,
so
it
can
be
used
for
other
further
operations,
but
if
it
is
not
found
in
the
cache,
we
already
have
the
encrypted
kit,
which
is
this
so
we
go
and
make
a
call
to
keyword
to
say,
hey
you.
A
If I can make a quick comment: I think we already talked about this last meeting, but both in the code and in our APIs we're going to want to be very careful with how we name all the variables, because basically all of them are like kv, Key Vault key, this key, that key, and it just gets...
C
Yeah, I think there was a point in time yesterday where I was just going around in circles, because all my variables were like: wait, encrypted, encoded, which one is which? But definitely, when we clean up this implementation, yes, the variable names and how we name things are very important for actually understanding it.
D
I
mean
now
that
you
explained
it.
I
think
the
encrypt
that
can
encrypt
and
encrypt
data
make
sense
to
me.
Like
I
mean
this
makes
a
lot
of
sense
like
the
way
it's
stored,
so
I'm
actually,
okay
with
the
naming
of
these
things
in
scd.
I
think
it
was
just
in
the
code
yeah.
It
was
confusing
like
get
and
decrypt,
because
everything
is
decrypt
but
who's
calling
decrypt
right
like
it's
a
local
d
grip
or
it's
a
kmsd
crib.
C
Yeah, okay, so I have the KMS plugin running. If we look at the logs, right now I'm just logging some of the things, so there's also the health check request and all that. So if you see here, basically, for the encrypt operations that happen, it says: hey, look...
C
So
if
you
look
at
this
metric
basically
this
is
the
total
number
of
encrypt
calls
and
decrypt
calls
that
the
kms
plugin
is
doing
right.
So
this
includes
calls
from
api
server
health
check
from
api
server.
Then
health
check
from
my
kms
plugin
as
well.
C
So if you see here, there are about 4200 decrypt operations that were called since last night, and similarly about 4200 encrypt calls. And then the metric that I added, kms encrypt / kms decrypt, was to see how many calls we're actually making to Key Vault. This number is divided by a factor of five.
C
So if you look at it, it's only 780 instead of the actual 4200, because we are only encrypting once and storing it in the cache. And even in terms of cache warm-up, we actually don't have to do a lot of warm-up. When the plugin restarts, it generates a new local KEK that it will use for new encrypt operations; but if the API server sends an old DEK, an old KEK, saying, hey, can you decrypt this for me, it'll say: oh, this is not in my cache...
C
So
we
can
do
that
and
then,
in
terms
of
all
the
configurations,
I
think
we
can
have
similar
things
right
like
today
we
can
configure
cache
size
and
api
server.
I
think
we
would
be
able
to
configure
cache
size
even
in
the
kms
plugin,
for
how
many
local
kicks
it
can
store
in
memory.
C
I
think
that
can
be
what
kms
plugin
providers
want
to
do
and
then
also
in
terms
of
how
many
times
they
can
reuse,
a
single
local
kit
for
encryption
can
be
a
factor
that
they
can
decide
based
on
how
many
calls
they
want
to
cut
down
like
based
on
rate
limits
available
with
azure
keyword
or
something
like
that
they
can
say
like.
I
want
to
reuse
this
particular
local
kick
for
like
1000
operations
or
10
000
operations.
Basically,
they
will
reduce
the
number
of
calls
by
that
structure.
D
Can I ask a question? So this cache for the local KEK is maintained by the plugin, right, the KMS plugin? What happens if... what if there are multiple...
A
Right, so you're using Azure Key Vault as your orchestration layer, right? So as they generate new keys, if a different instance of the KMS plugin sees a key it doesn't understand, it just asks Key Vault: hey, I don't know what this says, can you decrypt it for me? And then it'll cache it locally. Well...
A
Yeah, so thinking through this: let's say the default key reuse inside of the KMS plugin is like a thousand, or maybe ten thousand. In a medium-sized Kubernetes cluster...
A
...you would probably only need to make three calls to KMS, because you would basically have three API servers, so you're likely to have three KMS plugins with three keys that were realistically being actively used at any given time. And if you're doing your regular storage migration, you basically get it down to three, right?
A
This is cool; this is kind of what I expected: just as you said, it divides the number of network operations by how often you're willing to reuse the key. And I know, Christoph, you did some actual math to say, yeah, you can basically use it a stupid number of times and it doesn't actually matter.
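For context, the kind of bound presumably being referred to: NIST SP 800-38D limits AES-GCM to at most 2^32 invocations per key when random 96-bit IVs are used, so a reuse count of 10,000 sits about six orders of magnitude under it:

```latex
n_{\max} = 2^{32} \approx 4.3 \times 10^{9}
  \quad \text{(GCM invocations per key, random 96-bit IVs)}
\qquad
\frac{10^{4}}{2^{32}} \approx 2.3 \times 10^{-6}
```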
A
Yeah, it's just an absurd amount, right? If you use it 10,000 times, you're many orders of magnitude below any safety margin, without really any concern. Honestly, inside the code, all of these are just inputs to some function and can be used.
A
I don't know if I would want to expose this as config that I tell a user to care about, because 10,000 is perfectly safe and basically reduces the problem this is solving to essentially zero; or a hundred thousand, whatever. It basically makes the network problem disappear, in my opinion. So I don't know if it's worth asking the user to understand the number, because it's kind of hard to even explain what it does.
A
Because right now, if you run storage migration on this cluster, then depending on the state of the KMS plugin (basically, where the counter is at right now) it will either do a storage migration or maybe it won't. Okay... actually, I'm not even sure what it would do. So yeah, I think first we have to define what the expected behavior is.
A
So, stepping back a little bit and thinking through the Kubernetes API: storage migration, from the Kubernetes API's point of view, is basically kubectl get secrets --all-namespaces, shove it into a file, and then kubectl...
A
...yeah, kubectl replace -f that file, right. That's the interface that Kubernetes provides, so I'm trying to stay within that for a second. The expectation, traditionally, has been that the API server can use the positions in the EncryptionConfiguration object to figure out if keys are stale.
A
Basically,
there's
never
a
need
to
do
storage
migration
because
there's
only
one
right,
it's
only
one
in
the
list.
It
can
never
be
stale,
but,
as
we've
seen
that
every
five
increments,
it
changes
right,
the
key
being
used
technically
changes
from
the
perspective
of
like
the
api
server.
Maybe
we
don't
care
about
that
five.
But
if
you,
if
you
changed
the
I
I
guess
I
didn't
ask
a
niche:
how?
How
is
the
real
key
vault
configuration
done
on
this
plugin?
Is
it
like
just?
A
Right, so in terms of orchestrating the rotation right now: the assumptions in the API server are currently correct, because the meaning of the KMS plugin never really changes, and the meaning in the encryption config can't really change right now, because there's only one. So in that sense this is correct, and it's even correct with this new key hierarchy, because the final Key Vault key is still the same thing.
A
So it's kind of fine, but it's not sufficient if you want to rotate the key itself. Basically, you know, we talked about labels or whatever. If there were a way of saying "key label equals current prod" or whatever, so that instead of pinning to a particular exact version you were pinning to a name instead of a version, then there has to be some way for that to be communicated through the whole stack.
C
That's where the metadata that Mo talked about can be helpful. Today, as part of the byte key, all I'm sending is the actual encrypted local KEK. But let's say we have an additional metadata field, or we reuse that field: in that metadata field we would have this encrypted local KEK, but also, for the Azure KMS plugin, we could say this was the version of the key that I used for encryption.
C
So
let's
say
the
key
got
rotated
in
key
vault,
so
there
is
v2
and
then
there
is
v1
and
everything.
That's
there
in
xcd.
Right
now
is
encrypted
with
p1
right.
Even
if
the
user
does
a
storage
migration
immediately,
then
everything
would
get
decrypted
without
any
issues,
because
in
the
metadata
we
are
saying
v1,
so
we
will
go
to
keyword
and
say
hey.
C
I
don't
have
this
in
cache
decrypt
this
with
v1
and
give
it
to
me,
and
we
still
would
only
make
one
beat
call
because
the
first
time
it's
not
in
cache,
we
make
the
decrypt
call
then
for
all
the
other
secrets
that
come
as
part
of
storage
migration.
We
don't
have
to
it's
already
in
the
cache.
Decrypt
would
work
fine
and
then,
when
encrypt
call
comes
in
for
all
of
them,
we
would
actually
encrypt
it
with
whatever
is
configured
as
latest
or
whatever
is
pinned
to
the
kms
plugin
right
now.
Okay,.
A
What needs some tweaking in the API server is the fact that, let's say, in your KMS configuration struct you set the cache size to a million: basically, cache all the data encryption keys you could really care about. The API server will then never make a call to KMS once it has cached them. So when you do a storage migration, it won't do anything with KMS; it'll just be able to decrypt the data, and it'll be fine. But that's the part that's missing...
A
...there has to be a way for the KMS plugin to tell the API server: I realize you might be caching stuff, and I realize it's actually functionally valid, but I need you to pretend that you don't have that cache and throw it away, because I want you to explicitly call me, because I might want to make a change.
A
And what that does is it allows the KMS plugin to be like: okay, cool, I am currently at version one, and I see that you're also at version one. It does not matter that my particular local KEK is different from the one that you used, because we're still at v1. It doesn't matter, right? Conceptually...
A
...all of those local KEKs are just basically in a hierarchy under one true root key, and so we don't care: they're semantically the same. But if I instead see that I'm at version two now... and there's a little bit of ambiguity there, in the sense of: I'm at version two, but how do I know all the other plugins are at v2? I think there is some gap right there.
A
That has to be resolved, but let's just pretend that it has been solved. If it sees it's set to v2, and it sees that the metadata says v1, then it's like: cool, now I actually do definitely want to cause a change, because I know that the real root key has been changed. So now the hierarchy matters, because it's actually a different hierarchy.
A
So then, again, that has to flow through in some way, because then you can basically just make easy tweaks within Azure Key Vault and the rest of the system will catch up. I do think there needs to be something like: I make a change in Key Vault, and I wait a little bit for something to happen, basically for all the KMS plugins to catch up to me, and somehow I observe that this has been caught up to, and then I do storage migration.
D
Here's a crazy thought: what if we keep the remote... sorry, what if we keep the metadata in the Kubernetes API server cache as well? That way it would know, like: hey, this is out of date.
A
Well, right now we don't really have the concept of a version. So, you know, in Anish's proto there's a key; I think key is the wrong name. I think it should be a key ID or something, so as to not make anyone think they're supposed to send a literal private key or an encryption key. It's supposed to be a reference, and it's okay if it's an encrypted reference, but it's not literally plain text.
A
I don't want to have to explicitly tell you that you don't need to change anything. It needs to be smart enough to say: I see that you have a v1 key; I'm also at v1. It doesn't matter that the local KEK is different; they're conceptually the same key. And it basically hands you back the exact data that you sent me without changing anything, on purpose, to make sure that you stay aligned with me. We basically have to have a way of getting a push-notification effect.
A
Actually, no, I think it's both: next time you encrypt or decrypt data, you need to call me. I don't care if your cache is up to date and you can do it without me; I need you to call me, so that I can help you figure out, basically, what to do here.
A
I think that works; it's just that you have to somehow guarantee that, before you run storage migration, the periodic thing happened after you changed the real key in Key Vault. So that might be okay: basically, instead of the communication coming from the KMS plugin, as I was initially saying, maybe we can try to reuse the channel we have in the other direction and just poll instead of push. I think that might be fine, because at the end of the day, rotation is relatively rare.
A
It's
it's
like
it's
a
really
important
edge
case
that
we
want
to
have
strong
support
around,
because
it's
basically
the
hardest
part
of
the
whole
thing,
but
it's
also
still
a
rare
enough
event
that
you
could
say
that,
like
it
happens
like
at
worse
once
a
week,
basically
like
once,
maybe
once
a
day
at
the
really
crazy
edge
cases.
So
it's
like
okay,
fine,
if
you
pull
like
every
minute
asking
the
plug-in.
Hey
are
you?
Are
you
good
now.
C
Yeah, I mean, if you look at it, it's already doing a health check every 20 seconds today, so doing this every minute or something like that does not add a lot of calls. And also, it makes it a lot easier in terms of how we rotate stuff: all it really has to do is invalidate the cache. So any new request that comes in now says: oh, cache miss, I'm just going to go call the KMS plugin.
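The poll-and-invalidate idea could be sketched like this: the API server piggybacks a key-version check onto the periodic status call and drops its DEK cache when the plugin reports a new version. The field names and the shape of the status response are assumptions:

```go
package main

// Hypothetical sketch of the polling idea: compare the plugin-reported key
// version with the last one observed, and invalidate the DEK cache when
// the remote KEK has rotated, forcing subsequent reads to call the plugin.

import "fmt"

type statusResponse struct {
	Healthy bool
	KeyID   string // version of the remote KMS KEK the plugin observes
}

type apiServerState struct {
	observedKeyID string
	dekCache      map[string][]byte
	invalidations int
}

// observe runs on each periodic status poll. Any change in the reported
// key version empties the cache, so every next request is a cache miss
// that goes back through the KMS plugin.
func (s *apiServerState) observe(st statusResponse) {
	if !st.Healthy || st.KeyID == s.observedKeyID {
		return
	}
	s.dekCache = map[string][]byte{}
	s.observedKeyID = st.KeyID
	s.invalidations++
}

func main() {
	s := &apiServerState{dekCache: map[string][]byte{"a": []byte("dek")}}
	s.observe(statusResponse{Healthy: true, KeyID: "v1"}) // first observation counts as a change
	s.observe(statusResponse{Healthy: true, KeyID: "v1"}) // no change: cache kept
	s.observe(statusResponse{Healthy: true, KeyID: "v2"}) // rotation: cache dropped
	fmt.Println(s.invalidations, len(s.dekCache)) // prints 2 0
}
```

This still leaves the coordination question discussed next: an external actor has to know when every API server and plugin has observed the new version before running storage migration.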
A
And yeah, within the scope of status it makes sense to say: I am healthy, I can talk to... you know, I'm healthy because I recently had a successful interaction with Key Vault. That might be the definition of healthy for the plugin: that within the last 10 minutes I made a successful network call. I'm healthy in that way, and, by the way, in that last...
A
I
I
guess
there
has
to
be
some
definition
of
like,
like
how
does
a
kms
plug-in
know
that
it's
now
at
v2
versus
v1
and
like
how
recently
did
it
check?
There's
some
there's
some
ambiguity
there.
I
guess
of
like
there's
a
delay
right
like
we're
not
not
going
to
sit
there
pull
every
second
trying
to
be
like
hey
key
vault.
Did
you
get
a
new
key?
That
would
be
silly,
so
I
I
guess
my
question
to
y'all
would
be
all
of
these
sort
of
layers
introduce
some
delay.
A
How
how
do
we
know
when
all
of
the
delays
have
effectively
caught
up
is
because
once
all
of
them
have
caught
up,
then
you
can
run
storage,
migration,
okay,
but
before
then
running
story,
migration
is
safe.
It's
perfectly
sound,
there's
nothing
wrong
with
it,
but
it
doesn't
necessarily
do
what
you
want.
D
That's why I was asking: when you say invalidate the cache, if we don't store the new version of it in the new updated cache, how will we know whether it needs to be invalidated or not?
D
Yeah, that's why I was saying: if the plugin says, hey, I have a new version now, and the API server sees it, then: okay, I'm going to invalidate the old cache and store this one in the new cache. Can we do that? Because then, next time, it would just use the cached one, and if it compares and it's the same, it won't invalidate the cache again.
A
So when you do this periodic status fetch, within it, it would tell you the current version of the thing, right? So I think what's still missing there is: how does the actor that's going to run the storage migration know the current observed version? Basically, if I told Azure Key Vault I want to go to version three now...
A
At
version
two
all
of
the
systems
are
aware
of
this
in
the
sense
that
they're
gonna,
as
they
do
their
periodic
polling
and
stuff,
they
would
catch
up
and
learn
about
version
three.
But
how
do
you
know
that
they
have
done
that?
Basically,
how
do
you
know
that
all
three
api
servers
and
all
three
kms
plugins
have
observed
the
new
v3?
A
Thus,
when
you
run
storage
migration,
the
right
thing
happens
that
that's
the
I
think
the
piece
that's
missing.
I
I
think
you're
correct
that
explicitly
trying
to
funnel
a
version
through
is
better
than
the
timestamp.
I
agree
with
that,
but
I
think
there's
still
one
step.
That's
missing,
which
is
how
do
you
know?
Storage
migration
is,
can
now
be
run.
D
Can I create something that triggers the storage migration?
A
I think that's where we would want to somehow include this version information, so the hash changes, meaning that within the Kubernetes API there is an artifact that changes when this is observed. But you still have an issue, right, which is that that artifact is per API server. I think... maybe; I'm actually not exactly sure on that.
A
So
there's
still
a
little
bit
of
a
coordination
issue,
but
yeah
like
there
has
to
be
some
way
for
you
to
say:
hey
all
three
api
servers,
I'm
late
for
my
next
meeting
that
you
have
observed
the
state
and
dust
storage
migration
will
do
the
right
thing.
I
think
there's
still
a
little
bit,
I
think,
of
disconnect
in
my
head
how
we,
how
we
get.
D
Right, yeah. I mean, I think last time you mentioned there was like some code for when storage migration happened; maybe we should all look at that.
A
I have not gotten back to it. I realized we're over time, so we can probably stop. Anish, that was really cool. Thank you.