From YouTube: WG-KMS Bi-Weekly Meeting for 20220503
B
Okay, is everything good? Okay, all right. So, as we were saying, we want to make sure it gets into 1.25, and the KEP deadline is fast approaching. We want to make sure we can get a lot of the KEP out and probably get SIG API Machinery to look and review, right? So we just want to give them enough time, so yeah.
So this is the draft document for the KEP issue and PR. I don't know if y'all had a chance to review it. I only got a few.
B
I got comments from Anish, so thank you for doing that. Do you want me to go through this real quick to provide some context, or jump into the items that we need to discuss and agree on? What would you folks prefer?
A
So I have been out of office, so I have not read anything. Also, people don't like to respect my half day and make me do things that are not fun on my half days, so the time I planned on spending on this was not used on it. I'm sorry.
B
I should highlight this: anything in yellow means we need to discuss and agree on it. One thing I just want to confirm: I think we said that, even though the observability KEP got merged and approved, because it's very closely related to everything else discussed in this KEP, we are actually merging it back into this KEP so that we can discuss it together. Also, I think there were some minor changes since that KEP got merged, so yeah.
B
Moving on. So the summary for this KEP, which we are proposing, is the key management service contract and the key hierarchy design in the KMS plugin. These are the things we're enhancing: improve readiness time for clusters with a large number of secrets; reduce the likelihood of hitting the KMS rate limit; enable fully automated key rotation for the latest key; improve KMS plugin health check reliability; improve observability of envelope operations between the API server, the plugin, and the KMS. That last point about rotation was a little hard to word, but I wanted to highlight that it's not fully automated key rotation for every type of key, but rather for the latest key. I don't know how to say that more clearly, but that's something that may be confusing for people if they're not familiar with this.
A
A summary, which is fine. There's a lot of things we've talked about, and even trying to summarize them is going to be hard. I think one thing we want to make sure is clear throughout the KEP is that things like "reduce the likelihood of hitting the KMS rate limit" are purely an implementation detail of the reference implementation that we're providing, right? It's not actually enforced through the v2 API in any way.
A
You raise a good point, yeah. There might actually need to be two buckets: what does the v2 API try to accomplish, and then a second list of the extra things we try to accomplish within the reference implementation that implements the v2 API, because we believe that basically everybody, or at least the majority of users, needs these things. So this is how the v2 API enables that, and then also...
A
We realized that many KMS implementations are based on underlying REST APIs that do enforce strict limits on encryption operations and just general IOPS. We didn't think that was a concern of the actual v2 API, or a core requirement that the API somehow handles; rather, somehow within the API server we reduce the number of operations we do. We believe the plugin layer is better for that, but instead of telling everyone "go solve this problem," we also want to solve it for them in a good, fully featured implementation. But this fully featured implementation won't force you to use the hierarchy.
A
As an aside, what we might want to do with the observability KEP is actually just retract it, so that people don't get confused and think that we're trying to do that KEP in two places. I could imagine a world where people say we should try to somehow shove this into the v1 API, but I'd really rather not try to come up with magical migration strategies from that old thing.
B
Good call out; I think this is more clear, yeah. Initially I started writing this in here, and it was just way too much, and I actually think it belongs in motivation. So if people want to know why we're doing this, each folded thing has its own section. But as a summary, if I'm just coming in quickly, say years from now, looking at this change, all I need to know is: what is this service, what is this new API adding, and how does that impact plugin implementations?
A
So, quick check: Anish, Damian, do y'all agree on that summary? Is there anything we have left out that was an important aspect of all the work that we've been talking about?
C
No, I think only for the rotation stuff. Rita did mention it's with the local KEK, right? And the one question that I wanted to bring up was: do we also want to mention the storage hash and the whole thing that we have been thinking about? Does that fall under the scope? Or do we just say it's not under the scope, but it's something that we will look into?
C
Yes, I think in the part where you have the rotation in motivation, we have this idea that when the KMS plugin starts up, when it does a status check, it says: hey, I'm using key ID 1 from Key Vault.
C
This is what I'm configured with right now. And then, let's say after one hour or two hours, now it's configured with key ID 2. It's possible that there's another instance, or someone changed it in the back end, or it just did that during runtime, right?
C
In this case the KMS KEK has been rotated, so we need to handle that rotation. The idea was: there is a storage hash which takes the status response, hashes it, and then stores the hash. Then, when the key ID changes, the hash will change, because the status response says, hey, I have a different key ID, and that could be a trigger for auto rotation. So basically, some component...
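The storage-hash idea being described here could be sketched roughly as follows. This is a minimal illustration, not the KEP's actual design; the field names and hash layout are assumptions.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// statusResponse mirrors the kind of data the KMS plugin's status
// call returns (hypothetical field names, for illustration only).
type statusResponse struct {
	Version string
	Healthz string
	KeyID   string
}

// hashStatus folds the status response into a stable hash. If the
// plugin starts reporting a different key ID, the hash changes, and
// that change can serve as the trigger for automated rotation.
func hashStatus(s statusResponse) string {
	h := sha256.New()
	fmt.Fprintf(h, "%s/%s/%s", s.Version, s.Healthz, s.KeyID)
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	before := hashStatus(statusResponse{"v2alpha1", "ok", "key-id-1"})
	after := hashStatus(statusResponse{"v2alpha1", "ok", "key-id-2"})
	// A changed key ID produces a changed hash: the rotation signal.
	fmt.Println(before != after)
}
```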
A
...rotates the real root key within your KMS, whatever implementation you have, right? So yeah, I did plan on that bit being in this KEP, because I thought it was important for rotation; that's why the key ID is in the v2 API, to enable that. But the implementation of that does need it to be observable outside the API server, and maybe folks agree with that storage version hash. I'm not tied to it; it was just the closest thing I saw that existed already.
A
It gave us a good approximation, right? We don't have a status for the API server to shove that data into, but to me, storage version hash actually roughly has that meaning of: how do I store bytes on disk? Well, encryption is part of how you store bytes on disk. That's why I thought it was okay, but...
C
I think Rita already has most of that written under motivation and rotation. We probably just need to add details on the storage version hash and all that, maybe in implementation details or something, and just show examples there, because...
B
So this is mentioned in here, but it's way down in the design and implementation. I purposely didn't want to introduce it too early, because unless you understand enough about this current key ID, you're going to be very confused reading it. That's why it's way down.
B
But this one change actually impacts various use cases and various pain points, and because of that it takes a long time to explain without confusing people. So, coming back here: we don't need to go through this, because you already know the pain points.
F
Just maybe to go back to the summary: I think what's missing is actually a mention of the reference library that we want to add, because at the end of the day, one of the main problems that we were trying to solve was to reduce the load on the KMS, and that's what we will be...
B
Nothing like live document writing. Shall we move on, you guys? Okay.
A
Yeah, I was gonna say, I did see something below that I think I disagreed with, which was, I think, the proposal for how to store stuff.
B
Okay, that way you can give me feedback on how to make this better, right? So, all right. Goals: basically a copy-paste of what's in the summary. And then non-goals: I specifically said we're not here to prevent KMS rate limiting; we're not here to recover any keys if they're deleted; using the transaction ID for auditing. This is copied from...
B
Whatever, yeah, anyway. If there's anything you know beyond this, please add it; this is just what I was thinking about, that's all. All right, so: proposal. I started with key hierarchy because I think it's an important detail to introduce first, so that people understand why we're adding current key ID, and then once they understand how that's being used, we can talk about how it's used for rotation.
B
So first, the proposal is: support key hierarchy in the KMS plugin that generates a local KEK, and add v2alpha1 in Kubernetes to include current key ID and metadata to support key rotation. Current key ID is the KMS key ID, a stable identifier; changes to it trigger key rotation and storage migration. Metadata...
B
It's structured data; it can hold the encrypted local KEK, and it can be used for debugging and recovery. Now, let's talk about the first highlighted item: validation, similar to how Kubernetes labels are validated. This was something mentioned in one of the prior calls, and Anish also brought it up: how do we validate this, right?
B
There is somewhat of a contract, right, between what is returned by the encrypt response and then how it gets passed back to the KMS plugin as part of the decrypt request. So how does the plugin find the encrypted local KEK — the key that is used in this metadata?
A
Right, but this is why it's opaque. That hierarchy is purely an implementation detail of the plugin, so the contract is between the plugin and itself. The API server just says: you give me some unencrypted data that you have somehow figured out how to build without putting secrets in, but...
A
Think about the alternative, right? Say there is a contract with specific fields. What if I'm a newer plugin implementation that basically needs an annotation that tells me my root key, so I can figure that out, but my concept of root key is completely different from whatever we came up with? Now I'm stuck, right? I don't have a field to put it in.
A
So what I'm going to do is take an existing field, put in a pipe character, put the first datum on one side and the second on the other, and just hack it in, right? Because even if we give you just raw bytes, you could put anything you wanted in there. All we're saying is: let's not do that. Let's give you a map that's like labels. That way, if a human being ever tries to read this proto on disk, they can see high-level structure and kind of make sense of it.
A
What was the name of that Google thing that we originally looked at that had key hierarchy support? What it did was encode everything into a flat proto array at the end, and that's what it passed around, because it's like: I know you can store bytes, here's some bytes, right? And I was like, that's exactly what I don't want — flat bytes — because they don't have any structure to them when you observe them, right? So...
A
...if we get you to do this semi-structured thing, at least we can kind of read it. So we increase the chance that plugin authors are encouraged to just use separate fields for separate things. That's all I want out of this, because it has to stay opaque to us. And we do need a scary message saying: this is unencrypted, do not put secrets in here. Unless you pre-encrypt them somehow, then it's fine, but don't put secrets in here.
B
And Anish, is the idea that it will be a global variable that the plugin defines, shared across the decrypt and encrypt requests and responses? So we have a less error-prone way to make sure the plugins do the right thing. Yeah.
A
So if we think about what the reference implementation might look like, I would imagine at least two fields being included in the response. One would be whatever its true root key is in the external KMS, and that's to help with lookups; that way, when it sees the decrypt request coming in, it knows what key was the true root key for everything. And then the second thing would be something like "encrypted key," and that would contain the encrypted blob...
A
...that was the local KEK at the time. So it'll look up in its cache and say: can I decrypt this local KEK thing, yes or no? If it can, it'll just do that and be done. If it can't, it'll be like: okay, what did I actually use to encrypt this thing? That way, I can look it up if I need to, and it could also fall back: if that key for some reason isn't there — granted, it should always be there — it could decide to just try the latest key, or try all my keys, whatever. But that's all it needs, right? It just needs those two fields. And correct me if I'm forgetting some field that you actually know better than me on this.
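The two-field idea for the reference plugin being described could look roughly like this. Everything here is a sketch under assumptions: the metadata key names, the cache shape, and the KMS call are all invented for illustration, not taken from the KEP.

```go
package main

import "fmt"

// Hypothetical metadata keys a reference plugin might use; the map
// stays opaque to the API server, which only round-trips it.
const (
	metaKeyID        = "kms.example.com/key-id"        // true root key in the external KMS
	metaEncryptedKEK = "kms.example.com/encrypted-kek" // local KEK wrapped by that root key
)

type plugin struct {
	// cache of encrypted local KEK -> decrypted local KEK
	kekCache map[string]string
}

// resolveKEK finds the local KEK named in the round-tripped metadata:
// hit the cache if possible, otherwise "unwrap" via the remote KMS
// (stubbed below) using the recorded root key ID.
func (p *plugin) resolveKEK(metadata map[string]string) (string, error) {
	wrapped, ok := metadata[metaEncryptedKEK]
	if !ok {
		// The API server echoes back exactly what the plugin handed
		// it, so a missing field is a plugin bug: fail loudly.
		return "", fmt.Errorf("metadata missing %s", metaEncryptedKEK)
	}
	if kek, hit := p.kekCache[wrapped]; hit {
		return kek, nil
	}
	// Fallback: unwrap with the remote root key named in metadata.
	kek := unwrapWithRemoteKMS(metadata[metaKeyID], wrapped)
	p.kekCache[wrapped] = kek
	return kek, nil
}

// unwrapWithRemoteKMS stands in for a real external KMS call.
func unwrapWithRemoteKMS(keyID, wrapped string) string {
	return "kek-for-" + wrapped + "-via-" + keyID
}

func main() {
	p := &plugin{kekCache: map[string]string{"wrapped-1": "kek-1"}}
	kek, err := p.resolveKEK(map[string]string{
		metaKeyID:        "root-key-1",
		metaEncryptedKEK: "wrapped-1",
	})
	fmt.Println(kek, err)
}
```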
A
I mean, I already picked having the plugin just fail, right? Because it's basically a bug in the plugin if it handed off bad metadata. We're going to assume the API server is correct, in that it hands you back exactly what you handed it; that's actually super easy for us to get right. But on the other side, if you never gave us the right data at the beginning, we can't respond with correct data beyond what you told us, right? But I think I'd be very surprised.
A
Well, keys would have to basically be host names; that's roughly the requirement on keys. And then values, I think, can be basically anything, but there's an overall size limit of something like one megabyte, or half a megabyte, which also seems totally reasonable and fine — don't spend more than that. So I think that's probably good enough. And labels are pretty restricted, and I kind of like that; I don't feel the need to be like: you can store...
A
...I don't know, random foreign character sets that I don't understand that are not UTF-8. We just don't feel the need to support that, and honestly, I can't imagine it being too restrictive.
A
Yeah, as far as I know, labels is the strictest map we have. I think it's stricter than ConfigMap and Secret data — the data field has relatively strict restrictions, but I think labels is more restrictive on size and stuff, and it's more restrictive than annotations on the key names. So I think it gives us basically the strictest rules, but nobody really complains about labels, right? It's fine, it's robust, it works. So I can't imagine anyone complaining about this either.
A
And we can validate offline to make sure that I have not misremembered the restrictions on labels, just to be sure.
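Label-style validation of the metadata map might look something like the sketch below. The exact limits are assumptions modeled loosely on Kubernetes label rules and would need to be checked offline, exactly as discussed.

```go
package main

import (
	"fmt"
	"regexp"
)

// Assumed limits, loosely modeled on Kubernetes label validation;
// verify against the real rules before relying on them.
const (
	maxKeyLen   = 253 + 1 + 63 // dns-subdomain prefix + "/" + name
	maxTotalLen = 1 << 20      // assumed ~1 MiB cap on the whole map
)

// Keys look roughly like an optional DNS prefix + "/" + a label name.
var keyRE = regexp.MustCompile(
	`^([a-z0-9]([-a-z0-9.]*[a-z0-9])?/)?[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$`)

// validateMetadata rejects keys that don't match the label-like shape
// and maps whose total size exceeds the assumed cap.
func validateMetadata(md map[string]string) error {
	total := 0
	for k, v := range md {
		if len(k) > maxKeyLen || !keyRE.MatchString(k) {
			return fmt.Errorf("invalid metadata key %q", k)
		}
		total += len(k) + len(v)
	}
	if total > maxTotalLen {
		return fmt.Errorf("metadata too large: %d bytes", total)
	}
	return nil
}

func main() {
	fmt.Println(validateMetadata(map[string]string{"kms.example.com/key-id": "key-1"}))
	fmt.Println(validateMetadata(map[string]string{"bad key!": "x"}))
}
```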
B
Are we good with this one? Should we move on? Sorry, I'm rushing a bit because it's pretty long, yeah, I know.
B
Thirty minutes left; sorry, I'm not trying to... you know what I mean. All right, we're good. Status request and response: this new status API returns version, health, and current key ID. The current key ID in status can be used to compare and validate the key ID stored in cache and the latest encrypt response's current key ID, to ensure they all line up with the status response, and to update it if it's stale, during storage migration.
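The comparison just described could be sketched like this; the struct and names are hypothetical, but the logic is the one in the text: line up the cached and last-used key IDs against status, and treat a mismatch as stale data to migrate.

```go
package main

import "fmt"

// state tracks the key IDs the API server has observed
// (field names are illustrative, not from the KEP text).
type state struct {
	statusKeyID      string // from the latest status response
	lastEncryptKeyID string // from the latest encrypt response
}

// consistent reports whether the observed key IDs all line up with
// the status response; a mismatch means the cached view is stale.
func (s state) consistent() bool {
	return s.statusKeyID == s.lastEncryptKeyID
}

// staleResource reports whether a resource written under
// writtenKeyID needs rewriting after rotation.
func (s state) staleResource(writtenKeyID string) bool {
	return writtenKeyID != s.statusKeyID
}

func main() {
	s := state{statusKeyID: "key-id-2", lastEncryptKeyID: "key-id-2"}
	fmt.Println(s.consistent())            // views line up
	fmt.Println(s.staleResource("key-id-1")) // old data: migrate
}
```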
B
Generate... oh yeah, this is observability now. So are we good with this one? This is a lot.
B
Yeah, this is basically explaining how the key ID returned by the status response is used to validate between the cache and whatever key the latest encryption happened to use, to make sure... okay.
A
Okay, now I get it. So this isn't exactly what I'd imagined. I had imagined that the health checker implementation — the one that wires up to the health endpoint — would be part of this whole one global internal object that contains the KMS status.
A
So, for example, the level does not change because of any encrypt or decrypt response, because those can happen in arbitrary orders, in parallel, and we don't want to accidentally go back in time, right? So basically, your level only follows status, and then encrypt and decrypt record point-in-time results and compare to that level, and that's sort of it. Does that make sense?
A
I think it's okay not to nail down exactly this, because this is really specific. I think, as long as we say that we are level-driven by status — so that's what drives our level up — and then during storage migration, or basically during storage operations...
A
...did that match the level that status is currently at, yes or no? If it doesn't match, you say you're stale. That's basically it, right? And this is distinctly different from the staleness check that exists today. Today, staleness is only possible if you go down the list of encryption configuration options in your static config. The only way to get staleness is if you basically use something that's not index zero.
A
That's why one of the particular requirements of this implementation is that there can only be, in memory, one piece of code that talks to the KMS plugin, because it can only have one understanding of its level. Today, the health check implementation and the storage implementation are separate — two pieces of memory that don't know anything about each other — so they can diverge in their understanding of levels. That's why I'm insistent on this.
A
Yeah, because we're saying that that will happen at some deterministic period, right? Like every 10 seconds or whatever. There's only one of those requests at a time, and they promote the level up; no matter what the value is, they just say that this is the current level. Whereas encryption and decryption responses happen arbitrarily: a request that started at time...
A
Zero
might
take
five
seconds
to
complete
because
it
just
happens
to
be
a
bad
request
or
something,
whereas
the
one
that
started
at
time,
one
to
complete
before
it
and
that's
what
I'm
saying
right.
We
don't
want
to
accidentally
go
backwards
or
forwards
in
time
in
weird
ways
that
we
don't
want.
We
just
have
one
meaning
of
going
forward,
and
then
everything
else
has
to
just
believe
that
that's
true.
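The single-writer level idea argued for above can be sketched as follows — a minimal illustration under assumptions (names and structure invented): only the periodic status poll promotes the shared level, while encrypt/decrypt results are point-in-time observations compared against it, so a slow response that lands late can never move the level backwards.

```go
package main

import (
	"fmt"
	"sync"
)

// kmsState is the single in-memory view of the plugin. Only the
// status poll may write the level; everything else only reads it.
type kmsState struct {
	mu    sync.RWMutex
	level string // e.g. the key ID reported by the last status poll
}

// promote is called only from the one periodic status goroutine, so
// there is exactly one meaning of "moving forward".
func (s *kmsState) promote(level string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.level = level
}

// observe records a point-in-time encrypt/decrypt result and reports
// whether it matched the current level; it never changes the level,
// so out-of-order responses cannot drag the view back in time.
func (s *kmsState) observe(keyID string) (fresh bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return keyID == s.level
}

func main() {
	s := &kmsState{}
	s.promote("key-id-2")
	fmt.Println(s.observe("key-id-2")) // current
	fmt.Println(s.observe("key-id-1")) // a late, stale response
}
```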
B
Glad you can join us; we're just talking about current key ID and how it can be used to trigger a storage migration, and I guess a rotation. Are we good with this section? Then again, this is still not implementation detail yet; this is still, yeah...
B
High-level proposal, yeah. Okay, all right. You all know what this is; nothing new here: enable fully automated rotation for the latest key in KMS. And I put in a prerequisite: the encryption configuration is set up to always use the latest key version in KMS, and the values can be interpreted dynamically at runtime by the plugin to hot-reload the current write key. Rotation process sequence: record the initial key ID across all API servers.
A
Yeah, I think that's a good high-level thing. The only bit, within a high-level description, is that steps one and three are infrastructure-dependent. So we cannot give you steps that will work on all infrastructure, because it depends on how load balancers, IPs, and basically API server identity or networking identity work in your infrastructure.
A
Yeah, so, you know, in OpenShift there is a guaranteed way to find the address of every API server. That's not true in all the cloud providers.
A
Yes, yes, we should. I don't have a strong opinion on the string that we choose.
B
In the reference implementation — maybe that will help. So what we said was: no change in the API server; keep the one-to-one DEK mapping. The KMS plugin generates its own KEK in memory; the real KMS is used to encrypt the local KEK; the local KEK is used for encryption of the DEKs sent by the API server; and the local KEK is used for encryption based on policy.
B
Since key hierarchy is implemented at the KMS plugin level — by the way, the terminology I've been using is "KMS plugin"; I know we go between plugin and provider, but I changed to plugin, because provider can mean something else — anyway, it should be seamless for the API server. Whether the plugin is using a local KEK or not, the API server shouldn't care; same behavior.
B
What is required is for the API server to be able to tell the plugin which key — local KEK or KMS KEK — it should use to decrypt the incoming DEK. To do so (okay, now we're getting to the fun part), upon encryption the KMS plugin should provide a UUID identifying the local KEK, and the encrypted local KEK, as part of the metadata field in the encrypt response.
C
That's what I added as a comment, right? The caching logic: what we did in the PoC was very similar to the caching logic that the API server does today, where the cache key is actually the encrypted DEK. So in the case of the KMS plugin, the cache key will be the encrypted local KEK, and when it does a lookup, it basically just gets the encrypted local KEK from metadata and sees if that entry exists. The value is the decrypted KEK.
A
So what I want to see here is a similar implementation to what we do for proto storage. Unencrypted proto storage for regular objects works like this: the object is prefixed with `k8s` and a couple of zero-ish bytes, something like that; that prefix is stripped, and then the object within it is a runtime.Unknown that basically has the proto object inside it. That's how Kubernetes does proto storage today.
A
Yeah, totally, I can totally do that. But the idea is that I don't want to have crazy prefixes with colons and stuff. We're going to store it in a nice structured way as protobufs, and one of those fields will be the encrypted blob, which is just a set of bytes. But all the other stuff, including the metadata from the KMS plugin and the key IDs and all that...
A
...whatever it is, it's going to be stored nicely and structured, so you can see it, because today it gets shoved into this crazy spaghetti prefix thing. And then we'll be able to add new fields very easily, right? If we decide to add a new one, we just add it to the proto and make sure the API server handles it not existing, or whatever.
D
One question: basically, when you encrypt with AES-GCM, you need the nonce, and it's quite common to put the nonce in front of the encrypted blob, because the nonce size usually doesn't change. So theoretically you could make the nonce smaller or bigger, and it would be a variable that you would need. I think we're currently doing that with AES-CBC, where we see how long it is, but we don't need it for AES-GCM, because it's more or less a de facto standard to have 12 or 16 bytes.
D
The nonce size is a standard that theoretically could be different, but most programming languages — or at least Java and Swift, I think — don't even support changing the size. So if I just put it in front, is that okay, or should we have a dedicated proto type for this, where we have the encrypted blob and the nonce that was used?
A
I'm not 100% sure off the top of my head; I could see arguments in both directions. My initial gut reaction is, where possible, to use separate fields and structure.
A
Otherwise you can go off the end in some weird way, and I just want to get away from that. I want to use nice structure. Today, writing tooling for this stuff would be atrocious, right? Because you would be very carefully picking bytes and trying to figure out where everything is, and there's no way a human can look at it and get a sense of it.
A
I want to get away from that and be able to say: here are the fields; you can see them, they mean separate things, and they have good names and all that kind of stuff. And I realize protobuf is not the best format for having good names, because you need the schema and the value together to get the full detail, but our schema will be documented, right? So it's not as bad, but yeah.
B
I'll do it later. I hate that Google just shows every one of my contacts. Thanks, Google. Yep.
A
So
a
quick
question:
how
so
today,
all
the
encryption
arrest,
stuff
shoves
in
the
the
name
of
the
encryption
provider
into
the
prefix.
A
Maybe that's okay, but basically I'm curious how folks think this should be represented. Should we continue on this path and statically encode the name into the proto? Because if you don't encode the name in the proto, when you encounter one of these new proto objects that's encrypted protobuf, you'd basically have to try every v2 alpha plugin that's currently configured. In reality that's probably going to be one, but if you have more than one, it's basically going to have to ask each one: do you understand this?
A
So that's a bad failure mode, but it's also, to me, kind of yucky to basically say: hey, this arbitrary string that you picked in your encryption configuration — you're not allowed to change it, because it gets stored as part of the data in etcd, and there's no way to retroactively realize that whatever name you used was bad or confusing or whatever.
F
We do need it at some point anyway, to identify at least which KMS was used for the encryption of this particular data. At some point we discussed that we wanted to add some CLI tools to be able to introspect the data that is stored in etcd, and maybe decrypt it manually if needed.
A
Yeah, I mean, I guess maybe one thought around this could be: does this configuration belong within the static encryption configuration for the API server, or should it actually be part of the status response from the KMS plugin itself? Like, does the KMS plugin get to decide its name?
A
Again, it still has the same problem: once you decide your name, you can't change your name, right? Because it's not...
A
Yeah, but you could argue that maybe this migration shouldn't occur; maybe you should just pick a good name and be okay with never changing it. I don't know. I don't like telling people "don't screw this up, because there are no take-backsies." Or rather, take-backsies are really hard: it's basically configure a new plugin, which is the same plugin with a different name, and do storage migration from one to the other. But this is the hard storage migration, where you do have to do the full orchestration of your plugins and everything. It's a lot of work to change this name, basically. But maybe this is not a real problem.
B
So the rest, honestly — I think the rest are pretty much just an extension of what we already discussed. This is why I wanted to talk about the high level first, because I feel like, if we mostly agree with the high level, the detailed stuff shouldn't be that bad, honestly. So please review the rest of the doc, and then for the highlighted stuff...
B
You
know,
for
you
know,
mo
come
back
to
update
this,
and
then
you
know
naming
things
like
what
what's
the
feature
flag,
I
don't
know,
but
we
need
to
come
up
with
a
name
it.
You
know.
I.
A
The final result of the API, which is encryption at rest, right? PodSecurityPolicy to Pod Security Admission is actually incredibly different, because the policy is different: it's static, it's versioned, it's based on labels. It's a whole different approach to policy enforcement on pods. This is basically saying: we still want the same encryption at rest at the end; we just want to be able to run it in production.
A
...without blowing up your environment. And we don't want to fix the API in place; we just want to make a new API, get it right, and ask people to migrate to it. Also, how do you all feel about a goal of deprecating the v1beta1 API three releases after this GAs?
A
We could also not do it and keep the old one, but I kind of feel like we should. I feel like we should give lots of time; I'm okay even with six releases or nine releases, right? Multiple years, if people really want that. But I want to actually say that we don't have perma-beta APIs; beta APIs do go away, and this beta API did not graduate. Here is its replacement; it does every single thing that the old one did, but better. Please migrate.
B
Okay, great discussion; thank you for all the feedback. Please go and comment more, but I would like to set a date for pushing the PR through, just to get people to start reviewing outside of this group. So can I ask that everybody... should we say, let's create and push the PR by end of the week? Does that sound good to you?
B
Anyway, yeah.
B
Okay. Again, please help review this; that way, hopefully it will reduce the time other folks have to push back on the PR, right? All right, so I will ping you before end of Thursday with just another reminder, and hopefully Friday we can get the PR out. All right.