From YouTube: Kubernetes SIG Auth 2022-01-18 KMS #4
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-01-18 KMS #4
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
All right, hey everyone. This is the KMS meeting for January 18th, 2022. This is the fourth KMS meeting in the series, so we're just continuing the discussion on how we want to frame versioning and GA requirements and such for the KMS KEP. So I'm trying to remember, kind of, my thought initially, exactly where we were right now.
A
Yeah, right. So right now the initial KEP document does not have graduation criteria, because it's not a Kubernetes REST API, and what we're proposing is basically just one minor, optional additional field that doesn't have to be persisted anywhere. You know, it's just up to the KMS to consume it if it wants to.
B
So I had commented on Damien's point about that. I think having the audit ID there is useful, but given that there are requests that don't have an audit ID generated, I feel like in some cases this wouldn't be quite useful for what we need in terms of correlating.
C
Can I ask something about that? By saying that it doesn't cover all the cases, do you mean that for some of the operations, since they're writing the cache, we don't have an audit ID on the KMS side, or is it something different?
B
No, it's something different. Let me pull up the... It was something that Mike brought up, right? Yeah, so.
A
Yeah, so he's saying that, you know, when the API server starts up, to fill up the watch cache it'll do a full list on all secrets and all API resources, and at that point that will cause it to cache data encryption keys in its storage layer. So later on, when users are actually getting those secrets, you won't see it.
A
And to me it might make more sense to pursue the audit annotations approach, which is to actually include the KMS UID as an annotation within audit logs. So if an audit log is generated, and it happens to also have a KMS event occur, then the audit event will tell you about that, so you can search for it if you want to. But the...
B
It's not useful, right? I mean, we could try to add it, but it's not going to cover all our bases. And just to clarify, what Mike said was: the initial load is not audit logged but does trigger KMS calls, right? Yes. So in which case we will get the...
C
Right, yeah, that may be kind of a topic for another time, as I said, but how come requests that are made by the API server to, let's say in our case a KMS, but in some other cases it might be an aggregated API, how come these requests aren't logged? What is the actual reasoning for that? Because to me, whatever the operation is, as long as it goes from one point to another, even if it's made by the API server, we would want to track it.
A
That's... right, this is the storage layer being the storage layer, right? It doesn't go through the Kubernetes API because it implements the Kubernetes API. Yeah, right, right. So that's kind of the gist. We do try very hard in the Kubernetes code base to go through the front door, which is to go through the Kubernetes REST API. But you can't do that everywhere, because you can't implement the REST API without talking to storage directly at some point, right? ...know of the service to talk to. Does that make sense? Yeah.
A
Right, so I don't know if we want to have the audit log have an audit annotation at some point with the KMS UID.
A
That would probably need to be in this KEP somewhere, right? As a thing that happens at some point in the future, I guess. And it doesn't have to be in the initial implementation or whatever, but it's probably worth calling out to see if people...
D
Yeah, I think that's where I was a little confused. Should we add it if we still don't know whether we're going to use it, or can we add it at a later stage? Like, this part focuses on what we are starting with, and then can we keep updating the KEP with newer things before we say it's completely done? So that at some point, after investigation, we think the audit ID annotation is something we can do.
A
I would want that to happen, but my experience with the enhancements repo has told me that that's not how the enhancements leads and others want this to work. Because, for example, with the addition to the CSR API, I just tried to update the existing certificate signing requests KEP with the details of it, right, because it was sort of all there, and I wanted to keep that KEP as a complete reference of, like, well, this is what the CSR API is. And they were like: no. This KEP is implemented; this KEP is done.
A
You have to make a different KEP to talk about your, effectively, five-line change today, right? It has to be completely isolated and tracked on its own. And I was like, okay, right. And, you know, I remember Mike being like: really, you have to do that much work to do this tiny thing?
A
Yes, the KEP is significantly more work than the code. So I would be very hesitant to say that it would be okay to say: here's a KEP, and we did the UID stuff and that work is done, and now we can just go update it to add more stuff about what we now want to do with this UID. Because I think it's very hard for them to track, right?
D
Yeah, the way I saw the KEP was: it's like a design and implementation noted down and basically discussed before we do it, and that's why it doesn't fall into any of those categories of graduation. And then, if we add something and don't do it, are we allowed to take it out later? Like, let's say whatever we're investigating turns out not to be possible to implement. At that point we can't change it to implemented, but are we allowed to remove it from the KEP?
A
I mean, so I've certainly... Like, for the client-go credentials plugin: when I took that KEP over, you know, I rewrote significant chunks of it, because what had been implemented had significantly diverged from what was in that KEP. And so, you know, I was like: there's no point in this KEP existing in this state. It's just confusing, because it says one thing but the code does another thing. So I think that part is okay.
A
"Now it's not done anymore and thus needs to be tracked" is a little more... Like, if we're going to go from a document that doesn't have any phases to a document that does, I suspect they're going to ask us to make two separate documents: one with the phases that's going to go through with its own criteria, and one that was just a point-in-time KMS UID KEP that said: we would like to add this UID.
D
Okay, quick question, right: so if we don't move it to implemented, then are we allowed to update the existing KEP with newer things that we're adding? Because even now, if we add that in the future we're looking at audit ID, that means we cannot move the KEP to implemented until we do that, right? Because we said in the KEP that we're going to do it.
A
So I mean, I think my thought process would be: we certainly want... For example, Clayton's, like, the only approver for encryption at rest left that still works on Kubernetes, so we would certainly want... and, you know, Clayton's an API reviewer too, so we can ask for his feedback on how we should frame any of this, and same with Jordan or Mike, if anyone has opinions. I think I would be okay with just saying that this KEP will go from, like, when it merges.
A
I think this one should be relatively small, right? Like, really, I think it is small; when I was scanning it, it seemed pretty small. The other thing I wanted to ask, more related to implementation: Anish, did you go into details about how we're going to guarantee that this UID is available in, like, the Kubernetes logs?
D
I mentioned that we will log it in the API server as part of the changes that we are making. So I said, for every request that we are logging for KMS, and also the additional logs that we add, we will add the UID as a key to the log.
A
I don't think that answer is quite what I want, which is: how do we guarantee that this persists over time, right? Over changes. As we go work on other things related to KMS or whatever, how do we guarantee that those things have this?
A
The closest I can think of is: from the perspective of the storage API, or the internals of the storage API, this is represented as a transformer, and I think the transformer takes in a context, not a Go context but an encryption context, and I think what we might want to do is put the UID in there and then write a wrapper implementation of the transformer whose entire purpose is to always make sure that that UID gets logged, right, along with anything else...
A
...that we think is valuable as context. But I think, if we did it that way, we'd have a pretty strong guarantee, because we're just doing it from the outside of the transformer layer, just saying: this is going to happen. It doesn't matter if the rest of the layers remember to do that, as long as they log the error itself, or whatever is going to be in there.
A
Yeah, I think we need at least a little bit more, right? Yeah. So right now I'm looking at it: the transformer interface has TransformFromStorage and TransformToStorage, which take the data in, and the second argument is a context, which today only has the authenticated data, which I believe is the path to that key in etcd.
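A minimal Go sketch of the wrapper idea described above: a transformer that guarantees the KMS UID gets logged no matter what the inner layers do. The interface and context shapes here are simplified stand-ins, not the real k8s.io/apiserver value transformer API.

```go
package main

import (
	"fmt"
	"log"
)

// Context mirrors the idea of the storage value context discussed
// above: today it only carries authenticated data (the etcd key
// path); here we also thread through a KMS key UID. Illustrative only.
type Context struct {
	AuthenticatedData []byte
	KMSKeyUID         string
}

// Transformer is a simplified stand-in for the storage transformer.
type Transformer interface {
	TransformToStorage(data []byte, ctx Context) ([]byte, error)
	TransformFromStorage(data []byte, ctx Context) ([]byte, error)
}

// loggingTransformer wraps any Transformer and guarantees the KMS UID
// is logged on every failure, regardless of what the inner layers do.
type loggingTransformer struct {
	inner Transformer
}

func (t loggingTransformer) TransformToStorage(data []byte, ctx Context) ([]byte, error) {
	out, err := t.inner.TransformToStorage(data, ctx)
	if err != nil {
		log.Printf("encrypt failed: kmsKeyUID=%q path=%q err=%v", ctx.KMSKeyUID, ctx.AuthenticatedData, err)
	}
	return out, err
}

func (t loggingTransformer) TransformFromStorage(data []byte, ctx Context) ([]byte, error) {
	out, err := t.inner.TransformFromStorage(data, ctx)
	if err != nil {
		log.Printf("decrypt failed: kmsKeyUID=%q path=%q err=%v", ctx.KMSKeyUID, ctx.AuthenticatedData, err)
	}
	return out, err
}

// identity is a no-op inner transformer used only for the demo.
type identity struct{}

func (identity) TransformToStorage(d []byte, _ Context) ([]byte, error)   { return d, nil }
func (identity) TransformFromStorage(d []byte, _ Context) ([]byte, error) { return d, nil }

func main() {
	var t Transformer = loggingTransformer{inner: identity{}}
	out, err := t.TransformToStorage([]byte("secret"), Context{
		AuthenticatedData: []byte("/registry/secrets/default/foo"),
		KMSKeyUID:         "uid-123",
	})
	fmt.Println(string(out), err)
}
```

Because the wrapper sits outside the transformer chain, the guarantee holds even if inner layers forget to log the UID themselves.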
A
I think once we have that... You know, I'm assuming we've enumerated our motivations and stuff, which I think are pretty straightforward: I see a thing in the KMS, and I don't want to sit there and look at timestamps inside of API server logs and guess. Oh, I...
A
I think, when I scanned it, I had seen something about, like, the name of the object or something like that somewhere in the KEP. So for the gRPC API, all we're saying is we're going to pass in the UID, right? So I think that's totally cool; I think that fits within the intent of that API, which is basically: give the plugin the least amount of data possible for it to do stuff.
A
Yeah, so it would be good to enumerate what we want, right? So I think what we want is group, version, resource, and name. Group, version, resource allows you to get a full location, and then the name pinpoints that location to a particular item, and these are all individual items, right? So that would be enough: you could say this secret in this... Oh, sorry, you need the namespace as well as the name.
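The fields enumerated above could be threaded through to the plugin as something like the following struct; the shape is hypothetical, not the actual gRPC message.

```go
package main

import "fmt"

// RequestMetadata captures what the discussion above enumerates:
// group/version/resource locate the API collection, and
// namespace/name pinpoint the individual object. Hypothetical shape.
type RequestMetadata struct {
	Group, Version, Resource string
	Namespace, Name          string
	UID                      string // per-call UID for log correlation
}

// String renders a stable identifier suitable for logs.
func (m RequestMetadata) String() string {
	return fmt.Sprintf("%s/%s/%s %s/%s uid=%s",
		m.Group, m.Version, m.Resource, m.Namespace, m.Name, m.UID)
}

func main() {
	m := RequestMetadata{Group: "", Version: "v1", Resource: "secrets",
		Namespace: "default", Name: "db-creds", UID: "uid-123"}
	fmt.Println(m)
}
```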
A
So yeah, all of that would be good to enumerate, because then it would be clear to, like, Clayton and Jordan and whoever else wants to review this. And also, I think we have SIG Auth tomorrow, right? Yeah. So I think we have enough that we can talk about the specific details tomorrow, and maybe even bring up the oddities around fitting this within the enhancements repo. Or we can just say that we want to basically have this...
A
...this bit of the KEP just go GA as it's implemented, because all it is is just adding this thing, and see what people think. And if people think no, that's weird or unexpected, then we basically have to add a feature gate that says you opt into this behavior. It'd be very strange, but it is what it is.
D
Well, I'll add these today, so it'll be up for review again. So hopefully we can all do one more round of review before the SIG Auth call tomorrow.
A
I have been talking a lot. Do other folks have comments?
E
Do you want to talk also about other topics, like the performance topics, or would you just like to focus on... Okay, I see a lot of nothing. Okay.
A
I'm totally cool with that. Are we done talking about observability? Just want to make sure. I don't have anything else to say this morning.
C
Just one thing I'm wondering: maybe, since the main focus will be the ID with the log, would it make sense to move the addition of metrics as a library to beta, so that at least we have one focus for this release, and then for the next one, for the graduation, we have another focus?
C
So there will be different graduation phases, as we've seen, and you mentioned that at first we didn't really know where to put the audit ID, whether it was ever supposed to live in this particular KEP. And since we wanted to add the metrics library: should we still consider doing that for this release, or should we push it to another one?
A
I mean, what I'm understanding from our framing is that it would be a different KEP. Okay.
C
No, no, no, I'm more saying that, as part of the observability improvements, one thing we wanted to do is introduce a library to provide generic metrics in both Prometheus and OpenTelemetry formats, right, that an implementation could use. And I'm wondering, if it's part of the scope of this particular KEP, whether we should even consider it, like, to be done at some point later, or whether it should be done in another...
C
I don't think so. I don't even think we need a KEP for that, because at the end of the day, we'd just introduce kind of an implementation which we think is a reference, but nobody has to fully implement it, or even, they don't need to abide by our standards.
A
I think we might still need at least some part of a KEP, because it would become like a SIG Auth subproject, sort of, within our example repos and stuff. So yeah, I think it would probably be part of a larger KEP related to...
A
...some aspect of KMS, whatever we want to make it part of. Or it might be, like, a mini KEP on the side related to other KEPs, basically, which is: if we have a performance one and an observability one, both of them could have beta requirements that say you need that implementation to be done, right? Basically, it's like a dependent of that.
C
Yeah, yeah, this would make sense. The only problem I have with how we are proceeding, and it's something I mentioned on the KEP, is that we are basically trying to solve how we can correlate the data once we've noticed a problem, but we currently have no way to actually detect that an operation failed or that something happened. We don't have the metrics that surface this information today. I'm not sure how users are currently able to notice issues, but as far as I can tell, it lacks information.
C
Let's say, even with not that much information, because you can't really know exactly what happened, you're at least aware that a decrypt or an encrypt operation failed. So that can be surfaced as a metric, and then maybe have another effort that would provide a reference library to provide metrics, but on the KMS provider side, not on the API server itself.
A
I don't think it's a bad idea to say that in this KEP. Like, within the production readiness review questions there is stuff about: well, how does the cluster admin or the administrator observe that the system is working? And so we're improving that by saying that we're going to have better logging, right? But it explicitly says in the PRR questions that logging is, like, the absolute last resort: can you come up with something slightly better? Right, so, so yeah.
C
Let's see, I can help with that. I can provide you some metrics that could be useful, tomorrow, if you want.
D
I think if we can define what metrics we want to add, maybe we can do that. That's fine. Like, Damien, if you're interested, you can actually just open a PR...
D
...against the KEP branch, and then we can just merge that and it will show up. But if we don't have it, I think that is also fine. I'm just saying we don't have to block on it. If we don't have it, we can always add this as part of the reference implementation. We can say: hey, we are introducing these metrics in KMS, and then we are also providing a way so you can consume them in your KMS plugin, with the reference implementation.
A
So I think we've still got 15 minutes or so, if you want to talk about performance stuff. Do you want to leave that? Does anyone have strong opinions?
A
If anyone has the doc... I didn't want to share this screen; I don't have it right now, and the computer is in a weird state from having been out of office, but I can, you know, make folks co-hosts if anyone wants to share.
D
It makes a single encrypt call or decrypt call for every single secret that it has, and when it wants to update the cache it's calling sequentially, and this means there's a burst of calls, right? And then, at this point, your KMS will hit a rate limit, and then, because the API server keeps retrying, your...
D
...your KMS goes into, like, some kind of jail where it cannot recover, or basically every call is failing. And then, when the API server cannot warm up its cache because these operations are failing, it will fail all the health checks and all that, and then it will just keep restarting. So I think the first one was that use case: how do we optimize this scenario? Because if you set a really large cache size, then when the API server is running...
D
It's
really
helpful
because
not
all
calls
are
made
to
kms,
but
if,
by
chance
your
api
server
restarts
then
you're
basically
like
hit
like
now,
you
cannot
recover
at
all
and
then
I
think
it
also
becomes
worse.
When
you
have
more,
I
mean
if
you
have
multi-master
scenario
so
like
if
you
have
aggregated
api.
So
if
you
have
three
replicas
of
the
api
server,
then
each
one
is
trying
to
warm
up
its
caption.
A
Yeah, so I think today the recommended approach for, like, larger environments with a lot of encrypted data going through a KMS is to explicitly configure your API server health check with the parameters that tell it not to care about the KMS part. So basically: check all the other health checks, but skip that one, because it's just not worth it. Which obviously is silly, right? The point of the health check is "is the system healthy?", and you're like: no, no, I just want, like, mostly healthy.
A
I'll just take it semi-functional, that's fine! So I think some open questions around this are... So today, you know, we have that one-to-one mapping between data encryption key and object in etcd.
A
The fewer data encryption keys... What it comes down to is that today, if you're doing storage migration, which is, you know, part of... we have a SIG API Machinery subproject, which is the storage migrator, and just the general concept of storage migration, right, which is basically: read all the data from the Kubernetes API and post it back to the Kubernetes API, completely unmodified, for all objects. And then look at the resource version of all the objects that are returned. For the objects that have the same resource version...
A
...you know that they were already up to date with the latest and greatest, and for the ones that do have a resource version change, you know that they needed to be updated and they were. But the gist of it is: that is the process that sort of has to work correctly for you to rotate your keys, right? And it does today. If you follow those steps today and you do the rotation dance, everything works fine.
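The migration loop described above can be sketched in Go against a hypothetical client interface: read everything, write it back unmodified, and use the returned resource version to tell which objects were actually rewritten (re-encrypted) by the storage layer.

```go
package main

import "fmt"

// object is a minimal stand-in for a Kubernetes API object.
type object struct {
	Name            string
	ResourceVersion string
}

// client abstracts the two API calls the migration needs. Hypothetical.
type client interface {
	List() []object
	// Update posts the object back unmodified and returns it as stored.
	Update(o object) object
}

// migrate performs the no-op rewrite and reports which objects were
// actually rewritten: their resource version changed, meaning the
// storage layer re-encrypted them under the current key.
func migrate(c client) (rewritten []string) {
	for _, o := range c.List() {
		after := c.Update(o)
		if after.ResourceVersion != o.ResourceVersion {
			rewritten = append(rewritten, o.Name)
		}
	}
	return rewritten
}

// fakeClient simulates a cluster where only "stale" needs re-encryption.
type fakeClient struct{}

func (fakeClient) List() []object {
	return []object{
		{Name: "fresh", ResourceVersion: "10"},
		{Name: "stale", ResourceVersion: "3"},
	}
}

func (fakeClient) Update(o object) object {
	if o.Name == "stale" {
		o.ResourceVersion = "11" // storage rewrote it under the new key
	}
	return o
}

func main() {
	fmt.Println(migrate(fakeClient{}))
}
```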
A
The approaches I've seen where, like, a DEK is reused for some time had failure modes where they would be like: yeah, no, everything is up to date. But it hadn't actually done the rotation, because it had cached the DEK for some amount of time. I'm trying to remember exactly the flow; it's been so long since I looked at that code that it's a little flaky in my head right now.
D
Yeah, I think some of the different options proposed: one was DEK reuse, right, like use it for a certain period of time, say five minutes, and then after that regenerate a new DEK and then start using that for encryption. And then I think there was also an option where someone suggested you could use a DEK per namespace.
D
So that was the second one, and then the third one was: rather than duration, use a certain DEK for a certain number of operations. So, say one DEK is used for a thousand encryptions, and then once it hits that, you basically use another DEK for the next thousand, and so on. So I think most of it was around reusing it, but how to reuse; there were different proposals: namespace, duration, counter.
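The counter-based reuse proposal can be sketched as a small policy object; duration- or namespace-based variants would only change the rotation condition. This illustrates the policy only, not real envelope encryption.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// dekPool illustrates the count-based reuse policy described above:
// a DEK is used for at most maxUses encryptions, then replaced.
type dekPool struct {
	dek     []byte
	uses    int
	maxUses int
}

// current returns the active DEK, rotating to a fresh one when the
// use budget is exhausted.
func (p *dekPool) current() ([]byte, error) {
	if p.dek == nil || p.uses >= p.maxUses {
		fresh := make([]byte, 32) // AES-256 key size
		if _, err := rand.Read(fresh); err != nil {
			return nil, err
		}
		p.dek = fresh
		p.uses = 0
	}
	p.uses++
	return p.dek, nil
}

func main() {
	p := &dekPool{maxUses: 1000}
	p.current()                 // use 1 generates the first DEK
	for i := 0; i < 999; i++ {  // uses 2..1000 reuse the same DEK
		p.current()
	}
	p.current() // use 1001 forces a rotation
	fmt.Println(p.uses)
}
```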
D
Actually, I think that might be in the issue. I think there was an issue link that I should...
E
So this is something I mentioned in the document too. So I was a little bit...
E
...cautious with reusing the keys, and then I kind of jokingly said: well, if we would use the DEK to encrypt other keys, so it would kind of be a local KEK, this would most probably be fine. So I think this was just a diagram to show that you have a layout: this is the remote KMS and you have a local KMS, and then you have the data encryption keys.
A
Yeah, and I think what it gets down to is, like: if I do a storage migration, I have to be able to tell the system somehow that it cannot use the local KMS. It has to go all the way down to the real KMS and get it to do the work associated with that, to make sure everything is up to date with the latest foundation, you know, the sort of foundational keys that are being used to encrypt everything. And that's where you get... but, like...
A
Basically, we're hitting one of the canonical problems of computer science, right, which is cache invalidation. How do you know when the right time is to invalidate your caches so everyone is up to date? And I don't know... I think that's my head at the bottom of the screen right now; I just wrote that down.
A
Okay.
That
was
me.
I
think,
talking
about
using
citadel
to
play
around
with
stuff
too,
because
one
of
the
things
that
I
think
still
was
never.
A
...defined is: what is the actual cost of running a KMS? Like, what is the actual latency and performance impact of just a KMS, right? And Citadel was, like, a perfect way to test that, because all it did was only ever have a local key encryption key that it held in memory, so it never made any network calls. So the only thing that you were measuring is: you have an API server and you have Citadel co-located with it, running over a Unix domain socket.
D
Yeah, I think there are two aspects of it, right? One is the round-trip time, so basically how long the operation takes, and then also the number of calls it generates, like the 100k cache size. I think with cloud providers, if you actually have over a thousand secrets, I think it's really hard to actually restart your API server and not have things crashing or hitting rate limits, unless the KMS plugin is keeping the encryption key in memory. Like, if it's making network calls, yeah, I think it's just...
A
If we had a KMS implementation that internally generated a key encryption key, like, locally, you know, went ahead and told the real gRPC API behind it, the Azure Key Vault or whatever: hey, I need you to store this for me, and let me get it back at some point, right? And then it was going to locally use it. Like, basically: is this a problem that can be solved with some amount of key hierarchy within the KMS plugin itself?
A
Let me, you know, fetch that back; let me make one decryption call to decrypt it back into its full state. Now I can decrypt all the data encryption keys that are being sent over the gRPC API with this one, locally; I don't have to make any network calls, right? And then it sort of becomes a KMS policy decision: how much are you willing to use that key encryption key?
A
So you can then basically say: well, I don't know, I can use it for some amount of time, I can use it for a certain number of operations. But I think that does significantly reduce the problem, right? If that capability is in the KMS. Or am I misunderstanding, am I missing some scenario?
C
Today the users have control of the KEK. If they want to create a new one, or do a rotation every, like, seven days, they can do that on the cloud provider. But if we have a local KEK, we need to define how we change it. So we will have to have a way to notice that the KEK on the server side changed, for us to change the local key.
C
I think that's the bare minimum that we should do, to leave the users kind of in control of the key.
B
I also want to raise that the reason why people want to use a remote KMS is because it comes with all the industry standards and all the policies and all the compliance stuff, right? So now that we're starting to have a local one, I don't know what that means as far as compliance and security concerns go. Go ahead.
A
So is that any different from the fact that we already have a hierarchy with the data encryption keys, right? We, the API server, generate keys in a way that the KMS cannot influence, and we use an algorithm that the KMS cannot influence, and a storage format that the KMS cannot influence. And then we just hand you the encryption key and say: you have to store this; give it back to me in a format that I can hold on to, that's opaque, right?
B
Now, this local one: is it on disk or in memory? No?
A
It
would
only
be
it
would
be
generated
in
memory
like
so
imagine
the
kms
it
starts
up,
it
generates
one
in
memory
and
it
does
one
kms
call
telling
the
backing
the
real
kms
hey.
I
need
you
to
encrypt
this
for
me
and
and
give
me
the
encrypted
blob
right.
So
that
way
it
knows
what
the
encrypted
payload
is
right,
and
so,
when,
when
data
comes
in
from
the
kubernetes
api
server,
it
can
figure
out
like
all
right
like
how
did
I
encrypt
this?
A
It could come back up, generate a new key, and be ready to do encryption; but when it comes time for decryption, it has to basically warm up its cache using the real KMS. But the idea there would be that that cache could be over many, many data encryption keys from the Kubernetes API server, right? Whatever the policy is, whatever is okay for that plugin to do. But it's just one level of hierarchy, separated; it's within the control of the KMS plugin, basically, today.
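A compact sketch of the in-memory local KEK idea just described: the plugin generates a KEK at startup, makes a single remote call to wrap it, and then encrypts and decrypts DEKs locally with AES-GCM. The remoteKMS interface is a hypothetical stand-in for the real provider, not the actual gRPC API.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// remoteKMS stands in for the real provider (one network call to wrap
// the local KEK). Hypothetical interface.
type remoteKMS interface {
	Encrypt(plaintext []byte) ([]byte, error)
}

// localKEKPlugin holds a KEK in memory and uses it to encrypt the
// DEKs the API server sends, so steady-state calls need no network.
type localKEKPlugin struct {
	kek        []byte // generated in memory at startup, never on disk
	wrappedKEK []byte // ciphertext of kek, obtained from the remote KMS
	aead       cipher.AEAD
}

func newLocalKEKPlugin(remote remoteKMS) (*localKEKPlugin, error) {
	kek := make([]byte, 32) // AES-256
	if _, err := rand.Read(kek); err != nil {
		return nil, err
	}
	wrapped, err := remote.Encrypt(kek) // the single remote call
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(kek)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return &localKEKPlugin{kek: kek, wrappedKEK: wrapped, aead: aead}, nil
}

// EncryptDEK seals an API-server DEK under the local KEK (no network).
func (p *localKEKPlugin) EncryptDEK(dek []byte) ([]byte, error) {
	nonce := make([]byte, p.aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return p.aead.Seal(nonce, nonce, dek, nil), nil
}

// DecryptDEK reverses EncryptDEK, again without any network call.
func (p *localKEKPlugin) DecryptDEK(ct []byte) ([]byte, error) {
	n := p.aead.NonceSize()
	return p.aead.Open(nil, ct[:n], ct[n:], nil)
}

// fakeRemote simulates the provider for the demo only.
type fakeRemote struct{}

func (fakeRemote) Encrypt(pt []byte) ([]byte, error) {
	return append([]byte("wrapped:"), pt...), nil
}

func main() {
	p, _ := newLocalKEKPlugin(fakeRemote{})
	ct, _ := p.EncryptDEK([]byte("dek-bytes"))
	pt, _ := p.DecryptDEK(ct)
	fmt.Println(string(pt))
}
```

On restart, the plugin would hand the wrapped KEK back to the remote KMS for one decryption call, then resume local operation, which is the cache-warming step discussed above.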
A
If we explore that as a potential choice, then, like, I want to be careful with the decisions we make in the Kubernetes API server, because we make them for everyone, and you don't really get a choice, right? One, because today you don't get a choice about how we use data encryption keys, or the format, right, and that's not really great. And that's actually where some of the pain point comes in: I have a bazillion secrets and they're one-to-one, and I'm calling, as Anish mentioned...
A
If
you
make
100k
incred
decryption
calls
to
your
kms
something's
not
going
to
work
right,
and
maybe
the
kms
plug-in
can
actually
take
a
stronger
role
in
making
that
right,
and
I
would
feel
much
more
confident
in
saying
such
a
statement
if
there
was
a
reference
implementation
that
did
the
thing
that
we
were
saying
instead
of
like
hand
waving
it.
E
Definitely a local KMS is a huge advantage when we have maybe just one KEK, or we have very few calls. But this is quite a big implementation, and I was curious: why don't we have some batch calls for this? Okay, so I have a cache of, say, k keys: send them all at once to the KMS and ask it to decrypt or encrypt them.
A
That is interesting. Maybe initially I can answer: is such a capability present in, like, Azure Key Vault? Could I give it a bunch of data at one time, with maybe some schema for it to figure out which bits are which, like the delimiter, or however the encrypted data is separated, and could it decrypt all of them and hand them all back to me? I actually have no idea.
E
Yeah, but could we just, you know, send them like: okay, so the client, you as the KMS plugin, would set some ID, and behind it the thing that we want to encrypt, and then the cloud provider would just encrypt it and return also kind of a JSON object where you have the ID that we specified, and behind that you have the encrypted part. So we could encrypt several things at once and not have single calls.
A
Yeah, I mean, I think at a high level, though... I think, first off, what you were describing is actually kind of similar to what we were just talking about, right? We're basically trying to build a hierarchy somehow. It's basically: where does the hierarchy live? Does the hierarchy live within the true KMS implementation, or does it live within the KMS plugin? And I think the problem is that Kubernetes is trying to make an extension point.
A
So, I mean, certainly the gRPC API could be expanded to have, like, a bulk encryption. Really, bulk encryption I don't think matters; I think it's bulk decryption we care about. Like, you're rate limited: the Kubernetes API rate-limits how fast clients can talk to it, so I don't really think that's the case here. It's: when I start up and I have to read the whole world, I need that to work, and I need it to work every single time, right?
A
In
that
case,
if
you
had
a
bulk
decryption
capability,
I
think
that
solves
the
problem.
I
just
don't
think
any
kms
that
we
currently
have
like
that
is
in
use
today
supports
such
a
capability,
where
you
send
it
not
just
a
ciphertext
but
a
series
of
ciphertexts
at
once
and
ask
it
to
do
all
of
the
decryptions
and
then
send
all
of
the
decrypted
payloads
at
once.
I
just
don't
think
that
api
exists.
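If a bulk decryption API like this did exist, the ID-tagged request and response pairing E describes might look roughly as follows; these shapes are entirely hypothetical, not any real provider's API.

```go
package main

import "fmt"

// BatchItem and BatchResult sketch the hypothetical bulk API discussed
// above: each ciphertext carries a caller-chosen ID so the plugin can
// match results back up. Not a real KMS provider API.
type BatchItem struct {
	ID         string
	Ciphertext []byte
}

type BatchResult struct {
	ID        string
	Plaintext []byte
	Err       error
}

// decryptOne stands in for a single provider decryption.
type decryptOne func(ct []byte) ([]byte, error)

// batchDecrypt shows the server-side shape: one request in, all
// results out, preserving the IDs for correlation.
func batchDecrypt(items []BatchItem, d decryptOne) []BatchResult {
	out := make([]BatchResult, 0, len(items))
	for _, it := range items {
		pt, err := d(it.Ciphertext)
		out = append(out, BatchResult{ID: it.ID, Plaintext: pt, Err: err})
	}
	return out
}

func main() {
	// Toy "decryption" that strips a prefix, for demonstration only.
	d := func(ct []byte) ([]byte, error) { return ct[3:], nil }
	res := batchDecrypt([]BatchItem{
		{ID: "a", Ciphertext: []byte("ct:foo")},
		{ID: "b", Ciphertext: []byte("ct:bar")},
	}, d)
	for _, r := range res {
		fmt.Println(r.ID, string(r.Plaintext))
	}
}
```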
E
Yes, so this would be... So obviously your solution, your proposal of having a local KMS, is better, but I don't know if it would be a huge implementation effort. I just wanted to ask if it wouldn't be easier to change the API and say: okay, now we would like to offer batch encryption and batch decryption. And before we close out, one thing that's super important:
E
We
should
always
have
boundary
conditions
what
we
allow
the
users
to
do.
For
example,
I
looked
up
that
you
shouldn't
use
the
gc
when
you're
using
asg
sim
use
it
more
than
two
million
times
and
for
cbc
I
tried
to
find
it,
but
it
was
more
esoteric.
So
there
was
no
no
concrete
hard
number
which
you
should
take
to
orient
it.
So
I
would
be
worried
that
the
customer
says
oh
it.
E
I
need
to
be
fips
compliant
and
it
it's
enough
when
it's
once
a
year,
something
like
that
and
then
they
would
wonder
that
that
that
the
database
got
hacked
and
that
there's
somebody
catching
it.
A
Right,
I
think,
kristoff
what
you're
saying
is
no
matter
what
we
have
to
make
sure
that,
if
we're
using
something
like
gcm
or
whatever
encryption
mode,
we're
using
we
stay,
we
don't
let
the
user
have
an
option
where
they
can
go
out
out
of
the
safe
bounds
of
that
mode.
No
matter
what
we
guarantee
that,
and
I
totally
agree
with
that.
Yeah.
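The boundary-condition point can be made concrete as a hard usage cap enforced in code, so no user configuration can push a key past the safe bounds of its mode. The numeric limit below is illustrative only; the real bound depends on the cipher mode and how nonces are generated.

```go
package main

import (
	"errors"
	"fmt"
)

// boundedKey enforces a hard usage ceiling for a key, so no
// configuration can push it past the safe bounds of its cipher mode.
type boundedKey struct {
	uses    uint64
	maxUses uint64 // illustrative bound, set from the mode's limits
}

var errKeyExhausted = errors.New("key usage limit reached; rotate the key")

// use consumes one encryption from the budget, refusing once spent.
func (k *boundedKey) use() error {
	if k.uses >= k.maxUses {
		return errKeyExhausted
	}
	k.uses++
	return nil
}

func main() {
	k := &boundedKey{maxUses: 2}
	fmt.Println(k.use(), k.use(), k.use())
}
```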