From YouTube: Kubernetes SIG Auth 2022-01-25 KMS #5
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-01-25 KMS #5
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: All right, so hey everyone. This is KMS meeting number five, for January 25th, 2022.
B: Yeah... no! No! No! Okay, thank you!
So, as we want to talk: do you want to talk about the KEP first, or do we want to start with the performance topics?
C: We can talk about the KEP first, too. I mean, we talked about it before the recording. Right now, for the KEP, I think Mo and Rita have reviewed it, and then Damien added a few comments, so we've made changes based on that. Right now Mo has tagged Clayton and David to take a look for the final review and approval.
C: What we've done, based on the previous SIG Auth call, is make changes to add a feature flag, which is enabled by default; if the user wishes to disable it, they can do that in the 1.24 release. We have also made the UID field a pointer, so that if the user disables the flag, the data that's sent on the wire will still remain the same.
A: One thing I would say there is that, aside from this, these metrics aren't documented as far as I know. I had forgotten they existed because it's been so long; I felt bad, because I actually reviewed them, I don't know how many years ago, at least part of them, so I should have known they existed, but I'd forgotten. So there's that, but they're not documented. I don't think they go into PRR for the current KEP, but they should be discoverable without reading the source code.
A: Right, and yes, they are alpha, but all metrics are vaguely alpha, so that's kind of unhelpful.
A: Yeah, my gut feeling is that these are just separate things now. We might review the metrics and decide that they're insufficient, and then, as part of whatever further observability work we do, we might say we need more, or we might need existing ones to be better in some particular way.
D
Yeah
my
reasoning
process
was
more
like
in
terms
of
prediction
readiness.
D: I would kind of expect that we go through the whole process that the user would have to go through in production if they had to debug KMS, because this KEP is about observability as a whole. The first thing they would hit should hopefully be metrics, because that's how they would detect that something is going wrong; and then, knowing that, they would be able to use this KEP, with the UID that has been added, to really have more awareness about which operation failed.
D
Why
and
yeah
get
more
insight
based
on
that
to
meet
part
of
the
overall
experience,
but
I'm
not
sure
if
that's
what
they
are
trying
to
measure
with
this
question
or
not.
C
Yeah,
I
think
that
makes
sense
like
the
way
I
viewed
it.
When
I
was
writing,
it
was
like
vr
seem
to
be
very
specific
for
the
new
feature
that
you're
adding
like
how
ready
what
you're
adding
is
in
terms
of
readiness.
But
what
you
are
saying
makes
sense
right
in
the
overall,
like
we
can
say:
kms
observability.
D
Yeah,
the
only
thing
that
I
might
be
aware
of
is
that
if
the
production
readiness
team
comes
in
and
say
hey,
why
are
you
adding
logging
stuff
where
you
don't
even
have
metrics
just
by
reading
like
the
question
that
they
ask
like
we
have
metrics
just
that
it's
not
part
of
the
cap
like
we
don't
have
added
this.
D
We
haven't
added
this
information
yet
today,
and
maybe
that's
what
I
want
to
see
like
they
want
to
see
overall
experience
like
if
we
are
missing
anything
before
starting
like,
for
example,
if
you
start
by
tracing-
and
you
don't
even
have
monitoring
in
the
first
place
and
that's
something
that
they
might
want
to
catch
with
this
production
readiness
stuff.
A: Yeah, I think all of the work we're going to do with this will require it, but maybe we could start off with just the UID stuff, because I think we want to document this new behavior. Right now the KMS documentation within Kubernetes just says how to configure it, and that's it; so within that page, you know, we should.
A
If
you're,
you
know
new
enough
docks,
but
you
know
they
would
tell
you
you
know
if
you're
observing
kms
failures,
you
know
in
your
kms,
then
you
know
you
can
try
to
correlate
by
looking
at
this
uid
in
the
chemist
and
looking
at
these
logs
and
here's
maybe
an
example
log
that
tries
to
help
you
understand.
What's
going
on.
C: Cool. So do you want to talk about performance, or do we have any other questions on the existing KEP?
C: Okay, yeah, so we can talk about performance. I think we touched on it briefly in the last sync-up, on what the issues were; just to summarize.
C: The KMS plugins on one side can get rate limited, and once they get rate limited it takes time to recover; the calls will just keep failing, and because of this the API server could end up restarting, because it's not able to warm up its cache. That is one big performance issue that exists currently.
C
So
for
that,
we've
yeah
and
listed
different
scenarios
that
are
happening
today
and
then
I
think
the
focus
at
least
the
first
one
is:
how
do
we
optimize
the
deck
generation?
And
then
I
think,
a
couple
of
proposals
that
have
been
there
like.
So
this
was
basically
summarized
from
different
github
issues
and
like
conversations
and
also
like
different
suggestions
from
users
was
using
a
deck
for
a
period
of
time,
sorry
using
the
deck
for
a
period
of
time
for
encrypting
secrets.
So
that
means
there's.
C
No
one
is
one
mapping,
but
rather
have
a
time
period
defined
for
which
a
single
deck
can
be
reused.
Then
the
second
one
that
was
proposed
was
maybe
use
a
single
deck
per
name
space.
So
any
secrets
that
you
have
in
a
namespace,
all
of
them
are
encrypted
using
a
single
deck
which
can
considerably
reduce
the
number
of
decks
that
are
generated.
C
Then
the
third
one
was
maybe
have
a
certain
limit
set
on
how
many
secrets
can
be
encrypted
using
a
single
deck
and
then
perform
an
encryption
using
that
deck
so
till
that
limit
and
then
once
the
limit
is
hit,
regenerate
a
new
deck
and
then
use
that
for
the
next
set
of
secrets.
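A minimal sketch of that third proposal. The limit of 3 here is arbitrary for the demo; a real limit would be derived from the cipher's safety bounds, and the real logic would live in the API server's envelope transformer:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// maxUses is an arbitrary demo limit; a real limit would come from
// the cipher's safety bounds, as discussed in the meeting.
const maxUses = 3

// dekCache implements the third proposal: reuse one DEK until a usage
// limit is hit, then generate a fresh one.
type dekCache struct {
	key  []byte
	uses int
	gens int // how many DEKs have been generated so far
}

func (c *dekCache) currentDEK() ([]byte, error) {
	if c.key == nil || c.uses >= maxUses {
		k := make([]byte, 32) // AES-256 DEK
		if _, err := rand.Read(k); err != nil {
			return nil, err
		}
		c.key = k
		c.uses = 0
		c.gens++
	}
	c.uses++
	return c.key, nil
}

// encrypt seals plaintext with AES-GCM under the current DEK.
func (c *dekCache) encrypt(plaintext []byte) ([]byte, error) {
	dek, err := c.currentDEK()
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(dek)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	c := &dekCache{}
	for i := 0; i < 7; i++ {
		if _, err := c.encrypt([]byte("secret")); err != nil {
			panic(err)
		}
	}
	// 7 writes with a limit of 3 cost 3 DEK generations instead of 7.
	fmt.Println("DEKs generated:", c.gens)
}
```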
C
Yeah
so
like.
I
think
we
have
three
of
these
suggested
right
now
and
then
I
think.
D
I
think
there
was
a
fourth
one
as
well,
if
I
may
had,
which
was
like
to
add
some
kind
of
hierarchy
to
the
to
the
kex,
and
that's
like
the
first
kk
that
is
owned
by
the
kms.
Then
I
have
a
local
kk
inside
of
the
kms
provider
and
that
would
be
used
to
encrypt
like
the
dks.
D
Yeah
I
was
just
going
to
comment
on
the
on
the
the
proposition
of
having
like
dex
per
namespace.
I
think
the
the
problem
that
we
will
encounter
like.
D
I
really
like
the
idea
in
the
first
place
because,
like
we
would
still
have
like
some
kind
of
tenancy
model
in
place,
but
we
have
to
remind
that
remember
that,
at
the
end
of
the
day,
like
this
feature
doesn't
only
target
secrets
but
can
target
any
kind
of
resource
and
as
part
of
all
the
resources
that
exist,
there
are
some
of
them
that
are
named
space,
so
it
like
wouldn't
be
able
to
cover
this
kind
of
scenario.
If
we
were
to
have
like
one
name,
space
per
one,
deck
per
name
space.
C: Yeah, I think that makes sense. But also, mostly today it's been secrets; I think that's why the proposal, I mean, the suggestion on GitHub, was very focused on secrets. And I think once we do solve the performance issues, it opens up an option to use this for all the different components without hitting their limits. But yeah, that does make sense: if you have non-namespaced resources, then it basically breaks the whole thing.
D
Like
at
least
four
or
five
resources
that
need
to
be
encrypted,
not
only
like
the
secret,
there
are
like
plenty
of
other
things
that
needs
to
be
encrypted.
D: That wouldn't fit this particular use case.
A: Right. Also, just as a whole, I find the idea of asking for certain resources to be encrypted and others not to just be weird. I just want to encrypt everything and move on. Why do I need to figure out what is a secret? Because I don't know.
A
In
the
first
place,
right
right
I
mean
just
in
general,
though,
like
you
know,
you
can
add
any
custom
resource
to
the
api
server
right
and
you
can,
you
can
have
it
have
any
semantic
that
you
want
right.
So,
since
you
can't
possibly
know
what
custom
resource
will
be
added
to
your
api
server,
you
have
to
assume
at
least
all
custom
resources
are
secret
because
you
don't
know
right.
So,
if
you're
going
to
do
that,
you
might
as
well
just
encrypt
everything.
D: And honestly, the feedback that I wanted to give last week about this particular idea, but we hadn't really had much time: what I really like about this particular suggestion is that we still have the same level of security with the DEKs. The DEKs are rotated whenever the object changes, which is the best that we can do in terms of envelope encryption. And the key principle that we have on this feature, that is, the KMS encryption, is still here, because the owner, at the end of the day, of the whole key hierarchy is the KMS. So if the users interact with their key in the KMS, as long as we reflect the changes that are made in the KMS onto the KEK that we have locally (so if the KEK is changed in the KMS, we should actually change it in the KMS provider as well), the users are still in control of the key.
B: For the first suggestions: yes, I also agree that the one with the local KEK is the best solution. The problem that I have with the first solution, for example... sorry, my kids are around right now. So the problem with the first suggestion, use a DEK for a certain amount of time, is that it's really hard to pinpoint how many. We would have two different kinds of boundaries: for AES-GCM we can do it so many times, whereas for AES-CBC we can do it so many other times, and so I see that we would have to plan lots of different boundary conditions depending on the encryption we use, and this might be confusing and problematic for the user. Per-namespace reduces the problem, but having it per amount seems like the best solution, because it's actually the problem that we have, so it kind of pinpoints it; but in the end, having a local KEK, kind of another layer, solves it.
B: So maybe that's how software engineering works, right: you can fix everything by adding yet another abstraction layer. I think also, as Damien said, this works quite nicely, because then you have just one key that you need to get decrypted, and then you can decrypt all the other DEKs. So it's a really nice idea.
C: Yeah, I think also one thing is that this would basically be something that each KMS plugin would have to do, rather than something in the Kubernetes API server. It would be something that each KMS plugin has to introduce and then basically build on from there on their own. So it's not something that would be common across all the KMS plugins.
A: That's where the reference implementation idea comes in: we just do this for you in the one place. Because what we're trying to say is that this problem actually only exists because you're using a cloud KMS. If you're using a local HSM, a hardware thing, it doesn't care how many encryption operations you need to do; it can do them just as fast as literally possible.
A
So
you
know
it's
just
as
fast
as
software
will
be
right
and
what
we're
saying
is:
that's
not
our
fault
right,
like
we
can't
make
your
network
faster.
We
cannot
make
your
rate
limits
go
away,
but
we
also
can't
work
around
them
in
a
good
way,
like
every.
Every
way
that
we
come
up
with
to
work
around
them
involves
basically
a
compromise
of
some
some
aspect
of
either
our
security
or
probably
more
important.
A
Just
like
the
semantics
of
the
kubernetes
they
can
that
we
want
to
make
sure
it's
persist
all
right
so,
but
if
we
provide
a
library
implementation
of
this,
I
don't
think
that's
that
big
of
a
deal,
because
all
we're
saying
is
well,
you
don't
have
to
use
it.
If,
for
example,
you
don't
need
that
functionality,
just
don't
use
it
right
and
we
could
totally
make
it
so
that
the
key
hierarchy
bit
is
and
maybe
like
enabled
by
default
when
disabled,
within
the
reference
implementation.
A
That
way,
if
you're
like
no,
I
don't,
I
don't
actually
need
that
indirection,
because
my
kms
is
like
right
here,
it's
hardware,
so
that's
right,
so
we
can
do
both
it's
more
of
like
I.
I
like
this
idea
because
we
can
build
it
once
we
can
ask
people
to
use
it
if
they
want
to.
I
suspect
people
will,
because
why?
Why
would
you
write
your
own
if
someone
already
did
it
for
you,
like?
I
don't
really
see
like
there's.
A
No
like
competitive
advantage
of
writing
your
own
you're
not
going
to
get
anything
out
of
it,
and
then
it
would
prove
out
that
we
did
a
good
job
right
like
because
people
would
adopt.
B
Yeah,
so
so
I
agree
also
on
that
that
it's
it's
not
really
an
encryption
problem,
but
it's
more
like,
like
a
kms
provider,
so
an
api
network
problem
that
there
are
too
many
requests.
D
Yeah,
I
think
the
aspect
that
was
pointed
out
is
that
the
problem
itself
isn't
in
the
kms
feature,
but
rather
in
the
fact
that
we
are
like
some
of
our
users
or
kms
austin
kms
that
need
to
go
through
the
network,
whereas,
like
some
others
just
have
our
local.
So
all
of
our
like
of
our
the
consumer
of
this
api,
basically
are
not
impacted
by
this
problem,
so
it
like.
There
is
no
actual
reason
to
put
the
fix
inside
of
the
api
server
itself,
but
rather
like.
D
If
we
provide
the
tools
in
the
canvas
provider
itself
or
in
the
reference
library
implementation,
then
it
would
be
up
to
them
to
really
implement
it
or
not
if
they
really
need
it
so
yeah.
I
really.
I
really
really
like
this
id
personally
of
like
delegating
this
responsibility
to
each
cloud
provider
to
like
make
the
performance
optimization
that
are
needed.
A: So that way, it's not just "here's a toolbox that you can use"; it'll be a complete, end-to-end thing. If you're running bare-metal Kubernetes and you just plugged a YubiKey into the server, or however you get your KMS there, or, probably, a TPM is already on the server and you don't need to plug anything in, then you're ready. You just literally need to deploy this one process.
A: Obviously, the harder thing with something like a TPM is, you know, it's not our problem, but you would have to make sure that the TPMs across your HA environment share the same encryption keys under the hood. But that's a deployment issue. That's not something we can do for you, because cloud KMS does that for you: when you provision a key in cloud KMS, it's available across the globe.
A: Yeah, I mean, I recently had someone from VMware who was running Kubernetes on vSphere and such, and they said: so, we did the local key-based encryption, and, you know, is this good enough? I was like: no, how is that good enough? Your key is sitting right there in plain text inside the API server. What part of that makes it good? You need to use KMS. And they were like: okay, how?
A: So yeah, I'd like to have a better answer for individuals, especially if you're sticking within the well-known standards for TPMs and KMSes and such; you shouldn't need to write it yourself, there should be an implementation.
B: Yeah, also, I think Intel has this SGX, right? So maybe it would work with that too. This is quite a wide domain; that would be really nice.
A
But
you
know
you
could
certainly
keep
the
encryption
stuff
inside
of
there
and
you
know
have
some
way
of
mediating
between
the
kms.
B
Plug-In
and
such,
but
basically
those
are
two
different
concerns
right,
having
the
additional
layer
for
the
kegs
or
maybe
even
something
more
dynamic
like
like
a
keychain
or
something
that
depends
on
just
one
and
coming
from
the
kms
and
the
other.
One
is
the
reference
implementation
that
adds
a
lot
of
benefits
or
sugar.
On
top
like
the
pixie
11
implementation
right
right,.
A: So, just at a high level: so today, if we just pretend you're on GKE and you're using their KMS plug-in, deployed with, say, a 1.23 API server, and encryption is enabled, right.
D: The only thing that I want us to keep in mind is that, to me, the key aspect behind this hierarchy is to have the KEK that is inside of the KMS be the main key. So whenever that one is changed, by the user or even by the KMS (because it can be rotated every seven days, for example), I want to make sure that it always remains the main key, and that we never rely on a KEK that depends on a previous KEK from the KMS.
D
So
the
only
thing
that
I
would
want
to
make
sure
that
whenever
a
kkk
changes
in
the
kms,
we
are
notified
in
the
kms
provider
and
we
can
rotate
the
kkk
at
that
moment.
A: ...terms for this. So we have the, like, the master... no, that's a bad word, not using that word, that word's not allowed anymore: the main key, or the true KMS key.
D: Yeah, right. Because I think the reasoning is that, to me, what the users want from this feature is to actually have the main key always be in the KMS, and everything encryption-related should be controlled by this KMS. The KMS should always be the main actor in the encryption; we should never derive from another branch in the hierarchy, I would say.
A
That
seems
fine,
I
I
so
I
I
think
the
questions
I
have
around
this
are
then
how
what
is
the
mechanism
for
the
the
true
kms
informing
the
kms
plug-in
of
such
a
rotation?
So
we
need
that.
We
don't
have
anything
today,
but
then
also
do
we
need
a
mechanism
from
the
api
server
side
for
it
to
say
that
it.
A
That
it
wants
to
basically
do
like
it
wants
to
do
a
rotation
in
the
sense
of
it
has
previously
encrypted
data
from
some
some
key
encryption
key
or
with
some
data
encryption
key
that
was
encrypted
with
some
key
encryption
key,
and
what
it's
saying
is
that
it
would
like
to
redo
that
with
the
latest
key
encryption
key
is
whatever
whatever
the
definition
of
latest
is
because
the
api
server
doesn't
know
anything
about
your
hierarchy.
It's
just
asking
for.
I
want
you
to
be
at
the
latest
level
right
yeah,
so
I
think.
D
I
think
I
was
just
going
to
say:
I
think
the
the
signal
that
okay
cash
chain
should
first
be
propagated
from
the
kms
to
the
kms
provider
and
then
once
the
kms
provider
does
the
exchange.
I
should
also
like
tell
the
api
server
that
the
k,
the
local
kkk
change
and
that
if
they
want
to
do
a
full
rotation
of
the
data,
so
that,
like
everything,
is
based
on
the
new
hierarchy,
then
they
can
do
it.
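One conceivable shape for that signal (purely an assumption here, not the KEP's actual API): the plugin reports the ID of the key hierarchy it is currently using, the API server records the ID each object was written under, and a mismatch is the cue that a storage rewrite is possible:

```go
package main

import "fmt"

// KeyIDSource is a hypothetical view of the KMS plugin: it reports the
// identifier of the current (remote KEK + local KEK) hierarchy.
type KeyIDSource interface {
	CurrentKeyID() string
}

// staleObjects returns which stored objects were written under an
// older key ID than the plugin currently reports, i.e. candidates for
// a no-op rewrite to pick up the new hierarchy.
func staleObjects(src KeyIDSource, writtenUnder map[string]string) []string {
	var stale []string
	current := src.CurrentKeyID()
	for name, id := range writtenUnder {
		if id != current {
			stale = append(stale, name)
		}
	}
	return stale
}

type fakePlugin struct{ id string }

func (f fakePlugin) CurrentKeyID() string { return f.id }

func main() {
	objects := map[string]string{
		"secrets/ns1/a": "kek-v1",
		"secrets/ns1/b": "kek-v2",
	}
	// The KMS rotated its KEK; the plugin now reports kek-v2, so the
	// object still written under kek-v1 needs a rewrite.
	fmt.Println(staleObjects(fakePlugin{id: "kek-v2"}, objects))
}
```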
D
And
with
that
in
place,
we
could
even
like.
So
there
was
a
discussion
last
time
about
like
how
we
do
a
full
rotation
of
hcd.
That's
something
that's
not
yet
supported,
actually
like
when
the
kkk
changes.
Currently
in
upstream,
we
are
not
rotating
all
the
data,
which
means
that
if
a
user
has
their
kk
being
stolen
or
whatever
like,
there
is
a
leak
in
the
cake
and
they
want
to
rotate
them
completely
and
rotate
all
the
data
that
is
currently
in
the
cd.
D
They
cannot
do
that
currently
because
like
they
would
have
to
basically
generate
a
right
request
for
each
resources
that
are
encrypted,
and
if
we
have
this
mechanism
in
place,
since
we
won't
have
like
the
huge
load
on
the
kms
provider
anymore,
we
can
definitely
upstream
a
code
that
will
do
the
complete
rotation
for
us.
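For reference, the write-per-resource workaround D describes is what the Kubernetes encryption-at-rest documentation prescribes today for forcing re-encryption under a new key: a blanket read-then-replace of every affected object, for example for secrets:

```
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```

Each replace is a no-op at the API level but rewrites the stored ciphertext, which is exactly the per-object load on etcd and the KMS provider being discussed.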
A: I think I might need a diagram or something to follow exactly how this would work out. And I suspect that, to get the most value out of this stuff, we would need to go ahead and propose some structural change to what we store on disk. So I suspect that the real key encryption key, the one used by the KMS, probably needs to have some kind of unique identifier that's immutable and only changes when you get a new KMS encryption key, so it's strongly attached to that; and then presumably the local key encryption key would have, maybe, its hash as its unique identifier, or maybe its hash plus the ID of the real encryption key.
A
You
can
you
can
start
building
mechanisms
to
stay
like
well,
okay,
what
was
the
real?
Like
you
know,
what
was
the
real
key
encryption
key
is
first
everything
encrypted
by
that,
but
then
also
if
there
is
a
key
hierarchy
right,
like
you,
can
kind
of
try
to
trace
down.
Well,
is
that
also
up
to
date,
if
that
matters,
maybe
it
doesn't.
D
Yeah
one
thing
I'm
wondering
is
like
because
you
you
mentioned
that
we
need
to
change
how
we
store
the
data
to
kind
of
say
with
which
kkk
like
which
local
kkk
it
was
encrypted
with.
But
one
thing
I'm
wondering
is
like:
do
we
actually
want
to
support
having
multiple
local
kk
encrypt
like
having
encrypted
data,
and
then
cd
are
they?
Do
we
want
to
go
forward
with
the
idea
of
like
whenever
the
k
can
change?
A: I'm not sure, because rewriting all the data in etcd no matter what would be expensive in terms of I/O, and I want to get to a world where it's a good idea to just encrypt everything, not a really dangerous operational decision, yeah, right. So for that to reasonably be the case, you should be able to do rotation of your main key encryption key.
A
Like
I'm
unsure
about
like
what
this
ends
up
being
right,
like
this,
like
so
there's,
probably
some
limit
to
how
much
the
local
key
encryption
key
should
be
used
right
and
each
each
aha
instance
of
the
kms
plugin
should
have
its
own
set
of
local
key
encryption
keys
right
right.
Obviously
any
of
them
can
decrypt
data
because,
no
matter
how
they
are
presented
with
the
data,
they
can
always
ask
the
real
kms
for
help
in
decrypting
a
local
key
encryption
key
that
was
originally
created
by
some
other
instance
of
itself
right.
A
So
that's
totally
fine,
it's
just
there,
but
there
needs
to
be
some
finite
limit
to
how
much
you
use
the
the
local
key
encryption
key
right,
and
it
might
just
be
that
we,
it
might
just
be
an
arbitrary,
like
small
number
like
we
might
just
say
that
it's
only
used
for
like
a
thousand
encryption
events,
because
we
know
that's
perfectly
safe
within
any
bound
of
any
modern
cryptography.
D: One thing I'm just wondering: has anyone seen a scenario where a user would want to do a full rotation of the key that's in the KMS, because some previous key got leaked, and they want to do a full rotation of the data in etcd based on the latest?
A: I mean, I think that's a totally reasonable thing for someone to do, and it might not be based on any belief of compromise; it might just be good security hygiene. Or, you know, in OpenShift, when I wrote the encryption controller, it's basically, I think, hard-coded to just rotate everything every two weeks or something like that; not because I expected the key to be lost every two weeks, it just does it. Why not? I mean, it was partially to prove that I wrote all the encryption rotation logic right, just by actually running it over and over; if you still have a functioning cluster at the end, you know it's working.
D
So
I'm
wondering
like
how
we
could
maybe
introduce
that
to
upstream
I
mean
this
mechanism,
full
rotation.
A
Yeah
I
mean
I,
I
think
what
we
need
to
first
start
off
with
is
like.
Basically,
what
are
the
changes
to
the
grpc
api
to
funnel
extra
information
through
and
then
what
are
the
changes
to
the
storage
format
to
store,
said
extra
information
and
then
then,
once
we
have
that,
and
we
don't
have
to
worry
about
the
format
exactly,
we
can
just
say
that
somehow
we're
going
to
store
this
new
id
field
within
the
stored
disk
because
we
can
come
up
with
any.
A
I
mean
it
could
just
be
json
for
all
that
matters
right
I
mean
that's
a
bad
idea
because
it's
robust,
but
it
could
be
so
once
we
have
that,
then
we
can
start
saying
like
okay.
What
kind
of
component
would
you
need
to
orchestrate
a
rotation
right
like
can?
Does
the
component
need
like
read
level
access
to
raw
lcd
data?
Does
it
or
can
it
do
it
through
the
kubernetes
api
right?
Is
it
part
of
the
kms
plugin,
or
is
it
a
kubernetes
controller?
A
What
is
it
or
is
it
a
coordination
among
all
of
these
actors,
but
I
don't
think
we've.
I
don't
think
we
have
the
foundation
yet
to
really
ask
the
question
of
how
would
this
work,
because
within
openshift
right,
the
reason
the
encryption
rotation
worked
is
because
the
api
servers
are
self-hosted.
You
know
like
I
would
probe
them
for
their
state,
so
I
could
tell
like
how
far
along
they
were.
A
I
could
look
at
their
configuration
with
their
static
config
to
be
like
all
right.
This
is
the
key
that
you
have
by
definition
observed
because
it's
on
disk-
and
I
can
see
that
you
read
it-
those
types
of
things
right
so
like
each
of
those
assumptions,
has
to
be
sort
of
pried
out
and
turned
into
like
a
generic
mechanism
of
like
well.
How
do
you
know
how
far
a
particular
pi
server
is.
D: It makes sense to wait until we have the actual mechanism for funneling the information that the KEK changed and storing it into etcd, before actually thinking about this kind of problem.
B: I think that the rotation is really important; it's not just good hygiene. I mean, today most TLS is with Let's Encrypt, right? So we use new keys every time, and we don't reuse them anymore, because back then it was normal to reuse them a couple of times, so once the key got leaked, you could kind of decrypt the whole conversation. And this whole rotation can be a huge mess if you think of it too late.
B: So in my previous company, we had this whole envelope encryption, and figuring out how to do the rotation properly, without any hassle, was a huge mess. So, basically, it caused more CVEs than benefits, because the users never really used it that much. And I also agree that it would generate a lot of load on etcd if you also wanted to re-encrypt everything, even the DEKs. For example, we stored medical data, and the DEK was always stored alongside the medical data, so if somehow the DEKs were leaked, the assumption was that they were already encrypted anyway. So we never bothered much with decrypting and re-encrypting the data itself again.
B
But
again,
this
discussion
is
super
hard
to
do
without
kind
of
a
board
or
a
common
language
that
we
agree
on.
So
we
had
also
specified
kind
of
a
kind
of
a
notation
how
we,
how
we
could
write
down
what
we
mean
by
saying:
okay,
this
keg
encrypts
this
keg,
and
this
kick
and
grips
this
deck,
and
this
is
and
because
can
we
become
very
confusing
quite
quickly.
D
Yeah
just
to
to
go
back
to
your
point
of
that,
this
kind
of
mechanism
like
having
a
rotation
mechanism
in
place,
will
put
a
lot
of
load
on
its
cd.
At
some
point,
I
think,
like
the
problem
comes
down
to
whenever,
like
we
want
to
improve
something
in
terms
of
security,
there
are
always
some
drawbacks
in
terms
of
performance
or
whatever,
and
I
think
if
you
want
to
do
a
full
rotation,
if
it's
like
something
that
you
want
to
do
every
other
week
or
like
just
once
in
a
while.
B
I
would
think
think
so
too,
but
the
the
camera
topic
arrives.
You
know
like
in
october
last
year
and
the
first
reaction
from
stefan
was
like,
oh
my
god,
how
much
more
performance
does
it
squeeze
out
of
that
cdw?
I
haven't
already
the
beast
on
max
load.
We
don't
have
much
to
spare
so
so
I
got
always
the
thing
that
lcd
is
already
reaching
its
limits
of
capacity.
B
So
this
is
why
I
kind
of
assumed
it's
the
same.
D
In
openshift
we're
already
doing
this
for
rotation,
so
it's
fine,
we
it!
It
won't
be
worse
than
what
it
actually
is.
So
for
us,
it's
fine,
it's
more
like,
but
the
other
users
like
what
would
they
expect
if
we
want
to
have
a
mechanism
to
do
for
rotation
like
they
have
to
be
aware
that
it
will
put
load,
it's
completely
normal,
but
it's
a
drawback
you
that
you
have
to
embrace.
If
you
really
want
your
encryption
to
be
secure
from
an
end-to-end
perspective,.
B
And
also
the
the
future
outliner
to
maybe
encrypt
at
some
point.
Everything
address
is
also
nice,
which
would
be
nice
to
know.
Maybe
it's
a
future
task
to
keep
this
in
mind.
C
Good,
I
think
we
need
to
spend
some
time
on
this
one
like
basically
like
writing
out
a
design
spec
on
how
this
works.
I
think,
if
we
actually
write
it
down,
then
we
can
also
consider
what
different
scenarios
we
need
to
kind
of
have
right,
like
I
think
rotation
is
one
really
great
use
case,
but
then
I
think
it'll
also
help
us
visualize
what
we
are
missing
missing
if
we
move
to
this
new
model
like
how
do
we
make
it
backward
compatible
like
when
game
is
plug
and
start
using
this?
D: Yeah, I think that definitely sounds like a good plan: just focusing on how we want to implement this hierarchy.
D
What
are
the
design
changes
that
we
want
to
make
to
the
api
and
then
really
focus
on
the
like
the
potential
aspect
that
might
be
impacted,
so
such
as
rotation,
as
you
mentioned,
such
as,
like
backward
opportunity
and
stuff
like
that
say,
focusing
on
that
at
first
will
help
us
kind
of
figure
out
like
how
it
will
impact
the
other
aspects
of
the
future.
A
Yeah
do
does
anyone
wanna
take
an
axe,
man
do
like.
A
Do
do
some
explicit
fleshing
out,
maybe
in
this
dock
of
just
like
explicit
mechanisms
we
might
build,
or
you
know
new
data.
We
might
want
to
store
this
stuff
like
that.
So
that
way
we
have
something
and
it
might
it
might
even
we
might
even
want
to
first
start
with,
like
use
user
stories
like
what
we
expect,
the
final
state
of
the
system
to
look
like
a
little
bit
more
strongly,
and
then
we
can
try
to
address
like
which
of
those
can
be
handled
by
what
we're
discussing.
D: Yeah, I mean, we can always have kind of a look at how that could look, and then sync during the next meeting about it.
D: What are the approaches that we suggest, what is the possible shape of the data in etcd, how the data would look. I mean, it's always fine to cross different ideas and implementations in that regard.
B: Okay, so the current action item would be that either we individually create pairs, or we do it on our own, trying to sketch something out; and then, in this brainstorm mode, like decoupled or asynchronous brainstorming, we come together in the next video call, show each other what we have, and try to figure out the way forward, right?
C
Yeah,
I
think
that
sounds
like
a
good,
short-term
plan
right
like
because
we
haven't
synced
up
next
week.
So
if
we
all
have
something,
then
we
can
come
back
with
that
and
then
like.
B: Okay, so I'll ask my former colleague whether the encryption notation that he created back then is open source, so maybe I could show how beneficial it was, and is. Because in design discussions we often had issues like: which key do you mean, which DEK do you mean, is this the one from the users, is this the one from this system?
B
So
maybe
I
could
also
pitch
that
annotation,
so
it
sucks
able
to
get
into
it,
but
once
you
entered
it
super
clear:
what's
meant.
C
Yeah
and
also
in
this
dark,
I
think,
like
we've
added
these
extra
extra
metadata
here
and
there
right
like
we
had
the
kick
in
the
observability
too,
like
where
we
said,
maybe
the
game
is
plugging,
should
written
the
kick
so
there's
some
data
around
what
key
was
used
to
encrypt
it
and
all
those.
So
I
think
it's
just
basically
moving
it
around
and
then
just
having
one
common
thing
where
we
say
like
what
are
the
things
we
need
and
just
go
from
there,
but
I
think
yeah.
B: But I think this is the most crucial phase, where we still have a bit more leeway; once we've agreed on something, then we can... well, if we were in one building, in one room, I would say: okay, let's take a day and break down the topic, find out all the solutions, sketch it out, discuss it, and then we would be done. But remotely it's kind of hard, right? Or does someone of you have something with which we could have such a kind of whiteboarding session?
C
I
I
can
do
friday
if
you
have
time
if
you
want
to
pair
on
that,
like
I
think
I
don't
have
a
lot
of
meetings
on
friday.
So
if
you're
interested
I
can
pair
and
then
we
can
brainstorm
together,.
B: Yeah, definitely. Because, for me, it's always through discussions that I learn a lot and realize how few things I know, and this is why I kind of shy away from doing something completely on my own: I would be worried that I'm going down a rabbit hole, and at the end someone says, yeah, but your first assumption, your axiom, was wrong, and then I've wasted a lot of time.