From YouTube: Kubernetes SIG Auth 2022-02-15 KMS #7
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-02-15 KMS #7
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B: Hey everyone, today is Feb 15th. This is the seventh KMS meeting, and this call falls under the CNCF code of conduct; it's being recorded and will be published on YouTube.
B: Today I think we are discussing the performance improvements that we talked about in the last couple of meetings. In the Slack chat, Mo mentioned that instead of going forward with the current observability KEP that we have, where we talked about adding a new UID field to the gRPC API, we could consider starting to work on a v2alpha1 instead. So what we would do is introduce a new v2alpha1.
B: We add the new UID field that we talked about, and then also extend it with the new KMS key ID and the other fields that we discussed, which can be useful for having the multi-level hierarchy, which can improve the call patterns and so on. So the idea was: we do v2alpha1 as a PoC. We start working on it, actually build an API, try everything out, and then maybe also demo it within this group. Then, once we have something concrete,
B: We convert that into a KEP and submit it to SIG Auth, and that is what we will take forward with all the changes that we need, instead of proposing individual changes in each KEP, where we won't know if it's actually extensible. The UID works with v1 and the current APIs, but for the newer fields it probably makes sense to just have a new API.
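As a rough illustration of the v2alpha1 shape being discussed, here is a minimal sketch. The field and type names are assumptions for illustration only, not the final Kubernetes API: every call carries a UID for log correlation, and the response reports which KMS key produced the ciphertext.

```python
from dataclasses import dataclass, field

@dataclass
class EncryptRequest:
    plaintext: bytes
    uid: str                              # correlates API-server and plugin logs

@dataclass
class EncryptResponse:
    ciphertext: bytes
    key_id: str                           # which KMS key-encryption-key was used
    annotations: dict = field(default_factory=dict)  # opaque plugin metadata

def encrypt_stub(req: EncryptRequest) -> EncryptResponse:
    """Toy stand-in for a plugin: 'encrypts' by reversing the bytes."""
    return EncryptResponse(ciphertext=req.plaintext[::-1], key_id="kek-1")
```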
B: So if we're going to have a new API, we might as well invest all our effort into adding all the required fields in the new one, have a PoC working, and then, once we validate the theory, we can also show it to SIG Auth. Then, once we get folks saying this looks good, we can convert that to a KEP and actually go ahead and implement it. At least personally I think that makes sense; we can actually spend some time on a PoC. And in terms of the PoC,
B
I
think
it's
basically,
this
new
v2
alpha
one
and
then
also
the
hierarchy
on
the
provider
plug-in
so
like
these
two
things
are
the
main
ones
and
then,
if
we
can
actually
implement
that
and
show
it
works,
then
we
can
go
forward
with
it
and
then
the
next
steps
would
be
like
the
reference,
implementation
and
all
those,
but
I
think
for
poc.
Maybe
we
just
have
to
try
it
out
with
like
a
single
provider
or
something
to
show
like
what
the
theory
works.
What
we
propose.
C: Yeah, I think we can even write the library that's going to, what was the term for that, the official implementation for the plugins that we wanted to write. We can even write that in parallel with the KEP, to kind of give it as a proof of concept, and then further improve it to make it available to users.
B: Yeah, that makes sense. So I was wondering: should we start with a doc before the PoC? At least I think it would be nice to discuss what we want to do in the PoC, so that we have a framework of things that we need to do. Then we can take the things we can work on, set a deadline for ourselves, and just do that, so that we can demo it in this same call, maybe two or three weeks later.
C: Yeah, I think that would make sense: at least start with a document and outline what we want to exchange, and exactly what improvement we expect, and then start with the PoC. Maybe a little PoC, because there are still a lot of topics that we haven't discussed from the original doc. So maybe in the future there will be new fields that get added, or a couple of things that will change.
C: We can start with a document and then improve the PoC over time, after every meeting basically, to try to incorporate everything that we discussed and keep it up to date with the current state of our discussion.
C: But having a PoC that we improve after every meeting, after every discussion, might be a good proof of concept, so that it follows our discussion and goes in the right direction, and then when we eventually open the KEP we have this piece on the side that we can show off.
B: Yeah, I think one main outstanding thing, even for the PoC, was: what's the format of the data that's stored in etcd? I mean, Damian, I think you had a suggestion on how we can do that, and then Mo had mentioned before that we should consider maybe a completely new format. So I think maybe we can discuss that, because that is a key part of the PoC; the other new fields that we are adding are still just something that gets added to the API.
D: Yeah, I was a little hand-wavy there, but the gist of my thought process was: let's stop trying to encode a bunch of spaghetti into prefixes, and instead have a structured format that can grow without requiring catastrophic changes. Because right now, every time we want to grow the thing, it's "all right, we have these prefixes, I guess, and we want to shove more suffixes onto those prefixes." None of that sounds great.
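A minimal sketch of the "structured format instead of prefixes" idea (JSON is used purely for illustration; the real on-disk format is undecided in this discussion): new fields can be added to the envelope later, and readers that do not know about them simply ignore them, so growth is not catastrophic.

```python
import base64
import json

def encode_envelope(ciphertext: bytes, provider: str, **extra) -> bytes:
    env = {"provider": provider,
           "ciphertext": base64.b64encode(ciphertext).decode()}
    env.update(extra)                 # e.g. key_id, added in a later revision
    return json.dumps(env).encode()

def decode_envelope(raw: bytes) -> dict:
    env = json.loads(raw)
    env["ciphertext"] = base64.b64decode(env["ciphertext"])
    return env                        # unknown fields ride along harmlessly
```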
D: JSON might not be appropriate here, but it depends on what the goals for the storage of metadata are. If our desire is for it to be easy to consume outside of Kubernetes, we're going to have to think pretty hard, because it really then can't just be protobuf, at least not the protobuf variation that we have today, because that's specific to how the API server works.
D: Presumably we would then add new fields for things like key ID, or whatever else we think we want. Maybe, and I'm unsure about this, we could have some fields that are basically meant to provide the KMS plugin with some form of storage. So the idea would be: when you make a KMS call and the plugin responds, it could give you extra data, with the idea being that this extra data is opaque to the API server.
D: But when you ask the KMS plugin in the future to decrypt that data, you send that extra data back to it, so it can do something with it. That could include other bits and pieces of information. If we wanted to make it a little bit more formal: the plugin is basically allowed to use the API server conversation as a form of durable storage, if it chooses to do so.
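The opaque-annotations round trip can be sketched like this (the plugin interface and the XOR "cipher" are stand-ins, not real code): the plugin returns extra metadata with each encrypt response, the API server persists it verbatim without understanding it, and hands it back on decrypt, giving the plugin a small slot of durable storage.

```python
class FakePlugin:
    def encrypt(self, plaintext: bytes):
        key = 0x2A
        ciphertext = bytes(b ^ key for b in plaintext)
        # Opaque to the API server; meaningful only to this plugin.
        annotations = {"demo.example/wrapped-key": str(key)}
        return ciphertext, annotations

    def decrypt(self, ciphertext: bytes, annotations: dict) -> bytes:
        key = int(annotations["demo.example/wrapped-key"])
        return bytes(b ^ key for b in ciphertext)

def round_trip(plugin, plaintext: bytes) -> bytes:
    ciphertext, annotations = plugin.encrypt(plaintext)
    stored = (ciphertext, dict(annotations))   # persisted blindly alongside the data
    return plugin.decrypt(*stored)
```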
D
It's
not
necessarily
strictly
required,
but
maybe
it's
desirable,
maybe,
but
I
do
think
we
at
a
bare
minimum.
We
have
to
come
up
with
an
easy
way
to
identify
the
encrypted
data
on
disk.
So
that
way,
when
it's
time
for
the
api
server
to
decode
it,
it
can
figure
out
who
it's
supposed
to
ask
in
some
reasonable
way.
D
We
may
want
it
consumable
by
external
actors,
one's
less
clear
to
me.
What
else
is
there.
D: Maybe you get into a situation where your Kubernetes infrastructure is really borked and you can't get the API server back up or whatever, and you just need to get maybe some small amount of really important data out of there. It's really about disaster recovery, right? Is disaster recovery for this type of stuff "figure out how to get the API server to run again and tell it to ask for the data back, because it has all the code," or is disaster recovery more about, well, yeah.
D: No, you can reasonably write a tool that, if it has the correct decryption keys, can pull the data out. I don't know. I think in this regard this whole area of Kubernetes is really immature, because it barely kind of works. So it's: I got it working okay, I'm just going to leave it alone. And rotation? I'd rather not even try. Rotation today is "I'm going to turn everything off, force-rewrite everything, and turn it back on," and that's not rotation; you have downtime.
D: I think the hardest thing will be coming up with the APIs and mechanisms that allow rotation to be done much more dynamically than it is today. Today it involves at least two or more KMS providers in the encryption configuration, with careful orchestration of the API servers and their startups and all that. I would like to come up with something that pushes more of the burden onto the KMS API, and lets us do that with some confidence, without needing API server restarts and without needing more than one KMS provider to be defined.
D: Yeah, because the act of orchestrating API servers and restarting them is basically 100% infrastructure-specific; it's different on AKS than on GKE and OpenShift and Tanzu. And basically what that means is: if you say that part is not in scope, you just can't do rotation, because you don't control those aspects and you don't have enough information to control them. That's why I'm saying it has to be done in a way that requires no API server restarts and no change to the static encryption config.
D: If you look at that original doc with Alex and Mike and others that were on there, Alex was like, "hey, can we make an entire API that exports all of this information, so everyone can do rotation?" And I was like, I don't think any infrastructure provider is going to buy an API that basically has to shove all of their infrastructure details, and the ability to restart the API server, into the Kubernetes API. I just don't see that happening, because it sounds like a dangerous API.
D: So that's the bit that's a little less clear to me. I think we would probably need to build a part of the gRPC API that's a streaming API, one the API server basically subscribes to and is emitted information on, like "hey, now you need to do a rotation," or something. Or it might even just be, "hey,
D: I just created a new key, and I need you to be aware of the fact that I just made one." And all of this has to work with N API servers using N KMS plugins, all of which only have etcd as a synchronization mechanism.
C: Yeah, but I wonder if it's not still too early to talk about that, because thinking about it will most likely change the API. But currently we haven't yet solved how we want to store the data to improve the performance, and I don't think rotation will have an impact on that; the data format that we have in etcd shouldn't change based on rotation. So if we already figure out what the format of the data we want to store is, then we can build on top of that and think about the next step, which is rotation, on top of the performance work.
D: I am totally willing to accept that we want to do something tractable as a starting point. So if our starting point is "I just want to handle performance by building one extra level of hierarchy, and I want to improve the storage format to at least allow me to encode some concept of a new key-encryption-key ID, so that the plugin can figure out how to do the work it needs to do," I'm totally down for that as the starting point. We can just say, like:
D: We can basically say that this is an intermediate step that introduces a significant rotation issue, because the hierarchy is sort of unobserved by the API server. When it does a storage migration, it might not know that it's supposed to call the KMS plugin, because the hierarchy causes sort of a delay in propagation. But that might be okay as a starting point; at least it helps us figure out a storage format. A starting point, at least, something that
C: We can extend and reuse, at least as a starting point, as you said. So we want it to be extensible, so that we don't rely on a kind of binary format where we need to shove new prefixes inside. We want a real format, and then we want this format to maybe be reusable, or at least consumable, by any kind of client that would need to recover some secrets or whatever from etcd, whenever something happens.
C: You can take that as a starting point and maybe then extend it to all the other encryption mechanisms, to just have a kind of generic format across all the encryption mechanisms. Then we can build one client that would be dedicated to getting encrypted information out of etcd. That could be a starting point, and based on that we can build all this rotation mechanism that might update the format that we have in etcd.
B: Even if we introduce hierarchy, without thinking about rotation, it could be very similar to what rotation is today. I mean, this is the first step: even if you build in hierarchy, as a user, if I want to rotate, the KEK is not rotated. So if the locally generated keys are rotated, then the rotation is very similar to what it is today. If you're given the key ID from etcd, which is encrypted, you can basically decrypt it with the existing KEK, and then that would work.
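The two-level hierarchy just described can be modeled like this (XOR stands in for real AES-GCM, and all names are illustrative): a remote KEK wraps locally generated DEKs, and each entry is encrypted under its own DEK. "Rotating" the local keys just means minting a new DEK; older entries still decrypt because the unchanged KEK can unwrap their old DEKs.

```python
import secrets

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_entry(kek: bytes, plaintext: bytes) -> dict:
    dek = secrets.token_bytes(16)             # fresh local data-encryption key
    return {"wrapped_dek": _xor(dek, kek),    # only the KEK can unwrap this
            "ciphertext": _xor(plaintext, dek)}

def decrypt_entry(kek: bytes, entry: dict) -> bytes:
    dek = _xor(entry["wrapped_dek"], kek)
    return _xor(entry["ciphertext"], dek)
```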
B: So I think, as a first problem, as Damian suggested, we solve the format and then just have the initial API, so we can validate that and then build rotation on top of it to see if any other additional metadata is required. I think once we figure out what the storage format is going to look like, and it's extensible by adding new fields, then if we need new fields for rotation it should be a lot easier for us to add more.
D: You know, if we want to simplify our thought process, we can just start with: it's going to look exactly like the protobuf that other, non-encrypted resources use. It's just that now the protobuf resource is going to be some new EncryptedObject, or whatever the hell we want to call it, with whatever fields. Obviously there's going to be a ciphertext field in there, which is just a slice of bytes, but the other fields are the important ones; those are the ones that we want.
D: I suspect we won't get much pushback if we just go with the proto, unless someone has a strong desire for us to try to use one of the more commonly used industry formats. I know Mike had suggested one a while back; I just don't remember what it's called.
D: Yeah, so it could be JWK. There was something else, I think it started with a C, that I cannot remember. But yeah, you can imagine a JWK. The problem with JWK is that it's semi-human-readable; it's not binary, so it's not as compact as it could be.
D: The implementation is a tiny tool that helps you pull data out: given a connection to etcd and a path or a prefix, it can go in and basically try to use a KMS plugin to pull the data out. So it's basically just the back end, and it will help you read the data back out.
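The recovery-tool idea reduces to something like this (etcd is simulated with a plain dict here; a real tool would use an etcd client plus the KMS plugin to decrypt): given a key prefix and a decrypt function, pull the matching values back out.

```python
def recover(kv: dict, prefix: str, decrypt) -> dict:
    """Return the decrypted values of all entries under `prefix`."""
    return {path: decrypt(value)
            for path, value in kv.items()
            if path.startswith(prefix)}
```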
C: Yeah, as long as we have the format, it's pretty straightforward to build; the only problem is having a good format that can then be reused. And if we have one tool for all the encryption kinds, we don't have to really care whether it's KMS data or just AES-CBC-encrypted data inside etcd. You just have this tool that will pull the encrypted data, the user provides a key or whatever, and they get back whatever the data is.
D: We did bring up an important point, though, which is that we wanted to move away from AES-CBC. We wanted to use AES-GCM or whatever, but the specific question there is: what do we move to, and does the user have control over it?
D: Since you cannot influence that, do we care? Surprisingly, no one seems to have complained, as far as I know.
D: I think at the bare minimum we want GCM, so that we get authenticated encryption and decryption. That seems like a good idea.
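Why authenticated encryption matters here can be sketched with a stdlib encrypt-then-MAC stand-in (real code would use AES-GCM; this toy keystream is NOT secure and only illustrates the authentication property): any modification of the stored blob makes decryption fail loudly instead of silently returning garbage.

```python
import hashlib
import hmac
import secrets

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()
    ct = bytes(b ^ stream[i % 32] for i, b in enumerate(plaintext))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # authenticates blob
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: data was modified")
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(ct))
```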
A: That's also very good. And I thought about the problem where, in case you're in restrictive states where you're not freely allowed to use strong encryption, I think you need to somehow share the keys with the state. So I think even then you can use AES-GCM, as long as your state has some kind of access key.
D: You know, presumably there would be some way for the system to initiate a new rotation: if it used to use some other format, it needs to go do that. Or maybe it's just as easy as doing a storage migration, and that's part of the staleness of the data. It could detect: oh, this data that I pulled off required this kind of encryption mechanism, and I'm told my current mechanism is this other thing. Cool.
A: I mean, once we are touching it, why not add it? I think it's a cool thing, because encryption is less deterministic. We know that AES-GCM is secure because it hasn't been broken as of now. Every ten years you get new insights into how you can increase the speed with which you can try to crack the cipher. So maybe at some point there's some surprise that AES can now be quickly broken or whatever, and we need to swiftly switch to secretbox or whatever.
D: So I think I subscribe more to the WireGuard ethos and less to the OpenVPN ethos here. WireGuard is: this is the encryption format; if it's broken, you rev the whole thing, there's a new version of everything, and you move on. Whereas OpenVPN is: no, you can do anything you want, and you can misconfigure it in any way you want.
D
I
I
lean
more
strongly
towards
like
there
is
just
a
way
to
do
it
and
it's
the
right
way
and
you
don't
like
there
is
no
foot
guns.
You
cannot
pick
an
option
that
doesn't
make
sense,
but
yeah,
like
I
think
it
comes
down
to
is
like
how
how
much
concern
and
value
do
you
place
on
crypto
agility.
A: Yeah, okay, so I guess Jason has more experience than me in those things. So if it's hard-coded, AES-GCM should be good enough. I mean, no one complained about the AES-CBC, so yeah.
D
I
would
at
least
want
someone
to
complain
before
I
built
the
feature
like
that.
That's
I
think,
just
good
sort
of
like
high
level
product
design
is
someone
needs
to
at
least
ask
or
you
you
should
do
product
research
and
ask
them
questions
to
understand
how
they
believe
the
system
should
work
if
it's
too
technical
for
them
to
be
asking
for
such
features
right,
like
most
normal
users,
cannot
tell
you
anything
about
cryptography
or
modes
or
authentication
and
all
that
right.
But
but
you
know
you
can
certainly
ask
a
user
like
hey.
D
D
D
B: So: the v2alpha1 with the new fields for the UID and the key ID, and then I think I was going to try out the new storage format. We talked about doing the protobuf, so I'm going to try that. This is all with a new-cluster scenario, without migration in mind; we'll figure out how we're going to do that. But I was going to try these things, and then also maybe try the key hierarchy with the Azure KMS plugin.
B: So I was going to try and stitch all these three things together that we talked about, and then see if we can actually have a working demo with it, so we can use that as a base and keep building on top of it. I'm hoping I can have something by next Tuesday, but if not, later that week. This week is a no-meeting week for us, so I'm hoping I can get there.
B: We've been doing this once in a while, so I think I will have time this week. I'm going to try and do that so that we have some base, and then we can basically tear it apart and say, hey, this doesn't work, that doesn't work, and just make incremental changes to it. Once we have a final thing, we can say this is what we want to do, and then convert it into a KEP.
A: Okay, just in case you need help or a sparring partner: this week is also a little bit easier for OpenShift people, because we have something called shift week, which means we have no normal feature work and we can catch up on things that we didn't follow up on properly. So we have a cool week this time too.
D: That's great. Okay, yeah, I was going to ask: how can we help you as you go through this effort? Are you maybe going to be pushing any of this code to some public repo that we could look at, maybe provide thoughts or feedback, or just, you know...
B: Yeah, whatever changes I have, I'll push to my fork. When I do push changes, I'll post it on the Slack channel, and then if you think we can actually collaborate there, we can just do that.
D: So I think Christoph, you and Damian are the only ones with anything vaguely resembling reasonable overlap with me.
D: You know, we've talked a lot about a reference implementation, right? Is anyone interested in trying to start building out a reference implementation against the current API, the one that's already there, to start specking out things like metrics and, what was the other thing...
D
Yeah
logging,
as
well
as
you
know,
like
getting
it
into
a
state
where
it
could
support
something
like
pkcs11,
like
you
know,
getting
that
baseline
there,
and
then
you
know
once
we
start
understanding
the
new
v2
api.
You
know
we
could
start
swapping
out
the
v1
code
for
the
v2
code
and
maybe.
D
D
D: So, just trying to see if we could. If Christoph has more time this week, because he's been told: go do crazy stuff.
C: All right, okay. Definitely PR reviewing and stuff like that, if there is anything.
A: Okay, sure. Whenever I do something, I'll post it, and let's figure out how to collaborate best. We haven't communicated that much in Slack, so maybe we can increase that amount, and we'll see how it develops.
D: So I'm looking through our doc, trying to remember all the things we wanted to address. Performance we're wanting to handle via a key hierarchy; roughly, I think what we're saying is we want to experiment. Observability we want to handle in a few places.
C: I think the easiest solution here is that we just have to admit that they would have to delete the keys. There is no way they can recover them if they don't have the KEK; if they deleted it, there is no way they can recover their secrets or whatever they stored. And based on that, we can build some alerting. Say we introduce metrics saying there are errors getting data from storage, because we couldn't decrypt it.
C
If,
like
these
are
like
accumulate,
then
we
can
emit
an
alert
that
would
tell
like
the
administrator
that
hey
there
is
something
wrong
like.
I
cannot
decrypt
the
data
in
dpa,
so
you
need
to
do
something
and
then
they
can
check
like
whether
they
have
the
key
or
not,
and
if
they
don't
have
it,
then
we
guide
them
to
like
what
other
cd
common
that
can
be
made
to
delete
all
the
data
that
they
have
to
kind
of
like
put
the
epa
server
back
in
shape.
A: There may be some postmortems where some company ran into issues with KMS: the whole storage was encrypted and they had issues reaching the keys. Maybe there are some real-life examples, so we would see: okay, this is happening, people really need it.
D: The other thought I had around this: I agree with basically everything Damian said, which is that we should make it clear that the problem exists, and guide towards a human being making the final decision about what their options are. Presumably, before they do anything, they back up everything and then start deleting stuff, instead of the other order.
C: Basically, because we own the local KEK, we can say that every hour the plugin will check against the KMS whether it can, for example, just decrypt the local key that we have, to see if the KEK that is linked to this local key still exists. And if it doesn't, then we put a lease of, I don't know, 24 hours before we kind of remove the local KEK.
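The periodic check plus grace window just described can be captured in one small decision function (the one-hour interval, 24-hour lease, and state names are assumptions taken from the discussion): given when the wrapped local key last verified successfully against the KMS, decide what state the plugin should report.

```python
from datetime import datetime, timedelta

CHECK_INTERVAL = timedelta(hours=1)
GRACE = timedelta(hours=24)

def kek_state(last_verified_ok: datetime, now: datetime) -> str:
    age = now - last_verified_ok
    if age <= CHECK_INTERVAL:
        return "healthy"    # verified within the last check interval
    if age <= GRACE:
        return "degraded"   # KMS KEK unreachable or deleted; keep using local key
    return "expired"        # grace over: alert, stop trusting the local key
```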
D: And, I don't know, I'm not that familiar with the different vault APIs. In, say, Azure Key Vault, is there some concept of a lease? Can you...
B: I mean, there is this concept where, even if you delete the key, it's recoverable, but there is no lease where you can say "you can't delete this key"; there's nothing blocking the delete, basically. So if you delete it, it's recoverable for about seven days. That correlates with a lease, because if someone accidentally deletes it, it's almost saying: hey, this was supposed to be in use, so you can still go and recover it for the next seven days. But yes, there is no way to block it from being deleted.
D: Okay, but if it's deleted but still recoverable, so it was deleted less than seven days ago, I assume you cannot use it for operations anymore. So at least that is something that's detectable, in a similar way to what Damian just said: if you tried to use it, you'd be like, hey, I have this local key encryption key...
D: Maybe do an encryption operation or something like that with a particular key at some interval, as a way of saying: hey, I still need that key, you can't delete it. You have to turn me off first; then my dead-man's switch is off and you can go delete the key. But you can't do it in some other weird order.
C: It's like some kind of rotation mechanism. Say they want to delete a key for some security reason, and we have this KMS plugin that's still using it. They cannot really do that anymore, because we have this lease mechanism; if they really want to delete it, they will have to have a way to migrate all the data to the new key, so that the previous one is not used anymore.
A: Yeah, it definitely sounds like a very nice feature, whether for security or when you're rotating and you don't want a hundred keys and your cloud provider charges increasingly for that number of keys. But even though it's a really nice feature, would it make sense to postpone it to a later version, or can we tackle that much complexity on the first go?
D: Just from what was said, I'm skeptical that it can even be done in the way I'd imagined it. I think what Damian said can be done, which is: you can observe whether the system has gotten into a state where the only reason it is healthy is that you happen to have the decrypted key encryption key in memory. You're okay right now, but if the plugin were restarted, you would not be okay. So you can detect that state, and that's nice.
D
That
gives
you
like
a
little
like
it
gives
you
you
know,
imagine
if
you
were
using
a
local
file
based
encryption
and
you
accidentally
like
deleted
the
files
that
have
the
keys
in
there.
While
the
api
server
is
running
you're
still
technically,
okay,
because
the
keys
are
in
memory
right,
it
gives
you
that
kind
of
state,
but
with
the
kms
plugin
right,
you
can
get
into
a
state
where
the
cloud
provider
key
or
your
hardware
key
is
gone,
but
you're
still
technically
kind
of
okay
for
a
little
while.
D
D
D: "You can't delete it right now, because the system is saying it's in use." It's kind of like that annoying Windows thing where you try to delete a file, and it says: no, you can't, this program has it open. And then you're like, screw you, I still want to delete the file.
B: But yeah, there is no lease. I think we can definitely do the one that Damian suggested, though. I think one other topic, which is probably in one of the sections under observability, is the health check. I think we also talked about it: do we want to have a dedicated health check API, and let KMS plugins do whatever they want first as part of the health check and return a response?
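One way to read "let plugins do whatever they want as part of the health check" is an end-to-end probe (the interface here is an assumption, not the real API): instead of only reporting that the process is up, the plugin proves health by round-tripping a known probe value through its own encrypt/decrypt path.

```python
PROBE = b"healthz-probe"

def health_check(plugin) -> bool:
    try:
        ciphertext, annotations = plugin.encrypt(PROBE)
        return plugin.decrypt(ciphertext, annotations) == PROBE
    except Exception:
        return False

class EchoPlugin:
    """Trivial stand-in plugin used to exercise the check."""
    def encrypt(self, plaintext):
        return plaintext[::-1], {}
    def decrypt(self, ciphertext, annotations):
        return ciphertext[::-1]

class BrokenPlugin(EchoPlugin):
    """Simulates a plugin whose KMS key has become unreachable."""
    def decrypt(self, ciphertext, annotations):
        raise RuntimeError("KMS key unreachable")
```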
C: In the end it will still require action: we cannot really take action ourselves based on whatever happened. We can notice that the key was deleted, but as a plugin, or even in the API server, we cannot really react, because we don't know what the new key is, or whether we need to do a migration. The best thing we can do is notify the administrator that something really, really critical came up and they need to look into it ASAP.
D: Actually, one thing: if you get into a case where you have an in-memory sub key encryption key, it's a little weird. If you had the encrypted version of it, and you asked the KMS to decrypt it, and it couldn't, then if you had the in-memory, unencrypted version of it at that instant, you could immediately try to make a new in-memory key and ask the cloud provider to encrypt it. And if that operation works, now you have a new key encryption key that you could do rotation with, in a desperate attempt of "I need to re-key."
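That desperate re-wrap path can be sketched like this (all interfaces here are assumptions for illustration): if the KMS can no longer decrypt the wrapped local key, but the plaintext copy is still in memory, immediately wrap it under the currently available KMS key so the hierarchy survives a plugin restart.

```python
class FakeKMS:
    """Stand-in KMS: 'wraps' by tagging the data with its key ID."""
    def __init__(self, key_id: str):
        self.key_id = key_id
    def encrypt(self, plaintext: bytes) -> bytes:
        return self.key_id.encode() + b":" + plaintext
    def decrypt(self, ciphertext: bytes) -> bytes:
        kid, _, plaintext = ciphertext.partition(b":")
        if kid.decode() != self.key_id:
            raise KeyError("KMS key deleted or rotated away")
        return plaintext

def rewrap_if_orphaned(kms, wrapped: bytes, in_memory_key: bytes) -> bytes:
    try:
        kms.decrypt(wrapped)
        return wrapped                        # old wrapping is still valid
    except KeyError:
        # Old KMS key is gone, but we still hold the plaintext key:
        # re-anchor it under the currently available KMS key.
        return kms.encrypt(in_memory_key)
```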
A: Yeah, this is really cool. So there was this platform called Keybase, I think it still exists, which is end-to-end encrypted. As long as one device is still kind of connected and somehow has the keys for your account in memory, especially on your smartphone, where you have this in secure storage, you could kind of revive or share the keys with other devices, reset passwords, and magic like that.
D: Okay, I do need to drop a little bit early. I think we can maybe continue any of our other discussions offline. I kind of got the sense that recovery is sort of maybe a hand-wavy thing for the future, but I think we're looking at performance and observability really through the lens of new API changes, new storage formats, and a new reference implementation. We feel like we can address all of those pretty cohesively with those three things, basically.
D
So,
which
is
exciting?
That
means
you
know
like
we're,
like
sort
of
coalescing,
on
an
approach
which
I'm
excited
about.
A: Okay, so one question: let me pause the recording.