From YouTube: Kubernetes SIG Auth 2021-12-14 KMS #1
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2021-12-14 KMS #1
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
All right, so, hey everybody. This is the first meeting, on December 14th at 9am Pacific time, to discuss the KMS work. Y'all want to get started?
B
Yeah, so I shared the doc link, and the TL;DR is: around the Kubernetes 1.23 release, we decided that there are quite a few GitHub issues related to the KMS plugin, in terms of performance improvements and issues that users have run into, and we decided that maybe it's an area we can focus on, try to make improvements, and help graduate that feature from beta to GA.
B
If that's something that we want to do. If not, it would be great if we can brainstorm, in this call, alternative approaches that we can document and then also maybe elaborate on in follow-up calls. As a group, it'll be great if we can decide whether these are the areas we want to work on, and also finalize how we want to do it, because eventually we want to translate this Google doc into a KEP, and the KEP deadline for 1.24 is end of January.
B
Ideally, it'll be great if we can get a PR up in the first two weeks of January. If not for all the categories, we can at least try to get some of it in for 1.24, but the long-term goal is to get a KEP for all these features in the next couple of months, so that we only have to work on the implementation, and that can span across maybe one or two Kubernetes releases.
A
Do we just want to kind of try to go down the doc and look at the comments that have been made? Yep.
C
You want me to walk you through the idea? Yeah, so, basically, when I first learned about KMS I had this whole idea that it was really security focused, and at the end I was, not shocked, but kind of surprised, that it's still not very secure as such; there are still areas for improvement.
C
If we were just to take the vanilla implementation, for example the cache: all the DEKs that are cached in the API server. We know that they are useful for performance improvements, but at the same time they might still be accessible if the API server is compromised. So that's one aspect, and the other one is: since there is a way in the cluster to access the KMS...
C
...that means that if someone were to have access to the host, then they would also be able to impersonate the plugin to get access to the KMS, and either just decrypt the secrets as if they were the plugin or, even worse: I saw an implementation recently, the AWS plugin, that as far as I know doesn't allow you to change the IAM role, so you just have to go through with the master one and add the KEK to the master role.
C
So that means you kind of have the master-role permission on the KMS if you were to hijack the host, which I don't think is a good thing in the end. And I wanted to see if we could address these kinds of issues. I saw in a couple of talks that some Google folks mentioned TPMs as a solution, and some folks also from our work mentioned...
C
...TEEs, or trusted execution environments, such as enclaves, to store most of the sensitive data in one single place and handle every sensitive operation in this trusted place, basically. And they mentioned in their presentation that they wanted to introduce some kind of interface for that, which was something I was also looking into, but that didn't go through, I guess. They wanted to make an announcement, but I didn't see anything about that. So I wanted to see.
C
It was a while ago. I think it was Google, but maybe it was Intel, from around 2019.
A
That attack vector is basically irrelevant, because you cannot guard against it. The only way to guard against it is to run the entire API server in a TPM, which we don't support. So while I understand it as an attack vector, it's not meaningfully addressable because of the architecture of the Kubernetes API server: in order for it to be performant, it stores every resource in memory, so the working set is always in memory.
C
I just wanted to say that I wasn't really sure whether the data in the watch cache was decrypted or not, because at the end of the day, to me, the API server was just querying etcd data, either from the cache or not; the abstraction was there, and it was just doing the transformation by decrypting the data.
C
In that sense, because at the end of the day, we could have the encrypted data in the watch cache.
A
Yeah, and then you would... you have to have it decrypted for performance, right? The reason the entire abstraction exists is performance. So yes, you could encrypt it in memory, but I don't know what problem you're solving at that point. And I'm pretty sure that's the reason the Intel folks never bothered pushing that further, because I asked them after the talk, purposely after the recording was over, because I didn't want to make them feel bad about their presentation.
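The caching trade-off being debated here can be sketched as a toy envelope-encryption model. Everything below is an illustrative stand-in, not the real kube-apiserver transformer code: the mock KMS, the XOR "cipher", and all the names are invented. The point it shows is that the KEK never leaves the KMS, but unwrapped DEKs (and the decrypted working set) live in API-server memory for performance.

```python
import secrets

class MockKMS:
    """Stand-in for an external KMS: holds the KEK, counts remote calls."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)
        self.calls = 0

    def _xor(self, data: bytes) -> bytes:
        # Toy cipher for illustration only; a real KMS uses AES-GCM or similar.
        return bytes(b ^ k for b, k in zip(data, self._kek))

    def encrypt(self, dek: bytes) -> bytes:
        self.calls += 1
        return self._xor(dek)

    def decrypt(self, wrapped: bytes) -> bytes:
        self.calls += 1
        return self._xor(wrapped)

class APIServerStore:
    """Toy API-server storage layer with the debated in-memory DEK cache."""
    def __init__(self, kms: MockKMS):
        self.kms = kms
        self.dek_cache = {}  # wrapped DEK -> plaintext DEK, lives in process RAM

    def _unwrap(self, wrapped: bytes) -> bytes:
        if wrapped not in self.dek_cache:           # cache miss: one KMS round trip
            self.dek_cache[wrapped] = self.kms.decrypt(wrapped)
        return self.dek_cache[wrapped]

    def write(self, plaintext: bytes):
        # Payloads up to the 32-byte DEK length, to keep the toy XOR valid.
        dek = secrets.token_bytes(32)
        wrapped = self.kms.encrypt(dek)
        ciphertext = bytes(b ^ k for b, k in zip(plaintext, dek))
        return wrapped, ciphertext                  # this pair is what goes to etcd

    def read(self, wrapped: bytes, ciphertext: bytes) -> bytes:
        dek = self._unwrap(wrapped)                 # cached DEK: fast, but in RAM
        return bytes(b ^ k for b, k in zip(ciphertext, dek))
```

Two reads of the same record cost only one KMS decrypt call; that is the performance win, and also exactly the exposure C is pointing at: a compromised API-server process can read `dek_cache` (and the decrypted watch cache) directly.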
A
The reason this stuff is written specifically against a Unix domain socket is that it bypasses the authentication problem. We are literally saying: if you can connect to the Unix domain socket, you have passed the single authentication check of the system, and thus you are the API server, and the API server is allowed to perform any action with that service.
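The "connecting is the authentication" model A describes can be illustrated with a minimal Unix-socket echo service. This is a sketch, not the real gRPC plumbing: the socket path and 0600 mode are illustrative, and the point is that filesystem permissions on the socket are the only credential check.

```python
import os
import socket
import tempfile
import threading
import time

def serve_once(path: str):
    """Accept one connection on a Unix domain socket and echo one message."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    os.chmod(path, 0o600)       # filesystem permissions ARE the auth check
    srv.listen(1)
    conn, _ = srv.accept()      # whoever can connect is trusted as the API server
    conn.sendall(conn.recv(1024))
    conn.close()
    srv.close()

def roundtrip(msg: bytes) -> bytes:
    path = os.path.join(tempfile.mkdtemp(), "kms.sock")
    t = threading.Thread(target=serve_once, args=(path,))
    t.start()
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    for _ in range(200):        # wait for the server thread to bind and listen
        try:
            cli.connect(path)   # no tokens, no TLS: connecting is the credential
            break
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.01)
    cli.sendall(msg)
    reply = cli.recv(1024)
    cli.close()
    t.join()
    return reply
```

Anything that can reach the socket file on the host (the scenario C raised) passes this single check, which is why host compromise implies the ability to impersonate the API server to the plugin.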
C
But one thing I wanted to check is what you think about, not the encryption itself, but storing in a safe place all the information that is needed to contact the KMS, because that's still valuable. You might give access to the KMS to an IAM role that, as I mentioned, has more access than just the key that is used in the cluster.
C
Yeah, but I'm wondering if we could include that in the things that we are giving to the plugins, because at the end of the day we are the ones that own every helper function and interface that is given to the plugins. So if we were to provide a set of functions to secure this data, that could be reused by the plugins.
A
So, just to answer your question: yeah, I mean, that's fair, I guess. Doesn't it get you back to the same problem we have with the on-disk based encryption? You just move the configuration into a file, and now you've made that file a secret, but so was the other file, and they're both on the host filesystem.
A
So, are you... I know you work at Red Hat; I don't know if you're familiar with Tang and Clevis. Are you familiar with those?
A
An old co-worker of mine at Red Hat wrote those tools with some other folks. Red Hat calls this network-bound encryption, which is the idea that you have encrypted data on disk or wherever, and as long as you can reach a particular network service, you can, over plain text (it doesn't require any encryption on that connection), basically finish the cryptographic operation required to do the decryption.
A
I think those types of things... I can show you, because I did a lot of this stuff long ago.
A
Though this API is limited by the construct of what we are asking it to do, the encryption and decryption, and the decryption is allowed; it's authorized, right? So I might just leave this as a comment for now, so we can keep sort of thinking about it. But I mean, I guess I don't disagree with security as a thing, because it should be, like, the entire... yeah.
C
That should also be deferred to the plugin at some point: if they want more security on that side, it's up to the plugin; there is not much we can do.
A
The aspects of security that I feel like we could try, and this is something I wanted to actually read and see what they felt like: I know a while back, Rita, you had mentioned that we could help with the problem associated with accidentally deleting the key in your cloud provider, by having some concept of a lease. That way we can indicate to the cloud provider where a key is being used and sort of...
A
...why, and they could prevent that from being deleted very easily. I'd wondered, as an extension of that: could we expand the API to be more about the lifecycle of keys altogether? So instead of it being purely "hey, tell me what key to use, and I'll hold the lease on it until I'm dead," or whatever it could be.
A
That's one aspect of this where I feel like the KMS plugin is in a better position than others to keep up, which is key rotation. Because, as far as I understand it today, and this was vaguely mentioned in the original doc by Alex, where he asked: could we move some of these rotation constructs into the Kubernetes API, so that not every single provider has to rewrite this from scratch? Which is fair.
A
Those steps, while easy to say out loud, require a lot of work. I know this because I wrote this for OpenShift, where I wrote parts of it and Stefan fixed a lot of my mess. But it's there; it does the dance. And of course the dance was extra hard in OpenShift, because it's a self-hosted system, but the problem doesn't go away just because you're not self-hosted; it's just maybe slightly easier.
C
The main way it could be valuable is, first, for rotation, because then you can know when the KMS is creating a new KEK, which is something that you don't know yet. So if you want to start some key rotation on the side, that's not possible today, because you don't have this awareness. It would also help users, as you mentioned, track down which keys are still used, because currently there is no way to do that. I don't have any awareness of where a key is used: if a new key version is created, for example in GCP, they don't know if version 3 is still used, or if they can delete it, or whatever; they have no awareness at all of this process.
A
So that key is abstracted in the KMS, right, so Kubernetes is completely unaware of it. And I think what you might be saying, or what the thought or request was, is that maybe that should be more observable. Yeah.
B
I was gonna say, that's why, down in the proposal, we have this one where the KMS plugin actually provides some kind of metadata about the key it's using, like the key ID and all that. Even though it doesn't make sense to the API server, it's something the API server can still send back to the KMS plugin on the next encrypt or decrypt, so that the KMS plugin can use that information rather than having to find it every single time.
D
May I suggest something? Since we're almost halfway into the call, and as you can see there's a phase one and a phase two, there's a lot to be discussed. I was wondering if we can focus on phase one, which we kind of jumped into: we started talking about recovery, and we're beginning to talk about that and the key rotation stuff.
D
So I wonder if we could maybe jump down to phase one, just to help facilitate the conversation, so that maybe by the end of this call, or in future calls, we would know what we need to do for phase one.
B
So yeah, in terms of observability, the main intent for adding that is: today the KMS plugin is not aware of which secret it's encrypting or decrypting, in terms of just metadata.
B
Okay, sorry about that. So yeah, the KMS plugin today is not aware of which secret it's encrypting or decrypting, and then on the path back, the API server does not know which key the KMS plugin used to encrypt or decrypt. So that is one thing. And then, apart from that, there is actually no way for us today to have a trail where we can say this particular request originated from the API server...
B
...this came to the KMS plugin, and then, if we want to audit that on the external KMS, like Key Vault or Secrets Manager, there is no way for us to actually have one trail; there's no kind of ID or anything that we have today. So I think, in terms of observability, the two things that we proposed are: one is an audit ID, a UID generated for every encrypt operation. It's a unique ID for each operation, so one for encrypt and a different one for decrypt.
B
So the audit ID is one thing. And then, also on the observability side, just so that the KMS plugin and the API server are more aware of what operations they're performing: the KMS plugin today gets just the payload and is basically in the dark, but it would probably get a secret name, or some other metadata, that it can use for logging purposes, which makes it easier for debugging. And then the second one is, when the KMS plugin is done with the encrypt or decrypt operation...
B
...it would send additional metadata back to the API server, where it says: I used this particular encryption key version, or any other metadata that it thinks is safe to be shared with the API server. The API server keeps it, and then when it makes the next encrypt or decrypt call... actually, I think...
B
So when the API server actually makes the decrypt call to the KMS plugin, it can say: hey, look, this is what you encrypted it with, this key version; maybe this is useful to you, but this is what I'm giving to you. And then, at that point, if the KMS plugin is maintaining some kind of cache where it says this key is deleted and all that, it can avoid making those external API calls and make a decision from there.
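A rough sketch of the request/response shape B is proposing. The field and class names here are invented for illustration (they are not the actual proto): the plugin returns an opaque key ID alongside the ciphertext, the API server stores it and echoes it back on decrypt, and the plugin can then short-circuit on a key it already knows is gone, without an external call.

```python
from dataclasses import dataclass, field

@dataclass
class EncryptResponse:
    ciphertext: bytes
    key_id: str                              # opaque to the API server
    annotations: dict = field(default_factory=dict)

class Plugin:
    """Toy KMS plugin that tracks which of its keys have been deleted."""
    def __init__(self):
        self.current_key = "key-v2"
        self.deleted_keys = set()
        self.external_calls = 0              # calls that would hit the real KMS

    def encrypt(self, plaintext: bytes) -> EncryptResponse:
        self.external_calls += 1
        return EncryptResponse(plaintext[::-1], self.current_key)  # toy "cipher"

    def decrypt(self, ciphertext: bytes, key_id: str) -> bytes:
        # The echoed key_id lets the plugin decide BEFORE calling out.
        if key_id in self.deleted_keys:
            raise KeyError(f"{key_id} was deleted; cannot decrypt")
        self.external_calls += 1
        return ciphertext[::-1]

# API-server side: persist the ciphertext together with the opaque metadata,
# and hand the metadata back on the next decrypt.
plugin = Plugin()
resp = plugin.encrypt(b"my-secret")
stored = (resp.ciphertext, resp.key_id)      # what would sit next to the value in etcd
assert plugin.decrypt(*stored) == b"my-secret"
```

If the plugin later marks `key-v2` as deleted, the same `decrypt(*stored)` call fails fast with no wasted external round trip, which is the optimization B describes.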
A
So, looking at this, I have some obvious concerns. For one, with the audit ID: that is a user-controlled string.
B
Yeah, what I'm suggesting is that, for every operation, the API server generates a unique ID internally, just something before it does the operation, rather than the user passing it in. And then there is some kind of trail, so we can filter logs on the whole path as we trace it, using that particular ID. And then, also, when the KMS plugin gets that ID...
B
...it can use that ID to log into the KMS store, so that if you go into the KMS store and look at the logs, you can use that audit ID to correlate: oh, this encrypt operation was for this Kubernetes secret, and it came from this particular cluster; for instance, if they're using the same KMS store across multiple clusters too.
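The trail B describes, one server-generated UID carried through the API-server log, the plugin log, and the external KMS log, amounts to filtering heterogeneous logs on a shared field. A minimal sketch (the log entry shapes are invented for illustration):

```python
import uuid

def new_audit_id() -> str:
    # Generated by the API server per operation; never taken from the user.
    return str(uuid.uuid4())

def correlate(audit_id: str, *logs):
    """Pull every entry carrying the given audit ID, across all log sources."""
    return [e for source in logs for e in source if e.get("auditID") == audit_id]

# One encrypt operation leaves entries in three places.
op = new_audit_id()
apiserver_log = [{"auditID": op, "verb": "create", "resource": "secrets/db-creds"}]
plugin_log    = [{"auditID": op, "op": "encrypt"},
                 {"auditID": new_audit_id(), "op": "decrypt"}]   # unrelated operation
kms_log       = [{"auditID": op, "api": "kms.Encrypt", "key": "key-v2"}]

trail = correlate(op, apiserver_log, plugin_log, kms_log)
assert len(trail) == 3   # full path: API server -> plugin -> external KMS
```

A's objection below is about where `op` comes from: if the value were accepted from the client request rather than minted server-side, any user could inject arbitrary (or enormous) strings into every downstream log.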
A
...not set by the user; the API server sets it itself, and I thought this was that same value. Basically, what I thought this was trying to do is correlate all the way from the user's request down to the KMS, which to me would be more useful than "a particular encryption operation caused all of this to happen." To me, it'd be much more useful to say a particular user request caused these N encryption operations to happen within the KMS.
A
We just don't have... yeah, it's just not tracked like that today, in maybe a good way. Because it really should be the audit ID: you should be able to look at the Kubernetes audit log and say Mo made a request to create this ConfigMap or Secret...
A
...and here's the encryption that happened because of that. You should be able to trace all of that, in my opinion. So if we can make that happen somehow, that would be nice, but it cannot be driven by a user-controlled string; that sounds awful. Now, that might be as simple as saying that we need to come up with a better way to control this field.
A
You know, we can't just let any user set this header; we need some level of checks in place. The other bit that's a little unclear to me, in some of these structs, is what the actual data would be. So it'd be helpful to me to just have some exploded examples: here's maybe one example, and if there are a few variations, especially, those too, just so I understand. Thinking it through: for example, today we only get the decrypt response; we don't get this one, right?
A
Are we expecting the API server to store this? It has to, somehow, because it's supposed to return it back. So what that immediately tells me is that we're proposing a storage schema change, which immediately means we can only teach the API server how to do this in 1.24.
A
Maybe it would be okay if this was an alpha API, because we'd be like: yeah, don't do it, it's alpha. But in the beta? Please.
A
Maybe we can work around that by starting over: part of me thinks that this whole thing should be an alpha API, just a new one. We just leave the beta one alone, start with an alpha one, and say: cool, we're gonna break all sorts of stuff.
A
...that is like a Kubernetes API on its own, that can express proper additions of new data without requiring the whole thing to be thrown away. And maybe that's the first step; maybe that's the step we do in 1.24: teach the API server about a new, properly structured, either binary or human-readable format that can be expanded over time. That way we could add stuff to it and have some hope that, if you downgrade, it can still read the data back.
A
Yeah, and one of the things that was outlined by Alex in that original doc was that you can't do anything with this data without the API server. It's code, right? You can't process it with an external tool, because it's basically a made-up format that we invented and wrote into etcd, and now we're like, cool, you're locked into using an API server. That seems sketchy, because at least beforehand the proto was technically unmarshalable, and the JSON was definitely unmarshalable.
A
This is basically like: nah, you've gotta use our stuff. So there was some specific industry standard, I forget the name, and Alex had pointed it out, like: hey, maybe we should store it like this, so that you could realistically write a new tool, or use an existing tool that wasn't the API server. So those are sort of just my initial thoughts. First off, Damien, yeah.
C
I have a few questions, because there are a few aspects that I'm not sure I understand properly. What exactly is the data that we are trying to log from the KMS plugin side? What's the actual information that we are looking for here?
D
I mean, as you know, today it's really hard to tell, as Mo mentioned, which request actually triggered the encryption.
A
This is what I think of, right? I don't want to look through the poorly written logs within the API server. I want to look at the proper, structured, fully supported feature, which is Kubernetes audit logs, look at that, and then correlate with the KMS, and, like, cool, right? And I want it to be able to be done in a way that a machine can do it, so that when I have to go write this code, or anyone else has to go write this code into their products, it just works.
A
You don't need to invent some craziness; this is a thing you can do. And that's where my hesitation came from about the audit ID: I don't want a user to be able to, you know, send a three-megabyte audit ID as their thing and watch my KMS crash and burn on fire. It's like: I don't know what to do with this.
D
Just to summarize: it sounds like we need to take some time to think about how to leverage the existing audit ID, and if not, then, because we're adding a new one, there could be backward-compatibility issues. So then do we want to create a new API, an alpha API, right?
A
You know, I'm hand-waving; maybe there's some feature flag or whatever. Whenever the feature flag is set, you could have audit IDs emit extra fields with new data: totally fine, totally backwards compatible, should not cause any harm. The backwards incompatibility is associated with whether we are sending extra data to the KMS, or receiving extra data from the KMS, in those messages.
A
Yeah, so if we're gonna go through the process of doing schema changes, it's going to be basically impossible if we're saying that this is going to change the existing API. Then, every time we need to do a schema change, we need to have one release of rollout, where we just roll out the ability to parse the schema, and then in the next release we can actually do any real work using the schema that we just spent the last three months waiting to be funneled out. That seems really impractical.
A
At that point, it seems better to start with an alpha API, where we can say there is no stability of the schema. This is an alpha API; we're just working through it, and building in either the extensibility or whatever we need to be able to promote it to a beta API without having to worry, or just say that we will not graduate it from alpha until we're really done with the schema.
E
Oh, I wanted to add something several times, but then the conversation moved on. So, basically, I agree with Mo that we can't rely on anything, even from the client. Sometimes it might be quite useful to have this information, in case it was a good user that has no bad intentions; otherwise it can go really bad.
E
Also, I think it's the right way to go with an alpha API version, because then we are more free to experiment, try something out, and not bother everyone too much. And, I mean, don't hate the player, hate the game, right? So I think with that attitude, once we feel like, oh, at the end it's just two fields, we can still kind of backpedal and try to add it to some existing API. So it would give us more freedom.
A
Yeah, I suppose, if we did build this out as an alpha API, and we came to the conclusion when we were done with it that, oh, it's not actually that much different from the beta API, let's just fix the beta API and delete the alpha, that would be fine. I don't think that's breaking anything. It's more that...
A
...if it's an alpha API, there is basically no accidental way for you to use the alpha, right? And also there's no way for you to use it and say that you're in a supported configuration in any sense. That's a pretty significant guardrail, to say: yeah, the reason we don't support downgrades right now is because we're not done; we're not in a state where we can reasonably support you doing that. It's much harder. Like, the reality is, the encryption at rest...
A
...is sadly in its current state, at least in the cloud-provider environments. And, to be fair, it's been on the order of less than a week since someone last asked me about it. They're like: hey, I need to deploy encryption at rest, and the on-disk stuff seems dumb, so how do I use a KMS? Oh, you want to know how to use a KMS? You're gonna be sad.
A
The hesitation I have with that is that it would probably require us to change the code in place, and it's very hard to guarantee that you have not subtly changed the behavior of the code. I remember recently someone downstream got broken by a change to the HPA controller, or some other controller, because they had some new logic that was feature-gated, but some of it slightly leaked into the non-feature-gated part and it suddenly broke them downstream. So it's very hard to make sure that doesn't happen.
B
I think recoverability is still a smaller topic, so the idea is...
C
Just before that, I had one topic, if we want to stay on observability. I just wanted to add one thing: I have noticed that currently it's a bit annoying that we have all these plugins and we don't have common metrics between them, so there is no way to actually create generic alerts. If you need to support multiple plugins, then you have no way to create alerts with just one alert for all of them.
A
We might be able to provide, not necessarily a full-scale sub-project, but a set of libraries, or maybe a library, which is basically: here's a Go function for encryption, here's a Go function for decryption, do whatever you want, but the rest of the code looks basically exactly the same across all providers, because it's just a KMS API. And obviously you're not required to use it; it's just that it'd be cool if everyone did, so that the on-ramp is easier.
C
Exactly what I was referring to. And the goal would be to provide a function that would return the metrics that we would consider best practice, covering most of the observability use cases that we have, for alerts, for example.
C
So, for example, we want to count the number of decrypts, so that we have a rate of decrypts and a rate of encrypts, stuff like that, so that we have more observability on the actual number of queries that are made to the KMS. And then it's up to the plugin to actually register those metrics and expose them.
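The shared metrics helper being converged on here might look something like the sketch below. All names are hypothetical and it is stdlib-only; a real version would register Prometheus collectors, but the shape is the point: plugin-agnostic counters plus latency observations, recorded the same way by every plugin so one generic alert works across all of them.

```python
import time
from collections import defaultdict

class KMSMetrics:
    """Common metrics any KMS plugin could record identically."""
    def __init__(self):
        self.counts = defaultdict(int)       # operation -> total calls
        self.latencies = defaultdict(list)   # operation -> seconds per call

    def observe(self, operation: str):
        """Context manager: times one operation and bumps its counter."""
        metrics = self

        class _Timer:
            def __enter__(self):
                self.start = time.perf_counter()

            def __exit__(self, *exc):
                metrics.counts[operation] += 1
                metrics.latencies[operation].append(
                    time.perf_counter() - self.start)

        return _Timer()

metrics = KMSMetrics()

def encrypt(plaintext: bytes) -> bytes:
    with metrics.observe("encrypt"):         # the one line a plugin author adds
        return plaintext[::-1]               # toy stand-in for the real KMS call

for _ in range(3):
    encrypt(b"payload")
```

After the loop, `metrics.counts["encrypt"]` is 3 and three latency samples have been recorded, covering both the rate-of-operations and the per-request latency cases raised above, regardless of which plugin is underneath.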
A
Things like, for example, latency on the request: I feel like such a library could record that by default. Obviously it's still up to you to expose it, but we would basically be recording it all the time: encryption took 10 milliseconds, cool, I'm storing it in my metrics structure. It's your choice to expose that metrics structure in whatever way you have consumers consume it, but it's all there. So, yeah.
A
All that sounds great to me and, to be fair, lines up really well with what we've done with the exec credential plugins: there is a sample library that includes a fully functioning one that you can plug into and change your stuff. So I think that's well within what we've done in the past, and in retrospect it sounds like a mistake that this doesn't already exist.
A
Sorry, so it's not purely metrics; that one is a full Go library for you to build your entire plugin with, basically with the correct set of best practices. Damien was pointing out just metrics as one best practice; I was pointing out that, like, 90% of this looks exactly the same.
D
Well, I have questions, like: right now we're lumping observability and recovery together as phase one, but for a KEP do we want to just focus on observability?
D
I prefer separating them, just for the ease of discussing and moving forward. Otherwise we just end up going back and forth on the PR over and over.
C
If I can just give my opinion on that: to me, both topics are similar, and the answer is observability, because for recovery I don't think the mechanism that was proposed, to have a way to force-delete the secrets, or any resource, is the right solution. And the reason for that is that, if you are a user, you want to know which secrets were deleted, because you may want to recreate them.
C
In the case that they are unrecoverable, to me, the best vector for this kind of information, to alert the user as fast as possible, is to have metrics, and then alerts, to tell them: hey, there are unrecoverable objects in your cluster that need to be fixed, and here is the list of the actual secrets, for example, that you need to delete and recreate. And to me, that's a manual process that we don't want to automate, because the users need to be aware that some changes need to be made.
B
So, sorry: today, if the user tries to delete, and if it's not recoverable, basically they cannot delete it. So that's why there's a proposal that maybe we can add a force flag, where the user says, okay, now I'm not able to list my secrets because of this one unrecoverable secret, so they run kubectl delete secret with the force flag, at which point the API server says: oh, this is force, I'm not even going to try and decrypt it; I'm just going to go and remove it from etcd.
A
Yeah, so there's a bunch of problems with this one. One is: having one secret that is undecryptable cause all of this to fail is really catastrophic; it basically destroys the API server, it just becomes unusable. That's one problem. The other thing is, any of these force-delete mechanisms explicitly bypass Kubernetes lifecycle management, so they bypass finalizers and stuff, which then breaks the Kubernetes API and some of those guarantees.
A
The gist I got was that this is not a problem that we can meaningfully handle. It is basically a storage-level problem that the Kubernetes API cannot meaningfully handle. So it might be that the existing behavior is fine, which is to just break and stay broken until manually corrected, and certainly we can work on trying to make it easier to understand what is broken, and thus allow admins to go in and remediate manually.
A
I think my desire here would be to try to come up with ways, similar to what Rita had said about having leases and stuff on keys, to basically make it incredibly hard to get into situations where you cannot decrypt data, because once you do get into that state, the Kubernetes API no longer works. The semantics of the API have been broken: either you have to be able to ask for partial lists, which is bad, because now you don't have a full list...
C
Yeah
dude,
my
two
cents
on
that.
I
I
think
you
raised
a
good
point,
and
the
problem
here
is
that
it
might
be
better
to
stay
in
a
broken
state,
because
even
if
we
were
to
force
delete
all
the
unrecoverable
objects,
we
have
to
consider
that
some
other,
like
users,
may
not
only
encrypt
circuit
secrets,
but
all
the
information
stuff
like,
for
example,
in
openshift,
we're
encrypting
routes
so
basically
like
dns
behind
services,
we're
also
encrypting
old
tokens.
C
So if we were to force-delete this kind of information, the cluster would be in a worse state than if the list was failing, because you would basically break everything that was relying on the data that was deleted, whereas in the first place you would have had just the list that was broken. To me, that's the worse way to deal with it.
B
Yeah, the problem actually comes, I think, when you have a managed offering. The only way today to recover, to make your list calls work again, is to go into etcd and delete it. And if you are running on a cloud provider, then there's no way you can actually do that, which means either cloud providers are forced to have an API to do it, or it only works through support tickets. Until then, the list is just broken, right?
A
Assuming the cluster is functional enough to let you do that, obviously. Yeah, I've broken many a self-hosted cluster before. We're a little bit over time, so I think we can pause here, and then, Rita, for next steps, which is a good follow-up.
E
And one more thing, another argument for having manual involvement from people: imagine we would somehow force-delete certain entries and have a functioning cluster again. What if I could provoke this, and push the cluster into a state which would make it open to attacks for me, so I could somehow get rid of some of the keys?
E
But it's also like: imagine you're an SRE in the middle of the night. You get the wake-up call, you look it up, and you're like, okay, I need to delete those four or five things and then I can go to sleep, or it can go bad, right? It's not like the middle of the day, where you're fresh and can think rationally easily. So humans are also a potential issue.
A
Yeah, overall, my two cents is: I think it's fine to have many smaller KEPs, so we don't try to boil the ocean and never get anywhere. The only high-level concern I have about all of that is that I just want to make sure that the end state we eventually reach is the state that we wanted to be in, and not a series of half-states that never coalesced into a larger picture.
A
I have faith that if we're all meeting and discussing and making progress together, we'll be fine there too, but trying to write a KEP and then have it answer every single KMS problem probably is not going to work. No. And also, if I understand it correctly, you can't really change the scope of a KEP once you've written it very broadly, because it makes it very hard to track, right? What's alpha, what's beta? You've just added a bunch of new requirements.