From YouTube: Kubernetes SIG Auth 2021-09-29
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2021-09-29
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
Hey everybody, so this is the Wednesday, September 29th, 2021 SIG Auth meeting. Pretty light agenda today, but let's go ahead and get started. Y'all see my screen? My screen sharing... I lost it a little, sorry. All right, Anish, did you want to talk about your retroactive KEP?

B
Yeah, so I added this to the agenda. Last week, Rita had been helping review the PR; she added a couple of comments and we've been responding to those. So I just wanted to bring it up again in the agenda so we can get more reviews. Also, in terms of the releases: we actually cut a release candidate for the driver this week, and then we're going to publish an official image after the KEP is merged.

A
Cool. So I think I saw the comment where you said you guys were working towards a release candidate for, like, 1.0. Are you?
A
Is this KEP sort of in line with that 1.0 sort of state? Yes? So I think Mike had said he would look over this from, like, a SIG Auth perspective. Are you hoping to use that review as a... like, I know, Mike, you're on the call. So if Mike was happy and, like, approved it, in whatever sense you approve an already-done thing...

C
So is this actually a driver implementation, or is it part of the central infrastructure?

B
So this KEP should have all of it. We tried to go back all the way to where we started, and then we talked about alpha and beta, and right now we have a release candidate for GA, so it goes over all of those. But if the preference is to split it into two different ones, so we can get the retroactive one in and then just have another one for just the GA updates, then we can do that as well.
C
Yeah, that sounds good. Is it, like, in LGTM state from the perspective of the project leads?

E
I reviewed it. I think it's really, really close from my perspective, but like Anish mentioned, it would be good to get at least one more review.

C
Yeah, that sounds good. I can do the additional review; I think I have context from the earlier reviews I did. And yeah, it sounds like that's ready to go.
A
So, just for clarity: let's say, Rita, you finish your review and you get all your desires addressed, and same with Mike, and let's say we merge the KEP. Are we calling that a formal plus-one API review from SIG Auth? Meaning that if you then ship the 1.0 release... basically, are we saying that the 1.0 release that comes after this, that meets whatever is in the final state of this KEP, is a GA API? Sort of following on from that: I realize it's a subproject, and maybe it doesn't have the same exact conventions as Kubernetes core, but I'm trying to understand what we intend for an external party to observe from this 1.0 release. What are they getting? Are they getting a stability guarantee?

B
Yeah. In terms of how we have designed, I mean, written the KEP, it's around the core functionality, right, rather than focusing on the API aspects, because CRDs are something that we added later. We have focused more on the core functionality, and then, in terms of issues, how you can recover, and then, like, trying to add metrics. So we went a little bit over the PRR aspect of it; it's around how to enable it, disable it, and then debug in terms of errors, and what the core functionality is.

Not for the 1.0, but we are thinking about a few changes in terms of the CRDs post-1.0. So we're going to look at graduating the API post-1.0, like going to a beta API and then a v1.
C
Well, I mean, I think it is definitely a recommendation; however, there is precedent for GA features depending on at least beta APIs. I don't know about alpha. Is that true? That is true: certificate rotation has been GA for a very long time, and...

D
Can we really, like... does that mean we effectively have to treat the alpha API as a more supported, more stable thing than it maybe actually is? Or we have to keep it around forever, because removing it would break this 1.0 version that we said was stable? That's what I would really focus on, right.

Can we really make that promise if, under the covers, you have to have this alpha-level thing which we know we want to progress to beta and progress to v1? And I would expect we would not want to support the alpha API forever, and once the v1 is there, we would like to try to shift usage onto it. Yeah, that seems kind of strange. Is there a reason not to wait until the APIs that underlie it graduate?
B
So I think it's mostly in terms of usage: users actually want a GA implementation of the driver. And then, on the custom resources: initially, when we started off, all the metadata that was required for the volume mount was provided as part of the pod spec itself, right. Then, to support pod portability, we basically added these custom resources and switched over to having the user configure that through custom resources. So with the 1.0 we are trying to say that the APIs will not change. I mean, I think it's what you said, right: the alpha API is very close to what we can call beta today.

D
We want to avoid a situation where we would say you have to upgrade your 1.0 Secrets Store because it's going to stop working, because the alpha API has graduated. That doesn't seem like a good situation.
E
I think another thing to mention is that the API will change for certain features. So though the driver may go to v1 or GA, I think the core functionality's APIs should also be graduating; however, the specific features may not be going to v1, because some of these are still being tested, or it's a feature flag that people can turn on and off. ...Go ahead.

D
That can be okay, like if it's a particular field in the API that you have to enable. But which endpoint we even talk to, that's pretty breaking. If we say the v1 expects to talk to the alpha endpoints, and if those aren't there then it's non-functional, and later, when we move to beta or v1 endpoints, we expect the driver to talk to those instead: that's very breaking. As opposed to: we got the API info, and there's this one feature in the API object that may be on or off depending on feature gates. Rolling those out in a controlled way makes sense, and that's an easier story to tell: there are these features in the pipeline, and if you want them, you need to update your driver from 1.0 to 1.1 or 2.0 or whatever. "Upgrade to get new features" is a much more normal story, and easier to sell, than "upgrade to avoid breaking what was previously working, because the APIs are no longer alpha."
A
So, if I could, as a suggestion, and I'm trying not to be prescriptive here, it's just sort of an exercise: before you guys do a 1.0 release, would it be possible for you to go through your custom-resource-based APIs, your CRDs, effectively make the changes that you want to make, and basically cut a v1 and release with the changes that you believe to be good today?

After that, you can expand that API over time very carefully, sort of following what you would expect of a v1 API to do. Or you could build a brand-new v2 API that starts off in alpha, iterate on it, and then at some maybe 2.0 release you'll say, hey, we're done with the v2 CRD-based API now, and the 2.0 release requires that v2.

Basically, a different way of saying what Jordan said is that it's totally fine to expand a v1 API in a backwards-compatible way. And you could have fields that are inert when feature flags are not enabled. The CSR API has those: CSR v1 has expirationSeconds on it, right, and that's a beta, feature-flagged field. If you turn the feature flag off, that field becomes inert. So that's all totally fine, but we're not going to break the CSR v1 API, right? We might add features to it, and we might one day do a CSR v2.
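The inert-field pattern described above can be sketched roughly like this. This is illustrative Python, not Kubernetes source; only the gate name `CSRDuration` and the `expirationSeconds` field are borrowed from the real CSR API, and the helper is hypothetical:

```python
# Hypothetical sketch of the "inert field" pattern: a feature-gated field
# is dropped on write when its gate is off, so the v1 schema stays stable
# and turning the gate off is not a breaking change.
FEATURE_GATES = {"CSRDuration": True}

def strip_disabled_fields(obj: dict) -> dict:
    """Return a copy of obj with fields controlled by disabled gates removed."""
    out = dict(obj)
    if not FEATURE_GATES.get("CSRDuration", False):
        out.pop("expirationSeconds", None)  # field becomes inert, not a break
    return out

csr_spec = {
    "request": "BASE64-ENCODED-CSR",       # placeholder value
    "signerName": "example.com/signer",
    "expirationSeconds": 3600,
}

FEATURE_GATES["CSRDuration"] = False
print(strip_disabled_fields(csr_spec))      # expirationSeconds dropped

FEATURE_GATES["CSRDuration"] = True
print(strip_disabled_fields(csr_spec))      # expirationSeconds kept
```

The point of the sketch is that clients written against v1 keep working in both gate states; they only gain or lose the optional field.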
D
The primary motivation for tagging this as v1, which is consumers' desire to have a stable, supported component version, I think is a good indication that the dependencies of this thing also need to be at a stable, supported level for us to actually be able to make that promise to the people that consume this.
A
Thanks. And yeah, we can continue this discussion next time too, if you guys have feedback or concerns or whatever. Awesome. Does anyone else have anything on this before we move to the KEP-2799 discussion?
F
Yes, we talked about this in the previous meeting. At that time it was ambiguous, I mean, ambitious: we were trying to deprecate the token controller, but after the reviews we made the scope smaller. Basically, we focus only on the auto-generated tokens. So there are two actions, just as a summary, there are two actions in this KEP: one is stop auto-generating service account token secrets, and the second is remove unused auto-generated tokens.

So I'm just bringing it up here to get more attention on this. I mean, the sooner that we can get this in, the fewer of these tokens will exist.
A
If I could ask a question on this: I've seen the comment by Clayton, and I haven't had a chance to read it again, but does the current state of the proposal say that if you make a secret with the correct annotation that the token controller is supposed to fill in, are we saying that we're going to break that, or are you saying we're going to keep that working?
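For reference, the annotation-driven flow being asked about is the long-standing manual token Secret: you create a Secret of type `kubernetes.io/service-account-token` carrying the `kubernetes.io/service-account.name` annotation, and the token controller populates it with a non-expiring token. A minimal manifest (the names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token            # illustrative name
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
```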
A
Okay. Is there any plan for ever breaking that function? I'm trying to understand. The perspective I have is: I'm very sympathetic to users that have uses of this, and I'm also a user of that functionality myself, primarily because I have to support both new and old Kubernetes versions. So I don't have the luxury of only using bound service account tokens; I have to basically support both worlds at the same time.

F
Yeah, yes. The manually created tokens are not in the scope of this KEP for now, I think, because we don't have an alternative for the manually created, non-expiring tokens.
G
Yeah, I'm curious how you are going to detect and indicate that a particular token has been used. I like the idea of saying "if this one's been used in the past three months, I'd better not delete it," but I'm not sure how you determine that in the API.

D
By annotating secrets that have been used with a pretty coarse-grained indicator, like the date they were used, so that they get written to at most once a day.
D
Okay. And then you can query that; it's visible in the API, so you can locate them. If you want to eliminate that usage, you can audit them. And then the controller can use that information, at a defined time interval, which I would expect the cluster admin to be able to control, to say: clean up auto-generated tokens that haven't been used in X time period.
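The tracking-and-cleanup scheme outlined above could look roughly like the following. This is a hypothetical Python sketch of the policy, not the actual controller; the annotation key and data shapes are illustrative:

```python
# Sketch: a last-used date annotation, written at most once per day, lets a
# cleanup pass remove auto-generated tokens idle longer than an
# admin-configurable period. All names here are illustrative.
import datetime

LAST_USED = "kubernetes.io/legacy-token-last-used"  # hypothetical key

def record_use(secret: dict, today: datetime.date) -> bool:
    """Stamp today's date. Returns True only when a write is needed,
    so each secret is written to at most once per day."""
    ann = secret.setdefault("annotations", {})
    if ann.get(LAST_USED) == today.isoformat():
        return False
    ann[LAST_USED] = today.isoformat()
    return True

def should_clean_up(secret: dict, today: datetime.date, idle_days: int = 90) -> bool:
    """Admin-tunable policy: remove tokens unused for more than idle_days."""
    last = secret.get("annotations", {}).get(LAST_USED)
    if last is None:
        return True  # never observed in use
    age = (today - datetime.date.fromisoformat(last)).days
    return age > idle_days

today = datetime.date(2021, 9, 29)
s = {"name": "default-token-abc12", "annotations": {}}
assert record_use(s, today) is True       # first use today: one write
assert record_use(s, today) is False      # same day: no further writes
assert should_clean_up(s, today + datetime.timedelta(days=91)) is True
```

Because the indicator is date-granular, auditing "which tokens are still in use" is a simple list-and-filter over the annotation, at the cost of up to a day of imprecision.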
G
I guess once a day is fine with me. And would that include usages where someone asked "is this token valid?" I guess it has to, even though it wouldn't be on the authentication chain, but I can imagine using that for "is this token valid? someone presented it to me."

D
Yeah, yeah. I think the first proposed stage is to stop generating new tokens. I think 1.20 is the oldest supported kubelet version that would be using bound service account tokens, probably plus one version, actually. So at that point, all pods that are running in the system would have been created on an API server version where bound service account tokens were in use, so at that point we no longer need the auto-generated ones to mount into pods.
A
Okay, a related question to all of this, really a question related to the explicitly requested tokens, which I know we've said isn't really in this KEP, but as a possible approach: today, you put an annotation on a secret, or, yeah, it's not on a secret, right, yeah. You say "I want it injected here." Would we consider adding, like, a sibling annotation that means...

Basically, one that would be ignored by older API servers. So on an old API server, that structure would mean "I want a long-lived, non-expiring service account token," but on a newer API server it would mean "I just need you to fill this out via TokenRequest for me, with whatever lifetime; I'm equipped to do the refresh off of my disk, I just needed a specific..." Do we care about that specific case, or is that just, like, whatever?

I can see that's an edge case, and maybe I'm the only one who gives a crap. It's a long skew, yeah, but that's the skew I currently support, so yeah. So I'm not saying that it's not a long skew; I also happen to live in that world right now. So I'm not particularly saying that it's a bad thing to do. I'd...
D
Yeah, I don't think I would be in favor of adding features in, like, the 1.24/1.25 time frame whose only real use case I can see is to support a manifest that works back more than a year and a half. It's, I think, 1.21 and up, so by the time this was enabled by default it would be 1.25. So, like, a six-version skew, which I think is kind of weak.

A
When was TokenRequest GA'd, was it 1.21? 1.21, yeah. I mean, 1.21 is still quite new when you're deployed on the cloud providers, right? Like, 1.17 through 1.19 are quite common there. That's kind of what I mean: the bleeding edge and the reality of Kubernetes are kind of different things.
D
I think my perspective on that is: if you have to support very large skews, you likely will also need to have different manifests.

D
Yeah, I thought what you were going to ask was: what are we going to do about the manually generated, or manually requested, tokens? Are we ever going to find a way to make those not be eternal tokens?

C
So both of those things could change. I think we should challenge ourselves to figure out what the end state is. I don't think we necessarily need to do that for an alpha feature, but as we approach beta and GA, I think removing the majority of tokens from the API is helpful; definitely a boon to security.
A
I mean, if we wanted to help make that automatic: there's no reason that we couldn't have support for automatic rotation, but with no expiration of old public keys, right? Like, you can keep making new signing keys and using them; you just can't get rid of the old ones. And that's still moving the bar at least a little bit forward, and the admin could decide if stuff gets pulled off the truck.

C
Yeah, I think without expiring old keys, the benefit there is going to be limited.

G
I was just going to remind you that it was Jordan's fault, and if it hadn't been so useful, we wouldn't be in this situation. So...
D
Yeah, I would like to, five years later, make excuses for myself and say that my first proposal was actually an endpoint on the API server that you could request tokens from, so they would never hit the API. But that was considered too complex at the time. Should have stuck to my guns, but...

D
On the current KEP: last I looked, most of the comments were addressed, so I expect that to merge more or less in the current shape. I haven't done a final review, but if you have comments, please jump in. The question about token reviews and authentication chains, that's a good one, David. If anyone else has any comments about that one, jump in in the next few days; we'll probably look to merge that around the same time next week.
A
So I can give a quick update. Jordan and David, you were there last week at the API Machinery meeting, but as a very quick recap: I had asked the question of whether we could formalize the subset of the etcd API that we use, so that it would be easier to build, like, a shim for it. The response was: well, the subset of the API is the whole API. Which is fine. So what that leaves us with is...

I mean, it might be plausible, but it seems a little strange to depend on an unbounded API set that can grow over time. Maybe it's fine, I don't know. So, Anish, I know you have a proposal out there, I think it's from both you and Rita, and I've been trying to think of what our next steps are.

I think my overall hesitation is a lack of operational experience with what the shape of such an API would be, and sort of how we'd expect it to behave in a real environment. I think the pain points of KMS come from the fact that it was built without running it; now, when you run it, it does weird things in certain subsets of cases, right?
A
I'm also not fully sure how much of the lifecycle stuff we'll be able to preserve. But as an approach, and I'm looking for feedback from SIG leads and Anish and Rita and others: should we try to, you know, just experiment, like, in-tree, but not merging anything. Just experiment in the Kubernetes code base with experimental versions of the existing KMS plugins that exist for the cloud providers. So basically, make changes to both components and try to solve the problems that we know exist, probably the hardest one being the lifecycle stuff when the key is gone. And I know Rita had mentioned that cloud KMSes have concepts of locks on keys, so maybe we should take advantage of those. But basically I'm kind of thinking of a v2 gRPC API that is radically different from what exists today, and maybe it's a v2alpha1, not even a v2beta; like, a completely unrelated thing. Sort of explore and see if we can solve the problems.
C
Do you think we should start with GA graduation requirements, or do you think we're too far away from that, and we need to do some, you know, fundamental introspection before we proceed on the current implementation path?

A
Yeah, what I'm unsure about is what is sort of technically possible: if you have complete code-level control of changes to internal Kubernetes and the KMS plugins, what is the maximum state that you can reach for the perverse cases of KMS, and how well can you tolerate these failure modes?
A
I'm not sure what the edges of those are. I kind of want to explore them, to try to understand what reasonable expectations are for the GA of this. I mean, we could say that all the problems we've identified so far have to be perfectly solved, effectively, and that's what it takes to GA, but that might set the bar to a point where it's effectively unreachable.
F
Yeah, the one thing is the scalability, right? Because we introduced another external call. So when we read the secrets from etcd, we are doing serialized calls. For example, the watch cache has a page size of 1000, but yeah, for one thousand objects you're going to make one thousand serialized calls to the external cloud KMS, and the latency is pretty large.

Sometimes we had issues where the watch cache would fail to initialize, because of the first list of the secrets: there's a resource version associated with each list call, right, and it just expires, in which case the watch cache doesn't initialize. So I think the scalability issue is one we probably can solve on the open-source side.

We can introduce a cache, or parallelize the calls, or expand the API; I don't quite remember what I proposed last time. Basically, we're going to change the APIs. So that's the main thing that I can think about for graduating this API to GA: the scalability issue, which we probably can solve before that.
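The caching mitigation mentioned above (avoiding one KMS round trip per object read) can be sketched with a toy envelope-encryption model. This is hypothetical Python with a stand-in KMS client, not any real provider's API; the XOR "cipher" is only there to make the sketch self-contained:

```python
# Sketch: cache decrypted data-encryption keys (DEKs) so a LIST of 1000
# secrets does not make 1000 serialized round trips to the external KMS.
class FakeKMS:
    """Stand-in for a remote KMS; every call models a slow network hop."""
    def __init__(self):
        self.calls = 0

    def decrypt(self, wrapped_dek: bytes) -> bytes:
        self.calls += 1
        return wrapped_dek[::-1]  # toy "unwrap", not real cryptography

class CachingEnvelope:
    def __init__(self, kms: FakeKMS):
        self.kms = kms
        self.dek_cache: dict[bytes, bytes] = {}  # wrapped DEK -> plaintext DEK

    def read(self, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
        dek = self.dek_cache.get(wrapped_dek)
        if dek is None:                      # only cache misses hit the KMS
            dek = self.kms.decrypt(wrapped_dek)
            self.dek_cache[wrapped_dek] = dek
        # toy XOR "decryption" of the per-object payload with the DEK
        return bytes(c ^ k for c, k in zip(ciphertext, dek * len(ciphertext)))

kms = FakeKMS()
store = CachingEnvelope(kms)
for _ in range(1000):                        # a 1000-item list page
    store.read(b"wrapped-dek", b"\x00" * 8)
print(kms.calls)                             # 1 round trip instead of 1000
```

The trade-off, as discussed, is that a cached DEK keeps working after the KMS key is disabled, which blurs the key-lifecycle story.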
G
Yeah, I am concerned about how to handle undecryptable data. That's one that I think we need to solve before we consider GA, because right now, not being able to list anything is really bad. And while that data is gone right now, I could see someone saying, "oh no, maybe I have a backup of it somewhere," and trying to bring it back somehow as an action. So I'm also not sure just calling it deleted is correct.

G
So, seven years ago we had this idea that we might want to use etcd to store two different records: a metadata record and a spec record, or a status record. We never needed to do that for scale, so I'm going to say, realistically, nobody's looking at it.
A
You know, like, if you kept some kind of thing like owner references, finalizers; there's probably some other stuff with similar semantics. I was trying to think... I hadn't thought of the restore-from-backup case. Like, with encryption, you either have an encrypted backup or a decrypted backup, and you want to somehow try to reinsert it, overwriting the encrypted data and thus restoring functionality.

A
I hadn't thought of that. That's even more of an edge case of an edge case than I'd expected.
D
I could maybe see that, but finalizers... finalizers are just a string, which is, like, "here's the list of things that need to participate in teardown of this object." And usually finalizers need other information from somewhere else in the object to do their thing, right, like if they're tearing down some external system or whatever it is. And so a finalizer controller is going to be watching for objects, paying attention to ones with a deletion timestamp and with its finalizer string in the list, and then acting on those objects.
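The finalizer semantics described above can be sketched as follows; hypothetical Python with an illustrative finalizer string and object shape, not controller-runtime code:

```python
# Sketch: a finalizer controller reacts to objects that carry a
# deletionTimestamp and its finalizer string, does its teardown using
# other fields of the object, then removes its string so deletion
# can complete.
MY_FINALIZER = "example.com/external-cleanup"  # illustrative name

def reconcile(obj: dict, torn_down: list) -> None:
    meta = obj["metadata"]
    if meta.get("deletionTimestamp") is None:
        return                                  # not being deleted
    if MY_FINALIZER not in meta.get("finalizers", []):
        return                                  # not our responsibility
    # teardown typically needs other information from the object itself
    torn_down.append(obj["spec"]["externalResourceID"])
    meta["finalizers"].remove(MY_FINALIZER)     # unblocks actual deletion

obj = {
    "metadata": {"deletionTimestamp": "2021-09-29T17:00:00Z",
                 "finalizers": [MY_FINALIZER]},
    "spec": {"externalResourceID": "lb-123"},   # illustrative field
}
done: list = []
reconcile(obj, done)
assert done == ["lb-123"]
assert obj["metadata"]["finalizers"] == []      # object can now be removed
```

This illustrates the point being made in the discussion: if the rest of the object is undecryptable, the finalizer string alone is not enough for the controller to do its teardown.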
D
Yeah, but I agree with David. Well, we're talking about GA requirements: I think that's a good question, Mike, and I don't know if it's particular to this implementation, but asking "what would it take for this to be GA" is a good question. And then, once we have those requirements, we can look at the current implementation and ask: can this...

D
Is there a roadmap to get this there, or does something need to start over? But the scalability issue, I think, is significant. Secrets in particular will probably be helped a lot if we stop auto-generating service account tokens for every service account in every namespace, but I think the scalability issues are still there if you have enough secrets of different types. So the...
A
Scale issues, so, one comment I could make on that: I know it's not easy to express in our API, but we could certainly make it easier. Like, I feel like you should be able to ask: please encrypt everything. Not just secrets, not just config maps, not random tokens; just encrypt the world, because I don't know, and I don't want to know, and I don't care: just encrypt everything, right? And to me it should work in that state, even if every single call has to decrypt data.
D
So: scalability, I agree. The undecryptable-data story, I agree with David: we have to have some recourse there that is not, like, "throw away your cluster," which is sort of what is there today. And those two seem like the biggest ones to me. Now, meeting those two may require changing how we talk to KMS, yeah, it may require changes in the implementation, but those seem super important.

D
I would be really interested to hear from the people implementing KMS integrations. We sort of hand-waved around how you integrate with the KMS key lifecycle, and just sort of said, "well, we assume people implementing those will figure that out," and that may or may not have been true.
D
So I don't know that that's necessarily on us to solve, but I think we should at least make sure that the people we're saying should be solving it have what they need from us to do it, whether that's some changes to the KMS API or changes to the decryption config. We should at least be closing that loop and making sure those problems can be solved, because they're super important problems, and if they don't get solved by anyone, then the feature isn't really usable.

D
I can imagine that working, but when you couple that with the performance issues around touching every object in the system that's encrypted, it didn't really work.
D
"What keys are we still using?" So if you wanted to do an integration with KMS to say, "don't let this key be dropped until the data encrypted with it has been re-encrypted with the next key," there wasn't a good way to answer that question.
A
So when I mentioned storing data separately, what I was imagining was a structured payload, but still one single value in etcd, and part of the structure would need to be something like a key ID. But I was also imagining you could have a section that was purposely, like, only references or whatever, that was separated out as a way of saying: I could reconstruct part of the object if you asked, because I know how to do that part. But I can't do the rest, because it's encrypted and I don't know how to decrypt it.
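The structured-payload idea, one etcd value split into a plaintext key-ID-plus-references section and an opaque encrypted blob, might look roughly like this. Hypothetical Python sketch, not a real storage format; the "decryption" is faked since the point is the envelope shape:

```python
# Sketch: one value per key in etcd, split into a plaintext envelope
# (key ID plus fields reasonable to leave unencrypted, e.g. references)
# and an opaque encrypted blob. If the key is gone, only the plaintext
# section can be reconstructed.
import json

def store_value(key_id: str, references: dict, encrypted_blob: bytes) -> bytes:
    envelope = {
        "keyID": key_id,                      # which KMS key wrapped the blob
        "plaintext": references,              # e.g. owner refs, finalizers
        "ciphertext": encrypted_blob.hex(),   # everything else, opaque
    }
    return json.dumps(envelope).encode()

def read_value(raw: bytes, available_keys: set) -> dict:
    env = json.loads(raw)
    if env["keyID"] in available_keys:
        # pretend-decrypt; in reality this round-trips through the KMS
        return {"partial": False,
                "data": bytes.fromhex(env["ciphertext"]),
                **env["plaintext"]}
    # key is gone: serve only what we can reconstruct
    return {"partial": True, **env["plaintext"]}

raw = store_value("kms-key-7", {"finalizers": ["example.com/cleanup"]}, b"\x01\x02")
assert read_value(raw, {"kms-key-7"})["partial"] is False
assert read_value(raw, set())["finalizers"] == ["example.com/cleanup"]
```

As the next exchange points out, the partial object this yields on a get or list may not be valid for whoever originally stored it, which is the crux of the objection.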
D
If we're going to remove it... like, yeah, I'm not sure what we would serve in a list or a get. For something to act on the object, it still needs to be able to get and update it, and if we've dropped all of the object except these few narrowly-defined, probably-reasonable-to-not-encrypt fields, then when we serve a get of that object, we're not even serving a valid object.

A
Yeah, like, it might be semantically valid from the Kubernetes API validation perspective, but not valid for the actor that originally stored it. They might have a controller that implodes on itself because it expects a field to always be set, and then has a runtime panic or something trying to use it. Yeah.
A
Those are the bits I find harder to figure out, whether they're reasonably addressable. Like, I do think we could expand the API and make it easier to express the concepts of rotation and stuff, especially if you had some way to have a multi-leveled config for the same KMS integration, as a way of saying that these things are related and they need to coordinate with each other in some way. Maybe. I think that could make it somewhat easier.

A
I'm not exactly sure how far you actually get with the five percent of the object that we can keep unencrypted. I mean, we could certainly allow you to delete it. We could totally add hooks into the API that say: if you pass some parameter in your delete request, it's a way of you saying "delete, even if you can't decrypt." We could totally make that work; I don't know if that's what we want.
G
Oh, we should definitely have the sixth way to say "force." You know, now it's grace period zero, force, and whatever the fourth one is.

G
I just know that it's a pain point of mine with the current API.

E
Oh, I'm talking about the delete, like, undecryptable data, right? We talked about deleting if you can't decrypt.
D
Yeah, so I mean, that's one possibility: when you configure the encryption, you can decide what happens when stuff is not able to be decrypted. Dropping undecryptable data means violating finalizers and stuff like that, which is bad, but breaking your cluster is also bad.

D
I think if we were going to do something like that, we would need a much more definitive answer from the KMS: not only did this particular decryption request fail, but it failed, and I know that the key is gone, and it's never coming back, and it's permanently failed. This isn't a blip. This isn't the key...
C
So I think, you know, obviously, deleting a key and stuff being unrecoverable is working as intended. It's interesting to think how this conversation would change if we were encrypting the entire disk that etcd was stored on, or if we were doing what you mentioned earlier, where we were encrypting all objects, where it would be kind of obvious that deleting the key totally messes up the cluster, unrecoverably. I don't know; how deeply do we need to solve this?

C
Is it going to be, like, object-dependent heuristics based on specific resources' needs, I guess, more advanced finalization, or, you know...
D
For the integration to put a... like, when a key is in use, there's a lien on the key in the KMS that says this key can't go away because it's used in this cluster, and before you get to delete the key, there's enough information for the integrator to say, "okay, the key is no longer in use, drop the lien." That's really what I want. I don't want this to be our problem to solve.
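The lien idea can be sketched as a refusal-to-delete check; hypothetical Python, a stand-in for whatever a real KMS's lien or hold API would look like:

```python
# Sketch: while a cluster still has data wrapped by a key, the integration
# holds a lien on it in the KMS, and key deletion is refused until the
# lien is dropped. The KMS model here is illustrative, not any provider's API.
class KMSWithLiens:
    def __init__(self):
        self.liens: dict[str, set] = {}   # key_id -> set of lien holders

    def add_lien(self, key_id: str, holder: str) -> None:
        self.liens.setdefault(key_id, set()).add(holder)

    def drop_lien(self, key_id: str, holder: str) -> None:
        self.liens.get(key_id, set()).discard(holder)

    def delete_key(self, key_id: str) -> bool:
        if self.liens.get(key_id):
            return False  # refused: a cluster still depends on this key
        return True

kms = KMSWithLiens()
kms.add_lien("key-1", "cluster-a")        # cluster-a has data under key-1
assert kms.delete_key("key-1") is False   # cannot delete while in use
kms.drop_lien("key-1", "cluster-a")       # after re-encryption under key-2
assert kms.delete_key("key-1") is True
```

The design point, as stated above, is that the refusal lives on the KMS side, so the "key deleted while still in use" state becomes hard to reach in a well-behaving system rather than something Kubernetes has to recover from.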
C
Right, exactly, that seems reasonable to me. I don't know how well we can actually do solving this ourselves.
A
Yeah, so that was the first part of it, right: we would make it effectively nearly impossible, in a well-behaving system, for that state to occur, because we are tracking the information to say "the key is in use by us; please don't delete it, because you are going to destroy the cluster in some subtle way."

So if it somehow manages to occur anyway, because of a bug, or on a human's part, or whatever: do we need to care about that case? Maybe. I mean, a different approach to deleting stuff could be... I don't know if etcd has a move operation, but we could move it somehow, like, move the encrypted data into a different part of the tree and effectively delete at that key, but leave it around as a way of saying, "well, we can't do anything with it anymore."

A
So, we are out of time.