From YouTube: SIG-Auth bi-weekly meeting for 2021-04-14
B: All right, thank you. I'm taking over, just sharing my desktop. All right. So what I'd like to demo for you today is called kube-tls. It is an add-on to your Kubernetes cluster that automatically appends TLS certificates to each pod. The objective here is to make mTLS easy by auto-mounting, in exactly the same way the service account token is auto-mounted, a TLS certificate that is signed by a certificate authority common to the cluster.

B: I will start off by showing the cool bit of the demo, which is: I've got a pod here that is my greeter server. It's just a copy of the Golang gRPC example, with some enhancements made to actually use mTLS, so I will port-forward from it.

B: If I go to this webpage, we can look at the certificate that it is using. This certificate is named with the name of the service account that this pod is running as. It is generated with some DNS names that correspond to the pod's name as well as the matching services, and this is all automatic; I did not specify this in the deployment spec. It is added by a mutating webhook controller.

B: I don't want to take up too much of the time. So, doing that, it generates a certificate; here are the contents of that certificate. I've put it at a well-known certificate location that more or less matches where service account tokens go, and this particular server is actually configured to use mTLS.
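What the demo describes on the server side is standard Go gRPC mTLS configuration. A minimal sketch of that shape, assuming a hypothetical well-known mount path of /var/run/secrets/kube-tls and the usual tls.crt, tls.key and ca.crt file names (none of which are confirmed by the demo):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Hypothetical well-known mount path used by the webhook; the real
	// location in the demo may differ.
	const certDir = "/var/run/secrets/kube-tls"

	// Serving certificate and key mounted into the pod.
	cert, err := tls.LoadX509KeyPair(certDir+"/tls.crt", certDir+"/tls.key")
	if err != nil {
		log.Fatalf("loading key pair: %v", err)
	}

	// CA bundle used to verify client certificates (mTLS).
	caPEM, err := os.ReadFile(certDir + "/ca.crt")
	if err != nil {
		log.Fatalf("reading CA bundle: %v", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no certificates found in CA bundle")
	}

	tlsCfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		// Reject any client that does not present a certificate signed by
		// the shared CA, which is the behavior shown in the demo.
		ClientAuth: tls.RequireAndVerifyClientCert,
	}

	srv := grpc.NewServer(grpc.Creds(credentials.NewTLS(tlsCfg)))
	// Register the greeter service here, e.g.
	// helloworld.RegisterGreeterServer(srv, &greeterServer{}).

	lis, err := net.Listen("tcp", ":8443")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	log.Fatal(srv.Serve(lis))
}
```

Requiring and verifying the client certificate is what produces the rejected connection shown next in the demo.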
B: So if I proceed to localhost, I actually get back a "bad certificate" error. It does not even let me fully establish the SSL connection, because I have not presented a good client certificate. On the other hand, my client application: I just create a job that runs the client application.

B: And this will work just fine as soon as it, you know, actually creates the... there we go. This launches, figures out its TLS certificate, and it has a mutual TLS certificate that is trusted by the application, because it is generated from the same certificate authority that it trusts, and it is all just pretty magical.
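The client job presumably does the mirror image: load its own auto-mounted certificate plus the shared CA bundle, and dial with transport credentials. A compact sketch under the same assumptions as above (the server address is also illustrative):

```go
package greeterclient

import (
	"crypto/tls"
	"crypto/x509"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// dialGreeter dials the greeter server with mutual TLS. The mount path and
// the "greeter-server:8443" address are assumptions for illustration.
func dialGreeter() (*grpc.ClientConn, error) {
	const certDir = "/var/run/secrets/kube-tls"

	cert, err := tls.LoadX509KeyPair(certDir+"/tls.crt", certDir+"/tls.key")
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(certDir + "/ca.crt")
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	creds := credentials.NewTLS(&tls.Config{
		Certificates: []tls.Certificate{cert}, // presented to the server
		RootCAs:      pool,                    // used to verify the server's cert
	})
	return grpc.Dial("greeter-server:8443", grpc.WithTransportCredentials(creds))
}
```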
B: So I come to you today to demonstrate this and to talk about it, because this no longer works in 1.19 or 1.22, when certs v1 goes away, because the certificate authority that it's using is the Kubernetes certificate authority, and it is just assuming that we can approve and issue things, that the Kubernetes cluster itself will issue certificates.

C: Correct me if I'm wrong, but certificates v1 should still be present in 1.22. It's just the v1beta1 API that's going away. Sorry.

B: V1beta1, the legacy signer; it sounds like it's a legacy signer that is going away. So I actually have three different proposals that I would like to talk about, mostly just to get a feel for the room on which of these I should push for. So, I hadn't seen this pod-specific dual certs issue.

B: Oh man, this is pretty much exactly what I'm looking for.

D: This sounds a lot like the resolution of this issue. I think there are a lot of situations in cluster where something like this would be useful; for example, metric scrapers. The kubelet ports are kind of an unsolved problem as far as authentication, and other things, like setting up PKI for admission webhooks and aggregated API servers, are very challenging today. So I'm kind of all for having some support for built-in dual serving certs.

B: I mean, the one thing for 1.22 is simply that the legacy signer is deprecated, right, the v1beta1 is deprecated, and so I need to start generating my own certs, since the existing signers don't solve my problem. So the KEPs that I'm working on are basically just adding signers that would be appropriate for kube-tls to use. I've actually got three different versions of this.

B: So excuse me while I talk about them. One of them is just a generic mTLS signer: it will sign anything that is a valid mTLS request. This is broad-scoped, and so I can see lots of cluster administrators not wanting to support this, wanting to disable it, because if the cluster CA is used to sign something, then, I don't know, it's got permissions from the cluster CA; it could impersonate a node.

B: Version two of this involves a bunch of specific signers. So, a signer for a user, right. Once upon a time I had all of my users set up with client certificates generated in exactly this format. I called out manually to approve client certificates, client CSRs, and then I gave everybody their client certificates. I wrote some little scripting that would automate all of that on their side, so that I never had to generate keys for them; a key was automatically generated.

B: How are you distributing trust? The way trust is distributed is that the nodes themselves are trusted, and when the node creates a pod, when it admits a pod, it generates the key, it creates the CSR, and the signer checks that the right node has the CSR and that it matches the pod that it is theoretically requesting for.
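The flow just described, generate a key, create a CSR object, and let a signer verify and issue, maps onto the certificates.k8s.io/v1 API. A rough client-go sketch of the submission side; the signer name here is an assumption standing in for whatever the proposed KEPs would define:

```go
package kubetls

import (
	"context"

	certsv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCSR submits a PEM-encoded certificate request addressed to a
// hypothetical signer; approval and issuance are then up to that signer's
// controller. csrPEM would come from x509.CreateCertificateRequest over a
// freshly generated private key.
func createCSR(ctx context.Context, client kubernetes.Interface, csrPEM []byte) (*certsv1.CertificateSigningRequest, error) {
	csr := &certsv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "kube-tls-"},
		Spec: certsv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: "example.com/kube-tls", // assumed signer name, not an existing signer
			Usages: []certsv1.KeyUsage{
				certsv1.UsageDigitalSignature,
				certsv1.UsageKeyEncipherment,
				certsv1.UsageClientAuth,
				certsv1.UsageServerAuth,
			},
		},
	}
	return client.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
}
```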
F: The CA bundle? No, that is actually the misunderstanding that the v1beta1 API propagated. The thing that is distributed in the service account token is only the signer for the kube-apiserver certificate, okay, nothing else, and so assuming that that bundle is valid for all of these things, or that the same bundle contains the CAs for all of these things, is not a great assumption. So the trust distribution is actually probably the hardest bit of this. I like the idea of being able to get a service certificate.

B: It only works intra-cluster, but if you trust something to mount into your container namespace, then you trust it as the source of truth.

F: Right. I think the key bit that we would need to figure out is how consumers indicate that they want the CA root for these particular uses, and that we don't smush all of the CAs for different uses together, because the CA for signing service account credentials might have very different expectations than the one that signs services or the kube-apiserver serving certificate. And so having consumers be able to express somehow that they want a CA for this use, whether it's looking in a well-known location or indicating it somehow like that.

B: But from the perspective of this, it would be either a cluster add-on or a cluster feature, that the service accounts also get a, you know, service account certificate.

B: What replaces that? You know, like, the feature of Kubernetes automatically distributing an identity or trust to your application is actually very useful. You still get that with the OIDC provider, but, I mean, is that the solution? Is that the replacement for the auto-mounted service account token?

F: Purpose-built, like, we inject this token. So, yeah, I wouldn't see this being part of the service account token. This is a different thing, like "I need to receive trust roots for this purpose," so figuring out what that mechanism is, is the trick.

B: Okay, I see. I completely agree that the secret for the service account token is a misfeature.

B: I would like to see the ability to use mTLS as a replacement for that service account token: you should be able to call the Kubernetes master using an mTLS certificate rather than a token, because a token is a shared object, whereas an mTLS certificate is unshared.

C: So we're doing a little bit of thought about this. Yeah, we're doing something a little bit related to this, okay, and one thing that we've done that you may want to consider, or that we may bring to SIG-Auth in a month or, you know, a few months, would be a CSI driver for doing this, that could be injected via, like, an annotation or something.
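One shape such a driver could take is an inline (ephemeral) CSI volume that materializes the certificate files before the container starts. A sketch with core/v1 types; the driver name and volume attributes are entirely hypothetical, and the actual mechanism C's team may propose, annotation-driven or otherwise, could look different:

```go
package kubetls

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// certClientPod builds a pod that gets its certificate material from a
// hypothetical "certs.example.com" ephemeral CSI driver. Driver name, mount
// path and volume attributes are illustrative, not an existing driver.
func certClientPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "greeter-client"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client",
				Image: "example.com/greeter-client:latest",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "tls",
					MountPath: "/var/run/secrets/kube-tls",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tls",
				VolumeSource: corev1.VolumeSource{
					CSI: &corev1.CSIVolumeSource{
						Driver: "certs.example.com",
						VolumeAttributes: map[string]string{
							"serviceAccount": "greeter-client", // attribute names are assumptions
						},
					},
				},
			}},
		},
	}
}
```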
B: I would be very interested in helping with that. You know, fundamentally, what I want is a mount. I mean, from a practical standpoint it needs to be a mount of some sort. It needs to be on the file system that you have your TLS certificate, your CA bundle and a key, and that is provided before the application even starts up.

C: Why have a separate signer type for services? Why not have services just present their service account as the principal?

B: So that was actually my third proposal, which I didn't get to, which was: there should just be a cluster pod certificate authority and a cluster pod certificate type, where every pod within the cluster automatically gets an mTLS cert that says "I am this pod running as this service account," because, quite frankly, I like Chubby auth, right. I miss Chubby auth. A few of you will get that.

G: Yeah, and we see this as pretty much kind of a cluster-level option, a cluster option, rather than an individual pod or an individual service level kind of option, and the reason is that, once you start doing mTLS, you should kind of buy into that. You want TLS across all of your services, and so, if we give you certificates to do that, they should, you know, be available sort of across the cluster, and the same way with mTLS.

G: You should have both service and client certificates available everywhere. So we see this more at the, you know, cluster level, rather than at a namespace or a pod level, an individual object; just turn it on for the cluster and, yeah, everybody gets a thing, and they may or may not use it, but in most cases they really should.

A: Okay, I'm going to time-box this right now. I think there's vague agreement that we can do a thing, so I think you can write a KEP and we can go review it and we'll fight over it there, or we can continue the discussion next time. I just want to be able to get to everyone else on the agenda.

D: To the Slack channel.

F: Yeah, or send the three KEP templates or prototypes out to the mailing list.

A: Kind of like those. All right, let's see: Tim, are you on the call? Did you want to talk about PSP real quick?

F: Sure. I don't think I have a whole lot to say here, other than: go take a look at the KEP if you haven't already. There's still a handful of unresolved sections open, but we're making good progress on those, and I'd like to at least get the provisional KEP merged in the next week or so, and try to get the last few unresolved sections closed out soon after that, so that we can start implementation for 1.22.

F: We are still doing the bi-weekly discussion of this KEP as a breakout session, so there's another one of those on Wednesday. Those will probably wrap up.

A: Cool, thanks for working on that. Mike, did you want to introduce the annual report?

D: Yeah, I just wanted to announce this. This is, I guess, an initiative maybe from the ContribEx SIG, actually I don't know what the origin is, but they have this kind of standard questionnaire where we're asked questions about SIG health. So Rita, Mo, Tim and I got together and filled it out. I think we had some good discussions on stuff that we could do better.

A: Right, so, let's see, I think the same person has the token controller deprecation and the encryption config stuff.

F: Actually, can I add one more announcement before we move on? I'm not sure if we mentioned this in the last SIG-Auth meeting, but Rita will be taking over my responsibilities as chair, so we'll be doing a gradual transition of that over the next month, approximately. I do still plan to be actively involved in the SIG-Auth community; I'm just going to be stepping down from the administrative chair responsibilities. And thank you, Rita, for volunteering to step up there.

H: Yeah, we can start with the encryption config one. So we've seen a scalability issue related to secrets: when a cluster has a large number of secrets, they're going to make a huge number of calls to the cloud KMS, right, to KMS providers in general, and so we propose that maybe we can extend the encryption config to have a variable, basically, which means, let's say, if this value equals 100, then 100 secrets share the same DEK, to reduce the cost to the KMS provider.

H: So, yeah, just trying to look for opinions on this. One issue with this is how to deal with rotation after sharing the DEK.

H: The second one is more transparent. For that proposal we need to figure out a way, like a mechanism, to re-encrypt the secrets. So you can imagine there are three events in the key rotation: the first one is the user initiates a key rotation, or a key version rotation, on the KMS provider, and then they do the re-encrypt, right, and then the KMS plugin does the actual things.

H: So, for the first two events: the first event is the user initiates the key rotation, and then, between the first event and the second one, they can use the second mechanism here to notify the kube-apiserver to basically refresh the DEK, or we do it...

H: Like, right after the first event, right after they do the key version rotation on the KMS provider, then we know it from the health check right away; it's like a couple of seconds per health check.

H: Sorry, so basically the config itself, without the health check changes, is backward compatible, because the current behavior is just basically this value set to one: every secret has its own single DEK. But for this change it's not backward compatible, because we need to extend the encrypt call, right, yeah. I was just trying to look for opinions on this change, so I want to...

A: As a starting point, if I understand correctly, though, the specific scalability concern that this is trying to address: you could address that with a KMS local to the API server, right, that delegates to the cloud KMS, but only every 100 keys or whatever, right? Like, it generates a key, asks the cloud KMS to encrypt that, but holds it in memory and does the encryption locally, right. So, just...
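What A is sketching is envelope encryption with a locally cached DEK: the remote KMS is only asked to wrap a fresh data key every N writes, and each individual secret is encrypted locally with that key. A rough sketch of that shape; remoteWrap is a stand-in for the cloud KMS call, not an existing API:

```go
package kmscache

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

// envelope keeps one locally generated DEK in memory and reuses it for up
// to maxUses encryptions before asking the remote KMS to wrap a fresh one.
type envelope struct {
	remoteWrap func(dek []byte) ([]byte, error) // stand-in for the cloud KMS encrypt call

	dek        []byte // current plaintext data-encryption key, memory only
	wrappedDEK []byte // same key, wrapped by the remote KMS; stored alongside ciphertexts
	uses       int
	maxUses    int
}

func (e *envelope) encrypt(plaintext []byte) (ciphertext, wrappedDEK []byte, err error) {
	// Rotate the local DEK after maxUses encryptions (e.g. every 100 secrets).
	if e.dek == nil || e.uses >= e.maxUses {
		dek := make([]byte, 32)
		if _, err := io.ReadFull(rand.Reader, dek); err != nil {
			return nil, nil, err
		}
		wrapped, err := e.remoteWrap(dek) // the only call that reaches the cloud KMS
		if err != nil {
			return nil, nil, err
		}
		e.dek, e.wrappedDEK, e.uses = dek, wrapped, 0
	}

	block, err := aes.NewCipher(e.dek)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, nil, err
	}
	e.uses++
	// Prepend the nonce so decryption can recover it later.
	return gcm.Seal(nonce, nonce, plaintext, nil), e.wrappedDEK, nil
}
```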
C: And that's one of the alternatives Xihong has analyzed. I can't quite remember what the...

H: Yeah, so basically the hierarchical one will also work, and we probably will not make changes to the open source for that. But I think, talking about the KMS improvements, we also don't have good observability on secrets, like which secret is encrypted by which key, right. But if we extend the encrypt call, we can get that information, right. I think, yeah, this change basically can benefit the community, I think.

D: Yeah, I think there are two problems. The first one is the scalability issue; it can be addressed outside of kube-apiserver, in a KMS plugin.

D: That is fine and reasonable, but it does make sense to solve this inside kube-apiserver, to solve the scalability problem in a way that, I guess, applies to more implementations of KMS plugins. And I think the second thing is...

D: Yeah, that's a good suggestion as well. It's a middle ground.

D: We need a storage version migrator for encrypted objects, basically.

A: And so, if I... so, I remember the doc you're referring to, Xihong. I think one of the suggestions in there had been that we could try to migrate the actual on-disk format for KMS to, like, a proper, well-known encryption-on-disk format, so that way we could inspect it, even if it's not, like, a REST API or a thing that you probe outside of the API server.

A: I was kind of hoping for something like that, mostly because it doesn't overcomplicate the API server and it lets you kind of do arbitrary logic on the outside, for whatever you want. And I also kind of hoped that, even if we end up in a state where we want to have the API server do more work and sort of be a little more intelligent, I was really hoping that that logic would live in the actual gRPC API, so that, instead of the encryption config getting a new knob, the API server would ask the gRPC plugin itself, like, "hey..."
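For context, the v1 KMS plugin contract is a small gRPC service with roughly version, encrypt and decrypt operations, and the idea floated here is to let the API server ask the plugin things like the current key ID instead of adding encryption-config knobs. A simplified Go interface sketch; this is not the exact generated gRPC API, and the key-ID extension is purely hypothetical:

```go
package kmsplugin

import "context"

// Simplified view of the existing v1 KMS plugin operations (the real
// contract is a generated gRPC service; names here are illustrative).
type KeyManagementService interface {
	Version(ctx context.Context) (string, error)
	Encrypt(ctx context.Context, plaintext []byte) ([]byte, error)
	Decrypt(ctx context.Context, ciphertext []byte) ([]byte, error)
}

// Hypothetical extension discussed in the meeting: let the API server ask
// the plugin which key would be used for new writes, so rotation can be
// observed without a new encryption-config field.
type KeyObserver interface {
	// CurrentKeyID returns an identifier for the key the plugin would use
	// for the next Encrypt call. Entirely an assumption, not an existing API.
	CurrentKeyID(ctx context.Context) (string, error)
}
```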
H: Yeah, thanks for the inputs, yeah. I think, because the current implementation in open source, for secrets or ConfigMaps, since they both use the encryption configuration, relies on the third-party KMS provider, I just think it's not scalable by design currently. For example...

H: Yeah, it's like serial calls to the cloud KMS, right, yeah. Definitely we can do it in the backend and have more customized logic there, yeah. And also for the observability, right: currently we don't know which secret is encrypted by which key.

H: What do you think? Ignore the knob here; how about, do we need to, like, extend the encrypt call to return that information, so that we can have the observability in the open source?

H: A new function would work, but, yeah, I just don't know whether it's reasonable or not. Currently there are two, right, encrypt and decrypt, and a version function. I think, yeah, it's a service that is shared by the kube-apiserver and the KMS plugin, I think.

C: When we were exploring this, one thing that seemed like it would clearly make changes in the API server was having an explicit way for users of this subsystem to request a full re-key. Right now, in our setup, we kind of have them do that with a dummy annotation that they can change, and then that ends up introducing, like, a decrypt-encrypt cycle for all secrets, but the KMS plugin implementation actually doesn't get...

C: ...doesn't really understand that these encrypt requests arose because the user wanted to rotate their keys. So, if...

A: So, having done rotation before: why don't you, like, in the actual encryption config behind the API server, why don't you just re-add the KMS again as a higher item in the list and tell it to do re-encryption that way? Because the API server will see it as a completely different KMS and encrypt with it, right; it will start saying that all...

A: Yeah, I mean, you run a storage migration right after you have... right, right. Like, you add the new KMS as the last item, you wait for your API servers to roll out, you put the new KMS at the top of the list, you wait for that to roll out, you run storage migration and wait for it to complete.
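The "run storage migration" step is conceptually just rewriting every stored object so it passes back through whatever provider is now first in the encryption config. A naive client-go sketch of that idea; the real storage version migrator does this with pagination and conflict handling, so treat this only as an illustration:

```go
package kmsrotate

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rewriteSecrets writes every secret back to the API server unchanged, which
// forces a decrypt with the old provider and a re-encrypt with whatever is
// now first in the encryption config.
func rewriteSecrets(ctx context.Context, client kubernetes.Interface) error {
	secrets, err := client.CoreV1().Secrets(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range secrets.Items {
		s := &secrets.Items[i]
		if _, err := client.CoreV1().Secrets(s.Namespace).Update(ctx, s, metav1.UpdateOptions{}); err != nil {
			log.Printf("rewriting %s/%s: %v", s.Namespace, s.Name, err)
		}
	}
	return nil
}
```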
F: The things that you could envision are: changing the format, so when we went from JSON to protobuf, like three years ago, that required a storage migration; when the storage version changes, so when a beta version of something is gone and we're storing v1 in etcd, like, it's a good idea to do a migration, and then the API server publishes that information in discovery and the storage migrator can use it; and then the storage transformation, if you reconfigure your transformation, either to enable encryption or to change the key.

D: I don't know where this is exposed in discovery, but would it make sense to also expose a key version on resources that are using application-level encryption?

F: It's exposed in discovery as a hash, so it's an opaque hash that just says, like, "here is a hash of all the things that affect storage."

F: If the transformation config for a given resource got folded into that hash, maybe that'd be okay; I'd have to think about that. That would only include things the API server knows about, so right now that's, like, the config that the API server is fed. Something dynamic, like a KMS changing which key future encrypt requests would use, wouldn't be visible to the API server today, so it wouldn't be able to pull that information in. But you thought...?

E: I don't remember discussing it, but I also don't know how a kube-apiserver would know, right. We've had cases where customers will set it up so there's some, you know, KMS process; you go out, you make a connection to it, and it has its own key rotation concept, and so the kube-apiserver has no concept of, like, "okay, the keys changed." So when they remove one, suddenly the server can't decrypt, because they didn't coordinate the storage migration.

F: Right, yeah. So, I mean, the config that the server is given, that's easy. If you wanted it to incorporate information from some external source, like to be able to say, "if I stored things now for this resource, what would it do?", that starts to sound like some way to interrogate the KMS plugin, to say, like, "what's the current key ID" or something. But you're right...

F: ...that still doesn't tell the server when it needs to ask that question to update, and tell the external system, like, "please don't delete the old key, I'm not done using it." So, yeah, there's more coordination that needs to happen there. But including more things in the hash doesn't seem bad at first glance; that's actually one of the reasons we made it opaque, so that it wasn't just the storage version, and other things, like the format or the transform config, could get incorporated.

E: If we had a way of discovering it, yeah. I don't object there. The thing is, I would say that, as a first thought, I would probably want something to drive the creation of "please migrate my storage" and watch to make sure it's finished, as opposed to trying to update a storage hash and determine that independently. One way or another, you have to know whether a storage migration that was started after a particular point in time is finished, before you can remove the old keys.

E: These are scars from bringing that feature over the line at OpenShift, to be able to do the "I have to create this key, I have to make sure everything has it, now I've got to switch it to be the write key, now I have to do a storage migration, now the storage migration is finished, I need to remove the old keys" dance. That dance is important to get right the first time, and it's hard. I wouldn't want to try to do it side-effect based on, like, "I created this storage hash."

F: Yeah, just updating the hash... like, again, I don't think I objected to updating the hash, but just updating the hash isn't enough. Like, what if the storage migrator wasn't running? Something needs positive affirmation that it ran and completed, and that implies some, like, higher-level system that can talk to KMS and can observe the migrator.

A: All right, okay, we only have 10 minutes left. We probably need to table this and do the next item, the token controller deprecation.

H: Yeah, this one, basically. So it's a follow-up item for the bound service account token volume. I think there are two open issues regarding the token controller. One is that we don't need to generate the legacy tokens for the built-in service accounts under the kube-system namespace. The second one is basically to purge the existing tokens for the service accounts that don't need the legacy tokens.

H: So, in the proposal section, basically the first step: we propose some well-known annotation or label, basically to annotate the service account or the namespace to say "we don't need the secret-based tokens, don't generate them."
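Since the annotation itself is still to be defined, here is a sketch of what opting a service account out might look like from client-go; the annotation key used below is purely hypothetical:

```go
package tokenoptout

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markNoLegacyToken sets a purely hypothetical opt-out annotation on a
// service account. The actual well-known annotation name, and whether it
// lives on the ServiceAccount or on the Namespace, is exactly what the
// proposal would have to define.
func markNoLegacyToken(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	sa, err := client.CoreV1().ServiceAccounts(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sa.Annotations == nil {
		sa.Annotations = map[string]string{}
	}
	sa.Annotations["example.kubernetes.io/skip-legacy-token"] = "true" // hypothetical key
	_, err = client.CoreV1().ServiceAccounts(namespace).Update(ctx, sa, metav1.UpdateOptions{})
	return err
}
```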
H: And basically we still generate them by default, right, to align with the existing behavior; this is not disruptive, in my opinion. But the second step is basically when we make it false by default, basically don't generate, so basically we disable the token controller unless the user specifies that they should be generated, right.

H: So this is more disruptive, and I couldn't figure out a way to smooth the transition from the first one to the second one. So, yeah, if we agree on, like, the phased steps, I think this will spread over, like, a couple of releases, but that's the general plan from me. But, well, do you think there are any better ways to make it less disruptive?

F: So people can request service account tokens to use from outside the cluster, you know, in CI or for other reasons, and so figuring out how to not support those is a much longer process. So I think the first thing we can do is start warning when you use a legacy service account token. The second thing we can do is let specific service accounts or specific namespaces opt into being excluded, like "we don't want to use secrets."

F: The third thing is probably cleaning up and no longer auto-creating secrets for injection into pods, but we have to keep the token controller running for the manually requested token case, until we give much more notice about deprecation and warning, and allow those uses to switch to the TokenRequest API.
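The replacement path mentioned here is the TokenRequest API, which client-go exposes as a subresource call on service accounts. A small sketch of requesting a short-lived, audience-bound token; the audience and expiry values are just examples:

```go
package tokenrequest

import (
	"context"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestToken asks the API server to mint a short-lived, audience-bound
// token for a service account via the token subresource, instead of reading
// a long-lived secret-based token.
func requestToken(ctx context.Context, client kubernetes.Interface, namespace, serviceAccount string) (string, error) {
	expiry := int64(3600) // one hour
	tr := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences:         []string{"https://kubernetes.default.svc"}, // example audience
			ExpirationSeconds: &expiry,
		},
	}
	resp, err := client.CoreV1().ServiceAccounts(namespace).CreateToken(ctx, serviceAccount, tr, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	return resp.Status.Token, nil
}
```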
F: Not at the same time, no, right, right, yeah. Like, each of those steps is a distinct step, and the point at which we start cleaning things up, or no longer allowing people to request tokens by creating an annotated secret, those are the two points where we have to have a much clearer picture of usage and give much more notice and time.

F: If a service account or namespace opts out, like explicitly opts out, I think we have more flexibility there to, like, clean up that service account or clean up that namespace. But in terms of automatically doing things on upgrade, without someone explicitly opting in, we're much more constrained.

F: I mean, the side effect, like being able to create a secret and have it be populated with a token, that probably falls under the GA behavior bucket, so that's a year.

F: "No more token secrets in kube-system," like, they can indicate that on the namespace, and so that gives us more freedom to, you know, clean things up, because then it's opt-in. But just for a regular service account and a regular namespace:

F: If you create a secret and annotate it, we should continue populating that token for a long time. We should warn you when you do that, and warn you when you use the token, and, like, that's fine, we should start doing that, but the types of software that do those things are likely old, have likely been around for a long time, and will likely take a long time to resolve. So warn, yeah, I agree with Rita: warn as soon as possible, like, we should do that now.
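The "create a secret and annotate it" path being grandfathered here is the documented service-account-token secret type. A sketch of what that request looks like from Go:

```go
package legacytoken

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestLegacyTokenSecret creates an empty secret of the
// service-account-token type, annotated with the service account name.
// The token controller watches for exactly this shape and fills in the
// legacy token, which is the behavior being grandfathered here.
func requestLegacyTokenSecret(ctx context.Context, client kubernetes.Interface, namespace, serviceAccount string) (*corev1.Secret, error) {
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: serviceAccount + "-token-",
			Annotations: map[string]string{
				corev1.ServiceAccountNameKey: serviceAccount, // "kubernetes.io/service-account.name"
			},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}
	return client.CoreV1().Secrets(namespace).Create(ctx, sec, metav1.CreateOptions{})
}
```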
F: ...doing its thing, so the metrics can tell you when legacy service account tokens are being used. So, once the default tokens injected into pods transition to projected tokens, and, like, you have a release or two releases or three releases, so that all of the pods running on the oldest supported kubelets have been drained and recreated, like, at that point usage should basically go to zero, except for anything that was manually created and extracted this way. And so, at that point, the metrics would be, like, a really good sign that you've got things using this old path.

F: Generating events is maybe not as helpful as you would think. It's also, like, writing an API object in response to a request, which is sort of a scary level of write multiplication, and because events are transient, if you just did it, like, once when the secret was created, unless someone happened to be watching it right then, it's gonna go away in, like, an hour anyway. So I probably wouldn't write an event.

H: That sounds good, I think, yeah. But at this time we can start with step one, right: implement such an annotation to opt out, like, okay, at both a service account level and a namespace level, right.

H: And how about we opt out kube-system at the same time, or in a different step?

F: That's a step that has to be for the cluster deployer to do. We can opt out the service accounts that the controller manager creates, so the service accounts that we create, that we define roles for, that are for use by the controller manager and cloud controller manager; we could opt those out.

D: It looks like a KEP, and so we should just announce that there is a KEP, so people can review it before the next meeting, and I guess we'll discuss it next meeting.

F: The schedule for 1.22 is not finalized yet; I think they're hoping to finalize that in the next week, but, as always, enhancements freeze comes early. So, anything you want to be in 1.22, the designs for those should be getting attention and converging on implementable, like, in the next week or...