From YouTube: Sig-AUTH Bi-Weekly Meeting for 20220511
A
Starting the recording. Hello everyone, welcome to the May 11th, 2022 meeting of SIG Auth. First thing on the agenda: the 1.25 enhancement freeze is coming up on June 16th, so let's make sure that we get our KEPs in for changes that we want in 1.25.
B
A
All right, awesome. So this is going to be in 1.25.
C
Sorry, my internet broke up a bit. "Who am I?" Oh yeah, this came up at a couple of different calls, and one of the questions that I had was actually for you.
C
I seem to remember, like seven years ago, Mike saying that he had concerns about allowing someone with possession of a token to determine the identity that that token represented, and I wanted to make sure you had a chance to say whether you had a concern. The idea being that you would be able to take a token, or just connect to the server, and say "Who am I?", and it'll come back and say: you have connected as David Eads, you're a member of these groups, this is your user extra.
A
Right. So, thinking back about the concern, what I was probably talking about was in an OAuth scenario: we would require something like an email scope to allow an OIDC client to determine what the email address associated with that token is. Speaking specifically about Google OAuth,
A
we wouldn't consider possession of a bearer token to automatically grant access to read stuff like the email address. I'm not sure if that is specific to Google; I guess I would like to hear from people familiar with other identity systems.
A
You might have started talking before I raised my hand, but what I was going to say is that one of the existing integrations that we have today in Kubernetes is the OIDC authenticator, which uses the ID token as the mechanism for authentication. Which is an interesting one, because it really annoys the OpenID Connect folks; they're not fans of that. They say you're supposed to use access tokens for this, not ID tokens.
A
But, as an aside, that token itself tells you at least your username and groups sort of directly. It might also tell you the UID, depending on how the system is configured. So that's one, I wouldn't say precedent, but it's at least a model that has that information available to the user.
A
The other thing I was going to mention, if you were asking about third-party external examples: the Tanzu stack has a whoami command, basically, which today is implemented using an aggregated API that simply echoes back what the aggregated API server saw about you, which today means username, groups, and extra. It doesn't include UID because of a bug in the aggregation layer, but otherwise it's the full information. And that API is accessible by anyone that's authenticated; it's not meant to be secret in any way.
B
C
Yeah, I guess I'll bring up one more example, and it's OpenShift. I think it's different than the Tanzu case: OpenShift runs our own OAuth server internally, which mints tokens, and that means we only expose the information that OpenShift has. So things like user email are dropped; things like what the OIDC integration you may have would expose.
D
A
So for us, the information that is exposed is the information that's going to be presented to the Kubernetes auth stack; basically, exactly what's in the user info interface is what we expose back. But yeah, if there was some other metadata that the auth stack had access to, like your home address or something, because you decided to include that in the scope of the OIDC client, we don't include that in the tokens that are being sent around, so it's not exposed. I think in that sense, David, it matches OpenShift.
C
C
A
Yes, exactly. So, just as OpenShift effectively federates some external IDP, which may or may not have tokens, the Tanzu stack is basically the same thing. You are right that the Google case is different, in the sense that Google itself is an IDP, and in the GKE case it's directly integrated as the IDP within the platform.
A
Basically, those tokens can have a meaning outside of Kubernetes, right? If I'm understanding right, Mike, they're usable against other cloud platform APIs, for example. Yeah, well, in the Tanzu case, and also in the OpenShift case, they can't be used outside of the Kubernetes environment.
A
Yeah, I also will say that this isn't really a concern for GKE, because in the case where the token doesn't have the userinfo scope, we'll just return the UID corresponding to the user, which is equivalent to what we do with RBAC.
A
So if you are authorized with RBAC and you don't send the userinfo scope, you have to use the numeric UIDs rather than email addresses.
A
That is to say, if you think this is going to be useful, there's no practical concern from me at this point in time.
A
A
C
D
A
The thought I had was an ephemeral REST API, like TokenReview and SubjectAccessReview; all of those live in the auth group, obviously. Something like that, but one that basically takes in nothing, because at that layer within the stack you've already authenticated, the system already knows who you are, and everything has already been authorized, so it would basically just be turning around and sending back what it knew.
A
So the idea is that it would be compatible with any front proxy or anything else that you had doing auth, basically no matter what, because at the end of the day they have to tell the Kubernetes API server who you are, and this API is basically just going to turn around and say: after whatever magic happened in front of me, this is what I understood the user to be. Whether it's a token, a front proxy, or whatever, I'm just going to send that back to you.
A
A
Yeah, I believe that would probably be the ideal implementation.
A
B
A
A
C
A
Awesome. Oh yeah, it sounds to me like there are no objections to this, and maybe the next step would be to propose it.
C
One thing I would like us to clarify here, if we can: I don't see this as ever being required for conformance. Do we see that differently? And I guess I wanted the KEP to cover that, so we agree to it there; not because I want to turn it on, but because I can imagine other people not wanting to turn it on.
A
A
Yeah, I mean, I don't know if I care one way or the other. A client would be able to see if it's present in discovery, right? Or see that it's not present if it's disabled. And if you had a kubectl whoami command, it could say that this API is disabled on the server: please contact your admin, or something.
C
A
Yeah, I think that's fair. I don't necessarily feel strongly one way or the other, as long as it exists and is easy to use.
B
D
A
E
Do you mind making me a presenter?
D
A
E
All right, sorry, okay, anyway. So this KEP is pretty long, so I'm going to try my best to go through it quickly. But as many of y'all are already aware, there are a bunch of improvements needed for KMS as it is today, and this KEP proposes v2 of the KMS service contract, specifically to enable fully automated key rotation for the latest key, improved KMS plugin health check reliability, and improved observability of the envelope operations between the API server, the KMS plugin, and the KMS.
E
Actually, this one was already discussed in a different KEP, and it was reviewed and approved, so we are just merging it in as one, because all of these are part of the v2 proposal.
E
Moving on, there are tons of motivations: performance enhancement; how to actually enable automated rotation, as we have manual steps today; health check and status as a separate API today; and observability, as I mentioned above. Also, we have Anish, Mo, Kristoff, and Damian on the call as well, who have been working on this KEP with me, so folks, feel free to jump in at any time. All right.
E
E
So, we thought a picture is worth a thousand words. As you can see, with this proposal the new v2 API will basically enable us to allow something like a key hierarchy. So, for an encrypt request, as you can see here, when we are using a key hierarchy:
E
when the API server talks to the KMS plugin, it basically makes a request to encrypt the DEK with a local KEK, and the external KMS then encrypts the local KEK with the remote KEK and returns the encrypted local KEK. Once that's returned, we cache the encrypted local KEK and return it as part of the encrypt response.
E
And now, as you can see, "cipher" is updated to "ciphertext", and then we introduce a new current key ID, which I will discuss later, as we will use it for rotation. But "metadata" specifically is introduced here; it's something that we, or any plugin, could potentially use to persist this encrypted local KEK to etcd. And when the key hierarchy is not used, it is pretty much the same as today: as you can see, the remote KEK is what is used to decrypt...
E
Sorry, is what is used to encrypt the DEK, and then the encrypted DEK is returned as part of the encrypt response. So this part is no different from what it is today, and the metadata is empty.
E
Moving on to the decrypt request: as you can see, assuming you're using the local KEK, it is part of the metadata, and it is sent as part of the decrypt request. As you can see here, the plugin will then basically decrypt the DEK with the local KEK.
E
All right, and similarly, if we're not using the key hierarchy, then it is pretty much the same as today. I'm going to pause real quick so we can discuss the key hierarchy before we jump into rotation.
A
A
Weird, that is unfortunate, yeah. Now, I don't think anyone does that today; they just encrypt the DEK and hand it to them, and it's not that different.
A
E
And maybe we should talk about the proposed storage format, the new format.
A
Yeah, I can take that, if you want me to go. So, this is basically meant to look very much like the existing protobuf storage format we have, but it uses a different magic prefix to indicate that it's being used, and instead of storing a runtime.Unknown as a way of figuring out what it is, it always stores the exact same thing, which right now is an encrypted object.
A
That's meant to store the ciphertext of the encrypted data, but also the full encrypt response, so that includes the full metadata and everything from the plugin. And obviously we've very carefully documented to plugin authors that that part is stored unencrypted, in the sense that we don't store it in any encrypted format: we store exactly what you gave us, and when we try to decrypt something we send it all back.
A
I couldn't come up with a way, at least yet, where we didn't do that, unless we were willing to try to decrypt an encrypted object with every configured KMS provider, which also seems bad. So that's still a weakness of this proposal: whatever name you chose for the plugin in your encryption configuration is what it's going to be stored as, and if you don't like that, sorry, you can't change it, because we won't be able to find the data again.
A
If folks have better approaches, I'm totally open to it. Let's see, what else is there? I think that covers the storage. It's just basically a nicer proto storage, instead of a flat byte slice. Yeah, do you want to talk about the status stuff, or should I?
E
I can jump into rotation, if there are no more questions on the key hierarchy. Again, this KEP has a lot of stuff, so I want to divide and conquer and focus.
E
E
A
That is technically an implementation detail of the KMS plugin. What we're saying is that we will provide you a reference implementation that can do a key hierarchy if you tell it to; it is not a requirement of said reference implementation. The assumption being that the performance hit from KMS comes from the external gRPC or REST API calls to an external KMS, but that the local gRPC calls over the Unix domain socket are not that expensive.
A
We have done some limited testing to see that this is indeed true, but I think part of the beta criteria for this KEP would be to do a significantly large cluster, with at least, you know, ten thousand or a hundred thousand secrets, and prove that the local gRPC calls are not the ones that bottleneck the system. Because if they are, then we would need something better; we would need something within the API server to help break that up.
A
Yeah, so I guess the one thing on the storage format is: we should evaluate whether we want to continue to keep it opaque to kube-apiserver, or whether we need some kind of structured metadata. I'm not sure which one is preferable, because one option is to just remove that limit on what we pass back for the encrypted key, so that the KMS plugin can define its own format for the envelope, or I guess for the associated key data.
A
So I think, and we don't need to necessarily rabbit-hole on this: the API that is being proposed has structured bits for the parts that we want to have opinions on. It purposely tries to leave basically a labels map, a map of string to string, as available storage for the plugin, so it can ask us to hold on to some metadata for it. Today that metadata is just string to string.
A
Certainly, if we wanted more structure, we could add more fields to the request and response, wherever it makes sense. Yeah, I didn't know if there were other things that we would want to support. For example, I could see the plugin wanting to include some metadata about the external KMS in its response, just for the purposes of making debugging or understanding of the system state better if you're just looking at a raw etcd dump, and so we wouldn't want to...
A
I didn't want to force them to put that into some opaque value; I want to at least give them a structured map to just put stuff in. So, yeah, I mean, this is a pretty massive KEP. If folks want to very carefully read it and think about things and make suggestions, I don't think any of us are married to any of this. It's more of: this is the closest thing we could come up with for a good alpha.
E
Rotation, sure. So the next one is, as I mentioned earlier, we're also introducing the current key ID, and then also...
E
Also, a new status API. One goal here is to improve health check reliability, but also to provide version, health, and a mechanism by which we can trigger key rotation via the storage version status updates.
E
So this is sort of how the new v2 pieces work together, in order to allow the API server to check what the current key ID is across all the different API servers, and then be able to trigger a rotation and a migration when a change is detected.
E
And also, as Mo mentioned, since this is currently alpha, this KEP will basically try to scope it to a single API server for now, until we have something a little bit more stable for the storage version.
A
So, is it desirable that a single KMS plugin can serve two different keys? Not two different key versions, but two different keys, like if you want to change from one key to another key?
A
So I think yes, Mike. I think what this API is trying to express is that today, as far as storage migration is concerned, to cause key rotation the only way our code knows that something is stale, in terms of the storage transform layer, is if the transformer that was used to do the read from storage wasn't the first one in the list.
A
Okay, so what this tries to do is, in addition to that, also support a single KMS instance being able to say: hey, I see you asked me to decrypt some data, or whatever, for an update call, that used this key ID at some point in the past, but the one that I would do future writes with is this other one.
A
So technically, even though as the API server you can decrypt it, because you have everything cached, you still need to mark it as stale for the purposes of storage migration. That's basically what the whole current key ID thing is doing: it gives the KMS plugin a way, while it's running, to change what it believes to be its current write key, so it can have as many read keys as it wants.
A
Internally, for example, in some of the reference poking around we did with Azure, we were able to see that if you had encrypted stuff with a different key, the plugin would notice that it didn't have that key around, and it would just go fetch it.
A
I mean, it would just make external calls, go ahead and fill its cache, and do the decryption anyway, but it would then return what key it used, to give the API server an idea of: oh, I see that that's not the key that you want to use, so that means it's still stale. David, I saw you unmute, if you wanted to ask something.
A
Yeah, mostly in the sense that, I mean, we could, if we wanted to, expand status to also include all possible keys that the plugin knows about. The idea there is that when it encounters a key that it currently does not understand, it will ask the external KMS to help figure out what that opaque string means, based on whatever APIs exist for that external KMS.
A
The only way you're going to be presented with a particular key ID during a decrypt request is if, at some point in the past, you or another instance of your plugin used it, so it'll be in the external KMS, unless it's been deleted or something. So we could, if we wanted to, include within status some capability for the plugin to say: here's everything I understand. I don't know if that's a good idea; well, I don't think it's necessarily a bad idea.
A
A
C
Yeah, I guess I'm used to the implementation of key rotation in OpenShift, where we do keep track of who can read what: can everyone read the version that someone else is writing? Once everyone can read a particular version, we can then promote something to the write key. Yeah, right, right. So in practical usage, you would want to know that everybody can read this new key first. Yep. And then everybody can write with the new key.
A
Yeah, so this design explicitly makes a comment about that: there's no coordination required between your plugin instances. In particular, that type of coordination is not required in this design.
A
A
Yeah, can we make a suggestion that the set of available decryption keys include the next key, current key, and previous key, and make sure that the next key is available for decryption a while before we actively start using it for encryption?
A
Is that something... I don't know what that gets us in this design, because the way it's meant to work is: say you have two API servers, and one of the KMS plugins, say API server A's, notices the write key change first. If it immediately starts using it, and API server B then tries to decrypt some of that data, it'll totally work fine; it works fine in our implementations today, because what it does is basically ask the external KMS: hey, can you help me do this decryption, because I just saw a key ID that I don't currently know about, but you probably do, because you're the external KMS. So basically it uses the external KMS as the full coordination actor, right? You can't go to a key that the external KMS doesn't understand.
A
A
D
A
That's the benefit of staging the new keys as available for decryption well before they are actually used for encryption, right? Right. So what's unclear to me, if we try to do it that way, is how the API servers would then coordinate this among themselves. We don't have a mechanism, and this design tries to avoid building such a mechanism. I think you just don't, and say a day is probably enough for your KMS API to become consistent, or, like, if you're rotating every day.
E
A
E
I guess the other point is that today all of this is done manually, right? Like, this proposal isn't a regression or anything.
B
E
A
But I think the ideal usage is important to understand when we review these designs, even if, you know, I think you are right that this is out of the purview of what Kubernetes will be responsible for, like what we would put in the operator docs. But yeah, it sounds like we already have pretty reasonable answers for that. Are there other things in the doc that you wanted to get to, Rita?
E
I think those are the two big ones. Other than that, we are adding support for hot reload of the encryption configuration and introducing the new UID field for observability, which was touched on in another KEP. So those are the big items; I don't think I missed anything else.
B
A
C
Okay, I'm not against it. I'm looking for... oh, she's highlighting it on screen now. I was looking for at least precise expectations to be set there.
A
C
A
Yeah, so in OpenShift, assuming the work I did was correct, it would never generate a config that was not immediately valid, because it never tried to skip any steps; it never tried to promote a write key before it was ready.
C
A
A
A
For those on the call that weren't at the SIG API meeting recently: I had wanted to use the storage version hash from discovery, but I was told, one, that that field will eventually be deleted, so don't use it; and two, there were important and significant concerns about the fact that discovery is accessible by most actors on the system. So even though it's technically completely opaque, it does leak information about the system's internal state to basically all authenticated users, and that doesn't seem like a great idea, and I agree with that.
A
C
D
A
Do we have guidance, for the overall Kubernetes project, for when you're building a feature that wants to rely on some other feature and that feature is not done? How do you progress? Would we try to block on it, or do we try to have partially working functionality that only works when all the feature gates are in the right state?
C
There are other people building optional pieces of their feature. I think I am aware of cases where work that relied on other work ended up pushing that other piece of work through as well.
C
C
A
Okay, a completely meta question about this: I feel like I was too closely involved in this KEP to be an approver on it; this is basically rubber-stamping your own work, dude. I don't think Clayton's on the call, so does anyone want to step up as a lead?
E
A
A
I don't know. So, we have two items left, Tim and Mo, so I will give you five minutes. Awesome. All right, gotta talk fast. So, I realize you're not sharing a screen yet, but I'll just keep going. We discussed this a while back, and David has taken at least an initial peek at a very rough KEP. The gist of what the KEP is trying to propose: right now it just proposes a code-level change that basically says, within kubernetes/client-go,
A
we will be able to, sorry, not assert, but configure the TLS configuration of the client; so not the server, but the client itself. In particular, being able to fully opt into TLS 1.3, or stay at TLS 1.2 and control the ciphers therein. Today, neither of those things is configurable; this is sort of hard-coded in client-go. And David has rightly pointed out that the KEP as written right now basically only provides value to maybe a library author.
A
A
I am nervous about trying to expand this into kubeconfig territory, because then it starts showing up around things like kubectl and some of the webhook configuration, maybe in ways that are not great, and I'm not sure how to do feature flags for any of that stuff. I know how to do feature flags for the API server and controller manager.
A
A
C
A
Yeah, that's basically it. As an example, the default TLS config that we have today will let you use 3DES, because when you don't specify any cipher suites, Go is fine to drop all the way down to 3DES, which is... man. It will let you, but it will never actually use it by preference, right? So what I'm saying is, if the server says "I only support 3DES", then we will happily connect, and when I'm saying that's a bug, I mean I want to not connect.
A
I want a failure, because otherwise you don't notice, right? Then there's the other aspect: even if you control the server, you might have infinite numbers of proxies between you and it, and if some of them do things that you don't like, you want to know, right? So it does give you a way of affirming both sides of the equation. I do understand the point, though: the server-side config is significantly more critical, but we already have that today.
You
said
this
would
be
process
level
it.
It
is
both
you,
you
you'd,
be
able
to
override
the
process
level
configuration.
So
if
your
existing
code
isn't
setting
the
new
fields
and
rest
config
it'll
get
whatever
the
new
process
default.
Is
that
you've
changed
from
the
existing
default
and
if
you
set
the
fields
and
those
fields
would
be
honored
in
particular
right
now,
there's
no
provision
for
saying
I
have
this
process
default
and
it
cannot
be
overwritten.
That's
not
in
the
cap
right
now.
A
If we wanted to have that, we could, so you could get extra pedantic and say you can only use, like, the FIPS ciphers, and I don't care what the rest of the code does. I could still see that being valuable if you have a piece of code that you fork and you want to contort it in some way, and you don't necessarily want to go patch every client in there.
A
Yeah, there's nothing obvious to me that is concerning about this. Why don't we give this two weeks for people to review and come back to it?
C
The other aspect that caught my eye: in many of the cases where we make connections from a kube-apiserver outward, the only piece of information describing how to make that connection is a kubeconfig, and so I am not at this point convinced that moving to stable without a way to use it in those areas is where we want to go. So those are two areas that, when people review the KEP, I encourage them to read, and hopefully Mo will elaborate as well.
A
Yeah, a quick thing: just based on the feedback I've gotten so far from you, David, I think I would expand it to at least allow the API server and controller manager to configure this, like the global process default, and once that's there, then this would go through the standard alpha, beta, GA cycle, just because then it's exposed in a way. The thing I was saying would just go straight to stable was basically the code-level change in client-go, but...
C
Yeah, unused code I don't trust to ever work right, so, okay, that'd be a great update. I will have a look when that goes in.
A
Awesome. Tim, conformance for Pod Security GA.
B
Yeah, so this is just a follow-up from what we discussed last week about bringing Pod Security to GA. I wanted to get this PR out earlier, but apologies for the last-minuteness of it. The one thing that we had on here that we didn't talk about last week was the conformance test plan, which was previously an unresolved item for GA.
B
I basically copied what we had from the conformance future extension into here, saying that basically the default Pod Security mode is enforce: privileged, which is a no-op, so that shouldn't affect the conformance of any clusters. The e2e framework has already been updated to explicitly mark namespaces with the required level.
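For reference, that namespace marking uses the Pod Security admission labels; the label keys below are the real ones, while the namespace name and chosen level are just an example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
  labels:
    # The conformance default is effectively enforce: privileged (a no-op);
    # e2e test namespaces are explicitly labeled with the level they need.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
```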
B
The one question that I wanted to raise, I left it in future extensions, is this question of whether a cluster that requires baseline should be able to pass conformance.
B
B
So we might be able to get at least close to that. I'm sure that we can't do that for restricted without substantial changes, and I think that would probably require the conformance profiles feature to move forward.
B
So, I know we're kind of at time; I just wanted to mention that here. Feel free to chime in on the PR or add your last-minute thoughts.
A
Cool, and we're at time. Thank you, everybody. I guess I will see all of you in two weeks.