From YouTube: sig-auth bi-weekly meeting 20210804
A
Okay, hello everyone, welcome to the August 4th, 2021 meeting of SIG Auth. Let's see, we have a few items on the agenda. The first one is the certificates proposal update. Does somebody want to kick that off?
B
Sorry, so, yes: we have fleshed out the proposal for issuing service account certificates directly to pods. Some people have already taken a look at it; Mo in particular seems to have taken a pretty thorough look at it.
B
He had a lot of questions and seemed to want to take a different approach, so I think we need to discuss with him and see what his concerns are.
A
Cool. I guess, what has been updated since the last time we looked at this?
C
Decided, like, you know, "having a client certificate available for a service account sounds like a great idea, but no thank you, SPIFFE." As I understand it, that would not meet your goals. Or is that an approach you'd be interested in doing?
B
I would be disappointed, because I think that SPIFFE is the correct choice. But I think my primary goal is to make it so that a Kubernetes pod is never required to use a bare token for any purpose.
B
Sure. But when I say I would be disappointed: I think that a lot of the value of providing this client certificate is not just for talking to the kube API server, although that's a good place to start. I think a huge amount of the value comes from the fact that, just like the OIDC tokens that we issue to pods today, random other third parties can accept these certificates, and that's a lot easier to do if they don't have to build Kubernetes-specific support. Just like we didn't roll our own JWT format.
C
Now, I do think it's worth mentioning that a client certificate as kube describes it has some unique qualities in how it encodes users and groups. I don't recall whether we specified exactly how "extra" is specified on these, but it seems like it would not be specific to a kube implementation, right? You'd be able to take a CA bundle and verify these certificates anywhere.
B
Right, yes. But, for example, if you wanted to authenticate to Stripe or Twilio, they would have to specifically build support for Kubernetes service account certificates.
B
They would need to know our cust... whatever we decide on as our custom format for encoding namespace user, or namespace and service account name, and then map that into their policy language. Whereas SPIFFE, or, you know, we'll call it some standard profile, could be SPIFFE, could be whatever else, provides a common meeting point where third parties just have to know how to extract a relevant identity from the certificate, and that works for...
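The extraction step being described can be sketched as follows. The path layout below is the common spiffe://&lt;trust-domain&gt;/ns/&lt;namespace&gt;/sa/&lt;name&gt; convention popularized by SPIRE and Istio; it is an assumption here, since the proposal's exact profile is not quoted in the meeting.

```python
# Sketch: what a third party would do to pull a workload identity out of
# a SPIFFE URI SAN, without any Kubernetes-specific knowledge.
# The ns/<namespace>/sa/<name> layout is an assumed convention, not a
# format confirmed by this proposal.
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> dict:
    """Extract trust domain, namespace, and service account from a SPIFFE ID."""
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe":
        raise ValueError("not a SPIFFE ID")
    parts = parsed.path.strip("/").split("/")  # e.g. ['ns', 'prod', 'sa', 'web']
    if len(parts) != 4 or parts[0] != "ns" or parts[2] != "sa":
        raise ValueError("unrecognized SPIFFE path layout")
    return {
        "trust_domain": parsed.netloc,
        "namespace": parts[1],
        "service_account": parts[3],
    }

print(parse_spiffe_id("spiffe://cluster.local/ns/prod/sa/web"))
```

A relying party like the ones mentioned above would only need this generic rule plus a CA bundle, rather than a Kubernetes-specific username format.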
C
...is a little bit [confusing], because it's talking about, you know, mTLS issuance, and we have an mTLS concept in kube, and it is not talking about using that. It's talking about using something different, talking about using SPIFFE. So I think trying to tackle those, as step A to get to step B, is fine with me. Or coming in with a "yup, let's make it actually do what it says on the tin and use the existing mTLS capabilities we have."
C
I could also see that as useful, but trying to do both at the same time seems... I mean, it seems like a lot to bite off. There's different ways to describe and classify that, but if you have independently valuable pieces, how about we go after those?
A
Yeah, I think maybe we can also consider, you know, service-to-service, if that is one of the primary motivations.
E
And I would like to see support for legacy services in here. I don't want... you know, I'm willing to go forward with this with a critical SPIFFE extension, but I would prefer a non-critical SPIFFE extension, so that you can drop hostnames in there, so you can make this viable for legacy pieces of software. And I do think that the existing, you know, SNI...
A
Yeah, I guess, did we figure out, you know, a heuristic for approving certificates that have host or DNS SANs, or hostnames? Is that proposed in this doc?
E
Okay. And that behavior being implementation-specific is not unreasonable, so long as there is sort of a... so long as we are clear about what is and isn't implementation-specific on the grants that are generated. To some extent, not specifying what these TLS certificates actually look like, not specifying how you extract a service account from them, not specifying what, you know, what is actually said...
E
...what is signed, is an advantage, because it allows you to be flexible, and to say it's implementation-specific, you know, so that providers can provide grants that match what they are willing to sign. On-premises Kubernetes has a very different set of expectations than, you know, Kubernetes that is hosted by one of the major cloud providers.
C
I don't know that I agree that having a standard kube signer, which can sign a certificate that either looks like A or looks like B, and can be verified like A or verified like B, that...
E
...is not what I'm proposing. What I'm saying is that we should make it clear that the standard kube signer has this format, but this is pluggable, and the actual certificate that you get may not be this if it's signed by some other signer.
C
Right, so, so I guess I'm confused then about what you're saying. Because if I create one that says "I want to have the kubernetes.io service account signer", and there is one version that creates SPIFFE X.509, I don't know how to pronounce it, SVIDs, and another version that says "no, I don't create that", then when I try to consume these, I'm going to be trying to verify them in different ways. I don't see how I could consistently use the certificate that I get back as being signed.
E
The remote party does not have any knowledge of the kubernetes.io service account signer; it sees a TLS client certificate. I do agree. Maybe the answer is that the service controller is...
E
...that there are multiple different signers. That there is a Kubernetes controller that, based on the annotation, automatically adds a TLS secret volume and requests a CSR for some named signer, where kubernetes.io/service-account is the default signer, but you can plug in your own implementation-specific signer.
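A minimal sketch of the CSR object such a controller would create, assuming the certificates.k8s.io/v1 API shape. The signer name kubernetes.io/service-account is the proposed default being discussed here, not an existing in-tree signer, and the PEM request body is a placeholder.

```python
# Sketch: a certificates.k8s.io/v1 CertificateSigningRequest naming the
# signer it wants. No real key material; the PEM body is a placeholder,
# and the default signer name is the hypothetical one under discussion.
import base64

def make_csr_object(name: str, pem_request: bytes, signer: str) -> dict:
    return {
        "apiVersion": "certificates.k8s.io/v1",
        "kind": "CertificateSigningRequest",
        "metadata": {"name": name},
        "spec": {
            # spec.request carries the base64-encoded PEM CSR
            "request": base64.b64encode(pem_request).decode("ascii"),
            "signerName": signer,
            "usages": ["digital signature", "key encipherment", "client auth"],
        },
    }

placeholder_pem = b"-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"
csr = make_csr_object("web-abc123", placeholder_pem, "kubernetes.io/service-account")
print(csr["spec"]["signerName"])
```

The pluggability described above would amount to the controller filling in a different signerName.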
B
I'm open to the idea of, you know, moving first to have a smaller KEP that says: let's teach kube-apiserver how to understand SPIFFE certificates, for example. I'm a little worried that...
B
So, I mean, we would 100% use that support in GKE. We have a lot of SPIFFE stuff.
F
Maybe it would help to sort of... there are, like, I think there are, like, three mechanisms being discussed in this proposal. One is the SPIFFE aspects, like the API server recognizing SPIFFE. One is the projected volume mechanism, which sort of takes care of key delivery and interaction with the CSR stuff. And then one is this specific signer that we're discussing. And I think we're sort of getting all three tangled together, and some people are hearing one of those mechanisms and thinking, "oh, I could use that for x, y, and z."
F
"That sounds cool; I don't know about those other two." Maybe it would help to describe the mechanisms. And then, I think what Mo was asking about was, for each of those mechanisms, the signer and the API server authentication bit and the projected volume bit: are there things that would let you do that out of tree, like with a CSI driver or with an out-of-tree controller? It would just be helpful to kind of enumerate: here are the mechanisms this is proposing.
F
I think we want to try building these things on top of kube before we build them into kube and commit kube to a particular direction, if possible. And then, if there are some of those mechanisms that you can't effectively do out of tree, that's useful to know, and that might sort of help untangle the use cases, and untangle interest in particular aspects, or get ordering correct. Because I think there are some things, like, as you were talking, like being able for a pod to say, "please inject a certificate and a key for this signer."
F
That seems like a really generally useful thing. And if I have a delivery mechanism for my signer, and I'm issuing, like, in-cluster serving certs or whatever, it seems like you could use that for a lot of different mechanisms, not just this mTLS one. And so, by teasing those apart...
E
Yeah, all of these bits have been built before. Not necessarily in quite the same way, but they have all been built before.
E
...hard to use, fronting processes not using a SPIFFE certificate, but using a certificate, or using a proxy in front of them. Yeah.
A
Do we have a reasonable enough action item to try to, like, figure out which... come up with the priority among these three components, and maybe propose them independently as smaller chunks that we can debate and try to get in? Just to turn this...
F
I apologize if I was part of tabling that. I don't actually remember these things showing up before; I remember, like, externalizing TLS handshakes and things like that. The one, and I can't remember if it was talked about in conjunction with this proposal or separately, but the idea of being able to inject certificates to pods for serving, that all of our API server pod mechanisms pay attention to, like admission webhooks and conversion webhooks and, like, various things like that. Like, having a signer that...
F
Breaking these apart sort of gets some of the value sooner. Like, being able to inject a certificate for a signer, and defining how that interaction works: the pod spec requests it, says what signer it wants, and then, in the request the kubelet makes on its behalf, it encodes information about the pod that it came from. So it's requested by the kubelet with pod ID, namespace...
F
...you would... having that piece defined lets you get value even if you haven't settled on building SPIFFE in or not, or having, like, an auto identity certificate be SPIFFE or not.
B
Okay, I can break that out to a separate KEP. I believe that is fully specified right now in this... okay, in this doc. And the answer right now is: the CSR is a SPIFFE certificate with some extra...
C
That aspect seems a little bit weird to me, in terms of, like, how do I use this. But having a client certificate, like one that could be accepted today by the kube API server, or terminated by anybody who understood that same sort of pattern, that would... that seems like it'd be fine to me.
E
Well, the problem is what information, what claims, do we encode into that. A client certificate that can be interpreted by the kube API server is very simple: it just has, in the common name, the service account identifier.
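That common-name encoding is the standard Kubernetes service account username format, system:serviceaccount:&lt;namespace&gt;:&lt;name&gt;; building and splitting it is a one-liner each:

```python
# The well-known Kubernetes username format for service accounts, as the
# x509 authenticator would see it in the certificate common name.
def service_account_username(namespace: str, name: str) -> str:
    return f"system:serviceaccount:{namespace}:{name}"

def split_service_account_username(username: str) -> tuple:
    """Recover (namespace, name) from a service account username."""
    prefix = "system:serviceaccount:"
    if not username.startswith(prefix):
        raise ValueError("not a service account username")
    namespace, _, name = username[len(prefix):].partition(":")
    return namespace, name

cn = service_account_username("default", "builder")
print(cn)                                   # system:serviceaccount:default:builder
print(split_service_account_username(cn))  # ('default', 'builder')
```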
E
Sure, but what we would do for this circumstance would be: the service account would be matching the format, and would be interpreted as the system:serviceaccount:namespace:service-account form, right? It just so happens to match. It is an arbitrary string, but it is also an arbitrary string that is well formatted and understood by the kube API server.
A
Agreed, yeah. And it at least helps us focus the discussion a little bit, so yeah, I do think that that would be helpful, and we can at least start to move the less controversial bits in slowly, or decide not to.
A
Well, I'm sure we will discuss this at the next SIG Auth meeting, but yeah, thanks for bearing with us, both of you.
A
Cool. I actually missed an announcement: code thaw. Jordan, what's the...
F
I didn't have the announcement, but the 1.22.0 release is getting tagged, like, as we speak. So I would expect code thaw, I don't know, later today, tonight, tomorrow, something like that.
G
Hey, so this is an issue we filed a few years ago, when we started thinking about the future of pod security policy.
G
We have a bunch of different policy-like things in Kubernetes, and they all applied policy in pretty different ways. That's kind of what's summarized in the top section there, and then the second half of this issue sort of describes what some of those different approaches are.
G
So I wanted to re-raise this, because with the addition of pod security admission we're adding another policy-type object. We're going to get rid of pod security policy, and pod security admission is applied at the namespace level. So it's more momentum towards this namespace-level policy.
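Namespace-level application here means policy selected by labels on the Namespace object itself. A sketch using the pod-security.kubernetes.io label keys that pod security admission reads; the namespace name and levels are illustrative:

```python
# Sketch: a Namespace manifest carrying pod security admission labels.
# The label keys are the real pod-security.kubernetes.io ones; the
# namespace name and the chosen levels are made up for illustration.
def namespace_with_pod_security(name: str, enforce: str, warn: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            "labels": {
                "pod-security.kubernetes.io/enforce": enforce,
                "pod-security.kubernetes.io/warn": warn,
            },
        },
    }

ns = namespace_with_pod_security("team-a", enforce="baseline", warn="restricted")
print(ns["metadata"]["labels"]["pod-security.kubernetes.io/enforce"])  # baseline
```

This is what makes the mechanism namespace-scoped: every pod in the namespace is evaluated against the same labels, which is exactly the trade-off discussed below.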
G
So this is not a super actionable issue. I think the question here is kind of, like: do we want to declare, essentially, that namespace-level policy is sort of the way forward, and consider this issue closed? Or do we feel like there's more discussion and follow-up that still needs to happen here? I'm not necessarily looking to have that discussion or make the decision now, but I sort of wanted to get this back on people's radar.
F
My feeling is that, of the options listed, applying policy to the pod service account and to the requester should probably be struck out, as generally...
F
We went into this in a lot of detail with the pod security policy stuff: it just is confusing and doesn't work well with how Kubernetes objects work, with controllers and stuff. Applying at the namespace level seems to be what things are converging on, and is pretty reasonable to understand.
C
So, a comment about using service accounts versus namespaces. Namespace-level configuration is nice in the sense that everything inside the namespace is always consistent, but service-account-level permissions are nice because a user can have the power to delegate his own permissions to a service account, but probably does not have the power to modify a namespace and say "this namespace has this power", right? Using our default permissions, you don't get namespace...
A
...permissions, yeah. And I do want to make the distinction that some of these are kind of runtime enforcement policies, and some of these are more like API admission, which have kind of fairly different semantics. Like, you can have, I mean, you can have two service accounts, or two pods in a namespace, with different pod security context, security context. But I think to have two sets of administrators, where one set can run as group A and the other set can run as group B, is much more challenging with kind of what we provide as far as toggles. And I think maybe the second makes a lot of sense at the namespace level, just to align kind of administrative access up. And I'm not sure whether the same is true for something like network policy, where it actually might make sense to generally say that you...
A
...we need finer granularity of network policy than the namespace level. But I don't know.
C
Is that right? I'm thinking more in terms of, like: if I want to have this power inside of a namespace, the only person who can grant a power inside of a namespace today is someone who can label it, right? But if power is granted to a service account, anyone with admin privileges inside of the namespace can delegate authority. So you could say, like, "this person can delegate authority for whatever it is I grant him power to do", since RBAC is not escalating.
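The delegation path being described is an ordinary RoleBinding from a Role to a ServiceAccount; RBAC's escalation-prevention check is what limits it to permissions the grantor already holds. A sketch with illustrative names:

```python
# Sketch: a namespace admin delegating a permission they hold to a
# service account via an rbac.authorization.k8s.io/v1 RoleBinding.
# All names (namespace, role, service account) are illustrative.
def role_binding(name: str, namespace: str, role: str, sa: str) -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": name, "namespace": namespace},
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Role",
            "name": role,
        },
        "subjects": [{
            "kind": "ServiceAccount",
            "name": sa,
            "namespace": namespace,
        }],
    }

rb = role_binding("grant-deployer", "team-a", "deployer", "ci-bot")
print(rb["roleRef"]["name"], "->", rb["subjects"][0]["name"])  # deployer -> ci-bot
```

There is no comparable in-namespace object for granting the ability to relabel the namespace itself, which is the contrast being drawn in this exchange.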
C
There is some aspect of that, but I'm also thinking about it in terms of, like, mechanically, how do I delegate my powers as a namespace admin? "Who can run privileged pods" makes sense, but, like, "a namespace admin who can do a thing": I can create a role binding for the permission that I have in a namespace or in the cluster.
C
I can't do that if I don't have namespace labeling permissions; I don't have a way to do that externally, right? And if you grant someone namespace labeling permissions, now they have the ability to modify any permission at all in that namespace, and maybe I only wanted them to be able to modify... I don't know how network policy works.
F
Yeah, so for, like, network policies: those are included in the edit and admin roles, so you can create network policies that constrain network visibility that are effective inside your namespace. Image policy is, like, hard-coded into the API server config. Resource quota is a namespaced object, but it's not granted to admins by default; it could be added.
F
I think it tends to be sort of domain-specific whether there's an object inside the namespace that controls the configuration of it, and how and whether it allows subdividing. So, like, resource quota can scope quota to particular types of pods or particular resources. But all of them have... the thing they have in common... it's, network policy, image... I don't know about image policy... resource quota, LimitRanger...
F
The thing they have in common is that they are namespace-scoped. Like, you don't have resource quota that applies to half of a namespace and not the other half. And so, if this is asking, like, "what's the approach", I think the approach is namespace-scoped. David's question about how you express that is a reasonable one, and so that might be an argument in favor of a namespace-scoped object, if it's something you expect someone who's constrained to that namespace to be delegating.
F
Would this go on the website, or, developer-facing, like, the community repo? I would probably guess community repo, since that's more development-facing. I guess it depends: do we want this to be focused on Kubernetes developers, or on, like, anyone writing any policy thing for Kubernetes?
C
Right, I think that there are going to be questions, or actually are questions, about, like, "hey, I want to have this action taken by the resource owner", and you go back to, like, "no, no, there isn't an owner inside of a namespace; you have all the editors, and all editors are equal." I get that sort of question quite a bit. I got one...
F
So, maybe... yeah, I'm not sure where it would live. Maybe a new topic, sort of under "controlling access". I think we touch on the idea that there's not, like, a creator; spelling that out, and sort of talking about the implications as it relates to this.
E
Probably wherever RBAC currently is. It sort of belongs in the "how do you create proper RBAC rules?" section.
G
Sounds good. I can take that as an action item to get some of that documented.
A
All right, so, final item: Micah, do you wanna...
H
Yep, yeah. So after our last meeting, I just went ahead and tried bumping the AWS CLI, like, the client auth API version. Which, if your existing kubeconfig specifies the client authentication v1alpha1 spec, and the tool, like, the tool that returns a JSON blob...
H
No, I mean, the tool would have to take an arg to say... right, I mean, so I think there's kind of two ways to do it.
H
One is to... I think it's going to be really hard, honestly, without kube api... or, excuse me, kube client-go or whatever, giving more information about the request, like what's the expected API response type, and/or what kubeconfig is being read, and context. Because as a tool returning a token, or in this case a token, I don't know what API version I've been called with.
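The JSON blob in question is an ExecCredential from the client.authentication.k8s.io group. A sketch of what such a credential plugin prints on stdout, with the API version left as a parameter, since, as noted, the tool doesn't otherwise know which version it was invoked for (token value is illustrative):

```python
# Sketch: the ExecCredential JSON a client-go credential plugin writes
# to stdout. The token is a placeholder and the apiVersion is passed in,
# because the plugin can't otherwise tell which version was requested.
import json
from datetime import datetime, timedelta, timezone

def exec_credential(token: str, api_version: str) -> str:
    expiry = datetime.now(timezone.utc) + timedelta(minutes=15)
    return json.dumps({
        "apiVersion": api_version,
        "kind": "ExecCredential",
        "status": {
            "token": token,
            "expirationTimestamp": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),
        },
    })

blob = exec_credential("k8s-aws-v1.example", "client.authentication.k8s.io/v1alpha1")
print(blob)
```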
F
Doesn't
it
encode
information
about
the
token
requests
in
an
environment
variable?
I
thought
it
did.
Yes,
if
it
does
that'd
be
awesome.
I
just
missed
that.
Okay,
if
I
thought
it
did,
I
know
it
does
in
newer
versions,
because
we
actually
had
useful
information
to
pass
like.
Is
this
interactive
or
not?
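The environment variable referred to is KUBERNETES_EXEC_INFO, which newer clients populate with an ExecCredential describing the request. A sketch of a plugin reading the requested API version from it; the payload here is set inline purely for illustration:

```python
# Sketch: discovering the API version a credential plugin was invoked
# with via the KUBERNETES_EXEC_INFO environment variable. The payload
# below is set by hand to stand in for what a newer client would export.
import json
import os

os.environ["KUBERNETES_EXEC_INFO"] = json.dumps({
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "kind": "ExecCredential",
    "spec": {"interactive": False},
})

def requested_api_version(default: str = "client.authentication.k8s.io/v1alpha1") -> str:
    raw = os.environ.get("KUBERNETES_EXEC_INFO")
    if not raw:
        return default  # older clients set nothing; fall back
    return json.loads(raw).get("apiVersion", default)

print(requested_api_version())  # client.authentication.k8s.io/v1beta1
```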
A
All right. Yeah, let us know if that is not true.
A
Cool. Is there anything else that anybody wanted to discuss today, or should we hop into test flakes?
A
All right. Well, that is good to know for the next meeting, but I currently do not have Firefox installed, so I will cut to TestGrid.
D
Yeah, for the CSI driver, we actually grouped it into a couple of categories, and I think mostly we are focusing only on the post-submit, the release signal, and the periodics, and they've mostly been passing. And we recently migrated some of the providers, like GCP, to use workload identity and all that, so hopefully in the next couple of weeks we'll see more green on it.
A
Awesome. And, okay, so GCE finally started working and looks generally still pretty happy, which is kind of the status from last time. Cool. So I guess I will skip the CI flakes for the cluster this meeting and grab it next time.
A
Cool. So it looks like we are out of topics for today. If there's nothing else, let's reconvene in two weeks. Thanks, everyone.