From YouTube: Sig-Auth Bi-Weekly Meeting for 20211208
A
Hello everyone, welcome to the December 8th, 2021 meeting of SIG Auth. Looks like we've got a couple of items on the agenda. Let's start off with this announcement. Mo, is this you?
B
Yeah,
you
want
to
open
up
the
listing
tiny
text
so
yeah
myself,
rita
anderson
natalie.
We
sent
this
out,
I
guess
almost
10
days
ago
now
and
we
got
some
good
response
and
if
you
want
to
open
up
that
neats
kept
link,
I
was
going
to
do
a
quick
highlight
reel
of
what's
on
there
just
to
give
this
some
more
awareness.
You
know
I
don't
expect
folks
to
be
working
on
this
year,
but
hopefully
folks
that
are
interested
will
be
able.
B
The next two KEPs on that doc that we have right now are similar, in the sense that they both talk about configuration options on the API server that are owned by SIG Auth and are flags today. We want to upgrade both of those implementations to structured config, and various folks have indicated interest in both of those items. So that's good too.
B
I
think
this
one
is
less,
maybe
flushed
out
than
the
camera
stuff.
I
don't
think
we've
discussed
this
as
much
so
I
would
expect
these
to
start
out
more
as
higher
level
designs
and
because
I
don't
think
we've
talked
about
either
one
of
these
in
a
cigarette
painting.
Before
directly.
B
We
just
had
like
issues
and
asynchronous
conversations
about
them,
so
both
of
those
are
up
and
then
the
bottom
two
ones
are
sort
of
placeholders
for
things
that
we
have
that
are
within
cigars
curvy.
But
we
haven't
flushed
out
enough
details
for
someone
to
maybe
write
a
cap.
All
of
these
are
targeted
at
you
know.
Someone
has
to
basically
sign
up
to
write
the
cap
and
also
maybe
ideally,
the
same
person,
but
maybe
someone
else
also
signs
up
to
write
the
implementation.
B
We
are
asking
that
the
release
that
they
kept
is
marked
implementable,
that
same
release
should
that
same
release
of
the
next
release
should
involve
the
work
for
that
kept
to
be
started,
so
things
do
not
get
stale
and
we
want
to
keep
this
moving
and
also
just
at
the
high
level
for
anyone
interested.
B
B
B
C
Just for a concrete timeline, I would guess that the enhancements freeze for 1.24 would probably be late January. The last time I saw, the release team was treating the beginning of January as week one of the release, and the enhancements freeze is typically week three or four of the release.
C
So if there's going to be a KEP for these targeting 1.24, I would expect to have it reviewable at the beginning of January, so that there's time to iterate on it and close on questions to get it implementable in time.
B
But my hope is that, even if we don't get any of this stuff in as implementable by the enhancements freeze, we could get it in as a KEP that can be merged and worked on in the 1.25 time frame, because I think we've talked about this stuff ad hoc for a long time but haven't made progress. So it'd be nice to push this forward.
C
Yeah, if a KMS rework comes in mid-January, I really doubt it would get attention and feedback and be able to iterate and actually close and be implementable by a week later. So that's...
D
Okay, so I think the TLDR here is: if you want to make the 1.24 deadline, then it really should be reviewable by early January. Yeah, I think that's fair.
C
And it's better to aim for early, and if it doesn't make it, that's okay; at least we made progress. What tends to happen is we think, "oh, we're not going to make the 1.x enhancements freeze, that's cool, we'll just do it for the next one," and then our whole release passes and we're in the same boat. So it's better to get it up there and get it reviewed, and if it happens to miss 1.24, that's okay.
B
That did remind me of something, Jordan. Of the top three on here, I'm interested in and willing to review for... I think once we get to the stage of having KEPs, we would need some folks to at least volunteer to review and be involved other than just myself.
B
Honestly, I think David's on the call. I know David, you had expressed interest in the authorization one; I don't know if you'd be interested.
E
Yeah, I am happy to be a second on the authorization work.
F
Natalie here. Should we also be noting in this doc (this is more a question for Rita and Mo) the expectations for folks working on this? If we want this in 1.24, then by the end of the first week of January, or early January, we need the KEP in a draft state for review.
B
Cool, thank you. Anything else we want to discuss about this? We have, I think, a few other items on the agenda.
A
Awesome, thanks for putting this together, Mo and Rita and whoever else worked on this.
A
All right, moving on: client-go TLS config for ciphers and min version. Mo, do you want to...?
B
Yeah. I brought this up, I don't know how many weeks ago now, in SIG API Machinery, and I spoke with David and someone else in API Machinery. Basically, in client-go, I wanted the ability to configure the TLS stuff more specifically. Today it hard-codes the TLS min version to 1.2, and there's no control over cipher suites.
B
I
wanted
more
flexibility
there,
so
I
I
don't
know
why
I
started
the
academy.
She
knew
first,
but
I
did
so
that's
there
and
the
pldr
there
I
got
was
relatively
positive
response
with
a
desire
to
be
able
to
configure
it
like
override
the
default
globally
and
then
explicitly
override
it
at
a
rest,
config
level,
so
that,
yes,
they
you
are
friendly.
B
Not necessarily. You know, that same code is used for webhooks and other things, so I would like to be able to enforce things. Also, there's a lot of code within our API server logic and aggregation logic where all the client stuff is 18 layers in, so having the ability to globally override it to do something else would be helpful, because most of it's not exposed in any reasonable way.
B
So
the
at
a
very
high
level
that
I
thought
of
for
configuration
would
be
a
min
version
which
would
probably
only
allow
1.2
or
1.3
to
be
set,
and
then,
when
it
is
1.2
it
will
let
you
control
the
ciphers
and
obviously
not
letting
you
control
them.
When
you
have
1.3
because
code
doesn't
let
you
do
that.
So
there
isn't
any
point
in
that.
B
So far, the way I've built most of this stuff is: I just set the ones I want and explicitly disallow any that I haven't explicitly picked myself. That way there are no surprises. Granted, I don't think the TLS 1.2 cipher suite list is actually growing; I think that's done, so maybe...
C
The
suite
aspect
seems
sort
of
like
a
dead
end,
because
it's
tls12
only
and
then
one
three
and
up
it
doesn't
apply
like
I,
so
I
yeah,
I
don't
see
that
growing.
I
don't
know
from
it
from
an
auth
perspective.
I
don't
object
to
setting
these
like
it
seems
like
machinery
owns
most
of
the
plumbing
there.
B
Yeah, the client configuration: in my case, because I was using the code in a bunch of different ways, I was basically trying not to reinvent half of client-go and all the nice wrappers and all those things, because they're useful and nice. I don't have an opinion on how far up it goes, like for kubeconfig. My gut says... I talked with SIG CLI a little while back about having a kuberc file.
B
You
know
some
place
to
have
configuration
that
is
outside
of
a
cube.
Config,
that's
for
a
user.
I
could
see
maybe
some
value
there
for
something
like
this
for
a
user
to
explicitly
say
I
refuse
to
talk
to
servers
and
crap
ciphers
right,
but
even
but
that's
just
like
a
vague
thought.
I
wouldn't
I
wouldn't
tell
anyone
to
do
that,
but
my
my
for
for
I
you
know
david.
That
asked
me
to
write
a
kepler
flesh
out
the
details.
My
my
thought
process
on
that
was
I
I
would
stop
at
the
rest.
E
Oh, it could even be env vars. I know that would drive some people a little bit batty, but it would presumably be a kubectl-only thing. It wouldn't have to be a kubeconfig, but I would probably try to argue about how far up the stack it goes after the core library made it happen. And a comment about FIPS compliance, from when I've done FIPS stuff:
E
It
has
been
the
case
where
people
want
to
just
completely
remove
ciphers
from
the
os
and
from
the
libraries
like
there's
a
special
open,
ssl
or
whatever
is
important.
There's
a
special
ssl
package
that
you
end
up
using.
So
you
can't
have
this
be
an
accident
in
fixed
components.
C
On the client side, to do FIPS you can actually build with the FIPS Golang toolchain, and if you want, you can import a FIPS-only package which puts the TLS stack in a mode where it will refuse to talk to anything that's not compatible. It does really weird and unpleasant things, like rejecting servers with keys that are too big, because they were bigger than what was permitted the last time that stack was validated. It's really unpleasant.
B
Yeah, so that certainly is part of my motivator. I also just in general wanted to...
B
This
is
why
I
had
more
control
on
just
to
assert
the
set
of
behaviors
that
I'm
expecting
of
sort
of
the
stack.
Certainly
fips
is
a
motivating
factor.
I've
also
because
people
are
bad.
I
have.
I
have
fips
requirements
that
are
like.
I
want
fips,
but
like
not
all
of
the
fips,
so
I
can't
like
import
the
fifth
only
thing
I
have
to
like
control
how
much
chips
only
it
is,
and
it's
just
yeah
yeah.
My
life
is
very
unpleasant,
so
I
yeah
ted.
A
Yeah,
I
don't
think
this
seems
too
objectionable.
I
would
say
that
if
we
have
two
allow
lists,
one
from
the
go
standard,
library
and
one
from
command
line,
flags
that
exist
in
some
configuration
file
or
in
cube,
config
or
scattered
all
over
the
place,
then
I
worry
that
those
windows
will
move
over
time
and
then
there
won't
be
any
intersection
and
stuff
will
just
break
versus
like
a
mode
where
we,
you
selectively,
turn
off
cipher
suites.
You
don't
have
to
worry
about
that
like
if
we
configured
deny
and
you
used
it.
B
Is anyone interested in being a reviewer on such a KEP, any chance?
A
Okay, does anybody else have context on this?
E
I
do
this
is
about
the
trying
to
find
a
way
to
enforce
some
limits
on
ephemeral
volumes.
So
when
we
created
pod
security,
we
said
we
were
going
to
make
it
allow
ephemeral
volumes
for
csi
drivers
for
all
levels.
That's
restricted,
baseline
and
privileged
and
doing
it
in
that
way
allows
csi
driver
authors
or
anyone
to
create
an
admission
plugin
that
provides
further
restriction.
E
There
is
also
an
api
resource
that
gets
created
for
csi
drivers.
That
is
a
convenient
spot
to
hang
a
description
that
says
either
I'm
safe
for
restricted
users
or
I'm
safe
for
level
x,
and
if
we
were
to
do
that,
sig
storage.
As
as
I
understand
the
last
time
I
spoke
with
them,
they
were
amenable
to
the
idea
of
actually
owning
that
admission
plugin.
E
I
think
the
desire
from
the
storage
folks
is
to
have
blessing
to
interpret
the
namespace
label
provided
by
sigoth
right.
That
label
is
pretty
clearly,
let's
say
hours,
but
they
would
be
consuming
something
similar
or
they
would
be
consuming
the
same
thing
and
and
having
it
apply
in
their
admission
building.
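A rough sketch of the check being discussed. The `pod-security.kubernetes.io/enforce` namespace label is the real one consumed by Pod Security admission; the driver's self-declared safe level is the part being proposed, and the types here are simplified stand-ins for a real admission plugin:

```go
package main

import "fmt"

// enforceLabel is the real namespace label used by Pod Security admission.
const enforceLabel = "pod-security.kubernetes.io/enforce"

// rank orders the Pod Security levels by increasing privilege.
var rank = map[string]int{"restricted": 0, "baseline": 1, "privileged": 2}

// allowInlineVolume decides whether an ephemeral inline volume for a CSI
// driver may be used in a namespace, given the driver's self-declared safe
// level (the proposed field) and the namespace's enforce label. A driver is
// allowed if it is safe at or below the namespace's enforced level.
func allowInlineVolume(nsLabels map[string]string, driverSafeLevel string) bool {
	nsLevel, ok := nsLabels[enforceLabel]
	if !ok {
		// Unlabeled namespaces are unrestricted.
		nsLevel = "privileged"
	}
	return rank[driverSafeLevel] <= rank[nsLevel]
}

func main() {
	ns := map[string]string{enforceLabel: "restricted"}
	fmt.Println(allowInlineVolume(ns, "restricted")) // safe driver: allowed
	fmt.Println(allowInlineVolume(ns, "privileged")) // unsafe driver: blocked
}
```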
B
Quick
question
david
is
so
I
I
understood
sort
of
why
we
went
with
the
label
based
approach
for
on
on
name
spaces
for
the
new
pot
security
stuff.
You
know
how
to
just
lot
of
nice
ux
things,
especially
around
dry,
run
and
other
concepts.
That
kind
of
allow
that
to
be
really
useful.
E
The reasoning for this particular one is that the prototype was built with a label, because labels don't require a schema change. I don't believe there would be a principled objection, if the approach was generally blessed, to adding an actual spec field.
C
Stepping back just a second: when we were talking about whether to allow ephemeral CSI drivers or not, when we talked with SIG Storage, at that point I thought the direction was to recommend that unsafe CSI drivers not be modeled as ephemeral inline pod drivers.
C
I
it
isn't
control
over
those
parameters
like
control
over
unsafe
parameters,
fed
directly
to
a
csi
driver
like
there's
a
way
to
protect
those
with
a
like
a
storage
class,
so
that
you
capture
the
parameters
and
then
things
inside
namespace
is
consumed
in
a
safe
way.
I
thought
the
guidance
was
to
not
create
unsafe,
ephemeral,
inline
csi
drivers.
C
It takes as input, from the CSI parameters, things that translate directly into those kernel commands. So it's not hard to imagine providing inputs that would crash the machine, or do really terrible things for performance, or who knows. So when the values for those parameters are coming from a storage class...
C
It's
a
more
reasonable
assumption
that,
like
the
cluster
administrator
set
up
that
storage
class
and
said,
like
I'm,
driving
these
unsafe
parameters,
but
I'm
selecting
the
values
to
set
there.
But
if
it's
being
driven
from
inline
pod
specs,
then
any
random
pod
author
can
say:
hey,
please
give
me
a
volume
and
I
can
set
any
parameter.
I
want
and
it
feeds
directly
into
that
drive.
C
All right, let me see if I can dig up the SIG Storage meeting; I think they were referenced, and maybe Michelle went and swept the list. Let me see if I can dig that up, and maybe it would be good to settle: would it be less work, and a better long-term direction, to tell the couple of drivers that were identified as being unsafe that this isn't a great use case, and see if it would be reasonable for them to re-posture those?
C
I
think
it's
michelle
he
did
this
week,
but
pull
in
the
storage
leads
and
double
check
that
this
is
necessary.
E
We will need something to deal with them now. We could say, "go off and build admission webhooks for these two." We could say, "we've identified these two; we're just going to block them." We could say, "we've identified two, we don't know what other ones have been written; here is a way to explicitly indicate 'I am not safe.'"
C
There are a lot of things in pods that we wouldn't do the same way if we were doing them today, and ephemeral CSI drivers are brand new; they just shipped last release, or a couple of releases ago, so they're just in the picking-up-use phase. This seems like the right point at which to say: don't expose any functionality to pod authors that you consider unsafe.
G
Couldn't we ship (just, you know, throw up on a repo somewhere) an admission webhook that detects unsafe usage of the known unsafe inline CSI drivers?
E
It was supposed to be channeled through a new admission plugin built on top, right? It's very easy to create a PR that does that. I would not support adding this to Pod Security. I would support building something in to actually make this configurable, but lacking that, someone can manage it through validating admission webhooks.
G
I guess what I mean is: as SIG Auth, as SIG Node, we would say something like, "we're aware of these two or three cases where people have written CSI drivers that can be used as inline volumes and that accept unsafe parameters in an untrusted fashion from the pod; we have filed bugs or CVEs or whatever against these; in the meantime, here's a stopgap admission webhook that will block usage of these."
E
Right,
it's
it's!
You
can
actually
make
that
block
usage,
something
that
is
first
class
present
always
on
and
you
add
a
label
and
you
have
fixed
your
cluster
in
the
field
and-
and
you
know
I
look
at
that
and
say-
am
I
likely
to
build
that
yeah,
I'm
100
likely
to
build
it.
The
question
is
whether
I
would
build
it
upstream
or
downstream,
and
this
is
effectively
the
offer
to
put
it
in
upstream.
E
Sure, I will have a look at the list that Jordan put together.
B
Cool. Let's see, we've got two minutes left, so I want both of these to be kind of quick, but I know we've talked about this on and off for a while, and I wanted to see if we could try to write down the MVP requirements around client certificates for Kubernetes service accounts, and serving certificates for Kubernetes services.
B
Or the secret one, or just anything we come up with in the future, right? But today the token authenticator for service accounts is what enforces, well, first, it's what expresses the extra fields that contain that information, and it also enforces that they're valid. Client certificates today, with the schema we have for them, lack the ability to express the UID or the extra fields of service accounts. And just the way they're expressed: if you created such a certificate today (pretend you could, with all the fields set), the token authenticator for service accounts would be unaware of it, and it does not validate that the pod reference, for example, maps to an existing pod with that service account, et cetera, et cetera.
B
So
I
know
I
think
these
are
all
tractable
problems
that
are
not
necessarily
hard
they
just
they
just
need
to
be
written
down
as
like.
This
is
how
we
plan
on
solving
it,
and,
generally
speaking,
that's
that's
what
I
would
want.
I
don't
I
mean
I
guess
I
don't
really
have
an
opinion
on
if
it's
a
new
signer
or
the
existing
one
as
long
as
it
functions,
because
certainly
the
existing
signer
can,
if
you
give
it
the
right
cn,
we'll
express
the
service
account
name
and
kubernetes
would
not
really
care.
G
Yeah,
I
think
what
I
was
thinking
of
was
just
in
the
cs,
treat
the
csr
as
not
really
a
contract
for
the
shape
of
the
certificate.
You
have
to
issue
from
your
signer,
but
just
a
cryptographic
proof
of
the
identity
of
the
pod,
requesting
the
certificate,
and
then
you
know
the
default
signer
that
we
already
have
will
respond
with
one
certificate.
B
I'm
not
sure
if
I
fully
followed
exactly
what
you
said
like
I.
I
guess
I
don't
necessarily
care
as
long
as
the
enforcement
is
still
there
as
long
as
the
protections
that
we
provide
for
bound
service.
Defense
right,
like
in
particular,
bound
service
account
tokens,
even
because,
even
though
their
jobs
are
still
revokable
right,
because
they
reference
objects
within
the
api
that
we
can
look
up
and
revoke
them
using,
and
I
would
want
that
same
property
for
these
certificates,
even
though
we
don't
have
like
crs
or
any
mechanism
certificates
in
general.
B
Yeah, plans for the API server... I mean, today we don't enforce any particular trust chain requirements. That's where a new signer could be beneficial, because you could explicitly say that the new signer should not (well, it's not enforced to) overlap with any other existing signer's trust chain. That way you can keep them separate.
B
If
you
wanted
to.
This
is
what
I
would
want
to
do.
I
guess
but,
but
I
mean
I
guess
like
without
that.
Also
I
just.
G
It's not unsafe; it's the same safety we have today. Today you can go off, with some difficulty, and request a certificate for your pod and use it, and it doesn't have object binding.
B
Yeah,
that's
that's
the
part,
that's
what
I'm
saying.
Okay
yeah!
You
can
do
bad
things
if
you
have
enough,
like
time
invested
in
the
bad
things
I'm
saying
like
like
I
I
would
want
people.
I
don't
know
what
the
mechanism
would
eventually
look
like,
but
I
would
want
people
to
be
willing
to
have
this
basically
on
for
all
pods
all
the
time
with
no
qualms.
G
Yeah-
and
I
was
thinking
about
automatic
issuance-
don't
don't
think
about
that
for
now,
just
first,
let's
talk
about
how
we
actually
get
the
certificate
into
a
pod.
If
someone
is
willing
to,
you
know,
write
a
projected
volume
or
whatever.
B
I guess I could ask: could we go the other way? Could we pull the object-binding validation that we have out of the service account token authenticator, into a generic layer above authentication, and then come up with a schema for full expression of all the user info fields for certificates?
A
Yeah, so I guess I think that client certificates for the kube-apiserver, as far as KEPs go, feel lower value to me than support for mTLS.
A
Yeah,
so
I
guess,
like
you
know,
if
the
tokens
still
exist
like
what
problem
are
we
solving
by
pursuing
this,
and
I
guess,
if
it's
just
to
get
our
foot
in
the
door,
then
are
we
actually
really
getting
our
foot
in
the
door
towards
you
know,
like
maybe
higher
value
improvements
down
the
line.
G
I think object binding on a certificate is pretty easy. We could extend the existing mTLS authenticator pretty easily to support optional object binding on the certificate. You know, we'll have to do some dance to get an appropriate OID to store this information under in the certificate, but then we can just stash the pod UID in there and reuse (slice and dice) our existing object-binding code for the token.
G
So if that is valuable, we can just do it; I think that's kind of unconnected, or unordered, with the rest of all the certificates work.
A
So my comment was only on this discussion; it was focused primarily on mutual TLS to the kube-apiserver. Compared with mutual TLS between pods, or serving certs for pods so other things can talk to them, that felt lower value than, you know, the other potential motivations.
A
We
have
like
a
reasonable
story
for
authentication
to
the
cube
api
server
already,
so
I
would
say
maybe
we
should
prioritize
the
others
unless
this
is
like
a
clear
avenue
towards
you
know
achieving
the
other
goals.
A
In
short
order,
do
you
buy
it
david.
E
As
long
as
there's
still
some
iterative
path
where
it's
right
like
a
an
on-ramp
and
not
an
on
cliff,
because
I
don't
I,
I
want
to
make
sure,
there's
some
way
that
we
can
look
at
it
and
say
yep.
We
can
fit
this
much
in
this
time.
We
can
fit
that
much
in
the
next
time
and
rather
than
just
keep
saying
now,
you
can't
come
in
because
it's
it's
too
big
a
change
all
at
once
so
yeah
as
long
as
as
long
as
that's
still
there
then
yeah,
I'm
okay
with
that.
A
Yeah: authentication to the kube-apiserver only, or what we would need to bring mTLS to the kube-apiserver up to parity with bound service account tokens.
B
Oh
okay,
so
like
as
as
a
very
smallish
starting
point,
then
could
we
attempt-
maybe
just
I
I
guess-
I'm
not
necessarily
falling
to
here
where
you
said
that
serving
certificates
for
all
cube
services
is
hard.
I
feel
like
this
is
the
thing
that
david
built
years
ago
in
openshift.
I
just
just
want
the
same
thing.
Basically.
A
So I think the thing with services is, you know, they're loosely coupled to pods over time.
A
Pods
may
be
added
added
and
removed
to
services.
I
think
that
has
implications
on
potentially
both
sides,
like
you
have
to
update
a
pods
serving
certificate
to
include
new
names.
You
have
to
revoke
old,
serving
certificates
for
pods
that
have
old
names.
If
you
want
to
do
that
sort
of.
G
My thinking for the concrete first step was: focus on a mechanism for getting a certificate into a pod, and then we can experiment. Like, okay, I want to think about service account certificates, or, sorry, serving certificates for a service; we can write a new signer that has some strategy for authenticating.
B
I'm not stressed by revoking them. If you're part of the service, you're part of the service; if you were recently part of the service, yeah, whatever.
A
If somebody had, like, a service-v1 and a service-v2 (I've seen people version their services by, you know, some deployment), then creating new services and deleting old services happens frequently. Now each one of those events requires refreshing all certificates across all pods within the backing service. Is that churn something that we're okay with? Do those updates all happen at the same time? Seems like a...
B
Yeah, so I think (David, correct me if I'm wrong) in OpenShift we kind of sidestepped this a bit, just because we stored the serving certificate in a secret, so it was just mounted.
E
Right. You ended up having to say, "I have this pod, I know this is the service that I want to serve for." You get to choose one, and it will create it in a secret, and then your pod just relies on that secret. There's no automatic injection, but it did make it very easy to go from "I have this thing that serves..."
E
"...I need a certificate that other services are going to be able to trust," and I'm going to at least say that this is contained to pods inside this namespace. That got us a very long way. I do have to drop, but I can appreciate that if that is not the direction you wish to go, that might not be a useful step for you to move in.
A
Not hearing more. Thanks, everyone; yeah, we're a little over time, so talk to you all in the new year.