From YouTube: sig-auth bi-weekly meeting for 20210428
A: Hey everyone, so this is the SIG-Auth meeting for April 28th, 2021. We have a pretty light agenda, so we can get started. Rita, you had the first bit.
B: Yeah, PSP replacement discussions are still ongoing. I think the KEP is probably nearing approval. I added a checklist to the top of the PR, which I'll link in the notes in a moment, with the items that I think are blocking; sorry, the unresolved sections in the KEP that I think are blocking for alpha, and the ones that I think we can wait until beta to resolve. So yeah, take a look at that.
B: Yeah, was there anything else? I'm not sure who added the announcement here; I'm happy to talk about it in more detail if there are any questions.
C: Hey, can you hear me okay, Zoom? Where was I? So yeah, the 2020 annual report PR is there. Definitely welcome review and feedback, specifically from subproject owners and working group owners.

C: I think schedule-wise we're already behind other SIGs. I mean, would a week be enough for folks? Like maybe till the next SIG community meeting, or sorry, maybe till next Wednesday?

C: Okay, sounds great. I'll also send that out on the mailing list.
A: Did you want to go over your pre-KEP, or...
F: Just talking to me in general, I created a feedback doc with some of the main questions that I would like answers on. Basically just the kind of higher-level stuff, and to make sure that, if there are any major points of contention or areas of disagreement, we at least identify them, and also kind of understand which areas are in agreement, so that I can move forward on actually writing the rest of the KEP.
F: So I can give a quick summary for those that aren't familiar, and then we can talk about the questions. Basically, this is a KEP to address our use case, ours being the EKS/Amazon use case, of request signing as a method of client authentication in Kubernetes, and the approach involves executing a proxy process.
F: So far, the feedback that I've gotten is that the favored approach would be a full proxy, where the request is sent to the proxy and the proxy sends the request to the API server. But some of the feedback I got was...
F: So that would be one of the questions that we can talk about. But basically, other than that, you configure the exec mechanism in a kubeconfig, similar to the existing exec mechanism that we have for the client authentication exec stuff. Whether or not it's part of that existing config is another question. As written right now, I had defined a separate config struct, but some of the feedback that I got was to modify it slightly so that it can be included in the existing exec mechanism.
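(For illustration only: the KEP's actual config struct isn't quoted in the meeting, but a minimal Go sketch of the "separate struct" option under discussion might look like the following. The ExecProxyConfig name and all of its fields are hypothetical, modeled loosely on the shape of client-go's existing exec credential plugin config.)

```go
// Sketch only: a hypothetical kubeconfig stanza for an exec-based proxy
// plugin, mirroring the shape of the existing exec credential plugin
// config. None of these type or field names come from the KEP itself.
package config

// ExecProxyConfig is a hypothetical separate struct (option one in the
// discussion). Option two would fold these fields into the existing exec
// credential plugin config instead.
type ExecProxyConfig struct {
	// Command and Args launch the proxy helper, just like the existing
	// exec mechanism launches a credential helper.
	Command string   `json:"command"`
	Args    []string `json:"args,omitempty"`

	// Env lists extra environment variables for the helper; the parent
	// process's environment is passed through as well (see the later
	// discussion about honoring host-level proxy variables).
	Env map[string]string `json:"env,omitempty"`

	// InstallHint is shown when the binary is missing, one of the
	// usability features of the existing exec mechanism that the group
	// wants to keep.
	InstallHint string `json:"installHint,omitempty"`
}
```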
F: And I can slow down a little bit for notes, but at a high level, that's kind of the big picture. I can pause here for questions.
G: I remember this from the distant past, but I honestly don't remember, and this is going to sound bad, why it was important.
A: The last time proxy-based stuff was suggested was if you wanted to use a KMS-based private key, so you did not provide your private key to the kubectl binary. If you wanted to do TLS offload, there is no way to hook into client-go or any of our infra to do that. So we had suggested a proxy-based approach for that, instead of trying to add hooks into client-go to do the TLS handshake.
A: In a sense, this is very similar, right? I want to change basically arbitrary things about the request on the way out. On that stuff, I had stated that the main issue with the proxy-based approach is just actually setting up the proxy and making it work. You know, you can download a kubeconfig from your provider that will have a client-go exec plugin, or the GCP stuff, or whatever, and it basically just works, with stuff like the install hint and the cluster info.
A: I can basically do anything you want now. I can tell you where to go get the plugin if it's missing. That's a lot harder with a proxy, especially if you already have a proxy, like a corporate proxy, so you need to daisy-chain your proxies. So I think that's what this KEP is trying to address.
A: I don't necessarily think the words "request signing" even need to be present in the KEP, because I think it's just a broader problem of: I want to make it easy to use a proxy, so that my proxy can do whatever I want for me. Okay.
F: Yeah, so to address some of those. First of all, in terms of our use case for this: Amazon's main method of client authentication is something called SigV4.
F: It's a type of request signing, and so if we want, for example, the Kubernetes API to be authenticated with SigV4, then we would need the client to support it as well, and this is one way of doing that.
F: In terms of scope, I did write this explicitly scoping it to request signing, but that was just because I'm not familiar with the other use cases, and I'm obviously happy to include other use cases in this. But yeah, I would just want feedback and information about those use cases, so I understand them and can actually include them.
F: So I guess that is the first open question, and maybe there are other opinions on this; I'd rather include specific use cases and user stories. So is it KMS? I think you said it was KMS-based TLS.
A: Yes, I'm blanking on the name of the C API that you can link against to have your KMS do the TLS handshake for you. I can't remember, it won't pop into my head for some reason right now, but we did have some folks ask for that relatively recently. Yeah, there we go, thank you CJ: PKCS#11. I was like, I can't remember what this thing is called. But yeah, PKCS#11: in order for you to use that, you have to link against the C library and have it do the TLS handshake.
A: We don't want to link against said C library, because that's painful, and then the Kubernetes project has to maintain that forever. It would be much nicer to support those things in a nice, seamless way.
A: I mean, I'm not necessarily convinced that we should do the work, but if we were to do the work, I would think a proxy-based approach is probably the simplest. Mostly because I don't want to get into cases of, like: oh, I want to do request signing, so I have this plug-in structure that just does request signing, like it literally has the request parameters so I can sign it. And I also don't want to get into, like...
A: So that doesn't sound great. And the other thing is, for me, it's very strange to propose yet another exec thing, mostly because Andrew and I and other folks have spent a long time finishing the first one, and it has a lot of nice functionality. Like, if you want to do different semantics for different clusters, that's easy.
A: If you need to tell people where to go download your custom binary, that's easy. If your proxy needs to connect, or sorry, if your exec plugin needs to connect to said API server, the CA bundle is available, the URL to the cluster is available, all the nice mechanics are there. The only thing different is that instead of returning certs and a token, or either...
A: This could theoretically now return a proxy endpoint along with some other data, right? And the idea would be that kubectl would say: oh, I'm going to override any environmental proxy that I had with the one the exec plugin told me to use. And when you execute the plugin, you pass in kubectl's environment, so the proxy process is aware of any host-level proxy, and it can honor that proxy for you on the way out as well. And the nice thing of that approach is, one...
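(For illustration: client-go's rest.Config already exposes a Proxy hook, so a rough sketch of the override A describes could run the exec plugin, read a proxy endpoint out of its output, and install it. The runExecPlugin helper and its output format below are assumptions, not part of the KEP; only rest.Config.Proxy is a real client-go field.)

```go
// Sketch: wiring an exec-plugin-provided proxy endpoint into client-go.
// runExecPlugin and its output format are hypothetical; rest.Config.Proxy
// is a real client-go hook.
package execproxy

import (
	"net/http"
	"net/url"

	"k8s.io/client-go/rest"
)

// runExecPlugin is a stand-in for executing the configured plugin with the
// parent environment and parsing a proxy endpoint out of its response.
func runExecPlugin() (string, error) {
	return "http://127.0.0.1:18080", nil // placeholder endpoint
}

// configureProxy overrides any environment-derived proxy with the
// plugin-provided one, as described in the discussion.
func configureProxy(cfg *rest.Config) error {
	endpoint, err := runExecPlugin()
	if err != nil {
		return err
	}
	proxyURL, err := url.Parse(endpoint)
	if err != nil {
		return err
	}
	cfg.Proxy = http.ProxyURL(proxyURL)
	return nil
}
```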
F: Yeah, so on that point: I'm not a Windows expert, but I think Windows named pipes could potentially work.
F: They seem to operate similarly to Unix domain sockets, and then newer versions of Windows have some Unix domain socket implementation, so we can definitely do some more research there. But the reason why I was proposing Unix domain sockets was because of that kind of nested proxy issue, and just in terms of securing it to just the process.
F: You know, executing the proxy, passing back a Unix domain socket path, and then you can use file permissions to determine who can read and write to it. So that seemed like an easy approach, but there's no reason why we can't do it like you said: exec the plug-in, pass back a port and some method of authentication, like certificates, and do it that way.
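(As a minimal sketch of the Unix-domain-socket alternative F describes, here is an HTTP client whose connections all go over a local socket instead of TCP; the socket path is a made-up example, and file permissions on that path are what scope access to the user.)

```go
// Sketch: pointing an HTTP client at a proxy listening on a Unix domain
// socket, the alternative to a localhost port discussed here.
package udsclient

import (
	"context"
	"net"
	"net/http"
)

// NewUnixSocketClient returns an *http.Client whose connections are all
// dialed over the given Unix domain socket instead of TCP.
func NewUnixSocketClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				// Ignore the requested network/address and always
				// connect to the local proxy socket.
				var d net.Dialer
				return d.DialContext(ctx, "unix", socketPath)
			},
		},
	}
}
```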
F: I didn't think about that. The reason that seemed hard to me was just the kind of daisy-chaining of proxies, but if the client actually passes the proxy variables to the proxy, then that seems like it should work.
A: Yeah, I had recently been hesitant, for kubectl, when plugins were exec'd, to pass in env vars. But Mike had reminded me that for the GCP stuff to move out of tree and function, they rely on environment variables being passed through. So that's there, and it will always be there as part of the API contract.
A: So I like that approach in the sense that I know it will work, because it's just a proxy. Whereas there is no Unix domain socket support in kubectl or client-go at all today. Maybe there should be; I think it has come up every so often, but it's not necessarily very common.
H: I'll pipe up with my user opinion of the whole SigV4 signing, as someone who has recently played around with Elasticsearch in AWS: no.
H: The requirements for implementing Amazon SigV4 signing are complete control over your headers at the very final moment before the request is sent out, and it's painful. It is painful to write a client that implements it. I appreciate AWS providing the client that implements it, but their provided client doesn't always work.
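(A minimal sketch of why that "final moment" control matters: a RoundTripper that SigV4-signs each request just before it goes out, using the AWS SDK for Go's v4 signer. The service name, region, and credential wiring are placeholders, not anything prescribed in the meeting.)

```go
// Sketch: a RoundTripper that SigV4-signs each request at the last moment
// before it is sent, illustrating why the signer needs final control over
// the headers.
package sigv4rt

import (
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

type signingRoundTripper struct {
	next   http.RoundTripper
	signer *v4.Signer
}

func (s *signingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// The signature covers the headers (and hashed body), so signing must
	// happen after every other mutation of the request; anything that
	// edits headers afterwards invalidates the signature.
	if _, err := s.signer.Sign(req, nil, "sts", "us-east-1", time.Now()); err != nil {
		return nil, err
	}
	return s.next.RoundTrip(req)
}

// New wraps the default transport with last-moment SigV4 signing.
func New(creds *credentials.Credentials) http.RoundTripper {
	return &signingRoundTripper{
		next:   http.DefaultTransport,
		signer: v4.NewSigner(creds),
	}
}
```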
H: On the technical idea of using SigV4 from a proxy perspective: the problems with proxies are already outlined in every document about the problems with proxies, and I think they have been well covered here.
E: And also, the extension that we're adding doesn't have anything specific to SigV4, other than it being the motivating factor for introducing it now. And we've had other KEPs, like the external TLS certificate authenticator KEP, which was a more constrained feature proposal; it wouldn't be solved by this, but this would allow the problem to be solved out of tree. So it seems like there's a common thread here, which is: there are a lot of authentication protocols in the world, and we don't want to do everything in kubectl or client-go. So how do we solve this problem generally?
B: We could use a proxy, but it would certainly be easier to have something along the lines of a request executor.
B: Also, regardless of where the discussion on augmenting exec or adding another exec mechanism goes, I don't see any reason not to add support for domain sockets. I don't think that solves the problem everywhere, especially since, well, I know Windows doesn't have Unix domain sockets; I don't know if there's an equivalent that would work in that environment. But yeah, I don't see any reason not to just allow those.
B: The Unix domain socket proposal seems orthogonal, but potentially a useful building block for this. And if you're talking about setting up proxies, you could theoretically use socket-activation types of things to do interesting stuff, like: when you use it, you hit the socket, and then your proxy starts up and then goes away. It could be used in interesting ways.
E: The API here is so far fairly constrained, and I think Mo wants to use the existing one for communicating between client-go and...
E: ...this plug-in thing. So, unless we need to add a ton of complexity, I would say add a field which is a localhost port.
F: Yeah, the way I see it, the only time we would need some communication protocol or something would be if we exec'd, you know, a request signer and got the response back, and then sent it from client-go.
A: I do think I have seen this Envoy API. I admit I did not understand what it was doing; there was stuff in there and I was like, all right, I think network traffic is passed through this API, but I'm not sure how, but they do. So yeah, I mean, I don't like proxies, but proxies are supported in the Go standard library, and so that means someone else has already done the work to implement most of the request shuffling.
A: So I think my gut would say: use the existing mechanism, and just keep it to the proxy-based stuff that kubectl supports today as a starting point. Nothing prevents us from adding Unix domain socket support afterwards. If we were going to do that, I would prefer that it also worked generically, like in the static proxy config inside of kubeconfig, which should work that way too.
A: Mo, you had made a comment. Did you want to say that out loud?
J: Yeah, my comment is just a plus-one for this feature existing. I've had various use cases over the years where I wanted to use hardware-backed keys for something, or wanted to do some sort of header manipulation for auth purposes, and this would have been a good tool to reach for.
J: My other comment is: I support the proxy as the protocol here, but these proxies are going to be really tricky to write. It's been the source of significant CVEs in the past, and there are all kinds of pitfalls. So hopefully we can come up with some good shared libraries for making these proxies easier to write.
B: I mean, some of that is already in place: you can point kubectl at a proxy. But recognizing the usability issues with that and saying, like...
B
Can
we
improve
that,
whether
it's
domain
sockets
or
like
involving
some
process
that
can
means
you
don't
have
to
have
the
proxy
running
all
the
time
or
can
do
like
a
per
per
request
or
per
session
like
hey?
I'm
gonna
use
you,
okay,
giving
back
something
I
can
use
to
authenticate.
So
we
don't
leave
like
these
insecure,
escalating.
F: ...proxies running all the time locally. So that brings up another question that I had, which is: the way that I wrote the KEP was that the proxy would determine its own lifetime, and then the client could restart it if it needed to. The other alternative would be, well, there are two alternatives.
F: One is that the lifetime is configured in some way and given to the proxy, assuming the exec mechanism. And then the final one would be that it's just assumed to run forever, bounded by the parent lifetime. So, does anyone have opinions there?
A: I was going to mention that the client-go exec plug-in mechanism does have a structured API for passing data through: it's a well-known environment variable that passes a JSON structure. So if we need to pass structured data into the process, we totally can, and we obviously can pull structured data out. So I think we have flexibility for whatever we need.
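(Concretely, the mechanism A refers to is the KUBERNETES_EXEC_INFO environment variable, which client-go sets to a JSON ExecCredential object when invoking a plugin. A minimal sketch of a plugin reading it and replying on stdout; the anonymous struct declares only the fields used here, and the proxy-endpoint idea from the meeting would be a new, hypothetical addition to the response.)

```go
// Sketch: reading the structured input client-go passes to exec plugins
// via the KUBERNETES_EXEC_INFO environment variable, and emitting a
// structured response on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	var execInfo struct {
		APIVersion string `json:"apiVersion"`
		Spec       struct {
			Cluster *struct {
				Server string `json:"server"`
			} `json:"cluster"`
		} `json:"spec"`
	}
	if raw := os.Getenv("KUBERNETES_EXEC_INFO"); raw != "" {
		if err := json.Unmarshal([]byte(raw), &execInfo); err != nil {
			fmt.Fprintln(os.Stderr, "bad KUBERNETES_EXEC_INFO:", err)
			os.Exit(1)
		}
	}
	// A plugin replies with a JSON ExecCredential on stdout; today the
	// status carries a token or client certificate. A proxy endpoint
	// field, as discussed in the meeting, would be a new addition.
	_ = json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
		"apiVersion": execInfo.APIVersion,
		"kind":       "ExecCredential",
		"status":     map[string]string{"token": "example-token"},
	})
}
```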
A: I think the problem is made a little bit easier when the exec process is returning how to talk to the proxy, because the kubectl process, or the client-go process, learns where the proxy is dynamically. So, like, when creds expire, client-go will try to go refresh them, which will cause the exec plug-in to be exec'd, right? In the same way that that happens, I could imagine: oh, the proxy's not there anymore, I'll go exec the process again, and hopefully it'll give me a way to talk to the server.
F: I see. Yeah, I was kind of assuming that the proxy itself would be exec'd, but I think what you're saying is that the exec binary is just something that, whether or not it is the proxy itself, what it does is just return the address.
A: I guess, yeah, it's telling you where to go; it's responsible for spawning the thing. I mean, I'm imagining an approach where your exec process spawns a proxy and then hands it back to you, so kubectl can go on its merry way, and through that communication you should be able to come up with something saying, for your lifetime... There's nothing preventing the proxy from taking in custom configuration, for, like, I want...
A: It would be, like: always run, don't bother doing any health checks, don't bother doing any request/response checks, just run forever. But I kind of think of that exactly as existing client-go exec plugins: when they get you creds, they tend to cache them somewhere, either on disk or in KMS or whatever, right? They don't...
A: Every time you run kubectl, that binary isn't doing dynamic fetching of creds, normally, because it would be very slow, right? Every single time you did that, you'd be like, oh, log in with your IdP again; no, it's not going to work, right? So it's almost certainly caching something, and I kind of think of this as the same process. I'm just sort of caching a running proxy that knows how to authenticate as me, possibly with some custom creds embedded in the process's memory.
F: Yeah, to me, for the lifetime question, the simplest approach is to just exec it in a goroutine, assume that it runs forever, and restart it if it crashes. That seems like it would work for kubectl and/or kubelet, but maybe I'm missing something.
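(A minimal sketch of the supervision F describes, assuming the proxy is a standalone binary; proxyPath is a placeholder, and a real implementation would add backoff, logging, and a way to re-learn the proxy's address after a restart.)

```go
// Sketch: the "assume it runs forever, restart on crash" lifetime model.
package supervise

import (
	"context"
	"os"
	"os/exec"
	"time"
)

// RunProxy keeps the proxy process alive until ctx is cancelled, which
// bounds the child's lifetime by the parent's.
func RunProxy(ctx context.Context, proxyPath string, args ...string) {
	for ctx.Err() == nil {
		cmd := exec.CommandContext(ctx, proxyPath, args...)
		cmd.Env = os.Environ() // pass through host-level proxy vars etc.
		if err := cmd.Run(); err != nil && ctx.Err() == nil {
			// Crashed or exited; brief pause, then restart.
			time.Sleep(time.Second)
		}
	}
}
```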
F: Yeah, like, you can still run it that way, and do what you were saying: pass back the address to communicate with, and then...
A: Assuming, I guess, that it's going to be long-lived. And then, based on what output it gets, it could decide to terminate the process if it doesn't need it, and that's also fine. I guess I don't know if there are some limitations there.
A: I guess, you know, I could imagine a limitation similar to what Jordan has said: you can't do stuff like socket-based activation as easily then. The idea being that the exec plug-in might point you to some other process that's already running, like some systemd service that's in sleep mode, and it expects you to call it in that way...
A: Yeah, I mean, we don't necessarily want this as a service there, right? Like, we don't want to tell anyone: hey, go run this agent. That's the hardest part about running SSH: getting your agent configured correctly and making sure it's present, right? Because your keys are supposed to be encrypted, but then, if you want to do anything with them, they need to be decrypted.
A: Yeah, and I would like to maintain that, and it should work on all three major OSes; this would be my desire. I want it usable, but I think all those properties are well covered by the existing exec mechanism.
F: Yeah, I was just going to wrap up. So: does anybody strongly feel that some kind of exec-based proxy is less desirable than this sort of protocol-based approach, you know, give some little server the request, it signs it, you get it back, and then you send it from client-go?
F: Sounds like not, so I think that's resolved, and did we take notes on that? So, I heard Tim had a use case for this, Mo, you had a use case for this, Matt had a use case for this. I will speak to each of you after, to kind of understand those and see if we want to add them to the KEP.
A: This next one is for the workload...

I: I think we have kind of broad consensus that there should be a kubelet mechanism for kind of translating some config into a Kubernetes certificate signing request, provisioning a certificate based on that, with keys, and some sort of configurable trust anchor injection into the volume. We have less consensus on, in general, whether pods should be able to expect certificates with certain properties at a certain location within their file system, or in the container file system.
I: We know that these things should be usable for pod-to-pod communications within a cluster. They should be usable for pod-to-kube-apiserver communications within a cluster.
I: Where we have a little less consensus, my feeling from the comments on the draft KEP, is on what precisely workloads should be able to expect in terms of the format of the certificates. The two main horses, I guess, are: the existing CN format for service accounts, usable in mTLS and authentication to kube-apiserver today, which would be system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccount&gt;.
I: The other would be to make these things SPIFFE certificates, so that they are kind of compatible with what Istio does. There the format would be a URI subject alternative name that encodes a trust domain, which is like a DNS name that is somehow unique to the cluster, or to a collection of clusters.
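(For illustration, the two identity encodings being compared, sketched with the Go standard library. The SPIFFE ID layout spiffe://&lt;trust-domain&gt;/ns/&lt;namespace&gt;/sa/&lt;serviceaccount&gt; follows the Istio convention; the trust domain would be supplied per cluster and is the hard part discussed next.)

```go
// Sketch: the two workload identity encodings under discussion.
package identity

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"net/url"
)

// Existing CN-style identity, as used for client authentication today.
func serviceAccountSubject(namespace, name string) pkix.Name {
	return pkix.Name{
		CommonName: fmt.Sprintf("system:serviceaccount:%s:%s", namespace, name),
	}
}

// SPIFFE-style identity: a URI subject alternative name carrying the
// trust domain and the workload path.
func spiffeTemplate(trustDomain, namespace, name string) (*x509.CertificateRequest, error) {
	id, err := url.Parse(fmt.Sprintf("spiffe://%s/ns/%s/sa/%s", trustDomain, namespace, name))
	if err != nil {
		return nil, err
	}
	return &x509.CertificateRequest{
		URIs: []*url.URL{id}, // the URI SAN
	}, nil
}
```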
H: Right, so our continuing problem, our continuing question, and I think it comes up in the questions we've received on the KEP as well, is that Kubernetes has no idea which cluster it is, aside from some DNS suffixes that are added. Adding that is a hard problem, without either allowing a significant amount of overlap that will cause federation issues, or requiring a fully qualified domain name for each cluster.
H: Google has a very good scheme. I don't trust any of the external distributions to follow that scheme. I don't want to end up in a situation where AWS defines a trust domain, Azure defines a trust domain, Google defines a trust domain, and everybody chains to those, where we essentially have the public list again.
I: I would qualify it as a little less, yeah. I guess in client-go, basically, if I had a pony: just like you can get an OIDC token that contains certain things, you know what you can rely on at a certain location within the pod's file system.
H: And I'm in absolute agreement. I'm aware that those serious foot-guns exist, I agree that those serious foot-guns exist, and this is where we have the problem, in that how you prevent people from shooting themselves in the foot is hard. Let's not focus on that; let's focus on the things that we do agree on for a bit. So let's talk about the summary, just in terms of things that are absolutely, unqualified good things.
H: We want to add a new certificate signer. It would be an implementation-specific signer, but with a well-defined Kubernetes baseline version, that signs certificates, right? Every time a pod is created, you can go to this signer and get a certificate that is scoped to the cluster, for the pod. We create a projected volume source that automatically generates you some X.509 certificates; that projected volume source generates a key of a reasonable X.509 type.
K: And I think that, from a... I represent legacy enterprises that are moving to Kubernetes, and they have a lot of legacy baggage. The more that we have, you know, kind of an opinion about the absolute structure of that, the harder it is for a legacy enterprise to move to it.
K: So I think the baseline that Ted talked about so far is excellent, right? And the more that we get into the opinion that these certificates must look like this, where this is something that's new and different, that's where legacy organizations that should use something like this get left behind, because they can't use it in that format.

I: I generally agree, and I think there should be an escape hatch so that you can use the, sorry, the X.509 credentials volume to generate whatever sort of certificate you want, and maybe bring your own signer that signs whatever format you desire. But if we can't arrive at a set of assertions in the certificate that pods can rely on, there's really no point in standardizing anything.
K: I think my legacy organizations will argue endlessly and never get anywhere. Whereas if you can, on a per-cluster basis, or possibly for a set of clusters, have an opinion, that's fine, and you implement that. But if you don't share that opinion with somebody else, you need to go your own way and not communicate with other clusters; whatever your opinion is, it is cluster-scoped.
H: TLS: you can automatically generate an X.509 certificate that works in 1.18 clusters, but let's not talk about that. An X.509 certificate that is signed by a certificate authority such that, in the same place that you find your X.509 certificate, you will find a certificate authority certificate to associate with it.
E: Yeah, so I would say: let's tear this apart and figure out the things that we would like to bring into tree, maybe one by one, and justify them. If it's, like, we want a cluster-scoped way to distribute a CA bundle, that could be one.
E: If, like, we want to, I don't know, extend CSI in some way, that could be part of it. But I think there's a lot going on here, so we need to rationalize each little bit of it before we dump it all in.
H: If I were to take one piece, it would be the kubelet projected volume source. The X.509 credential projected volume source is a simple mechanism that has the kubelet generate a certificate for you and put it there before your pod starts up. So it generates an RSA key, it does the CSR create, wait-for-approval, copy process. But why not use CSI? It can be done; it's just a pain to get in.
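(A minimal sketch of the create / wait-for-approval / copy flow H describes, against the real certificates.k8s.io/v1 API in client-go. The signer name is a placeholder for the hypothetical pod-certificate signer being discussed, and a real implementation would watch rather than poll and handle denial.)

```go
// Sketch: the CSR create / wait / copy flow for a locally generated key.
package csrflow

import (
	"context"
	"fmt"
	"time"

	certsv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func issueCertificate(ctx context.Context, cs kubernetes.Interface, csrPEM []byte) ([]byte, error) {
	req := &certsv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-cert-"},
		Spec: certsv1.CertificateSigningRequestSpec{
			Request:    csrPEM,                     // PEM CSR for the locally generated key
			SignerName: "example.com/pod-identity", // placeholder signer name
			Usages:     []certsv1.KeyUsage{certsv1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx, req, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	// Poll until the signer has approved and issued the certificate.
	for {
		cur, err := cs.CertificatesV1().CertificateSigningRequests().Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if len(cur.Status.Certificate) > 0 {
			return cur.Status.Certificate, nil // copy into the projected volume
		}
		select {
		case <-ctx.Done():
			return nil, fmt.Errorf("gave up waiting for certificate")
		case <-time.After(time.Second):
		}
	}
}
```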
H: The reason that you don't use CSI is because putting it in the kubelet has advantages in terms of scope and of the private data. The super-private data is your RSA key; it makes it so that you don't need to have an external binary for that, and it makes it very clearly memory-scoped.
E: I think, historically, and maybe Jordan can comment on this, but historically, making a feature that can be built out of tree easier to consume is not, in itself, sufficient to motivate inclusion into core Kubernetes.
E: Okay, and I think that there is more here; I'm just trying to tease it out of you. My take, if I may offer one, would be that the value here...
E: Yeah, adopting a more widespread standard for client certificate format feels like something that I think is nice to have. PKI for components like webhooks and the kube-aggregator API extensions: setting up PKI for that securely is very challenging, and if we could include some subset of this, make that problem go away for webhook deployers, and have people run more secure webhooks across Kubernetes clusters, I think that would be a fairly strong motivation for me to argue for this.

I: I'll capture that as an open question on the doc, and then we can see what a solution for that might look like.
A: I had some Zoom issues there, okay. But I do think there's not nearly enough consensus right now to worry about a KEP yet.
H: I mean, the Kubernetes API server requires you to have a server certificate. Would that be the same as the client certificate, right? It's not just client certificates; it's client and server certificates for pods that communicate within the cluster.
E: I brought that up in order to note a problem, which is: PKI for extensions is challenging to get right and secure. That feels like a problem that would be nice to solve in core. But I think the general thing that I was trying to push you towards was: we need rationalization for including this stuff in core, versus the pieces that are already possible with the existing API.
I: That is part of why something like SPIFFE is attractive: because you're right, the pod doesn't have a well-known name, but it does have a well-known identity.
K: In fact, yes, this is the free sample that gets them to actually do real security, right? The fact that you can't get certificates and private keys in an easy way: in lots of organizations, every time you're going to build a TLS certificate, it requires some out-of-band manual process, right? And that's why we have Let's Encrypt.
H: Cert-manager could make this work for internal certificates with an internal CA, but cert-manager is primarily aimed at creating public-root-of-trust-chained certificates that you can use. The trust scope for intra-cluster operations is very different from the public root of trust scope.
G: We're out of time, but that was an interesting wrinkle right at the end.
A: But okay, I think we should probably continue this discussion, but yeah, we are at our time. So thank you all for the great discussion today, and we'll see y'all in two weeks.