From YouTube: sig-auth bi-weekly meeting 20210623
A
Hey everyone, this is the SIG Auth meeting for June 23rd, 2021. Let's get started real quick. Jordan's not here. Oh, wait. Can you also get a screen up, or did it stop sharing the screen?
A
Jordan opened the initial PR for this. Tim, did you want to talk about this PR? I actually have no idea what's going on with this PR.
B
Sure. So, yeah, this is the initial PR for the Pod Security admission controller, a.k.a. the PSP replacement.
B
This is trying to lay a lot of the framework and skeleton for implementing the controller, and our hope is that, once this gets merged, we can parallelize a lot of the remaining work. There's a bunch of TODOs that were left in this PR, and we're hoping that folks can help with some of those. It also has a framework for implementing what we're calling checks, which are each of the controls being checked.
B
So, for instance, I think we added SELinux, runAsNonRoot, and maybe allowPrivilegeEscalation checks in this one, but we need to go back and add all the other controls as well. And, yeah, we'll file issues for all of those remaining tasks once this merges. We're hoping to get this in pretty soon so that we can proceed with the rest of it and get it all done by code freeze.
B
One thing to note on this: it is a 36,000-line PR with 1,800 files. The vast majority of that is generated YAML for test data. For each check we add a set of what we're calling fixtures (this is all Jordan's brainchild): a bunch of fixtures in code, and those are used to run unit tests and integration tests, and then also to generate the YAML that can be used to test external implementations of this.
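The check-plus-fixtures structure described here can be sketched roughly as follows. This is a toy illustration of the idea, not the actual k8s.io/pod-security-admission API; all type and function names below are hypothetical.

```go
package main

import "fmt"

// CheckResult is a hypothetical shape for the outcome of one pod-security check.
type CheckResult struct {
	Allowed bool
	Reason  string
}

// container is a tiny stand-in for the relevant slice of a PodSpec.
type container struct {
	Name         string
	RunAsNonRoot *bool
}

// checkRunAsNonRoot mimics one control: every container must explicitly
// set runAsNonRoot to true, otherwise the check fails with a reason.
func checkRunAsNonRoot(containers []container) CheckResult {
	for _, c := range containers {
		if c.RunAsNonRoot == nil || !*c.RunAsNonRoot {
			return CheckResult{
				Allowed: false,
				Reason:  fmt.Sprintf("container %q must set runAsNonRoot=true", c.Name),
			}
		}
	}
	return CheckResult{Allowed: true}
}

func main() {
	yes := true
	// A "pass" fixture and a "fail" fixture for this check.
	fmt.Println(checkRunAsNonRoot([]container{{Name: "app", RunAsNonRoot: &yes}}).Allowed) // true
	fmt.Println(checkRunAsNonRoot([]container{{Name: "app"}}).Reason)
}
```

In this shape, each check ships with pass and fail fixtures, and the same fixtures can drive unit tests, integration tests, and the generated YAML mentioned above.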
B
If you're interested in doing more of a deep, detailed walkthrough, let me know; I can see about getting you added to that.
B
Yeah, I want to keep it sort of focused. But, yeah, if you're interested, maybe reach out to me on Slack afterwards and we can talk about it.
B
If anything seems particularly exciting, or you want to get started on it, again, reach out to me on Slack, and you can probably start working on some of that before this merges, though some of the pieces may be more likely to change than others.
C
I see that there's a project board. So, if someone were to want to volunteer, do they go to that board, and how do they pick up issues?
A
I might just defer this to later, but I don't know if anyone has any context. During the issue triage on Monday, we were trying to figure out if this issue ever made progress. There were a few things. One was: is it technically possible to do service account key rotation? I don't actually know off the top of my head. That is, can you provide multiple keys in a way that lets you rotate them? I don't actually remember if that was implemented.
A
But, more importantly, are there any docs around this, and do the docs cover the more difficult topics: long-lived keys, and long-lived tokens that are issued to components that don't know that you're rotating stuff? Micah, I see your hand.
D
Yeah, this probably relates to the outstanding KEP that I've got going. But basically, the short answer is yes, you can rotate, but it requires an API server restart. So you can provide multiple keys, and I think the kube-apiserver basically takes a SHA sum of the key and uses that as the key ID.
A
Okay. But, I assume, do we have any docs around... yeah, okay. Is there also any effort to remove the restart requirement?
D
Live key reloading: that's where the KEP comes in, mostly to support external signing and to have non-restart key updates.
A
Okay. Would you mind giving a TL;DR as a comment on this issue?
A
That would be great. It might be out of scope for this, but the thing that still sticks in my mind is: it's great if you can rotate keys, but you don't know who you issued identities to, and, traditionally speaking, they've lasted forever. And forever is a very long time.
D
Keys, so, yeah: you can only sign with one private key at a time, but you can have multiple public keys, so you can keep accepting old ones. And if you know that you've signed with a key for a certain amount of time (because you can configure the max life of the tokens), you can eventually phase it out.
A
Yeah, that's the one that's on my mind; that's the actual one I want to get rid of. Okay, cool, that's good to know. Tim, I think you put the next topic in, maybe?
A
It's just unfortunate, though, because if you ever lost a credential, and you didn't actually realize you lost it, that credential is probably still valid and has whatever RBAC is tied to it. And that's dangerous, right? Part of the point of key rotation is that eventually stuff just expires, and it's no longer a problem.
A
Okay. Does anyone else have anything else on this one?
A
Cool, all right. So, who put this one on? I don't remember; it's been a while, because I think it was put on before last time, I mean.
B
...very busy, so I'm happy to be a reviewer on this.
B
Sure. So, Linux capabilities are super confusing, mostly because they've been sort of bolted on, and then, over time, it was realized that the thing that was bolted on was not sufficient, and so another thing was bolted on to the thing that was bolted on, and this has repeated over several iterations. So the way capabilities work today in Kubernetes is essentially inherited from what Docker had at the time that Kubernetes came out. I don't even know if ambient capabilities existed at Kubernetes 1.0.
B
But basically, there's a bunch of different sets of capabilities that have slightly different meanings, and essentially the set of capabilities that Kubernetes manages is the set that's available to the root user, more or less. And if you run a pod as a different user, it clears out the set of capabilities for that non-root user.
B
The one exception is: if you execute a binary that has file capabilities set, then you end up with the intersection of the capabilities on that file and the set (I can't remember if it's the bounding or effective set) that you started with on the pod. So there's sort of a default set that comes with the pod, and then you can add or remove capabilities from that.
B
But if you want to say, "I don't want to run as root, but I need to run a web server that binds to port 80," then you need the CAP_NET_BIND_SERVICE capability. And you can't just add CAP_NET_BIND_SERVICE, because that gets dropped when you switch to the non-root user.
B
So that's where ambient capabilities come in: they're basically capabilities that stick with you when you switch to a non-root user. This is kind of an oversimplification of all of this.
B
But that's the basic motivation: to be able to grant capabilities to non-root users, which makes it much easier to actually run things as non-root users. But we don't want to just make everything an ambient capability, because, for instance, CAP_DAC_OVERRIDE lets you basically ignore file permissions; if you grant that to a non-root user, they're getting pretty close to root.
B
So there's a bunch of capabilities that you probably explicitly don't want on a non-root user. So what this is proposing is basically: we don't touch the default set of capabilities; if it's in the default set, by default it's not also in the ambient set; and then there's some mechanism by which you can explicitly add capabilities to the ambient set.
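The proposed rule, as described here, can be modeled as a small set computation. This is one reading of the proposal sketched for illustration, not the actual KEP API; the type names are made up.

```go
package main

import "fmt"

// capSets is a toy model of the proposal: the container's default capability
// set is left alone, and the ambient set contains ONLY capabilities that were
// explicitly requested as ambient; defaults never leak into it implicitly.
type capSets struct {
	Default map[string]bool
	Ambient map[string]bool
}

func computeSets(defaults, addAmbient []string) capSets {
	s := capSets{Default: map[string]bool{}, Ambient: map[string]bool{}}
	for _, c := range defaults {
		s.Default[c] = true
	}
	for _, c := range addAmbient {
		// Explicit opt-in is the only way into the ambient set.
		s.Ambient[c] = true
	}
	return s
}

func main() {
	s := computeSets(
		[]string{"CAP_CHOWN", "CAP_NET_BIND_SERVICE"}, // default caps
		[]string{"CAP_NET_BIND_SERVICE"},              // explicitly requested ambient
	)
	fmt.Println(s.Ambient["CAP_NET_BIND_SERVICE"]) // true: survives the switch to non-root
	fmt.Println(s.Ambient["CAP_CHOWN"])            // false: default-only, dropped on switch
}
```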
A
Do you know, in terms of release, what this is targeting? I guess, in my head, I would expect the container runtimes to have at least whatever their equivalent of beta support for this thing is before we try to start using it.
B
I guess the one thing I'll add is: I think this mostly falls under SIG Node's purview, but from a SIG Auth perspective, we want to make sure that we have proper support in policies, so probably adding support to the new pod security admission controller, and depending...
A
This might be too specific; it's probably in the KEP somewhere. I assume we'll probably have some kind of metrics or something to go with this functionality, to, I don't know, help admins understand what's going on in their clusters. And I'm trying to understand: if I set this field, but the runtime doesn't support it, what is supposed to happen? Is it supposed to not work? Is it supposed to try to work but fail in some weird way?
B
Yeah, it's a good question. I don't know.
C
Sure. Before we go there, for this particular one, I just want to make sure: are we involving folks from the containerd project? It seems like that's a pretty big dependency. Anyway, I can comment in the PR; it's not a big deal.
C
Yeah, makes sense. All right, next item. I did add it to the agenda, but I think Anish has created a doc, and I'll let Anish speak to the specific issue and the proposal.
F
Thank you. So, just to give a brief overview of what the CSI driver is: it integrates external secret stores with Kubernetes, and it implements the node plugin for the CSI interface. The way it does that is: it has the node plugin, and then, when a volume is requested with the particular CSI driver, it talks to a provider, fetches the secrets, and then the secret is available to the pod on tmpfs. So nothing is persisted on disk, and we support this for Linux and Windows.
F
So this works well, and it's also being used by a lot of users. In terms of discussion, one particular topic that's been coming up lately is trying to decouple the sync-as-Kubernetes-secret from the actual mount. What users are really requesting is to only sync as a Kubernetes secret; they still want to be able to talk to the external secret store.
F
So they like the architecture that we have today, but instead of doing the mount, they want to just sync it as a Kubernetes secret. And then there's also another request for just being able to inject it as environment variables, and not do the sync as a Kubernetes secret, or the mount part, at all. The GitHub issue that we linked has context on that. So I put together this doc explaining how it works today, and also what we would need to do to support it.
F
But the TL;DR is: if, as a community, we decide to support decoupling the sync-as-Kubernetes-secret from the mount, then it sounds more like a different project. It wouldn't fall under the CSI driver, because it's just a standalone controller, which can piggyback on all the architecture that we have today: basically, a pluggable provider interface using RPC to talk to any provider, get the content, and then do whatever back-end operation, be it the mount or syncing as a Kubernetes secret.
F
But the only difference is: do we need to do the mount, or do we just want to explicitly do the sync as a Kubernetes secret, or do we just inject it as environment variables? And there are a couple of projects available today, not within Kubernetes but OSS projects, that do this, using mutating webhooks, or having sidecars that talk to the external secret store and fetch it.
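The decoupling described here amounts to keeping the fetch path and swapping the back-end operation. A loose sketch of that shape, not the real secrets-store-csi-driver gRPC interface; all names are illustrative:

```go
package main

import "fmt"

// Provider is anything that can fetch named secrets from an external store,
// standing in for the pluggable RPC provider interface described above.
type Provider interface {
	Fetch(name string) ([]byte, error)
}

// Sink is the back-end operation: mount on tmpfs, write a Kubernetes Secret,
// or inject as env vars. Decoupling means choosing a Sink independently of
// the fetch path.
type Sink func(name string, data []byte) error

// syncSecrets drives the fetch-then-deliver loop shared by all back ends.
func syncSecrets(p Provider, names []string, out Sink) error {
	for _, n := range names {
		data, err := p.Fetch(n)
		if err != nil {
			return fmt.Errorf("fetch %s: %w", n, err)
		}
		if err := out(n, data); err != nil {
			return err
		}
	}
	return nil
}

// fakeVault is a stand-in for an external secret store.
type fakeVault map[string][]byte

func (f fakeVault) Fetch(name string) ([]byte, error) {
	d, ok := f[name]
	if !ok {
		return nil, fmt.Errorf("no such secret %q", name)
	}
	return d, nil
}

func main() {
	store := fakeVault{"db-password": []byte("hunter2")}
	asK8sSecret := func(name string, data []byte) error { // stand-in for a Secret writer
		fmt.Printf("would create Secret %q (%d bytes)\n", name, len(data))
		return nil
	}
	_ = syncSecrets(store, []string{"db-password"}, asK8sSecret)
}
```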
A
So my initial gut reaction is: but I don't want my secrets inside of Kubernetes secrets. If I wanted to use Kubernetes secrets, I would just use Kubernetes secrets. I wouldn't use my very fancy Vault, with KMS plugins and everything else. Kubernetes secrets are just not secret enough for me to want to use them for anything that I actually care about, right?
G
I'm with you there, just because that's a standard API interface. If you've got an application that knows how to talk to Kubernetes to load information, that says, "okay, I know I can load it from a secret; I'm going to call that secret API," that's a really nice abstraction. All right, I'm not...
A
I'm not arguing about the abstraction. The abstraction is great; Kubernetes itself is a great abstraction. That's not the relevant thing I'm talking about. I'm talking about the fact that I want to not store them inside of etcd. I don't want them available to the Kubernetes cluster infra in that way, right?
F
Yeah. I mean, in terms of usage, it's mostly around the ingress scenario, right? Because ingress still heavily depends on a Kubernetes secret reference, and that is what we see in terms of usage: they want to have the TLS cert and TLS key, and it just makes it easier for them to integrate it, and it works well too, right? But, I mean, if we decouple it, it's beneficial for users, so that they don't have multiple operations that could fail before the last step actually happens. But then again, there is...
A
What you're describing is, in my opinion, a different project, right? Which, I mean, I could certainly see. You know, library components: you can imagine three repos. There's some other thing, there's the CSI driver, and then there's a third repo that's just foundational library code that they can both use to do whatever they want.
A
But I would find it strange for a repo called the CSI driver to not actually use a CSI driver, at least in a lot of installations, then.
A
Maybe, as next steps, we could send this out to the mailing list and see if we can get folks' opinions. I'd be very interested in what Mike has to say. And I feel a little bit strange changing the scope of an existing project, but SIG Auth doesn't have many projects, so I don't necessarily have a good indication of whether we can change the scope, and just change the name of it or something, and be like, "yeah, it does more stuff now, that's fine." I just don't think there's any precedent, one way or the other.
C
Yeah, I think there's just enough people asking for it that we think we should evaluate if it makes sense to create another subproject.
A
All right. Then I think the only thing we have left is the service account mTLS KEP check-in that you posted.
H
Yeah. And the doc here is an excellent document that also deserves review. I have a version of the KEP that is cleaned up and simplified, down to the point where it is just adding, as an alternative to tokens (adding to the existing projected token), a TLS certificate that serves precisely the same purpose. There's a lot of things unfilled in here, but I would like to get agreement on...
A
So, I have not read the Google Doc. I have read the KEP, which is much shorter than the Google Doc; I just didn't get to the Google Doc. So, one of the things that sort of stuck out to me (maybe this was just in the process of trying to reduce it down to, like, one directed use case) is that I didn't find enough technical detail around how some of this would be implemented for me to say that this is a good idea.
A
So there's that at a high level. Some concerns I have: we support revocation for service account tokens, but we don't support revocation for certificates, so does this KEP care? Are there some gotchas there? There's a lot of extra validation that service account tokens go through, and checks along the way, like how they're bound to pods and stuff.
A
So those are some things. Also, I felt like if we changed some stuff in core, it didn't necessarily change many APIs, if any; I mean, it might expand how we validate certificates some. But if you expanded that out, I didn't necessarily see something that could not be implemented out of tree, even if it was destined to come into tree at some point.
A
But those are just some initial thoughts I had. To me, the largest thing was: I just don't see enough details to say, "yeah, this is what you plan on implementing, and that's a good idea."
H
I think that's really it: this is about adding a signer that you can use for service accounts, because that signer, regardless of what security controls are in place, needs to be there for any of the rest of this to work.
H
Yeah. External to that, it's that the signer that does this is using the same key that the Kubernetes API server already recognizes, right? Because unless you have a signer that has another copy of the CA certificate (of the CA that the Kubernetes API server is using), it won't be recognized by the API server.
A
The rest of what we were saying, Ted, though, is: if you make a CSR that says the CN is system:serviceaccount:foo:bar, and you target that CSR at the API server client signer (I can't remember the exact name right now), and you approve it, and that signer is functioning on the cluster, which it is everywhere but EKS (Micah, I'm shaking my hand at you), that will get signed. And then, when you try to use it, the identity won't match.
H
Crazy. None of the existing signers will allow you to say "service account"... well, well.
A
The
api
server
client
one
does
not
care
what
you
put
in
the
cn
as
long
as
the
usages
are
fine
right,
so
it
will
happily
let
you
sign
for
anything
that
you
want
like
if
it
is
approved
right.
The
the
problems
that
you
start
getting
into,
though,
is
the
fact
that
that
certificate,
when
authenticated
against
the
api
server,
will
not
be
validated
in
exactly
the
same
way.
A
I
would
like
to
be
able
to
say
that
because
then
it
would,
it
would
complete
the
whole
thing
right.
The
sort
of
the
the
more
controversial
idea
I
had
in
my
head
was
it
would
be
interesting
to
in.
A
Right. I was curious what folks felt about trying to enforce that more strongly. So, the way I would think about that is, like, a required critical extension on the client cert, which would then make everyone else in the world say, "I don't know what this is, go away," even if they had the same trusted CA bundle, if you were reusing the CA across environments.
A
If some external entity wants to parse those out, and trust them and understand them, whatever; I don't care, right. But the idea is: if I was to present it to, like, an old API server, or any other thing that supports mTLS in the world, it would immediately say no: "I don't care that you're signed by a CA that I trust; I don't understand what that extension means. Go away."
H
My objective is actually pretty much precisely the opposite of this, in that I want these certificates to use the standard. I would like to get to a world where existing TLS implementations are able to take advantage of some of these things, and where we are able to have a PKI infrastructure within Kubernetes that does not require you rewriting all of your existing programs to understand this new variant of PKI infrastructure. There's certainly a lot to be said for adding all of these capabilities.
H
For
adding
information
about
here
is
the
effective
service
account,
but
here
is
the
actual
pod.
That
is,
that
it's
related
to
and
there's
certainly
something
to
be
said
for
making
that
critical.
If
you
can,
but
there
is
a
migratory
period
and
there's
that
migratory
period
is
is
actually
sort
of
undergoing
right
now
and
the
one
of
the
main
problems
of
migrating
to
something
like
like
istio
is
that
you
have
to
that.
You
have.
A
So I would say that is fine, but I would want that sort of written out as goals, so that we could think through it and talk about it, because what I see isn't... it's different from what you described.
H
I don't think that you need a specific one, because you already have the API server client type, and that will sign anything. And adding pod cert information to those signed certificates is very reasonable. But if they are scoped more tightly than the API server client already is, just by nature of being bound to a projected volume...
H
The
chances
that
they
leak
are
much
lower
than
the
already
accepted
chances
of
your.
Your
ci
system
leaking
its
its
search,
its
private
key
and
certificate
right.
If
you
are
using
this
api
server
client
to
sign
a
long
lived
or
medium
lifetime
certificate
that
you're
going
to
put
on
on
your
ci
system,
for
example,
the
the
possibility
of
leaks
is
already
much
greater
than
the
possibility
of
leaks
from
a
pod
running
within
the
cluster
that
is,
and
especially
since
it
is,
the
private
key
is
never
intentionally
shown
to
anything.
A
Happen
so
I'm
well
aware
of
that.
The
the
concern
I
have
is
I
get
relatively
frequent
requests
from
customers
that
want
to
use
their
corporate
pki
as
their
kubernetes
pki,
and
it's
like
foundationally
unsafe
to
do
today,
and
I
would
like
to
make
that
not
foundationally,
unsafe
right
and
the
only
way
to
do
that
is
for
cube
to
say
I
can
trust
this
ca,
just
cool
I'll
trust
it
for
everything.
But
if
you
send
me
stuff
that
doesn't
explicitly
say
it's
meant
for
me,
I
will
reject
it.
H
Because
you
can't
do
cn
constraints,
because
any
sub
ca
of
your
pki
can
can
create
a
a
service
account
at
that
point,.
A
Right
like
like,
as
a
for
example
right
like
I
could
make
an
issue
to
like
the
vmrit
and
ask
them
to
provision
me
a
certificate
for
some
app,
I'm
you
I'm
building
or
whatever,
and
then,
if
anywhere
in
the
vm,
where
that
ca
is
trusted
by
kubernetes,
I
can
go
present
the
kubernetes
and
like
cool
you're,
this
random
identity.
I
have
no
idea
what
it
is
and
I
don't
know
how
to
validate
in
any
meaningful
way
right
that
that's
like
one
direction
and
there's
other
issues
on
the
other
direction
too,
but
like.
H
I
I
think
that's
there's
actually
a
real
real
argument
that
that
is
a
valid
solution,
so
long
as
kubernetes
is
set
with
exactly
the
set
of
intermediate
cas
that
it's
set
to
having
the
kubernetes
cluster
chain
up
to
the
corporate
pki
is
totally
totally
a
valid
security
model,
if
not
one
that
we
want
to
encourage
without
careful
consideration.
H
Dki
effectively,
I
I
don't
disagree,
but
I'm
I'm
going
to
yell
about
these
small
corporations.
These
small
companies,
where
the
set
of
kubernetes
certificates
is
actually
a
reasonable
set
of
pki
and
just
not
having
like,
rather
than
having
a
single
route.
You
concatenate
all
of
your
kubernetes
clusters,
and
that
is
a
reasonable
pki.
E
Yes. I'd be interested, if you could take a look at the doc I wrote, the service account certs for services one.
E
No, it's not a problem, and it's not really a proposal. It's kind of like: there was some interest a few weeks back around why Google does it that way.
E
And, at least on GKE, the way we're kind of thinking about this is: you can have one PKI, which may or may not be your internal corporate PKI, and it can span multiple clusters, and that's fine. You define a domain of, you know, 20 clusters, a thousand clusters, regionally separated or whatever, but the identities across them are the same, within reason. You can take a cert for default/default and show up at another cluster and be default.
A
It's sad Dave is not here, because I'm pretty sure he had, like, an aneurysm at the idea that service accounts across clusters are the same identity, right? Because, I mean, it depends on how you think about your Kubernetes, right? I think of all my Kubernetes clusters as distinct domains. I don't want them crossing, other than in some very carefully curated way, and, generally speaking, the carefully curated way is human identities.
A
But that's the thing, though, right? What I want is the ability to reuse the same PKI but keep things separate, right? The reason for using the same PKI is that a lot of corporations like to know exactly what certificates are issued and where they are. They don't want random self-signed certs or whatever; it's a compliance issue for them, and it's just not what they're used to doing. So, instead of continuously having arguments about, "no, don't do this, it's unsafe!"...
H
Yeah, the problem with audience constraints, in my mind, is very simply that they're on behalf of the signer: the signer is the one asserting the audience constraints, and the signer can assert whatever it wants for those audience constraints, or you can simply choose not to assert them. And then you have problems down the line, and I don't know what those are, because I haven't even considered...
H
Yeah, that's basically it: if you have A, B, and C in your service dependencies, then cluster one's A should be capable of talking to cluster two's B, and cluster one's B should be capable of talking to cluster two's C, and vice versa, as is necessary to keep your application alive.
H
But if you treat all of your clusters the same, then there is no reason why you shouldn't trust that cluster B's service account, where cluster B has its own intermediate cert that chains up to your root PKI, is the same as cluster A's, which has its own intermediate that chains up to your root PKI.
H
I
think
there
are
a
lot
of
a
lot
of
considerations
here
that
the
average
corporate
pki
team
has
never
considered,
but
that
is
a
problem
of
the
average
corporate
pki
team
having
terrible
security
constraints
and
issuing
certificates
willy-nilly,
not
that
you
can't
build
a
good
crypto
system
around
this.
A
So
I
personally
see
this
trying
to
optimize
for
a
case.
That
is
the
minor
case
right.
I
think
most
people
don't
want
their
identities
crossing,
but
they
do
want
to
share
the
info
because
that's
the
easy
thing
to
do,
and
technically
the
compliant
thing
to
do
based
on
their
own
requirements
right,
you
know,
self-signed
certs,
always
use
corporate
pki,
always
know
every
cert.
That's.
E
The way I use my clusters is: I have one in us-west, I have one in Japan, and when I spin up one in Africa, I don't want to go and, like, change all my RBAC rules everywhere just to permit this new geo. For replication or load balancing, those applications have the same identity globally, and so I...
H
I
see
both
as
someone
who
is
small
and
talks
to
some
of
the
bigger
players.
I
do.
I
do
see
both,
but
that
said,
the
the
problems
with
the
the
problems
with
using
a
shared
corporate
ca
infrastructure
are
that
you
are
using
your
cas,
inappropriately
that
you
should
be
like
it's
entirely:
okay,
to
put
intermediate
certs
as
your
as
root
certs
for
specific
applications.
H
Your
kubernetes
api
server
should
have
its
own
intermediate
cert
as
the
as
the
the.
How
do
I
verify
clients
certificate,
whereas
other
applications
that
are
talking
to
each
see
the
the
root
cert
to
which
all
of
the
kubernetes
individual
clusters
chain,
because
they
want
to
ignore
the
fact
that
these
are
our
separate
domains
that
you
can
have
these
sub
constraints
in
a
in
a
very
simple
tree
by
just
having
good
intermediate
certs.
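The "trust only this intermediate" point can be demonstrated with Go's verifier: two cluster intermediates chain up to the same corporate root, but pinning one of them as the sole trust anchor rejects leaves issued under the other. All names here are hypothetical:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

var serial int64

// mkCert creates a certificate signed by parent, or self-signed when parent
// is nil. This sketches a simple root -> intermediate -> leaf tree, not any
// real cluster's PKI.
func mkCert(cn string, isCA bool, parent *x509.Certificate, parentKey *ecdsa.PrivateKey) (*x509.Certificate, *ecdsa.PrivateKey) {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	serial++
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(serial),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  isCA,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	if parent == nil {
		parent, parentKey = tmpl, key // self-signed root
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, parent, &key.PublicKey, parentKey)
	cert, _ := x509.ParseCertificate(der)
	return cert, key
}

// verifiedBy reports whether leaf chains to anchor when anchor is the ONLY
// trust anchor, which is how a verifier pins one cluster's intermediate.
func verifiedBy(leaf, anchor *x509.Certificate) bool {
	pool := x509.NewCertPool()
	pool.AddCert(anchor)
	_, err := leaf.Verify(x509.VerifyOptions{
		Roots:     pool,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	})
	return err == nil
}

func main() {
	root, rootKey := mkCert("corp-root", true, nil, nil)
	clusterA, keyA := mkCert("cluster-a", true, root, rootKey)
	clusterB, _ := mkCert("cluster-b", true, root, rootKey)
	leaf, _ := mkCert("system:serviceaccount:foo:bar", false, clusterA, keyA)

	fmt.Println(verifiedBy(leaf, clusterA)) // true: pinned to cluster A's intermediate
	fmt.Println(verifiedBy(leaf, clusterB)) // false: same corporate root, different domain
}
```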
H
I remember that; that is actually correct. Presenting the full chain is fine if the trusted certificate is somewhere in that chain. You can present a chain (and this is part of just how X.509 works) that roots further up, but you don't care about that. You only care that it roots to one of the roots in your trusted list, even if you could go beyond that.
H
They're not distinct domains so long as you are authenticating to the root CA. But you can say, "I only trust this intermediate cert," and it won't trust from a different intermediate.
E
I think you're right: it's probably not a great idea to have the cluster CA be chained to some random on-prem thing; that should be scoped to the cluster. But the identities issued by the cluster may not be solely cluster-local. If you want them to be solely cluster-local, then, yeah, make them be backed by the cluster CA, or the API server client CA, or... I don't know, we've got, like, 10 CAs now in a cluster.
H
I don't think there's just one at this point, yeah.
A
So, I want to be respectful of the time. I'm open to continuing this discussion; hopefully there's more than, like, Rita and myself from the leads here, so that we can keep talking about this. Yeah, I mean, overall, though, I think some variation of this KEP can probably make it into kube at some point; I don't think that's necessarily an issue. I think we just need to distill it down to what we think we need to actually put in.
H
I will go build a demonstration of this, because I already have 90% of what I need to issue secrets to pods. But I'd like to see some variation of this make it mainline, even if the broader discussion of how we do stuff is tabled.