From YouTube: Kubernetes SIG Auth 20171101
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20171101
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
Hi, this is SIG Auth for November. First on the schedule today is a demo of SPIFFE/SPIRE, so we're going to start with that. There are some pulls to talk about and some designs, and then a few discussions to end with, but I want to have plenty of time to talk about the KMS stuff that has open pulls. So Evan is here to give us a demo of SPIFFE/SPIRE, thanks.
B
We've made some considerable progress since then, so I'm here today just to show you where we are. We've got what we call Kubernetes workload attestation working, so I'm honored to share that with you as well. And I apologize for my video camera; it's not happy with me today. So, first of all: this is a GitHub repo called spiffe-example, and we're working in one of its subdirectories.
B
So here is a diagram of what is happening under the covers in the demo. Like I said, it's all tied together by make, but under the covers it's using Vagrant, and we have three different Vagrant VMs here. On the top right you see a Kubernetes master VM; this is running the Kubernetes API server, and we'll also use it to run the SPIRE server. On the bottom right, there's the database.
B
So I'm going to show you today this one little piece here called the workload sidecar, which could be better named, quite frankly, but basically it's just a small bit of code which hits the Workload API exposed by the SPIRE agent, pulls the certs out, and then dumps them out. That's just for now; we haven't quite taught ghostunnel how to do that itself yet, and that's one of the things on our list of things to do.
B
Yeah, so I'll show you how it works. Let me just bring this up real quick. Can you guys see my terminal? Okay? Yes, we can. Okay, awesome. So these windows: there are three different hosts, and we just have two terminals open on each one. The top row, you can see, is labeled k8s-master; that's the one running the API server, and we'll run the SPIRE server there. The middle row is two terminals on the node, which is running a kubelet, and the bottom terminal is on the database host.
B
Okay, so the SPIRE server is up and running. Now let's generate a CA to do all the signing and everything like that. The next thing we want to do is start the agent on the database host. Normally we have automated node attestation: depending on the environment, if you're on AWS, GCP, Azure or whatever it is, we can detect that and do native attestation using its identity document, or whatever method you deem acceptable.
B
We
don't
have
anything
like
that
under
vagrant,
so
instead
of
automatic
attestation
today
we'll
be
used
to
what
we
call
joint
tokens,
which
is
similar
to
kind
of
the
way
that
dr.
swarm
works.
So,
let's
generate
a
token
and
what
we're
going
to
do
is
we're
going
to
assign
us
50
ID
to
this
token-
and
this
token
will
be
like
the
identify
this
node
so
we'll
just
call
it
DB
node
DB
host,
rather
mom,
doesn't
really
matter
so
we'll
copy.
B
So now the agent is running, and it's serving the Workload API, which is what the workloads use to fetch their certificates. We have this little script right here, saved in the same place of course, which starts that helper code that interacts with the Workload API, and it also starts ghostunnel. Just to show you what happens: if we start it now, it panics because it doesn't get any bundles, and that's because we haven't registered this workload with SPIRE.
B
I'm going to be running the process just as my user here, so you can see that we are UID 1000, and I'll use that as what we call a selector. Going back up to the SPIRE server, we'll register it: we're going to say that the parent ID is the db-host, since that's where we're going to run it, and then we describe the workload using a selector, in this case a UNIX UID, and then we're going to give it a SPIFFE ID.
B
Okay, so that's running, and now you see a little bit of a difference here: the logs are rolling over continuously on this particular agent. The reason for this is that, as I showed in the diagram previously, inside the pod we're running the Blog application plus ghostunnel plus this helper, but we saw earlier that the helper code just exits if it can't get a cert from the API. So for the purposes of this demo, what we did was just throw that helper into a while-true loop.
B
So you see it here, just continually trying and exiting, and we just churn through it until something is available. So we want to issue a SPIFFE certificate to the workload inside Kubernetes. Now, previously we described the workload in terms of UNIX primitives, but on Kubernetes we want to use Kubernetes primitives.
B
So
we,
the
number
of
options,
number
of
things
you
can
use
the
selectors
as
will
grow
quite
significantly
in
the
near
future,
is
just
a
few
large
change
to
enable
support
there
and
you'll
notice
that
that
this
log
is
stopped
rolling
over,
and
you
may
also
be
able
to
see
here
that
we
picked
up
that
blog,
that
blogs
the
fee
idea.
So
now,
if
we
look
on
this
host
for
ghost
tunnel,
we
can
see
that
it
is
in
fact
running
and
that
we
are
verifying
that
the
server's
50
ID
is
example.
B
There it is, so that's working. And if we go back to the terminal here, we can see where ghostunnel has opened those pipes. So now this is Blog, running as a Kubernetes workload, attested with Kubernetes primitives, doing mTLS on the SPIFFE ID to MariaDB, which is described with UNIX primitives on just a bare-metal VM. And just to demonstrate that this is actually traversing the mTLS tunnel, we can kill this ghostunnel process.
B
You
can
see
on
the
bottom
now,
where
those
pipes
have
closed
up
on
the
server
side
and
if
we
refresh
every
here,
we
go
500.
So
that's
pretty
much
it
that's
the
demo
future
demo.
What
we're
going
to
do
is
we're
going
to
show
the
same
thing
except
we'll
show
like
five
second
cert
rotation
and
will
delete
the
registration
entry
which
will
make
that
replication
automatic
that
that's
pretty
much
it
workload
at
the
station
on
kubernetes
using
spire
server.
B
Note that we have the IP addresses all hard-coded in this demo; that's why I can hit this Blog IP address reliably. We haven't done discovery here. SPIRE itself doesn't really solve for discovery; it just assumes that discovery is solved in some other way. The agent is configured similarly: the agent configuration has to have a name or an IP address to reach the server, so it's the same story there, okay.
D
B
So the way that we do that is we use a special socket option. The API is exposed as a UNIX domain socket, and there's a socket option there which allows us to get the process ID of the caller. We do this first, and then, once we have the process ID, we check the cgroup of that process, and the way that the kubelet and Docker currently interact is such that the pod UID is embedded into the cgroup names.
E
B
So it's a combination of TTL and the first time that it's actually used. When we generate the token, we store it, and we store it along with the TTL. When a request comes in to do attestation using the join token, we do a lookup, and then, number one, we make sure that we're not exceeding the TTL, and number two, that it hasn't been used; its presence in the database, or in some persistence, tells us that, and when we do use it, we remove it.
B
So
that's
how
we
enforce
yeah
and
and
also
I,
should
clarify
that
it
our
join
token.
It's
not
quite
as
fancy
as
the
docker
swarm
join
token.
Just
yet
it
right
now,
it's
literally
just
to
do
it,
but
in
the
future
we
may
change
that
to
also
kind
of
have
like
certificate
hash
and
the
stuff
like
docker
has
done,
because
we
see
value
there.
F
So I have a couple of quick questions. It looks like there are sort of some missing pieces yet, but I think people can fill those in in their minds. One of those is that you were explicitly associating which SPIFFE IDs go with which workloads; that selector table was being populated explicitly. Somebody could write a controller, a trusted controller, to do that automatically, right? Absolutely.
B
When I did the registration on the CLI, that's actually just backed by an API under the covers, so anyone can hit that thing. As for how you decide to do that, we haven't quite figured out the optimal way to do it if you're running one of these, but you can imagine that there are several approaches. Yes.
F
There's
options
there
and,
and
the
same
thing
is
that
the
the
joined
mechanism
is
pluggable
also,
so
the
token
is
one
plan,
I
mean
another
thing
and
if
you
look
at
like,
like
you
know,
like
I'm
thinking
like
like
configuration
management
systems
and
how
they
establish
bidirectional
trust
a
lot
of
times,
there's
like
an
approved,
you
know
approve
API,
like
hey,
there's
much
a
request
that
you
can
go
in
and
manually
approve.
Those,
so
I
could
imagine
that
either
one
of
those
could
be
built
with
with
this
flow
yeah.
B
It's all fairly generic, and I think that if you were to have some system like config management, which is doing orchestration of host provisioning, you could get pretty flexible and pretty fancy there. So I don't anticipate any problems on that front. But I do think the ideal scenario is that we're using some sort of platform-native attestation, and this thing is kind of automated, and you don't have to do those sorts of dances.
F
A
B
A definite possibility. In the ideal world, the proxy itself knows how to speak the Workload API, and that's work that we will be doing shortly, but we just haven't gotten there yet. So this little sidecar bit I hope to not have around for too long. But yeah, automatic injection is one thing. Another thing that we would probably want to utilize something like that for is: hey, where is this domain socket living, and how do I reach it?
B
F
G
Look
at
that
as
like
every
piece
of
software
right
now
that
has
configuration
flags
to
pass
in
certificate.
Private
key
see
a
cert,
sir
TLS
options.
You
could
think
about
replacing
all
of
those
with
a
single
option
that
points
it
at
a
at
a
spiffy
workload
API
and
teach
the
software
to
configure
itself
via
the
API
instead
of
from
files.
F
So
what
other
scenario
that
that
I?
Don't
think
that
I
don't
think
I've
been
covered
here,
is
that
this
stuff
could
be
used
both
for
workloads
running
under
kubernetes,
but
it
could
also
be
used
for
a
generic
way
for
bootstrapping
the
kubernetes
control
plane
itself.
Now,
there's
obviously
overlap
with
a
bunch
of
this
sort
of
certain
rotation,
certain
bootstrapping
stuff
that
happens
with
that's
being
built
into
commodities,
but
this
is
essentially
a
more
generic
version
of
that
stuff
sitting
outside
of
kubernetes.
H
B
A good question. So right now we're only supporting UNIX domain sockets. Personally, I think that also supporting a TCP binding on, say, a loopback adapter probably makes a lot of sense for a lot of use cases. The reason that we haven't pursued that just yet is that backing your way into a process ID from a TCP socket is horribly non-performant under Linux, and so that...
B
Sorry to say, and I've pushed in a lot of different places and haven't found anything better than the known method of walking /proc. So I think what will eventually happen, once we have demand or some good reasons to implement it, is that we will, but as a non-default that comes with some sort of warning or something like that. That makes sense, yeah.
H
B
Absolutely, and Windows portability, obviously we're not working on it right now, but this is all built in Go for a reason, and we do wish to be portable to those types of environments. We briefly looked at some options; Windows has what they call named pipes, which may be an option as the UNIX domain socket analog, but we don't know quite enough about them yet to say for sure whether we'll be able to use that or not.
H
A
H
So
the
kind
of
history
here
is
that
the
in
sort
of
1/8
timeframe,
Lee
wrote
this
Google
kms
plugin
KK
stead
of
working
on
the
volt
equivalent
of
the
same
thing.
Essentially,
the
idea
is
for
the
one
7
in
one
7.
We
enabled
secrets
encryption
with
the
key
on
disk
and
that's
obviously
not
a
good
way
to
manage
keys.
H
It
was
going
on
in
the
cloud
provider
kind
of
the
old
cloud
provider
model
which
was
sort
of
actively
being
trying
to
deprecated,
and
so
the
kind
of
condition
that
went
in
under
was
this
is
going
to
be
alpha
and
then
for
beta.
We
will
have
to
have
some
sort
of
out
of
process
version
of
doing
this
so
instead
of
compiling
and
into
the
master
and
having
this.
H
This
came.
This
extension
point
kind
of
inside
the
master
by
the
desire
from
the
API
side
of
kubernetes
was
to
get
it
out
of
the
master
and
and
have
some
sort
of
defined
API
that
we
could
write
to.
So
we
filed
a
bug
for
that
and
instead
of
had
someone
to
work
on
I,
thank
you
for
and
then
they
unexpectedly
could
not
work
on
it
anymore.
H
So
when
now
in
the
position
of
we've
got
kind
of
Microsoft,
vaults
and
Google,
all
trying
to
write
these
kms
provide
is
kind
of
current
plug-in
mechanism
is
not
kind
of
the
API
team
hasn't
been
sort
of
on
board
with,
and
so
we've
gotten
into
this
position
of.
What
do
we
do
now?
It?
It
doesn't
seem
like
you
know,
Microsoft
wants
to
get.
H
There
came
this
provider
in
bolt
one
kaykai's
trying
get
the
vault
one
and
we're
trying
to
move
for
with
the
Google
one,
but
there's
no
agreement
on
kind
of
the
plug-in
mechanism,
and
so
blocking
kind
of
one
of
those
out
of
those
out
of
that
set
doesn't
seem
like
the
right
thing
to
do
so.
We've
basically
proposed
that
Google
remove
its
kms
provider
from
where
it's
implemented
now,
and
we
focus
attention
on
getting
to
this
out
of
out
of
process
kms
Virata
and
the
kind
of
store
main
idea
there
is.
H
We
have
some
sort
of
G
OPC
interface
to
this
camus
provider
that
could
be
kind
of
launched
and
iterated
on
independent
of
kubernetes
releases
by
the
various
people
working
on
these
things,
so
it
doesn't
exist
yet.
But
the
current
kind
of
direction
I
think
we're
going
towards
is
to
pull
out
the
canvas
provider
block
the
vault
provider
on
this
work
and
then
focus
on
getting
this
work
done
and
then
have
them
all
use
that
interface.
A
C
It's been a learning experience for me, but we did put out a strawman, picking up on what Craig pointed out in the bug. I think for the next meeting I'd like some input, especially from people like Joe Custer, on whether that's the right direction to go in or what changes we need to make. It's really just our initial thoughts. So it's an item that David had on the agenda further down.
C
We can talk about it at the next meeting, but we would like to see if we can get this thing done correctly out of tree, so we can evolve these two separately. That's about it. I mean, I'll set aside my frustration; that really doesn't add any value here. That's about it! I'm looking for input and guidance from people more experienced than I have been in this area.
C
H
And so we also have someone starting on our team who I'm going to put on this problem as well, so we'll contribute here. We've also kind of been discussing this with Daniel Smith on the Google side, on the API team, as well, so they're happy to have some input there into this design. So yeah, we'd like to push this forward as well, so there'll definitely be help from us all.
A
H
Yeah
it'd
be
good
to
talk
timelines
here
to
just
briefly
so
it'd
be
I.
Think
we
want
to
get
this
design
done
fairly
soon
and
start
work
on
implementation
of
that
API
kind
of
so
it's
ready
for
use
in
110,
I
think
but
yeah.
It's
obviously
I
think
it's
gonna
be
too
late
to
get
it
into
1-9.
So
now
what
well
past
kind
of
where
that
can
happen?
Yeah.
A
Okay, great, I want to make sure we're on the same page. So the next pull is one that's been open for a little while: it's short-circuit deny for authorization, and it looks like it's ready to merge. It's only noteworthy here because it impacts the SelfSubjectRulesReview for webhook authorizers that Eric Chiang was working on, and I wanted to bring it up and make sure that Eric and Mike were talking to each other.
J
We are definitely aware. I think that short-circuit deny is ready to merge. There were two typos and they are fixed now; we're just waiting on approval. I got LGTMs from Liggitt after those changes, and an LGTM from Eric already, and then we're waiting for approvals. So if you guys want to go and finish that off, that would be great, and I'll get out of the way so Eric can do what he needs to do. Okay.
K
Yeah, the one concern on my part for the SelfSubjectRulesReview is that we are probably going to have to introduce some new API changes. I think it's hard for us to do this in a purely backwards-compatible way, just because clients will have to start understanding that there's a new denying mechanism. I wish...
K
We
could
do
this
in
some
sort
of
way
that
was
transparent
to
the
client,
but
it's
hard
for
us
to
compute
denies,
because
if
you
say
something
like
this
person
has
access
to
everything
and
then
the
another
authorizer
comes
in
and
says
accept
this.
We
don't
have
a
good
way
of
expressing
that
in
the
API
today.
A
K
Yeah, that seems to summarize it pretty well. But the one, I think our one saving grace, and David, we talked about this over Slack, is that we explicitly put in the API documentation that this is not an API that should be used to compute ACLs. It shouldn't be used by external clients to determine access control; that's still SubjectAccessReview, a review of an exact subject, not the rules review.
K
A
I agree that makes sense, and I'm very glad we did it. The last one that I knew I had on the list is the RBAC role reconciliation pull I created; unless that pull was perfect, and I kind of doubt that, it could use some review. I think the idea has reached agreement, and it's now down to exactly how it works, how it's implemented, and liking the test coverage, that sort of thing. But the upgrade/downgrade reconciliation...
A
Should
will
know
whether,
for
instance,
you
noobs
the
entire
admin
role,
things
won't
have
on
there,
because
I
don't
think
we
do
are
related
to.
If
you
mutated,
the
admin
role
yourself
to
add
a
permission
did
did
the
great
preserve
that,
for
you,
so
I
had
I.
Think
I
had
one
reconciliation
test
that
demonstrated
that
that
would
happen.
A
All right, so for our designs, this time it's just homework for next time. Take the time to read that external KMS doc; the strawman looks pretty straightforward. Normally I would laugh at a bytes-in, bytes-out API, but that's actually exactly what we need. So the doc is up, and hopefully we'll be discussing that next time. I also have two discussions.
A
But after it's finished, you need to make sure that no other changes in the admission chain have modified the security settings that were set for PSP and already validated, so you need to validate it a second time. I figured I'd go ahead and bring that idea up here, and, gosh, I can't remember his name, there was someone on the Google side who was looking at this for 1.8, looking at PSP for 1.8.
I
Yeah, definitely. There are a lot of issues that have come up with the mutation in PodSecurityPolicy, and a bunch of those were addressed recently by Jordan's changes to order the non-mutating ones before the mutating ones, but I think there's still a little bit of confusion around how the mutation should work, especially since you can have multiple pod security policies bound to service accounts versus users. It just feels like a space that needs a little more thought, both about the bugs and about the user-experience side of it. Okay.
A
I could see that. I think what we're going to do, there's actually someone lined up from the API server side to go ahead and make this change, and when it's open I'll go ahead and tag SIG Auth on it so you guys can take a look. It shouldn't be extensive surgery, because of the way we've done it: an admission plugin can say it's both, and it gets two shots at a particular resource. But that's going to be something up and coming.
A
The
other
thing
that
came
up
recently
is
the
cube.
Acp
ice
cube
api
server,
insecure
port.
It's
been
around
for
a
long
time,
but
new
features
for
the
cube
api
server
have
not
the
cubase
api
server.
Insecure
port
doesn't
support
new
features,
so
you
have
things
like
api
aggregation
that
require
you
to
have
a
secure
port
because
it
has
to
be
able
to
speak
back
and
check
things
like
authorization,
roles
right
and
insecure.
Port
logically
has
no
authorization.
C
A
...being able to use newer features of the API server. I think we are going to deprecate it and eventually remove it. It will probably be a year-long process from now for full removal, but deprecating now would mean we wouldn't be worried about trying to add new features; we already haven't been adding new features for it. I figured I'd bring that up here and see if anyone had a burning desire for it.
F
A
L
A
I think that's going to be a good idea in general, just because we know of other issues in this area that are probably going to come up more and more, and it's not like we're removing it tomorrow. But I will send a note out to kubernetes-dev, and that idea of a local proxy, I'll probably mention in that note that it's a good idea, and that it's part of a transition plan for people to wean themselves off of it.
J
If
anybody
removes
it,
they
will
have
to
I
I
hate
to
say
it,
but
whoever
whoever
starts
the
deprecation
and
once
you
actually
remove
from
the
API
server
to
do
that,
we'll
need
to
make
sure
that
tests
pass
on
their
PR
and
and
that's
where
the
the
brunt
of
the
work
is
so
great
cuz
on
those
hack
scripts,
yes
yeah.
So
whoever
does
the
deprecation
is
going
to
have
to.
M
Do you want to take this one, Clayton? Of course I do. So, this has been discussed in previous SIG Auth and container identity working group meetings, but one of the things that we were playing around with in the container identity work was part of that long-term arc of getting secrets out of the API server, or specifically, service account tokens being stored as secrets. That was on kind of our long-term roadmap for secrets that we talked about in 1.7 and 1.8.
M
I
think
I've
become
convinced
as
part
of
playing
around
with
this
that
we
could
move
most
of
the
service
account
token
infrastructure
out
of
the
core
of
kubernetes
and
by
out
of
core
I,
mean
make
it
decoupled
and
potentially
replaceable
I.
Think
I'm
gonna
propose
for
1/10
that
we
actively
try
to
prototype
to
that
point.
Mike
danijon
I've
been
playing
around
with
it
I'm
fairly,
convinced
that
it's
possible.
M
The
right
thing
to
do
helps
exercise
a
whole
bunch
of
extensibility
mechanisms
that
we
don't
have
around
security
such
as
you
know,
disabling
the
service
account
token
controller.
That's
built
in
using
web
block
authorization
going
through
the
exercise
of
having
a
more
canonical
look.
Authorization
example
flex
volumes
for
injecting
secrets,
leading
CSI
and
so
I'm
gonna,
pull
together
a
proposal
or
pull.
A
M
I
just
want
to
say
that
spire
would
be
perfect
for
this,
exactly
and
and
even
the
work
that
we've
been
talking
about
in
container
identity,
for
John
minting
for
people
who
want
to
continue
like
to
continue
a
cube
cluster
like
done
right.
We
kind
of
operation
and
spire
and
a
more
like
a
simpler
or
like
continuation
path
and
potentially
third-party
authentication
that
we
haven't
even
thought
of
yet,
and
they
all
kind
of
benefit
from
the
same
set
of
changes.
It
would
not
be
a
like.
This
is
not
necessarily
a
scope
increase.
M
It
is
more
of
a
pull
together,
something
that
realistically
shows
what
this
is
like,
and
actually
that
fixes
a
couple
of
problems
in
Cuba
might
be
optional.
For
some
people,
I
think
become
the
openshift
side,
I'm
already
kind
of
mentally
convinced
that
this
is
what
we
would
want
to
do
in
most
of
our
clusters
just
to
deal
with
some
of
the
security
but
beginnings
getting
service
account
tokens
better
secured.
It's
just
a
straight
up
win.
It's
also
a
scalability
win.