From YouTube: sig-auth bi-weekly meeting 20210901
A
All right, everyone, hello. Welcome to the September 1st, 2021 meeting of SIG Auth. Let's kick it off. We have a few items on the agenda; let me share my screen.
A
So, just some 1.23 deadlines for KEPs: the PRR deadline is tomorrow, September 2nd, and if you want to get your KEP merged and reviewed, please do so before September 9th.
B
Oh okay, cool. Yeah, so maybe I can share my screen here. Let's see... all right, you might need to grant me some permission or something.
B
Cool, so I'm Mike Taufen. I've been around Kubernetes for a while, though it's been a while since I've been in SIG Auth, so it's great to see all of you again. Actually, before I get started: can somebody take notes as we discuss this?
B
Awesome, thank you. So let me give a little background on what I'm trying to solve or improve, then we can talk about one idea, which is what's in this proposal, and then just open it up for discussion. I know a lot of people had really interesting comments and a lot of great feedback on it. There are different approaches to this, maybe even totally different APIs we'd consider, so that's what I'm really interested in hearing about from y'all today. So, just to get started.
B
They include metadata, or are sort of bound to pods, and these are all really great security improvements. But there is one gap, at least in my opinion, which is pulling images. When you do an image pull today, the identity the image is pulled as sort of depends on the environment you're running in. There's usually some ambient authority on, like, the VM that you're on, or maybe you exported some long-lived key from the cloud provider.
B
That's
hosting
your
images
and
you
uploaded
that
in
your
cluster
and
image
pull
secret,
and
I
want
to
see
if
we
can
bring
some
of
the
benefits
of
the
token
work.
That's
been
done
over
the
last
couple
years
to
image
pulls
and
that's
the
main
goal
here.
I
don't
have
super
strong
opinions
about
the
best
way
to
do
that,
but
I
do
want
to
solve
that,
and
so
that's
kind
of
the
background.
B
It basically lets you configure some of the token metadata, similar to configuring a projected token.
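For reference, the existing projected service account token API being compared to looks like this in a pod spec; the audience and names here are illustrative:

```yaml
# Existing projected service account token API (for comparison): the
# kubelet requests a token with this audience/expiry and mounts it.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:v1
    volumeMounts:
    - name: registry-token
      mountPath: /var/run/registry-token
  volumes:
  - name: registry-token
    projected:
      sources:
      - serviceAccountToken:
          audience: registry.example.com
          expirationSeconds: 600
          path: token
```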
B
Next
to
where
you
would
have
configured
image,
full
secrets
and
then
cubelet
sort
of
reads
that
makes
the
token
request
for
a
token
passes,
that
into
a
plugin
that
is
mapped
to,
like
whichever
provider
is
hosting
your
images
and
then
that
plugin
can
do
the
right
thing
to
get
credentials.
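A minimal sketch of how that might read in a pod spec, assuming a hypothetical imagePullServiceAccountToken field; these are not the KEP's settled field names:

```yaml
# Hypothetical sketch only: the field below is invented for illustration,
# modeled on the projected-token API, and is not the KEP's final shape.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  serviceAccountName: puller
  # Sits where imagePullSecrets would go today; the kubelet would make the
  # TokenRequest and hand the token to the registry credential plugin.
  imagePullServiceAccountToken:     # hypothetical field
    audience: registry.example.com
    expirationSeconds: 600
  containers:
  - name: app
    image: registry.example.com/team/app:v1
```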
B
The
the
two
that
I'm
familiar
with,
which
are
gcp
and
aws,
currently
require
the
kubernetes
dot
to
be
exchanged
for
a
credential
from
that
platform,
and
that
second
credential
is
what
could
be
used
to
pull
images.
So
that's
the
reason
to
have
a
plugin
in
my
opinion,
so
that
providers
can
kind
of
implement
that
any
extra
logitech
that's
required.
B
Thanks
clayton,
although
we've
had
we've,
also
had
some
feedback
on
the
kept
that
some
folks
might
be
interested
in
having
sort
of
a
more
pass-through
mode
where
they
do
get
the
kubernetes
job
verbatim.
So
that's
definitely
on
the
table
as
well,
and
I
think
I
just
want
to
open
up
for
like
discussion
and
hearing
people's
opinions.
At
this
point
I
know
most
folks
have
read
through
the
cap.
E
Yeah, I'll go first. I'm really excited about this; I think this is an awesome step forward for not just using ambient credentials, like on a VM. This seems a lot like what we're doing already for CSI: there's a recent CSI feature where, if you want to pass a credential to a CSI sidecar to authenticate to your CSI provider, this gets used.
E
I'd love to see this; that's the thing in CSI. So I can see the case to have this pass through from the kubelet, where the kubelet does the token request. The benefit that gets us, instead of making the exchanger process go get the token, is that you get the node authorizer, so you don't have to reinvent that wheel.
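For reference, the CSI mechanism being described is configured on the cluster-scoped CSIDriver object (a real API around this time; the names are illustrative):

```yaml
# The CSI analogue: the CSIDriver object asks the kubelet to mint a
# service account token for the mounting pod and pass it to the driver.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets.csi.example.com
spec:
  tokenRequests:
  - audience: vault.example.com    # audience the driver presents the token to
    expirationSeconds: 600
  requiresRepublish: true          # remount periodically so tokens stay fresh
```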
B
Yeah, I think I do agree with the kubelet doing the exchange; I don't see a reason to duplicate it. Cool. Can you... or is that the link, Mike? Thank you.
E
The other question I had was: are there others like this? This sounds like the right place for this specific feature, but are there other places where we want this type of thing? We already knocked out storage, it sounds like, and image pull is an obvious one. Are there others where we do something that's not, like, a container in the pod where we want this? And do we always want to only support the service account of the pod?
B
It's a really good question. There are maybe arguments for granting a separate identity permission to pull the image as sort of an infrastructure thing, where you don't actually want the workload in the pod to have permission to pull images. And you outlined some other good areas where you might want an infrastructure identity, or even separate ones.
G
Yeah, like, audience might be enough, just because all of the things that plug into a node ultimately are... you know, I can't count the number of times I've actually gone and looked at what a DaemonSet's permissions were, and they were terrifyingly broad. The node authorizer was a great step for the kubelet, and then people just shoved DaemonSets that are root on the whole cluster onto every node, so it doesn't really matter too much.
G
So
you
know
finding
a
way
to
tie
it
either
for
daemon
sets
that
are
intended
to
be
node
infrastructures,
like
a
logical
extension,
the
csi
pattern
c-
and
I
you
know
c-
and
I
I
I
hate
to
say
this-
this
isn't
really
this
group's
responsibility
at
some
point
c
and
I
might
be
better
positioned,
looking
more
modern,
since
a
lot
of
what
cni
is
doing
is
tied
into
cube
life
cycle
as
much
or
more
so
than
storage.
B
Yeah, it does sound like there are a lot of use cases for that. I don't know; audience might be enough for a lot of people. At Google we have this goofy quirk where the audience has to be set to the name of the workload pool that the provider's registered in, so that kind of fixes the name of the audience in our case and makes it not usable for scoping down delegated authority.
G
With the interest in service identity and SPIFFE, I think one of the original goals with the service account token was to open the door for more of this extension. We kind of got through the first three phases of stuff we discussed almost four years ago now (I don't actually remember if it's three or four years), like the idea of sidecars that are half infrastructure, half workload, also being able to more concretely tie between a service account and an external identity.
G
It's
not
that
this
is.
This
proposal
has
to
take
into
account,
but
it
was
definitely
within
the
remit
for
service
tokens
originally
to
open
the
door
for
people
to
inject
and
run
their
own
infrastructure.
E
Can I say one other really quick thing too? In the AWS case, about audiences: I think we have a max of five audiences per OIDC provider, so right now everyone's just using one in AWS; now this makes two. We have, you know, three more, but that's not a lot to work with.
B
My sort of half-formed opinion is that roles are the thing, right? You want to eventually map to something that has a role, or just map to a role that you're acting as, and that's the thing that defines scope.
B
And
so
maybe
I
mean
it's
sort
of
hate
to
say
that
like
it
has
to
like
be
in
the
plug-ins,
because
it's
really
pretty
provider
specific.
But
it
does
sort
of
feel
that
way.
D
Yeah, I was wondering: it seems like using this is going to require the end user who's creating their pods to opt in every single time, unless maybe there's a webhook involved. I was looking at the underlying KEP you linked about the credential plugin, or credential provider.
B
That's an interesting idea; I had not thought about it from that angle. I think that could make sense, especially if we have the kubelet doing the token requests for you. The counter-proposal would be: plugins, you just use the kubelet's credentials to do token requests yourselves. But I think that puts a lot of responsibility on plugins that they shouldn't have to worry about.
B
Yeah. I like providing controls via the API if we can, just because they're way easier to expose to customers: they just come with the Kubernetes API, and it's not like every provider has to add a bunch of plumbing for setting node-level stuff. It might make sense as a way for providers who are already implementing plugins to make it easier for their plugin to get a token by default.
D
Yeah, so my thinking is we definitely need the user-level control. I'm only thinking about this from the GCP and AWS kind of model, since that's what I'm familiar with, but right now most customers don't have the necessary IAM policies to let their workload identities pull the correct tokens... right, pull images. So obviously some manual opt-in control is needed there.
B
Yeah, that is another thing I'm thinking about: whether we should fall back to the existing node-level identity or not if you configure this. Maybe just in the default scenario where the user hasn't specified anything. In the scenario where they do, it's maybe more reliable to fall back, because they get a backup, but if they're indicating they're trying to scope the authority down, then maybe you don't want to fall back, because that's sort of a privilege escalation.
F
And there are, not a lot, but some fairly low-level bits being exposed in the API as proposed, especially things like expirationSeconds and the provider data as a RawExtension.
G
Certainly for user-facing config. I think user-facing config is kind of the highest tier of "you want to get it right the first time," and so it might actually behoove us to work our way up, but have the ability to do this, and you don't want to take the ability to use this away from the user. You'd want to come up with the thing that gives you enough confidence that the eventual API is going to be the right API.
G
Certainly, just at a broad level, large clusters typically share credentials or have a credential provider, and it's when you start getting into smaller clusters, or when you have more ad hoc usage, that you tend to end up with end users specifying it. So it really is a mix across the spectrum of cluster sizes and use cases, and certainly working from the bottom does have at least that advantage.
G
And certainly with inline CSI plugins, it should in theory be possible to use an inline CSI plugin today to demonstrate this API, to test the idea before the actual API lands.
G
Inline CSI plugins, I thought, had free-form key-value parameters that could be used to emulate some of this.
F
CSI actually supports passing service account tokens to drivers today.
G
I meant a custom type that would take these attributes; the CSI plugin would get these attributes and then ask for the token directly. Maybe the necessary glue isn't there on the inline volumes, although I swear it was requested as part of the... let me take a look.
B
Yeah, I guess you can definitely do that to get a token into your file system; there just wouldn't be the plumbing through to the image pull process, right?
G
Yeah, it would be more about giving us room to test the idea, like having a special CSI plugin. All CSI plugins are effectively under admin control, and so if there was a specific CSI plugin that we tested the idea with, where the kubelet cherry-picked it off and didn't pass it on, but it was elided from the pod spec at the kubelet level, you could test the concept in an alpha form. You'd effectively be able to do it.
G
You know, a volume type, maybe a hard-coded special-case name for the plugin name that acts as a special case. Then we could test that and then pull it out, or say that's the alpha version, and now that we have experience with it, we've...
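A sketch of what that alpha experiment could look like with today's inline CSI volumes; the driver name and attributes are invented, and as B noted above, this gets a token into the pod filesystem but is not plumbed into the image pull itself:

```yaml
# Hypothetical experiment: a special-cased inline CSI volume whose driver
# mints a registry token. Driver name and attributes are made up here.
apiVersion: v1
kind: Pod
metadata:
  name: pull-token-experiment
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:v1
    volumeMounts:
    - name: pull-token
      mountPath: /var/run/pull-token
  volumes:
  - name: pull-token
    csi:
      driver: pulltoken.csi.example.com   # hard-coded special-case name
      volumeAttributes:                   # free-form string attributes
        audience: registry.example.com
        expirationSeconds: "600"
```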
G
Not
actually,
smuggling
is
exactly
the
word
that
I
would
use
and
and
to
be
fair,
I
think
there
are
use
cases,
for
instance,
I
would
wouldn't
be
surprised
if
tecton
and
other
build
things
might
actually
concretely
have
a
use
case
today
for
getting
a
pull
token
mounted
into
their
pod.
I
don't
know
that
that'd
be
the
most
compelling
use
case
for
them,
but
like
thinking
through
the
implications
that
would
be
a
potential
exploration.
H
Hey folks, first time in the meeting, so forgive me if I'm potentially oversimplifying.
H
Thank you. So I did leave a few comments on the spec. One of the comments I had was really asking: how do we actually envision users using this API?
H
What I'm kind of stuck on is that, if I imagine coming into, I don't know, a GKE cluster or something like that, I would imagine that I shouldn't have to be creating this image pull token spec for every single pod that I spin up. Maybe there's some kind of webhook that's filling this in automatically, but then I would ask: is this really the ideal design for this API?
H
Where,
for
every
single
pod,
you
would
have
to
be,
you
know,
filling
out
more
or
less
the
same
details
from
at
least
my
understanding
that
they
would
be
more
or
less
the
same
details
should
this
be,
you
know
like
so,
for
example,
for
the
existing
credential
providers.
As
far
as
I
understand
they're
kind
of
a
moral
cluster
level
concept,
should
this
be
elevated
to
that
level?
Should
the
users
really
be
thinking
about
this
kind
of
configuration
at
a
per
pod
level
versus
something
greater?
I
can
understand
that
it
might.
B
I
think
it's,
I
think
it's
a
related.
It's
a
really
good
point,
and
I
think
you
know
sort
of
similar
to
what
tahir
is
suggesting
with
maybe
moving
this
into
the
credential
provider
config.
To
start
right,
where
you
know
any
cloud
provider
who's,
doing
managed.
Kubernetes
can
set
that
up
for
you
and
it
wouldn't
even
need
a
web
hook
and
pretty
much
be
able
to
ignore
it
until
you
have
to
do
something
special
like,
like
you
know,
straddle
identity,
pools
and
and
migrate
between
them
or
something.
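For context, the node-level config being referred to (the kubelet image credential provider, alpha at the time) looks roughly like this; the provider name and registry patterns are illustrative:

```yaml
# Existing kubelet image credential provider config (node-level, alpha at
# the time of this meeting). Provider name and registries are examples.
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
- name: example-cloud-provider        # binary exec'd from the kubelet's plugin dir
  matchImages:
  - "*.registry.example.com"          # images this plugin provides credentials for
  defaultCacheDuration: "12h"
  apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
  args:
  - get-credentials
```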
H
To
add
a
little
bit
of
context
of
why
interested
in
this
is,
you
know
we
installed
configuration
onto
you,
know:
kubernetes
configuration
onto
different
clusters,
and
we
also
you
know
fiddle
around
with
the
images
that
are
referenced
by
that
configuration
and
that
kind
of
happens
transparently.
H
You
know
from
the
user
perspective,
so
they
they're
not
even
thinking
about
individual
images
and
so
from
you
know,
for
our
use
case,
it's
very
beneficial
to
decouple
the
user.
Who
is
installing
some?
You
know
piece
of
kubernetes
software
from
the
user
who,
maybe
or
or
the
cluster
operator,
who's
configuring
what
the
cluster
can
kind
of
access,
because
otherwise
you
end
up
with
low
level
details
leaking
out
to
this
higher
level
activity.
B
What
do
people
think
about
like
if
we
somehow
added
something
to
like
namespaces
for
for
this
sort
of
use
case
like
you,
can
specify
the
defaults
for
an
entire
kubernetes
namespace
and
say
so,
like
I'm
thinking
of
a
specific
scenario
that
I
know
we
have
in
like
our
on-prem
products,
which
is
that
customers
are
using
image,
pull
secrets
today,
they
pretty
much
you
like,
there's.
Basically,
one
image:
pull
identity
for
the
entire
cluster,
that's
just
uploaded
in
image,
full
secrets,
and
it's
just
a
gcp
service
account
so
like
the
federated
identity.
B
Analog
would
be
oh,
like
have
these
rules
that
map
your
kurdish
service
accounts
to
this
specific
gcp
service
account
identity,
and
maybe
we
could
like
configure
that
on
a
per
name,
space
or
per
cluster
basis,
maybe
that
to
make
that
really
simple,
it
does
require
coming
up
with
like
a
special
like
special
identities
that
don't
exist
today
in
kubernetes.
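For reference, the existing per-service-account analogue of that mapping on GKE is a real annotation (the namespace- or cluster-level defaulting floated here does not exist today):

```yaml
# Existing GKE Workload Identity mapping: annotate a Kubernetes service
# account with the Google service account it should act as. The idea above
# is roughly a namespace- or cluster-wide default for this kind of rule.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: puller
  namespace: team-a
  annotations:
    iam.gke.io/gcp-service-account: image-puller@my-project.iam.gserviceaccount.com
```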
H
Just to think out loud a little bit: I was kind of thinking on my own, well, does the namespace-level attachment make more sense? And I don't know, there's still something off with it, because as a user of a cluster you keep on creating new namespaces, and yet again you have to really worry about this, which I consider a lower-level detail. So that's all I want to add.
G
Got it, yeah, thank you. And it's worth pointing out with images too, this has been a long-standing problem in Kube: if you have different people using the same machine, two different people with two different sets of pods on the same node, there's really nothing that prevents one user from accessing the images pulled by someone else. Certainly there's a mitigation that went into the kubelet a very long time ago, which is always-pull, which forces a pull attempt, which is super inefficient.
G
The goal is to get secrets out of the core infrastructure and have the ability to do something, pull, or in other cases maybe even push or make API calls or get a token, and make those kinds of things exist outside the system so the user doesn't really have to think about the details. You don't have to see physical secrets show up.
G
That's really the net win that we were going for with service account tokens. As Micah said earlier, continuing that: we want to get secrets out of Kube that shouldn't be in Kube, because there's a better way now. We just need to make sure it's composable enough, and I think that composability is kind of what that last point was: what's the right level of injecting one of these kinds of secrets for a user, or allowing a user to still avoid using a secret and instead rely on some interplay? That's kind of where writing a secret is super composable, because you can write a secret with a controller, auto-update the service account, and things just work.
G
If we had a really specific approach that was maybe two or three different technology choices, a credential provider for infrastructure, maybe one for shared use, and then one for the end user, I feel like that'd be a missed opportunity to ask: is there a common thing we're going for? That was partially the suggestion around thinking about the CSI plugins.
G
How
can
we
use
the
idea
of
maybe
csi
plugins,
injecting
pull
secrets
or
plugins
in
general,
injecting
pull
secrets
to
make
the
the
permission
right
a
little
bit
more
abstracted
from
the
pod
from
the
cubelet,
but
keep
each
of
those
three
layers
of
separation
for
use
case.
G
It's in the reference for the ephemeral volumes... okay. There are a couple of different credential provider plugins out there; there are CSI plugins that are attempting to, or rather, plugins to the storage system that are not really concerned with storage but with file-based data access. This is a little bit different of a use case. It's a little bit like the downward API: in a sense, we're trying to inject data that the kubelet can service.
F
Yeah, so this was said in chat, but it is not on the pod; it's actually on a cluster-level object, the CSIDriver object.
B
Yeah, right now I feel like if I was going to put it anywhere other than the pod, I'd put it on the Kubernetes service account, but that's also sort of my mental dumping ground for everything identity-related, so maybe it's not the most appropriate.
C
Yeah, so stepping back a little bit: I can see how just expanding the config of the, are they, I forget if they're still alpha, the kubelet credential providers...
C
What I would still want to consider is use cases where, if you're using something like GKE or AKS, but for whatever reason you insist on using your own private registry that isn't hosted by those providers, I would still like this type of functionality to be present in that scenario. I don't want to copy some...
C
So
you
know,
I
know
we
sort
of
talked
about
like
a
pass-through
model
like
where
you
just
kind
of
try
to
pass
the
service
account
token
down,
and
the
registry
does
something
with
it
and
really
maybe
a
better
thing
than
pass-through
is
something
more
like
what
we
do
for
token
review
or
admission
review
with
web
books,
which
is
we
we
send
a
larger
set
of
data
as
a
json
blob
like
one
of
the
fields
would
be
the
token
almost
certainly
like,
there's
a
token
with
this
audience.
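A hypothetical shape for that richer request, in the TokenReview/AdmissionReview style; the kind and every field here are invented for illustration:

```yaml
# Hypothetical wire format, invented for illustration: a structured request
# giving the registry-side service the token plus enough context to decide
# (or perform an exchange) on its own.
apiVersion: imagepull.example.k8s.io/v1alpha1
kind: ImagePullCredentialRequest
spec:
  image: registry.example.com/team/app:v1
  audience: registry.example.com
  serviceAccountToken: "<JWT bound to the pod, audience registry.example.com>"
  namespace: team-a
  serviceAccount: puller
```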
C
But I'm curious whether you think, in the cases like GCP and such where there's this need for a credential exchange, a lot of that need would be alleviated if there was just more data included. I assume these are web services that can be improved over time; they don't have to stay static, right? So it's a little strange to me.
C
I understand why kubectl / client-go credential plugins exist: they normally live on a client machine, an end user's machine, which is not a web service, so you can't update the end user's machine to magically figure out some token or something. But if we just gave more data to these backends, do you feel like we could get to a place where we don't necessarily need all these binaries that effect these exchanges?
B
I guess you could say GCP supports OIDC federation in a sense, but it's all through these token exchanges today, and I think it's probably many years of work to change that, just based on how things are implemented inside GCP. So I think it's less about the amount of data that's available and more about the amount of legacy that would have to be updated.
C
So
so
I
mean,
even
if
it
is
all
based
on
that,
like
there.
Presumably
there's
no
reason
the
web
service
internally
can't
just
do
the
token
exchange
and
then
for
all
other
layers
just
pass
in
the
token
right,
like
I'm,
more
sort
of
just
talking
on
the
outside
right.
The
outside
could
continue
to
accept
the
exchange
token
for
legacy,
because
it
should
you
need
to
break
anybody,
it's
more
of
just
enhancing
it
to
like
hey
that
code
that
you
were
going
to
put
in
that
exchange.
C
I think that's very reasonable, especially since recently I was like, well, I wonder if you could stick it in the service account token, and then, oh no, no, because then it looks trusted; it has the signature of the service account token around it, and that's probably not what we want. But the idea that either the user, the cluster admin, or someone can configure the extra bits that you would need, along with the service account token, to let the third party figure out what to do with it, whether it's an exchange or not...
B
Right, yeah, it's that kind of stuff, and every provider has their own different special metadata you need to exchange it with the right thing. I think it's a really interesting model.
B
I
think
we
can
think
about
it,
but
my
gut
feeling
is
that
there's
definitely
like
a
time
cost
associated
with
it
because,
even
like,
let's
say
a
provider
implemented
support
for
that
model
in
some
central
way,
right,
like
still
every
service
endpoint,
that
that
provider
offers
that
team
each
of
those
independent
teams
has
to
integrate
with
that
new
thing,
and
so
like
somebody,
would
have
to
fight
the
fight
inside
their
company
to
like
get
buy-in
from
all
those
teams
and
get
those
integrations,
and
I
think
that
probably
just
takes
a
lot
of
time.
C
But
does
everyone
need
to
change
like
I'm
more
thinking
of
this
as
an
additional
thing
you
know
like
I
don't
even
I
don't
I'm
not
even
saying
that
you
wouldn't
do
this
in
conjunction
with
maybe
enhancement
to
the
credential
provider
configuration.
I
think
you
could
totally
do
that
and
continue
to
support
those
exchanges.
I'm.
C
When you control the whole stack end to end, this really does start to feel like just cluster config. It doesn't really look like pod config or namespace config; it's just config in the cluster for how you tell the registry: these are the public keys that issue tokens for this cluster, figure it out, do whatever you need to do, put your binaries wherever you want. That seems very reasonable. I'm thinking about the other direction, where you don't necessarily have all the pieces controlled by one entity and you don't want to pass secrets around.
B
I do think it's different, especially when users don't control the cluster configuration, right? That was one of your comments on the KEP: hey, I'm going to be running on, say, one of these managed platforms, and that platform is going to implement some plugins, but they maybe don't care about compatibility with some of their competitors, and that's going to be really annoying for me. I think that's a really valid concern.
B
So
I
think
there's
probably
things
we
can
do
to
address
it
right,
like
the
pass-through
mode,
is
one
and
like
maybe
we
need
to
sort
of
add
some
additional
metadata
and
and
what
gets
passed.
I
don't
know
if
that
like
is
better
or
worse,
because
it's
like
maybe
adding
stuff
like
breaks.
People
are
just
accepting
like
normal
odc
federation,
but
definitely
some
kind
of
option
to
just
pass.
The
token,
through
instead
of
requiring
a
plugin
to
be
installed,
seems
like
a
good
idea
to
me
to
cover
those.
B
But
then
another
option
is
maybe
we
have
to
change
cuba,
credential
provider
plugins.
So
it's
easier
for
users
to
install
their
own,
like
it's
pretty
easy
to
install
a
csi
driver
on
a
cluster
anywhere
today
but,
like
you
know,
requiring
control
of
the
node
to
like
you
know,
install
one
of
these
binaries
and
then
go
like
edit
a
shared
config
file
on
that
node.
To
like
add
a
credential
provider.
Plugin
is
maybe
not
the
most
extensible
model,
and
maybe
we
want
to
change.
C
That
so
maybe
I
don't
I'm
not
that
familiar
with
csi,
I
thought
you
needed
to
control
the
infra
to
meaningfully
install
them.
I
thought
you
could
create
configuration
for
them
in
the
cube
api,
but
the
it
might
be.
The
infra
admins
have
to
somehow
do
the
wiring,
maybe
I'm
just
missing
them.
It
might
just.
D
I...

C
Okay, I'm just not done; you go ahead.
C
True. Oh, I know David has his hand up; just a quick thing. What was I going to say?
C
Maybe there needs to be some mechanism to specify, here's another container image in my pod that contains the binary I need to use; there's some specification for what data gets passed, and the kubelet execs that. You can imagine that it could be overridden with webhooks, or defaulted by whatever the cluster's defaults are. Does that kind of make sense? It's basically: well, you're already allowing the execution of effectively arbitrary code by running a container in the pod.
C
That would avoid all the concerns of, well, I won't support my competitor's token exchange program. It doesn't matter if you don't support it; you support me executing code, so I'm going to execute code for their plugin.
J
All right, I'll go ahead. So the kubelet credential provider today has the notion of a keyring that allows someone to separate different image registries and say, for this image registry I will provide these credentials for pods. Thinking about your proposal here, it happens to line up nicely with something like the OpenShift internal image registry, which actually does accept service account tokens. I can imagine myself looking at this and saying, you know what I'd really like to be able to do?
J
I'd
like
to
be
able
to
say
on
every
cubelet,
if
you
see
a
request
for
an
image
pull
from
the
internal
image
registry,
I
would
like
you
to
get
the
service
account
token,
for
the
service
account
running
this
pod
and
use
that,
for
the
image
poll
is
that
a
sort
of
mechanism
that
you
would
be
interested
in
it
seems
like
it
would
be
very
similar
for
your
idea
of
plug-ins.
Yes,.
B
I
would
consider
that
for
sure,
as
an
option
in
the
credential
provider
configuration
to
have
sort
of
the
same
default
from
image
for
images
from
certain
registries:
okay,
yeah,
it's
similar
so
yeah,
so
that
would
be
sort
of
like
the
pass-through
version
of
that
configuration
and
I'd
also
want
a
plug-in
version
to
say
like
if
the
image
is
coming
from
gcr
just
get
a
token
and
go
to
our
plug-in.
With
the
token
don't
require
users
to
configure
it.
Okay,.
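A sketch of how both modes might sit in the credential provider config; the tokenAttributes and passThrough fields are hypothetical, not part of the real config today:

```yaml
# Hypothetical sketch of the two modes discussed; tokenAttributes and
# passThrough are invented fields layered onto the existing config shape.
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
- name: internal-registry             # pass-through: the token IS the credential
  matchImages:
  - "image-registry.openshift-image-registry.svc:5000"
  tokenAttributes:                    # hypothetical
    serviceAccountAudience: internal-registry
    passThrough: true
- name: gcr                           # plugin mode: exchange the token for creds
  matchImages:
  - "*.gcr.io"
  tokenAttributes:                    # hypothetical
    serviceAccountAudience: my-workload-pool
    passThrough: false
```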
B
Yeah, okay, awesome discussion so far. Any other thoughts? I'm going to have to rewatch this recording.
F
Yeah, one comment: the credential provider improvements, and actually the initial KEP, both went through SIG Node, and I know that there has still been some activity there, development activity. So I would definitely get them involved once you incorporate the changes. I'm not sure who should own the KEP; I think either is fine, maybe SIG Node, because they owned the last one.
B
Okay, I think that's been a really solid discussion. If nobody has anything else, I think we can open the floor for anybody who has broader topics of interest.
F
Awesome. I think... is the test flight cluster broken or something?
F
Even when you... yeah, I think it's just broken.