From YouTube: sig-auth bi-weekly meeting 20210303
A
All right, hey everyone. This is the sig-auth meeting for March 3rd, 2021. We have a pretty full agenda, so we're going to get started real quick. And let's see, I think, Martin, you wanted to show us a demo, and the only way I know to make you able to share your screen is to make you a co-host, so yeah.
B
All right then, okay, so I'll be quick; hopefully there won't be a lot of questions. So, last year or so, I actually made a dynamic authentication KEP for allowing end users, or allowing cluster admins, to dynamically add new ways in which end users are able to authenticate to the API server. So pretty much, per the KEP, you are allowed to dynamically add new OpenID Connect providers, add new certificates for client-based authentication, and there was another use case.

B
So, just a disclaimer: everything here is alpha. I've been working on this after work, for like 20 hours tops, so yeah, it is pretty much very much alpha.
B
So our main use case is that we have a lot of Kubernetes clusters, and workloads in those Kubernetes clusters often talk to the API servers of different clusters. Currently, what we do is we create service account tokens and then we store them in each one of those clusters, and this is really bad, because those tokens have to be rotated; also, it makes the entire experience quite a bit worse.
B
So what we want to achieve in our use case is to establish trust between those Kubernetes clusters, using service account token volume projection and service account issuer discovery. So basically, in one cluster, we should be able to register the identity — the OpenID Connect endpoints — of the other Kubernetes clusters.

B
So how does this entire thing work? It's a simple webhook, with a controller inside of it, that acts on this OpenIDConnect resource, and inside it you specify the standard settings that you have in the kube-apiserver flags: issuer, client ID, username claim, and so on.
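For illustration only, a minimal sketch in Go of what the spec of such a custom resource might look like; the type and field names below are hypothetical and simply mirror the kube-apiserver OIDC flags he mentions (issuer URL, client ID, username claim), not the actual types from his project.

```go
// Hypothetical Go types for an OpenIDConnect custom resource spec.
// The fields mirror the kube-apiserver --oidc-* flags referenced above;
// the real project's API may differ.
package v1alpha1

// OpenIDConnectSpec describes one dynamically registered OIDC provider.
type OpenIDConnectSpec struct {
	// IssuerURL is the OIDC discovery issuer, e.g. another cluster's
	// service account issuer (equivalent to --oidc-issuer-url).
	IssuerURL string `json:"issuerURL"`
	// ClientID is the audience expected in presented tokens
	// (equivalent to --oidc-client-id).
	ClientID string `json:"clientID"`
	// UsernameClaim and UsernamePrefix control how the username is derived
	// (equivalent to --oidc-username-claim / --oidc-username-prefix).
	UsernameClaim  string `json:"usernameClaim,omitempty"`
	UsernamePrefix string `json:"usernamePrefix,omitempty"`
	// GroupsClaim and GroupsPrefix map group membership
	// (equivalent to --oidc-groups-claim / --oidc-groups-prefix).
	GroupsClaim  string `json:"groupsClaim,omitempty"`
	GroupsPrefix string `json:"groupsPrefix,omitempty"`
	// CertificateAuthority holds PEM CA data for the issuer inline, rather
	// than as a file path (see the CA-file discussion later in the meeting).
	CertificateAuthority []byte `json:"certificateAuthority,omitempty"`
}
```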
B
So pretty much, yeah, you have some nice pictures here, but at the end, an admin creates this resource, the controller or the webhook watches for such resources, and it talks to the discovery endpoint.

B
It gets the keys, and then it adds this new OpenID Connect provider to the list of OpenID Connect authenticators that it stores in memory. And when the end user authenticates — so yeah, I'll just skip to the picture — the user authenticates at their OpenID Connect provider.
B
There is delegation of authentication and authorization; it also checks, let's say, the identity of the caller. But at the end, it matches — possibly matches — the token which the end user presents with one of the already registered identity providers, and then, yeah.

B
The flow pretty much goes back to the API server with the TokenReview.
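A rough sketch of that last step, assuming go-oidc for verification and the authentication.k8s.io/v1 TokenReview types; this illustrates the general webhook flow he describes, not the actual code from the demo. The verifier would typically be built per registered provider via oidc.NewProvider(ctx, issuerURL) and provider.Verifier(&oidc.Config{ClientID: ...}).

```go
// Sketch of a token webhook handler: verify the presented token against a
// registered OIDC verifier and answer the kube-apiserver's TokenReview.
// Illustrative only; error handling and provider matching are simplified.
package main

import (
	"encoding/json"
	"net/http"

	"github.com/coreos/go-oidc/v3/oidc"
	authnv1 "k8s.io/api/authentication/v1"
)

func tokenReviewHandler(verifier *oidc.IDTokenVerifier) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var review authnv1.TokenReview
		if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Verify signature, issuer, audience and expiry of the presented token.
		idToken, err := verifier.Verify(r.Context(), review.Spec.Token)
		if err != nil {
			review.Status = authnv1.TokenReviewStatus{Authenticated: false, Error: err.Error()}
		} else {
			// Map the configured username claim (here: "email") to a Kubernetes user.
			var claims struct {
				Email string `json:"email"`
			}
			_ = idToken.Claims(&claims)
			review.Status = authnv1.TokenReviewStatus{
				Authenticated: true,
				User:          authnv1.UserInfo{Username: claims.Email},
			}
		}
		_ = json.NewEncoder(w).Encode(&review)
	}
}
```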
B
So this is the high-level overview for the end user. So let's jump straight to the demo then. Okay, so I have started a minikube in advance, and I've also enabled the webhook authenticator with the standard flags. For demo purposes, I've also lowered the cache TTL to 10 seconds, just to make things easier.

B
Okay, and now the API server is up and running, and as you can see, we have a freshly started Kubernetes cluster with almost nothing set up on it. So the next thing I'll do is I'll simply go and create the custom resource for this OpenIDConnect resource, and then I'm going to go and deploy Dex and the authenticator itself. So I'll go here — I have here a small sample — and in this case we create the service for the authenticator.
B
The authenticator, currently in this simple deployment on minikube, is running as a NodePort, because we also want to access it, and Dex as well; this makes communication, at least in this demo scenario, quite easy. And then we deploy the authenticator, and here, in this namespace, we have the authenticator up and running.

B
So the next thing I'll do is — I have prepared in advance, sorry, I have prepared in advance a kubeconfig context which allows me to authenticate with Dex. So here, if we look at the config, we will actually use the oidc-login plugin to quickly get the token. In this case we have provided a simple Dex secret and client, and as you can see, the plugin is already automatically authenticating me.
B
Okay, we jump on this one, and now when I try, for example, to use this context here, we see that the API server does not recognize the token which I have already received. And now the next thing, as an admin, is I'll go here and then I'll simply apply the configuration — the OpenID Connect configuration for Dex.

B
And I have created this, and if I look at it, we see that, yeah, it's for Dex, and it also, yeah, it uses this.

B
And our authenticator has done its job, and now the kube-apiserver recognizes me: yeah, I'm admin at example.com, and this is the issuer. So as an example, I'll also add another OpenID Connect issuer, which I have prepared, also in advance, just to show you that I'm not cheating.
B
Okay, I have authenticated, and as you can see, right now my user is also recognized by the cluster. And if I, for example, now decide that I don't want this identity provider, I'll simply delete it. All right, now the kube-apiserver will cache this response for 10 seconds, and if I execute the command again, I'll be unauthorized.

B
So this is the demo so far. Do you have any questions?
B
Yeah, so one thing from me: in order to make this entire scenario work, we had to make — or, well, I had to make — some small changes, let's say, to the way an OpenID Connect issuer works today. One problem that we discovered, that I discovered, is that currently, the way you register, for example, an OpenID Connect provider is that you always provide, for example, the CA file, and in the context of a Kubernetes controller this doesn't work quite well.
B
My initial hack was to create a temporary file in the filesystem and then just use it. However, in the end, I chose to more or less copy the entire OIDC plugin file and then simply wrap it inside an additional...
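For context on the CA-file problem: go-oidc lets a caller supply its own *http.Client via oidc.ClientContext, so a controller can build the trust bundle from CA bytes held in an API object instead of a file on disk. A hedged sketch of that approach follows; it is not necessarily what was done in the demo.

```go
// Sketch: build an OIDC provider using CA data held in memory (for example,
// copied out of a custom resource) rather than a CA file path on disk.
// Illustrative only.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"net/http"

	"github.com/coreos/go-oidc/v3/oidc"
)

func newProviderWithCA(ctx context.Context, issuerURL string, caPEM []byte) (*oidc.Provider, error) {
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	httpClient := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	// oidc.ClientContext makes go-oidc use this client for discovery and
	// JWKS fetches, so no temporary CA file is needed.
	ctx = oidc.ClientContext(ctx, httpClient)
	return oidc.NewProvider(ctx, issuerURL)
}
```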
B
The other issue that I had when I was implementing this entire thing was the way the verifier works currently — so the OpenID Connect provider: the OpenID Connect code waits for and polls the OpenID discovery endpoint, and this doesn't make for a very nice experience, at least for controllers.

B
So my question is: do you want me to contribute some of those changes that I, for example, made, or how open are you to accepting such changes?
A
So I think we're a little bit out of time on this one, Martin. I would be open to it if you just made either one or many issues on k/k, and we can discuss there, so I fully understand exactly where you have to make the changes; and then, if we want to follow up next time, we can too. Okay, awesome. Thank you for the demo.

A
Let's see, Mike, are you on the call? And you and Micah discussed this some too. Where were you at on this? What did we have left?
A
I guess I had one question that sort of came up on this since the last time we talked about it, which is: you know, TokenRequest is a built-in API in the kube-apiserver, and it's currently sort of guaranteed to return you a service account token that has a certain shape.
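For reference, the "shape" being discussed: a projected (bound) service account token carries the standard registered claims plus a Kubernetes-specific claim, roughly as sketched below. This is a sketch only; exact contents depend on the TokenRequest and cluster configuration.

```go
// Rough shape of the claims in a projected (bound) service account token,
// as discussed above. Field presence can vary (for example, the pod
// reference only appears for pod-bound tokens).
package main

type boundTokenClaims struct {
	Issuer   string   `json:"iss"` // cluster's service account issuer
	Subject  string   `json:"sub"` // "system:serviceaccount:<namespace>:<name>"
	Audience []string `json:"aud"`
	Expiry   int64    `json:"exp"`
	IssuedAt int64    `json:"iat"`

	Kubernetes struct {
		Namespace      string `json:"namespace"`
		ServiceAccount struct {
			Name string `json:"name"`
			UID  string `json:"uid"`
		} `json:"serviceaccount"`
		Pod *struct {
			Name string `json:"name"`
			UID  string `json:"uid"`
		} `json:"pod,omitempty"`
	} `json:"kubernetes.io"`
}
```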
A
Certainly, one of the benefits of the service account OIDC discovery stuff that we added is that you can then authenticate those tokens in a particular way, but it does require that they have a particular structure that sort of matches what you expect. I mean, certainly you can authenticate the fact that they originated from the API server based on just the signing information, but if you want to extract any claims, you then have to make some assumptions about what claims are present today.
E
So, complicated. I thought what was being proposed was strictly additive, so that the standard claims that are included would still be included, and additional claims or stanzas could be injected. So, I mean, JWTs tolerate additional claims.
F
...sign something: can we pick a prefix, like a namespace, and say these are requested claims — like, this is something that the party is trying to communicate about themselves, that we have no way of verifying, but here you go. I think that's Micah's particular use case.
E
I mean, this is sort of the JWT equivalent of a cross-signed certificate. It's: here's a blob, and Kubernetes has claims it recognizes and wants to use, and this other thing has claims it recognizes and wants to use; and I already have plumbing to get this credential to things, and I can already associate it with workloads, and I can already bind its lifetime to the workload's lifetime — there's all this super convenient stuff around it. I don't necessarily object to the idea of it.

E
I think what you're bringing up, Mo, is that you wouldn't want anyone to depend on it other than the thing that knows about those special claims — like, you wouldn't want workloads introspecting those claims and being like, oh cool, I must be running in this environment because this claim showed up in my service account token.
A
The reason I would ask is — and I don't know if this is actually any user — I guess there must probably be a strong use case, at least for: I want to add extra stuff that's not a simple transformation of what's already there, right? I want to add extra information that I know kube does not know, and I want to add that, and maybe also take the stuff that kube does know and do something with it.
G
The other gross option — I talked with Mike about it; you might tell me if you don't mind me bringing it up — is, like: you could, and I'm going to wave my hands and say Rego or something, give the kube-apiserver basically a transformation template that says, here are the claims you know about, inject them in this other namespace, so you don't have to make an external call to basically modify the claims. But then, at this point, you're picking, like, a templating, yeah, like a...
G
The other option for us here is to just not do this, right — to put some intermediary that the token goes to instead of what we currently have. But that requires us to basically have every pod, instead of calling the existing AWS service that they do with their token, call some other AWS EKS-hosted service that can do this.
G
Okay, I know which token you are; you don't have the context for, say, the AWS STS service, but I can add, you know, the session — in AWS terms, the session tags — that I want from your Kubernetes claims. But yeah, that wouldn't require a Kubernetes change, but that would be, like, a new service we have to stand up to be an...
G
We basically have a well-known client ID that we reference, but customers can pick whatever client ID they want on the AWS IAM side, and then they just need to use the same corresponding client ID when they say, you know, mount this token in my pod. And if they changed it to one other than the one that we documented...

G
...then I've made every service account token — that token that's useful against the API server — also useful against AWS, which is, you know, an anti-pattern, bad practice. But I guess we just don't exert control today over what client ID customers have. We have a documented one, and we could say only this one is supported for session tags, and that might be feasible.
E
Yeah, adding stuff to lots of tokens that makes them usable in other places is a little unfortunate, and making the path for minting all tokens more complex and more fragile, in order to be able to inject claims that are only used in this sort of secondary case, is also kind of unfortunate.
E
I mean, the token exchange thing — I get my token, and then I talk to a token-granting service and say, I am this pod and I can prove it, give me a token for STS or whatever — that's probably the correct, if-running-a-service-were-cheap, approach. But I recognize that it makes it more complex for clients, and you have to run another service. I don't know. So maybe those two things are kind of the biggest objections.
E
It exposes API-server-visible API surface to clients, because they can look at their token and say, yeah, there's stuff in my token, I can make assumptions about it; and then it makes the token-granting path more complex for a secondary use case.

E
We should probably time-box this. Like I said, I'm sympathetic to parts of the use case, but it would be useful to know: are these things that lots of people would just jump on and use? How bad would it be to run a service that exchanges tokens, how bad would it be to add this in, and would more people than just you use it?
A
A related question — Jordan, you might remember this: can you have an aggregated API that does not have a matching service on the cluster? Like, can you just point it at some random address?

A
Okay, cool. So if we had that, Micah, then you could reasonably just have an APIService that is on your cluster, and thus the whole "what kube API do I call to do this STS exchange" is super easy and well defined, because it's the same on every cluster: let's just talk to the aggregator at a well-known kube API.
G
...a JWT, like, from the kubelet, without ever even mounting it into the pod, that could then be exchanged for either another token or the credentials themselves — so that might be the other route to investigate. So I think I'm fine to time-box this here and say, do more investigation into those routes.

G
It's not ideal, because there are more, you know, more exchanges, but that might be sufficient. So yeah, I'll take that as homework, and in two weeks I can follow up and say, here's what I found.
A
Well, thank you. Okay. I put this issue on the agenda because I saw it: someone had made an issue saying, hey, when you do port-forward, auditing does not record the port that was forwarded, and...

A
I think we only look at, like, the Audit-ID header, which I've always found questionable, because I don't think we validate that anymore. So, put that aside — yeah, like, yes, we should audit all the things if possible, but it's not...
A
Okay, okay, so I missed that. So it could be in there, right? Well, okay, so I think in this person's case it was done via kubectl, which used the header, so I'm just wrong here. I can go fix my comments and say that, no, it's not parameters, it's everything — so parameters are logged, it's just not headers.
E
...getting treated specially for admission, so we may need to do something similar for audit. I would be in favor of doing that; that seems useful to audit, probably at the same philosophy level as: if you see the incoming objects, that's the right level of verbosity to include details about in the audit. So I think we have the information we need; it might just not be wired up for audit, because it had to be done specially.
E
Yeah, there was a request that a couple of us weighed in on, but I wanted to bring it up here just to see if there were other perspectives before I answered it. So, someone had requested that the new projected service account tokens be able to be used as an environment variable source.
E
Currently you can do that with Secrets or ConfigMaps or downward API values, and I replied that I don't think that's a great idea. In general, exposing secret values in environment variables plumbs them into places that aren't really expecting to deal with confidential information, like the container runtime specs — like, if you run crictl, you know, to inspect a container...

E
...you can actually see all the environment variables configured for a pod, and so, if secret values were plumbed into environment variables, they show up there, which isn't great. And then, in particular, environment variables are only set at container start, and for values that came out of a Secret object, maybe that's okay — maybe you're not going to update those — but for these projected service account tokens, the point of them —
E
— one of the points of them is that they expire and they're time-limited, and so mapping those to an environment variable that can only be set at startup didn't make sense to me. The person requesting this said, what about, what if it's short-lived — you know, it's just going to run, use it once, and then exit, so it doesn't care if it doesn't get refreshed.

E
My feeling is, if you want to use environment variables for that, write a wrapper: take the value from the file, put it in an environment variable, and then call the thing you're going to call that expects an environment variable. But I wanted to just bring it up here and see if I was missing clear reasons that this would be good to do before I close this out.
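A minimal sketch of the kind of wrapper being suggested, in Go; the projected-token mount path and the TOKEN variable name are placeholders, not a Kubernetes convention. It inherits the limitation discussed above: the value is captured once at startup and never refreshed.

```go
// Wrapper sketch: copy a projected service account token from its file into
// an environment variable, then exec the real program. Path and variable
// name here are examples only.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: wrapper <command> [args...]")
	}

	token, err := os.ReadFile("/var/run/secrets/tokens/my-token") // projected volume path
	if err != nil {
		log.Fatalf("reading token: %v", err)
	}

	env := append(os.Environ(), "TOKEN="+string(token))

	// os.Args[1:] is the real entrypoint, e.g. ["/app/server", "--flag"].
	bin, err := exec.LookPath(os.Args[1])
	if err != nil {
		log.Fatalf("looking up %s: %v", os.Args[1], err)
	}
	// Replace this process so signals go straight to the real program.
	if err := syscall.Exec(bin, os.Args[1:], env); err != nil {
		log.Fatalf("exec: %v", err)
	}
}
```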
H
...security improvements that we listed when we created bound service account tokens, or the token volume, but I don't see it in the KEP. I agree with your sentiment, though.

H
Right, we are introducing this new API, we are dropping support for this — yeah, that's what I remember as well.
E
Rita, what have you — I know everyone wants it — do you suggest, like, write a wrapper if you have a thing that needs to consume an env var? Have you seen good objections to doing it that way, or...?

J
When you say custom wrapper, that's like a script that runs before the — like, on container start? Yeah.
J
Yeah, I mean, I can send you the issues where we recommended that and people gave it a thumbs-down. I mean, it works, but people don't like it, right? People just want it in an environment variable, and I think we can probably blame 12-factor apps for all of this. But I think it's because that feature is available for Secrets.

I
And specifically the adoption of the env var — I suspect a lot of these people are people who've eliminated shells, following practices from certain large cloud companies, who shall remain nameless on the call, saying get rid of the whole operating system inside the container. Now you have no way to work around this. But I won't — like, I know I've seen that in that context. There's a bunch of other challenges, which is, like, we can't go...
I
...do secret sources for everything. So, like, we'd be able to do it for tokens, but we wouldn't be able to do it for arbitrary CSI volumes, and then we'd have to get into, you know, CSI. So I feel like, if this is something that's important enough, it kind of feels like it needs to be a new type of design; and, like, exec containers are one way of bypassing this, there's other ways.
E
...productive thin process that — so, what we did when we were working on the distroless stuff for the control plane images: we ran into logging redirection issues, so we had a go-runner binary whose only job was to take a command line, invoke it, and redirect output to a place. Like, there was some well-heeled...

I
...set of tools that could provide standard behavior in all environments. But yeah, I mean, it's not bash, I mean, come on. Well, I mean, you know, you could argue someone could easily go put a shim in all the distroless images that does all the things in bash, right — like, that's busybox. Maybe there should be a kube-runner, like...
E
Yeah, all right, I think I'll summarize this in the issue. Thanks.

E
Oh, so it's something at the kube level — so it's not even, like, a legacy application that you're building into an image, where you could put one more layer on that invokes it; it's something that expects environment variables, something that is already kube-aware and is expecting variables. That's right.
I
For some Helm charts — and maybe this is like that — like, no, that's a good point, but we would need to provide guidance about what the change is, and we should understand it. So I do think, maybe, Rita, this is a — we should get a write-up of what we recommend doing for the set of places that we've heard it, and if we can't come up with a reason, we should crowdsource it with the smart people on the users list and be like, hey, this is deliberately chosen, here are the reasons.
I
You have to restart the process, which gets into signals, which gets into what signal do you send, which gets into how long do you wait, which gets into overrides on termination for graceful shutdown. There's definitely a reason why the signal stuff hasn't progressed, and it's because we can't really find a nice clean cut line. I mean, there's an argument here that things like probes should be allowed to check other stuff, and so you could easily have a probe today that checks whether the variables changed; it's just ugly and painful.
I
And you can't have multiple probes either, which is also a blocker. I mean, there are probably other solutions to a lot of these. Maybe this isn't the right forum to talk about it, because there is a little bit of an overlap with SIG Apps, and there's a little bit of overlap with, like, the runtime environment of the application, so it overlaps a little bit with SIG Node. So it could be a SIG Architecture discussion where we say, hey, there's a lot of people who are trying to do dynamic things.
K
Okay, so this is basically carry-forward work for the old KEP 658, where a proposal was created for hooking up an extension API server's webhook client more easily with the webhook server — it's such a mouthful. So I basically just did some research on that proposal and tried to break the work down into smaller chunks, for easier work and proposals. So this chunk is basically on the client side: providing an easier setup for the webhook client to request a bearer token.
K
So the current challenge is, even though we have dynamic webhook registration, we still configure the authentication in kubeconfig files and then refer to that in the webhook admission configuration. It's like Russian dolls — it's layer upon layer of files. So I'm proposing to add an API into the webhook client config — basically, having a configuration that specifies that the webhook client should just get a token through a TokenRequest from the kube-apiserver.
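To make the idea concrete, a hedged sketch of what "get a token through TokenRequest" could look like, using client-go's TokenRequest call with the webhook service as the audience. The names here (namespace, service account, audience) are placeholders, not what the KEP specifies, and the real wiring and caching would live in the authentication info resolver discussed next.

```go
// Sketch: request a short-lived, audience-scoped token via the TokenRequest
// API, for the webhook client to present as a bearer token.
package main

import (
	"context"

	authnv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func webhookBearerToken(ctx context.Context, client kubernetes.Interface) (string, error) {
	expiration := int64(600) // short-lived; refresh/caching elided
	tr := &authnv1.TokenRequest{
		Spec: authnv1.TokenRequestSpec{
			// The webhook's in-cluster service URL is used as the audience,
			// so the token is only valid for that webhook.
			Audiences:         []string{"https://my-webhook.my-namespace.svc"},
			ExpirationSeconds: &expiration,
		},
	}
	resp, err := client.CoreV1().
		ServiceAccounts("my-namespace").
		CreateToken(ctx, "webhook-client", tr, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	return resp.Status.Token, nil
}
```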
K
So, any thoughts on the general idea of it? I think the biggest concern in the previous proposal was that, in the event that there are a lot of webhook requests, the process of requesting a token would slow down or add latency to the webhook request and, in general, slow down the whole admission process. So I think we can cache the token in the authentication info resolver — I think, from the code reading...

K
...I think that's the place where we can augment it and reduce the latency in general.
K
Yeah, that's a very good question; I am still in the research phase on that. I think the general direction is probably using a service account of that extension API server.

K
I think that will definitely require more setup — for example, for the auth resolver to have access to the service account token — or, yeah, I think that will be more implementation-level research to do on my side.
K
Yeah, if we scroll down a little bit to the user-facing change, you'll see that — I think it's, yeah, so here, in the client config — the authentication source will be in the client config, and in the client config we already have the URL, or the in-cluster service path, to the webhook, so that can be used as the audience of the token.
K
So we're not sending the ID token or the service account token itself; we're requesting another access token through the TokenRequest API.
A
Right, the question I have there, though, is: you basically have to become dependent on this structure, right? What we're saying is we are not — that's how you verify this stuff — is that what we want? The mental model I have on this is — and I don't know, it doesn't exactly translate to a kube API — but on the client side, like, in the kubectl and client-go stuff, we have the exec credential plugins, right, that are basically: give me any credential you want, however you want, and I will hand that to whoever I talk to and they will validate it.
K
Yeah, so this proposal is not proposing to replace the current authentication config.

K
This is an alternative, yeah, for people who are too lazy to set up the whole OIDC thing; and I think I kind of read that between the lines in the original proposal — it's like they don't really want to go through all the token minting and then file provisioning, or mounting secrets, yeah. So this is one way to do it, alternatively.
A
I was just asking, like, you know, you had mentioned building up the structured config and stuff, right — that only applies to authentication webhooks, right, that are configured via the CLI and have access to file-based stuff.

K
Well, okay. So, are we talking about the client side or the webhook?
K
Oh yeah, so if you scroll down a little bit more, that's the existing way to specify the kubeconfig in the admission configuration. So this is a file; this is the...
K
Yeah, so in the future, if you want to expand it, we can use other authentication sources. This is Mike Danese's idea — to have, like, a union type of configuration — so I borrowed the page from him.
K
Okay, so I will follow up with more research on how we can use the extension API server's service account as, like, a client identity in the token request, yeah.