From YouTube: sig-auth bi-weekly meeting 20210721
A: All right, hello everyone, welcome to the July 23rd, 2021 meeting of SIG Auth. Let's kick it off. It looks like we have a demo to start with, so I will kick it over to you. I guess I have to stop sharing.
B: So the demo is not really much of a demo, but I will do my demo anyway. I have a kubeconfig that looks like this: I have a server with a CA bundle, and then I have a proxy set as my exec plugin and nothing else; there are no creds. When I run this — I'll describe what happens — there's some debug logging. One line is saying that the proxy told me this is the port it wants me to talk on.
B: So that's literally the demo, because that's really all it is. But I just wanted to have a chance to talk through some of the details, because I know we had a KEP written by—
B: I think Micah and — I honestly can't remember the other person's name. Nick? Yes, Nick. So Nick and Micah had written that up, and I kind of just wanted to play around with the PoC to see what was possible, because I wasn't clear on a lot of details. So I wrote out some code for this to get people's feedback, but basically the gist is an exec plugin.
B
That
way,
the
way
I
kind
of
wrote
it
out
was
I
basically
required
mtls
between
the
local,
proxy
and
and
cube
ctl,
because
otherwise
you
just
have
a
random
port
sitting
around
that
anything
on
your
box
can
talk
to
and
get
authenticated
as
you,
I
went
with
this
approach
over
any
like
named
pipe
or
domain
socket
approach,
because
this
works
everywhere,
because
it's
just
a
server
running
on
your
box.
B: Yeah, there's a bunch of open questions, but I kind of wanted to ask what folks thought, and whether they had questions or concerns.
B: The mTLS — so the proxy just generates a CA bundle and a client cert and hands them back to kubectl. Then kubectl, or client-go, presents that every time it interacts with the proxy, and presumably the proxy is also verifying the client cert on the way in, but—
B: The requirement is that there is mTLS, and you hope the proxy is enforcing verification of the client cert as a way of not letting arbitrary processes on your machine — like your browser or whatever — interact with this and just act as you against whatever cluster.
B
In
this
particular
case,
the
proxy
is
doing
the
tls
offloading
it's
doing
the
its
own
tls
handshake
with
the
api
server
authenticating
as
cluster
admin
and
letting
everything.
B: So this is reusing all of the machinery for exec plugins; one of the things they can return is the client cert. It's just that now they can also return the proxy configuration, and the exec plugin is effectively saying: if you send me a proxy configuration, you must also send me a client cert to talk to it. (Gotcha, okay.) So yeah, in a sense, what I've done is I've said that the credential that you can return can also be a proxy.
D: It actually makes a lot of sense that the proxy is taking on the role of the Kubernetes master. So rather than the Kubernetes master's client certificate and CA bundle, what the exec plugin is returning is the proxy's client certificate, or the CA — the certificate used to talk with the proxy — rather than the certificate used to talk with the master.
B: Yep. Go makes this kind of annoying, by the way: you can punch in client certs arbitrarily with a callback, but you cannot do that with root CAs.
B
If
we
did
not
want
that
behavior
literally
our
basically,
our
only
sane
choice
is
to
set
insecurity,
let's
get
verify
true
and
then
literally
verify
the
connection
ourselves
and
then
that
verification
duplicate
the
standard,
lib
and
use
the
different
ca
bundle,
and
I
decided
that
was
atrocious
and
I
didn't
want
to
do
that
agreed
yeah.
So I can walk through some questions — like the lifetime of the proxy right now. So, you know, I'm on a macOS box.
B: I spent a lot of time googling this, but there is effectively no sane way across all three OSes to say: hey, I spawned a child process, and I would like the child process to die when I die, regardless of how I die — if I get SIGKILLed or whatever. Linux seems to have a relatively defined way; good on Linux for that. Windows I don't understand: there's something called a Job object and you can do something with it, but I couldn't tell whether that would work on SIGKILL or not.
B: On Mac — like on my machine — the proxy itself just has a goroutine that's trying to figure—
B: I'm back — okay, sorry. Yeah, I have learned that I should always lock my machine and unlock it before these meetings, because I will inevitably lock my machine, and then Zoom will stop working, and then I'll have to sit there fumbling with my password.
B: This literally happens to me like every fourth meeting or something. So — the CA bundle.
B: One of the nice features of the existing exec functionality is that, on the invocation of the proxy, you can pass it the full details of the cluster that you're trying to talk to. So the proxy plugin knows exactly how to talk to the API server, including the correct CA bundle, proxy configuration, TLS host name — all of that stuff.
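In the existing exec plugin protocol, those cluster details arrive as JSON in the `KUBERNETES_EXEC_INFO` environment variable when the kubeconfig sets `provideClusterInfo`. A minimal sketch of a plugin reading it, using hand-rolled stand-ins instead of the real `client.authentication.k8s.io` structs from client-go:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Minimal stand-ins for the clientauthentication types; the json tags match
// the wire format, but real plugins should use the client-go types.
type execCluster struct {
	Server                   string `json:"server"`
	TLSServerName            string `json:"tls-server-name,omitempty"`
	CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
	ProxyURL                 string `json:"proxy-url,omitempty"`
}

type execCredential struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Spec       struct {
		Cluster *execCluster `json:"cluster,omitempty"`
	} `json:"spec"`
}

// clusterFromEnv parses the cluster details kubectl/client-go hand to the
// plugin, so the proxy knows exactly how to reach the real API server.
func clusterFromEnv(env string) (*execCluster, error) {
	var ec execCredential
	if err := json.Unmarshal([]byte(env), &ec); err != nil {
		return nil, err
	}
	return ec.Spec.Cluster, nil
}

func main() {
	c, err := clusterFromEnv(os.Getenv("KUBERNETES_EXEC_INFO"))
	if err != nil || c == nil {
		fmt.Println("no cluster info provided")
		return
	}
	fmt.Println("dialing", c.Server)
}
```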
B
You
cube,
ctl,
slash,
client,
go
now
say
well,
my
hostname
is
127.0.0.1,
and
this
port
number
and
the
ca
bundle
to
talk
to
that
server
is
the
thing
that
the
proxy,
the
exact
plugin
just
gave
me,
and
that's
what
I'm
going
to
talk
to
so
it
sort
of
just
instead
of
going
that
way.
You
kind
of
loop
back
to
yourself
for
a
second
and
then
go
back
out
to
what
you
wanted
to.
B
Really
only
needs
to
be
one
right.
Yes,
yes,
I
was
lazy.
When
I
was
doing
this,
I
was
like
all
right.
I
just
need
to
make
my
server
returned
some
cas
or
whatever.
So
I
just
returned
to
my
system
roots,
but
but
the
idea
is
the
client
go
code.
Could
try
to
prevent
invalid
proxy
implementations
by
both
requiring
that
a
client
cert
is
returned
and
also
requiring
that
it's
certificate
callback
is
invoked,
meaning
something
did
try
to
make
it
do
clients
or
not.
B
It
cannot
obviously
enforce
that
the
server
cares
on
the
other
end,
because
that's
the
server's
responsibility,
but
it
can
enforce
that
in
the
tls
handshake
that
it
requested
client
certs.
So
that's
really,
that's
all
that
log
is
doing
that.
Hey
like
I
was
called,
and
this
is
how
many
this
is
the
cas
that
the
server
said
that
it
would
accept
for
client
search
authentication.
B
So
but
yes,
that
would
be
one
in
in
any
same
configuration
that
I
could
imagine
oh
and
what
moyer
was
saying
in
regards
to
the
mac
thing:
yeah
it
on
mac,
apparently
when
you're,
if
you're
a
child
process
and
your
your
parent
disappears.
Your
parent
implicitly
becomes
one
the
init
process
and
you
can
detect
that
by
just
looping.
So
that's
what
I
have
it
doing
right
now,
because
I
couldn't
come
up
with
any
way
for
client
go
to
guarantee
that
it
would
be
killed.
B
Also,
originally
I
didn't
realize
that
this
was
happening,
so
I
had
like
hundreds
of
these
little
processes
just
sitting
on
my
box
and,
like
I
was
like.
Oh,
I
think
I'm
gonna,
I'm
gonna
run
out
of
ports
pretty
soon
over
here,
so
that
that
was
one
bit
that
one's
pretty
not
great
named.
I
I
looked
some
into
named
pipes.
B: Maybe — I don't know. It still seems a little weird not to have any further protection, but I could kind of understand it.
A: So named pipes were the proposal for Windows, but it was Unix domain sockets for Mac and Linux.
B: Yeah, so we could. I was trying to figure out what I can vaguely do that looks at least the same across the OSes, and I was like, well, named pipes is a thing. And I think Windows actually has Unix domain sockets now; I just don't know whether they work in a reasonably similar way, and I think they're only supported on Windows 10, which I guess might be fine. I don't know — I used to have a Windows 7 machine over there for playing video games.
B: Well, the concern I have there is that it would be part of an API that just went GA in 1.22, and if you didn't know, it would be hard from the outside to observe that this tiny section of an otherwise GA API is not GA. I feel like that requires some kind of opt-in somewhere. I don't think there needs to be an opt-in if the functionality was completely done and was itself GA, but until then it's probably some kind of—
B
I
want
this
feature
or
something
oh
number.
Five
I
have
to.
I
don't
really
know
what
to
do
if
the
proxy
just
randomly
dies
like
do
you
just
spawn
another
proxy
moyer
had
asked
like.
Could
we
my
original
thought
had
been
well,
we
would
run
the
exact
plugin
and
it
would
spawn
a
proxy
and
that
would
be
long
live
and
then
the
original
process,
the
exact
process,
would
exit.
B
But
I
think
micah
and
nick
didn't
like
that
one,
because
then
the
lifetime
got
even
more
harder
to
manage,
because
you
had
like
multiple
hops
and
stuff.
So
so
I
wrote
the
code
to
make
it
so
that
we
would
hang
around
and
wait
for
the
plug-in,
and
you
know
stream,
its
output
and
decode
and
everything.
But
let
it
keep
running
that
code
is
not
pretty,
but
it
does
work.
B
But
it
it
still,
it
still
doesn't
answer
the
question
to
me
what
to
do
if
the
proxy
dies
and
I
it
would
be
much
harder
to
even
answer
that
question
if
the
proxy
is
supposed
to
spawn
a
child's
process.
I
don't
know:
let's
see
who
asked
about
the
overhead.
E: These are mine; I think I asked you some of these questions yesterday, but we didn't write them down. Yes — my only concern with having this extra hop of local TLS is: if you wanted to use this for kubectl, that would probably be fine, but if you're using this for, say, kubelet auth, you might care about the overhead of an extra localhost round trip with TLS enabled. It wouldn't be huge, but maybe something we would need to measure.
B: Maybe if you were running on a box that was using kubectl and just continuously invoking this, you're being kind of wasteful — unless the proxy itself was caching its CA and client cert in an attempt to make it so that... I don't even know if that would work. I don't know whether the OS would let you reuse those connections; I suspect not. Mike, I did see some stuff about file locks, but it wasn't clear to me exactly how those would be implemented.
A
But
I
would
say
I
think
the
proxy
author
always
has
to
be
semi-responsible
for
exiting
if
client
go
has
already
exited
and
there's
no
automatic
way
to
you
know
forward
signals
for
like
sick
kills
or
something,
but
it
could
be
kind
of
like
a
more
standard
or
a
easier.
I
guess
making
the
signal
cross-platform
easier
is
the
best
solution
that
we
can
come
up
with.
F: Maybe — would it be workable to require proxy authors to say: okay, set up your proxy as a normal system daemon on whatever platform you're running on, and then your exec auth plugin talks to that, retrieves the credentials, hands them off to client-go, and then immediately exits? Then client-go doesn't have to worry about managing a long-lived process.
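In that scheme, the short-lived plugin just emits the standard ExecCredential JSON on stdout and exits; the daemon (not shown, and hypothetical here) is where the certs would actually come from. The `status.clientCertificateData` / `clientKeyData` fields are the existing exec credential wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// printCredential is what the short-lived plugin would do: fetch the cert
// and key from the long-running daemon (not shown), emit ExecCredential
// JSON on stdout, and exit immediately.
func printCredential(certPEM, keyPEM string) error {
	cred := map[string]any{
		"apiVersion": "client.authentication.k8s.io/v1",
		"kind":       "ExecCredential",
		"status": map[string]string{
			"clientCertificateData": certPEM,
			"clientKeyData":         keyPEM,
		},
	}
	out, err := json.Marshal(cred)
	if err != nil {
		return err
	}
	fmt.Println(string(out))
	return nil
}

func main() {
	// Placeholder PEM strings; a real plugin would get these from the daemon.
	_ = printCredential("<cert PEM from daemon>", "<key PEM from daemon>")
}
```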
B
Yeah,
the
the
the
thing.
I
don't
like
it's
unclear
to
me.
How
how
client
go
would
distinguish
between
a
a
process
that
spawned
the
proxy
like
an
exact
plug-in
that
spawned
the
proxy
and
it
goes
away
versus
a
proxy
that
was
spawned
as
one
process
and
then
died
because
of
a
bug
like
I
don't
I
don't
don't.
B: And I was like: all right, fine, I'll go make the other way work. I just don't know what is actually easier to manage for users; I can see both approaches having some pain points either way, and I'm happy to do either one. I just did the one that was harder to implement, because I can always just delete the code.
B: Right now it's just a TODO. I was like: I don't know what I should do there right now.
B: Yeah, I think originally what I was thinking is that if it's not listening anymore, the connection gets closed, and then client-go re-execs whatever it originally exec'd to start the proxy. Yeah, I guess that makes sense. Would we spawn some kind of goroutine that does health checks or something, or would we—?
E
Yeah,
I
wanted
number
seven
to-
I
think
someone
else
kind
of
mentioned
this
already.
But
if
you,
if
you
had
a
proxy
that
as
part
of
connect
doing
the
outbound
connection
to
kubernetes
api
needed
to
do
some
interactive,
prompt
like
to
unlock
a
smart
card
or
whatever
you
might
want
to
pool
that
connection
across
many
indications
of
coup
ctl,
I'm
imagining
something
like
ssh
agent
kind
of
flow.
Where
I
my
exact
plugin
spawns
like
an
agent
processor,
connects
to
an
existing
agent
process
and
unlocks
it.
B: Yeah. So right now my code does not do checks to see whether the proxy went away; it just kind of assumes it's going to stay around, and bad things will happen if it's not. But what you described does make sense, which is that we could support: basically, if you are going to spawn a proxy, we won't wait for you to exit.
B
But
if
you
do
that's
fine
and
we
won't
get
upset
about
that
we'll
just,
but
we
will
get
upset
if
the
portman
told
us
no
longer
seems
to
function
as
a
server
that
way,
you
could
go
the
route
of
having
a
proxy
or
having
an
exact
plug-in.
That
itself
is
the
proxy
and
is
just
sort
of
active
for
that
one
thing
or
you
could
have
a
plug-in
that
spawns
another
process.
B: I presume, then, the proxy author would have to come up with some mechanism to know that kubectl is being reinvoked — how is that invocation authorized to reuse the session? Somehow, some way, you would have to build that. So I guess it's fine; I can't stop you from building that anyway.
A
And
we'll
have
a
bunch
of
dangling
connections
to
cube
api
servers
now,
but
is
that
a
common
thing
where,
for
every
single
http
browser
request,
somebody
has
to
touch
a
ub
key.
B: One of the conversations I had had with them was about the fact that we needed to make it so that you could establish this whole thing one time and then keep it around, because no one wants to sit there and touch their YubiKey every time they type a kubectl command. It's just not—
E: There are even more severe versions of that too, where you're not just touching the YubiKey, you're typing in your PIN right on your smart card device. Yeah. My last question — I don't know if it's that consequential, but the way that you did the proxy here, it's kind of a transparent reverse proxy: client-go just thinks it's connecting to localhost. It's not really treating it as a proxy; it's just connecting to an alternate endpoint.
B: I tried the proxy approach first, and the implementation of the proxy was significantly more complicated, because writing a man-in-the-middle proxy with HTTP CONNECT is not fun at all — you know, you're doing HTTP hijacking of the connection and other not-so-fun things. And then I also could not make kubectl exec work; maybe if I had tried hard enough, I could have figured out what I was doing wrong, but I was like: I don't understand whether this is worth the pain and suffering.
E
The
only
advantage
I
can
think
of
is,
if
you
did
want
to
have
one
proxy
process
that
was
being
shared
across
many
different
contexts.
Many
different
clients
connecting
to
it
that's
hard
to
do
unless
you
have
like,
unless
you're
treating
it
as
an
explicit
proxy
that
gets
to
know
the
destination
and
point
each
time.
But
even
that
is
hard,
because
you
also
have
to
remember
all
the
server
info,
like
the
the
ca
bundle
and
all
that
for
each
independent
connection.
B: Yeah, I think we can still do that, because the code inside the standard library for the HTTP CONNECT stuff isn't that complicated — the extra things it does for a proxy are just a few extra tweaks to the request. We could just do the tweaks ourselves, so that the proxy would see that you're trying to connect to server A and not server B.
B
So
on
http
2,
having
written
one
of
these
proxies
before
I've,
basically
come
to
the
conclusion
that
you
need
to
build
like
two
separate
transports:
one
that's
http
one,
only
and
one
that's
http
2
only
and
for
all
the
exec
and
port
forwarding
things
you
need
to
use
one
and
upgrade
and
then
for
everything
else.
You
can
just
use
two
all
the
time
and
that's
just
fine.
Just.
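The two-transport split can be sketched with standard `net/http` knobs: a non-nil empty `TLSNextProto` pins a transport to HTTP/1.1 (so connections can be hijacked for upgrades like exec and port-forward), while `ForceAttemptHTTP2` opts the other into h2:

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// twoTransports sketches the split described above: upgrade-style endpoints
// (exec, port-forward) need an HTTP/1.1-only transport, while everything
// else can ride HTTP/2.
func twoTransports() (h1, h2 *http.Transport) {
	h1 = &http.Transport{
		// A non-nil empty TLSNextProto disables the automatic HTTP/2
		// upgrade, pinning this transport to HTTP/1.1.
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	h2 = &http.Transport{
		ForceAttemptHTTP2: true, // negotiate h2 via ALPN where possible
	}
	return h1, h2
}

func main() {
	_, _ = twoTransports()
}
```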
A: I think I know why it happened, but I'd have to page that back in. It has never worked, I think. Okay, so—
A
I
think
it's
awaiting
bug
fixes,
but
it
has
never
worked
in
the
past,
which
is
why
I
was
surprised
that
it
looked
like
it
worked
this
time
but
yeah,
maybe
not
no.
B: Yeah, so in that case it is up to the proxy author, but we forward the proxy information to the plugin. We tell it an explicit proxy URL if that was set in our kubeconfig, because that's part of the cluster info, and we forward environment variables, so it can also then decide to honor them. Yeah.
B: So if the proxy author has written a nice proxy, then it should probably honor whatever other proxy you have configured on your system as well. Yeah. And as part of all this, my thought was that we would have an example—
B: —implementation that did TLS offload, in the sense that you just provide the key pair and it just uses that — hand-wavy offloading — but built in a way that could very easily be reused to build a real one, like the one Micah wanted to build. Because there's really actually not that much code in the proxy process other than "here's my business logic" and "here's the stuff that's all the same across implementations."
B: So I think we've spent enough time on this. I just wanted to leave one final question, which is: is this a good idea? Should we do this?
A: I think it's a good idea. I think we should continue with the KEP and see if we can get it merged, but I'm glad that you were able to verify that this is reasonable. I'm interested to see the code — what's the diff look like?
A: Yep, thanks for proving it out. Let's try to get an alpha KEP designed, and yeah.
B
I
propose
that
you,
like,
I,
bring
you
why
don't
you
share
the
code
with
me
and
we
can
work
on
the
the
existing
kept
that
I
have
this
week
and
have
something
by
next
meeting
cool
yeah.
We
can
work
on
that
thanks.
A: Awesome. And yeah, let's carry on, because we're about halfway done. "Client-go credential plugins: remove alpha (1.23), deprecate beta (1.23), remove beta (1.26)" — what is this, Mo?
B
So
andrew
keisler,
just
you
know,
finished
the
ga
work
for
clinical
credential
plugins
right,
so
they
are
officially
ga
in
122..
You
know
jordan
and
I
reviewed
all
those
things
they're
all
in
so
I
wanted
to
ask
thoughts
on
sort
of
moving
the
ecosystem
forward
in
regards
to
sort
of
slowly
getting
rid
of
the
beta
apis
and
I
don't
think
we
need
to
be
slow
on
alpha,
but
just
in
general
I
want
to
ask
people's
opinion
right.
A
So
when
and
why
do
we
remove
alpha
apis.
B
So
like,
in
this
case,
alpha
has
some
sort
of
vague
functionality
that
we
never
really
supported
forward.
So
it's
kind
of
got
some
weirdness
to
it,
I'm
not
sure
when
we
normally
remove
alpha.
I
my
feeling
was
that
we
could
have
removed
alpha
a
long
time
ago,
but
we
just
kind
of
left
it
around,
because
no
one
was
trying
to
remove
it,
but
I
felt
that
having
one
release
of
the
ga
api
was
sufficient
to
say,
please
stop
using
alpha.
Now
I
will.
I
will
just
remove
it.
E: Well, there are a few things going on there. 1.22 actually stops the ability to serve the alpha and beta RBAC stuff; we still have the types there, so that if someone still has data in etcd for some bizarre reason the API server won't explode, but you can't serve them anymore.
E
In
122.,
for
the
closest
analog
I
can
think
of
to
the
client
to
the
exec
credential
stuff
is
the
web
hook,
authentication
authorization
admission
contracts
between
the
api
server
and
like
out
of
tree
implementations
of
those
things
so,
even
though
like
even
though
we
switched
to
v1
by
default
like
a
long
time
ago,
if
you
set
up
an
admission
web
hook,
talking
admission
review,
v1
beta
1,
we
will
continue
happily
integrating
with
that
backing
endpoint
in
the
one
beta
one
mode
like
what
we
expose
as
the
api
surface
from
the
cube
api
server.
E
Do
a
much
better
job
of
making
sure
all
of
the
entry
implementations
of
controllers
and
manifests
and
examples
and
documentation,
update
to
v1
types
for
integrations
that
people
have
written
out
of
tree
where
cube
components
are
calling
out
two
things:
we've
been
a
lot
slower
to
just
sort
of
drop,
a
a
version.
E: Yeah, it would be helpful to enumerate the benefits. So, dropping the alpha, I think, is fine — I don't have any concerns with dropping the alpha on pretty much any timeline, actually. The beta version has been around a lot longer, and because we basically didn't progress the feature, it's one of those perma-beta features, effectively, and so it—
E: That's a good data point; there's some—
E: I think what would be helpful is — just like we have the migration guide for the REST APIs that calls out the differences; maybe someone can drop that into the meeting notes — something like that page that says: between v1beta1 and v1, here are the differences. "Basically, you have to specify the version you want" — I'm just making this up off the top of my head; I don't know what the differences actually are.
E
You
have
to
specify
whether
you
want
interactive
mode
or
not,
that
might
have.
B: If you ask for the cluster info, it's versioned to the version you asked for, so you could — but—
B
It
does
not
tell
you
if
the
client,
for
example,
supports
v1.
This
is
why
I
wasn't
sure
what
the
timeline
should
be,
because
you
you,
as
the
provider
of
the
cube
config,
that
will
have
a
hard-coded
version
in
it,
need
to
make
the
decision
for
what
hard-coded
version
should
be
in
there.
Even
if
your
plug-in
can
speak
all
three
versions,
you
still
have
to
pick,
which
is
not
great.
E
So
things
that
come
to
mind
for
the
rest,
apis
were
like
really
terrible
defaults
for
some
of
the
workload
v1
beta,
1
types
where
it
would
be
like.
Yes,
please
retain
old
replica
sets
indefinitely
for
all
deployments
and
be
like
okay,
so
I'm
filling
up
cd
like.
Why
is
this
a
good
default?
I'm
shooting
myself
in
the
foot
or
for
custom
resources?
Definitions
like
by
default,
preserve
unknown
data
and
just
like
allow
anything
to
be
persisted
in
unstructured.
Json
like
there
were
some
really
bad
defaults
and
so
same
thing
for
admission.
E: —webhooks, right: timeouts were 30 seconds by default, and fail-open by default. It was just really terrible default behavior, and so the v1beta1 versions of those things represented footguns that people were hitting regularly. So it wasn't just a matter of cleanliness; it was a matter of people using these beta versions being in danger of hitting these bad behaviors. So if there are similar things for this, that would be helpful to know.
E
If
there
aren't,
then
I
would
probably
try
to
get
a
sense
from
the
ecosystem
like
what
what
skus
are
people
supporting
like
cube,
control
versions
and
yeah,
do
a
struggle
of
how
people
are
configuring,
the
exact
plugin
and
then
go
from
there,
especially
if
the
cost
is
low
to
us,
and
there
aren't
really
just
horrible
downsides
to
using
the
b1
beta
1
version.
I
would
use
that
to
inform
like
how
many
releases
we
leave
the
beta
version
in.
A
Cool:
let's
keep
it
rolling
and
ask.
C
One
more
quick
question:
sorry
about
that
sure!
What
what
when
did
we
introduce
v1.
B: How about, instead: alpha goes away next release; we deprecate beta in 1.23 — the deprecation doesn't necessarily mean anything yet — and we remove it in 1.29, as in six releases from when it was deprecated, which is two years.
A: I think the AWS CLI should upgrade to v1beta1 ASAP, definitely, and I think two years sounds reasonable, Mo. Yeah, that's a very long time. Yeah — what would I do—
E: That's true. That said, for a beta version that's existed for three years — we have a burden as well to provide a stable API in a timely way, and when we don't do that, it's sort of on us to recognize that people have built production-ish things on our beta API. And so, no, I—
E: Yeah, I mean — enumerating the cost to maintain it, and the deltas and risks of the beta version, and trying to figure out what kinds of skews people who are using the beta version of the exec plugin are seeing — my guess is a time frame like two years, or six releases, would be sufficient to say: the oldest kubectl you're using might be three releases old, and you might have skew of one or two versions on either side of that.
A: Awesome. I am going to try to keep it moving. So let's do this: follow-up for the secret store sub-project discussion.
G: Yeah, I added this in the last one, and then basically we discussed why we require this. So this is actually the issue where users were requesting the feature, and I had a small Google Doc for the proposal — just the thing that the sub-project wants to do. So the idea is we want to have a separate controller that does—
G
So
the
idea
is,
the
csi
driver
continues
to
do
what
it
does,
which
is
the
mount,
so
users
who
want
to
do
that
can
continue
doing
that.
We
want
to
split
out
the
sync
controller
logic
and
then
move
that
as
a
separate
repo
or
like.
I
mean
we
still
haven't
worked
out
the
details,
but
something
that
we
can
do
where
users
can
opt
to
opt
between
the
csi
driver
or
the
sync
controller,
so
they
get
like
two
different
features
and
then
they
can
choose
which
one
they
want
to
do
with.
G
So
I
put
this
on
the
agenda.
I
think
a
few
weeks
back
and
then
we
talked
about
it
initially.
So
I
also
sent
a
note
to
the
cigars
mailing
group.
It
received
few
reviews
on
the
talk,
but
I'm
just
looking
for
more
feedback,
and
if
this
is
something
that
the
community
thinks
we
should
do,
then
I
can
work
on
creating
a
detailed
proposal
with
implementation
plan
on
how
do
we
how
we
plan
to
do
it.
A
I
wish
rita
was
here,
I
think,
she's
out
of
office
today
I
don't
know
so
this
would
be
a
separate
sub.
Project
of
this
is
would
be
a
proposal
for
a
new
sub
project
of
sig
auth.
G
Right
I
mean
like
right
now,
it's
all
within
one,
but
we
could
split
it
into
another
sub
project
and
then,
basically,
just
have
users
choose
between
the
two.
The
good
part
is
we
have
these
custom
resources
and
also
like
a
way.
So
we
use
grpc
for
driver
to
provide
a
communication.
So
we
would
still
use
all
those
basics
even
for
the
other
sub
project,
so
in
terms
of
configuration
users
would
still
configure
it
the
same
way
that
they
do
it
today.
B: Because it doesn't really make— Like, yes, the Secrets API is nice and uniform and easy to consume and well supported in the ecosystem, because it's what's there and it's always been there, but it does significantly weaken some of the protections you get by not exposing those secrets into the Kubernetes control plane, right?
B
So,
even
if
the
foundation
of
two
of
these
two
very
related
projects
was
basically
the
same
code,
I
was.
I
would
personally
still
want
them
to
be
separate
things,
because
the
security
properties
of
those
two
things
are
actually
very
different,
even
if
the
code
that
implements
them
is
not
very
different
right.
A: And the driver code — basically the code that somebody would write outside of this repo, against the secrets CSI driver, to implement the integration — is that portable across the two? It is? Okay.
G
Yeah,
like
the
way
we
have
it
is
like
the
driver
or
like
the
client,
makes
a
grpc
call
and
says
give
me
all
these
objects
from
the
explain,
secret
store
and
the
provider
returns.
Everything
back
to
the
client
so
like
the
logic
for
either
writing
it
to
the
mount
or
syncing
is
kubernetes
secret.
It's
all
still
at
the
client
level,
so
providers.
If
they
have
the
plugin
today
they
can
use
the
same
plugin
for
the
other
sub
project
as
well.
A: A single controller, right. Previously I thought the API also included, like, pod namespace and name information on that mount call to the integration plugin — is that true? So plugins that depended on that would also not work for this, because you wouldn't know them when you create a secret.
G
Yeah,
the
pod
name
and
namespace
at
least
like
for
the
azure
plugin.
We
used
mostly
for
the
odd
scenarios
like
so
that
that
metadata
can
be
used
for
getting
a
token
but
like
there
would
be
some
limitations
in
terms
of
the
permissions
that
need
to
be
given.
So
today,
with
csi
the
benefit
is
you
can
basically
provide
the
required
permissions
to
your
individual
pod,
rather
than
giving
it
to
the
daemon
side,
like
the
csi
driver,
damon
said,
but
in
case
of
having
a
separate
controller
to
just
sync,
his
kubernetes
secret.
G
I
think
like
that
is
where
that
limitation
comes
in,
like
you
probably
need
to
assign
those
permissions
to
the
controller,
but
if
you're,
probably
using
like
for
gcp
workload,
identity
or
like
identity
specific
to
a
particular
part,
you
could
still
go
at
a
granular
level
and
grant
permissions
only
on
that
part,
and
then
the
provider
today
already
fetches
the
token
by
the
token
review
api.
A
Right,
I'm
yeah,
I'm
guess
I'm
interested
to
understand
how
this
works
for
a
like.
How
how
you
decide
what
namespace
can
ingress
secret
can
be
synced
into,
but
I
guess
that's
already
implemented
somehow
so
I
should
follow
up.
I
I
think
I
probably
feel
the
same
way
as
mo
that
it's
worth
separating
from
the
csi
driver
stuff.
A
How
did
we
do
this
last
time?
Was
there
a
cap
or
just
a
proposal
on
the
mailing
list?
Do
you
remember
mo.
B
I
I
thought
we
like
made
an
issue
to
some
repo
to
request,
or
some
github
repo,
to
request
the
stuff
and
then
had
to
be
signed
off
by
the
leads,
and
then
someone
else
from
somewhere
else
signed
off
on
it.
We
should
probably
be
able
to
dig
that
up
on
github.
I
did
want
to
ask,
because
mike
you
brought
up
a
good
point,
which
is:
did
you
guys
build
some
kind
of
authorization
mechanism
for
who
can
request
what
secret
or
does
that?
Is
it
just?
G
Yeah
today,
it's
tied
to
the
part
right,
so
the
user
creates
a
custom
resource
called
secret
provider
class.
So
that's
where
they
tell
which
all
secrets
to
get
which
provider
to
use
and
then
like
the
auth
mechanism
and
then
the
driver
at
that
point
reads
it
and
then
sends
it
to
the
provider
like
the
whole
thing,
and
then
the
provider
uses
different
auth
mechanisms
at
the
back
end,
so
like
for
gcp
plugin.
Today
we
use
workload,
identity
which
is
assigned
to
that
particular
part.
B: Okay, so all that makes sense. How does that translate into secret syncing? Is it just that, because it's happening inline there, the authorization is effectively already linked — it's already strongly authorized, so you don't have to do any other mechanism for the write? Is that the right thinking? (Yeah.)
B: Okay. So how — say the CSI stuff was gone, or whatever, and you just had the secret-syncing bits as a distinct project—
A: Sounds good. So last time, we did a review of the code and API once we decided that it would be good for it to be a SIG Auth project.
A: I do apologize — we're over time. There's one more item; I'm going to push it to the top of next meeting. Go.