From YouTube: Pinniped Community Meeting - May 6, 2021
Description
Pinniped Community Meeting - May 6, 2021
We meet every 1st and 3rd Thursday at 9am PT / 12pm ET. We'd love for you to join us live!
This week's discussion included LDAP Support, Refresh flow issues in v0.4.x, and Impersonation proxy deployments on private EKS/AKS/GKE clusters.
Shoutout to community member, Scott Rosenberg, for continuing to share his knowledge and expertise with the maintainers as they work on these issues. Full notes and meeting details here: https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ#May-6-2021
A
All right, hi everyone, welcome to this edition of the Pinniped community meeting. A reminder to please read and abide by our code of conduct when attending these meetings. The meeting is being recorded and uploaded to our YouTube playlist; after we're done with a meeting, recordings are typically added to the playlist within 48 hours. Today's date is May 6, 2021, which means it's also the week of KubeCon, so we hope you have enjoyed another lovely week of virtual KubeCon. Pinniped participated a little bit: we had office hours this morning, and we have a demo playing at the VMware virtual booth. You can click the link in the announcement section to check out that demo.
B
Yeah, I can talk about why we moved that. Okay, we had this theory that some users won't be able to use our current flow because of a device firewall, like on a Windows machine or something like that, that won't be able to open the listening port that we use right now.
B
We
realized,
as
we
got
into
starting
to
design
this
and
scope
it
out
that,
like
we
hadn't
really
validated
that
with
real
users.
So
I
still
think
it's
likely
that
we'll
do
this,
but
it's
sort
of
put
it
back
on
the
back
burner
until
we
can
get
some
real
customer
feedback
and
understand
exactly
why
we're
building
it
and
then
it'll
probably
move
back
up
in
the
roadmap.
C
I was going to mention that we did have at least one person, not necessarily with that exact use case, but they were trying to use a jump box. They didn't have a browser on the jump box, so they couldn't log in. I mean, you can log in on a different machine and copy the session files over, and that does work. But...
B
The
link
like
in
your
browser
also
needs
to
have
access
to
the
supervisor
and
we
like
basically
we're
wondering
if
device
code
flow,
actually
solves
that
scenario
or
not
or
if,
if
the
cases
where
you're
using
a
jump
box,
maybe
the
supervisor
is
also
kind
of
like
on
the
network
private
network.
Behind
the
jump
box,
you
actually
can't
can't
use
your
browser
locally.
B
In
that
case,
in
which
case
the
the
alternate
solution
which
you
could
actually
do
today,
is
like
some
sort
of
ssh
port
forwarding
to
get
get
that
local
host
port
on
the
jump
box
to
be
also
a
local
host
port
on
your
desktop,
but
anyway
yeah
like.
I
think
I
think
we
we
understand
in
theory
like
what
the
use
cases
are,
and
we
have
definitely
people
who
want
to
use
jump
boxes.
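For instance, assuming the CLI's localhost callback listener on the jump box is pinned to a known port (8080 below is just an illustrative value), a local forward from the desktop would look something like:

    ssh -L 8080:127.0.0.1:8080 user@jump-box

The desktop browser can then reach the callback listener on the jump box via localhost:8080, though, as noted above, the browser still needs its own route to the Supervisor.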
B
The
other
kind
of
like
scenario
about
like
host
firewalls,
we
haven't,
haven't
actually
encountered
with
one
of
our
users.
Yet
if
anybody's
watching
this-
and
you
have
a
use
case-
then
put
it
in
the
issue.
A
Great
thanks,
matt
and
mo
and
then
on
to
the
next
roadmap
item
we
have
ldap
support,
looks
like
the
main
pr
is
nearly
ready
to
merge.
B
Yeah,
I
can
give
an
update
on
this
too
or
marco.
If
you
want
to
you,
worked
on
this
more
than
I
did,
the
the
the
main
there's
there's
a
feature
branch
right
now
that
has
ldap
support.
It's
working,
it's
tested
integration,
tested,
supports
login
and
supports
getting
your
user
identity
all
the
way
through
the
whole
system.
It
all
works.
There's
a
few
code
review
items
left
on
that
branch
that
we
need
to
address
and
we've
also
been
sort
of
keeping
it
in
a
unmerged
branch.
B
Just
in
case.
We
wanted
to
release
the
current
main
branch
with,
because
it
has
some
other
bug
fixes
right
now
that
we
might
want
to
release,
there's
also
a
pr
that
adds
support
for
the
ldap
idp
type
in
the
git
coop
config
command,
so
that
you
can,
when
you
have
an
ldap
identity
provider,
you
can
also
use
git
code
config
to
get
the
config
not
to
do
any
manual
work
and
then
there's
some
issues
that
are
still
open
and
haven't
been
started
yet
around
adding
support
for
start
tls,
which
should
be
a
relatively
small
change.
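As a rough usage sketch, generating the config once that PR merges would stay the same single command it is today (the output filename is arbitrary):

    pinniped get kubeconfig > pinniped-kubeconfig.yaml

The command inspects the cluster's Pinniped configuration, including, with that PR, an LDAP identity provider, and emits a ready-to-use kubeconfig on stdout.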
B
Adding
support
for
group
search,
which
should
be
a
small
to
medium
change.
I
think
I
think
that's
it
and
then
I
think
then
I
think
we
were
going
to
call
this
done
for
version.
One.
B
Unless
anybody
else
has
anything
else,
I
forgot
it's
close.
I
think
it's
going
to
definitely
happen
in
may,
and
that
will
be
probably
0.80,
although,
like
I
said,
if
we
we
might
decide
to
ship
what
we
have
on
main
right
now
and
call
it
080
and
then
call
090
ldap
support,
we'll
see
every
time
every
time
we
do
any
of
these
things,
I
always
regret
putting
version
numbers
against
our
milestones
until
just
in
time
yeah
and
then
documentation.
I
can
talk
about
too
margot
marco
shipped
a
big
docs
update
this
week.
B
Last
week
we
had
a
new
new
documentation
page
about
how
to
use
gitlab
as
an
idp.
It's
pretty
cool
super
easy.
A
Okay,
great,
I
was
having
trouble
getting
off
mute
thanks
for
that
matt
and
as
far
as
this
discussion
topics
go
looks
like
we
have
a
couple
inputs
in
there.
Margo
refresh
flow
issues
in
vo.4.x.
D
Yeah
so
we
kind
of
discovered
some
surprising
behavior
in
the
in
the
refresh
flow,
where
we
expect
that,
for
as
long
as
your
refresh
token
is
valid,
you
know
you
can
you
can
use
it.
We
realized
we
had.
You
know
currently
assigned
hours,
we
store.
D
Secrets
related
to
the
access
token
and
refresh
token
information
and
the
access
token
only
last
15
minutes,
and
that
secret
was
getting
garbage
collected
soon
after
that,
and
this
was
causing
problems,
because
the
library
that
we
are
using
to
handle
our
oidc
stuff
phosphate
was
expecting
it
to
exist
so
that
it
could
revoke
that
secret.
But
since
we
had
already
deleted
it,
that
was
that
was
causing
errors.
C
I was thinking of the case where you have many access tokens over time. If you have more than one access token associated with a session, you know, that session ID, then all but the newest one should be garbage collectible immediately.
C
So
theoretically,
we
could
have
another
variation
of
the
garbage
collector
that
doesn't
look
at
ttls,
but
instead
looks
at
session
ids
and
builds
up
like
an
ordered
list
per
session
id
and
for
any
list
that
has
more
than
one
item.
It
deletes
everything
that
isn't
the
last
thing
in
the
list
that
that
maybe
that
doesn't
matter
in
practice,
I
was
just
trying.
I
was
trying
to
think
of
it
in
my
head.
If
that
actually
happens
in
practice,.
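A minimal sketch of that garbage-collector variation, assuming each token secret carries a session-ID label; the label key and the surrounding storage shape are invented for illustration and are not Pinniped's actual layout:

    package storage

    import (
        "sort"

        corev1 "k8s.io/api/core/v1"
    )

    // collectStaleSessionSecrets groups token secrets by session ID and returns
    // every secret except the newest in each group, so the caller can delete
    // them immediately regardless of their TTLs.
    func collectStaleSessionSecrets(secrets []corev1.Secret) []corev1.Secret {
        bySession := map[string][]corev1.Secret{}
        for _, s := range secrets {
            if id := s.Labels["storage.pinniped.dev/session-id"]; id != "" { // hypothetical label key
                bySession[id] = append(bySession[id], s)
            }
        }
        var stale []corev1.Secret
        for _, group := range bySession {
            sort.Slice(group, func(i, j int) bool {
                return group[i].CreationTimestamp.Before(&group[j].CreationTimestamp)
            })
            stale = append(stale, group[:len(group)-1]...) // keep only the newest item
        }
        return stale
    }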
D
Yeah,
I
thought
the
access
token
you're
talking
about
like
if
you
had
one
access
token
and
then
you
go
through
the
refresh
flow
and
you
you
need
a
new
one
with
that
old
one.
Stick
around
and
kind
of.
B
I think you just don't get this property. And I would honestly be fine if we ever decided that we wanted to refactor our storage to be stateless, like not actually be storage but just be signing, having a token that encodes all the information that we would have stored. I would be fine with that. I don't think we would really be missing out on any important security properties, but that's not what we have right now.
C
Yeah, but to be on the actual HackMD doc: this refresh flow bug actually exists in all versions of Pinniped, not just 0.4.x, right? It's because we have a garbage collection time that happens too soon. So basically, if you log in and get an access and a refresh token, and you wait like 20 minutes without doing anything, your access token has expired and has been garbage collected for sure. Then you try to do something.
C
We
try
to
refresh
your
token
because
it
doesn't
work
and
in
the
refresh
response,
at
the
very
end,
before
it'll,
give
you
a
new
refresh
and
access
token.
It
wants
to
revoke
everything
and
it
fails
to
revoke
the
access
token,
because
it's
already
gone
and
because
of
that
it
refuses
to
give
you
a
new
refresh
token.
So
the
flow
fails.
B
It
fails
to
do
that,
delete
not
to
the
user.
Okay,
it
sounds.
It
sounds
like
what
we
need
to
do
is
add
it.
Maybe
you
already
said
this
but
add
a
case
that
says
if
you
delete
something
and
you
get
a
404
error,
that's
fine!
You
d!
It's
deleted,
like
that.
That's
a
okay
case.
If,
if
fossil
tells
us
to
delete
an
access
token,
we
go
to
delete
it
and
it's
not
there,
that's
fine!
It
was
already
deleted.
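A minimal sketch of that fix, assuming the token is stored as a Kubernetes Secret deleted through client-go (the function is illustrative, not Pinniped's actual storage code):

    package storage

    import (
        "context"

        k8serrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
    )

    // revokeTokenSecret deletes the secret backing a token. A NotFound (404)
    // error is treated as success: the token is already unusable, which is
    // exactly what revocation is trying to achieve.
    func revokeTokenSecret(ctx context.Context, secrets corev1client.SecretInterface, name string) error {
        err := secrets.Delete(ctx, name, metav1.DeleteOptions{})
        if k8serrors.IsNotFound(err) {
            return nil // already garbage collected: an okay case
        }
        return err
    }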
B
That'd be okay too. Or we could, yeah, basically have the garbage collector only work on refresh tokens but link them with owner references, sort of. I know our garbage collector doesn't actually use owner references, but basically link the access tokens to be owned by the refresh token.
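As a sketch of that linkage, using real Kubernetes owner references (illustrative only; as noted, the current garbage collector doesn't work this way), deleting the refresh-token secret would then cascade to its access-token secrets:

    package storage

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // linkAccessTokenToRefreshToken marks the access-token secret as owned by
    // the refresh-token secret, so the Kubernetes garbage collector removes it
    // whenever the refresh-token secret is deleted. Both secrets must live in
    // the same namespace for owner references to apply.
    func linkAccessTokenToRefreshToken(accessSecret, refreshSecret *corev1.Secret) {
        accessSecret.OwnerReferences = append(accessSecret.OwnerReferences, metav1.OwnerReference{
            APIVersion: "v1",
            Kind:       "Secret",
            Name:       refreshSecret.Name,
            UID:        refreshSecret.UID,
        })
    }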
C
Yeah, I don't think it matters in practice. And at the very end of your session, when both the refresh token and the access token no longer make sense, we'll garbage collect both of them when it's time, so we're not accumulating crap over time, right? Sounds good.
B
Yeah, in fact, now that we have good caching of the exec credentials, like the certificates, you actually don't need any of these things to be longer than about 30 seconds, probably, because you're using each one essentially one time against a cluster to get a certificate. The certificate is valid for five minutes, and when the certificate expires, the client is going to go back and do the whole thing again.
B
If
we
we
don't
want
to
line
them
up
exactly
because
then
it's
like
really
non-deterministic,
what's
going
to
happen,
like
it'll,
sometimes
need
to
refresh
and
sometimes
not,
which
sounds
bad.
I
would
rather
just
say
all
the
tokens.
All
the
token
lifetimes
are
actually
shorter
than
the
certificate
lifetime,
so
that
in
the
normal
case,
every
time
your
certificate
expires,
you
go
through
a
refresh
slope.
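A tiny sketch of that ordering: the five-minute certificate lifetime is from the discussion above, while the token lifetimes are hypothetical placeholders:

    package config

    import "time"

    const (
        // Lifetime of the Concierge-issued client certificate, per the discussion.
        clusterCertLifetime = 5 * time.Minute

        // Hypothetical token lifetimes, deliberately shorter than the cert
        // lifetime so every cert expiry deterministically takes the refresh
        // path instead of racing the token expiry.
        accessTokenLifetime = 4 * time.Minute
        idTokenLifetime     = 4 * time.Minute
    )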
B
What I was also thinking about with the caching stuff is that now that we have the cache on the certificate, we could actually even stop storing access tokens, ID tokens, and the cluster ID tokens in the session cache, and simplify the format of the session cache to literally only have refresh tokens, because every time you read from the cache you're almost always going to be in that refresh case anyway.
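A rough sketch of the pared-down session cache entry being described (field names invented for illustration):

    package cache

    // sessionCacheEntry keeps only what the refresh flow needs: no access
    // token and no ID tokens, since every read effectively refreshes anyway.
    type sessionCacheEntry struct {
        IssuerURL    string // the Supervisor issuer this session belongs to
        RefreshToken string // the only credential still persisted
    }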
B
We
could
eliminate
a
bunch
of
code
and
complexity
and
like
if
statements,
if
we
just
assume
that
we're
always
going
to
go
through
the
refresh
flow.
The
only
thing
that
breaks
in
that
mode
is,
if
you're,
using
a
non-pinpoint
and
not
the
supervisor,
using
some
idp
oidc
idp
directly,
which
is
the
case
that
we
support.
But
it's
not
like
a
really
core
use
case.
B
Imagine I'm using, say, Okta as an OIDC IDP directly with the Concierge. In that mode, I might have Okta configured to give me a couple-hours-long access and ID tokens. I'm using them with a cluster; every five minutes the cluster is going to need a new cert, but I'm not going to actually have to log in with my browser except every few hours, because in that mode I might not have a refresh token, or might not have a scope that lets me use the refresh flow anyway. Like I said...
B
I
don't
think
this
is
a
core
use
case.
I
think
if
we
could
simplify
the
code
a
bunch
and
like
remove
support
for
that
I'd
be
okay,
maybe
doing
that.
It's
a
little
hard
to
argue
that,
because
we
do
have
the
code
that
supports
that
now,
so
we
it's
a
little
complicated,
I
would
feel
great
cleaning
it
up,
but
yeah.
C
All
right,
so
maybe
there's
three
distinct
issues
right
there
there's
the
actual
bug,
there's
another
issue
to
shorten
all
these
lifetimes
to
at
least
in
the
supervisor,
to
whatever
actually
makes
sense
with
our
caption
third
issue
for
decide.
If
we
want
to
keep
supporting
the
not
supervisor
the
ydc
upstream
and
the
concierge,
sorry,
not
the
concierge
but
the
cli,
or
how
we
want
to
support
it
at
the
caching
level.
Basically,
I
think
those
are
three
distinct.
B
Okay, I just took some notes; feel free to refine those. I think we can move on to the last topic here. I wanted to mention this because I've been working on it for a little while now and we don't actually have anything in GitHub about it yet. One thing we realized after releasing 0.7.0, where we added support for the impersonation proxy, is about the way it works.
B
Is
it
automatically
detects
when
you
install
the
concierge
on
a
cluster
that
has
no
control
plane
nodes,
so
it
has
like
kubernetes
control
plane
hosted
where
we
can't
see
it
like.
This
is
the
case
on
gke
and
google
and
aks
on
azure
and
eks
on
on
amazon
and
and
then
also
turn
on
some
other
types
of
clusters,
and
when
we're
in
that
mode,
the
default
thing
we
do.
Is
we
auto
detect
that
and
we
start
up
a
load
balancer
service.
We
start
serving
this
impersonation
proxy.
B
We
have
all
this
code
on
the
client
side
to
detect
this
and
give
you
a
good
config
that
routes
through
the
impersonation
proxy
all
works
great,
and
we
tested
this
on.
All
the
default
configurations
for
all
these
all
these
cloud
cloud
provider
managed
kubernetes
clusters.
It
works
great.
B
What
we
didn't
test
or
consider
is,
if
you
have
a
cloud
provider,
managed
cluster
from
like
one
of
these
big
big
three
cloud
providers,
but
you
have
the
cluster
control
plane
in
a
private
network,
which
is
something
you
can
do
on
all
three
major
cloud
providers
either
either
in
the
case
where
you
have
it
literally
on
a
private
subnet
like
a
an
rfc,
1918,
private,
ip
range
or,
and
also
in
the
case
where
you
have
it
on
a
public
ip,
but
with
like
security
group
rules
or
firewall
ingress
rules
that
limit
the
scope
of
who
can
connect
to
it.
B
When
you
deploy
the
concierge
on
one
of
those
clusters,
it
opens
this
load
balancer
service
and
it
turns
out
that
on
on
some
of
these
cloud
providers,
even
though
your
cluster
is
a
private
cluster,
the
default
load
balancer
is
a
public
load
balancer,
and
so
what
we've
effectively
done
in
those
scenarios
is
open
up
a
public
proxy
public
ip
proxy
to
your
private
cluster,
which
is
probably
not
what
you
wanted.
It's
not
like.
B
This
becomes
a
feature
that
is
needs
a
little
bit
of
manual
control
so
that
we
have
to
know
something
about
the
user's
intention
when
they
installed
pen
a
pad
like
did
they
intend
to
open
this
public
proxy?
If
so,
that's
fine,
it's
not
honestly,
not
a
problem
for
most
people
or
do.
Would
you
rather
turn
that
feature
off
which
might
be
something
we
need
to
support?
Or
do
you
want
a
mode
where
we
run
the
proxy
run
the
listening
proxy,
but
we
don't
provision
the
load
balancer
in
front
of
it.
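Those three intents could become an explicit setting; a hypothetical sketch (all names invented, not an existing Pinniped API):

    package config

    // ImpersonationProxyMode captures the installer's intent instead of
    // relying purely on auto-detection.
    type ImpersonationProxyMode string

    const (
        // ModeAuto keeps the current behavior: detect clusters without
        // visible control plane nodes and provision a load balancer.
        ModeAuto ImpersonationProxyMode = "auto"

        // ModeDisabled never runs the impersonation proxy.
        ModeDisabled ImpersonationProxyMode = "disabled"

        // ModeEnabled runs the listening proxy but provisions no load
        // balancer, leaving any fronting Service to the user.
        ModeEnabled ImpersonationProxyMode = "enabled"
    )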
B
So
that's
up
to
you
to
deploy
manually
I
anyway.
This
is
a
little
unfortunate
because
it
makes
this
super
easy
default
automatic
feature
into
something
that
is
no
longer
quite
as
automatic.
So
that's
a
little
disheartening,
but
I
think
it's
probably
the
right
thing
to
do
there.
I
don't
know
if
anybody
else
has
thoughts.
I
thought
I'd
just
give
a
status
update
there.
I
also
have
an
action
item,
which
is,
I
need
to
file
my
findings
on
this
as.
B
Yeah, I thought about that option too. I haven't looked at the Azure ones, but I think on EKS you can add an annotation, though you have to include the subnet ID that you want it to be provisioned in, and we don't have a way of auto-detecting that. So another kind of semi-manual, semi-automatic thing we could do is provide an API field, probably on the CredentialIssuer object, where you can give us some annotations and we will put those annotations on the load balancer Service.
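A sketch of that idea: user-supplied annotations flow onto the load balancer Service the Concierge creates. The Service name and the source of userAnnotations are hypothetical; the AWS annotation in the comment is a real one for requesting an internal load balancer on EKS:

    package concierge

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newImpersonationProxyService builds the load balancer Service with the
    // user's annotations applied, e.g.
    //   service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    // Ports and selector are omitted for brevity.
    func newImpersonationProxyService(namespace string, userAnnotations map[string]string) *corev1.Service {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "impersonation-proxy-load-balancer", // hypothetical name
                Namespace:   namespace,
                Annotations: map[string]string{},
            },
            Spec: corev1.ServiceSpec{Type: corev1.ServiceTypeLoadBalancer},
        }
        for k, v := range userAnnotations {
            svc.Annotations[k] = v
        }
        return svc
    }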
B
That
would
be
probably
not
that
hard
to
add
as
a
feature,
but
it
also
it's
it's.
You
can
get
something
similar
if
we
just
say
like.
If
you
need
some
sort
of
custom
configuration
on
your
load,
balancer
service,
that's
fine,
just
like
define
it
outside
of
pen
pet
itself
and
point
it
at
a
particular
label.
Selector.
C
I
gotta
drop,
I
was
just
gonna
mention
one
thing
technically,
the
way
the
code
is
written
today.
If
you
go
mess
with
the
service
after
we've
created
it,
we
don't
reconcile
it
back.
So
if
you
wanted
to
go
ahead
annotations,
you
could
go
do
that
today
and
it
would
technically
work,
probably
not
saying
that's
supported
or
well-formed
as
an
api
yeah.
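So, with those caveats, something like the following would stick today (the Service name is a placeholder for whatever your install created; pinniped-concierge is the default install namespace):

    kubectl annotate service <impersonation-proxy-service> \
      --namespace pinniped-concierge \
      service.beta.kubernetes.io/aws-load-balancer-internal=true

Since the Concierge doesn't reconcile the Service back, the annotation survives, but this is a workaround rather than a supported API.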
B
There
is
also
a
not
an
api,
but
an
internal
configuration
knob
that
we
use
for
testing
where
you
can
actually
like
do
all
the
things
I
mentioned
already.
It's
a
part
of
what
we
might
be
doing
to
fix.
This
is
take
that
kind
of
internal
config
map
knob
and
promote
it
to
a
proper
api,
and
I'm
actually
pretty
glad
that
we
didn't
do
that
already,
because
I
think
we
understand
this
particular
use
case
a
lot
better
and
I'm
sure
that
we
can
factor
that
into
the
api
design
more
than
we
would
have.
B
Yeah,
my
feeling
is,
we
can
probably
leave
the
default
the
way
it
is
on
the
git.piniped.com
install
concierge.yaml
like
the
default,
install
bundle,
I
think,
can
still
have
this
behavior,
because
if
you're,
a
user
you're
sitting
down
and
I'm
typing
the
command
to
install,
pin
ipad
and
I'm
watching
it
install,
I
think,
we'll
put
a
little
warning
in
the
docs.
Probably
to
tell
you
this
is
going
to
happen,
because
it's
not
obvious,
I
think,
that's
probably
still
the
right
default
behavior.
D
I
mean
it
is
great
that
we
talked
about
hey
we're
not
going
to
put
these.
You
know
knobs
on
our
public
api,
because
we
haven't
validated
that
people
need
them
and
now
we've
kind
of
gotten
validation
that
people
need.
E
Just
one
last
thing
on
the
impersonation
proxy
from
my
side
has
anything
else
been
thought
about
in
terms
of
the
support
for
the
tkgi
type
clusters,
where
I
have
control
plane
nodes,
but
don't
have
a
cube
controller
manager.
A
way
of
manually
saying,
I
want
an
impersonation
proxy
instead
of
it
being
auto,
detect
in
order
to
support
that
type
of
a
kubernetes
cluster.
B
Yeah,
that's
another,
that's
another
case.
I
think
we
also
are
looking
specifically
at
a
couple
of
cluster
types
that
we
don't
support
by
default
right
now,
including
tkji
and
probably
openshift.
I
think
it
just
needs
a
little
tweak,
so
another
thing
we
might
do
for
for
tka
tkgi
is,
I
think,
there's
just
some
small
tweaks
about
how
we
could
basically
use
the
coopster
agent
flow
on
those
clusters
and
make
that
work
and
then
on
openshift.
B
You
know
the
sort
of
the
signing
key
that
we're
looking
for
is
actually
in
a
kubernetes
secret,
so
it's
really
trivial
to
get
access
to
it.
We
just
need
to
write
a
little
piece
of
code
that
detects
that
that
secrets
available
on
openshift
and
goes
and
grabs
the
key
and
and
that's
something
I
think
we
have
in
our
backlog,
so
tkgi
support,
specifically,
though
something
that
I've
also
it's
also
on
our
on
our
backlog.
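A rough sketch of that OpenShift idea; the namespace and secret name below are placeholders, since the real locations would need to be confirmed:

    package concierge

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // fetchControlPlaneSigningKey reads the control plane's signing-key secret
    // directly, rather than running the kube cert agent flow.
    func fetchControlPlaneSigningKey(ctx context.Context, client kubernetes.Interface) (*corev1.Secret, error) {
        // Placeholder namespace and name, for illustration only.
        return client.CoreV1().Secrets("openshift-control-plane").Get(ctx, "signing-key", metav1.GetOptions{})
    }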
B
Yeah, and to the thing you asked specifically: yes, I think we have this internal ConfigMap kind of control surface right now, where one thing you can do is force the impersonation proxy to be on, and we do that in testing. It lets us test this whole thing on kind.
B
You
could
you
could
force
the
proxy
to
turn
on
and
kind
of,
otherwise,
wouldn't
so
I'm
sure
that
that
will
make
it
into
the
proper
api
as
well.
B
The
current
plan
is
we'll
put
these
new
settings
on
the
spec
of
the
credential
issuer.
So
right
now
we
have
this
credential
issuer
object.
That's
right
now
is
only
sort
of
like
a
write.
Only
api,
like
you,
you
install
pen,
embed
and
pen
pen,
the
concierge
creates
a
credential
issue
and
starts
updating
its
status.
B
I
think
that
makes
the
most
sense.
So
it's
more
like
the
concierge
configuration
that
you're
passing
in
and
who
knows,
maybe
we'll
rename
the
type
eventually
too
but
yeah.
Hopefully
these
are
all.
I
hope
these
are
all
pretty
small
changes.
There's
a
possibility,
also
with
these
two
bugs
that
we
might
decide
to.
I
don't.
Maybe
this
is
a
good
topic.
We
might
decide
to
try
and
fix
these
two
bugs
and
ship
those
fixes,
along
with
some
of
the
other
fixes
that
we
have
as
080
and
actually
hold
ldap
back
for
another
release.
A
Okay, cool. Thanks, everyone, for the excellent discussion. If you're watching this from home, the next time we are meeting is May 20th. We meet every first and third Thursday of the month at 9 a.m. Pacific time, so we do encourage you to attend one of these live. If you have any questions for the maintainer team, they're happy to help you and guide you in any way.
A
We
also
encourage
you
to
join
our
kubernetes
slack
channel,
which
is
just
hashtag
pinniped,
and
you
can
also
find
us
there
as
well
as
twitter,
which
is
pinnipeddev
our
project
pinniped,
and
hopefully
we
will
get
to
engage
with
you
in
one
of
those
opportunities
and
see
you
next
time.
Alright,
thank
you.