From YouTube: Pinniped Community Meeting - February 18, 2021
Topics discussed include LDAP identity provider design, how to handle multiple IDPs, and deprecating local-user-authenticator in favor of some “local user” IDP in the supervisor. Notes and details on the next meetings found here: https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?view
A
All right, hello, and welcome from the apocalypse in Texas. Thanks for joining the third Thursday community meeting for Pinniped. We've got some really exciting topics to cover today.
A
Driver. Thank you.
C
Cool, we'll go through status updates first, and then there's actually quite a long list of discussion topics. I kind of don't expect us to get through most of them, so we'll just save them for next time if we don't have time today.
D
First up is Andrew. Mostly just working on the Concierge impersonation — or helping work on the Concierge impersonation proxy implementation. I'm sure we'll hit more on status there, but that's mostly my update.
F
No — yeah, so Ryan and I did a lot of LDAP work this week, and I think the main thing I worked on between the last meeting and this one is the changes to the Concierge APIs; just sort of wrapping something up there.
C
Dependabot is working again, so there's a bunch of PRs — it opened a bunch of PRs yesterday that bumped deps, and then this morning it closed them all and reopened new ones, because the deps changed again. I think this is really cool. It doesn't quite work yet, because the CLA bot is still telling it that it needs to sign our CLA, which is not true.
C
I escalated that internally with the person who works on that project for the CLA bot. Anyway, we can talk about that offline. I've been working on docs this week on our website, so you might notice this very mind-blowing change on our website, where now the fonts are bigger.
C
One of the changes I made yesterday was bigger fonts and more contrast, but mostly I've been working on the content in our docs and kind of reorganizing it. I wanted to call out that we hit a hundred GitHub stars yesterday. I think this is not working right.
H
Yep, I guess the bullets are pretty self-explanatory. We've been doing a lot of planning and scoping work this week and last, and I've been catching back up on the architecture, because I was out so long — it was like, how does this stuff work again? I need to look at this. So that helped me out a little bit with that, and also thinking through a longer-term but iterative roadmap to share with the open source community.
H
I think the only thing that I'm trying to think through a little bit more on this is — you know, there's a bit of a balance between the VMware side of the house and what's relevant to the open source community. So, thinking about how we put stuff forward to the open source community and keep our main backer happy as well. So that's all.
C
Cool, I think we'll dive into discussion topics now. So we did kick off a little bit of discussion yesterday about the LDAP identity provider, but Andrew was out and it wasn't in the community meetings. I don't know — Mo and Ryan, do you think you can introduce that a little bit? Maybe we can rejoin that discussion, basically: what do we want to do about LDAP?
F
You know, I can give my little spiel; maybe Ryan has slightly different thoughts, or others have different thoughts. But in general, I think what we want is for Pinniped to be useful in a broad set of environments. The impersonation proxy is, you know, one of those attempts to make it available in a lot of Kubernetes environments where we don't work today.
F
Sort of the flip side of that is that where your identity lives might not be an OIDC provider, and we want to cover that side as well, right? So if you can run Pinniped successfully on a variety of Kubernetes environments, and you can configure Pinniped to honor your identity from a variety of sources — if we cover both of those edges as best we can, we're much more likely to be broadly useful. So we felt that, well, OIDC was the greatest place to start.
F
You know, it's sort of like a great protocol for getting you a really big footprint on what providers you support, but the reality is not everyone uses it. So a very common deployment is LDAP, or Active Directory, or FreeIPA, or OpenLDAP, or a variety of different flavors. It's a very traditional identity provider.
F
It's been deployed for a long time, so we thought that one would be a good one to support. I suspect over time we will support more, and it would be great if the sort of order that we go in was based on feedback — just like, "wouldn't it be cool if you supported GitHub" — you know, a little bit of a community-driven focus there would be really nice, personally, for me. So, does anyone have things to add about our thoughts?
H
From a product lens, that's exactly right: utility and adoption was the main driver behind this decision. Well articulated, though.
C
So I think the other motivator here is that there are just a lot of LDAP installations out there that don't have OIDC support, and we want to target those environments.
F
Yeah — you should not expose your LDAP to the public internet. I would not be too stressed out if you exposed the Supervisor to the public internet, because it's supposed to be safe in that environment.
C
Cool, okay. So — Ryan or Mo, do you want to share that doc from yesterday, and we can go through it some more? Or is that a good place to restart, or is there a different place? I think yesterday we got through a good chunk of the doc, and the section that we left off on was —
C
Yeah, if you want to. So, Mo brought this up this morning — I'm happy to defer to that meeting if we want to; we have other discussion topics. Do you folks want to talk about it now, or do we want to move on to other topics right now?
F
Can I get a Roman vote? I was going to say, from a community perspective, what I was more hoping to make clear to anyone is: what are our open questions, and does anyone have an opinion on them? Because for a lot of them, we don't necessarily know what choice is the best.
F
We should certainly link the doc so that people can find it and make comments on it, but it is kind of an in-the-weeds doc, right? Ryan and I spent a lot of time basically going through enough different variations to pull out — not implementation details per se, but important questions to answer and agree upon, so that whatever we build is sustainable and useful for a lot of end users, and secure, that type of stuff.
C
The first one was to spur a little bit of discussion about multiple IDP support, which we kind of got into yesterday — we decided it was out of scope for the LDAP feature — but I wanted to bring this topic up; I think it's going to be relevant pretty soon. My thought is: there are a couple of things that we want to do, and I think they're all pretty small. One would be to have some sort of wiring between the federation domain and the identity provider.
C
Some sort of additional authorize parameter that selects an IDP. That would be during the CLI login flow, assuming that there are multiple IDPs and that the CLI already knows which one it wants to use. We should be able to express that in the original authorize request, and that should then be one of the authorize parameters that we track, and that lets us trigger the redirect back into the IDP login. And then, I think, maybe there's a third one.
C
That one's not that important now — it would be a discovery API, so that if you don't know which IDP you want to use, you can list the available IDPs, along with maybe some properties about each one, so you can choose; you can set a priority on the client about which one you want to use. So, those three items: a way to wire together multiple IDPs and multiple federation domains, a parameter to select one, and maybe a discovery, sort of advertising, mechanism. Is that roughly what other folks had in mind?
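Hypothetically, those last two pieces might look something like the sketch below. The endpoint path, response shape, and parameter name are all invented for illustration; none of this existed at the time of this discussion:

```yaml
# Invented sketch of an IDP discovery document the Supervisor could serve, e.g. at
#   GET https://supervisor.example.com/issuer/v1alpha1/pinniped_identity_providers
identity_providers:
  - name: corporate-ldap
    type: ldap
  - name: corporate-oidc
    type: oidc
# ...and the CLI could then select one up front with an extra parameter on the
# original authorize request, e.g.:
#   GET /issuer/oauth2/authorize?...&pinniped_idp_name=corporate-ldap
```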
F
This is an initial thought: I think a label selector makes me nervous, mostly because it's dynamic — I don't know if that actually makes sense for the properties we would want. If you're putting the label selector on the federation domain, you're saying the federation domain picks which identity providers belong to it, right? But, to me, that would imply that if you create a new identity provider, you should not be able to coerce a federation domain into consuming it by giving it labels that match that federation domain's selector.
C
Maybe I would say my sort of threat model for this is that the permissions for managing federation domains and managing IDPs always belong to the same set of people, so you never have permission to change one and not the other — it's not useful to assign permissions that way. Okay, so in that model, we're not necessarily trying to design these APIs so that they can express any other, more complicated threat model.
C
We're just trying to make it so that the configurations that you'd want to use in most production environments are easy to manage. So, in other words: I might have a set of IDPs that are for developers — I could call those "developer IDPs" and set a label on them — and maybe have another set of IDPs that I know some contractor team uses, and then I can define a federation —
C
Yeah, I mean identity provider CRD instances. So we have the OIDC one, we're adding the LDAP one, and if we add more, they would all have — I could label those. So I could say: okay, my OIDC provider, and maybe some future GitHub provider —
C
Those are developer IDPs. And I've got some contractors — they have, say, a "contracting firm X" IDP — and I label it specially. And then I could split up my federation domains: I could have a federation domain for dev clusters, where I give access to everything, and then another federation domain for, you know, staging and prod clusters or something, where I exclude the contractors or something like that.
C
I don't know — I'm just trying to think about what kinds of relationships and configurations might be useful in practice, and how we can express them in the simplest way. So, another proposal would be: don't use label selectors; use an explicit list of object references in each federation domain. And I feel like that would be cumbersome, maybe.
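To make the two options being debated concrete, here's a hypothetical sketch of what each might look like on a FederationDomain. Neither field existed in the API at the time; both field names are invented for illustration:

```yaml
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
  name: dev-clusters
spec:
  issuer: https://supervisor.example.com/dev
  # Option A (invented field): pick IDPs dynamically via a label selector.
  identityProviderSelector:
    matchLabels:
      audience: developers
  # Option B (invented field): pick IDPs with an explicit list of references.
  identityProviderRefs:
    - kind: OIDCIdentityProvider
      name: corporate-oidc
    - kind: LDAPIdentityProvider
      name: contracting-firm-x
```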
C
So I think the mental model I had that breaks down there is: if I have separate federation domains for, like, dev and prod, oftentimes I would want them both to use the same set of IDPs.
C
So I want them to be separated for some other reason — maybe I have different token lifetimes or something like that downstream — but actually it's the same pool of users logging into both of them. I don't want to have to define my IDP twice and have to go register two clients. Although, actually, you have to register two clients anyway right now, because the callback URI is nested under the federation domain URL, right?
F
So the reason I would say that is: I think you want them to actually be separate IDPs — I think you actually want to define them twice. I don't think it's actually burdensome, because you might do this once or twice. I just think the label selector is very Kubernetes-like, but it's incredibly flexible, and I don't actually know if you want that level of flexibility. That's the concern there: yes, you could reuse IDPs.
F
Maybe you don't want to, right? Maybe that's actually not the desired intent: here's my bucket of IDPs for this federation domain, here's my bucket of IDPs for that one. And if I had to copy-paste one YAML — once you do it once, you don't have to keep doing it, right? Like, how often do you expect someone to make a new federation domain and make a new IDP? It tends to be almost static config once it's done.
E
If it was what you suggested — maybe it could also be object references; that would work just as well to allow you to avoid copying and pasting your config around. You're asking if you would have to define another OAuth client? I don't think you would, right, because one OAuth client could have lots of different allowed redirect endpoints.
C
That — yeah. The only kind of ingress that we have to the Supervisor is through the endpoints defined in the federation domains. We actually could have separated that — I guess we didn't; I don't think we even really considered it. We could have had, on the OIDC identity provider CRD, another place to specify an HTTPS endpoint that serves as the callback.
C
I think if we wanted to make one OIDC identity provider work — well, you could register multiple redirect URIs, but if you didn't want to reconfigure your provider each time you added a new federation domain, we'd have to factor out the callback URL from the federation domains. And maybe that's — this sounds a little bit complicated and confusing, to have to define that separately.
C
But if we do want to think about the Pinniped Supervisor as an abstraction layer that has a bunch of IDPs on the back end and then has federation domains on the front end, it would be nice if they were kind of totally separate API surfaces, I guess, so that the behavior of the OIDC identity provider doesn't really depend on how you're using it or configuring the downstream surface — it's only configured by the OIDC identity provider CRD.
C
I think I'm getting to kind of a different proposal now. But I do worry — for me, maybe the first question to answer is: okay, right now we have a single federation domain and a single IDP; when we support multiple of each of them, what is the relation between them? Is it many-to-many?
C
Which of those things makes sense? I think what Mo is saying is that it should be one-to-many: each federation domain might have several IDPs backing it, but each IDP only ever belongs to a single federation domain.
C
I think my initial thought was that it was many-to-many. And maybe this matters too: if we do have an object reference or a label selector — if it's many-to-one, like you're saying, Mo, then actually the object reference (sorry, not the label selector) would go on the IDP CRD.
E
You could imagine creating several IDPs and then putting labels on them — like, this is a production IDP, this is a staging IDP — and some of those IDPs might even have multiple labels, like this IDP is both production and staging. That has kind of cool properties in terms of helping you organize things and making it clear why you're setting it up, not just the way that you're setting it up.
F
I guess I thought, in my head, that you would just define the federation domain — call it prod — and if it was all in the federation domain, you'd just list the main environments right there, by definition. You know, in that world you could theoretically still do the many-to-many; you could actually still reuse them, if the pointers were going from the federation domain out, because technically two federation domains could still select the same one. But yeah.
C
The other use case that might be — I think actually it's almost a simpler use case — is the multi-tenant use case: I'm a managed services provider, I've got a Pinniped Supervisor, and I've got a customer Coke and a customer Pepsi, and I just want to keep them totally separate. And you could do that today — one way you could do that is with different namespaces. I feel like that would still be the right way to do that.
C
Even if we add support for multiple IDPs and multiple federation domains and wiring them together, the right thing to do in that case might still be to run two copies of the Supervisor, each in separate namespaces — that scenario would work today without additional work from us. Does that match people's expectations?
F
Yeah — so it depends on the level of isolation you're trying to guarantee, and sort of the cost you're willing to pay for that isolation, right? Because there is a cost to running many different processes, possibly with their own lifecycles and how they're managed. And maybe that's exactly what you want: maybe you want to upgrade Pepsi's Supervisor on a particular Saturday because they told you it was okay if you did it between some particular hours — even if it had no downtime, they would prefer you do it then, and it really didn't matter. So, yeah.
F
If you wanted that super-strict isolation — and technically you could take it one level further and actually give them a different API group and a different namespace, so then they're complete islands; they don't share anything. That would be the ultimate version of that. I'm not saying you really shouldn't do that; that's like —
E
If you really want isolation, you should really put them in different clusters, right? Yeah — different namespaces, yeah.
C
Yeah, because you're always trading off isolation and resource efficiency. You might as well say you need to be at different data centers — different air-gapped data centers — that's very secure, but yeah. So, I think we talked about two use cases for multiple federation domains. And I think we kind of all understand why you might want multiple IDPs: you might be transitioning from one to the other, or you might have an IDP that's complicated and has poor uptime and another IDP that's really simple but always online, or you might have IDPs for different pools of users. That seems more clear to me. I don't think we've necessarily talked about why you might want multiple federation domains.
F
Obviously — one comment on one of the common patterns I saw for multiple IDPs: the IDPs would actually be backed by the exact same user data; it's just that the IDPs themselves had different levels of functionality. So I often saw OIDC and Active Directory used together: the OIDC flows were reserved for browser-based interactions, and the Active Directory flows were reserved for CLI-based interactions. So it was basically: I want SSO everywhere, I want a different SSO flow for the CLI, and I want a different SSO flow for the browser.
C
I think — so, the three things I think we've kind of already talked about here. One is the multiple-tenants case: do you want to isolate?
C
Then, if they happen to choose the same value and they have the same issuer, that's bad, right? So we're using the issuer to separate that. I think we also talked about dev/prod isolation, which is basically similar.
C
I think it keeps you from having to coordinate things in one environment with things in another environment and still have safety properties. I think another thing we talked about is that — maybe in the dev/prod case, or maybe for some other reasons — you might have different clusters that actually want different configuration parameters for things like token lifetimes. Or, I guess, the other thing here is different sets of valid IDPs. I think that's the thing you might —
C
You may have dev clusters where you allow, maybe, login from GitHub, and prod clusters where you only allow login through corporate SAML or whatever. And those are separate partly for what Mo said, which is that they have different capabilities: maybe the GitHub flow lets you log in really smoothly — you log in once a day, you have pretty long token lifetimes, and you can have refresh operations, and that all works.
E
Okay — maybe we all understand this, but for anyone watching the video, it's maybe worth just stating the basic concept of a federation domain: it's a group of clusters, and anyone who can authenticate into the federation domain can authenticate into all of those clusters equally.
F
So, a related question I had to that is — I know we don't support it today, but what are folks' thoughts on an enhancement to the federation domain for it to become more opinionated about what audiences it will support? Basically, have the STS endpoint — you know, it could be super strict: you could say, if you ask for a cluster that I don't know about, I just reject. Or it could be: if you ask for a cluster that I do know about, then you'd better have this audience.
E
When we designed it, I thought that it would make sense to add a kind of allowlist like you're describing, where the federation domain will only respect — or only give out STS tokens for — a specific allowed list of audiences.
C
It could be a static allowlist, where you'd have to — maybe usually have some controller managing it. Or we could also support a regex or some other things, if we wanted to add some level of validation. But I think a static list of allowed audience values in our API, which we'll check if it's there, is a reasonable first take. And then, yeah —
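As a rough sketch of that first take, a static allowlist might look like this. This is hypothetical only: the field name is invented and did not exist on FederationDomain at the time:

```yaml
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
  name: prod
spec:
  issuer: https://supervisor.example.com/prod
  # Invented field: token exchange requests for any audience not in this
  # list would be rejected by this FederationDomain's STS endpoint.
  allowedAudiences:
    - cluster-prod-us-east-1
    - cluster-prod-eu-west-1
```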
C
I think we also talked about writing a plugin for Pinniped — that our first kind of Pinniped add-on would be a controller that populates that list based on Cluster API data. That would be something we could basically build: a little controller that watches that API and echoes this field into Pinniped.
F
I was going to say, when you were trying to write "consensus" and having a hard time spelling it: the next time we make a new component, please make it a word I can spell, because I have yet to successfully spell the word "concierge" correctly on the first try. So, just as a personal request — that word is just so hard for me to spell, and I always get it wrong.
F
And I think, to be fair, the current model that the federation domain has is similar — the idea that the thing that gave you the kubeconfig, that made it so that you requested a particular audience, is trusted.
F
So you are generally safe, right — but basically, don't get kubeconfigs from places you don't trust, because remember that it can execute an arbitrary process as your user when you call the kubectl binary, and it could then go read your home directory and copy all your keys to $BAD_PLACE. So you have to trust that, and as long as you're getting it from a trusted place, you're totally safe.
C
It's a little subtle. Okay — I feel like this was a good discussion, but I think we kind of petered out, so I think we could move on, if folks are happy, and we have just a few minutes left. This is another — maybe a little more contentious — topic: I'm looking at local-user-authenticator and thinking that it was very useful when we only had the Concierge, and it's still very useful to test webhook support.
C
But I was looking at, like, our documentation that kind of tells you how to use it, and I was thinking maybe it's not the best tutorial kind of component. I wonder if we could replace it with something even more bare-bones that's useful for testing, and maybe add proper "local user" support through a CRD.
C
That's basically the same, but across a whole federation domain — to say, my IDP is that. So the use cases for this would be: demoing Pinniped — that's one, where I'm demoing a cluster that uses Pinniped — and also the use case where my LDAP server is down, so I can't log in, but I have this local user just in case.
F
I am not so sure about the Concierge, mostly because, in my mental model, the Concierge is not trusted by the user, right? Because the Concierge isn't the IDP, normally. So, as a silly example: if the Concierge had an IDP on it that supported username and password or something, and you gave it your password — to me, that's a violation of what the Concierge is.
C
Yeah — I would say we should not even call them passwords; I would call them bearer tokens, and then always generate them randomly, so you're never — it would then be a token that's specific to that cluster, would be the idea. I don't know. Another thought I did have here: another way we could get local user support today is to add a CLI command that generates a JWT signing key and saves it locally.
It would output the JWT authenticator — so we'd have to add support for specifying the signing keys inline in the CRD. Basically, use our JWT signing support and say: hey, here's a JWTAuthenticator that's not pointing at a Supervisor — it's not pointing at an OIDC server at all; it's just telling you that tokens signed by this key that's on my laptop are valid — and basically make that whole flow work nicely.
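A hypothetical sketch of what that could look like. At the time, JWTAuthenticator could only point at an OIDC issuer; the inline-key field below is invented for illustration:

```yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: local-admin
spec:
  audience: my-demo-cluster
  # Invented field: trust tokens signed by this locally generated key,
  # instead of fetching keys from an OIDC issuer's JWKS endpoint.
  inlineJWKS: |
    {"keys": [{"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}]}
```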
C
I can see how that is — I can see where you're coming from. I think that any solution that involves an extra moving part doesn't satisfy the use case of having a safe fallback, right? If I have to have another process running, with certs managed, and ingress, and all these kinds of extra moving pieces.
This is simple, and it's not an external dependency at all. But I think maybe another way to address some of the concerns is to be really strict about the scope of the local user support, and maybe go really far to make sure that nobody should be using this as a real IDP. So maybe don't support groups, or maybe always hard-code it and say you can only use this for system:masters users — like, this functionality exists, but it's really specifically for this one use case.
E
I think that's what local-user-authenticator is, and it wouldn't be very hard to productionalize it. It would be easy to set it up to run in the same deployment, so it's not a separate moving part — it's more almost like a sidecar. There's no ingress required, because it's available as a Kubernetes Service within that same deployment, and it's something that you can keep as a separate process, with a separate code base and a separate configuration.
C
We could do that — I did think about that. So, productionalizing local-user-authenticator, to me, probably looks like making it automatically generate its own WebhookAuthenticator object and manage it. So when you install local-user-authenticator, it assumes that there's going to be a Concierge already running, and it configures the Concierge to respect it, basically — and so that takes a bunch of the manual setup out of connecting them, right?
F
I don't see why we would do that, though, or suggest building it in, because I don't know if it's actually any different work — and it's also in the same code base, and it has all the same problems with growing without bounds, if you're worried about that, right? Because we would still want it configurable with a CRD, to be convenient to use. So what you're left with is the same surface of work and the same sort of moving — actually, there's an extra moving piece, but we're hiding it, which is cool.
C
I also think maybe this decision is a little bit easier in the context of just the Concierge, because the Concierge is not even really trying to be a federation-anything — all it's really configuring is token validation. And so I kind of see the scope of features that the Concierge will ever have for token validation pretty much matching what Kubernetes has for token validation, which is JWTs, webhooks, and some simple bootstrappy kinds of tokens. So that kind of draws a box around what potential scope it might have. The Supervisor, I think, is more open-ended, because it has proper IDP support, and then maybe in the future would have a richer data model for users and groups and refreshing and revocation and sessions, and all kinds of more complicated stuff. I don't know if any of that is as complicated in the Concierge case.
C
Groups — because you wouldn't want to — you'd maybe want to lock that down, but that would be fine. And then we could do some CLI stuff to make it a nice UX for a demo kind of case. On the Supervisor — yeah, on the Supervisor, I think we would want to do something a little bit more proper, maybe having an extra server.
E
You can — it manages a set of Secrets, where each Secret identifies the user's username, password, and list of groups, and it has a controller watching those Secrets, so you can dynamically create, edit, and delete users. And then it conforms to the webhook-style authenticator, so it can be set as an upstream in the Supervisor as a webhook authenticator, or it could be used in the Concierge, right?
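For reference, the Secret-per-user model being described looks roughly like this, based on the local-user-authenticator README of that era (check the repo for the authoritative format):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pinny-the-seal              # the Secret's name is the username
  namespace: local-user-authenticator
stringData:
  passwordHash: "$2y$10$..."        # bcrypt hash of the user's password
  groups: "group1,group2"           # comma-separated group names
```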
C
Right, yeah — we could do a webhook. The webhook could be backed by the same kind of APIs on the front end that we're talking about for doing username/password auth to LDAP, basically. So it wouldn't be passing through the OIDC machinery, really — it wouldn't open a browser or anything. It would just accept your token on some API and validate it through the webhook.
C
I think, when I'm looking at local-user-authenticator, it just seems like a lot of code that would all go away and be replaced with not very much code in the Concierge or the Supervisor to do the same thing, because we have a lot of machinery that's just kind of duplicated in both places.
F
So, as a counter-example to the UAA concern that Ryan had: when I was at OpenShift, we had an htpasswd IDP, which would literally take one blob — an htpasswd file — and it would treat each line as a user, right? So: a user with their password hash, in standard htpasswd format. That was very commonly used, and deployed as a backup static IDP that did not have to go anywhere, and it was commonly used in demo environments.
F
One difference that OpenShift had from Dex is that, because groups were always local to OpenShift, you could configure those as well. I don't know if people did; I just know that was available. But yeah — that is the upper bound of what I would want to support. I don't think I would want an htpasswd file as input; it was kind of annoying.
C
We could also consider some local authentication method that is not using static passwords or tokens, and is actually using some sort of key-based auth. This could be our first experiment with U2F-based login, or something like that, too. That would be interesting, I think, and useful especially for the backup-IDP case — if I could say, like, oh, as a backup, I can always log into this cluster
with this U2F token if my IDP is down. And then, for a business: you can have some backup tokens that maybe aren't even assigned to an individual user and are in a safe, right, as a backup way to log into a cluster.
C
I don't know. Essentially, I think the flow that we're replacing there is the flow that people do today in Kubernetes with admin certificates. But those are inconvenient, because they expire, and because the machinery that generates them is linked to the rest of the Kubernetes CA — so you tie yourself into a lot of the dependency of how the lifecycle of the Kubernetes CA is managed.
C
Okay — I don't think we're at consensus here, and we are short on time.
C
I think the rest of the items here we could get through pretty quick. Are folks okay moving on? Okay — I'm happy to talk more about this another time.
A
Could we start with Pablo's, and then — yep — go to the one just above it, if we have time?
H
I can go very quickly here, just for some context. There was a document this team worked on internally, which is kind of a brainstorm about, essentially, opportunity areas and the outcomes that this team was thinking about driving with Pinniped as a project.
H
I was thinking, as I think through the roadmap bits — I was wondering if the team would be comfortable actually posting that document, almost as a lightweight way, or kind of an experiment, to perhaps validate some of those areas and engage interest from the broader community about what the open source community would be interested in seeing happen with the project, as some sort of proxy measurement for also helping figure out how we might approach growing actual open source contributors — like converting stargazers into contributors.
H
So I thought that might be kind of an interesting way. It's hard for us, I think, in some ways, to be gauging that interest right now, because the community is small, so I'm just trying to think of different ways we might do that. Because I know, from an internal standpoint — from, like, a VMware standpoint, not to go too deep on that — merging these two different kinds of roadmaps is always kind of tricky, balancing this game of the backer and the open source community.
H
So I wanted to see how the team feels about publishing that.
D
I think that sounds good. I'd maybe be interested in going in there and cleaning it up a bit — agreed, yep — but I think that sounds reasonable to me.
C
Okay, we are out of time, so I'm going to just briefly mention this last item to get a temperature check. Our container image that we ship right now has three binaries in it: the Concierge binary, the Supervisor binary, and the local-user-authenticator binary. They're each about 50 megs, and most of that 50 megs is just Kubernetes runtime, client-go machinery, and stuff.
F
The only thing I would ask is: if we do do that, can we please also better organize our code into "these bits belong to the Supervisor, these bits are shared, these bits belong to the Concierge"? Because right now it's a flat tree of stuff, and it bothers me, but I haven't gone in and started killing anything, mostly because I know I would cause a giant bunch of pain for everyone working on something. Yeah.