From YouTube: Easy, Secure Kubernetes Authentication With Pinniped
A
Okay, thank you for joining us. Welcome to today's CNCF live webinar, Easy, Secure Kubernetes Authentication with Pinniped. I'm Libby Schultz and I'll be moderating your webinar today. I'd like to introduce our speakers, Matt Moyer and Margo Crawford, both software engineers at VMware. A few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, but there is a chat box on the right side of your screen for you to speak up and ask questions. Feel free to drop them there and we'll get to as many as we can.
B
Thanks, Libby. I'm going to share a pre-recorded version of part of our presentation here, and then we're going to have plenty of time for questions afterwards. One of the benefits while this is playing is that Margo and I, while we're presenting in the video, will also be answering questions in chat. So feel free to drop questions as we go; we'll probably answer some inline and save some for the end to answer in person.
B
Good morning, good afternoon, good evening. My name is Matt Moyer, and I'm here to talk to you today about Pinniped. I'm here with Margo Crawford. We are engineers at VMware on the Pinniped team, and today we're going to talk about the problem with Kubernetes authentication as it stands, what we built in Pinniped, how it works, how you can use it to enable smooth authentication on your Kubernetes clusters, and then go into some time for questions. There are really two gaps in the Kubernetes authentication user experience that we set out to solve with Pinniped.
B
The first is that even though Kubernetes auth is very extensible and there are lots of options, they're mostly all configured with CLI flags on the Kubernetes API server. This means that at best you'll have to restart the API server anytime you want to change one of them, and at worst it means you won't even have access to those flags, because they're managed by your cloud provider.
B
The other gap is that even though Kubernetes has all these options, it doesn't come with any opinionated login flow. It doesn't come with an end-to-end way to take an external identity provider and give users a way to log into a cluster; it just gives you the tools to build one of those yourself. So Pinniped takes the options that already exist in Kubernetes and extends them into a dynamically reconfigurable, end-to-end, out-of-the-box login flow. It's the batteries included for your Kubernetes auth experience.
B
So what is Pinniped? Pinniped is an open source project we've been building for a bit over a year now. It enables dynamic configuration of Kubernetes authentication. This means you can install it onto any existing running Kubernetes cluster, then reconfigure it to add or remove different authenticators at runtime, and it provides a better login user experience for kubectl.
B
You have a kubeconfig that doesn't have any hard-coded secrets, you can easily connect with OpenID Connect and LDAP identity providers, and you can have an experience that spans multiple clusters. You can log in once in the morning to, let's say, your OIDC identity provider, and then for the rest of that day you're logged in transparently to all your Kubernetes clusters. Even if you have 10, 100, or a thousand, everything just works throughout your day.
B
So if you have a declarative GitOps deployment pipeline, you can use that to manage Pinniped as well. When we built Pinniped, we envisioned a common deployment scenario, something like this. On the left, you have an admin user who installs and operates the Kubernetes clusters they have access to, using what we call an admin kubeconfig that they got when they created each cluster. This is typically a super-powerful kubeconfig with some hard-coded secret key, and it does not identify any individual user.
B
Usually it encodes the system:masters group, so it even bypasses all RBAC on the cluster. It's dangerous to keep this around, but it's usually necessary to bootstrap the rest of the system and as a failsafe. On the right, you have regular users who just want to log into the cluster and deploy applications. These might be developers, for example. In a typical organization, we expect that even the admin users don't use their admin-level access on a daily basis, just when they need to perform low-level operations on the cluster infrastructure.
B
So in Pinniped, the admin user is responsible for installing and configuring Pinniped. They can set up logins via their enterprise identity provider, such as Active Directory or something like Okta, and then they can generate a kubeconfig for regular users to use. These Pinniped-based kubeconfigs are somewhat special: they don't contain any credentials for accessing the cluster, and they're not user-specific.
B
Instead, they just describe how to connect to the cluster. All users can download and use the exact same config file, but when they log in they'll have their correct individual username and groups from the external identity provider. As an admin, this means it's easy to manage access to the cluster via Kubernetes RBAC role bindings.
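A Pinniped-generated kubeconfig typically wires kubectl to the Pinniped CLI through the standard Kubernetes exec credential plugin mechanism. As a rough sketch (the user name, issuer URL, and flags here are illustrative placeholders, not taken from the demo), the user entry looks something like:

```yaml
# Illustrative user entry from a Pinniped-generated kubeconfig.
# Note there are no embedded secrets; kubectl invokes the pinniped
# CLI at login time via the exec credential plugin protocol.
users:
- name: pinniped-user            # placeholder name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: pinniped
      args:
      - login
      - oidc
      - --issuer=https://demo.pinniped.dev   # hypothetical issuer URL
```

Because the entry only describes how to log in, the same file can be handed to every user, which is why RBAC role bindings on usernames and groups are all the admin needs to manage.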
C
Pinniped has a few different components that can be deployed independently of each other. First up, we have the Concierge. It can be deployed on any cluster. It takes an OIDC token and translates it into something the Kubernetes cluster can process, in one of two ways depending on your cluster architecture. One is creating an X.509 certificate that is signed by, and therefore trusted by, the cluster. The other is forwarding requests via an impersonation proxy on behalf of the user.
C
Next up we have the Supervisor, which is typically deployed once on a very trusted cluster. It's an OIDC server that allows users to authenticate with an external OIDC or LDAP provider (and possibly other identity providers in the future), and it issues its own tokens, based on user information from the IDP, that can be used by the Kubernetes clusters.
C
The Pinniped CLI is used to generate the kubeconfig that users can use, and behind the scenes it also works to make the login experience seamless when users run kubectl commands.
C
We do pass a token with user information to each cluster, but we make the tokens unique from each other by changing the audience via an RFC 8693 token exchange. This happens behind the scenes, without the user having to log into each Kubernetes cluster independently, so users can still log in once per day to access all of their clusters while staying secure.
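RFC 8693 defines token exchange as a standard OAuth 2.0 grant type. As a hedged sketch of the idea (all concrete values are made up, and this is not Pinniped's actual client code), the request the CLI sends to the Supervisor's token endpoint carries parameters along these lines:

```python
# Sketch of the form parameters in an RFC 8693 token exchange,
# swapping one token for another with a different (per-cluster)
# audience. All concrete values here are illustrative placeholders.
from urllib.parse import urlencode

def token_exchange_params(subject_token: str, cluster_audience: str) -> dict:
    """Build RFC 8693 parameters that trade a Supervisor-issued token
    for a cluster-scoped one by changing the audience claim."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "audience": cluster_audience,  # unique per target cluster
    }

params = token_exchange_params("original-token", "cluster-1-audience")
body = urlencode(params)  # form-encoded POST body for the token endpoint
```

Because each cluster gets a token bound to its own audience, a token stolen from one cluster cannot be replayed against another.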
C
Now we'll take a look at this architecture diagram for a Pinniped deployment, in this case one where the Kubernetes control plane is accessible. This is usually the case for self-hosted clusters. On the user's first kubectl command, the Pinniped login is triggered via the Kubernetes exec plugin, and the CLI requests a federated login via the Supervisor.
C
The CLI turns around and requests a new, second token from the Supervisor with the same information but a cluster-specific audience, which the Supervisor mints and passes back. Next, the CLI will create a credential request to the Pinniped Concierge's aggregated API using the token it just received. The Concierge uses the cluster's signing key from the Kubernetes control plane to create a short-lived certificate for the cluster. Subsequent Kubernetes API requests will use the short-lived certificate, refreshing it as needed.
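That credential request is itself a Kubernetes-style API object. A sketch of what the CLI submits to the Concierge's aggregated API might look roughly like this (normally created by the CLI rather than by hand; the token and authenticator name are placeholders, and exact fields may differ by version):

```yaml
# Sketch of a TokenCredentialRequest sent to the Concierge's
# aggregated API; the Concierge validates the token with the named
# authenticator and answers with a short-lived client certificate.
apiVersion: login.concierge.pinniped.dev/v1alpha1
kind: TokenCredentialRequest
spec:
  token: "<cluster-scoped token from the Supervisor>"  # placeholder
  authenticator:
    apiGroup: authentication.concierge.pinniped.dev
    kind: JWTAuthenticator
    name: demo-authenticator    # placeholder authenticator name
```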
C
The first steps are the same as before: the initial login happens to the Supervisor, which mints cluster-specific tokens after login to the external IDP. The Pinniped CLI will still create a credential request to the Pinniped Concierge's aggregated API. However, instead of minting certs using the cluster's signing key, the Concierge will issue certs using its own keys, which are not automatically trusted by the cluster. Requests will include this certificate and be passed through the Pinniped Concierge impersonation proxy, which uses the cert to construct impersonation headers.
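Kubernetes impersonation works through well-known request headers (`Impersonate-User`, and `Impersonate-Group` repeated once per group). As a hedged sketch of the idea, not the Concierge's actual implementation, with illustrative identity values:

```python
# Sketch: building Kubernetes impersonation headers the way a proxy
# such as the Concierge's might, from an identity it read off a
# client certificate. Username and groups below are illustrative.

def impersonation_headers(username: str, groups: list[str]) -> list[tuple[str, str]]:
    """Return standard Kubernetes impersonation headers for a user."""
    headers = [("Impersonate-User", username)]
    # Impersonate-Group may appear multiple times, once per group.
    headers += [("Impersonate-Group", g) for g in groups]
    return headers

hdrs = impersonation_headers("margo@example.com", ["developers", "admins"])
```

The proxy then forwards the request to the API server using its own highly privileged credentials plus these headers, so RBAC decisions are made against the real user, not the proxy.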
B
Now that the Concierge is installed on our cluster, we can configure it to use an OpenID Connect provider for authentication. In this demo we've chosen GitLab, because it's free and easy for anyone to get started with, but you could also use Okta, Ping Identity, Azure AD, ADFS, or any other OIDC provider.
B
To start, we'll need to go into GitLab and register a new OIDC client, which GitLab calls an application. We'll give our application a name, unmark the confidential box (because this is a public client), set our redirect URI to match the required settings for the Pinniped CLI, and ask for the openid and email scopes. Once that's created, we'll have a client ID which we can copy.
B
But we forgot to add our RBAC permissions. Let's take a look at what username we are actually authenticating with. We can do that with another Pinniped subcommand, whoami, which tells you everything about your current identity. Let's use that with our admin kubeconfig first, where we can see that our username is kubernetes-admin.
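The invocation is roughly like this (the kubeconfig path is a placeholder for your own file); it reports the username and groups the cluster resolves for whatever credentials that kubeconfig carries:

```shell
# Ask the cluster who we are under a given kubeconfig.
# The path below is a placeholder, not from the demo.
pinniped whoami --kubeconfig ./admin.kubeconfig
```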
B
If we take a close look at the credential issuer on this new GKE cluster, we can see that the Concierge is operating in a different mode on this cluster and isn't quite healthy yet. That's because Pinniped only supports this type of cluster via a special impersonation proxy mode, which takes a moment to initialize. In order to safely use GitLab on this second cluster, I need to create a second OIDC client in GitLab.
B
We set up the second cluster completely independently from the original kind cluster. If I clear my local session cache, we can see that when I run kubectl against the kind cluster, I initially need to log in with my browser, and if I run that same command against the GKE cluster, I have to do the browser login a second time. You can imagine that if I had 10 or 100 clusters, all this client setup and all these browser logins might become arduous, which is why we also made the Pinniped Supervisor.
B
Now that it's installed, we can see some new pods running in the pinniped-supervisor namespace, and we can see that there are more new Kubernetes APIs for configuring the Supervisor. I've done a bit of setup ahead of time and registered a DNS name, demo.pinniped.dev, pointing at a static IP address on this cluster.
B
I've also pre-provisioned a TLS certificate from Let's Encrypt, which we'll use to configure secure ingress. I have a load balancer service that routes inbound HTTPS traffic on our static IP address to the Supervisor pod endpoints. We'll apply that service object to the cluster and wait for it to be ready.
B
Now we should expect that something is listening on that port, but we still get a strange TLS error. To configure the Supervisor to listen on our new host, we use the FederationDomain custom resource. This tells the Supervisor to act as an OIDC issuer at the given URL. We set demo.pinniped.dev and reference a secret called demo-tls that will contain the TLS certificate and private key. We'll create that secret using the Let's Encrypt certificate I provisioned earlier. Now we can see that we get an HTTP not-found error when we curl.
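Such a FederationDomain might look roughly like this, a sketch assembled from the values described in the demo (the object name and namespace are assumptions, and exact fields may differ by Pinniped version):

```yaml
# Sketch of a FederationDomain making the Supervisor serve OIDC
# at https://demo.pinniped.dev, using TLS material from the
# demo-tls secret referenced below.
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
  name: demo-federation-domain   # placeholder name
  namespace: pinniped-supervisor
spec:
  issuer: https://demo.pinniped.dev
  tls:
    secretName: demo-tls
```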
B
Next, we're going to show how logins work if you don't have a local web browser. This can happen if you're trying to use kubectl from a remote Linux machine, such as an SSH jump host. Here I'm connected to a Linux host running a bare-bones Debian 11 install. I can download the Pinniped CLI and the Kubernetes CLI using curl and install them into the system path.
B
Instead, we'll add an LDAPIdentityProvider custom resource. This resource describes how to connect and authenticate to the directory, how to search for users and groups, and how to map their LDAP attributes into Kubernetes user and group names. I'll apply that object, and we'll create the secret with our LDAP bind credentials. Just like other Pinniped APIs, I can check the status of the new object to see that it's connected and ready. I'll run pinniped get kubeconfig again to get an LDAP-based kubeconfig.
B
Pinniped is a community project. If you're interested in getting involved, either as a user, a contributor, or a future maintainer, please reach out. We hang out in the #pinniped channel in Kubernetes Slack, we hold a public community call twice a month, and we're on Twitter at @projectpinniped. We'd love to hear your use cases, your bug reports, feature requests, or any ideas you have for the project, and of course you're always welcome to file a GitHub issue or start a discussion.
B
Next, I want to show you some of the work we have planned for the project. Most of these are in the early stages of planning, so if you have specific ideas about how you think they should work, or particular features that are important to you, let us know. This is our roadmap, which you can also find in GitHub. The first two items are well in progress and should land by the end of this month.
B
The first is support for password-based logins to compatible OIDC identity providers. This is basically pass-through support for the resource owner password credentials grant, and it lets you, for example, use OIDC-based service accounts for service-to-service authentication, such as from a CI/CD system, if your IDP supports that flow. The next is specific support for Microsoft Active Directory. Active Directory already works with our generic LDAP support, but we've taken a shot at really streamlining the experience, because AD has a lot of consistent defaults.
B
We can give our APIs much better defaults, and we can handle some of the AD-specific edge cases better than in the generic LDAP case. The next item on our list is support for multiple IDPs in the Supervisor. Currently, you're only allowed to have exactly one identity provider configured, which is somewhat limiting.
B
This is a bit of a table-stakes feature, but I'm really happy with how we've designed it to fit into our APIs. Next up, we've got wider Concierge cluster support. Our goal has always been to support any Kubernetes cluster, but today we fall a bit short of that. We're planning to write some new Concierge backend strategies so that we can support OpenShift clusters, and we're planning to add a new strategy that uses the short-lived certificate support that was just added in Kubernetes 1.22, which should be a nice, portable option for modern clusters moving forward.
B
Next up, we have what we've been calling identity transforms, which is probably the feature on the list that I'm most excited about. Currently, when you connect to an OIDC or LDAP identity provider, we basically just let you choose which attribute from the IDP maps to the Kubernetes username and which attribute maps to the group names. We'll still support that mode, but we're planning to give you a ton of new customization options by embedding a small scripting engine called Starlark into the Supervisor.
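Starlark uses a Python-like syntax, so a transform might plausibly look something like the function below. This is purely illustrative: the function name, signature, and hook mechanism are assumptions, not Pinniped's actual API.

```python
# Illustrative identity transform of the kind a scripting hook could
# run: prefix the username and keep only certain groups. The function
# name and signature here are hypothetical, not Pinniped's API.

def transform_identity(username: str, groups: list[str]) -> tuple[str, list[str]]:
    """Prefix the username and drop groups outside the eng- namespace."""
    new_username = "oidc:" + username
    new_groups = [g for g in groups if g.startswith("eng-")]
    return new_username, new_groups

name, groups = transform_identity("margo", ["eng-platform", "sales"])
```

Prefixing usernames like this is a common trick to keep identities from different IDPs from colliding in RBAC bindings.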
B
Next, we have extended IDP support. We already support any OIDC or LDAP provider, but there are a couple of popular providers that either don't work because they're not OIDC, or don't work perfectly because they use non-standard features of OIDC. Two examples here are GitHub and Google. GitHub is not an OIDC provider, but it has a similar OAuth-based authentication protocol, and Google works today, but they have a custom groups API and some tricky edge cases related to their hosted domain claim.
B
As you can see, there are a bunch more items after that. We try to stay flexible and agile in how we prioritize features, so once again, if you have thoughts about anything you see here, or you think we're missing something that would make Pinniped a perfect fit for your use case, let us know. Thank you all so much for attending. I also want to thank my teammate Margo for co-presenting, and thanks to the rest of the team for helping build this awesome tool.
B
Hello, everyone. It's me in person again, and Margo as well. I will answer questions from the chat. Please keep dropping them, and we'll give some time to think of questions. I answered one question about Keycloak already in the chat.
B
We don't test Keycloak, but it is also an OIDC provider and, as far as I know, it follows the spec pretty well, so I think it should work just fine. If anybody's interested in getting more official support for that, we'd be happy to try and work on getting it into our test grid.
B
The sticking points with more adventurous IDPs, as you get to things we haven't tested, tend not to be the basic login flow, which usually works just fine. You may instead run into problems getting groups to flow in correctly, or hit some of the edge cases with groups, like what happens if somebody's in 10,000 groups. How do you handle that case?
B
The next question in the chat is about comparing what we built to Dex, and there is certainly some overlap; we're both interacting with the same technology space here, OIDC.
B
Dex has a little bit larger and a little bit different goal. Dex wants to be a generic OIDC gateway that connects together all kinds of different identity protocols, including upstream OIDC but also upstream things like SAML and LDAP, and then downstream, on the client side, it wants to expose that as a generic OIDC platform. That's really cool; we love Dex, we use Dex. Where Dex is different from Pinniped is that we have focused more on the Kubernetes integration side.
B
A couple of aspects of that: one is that all of our configuration APIs and our installation process are meant to be driven via the Kubernetes API. So if you have, again, a GitOps pipeline or some sort of declarative system that you're using to manage your Kubernetes configuration, your Pinniped config is just another set of YAML Kubernetes objects.
B
We also focused on the client side of the experience: the CLI, the login flow that you have locally, and the integration with kubectl. With Dex itself you don't get that; there are some tools that surround Dex, and you can build a workflow that's similar to what we've built. One of those tools is Gangway.
B
But again, we focused on trying to provide an out-of-the-box experience that just works, and it is somewhat opinionated; it doesn't do everything you might want it to do in every possible scenario. I think Dex is probably a more generic tool that you can build all kinds of interesting things with, while Pinniped is really focused on that Kubernetes integration.
B
First question: if the OIDC provider is running in the same cluster, can we have flexibility in the CRD to support non-HTTPS URLs? That's a good question. This is one of those things that we've taken a somewhat hard line on right now: we only support secure configurations, so that means HTTPS everywhere. For LDAP, it means we don't support any insecure LDAP connection formats.
B
If you have a compelling use case, definitely file an issue or stop by in Slack and we can chat about it. In a local cluster scenario like that, it might be totally reasonable to assume that the IP network is secure and you don't need TLS. We just wanted to make sure that all the code we ship has safe defaults and is hard to misuse.
B
Next question: would you use it in production at this stage? Yes. We ship Pinniped as an underlying component in several commercial VMware products, so it's not the star of the show; you probably would never notice that it's there, but it's behind the scenes powering features in products like Tanzu Kubernetes Grid and Tanzu Mission Control.
B
Various pieces of it are there and working, so we trust it enough to rely on, and I think it's ready for production use. We're also very serious about the quality of the software: we have very good unit test coverage, an excellent integration test suite, and, like I said before, a large test grid of Kubernetes versions and different IDPs running through all of our tests on every commit.
B
Okay, I think I got all the questions there. Anwar mentions, for the non-HTTPS use case, that there's a service mesh providing encryption. That makes sense.
B
That seems like a good natural stopping point. Margo and I will be available in Slack immediately following this, and also in the future. So thanks, everyone.
A
Awesome, thank you both so much for your presentation today. Thank you, everyone, for joining us. We will see you all on Slack, and this recording will be up later today. So thanks so much, everybody; we'll see you next time.