From YouTube: Application Networking Day Session #10: An introduction to SPIRE, SPIFFE & how they’re used in Istio
Description
Featuring Shane O'Donnell. An introduction to SPIFFE and SPIRE, how they relate to each other, and how they work. We'll also take a look at the new SPIRE integration within Istio, and how it differs from the default Istio SPIFFE implementation.
So this talk is going to be an introduction to SPIRE, SPIFFE, and how they're used in Istio, and we're going to look at a few different deployment architectures. It's mostly going to be introductory content, so we don't assume you know anything about SPIRE or SPIFFE or Istio.
We're going to be talking about SPIRE, SPIFFE, and Istio: how they interact with each other, how they're different from each other, and some of the benefits of using one or the other. A real quick introduction: my name is Shane O'Donnell. I'm currently a tech lead on the Gloo Mesh team here at Solo.io. I've got about 10 years of experience, and the last two and a half of those have been here at Solo.io.
So, the agenda: we're roughly going to walk through what SPIRE is, and that necessarily means talking about what SPIFFE is and how it relates to SPIRE. We're going to dig a little bit into how they work, walk through how the various components are bootstrapped and how we establish identity for a specific workload, and then we're going to talk more specifically about how all of that works with Istio.
So first, what is SPIRE? The first thing you're going to find if you start googling "what is SPIRE" is the word SPIFFE; they're pretty much intertwined. SPIFFE is essentially the open source standard you can use for securely identifying all of your software systems, while SPIRE is a production-ready open source implementation of that standard. So SPIFFE is the framework and the standards; SPIRE is the concrete implementation.
So, with that, let's talk a bit about what SPIFFE is, what its benefits are, and why we want to use it. First of all, it's an acronym: it stands for Secure Production Identity Framework For Everyone. It's primarily concerned with bootstrapping and issuing identity to services across heterogeneous environments and organizational boundaries, and the heterogeneous environments bit is really important here.
A
It's
also
very
specifically
designed
for
dynamic
environments,
so
you
know
these
days,
especially
with
all
the
kind
of
scale
that
larger
companies
are
running
with
with
large
workloads
minute
by
minute.
The
amount
of
workloads
that
have
to
kind
of
associate
with
any
given
identity
can
scale
up
and
scale
down.
So
we
need
a
system,
that's
able
to
react
to
that
in
real
time
and
then
once
you've
implemented
something
that
you
know
uses
the
spiffy
standard.
All
your
workloads
should
be
able
to
easily
and
reliably
and
securely
mutually
authenticate
with
each
other.
So you get that zero-trust promise in a really concrete way. And from a user perspective, why would I want to use this? As developers, we're starting to see ourselves getting pulled further up the stack, more into the ops side of things. It's not just writing the applications anymore; it's understanding how they work, what other applications they talk to, and what the platforms they run on look like. And the same applies from the other side, from the ops team's perspective.
So, given all that, how does this work? First I'm going to walk through a couple of different primitives that I'll be using for the rest of the talk. The first one is a workload. A workload is essentially any service or running process which needs to have a unique, specific identity, and one of its attributes is that it can have custom granularity.
It can be a single process running on a node, but it's also elastic, so it could be multiple processes running across multiple nodes; think of a web server that scales as your traffic spikes and wanes. And, interestingly, a workload can have multiple IP addresses, because it's not guaranteed to run on any given node. Again, it's platform agnostic: we're not assuming you're running on Kubernetes here, and we're not assuming you're running in any particular cloud. This could just be running on a VM on-prem.
The spec just says that whatever you do as an implementer, you need to make sure that's true. That brings us to three core concepts that SPIFFE walks through, and these are the things you really need to implement in order to have the SPIFFE API. The first one is the SPIFFE ID. This is essentially just the string that's going to identify a workload specifically and uniquely. I don't know if you can see this on the slide, but there's an example down here on the bottom left.
This is just an example of one. Even if you've never touched SPIFFE or SPIRE, if you've used Istio and some of its authentication policies, you might have seen this as a client ID. All it really is is a trust domain and then a unique identifier for a workload. In Istio's case, which we're going to get into in more detail in a second, that identifier is always namespace and service account, but that's an Istio implementation detail, not SPIFFE.
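As a rough sketch of that structure: a SPIFFE ID is just a URI with a spiffe:// scheme, a trust domain, and a workload path. A minimal parser, for illustration only (real implementations also enforce the spec's character and length rules):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into (trust domain, workload path)."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id}")
    return parsed.netloc, parsed.path

# An Istio-style ID: trust domain, then namespace and service account
td, path = parse_spiffe_id("spiffe://cluster.local/ns/default/sa/bookinfo-productpage")
```

Here `td` comes back as `cluster.local` and `path` as `/ns/default/sa/bookinfo-productpage`, matching the trust-domain-plus-workload-identifier split described above.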
In SPIFFE itself, that workload ID can be a bit more flexible. The next piece that's really important is the SPIFFE Verifiable Identity Document, or SVID, and you can think of this one more like a passport. Like a passport, it has a couple of properties that are really important: you need to be able to verify its authenticity, meaning it's a real document and the holder got it from where it says it comes from, and you also need to verify that the holder is who they say they are.
Those are two really important properties, and in the context of SVIDs they're guaranteed by cryptographically verifiable documents. Currently, the SPIFFE standard allows two different kinds: JWT tokens or X.509 certs. For most of this talk we're going to concentrate on the X.509 certs, just because they're the more commonly used option and they aren't as vulnerable to things like token replay attacks. And then finally, there's the Workload API itself.
This is just the API that a given workload calls in order to get all of the information it needs to participate in the SPIFFE system. Interestingly, the Workload API is usually exposed locally, commonly over a Unix socket, and you don't need any kind of authentication to call it. The workload doesn't need to know anything about itself; it just says, "hey, tell me who I am." What it gets back is, first, a SPIFFE ID, so it knows who it is.
It's also going to get an SVID, which again is the cryptographically verifiable document it can present to other workloads to prove it is who it says it is. And then it's going to get the trust bundle, which it can use to verify other callers when they present their own SVIDs; it uses that to mutually authenticate in the other direction.
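To make that two-way check concrete, here is a toy model of what each workload holds after calling the Workload API. These are not the real Workload API types; the `issuer` string stands in for verifying a full X.509 chain against the trust bundle:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Svid:
    spiffe_id: str  # the identity this document asserts
    issuer: str     # CA that signed it; stands in for a real cert chain

@dataclass
class Workload:
    svid: Svid              # presented to peers to prove identity
    trust_bundle: frozenset  # CAs this workload trusts

def authenticate(verifier: Workload, presented: Svid) -> bool:
    # a peer is trusted only if its SVID chains to a CA in our bundle
    return presented.issuer in verifier.trust_bundle

def mutually_authenticate(a: Workload, b: Workload) -> bool:
    # both directions must pass: that is the mutual-TLS property
    return authenticate(a, b.svid) and authenticate(b, a.svid)
```

With a shared trust bundle, two workloads pass both checks; a workload presenting an SVID signed by an unknown CA fails, which is the property the trust bundle exists to enforce.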
A
When
we're
talking
about
what
this
actually
looks
like
deployed
in
an
environment
you're
going
to
have
these
workloads,
which
are
your
actual
services,
that
you
need
identities
for
you're,
going
to
have
the
Spire
agent
and
you're
going
to
need
one
of
those
on
every
single
node
that
you're
running
on
what
a
node
means
is
going
to
depend
on
what
your
deployment
architecture
looks
like.
If we're just thinking of Kubernetes, it's literally just a Kubernetes node. And then the SPIRE server is the central authority that manages all of the identities and handles the registry of all of the workloads that you're allowed to issue identities for. So we're going to talk about exactly how this lifecycle works. Apologies, I did have a few animations on this slide, but I converted it to PDF at the last second.
So it's all coming at you really fast at once, but essentially, when we're bootstrapping this environment, we're going to start by booting up the SPIFFE, sorry, the SPIRE server. Then, assuming you haven't given it a root CA, it's going to generate a self-signed certificate.
Then, if it's the first time this particular server is starting up, it's going to generate a trust bundle. At this point the server is basically bootstrapped, so it's going to start the registration API, which can be called in order to register new workloads; we'll get into that in a second. At that point the SPIRE server is pretty much ready to start receiving traffic.
We can just talk through this; there'll be a lot of pointing, but essentially you've got the SPIRE agent here on the left in blue, the SPIRE server on the right, and then some kind of platform-specific validation mechanism. This is basically just a third-party API that the SPIFFE spec implicitly trusts in order to verify identities. If it helps to think of this in more concrete terms, an example of this would be an AWS API.
So the first thing that's going to happen is that, when it boots up, your SPIRE agent is going to call that API and basically say, "tell me everything about the node I'm running on; give me verifiable documents so that I can prove who I am." It sends those from the agent to the server; the server verifies them independently, then calls AWS or whatever your platform-specific validation mechanism is and double-checks that everything looks good. Assuming everything checks out, the SPIRE server then issues the agent its identity in the form of an SVID. So your agent is going to have an X.509 cert, it can prove it is who it says it is, and it's pretty much ready to go on to the next step.
Next, the agent actually starts bootstrapping itself. It's got its own identity and it's ready to go. The first thing it's going to do is call the SPIRE server and ask for a list of registration entries that the node this agent is running on is authorized to issue workload identities for. That connection is established over mutual TLS, so both sides authenticate each other.
Next, assuming the handshake went well, the SPIRE server is going to go to that registry, find all of the authorized registration entries from its data source that this agent is authorized to issue, and send them back to the agent as a list of registration entries. Then, for each entry, the agent sends a CSR, a certificate signing request, to the server; the server just stamps each of them and sends them back. At that point the agent has a list of workload SVIDs, and these are basically the identity documents that each individual workload can use to guarantee that it is who it says it is. But of course, at this point they're all just sitting in the agent; we haven't actually mentioned workloads at all.
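The per-entry CSR loop above can be sketched as a few lines of Python. The CSR string and the signer function here are stand-ins for real PKCS#10 requests and the SPIRE server's CA, purely to show the shape of the exchange:

```python
def issue_workload_svids(entry_ids, sign):
    """For each registration entry the agent is authorized to serve,
    build a CSR, have the server sign it, and keep the resulting SVID."""
    svids = {}
    for spiffe_id in entry_ids:
        csr = f"CSR({spiffe_id})"     # stand-in for a real signing request
        svids[spiffe_id] = sign(csr)  # the server "stamps" each CSR
    return svids

# toy signer standing in for the SPIRE server's CA
server_sign = lambda csr: f"signed:{csr}"
cache = issue_workload_svids(["spiffe://example.org/web"], server_sign)
```

The result is exactly the agent-side cache described above: a map from SPIFFE ID to signed SVID, held until workloads come asking for their identities.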
So the final step in the agent is that it starts listening on that Workload API socket, waiting for workloads to start calling the Workload API and say, "hey, give me an identity, who am I?"
That process is very similar to what we just went through, but from the workload to the agent instead of from the agent to the server. The workload calls the Workload API, which again is probably going to be local, over a Unix socket, running on the same node.
The agent is going to initiate workload attestation by calling the workload attestors, which are pluggable, which we're going to talk about in a sec as well. Those attestors use various kernel and user-space details to discover more and more information about that workload, and then they return all of the information they found to the agent in the form of workload selectors.
So then the agent compares the workload selectors it found against the registration entries it was previously authorized to issue, and assuming there's a match, it finds the SVID that matches all of those workload selectors and sends it back to the workload. At that point, pretty much the whole system is set up.
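The matching rule itself is simple set containment: an entry matches when every selector it requires was discovered for the workload. A sketch, with made-up selector strings in the style SPIRE's Kubernetes attestor produces:

```python
def find_matching_entry(discovered, entries):
    """entries maps SPIFFE ID -> the set of selectors that entry requires.
    Return the first ID whose required selectors were all discovered."""
    for spiffe_id, required in entries.items():
        if required <= discovered:  # subset test: every selector present
            return spiffe_id
    return None

# one registration entry keyed by namespace + service account selectors
entries = {
    "spiffe://example.org/frontend": {"k8s:ns:default", "k8s:sa:frontend"},
}
```

A workload attested with extra selectors still matches, since only the entry's required selectors must be present; a workload missing one of them gets nothing back.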
It's bootstrapped and ready to go; all of the workloads have everything they need in order to authenticate other workloads they're calling, as well as workloads that are calling them. I know I went through that a little bit quickly, but that's, at a high level, how the whole thing works start to finish.
Now, why do we care about this with regard to Istio specifically? Let's take a step back when we're talking about Istio and dig specifically into what a workload is in this instance.
So I know I said workloads are platform agnostic, but I think it helps to consider a concrete instance of one. Generally in Kubernetes, when we're thinking of workloads, the primitives we think of are usually pods, but we actually have to drill down a little bit. And again, I think my animation is probably broken here, but inside of a pod you're going to have the service you're actually running, the application your developers are developing, but you're also going to have the Istio sidecar running alongside it as a container in the same pod.
And then if you zoom in even further on that sidecar, you're going to find the Envoy proxy and the Istio agent, and that's important for the next step. We're drilling all the way down to that Envoy proxy and Istio agent to think about how this identity bootstrapping flow works. So this is how it works in Istio, and this is before we even touch SPIRE; this is just the default, out-of-the-box behavior.
This is Istio's own SPIFFE identification. First of all, the Istio agent is going to start up what's called an SDS server, or Secret Discovery Service, and this is basically an API that Envoy knows how to talk to over gRPC in order to request all the certificates it needs to do mTLS with all of your other workloads.
As soon as the Istio agent starts up, it sends a CSR, a certificate signing request, to istiod, which signs it using the certificate authority and sends it back, and then it's pretty much good to go with identity from there forward. So you might be asking yourself: if Istio already has this out of the box, why would I use SPIRE? I already have identity, I already get all the benefits of mTLS, so why do I need SPIRE? Well, there's a bunch of benefits.
These are just the top three that I think are important, but there are a lot more than this. I mentioned pluggable attestation earlier: if you think back to the agent and server diagrams we had, those attestors follow a plug-in pattern, so you can write your own, and various vendors can publish them.
You can make those part of the identity, and multi-factor attestation is an extension of that: there's no limit to the number of attestors you can have for a specific identity, and the more you have, assuming they're all securely verifiable, the more secure your identity is going to be. And then finally there's federation. You can get really advanced with SPIRE here: you can do multiple servers, multiple trust domains, tiered identity servers. I'm not going to get too much into that.
But it's a lot more flexible than Istio's out-of-the-box behavior when you start getting into the more advanced use cases. So how does this work with Istio? By default, the Istio agent just starts up the SDS server and Envoy talks to that. But in Istio 1.14 or newer, on startup, the Istio agent does something extra.
It actually checks for a known binding on a socket path, and as of 1.14 or newer, if there's a binding there, it skips the default flow we just went through and instead issues an SDS connection over that Unix socket. And if you've done everything right, the other side of that socket is going to be our SPIRE agent.
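The decision the istio-agent makes on startup can be sketched like this. The socket path here is a placeholder, not necessarily the exact well-known path Istio uses; that detail lives in Istio's SPIRE integration configuration:

```python
import os

# placeholder for the well-known workload SDS socket path
DEFAULT_SOCKET = "/run/secrets/workload-spiffe-uds/socket"

def choose_sds_source(socket_path: str = DEFAULT_SOCKET) -> str:
    """If something is bound at the well-known socket path, use it as the
    SDS source (the SPIRE agent); otherwise fall back to the default
    istiod CSR flow."""
    if os.path.exists(socket_path):
        return "external-sds"
    return "istiod-ca"
```

The point is that the switch is purely presence-based: deploy SPIRE so its agent serves SDS on that path, and the Istio agent defers to it without any other configuration change in the proxy.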
So what that looks like when you deploy it is something like this, and this is a little bit small, I guess.
A DaemonSet is the perfect Kubernetes deployment model for the agent, because we want one agent running on every single node, since there are node-specific properties that each one needs to know. And then we've got our server, which in this case is going to be running as a StatefulSet. I mentioned the registry a few times as well: in Kubernetes we can run the Kubernetes workload registrar as part of the SPIRE server pod, and that's basically going to auto-register any new workloads that are bootstrapped into our Kubernetes cluster.
Now this gets interesting when you want some more advanced deployment patterns. Kubernetes is not so bad, but say you want to onboard a VM into your mesh. Onboarding a VM into the mesh has quite a few different steps, but generally the way we like to do it, at least in Istio, is to install an Istio agent process in the VM alongside the app. And if you think about it, this actually looks quite a lot like the Istio sidecar pod in Kubernetes: you've got your app.
You've got your sidecar, and the nice thing is, it's logically very similar. So you can actually just drop a SPIRE agent in there, point it back at the SPIRE server, and then this VM's app has everything it needs to authenticate the identity of any of the services running in the cluster, and also vice versa: all of the cluster services are going to be able to authenticate the identity of the app that's running in the VM.
This is really powerful, and it's another point that we've mentioned a few times: all of this SPIRE stuff is pretty platform agnostic. There's no reason the SPIRE server needs to run in Kubernetes. You could put it somewhere else, some central identity server; it doesn't have to be cloud, it could be on-premises, it could be some legacy system that's in charge of identity for whatever reason. And again, you can point at it from pretty much anything that's got a client able to talk to it. So you can grow your architecture: add more VMs, add more clusters, add more cloud providers. At a certain scale you have to start tweaking the deployment a little bit, but that's a whole other talk. So yeah, that's pretty much the whirlwind tour. I did have a demo, but I don't trust the Wi-Fi too much to do it.
So
here's
a
list
of
resources
that
are
basically
helpful
if
you
want
to
learn
more
about
spiffy
Aspire
cert
management
in
general,
istio's
implementation,
specifically
all
these
slides
should
be
available
on
our
website
afterwards,
as
well,
and
also
we're
hiring.
So
if
any
of
this
stuff
sounds
interesting
or
anything
else,
you've
heard
from
solo
folks
today,
you
know
come
find
anyone
with
a
little
gluey
guy
in
our
shirts
or
come
that
check
us
out
on
our
website
we're
headquartered
in
Boston,
but
we
do
hire
a
remote.