From YouTube: SPIFFE and SPIRE: Architecture Deep Dive - Andrew Harding, VMware + Evan Gilman, Scytale
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe 2021 Virtual from May 4–7, 2021. Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Okay, all right. So I'm Andrew Harding. Like Amir mentioned, I've been a SPIRE maintainer for a couple of years now, and recently I was made a SPIFFE maintainer, having participated in that effort for quite some time. Today we're going to have a little bit of a deep dive on SPIFFE and SPIRE. Both Evan and myself are staff engineers at VMware. I'll let Evan introduce himself, though.
B
Thank you, Andrew. Yeah, so I'm Evan. I spoke a little bit earlier during the SPIFFE update. The goal for this talk: we've got a full day ahead of us, and we're really only just getting started, and there's likely to be a lot of jargon thrown around, a lot of terms that might be specific to SPIFFE and SPIRE. So the goal of this talk is really to familiarize everyone with that, and to do as deep a dive as is necessary, we hope, to set folks up to really understand and grok the presentations for the rest of the day. We also recognize that this might not be new to some of you. We've tried our best to make the presentation as interesting as possible for those who may already know some of this stuff, and we do have a lot to share with you. So I hope we don't go too far over time here, but in the interest of time, I'm going to let that be that and pass it back to Andrew to go over the agenda and kick things off.
A
Okay, so yeah, like Evan said, we're just going to do a quick overview to help people understand the rest of the talks coming today. We're going to start with a quick SPIFFE refresher, then go over the SPIFFE and SPIRE project goals, and then dive right into how SPIRE specifically tries to attain those goals: through the agent, through node and workload attestation, and how the SPIRE server manages its keys and rotation strategies. Then we'll talk a little bit about deployment and avoiding failure modes. So, right off the bat, starting with SPIFFE — I did that, there we go.
A
Kelsey talked about some of these topics, so hopefully this will be a very light refresher. We're going to start off with the SPIFFE ID. Again, this is the heart of SPIFFE and forms the way that we structure identity for services. It's a URI, and it's got a couple of components: the authority component represents the trust domain for the identity, and the path component represents the particular entity's identity within that trust domain. The trust domain, in SPIFFE nomenclature, is essentially a namespace and provides a boundary. Trust domain boundaries can be along security boundaries: this could be different environments, like production versus staging, or other sorts of workloads or systems that you might have some requirement around security isolation for.
A
They can also be an administrative boundary: you've got a couple of different teams who want to manage their own independent SPIFFE implementations or deployments, and so this could be billing versus sales versus human resources. The idea is that trust domains have signing authorities within them, and those signing authorities are responsible for issuing identities within that trust domain. The secure identity in SPIFFE is codified in what's known as the SPIFFE Verifiable Identity Document, or SVID. This document contains a SPIFFE ID and is signed by an authority within the trust domain, and we've got specifications out that define this type of document for both X.509 certificates and JWT tokens.
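The ID structure just described can be sketched in a few lines of Python. This is purely illustrative (the helper name is ours, not a SPIFFE API): the authority component of the URI is the trust domain, and the path names the entity within it.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into its trust domain and path components.

    A SPIFFE ID is a URI with the 'spiffe' scheme; the authority
    component names the trust domain and the path names the workload.
    """
    uri = urlparse(spiffe_id)
    if uri.scheme != "spiffe":
        raise ValueError("not a SPIFFE ID: " + spiffe_id)
    return uri.netloc, uri.path  # (trust domain, path)

# e.g. an identity for a billing service in a "prod" trust domain
td, path = parse_spiffe_id("spiffe://prod.example.com/billing/api")
```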
A
So
we've
got
our
game:
we've
got
it
embedded
into
a
you
know,
assigned
document.
Now,
let's
talk
about
how
we
verify
those
documents
and
we
do
that
with
materials
that
are
found
in
what's
called
the
spiffy
bundle.
So
again,
this
is
a
collection
of
public
key
material
from
the
authorities
for
trust
domain
and
it's
used
to
to
validate
aspects
that
belong
to
that
trust
domain.
A
If
you're
reading
through,
like
the
the
documentation
or
the
specifications,
you'll
also
see
this
called
the
trust,
domain
bundle
or
just
trust
bundle,
these
three
terms
are
used
interchangeably,
so
building
up
now,
we've
got
our
id
we've
got
assigned
document
over
that
id.
We
can
verify
it
with
bundles.
A
Let's
talk
about
how
workloads
receive
this
cryptographic
material
and
that
is
done
through
the
spiffy
workload
api,
so
the
spiffy
workload
api
is
something
that
unauthenticated
workloads
talk
to
and
again
provides
svids
and
bundle
materials
and
streams,
new
materials
to
the
workload
as
those
as
those
materials
change,
and
because
it's
an
authenticated
api.
In
other
words,
you
know
workloads
don't
have
to
bring
some
sort
of
identity
with
them
or
secret
with
them.
In
order
to
authenticate
against
this
api,
it
solves
the
secret
zero
problem
for
the
workloads.
A
The last thing we'll talk about in relation to SPIFFE is another mechanism to retrieve the bundles for a trust domain, and that is the Federation API. We've talked about this a few times already today. This is just a very quick way for trust domains to exchange public key materials so they can authenticate each other's SVIDs. It's a one-way relationship: you can authenticate their identities, but the party you're federating with can't authenticate yours, unless they also perform this sort of one-way federation step to obtain your public key materials. So, in a nutshell, SPIFFE gives us cryptographically verifiable, secret-zero-solving, frequently rotated, federatable, namespaced, uniform identity — and that's quite a list. That's huge.
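The one-way nature of federation can be shown with a toy model. This is not SPIFFE's actual API — the keys are stand-in strings, real bundles hold X.509 and JWT public keys — but it captures why one domain can verify the other's SVIDs while the reverse fails until the second domain federates too.

```python
# Toy model of one-way federation: "foo" fetches "bar"'s bundle, so foo
# can verify bar's SVIDs, but bar cannot verify foo's until it performs
# the same one-way step itself.

class TrustDomain:
    def __init__(self, name, authority_key):
        self.name = name
        self.authority_key = authority_key      # our signing authority
        self.bundles = {name: {authority_key}}  # known keys, per domain

    def federate_with(self, other):
        """One-way step: copy the peer's public keys into our store."""
        self.bundles[other.name] = set(other.bundles[other.name])

    def can_verify(self, svid_domain, svid_signing_key):
        """An SVID checks out if its signer is in that domain's bundle."""
        return svid_signing_key in self.bundles.get(svid_domain, set())

foo = TrustDomain("foo.test", "foo-key-1")
bar = TrustDomain("bar.test", "bar-key-1")
foo.federate_with(bar)

foo_sees_bar = foo.can_verify("bar.test", "bar-key-1")  # one direction works
bar_sees_foo = bar.can_verify("foo.test", "foo-key-1")  # the other does not
```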
A
And
you
might
be
asking
yourself
well,
you
know
you
know,
maybe
maybe
all
my
infrastructure
runs
inside
a
very
homogeneous
environment
where
I
have
you
know,
maybe
some
of
these
check
boxes
checked
off
all
the
ones
that
are
important
to
me
so
like
what's
what's
really
spriffy
bringing
to
the
table
and
so
for
spiffy
and
it's
project
goals?
It's
not
about.
You
know,
services
that
are
running
in
a
single
cloud
environment,
or
maybe
this
other
cloud
environment
or
that
other
cloud
environment
right
or
this.
A
So
that's
it
for
for
a
spiffy
recap:
let's
talk
about
spire
now,
spire's
whole
goal
in
life,
as
as
a
as
a
spiffy
implementation
party
for
spiffy
implementation
is
really
to
light
up
that
workload.
Api
that
we
talked
about.
That
gives
you
those
you
know:
bundle
and
nesfid
materials
in
as
many
different
environments
as
possible
and
to
provide
a
sense
of
uniformity
around
management
of
those
identities,
and
it
does
this
starting
with
the
spire
agent.
A
This is kind of the natural place to start, because the agent is what lives alongside workloads, implements that Workload API, and provides those cryptographic materials to workloads through that API. Now, a SPIRE agent itself doesn't start out with any of those materials. Those materials are centrally managed and signed by the SPIRE server, which acts as the centralized signing authority inside of your trust domain. So there's this material that the agent will later on feed down to workloads via the Workload API, and it caches those materials in an internal cache. As bundles are prepared and updated inside of your trust domain by the SPIRE server, and as SVIDs are minted, those are sent down to the SPIRE agent and cached. And of course, as those materials change and rotate, the SPIRE agent is able to reach out and continue updating those materials inside of its cache, again making those available to workloads downstream via the Workload API. Now, these are cryptographic identities and their associated private keys — these are security-sensitive materials — and the SPIRE server isn't just going to hand them out to anybody. So there's a process through which the SPIRE agent is able to bootstrap and authenticate against the SPIRE server, and I'll kick it over to Evan to talk about that next.
B
Thank you, Andrew. Yeah, so Andrew mentioned that SPIRE helps to solve the secret zero problem, which is: how do you get the first secret? How do you get the first credential? You probably don't want to bake it into your workloads, or bake it into your nodes and deploy them. So how do you solve this, ideally at runtime? SPIRE and SPIFFE both look to solve that problem for workloads — SPIRE in particular, which uses this agent.
So I'll walk you through a couple of examples of how this actually works. In the AWS case, the agent tickles this node attestor, and the node attestor knows how to reach out and talk to AWS. So in this case, we reach out to the AWS metadata service and we grab a document that is signed by AWS — that AWS makes available only to this node — and that document has the instance ID and other identity-related information for this particular node.
B
So the agent plugin reaches out and grabs this thing and then shoots it over to the SPIRE server under a TLS-protected connection. The SPIRE server receives this document and then passes it down to its node attestor that pairs with the agent one, and this node attestor knows not only how to validate the document that it got from AWS, but also how to call AWS APIs and perform an extra set of validations — is this a new node? There are some anti-tampering checks that occur there.
B
You can write whatever logic you want in there, really — that's the beauty of this pluggable system. But once the SPIRE server has effectively been convinced that the agent is running on, say, instance ID 1234, the SPIRE server will issue the agent its own SVID. This SVID identifies the agent uniquely within the trust domain, and this identity that we issue the agent is derived from the attestation.
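The AWS flow just described — fetch a signed document from the metadata service, validate it server-side, then derive the agent's identity from the attested properties — can be condensed into a toy sketch. An HMAC key stands in for AWS's real signature scheme (actual instance identity documents are verified against AWS's published certificate), and all of the names here are ours:

```python
import hashlib
import hmac
import json

# Stand-in for AWS's signing key; in reality the server verifies an
# asymmetric signature with AWS's published certificate.
AWS_SIGNING_KEY = b"stand-in-for-aws-signing-key"

def metadata_service_document(account, instance_id):
    """What the agent-side node attestor fetches from the (simulated)
    metadata service: an identity document plus its signature."""
    doc = json.dumps({"accountId": account, "instanceId": instance_id})
    sig = hmac.new(AWS_SIGNING_KEY, doc.encode(), hashlib.sha256).hexdigest()
    return doc, sig

def server_attest(doc, sig, trust_domain="example.org"):
    """Server-side node attestor: validate the signature, then derive the
    agent's SPIFFE ID from the attested account and instance ID."""
    expected = hmac.new(AWS_SIGNING_KEY, doc.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("document not signed by AWS")
    ident = json.loads(doc)
    return f"spiffe://{trust_domain}/agent/aws/{ident['accountId']}/{ident['instanceId']}"

doc, sig = metadata_service_document("123456789012", "i-1234")
agent_id = server_attest(doc, sig)
```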
B
So in this case, we issued an identity that's a function of its AWS account number and its AWS instance ID. And just to demonstrate the flexibility of this node attestation mechanism, I have one more example, which uses a TPM. TPM stands for Trusted Platform Module. It's a small chip that is soldered onto most motherboards these days, and it provides an enclave — a secure enclave, if you've heard that term before — for holding private keys and making other kinds of security assertions about the state of the hardware.
B
So there's this TPM-based node attestor that knows how to reach out to the TPM, and what it can do is grab a certificate that is burned into the TPM by its manufacturer. It sends this certificate over to the SPIRE server, which of course passes it to the TPM-based node attestor on the SPIRE server side. That node attestor is configured with the manufacturer CA certificate, so it knows how to validate the certificate — yes, this is a valid certificate from the TPM manufacturer.
B
We
expect
inside
that
certificate
and
a
lot
and
a
message
sent
along
with
it.
There
are
some
public
keys
and
those
and
the
private
keys
that
are
paired
at
those
public
keys
are
actually
burned
into
this
tpm
hardware.
So
what
the
server
does
is.
It
is
then,
in
a
position
to
issue
a
challenge,
so
it
can
take
a
little
notch
or
a
small
little
randomly
generated
secret.
B
So the agent receives this challenge and passes it down into its node attestor plugin, which then passes it back into the TPM to be solved. The private key that the TPM is holding on to is able to solve this challenge and send back, in clear text, the bit of information that the SPIRE server generated. You can see that at this point the SPIRE server has a pretty good idea of the identity of the hardware that this agent is running on — it knows the particular key that is burned into this TPM — and, same as before, it uses that information to issue an identity to the agent. In this case, the agent's identity is bound to the identity of this TPM and the hardware that it is running on.
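The challenge/response step can be sketched as follows. A real TPM proves possession of an asymmetric private key whose public half the server learned from the manufacturer-issued certificate; here a shared HMAC key stands in for that key pair (so verification looks symmetric), and the certificate-validation step is omitted. Function names are ours:

```python
import hashlib
import hmac
import secrets

def tpm_solve(burned_in_key: bytes, challenge: bytes) -> str:
    """Runs 'inside the TPM': only the holder of the burned-in key can
    produce this response to the challenge."""
    return hmac.new(burned_in_key, challenge, hashlib.sha256).hexdigest()

def server_attest(known_key: bytes, tpm_key: bytes) -> bool:
    """Server issues a fresh nonce as the challenge, round-trips it
    through the agent to the TPM, and checks the response against the
    key material it learned during certificate validation."""
    nonce = secrets.token_bytes(16)          # the randomly generated secret
    response = tpm_solve(tpm_key, nonce)     # via agent -> TPM -> agent
    expected = hmac.new(known_key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(response, expected)

key = secrets.token_bytes(32)
attested = server_attest(key, key)  # same burned-in key: challenge solved
```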
B
So that's the story of node attestation, told as fast as I can. The end result is that we've gone from an agent, or a node, that just comes up on the network with no prior knowledge, to knowing exactly the identity of the hardware and the agent that we're communicating with. So that solves the secret zero problem for the agent.
B
But
what
about
the
workload
you
know
andrew
described,
that
we
saw
this
problem
for
the
workload
too
by
the
workload
api,
so
there
has
to
be
some
some
magic
there
and
so
to
solve
that
we
do
a
very
similar
kind
of
approach
and
inspire
agent
that
we
call
workload
attestation.
B
So
this
is
able
to
take
a
workload
that
we
have
no
prior
knowledge
of
that
that
it
has
no
credentials
and
we're
able
to
identify
it
okay.
So
how
do
we
do
that
inside
spire
agent?
We
have
this
library
called
peer.
Tracker
peer
tracker
is
a
platform
specific
implementation
of
of
some
logic
that
is
able
to
introspect
the
kernel
that
the
agent
is
running
on
to
find
out
the
id
to
find
out
like
which
process
is
calling
it.
B
So
when
the
workload
calls
this
workload
api,
they
do
all
these
special
lookups
and
we're
able
to
figure
out
the
process
id
and
some
other
attributes
associated
with
the
process.
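On Linux, the core of this kernel introspection is a single socket option: given a connected Unix domain socket, `SO_PEERCRED` returns the pid, uid, and gid of the peer process. A minimal sketch (peertracker itself is Go and does more than this one syscall):

```python
import socket
import struct

def peer_credentials(conn: socket.socket):
    """Ask the Linux kernel who is on the other end of a Unix socket.
    SO_PEERCRED fills a `struct ucred` of three ints: pid, uid, gid."""
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    return struct.unpack("3i", creds)  # (pid, uid, gid)

# Demo with a socketpair, where the "workload" is our own process:
server_end, workload_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_credentials(server_end)
server_end.close()
workload_end.close()
```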
B
Once we have this information, we pass the process info into these workload attestors, which are similar to node attestors. One big difference here is that the agent can load multiple workload attestors, and we fan out across all of them. So in this example, we have one for Unix that knows how to talk to the Linux kernel, one for Docker that knows how to talk to the Docker daemon, and one for Kubernetes that knows how to talk to the kubelet. We dispatch each of these plugins, they go and collect information about the process that's calling us, and they return what are called selectors. These selectors describe the calling process: in this case we have a username, we have the SHA sum of the workload binary, and we have the Docker image IDs and Kubernetes-related information. This is pretty much all we need in order to positively identify this workload — what is the shape of this workload, exactly what is its identity — and now we are in a position where we can issue it an SVID, a key, and the associated bundle.
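The selector fan-out and matching step can be sketched like this. The entry shapes and the matching rule are simplified (SPIRE's real registration API carries more data), and all names here are illustrative:

```python
# Toy registration entries: each maps a SPIFFE ID to the set of
# selectors a workload must exhibit to receive that identity.
REGISTRATION_ENTRIES = [
    {"spiffe_id": "spiffe://example.org/billing",
     "selectors": {"unix:user:billing", "docker:image_id:abc123"}},
    {"spiffe_id": "spiffe://example.org/frontend",
     "selectors": {"k8s:ns:web", "k8s:sa:frontend"}},
]

def attest_workload(discovered: set) -> list:
    """Return the SPIFFE IDs of every entry whose selectors are all
    satisfied by what the workload attestors discovered."""
    return [entry["spiffe_id"] for entry in REGISTRATION_ENTRIES
            if entry["selectors"] <= discovered]

# A calling process that the Unix and Docker attestors both reported on:
ids = attest_workload({"unix:user:billing", "unix:uid:1001",
                       "docker:image_id:abc123"})
```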
B
So we've spent a lot of time so far talking about the agent — how the agent gets identity, how the agent issues identity to workloads — but of course there's the server component that Andrew mentioned earlier. It's got to manage these keys, it's got to actually mint these SVIDs; it's got a bunch of responsibilities on its shoulders. So I'll pass it back to Andrew to take a look at some of those internals.
A
Thanks, Evan. All right, so again, the SPIRE server is the centralized signing authority for SVIDs inside the trust domain, and it accomplishes this by having a signing authority for each SVID type. So inside the SPIRE server there's a distinct authority for X.509 and JWT SVIDs.
A
Now
these
pairs
here,
the
x509
authority
is
represented
as
an
x509ca
certificate
and
it's
accompanying
private
key.
This
the
ca
certificate
can
be
self-signed.
It
can
also
belong
to
a
larger
existing
pki
scheme
and
be
signed
by
by
an
authority
inside
that
existing
pki.
A
The
jot
authority
is
just
a
simple
asymmetric
key
pair,
you
know,
and
it's
in
charge
of
signing
the
job
estimates,
and
one
thing
to
note
here
is
that
across
these
two
you
know:
authorities
the
private
key
material
is
not
directly
managed
by
spire
server
itself,
and
this
is
a
security
consideration
to
kind
of
separate
out
the
private
key
management
from
spire
server
itself
and
offload
that
to
what
is
known
as
the
key
manager
plug-in.
A
The
key
manager
plug-in
is
a
simple
interface
that
is
more
or
less
loosely
based
off
of
a
subset
of
pkcs11
and
through
this
interface,
spire
server
manages
multiple
key
slots
for
private
keys
and
can
create
these
keys
and
use.
You
know
use
these
key
slots
to
assign
arbitrary
data.
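The shape of that plugin interface might be sketched as follows. The method names are ours (loosely mirroring the description above, not SPIRE's actual plugin RPCs), and HMAC stands in for the asymmetric signing a real key manager performs, so the example stays dependency-free:

```python
import hashlib
import hmac
import secrets
from abc import ABC, abstractmethod

class KeyManager(ABC):
    """Minimal interface: create a key in a named slot, sign with it.
    Private key material never leaves the implementation."""

    @abstractmethod
    def generate_key(self, slot: str) -> None: ...

    @abstractmethod
    def sign(self, slot: str, data: bytes) -> bytes: ...

class MemoryKeyManager(KeyManager):
    """Keys live only in process memory, like SPIRE's 'memory' plugin;
    a 'disk' variant would persist slots, an HSM/TPM variant would keep
    them in hardware."""

    def __init__(self):
        self._slots = {}

    def generate_key(self, slot: str) -> None:
        self._slots[slot] = secrets.token_bytes(32)

    def sign(self, slot: str, data: bytes) -> bytes:
        return hmac.new(self._slots[slot], data, hashlib.sha256).digest()

km = MemoryKeyManager()
km.generate_key("x509-CA-A")
sig = km.sign("x509-CA-A", b"csr-bytes")
```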
A
There
are
a
couple
of
key
manager
plugins
that
are
built
into
spire.
The
top
two
you
see
their
memory
and
disk
we've
touched
on
this
earlier
with
augustine,
gave
his
update
that
there
are
community
efforts
in
place
to
develop
a
tpm-based
key
manager
as
well
as
something
that
hits
aws's
key
management
servers
getting
back.
We,
we
talked
just
a
second
ago
about
how
the
x-519
authority
can
be
part
of
a
larger,
existing
pki
and
the
way
that
that's
accomplished
inside
spires
through
the
use
of
it.
B
Let's
see
yep
sorry
hold
on,
I
got
kicked
out
of
my
ssl
session
right
in
the
middle
of
our
presentation.
One
second
is
that
better.
I
need
to
share
hold
on
hold
on
okay
yeah.
B
A
All right, I'm back — perfect, thank you. All right, so the way that SPIRE accomplishes interacting with this sort of existing upstream PKI is through the upstream authority plugin. Again, this is a very simple interface that provides just enough functionality for SPIRE to interact with that upstream.
A
Two
different
authority
types
that
it
manages,
specifically
as
the
next
509
authority,
is
prepared.
The
csr
for
that
intermediate
ca
certificate
that
spire
wants
to
be
signed
is
sent
upstream
through
the
mint
x
509
ca,
rpc,
where
it
is
signed
by
the
upstream
authority
and
and
then
shipped
back,
and
if
we
want
to
talk
about
jot
as
the
job
authority
is
prepared.
A
There's
a
whole
bunch
of
upstream
ca
implementations
here.
I
won't
go
over
most
of
these,
but
I
will
mention
this
last
one.
The
spire
upstream
authority
plug-in
evan's
gonna
dive
into
the
details
of
this
later,
but
this
is
essentially
where
spire
is
acting
as
the
upstream
authority
for
a
downstream
spire
server.
It.
B
A
Some
some
interesting
resiliency
and
isolation
benefits.
A
So
again,
we've
talked
about
how
these
authorities
you
know,
are
prepared
and
maybe
participating
inside
of
an
upstream
pki,
but
we
haven't
really
talked
about
how
the
public
key
material
from
these
authorities
makes
its
way
back
to
agents
and
down
through
the
workloads
via
the
workload
api,
and
so
essentially,
what
happens
is
spire
manages
a
storage
back
end
and
this
is
known
as
the
datastore
and
it's
involved
in
in
storing
all
sorts
of
different
things
which
we
won't
get
into
now.
A
We're
mostly
concerned
at
this
point,
with
the
trust
bundle
for
the
for
the
trust
domain,
as
these
authorities
are
prepared,
the
public
key
material
is
appended
to
the
bundle
inside
the
data
store
and,
like
we
talked
about
way
back
when
we
first
introduced
the
spire
agent.
You
know
the
spy
agent
is.
A
You
know
at
some
frequency
connecting
to
the
server
and
synchronizing
down,
bundled
material
and
getting
you
know
estimated
signed
and
rotated,
and
so,
as
part
of
that
synchronization
process,
we
can
see
that
you
know
it
pulls
the
bundle
from
the
data
store
and
directly
through
the
expire
server
and
as
the
x519
and
jot
authorities
are
rotated
again.
Those
the
new
public
key
material
is
appended
to
that
bundle
inside
the
data
store.
A
Now, this rotation of X.509 and JWT authorities happens at a configurable cadence, and we've seen how the public key material from those newly prepared authorities is stuffed into the datastore and eventually makes its way down to agents. But because the agent is not getting a continuous stream of updates, and is instead polling at some frequency, there's an interval in which an X.509 authority has been prepared and its public key material has been published to the datastore, but an agent has yet to poll for that key material. At that point, you can imagine that if that newly prepared authority were immediately assigned to start minting SVIDs, and those SVIDs made their way to an agent that has yet to poll for that bundle update, the agent would be unable to verify those SVIDs. So SPIRE implements an interesting rotation strategy to prevent and mitigate this situation, and it accomplishes this by actually having two authorities per SVID type.
A
So I lied a little bit earlier in the presentation. The first set of authorities is considered the active set, and the active set sits alongside the prepared set. The active set is the one involved in signing SVIDs: any authority that's in the active slot is the authority chosen to sign incoming SVID requests, and of course its public key material exists in the datastore and is propagated out to agents as they sync. At some point, rather than swapping the active authority immediately, SPIRE prepares a new set of authorities in advance and sticks those in the prepared slot. During this time, the new public key material is added to the bundle, while the active slot is still the one in charge of minting identities. So there's an interval of time where our active slot is minting identities and our prepared slot has been prepared, with its public key material placed in the bundle, where it's propagating down to agents in advance of it ever being used to sign SVIDs.
A
Now
the
bundle
material
stays
the
same.
We
don't
prune
out
the
old
authority
key
material.
Quite
yet
we
want
to
leave
it
in
there
for
a
bit
of
time,
because
there
could
still
be
first
of
all,
the
rotation
event
happens,
or
the
activation
step
happens
before
that
old
authority
has
has
actually
expired,
and
so
there
could
still
be
s
vids
floating
around
inside
your
system
that
have
been
signed
with
that
old,
now
retired
authority
that
you'd
still
need
to
validate.
A
And
again,
this
this
process
is
repeated
as
spire
continuously
monitors
and
rotates
these
authorities
to
to
maintain
you
know:
freshly
rotated
authority
material,
the
cadence
that
we
do
this
at
is
it's
a
pretty
simple
strategy
and
it's
based
on
how
much
time
is
left
on
the
active
authority.
A
So
when
the
active
authority
has
half
of
its
lifetime
left,
that's
when
we
go
ahead
and
prepare
a
new
authority
and
then
when
the
active
authority
has
one-sixth
of
its
lifetime
left
that's
when
we
go
ahead
and
activate
the
new
authority-
and
you
can
imagine
you
know
a
space
of
time
between
that
halfway
mark
and
that
one-sixth
mark.
A
That
is
the
time
that
we
give
that
prepared
bundle
material
to
propagate
out
to
the
to
the
you
know,
trust
me
before
we
start
minting
esvids
with
the
the
newly
prepared
authority
and
there's
some
some
caps
in
there
to
like
prevent
prevent
some
weird
times
with
with
really
long-lived
authorities.
But
I'm
gonna
have
to
talk
about
that.
I
won't
take
the
time
to
talk
about
that
right
now.
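The prepare/activate cadence just described is simple arithmetic: prepare when half the active authority's lifetime remains, activate when one-sixth remains (ignoring the caps mentioned for very long-lived authorities). A sketch, with a function name of our own choosing:

```python
def rotation_schedule(issued_at: float, lifetime: float):
    """Return (prepare_time, activate_time) for an active authority:
    prepare a successor at half the remaining lifetime, activate it at
    one-sixth. The gap between the two is propagation time for the new
    bundle material to reach agents."""
    expires_at = issued_at + lifetime
    prepare_at = expires_at - lifetime / 2   # half of lifetime left
    activate_at = expires_at - lifetime / 6  # one-sixth of lifetime left
    return prepare_at, activate_at

# A 24-hour authority issued at t=0: prepare at hour 12, activate at
# hour 20, leaving 8 hours for the new bundle to propagate to agents.
prepare, activate = rotation_schedule(0.0, 24.0)
```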
A
Let's
see
so,
we've
talked
a
lot
about.
You
know:
inspire
spires
responsibilities,
fire
servers
responsibilities
in
particular.
It's
it's
doing
a
lot
of
different
things.
It's
signing
sveds,
it's
rotating
authorities,
it's
publishing
stuff
upstream
and
it's
obviously
a
big,
a
big
point
of
failure.
A
If
something
goes
wrong
with
fire
server
that
has
large
implications
on
on
the
you
know,
on
our
ability
to
put
to
push
identity
out
to
our
workloads
and
so
evan
next
is
going
to
talk
about
sorry,
I'm
messing
around
with
slides
here,
evan's
going
to
take
a
minute
to
talk
about.
You
know
some
deployment
strategies
to
aspire
to,
try
and
mitigate
those
failure.
Modes.
B
Thanks, Andrew. I'm going to speed through this as fast as possible, because we're already well over our allotted time here. So this is kind of the simplest deployment you can imagine, and as Andrew mentioned, the SPIRE server becoming unavailable is particularly problematic if all the workloads are depending on having valid SVIDs to communicate with each other. The good news is that this is not the most terrible thing in the world if the SPIRE server were to fail. It's not ideal, but Andrew mentioned previously that the SPIRE agent does have a cache: the agent knows what identities it can issue, and it fetches those in advance and caches them. So if the SPIRE server goes away, we can't get new SVIDs and we can't rotate expiring SVIDs, but the agent can still perform workload attestation and can still serve SVIDs to workloads from its cache without contacting the server.
B
So
you
know
it's
survivable
in
a
steady
state,
but
again
it's
not
ideal.
The
the
very
simple
kind
of
of
approach
to
addressing
this
is
to
scale
the
spire
servers
horizontally.
You
can
have
as
many
of
them
as
you
like
that
this
obviously
addresses
performance
issues
as
well.
We
don't
have
any
any
notion
of
active
or
passive.
B
Each
server,
you
know,
has
the
full
authority
and
they
do
have
kind
of
a
shared
shared
data
store,
though,
if
you
were
to
say
hey,
I
want
to
put
one
of
these
in
each
in
like
different
failure.
Domains
like
one
in
each
region
or
one
in
each
availability
zone
or
something
you
know
having
this
to
strike
a
data
store
across
those
things
is
not
ideal
and
so
another
tool
we
have
in
our
tool
chest
is
what
we
call
nested
spire.
I've
had
a
couple
mentions
of
it
today.
B
This
is
where
spire
uses
another
spire
server
as
its
upstream
authority
downstream's
fire
service
do
node
attestation
and
workload
attestation
the
same
as
a
regular
workload
does
so
you
could
have,
for
instance,
one
spire
server
in
aws
and
another
spire
server
and
azure,
and
both
of
them
kind
of
roll
up
to
this
global
root
level.
So
this
allows
you
to
scale
across
these
different
failure
domains
and
to
manage
the
failure
of
of
you
know:
different
tiers
of
spire
servers.
B
So
if
this
global
kind
of
tier
or
to
go
away
for
some
reason,
you
know
the
local
spire
servers
can
still
perform
signing
operations
can
still
rotate
workload
estimates
they
cannot
rotate
their
own
signing
keys
like
their
jot
keys
or
ca
certificates.
So
you
know:
that's
that's
what
we
look
to
the
upstream
servers
for,
but
you
know
you
can
imagine
if
you
have
a
one
week
lifetime
on
those
or
a
two
week.
B
Lifetime
on
those
you've
got
a
significant
amount
of
time
to
get
that
central
kind
of
cluster
back
up
and
running
the
final
tool
we
have
in
our
tool.
Trust
is
federation.
B
You
know
this
is
where
we
have
a
different
set
of
spire
servers
that
isn't
a
completely
different
trust
and
they
have
a
completely
different
set
of
authorities
and
then
the
spire
servers
between
each
other.
They
exchange
public
keys.
This
is
good
for
managing
failure,
domains.
It's
also
good
for
managing
security
domains.
If
the
spire
server
and
trusted
name
bar
on
the
right-hand
side
here
were
to
go
down,
it
does
not
affect
identity
issues
and
trust
domain
foo.
B
If
it
were
to
be
compromised,
it
also
doesn't
affect
security
of
identity
issuance
and
trust
domain
through.
B
So, in summary, very quickly: we learned about all the major SPIRE code paths — the major workings that are really important for understanding how SPIRE operates under the covers. We learned in particular about node and workload attestation in SPIRE — how we go from not knowing what anything is to knowing what things are. We learned about key management and rotation, how all of that is managed by the SPIRE server, and how the SPIRE agent receives those materials and then figures out which workloads to give them to.
B
If
you
want
to
learn
more,
you
can
check
out
the
spiffy
website
at
fifi.io.
We
also,
these
are
the
two
main
github
repos
spiffy
inspire
github
repos.
We
also
have
our
slack
channel
it's
a
very
welcoming
community,
so
we
we
hope
to
see
you
there.
I
know
are
already
very
much
over
time.
B
So
we'll
take
questions
in
the
chat
I'll
be
there
and
andrew
will
be
there
to
answer
them
async,
and
we
also
have
a
session
later
today,
a
networking
session
on
which
we
can
you
know
kind
of
talk
about
any
anything
that
that
might
have
come
to
mind
during
this
presentation.
So
thank.
B
Us
in
this
in
this
much
longer
than
planned
talk,
and
we
really
really
hope
that
it's
helped
to
kind
of
set
the
stage
for
the
rest
of
the
day.
Thank
you,
everyone
yeah,
thank
you.