From YouTube: Passport App: The role of SPIFFE and SPIRE in a return to work solution - Frederick Kautz

Description
In this session, Frederick demonstrates a SPIFFE/SPIRE-enabled solution which will help employers manage their return-to-work strategy. We will do a quick deep dive on how SPIRE allows us to accomplish our mission and what it may enable us to do in the future.
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe 2021 Virtual from May 4–7, 2021. Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A: I'm going to go over two things. One of them is to talk a little bit about privacy, and the second is one of the products that is architected to use SPIRE specifically, and some of the things that we're doing with privacy there as well. So to begin, a little bit about me and doc.ai.

A: doc.ai is a company that builds mobile health products, AI technology and algorithms, and privacy-preserving infrastructure. I'm the head of edge infrastructure at doc.ai. I also co-founded Network Service Mesh, and I work with groups like the CNCF to help with early adoption of cloud-native technologies in the telecom and healthcare space. I'm also a co-author of Solving the Bottom Turtle, which is available at spiffe.io.
A: Go get it; it's free to download, and it's a wealth of information with an amazing cast of writers. So let's jump directly into some of the content. First, security versus privacy. You've heard a lot about security recently. In security you have confidentiality, integrity, and availability, and how those three pillars are integrated together.
A: Privacy is not security. They are actually two separate concepts; they're related, but they're different. Security is freedom from danger. Privacy is the quality or state of being apart: being free from observation by a given company or a given party. The two of them are related, but they're not the same thing.
A: In terms of privacy, there's a variety of different areas to consider. As an industry, we're building towards more secure systems: we're learning how to secure Kubernetes systems that will build and run AI models, or build and run applications that do a variety of things in banking and so on. But from a privacy perspective, there are also issues that we have to look at, and to give you an example of one area (it's not the sole area), I just want to use it as a basis to establish some issues around privacy.
A
Is
you
look
at
ai
with
like
neural
networks,
so
neural
networks
learn
information
about
the
properties
that
of
the
features
that
they're
that
they're
fit
that
they're
fed
in
and
they
they
they
tend
to
over
fit
on
on
information.
A
That
is
that
is
in
there
and
those
overfitted
models
that
basically
learned
too
much
about
individual
individual
inputs,
and
the
same
can
be
said
about
a
lot
of
other
products
as
well
that
they
learn
more
than
what
they
need
to
in
order
to
fulfill
their
purpose
have
access
to
a
lot
more
information
than
than
they
should
and
so
as
a
thought
experiment.
A
If,
if
you
look
at
the
visualization,
take
a
guess
where
the
where
the
circles
are
and
where
the
squares
are
along
the
along
the
edges
of
these
two
simple
models,
and
if
you
look
at
the
dashed
line,
it's
easier
to
make
a
guess
that
there's
probably
a
square
where
it
dips
beneath
the
line
and
and
if
you're,
only
looking
at
the
model
from
from
the
solid
line.
It's
much
harder
to
tell
you.
You
know
that
there's
some
shape
there,
but
you
can't
tell
information.
A
That's
there
and
if
you
compare
the
actual
information
that
it
that
it's
trained
on,
you
can
see
how
that
those
informations
how
those
spikes
can
be
used
in
order
to
to
find
information
about
them.
So-
and
this
is
true
not
only
for
for
ai,
but
it's
true
with
any
data
set,
that
you
gather
any
information
that
you're
that
you're
collating
or
putting
together.
So
it's
important
to
secure
that
information,
if
you're
making
use
of
it,
but
should
also
consider
what
information
are
you
bringing
in
what?
A: What is the cost to privacy that you're introducing? If the data set were to be leaked and put onto the black market, there's no way you can take it back off the black market, and so the question becomes: what is the cost to the individuals who are in it? There's a variety of techniques you can use in AI to reduce this. We won't go into them because there's not enough time, but just know that there are things you can do to help. They are not enough on their own, though.
A: So one technique that we are starting to use is to process data on the edge. Just to reiterate, this goes above and beyond AI training: what information do you want to work on, what do you want to operate on? Traditionally, we try to centralize as much information as possible.
A: Instead, the edge devices perform whatever sets of updates or whatever work they need to do, and you may update some aggregate information in a more privacy-preserving way in order to perform some actions. There are two reasons to do it this way. The first is to preserve privacy. The second is that any given piece of information may not actually be worth sending up. To give you an example:
A: Suppose you were doing a bunch of training or work on light bulbs, and you want to know: is this light bulb likely to give out or not? Look at the total amount of information being brought in across the board, for every device, every light bulb in your organization. There's a tremendous amount of information there, and there may be very limited value in collecting every bit of it. So performing the computation at the edge, aggregating that information, and only sending the results back up is also an economic decision, in addition to a privacy-preserving decision.
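The light-bulb idea above can be sketched in a few lines. This is a hypothetical illustration (the class and field names are invented, not from any doc.ai product): each device accumulates its own raw readings locally, and only a small summary ever leaves the edge.

```python
from dataclasses import dataclass


@dataclass
class EdgeAggregator:
    """Accumulates raw readings on-device; exposes only summary stats."""
    count: int = 0
    total: float = 0.0
    maximum: float = float("-inf")

    def observe(self, reading: float) -> None:
        # The raw value stays on the device; only counters are updated.
        self.count += 1
        self.total += reading
        self.maximum = max(self.maximum, reading)

    def flush(self) -> dict:
        # This small dict is all that is ever sent upstream.
        summary = {
            "count": self.count,
            "mean": self.total / self.count if self.count else 0.0,
            "max": self.maximum,
        }
        self.count, self.total, self.maximum = 0, 0.0, float("-inf")
        return summary


agg = EdgeAggregator()
for hours in [981.0, 1002.5, 1010.0, 995.5]:  # e.g. bulb burn-hours
    agg.observe(hours)
print(agg.flush())  # {'count': 4, 'mean': 997.25, 'max': 1010.0}
```

Sending four numbers instead of a stream of raw samples is exactly the economic-plus-privacy trade-off described above.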
A: Another thing we can do is add noise into the system, and one technique for this is differential privacy. Suppose you had a sensitive question you want to ask. If you're in healthcare and you're doing a survey, you might ask: have you tried heroin in the past year? You're going to get a bad outcome from a survey perspective, because people will not want to respond, or if they do, there are incentives, both legal and social, not to tell the truth. One way to get around this is to add privacy into the question itself. You put a person into a room that's private, with a fair coin, and they toss the coin. If it's heads, they answer the question truthfully. If the first toss was tails, they toss the coin again and answer yes if it's heads, no if it's tails; that way, an observer doesn't know what the first coin toss was.

A: What this does is give people plausible deniability. If someone is asked, "hey, you answered yes on this," they can say, "yeah, I answered the coin toss." You don't know whether any given person's response was the coin toss or the real value, but you still preserve the signal of the population: you can still reason about what the whole population is doing, because you know the probability distributions of those coin tosses.
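The coin-toss protocol described above is the classic randomized-response mechanism, and it is small enough to simulate. The code below is an illustrative sketch, not production differential-privacy tooling: because the yes-probability works out to 0.5 · p + 0.25 for a true rate p, the surveyor can invert that to estimate the population rate without learning any individual's answer.

```python
import random


def randomized_response(true_answer: bool, rng: random.Random) -> bool:
    # First fair coin: heads -> answer truthfully.
    if rng.random() < 0.5:
        return true_answer
    # Tails -> a second coin decides the answer, independent of the truth.
    return rng.random() < 0.5


def estimate_population_rate(responses) -> float:
    # P(yes) = 0.5 * p_true + 0.25, so invert to recover p_true.
    yes_rate = sum(responses) / len(responses)
    return max(0.0, min(1.0, 2.0 * (yes_rate - 0.25)))


rng = random.Random(42)
true_rate = 0.10  # hypothetical fraction of "yes" in the population
truths = [rng.random() < true_rate for _ in range(100_000)]
responses = [randomized_response(t, rng) for t in truths]
print(estimate_population_rate(responses))  # close to 0.10
```

No row in `responses` reveals any individual's truth, yet the aggregate estimate tracks the population rate, which is the deniability-plus-signal property the talk describes.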
A: So we have things going on in the industry that are designed to preserve that privacy, and one question that you should ask yourself as engineers is: how can we push to preserve privacy but still make sure that we meet the mission of the company, that we're still able to get those positive outcomes? Driving a bit further into the security space:
A
One
of
the
other
trends
that
that
we're
seeing
is
a
push
towards
zero
trust,
and
I
won't
go
too
much
into
it.
You've
heard
a
lot
here
already
in
a
nutshell,
though,
think
of
zero
trust.
Think
of
as
the
next
generation
of
like
of
of
infrastructure,
moving
away
from
perimeter
defense,
so
perimeter
defense,
you
have
trusted
networks,
you
create
secure
connections
between
those
trusted
networks
and
then
the
workloads
can
then
communicate
if
an
attacker
enters
into
the
trusted
network.
You
are
at
risk,
zero
trust.
A
Instead,
you
trust
the
workloads,
create
secure
connections
between
the
workloads
and,
if
an
attacker
enters
your
network,
that's
they
don't
immediately
gain
access,
so
that
doesn't
mean
you're
that
it's
you're
immune
to
attack,
but
but
you
you're,
constantly
valid,
identifying
and
continuously
validating
the
the
things
that
you're
connected
to
and
things
that
you're
communicating
with
at
a
very
fine
grain
granular
level,
and
so
one
special
call
out
that
that's
in
this
specific
area
as
well,
is
you
start
looking
at
things
like
trusted
execution
environments
and
one
of
one
of
the
reasons
I'm
calling
this
this
one
specifically
out?
A
Is
that
we're
already
starting
to
see
this
available
in
modern,
not
only
servers,
but
also
in
phone
hardware?
There
are
encrypted
sections
of
memory
that
that
a
particular
system
can
can
have
and
what
they,
what
this
does
is
it
is.
It
has
a
level
of
security
that
it
prevents
manipulation
of
a
particular
process
from
from
the
host.
A
In
other
words,
the
operating
system
cannot
make
changes
with
without
breaking
the
the
system
or
without
breaking
the
container,
and
it
also
provides
privacy
so
that
you
cannot
peek
into
what's
going
on
in
that
particular
in
that
particular
container,
and
so
again
these
these
are
available
in
modern
in
modern
hardware,
for
both
servers
and
phones.
I
don't
think
desktop
computing.
A
Has
this
just
just
yet
to
the
same
degree,
but
we'll
start
to
see
that
pretty
soon
and
one
of
the
things
that
I'm
really
excited
about
is
the
concept
of
spiffy
identities
being
tied
into
these
things,
so
that
you're
able
to
then
doing
reason
about
them,
and
so
now
that
we
have
a
private
primer
on
some
privacy
and
security.
Talk
a
little
bit
about
some
of
the
things
that
were
that
we're
doing
with
spiffy
inspire.
So
we
have
a
product
called
passport
passport.
A
Is
our
return
to
work
solution
for
for
helping
companies
get
people
back
into
into
their
workplaces
in
a
in
a
safe
way?
So,
like
we
have,
the
vaccines
are
starting
to
come
around.
We
want
to
make
sure
that
people
are
as
safe
as
possible.
So
what
so?
What
we're
doing
is
we're
using
some
of
the
techniques
that
we
showed
before
in
order
to
help
in
order
to
help
people
answer
a
set
of
questions
that
that
provide
them
with
a
cryptographic.
A
Image,
that's
basically
like
a
qr
code
that
that
is
that
is
signed
so
that
that
gains
them
admittance
into
into
the
building.
And
so
we
ask
them
a
variety
of
questions.
Some
of
the
questions
can
be
added
by
by
the
employer
and
what
the
employer
is
looking
for
is.
A
Is
this
employee
safe
to
enter
the
workspace
or
not,
and
are
we
following
the
rules
of
the
public
health
authority
and
but
to
do
these
that
on
the
edge
not
to
aggregate
most
of
that
information,
but
instead
to
leave
the
sensitive
information
on
on
the
edge
and
only
process
them
on
the
phone
itself?
So
from
an
architecture
perspective?
It's
a
pretty
simple
architecture
in
terms
of
how
we
integrate
things
like
like
spire
and
open
policy
agent.
A
So
we're
we're
our
the
system
has
been
architected
to
to
support
those
that
infrastructure
so
that
the
front
end
and
the
back
end
communicate
over
that
and
also
to
to
gain
some
metrics
and
and
logging
on
the
overall
use
of
the
of
the
system,
whilst
well
not,
but
without
having
that
privacy
information
added
in,
and
so
the
user
doesn't
send
that
private,
that
private
information.
A
So
the
information
we
can
log
by
default
is
already
already
preserves
privacy
to
to
a
wide
degree,
and
so
so
we've
we've
built
this
with
with
passport
with
past
we
built
passport
with
spire
in
mind,
and
it's
it's
one
of
the
areas
that
we're
using
in
order
to
to
work
out
some
of
the
operational
concerns
that
come
up
with
with
spiffy
inspire
and
then
we're
going
to
take.
Take
that
and
then
continue
to
apply
some
of
the
learnings
that
we
have
there
with
other
customers.
A
Other
groups
that
we
that
we
work
with
on
a
regular
basis,
and
so
as
a
wreak
is
and
just
one
last
thing
and
the
privacy
side.
Those
questions
also
have
some
differential
privacy
put
into
them
in
terms
of
how
they're
they're
set
up,
and
so
there
are,
the
results
are
then
aggregated
together
and
and
sent
off
so
that
as
the
employer,
you
only
know
that
the
user
has
price
positively,
attested
to
the
questions
in
a
way
that
allows
them
to
enter.
A
If
there
is
a
failure
in
the
attestation
or
they'd
answered
negatively,
then
we
don't
report
the
negative
response,
and
so
we're
not
the
employer.
Never
stole
this
person
has
covet;
instead,
they
just
don't
get
that
positive
response,
saying
that
the
person
is
allowed
into
the
system.
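The signed-pass-with-no-negative-reporting idea can be made concrete with a minimal sketch. It uses a shared HMAC secret purely for illustration (the real Passport product's token format and key management are not described in the talk, so everything below, including the function names, is an invented assumption); the key point is that a verifiable pass is only ever issued for a positive outcome.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"demo-secret-key"  # illustrative shared secret, not real PKI


def issue_pass(user_id: str, allowed: bool) -> Optional[str]:
    """Issue a signed pass only on a positive outcome, never a negative."""
    if not allowed:
        return None  # negative answers are simply never reported
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "ok": True}).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"  # this string is what a QR code would encode


def verify_pass(token: str) -> bool:
    """Door scanner: check the signature without learning any answers."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


token = issue_pass("employee-123", allowed=True)
assert token is not None and verify_pass(token)
assert issue_pass("employee-456", allowed=False) is None  # nothing to show
```

The verifier learns only "this pass is valid"; the questionnaire answers themselves never leave the phone, matching the flow described above.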
A: So, in short, I'm happy to see so many people here focusing on increasing security and moving towards zero trust. Do ask the question about privacy: are we setting up good privacy structures? Don't assume that SPIFFE and SPIRE are going to save you from all attacks; instead ask: if we had a breach, what then? What has been exposed, and how can we reduce that overall risk? Anyway, I want to thank everyone for your time, and if you have any questions I'll be on the chat. Thank you very much.
B: Frederick, one question, if you don't mind taking it out loud. An orthogonal question that often arises is compliance, and you have been part of the brain trust for the project around all compliance matters, and a sounding board for that matter. What pointers do you have, briefly or succinctly, to direct folks' mindset: what is the right framing to reason about SPIRE in the context of compliance?
A
So
when
you
talk
about
compliance,
you
have
to
ask
compliance
to
what
and
a
lot
of
things
I
look
at
are:
are
hipaa
environments
or
things
that
are
treated
as
hipaa
environments,
even
if
they're
not
hipaa.
So
you
have
to
ask
like
what
what
are
you
trying
to
to
comply
to
so
in
terms
in
terms
of
plants
and
compliance,
adherence
once
you've
identified?
What
that
thing
is
you
have
to
have
that
observability
to
tell?
Are
you
complying,
so
it's
not
enough
to
comply?
A
You
have
to
be
able
to
prove
that
you
are
that
you're
complying
and
this
so
from
an
observable
from
an
observability
perspective.
There's
we
one
of
the
reasons
that
I've
been
pushing
for
spires,
because
I
can
get
that
workload
to
each
workload,
to
have
its
own
cryptographic
identity
and
to
be
able
to
say
that
this,
this
information,
this
particular
system,
we
can
reason
it
shouldn't,
have
access
to
this
data.
Should
it
should
it
be
able
to
decrypt
this.
A
This
piece
of
information
is:
if
you
have
systems
that
are
that
are
approved
for
hipaa,
and
you
have
other
systems
that
are
not
approved
for
hipaa
compliance,
then
you
can
create
blanket
rules
that
that
default
reject
those
those
type
of
things.
And
if
someone
tries
to
write
a
world
over
to
override
it,
then
you
can
flag
those
as
policy
and
get
those
to
get
extra
review
in
those
spaces
on
an
ongoing
basis,
so
they're.
So
in
terms
of
compliance.
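The default-reject idea can be sketched as follows. In practice this policy would live in OPA/Rego, as the talk suggests; the Python below only illustrates the logic, and the SPIFFE IDs and datastore names are invented for the example.

```python
# Workloads, identified by SPIFFE ID, may touch the HIPAA datastore only
# if explicitly allow-listed; everything else is denied by default, and a
# manual override is loudly flagged so it gets extra review.
HIPAA_APPROVED = {
    "spiffe://example.org/records-api",
    "spiffe://example.org/audit-service",
}


def authorize(spiffe_id: str, datastore: str, override: bool = False):
    """Return (allowed, reason); the sensitive store defaults to reject."""
    if datastore != "hipaa-records":
        return True, "non-sensitive datastore"
    if spiffe_id in HIPAA_APPROVED:
        return True, "on HIPAA allow list"
    if override:
        # Permitted, but flagged for ongoing compliance review.
        return True, "OVERRIDE: flag for compliance review"
    return False, "default reject"


print(authorize("spiffe://example.org/web-frontend", "hipaa-records"))
# -> (False, 'default reject')
```

Because every decision carries a reason, the override path is itself observable, which is exactly the "extra review on an ongoing basis" described above.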
A
My
viewpoint
on
this
stuff
is
that
spiffy
inspire
and
the
related
ecosystem
are.
They
won't
solve
the
problem
directly
for
you,
but
they
give
you
the
tools
necessary
so
that
you
can
work
that
out
and
to
give
you
a
really
quick
example,
if
you
have
a
breach
in
in
the
healthcare
space
and
they
they
access
a
specific
database,
you
you
may
have
to
assume
that
the
entire
database
has
has
been
breached.
A
If,
if
they,
if
they've
gained
access
to
a
system
that
has
full
access
to
that
database,
you
can
use
observability
to
potentially
to
to
narrow
that
down
and
so
by
reducing
by
using
spike
spire.
We
perform
the
the
attestation.
We
get
the
identity.
We
then
use
something
like
oppa
to
say
they
can
only
ask
for
things
that
are
related
to
that
specific
user,
and
so
there
has
to
be
some
jwt
token
or
something
similar.
A: The receiving system can then pare down those requests, and you have that observability as to what was called. Then, if there is a breach, you can look at that and work out: yes, there was a breach, but we know which customers were compromised.
A
You
can
then
say
that
these
are
the
the
customers,
potentially
the
exact
customers
and
even
the
exact
data
that
was
exfiltrated
out,
which
is
a
huge
difference
from
saying
you
know,
your
your
customers
have
had
two
million
records
with
some
wide
variety
of
data
coming
out
to
500
or
a
thousand
records
exposed
with
with
exact
data,
and
those
people
have
already
been
contacted
and
so
from
a
trust
building
perspective.
It's
not
just
compliance,
but
also
from
a
trust
building
perspective.
A
Up
helping
tremendously
there
as
well
that.