From YouTube: Successes? Lessons? Istio @ Virtru
Description
Istio Journey at Virtru: I didn't know Virtru uses Istio?! Ben was a software engineer at Virtru and has rich experience using Istio at Virtru and playing with Envoy filters.
In this livestream, Ben joins Lin to discuss his job at Virtru and shares why Istio, how the Istio adoption went, and key lessons he learned along the way while adopting Istio at Virtru.
--
Join Solo on Slack: https://slack.solo.io
Follow us on Twitter: https://twitter.com/soloio_inc
Follow us on LinkedIn: https://www.linkedin.com/company/solo.io/
Past episodes: https://github.com/solo-io/hoot
A: I'm sorry, I'm hearing a little bit of echo, I just want to make sure... okay. So today we're going to talk about Istio, and we're going to have Ben here with us to talk about the Istio journey at Virtru. I actually didn't know Virtru at all, and I didn't know Virtru was using Istio. Ben was a software engineer at Virtru and has very rich experience using Istio, so I'm so excited that he is joining us today to share his job responsibilities at Virtru that related to Istio, why Istio, how the Istio adoption went, and the key lessons he learned along the way while adopting Istio at Virtru. So with that, I'm going to welcome Ben to the Hoot. And Ben, for the audience who are not familiar with you, can you give them a quick intro, please?
B: Yes. Hello, everybody, I'm Ben Leggett. I've been a software engineer for about a decade now. I've done pretty much everything there is to do under the sun, from front-end dev in React, to Kubernetes, to backend in Go, to C# way back in the day, to ops stuff, all that sort of thing. So I'm definitely a polyglot in terms of languages, and I've always been interested in the backend, you know, server-side administration and server-side apps and that sort of thing. So yeah, that's me.
A: And you joined Solo last week, so I'm very, very excited that you decided to, you know, choose Solo out of many possible opportunities out there. So with that, I'd like to quickly ask you: can you give us a little bit of an intro to your previous company, Virtru? Did I pronounce that correctly?
B: Yeah, you did, you did. All right, let me go ahead and share this. All right, so yeah, Virtru is a data encryption and digital privacy company. They were founded several years ago by some people that worked in various federal administrations doing data protection and encryption for the government, the Ackerly brothers notably, and they still run the company. They're great people, it's a lovely, lovely company, and, you know, security is their business, right?
B
It's
digital
privacy
data
protection,
that's
what
they
do
and
their
Court
one
of
their
core
Technologies
is
a
trust
data
format,
which
is
a
form
of
metadata
tied
to
data
with
cryptographic.
Guarantees
that
allow
you
to,
like
you
know,
Express
data
access
policy
in
terms
of,
like
you
know,
can
this
piece
of
data
be
read
by
this
person?
If
you
email
this
data
to
this
other
person,
could
they
read
it
or
not
that
sort
of
thing
so
they
have
a
suite
of
products
for
Enterprise
and
also
some
federal
products
as
well.
B: You know, a security-first company, a company that has always had a pretty strong pedigree of, you know, security: a deep understanding of encryption protocols, a deep understanding of security models, a deep understanding of data protection in general. So yeah, that's them.
B: They recently open-sourced their implementation of the TDF spec as OpenTDF, which is sort of a baseline ABAC stack for data protection, and their products are building on top of that internally as well. So, you know, you have this rather robust platform that you can build on: their enterprise email offering and other things are built on top of that sort of military-grade encryption and data protection product suite.
A: Privacy and, you know, security for data in motion, in transit. Yes.
B: Yeah, so I was on the emerging products and innovation team at Virtru. One of the things we were working on at that time was, you know, new products that went beyond the existing stack that they had. But also, you know, there wasn't any Kubernetes usage at Virtru when I joined, right? They had a suite of products, but nothing was using Kubernetes. Nothing was necessarily containerized, or only lightly containerized, no Istio, none of that stuff, right?
B: You know, new kinds of products that would be built on a service mesh, and new kinds of products that would be built in a Kubernetes context. But also, as part of that, you know, I and some other people got involved in actually modernizing Virtru's infrastructure stack to move to Kubernetes for deployments, in GKE and other places.
B
You
know
just
in
terms
of
being
able
to
increase
our
deployment,
speed,
be
able
to
you
know,
containerize
apps,
being
able
to
ship
deployments
to
people,
because
we
have
you
know
there,
of
course,
some
customers
that
want
to
deploy
their
own
stacks
and
their
internal
networks,
and
we
want
to
deploy
that
stack
for
ourselves
as
well,
for
the
SAS
virtual
did
so
it
behooved
us
to
have
something:
reproducible
containerized,
you
know
a
suite
of
Helm
charts
if
you
will
to
actually
deploy
things
to
kubernetes
environments
and
once
that
point
was
reached
right
once
everyone
got
up
to
speed
on
that
and
again
The
Innovation
team
have
been
building
products
top
of
kudinities
in
istio
for
a
while.
B
So
we
kind
of
back
fed
a
lot
of
that
too,
and
we're
also
some
strong
people
in
the
Ops
Team
there
that
you
know
we're
pushing
that
effort.
Once
we
got
to
that
point,
once
everybody
in
the
company
got
comfortable
with
kubernetes,
you
know
we
start
talking
about
things
around.
You
know
what
you
know:
istio
started
to
make
more
sense
for
production
traffic
it.
It
started
to
make
more
sense
for
production
traffic
because
of
course,
you
know,
we
need
mtls.
B
The
whole
thing
with
virtue
is
you
know
we're
it's
it's
a
zero
trust
company.
We
care
about.
You
know
not
just
having
an
external
perimeter,
that's
defended,
but
like
we
are
bound
by
our
own
standards
and
also
by
many
federal
standards.
You
know
around
zero
trust,
I
believe
actually,
there's
been
some
recent
months.
Federal
government
has
released
several.
B
You
know,
requirements
around.
You
know
all
vendor
products
need
to
start
embracing
zero
trust
models.
They
need
to
start
embracing
these
standards
right.
This
sort
of
thing
is,
the
old
model
is
no
longer
acceptable,
so
you
know
we
were
I
would
argue
well
ahead
of
that
to
begin
with,
but
you
know
it
was
just
something
we
needed
to
do
so.
The
whole
question
of
okay.
B
You
know
we
were
doing
mtls
between
some
of
our
services
to
begin
with
already
right
for
the
critical
ones,
but
it
was
the
idea
of
we
can
you
know
simply
the
basic
thing
for
istyle
right,
you
get
mtls
for
free
right.
You
can
just
have
it
in
your
environments.
You
deploy
workloads,
everything
is
mtls
encrypted.
That's
like
the
Baseline.
The
value
ad
for
for
isteo
is
just
you
know.
You
get
mtls
and
crucially
for
us,
we
can
get
fips
compliant
in
TLS,
because
you
know
that's
a
requirement
for
some
of
our
customers.
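For reference, the "mTLS for free" baseline Ben describes is usually switched on mesh-wide with a single PeerAuthentication resource; a minimal sketch (the root-namespace placement shown here is the standard pattern, not Virtru's specific config):

```yaml
# Require mTLS for every workload in the mesh: a PeerAuthentication named
# "default" in the root namespace makes the policy mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic between sidecars
```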
B: They need FIPS-compliant crypto for everything, and, you know, there are some community builds and vendors that are building FIPS-compliant Istio, so that's not terribly difficult to get going. So that was a big focus for us: moving that stuff out of applications and into the environment, you know, out of the deployment and into the platform level, right? So for the individual apps, we don't have to audit them to make sure that every single team building an app is doing mTLS correctly.
A: Yeah, can I quickly stop you to ask a quick question about the FIPS compliance? Were you guys building the FIPS-compliant images yourselves, or did you leverage vendors?
B
Originally
we
were
so
I
believe
before
there
was
isteo
use.
We
were
some
people
in
the
Ops.
Team
I
think
were
using
plain
Envoy
for
an
actual
Gateway
proxy
Thing
by
hand
at
some
point,
and
they
had
a
custom
fips
built
of
that
when
we
would
move.
B: For all traffic, I think we pulled an off-the-shelf build, I think it was a community one. You know, we didn't actually go to a vendor at that point. We prototyped with a freely available one; I believe it was Tetrate's.
B: FIPS builds of upstream Istio. And that was just, like, you know: does it work? If we install the FIPS variant, do things break, or is everything happy? And everything was happy, and that was actually, honestly, pretty straightforward, and that was a big win from a compliance perspective.
A: Okay, got you. No, that's interesting about your Istio usage. You talked about mutual TLS. What else, beyond mutual TLS, was useful and required from Virtru's perspective?
B: Yeah, yeah. So in terms of, like, the existing products, right... so again, there were two worlds. I was on the, you know, innovation team, so we were doing much more complex stuff with Istio. But for the production wins, for, like, you know, getting into production environments: the mTLS was like a basic win, and then, of course, you know, the centralized control plane for network config, right? Our ops team at Virtru, for production environments, had already done a fantastic job deploying everything and configuring everything GitOps-style.
B
There's
a
strong
get
Ops
culture
with
terraform
for
everything,
even
before
we
moved
to
kubernetes
in
isteo,
and
that
made
the
switch
to
isteo
very
easy
for
everyone.
I
think
because
we
already
had
a
get
Ops
pipeline
in
place-
and
you
know
istio
just
is
a
declarative
networking
policy
right.
So
then
you
can
just
sort
of
roll
that
into
your
existing
get
off
stack,
which
the
Ops
team
had
already
done,
a
fantastic
job
of
setting
up
and
having
in
place.
B
It
was
a
really
easy
and
natural
fit
for
us,
because
you
know
it's
just
more
declarative
config
and
you
can
actually
take
something
that
used
to
not
be
declarative,
config
and
move
it
into
your
existing
get
Ops
pipeline.
It's
declarative,
config.
It's
easy
for
developers
to
understand
how
to
engage
with
that,
because
they're
already
in
the
get
Ops
pipeline
that
kind
of
thing,
so
that
was
a
benefit
as
well
plus.
B: Also around this time, we were in the process of moving our SaaS products to OIDC, and there was the idea of handling certain things like JWT token validation at the platform level, rather than each individual app having to, you know, validate signatures or whatever: validating at the gateway, or even validating, you know...
B: You know, at the cluster level, whatever, as requests come in, just offloading that stuff such that individual apps don't have to worry about it. Those things, I think, were still in flight when I left, but that's certainly the goal. That's always the thing with Istio, right? You say mTLS is the immediate quick win that everybody's like, yeah, that'd be great to have, and then you start to look at it and say...
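As an aside, the platform-level JWT validation Ben mentions is usually expressed in Istio with a RequestAuthentication plus an AuthorizationPolicy; a minimal sketch (the issuer URL and gateway selector are illustrative assumptions, not Virtru's setup):

```yaml
# Validate JWTs at the ingress gateway so individual apps don't have to.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-validation
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://idp.example.com"   # hypothetical OIDC issuer
    jwksUri: "https://idp.example.com/.well-known/jwks.json"
---
# Deny any request that does not carry a valid token.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
```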
B: Actually, we can also offload some of these other functionalities to the mesh, out of the application layer, in ways that reduce the amount of, like, bespoke code, the boilerplate code everybody has to write in every app. So yeah.
B: Yeah, so I'll get into that as well here.
B: That was it, but it was pretty painless, honestly, because, like I said, that GitOps culture was already well established. I don't think it would have been as straightforward for the company to move their existing production apps and environments to Kubernetes, and also Istio, if we didn't already have a really strong GitOps culture. That, to me, was the big win that made it pretty straightforward.
B: ...anymore, that's at the mesh level now. So, you know, just going to teams and saying: it's going to happen at the platform level, here's how that works, here's the security model, you don't have to do this anymore. And just kind of convincing them of the model, understanding how it works and explaining how it works, so they could have, you know, confidence in the platform-level security that we were rolling out. So yeah, for production environments, I think it was pretty straightforward because of those two things.
A: Yeah, and I assume that required changes to the applications, but on the bright side, the changes are removing certain code, like certain baggage the applications had to take care of, like certificate rotation, or kind of getting the certificate and rotating the certificate. So hopefully, you know, people would be in favor of that change, correct? Even though it does mean some changes.
B
And
I
yeah,
and
as
far
as
like,
like
for
lifting
and
shifting
existing
containers
and
stuff
into
a
mesh
like
the
only
big
gotcha
I,
think
that
it's
not
really
big
guys.
The
only
thing
that,
like
occasionally
trips
people
up,
is
apps
that
expect
Network
to
be
available
at
startup
and
negotiating
the
whole.
You
know
you
know
my
my
app
will
expect
Network
to
be
there
for
its
health
checks,
and,
if
you
don't,
if
it
doesn't,
it
is
not
able
to.
B
If
it
doesn't
have
baked
in
retry
Logic
for
its
startup
call
outs,
then
it
may
get
confused
if
the
istio
mesh
is
not
ready
for
it,
it
won't
let
any
of
the
traffic
out
yet,
but
that's
usually
a
pretty
easy
fix
and
arguably
apps
should
be
robust
enough
to
have
retries
on
any
any
kind
of
startup
health
checks
that
leave
leave
the
Pod.
You
know.
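One common mitigation for this startup race, for apps that can't easily add retries, is to hold the application containers until the sidecar is ready; a sketch of the mesh-wide knob (whether Virtru used this particular option isn't stated):

```yaml
# IstioOperator snippet: start app containers only after the sidecar proxy
# is ready, avoiding startup races for apps without their own retry logic.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true
```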
A: Pretty painless. That's really, really cool. By the way, why did you guys choose Istio over other service meshes?
B
Yeah
so
again
that
kind
of
gets
into
what
we
had
been
doing
previously
with
the
experimental
stuff.
Let
me
see
if
I
can
find
it
yeah,
so
we
had
already
for
the
experimental
products
we
were
working
on.
We
had
already
picked
ISO
right.
The
reason
we
picked
it
was
because
we
needed
something
that
was
Envoy
based.
We
needed
something
with
a
readily
available
fips
build
and
we
were
already
using
it
on
that
experimental
team.
That
I
was
on
right.
So
at
that
point
it
already
made
sense.
B: We on the experimental team needed an Envoy-based mesh, but also the production ops team, like I said, already had raw Envoy proxies sitting out there, so they had some Envoy familiarity. So really, an Envoy-based mesh made sense, and Istio was the most well-supported and mature one that also had readily available FIPS builds. So that was kind of what led to it. We did look at a few others, but that was what tipped us toward Istio, yeah.
B: It was a case, kind of, where ops was already looking to do that, but we also fed back some learnings there, where we had, you know, used Istio to do some other things around SPIFFE and SPIRE and some other data protection problems, and solved those there. And it just made sense: we already had some expertise, they already had some expertise, so we could just kind of bring that back. Istio worked well for pretty much everything we threw at it, yeah.
B: Yeah, so again, I think moving the existing stuff, you know, moving our containers and our production traffic and our production environments over to Istio as a baseline, the stuff that the Virtru product suite already had, moving that was easy, or relatively easy, because again we had a pretty good culture, we had good ops.
B
We
had
all
that
stuff,
but
the
difficulties
we
encountered
were
on
the
experimental
side
when
we
were
building
new
products
on
top
of
the
service
mesh
and
to
talk
about
that
I
kind
of
want
to
explain,
virtue's
context.
Right
virtue
is
about
data
security
right
and
they're
about
zero
trust,
but
one
of
the
core
things
I
think
people
get
wrong
about
the
zero
trust.
B: ...right, we have a perimeter, you know. But the whole thrust of zero trust as an argument, the original thesis behind it, was that perimeter-based security is not good enough, because, you know, perimeters can be breached, right? And Virtru's data-focused idea is that, look, data flows over perimeters. Data will always flow over perimeters, and that is why perimeter-based security is not sufficient. You need protections that travel with the data as it flows over the perimeters.
B
You
cannot
simply
have
a
couple
of
perimeters
and
call
it
a
day
right,
which
is
usually
the
default
approach,
and
because
we
have
government
customers
right,
they
were
already.
You
know,
they're
already
using
the
tdf
protocol,
which
does
that
right.
It's
essentially
a
way
to
cryptographically
advise
an
open
standard
that
lets.
You
cryptographically,
bind
metadata
to
data
payloads
and
then
send
those
things
around
over
perimeters
and
then
apply
attribute-based
access
controls
to
that
data.
So
you
know
you
could
say
this
piece
of
data.
B: We needed that, and, you know, that's kind of what we did: bind ABAC policy to data. That's what Mercury does, it binds ABAC policy to data. And when we started to look at that in the context of meshes, it becomes kind of interesting. If you talk about, okay, so, you know, data is going to flow through workloads in the Kubernetes cluster, right? It's going to be...
B
Maybe
maybe
you've
got
some
sort
of
ETL
Pipeline
and
it's
doing
some
sort
of
data.
You
know
from
this
workload
to
that
workload.
To
that
workload
you
know
it's
going
to
be
flowing
through
the
mesh
and
you
need
to
be
able
to
understand
for
ABAC
purposes.
You
need
to
be
able
to
stand.
You
know
what
workloads
can
touch.
What
data
at?
What
point
is
this
workload
on
this
node
authorized
to
touch
that
data,
or
maybe
that
workload
on
this
other
node
is
not?
B
Or
you
know
whatever
else
you
need
to
do
so,
like
that's
kind
of
why
virtue
was
looking
at
this
stuff
and.
A
So
can
I
ask
a
quick
question
as
you
talk,
so
does
that
mean
let's
say
if
I'm
workload,
HTTP
15
right,
so
essentially
you
so
if
I'm
running
like
three
replicas
so
for
HTTP
theme,.
B: Yeah, so this model, yeah. So essentially, right, we wanted to get it down to the narrowest security boundary that Kubernetes natively supports, which is the pod, more or less, right? The pod is, for better or worse, sort of the smallest security boundary that Kubernetes will handle, and, you know, pods can be distributed across many nodes and...
B: One of the things that sparked this off, I think a couple of years ago... let me see if I can find it... okay, there was an IBM paper that said: look, you know, one of the problems with Kubernetes is that the security boundary is the service account model, right?
B
Everything
is
bounded
by
kubernetes
service
accounts
right.
So,
if
you
have
10
pods
on
10
nodes
and
they're,
all
in
the
same
service
account
from
most
security
and
attestation
perspective,
you
can't
tell
them
apart
right
for
Access
decisions
or
anything
else
right
kubernetes
treats
them
as
like
all
in
the
same
security
boundary.
You
can't
really
get
farther
Beyond
and
that
again,
that
kind
of
ties
into
the
whole
role-based
access
control,
which
virtue
very
much,
does
not
use,
and
then,
in
you
know,
versus
attribute-based
access
control,
which
is
not
role
based
and
kubernetes
by
default.
B: Yeah, that's correct, and that's one of the things that we were looking to actually change. Let me see if I can find it... yeah, so this is the actual slide. So yeah, that's kind of what we needed to solve. You know, it's not good enough for some of our customers to say, you know, these ten nodes could decrypt this sensitive data, right?
B
Well,
okay,
but
like
what's
running
on
those
those,
what
is
the
Pod,
the
service
account
is
authorized,
but
what's
the
service
account?
Is
that
actual
container
Shaw
like
blessed
to
touch
that
data?
Is
this
node
running
in
this
account
actually
allowed
to
touch
that?
And
how
can
we
attest
that?
How
can
we
check
that?
How
can
we
gate
that-
and
that's
kind
of
that
was
kind
of
the
question
and,
like
I
said
there
was
that
IBM
paper
in
2020
around
using
this
fifth
Aspire
identities
as
workload
identities
and,
as
you
mentioned,
yeah
so
isto.
B
If
people
are
not
aware
of
okay,
if
people
are
not
aware
of
spiffy
Spire,
it's
a
workload,
identity
framework
that
allows
you
to
basically
Grant
x509
certs
as
workload
identities
to
workloads
right.
So
you
can
have
a
pod
that
has
an
x509
insert
and
it
has
a
spiffy
ID,
which
is
a
sort
of
like
URI
that
uniquely
identifies
that
workload.
There's
a
cert
generated
off
of
that
spiffy
ID.
So,
like
you
could
say,
okay
I
know
that
you
know
this
specific
cert
and
this
specific
ID
belong
to
this
specific
workload.
B
By
default,
istio
uses
50s
fire
under
the
hood
right.
It
actually
will
create
the
spiffy
IDs
for
all
workloads,
but,
as
you
mentioned,
it
only
does
that
for
at
a
service
account
level
right.
So
if
you
actually
go
and
look
at
the
spiffy
IDs
that
istio
generates
it's,
you
know,
slash
service
account
size
whatever
right.
So
everything
with
the
service
account
gets
the
same
identifier
in
the
mesh
right
which,
again,
if
service
account
is
not
granular
enough,
because
it's
our
back
and
our
back
is
not
granted
one
of
our
needs.
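For context, the service-account-scoped SPIFFE IDs Istio mints look like the first ID below; the second is the kind of more granular, pod-level ID Ben is describing (an illustrative format, not Istio's default and not necessarily Virtru's exact scheme):

```
# Default Istio SPIFFE ID: every pod under the same service account shares it
spiffe://cluster.local/ns/prod/sa/payments-sa

# A hypothetical pod-granular ID of the kind SPIRE can mint after attestation
spiffe://example.org/ns/prod/sa/payments-sa/pod/payments-7d9f
```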
B
We
can't
do
that
right.
That
doesn't
work.
So
one
of
the
things
we
did
in
that
experimental
group
was
say:
okay.
Well,
let's
you
know
sort
of
follow
some
of
this
IBM
stuff
and
actually
hack
istio
to
not
use
its
own
SDS
and
not
use
that
service
account
based
spy
identity,
but
actually
Grant
identities
that
are
unique
to
a
specific
pod,
and
actually
you
tie
that
and
actually
actually
Grant
more
granular
workload
search
right,
because
the
other
benefit
of
that
is
suddenly
if
you're
taking.
B: That's true, but you wouldn't be able to tell them apart in terms of making a decision about this identity versus that identity. So...
B
Think
there's
any
identifying
information
in
the
search
that
are
minted
other
than
the
spiffy
ID
I
think
the
spiffy
ID
is
as
granular
as
it
is
it's
baked
into
the
cert,
and
so
that
was
kind
of
what
we
were
fixing
is
like.
We
could
actually
change
that.
You
know
and
again
using
this
IBM
stuff,
where
they
kind
of
did
that
Envoy.
Thankfully,
you
know,
Envoy
is
a
blessing
and
a
curse.
It's
very
API
based
right.
So
the
thing
that
Envoy
uses
to
get
workload
certs
is
a
grpc
API.
B
It's
the
SDS
API
and
istio
ships,
its
own
SDS
implementation,
which
again
defaults
a
service
account
based
Sands
and
Source
account
based
certs
spiffy
Spire,
also
shipped
an
SDS
compliant
endpoint,
and
you
could
essentially
just
you
know,
do
an
Indiana
Jones
swap
you
know,
disable
the
isteo
SDS
server
and
then
put
in
the
spiffy
Spire
one,
and
then
you
could
basically
tell
us,
maybe
Spire,
hey
I'm,
going
to
individually.
You
know
it's
50.
B
Sorry
I
can
do
what
it
does,
which
is
it'll
actually
attest
workload,
so
it
has
like
a
agent
server
model
where
it
will
look
at
each
it'll.
Look
at
everything
in
the
cluster
and
say
Okay.
What
pod
is
this?
What
container
is
this?
What
node
is
it
running
on
if
I,
if
I
already
have
and
allow
less
registration
that
says
this
pod,
this
container
sha,
this
node
are
you
should
you
should
be
if
I
can
validate
these
things
are
true
about
these.
B: ...not just the service account, right? So you can, you know, make sure that it has, like, attestations for AMI nodes. You can make sure that the AMI image the node is built from is a specific thing, or the container SHA is a specific thing, or, you know, the process running in the container has the right privileges. You can attest on many different things, right? And then, if that passes, you get a workload cert. And so we were actually able to swap that out, and then we could entitle individual workloads, pods, separately.
B
You
know
they
have
their
own
unique
identity
and
we
could
actually
entitle
them
differently
because
they
had
a
different
identity
in
the
mesh,
and
that
also
gave
us
a
lot
more
granularity.
Like
I
said
around.
You
know
you
might
have
the
same
container
shop,
but
is
it
the
same
container
shy
on
the
same
Ami?
You
know
or
node,
maybe
not,
and
that
might
give
you
different
identity,
which
then
you
could
have
different
entitlements
for
Access
own,
so
that's
kind
of
what
we
were
doing
there
and.
A
The
nice
thing
about
this
really
so
good
yeah.
So
that's
really
really
interesting.
If
you
don't
mind,
I'd
like
to
ask
in
fact
I
believe
I
was
actually
a
working
at
IBM
with
that
project
coming
out
it
was
a
project
from
the
IBM
Haifa
lab
by
the
IBM
research
team.
So
when
that
project
was
released
as
an
open
source
project,
I
believe
istio
doesn't
have
spell
integration
back
then
right
so.
B: Yeah, so we were using both node and workload level, but the difficulty was, when we started this, and this was again back in 2020... like, you're right, it was the IBM project out there, and you can go look at it, they had a test repo. But again, it was that stuff, it was like: here's how to manually replace the SDS server, right? Istio did not have a good SPIRE integration at all at that point. And we were using it way before...
B: You know, years before, years before 1.14. So it was not a clean integration for us, so we actually had to work around a lot of that: we used the IBM stuff and extended it a little bit, and, yeah, we actually had to do quite a few things to get that working for ourselves. We had to build our own proxy image, because we wanted to change the bootstrap config.
B
We
had
to
change
the
socket
mounts
because
Spire
the
SDS
server
for
Spire
at
that
time
also
required
pod
level
socket
mounts
to
get
the
SDS
input.
They
don't
do
that
anymore.
They've
added
a
CSI
driver
such
that
you
don't
need
every
workload
to
have
a
socket
Mount,
which
is
great,
but
at
that
time
we
needed
to
do
that
and
yeah.
When
when
114
came
out,
it
was
great
because
we
could
drop
all
that
right.
B: ...chart that upstream Istio used. So we were really, you know... it was quite hacky in that way, and when 1.14 came out I was so happy about that. I saw the HPE guys were involved in that too. Because it was a really simple thing, you know: technically, Istio already supported SPIFFE IDs. It didn't support the SPIRE implementation of them, but it did support the IDs, and it supported the SDS endpoint.
B
So
really
in
theory,
just
swapping
those
things
and
changing
the
SDS
endpoint
should
be
something
that
is
configurable,
but
until
114
it
was
not.
Once
114
came
out,
we
were
able
to
get
rid
of
all
those
hacks,
basically
and
use
a
vanilla,
Helm
distribution
of
istio
Upstream,
which
was
great
and
also.
B
And
that
that
actually
drastically
simplified
the
deployment
and
complexity
for
this
product.
We
were
working
on
because
it
was
very
dependent
on
that
mesh
being
there
working
a
specific
way
in
a
specific
configuration
so
yeah.
That
was
a
huge
help.
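For reference, the post-1.14 integration Ben is happy about is configured roughly as follows: SPIRE exposes its agent socket through the SPIFFE CSI driver, workloads opt in via the injection template annotation (inject.istio.io/templates: "sidecar,spire"), and registrations are declared as ClusterSPIFFEID resources. A sketch based on the upstream integration docs (trust domain and labels are illustrative):

```yaml
# Registration handled by the SPIRE controller-manager: pods matching the
# selector get SPIRE-minted identities instead of istiod-issued certs.
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: istio-sidecar-reg
spec:
  spiffeIDTemplate: "spiffe://example.org/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
  podSelector:
    matchLabels:
      spiffe.io/spire-managed-identity: "true"
```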
A: Yeah, that makes perfect sense. Now, before 1.14, if I understood you correctly, you did the customization, right? Virtru did a customization of Istio, but the customization was only needed on the data plane side, which is why you talked about, like, the Envoy images you had to rebuild, and maybe the bootstrap configuration you had to adjust a little bit. But nothing was needed on the control plane side, right? Like, you could still use the official istiod images from the...
B: ...community, yeah. So I believe, before 1.14, what we had to do was: we had to update the bootstrap config for the proxyv2 image such that it would look in a different location for the SDS server socket, and we had to modify the injection webhook that upstream Istio shipped with, to actually add a host mount for the SDS socket for every workload. And that was why we had to build...
B
We
did
that
and
we
also
did
a
few
other
things
which
weren't
necessarily
directly
to
that
that
caused
us
to
build
our
own
image,
but
that,
like
that
was
those
were
the
main
things,
but
the
other
thing
too.
We
had
to
do
and
I
think
this
is
actually
still
a
concern.
B
The
again
like
I
said
istio
expects
spiffy
IDs
to
be
in
a
specific
format,
the
thing
about
spiffy
IDs
and
if
you
haven't
seen
them
they're,
like
you
know,
it's
like
a
URI
style
identifier
like
spiffy
colon,
slash,
slash,
you
know,
namespace
blah
blah
blah
and
you
they
can
be
free
form
they
can
be
in.
They
can
be
as
specific
as
you
want
right.
They
can
be
anything
like
that
yeah
as
long
as
they're
unique
right.
It
doesn't
really
matter
if
you
could
have
path
segments
that
are
as
narrowly
defined
as
you
want
right.
B
You
could
say
slash
pod.
Slash
knows
you
could
do
whatever,
but
isteo
assumes
and
I
believe
it's
hard
coded
right
now
and
still
at
that
point,
and
still
is
that
it
would
be
that
specific
format
of
spiffy
ID,
where
it
was
like
slash
service
account
right
and
so
that
so
actually
overriding
so
actually
using
50
IDs.
It
didn't
follow
that
format
mostly
worked,
but
one
place
where
it
didn't
was
I
believe
there
was
some
default
hard-coded,
cert
validation,
stuff
in
envoys
in
the
envoy.
B
Config
I
still
uses
that
expected
that
s
all
spiffy
IDs
on
all
certs
to
actually
be
in
that
slash
sa
format,
and
it
would
fail
to
like
do
sand
validation.
It
would
cause
sand
mismatch
errors.
If
that
wasn't
the
case,
and
if
you,
if
you
go
dig
into
it,
I
think
it's
like
Envoy,
obviously
supports
like
the
cert
validation
can
be.
You
know
it
can
validate
the
sand
with
any
mechanism
you
want,
but
again
I
think
istio
hard-coded
it
to
slash
sa
whatever,
and
if
it
didn't
match
that
regex
it
would
fail.
B
That's
not
a
it's
a
envelope
limitation.
That's
an
issue
limitation.
We
worked
around
that
by
just
deploying
destination
rules
for
every
workload
and
just
did
a
San
sand.
Just
just
actually,
you
know
alternate
ID,
basically
for
the
search
so
that
you
know
during
mtls
Communications.
You
know
the
validation
would
succeed.
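That workaround maps to the subjectAltNames field on a DestinationRule's TLS settings; a minimal sketch (the host and SPIFFE ID are illustrative, not Virtru's actual values):

```yaml
# Override the SANs Envoy will accept when initiating mTLS to this host, so a
# non-default (here, pod-granular) SPIFFE ID still passes SAN validation.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payments-san-override
spec:
  host: payments.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      subjectAltNames:
      - "spiffe://example.org/ns/prod/sa/payments-sa/pod/payments-7d9f"
```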
A: Yeah, can I ask a clarification question? This is really, really interesting. So the reason you needed to kind of fix up the SAN is because the SPIFFE ID generated on the Envoy side, when you were using SPIRE to mint the identity per workload and per node, yeah, I assume it's in a non-conformant format. Yes?
B: Non-conformant to Istio, yeah. But, you know, again, the only place that actually caused problems was the SAN validation, and once we fixed that, there are no other parts of Istio that care. So, honestly, I don't actually know... I don't know for sure that it really needs to be a standardized format in Istio, but it is one that is there currently, yes.
A: Yeah, it definitely is there currently, and probably in our docs as well, too, yeah.
B: That was pretty much it, and we also did some other things with Envoy filters.
B: At the time, we had to do certain things where we wanted to, for instance, exchange the, you know, the control plane's SPIFFE-ID-minted mTLS workload cert, and we wanted to actually exchange that for an OIDC JWT at certain points. And the hacky way we ended up doing that was with a Lua filter, where we would actually get the workload cert and then hit an OIDC authority in the cluster that, you know, had the SPIFFE trust chain trusted, and so it would actually say: okay, I recognize this workload.
B: WASM probably would have been better, but that's what we were using at the time, and, you know, that's something we used. And, you know, Lua filters are sort of a pain, and again, they are plumbed through to Envoy directly. It's not really an Istio thing, right? It's an escape hatch for things you can't do in normal Istio: you go to an EnvoyFilter, and you're on your own.
B
At
that
point,
more
or
less
that
was
sort
of
difficult,
because
a
they're
kind
of
slow,
B
they're
not
very
easy
to
debug
right,
you
kind
of
have
to
if
you're
riding
a
little
filter,
you
kind
of
have
to
deploy
it
and
then
watch
the
envoy
logs
for
like
a
blue
errors
and
like
turn
on
Lua
Envoy
filter
logging,
specifically
in
the
control
plane,
to
see
that
it's
doing
something
wrong.
That
was
kind
of
a
pain,
but
that's
just
normal
stuff
filter.
B
This
it
was
on
every
workload.
It
was
on
every
every
controlled
workload
in
this
yeah,
because
we,
you
know
we
we.
What
we
wanted
to
do
basically
was
if
a
workload
is
making
a
call
out
to
a
specific
other
service.
We
wanted
to
actually
exchange
that
workloads
search
for
a
oidc
token,
which
is
where
we
stored
our
entitlements,
basically
for
the
ABAC
stuff
right,
because
we
didn't
want
to
necessarily
cram
all
that
in
the
workload
cert
and
that
that's
problematic,
so
we
needed
that
exchange.
B: ...point. I mean, that worked pretty well, honestly, yeah. So again, the major downside to that, though, and this is obvious when you think about it... we were using the Lua HTTP request stuff, right? Upstream Envoy supports making an HTTP request from Lua filters with a constrained API, and we were using that.
B
That's
not
a
terrible
performant,
but
it
was
fine
for
what
we're
using
it
for
given
our
context,
but
the
difficulty
there
is
that
I
didn't
realize
this
at
first,
but
I
did
I
finally
figured
this
out,
but
when
you
make
an
outbound
call
from
the
Lua
filter
in
a
sidecar
right
in
an
istio
sidecar,
the
outbound
call
doesn't
get
decorated
with
an
xfcc
header
like
everything
else
does,
because
it's
that
those
calls
skip.
Those
outbound
calls
skip
the
full
filter
chain
that
a
normal
you
know,
container
through
sidecar
request
would
have.
B: ...call out from the pod, why isn't it appending an XFCC header like it does with everything else? And it's like: oh, because it's from an Envoy Lua filter, and it's skipping the filter chain. I found that out, you know, that's why. Okay, because there are parts of the filter chain it doesn't actually hit, which makes intuitive sense, right?
B
You
want
to
avoid
a
chicken
and
egg
problem
at
some
point,
but
so
we
had
to
come
up
sort
of
a
hack
work
around
for
that,
but
we
did
and
yeah
that's
pretty
much
it
yeah
and
like
again
around
the
the
more
complex
some
more
learnings
around
that
right,
where,
like
I
said
before
race
conditions
between
app
startup
and
istio
sidecar
Readiness
were
always
sort
of
a
pain
until
we
actually
supported
that
stuff.
B
The
other
thing,
too,
is
because
we
again
had
this
model
where
everything
needed
to
be
attested
in
the
mesh
it
needed
to
have
a
workloaded
into
the
Pod
specific
identity.
We
had
jobs
that
also
needed
these
identities
to
be
able
to
communicate
with
other
things.
Kubernetes
jobs,
anistio
doesn't
have
a
really
great.
The
sidecar
model
does
not
play
nice
with
jobs
because
oh
yeah
jobs
are
supposed
to
terminate
at
some
point,
but
the
sidecar
proxy
will
never
terminate
because
it's
not
supposed
to
so.
B
A
good
fit
for
jobs
there,
so
we
actually
had
to
do
the
normal
hacks,
which
you
might
find
if
you
Google,
which
is
curling
posts
to
kill
the
sidecar
proxy,
which
meant
our
jobs
had
to
be
sort
of
aware
there.
That
was
not
great,
but
that
that's
something
that
I
think
beside
Carlos
I
still
imagine
probably
will
make
a
lot
simpler.
Yeah.
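The "curl a POST to kill the sidecar" hack usually looks like the tail end of the Job command below; the pilot agent's /quitquitquit endpoint on port 15020 is the standard mechanism, while the Job itself and its image are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: example/job-image:latest   # hypothetical image
        command: ["/bin/sh", "-c"]
        args:
        - |
          run-the-actual-task              # placeholder for the Job's real work
          # Tell the istio-proxy sidecar to exit so the Job can complete.
          curl -sf -XPOST http://127.0.0.1:15020/quitquitquit
```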
B: Yeah, and the other thing, too, that I think ambient will also help with is that, obviously, currently in the sidecar model, you have to have an init container that rewrites...
B: ...which, from my perspective, is a security-model risk, right? Because, yeah, we don't want the pod to be privileged to do nefarious things, and if you have to have an init container in the pod that is privileged to do that, that's another point of trust, and we don't really want that. And the ambient model gets around that, because, I believe, if I'm not wrong, you're not doing iptables rewrites from within the pod anymore.
B: So that was kind of why we were tracking the ambient stuff: you know, when this becomes mature, it'll knock out a few of these problems, and it'll also probably improve our security posture just by default of its architecture. So yeah.
A: Yeah, that's great, thank you so much for sharing all the learnings. So, folks, if you have any questions, now is a good time to ask. You can just type them in the chat and we'll try to bring them up as you have them. Let's see... anything else I missed? Oh, I have one more question. So we talked about the per-workload, per-node identity. Now, would you continue to use the same Istio authorization policy with that?
B
So
what
we
were
doing
essentially
was:
we
were
using
the
control
plane
to
proxy
certain
kinds
of
traffic
from
certain
workloads
to
a
proxy.
We
built
that
actually
did
traffic
level
encrypt
decrypt,
so
the
only
real
and
again
I'm,
not
super
real
Network
proposition,
the
only
real
like
controls.
We
were
applying
in
the
SEO
control
plane
around
this,
where
we
were
using
egress
gateways
to
control
pod
egress
right.
B
So
if
you
were
having
controlled
pods
in
in
this
mesh
right,
we
want
to
say
it
can't
get
out
to
do
things.
First
of
all,
it
can't
do
anything
unless
it's
got
an
identity
right,
which
is
the
MTL
assert,
which
we
give
you
on
a
specific.
You
know
pod
basis
through
our
Aspire
ax
right,
so
without
that
it
can't
do
anything
it's
in
a
black
hole.
Essentially,
we
also
want
a
black
hole
in
egress
that
it
gets
it
would.
It
would
be
able
to
do
if
it
is
a
tested
and
entitled
with
dessert.
B
Unless
it,
you
know,
matches
our
policy.
So
really
it
was
mostly
the
egress
and
Ingress
stuff
that
we
were
controlling
for
those
workloads,
I'm.
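A common way to get that black-hole-by-default egress posture in Istio is the outbound traffic policy, which blocks traffic to anything not explicitly registered with the mesh; a sketch (whether Virtru used this exact knob alongside their egress gateways isn't stated):

```yaml
# Black-hole any egress not explicitly declared via a ServiceEntry.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
```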
A: Wow, this is something, you know... if I had a little more time, I would love to actually try it, you know, with the SPIRE integration, to be able to do workload- and node-level identity and be able to write authorization policy to enforce that. I think that would be a super cool thing to do. Let me know if you guys agree, in the comments. All right, I think this has been a really interesting set of learnings from you, Ben. I appreciate your time.
A
I,
don't
see
any
questions
coming
in,
so
we're
going
to
wrap
it
up.
So
thank
you,
so
much
ben
for
your
time.
How
do
folks
reach
out
to
you.
B
Yeah,
thank
you
Lynn
for
running
this
and,
if
yeah
reach
out
to
me
on
LinkedIn
or
you
know,
other
avenues
or
Benjamin
Leggett
at
solo.io.
If
you
have
questions,
I'll
be
happy
to
feel
them.
A
That's
awesome
yeah!
Thank
you
guys
for
tuning
in.
If
you
guys
find
this
interesting
feel
free
to
reach
out
to
us
on
the
solo
slack,
and
if
you
have
any
other
topics,
you
would
love
to
see.
You
know
play
me
or
open
GitHub
issue
in
our
food
repository,
so
I
am
really
grateful
for
you,
guys,
listening
and
who
liked
our
past
live
stream
in
YouTube,
so
happy
learnings
and
we
will
see.