Description
#OpenSource #Istio
At this meetup, we celebrated 4 years in the open source world. The presentation and questions covered contributions that have helped build Istio. For the demo, please watch https://youtu.be/4AONRFwUsz8
A: A quick introduction: I am Pratima Nambiar. I lead the teams that build and operate the service mesh and ingress gateway platforms at Salesforce, and today I will walk you through our journey.
A: I'll start by setting some context on the evolution of service mesh at Salesforce. Then I'll talk about how we migrated from our in-house control plane to Istio, and some of the contributions we made to Istio in order to enable that. Then I'll cover some community contributions that were crucial for us at Salesforce to adopt Istio and to solve more use cases, work we are watching very closely and also contributing to. And finally I will cover how we do Istio upgrades, because Istio is quite embedded in how we operate.
A: We started looking at how to solve these common problems of service-to-service communication using the service mesh design pattern about four or five years ago. We started off with just Java libraries embedded in services to solve these problems. Then we moved to an out-of-process sidecar, essentially following the mesh design pattern with the Linkerd sidecar, I think around 2017.
A: However, at that time there was no good open source product we could use as the control plane, so we started off with an in-house control plane that was bare minimal but worked for our needs, to configure the Envoys we had. Then in 2018, when Istio released its first version, we looked at it, and upon initial review it appeared to be solving some of the same problems we were trying to solve.
A: Our in-house control plane worked great for us; in fact it still runs in some of our production data centers, because we haven't fully migrated to Istio. But we were starting to get a lot of use cases, say stateful set support, or service-specific config, that made it harder and harder to evolve our in-house control plane. So that is why we looked at Istio in 2018: upon initial review,
A: it looked like a feature-rich control plane solving the same problems we were trying to solve, and therefore a good fit, and it also seemed to have a great community contributing to the product. So we decided to do a PoC on Istio and eventually migrated to it.
A: For our adoption of Istio, we first had to get feature parity with our in-house control plane in order to even think about adopting it. Then, in our existing deployments, as we were swapping out the control plane, our intention was to keep the customer interface changes minimal, so that we could just swap it out without service owners having to do much.
A: So in those deployments we brought up both control planes, the in-house one and Istio, and migrated services gradually; in new deployments we just run an Istio-only mesh platform. Now, for the first part, getting feature parity with our in-house control plane, this is what it looks like at a high level: where we used to run our in-house control plane, we now run istiod.
A: The one I'd like to highlight here, to start with: when we wanted to do the PoC, we ran into our first blocker with Istio, which was that at the time, in 2018, Istio worked with Citadel for certificates and did not support an internal or custom CA. At Salesforce we are required to use our internal CA for certificates, so we had to integrate with that.
A: That was our first contribution to Istio: making mTLS work with a custom CA, and that's what we run in our production data centers today. This is what it looks like now; it didn't look like this two years ago. Envoy, the istio-proxy, communicates with the pilot agent via the secret discovery service (SDS)
A: API, and the pilot agent then reads the certs from file-mounted (or in-memory-volume-mounted) certs rather than from istiod or Citadel. We have a custom, Salesforce-specific cert refresher running in our Kubernetes clusters that refreshes these certs, so we're able to get custom certs, provisioned by a custom CA, to work with Envoy for mTLS.
A: If you want to understand what the configuration is: you configure the proxy to say you want to use file-mounted certs, and point it at the actual directory where you've mounted these certs, both the client-side and the server-side certs. Then Istio, the proxy really, does the magic of hot-reloading the certs when they get refreshed, with no interruption to traffic.
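To make that concrete, here is a minimal sketch of the client side of a file-mounted certificate setup using Istio's DestinationRule TLS settings; the resource name, host, and paths are illustrative assumptions, not the exact Salesforce configuration.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: upstream-mtls                        # hypothetical name
spec:
  host: my-service.prod.svc.cluster.local    # hypothetical host
  trafficPolicy:
    tls:
      mode: MUTUAL                           # present file-mounted certs, not istiod-issued ones
      clientCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
```

An external refresher (like the Salesforce cert refresher she mentions) can then rotate the files in place, and, as described in the talk, the proxy hot-reloads them without dropping traffic.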
A: So this was our first contribution to Istio, and we didn't just do that first PR to make it work: over time, as Istio's mTLS features and certificate management changed, we have evolved this custom CA integration so that it continues to work for us. We were involved with the pre-SDS implementation,
A: we added integration tests to ensure this integration with our custom CA will not break, and when Istio switched to the SDS implementation we made changes there too, so that the feature keeps working as Istio evolves. As you can see, it's one of the fundamental features we would expect out of the mesh platform,
A: so it's crucial for us. Moving on, and coming back to the diagram we were looking at earlier: our in-house control plane publishes metrics. Essentially the proxies publish metrics to a metrics service; Envoy has a metrics service API that our metrics relay service implements, and the relay transforms or filters metrics before publishing them to our metrics platform, the intention being that our service owners should see no change in their interface or in how they observe traffic.
A: A couple of years ago Istio would only publish metrics via Mixer, which aggregates metrics and does not let you publish lower-level telemetry. We enhanced that so we could publish lower-level Envoy telemetry, and we did a couple of things there. We made the metrics service configurable: as you can see, you can point the proxies at a particular metrics service
A: if you want, and that communication also uses mTLS, so you can configure TLS for it.
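As a sketch of what pointing the proxies at a metrics service can look like, assuming the Envoy metrics service setting in meshConfig; the relay address is hypothetical:

```yaml
# meshConfig fragment, e.g. in the istio ConfigMap or an IstioOperator spec
meshConfig:
  defaultConfig:                 # ProxyConfig applied to all proxies
    envoyMetricsService:
      address: metrics-relay.telemetry.svc:15000   # hypothetical relay implementing Envoy's MetricsService API
      tlsSettings:
        mode: ISTIO_MUTUAL       # the metrics stream itself uses mTLS
```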
A: We also made some enhancements to publish these metrics under an alias, so that the metrics queries our service owners were using could be preserved and they didn't have to change the metrics and alerting they were used to. Coming back to that diagram: let's talk a little bit about this config webhook. With our in-house control plane we were applying some default config to all the services.
A: For example, we have a resiliency framework, a testing framework that runs through common scenarios like rolling upgrades, service crashes, and transient failures, and we tweaked the resilience policies of the proxy to maximize the success rate of requests when these issues happen. We then applied those resilience policies using our in-house control plane.
A: Now, when we switched to Istio: as you're probably aware, Istio lets you configure these things through the Istio API at a per-service level, so you can define VirtualServices and DestinationRules per service. We didn't want to expose that to our service owners, because they were used to just running the proxy and getting mesh capabilities.
A: So what we did was have this config webhook automatically generate the VirtualServices and DestinationRules. But then we hit a point where we couldn't apply those resilience policies, because they were not exposed in the Istio API.
A: So our next set of improvements was about being able to apply these resilience policies via the Istio API, and we did this in two ways. In some cases it made sense to make a resilience policy the default in the Istio product, so we discussed it with the community, and in some cases, for HTTP for example, they became defaults.
A: In other cases we enhanced the Istio API, for example to configure TCP keepalive attributes and some retry configuration, so that we could then use that API and apply the policies via our config webhook.
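For illustration, here is roughly what such webhook-generated config can look like with today's API: the TCP keepalive knobs on a DestinationRule, and retries on a VirtualService. Names, hosts, and values are made up.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service               # hypothetical
spec:
  host: my-service.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        tcpKeepalive:            # TCP keepalive attributes exposed via the Istio API
          time: 300s
          interval: 30s
          probes: 3
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service.prod.svc.cluster.local
  http:
  - retries:                     # retry configuration applied per service
      attempts: 3
      perTryTimeout: 2s
      retryOn: gateway-error,connect-failure,refused-stream
    route:
    - destination:
        host: my-service.prod.svc.cluster.local
```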
A: Let's look at the next highlighted section. As I mentioned before, our monolith runs on bare metal, and we onboard those bare metal services onto Istio: we run the istio-proxy next to the monolith, and it talks to the control plane in Kubernetes.
A: However, in order for these bare metal services to participate in the mesh, we continue to use ZooKeeper for announcements, as a service registry for these external services, and we have a sync service that creates ServiceEntry objects. A ServiceEntry is an Istio API that allows services not running on Kubernetes to participate in the mesh just as if they were Kubernetes services. So we create these ServiceEntry objects in Kubernetes via the sync service.
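A minimal sketch of such a ServiceEntry, assuming a hypothetical bare metal endpoint whose address the sync service learned from ZooKeeper:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: monolith-baremetal       # hypothetical name
spec:
  hosts:
  - monolith.internal.example.com
  location: MESH_INTERNAL        # treated as part of the mesh, not an external service
  resolution: STATIC
  ports:
  - number: 8443
    name: tls
    protocol: TLS
  endpoints:
  - address: 10.1.2.3            # bare metal host announced via ZooKeeper
```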
A: Now, when we did that, we noticed the control plane wasn't as performant: we had a ton of different ServiceEntry objects, and the control plane wasn't fast enough processing them; we saw issues with the reliability and availability of the control plane. So we made improvements to how the control plane processes these ServiceEntry objects, and in general we've made changes to improve the performance of config generation.
A: As you may already be aware, Istio takes the Kubernetes service objects and the Istio configuration, the CRDs you create, processes them, and converts them into Envoy config. Within that config generation logic, we made performance improvements over time so that it is efficient.
A: Some other contributions are worth calling out, especially as we started running in production. One: when Kubernetes sends a shutdown signal, a SIGTERM, to a pod and the istio-proxy is shutting down, it wasn't triggering the listener draining process.
A: We also made some changes to the Sidecar CRD and how it is processed. The Sidecar CRD allows us to configure the listeners of a proxy, and we changed how the clusters are created based on that information, reducing the number of clusters to only what's required for that Envoy, using the Sidecar CRD.
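For reference, a minimal Sidecar resource of the kind she describes: it scopes a namespace's proxies down to only the hosts they actually need, which is what shrinks the generated cluster set. The namespace and hosts are illustrative.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: my-app              # hypothetical namespace
spec:
  egress:
  - hosts:
    - "./*"                      # only services in this namespace
    - "istio-system/*"           # plus the control plane
```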
A: More recently we've adopted the new Istio feature of intercepting DNS requests, the DNS proxy feature, and we made some enhancements to that.
A: We use it for cross-cluster communication: if a client running on one Kubernetes cluster and a service running on another cluster are communicating via TCP, we need the DNS proxy to resolve the name.
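Enabling the DNS proxy feature is a one-line proxy setting in meshConfig; a sketch of the documented opt-in flag for the feature she mentions:

```yaml
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"   # sidecar intercepts and answers DNS queries itself
```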
A: A good example of where our bet on Istio paid off was when Envoy deprecated the v2 xDS API and we had to switch to the v3 API. With our in-house control plane, we would have had to refactor it to support v3; otherwise it would have been incompatible with newer versions of Envoy.
A: A quick note about EnvoyFilters. EnvoyFilter is a CRD, an API Istio provides, that lets you modify the Envoy config the control plane generates, so you can make finer-grained changes directly to the Envoy config without any Istio API change. We've used EnvoyFilters in some cases; they have helped us try out Envoy config that's not yet exposed by the Istio API, to see whether
A: it's even a good change to make. We run some tests, get comfortable with the change, and then take it to the community and figure out how to expose that feature as an Istio API. Rather than making the Istio API change first, in which case you have to be very careful about how you expose it, we try out the change directly as Envoy config.
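The shape of an EnvoyFilter used this way looks roughly like the sketch below: it patches one generated Envoy resource for selected workloads. The selector, the patched field, and all values are hypothetical; the point is only the structure.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hcm-tweak                # hypothetical name
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: my-app                # hypothetical label
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE           # merge raw Envoy config the Istio API doesn't expose
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          server_header_transformation: PASS_THROUGH   # example of a knob not in the Istio API
```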
A: Once we have good confidence in the change, we take it to the Istio community and make those changes. So EnvoyFilters have definitely helped us adopt a service mesh for a variety of use cases, though I will talk about one of the side effects of using EnvoyFilters, especially when you have to do upgrades, in a little bit. Stateful set support is another broad area where we are really thankful for all the contributions from the community.
A: We are adopting the mesh for a ton of different stateful-set-type use cases, and this support has definitely helped us; we've also contributed a few changes related to it as we have adopted Istio for these use cases at Salesforce. I talked about multi-cluster mesh a little while back. When I say multi-cluster mesh, what we at Salesforce are looking for is that, within the same network boundary,
A: we might run multiple Kubernetes clusters with different services on them for a variety of reasons, but the mesh platform on top of them should abstract away the fact that we run multiple Kubernetes clusters: a service running on one cluster can communicate with a service running on another cluster no differently than if they were on the same cluster. That's what we're looking for, and we've adopted the multi-cluster mesh features of Istio to make that happen.
A: We have done some spikes, but we haven't actually rolled it out to production yet; we're in the process of doing that. Again, there is the WebAssembly extension support, which is essentially running custom code dropped into the data plane, onto a running Envoy, to add custom business logic.
A: And there are some scale-related enhancements. As I said, we have made some performance-related fixes ourselves, and that will continue, but scale-related enhancements are also being made within the community. An example is improving the efficiency of config delivery between Envoy and the control plane by using the delta xDS API; that will be beneficial for us as our mesh grows.
A: We use Spinnaker at Salesforce as our deployment pipeline, and we use Helm to define these operators and CRDs, and in general to configure Kubernetes objects, and our pipelines deploy them. Before I talk about how we upgrade, let me set some context: as I mentioned, we use the operator flow.
A: We also have several different environments that we propagate upgrades through before we actually upgrade our staging and production environments, so we have a few environments before those, and we have alerting in place in each of these environments using the istiod telemetry.
A: We also alert on Envoy telemetry to monitor the health and performance of the mesh. This is what our setup looks like and how we use the operator CRD in our canary Kubernetes cluster, essentially the very first cluster that we do the upgrade in.
A: We create an IstioOperator CRD and deploy a canary version of the control plane to a special namespace, the canary namespace; we create another IstioOperator CRD and deploy the control plane to the main control plane namespace; and we use an IstioOperator CRD to create the ingress gateway in the ingress namespace.
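A sketch of the canary-control-plane piece, assuming Istio's revision-based canary flow; the revision value and profile are illustrative:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: canary                   # hypothetical name
  namespace: istio-system
spec:
  revision: 1-10-0               # deploys istiod-1-10-0 alongside the existing control plane
  profile: minimal
```

Sample workloads can then opt in to the canary control plane by carrying the matching istio.io/rev label on their namespace or pods.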
A: In the canary Kubernetes cluster, which is the very first cluster we attempt the upgrade in, we first upgrade the canary control plane. We have some sample services that communicate with this canary control plane, and we run some tests; we rely on our alerts to ensure everything's okay, and then we validate additional types of use cases using the canary control plane. Then, once we are ready,
A: we let this version bake in the canary Kubernetes cluster for a while, and once we determine we're good with propagating it to additional, higher-level environments,
A: we do an in-place upgrade: we upgrade the primary control plane in that Kubernetes cluster, validate with sample services, and then either initiate a rolling restart of the services or let them restart on their own to get the new proxy injected; then we upgrade the ingress gateway. And, as I mentioned before, we continue to monitor all of these environments, so we let it bake
A: in these higher-level environments for a bit before we propagate further and eventually deploy to staging and production. Actually, I should go back to this diagram. As I mentioned before, we use EnvoyFilters for certain types of configuration, and we've not had too many issues with this upgrade process,
A: but one issue we have seen, and are hoping to do something about, is that an EnvoyFilter sometimes becomes incompatible with the control plane, and we've seen cases where those error out; we're hoping to fix that soon. EnvoyFilters have a proxy version you can tie them to, so that's potentially one way we might fix it.
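A sketch of that pinning, using the EnvoyFilter proxy-version match; the regex and patch body are placeholders:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hcm-tweak-1-10           # hypothetical: one filter variant per proxy version
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      proxy:
        proxyVersion: ^1\.10.*   # this patch applies only to 1.10.x proxies
    patch:
      operation: MERGE
      value: {}                  # patch body elided
```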
A: We haven't taken action on this yet, but that's one of the ways we hope to fix it. And maybe a couple of times we've run into some bugs. When we run into bugs while validating, while getting ready to qualify a version of Istio, we go back, fix them in the open source product, bring that back to the canary cluster, and only then propagate it forward.
A: Let's see, I think that's all I had for today. We've come a long way in our journey to adopt mesh, and specifically Istio, and we still have a long way to go. A couple of years ago, when we started off sending that first PR to the Istio open source product, it was a little rocky.
A: It took a little while for it to be looked at and checked in. But since then we've had a great experience making changes to the product: very good responses from community members on the PRs we send, great discussion, and changes actually get checked in pretty quickly. Also, as we have adopted Istio for a variety of more complex use cases, we have
A: been able to rely on the expertise of the community to try different things to solve those problems in how we use the product; and in some cases, if we find a gap, we've been able to come to consensus on how to fix it in the product, send that back as a PR, and it gets approved and becomes part of the product.
B: Thank you so much for your presentation and all the details on the contributions. I think we have several questions for you here in the chat box.
C: There are a lot, Maria. Yes, I can go to the very first one. So, one of the very first questions: I was going to say it was from Vishal, but actually no, it was from Prashant, who asked: how can I manage multiple clusters with a single Istio service mesh?
A: Yeah, if I understand the question correctly: the way we run Istio, there is a primary cluster where the control plane runs, along with the config webhook and the other things I talked about. Then say you have a remote cluster that you want to make part of the same mesh.
A: There is a way to tie the two together using service accounts, and Istio has a way to configure that. Once you tie the control plane to the remote cluster, a service running in the remote cluster can participate in the mesh as if it were in the same cluster, because it is talking to that same control plane.
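Mechanically, that tie is a credentials Secret in the primary cluster; istioctl generates one that looks roughly like this (the name and cluster id are hypothetical, and the kubeconfig is elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2   # hypothetical; istioctl create-remote-secret emits these
  namespace: istio-system
  labels:
    istio/multiCluster: "true"         # istiod watches for secrets with this label
type: Opaque
stringData:
  cluster2: |
    # kubeconfig with a service-account token for the remote cluster (elided)
```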
C: Thanks, Pratima. Vishal, I hope that answers your question; if not, you can raise your hand and come on the mic, or drop a follow-up question in the chat. The next question is from Chris Kim, and oh, before I ask it, I just want to remind everyone participating right now: if you would like to be considered in the Istio swag campaign, please fill out the Google form.
A: We make changes based on our requirements, but we've mostly made changes to the networking layer. And I think Ram answered the question; I'm looking at the chat there.
A: There are different working groups based on the different areas of the product, and as a community member you can participate in any of those working groups, based on what you need. And there is a doc on how to send PRs; to send PRs you don't necessarily have to be part of those working groups, you can contribute based on your requirements.
A: Someone asked about the additional latency. Yes: this is not so much something Istio itself impacts; it's the proxy you run next to your service that impacts latency. From what we've seen, it adds about one or two milliseconds, because the traffic is routed via the proxies instead of going directly from service to service.
A: So you can expect about two milliseconds of latency there.
A: Again, I think there is a document we can post in the chat here; I thought someone had already posted it. There is a performance benchmark blog, or an article, on the Istio website that you can read up on.
A: Yeah, so we will be using it. We are trying it out in our test environments, and we see that it will definitely help us get rid of some of the ZooKeeper and sync-service machinery that we have. So I definitely encourage you to try it out: we've tried it in our test environments and it works. We haven't rolled it out yet, but we are definitely going to use it.
A: In fact, that was one of our pieces of feedback when we were first starting to use Istio, the fact that it had very little support for VMs and bare metal. Since then a lot of work has been done for VM support, and this auto-registration is just an awesome way to do it.
D: Sorry, I'm nervous. I have one question regarding, so, I think in your presentation you covered the EnvoyFilter, and I have one question about it; I'll try to simplify it as much as possible. Regarding the HTTP filter: can we use an HTTP filter to forward, let's say, all HTTP requests from the Istio ingress to some audit API?
D: Yeah, I think, let's say, catch a body or a header from a request and forward it to some API which works with Elasticsearch, say, and uses it as an audit API, a logging API, let's say. Yeah.
A: A log service. And Ram, am I describing it correctly? We don't actually use it yet, but there is an access log API that you can use to send those requests, part of the data, to a service that you run; I think the service has to implement a certain API. We can try to find the documentation and send it to you later, but there is a way you could do that.
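She appears to be describing Envoy's gRPC access log service (ALS). Assuming that route, the mesh-level wiring is a sketch like this; the receiving address is hypothetical, and the receiver must implement Envoy's AccessLogService gRPC API:

```yaml
meshConfig:
  enableEnvoyAccessLogService: true
  defaultConfig:
    envoyAccessLogService:
      address: audit-als.logging.svc:9001   # hypothetical gRPC collector, e.g. feeding Elasticsearch
```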
B: Here, this one is from Vishal: Istio, or a service mesh, adds an additional layer in routing, so it increases the overall latency. Have you observed latency in your infra?
A: Like I mentioned before, when you route the traffic via the proxy on the client side and then the proxy on the server side, we expect around two milliseconds of latency, and we have seen that. But beyond that we haven't seen any other issues with latency.
A: The Sidecar CRD is meant for that. Istio, as you're aware, takes all the services in a Kubernetes cluster and converts them into Envoy config; you can scope that down using Istio's Sidecar CRD API. And I think more recently, in 1.10,
A: we've also added a discovery selectors feature: if you have, say, a Kubernetes cluster where only five namespaces, or only a certain set of objects, are really mesh services, you can use discovery selectors to scope things down to just those. So those are the two ways you can do that.
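A sketch of the discovery selectors option; the label is a made-up convention, and only namespaces matching it are watched by istiod:

```yaml
meshConfig:
  discoverySelectors:
  - matchLabels:
      istio-discovery: enabled   # hypothetical label marking mesh namespaces
```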
B: Great, thank you. And we have more questions coming; if anybody would like to ask their question, please raise your hand and we can make space for you. This one is from Marius Rasvan Palimari.
A: Yeah, as I mentioned, we use an internal CA, so that is what manages everything for us. Even across multiple clusters we actually use a separate, internal CA; that is what manages the trust for us. We don't use Citadel, because that runs in one cluster, but our PKI infrastructure takes care of that.
E: Yes, I want to follow up on the mTLS mode. I was wondering what security mode you are running your sidecars in: are you using STRICT, which switches to Istio mTLS? Because that actually comes with its own challenges.
A: Yeah, the mesh is configured to be STRICT, because that's a requirement: we have to be mTLS everywhere. And yes, the certs our PKI generates do have the SPIFFE identity in them.
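The mesh-wide STRICT setting she refers to is a single PeerAuthentication resource in the root namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system        # root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT                 # plaintext connections are rejected
```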
A: That's part of our flow. And there was a point, and there is a way, to turn that off with Istio, because in some of our environments... yeah, I'll have to get back to you on the details, but there is a way you can turn it off. Essentially you can remove the Istio authentication piece, the thing that kicks off those extra checks you talked about.
A: I think it's called istio_authn. I will have to get back to you on how to turn it off, but it is possible; we have tried it out before, because in some places we didn't have the SPIFFE ID, like you mentioned.
A: If you can connect with me so that I have your email address, I can send it to you; or if you post in our group chat, I can try to answer it there.
A: In our case, we picked Envoy as our proxy technology because it really suits our needs. There are other control planes available that are not Istio, but I can't really speak to them, because it's been a while since we picked Istio and we haven't looked at some of those products in more recent times. So I don't want to answer that question, because I don't know where they are today, and I don't want to
A: give you misinformation, I guess. But when we looked two years ago, Istio was definitely what we found to be feature-rich and suited to our needs.
F: One other technology: Ambassador?
A: That is one I know of. I don't know whether it works more as an ingress, with more gateway-type features, or whether it has mesh-type features; I haven't looked at it in recent times.
B: Okay, we have a few more questions, and I just wanted to remind everybody: if you are looking forward to connecting afterwards, please make sure to join the Istio Slack space. That's an easy way to find Pratima and other Istio community members and to continue the conversation about using Istio in production. I can share the link to the Slack space in a minute, but otherwise it's on the website, istio.io. This one is from Senghei Marusi.
A: We look at latency, request rates, and error rates; from a performance perspective it's throughput and latency that we have looked at. And this goes beyond that: we have a separate performance engineering team that runs performance loads, so I don't have information about what types of tools they use, but we have a team that runs these performance loads to test with respect to performance. The telemetry that we use is Envoy telemetry, right?
A: Envoy has both client-side and server-side telemetry, and we use that: we monitor latency, error rate, request rate, and throughput to see how it's performing. We rely on that.
B: Great. And we have a lot of comments about the upgrade process you shared and what amazing work your team has done running things at this kind of scale; thank you. It's also good to see your upgrade flow and to note that you use both canary and in-place upgrades.
A: I saw someone ask about message queues and caching, and whether we run those with Istio and Envoy. Yes: we have Redis, which uses Envoy; it's onboarded onto the mesh and exposed via Istio. We also use Qpid in our internal deployments; it uses more TCP-style communication between the client and server, and it is also onboarded onto the service mesh.
A: If you're asking about, say, mobile, maybe you can clarify what you mean, but we mostly use the service mesh internally, for internal communication. If your question is about mobile, we don't use the mesh there. We do use it at the ingress, so any traffic coming in from the public internet flows through the ingress gateway, which is essentially an Istio ingress gateway.
C: Sure, and before I do that, I just launched the poll survey. If everyone on the call could take a few minutes to answer it, that would be extremely helpful for us as we plan out these community events.
C: Okay. Pratima, I'm not sure if you answered this question yet; there's a question from Paul: do any of your microservices live outside of Istio? Were you able to get to that one already?
A: No. So, some of our services actually don't use Istio, as in they don't use Envoy either; say, for example, a service that does its own mTLS termination and validation. We onboard those using the ServiceEntry CRD, so that the clients running on the mesh can still communicate with them.
C: Thank you. I think we have time for just one more question.
A: The way we've approached this is that we focus our contributions on solving business problems at Salesforce.
A: We're taking on Istio where we're solving business problems at Salesforce, and as part of that, if we need to make contributions to the open source product, we invest the time in making them, rather than the other approach, where you wait for someone in the community to pick it up and solve it. If it's crucial for us, or even otherwise, we make the contributions to the open source product ourselves.
A: We invest time in that, but it's usually because we know it will solve a business use case for us, or a future use case that we know we are expecting.
A: I would love to, and I know that Rama, who is from Salesforce, is a very active member of the Istio community, and there are some people who spend personal time contributing as well, absolutely. But from a company perspective we solve for business use cases of Salesforce, and that is what drives our contributions to the product, even for the future.
A: If we make a change today, it's likely we're using it today or we're going to be using it in the future, and that's what drives us, if you will.
B: And with that, we have come to the end of this meeting, but please feel free to join the Istio Slack space and continue the questions over there. I would like to thank everybody for joining the meetup today, and a special thank you to Pratima for this amazing presentation and for all the dedication to the questions and comments that followed. Also, thank you so much for joining and for sharing some Istio resources in the chat box; I think those were very appreciated by all the participants. And if you would like to revisit this presentation, the recording is going to be available on the Istio YouTube channel.