From YouTube: 2021-01-28 Istio Community Meeting
Description
On January 28, 2021, the community hosted a meetup featuring a presentation called "Multi-cluster Istio operations": Istio control planes across multiple clusters, including configuration, security, and WebAssembly (Wasm), presented by Christian Posta, Field CTO, Solo.io.
A
Welcome, everybody, to the Istio community meeting and to 2021 with the Istio community. I'm really happy that you're here, and I'm sorry for the connection difficulties. I think there are no community updates or user group updates so far, so I'm going to give a brief update. As you may have heard, we are ramping up to produce IstioCon in late February; it's going to take place from February 22 to 25. If you are not yet familiar with the site, you can find it as part of the Istio website. Here, I'm dropping the link in the chat: it's events.istio.io/istiocon-2021. We hope to see many of you there, to learn how people use Istio in production and to learn from leadership, as well as the roadmap for the project. I believe we should be able to announce the program pretty soon, so that's exciting! Please tell everybody what is exciting about Istio, come to the conference, and share with the community. I'm adding that to the agenda.
B
All right, thank you, Maria, and definitely thank you for giving an update on IstioCon; that is the week of February 22nd. I do want to say something about that too. I'm pleasantly surprised; it was really good to see the number of submissions that came in for IstioCon.

If anyone ever had a doubt about the dominance of Istio as the service mesh that people run in production, I think the submissions that we saw completely put that to rest: large companies, small companies, lots and lots of companies running Istio in production, successful and excited to share their journey, and we're hoping to have a really good IstioCon program.

I know some of the organizers are on this call, and it's made it incredibly difficult to try to pick the sessions that are going to make the cut. We've even done things like trying to expand the number of slots significantly, almost doubling them, so we're doing what we can. But I was honestly blown away to see the number of submissions and the people excited about Istio and ready to share their stories.

So thank you all, the folks in the community and other users: thank you for those submissions and for making it difficult for us. That's a good problem to have. So, yeah, I'm happy to jump right in here.
B
It's going to be a little bit of a different format; I'm trying something new here. Let me share my screen. If you can see my screen: I'm going to go a little bit off the normal script, where people show slides and demos. I'll show some demos, but nobody's traveling right now, and as part of my work as an architect working with customers, I really enjoyed traveling and being in front of them, talking about solutions, sketching things on whiteboards, and having that collaborative feel. That's not something we've been able to do. So one of the formats that I wanted to try, at least for this meetup, is to go back to that drawing-board feel and sketch concepts as they pop into my head down onto paper, or in this case an iPad, and try to explain and illustrate some of the concepts that come up when people discuss deploying Istio, especially across multiple clusters. We're going to take a look at what I've been working on, and what the folks here at Solo have been working on, in terms of WebAssembly, and just try to share some of this stuff; I'm happy to make it interactive.

Like I said earlier, the best part was being with people, exchanging ideas, and trying to draw them out. I would prefer this to be a two-way street, so please jump in and ask questions or clarifications, or raise objections and pushback, if you see an opportunity for that. With that, thank you for joining the community meeting; cross your fingers, let's hope the iPad and everything cooperates here, and let's get going.
B
So the first thing I wanted to talk about is deploying Istio into multiple clusters. That is a use case that we here at Solo work on very closely with organizations: some extremely large organizations with hundreds, if not more, clusters; some folks who are just getting started, who might have a single cluster and may be going to a handful of Istio clusters; and anything in between. I'm going to focus on a couple of problems. This discussion could go on for days, but I'm going to focus specifically on: what is the model for how you manage these clusters? How do you update the config? What does that configuration look like, and what are some of the complexities that arise in those scenarios? What are some of the models? Then we'll switch gears completely, take some of the models that we describe, and see how they fit in with deploying and managing extensions to the mesh based on WebAssembly. WebAssembly is something fairly new that just hit upstream Envoy, although it has been around in various dimensions and flavors in the Istio proxy for a little while, especially around customization of metrics and so on. But we're on the cusp of being able to use WebAssembly for a growing set of use cases to extend the capability of the mesh. In fact, at least here at Solo, we are working with some large customers that are putting WebAssembly into production right now, which a month or two ago I would have thought would be scary. But it's working out really well.
B
So let's see if I can switch into one of these diagrams, and let's start from the beginning. If you go to the istio.io website and look at the documentation, there are a few different flavors for running Istio across multiple clusters. There is the model where everything is on the same network, which, if you can enjoy a situation like that, then by all means go for it. What that means is you might run multiple clusters with non-conflicting, or rather routable, networks between each of the pods, so that a pod in cluster one talking to a pod in cluster two doesn't have to do anything special: it just talks to the IP of that pod or that Kubernetes service, and everything will be routed and everything will be fine.

Another model is where the clusters are in separate networks, and in that model the workloads communicate with each other by first going through an ingress gateway, to get to that other network. Now, how does this service here know that, when it talks to a service B, that service B actually lives over on a different cluster? Well, Istio has something called the service entry. We put that piece of config here and we give it a name of B; let's say there's a service B, and then, when this particular proxy wants to talk to service B, under the covers, using Istio's redirection mechanisms, we're able to force the traffic to go to the ingress gateway that lives in the second cluster.
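To make that concrete, a ServiceEntry for this cross-network case might look roughly like the following sketch; the hostname, port, and gateway address here are illustrative, not taken from the demo:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: reviews-cluster2   # illustrative name for "service B"
spec:
  hosts:
  - reviews.default.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  # Instead of a pod IP, the endpoint is cluster two's ingress gateway,
  # which forwards the mTLS traffic on to the real workload.
  - address: 203.0.113.10   # example address of the remote ingress gateway
    ports:
      http: 15443           # the conventional multi-network gateway port
```

The key idea is that the endpoint address is not the workload itself but the remote cluster's gateway, so the sidecar's redirection sends cross-cluster traffic through that hop.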
B
Now, this model will happen when you're on different networks and you need to hop between networks. There's another variant to the multi-cluster model, which is sharing or not sharing a control plane. In the case that we see here, we have two different clusters with two different Istio control planes, and the reason we typically start out with this model is that it allows for various types of failure isolation. If this cluster goes down, we're not trying to share a control plane, so that cluster can go down and not affect the other cluster.

So then, if we have separate control planes, we need some way of telling the control plane in cluster one (we'll call this one cluster one and this one cluster two): hey, this is service A; service A talks to service B, but that service B also lives over here on cluster two, along with the routing that happens between service A and B. Basically, it's an entry into Istio's service discovery registry that says: okay, well, service B is over here also. In 1.8 (and although this capability had existed before, it was officially documented in 1.8), the model that the docs suggest is to connect cluster one to cluster two's Kubernetes API. So we have the Kubernetes API here, and we also have the Kubernetes API in cluster one, and what the docs suggest is to create a little secret on cluster one that knows how to authenticate to cluster two, do the same thing on cluster two back to cluster one, and have them pull the endpoints for all the different services. So if there are endpoints for service B over on cluster two, then Istio on cluster one will know that there's a service B on cluster two. All right, and in this model we don't have to create all of the different service entries and so on.
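In the 1.8 docs this is done with istioctl; a sketch of the two commands (the kube context names `cluster1` and `cluster2` are illustrative):

```shell
# Give cluster1's Istio control plane read access to cluster2's API server
istioctl x create-remote-secret --context=cluster2 --name=cluster2 | \
  kubectl apply --context=cluster1 -f -

# And the reverse, so cluster2 can discover endpoints in cluster1
istioctl x create-remote-secret --context=cluster1 --name=cluster1 | \
  kubectl apply --context=cluster2 -f -
```

Each secret is a kubeconfig that istiod uses to watch the remote cluster's services and endpoints.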
B
Now, one of the challenges, or unfortunate side effects, of this, as we've seen with some of the folks who have adopted this model, is that the more clusters you have (let's draw a couple more here), and the more you want to federate the services across these different clusters, the more you have to create these security tokens to talk to the Kubernetes API in the various clusters. So every cluster ends up having access to the Kubernetes API of every other cluster in that group.

And so, if we accept that, then we have to ask: what is a different model? How can we solve the problem of service discovery, which we need to do, without sharing these secrets and shuffling them around? Because you could have 100 clusters, and if one cluster gets compromised, then all of them could potentially be compromised. It's not a very safe model. One of the options that we've been working on with customers is this: let's see if I can go to this diagram. It actually goes back to the model of placing the service entries where they should go, and it has a couple of advantages.
B
One: instead of giving every cluster access to every other cluster, you could have something else; let's call it a config automator, or whatever. This config automation can say: all right, cluster one and cluster two, for the services that need to talk with each other, we will put service entries onto those clusters, and for the ones that don't need to talk, we won't add the services.

The second thing is: now, instead of all the clusters knowing about all the other ones, we can have a model where only the config automator needs to know how to access and push config to the various clusters.

So now we kind of go back, and we have a little bit more control over what gets exposed and how it gets exposed, and we've scoped the security surface, or potential threat surface, down to, let's say, a single service: the config automator or orchestrator piece. Now, even that has its own drawbacks, because there are definitely large organizations where, as I mentioned, they might get into the thousands of clusters, so we look at this from a scalability standpoint as well as a security standpoint.
B
Now we have one single component that has access to everything, and that model may or may not work for some folks. We're working with people for whom it does work, and we have people for whom it doesn't. For those folks where it doesn't work, instead of one single component knowing about everything and being able to communicate with everything, what we've seen work is the opposite.

So, instead of pushing these configs, and demanding the security of being able to talk to all these different clusters from a single spot, what we've said is: why don't we have the clusters (oops, wrong pen) actually connect up to the management plane, or some config control automation, and say: hey, I'm cluster whatever, why don't you give me the config that I need? In this way we have a much more decentralized, pull-based model, where a single component doesn't have access to everything, and the security boundaries have been pushed out into each individual cluster, so that if you get access to any one of them, you don't have access to everything.
C
Hey Christian, this is Lin. I think it's really interesting how you are describing this problem; certainly there are some challenges. I've also heard that there are concerns about allowing even just read access, allowing the primary cluster to access the remote cluster's API server just with read access. So I've heard similar concerns on that. I think it's interesting how you guys tackle this problem.
B
Definitely, yeah. So far I'm just trying to talk about the problem generally, and then I'll go into a little bit of how we're working with folks to solve this problem. I first wanted to lay the groundwork: what's available, what some of the patterns are, what some of the drawbacks are, and talk about it like that. But then, yeah, I'd love to go into some of the stuff that we're specifically doing.
B
Yeah, it's very cool. Thank you. Anyone else have thoughts or questions, or acknowledgement that you're following along?
D
They're,
following
along
this
makes
sense
yeah
they
sent
you
here
from
zendesk
technology.
It's
a
task
that
we
have
on
our
backlog
to
build.
We've
been
calling
this
internally
a
remote
service
reconciler,
which
was
your
previous
slide.
That
was
the
idea
we
had
in
mind
without
digging
further
but
yeah.
I
really
appreciate
you
sharing
your
thoughts
and
looks
like
we're.
Gonna
need
to
revisit
and
reconsider
that
architecture
that
we've
planned.
B
Yeah, sure, happy to share. Okay, so that is a great segue to another part of the problem, which I alluded to but can go into a little bit more: the configuration problem. One of the things we see, somewhat anecdotally, in some of the big outages that make headlines, and even in your own organizations (this may be true there as well; it certainly has been in my past), is that changes to the system typically consist of more than just application code changes. The more stuff we push into the infrastructure, the more we lean on configuration.
B
Shuffling configs around, getting the configs right, and so on: configuration ends up being a gigantic reason why things fail or take outages. So another part of this problem is that it's not just a matter of, say, applying the same Istio config everywhere. If the clusters are different, heterogeneous clusters, then the config changes slightly, and the direction of traffic is sort of implied in the configuration, and somehow that needs to be accounted for. Now, if you take this and spread it out to, let's say, a whole bunch of different clusters, now we have to shuffle these configs around and make sure the implicit directionality and all the other stuff is taken care of. Even if you had all of this in your GitOps pipeline, there are still a lot of moving pieces. Let me just switch back to this one; this is the one I wanted to be on anyway. There are still a lot of moving pieces here and a lot of ways that things can go wrong. So what we've been working on with folks is: how can we...
B
How can we simplify this a little bit and abstract it away, so that the model that you use as a user is a little more simplified and focused on what it is you're trying to do with the platform as a whole, whatever that is: you're trying to orchestrate a release, or you're trying to introduce something new. Maybe you have a bunch of different clusters running your application workloads, and you have a separate cluster where you introduce canary releases, and in that separate cluster you want to get some of the traffic that's flowing in the rest of the system over to this new canary cluster. What you care about is doing the release and making sure the canary works.
B
I want to make sure that if things start to fail, they fail gracefully; that in certain cases you take locality into account, and so on. But the actual details, whether there are virtual services on this cluster or that one that match, and service entries, and so on: you focus on the part that matters to you. And that comes back to this concept of something that can manage that configuration, something that can actually configure clusters the correct way, with the right virtual services, with the right direction that everything needs to be configured in, while simplifying the model that the end user has to worry about. So, without more drawing and stuff, why don't I get into what we've been working on and how that fits with this problem, and then we'll take a little bit of a shift and extend this same problem, applying it to how we manage customizations to the data plane, customizations to the service mesh itself. All right.
B
So let's move this thing out of the way. What I'm going to show you real quick is a system (let's see if I can bring this thing back up) whose architecture looks like this: we have a few different clusters, and we have some piece that is smart enough about the configuration and can automate it, and it has an API that users can use to influence the different configs that end up getting sent down to these clusters.
B
So if we just look real quick at the demo: we have two different clusters, in this case running on GKE, cluster one and cluster two, and they run some parts of the Bookinfo demo from Istio. You can see product page, details, and reviews v1 and v2; in the second cluster we see reviews v1, v2, and v3. So you can think of cluster two, maybe, as our canary cluster, where we introduce a new version of reviews, and we want, from the rest of the traffic in the fleet, a percentage of that to come to this canary cluster. And if things fail at that cluster, then whatever, we'll blow it away.

So the first thing that we want to take a look at is the app, to make sure that it's working: load the product page, cross fingers, all right. So this is the normal working app that we have running. This happens to be in cluster one, but reviews v1 and v2 are in cluster one and cluster two. If we refresh, we see reviews v1 and v2. That's all normal, but here's what we want to do.
B
We want to release reviews v3, and we want to do that in a canary fashion. We're going to use an API, a little bit more simplified, but one that does take into account the fact that there are multiple clusters, a fleet of clusters. In this case we do want to be explicit and say: route (I'm going to choose a large number, so we don't have to keep refreshing) actually 75 percent of the traffic coming into the reviews service to the canary that is running in cluster two, with this traffic policy resource. So this is the API that we came up with.
B
I was about to say it suits most people, but actually it doesn't; there's no one API that suits most people or everyone, and there are some reasons behind this. This API is actually customizable: you can change the shape of the API depending on who needs to use it; I'll talk about that later. But anyway, we have this traffic policy API that is a little bit more focused on what the source of the traffic is.
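As a rough sketch of the shape of that resource (the field names follow Gloo Mesh's TrafficPolicy API as I recall it, and the service and cluster names are made up for illustration, so treat this as an approximation rather than a copy of the demo):

```yaml
apiVersion: networking.smh.solo.io/v1alpha2
kind: TrafficPolicy
metadata:
  name: reviews-canary
  namespace: gloo-mesh
spec:
  # Traffic destined for reviews in cluster one...
  destinationSelector:
  - kubeServiceRefs:
      services:
      - name: reviews
        namespace: default
        clusterName: cluster-1
  # ...is split 75/25 between the canary in cluster two and the local version
  trafficShift:
    destinations:
    - kubeService:
        name: reviews
        namespace: default
        clusterName: cluster-2
        subset:
          version: v3
      weight: 75
    - kubeService:
        name: reviews
        namespace: default
        clusterName: cluster-1
      weight: 25
```

The point of the API is that the user states the intent (75 percent to the canary cluster) and the management plane works out the per-cluster Istio resources.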
B
So let's apply this to our config automation, or what we would call a management server or management plane, and then, when we come over here, cross our fingers, and refresh, we should see some traffic go to the red stars, which would be reviews v3. So it is going across from cluster one to cluster two because of this configuration and the automation under the covers. If I take a look at our policies, we see our policy here: we've routed 75 percent of the traffic to a different cluster.

That's all good, but under the covers, if we look at one of the service meshes, we see the correct virtual services, the correct service entries, destination rules, any of the Envoy filters that we need to use here, and gateways; all of those things have automatically been created and put on the right cluster where that particular mesh lives. So if we happen to look at a virtual service here, this is an Istio virtual service.
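For a sense of what "the correct virtual services" means here, the generated routing might look roughly like this sketch (the `.global`-style remote hostname is illustrative; in practice the management plane writes whatever hostname its generated ServiceEntries declare):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        # Hypothetical hostname for the remote copy of reviews, backed by a
        # ServiceEntry whose endpoint is cluster two's ingress gateway
        host: reviews.default.svc.cluster-2.global
      weight: 75
    - destination:
        host: reviews.default.svc.cluster.local
      weight: 25
```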
B
So that's an example. We're using a tool called Gloo Mesh, which is an open source project specifically targeting this problem (and more, but certainly this problem that we described here), and it can also be used to extend the capabilities of the mesh.
If you've been watching Istio and Envoy and some of what's been happening around them with WebAssembly, you'll understand that WebAssembly can be used to customize the behavior of the proxy: you can pick the language that you like for WebAssembly and write the functionality in that language. The set of supported languages and their limitations is still emerging, but that is a way to customize the proxy. Now the question is: how do you actually deploy that? It's nice in a demo to be able to just put it onto the file system and hack the config to load it off the file system and so on, but across multiple nodes, across multiple meshes, how do you manage or deploy WebAssembly?

So I guess the first thing that we want to do, before we deploy it, is actually build a new WebAssembly module. I'm going to use some tooling that we have, but you can use any tooling to build your WebAssembly module.
B
I'm going to choose AssemblyScript, and we'll target Istio 1.8, and that will basically bootstrap for us a project that has all of the right versions of the SDK that you might want to use for writing your WebAssembly module, and it spits out sample source code that you can go in and start editing. In this case we're going to edit the headers on a response: we'll add a "hello world" header.
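The tooling in this part of the demo is Solo's `wasme` CLI; a sketch of the flow being described (the image name is made up, and the exact flag names are from memory and should be checked against `wasme --help`, so treat these as an approximation):

```shell
# Bootstrap an AssemblyScript filter project targeting Istio
wasme init ./add-header --language assemblyscript --platform istio

# Build the module as an OCI image and push it to a registry
cd ./add-header
wasme build assemblyscript -t webassemblyhub.io/example-user/add-header:v0.1 .
wasme push webassemblyhub.io/example-user/add-header:v0.1
```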
B
That part is its own topic again, but let's try to build it. There's a chance... do I have Docker running? Let's find out. I actually didn't run through this; it's a bit off the cuff. But what we're going to do here is build our WebAssembly module as an OCI image, and then from there we can push it into a registry. We'll give that a second, and then, once it's pushed into a registry, the question becomes: all right, how do I deploy this thing? So we built it and we have it; let's go over here, let me just make sure this is going to work, and then let's pop into this other demo.

And what we're going to see is: if we make a call between a pod and the reviews service, we get a certain behavior, the out-of-the-box behavior; we get a response that looks good. But what we want to do is install a WebAssembly module to change the behavior, in this case (simple demo) of the response headers.
B
What we're going to do is define, using some configuration, where to apply this change, where this extension should exist, and what module we want to use, pulling it from a particular repo. So we take this declarative model here and say: all right, we're going to apply this to reviews v2, which happens to run on cluster one, and apply it to our config automation server, the management server.
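The declarative resource here is Gloo Mesh's WasmDeployment; the sketch below approximates its shape from memory (the API group, field names, and image tag are best-effort illustrations, not copied from the demo):

```yaml
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1alpha1
kind: WasmDeployment
metadata:
  name: add-header
  namespace: gloo-mesh
spec:
  # Which sidecars get the filter: reviews v2 on cluster one
  workloadSelector:
  - kubeWorkloadMatcher:
      clusters:
      - cluster-1
      namespaces:
      - default
      labels:
        app: reviews
        version: v2
  filters:
  - filterContext: SIDECAR_INBOUND
    wasmImageSource:
      wasmImageTag: webassemblyhub.io/example-user/add-header:v0.1
```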
B
Then, cross our fingers: if we get the deployment itself, we should see a status. All right, it's been deployed, and now, if we try to call reviews, it might take a couple of tries, because we applied it to reviews v2 and it's going to load balance between the two, so let's just try a couple more times. A couple more times, hopefully it doesn't break... okay, there we see we do get the extension; we do get this change in behavior through WebAssembly. We did this specifying cluster one, but we could have specified any of the clusters, or all of the clusters. And the way this is working: the WebAssembly module wasn't baked into the image or anything like that. We dynamically loaded it at runtime and actually streamed the module over a secondary xDS channel, which could then be dynamically loaded into the proxy and, obviously, change the behavior of the proxy. And if we look back at the UI, you can see that this is something we added to the particular workloads, and it's showing that it was installed nicely. All right.

So I was hoping to come on, see whether or not the iPad illustrations would work, share a couple of things that we've been working on with our users and customers here at Solo, and then show you what we're doing around WebAssembly and how that integrates with Istio. But yeah, I won't take any more of your time, and I'll leave it open for questions.
A
Thank you very much, Christian. I know that we got off to a late start, so we may not have time for going into breakout rooms, but maybe we can have a conversation about this demo, and if you have questions or comments, please feel free to speak up. If that's not an option for you, we can also read the chat.
B
The architecture of the configuration service allows that; we just haven't gotten to it. We have users who are about to run into that, and we will support it, but so far everybody has been on the same version of Istio. This absolutely is intended to be used across any versions of Istio, and potentially even other meshes. That is the intent, to support that use case.

And the approach we have for that is: ahead of time, you can configure it. If you say "apply this config" and it doesn't apply successfully to all the clusters, then roll it back; or you can configure it by default to go ahead and apply it, and then, if one of them doesn't work out the way you're expecting, at least notify the user and give enough debugging tools to help figure out why. So that's kind of where we are right now: either roll back or fail forward, and give supporting tools.

All right, well then, happy to hear from any other folks. Let me see if I can get this back.
D
A
quick
one,
so
I've
seen
a
video
about.
It
was
like
a
two
three
minute:
video
about
easter
1.9,
and
there
was
a
very
vague
mention
about
a
global
registry
that
it
will
work
align
with
kubernetes
multi-cluster
support.
Are
you
familiar
with
those
changes
that
are
coming
down?
The
line
will
that
somehow
conflict
with
the
work
that
you're
doing
here.
B
I
personally
have
not
sounds
like
it.
Would
it
wouldn't
conflict,
it
would
make
it
easier
for
us
right,
but
no
I
haven't.
I
haven't
seen
that
yet
are
you
thinks.
F
Hi everyone, this is Derek from Zendesk. A lot of the Istio multi-cluster stuff we've seen kind of anticipates a model where you'll be load balancing requests for the same service across many clusters.
F
Our architecture is such that each cluster is its own shard; they run the same copies of the same services, but we don't want to load balance across them. So would you be able to address remote instances of services deliberately and explicitly, without having to... you know, make a request to service B in cluster two that won't go to cluster three or four?
B
Yeah, that's a great question. With the model, at least, that we have within Gloo Mesh, the services in different clusters will be uniquely identified as being in those clusters, so you can route specifically to those; or, if there are other ways that you want to group them, you can specify a global name and have them grouped to one cluster or two clusters or whatever. So if they're sharded across clusters and they're not heterogeneous, they're just each in their own cluster, then you would be able to directly address them.
C
And just to answer your question: in Istio there is also something called the locality load balancer. It is enabled by default, but only when you also configure outlier detection inside your destination rule. So if you have a destination rule with the outlier detection property configured, you can just say: hey, I always prefer local services first, and then, if the local services are failing and not meeting my requirements, fail over to the remote cluster. So I think that could potentially work for you as well.
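Concretely, the kind of DestinationRule being described looks roughly like this (the host and the thresholds are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    # Locality-aware load balancing only kicks in when outlier detection
    # is configured, which gives Envoy a health signal it can use to
    # decide when to fail over to another locality.
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 60s
```

With this in place, Envoy prefers endpoints in the local locality and spills over to remote-cluster endpoints only when the local ones are ejected.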
B
Cool, all right. Well, I appreciate you all, and reach out if you have questions offline. And yeah, thanks; thanks, Maria, for organizing and for having me.
A
Yeah, thank you for presenting this very original demo. As a reminder, this is going to be available on the Istio YouTube channel in the next few days. I just wanted to let you all know that Jamie has just added the tentative schedule for IstioCon to the Istio community calendar. This is not written in stone yet, so the hours might vary a little bit, but I just wanted you to know that this is where we're going to be updating it, so you're up to date.
A
Next month we are not going to have a community meetup, because we will be hosting IstioCon, but we do hope that all of you can join us for the IstioCon social hour, at least if you're not coming to the conference; it's taking place on Thursday and it's already on the calendar.