From YouTube: Istio Community Meeting 2-7-2019
A

I would like to share the new blog policies that we just published, so I'm putting up the link to the blog post; you can find more information there. As we have often said in this meeting, documentation is a great place to start contributing. A blog post is definitely now a very viable option. We have a whole process for how you can add your content and what type of content is being accepted.
B

So there are basically four types of content that we think are appropriate for the blog. The first is a post detailing your experience using Istio or configuring it, particularly if it's something new that no one has ever tried before; that's very good content for a post. Also, if you are highlighting or announcing new features, or if you want to announce or recap a nice new event, that's also a great blog post.
B

So it's very important: if you want to have, for example, detailed steps on how to accomplish a task, and not only that, but you want that procedure to be kept up to date, tested, and maintained, then that type of content should not go in a blog post. It's very important that you understand that if you publish instructions in a blog post, that content won't be revisited as the software keeps changing.
C

And the goal there is that we can communicate clearly to people who go to the blog that the blog is not reference documentation. Again, if what you're writing is essentially a sample or a good reference, you should put it over there in the documentation. When people look at the blog, they should understand that it was, you know, someone's experience at a point in time, not necessarily the best practice for this thing or a reference for that thing. If what you're writing is a reference, go ahead and put it in the docs.
C

We had an interesting idea yesterday. Someone who works a lot on the Kubernetes community and working groups had an idea for us. He said one thing we might consider is, on a regular basis (with the number of working groups we have, this might mean quarterly), to have working group leads do a kind of status update to the community as a whole. It wouldn't be that much work, I think, for any given working group, but just a "hey, here's what's going on."
A

It's actually part of the working group charters that they will do that, so I think it's, yeah, it works well for Kubernetes. Kubernetes, of course, has so many things that they actually rotate who presents; each one has an assignment, and there are some that are not as active, you know, with new architecture or new things being proposed. So sometimes the updates are not that exciting, but yeah.
C

Great, I loved that idea as soon as I heard it, so I'm going to propose to the working group members that we establish a quarterly schedule for them to come and do a presentation on what's up with the working group. Good, great. Any disagreement? Anybody think that would be a waste of this meeting's time?
G

Let me... you can see it? Yep, we can see it. Yeah, okay, good. So, as Lynn was saying, the idea today is to educate on what's happening with multi-cluster support in the upcoming 1.1 release, and not only use the opportunity for education, but also use it to solicit feedback from the community and ask people to tell us: are we doing the right thing? Is this what you need? Do you need something else?
G

Before we go into the actual implementation, I just wanted to take a minute to go into what we mean when we say multi-cluster. I think to a lot of people it's somewhat context dependent, in the sense that what it means to have a multi-cluster Istio depends on your use case and how you want to use it.
G

So, on top of the mesh itself: we were talking about mesh, cluster, and network, and people sometimes assume that there is some kind of containment relationship between them. What we found is that pretty much any combination, any relationship (one-to-many, many-to-many, whatever) can exist. What we want to support is ultimately, of course, everything, but the order in which we do this is going to depend on what the community needs.
G

What is the prevalence of each one of the use cases? Then we can get support in earlier for those cases that are more common. At a high level, I think there are two paradigms or two patterns that we've seen. One is a single mesh, which means multiple clusters interconnected, with a single logical view of all of the services; services are shared across multiple clusters, and they ultimately behave as one cluster from Istio's perspective.
G

Or some other kind of management tooling; that's going to depend on the way you do it, but ultimately this is a set of clusters that are part of the same mesh. The other typical use case that we see is a federation of meshes: relatively independent meshes, under different management or administrative domains, where you want to selectively expose or share services between the different meshes.
G

You know, to apply Istio security policies, connect to Mixer, return the response, and get all of the operational data that Istio needs collected somewhere within the mesh. Do feel free to just interject with any question; just stop me whenever. I'll try to keep this short so that we have enough time towards the end for discussion, but if there's something that is urgent, bring it up.
G

Please. For 1.0, though, we had a single mesh, and there was an implicit assumption that it can span multiple clusters as long as you have a single network across those clusters, meaning that you have a different set of addresses in each one of the clusters: there's no IP reuse, there's no pod IP reuse, there's no service IP reuse across different clusters, but still, all those addresses are routable. If your deployment can satisfy that, then you can use Istio 1.0 to create a single mesh out of the box.
G

What does it look like, picture-wise? We have a single network, and we have only one cluster running the Istio control plane. Let's say it's cluster one; it runs Pilot, Mixer, and Citadel. The remote clusters, like cluster two, are going to be running a much smaller installation that only includes the sidecar injector and the other remote components. When you make a connection, you're going to get, from the DNS resolution, an IP that belongs to the remote cluster.
G

We're introducing the concept of split-horizon EDS with SNI-aware routing as one solution, and there's going to be cluster-aware service routing as another type of solution; each one of these will probably be a better fit for different use cases that you have. Split-horizon EDS basically means that each cluster... and maybe it will be easier to see it in a picture.
G

Let's say Pilot for now (those things are changing), but each one of the endpoints, be it v1 or v2, is going to be associated with a network name, indicating to the Istio control plane in which cluster and in which network that endpoint resides: whether it's in the local cluster (which is probably a bad name; what we mean here is the master cluster or the hub cluster, where the Istio control plane runs), or whether it belongs, you know, in one of the remote clusters. When a client (let's say the sleep sample pod in the local cluster) wants to make a request, it also provides Pilot with its network label, so Pilot is able to know where each caller is coming from, and based on the location of the caller, it's going to provide a different view of the endpoints that Envoy then uses for doing the routing.
G

When it actually makes a request, it will load balance between v1 locally and v2 through the Istio gateway. In order to make this mTLS work across clusters, there is a shared root CA, which is common to the Citadel that runs on the remote clusters and the Citadel that runs on the local cluster.
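The network labels just described are wired up through the mesh configuration. As a rough sketch (the network names, registry names, and gateway addresses below are illustrative assumptions, not values from the meeting), the `meshNetworks` section of the Istio 1.1 configuration might look something like:

```yaml
# Hypothetical meshNetworks fragment: each network lists which registries
# contribute endpoints to it, and which gateway address callers on other
# networks should use to reach those endpoints.
global:
  meshNetworks:
    network1:
      endpoints:
      - fromRegistry: Kubernetes       # endpoints from the local (hub) cluster
      gateways:
      - address: 192.0.2.10            # gateway reachable from other networks
        port: 443
    network2:
      endpoints:
      - fromRegistry: remote-cluster   # illustrative remote registry name
      gateways:
      - address: 203.0.113.7
        port: 443
```

Pilot uses these network assignments, together with each proxy's own network label, to decide whether to hand a sidecar a pod IP directly or the remote network's gateway address.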
G

The Server Name Indication, which is part of the TLS establishment, is going to contain, in clear text, "I'm trying to access helloworld" (that sample), which means that the Istio gateway would actually be able to see and route based on that information, because it's in clear text. It would be able to accept the connection coming into the cluster and route it correctly, using the SNI information, to the second version running in the remote cluster. That is the flow from client to server. Any questions?
E

Right, so one of the key use cases here is to make sure you stay local and only load balance to remote when you have to; cost is really the concern between regions and between zones on clouds. So that would be my only concern. The rest is fine, but that load balancing, that information that tells you where to go, should mainly be failover, not round-robin or a percentage-based split.
G

Absolutely, I think that is definitely one of the things that needs to be improved. I think right now it's going to basically do weight-based routing, where the weight that is assigned to each remote cluster depends on the number of instances. So if we had five instances on the remote cluster and one instance locally, the weight distribution is going to be five requests going over to the remote for every one request going to the local one. I completely agree with your statement and sentiment that this is not what you would likely want.
G

I think if you wanted to separate into different gateways and have control over it (if I'm accessing, you know, helloworld, it needs to go through gateway one, and if I'm accessing a different service, it needs to go through Istio gateway number two), I'm not sure that can be done in this design right now. In this solution, it's something that needs to be extended and built into it.
G

It's not that the two designs are completely the same in terms of features; there are certain features that are going to be available in one and not in the other, but ultimately we want to be able to, you know, address all of the requirements. If this is a requirement for you, I would ask that an issue is filed, or that we somehow get a better understanding of your requirements, so we can start to address it. Sure.
G

Yes. Sorry, Manish, let me just ensure that I understood the question. You're saying that in the picture that is being put up here, the gateway is associated with the remote cluster, and you may want to have a case where you want to associate different gateways with different services within the remote cluster. Great.
D

It would completely depend, you know, on how the services exist. For example, if one of them is a secure service and the other is insecure, or one is using a different protocol than the other service, or one is using a different security profile than the other service, right, then it would make a lot of sense to create multiple gateways for those. And I was wondering: I assumed that under meshNetworks you could specify multiple gateways. Yes?
G

Would that also work across clusters? Could you use the Istio ingress gateway and create... So basically, what you want to do is: when sleep on the local cluster makes an outbound connection, depending on the service that it wants to access on the remote cluster, it should get a different gateway from the remote cluster. Correct? Yeah.
G

...in services, and everything needs to be done. We think that this falls squarely within the case where you have multiple co-located clusters, you know, with perhaps small delays, small latency, next to each other, and you want to group them together into a larger cluster in order to create, you know, isolation domains, fault domains, high availability, capacity, whatever you want to use that for. As long as they are close by, this seems like a good match from an architectural perspective for organizing the multi-cluster.
C

But I'd be curious to know. There are times when you're right, when they're close by and you don't mind load balancing between them. But, as someone asked earlier, there are times when the other one is actually far away. You might not only incur latency, you might incur cost getting to that other one, and you only want to do that when necessary.
J

Right. So whenever Pilot creates the response to EDS, it takes into account multiple drivers. One of them is the zone affinity, the zone of the endpoints; now there's only one additional argument, the network that it's coming from. So if zone-aware load balancing was working as expected before this change in a single cluster, then it will also work on multiple clusters.
F

So are you saying... I think this example might be a little bit confusing, because you have v1 and v2 on two different clusters. What if your v1 is also on local and also on remote, and then you were trying to reach v1 of your helloworld from the local cluster? Ideally, you want to hit the local version.
F

Well, hit the local helloworld probably 100% of the time, unless your local cluster is down. But I don't think we have the intelligence today to be able to tell: hey, okay, I have two clusters, both are up, and the same replica for helloworld for a given version is also running on both local and remote, and I have the intelligence to go to the local one a hundred percent of the time unless it fails. I think it's round-robin today.
G

I believe the comment that you made, that it needs to be validated so that we understand the actual behavior with zone-aware load balancing, makes sense. Is it enough to prefer one zone over the other? Should we also take into account latencies? Because, you know, the fact that two things are in two different zones doesn't mean that they're really that far apart. And if they are 80 milliseconds away, you know, across the continental US, then maybe it's not such a good idea to fail over to another zone.
F

Only for Mixer, right, only for the policy check. For policy, I guess: we split Mixer into telemetry and policy. So the only thing that's part of the data flow that needs to reach back to the control plane is the check with policy, whether this call is allowed. And we also have a cache on the Envoy side; I believe it's around one minute.
F

There's a plan for Mixer version two, I believe, that's being actively developed, to push many of the Mixer policy functions down to the Envoy sidecar, so you would actually be able to do the checking on the client side and have a little bit more distributed Mixer policy than the centralized one today. Okay.
I

So does Mixer still exist, then?
C

So this is what I proposed: essentially, Mixer would not have to exist. There's a bunch of work going on with WebAssembly in Envoy, and that work will enable us to take the logic in Mixer today and literally push it into Envoy, so that, no, you don't have to run a service called Mixer.
K

I just had a question about the documentation on using multiple gateways. I just posted this issue in the chat, which is, I guess, some issue-based discussion, or a pull request, where, I think, Lynn, you talked about being able to implement different gateways using modifications to the Helm chart.
K

Okay, yeah, because now it kind of seems like, when you're doing the multiple gateways... let's say you're on a cloud provider that provides you a single load balancer. If your provider is only giving you the single load balancer, you wouldn't be creating multiple load balancers on a single cluster. So in a sense, your gateway would only have one IP address; you actually want one gateway IP, and then you're using kind of this multiple hosting that I posted in the chat. So I think...
K

My question is: if you only have a single load balancer on your cluster, so you'll have one external IP address, are you really creating multiple gateways? Or are you just kind of using one gateway and then substituting, not going to a gateway per host, but using one gateway for multiple hosts? What's the approach there? Because on a lot of clusters, you only have the single external load balancer.
J

Choosing a different port, or SNI host: part of the information that you get in the SNI is the port number that you are going to target, and based on this port number, the internal routing within the remote cluster is going to happen. So you have only a single Istio gateway, but you have, like, multiple Gateway resources binding to that, or virtual services binding the gateway to the local services.
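That "one gateway IP, many hosts" pattern can be sketched with a single Gateway resource in TLS passthrough mode, so routing happens on the SNI value rather than on separate load balancers. The host names below are illustrative assumptions, not values from the demo:

```yaml
# Hypothetical Gateway: one ingress gateway deployment (one external IP)
# serving several hosts; PASSTHROUGH keeps TLS intact and routes on SNI.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: shared-tls-gateway
spec:
  selector:
    istio: ingressgateway        # binds to the single ingress gateway pods
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH          # no TLS termination at the gateway
    hosts:
    - helloworld.example.com
    - reviews.example.com
```

Virtual services (or the SNI value itself) then decide which backing service each host maps to behind the one gateway IP.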
E

And I think one important thing that we forget is, yeah... think about one of the best concrete scenarios for this: a regional failover for clouds, right. That's one good use case. So make sure that we have one ingress for external traffic and one that is Istio-to-Istio chatter; so it's not really the Istio ingress gateway, but really having two mechanisms to communicate between the clusters.
E

Right. I'm not sure where that fits in, but I would probably say that, you know, internally you go between clusters and that's internal traffic, and how you come into the cluster is external traffic. So the ingress, and the mapping to that ingress, is one thing we should make sure is clear, and we should probably not couple them; try to keep them separate, because there are two separate things that you're trying to do, and configuring multiple... you know, the Istio gateway configuration becomes a little bit brittle.
G

I think the comment that you're making is that it needs to be clear in the documentation (the tasks, concepts, etc.) how you configure an externally facing Istio gateway and another one, or more, that is being used for internal communication between the clusters. I believe that was the... yeah.
E

If we do that, and, you know, if it's cleaner, then you can kind of focus. This is a common use case that we'll see, I guess, in a more complex manner, because that's what happens: the samples usually are very simple, and then, when you try to extrapolate, you get into a whole mess of trouble.
E
When
you
win
extinct,
start
getting
complicated
and
and
then
we'll
talk
about
you
know,
operation
of
the
co
cluster
or
like
kubernetes
and
and
I
think
now
is
to
community
tends
to
kind
of
say.
Well,
how
are
you
gonna
operate
this
mess
and
when
there's
a
problem
right
detection
still
is
a
challenge
on
these
environments.
I.
G

Okay, so, just at a high level: there's another system, which is more about independent meshes that can be tied together. The idea here is that you first try to resolve the service name locally; if that fails, we assume it belongs to a global scope, and we're going to try to resolve it there. If the service is indeed exposed from some remote cluster, it would have been populated within the local cluster, so that you get the cross-cluster connectivity. So in cluster one, it's going to try to resolve a name.
G

It's going to access bar.namespace2. It will fail locally, because there is no service endpoint for it. Ultimately, it resolves bar.namespace2.global, which gets delegated to Istio's DNS, which is going to resolve it to the remote gateway and basically communicate across clusters by using the global service. So we are selectively exposing services between the different clusters.
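A rough sketch of the ServiceEntry involved in exposing a remote service under the `.global` suffix (the host, ports, and addresses here are illustrative assumptions; the Istio multi-cluster documentation has the exact form):

```yaml
# Hypothetical ServiceEntry: traffic to the .global host is sent to the
# remote cluster's ingress gateway, which routes it to the real service.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: bar-namespace2-global
spec:
  hosts:
  - bar.namespace2.global        # resolved by Istio's DNS for *.global
  location: MESH_INTERNAL
  ports:
  - number: 9080
    name: http
    protocol: http
  resolution: DNS
  addresses:
  - 240.0.0.2                    # arbitrary non-routable VIP for this host
  endpoints:
  - address: 203.0.113.7         # remote cluster's ingress gateway
    ports:
      http: 15443                # gateway port dedicated to cross-cluster traffic
```

Because the entry names the remote gateway explicitly, each remote service you want to reach needs its own ServiceEntry of this shape.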
G

I just want to take maybe another minute to make a final comment, which is: we're not aware of any limitations. All of the Istio features should be able to run out of the box (routing, security, metrics); I mean, it's a matter of getting the configuration right, of course, but other than that, I don't think there are any conceptual roadblocks in getting all of these working in the two modes, the two usage patterns, that we have. And, as we said, we're not done yet; there is improvement to be made in the documentation.
H

Okay, so we have a few minutes left, and I think there are a couple more things that we wanted to ask from the community. What I want to show is just the Bookinfo sample that everybody knows and loves, running on the two different variations that I talked about, the two variations of multi-cluster support that are coming in 1.1. This does not cover the flat-network approach; both of these are ones where the pods are not addressable across the clusters.
H

Okay, as a quick overview of Bookinfo, the only thing you need to know for this demo is that there are three versions of the reviews service. The first one has no stars, version 2 has black stars, and version 3 has red stars. Okay, so, jumping into the first part, which is the split-horizon EDS, or cluster-aware service routing, method: what we're going to do is deploy the product page, or have it already deployed.
H

I should say product page, reviews v1, and ratings on the first cluster, which is the local cluster, or the master cluster, that has the main Istio control plane, and then the second cluster has reviews v3, which is the red stars. Okay. So if I jump on over, I'll switch my context to the local cluster, and then, if I do kubectl get pods, you can see I have most of the Bookinfo, except reviews v2 and v3.
H

Okay, now on the second cluster, which is the remote cluster, I do kubectl get pods again. In this case, we have... is everyone able to see my screen? Okay, yes. Okay, I just made the text bigger. The second one has reviews v3 and ratings, just as shown in this diagram. So let me just jump back to my local cluster, which is the first cluster, and then...
H

I get the gateway IP of my Istio ingress gateway. The computer's running a bit slow with screen-sharing. I've got the product page, and if I just, you know, do refreshes, you can see it switching between no stars, which is v1, and red stars, which is v3. Okay. Some of the interesting things to take a look at in this configuration...
H
So,
in
terms
of
setup
between
the
two
different
approaches,
this
one
does
take
more,
but
in
terms
of
deploying
enough
to
actual
application
and
the
use
case,
it
is
more
simpler
than
than
the
other
approach,
because
you
have
that
one
control
plan
that's
able
to
talk
to
both
the
local
kubernetes,
api
and
remote
community.
The
only
thing
that
we
needed
to
do
to
set
this
up
in
this
scenario
is
no
deploy.
H

In this scenario, we have just the v1 versions of the Bookinfo page. I'll do kubectl get pods; you can see it just has product page, reviews, and details, no v2 and v3. Then I jump to cluster 2, which is the cluster on the right, and do kubectl get pods: I have the reviews v2 version. So this one will do something a little bit more interesting. Think of the use case where you have your production application, like, running on your main cluster (v1), and then, on a different cluster...
H
You
have,
let's
say
this,
this
Canary
version
of
v2
deployed
and
you
just
want
to
route.
You
know
some
version
of
your
traffic
or
you
want
to
do
dynamic,
host
based
routing
where
you
define
a
virtual
service
and
say
no
I
just
want
some
of
my
users
to
get
to
v2.
So
it's
okay.
So
in
that
case,
what
we'll
do
is
set
up
a
virtual
service.
You
cuddle,
get
virtual
service.
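A VirtualService along the lines used in the demo (the exact names, and the `.global` host for the remote subset, are assumptions based on the approach described, not a copy of the demo's file) might route the test user to the remote v2 and everyone else to the local v1:

```yaml
# Hypothetical VirtualService: requests carrying end-user "jason" go to the
# remote reviews v2 (via its .global service entry); all others stay local.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews.default.global   # remote cluster, via its gateway
  - route:
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v1                     # subset defined in a DestinationRule
```

Logging in as the matched user flips the header match, which is why the demo below shows black stars only for that user.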
H

You can see that I'm always getting v1, but then, if I log in as jason, I can get traffic routed over the pond to cluster 2, and now I'm getting black stars. If I refresh, I keep getting black stars, and if I sign out, I just get back to v1 again. A couple of things that are worth looking into: let's take a look at the Envoy configuration to see how this looks. So, kubectl get pods, then grab the product page pod name, and then istioctl proxy-config endpoints for the product page pod.
H

You just want to grep for just the reviews endpoints. As you can see here, the local ones are at this local 172 IP address, and then the remote v2 versions have the remote gateway IP and the passthrough port. One last thing I want to show is this kind of service entry. All the remote configurations in this approach do require you to create the service entry, which maps that remote service to the full hostname with the .global suffix, so it knows where it is.
H

So this does put a burden on you, compared to the previous approach, in that every remote service requires you to have this service entry endpoint defined, where you have to define the gateway IP address of your remote cluster as well. So I think, with that, I'm going to hand it over. We only have one minute left, and we wanted to ask about the survey, so I'm kind of handing it back to you.
G

I was about to suggest that maybe, in the interest of time, you just post the questionnaire URL to the chat. I believe, you know, everything that we've shown today... the meeting is recorded and will be shared. The presentation is, I believe, already on the drive, right, Lynn? Yeah, I can see it; there's a link to it.