From YouTube: Cloud Native Live: Multi-cluster failover using Linkerd
Taylor: Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, head of ecosystem at the CNCF, where I get to work closely with teams as they navigate their cloud native journeys. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. These folks will build things, they will break things, and they will answer your questions. In today's session, I'm stoked to introduce Flynn from Buoyant, who will be presenting on multi-cluster failover using Linkerd.

This is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct, so please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters; be excellent to one another. With that, I'd love to hand it over to Flynn to kick off today's presentation. Flynn, please take it away.
Flynn: Right now is about the time I'd make a UDP joke, but I'm not sure you'd get it.
Flynn: This is a good thing. I'm going to talk a little bit about what we mean by multi-cluster failover, what we mean by failover, that sort of thing, and hopefully we'll get to a point where we'll be able to do a little bit of a demo. If not, everything that I was going to do is posted up in a GitHub repo, and I'm going to send the URL of the GitHub repo so you all can follow along no matter what happens to me.

The README.md has the steps that I've been following to get things set up, and hopefully we'll be able to actually follow some of that live, so I guess we'll see what happens. All right. In the meantime, let's talk a little bit about multi-cluster and failover and all that kind of stuff.
Flynn: So, first up. Okay, that's me! First up, let's see if the screen share is going to catch up when I do something different. How do you know it's a live demo, right? Thankfully it's not Friday, so we're good on that front.
Flynn: All right, well, here we go. Yeah, it's coming close to Friday for sure, so I guess we get to just do this the old entertaining way. Taylor, do you want to drop that screen share? Let's see what we can do. I'm going to try restarting it once, and if that works, then great; and if it doesn't work, then, you know, we can manage something else.
Flynn: Well, you know, worse comes to worst, I can in fact just go through and talk out loud, and we'll see how it goes. But I would really like to have at least some things that I can point to; that would be lovely. Okay, let's try this once more, shall we? And then it's going to be a lot of fun trying to catch up so we don't run too far over time. All right.
Flynn: All right, well, you know what, I guess we're just going to have to do this with words. How creepy is that? You can go ahead and drop that share, because it doesn't seem to be updating with what I'm actually doing.
Flynn: Let me talk a little bit about failover and what we're talking about with multi-cluster at all. Basically, failover is a really, really old concept. It has been around since long before Kubernetes, and it will probably be around forever. It's basically just the idea that if you have a service that isn't working, then you want to redirect traffic for that service to something that is working, which I hope makes sense to everybody.
Flynn: I'm also seeing some questions in the chat about Istio and Linkerd; we can come back to that a little bit later. When we're talking about simple failover, we're mostly talking about failing over a service within a cluster. So maybe you have our emojivoto demo running, which is what I'm supposed to be demoing for you, and you can have multiple instances of the emojivoto pods running in the same cluster in case one of your pods goes down.
Flynn: One of the patterns that's getting a little bit more popular these days is the idea that the entire cluster can be a fungible object. This implies not just that you're going to treat the services as something that can die and get immediately restarted, but that you'd like to be able to treat the entire cluster the same way: if the whole cluster crashes, no big deal, just go ahead and pick it up with a different one.
Flynn: This is a really cool idea, and multi-cluster failover is a specific example of it: if you have a service in one cluster that dies, you can just go ahead and route traffic over to the same service in a different cluster. It isn't quite as dramatic as having your entire cluster catch on fire and be replaced by another cluster for everything, but you can do that as well, if you treat the ingress as the service that's going to be failed over.
Flynn: Any questions so far, as I try to figure out how to continue with some of the other possibilities here? Not seeing anything. All right, so: multi-cluster failover and Linkerd. There's another old concept, software layering, where, as things get more and more complex, you try to split them up into layers. You have something simple that happens at the bottom; then you layer something a little bit more complex on top of the simple thing; then you layer something more complex on top of that.
Flynn: The TrafficSplit resource can then do things like say: 50% of the traffic going to service foo should go to foo-1, and the other 50% should go to foo-2. It's a simple way of doing canary deployments and things like that; not really load balancing, but canarying. We get to use it for failover as well.
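As a sketch of what Flynn is describing (the service names here are illustrative, not from the demo repo), an SMI TrafficSplit that sends half of `foo`'s traffic to `foo-1` and half to `foo-2` looks roughly like this:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: foo-split
  namespace: default
spec:
  service: foo        # the "apex" service that clients actually call
  backends:
    - service: foo-1  # half the traffic
      weight: 50
    - service: foo-2  # the other half
      weight: 50
```

The weights are relative, so 50/50 here means an even split; shifting them is how a canary (or a failover) is expressed.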
Flynn: Okay, the multicluster extension fits into all of this not so much by giving Linkerd a way of magically causing one cluster to connect to another cluster. What the multicluster extension really does is allow Linkerd to see what services have been exported in one cluster, and bridge only those services into your first cluster. And to Carlos's question: the service mesh should be in both clusters. What you do with Linkerd is install Linkerd in both of your clusters.
Flynn: For the example here, I'm going to be talking about the east cluster and the west cluster, so they would both be running Linkerd, and then you would provide links from the east cluster to the west cluster and, if you want, from the west cluster back to the east cluster. The link is directional, because it determines the direction of service mirroring: if you link the east cluster to the west cluster, the west cluster will be able to get services from the east cluster, and vice versa.
B
That,
in
turn,
then
allows
us
to
use
this
traffic
split
resource
to
split
traffic
between
services
that
aren't
even
in
the
same
cluster
by
using
those
mirror
services
and
that's
the
way
all
of
this
works
is
that
we
set
up
a
traffic
split
that
allows
failover
extension
to
flip
traffic
between
the
normal
service
in
the
same
cluster
and
the
mirrored
service
in
the
other
cluster.
B
All
right,
we'll
see,
if
we'll
see,
if
maybe
the
packets
are
gonna
flow
again.
B
B
Flynn: So, while I'm messing with this a little bit further, let's go ahead and talk to Carlos's questions there.
Flynn: Yes, there are situations where it can be a very good thing to have both of your clusters active, and yes, you do need to be careful about those, for all the usual reasons: you don't want to be redirecting traffic to one cluster that's then going to redirect it back to the first cluster. That's definitely a thing you need to be very careful of. A more common scenario, actually, would be that you might link the clusters together bidirectionally, but export different services.
Flynn: It's not clear to me that that's really the sort of thing that would happen in the real world. On the other hand, something that might well happen in the real world: imagine you have clusters that are in different regions, and now imagine that you're dealing with GDPR, for example, where it's very, very important that the personal information for your European users stays in Europe and for your American users stays in America. That's a scenario where it could very well make sense to allow the web service itself to fail over.
Flynn: All right, so I think we can do some stuff here. To Hugo's question, "in the case of the failover, will the traffic still be passing through the first cluster, to only then go to the failover cluster?": it kind of depends on which service it is. In general, the traffic has to get to the point that Linkerd can get hold of it. So with just two clusters:
Flynn: Yes, you would have to go from your first cluster over to your second. But we could also imagine a scenario where you have three clusters, where one of them holds the ingress and the other two only hold backend services, in which case the ingress cluster would be the one redirecting to one of the others, and it would not at that point need to go through both: you can go from the ingress to the west cluster or from the ingress to the east cluster without having to go from the west to the east.
Flynn: Did that make sense? That would be a little easier to do with drawings. All right. So normally, when I'm doing these things, I can go through and actually run all these commands while we're looking at this stuff, and you get to see all of it live. I kind of don't dare do that right now, because I'm kind of convinced that if I try, the world will come to an end.
Flynn: But at least now I can show you the steps, walk through them, and, maybe more importantly, point out the gotchas that are in here. Actually, let me first start with the bit where the assumption we're making for this demo is that you have two clusters, and that they are called east and west. If you're trying to do this with clusters that are not named east and west, it doesn't really matter.
Flynn: You can just change the context names in the rest of this file, or you could use `kubectl config rename-context` if you want to be a little destructive about it. But I want to point out a specific thing if you're trying to do this with k3d, which is what I was doing, and for that we need to look in the create.md file.
Flynn: The other really important thing, and we'll talk about this a little bit later, is that the clusters have to share a trust root, because they're going to do mTLS between the two clusters, which is very, very helpful when they talk to each other over the public internet.
Flynn: If you're going to set this up with k3d, there are some kind of weird gotchas in here. Specifically, you must set up all of your k3d clusters so that they're on the same Docker network; otherwise they won't be able to talk to each other at all. And because they're on the same network and they're bridged through the same host, you also have to play games with setting the port for the API server and the ports that you want to expose.
Flynn: We set this up with an ingress controller so that we can actually use things like name-based routing, and all of those want to expose ports to the host. But again, this is k3d: in the Docker network they all show up with the same IP address, and so you have to give them different port numbers. I'm not going to go through all the rest of this script, but I did want to point out those gotchas, because they are very, very important.
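A hedged sketch of the k3d setup Flynn is describing (the network name and port numbers here are illustrative assumptions, not necessarily the ones in the demo repo):

```shell
# Both clusters must join the same Docker network so they can reach each
# other, and each needs its own API-server and ingress ports on the host,
# since both clusters are bridged through the same host IP.
k3d cluster create east \
  --network multicluster-demo \
  --api-port 6441 \
  -p "8081:80@loadbalancer"   # host port for the east ingress

k3d cluster create west \
  --network multicluster-demo \
  --api-port 6442 \
  -p "8082:80@loadbalancer"   # host port for the west ingress
```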
Flynn: So, let's start walking through, assuming that you already have the clusters set up. This really, really long Markdown file has all the commands to set up the clusters from scratch and get everything installed in a way that will work. Let's walk through that, since, once again, I don't think I dare try to run these right now.
Flynn: A couple of obvious things here: this starts by running `linkerd check` to make sure that your east and west clusters can actually run Linkerd, which is kind of important, since we're using Linkerd. Then we get to certificate setup. There's a whole Service Mesh Academy session on certificate management that goes much more into the details.
Flynn: Okay, where did you lose me? Was I talking about the step command and stuff?
Flynn: Okay, so there's an entire certificate management workshop that goes into considerably more detail here, but we basically use the `step certificate create` command to make a single trust anchor certificate. We do that here, and then we create an issuer for each cluster.
B
The
Clusters
have
separate
identity
issuers,
but
both
of
the
identity
issuers
are
summed
by
the
same
root,
the
same
trust
anchor
in
that
way
they
can
do
saying
they
get
to
manage
all
their
own
workloads
in
each
cluster,
but
whenever
cluster
East
talks
to
the
West
cluster,
then
ndls
just
works,
because
the
root
chain
is
the
same
certificate
in
both
after
you've
made
the
certificates.
We
can
then
go
through
and
install
Linker
D.
B
Taylor: Yeah, I think it's gone through a couple of blips. The last one was short, but this one's lasting a little bit longer. We'll see.
Flynn: Next, almost the last thing in the install-everything step: we install the emojivoto setup. This is straight out of the emojivoto quick start, except that we're installing it into each cluster. And then the last thing we do is set up access through the ingress controller, so we can talk to things.
Flynn: For failover, and for multi-cluster in general, there's the failover extension...
Taylor: All right, let's try a few things. Let's see if Flynn comes back here in just a second, and then we'll go ahead and continue. We might disable our videos and just leave the slides up.
Flynn: Actually, you know what we could do: why don't you share a browser window with this in it?
Taylor: Yeah, let me do that, because hopefully, hopefully, hopefully that'll work out a little bit better. We hope.
Flynn: It's already in the private chat.
Flynn: That's really more about providing a little bit more information for debugging than it is a strict necessity. That one's in here because it was helpful for me when I was doing all this stuff. Okay. So, once again: `linkerd check` is your friend, and running `linkerd check` on both of those clusters can be a big help.
Flynn: Great. I should just make Taylor narrate all this, put him on the phone. So, a particular gotcha that can be a little bit weird, especially if you're using k3d, is when you set up the links using the `linkerd multicluster link` command.
Flynn: One of the things that has to happen is that if you want to link the east cluster to the west cluster, then the west cluster has to be able to get cluster permissions for the east cluster; it actually has to get Kubernetes credentials. And the way it ends up doing this is that the Linkerd CLI itself is what reads the credentials for the Kubernetes cluster.
Flynn: But unfortunately, if you're using k3d, the Linkerd CLI running on the host will end up seeing a localhost address for the API server, and that won't work from the other k3d cluster. So I'm not going to walk through this absurd API-server-address function in the script, but the point is that it can figure out the right API server address to use for the setup we're using here, and it should also do the right thing.
Flynn: If you scroll down a little bit more, let's take a look at that link-the-clusters command.
Flynn: But that is not what happens when you're doing the `linkerd multicluster link` command. What happens is that `linkerd multicluster link` goes through and constructs a Link object, a Link resource, that then needs to be applied. You construct the Link resource based on information for one cluster, and then you apply it to the other cluster.
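The mechanics Flynn is describing, sketched as commands (the east/west context names are from the demo; this assumes the multicluster extension is being installed fresh):

```shell
# Install the multicluster extension into both clusters first.
linkerd --context east multicluster install | kubectl --context east apply -f -
linkerd --context west multicluster install | kubectl --context west apply -f -

# Build a Link from the east cluster's credentials, then apply it to west:
# after this, west can mirror services exported from east.
linkerd --context east multicluster link --cluster-name east \
  | kubectl --context west apply -f -

# And the reverse direction, since this demo links both ways.
linkerd --context west multicluster link --cluster-name west \
  | kubectl --context east apply -f -
```

This is why the link is directional: each pipeline reads one cluster and writes the other.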
Flynn: Looking through this to remember: the next thing, happening in step three, is that we actually go through and export some of the services. In this case we're going to export the emoji service, and the emoji service is the one that produces lists of emoji, so it's kind of down deep in the call graph.
Flynn: What happens is that your web browser talks to the web service, and then the web service ends up talking to the emoji service, as do some of the other things. But we're going to export the emoji service from the east cluster to the west cluster and vice versa, so that from either cluster you have a way to reach the emoji service in the other cluster. As noted there, after you do that, you should actually see it if you run `kubectl --context east get service -n emojivoto`.
Flynn: You should actually see a service in there called emoji-svc, and you should see one called emoji-svc-west; in the west cluster you'll see emoji-svc and emoji-svc-east. So, wow, yeah, I really wish I could show this stuff.
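Exporting is done with a label on the service, and the mirror shows up in the other cluster with the source cluster's name appended. A sketch using the emojivoto service names (assuming the link from the previous step is already in place):

```shell
# Mark emoji-svc in east as exported; west mirrors it as emoji-svc-east.
kubectl --context east -n emojivoto label svc/emoji-svc \
  mirror.linkerd.io/exported=true

# Same in the other direction; east mirrors it as emoji-svc-west.
kubectl --context west -n emojivoto label svc/emoji-svc \
  mirror.linkerd.io/exported=true

# Each cluster should now list its own service plus the mirror.
kubectl --context east -n emojivoto get svc
kubectl --context west -n emojivoto get svc
```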
Flynn: If, at this point, after going through and exporting the services, everything should be working exactly as normal, because the services being exported are not taking any traffic yet. And if you run `linkerd viz stat`, you can see that the emoji service will be taking 100% of the traffic, and the emoji-svc-west and emoji-svc-east mirrors won't be getting any traffic at all.
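To watch where the traffic is actually going, something along these lines (context name assumed from the demo):

```shell
# Per-service traffic stats for the emojivoto namespace; before any
# failover, emoji-svc should carry all the traffic and the mirrors none.
watch linkerd --context east viz stat svc -n emojivoto
```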
Taylor: I'm in the install-the-TrafficSplit section, but I can keep going down if that helps.
Flynn: So if we look at this, it's not very profound. It might be a little tough to read, depending. The important thing here is that there are a couple of things that matter.
Flynn: One of them is a label that explicitly says: hey, Linkerd failover, it's okay for you to mess with this. Linkerd failover will not touch a TrafficSplit that does not have that label. There's also an annotation that tells it the primary service, and that is letting it know that, if nothing's gone wrong, the emoji service is the one you want to use.
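The TrafficSplit being walked through looks roughly like this. The `failover.linkerd.io` label and annotation names follow the linkerd-failover extension's documentation; the resource name and the label's value are assumptions for illustration:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: emoji-svc-failover
  namespace: emojivoto
  labels:
    # Opt-in: the failover operator only manages TrafficSplits
    # carrying this label.
    failover.linkerd.io/controlled-by: linkerd-failover
  annotations:
    # The backend to prefer whenever it has healthy endpoints.
    failover.linkerd.io/primary-service: emoji-svc
spec:
  service: emoji-svc
  backends:
    - service: emoji-svc        # local primary: all traffic while healthy
      weight: 1
    - service: emoji-svc-west   # mirror in the other cluster: standby
      weight: 0
```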
Flynn: This is important because, if you really wanted to, you could say things like: emoji-svc-west is the primary. I don't know why you would want to, but make sure that it's set the way you want it; set it to the one that's actually in your cluster.
Flynn: Finally, the last bit here: you can look at the weights down there in the backends, and you'll see that currently this TrafficSplit is configured to send all of the traffic to the primary service, the emoji service. That's exactly what we want when we install this TrafficSplit; we don't want it to do anything silly like splitting 50/50 or anything weird like that.
Flynn: And many thanks to Taylor for his wonderful slide-driving here.

Taylor: Happy to help!

Flynn: A little bit further down, to the install-the-TrafficSplit section again. There we go. So we're just going to go ahead and apply that. This is the first thing in this whole README where we're only doing it on one cluster.
Flynn: After applying that TrafficSplit, you should not see anything change, because, once again, all the traffic is still going to the one in the local cluster, and you can check that again with that `linkerd viz stat` command. That one I'm actually running with `watch` in front of it, just to sit there and watch it for a few seconds to make sure that there's nothing going over. All right, scroll down a little bit further.
Flynn: In the fail-a-service section, we're not doing anything profound at all. We literally just scale the emoji workload to zero replicas in the east cluster, and at that point you should instantly see the weights flip in the TrafficSplit. If you get the TrafficSplit back from Kubernetes, you'll see that emoji-svc will have a weight of zero and emoji-svc-west will have a weight of one.
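The failure itself is just a scale-down; the failover operator notices the endpoints disappearing and rewrites the weights. A sketch (the TrafficSplit name matches the illustrative one above):

```shell
# Kill the local emoji workload in east...
kubectl --context east -n emojivoto scale deploy/emoji --replicas=0

# ...then inspect the TrafficSplit: emoji-svc's weight should drop to 0
# and emoji-svc-west's should go to 1.
kubectl --context east -n emojivoto get trafficsplit emoji-svc-failover -o yaml

# Bringing the deployment back flips the weights back again.
kubectl --context east -n emojivoto scale deploy/emoji --replicas=1
```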
Flynn: If you then go back to the browser, everything will still work, because it's just sending all the traffic over to the still-running service in the west cluster. And if you run that `watch linkerd viz stat` there, you will see, over the course of a few seconds, the traffic move from the emoji service to emoji-svc-west.
Flynn: That's actually all there is to it. If you then rescale the emoji deployment back to one replica, you'll see the weights flip again, and you'll see `viz stat` show you the traffic moving. That's really all there is to it: it's like 80 percent careful, careful setup, and then things just start working.
Flynn: One really important thing to point out here is that the way these extensions are stacked on each other gives you an enormous amount of flexibility to do things differently. The Linkerd failover operator is actually fairly simple: as long as there are running endpoints, it won't fail over.
Flynn: If you want to do something more sophisticated, you can certainly do that. All you need to do is figure out how, in your environment, you can arrange it such that when you see a service that's wrong, you change the weights of the TrafficSplit, and you can also then rely on the multicluster extension providing new services that link across to the other cluster. So that actually goes through everything that I wanted to show you, so I think we still have a little bit of time for questions, if there are any. Or, Taylor?
Flynn: I think that might be the same question that Carlo just asked. Split brain can mean a couple of different things, but I'm going to assume for the moment that we're talking about a network partition.
Flynn: The simplest version of the answer to that question is that Linkerd is going to trust that your network is functioning, and if it is not, then yes, you should absolutely be using health checks for that.
Taylor: I think one question that I had for you, Flynn: as teams go about getting services up and running, it can be kind of a feat in and of itself, whether it's lifting and shifting an application, starting to containerize, or moving something to Kubernetes. And then, like you said, implementing a service mesh can mean one of many things as you start to up the comfort level with Kubernetes and add things on, add extensions, etc.
Are there any things that you see teams go to do or adopt that might not work as well as they would expect? Or do you have any tips, tricks, or insights for people looking to adopt a multi-cluster kind of setup?
Flynn: So, for example, I would strongly, strongly encourage things like: pick a service to do this with, pick a setup, go through and do it, and don't do it in production to start off with. I think the right way to phrase this one, really, is that as you do more and more complex things with Kubernetes, it becomes more and more important to play around with it and to really try to understand what you're getting yourself into before you just go ahead and flip it on.
I'll take this opportunity to link that back to the question about Istio versus Linkerd, and point out that one of the things we hear over and over and over again is that one of the things people really like about Linkerd is that it's entirely possible to just fire it up in a k3d cluster and have a proof of concept running in an hour, as opposed to several days. And I cannot emphasize enough how important it is to do that sort of just playing around with things.
A lot of, you know... I obviously had to go through and get all this stuff working, which now you can't see, but I obviously had to take the time to become accustomed enough to multi-cluster to talk about it and to get it running, and yeah, it was tricky at first, until I kind of wrapped my head around what it was really doing.
Flynn: Yeah, a lot of the stuff that we do as tech evangelists is learning about things and then teaching other people about them, and it's hard to overstate the importance of doing things iteratively, of really getting a handle on something before you go ahead and try to roll it out across your entire world.
Taylor: I like that advice, in terms of keeping it simple: you don't have to stress yourself; iterate, take it slow, make notes, validate your approach, those kinds of things. I completely agree. I think that in a lot of cases we tend to kind of jump over that, right into the code or the configuration or the solution, and, you know, you can take that time to actually think it through and validate the right approach.
Flynn: Kubernetes is complex enough all by itself. We don't really need to make it harder than it needs to be.
Flynn: Hugo asks if multicluster is in beta or if it's ready for production. Multicluster is production-ready; it is not beta. The SMI extension is production-ready. The failover extension is, you know, production-ready, but it's also simple, and so it's the one that I suspect you might want to look at most closely, to see whether it's going to meet your needs, or to see what you need to do to make it meet your needs. And Carlo asks about the BigQuery service.
Flynn: So here's the interesting thing about things like BigQuery: the big question I have about BigQuery is not whether it's possible to fail over that kind of service. The big question I have is: how are you managing all of the data behind it? Are you trusting that both of your clusters are, you know, both mounting the same volume somehow? It might work if they're in the same region; I don't know. But most of that, to me, is less a question about permission and more a question about state and data.
Flynn: In terms of permission, though, if you do a little bit of digging, you will find that the `linkerd multicluster link` command actually defines a service account and sets up some RBAC and things like that, and so I would approach permission first by looking at that, and then further by looking at user authentication in the application itself, if that makes any sense. I don't really talk in the repo about that RBAC stuff, so I'm going to make a note of that.
Flynn: That was the thing that was very surreal until I understood what was going on: the mechanisms by which multicluster arranges cluster permissions are normal, plain-vanilla Kubernetes stuff. It works by allowing the `linkerd` command to read credentials and move them around, tokens and things like that. So that would be another good thing to talk about a little bit. Boris asks: it looks simple when you have static data; how do you manage dynamic database data?
Flynn: I mean, talk about a business opportunity, wow. Any other questions? Anything else on your mind, Taylor?
Taylor: I think that's it, honestly! It's given me a lot to think about, and I'm really excited to go and test this out myself. I think that in the previous, the N-minus-one through five, places that I've worked, having a multi-cluster setup has been one of the biggest and most difficult technical things to accomplish with those teams.
In terms of, you know, idempotency, all the queues, all those fun things. So this is going to be fun to try, yeah.
Flynn: Yeah, there's some really, really tricky stuff that happens when you're dealing with that, yeah.
Flynn: Yeah, two or three jobs ago, we had a guy who was just completely enamored of RabbitMQ and wanted to use it for everything, so we got a lot of really good experience about things RabbitMQ is great for and things it's not so great for. It would be really interesting to hear, though, as you play around with it; I'd love to hear how that stuff goes for you. And on that note, I put something in the chat.
Flynn: Hopefully it'll show up soon: the URL for the Linkerd Slack and how you can find me there. I'm also going to put one more link in there. So that's slack.linkerd.io; I am @flynn there, always around, or almost always around. And one more thing I want to put in there as well: this is the link to...
Flynn: So, every month we at Buoyant do a free workshop, in a very similar format to this one, where we go through and pick a topic and try to tear it apart, and I hope that the network is working.
Flynn: The next one coming up is actually a deep dive into mTLS, and we would love to see you there as well. If that link doesn't make it up, it's at buoyant.io/service-mesh-academy.
Flynn: So other than that, many thanks. It's always a pleasure to be here. I'm sorry this one was so rocky, but hopefully we were able to salvage something out of it, and people got something out of it, I hope.
Taylor: CDNs, networks... you know. Thank you so much.
Flynn: I'm telling you, I think the real moral of this story is that you should always have two internet connections to your house.
Flynn: And I hope to see you all again soon.
Taylor: Thank you, thank you. Thanks, everyone, for joining the latest episode of Cloud Native Live. We really enjoyed your interaction and discussions today. Thanks for joining us, and we hope to see you again soon. See you later, everybody. Bye!