From YouTube: Cloud Native Live: Linkerd 2.14
Description
Enterprise multi-cluster, Gateway API conformance, & more
A: Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto, a CNCF Ambassador as well as CMO at VSHN, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us Tuesdays or Wednesdays to watch live, and this week we have a great session coming up, with Flynn here with us to talk about Linkerd 2.14; very excited for this one. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat or questions that would be in violation of that code of conduct.
B (presentation): Okay, thanks, glad to be here. We are going to talk about Linkerd 2.14. I have some slides to go through at the beginning to explain what we're going to talk about, but we are going to try to keep the slides quick, then go into the demo, and then still have time for questions. That's me, and that's how to reach me: email is good, and you can reach me as Flynn on basically any Slack associated with any CNCF stuff, as far as I can tell. So, yeah, the agenda.
B: We're going to go over some of the headline Linkerd 2.14 features, and then we're going to do a demo of multicluster Faces, as opposed to our usual Faces demo. Hopefully that'll work. Okay, 2.14. 2.14 is very new; I want to say it came out like three weeks ago, maybe. Headlines: we have flat-network multicluster, which we're going to talk about, and which also supports workload identity crossing clusters, and we did some work with Gateway API conformance, which is kind of nice.
B: Previously, multicluster went through gateways. The way that worked was that it would funnel traffic from the workload in one cluster, through a gateway in the other cluster, to the workload in the other cluster, and this worked really well as long as you have IP connectivity between the cluster and the gateway and such. But it does have two caveats. The first one is that it adds some latency, because traffic has to go from the workload to the gateway and then from the gateway to the other workload. Typically this is not a major deal.
B: Linkerd is really fast, but it's still there. The much more significant caveat was that the identity of workload one basically gets lost: workload two sees the identity of the gateway, not of workload one, and that can make life really complex if you're trying to do authentication and authorization across clusters, which, you know, is a good thing; people would like that. So the basic solution that we adopted with 2.14 was to just get rid of the gateways.
B: This is great because it preserves identity everywhere, and all the mTLS goodness that you're used to in Linkerd just works, and all the policy stuff just works, and it lowers latency, and it's really great. There are a few things that you need to think about; I'm not actually going to go over everything word for word on this slide.
B: It mostly boils down to this: the things that you had to do to establish trust across the two clusters, you still have to do, and in addition to that, you have to be running in a network that allows a pod in one cluster to talk directly to a pod in another cluster using the pod's IP address. This implies that your clusters must use different cluster CIDR ranges, because otherwise they'll step on each other and nothing will work.
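As an illustration of that constraint (this is not shown in the talk; the network name, flags, and CIDR ranges here are assumptions, and the real setup lives in the demo repo's create-cluster.sh), a k3d setup along these lines gives each cluster non-overlapping pod and service CIDRs on a shared Docker network:

```shell
# Three k3d clusters on one Docker network, each with its own
# cluster (pod) CIDR and service CIDR so addresses never collide.
k3d cluster create north --network mc-net \
  --k3s-arg '--cluster-cidr=10.23.0.0/24@server:0' \
  --k3s-arg '--service-cidr=10.43.0.0/24@server:0'

k3d cluster create east --network mc-net \
  --k3s-arg '--cluster-cidr=10.23.1.0/24@server:0' \
  --k3s-arg '--service-cidr=10.43.1.0/24@server:0'

k3d cluster create west --network mc-net \
  --k3s-arg '--cluster-cidr=10.23.2.0/24@server:0' \
  --k3s-arg '--service-cidr=10.43.2.0/24@server:0'
```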
B
It
also
implies
that
it
is
often
a
good
idea
to
give
your
clust,
distinct
trust,
domains,
distinct,
cluster
domains
as
well,
just
because
it
can
be
easier
to
keep
track
of
which
identity
comes
from
which
cluster,
when
you
do
that,
if
they
don't
all
say,
food.,
service.,
cluster.,
loal,
but
instead
say
food.,
service.,
East
or
food.
serv.,
north
or
whatever.
B: That is not necessary, and in fact in the demo I'm going to show, we're not going to do that, because the demo is mostly talking about routing, not authorization and authentication. But it's a good thing to bear in mind if you're trying to do this in anger. Oh, I'm sorry, one other thing that I wanted to mention: there is still a command that you will run, called linkerd multicluster link, that explicitly tells Linkerd that these two clusters should be joined. That command is still there, with a tweak to it, so you'll see that in a little bit. Okay. By the way, let me also point out that at any point during this, feel free to jump in with questions; you can put them in the chat and they will be relayed, and otherwise I'm just going to keep going and assume that everything is perfect.
B: Even though everything is never perfect, but we'll just assume. Okay, Gateway API. As of Linkerd 2.12, we started using the Gateway API as sort of the core mechanism to describe classes of HTTP traffic, very much including gRPC, and this is a thing that's kind of a big deal. We say this every time: we expect it to deprecate SMI and ServiceProfiles and things like that. We still do not have a timetable for that; you are still free to continue using those mechanisms from older versions of Linkerd.
B: They will still work, but Linkerd 2.14 is actually conformant with the Gateway API version 0.8.0 mesh profile. This is kind of a big deal, the whole conformance-profile thing. Actually, you know what, let's just talk a little bit about the conformance-profile thing. The Gateway API, when it first came out, was very much built assuming that you were talking about ingress traffic, not service mesh traffic. So in particular, early versions of the Gateway API had this idea that if you were going to run their conformance tests, you also had an ingress controller. Linkerd is a service mesh; we do not ship with an ingress controller, and this meant that there was no way for us to be conformant with the Gateway API until Gateway API 0.8, which came out in very late August.
B: So, about a month ago, in 0.8, the Gateway API introduced the concept of a conformance profile, which describes a set of features within the Gateway API that should be tested, and also gives you a way to talk about "oh yeah, we're conformant with this, but not with this other thing." That finally gave Linkerd the ability to say: oh yeah, we'll go and run conformance tests against the mesh profile; we will test all the meshy things in the Gateway API against Linkerd. And that passes.
B: So we are in fact conformant with the Gateway API as of Linkerd 2.14, at least with the mesh profile. Do not try to use Linkerd as an ingress controller; it will not work. A big part of that is the API group that you see in things like HTTPRoutes. In Linkerd 2.12 and 2.13 you would see HTTPRoutes only in the policy.linkerd.io API group, whereas in 2.14 you can also use the official gateway.networking.k8s.io group.
B: It's also worth pointing out that if you are running an installation where you have the official one too (like, suppose you've installed a Gateway API gateway controller as well as Linkerd), you will find that you have both httproutes.policy.linkerd.io and httproutes.gateway.networking.k8s.io, and unfortunately you may have to explicitly spell out the whole thing when you're doing a kubectl command, because kubectl can very, very easily get confused as to which one it should be talking about.
B: We might actually end up trying to get a bug fix into kubectl for that, because it's kind of awkward right now. Okay, timeouts. Let me make sure... okay, I see some hellos and a lovely "Linkerd for the win", thank you, but I do not see questions yet. Okay: in Linkerd 2.14 we have added support for timeouts in HTTPRoutes, with the timeouts stanza here in the HTTPRoute. Very exciting. There are two kinds of things that you can see in the timeouts stanza: you can see request...
B: That is a lovely question. Let me finish talking about the Gateway API and then come back to that before the demo, because I think it will make a little bit more sense if we talk about it with the demo. If I talk about it and it doesn't make sense, then ask your question again if I don't answer it correctly.
B: Okay, we were talking about two kinds of timeouts. There's request, which is an end-to-end timeout, and there's also backendRequest, which is just the part talking to the backend; we'll show this in a little bit more detail shortly. The syntax for the timeout is basically like a Go time.Duration, but you're not allowed to use floating point. So 130m is fine; for an hour and a half, 1.5h is not fine. Don't do that.
B: The gory details of all of this (and let me tell you, some of the details are very gory indeed) are in Gateway Enhancement Proposals 1742 and 2257. GEP-1742 talks about the syntax of the timeouts stanza, and GEP-2257 talks about the details of the syntax of the thing that you put in as a timeout, and there are links there if you have a really unbreakable case of insomnia some night and you really want to go and read something that will put you to sleep. Okay, to explain a little bit more about what the two different things are: if we have this scenario where our client is talking to workload one, and then workload one has to talk to workload two, then timeouts.request covers the entire thing, from the client all the way through to workload two, and timeouts.backendRequest only covers the chunk where workload one is talking to workload two, the backend service there. It doesn't really make sense to set backendRequest to something longer than request; in fact, that's not allowed by the validators. And in a lot of cases it doesn't really make sense to use backendRequest unless you've also configured retries or something like that. So most people are probably just going to set request and be done with it, but the functionality is there.
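As a sketch of what that stanza looks like in practice (the route, namespace, and service names here are hypothetical; note also, from later in the talk, that in 2.14 timeouts only work in the policy.linkerd.io flavor of HTTPRoute, apiVersion v1beta3):

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: example-route        # hypothetical name
  namespace: example-ns      # hypothetical namespace
spec:
  parentRefs:
    - name: example-svc      # hypothetical Service this route attaches to
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: example-svc
          port: 80
      timeouts:
        request: 2s            # end-to-end: client through to the backend
        backendRequest: 500ms  # just the workload-to-backend leg
```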
B: That's if you want to use timeouts in an HTTPRoute. Because GEP-1742 didn't quite make Gateway API 0.8 (it was right down to the wire; it didn't quite get in), it will be part of Gateway API version one, which should be happening real soon now. But for now, if you want timeouts, you have to use the Linkerd-specific version.
B: Okay, it is now time to talk about the demo. The demo architecture we're going to use is the typical Faces thing that y'all have probably seen if you've seen me on any one of these: we have a GUI, which is a single-page web app that talks to a workload called face; face talks to a workload called smiley and a workload called color. The smiley workload is supposed to return this grinning face, and the color workload is supposed to return the color green. The face workload puts the two together and hands it back to the GUI, which should show you a nice grid of grinning faces on green backgrounds.
B: Unlike many demos involving the Faces demo, in this one we actually expect to see that from the start, because we're not demoing reliability stuff; we're going to do a multicluster demo. It's also worth pointing out that we do have a smiley2 workload, which returns heart-eyed smileys, and a color2 workload, which returns the color blue.
B: We have smiley in the east cluster and color in the west cluster. We're actually going to start off with a color workload in the north cluster as well, so that we can show some stuff that's kind of interesting. To get back to the question about why in the world you would want to do this (sorry, I got very distracted by the comment that says "wow", and I'm wondering what the wow is for): anyway, the reason you might want to do this is for this particular kind of demo.
B: What we're doing here is showing something that we've been seeing a little more often as time goes on, which is that back in the bad old days (except they're not really all that old), people tended to set up these gargantuan clusters, run everything in one gargantuan cluster, and then they would just go through and throw in namespaces so that their developers could have a namespace to play in, and the namespaces were providing isolation. Then people started figuring out.
B: You know what we could use clusters for? The same things we used to use namespaces for: instead of having some gargantuan cluster, we can give each of our developer teams their own cluster, and it will be much, much smaller, and that way we can isolate the clusters from each other, and if one of them goes down, or if a developer screws something up in their cluster, then it doesn't affect the rest of the application. So that's the use of multicluster that we've been seeing recently.
B: We've been seeing people talk about it recently, and so that's the reason why the Faces multicluster demo is set up this way. And again, Sunil, if that does not answer your question (and if I'm pronouncing your name well enough that you recognize it, I hope so), then toss another comment in and we'll see if we can clarify. Okay.
B: So there is source code for this demo available in this repo, in the sneak-peek 2.14 directory. With a lot of demos that I do, I will start with an empty cluster and then build it up. I am not going to do that in this demo, because this demo involves three k3d clusters that are set up on the same network and have their routing tables modified so they can all talk to each other, and no, we are not going to demo that, because it's mostly just crazy, absurd k3d black magic.
B: So if you want to take a look at that, and I would encourage that, especially if you play with k3d, you can find the setup scripts in here; create-cluster.sh and setup-demo.sh are the really relevant ones, and they're kind of ugly, just be warned. All right, it is now demo time. Let's hope the demo gods are with us.
B: You run the setup script, and you will end up with three clusters which have Linkerd installed and, I think, Emissary, and the Faces demo, and all the stuff that you need. You should read over the scripts; it'll be kind of horrifying, but it'll hopefully be educational as well.
A: We have an audience question as well. By the way, Sunil said "excellent", thank you, so that's well covered there. And then AI asks: can we do that in a managed cluster from a cloud provider?
B: It depends on your cluster provider. I chose not to do this in the cloud so that I didn't have to fight with a cluster provider. The question that you need to bring to your cluster provider is: is there a way that I can set up two clusters or more such that they are sharing a network and can route directly from pod to pod? I'm pretty confident Amazon can do that, but I might be mixing up Amazon and Azure, since they both start with A.
B: I don't remember about Google. I've run across at least one of the smaller managed-cluster providers that just full-stop cannot do it, so yeah, it kind of depends. Worth pointing out: the older style of gateway-based multicluster does still work with 2.14, so if you find that you just can't do that, then you can still go back and use the gateway mechanism too. All right, so let's take a look and see: our Faces demo is running. This is good.
B: It's actually giving us grinning faces on green backgrounds, like it's supposed to. That's a rare sight. Okay, so let's go take a look under the hood and see what's going on here. I already said that we're doing this with three clusters rather than one; we have clusters named north, east, and west. We already talked about which one was which, but just to make that clear: north has the face workload and is where all the north-south traffic comes in; it also has an instance of color.
B: The east cluster is the only place that the smiley workload lives, and the west cluster only has color workloads. We can observe with kubectl cluster-info, carefully managing the context that we're using here, that we have API servers in three different places: north, east, and west are all at different ports.
B: I have set these all up so that I have one kubeconfig file with three different contexts in it, so that it's very easy for me to talk to whatever cluster I want to. I'm not going to tell you how many times while developing this I typed a command with the wrong context and got very, very confused; except I'll tell you that it was a large number. I've already done the cluster linking, but let's take a look at how that works under the hood.
B: This is an example of a command that you would use to link clusters together; specifically, this is the one that links north to east. Pay very careful attention to the contexts on the commands in this pipeline. We run linkerd multicluster link itself in the east context (linkerd --context east multicluster link); that tells the east cluster's Linkerd installation to generate a set of resources that we can apply in some other cluster, to link that other cluster to east. We also pass --gateway false to tell it:
B: We are not doing gateway stuff; we're doing flat-network stuff here. And of course we set the cluster name to east, and then we apply all of that in the north context, because we want to tell the north cluster how to talk to the east.
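Reconstructed from the description above (treat the exact flag spellings as approximate; the real command is in the demo repo's setup scripts), the pipeline looks roughly like:

```shell
# Generate Link resources in the east cluster, flat-network style
# (no gateway), then apply them in north so north can reach east.
linkerd --context east multicluster link \
    --cluster-name east \
    --gateway=false \
  | kubectl --context north apply -f -
```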
B: Go! Absolutely Cloud Native Live, not Cloud Native Recorded. Okay, where was I? I was saying that links are unidirectional, so this is a link that will permit north to talk to things in the east cluster, but it will not let east talk to north. For our demo right now we don't really care; we don't need east to be able to reach back into the north cluster.
B: Look at all of this... oh dear, I typed "examing"; that should have been "examining". Well, clearly there will be a push made to this repo shortly. We can go and actually look at the Link resources that were created by the multicluster link command, and we see that in the north cluster we have one for east and we have one for west. If we look at the resource itself, we get a lot more information. The interesting things in here are, for example, that there's a secret that has a bunch of credentials for this cluster.
B: You should perhaps make sure that those secrets are not readable by everybody that you don't want them to be readable by. This is also in the linkerd-multicluster namespace, so if you protect that, then that's a good way to go.
B: You can see that there's no gateway involved here, and you can also see that it's got a thing set up to look at labels in the other cluster; we'll talk about that in a minute. Another thing that we get to do here that's kind of fun is that we can use the same trick to look for Links in the east context and the west context, and see that there are no Links in those clusters. They are just kind of doing their normal east and west things and not doing anything special with multicluster.
B: From their point of view, again, we could link them back; it wouldn't help this demo, so I didn't do it, but it works fine if you want to set up bidirectional stuff. All right, quick aside: if you are familiar with Linkerd 2.13 multicluster, with the gateways and everything, I can ask linkerd multicluster to show me all the gateways, and it will come back and say: nope, no gateways, life is good.
B: I'm pretty sure that 2.14 still has the interesting characteristic that if I ask the east cluster for its gateways, it will hang for 30 seconds, because the infrastructure that permits it to know instantly that there are no gateways is not there. That will be fixed in a later Linkerd version, if it's not fixed in 2.14.1.
B: Actually, let's see: the way that this works without the gateways, just for the record, is that the control plane in the north cluster talks to the control planes in the east and west clusters, gets information about the services in the mesh in the east and west clusters, and figures out which ones need to be mirrored across. And we will look at all that pretty much right about now, actually, I believe.
B: So let's take a quick look at what's running in the north cluster. If we look at the workloads that are running in the faces namespace, we can see that we've got some color workloads (two replicas, actually), and we've got a couple of replicas of the face workload, and we've got the GUI, but we do not have a smiley workload at all.
B: However, if we flip back to the web browser, we are clearly getting smiley faces. So, the way that this works is... let's take a look at the services, and the interesting thing you might notice here is that we have a smiley service and we also have a service called smiley-east. We can go look underneath at some of these things, and we will find that smiley-east is a mirror, in the north cluster, of the smiley service from the east cluster.
B: This is why it's important not to name your clusters and services the same way. When I did this for Service Mesh Academy, I had clusters named face, smiley, and color, and then I ended up with services called smiley-smiley, which is very, very difficult to talk about with a straight face. But if we go and look in the east cluster and ask it for services in the faces namespace, we can see that there we do have the smiley service, and we have smiley2, and we also have the workloads.
B: If we take a closer look at that smiley service, we will find that it is marked for export using the remote-discovery method. Again, if you're familiar with 2.13, you might be used to seeing mirror.linkerd.io/exported set to true rather than to remote-discovery; true is what you use if you want to use gateways. If you don't want to use gateways, set it to remote-discovery.
B: So the fact that that was marked in the east cluster for remote discovery means that the mirror got created in the north cluster. And yes, it's currently named service-name cluster-name, which is why you get smiley-east.
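For reference, exporting a service this way is just a label on the service in the cluster that owns it; a sketch using the names from this demo:

```shell
# Mark smiley in the east cluster for flat-network (gateway-less)
# mirroring; linked clusters then create a smiley-east mirror service.
kubectl --context east --namespace faces \
  label service/smiley mirror.linkerd.io/exported=remote-discovery
```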
B: And if we take a quick look at its endpoints, specifically in the east cluster, we can see that it has two endpoints. This makes sense: there are two replicas, there should be two endpoints. If we ask the north cluster for the endpoints of smiley-east, we will find that there are no endpoints of smiley-east, which seems bizarre.
B: The reason it works is that the Linkerd control plane is actually keeping track of those endpoints with remote discovery, rather than going through and constantly updating etcd every time something changes; we just keep track of it in the Linkerd control plane. But we can use the diagnostics endpoints command, linkerd diagnostics endpoints, to ask for the endpoints of that thing, using its fully qualified name and port number.
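That looks something like the following (the port here is an assumption; use whatever port the smiley service actually serves on):

```shell
# Ask the Linkerd control plane, rather than the Kubernetes Endpoints
# API, where traffic to the mirrored service will actually go.
linkerd --context north diagnostics endpoints \
  smiley-east.faces.svc.cluster.local:80
```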
B: Finally, I should point out that the smiley, sorry, the face application does not know how to talk to smiley-east. It only knows how to talk to smiley: when the face workload is trying to fetch a smiley, it just goes to the smiley workload. The way that this works out is that we also have a service in here called smiley. If you take a very careful look at this... we'll put Annie on the spot: hey Annie, do you notice anything weird about this?
B: Well spotted. There's a little bit of a delay with the audience, but yeah, it'll be fun to see if anybody comes up with this. I'm going to go ahead while we wait for the audience to chime in: if you look very carefully at this service, you'll realize there's no selector in it.
B: If there is no selector in a service, it can never match any pods, and so it will never have any endpoints. So we have here a service in our north cluster that is literally set up so that it's not possible for this service to talk to anything, and its only purpose in life is to have this HTTPRoute associated with it. This HTTPRoute says: anything directed to the smiley service gets redirected immediately to smiley-east, and that's the thing that permits the whole demo to work.
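A sketch of that pair of resources as described (the route name and port are assumptions; the real manifests are in the demo repo):

```yaml
# A selector-less Service: it gets a cluster IP but can never match any
# pods, so it exists purely as an attachment point for the route below.
apiVersion: v1
kind: Service
metadata:
  name: smiley
  namespace: faces
spec:
  ports:
    - port: 80        # assumed port
      protocol: TCP
---
# Everything sent to smiley gets forwarded to the smiley-east mirror.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: smiley-router   # hypothetical name
  namespace: faces
spec:
  parentRefs:
    - name: smiley
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: smiley-east
          port: 80
```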
B: We're basically just doing a forward from this smiley service to smiley-east. Also worth noting: if you read very, very carefully, you would have seen that that HTTPRoute was a policy.linkerd.io HTTPRoute, not a gateway.networking.k8s.io HTTPRoute (that is very hard to say). We're not using that kind of HTTPRoute, and the reason is that there's a timeout in there that we're going to show off in a bit. Also, I'm sorry, I should have mentioned this before: only v1beta3 has timeout support. If you have older v1beta2 routes or something like that, then timeouts will not work until you switch them to v1beta3. Everything else is the same; you can literally just change the API version. It'll be really easy.
B: Okay, we already talked about why you have to use policy.linkerd.io for timeouts in HTTPRoutes. All right, another important caveat: you cannot use, for example, an HTTPRoute in the north cluster to direct traffic to something in the east cluster where there is also an HTTPRoute that will do further things with it.
B: You also can't do that in the same cluster; you can't stack HTTPRoutes. The reason for that is that Linkerd uses the first route it sees: it makes a decision and goes straight to the endpoint, and so the second and subsequent HTTPRoutes never have a chance to do anything. That should be true in every mesh that is compliant with the Gateway API mesh conformance profile. All right, let's mess with timeouts a little bit.
B: Shall we? If you flip back over to our web browser, you see how some of those cells are kind of fading out. The reason for that is that it's basically just taking too long for the face workload to get an answer back, or taking too long for the GUI to get an answer back, and so we want to try to use a timeout to make that a little bit quicker.
B: If we come back over here, we can do that, for the smileys anyway, just by scrolling down here and changing the timeout to 300 milliseconds. I'm using 300 milliseconds because that worked pretty well the last time I tried this demo. If we flip back to the browser now, then you should see... there we go: fewer faded-out cells, but you also see those counters appearing where the timeouts are firing, and we're seeing some things going on. An exercise for the reader.
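The edit being described amounts to adding a timeouts stanza to the rule in the route that forwards smiley traffic; a fragment, with the 300ms value from the talk and an assumed port:

```yaml
rules:
  - backendRefs:
      - name: smiley-east
        port: 80
    timeouts:
      request: 300ms   # give up on slow smiley fetches after 300ms
```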
B: Another thing, while y'all are thinking about why we still get any fading cells at all, why we still get things that are taking too long: we're not limited to just directing all of the traffic over to the other cluster. We can also split traffic between clusters using HTTPRoutes. This is why we have a color workload in north and also the color workloads in west.
B: So what we're going to do right now is start by mirroring the color2 service from the west cluster into the north cluster. The clusters are already linked; I have not marked the color2 service for remote discovery yet, so I will do that now, and the moment we do that, if we come over here, we will now see that color2-west service.
B: All right, my mouse was in the wrong place, so I couldn't click on that. Excellent. If we check out the endpoints for that, then we can see that we have endpoints for it as well, so we should be good to go. The astute observer also will note, or possibly note, that when we looked at the endpoints for smiley-east we got endpoints in 10.23.1.x, and for color2-west we're getting endpoints in 10.23.2.x: again, different cluster CIDRs, very, very important.
B: What was I doing that for? Oh, yes, sorry: I was also running the command to look directly at the endpoints in the west cluster, and note that, yes, they are the same as the ones that we get from linkerd diagnostics endpoints. All right. So, with that done, we can add an HTTPRoute that will split traffic 50/50 between the color service in the north cluster and color2-west, also in the north cluster, but pointing to the color2 workload in the west cluster.
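A sketch of that split (hypothetical route name and assumed port; weights as described in the talk):

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: color-split     # hypothetical name
  namespace: faces
spec:
  parentRefs:
    - name: color
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: color          # local color workload in north
          port: 80
          weight: 50
        - name: color2-west    # mirror whose endpoints live in west
          port: 80
          weight: 50
```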
B: Okay, we can do something that's kind of entertaining at this point too. So right now half of our color is coming from the north cluster and half of it is coming from the west cluster. If we edit the HTTPRoute and just delete that first backend...
B: ...now 100% of our colors should be blue, because they should all be coming from the west cluster. On the one hand, this is pretty pedestrian: all we did was a canary, and we went "yeah, the canary is good, so we'll go ahead and flip everything over." But if you think about it, we actually just migrated the color workload from one cluster to another without doing anything with the applications or anything like that. That's one of the things that's really, really cool about this.
B: Oh, and let's just go ahead and delete it, to prove that it's gone. It is gone; it's still working over here. So one of the things that's really cool about some of this multicluster stuff is that you can actually migrate between clusters by deleting things, and if you can figure out how to get this set up so that you've got, say, clusters in different cluster providers, then there's a really fascinating thing you can do. This actually holds true for the older gateway style too.
B: One of the really fascinating things you can do is set up your application so that half the traffic is going to GKE and the other half is going to AKS, and then, if you decide that Google is annoying, you one day just turn it off.
B: Instead of making a big production out of the migration, which is kind of cool. Obviously there's a lot more going on beneath that, but once you have it all set up, it's pretty cool that you can just go through and do that. I'm going to go ahead and show that, yes, those pods really did vanish, by running that command. So that's a quick sampling of pod-to-pod multicluster with Linkerd 2.14.
B: We do have, I think, a certain amount of time for questions. In the meantime, I'm going to point out that the reason we still get some fading cells here is the face and smiley workloads: we didn't put timeouts on them, so they can still go through and take a really long time, up to something like a second and a half, and that's the reason why we still get fading cells in here.
A: Let's see if any come in right now... no, but great that we got that mystery solved now, so that's always nice. But yes, we have time for questions, so let's see.
A: Perfect, always good. Actually, connected to that, while we wait to see if the audience has any questions: what would be a good resource that everyone should jump on next if they want to learn more?
B: So, Service Mesh Academy. Buoyant does this every month; the next one is on October 26th, where I will be joined by Regina Scott from Argo CD, and we will talk about Linkerd and Argo and the Gateway API, and it'll be great.
B: The fact of the matter is, I don't really know enough about Argo yet to teach this, but that's okay; that's why it's a month away, and I'll be talking a lot with Regina, and it'll be a lot of fun. KubeCrash is coming up: this is a virtual conference that happens before KubeCon. It is virtual, it is free, it is 100% focused on open source stuff, and we will be talking about multicluster at scale; we're actually going to do a stateful demo.
B: For this, which is new and different, we will be using Linkerd, Emissary, CockroachDB, cert-manager, and Polaris, the policy engine, so that should be a lot of fun.
B: And you can also go and join the Linkerd forum, which is nice because posts don't disappear like they do on Slack. And we also have a certification course now at learn.buoyant.io.
A: Many good places to learn from, for sure, and to soak up a lot of info before KubeCon in particular, so everyone go there.
B: For example, we say this a lot: as far as I know, there are no companies running Linkerd in production that have a Linkerd expert who is paid to only be a Linkerd expert, except for Buoyant, and they're paid to be Linkerd experts to work on Linkerd, not so much because they have to be Linkerd experts to keep it running.
B
But
it's
very
common
running
ISO
that
you
will
have
at
least
one
person
whose
full-time
job
is
the
care
and
feeding
ofo,
and
it
is
very,
very
uncommon
to
see
that
with
linkerd
like
I
I,
don't
I
don't
think
I
personally
have
ever.
We
also
find
regularly
that
lingery
consumes
much
less
resource
and
causes
much
less
latency
than
sto
does.
So.
Those
are
the
reasons
that
I
would
would
look
at
those
two.
If
there's
something
that
you,
you
know,
if
there's
some
bit
of
functionality,
that
sdo
provides
that
you
absolutely
must
have.
A
Sounds good. And then there was also a question from Ahmed — thank you: can you explain how a service without any selector works in this way to redirect?
B
Quickly, okay. If you're used to thinking about the world from the perspective of an application developer dealing with north-south traffic — you know, dealing with the ingress problem — you are probably used to thinking of services as just this monolithic thing where you direct traffic to the service, it shows up at the endpoints, and life is grand. In the service mesh, we have to make a distinction between the front end of the service, which has a cluster IP, and the back end of the service, which is all the endpoints — all the pods that provide the compute that actually, you know, causes that service, that workload, to do something. In the mesh, we attach HTTPRoutes to the front end of the service, with the cluster IP, and then the mesh does all the routing: it discovers the endpoints and it does all the routing to the back ends.
B
Having a service without a selector is basically asking Kubernetes, "hey, please allocate a cluster IP for me, but don't give it any endpoints," and so you end up with a service that is nothing but the front end of a service. Then you can use an HTTPRoute to direct that to the back end of whatever services you want.
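As a rough sketch of what that pattern looks like in YAML (the names here are illustrative, and the exact HTTPRoute API version depends on your Linkerd/Gateway API install — check the docs for your version):

```yaml
# A Service with no selector: Kubernetes allocates a cluster IP
# but creates no Endpoints for it -- just the "front end."
apiVersion: v1
kind: Service
metadata:
  name: smiley-frontend
spec:
  ports:
    - port: 80
      protocol: TCP
---
# A GAMMA-style HTTPRoute attached to that front end via a Service
# parentRef; the mesh then routes to the backend Service named here.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-route
spec:
  parentRefs:
    - name: smiley-frontend
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: smiley
          port: 80
```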
B
I would actually really, really like it if there were a way to have an HTTPRoute and then just ask Kubernetes, "hey, please give me a cluster IP for this HTTPRoute." That is not possible right now, and it is the subject of a lot of discussion — I mean a lot of discussion — in various Kubernetes working groups. So hopefully that answered the question. Being able to answer that question, I think, involved about three months of discussion in the GAMMA working group. It was lovely.
A
B
So, if you're trying to use clusters for isolation, and you're concerned about, you know, the entire app crashing because — I don't know — the US East data center gets hit by a meteor or something, I would encourage you to replicate that workload in another cluster in another zone, and then perhaps use HTTPRoutes to split the traffic across the two. One of the things we're going to be talking about at KubeCrash, actually, is exactly the scenario of "oh look, this whole zone just went away — what happens now?" It'll be interesting.
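A minimal sketch of that kind of split, assuming Linkerd multicluster has mirrored the remote workload into the local cluster (the service names and the mirrored-service naming are illustrative; Linkerd's multicluster docs describe the actual mirror naming for your version):

```yaml
# Weighted split between a local workload and its mirror from
# another cluster. If the remote zone disappears, you would shift
# the weights (or automate that) to send everything local.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: color-split
spec:
  parentRefs:
    - name: color
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: color        # workload in this cluster
          port: 80
          weight: 50
        - name: color-east   # hypothetical mirror of the "east" cluster's workload
          port: 80
          weight: 50
```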
B
So I may be misinterpreting the question. I think you're asking: if you have a cluster that is running without Linkerd, how do you add Linkerd to your existing stuff? The short answer is that you end up doing a rollout restart of those workloads. There are a couple of different ways you can get the mesh — a couple of different ways to handle actually injecting it in there.
B
The certification course covers this, actually. I'm pretty sure we have a Service Mesh Academy that covers it — if not, I suppose we should do a Service Mesh Academy that covers it. But yeah, the simple way to do it is to annotate the namespace for your workloads with linkerd.io/inject: enabled and then do a rollout restart of the workloads in that namespace, and they will all be injected into the mesh once that happens. The longer, more detailed answer is that Linkerd provides a Kubernetes admission controller.
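That flow might look something like this on the command line (the namespace name is illustrative; the annotation and verification commands are the standard Linkerd ones):

```shell
# Mark the namespace so Linkerd's proxy injector meshes any pod
# created in it from now on.
kubectl annotate namespace my-ns linkerd.io/inject=enabled

# Restart existing workloads so their pods are re-created --
# and therefore injected with the sidecar proxy.
kubectl rollout restart -n my-ns deployment

# Verify that the data-plane proxies came up healthy.
linkerd check --proxy -n my-ns
```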
A
Let's see if anything comes up. I guess this is then kind of last call for questions, if there isn't anything else.
A
All in one go, which sometimes happens — it's a positive problem to have if we have too many at the end of the show anyway.
B
Yeah, thank you. I guess I could have spent more time on the slides, but that wouldn't have been nearly as fun as spending more time on the demo and questions.
A
B
Budget for implementing a service mesh, including software and support costs? The right answer to that question — the best answer — is to go to the Linkerd Slack and ping Gary, surname Myk, and he will be happy to talk to you about how to spend money on Linkerd. Well, no — he will have the more detailed answer; I can give you the sort of broader answer.
B
There are at least two routes to go for Linkerd. One route — which is the one that I, as an open source person, think is awesome — is that you can just use open source Linkerd and not pay anybody any money. You will, of course, have to do your own Linkerd support. You will be able to ask questions on GitHub, you'll ask questions on Slack and in the Forum, but, you know, nobody is going to be out there committing to provide support for you. Me, as an employee of Buoyant, I will go, "oh yeah —"
B
You
should
totally
go
and
get
a
support
contract
from
buoyant
and
do
all
the
buoyant
things
and
give
us
tons
of
money,
because
that
would
make
me
very
happy
and
permit
me
to
keep
working
on
open
source
stuff,
but
yeah
you
can
totally
do
it
in
open
source,
I,
don't
I,
don't
think
I
know
of
anybody
who
has
more
than
like
half
an
FTE
devoted
to
babysitting
linardy,
maybe
less
than
that
I'm,
not
I'm,
not
actually
sure,
but
yeah.
B
You
know
by
all
means
feel
free
to
Ping
me
on
feel
free
to
ping
me
on.
On
slack
and
I
can
point
you
in
the
right
direction.
I
can
also
point
out.
B
Linardy
you
can
talk
to
William
on
slack
and
that'll
be
great,
but
there
is
how
to
find
me
on
slack
or
bya
email.
If
you
want
to
ask
more
questions
about
that,
too,.
A
B
Linkerd does some things at the cluster level, like CRDs and such. In general, it is fairly tricky — somewhere between tricky and "don't do that" — to try to run two service meshes in the same cluster. I just don't know anybody who's doing that, and I would not recommend it, so effectively service meshes are kind of a cluster-level thing. HTTPRoute for pod-to-pod communication definitely works: you saw it in this demo, where we used an HTTPRoute to control traffic from the face pod to the smiley pod and the color pod.
B
The bit where I was canarying between the blue color and the green color — some of that traffic was crossing a cluster boundary, but some of it stayed entirely within the north cluster. You can do exactly the same thing with color and color2, both in the north cluster, and it works great. HTTPRoute originally was specified as a thing you could only attach to a Gateway resource within the Gateway API; the work that the GAMMA working group has done over the last year or so also allows associating HTTPRoutes with Services.
B
We do not believe that that's the end state. We think we're going to have to do some more work still, in that if you only support Services, there are some things that get really hairy that we'd like to be able to do within GAMMA. So we think there's some other stuff coming as well. But for right now, that's how you use HTTPRoute for configuring a service mesh — and if you want to know more about that in particular, let me know.
A
Sounds good. And then we have at least one more question from the audience, and we still have more time if anyone wants to ask a few more questions. This was exactly what we were hoping would happen, though — people coming in with the questions, which is always nice to see. Thank you so, so much, everyone, and let me get the right link to everyone in.
B
Yeah
so
so
yeah
I
pasted
Annie
a
link
to
the
some
of
the
docs
there
about
the
way
that
Gateway
API
works
with
service
mesh,
and
so
that's
the
that's
a
good
place
to
start.
If
you're
curious
about
that
at
least
I
think
it's
a
good
place
to
start
I
hope
it's
a
good
place
to
start,
because
I
wrote
a
lot
of
that.
B
Linker
already
supports
Canary
deployments
and
AB
testing
and,
depending
on
exactly
what
you
mean
by
Blue
greed
deployments,
it
can
do
that
as
well
or
a
better
way
to
put
that
is
so
linkerd.
You
actually
saw
a
canary
deployment
where
we
split
traffic
across
two
different
workloads
in
a
weighted,
fast
fion.
So
you
can
already
do
things
like
oh
take
1%
of
my
traffic
and
ship
it
over
to
my
new
workload
and
find
out
if
it
breaks.
B
You
can
also
do
things
like
take
all
the
traffic
for
the
you
take
all
of
the
traffic
with
a
header
that
says:
xes,
user
test
user
and
direct
it
to
you
know:
Smiley
2
instead
of
smiley
and
in
fact,
I
think
I've
done
that
demo
for
cloud
native
live,
haven't
I,
pretty
sure
I've
done
one
on
Linker
Dynamic
request
routing,
which
is
the
the
thing
that
we
call
that
within
Linker.
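A sketch of that header-based route (the header name, user value, and workload names are taken from the Faces-style demo described above and are illustrative):

```yaml
# Requests carrying "x-faces-user: testuser" go to smiley2;
# everything else falls through to the regular smiley backend.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: smiley-ab
spec:
  parentRefs:
    - name: smiley
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - headers:
            - name: x-faces-user
              value: testuser
      backendRefs:
        - name: smiley2
          port: 80
    - backendRefs:
        - name: smiley
          port: 80
```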
A
B
So the only thing on Suresh Kumar's question about blue-green deployments is that it depends on how people are thinking about blue-green. Sometimes they are talking about something that's pretty similar to an A/B test, and sometimes they're talking about something where they roll out the green deployment, and then the system will automatically divert all of the traffic to the green deployment if it works, and otherwise roll back to blue. And for that — and this also kind of gets to Suresh Kumar's next question about third-party integrations, plugins, etc. —
B
So yeah, there are a lot of ways to do that. The third-party integration and plugin thing — I tend to think of it more as integrations rather than plugins, because, for example, with Flux: actually, these days I don't even know that you need anything special. I think you just hook the two of them up and they work, because Flux has a mode where it can use HTTPRoutes to handle shifting traffic around, and Linkerd does HTTPRoutes natively now. Previously, yeah, you'd use the Linkerd service mesh interface (SMI) plugin and then tell Flux to use SMI things, and that worked great.
A
Good
and
we
have
a
couple
minutes
left
if
anyone
is
typing
away
a
question
and
wants
to
ask
ask
it
super
quickly,
but
just
to
clarify
papag
as
well.
All
of
the
recordings
for
all
the
cloud
native
lives
are
available
in
YouTube,
for
example,
from
cncf
there.
So.
A
Put
cncf
cloud
native
Computing
foundation
on
YouTube:
they
are
either
in
the
video
tab
or
in
the
live
stream.
Tab
I
think
they
used
to
be
in
the
video
site.
Now
they
are
on
the
live
stream
side
stream.
A
Yeah
yeah
and
there
should
be
a
playlist
as
well,
so
you
can
find
this
one
one
as
well
as
all
the
previous
ones,
with
Flyn
and
Linker
D.
So
you
can
find
all
of
those
from
there
and
I
think
also.
The
recording
of
This
is
available
on
the
LinkedIn
side
as
well,
but
that
might
get
buried
a
bit.
So
if
you
want
to
find
out
the
older
ones,
Maybe
YouTube
Works
a
bit
better
for
that.
B
Yeah
YouTube
is
typically
when
I've
wanted
to
go
through
and
find
stuff,
then,
actually,
typically,
what
I've
done
is
just
Google,
Cloud
native
live
linery
or
whatever,
and
that
usually
gets
me
someplace
close
enough.
I
can
find
it
and
it's
pretty
much
always
been
YouTube
yeah.
A
That
is
very
true
and
there's
I.
Think
the
final
question
for
today,
since
we
have
one
minute
left,
is
it
possible
to
integrate
Argo
CD
with
any
prerequisites.
B
My
understanding
and
again
I
am
currently
not
an
Argo
expert,
although
that
is
likely
to
change
somewhat
in
the
next
month.
But
my
understanding
is
that
as
of
linkerd
2.14,
then
it's
pretty
straightforward
to
have
Argo
use,
Gateway
API
to
do
its
traffic
shifting
and
things
pretty
much
just
work
great,
but
October
26th
in
service
mesh
academy.
We
will
be
diving
deep
into
Argo
CD
and
linkerd
and
it'll
be
kind
of
fascinating
to
find
out.
If
there
are
any
gotas
we
have
to
talk
about.
There
are
always
gotas.
A
Perfect. But that's it, I think, for today. Amazing to have you —
B
A
— and any questions in the future are welcome, but amazing that we had so many questions at the end in particular.
A
I
fully
agree
yeah,
but
thank
you,
everyone
for
joining
the
latest
episode
of
cloud
Med
live.
It
was
great
to
have
a
session
about
linger.
Le
214
really
love
the
interaction
as
well
as
the
a
questions
from
the
audience,
and
we
bring
you
the
latest
Cloud
nting
code,
every
Wednesday
or
Tuesday,
and
in
the
coming
weeks
we
have
more
great
sessions
coming
up
so
to
stay
tuned.
For
those.
Thank
you
for
joining
us
today
and
see
you
all
in
future.