From YouTube: Cloud Native Live: 2.13 Linkerd and the Gateway API
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week we have Flynn here with us to talk about 2.13, Linkerd, and the Gateway API. Very exciting. As always, this is an official livestream of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat, or ask any questions, that would be in violation of that code of conduct.
B
Thank you, happy to be here. So today we do have a few slides, since we're talking about what's coming in Linkerd 2.13. There will be a demo. Don't panic: hopefully the demo will even work. It's a pretty simple agenda: just an overview of the 2.13 features, and then we'll go directly into a dynamic request routing workshop. If you want to follow along, you can; otherwise you can just watch what's going on and mock me if the demo gods don't smile upon us. I am going to be using a k3d cluster for this.
B
There are a couple of significant features showing up. I'm only going to talk about two of them here: dynamic request routing and circuit breaking. I'm really briefly going to talk about changes in Buoyant Cloud, and I'm not going to talk about FIPS in the Azure Marketplace at all. I actually meant to take that off of that slide, so, sorry. Dynamic request routing.
B
Those of you who are familiar with Linkerd 2.12 and earlier will know that in 2.12 you could use SMI TrafficSplits to control where requests would be routed inside the mesh. A TrafficSplit pretty much allows routing for things like: okay, for this particular workload, go and split 90 percent off to this other thing. In this case you could take one percent of the traffic over to the new version of a foo workload and 99 percent to the original workload, which is the sort of thing you would do for progressive delivery or a canary rollout. You could also do things like completely take over traffic for one service and route it someplace completely different, which might be the sort of thing where you've realized that the foo service in your cluster is dead, and so you're going to route all of its traffic over to a different cluster entirely with Linkerd multicluster. That's about as far as you can take routing in 2.12.
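A 2.12-era SMI TrafficSplit of the sort described above looked roughly like this. The resource below is a sketch: the foo / foo-v2 names, namespace, and route name are illustrative, not from the talk.

```yaml
# Sketch of an SMI TrafficSplit sending 1% of traffic for a hypothetical
# "foo" service to a new "foo-v2" version. The apiVersion may differ
# depending on which SMI revision your Linkerd release supports.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: foo-split
  namespace: default
spec:
  service: foo          # the apex service that clients address
  backends:
    - service: foo      # original workload keeps 99%
      weight: 99
    - service: foo-v2   # new version gets 1%
      weight: 1
```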
B
In 2.13 we changed the world around. In 2.13 you can route based on most attributes of the request: you can route based on headers, paths, and verbs. You don't get to route on the body; this is still a service mesh, not an application firewall or something like that. But even just being able to route on headers and verbs and such gives you a lot of flexibility that Linkerd 2.12 simply did not have.
B
These are not configured with SMI anymore; these are configured instead with the Gateway API. Again, if you're familiar with Linkerd 2.12: we introduced the HTTPRoute object from the Gateway API in Linkerd 2.12, but in 2.12 it was just a thing that you could hang policy off of. You could hang authorization policy off of it, specifically. In 2.13 it's the same object, but now you also get to control routing with it as well as hanging policy off of it. You can, of course, continue to hang policy off of it as well; that still works.
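As a sketch of what this looks like in practice, a single HTTPRoute rule can match on path, method, and headers together. Everything here is illustrative: the service names, the x-variant header, and the exact policy.linkerd.io apiVersion are assumptions and may differ between Linkerd releases.

```yaml
# Hypothetical HTTPRoute showing the request attributes Linkerd 2.13 can
# route on: paths, headers, and methods (verbs), but not the body.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: example-routes
  namespace: default
spec:
  parentRefs:
    - name: my-svc        # attach to a Service, not a Server
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
          method: GET
          headers:
            - name: x-variant
              value: beta
      backendRefs:
        - name: my-svc-beta
          port: 80
```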
B
This
permits
you
to
do
some
really
fascinating
things
like
if
you
have
a
workload
deep
in
your
call
graph,
but
you
want
to
do
header
based
a
b
testing
so
that
some
of
your
users
get
one
version,
and
some
of
your
users
get
another
version,
but
you're
far
away
from
the
Ingress,
where
this
would
normally
happen.
You
can
do
that
now,
which
is
kind
of
cool.
You
could
do
a
per
user
Canary
rollout.
You
could
do
Canary
Roll-Ups
still
the
old
way,
but
you
have
a
lot
more
control
over
them
now.
B
If you are not familiar with the Gateway API, it was originally designed for ingress traffic: traffic coming from outside the cluster going to inside the cluster. Over time, as people have been looking at this, they kind of realized that there should be no particular reason we can't use the same resources, like HTTPRoute, to control routing within the service mesh as well. This has actually given rise to the GAMMA initiative, which is... I'm not actually going to try to expand that acronym.
B
As of 2.12, we have adopted the Gateway API as the core mechanism that we're going to use within Linkerd for describing classes of HTTP traffic. By which I mean things like: here's a chunk of traffic for a given user going to this workload, or gRPC calls to this particular service, that sort of class. Using the Gateway API for that gives us a lot of flexibility to do things like auth policy.
B
Dynamic request routing, circuit breaking, all this kind of stuff, where we want to identify a particular chunk of the HTTP or gRPC traffic in a cluster and then be able to take actions that are specific to that particular chunk of traffic. In the future we expect to extend this beyond HTTP traffic, to TCP traffic, whatever, but at the moment we're focusing on HTTP traffic.
A
I can still hear you; it's a bit low and a bit... okay, interesting, but better.
A
Yeah, exactly. Why don't you take a moment to fix that. I want to remind the audience that you can ask your questions throughout the whole show. So even if you're right now wondering, "oh, I'm wondering about this part of it" or something like that, just send your question in immediately, and then we'll get to it when we have a good spot, or I'll just ask it immediately when it comes in as well, so you can get immediate answers. That's always fun.
B
So this is how you know we're doing this live. Exactly where was I? I think I had finished talking... well, okay, just to back up in case: I was mentioning that we copied the HTTPRoute out of the Gateway API group into the policy.linkerd.io group, for reasons of conformance. Everybody heard that right.
B
Good. So we're kind of just now getting to the point where we are defining, in GAMMA, what conformance for a Gateway API mesh looks like. So hopefully, real soon now, we'll be able to use the real Gateway API HTTPRoute object, because we will be able to have Linkerd pass the conformance tests for the Gateway API itself, using the GAMMA stuff.
B
We will still continue supporting policy.linkerd.io. Don't worry, you won't have to immediately go and switch everything over. Hopefully that conformance thing will be coming in 2.14, but I cannot yet promise that. In the long run, we expect the Gateway API in Linkerd to deprecate the SMI extension. We also expect it to deprecate the older ServiceProfile CRD, but both of those are still things we're going to support for some time going forward.
B
We
have
no
plans
to
show
up
next
week
and
say
everyone
must
change
their
cluster
config.
So
don't
panic
about
that
all
right,
I
believe!
Oh,
yes,
gotchas!
We
should
always
talk
about
gotchas.
There's
a
big
one,
there's
only
one
big
one,
but
the
big
one
is
that
service
profile
and
HTTP
route
don't
compose.
B
If you have a ServiceProfile that defines routes, it will take precedence over an HTTPRoute that conflicts with it. The reason for this is that the ServiceProfile is the older mechanism. We do not want it to be the case that you have a cluster running using the 2.12 mechanisms, and then somebody just randomly decides to throw an HTTPRoute in to play with, and they break everything.
B
So
this
is
going
to
be
the
case
for
the
foreseeable
future.
The
big
changes
here
are
that
HTTP
route
will
gradually
gain
more
and
more
capability
so
that
by
the
time
we
actually
fully
deprecate
service
profile,
you'll
be
able
to
just
Define
an
HTTP
route
that
does
all
of
the
same
things
that
your
service
profile
did
in
the
past.
In
2.13,
HTTP
route
is
still
pretty
limited,
because
this
is
this
is
still
pretty
early
days.
So
this
is
a
great
place
to
pay
attention
as
we
go
forward.
B
The
next
stuff
I'm
going
to
do
is
talk
actually
yeah.
Let's
talk
about
circuit
breaking
here
and
then
I'll
talk
more
about
the
specifics
of
what
these
routes
look
like
in
during
the
demo.
Let
me
reiterate
what
Annie
was
saying:
if
you
have
questions,
throw
them
in
the
chat
now's
a
great
time
to
get
to
some
of
these.
B
In the meantime, I'm going to talk a little bit about circuit breaking. The idea here is: you're sitting there, you're running along, everything is great, and then one of the endpoints of one of your workloads starts to fail, for whatever reason. We don't really want to keep hammering it with traffic while it's already failing. It would be better to stop routing traffic to the broken endpoint, give it a chance to recover, and then come back to it.
B
Yet,
as
those
of
you
who
have
paid
a
lot
of
attention
to
link
or
D
will
have
noticed,
the
demo
that
I'm
doing
is
actually
going
to
be
doing
using
the
latest
Edge
release,
which
has
HTTP
routes
in
it,
has
Dynamic
request
routing,
but
it
doesn't
actually
have
circuit
breaking
turned
on.
So
this
next
slide
I'm
going
to
talk
a
little
bit
about
what
we
kind
of
expect
this
to
look
like
in
2.13,
but
please
note
that
point
at
the
bottom.
These
are
not
yet
set
in
stone.
B
These
things
could
change
in
the
beginning
with
2.13,
you
will
be
configuring
these
using
annotations.
We
will
probably
have
annotations
that
allow
you
to
do
things
like
tell
it.
Oh
I
want
you
to
do
circuit
breaking
when
you
see
more
than
some
number
of
consecutive
failures,
we'll
probably
have
an
annotation
for
things
like
how
about,
if
you
see
a
certain
number
of
failures
over
a
certain
span
of
time,
consecutive
or
not
or
if
your
success
rate
Falls
too
low.
So
we
expect
that
we
will
see
annotations
that
look
kind
of
like
these.
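As a purely illustrative sketch of the shape being described: the talk stresses these were not set in stone at the time, so the annotation names and values below are assumptions, not a final API.

```yaml
# Hypothetical circuit-breaking annotations on a Service, of the kind
# described in the talk: trip the breaker after some number of
# consecutive failures. Names and values are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: default
  annotations:
    balancer.linkerd.io/failure-accrual: consecutive
    balancer.linkerd.io/failure-accrual-consecutive-max-failures: "7"
spec:
  selector:
    app: my-svc
  ports:
    - port: 80
```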
B
Really briefly: one of the interesting things that we've been learning about Linkerd is that, as we look over the people who are running it, there are a surprisingly large number of Linkerd installations that are not being kept up to date, don't have any practical way of finding out when something has gone wrong, and, in fact, are running with things being wrong.
B
This is a big problem for meshes, because the whole point of a mesh is that it's fundamentally a tool for adding security and reliability to your application, but we can't do that very well if Linkerd itself is not being kept up to date. So, very quickly: we are opening Buoyant Cloud to the community. Previously, Buoyant Cloud was entirely a paid product. Now, with 2.13, we are opening up a free tier for it, to try to help with this particular problem.
B
And if you are looking closely, you'll realize that this looks remarkably like the process for installing the latest production version of Linkerd. It's just that instead of using run.linkerd.io/install, we use install-edge. If we do that, it comes through and points out that I already had edge-23.3.3 downloaded, and so it's now going to make that the default. At this point, on my machine, running linkerd will in fact get me the Linkerd edge-23.3.3 CLI, not the stable 2.12.4 CLI.
A
That's always a tough question, but maybe I have a question: what are you the most excited about for the 2.13 release? What's your favorite?
B
You're
actually
going
to
see
it
in
the
demo.
The
the
dynamic
request,
routing
stuff
can
do
some
exceptionally
cool
things
which.
B
It's
probably
worth
pointing
out
actually
because
I,
don't
I,
don't
really
know
what
kind
of
people
what
people
know
about
my
background,
but
my
background
in
the
cloud
native
world
has
always
been
about
looking
at
this
technology,
not
for
the
sake
of
the
technology
itself.
Nobody
runs
kubernetes,
just
to
say,
they're
running
kubernetes
people
run
kubernetes
because
they
have
a
problem.
B
They want to solve it. And I've always been looking at this from the perspective of: okay, if you're an application developer and you're trying to run an application in the cloud, how do we actually make that seamless, and how do we make that easy? Some of the stuff that you can do with dynamic request routing dovetails directly into that, and so I think that's really cool. All right, so we've got that going.
B
The
faces
demo
is
running
and
now
I'm
going
to
get
both
Emissary
Ingress
and
the
faces
demo
into
the
mesh
I'm
gonna
do
that.
First
by
telling
linkerty
hey
anytime,
you
see
a
pod
appear
in
these
two
namespaces
go
ahead
and
inject
it
into
the
mesh
and
then
I'm
going
to
do
a
rollout
restart
again
in
both
of
those
things
to
actually
allow
Linker
D
to
see
the
to
see,
pods
being
created
and
get
them
into
the
mesh.
B
Normally
doing
this
demo
actually
I
tended
to
do
the
restart
and
then
the
wait
and
then
the
restart
and
then
the
wait,
and
so
this
time
I'm
trying
restarting
them
both
in
parallel
and
waiting
kind
of
in
parallel.
So
we'll
see
if
this
is
any
faster.
B
Silly
kubernetes
tricks,
okay,
so
now
Emissary
is
going
okay
and
we
can
let
the
faces
demo
come
up
as
well,
and
all
of
this
is
kind
of
boring
from
really
everybody's
perspective.
You
don't
really
get
to
see
anything
going
on
from
here,
but
this
looks
good
all
right.
So
at
this
point
everything
should
be
in
the
mesh
and
I
should
be
able
to
show
you
all
a
couple
of
web
browsers,
showing
the
faces
GUI
actually
running.
B
One of my web browsers is going to be totally normal, and in the other one I'm using the ModHeader extension to insert the header x-faces-user: testuser into all the requests going down into the application; we will use those later. So here's my normal browser. It says "user unknown" because it's not actually sending an x-faces-user header, and it is indeed showing us a grid of smiley faces on a green background, which is kind of nice. And here's my other browser. My other browser, you'll notice, says "user testuser".
B
This
is
the
one
that's
inserting
the
xbases
user
header
into
the
requests.
Every
time
it
fetches.
One
of
these
faces
all
right,
so
those
are
the
same
two
browser
windows.
By
the
way
you
can
tell
because
I
just
selected
that
perfect.
B
"Show us what being in the mesh means." Let me come back to that, because I can, yeah, I can share that. Let me go ahead and come back to that at the end, after we run through and take a look at this stuff, just because we could go fairly deep down the rabbit hole on some of that, and I want to make sure we get through the demo before we do that. Although, actually, it looks like we have plenty of time, so that's good. Okay: the simplest sort of dynamic request routing.
B
This
is
a
pretty
basic
thing
that
you
do
for
Progressive
delivery.
It's
a
pretty
basic
thing
you
do
just
to
you
know
make
certain
that
a
new
version
of
your
service
is
actually
going
to
function
before
you
just
throw
all
the
traffic
at
it.
I
know.
Testing
in
prod
is
a
thing,
but
maybe
having
some
control
over
testing
a
prod
is
good.
B
Basically
this
is
a
route.
That's
going
to
be
attached
to
the
service
named
color
in
the
Face's
namespace
and
I
want
to
call
a
little
bit
of
attention
to
the
word
service
here,
if
you're
familiar
with
Linker
D,
you
will
also
recognize
that
there
are
things
in
Linker,
D
called
servers
as
opposed
to
service.
In
this
particular
case,
I
want
the
service
I'm,
basically
saying
hey,
linkerty
anytime,
you
see
traffic,
that's
trying
to
be
sent
to
the
cluster
IP
associated
with
the
color
service.
B
This
route
applies
to
that
because
we're
talking
about
traffic
to
a
Services
cluster
IP.
We
also
need
to
talk
about
the
port,
and
the
port
here
is
the
port
number
defined
in
the
service
resource.
That's
why
it's
80,
as
opposed
to
you,
know
some
cluster
based
port
number
here
for
every
traffic
there
we're
going
to
send
90
of
its
traffic
to
the
pods
associated
with
the
service
called
color,
and
then
we're
going
to
send
the
other
10
percent
to
the
pods
associated
with
the
service
named
color.
2.
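A sketch of the route just described might look like the following. The route name and exact apiVersion are assumptions, but the attachment to the color Service on port 80 and the 90/10 backendRefs follow the description above.

```yaml
# Sketch: HTTPRoute attached to the "color" Service in the "faces"
# namespace, splitting 90% of traffic to color and 10% to color2.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: color-canary
  namespace: faces
spec:
  parentRefs:
    - name: color
      kind: Service
      group: core
      port: 80          # the port from the Service resource, not a pod port
  rules:
    - backendRefs:
        - name: color   # original workload
          port: 80
          weight: 90
        - name: color2  # canary workload
          port: 80
          weight: 10
```

Note that, as the talk goes on to say, only the ratio between the weights matters; they do not have to sum to 100.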
B
It felt natural to me that the backendRefs would be talking about service ports, not cluster ports. Still another thing that's worth pointing out is that these numbers actually don't have to sum to a hundred; the important thing is the ratio between the two numbers. But for me, personally, I kind of like to make them sum to 100, because it makes it really easy for me to just look at them and go: oh yeah, percentages, I can deal with percentages. So let's go ahead and apply this and see what happens.
B
So far, what's happened here, if we go back and take a look at the demo architecture: instead of having all of this traffic go over to the color service, we're adding another path where 10 percent of it is getting routed over here, happening in the mesh. I also want to point out that we are far away from Emissary at this point. Typically you do things like this by altering things at the ingress. Emissary would not be able to do this, because Emissary has absolutely no control over this traffic; but Linkerd can do it.
B
It is not. What's happening is, back at the demo architecture, I'm using Linky the lobster, the Linkerd mascot, to represent the sidecars; well, to kind of represent the sidecar proxies that are attached to these pods. The face workload actually only has one proxy, so I should really be showing this Linky going around talking to this Linky, but that just makes the diagram ugly, so I didn't do it.
B
So
we
haven't
changed
anything
about
the
number
of
pods
in
the
replica
set.
We
haven't
done
anything
different
inside
the
Pod.
We've
simply
told
that
one
proxy
that
it
should
be
doing
something
different
with
traffic
routing,
which
is
part
of
the
reason
why
this
is
really
powerful.
It
means
that
we
don't
have
to
consume
more
resources.
We
don't
have
to
slow
anything
down
any
further.
B
If that did not answer your question, Chewie, please elaborate. All right, so let's come back over here, and to further demonstrate, we can go and edit those weights; there's nothing particularly magic about a 90/10 split. If we change this to 50/50, we should now see half of them turn blue. And, worth pointing out: we're talking about percentages, randomness, stochastic splits, and things like that, so we're not always going to have exactly eight blue backgrounds and eight green backgrounds.
B
But
we
can
look
at
this
and
see
yeah.
You
know,
roughly
50
percent
I
could
also
install
linkery
Vis
and
then
go
through
and
look
in
Liberty
Vis
to
see
much
more
detailed
information,
but
you
know
for
50
50
splits
with
stuff
like
this.
One
of
the
things
I
like
about
this
demo
is
that
you
can
just
see
it.
B
Another
thing
that
we
can
do
is
we
can
edit
one
of
these
backends
to
have
a
weight
of
a
zero
which
does
the
intuitive
thing
that
you
would
expect
where
it
just
makes
all
of
the
traffic
go
to
the
other
one.
A
weight
of
zero
means,
don't
put,
don't
send
any
traffic
here,
and
so
now
we
see
all
blue
backgrounds
and
again
we've
done
this.
We
haven't
changed
the
pods
that
are
running
there's
in
fact,
there's
only
one
pod
running
for
each
of
color
and
color.
Two
right
now
don't
do
that
in
production
people.
B
This
is
a
very,
very
bad
idea
for
anything,
but
demos,
on
the
other
hand,
running
it
in
k3d
in
production
is
also
well
okay.
Running
getting
k3d
on
your
laptop
is
also
a
bad
idea
for
production
running
it
in
k3s
in
a
cluster,
someplace
works
really
well
Okay.
So
there
you
go,
there's
a
really
simple
Canary
deployment.
If
there
are
any
other
questions
about
that,
now
would
be
a
great
time
to
bring
those
up
too.
A
No questions so far, but Chewie did say "amazing, that's very clear."
B
Good
now
another
thing
that
I
mentioned
earlier
that
I
think
is
really
cool.
Is
you
can
also
do
a
b
testing
very
far
away
from
your
Ingress
controller
down
deep
in
the
mesh,
and
so
we're
going
to
do
that?
One
next,
the
way
we're
going
to
do
this
one
is
that
we're
going
to
arrange
it
so
that
if
we
see
that
X
faces
user
test
user
header
on
a
request,
then
that
is
going
to
get
routed
over
to
the
smiley
2
service,
which
will
return
the
hard-eyed
smiley.
B
So let's take a look at the HTTPRoute for that. Once again, we are doing this in the faces namespace, because we're working with workloads in the faces namespace. Same thing as before: we are attaching to a Service, only this time we're attaching to the smiley service rather than the color service. (This is making me realize I should really rename the face service... anyway, never mind.) So this is a new one, where we introduce this matches clause.
B
This is saying: if you see a header named x-faces-user, and you'll notice this is all lowercase, because per HTTP/2 normalization rules headers get normalized to lowercase, and its value is exactly "testuser", then we will match this backendRef and send the request to the smiley2 service.
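A sketch of that header match follows. The route name and apiVersion are assumptions; the second, match-free rule is the fall-through that keeps unmatched traffic going to the original smiley service.

```yaml
# Sketch: requests to the "smiley" Service whose x-faces-user header is
# exactly "testuser" go to smiley2; everything else stays on smiley.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: smiley-ab
  namespace: faces
spec:
  parentRefs:
    - name: smiley
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - headers:
            - name: x-faces-user   # lowercase, per HTTP/2 normalization
              value: testuser      # header matches are exact by default
      backendRefs:
        - name: smiley2
          port: 80
    - backendRefs:                 # default rule for everyone else
        - name: smiley
          port: 80
```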
B
What we should see is that the front browser window should end up still showing us the normal smiley faces, and the back browser window should end up showing us heart-eyed smileys. And that is actually what we see. It's so nice when the demo actually works; I really appreciate that.
B
A very simple demo of the dynamic request routing functionality in Linkerd 2.13. Like I said, I tend to think this is pretty cool, because there's a bunch of stuff you can do with it that is really, really amazing, especially if you're accustomed to only having control of things at the ingress.
B
If you want to do header-based routing, the header you're after has to be present at the place where you're doing header-based routing, right? So in this particular case, the Faces demo has to pass the x-faces-user header from the browser, through the face service, on to the smiley and color services. If it doesn't do that, the information that you want to be doing routing on is simply not present. And again, that's not anything specific to Linkerd.
B
So that's your typical "show me all the pods that are running in the faces namespace". You know, actually, I realize I lied: I'm actually running two replicas of everything in here, rather than one. I thought I'd turned that back down to one, but okay, fine, so I have two of them. An interesting thing that you might notice here is that each of them says they have two containers rather than one container. Let me pick one of these and click on the smiley service for a minute.
B
If I repeat that command with the correct namespace added, you can come through here and you can see what's in our containers.
B
I am not actually going to try to summarize that one, because some of the choices that they make... what's the right way to phrase this... actually, you know what, I take it back, I will try to summarize it. It starts with: more specific things take precedence over less specific things. So, for example, if you have an HTTPRoute for a path of /foo, that will take precedence over an HTTPRoute with a path of /, because /foo is more specific.
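That specificity rule can be sketched with two rules on one hypothetical route: /foo traffic matches the first, more specific rule, and everything else falls through to the second. All names here are illustrative.

```yaml
# Sketch of Gateway API match precedence: the /foo rule wins for
# /foo/... requests because its path match is more specific than /.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: precedence-example
  namespace: default
spec:
  parentRefs:
    - name: my-svc
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - path: { type: PathPrefix, value: /foo }  # more specific: wins for /foo
      backendRefs:
        - name: my-svc-foo
          port: 80
    - matches:
        - path: { type: PathPrefix, value: / }     # less specific: everything else
      backendRefs:
        - name: my-svc
          port: 80
```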
B
The corner cases are all defined in a way that arranges it such that it's really not possible to have a tie. There is always a route that will have greater precedence than the other one, so there's always a way for it to win, and it's always a deterministic way for it to win, even though it can sometimes be very convoluted.
B
So in this case I'm saying: hey, Linkerd Viz, give me stats on the namespace faces. And you will notice that it first says: oh, all of these are meshed. This is actually 11 out of 11... sorry, 11 out of 11 pods in this namespace are in the mesh. Traffic through here currently has a success rate of 100 percent, and we're doing 35.3 requests per second, which is a little higher than I would have expected, but that's okay.
B
There you go, there's all of them, and I can see that in Emissary we're only running two pods, both of them meshed; faces is fully meshed; Linkerd itself, of course, is meshed, as is Linkerd Viz. So you can get a really quick overview of what's in the mesh and what is not right now. Okay, so if people have more questions about that, that would be good. I'm actually going to do one other thing really quickly here.
B
Yeah, I was looking over there trying to get to the Linkerd Viz dashboard, and the Viz dashboard is not working right now, which honestly doesn't surprise me too much, because I haven't tested it in this configuration in quite some time. So, moving on: "does Linkerd automatically mount the containers per pod?"
B
So this annotation tells Linkerd: whenever you see a pod created in this namespace, go ahead and inject that pod into the mesh, which really means inject the proxy container into the pod. So whenever that annotation is present on the namespace, or on a deployment... actually, really, you need to put it on the pod template in a deployment, so it appears on the pod. As long as it's present on the namespace or on the pod, Linkerd will automatically go through and do everything there is to do.
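The annotation being described is linkerd.io/inject. Set on a namespace, for example, it looks like this (the namespace name here simply follows the demo):

```yaml
# With this annotation on the namespace, Linkerd's injector adds its
# proxy sidecar to every pod created there, which is why each pod in
# the demo shows two containers instead of one.
apiVersion: v1
kind: Namespace
metadata:
  name: faces
  annotations:
    linkerd.io/inject: enabled
```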
B
Yeah, Chewie, one of the things that's worth pointing out here is that Linkerd is designed to be operationally simple. Operationally simple means it should be really easy to get things into the mesh; you should not have to do a lot of work for that. And so there are a couple of different ways to do this, depending on your workflow, but they're all pretty simple. Jubis Faucet, or Jebus Faucet...
B
Sorry,
if
I'm
mispronouncing
your
name,
is
there
a
cleanup
procedure
or
technique
delete
the
resources
which
I
realize
sounds
facetious,
but
for
example,
there
you
go,
there's
all
my
HTTP
routes
if
I
want
to
go
through
and
get
rid
of
one
of
them,
I
can
just
delete
it
and
in
fact,
let's
do
that,
shall
we
so
if
I
delete
the
color
Canary,
probably
point
out,
yes,
I
have
aliased
K
to
cube
control.
Sorry
I
should
have
said
that
earlier.
B
But in this case, this is roughly what the dashboard looks like. You can see that for the Kubecrash demo we have quite a few more namespaces, and we were a little more selective about what we brought into the mesh and what we did not. But if I look at the faces namespace, then I can come in here and we get to immediately see... oh.
B
You can see that the Faces GUI isn't really getting any traffic, because we tend to load the web app once and then never hit it again. You can see that the color, face, and smiley services are all hovering around eight requests per second, which makes sense, because the Faces SPA has 16 cells that are each refreshed nominally every two seconds, so that works out to eight requests per second.
B
We can see that color2 is currently taking no traffic, nor is smiley2, and that tells us something about how this particular demo configuration is set up at this very moment. So yeah, that's basically what the dashboard looks like. We can also do things like...
B
If we click into the face deployment, then we can see who's talking to it, we get to see whether it's working, and we can come down here and see the top requests that are happening. For the most part, what's happening is that Emissary-ingress is calling over to the /cell/ endpoint, and then face is calling the / endpoint on color and smiley. Did something just happen there?
A
Great. And then there was a comment slash question from Chewie as well, about how their company was looking into Istio for several years.
B
"Is that warranted?" And the answer is: it kind of depends on what service mesh you look at. Linkerd is designed to be operationally simple, and we like to think that we actually succeed in that. We're doing a thing at KubeCon coming up in Amsterdam, where you can drop by at Linkerd Day, the first-ever, inaugural Linkerd Day happening at KubeCon Amsterdam, and you can bring your laptop and fire up Linkerd and have it working in five minutes, which will get you a shiny new Linkerd hat.
B
Sadly,
I
don't
have
a
liquidy
hat
on
me
to
show
you
what
it
looks
like,
which
that's
what
I
get
for
letting
my
kids
steal
my
Liberty
hat,
but
we
hear
regularly
from
companies
that
have
spent
weeks
or
months
trying
to
get
a
proof
of
concept
working
with
service
meshes
and
failed,
and
then
they
get
it
working
in
Liberty
in
hours.
So
the
operational
Simplicity
is
it's
a
big
deal
for
us?
It's
a
thing.
B
We take it very seriously. All right, if there are no other questions... or maybe a different way to put that is: yeah, go ahead and get in questions if you have any more. In the meantime: yes, the inaugural Linkerd Day is happening on Tuesday, day zero of KubeCon Amsterdam; hope to see you there. And if you want to try out Buoyant Cloud, go to buoyant.io/demo.
B
We hope you will. And on May 18th, in the Service Mesh Academy, we will be doing a deeper dive into circuit breakers and dynamic request routing, and we will go into much more detail than we did for this. Other than that, that's how to reach me. I'm also flynn on the CNCF Slack, if that's easier, if you're already on that one, or you can drop me a line. And yeah, if there's anything else... we actually finished with time to spare this time; I think that might be the first time this has happened.
A
It works this way as well, but there could be a lot of questions coming in, so let's see. But yeah, a great demo, by the way, really great stuff.
A
As Flynn mentioned there, now is the perfect time to ask questions. We've already had a little question, so that's always nice to see, but if you have any, start typing them away. We have about 10 minutes if you have any questions, but obviously we can wrap up early as well, if everyone already got all of their questions in. But yeah, Linkerd Day sounds wonderful; looking forward to that.
A
And there's someone asking: is there a good hands-on intro to Linkerd on the web?
B
All
right
so
I've
stuck
a
link
in
to
get
copied
over
to
the
chat
and
yeah
there's
a
getting
started,
a
quick
start
guide
that
is
pretty
quick
in
the
meantime,
our
backstage
magicians.
Hopefully
there
we
go
we'll,
go
through
and
copy
it
up.
B
Obviously, that is the 2.12 getting started guide, because 2.12 is the current stable release. https://linkerd.io/2.13/getting-started/ will work as soon as 2.13 is released.
B
I didn't explicitly mention this when I went through the slide, but Linkerd 2.13 is happening soon, like really, really soon now. I just knocked on my wooden keyboard case, because, well, yeah. But I'm looking forward to 2.13 being out, and I'm also looking forward to Linkerd Day, which should be cool.
A
For sure, it's very nice, very easy for the attendee, for sure. Right, okay, I think we're approaching final call for the questions. We could talk about KubeCon here for probably hours, but yeah.
B
And
if
you
are
at
kubecon,
whether
you
have
questions
or
not,
you
can
always
drop
by
the
grant.
Booth
there's
sorry
Buena
will
have
a
booth
where
we
will
be
happy
to
talk
to
you
about
All,
Things,
Liberty,
there's
also
going
to
be
a
link
review
booth
in
the
cncf
project,
Pavilion,
where
we
will
also
be
happy
to
talk
to
you
about
all
things.
Likerty
I
will
be
at
one
of
those
booths
for
most
of
the
con
I
suspect
I'm,
not
sure.
A
Sounds good, sounds good. It's always good to be available, so that's nice.
B
There
will
be,
there
will
also
be
Linker
D,
maintainers
and
business
folks
in
the
buoyant
side
and
all
that
so
a
lot
of
opportunities
to
get
answers
both
to
technical
questions,
about
linkerty
and
to
business.
Questions
about
buoyant.
A
Perfect. And since we see probably no questions coming anymore, we'll start wrapping up soon. But before that, I do want to say that there was a really fun comment after the demo: parsa said "demo gods be smiling", and I enjoyed that and had a bit of a chuckle here. That was very nice. A good, smiling demo, yeah.
A
Exactly, perfect. But yeah, let's start wrapping up. Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about 2.13, Linkerd, and the Gateway API, and we also really loved the interaction and the questions from the audience. Tune in in the coming weeks as well, because we bring you the latest cloud native content every Wednesday, and as I said, in the coming weeks we have really great sessions coming up. So tune in then, thank you for joining us today, and see you all next week.
B
I have one more link that I'd like to get pasted. This was in the presentation as well, but this way maybe people will be able to copy and paste it if they want. That's the direct link to the source code for this demo.