From YouTube: Hoot [Episode 2] - Linkerd Service Mesh
Description
Hoot is a livestream by engineers talking about and trying out new technology.
Get to Know Service Mesh
We kick this off with a series on service mesh - each episode will look into a different service mesh provider.
* Istio
* Linkerd
* Consul
* Community requests -- suggest a service mesh
* Compare and contrast the different service meshes, explain their unique features and how to choose which one(s) to use for your applications.
All right, well, hello everyone. Welcome to the second Hoot. Currently the focus of Hoot, this video series, is unboxing and getting to know service meshes, so if you're following along with us: last time in the Hoot series we covered Istio with Christian, our field CTO. Today I'll be covering Linkerd, and in two weeks we'll do another one of these, unboxing Consul Connect. So we're working our way through the meshes, and today we'll dig into Linkerd.
As a brief background, I don't want to focus too much conceptually on what a service mesh is, but just to ground the conversation, because I'll be using these phrases a lot: the service mesh paradigm is how people are deploying software now, especially with orchestrators like Kubernetes that make it really easy. The paradigm calls for deploying a sidecar proxy next to each of your actual services.
This enables you to decouple from your application some of the cross-cutting features related to routing, security, and observability, and to expose the configuration of those features in a common, user-friendly way. All of those proxies make up the data plane, and then in the service mesh world there's a control plane that pushes configuration to those proxies. So that's conceptually what we're talking about here. To start, let's go through the focus of Linkerd, one of the competing service mesh implementations.
There's a pretty concise explanation of where Linkerd focuses: making it really usable out of the box. It's very simple to enable mTLS and to start getting really interesting tools to observe your application, and that's really the focus of Linkerd. If you watched the Hoot video last week about Istio, there's a much broader range of features and a much more complex configuration API, and there are trade-offs: there are advantages to using a mesh that provides those capabilities.
There are also advantages to using something focused on mTLS and observability, especially if that's your core use case. In fact, for a good background on Linkerd if you're interested, there was an episode of the Software Engineering Daily podcast with William Morgan, who leads Buoyant, talking about where Linkerd sits and really emphasizing that, in the long run, there are going to be a lot of different mesh implementations, some of them really good for particular cloud providers.
But that might not be the right choice if you're on a different cloud provider, or if you're specifically looking for features that are better suited to a different mesh. So it's an interesting angle, and I think as we go through the onboarding for the product, we'll really see how they've been emphasizing these things over and over again from the jump.
Now, before we dive into the actual installation and demos, I want to talk a little bit about the architecture. In particular, the question many people start with when it comes to gateways or service meshes is: what is the proxy implementation? As we said before, we're going to be injecting a sidecar proxy next to every service in our application. In the case of Linkerd, they have an open-source proxy that they've built alongside the control plane, called linkerd2-proxy. I'll open that up.
This is an open proxy implementation written in Rust. Linkerd essentially took the approach of: let's build our own proxy, with the features that we care about built in. They've been working on this for years, and it's pretty mature at this point; there have been dozens of contributors and a hundred or more releases.
It is worth noting, though, that there are control plane implementations that are based on Envoy, an open-source proxy pioneered by Lyft. Istio, for example, uses it. Envoy probably has a larger feature surface area, but it requires sending configuration in Envoy's own language. One advantage of going with something like Envoy, as opposed to rolling your own, is that you can pick up improvements from the community.
So, for instance, the Envoy community is really excited about WebAssembly as a way to further customize the behavior of the proxy. You obviously wouldn't get that with Linkerd, but as we said before, that's not really the focus here, and for the use cases that Linkerd supports out of the box today, their proxy is more than sufficient: very performant and pretty easy to use. Cool. So the data plane consists of all these proxy instances.
Next to your application, then, there's of course a control plane: a set of components that manages and sends configuration to those proxies and powers the applications. So let's get it installed, and then we can take a look at what those components actually consist of. Switching over to my terminal: I just deployed a fresh Minikube installation, so that we could start essentially from a blank slate.
The first thing that we'll do in terms of getting deployed is installing the command line tool. One of the most common ways to interface with Linkerd, especially for the onboarding and the initial install, is through this tool. It's basically built on top of the same APIs that the web dashboard is based on, and it provides in the command line really useful functionality in terms of tap and top and other features that we'll get into in a little bit.
At Solo, we were really inspired by this kind of one-liner for an easy install. Anyone can just run this single command: it downloads a script, runs the script, makes sure that you have the command line tool in the expected location, and prompts you to add it to your path, and then you're basically fully up and running with Linkerd. I really like this very simple, clear experience for the user. I've already done this and have linkerd in my path, so we'll do a quick version check.
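The one-liner described above looks roughly like this (a sketch based on the Linkerd getting-started flow; check the current docs before piping a script into your shell):

```shell
# Download and run the Linkerd CLI installer, which places the
# binary under ~/.linkerd2/bin and prints PATH instructions.
curl -sL https://run.linkerd.io/install | sh

# Add the CLI to the current shell's PATH and verify the install.
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version
```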
Okay, so now I'm running the latest stable release, and I currently don't have the server, the control plane, installed on my cluster. In fact, if we look at the cluster, this is a vanilla Minikube; it was just installed eight minutes ago. Cool, so let's go ahead and do the first thing, which is a pre-check.
This is really nice. It's a set of really quick but fairly comprehensive checks, ensuring a bunch of things: that you have Kubernetes running, that you can query it, and that it's the right version. It checks the RBAC of your Kubernetes profile, or context, making sure that you can actually create the resources that need to be created, and making sure that there aren't conflicting resources that already exist. This is really great. It also makes sure that your tooling is up to date, so it's probably a good time to mention something as an aside.
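The pre-flight validation above is a single CLI command (a sketch; it only reads from the cluster, so it's safe to run anywhere):

```shell
# Verify the cluster is ready for a Linkerd control-plane install:
# Kubernetes version, RBAC permissions, no conflicting resources.
linkerd check --pre
```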
There are different release channels for Linkerd. The latest stable release is intended for mass consumption and for enterprise users who are relying on long-term support. The latest stable release was 2.6, with 2.6.1 the latest patch, and I read the announcement this morning: it looked like it took about seven or eight weeks since the last stable release, so they've actually been ramping up their velocity, and the releases are coming out once or twice a quarter.
So we did the pre-check, and everything looks good. Let's start the install process. Linkerd makes it pretty easy: I'm going to first run the install command, and as you can see, it spits out a bunch of YAML. This is basically a dry run: what the install command does is produce to standard out all the YAML that you would need to apply with a normal kubectl apply.
We were looking at the Linkerd code a while back, and one of the reasons for that was that this was the recommended starting point, as opposed to Helm, though more recently there is also Helm support. This manifest contains things like public keys and certificates that were generated during install, so this is a good way to inspect a valid manifest for installing Linkerd. Now we'll pipe that to kubectl apply to actually perform the installation.
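The render-then-apply flow described above is two commands (a sketch; the generated YAML includes freshly minted certificates, so every run differs):

```shell
# Render the control-plane manifest to a file (a dry run)...
linkerd install > linkerd-control-plane.yaml

# ...inspect it if you like, then apply it to the cluster.
kubectl apply -f linkerd-control-plane.yaml
```

Piping `linkerd install | kubectl apply -f -` does the same thing in one step.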
As you can see, it's setting up the namespace and all of the RBAC: service accounts, roles, and role bindings. Then it's actually deploying to that namespace all the control plane components and configuration. It looks like everything got created, which just means the Kubernetes resources applied successfully. Let's see what's in the namespace.
I deployed this to the standard linkerd system namespace, and as we can see, it's initializing. Let's run check now. This time we're not running the pre-check; we're checking the active installation. Again, it's a very useful tool as a quick debugging step to see: are we healthy? In this case it's "is the control plane healthy," whereas before it was "is the control plane installable." There are also commands for checking the status of my application.
Also, since the install command and applying it just created the resources, check is a good way to feel confident that, once it succeeds, the control plane is actually ready, not just in the process of being deployed. The checks all passed now. This means that Linkerd is healthy, good to go, and the control plane is installed. So let's go back to the architecture and look at what is actually deployed here.
As you can see, we have nine pods running here. These are all the different components of the control plane. We can also look at these in the UI, but quickly I'm going to talk through, at a high level, what the architecture is, and actually maybe it's useful to do this in context: they've got a really good architecture document.
Here is the rough architecture of all of the components. As we were talking about, there's a separation where the data plane sits with the application; this bottom arrow is the request path through the application. As you can see, the requests go through the proxies, then to the application, and then they continue on. We'll talk a little bit, as we go through a demo, about how that gets initialized.
Then identity is a certificate authority that provides signed certificates to all of the proxy instances; that's what enables the seamless, out-of-the-box mTLS behavior between all of the applications that you've injected Linkerd into. There are a lot of other components on here. I guess the next question is: how does my application actually get deployed like this? Often when you're managing an application on Kubernetes, you're managing your own set of deployments, which tell you what pods will get spun up.
So there's a proxy injector component that helps with this process. It can be invoked dynamically, through annotations on namespaces or on other resources you're configuring in Kubernetes, so that the sidecar is injected as those resources are applied, or it can be done with the command-line tool, which I'll show in a little bit. Then there are several other features here.
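The annotation-driven path looks roughly like this (a sketch; `linkerd.io/inject: enabled` is the annotation the injector's mutating admission webhook watches for, and the namespace name is just an example):

```yaml
# Any pods created in this namespace get the proxy sidecar
# injected automatically by Linkerd's admission webhook.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    linkerd.io/inject: enabled
```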
The proxy has built-in support for a tap filter, so you can actually stream and watch real-time request and response traffic through a particular workload. There's a Prometheus and Grafana instance for metrics, set up to automatically scrape both the control plane API and all of the proxy instances, and there's a built-in dashboard that comes with Linkerd.
This is pretty cool: it's essentially showing the health of the control plane. I can see it was successfully installed up here, there's a recommendation for how to get started, and then down here we've got all of the actual components that I was mentioning in terms of what goes into the control plane, and then a summary of namespaces and whatnot.
Cool. The dashboard command is essentially setting up the right networking locally. In Kubernetes that often means port forwarding, so that you're serving it through localhost; I'm running off of Minikube, so it needs to set that up in the background. Now we also have this graph on a dashboard, so out of the box I can already see all the traffic that's happening. Currently this is just control plane traffic.
Just checking my notes. Okay, so I'm going to kill that. Now let's actually install a demo app. First, before applying it, I just want to take a look at what this is. This is the standard Linkerd demo; it's called emojivoto. It's basically got several components and a web UI.
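Deploying the demo is a single apply (a sketch; `run.linkerd.io/emojivoto.yml` is the manifest URL the Linkerd getting-started guide points at, so verify it against the current docs):

```shell
# Fetch the emojivoto manifest and create its namespace,
# deployments (web, emoji, voting, vote-bot), and services.
curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
```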
It's quite a useful demo application for really understanding the basics of the mesh. Notably, it doesn't have anything in this manifest related to the proxies or Linkerd; the word Linkerd isn't anywhere in here. As I was saying, there is a component that helps with injecting that additional configuration, typically into your deployment specs, so that it's spinning up the right proxy container next to your real container for the application.
But none of that's here; that's all done by the injector. And that's nice, because a lot of people will manage this deployment in some kind of GitOps pipeline, where you might want the transformation to be part of the continuous delivery stage of the pipeline, rather than actually stored in your repo, or however you're managing your infrastructure config as code.
Now, it's worth noting that so far I've just deployed the demo application; I haven't done anything related to connecting it with Linkerd. As I was saying before, there are several common ways that meshes support that injection, that onboarding, or, as Linkerd has called it, the meshing of pods or workloads in your application.
One of those is that you can annotate namespaces, typically, or have some kind of global configuration with admission control, like a mutating webhook or something to that effect. In our case here, we're just going to do it manually, even after the application is initially deployed. But before we do that, let's take a quick look at what this application is doing, so I'm going to port-forward the web service, which is the actual site.
This is the emojivoto application. Essentially, it lets you pick different emojis to vote on, and then there's a leaderboard. As you can see, by the time I logged in here there had already been a ton of traffic; that's because of the load generator that's running in the background, putting in a new vote something like every second. It's also worth noting that there are some bugs in here.
We'll use the dashboard to start to drill into what those bugs are, but they're built into the demo here. So now I've got my application deployed; it's running and everything's going well, but I need to inject it with the sidecars to get it hooked up to Linkerd. In fact, before I do that, just to show what needs to be meshed:
If I look, I can see that there is this namespace. Now the Linkerd dashboard knows about it and knows that there are workloads there, but they aren't meshed; they're not injected. So you can't really do anything meaningful with them at this point with this dashboard. Let's go fix that. Since the resources are already there, we're going to do this a different way: we're just going to read them out, run them through the injector, and then reapply them back to Kubernetes.
What I'm doing now is piping that to linkerd inject, and this essentially provides a dry run. This isn't quite pure YAML, because I'm seeing both standard error and standard out, but meaningfully, if I run the inject command and don't pipe it to kubectl apply, it's just spitting out to standard out what the manifests will be. And if we look through it, we can start to see how these deployments are actually going to be modified in order to become hooked up to Linkerd. Apologies for my scrolling.
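The read-inject-reapply round trip described above is one pipeline (a sketch; `emojivoto` is the demo's namespace):

```shell
# Read the live deployments, have linkerd inject add the proxy
# sidecar and init container to each pod spec, and apply the result.
kubectl get deploy -n emojivoto -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```

Dropping the final `kubectl apply` step turns this into the dry run mentioned above: you just see the modified manifests on standard out.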
We can now see, first of all, that there's an init container in each of these pods. The init container runs a container called proxy-init, and this is basically setting up your iptables so that the networking, transparently to the application, now goes through the proxy.
Then, if we look at the containers on this thing: first of all, there's one here which was already there; this is the same container that has always been part of this pod. Then we see a second container, and this one was injected. What this container basically is, is the Linkerd proxy, with a bunch of environment variables.
Lost my mouse. Okay: a bunch of environment variables, a certificate that gets mounted, and some other volume mounts. This is basically all the configuration necessary to run the proxy and to get the right certificate and identity configuration. As we can see, it's running the proxy image that we were looking at, the same version. I was getting confused before, because I was looking at the deployments looking for the sidecars, but obviously I found them in the pods.
The deployment has enough information so that when the pod is created, it is created with the appropriate containers. Cool. So now we have an injected application. Let's go back to the dashboard and see how it looks. We can see this updated: the emojivoto application is now meshed, all four of the workloads running inside of it. This means that they are all connected to the mesh; they all have sidecars.
There's this funny anti-gravity animation. Over here we can see the request traffic, and in a more complex application this would interestingly rearrange over time as new services come online. Generally, you can ultimately get some high-level stats across all the workloads in the namespace, and we can drill in on specific ones to get a better sense. So let's first look at this emoji deployment.
It's got a hundred percent success rate, so there's probably not anything interesting here, but we can at least get a sense of who's making requests to this workload. It looks like, in this case, there are two common requests that the web component is sending, and this is a tap of the actual live request traffic.
Now, if I go to the voting component, the actual votes are coming in here; in the architecture of this demo application, when I actually make a vote, it comes here. It's worth noting that every once in a while these requests are failing. I can already see here that whenever someone makes a request to vote for the donut, I'm getting a failure; the success rate is actually 0% in this case.
So that's the extent of the demo that I wanted to go through, but I did want to touch on a few more really cool things about Linkerd that are worth mentioning, without going too deep for this intro session. As I was saying before, the focus of Linkerd is really on, first of all, rapid onboarding and ease of use, and as we saw, it was very quick and easy to get things deployed.
There were a bunch of checks, so really good guardrails. Then, once you've got the control plane deployed, there are several ways to actually go and set up your application to be injected, or meshed, and once you have that, you get all these tools out of the box. The key tooling that is really attractive to people with Linkerd is these tap and top capabilities.
We were seeing it geared around the deployments here, but you can really see both aggregate stats about request and response rates, and generally the health of your pods or workloads, as well as the requests going through them; and then, with the tap feature, you can actually look at these real-time requests.
I will mention here, before leaving the UI, that there's also this traffic split configuration option. I'm not going to go through a demo of it now, just for time, but an experience I've had working on service mesh stuff is seeing Linkerd's presence as part of a broader community effort to standardize the APIs for service meshes, which is called Service Mesh Interface, or SMI. Linkerd was written to support all the SMI APIs out of the box.
One of those APIs is TrafficSplit, enabling you to essentially reroute traffic for a particular destination to one or more alternative destinations. You could be using this for progressive delivery, with things like Flagger, where you're essentially slowly shifting traffic over to a new version of a service, or you're rolling out two versions of a service side by side to collect data from your users.
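An SMI TrafficSplit resource looks roughly like this (a sketch against the `split.smi-spec.io/v1alpha1` API; the service names are hypothetical, and weight syntax varies across SMI versions):

```yaml
# Send most traffic addressed to `web-svc` to the current
# version and a small slice to a canary backend.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-split
spec:
  service: web-svc        # the apex service clients call
  backends:
  - service: web-svc-v1
    weight: 900m
  - service: web-svc-v2
    weight: 100m
```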
There's a really good UI for that here, and if you look at the most recent release announcement, you'll see a lot more information about it. The last thing I wanted to touch on is just really props to the Linkerd people for not just providing a good dashboard for this kind of information, but also good tooling from the command line. For all of these features that we were looking at, we can do the same thing from the command line.
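The dashboard's stat, top, and tap views all have CLI counterparts (a sketch; `emojivoto` and the deployment names are from the demo above):

```shell
# Aggregate success rate, request rate, and latency per deployment.
linkerd stat deploy -n emojivoto

# Live, top-style view of the heaviest request paths for a workload.
linkerd top deploy/web -n emojivoto

# Stream individual live requests and responses for one workload.
linkerd tap deploy/voting -n emojivoto
```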
I seem to have made this too large; it goes off the screen, but there's a lot more information off to the right if this were reasonably sized: you've got request counts, some stats about the performance of those requests, and so forth. And lastly, the same thing with tap: let's tap a particular workload.
As you can see, out of the box, with basically zero configuration, we get really fast onboarding of a new service mesh implementation. Out of the box we're getting mTLS for free, with a certificate authority that's generated unique certificates for each of our sidecars, and once we've onboarded an application, which is ultimately a single command, we're able to start using all these interesting features related to observability: looking at aggregate stats about the namespace or about particular workloads, and then seeing the actual traffic.
With Linkerd, you can also do things like configure policy, and, as I mentioned before, they're starting to move into features like traffic splitting. So that about covers it for the unboxing of Linkerd. There are a lot more things you can do once you've gotten through the basic getting started, so I would encourage you to really dig into the docs.
They're a really good summary of, more broadly, all the features that are available, with some useful user guides for understanding how to use those features, as well as other references. I mentioned the release channels: the standard way that we installed is going to get us the latest stable release, but there are more frequent releases if you want to install from the edge channel, so you can really stay up to date.
The last thing I'll note is that there's a very active community, and a lot of enterprises are looking to solve first, with a service mesh, exactly the things that Linkerd provides, so the community around this is pretty broad: a lot of contributors and a lot of people in Slack. If you want to learn more, I would encourage you to check these things out. That about wraps it up. Hopefully you enjoyed this Hoot about Linkerd and now have a better sense of, generally, the architecture of Linkerd.
You've seen what it looks like to onboard the control plane to your cluster, then bring the data plane to your actual application, and then gotten a brief sense of some of the tools that are at your disposal. As I mentioned at the top, we're going to be doing more of these, trying to hit all of the major service meshes and be able to compare and contrast. So in two weeks, look for another one of these about Consul Connect.
It looks like that's it in the chat, so I'm going to end things here. In the future, if you're interested in making this more interactive, I would encourage you to follow along on our YouTube channel, where there's a live stream of this event, and you can actually ask questions; we're happy to respond to those questions as we go. All right, thank you very much, and see you next time.