A: We're live. Glad to have you all here: this is Cloud Native Live, where we dive into the code behind cloud native. Very happy to have you all here. I'm Annie Talvasto, a CNCF ambassador as well as a senior product marketing manager at Camunda, and I'll be your host tonight.
A: Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your burning questions. So join us every Wednesday to watch live, as you have done now, or if you're watching this online later, you're very much welcome.
A: This week we have Jason Morgan here with us to talk about Service Mesh 101, an introduction with Linkerd. Very exciting topic. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants as well as presenters.
A: We will be taking questions throughout the whole session, so if you have any questions, comments, concerns, anything, just send them to the chat and we will get to them. Hi to Ahmed from Egypt, for example. Thank you for letting us know you're watching and engaging. Perfect. With that, I'll hand it over to Jason and we will get started.
B: All right, hey folks, how's it going? My name is Jason Morgan. I work for Buoyant, the company behind the Linkerd project. Linkerd is a graduated project from the CNCF, and we're a service mesh. Just a quick question to you: is Abu Bakr going to join also?
A: I think he's moderating the questions, the Q&A, to my understanding. If he wants to join, of course, happy to.
B: All right, so I've got a little whiteboard here, and I have an example application that's living in a Kubernetes cluster. So imagine this whole box is a Kubernetes cluster. I've got an application, and we're actually going to deploy an application that looks just like this in a little demo right afterwards.
B: We've got some kind of web front end and two back-end services, and this is your app on Kubernetes. I want to show you what a service mesh is and how it interacts with the rest of an application. The long story short is: a service mesh is a tool that shifts some responsibility from application developers onto platform operators, and it does it by installing a bunch of proxies in between your applications, and those proxies manage network traffic.
B: They can handle things like service discovery, adding observability, doing things like request-level load balancing, and a bunch of other things: mTLS, if you care about encrypting and mutually authenticating connections within your environment. The process of installing a service mesh is fairly simple. First off, you're going to take your Kubernetes cluster and you're going to install a control plane.
B: The control plane is the interface between us platform operators and our service mesh. So we install a control plane in our cluster, and then we begin adding applications to the mesh. The way we do it is we install a little proxy and sit it beside our application, and that proxy then handles all of its traffic on the network. It is your proxy, or your arbiter, for things that happen.
A: I can see it quite okay, but maybe it could be a bit bigger. I think it's probably fine; it has essentially taken my whole screen, so it works at least.
B: All right, so let's start off. Let's see what the process is to install Linkerd. Right now I have a Kubernetes cluster running locally and it's got one app going, and that app is very similar to the one that we described: there's a web front end, there's two back ends, and then there's a traffic generation service.
B: So let's just get started. First and foremost, I want to install Linkerd. To step back one more time, the goal with Linkerd is to make it as non-invasive as possible. We want you to have apps that run well in Kubernetes, then install Linkerd, and have your app continue to run the way it was running before. So if we go over here, kns emojivoto, I'm just going to swap namespaces over here on the bottom left.
B: kubectl port-forward service slash web, I think it's web-svc, 8080:80, and I go to my web browser.
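The port-forward he types is roughly this (service and port names are per the stock emojivoto manifests; adjust if yours differ):

```shell
# Forward a local port to the emojivoto web service inside the cluster
kubectl -n emojivoto port-forward svc/web-svc 8080:80
# Then browse to http://localhost:8080
```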
B
And
go
localhost
80..
I
can
see
I've
got
my
I've
got
my
application
going.
I
can
vote
on
things.
I
can
check
out
my
leaderboard
right,
so
I've
got
a
I've,
got
a
web
app
it's
running
in
kubernetes
and
and
right
now
it
works
and
we're
gonna
we're
going
to
install
linguity
service
mesh.
Then
we're
going
to
add
our
application
to
the
mesh
and
everything
is
going
to
continue
to
work
right.
That's
our
that's
our
starting
point
with
linker
d!
B: So what I did there is I pulled a curl that just installs the Linkerd binary on my laptop. It was actually already installed, because I do this professionally, but you get the gist of it. Let's see, it's 12:09 Eastern; we're going to have a fully installed Linkerd instance by 12:14, so we're going to do zero to mesh in five minutes. After I install the binary, I just extend my path.
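The curl install he refers to is the standard one from the Linkerd getting-started docs:

```shell
# Download the linkerd CLI and put it on the PATH
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$HOME/.linkerd2/bin:$PATH
linkerd version   # confirm the CLI is installed
```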
A: Even better. So, what is the difference between a service mesh and networking in Kubernetes, like Calico, Weave Net and other networking tools?
B: Yeah, absolutely, wonderful question, thank you so much. So that's the OSI networking stack: you've got these seven layers, physical, data link, network, and so on. There are seven layers; I can't remember them all.
B
It's
been
a
while,
since
I've
done
my
my
my
networking
exam,
but
essentially
you
know
your
your
cert,
your
networking
tools
run
down
at
layer,
3
or
layer
4
of
the
stack
and
they
worry
about
ip
address
to
ip
address,
maybe
like
namespace,
the
name
space
whatever.
That
is.
That's
that's
where
they
live.
B
Service
meshes,
live
up
at
layer
7..
So
they
are
an
application
focus
tool.
So
they
take
the
network
and
just
use
it.
However,
it
works
and
they
don't.
They
don't
concern
themselves
with
the
networking
they
let
network
handle.
You
know
getting
a
packet
from
one
place
to
another.
They
provide
application
level
logic
and
control
into
your
environment
right.
So
we'll
you'll
see
it
a
bit
when
we
get
into
the
demo
right,
but
they'll
do
things
like
inspect
the
traffic
between
your
applications.
So
you
can
see
how
the
individual
api
calls
are
going
right.
B: They'll do things like change the load balancing from connection level, which is: I open a connection from one point to another point and everything sits there. Instead, they see each request, and then they'll balance those requests against all the available connections as opposed to just one.
B: So with that, and please feel free to ask more if I didn't quite get it, oh awesome, with that we're just going to test that our cluster works. I'm actually running a k3d in-memory cluster on Docker Desktop on my Windows 11 box, so I've got a lot of weird stuff going on. I just want to check that this is going to work, and the linkerd CLI gives me this handy pre-check flag.
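The pre-check he runs is:

```shell
# Validate permissions and cluster setup before installing Linkerd
linkerd check --pre
```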
B
I
can
validate
that
things
are
going
to
be
okay
if
I
install,
or
at
least
that
it
thinks
it'll
be
okay,
get
a
bunch
of
green
check
marks,
which
is
awesome
and
I
feel
confident
to
move
forward.
So
with
that
we're
gonna,
try,
type
linker
d
install
and
see
what
happens,
and
I'm
still
I've
got
two
minutes
so
we'll
see
if
I
hit
my
target
so
when
I
type
liquor
to
install
what
I
get
is,
instead
of
anything
actually
changing
on
my
cluster.
So
this
is
all
the
active
pods
on
my
cluster.
B
It's
that
I
get
a
bunch
of
yaml
right-
and
this
is
this-
is
the
kubernetes
manifest
that's
going
to
install
linker
d?
So
I
could.
I
could
put
this
in
a
git
repo.
Add
it
with
a
git
ops
flow.
You
know
I
could.
I
could
generate
this
yaml
using
the
linker
dcli
or
I
could
generate
this
yaml
using
helm
and
both
the
cli
and
helm
in
linkery
use
the
exact
same
yaml
templates.
So
an
argument
that
you
provide
to
the
cli
will
work
on
the
on
the
helm
chart
and
vice
versa.
B
So
you
don't
see
a
ton
of
space
between
the
helm
chart
and
the
cli
so
to
actually
do
the
install
and
meet
my
targets.
I've
only
got
a
minute
left.
I
type
linkery
install
and
I
type
it
over
to
cube
ctl
apply
when
that
happens.
I
get
you
know
three
new
three
new
deployments
being
created.
I
have
an
identity
service,
a
destination
service
and
one
other
that
I
don't
remember.
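The install pipeline looks like this (on recent Linkerd versions you may need a `linkerd install --crds | kubectl apply -f -` step first):

```shell
# Render the control-plane manifest and apply it to the cluster
linkerd install | kubectl apply -f -
# Watch the control-plane deployments come up
kubectl -n linkerd get deploy
```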
B: That is a tool to actually distribute our proxies, a tool to know where traffic should go, and a tool to give our pods their valid identity so that they can talk to one another. We're at 12:14 and I've got my core control plane up and running. We'll run a linkerd check just to validate that it's up, and the status check is green, and it's 12:14, so I feel like I'm there.
B
So
that's
that's
the
core
linkery
install,
but
as
of
right
now
we
can
take
a
look
over
at
our
emoji
photo
app,
and
none
of
this
is
actually
in
the
mesh.
So
the
second
half
of
what
we
do
after
we
install
that
control
plane.
So
we
actually
have
to
add
these
add
these
proxies
in
so
we're
gonna.
We're
gonna
go
ahead
and
do
that
next
actually
slightly
lying
here,
because
what
I'm
really
gonna
do
now
that
I've
got
now.
I've
got
the
lingerie
control
plane
installed.
B
As
of
linkery
210,
we
broke
the
control
plane
into
a
couple,
different
components.
No
so
femi
femi
yusuf
asks
asked
if,
if
pods
are
installed
in
every
node
on
the
cluster,
the
answer
is
not
necessarily
right.
When
you
install
lingerie
in
production,
you'll
wanna
install
it
in
mode,
so
you'll
want
at
least
three
replicas
of
the
common
control,
plane
components,
but
there's
no
like
there's
no
daemon
set
with
linker
d
that
necessarily
installs
one
per
node.
B
I
hope
that
answers
your
question
so
now
that
I've
installed
the
main
control
plane.
I'm
also
because
I'm
doing
a
demo
I'm
going
to
want
to
install
the
linker
dashboard.
That's
like
our
cool
graphical
interface
right
and
allows
us
to
do
some
neat
stuff
that
I'm
going
to
show
off
in
just
a
minute.
So
I'm
going
to
install
link
or
dvis,
and
I'm
going
to
it,
will
again
generate
generate
yaml
templates
and
we're
going
to
hand
it
off
to
the
cube
c.
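The viz install follows the same render-and-apply pattern:

```shell
# Install the optional viz extension (dashboard, tap, Prometheus, Grafana)
linkerd viz install | kubectl apply -f -
# Open the dashboard in a browser
linkerd viz dashboard
```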
B
The
kubernetes
api
to
actually
get
the
dashboard
installed
make
sense,
hopefully
so
the
by
default
right,
the
linker
d
vis
components
are
installed
in
a
different
name
space.
I
want
to
be
clear
this
this
visualization
component.
This
dashboard
is
optional
right.
It's
handy
and
I'd
recommend
it
for
a
lot
of
use
cases,
but
it's
not
it's
not
required
now
that
I've
got
it.
B
It
installs,
a
bunch
of
things
from
like
our
we've
got
this
tool
called
tap,
which
allows
you
to
to
see
the
metadata
about
every
request
between
these
proxies
and
actually
like
do
like
a
bit
of
like
wireshark
style,
inspection
of
of
the
of
the
calls
between
the
applications,
and
it
includes
a
prometheus
and
a
grafana
component
right.
So
prometheus
and
grafana
are
like
prometheus
is
like
a
time
series
database
allows
you
to
collect.
A: And there's another question, if... yeah. Is there any performance impact on apps due to the proxy container in the middle?
B: Yeah, great question. So yes, there is. Whether or not it's a negative performance impact is going to depend on your application and the scale at which you're running. We've got lots of examples; if you look at KubeCon North America, some folks over at a company, I think it was Entain, an Australian company that runs a betting platform. What they found is, because they were doing a lot of gRPC connections...
B
Linguity,
adding
liquidity
into
their
environment,
increased
their
performance
and
allowed
them
to
get
over
a
10x
increase
in
the
number
of
requests
they
could
handle
by
adding
linker
d.
So
the
very
short
answer
is
adding
proxy.
Adding
any
additional
steps
in
your
network
path
is
going
to
slow
down
any
given
request
right
as
you
scale
or
as
you
do
more
and
more
traffic,
the
benefits
of
connection
level
load,
balancing
or
intelligent,
intelligent
endpoint
selection
can
outweigh
the
cost
of
having
those
additional
proxies
in
the
space.
B: Awesome, yeah, there's a ton out there. Let me know if you have any more questions on all this; I'll put up the link to the Linkerd Slack at the end. Feel free to hop in and ask us questions directly; you can find me right there all the time. So now that I've installed the Linkerd dashboard, what I'm going to do... you'll see there's this component...
B
This
tap
injector
right,
so
that
is
that,
like
liquidy,
adds
the
proxy
by
adding
in
what
we
call
a
mutating
web
hook
and
that
what
that
does
is
it
sees.
You
know
an
object
type
and
it
changes
it
on
the
back
end.
So
you
don't
you
don't
see
it
necessarily,
but
what
we
have
is:
oh,
actually,
no,
I'm
fine.
What
we
have
is
when
we
inject
emoji
photo
it's
going
to
get
the
tap
configuration
as
well
as
the
the
basic
configuration
in
there
to
actually
to
actually
get
things.
B
Get
things
set
up
so
yeah.
B
Absolutely
yeah,
so
the
linkery
docs
have
this
getting
started
guide
which
which
walks
you
through.
Basically
everything
I'm
doing
today,
right
like
in
depth,
step
by
step,
how
you,
how
you
get
this
up
and
running
if
you're,
totally
new
to
kubernetes
and
don't
know
how
to
send
stand
up
an
individual
environment.
The
thing
I
would
recommend
is
try
something
like
docker,
desktop
or
k3d
or
depending
on
what
you're
on.
B: When last I checked... we're just going to look at the pods in the linkerd namespace. Cool. Pop out a dashboard? Sure. This isn't totally necessary yet, so I'm actually just going to skip that right now; we'll pop out a dashboard in a second, I just don't want to do it quite yet. So what we're going to do now is we're going to add... yeah, you can, absolutely.
B
You
can
sell
linker
d
on
any
kubernetes
distribution,
so
whether
it's
managed,
whether
it's
your
own
version,
it's
fine,
there's,
no
there's
no
networking
or
special
kubernetes
api
requirement
right
it
just
it
just
works
wherever
you're
going
so
and
you
can
always
test
it
if
you're
not
sure
whether
or
not
you
can
do
the
install
do
that
linker
d
check
command,
so
linker
d,
space
check
space
dash,
dash,
pre
right
that'll.
Tell
you
whether
or
not
you
have
the
right
permissions
and
whether
your
kubernetes
environment
is
set
up
correctly
to
install
linkedin.
B
So
you
can
always
test
that
before
you
before
you
run
anything
so
here.
What
I
want
to
do
is
I
want
to
add.
I
want
to
go
back
to
that
diagram,
sorry
to
keep
flipping
around
here,
going
back
to
that
diagram.
What
I
want
to
do
is
oh
you're,
so
welcome.
What
I
want
to
do
now
is
add
the
proxies
to
these
components
right.
So
I'm
going
to
I'm
going
to
inject
my
environment.
That's
what
we
call
it
is
injecting
the
the
proxy.
B
So
what
I'm
going
to
do
is
I'm
going
to
get
all
the
deployments
in
the
emojivoto
namespace,
I'm
going
to
output
them
as
yaml,
and
I'm
going
to
send
them
over
to
the
link
or
dcli
command.
The
link
or
dci
command
is
linkerd
inject
right.
What
that's
going
to
do
is
look
through
the
objects
that
you
send
it
it's
going
to
see.
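The inject pipeline he describes:

```shell
# Annotate every deployment in emojivoto for proxy injection and re-apply
kubectl -n emojivoto get deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```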
B
Hey
is
this
object,
a
deployment
set,
a
stateful
set
or
a
replica
set,
and
probably
one
more
daemon
set
and
if
it
is
it'll,
add
a
little
annotation
that
says
linkery
inject
true,
and
I
can.
I
can
show
you
that
in
a
second
and
then
we're
going
to
send
it
back
to
the
kubernetes
api
and
that
will
tell
the
linker
d
web
hook
to
go
ahead
and
change
that
deployment
object
and
inject
the
proxy.
So
that's
a
lot
of
talking,
so
you
can
just
see
it.
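The annotation that inject adds to the pod template looks like this:

```yaml
# Pod template metadata after `linkerd inject`
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
```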
B
What
you're
going
to
see
is
when
I
hit
enter
these
pods
are
all
going
to
get
restarted
and
now
there's
going
to
be
two
containers
instead
of
one.
So,
let's
go
so
I
want
to
just
talk
about
a
potter,
real,
quick
right
kind
of
a
funny
name
right,
like
you
know,
this
all
came
out
from
docker.
You
know
we
had
a
little
whale
and
so
like
a
pod
was
like
a
pot
of
whales
swimming
together.
B
So
we've
got
our
a
pod
is
like
a
a
container
basically
or
like
a
little
name
space
for
containers
right
and
so
what
we
do.
The
way
a
service
mesh
works
is
it
is
it
sits
a
second
container
beside
your
first
one.
It
sits
the
second
container
beside
your
first
one
and
and
then
that
second
container
is
the
proxy
that
does
whatever
whatever
it's
gonna
do.
B: So what I'm doing here, and don't hold me to it, I promise to provide some answers; I just want to show this injection process first. What we're going to do now is check that this proxy is ready. I can check it in a bunch of ways. I can just look over here, and I know that it's ready because my pods are running, but let's go ask linkerd check how the health of the proxies in the emojivoto namespace is.
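That proxy health check is:

```shell
# Check the data-plane proxies in a specific namespace
linkerd check --proxy -n emojivoto
```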
B: Why did I do this? I don't need this. So let's now take a look at the dashboard; let's see what we can see about our environment, because we've added Linkerd. Before, all I could see is that I had four containers, but I didn't really know very much, like what it looks like as they're talking to each other, or any details.
B
So
this
is
this
is
in
that
viz
component.
So
when
I
installed
that
link
or
dvis,
this
is
what
I
was.
This
is
what
I
was
really
installing
right.
I
now
have
a
view
of
my
cluster,
all
my
name
spaces
for
each
namespace.
How
many
containers
are
are
in
the
mesh
right
and
then
I
can
see
what
we
call
the
golden
metrics
for
each
namespace.
B: What's the latency, by latency bucket. And then, if we want, we've got this handy-dandy Grafana button where we can pop out a Grafana instance that's able to talk to our Prometheus and lets us set up our own dashboards, or use the dashboards that are already built. To be clear, if you already have Prometheus, there is documentation, and it's very well supported, to use your external Prometheus instead of the in-memory Prometheus that comes with linkerd viz.
B
So
that's
a
very
common
scenario:
okay,
so
let's,
let's
first
take
a
look.
We
can
sort
all
these
namespaces
by
the
success
rate,
so
that
is
what
percentage
of
the
calls
are
actually
successful
and
the
other
thing
we
want
to
do
is
or
the
other
thing
I
want
to
do
is
I
want
to.
I
want
to
do
that
port
forward
command
again,
because
I
want
to
see:
does
my
application
still
work
the
way
it
was
working
before.
B
B
So
if
I
refresh
emoji
vote
still
works
great
right,
I
can
go
click
on
my
various
apps.
Oh
I
didn't
mean
to
do
that
and
I
can
vote
on
them.
I
can
see
the
current
leaderboard,
so
my
I
didn't
I
didn't
create.
I
didn't
create
a
new
custom
resource
type
to
say:
hey
this
is
you
know
this
is
the
virtual
gateway,
so
I
can
get
my
traffic
over
here
or
virtual
server
or
anything
like
that
right,
I'm
just
using
standard
kubernetes
primitives
and
the
my
application
still
works
as
as
designed
right
with
the
addition
of.
B
B
I'll
answer
that
in
a
second,
we
can
see
why
it's
broken
and
how
we
can
see
all
sorts
of
details
about
like
what
is
the
you
know.
What
does
the
actual
environment
look
like
right?
We
just
have
much
data
and
we
also
have
now
everything,
although
all
our
traffic
is
mtls
right
in
in
our
class
there.
B: Istio and Linkerd are both service mesh projects. Linkerd is a CNCF service mesh; Istio is run by another foundation, I don't know which one, but they have different design philosophies. Istio uses the Envoy proxy: instead of building its own proxy, it used an existing one called Envoy, which is a CNCF project, and it's a great tool with tons of features and capabilities.
B
Istio
has
a
lot
of
features
that
you
won't
see
in
linkerid
right,
like
you'll,
hear
people
talking
about
running
wasm,
plug-ins
or
you
know,
whatever
some
some
other
stuff
that
you
can
do
because
of
envoy,
that's
in
istio,
and
so
it's
it's.
It's
got
a
lot
of
features,
but
those
features
tend
to
come
with
a
bit
of
a
cost
right.
So
you
know
when,
when
we
look
at
it,
sorry
I'm
just
gonna
finish
this.
So
so
it
comes
at
a
bit
of
a
cost
right.
B
So
istio
can
be
a
little
bit
complex
to
use
right.
So
with
with
linker
d,
we
don't
require
you
to
use
any
sort
of
custom
resources
to
work
with
linkerd
right.
We
do
everything
in
a
kubernetes
native
way,
using
kubernetes,
primitives
and
and
stick
to
it
right
with
istio.
You
need
to
need
to
make
your
app
work
with
istio,
so
you
need
to
write
custom
objects
and
custom
yaml
to
support
istio.
It
also
tends
to
be
a
little
bit
more
complex
if
from
an
operator
perspective,
to
use
and
run,
especially
when
you
get
to
scale.
B
But
if
you
look
at
we,
we
got
this
benchmark
suite
from
the
folks
over
at
kinvoke
who
wrote
basically
a
testing
suite
to
decide
like
what
mesh
performs
better
than
another,
and
we
we
ran
the
tests
last
year
or
earlier
last
year
and
then
recently
just
the
other
day
with
our
2.11
release,
and
we
found
that
lingerie
d
performs
really
well
compared
to
istio,
especially
in
terms
of
resource
consumption,
as
well
as
the
actual
speed
to
send
traffic
along
now,
it's
faster.
B
Oh
thanks
for
the
benchmarks,
I'll
I'll
post
them
up,
the
other
one
is
we're
not
using
envoy
as
a
sidecar.
So
that's
a
that's
a
great
point.
So
linker
d
linkerity
doesn't
use
envoy
right
linguity.
B
The
folks
over
at
lincoln,
decided
to
write
a
rust
based
proxy
for
linker
d
right.
So
there's
there's
some
advantages
to
rust.
Right
that
we
see
right,
one
of
which
is
a
lot
of
work
on
on
modern
networking
is
happening
in
rust
today
and
by
building
on
top
of
those
rust
libraries,
we're
able
to
take
advantage
of
advances
in
in
rust,
networking
to
make
our
proxy
more
performant.
The
other
one
is
rus.
Rust
is
like
necessarily
more
memory
safe
than
c
plus
plus
right.
B
So
it
allows
us
to
avoid
a
lot
of
the
memory
management
vulnerabilities
that
you
see
with
a
different
language.
So
it's
a
lingerie
proxy
we
think
is,
or
we
know
is
extremely
small-
is
extremely
fast
and
is
extremely
secure
compared
to
other
proxies.
It
is
also
much
more
limited
in
its
scope
of
what
it
does.
It
is
not
a
general
purpose
proxy.
You
can't
use
it
if
you
want
to
set
up
an
ingress
with
lingerie
proxy
good
luck
right
like
it
doesn't
it
doesn't
work
right.
B
It
only
works
with
the
linker
d
mesh
because
it
only
exists
to
support
linker
d
right.
There
are
folks
they're
great
projects
like
emissary
from
the
folks
over
at
ambassador
and
and
a
ton
of
others
right
that
are
amazing,
ingresses
that
are
built
on
top
of
envoy,
and
you
know
they're,
it's
a
it's
a
great
tool
for
that
use
case.
We
think
it's
too
much
for
what
we're
trying
to
build
with
linker
d.
B
I
hope
that
answered
your
question
puff
it,
and
let
me
let
me
know
if
that,
if
I
missed
anything
one
more
that
asks
the
difference
between
aws
azure
or
kubernetes
it
it's
it's
actually
way
too
long.
A
topic
to
get
into
now.
Kubernetes
is
like
a
it's
just
a
container
scheduling
tool
that
can
run
on
top
of
whatever
infrastructure
you
choose
to
build
on,
whether
that's
bare
metal,
vms
or
bare
metal,
vm,
yeah,
bare
middle
or
vms.
B
That's
all
I
can
think
of
right
now
and
it
works
in
in
whatever
cloud
you
wanna,
you
wanna
work
it
in
what
type
of
communication
happens
between
the
control
plane
and
the
proxies?
No,
it's
not
it's
not
very
heavy,
so
shrini,
I
think
so,
trini
the
it's.
It's
just
command
and
control
traffic.
So
the
big
thing
is
the
proxies,
like
the
proxies
have
to
ask
the
control
plane.
B
What
are
the
available
endpoints
for
any
given
service
so
that
they
can
do
that
intelligent
load,
balancing
that
you
get
with
linker
d
and
we
call
it
ewma.
It's
a
long
story.
It's
exponentially
weighted
moving
average.
It's
basically
just
a
really
good
way
to
pick
who's
the
fastest
pod
to
respond
to
a
given
request-
and
you
can
you
can
read
more
about
that
in
our
docs.
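As a rough illustration of the idea only (not Linkerd's actual implementation, which lives in the Rust proxy and also factors in in-flight load), you can keep an exponentially weighted moving average of each endpoint's observed latency and route to the endpoint with the lowest score:

```python
class EwmaEndpoint:
    """Tracks a smoothed latency estimate for one endpoint."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest sample
        self.score = None       # smoothed latency in ms

    def observe(self, latency_ms):
        if self.score is None:
            self.score = latency_ms
        else:
            # newer samples dominate, older ones decay exponentially
            self.score = self.alpha * latency_ms + (1 - self.alpha) * self.score


def pick_fastest(endpoints):
    """Pick the endpoint name with the lowest smoothed latency."""
    return min(endpoints, key=lambda name: endpoints[name].score)


# Two endpoints: "a" stays consistently fast, "b" degrades over time
endpoints = {"a": EwmaEndpoint(), "b": EwmaEndpoint()}
for latency in (10, 12, 11):
    endpoints["a"].observe(latency)
for latency in (9, 40, 80):
    endpoints["b"].observe(latency)

print(pick_fastest(endpoints))  # "a" wins once b's average climbs
```

The exponential decay is the point: one slow reply nudges an endpoint's score up, a sustained slowdown pushes it out of rotation, and a recovered endpoint earns traffic back quickly.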
B
Let
me
know
trini
if
that
was
an
okay
answer,
so
with
that
I'd
love
to
pop
back
into
this
dashboard
and
just
explore,
what's
broken
in
emoji
photo.
Does
that
work.
B
I
didn't
thank
you
so
much.
I
totally
missed
that.
Do
I
inject
linker
d
on
all
pads,
somehow
related
to
my
my
application.
What
if
I
want
to
have
database
within
the
cluster
too
yeah?
Oh,
it's
a
wonderful
question!
So
you
inject
inject
linker
d,
where
you
want
the
benefits
of
the
mesh
right
and
the
benefits
are
essentially
security,
observability
and
reliability
right
so
we'll
do
better
load
balancing
than
you'll
see
in
kubernetes,
we'll
give
you
metrics
and
statistics
about
your
environment
that
you
won't
see
otherwise
and
we'll
give
you
mtls
right.
B
So
if
you
need
that,
you
you
inject
it,
you
can
absolutely
run
databases
and
connect
and
mesh
them
in
linker
d
right.
The
thing
you
have
to
do
is
be
aware
of
whether
the
traffic
is
http
or
grpc,
or
if
it's
some
other
some
other
tcp
protocol
right
and
depending
on
the
type
of
traffic
you.
You
may
want
to
tell
linker
d
to
treat
that
connection
as
just
a
as
a
generic
tcp
trunk
right
yeah
as
a
generic
tcp
trunk,
and
there's
a
lot
more
on
that
in
our
docs
and
I'll.
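The way you tell Linkerd to treat a connection as plain TCP is, per the Linkerd docs, an opaque-ports annotation on the workload or service, e.g. for a Postgres port:

```yaml
# Skip protocol detection on port 5432 and proxy it as opaque TCP
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "5432"
```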
B
Like
I
said
I'll
post,
the
linker
d
slack
I'd
love
to
get
into
it
in
a
bit
more
in
depth
with
you.
If
you
want
to
hear,
but
no
reason
not
to
inject
databases,
and
we
do
it
all
the
time
like
we
run
cortex
for
our
production
applications
and
we
have
cortex
fully
injected
with
linker
d
and
it
behaves
great
okay.
Let
me
let
me
finish
this
demo
and
then
I'll
answer
more
questions.
Does
that
work.
B: I start with this little graph that tells me what the communication looks like in this app. Instead of just a bunch of pods, I see a little service map that tells me who's talking to whom. Now I can see, on a per-deployment basis, what the success rate is, what the latency is, what the volume of requests is, all that stuff. I can see that voting and the web service both have sub-100% success rates.
B
So,
looking
at
it
right,
my
problem
is
somewhere
between
web
and
voting
right.
It's
not
votebot
and
web.
It's
not
web
and
emoji
it's
web
and
voting
all
right
cool.
So,
let's
click
on
web
and,
let's
see
what's
going
on
so
with
web,
you
know
I
can
see
who's
talking
to
web,
so
I've
got
prometheus
and
votebot
talking
to
the
web
service
and
who
is
web
talking
to
you.
Well,
it's
talking
to
voting
and
emoji
and
here's
our
success
rate
requests
all
that
stuff.
B: I can see that from vote-bot to the web service, at /api/vote, I'm only seeing a 90% success rate, and I can see all the various calls that are going to the emoji service. Now, I'm actually missing something; it hasn't popped up yet, so I'm going to give this a refresh and see if I can get my appropriate error here.
B: Thank you so much, folks, I really appreciate it. So now we're in voting, and we can see from voting's perspective it's talking to two services, or three services, because tap's also in there. And thank you, everyone who responded, I really appreciate it. So we've got the web service talking to voting, and we can look now... again, let's go to the live calls that are coming in and sort them by their success rate.
B
Right
and
I'm
you
know
I'm
at
the
will
of
whatever
actual
requests
come
through,
so
I've
got
a
traffic
generator.
That's
sending
me
some
messages,
so
I
can
actually
probably
vote
on
the
donut.
Oh
no,
it
didn't
it
didn't
work.
Let
me
try
this
again
see
if
I
can't
make
linkery
pick
it
up.
B
There
we
go
come
on
buddy,
oh
there
we
go,
so
I
can
see
that
from
the
web
web
deployment
to
this
vote
donut
I
have
some
failure
right
and
of
course,
we
can
see
that
by
going
in
the
ui,
if
we
try
and
vote
on
donut,
in
spite
of
it
being
it
being
like
by
far
the
best
emoji
in
our
list,
it's
it's
not
getting
the
it's
not
getting
the
recognition.
B
It
deserves
on
the
leaderboard
right
and
that's
because
when
we
make
the
call
from
web
to
vote
for
donut
right,
we
get
we
get
an
error.
We
can
dive
in
a
little
bit
deeper,
actually
do
a
tap
like.
So
I
want
to
do
like
the
live
tap
on
this
traffic
right
and,
let's
start
it
right
and
again,
I'm
talking
looking
at
the
voting
service.
Have
I
started
yeah
I
started
so
we
just
have
to
wait
for
something
to
call
it.
B: When we vote for the doughnut, however, because this is a gRPC call, we get an HTTP status of 200. So the HTTP call works great, but we have a gRPC error of Unknown. It's not that it doesn't know what's going on; this is literally a gRPC error code of Unknown being raised, and we see the gRPC error code right here. So we've got everything we need to go to the folks that make the voting service, the voting service team...
B
Please
go
fix,
vote
donut
because
it's
clearly
having
a
problem
right
now.
This
is
a
bit
of
a
contrived
a
contrived
example,
but
you
get
the
you
get
the
gist
of
it
right.
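The same tap is available from the CLI; roughly (flags per the linkerd viz docs):

```shell
# Stream live request metadata for the voting deployment
linkerd viz tap deploy/voting -n emojivoto
# gRPC failures show up as HTTP 200 responses carrying grpc-status=Unknown,
# because gRPC reports errors in trailers, not in the HTTP status code
```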
B
It's
there
to
give
you
an
easy
way
to
just
hop
in
you
know,
look
at
what
your
environment
is
and
and
like.
Let's
just
get,
let's
get
to
the
root
of
our
problem
as
quickly
as
we
can
right
and
this
whole
time
right,
we've
gotten
things
like
jrpc
load,
balancing.
We
have
all
our
connections
upgraded
to
http
2.
We
have
mtls
everywhere,
you
know.
If
we
want,
we
can
turn
on
policies.
So
we
can
restrict
what
service
is
allowed
to
talk
to
what
service
right.
B
So
there's
a
lot
of
power
there,
but
there's
no
there's
no
required
complexity.
Right.
If
I
look
at
linker
d
right,
so
let's
go
back
to
that.
To
that
istio
comparison
right.
If
I
do
k
get
crd
right
and
let's
just
grab
for
linkedin
grab
linker
d,
I've
got
I've,
got
three
custom
resource
definitions
right
on
in
this
in
this
environment.
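The check he runs:

```shell
# List the custom resource definitions that Linkerd has installed
kubectl get crd | grep linkerd
```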
B
There's
actually
four
because
there's
another,
which
is
the
traffic
split
smi,
so
there's
four
custom
resource
definitions
in
linkerid,
none
of
which
are
required
to
use
linker
d
right
like
none
of
this
is
critical
path
to
make
the
thing
work
right,
yeah.
B: So what's missing from this picture is a way to do what we call north-south traffic, or traffic from outside the cluster to inside the cluster. There's nothing there: there's no built-in ingress or gateway in Linkerd to bring traffic from outside the cluster in, except for when you're talking multi-cluster, which is a much different story.
B
So
linker
d
doesn't
add
an
or
doesn't
include
an
ingress.
So
if
you
use
kong,
if
you
use
nginx,
if
you
use
ambassador,
you
know
what
it
whatever
it
may
be
right,
you
just
you
basically
just
add
the
you
just
add
that
ingress
into
the
mesh
and
then
you
get
all
the
benefits
of
linker
d
plus
whatever
that
that
ingress
gives
you
natively
does
viz
integrate
with
kubernetes
are
back?
Will
it
be
able
to
see
its
own
name?
B
So
viz
is
a
really
it's
a
really
very
simplistic,
ui
right
and
it's
it's
not
like.
There's
no
there's,
no
there's!
No
user
login
right,
like
there's
no
login
log
out,
there's
no
user!
There's
nothing
like
that
right.
If
you're
looking
for
role-based
access
control
for
a
linker
d
dashboard,
that's
where
we
have
products
like
point
cloud,
which
is
a
commercial
product
which
is
free
to
use
right
for
anybody
for
up
to
two
clusters,
but
it's
got.
B
It's
got
our
back
rules
and
stuff,
but
the
open
source,
linker
d,
viz
dashboard
doesn't
have
doesn't
have
any
sort
of
role
thing.
There's
there's
some
cool
stuff
you
can
do
like.
So,
if
you're
using
the
ambassador
edge
stack,
you
can
set
up,
you
know
you
can
you
can
decide
on
an
account
by
account
basis,
who's
allowed
to
access
the
dashboard
and
who
isn't?
But
it's
it's
all
or
nothing
once
you're
in
the
dashboard
right.
B
The positive thing is, this dashboard doesn't give you access, even when I do that tap, to any of the actual traffic. It just gives you access to the metadata about the traffic. So when I start this, I don't actually see any packets, I don't see any data, and it's also read-only.
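The tap he demonstrates in the UI can also be run from the CLI; a sketch, assuming the Viz extension is installed and a deployment named web exists in an emojivoto namespace (both names illustrative):

```shell
# Stream live request metadata (method, path, status, latency) for a
# deployment. Tap shows metadata only, never request or response bodies.
linkerd viz tap deploy/web -n emojivoto
```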
B
So
there's
not
like
it's
not
a
ton
of
need
to
expand
it
right,
there's
not
a
ton
of
need
to
add
a
ton
of
authentication
in
here
like
I,
I
leave
a
version
of
this
linkerd
dashboard
exposed
to
the
internet
all
the
time.
It's
my
job
to
talk
about
linker
d
and
I've
never
had
I've
never
had
any
incident
related
to
it
so
yeah.
I
hope
that
answers
your
question
from
phalma.
I
think,
and
I'd
love
to
hear
from
you,
if
that,
if
that
was
useful,
puff
puffett
is
asking
yeah.
B
Okay
yeah,
so
the
right,
oh
geez,
great
great
question
both
of
you.
So
no
again,
the
liquor
d
dashboard
is,
is
it's
all
or
nothing
right
and
it
doesn't
even
it
doesn't
even
come
with
even
basic
auth
enabled
right.
It's
just.
Can
you
can
you
get
to
it?
Or
can
you
not
right?
That's
the
answer.
Oh
sorry,
bruno
I
just
saw
that
got
got
the
names
now.
Were
there
any
other
questions
or
have
I
have
I
answered
everybody
so
far.
B
Yeah, that's the bulk of what I was going to show you. The exciting things lately: we just published another set of benchmarks based on that Kinvolk testing suite, not something that we wrote, where we had great performance against Istio. But again, that comes at a trade-off: if there are features in Istio that you love and need, you're not going to get... oh right, Srini, I did see that question, and I ignored it.
B
I'm
so
sorry,
so
srini
asked
does
the
support
communication
with
services
running
outside
of
kubernetes.
So
it
depends
what
you
mean
like.
Yes,
obviously
you
can
talk
with
things
outside
kubernetes
into
kubernetes,
but
you're
doing
it
through
whatever
your
standard
ingress
path?
Is
you
don't
have
you
don't
have
like?
You?
Don't
have
the
the
mesh
the
the
service
mesh,
these
proxies
sorry!
Here
we
go
these
proxies
right
now
we
have
no
ability
for
you
to
for
you
to
deploy
them
and
extend
the
mesh
beyond
the
cluster
right.
B
The
mesh
is,
is
a
inside
kubernetes
situation.
Only
right
you
can
do
multi-cluster,
but
again
it's
only
in
kubernetes.
That
being
said
tune
back
in
in
the
back
half
or
at
middle
of
next
year,
because
we
are
looking
at
you
know.
Is
it
reasonable
to
use
link
or
d
for
situations
beyond
kubernetes
so
that
that
may
be
something
that
we
do
in
the
future?
B
And
I
I
would
really
love
we
love
to
hear
from
you
about
your
about
your
use
case,
because
I
think
I
think
it'd
be
cool
to
get
an
understanding
of
what
folks
are
doing,
speaking
of
which
here's
our
here's,
our
slack
so
dot
slack.liberty.io
right.
I
can't
I
can't
actually
post
anything
in
the
chat
here.
But
if
you
check
out
and
I'll
here
hold
on
I'll
put
in
the
private
chat.
B
It
so
slack.linkerdotio
will
allow
you
to
join,
join
the
link,
be
slack
and
you
can
just
reach
out.
Ask
me
questions.
I'd
love
to
follow.
The
istio
folks
are
also
extremely
cool.
B
At least as cool as we are, because, you know, we're all working in tech, so you've got to keep that in mind. Femi asks, or maybe Yusuf: will this Linkerd version work with any release of Kubernetes? No. There is a limit to how far back any given release goes. If you want to check, just run... sorry, woof, I've got a lot going on here. Here we go, let me make this way smaller.
B
If you ever want to check whether the Linkerd version you have will work: linkerd check --pre. This will test whether or not you can install it in the cluster. If I run it now it will fail, because I can't install Linkerd when Linkerd is already installed, but on a blank cluster it will tell you.
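The pre-install check looks like this (assuming the linkerd CLI is installed and kubectl points at the target cluster):

```shell
# Validate that the cluster meets Linkerd's requirements
# (Kubernetes version, permissions, etc.) before installing.
linkerd check --pre

# After installation, the same command without --pre verifies
# the health of the running control plane instead.
linkerd check
```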
Rafael asks: can we see traffic to MQs like Kafka and Rabbit?
B
Yes, kind of. Great question. This level of detail that you see here, where we can go into an individual... let's pop into web, the way we can see the individual API calls and understand that traffic, exists because the Linkerd proxies understand the protocol.
B
So
it's
it's
kind
of
limited.
That
being
said,
if
you
do
have
a
big
use
case,
where
you're
looking
for
an
inspection
of
the
kafka
traffic,
that
is
something
that
we've
been
looking
at.
Building
into
linkery
and
your
feedback,
like
listen,
user
feedback
and
real
world
scenarios,
is
what
is
what
drives
what
lingerie
is
and
does
right,
so
we
we
depend
on.
We
depend
on
y'all
out
there
to
tell
us
what
to
put
in
liquor
d
and
a
great
place
to
make
feature.
Requests
is
over
on
slack,
also
github
right.
B
You
can
find
linkery
on
github
this
linker
d.
Where
is
it
yeah
lingerie,
slash,
linker
d2.
This
is
where
you
can.
You
can
raise
issues
also,
if
you
have
something
that
you're
looking
for
in
terms
of
features,
oh
slack
tends
to
be
a
really
good
place
to
put
that
in
as
well
yeah
srini,
I
happy
to
happy
to
chat
more
about
your
scenario.
Honestly
would
be,
would
be
great
to
great
to
hear
about
what
you're
doing
raphael
same
thing.
B
Man
or
would
would
love
to
love
to
hear
your
perspective
and
and
talk
to
you.
If
you
all
can
join
the
slack,
it
would
be
fantastic
to
meet
with
you,
I'm
just
at
jason
and
I'll
I'll.
Look
at
who
joins
the
slack
after
this
and
and
come
say
hi
to
all
y'all
individually,
so
puff
it.
Oh
sorry
that
was
bruno,
I
believe,
says
you
mentioned
that
pods
are
already
doing
mtls.
B
Is this the default when injecting the sidecar? Yeah. So it is possible to turn off mTLS in Linkerd, but it's not simple or straightforward, and in general it is the default: you just get mTLS everywhere.
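The sidecar injection he's referring to is normally opted into per namespace or per workload; a sketch with an illustrative namespace name, after which meshed-to-meshed traffic is mTLS'd with no further configuration:

```shell
# Opt a namespace into automatic proxy injection; pods created in it
# get the Linkerd sidecar, and traffic between meshed pods is mTLS'd
# by default.
kubectl annotate namespace emojivoto linkerd.io/inject=enabled

# Restart workloads so the injector adds the proxy to their pods.
kubectl rollout restart deploy -n emojivoto

# Verify mTLS on live traffic: look for the SECURED column in edges.
linkerd viz edges deploy -n emojivoto
```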
B
I
don't
know,
that's
not
simple,
but
I
don't
know
how
to
do
it,
because
I've
never
I've
never
tried,
and
I'm
not
I'd,
love
to
hear
if
you
are
trying
to
turn
it
off
I'd
love
to
hear
why
and
what
the
use
case
is
because
I
I
think
what
you'll
find
is
is
there's
no
reason:
there's
no
performance,
reason
to
disable
mtls
when
you're
doing
when
you're
doing
linkedin
right
the
the
better
thing
to
do.
If
you
have
like
a
given
connection
that
you
don't
want,
you
know
him
like
being
meshed
you
can.
B
B
Yeah, well, I'd love to dive in if y'all have any particular questions. Okay, so Bruno says some people are allergic to TLS inside of a K8s cluster. You can certainly specify custom certificates. That's such another good one to put up here. The way Linkerd works when it gets installed, if you've heard of the three-tiered architecture for certificate authorities, is basically that you build...
B
In
general,
when
you're
looking
at
certificates,
you
build
like
a
root
certificate.
That
is,
establishes
the
trust
in
a
given
like
domain
right
and
that
root
certificate.
You
kind
of
keep
really
private
and
and
keep
it
offline.
Then
you'll
do
what's
generally
referred
to
as
an
intermediate
certificate.
So
any
like
one
any
one
use
case
in
our
case,
any
one
kubernetes
class
there
gets
an
intermediate
certificate
that
is
signed
by
the
root
that
everybody
trusts
right
and
that
intermediary
certificate
goes
to
the
control
plane.
B
So
the
control
plane
holds
on
the
intermediary
and
it
uses
it
both
the
public
and
private
key
to
create
and
sign
individual
proxy
certificates.
So
every
proxy
gets
an
individual
certificate
that
is
generated
by
the
control
plane,
but
that
that
root
and
that
intermediary
can
both
be
generated
by
you
and
honestly
for
production
environments.
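Generating your own root and intermediate is commonly done with Smallstep's step CLI; a sketch following Linkerd's identity naming conventions (assumes the step binary is installed, and keep ca.key offline afterward):

```shell
# Root CA: establishes trust for the whole mesh.
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure

# Intermediate (issuer) certificate, signed by the root; one per
# cluster, handed to the Linkerd control plane to sign proxy certs.
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
```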
B
So
if
you
decide
that
like
one
cluster
is
compromised
for
whatever
reason
you
can
revoke
the
intermediary
of
that
certificate
of
that
cluster
and
all
of
a
sudden,
it
can't
talk
to
the
rest
of
the
mesh
right,
but
each
and
every
other
cluster
is
still
is
still
set
up.
I
hope
that
I
hope
that
answers
your
your
question.
Bruno
tani
is
the
thing
that
issues
the
client's
certificates-
plugable,
oh
yeah.
It
is
not
right
so
great
great
question
tani
or
teddy
teddy.
B
I'm
sorry,
if
I
said
your
name
incorrectly
it,
the
the
the
identity
service,
generates
the
certificates.
That
being
said,
the
source
of
those
certificates
is
pluggable
right,
so
you
can
bring
whatever
whatever
ca
architecture
you
want.
B
You
know,
folks,
if
you
want
to
use
like
cert
manager
as
an
example
to
generate
certs
at
a
vault
or
out
of
like
some
aws
certificate
issue
at
I'm,
not
that
great
at
aws
honestly,
so
I
don't
know
what
they
have
in
terms
of
options
for
generating
generating
intermediary
certificates,
but
you
can
use
cert
manager
to
generate
certificates
from
whatever
authority
you
choose
srini.
Maybe
I'm
sorry,
I'm
not
not
sure
how
to
say
your
name
asks
if
we
can
suggest
any
resources
for
a
deep
dive
in
olympiad.
B
You
know
that
the
docs
are
really
good,
I'll,
be
honest,
like
that.
The
docs
are
great
and
and
start
using
it
like
the
nice
thing
about
lingerie.
Is
you
don't
have
to
go
super
deep
right
like
go
check
out
the
getting
started
guide
to
get,
get
a
sense
and
then
go
through
go
through
the
tasks
right
this
when
I
I
only
joined
buoyant
in
february
right.
The
first
thing
I
did
is:
I
just
started
going
through
these
various
tasks
and
they
were
great
right
because
I
was
like
okay.
How
do
I
bring
my
own
prometheus?
B
How
do
I
do
you
know?
How
do
I
set
up
retries
or
timeouts?
You
know
this
one
debugging
http
applications
with
per
route
metrics.
That
was
awesome.
Right,
really
good,
shows
you
how
to
use
some
of
the
custom
resources
that
do
come
in
link
rd
to
to
get
better
statistics
or
better,
better
insight
into
what
you're
doing
and
then,
with
all
this
check
out,
buoyant
buoyant
dot
cloud.
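The per-route metrics task he mentions is built on Linkerd's ServiceProfile custom resource; a sketch with illustrative service and route names:

```yaml
# A ServiceProfile names the routes on a service so Linkerd can report
# per-route metrics and apply per-route behavior such as retries.
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Must match the service's fully qualified DNS name.
  name: web-svc.emojivoto.svc.cluster.local
  namespace: emojivoto
spec:
  routes:
  - name: GET /api/list
    condition:
      method: GET
      pathRegex: /api/list
    # Mark this route safe to retry on failure.
    isRetryable: true
```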
B
If
you
actually
want
to
try
like
the
it's
a
hosted
product
right,
there's
a
there's,
a
paid
subscription,
there's
a
free
subscription,
that's
good
for
up
to
two
clusters
and
50
workloads.
It's
a
good
tool.
It
makes
it
easier
to
see
some
of
the
stuff
that's
going
on
in
linkerid,
and
it
gives
you
gives
you
alerts
if
you've
done
any
kind
of
misconfiguration.
So
you'll
know
that
right
away,
but
yeah
I
would.
I
would
check
out
initially
check
out
the
tasks.
It's
it's.
A
And yeah, as the coordinating foundation already said, all of the sessions from Cloud Native Live are recorded, so you can view them afterwards. No worries, you can play back all of the details you want. I think we have four minutes left officially, so if there are any quick final questions, we do have some time. Usually when I say this, the longest questions always pop in immediately at that point, but there's not too much time. But Jason, do you have any final words, wrap-up, anything?
B
Yeah, so the getting started guide is a great place to go. After that, check out this multi-cluster one. It's a fun one, because it will require you to do the install but customize it. You can't use that linkerd install command as-is with no flags, because when you're doing multi-cluster you have to create and specify a specific certificate authority for each cluster, so generate two different intermediary certificates and connect them.
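For the multi-cluster path, the install has to be told about the shared trust root; a sketch, assuming you already have a shared root (ca.crt) and a per-cluster issuer certificate and key (issuer.crt, issuer.key; filenames are illustrative), run against each cluster:

```shell
# Install Linkerd with your own trust anchor and per-cluster issuer,
# so clusters sharing the same root can authenticate each other.
linkerd install \
  --identity-trust-anchors-file ca.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl apply -f -

# Then set up the multicluster extension to link the clusters.
linkerd multicluster install | kubectl apply -f -
```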
B
I've
got
a
talk
that
I
did
for
the
the
sivo
folks,
where
I
connected
a
cluster.
Oh
sorry
I
mean
it's
buoyant
hold
on.
Let
me
take
you
over
to
buoyant.io
there
we
go
buoyant
as
in
it
floats
and
here
I'll
post
that
link
but
yeah.
It's
point
we're
the
folks
that
make
linker
d,
that's
kind
of
our
our
claim
to
fame,
if
you're,
using
linkery
and
you're
in
production
love
to
hear
from
you
at
a
minimum.
B
So if you want a very cool Linkerd hat or some Linkerd shirts and stickers, add yourselves to the adopters list, and we'd love to hear from you. Also, feel free to just pop in on Slack; happy to talk to you. We'd love to get you into production with Linkerd if you're not there, and love to answer your questions if you're concerned about stuff around service mesh.
A
Perfect, a perfect call to action to finish off with, and no new questions there, but I think everyone will now rush over to the Slack side to ask all of their questions later. It's been an absolutely wonderful hour here with everyone, so many questions, so much interaction. Thank you so much, everyone, and thank you all for joining in today for the latest episode of Cloud Native Live. It was really great to have Jason Morgan talking about Linkerd today, and, as mentioned before, I really love the interaction.
A
Thank
you
so
much
for
joining.
Thank
you
so
much
for
your
questions.
It
was
a
lot,
but
it
was.
I
think
it
was
the
best
way
to
spend
this
hour
next
week.
We
will
have
a
session
about
multi-architectural
kubernetes
clusters
so
tune
in
at
the
same
time,
next
week,
looking
forward
to
seeing
you
there
and
thank
you
for
joining
us
today
and
see
you
next
week.