A
Hello everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, I'm the director of open source here at Aqua Security, and I'm also a CNCF ambassador, and I'll be hosting today's show. On Cloud Native Live, every week we meet to bring you a new set of presenters who will showcase how to work with cloud native technologies.
A
They will build things, they will break things, and they will answer your questions. Feel free to ask questions as we go along in the chat, and we will do our best to answer them. This week we have Jason Morgan here with us from Buoyant to talk about the Linkerd service mesh. Before we get started, I just want to remind everyone that KubeCon North America is coming up soon in October. So if you haven't registered: it's going to be both in person and virtual. Really looking forward to that.
A
So just a quick reminder and a bit of administration. I want to remind everyone that this is an official livestream of the CNCF and as such is subject to the CNCF code of conduct. So please don't add anything to the chat or questions that would be in violation of that code of conduct — basically, just be respectful of your fellow participants and presenters. And with that, Jason, would you like to introduce yourself?
B
Hey, thanks for having me on. My name is Jason Morgan. I am a technical evangelist with Buoyant, and it's my job to talk to folks about Linkerd and try and encourage people to use it. And thanks for having me.
A
All right, cool. So we're here to talk about Linkerd. Do you want to give just a one-minute background about it?
B
Yeah, so Linkerd is a service mesh, and it is built specifically for Kubernetes. A service mesh is a tool that you add to your Kubernetes cluster that will intercept and, you know, work with the traffic between your applications.
B
So a service mesh works by adding a number of proxies beside your applications to handle the traffic between them, and it allows you to enhance them with additional observability features, make your requests more reliable or more performant, or to add security into the environment — like ensuring that your communications between apps are all mutually authenticated.
A
Great, so that's the service mesh in general. And did you have a specific use case or story that you wanted to cover for today?
B
All right, yeah — should we kick off? All right. If you don't mind, I'm gonna share screens, so give me one second.
B
All right, y'all see my screen? Okay, we see it — okay, all right. So I've got two clusters.
B
Well, I'm going to have two clusters by the end of this. We've got a cluster in New York, right — that's a Civo Kubernetes cluster in their New York data center — and one in their London data center. And what we're going to do here is route traffic from our user — in this case, the top hat person with the monocle — over the internet, into a Traefik ingress, through to a front-end application in our Kubernetes cluster. Then Linkerd is gonna...
B
Take that communication from Traefik to the front end, to this multi-cluster gateway, through a layer 7 connection between these two gateways, and then over to a back end sitting there in London. So we're going to deploy an app, we're going to deploy some clusters, we're going to show the front end not working until we connect it to the back end over in London. And that's what I've got for slides, so we can hop over to a demo.
A
So the scenario is that the application is kind of spread across two regions, or something like that — these are supposed to be components of the same application, right?
B
This one in particular is a bit contrived, just so it's easy to demo, right. A more common use case might be — like, you know, I've seen examples of folks that have an application where the scale is so massive that it needs to live in its own cluster, so they use multi-cluster to connect one app to another in that circumstance. Or another one might be like...
B
So let's get started. The first thing I'm going to do is use the Civo CLI to create a couple of clusters. So let's do that here — can you see my terminal? Okay, yes. What I'm doing here is creating a cluster called nyc2 with one node, and it's going to be a small-size node there, and then we're going to wait for that to finish. Then we're going to do something similar in London. The terminal on my left is going to be my New York cluster.
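The exact commands weren't readable on screen; a sketch of what the cluster creation likely looks like with the Civo CLI (cluster names, node size and region codes here are assumptions, not taken from the stream):

```shell
# Create a one-node k3s cluster in Civo's New York region and wait for it.
civo kubernetes create nyc2 --nodes 1 --size g3.k3s.small --region NYC1 --wait

# Same thing in the London region for the back end.
civo kubernetes create lon1 --nodes 1 --size g3.k3s.small --region LON1 --wait

# Merge both kubeconfigs locally so kubectl can target either cluster.
civo kubernetes config nyc2 --save --merge
civo kubernetes config lon1 --save --merge
```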
B
So while those spin up, we'll get going. The first thing I have to do to use Linkerd in a multi-cluster manner is generate certificates for Linkerd that use a shared trust root. So what happens is, I've got these two clusters, and we're gonna do mutually authenticated traffic from one cluster to the other, so they all have to trust a common root certificate so that they can do that.
B
So we're gonna use a tool called step to generate a root certificate, and then two different intermediate issuer certificates. Stepping back: Linkerd has a component that automatically generates a new certificate for every application inside your Kubernetes cluster that's part of the mesh, right, and it generates that certificate with this issuer certificate that it has. So our step now is to make both issuers trust the same root, so one issuer will trust certificates from the other.
B
I'm going to use a tool called step to do that. We're just going to go ahead and short-circuit this, because it's taking a second — so while these clusters create... oh, there you go, I just got impatient. We're going to go ahead and generate a new root certificate. So first I'm going to just move into a temp directory.
B
Just make sure it's empty — so we've got an empty temp directory, and we're going to create a new root CA. I've got this tool called step; it's basically just an easier way to do OpenSSL, if you're not already good at OpenSSL. I can send a link to it; it's also under our multi-cluster docs. Let me actually pull this over — we're gonna follow generally this multi-cluster tutorial that you can find right here. I don't know, is it already put in the chat, or what's the best way to share that?
A
Put it in the chat, yeah. Okay, we will try to put it on the screen as well.
B
Okay, cool. So we're essentially following that tutorial here, which has all the commands for how we do something like this. So we're going to create a new root certificate right here.
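The root-CA command from the Linkerd multicluster tutorial looks roughly like this (file names follow the docs; the `--insecure --no-password` flags just skip encrypting the private key for a demo):

```shell
# Generate a self-signed root CA (the shared trust anchor for both clusters).
step certificate create root.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure
```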
B
So with this root certificate we can now create an issuer. I'm going to create one issuer for New York and one issuer for London. So let's do that.
B
So here I'm creating a certificate for identity.linkerd.cluster.local. The issuer is going to be called issuer-london.crt, and we're using the root CA that we just created.
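A sketch of those issuer-certificate commands, following the same tutorial (the exact file names on screen may differ; the identity name must be identity.linkerd.cluster.local):

```shell
# Create one intermediate (issuer) certificate per cluster, each signed
# by the shared root, so the two clusters trust each other's identities.
step certificate create identity.linkerd.cluster.local issuer-nyc.crt issuer-nyc.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key

step certificate create identity.linkerd.cluster.local issuer-london.crt issuer-london.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key
```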
B
London — all right. So if we look in our directory now, we've got the root CA, a New York issuer and a London issuer.
B
All right, so you should be able to see in my terminal, right here in the prompt, it's going to tell you what Kubernetes cluster and what namespace we're in, just so you've got a sense of where we're working. So again, we're sticking with: the left side is going to be New York, the right side is going to be London.
B
So now we need to install Linkerd, right — we're using the Linkerd CLI to actually do the install. By default, the Linkerd CLI is going to generate new certificates for you every time.
B
I want it to use the certificate that I created, so we're giving the command: what's the root CA, what are your identity certificates going to be, so that you can create this environment. Then we're going to go ahead and apply the resulting YAML to our cluster, and we're also going to run a linkerd check at the end, so we know what's happening. So here's the New York install with the New York certificates, and we're going to do the same thing for London.
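Put together, that install step likely looks like this (the kubectl context name `nyc` is illustrative):

```shell
# Install Linkerd into the New York cluster using the pre-generated
# trust anchor and issuer credentials instead of auto-generated ones.
linkerd install \
  --identity-trust-anchors-file root.crt \
  --identity-issuer-certificate-file issuer-nyc.crt \
  --identity-issuer-key-file issuer-nyc.key \
  | kubectl --context=nyc apply -f -

# Verify the control plane came up healthy.
linkerd --context=nyc check
```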
B
Any questions as we're going? Still making sense so far? So we've bootstrapped empty Kubernetes clusters. Civo — they're a Kubernetes service provider, or actually a whole infrastructure service provider, but we're using their Kubernetes service offering. We're spinning up a k3s cluster in these two data centers, and right now I'm installing Linkerd. So there are no apps here beyond the default Traefik instance that installs with k3s.
B
We did our install and we used the linkerd check command just to validate that, you know, the install took correctly, and then we're ready to go on this cluster. Now we're going to add on — so Linkerd has components, right. We installed core Linkerd, which is the functions of the mesh that you need to actually do work — so that's, you know, adding in the proxy beside your applications.
B
We're also gonna go ahead and install the Linkerd dashboard, so that we can pop open a dashboard if we want, right.
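In current Linkerd releases the multi-cluster machinery and the dashboard ship as extensions; a sketch of what installing them looks like (extension names may differ slightly depending on the Linkerd version used in the stream):

```shell
# Add the multicluster extension (gateway + service-mirror machinery)
# and the viz extension (dashboard and metrics) to a cluster.
linkerd --context=nyc multicluster install | kubectl --context=nyc apply -f -
linkerd --context=nyc viz install | kubectl --context=nyc apply -f -
```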
B
Right, so the goal with Linkerd is to be an extremely lightweight, extremely fast, extremely easy to use service mesh — trying to make it as low complexity as possible. In general, the setup of connecting clusters between regions can be fairly difficult. But with the routing we do — we do a layer 7 connection — I didn't set up any special routing rules between London and New York, right. Any two places that can make an HTTPS connection between each other, we can set this up.
A
A network-level tunnel between the data centers to make the connectivity work between the pods, between the services — but the service mesh abstracts that, yeah.
B
Absolutely right. So all we're gonna do inside of our app — and I'll show you exactly what we're doing — is make just a DNS call to the appropriate service, and then it's gonna route traffic for us automatically. There are no special objects created; it's still just a Kubernetes service, right, and once you're in the mesh you can use a Kubernetes service to talk in that way. So we now have Linkerd in each cluster. Let's just do a quick check — kubectl get pods.
B
Okay, get pods -A — we're just going to look at all the pods we have on this cluster. Make it a little bit smaller. And again, you know, we've got our kube-system installed, so we've got standard stuff, as well as some Civo components.
B
We've got Linkerd, right — so we have the identity, the proxy injector and our destination service; that's our core Linkerd components. For multi-cluster, we have our Linkerd gateway; this is the actual ingress point. So again, let's look at our diagram. What we have right now is the New York and London clusters. We have Traefik — we actually have Traefik in both, but I'm only showing it in the one, because this is the ingress we're going to be using — and we have these gateways.
B
So what we're running is a command, linkerd multicluster link, right. We're saying: hey, we're going to link this London Kubernetes cluster — we're going to generate a YAML manifest. Let's actually run that again.
B
Oh yeah, okay, so I can just run it again. What we do when we run the first part of that command is generate the pod security policies, services, service accounts, role bindings — that sort of stuff: the permissions that we need so that one cluster can talk to the other. Then we need to apply it. So from the London cluster I need to generate this YAML and apply it over on the New York Kubernetes cluster. So now I can do linkerd multicluster gateways and see, you know...
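That link step, sketched out (context names and the `lon` cluster name are illustrative):

```shell
# Generate link credentials on the London cluster and apply them to New
# York: New York can then mirror services that London exports.
linkerd --context=london multicluster link --cluster-name lon \
  | kubectl --context=nyc apply -f -

# Check gateway connectivity from New York's point of view.
linkerd --context=nyc multicluster gateways
```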
B
Right, and it doesn't have any peers. So what we've done is we've given New York access to London, but we haven't given London access to New York, because in our case we don't need it — it's not bi-directional. Our front end is going to live in New York and it's going to talk to London, and that's the end of that story.
B
So now we've got our gateways connected, right — and again, it's a one-way connection here.
B
We built this connection here, so we've got our link set up — that's what we did with that multicluster link command. Now we can actually start deploying some applications.
B
So here on my New York cluster, I'm going to deploy my front-end application, and then we'll take a look at the YAML after we do that.
B
Right, we've got our front-end service, and we have an ingress — kubectl get ingress, right — which is set up to route to our front end. So here, let's just look at the YAML.
B
So if we look: I created a namespace, and I set on the namespace the linkerd inject annotation. So I told Kubernetes — really, I told Linkerd — that anything that gets created in this namespace, I want to add my proxy to it, because I want it to be part of the mesh. Then I've got a little config for this nginx instance — my front end is really just a little nginx server, right — and so we set up a config and we're telling it: talk to this...
B
This address, right: http://podinfo-lon on port 9898. So what's happening is, because there's no valid service for podinfo in this cluster, it's not able to hit the back end, so it's crashing, right. That's why we're at our failure. And then otherwise I just have the config for the rest of my front end.
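The namespace annotation described above can be sketched like this (the namespace name `frontend` is illustrative, not taken from the stream):

```shell
# Mark a namespace for automatic proxy injection: any pod created in it
# afterwards gets the Linkerd sidecar added by the proxy injector.
kubectl --context=nyc create namespace frontend
kubectl --context=nyc annotate namespace frontend linkerd.io/inject=enabled
```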
B
Let's go back to that link command really quick. With the link command, one of the things we do is specify the cluster name, right. So when we export a service from one cluster to another, the service name gets appended with a dash and then the cluster name, so that you know where it's from. And so, if you look at the podinfo config, you know, the nginx instance is looking for podinfo-lon — lon for London — right here.
B
So if I do kubectl get service on this side — I've got... sorry, I need to change namespaces.
B
I've got a podinfo service, but we don't see it — we see it on our London cluster, but not on our New York cluster, right; we don't have it. So we actually have to label the podinfo service to export it, right — tell Linkerd to share it between these clusters.
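Exporting a service is done with a label that the service-mirror controller watches for; a sketch (the `podinfo` namespace here is an assumption):

```shell
# Label the service on the London cluster as exported; the linked New
# York cluster then gets a mirrored podinfo-lon service automatically.
kubectl --context=london -n podinfo \
  label service podinfo mirror.linkerd.io/exported=true
```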
A
Yeah, I don't see anything coming up on the chat. I guess it was a really good demo — everything was clear. And yeah, let's see the magic happen.
B
Well then, the next thing I'm gonna show is — you know, so we've got this going. I could restart it, but it just picks it up on its own. I guess it's just crashed, yeah — we'll go ahead and bounce it. Okay, delete pod, whatever.
B
Yeah, right, so now we'll see the new one come up, and it's going to start successfully because it's got a back end. You don't actually have to delete it, but I didn't want to wait for the crash loop backoff. So now we're two of two running, right. We see a success, and if I go back to this page, instead of seeing "service unavailable", I've got "hello from London", right — from our back end over in London.
B
Our back end over in London has now connected, and we're sending traffic.
B
Yeah, so what we can do — seeing as we have way more time than I thought we had — we can take a look over at... so I also did this earlier, right: I created an environment, let it actually run for a while, and I connected those to Buoyant Cloud. Buoyant Cloud is a commercial product around Linkerd that Buoyant created to make it easier to do, you know, multi-cluster management, or just to check on the health of your Linkerd instances. It's free for up to 50 workloads, right.
B
So what we've got here — like, one of the things you want to ensure when you're doing multi-cluster, to make that connection work between multiple clusters, the key component — I stressed that a bunch, but it's one that is really worth remembering — is that I have to share that common trust root. So here I can easily identify what my trust root is by its signature and ensure that they match, right, and that's the big component in making this work.
B
Then we can do some fun stuff, like right now: if we go look at the workloads — if I go check out the actual stuff — we could sort by HTTP metrics. I could sort by requests.
B
Let's see it this way, right — the things that are really getting hammered. So I've got a traffic generator out there. If we look at this thing: what I also have is this traffic-generator service over in London, which is actually hitting this Traefik ingress, sending a message from Traefik over to our front end, right, and then we go from the front end to our gateway.
B
And here, right, I can see the data on my requests and successes in this environment. And if I do something a little bit destructive, I can go start generating a bunch of errors or failures. Say we had a change to our front end or a change to our back end, and we started throwing occasional errors — what we're going to see is that data reflected in the actual live traffic in our apps.
B
So I can see that podinfo is starting to send some 500s, and that's cascading down to the front end, which is going over to Traefik, right. Latency still looks good, but we're seeing some errors get introduced — our request volume is also going up — and then we can add something like a delay, right, where for whatever reason, as traffic goes up, we're also seeing a delay.
A
Yeah, that's really nice.
A
Let me add a question — I saw you used Traefik here for the ingress. Is there any relationship between Traefik and Linkerd, or is it just, you know, you wanted to choose something that you liked?
B
Traefik is inside the mesh. What we don't worry about in Linkerd is how the traffic gets from the internet into your application. And so we work with Traefik, we work with Ambassador, nginx, or whatever the ingress is that you're using — we'll connect to it and bring traffic into the mesh.
B
We did kind of a detailed session with the folks over at Civo on, you know, kind of the do's and don'ts and hows and whys of integrating your ingress with your mesh, so that was a bit of a deeper look at the time. Sorry — the answer is we don't have any special integration with Traefik. Traefik works great with Linkerd, as does, as far as I can tell, every ingress I've worked with; we haven't had a problem integrating right there.
A
Cool, very cool. What are the other use cases? We've seen multi-cluster, we've seen metrics being reported and collected — like error rates, latencies and so on. Are there any other use cases for Linkerd?
B
Well — if you step back, right, the reason to use a service mesh is kind of like why you go to Kubernetes. You go to Kubernetes because, ultimately, you have an objective, and that objective is: I want my business logic inside my applications to work, and work well, and work reliably when I need it to, right.
B
So Linkerd is there to allow developers who are working on a platform that includes Linkerd to focus more on business logic and less on common functionality like mTLS, right. It may be important from a regulatory standpoint that all my traffic between applications is encrypted — like, even if I'm in, say, AWS and I've got three AZs, right...
B
I want to know that that traffic between availability zones really is encrypted the whole time, as an example. You can get that for free on your platform by adding these services to your service mesh. On top of that, there's way more I want to know about my environment than just "are things encrypted". As I write an application — if you've ever dealt with, you know, hundreds of microservices — it's hard to get consistent metrics from every application.
B
It's a huge struggle — I can't tell you the struggle I had at one of my previous jobs trying to get people to agree on a standard, or even just to say: hey, listen, everyone, we need a /metrics endpoint, because it's part of what we do to determine whether or not your app can go to production. That was a struggle, right. With Linkerd, you don't have to go bother those application teams, because you're going to get common metrics from every single application.
B
I didn't instrument anything in this environment, right — I added Linkerd, and now I can see the success rate, the requests per second, the latency. I can go further in and see what paths within the APIs on these apps are being called, and what the failures or response times are for those particular paths. Let's actually pop that up real quick. So here in London we'll do linkerd viz dashboard, right — we'll pop it open.
B
This is just the open source dashboard that comes with Linkerd, right. So I'm connecting over to London — there's a bit of a delay, but I can see... I'm actually going to change clusters, because this is the one that I'm not hammering with a bunch of errors. So let's make it a little bit more interesting.
B
Right, so I'm getting things with the mesh — like access to this dashboard, which gives me detailed metrics. I can see all my namespaces — that same info I was seeing in Buoyant Cloud. I could click over into the built-in Grafana instance and set up my own queries there, or connect it to, you know, the enterprise Grafana instance I have. And I can go into podinfo and see specifically what service — what deployment — inside this namespace is having issues. So it's not my generator.
B
My generator is happy; it's podinfo here. So look at that app: I can see, you know, the map of my traffic — where things are coming from — and then I can even see which service is returning errors. Well, it turns out I've got an endpoint, called 501, that just echoes back whatever status code you send it.
B
So this is a bit artificial, right, but you can see you can diagnose where that's coming from and get the latency — like, which of these have the worst response time. Oh, it turns out my /delay/1 endpoint has about a one-second delay every time it comes in, right. So again, another artificially introduced thing, but we can get this information — we can get it to our SRE team, to our application developers — without having to get them to instrument anything.
B
They just get it by being on the platform, and you, as a platform developer, just get the tools to deal with it. Now, if I wanted, I could add in a service profile and say: hey, listen, when you hit podinfo, no matter what, I want a maximum of a 500 millisecond timeout on any given call — as an example, right. It's probably aggressive, but you get the point.
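A ServiceProfile enforcing a per-route timeout like the one described might look roughly like this (service name, namespace and route are illustrative, not from the demo):

```shell
# Sketch: apply a ServiceProfile that gives calls to podinfo a 500ms
# timeout; the proxy cancels requests that exceed it.
kubectl --context=nyc apply -f - <<'EOF'
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: podinfo.podinfo.svc.cluster.local
  namespace: podinfo
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /
    timeout: 500ms
EOF
```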
A
Tell us where we can go to learn more about Linkerd, and if people want to experiment with Linkerd, how can they get in touch with you?
B
Yeah, so I'll post a couple of things in the chat. The best place to go to get started is to check out the Linkerd Slack — it's slack.linkerd.io. That's where you can join our community. We're super active, and there are a lot of folks that will help respond to your questions, help you get going. Thank you very much. There's also our getting started guide.
B
There we go — there's our getting started guide, which will walk you through the initial steps of actually setting up Linkerd. And there's that multi-cluster demo that I sent you to. That multi-cluster demo is a bit more complicated than what I just did, as it includes using a traffic split to share traffic between east and west — between two clusters, right — but the basics are still there.
B
We keep it simple: we don't make new custom resource definitions, so we don't want people to have to deal with new object types just to use Linkerd or to use multi-cluster. It's Kubernetes, so we want Kubernetes objects to be the primary thing that you work with.
B
The guide's a great one, so those are the big things — check those out. And then we actually had another question — yeah, about k3s. It absolutely works great on k3s, and there's a cool CLI check that you can use: the linkerd CLI comes with a test to let you know if it will work.
B
Yeah, so I'm going to create a new cluster. This is a k3s cluster running, you know, in Docker on WSL — so a bunch of weird stuff — but we can just validate if this is going to be a safe target for what we're doing, or a kind cluster, or whatever you're working with. Let me just run linkerd check --pre, and this will just tell us: do we have the permissions?
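That pre-install check is a single command against whatever cluster your current kubeconfig context points at:

```shell
# Validate that the target cluster can host Linkerd before installing
# anything: RBAC permissions, Kubernetes version, networking, etc.
linkerd check --pre
```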
B
Well, try it out. I hope — you know, it's really not too bad, and what we see is a lot of adopters find the experience with Linkerd good, by doing things like reducing the cognitive load that you're under — like, what do you need to know to make it work — by keeping that really low.
B
I haven't played with any custom resource definitions yet — I'm lying, I actually created the link, but that was on the back end using the linkerd CLI. So I've got one CRD created — instantiated — in these two clusters, and now I'm doing a connection between New York and London, right. And then it's even less when you're staying in-cluster: you need to work with zero custom resource definitions. We use your Kubernetes objects right out of the gate.
B
Oh, one more, sorry: gRPC load balancing. If you're using gRPC to do connections — gRPC is great, right, because it allows you to multiplex connections: you can send a bunch of requests over one HTTP connection, which is awesome. But in Kubernetes — because Kubernetes does what's called connection-level load balancing — what you're going to get is essentially hot pods if you have a gRPC service. One responder for your gRPC service is going to get hotter than the others, because it's going to get all the traffic. And so with something like Linkerd, what you get, right out of the box, is request-level load balancing for your applications, so that you're going to balance that across all the different components.
A
And it's as easy to set up as everything else that we've seen here?
A
Great, I'm convinced — awesome. I hope the audience is as well.
A
So if there are no other questions, I guess we can conclude this one. Feel free to ask any other questions that you may have while playing with Linkerd, in their Slack or in the CNCF Slack under the cloud-native-live channel. Jason, thank you very much — it's been a really nice, quick Linkerd demo. Looking forward to more, and I'll see you next week on Cloud Native Live — next Wednesday, every Wednesday.