From YouTube: Webinar: Introduction to Linkerd
Description
https://linkerd.io/
https://buoyant.io/
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A: Previously, if you've been on webinars with us before, you'll know that we have questions at the end, but we tried out having questions throughout last time around and it was a lot of fun, so I'm going to go with that again this time. If you have questions during William's webinar, please drop them in either the Q&A box or the chat window down at the bottom, and I'll find opportune moments to interrupt William. William, away you go; I'm ready.
B: Let's make it fun; take it away. All right, so we're here today to talk about Linkerd, which is one of the newest projects in the Cloud Native Computing Foundation. I work at a company called Buoyant, and I'm going to be doing the webinar today, but I actually don't do a lot of programming anymore; most of my job involves talking. But if you're going to CloudNativeCon in Berlin later this month, or if you are going to be anywhere...
B: We have a website and a Slack channel, which is where the primary community stuff happens, and then of course we're on GitHub. And then I just want to run through some of the numbers. This is the kind of thing that always makes me feel happy and proud, because as an open source maintainer there's really only one reward you ever get, which is adoption. All right.
B: Maintaining takes up every spare aspect of your time. Okay, yeah, sometimes it's fun, but it can be a slog at times; getting adoption is really the thing that keeps you excited. So: we're about 13 months old, we have around 600 (over 600) folks in the Slack channel, and 1,600 GitHub stars; please keep those coming, the support makes me feel happy. And 200,000 Docker Hub pulls, which is not really a measure of anything real.
B: What I'm more excited about is that we have over 30 contributors, over 20 confirmed companies in production with a lot more in the pipeline getting closer to production, and we've served over a hundred billion production requests that we know of. Of course, one of the crazy things about an open source project is that you don't really know the actual numbers; it's an estimate. I put some of the companies up here.
B: These are the ones we know about; some of them are up on this slide, and there are other companies I'm really excited to talk about coming up. We're going to have a little reveal a bit later.
Okay. So let's get down to what we're actually talking about. What is a service mesh? Linkerd is a service mesh, and a service mesh is a dedicated infrastructure layer for service-to-service communication, okay, and so it's decoupled from the application. I'm going to try my best to minimize this window and zoom in so I can actually read my slides.
B: What's interesting is that I can't see my mouse, so this is a little complicated. Let's see; I'll just try to remember what the slides say, since the cursor is kind of hiding. All right: a dedicated infrastructure layer for service-to-service communication, decoupled from the application, and focused on services and requests. Obviously, we've had communication between machines for as long as network programming has existed.
B: At layer 3, you know, companies tend to build off layers 3 and 4, or they kind of skip over to layer 7 and do HTTP-level stuff there; and nominally, you know, there are layers 5 and 6 in between. So we're trying to say this is layer 5: the service mesh really lives at layer 5. So it's protocol aware, right? We know enough about the protocol to do intelligent stuff, and I'll talk about what that stuff is. But what we don't look at is the payload.
B
We
don't
actually
care
what
the
payload
is.
We
don't
know
whether
it's
a
pond
or
protobuf
or
really
you
know
what
the
application
is
going
to
do
all
right.
So
that's
where
servers
fish
tips
that
why
do
I
need
one?
Okay,
you
need
one
anyone
because
service
of
service
communication,
which
is
also
called
term
them
with
bigger
enterprise
shop
speech
to
us
communication,
needs
to
be
monitored.
Okay,
it
needs
to
be
managed
and
they
should
be
controlled.
You
can't
you
can't
leave
it
alone.
B: This is something that you now need to deal with. Okay, and I put this very scary picture up here. I used to work at Twitter as an engineer there, around 2014, and this was the diagram of how the Twitter services communicated with each other, the dependencies between services. I think this is a little bit of an exaggeration; I think people pretended some things here.
B: But this is at least one view of the service-to-service communication happening at Twitter, which was one of the earliest very large-scale microservice deployments, although we didn't call it microservices at the time. So this picture is just scary, so scary. This is why you need to start thinking about this layer as something that needs monitoring, managing, and control, because it's really going to be driving a lot of the runtime behavior of your application. Okay, "but I never needed this before." Well, okay, in the past you didn't need this.
B: You really didn't need it if you were running a monolith. You didn't need this before because you weren't running containerized microservices in an orchestrated environment. Now that you are, you do. It's really the microservices part, I think, that's the most critical aspect; containerization is great in a lot of ways, but it is not directly a prerequisite for this, and neither is the orchestration environment.
B: Yeah, yes, that's right. All right, so let me just drive home this point a little bit more. If we think about this shift from monolithic architectures to microservice architectures: in the monolithic world, we have these things, denoted here with the black arrows, that we understand very well, called function calls. Main calls function B, function B calls function A, right? And we really don't spend a lot of time thinking about those function calls, because their performance characteristics are very well understood.
B: Their failure characteristics are very well understood as well: function calls don't fail a lot. Unless you're really into very low-level systems programming, you make a function call and you don't really think about it twice. So what we've done, at least in the most naive translation in the shift to the microservice world, is we've replaced those function calls with these scary red arrows, which are network calls.
B: And it's not just one network call, but tens of thousands or hundreds of thousands of network calls, all happening concurrently. And what do we know about network calls? We know that they fail, and we know that they take a long time compared to function calls. So the characteristics of the core primitive that we're relying on are fundamentally very, very different.
B: That is why we need to measure it, we need to monitor it, and we need to be able to control it. Okay, so that's kind of the setup for how we motivate why we're talking about the service mesh, and Linkerd is an implementation of the service mesh. I would argue, actually, going back to my Twitter days, that Twitter had a service mesh.
B: We didn't use that term, just like we didn't use the term microservices, but it had one in a purely library form, because Twitter mandated the way that you wrote services: folks were going to use this one library to handle all their service-to-service communication. Doing it in a library has some advantages and some disadvantages; Linkerd is an implementation of the service mesh as a proxy. But the important idea here is this.
B: Can we go back to the picture for a second? In this new world of containerized microservices running on orchestrated infrastructure, service-to-service communication has to be a first-class citizen of your environment. It has to be something that you can look at, that you can monitor, and that you can control. Okay.
B: So now, finally, we're going to start talking about Linkerd. How does this work? What does this thing actually do? What does it look like? Okay: Linkerd is a process. It's a proxy. Actually, we've had proxies since the dawn of time ("oh great, yet another proxy"), but the feature set is very different, and the focus certainly is very different.
B: So Linkerd is something that you deploy on a per-host basis, or, if you're in the Kubernetes world, per pod or per host using DaemonSets, and it acts as a transparent proxy plus reverse proxy for all the service-to-service communication. Once Linkerd has been deployed, applications send their HTTP or gRPC or Thrift or whatever calls through their local Linkerd instance, and Linkerd will take care of everything else.
B: It takes care of service discovery, it takes care of load balancing, it takes care of reliability, it takes care of instrumentation, and I'll talk in gory detail about what those things are and how Linkerd accomplishes them. But the core goal here is that the application shouldn't have to care; it shouldn't even know that Linkerd is there. If we do this right, the application is fully decoupled from Linkerd; it's just this underlying layer. All right, so far, so good.
B: This is a little deployment diagram of how it might look on something like Kubernetes. You have pods representing individual services (service A, B, and C), deployed and managed by Kubernetes, and Linkerd would be deployed as a DaemonSet.
B: And the services, rather than communicating directly: service A, instead of talking to service B directly (doing a DNS lookup, making a connection, and dealing with its own retries and timeouts or whatever else), would just talk to localhost on Linkerd's port and treat that as an HTTP proxy. And if we get to the demo, and it goes successfully, I'll show you in very concrete terms what that looks like.
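To make that concrete ahead of the demo, here is a minimal sketch (illustrative Python, not webinar code) of what an application-side call through the local proxy can look like; port 4140 is assumed as Linkerd's conventional default HTTP port, so adjust it to your deployment.

```python
# Sketch: route an application call through a local Linkerd acting as an
# HTTP proxy. The port (4140) is an assumption based on Linkerd's usual
# default; check your own config.
import requests

LINKERD_PROXY = "http://localhost:4140"

# The app names the *logical* service ("hello"); Linkerd resolves it via
# service discovery and proxies the call to a concrete instance.
resp = requests.get("http://hello/", proxies={"http": LINKERD_PROXY})
print(resp.status_code, resp.text)
```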
B: But ideally, like I said, service A doesn't actually care about, or even know about, Linkerd, and you may not even have to make a code change. Then the other thing to notice here is that I talked about Linkerd acting both as a proxy and as a reverse proxy. So what you'll see is that the Linkerd instances actually don't talk directly to the destination instance: they talk to each other, and then the receiving side talks to the destination.
B: Okay, so we're actually introducing two network hops, and the network ops and network engineers in the room are probably freaking out, because we've introduced two user-space network hops into every service-to-service communication. It actually makes things faster. Okay, let's qualify that: clearly, just adding more hops doesn't make things faster. It makes things faster because we can reduce tail latencies.
B: We can reduce tail latencies by being intelligent about the way that we load balance and the way that we do flow control in between Linkerds. So your best-case behavior, your best-case performance, does take a hit, right: in the best case, you're going to spend a millisecond in one Linkerd and another millisecond in the other Linkerd, plus some transit time in between. Your best-case behavior takes a hit.
B: But your worst-case behavior gets improved by the way that Linkerd can manage these connections. Okay, and that's actually what you care about in a big distributed system: you want to reduce your tail latencies, and we can do that. And this linker-to-linker model has a whole bunch of nice benefits; we do this for a reason. So let's look at some of the things that Linkerd actually does.
A: [question from the audience about whether the Linkerd instances coordinate with each other]

B: That's a really good question. The Linkerd instances are decoupled, and they don't talk to each other for metadata reasons. Each Linkerd instance is making its own decisions, keeping its own stats on a per-instance basis: which instances are alive, which ones are fast, which ones are slow. So we deliberately distribute that decision-making power.
B: You know, we purposely do not want the Linkerd instances to be tightly coupled to each other, or even to really know where the other ones are, except when we're making a direct connection. We want these things to be stateless, and we want them to be independent, so that they're very easy to deploy. It's also helpful, I think, in some cases, because you may have very different latency profiles on different nodes; different nodes can have different performance characteristics, or be in different parts of the network.
B: So, in summary, these Linkerd instances are stateless and they're independent of each other, so deployments become pretty easy. In advanced usage, we do start centralizing some of these things, because you actually do want centralized control over bits of this, and I'll talk about that as we go on. Okay. So what does it do? Let's actually look at some examples. What does Linkerd do? Why am I using this? Why am I adding this other piece to my infrastructure stack?
B: So I'll list out a couple of things. The first thing it does is add reliability primitives. It does load balancing, and it can even do so in latency-aware ways, meaning that if particular instances are slowing down, it will shift traffic away from them, and as they start speeding back up, it will send traffic to them again.
B: Okay, and it can do that because it's operating at layer 5, right, in our OSI model, so it actually is aware of individual requests. So I can create a latency profile for each instance, I can keep that up to date based on the performance of individual requests, and I can use it to optimize where my traffic goes.
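As a toy illustration of that latency-aware balancing idea (this is not Linkerd's implementation; Linkerd, via Finagle, ships balancers along the lines of power-of-two-choices over a moving average of observed latency), the mechanism looks roughly like this:

```python
# Toy latency-aware balancer: keep a per-instance latency estimate and
# prefer faster instances, so slowing instances shed traffic and regain
# it as they recover. Illustrative only.
import random

class ToyBalancer:
    def __init__(self, instances, decay=0.8):
        # Start every instance with a neutral latency estimate (ms).
        self.ewma = {inst: 10.0 for inst in instances}
        self.decay = decay

    def pick(self):
        # Power of two choices: sample two instances, take the one with
        # the lower current latency estimate.
        a, b = random.sample(list(self.ewma), 2)
        return a if self.ewma[a] <= self.ewma[b] else b

    def observe(self, inst, latency_ms):
        # Fold each observed request latency into the running estimate.
        self.ewma[inst] = self.decay * self.ewma[inst] + (1 - self.decay) * latency_ms
```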
B: Okay, it does circuit breaking in a very similar way. If a host, or a particular instance, is responding really, really quickly, but it's returning 500s, then the latency-based load balancer thinks this guy's great, you know, even though it's actually returning 500s; circuit breaking can be aware of things at that level, and it can just kick that instance out: "cool, hey buddy, we're fine without you." It also does retries: it'll manage retries for you, and it parameterizes them using retry budgets and retry policies; we'll talk about that in a second, that's actually quite important. And we can do things like deadlines.
B: I think I'll talk about that one in a minute: deadline propagation, instead of sending explicit timeouts at every hop. More importantly, Linkerd decouples the transport protocol from the application protocol. Okay, so the applications could be speaking HTTP/1, but the Linkerd instances could be talking something else to each other if we chose; we can upgrade the protocol. A similar upgrade, which is actually quite awesome, is to wrap things in TLS: the Linkerd instances are both initiating and terminating TLS.
B: In Kubernetes that's actually quite nice, because Kubernetes guarantees that a pod talking to its local DaemonSet pod stays on that host, so that communication can stay unencrypted while the Linkerd-to-Linkerd communication is encrypted. Then you can have automatic cross-node TLS; that's another form of protocol upgrade.
And then, finally, Linkerd gives us a principled way of doing naming: it decouples the service name as it exists in my head, when I'm thinking about the architecture, from what has actually been deployed, right?
B: When I think about the application, I'm thinking about "the user service," right? I call it the user service. But in production, the thing that's actually deployed... well, we might have different data centers, there's a staging cluster versus a production cluster versus a test cluster versus something else, and we might have different versions of the user service running concurrently for doing blue-green deploys. Linkerd gives you this principled and consistent way of doing the mapping between those two.
B: So the application then stays very pure: "I just want to talk to the user service," so I'll say "web, get users," and Linkerd will translate that into what the actual production destination is. That mapping is very powerful; if we have time, I'll give some examples of it. It's part of the core routing logic, and it's able to be changed on the fly, so you can change the routing logic dynamically. There's kind of cool stuff you can do as a result of it.
B: We can change how requests are routed on the fly and do all sorts of fancy stuff there. In terms of traffic shifting, we can shift traffic based on percentages. And because it abstracts away access to service discovery, it actually gives you this way of gluing together things like Kubernetes with existing systems: if you have something running on ZooKeeper, or another system, or Marathon, or Consul, you can tie those things together. Linkerd can talk to multiple service discovery systems, and you can express precedence between them.
B: And again, the application is totally unaware. The application says, "okay, I just want to talk to the user service," and Linkerd will figure out what that actually means and will proxy it, regardless of whether the destination service is in the same environment. You can extend that same model to do failover, and I'm not going to talk a whole lot about that. And then we have consistent metrics; this is a kind of nice part.
B: You have consistent, global metrics across everything. If you're living in a polyglot environment, this is something that's really hard to do, because you have to keep your frameworks and your different languages all up to date. So if you want, say, success rates, you need a way of doing that at the application level, per framework or per language. With Linkerd, because we're sitting at layer 5 and measuring requests, we actually know what the success rate is.
B: Okay, and then: "but Kubernetes already has load balancing and service discovery and all that fun stuff." Yes, it does; it's at a different layer in the stack. The load balancing kube-proxy does operates at layer 4, and what we're doing is request-level load balancing at layer 5. This stuff is a little confusing, I think, especially if you're getting started. We have an excellent series of blog posts that I'm going to just point you to: if you search for "kubernetes service mesh," you'll get this whole long laundry list of blog posts.
B: Those have specific examples: commands you can run, Kubernetes configs, ways you can try deploying Linkerd and doing a bunch of different stuff. So if you want to learn more, that's a great way to get started. Okay, so let's see: I do have some examples, and I think we should have enough time, so I'll run through a couple of examples and then we'll do the demo.
B: Here we go, all right. So I mentioned briefly that we do deadlines rather than timeouts; let's take a look at what that means. Here's kind of the classic setup, right: I've got a bunch of services; I've got a web server that talks to a timeline service, which talks to the user service, which talks to the database, and I've got some timeout and retry policies that I've put on here. And this is kind of where you start.
B: This is what I call the web-browser model of communication. How does your web browser talk to the web server? Well, it talks to the server, and if the server doesn't respond in 400 milliseconds, it'll try again, and if it doesn't respond after three tries, it just gives up. Okay, but the problem is that when you're doing service-to-service communication across multiple hops, these timeouts and these retries don't actually compose well. Okay. So what is the end-to-end behavior here? What's going to happen? All right, well, let's take a look.
B: Let's say the database starts slowing down and we hit a timeout in the user service, and we start retrying. Okay, well, the user service could potentially take up to 600 milliseconds, all right, and once you hit 600 milliseconds, well, now you've triggered the timeout on the timeline service. The timeline service is going to retry, potentially taking up to its own limit, and now you've triggered the timeout in the web service. So now you have this cascading failure.
B: Not only do you have a failed request; you're actually adding way more load to the system, because you're doing a bunch of retries that you don't need to do. Okay, and this is something that we ran into at Twitter all the time: it's really hard to reason about timeouts and retry logic when you have multiple hops. Okay, so instead, we parameterize things by deadlines.
B: We set a top-level deadline and say: hey, your top-level deadline is 400 milliseconds. You've got 400 milliseconds to make this request happen, and if it doesn't happen, we can fail fast and just stop there. Okay, and then you get to the first hop and you subtract however much time that hop took; at the next hop, you track how much time is left; and you do the same at the third hop. So Linkerd can propagate these deadlines, and then it can cut off work that can no longer meet them.
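A minimal sketch of that deadline arithmetic (illustrative Python, not Linkerd code): one top-level budget, with each hop only getting whatever time is left.

```python
# Sketch of deadline propagation: the caller sets a single top-level
# deadline, and every hop subtracts the time already spent instead of
# carrying its own independent timeout.
import time

def call_with_deadline(hops, deadline_ms):
    """hops: list of (name, fn) pairs, where fn simulates one downstream call."""
    remaining = deadline_ms
    for name, fn in hops:
        if remaining <= 0:
            # Fail fast: no point calling downstream with no budget left.
            raise TimeoutError(f"deadline exceeded before calling {name}")
        start = time.monotonic()
        fn()
        remaining -= (time.monotonic() - start) * 1000  # ms spent this hop

# Example: web -> timeline -> users -> db under a 400 ms budget.
hops = [(n, lambda: time.sleep(0.05)) for n in ("timeline", "users", "db")]
call_with_deadline(hops, deadline_ms=400)
```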
B: Okay, so we talked about timeouts and deadlines; now let's talk about retries. Okay, so one of the problems we saw on that previous slide is that if you're doing retries on a per-request basis, the worst-case behavior is actually quite bad: if you have three retries, you're saying, okay, in the worst case, I'm putting several times more load on a system that's already struggling. Well, that's going to be problematic, right? Because how do computers work? The more load you put on them, the worse they get. Okay!
B: So now we've just exacerbated a system that was already starting to slow down, and it just piles on; this is the particular kind of failure that's often referred to as a retry storm. Okay. So what if, instead of parameterizing it as a per-request policy, we said: hey, we'll give you a retry budget.
B: Say your budget is 20%: that means Linkerd will let up to 20% of your total requests be retries, and if you exceed that, then you're not allowed to do any more; you just start failing there. Then our worst-case added load is 20%. Okay, now we have a much saner system. So this is the sort of stuff that Linkerd can do. You can do all this stuff in your application, but it's a little difficult to get right; or you can defer it to Linkerd.
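To make the budget idea concrete, here is a toy version of the logic (simplified: the budget Linkerd inherits from Finagle is windowed over time, while this sketch just tracks lifetime counters):

```python
# Toy retry budget: retries are capped at a fraction of total request
# volume instead of a fixed count per request.
class RetryBudget:
    def __init__(self, percent_can_retry=0.20):
        self.percent = percent_can_retry
        self.requests = 0
        self.retries = 0

    def record_request(self):
        self.requests += 1

    def can_retry(self):
        # Allow a retry only while retries stay under the configured
        # fraction of all requests seen; otherwise fail fast rather than
        # piling load onto an already-struggling system.
        if self.retries < self.percent * self.requests:
            self.retries += 1
            return True
        return False
```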
B: It's complicated to get right once you start thinking about the interactions between load balancing and circuit breaking and service discovery and a couple of other things. But I think, ultimately, the goal is that all this complicated logic should be part of the underlying infrastructure, because it's critical to the way that your application behaves at runtime. Okay. I also mentioned briefly the request-level form of load balancing: because we're observing the latency and the success rate of individual requests, we can balance traffic per request rather than per connection.
B: Okay, all right. And the final thing is this name, "linkerd": why do we call it the linker daemon? What does it link? Okay, so we actually took the inspiration from the dynamic linker in your operating system, the thing that runs when you execute a binary and wires up its shared libraries. Here's the man page, which we took a screenshot of; this is how we do research.
B: You don't have to worry about where getline is defined, right? You just invoke it, and the linker figures it out. Similarly, in the cloud-native world, we have the same idea for services, and I kind of alluded to this when I was talking about abstract names and concrete names before: a service shouldn't really have to care where something is. You just want to make the request.
B: Okay, so in this little diagram here, I'm the timeline service and I want to talk to the user service. Well, which one do I mean? Which one do I want to talk to? That shouldn't really be a decision made up in the application code, right; that should be handled by the underlying infrastructure. So we can bind those requests; in fact, we can do fancy stuff and bind them on a per-request basis.
B: So not only can we say, "hey, timeline: I'm doing a blue-green deploy between user-z1 and user-z2, and I want everyone to start shifting the traffic over, shifting all of the upstream connections' traffic over"; we can also do it on a per-request basis. You can say: hey, for this individual request, I want you to go through the production topology, except, instead of talking to the user service, talk to user-z.
I
want
you
to
talk
to
users
user
see,
so
we
can
give
you
mechanisms
for
doing
staging
and
canary,
and
if
you
look
at
the
blog
posting
list
in
the
previous
slide,
we
if
we
have
some
good
examples.
Ok,
so
again,
this
is
kind
of
just
driving
home.
The
point
about
logical
names
versus
concrete
names
and
type
the
mapping
between
those
two.
It's
called
it's
called
routing,
ok
and
I
talked
about
per
request.
Routing
I'll
just
run
through
this
briefly,
because
I
want
to
get
to
the
demo.
B: You know, one example of this is to say: hey, I've got my new instance of service B; I'm trying something out, I think I've got a bug fix or a new feature, and I just want to send one request through it. Well, can I send that request through the production topology, except that, instead of talking directly to service B, I talk to B-prime over here? Can I just do it for this one request? We can do that with the per-request routing policy.
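For a flavor of what that can look like from a client: Linkerd reads per-request routing overrides from an `l5d-dtab` header. The sketch below is hypothetical; the service names (`b`, `b-prime`), the proxy port, and the exact dtab all depend on your routing config.

```python
# Sketch: per-request routing override via Linkerd's l5d-dtab header.
# This dtab says: for this one request, send traffic bound for service
# "b" to "b-prime" instead. Names and port are placeholders.
import requests

resp = requests.get(
    "http://b/some/endpoint",
    proxies={"http": "http://localhost:4140"},
    headers={"l5d-dtab": "/svc/b => /svc/b-prime"},
)
print(resp.status_code)
```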
B: You know, another good use case is injecting debug proxies.
B: Okay: say two services are talking to each other in production, and something's going on in there that I want to see. On a per-request basis, we can inject a little debug proxy in between the two things, and then, without having to deploy anything, you can get insight into what's actually happening. It can be whatever you want; you can use it for failure injection too. Okay, so these are kind of some of the advanced features.
A: [question about how Linkerd interacts with existing service discovery systems]

B: That's a great question. What Linkerd does is talk to your existing service discovery system: you can tell it, I've got Consul running over here, I've got ZooKeeper over here, I've got the Kubernetes API, which is also a service discovery system, over here. Okay, so Linkerd is doing service discovery lookups in front of this stuff: it talks to these things and kind of merges them together, but it's not storing that service discovery data itself.
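Conceptually (an illustration of the merge-with-precedence idea, not Linkerd code), the behavior is like consulting an ordered list of discovery backends and taking the first answer:

```python
# Sketch: resolve a logical name against several discovery backends in
# precedence order; the first backend that knows the name wins.
def resolve(name, sources):
    """sources: ordered lookup functions returning [(host, port), ...]
    or None when the backend doesn't know the name."""
    for lookup in sources:
        addrs = lookup(name)
        if addrs:
            return addrs
    raise LookupError(f"no discovery source could resolve {name!r}")

# e.g. resolve("users", [kubernetes_lookup, consul_lookup, zookeeper_lookup])
# where each *_lookup is whatever client function you wire in (hypothetical).
```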
B: Does it make HAProxy obsolete? I think there are still a lot of good use cases for HAProxy. For now, no, I don't think so. There are a couple of things, though; maybe in the future we will, I don't know, I think in the future we might. Right now, one big thing that we don't have (and this is going to change pretty soon) is a pure TCP proxy; everything that Linkerd does is really focused on request-level stuff.
A: [follow-up about the debug-proxy slide: is the injected proxy another Linkerd?]

B: Yeah, okay, so in this case the proxy would actually be some other process that you wrote. What I'm describing here is allowing you to insert some service in between calls to some existing production topology. So the use case here is: I want to see what's happening. The little "P" in this diagram is not actually Linkerd; it's something that you wrote.
A: [question about whether Linkerd supports hot restarts for configuration changes]

B: That's a good question. It can gracefully shut down; it cannot do a hot restart, because we don't actually like that pattern very much. What we prefer to do is move more and more of the configuration into an external service. The routing policy is the first thing that we've done this with: if you look in the Linkerd project, we have this other little binary, another entry point, called namerd, and namerd is a routing policy store. Let me say a little bit more about that.
B: So if you want to change routing policy across your entire fleet of Linkerds, you change it in namerd, and each Linkerd instance will be listening to namerd and updating its policy accordingly. So our preference, basically, is that you should never have to restart Linkerd unless you're upgrading Linkerd itself; anything in the config that you might want to change at runtime is our concern to make dynamic, right.
A: Aaron says great things, so apparently you answered that question correctly. Okay, I was going to ask you how this compares to Hystrix, but then vvgsh jumped in and stole my question with a better one. He says: first time hearing of Linkerd, but who would you say are competitors in the domain, and how does Linkerd excel above those? Okay, wow. Let's go, okay, yeah.
B: You know, I think generally, as an open source project... I'm reluctant to use the word "competitor," because we don't sell Linkerd, so if someone uses something else, it's not like, oh boy, five dollars have gone out of my pocket. And generally, I like the idea of there being kind of a thriving ecosystem of projects all doing cool stuff.
B: It's good for the world. So instead of answering the competitor question, let me answer: what are other projects in the space that you could look at? There are certainly a lot of proxies out there. Traefik is one that's very popular in the Kubernetes community; NGINX is one; there's a project from Lyft called Envoy; and I think Uber had one at some point called Hyperbahn.
B: I don't know if it ever became open source. There are also things like Kong, which is an API gateway, but it's effectively a proxy with things like rate limiting. So there are really a lot of projects in this ecosystem, and they each have a slightly different take on what they're trying to accomplish.
A: So I'm going to skip ahead a few questions and then come back, just because you brought up Envoy.

B: Envoy? Okay, yeah.

A: Naraku says: I know the Envoy project aims at a similar space, and it's built in C++, designed for low resource utilization. What's your take on the resource footprint? Yeah.
A: [aside question about what to call this category of tool]

B: "Mesh" is the best name that I've come up with. You know, part of the problem is that we've had proxies since the beginning of time, so just calling it a proxy, I don't think, is quite as useful. I think "service mesh" is really the best term, but that's still a pretty young term: when you search for "service mesh" now, mostly you find a lot of references to a company called ServiceMesh whose CEO got in trouble; that's not related to Linkerd at all.
B: Basically, in our experiments, if you're running a non-crazy amount of throughput through Linkerd, so under a thousand requests per second, we can squeeze it down to about 100 to 105 megabytes, and not an extreme amount of CPU usage. That's under the current architecture; there are some things we're going to do that will significantly improve that. You shouldn't have to do any real tuning of this thing unless you're getting to very, very high workloads.
B: You know, in our experiments, we got up to around 40,000 requests per second through a single Linkerd instance, at which point I think we saturated the network connection. We had to do a little bit of tuning to get there, but basically, unless you're at high workload, you shouldn't have to do a lot of tuning. We don't want this to be a thing that you worry about.
B: The goal is for it to ship out of the box as something that's lightweight and performant, so that you can run it as a Docker container or whatever without really having to look at the gory details underneath the hood. Of course, that's not always true in practice, especially early on in productionizing and operationalizing it.
A: Excellent. We've got two more. Thomas asks: is the preferred approach for implementing Linkerd on Kubernetes entities (a) the sidecar approach, or (b) a DaemonSet? I found that the sidecar approach has had trouble coming back up during an autoscaling Kubernetes event, which doesn't appear to be a problem with the DaemonSet approach.
B: I don't know why the sidecar would have that particular problem; we haven't done a whole lot with autoscaling, but I don't know why the sidecar would be that bad. Our preference right now is DaemonSets, simply because of the resource utilization question: it's easier to scale Linkerd one per node rather than one per pod, but that's really mostly a question of amortizing resource costs.
A: [question about whether anyone runs Linkerd with Consul in production]

B: I'm definitely aware of people doing this. Can I say who? I don't know; if you ping me on the Slack or send me a Twitter DM (@wm), I'll tell you kind of one-on-one. I don't know what the rules are; we get exposed to a lot of people's internals, and I'm never really sure what's public and what's not. But yes, there are definitely multiple Linkerd-plus-Consul production users that I can point you to; I'll just do it privately.
B: All right, so here we are. Mark, can you see the screen? Everything look okay? ("Yeah, it looks great.") Okay, so here's what we're going to do, and this is going to be interesting. I used to give a lot of demos with Docker Compose and Docker for Mac, and what was weird was that when I tried it at home, everything worked perfectly; as soon as I started giving a demo, it would crash.
B: What I realized was that the video conferencing software was taking up so many resources that it would cause Docker for Mac to crash. So we're going to see: this all works perfectly at home. I'm using minikube in this case, but we'll see what happens with the webinar software running. All right, so here's what I'm going to do: I actually have a little minikube instance running, and it doesn't have anything on it right now. Let me go show you what the dashboard looks like.
B: So here's what my Kubernetes, my minikube, looks like: it's got nothing on it, and I'm going to add a bunch of things. So, let's see; the first thing I want to add is Linkerd, okay. And by the way, this repo is available on GitHub: if you go to github.com/BuoyantIO, there's a whole examples repo, and you can go into this directory and kind of follow along at home. Okay, so there's my config; let me also install something called...
A
B
A
B
B
B
So
this
is
my
very
simple
hello,
world
micro
service
systems
and
all
those
service
that
talks
to
a
world
service
and
the
world
term
is
refers
to
the
straight
world
and
the
hello
service
takes
up
string
and
append
prepend
the
string
elbow
okay,
so
so
far,
so
good
right,
everything's
running
ok,
so
that's
just
so.
How
do
we
use
this
thing?
How
do
we
use
this
thing
in
real
life?
B: So the very first thing I'm going to do is get the IP address of one of these Linkerd instances. Okay, in this case, minikube is, I think, only running on one node, so this is mildly silly; here I'm taking an arbitrary one. Let's just pretend I've got a cluster of, say, 50 machines, and I'm picking an arbitrary Linkerd.
B: What I can do now is use curl to talk to this instance and show off some of the service discovery stuff. So let me find a good command here. Okay, so here I am: I'm using curl. Let's unpack this for a second. I'm invoking curl, and I'm telling it to talk to http://hello.
B: "hello" is the name of the hello service. The request is going to go through Linkerd's routing rules, it's going to end up in a service discovery lookup against the Kubernetes API, and it's going to find a hello instance and proxy the request to it; it'll handle any retries, and it's going to return the result back to you. But what you'll notice is that I'm using this http_proxy environment variable here.
B: This is a very easy way of saying, "hey, curl" (and actually this works with almost every C program, Ruby, Python, Go; they all understand these environment variables), "when you make this call, don't do a DNS lookup; talk to this proxy." All right, and when I run this command, what we'll see is... all right, good, that worked: "hello world." Okay, and I'm actually printing the IP addresses of the pods that serviced this. If we run this again, we should get slightly different IP addresses... no? Okay.
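The same environment-variable trick carries over to other languages; for instance, a hypothetical Python equivalent of that curl loop (NODE_IP and port 4140 stand in for whichever Linkerd instance you picked):

```python
# Sketch: point http_proxy at any Linkerd instance and call services by
# their logical names; no DNS lookup happens in the client.
import os
import requests

os.environ["http_proxy"] = "http://NODE_IP:4140"  # placeholder address

for _ in range(3):
    # requests honors http_proxy by default, so this goes via Linkerd.
    print(requests.get("http://hello/").text)
```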
B: There we go, okay. So this is not using DNS at all, and that's important, because DNS is actually horrible in production when things are moving very, very rapidly. If you have experience with large distributed systems that try to use DNS for service discovery, you'll realize you quickly run into a bunch of problems. Okay, so the system works: we're able to access that hello service from anywhere in the service mesh. We can do a similar thing, just to kind of show you, with the world service.
B: So if we just want to talk to world directly, we'll see... okay, that's what the world service responds with. Okay, great! So let's actually pump a little traffic through here, and I'll show off some of the metrics. All right, so here we are: I'm just going to curl hello in a little loop, over and over again, okay, and we'll see how fast my little laptop can run here. Actually, before I do that, let me open up two dashboards.
B: Let me find the right command... this is going to open up the Linkerd dashboard, okay, and while that's opening up, I'll actually bring up the linkerd-viz dashboard too, and we can kind of compare and contrast these two things. Let me run a little traffic through here too... that's taking a while... there we go, okay; my poor laptop is struggling under this. All right, here we go, so let's take a look at the Linkerd dashboard first of all.
B: Okay, so here I am, looking at the dashboard of an individual Linkerd instance, and I can see a couple of things going on right off the bat. First of all, I have some traffic going through the system; that's this yellow line here. There actually is a little purple line that's hidden behind it; if the random numbers play out, we'll see it in a second. Okay, and I've got these two clients here: one is called hello, and one is called world-v1.
B: So this is Linkerd's view of the world, saying: hey, I looked these two things up in service discovery, and I'm proxying requests to them. You can see I'm getting maybe one request per second through here, and, okay, there's the purple line, so you can see the traffic is going to both services. I've got latency profiles here (they don't look too good on this laptop), and I've got success rates here. All right, both of these are responding at 100%, so that's good; we write good code.
B: We can see there's a retry budget that we can use, and we're only talking to one world endpoint right now. Okay, so far, so good; everything seems to be happy. Let's actually do something with the other terminal. What I'm going to do is, first, let me get the right IP address: I'm going to send a command to one of the instances of the world service. I'm going to tell this world service to start failing, like 20% of the time, once minikube gets itself back together.
B: All right, so, okay, here I am: I'm sending a command to one of those instances of the world service to say, I want you to fail. Okay, so what we should see, if the demo gods are with us, is this line here, this client success line, should start dropping; we should see some dips... there we go, as one of those instances starts responding with failures. Okay, I actually can't see that line because Mark's video is in the way... okay, there we go. Yeah, minikube came through, great, okay.
B: So what's interesting now is that even though traffic is going through here, and one of these instances is failing, we're actually getting a 100% success rate on the outgoing service side, all right. That's because Linkerd in this case is configured with a retry budget, and we've also told it: hey, by the way, these are HTTP requests, and they conform to standard HTTP verb semantics. And Linkerd knows: okay, GETs are retryable; POSTs are not; PUTs may be retryable; DELETEs are not. Okay.
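A toy classifier in that spirit (simplified: Linkerd's real response classification is configurable; this just encodes the verb rule described above):

```python
# Sketch: decide retryability from HTTP verb semantics. Replaying a GET
# is safe; replaying a POST may repeat side effects, so it is not.
IDEMPOTENT_VERBS = {"GET", "HEAD", "OPTIONS"}

def is_retryable(method: str, status: int) -> bool:
    return status >= 500 and method.upper() in IDEMPOTENT_VERBS

assert is_retryable("GET", 503)
assert not is_retryable("POST", 503)
```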
B: And we just saw that retry budget at work. So even though we have an instance here that's failing, our outgoing success rate is still 100%, as Linkerd is able to retry around the failures. And we can play a similar game (I'm not going to do it now, since we only have a couple of minutes left) with latency, where one instance starts slowing down.
B: There, we should see Linkerd gracefully shift traffic to the other instances. So these are kind of some of the core reliability primitives. The application is quite dumb: it's literally saying, "I want to talk to http://world," just like we were doing.
B: We didn't actually know anything about these services beforehand, right? These services are written in whatever language and whatever framework they're written in. But because we have the Linkerd service mesh everywhere, we get consistent and uniform metrics across everything, and we get the measures that we really care about, which are things like success rate, request volume, and latencies. Okay, so.
A: No accolades coming through yet; that's a gentle nudge to the people listening. We do have one big question, though, and it comes in pretty much essay format, so I'm going to try to read it correctly. Okay: one hundred and twenty seconds to answer it.

B: All right, I'm ready.

A: Again from Aaron. Aaron says: trying to be more clear about my earlier question. Say I have 500 hosts running service A; service A passes requests to its Linkerd, which then passes them to instances of service B running on different hosts.
B: So this is a question of how many concurrent service instances an individual Linkerd could keep track of without slowing down. Right, that's a good question. I could make something up, but that's really a question for Oliver, my co-founder, who writes the code. My guess would be that it's quite large; that's usually not the thing that starts failing.
A: We've got Johnson saying "great webinar, thanks," "fantastic, great content, looking forward to getting more familiar with Linkerd," and all sorts of other things coming through. I want to thank all of you for attending and coming up with all these interesting questions, and I want to thank you, William, for a very entertaining webinar. Thank you very much for it.

B: It was my pleasure.

A: Last but not least, I want to remind everyone that Oliver will be at CloudNativeCon next week with many sessions.