From YouTube: Deep Dive: Linkerd - Oliver Gould, Buoyant
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Deep Dive: Linkerd - Oliver Gould, Buoyant
In this session, Oliver Gould will focus on lessons learned, how-tos, and what the future of Linkerd holds.
https://sched.co/MPju
That's my dog. I chose this handle in my twenties and I'm stuck with it for the rest of my life, so you can find me under it basically anywhere.

So, a brief history of Linkerd before I get too far into this. It's going to be a pretty loose talk; I'm mostly here to answer questions for you, but I have some things I want to get through. I don't want to take too much time, and I want to keep it flexible for you.

Anyway, going back a couple of years now: we started building Linkerd in 2015 and released it in 2016, very early. We spent about a year working with a bunch of you, some of you probably in the room, to get it into production in some places. That was the big JVM-based Linkerd, made for Mesos, Kubernetes, any orchestrator you wanted: a very flexible tool. A year later we donated it to the CNCF, which was a great decision.
The CNCF introduced me to all of you, for instance. Fast-forward a couple of years: Kubernetes took off, and that adoption really pushed us into a sidecar model we didn't anticipate. Linkerd 1 kind of assumed you'd just run one of these per host. Before that I had been working at Twitter, where we ran Aurora, and in that kind of production a host-based daemon was fine. But going into Kubernetes, we looked at the new pod model, this beautiful pod model with sidecars I can put on things, and putting a JVM sidecar on every ten-megabyte Go process is not something most of you want to do, I assume, or at least that was the feedback we got. So we spent a lot of time retooling, and at KubeCon in 2017 we launched something called Conduit. Did anyone try Conduit? Cool, there are early adopters out there. That was an attempt to do it all over again: if we were going to start over with all the lessons we learned from Linkerd 1, what would we do now?
Conduit was focused on being lightweight, simple, and focused on the essentials, without all of the configuration Linkerd 1 had: very minimal, basically no configuration. We spent nine months or so, no, more than that, a year and a half, no, about a year (I can't do math in my head) getting users of Conduit, getting feedback, and figuring out whether it was a total waste of time to invent a whole new tech stack to get there, or whether maybe it was a good thing. We got a bunch of feedback about Conduit very early. It wasn't really used widely at all, but we felt confident enough to bring it into the Linkerd project and promote it as Linkerd 2, because we think it provides the same value proposition, but much better, and it's very coupled to Kubernetes. We really don't support non-Kubernetes environments in Linkerd 2, and I love that position.
We've got a lot of interest, and these are just the big names you've probably heard of, but more importantly it's running in lots of small shops, helping people solve the getting-started problems. Going into Kubernetes is a daunting experience, and adding another layer of complexity on top of that, another set of API platforms, a complicated service mesh, is not what we want to do. We want to add debuggability, operability, and visibility easily. So it's used way more outside of these large organizations. Now, some ground rules for this talk. Sorry, there have been many lengthy talks this week, so I'm setting some ground rules: I'm happy to talk about what Linkerd does, why it does it, and how it might do it in the future.
Okay, how many of you have written Rust? Good. In a year it's going to be a lot more, maybe two years, but it's on the way. So Linkerd 2 lives in this GitHub repo. This is the Go repo and it's where we have all our issues; it's a good place to get started, get in touch with us, or start browsing around. When you install Linkerd you kind of get this hand-drawn deployment here: we have a linkerd namespace with a bunch of very tiny Go processes that are, basically, Kubernetes controllers.
They talk to the Kubernetes API, and they expose gRPC interfaces to the proxy and to the tooling, the CLI, and so on. We use the API server for authentication and all the things you're supposed to use the API server for. Then we have these proxies, and they have iptables rules around the process, set up either via CNI or an init container, that force all of the TCP traffic (asterisk) through the proxy. All the inbound traffic goes through one port, all the outbound traffic goes through another, and that's the basis for the rest of this talk; if this is confusing, stop me now. Okay. It's actually a little better than this, though: we don't have to use hand-drawn diagrams, because Linkerd itself can tell you how it's deployed. That's the whole point of using something like Linkerd. The Linkerd control plane itself has the sidecar, the Linkerd proxy, injected into all the controller pieces.
This means we get visibility on the control plane by default; it's the snake that eats its tail, basically. So here we see that we have several containers, all doing a little traffic, most of it metrics and health checks at this point, but because we're looking at the dashboard, we can see the queries to Prometheus right here. I can actually do a little better than this if I click on that controller... well, okay, I lost the slide.
Maybe it'll come back later, I don't know. What I want to talk about, then, is the data plane, and this is where I spend most of my time, though I do work on both sides of the equation. If you looked at those GitHub stats before: the linkerd2 repo is about 70% Go, 20% JavaScript, and the rest of it is shell scripts and YAML, probably more YAML.
The linkerd2-proxy is in a separate repo, and you can see that's basically entirely Rust. Over the past year, certainly since we last spoke, we've been factoring pieces out of the proxy into a common set of libraries called Tower. This is part of the Rust ecosystem, and anyone who used Linkerd 1's Finagle stack will find it looks very similar. The goal is to have a bunch of reusable components that we can use to build things like proxies or web apps. We have a gRPC implementation in there, and a whole bunch of retries, timeouts, load balancers, routers, you name it. That's good because it makes the proxy's codebase much smaller, and those Tower libraries are being used to drive a whole bunch of production applications at companies you've heard of that I'm not allowed to talk about.
We're getting benefits by putting the reusable pieces of Linkerd into the community, having other people work there and contribute back; that's been working really well. So if Rust is interesting to you, Tower is a great place to get started. It's still pretty new, but that means there's a lot of opportunity. Oh, and here we go, my old slide came back, sorry about that. What we also get here is that if you click on the controller, we can actually get topology information.
This is actually maybe cooler than it seems. We're looking at deploy stats here, and we have traffic stats, right: we can say this controller calls Prometheus, it calls the tap service, and it's called by the web service. That's done without any instrumentation of the application, and it's done with only Prometheus. The way we do that is that the stats the proxy emits carry a whole bunch of Kubernetes-specific metadata, so we can actually ask: what's the success rate to this deployment?
What are the endpoints you talk to? This all comes from that metadata, meaning we don't need tracing, and this is something I really want to be heavy-handed about. I've been asked many times this week why Linkerd doesn't support tracing, and the reason is that tracing requires that you change your application: if your application doesn't participate in tracing, at least by forwarding headers, tracing is useless, and we don't want to require you to change your application. However, if you do use tracing in your application, Linkerd won't break it.
For injection, we initially started with the CLI tool, the linkerd command. You can run linkerd inject, give it your YAML file, and it will read all the YAML, add a bunch of proxy YAML to it, and spit the YAML back out, and then you kubectl apply it. Recently we've changed that, and in 2.4 this is going to be a lot better: now we have a mutating webhook controller, which is required, and it does all the proxy injection.
So now, when we do an inject, we add a single annotation onto your pods. As those pods get created, they get passed through the webhook, we add the proxy, and they get written back. That means that to upgrade the Linkerd proxies after a control-plane upgrade, you just need to delete pods: as they get recreated, they pick up the new proxy versions and the new configuration. It's a much simpler model, a lot less manual. However, you can also run linkerd inject manually and get the old behavior; we're not breaking anything.
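For illustration, the annotation-driven flow looks roughly like this; the `linkerd.io/inject` annotation is the real knob, while the Deployment around it is a made-up example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled   # the mutating webhook adds the proxy sidecar
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: example/web:latest
```

With the CLI path, `linkerd inject deployment.yml | kubectl apply -f -` produces the same result by rewriting the YAML before it is applied.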
Either way, once you do it you'll get a proxy-init container, which does the iptables setup, and a proxy container, which receives all the inbound and outbound traffic. If you're sensitive to using NET_ADMIN in your pods, we have a CNI plugin as well, so this can all happen before the pod gets scheduled.
Okay, now my favorite part: let's talk about the proxy a bit. In 2.3, which was released about a month and a half ago, we introduced automatic mutual TLS, with no configuration. I see you squinting; I'm sorry for my handwriting. If you can't guess, I was drawing this earlier today; I'm a content person, not a pictures person. The way this works starts when the proxy comes up.
When the proxy starts up, it generates a private key, and it also generates a CSR, because it knows its identity: when the control plane injected it, it said, proxy, here's your identity and here's your trust root, so that we know how to trust other connections. So we generate the private key, generate a CSR with the identity we were given, and send it off to the control plane. Oh, and we also read a service account token from the pod; this is an important detail.
We tie identity to the service accounts on the pods so that we have a way to prove identity to the Kubernetes API. So we read the service account token and the CSR and send them off to the control plane. It then sends the service account token to the Kubernetes API via the TokenReview API and asks: is this pod the identity it claims to be?
If the token validates, the answer comes back: yes, it's the default service account in the foo namespace, and then the control plane can sign the CSR. A certificate comes back to the proxy, valid for about a day, and we rotate it about every 20 hours or so; that's a configurable timeout. So in these pods the keys never leave the pod: they're stored in memory, we fetch certificates dynamically, and we're not using Kubernetes Secrets. This is very different from the initial TLS model.
We started with an experimental version months and months ago; we redid this in 2.3 and it's much better. It can just refresh, and it works pretty well. The identity service can also be replicated. Currently it bundles its own CA, just so you don't have to go provision a CA some way to get started, but going forward we're looking at integrating with cloud providers as backends here.
So if you're on AWS and want to use their CA, you can; if you're on Azure and want to use Key Vault, you can; or you can use a HashiCorp Vault off-cluster. We don't want to own the key-management part of this; we want to integrate with other things there.
As soon as the proxy gets this certificate, we're good to go and can start serving traffic. When a connection comes into the proxy on the inbound router from another pod, we do a few things. Part of Linkerd being a no-configuration system means we have to support arbitrary protocols in the proxy transparently, which is a little bit of a challenge, and we do this in a few ways. First, as soon as we accept the socket, we look at SO_ORIGINAL_DST.
Anyone familiar with this? The way iptables rewrites work: when iptables says, instead of going to this address, you go over there, we can still ask the kernel where the connection was originally going. So when we accept a connection that was redirected to us, we can ask the kernel which port it was originally headed to in the pod, or, on the outbound side, which remote IP and port it was heading to.
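A minimal sketch of that kernel query in Python (the `SO_ORIGINAL_DST` constant and the `sockaddr_in` layout are Linux-specific; the helper here is illustrative, not Linkerd's actual code, which does this in Rust):

```python
import socket
import struct

# Value of SO_ORIGINAL_DST from <linux/netfilter_ipv4.h>; Linux-specific.
SO_ORIGINAL_DST = 80

def parse_sockaddr_in(raw: bytes):
    """Decode the struct sockaddr_in the kernel returns: the port is stored
    in network byte order at offset 2, the IPv4 address at offset 4."""
    port = struct.unpack_from("!H", raw, 2)[0]
    addr = socket.inet_ntoa(raw[4:8])
    return addr, port

def original_dst(conn: socket.socket):
    """Ask the kernel where an iptables-redirected connection was originally
    headed (requires a real redirected socket on Linux)."""
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)
```

The getsockopt call only succeeds on a connection that iptables actually rewrote, which is exactly the situation the proxy-init container sets up.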
After that we do TLS detection. That little skull on the slide is because I drew this six months ago, before we replaced that piece. We already have the certificate and we know our identity, so we say: okay, let me try to read a TLS ClientHello message. If I can read a ClientHello, I check its SNI, the server name indication, and based on the SNI, we decide where this connection is headed.
Is this TLS connection coming to me, to an identity in the mesh, or is it to foo.example.com behind an ingress? We don't want to terminate TLS for the ingress; we want to let the ingress terminate TLS. So if it's not to us, we just forward it along like pure TCP, and we can't see into it. If the SNI is to us, we decrypt it, and now it's cleartext and we can start doing more. Then we record some TCP metrics.
You know, the number of connections, the bytes, all that stuff. Then we do protocol detection. This is pretty simple; it's just like the file command on your UNIX laptop. It looks at the first few bytes of the client message and asks: does it start with "HTTP/"? Then it's an HTTP/1 message. Or maybe it starts with "GET " or some other verb followed by a space.
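As a rough illustration, first-bytes sniffing can look like the following. This is a toy version of the idea, not the proxy's actual logic:

```python
# Guess a protocol from the first bytes of a client's stream, in the
# spirit of the Unix `file` command. Illustrative only.
HTTP1_METHODS = (b"GET ", b"POST ", b"PUT ", b"DELETE ", b"HEAD ", b"OPTIONS ", b"PATCH ")
H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"  # fixed HTTP/2 connection preface

def detect_protocol(first_bytes: bytes) -> str:
    if first_bytes.startswith(H2_PREFACE):
        return "http/2"
    if first_bytes.startswith(b"HTTP/") or any(
        first_bytes.startswith(m) for m in HTTP1_METHODS
    ):
        return "http/1"
    # Anything else is treated as an opaque stream and proxied as plain TCP.
    return "tcp"
```

The "opaque TCP" fallback is what makes the no-configuration promise work: anything the proxy can't identify still flows through untouched.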
Some protocols can't be detected this way, because the server speaks first; the two best examples I know of are MySQL and SMTP. If you have a system that uses those protocols, what we require is that you set a skip-ports flag. You set it as an annotation on your pod: for this pod, skip ports 3306 and 25 (we skip those two by default, by the way). That means none of those connections go through the proxy at all; they just pass straight through, and that way we don't break things like MySQL.
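The skip-ports escape hatch he describes is a pod annotation, something along these lines (annotation names as I recall them from the 2.x docs; 3306 and 25 are the MySQL and SMTP ports mentioned above):

```yaml
# Pod template annotations: bypass the proxy entirely for these ports.
metadata:
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/skip-outbound-ports: "3306,25"
    config.linkerd.io/skip-inbound-ports: "3306"
```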
I think eventually we'll want some way for the proxy to support those protocols, but our mission is to make things work without configuration, not to satisfy every corner case you might have. Oh, what a very polished thought I have there. The other part of this is when we're going to send a request outward, where I'm a client of another service.
What we do there: we take an HTTP/1 request or an HTTP/2 request and treat them as the same thing in the proxy. It's all just HTTP, which means all of our middleware is generic over those details; we only really deal with them at the edges. On the client side, the main routing logic is host-header or authority-header based, and we don't want to require a lot of configuration for this.
If you've already set up Kubernetes services, we can just work in that world, or if you're talking to, you know, twitter.com, that'll just work. The idea is that your application has already resolved the hostname, so we know it's a real hostname, or we hope so; if not, it's not going to totally break, but it might be confusing. So first we look at the authority or the Host header, or we may actually look at the l5d-dst-override header.
Then we have a little bit of a challenge: in the Kubernetes world you can put crazy DNS policies in your pods, all sorts of awful logic that I don't know about and definitely don't want Linkerd to have to know about. So what we do is simply re-resolve the name that you put in the host header.
Then we do something called service profiles, which are basically routes, HTTP routes: matching requests by verb and a path regex, or something like that. That way we can get per-path stats, or per-gRPC-endpoint stats. You can take a gRPC protobuf spec or an OpenAPI spec and basically just give it to Linkerd; it will read it and turn it into a service profile, so you can configure Linkerd based on that.
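A minimal ServiceProfile might look like this. This is a sketch: the service and route names are invented, and the CRD schema may have evolved, so check the generated output of `linkerd profile` for the current shape:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Named after the FQDN of the service it describes.
  name: books.default.svc.cluster.local
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true    # enables per-route retries
  - name: POST /books
    condition:
      method: POST
      pathRegex: /books
    timeout: 300ms       # per-route timeout
```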
You can also set retries and timeouts on routes, and we'll be adding some other configuration there soon as well; if you'd like to see anything else you can hang off of a route, PRs are happily accepted, with some review. The point is that we get metrics: we know per-route success and failure characteristics and retryability, and that's all in there. Once we've done all that, we go to service discovery and the load balancer.
It's a really, overly complicated process, but we've got it down by now. With it we can read all the pod metadata, so we can take labels off the pods and put them on the responses we hand to the proxies, and that's how we hydrate the metrics we were looking at earlier into topologies. This all comes from what we can read out of the discovery system.
This is also where identity kicks in. When the proxy injector adds the proxy to a pod, it knows whether identity is enabled on that pod; by default we enable it, and you can forcefully disable it. If identity is supported in a pod for mesh communication, we can select those pods by label, and we know we can send them TLS requests. If pods aren't injected with that configuration, we don't try to do TLS to them.
We just treat those as plaintext. This all comes back from the discovery system. An important detail here is that you don't have to install Linkerd on everything. You could install it on one service that talks to other services, or on a subset of services; the meshed services get stats, telemetry, identity, all that stuff, and all the other ones continue to work happily. You'll just be a little blind and in cleartext there. Actually, tomorrow we're going to merge a branch that removes the DNS fallback we had in 2.3.
We used to do another DNS lookup if we couldn't resolve the name through service discovery for some reason. The idea was that we might want to load-balance on names we didn't know about, but it actually makes ingress much, much harder, so we're just going to tear that feature out and drop it on the floor. It's great; I love deleting code.
It's like the best thing in my job. The load balancer itself is a P2C EWMA load balancer, which is the same thing we used by default, I think, in Linkerd 1; Alex, am I right on that? Yes, I am. P2C with EWMA means we're watching the latency to each endpoint we're talking to, keeping a decaying moving average for each, and P2C, power of two choices, means we pick two at random. There's this great paper on it.
The paper says, basically: maintaining a big heap is really costly; instead of maintaining a heap of a thousand nodes, it's statistically equivalent to pick two at random, compare their weights, and use the cheaper one. So that's what we do, and it works great. It's a constant-time load-balancing algorithm, and it's latency-aware, and that's awesome. We really recommend using it; a lot of people deploy Linkerd just to get basic latency-aware load balancing.
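A toy sketch of the pick-two-of-N idea (simplified; the real balancer in linkerd2-proxy is Rust and also accounts for in-flight load, not just latency):

```python
import random

class Endpoint:
    """One backend with an exponentially weighted moving average of latency."""
    def __init__(self, name, decay=0.9):
        self.name = name
        self.decay = decay
        self.ewma_ms = 0.0

    def observe(self, latency_ms):
        # Blend the new sample into the decaying average.
        self.ewma_ms = self.decay * self.ewma_ms + (1 - self.decay) * latency_ms

def pick(endpoints, rng=random):
    # Power of two choices: sample two distinct endpoints at random and take
    # the one with the lower latency estimate. O(1) per pick, no heap over
    # the full endpoint set.
    a, b = rng.sample(endpoints, 2)
    return a if a.ewma_ms <= b.ewma_ms else b
```

Because each pick touches only two endpoints, the cost doesn't grow with the size of the endpoint set, which is the property the paper he mentions proves is statistically good enough.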
[Answering an audience question about lookup cost:] We start a gRPC streaming response from the destination service, so we're constantly getting updates as pods change, and so on. We only pay that latency cost on the first request; once it's cached, the load balancer is set and ready to roll, and further updates coming in don't affect request latency. Good question.
Audience: So if you remove Linkerd from another pod, does Linkerd maintain its identity configuration? No, no: it all gets pushed back down to the proxies. When you change any pod config, updates get pushed to all the proxies that are watching that pod's state, or rather the service that includes that pod. Good questions; there was another one? Okay, we'll try to make some time at the end. The endpoint stack, then: I had papered over some old TLS stuff that was on the slide.
For each endpoint in the load balancer, the endpoint stack is a stack of layers and services, basically a service hierarchy, much like Finagle. First of all we ask: do I do TLS on this, yes or no? We use that TLS configuration to inform the stats and so on; we want to know the identity. Then we record lots of per-endpoint metrics here.
Higher in the stack we've set lots of context information on the requests, so we can read it in the per-endpoint stack and show you, say, the latency distribution for each pod. We have very high-granularity data in the proxy, and it's scraped by Prometheus every ten seconds. We ship Linkerd with its own Prometheus that's tuned to be very small, with only six hours of retention. The point is that we want this to be a cheap escape hatch that you can use; it's not supposed to be your production Prometheus instance.
Then we have this cool feature called tap. Tap is unique to Linkerd as far as I know, and it basically allows you to pull requests out of the proxies at runtime. Instead of logging every request, every payload, and every header that goes through the system, we want to let you ask, basically on demand.
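On the command line that looks something like the following; this is an illustrative invocation (resource and namespace names invented), so check `linkerd tap --help` for the actual flags:

```
linkerd tap deploy/web --namespace emojivoto --to deploy/voting --method GET
```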
Like tracing on demand: go into this namespace and show me the requests that are GETs to this service, and you get a sampling, from all the pods in that namespace, of their requests to that service. Right now this is really only request and response metadata: was it successful, what path was it, and so on. We're in the process of adding full headers to that, and ultimately full payloads, which will be awesome, in my opinion. The work that's blocking that is that we're in the process of applying RBAC to all of this.
The idea that we'd ship all your payloads to anyone who connects to a proxy on this port? No. So we're in the process of making sure the tap server itself enforces RBAC permissions: if you have access to read those pods, you'll be able to tap them, and if not, you won't. After that we'll start shipping full payloads and headers and all that good stuff. That's in flight now, and it's definitely an area where folks could contribute.
If you care to. Another thing we do in Linkerd 2, because you have a proxy on both sides, is make cooperative decisions. We do this for identity, and we can also do it for protocols: if you're using HTTP/1.0, or HTTP/1.1, or some terrible thing in your application, we will do HTTP/2 between the proxies. That means every proxy has at most one connection with every other proxy.
The advantage of doing that, especially with TLS identity, is that we only do one TLS handshake per proxy pair: we multiplex all of your HTTP traffic through these h2 tunnels, which is really nice, and if you're using h2 in your app it just layers flawlessly. So on the way out we ask: is this other endpoint a Linkerd 2 proxy? If so, we can upgrade to h2.
You can also disable this if you think it's sketchy, but it tends to be pretty good. Then we dispatch on either an h1 or h2 connection; if we can't upgrade, we just do normal HTTP/1, like we should. Okay, just a little bit of forward-looking stuff; I have about five minutes left or so, so let's wrap this up.
You've all heard about SMI? Anyone not heard of it? One, okay. Earlier this week Microsoft, HashiCorp with Consul, and us at Buoyant all partnered to launch what we call the Service Mesh Interface. It's a mouthful. We're starting there with what we think are the three core APIs that we expect people to integrate with. The idea is that we want folks building tools on top of service meshes to have good standard interfaces for the core functionality, that being ACLs, telemetry, and traffic management.
On telemetry: we've got great metrics in our Prometheus, and if you're a Linkerd user you can query Prometheus however you want. What we've done is make that a generic metrics interface, much like the metrics API or the custom metrics API: it's an API extension that reads from our Prometheus. So any tool can query the same endpoint if you add it, and whether you're using Istio or Consul Connect or Linkerd, you get basically the same metrics, which will really empower lots of dashboards and things like Flagger, etc.
The other part of that Flagger story, for instance, is traffic management, traffic split in particular. That's where we do weighted canaries between services, or traffic shifting between services: you have an apex service and downstream, or leaf, services, and you can weight your traffic between those services. We have a branch for that that's not quite ready for master, but once I catch up on sleep we're going to come back to it and get it ready, probably towards the end of June.
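The SMI TrafficSplit resource backing that looks roughly like this; the service names are invented, and since the spec was brand new at the time, treat the exact fields as a sketch and check the smi-spec repo for the current schema:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
spec:
  service: web        # the apex service clients address
  backends:
  - service: web-v1   # current version keeps most traffic
    weight: 900m
  - service: web-v2   # canary gets the rest
    weight: 100m
```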
Because I need a lot of sleep, all right. The other thing we're really keen on right now, as we've talked to people starting with Linkerd, is this: we realized that Linkerd makes many other tools better. When you're using, for instance, a tracing system plus our telemetry, you can do some really powerful things. What we want to do is not pull all of that logic into the Linkerd project as a core dependency, but create more of an add-on system.
For instance, right now we bundle a Grafana dashboard, and it's a nice thing to have by default, but it's also kind of a big dependency for the quick getting-started path. So we'd like that to be an add-on, where we just have the dashboard pop up if it's present and not be there if it's not. Okay, finally: I hope you've caught my message here. I hope some of you leave here wanting to get involved, at least a little bit, maybe by picking off some getting-started issues.
On our GitHub we have a bunch of "help wanted" and "getting started" issues, and it's a really growing and healthy community, so please do get involved. You can always join our Slack; lots of people ask questions there, especially this week, and we love answering questions, even from people who don't work on Linkerd, which is my favorite thing. And yeah, we're all on Twitter, etc. With that, I'm going to open the floor to questions. Anybody got any? All right, let's see how this works.
Audience: Hi, I have two questions for you. First: is there mutual TLS for ingress? I think you don't support it currently; do you have something like a guideline for what we should do instead? And the second one: is there something like ServiceEntry from Istio, a policy that says, from these pods I can call outside the cluster only to these hosts? If you are supporting it, okay.
Good questions. So the first one was ingress mutual TLS, and maybe ingress in general is the better thing to address there: there are many very good ingresses out there that are full-featured, with lots of features.
The next question was kind of about egress policies, right; that's a really good question. One of the other topics I didn't highlight is that we're starting to think through the multi-cluster story a little more seriously. That's obviously a big theme of this conference, and it's where we get into more egress- and ingress-type things, though I don't want to go deep down that rabbit hole.
My answer for that would be: please open an issue. It's not something we've spent a lot of time on, because we haven't had many people asking for it, and the way Linkerd works is that when our community is loud and vocal and clear, we listen; when people murmur in the halls, we don't hear it and we don't work on it. So help us out. Okay, more questions? Can we pass this back to somebody with their hand raised, or Kevin?
Audience: So, do you plan to support user authentication? Right, yep. We've tended to see that as an ingress problem: real users are generally not hitting things deep in the call stack, so with the people who've talked about this, we've tried to make that an ingress concern, and then we have identity from there on.
Once you've done that validation you're basically good, if you trust that mutual TLS is working everywhere else. That said, we've also had folks ask for that feature; again, it's not high on our list because it doesn't really fit the core use case we see, but we're open to it. So I think an issue would be a great place to start: if more than just you want it, maybe we'll get some interest, and maybe we'll even get some PRs on that. Sure.
[On unsupported protocols:] However, it's possible and conceivable that someone could write a Cassandra codec, submit a PR for doing protocol discovery on it, and then we're off the ground. This is a thing where, as we get codecs into those Tower libraries I was talking about, if we get features there, we'll be able to pull them into the linkerd2-proxy as more first-class citizens. So there.
One thing I should note, in full disclosure, while you hand over the mic: we don't yet TLS the Prometheus scrapes either, though that's certainly high on our to-do list. All right, have we got time for one more? Any more? No more? All right, well, thank you very much. We have a booth in the vendor hall where there are plenty of people; we're all out of hats, I'm sorry, but other than that, I really appreciate you all.