From YouTube: Deep Dive: Linkerd - Oliver Gould, Buoyant
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Deep Dive: Linkerd - Oliver Gould, Buoyant
In this session, Oliver Gould will focus on lessons learned, how-tos, and what the future of Linkerd holds.
https://sched.co/UagO
How many of you aren't, but want to be? Okay — most of you are using it these days. That's amazing! Last year, when we did this, it was like half the hands in the room, so that's really impressive. How many of you like writing Go? How many of you have written Rust before? All right — that number is gonna be a lot higher next year, I promise you.

So, my name's Oliver. I'm the creator of Linkerd. I do a lot of work on the proxy side, but I work kind of across the project, and I do a deep dive at every KubeCon that I go to. Usually that's kind of a technical deep dive where I have lots of hand-drawn slides and I kind of ramble over those, and what I'm gonna do is try a slightly different one.
This time it's either gonna be really quick — we'll be done early and have lots of time for questions — or I'll be running way over; we'll find out soon. But I want to talk about why. Why do I work on Linkerd? Why do I spend all of my time working on this when I could be doing other things — well, besides hanging out with my dogs, but other than that? So why does Linkerd exist?
I think any project should have a good answer for this, but this answer is pretty personal for me. To talk about why Linkerd exists, we have to talk about why I was in a position to create it and work on it, and why we're gonna continue working on it for quite some time. To do that, I have to take you back about a decade. This is not the start of my career, but it's where the story starts: I took a job at Twitter.
My manager, the head of ops, sat me down and said: we need to get all of the host data into a time series database. This other group is working on a time series database; we've got to make sure all of the host and system data gets in there so that we can do alerting. Then some other people got wind of this project and said, well, it'd be really cool if we could add application metrics too — we have all these Ruby processes, Unicorns, whatever was running Twitter at the time, that we also want to get into that system. And we'd actually need to provide alerting, because we can't use Nagios anymore — it's not gonna work with the new system — so we really need to go build our own huge alerting system, in Java for some reason. And then we also need customizable dashboards: every team is gonna have a different set of metrics they need, and Ganglia is just really not usable for anyone.
It looked something like this — and since I've left the team I'm sure it's been improved quite a bit — but we had a collector that could enumerate every host and go get host metadata or host metrics from every host. We could also talk to ZooKeeper server sets, discover things, and go collect data from there, and then we'd write all of that into a time series database called Cuckoo.
Thanks, everybody. Then what we want is a kind of nice, queryable interface on that, so I can run ad-hoc queries when I'm in an incident and need to diagnose things, so that we can build that dashboard system, and so that our alerting system has someplace to plug into and actually get data. I was on that team for about three years or so, and as I was leaving I wanted to go work on some other projects at the company that I thought would help the observability system.
But there were a few big lessons. One: it would have been really nice if there were open source tools at the time. OpenTSDB was just getting started while we were developing this, and it was a little too early to bet on — and I don't necessarily think it would have been a good bet — but now we have great reusable tools.
The other thing I learned is that configuration is the root of all evil. Absolutely, full stop. That collector system we had was initially this Python thing I wrote in Twisted, and you had to configure all of the targets and sources for it to go talk to. Every time a new service came online, they'd file a ticket with me and I would go edit a Python file, and I'd have to, like, load-balance all the configs to make sure services were distributed. It was a lot of awful manual work for folks.
As far as I know, it's still not possible at Twitter to go from a Zipkin dashboard and link to an alert, for instance. That would be wonderful, but we need common nouns that we can use as references — URIs, basically — to reference things across systems. And the really surprising thing is that, even though I was in so far over my head, a skilled team is able to go solve that problem and just work through those things. Your organization's investment in that is gonna be much more than a skilled team working on a problem for a year or two, and if you can't get folks to agree on the operational data model, for instance, or you can't get people into high-leverage places like a deploy system, it's going to be really hard to instrument these things and productionize it.
This will come back — these lessons are relevant. So after that I played ping pong for about a year — if you worked at Twitter, you'd know that was true — and then I went to work on this thing called the traffic team, and we were given a pretty broad remit there. As I mentioned before, we had this ZooKeeper service discovery cluster. Anyone here done ZooKeeper service discovery? Anyone been on call for a ZooKeeper cluster?
It's tough — I feel you. So we took that on for some reason, and we were very close to the Finagle team. Marius Eriksen, who was the creator of Finagle, was on this team with us, and we sat right next to the core library team building Finagle. Our job was really to deal with service-discovery-related incidents and make sure we were fixing the core infrastructure in Finagle. Finagle is a Scala functional networking library on the JVM that basically every Twitter service is written in.
So if you just go fix things there, you don't really have to worry about getting people to upgrade — you just tell them to deploy again and it'll be fine. And really, the feature we were working on was staging and, more generally, making request routing flexible in a complex topology. A very simplified version of Twitter might look something like this: you have a big front end here.
That front end is doing all sorts of composition; we have data services behind it that own different parts of the domain, and then we might have something like a user service — everyone has a user service somewhere. Let's say this is three calls down in the system, and I want to do a new version of the user service; I want to stage out a version. Before, what we'd do is probably pick one random host, upgrade it with the new code, and hope that doesn't hurt anyone — and hope people even remember that it happened.
We couldn't really measure those differences, so there was kind of canarying, or we had very complex staging infrastructures where you could basically reserve a whole stack of this and replace part of it for your use case. And, you know, there can only be a finite number of those, because this is a lot of resources, so people would basically be fighting over staging resources they could claim to test their code. So what we wanted to do was basically make that a header.
You can add a header in your browser, and it says: instead of talking to the user service, talk to the user-v2 service. Anywhere that request goes, the context goes with it, and we can apply that logic — and again, because this is all in Finagle, every place we own we can make sure these contexts get wired through properly, and as long as we don't hit any evil non-Finagle services, we'll be great.
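To make that concrete, here is a minimal sketch of the idea in Rust — not Finagle's or Linkerd's actual routing machinery — assuming a toy name-to-address table and an override rule carried in a request header; the `Router` type and the `user-service => user-service-v2` rule format are made up for illustration:

```rust
// Sketch of per-request routing overrides carried in a header.
use std::collections::HashMap;

struct Router {
    routes: HashMap<String, String>,
}

impl Router {
    /// Resolve a logical name, honoring an override rule from a request
    /// header (e.g. "user-service => user-service-v2") if one is present.
    fn resolve(&self, name: &str, override_header: Option<&str>) -> String {
        if let Some(rule) = override_header {
            if let Some((from, to)) = rule.split_once("=>") {
                if from.trim() == name {
                    return to.trim().to_string();
                }
            }
        }
        self.routes
            .get(name)
            .cloned()
            .unwrap_or_else(|| name.to_string())
    }
}

fn main() {
    let mut routes = HashMap::new();
    routes.insert("user-service".to_string(), "user-service.prod:8080".to_string());
    let router = Router { routes };

    // Normal traffic resolves to the production user service.
    assert_eq!(router.resolve("user-service", None), "user-service.prod:8080");

    // A request carrying the override header is steered to the staged version;
    // propagating the header downstream is what makes this work three calls deep.
    assert_eq!(
        router.resolve("user-service", Some("user-service => user-service-v2")),
        "user-service-v2"
    );
    println!("override applied");
}
```

The important property is that the override travels with the request context, so the same rule applies no matter how deep in the call graph the staged service sits.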
And so the big lessons from my time on the traffic team were that microservices are all about communication. In fact, the name Linkerd comes from this concept. We learned on this team that it really helps to think about the system like a linker and a loader: a loader being something that schedules pods — schedules and creates them — and a linker being something that names these targets, names these other "libraries". In a microservice architecture your libraries are services that are running — they're not necessarily code units — and you link at the network layer. So communication is the fundamental thing that we have to solve for, or allow for, and to do that we need diagnostics out the nose. You can no longer go to logs and try to correlate them across various systems; you can no longer attach a debugger to one thing, inspect it, and actually find the clue.
The highest-leverage way we found to do that was by putting things in Finagle — a thing that we knew was in every request path and every service at the company. We could go make these changes there. We launched this whole new staging system without really having to get any of these services to buy into it or convince them about it; we could just deploy it, and even if they had a downstream service that didn't know about staging, we could implement all of this. Having that kind of fundamental infrastructure layer of control is really important for rolling out these kinds of capabilities across a company. And so, as you do when you work at a place for a certain amount of time, you get tired and want to go somewhere else, and I had this friend of mine — he'd been driving me to work for several years — who had quit Twitter, and he was like, we're gonna start a company. I thought it was crazy, and he's like, no, no — look at what's happening with Docker, right?
I hadn't used Docker — I'd been in this Twitter hole for a while — but I knew about Mesos, and I knew about what I would learn were called microservices; they were called SOA, or just "services", at Twitter. But it was clear to me that there was an opportunity, and that the types of tools I was working on at Twitter I could go work on as my full-time job, in open source, which has been a huge part of my life since I've been in college, basically. And so that's what we set out to do: we basically took Twitter's code, Twitter Finagle — this great thing that we know has all this operational value, has all this power, is this uniform layer of control and visibility — and asked: how do we make that something that people who don't want to write Scala on the JVM can benefit from?
Can we make that a component or a product that we can drop in there? And so Linkerd 1 — which we created and started working on in 2015, and released in 2016 — was the first version of that. It was super configurable. Coming from that framework we had within the JVM in Finagle, we had really good abstractions for service discovery, so nothing was ZooKeeper-specific: we could talk about marrying ZooKeeper and Kubernetes and Consul and etcd, and building topologies that incorporated
all of these things. Most of the places Linkerd 1 was deployed, it was to satisfy these complicated multi-scheduler flexibility cases. And of course we had all of this: we took all of that routing logic I was just talking about and dropped it into Linkerd 1 by default, which meant you had to go learn a bunch of complex configuration around service naming and fallbacks, etc., to get any of this working. It's a nice system, and there were people out there who really loved it.
They've really done some very sophisticated things that are right over my head, honestly, but it's a lot to get started with, and if this is going to be useful, you can't require a course on service mesh to get started. And Linkerd 1 had this deployment model — again, this is when Kubernetes was floating out in the ether but was not 1.0 yet; there was Swarm and Nomad and Mesos and Marathon, just a big, messy container-orchestration ecosystem — and our model, coming from the Mesos world of Twitter, was: well,
we can have one of these on every host. It'll handle connection multiplexing and all the hard things at the host level. It's the JVM, so we can only really get it down to 150 megs or so if we really squeeze and pray, and, you know, 150 megs per host is just about okay if you're not on a micro host or anything really small — but in a pod model, running one of these per pod becomes wild.
So, some lessons there. Again, configuration is the root of all evil. Has anyone here written a dtab? Okay, a few of you. Anyone like writing dtabs? Okay, Alex isn't here — I know one person who likes writing dtabs — but they're a wild, dark art. The JVM is also kind of the root of all evil. I would have been offended by that a couple of years ago; it's a really nice system for building lots of enterprise applications, and at Twitter especially, everything is built on the JVM.
It works, it's great, but when you're at this part of the infrastructure, in the data path, it's just really not suitable from a resource point of view. With Linkerd, we knew this microservice thing was happening, but no one was really talking about it quite yet. Linkerd at first was really positioned as, like, oh, it's gonna replace an F5 load balancer — and all sorts of weird things you could use it for that weren't really our intent — and over time
we really saw that everyone who was picking up Linkerd seriously was doing so because they had microservice problems. The other thing we learned is that Kubernetes is king. Kubernetes is the one thing we can all agree on — or I can proclaim it. It was really obvious that the Kubernetes model, the pod model, was so much more usable than what was out there otherwise.
Co-scheduling processes — like a proxy with an application — is an obvious fallout of the pod model that just works well; doing that in Mesos and Aurora at the time was quite cumbersome. So, focusing on that as a security model, and being able to have per-pod security guarantees or privacy or isolation, we knew that Kubernetes was going to win, and so sometime in 2016 we started prototyping new proxies.
We wrote linkerd-tcp, which was a first version of Linkerd 2 in a way, and then — I think it was KubeCon 2017 in Austin — we announced something called Conduit. Conduit was our experimental version of what became Linkerd 2, and it's a Kubernetes-native service mesh. So we've ditched — well, I don't want to say we ditched support for everything else, but we did ditch support for everything else. We don't support dtabs anymore.
We don't have a flexible, pluggable discovery system; we're betting on Kubernetes primitives through and through. The first thing you get — the reason to install Linkerd — is out-of-the-box traffic observability. Kubernetes does a great job of showing you the state, the metrics, various things about the resources as they're running: what pods are running on which nodes, how much memory they're using, how many CPU cycles they're using, etc.
We also want to provide out-of-the-box mTLS identity. Before we implemented this, we had lots of requests along the lines of "I want TLS", and no one could really articulate what that TLS was: some people wanted to do ingress TLS, some people wanted to do egress TLS, and after having a bunch of these conversations we realized that pod-to-pod communication is what's squarely within our wheelhouse and what we can do automatically, without configuration — again, I hate configuration.
Additionally, it's not just these baseline security and visibility concerns; there's a whole bunch we can do in the reliability space. One of the obvious things people were picking up Linkerd 1 for was, surprisingly, gRPC load balancing — that kind of surprised me at the time — but most load balancers, like kube-proxy's load balancer for instance, are just connection-level load balancers. So you get one connection here,
one connection there, another connection there, and then all the requests in a connection are bound to it. With HTTP load balancing we can actually look at the requests: each request can be dispatched to a different host on that connection. We look at the latency of the responses, and that informs our endpoint selection, and so we can really substantially — I have another whole talk on how we can improve success rate just by doing load balancing — it's a really important tool in the toolkit.
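As a rough illustration of why request-level balancing helps, here is a small Rust sketch of latency-aware endpoint selection, loosely in the spirit of the EWMA balancing the proxy does; the scoring formula and decay constant here are simplified assumptions, not the actual algorithm:

```rust
// Sketch: pick an endpoint per request using observed latency and load.
struct Endpoint {
    addr: &'static str,
    ewma_latency_ms: f64, // exponentially weighted moving average of latency
    inflight: usize,      // requests currently outstanding
}

impl Endpoint {
    /// Score combines observed latency and current load; lower is better.
    fn score(&self) -> f64 {
        self.ewma_latency_ms * (self.inflight as f64 + 1.0)
    }

    /// Fold a newly observed response latency into the moving average.
    fn observe(&mut self, latency_ms: f64) {
        const ALPHA: f64 = 0.2;
        self.ewma_latency_ms = ALPHA * latency_ms + (1.0 - ALPHA) * self.ewma_latency_ms;
    }
}

/// Pick the best-scoring endpoint for the next request. A connection-level
/// balancer can't do this: it binds every request on a connection to one backend.
fn pick(endpoints: &mut [Endpoint]) -> &mut Endpoint {
    endpoints
        .iter_mut()
        .min_by(|a, b| a.score().partial_cmp(&b.score()).unwrap())
        .expect("at least one endpoint")
}

fn main() {
    let mut endpoints = vec![
        Endpoint { addr: "10.0.0.1:8080", ewma_latency_ms: 20.0, inflight: 0 },
        Endpoint { addr: "10.0.0.2:8080", ewma_latency_ms: 80.0, inflight: 0 },
    ];
    let chosen = pick(&mut endpoints);
    println!("dispatching request to {}", chosen.addr);
    chosen.inflight += 1;
    chosen.observe(25.0); // fold the response latency back into the average
}
```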
It's awesome, I guess, but it's quite a bit more difficult than we thought. We had written our own Kubernetes client in Scala, and — well, Monzo has written some great blog posts about their incidents, let me put it that way. The Kubernetes API is not something you want to write a client for; it's a really hard thing to get right. You're dealing with a distributed system and converging states — it's just very difficult. So we wanted to leverage something like client-go that would hopefully solve a lot of those problems for us.
We knew we needed a native language, and so we went with Rust. And finally I get to use Prometheus and Grafana: we bundle a small Prometheus instance with some default Grafana dashboards, so you just get some basic stats out of the box without having to configure anything. You can of course configure another Prometheus to scrape all this data, and we're working on making the Prometheus part pluggable, so you can use your own directly and not use ours.
So that's Prometheus, and then we have proxies that get added to every pod — or every pod that you enable — via a mutating webhook called the proxy injector, up on the top there. Every time a pod gets created in your system, Kubernetes says: hey, Linkerd, what should I do to this pod manifest? If it looks right, we add the proxy to it; if it looks very wrong, we'll reject it; otherwise we'll probably let it through. And then we add the proxy, and the iptables stuff gets set up there.
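Here is a toy Rust sketch of the decision an injector like this makes; the real proxy injector is a Kubernetes mutating admission webhook working against the real API objects, so the struct shapes and container handling below are simplified assumptions (only the `linkerd.io/inject: enabled` annotation is the real opt-in):

```rust
// Sketch of the admission decision a proxy injector makes for a new pod.
#[derive(Debug)]
struct PodSpec {
    annotations: Vec<(String, String)>,
    containers: Vec<String>,
}

enum Admission {
    Mutate(PodSpec), // add the proxy sidecar (and init container for iptables)
    Reject(String),  // manifest looks wrong
    Allow,           // leave the pod untouched
}

fn admit(mut pod: PodSpec) -> Admission {
    let inject_enabled = pod
        .annotations
        .iter()
        .any(|(k, v)| k == "linkerd.io/inject" && v == "enabled");

    if !inject_enabled {
        return Admission::Allow;
    }
    if pod.containers.is_empty() {
        return Admission::Reject("pod has no containers".to_string());
    }
    // Add the proxy sidecar and the init container that sets up the iptables
    // rules redirecting the pod's traffic through it.
    pod.containers.push("linkerd-init".to_string());
    pod.containers.push("linkerd-proxy".to_string());
    Admission::Mutate(pod)
}

fn main() {
    let pod = PodSpec {
        annotations: vec![("linkerd.io/inject".into(), "enabled".into())],
        containers: vec!["app".into()],
    };
    match admit(pod) {
        Admission::Mutate(p) => println!("injected: {:?}", p.containers),
        Admission::Reject(reason) => println!("rejected: {}", reason),
        Admission::Allow => println!("left unchanged"),
    }
}
```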
That's the whole service mesh setup — a bunch of config I don't want to go into right now. I don't have a lot of lessons around Linkerd 2, because we're still right in the middle of it, so I think anything I have to say won't be that insightful, but I do have a few things I want to harp on. Kubernetes is the database.
This is something Thomas Rampelberg on our team said to me earlier this year, and it's something I've been chewing on since. Kubernetes is a totally open database: you can think about it like etcd, but you have schemas and CRDs, and we have this data model — pods and service accounts and labels. We have that taxonomy, to some degree, that we're missing in other systems; having to describe what a workload's coordinates
are, we get for free out of Kubernetes, and we have a label system that we can use to do selections. I think it's still a little too open-ended for a production system — you want to have some constraints and say every pod must have these labels, etc. — but it's a great building block, and on top of that we now have things like API extensions: I can go add new endpoints in the Kubernetes API, and I can add custom resource definitions that have schemas that are validated before they go in.
We have built an operational database that happens to have these controllers running on it that just persist the state of the running system. It's awesome. If we had thought about Aurora and Mesos and ZooKeeper in these terms, with that nice API, I think that project would have been much more successful; having to figure out how to use ZooKeeper from first principles, with no validation, etc., was not really workable.
I would not have said this last year, but I have come to the conclusion that the world needs more reference architectures. It's weird, but we get asked all the time — people say, I really want to do multi-region, or I really want to do tracing — and we get into it, and there's actually very little in terms of Linkerd features that can or should be done there, really.
What folks are looking for is: I need a pattern that shows how I instrument tracing from the ingress through my application, and how the mesh fits into that — all of it together is the useful thing; any one slice of it is not actually going to get me a win in prod. Similarly, for things like multi-cluster and multi-region, we really have to think about global-scale load balancing and what failover means. There are some big concerns here that probably don't — I hope don't — belong in Linkerd, but we'll solve them
if we have to. The other lesson I've been learning slowly is that infrastructure projects, like those in the CNCF, only succeed by building trust slowly over time. There is no one talk I can give you that will convince you Linkerd is production-ready, or really convince you to use it at all. What we have to do as project maintainers, and as a community, is just keep showing up and being really open and clear in our communication, and so certainly in 2020 we're going to be focusing heavily on that side of things.
Okay, I'll take a little sip of water. So, why another proxy? And I know I'm probably running out of time. The proxy goals: one, small in terms of memory — if this is a sidecar that's in every pod, it can't even be 10 megs; I think right now it's like 2 to 6 by default with a couple hundred connections. It needs to be fast in terms of low latency: we can't have garbage collection pauses adding lots of latency unpredictability into the system. We also need it to be
light on CPU: if we're consuming your CPU quota, you're not gonna have a good time in your cluster, so we need to make sure it's low overhead. This should probably come first, but it needs to be safe — I'm not putting Heartbleed on every node in your cluster; that's not an incident I want to deal with. And finally, it has to be malleable: I have to be able to work in this thing and modify it every day.
I can't use some general, off-the-shelf, configurable thing: we go build features that touch the data plane, and to do that it had best be something I actually want to work on every day. So that leads us to: one, no garbage collection — no JVM, no Go — a native language, so you get that memory footprint down and get that CPU usage proper. And this one might be a little more surprising: I need a strong type system in my life.
I am not smart enough to write good — or at least workable — software without a type system and a compiler that's going to help, so this was something we learned from Scala and really carried forward with us, and it's not going away anytime soon. We also want to be able to specialize, and that's in terms of: one, it's not a configurable thing — if anything there might be a proxy config, but
it's an opaque implementation detail, and we want to keep it that way. Part of that is we do transparent protocol detection: you never have to tell us what protocol a given port is talking — well, you might in the future, for very weird reasons, but everything should just work out of the box, and we want to customize to that.
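A minimal sketch of what transparent protocol detection can look like — peek at the first bytes of a connection and classify them; the actual heuristics in linkerd2-proxy are more involved, so treat this as illustrative only:

```rust
// Sketch: classify a connection by peeking at its first bytes.
#[derive(Debug, PartialEq)]
enum Protocol {
    Http1,
    Http2,
    Tls,
    Opaque, // unknown: just forward bytes without parsing
}

fn detect(peeked: &[u8]) -> Protocol {
    // HTTP/2 connections start with a fixed client preface.
    const H2_PREFACE: &[u8] = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";
    if peeked.starts_with(H2_PREFACE) {
        Protocol::Http2
    } else if peeked.first() == Some(&0x16) {
        // 0x16 is the TLS handshake record type (a ClientHello).
        Protocol::Tls
    } else if peeked.starts_with(b"GET ")
        || peeked.starts_with(b"POST ")
        || peeked.starts_with(b"PUT ")
        || peeked.starts_with(b"DELETE ")
        || peeked.starts_with(b"HEAD ")
    {
        Protocol::Http1
    } else {
        Protocol::Opaque
    }
}

fn main() {
    assert_eq!(detect(b"GET /users HTTP/1.1\r\n"), Protocol::Http1);
    assert_eq!(detect(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"), Protocol::Http2);
    assert_eq!(detect(&[0x16, 0x03, 0x01]), Protocol::Tls);
    println!("protocol detection sketch ok");
}
```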
We want to do automatic, transparent mTLS within the mesh, and so we have some customized logic there, specific to Linkerd, on how that gets initiated and terminated. Something most folks don't know is that we do automatic HTTP/2 multiplexing between every proxy: if a proxy is talking to another proxy, it should never have more than one connection per pair. So if you have HTTP/1 between services in your app, we will take that, shove it all through an HTTP/2 channel to the other Linkerd, and then turn it back into HTTP/1 on the other side, and it actually really saves on cross-data-center connection initialization costs and mTLS costs, etc.
Finally — well, not finally — we want to have really, really, really good Prometheus integration. When we started, we met with Frederic from the Prometheus project, and he said the community wasn't getting this: we need really good Prometheus support in a proxy like this, so he helped us work on that. We basically take all the Kubernetes metadata and shove it into the labels for the traffic, so we have richly hydrated stats that can do things like dependencies, etc.
We have something called Linkerd tap, which I don't know of in any other system, but it's a way for us to basically push queries into our live running proxies. So rather than logging everything to Splunk or whatever and querying it afterwards, we can actually connect to proxies at runtime and say: show me requests that look like this — I want to dig in and see live requests matching these things.
So if you go to the Linkerd dashboard you'll see lots of live requests; that's all powered by Linkerd tap, and there's a whole bunch more we're going to do there. I don't know in what order — my focus right now is very much on the security roadmap — but there will be lots more things that we want to do in the data plane as this project goes on. All right, I'm gonna whip through the rest: Rust evangelism time.
Here's what some proxy code looks like after about two years of fighting with it. We actually have really nice composable layers here. This is the outbound endpoint stack, so for every endpoint we create a service that has all of these layers. On the outer side — at the bottom there — we have a tracing layer, which gives us some extra context for log messages and errors; then metrics, then tap, then protocol upgrading — and the point here is that these concerns compose.
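To show the shape of that layered stack, here is a tiny Rust sketch where each layer wraps an inner service and adds one concern; the trait and type names are simplified assumptions, not the proxy's real (tower-based) API:

```rust
// Sketch of a stack of layers: each one wraps an inner handler and adds
// exactly one concern (tracing, metrics, ...), and they compose freely.
trait Service {
    fn call(&self, req: &str) -> String;
}

/// The innermost service: actually "dispatches" the request to an endpoint.
struct Endpoint;
impl Service for Endpoint {
    fn call(&self, req: &str) -> String {
        format!("response to '{}'", req)
    }
}

/// A layer that records a metric around the inner call.
struct Metrics<S: Service>(S);
impl<S: Service> Service for Metrics<S> {
    fn call(&self, req: &str) -> String {
        let rsp = self.0.call(req);
        println!("metrics: recorded one request/response");
        rsp
    }
}

/// A layer that adds tracing context to log lines.
struct Tracing<S: Service>(S);
impl<S: Service> Service for Tracing<S> {
    fn call(&self, req: &str) -> String {
        println!("trace: handling '{}'", req);
        self.0.call(req)
    }
}

fn main() {
    // Compose the outbound endpoint stack from the inside out.
    let stack = Tracing(Metrics(Endpoint));
    println!("{}", stack.call("GET /users"));
}
```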
Okay, this quote is the nicest thing anyone has ever said about my work, so I just have to put it up on a slide at least once. We had a security audit done with the CNCF in July; we worked with this group here, Cure53, in Berlin. They found two minor bugs in our web dashboard; as of the edge release two weeks ago we've fixed them, and they'll both be fixed in the next stable release. But they gave us really glowing reviews about the project and the way we work on it, so don't take my word for it —
take theirs. Okay, wrapping up: big bets for 2020. Mandatory TLS by default — something we're going to do within the mesh. There can't be an "I'm just gonna enable TLS opportunistically, and maybe it's not there and I have no way to audit it." We need to get to a place where this is just mandatory: if it's not TLS, or a health check, or a readiness probe, then we will fail the request. That has to be how it works; that's where we need to get to.
Once we get there, we can start to talk about inter-cluster identity and policy. This is a big request, but it's not something we're gonna do until we've nailed the identity model — so we need to get to cross-cluster identity, but that's after identity is nailed down. And this is the craziest bet on the slide, I think: I want to reduce Linkerd's lines of code by at least 10%.
I want to maintain less code; I want to have a better ecosystem of libraries, etc., and so we're definitely gonna push on that. Part of that — part of all three of those things — is the Service Mesh Interface. This is a partnership we're doing with Microsoft and HashiCorp and some other folks around standardizing certain CRDs — certain API extensions — so that integrators who want to do things like traffic splitting and traffic shifting, like Flagger does, don't have to implement that against any one service mesh.
Furthermore, I think — I hope — that we'll be donating or finding common implementations of many core components that Linkerd shouldn't be maintaining: things like the CNI, the proxy-init container that configures iptables, and some of the multi-cluster syncing work left to do. I think those all belong in SMI, and we're gonna have to work with that community to do that.
Okay, the big flashy slide that I'm required to have here: we've been in production for a long time, we're doing awesome, we had the security audit done, and we added distributed tracing recently. The 2020 stuff is really, again, getting the security model up and advanced, and making that extend across clusters.