From YouTube: Envoy Overview and Maintainer Q&A - Harvey Tuch; Lizan Zhou; Stephan Zuercher & Snow Pettersen
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Envoy Overview and Maintainer Q&A - Harvey Tuch, Google; Lizan Zhou, Tetrate; Stephan Zuercher, Slack; & Snow Pettersen, Square
A general overview of Envoy (https://www.envoyproxy.io/) as well as an opportunity to ask questions of the maintainers in attendance.
https://sched.co/UakG
Okay, let's get started. Thank you for being here in this horrible slot at the tail end of the conference — I didn't think we'd get this many people. I'm Matt Klein, and I'm up here with a bunch of the Envoy maintainers. As for the format, I'll let them introduce themselves a bit later. We're just going to do a quick 15-20 minute presentation for those of you that don't know what Envoy is, and then we'll do a Q&A session.
Okay, let's go. Just a quick overview of networking in the cloud native space right now, in terms of what most end users are actually facing. We are living in a world these days where most companies are not a one-language shop anymore. The days of all-Java are pretty much over; through acquisition or just general trends, we now see most companies using many, many different languages, and within those languages they're using lots of different programming frameworks.
That leaves us in a situation in which folks are struggling with figuring out how to operate their systems and do networking across all these different code bases. We have a lot of different protocols: obviously we have HTTP, we have gRPC, we have databases, we have caches — lots of different types of infrastructure that folks are facing. We have VMs, we have containers, we have functions. We have people that are running in an on-prem cloud, and people that are running on-prem on bare metal.
We have people that are running in public cloud, and lots of different load balancers that people are using: cloud load balancers, software load balancers, hardware load balancers. And in terms of observability — how people operate these systems and get stats out of them — they're all very different. If you're using a cloud load balancer, it produces certain types of stats and logs; if you're using an F5, it produces different types of stats. So it can be quite confusing from a distributed systems perspective.
As the industry moves more towards microservice architectures, there are a lot of best practices that people need to have: things like circuit breaking and rate limiting and retries, etc. And the implementation of all of these things ends up being different across all of these different libraries and frameworks and load balancers.
A
Obviously,
lots
of
different
concerns
around
authentication
and
authorization
that
folks
have
to
deal
with
and
where
this
leads
right
now
is
that
you
know
for
people
that
have
lots
of
different
languages
that
they're
using
we
wind
up.
Typically
with
per
language
libraries,
you
know
for
people
who
are
trying
to
make
RPC
calls
for
people
who
are
trying
to
do
timeouts,
retry
circuit,
breaking
stats,
logging
and
tracing,
and
that
leads
us
here,
which
is
typically
a
place
of
pain.
So what is Envoy? The goal of Envoy is that we'd like to make the network transparent to application developers. I often say at all of my talks that infrastructure is mostly overhead: we would like our application developers to be focusing on writing business logic, and any time that they spend fighting with networking or databases or orchestration or any of those types of things is time that's not well spent for the company where they work.
So the goal of Envoy is a transparent substrate where we can mediate a bunch of the concerns that application developers have to deal with, particularly in the networking space. Those are concerns, again, around service discovery, load balancing, observability, etc.
I've given a variant of these slides for many years at this point, and a couple of years ago I had to add this "what is service mesh" slide. Service mesh is now the subject of epic vendor battles, so I'm sure all of you are pretty familiar with service meshes at this point. But if you're not, the idea behind the service mesh is that I have a bunch of different applications that are written in different languages, and they all have to deal with, again, similar concerns around service discovery, load balancing, and observability.
From a feature perspective, Envoy is a network proxy — for those of you that are not familiar, a software network proxy. You've probably heard of NGINX, you've probably heard of HAProxy; Envoy would be most similar to those pieces of software. It is an out-of-process architecture — it is not a library — and it is a modern C++14 code base. I know that Lizan would like to make it a C++17 code base.
But that has not happened yet. Fundamentally, Envoy is an L3/L4 filter architecture. What that means is that it operates at a TCP or a UDP level, in terms of bytes in, bytes out, and then we can write filters that can transform those bytes and do various things. That could be inspecting protocols: for example, we have a MongoDB filter that parses the MongoDB data stream and produces stats.
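The bytes-in/bytes-out filter model described here can be sketched conceptually. This is not Envoy's actual filter API (which is C++); the class and method names below are invented purely for illustration:

```python
# Conceptual sketch of an L4 (TCP) filter: it sees raw bytes in both
# directions and can observe or transform them. A real protocol filter
# (like Envoy's MongoDB filter) would parse frames here and emit stats.
from collections import Counter

class StatsFilter:
    """Counts bytes flowing through a proxied TCP connection."""
    def __init__(self):
        self.stats = Counter()

    def on_downstream_data(self, data: bytes) -> bytes:
        # Bytes from the client toward the backend.
        self.stats["downstream_bytes"] += len(data)
        return data  # pass through unmodified

    def on_upstream_data(self, data: bytes) -> bytes:
        # Bytes from the backend toward the client.
        self.stats["upstream_bytes"] += len(data)
        return data

f = StatsFilter()
f.on_downstream_data(b"ping")
f.on_upstream_data(b"pong!")
print(f.stats["downstream_bytes"], f.stats["upstream_bytes"])  # 4 5
```

In Envoy itself, filters like this are composed into per-listener filter chains, which is what the next part of the talk describes.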
That has important implications from an architectural perspective; it also allowed Envoy to be one of the first proxies that could do gRPC. It has quite a few different features in terms of service discovery as well as health checking. Service discovery is: how do I find my backend hosts? We support DNS; we support our own API called EDS, the Endpoint Discovery Service, which I'll talk about a bit; we support static configuration, etc. We have active and passive health checking: active health checking is actively sending out a ping and getting a response.
Passive health checking is monitoring traffic for, say, consecutive 500s, or success-rate outliers. I will talk about this more, but Envoy has become very popular for its configuration APIs, which we call xDS — that is, the "star" discovery service. And Envoy has quite a few different load balancing features, from being able to do failover from a priority perspective.
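The passive health checking idea — eject a host after N consecutive 5xx responses — can be sketched in a few lines. This is a simplified illustration only; Envoy's real outlier detection adds ejection timeouts, max-ejection percentages, and other knobs:

```python
class OutlierDetector:
    """Ejects a host after N consecutive 5xx responses (passive health check)."""
    def __init__(self, consecutive_5xx: int = 5):
        self.threshold = consecutive_5xx
        self.counts = {}      # host -> current consecutive 5xx streak
        self.ejected = set()

    def on_response(self, host: str, status: int) -> None:
        if 500 <= status < 600:
            self.counts[host] = self.counts.get(host, 0) + 1
            if self.counts[host] >= self.threshold:
                self.ejected.add(host)   # stop routing traffic here
        else:
            self.counts[host] = 0        # any success resets the streak

    def healthy_hosts(self, hosts):
        return [h for h in hosts if h not in self.ejected]

d = OutlierDetector(consecutive_5xx=3)
for _ in range(3):
    d.on_response("10.0.0.1", 503)
print(d.healthy_hosts(["10.0.0.1", "10.0.0.2"]))  # ['10.0.0.2']
```

Active health checking is the complement: instead of watching real traffic, the proxy periodically sends its own probe and marks the host based on the reply.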
We like to say that we have best-in-class observability, so that would be stats, logging, and tracing. We put a lot of effort into making sure that Envoy is easy to operate, and Envoy has many features that allow it to be used as an edge proxy, things like TLS termination. We are spending an increasing amount of time on security, and the reality is that the code that's used in an edge proxy versus a service mesh proxy or a middle proxy is 99% the same.
And what you'll see here is that Envoy, as I said before, is composed of a bunch of layer-4 filters, and those filters are composed in blocks which we call filter chains. So when a connection is received, it's assigned to a worker thread, and we set up a set of filter chains to operate in sequence at layer 4. Many deployments are using HTTP.
So, for example, Envoy can transparently proxy HTTP/1 to HTTP/2, or HTTP/2 to HTTP/1, and there are many other scenarios like that. We have a bunch of abstractions in place that allow most of these components at this point to be fully pluggable, so people can plug in their own custom clusters or load balancers or filters, or things like that. As I talked about before, we like to think of Envoy as a universal data plane, and at this point we have lots of APIs.
But as I said, this is really about allowing highly volatile architectures — where we're doing a bunch of autoscaling and things are coming up and coming down. We can now avoid having to do this dance of templating and sending out configuration files to all the hosts; we can have central management. The goal, and I think where many companies are getting to with Envoy, is having a bootstrap configuration that is shared by all Envoys, and then all the other configuration comes from that central management server.
So, just a very brief history of Envoy. I started Envoy at Lyft in 2015. We worked on it at Lyft for about a year and deployed it fully — it's deployed as our edge proxy and our, quote, service mesh proxy — and we open sourced Envoy at the end of 2016. At the time I thought it'd be great if someone would use this thing, and I just wasn't sure what would happen, and it's been a pretty amazing ride since then. So that was a picture of me in 2017, looking very, very sad.
We became a graduated CNCF project at the end of 2018, and now, at the end of 2019 — pretty incredible to me — Envoy is used pretty much everywhere. This is just a small, small list of the adopters, but it ranges from end-user companies like Lyft, to all the major cloud providers, to lots of startups that are now using Envoy as part of their product, and obviously lots of service mesh products that are based on Envoy.
A
So
you
know
from
you
know
the
the
project
perspective
in
terms
of
why
it
has
again
become
popular.
Is
that
I
think
that
we've
been
able
to
maintain
a
really
high
quality
bar?
We
have
pretty
incredible
velocity.
It's
a
very
extensible
platform.
We
have
a
pretty
flexible
and
rich
API
that
people
can
can
use
to
actually
use
it,
and
then
we
get
into
the
more
interesting
things,
which
is.
That
envoy
is
fairly
unique
at
this
conference
in
the
sense
that
it
was
not
created
by
a
VC
back
company.
A
It
was
not
created
by
a
cloud
provider,
it's
actually
an
end
user
driven
project
and
we've
been
able
to
build
a
stable
of
maintainer
Xand
contributors
that
span
really
all
of
the
cloud
providers
lots
of
end-user
companies.
So
it
continues
to
be
the
very
end
user
driven
project
which
is
which
is
great,
and
that
means
that
there's
no
open
core
version.
There's!
No
sorry,
there's
no
Enterprise
version,
it's
not
open
core,
so
we
don't
block
any
features.
We allow pretty much any reasonable change to come in, and I'm most proud of the larger community. I think it's really incredible how many people come to EnvoyCon and are pretty excited about the software. So that's great, and I am going to end there — just a quick little overview of the project.
The question is about Envoy Mobile. In this space we have a problem: Android is fine, but iOS is very restricted, and it's kind of hard to use custom network stacks there. Do you envision problems in that space — for example, Apple rejecting it?
I mean, it's hard to — you know, I don't work at Apple, so it's hard to say what they will do. What I can say is that, for example, one of the extension points that Envoy has is transport sockets, and so if they restrict some of the code base for one reason or another, there are ways we could potentially work with the APIs that they actually expose and replace the things that are problematic.
I have a question about the last point here, community. I'm sure all of you overlap with some other tech communities as well — KubeCon is kind of a bunch of overlapping Venn diagrams of tech communities. Obviously Envoy's got a real strong one, but as you look out at other communities, are there aspects that you see — habits or rituals or things — that you want to borrow and bring in? Are there other communities that you look up to and are modeling yourselves after?
Yeah — I think the Kubernetes community, actually, and not just because of the current audience. We've actually been drawing a lot from them in terms of establishing best practices around security handling and CVEs, and I think they've had to confront many of these issues well before we have. We also see them as probably a more complex project, but one which has many parallels to Envoy and one which we can certainly draw from, also as a result of their involvement in the CNCF.
Yeah, for me, probably Kubernetes. I also think there are portions of the Linux kernel community that are worth looking at, and others that are not, but we are increasingly having many of the scalability issues that some of these larger projects have. So there are things to be learned, for example, from the Linux kernel in terms of how they treat code outside the core versus the core kernel itself, and that's very similar to how Envoy treats extensions versus core Envoy. And we're starting to figure out how to do stable releases.
A
So,
looking
at
how
the
Linux
kernel
does
table
releases
is
also
very
useful,
I.
The
flip
side
of
that
is
that
having
some
of
these
larger
examples,
it
also
helps
us
learn
from
what
we
might
not
not
want
to
do
so.
For
example,
if
you
look
at
how
kubernetes
and
some
of
the
projects
do
CI
and
how
they
integrate
with
github
I
think
there
are
some
painful
parts
of
that.
So
we're
also
trying
to
learn
from
some
of
those
problems
and
hopefully
make
sure
that
we
don't
make
the
same
mistakes.
But
it's
it's
difficult.
I would like to ask a little bit about the features of Envoy, and one particular thing: source IP, and not for HTTP. Any idea how that could work, for example, for a protocol like POP3 email? In case we use TCP forwarding, could that work somehow? We'd like to get the source IP of a client — if clients send email, we'd like to pass that along to our backends. Any idea, any chance to get the source IP?
Sorry, are you asking: can you proxy with the same source IP that the client used? OK — yes, there are people that are doing that. This is probably a bit arcane for this audience, but there is a section in the docs on IP transparency which talks about this topic (or send us an email). So yes, it is possible, with a bunch of iptables trickery and some other stuff.
Okay, so the common pitfalls I've seen have been around the amount of change that Kubernetes enables. With virtual machines, things are relatively stable, because people tend to mutate their infrastructure, but Kubernetes is immutable infrastructure, so things are changing a lot, and we've seen a lot of issues with the control plane taking in all this data and sending it out to all the sidecar Envoys.
Some pitfalls are also around — Envoy has a feature called panic routing — that notifies you in a loud way that the mesh isn't converging, and it's just due to the fact that things are changing so fast and your control plane is trying to keep up. I think Istio is having this problem as well, and it's something the community needs to figure out.
Right now the xDS APIs send the state of the world, and the community is working on incremental xDS — I think it's called delta subscription. So there are things happening to make it work better, but yeah, those are the common pitfalls I see, because there are just so many changes.
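The state-of-the-world versus incremental (delta) distinction can be illustrated with a toy model. This is not the real protobuf-over-gRPC xDS protocol, just the core idea:

```python
def sotw_update(new_clusters: dict) -> dict:
    # State-of-the-world: every push carries the full config,
    # even if only one resource changed.
    return dict(new_clusters)

def delta_update(old: dict, new: dict):
    # Incremental/delta xDS: push only what changed or was removed.
    added_or_changed = {k: v for k, v in new.items() if old.get(k) != v}
    removed = [k for k in old if k not in new]
    return added_or_changed, removed

old = {"svc-a": ["10.0.0.1"], "svc-b": ["10.0.0.2"]}
new = {"svc-a": ["10.0.0.1"], "svc-b": ["10.0.0.3"]}

print(len(sotw_update(new)))   # 2: the full state every time
print(delta_update(old, new))  # ({'svc-b': ['10.0.0.3']}, [])
```

With thousands of endpoints churning, resending the full state on every change is what overwhelms both the control plane and the proxies, which is why the delta variant matters.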
I think the problem that we're having right now is that basically every large company that is adopting Envoy is writing their own control plane — there are hundreds of control planes out there — and it just isn't true that any of the open source service mesh control planes are really heavily used right now. There are people starting to adopt Istio, and there's Consul Connect, and there are other products, but they're not used at any large enterprise, no matter what any of the vendors tell you.
From a project perspective, what can we do to make that simpler? Unfortunately, it's not an easy problem, because everyone's infrastructure is slightly different. Following on from what was just said, I think there are patterns that control planes have to implement, particularly around batching: if the infrastructure is churning all the time, you can't send every update out — it's not going to work — so they have to implement batching and caching and various other things.
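The batching pattern described here can be sketched as a simple debounce. This is a toy illustration of the idea, not how go-control-plane or any real control plane actually implements it:

```python
class BatchingPusher:
    """Coalesces rapid config changes into at most one push per window."""
    def __init__(self, window_secs: float = 1.0):
        self.window = window_secs
        self.pending = {}     # resource -> latest value (later writes win)
        self.last_push = 0.0
        self.pushes = 0

    def on_change(self, resource: str, value, now: float) -> None:
        self.pending[resource] = value
        if now - self.last_push >= self.window:
            self.flush(now)   # leading-edge push

    def flush(self, now: float) -> None:
        if self.pending:
            self.pushes += 1  # one push covers the whole pending batch
            self.pending.clear()
            self.last_push = now

p = BatchingPusher(window_secs=1.0)
# 100 rapid-fire endpoint changes inside one window...
for i in range(100):
    p.on_change("svc-a", f"endpoint-set-{i}", now=10.0 + i * 0.001)
p.flush(now=11.5)  # end-of-window flush
print(p.pushes)    # 2: one leading-edge push, one batch for the other 99
```

The point is that downstream Envoys see two coherent updates instead of a hundred, at the cost of a bounded delay; real control planes layer caching and versioning on top of this.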
So then the question becomes: can we potentially put some of this logic into go-control-plane, which is our control plane library? But it's relatively early days. And what I would say — and again, this is different from what the vendors will tell you — is that service meshes are very, very complicated. Do not do one unless you are already having such bad problems that the alternative is worse. I think that's not what the vendors would tell you, but that's what I would tell you.
Istio is still getting more and more adoption — you saw large deployments, like the one in Shriram's talk, and you saw Walmart Labs in the keynote as well — so it's getting more and more stable, and we expect it will become easier to adopt in the next couple of months.
I've been working on UDP proxying — just basic UDP proxying, which will be part of the final QUIC solution — and that will hopefully merge in basic form in the next two weeks or so. Dan from Google is working on the actual L7 QUIC support. I think we're still planning on having an MVP, something that could be tested, by the end of the year. We're committed at Lyft to getting it out to production probably in H1 of next year, and there are other people too that are interested in this.
QUIC is very complicated, and there are a lot of moving parts in terms of how it actually has to be deployed — it's much, much more complicated than just doing something with TCP or HTTP/2. It's going to take some time to figure out what that looks like, but we're pretty close to having a basic version working for initial testing.
So, yeah — I won't claim deep knowledge of the service mesh interface specification, but having chatted with a few folks about this: what they're talking about is something at a different level of abstraction from, for example, Envoy's APIs. The idea is really, essentially at the level of abstraction of something like Istio's APIs: can we create a common set of standards which could generate config suitable for all these different service meshes and simplify their management, and so on. And yeah, standards are great.
I mean, to me it's not directly related to Envoy itself. It's more about having all the different service mesh implementations, which may use Envoy under the hood, make it easier for their users to kind of swap between different implementations. The API surface of SMI versus Envoy's is so different — Envoy has hundreds more features than SMI — that they're really operating at very different levels.
Actually — so there are going to be lots of vendor-specific extensions, and then what's the point of having that thing in the first place? I'm not opposed to it; I'm just dubious that it's going to end up having real value, because this comes back to: should you have a service mesh? And I think if the simple API works for you, you probably shouldn't have a service mesh — it's just too much complexity.
Thank you. I'm interested in a particular use case, which I don't know if it's supported, or if it could be supported somehow. There are retries, and there are timeouts. Let's say I want to connect to a service which is down right now, because it's meant to be down; then I'm going to wake up the service, and I don't want to wait until the next retry to actually start passing the traffic — by the way, I'm thinking TCP. So is there any way I can tell Envoy, hey —
The way that I would imagine this kind of thing happening is that you'd go through the xDS API: whatever is waking the service up changes the configuration and adds that host into the pool of considered hosts, and in that case your retry will just be able to pick it up normally, like anything else. So it kind of depends on your exact use case, and there are a lot of different ways of doing it.
If you want to do this kind of thing today, you could provide a custom filter, and there are certain hooks — for example, in the cluster manager — to ask it to go off and fetch a new host, wait for something to happen, and then notify you as a filter in order to resume the traffic. So I don't think there are any plans to add this kind of thing as an out-of-the-box API feature, but it certainly has an extension point.
Sure, yes. From the perspective of Envoy, and even in service mesh: we have the data plane, where all the traffic is moving, essentially, between the different services, and the control plane — which we sometimes confusingly call the control plane for the data plane, and sometimes they define an API, which is actually even more confusing. But the control plane for this data plane is essentially delivering dynamic configuration and intelligence: where to route various bits of traffic, how to do service discovery.
What ports services listen on — this kind of stuff is what these APIs are delivering. So the service mesh is: you've got a bunch of these Envoys in a Kubernetes cluster, each running in an individual Kubernetes pod, and you want to tie them together and provide cohesive traffic routing and management and observability over all of them. That's what the service mesh is, I think, in a nutshell.
I think it really depends on what you want to use Envoy for, and that's what I was saying before: we have people that are using it for API gateways, edge serving, service mesh — and there are vendors that are using Envoy for all of these things, lots of vendors. So I think the first decision point is what problem you're trying to solve — is it service mesh or something else — and then you can focus on finding a vendor. Are you asking about service mesh specifically?
So, back to the context of Kubernetes, which I'm more familiar with: there's the service abstraction — is the idea that with Envoy you plug that in, and it then fits within that mechanism? If you're running this in pods, does it play nicely with all of those constructs already, or does it introduce its own sort of out-of-context approach?
Envoy really has nothing to do with Kubernetes — it's really completely separate. We have people that use Envoy with Consul, people that use Envoy with other schedulers, and people that use Envoy on-prem with their own orchestration systems. People wrap Envoy in a variety of different ways, but from the Envoy project perspective, it's really orchestration agnostic.
It's never really felt like we need them. We have a bi-weekly community meeting, and I think we're very happy to set up mailing lists and do project tracking however people want. I guess I can't speak for all of the maintainers, but I'm not one who's for process for process's sake, so I think we just want to find the right way of getting work done. Is there a context in which you're asking?
Sure — I think the issue is that, again, we are a community-driven project, not a vendor-driven project, so we would need people that want to do the work. From a project perspective, we're very happy to start a working group, as in make an email list or set up a Zoom meeting, but if there aren't people that are willing to do the work, I don't know what the working group would actually do. So we need people to come forward who want to do the work, and we have the bi-weekly community meetings.
Today, a lot of the security stuff is actually handled by the Envoy product security team, which is a volunteer effort — there's a small group of us. We don't hear a lot from other folks who are interested in actually joining that effort; if there are any, we're definitely interested in vetting them and seeing if they'd be interested in joining that organization, and that's effectively the Envoy security working group.
Hey — so I'm pretty interested in using Envoy, either on its own or through a service mesh, but most of my company's traffic goes over NATS, which is a TCP protocol, and I was wondering what the state of Envoy's support is for protocols like NATS or MySQL that aren't necessarily HTTP, because I've read conflicting things online. I'm not sure if it's just a matter of needing to write the support myself, or if it's impossible.
It's totally possible to write support for whatever protocol you want. There has been some work done on a MySQL filter to understand the protocol — I think it was primarily around stats, with some ambitions of doing access control; I'm not sure what the state of it is, I think it's partially done, partial support. So yeah, it really comes down to having somebody who is willing to put in the work to add support for it. You have full access to all the data being sent over TCP, and you can do whatever you want with it.
Hey — so we're currently rolling out Envoy to services; we're probably in front of 2,000 different services, which use different frameworks and languages. Not that often, but we see problems where Envoy is changing some header, or not cleaning a header that is then sent by the upstream or the downstream, or whatever. So usually what we do is inspect the traffic and see what's going on — how the requests look before going through Envoy.
I mean, there's no reason we couldn't support that. I guess the idea is that effectively what you want to do is something like tcpdump it, and then run Wireshark and look at it — you'd have the session key for TLS, so you can actually see what the traffic looks like in plaintext. Today, the way we approached this — largely, I think, out of operational simplicity — was through the L4 tap.
This was before we added the L7 tap. We essentially just added a transport socket wrapper which will dump things out in Envoy's internal format, and we have a conversion tool between that and pcap, so if you configure this transport socket in its tap mode, you can basically get a pcap file out, in plain text as well. For us, for our various testing and security needs, that was sufficient.
The only thing I would add is that the feature you're talking about specifically — I think most people would consider it a security disaster, so personally, from an operational standpoint, I would not recommend doing that. As a contributor, if you can find a clean way of contributing it — as Harvey was saying, I'm not sure that we would object to it — but I think it's doubtful that you're going to find one of us to add that. It's an environment variable you're sucking the key out of.
But there is a huge risk that whenever the connection to the xDS server breaks, my containers won't start up, or they will fail and the whole communication fails. Is there any plan, or is there any already-existing way, to provide a hybrid mode of configuration to Envoy, saying: use xDS as the first thing, but if the connection breaks, if something goes bad, use this static config?
I can give you my personal opinion, which is that doing that is a horrible idea, and here's why: because you're not fixing the case where new instances won't work. You're just looking for a band-aid over the fact that your control plane is not working. So, putting on my SRE hat, I'd really recommend that you not do that; you just need to make your control plane work, and you need to make your control plane reliable.
That's supported — I think in the context of UDPA, which is going to largely underpin the v4 APIs next year, there will be some more harmonization between what is possible statically and dynamically, and we will be much more explicit about how you do overrides and things like that. Now, whether that's a good idea to use operationally is another question, as Matt points out.
That might not be the greatest idea, but I think we want to make UDPA things basically expressible both dynamically and statically, and to define a good story for overrides. It's actually part of our overall story of doing things like federation and so on — that all ties into the whole dynamic-versus-static question within the proxy.
Not to keep beating the same drum with Kubernetes, but I guess I'm just thinking about situations that you've seen where people are using Envoy in conjunction with Kubernetes, whether it's, say, ingress, or again — it seems to me there's some overlap with the service model, in terms of containers needing to reach out and find things. I'm curious how it has been integrated there.
Yeah
I
mean
there
there
are
there
tons
of
people
both
on
the
ingress
iodine
and
on
the
servers
to
service
space
who
are
binding
and
binding
the
binding
the
system's
together,
but
that
tends
to
be
where
they're
taking
they're,
taking
CRTs
and
they're,
translating
those
to
envoy
config
and
how
that
is
done
is
typically
vendor.
Specific.
J
I have more of a philosophy question for you, Matt. When you started a project like Envoy, did you look first at whether there was an existing open source project you could modify or contribute to? I think there are a lot of reverse proxies and forward proxies, even older ones like Squid or httpd. What was the motivation for starting a new project? And a follow-up question would be: aren't you afraid that at some point Envoy will be so developed that it will turn into something huge, like those projects were five years ago?
I think at the time the way forward would have been to fork NGINX, and that probably would have gone okay, right? Because at the time — this is almost five years ago — it was clear that NGINX was not taking patches; it wasn't going to take a bunch of the stuff that we knew we needed to do. So I could have forked it, right.
I thought, from a judgment perspective, that C++ is way more productive than C, so I thought that we could move faster, build on existing libraries, and get something working relatively quickly. From empty files to production, I had something in production at Lyft in about three or four months, so it wasn't like a multi-year effort. Obviously, what existed then is a fraction of what we have today. So that's why.
Yes, there are lots of things that could have been done, but that was the judgment call that I made, and I was given, as they say, a large rope to hang myself with, from a job perspective. But in terms of, you know, will it get too big — I think there's a natural life cycle, right? Envoy is eventually going to get replaced by something else.
So
you've
probably
heard
this
before,
but
I'm
just
curious
about
the
the
motivation
at
lift
like
what
was
it
that
you
were
doing.
That
I
mean
not
many
of
us
work
at
places
where
we
need
to
go
write
this
kind
of
software
from
scratch
so
just
went
see
if
you
could
describe
a
little
bit
more
about
the
problems
you
were
facing
that
necessitated
this.
A
At the time that I joined Lyft, we had a monolith and I think we had 30 services, written in Python, and the microservice rollout was basically paused because no one knew how to debug anything — networking around Python and service-oriented architectures is difficult. From previous experience — I'd been at Twitter before Lyft — I'd seen how people build large microservice architectures, and many companies before had done them using an in-process library.
At Twitter it was done with Finagle, and Netflix has Hystrix, so it's a very common set of patterns that people use. But it was also clear that we couldn't write one library, because you'd have to write it in PHP and in Python and a bunch of other stuff. So I knew what had to be done, and I knew that it had to be out of process, because we weren't going to write it four or five times. So then it's: do we use NGINX? Do we use HAProxy?
I think at the time someone proposed to me that we just do it in Python — no joke. So, you know, all of these things are trade-offs. I think in hindsight it's kind of incredible to me that Lyft actually let me go and do it, but that's history — those are the problems that we were facing.