From YouTube: Envoy Community Meeting - 2018-11-06
A
Should we get started? Oh yeah, yeah, let's do it. Okay, I'm trying to think of anything else. Okay, looks like Chris wanted to mention that EnvoyCon is almost sold out. Buy your tickets, it's gonna be awesome. Do we know which maintainers are going? Are you going, Greg?
B
And this is really just to raise general community awareness. Right now, I'm looking at event manager replacements for this. It would be an either/or thing: the idea would be that you could actually turn on alternative event managers such as libev or libuv.
B
That's probably true, but there are various small features, like adding hooks or special additional flags to control connection management, which we have an issue for, and also these additional statistics, which would motivate us to work on an active code base. In particular, I would like to work on a code base where adding features makes sense if I am going to upstream them.
B
That's going to actually matter as we get to a point release some time soon. As we work with distributions, you might want to, for example, dynamically link against system libraries such as libev or libevent or libuv, and they're unlikely to want to work with some patched version of that library, one which is at some random point in the version control history. That's certainly the case for libevent today: it has very infrequent releases and it's a very stable library.
A
libuv is where all the cool kids are these days.
B
Yeah, it's libuv. libuv admittedly does a lot more than we actually need it to do, but if we just take the event loop from it, we'd be on an active code base, one which shares fate with Node.js. So we know that at least one other pretty massive open-source project also considers it a key dependency, and we also have other nice overlap with Node.js, like nghttp2 and so on. Yeah.
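[For context on what "just taking the event loop" from libuv involves, here is a minimal sketch of driving a bare libuv loop, written against libuv's public C API. It is illustrative only; Envoy would wrap the loop behind its own dispatcher abstraction rather than use it directly like this.]

```cpp
// Minimal libuv event loop sketch: one loop, one timer event.
// Sockets, signals, and file watchers would hang off the same loop.
#include <uv.h>
#include <cstdio>

int main() {
  uv_loop_t loop;
  uv_loop_init(&loop);

  uv_timer_t timer;
  uv_timer_init(&loop, &timer);
  uv_timer_start(&timer, [](uv_timer_t* t) {
    std::printf("tick\n");
    // Close the handle so uv_run() can return and the loop can be torn down.
    uv_close(reinterpret_cast<uv_handle_t*>(t), nullptr);
  }, /*timeout_ms=*/100, /*repeat_ms=*/0);

  uv_run(&loop, UV_RUN_DEFAULT);  // blocks until no active handles remain
  uv_loop_close(&loop);
  return 0;
}
```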
B
That's pretty much it. I was actually gonna look at libuv this afternoon. Yeah, I was treating it as essentially something much closer to libev than what Greg tells me it is. Greg, do you know if you can actually get to the lower level? Does it have a lower-level library abstraction which allows you to essentially stick with something closer to the libev model?
E
You know, I'd have to look at whether it actually supports Windows, because on Windows the APIs for files versus sockets are completely different. You can't just switch between them like you can on UNIX systems, so it may or may not work at all if you treated it all as file watchers. Okay.
A
Very likely, I mean, I think eventually they are going to have to replace the event loop with DPDK or something similar, so there is parallel work here. It's just hard for me, without doing the looking into the libuv API that you're going to do, to understand what this means, so I don't know.
A
I'd say that's a better model in my opinion. The IOCP model is, right, it's superior, yeah. So I mean, this might be the better long-term direction because it would support Windows better, whether or not the library natively supports Windows. We would eventually want to move Windows over to using I/O completion ports anyway, so this might make it easier. But yeah, per Greg, and again without me knowing anything, I'm just worried that this is gonna...
F
Right, so for those who don't know, we would like to have the ability to connect to the upstreams with the original source IP, and maybe port, that came into the system. We determine that either through something like proxy protocol, or we just assume that what arrives at Envoy from the downstream is the correct source and destination. Sorry, source port and IP.
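[As a rough illustration of why this needs kernel support, here is a minimal sketch, assuming a Linux host, of binding an upstream socket to a non-local source address before connecting. The function name is made up for illustration; IP_TRANSPARENT and its privilege requirement are real Linux behavior.]

```cpp
// Sketch: connect upstream using the downstream's original source address.
// Requires IP_TRANSPARENT (and CAP_NET_ADMIN) on Linux, because the
// original source address is usually not local to the proxy host.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_with_original_source(const sockaddr_in& original_source,
                                 const sockaddr_in& upstream) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return -1;
  int one = 1;
  // Allow bind() to a non-local address.
  if (setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) < 0 ||
      bind(fd, reinterpret_cast<const sockaddr*>(&original_source),
           sizeof(original_source)) < 0 ||
      connect(fd, reinterpret_cast<const sockaddr*>(&upstream),
              sizeof(upstream)) < 0) {
    close(fd);
    return -1;
  }
  return fd;
}
```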
F
And we also need to do work to get that connection information into the connection pool. So we had a bit of back and forth on this; I put up a design document, and I think the approach we're going to take is to create a connection pool that wraps other connection pools and sort of handles a lot of the complexities of ensuring that the source port and IP match what came into the system, the biggest complexity being the fact that you can't just open a connection to the upstream and expect it to work.
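[A hypothetical sketch of that wrapping shape is below; the class and method names are invented for illustration and are not the design doc's actual interfaces.]

```cpp
// Illustrative pool-of-pools: requests are routed to an inner pool keyed by
// the downstream connection's original source address, so connections bound
// to different sources are never mixed.
#include <map>
#include <memory>
#include <string>
#include <utility>

class ConnPool {
public:
  explicit ConnPool(std::string source) : source_(std::move(source)) {}
  // A real pool would hand out upstream connections bound to source_.
  const std::string& source() const { return source_; }
private:
  std::string source_;
};

class SourceWrappingPool {
public:
  // Returns the inner pool for this request's original source address,
  // creating it on first use.
  ConnPool& poolFor(const std::string& original_source) {
    auto& slot = pools_[original_source];
    if (!slot) slot = std::make_unique<ConnPool>(original_source);
    return *slot;
  }
private:
  std::map<std::string, std::unique_ptr<ConnPool>> pools_;
};
```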
F
...requests in flight for a given connection. So I've kind of been moving slowly on it for the past bit; I've just been pretty busy with other things, but I'm hoping in the next week or so to buckle down and get to work on it, so we can have something to show soon. Anyways, anyone have comments on that?
A
Yeah, if folks have not seen the design doc, it's super detailed and there are lots of comments on it; it's worth reading. The one thing that I wanted to point out is that the reason I'm really excited about this work is that it won't just solve source port. There's been repeated...
D
So I kind of did this pattern where I just went hog-wild on something I'll never try to do a PR for, to eliminate as much static memory as possible, and I was able to cut like half of it. Well, actually, as background: there are cases where we are scaling Envoy to a surprising number of clusters, tens of thousands, maybe more, and in that scenario almost all the memory is stat names.
D
So that seems like not a great use of memory, and most of those names are just different combinations of the same strings put together in different ways. I've done, I think, most of what I can do along that line without doing the more radical thing of introducing a symbol table, and that's kind of in flight now. Along the way, I found various things that might make that work easier, that Matt and I have been chatting about. One of them is to simplify, or maybe even eliminate, shared-memory stats.
D
The elimination of it would be: there'd still be hot restart, but it would involve, you know, a transfer of control and a transfer of data from the old process to the new process. That was Matt's idea, actually; I haven't looked at it at all other than to talk with him about it. I also thought that it would simplify shared-memory stats to not store the stat name at all in the shared memory, but instead store something like a SHA hash and just use that to uniquely, deterministically find stat names. They'd be a fixed size, and that would eliminate a lot of complexity and reduce the amount of shared memory needed. But mostly what I'm looking at now is a flow where I can have this symbol table, with all of the stat-name memory held in the symbol table, without taking a lock in the hot path. That last part is the tricky one, because the symbol table itself requires locks, but I think I have a solution to that.
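[To make the symbol-table idea concrete, here is a minimal, single-threaded sketch, not the in-flight patch itself, of interning the dot-separated tokens of stat names. A real version would need locking or a lock-free scheme, which is exactly the hot-path concern above.]

```cpp
// Sketch: intern each '.'-separated token of a stat name once and represent
// the full name as a vector of integer symbols. Tens of thousands of names
// like "cluster.foo.upstream_rq_200" then share the storage for the
// repeated tokens "cluster" and "upstream_rq_200".
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

class SymbolTable {  // NOTE: not thread-safe; workers would need locking
public:
  using Symbol = uint32_t;

  std::vector<Symbol> encode(const std::string& name) {
    std::vector<Symbol> out;
    size_t start = 0;
    while (start <= name.size()) {
      size_t dot = name.find('.', start);
      if (dot == std::string::npos) dot = name.size();
      out.push_back(intern(name.substr(start, dot - start)));
      start = dot + 1;
    }
    return out;
  }

  std::string decode(const std::vector<Symbol>& symbols) const {
    std::string out;
    for (Symbol s : symbols) {
      if (!out.empty()) out += '.';
      out += tokens_[s];
    }
    return out;
  }

private:
  Symbol intern(const std::string& token) {
    auto it = index_.find(token);
    if (it != index_.end()) return it->second;
    tokens_.push_back(token);
    Symbol s = static_cast<Symbol>(tokens_.size() - 1);
    index_.emplace(token, s);
    return s;
  }

  std::vector<std::string> tokens_;                // symbol -> token
  std::unordered_map<std::string, Symbol> index_;  // token -> symbol
};
```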
A
Yeah, one thing just to throw out there, and this is something that I've been thinking about for a while and I'm curious if people have thoughts on it: there's been a bunch of work already done with the symbol table and other stuff, and, just for historical reasons, stats started out way simpler than they are now; things have gotten a lot more complicated. So some of the design decisions that we started with, say, three and a half years ago may not apply anymore. One of those decisions, which I am becoming increasingly convinced is not worth it anymore, is keeping the stats themselves in shared memory. Very roughly, the way I think that would work is: we would move each process individually over to the symbol table, to minimize memory as much as possible, and then in the hot restart protocol we would actually have some type of pagination API, where the new process can just ask the old process: "Give me all your counter and gauge values," right, and it would do that until the old process shuts down. So say every flush interval, like every five seconds or every 60 seconds, the new process would say to the old process, "hey, give me all your stats." The old process would start sending RPC frames to the new process with blocks of stats, and then when the new process output stats, it would add the old process's counters and gauges to the new process's counters and gauges.
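[A hedged sketch of what such a pagination exchange could carry is below; these structs are hypothetical, invented to illustrate the shape of the API, and are not the actual hot restart RPC format.]

```cpp
// Hypothetical message shapes for paginated stat transfer during hot restart.
#include <cstdint>
#include <string>
#include <vector>

struct StatBlockRequest {
  uint64_t start_index = 0;  // resume point for pagination
  uint64_t max_stats = 0;    // cap on stats per RPC frame
};

struct StatValue {
  std::string name;  // or a symbol-table encoding, per the earlier discussion
  uint64_t value = 0;
  bool is_gauge = false;
};

struct StatBlockResponse {
  std::vector<StatValue> stats;
  uint64_t next_index = 0;  // 0 once the old process has sent everything
};

// Each flush interval, the new process issues StatBlockRequests until
// next_index == 0, then folds the returned counters and gauges into its own
// values before flushing to the stats sinks.
```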
A
So it's not going to be as real-time or as accurate, but to be honest, I think it will work perfectly well and people won't even know the difference. And I think if we did that, it would simplify so much: we could get rid of all the truncation stuff, and we could start using the same code path for things like symbol tables. And again, just for historical context, the reason I didn't do it that way back in the day is that things used to be way, way simpler.
A
So shoving the stats in shared memory was a lot simpler than writing this pagination thing. But now, writing the pagination thing seems trivial compared to all of the other work that we're doing. So, you know, my current thinking is actually that we should rip stats out of shared memory, and I'm curious if people have any thoughts on that.
E
That's worth investigating. It sounds like a good change if we can pull it off. I'm curious how it will scale with large numbers of stats, how much data we have to send between the processes, and, yeah, how much CPU time that consumes. Right, yeah, I mean, on the order of a hundred...
A
Yeah, I mean, as part of this we could rethink a bunch of things. I'd say gauges are the most important thing, right; you could argue that we could just do this for gauges. In a perfect world you would have counters too, because the old process is still doing things, right, so it would be nice to be able to add both kinds of things that the old process is doing, like closing connections, draining connections, etc.
A
But I do agree that if we go down this road, we can have a larger conversation about what the optimal way of doing this is. And I do think it's a coherent argument to say, "well, it's too much work if we do counters and gauges; there are way fewer gauges, so let's just do gauges." But I guess the optimal solution would be to send both counters and gauges, so my opinion would be: let's start there, and if it looks like, per Greg, it's using too much CPU or something like that, we could back it off. But, to say it just real quick, my gut tells me that it won't be an issue, because if we only send the data from the old process to the new process every, like, 30 seconds, I just don't see it being a big deal, right. And if you're doing...
A
Yeah, actually, maybe that's worth thinking about, for sure. It'll be a little tricky because of when you send stats and when you request them as a hot restart implementation detail, because the new process is shutting down the old process, so there might be some timing issues. But yeah, I mean, this might also be the opportunity that we've looked for to change the RPC API to using protos. So there are some real benefits here that might mean we want to invest in this.
D
Cool. I will say that this has a bunch of benefits, which you've talked about, but the existence of this hot restart path and the alternate way of allocating static memory is not really getting in my way at all, because I managed to find points to virtualize that, and I feel like it's not really making my life harder. Even the truncation is not really a big deal. But I understand there are a bunch of benefits to this, and it sounds more portable, but...
A
Yeah, it just seems like, as this code has gotten more and more complicated, this would actually allow us to vastly simplify it and make it easier to understand again, right. Whereas now, with the different code paths, with the heap allocator and the shared allocator, it's so complicated that it's very, very hard to wrap one's head around it. No?
A
Exactly. So the way that we've been thinking about this is that in newer kernels with UDP, if you use, I think, SO_REUSEPORT, the kernel actually hashes, so forwarding wouldn't necessarily be necessary; each datagram should wind up at the right place. That's how Google does it at scale. So in the common case, I don't think there will have to be cross-worker sends; it should just work. Well, we'll have a fallback path. Yeah.
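[A minimal sketch of that per-worker listener setup follows, assuming Linux 3.9+, where SO_REUSEPORT hashes the source/destination tuple across the sockets sharing a port. The function is illustrative, not Envoy's actual listener code.]

```cpp
// Each worker opens its own UDP socket on the same port with SO_REUSEPORT.
// The kernel then steers datagrams from a given peer consistently to one
// worker's socket, so no cross-worker forwarding is needed in the common case.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

int open_worker_udp_socket(uint16_t port) {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) return -1;
  int one = 1;
  if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
    close(fd);
    return -1;
  }
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    close(fd);
    return -1;
  }
  return fd;  // each worker thread calls this and polls its own fd
}
```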
C
I had a quick question on where we were duplicating some of the TCP stuff. We had kind of a blocker with source IP not being implemented fully in v2 for filter matching, yeah, and then there was kind of another parallel PR for adding a source type, which seemed like it was potentially also going to do this. I had kind of a parallel PR that had source IP and source ports, but then we decided source ports is probably something you'd want to leave out entirely. Anyone have any more follow-up on that source...?
B
Going with that, and maybe insisting on it: the issue there was that it was an incomplete solution. It was definitely an interesting feature which you couldn't emulate with the existing matchers, this idea of trying to determine whether an incoming IP was essentially coming from the localhost or not. But the claim that you could do that just by kind of matching IP addresses was not really complete for more complex systems. So my take was that I was fine trying to add what they were after, but they needed to rephrase what it actually was doing, versus claiming it was matching from the same host. Or at least, you know... yeah, I'm being dropped out, so I can continue this... never mind. Yeah.