From YouTube: Envoy Community Meeting - 2019-02-12
B
Yeah, so I've been working on a benchmarking project. We call it Nighthawk, and it's based on Envoy's libraries. Currently, in terms of functionality, it's similar to what wrk has to offer. It's still pretty early stage and simple, but it already performs fairly well.
B
What we don't have yet, though, is coverage, because basically we currently have a hack to make that work. At the main repo we want to use native coverage support, and the person who is responsible for that has actually been out, but they'll be back this week, and hopefully we can get them to help us out to make that happen. But ideally we make this a coverage-driven repo, so that we get the same close-to-a-hundred-percent coverage that we have for the main repo applied here.
A
One thing to think about might be how we structure some of these ancillary scripts and tools which exist today in the main Envoy repo. We want to use these in other repositories that we host under the envoyproxy organization. It would perhaps be nice to have these as a submodule which we share across them, since right now we sort of have a lot of copy-and-paste boilerplate.
C
So, with the perf thing, is there like a roadmap doc that people can look at? I'm just curious: is there something that can be shared with people, just to see what the different milestones are and what our plans are?
C
I mean, whatever's easiest. It could be a little roadmap doc, or it could even be in the repo if we could convert it into issues with some checkboxes. I just think it would be nice for people to understand where the project is going, so that people can comment if there are things that they would actually like to see.
C
That would be great. And then, also, in GitHub now you can do project boards; you can have milestones, you can have labels, so that might be a good way to generally track what people are working on in the different milestones. Just because, as we get further, you know, we've talked about this offline, I think this project has the potential to become very widely used outside of Envoy. It's a pretty cool thing, so it would just be nice for people to understand what's going on.
C
...and h2, and then we'll eventually get QUIC support and a whole bunch of other things. And actually there are some nice synergies, because if we want to load test QUIC, we'll need QUIC client support, and we also need QUIC client support for running Envoy on the client. So there's a bunch of work that I think comes together pretty nicely.
C
Then one other thing, just from a roadmap perspective. I'm sure it's not in the first version, but longer term, figuring out how we eventually get this into our CI system, so that there's some way, on stable resources, that we can run it. Even if it's not on every commit, just to look for, like, a weekly trend of performance, so that we can see if we've had any major regressions on CPU or memory usage. The benefit there would be tremendous.
A
I
think
we're
gonna
come
a
bit
later
like
right
now
for
spoke
of
the
work.
That's
other
his
to
be
involved
in
is
largely
just
about
the
tool
itself,
but
it's
basically
it's
it's.
The
enabler
for
building
the
rest
of
this
and
I
think
said
that
others
are
interested
in
contributing
to
you
know
doing
this
infrastructure
wiring
job
welcome.
C
And it's something where I suspect that, once we get a little bit further on, we could have CNCF pay a contractor to help with some of that plumbing work, because that's less systems work and more just tying it together into provisioning and CI. Not that it's not hard, but it's a different skill set, and we can probably get someone else to be paid to do that.
B
On a purely synthetic benchmark against Envoy, serving, like, a static Lorem Ipsum file, the one that was already in the Envoy repository, I get super tight standard deviations, within 12 microseconds or something like that. And that's interesting, because I think that if we can run bare metal in CI, we should be able to, you know, set a pretty tight benchmark there for us to maintain.
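To make that idea concrete, here is a minimal sketch, assuming hypothetical sample data and budgets rather than anything from Nighthawk or Envoy's actual CI, of how a benchmark gate could fail a build when latency statistics drift:

```cpp
// Sketch of a CI benchmark gate: collect per-request latency samples, then
// fail the build if the mean or standard deviation drifts past a budget.
// Names, samples, and thresholds are illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

struct LatencyStats {
  double mean_us;
  double stddev_us;
};

LatencyStats computeStats(const std::vector<double>& samples_us) {
  double sum = 0.0;
  for (double s : samples_us) sum += s;
  const double mean = sum / samples_us.size();
  double var = 0.0;
  for (double s : samples_us) var += (s - mean) * (s - mean);
  return {mean, std::sqrt(var / samples_us.size())};
}

int main() {
  // In a real run these samples would come from the load generator.
  const std::vector<double> samples_us = {102.0, 98.5, 101.2, 99.8, 100.4};
  const LatencyStats stats = computeStats(samples_us);
  // Hypothetical budgets; fail CI (non-zero exit) on a regression.
  const double kMeanBudgetUs = 150.0;
  const double kStddevBudgetUs = 12.0; // the "tight" deviation mentioned above
  std::printf("mean=%.1fus stddev=%.1fus\n", stats.mean_us, stats.stddev_us);
  return (stats.mean_us <= kMeanBudgetUs && stats.stddev_us <= kStddevBudgetUs) ? 0 : 1;
}
```

The point of the tight deviation is that, on bare metal, such budgets could sit close to the observed baseline without producing flaky failures.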
C
This is super exciting, and that's why, you know, I mean, there are always a lot of public conversations about how projects should publish perf numbers, and I'm always saying that's so hard; it takes literally months and months of effort to do this correctly. But we're actually putting in all of the months and months of effort to do it correctly. So it'll be really amazing if, in a couple of months, we can get this working so that we have it in CI and have published results.
A
A certification program, that sounds exciting. Should Envoy have a certification program? Is there a test suite, analogous to this, to certify a management server implementation?
C
Right, so for people out there, just a quick update. You may have noticed that we had a lot of queuing. We shouldn't have any queuing anymore, but we had a lot of queuing before yesterday. To make a very long story short, CircleCI was graciously giving us free CI for many months, and we reached a resource level with Circle where they weren't willing to give us any more free CI resources. So that means that we need to pay for our CI.
C
So
behind
the
scenes,
we've
been
obviously
looking
at
a
whole
bunch
of
different
things
from
how
do
we
pay
for
our
existing
resources?
To
you
know
how
do
we
make
our
builds
faster?
How
do
we
develop
some
tooling
so
that
we
don't
run
all
of
the
CI
jobs
on
on
each
PR
or
each
commit?
You
know
so
maybe
have
like
a
final
test
pass
for
tests
that
that
you
know
typically
don't
fail
if
the
main
test
pass.
So
things
like
mac
or
compile
time
option,
or
things
like
that.
C
So
there's
a
there's
a
couple
things
happening
in
parallel:
we're
working
on
getting
direct
funding
for
our
CI
bill.
I
I,
don't
want
to
share
anything
publicly
about
that
right
now,
but
I
feel
confident
that
that
will
have
the
funding
that
we
need.
If
you
are
listening
to
this-
and
you
are
a
company
that
appreciates
envoy-
and
you
would
like
to
help
contribute
to
our
CI
bill-
please
reach
out
to
me
or
the
maintainer
z--
c
is
probably
like
one
of
the
most
valuable
things
that
we
do.
C
It's
not
cheap,
but
it
keeps
our
project
at
super
high
velocity.
So
if
you
would
like
to
contribute
X
dollars
per
per
month,
please
please
contact
us.
That
would
be
great
and
then
we're
also
investigating
some
things
to
make
builds
faster
and
stuff
like
that.
I
think
these
on
is
on
the
call.
Do
you
want.
D
We turned on the GCS cache backend a couple of days ago, and that was showing very good performance in recent release runs. It turned out that some of the builds finished in 20 minutes, which was nearly two hours before, so that was great. I also had a meeting about remote build execution, and I'm going to give it a try.
D
So
those
are
things
that
we
like
foreign
country
to
make
to
do
it
faster
and
there's
some
tech
that
can
potentially
make
the
professor
as
well.
That
has
all
the
issues
dynamic
linking
right,
which
we
dynamically
linked.
One
is
also
the
one
that
Emma
connect,
probably
not
we're.
Not
speeding
up
tests
really
watch
I'm,
not
sure
I
I
will
give
it
a
try
to
see
how
that
goes,
and
the
using
LEDs
probably
can
make
the
reduce
spilled,
linking
faster.
That
yeah.
C
I
mean
that
that
one
seems
like
a
no-brainer
like
it
seems
like
we
stick
with
GCC.
For
now
we
could
think
with
with
with
the
other
linker
I
you
know,
per
other
discussions
like
I
would
be
also
I.
Think
we're
reaching
a
point
where
enough
people
are
using
clang.
Now
that
night,
like
I,
would
be
fine.
Switching
our
official
build
over
new
clang.
The
only
thing
per
our
private
discussion
is,
and
for
people
out
there
is
I,
don't
think
we
can
stop
doing
CI
with
GCC
I
mean
I,
just
think.
A
I'd hesitate on that one, because we've historically seen issues around things like different implementations of the STL and that kind of thing, which have manifested themselves in actual real test failures as we switch between the different compilers. If it were usually the case that they were producing effectively identical binaries and we didn't really ever see any behavioral differences, I would totally agree.
C
Well,
and
and
that
comes
actually
back
to
I-
think
once
EJ
developed
some
of
that
additional
repo
Kitty
tooling
for
us
like
there's
a
bunch
of
tests
that
we're
running
now,
just
don't
really
have
to
run
on
every
single
commit
in
every
PR
right.
So
it's
like
you
know
some
of
them.
We
may
decide
to
rely
only
to
our
master
or
some
of
them.
E
So yeah, unfortunately Todd is not here this time. We talked about it a bit two weeks ago, so we got kind of the intro, but now there's a doc that everybody can get to. I think the easiest way to get to it, if you didn't see it on the Slack channel, is that it's issue number 868, and Todd linked the doc at the bottom of that one. So you can open it up, but I'll kind of go over the high points of it really quickly and see if there are questions.
E
So
the
idea
here
is
that
this
is
kind
of
a
plugin
based
architecture.
The
the
proposal-
and
we
don't
have
code
yet
to
share-
is
that
we
would
supply
an
HDPE
filter
that
would
perform
caching
and
envoy,
but
it
wouldn't
have
a
cache
in
itself.
You
have
to
plug
in
a
real
cash
that
you
want
to
use,
and
so
mostly
what
this
doc
specifies
is.
What
is
the
interface
between
this
caching
filter
that
we
will
supply
eventually
and.
E
You
know
and
the
cache
back
end,
which
might
be
you
know
something
that's
proprietary
in
different
networks
or
it
might
be.
You
know
we
could
do
something
based
on
Redis
or
80s
or
something
in
memory,
and
the
idea
is
that
you
might
have
multiples
of
these
than
have
multi-level
caches,
but
that's
kind.
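As a rough illustration of the plugin shape being described, and not the interface from the design doc itself, a pluggable back end might look something like the following; every name here is invented for the sketch:

```cpp
// Hypothetical shape of a pluggable cache back end: the filter owns the HTTP
// caching semantics and delegates storage to whatever back end (in-memory,
// Redis, etc.) is plugged in. All names are invented for illustration.
#include <functional>
#include <memory>
#include <optional>
#include <string>

struct CachedResponse {
  std::string headers; // serialized response headers
  std::string body;
};

// Storage contract: lookups are asynchronous because a remote back end
// (e.g. Redis) involves I/O; an in-memory back end can invoke the callback
// inline on the calling thread.
class CacheBackend {
public:
  virtual ~CacheBackend() = default;
  using LookupCallback = std::function<void(std::optional<CachedResponse>)>;
  virtual void lookup(const std::string& key, LookupCallback cb) = 0;
  virtual void insert(const std::string& key, CachedResponse response) = 0;
};

// The caching filter would hold a back end and apply HTTP cache semantics
// (freshness, validation, variants) around it.
class CacheFilter {
public:
  explicit CacheFilter(std::unique_ptr<CacheBackend> backend)
      : backend_(std::move(backend)) {}

private:
  std::unique_ptr<CacheBackend> backend_;
};
```

Keeping storage behind an asynchronous interface is what allows the same filter to sit in front of an in-process map, a remote store, or a multi-level combination of the two.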
E
It's also designed with a lot of the kind of wisdom from HTTP caching at Google, handling things like variants, which is actually a very customizable thing, and range requests. Variants are, if you don't know much about variants, when you say: well, I'd like to send this response to, for example, clients that specify a specific HTTP request header in a certain way, and that'll be part of the key. So you can pick an HTTP header that you want to become part of the key, and a good example of this would be the Accept header or the Accept-Encoding header. That way you can have responses that vary based on those, and you have to specify, kind of in your installation, which of those you care about for your server.
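A minimal sketch of that keying scheme, with invented names and no relation to the design doc's actual key format: only headers in a configured allow-list contribute to the cache key.

```cpp
// Fold an operator-configured allow-list of Vary headers into the cache key,
// so responses can vary on, e.g., accept-encoding without every request
// header fragmenting the cache. Illustrative only.
#include <map>
#include <string>
#include <vector>

std::string makeCacheKey(const std::string& host, const std::string& path,
                         const std::map<std::string, std::string>& request_headers,
                         const std::vector<std::string>& vary_allow_list) {
  std::string key = host + path;
  for (const std::string& header : vary_allow_list) {
    const auto it = request_headers.find(header);
    // Absent headers still contribute a separator, so "no header" keys
    // differently from any concrete value.
    key += "\n" + header + ":" + (it != request_headers.end() ? it->second : "");
  }
  return key;
}

// Example: two requests differing only in accept-encoding get distinct keys.
// makeCacheKey("example.com", "/index.html",
//              {{"accept-encoding", "gzip"}}, {"accept-encoding"});
```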
E
That is currently not in this spec; we may do another spec which talks about how you could, later, build all the semantics that are required for this HTTP cache plugin on top of a key-value store. We'll be iterating on that, and also working on getting the code out, which is currently, you know, not in a shareable state...
C
So for the general caching infrastructure, I guess you have some particular use case in mind. There'll be the main filter, of course, but then, whether it be Redis or something else, do you foresee having some reference, full and complete implementation in the public repo? Or do you think that people are going to have to go off and build some back end themselves, basically?
B
I also have a question about the doc. Say that I would want to implement a cache that is actually internal to Envoy, in that filter. Currently the threading model is that, I think, all outbound connections, all I/O, is running on the same dispatcher and the same thread as the inbound I/O, right? So the client and the server connection pools are aligned, and I was wondering if it's possible to keep that when I plug this cache underneath, or will there be, like, thread switching going on?
E
We've
talked
quite
a
bit
about
this
I
think
that
if
you
have,
you
know,
I
think
you
could
imagine
one
scenario
where
you
would
have
an
in-memory
cache
per
thread,
but
that
would
be
a
little
insane
because
you
would
I
think
the
most
of
our
thinking
is
that
what
you
would
probably
do
is
you
would
suffer
some
walk
overhead
when
you
used
an
inch.
But
what
you
do
is
you
could
program?
E
How
much
of
that
you
were
willing
to
accept
by
shorting
the
cache
into
as
many
shards
as
you
wanted,
and
so,
if
you
have
like
a
hundred
core
machine,
maybe
you
would
mean
you
know
a
nineteen
way.
Shard
charted
cache
and
you
would
get
fairly
infrequent
hits
on
that.
I
definitely
would
make
those
caches
not
too
huge,
because
you
want
to
do
most
of
the
heavy
lifting
in
in
some
kind
of
share
back
my
credits
for
the
super
hot
requests.
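A minimal sketch of that sharding idea, with invented names, assuming a simple string-keyed in-memory cache: the key's hash picks one of N shards, each with its own mutex, so concurrent workers rarely contend on the same lock.

```cpp
// One logical in-memory cache split into N independently locked shards.
// Worker threads hash the key to pick a shard, so two threads only contend
// when they happen to hit the same shard. Illustrative only.
#include <array>
#include <cstddef>
#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

template <std::size_t NumShards>
class ShardedCache {
public:
  void insert(const std::string& key, std::string value) {
    Shard& shard = shardFor(key);
    std::lock_guard<std::mutex> lock(shard.mu);
    shard.entries[key] = std::move(value);
  }

  std::optional<std::string> lookup(const std::string& key) {
    Shard& shard = shardFor(key);
    std::lock_guard<std::mutex> lock(shard.mu);
    const auto it = shard.entries.find(key);
    if (it == shard.entries.end()) return std::nullopt;
    return it->second;
  }

private:
  struct Shard {
    std::mutex mu;
    std::unordered_map<std::string, std::string> entries;
  };
  Shard& shardFor(const std::string& key) {
    return shards_[std::hash<std::string>{}(key) % NumShards];
  }
  std::array<Shard, NumShards> shards_;
};

// e.g. ShardedCache<19> cache; // more shards => less lock contention
```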
B
And another question that floated to the top of my mind when I read this is: would this complicate life for plugin builders if there are, like, different threading models? Or is the cache, like, transparent to the filters running upstream and downstream of the cache?
E
It would be great to see some comments in the doc about, like, the problems you've seen in the past with those kinds of interactions between a cache and a server and other filters. We should make sure that, you know, it's definitely not too late to try to design around that, to prevent that from occurring. Okay.