From YouTube: Envoy Community Meeting - 2018-03-27
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
D: You know, the issue is who the maintainers are, and the feeling also is that there's a bunch of people out there who either don't know or don't want to learn C++, or they're just not interested in doing network programming, but there are people that would like to help. So I think our thinking is that we'd like to get a pretty good idea of all the different types of automation that we would actually like, and then possibly try to reach out to the larger community to find people who might be interested.
D: So that's the general thinking. I don't know that we have to discuss it at super length today, but I would love to hear people's thoughts on either what types of automation people would like to have, any ideas on venues in which we could find people that might want to help, or anything like that.
F: I'll throw in one plug for a PR I have up right now, which is about automating issue creation for deprecations, I think. I wrote that in an hour or two, and it's just using PyGithub and Python, and within, you know, 50 lines of code or whatever, you can do a lot. This isn't a particularly heavyweight thing to do.
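For context, the core of such a script really is small. Here is a rough, hypothetical sketch of the same idea using the plain GitHub REST API via the standard library (the actual PR uses PyGithub; the repo name, label, and the deprecation-item naming below are illustrative assumptions, not taken from that PR):

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def build_issue_payload(version, deprecated_item):
    """Build the JSON body for a deprecation-tracking issue (illustrative)."""
    return {
        "title": f"{deprecated_item} deprecated in {version}",
        "body": (
            f"`{deprecated_item}` was deprecated in Envoy {version} and is "
            "due for removal. This issue was filed automatically."
        ),
        "labels": ["deprecation"],
    }

def create_issue_request(repo, token, payload):
    """Prepare an authenticated POST to the GitHub issues endpoint.

    The caller would execute it with urllib.request.urlopen(req).
    """
    return urllib.request.Request(
        f"{GITHUB_API}/repos/{repo}/issues",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
```

The only moving parts are building a JSON payload and one authenticated POST per deprecated item, which is why the whole script fits in a few dozen lines.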
D: But yeah, I mean, you can do super amazing things with the GitHub API, so I think this is kind of a larger issue, which is that it'd be really nice to figure out. We could definitely use a bot that we actually can write and deploy. It's not super clear to me where we would deploy it, so I think that's probably a question that we could take offline with Chris from the CNCF, just to talk about whether we should deploy it on something like Heroku, or...
D: You probably want that to happen, because you want the bot to be able to comment back and forth. But I think that if we find a way to host a bot, we can then find people to work on it and actually work on deploying it. Again, it sounds simple in theory, but if we have a bot deployed somewhere, we need to figure out things like source control for the bot code, and how do we deploy it?
F: Another related topic: you know, I find as a reviewer that the GitHub interface is pretty terrible at letting me see updates and the last time at which a PR was modified, and giving me an idea, as I do my review sweeps every few hours, of what needs attention. I would really like something better.
D: I don't know of anything off the top of my head. That doesn't mean it doesn't exist, but even for what you just talked about, we could write a hundred lines of Python code that would basically do a sweep and send people emails. So I think there's some pretty low-hanging fruit.
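As a sketch of what the heart of that hundred-line sweep might look like, here is a hypothetical snippet that filters PRs by their last-update time before anyone gets emailed. The `updated_at` field mirrors what GitHub's pull-request listing endpoint returns; the staleness threshold is an arbitrary illustration:

```python
import datetime

# Illustrative threshold: nag about PRs untouched for a week.
STALE_AFTER = datetime.timedelta(days=7)

def stale_prs(prs, now):
    """Return the PRs (dicts shaped like GET /repos/{owner}/{repo}/pulls
    results) whose `updated_at` timestamp is older than STALE_AFTER."""
    stale = []
    for pr in prs:
        # GitHub returns ISO-8601 timestamps like "2018-03-20T00:00:00Z".
        updated = datetime.datetime.fromisoformat(
            pr["updated_at"].replace("Z", "+00:00")
        )
        if now - updated > STALE_AFTER:
            stale.append(pr)
    return stale
```

The remaining lines of such a script would just be the API pagination and an `smtplib` call per stale PR's author.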
D: No, that's true, yeah. So at Lyft, again, not to plug our totally hipster development, but we have a whole set of bots that live on Slack and also actually go through and talk to GitHub, and you'd be surprised at how little code that actually is. The Slack API is super simple, the GitHub API is super simple, so it's more just: where do we host the code?
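To illustrate how little code the Slack side takes, here is a hypothetical reminder bot posting to a Slack incoming webhook; the message format, repo, and webhook usage are illustrative assumptions, not Lyft's actual bots:

```python
import json
import urllib.request

def review_reminder(pr_numbers):
    """Format a Slack message linking PRs that are waiting on review."""
    links = ", ".join(
        f"<https://github.com/envoyproxy/envoy/pull/{n}|#{n}>"
        for n in pr_numbers
    )
    return {"text": f"PRs waiting on review: {links}"}

def post_to_slack(webhook_url, payload):
    """POST a JSON payload to a Slack incoming webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

An incoming webhook needs no OAuth dance, which is a large part of why such bots stay small.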
D: So this discussion item is an open-ended discussion; I don't really have an answer right now. In the future, as we increasingly get people that want to have extensions that are potentially in the Envoy repo (you can think of how the Linux kernel works with device drivers), there's a whole process in terms of how do we actually scale that in terms of CI. That ranges from models where we have extensions in the repo that are not actually tested in CI and are only given a cursory look by maintainers to check that they're somewhat sane, to models where they're tested as part of CI and we actually have some type of dual sign-off process: first-level owners who do most of the reviews, and then a maintainer who just does a quick sanity pass. So there are lots of different models here; I don't really have any answer right now.
F: So one thing I'll throw out there: I think it is a good idea to try to maintain a reasonably high quality bar as we do this, because one of the main benefits of upstreaming code from companies and having all this code in the central repository is that as we move Envoy forward and things break for these extensions, we fix them. If these extensions don't have suitable tests and that kind of stuff, that's going to be much harder, right? Yeah.
D: I mean, yeah. So one thing that had occurred to me is that what we could do is actually have a new repo, basically called something like envoy-extensions-sandbox or something like that, and that repo is basically a total free-for-all. It's not a free-for-all in that everyone gets commit access, but it's more of a free-for-all in the sense that every extension in that repo is not necessarily endorsed by the core Envoy maintainers.
D: But it's a place where people could actually collaborate, and if an extension shows a particular quality bar, or if the people that work on that extension want to promote it into the core Envoy repo, they would have to match Envoy style, they would have to do CI and tests, they would have to do code coverage, and then they would likely also have to essentially volunteer to be owners and maintainers of that extension. That would involve not having a single point of failure, so having at least two people that can basically do reviews. And again, this is not a fully formed idea, but I think what that would do is allow people to actually host extensions in the Envoy org, and then, if the extension looks promising, it would allow people to agree to a higher bar.
D: Yeah, so that was my rough thinking, just in terms of having that kind of dual layer. Again, all of this has to be worked out, but the way that I would see it happening is almost very similar to the way that the CNCF started to talk about their sandbox versus incubation graduation levels.
D: So it's the kind of thing where, to get into the sandbox extension repo, you just have to get the endorsement of one maintainer or something like that, right? But then to actually get into the main Envoy repo, you'd have to go through the entire review process; you'd have to agree to being owners and doing code reviews and stuff like that. So that was my super rough thinking, and I feel like...
D: The current reality is that Envoy is becoming popular to the extent that there are going to be increasingly a lot of companies that want extensions for their products, whether those be security products or logging products or stats products, and I can just see that there would be an explosion of extensions in the main repo. I feel that if we don't get ahead of this, it's going to become chaos. I mean...
D: I mean, there's absolutely no way that we can require the quote-unquote core maintainers to be reviewing, you know, 16 different stats extensions and 14 different logging extensions. It just doesn't make sense, so I think there has to be this kind of approach.
D: I've kind of come to the opinion, though, that trying to do a multi-layer approach within the main repo is probably going to be pretty chaotic, because you're either not going to test it in CI, or you're going to have relaxed standards for this one directory, which is kind of horrible, right? So to me it's like...
D: Of course, if we change the filter API and there's this giant sandbox repo, we probably have to go through and fix it to some extent. I think it's self-correcting in the sense that, as Envoy becomes more popular and more extensions are written, the bar to change this API just gets higher and higher.
D: So, you know, I do feel like if people change core functionality in a way that breaks a bunch of sandbox filters, they should probably go and actually fix them, but this all has to be codified. That's why I think, before we declare open season in the main repo (there are already people emailing us off-list to say they'd like to do filters for X, Y, and Z), I'm just really hesitant to start letting people commit filters until we really think this through, yeah.
F: One other thing I'm thinking about with scaling: if you only make a best effort at CI on the extension sandbox repo, then as we get more and more developers and more people breaking things, we'll get essentially hidden failures and things like that, and that will slow down the velocity there.
D: And I think that's totally reasonable, but I do think that, per this discussion, there's going to be a pretty large document that comes out of this, whether it be a Google Doc or Markdown, and I really feel that we have to nail this policy before we start; otherwise it'll be total chaos. So my current thinking is basically that for extensions today that are either written or endorsed by core maintainers...
D: We just keep going with the status quo. So, for example, I'm going to add a tap/dump extension next quarter, and that's something that, as a core maintainer, I will own: I will make sure that it works properly. But I feel like for other organizations that aren't core maintainers and that want to basically own extensions, this is where we have to get this policy down.
D: So what I can do here, since I don't foresee anyone else jumping at the bit to actually sign up for this, is go through and just do a straw proposal for what this would look like. But again, I don't really have the answers here; I think this is going to take some iteration, so I would love to work with other people who are interested in this. If you're interested in this topic, definitely reach out; I'm sure Josh would be, so...
D: I can definitely work with Josh on this, but if there are other folks that are interested, let's chat. I would actually suggest that we start a small working group, with like three people or something, and just try to hammer out this proposal, and then we can get it out for people to actually review. Yeah.
D: Okay, yeah, so you can assign that to me, just to at least do an initial straw proposal. But if you're out there and you're interested in actually helping with the proposal: I don't have the answers here, so I think it's going to be a collaborative process.
D: Right. So I think extensions that are in the current repo are already blessed extensions: they're being used in production, and we maintain them as core maintainers. And to be clear, I see those extensions growing, but I think the point is that we have to keep the quality bar high, and, more importantly, figure this out for extensions that none of the core maintainers run in production.
D: Okay, on this topic, real quick: something came up this morning in code review, and since people are here I wanted to discuss it really briefly. I've been moving the code over into the extensions folder, and there was a question as to whether we should move TCP proxy and the HTTP connection manager. I had put some verbiage in the GitHub issue about why I actually moved them; the TL;DR is that I moved them because they are loaded as extensions, even though they might not be able to be compiled out.
D: The alternative is to keep them where they are, which I think is a little worse from a code-discoverability standpoint, and then option 3 that I threw out is actually making a new directory called something like core extensions; those extensions would follow the same directory structure as extensions, but they would not be able to be compiled out. So I think those are our three options, and I wanted to throw that out there for discussion.
F: Yeah, I mean, my comment on the GitHub issue is largely just a reflection of the TODOs you had in there to reduce the dependency of things like WebSocket on TCP proxy. Do you think it'll be possible in general to remove the dependency of all of core Envoy on the HTTP connection manager and TCP proxy?
F: So they're basically compiled in. I think the existing structure is fine; what would be nice is if we just make it an exception to the rule, which says that from core code you're allowed to depend on TCP proxy or the HTTP connection manager, just those two (or a third one, whatever), and we actually lint that in check_format: we can easily just analyze the build files and verify it.
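That lint could be very small. Here is a hypothetical sketch of the check over Bazel BUILD file contents; the allowlisted target paths below are illustrative assumptions about the layout, not Envoy's actual check_format code:

```python
import re

# Illustrative allowlist: core code may depend only on these two extensions.
CORE_ALLOWED_EXTENSION_DEPS = {
    "//source/extensions/filters/network/tcp_proxy",
    "//source/extensions/filters/network/http_connection_manager",
}

# Match quoted Bazel labels that point into the extensions tree.
DEP_RE = re.compile(r'"(//source/extensions/[^"]+)"')

def check_core_build_file(build_text):
    """Return the extension deps found in a core BUILD file that are
    not on the allowlist (i.e. the lint violations)."""
    violations = []
    for dep in DEP_RE.findall(build_text):
        target = dep.split(":")[0]  # strip the :label suffix, keep the package
        if target not in CORE_ALLOWED_EXTENSION_DEPS:
            violations.append(dep)
    return violations
```

check_format would run this over every BUILD file under the core source tree and fail CI on any non-empty result.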
D: Yeah, that would... okay, yeah, I will make a follow-up issue, because as I'm doing this, a bunch of follow-ups are becoming clear. For example, in order to compile out Redis, you actually have to be able to have a pluggable health check, which is something that we probably want anyway. So there's a bunch of follow-up items here that will come out of this; I'm going to do a tracking issue for the follow-ups, and I'll be liberal with my TODOs.
F: In the two minutes that we have: yeah, so what's come up recently is that Istio is considering moving to a monorepo model. They currently have a model similar to ours, where their APIs are separated from the core. There's a very good reason for the separation: you don't want to force a dependency on all of Envoy onto consumers of the APIs, and logically the APIs form a specification and the Envoy proxy is an implementation. Those are essentially the two overarching reasons, at least for us, and I assume for Istio. But it is a lot of overhead, as I'm sure everyone has experienced...
F: There are the issues of having to check in your docs in one repo, make a change here, change the docs there, change the API in the data-plane-api repo, then change the SHA back in the main repo, and so on. There's a lot of toil there which could be avoided if we were all in a single repo. Now, obviously, the disadvantage is that if you just do that naively, you force a dependency on all of Envoy.
F: The way this is solved, I believe (I think it came up in the context of Kubernetes, and I think Turbine Labs also mentioned that they'd done something similar), is that you just have a periodic bot job, essentially a cron job, which goes and synchronizes the repos.
D: Yeah, so I mean, that seems like a perfect compromise to me, which is to basically move the API and the docs into the main repo and then do a nightly sync out to the existing data-plane-api repo. So I would suggest that we open a tracking GitHub issue for that, or maybe just link it to the bot issue. But this will be blocked on us having some type of bot cron system.
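A minimal sketch of what that nightly sync bot could do, assuming a one-way copy of an `api/` tree from the main repo into the standalone repo. The repo URLs, the directory name, and the commit-message format are all assumptions for illustration:

```python
import pathlib
import shutil
import subprocess
import tempfile

def commit_message(main_sha):
    """Commit message recording which main-repo commit was synced."""
    return f"Sync api/ from main repo @ {main_sha[:7]}"

def sync_api(main_repo_url, api_repo_url, subdir="api"):
    """Clone both repos, mirror the main repo's api/ tree into the api
    repo, and push a commit only if something actually changed."""
    with tempfile.TemporaryDirectory() as tmp:
        main = pathlib.Path(tmp, "main")
        api = pathlib.Path(tmp, "api")
        subprocess.run(["git", "clone", "--depth", "1", main_repo_url, str(main)], check=True)
        subprocess.run(["git", "clone", "--depth", "1", api_repo_url, str(api)], check=True)
        sha = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            cwd=main, capture_output=True, text=True, check=True,
        ).stdout.strip()
        # Replace everything in the api repo (except .git) with main's subdir.
        for entry in api.iterdir():
            if entry.name != ".git":
                shutil.rmtree(entry) if entry.is_dir() else entry.unlink()
        shutil.copytree(main / subdir, api, dirs_exist_ok=True)
        subprocess.run(["git", "add", "-A"], cwd=api, check=True)
        # `git diff --cached --quiet` exits nonzero when changes are staged.
        if subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=api).returncode != 0:
            subprocess.run(["git", "commit", "-m", commit_message(sha)], cwd=api, check=True)
            subprocess.run(["git", "push"], cwd=api, check=True)
```

Run nightly from cron or any CI scheduler, the `git diff --cached --quiet` guard keeps the synced repo's history clean on days with no API changes.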