From YouTube: Envoy Community Meeting - 2018-04-24
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
E
Nicholas and myself have put a proposal out for sort of evolving xDS. I think I've mentioned this in previous community meetings: we have some pain points around scaling xDS for very large configuration sizes, and also around use cases such as serverless, where you need to do late binding. You need to load on demand from Envoy, which implies request holding in the data path, additional cost in resources, and so on. So there is a very concrete proposal out there.
C
There's a bunch of comments on the doc. I would encourage everyone to go and read the doc. I don't consider it a huge change; the implementation within Envoy is actually really simple, but it has large implications for long-term use. So if people out there care about this, I would really encourage people to look. I'd still like to resolve some of the version comments.
C
So if you want to talk about that now for like five minutes, that's fine. I don't think we're actually very far off. My main concern was really that, as I started to look at implementing Envoy-side version tracking, stats, and admin output, I basically realized that in the incremental case...
C
Sorry, what? In the incremental case, having one version per resource but not a transactional version is pretty problematic from a debugging perspective. We can do it in the doc, but I would like to figure out a way where... I get that you might want to have a per-resource version. It would be awesome if we could simplify it such that there's a transactional kind of version, and then any resource that is applied in that transaction just gets...
C
...that version. I think that's a lot simpler. But if we want a per-resource version, I would suggest that we keep the top-level version. So there's basically the concept of a transaction version, and then we optionally allow a per-resource version. If no per-resource version is supplied on a value, we'll take the transaction version and apply that to each resource that was applied. And if there is a per-resource version, we'll keep track of the transactional version for debugging reasons, and then on a per-resource basis it'll apply the per-resource version. Yeah.
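The fallback rule being proposed can be sketched in a few lines. This is a minimal illustration only, with hypothetical field and function names; the actual incremental xDS schema was still under discussion on the doc at this point.

```python
# Hypothetical sketch of the proposed version bookkeeping: each resource
# records its own version if the management server supplied one, otherwise
# it inherits the transaction-wide version. The transaction version is
# always kept separately for debugging ("what was the last exchange?").

def apply_transaction(store, transaction_version, resources):
    store["last_transaction_version"] = transaction_version
    for name, (config, per_resource_version) in resources.items():
        store["resources"][name] = {
            "config": config,
            # Fallback rule: no per-resource version => transaction version.
            "version": per_resource_version or transaction_version,
        }

store = {"last_transaction_version": None, "resources": {}}
apply_transaction(store, "v4", {
    "cluster_a": ({"lb": "round_robin"}, None),   # inherits the transaction version
    "cluster_b": ({"lb": "ring_hash"}, "v4.1"),   # keeps its own version
})
```

Under this scheme a management server that only tracks one version stays simple (everything inherits the transaction version), while one that versions resources individually can still be debugged against the last transaction seen on the wire.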
E
I mean, the thing is this: I do have a worry about there being some complexity here, in particular around understanding how you're supposed to use these different versions, and different management servers will implement this somewhat inconsistently. For example, in some situations you're only going to have per-resource versions, and you won't have this CDS-wide versioning, right? Yeah.
C
Yeah, no, I totally hear that. I just feel pretty strongly that it's a non-starter to not have this transaction version, just because it's a super common debugging situation in which your example holds. Like, you go from zero to zero and you don't know what happened, right? We have to allow people to debug this.
E
I think I would want to know, in debugging, what is the last transaction version that arrived from the wire. Exactly, yeah. That's different: if the management server is just telling me the nonce of the last exchange I made with my management server, it tells me nothing about the semantic content of the resources I have loaded.
C
My point is, I suspect that most people do not need per-resource version complexity, and if you gave them that one version field, that is actually going to be enough for most people. Because the way that I would likely implement it at Lyft, even if we were doing incremental, is that I would be incremental, but within a particular back-end config SHA, effectively. So it's like, as I ask for the resources, right...
C
Resources might come back at a particular version, and the version field actually might be the same, so it might still be, like, version 4, right? And then, as I ask for more resources, I just internally track them at version 4 when they were applied. Then let's say someone does something in the back-end config system and it switches to version 5. So then, if I incrementally ask for something, the next message would come back with version 5, and that's independent of the nonces.
C
So that's why I actually think that you kind of need them to be separate. I totally get that there's extra complexity here, but I think that what I've proposed is the most flexible, and it'll make everyone happy: however you want to design your system, you can do it, basically.
C
No, no, so that might be... I'm not sure what exactly is happening today, but that's how we're using it at Lyft today. We have a back-end SHA, basically, that is the version, and that version stays constant even through different fetches, right? So basically we have a SHA of the config, and the version changes as the config SHA changes, right? Okay.
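The Lyft-style scheme described above can be sketched as follows. This is an assumed illustration, not Lyft's actual code: the version string is just a digest of the back-end config snapshot, so it stays constant across repeated fetches and only moves when the config itself changes.

```python
import hashlib
import json

def config_version(config):
    """Derive the xDS version from a hash of the config snapshot, so the
    version is stable across fetches and changes only with the config."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

snapshot = {"clusters": ["a", "b"]}
v1 = config_version(snapshot)
v2 = config_version(snapshot)                        # repeated fetch, same snapshot
v3 = config_version({"clusters": ["a", "b", "c"]})   # snapshot changed

assert v1 == v2  # same SHA across fetches
assert v1 != v3  # new SHA only when the config changes
```

With this approach the one transaction-level version field is enough: every resource applied from a given snapshot is simply tracked at that snapshot's SHA.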
E
I mean, the main reason is just that doing this with REST is a lot more complicated, because in gRPC you have bidirectional streaming semantics, so it's very easy to imagine delivering some partial resources and then later on asking for more, and so on. To have this two-way exchange with REST, we would have to design a way to actually retrofit that on top of REST. I mean, is there actually a strong need for this level of scalability and on-demand-ness in the REST world?
G
The control plane we currently use is built on the REST implementation of the Envoy data plane API, and we're looking at using on-demand config loading, because we've got a similar number of clusters to support as the other folks who are interested in this feature. ...But couldn't you switch to gRPC? I mean...
C
So here's probably our stance: I don't think we're opposed to supporting this functionality in REST, but I don't think that we can assume that the people who are doing the work have to backfill it, because it's totally non-trivial. So if you or someone else wants to come in and figure out how to do it with REST, I don't think there's going to be any opposition to that. But I think you'd probably have to do that heavy lifting. Okay.
C
Okay, so in the interest of time, why don't we go back to the doc. I just want to make sure that we really think through all this versioning stuff, because it's the kind of thing where, if we don't think through it now, we're going to have a problem later. So it's worth investing some time into that now. Maybe...
C
Why don't we do this: why don't you make a new issue in Envoy tracking implementing incremental xDS, put a link to the doc in there, and maybe just say that we're going to have a meeting later this week and see if anyone else wants to join. Then we could schedule a dedicated meeting towards the end of this week. Okay.
E
Myself, and I think Constance is listed as a mentor as well. This is Anna ruin seeker ectly, and he'll be working on fuzzing. We actually kicked off the fuzzing efforts already, and we're making our way through a whole bunch of backlog of sort of server config file stuff. I plan on looking at protocol fuzzing shortly, but I think he'll be looking at all those; there's a lot of work to do there. We actually have an open issue, and I actually added to the issue...
E
...yesterday, a list of potential projects to work on. Please do contribute to that issue if you have additional things you'd like to see fuzzed in Envoy. This is actually a really useful way to find bugs. We have continuous fuzzing running using Chromium's ClusterFuzz infrastructure, and yeah, you should expect that your adversaries are also doing this.
C
My thinking is: let's have them start on the server validation, which, per our conversation... I'm sure it exposed like 60 bugs. Because that's so similar to what you already did, it should be pretty straightforward to actually make that happen. So my thinking is to have them do that, have them fix like the 50 bugs that occur in the validation path, and then we can maybe move him on to more complicated stuff. Yeah, I think maybe EDS after that.
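The validation-path fuzzing idea being discussed has a simple shape, sketched here as a toy Python loop. Envoy's actual fuzzers are C++ libFuzzer targets run under ClusterFuzz; the validator and corpus below are invented for illustration. The point is just that the harness mutates inputs and treats anything other than a clean rejection as a crash-class bug.

```python
import json
import random

def validate_config(raw):
    """Toy stand-in for a server config validator: it must either accept
    the input or raise ValueError. Any other exception is a bug."""
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("not valid JSON")
    if not isinstance(cfg, dict):
        raise ValueError("top level must be an object")
    return cfg

def fuzz(iterations=1000, seed=0):
    """Mutate a small seed corpus and count crash-class failures."""
    rng = random.Random(seed)
    corpus = ['{"listeners": []}', "[]", "{"]
    crashes = 0
    for _ in range(iterations):
        raw = list(rng.choice(corpus))
        # Flip one character to a random printable byte.
        raw[rng.randrange(len(raw))] = chr(rng.randrange(32, 127))
        try:
            validate_config("".join(raw))
        except ValueError:
            pass            # clean rejection: the expected outcome
        except Exception:
            crashes += 1    # anything else would be filed as a bug
    return crashes

print(fuzz())
```

Because the toy validator above only ever raises ValueError, this run finds no crashes; against a real validation path, the same loop is what surfaces the kind of bug count mentioned above.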
E
Not that you need significantly more resources than CI has available, but we do essentially have CI for fuzzing with this ClusterFuzz thing I described. This is infrastructure that the Chrome project, or Chromium, operates for a whole bunch of open source projects: on every commit that you make, it will actually check it out, spin up a bunch of VMs in GCP, throw a bunch of resources at fuzzing, and file issues automatically with the Envoy security team when they come up. Thanks.
C
So that's really exciting. Okay, let's just talk really briefly about the DCO bot. I sent off a really complete nasty email to Chris last week. I'm kind of at my wit's end: this is by far the most painful thing that we deal with. It's an endless stream of people that don't know what to do, or the bot is broken. So I...
C
You know, to me, and I put this in that email, I think there are some really basic usability things that can be done to just help guide people from the bot on what to do and what went wrong. So I guess my main question to Chris is, since most CNCF projects are moving towards this, could CNCF invest some resources in making the bot less terrible? Yeah.
D
I mean, if you give us explicit issues, we're happy to fund some work on it. It's all open source, so we're happy to improve it. So just let us know in detail what you want. In the nastygram you sent, you listed a couple of things, so we'll take a look at those. But for other folks in the Envoy community, if you have specific issues you'd like to see improved...
C
People should just reach out and say what their issues are. My main things, which I think would fix it, and which are already in the email: the bot basically needs to be super clear about what it was checking and what was wrong, and it needs a link to some page with detailed information about...
C
...what you did wrong, possibly with the git commands to fix it. The entire thing just needs to be more of a hand-holding process to help people understand what went wrong. And there are edge cases in the bot: there was something that happened last week where, if your email doesn't match your GitHub email, the bot doesn't even respond, it just hangs. We'd need to fix those bugs as well. All right, got it.
B
So, last night I pushed a pull request that is entirely informational. It is, you know, a proof of concept in the classic piece-of-crap implementation style, meaning I basically just did whatever I had to do in order to make the thing work. So please take the amount of duplication, and just plain hacking, with a grain of salt, but I think it provides some value in setting some discussion points on how we move forward to make it work in a deployable fashion.
B
So I guess, to summarize, the biggest highlights, the points that need to be resolved, are these. Envoy currently has two socket classes: there's the transport socket and then there's the connection socket, and they're different classes. The integration of the transport socket stuff through the extension mechanism that was just recently pushed worked fantastically, right? So that's great. The problem is that the listen and connect...
B
...side of things is a separate implementation, and so we need to figure out what the plan is to unify that, or to put in a parallel effort to allow the specification of an alternate transport for the connection side. I took a stab at naively just trying to refactor things and got entirely in over my head. So at this point, what I'm looking for is some direct feedback on...
B
...where I hacked things and where things are just done ignorantly, because it was what I had to do to get it to work, and on how we can help. There are really two phases that need to happen: one is we need to shape the Envoy side so that the extensions are clean, and then I can go add the VPP implementation under that. I'm perfectly willing to work on any or all of that. And so the question is, you know...
B
...how do you want to move forward in identifying what's the right thing to do? The other caution I have is that, in my attempts to do some naive refactoring, it became clear to me that this is going to be high risk if we do it the way I would have done it as a clean-slate implementation, meaning I would just have a single socket class and then inherit from that where we had separate entities that needed different characteristics of it. I think that's going to be way too risky to lop off in one chunk, and so I really need you guys to let me know how else we could cut this, such that we can take more baby steps that are lower risk, because I don't think this is a trivial implementation. Yeah.
C
What I'd like to do is tag like four or five people on the PR to take a first pass and look at it, and then I think from there we can either talk about it again in two weeks' time at the next community call, just because I kind of have a feeling it's going to be complicated, or we do it all in the PR. But you can start there.
B
I also wrote up a Google Doc that's linked in the PR that describes the overall scenario. I'll continue to extend that. I realize I didn't include the complete test configuration, so I can do that, so that somebody else could stand up what it is I ran and tested. And, you know, there are sections of that... okay.
D
That is certainly constructive, yes. I'd love to see that pulled in. I know that we had some stuff around the QUIC work where there was an interest in things close to this as well, and if we generalize out from what we're doing right now with sockets, I think that would be a very valuable set of input. The how-to, yeah.
B
So I guess the other comment I would make is that, given we've got multiple projects that are going to work on this, it would help if we were to solidify the requirements. I don't know whether you want to do that with the current Google Doc I have, which we can expand, or run a separate doc that codifies, at the very least, the use cases that we need to cover. Yeah.
B
I think we need to meet and discuss it. Mostly what I have are a bunch of questions when you look at the write-up, just because I don't have enough experience with the Envoy code base to make any meaningful contribution at this point from the VPP POC. The other thing to note is that what I did here was just take one of the test cases for the VPP host stack and add Envoy as a TCP proxy into it.
D
It would be great if it were four weeks from now, and I would certainly not try and slow things down, but I would be shocked; you would be seeing my shocked face. Particularly when trying to get something right that's usable by multiple parties, it will take a little bit of time to sort out, and, like I said, sorting things out this summer I think is a reasonable aspiration. Yeah, let the chips fall, yeah.
C
I mean, I'm pretty aggressive in terms of getting stuff done, so I think we can do early summer. Let's try to iterate on this, but I would also be shocked, shocked face, if we can figure out a design in less than four weeks; it's just going to take a bunch of time.