From YouTube: Ambient Mesh WG Meeting 2022 10 19
A
Yeah, it's recorded — cool, all right. Yeah, so I already shared this doc I want to go over. I've talked a lot in the past about how the ztunnel doesn't really scale.
This doc is now kind of frozen. Myself and Steven and a few others have been looking into all the different ways that we can make the ztunnel scale and what's the best path forward. The doc is fairly long and it's just a big wall of text.
A
There's no pictures, so I don't think we should just go over it line by line. Maybe I'll give a summary of what I think we should do and the problems we're trying to solve, and then people can ask questions and go more in depth where needed.
A
So, some background — I think most people here are probably familiar, but the ztunnel has a very specific role of just tunneling data over mTLS, so it has a much, much smaller scope than the waypoint proxies or sidecar proxies that do, you know, the whole kitchen sink of HTTP processing, wasm, you know, all sorts of stuff. So it's a much simpler problem to solve, but it also needs to scale really well in this sidecarless world. You can't just naively deploy sidecars in, you know, like a 10,000 pod cluster.
A
You have to kind of meticulously set up these Sidecar scoping objects. Because the ztunnels are per node, not per workload, it's much harder to do that, and we don't want users to have to do that. So the scaling problem becomes a lot different, because we need the ztunnels to be able to access potentially arbitrary pods in the mesh, right.
A
So, let's see — requirements, scale. These were kind of the targets that I was looking at when I was thinking about scale. We don't have, like, concrete numbers on what they should be; these are just what I thought seemed like good numbers to strive for. Which would be that at rest — so, like, just the baseline, when we don't have traffic — the ztunnel should use under 75 megabytes of RAM for a 150 pod cluster, and for every pod that's churning—
A
We should be sending less than 500 bytes or so. So as you get into bigger and bigger clusters, obviously there's more pods churning, so if you have an expensive per-pod cost, then it won't scale to large clusters. So, kind of going back: we had initially implemented a Go prototype of the ztunnel, then we transitioned it to Envoy, and we've worked over — I don't know, the past six months, maybe a year — trying to get Envoy to work.
A
We also started exploring another prototype that's basically a copy of the Go ztunnel, but rewritten in Rust. So it's the exact same architecture, almost copy-pasted, just in a different language. And then we ran a variety of tests on these three implementations.
A
So I want to give a huge caveat that all three of these implementations are effectively prototypes. None of them have been, you know, thoroughly optimized, so this is more of an indication of where things can go, not final numbers by any means.
A
So if we did, like, just a standard 10,000 QPS with a bunch of connections, we can see that the latency of the Rust proxy is slightly better than — actually, sorry, before I go into this, let me jump ahead to my proposal and then I'll go more into why. I'm going to propose that we write our own custom ztunnel in Rust, and this is kind of the justification for why. So the latency is moderately better, and RAM is quite a bit better on this test.
A
It wasn't really that convincing, but as we go on we can see, like, for example — this is throwing max QPS at it instead of just 10,000 — the Rust implementation handles the most throughput, and the memory usage is hugely better, like 90 megabytes versus 4.5 megabytes. That is a huge difference.
A
We see similar tests in iperf — the throughput's a little bit worse, but I think that's because of, you know, some tuning that we can do. Some of the really, really big differences start to come when we look at not the data plane performance in isolation, but the control plane performance in a large-scale test.
A
So if we have just a cluster with 2000 pods and we restart one pod every two seconds, we were seeing that Envoy was using 500 megabytes of RAM while the Rust proxy is only using 20 megabytes, and on the istiod side we were using, like, eight cores — that might have been the max, with the limit that was set for istiod CPU. It was almost zero for the Rust one. If we go to 20,000 pods — yeah, go ahead, Lin.
B
Yeah, quick question on this comparison number four. I assume with Envoy you're sending, like, the full XDS configuration, so istiod has to do more work, where in the Rust implementation, because you are proposing this new workload config API, istiod would do minimum work. Yeah.
A
Yeah, so the reason that it's so much better is because we've been forced to use the XDS APIs and try to put something that is not purpose-built for them into that shape, right. So there's a lot of inefficiency for what we're trying to represent. For the Go and Rust implementations, we just have a purpose-built protobuf — which is later on in the doc — that's exactly the information we need, as small as possible, and so it's much, much more efficient to, you know, manage and update. Go ahead, yeah.
C
I also want to point out that, with the custom implementation of the proto, we can finally do on-demand. So it's completely independent of the size of the mesh — you can have 10 million endpoints and it's supposed to perform the same, because whenever communication to a workload happens, that's when it gets the config, which avoids what's otherwise extremely difficult.
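A rough sketch of that on-demand idea: config for a destination is fetched only when traffic to it first appears, so resident state scales with the set of active peers rather than with mesh size. This is a hypothetical illustration — the type names and the closure standing in for an XDS request are assumptions, not the real ztunnel code:

```rust
use std::collections::HashMap;

// Illustrative placeholder for whatever per-workload config the proxy needs.
#[derive(Clone, Debug, PartialEq)]
struct WorkloadConfig {
    identity: String,
}

// On-demand cache: `fetch` stands in for an on-demand XDS request to the
// control plane, issued only the first time a destination is contacted.
struct OnDemandCache<F: FnMut(&str) -> WorkloadConfig> {
    cache: HashMap<String, WorkloadConfig>,
    fetch: F,
    fetches: usize, // how many times we actually had to ask the control plane
}

impl<F: FnMut(&str) -> WorkloadConfig> OnDemandCache<F> {
    fn new(fetch: F) -> Self {
        OnDemandCache { cache: HashMap::new(), fetch, fetches: 0 }
    }

    // Called on each outbound connection: hit the cache, or fetch exactly once.
    fn get(&mut self, dst_ip: &str) -> &WorkloadConfig {
        if !self.cache.contains_key(dst_ip) {
            let cfg = (self.fetch)(dst_ip);
            self.fetches += 1;
            self.cache.insert(dst_ip.to_string(), cfg);
        }
        self.cache.get(dst_ip).unwrap()
    }
}
```

The point of the sketch is the invariant: a mesh could have millions of endpoints, but this proxy only ever holds config for the destinations it has actually talked to.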
A
It's simply that building something purpose-built allows you to have a much more efficient outcome when the thing that you're trying to use is not close to what you want, right. Envoy is very generic, and the ztunnel is not the use case it was optimized for, so we're trying to fit, like, you know, a square peg in a round hole — whatever the saying is. Yeah, so going on — you know, at 20,000 pods, Envoy I couldn't even scale, but Rust was under 60 megabytes.
A
The really interesting one, I think, was that in an empty cluster with actually zero pods — well, there's like one pod, I guess, for the ztunnel itself — Envoy was using 60 megabytes while the Rust one was only two megabytes. So we're actually able to scale the Rust one up to 20,000 pods using less overhead than an empty Envoy, yeah.
A
So, let's see — this is all details, just to kind of give an example of what it looks like to be purpose-built. This is not the exact proto I'm proposing, but something similar where, instead of having, like, you know, all these nested fields and a bunch of Envoy stuff — where we're kind of mapping Istio resources to Envoy resources — we just put the exact info we need. So what do we need to know?
A
We need to know, for each pod in the cluster: you know, its name, its namespace, its identity, its address, whether it speaks HBONE or not, and what node it runs on. We may have a few other things, but the general idea is that we can just put the exact info we need in the most efficient representation and then send that incrementally using the XDS transport protocol — but not the XDS APIs — and get a lot of efficiency gains there. The rest is just implementation.
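To make that shape concrete, here is a minimal sketch — in Rust, since that's the proposed implementation language — of what such a purpose-built workload record and its per-node store could look like. Every field and type name here is illustrative, not the actual proto being proposed:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Illustrative per-pod record: just the fields the ztunnel needs, nothing more.
#[derive(Clone, Debug, PartialEq)]
struct Workload {
    name: String,
    namespace: String,
    identity: String, // e.g. a SPIFFE identity string
    address: IpAddr,
    node: String, // which node the pod runs on
    hbone: bool,  // whether the destination speaks the HBONE tunnel protocol
}

// The ztunnel only needs a flat map keyed by pod IP; incremental updates
// pushed over the XDS transport become simple inserts and removals.
struct WorkloadStore {
    by_addr: HashMap<IpAddr, Workload>,
}

impl WorkloadStore {
    fn new() -> Self {
        WorkloadStore { by_addr: HashMap::new() }
    }

    // An add/update push from the control plane.
    fn upsert(&mut self, w: Workload) {
        self.by_addr.insert(w.address, w);
    }

    // A removal push when a pod goes away.
    fn remove(&mut self, addr: &IpAddr) {
        self.by_addr.remove(addr);
    }

    // Lookup done on each new connection.
    fn lookup(&self, addr: &IpAddr) -> Option<&Workload> {
        self.by_addr.get(addr)
    }
}
```

Compared with mapping everything through generic Envoy resources, a record like this is a few hundred bytes per pod and updates are O(1) per changed workload, which is what makes the "under 500 bytes per churning pod" target plausible.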
A
Then there are details on how we're going to manage things like build tooling, CVEs, testing, licenses, linting, etc. So I—
A
I'll go into that, but I want to pause for questions — you know, anything people want me to discuss more.
A
The downsides, I think, are largely the risks of building our own thing, and building it in a language that we as a project don't have as much experience in, right. So with Envoy, obviously, if we find some bug in, I don't know, some low-level TCP proxying code, there's a lot of people that care about Envoy and that may fix it. Now, that's the theory.
A
In practice, we often end up needing to fix things ourselves in Envoy, because we're using bespoke enough code paths that no one else actually runs into these issues — and we lack a lot of ongoing expertise to do that.
A
The other thing is, like, you know, the language. Of course, it's not just a language, it's a whole ecosystem — there's libraries, there's CVEs in those libraries, you know, security policies, that sort of thing. So I did a lot of research into this, and I've written about it a bit here. To me it is certainly a risk, but I think it's a risk that is kind of mitigated in many ways and that's worth taking.
B
So, John, I made a comment about the implementation to simplify Envoy, and you said that doesn't exist. At one point I think you mentioned to me there's a path forward for that. I'm just curious: are you thinking that's not worth pursuing at all at the moment, to simplify—
F
Yeah, I think to say it doesn't exist maybe overstates it. So Stephen wrote up a plan — I don't know if he's on the call. Stephen, can you talk a little bit about — well, I'll say it first, and Stephen, maybe you can go into more depth.
F
We have a plan that we think, with a lot of effort, will definitely improve Envoy, but it's kind of high risk because it depends on pretty significant changes in Envoy that don't exist yet. And even if we do it, we don't know if we're going to get to the kind of scale and performance goals we'd like to hit — and it's traditionally been quite difficult. I mean, how many years have we worked on HBONE as a community? It's been quite difficult to make changes at the pace we'd like to. But yeah, Stephen—
F
You can talk a little bit more about the plan that you wrote up — and it might actually make sense to kind of append that to this document, so it becomes more of an Envoy-versus-Rust-proxy document. But I'll let you say a few words.
D
Yeah, I mean, there's like a handful of things being done upstream in Envoy that we sort of rely on and that we're—
D
—that give us the API that we need to do what we want to do — so, like, stuff around picking certs and letting us send some custom config. And then we actually do still kind of take the stance that, you know, it's a lot easier to write logic as code instead of logic as config, but it's a lot more limited what we can do in filters.
D
But then we have a bunch of extra complexity and indirection to accomplish the same thing that we would with the Rust proxy, and it gets us, like, significantly better — but still not quite as good as the Rust proxy. And yeah, scale problems in istiod are, like, one of the tricky parts.
A
So even if we optimize the config down to zero, Envoy is still going to use that as the baseline — which is where Rust gets at 20,000 pods, right. So obviously, I mean, we could go change the core of Envoy — parts that aren't even related to the ztunnel specifically — to be more efficient in various areas, but, you know, there's a huge, huge amount of work there. So—
B
Okay, the other thing I want to ask: I mean, a lot of people love Envoy because it's very, very extensible, right. So you talked about how in the future people can extend Envoy as needed, and it also opens up doors for Windows as well. In this Rust implementation, what are the thoughts on allowing people to extend as needed? It doesn't have to be, like, extension for the average user, but it could potentially be used if people want to do some interesting stuff that the ztunnel doesn't support.
A
I feel like that's almost the benefit of not using Envoy for the ztunnel, right. We intentionally do not want scope creep in the ztunnel — that's something that we've been worried about, and fighting against, since the beginning. You know, once you stick an Envoy on the node, people are going to want to put stuff there, right.
A
Yeah, I mean, like, we're not going to, you know — someone could propose, like, let's put wasm in the ztunnel, and maybe convince someone, but I don't think that anyone's going to.
C
We already have a list of, you know, kind of — doing bypass with proxyless gRPC, doing all kinds of, you know — but that's also very custom and very, very specific. So—
F
Oh yeah, I was just gonna say — and just as a framing — we think that Rust is probably the right way to go forward, but there's—
F
We couldn't support both — like, I think we're planning to put a lot of investment into getting the Rust proxy there, and part of that might be a matter of—
F
—one of the downsides of using Rust for the ztunnel is there's going to be less emphasis on extensibility in the ztunnel, versus doing it in the waypoint proxies.
But again, this is a fairly early stage and it's fairly experimental, so that's something that we can consider having. Or, alternatively, you may end up with customers using non-Envoy alternatives, depending on how the internal implementation works and becomes more efficient.
C
We have — I believe we're still running the agent that is dealing with certificates and other things, and we are probably still going to have the ability to run additional processes or containers or whatever, like an external ztunnel; that's probably something that may be needed at some point. And in general, I mean, having microservices as a way to extend — instead of just linking in a lot of features and using a very complicated config — may be a better way to extend than what we are doing today.
I
Didn't you mention the ztunnel — so you mentioned that the ztunnel doesn't have extensions, but that's a decision that we can change, right? I mean, if we deem some extension worthy of the ztunnel, that's not something that we can't add, should we want to.
F
But what I'm hearing is that's not a consensus position in the community, and the community could come to consensus on a different path on that. Like, you know, it's certainly possible to add some extensibility to the Rust proxy; it may be less than Envoy, which has kind of built extensibility into its core — that's one of the reasons, yeah. But yeah, I would say there's not, like, a strong — I think that's a separate issue, maybe I'll put it that way.
J
Can you talk a little bit about the goals around observability, John? Like, what sort of effect might this have on the type of telemetry, logs, and tracing support that we have now versus what we would potentially have then?
A
Yeah, in terms of metrics, I don't see any reason why we wouldn't implement the same metrics that we have in Envoy, and the same with logs.
A
You know, the amount of metrics that we can provide in the ztunnel, in either implementation, is very limited — just the TCP subset, right, which is very easy to record. It's just recording the number of bytes and connections opened and closed, so I don't see any reason why we couldn't report the same metrics from either implementation.
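A minimal sketch of what that TCP-only metric set could look like in the Rust implementation. The metric names below are modeled on Istio's standard TCP metrics, but the code and the exact names are illustrative assumptions, not the actual ztunnel implementation:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// The full TCP metric surface is tiny: connection opens/closes and byte counts.
#[derive(Default)]
struct TcpMetrics {
    connections_opened: AtomicU64,
    connections_closed: AtomicU64,
    bytes_sent: AtomicU64,
    bytes_received: AtomicU64,
}

impl TcpMetrics {
    // Called when a proxied connection is accepted.
    fn on_open(&self) {
        self.connections_opened.fetch_add(1, Ordering::Relaxed);
    }

    // Called when a proxied connection finishes, with its byte totals.
    fn on_close(&self, sent: u64, received: u64) {
        self.connections_closed.fetch_add(1, Ordering::Relaxed);
        self.bytes_sent.fetch_add(sent, Ordering::Relaxed);
        self.bytes_received.fetch_add(received, Ordering::Relaxed);
    }

    // Render in a Prometheus-style text format for a scrape endpoint.
    fn scrape(&self) -> String {
        format!(
            "istio_tcp_connections_opened_total {}\nistio_tcp_connections_closed_total {}\nistio_tcp_sent_bytes_total {}\nistio_tcp_received_bytes_total {}\n",
            self.connections_opened.load(Ordering::Relaxed),
            self.connections_closed.load(Ordering::Relaxed),
            self.bytes_sent.load(Ordering::Relaxed),
            self.bytes_received.load(Ordering::Relaxed),
        )
    }
}
```

Because the surface is this small, exposing Envoy-compatible metric names from a non-Envoy proxy is mostly a naming exercise rather than a feature gap.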
C
Maybe not with all the options — I mean, like, Envoy has a ton of plugins that export telemetry in different ways. In the ztunnel, I suspect, we'll probably have one or two — maybe OpenTelemetry, or — I don't know what we are planning to do in Rust for the API itself.
B
Yeah — and this is where extensibility could be very useful, to integrate with some of the existing telemetry systems; maybe there are different requirements from different users.
F
Oh, I was just gonna say, yeah, we have not thought massively about extensibility in the Rust ztunnel, so I don't want to give the impression that we're, like, a hard no on it. I think we need to, like, think about it as a group — we'll go off and think about it a little bit and come up with kind of an agreed position on it. But yeah, I think that can be an open area of discussion.
C
—of telemetry, and using, let's say, Datadog or some other plugin that Envoy reports to — this is a perfect example of extensibility through separate modules. Because OpenTelemetry, for example, they have an agent that is a separate process, and it supports a dozen telemetry protocols, and that model is: your application is exporting over OpenTelemetry gRPC or proto, and their agent is adapting to the other protocols.
C
So, basically, we can support even more than Envoy supports through the use of the OpenTelemetry agent, and that can go for authentication and a lot of other things where, you know, kind of doing microservice-based extensibility would work perfectly, like in this case.
G
Okay, I think the thing that we want to avoid is going down the road of, like, L7 extensions, because once you start pulling on that thread, the whole project is going to sort of unravel. So I think we're not against extensibility, but I don't think we would want the full extensibility that Envoy provides, just because we don't want to go down that full L7 processing path.
A
Yeah, definitely. If we think that we're starting simple now, but in a year we'll have added back all the extensibility that Envoy gave us for free, then this is a terrible idea and we shouldn't do it, right. This only works if we think that it's not going to snowball — that it will become only slightly less simple a year from now, right.
I
Yeah, yeah, I think we also agree. I think that the main difference is that this box kind of, you know, has a view of all the pods and knows automatically what to do with them, versus Envoy, where you have to tell it exactly what to do — and I think that's kind of the invariant that will not change.
A
Yeah, if I can give an analogy: iptables and eBPF for kube-proxy — not for the Istio sidecar. The iptables mode of kube-proxy doesn't scale well because it's, like, this declarative config, where you have to go to iptables and configure the entire state of the world and how you want it to behave. In eBPF, though, you just configure the actual data, right — like, here's the list of all the pod IPs and some metadata about them — and the actual business logic is not encoded in the configuration; it's encoded in code, right, in the eBPF code.
B
But John, you could do the same with the ztunnel implemented in Envoy too, right? Let's say, for instance, if the community goes down the Rust path and somebody in the community is interested in Envoy as the ztunnel, they could potentially write the business logic in Envoy and just have Envoy process—
B
—the workload API that it can fetch through the XDS protocol from the control plane, right, and then write the business logic — whatever config Envoy needs — inside of Envoy, versus, you know, today it's in istiod and the ztunnel receives the exact configuration from istiod, because today we don't have this workload API and istiod only sends the exact config.
A
Yeah, that's true, and that's kind of how Steven's proposal for how we would do it, if we do use Envoy, works. But in many ways it ends up that we are just putting all of our logic in a custom Envoy filter, and, like, we're not getting any value from Envoy other than it being, like, some process that executes our code, right. It's very easy to just execute the code directly.
A
I don't know — maybe, yeah. I mean, you could compile it too; you might be able to compile it to an Envoy network filter as just, you know, native code. I don't know.
C
By the way, another point I want to make — and I looked a bit at John's code — most of the logic is already implemented; I mean, 99% of the proxying is based on, you know, solid libraries that are used in a lot of production environments. Most of the stuff is glue code that is getting config from XDS, again using some pretty standard and very simple pieces. So it's not like we are taking on the maintenance of a huge code base.
F
It's pretty dumb — I mean, it opens a socket, opens an HBONE connection, forwards. So most of that is library code, and there are very good Rust networking libraries out there already. So, I mean — now, we can choose to make it complicated, but if we are disciplined about not doing that, this is actually not going to be particularly difficult, to me.
A
Yeah, but the implementation is actually not using the exact thing that I put in there — it has a bit more information. I didn't want to put the exact thing in there, because this is something I hacked together in, like, a week, so I know that it's bad and I didn't want people to get hung up on all the weird things that were in there.
C
And be very careful, because at some point, when we get close to 1.0, we'll have the same problems of backward compatibility and stability of the API and so forth. But until we get to 1.0, we have, you know, some flexibility. Yeah, yeah.
I
Yeah, thank you. I know we talked about HBONE using HTTP/3 at some point — does this Rust change that?
A
On HTTP/3 — we did a little bit of research into this, and we found that for our needs of tunneling over HTTP/3, there's no CONNECT-UDP support in any system that exists today. So either we will need to wait for it to be built or we will need to build it ourselves, whether that's in Envoy or in Rust. My gut says Rust has a few different H3 libraries that are pretty solid; there's also the possibility of leaning on the quiche libraries, which is what Envoy uses as well.
H
Not right now — so I think this is very important to talk about, and this is why we kind of keep coming back to extension. We don't need it right now, right, I get it, but yeah — what will happen if a customer in production needs this thing now? You know, it's a community project, so we're trying to figure out how we can make sure that, yeah, innovation will be allowed, and so on. That's — yeah.
A
I don't know if I went into depth on it here, but I can add more details. I had been able to convince myself, and a few other folks, that we have a viable path forward for it. So I think we will — it's just not ready at this point.
B
Okay. Is there any other question for John? If not, I think we should discuss open-sourcing this, because I think other people probably want to run the tests. We are interested in running them; I think Lee was also interested; there's probably other people interested — just making sure, you know, we can run it and also get similar results as what you have here.
C
One comment: there are two sides of this, I believe — one is in istiod as well. Yes—
C
In the past I kind of argued for having this in a separate branch or a separate repository or whatever, but I think the workload — I mean, the simplified protocol — would be a good fit for getting into master, because it's relatively small, it's relatively isolated, it's no risk.
C
I think it will allow us to iterate probably faster, so I'll be happy to approve a PR toward the new XDS extension into istiod master.
C
That's an interesting question. Right now, the way it works is, because it's using XDS, we have a lot of debug protos and other things that are kind of sneaked in, basically — I mean, because it's not an API, it's an internal protocol. So everything that istioctl is using, and all the other things, are basically just resources without any content. But yeah, probably we should put it in the API area, though I would keep them separated.
E
—we're doing. So I wonder if there's an intermediate step of making it available but not, you know, merging the API to master — like, is there a way that we can say, okay, this is now something that we can try out and validate, and then do that for, you know, some short period of time, and then decide?
C
Yeah, sorry for the confusion — I just suggest putting it in as an API. I mean, I think there are other reasons why it's useful to have a way in istiod to get on-demand information about workloads — debug, and other experiments, and other things that are possible. I mean, we have, you know, all the debugging for all the—
A
The plan was to use BoringSSL in FIPS mode, so that's it.
F
Yeah, I want to say a couple things. One, I think where we are now, like — we don't need formal agreement from the community that this is what we're going to do.
F
On Envoy versus this — I think from Google's perspective, we're just saying we're going to build this thing, we'll put a quarter or two into it, and we're going to put it in public, and we think it will be better, and we hope it's better — and the decision about what the default ztunnel should be in the long term can come later. We're pro—
F
We are probably not going to put engineers into all of the work that's necessary to get Envoy to a good state for the ztunnel, so we're kind of making a decision to invest in Rust first. But if it doesn't work out, we can revisit that.
F
I think that was the big thing. And then the other point I wanted to make, which I kind of made implicitly, is that we are going to put investment in this, and we'd love — to the extent there's interest — for people to test it, collaborate on development, all that sort of stuff. So, yeah.
A
What I was interested in was getting directional agreement that we want to start developing this in Istio repos — like making a new ztunnel repo and committing the control plane changes on the experimental ambient branch. If not, I suppose I can run it under my personal repo, but then it's not suited for collaboration or anything, right.
C
I would say it's either istio or istio-ecosystem — we have plenty of repos that are kind of not officially supported but in progress — and I don't see any reason not to. I mean, we can just check what the process is — whether it's a vote or a doc approval or whatever the process for creating a repo is — but I don't see any reason why not to do it.
B
Yeah, I think having a project in the istio org for the ztunnel Rust implementation does make sense, and I agree with what Ethan said. Once other people have a chance to play with it and get a feeling for it, then we can decide — we can constantly evaluate which one should be the default ztunnel implementation for the ambient branch, right. Because so far it's only on your personal machine, and we know in the past, John, you got it working and nobody else could get it working. So we want to just have more people try this and evaluate this, yeah.
A
I don't think anyone actually can make a decision on, like, what we're doing long term. The plan was only to get feedback on whether this was a direction that's worth pursuing more, yeah.
B
And also, Stephen made an interesting comment, which I think, Steven, you actually made clear earlier: even the simplified Envoy plan would also require the similar workload API that you are proposing. So I don't have a strong objection to the workload API. The only challenge I would have is: is this causing any issues for people using sidecars today if we put it in master? Because the whole ambient branch is still a branch, so that made me think — I couldn't figure out a reason why the workload API needs to reside in master.
C
Debug and other things — and I'm definitely not proposing putting all the stuff from the branch in, with, you know, all the Envoy configs that are done for ambient — but just this particular workload API. I think it would be a very good addition to master.
B
Okay, if it's for debugging, I think we want to make sure users know how to use them — is it exposed through istioctl, or are we going to put it on our debug API pages so people can use them? Because it feels odd if it's just an internal thing that only people working on ambient will be using; it just feels odd for it to be in master at the moment.
A
I have a big task list of all the things that I think we need to do. Obviously more will be added, but for now, you know, we have that list for kind of getting from now to production-ready. I think to get it to feature parity with where Envoy is today, we're looking at maybe two, maybe three weeks, due to KubeCon. There's not that much more work to get it to production-ready, though — you know, that's a lot of debugging, testing, you know, adding observability into it.
A
You know, all sorts of edge cases need to be handled. So it's a bit less clear what the exact timeline will be, because we may find issues that we didn't think about.
A
Obviously. But I would think that we would have it in a pretty reasonable state, you know, three months from now or so. So yeah, my goal would be that in, you know, two or three weeks we have something that can be dropped in place of Envoy, pass all the tests that we have in the ambient branch, and, you know, behave the same as Envoy.
C
Looking at it, I'm working with the proxyless gRPC folks, who also need to, you know, start planning for how to deal with ambient. And again, it's such a small change practically, and such low risk, that I don't see any reason not to put it in — no, not in the API — again, just have it at the same level as the debug tools and the proxyless gRPC support.
B
Okay, yeah, I think it makes sense with the proxyless gRPC support. But for a plugin ztunnel for a release like 1.17, I don't think it's going to work nicely without the control plane change — so a user would still need to update the control plane anyway to get support for the workload API.
B
Yeah, it would be good to find out, because that's a huge value too. So let me ask you this, John, at a high level, with this workload API: because today, with the ztunnel, right, istiod has to be very intelligent to figure out, like, for this particular ztunnel, it's serving this particular node, and these are the configurations for this ztunnel. But with the workload API, I assume you are going to send the same data to every ztunnel regardless, and also maybe to the waypoint proxy as well.
A
Because, like, nothing in this changes the waypoint proxies or sidecars. Now, the waypoint proxy also has scalability issues that we'll have to address. I don't think the solution there will be to rewrite the whole waypoint in something else, because that has, you know, a massive feature set — but we will need to improve it somehow, whether that's using the workload API in Envoy and looking that up somehow, or doing something else. I don't know; we need to go design that, but it's kind of orthogonal.
B
Okay, but it's clear that the ztunnels would be getting the same workload config regardless of which node they are on. So we could potentially send the same one from istio 1.17, let's say — I assume it supports this API, right — so whenever you install a ztunnel, you could potentially start to have it getting the workload config.
C
In this document there are two sides: one is the implementation of the proxy in Rust or other languages, and the other is having an XDS extension that is more suitable for on-demand and optimized for scalability. There's a part where we have an API in XDS that says: hey, for this IP, give me all the information about identities that I need to establish a connection with that—
C
—IP. That can be reused in the waypoint proxies, and can be used in a lot of other places — I mean, it could be used by proxyless gRPC or any other client. Because the ztunnel will not be the only place where you need to have scalability, and if someone wants to write some Envoy extension or whatever to support this extra XDS type, that's wonderful and will help long term for scaling up the gateways — even a regular gateway. I mean, everything will benefit from the on-demand, optimized configuration path.
B
Well, I guess the other challenge I'm thinking of: if somebody takes the time to write the Envoy filter to process this workload configuration into the full XDS configuration that Envoy needs, then they could potentially reuse that code for the simplified ztunnel that's Envoy-based too, and then that would bring consistency between the ztunnel and the waypoint proxy for users to debug and troubleshoot.
F
I think the XDS that Waypoint needs is so much more complicated and sophisticated than what the ztunnel needs.
F
Whether or not we use Envoy for the ztunnel... but I do think, suppose that we end up in a state of the world six months from now where we find some customers want a more lightweight, fast ztunnel, or someone wants Envoy for extensibility, or Wasm, or some other reason.
F
It would not be difficult to have a consistent XDS API that the controller sees above them, because it's so simple that it is the same for the Rust proxy and the Envoy proxy, right? You could imagine that, essentially, the Rust proxy's API becomes the standard API, and you write a little translator on the node that converts it to what Envoy needs, or something like that.
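The "little translator on the node" F imagines might look something like this. It's a sketch under stated assumptions: the `MinimalWorkload` record and the Envoy-style output string are both made up for illustration; the real Envoy config is protobuf, not text.

```rust
// Hypothetical minimal workload record: the shape the Rust proxy's XDS API
// might carry (illustrative only).
struct MinimalWorkload {
    ip: String,
    identity: String,
}

// Sketch of the per-node translator: expand each minimal workload record
// into a more verbose Envoy-style cluster snippet. The output format here
// is invented for illustration; 15008 is the HBONE tunnel port.
fn to_envoy_cluster(w: &MinimalWorkload) -> String {
    format!(
        "cluster {{ name: \"outbound_{ip}\", endpoint: \"{ip}:15008\", tls_san: \"{id}\" }}",
        ip = w.ip,
        id = w.identity
    )
}
```

The point of the shape: the standard API stays small, and the verbosity Envoy needs is generated locally rather than shipped from the control plane.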
F
So in general, yeah, kind of standardizing the API surface is probably a good thing, but I do think in the long term it's probably best... well, my hope is that the Rust proxy is so good that we don't need an Envoy-based ztunnel, but you know, we're early.
C
Rust is perfect, I mean, you know: proxyless gRPC needs to support the five languages, or whatever languages they have, and it's wonderful if this is used by others, nginx or whoever. I mean, anyone should be able to use this protocol; it's a protocol, HBONE, and the XDS extension should be universal and should be independent of what we choose to use as the default ztunnel.
F
Yeah, I think the goal for the Rust ztunnel, at least, is to define an XDS API that is exactly the information that the ztunnel needs to work, no more, no less, in the most logical, clean, understandable organization. And then the hope is that other data planes that come along that are doing tunnel-like stuff reuse that, right? So proxyless gRPC being one of them.
H
Can I ask a question? Of course. I'm just thinking about resources. There's a lot of stuff that we are getting from Envoy for free, because it's a huge community and a lot of people are using it. There's a lot of stuff that we would have to do ourselves: the stability of Envoy, the CVEs, you know, stuff like any protocol that is running in hundreds of locations right now, right? A thousand, probably. So we're getting a lot of this for free, and a lot of edge cases that people have already thought of.
H
I would worry about this; I mean, this is the only thing that I'm worried about. Because, honestly, let's assume that Google will put in two resources; then at some point someone says this is four resources and it's not running anywhere. That's the thing that I'm worried about. Then again, maybe other people will join.
F
Envoy has a very large community behind it. Most of that community is working on stuff that's not particularly relevant to us, but, you know, the CVE tracking and all that sort of stuff is the useful thing. On the flip side, just being a little bit frank on our end at Google: hiring and developing and retaining talent that has a deep enough understanding of Envoy to actually drive some of the changes that we need to move istio forward has been a massive challenge.
F
So it's a huge community that's difficult for us to kind of maneuver around, versus a smaller community that's more tailored to our use case.
H
That's the only reason I'm worried, because Waypoint right now will stay Envoy, so we're not really getting rid of that problem. We will still need to, you know, add stuff there and do this. So that's extra work that we're taking on. And, yeah, honestly, I'm always a big believer that we need to ask what is the best solution, and I'm ignoring resources right now; let's assume that we can teach people to...
F
Yeah, I think the story with HBONE is a great example. Google has a number of really exceptional Envoy engineers, and HBONE has been floating around in the istio community for literally years, right? And I don't think it's because the engineers who are working on it aren't effective or talented or hardworking; it's just hard to do anything, so...
A
I mean, what you're saying, those are risks of the project, certainly. I think they're mitigated; I think the benefits outweigh the risks, but obviously doing something new versus something established is risky. But in many ways there's not that much code in the ztunnel, right? We're not writing low-level networking code; we're writing high-level business logic, and then using libraries. Are they as used as Envoy? I don't know, but they're not obscure libraries, right? Tokio, hyper: these are kind of the core of the Rust ecosystem.
A
Curl, I think, uses these, you know.
F
I don't know if this made it into the doc, and if it didn't we should add it, but one of the key deciding factors for us: we actually did look into all the libraries we need, and what their CVE process is, and whether there are people maintaining them, that sort of stuff. That's one reason why I think for QUIC support we should use Google's quiche library instead of Cloudflare's, because of a better posture on that. So that's one of the things that caused us to say: hey, this is the right path forward.
F
For example, it's like: does the HTTP library have a solid CVE process around it? Because, I mean, if you look at ongoing CVEs, the vast majority of them are just in the HTTP library. So it is something that I think we can mitigate a little bit, based on the fact that there are good libraries for it.
F
I mean, we're out of time, but again, and I really want to emphasize this: we're thinking of this as trying to get this out in front of the community really early. You know, this is like an experiment that at Google we've convinced ourselves is right enough to invest engineers in building, but that doesn't mean that there's consensus. I'd love to talk about it further in this forum, or if people want to talk to me one-on-one or in small groups.
F
That's good too: take feedback and build a joint roadmap, requirements, and all that sort of stuff. But this is, you know... we're very early on this intentionally, because we want to get feedback and kind of collaborate. So...
B
As a next step we should maybe have John publish this, so other people can look at it, and maybe put some instructions on how to run the tests, so other people can try it too. Then we can evaluate, look at the code, and, you know, then we should discuss, maybe after KubeCon, a little bit more on the feedback from the community.
B
Maybe just do this as a repo in istio, but it doesn't have to change the ambient branch's default ztunnel, right? It could be, maybe, yeah.
A
Not the default, yeah: just the control plane changes on the istio branch, and then a new repo for the ztunnel, but so...
A
That makes sense. Okay, one thing I want to call out, because you said you were going to go try and run benchmarks and stuff. One: this is not production code I'm dropping in this repo. This is like a proof of concept, so there's tons of stuff that's terrible, right? But especially on the benchmarking.
A
So I just want to caveat that your numbers will probably be a little bit worse than mine, and that is something that is not going to persist. So, okay.
A
The throughput, like, tanked, which is not because BoringSSL is slow; it's just because I didn't put the right settings somewhere. So it's not a big deal at all long term. That's just saying that if you go run the benchmark today, you're probably going to get bad results; don't be surprised.
H
Another thing that we can... I just wanted to mention, which is important... oh, maybe you did mention it. Okay, you maybe mentioned it in the doc: it's regarding debugging.
F
I have it, you know, on a GitHub project board, and one of the things that I've set as a requirement is lots of debugging stuff, right? So logs, flame graphs, counters, all that sort of stuff. Just getting the thing able to forward traffic for a benchmark is not... like, Google's not going to release a production product with just that, right? We have to...
F
We have to run this thing and maintain it, so, yeah, we're not just going to release the code; we're going to release our roadmap, and we would very much love feedback on that roadmap.
A
And I'll add those as, like, issues on the repo, and like half of them are about adding debugging things. So that's definitely, you know, a huge concern for us.
H
Nothing, besides the fact that I do think that Solo will really, really care about the extensibility. We have some use cases in mind that we will need this for, so we can talk about it offline if you guys want. But thank you.
B
Okay, great, and John, we can sync offline.
B
I'm pretty sure you are attending, yeah. So, okay, cool, thanks everybody for joining, thanks John for presenting and answering all of our questions, and see you guys at KubeCon. Yeah, and see you guys there at KubeCon; by the way, we're running a meetup on Tuesday from seven to nine, so we'll send out the information on the announcement channel once we have it, so definitely see you guys there.