From YouTube: IETF112-MASQUE-20211108-1200
Description
MASQUE meeting session at IETF112
2021/11/08 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A: The role is quite simple: whenever someone in Jabber says "at mic" or whatever, just pop into the queue and relay that so people can hear it. We'll do our best to keep an eye on the conversation as it progresses, but it's easy to miss things.
A: Okay, let's get going. Welcome, everyone, to the first meeting of the week: MASQUE. Thanks for being here; it's quite early for some of us, so hopefully this is a productive session. As is typical, this meeting is recorded. You've likely used this tool before.
A: If you've used this tool before, you know how all the buttons at the top work; if you haven't, it's fairly intuitive. If you want to join the mic queue, just press the button. You do have to unmute yourself to talk and share video. Otherwise, typical IETF rules apply in terms of queue management. And as a reminder, please state your name at the mic, so the speaker knows who you are and the note taker can jot it down.
A: This is the Note Well. You probably have not yet seen it this week, because this is the first meeting. You're expected to be familiar with what's here, but I want to take a moment to briefly call out the code of conduct. In essence, what we are looking for is people being professional, respectful, and courteous to others in how they engage and discuss technical details.
A: So hopefully we can keep it that way. Moving forward: Eric, do you have anything else you'd like to add? Cool beans. Okay, here are some links. The agenda is up; it's also here in the slide deck. We have a note taker — thank you, Robin — and a Jabber scribe — thank you, Lucas.
A: And here is our agenda. First up we're going to talk about CONNECT-UDP and HTTP Datagrams; David is going to give those presentations. Then Tommy is going to speak to CONNECT-IP, the unified proposal based on the two drafts presented during the last meeting. Then, as time permits, we'll talk about two new proposed work items: path MTU discovery from Ben and prioritization from Lucas. I'll quickly pause here to see if anyone wants to do any rearrangements or refactoring.
C: Thank you, Chris. Let's ask to share slides.
C: Yes, we're all set. Awesome. All right: good morning, afternoon, evening, or absolute complete middle of the night, everyone. My name is David Schinazi, and we're here to talk about MASQUE. More specifically, in this presentation I'll be talking about both the HTTP Datagrams and the CONNECT-UDP drafts.
C: Sorry — first, a quick recap for anyone in the room who is new to MASQUE. The main goal of these two drafts is to build CONNECT-UDP, which is like CONNECT — the HTTP method for proxying TCP — but for UDP, shaped by our discussions of requirements over the now almost two years that we've been working on this.
C: When we started working on this, it was all in one draft. But we realized that there were other features or applications, such as WebTransport, that were interested in using datagrams over HTTP but not specifically in CONNECT-UDP. So we split it into two drafts, so that other working groups could depend on the HTTP Datagrams draft without depending on the CONNECT-UDP draft.
C: Earlier this year, in April, we had an interim to focus on HTTP Datagrams, and we redesigned everything after that, had some discussion on the list, and then merged some PRs, which was great. Then at the next IETF meeting we re-redesigned everything again — which is also great, but it is causing quite some churn in the drafts. So our goal today is, if we're redesigning things again, to reach some point where we have something that everyone likes.
C: Then we can move on from there, because there are folks who have this in production, and it'd be good to get to a place where the drafts are more stable, if we can all find a design that everyone likes.
C: There are three implementations: the Cloudflare quiche one, the Ericsson one, and the Google quiche one. Both Google and Ericsson have clients and servers, and Cloudflare has a client, and we successfully achieved interop with all of these at the hackathon last week. So, good news: things work.
C: We also landed on the capsule protocol, which is a way to send information end-to-end between endpoints, even if there are HTTP intermediaries along the way. The capsule protocol is about as simple as you can get: it's a sequence of TLVs inside the data stream — and by "data stream" I mean, for HTTP/2 and HTTP/3,
C: what's inside DATA frames, and in HTTP/1 it's just the main stream of data for a connection. Additionally, one specific capsule that we define is the DATAGRAM capsule, which carries an HTTP Datagram. In particular, that's useful for versions of HTTP that work over TCP — HTTP/1 and HTTP/2 — because you don't have the QUIC DATAGRAM frame there.
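The "sequence of TLVs" framing described here can be sketched in a few lines. This is an illustrative parser, not code from the drafts; it assumes RFC 9000-style variable-length integers for both the type and length fields, which is what the capsule protocol uses.

```python
# Sketch of capsule-protocol parsing: the data stream is a sequence of
# TLVs, each a varint type, a varint length, then that many value bytes.

def read_varint(buf: bytes, pos: int):
    """Decode an RFC 9000 variable-length integer at buf[pos]."""
    first = buf[pos]
    length = 1 << (first >> 6)        # top 2 bits select 1/2/4/8 bytes
    value = first & 0x3F
    for b in buf[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, pos + length

def parse_capsules(stream: bytes):
    """Yield (type, value) pairs from a capsule-encoded data stream."""
    pos = 0
    while pos < len(stream):
        ctype, pos = read_varint(stream, pos)
        clen, pos = read_varint(stream, pos)
        value, pos = stream[pos : pos + clen], pos + clen
        yield ctype, value
```

A receiver that doesn't recognize a capsule type would simply skip that (type, value) pair, which is what makes the type field a usable extensibility point later in the talk.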
C: So that's how you convey HTTP Datagrams for those versions. Then, in terms of encoding for the QUIC DATAGRAM frame: at the QUIC layer, the payload of a QUIC DATAGRAM frame starts with a varint — an RFC 9000-style variable-length integer — which holds the stream ID divided by four. The tweak in that encoding is that client-initiated bidirectional streams, which are the request streams in HTTP/3, all have IDs divisible by four. So we just divide by four, and that saves some bits.
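The divide-by-four encoding just described can be shown concretely. This is a minimal sketch under my own naming (the draft calls this value the quarter stream ID); the varint encoder follows RFC 9000's prefix scheme.

```python
# Sketch: an HTTP/3 datagram payload begins with a varint holding the
# request stream ID divided by four, followed by the HTTP Datagram payload.

def encode_varint(value: int) -> bytes:
    """Encode an RFC 9000 variable-length integer."""
    for prefix, length in ((0x00, 1), (0x40, 2), (0x80, 4), (0xC0, 8)):
        if value < (1 << (6 + 8 * (length - 1))):
            out = value.to_bytes(length, "big")
            return bytes([out[0] | prefix]) + out[1:]
    raise ValueError("value too large for a varint")

def encode_h3_datagram(stream_id: int, payload: bytes) -> bytes:
    # Client-initiated bidirectional stream IDs are 0, 4, 8, ... --
    # all divisible by 4, so sending stream_id // 4 saves two bits.
    assert stream_id % 4 == 0, "request streams only"
    return encode_varint(stream_id // 4) + payload
```

For stream 0 the prefix is a single zero byte, so small stream IDs cost only one byte per datagram.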
C: And finally, because that encoding could potentially change down the road, and because it also changes the semantics of what we see in QUIC DATAGRAM frames — which didn't have semantics before — we negotiate it with an HTTP/3 setting. All right, where are we today? We had some good discussion on the list last week — thanks for kicking things off, MT — and one of the things we realized there is:
C: while there are some parts where we're reaching good agreement — such as most of what I just talked about — the bits about extensibility and how you demultiplex aren't quite clear, or don't quite have full agreement. So I'm going to walk us through what we're trying to do with those, what our requirements are, and explain the current design and how we could potentially change it. Apologies if this is a long ramble.
C: It is the complete middle of the night over here in California, but I will try to be as clear as possible. Feel free to jump in, especially with clarifying questions; but once I've gone through this set of slides, I've definitely left a lot of time for discussion to make sure we get something that folks like. All right. This is a set of extensions that have been discussed in or around the working group for protocols that use HTTP Datagrams.
C: To be clear, I'm not saying let's figure out now whether those extensions are a good idea or whether they're something we want to build. They're just ideas that exist — and because they're out there and some people are passionate about them, maybe we should make sure that the protocol is extensible enough to allow building them down the road. So: we're eventually going to build CONNECT-IP, and there is interest in being able to compress the IP header in CONNECT-IP.
C: Similarly, in CONNECT-UDP, some folks have expressed interest in being able to convey ECN: if there is ECN between the proxy and the target, being able to convey that on the encapsulated path between the client and the proxy. Or, similarly, if the proxy receives an ICMP packet that refers to that five-tuple, being able to let the client know about it — that's an extension someone wants to build.
C: Otherwise, there is interest in doing path MTU discovery for HTTP Datagrams. Because the QUIC DATAGRAM frame can't be fragmented, it has a maximum payload size — an MTU for datagrams — and there is interest in being able to figure out what that is, which would be an extension. Ben has a presentation about his proposal in the as-time-permits section at the end.
C: So I won't go into too much detail there, but there's interest in doing this at the HTTP level as opposed to at a higher level. And then over in WebTransport there's discussion of conveying priorities: you can always prioritize any data as you're sending it, but being able to tell your peer how you want things prioritized
C: is something we might want to be able to do for different types of datagrams. The reason I selected these extensions in particular: right now we're building a way to send HTTP Datagrams — okay, we're sending these data payloads — but it sounds like there is interest in being able to send multiple different types of payloads at the same time, and that means demultiplexing.
C: So let's say I'm sending an uncompressed, full CONNECT-IP packet, and I'm also sending one that is compressed. I need a way to tell my peer: when you receive these, this one is compressed and this one is not. That's where we get into demultiplexing.
C: But the big question that has come up on the list is: why do we need to care now? Can't we just say these are extensions and we'll deal with them when we're done? And that is absolutely a reasonable approach. We want to keep the core protocol ideally as simple as possible, but we also want to make sure that we don't paint ourselves into a corner. We want to make sure that when we go to build these extensions, we don't look at what we have and go:
C: oh, if we had just done this bit slightly differently, we would have been able to extend this easily, but now we can't. So I think it's valuable to have this discussion right now, before we publish, to make sure that whatever extensibility knob — or extensibility foothold, if you will — we need is in the core protocol. Which is a good segue into requirements: what do we want from these extensions?
C: The first goal for this specific one is this idea of demultiplexing: we can have multiple formats that coexist simultaneously, and we want the receiver to be able to differentiate between them.
C: Another requirement: we're working across intermediaries here, and intermediaries often don't get updated as often as endpoints. As we've seen with other protocols at the IETF, the fewer machines you need to update, the more successful your protocol is likely to be. It was a lot easier to deploy QUIC than IPv6, for example, because if you only need to modify the endpoints instead of everyone along the way, you're in better shape.
C: Another requirement: there is some interest — like the MTU discovery extension — in extensions that apply to multiple protocols. By "protocol-" or "application-independent" I mean what sits on top of HTTP Datagrams; right now the main ones we have in mind are CONNECT-UDP, CONNECT-IP, and WebTransport. There is some value there, though this one isn't absolutely, obviously required.
C: But if we have a way to write an extension once for all of them, that could be useful. And then there's another requirement that we didn't set out with in the first versions of HTTP Datagrams, but that became very clear this year: we want to make this mechanism optional.
C: There are implementers who don't want this feature today and really want to get things out the door as simply as possible. So if we build the mechanism into the core spec, they really want to be able to pretend it's not there. That means it's not only optional to implement, but also that they don't need to reason about too many concepts to implement the core spec. And then a final design requirement is what I've been calling zero-latency extensibility: being able to use extensions in the very first application flight.
C: I have some slides explaining in more detail what that means exactly. All right, on the topic of supporting extensions: one important property is that if something is not mandatory to implement, or if it's an extension, not everyone will implement it. And one of the design properties of HTTP is that when the client wants to send its request,
C: it doesn't know the full feature set of the proxy it's talking to. And you can't use SETTINGS to negotiate feature capabilities, because there could be an intermediary between the client and the proxy. That's a common thing across all of HTTP — one of the design constraints we inherited from deciding to build MASQUE over HTTP. And waiting a full round trip between the client and the proxy, to get the HTTP response that could negotiate support for an extension, is unacceptable. We're in 2021.
C: Latency is the most important performance metric for pretty much everything we do, and we need a way to use things optimistically and fall back if they're not supported, without everything breaking. We can't burn a full round trip waiting for these things. All right, so what do I mean exactly by zero-latency extensibility? Here's the setup.
C: This is a pretty standard scenario: the client wants to do CONNECT-UDP to a proxy, and then inside those UDP payloads it wants to speak QUIC to a target server. One of the design choices we have in CONNECT-UDP — which technically is also possible in CONNECT —
C: is that you send your CONNECT-UDP request, and alongside it — maybe even in the same packet — you can send an HTTP Datagram that contains your QUIC Initial for the target. If the proxy accepts the request, it sends a 200 OK back to the client, and at the same time it receives that QUIC Initial and sends it over to the target. If the proxy decides to reject the request, it just silently drops that QUIC Initial. So there's no harm, no foul, but you've just saved a round trip between the client and proxy.
C: You don't need to do request, then response, and then send the datagram; you can send it optimistically, knowing that if for some reason the request is denied, it gets dropped — not the end of the world. And again, that is user-visible latency being saved on the end-to-end QUIC connection between the client and target. So, big win there.
C: So now — Ian, what's up?
D: On the last one — if you'll just go back one slide for a few minutes — my understanding is that this just requires the core functionality of the draft, not any of the optional functionality. Is that accurate, or am I missing something?
D: I just want to make sure I understood correctly which portions of the scenarios you're trying to enable require the extensions and which don't. Thanks.
E: All right, hello — cool, I think the audio is working now. Just to jump in a little bit to respond to Ian: as we've been looking at the different options for how you communicate whether contexts are optional and so on, that does have an impact on how you send the early data and on what capsule types are allowed.
C: Yeah — I actually have some slides coming right up to discuss this. Thanks, Tommy. Sorry, the order of the slides is a bit of a mess and I'm jumping back and forth; I apologize for that. So, as we were saying, some of these extensions want to be able to send multiple formats.
C: What's the simplest way to do that? You put an identifier at the start. When you're receiving, you parse the identifier, and then you know which format — which parser — you should use for this datagram: is it just a UDP payload, or is it compressed, or does it have some additional data at the start or the end? But if we're making this optional, you end up with an interesting conundrum: is that identifier at the start present or not?
C
Should
I
be
parsing
this
identifier,
or
should
I
just
assume
that
it's
not
there
and
that
that
question
is
where
some
of
these
complexities
for
that
extensibility
design
come
from,
because
in
the
original
version,
where
this
was
always
there,
it
was
a
lot
easier.
But
but
turns
out,
we
can
make
this
work.
C: So then — this is what Tommy was alluding to a little bit — this is when we want extensibility, but without a latency cost. Let's say you're a client. You want to talk to a proxy, and you want to use some extensions. Your main goal is to do CONNECT-UDP to a target; you still have your end-to-end QUIC connection through your CONNECT-UDP proxy, but maybe you want to use, say, the ECN extension — you just don't know for sure whether the proxy supports it.
C: You just need to make sure to let the proxy know that this datagram, for example, doesn't have a demultiplexing identifier at the start, so there's no confusion whether or not the proxy supports the extension. Then it can seamlessly, without a round trip, forward that over to the target, and let you know with its response which extensions are supported for this request.
C: All right, so what is the current design we have? As always, it's not set in stone. What we have today is — Vinnie, go ahead. Sorry.
F: Just to make sure I understand — can you clarify a little bit? So the proxy actually doesn't know anything about that extension in the scenario you described; it just allowed that extension to work end-to-end. Is that correct?
C: Take ECN: the UDP payloads the proxy forwards could have ECN markings on them at the IP layer between the proxy and the target, and this extension would allow the proxy to communicate those markings to the client as it's forwarding the encapsulated UDP payloads. As it sends a UDP payload it could say: oh, this one was marked ECT(0), this one was marked Congestion Experienced.
G: A kind of clarifying question here on what you do, then, if you're expecting an extension to pass through the proxy and head on toward the target — and I'm thinking here about RFC 5795-style robust header compression (ROHC) for IP headers. You can imagine using that in two different ways: one is saying, hey, I'm going to use ROHC and I'm compressing IP headers to you, the proxy; the other is telling the proxy that you expect ROHC to go end-to-end to the target.
G: So in that second case, I assume from this design that you wouldn't negotiate the extension through the proxy. It would then just take the CONNECT-IP payload and treat it like an uncompressed IP packet, send it out, and if that failed, that's on the client for not understanding what it was sending. Is that approximately correct?

C: Yes, actually.
C: (Feedback — please use headphones or mute.) So extensions that are transparent to the proxy, in the sense that they're client-to-target, are kind of out of scope here: they'll work, but the proxy doesn't even need to know. All the extensions I'm talking about are extensions between the client and the proxy. For example, in this case, if there are extensions inside this end-to-end client-to-target connection, the proxy doesn't need to know or care — it can't even, because they're encrypted inside QUIC. So when I say "extensions" here, to clarify, I'm specifically talking about extensions between the client and the proxy, where both the client and the proxy know about them. Thanks for the question.
G: That mostly answers it. I do think there may come cases where the proxy has to know that what's going on isn't malformed — because obviously IP with ROHC headers and compression of context looks different from standard IP — but it probably doesn't need to do a lot to support it. This is probably not the main topic, though, so let's move on.
D: A clarifying comment, I think: the proxy here is what's referred to as a forward proxy, and the target here is one or more reverse proxies, machines, application front ends, yada yada, that receive the traffic. Right? Just to clarify: the target could be one or more machines, and we kind of don't care, because it should all just work — reverse proxy providers need to deal with this stuff themselves. Is that accurate?
C: Absolutely. In this scenario — let's say you're using CONNECT-UDP as an IP-blinding service — from the client's and the proxy's perspective, the target is just an IP address on the Internet and a UDP port that you're sending things to and receiving things from. There are worlds of complexity, servers, and machinery behind the target, but from our perspective we don't know and we don't care. It could all be running on a Raspberry Pi in a corner, or it could be
C: backed by a giant Google data center, completely oblivious to us — we don't care. But from the client's perspective, there could still be an intermediary in front of the proxy, which in general the client doesn't really need to care about. All it knows is that it's trying to connect to a CONNECT-UDP proxy with a given authority.
C: What we need to build here is to make sure that if the entity deploying the CONNECT-UDP proxy wants to deploy it behind a front end — in other words, behind an intermediary — that works in practice. It doesn't really change the protocol much, but it does impact some design requirements, like the fact that we can't rely on SETTINGS for everything, for example. All right, so —
D: [inaudible]
C: No, no — on the contrary, this is really useful, because these are non-trivial points. Thank you. So, in a given HTTP deployment, intermediaries are pretty common. Take our setup: we have the Google front end, and then we have back ends. Mapping to the terminology here, the Google front end is an intermediary, and the Google back end would be the one implementing CONNECT-UDP — the one we list as the proxy — and that could be the egress to the Internet.
C: In some scenarios, settings are hop-by-hop at the HTTP layer, whereas requests are end-to-end. In the scenario where the end is the proxy, you could be running something like that, but the request could flow through an intermediary — and we have folks in the working group who wish to deploy it this way, in particular for WebTransport.
D: So, to walk back — let me try to restate this in terms that are very focused on hardware infrastructure. In this case, Google is running a target which is probably at least two, and sometimes three, layers of load balancers stuck in front of an application front end. And you're saying that the first load balancer needs to make sure to communicate the right settings for all the things behind it, because otherwise bad things could happen. Right?
C: Well, yeah. So let's say — I think the WebTransport example is perhaps a better one — you have a front end, which is also a load balancer, and you talk through that to a myriad of back ends. As a client, you don't even know that there are front ends and back ends. You just know that the front end — in other words, the intermediary — served you the appropriate TLS certificate, and when you send your requests, you get a reply.
C: Maybe your request went through to a different machine. But potentially, let's say, some of the fleet supports WebTransport with this extension and some of it supports WebTransport without extensions, and the front end doesn't necessarily know all the supported protocols, features, and extensions of all of its back ends. So it can't synthesize a SETTINGS frame for you that covers everyone in the back; you need to send the request and get a response to know fully what they support.
D: But just to really slice this down to the very specific point: the load balancer needs to know whether, say, CONNECT-IP, CONNECT-UDP, CONNECT, or WebTransport is supported, because the way you load-balance those protocols is just transformatively different from standard HTTP. So you're just not getting away without the terminating load balancer having that knowledge. But still — so I think —
C: So, to clarify — and that's specifically what we're picking at — let me restate what you're saying. The intermediary needs to know how to proxy HTTP Datagrams, and especially for WebTransport, which has other custom things that need to be proxied, such as custom unidirectional streams, for example. So the intermediary needs to be aware of this. But if we, for example, want to add an extension in the future, the goal is that this front end, which was modified once to speak the base protocol, doesn't need to be modified again.
D: That kind of distinction — okay, that makes sense to me. I think we should just move on, because I think we've gotten it down to the point I wanted to get to, which is: yeah, the load balancer does actually need to know whether WebTransport, for example, is supported, but it's valid to say that once WebTransport is supported, maybe there's some other stuff you want to tunnel through to the actual application front end.
C: Thanks — no, no, that was a great point, and it was what I was trying to get across; taking the time to actually state it clearly is really important. So thanks for your help. So, yeah — the current design: how do we do this? It introduces the concept of datagram format types, which is, okay:
C: "this is the semantics of my payload." So, for example, in our current drafts, CONNECT-UDP registers the UDP payload datagram format type, and the client starts off by sending a registration capsule that says: hey, I'm going to be sending UDP payloads on here — those are the ones you need to handle.
C: When you get the payload, you then write it into your UDP socket to the target. Additionally, we have this demultiplexing mechanism, which is inside this box: you negotiate it using the Sec-Use-Datagram-Contexts HTTP header, and the draft introduces the concept of a context. The idea is that when this is in use, every datagram starts with a varint
C: that is a context ID. So, for example, you could say: okay, I want to register the UDP payload context, and I want to register a "UDP payload with two bits of ECN at the start" context for the ECN extension; or you could register the ICMP context extension. Then, when the other side receives an HTTP Datagram, it reads this varint off the head and says: oh, this is just a UDP payload, let me put it into my UDP stack; or this one —
C: oh, I need to parse these first two bits for ECN; or this one is kind of a separate sidecar of ICMP or something. It knows how to parse the datagram, and what actions it needs to take, based on this demultiplexing identifier at the start. And MT mentioned on the list that this is overly complicated.
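The context-ID dispatch just described is small enough to sketch. This is illustrative only — the context IDs, handler names, and the toy "one leading ECN byte" layout are my own assumptions, not the draft's wire format; the real point is that a single varint at the front selects the parser for the rest.

```python
# Sketch: when contexts are in use, every HTTP Datagram starts with a
# varint context ID that selects how the remaining bytes are parsed.

def read_varint(buf: bytes, pos: int = 0):
    """Decode an RFC 9000 variable-length integer at buf[pos]."""
    first = buf[pos]
    length = 1 << (first >> 6)
    value = first & 0x3F
    for b in buf[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, pos + length

# Hypothetical registered contexts (e.g. set up via registration
# capsules): context ID -> parser for that datagram format.
contexts = {
    0: lambda body: ("udp-payload", body),           # plain UDP payload
    2: lambda body: ("ecn+udp", body[0], body[1:]),  # toy: 1 ECN byte first
}

def dispatch(datagram: bytes):
    """Demultiplex one HTTP Datagram by its leading context ID."""
    ctx_id, pos = read_varint(datagram)
    return contexts[ctx_id](datagram[pos:])
```

A receiver that never negotiated contexts would skip the varint read entirely and treat the whole datagram as the payload — which is exactly the optionality conundrum discussed above.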
C: I don't disagree — and actually we had a good conversation with him and others last week. The fundamental thought MT got across, which I think is very reasonable, is that concepts aren't free. When you mint a new concept, you have to keep it in mind forever, and if you ever got anything wrong in the definition of that concept, you're going to pay for it for many years. So if we can get something even simpler, that's better.
C: So we have an issue to track that, and we came up with a new design that I just wanted to run by people. As soon as I'm done running through it, I'll have a slide, and then we can jump into a full discussion.
C: Sorry — so the idea is: I initially thought that the datagram format type was our best extensibility joint — or foothold, if you will — to enable all this extensibility. But it turns out, after some more thinking and discussion, that it's possible to get the extensibility we need without introducing this concept, and I think that's a nice property. We want an extensibility joint, but we kind of already have one in the protocol: capsule types.
C: As a reminder, capsules are this client-to-proxy protocol where you just send a sequence of TLVs, and the T there — the type — is something you can mint new values of; they're registered with IANA. That's a really good extensibility joint, because proxies and clients will drop any capsule they don't understand.
C: So a new extension, a new protocol — anything — can define its own capsule. So this proposal, PR 115: the idea is that we remove those registration capsules, because we don't need them in the core spec; we remove datagram format types; and then we add some text to say, okay, if you're an extension, you can use these capsule types as your extensibility mechanism. And so, with this potential design,
C: what you say is: you send a header as part of your request, when you're the client and you want this demultiplexing functionality. And if the proxy replies with that header as well, it means: okay, we both support at least one of these extensions, we think there's value here, we want this demultiplexing.
C: We add multiple capsules for datagrams, to convey the ones that explicitly have or do not have a context. The idea is that while you're negotiating the headers — so, as a client, before you've received the response headers — you can't send the regular QUIC datagram / HTTP Datagram, because there could be confusion about whether that varint is present or not. So until you know for sure and have gotten the response, you use these separate capsules that are crystal clear — they're self-describing.
D: The shape of what you're describing sounds good. One kind of wording suggestion or comment: we're using the word "demultiplexing" here — at any point do you actually mean an intermediary or a proxy or something like that taking a single stream and then sending it to two entirely different locations? Or are you really talking about —
D: I might even call it format conversion, or compression, or something like that, where basically you've developed a better formatting — whether it's ECN or IP compression or whatever — and the thing that terminates the proxy, which in some sense is a hop, is responsible for saying: okay, I understand this format and I know how to convert it to UDP or IP or whatever needs to pop out the other end. So it's kind of a format conversion, a compression.
C: No, that's a great question. I'm using "demultiplexing" to mean separating between multiple kinds of things inside an incoming flow, which is admittedly horribly unclear. What I mean by it is: the goal isn't to say — let's say you're a CONNECT-UDP proxy — these packets go here and those packets go there. It's — let's say you're CONNECT-IP — this packet came here.
H: [inaudible]

D: Yeah, exactly — I just wanted to make sure that everyone was on the same page about that. Sorry, another clarifying question?

C: Please — no, no, those are extremely helpful. Thank you.
C: Cool. So what does this mean in terms of the design requirements we were talking about earlier? It's clear that this supports demultiplexing, by the definition we just talked about. It's also clear that intermediaries are not involved in this at all, so you'd be able to deploy it later without having to modify the intermediary, which is really cool.
C
It allows you, because it's done at the HTTP datagram layer, to have extensions such as Ben's MTU discovery extension that you can write once and then use for WebTransport, connect-udp, and connect-ip. It has this low-latency property thanks to that separate capsule. But I want to insist a little bit on the optionality, because that's been the sticking point for folks who don't want this concept, who don't want to implement it.
C
What would be the foothold that we need to make sure we can ship that later? The only thing we need is those capsules that allow you to send things before you know what the server supports. So right now, instead of having just a datagram capsule in the draft, you would move the datagram-with-context capsule to that extension draft, but you're still left with two in the base draft: the datagram one, which might or might not have the context, and the datagram-without-context one.
C
So conceptually, maybe, if we split it out of the draft and we didn't want to have the word context anywhere in this draft, we'd call it the unextended datagram, and we would just require implementations of HTTP datagrams that are not extended to parse both of these capsules in the exact same way. Whichever one you get, you just take the payload and parse it as a payload.
C
That's the only kind of foothold we need, and an extension can slightly tweak these semantics and say: well, the datagram capsule might have the context ID if it's negotiated, but the unextended datagram capsule never has it. That's the little hack that allows you to send things before you've gotten the responses. So my general idea is: we've managed to pare this down to the simplest thing possible, the only extensibility joint that we require.
C
That
is,
a
burden
for
implementation,
that
don't
care
is
that
they
need
to
understand
two
capsule
types
instead
of
one
and
put
the
exact
same
machinery,
so
you
would
put
you
know
two
switch
statements
back
to
back
when
you're
parsing
and
I
think
yes,
that
is
the
end
of
my
long
ramble
about
all
these
things.
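As a sketch of that "two capsule types, same machinery" idea: an unextended implementation treats both datagram capsule types identically. The numeric type values below are illustrative placeholders, not the registered codepoints.

```python
# Sketch of parsing the two datagram capsule types identically, as an
# unextended implementation would. Type values here are hypothetical.

CAPSULE_DATAGRAM = 0x00             # hypothetical type value
CAPSULE_DATAGRAM_NO_CONTEXT = 0x01  # hypothetical type value

def handle_capsule(capsule_type, payload):
    """Return the HTTP datagram payload, or None for unknown capsule types."""
    if capsule_type in (CAPSULE_DATAGRAM, CAPSULE_DATAGRAM_NO_CONTEXT):
        # An unextended endpoint treats both identically: the whole
        # capsule payload is the datagram payload, no context ID to strip.
        return payload
    return None  # unknown capsule types are skipped
```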
C
Now, let's bring everyone in for discussion. In particular: are there requirements that we've discussed here, or that we haven't discussed, that you care about and want? Are there properties of this system that don't work for you for some reason? Or do you have any other random thoughts, comments, questions? I think we have some agenda time; I haven't been keeping too much track, Carrick, you can tell me how much more we have, but we have some time allocated for discussion here. Please come on up to the mic line. Go ahead, Eric.
I
So I think we're going to try to time-box this to about 15 minutes, and when we get to the end of that we'll see where we are. If we've completely face-planted, we'll go from there, and if we haven't, then that's wonderful. So let's try and keep comments as short as possible to let everybody get some time in.
F
You know, ideally, the less complex this is, the better, so I like the idea that you've constrained some of these things. I assume all the standard proxy authentication methods will work with MASQUE. Is that a correct assumption?
C
Absolutely. So MASQUE, in particular in this case connect-udp, but all the things we're discussing are traditional HTTP requests and responses. So connect-udp, now that we're using extended CONNECT, we'll talk about that in a minute.
K
All right, I can't even see myself.
C
I can hear and see you, though. It seems it's dark where you are as well.
K
This just has a summary, right? You've got a couple of points here, and there's a lot here that's bound up with the design of the different protocols that can be using the H3 datagram stuff, and I've been sort of convinced back and forth throughout your discussion of all of these things about the virtues of some of these other things.
K
But ultimately, what I think we want here is the ability to send a stream of data to the other end, bidirectional, and associate a bunch of datagrams with it, and that's it.
K
That's something those protocols can deal with, and if the intermediary understood that protocol, then it can participate in that. If it doesn't understand that protocol, then it probably has no business participating in that protocol, and we can have a much simpler design here, and probably a much simpler design all over, if we just have those capabilities. That means no capsules, period, and that would be something that I think we could probably make work.
C
So let's pick on that one. Actually, I didn't talk about this in the slides, because that part has been in the draft for a while. We have a datagram capsule, and the goal there is to be able to send datagrams over versions of HTTP that aren't HTTP/3, and a pretty common deployment model that people have talked about here is that you're going to have HTTP/3 from the client to the first intermediary.
C
In other words, the proxy. And I think the idea here, or at least in the draft as it's currently written, is you have a capsule that is the datagram capsule, and the intermediary speaks that, because it's going to need to do this conversion, and then you're kind of done. That way you don't need to reinvent this datagram capsule for each protocol. Is that what you're suggesting?
K
I'm actually suggesting that we reinvent them; or rather, it's not reinventing them, because I think the idea is pretty simple. But the protocol that the intermediary speaks, which does involve folding datagrams into the stream or pulling them out again on a hop-by-hop basis, is something that can be negotiated on a hop-by-hop basis.
C
So at the end of the day, the question there is: do we want to have a common way to do this across protocols, or do we want each protocol to do their own? My general sense, and I see more people getting in the queue, is that it feels silly to have everyone reinvent the same wheel, when folks seem to like capsules, because how can you get any simpler than a sequence of TLVs? But yeah, I see that we're...
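For readers who haven't seen the capsule format: a minimal sketch of what "a sequence of TLVs" means on the wire, assuming QUIC-style variable-length integers (RFC 9000) for the type and length fields. This is a simplification for illustration, not the normative encoding.

```python
def read_varint(buf, i):
    """Decode a QUIC variable-length integer (RFC 9000 style) at offset i."""
    first = buf[i]
    length = 1 << (first >> 6)       # top two bits give the encoded length
    value = first & 0x3F
    for b in buf[i + 1:i + length]:
        value = (value << 8) | b
    return value, i + length

def parse_capsules(buf):
    """Parse a byte string as a sequence of (type, payload) capsules."""
    i, capsules = 0, []
    while i < len(buf):
        ctype, i = read_varint(buf, i)   # capsule type
        clen, i = read_varint(buf, i)    # payload length
        capsules.append((ctype, buf[i:i + clen]))
        i += clen
    return capsules
```

An endpoint that doesn't understand a given type can simply skip that capsule, which is what keeps this extensibility cheap.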
L
Hi. There's a lot going on here, and I think it's getting a little confusing, because we're talking about multiple layers that are really quite independent. I think a lot of people following this presentation may not have caught that the datagram format types are pretty much unrelated to the capsule discussion: they run end to end and are not connected to that. I do think that we need to be clear that there are...
L
But one of the back ends that I talk to is H2-only. A protocol comes in; I don't even know what that protocol is. I don't know if it uses capsules, and so I can't forward these datagrams, and I also can't signal that the datagrams have failed.
L
That's
the
that's
the
case
that
worries
me
the
most
and
it
I.
I
will
just
float
that
if
you
think
that
all
of
this
is
too
complicated,
you
know
this
complexity.
A
lot
of
this
is
because
we're
effectively
reproducing
a
lot
of
the
functionality
of
web
transport,
in
my
view,
because
we
need
it,
and
so,
if
you
think
this
is
too
complicated,
I
I
would
suggest
weighing
the
alternative,
which
would
be
to
run
all
of
this
over
web
transport.
B
All right. So I think perhaps we have some disagreement on requirements, which is: I don't understand how you get into the state Ben suggests, which is to say, you've been configured with the URL and settings of an endpoint that you're connecting to. That should not happen. If there's a distribution, as Ben suggests, in which the endpoint is configured with that and then the back end doesn't work, that simply should not happen; you should not be configured with it, and if it does happen, the answer is to return an error.
B
So
it's
not
like
only
discovery
here
is
happening
in
settings
because
we're
not
discovering
like
it's
not
like
we're
discovering
that
you
can
use
you
know
compression.
We
were
talking
about
a
specific
service
which
configured
the
client
either
via
some
mechanism.
It
was
advertised
by
the
server.
So
what
I'm
saying
is:
don't
do
it
say
simply
not
configure
system
that
way
and
the
problem
will
go
away.
So
the
the
word
intermediary
appears
in
this
document
on
30-ish
times.
B
It
should
appear
zero
times,
and
the
right
answer
is
that
the
that
the
front
that
the
front
end
the
thing
which
you
initially
quit-
connect
to
should
advertise
only
things
that
make
sense,
and
if
and
it
is
its
responsibility
to
translate
the
things
that
make
sense
to
it,
things
make
sense
behind
it,
and
so
I
I
I
guess
I
I'm
persuaded
that
any
of
this
machinery
is
required.
B
So
I
I
guess
that,
and
so
so
I
I
think
that
would
be
my
put,
and
I
I
I
think
that
probably
that
is
that's
preparatory
to
like
any
discussion
of
how
to
actually
implement
the
functionality.
You
should
see
the
desiring
here.
C
Thanks. I mean, in the sense that folks want to deploy this behind front ends, so intermediaries: one potential design, which is what you're saying, is that we terminate everything at that first hop, and conceptually then there's no more intermediary, because you say that any protocol that supports HTTP datagrams must terminate that request. It can send another request behind it if it wants, but conceptually those are no longer the same one.
C
I'm sorry, I need to repeat that argument. So let's say, for example, let's focus on WebTransport. Folks want to deploy WebTransport on back ends through a front end; I think that is going to happen, and there are two ways of doing that. You can either say the request is end to end from client to back end and there is an intermediary, which is the design in the draft right now, or you can say, what you're proposing I think, that the front...
C
End
terminates
the
request
and
if
it
and
it
then
it
creates
a
separate,
disconnected
request
from
the
front
end
to
the
back
end,
and
so
in
that
what
that
means
is
then
you
no
longer
have
discussion
of
intermediaries
in
the
http
datagram's
draft,
because
then
you
have
ngb
intermediaries
from
client
to
front
end
and
from
front
end
to
back
end
separately,
but
then
there
is
no
sharing
of
state
or
anything
between
them.
B
Yes. And what I'm saying, I think, is that one way to do it is to terminate, and the other way is to have a private arrangement between the intermediary, the front end, and the back end, which describes exactly which parts of the channel are in the clear and which parts of the channel are not. And what's happening here is you're hoisting...
B
All
that
complexity
on
the
client,
because
you
know,
because
you
don't
want
to
make
that
privilege
and
they
put
this
between
the
server
and
and
and
between
the
origin
and
the
immediate
area.
B
But
the
truth
of
the
matter
is
is
that
there
are
many
such
arrangements
in
many
different,
many
different
configurations,
and
that
and
like-
and
I
think
the
tri,
the
the
the
attempting
to
dictate
the
behaviors
intermediaries
has
proven
to
be
extraordinarily
difficult
in
both
hdp
and
sip,
and
that
it's
far
so
important
when
you're
modeling
this
b2b
ways
and
that
then
the
bbx,
whatever
the
heck
they
want,
and
it's
simply
not
a
non-memphis
standardization,
so
yeah.
M
Yeah, I just wanted to say that, at some level, the way HTTP models extensibility for things like connect-udp is that connect-udp is a resource, and fundamentally any extension to connect-udp that you negotiate is a property of an individual resource, as opposed to a property of the connection. And to comment on the previous point...
M
I
do
things
that
of
proxy
to
back
and
require
standardization,
because
in
cases
like
cdns,
those
are
the
objects
that
are
operated
by
different
entities
and
even
in
cases
that
aren't
they're
operated
by
same
entities.
There.
Often
the
software
is
implemented
by
different
people
like,
for
instance,
me
running
nginx
as
a
reverse
proxy.
M
So
I
do
believe
that
we
are
fundamentally
do
need
to
standardize
those
and
that
works
for
http.
So
that
should
work
for
mask.
C
Thanks, Victor. So, to clarify what you're saying: you're suggesting that we have HTTP datagrams function end to end from client to proxy, in such a way that we don't need to modify the intermediary to terminate things on the intermediary. Is that what you were saying?
M
Yes, I believe that, similar to things like HTTP headers, those should be also... My timer tells me to stop talking, so I'm going to stop talking. Awesome.
A
Alan, yeah, so the timer just expired, and there's been a lot of discussion going back and forth. I think at this point it's fairly clear that there are a lot of different hot takes on what the drafts should do, what the requirements are, and what the simplest thing moving forward is.
A
So
what
eric
and
I
would
like
to
do
is
propose
a
design
team
to
sort
of
resolve
these
particular
issues,
followed
by
an
interim
meeting
between
now
and
itf,
113,
to
present
the
results
and
hopefully
drive
towards
consensus.
A
So
if
you
would
like
to
participate
on
this
design
team,
please
either
like
send
eric
or
myself
an
email
or
just
you
know,
drop
a
note
in
the
in
the
chat
and
we'll
make
sure
you're
on
it
david
since
you've
been
so
instrumental
in
moving.
This
strap
forward,
wondering
if
you'd
be
willing
to
to
lead
the
effort
and
and.
C
...help make it happen? Yeah, happy to. So folks, if you want to, I don't know, apply, type "dt +1" or something in the chat; I'll make sure to gather everyone's names, and then we can have a side meeting sometime soon.
A
Are
you
can
I
better
than
I
will
help
coordinate
logistics,
to
make
it
easier
for
you
all
to
focus
on
the
technical
details
rather
than
dealing
with
time
zones
and
whatnot?
So
thank
you
in
the
interest
of
time,
al
and
lucas
tell
me
if
you
have
clarifying
questions
that
still
can
be
asked
or,
or
you
still
like
to
ask.
Of
course
you
can
ask
them,
but
if
the
things
that
can
be
punted
to
the
design
team
later
on,
I'd
encourage
you
to
do
that.
C
All right, thanks. All right, Lucas, you're still in the queue.
N
Yeah, I just want to say that people have been arguing about getting rid of capsules, but if we do that, we're back to where this draft started, which is HTTP/3 datagrams only. The whole point of adding capsules was to support datagrams over HTTP/2, which was a use case people wanted, and HTTP/1 as well. So if we're revisiting that, it would make me sad and I'd want to know why, and we can have that discussion in the design team or later on. So thanks.
C
Cool, thanks, Lucas. All right, let me see, where was I? Right.
C
So
I
had
another
question
for
this
draft
about
wait.
No,
that's!
There
was
an
alternate
proposal
and
a
discussion
on
reliable
datagram,
but
I
think
I'm
going
to
punt
this
to
after
that,
after
the
design
team,
because
there's
no
point
going
even
resolving
this
until
we
know
next,
so
let
let's
kind
of
switch
gears
and
talk
about
connect
udp
for
a
bit.
C
So
where
are
we
with
connect
udp?
So
the
connect
udp
draft
was
kind
of
on
hold
for
a
while,
while
we
were,
you
know,
redesigning
http,
datagrams
and
then
we
kind
of
since
we
landed
on
a
vastly
different
design
for
http
datagrams.
C
I
took
some
time
to
update
connect
udp
to
reference
that
and
also
to
update
some
of
the
discussions
we
had
in
the
past
and
the
the
main
change
there
is
that
we
connect
udp
is
no
longer
the
connect
udp
method.
C
So
we
said
we
now
say:
connect
udp
relies
on
extended
connect
which
is
rfc
8441
and
h2,
and
a
draft
that's
currently
in
the
http
working
group
for
h3
and
the
idea
is
the
server
sends
a
setting
to
say.
I
now
allow
a
new
pseudo
header,
which
is
the
colon
protocol
header,
and
so
you
send
a
connect
method
with
with
a
colon
protocol
and
you
send
that
with
connect
udp.
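As a concrete illustration, an extended CONNECT request for connect-udp might carry pseudo-header fields like the following. The proxy authority and path here are made-up examples, and the exact shape of the path is precisely the URI-template question raised in this session.

```python
# Illustrative pseudo-header fields for a connect-udp request using
# extended CONNECT. The authority names the proxy, not the target; the
# target is carried in the path. All host names here are examples.
request_headers = [
    (":method", "CONNECT"),
    (":protocol", "connect-udp"),
    (":scheme", "https"),
    (":authority", "proxy.example.org"),    # the proxy you're talking to
    (":path", "/target.example.com/443/"),  # example target encoding
]

headers = dict(request_headers)
```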
C
Pardon me, we discussed that in the previous meeting. It kind of simplifies things, and people are saying it'll also be easier to deploy given existing infrastructure. Then another important change was configuration on the client. So let's say you want to tell your web browser, for example, to use this connect-udp proxy. Before, you would just give it a host and port; now we say you use a URI template. At the bottom of the slide...
C
Here
I
have
like
three
random
examples
of
how
you
would
connect
configure
the
client
and
conceptually
what
it
means
is
that
you
send
the
target.
So
the
target
again
is
the
server
that
you
want
to
proxy
to
in
the
in
the
path
pseudo
header
when
you're
sending
it
and
the
important
distinction
there
is
so
in
almost
every
http
method.
Under
the
sun,
the
authority
is
the
authority
of
the
proxy
and
then
the
path
is
how
you
tell
what
you
want.
You
know
your
actual
target.
C
What
you
want
to
talk
to
connect
was
its
own
beast,
where
in
the
authority
it
kind
of
in
the
authority
instead
of
having
the
authority
of
the
proxy,
you
had
the
authority
of
the
target
of
the
duality.
Sorry,
the
host
name
and
port
of
the
target.
A
lot
of
folks
were
saying
that
that
design
choice
for
connect
was
odd
and
we
didn't
want
to
replicate
that
for
connect
udp.
C
So
here
we're
back
to
a
more
traditional
http
design,
where
the
authority
header
contains
the
authority
of
the
proxy
you're
talking
to,
and
then
the
target
is
encoded
in
the
path.
So
right
now
we
have
in
the
draft,
is
a
uri
template
for
this.
So
that
said,
the
ui
template
was
something
that
I
forgot,
who
proposed
a
few
atf's
to
go.
When
I
went
to
implement
this
for
the
hackathon
last
week,
it
was
way
more
of
a
pain
than
I
thought
it
ever
could
be.
C
So
for
folks
that
I
have
read
the
rfc
on
uri
templates,
they're,
very
complicated
ways
of
encoding.
Uri
templates-
and
I
mean
it's.
Okay,
don't
worry
it's
not
like.
Anyone
has
introduced
security
vulnerabilities
by
having
mistakes
in
their
parser
before
and
so
my
thought
was.
C
Do
we
actually
need
a
uri
template
here,
because
the
really
cool
properties
that
we
rely
on
is
the
property
that
we
have
a
your
uri
for
configuration,
but
it
doesn't
necessarily
need
to
be
a
template,
because
if
we
have
a
uri
for
configuration,
it
means
that
the
scheme
is
decided
at
configuration
time
and
that
also
conveys
the
authority
of
the
proxy
at
configuration
time,
which
you
obviously
need,
and
it
allows
you
to
rely
on
the
path.
So,
for
example,
you
could
have
behind
the
same
authority,
multiple
paths
for
proxying
and
you
could
pass
in.
C
You
know
as
a
http
query,
parameter
like
the
user
name
or
whatever
you
fancy,
and
so
the
question
is:
do
we
want
to
how
do
we
send
than
the
target
post
import
that
you
want
to
connect
to?
Do
we
send
it
in
the
path
with
a
uri
template,
or
do
we
say?
No,
the
path
is
reserved
for
like
just
configuration
elements
and
it's
send
in
a
separate
header.
C
I
initially
thought
the
uri
template
was
really
great,
but
it
ended
up
being
way
more
complex
to
implement,
and
that
wasn't
great
and
I'm
not
sure
we
should
replicate
this.
The
so
ui
template
really
made
sense
for
something
like
dough
because
it
allowed
you
to.
You
know
just
type
in
your
query
into
like
curl
and
get
a
response
like
directly
from
the
account
which
was
really
neat.
That's
not
something
you're
going
to
want
to
do
for
connect,
udp
and
the
other
property
is
when
we
so
in
the
connect
ip
draft.
C
Tell
me
we'll
be
talking
about
later.
We
decided
to
mimic
connect
udp
at
the
end
of
the
day.
Let's
try
to
keep
them
as
close
to
pos
together
as
possible,
but
the
ui
template
parameters
were
different,
so
I
thought
it
would
be
really
cool
that
you
could
configure
your
mask
proxy
with
one
configuration
string,
but
if
you're
using
a
ui
template
you
can't
really
or
you
need
all
those
parameters
in
there.
K
So I guess the question that you want to ask here is: what resource are you identifying? Are you identifying a generic resource that's got the ability to connect to anything, such that having different IP addresses in the URL would be a mistake? Or are you identifying a resource that has generic connectivity properties?
C
Anyway-
and
that's
actually
that
that
is
exactly
the
question
that
we're
trying
to
answer
from
a
like,
so
I
was
asking
it
in
terms
of
like
practical.
You
know
wine
coding
matter,
you're
asking
in
terms
of
like
conceptual
matter,
but
I
totally
agree
it's
the
same
question:
do
you?
Are
we
saying
that
we
are
like
the
the
resource?
Were
you
know
in
http
terms,
the
resource
that
we're
going
to
is
that
the
part
of
the
proxy
that
handles
connect
udp?
Is
that
the
resource
or
is
the
target
the
resource?
C
And
if
the
resource
is
the
target,
then
we
want
this
to
be
encode
like
the
target
host
and
port
to
be
encoded
in
the
path.
If
the
resource
is
the
proxy
proxying
capability,
then
we
don't
want
to
encode
it
in
the
path
we
want
to
encode
it
in
a
separate
header.
That's
the
question
we're
trying
to
answer
here.
F
A couple seconds here: can you just elaborate on how you envision header parsing being simpler than template parsing? I didn't understand that point. They're both going to have the same kind of parser-type bugs; they would happen in both, right?
C
No
and
that's
a
great
question,
I
guess
I
should
have
been
more
clear
so
right
now
on
the
client,
so
I
need
to
parse
the
uri
template,
so
the
the
you
you're
gonna
have
to
run
some
code
on
the
client
which,
as
input
is
going
to
take
the
configuration
and
the
target.
So
let's
say
your
configuration
is
a
uri
or
ui
template.
That's
how
you
refer
to
the
proxy
and
and
you're
going
to
take
like
the
at
the
moment.
C
Your
browser
is
saying
I
want
to
connect
to
this
target
over
there,
so
you're
going
to
take
those
two
bits
of
information
and
combine
them
into
an
http,
headers
frame
and
so
right
now
what
you
need
to
do
is
parse
the
ui
template
in
order
to
like
the
technical
term
from
that
rsc
is
to
expand
the
ui
template
into
a
fully
resolved,
uri
and
so
you're
gonna
need
to
if
you're,
using
a
uri
template.
C
C
Pseudoheader,
you
put
the
path
in
the
path
to
the
header
and
you
put
what
separately
you
got
as
your
target
ip
into
its
new
header
and
your
target
port
into
its
new
header.
That
also
makes
parsing
on
the
server
on
the
proxy
side
easier,
because
now
you
don't
need
to
go
and
parse
some
of
this
to
get
the
ip
address.
You
just
say.
Well
I
look
at
this
header
and
there
it's
there.
I
don't
need
to
split
it
out
in
any
way.
C
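For a sense of what that client-side step involves, here is a deliberately minimal sketch of expanding a connect-udp URI template. It handles only simple variable substitution, none of RFC 6570's operators or percent-encoding rules, which is exactly the complexity being complained about, and the template string itself is just an example.

```python
# Minimal, illustrative expansion of a connect-udp URI template. Real URI
# templates (RFC 6570) also require percent-encoding and support operators;
# this sketch deliberately skips all of that.

def expand(template, variables):
    """Substitute {name} placeholders with the given values."""
    out = template
    for name, value in variables.items():
        out = out.replace("{" + name + "}", str(value))
    return out

uri = expand("https://proxy.example/masque/{target_host}/{target_port}/",
             {"target_host": "192.0.2.6", "target_port": 443})
# uri == "https://proxy.example/masque/192.0.2.6/443/"
```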
B
Oh right, I was waiting, I forgot. So, I mean, I guess I don't feel super strongly about this, but I also don't think the arguments you're offering are very persuasive. You have an X.509 parser, so I don't think the additional security overhead of parsing a URI template is that big a crisis. I do know the comments, though, about how complicated it is.
B
I
think
I
thought
I
thought
lucas's
suggestion
that
maybe
your
profile
level
one
would
be
fairly
persuasive.
The
I
do
think
from
a
philosophical
perspective
like
which
I
mean
is
not
necessarily
terminative.
The
the
header
is
wrong.
I
mean
the
verb
is
connect.
B
The
the
resource
is
the
thing
you
want
to
connect
to
and
if,
if
the,
if
there,
if
the,
if
the,
if
what
you
wanted
to
say,
was
that
the
thing
you're
connecting
to
is
somehow
the
proxy,
then
that
reaches
the
proxy,
then
the
verb
should
not
be
connected
to
something
else,
and
you
know
we
already
have.
This
worked
example
of
regular
connect,
which
is
how
this
works.
So
I
I
guess
you
know
that's
like
not
disposed
of
necessarily
you
know.
B
One
could
argue
that
http
should
not
have
been
designed
the
way
it
had
been
designed
but
but,
like
I
think
you
know
for
trying
to
if
we're
aiming
for
aiming
for
some
full.
So
I
think
that,
like
I
think,
like
the
argument
you're
offering
that
like
this
is
like
a
pain
in
the
ass
like,
I
think
someone
has
a
merit,
but
I
think
the
argument
that,
like
you
know
that
it's
that
that,
like
philosophically
it
should
be
that
I
think.
C
I
no
I
I
agree
with
you.
I
mainly
care
because
it
was
a
pain
to
implement
at
the
end
of
the
day.
That's
what
I'm
trying
to
minimize
for
I'm
an
implementer.
The
conceptual
aspects
of
http
are
not
my
personal
main
concern
ben.
L
Waiting for audio... okay. So, just to remind people, URI templates mandate full Unicode support, so a URI template can contain any Unicode code point, which then has to be escaped and normalized down to the URI character set, and that happens after substitution. So you can have supremely weird interactions here, like, you know, an RTL-reversal Unicode code point that sometimes appears depending on the template contents. There's unlimited weirdness there.
L
The connect-udp target port number could actually be the origin port number, or you could have it be a subdomain, or even part of a domain; you could register all 65,000 domains. Anything is possible with URI templates. That's just a little bit of a warning. I'll also note that there's a middle-ground option here, where we say that everything has to go in the URI, and we actually specify the URI structure a little bit, rather than just templating it.
C
Cool, thanks. Tommy?
E
Waiting for audio... cool, yeah. So I think I agree with what people are saying: implementations that support DoH already need to support URI templates, and people who are doing MASQUE probably also support DoH, so it shouldn't be too much of a new lift. To Ben's point about weird URI template types: I mean, I think, you know, if it hurts to do that, don't put your things in the scheme. We should probably have some constraints here to say, like, hey, we're already using the scheme, we're already using the authority and the port.
E
I could very well have a generic VPN that doesn't have any constraints on what I'm connecting to, where it's totally open-ended, and I could have one that only forwards SCTP packets or whatever, and those are very reasonably different resources that I'm accessing. So I do think the logic is clearer there, but then you can kind of back-apply it to connect-udp and say, yeah...
E
This
really
is
the
resource
you're
connecting
to,
and
if
you
wanted
to
split
up
your
server
to
think
of
it
as
different
resources
that
you're
accessing
have
different
access
control
that
way
it
it
would
make
sense.
So
I
think
we
should
leave
it
in
the
uri
and
maybe
just
add
some
constraints
to
the
crazy
things
you
could
do
with
a
template
that
are
just
wrong.
C
Okay,
thank
you.
Yeah
sounds
like
from
the
folks
in
the
mic
line.
There's
some
interesting
keeping
this
in
your
eye
templates
all
right,
moving
on
to
our
next
and
final
issue
for
today
in
so
in
connect
udp.
Since
now
we
are
so
for
for
h2
and
h3.
We
use
extended
connect
with
the
pro
the
colon
protocol
pseudo
header.
The
method
is
connect.
The
protocol
is
connect
udp.
C
What do we do for HTTP/1? Right now, for HTTP/1, we use the upgrade mechanism, and so, at the end of the day, this mimics WebSocket, which is the only standardized use of extended CONNECT that I'm aware of. So we say, okay, same as WebSocket: you can do connect-udp over H1 by using Upgrade.

But the question is: what method do you use? Because when you're using Upgrade, conceptually you don't look at the method; you just look at what you're upgrading to. The only real purpose of the method is to say: oh, what happens if you accidentally hit a server that doesn't support Upgrade? What flavor of failure would you prefer to have in that scenario?

And so right now the connect-udp draft says you use CONNECT with Upgrade; WebSocket says you use GET with Upgrade. Which one should we use here? I personally don't have an opinion. Do folks have thoughts?
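For concreteness, an HTTP/1.1 connect-udp request using the upgrade mechanism might look roughly like this, mirroring the WebSocket pattern. The method shown is the draft's current choice (CONNECT, where WebSocket uses GET), the host names and path are examples, and the exact header set is not settled by this discussion.

```python
# Roughly what a connect-udp request over HTTP/1.1 Upgrade could look like.
# The draft currently says CONNECT for the method; WebSocket uses GET.
method = "CONNECT"

h1_request = (
    f"{method} /target.example.com/443/ HTTP/1.1\r\n"  # example target path
    "Host: proxy.example.org\r\n"                      # the proxy, not the target
    "Connection: Upgrade\r\n"
    "Upgrade: connect-udp\r\n"
    "\r\n"
)
```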
C
All right, well, if no one cares, we can keep it as we have it today in the draft, and we can always change that later. Martin?
O
So, if I understand correctly: if we ran into a server that didn't understand Upgrade, then we would just end up sending datagrams that would become random bits somewhere, and a TCP connection coming out of the proxy, whereas with GET it would just return a 404 or something. Correct?
C
Yeah, well, I mean... actually, with HTTP/1 CONNECT you send an authority without a path, whereas here you would send CONNECT with a slash in there, and, you know, it's always hard to imagine what kinds of bugs exist, but someone would probably end up mis-parsing what's after that slash to get the info or something. So I'd expect that it would fail spectacularly at the DNS resolution stage, or trying to parse the IP, as opposed to trying to create an outbound TCP connection.
O
I
I
don't
know
we
need
to
like
analyze
the
failure
path
here,
but-
and
you
know
other
people
written
more
hp,
departures
than
me,
but,
like
I
mean
I
would
just
say,
just
whatever
has
a
more
deterministic
and
less
goofy
failure
path
would
be
the
way
to
go
here
thanks.
I.
C
Totally agree, because the new thing we build will work whatever method we use; it's just about what happens if we hit existing servers. So maybe we just need to do some homework and hit a bunch of existing servers with what this would look like and see in what ways things fail. Eric?
C
That sounds great to me. Okay, unless someone dislikes that idea, I'm going to take the action item to email the HTTP list asking what they think, and I see some plus-ones in there. Perfect. All right, and this brings us to the end of this presentation. Thanks, everyone, for listening, and good luck until dawn for everyone who's with me on the west coast. All right, thanks, everyone, and over to you, Tommy.
E
All right, okay, hello, everyone! I will be presenting a new unified draft for IP proxying support in HTTP.
E
All
right
so
connect
ip.
I
think
we've
heard
this
many
times
bounced
around
the
working
group
before
why.
Why
is
this
different,
and
mainly
the
difference
is
that
we
have
had
many
different
versions
of
the
proposals
from
different
authors
and
over
the
past
couple
months.
Those
different
authors
and
myself
have
tried
to
join
forces
and
come
up
with
a
single
proposal
for
what
connect
ip
should
look
like
so
yay
for
unification.
E
That's a lot of the driving use cases that constrain the space, but you can also think of this as just a CONNECT-like proxy for an arbitrary IP protocol, and to that end it has a lot of similarities to CONNECT. I think, as the authors looked at this collectively, a lot of the differences in perspective were because we had some perspectives coming from the purely-VPN side, and some coming from the side of saying this should be a CONNECT protocol for arbitrary IP.
E
So,
as
we
discussed
with
the
authors
as
we
were
coming
together
here,
we
agreed
on
a
couple
things
as
the
scope
of
this.
So
you
know
right
now:
it's
defined
as
an
extended
connect
protocol
mirroring
connect
udp.
E
E
Then another key thing that the authors agreed upon, and it sounds quite simple when you think about it: the connect-ip document should be concerned only with things that exist in an IP header itself, and that's the unit that it operates on. So the different endpoints can request and assign and route based on fields that exist in the IP header: that's the source address, that's the destination address, and that's the next protocol.
E
E
That
would
be
useful
extension
to
have
both
for
ip
and
udp
connect
methods,
and
there
are
also
fancy
things
we
could
do
in
which
we
could
get
a
bit
more
next
protocol
aware
and
understand
what
we
do
for
different
port.
Allow
lists
for
different
vpns,
but
that
should
also
be
an
extension.
E
Okay,
so
what
actually
is
defined
in
this
document-
and
it's
not
too
much,
thankfully,
first
there's
the
the
upgrade
or
protocol
token
connect
ip,
which
mirrors
the
stuff
that
david
was
just
talking
about
as
connect
utp.
So
that's
really
just
a
lift
and
rename.
E
Now, in this case it doesn't make sense to talk about a target name, or a target host and a target port, because there are no ports. Instead, you have essentially the target host that you want, if you have one (you can also have none), and a target next protocol, if there is one (or it can just be generic and carry anything). There are also a couple of different capsules defined.
E
This is using capsules really as just a negotiation mechanism, so that after the time of the request you can, over the stream, add and remove addresses and routes. Only three capsules are defined. There's an address-assign capsule, which allows one endpoint to tell the other side: you can send from this address.
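As a rough sketch of what such a capsule could look like on the wire (the codepoint and payload layout below are illustrative assumptions, not values from the draft), capsules are framed as a variable-length-integer type followed by a length and a payload:

```python
import ipaddress

def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer encoding (RFC 9000, section 16).
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 2**30:
        return (v | 0x8000_0000).to_bytes(4, "big")
    return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")

def encode_capsule(capsule_type: int, payload: bytes) -> bytes:
    # Generic capsule framing: varint type, varint length, then payload.
    return encode_varint(capsule_type) + encode_varint(len(payload)) + payload

# Hypothetical address-assign capsule carrying one IPv4 address and a
# prefix length; the real codepoint and field layout come from the draft.
ADDRESS_ASSIGN_TYPE = 0x01  # assumed codepoint, for illustration only
addr = ipaddress.IPv4Address("192.0.2.1")
capsule = encode_capsule(ADDRESS_ASSIGN_TYPE,
                         bytes([4]) + addr.packed + bytes([32]))
```

The peer would parse the type and length the same way, then interpret the payload according to that capsule type's definition.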
E
When you want to generate a packet, you can also have an address-request capsule, which says: I would like to send from this address, or this subnet. And then, in the same way that address-assign can define the source address for packets, a route-advertisement capsule defines the destinations that a side is allowed to send to. So you can say: you are allowed to send to anything that falls, essentially, within this routing table I'm going to give you. The last thing defined in the document is currently called the datagram format, because the HTTP datagrams document still has these; but essentially it just says that the thing that goes within a datagram is a full IP packet.
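Because the datagram payload is just a complete IP packet, a receiver can dispatch on the IP version nibble of the first byte. This is a minimal illustrative sketch, not anything mandated by the draft:

```python
def dispatch_tunnelled_packet(payload: bytes) -> str:
    # The first nibble of an IP packet is its version, so a CONNECT-IP
    # endpoint can tell v4 from v6 without any extra framing.
    if not payload:
        raise ValueError("empty datagram payload")
    version = payload[0] >> 4
    if version == 4:
        return "ipv4"
    if version == 6:
        return "ipv6"
    raise ValueError(f"not an IP packet (version nibble {version})")
```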
E
As for what's going to be allowed: this also lets a proxy that's operating a bit more like a normal CONNECT or CONNECT-UDP proxy share source IP addresses between multiple clients.
E
So if one client really only wants to talk to a specific host or a specific subnet, and isn't trying to open up a completely full tunnel, then you don't need the same kind of IP address provisioning requirements that you would as a normal VPN.
E
You could also have a case where the client is only interested in talking to a single host on the other end; maybe it's just trying to open up an SCTP flow for some reason. In this case the protocol works exactly the same, except you can get back a very limited route that covers only the thing you requested to connect to.
E
And the last example we have here just shows how, even if you're connecting to a specific host, you can receive different address families to send from, because we like supporting both v4 and v6, and you can get a routing table that supports both anyway. So that's the protocol.
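The route-advertisement behaviour described above amounts to a membership test against a table of prefixes that can mix both address families. A sketch only; the actual matching rules are the draft's to define:

```python
import ipaddress

def allowed_destination(dst: str, advertised_routes: list[str]) -> bool:
    # A packet may be sent only if its destination falls within one of
    # the advertised prefixes; the table can hold v4 and v6 entries,
    # and a mismatched family simply never matches.
    addr = ipaddress.ip_address(dst)
    return any(addr in ipaddress.ip_network(r) for r in advertised_routes)
```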
B
Yeah, I think this is a pretty good starting point; I think it's probably good enough to adopt. I sent some comments in email last night — well, "last night" isn't quite right; it wasn't like midnight, it was actually more like three. Anyway, these are all things we could hash out in some later discussions.
E
Great, and I did see your comments — thank you for them. I think there's stuff we can discuss, and I think we can go either way on a lot of them.
I
Yeah, so thank you, Tommy, for the presentation. We're going to use the show-of-hands tool, so everybody find that tab and go find where it is.
L
So this presentation is about an individual, non-adopted draft that I wrote in response to some previous discussions about Path MTU Discovery. It's very much premised, or phrased, in terms of the current datagram format and capsule system, so to the extent that the design team decides to make changes there, this would also need to change. So: PING is an HTTP datagram format type, and pings flow between a client and an origin (or a proxy) and intermediaries.
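A probe under this kind of scheme might look like a small header padded out to the size under test, in the spirit of DPLPMTUD probes. The layout here is an assumption for illustration, not the draft's wire format:

```python
def build_probe(seq: int, probe_size: int) -> bytes:
    # A hypothetical ping probe: a 4-byte sequence number padded out to
    # the datagram size being tested. The peer echoes the sequence
    # number, so the sender learns which probe sizes survived the path.
    header = seq.to_bytes(4, "big")
    if probe_size < len(header):
        raise ValueError("probe size smaller than header")
    return header + bytes(probe_size - len(header))
```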
L
The main purpose of this is inspired by CONNECT-IP: in order to be a fully compliant implementation of IP, CONNECT-IP needs to be able to supply some minimum MTU, and this gets tricky if you're running over a link that's already at or close to that MTU, or if any of the links on the chain between the client and the back end have a low MTU.
L
The way you can currently solve this is by sending that datagram reliably. Well, okay, we can get to that in a second, but basically, endpoints need to be able to detect that there's some datagram that the IP layer is officially entitled to rely on — as in, this datagram is less than 1280 bytes in IPv6, so it is guaranteed to be deliverable — but actually it's not, because when you wrap it in all of the MASQUE overhead, it exceeds the underlying wire MTU.
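The overhead problem can be illustrated with back-of-the-envelope arithmetic; every per-layer figure below is an assumption (QUIC's overhead in particular varies with header form and AEAD tag size):

```python
def tunnel_payload_limit(wire_mtu: int,
                         outer_ip: int = 40,   # assumed IPv6 outer header
                         udp: int = 8,
                         quic: int = 30,       # assumed short header + tag
                         dgram_framing: int = 4) -> int:
    # Each layer of encapsulation subtracts its header bytes from what
    # the tunnelled IP packet may occupy.
    return wire_mtu - outer_ip - udp - quic - dgram_framing

# With these assumed figures, a 1500-byte wire MTU still fits a
# 1280-byte inner packet, but a 1280-byte wire MTU does not.
```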
L
So we need to do something; we need to change what we're doing. But in order to do that, the endpoints need to know something about the path MTU, and because of the presence of intermediaries, using, for example, the MTU estimated by the underlying QUIC session is not going to be good enough. And there are even more complicated intermediary arrangements, like when the back end is actually speaking to an intermediary over HTTP/2.
L
But this is very general: anything that uses datagrams generally needs to know something about the MTU of the datagram path, and it's also very common to need to know something about round-trip times and packet loss rates. So it's a very general problem, and it seems appropriate to have a general solution.
L
So these are the problems, and this is the solution that falls out from one set of design choices about how datagrams and formats work. Hopefully it can help inform the design team as they consider that design space and try to figure out: what are you going to do about these MTU issues? Do you want a solution that looks like this datagram ping, or do you want something else?
L
Another important thing to note is that, in the design as considered here, it's important for clients and servers to be able to identify when a datagram needs this kind of protection — this extra, special end-to-end reliability, because it's too large for the path MTU — and to be able to send it that way. Right now we don't actually have a mechanism in the current drafts that allows that, and so we've had some discussion on the mailing list about it.
L
Do we need a mechanism like this, or do we need to require intermediaries to adopt some more sophisticated behavior in order to make it possible to deliver those reliably? Or, indeed, do we need intermediaries to adopt a less sophisticated behavior that would enable that end-to-end property?
Q
Yes, so I think it's an interesting mechanism. I don't think it really solves the toughest case of path MTU discovery at all, especially if you're going to run PLPMTUD over your MASQUE tunnel: before I can tell whether the actual flow is supported or not, there's going to be a delay, because first you need to probe what MTU is supported, and then you can make a decision about it.
L
Yeah, so learning the MTU is always asynchronous. You never know it before you start; you always have to wait for it, and the question is how long you have to wait.
L
Yeah, and one thing I would want to emphasize is that I think this illuminates one of the questions that's come up a lot with the design team: why can't we just use SETTINGS frames for everything? If you try to use SETTINGS frames to communicate the MTU of the H3 datagram — so the setting says this proxy supports datagrams up to this size — then that implies knowledge of all of the paths to all of the back ends of that proxy.
O
And thanks for thinking this through; if nothing else it has been a useful exercise, as you said. But I guess what I'm wondering is: if you have a protocol running over MASQUE, these off-the-shelf protocols will typically have their own PLPMTUD mechanism, and they also have the advantage of probing not only the MASQUE MTU but all the way to the origin server. So what is the actual utility of having this facility in MASQUE itself?
L
Sure. The fundamental problem here is that the IP RFCs say that, in order to be a compliant IP implementation, you must have an effective MTU of at least 1280 for IPv6, and while transport protocols running over it are allowed, or encouraged, to use Path MTU Discovery, they are also explicitly allowed to simply cap their datagram output sizes at 1280.
L
If you never produce output greater than 1280, the IP specs say you do not have to do any Path MTU Discovery. So if we want to be a compliant implementation of IP, then we need to always provide functional delivery of packets of size 1280 — that's an actual "if". We have the option of saying this is an almost-complete implementation of IP, and one thing it doesn't quite give you is the MTU guarantees, so you actually need to do some PMTUD for smaller sizes.
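The requirement above can be expressed as a small decision rule — a sketch of the reasoning, not normative text: an inner packet at or under the IPv6 minimum must be delivered somehow, so if it doesn't fit the datagram path it falls back to reliable (stream) delivery:

```python
IPV6_MIN_MTU = 1280  # minimum IPv6 link MTU (RFC 8200)

def delivery_mode(inner_len: int, datagram_path_limit: int) -> str:
    # Packets that fit the datagram path go as datagrams. Packets at or
    # under 1280 that do not fit must still be delivered, so they fall
    # back to the reliable stream. Larger packets may simply be dropped:
    # the sender was supposed to do PMTUD above 1280.
    if inner_len <= datagram_path_limit:
        return "datagram"
    if inner_len <= IPV6_MIN_MTU:
        return "reliable"
    return "drop"
```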
N
Okay, so this is about datagram priorities, which is probably a low priority given what the MASQUE working group wants to work on. This was originally intended to be presented at the last meeting but got pushed off because we ran out of time, hence why I just wanted to use any time here to mention it and to see if anyone's really interested, or if we don't care right now. I have a draft, an I-D; it didn't take much work to write up, and it's going to expire soon.
N
So if no one's interested we can just leave it, maybe mothball it and pick it back up when the time's right, if that time ever occurs. Briefly — there's a lot of background here we won't go into — QUIC doesn't specify anything about prioritization of anything; it says you probably should do it, but doesn't specify how. The datagram draft that's now past WGLC defines the frames but doesn't talk about their priorities.
N
HTTP/3 doesn't define any priority signals either — and that's past working group last call — and the new MASQUE HTTP/3 datagram draft that we have also doesn't say anything. So we're kind of bouncing around: we have different issues raised on different drafts that have all been closed with, well, "the princess is in another castle" — go look somewhere else.
N
You've
got
people
like
pointing
at
each
other
and
saying:
well,
you
know.
Maybe
we
should,
but
I
think,
with
all
the
drafts
we
have
today
we're
fine,
actually
we're
not
saying
anything
about
priorities,
and
so
that
raises
some
problems
for
anyone.
Who's
tried
to
implement
something
based
on
datagrams,
where
they're
being
multiplexed
with
streams
as
well
or
even
logical
flows
of
datagrams
might
need
different
priorities
and
you
know
they
have
different
needs
and
they
need
to
be
scheduled
differently.
N
Otherwise, certain kinds of problems might happen. So I wrote this draft just to get the idea out of my head. It's based on the Extensible Priorities scheme; I don't necessarily think that's the only way to do things, or the right one, but it is a way to do things. We'll skip the recap here.
N
The philosophy of this design is that datagrams may have a different priority than the stream they're related to, and this is only at the HTTP layer, nothing else. So there's some flexibility here, but sensible defaults. There's no per-context prioritization, which skips some of the problems that came up earlier. The proposal is just a new parameter, defined on a similar scale to the urgency parameter that's already in Extensible Priorities, which kind of decorates things and inherits from it. This is all in the specification.
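The inheritance behaviour can be sketched with a hypothetical datagram-urgency parameter (called "du" here purely for illustration; the draft defines the real name) alongside Extensible Priorities' "u":

```python
def parse_priority(field: str) -> dict:
    # Very loose parser for a Priority field value such as "u=3, du=5".
    # Real implementations use the Structured Fields rules (RFC 8941).
    params = {}
    for item in field.split(","):
        key, sep, val = item.strip().partition("=")
        if key:
            params[key] = int(val) if sep else True
    return params

def datagram_urgency(params: dict, default_urgency: int = 3) -> int:
    # If the sender gives no datagram urgency, the datagrams inherit
    # the priority of the stream they are associated with.
    stream_u = params.get("u", default_urgency)
    return params.get("du", stream_u)
```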
N
The question I was going to pose last time was: should we adopt this? But right now I'm thinking the question is: who cares? We could have an adoption discussion later on, when people have actually read this; maybe MASQUE isn't even the correct venue. I'm happy with whatever people might think, if they've had any time to look at it. So that's it.
E
Yeah, I have time for a clarifying question. You mentioned explicitly that this is a different priority than the one on the stream data.
N
It is effectively allowed. If you say that you support this scheme as a server, for instance, and the client doesn't send anything, the default is to inherit the stream's priority. So this effectively just makes it abundantly clear that people would go with that kind of flow; you could equally say that without putting it in any specification. Okay.
A
All right, sorry to abruptly shut the queue, but we're at the top of the hour, at the end of the meeting, and folks have to get to the next one. So thanks everyone for your time and comments. We'll follow up on the list with actions regarding the design team and adoption of CONNECT-IP, and we'll see you at the next interim meeting and on the list. Thanks all, and thanks to the notetakers.